Through ongoing work, we contribute to large language model (LLM) training by providing human feedback in software engineering contexts. Our work involves evaluating and ranking model-generated code, writing reference implementations, and providing structured feedback to improve correctness, readability, and adherence to best practices.