A comprehensive, evolving document that defines what "good" AI output looks like for your specific use case. It includes scoring rubrics, quality thresholds, edge-case handling rules, and examples of acceptable versus unacceptable outputs. Unlike static documentation, it is designed to be updated as your understanding of quality evolves, serving as the foundation for all future AI evaluation and improvement efforts.
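Such a document can also live partly in code so it stays executable alongside your evals. The sketch below is one hypothetical way to represent it; the class names (`Criterion`, `QualitySpec`), the 0-1 scoring scale, and the weighted-average aggregation are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One rubric dimension with its importance and pass bar (names are illustrative)."""
    name: str
    weight: float          # relative importance; weights across criteria should sum to 1.0
    passing_score: float   # minimum acceptable score on a 0-1 scale

@dataclass
class QualitySpec:
    """Versioned quality definition: rubric, edge-case rules, and reference examples."""
    version: str
    criteria: list[Criterion]
    edge_case_rules: dict[str, str] = field(default_factory=dict)   # situation -> required handling
    good_examples: list[str] = field(default_factory=list)          # acceptable outputs
    bad_examples: list[str] = field(default_factory=list)           # unacceptable outputs

    def evaluate(self, scores: dict[str, float]) -> dict:
        """Aggregate per-criterion scores into an overall score and pass/fail verdict."""
        overall = sum(c.weight * scores[c.name] for c in self.criteria)
        failures = [c.name for c in self.criteria if scores[c.name] < c.passing_score]
        return {"overall": overall, "passed": not failures, "failures": failures}

# Usage: a minimal two-criterion spec and one scored output.
spec = QualitySpec(
    version="2024-06-01",
    criteria=[Criterion("accuracy", weight=0.6, passing_score=0.8),
              Criterion("tone", weight=0.4, passing_score=0.5)],
    edge_case_rules={"user asks for medical advice": "defer to a professional"},
)
result = spec.evaluate({"accuracy": 0.9, "tone": 0.7})  # overall ≈ 0.82, all bars cleared
```

Because the spec is versioned data rather than frozen prose, updating it as your quality bar shifts is a normal code change that your evaluation pipeline picks up automatically.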