The following page may contain information related to upcoming products, features, and functionality. This information is provided for informational purposes only; please do not rely on it for purchasing or planning decisions. As with all projects, the items mentioned on this page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.
The AI Validation Team’s Centralized Evaluation Framework supports the entire end-to-end process of AI feature creation, from selecting the appropriate model for a use case to evaluating the AI feature’s output. AI Validation works in concert with other types of evaluation, such as SET Quality testing and diagnostic testing, but focuses specifically on interactions with Generative AI.
The Centralized Evaluation Framework is built on three main elements: a prompt library, an established ground truth, and validation metrics.
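To make these three elements concrete, here is a minimal, hypothetical sketch of how they might fit together: a prompt library paired with ground-truth answers, and a simple validation metric applied to a model's output. All names (`PromptCase`, `token_overlap`, `evaluate`) and the toy metric are assumptions for illustration, not part of GitLab's actual framework.

```python
from dataclasses import dataclass

@dataclass
class PromptCase:
    """One entry in a hypothetical prompt library, paired with its ground truth."""
    prompt: str
    ground_truth: str

# Element 1 and 2: a tiny prompt library with established ground truth.
PROMPT_LIBRARY = [
    PromptCase("What does CI stand for?", "continuous integration"),
    PromptCase("What does CD stand for?", "continuous delivery"),
]

def token_overlap(candidate: str, reference: str) -> float:
    """Element 3: a toy validation metric — the fraction of ground-truth
    tokens that appear in the model's answer."""
    ref = set(reference.lower().split())
    cand = set(candidate.lower().split())
    return len(ref & cand) / len(ref) if ref else 0.0

def evaluate(model, library) -> float:
    """Run every prompt through the model and average the metric scores."""
    scores = [token_overlap(model(c.prompt), c.ground_truth) for c in library]
    return sum(scores) / len(scores)

# A stand-in "model" used only so the sketch is runnable end to end.
def stub_model(prompt: str) -> str:
    return "continuous integration and delivery"

print(evaluate(stub_model, PROMPT_LIBRARY))
```

In a real framework the metric would be far richer (e.g., LLM-judged or semantic-similarity scoring), but the shape of the loop — prompts in, outputs scored against ground truth — is the core idea.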
This guide covers the following activities for the Model Validation Process:
Last Reviewed: 2024-10-05
Last Updated: 2024-10-05