Commit 11c8014

Add PydanticAI Evals under LLM Evaluation
1 parent 67d9f00 commit 11c8014

1 file changed: +2 -0 lines changed

README.md

Lines changed: 2 additions & 0 deletions
@@ -227,6 +227,8 @@ This repository contains a curated list of 120+ LLM libraries category wise.
 | AgentEvals | Evaluators and utilities for evaluating the performance of your agents. | [Link](https://github.com/langchain-ai/agentevals) |
 | LLMBox | A comprehensive library for implementing LLMs, including a unified training pipeline and comprehensive model evaluation. | [Link](https://github.com/RUCAIBox/LLMBox) |
 | Opik | An open-source end-to-end LLM Development Platform which also includes LLM evaluation. | [Link](https://github.com/comet-ml/opik) |
+| PydanticAI Evals | A powerful evaluation framework designed to help you systematically evaluate the performance of LLM applications. | [Link](https://ai.pydantic.dev/evals/) |
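
For context on the newly added entry: PydanticAI Evals (distributed as the `pydantic-evals` package) organizes an evaluation as a `Dataset` of `Case` objects that are run against a task function. The sketch below assumes the `Case` / `Dataset` / `evaluate_sync` pattern shown in the linked documentation; `answer_question` is a hypothetical stand-in for a real LLM-backed task, so treat this as an illustrative sketch rather than a canonical example.

```python
# Minimal sketch of a pydantic-evals run (assumed API per
# https://ai.pydantic.dev/evals/); `answer_question` is a hypothetical task.
from pydantic_evals import Case, Dataset

# A Case pairs an input with the output we expect from the task under test.
cases = [
    Case(
        name='capital_of_france',
        inputs='What is the capital of France?',
        expected_output='Paris',
    ),
]

dataset = Dataset(cases=cases)


async def answer_question(question: str) -> str:
    # Placeholder task: in practice this would call an LLM or agent.
    return 'Paris'


# Run every case against the task and print a summary report of the results.
report = dataset.evaluate_sync(answer_question)
report.print()
```

The library also supports attaching evaluators (for example an LLM-as-judge) to the dataset for richer scoring than exact output comparison; see the linked docs for the current API.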