HuggingFace Evaluate
A Python library for evaluating machine learning models and datasets. It provides a standardized way to compute metrics, compare models, and run consistent evaluations across tasks, supporting reproducibility and quality assurance.
https://huggingface.co/docs/evaluate/index
Specifications
- Pricing: unknown
- Capabilities
- Integrations
- Use Cases
- API Available: No
- SDK Languages
- Tags: model evaluation, metrics, benchmarking, ML library, dataset evaluation, MLOps
- Added: 2026-03-25
- Completeness: 0.6%