c-metrics / Projects / rouge-scorer / Evaluations
Evaluation comparison
Two evaluation runs, with per-run means for model latency and the RougeScorer metrics (rouge-1, rouge-2, rouge-l):

| Trace | inputs.model | inputs.self | model_latency (mean) | rouge-1 (mean) | rouge-2 (mean) | rouge-l (mean) | Called | Tokens |
|---|---|---|---|---|---|---|---|---|
| full eval (29a1) | SummarizerLLMSystem:v2 | Evaluation:v3 | 5.7783 | 0.3091 | 0.099 | 0.2827 | 11 months ago | 563,692 |
| Evaluation.evaluate (83a9) | SummarizerLLMSystem:v1 | Evaluation:v1 | 4.59 | 0.3563 | 0.1294 | 0.3334 | 11 months ago | 27,315 |
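The rouge-1, rouge-2, and rouge-l columns are mean ROUGE F-measures over the eval dataset: n-gram overlap for ROUGE-N, longest-common-subsequence overlap for ROUGE-L. As a minimal self-contained sketch (not the actual RougeScorer implementation used by this project, which may apply stemming and other normalization), the per-example scores can be computed like this:

```python
from collections import Counter


def _ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def rouge_n(reference: str, candidate: str, n: int = 1) -> float:
    """ROUGE-N F-measure from clipped n-gram overlap counts."""
    ref = _ngrams(reference.split(), n)
    cand = _ngrams(candidate.split(), n)
    overlap = sum((ref & cand).values())  # clipped: min count per n-gram
    if not ref or not cand or overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)


def _lcs_len(a, b):
    """Length of the longest common subsequence (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]


def rouge_l(reference: str, candidate: str) -> float:
    """ROUGE-L F-measure from LCS length over word sequences."""
    ref, cand = reference.split(), candidate.split()
    lcs = _lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)
    precision = lcs / len(cand)
    return 2 * precision * recall / (precision + recall)
```

The dashboard values are these per-example scores averaged over every row in the eval dataset. Read against the table above, SummarizerLLMSystem:v2 scores lower than v1 on all three ROUGE means (e.g. rouge-1 of 0.3091 vs 0.3563) while consuming roughly 20x the tokens, so the newer version does not look like a clear win on this dataset.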