Object: Fluency-Eval-tcapelle-fluency-dataset

Object fields: name, dataset, scorers, trials, evaluate, predict_and_score, summarize, Category, User.

Every row has the name "Fluency Eval: tcapelle/fluency-dataset", trials = 1, and Category "Evaluation", and every row's scorer list includes the same accuracy op, `weave:///c-metrics/fluency-eval/op/accuracy:ggMx3KxsNNFGHWp2kjF4X0YeKNZkTbMxHRtLVTzeCPQ`. The rows differ only in the version of the second scorer, `weave:///c-metrics/fluency-eval/object/F1Score:<digest>`:

| Row | F1Score scorer digest |
|-----|-----------------------|
| 1 | NdTJ1J1zVU12Mt76mu7pXhNVaYTqIbJa7gTjlNIgTck |
| 2 | NdTJ1J1zVU12Mt76mu7pXhNVaYTqIbJa7gTjlNIgTck |
| 3 | nVAkGkqCgGtylHJSYZacoFAuBydqjO5gfLKVg5lLQN0 |
| 4 | XwDvHmcubZaauYwTyAmUU5yLqKEOYOiDXqEIHRq2HS0 |
| 5 | mxze6iNpaUPpKsM9Cv4hp1DvTIh8GcaKPHiKBPr4vdM |
| 6 | mxze6iNpaUPpKsM9Cv4hp1DvTIh8GcaKPHiKBPr4vdM |
| 7 | mxze6iNpaUPpKsM9Cv4hp1DvTIh8GcaKPHiKBPr4vdM |
| 8 | mxze6iNpaUPpKsM9Cv4hp1DvTIh8GcaKPHiKBPr4vdM |
| 9 | 95LlYXrNdUm6K7FvFESi1wIirBRkh8qLJACXt3CtHco |

Total Rows: 9
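The scorer entries above are Weave object refs, which appear to follow the layout `weave:///<entity>/<project>/<kind>/<name>:<digest>` (here entity `c-metrics`, project `fluency-eval`, kind `op` or `object`). A minimal sketch of pulling those pieces out of a ref string, assuming that layout — the parser below is illustrative, not part of the Weave SDK:

```python
from typing import NamedTuple


class WeaveRef(NamedTuple):
    entity: str
    project: str
    kind: str    # e.g. "op" or "object" in the rows above
    name: str    # e.g. "accuracy" or "F1Score"
    digest: str  # content-hash version identifier


def parse_weave_ref(uri: str) -> WeaveRef:
    # All refs in the table start with the "weave:///" scheme.
    prefix = "weave:///"
    if not uri.startswith(prefix):
        raise ValueError(f"not a weave ref: {uri}")
    # Split off entity, project, and kind; the tail is "<name>:<digest>".
    entity, project, kind, tail = uri[len(prefix):].split("/", 3)
    name, digest = tail.split(":", 1)
    return WeaveRef(entity, project, kind, name, digest)


ref = parse_weave_ref(
    "weave:///c-metrics/fluency-eval/op/accuracy:"
    "ggMx3KxsNNFGHWp2kjF4X0YeKNZkTbMxHRtLVTzeCPQ"
)
print(ref.kind, ref.name)  # → op accuracy
```

Because the digest is a content hash, rows 5–8 sharing one F1Score digest means those four evaluations ran against the same scorer version, while rows 3, 4, and 9 each used a different one.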