Ariannapera's group workspace
talkdown-pairs
What makes this group special?
Tags
llama-2-70b_balanced_size:1.0
Notes
Tags
round_2
Author
State
Finished
Start time
October 10th, 2023 6:55:40 PM
Runtime
21m 5s
Tracked hours
20m 56s
Run path
cocoons/crowdsourced_vs_gpt_datasize_v2/4bsxm17x
OS
Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-glibc2.2.5
Python version
3.8.2
Git repository
git clone git@github.com:AGMoller/worker_vs_gpt.git
Git state
git checkout -b "llama-2-70b_balanced_size1.0" 9554cf0a89b61ea91d6860a4e1ca993300af3bc4
Command
-m src.worker_vs_gpt.datasize_experiment
System Hardware
| CPU count | 16 |
| Logical CPU count | 32 |
| GPU count | 1 |
| GPU type | Tesla V100-PCIE-32GB |
W&B CLI Version
0.14.2
Group
talkdown-pairsConfig
Config parameters are your model's inputs. Learn more
- {} 192 keys▶
- "intfloat/e5-base"
- false
- 0.9
- 0.999
- 0.00000001
- false
- [] 1 item▶
- "BertModel"
- 0.1
- "llama-2-70b"
- false
- null
- 16
- null
- false
- false
- null
- 0
- "intfloat/e5-base"
- null
- null
- null
- false
- 0
- true
- null
- null
- null
- null
- 1,800
- [] 0 items
- null
- null
- false
- null
- 0
- true
- false
- false
- false
- false
- 0
- null
- null
- 0
- null
- "epoch"
- 30,522
- 0
- 0
- 0
46 ... 95▶▶96 ... 145▶▶146 ... 187▶▶
Summary
Summary metrics are your model's outputs. Learn more
- {} 24 keys▶
- "table-file"
- 0.8317757009345794
- 0.8314776331619819
- 0.4814859330654144
- 0.8771788990825689
- 1.062
- 403.002
- 25.423
- 0.7101226993865031
- 0.7092849610384095
- 0.6031389236450195
- 0.7870121193872559
- 6.6034
- 98.736
- 6.209
- 10
- 2,730
- 0
- 0.0097
- 10,232,680,831,777,440
- 0.16149708665770926
- 1,243.6213
- 35.027
- 2.195
Artifact Outputs
This run produced these artifacts as outputs. Total: 2. Learn more
Type
Name
Consumer count
Loading...