Kojima-takeshi188's group workspace
kakuta-2056-gpt2-large-lr0.0001-iter600
Tags
kakuta-2056-gpt2-large-lr0.0001-iter600
Notes
Author
State
Finished
Start time
March 1st, 2024 11:56:58 AM
Runtime
6m 8s
Tracked hours
6m 2s
Run path
weblab_lecture/vip_soutyou/3aqwvnet
OS
Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.35
Python version
3.10.6
Git repository
git clone git@github.com:matsuolab/llm-vip-seminar.git
Git state
git checkout -b "kakuta-2056-gpt2-large-lr0.0001-iter600" 7024a87bd57b7ad64a9284520d10f3ab1d3fc327
Command
/home/acc12029sa/llm-vip-seminar-vl04/notebooks/llm/train_speech.py --your_name kakuta --model_name gpt2-large --total_iteration_num 600 --learning_rate 0.0001
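The command above passes all hyperparameters as CLI flags. A minimal sketch of how `train_speech.py` might parse those flags into a config dict (the flag names come from the command above; the types and the parsing style are assumptions, since the script itself is not shown):

```python
import argparse

# Hypothetical reconstruction of the flag parsing in train_speech.py.
# Flag names match the recorded command; types are assumed.
parser = argparse.ArgumentParser()
parser.add_argument("--your_name", type=str)
parser.add_argument("--model_name", type=str)
parser.add_argument("--total_iteration_num", type=int)
parser.add_argument("--learning_rate", type=float)

# The exact flags from the recorded command.
args = parser.parse_args([
    "--your_name", "kakuta",
    "--model_name", "gpt2-large",
    "--total_iteration_num", "600",
    "--learning_rate", "0.0001",
])

# A plain dict like this is what experiment trackers typically expect
# as a run config (note this run saved no config parameters, see below).
config = vars(args)
print(config["learning_rate"])  # → 0.0001
```

Logging such a dict at run initialization is what would populate the otherwise empty Config section below.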
System Hardware
| CPU count | 72 |
| Logical CPU count | 144 |
| GPU count | 8 |
| GPU type | NVIDIA A100-SXM4-40GB |
W&B CLI Version
0.15.8
Config
Config parameters are your model's inputs.
No config parameters were saved for this run.
Summary
Summary metrics are your model's outputs.
5 keys (key names were not preserved in this export):
- 0.0001
- 1.2609049479166667
- 1.220703125
- "table-file"
- 1.277587890625
Artifact Inputs
This run consumed these artifacts as inputs.
Type
Name
Consumer count
Artifact Outputs
This run produced these artifacts as outputs. Total: 61.
Type
Name
Consumer count