Tootsie 8B rewarm v1 ("adept-phoenix")
See https://github.com/stanford-crfm/marin/issues/600 for narrative
Created on March 14 · Last edited on May 12
Big Idea:
- Start from monumental-jellyfish
- Core Tootsie DCLM mix to 3.7T tokens
- Cooldown on Dolmino HQ data (without synthetic math or FLAN) to 4.8T tokens
- Rewarm over 2000 steps back to peak LR, training on a 50/50 mix of the DCLM mix and (Nemotron + StarCoder)
- Train for a while
NB: the final run (lime green, phase 3) starts from a slightly earlier checkpoint of the red run, since I messed the red one up.
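The rewarm step above can be sketched as a simple schedule: linearly ramp the learning rate from the cooled-down value back to peak over 2000 steps, then hold, while drawing each batch 50/50 from the two data sources. This is an illustrative sketch only; the constant names and LR values below are assumptions, not taken from the actual run config.

```python
import random

REWARM_STEPS = 2000   # from the plan above
PEAK_LR = 3e-4        # assumed peak LR; the real value lives in the run config
START_LR = 1e-5       # assumed LR at the cooled-down checkpoint

# 50/50 mix of the DCLM mix and (Nemotron + StarCoder), as described above.
DATA_MIX = {"dclm": 0.5, "nemotron_starcoder": 0.5}

def rewarmed_lr(step: int) -> float:
    """Linear rewarm from START_LR to PEAK_LR over REWARM_STEPS, then constant."""
    if step >= REWARM_STEPS:
        return PEAK_LR
    frac = step / REWARM_STEPS
    return START_LR + frac * (PEAK_LR - START_LR)

def sample_source(rng: random.Random) -> str:
    """Pick the source dataset for the next batch according to DATA_MIX."""
    names = list(DATA_MIX)
    weights = [DATA_MIX[n] for n in names]
    return rng.choices(names, weights=weights)[0]
```

In practice this kind of schedule would be expressed through the trainer's optimizer config rather than hand-rolled, but the shape is the same: ramp to peak, then train at peak on the blended mix.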
Lineage Runs
This set of panels contains runs from a private project, which cannot be shown in this report