
streaming canary-llm

Created on May 11 | Last edited on May 14


–Base: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sa_lhmain3cross.sh
–Without LLM init: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sb_lhmain3cross.sh
–More xattn layers (2 -> 7) and fewer LLM layers (22 -> 16), see the sketch below: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sc_lhmain3cross.sh
–Prompt-based: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sd_lhmain3cross.sh
*Encoder is from Kunal.
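
For reference, a minimal PyTorch sketch of the idea behind the sc variant: interleaving gated cross-attention blocks over the speech-encoder outputs among the LLM decoder layers, so the xattn count can grow from 2 to 7 while the LLM stack shrinks from 22 to 16 layers. Class and function names are illustrative assumptions, not the code in the scripts above.

```python
# Illustrative sketch of interleaving gated cross-attention among LLM decoder layers.
# Names and placement are assumptions, not the implementation in the scripts above.
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Cross-attention from decoder states to speech-encoder outputs.
    The gate is zero-initialized so training starts from the unmodified LLM behavior."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, enc_out: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(self.norm(x), enc_out, enc_out, need_weights=False)
        return x + torch.tanh(self.gate) * attn_out

def build_xattn_blocks(num_llm_layers: int, num_xattn: int,
                       d_model: int, n_heads: int) -> nn.ModuleDict:
    """Spread num_xattn cross-attention blocks evenly over the LLM layers,
    e.g. (22, 2) for the base variant vs. (16, 7) for the sc variant.
    Block str(i) is applied right before LLM decoder layer i in the forward pass."""
    indices = sorted({round(i * num_llm_layers / num_xattn) for i in range(num_xattn)})
    return nn.ModuleDict({str(i): GatedCrossAttention(d_model, n_heads) for i in indices})
```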

–LLM frozen with LoRA tuning (see the LoRA sketch after this list): scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_se_lhmain3cross.sh
Try increasing the number of xattn layers when freezing the LLM/mtron NMT model:
–7 xattn layers: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sf_lhmain3cross.sh
–10 xattn layers: scripts/oci_canary_gpt_llama_tiny_crossbmg4eghel_sg_lhmain3cross.sh
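
As a sketch of the frozen-LLM + LoRA setup (se), the snippet below uses Hugging Face PEFT; the checkpoint path, rank, and target module names are assumed for illustration and are not taken from the scripts above.

```python
# Sketch of the frozen-LLM + LoRA variant using Hugging Face PEFT.
# The checkpoint path, rank, and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/tiny-llama-checkpoint")  # hypothetical path

# Freeze all base weights; only the LoRA adapters will receive gradients.
for p in base.parameters():
    p.requires_grad = False

lora_cfg = LoraConfig(
    r=8,                                   # adapter rank (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # should report only the LoRA weights as trainable
```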

Section 1


Run: crossbmg4egc_lhmain3cross_oci_FC-GPT_llama_tiny_canaryset_b6s4kf-ASR-AST_lr1e-4wd1e-3_CosineAnnealing_warmup2000_minlr1e-6_gbs1024_mbs16_ep200
[W&B panels: Run set 2, Run set 5]
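
The run name encodes the optimization setup (lr 1e-4, wd 1e-3, cosine annealing with 2000 warmup steps down to min lr 1e-6, global batch 1024, micro batch 16, 200 epochs). A minimal sketch of that schedule, assuming a plain PyTorch-style loop rather than the actual NeMo trainer:

```python
# Minimal sketch of the schedule encoded in the run name; not the NeMo trainer config.
import math

BASE_LR, MIN_LR, WARMUP = 1e-4, 1e-6, 2000   # lr1e-4, minlr1e-6, warmup2000
WD = 1e-3                                     # wd1e-3
GBS, MBS = 1024, 16                           # gbs1024, mbs16

def lr_at_step(step: int, total_steps: int) -> float:
    """Linear warmup to BASE_LR, then cosine annealing down to MIN_LR."""
    if step < WARMUP:
        return BASE_LR * step / WARMUP
    progress = (step - WARMUP) / max(1, total_steps - WARMUP)
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1.0 + math.cos(math.pi * progress))

# Gradient accumulation implied by global vs. micro batch size;
# the data-parallel world size is an assumption, not part of the run name.
world_size = 8
grad_accum = GBS // (MBS * world_size)   # 1024 / (16 * 8) = 8
```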