Fine-Tuning
Weights & Biases
Join the world-class AI teams training and fine-tuning large models on Weights & Biases, and build the best AI.
The world's best ML teams trust Weights & Biases
Weights & Biases works with every fine-tuning framework and fine-tuning provider, whether you are fine-tuning an LLM, a diffusion model, or a multimodal model:
The Hugging Face transformers and TRL libraries have a powerful Weights & Biases integration that turns on experiment tracking with a few lines of code.
See our Transformers documentation for how to get started.
Code:
import os
import wandb
from transformers import Trainer, TrainingArguments
from trl import SFTTrainer

# 1. Define which wandb project to log to
wandb.init(project="llama-4-fine-tune")
# 2. Turn on model checkpoint logging to W&B Artifacts
os.environ["WANDB_LOG_MODEL"] = "checkpoint"
# 3. Add "wandb" to your `TrainingArguments`
args = TrainingArguments(..., report_to="wandb")
# 4. W&B logging will begin automatically when you start training with your Trainer
trainer = Trainer(..., args=args)
# OR, if using TRL, W&B logging will likewise begin automatically when you start training
trainer = SFTTrainer(..., args=args)
# Start training
trainer.train()
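If you are training in a notebook, it is also worth closing out the run explicitly once training is complete; a minimal follow-up:
# mark the W&B run as finished (useful in notebooks, where the Python process keeps running)
wandb.finish()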
Axolotl is built on the Hugging Face transformers Trainer, with a lot of additional modifications optimized for LLM fine-tuning. Pass the wandb arguments below in your config.yml file to turn on W&B logging.
Code:
# pass a project name to turn on W&B logging
wandb_project: llama-4-fine-tune
# "checkpoint" to log model to wandb Artifacts every `save_steps`
# or "end" to log only at the end of training
wandb_log_model: checkpoint
# Optional, your username or W&B Team name
wandb_entity:
# Optional, set an ID for your W&B run
wandb_run_id:
You can also use more advanced W&B settings by setting additional environment variables here.
Lightning is a powerful trainer that lets you get started training in only a few lines. See the W&B Lightning documentation and the Lightning documentation to get started.
You can also use more advanced W&B settings by setting additional environment variables here.
Code:
import wandb

# 1. Start a W&B run
run = wandb.init(project="my_first_project")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# 3. Log metrics to visualize performance over time
for i in range(10):
    loss = 1.0 / (i + 1)  # placeholder value; log your real training loss here
    run.log({"loss": loss})
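For Lightning specifically, a more idiomatic route is Lightning's built-in WandbLogger. The sketch below is a minimal example; LitModel and train_loader stand in for your own LightningModule and dataloader:
from lightning.pytorch import Trainer
from lightning.pytorch.loggers import WandbLogger

# create a W&B logger; log_model="all" also uploads checkpoints as W&B Artifacts
wandb_logger = WandbLogger(project="my_first_project", log_model="all")

# hand the logger to the Lightning Trainer; logging begins when training starts
trainer = Trainer(logger=wandb_logger, max_epochs=3)
trainer.fit(model=LitModel(), train_dataloaders=train_loader)  # your own module and dataloader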
MosaicML’s Composer library is a powerful, open source framework for training models and is what powers their LLM Foundry library. The Weights & Biases integration with Composer can be added to training with just a few lines of code.
See the MosaicML Composer documentation for more.
Code:
from composer import Trainer
from composer.loggers import WandBLogger

# initialise the logger
wandb_logger = WandBLogger(
    project="llama-4-fine-tune",
    log_artifacts=True,  # optional
    # entity="...",      # optional, your username or W&B Team name
    # name="...",        # optional, a name for your W&B run
    init_kwargs={"group": "high-bs-test"},  # optional
)

# pass the wandb_logger to the Trainer; logging will begin on training
trainer = Trainer(..., loggers=[wandb_logger])
You can also use more advanced W&B settings by passing additional wandb.init parameters to the init_kwargs argument, or by setting additional W&B environment variables as documented here.
Hugging Face Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules.
With our diffusers autologger you can log your generations from a diffusers pipeline to Weights & Biases in just 1 line of code.
Examples:
- A Guide to Prompt Engineering for Stable Diffusion
- PIXART-α: A Diffusion Transformer Model for Text-to-Image Generation
Code:
from diffusers import DiffusionPipeline

# import the autolog function
from wandb.integration.diffusers import autolog

# call the W&B autologger before calling the pipeline
autolog(init={"project": "diffusers_logging"})

# Initialize the diffusion pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/sdxl-turbo"
)

# call the pipeline to generate the images
images = pipeline("a photograph of a dragon")
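If you want your generations to be reproducible, you can also pass a seeded torch.Generator to the pipeline call; the sketch below reuses the pipeline object from the snippet above, and the seed value is just an example:
import torch

# a fixed seed makes the generation reproducible from run to run
generator = torch.Generator().manual_seed(10)
# the autologger captures this call just like the unseeded one above
images = pipeline("a photograph of a dragon", generator=generator)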
OpenAI fine-tuning for GPT-3.5 and GPT-4 is powerful, and with the Weights & Biases integration you can keep track of every experiment, every result and every dataset version used.
See our OpenAI Fine-Tuning documentation for how to get started.
Examples:
- Fine-tuning OpenAI’s GPT with Weights & Biases
- How to Fine-Tune Your OpenAI GPT-3.5 and GPT-4 Models
- Fine-Tuning ChatGPT for Question Answering
- Does Fine-tuning ChatGPT-3.5 on Gorilla improve API and Tool Usage Performance?
Code:
from wandb.integration.openai import WandbLogger
# call your OpenAI fine-tuning code here ...
# call .sync to log the results from the fine-tuning job to W&B
WandbLogger.sync(id=openai_fine_tune_job_id, project="My-OpenAI-Fine-Tune")
MosaicML offers fast and efficient fine-tuning and inference, and with the Weights & Biases integration you can keep track of every experiment, every result, and every dataset version used.
See the MosaicML Fine-Tuning documentation for how to turn on W&B logging.
Code:
Add the following to your YAML config file to turn on W&B logging:
integrations:
  - integration_type: wandb
    # Weights and Biases project name
    project: llama-4-fine-tuning
    # The username or team name the Weights and Biases project belongs to
    entity: <your W&B username or team name>
Together.ai offers fast and efficient fine-tuning and inference for the latest open source models, and with the Weights & Biases integration you can keep track of every experiment!
See the Together.ai Fine-Tuning documentation for how to get started with fine-tuning.
Code:
# CLI
together finetune create .... --wandb-api-key $WANDB_API_KEY
# Python
import together
resp = together.Finetune.create(..., wandb_api_key = '1a2b3c4d5e.......')
If using the command line interface, pass your W&B API key to the --wandb-api-key argument to turn on W&B logging. If using the Python library, pass your W&B API key to the wandb_api_key parameter.
The Hugging Face AutoTrain library offers LLM fine-tuning. By passing the --report-to wandb argument, you can turn on W&B logging.
Code:
# CLI
autotrain llm ... --report-to wandb
Learn how to fine-tune an LLM with Hugging Face
This interactive Weights & Biases report walks you through how to fine-tune an LLM with the Hugging Face Trainer, covering a few popular methods such as LoRA and model freezing.
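As a rough illustration of the LoRA approach the report covers, here is a minimal sketch using the Hugging Face peft library together with the same Trainer + W&B setup shown above. The model name, adapter hyperparameters, and dataset are placeholders, not the report's actual configuration:
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# load a base model (placeholder model name)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# wrap the base model with low-rank adapters; only the small adapter matrices are trained
lora_config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which attention projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# report_to="wandb" turns on W&B logging exactly as in the Trainer example above
args = TrainingArguments(output_dir="lora-out", report_to="wandb")
trainer = Trainer(model=model, args=args, train_dataset=...)  # supply your own dataset
trainer.train()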
Trusted by the teams building state-of-the-art LLMs
VP of Technology
“W&B gives us a concise view of all of our projects. We can compare runs, aggregate everything in one place, and intuitively decide what is working well and what to try next.”
VP of Product, OpenAI
“We use W&B for pretty much all of our model training.”
Product Manager, Cohere
“With W&B we can inspect all of our candidate models at once, which is critical for understanding which model works best for each customer. Reports have also been a huge help for us: they let us communicate nuanced technical information in a way that non-technical teams can digest.”