Fine-tuning
Weights & Biases

Join the world-class AI teams training and fine-tuning large models on Weights & Biases, and build the best AI.

The world's best ML teams trust Weights & Biases

Weights & Biases works with every fine-tuning framework and fine-tuning provider, whether you are fine-tuning LLMs, diffusion models, or multimodal models:

The Hugging Face transformers and TRL libraries have a powerful Weights & Biases integration that makes it easy to turn on experiment tracking.

See our Transformers documentation for how to get started.

Examples

Code:

				
import os

import wandb
from transformers import Trainer, TrainingArguments
from trl import SFTTrainer

# 1. Define which wandb project to log to
wandb.init(project="llama-4-fine-tune")

# 2. Turn on model checkpointing so checkpoints are logged to W&B Artifacts
os.environ["WANDB_LOG_MODEL"] = "checkpoint"

# 3. Add "wandb" to report_to in your `TrainingArguments`
args = TrainingArguments(..., report_to="wandb")

# 4. W&B logging begins automatically when you start training with your Trainer
trainer = Trainer(..., args=args)

# OR, if using TRL, W&B logging begins automatically when you start training with your SFTTrainer
trainer = SFTTrainer(..., args=args)

# Start training
trainer.train()
				
			

Axolotl is built on the Hugging Face transformers Trainer, with many additional optimizations for LLM fine-tuning. Add the wandb arguments below to your config.yml file to turn on W&B logging.

Code:

				
					# pass a project name to turn on W&B logging 
wandb_project: llama-4-fine-tune

# "checkpoint" to log model to wandb Artifacts every `save_steps` 
# or "end" to log only at the end of training
wandb_log_model: checkpoint

# Optional, your username or W&B Team name
wandb_entity: 

# Optional, set the ID of your W&B run
wandb_run_id: 
				
			


You can also use more advanced W&B settings by setting additional environment variables here.
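For example, a few of the standard W&B environment variables can be set before training starts. A minimal sketch with illustrative values (in a shell-driven workflow such as Axolotl's you would typically export the same variables in your shell instead):

import os

# a few standard W&B environment variables (values are illustrative)
os.environ["WANDB_ENTITY"] = "my-team"         # your W&B username or team name
os.environ["WANDB_NAME"] = "llama-4-run-1"     # display name for the run
os.environ["WANDB_NOTES"] = "8xA100, lr=2e-5"  # free-form notes attached to the run
os.environ["WANDB_MODE"] = "offline"           # log locally, then upload later with `wandb sync`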

Lightning is a powerful trainer that lets you start training in only a few lines of code. See the W&B Lightning documentation and the Lightning documentation to get started.


Code:

				
import wandb

# 1. Start a W&B run
run = wandb.init(project="my_first_project")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# 3. Log metrics to visualize performance over time
for i in range(10):
    loss = 1.0 / (i + 1)  # placeholder for your real training loss
    run.log({"loss": loss})
				
			


You can also use more advanced W&B settings by setting additional environment variables here.
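When training with Lightning's Trainer itself, the integration is typically enabled by passing W&B's WandbLogger to the Trainer. A minimal, self-contained sketch, assuming a recent lightning release (older versions import from pytorch_lightning instead); the model and data are illustrative:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L
from lightning.pytorch.loggers import WandbLogger

# a tiny LightningModule so the example is self-contained
class TinyModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("loss", loss)  # sent to W&B through the logger below
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# create the W&B logger; log_model="all" also uploads checkpoints as Artifacts
wandb_logger = WandbLogger(project="llama-4-fine-tune", log_model="all")

# pass the logger to the Lightning Trainer; W&B logging begins on trainer.fit()
data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
trainer = L.Trainer(max_epochs=1, logger=wandb_logger)
trainer.fit(TinyModel(), data)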

MosaicML’s Composer library is a powerful, open source framework for training models and is what powers their LLM Foundry library. The Weights & Biases integration with Composer can be added to training with just a few lines of code.

See the MosaicML Composer documentation for more.

Code:

				
from composer import Trainer
from composer.loggers import WandBLogger

# initialise the logger
wandb_logger = WandBLogger(
    project="llama-4-fine-tune",
    log_artifacts=True,                          # optional
    entity="<your W&B username or team name>",   # optional
    name="<set a name for your W&B run>",        # optional
    init_kwargs={"group": "high-bs-test"},       # optional
)

# pass the wandb_logger to the Trainer; logging will begin on training
trainer = Trainer(..., loggers=[wandb_logger])
				
			

You can also use more advanced W&B settings by passing additional wandb.init parameters to the init_kwargs argument, or modify further W&B settings via the environment variables here.
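For instance, standard wandb.init arguments such as tags, notes, or extra config values can be forwarded through init_kwargs. A sketch with illustrative values:

from composer.loggers import WandBLogger

# any argument accepted by wandb.init() can be forwarded through init_kwargs
wandb_logger = WandBLogger(
    project="llama-4-fine-tune",
    init_kwargs={
        "group": "high-bs-test",
        "tags": ["llama-4", "fine-tune"],        # illustrative tags
        "notes": "testing a larger batch size",  # illustrative notes
        "config": {"global_batch_size": 256},    # extra hyperparameters to record
    },
)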

Hugging Face Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules.

With our diffusers autologger you can log your generations from a diffusers pipeline to Weights & Biases in just 1 line of code.

Examples:

Code:

				
from diffusers import DiffusionPipeline

# import the autolog function
from wandb.integration.diffusers import autolog

# call the W&B autologger before calling the pipeline
autolog(init={"project": "diffusers_logging"})

# Initialize the diffusion pipeline
pipeline = DiffusionPipeline.from_pretrained("stabilityai/sdxl-turbo")

# call the pipeline to generate the images
images = pipeline("a photograph of a dragon").images
				
			

OpenAI fine-tuning for GPT-3.5 and GPT-4 is powerful, and with the Weights & Biases integration you can keep track of every experiment, every result and every dataset version used.

See our OpenAI Fine-Tuning documentation for how to get started.

Examples: 

Code:

				
					from wandb.integration.openai import WandbLogger 

# call your OpenAI fine-tuning code here ...

# call .sync to log the results from the fine-tuning job to W&B
WandbLogger.sync(id=openai_fine_tune_job_id, project="My-OpenAI-Fine-Tune")
				
			

MosaicML offers fast and efficient fine-tuning and inference, and with the Weights & Biases integration you can keep track of every experiment, every result, and every dataset version used.

See the MosaicML Fine-Tuning documentation for how to turn on W&B logging.

Code:

Add the following to your YAML config file to turn on W&B logging:

				
					integrations:
  - integration_type: wandb

    # Weights and Biases project name
    project: llama-4-fine-tuning

    # The username or team name the Weights and Biases project belongs to
    entity: < your W&B username or team name >
				
			

Together.ai offers fast and efficient fine-tuning and inference for the latest open source models, and with the Weights & Biases integration you can keep track of every experiment!

See the Together.ai Fine-Tuning documentation for how to get started with fine-tuning.

Code:

				
					# CLI
together finetune create .... --wandb-api-key $WANDB_API_KEY


# Python
import together

resp = together.Finetune.create(..., wandb_api_key = '1a2b3c4d5e.......')
				
			

If you are using the command-line interface, pass your W&B API key to the --wandb-api-key argument to turn on W&B logging. If you are using the Python library, pass your W&B API key to the wandb_api_key parameter.

The Hugging Face AutoTrain library offers LLM fine-tuning. By passing the --report-to wandb argument you can turn on W&B logging.

Code:

				
					# CLI
autotrain llm ... --report-to wandb
				
			


Learn how to fine-tune LLMs in our free LLM course

In this free course, you'll explore the architectures, training techniques, and fine-tuning methods behind powerful LLMs. Get theory and hands-on experience from Jonathan Frankle (MosaicML) and other industry leaders, and learn cutting-edge techniques like LoRA and RLHF.

Learn how to fine-tune an LLM with Hugging Face

This interactive Weights & Biases report walks you through how to fine-tune an LLM with the Hugging Face Trainer, covering popular methods like LoRA and model freezing.
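As a rough illustration of the two approaches covered in the report, here is a sketch using the Hugging Face peft library; the checkpoint name and hyperparameters are illustrative, and in practice you would pick one of the two approaches rather than stacking them:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# load a base model (checkpoint name is illustrative)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Approach 1, model freezing: freeze most of the network and train only the top layers
# (the .model.layers attribute path is specific to Llama-style architectures)
for param in model.model.layers[:-2].parameters():
    param.requires_grad = False

# Approach 2, LoRA: freeze the base model entirely and train small low-rank adapters
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# Either way, the model then trains with the Hugging Face Trainer,
# with report_to="wandb" so everything is logged to Weights & Biases as shown above.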

Trusted by teams building state-of-the-art LLMs

Samuel Weinbach

VP of Technology

“W&B gives us a concise view of all our projects. We can compare runs, aggregate everything in one place, and intuitively decide what is working well and what to try next.”

Peter Welinder
VP of Product - OpenAI

“We use W&B for pretty much all of our model training.”

Ellie Evans
Product Manager - Cohere

“With W&B we can inspect all of our candidate models at once, which is critical for understanding which model will work best for each customer. Reports have also been a huge help for us. They let us seamlessly communicate nuanced technical information in a way that non-technical teams can digest.”

Weights & Biases in action

The Weights & Biases platform helps you streamline your workflow from end to end.

Models

Experiments

Track and visualize your ML experiments

Sweeps

Optimize your hyperparameters

Model Registry

Register and manage your ML models

Automations

Trigger workflows automatically

Launch

Package and run your ML workflow jobs

Weave

Traces

Explore and debug LLMs

Evaluations

Rigorous evaluations of GenAI applications

Core

Artifacts

Version and manage your ML pipelines

Tables

Visualize and explore your ML data

Reports

Document and share your ML insights

The world's best machine learning teams trust Weights & Biases. Let us know how we can help you get started.