Traces table ("All Ops", filtered on `inputs`), 18 traces in total. Each row shows the call's Trace, Feedback, and Status columns alongside its inputs: `article_title`, `article_type`, `author`, `json_examples`, `txt_examples`, `markdown_summaries`, and `max_results`. The captured rows all share the same input values:

- `article_title`: How to fine-tune Gemma 3 270M on python code data
- `article_type`: tutorial
- `author`: brett
- `json_examples`: `[]`
- `txt_examples`: `[{"file_name":"qat.txt","content":"<!--\nArticle Type: explainer, with example of Weights & Biases\nPrimary Keyword: data lineage\nSecondary Keywords: what is data lineage, data lineage meaning, data lineage tools, automated data lineage\nTitle: Quantization-Aware Training (QAT): A comprehensive guide and tutorial\nAuthor: Brett\n-->\n\n# Quantization-Aware Training (QAT): A Step-by-Step Guide with PyTorch\n\n## Introduction\n[LLM-generated first paragraph suggestion: Quantization aware training...` (truncated)
- `markdown_summaries`: populated on some rows, N/A on the others. Where present, it is a list of differently worded summaries of the same article; the first captured one begins: `"Summary of https://magazine.sebastianraschka.com/p/llm-research-insights-instruction:\nThe article from \"Ahead of AI\" discusses recent advancements in large language models (LLMs) with a focus on instruction finetuning and parameter-efficient finetuning using LoRA. It highlights three key research papers from May 2024...` (truncated)
- `max_results`: N/A on every row
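These rows look like repeated calls to a single traced function whose parameters match the input columns above. As a point of reference, here is a minimal sketch of how such traces could be logged with W&B Weave; the project name, the `generate_article` function, and its body are hypothetical stand-ins for illustration, not the code that actually produced this table.

```python
# Minimal sketch (hypothetical): logging calls whose inputs match the columns
# in the Traces table above. `generate_article`, its body, and the project
# name are illustrative assumptions, not the original pipeline.
from typing import Optional

import weave

weave.init("article-writer")  # hypothetical project name


@weave.op()  # each call is recorded as a trace with its inputs and output
def generate_article(
    article_title: str,
    article_type: str,
    author: str,
    json_examples: list,
    txt_examples: list,
    markdown_summaries: Optional[list] = None,
    max_results: Optional[int] = None,
) -> str:
    # Placeholder body: a real implementation would build a prompt from the
    # examples and summaries and call an LLM to draft the article.
    return f"# {article_title}\n\nDraft {article_type} by {author}."


generate_article(
    article_title="How to fine-tune Gemma 3 270M on python code data",
    article_type="tutorial",
    author="brett",
    json_examples=[],
    txt_examples=[{"file_name": "qat.txt", "content": "..."}],
)
```

Each such call would appear as one row in the table, with inputs that were not passed (here `markdown_summaries` and `max_results`) left empty, which would be consistent with the N/A cells above.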