
HuggingTweets - Generate Tweets with Huggingface

Fine-tune a pre-trained Transformer on Yann LeCun's tweets
Created on May 14 | Last edited on May 14

Introduction

In this project, we'll show you how to fine-tune a pre-trained transformer on Yann LeCun's tweets using HuggingFace's transformers library, a collection of popular model architectures for natural language processing that includes BERT, GPT-2, RoBERTa, T5, and hundreds of others. We'll also use the new Weights & Biases integration to automatically log model performance and predictions.
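As a quick sketch of that logging setup (the project name here is our own choice, nothing fixed by the library): once wandb is installed and you're logged in, the transformers Trainer picks it up and starts logging on its own.

```python
# W&B setup sketch: with wandb installed, transformers logs to it automatically.
import os
import wandb

wandb.login()  # prompts for an API key on first use
os.environ["WANDB_PROJECT"] = "huggingtweets"  # hypothetical project name
```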

GitHub repo →

Without further ado, let's look at the predictions our model makes. In the next sections, we'll walk you through how to do this yourself.

Disclaimer: this demo is intended for research on Natural Language Generation (NLG), not for publishing generated text as genuine information.

The Model Predictions




[Embedded W&B panel: sample tweets generated by the fine-tuned model]
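Predictions like the ones above come from sampling the fine-tuned model with a short seed prompt. Here's a rough sketch, assuming the model was saved to a local `output` directory; the prompt and sampling settings are illustrative, not the demo's exact values.

```python
# Sampling sketch: generate tweet-like text from the fine-tuned model.
# "output" is a hypothetical local path where the fine-tuned model was saved.
from transformers import pipeline

generator = pipeline("text-generation", model="output")
samples = generator(
    "The future of AI",      # illustrative seed prompt
    max_length=60,           # tweets are short
    num_return_sequences=3,  # a few candidates to pick from
    do_sample=True,
    top_p=0.95,
)
for sample in samples:
    print(sample["generated_text"])
```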


Fine-Tuning The HuggingFace Model Yourself

Generating tweets in the style of your favorite people is simple: fine-tune a transformer from HuggingFace and visualize its performance and predictions in Weights & Biases!
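The first step is collecting the training data. The demo notebook handles this for you, but as a rough sketch of what it involves, here's how you might pull a user's timeline with tweepy (Twitter API v1.1) and write one cleaned tweet per line to a text file. The credential placeholders, the screen name, and the `tweets.txt` file name are all hypothetical.

```python
# Hypothetical data-collection sketch using tweepy (Twitter API v1.1).
# CONSUMER_KEY etc. are placeholders for your own credentials.
import re
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("tweets.txt", "w") as f:
    for status in tweepy.Cursor(
        api.user_timeline, screen_name="ylecun", tweet_mode="extended"
    ).items(3000):
        if hasattr(status, "retweeted_status"):
            continue  # skip retweets: we only want the user's own words
        text = re.sub(r"https?://\S+", "", status.full_text).strip()  # drop URLs
        if text:
            f.write(text.replace("\n", " ") + "\n")  # one tweet per line
```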

If you just want to test the demo, click the link below and share your predictions on Twitter with #huggingtweets!

Open In Colab

To understand how the model works, check out huggingtweets.ipynb or use the following link.

Open In Colab
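In outline, the notebook's training step boils down to something like the sketch below: fine-tune GPT-2 as a causal language model on the one-tweet-per-line file from the data-collection step. The hyperparameters are illustrative rather than the notebook's exact values; with wandb installed, the Trainer logs metrics to W&B automatically.

```python
# Minimal fine-tuning sketch: GPT-2 as a causal LM on one-tweet-per-line text.
# Hyperparameters here are illustrative, not the notebook's exact values.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Pack the tweets into fixed-size blocks for language modeling.
dataset = TextDataset(tokenizer=tokenizer, file_path="tweets.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="output",
        num_train_epochs=4,
        per_device_train_batch_size=1,
        logging_steps=5,
    ),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```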

Share your results

If you get an interesting result, we'd absolutely love to see it! 🤗

Please tweet us at @weights_biases and @huggingface.

Resources to dive deeper

Got questions?

If you have any questions about using W&B to track your model performance and predictions, please reach out in our Slack community. Our team would love to make your experience a good one.

More Resources