HuggingTweets - Generate Tweets with Hugging Face
Introduction
In this project, we'll show you how to fine-tune a pre-trained transformer on Jeff Dean's tweets using Hugging Face's transformers library, a collection of popular model architectures for natural language processing including BERT, GPT-2, RoBERTa, T5, and hundreds of others. We'll also use the new Weights & Biases integration to log model performance and model predictions automatically.
GitHub repo →
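Concretely, the fine-tuning described above can be set up in a few lines with the Trainer API. The sketch below is illustrative rather than the repo's exact script: the file name tweets.txt, the hyperparameters, and the run name are assumptions. With the wandb package installed, setting report_to="wandb" is enough to stream losses and metrics to Weights & Biases during training.

```python
# A minimal fine-tuning sketch (not the exact script from the repo).
# Assumes tweets have already been collected into "tweets.txt",
# one tweet per line -- the file name and hyperparameters below
# are illustrative.
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the raw tweet text into fixed-size blocks for causal LM training.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="tweets.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="output",
    num_train_epochs=4,
    per_device_train_batch_size=1,
    logging_steps=5,
    report_to="wandb",       # log losses and metrics to Weights & Biases
    run_name="huggingtweets-demo",
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()

trainer.save_model("output")         # write the final weights to output/
tokenizer.save_pretrained("output")  # save the tokenizer alongside for reuse
```

Saving the tokenizer next to the final weights makes the checkpoint directory directly loadable for generation later on.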
Without further ado, let's look at the predictions our model makes. In the next sections, we'll walk you through how to do this yourself.
Disclaimer: this demo is not meant to be used to publish false generated information; it is intended for research on Natural Language Generation (NLG).
The Model Predictions
Fine-Tuning the Hugging Face Model Yourself
Generating tweets in the style of your favorite people by fine-tuning a Hugging Face transformer, and visualizing its performance and predictions in Weights & Biases, is simple!
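Once training finishes, sampling new tweets takes only a few lines. Here is a minimal sketch, assuming the fine-tuned checkpoint was saved to the local output directory from the training sketch above; the prompt and sampling parameters are likewise illustrative assumptions.

```python
# Sample tweets from the fine-tuned checkpoint; the checkpoint path,
# prompt, and sampling parameters are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="output")

samples = generator(
    "My dream is",          # seed text for the generated tweets
    max_length=60,          # tweets are short, so a small budget suffices
    do_sample=True,         # sample instead of greedy decoding for variety
    top_p=0.95,             # nucleus sampling keeps the output coherent
    num_return_sequences=3,
)
for sample in samples:
    print(sample["generated_text"])
```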
If you just want to test the demo, click the link below and share your predictions on Twitter with #huggingtweets!

To understand how the model works, check out huggingtweets.ipynb or use the following link.
Share your results
If you get an interesting result, we'd absolutely love to see it! 🤗
Please tweet us at @weights_biases and @huggingface.
Resources to dive further
Got questions?
If you have any questions about using W&B to track your model performance and predictions, please reach out in our Slack community. Our team would love to make your experience a good one.
More Resources
- A Step by Step Guide: Track your Hugging Face model performance with Weights & Biases
- Does model size matter? A comparison of BERT and DistilBERT using Sweeps
- Who is "Them"? Text Disambiguation with Transformers