Sentiment Examples

The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as lvwerra/distilbert-imdb) providing the reward signal.

Here’s an overview of the notebooks and scripts in the trl repository:

| File | Description | Colab link |
| --- | --- | --- |
| gpt2-sentiment.ipynb | Fine-tune GPT2 to generate positive movie reviews. | Open In Colab |
| gpt2-sentiment-control.ipynb | Fine-tune GPT2 to generate movie reviews with controlled sentiment. | Open In Colab |
| gpt2-sentiment.py | Same as the notebook, but easier to use in a multi-GPU setup with any architecture. | x |
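At a high level, each example follows the same loop: generate continuations with the policy model, score the resulting texts with the sentiment classifier, and feed those scores back as rewards for a PPO step. The sketch below is a simplified outline of that loop rather than a drop-in replacement for the notebooks; it assumes the PPOTrainer/PPOConfig API and the lvwerra/gpt2-imdb policy checkpoint used in the examples, the POSITIVE/NEGATIVE label names of the classifier checkpoint, and a small illustrative build_dataset helper standing in for the dataset preparation done in the notebooks.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="lvwerra/gpt2-imdb", learning_rate=1.41e-5, log_with="wandb")

tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

# Illustrative dataset preparation: short IMDB prompts that GPT2 will continue.
def build_dataset(tokenizer, num_samples=1000, prompt_len=8):
    ds = load_dataset("imdb", split="train").select(range(num_samples))

    def tokenize(sample):
        sample["input_ids"] = tokenizer.encode(sample["text"])[:prompt_len]
        sample["query"] = tokenizer.decode(sample["input_ids"])
        return sample

    ds = ds.map(tokenize)
    ds.set_format(type="torch")
    return ds

dataset = build_dataset(tokenizer)

# Keep batch fields as plain lists; PPOTrainer.step expects lists of tensors/strings.
collator = lambda data: {key: [d[key] for d in data] for key in data[0]}

# Policy model (with value head) and frozen reference model for the KL penalty.
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, dataset=dataset, data_collator=collator)

# The sentiment classifier plays the role of the reward model.
sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

generation_kwargs = {
    "min_length": -1, "top_k": 0, "top_p": 1.0, "do_sample": True,
    "pad_token_id": tokenizer.eos_token_id, "max_new_tokens": 32,
}

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]

    # 1. Generate a continuation for each prompt (strip the prompt tokens from the output).
    response_tensors = [
        ppo_trainer.generate(q, **generation_kwargs).squeeze()[len(q):] for q in query_tensors
    ]
    batch["response"] = tokenizer.batch_decode(response_tensors)

    # 2. Score prompt + continuation; the score of the positive class becomes the reward.
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    pipe_outputs = sentiment_pipe(texts, return_all_scores=True)
    rewards = [
        torch.tensor(next(s["score"] for s in out if s["label"] == "POSITIVE"))
        for out in pipe_outputs
    ]

    # 3. Run a PPO optimization step and log the statistics.
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
    ppo_trainer.log_stats(stats, batch, rewards)
```

The notebooks and gpt2-sentiment.py implement the same loop with additional details (length sampling, evaluation, and logging).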

Installation

```bash
pip install trl

# optional: wandb
pip install wandb
```

Note: if you don’t want to log with wandb, remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that is supported by accelerate.
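For example, assuming the PPOConfig used by these scripts, switching trackers (or disabling logging) is a one-line change:

```python
from trl import PPOConfig

# Swap wandb for another tracker supported by accelerate, e.g. tensorboard ...
config = PPOConfig(model_name="lvwerra/gpt2-imdb", log_with="tensorboard")

# ... or disable experiment logging entirely
config = PPOConfig(model_name="lvwerra/gpt2-imdb", log_with=None)
```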

Launch scripts

The trl library is powered by accelerate. As such, it is best to configure and launch training runs with the following commands:

```bash
accelerate config # will prompt you to define the training configuration
accelerate launch yourscript.py # launches training
```

A few notes on multi-GPU

To run in a multi-GPU setup with DDP (Distributed Data Parallel), change the `device_map` value to `device_map={"": Accelerator().process_index}` and make sure to run your script with `accelerate launch yourscript.py`. If you want to apply naive pipeline parallelism instead, you can use `device_map="auto"`.
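For illustration, here is a minimal sketch of the two options, assuming the model is loaded with AutoModelForCausalLMWithValueHead as in the scripts:

```python
from accelerate import Accelerator
from trl import AutoModelForCausalLMWithValueHead

# DDP: place the full model on the GPU owned by the current process,
# then launch with `accelerate launch yourscript.py`.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "lvwerra/gpt2-imdb",
    device_map={"": Accelerator().process_index},
)

# Naive pipeline parallelism instead: shard the model's layers across all visible GPUs.
# model = AutoModelForCausalLMWithValueHead.from_pretrained("lvwerra/gpt2-imdb", device_map="auto")
```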