Fine-Tuning BERT for Sentiment Analysis

This repository contains the code for fine-tuning a pre-trained BERT model for sentiment analysis with the Hugging Face Transformers and PEFT libraries.

Table of Contents

  • Introduction
  • Setup
  • Code Structure
  • Usage
  • Results
  • License
  • Acknowledgments
  • Contact

Introduction

This project demonstrates, step by step, how to fine-tune a pre-trained BERT model for sentiment analysis with the Hugging Face Transformers library. We fine-tune the bert-base-cased model on the IMDB dataset with a parameter-efficient (PEFT) adapter and reach roughly 90% test accuracy (see Results).
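
The core step is wrapping the base model with a parameter-efficient adapter. The sketch below does this with LoRA via the peft library; the specific adapter type and the hyperparameters (r, lora_alpha, lora_dropout) are illustrative assumptions, and the repository's train.py remains the authoritative version.

```python
# Minimal sketch, assuming the standard Hugging Face transformers/peft/datasets
# APIs; hyperparameters are illustrative, not the repository's exact values.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

dataset = load_dataset("imdb")  # 25k train / 25k test movie reviews
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2  # binary sentiment: negative / positive
)

# Inject low-rank LoRA matrices; only these (plus the classification head)
# are trained, while the rest of BERT stays frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,              # rank of the low-rank update
    lora_alpha=16,    # scaling factor for the update
    lora_dropout=0.1,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically ~1% of all parameters
```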

Setup
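
The required packages are not pinned in this README; the following install line is an assumption based on the libraries the code described below relies on:

```
pip install torch transformers datasets peft evaluate
```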

Code Structure

The code is organized as follows:

  • data: contains the IMDB dataset
  • models: contains the pre-trained bert-base-cased model and the fine-tuned PEFT adapter
  • train.py: contains the training code for the fine-tuned PEFT model (a sketch of a possible implementation follows this list)
  • inference.py: contains the inference code for the fine-tuned PEFT model (see the sketch under Usage)
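
For orientation, here is a minimal sketch of what a Trainer-based train.py could look like. It continues from the Introduction sketch (reusing model, tokenizer, and dataset defined there); the output path models/peft-imdb, batch size, and epoch count are illustrative assumptions.

```python
# Hypothetical train.py sketch; `model`, `tokenizer`, and `dataset` come from
# the Introduction sketch above.
import numpy as np
import evaluate
from transformers import Trainer, TrainingArguments

def tokenize(batch):
    # Truncate/pad each review to BERT's maximum input length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(
        predictions=np.argmax(logits, axis=-1), references=labels
    )

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="models/peft-imdb",    # assumed location under models/
        per_device_train_batch_size=16,   # illustrative values
        num_train_epochs=2,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())                  # accuracy on the test split
model.save_pretrained("models/peft-imdb")  # stores only the adapter weights
```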

Usage

  1. Train the fine-tuned PEFT model: python train.py
  2. Run inference on a sample text: python inference.py (an illustrative sketch follows this list)
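
An inference.py along these lines would load the base model, attach the saved adapter, and classify a sample review. The adapter path and the label mapping (1 = positive, the IMDB convention) are assumptions:

```python
# Illustrative inference sketch; the adapter path is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)
# Attach the LoRA adapter produced by train.py to the frozen base model.
model = PeftModel.from_pretrained(base, "models/peft-imdb")
model.eval()

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("positive" if logits.argmax(-1).item() == 1 else "negative")
```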

Results

The accuracy of the PEFT fine-tuning runs:

  • Foundational model without fine-tuning: 49.6% (roughly chance level for binary sentiment)
  • Training #1: 88.0%
  • Training #2: 89.9%

Note: The results may vary depending on the system configuration and the dataset used.

License

This project is licensed under the MIT License.

Acknowledgments

This project builds on the Hugging Face Transformers library and the IMDB dataset.

Contact

This repository is used in the blog: Link to the blog

If you have any questions or would like to contribute to this project, please contact me at: m AT kerbachi dot com.
