
This project focuses on Qlora-Peft-LLM-s, a fine-tuned version of TinyLlama optimized for natural language tasks. Using PEFT (Parameter-Efficient Fine-Tuning), it improves the model's efficiency while maintaining accuracy, making it well suited to applications that need a lightweight, fine-tuned language model for real-time tasks.


Qlora-Peft-LLMs-Prompt-Generation

This repository contains implementations of prompt generation using the QLoRA method together with the PEFT (Parameter-Efficient Fine-Tuning) approach. The project is built on the TinyLlama model and uses BitsAndBytesConfig to load the model in 8-bit precision for efficient computation. Notebooks for both Kaggle and Google Colab showcase model performance and prompt-generation capabilities.

Features

  • Efficient prompt generation using the QLoRA method
  • Implemented in both Kaggle and Google Colab notebooks
  • Demonstrates model performance metrics using TensorBoard
  • Easy-to-use interface for generating prompts based on user-defined titles

Model and Dataset

Installation

To run the notebooks, install the required libraries. A requirements.txt file is included for convenience:

pip install -r requirements.txt

Model

The trained model is saved as a zip archive for easy access and deployment; unzip it before use. The model is loaded with BitsAndBytesConfig in 8-bit precision for optimized memory use and performance.
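The steps above can be sketched as follows. This is a minimal illustration, not the repository's exact code: the zip path, adapter directory, and the TinyLlama checkpoint name are assumptions, and the 8-bit loading calls (which require transformers, peft, and bitsandbytes) are shown as comments.

```python
import zipfile
from pathlib import Path


def unzip_model(zip_path: str, dest_dir: str) -> Path:
    """Extract the zipped model/adapter archive before loading it."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    return dest


# Loading in 8-bit with BitsAndBytesConfig (sketch; names are assumptions):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# from peft import PeftModel
#
# bnb_config = BitsAndBytesConfig(load_in_8bit=True)
# base = AutoModelForCausalLM.from_pretrained(
#     "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed base checkpoint
#     quantization_config=bnb_config,
#     device_map="auto",
# )
# model = PeftModel.from_pretrained(base, unzip_model("model.zip", "model"))
```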

Usage

The notebooks in this repository demonstrate how to generate prompts based on user-defined titles. Both Kaggle and Google Colab notebooks are included for ease of access.
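As a rough sketch of the title-to-prompt flow the notebooks implement: the template below is illustrative (the exact instruction format should match the one used during fine-tuning), and the generation calls are commented out since they need the loaded model and tokenizer.

```python
def build_prompt(title: str) -> str:
    """Format a user-defined title into an instruction prompt for the model.

    The instruction template here is an assumption; adjust it to the
    template actually used when the adapter was trained.
    """
    return (
        "### Instruction:\n"
        f"Write a detailed prompt for the topic: {title}\n"
        "### Response:\n"
    )


# Generating text with the fine-tuned model (requires model/tokenizer from above):
#
# inputs = tokenizer(build_prompt("Healthy Breakfast Ideas"), return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```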

Notebooks

Model Performance

Performance metrics for the model are available in the Performance directory, with visualizations provided through TensorBoard.
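Assuming the TensorBoard event files live under the Performance directory (the exact log path may differ), the dashboard can be launched locally with:

```shell
# Point TensorBoard at the repository's training logs
# (directory name is an assumption; adjust to the actual log path).
tensorboard --logdir Performance --port 6006
```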

Results

You can view the results generated with new data in the Results Directory.

Contributing

Contributions are welcome! If you'd like to contribute to this project, please fork the repository and submit a pull request.

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
