public repository for MyGPT installation instructions


MyGPT logo

MyGPT_public

Note: This repository contains installation instructions for MyGPT using Docker images and does not include source code. If you need access to the MyGPT source code to help with the development process, please contact Jaimin Patel (Email: jaimin.patel@stjude.org) or the appropriate person.

Note 2: This repository is for public use and does not contain any private information. If you need access to the private repository, please contact Jaimin Patel.

ChatGPT has revolutionized creative occupations, but tasks requiring factual backing suffer from generalized models and limitations such as hallucinations and inconsistency. Here, we present MyGPT, an open-source Large Language Model (LLM) pipeline for asking questions about content from a curated list of publications or video/audio lectures. MyGPT minimizes hallucination by providing context for each question and generates accurate answers with source citations. MyGPT can run on personal devices or cloud infrastructure and can help with complex tasks such as literature review and learning.

Pipeline

MyGPT pipeline

We have divided the MyGPT pipeline architecture into three sections:

  1. User interface (UI): The UI is the front-end of the pipeline. It is a web application that allows users to interact with the pipeline. The UI is built using ReactJS.
  2. Backend server: The backend server is responsible for handling requests from the UI and sending them to the LLM server. The backend server is built using Python Django.
  3. LLM server: The LLM server is responsible for generating answers to the questions asked by the user. We are using Ollama for the LLM server.
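The hand-off between the backend server and the LLM server can be sketched as follows. This is a minimal illustration, not MyGPT's actual backend code: the endpoint and payload follow Ollama's documented `/api/generate` API, while `build_prompt`, the `llama2` model name, and the context handling are illustrative assumptions.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_prompt(question: str, context: str) -> str:
    """Embed retrieved context so the model answers from the curated sources."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def build_payload(question: str, context: str, model: str = "llama2") -> dict:
    """Assemble the JSON body expected by Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": build_prompt(question, context),
        "stream": False,  # return the full answer in a single response
    }

def ask(question: str, context: str) -> str:
    """Send the question to the Ollama server and return the generated answer."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(question, context)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In this sketch the backend is a thin proxy: it retrieves context for the question, wraps it into a prompt, and forwards the request to whichever host runs Ollama.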

Installation

MyGPT can be installed on the following environments:

Personal Computer

MyGPT uses Ollama for the LLM server, which requires at least 8 GB of RAM (16 GB for better response times) and 10 GB of disk space. Ollama offers direct installation on macOS and Linux only; Windows users will run Ollama with Docker.

To run the pipeline on a personal computer, follow the instructions:
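For Windows, the Docker route looks roughly like this. These are Ollama's documented Docker commands; the `llama2` model name is an illustrative choice, not necessarily the one MyGPT ships with.

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model and try it interactively inside the running container
docker exec -it ollama ollama run llama2
```

Once the container is up, the backend server can reach Ollama at `http://localhost:11434`.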

Server or VM with GPU

MyGPT can be hosted on a server or VM with a GPU. For this installation we recommend hosting the user interface (UI), backend server, and Ollama (LLM server) on three separate VMs. The Ollama VM should have a GPU with CUDA installed.

To run the pipeline on a VM/server, follow the instructions:
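On the Ollama VM, you can verify that the CUDA driver sees the GPU and then run the Ollama container with GPU access. These are standard NVIDIA and Docker commands; the container and volume names are illustrative, and `--gpus=all` requires the NVIDIA Container Toolkit to be installed.

```shell
# Confirm the CUDA driver can see the GPU
nvidia-smi

# Run the Ollama server with access to all GPUs on the host
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```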

Cloud services (Azure)

MyGPT can be hosted on any cloud service, but we provide Azure as an example deployment. For this installation we recommend hosting the user interface (UI), backend server, and Ollama (LLM server) on three separate VMs. The Ollama VM should have a GPU with CUDA installed.

To run the pipeline on Azure, follow the instructions:
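Provisioning the GPU VM for the Ollama server with the Azure CLI might look like the following. The resource group name, VM name, region, and VM size are illustrative assumptions; choose a GPU size that is available in your region and subscription.

```shell
# Create a resource group to hold the deployment (name and region are examples)
az group create --name mygpt-rg --location eastus

# Create an Ubuntu VM with an NVIDIA GPU for the LLM server
az vm create \
  --resource-group mygpt-rg \
  --name mygpt-llm \
  --image Ubuntu2204 \
  --size Standard_NC6s_v3 \
  --generate-ssh-keys
```

The UI and backend VMs can be created the same way with a non-GPU size, since only the Ollama VM needs CUDA.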

User Interface

The MyGPT user interface allows users to browse the publication library, ask questions, and view answers. The user interface is built using ReactJS.

Here is an example of the user interface with question, answer, and source citing:

MyGPT user interface

FAQs

Check out the FAQs for common questions and answers.

Developer's Guide

Developers interested in using the MyGPT API can check the developer's guide.

Issues

If you come across any bugs or errors, please report them in the issues section.
