Note: This repository contains installation instructions for MyGPT using Docker images and does not require the source code. If you need access to the MyGPT source to help with development, please contact Jaimin Patel (Email: jaimin.patel@stjude.org) or the appropriate person.
Note 2: This repository is for public use and does not contain any private information. If you need access to the private repository, please contact Jaimin Patel.
ChatGPT has revolutionized creative occupations, but tasks requiring factual backing suffer from generalized models and limitations such as hallucinations and inconsistency. Here, we present MyGPT, an open-source Large Language Model (LLM) pipeline for asking questions about content from a curated list of publications or video/audio lectures. MyGPT minimizes hallucination by providing context for each question and generates accurate answers with source citations. MyGPT can run on personal devices or cloud infrastructure and can help with complex tasks such as literature review and learning.
We have divided the MyGPT pipeline architecture into three sections:
- User interface (UI): The UI is the front end of the pipeline, a web application that allows users to interact with the pipeline. It is built using ReactJS.
- Backend server: The backend server is responsible for handling requests from the UI and sending them to the LLM server. The backend server is built using Python Django.
- LLM server: The LLM server is responsible for generating answers to the questions asked by the user. We are using Ollama for the LLM server.
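To illustrate how these pieces fit together, here is a minimal sketch of the kind of request the backend server can send to the Ollama LLM server over Ollama's REST API. Port 11434 is Ollama's default, and the model name and prompt are example values, not part of MyGPT itself:

```bash
# Illustrative sketch: querying the Ollama LLM server as the backend would.
# 11434 is Ollama's default port; "llama2" is an example model name.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What does the cited publication conclude about treatment outcomes?",
  "stream": false
}'
```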
MyGPT can be installed in the following environments:
MyGPT uses Ollama as the LLM server, which requires at least 8 GB of RAM (16 GB for better response times) and 10 GB of disk space. Ollama provides native installation for Mac and Linux only; for Windows users, we use Docker to run Ollama.
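For Windows (or any Docker-based setup), running Ollama in a container looks roughly like the sketch below; the container name and model are example values:

```bash
# Example: running the Ollama server in Docker (the approach suggested for Windows users).
# The named volume keeps downloaded models across restarts; 11434 is Ollama's default port.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the running container ("llama2" is an example model name).
docker exec -it ollama ollama pull llama2
```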
To run the pipeline in the following environments, follow the instructions:
- Mac
- Linux
- Windows
These instructions are simple and easy to follow. You can also modify the bash scripts to suit your needs.
MyGPT can be hosted on a server or VM with a GPU. For this installation we recommend hosting the user interface (UI), backend server, and Ollama (LLM server) on three separate VMs. The Ollama VM should have a GPU with CUDA installed.
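As a reference, checking GPU readiness on the Ollama VM and starting Ollama with GPU support in Docker might look like this (a sketch only; it assumes the NVIDIA driver and NVIDIA Container Toolkit are already installed on that VM):

```bash
# Confirm the NVIDIA driver and a CUDA-capable GPU are visible on the Ollama VM.
nvidia-smi

# Run Ollama with GPU access (assumes the NVIDIA Container Toolkit is installed).
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```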
To run the pipeline on a VM/server, follow the instructions:
MyGPT can be hosted on any cloud service, but we provide Azure as an example deployment. For this installation we recommend hosting the user interface (UI), backend server, and Ollama (LLM server) on three separate VMs. The Ollama VM should have a GPU with CUDA installed.
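For example, provisioning a GPU-enabled Ubuntu VM for the Ollama server with the Azure CLI might look like the sketch below. The resource group, VM name, region, and VM size are placeholder values; pick a GPU size and region available to your subscription:

```bash
# Example values throughout: mygpt-rg, mygpt-ollama, eastus, and Standard_NC6s_v3 are placeholders.
az group create --name mygpt-rg --location eastus

az vm create \
  --resource-group mygpt-rg \
  --name mygpt-ollama \
  --image Ubuntu2204 \
  --size Standard_NC6s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys
```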
To run the pipeline on Azure, follow the instructions:
The MyGPT user interface allows users to browse the publication library, ask questions, and get answers. It is built using ReactJS.
Here is an example of the user interface with a question, answer, and source citation:
Check out the FAQs for common questions and answers.
Developers interested in using the MyGPT API can check the developer's guide.
If you come across any bug or error, please report it in the issues section.