This repository contains the syllabus for an LLM (large language model), RAG (retrieval-augmented generation), AI Agent, and MCP (Model Context Protocol) class focused on creative AI agent development, modeling, and computing from a use-case point of view. The Colab code, sources, presentations, and references, together with the AI tools listed below, can be used to develop LLM, RAG, and AI Agent applications. For the LLM, RAG, and AI Agent with MCP subjects and materials, refer to the link below.
- Syllabus
If you need an AI deep learning foundation, refer to the links below.
- AI foundation and tutorial with code: This repository contains materials for an AI Foundation seminar, covering fundamental concepts of AI, Machine Learning, Deep Learning, Natural Language Processing, Transformers, and Large Language Models (LLMs), including agent-based approaches and related services. It is designed to provide hands-on experience, primarily utilizing Jupyter Notebooks.
- Computer Vision with Deep Learning: This course goes beyond simply running pre-existing code. The core objective is to foster a deep understanding by having you implement the internal mechanisms of key deep learning models—such as CNN, ResNet, R-CNN, and YOLO—from the ground up. With hands-on exercises in PyTorch and Keras, you will gain proficiency in translating complex theories into functional code.
For reference, the subjects are as follows.
- Transformer encoder and decoder tutorial, with Transformer from-scratch source code
- Tokens and embeddings for NLP (natural language processing) using Hugging Face (see the tokenizer sketch after this list)
- Multi-modal models such as CLIP and LLaVA
- Stable Diffusion and prompt engineering using text-to-image, video, audio, sound, document (word processing, presentation), and code (app, game) tools
- LLM: training and fine-tuning models such as Gemma and Llama
- RAG and LangChain: vector DBs such as FAISS and Chroma DB, and graph DBs using Neo4j
- Chatbot with Ollama: Gradio and Streamlit for UX
- AI Agent and MCP
- LLM internal code analysis, e.g. DeepSeek and Manus
- Vibe coding using Copilot, GPT, etc.
- Project and documentation
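As a small taste of the token and embedding topic above, here is a minimal sketch using the Hugging Face transformers library; the model name bert-base-uncased is only an illustrative choice, not the one used in class.
# Minimal tokenization sketch (assumes `pip install transformers`).
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "Tokens become embeddings for NLP models."
encoded = tokenizer(text)                                      # returns input_ids, attention_mask, ...
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"])
print(encoded["input_ids"])
print(tokens)                                                  # subword tokens, e.g. ['[CLS]', 'tokens', ...]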
LLMs use deep learning architectures such as the Transformer, which relies on numerical analysis and linear algebra, so it is better to understand the subjects below before starting.
- Linear algebra: handling datasets such as matrices and tensors, with a visualization cheat sheet
- Newton's method for solving equations; differential calculus, which includes Newton's method (see the sketch below)
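As a quick illustration of Newton's method, the following minimal Python sketch solves x^2 - 2 = 0 by iterating x_{n+1} = x_n - f(x_n)/f'(x_n); the function and starting point are only example choices.
# Newton's method sketch: find a root of f(x) = x^2 - 2 (i.e. sqrt(2)).
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # Newton update: x_{n+1} = x_n - f(x_n) / f'(x_n)
        x -= step
        if abs(step) < tol:      # stop when the update is tiny
            break
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)                      # ~1.41421356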
First, clone this repository.
git clone https://github.com/mac999/LLM-RAG-Agent-Tutorial.git
Second, check the syllabus and lesson plan to understand the LLM, RAG, and AI agent development courses.
Before running the example code, ensure you have Colab Pro and Python 3.10 or higher installed. Some tools and libraries use an NVIDIA GPU, so if you want to use them, prepare a notebook computer with an NVIDIA GPU (8 GB recommended, 4 GB minimum).
Follow the instructions below to set up your environment:
For reference, this lesson will use the following:
- Colab Pro: to run Jupyter notebooks, you need to create a Colab account. For fine-tuning LLMs, Colab Pro will be needed.
- OpenAI: to use the ChatGPT LLM model and API, you need to create a ChatGPT account and an OpenAI API key.
- Hugging Face: for using LLM and Stable Diffusion-based models (for example, the Single Image-to-3D model), you need to sign up for Hugging Face.
- Ollama: for using AI tools in open-source LLM projects. You need to install NVIDIA CUDA to run it.
- Generative AI services such as Sora, Runway, Dream Machine, Kling AI, and Canva: for prompt engineering practice. If you want to use them temporarily and don't want to share your private information, use a temporary email service such as Temp Mail.
- LangChain with a vector DB: for RAG, LangChain, Chroma DB, etc. will be used (see the minimal RAG sketch after this list).
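To give a feel for how LangChain and a vector DB fit together in RAG, here is a minimal sketch, not the course's official code. It assumes the langchain-openai, langchain-community, and chromadb packages are installed, OPENAI_API_KEY is set, and the model name is only an example.
# Minimal RAG sketch with LangChain + Chroma (assumptions noted above).
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import Chroma

texts = ["RAG retrieves relevant documents before the LLM generates an answer.",
         "FAISS and Chroma DB are common vector stores."]
vectordb = Chroma.from_texts(texts, embedding=OpenAIEmbeddings())   # embed and index the texts
docs = vectordb.similarity_search("What does RAG do?", k=1)         # retrieve the closest chunk

llm = ChatOpenAI(model="gpt-4o-mini")                               # model name is an assumption
answer = llm.invoke(f"Context: {docs[0].page_content}\nQuestion: What does RAG do?")
print(answer.content)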
For GPU-accelerated tasks, you need to install the correct NVIDIA drivers for your GPU.
- Download NVIDIA Drivers: NVIDIA Driver Downloads
- Steps:
- Identify your GPU model using the NVIDIA Control Panel or the nvidia-smi command in the terminal.
- Download the latest driver for your GPU model.
- Install the driver and reboot your system.
To confirm installation:
nvidia-smi
CUDA is required for running GPU-accelerated operations.
- Download CUDA Toolkit: CUDA Toolkit
- During installation:
- Match the CUDA version with your NVIDIA driver and GPU model. Refer to the compatibility chart on the download page.
- Install the CUDA Toolkit with default options.
- Add the CUDA binary paths to your environment variables.
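After the driver and CUDA Toolkit are installed, a quick way to confirm that GPU acceleration is usable from Python is the small PyTorch check below; it assumes PyTorch was installed with a matching CUDA build, as described later in the setup.
# Quick GPU sanity check (assumes a CUDA-enabled PyTorch build is installed).
import torch
print(torch.cuda.is_available())          # True if the driver and CUDA build match
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # GPU model name
    print(torch.version.cuda)             # CUDA version PyTorch was built with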
Ensure that Python (a stable version, 3.10 or 3.12) is installed on your system. On macOS, please refer to how to install Python on a Mac.
- Download Python: python.org
- During installation:
- Check the box for "Add Python to PATH".
- Install Python along with pip.
To confirm the installation in a terminal (Command Prompt on Windows, shell terminal on Linux):
python --version
Ensure that Anaconda (version 24.0 or later) is installed on your system.
- Download Anaconda: Anaconda
Make accounts for OpenAI and Hugging Face
- Sign up for Hugging Face and create an API token to develop open-source LLM-based applications
- Sign up for the OpenAI API to develop ChatGPT-based applications (*Note: don't check the auto-subscription option*)
- If AI-related models or tools will be used (such as LLM model fine-tuning with Ollama), install a stable PyTorch build (CUDA 11.8) and additional packages:
pip install openai
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Run the following command to install the required libraries:
pip install pandas numpy
pip install ollama openai transformers huggingface_hub langchain
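Once the packages above are installed and your API keys are created, the sketch below shows one way to check that the OpenAI and Hugging Face accounts work from Python; the environment variable names and the model name are assumptions for illustration.
# Minimal account check (assumes OPENAI_API_KEY and HF_TOKEN are set in the environment).
import os
from openai import OpenAI
from huggingface_hub import login

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is only an example
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)

login(token=os.environ["HF_TOKEN"])  # authenticates this machine with Hugging Face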
For examples that utilize Ollama, follow the installation instructions from the Ollama website.
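After installing Ollama and pulling a model (e.g. ollama pull llama3), a minimal chat call through the ollama Python package might look like the sketch below; the model name is an assumption.
# Minimal Ollama chat sketch (assumes the Ollama server is running and a model has been pulled).
import ollama
response = ollama.chat(
    model="llama3",  # example model name
    messages=[{"role": "user", "content": "Explain RAG in one sentence."}],
)
print(response["message"]["content"])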
Install Sublime Text for editing source code.
Install VS Code for debugging code. Please refer to how to install VS Code.
After completing the installations, verify that the environment is set up correctly:
- Check Python Version:
python --version
- Verify NVIDIA Drivers:
nvidia-smi
- Confirm CUDA Version:
nvcc --version
- Test Python Libraries: Create a test script and import the installed libraries:
import pandas as pd
import numpy as np
import torch
print("Libraries are installed successfully!")
print(f"torch version = {torch.__version__}")
If you're interested in media art, refer to the link below. The repository includes examples to experiment with generative media art.
For Blender with AI-assisted modeling:
- Download Blender: blender.org
- After installation:
- Enable the Python Console within Blender to run scripts directly (see the bpy sketch below).
- Ensure Blender uses the same Python environment where required libraries are installed.
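To confirm the Blender Python Console is working, a tiny bpy script such as the one below can be run from the console or the Text Editor; the object parameters are arbitrary examples.
# Tiny Blender scripting sketch (run inside Blender's Python Console or Text Editor).
import bpy
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 1.0))  # add a cube to the scene
cube = bpy.context.active_object
cube.name = "AI_Test_Cube"                                           # rename the new object
print(len(bpy.data.objects), "objects in the scene")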
If you're interested in text-to-3D models, you can find a text-to-3D model tool at the link below.
- Text-to-3D model code: Using Open-Source Models with Blender for AI-Assisted 3D Modeling: Comparative Study with OpenAI GPT
If you're interested in Arduino-based generative AI, refer to the links below.
- Chat with ChatGPT through Arduino IoT Cloud
- ChatGPT Arduino library
- ChatGPT_Client_For_Arduino
- ChatGPT for Arduino
- Using ChatGPT to Write Code for Arduino and ESP32
- Lee Boonstra, Google, 2025, Prompt Engineering
- Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic, Google, 2024, Agents
- OpenAI, A practical guide to building agents
- Keith Chugg, USC, 2020, Brief Introduction to Natural Language Processing
- Mustafa Degerli, Bilkent University, 2014, Software Processes and Software Development Process Models
- NVIDIA CUDA programming, open source, and AI
This repository is part of my ongoing work on AI, LLMs, and Transformer-based architectures. I am open to research collaboration, academic exchange, and joint projects with universities, public institutions, companies, and research labs.
For collaboration inquiries, please feel free to reach out: 📧 [laputa99999@gmail.com] | 🌐 [LinkedIn or Personal Website]
This repository is licensed under the MIT License. You are free to use, modify, and distribute the code for personal or commercial projects.
Taewook Kang, Ph.D. (laputa99999@gmail.com)