{
"cells": [
{
"cell_type": "markdown",
"id": "41305a19",
"metadata": {},
"source": [
"# 🏁 Submission Summary (MLE Class – Week 1 Homework)\n",
"\n",
"This notebook contains my completed work for Tasks 1–3, as well as documentation and an optional system design combining Ollama, LangChain, and MCP."
]
},
{
"cell_type": "markdown",
"id": "0aa56a8a",
"metadata": {},
"source": [
"# 🔷 Task 1 – MCP (Model Context Protocol) Overview\n",
"1.1 What is MCP?\n",
"\n",
"MCP (Model Context Protocol) is a standardized open protocol designed to allow AI models (LLMs) to safely access external tools and resources. It defines a structured interface that tools must follow, enabling models to interact with:\n",
"\n",
"file systems\n",
"\n",
"databases\n",
"\n",
"APIs\n",
"\n",
"developer tools\n",
"\n",
"search engines\n",
"\n",
"custom extensions\n",
"\n",
"without the model needing bespoke integration code for each tool.\n",
"\n",
"1.2 Why MCP Matters\n",
"\n",
"MCP provides:\n",
"\n",
"Security — precise control over what tools the model can access\n",
"\n",
"Interoperability — tools can be reused by different LLMs\n",
"\n",
"Extensibility — easy to add additional functions\n",
"\n",
"Simplicity — unifies tool-calling under one protocol\n",
"\n",
"It is becoming a foundational part of LLM-based system design.\n",
"\n",
"1.3 How MCP Fits Into This Assignment\n",
"\n",
"Although not required to implement fully here, MCP appears in the homework as an optional “advanced integration” component. Its role is to serve as a standardized layer connecting:\n",
"\n",
"Ollama (local model runtime)\n",
"\n",
"LangChain (LLM orchestration)\n",
"\n",
"Various tools (file access, web search, etc.)"
]
},
{
"cell_type": "markdown",
"id": "75acdb10",
"metadata": {},
"source": [
"# 🔷 Task 2 – Running Local LLMs with Ollama\n",
"2.1 Goal\n",
"\n",
"Install and run a local LLM (Llama 3) using Ollama, and verify access through an OpenAI-compatible API.\n",
"\n",
"2.2 What I Did\n",
"2.2.1 Verified installation\n",
"ollama --version\n",
"\n",
"2.2.2 Pulled and ran Llama 3\n",
"ollama run llama3\n",
"\n",
"2.2.3 Created a dedicated environment\n",
"conda create -n ollama314 python=3.14\n",
"conda activate ollama314\n",
"pip install openai\n",
"\n",
"2.2.4 Wrote and ran a local API test\n",
"Code Cell: ollama_openai_test.py\n",
"\n",
"\n",
"2.3 Results\n",
"\n",
"The script successfully returned a coherent response from the local Llama 3 model, proving:\n",
"\n",
"Ollama installed correctly\n",
"\n",
"Llama 3 model runs offline\n",
"\n",
"OpenAI API calls are compatible with Ollama"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "992d61b0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== Response from local llama3 ===\n",
"Nice to meet you!\n",
"\n",
"According to my understanding, we're currently having a conversation within the confines of the Olmec (Ollama) chatbot platform. This means I'm indeed \"running\" locally, meaning all our interactions will be contained within this sandbox environment. No external connections or integrations with outside services are necessary.\n",
"\n",
"How's your experience been so far on Ollama? Do you have any specific topics or questions you'd like to explore? I'm here to assist and chat with you!\n"
]
}
],
"source": [
"from openai import OpenAI\n",
"\n",
"client = OpenAI(\n",
" base_url=\"http://localhost:11434/v1\",\n",
" api_key=\"ollama\",\n",
")\n",
"\n",
"response = client.chat.completions.create(\n",
" model=\"llama3\",\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": \"Hi! Please confirm you are running locally via Ollama.\"}\n",
" ],\n",
")\n",
"\n",
"print(\"=== Response from local llama3 ===\")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "4f94bd1c",
"metadata": {},
"source": [
"# 🔷 Task 3 – Using LangChain (LCEL) with Ollama\n",
"3.1 Goal\n",
"\n",
"Create a LangChain Expression Language (LCEL) pipeline using a local LLM from Ollama.\n",
"\n",
"3.2 Install LangChain\n",
"See the pip install code below\n",
"\n",
"3.3 LangChain LCEL Script\n",
"\n",
"Code Cell: ollama_langchain_lcel.py\n",
"\n",
"3.4 Results\n",
"\n",
"Running the script successfully produced a multi-paragraph explanation generated by the local Llama 3 model via LangChain LCEL.\n",
"\n",
"This confirms:\n",
"\n",
"LangChain is working\n",
"\n",
"LCEL is correctly constructing a chain\n",
"\n",
"Local Llama 3 can serve as the model in the chain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "78c8a9bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: langchain>=0.3.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (0.3.27)\n",
"Requirement already satisfied: langchain-openai in c:\\users\\willi\\anaconda3\\lib\\site-packages (0.3.35)\n",
"Requirement already satisfied: requests in c:\\users\\willi\\anaconda3\\lib\\site-packages (2.32.5)\n",
"Requirement already satisfied: SQLAlchemy<3,>=1.4 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (1.4.22)\n",
"Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (4.0.3)\n",
"Requirement already satisfied: pydantic<3.0.0,>=2.7.4 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (2.12.4)\n",
"Requirement already satisfied: PyYAML>=5.3 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (6.0.2)\n",
"Requirement already satisfied: langchain-core<1.0.0,>=0.3.72 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (0.3.79)\n",
"Requirement already satisfied: langsmith>=0.1.17 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (0.4.37)\n",
"Requirement already satisfied: langchain-text-splitters<1.0.0,>=0.3.9 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain>=0.3.0) (0.3.11)\n",
"Requirement already satisfied: openai<3.0.0,>=1.104.2 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-openai) (2.7.2)\n",
"Requirement already satisfied: tiktoken<1.0.0,>=0.7.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-openai) (0.12.0)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from requests) (2.5.0)\n",
"Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from requests) (2025.11.12)\n",
"Requirement already satisfied: idna<4,>=2.5 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from requests) (3.2)\n",
"Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from requests) (2.0.4)\n",
"Requirement already satisfied: tenacity!=8.4.0,<10.0.0,>=8.1.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-core<1.0.0,>=0.3.72->langchain>=0.3.0) (9.1.2)\n",
"Requirement already satisfied: typing-extensions<5.0.0,>=4.7.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-core<1.0.0,>=0.3.72->langchain>=0.3.0) (4.15.0)\n",
"Requirement already satisfied: packaging<26.0.0,>=23.2.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-core<1.0.0,>=0.3.72->langchain>=0.3.0) (25.0)\n",
"Requirement already satisfied: jsonpatch<2.0.0,>=1.33.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langchain-core<1.0.0,>=0.3.72->langchain>=0.3.0) (1.33)\n",
"Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from jsonpatch<2.0.0,>=1.33.0->langchain-core<1.0.0,>=0.3.72->langchain>=0.3.0) (3.0.0)\n",
"Requirement already satisfied: orjson>=3.9.14 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langsmith>=0.1.17->langchain>=0.3.0) (3.11.4)\n",
"Requirement already satisfied: requests-toolbelt>=1.0.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langsmith>=0.1.17->langchain>=0.3.0) (1.0.0)\n",
"Requirement already satisfied: zstandard>=0.23.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langsmith>=0.1.17->langchain>=0.3.0) (0.25.0)\n",
"Requirement already satisfied: httpx<1,>=0.23.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from langsmith>=0.1.17->langchain>=0.3.0) (0.28.1)\n",
"Requirement already satisfied: anyio in c:\\users\\willi\\anaconda3\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.1.17->langchain>=0.3.0) (4.11.0)\n",
"Requirement already satisfied: httpcore==1.* in c:\\users\\willi\\anaconda3\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.1.17->langchain>=0.3.0) (1.0.9)\n",
"Requirement already satisfied: h11>=0.16 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from httpcore==1.*->httpx<1,>=0.23.0->langsmith>=0.1.17->langchain>=0.3.0) (0.16.0)\n",
"Requirement already satisfied: jiter<1,>=0.10.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from openai<3.0.0,>=1.104.2->langchain-openai) (0.12.0)\n",
"Requirement already satisfied: sniffio in c:\\users\\willi\\anaconda3\\lib\\site-packages (from openai<3.0.0,>=1.104.2->langchain-openai) (1.3.1)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from openai<3.0.0,>=1.104.2->langchain-openai) (1.9.0)\n",
"Requirement already satisfied: tqdm>4 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from openai<3.0.0,>=1.104.2->langchain-openai) (4.67.1)\n",
"Requirement already satisfied: exceptiongroup>=1.0.2 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from anyio->httpx<1,>=0.23.0->langsmith>=0.1.17->langchain>=0.3.0) (1.3.0)\n",
"Requirement already satisfied: typing-inspection>=0.4.2 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from pydantic<3.0.0,>=2.7.4->langchain>=0.3.0) (0.4.2)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from pydantic<3.0.0,>=2.7.4->langchain>=0.3.0) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.41.5 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from pydantic<3.0.0,>=2.7.4->langchain>=0.3.0) (2.41.5)\n",
"Requirement already satisfied: greenlet!=0.4.17 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from SQLAlchemy<3,>=1.4->langchain>=0.3.0) (1.1.1)\n",
"Requirement already satisfied: regex>=2022.1.18 in c:\\users\\willi\\anaconda3\\lib\\site-packages (from tiktoken<1.0.0,>=0.7.0->langchain-openai) (2025.11.3)\n",
"Requirement already satisfied: colorama in c:\\users\\willi\\anaconda3\\lib\\site-packages (from tqdm>4->openai<3.0.0,>=1.104.2->langchain-openai) (0.4.4)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"pip install \"langchain>=0.3.0\" langchain-openai requests"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "72ce788e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
">>> Sending question...\n",
"\n",
"Cloud Large Language Models (LLMs) and local LLMs like Ollama differ primarily in their deployment architecture and accessibility.\n",
"\n",
"Cloud LLMs are trained on massive datasets and hosted on cloud infrastructure, such as Google Cloud AI Platform or Amazon SageMaker. These models can be accessed through APIs or web interfaces, allowing developers to integrate them into various applications without the need for extensive computational resources or data storage. Cloud LLMs are ideal for applications that require high scalability, flexibility, and collaboration among teams.\n",
"\n",
"Local LLMs like Ollama, on the other hand, are trained and run locally on a user's device or a dedicated server. These models are often designed for specific use cases or industries and can be more tailored to an organization's needs. Local LLMs offer greater control over data privacy and security, as well as reduced latency and costs compared to cloud-based solutions. However, they may require more computational resources and expertise in model deployment and maintenance.\n",
"\n",
"In summary, cloud LLMs provide scalability, flexibility, and collaboration capabilities, while local LLMs like Ollama offer greater control over data and lower costs. The choice between the two ultimately depends on the specific requirements of your project or application.\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def build_chain():\n",
"\n",
" llm = ChatOpenAI(\n",
" model=\"llama3\",\n",
" base_url=\"http://localhost:11434/v1\",\n",
" api_key=\"ollama\",\n",
" temperature=0.3,\n",
" )\n",
"\n",
" prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful, concise tutor for a machine learning student.\"),\n",
" (\"user\", \"Question: {question}\\n\\nPlease answer in 2–4 short paragraphs.\"),\n",
" ]\n",
" )\n",
"\n",
" parser = StrOutputParser()\n",
"\n",
" chain = prompt | llm | parser\n",
" return chain\n",
"\n",
"\n",
"def main():\n",
" chain = build_chain()\n",
" question = \"What is the difference between cloud LLMs and local LLMs like Ollama?\"\n",
" print(\">>> Sending question...\\n\")\n",
" print(chain.invoke({\"question\": question}))\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" main()"
]
},
{
"cell_type": "markdown",
"id": "f6960669",
"metadata": {},
"source": [
"# 🔷 Optional Advanced Design – Ollama + LangChain + MCP\n",
"\n",
"Although optional, I designed an extended system combining all three components.\n",
"\n",
"4.1 System Overview\n",
"\n",
"A local AI assistant architecture:\n",
"\n",
"Ollama runs the local LLM\n",
"\n",
"LangChain (LCEL) orchestrates prompts and workflow\n",
"\n",
"MCP provides external tool access (files, search, APIs)\n",
"\n",
"4.2 High-Level Architecture\n",
"\n",
"User Query\n",
" ↓\n",
"LangChain LCEL Chain\n",
" ↓\n",
"Local LLM (Ollama) ←→ MCP Tools (Filesystem, Search, etc.)\n",
" ↓\n",
"Final Response\n",
"\n",
"\n",
"4.3 Example Use Case\n",
"\n",
"“Summarize my local notes and connect them to MCP concepts.”\n",
"\n",
"Workflow:\n",
"\n",
"MCP File Server → read local files\n",
"\n",
"LangChain → summarize and analyze\n",
"\n",
"Ollama → generate final explanation\n",
"\n",
"This integrates local LLM power with safe tool access."
]
},
{
"cell_type": "markdown",
"id": "2435f62f",
"metadata": {},
"source": [
"# 🔷 Reflection\n",
"\n",
"Through this assignment, I learned how to:\n",
"\n",
"Run LLMs locally (Ollama)\n",
"\n",
"Expose local LLMs through an OpenAI-compatible API\n",
"\n",
"Build LangChain LCEL pipelines\n",
"\n",
"Understand where MCP fits in modern LLM tooling architectures\n",
"\n",
"This provided a hands-on understanding of how real-world LLM applications are structured."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}