
Commit 1bf73b7

hello-world like LangChain starter kit (#107)
2 parents (a680973 + 60b18f7), commit 1bf73b7

3 files changed: +43 −3 lines

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -46,6 +46,7 @@ Each layer has a certain strength of communication inbuilt
 ✅ `GitHub actions` configured\
 ✅ `Vale.sh` configured at PR level\
 ✅ `Pre-commit hooks` configured for code linting/formatting\
+✅ [LangChain](https://python.langchain.com/) Basics & workflows\
 ✅ Environment management via [pixi](https://prefix.dev/)\
 ✅ Reading data from online sources using [intake](https://github.com/intake/intake)\
 ✅ Sample pipeline built using [Dagster](https://github.com/dagster-io/dagster)\
@@ -54,7 +55,6 @@ Each layer has a certain strength of communication inbuilt
 ✅ Web UI build on [Flask](https://flask.palletsprojects.com/en/3.0.x/) \
 ✅ Web UI re-done and expanded with [FastHTML](https://docs.fastht.ml/)\
 ✅ Leverage AI models to analyse data [GitHub AI models Beta](https://docs.github.com/en/github-models/prototyping-with-ai-models)
-✨ LangChain integration
 
 ### ☕️ Quickly getting started with DataJourney
 
```

analytics_framework/langchain/hello_world_lc.py

Lines changed: 39 additions & 0 deletions

```diff
@@ -0,0 +1,39 @@
+import os
+from langchain.prompts import PromptTemplate
+from langchain.llms import OpenAI
+
+# NOTE: Using GitHub models here (add your preferred token attribute)
+
+token = os.environ["GITHUB_TOKEN"]
+endpoint = "https://models.inference.ai.azure.com"
+model_name = "gpt-4o-mini"
+
+llm = OpenAI(
+    base_url=endpoint,
+    api_key=token,
+    model=model_name,
+    temperature=0.7
+)
+
+# Step 2: Define a Prompt Template
+prompt = PromptTemplate(
+    input_variables=["question"],
+    template="You are a helpful assistant. Answer the following question in a clear and concise way: {question}"
+)
+
+
+# Step 3: Create a Function to Get AI Responses
+def get_answer(question):
+    # Format the prompt with the input question
+    formatted_prompt = prompt.format(question=question)
+
+    # Use the LLM to generate a response
+    response = llm(formatted_prompt)
+    return response
+
+
+# Example Usage
+if __name__ == "__main__":
+    question = "What is LangChain and why is it useful?"
+    answer = get_answer(question)
+    print("AI Speaks:", answer)
```

pixi.toml

Lines changed: 3 additions & 2 deletions

```diff
@@ -55,5 +55,6 @@ DJ_panel_app = { cmd = "python stock_price_twilio_app.py", depends-on = "DJ_pack
 DJ_flask_app = { cmd = "python app.py", depends-on = "DJ_package", cwd = "analytics_framework/intake/web_ui_flask" }
 DJ_fasthtml_app = { cmd = "python app.py", depends-on = "DJ_package", cwd = "analytics_framework/intake/web_ui_fasthtml" }
 DJ_mito_app = { cmd = "jupyter notebook mito_exp.ipynb", depends-on = "DJ_package", cwd = "usage_guide"}
-DJ_llm_analysis_gpt_4o = {cmd = "python analyse_my_data__gpt_4o_mini.py", cwd= "analytics_framework/ai_modeling"}
-DJ_advance_llm_analysis = {cmd = "python advance_analysis_coral_bleeching.py", cwd= "analytics_framework/ai_modeling"}
+DJ_llm_analysis_gpt_4o = {cmd = "python analyse_my_data__gpt_4o_mini.py", cwd = "analytics_framework/ai_modeling"}
+DJ_advance_llm_analysis = {cmd = "python advance_analysis_coral_bleeching.py", cwd = "analytics_framework/ai_modeling"}
+DJ_hello_world_langchain = {cmd = "python hello_world_lc.py", cwd = "analytics_framework/langchain"}
```
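With the task registered in `pixi.toml`, the starter can be launched from the repo root with `pixi run DJ_hello_world_langchain`, assuming `GITHUB_TOKEN` is exported in the environment, since the script reads it via `os.environ`.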
