
🦜️🌐 WebLangChain

This repo is an example of performing retrieval using the entire internet as a document store.

Try it live: weblangchain.vercel.app

✅ Running locally

By default, WebLangChain uses Tavily to fetch content from webpages. You can get an API key by signing up on the Tavily website. If you'd like to swap in a different base retriever (e.g. if you want to use your own data source), you can modify the get_base_retriever() method in main.py.
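For example, here is a minimal sketch of what a swapped-in retriever backed by your own data might look like. This assumes current langchain-community/langchain-openai packages, and the "my_index" FAISS path is a hypothetical placeholder, not part of this repo:

```python
# Hypothetical sketch of a custom get_base_retriever() in main.py.
# The default wraps Tavily's Search API; this version swaps in a local
# FAISS index built from your own documents.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

def get_base_retriever():
    vectorstore = FAISS.load_local(
        "my_index",  # placeholder path to a previously saved index
        OpenAIEmbeddings(),
        allow_dangerous_deserialization=True,
    )
    # Return the top-k most similar chunks for each query.
    return vectorstore.as_retriever(search_kwargs={"k": 6})
```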

  1. Install backend dependencies: poetry install.
  2. Make sure to set your environment variables to configure the application:
export OPENAI_API_KEY=
export TAVILY_API_KEY=

# for tracing
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
export LANGCHAIN_API_KEY=
export LANGCHAIN_PROJECT=
  3. Start the Python backend with poetry run make start.
  4. Install frontend dependencies by running cd nextjs, then yarn.
  5. Run the frontend with yarn dev.
  6. Open localhost:3000 in your browser.

⚙️ How it works

The general retrieval flow looks like this:

  1. Pull in raw content related to the user's initial query using a retriever that wraps Tavily's Search API.
    • For subsequent conversation turns, we also rephrase the original query into a "standalone query" free of references to previous chat history (see the chain sketch after the trace link below).
  2. Because the raw documents usually exceed the model's maximum context window, we perform additional contextual compression steps to filter what we pass to the model (a minimal sketch follows this list).
    • First, we split retrieved documents using a text splitter.
    • Then we use an embeddings filter to remove any chunks that do not meet a similarity threshold with the initial query.
  3. The retrieved context, the chat history, and the original question are passed to the LLM as context for the final generation.
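Here's a minimal sketch of that compression pipeline, assuming the standard LangChain components for this pattern; the chunk size, similarity threshold, and k value are illustrative placeholders rather than the repo's exact settings:

```python
# Sketch of step 2: split retrieved pages into chunks, then keep only
# chunks that clear a similarity threshold against the query.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    EmbeddingsFilter,
)
from langchain_community.retrievers import TavilySearchAPIRetriever
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

embeddings = OpenAIEmbeddings()
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=20)
# Drops chunks whose embedding similarity to the query is too low.
relevance_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.75)

compression_retriever = ContextualCompressionRetriever(
    # The pipeline applies the splitter, then the filter, in order.
    base_compressor=DocumentCompressorPipeline(
        transformers=[splitter, relevance_filter]
    ),
    base_retriever=TavilySearchAPIRetriever(k=6),
)

docs = compression_retriever.invoke("What is LangChain?")
```

Only the chunks that survive the filter are passed on to the model, which keeps the prompt within the context window.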

Here's a LangSmith trace illustrating the above:

https://smith.langchain.com/public/f4493d9c-218b-404a-a890-31c15c56fff3/r
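Steps 1 and 3 can be sketched as two small chains: one that condenses a follow-up into a standalone question, and one that produces the final cited answer. This assumes LCEL-style composition; the prompt wording and model name are illustrative, not the repo's exact prompts:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Step 1: rewrite a follow-up question into a standalone query.
condense_prompt = ChatPromptTemplate.from_template(
    "Given the conversation below, rephrase the follow-up question as a "
    "standalone question.\n\nChat history:\n{chat_history}\n\n"
    "Follow-up question: {question}\nStandalone question:"
)
condense_chain = condense_prompt | llm | StrOutputParser()

# Step 3: answer from the compressed context, citing sources.
answer_prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the sources below, and cite them.\n\n"
    "Sources:\n{context}\n\nChat history:\n{chat_history}\n\n"
    "Question: {question}"
)
answer_chain = answer_prompt | llm | StrOutputParser()

# Usage: condense first, retrieve with the standalone query, then answer.
# standalone = condense_chain.invoke({"chat_history": history, "question": q})
# answer = answer_chain.invoke({"context": docs, "chat_history": history, "question": q})
```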

It's built using LangChain for retrieval and orchestration, Tavily's Search API for web search, and Next.js for the frontend.

🚀 Deployment

The live version is hosted on Fly.io and Vercel. The backend Python logic is found in main.py, and the frontend Next.js app is under nextjs/.
