diff --git a/docs/demo_agent.ipynb b/docs/demo_agent.ipynb
new file mode 100644
index 0000000..a460c00
--- /dev/null
+++ b/docs/demo_agent.ipynb
@@ -0,0 +1,448 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "intro",
+ "metadata": {},
+ "source": [
+ "# langchain-parallel Demo: AI Agent with Web Research Tools\n",
+ "\n",
+ "This notebook demonstrates how to build an AI agent that can search the web, extract content from URLs, and answer questions with real-time information using **langchain-parallel**.\n",
+ "\n",
+ "We'll showcase three powerful tools:\n",
+ "- **ParallelWebSearchTool**: Search the web with natural language queries\n",
+ "- **ParallelExtractTool**: Extract clean content from web pages\n",
+ "- **ChatParallelWeb**: Chat model with built-in web research capabilities\n",
+ "\n",
+ "Then we'll combine them into a **LangGraph agent** that can autonomously decide which tools to use to answer complex questions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "setup-header",
+ "metadata": {},
+ "source": [
+ "## Setup\n",
+ "\n",
+ "First, let's install the required packages:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "install",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Note: you may need to restart the kernel to use updated packages.\n"
+ ]
+ }
+ ],
+ "source": [
+ "%pip install --quiet -U \"langchain>=1.1.0\" langchain-anthropic \"langchain-parallel>=0.2.0\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "credentials-header",
+ "metadata": {},
+ "source": [
+ "### Credentials\n",
+ "\n",
+ "Set up your API keys. You'll need:\n",
+ "- **Parallel API key**: Get one at [parallel.ai](https://parallel.ai)\n",
+ "- **Anthropic API key**: For the agent's LLM (Claude)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "credentials",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.environ.get(\"PARALLEL_API_KEY\"):\n",
+ " os.environ[\"PARALLEL_API_KEY\"] = getpass.getpass(\"Parallel API key: \")\n",
+ "\n",
+ "if not os.environ.get(\"ANTHROPIC_API_KEY\"):\n",
+ " os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Anthropic API key: \")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "tools-header",
+ "metadata": {},
+ "source": [
+ "## The Tools\n",
+ "\n",
+ "Let's initialize our three tools and see them in action."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "init-tools",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Search Tool: parallel_web_search\n",
+ "Extract Tool: parallel_extract\n",
+ "Chat Tool: ChatParallelWeb\n"
+ ]
+ }
+ ],
+ "source": [
+ "from langchain_parallel import ParallelWebSearchTool, ParallelExtractTool, ChatParallelWeb\n",
+ "\n",
+ "# Initialize the tools\n",
+ "search_tool = ParallelWebSearchTool()\n",
+ "extract_tool = ParallelExtractTool()\n",
+ "chat = ChatParallelWeb(model=\"speed\")\n",
+ "\n",
+ "print(f\"Search Tool: {search_tool.name}\")\n",
+ "print(f\"Extract Tool: {extract_tool.name}\")\n",
+ "print(f\"Chat Tool: {chat.get_name()}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "search-demo-header",
+ "metadata": {},
+ "source": [
+ "### 1. ParallelWebSearchTool\n",
+ "\n",
+ "Search the web using natural language. Perfect for finding current information, news, and research."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "search-demo",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found 5 results:\n",
+ "\n",
+ "1. The Latest AI News and AI Breakthroughs that Matter Most\n",
+ " https://www.crescendo.ai/news/latest-ai-news-and-updates\n",
+ "\n",
+ "2. Autonomous AI Agents: 7 Breakthroughs Reshaping Our Future\n",
+ " https://www.cognitivetoday.com/2025/12/autonomous-ai-agents-breakthroughs/\n",
+ "\n",
+ "3. autonomous systems News & Articles - IEEE Spectrum\n",
+ " https://spectrum.ieee.org/tag/autonomous-systems\n",
+ "\n",
+ "4. AI Agents Lead The 8 Tech Trends Transforming ...\n",
+ " https://www.forbes.com/sites/bernardmarr/2025/12/01/ai-agents-lead-the-8-tech-trends-transforming-enterprise-in-2026/\n",
+ "\n",
+ "5. AI Agents in 2025: Expectations vs. Reality\n",
+ " https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Search using a natural language objective\n",
+ "search_result = search_tool.invoke({\n",
+ " \"objective\": \"What are the latest breakthroughs in AI agents and autonomous systems?\",\n",
+ " \"max_results\": 5\n",
+ "})\n",
+ "\n",
+ "print(f\"Found {len(search_result['results'])} results:\\n\")\n",
+ "for i, result in enumerate(search_result['results'], 1):\n",
+ " print(f\"{i}. {result['title']}\")\n",
+ " print(f\" {result['url']}\\n\")"
+ ]
+ },
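+ {
+ "cell_type": "markdown",
+ "id": "search-async-note",
+ "metadata": {},
+ "source": [
+ "Because LangChain tools implement the standard `Runnable` interface, the search tool should also work asynchronously via `ainvoke`. A minimal sketch, assuming that interface (the query below is just an illustrative example; Jupyter allows top-level `await`):\n",
+ "\n",
+ "```python\n",
+ "# Minimal async sketch -- assumes the standard Runnable `ainvoke` method.\n",
+ "async_result = await search_tool.ainvoke({\n",
+ "    \"objective\": \"Recent funding announcements for AI agent startups\",\n",
+ "    \"max_results\": 3\n",
+ "})\n",
+ "\n",
+ "for r in async_result[\"results\"]:\n",
+ "    print(r[\"title\"], \"-\", r[\"url\"])\n",
+ "```"
+ ]
+ },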
+ {
+ "cell_type": "markdown",
+ "id": "extract-demo-header",
+ "metadata": {},
+ "source": [
+ "### 2. ParallelExtractTool\n",
+ "\n",
+ "Extract clean, structured content from any URL. Great for reading articles, documentation, or research papers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "extract-demo",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Title: Large language model - Wikipedia\n",
+ "Content length: 196689 characters\n",
+ "\n",
+ "Preview:\n",
+ "asks, especially [language generation](/wiki/Natural_language_generation \"Natural language generation\") . [[ 1 ]]() [[ 2 ]]() The largest and most capable LLMs are [generative](/wiki/Generative_artificial_intelligence \"Generative artificial intelligence\") pre-trained [transformers](/wiki/Transformer_architecture \"Transformer architecture\") ( [GPTs](/wiki/Generative_pre-trained_transformer \"Generative pre-trained transformer\") ) and provide the core capabilities of modern [chatbots](/wiki/Chatbot \"Chatbot\") . LLMs can be [fine-tuned](/wiki/Fine-tuning_\\(deep_learning\\) \"Fine-tuning (deep learning)\") for specific tasks or guided by [prompt engineering](/wiki/Prompt_engineering \"Prompt engineering\") . [[ 3 ]]() These models acquire [predictive power](/wiki/Predictive_learning \"Predictive learning\") regarding [syntax](/wiki/Syntax \"Syntax\") , [semantics](/wiki/Semantics \"Semantics\") , and [ontologies](/wiki/Ontology_\\(information_science\\) \"Ontology (information science)\") [[ 4 ]]() inherent in human [language corpora](/wiki/Text_corpus \"Text corpus\") , but they also inherit inaccuracies and [biases](/wiki/Algorithmic_bias \"Algorithmic bias\") present in the [data](/wiki/Training,_validation,_and_test_data_sets \"Training, validation, and test data sets\") they are trained on. [[ 5 ]]()\n",
+ "\n",
+ "They consist of billions to trillions of [parameters](/wiki/Parameter \"Parameter\") and operate as general-purpose sequence models, generating, ...\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Extract content from a URL\n",
+ "extract_result = extract_tool.invoke({\n",
+ " \"urls\": [\"https://en.wikipedia.org/wiki/Large_language_model\"]\n",
+ "})\n",
+ "\n",
+ "print(f\"Title: {extract_result[0]['title']}\")\n",
+ "print(f\"Content length: {len(extract_result[0]['content'])} characters\")\n",
+ "print(f\"\\nPreview:\\n{extract_result[0]['content'][7000:8500]}...\")"
+ ]
+ },
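+ {
+ "cell_type": "markdown",
+ "id": "extract-batch-note",
+ "metadata": {},
+ "source": [
+ "Since the `urls` field is a list, several pages can be fetched in a single call. A quick sketch using the same result fields as above (the URLs are arbitrary examples):\n",
+ "\n",
+ "```python\n",
+ "# Sketch: batch extraction over multiple URLs in one invoke call.\n",
+ "urls = [\n",
+ "    \"https://en.wikipedia.org/wiki/LangChain\",\n",
+ "    \"https://en.wikipedia.org/wiki/Retrieval-augmented_generation\",\n",
+ "]\n",
+ "pages = extract_tool.invoke({\"urls\": urls})\n",
+ "\n",
+ "# Assuming results come back in the same order as the input URLs.\n",
+ "for url, page in zip(urls, pages):\n",
+ "    print(f\"{page['title']} ({len(page['content'])} chars) <- {url}\")\n",
+ "```"
+ ]
+ },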
+ {
+ "cell_type": "markdown",
+ "id": "chat-demo-header",
+ "metadata": {},
+ "source": [
+ "### 3. ChatParallelWeb\n",
+ "\n",
+ "A chat model with built-in web research capabilities. It can access real-time information directly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "id": "chat-demo",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here are some of the top AI research news stories from today, December 1, 2025, based on the provided search results:\n",
+ "\n",
+ "**AI Model and Technology Advancements:**\n",
+ "\n",
+ "* **Runway's Gen 4.5 AI Video Model:** Runway announced Gen 4.5, a new AI model for generating high-definition videos from written prompts. It is reported to outperform models from Google and OpenAI in an independent benchmark, specifically in understanding physics, human motion, camera movements, and cause and effect.\n",
+ "* **TwelveLabs' Marengo 3.0:** TwelveLabs launched Marengo 3.0, a video foundation model available on Amazon Bedrock, that understands video in a human-like manner by connecting dialogue, gestures, movement, and emotion across time.\n",
+ "* **MIT Research:** MIT researchers have discovered a shortcoming that makes large language models less reliable. They also debuted a generative AI model, BoltzGen, that can create molecules for hard-to-treat diseases.\n",
+ "* **Purdue University's RAPTOR:** Purdue University researchers introduced RAPTOR, an AI-powered defect-detection system for semiconductor chips, combining high-resolution X-ray imaging and machine learning.\n",
+ "\n",
+ "**AI Applications and Impact:**\n",
+ "\n",
+ "* **S&P Global and AWS Collaboration:** S&P Global is collaborating with AWS to integrate its data directly into customer AI workflows, enabling AI agents to answer complex market, financial, and energy-related questions.\n",
+ "* **Lyft and AWS Partnership:** Lyft is partnering with AWS and Anthropic to bring agentic AI to drivers, aiming to improve customer service and reduce resolution times.\n",
+ "* **AI in Healthcare:** There's increasing use of AI in healthcare, with a focus on drug discovery, medical imaging, and personalized medicine. HHS is also doubling childhood cancer research funding to accelerate AI projects.\n",
+ "* **AI for Climate Change:** UC San Diego researchers are using AI to accelerate climate forecasts and help fight wildfires.\n",
+ "\n",
+ "**AI Ethics and Governance:**\n",
+ "\n",
+ "* **AI Safety Concerns:** A lawsuit in Colorado is examining the potential role of an AI chatbot in a teenager's suicide, raising concerns about the safety and ethical implications of AI.\n",
+ "* **Hollywood Backlash to AI Actors:** The creation of an AI-generated \"actress\" named Tilly Norwood has sparked outrage in Hollywood, raising concerns about the future of human creativity and labor.\n",
+ "* **Stanford AI Index Report:** The 2025 AI Index Report highlights trends in AI development, ethical considerations, public opinion, and the increasing role of governments in AI regulation and investment.\n",
+ "\n",
+ "**Other Notable News:**\n",
+ "\n",
+ "* **AI in Education:** College students are increasingly choosing AI majors over computer science, and there's growing discussion around integrating AI into K-12 education.\n",
+ "* **AI Adoption Divide:** A Microsoft report indicates that while many people have used AI tools, billions still lack the infrastructure to access the technology.\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "response = chat.invoke([\n",
+ " (\"system\", \"You are a helpful assistant with access to real-time web information.\"),\n",
+ " (\"human\", \"What are the top AI research news stories today?\")\n",
+ "])\n",
+ "\n",
+ "print(response.content)"
+ ]
+ },
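+ {
+ "cell_type": "markdown",
+ "id": "chat-messages-note",
+ "metadata": {},
+ "source": [
+ "The tuple shorthand above is interchangeable with explicit message objects from `langchain_core`, assuming `ChatParallelWeb` follows the standard chat model interface. A minimal sketch (the question is just an example):\n",
+ "\n",
+ "```python\n",
+ "from langchain_core.messages import HumanMessage, SystemMessage\n",
+ "\n",
+ "# Same call as above, written with explicit message objects.\n",
+ "messages = [\n",
+ "    SystemMessage(content=\"You are a helpful assistant with access to real-time web information.\"),\n",
+ "    HumanMessage(content=\"Summarize this week's most notable open-source model releases.\"),\n",
+ "]\n",
+ "response = chat.invoke(messages)\n",
+ "print(response.content)\n",
+ "```"
+ ]
+ },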
+ {
+ "cell_type": "markdown",
+ "id": "agent-header",
+ "metadata": {},
+ "source": [
+ "## Building an Agent with LangChain 1.x\n",
+ "\n",
+ "Now let's combine these tools into a powerful agent using LangChain's `create_agent`. The agent can autonomously decide when to search the web, when to extract content from URLs, and how to synthesize information to answer complex questions.\n",
+ "\n",
+ "This uses the new LangChain 1.x API which provides:\n",
+ "- Simple model specification via string (e.g., `\"anthropic:claude-opus-4-5-20251101\"`)\n",
+ "- `system_prompt` for guiding agent behavior\n",
+ "- Built on LangGraph's durable runtime under the hood"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "id": "agent-setup",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain.agents import create_agent\n",
+ "\n",
+ "# Create the agent with our tools (using Claude Opus 4.5)\n",
+ "tools = [search_tool, extract_tool]\n",
+ "\n",
+ "agent = create_agent(\n",
+ " \"anthropic:claude-opus-4-5-20251101\",\n",
+ " tools,\n",
+ " system_prompt=\"\"\"You are a helpful research assistant with access to web search and content extraction tools.\n",
+ "\n",
+ "When answering questions:\n",
+ "1. Use parallel_web_search to find relevant, current information\n",
+ "2. Use parallel_extract to get detailed content from specific URLs when needed\n",
+ "3. Synthesize the information into a clear, comprehensive answer\n",
+ "4. Always cite your sources with URLs\n",
+ "\n",
+ "Be thorough but concise in your responses.\"\"\"\n",
+ ")"
+ ]
+ },
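+ {
+ "cell_type": "markdown",
+ "id": "agent-invoke-note",
+ "metadata": {},
+ "source": [
+ "Before adding streaming output, the agent can be sanity-checked with a plain `invoke` call. A minimal sketch, assuming the same `messages` state shape that the `stream` call below uses (the question is just an example):\n",
+ "\n",
+ "```python\n",
+ "# Single, non-streaming call: create_agent returns a LangGraph graph,\n",
+ "# so invoke takes and returns the agent's messages state.\n",
+ "state = agent.invoke({\"messages\": [(\"human\", \"In one sentence, what is Parallel's Search API?\")]})\n",
+ "print(state[\"messages\"][-1].content)\n",
+ "```"
+ ]
+ },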
+ {
+ "cell_type": "markdown",
+ "id": "agent-demo-header",
+ "metadata": {},
+ "source": [
+ "### Agent in Action\n",
+ "\n",
+ "Let's ask our agent some questions that require real-time web research!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "id": "agent-demo-1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import textwrap\n",
+ "\n",
+ "# Helper function to run the agent with streaming output\n",
+ "def ask_agent(question: str, width: int = 140):\n",
+ " print(f\"Question: {question}\\n\")\n",
+ " print(\"-\" * width)\n",
+ " \n",
+ " for event in agent.stream({\"messages\": [(\"human\", question)]}, stream_mode=\"values\"):\n",
+ " message = event[\"messages\"][-1]\n",
+ " if hasattr(message, 'content') and message.content:\n",
+ " if hasattr(message, 'tool_calls') and message.tool_calls:\n",
+ " for tool_call in message.tool_calls:\n",
+ " print(f\"π§ Using tool: {tool_call['name']}\")\n",
+ " elif message.type == \"ai\":\n",
+ " wrapped = textwrap.fill(message.content, width=width)\n",
+ " print(f\"\\n㪠Answer:\\n{wrapped}\")\n",
+ " \n",
+ " return event"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "id": "agent-demo-2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Question: What are the most significant AI developments this week?\n",
+ "\n",
+ "--------------------------------------------------------------------------------------------------------------------------------------------\n",
+ "π§ Using tool: parallel_web_search\n",
+ "π§ Using tool: parallel_web_search\n",
+ "\n",
+ "π¬ Answer:\n",
+ "Based on my research, here are the **most significant AI developments this week** (late November/early December 2025): ## π¬ Runway Launches\n",
+ "Gen 4.5 - Leading AI Video Model **Runway** announced **Gen 4.5**, their new AI video generation model that now **ranks #1 on the Video\n",
+ "Arena leaderboard**, surpassing Google's Veo 3 and OpenAI's Sora 2 Pro. The model excels at understanding physics, human motion, and camera\n",
+ "movements. CEO CristΓ³bal Valenzuela noted they \"managed to out-compete trillion-dollar companies with a team of 100 people.\" - Source:\n",
+ "[CNBC](https://www.cnbc.com/2025/12/01/runway-gen-4-5-video-model-google-open-ai.html) ## βοΈ AWS re:Invent 2025 Major Announcements\n",
+ "**Amazon Web Services** is unveiling significant AI innovations at their flagship event in Las Vegas: - **TwelveLabs Marengo 3.0** - A\n",
+ "breakthrough video understanding model on Amazon Bedrock that processes video as a \"complete, dynamic system\" with 50% storage cost\n",
+ "reduction - **Amazon Connect Agentic AI** - New agentic self-service capabilities powered by Nova Sonic for natural voice interactions -\n",
+ "**Visa + AWS Partnership** - Enabling AI agents to transact securely and autonomously on behalf of users - Source: [About\n",
+ "Amazon](https://www.aboutamazon.com/news/aws/aws-re-invent-2025-ai-news-updates) ## π€ NVIDIA Releases Open-Source AI Models at NeurIPS\n",
+ "**NVIDIA** announced new open AI tools for speech, safety, and autonomous driving at NeurIPS, including: - **NVIDIA DRIVE Alpamayo-R1** -\n",
+ "Described as \"the world's first open industry-scale reasoning vision\" model for autonomous driving - Source: [NVIDIA\n",
+ "Blog](https://blogs.nvidia.com/blog/neurips-open-source-digital-physical-ai/) ## π¦ HSBC Partners with Mistral AI **HSBC** announced a\n",
+ "strategic multi-year partnership with **Mistral AI** to accelerate generative AI adoption across their global banking operations, including:\n",
+ "- AI-powered productivity tools for employees - Enhanced financial analysis capabilities - Multilingual translation and reasoning services -\n",
+ "Source: [HSBC](https://www.hsbc.com/news-and-views/news/media-releases/2025/hsbc-and-mistral-ai-join-forces-to-accelerate-ai-adoption-\n",
+ "across-global-bank) ## π Fujitsu's Multi-AI Agent Collaboration Technology **Fujitsu** developed technology enabling secure collaboration\n",
+ "among AI agents from different companies within supply chains. Field trials with Rohto Pharmaceutical will begin in January 2026. - Source:\n",
+ "[Fujitsu](https://global.fujitsu/en-global/pr/news/2025/12/01-02) ## π Key Industry Trends (McKinsey State of AI 2025) According to\n",
+ "McKinsey's latest survey: - **62% of organizations** are experimenting with AI agents - Nearly two-thirds have **not yet begun scaling AI**\n",
+ "across the enterprise - AI agent use is highest in **IT, knowledge management, tech/media/telecom, and healthcare** - Source:\n",
+ "[McKinsey](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai) ## π Top AI Models Right Now (December 2025)\n",
+ "The current leading models include: 1. **Claude Opus 4.5** (Anthropic) - Released Nov 24, best for coding and agents 2. **GPT-5.1** (OpenAI)\n",
+ "- Released Nov 12, improved conversational abilities 3. **Gemini 3 Pro** (Google) - Leading in multimodal excellence and reasoning 4. **Grok\n",
+ "4** (xAI) - Strong multimodal capabilities - Source: [The Prompt Buddy](https://www.thepromptbuddy.com/prompts/best-ai-models-\n",
+ "december-2025-top-language-models-you-can-use-today) The overarching theme this week is the rise of **agentic AI** - AI systems that can\n",
+ "autonomously take actions and complete complex tasks across workflows, with major announcements from AWS, Visa, Fujitsu, and others focusing\n",
+ "on this capability.\n"
+ ]
+ }
+ ],
+ "source": [
+ "result = ask_agent(\"What are the most significant AI developments this week?\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "id": "9c57c797",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Question: Give me a one sentence summary of https://docs.langchain.com/oss/python/integrations/tools/parallel_search\n",
+ "\n",
+ "--------------------------------------------------------------------------------------------------------------------------------------------\n",
+ "π§ Using tool: parallel_extract\n",
+ "\n",
+ "π¬ Answer:\n",
+ "The `ParallelWebSearchTool` is a LangChain integration that provides access to Parallel's real-time web search API, combining search,\n",
+ "scraping, and content extraction into a single API call that returns structured, LLM-optimized results with features like domain filtering,\n",
+ "customizable excerpts, and async support.\n"
+ ]
+ }
+ ],
+ "source": [
+ "result = ask_agent(\"Give me a one sentence summary of https://docs.langchain.com/oss/python/integrations/tools/parallel_search\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "langchain",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}