Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More
Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces, all through one unified library.
- Key Features
- Installation
- Command Line Interface
- OpenAI-Compatible API Server
- Scout: HTML Parser & Web Crawler
- AI Models and Voices
- AI Chat Providers
- Advanced AI Interfaces
- Contributing
- Acknowledgments
Important
Webscout supports three types of compatibility:
- Native Compatibility: Webscout's own native API for maximum flexibility
- OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
- Local LLM Compatibility: Run local models with OpenAI-compatible servers
Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
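As a sketch of the OpenAI-compatible route, the snippet below builds a standard chat-completions request aimed at a locally running `webscout-server`. The host, port, and model name are assumptions (consult the OpenAI-Compatible API Server documentation for the actual defaults); the request is constructed but not sent here:

```python
import json
from urllib import request

# Assumed local endpoint for webscout-server's OpenAI-compatible API;
# check the API server docs for the real host, port, and routes.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> request.Request:
    """Build an OpenAI-style chat completion request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("What is the capital of France?")
print(req.full_url)
```

Because the wire format is the standard OpenAI one, the official `openai` client pointed at the same `base_url` works just as well.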
Note
Webscout supports over 90 AI providers including: LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.
Search & AI
- Comprehensive Search: Access multiple search engines including DuckDuckGo, Yep, Bing, Brave, Yahoo, Yandex, Mojeek, and Wikipedia for diverse search results (Search Documentation)
- AI Powerhouse: Access and interact with various AI models through three compatibility options:
- Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
- OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
- Local LLMs: Run local models with OpenAI-compatible servers (see Inferno documentation)
- AI Search: AI-powered search engines with advanced capabilities
Media & Content Tools
- YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
- Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
- Text-to-Image: Generate high-quality images using a wide range of AI art providers
- Weather Tools: Retrieve detailed weather information for any location
Developer Tools
- GitAPI: Powerful GitHub data extraction toolkit without authentication requirements for public data
- SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
- LitPrinter: Styled console output with rich formatting and colors
- LitLogger: Simplified logging with customizable formats and color schemes
- LitAgent: Modern user agent generator that keeps your requests undetectable
- Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
- GGUF Conversion: Convert and quantize Hugging Face models to GGUF format
- Utility Decorators: Easily measure function execution time (`timeIt`) and add retry logic (`retry`) to any function
- Stream Sanitization Utilities: Advanced tools for cleaning, decoding, and processing data streams
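To illustrate what timing and retry decorators of this kind do, here is a minimal conceptual sketch (not Webscout's actual implementation; names and signatures are illustrative):

```python
import time
from functools import wraps

def time_it(func):
    """Print how long each call took (conceptual sketch of a timing decorator)."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def retry(times=3, delay=0.1):
    """Re-invoke a failing call up to `times` times before giving up."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_error = e
                    time.sleep(delay)
            raise last_error
        return wrapper
    return decorator

@time_it
@retry(times=3)
def flaky(counter={"n": 0}):
    # Fails on the first call, succeeds afterwards, to exercise the retry.
    counter["n"] += 1
    if counter["n"] < 2:
        raise ValueError("transient failure")
    return "ok"

print(flaky())
```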
Privacy & Utilities
- Tempmail & Temp Number: Generate temporary email addresses and phone numbers
- Awesome Prompts: Curated collection of system prompts for specialized AI personas
Webscout supports multiple installation methods to fit your workflow:
```bash
# Install from PyPI
pip install -U webscout

# Install with API server dependencies
pip install -U "webscout[api]"

# Install with development dependencies
pip install -U "webscout[dev]"
```
UV is a fast Python package manager. Webscout has full UV support:
```bash
# Install UV first (if not already installed)
pip install uv

# Install Webscout with UV
uv add webscout

# Install with API dependencies
uv add "webscout[api]"

# Run Webscout directly with UV (no installation needed)
uv run webscout --help

# Run with API dependencies
uv run --extra api webscout-server

# Install as a UV tool for global access
uv tool install webscout

# Use UV tool commands
webscout --help
webscout-server
```
```bash
# Clone the repository
git clone https://github.com/pyscout/Webscout.git
cd Webscout

# Install in development mode with UV
uv sync --extra dev --extra api

# Or with pip
pip install -e ".[dev,api]"

# Or with uv pip
uv pip install -e ".[dev,api]"
```

```bash
# Pull and run the Docker image
docker pull pyscout/webscout:latest
docker run -it pyscout/webscout:latest
```
After installation, you can immediately start using Webscout:
```bash
# Check version
webscout version

# Search the web
webscout text -k "python programming"

# Start API server
webscout-server

# Get help
webscout --help
```
Webscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:
After installing with `uv tool install webscout` or `pip install webscout`:
```bash
# Get help
webscout --help

# Start API server
webscout-server

# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run --extra api webscout-server

# Traditional Python module execution
python -m webscout --help
python -m webscout-server
```
Web Search Commands
Webscout provides comprehensive CLI commands for all search engines. See the Search Documentation for detailed command reference.
For detailed information about the OpenAI-compatible API server, including setup, configuration, and usage examples, see the OpenAI API Server Documentation.
Webscout provides easy access to a wide range of AI models and voice options.
LLM Models
Access and manage Large Language Models with Webscout's model utilities.
```python
from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")
```
TTS Voices
Access and manage Text-to-Speech voices across multiple providers.
```python
from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")
```
Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.
| Provider | Description | Key Features |
|---|---|---|
| OPENAI | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| GEMINI | Google's Gemini models | Web search capabilities |
| Meta | Meta's AI assistant | Image generation, web search |
| GROQ | Fast inference platform | High-speed inference, tool calling |
| LLAMA | Meta's Llama models | Open weights models |
| DeepInfra | Various open models | Multiple model options |
| Cohere | Cohere's language models | Command models |
| PerplexityLabs | Perplexity AI | Web search integration |
| YEPCHAT | Yep.com's AI | Streaming responses |
| ChatGPTClone | ChatGPT-like interface | Multiple model options |
| TypeGPT | TypeChat models | Multiple model options |
Example: Using Meta AI
```python
from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])
```
Example: GROQ with Tool Calling
```python
from webscout import GROQ, DuckDuckGoSearch
import json

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        # Note: eval runs arbitrary code; only use it on trusted input
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        ddg = DuckDuckGoSearch()
        results = ddg.text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)
```
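The dispatch step that connects a model-emitted tool call back to the registered functions can be sketched independently of any provider. The tool-call dict below follows the standard OpenAI tool-call shape; whether GROQ returns exactly this structure is an assumption, and `calculate` mirrors the helper above:

```python
import json

def calculate(expression):
    # Trusted-input-only evaluation, as in the example above
    return json.dumps({"result": eval(expression)})

# Registry mapping tool names to local callables
FUNCTIONS = {"calculate": calculate}

def dispatch(tool_call):
    """Look up the named function and invoke it with the JSON-encoded arguments."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return FUNCTIONS[name](**args)

result = dispatch({
    "function": {
        "name": "calculate",
        "arguments": json.dumps({"expression": "25 * 4 + 10"}),
    }
})
print(result)  # {"result": 110}
```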
GGUF Model Conversion
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.
```python
from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"  # Quantization method
)

# Run the conversion
converter.convert()
```
| Method | Description |
|---|---|
| fp16 | 16-bit floating point - maximum accuracy, largest size |
| q2_k | 2-bit quantization - smallest size, lowest accuracy |
| q3_k_l | 3-bit quantization (large) - balanced for size/accuracy |
| q3_k_m | 3-bit quantization (medium) - good balance for most use cases |
| q3_k_s | 3-bit quantization (small) - optimized for speed |
| q4_0 | 4-bit quantization (version 0) - standard 4-bit compression |
| q4_1 | 4-bit quantization (version 1) - improved accuracy over q4_0 |
| q4_k_m | 4-bit quantization (medium) - balanced for most models |
| q4_k_s | 4-bit quantization (small) - optimized for speed |
| q5_0 | 5-bit quantization (version 0) - high accuracy, larger size |
| q5_1 | 5-bit quantization (version 1) - improved accuracy over q5_0 |
| q5_k_m | 5-bit quantization (medium) - best balance for quality/size |
| q5_k_s | 5-bit quantization (small) - optimized for speed |
| q6_k | 6-bit quantization - very high accuracy, larger size |
| q8_0 | 8-bit quantization - near-original accuracy, largest quantized size |
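The nominal bit width in each method's name gives a back-of-envelope size estimate: parameters × bits ÷ 8 bytes. Real GGUF files run somewhat larger because k-quant formats store per-block scale metadata, so treat these as rough lower bounds:

```python
def approx_size_gb(n_params: float, bits: float) -> float:
    """Rough model size in GB: parameters * bits-per-weight / 8 bytes."""
    return n_params * bits / 8 / 1e9

# Estimates for a 7B-parameter model (nominal bits only)
for name, bits in [("fp16", 16), ("q8_0", 8), ("q4_k_m", 4), ("q2_k", 2)]:
    print(f"{name}: ~{approx_size_gb(7e9, bits):.1f} GB for a 7B model")
```

So a 7B model drops from roughly 14 GB at fp16 to roughly 3.5 GB at 4-bit quantization, which is the main reason q4_k_m is a common default.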
```bash
python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"
```
Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes and commit them with descriptive messages
- Push your branch to your forked repository
- Submit a pull request to the main repository
- All the amazing developers who have contributed to the project
- The open-source community for their support and inspiration
Made with ❤️ by the Webscout team