Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More
Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces, all through one unified library.
> [!IMPORTANT]
> Webscout supports three types of compatibility:
>
> - Native Compatibility: Webscout's own native API for maximum flexibility
> - OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
> - Local LLM Compatibility: Run local models with Inferno, an OpenAI-compatible server
>
> Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README.
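For the OpenAI-compatible routes, requests follow the standard chat-completions body shape. The sketch below only builds that JSON body; the model name is a placeholder, not a value defined by this README:

```python
import json

# Hypothetical request body in the standard OpenAI chat-completions shape.
# "local-model" is a placeholder, not a real model name.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."}
    ],
    "stream": False,
}

# Any OpenAI-compatible server accepts a body like this on its
# chat-completions endpoint.
body = json.dumps(payload)
print(body)
```

The same body works unchanged whether it is sent to a hosted provider or to a local Inferno server, which is what makes the OpenAI-compatible route convenient.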
> [!NOTE]
> Webscout supports over 90 AI providers, including LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.
- Comprehensive Search: Leverage Google, DuckDuckGo, and Yep for diverse search results
- AI Powerhouse: Access and interact with various AI models through three compatibility options:
  - Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
  - OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
  - Local LLMs with Inferno: Run local models with an OpenAI-compatible server
- AI Search: AI-powered search engines with advanced capabilities
- YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
- Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
- Text-to-Image: Generate high-quality images using a wide range of AI art providers
- Weather Tools: Retrieve detailed weather information for any location
- GitAPI: Powerful GitHub data extraction toolkit without authentication requirements for public data
- SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
- LitPrinter: Styled console output with rich formatting and colors
- LitLogger: Simplified logging with customizable formats and color schemes
- LitAgent: Modern user agent generator that keeps your requests undetectable
- Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
- Inferno: Run local LLMs with an OpenAI-compatible API and interactive CLI
- GGUF Conversion: Convert and quantize Hugging Face models to GGUF format
- Tempmail & Temp Number: Generate temporary email addresses and phone numbers
- Awesome Prompts: Curated collection of system prompts for specialized AI personas
Install Webscout using pip:
```shell
pip install -U webscout
```
Webscout provides a powerful command-line interface for quick access to its features:
```shell
python -m webscout --help
```
Command | Description |
---|---|
`python -m webscout answers -k "query"` | Perform an answers search |
`python -m webscout chat` | Start an interactive AI chat session |
`python -m webscout images -k "query"` | Search for images |
`python -m webscout maps -k "query"` | Perform a maps search |
`python -m webscout news -k "query"` | Search for news articles |
`python -m webscout suggestions -k "query"` | Get search suggestions |
`python -m webscout text -k "query"` | Perform a text search |
`python -m webscout translate -k "text"` | Translate text |
`python -m webscout version` | Display the current version |
`python -m webscout videos -k "query"` | Search for videos |
`python -m webscout weather -l "location"` | Get weather information |
Inferno provides commands for managing and using local LLMs:
```shell
python -m inferno --help
```
Command | Description |
---|---|
`python -m inferno pull <model>` | Download a model from Hugging Face |
`python -m inferno list` | List downloaded models |
`python -m inferno serve <model>` | Start a model server with an OpenAI-compatible API |
`python -m inferno run <model>` | Chat with a model interactively |
`python -m inferno remove <model>` | Remove a downloaded model |
`python -m inferno version` | Show version information |
> [!NOTE]
> Hardware requirements for running models:
>
> - Around 2 GB of RAM for 1B models
> - Around 4 GB of RAM for 3B models
> - At least 8 GB of RAM for 7B models
> - 16 GB of RAM for 13B models
> - 32 GB of RAM for 33B models
> - GPU acceleration is recommended for better performance
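As a rule of thumb, the RAM tiers in the note above can be encoded in a small helper. The thresholds come directly from the note; the fallback for models larger than 33B is a guess, not a documented requirement:

```python
def recommended_ram_gb(params_billions: float) -> int:
    """Return a rough RAM recommendation (GB) for a model of the given
    size, using the tiers from the hardware note above."""
    tiers = [(1, 2), (3, 4), (7, 8), (13, 16), (33, 32)]
    for max_params, ram_gb in tiers:
        if params_billions <= max_params:
            return ram_gb
    return 64  # beyond 33B: an assumption, not from the note

print(recommended_ram_gb(7))   # smallest tier that fits a 7B model
```

These are floor estimates for CPU inference; quantized models (see the GGUF section below) need proportionally less memory.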
Webscout provides multiple search engine interfaces for diverse search capabilities.
```python
from webscout import YepSearch

# Initialize YepSearch
yep = YepSearch(
    timeout=20,    # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True    # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)

# Get search suggestions
suggestions = yep.suggestions("hist")
```
```python
from webscout import GoogleSearch

# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,    # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True    # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",            # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)

# Get search suggestions
suggestions = google.suggestions("how to")

# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)
```
Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the `WEBS` and `AsyncWEBS` classes.
```python
from webscout import WEBS

# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")
```
```python
import asyncio
from webscout import AsyncWEBS

async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]

        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)

    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())
```
> [!NOTE]
> Always use these classes with a context manager (`with` statement) to ensure proper resource management and cleanup.
The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.
Method | Description | Example |
---|---|---|
`text()` | General web search | `webs.text('python programming')` |
`answers()` | Instant answers | `webs.answers('population of france')` |
`images()` | Image search | `webs.images('nature photography')` |
`videos()` | Video search | `webs.videos('documentary')` |
`news()` | News articles | `webs.news('technology')` |
`maps()` | Location search | `webs.maps('restaurants', place='new york')` |
`translate()` | Text translation | `webs.translate('hello', to='es')` |
`suggestions()` | Search suggestions | `webs.suggestions('how to')` |
`weather()` | Weather information | `webs.weather('london')` |
```python
from webscout import WEBS

with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',    # Optional: Region for results
        safesearch='off',  # Optional: 'on', 'moderate', 'off'
        timelimit='y',     # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10     # Optional: Limit number of results
    )

    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")
```
```python
import datetime
from webscout import WEBS

def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )

        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')

            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"
            formatted_news.append(entry)

        return formatted_news

# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))
```
```python
from webscout import WEBS

with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")

    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")
```
Webscout provides easy access to a wide range of AI models and voice options.
Access and manage Large Language Models with Webscout's model utilities.
```python
from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")
```
Access and manage Text-to-Speech voices across multiple providers.
```python
from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")
```
Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.
Provider | Description | Key Features |
---|---|---|
`OPENAI` | OpenAI's models | GPT-3.5, GPT-4, tool calling |
`GEMINI` | Google's Gemini models | Web search capabilities |
`Meta` | Meta's AI assistant | Image generation, web search |
`GROQ` | Fast inference platform | High-speed inference, tool calling |
`LLAMA` | Meta's Llama models | Open weights models |
`DeepInfra` | Various open models | Multiple model options |
`Cohere` | Cohere's language models | Command models |
`PerplexityLabs` | Perplexity AI | Web search integration |
`Anthropic` | Claude models | Long context windows |
`YEPCHAT` | Yep.com's AI | Streaming responses |
`ChatGPTClone` | ChatGPT-like interface | Multiple model options |
`TypeGPT` | TypeChat models | Code generation focus |
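The "consistent interface" these providers share can be sketched as a structural type. The `chat` method name below is taken from the Duckchat and Meta examples in this README; treat it as an assumption rather than a guaranteed contract, since individual providers add extra parameters (streaming, tools, model selection):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ChatProvider(Protocol):
    """Structural sketch of the interface the providers appear to share.

    The `chat` name is assumed from the examples in this README;
    it is not an official webscout base class.
    """
    def chat(self, prompt: str) -> str: ...

# A dummy provider used only to illustrate the shape; not part of webscout.
class EchoProvider:
    def chat(self, prompt: str) -> str:
        return f"echo: {prompt}"

provider: ChatProvider = EchoProvider()
print(provider.chat("hello"))
```

Because the check is structural, swapping one provider for another is usually a one-line change in application code.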
```python
from webscout import WEBS

# Initialize and use Duckchat
with WEBS() as webs:
    response = webs.chat(
        "Explain quantum computing in simple terms",
        model='gpt-4o-mini'  # Options: mixtral-8x7b, llama-3.1-70b, claude-3-haiku, etc.
    )
    print(response)
```
```python
from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="[email protected]", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])
```
```python
import json
from webscout import GROQ, WEBS

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        # Caution: eval() is unsafe on untrusted input; restrict or sandbox in real use
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        results = WEBS().text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)
```
Webscout provides direct interfaces to language and vision-language models through the `LLM` and `VLM` classes.
```python
from webscout.LLM import LLM, VLM

# Text-only model interaction
llm = LLM("meta-llama/Meta-Llama-3-70B-Instruct")
response = llm.chat([
    {"role": "user", "content": "Explain the concept of neural networks"}
])
print(response)

# Vision-language model interaction
vlm = VLM("cogvlm-grounding-generalist")
response = vlm.chat([
    {
        "role": "user",
        "content": [
            {"type": "image", "image_url": "path/to/image.jpg"},
            {"type": "text", "text": "Describe what you see in this image"}
        ]
    }
])
print(response)
```
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.
```python
from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                   # Quantization method
)

# Run the conversion
converter.convert()
```
Method | Description |
---|---|
`fp16` | 16-bit floating point - maximum accuracy, largest size |
`q2_k` | 2-bit quantization - smallest size, lowest accuracy |
`q3_k_l` | 3-bit quantization (large) - balanced for size/accuracy |
`q3_k_m` | 3-bit quantization (medium) - good balance for most use cases |
`q3_k_s` | 3-bit quantization (small) - optimized for speed |
`q4_0` | 4-bit quantization (version 0) - standard 4-bit compression |
`q4_1` | 4-bit quantization (version 1) - improved accuracy over q4_0 |
`q4_k_m` | 4-bit quantization (medium) - balanced for most models |
`q4_k_s` | 4-bit quantization (small) - optimized for speed |
`q5_0` | 5-bit quantization (version 0) - high accuracy, larger size |
`q5_1` | 5-bit quantization (version 1) - improved accuracy over q5_0 |
`q5_k_m` | 5-bit quantization (medium) - best balance of quality and size |
`q5_k_s` | 5-bit quantization (small) - optimized for speed |
`q6_k` | 6-bit quantization - very high accuracy, larger size |
`q8_0` | 8-bit quantization - near-lossless accuracy, largest quantized size |
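When choosing a method, a back-of-the-envelope estimate helps: a quantized model file is roughly parameter count times bits per weight, divided by 8. The helper below uses only that approximation; real GGUF files run somewhat larger because k-quant methods store per-block scale data alongside the nominal bits:

```python
def estimate_gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: params * bits / 8, ignoring
    metadata and per-block scale overhead (an underestimate)."""
    return params_billions * bits_per_weight / 8

# A 7B model at a nominal 4 bits per weight (roughly the q4_* family)
print(estimate_gguf_size_gb(7, 4))  # 3.5
```

This is why a 7B model that needs ~14 GB in fp16 fits comfortably in the 8 GB RAM tier once quantized to 4 bits.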
```shell
python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"
```
Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes and commit them with descriptive messages
- Push your branch to your forked repository
- Submit a pull request to the main repository
- All the amazing developers who have contributed to the project
- The open-source community for their support and inspiration
Made with ❤️ by the Webscout team