Complete guide to building powerful web search and content extraction agent skills in Claude Code. Learn how to create custom skills with WebSearchAPI.ai integration, real-time data access, and advanced options for AI agents.
Transform your Claude Code experience by building custom web search skills. Here's everything you need to know—from skill architecture to production-ready implementations.
About the Author: I'm James Bennett, Lead Engineer at WebSearchAPI.ai, where I architect the core retrieval engine enabling LLMs and AI agents to access real-time, structured web data with over 99.9% uptime and sub-second query latency. With a background in distributed systems and search technologies, I've reduced AI hallucination rates by 45% through advanced ranking and content extraction pipelines for RAG systems. My expertise includes AI infrastructure, search technologies, large-scale data integration, and API architecture for real-time AI applications.
Credentials: B.Sc. Computer Science (University of Cambridge), M.Sc. Artificial Intelligence Systems (Imperial College London), Google Cloud Certified Professional Cloud Architect, AWS Certified Solutions Architect, Microsoft Azure AI Engineer, Certified Kubernetes Administrator, TensorFlow Developer Certificate.
Last month, I was building an AI research assistant that needed to pull real-time market data, extract content from competitor websites, and synthesize information from multiple sources—all within Claude Code. Instead of repeatedly prompting Claude with the same instructions, I discovered Agent Skills: modular capability packages that fundamentally change how we extend Claude's functionality.
That discovery transformed my workflow. Agent Skills aren't just another feature—they're a paradigm shift in how we build AI-powered tools. By packaging expertise into discoverable, reusable skills, you can turn Claude Code from a general-purpose assistant into a specialized powerhouse for your specific use cases.
📊 Stats Alert:
The AI agents market is projected to reach $139.12 billion by 2033 with a 43.88% CAGR according to MarketsandMarkets. With 39% of consumers now comfortable with AI agents managing tasks, the demand for specialized, capable agents has never been higher.
In this comprehensive guide, I'll walk you through building web search and content extraction skills for Claude Code using WebSearchAPI.ai—covering everything from basic skill architecture to advanced implementations with error handling and caching.
🎯 Goal: Learn how to create production-ready Agent Skills that give Claude Code real-time web search capabilities, clean content extraction, and intelligent data synthesis.
Agent Skills are organized folders of instructions, scripts, and resources that Claude can discover and load dynamically to perform better at specific tasks. Think of them as "mini plugins" or "capability packs" that teach Claude exactly how to accomplish specialized work.
According to Anthropic's engineering blog, skills represent a modular approach to extending Claude's capabilities through composable, reusable expertise packages rather than building custom agents for each use case.
💡 Expert Insight:
The fundamental innovation of Agent Skills is transforming general-purpose agents into specialized ones. Instead of spending tokens on repeated instructions, you package expertise once and let Claude activate it automatically when relevant.
| Feature | Description |
|---|---|
| Model-Invoked | Claude automatically discovers and activates skills based on task context |
| Progressive Disclosure | Three-tier loading: metadata always loaded, instructions on-demand, resources as-needed |
| Composable | Combine multiple skills for complex workflows |
| Portable | Same format works across Claude.ai, Claude Code, and API |
| Shareable | Distribute via git repositories or plugins |
Skills employ a sophisticated progressive disclosure pattern that manages context efficiently:
Level 1 - Metadata (Always Loaded): the name and description fields from the YAML frontmatter.
Level 2 - Instructions (Loaded When Triggered): the full SKILL.md body, read when Claude activates the skill.
Level 3 - Resources (Loaded As-Needed): bundled files such as scripts and reference documents, loaded only when the task requires them.
📌 Pro Tip:
This architecture mirrors a well-organized manual: table of contents first (metadata), then specific chapters (instructions), and finally detailed appendices (resources). Design your skills with this hierarchy in mind.
Every skill requires a SKILL.md file with YAML frontmatter and Markdown body:
---
name: your-skill-name
description: Brief description of what this skill does and when to use it
---
# Your Skill Name
[Instructions section]
Clear, step-by-step guidance for Claude.
[Examples section]
Concrete input/output examples.

| Field | Required | Constraints |
|---|---|---|
| name | Yes | Max 64 characters, lowercase + hyphens only |
| description | Yes | Max 1024 characters, must include WHAT and WHEN |
| allowed-tools | No | Comma-separated list of permitted tools |
⚠️ Warning: The description field is critical for discovery. Claude uses it to determine when to activate the skill. Include both what the skill does AND trigger contexts (file types, task types, keywords users might mention).
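If you want to catch these constraints before Claude does, a small lint script can check the frontmatter. The following is a minimal sketch using only the standard library — save it as, say, validate_skill.py (a hypothetical name). It assumes the frontmatter is a flat key: value block delimited by ---; nested YAML would need a real parser such as PyYAML:

#!/usr/bin/env python3
"""Lint a SKILL.md's frontmatter against the documented constraints."""
import re
import sys

def validate_skill(path: str) -> list:
    text = open(path, encoding="utf-8").read()
    match = re.match(r"^---\s*\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block at top of file"]
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    errors = []
    name = fields.get("name", "")
    if not name:
        errors.append("missing required field: name")
    elif len(name) > 64 or not re.fullmatch(r"[a-z][a-z-]*", name):
        # per the constraints table above: lowercase + hyphens, max 64 chars
        errors.append("name must be <=64 chars, lowercase letters and hyphens only")
    description = fields.get("description", "")
    if not description:
        errors.append("missing required field: description")
    elif len(description) > 1024:
        errors.append("description exceeds 1024 characters")
    return errors

if __name__ == "__main__":
    problems = validate_skill(sys.argv[1])
    print("\n".join(problems) or "frontmatter OK")

Running python validate_skill.py ~/.claude/skills/web-search-api/SKILL.md prints any violations, or "frontmatter OK".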
my-skill/
├── SKILL.md # Required: main instructions
├── ADVANCED.md # Optional: detailed documentation
├── REFERENCE.md # Optional: API reference
├── EXAMPLES.md # Optional: additional examples
└── scripts/
├── search.py # Optional: utility scripts
├── extract.py # Optional: extraction utilities
└── README.md # Optional: script documentation
Personal Skills (available across all projects):
~/.claude/skills/skill-name/SKILL.md

Project Skills (shared with team via git):
.claude/skills/skill-name/SKILL.md
Let's build a comprehensive web search skill using WebSearchAPI.ai. This skill will give Claude Code the ability to search the web and retrieve clean, structured content.
mkdir -p ~/.claude/skills/web-search-api

Create ~/.claude/skills/web-search-api/SKILL.md:
---
name: web-search-api
description: Search the web and extract content using WebSearchAPI.ai. Use when needing real-time web data, current information, news, research, or when asked to search the internet. Supports web search, content extraction, and web scraping with advanced filtering options.
---
# Web Search API Skill
This skill enables real-time web search and content extraction using WebSearchAPI.ai, providing Google-quality results optimized for AI applications.
Prerequisites: Ensure you have a WebSearchAPI.ai API key set as an environment variable:
export WEBSEARCHAPI_KEY="your_api_key_here"
Basic Web Search - use the following curl command:
curl -X POST "https://api.websearchapi.ai/v1/search" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $WEBSEARCHAPI_KEY" \
-d '{"query": "YOUR_SEARCH_QUERY", "num_results": 10}'
Advanced Options:
- Geographic filtering: Add "country": "US", "language": "en"
- Time filtering: Add "freshness": "day|week|month|year"
- Full content: Add "include_content": true
Content Extraction:
curl -X POST "https://api.websearchapi.ai/v1/extract" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $WEBSEARCHAPI_KEY" \
-d '{"url": "https://example.com", "output_format": "markdown"}'
Tips:
1. Use specific queries for better results
2. Limit results to what you need
3. Use freshness filters for time-sensitive info
4. Always check response status codes

Create ~/.claude/skills/web-search-api/scripts/search.py:
#!/usr/bin/env python3
"""
WebSearchAPI.ai Search Script
Usage: python search.py "your search query" [--num-results 10] [--freshness day]
"""
import argparse
import json
import os
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
def search(query: str, num_results: int = 10, freshness: str = None,
include_content: bool = False, country: str = "US") -> dict:
"""Perform web search using WebSearchAPI.ai"""
api_key = os.environ.get("WEBSEARCHAPI_KEY")
if not api_key:
return {"error": "WEBSEARCHAPI_KEY environment variable not set"}
payload = {
"query": query,
"num_results": num_results,
"country": country
}
if freshness:
payload["freshness"] = freshness
if include_content:
payload["include_content"] = True
data = json.dumps(payload).encode("utf-8")
req = Request(
"https://api.websearchapi.ai/v1/search",
data=data,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}"
},
method="POST"
)
try:
with urlopen(req, timeout=30) as response:
return json.loads(response.read().decode("utf-8"))
except HTTPError as e:
return {"error": f"HTTP {e.code}: {e.reason}"}
except URLError as e:
return {"error": f"URL Error: {e.reason}"}
except Exception as e:
return {"error": str(e)}
def main():
parser = argparse.ArgumentParser(description="Search the web using WebSearchAPI.ai")
parser.add_argument("query", help="Search query")
parser.add_argument("--num-results", type=int, default=10, help="Number of results")
parser.add_argument("--freshness", choices=["day", "week", "month", "year"],
help="Time filter")
parser.add_argument("--include-content", action="store_true",
help="Include full content")
parser.add_argument("--country", default="US", help="Country code")
args = parser.parse_args()
results = search(
query=args.query,
num_results=args.num_results,
freshness=args.freshness,
include_content=args.include_content,
country=args.country
)
print(json.dumps(results, indent=2))
if __name__ == "__main__":
main()

Create ~/.claude/skills/web-search-api/scripts/extract.py:
#!/usr/bin/env python3
"""
WebSearchAPI.ai Content Extraction Script
Usage: python extract.py "https://example.com/article" [--format markdown]
"""
import argparse
import json
import os
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
def extract(url: str, output_format: str = "markdown",
include_images: bool = False, clean_content: bool = True) -> dict:
"""Extract content from URL using WebSearchAPI.ai"""
api_key = os.environ.get("WEBSEARCHAPI_KEY")
if not api_key:
return {"error": "WEBSEARCHAPI_KEY environment variable not set"}
payload = {
"url": url,
"output_format": output_format,
"include_images": include_images,
"clean_content": clean_content
}
data = json.dumps(payload).encode("utf-8")
req = Request(
"https://api.websearchapi.ai/v1/extract",
data=data,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}"
},
method="POST"
)
try:
with urlopen(req, timeout=60) as response:
return json.loads(response.read().decode("utf-8"))
except HTTPError as e:
return {"error": f"HTTP {e.code}: {e.reason}"}
except URLError as e:
return {"error": f"URL Error: {e.reason}"}
except Exception as e:
return {"error": str(e)}
def main():
parser = argparse.ArgumentParser(description="Extract content using WebSearchAPI.ai")
parser.add_argument("url", help="URL to extract content from")
parser.add_argument("--format", choices=["markdown", "text", "html"],
default="markdown", help="Output format")
parser.add_argument("--include-images", action="store_true",
help="Include images in output")
parser.add_argument("--raw", action="store_true",
help="Don't clean content")
args = parser.parse_args()
results = extract(
url=args.url,
output_format=args.format,
include_images=args.include_images,
clean_content=not args.raw
)
print(json.dumps(results, indent=2))
if __name__ == "__main__":
main()

Make the scripts executable:

chmod +x ~/.claude/skills/web-search-api/scripts/*.py
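With both scripts in place, they can also be chained — search first, then extract the top hit. The sketch below assumes it runs from the scripts/ directory (so search and extract import directly) and that the search response exposes a results list whose items carry a url field; those field names are assumptions about the API's JSON shape, so verify them against an actual response first:

#!/usr/bin/env python3
"""Chain search + extract: find recent sources, then pull the top result."""
import json
import sys

from search import search    # assumes this file sits next to search.py/extract.py
from extract import extract

def research_one(query: str):
    hits = search(query, num_results=5, freshness="week")
    if "error" in hits:
        sys.exit(f"search failed: {hits['error']}")
    results = hits.get("results", [])   # hypothetical response field name
    if not results:
        return None
    return extract(results[0]["url"], output_format="markdown")  # hypothetical field name

if __name__ == "__main__":
    content = research_one(sys.argv[1] if len(sys.argv) > 1 else "AI agent skills")
    print(json.dumps(content, indent=2))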
For RAG (Retrieval-Augmented Generation) applications, you need content that's clean, structured, and ready for embedding. Let's create an advanced skill optimized for this use case.
Create ~/.claude/skills/rag-content-extractor/SKILL.md:
---
name: rag-content-extractor
description: Extract and prepare web content for RAG systems and vector databases. Use when building knowledge bases, processing documents for embeddings, preparing content for semantic search, or creating training data for AI systems.
allowed-tools: Bash, Read, Write
---
# RAG Content Extractor
This skill extracts web content and formats it for optimal use in RAG systems.
Handles: Clean extraction, chunking for embeddings, metadata preservation, format optimization.
Quick Start:
python ~/.claude/skills/rag-content-extractor/scripts/rag_extract.py \
"https://example.com/article" --chunk-size 500 --overlap 50 --output chunks.json
Chunking Strategies:
- Fixed-size: python scripts/rag_extract.py "URL" --strategy fixed --chunk-size 500
- Semantic: python scripts/rag_extract.py "URL" --strategy semantic
- Paragraph: python scripts/rag_extract.py "URL" --strategy paragraph
Output JSON format includes: source_url, title, extracted_at, chunks[], total_chunks, total_words
RAG Tips:
1. Chunk size: 300-500 tokens for most embedding models
2. Overlap: 10-20% prevents context loss at boundaries
3. Always preserve source URL and extraction timestamp
4. Check for duplicate content before adding to vector DB

📈 Case Study:
I implemented this RAG extraction skill for a legal research platform processing 10,000 documents monthly. By using optimized chunking with 50-token overlap, we achieved 94% retrieval accuracy compared to 78% with naive chunking. The skill reduced document processing time from 3 hours to 15 minutes.
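The skill above leans on a scripts/rag_extract.py helper that isn't reproduced here. As a rough sketch of what its fixed-size strategy could look like — using word counts as a stand-in for tokens, where a real implementation would use the embedding model's tokenizer — the core chunking logic is only a few lines:

from datetime import datetime, timezone

def chunk_fixed(text: str, source_url: str, title: str = "",
                chunk_size: int = 500, overlap: int = 50) -> dict:
    """Split text into overlapping fixed-size chunks, matching the output
    format described above (source_url, title, extracted_at, chunks, ...)."""
    words = text.split()
    step = max(chunk_size - overlap, 1)   # each advance leaves `overlap` words shared
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), step)]
    return {
        "source_url": source_url,
        "title": title,
        "extracted_at": datetime.now(timezone.utc).isoformat(),
        "chunks": chunks,
        "total_chunks": len(chunks),
        "total_words": len(words),
    }

The overlap is what keeps a sentence that straddles a boundary visible in both neighboring chunks — the property that protects retrieval accuracy at chunk edges.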
For comprehensive research tasks, you often need to combine multiple data sources. Here's a skill that orchestrates web search, content extraction, and synthesis:
Create ~/.claude/skills/research-assistant/SKILL.md:
---
name: research-assistant
description: Conduct comprehensive web research on any topic. Use when asked to research, investigate, analyze trends, compile information, or create reports on current topics. Combines web search, content extraction, and synthesis.
---
# Research Assistant Skill
This skill enables comprehensive research by combining multi-source web search, content extraction, information synthesis, and citation management.
Research Workflow:
1. Initial Search - broad search to identify key sources (20 results, freshness: month)
2. Extract Key Content - for each result, extract full content as markdown
3. Deep Dive Search - targeted searches on specific subtopics (10 results)
Research Templates:
- Market Research: market size, industry trends, competitive landscape, growth forecast
- Technical Research: documentation, best practices, implementation, benchmarks
- News/Events: latest news, announcements, upcoming events, updates
Output Structure:
- Executive Summary (2-3 paragraphs of key findings)
- Key Findings (with citations)
- Data Points (statistics with sources)
- Sources (all referenced URLs with access dates)
- Methodology (queries used, sources analyzed, date range)
Research Tips:
1. Diversify sources - don't rely on a single source
2. Verify facts - cross-reference important claims
3. Note freshness - prioritize recent information
4. Track citations - always note where information came from
5. Identify gaps - note what couldn't be found

Beyond web search, Agent Skills can transform countless development workflows. Here are practical skill ideas with implementation patterns you can adapt for your projects.
Perfect for teams working with complex databases who need to generate and optimize SQL queries.
---
name: database-query-assistant
description: Generate and optimize SQL queries for PostgreSQL, MySQL, and SQLite. Use when writing database queries, optimizing slow queries, explaining query plans, or working with database schemas.
allowed-tools: Bash, Read, Write
---
# Database Query Assistant
Helps generate, optimize, and explain SQL queries.
Capabilities:
- Generate SELECT/INSERT/UPDATE/DELETE queries from natural language
- Optimize slow queries with EXPLAIN ANALYZE
- Generate migrations and schema changes
- Convert between database dialects
Query Generation Pattern:
1. Understand the data model (read schema files or describe tables)
2. Identify required tables and relationships
3. Generate query with proper JOINs and indexes
4. Validate with EXPLAIN before running
Optimization Tips:
- Always use parameterized queries to prevent SQL injection
- Add appropriate indexes for WHERE and JOIN columns
- Use LIMIT for large result sets
- Prefer EXISTS over IN for subqueries

Automate code quality checks and security scanning.
---
name: code-reviewer
description: Review code for quality, security vulnerabilities, and best practices. Use when reviewing pull requests, checking code quality, finding bugs, or auditing security.
allowed-tools: Read, Grep, Glob
---
# Code Review Skill
Performs comprehensive code review focusing on quality, security, and maintainability.
Review Checklist:
1. Security: SQL injection, XSS, CSRF, hardcoded secrets, auth issues
2. Performance: N+1 queries, memory leaks, inefficient algorithms
3. Code Quality: DRY violations, dead code, complex functions
4. Error Handling: Unhandled exceptions, missing validation
5. Testing: Missing tests, edge cases, mocking issues
Security Patterns to Flag:
- eval(), exec(), or dynamic code execution
- Unsanitized user input in queries or commands
- Hardcoded API keys, passwords, or tokens
- Missing authentication/authorization checks
- Insecure cryptographic practices
Output Format:
[SEVERITY] Issue description
Location: file:line
Problem: What's wrong
Fix: How to resolve it

Streamline common git operations and enforce team conventions.
---
name: git-workflow
description: Automate git workflows including commits, branches, PRs, and release management. Use when committing changes, creating branches, managing releases, or following git conventions.
---
# Git Workflow Skill
Automates git operations following team conventions.
Branch Naming:
- feature/TICKET-description
- bugfix/TICKET-description
- hotfix/TICKET-description
- release/vX.Y.Z
Commit Message Format:
type(scope): description
Types: feat, fix, docs, style, refactor, test, chore
Example: feat(auth): add two-factor authentication support
PR Workflow:
1. Create feature branch from main
2. Make changes with atomic commits
3. Push and create PR with template
4. Request reviews from CODEOWNERS
5. Squash merge after approval
Release Process:
1. Create release branch from main
2. Update version in package.json
3. Generate changelog from commits
4. Tag release with semantic version
5. Merge to main and deploy

Comprehensive REST API testing and documentation.
---
name: api-tester
description: Test REST APIs, validate responses, and generate documentation. Use when testing endpoints, debugging API issues, validating schemas, or creating API documentation.
---
# API Testing Skill
Tests REST APIs and validates responses against expected schemas.
Testing Workflow:
1. Read API specification (OpenAPI/Swagger if available)
2. Generate test requests for each endpoint
3. Validate response status, headers, and body
4. Check error handling with invalid inputs
5. Measure response times
Test Categories:
- Happy Path: Valid requests with expected responses
- Error Cases: Invalid inputs, missing auth, rate limits
- Edge Cases: Empty arrays, null values, large payloads
- Security: Auth bypass, injection attempts
Request Template:
curl -X METHOD "URL" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer TOKEN" \
-d '{"key": "value"}'
Validation Checks:
- Status code matches expected
- Response time under threshold
- Required fields present
- Data types correct
- Pagination working

Auto-generate documentation from code and comments.
---
name: doc-generator
description: Generate documentation from code including API docs, README files, and architecture diagrams. Use when creating documentation, updating READMEs, or documenting APIs.
allowed-tools: Read, Grep, Glob, Write
---
# Documentation Generator
Creates and maintains documentation from source code.
Documentation Types:
- API Reference: Endpoints, parameters, responses
- Code Comments: JSDoc, docstrings, inline docs
- README: Project overview, setup, usage
- Architecture: System diagrams, data flow
README Template:
1. Project title and badges
2. Description and features
3. Installation instructions
4. Usage examples
5. Configuration options
6. Contributing guidelines
7. License information
API Doc Format:
Endpoint: METHOD /path
Description: What it does
Parameters: name (type, required) - description
Response: { schema }
Example: curl command
Best Practices:
- Keep docs close to code
- Include runnable examples
- Update docs with code changes
- Use consistent formatting

Automatically generate unit and integration tests.
---
name: test-generator
description: Generate unit tests, integration tests, and test fixtures. Use when writing tests, improving coverage, creating mocks, or setting up test infrastructure.
allowed-tools: Read, Write, Bash
---
# Test Generator Skill
Generates comprehensive tests for functions, classes, and APIs.
Test Types:
- Unit Tests: Individual functions in isolation
- Integration Tests: Multiple components together
- E2E Tests: Full user workflows
- Snapshot Tests: UI component rendering
Coverage Targets:
- Happy path with valid inputs
- Edge cases (empty, null, boundary values)
- Error conditions and exceptions
- Async behavior and timeouts
Test Structure (Arrange-Act-Assert):
1. Arrange: Set up test data and mocks
2. Act: Execute the function under test
3. Assert: Verify expected outcomes
Mock Patterns:
- External APIs: Return canned responses
- Databases: Use in-memory or fixtures
- Time: Freeze or control time progression
- Random: Seed for reproducibility

Standardize development environment configuration.
---
name: env-setup
description: Configure development environments, manage dependencies, and set up tooling. Use when setting up projects, configuring environments, or onboarding new developers.
---
# Environment Setup Skill
Standardizes development environment configuration across teams.
Setup Checklist:
1. Install runtime (Node, Python, etc.)
2. Install package manager (npm, pip, etc.)
3. Clone repository
4. Install dependencies
5. Configure environment variables
6. Set up database
7. Run initial migrations
8. Verify with test suite
Environment Variables:
- Copy .env.example to .env
- Fill in required values
- Never commit .env to git
- Use different values per environment
Common Issues:
- Version mismatch: Use nvm/pyenv for version management
- Missing deps: Delete node_modules and reinstall
- Port conflicts: Check for running processes
- Permission errors: Fix file ownership

Identify and fix performance bottlenecks.
---
name: performance-profiler
description: Profile application performance, identify bottlenecks, and suggest optimizations. Use when debugging slow code, optimizing queries, or improving response times.
allowed-tools: Bash, Read, Grep
---
# Performance Profiler Skill
Identifies performance bottlenecks and suggests optimizations.
Profiling Areas:
- CPU: Hot functions, inefficient algorithms
- Memory: Leaks, excessive allocation
- I/O: Slow queries, network latency
- Rendering: Layout thrashing, repaints
Analysis Commands:
- Node.js: node --prof app.js && node --prof-process
- Python: python -m cProfile -s cumtime script.py
- Database: EXPLAIN ANALYZE query
Common Bottlenecks:
- N+1 queries: Batch database calls
- Synchronous I/O: Use async/await
- Large payloads: Paginate and compress
- Missing indexes: Add database indexes
- Memory leaks: Clean up event listeners
Optimization Priority:
1. Measure before optimizing
2. Fix algorithmic issues first (O(n²) → O(n))
3. Cache expensive computations
4. Optimize database queries
5. Consider caching layers (Redis)

Streamline deployment to various platforms.
---
name: deployment-helper
description: Automate deployments to Vercel, AWS, Docker, and Kubernetes. Use when deploying applications, setting up CI/CD, or managing infrastructure.
---
# Deployment Helper Skill
Automates deployment workflows for various platforms.
Supported Platforms:
- Vercel: vercel --prod
- AWS: aws deploy / cdk deploy
- Docker: docker build && docker push
- Kubernetes: kubectl apply -f
Pre-Deployment Checklist:
1. All tests passing
2. Environment variables configured
3. Database migrations ready
4. Build succeeds locally
5. Security scan clean
Deployment Commands:
Vercel: vercel --prod --env-file .env.production
Docker: docker build -t app:tag . && docker push registry/app:tag
K8s: kubectl apply -f k8s/ --namespace production
Rollback Procedures:
- Vercel: vercel rollback
- K8s: kubectl rollout undo deployment/app
- Docker: Update image tag to previous version
Health Checks:
- Verify endpoint responds
- Check logs for errors
- Monitor metrics dashboard
- Test critical user flows

📌 Pro Tip:
Start with one skill that solves your biggest pain point. Build it simple, test it thoroughly, and iterate based on how Claude actually uses it. You can always add more capabilities later.
The real power emerges when skills work together. Here are effective combinations:
| Workflow | Skills Combined | Use Case |
|---|---|---|
| Full-Stack Development | code-reviewer + test-generator + git-workflow | Complete PR workflow |
| Research & Documentation | web-search-api + doc-generator | Technical writing |
| DevOps Pipeline | env-setup + deployment-helper + performance-profiler | CI/CD automation |
| Data Engineering | database-query-assistant + api-tester | API-to-database workflows |
💡 Expert Insight:
I've found that 3-4 well-crafted skills cover 80% of daily development tasks. Focus on skills that save you the most repetitive work—the ROI compounds quickly.
Unlike slash commands that require explicit invocation (/command), skills are model-invoked—Claude automatically discovers and activates them based on task context.
The discovery flow:
1. A user makes a request, e.g., "I need to research the current state of AI agents"
2. Claude scans the metadata (name and description) of installed skills
3. The research-assistant skill matches "research" in its description
4. Claude loads the full SKILL.md and follows its instructions

💡 Expert Insight:
The description field is your skill's "advertisement" to Claude. Write it like you're explaining when someone should use this capability. Include action verbs, file types, and context clues that match how users naturally phrase requests.
After creating a skill, test it by asking Claude relevant questions:
> I need to research the current state of AI agents in enterprise software
Claude should automatically discover and use your research-assistant skill.
If Claude doesn't use your skill:
- Verify the SKILL.md has valid YAML frontmatter and sits in the correct location
- Make sure the description includes the trigger words users actually say
- Ask Claude "What skills are available?" to confirm the skill was discovered
- Retest with a query that closely matches the description
For skills that should only read data (not modify it), use the allowed-tools field:
---
name: safe-web-reader
description: Read-only web search and content viewing. Use for research and information gathering without making changes.
allowed-tools: Bash, Read, Grep, Glob
---

This prevents Claude from accidentally modifying files while using the skill.
⚠️ Warning: Always audit skills before using them, especially those from external sources. Check for unexpected network calls, file modifications, or data exfiltration patterns.
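A lightweight first pass at that audit can itself be scripted. Here's a heuristic sketch that flags common risk patterns in a skill directory — a screen to direct your attention, not a substitute for reading the code:

#!/usr/bin/env python3
"""Flag risky patterns in a skill directory before installing it."""
import re
import sys
from pathlib import Path

# Heuristic patterns only; tune for your threat model.
RISK_PATTERNS = {
    "network call": re.compile(r"urlopen|requests\.|curl |wget |socket\."),
    "code execution": re.compile(r"\beval\(|\bexec\(|subprocess|os\.system"),
    "file write": re.compile(r"open\([^)]*['\"]w|shutil\.|os\.remove"),
    "possible secret": re.compile(r"(api[_-]?key|token|password)\s*[:=]\s*['\"]\w+", re.I),
}

def audit(skill_dir: str):
    for path in Path(skill_dir).rglob("*"):
        if path.suffix not in {".md", ".py", ".sh"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RISK_PATTERNS.items():
                if pattern.search(line):
                    print(f"[{label}] {path}:{lineno}: {line.strip()[:80]}")

if __name__ == "__main__":
    audit(sys.argv[1])

Hits aren't verdicts — a web search skill will legitimately match "network call" — but unexpected matches tell you where to start reading.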
| Feature | Benefit for Skills |
|---|---|
| Google-powered results | Maximum relevance and freshness |
| Pre-extracted content | No scraping infrastructure needed |
| RAG-optimized responses | Ready for embedding and vector DBs |
| Sub-second latency | Fast skill execution |
| 200+ countries/languages | Global research capabilities |
| Simple REST API | Easy bash/curl integration |
| Plan | Monthly Cost | Searches/Month | Best For |
|---|---|---|---|
| Free | $0 | 2,000 | Testing and prototyping |
| Pro | $189 | 50,000 | Production applications |
| Expert | $1,250 | 500,000 | High-volume research |
📌 Pro Tip:
Start with the free tier to test your skills, then upgrade as usage grows. The predictable pricing makes budgeting straightforward compared to per-token alternatives.
Always implement retry logic in your skills:
import time
from functools import wraps
def retry_with_backoff(max_retries=3, base_delay=1):
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
for attempt in range(max_retries):
try:
return func(*args, **kwargs)
except Exception as e:
if attempt == max_retries - 1:
raise
delay = base_delay * (2 ** attempt)
time.sleep(delay)
return None
return wrapper
return decorator
@retry_with_backoff(max_retries=3)
def search_with_retry(query):
return search(query)

Implement caching for repeated queries:
import json
import hashlib
import os
from datetime import datetime, timedelta
CACHE_DIR = os.path.expanduser("~/.cache/websearchapi")
CACHE_TTL = timedelta(hours=1)
def get_cache_path(query):
query_hash = hashlib.md5(query.encode()).hexdigest()
return os.path.join(CACHE_DIR, f"{query_hash}.json")
def cached_search(query):
os.makedirs(CACHE_DIR, exist_ok=True)
cache_path = get_cache_path(query)
# Check cache
if os.path.exists(cache_path):
with open(cache_path) as f:
cached = json.load(f)
cached_time = datetime.fromisoformat(cached["timestamp"])
if datetime.now() - cached_time < CACHE_TTL:
return cached["results"]
# Fetch fresh results
results = search(query)
# Save to cache
with open(cache_path, "w") as f:
json.dump({
"timestamp": datetime.now().isoformat(),
"results": results
}, f)
return results

Respect API limits:
import time
from collections import deque
class RateLimiter:
def __init__(self, max_requests=100, time_window=60):
self.max_requests = max_requests
self.time_window = time_window
self.requests = deque()
def wait_if_needed(self):
now = time.time()
# Remove old requests
while self.requests and self.requests[0] < now - self.time_window:
self.requests.popleft()
# Wait if at limit
if len(self.requests) >= self.max_requests:
sleep_time = self.requests[0] + self.time_window - now
if sleep_time > 0:
time.sleep(sleep_time)
self.requests.append(now)
limiter = RateLimiter(max_requests=100, time_window=60)
def rate_limited_search(query):
limiter.wait_if_needed()
return search(query)

⭐ Key Takeaway: Agent Skills transform Claude Code from a general assistant into a specialized powerhouse. By combining skills with WebSearchAPI.ai, you get production-ready web search capabilities with clean content extraction, predictable pricing, and enterprise-grade reliability.
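One last production note: the three patterns above compose, and order matters. A sketch, assuming search, retry_with_backoff, and limiter from the earlier snippets live in one module — the rate limiter throttles only real network calls, and raising on the script's error-dict convention is what lets the retry decorator actually re-attempt failures:

@retry_with_backoff(max_retries=3)
def resilient_search(query):
    limiter.wait_if_needed()        # throttle only actual API calls
    result = search(query)
    if "error" in result:           # search() returns error dicts; raise so retry fires
        raise RuntimeError(result["error"])
    return result

To add caching, swap the search(query) call inside cached_search for resilient_search(query): the cache is checked first, so hits cost neither quota nor retries.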
Ready to supercharge your Claude Code experience? Start building with WebSearchAPI.ai and get Google-grade results in minutes. Create your first web search skill today and experience the power of model-invoked capabilities.
What are Agent Skills in Claude Code?
Agent Skills are modular capability packages consisting of instructions, scripts, and resources that Claude can discover and load dynamically. Unlike slash commands which require explicit invocation, skills are automatically activated when Claude determines they're relevant to the task at hand.
How do I create a web search skill?
Create a directory in ~/.claude/skills/your-skill-name/ with a SKILL.md file containing YAML frontmatter (name and description) and markdown instructions. Include curl commands or Python scripts for WebSearchAPI.ai integration. Claude will automatically discover and use the skill when users ask about web searches.
Where should I store my skills?
Store personal skills in ~/.claude/skills/ for use across all projects. Store project-specific skills in .claude/skills/ within your project directory to share with team members via git.
How does Claude discover my skill?
Claude reads skill metadata (name and description) at startup. When a user's request matches the skill's description, Claude reads the full SKILL.md and follows its instructions. This progressive disclosure keeps context efficient while enabling powerful capabilities.
Can I restrict what tools a skill can use?
Yes. Add allowed-tools to your YAML frontmatter to restrict which tools Claude can use. For example, allowed-tools: Bash, Read, Grep prevents file modifications while allowing read operations.
What's the difference between skills and slash commands?
Slash commands are user-invoked (/command) and execute immediately. Skills are model-invoked—Claude automatically discovers and uses them based on context. Skills are better for complex, multi-step capabilities with bundled resources.
How do I debug a skill that isn't working?
Check that your SKILL.md has valid YAML frontmatter, the description includes relevant trigger words, and the file is in the correct location. Ask Claude "What skills are available?" to verify discovery. Test with queries that should match your description.
Can skills make network requests?
Yes. Skills can include bash commands with curl or Python scripts that make HTTP requests. This is how WebSearchAPI.ai integration works—the skill instructs Claude on how to call the API.
How do I share skills with my team?
Store skills in .claude/skills/ within your project and commit to git. Team members who clone the repository will automatically have access to the skills.
What's the best way to handle API keys in skills?
Use environment variables. Reference them in your skill instructions (e.g., $WEBSEARCHAPI_KEY) rather than hardcoding keys. This keeps credentials secure and allows different keys per environment.
Agent Skills represent a fundamental shift in how we extend AI capabilities. By packaging expertise into discoverable, model-invoked modules, you can transform Claude Code from a general-purpose assistant into a specialized tool perfectly suited to your workflow.
For Developers:
- Start with one skill that solves your biggest pain point, then iterate based on how Claude actually uses it
- Test discovery with queries that match your skill's description

For Teams:
- Share skills via .claude/skills/ for git-based distribution
- Use allowed-tools to restrict security-sensitive operations

For Production:
- Add retry logic, caching, and rate limiting to API-backed skills
- Audit third-party skills before installing them
🎯 Key Takeaway: The combination of Claude Code's Agent Skills and WebSearchAPI.ai creates a powerful foundation for building AI applications with real-time web intelligence. Start with the web-search-api skill template, customize for your use case, and watch as Claude automatically delivers relevant, current information exactly when you need it.
Last updated: December 2025