
Crawl Data API

Scrape Tavily Crawl data with one API call. The endpoint runs a multi-page crawl starting from a root URL and returns each crawled page with its extracted content (unlike map, which returns only URLs). Use `instructions` to guide the crawler in natural language: Tavily uses an LLM to follow only the paths matching your intent. Use the `select_*` / `exclude_*` filters (comma-separated regex patterns) to constrain scope.

Try the Tavily Crawl API

See real data before writing a single line

GET /v1/tavily/crawl


Searching 27 platforms in parallel

TikTok · Instagram · YouTube · Facebook · X · LinkedIn · Reddit · Threads · Pinterest · Twitch · Truth Social · Snapchat · Kick · TikTok Shop · Amazon Shop · Linktree · Komi · Pillar · lnk.bio · Facebook Ads · Google Ads · LinkedIn Ads · Google Search · Polymarket · Tavily · Hacker News · GitHub · Perplexity · Universal Search
Tavily API

What can you do with the Crawl API?

The Crawl endpoint gives you structured Tavily data with computed fields in a single request. No scraping infrastructure to build or maintain.

Example Request

GET /v1/tavily/crawl?url=https%3A%2F%2Fdocs.tavily.com
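A request like the one above can be built with `requests` before sending, which makes it easy to inspect the encoded query string. This is a sketch: it assumes the same `https://www.socialcrawl.dev` base URL and `x-api-key` header used by the profile example later on this page.

```python
import requests

# Sketch of a crawl request with a few optional parameters.
# Base URL and auth header follow the profile example on this page;
# substitute your own API key before sending.
req = requests.Request(
    'GET',
    'https://www.socialcrawl.dev/v1/tavily/crawl',
    params={
        'url': 'https://docs.tavily.com',
        'max_depth': 2,                       # follow links two hops from the root
        'instructions': 'Find all API reference pages',
        'select_paths': r'/api/.*,/docs/.*',  # comma-separated regex patterns
    },
    headers={'x-api-key': 'sc_YOUR_API_KEY'},
).prepare()

print(req.url)  # inspect the encoded query string before sending
```

Sending is then `requests.Session().send(req)`, or simply pass the same arguments to `requests.get(...)`.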

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| `url` | Yes | Root URL to begin crawling. |
| `max_depth` | No | Maximum link depth from the root URL. Defaults to 1. |
| `max_breadth` | No | Maximum number of links followed per level (per page). Defaults to 20. |
| `limit` | No | Total number of pages the crawler will process before stopping. Defaults to 50. |
| `instructions` | No | Natural-language instructions for the crawler (e.g. 'Find all product pages with pricing'). |
| `select_paths` | No | Comma-separated regex patterns; only crawl URLs whose path matches. |
| `select_domains` | No | Comma-separated regex patterns; only crawl URLs whose domain matches. |
| `exclude_paths` | No | Comma-separated regex patterns; skip URLs whose path matches. |
| `exclude_domains` | No | Comma-separated regex patterns; skip URLs whose domain matches. |
| `allow_external` | No | Whether to follow links to external domains. Defaults to true. |
| `extract_depth` | No | Per-page extraction strategy: `basic` (faster) or `advanced` (handles harder pages). |
| `format` | No | Output format for extracted content: `markdown` (default) or `text`. |
| `categories` | No | Comma-separated list of category hints to bias the crawl toward. |
API Details

How does the Tavily Crawl API work?

Send a GET request with your API key and get back clean, structured JSON. Every response follows our unified schema with computed fields.

Method

GET

Response

JSON
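The exact crawl response fields are not documented on this page, so the following is a hedged sketch only: it assumes the unified envelope (`success`, `platform`, `data`) shown in the profile example on this page, and a hypothetical `pages` list holding each crawled page's `url` and `content`.

```python
import json

# Hypothetical crawl response, assuming the unified envelope from the
# profile example on this page; the "pages" shape is an assumption.
sample = '''{
  "success": true,
  "platform": "tavily",
  "data": {
    "pages": [
      {"url": "https://docs.tavily.com/welcome", "content": "# Welcome ..."}
    ]
  }
}'''

payload = json.loads(sample)
if payload.get('success'):
    for page in payload['data'].get('pages', []):
        # Each entry carries the extracted content alongside its URL.
        print(page['url'], len(page['content']))
```

Check the actual response body from your first request for the real field names before relying on this shape.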

Why SocialCrawl

Why use SocialCrawl for Tavily Crawl data?

We handle the complexity of Tavily data extraction so you can focus on building. Unified schema, AI enrichment, and zero platform logic in your code.

Developer First

How do you scrape social media data in seconds?

The fastest social media scraping API for developers. Scrape profiles, posts, comments, and analytics from 27 platforms covering 10B+ monthly active users.

One schema, every platform

Query 27 platforms with identical response structures. Write your integration once.

Computed fields, not just scraped

Every response includes engagement_rate, estimated_reach, content_category, and language — ready to use.
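Because the computed fields sit at the same path on every platform, cross-platform filtering needs no per-platform logic. A minimal sketch using the `engagement.engagement_rate` field from the sample response on this page (the result list itself is made-up illustration data):

```python
# Illustration data only; field paths follow the sample response on this page.
results = [
    {"platform": "tiktok", "engagement": {"engagement_rate": 0.045}},
    {"platform": "instagram", "engagement": {"engagement_rate": 0.012}},
]

# Same access path works for every platform, so one filter covers all 27.
high = [r for r in results if r["engagement"]["engagement_rate"] >= 0.03]
print([r["platform"] for r in high])
```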

No code required

Visual Data Explorer — paste any URL, get rich result cards, sortable tables, CSV export.

import requests

response = requests.get(
    'https://www.socialcrawl.dev/v1/tiktok/profile',
    params={'handle': 'charlidamelio'},
    headers={'x-api-key': 'sc_YOUR_API_KEY'}
)
data = response.json()
Example response:
{
  "success": true,
  "platform": "tiktok",
  "data": {
    "author": { "username": "charlidamelio", "followers": 124000 },
    "engagement": { "likes": 5200, "engagement_rate": 0.045 },
    "metadata": { "language": "en", "content_category": "food" }
  }
}

Ready to scrape Tavily Crawl data?

Get your API key and start pulling Tavily data in under 60 seconds.

Start for free