Profile Repos Data API
Scrape GitHub Profile Repos data with one API call. Returns the public repositories owned by the user under `data.items[]` — each entry includes `id`, `name`, `full_name`, `description`, `language`, `stargazers_count`, `forks_count`, `created_at`, and `pushed_at`. Sortable by `created`, `updated`, `pushed`, or `full_name`. Use `/v1/github/repo` for a single richer repo dossier.
Try the GitHub Profile Repos API
See real data before writing a single line
What can you do with the Profile Repos API?
The Profile Repos endpoint gives you structured GitHub data with computed fields in a single request. No scraping infrastructure to build or maintain.
Example Request
GET /v1/github/profile/repos?handle=octocat

Parameters
| Parameter | Required | Description |
|---|---|---|
| handle | Yes | GitHub username. |
| type | No | Filter: `all`, `owner`, or `member`. Defaults to `owner`. |
| sort | No | Sort field: `created`, `updated`, `pushed`, or `full_name`. Defaults to `full_name`. |
| direction | No | `asc` or `desc`. Defaults to `asc` when sorting by `full_name`, `desc` otherwise. |
| per_page | No | Repos per page (1–100). Defaults to 30. |
| page | No | 1-indexed page number for pagination. |
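The parameters above can be combined in a single call. A minimal sketch in Python with the `requests` library, using the `x-api-key` header and placeholder key shown elsewhere on this page (the request is prepared but not sent, so you can inspect the final URL first):

```python
import requests

BASE_URL = "https://www.socialcrawl.dev/v1/github/profile/repos"

# Illustrative parameters: newest repos first, 50 per page.
params = {
    "handle": "octocat",
    "type": "owner",
    "sort": "created",
    "direction": "desc",
    "per_page": 50,
    "page": 1,
}

# Build the request without sending it, to inspect the final URL.
prepared = requests.Request(
    "GET",
    BASE_URL,
    params=params,
    headers={"x-api-key": "sc_YOUR_API_KEY"},
).prepare()
print(prepared.url)

# To actually send it:
# response = requests.Session().send(prepared)
# data = response.json()
```

Increment `page` until `data.items[]` comes back empty to walk a user's full repo list.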
How does the GitHub Profile Repos API work?
Send a GET request with your API key and get back clean, structured JSON. Every response follows our unified schema with computed fields.
Method
GET
Response
JSON
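Once the JSON arrives, the repos live under `data.items[]` with the fields listed at the top of this page. A sketch of working with that shape, using a made-up payload rather than a live response:

```python
# Illustrative response shaped like the documented schema;
# the repo names and counts here are invented for the example.
payload = {
    "success": True,
    "platform": "github",
    "data": {
        "items": [
            {"name": "hello-world", "language": "C",
             "stargazers_count": 42, "forks_count": 7},
            {"name": "spoon-knife", "language": "HTML",
             "stargazers_count": 310, "forks_count": 128},
        ]
    },
}

# Pull out the repo list and rank it by stars, descending.
repos = payload["data"]["items"]
top = sorted(repos, key=lambda r: r["stargazers_count"], reverse=True)
print([r["name"] for r in top])  # most-starred repo first
```

Because every platform shares this envelope (`success`, `platform`, `data`), the same parsing code carries over to other endpoints.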
How do you scrape social media data in seconds?
The fastest social media scraping API for developers. Scrape profiles, posts, comments, and analytics from 27 platforms covering 10B+ monthly active users.
One schema, every platform
Query 27 platforms with identical response structures. Write your integration once.
Computed fields, not just scraped
Every response includes engagement_rate, estimated_reach, content_category, and language — ready to use.
No code required
Visual Data Explorer — paste any URL, get rich result cards, sortable tables, CSV export.
import requests
response = requests.get(
'https://www.socialcrawl.dev/v1/tiktok/profile',
params={'handle': 'charlidamelio'},
headers={'x-api-key': 'sc_YOUR_API_KEY'}
)
data = response.json()

{
"success": true,
"platform": "tiktok",
"data": {
"author": { "username": "charlidamelio", "followers": 124000 },
"engagement": { "likes": 5200, "engagement_rate": 0.045 },
"metadata": { "language": "en", "content_category": "food" }
}
}

Ready to scrape GitHub Profile Repos data?
Get your API key and start pulling GitHub data in under 60 seconds.
