Scrape Jobs
Create and manage Reddit scraping jobs.
POST /api/v1/workspaces/{workspace_id}/jobs

Create Scrape Job
Start a new scraping job for a subreddit or user.
Path Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| workspace_id | uuid | Required | Workspace ID |
Request Body
| Name | Type | Required | Description |
|---|---|---|---|
| target | string | Required | Subreddit name or username (1-255 chars) |
| target_type | string | Default: subreddit | subreddit or user |
| mode | string | Default: full | full, history, or monitor |
| limit | integer | Default: 100 | Number of posts to scrape (1-10000) |
| download_media | boolean | Default: true | Download images and videos |
| scrape_comments | boolean | Default: true | Also scrape comments |
| plugins | string[] | Default: [] | List of plugin names to run |
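To make the defaults and constraints above concrete, here is a minimal client-side sketch in Python; the `build_job_request` helper is hypothetical, not part of any official client:

```python
def build_job_request(target, target_type="subreddit", mode="full",
                      limit=100, download_media=True, scrape_comments=True,
                      plugins=None):
    """Build a job-creation payload, enforcing the documented constraints."""
    if not 1 <= len(target) <= 255:
        raise ValueError("target must be 1-255 characters")
    if target_type not in ("subreddit", "user"):
        raise ValueError("target_type must be 'subreddit' or 'user'")
    if mode not in ("full", "history", "monitor"):
        raise ValueError("mode must be 'full', 'history', or 'monitor'")
    if not 1 <= limit <= 10000:
        raise ValueError("limit must be between 1 and 10000")
    return {
        "target": target,
        "target_type": target_type,
        "mode": mode,
        "limit": limit,
        "download_media": download_media,
        "scrape_comments": scrape_comments,
        "plugins": plugins or [],
    }
```

The returned dict can be serialized with `json.dumps` and sent as the request body.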
```json
{
  "target": "technology",
  "target_type": "subreddit",
  "mode": "full",
  "limit": 100,
  "download_media": true,
  "scrape_comments": true
}
```

Code Examples
```bash
curl -X POST https://api.sentrasa.com/api/v1/workspaces/{workspace_id}/jobs \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "target": "technology",
    "target_type": "subreddit",
    "mode": "full",
    "limit": 100
  }'
```

Response
201: Job created and queued

```json
{
  "id": "job_550e8400-e29b-41d4-a716-446655440000",
  "workspace_id": "ws_550e8400-e29b-41d4-a716-446655440000",
  "target": "technology",
  "target_type": "subreddit",
  "mode": "full",
  "config": {},
  "status": "pending",
  "started_at": null,
  "completed_at": null,
  "duration_seconds": null,
  "posts_scraped": 0,
  "comments_scraped": 0,
  "media_downloaded": 0,
  "error_message": null,
  "progress": {},
  "created_at": "2026-03-20T14:00:00Z"
}
```

GET /api/v1/workspaces/{workspace_id}/jobs

List Jobs
List scrape jobs with optional status filtering.
Path Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| workspace_id | uuid | Required | Workspace ID |
Query Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| status | string | Optional | Filter by status: pending, running, completed, cancelled |
| limit | integer | Default: 50 | Max results (1-200) |
| offset | integer | Default: 0 | Offset for pagination |
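Because the response reports `has_more`, every job can be collected by advancing `offset` page by page. A sketch in Python; the `fetch_page` callable stands in for the HTTP call and is an assumption, not part of any official client:

```python
def list_all_jobs(fetch_page, page_size=50):
    """Collect every job by advancing `offset` until `has_more` is false.

    `fetch_page(limit, offset)` must return a dict shaped like the
    List Jobs response: {"jobs": [...], "total": int, "has_more": bool}.
    """
    jobs, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        jobs.extend(page["jobs"])
        if not page["has_more"]:
            return jobs
        offset += page_size
```

Keep `page_size` within the documented 1-200 range for the `limit` parameter.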
Code Examples
```bash
curl "https://api.sentrasa.com/api/v1/workspaces/{workspace_id}/jobs?status=completed&limit=20" \
  -H "X-API-Key: rp_your_api_key"
```

Response
200: Paginated job list

```json
{
  "jobs": [
    {
      "id": "job_550e8400-e29b-41d4-a716-446655440000",
      "target": "technology",
      "target_type": "subreddit",
      "status": "completed",
      "posts_scraped": 100,
      "comments_scraped": 450,
      "created_at": "2026-03-20T14:00:00Z"
    }
  ],
  "total": 1,
  "has_more": false
}
```

GET /api/v1/workspaces/{workspace_id}/jobs/{job_id}

Get Job
Get details and progress of a specific scrape job.
Path Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| workspace_id | uuid | Required | Workspace ID |
| job_id | uuid | Required | Job ID |
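A common pattern is to poll this endpoint until the job leaves the pending/running states. A minimal sketch; the `get_job` callable and `sleep` hook are injected so the example stays self-contained, and the choice of terminal statuses is an assumption based on the statuses the List Jobs filter documents:

```python
import time

def wait_for_job(get_job, poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll `get_job()` until the job reaches a terminal status.

    `get_job()` must return a dict shaped like the Get Job response.
    Anything other than "pending" or "running" is treated as terminal
    (e.g. "completed" or "cancelled").
    """
    waited = 0.0
    while waited <= timeout:
        job = get_job()
        if job["status"] not in ("pending", "running"):
            return job
        sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError(f"job did not finish within {timeout:.0f} seconds")
```

In real use, `get_job` would be a closure around the GET request shown in the code example below.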
Code Examples
```bash
curl https://api.sentrasa.com/api/v1/workspaces/{workspace_id}/jobs/{job_id} \
  -H "X-API-Key: rp_your_api_key"
```

Response
200: Job details

```json
{
  "id": "job_550e8400-e29b-41d4-a716-446655440000",
  "workspace_id": "ws_550e8400-e29b-41d4-a716-446655440000",
  "target": "technology",
  "target_type": "subreddit",
  "mode": "full",
  "config": {},
  "status": "completed",
  "started_at": "2026-03-20T14:00:05Z",
  "completed_at": "2026-03-20T14:02:30Z",
  "duration_seconds": 145.2,
  "posts_scraped": 100,
  "comments_scraped": 450,
  "media_downloaded": 32,
  "error_message": null,
  "progress": {
    "posts": 100,
    "comments": 450
  },
  "created_at": "2026-03-20T14:00:00Z"
}
```

DELETE /api/v1/workspaces/{workspace_id}/jobs/{job_id}

Cancel Job
Cancel a pending or running scrape job.
Path Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| workspace_id | uuid | Required | Workspace ID |
| job_id | uuid | Required | Job ID |
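As a sketch of the request shape, the cancellation call can be built with the Python standard library without sending anything over the network; the base URL and `rp_`-prefixed key are placeholders taken from the examples in this document:

```python
from urllib.request import Request

def build_cancel_request(workspace_id, job_id, api_key,
                         base_url="https://api.sentrasa.com"):
    """Build (but do not send) the DELETE request that cancels a job."""
    return Request(
        url=f"{base_url}/api/v1/workspaces/{workspace_id}/jobs/{job_id}",
        method="DELETE",
        headers={"X-API-Key": api_key},
    )
```

Passing the result to `urllib.request.urlopen` would perform the actual cancellation; only pending or running jobs can be cancelled.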
Code Examples
```bash
curl -X DELETE https://api.sentrasa.com/api/v1/workspaces/{workspace_id}/jobs/{job_id} \
  -H "X-API-Key: rp_your_api_key"
```

Response
200: Job cancelled

```json
{
  "message": "Job cancelled successfully"
}
```