List Jobs
GET /api/v1/jobs/list
Overview
Get a list of all jobs submitted by your client account with filtering, pagination, and sorting options.
Query Parameters
page (integer, default: 1)
Page number (starting from 1).
Example: ?page=2

page_size (integer, default: 50)
Number of jobs per page (max: 100).
Example: ?page_size=25

status_filter (string)
Filter by job status.
Options: queued, processing, completed, failed, cancelled
Example: ?status_filter=completed

type_filter (string)
Filter by job type.
Options: spiderSite, spiderMaps
Example: ?type_filter=spiderSite

sort_by (string, default: created_at)
Field to sort by.
Options: created_at, updated_at, status
Example: ?sort_by=updated_at

sort_order (string, default: desc)
Sort order.
Options: asc, desc
Example: ?sort_order=asc

format (string)
Response format for AI agent integration (v2.60.0).
Options:
yaml - Token-efficient YAML list format
md - Human-readable Markdown table format
Default: JSON (no format parameter)
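For agent-facing integrations, the format parameter only changes the response encoding, not the data. A minimal sketch of requesting the YAML variant with Python's requests library (the exact YAML layout returned is not reproduced here):
import requests

# Request the job list as token-efficient YAML instead of the default JSON.
response = requests.get(
    "https://spideriq.ai/api/v1/jobs/list",
    headers={"Authorization": "Bearer <your_token>"},
    params={"format": "yaml", "page_size": 10},
)
print(response.text)  # YAML text body; use format=md for a Markdown table instead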
Response
total (integer): Total number of jobs matching the filter
page (integer): Current page number
page_size (integer): Number of items per page
total_pages (integer): Total number of pages available
jobs (array): Array of job objects
Job Object
job_id (string): Unique job identifier (UUID)
type (string): Job type (spiderSite or spiderMaps)
status (string): Current job status
url (string): The URL that was scraped
created_at (string): ISO 8601 timestamp when the job was created
updated_at (string): ISO 8601 timestamp of the last update
worker_id (string): ID of the worker that processed the job (if assigned)
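If you want typed access to these fields on the client side, one illustrative (not official) way to model the job object in Python is a TypedDict; the Optional worker_id is our reading of "if assigned":
from typing import Optional, TypedDict

# Illustrative client-side model; field names mirror the job object above.
class Job(TypedDict):
    job_id: str                # UUID string
    type: str                  # "spiderSite" or "spiderMaps"
    status: str                # queued, processing, completed, failed, cancelled
    url: str                   # the URL that was scraped
    created_at: str            # ISO 8601 creation timestamp
    updated_at: str            # ISO 8601 last-update timestamp
    worker_id: Optional[str]   # worker that processed the job, if assigned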
Example Request
cURL - Basic:
curl https://spideriq.ai/api/v1/jobs/list \
-H "Authorization: Bearer <your_token>"
cURL - With Filters:
curl "https://spideriq.ai/api/v1/jobs/list?status_filter=completed&type_filter=spiderSite&page=1&page_size=20" \
-H "Authorization: Bearer <your_token>"
Python:
import requests
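# List completed spiderSite jobs, 20 per page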
url = "https://spideriq.ai/api/v1/jobs/list"
headers = {
"Authorization": "Bearer <your_token>"
}
params = {
"status_filter": "completed",
"type_filter": "spiderSite",
"page": 1,
"page_size": 20
}
response = requests.get(url, headers=headers, params=params)
print(response.json())
JavaScript:
const params = new URLSearchParams({
status_filter: 'completed',
type_filter: 'spiderSite',
page: '1',
page_size: '20'
});
const response = await fetch(
`https://spideriq.ai/api/v1/jobs/list?${params}`,
{
headers: {
'Authorization': 'Bearer <your_token>'
}
}
);
const data = await response.json();
console.log(data);
Example Response
200 OK:
{
"total": 1234,
"page": 1,
"page_size": 50,
"total_pages": 25,
"jobs": [
{
"job_id": "550e8400-e29b-41d4-a716-446655440000",
"type": "spiderSite",
"status": "completed",
"url": "https://example.com",
"created_at": "2025-10-27T10:00:00Z",
"updated_at": "2025-10-27T10:02:45Z",
"worker_id": "spider-site-main-1"
},
{
"job_id": "660e8400-e29b-41d4-a716-446655440001",
"type": "spiderMaps",
"status": "completed",
"url": "https://maps.google.com/...",
"created_at": "2025-10-27T09:55:00Z",
"updated_at": "2025-10-27T09:56:30Z",
"worker_id": "spider-maps-main-1"
},
{
"job_id": "770e8400-e29b-41d4-a716-446655440002",
"type": "spiderSite",
"status": "processing",
"url": "https://blog.example.com",
"created_at": "2025-10-27T10:05:00Z",
"updated_at": "2025-10-27T10:05:30Z",
"worker_id": "spider-site-main-2"
}
]
}
400 Bad Request:
{
"detail": "Invalid page_size. Maximum allowed is 100."
}
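A minimal sketch of catching this validation error in Python, reusing the request pattern from the examples above:
import requests

response = requests.get(
    "https://spideriq.ai/api/v1/jobs/list",
    headers={"Authorization": "Bearer <your_token>"},
    params={"page_size": 500},  # exceeds the documented maximum of 100
)
if response.status_code == 400:
    # validation errors carry a human-readable "detail" message
    print("Request rejected:", response.json()["detail"])
else:
    response.raise_for_status()
    print("Total jobs:", response.json()["total"])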
Pagination Example
Python - Iterate All Pages:
import requests
def get_all_jobs(auth_token, status_filter=None, type_filter=None):
"""Fetch all jobs across multiple pages"""
url = "https://spideriq.ai/api/v1/jobs/list"
headers = {"Authorization": f"Bearer {auth_token}"}
all_jobs = []
page = 1
page_size = 100 # Use maximum page size
while True:
params = {
"page": page,
"page_size": page_size
}
if status_filter:
params["status_filter"] = status_filter
if type_filter:
params["type_filter"] = type_filter
response = requests.get(url, headers=headers, params=params)
data = response.json()
all_jobs.extend(data["jobs"])
# Check if we've reached the last page
if page >= data["total_pages"]:
break
page += 1
return all_jobs
# Usage
jobs = get_all_jobs(
"<your_token>",
status_filter="completed",
type_filter="spiderSite"
)
print(f"Found {len(jobs)} jobs")
JavaScript - Iterate All Pages:
async function getAllJobs(authToken, statusFilter, typeFilter) {
const url = 'https://spideriq.ai/api/v1/jobs/list';
const headers = {
'Authorization': `Bearer ${authToken}`
};
let allJobs = [];
let page = 1;
const pageSize = 100; // Use maximum page size
while (true) {
const params = new URLSearchParams({
page: page.toString(),
page_size: pageSize.toString()
});
if (statusFilter) params.append('status_filter', statusFilter);
if (typeFilter) params.append('type_filter', typeFilter);
const response = await fetch(`${url}?${params}`, { headers });
const data = await response.json();
allJobs = allJobs.concat(data.jobs);
// Check if we've reached the last page
if (page >= data.total_pages) {
break;
}
page++;
}
return allJobs;
}
// Usage
const jobs = await getAllJobs(
'<your_token>',
'completed',
'spiderSite'
);
console.log(`Found ${jobs.length} jobs`);
Use Cases
Get Recent Completed Jobs
curl "https://spideriq.ai/api/v1/jobs/list?status_filter=completed&sort_by=updated_at&sort_order=desc&page_size=10" \
-H "Authorization: Bearer <your_token>"
Get Failed Jobs for Debugging
curl "https://spideriq.ai/api/v1/jobs/list?status_filter=failed&sort_by=updated_at&sort_order=desc" \
-H "Authorization: Bearer <your_token>"
Get All SpiderMaps Jobs
curl "https://spideriq.ai/api/v1/jobs/list?type_filter=spiderMaps" \
-H "Authorization: Bearer <your_token>"
Get Jobs Currently Processing
curl "https://spideriq.ai/api/v1/jobs/list?status_filter=processing" \
-H "Authorization: Bearer <your_token>"
Notes
Default sorting: Jobs are sorted by created_at in descending order (newest first).
Performance: Use the maximum page_size=100 for fewer API calls when fetching large datasets.
Rate limits apply: Each request counts toward your 100 requests/minute limit. When iterating through many pages, implement rate limiting in your code.
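Because every page request counts toward the 100 requests/minute limit, pausing briefly between pages is a simple way to stay under it. A minimal sketch along the lines of the get_all_jobs helper above (the 0.7-second interval is an assumption chosen to stay at roughly 85 requests/minute, not a value prescribed by the API):
import time
import requests

def get_all_jobs_throttled(auth_token, min_interval=0.7):
    """Fetch all jobs, sleeping between page requests to stay under
    the 100 requests/minute rate limit."""
    url = "https://spideriq.ai/api/v1/jobs/list"
    headers = {"Authorization": f"Bearer {auth_token}"}
    all_jobs = []
    page = 1
    while True:
        response = requests.get(
            url,
            headers=headers,
            params={"page": page, "page_size": 100},
        )
        data = response.json()
        all_jobs.extend(data["jobs"])
        if page >= data["total_pages"]:
            break
        page += 1
        time.sleep(min_interval)  # throttle before requesting the next page
    return all_jobs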