List Campaign Jobs
`GET /api/v1/jobs/spiderMaps/campaigns/{campaign_id}/jobs`
Retrieve a paginated list of all jobs submitted for a specific campaign. This endpoint is useful for:

- Getting job IDs to use with `/{campaign_id}/jobs/{job_id}/results`
- Monitoring job progress within a campaign
- Identifying failed jobs for investigation or retry
Path Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| campaign_id | string | required | The campaign identifier |

Query Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| status | string | | Filter by campaign location status: pending, submitted, completed, failed, skipped |
| job_status | string | | Filter by actual job queue status: queued, processing, completed, failed, cancelled |
| page | integer | 1 | Page number for pagination |
| page_size | integer | 50 | Items per page (1-100) |
| include_summary | boolean | true | Include summary counts by status |
Response

| Field | Type | Description |
|---|---|---|
| campaign_id | string | The campaign identifier |
| total | integer | Total number of jobs matching filters |
| page | integer | Current page number |
| page_size | integer | Items per page |
| jobs | array | Array of job items |
| summary | object | Job counts by status (when `include_summary=true`) |
Examples
List All Jobs for a Campaign
- cURL
- Python
- JavaScript
```bash
curl https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/camp_lu_restaurants_abc123/jobs \
  -H "Authorization: Bearer <your_token>"
```
```python
import requests

campaign_id = "camp_lu_restaurants_abc123"

response = requests.get(
    f"https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/{campaign_id}/jobs",
    headers={"Authorization": "Bearer <your_token>"}
)

result = response.json()
print(f"Total jobs: {result['total']}")
print(f"Summary: {result['summary']}")

for job in result['jobs']:
    print(f"  {job['search_string']}: {job['status']} - {job['results_count'] or 0} businesses")
    if job['job_id']:
        print(f"    Job ID: {job['job_id']}")
```
```javascript
const campaignId = "camp_lu_restaurants_abc123";

const response = await fetch(
  `https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/${campaignId}/jobs`,
  {
    headers: { Authorization: "Bearer <your_token>" }
  }
);

const result = await response.json();
console.log(`Total jobs: ${result.total}`);
console.log(`Summary:`, result.summary);

// Get all completed job IDs
const completedJobIds = result.jobs
  .filter(job => job.status === 'completed' && job.job_id)
  .map(job => job.job_id);
console.log(`Completed job IDs:`, completedJobIds);
```
Response:
```json
{
  "campaign_id": "camp_lu_restaurants_abc123",
  "total": 12,
  "page": 1,
  "page_size": 50,
  "jobs": [
    {
      "job_id": "550e8400-e29b-41d4-a716-446655440001",
      "location_id": 12345,
      "search_string": "Luxembourg City, Luxembourg",
      "display_name": "Luxembourg City",
      "status": "completed",
      "job_status": "completed",
      "results_count": 47,
      "error_message": null,
      "submitted_at": "2025-12-22T17:15:00Z",
      "completed_at": "2025-12-22T17:17:30Z",
      "sequence_order": 1
    },
    {
      "job_id": "550e8400-e29b-41d4-a716-446655440002",
      "location_id": 12346,
      "search_string": "Esch-sur-Alzette, Luxembourg",
      "display_name": "Esch-sur-Alzette",
      "status": "completed",
      "job_status": "completed",
      "results_count": 23,
      "error_message": null,
      "submitted_at": "2025-12-22T17:17:35Z",
      "completed_at": "2025-12-22T17:19:10Z",
      "sequence_order": 2
    },
    {
      "job_id": null,
      "location_id": 12347,
      "search_string": "Differdange, Luxembourg",
      "display_name": "Differdange",
      "status": "pending",
      "job_status": null,
      "results_count": null,
      "error_message": null,
      "submitted_at": null,
      "completed_at": null,
      "sequence_order": 3
    }
  ],
  "summary": {
    "pending": 5,
    "submitted": 2,
    "completed": 4,
    "failed": 1,
    "skipped": 0
  }
}
```
Filter by Status
Get only completed jobs:
```bash
curl "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/camp_lu_restaurants_abc123/jobs?status=completed" \
  -H "Authorization: Bearer <your_token>"
```
Filter by Job Status
Get jobs that are currently processing:
```bash
curl "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/camp_lu_restaurants_abc123/jobs?job_status=processing" \
  -H "Authorization: Bearer <your_token>"
```
Get Failed Jobs
Identify failed jobs for investigation:
```bash
curl "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/camp_lu_restaurants_abc123/jobs?status=failed" \
  -H "Authorization: Bearer <your_token>"
```
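Once the failed locations are in hand, their error messages can be collected for triage. A minimal sketch (the helper name `failed_job_report` and its field selection are ours; the job dicts follow the `/jobs` response shape shown above):

```python
def failed_job_report(jobs):
    """Summarize failed campaign locations from a /jobs response.

    Keeps only the fields needed for investigation or retry.
    """
    return [
        {
            "search_string": job["search_string"],
            "job_id": job["job_id"],
            "error": job.get("error_message") or "unknown error",
        }
        for job in jobs
        if job["status"] == "failed"
    ]
```

Pairing this with `?status=failed` keeps the filtering on the server and the report-building on the client.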
Pagination
Get the second page of results:
```bash
curl "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/camp_lu_restaurants_abc123/jobs?page=2&page_size=20" \
  -H "Authorization: Bearer <your_token>"
```
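To walk every page, keep requesting until `page * page_size` covers `total`. A minimal sketch, where the `fetch_page` callable is a stand-in for the HTTP request (the function name is ours; the response keys match the fields documented above):

```python
def fetch_all_jobs(fetch_page, page_size=50):
    """Collect jobs across all pages of the /jobs endpoint.

    fetch_page(page, page_size) must return one response dict with
    'jobs', 'total', and 'page_size' keys, as documented above.
    """
    jobs, page = [], 1
    while True:
        data = fetch_page(page, page_size)
        jobs.extend(data["jobs"])
        # Stop once the pages requested so far cover the total count.
        if page * data["page_size"] >= data["total"]:
            return jobs
        page += 1
```

Injecting the fetch function keeps the pagination logic independent of the HTTP client and easy to test.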
Common Use Cases
Get All Job IDs for Detailed Results
```python
import requests

campaign_id = "camp_lu_restaurants_abc123"
base_url = "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns"
headers = {"Authorization": "Bearer <your_token>"}

# Get all completed jobs
response = requests.get(
    f"{base_url}/{campaign_id}/jobs?status=completed",
    headers=headers
)
jobs = response.json()["jobs"]

# Fetch detailed results for each job
for job in jobs:
    if job["job_id"]:
        results = requests.get(
            f"{base_url}/{campaign_id}/jobs/{job['job_id']}/results",
            headers=headers
        )
        print(f"{job['search_string']}: {results.json()['businesses_total']} businesses")
```
Monitor Campaign Progress
```python
import time
import requests

def monitor_campaign(campaign_id, token):
    headers = {"Authorization": f"Bearer {token}"}
    url = f"https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/{campaign_id}/jobs"

    while True:
        response = requests.get(url, headers=headers)
        summary = response.json()["summary"]

        total = sum(summary.values())
        # Everything that is no longer pending or submitted is done,
        # including skipped locations.
        done = total - summary["pending"] - summary["submitted"]
        print(f"Progress: {done}/{total} ({summary})")

        if summary["pending"] == 0 and summary["submitted"] == 0:
            print("Campaign complete!")
            break
        time.sleep(30)  # Check every 30 seconds
```
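The progress arithmetic in that loop can be factored into a pure helper so it can be reused and tested without hitting the API. A sketch (the name `campaign_progress` is ours; the input is the `summary` object from the response):

```python
def campaign_progress(summary):
    """Return (done, total, finished) from the summary counts.

    A location counts as done once it is no longer pending or
    submitted, which also covers skipped locations.
    """
    total = sum(summary.values())
    done = total - summary["pending"] - summary["submitted"]
    finished = summary["pending"] == 0 and summary["submitted"] == 0
    return done, total, finished
```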
Status Definitions
Campaign Location Status (status)
| Status | Description |
|---|---|
| pending | Location hasn't been processed yet (no job submitted) |
| submitted | Job has been submitted to the queue |
| completed | Job finished successfully |
| failed | Job failed with an error |
| skipped | Location was skipped (e.g., no results expected) |
Job Queue Status (job_status)
| Status | Description |
|---|---|
| queued | Job is waiting in RabbitMQ queue |
| processing | Worker is actively processing the job |
| completed | Job finished successfully |
| failed | Job failed during processing |
| cancelled | Job was cancelled |
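When polling, only some of these statuses are terminal. A sketch of that distinction, with the sets taken from the two tables above (the constant and function names are ours):

```python
# Per the tables above: statuses that will not change on a later poll.
TERMINAL_LOCATION_STATUSES = {"completed", "failed", "skipped"}
TERMINAL_JOB_STATUSES = {"completed", "failed", "cancelled"}

def is_location_done(status):
    """True once a campaign location needs no further polling."""
    return status in TERMINAL_LOCATION_STATUSES

def is_job_done(job_status):
    """True once a job has reached a terminal queue state."""
    return job_status in TERMINAL_JOB_STATUSES
```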
Related Endpoints
- Get Campaign Status - Overall campaign progress
- Get Workflow Job Results - Detailed results for a specific job
- Get Workflow Results - Aggregated results for entire campaign