Submit Job
POST /api/v1/jobs/submit

Overview
Submit a new job to scrape a single URL. This endpoint queues your job for processing by available workers.
Request Body
url (string, required)
The URL to scrape (must be a valid HTTP/HTTPS URL).
Example: https://example.com

job_type (string, required)
Type of scraping job to perform.
Options:
- spiderSite: website scraping using Crawl4AI
- spiderMaps: Google Maps business scraping

instructions (string, optional)
AI instructions for content extraction (spiderSite only).
Example: "Extract all product names and prices"
Response
success (boolean)
Whether the job was successfully queued.

job_id (string)
Unique identifier for the submitted job (UUID format).

type (string)
Type of job submitted (spiderSite or spiderMaps).

status (string)
Initial job status (always queued).

message (string)
Human-readable confirmation message.
Example Request
cURL

```bash
curl -X POST https://spideriq.ai/api/v1/jobs/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "job_type": "spiderSite",
    "instructions": "Extract all contact information"
  }'
```

Python

```python
import requests

url = "https://spideriq.ai/api/v1/jobs/submit"
headers = {
    "Authorization": "Bearer <your_token>",
    "Content-Type": "application/json",
}
data = {
    "url": "https://example.com",
    "job_type": "spiderSite",
    "instructions": "Extract all contact information",
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

JavaScript

```javascript
const response = await fetch('https://spideriq.ai/api/v1/jobs/submit', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <your_token>',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://example.com',
    job_type: 'spiderSite',
    instructions: 'Extract all contact information'
  })
});

const data = await response.json();
console.log(data);
```
Example Response
201 Created

```json
{
  "success": true,
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "type": "spiderSite",
  "status": "queued",
  "message": "Job queued successfully"
}
```

400 Bad Request

```json
{
  "detail": "Invalid URL format. Please provide a valid HTTP/HTTPS URL."
}
```

401 Unauthorized

```json
{
  "detail": "Invalid authentication token format. Expected: client_id:api_key:api_secret"
}
```

429 Too Many Requests

```json
{
  "detail": "Rate limit exceeded. Maximum 100 requests per minute."
}
```
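When a 429 comes back, the request can be retried after a short backoff. A minimal sketch, in which the helper name and backoff schedule are illustrative and not part of the API:

```python
import time

def submit_with_retry(send_request, max_retries=3, base_delay=1.0):
    """Call send_request() and retry with exponential backoff on HTTP 429.

    send_request is any zero-argument callable returning an object with a
    .status_code attribute, e.g.
    lambda: requests.post(url, headers=headers, json=data).
    """
    response = send_request()
    for attempt in range(max_retries):
        if response.status_code != 429:
            break
        # Back off 1s, 2s, 4s, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
        response = send_request()
    return response
```

With the Python example above, the call would be wrapped as `submit_with_retry(lambda: requests.post(url, headers=headers, json=data))`.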
Next Steps
After submitting a job:
- Poll for status using GET /api/v1/jobs/{id}/status
- Retrieve results when status is completed using GET /api/v1/jobs/{id}/results
Note: Jobs are processed asynchronously. Use the /status endpoint to monitor progress.
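The submit-then-poll flow can be sketched as a small helper. The /status and /results paths come from this page; the status values ("completed", "failed") and the shape of the status payload are assumptions to check against the real responses:

```python
import time

def wait_for_results(get_status, get_results, interval=2.0, timeout=120.0):
    """Poll get_status() until the job completes, then return get_results().

    get_status / get_results are zero-argument callables wrapping
    GET /api/v1/jobs/{id}/status and GET /api/v1/jobs/{id}/results.
    Assumes the status payload carries a "status" field such as
    "queued", "completed", or "failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = get_status()
        state = payload.get("status")
        if state == "completed":
            return get_results()
        if state == "failed":
            raise RuntimeError(f"job failed: {payload}")
        time.sleep(interval)
    raise TimeoutError(f"job did not complete within {timeout}s")
```

Wired up with requests, this would look like `wait_for_results(lambda: requests.get(f"{base}/jobs/{job_id}/status", headers=headers).json(), lambda: requests.get(f"{base}/jobs/{job_id}/results", headers=headers).json())`.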
Job Types Explained
spiderSite
- Scrapes website content using the Crawl4AI library
- Supports AI-powered content extraction with custom instructions
- Best for: Blogs, articles, product pages, documentation
spiderMaps
- Scrapes Google Maps business data
- Extracts: Name, address, phone, website, hours, reviews, etc.
- Best for: Local business research, competitor analysis
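A spiderMaps submission uses the same /api/v1/jobs/submit endpoint; only the body changes, since the instructions field applies to spiderSite jobs only. A minimal sketch of the request body, where the builder function and the example Maps URL are illustrative placeholders:

```python
import json

def build_maps_job(maps_url):
    """Build a spiderMaps submission body.

    spiderMaps jobs need only url and job_type; the "instructions"
    field is spiderSite-only.
    """
    return {"url": maps_url, "job_type": "spiderMaps"}

# Placeholder Maps URL, not a real listing.
payload = build_maps_job("https://www.google.com/maps/place/Example+Business")
print(json.dumps(payload, indent=2))
```

The payload is POSTed with the same headers and endpoint as the spiderSite examples above.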