Submit Job

POST/api/v1/jobs/submit

Overview

Submit a new job to scrape a single URL. This endpoint queues your job for processing by available workers.

Request Body

url (string, required)

The URL to scrape (must be a valid HTTP/HTTPS URL)

Example: https://example.com

job_type (string, required)

Type of scraping job to perform

Options:

  • spiderSite - Website scraping using Crawl4AI
  • spiderMaps - Google Maps business scraping
instructions (string, optional)

Optional AI instructions for content extraction (spiderSite only)

Example: "Extract all product names and prices"
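Before sending a request, the body can be validated client-side. The following is a minimal sketch; the helper name and validation logic are illustrative, while the `url`, `job_type`, and `instructions` fields and the two allowed job types come from the schema above.

```python
from urllib.parse import urlparse

VALID_JOB_TYPES = {"spiderSite", "spiderMaps"}

def build_submit_payload(url, job_type, instructions=None):
    """Build and validate a request body for POST /api/v1/jobs/submit."""
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(f"url must be a valid HTTP/HTTPS URL, got scheme {scheme!r}")
    if job_type not in VALID_JOB_TYPES:
        raise ValueError(f"job_type must be one of {sorted(VALID_JOB_TYPES)}")
    payload = {"url": url, "job_type": job_type}
    # instructions is only meaningful for spiderSite jobs, per the docs
    if instructions is not None:
        payload["instructions"] = instructions
    return payload
```

The resulting dict can be serialized with `json.dumps` and sent as the request body.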

Response

success (boolean)

Whether the job was successfully queued

job_id (string)

Unique identifier for the submitted job (UUID format)

type (string)

Type of job submitted (spiderSite or spiderMaps)

status (string)

Initial job status (always queued)

message (string)

Human-readable confirmation message

Example Request

curl -X POST https://spideriq.ai/api/v1/jobs/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "job_type": "spiderSite",
    "instructions": "Extract all contact information"
  }'

Example Response

{
  "success": true,
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "type": "spiderSite",
  "status": "queued",
  "message": "Job queued successfully"
}
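A client will typically check `success` and keep the `job_id` for polling. A minimal sketch of that handling, using only the response fields documented above (the function name is illustrative):

```python
import json

def parse_submit_response(body: str) -> str:
    """Return the job_id from a submit response, raising if the job was not queued."""
    data = json.loads(body)
    if not data.get("success"):
        # message is the human-readable confirmation/error text
        raise RuntimeError(f"job submission failed: {data.get('message')}")
    return data["job_id"]
```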

Next Steps

After submitting a job:

  1. Poll for status using GET /api/v1/jobs/{id}/status
  2. Retrieve results when status is completed using GET /api/v1/jobs/{id}/results
Note

Jobs are processed asynchronously. Use the /status endpoint to monitor progress.
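The poll-then-fetch flow above can be sketched as a simple loop. This is an assumption-laden sketch: the status endpoint and the `completed` status come from the docs, while the `failed` status, the status-dict shape, and the injectable `fetch_status` callable (standing in for an HTTP GET to `/api/v1/jobs/{id}/status`) are illustrative.

```python
import time

def wait_for_completion(job_id, fetch_status, interval=2.0, timeout=300.0):
    """Poll a job's status until it completes or the timeout expires.

    fetch_status is any callable mapping a job_id to a status dict
    (e.g. a wrapper around GET /api/v1/jobs/{id}/status); it is injected
    here so the loop itself stays free of HTTP details.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)["status"]
        if status == "completed":
            return status
        if status == "failed":  # assumed terminal error status
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not complete within {timeout}s")
```

Once the loop returns, the results can be fetched from the `/results` endpoint.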

Job Types Explained

spiderSite

  • Scrapes website content using Crawl4AI library
  • Supports AI-powered content extraction with custom instructions
  • Best for: Blogs, articles, product pages, documentation

spiderMaps

  • Scrapes Google Maps business data
  • Extracts: Name, address, phone, website, hours, reviews, etc.
  • Best for: Local business research, competitor analysis