Create Campaign
POST /api/v1/jobs/spiderMaps/campaigns/submit
Create a new SpiderMaps campaign that will scrape business listings across multiple locations in a country. The campaign system handles location iteration automatically, making it easy to scrape entire countries or regions.
v2.14.0 Feature: The Campaign System includes a location database with 44,691 cities across 240 countries.
v2.15.0 Workflow Orchestration: Enable automatic job chaining with the workflow parameter to run SpiderMaps → SpiderSite → SpiderVerify automatically. See Orchestrated Campaigns Guide.
Request Body
search_query (string, required): The search query for Google Maps (e.g., "restaurants", "hotels", "dentists").
Note: query is also accepted for backwards compatibility but is deprecated.
country_code (string, required): ISO 3166-1 alpha-2 country code (e.g., "FR" for France, "DE" for Germany).
name (string, optional): Campaign name for identification.
max_results (integer, default: 100): v2.34.0: Maximum results per location (1-500). Previously hardcoded to 100.
extract_reviews (boolean, default: false): v2.34.0: Extract customer reviews from each business.
extract_photos (boolean, default: false): v2.34.0: Extract photo URLs from listings.
lang (string, default: en): v2.34.0: Language code for Google Maps (en, de, fr, es, etc.).
store_images (boolean, default: true): v2.34.0: Store business images in the SeaweedFS media server.
validate_phones (boolean, default: true): v2.34.0: Validate and format phone numbers using libphonenumber.
fuzziq_enabled (boolean): v2.34.0: Enable FuzzIQ deduplication. Uses the client-level setting if not specified.
fuzziq_unique_only (boolean): v2.34.0: Return only unique records, filtering out duplicates.
skip_proxy (boolean, default: false): v2.34.0: Skip the mobile proxy for this campaign's jobs.
test (boolean, default: false): v2.34.0: Route all campaign jobs to the test queue.
filter (object): Location filter configuration (see Filter Modes below).
workflow (object): v2.15.0: Workflow orchestration configuration for automatic job chaining.
Filter Modes
| Mode | Description | Use Case |
|---|---|---|
| all | All locations in the country | Complete country coverage |
| population | Filter by population range | Target cities of a specific size |
| cities_only | Only cities, no postcodes | Skip postcode-level granularity |
| custom | Specific location IDs | Target specific pre-selected cities |
| regions | Specific admin regions | Target states/provinces |
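The filter object shapes implied by the table can be sketched as follows. Only the population mode's field (min_population) is confirmed by the examples on this page; the field names used here for the custom and regions modes (location_ids, regions) are illustrative assumptions.

```python
# Illustrative filter payloads for each mode. Only "population"
# (min_population) appears in this page's examples; the "custom" and
# "regions" field names below are assumptions for illustration.
FILTER_EXAMPLES = {
    "all": {"mode": "all"},
    "population": {"mode": "population", "min_population": 500000},
    "cities_only": {"mode": "cities_only"},
    "custom": {"mode": "custom", "location_ids": [28451, 12301]},     # assumed field name
    "regions": {"mode": "regions", "regions": ["Bavaria", "Hesse"]},  # assumed field name
}

# A filter is attached to the request body under the "filter" key:
payload = {
    "search_query": "hotels",
    "country_code": "DE",
    "filter": FILTER_EXAMPLES["population"],
}
```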
Response
campaign_id (string): Unique campaign identifier (e.g., "camp_fr_restaurant_20251222_abc123").
status (string): Campaign status: active, paused, completed, or stopped.
query (string): The search query used.
country_code (string): The country code.
total_locations (integer): Number of locations to be scraped.
next_location_id (integer): ID of the first location to be processed.
created_at (string): ISO 8601 timestamp of creation.
has_workflow (boolean): v2.15.0: Whether workflow orchestration is enabled for this campaign.
workflow_config (object): v2.15.0: The workflow configuration if enabled (mirrors the request).
max_results (integer): v2.34.0: Maximum results per location.
extract_reviews (boolean): v2.34.0: Whether review extraction is enabled.
extract_photos (boolean): v2.34.0: Whether photo extraction is enabled.
lang (string): v2.34.0: Language code for Google Maps.
store_images (boolean): v2.34.0: Whether image storage is enabled.
validate_phones (boolean): v2.34.0: Whether phone validation is enabled.
fuzziq_enabled (boolean): v2.34.0: Whether FuzzIQ deduplication is enabled.
fuzziq_unique_only (boolean): v2.34.0: Whether to return unique records only.
skip_proxy (boolean): v2.34.0: Whether the mobile proxy is skipped.
test (boolean): v2.34.0: Whether test-queue routing is enabled.
Examples
Basic Campaign (All Locations)
cURL:

```bash
curl -X POST https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "restaurants",
    "country_code": "LU"
  }'
```

Python:

```python
import requests

response = requests.post(
    "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit",
    headers={"Authorization": "Bearer <your_token>"},
    json={
        "query": "restaurants",
        "country_code": "LU"
    }
)
campaign = response.json()
print(f"Campaign created: {campaign['campaign_id']}")
print(f"Locations to scrape: {campaign['total_locations']}")
```
Response:

```json
{
  "campaign_id": "camp_lu_restaurants_20251222171302_cee5b94a",
  "status": "active",
  "query": "restaurants",
  "country_code": "LU",
  "total_locations": 12,
  "next_location_id": 28451,
  "created_at": "2025-12-22T17:13:02Z"
}
```
Population-Filtered Campaign
cURL:

```bash
curl -X POST https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "hotels",
    "country_code": "DE",
    "name": "Germany Hotels - Major Cities",
    "filter": {
      "mode": "population",
      "min_population": 500000
    }
  }'
```

Python:

```python
import requests

response = requests.post(
    "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit",
    headers={"Authorization": "Bearer <your_token>"},
    json={
        "query": "hotels",
        "country_code": "DE",
        "name": "Germany Hotels - Major Cities",
        "filter": {
            "mode": "population",
            "min_population": 500000
        }
    }
)
campaign = response.json()
print(f"Campaign: {campaign['campaign_id']}")
print(f"Cities with 500k+ population: {campaign['total_locations']}")
```
Response:

```json
{
  "campaign_id": "camp_de_hotels_20251222171518_f65c4dce",
  "status": "active",
  "query": "hotels",
  "country_code": "DE",
  "total_locations": 14,
  "next_location_id": 12301,
  "created_at": "2025-12-22T17:15:18Z"
}
```
Campaign with SpiderMaps Options (v2.34.0)
Full control over SpiderMaps extraction options:
cURL:

```bash
curl -X POST https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "search_query": "restaurants",
    "country_code": "LU",
    "max_results": 50,
    "extract_reviews": true,
    "lang": "en",
    "store_images": true,
    "validate_phones": true,
    "fuzziq_enabled": true,
    "test": false
  }'
```

Python:

```python
import requests

response = requests.post(
    "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit",
    headers={"Authorization": "Bearer <your_token>"},
    json={
        "search_query": "restaurants",
        "country_code": "LU",
        "max_results": 50,
        "extract_reviews": True,
        "lang": "en",
        "store_images": True,
        "validate_phones": True,
        "fuzziq_enabled": True,
        "test": False
    }
)
campaign = response.json()
print(f"Campaign: {campaign['campaign_id']}")
print(f"Max results per location: {campaign['max_results']}")
```
Response:

```json
{
  "campaign_id": "camp_lu_restaurants_20260209_a1b2c3d4",
  "status": "active",
  "search_query": "restaurants",
  "country_code": "LU",
  "total_locations": 12,
  "max_results": 50,
  "extract_reviews": true,
  "lang": "en",
  "store_images": true,
  "validate_phones": true,
  "fuzziq_enabled": true,
  "test": false
}
```
Orchestrated Campaign (v2.15.0)
Full workflow with SpiderMaps → SpiderSite → SpiderVerify chain:
cURL:

```bash
curl -X POST https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "restaurants",
    "country_code": "FR",
    "name": "France Restaurants - Lead Gen",
    "filter": {
      "mode": "population",
      "min_population": 100000
    },
    "workflow": {
      "spidersite": {
        "enabled": true,
        "max_pages": 10,
        "crawl_strategy": "bestfirst",
        "enable_spa": true,
        "extract_company_info": true,
        "product_description": "Restaurant management software",
        "icp_description": "Restaurant owners seeking efficiency"
      },
      "spiderverify": {
        "enabled": true,
        "max_emails_per_business": 5,
        "check_gravatar": false
      },
      "filter_social_media": true,
      "filter_review_sites": true
    }
  }'
```

Python:

```python
import requests

response = requests.post(
    "https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit",
    headers={"Authorization": "Bearer <your_token>"},
    json={
        "query": "restaurants",
        "country_code": "FR",
        "name": "France Restaurants - Lead Gen",
        "filter": {
            "mode": "population",
            "min_population": 100000
        },
        "workflow": {
            "spidersite": {
                "enabled": True,
                "max_pages": 10,
                "crawl_strategy": "bestfirst",
                "enable_spa": True,
                "extract_company_info": True,
                "product_description": "Restaurant management software",
                "icp_description": "Restaurant owners seeking efficiency"
            },
            "spiderverify": {
                "enabled": True,
                "max_emails_per_business": 5,
                "check_gravatar": False
            },
            "filter_social_media": True,
            "filter_review_sites": True
        }
    }
)
campaign = response.json()
print(f"Orchestrated campaign: {campaign['campaign_id']}")
print(f"Workflow enabled: {campaign['has_workflow']}")
```
Response:

```json
{
  "campaign_id": "camp_fr_restaurants_20251223_a1b2c3d4",
  "status": "active",
  "query": "restaurants",
  "country_code": "FR",
  "total_locations": 42,
  "next_location_id": 15201,
  "created_at": "2025-12-23T10:30:00Z",
  "has_workflow": true,
  "workflow_config": {
    "spidersite": {
      "enabled": true,
      "max_pages": 10,
      "crawl_strategy": "bestfirst"
    },
    "spiderverify": {
      "enabled": true,
      "max_emails_per_business": 5
    },
    "filter_social_media": true,
    "filter_review_sites": true
  }
}
```
For orchestrated campaigns, use the /workflow-results endpoint to get aggregated data from all three services (SpiderMaps + SpiderSite + SpiderVerify) in a single response.
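As a minimal sketch, fetching those aggregated results might look like the helper below. The exact URL path is an assumption inferred from the /workflow-results endpoint name; consult the Orchestrated Campaigns Guide for the authoritative route.

```python
# Hedged sketch: fetch aggregated SpiderMaps + SpiderSite + SpiderVerify
# results for an orchestrated campaign. The URL path below is an
# assumption. Pass a requests.Session() (or any object with a
# compatible get() method) as `session`.
def get_workflow_results(session, campaign_id,
                         base="https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns"):
    resp = session.get(f"{base}/{campaign_id}/workflow-results")
    resp.raise_for_status()  # surface 4xx/5xx errors early
    return resp.json()
```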
Error Responses
Invalid Country Code
```json
{
  "detail": "No locations found for country XX"
}
```
No Matching Locations
```json
{
  "detail": "No locations match the specified filter criteria"
}
```
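A minimal client-side pattern for surfacing these errors, assuming the error body always carries a detail field as shown above:

```python
# Minimal error-handling sketch: raise on non-2xx responses and surface
# the API's "detail" message. `session` is a requests.Session() or any
# object with a compatible post() method.
def create_campaign(session, payload,
                    url="https://spideriq.ai/api/v1/jobs/spiderMaps/campaigns/submit"):
    resp = session.post(url, json=payload)
    if resp.status_code >= 400:
        try:
            detail = resp.json().get("detail", resp.text)
        except ValueError:  # body was not JSON
            detail = resp.text
        raise RuntimeError(f"Campaign creation failed ({resp.status_code}): {detail}")
    return resp.json()
```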
Next Steps
After creating a campaign, use the /next endpoint to start submitting jobs:
Get Next Location: get the next location and automatically submit a SpiderMaps job.
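The resulting loop can be sketched as below. The response shape (a done flag and a location field) is an assumption for illustration; see the Get Next Location page for the real endpoint contract.

```python
# Hedged sketch of driving a campaign by repeatedly calling the /next
# endpoint until no locations remain. `get_next` should perform the HTTP
# call (e.g. GET .../campaigns/{id}/next) and return the parsed JSON;
# the "done"/"location" keys here are assumed for illustration.
def drive_campaign(get_next):
    processed = []
    while True:
        step = get_next()
        if step.get("done"):
            return processed
        processed.append(step["location"])
```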