Queue Statistics
GET /api/v1/system/queue-stats

Overview
Retrieve real-time statistics about the job processing queues, including queue sizes, worker availability, and processing rates. This endpoint does not require authentication.
Authentication
No authentication required - This endpoint is publicly accessible for monitoring.
Response
- timestamp (string) - ISO 8601 timestamp when stats were collected
- queues (object) - Statistics for each job queue
- workers (object) - Worker availability and activity
- processing (object) - Current processing statistics
Queues Object
- queues.spider_site (object) - SpiderSite queue statistics
  - queued - Jobs waiting in queue
  - processing - Jobs currently being processed
  - total - Total jobs in system
- queues.spider_maps (object) - SpiderMaps queue statistics
  - queued - Jobs waiting in queue
  - processing - Jobs currently being processed
  - total - Total jobs in system
Workers Object
- workers.spider_site (object) - SpiderSite worker information
  - active - Number of active workers
  - idle - Number of idle workers
  - total - Total workers available
- workers.spider_maps (object) - SpiderMaps worker information
  - active - Number of active workers
  - idle - Number of idle workers
  - total - Total workers available
Processing Object
- processing.jobs_last_hour (integer) - Jobs completed in the last hour
- processing.jobs_last_24h (integer) - Jobs completed in the last 24 hours
- processing.avg_processing_time (number) - Average job processing time in seconds
- processing.current_rate (number) - Current processing rate (jobs per minute)
Example Request
cURL

curl https://spideriq.ai/api/v1/system/queue-stats

Python

import requests

response = requests.get("https://spideriq.ai/api/v1/system/queue-stats")
stats = response.json()

print(f"SpiderSite Queue: {stats['queues']['spider_site']['queued']} queued")
print(f"SpiderSite Workers: {stats['workers']['spider_site']['active']} active")
print(f"Processing Rate: {stats['processing']['current_rate']} jobs/min")

JavaScript

const response = await fetch(
  'https://spideriq.ai/api/v1/system/queue-stats'
);
const stats = await response.json();

console.log('SpiderSite Queue:', stats.queues.spider_site.queued);
console.log('Active Workers:', stats.workers.spider_site.active);
console.log('Processing Rate:', stats.processing.current_rate, 'jobs/min');
Example Response
{
"timestamp": "2025-10-27T10:30:45Z",
"queues": {
"spider_site": {
"queued": 23,
"processing": 4,
"total": 27
},
"spider_maps": {
"queued": 5,
"processing": 2,
"total": 7
}
},
"workers": {
"spider_site": {
"active": 4,
"idle": 0,
"total": 4
},
"spider_maps": {
"active": 2,
"idle": 0,
"total": 2
}
},
"processing": {
"jobs_last_hour": 342,
"jobs_last_24h": 2807,
"avg_processing_time": 12.5,
"current_rate": 5.7
}
}
Use Cases
Estimate Processing Time
Calculate estimated wait time before submitting jobs:
import requests

def estimate_wait_time(job_type="spiderSite"):
    """Estimate wait time for a new job."""
    url = "https://spideriq.ai/api/v1/system/queue-stats"
    response = requests.get(url)
    stats = response.json()

    # "spiderSite" -> "spider_site", "spiderMaps" -> "spider_maps"
    key = job_type.lower().replace("spider", "spider_")
    queue = stats["queues"].get(key, {})
    workers = stats["workers"].get(key, {})

    queued = queue.get("queued", 0)
    active_workers = workers.get("active", 1)
    avg_time = stats["processing"]["avg_processing_time"]

    # Estimate: (queued jobs / active workers) * avg processing time
    estimated_wait = (queued / max(active_workers, 1)) * avg_time

    print(f"Queue Statistics for {job_type}:")
    print(f"  Jobs in queue: {queued}")
    print(f"  Active workers: {active_workers}")
    print(f"  Avg processing time: {avg_time}s")
    print(f"  Estimated wait: {estimated_wait:.1f}s (~{estimated_wait/60:.1f} minutes)")

    return estimated_wait

# Usage
wait_time = estimate_wait_time("spiderSite")
Monitor Queue Load
Check queue load before submitting bulk jobs:
import requests

def should_submit_bulk_jobs(threshold=50):
    """Check if the queue can handle a bulk submission."""
    url = "https://spideriq.ai/api/v1/system/queue-stats"
    response = requests.get(url)
    stats = response.json()

    spider_site_queue = stats["queues"]["spider_site"]["queued"]
    spider_maps_queue = stats["queues"]["spider_maps"]["queued"]
    total_queued = spider_site_queue + spider_maps_queue

    if total_queued < threshold:
        print(f"✓ Queue load low ({total_queued} jobs) - safe to submit bulk")
        return True
    else:
        print(f"✗ Queue load high ({total_queued} jobs) - consider waiting")
        return False

# Usage
if should_submit_bulk_jobs():
    # Submit your bulk jobs
    pass
Real-time Dashboard
Create a monitoring dashboard:
async function updateQueueDashboard() {
  const response = await fetch(
    'https://spideriq.ai/api/v1/system/queue-stats'
  );
  const stats = await response.json();

  // Update queue displays
  document.getElementById('spidersite-queued').textContent =
    stats.queues.spider_site.queued;
  document.getElementById('spidermaps-queued').textContent =
    stats.queues.spider_maps.queued;

  // Update worker displays
  document.getElementById('spidersite-workers').textContent =
    `${stats.workers.spider_site.active}/${stats.workers.spider_site.total}`;
  document.getElementById('spidermaps-workers').textContent =
    `${stats.workers.spider_maps.active}/${stats.workers.spider_maps.total}`;

  // Update processing stats
  document.getElementById('jobs-last-hour').textContent =
    stats.processing.jobs_last_hour;
  document.getElementById('jobs-last-24h').textContent =
    stats.processing.jobs_last_24h;
  document.getElementById('current-rate').textContent =
    `${stats.processing.current_rate.toFixed(1)} jobs/min`;
  document.getElementById('avg-time').textContent =
    `${stats.processing.avg_processing_time.toFixed(1)}s`;

  // Update queue load indicator
  const totalQueued = stats.queues.spider_site.queued + stats.queues.spider_maps.queued;
  const loadIndicator = document.getElementById('queue-load');
  if (totalQueued < 20) {
    loadIndicator.textContent = 'LOW';
    loadIndicator.className = 'badge-success';
  } else if (totalQueued < 50) {
    loadIndicator.textContent = 'MODERATE';
    loadIndicator.className = 'badge-warning';
  } else {
    loadIndicator.textContent = 'HIGH';
    loadIndicator.className = 'badge-danger';
  }
}

// Update every 10 seconds
setInterval(updateQueueDashboard, 10000);
updateQueueDashboard();
Adaptive Rate Limiting
Adjust submission rate based on queue load:
import requests
import time

def adaptive_bulk_submit(urls, job_type="spiderSite", auth_token=""):
    """Submit jobs with adaptive rate limiting based on queue load."""
    submit_url = "https://spideriq.ai/api/v1/jobs/submit"
    stats_url = "https://spideriq.ai/api/v1/system/queue-stats"
    headers = {"Authorization": f"Bearer {auth_token}"}

    delay = 0.1  # Default delay until the first stats check
    for i, url in enumerate(urls):
        # Re-check queue stats every 10 jobs; the chosen delay persists
        # until the next check
        if i % 10 == 0:
            response = requests.get(stats_url)
            stats = response.json()
            queue_key = job_type.lower().replace("spider", "spider_")
            queued = stats["queues"][queue_key]["queued"]

            # Adjust delay based on queue size
            if queued < 20:
                delay = 0.1  # Fast submission
            elif queued < 50:
                delay = 0.5  # Moderate
            else:
                delay = 2.0  # Slow down
            print(f"Queue size: {queued}, Using delay: {delay}s")

        # Submit job
        response = requests.post(
            submit_url,
            headers=headers,
            json={"url": url, "job_type": job_type}
        )
        if response.status_code == 201:
            job = response.json()
            print(f"✓ Submitted {i+1}/{len(urls)}: {job['job_id']}")
        else:
            print(f"✗ Failed {i+1}/{len(urls)}: {response.json()['detail']}")
        time.sleep(delay)

# Usage
urls = ["https://example.com/page1", "https://example.com/page2", ...]
adaptive_bulk_submit(urls, "spiderSite", "<your_token>")
Alert on Queue Backlog
Monitor for queue backlogs:
import requests
import time

def monitor_queue_backlog(threshold=100, check_interval=60):
    """Alert when queue backlog exceeds threshold."""
    url = "https://spideriq.ai/api/v1/system/queue-stats"

    while True:
        response = requests.get(url)
        stats = response.json()

        total_queued = (
            stats["queues"]["spider_site"]["queued"] +
            stats["queues"]["spider_maps"]["queued"]
        )

        if total_queued > threshold:
            print(f"⚠️ ALERT: Queue backlog at {total_queued} jobs (threshold: {threshold})")
            # Send alert via email/Slack/etc (send_alert is your own notifier)
            send_alert(f"SpiderIQ queue backlog: {total_queued} jobs")

        time.sleep(check_interval)
Understanding the Metrics
Queue Metrics
- queued - Jobs waiting to be processed
- processing - Jobs currently being worked on
- total - Sum of queued + processing
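Because total is defined as queued + processing, it doubles as a quick consistency check when consuming the stats. A minimal sketch, using values from the example response above:

```python
def check_queue_invariant(queue_stats):
    """Verify total == queued + processing for one queue's stats."""
    return queue_stats["total"] == queue_stats["queued"] + queue_stats["processing"]

# Values from the example response above
spider_site = {"queued": 23, "processing": 4, "total": 27}
print(check_queue_invariant(spider_site))  # True
```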
Worker Metrics
- active - Workers currently processing jobs
- idle - Workers available but not processing
- total - All workers connected to queue
Processing Metrics
- jobs_last_hour - Throughput in the last 60 minutes
- jobs_last_24h - Daily throughput
- avg_processing_time - Mean time to complete a job
- current_rate - Real-time processing speed (jobs/min)
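Note that current_rate is an instantaneous figure, while jobs_last_hour / 60 gives the average rate over the past hour; comparing the two shows whether throughput is trending up or down. A small sketch (values taken from the example response):

```python
def hourly_avg_rate(processing):
    """Average jobs/min over the last hour, comparable to current_rate."""
    return processing["jobs_last_hour"] / 60

processing = {"jobs_last_hour": 342, "current_rate": 5.7}
avg = hourly_avg_rate(processing)  # 342 / 60 = 5.7 jobs/min
# current_rate matches the hourly average here, so throughput is steady
print(f"{avg:.1f} vs {processing['current_rate']:.1f} jobs/min")
```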
Performance Indicators
- Low load: < 20 jobs queued - submit jobs freely
- Moderate load: 20-50 jobs queued - consider spreading submissions
- High load: > 50 jobs queued - wait or reduce your submission rate
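These thresholds can be captured in a small helper; the function below simply restates the ranges above (treating exactly 50 as moderate, per the 20-50 range):

```python
def classify_queue_load(total_queued):
    """Map total queued jobs to the load levels defined above."""
    if total_queued < 20:
        return "LOW"
    elif total_queued <= 50:
        return "MODERATE"
    return "HIGH"

# Backlog from the example response: 23 + 5 queued jobs
print(classify_queue_load(23 + 5))  # MODERATE
```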
Best Practices
- Check before bulk operations: Always check queue stats before submitting large batches of jobs.
- Monitor processing rate: If current_rate is low and the queue is growing, workers may be experiencing issues.
- Use for capacity planning: Historical data (jobs_last_24h) helps plan scaling needs.
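The "monitor processing rate" practice can be automated by comparing two successive polls: a backlog that grows while current_rate falls suggests stalled workers. A hedged sketch (the dicts mirror the stats response shape; the heuristic and its name are illustrative, not part of the API):

```python
def workers_may_be_stalled(prev, curr):
    """Heuristic: backlog growing while the processing rate is falling."""
    prev_backlog = sum(q["queued"] for q in prev["queues"].values())
    curr_backlog = sum(q["queued"] for q in curr["queues"].values())
    rate_falling = (curr["processing"]["current_rate"]
                    < prev["processing"]["current_rate"])
    return curr_backlog > prev_backlog and rate_falling

# Two hypothetical polls, ten minutes apart
prev = {"queues": {"spider_site": {"queued": 23}, "spider_maps": {"queued": 5}},
        "processing": {"current_rate": 5.7}}
curr = {"queues": {"spider_site": {"queued": 40}, "spider_maps": {"queued": 9}},
        "processing": {"current_rate": 2.1}}
print(workers_may_be_stalled(prev, curr))  # True
```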