VoidMob

Proxy Logs: What to Track and What to Ignore

Practical logging blueprint focused on success rate, ban diagnostics, and real-world proxy performance signals rather than generic SIEM theory

VoidMob Team
9 min read

Proxy Logging: What Matters and What Doesn't

Most proxy setups collect everything and surface nothing useful. Teams drown in gigabytes of connection timestamps while missing the three metrics that actually predict downtime or bans.

Quick Summary (TL;DR)

  1. Default proxy logs capture too much noise and miss critical metrics. Track success rate, timing signals, and failure modes instead of verbose fields like full user-agent strings.
  2. Segment success rate by carrier, region, session duration, and target domain. Aggregate metrics hide localized failures that cost money.
  3. Classify failure modes as hard blocks (403, captcha), soft blocks (timeout, TLS failure), or silent failures (200 OK with login wall). Silent failures are the tricky ones.
  4. Strip logs to 9 essential fields: session metadata, timing breakdowns, and outcome tracking. Reduces storage by 80%+ and enables real-time analysis.

A common scenario at scale: analyzing failed scraping jobs across 40+ concurrent sessions reveals the culprit isn't rate limits or IP reputation. It's inconsistent session handoff between carrier towers generating 300-400ms latency spikes that trigger anti-bot timeouts. Nothing in the default proxy logs flags it. Standard monitoring dashboards show green across the board while success rates drop significantly.

Why Default Proxy Logs Miss What Matters

Out-of-the-box proxy log monitoring captures HTTP status codes, bandwidth consumed, and request counts. That's fine for diagnosing a 502 gateway error. Completely useless for understanding why a mobile proxy fleet suddenly can't hold sessions past 90 seconds, or why certain geolocations fail verification checks.

The gap gets worse with mobile proxies. Datacenter proxies fail predictably. You hit a block, you rotate, done. Mobile IPs fail because carrier NAT changed mid-session. Because tower handoff broke TLS state. Because the device went idle and iOS throttled background connections. None of that shows up as a clean 403 or captcha challenge.

Most proxy SIEM integrations treat mobile traffic like datacenter traffic, which creates blind spots. They'll count requests per IP and call it proxy cybersecurity. Meanwhile the actual risk vectors (SIM authentication failures, abnormal APN routing, device fingerprint inconsistencies) go completely unlogged.

Mobile networks operate through Carrier-Grade NAT (CGNAT) which places hundreds of users behind single IP addresses, adding another layer of complexity to mobile proxy infrastructure that standard logging systems weren't designed to handle.

Don't Log PII in Proxy Sessions

Avoid capturing POST body content, cookie values, or auth tokens in proxy logs. Even anonymized, these create compliance risk. Stick to metadata: status, timing, session ID, error codes.
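One way to enforce this is a whitelist-based sanitizer that runs before any entry is written: anything not explicitly on the metadata list never reaches storage. This is a minimal sketch; the field names are illustrative, not a fixed schema.

```python
# Minimal sketch: whitelist sanitizer so PII never reaches proxy logs.
# Field names are illustrative, not a fixed schema.
SAFE_FIELDS = {
    "session_id", "proxy_ip", "carrier", "geo", "target_host",
    "status_code", "dns_ms", "tls_ms", "total_ms", "failure_mode",
}

def sanitize(entry: dict) -> dict:
    """Drop anything not explicitly whitelisted (cookies, tokens, bodies)."""
    return {k: v for k, v in entry.items() if k in SAFE_FIELDS}

raw = {
    "session_id": "px_8f4a29b1",
    "status_code": 200,
    "cookie": "auth=secret",    # never logged
    "post_body": "user=alice",  # never logged
}
clean = sanitize(raw)
```

A whitelist beats a blacklist here: a new header or field added upstream is dropped by default instead of silently becoming a compliance problem.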

What Actually Predicts Proxy Performance

Success rate is the north star metric, but only if you segment it correctly.

Overall success rate across all traffic hides localized failures. Breaking it down by carrier and region matters - AT&T proxies in Dallas might hold 95%+ success while T-Mobile in Seattle drops to 80-85% during peak hours. Session duration buckets tell a different story too. Sub-60-second requests versus long-lived sessions (5+ minutes) often show wildly different block rates.

Target domain category is the third dimension. Social platforms, e-commerce checkouts, and API endpoints each have distinct fingerprinting behavior.

When running scraping workloads across multiple e-commerce sites, this pattern emerges clearly. Aggregate proxy success rate might look fine at 88-90%. Segmented by target domain, certain sites show sub-70% success with specific mobile carriers. Switching those routes to different carrier proxies can bring overall success above 92-94% without adding more IPs.
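The segmentation described above is a small aggregation job, not a data warehouse project. A minimal sketch (record shapes and the `ok` flag are assumptions for illustration):

```python
from collections import defaultdict

def segmented_success(entries, keys=("carrier", "geo")):
    """Group log entries by the given dimensions and compute success rate
    per segment. 'ok' marks business-level success, not just HTTP 200."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [successes, total]
    for e in entries:
        seg = tuple(e[k] for k in keys)
        totals[seg][1] += 1
        if e["ok"]:
            totals[seg][0] += 1
    return {seg: ok / total for seg, (ok, total) in totals.items()}

logs = [
    {"carrier": "att", "geo": "dallas", "ok": True},
    {"carrier": "att", "geo": "dallas", "ok": True},
    {"carrier": "tmobile", "geo": "seattle", "ok": False},
    {"carrier": "tmobile", "geo": "seattle", "ok": True},
]
rates = segmented_success(logs)
```

Swapping `keys` for `("carrier", "target_host")` or a session-duration bucket gives the other two dimensions without new code.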

  • Typical session success rate: 92-96% (general web scraping benchmark)
  • Average connection time: 300-400ms (including DNS and TLS handshake)
  • Typical ban recovery time: 8-12 minutes (for mobile IP rotation cycles)
Block rate analysis needs more context than "request denied." The failure mode matters:

  • Hard blocks (403, captcha challenge, account suspension)
  • Soft blocks (timeout, connection reset, TLS handshake failure)
  • Silent failures (200 OK but response content shows login wall or bot detection)

Silent failures are the tricky part. Common scenario: verification checks report 90%+ success because they only check HTTP status codes. Actual success (parsing the expected page element) sits at 65-70%. Proxy logs show green while the business logic is failing.

Essential Fields for Proxy Log Management

Strip proxy logs down to these core fields and you'll catch 90% of issues faster than by tracking everything:

Session metadata:

  • Unique session ID
  • Proxy IP and port
  • Carrier identifier (if mobile)
  • Geolocation (city-level is enough)
  • Device type or user-agent class

Timing signals:

DNS resolution over 200ms on mobile proxies usually means carrier DNS is flaking or the APN route is suboptimal. TLS negotiation over 1.5 seconds often indicates middlebox interference or carrier-level inspection. These timing breakdowns matter more than raw request counts, especially when diagnosing intermittent issues.
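Those thresholds translate into a trivial per-entry check. The cutoffs below come straight from the paragraph above; treat them as starting points to tune per carrier, not universal constants.

```python
def timing_flags(entry: dict) -> list:
    """Flag timing anomalies in a single log entry (thresholds are
    illustrative starting points, tune per carrier)."""
    flags = []
    if entry.get("dns_ms", 0) > 200:
        flags.append("slow_dns")   # carrier resolver flaking or bad APN route
    if entry.get("tls_ms", 0) > 1500:
        flags.append("slow_tls")   # possible middlebox or carrier inspection
    return flags
```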

proxy-log-entry.json

```json
{
  "session_id": "px_8f4a29b1",
  "proxy_ip": "172.58.34.12",
  "carrier": "verizon_us",
  "geo": "chicago_il",
  "target_host": "example.com",
  "dns_ms": 87,
  "tcp_ms": 142,
  "tls_ms": 890,
  "ttfb_ms": 1205,
  "total_ms": 1647,
  "status_code": 200,
  "response_size": 24680,
  "failure_mode": null
}
```

Outcome tracking:

  • HTTP status code
  • Response size
  • Failure classification (ban, timeout, parse error, success)
  • Retry count if applicable

Don't log full response bodies. Log a hash of the first 2KB if detecting page variation is necessary. Keeps storage sane and avoids accidentally capturing user data.
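The 2KB hash mentioned above is a one-liner in practice. A minimal sketch:

```python
import hashlib

def body_fingerprint(body: bytes) -> str:
    """Hash only the first 2KB of the response body: enough to detect
    page variation without storing any content."""
    return hashlib.sha256(body[:2048]).hexdigest()[:16]  # short hex prefix

a = body_fingerprint(b"<html><title>Product Page</title></html>")
b = body_fingerprint(b"<html><title>Sign in</title></html>")
```

When the fingerprint for a target suddenly changes across many sessions, the site likely changed its template or started serving a block page, even if status codes stayed green.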

What to Ignore (Seriously)

  • User-agent strings in full. Classify them (desktop_chrome, mobile_safari, bot) and move on. Storing every User-Agent permutation bloats your database and adds zero diagnostic value.
  • Referrer headers. Unless debugging a specific referrer-based block, skip it. Same with most request headers beyond Host and Content-Type.
  • Successful request bodies. If the request worked, the payload doesn't need to be in proxy logs. If it failed, log the error response snippet (first 500 chars), not the request that triggered it.
  • Bandwidth metrics per request. Track bandwidth at the session or hourly level, not per individual request. Per-request bandwidth logging creates massive log volume with minimal insight unless diagnosing a specific data transfer issue.

Verbose logging feels safer but kills performance and makes pattern recognition harder. Common scenario: logging 18+ fields per request generates 4+ GB of proxy logs daily. Trimming to 9 essential fields can drop storage to under 700MB, and query performance improves enough that real-time block rate analysis becomes possible instead of daily batch jobs.

| Field            | Track It? | Why / Why Not                                |
| ---------------- | --------- | -------------------------------------------- |
| Session ID       | Yes       | Essential for tracing multi-request flows    |
| Full user-agent  | No        | Classify instead; raw UAs bloat storage      |
| DNS + TLS timing | Yes       | Early warning for carrier routing issues     |
| Request body     | No        | Privacy risk, rarely useful post-success     |
| Failure mode     | Yes       | Differentiates ban types for smart retry     |
| Referrer header  | No        | Low diagnostic value in most proxy use cases |

Alerts That Actually Work

Set thresholds on rolling windows, not absolute counts. "More than 50 failures" is meaningless if request volume doubled. "Success rate below 85% over 10 minutes" catches real problems.
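A rolling-window rate is easy to keep in memory with a deque. This sketch assumes per-request recording and the 10-minute / 85% numbers from the text; both are tunable.

```python
import time
from collections import deque

class RollingSuccessRate:
    """Success rate over a sliding time window (default 10 minutes).
    Alert on the rate, never on absolute failure counts."""
    def __init__(self, window_s=600):
        self.window_s = window_s
        self.events = deque()  # (timestamp, ok)

    def record(self, ok, now=None):
        now = time.time() if now is None else now
        self.events.append((now, ok))
        # Evict events that fell outside the window.
        while self.events and self.events[0][0] < now - self.window_s:
            self.events.popleft()

    def rate(self):
        if not self.events:
            return 1.0
        return sum(ok for _, ok in self.events) / len(self.events)

mon = RollingSuccessRate()
for i in range(8):
    mon.record(True, now=1000 + i)
mon.record(False, now=1009)
mon.record(False, now=1010)
alert = mon.rate() < 0.85  # 8/10 = 0.80, so this fires
```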

Configure separate alert channels for different failure modes:

  • Hard blocks - immediate Slack ping, rotate IP pool
  • Timeout spikes - warning email, check carrier status
  • Silent failures - log for review, might indicate target site changes

Firing alerts only when proxy success rate stays below 88% for 15+ minutes typically reduces alert noise by 70-80% while still catching actual incidents within 5-10 minutes of onset.

"Trimming proxy logs from 18 fields to 9 can cut storage by 80%+ and make real-time block rate analysis finally possible."

Integrating Logs with Broader Infrastructure

Running SMS verification services alongside proxies means correlating session IDs across systems. This helps diagnose whether a verification failure stemmed from the proxy connection, the SMS delivery, or the target platform's fraud detection.
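Cross-system correlation is just a join on session ID. A minimal sketch, with record shapes invented for illustration:

```python
def correlate(proxy_logs, sms_logs):
    """Join proxy and SMS verification events on session_id so a failed
    verification can be traced to its layer. Record shapes are illustrative."""
    sms_by_session = {e["session_id"]: e for e in sms_logs}
    joined = []
    for p in proxy_logs:
        s = sms_by_session.get(p["session_id"])
        joined.append({
            "session_id": p["session_id"],
            "proxy_ok": p["failure_mode"] is None,
            "sms_ok": bool(s and s["delivered"]),
        })
    return joined

rows = correlate(
    [{"session_id": "px_1", "failure_mode": None},
     {"session_id": "px_2", "failure_mode": "timeout"}],
    [{"session_id": "px_1", "delivered": True}],
)
```

A row with `proxy_ok` false and `sms_ok` false points at the connection; `proxy_ok` true with `sms_ok` false points at delivery or the target platform's fraud checks.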

For mobile proxy fleets, tag logs with device state if accessible (battery level, signal strength, active vs. idle). iOS devices throttle network when backgrounded. Android behavior varies by OEM. Knowing device state explains otherwise mysterious connection drops.

Proxy cybersecurity isn't about logging everything. It's about logging the right signals so answering "Why did this fail?" takes under 60 seconds. Most proxy log management systems optimize for compliance and audit trails, which is fine for enterprises. For growth teams running scraping ops or verification workflows, speed and signal matter more than completeness.

For more on optimizing your proxy infrastructure, see our guide on proxy performance optimization and proxy maintenance checklist.

1. How long should proxy logs be retained?

7 days of detailed logs, 90 days of aggregated metrics. Detailed proxy logs older than a week rarely get queried, and storage costs add up fast with high request volumes.

2. What's a realistic proxy success rate target for mobile proxies?

92-96% for general web scraping, 88-93% for social platforms with aggressive anti-bot measures. Below 85% indicates configuration issues or burned IPs.

3. Should DNS timing be tracked separately from connection time?

Yes. DNS delays point to carrier or resolver issues. Connection delays indicate routing or rate limiting. Lumping them together hides the root cause.

4. Do I need a dedicated proxy SIEM solution?

Not unless managing 500+ concurrent proxy sessions. Most teams get better ROI from a lightweight logging stack (Loki, Grafana, or even structured logs with basic alerting) tuned to the specific fields above.

5. How do you detect silent proxy failures programmatically?

Log a hash of expected page elements (title tag, specific div ID, API response structure). Compare actual response hash to expected. Mismatch means silent failure even if HTTP 200.
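The element-hash approach can be sketched in a few lines. Hashing the `<title>` tag here is an illustrative choice; any stable element (a known div ID, an API response key) works the same way.

```python
import hashlib
import re

def expected_marker_hash(html, pattern=r"<title>(.*?)</title>"):
    """Hash one stable page element instead of the whole body; a changed
    hash flags a silent failure even when the status code is 200."""
    m = re.search(pattern, html, re.IGNORECASE | re.DOTALL)
    return hashlib.sha256((m.group(1) if m else "").encode()).hexdigest()

baseline = expected_marker_hash("<html><title>Product Page</title>...</html>")
observed = expected_marker_hash("<html><title>Sign in</title>...</html>")
silent_failure = observed != baseline  # HTTP 200, but it's a login wall
```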

Wrapping Up

Effective proxy log monitoring comes down to tracking session success segmented by carrier and target, capturing timing breakdowns that reveal infrastructure issues, and classifying failure modes so retries are intelligent. Everything else is noise.

Strip logging to the nine fields that matter. Set alerts on rolling success rate windows. Correlate proxy sessions with upstream services when diagnosing multi-step workflows. The result: less storage spend and faster issue detection.

Mobile Proxies with Built-in Performance Visibility

VoidMob's dashboard surfaces success rate, carrier routing, and session diagnostics without the logging headache. No KYC, instant activation, real 4G/5G IPs.