FREE BETA — All plans free until launch

"your api is down" — now what?

Know why your API broke
not just that it did.

Captures DNS timing, TLS handshake, TTFB, and response body at the exact moment of failure — so the answer arrives with the alert. No more SSH. No more guessing.
Built for backend engineers, solo devs, and small teams running APIs in production.

🦅 Start monitoring free →

No credit card · All plans free during beta · 30 days notice before any charges

APIs behind firewalls? Pinghawk Agent runs inside your network. COMING SOON

By signing up you agree to our Terms and Privacy Policy

🦅 hawk mode — forensic snapshot
SNAPSHOT #3 · ap-southeast-1
status_code   502
Bad Gateway — upstream service returned an invalid response

dns_ms        3ms ✓ normal
tls_ms        12ms ✓ normal
ttfb_ms       1,847ms ↑ slow
time to first byte — how long the server took to start responding
total_ms      1,891ms
response_body
{"error":"upstream_timeout","service":"payment-gateway"}

DNS ✓  TLS ✓  Server responded —
upstream failed. Your server is fine.

When your API fails,
the worst part isn't
the outage.

It's figuring out what happened. By the time you SSH in, check the logs, try to reproduce — the evidence is gone. And you're debugging a ghost.

prod-server — bash

03:14 AM ALERT: api.myapp.io is DOWN

03:16 AM $ ssh prod-server

03:18 AM $ tail -f /var/log/app.log

...nothing obvious

03:24 AM $ curl -v api.myapp.io/payments

HTTP/1.1 200 OK

...it's back up. What happened?

03:31 AM $ grep -i error /var/log/app.log | tail -20

...logs already rotated

03:42 AM — gave up. No root cause found.

28 minutes. No answer. Back to bed.

Not more monitoring.
Better monitoring.

Most tools stop at "something is wrong." Pinghawk answers the question that actually matters.

A typical monitoring alert
ALERT: API DOWN
endpoint: api.myapp.io/payments
status: DOWN
root cause: HTTP 502 — Bad Gateway
started: 2026-03-26 09:05:25
That's everything. Now what?
💻 SSH into server
📋 Check logs manually
🔄 Try to reproduce
👻 Evidence already gone
⏱ 25–45 minutes to root cause
vs
A Pinghawk alert
🦅 INCIDENT DETECTED
endpoint: api.myapp.io/payments
status: TIMEOUT
region: ap-southeast-1
🦅 HAWK MODE — 3 FORENSIC SNAPSHOTS
SNAPSHOT #1 · captured silently
dns_lookup      1ms (~est.)
tls_handshake   5ms (SSL)
ttfb            4,007ms ↑ critical
↳ server is very slow or overloaded
total_time      4,312ms
SNAPSHOT #2 · captured silently
dns_lookup      2ms (~est.)
tls_handshake   4ms (SSL)
ttfb            10,002ms ↑ critical
↳ server is very slow or overloaded
total_time      10,374ms ↑ worse
SNAPSHOT #3 · alert fires
error           TIMEOUT
↳ Server did not respond within the allowed time. May be overloaded, crashed, or unreachable.
total_time      10,000ms
DNS ✓ TLS ✓ Server ✗ — degrading under load
TTFB went 4,007ms → 10,002ms → timeout. The server was degrading under load until it stopped responding entirely. You see the failure developing across three snapshots.
TTFB (time to first byte) — how long the server takes to start responding. Under 200ms is normal.
⚡ Answer arrives with the alert
We call this 🦅 Hawk Mode — captured automatically on every failure.
Every other tool
Your API is down.
Good luck figuring out why.
SSH in. Check logs. Hope the evidence is still there.
Pinghawk
Your API is down.
Here's exactly why, captured at the moment it happened.
DNS, TLS, TTFB, response body — already in your inbox.

The alert used to be the beginning of the investigation.
With Pinghawk, it's the end.

So we flipped
the model.

Instead of checking after failure, Pinghawk captures everything at the exact moment it happens. No reproduction required. No guesswork. No SSH.

Failure detected
Check fails. Pinghawk silently starts watching.
📸
Snapshot #1 captured
DNS, TLS, TTFB, response body — all recorded at failure moment.
📸
Snapshot #2 captured
Second failure. Hawk Mode continues capturing snapshots silently.
🦅
Alert sent — with the answer
3rd failure confirms outage. Alert fires with all 3 snapshots attached. Root cause included.
Recovery detected
Endpoint responds healthy again. Resolution alert sent with total downtime and incident link. Incident closed.
app.pinghawk.io/monitors
Your Monitors · Last checked 23s ago
api.myapp.io/health
99.98% · 142ms
api.myapp.io/users
99.91% · 891ms
api.myapp.io/payments
98.2% · timeout
1 active incident · payments · 🦅 snapshot ready

Built for one thing:
telling you exactly
why things broke.

At the moment of failure, Pinghawk automatically captures a full forensic snapshot — DNS, TLS, TTFB, response body, everything. No setup required. It's always watching.

What every snapshot captures
dns
DNS lookup time
Was it DNS-related or server-side? Know instantly without guessing.
tls
TLS handshake duration
Detect SSL and certificate issues the moment they occur.
ttfb
Time to first byte
Reveals slow databases and overloaded servers immediately.
body
Response body (first 2kb)
Your API's own error message, captured automatically at the exact moment of failure.
http
Status code in plain English
502 means Bad Gateway. 429 means you're being throttled. Every status code explained — no Googling required.
Σ
Total request time
See the full request lifecycle in one number. Compare across snapshots to see if the failure is stable or worsening.
EN
Every metric explained in plain English
Not just raw numbers — every value includes a human-readable annotation. "4,007ms ↑ critical — server is very slow or overloaded." No threshold tables, no guesswork.
Additional capabilities
3 snapshots per incident
Captures the progression of failure across multiple checks — not just a single moment in time.
When snapshots are captured
FAILURE #1 09:05:25 GMT

Snapshot captured silently. No alert, no incident.

dns: 3ms · tls: 12ms · ttfb: 1,847ms ↑ · 502
FAILURE #2 09:06:25 GMT

Snapshot captured silently. Still no alert — could be a fluke.

dns: 2ms · tls: 4ms · ttfb: 10,002ms ↑↑ · 502
FAILURE #3 — ALERT FIRES 09:07:25 GMT

3rd consecutive failure. Incident created. All 3 snapshots linked. Alert sent to all channels.

TIMEOUT · 10,000ms
RECOVERY 2 consecutive successes

Incident resolved. Recovery alert sent with total downtime duration. Counter resets — next outage triggers a fresh incident.

Why 3 failures before alerting?
Server restarts, DNS blips, and transient errors resolve in seconds. Alerting on the first failure would mean constant noise. Three consecutive failures means something is actually wrong — and by then, you already have 3 forensic snapshots showing how the failure developed.
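The 3-strike rule and the 2-success recovery rule described above can be sketched as a tiny state machine. This is an illustrative TypeScript sketch, not Pinghawk's actual implementation; all names here are assumptions.

```typescript
// Sketch of the incident rules: alert on the 3rd consecutive failure,
// stay silent while the incident is open, resolve after 2 consecutive
// successes. Names are illustrative, not Pinghawk's real code.
type AlertEvent = "none" | "alert" | "recovery";

class IncidentTracker {
  private failures = 0;
  private successes = 0;
  private open = false;

  record(checkOk: boolean): AlertEvent {
    if (!checkOk) {
      this.successes = 0;
      this.failures += 1;
      if (this.failures === 3 && !this.open) {
        this.open = true;
        return "alert"; // incident created, snapshots attached
      }
      return "none"; // silent snapshot, no alert yet (dedup while open)
    }
    this.failures = 0;
    if (this.open) {
      this.successes += 1;
      if (this.successes === 2) {
        this.open = false;
        this.successes = 0;
        return "recovery"; // resolution alert, counters reset
      }
    }
    return "none";
  }
}
```

Note that a single success resets the failure counter, which is exactly why one transient blip never escalates into an alert.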

Cron jobs & scheduled tasks

Silent failures are the worst kind. Your backup stops running at 2am. Your sync job starts skipping records. Nobody notices — until it's too late.

Pinghawk's dead man's switch pings a unique URL at the end of each job. Miss it once — you're alerted within minutes.
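The dead man's switch pattern is simple enough to sketch in a few lines of Node: send the heartbeat ping only after the job finishes cleanly, so a crash or hang means no ping and a missed check-in alert. The function shape and the ping helper below are illustrative assumptions, not Pinghawk's actual API.

```typescript
// Dead man's switch sketch. The ping is sent only when the job succeeds;
// a failure means silence, and silence is what triggers the alert.
// `runJob` and `ping` are illustrative names, not Pinghawk's API.
async function runJob(
  job: () => Promise<void>,
  ping: () => Promise<void>, // e.g. () => fetch(PING_URL).then(() => undefined)
): Promise<boolean> {
  try {
    await job();  // the real work: backup, sync, cleanup...
    await ping(); // "I finished" heartbeat the monitor waits for
    return true;
  } catch {
    return false; // job failed: no ping, so the missed check-in alerts
  }
}
```

In a crontab you would wrap the script the same way, pinging the monitor URL only when the job exits with status 0.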

🔒
SSL certificate expiry

An expired certificate means your API goes dark for every user instantly. No warning, no grace period — just failures.

Pinghawk warns you at 30, 14, and 7 days before expiry. And if a TLS handshake fails during a check, it's caught in the Hawk Mode snapshot too. Automatic. No configuration needed.

Set up in under
60 seconds.

No agents. No SDKs. No YAML. Paste a URL and you're monitoring.

01 — ADD
Paste your endpoint

Any URL. Pick GET, POST, or HEAD. Set your check interval. Done. Custom headers coming at launch.

02 — MONITOR
We watch constantly

Pinghawk checks your endpoint every 30s–5min. Three consecutive failures confirm a real outage — dramatically reducing false alerts from transient blips.

03 — ALERT
You get answers, not alarms

When something breaks, your alert includes the full Hawk Mode snapshot — DNS, TLS, TTFB, response body. You know the root cause before you open your laptop.

04 — RECOVER
Know when it's back

When your endpoint recovers, Pinghawk sends a resolution alert with total downtime and incident link. One incident, two alerts — down and up. No noise in between.

Everything you need.
Nothing you don't.

Every feature earns its place. If it's here, it's because you'll actually use it.

🌍
Multi-region checks Pro · at launch

Verify from 3 global regions simultaneously. If two agree it's down, it's down. No false alarms from local network blips or transient issues.

🔒
SSL certificate alerts

Warned 30, 14, and 7 days before expiry. Never get caught by a certificate error in production again. Works for any HTTPS monitor — no configuration needed.

Cron job monitoring

Dead man's switch for your scheduled jobs. Know immediately when a backup, sync, or cleanup task silently fails to run. Alerts go out via email, Slack, Discord, or webhook.

📋
Public status pages

Shareable, branded pages your customers can bookmark. They stay informed. You get fewer "is it down?" support messages.

🔕
Smart alert deduplication

Three consecutive failures required before alerting — eliminates false positives from transient blips. One alert per incident, then silence until recovery. Recovery alerts include total downtime duration. No 3am spam.

📈
Response time tracking

Continuous latency monitoring with degraded and critical thresholds. Catch slowdowns before they become outages — not just 5xx errors.

🔑
POST, HEAD & custom headers

HTTP method selection (GET, POST, HEAD) works now. Custom request headers and auth token injection coming at launch. Works with any API — REST or GraphQL.

60-second setup

Paste a URL. Pick an interval. Done. No agents, no SDKs, no YAML. If setup takes more than a minute, we failed.

Alerts that arrive
with answers.

Every alert includes the endpoint, response time, region, and Hawk Mode snapshot — so you can act without logging in first.

Alert channels
Email

Every plan. Always on. Sent to your account email instantly.

All plans
Slack

Rich Block Kit messages to any channel. Incident details + action buttons.

Indie+
Discord

Rich embeds with colour-coded status, timing data, and incident links.

Indie+
Webhook

JSON payload to any URL. Signed with HMAC-SHA256 so you can verify it's from Pinghawk.

Indie+
SMS

Text message alerts for critical incidents. Coming at launch.

Pro · soon
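Verifying the webhook's HMAC-SHA256 signature on your end can look roughly like this. It's a hedged sketch with Node's built-in crypto module: I'm assuming a hex-encoded signature computed over the raw request body, and the signature header name is whatever Pinghawk's webhook docs specify, not something shown here.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of HMAC-SHA256 webhook verification. Assumes the signature is
// a hex digest of the raw request body; check the webhook docs for the
// actual header name and encoding.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

The constant-time comparison matters: a plain `===` on the hex strings would leak timing information an attacker could use to forge signatures byte by byte.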
INCIDENT DETECTED

3rd consecutive failure on api.myapp.io/payments

All channels notified simultaneously
Email delivered · 0.3s
Slack #incidents delivered · 0.5s
Discord #alerts delivered · 0.4s
Webhook delivered · 0.2s HMAC ✓
DEDUP ACTIVE

Incident is open. No further alerts until recovery.
Your inbox stays clean. No 3am spam.

RECOVERED

2 consecutive successes. Incident resolved.
Recovery alert sent to all channels with total downtime: 12m 34s

Coming soon

Pinghawk Agent

Your APIs behind firewalls, VPNs, and private networks are invisible to cloud monitoring. Pinghawk Agent runs inside your network, so even endpoints that block all outside access get monitored.

!
The problem with cloud-only monitoring

Internal health checks, staging APIs, database endpoints, admin panels, microservices behind a VPC — these are unreachable from the internet by design. Firewalls and network policies block all external access. That means your most critical infrastructure has zero monitoring visibility.

How Pinghawk Agent works

1
Deploy inside your network

A lightweight npm package that runs on any machine with Node.js. One command, no config files, no open inbound ports. It sits behind your firewall — right next to the services it monitors.

2
Checks endpoints locally with Hawk Mode

The agent performs HTTP checks from inside your network and captures the same forensic timing data as cloud Hawk Mode — DNS, TCP, TLS, TTFB, response body. All captured locally.

3
Reports back to your dashboard

Results flow into the same pipeline — same incident detection (3-strike rule), same alerts to all your channels, same dashboard. Cloud and agent monitors appear side by side.

Agent (behind your firewall) → Pinghawk API (cloud) → your dashboard

Available on Indie (2 agents) and Pro (10 agents) plans. See pricing → | Read the docs →

$ npx pinghawk-agent --key YOUR_API_KEY
Coming soon

Pinghawk Agent

API https://api.pinghawk.io

Agent office-network

Monitors 4 assigned

✓ Connected · Running... (Ctrl+C to stop)

Internal API up 42ms dns:3 tcp:8 tls:0 ttfb:38

Staging DB up 18ms dns:2 tcp:5 tls:0 ttfb:14

Auth Service timeout 10,000ms TIMEOUT

Redis Health up 3ms dns:0 tcp:1 tls:0 ttfb:2

Reported 4 result(s)

Next poll in 30s

Now monitorable with Agent

http://10.0.1.50:8080 http://192.168.1.20:3000 http://internal-db:5432 http://staging.local http://admin.internal

Works with Docker · Kubernetes · AWS VPC · DigitalOcean · Hetzner · any machine with Node.js 18+

Simple, honest
pricing.

Priced for developers who ship things themselves. No per-seat fees. No hidden limits. No "contact sales."

Start free. No credit card. No lock-in.

🎉 All plans are FREE during beta No credit card required. Beta ends at launch — you'll get 30 days notice before any charges begin.
Free
$0
forever
  • 5 monitors
  • 5-minute check interval
  • Email alerts only
  • 7-day history
  • Public status page
  • "Powered by Pinghawk" on status page
  • No Hawk Mode snapshots
  • No agents
Get free beta access
Pro
$19/mo
$0 during beta
then $19/mo after launch
  • 100 monitors
  • 30-second check interval
  • Everything in Indie
  • 🤖 10 agents coming soon
  • Multi-region checks (3 regions) at launch
  • 90-day history
  • SMS alerts at launch
  • Custom domain for status page at launch
  • Priority support
Join the beta — it's free
COMING PHASE 2
Smart API Validation

Check more than status codes. Define expected response shapes — alert before users hit a data bug.
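Since this feature is still Phase 2, here is only a hedged sketch of what response-shape checking might look like. The `Shape` type and `matchesShape` function are my own illustration of the idea, not Pinghawk's spec.

```typescript
// Illustrative sketch of response-shape validation: declare the fields
// and primitive types you expect, then check a parsed JSON body against
// them. Names and semantics are assumptions, not Pinghawk's actual API.
type Shape = Record<string, "string" | "number" | "boolean">;

function matchesShape(body: unknown, shape: Shape): boolean {
  if (typeof body !== "object" || body === null) return false;
  // Every declared field must exist with the declared primitive type;
  // extra fields in the body are ignored.
  return Object.entries(shape).every(
    ([key, type]) => typeof (body as Record<string, unknown>)[key] === type,
  );
}
```

A check like this catches the "200 OK but the payload is wrong" class of bugs that status-code monitoring can't see.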

COMING PHASE 2
Developer CLI

Add monitors without leaving your terminal. pinghawk add https://api.example.com

👋
Built from a question, not a business plan.

While exploring monitoring tools for a side project, I noticed they all had the same blind spot: they'd tell you something was down, but never why. You'd get the alert at 3am and spend the next 45 minutes SSHing into servers, checking logs, running curl — only to find the issue had already resolved itself and the evidence was gone.

That question — why can't the alert include the answer? — became Pinghawk.
— The Pinghawk founder · hello@pinghawk.io

Stop guessing why
things broke.

Get alerts with answers, not just problems. All plans free during beta — 30 days notice before any charges begin.

🦅 Start monitoring free →

Or drop your email to follow the build:

No credit card required

By joining you agree to our Terms and Privacy Policy
