Web Services

Metrics

Built-in metrics for request volume, latency, errors, resource usage, and cold starts.

Every web service gets an automatic metrics dashboard with no configuration required. There's nothing to install, no Prometheus to stand up, and no scrape targets to maintain. Open the Metrics tab on any service to see the signals that matter for most operational questions.

What you get

  • Request volume. Requests per second, broken down by status code (2xx, 3xx, 4xx, 5xx) so you can spot an error spike without opening logs.
  • Latency. p50, p95, and p99 response times, which are almost always what you want when investigating "why does the app feel slow?". p50 tells you the median experience; p99 tells you the tail.
  • Error rate. 5xx responses as a percentage of traffic. Useful for alerting thresholds.
  • CPU and memory. Container-level resource usage, so you can tell whether "slow" is a code problem or a "we need a bigger machine" problem.
  • Cold starts. Count of instances booted per minute. High cold-start rates are a signal you should set min instances above zero.
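To make the latency and error-rate signals concrete, here is a minimal sketch of how they can be computed from raw request data. The numbers and the nearest-rank percentile method are illustrative assumptions; the dashboard computes these server-side with no setup on your part.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: p in [0, 100] over a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# (latency_ms, status_code) for a synthetic minute of traffic
requests = [(12, 200), (15, 200), (18, 304), (22, 200), (25, 200),
            (30, 200), (45, 200), (60, 404), (120, 200), (900, 500)]

latencies = [ms for ms, _ in requests]
p50 = percentile(latencies, 50)   # median experience: 25 ms
p99 = percentile(latencies, 99)   # the tail: 900 ms
# error rate counts only 5xx, matching the dashboard's definition
error_rate = sum(1 for _, code in requests if code >= 500) / len(requests)
```

Note how one slow 500 dominates p99 while leaving p50 untouched, which is exactly why the tail percentiles matter when the median looks healthy.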

Retention

Metrics are retained for 30 days at 1-minute resolution. Older metrics are downsampled to hourly and kept for a year, which is enough to spot seasonality without ballooning storage.
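The rollup described above can be sketched as follows. The hour-aligned buckets and mean aggregation are assumptions for illustration; the platform's actual downsampling policy may aggregate differently.

```python
from collections import defaultdict

def downsample_hourly(points):
    """points: list of (unix_timestamp, value) at 1-minute resolution.
    Returns {hour_start_timestamp: mean of that hour's values}."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % 3600].append(value)  # align to hour boundary
    return {hour: sum(vs) / len(vs) for hour, vs in buckets.items()}

# one hour of 1-minute points with values 0..59
minute_points = [(3600 + 60 * i, float(i)) for i in range(60)]
hourly = downsample_hourly(minute_points)  # one bucket, mean 29.5
```

The trade-off is the one the retention policy makes: 60 points collapse to one, so storage shrinks 60x while hour-scale trends (and seasonality) survive.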

Alerting

Set thresholds on any metric from the Alerts tab. When a threshold is breached, Appentic emails the workspace members. This is deliberately simple: the goal is to get you to look at the dashboard, not to replace a full incident management stack. Slack and PagerDuty integrations are on the roadmap.
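A threshold alert reduces to a simple check over recent samples. The "breach for N consecutive points before firing" rule below is an assumption added for illustration, to show why a one-minute blip need not email anyone; the Alerts tab's exact evaluation rule may differ.

```python
def should_alert(values, threshold, sustained_points=3):
    """Fire only when the last `sustained_points` samples all exceed
    the threshold (hypothetical debouncing rule, not the platform's)."""
    if len(values) < sustained_points:
        return False
    return all(v > threshold for v in values[-sustained_points:])

# percent of traffic returning 5xx, one sample per minute
error_rates = [0.2, 0.4, 6.1, 0.3, 5.2, 5.8, 6.5]
fired = should_alert(error_rates, threshold=5.0)
```

The isolated 6.1 at minute three would not fire on its own; the three consecutive breaches at the end would.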

Debugging from metrics

When a metric alerts, the usual debugging flow is: open the metrics tab, note the time range of the anomaly, switch to the Logs tab, and use the same time range to pull matching log lines. Both tabs share a time selector, so you don't have to type timestamps twice.
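The metrics-to-logs flow above amounts to filtering log lines by the anomaly's time window. A minimal sketch with synthetic log records (the shared time selector does this for you in the UI):

```python
from datetime import datetime

logs = [
    ("2024-05-01T10:00:05", "GET /api/users 200"),
    ("2024-05-01T10:02:11", "GET /api/users 500 upstream timeout"),
    ("2024-05-01T10:02:40", "GET /api/users 500 upstream timeout"),
    ("2024-05-01T10:09:59", "GET /api/users 200"),
]

def logs_in_range(entries, start, end):
    """Return log lines whose timestamps fall inside [start, end]."""
    lo, hi = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return [line for ts, line in entries
            if lo <= datetime.fromisoformat(ts) <= hi]

# suppose the metrics tab showed a 5xx spike between 10:02 and 10:03
anomaly = logs_in_range(logs, "2024-05-01T10:02:00", "2024-05-01T10:03:00")
```

Pulling only the lines inside the spike's window is what turns "error rate jumped" into a concrete cause, here an upstream timeout.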