Grafana + Prometheus: Set Up Production Monitoring on Elestio

You're running services in production. Something breaks at 2 AM. By the time you notice, users have already noticed. The fix isn't more coffee — it's knowing something broke before anyone else does.

Grafana and Prometheus are the monitoring stack that most of the industry runs on. Prometheus collects metrics. Grafana turns them into dashboards and alerts. Together, they give you visibility into CPU, memory, disk, network, and anything else that exposes a /metrics endpoint — which, in 2026, is basically everything.

Here's how to set them up on Elestio and wire them together.

What Each Tool Does

Prometheus is a time-series database with a pull-based architecture. Every 15 seconds (configurable), it scrapes HTTP endpoints on your services, collects metrics, and stores them. You query the data with PromQL — a purpose-built query language that handles rates, aggregations, and predictions.

Grafana is the visualization layer. It connects to Prometheus (and dozens of other data sources), lets you build dashboards, and fires alerts when things go wrong. It's where you'll actually spend your time.

Node Exporter is the piece most people forget. It's a lightweight agent that exposes host-level metrics — CPU per core, memory breakdown, disk I/O, network throughput, filesystem usage. Without it, Prometheus only sees itself.
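The /metrics format itself is plain text, which is part of why so many tools support it. A scrape of Node Exporter returns lines like these (values illustrative, help strings abbreviated):

```text
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 184250.31
node_cpu_seconds_total{cpu="0",mode="user"} 2231.94
# HELP node_memory_MemAvailable_bytes Memory available for starting new applications.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 2.94912e+09
```

Each line is a metric name, optional labels in braces, and a sample value; Prometheus attaches the timestamp when it scrapes.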

Step 1: Deploy Both on Elestio

The fastest path: deploy Grafana and Prometheus as separate services on Elestio. Each starts at ~$16/month on Netcup (2 CPU, 4 GB RAM). For a monitoring stack watching a handful of services, that's plenty.

Both deploy in under 5 minutes. Elestio handles SSL, backups, and updates automatically.

If you want everything on a single VM, you can use a Docker Compose stack instead:

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    ports:
      - "172.17.0.1:9090:9090"
    restart: always

  node-exporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
    ports:
      - "172.17.0.1:9100:9100"
    restart: always

  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - "172.17.0.1:3000:3000"
    restart: always

volumes:
  prometheus_data:
  grafana_data:

Note the 172.17.0.1 bindings — that's Elestio's Docker bridge. External access goes through the Nginx reverse proxy with automatic SSL.

Step 2: Configure Prometheus Scrape Targets

Edit prometheus.yml to tell Prometheus what to monitor:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

This gives you two scrape targets: Prometheus itself (internal health) and Node Exporter (host metrics — CPU, memory, disk, network). Restart Prometheus to apply:

docker-compose restart prometheus
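A restart works, but drops a few seconds of scrapes. As a sketch of an alternative (assuming you're comfortable exposing the lifecycle endpoint internally), start Prometheus with the --web.enable-lifecycle flag and it will re-read its config on a POST to /-/reload:

```yaml
# prometheus service in docker-compose.yml (sketch)
command:
  - '--config.file=/etc/prometheus/prometheus.yml'
  - '--storage.tsdb.path=/prometheus'
  - '--web.enable-lifecycle'   # enables config reload via POST /-/reload
```

After editing prometheus.yml, trigger the reload from the host with curl -X POST http://172.17.0.1:9090/-/reload.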

Need to monitor more services? Add targets. Most databases and tools expose Prometheus-compatible metrics natively — PostgreSQL, Redis, Nginx, and hundreds more have dedicated exporters.
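As a sketch, extra services are just more entries under scrape_configs. The hostnames here are assumptions for a Compose setup running redis_exporter and postgres_exporter; the ports are those exporters' defaults:

```yaml
scrape_configs:
  # ...existing jobs...
  - job_name: 'redis'
    static_configs:
      - targets: ['redis-exporter:9121']    # redis_exporter default port
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187'] # postgres_exporter default port
```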

Step 3: Connect Prometheus to Grafana

Open your Grafana instance and navigate to Connections > Data Sources > Add data source > Prometheus.

Set the URL:

  • Same Docker network: http://prometheus:9090
  • Separate Elestio instances: http://172.17.0.1:9090 (or your Prometheus instance URL)

Click Save & Test. You should see "Successfully queried the Prometheus API."
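If you prefer configuration as code, Grafana can provision the same data source from a YAML file placed in /etc/grafana/provisioning/datasources/ at startup. A minimal sketch, assuming the same-network URL:

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```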

Step 4: Import a Dashboard

You could build panels from scratch, but there's no reason to. Grafana's community has thousands of pre-built dashboards.

Go to Dashboards > New > Import, enter ID 1860, and click Load. This is the "Node Exporter Full" dashboard — the gold standard for host monitoring. It gives you 70+ panels covering:

  • CPU usage per core and mode (user, system, iowait, idle)
  • Memory breakdown (used, cached, buffered, available)
  • Disk I/O and filesystem usage per mountpoint
  • Network throughput per interface
  • System load averages and uptime

Select your Prometheus data source and import. You'll have a production-grade monitoring dashboard in 30 seconds.

Step 5: Set Up Alerts

Dashboards are useless if nobody's watching them. Alerts fix that.

In Grafana, go to Alerting > Alert rules > New alert rule. Here are four essential alerts to start with:

  • High CPU usage: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) above 80% for 5 minutes
  • Memory critical: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 below 10% for 5 minutes
  • Disk almost full: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 below 15%
  • Service down: up{job="node-exporter"} == 0 for 1 minute
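If you'd rather keep alerting rules in Prometheus itself (and route notifications through Alertmanager instead of Grafana), the same conditions translate into a rules file. A sketch of the service-down rule, assuming the file is listed under rule_files in prometheus.yml:

```yaml
groups:
  - name: host-alerts
    rules:
      - alert: NodeExporterDown
        expr: up{job="node-exporter"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Node Exporter target has been down for 1 minute"
```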

Configure a contact point (email, Slack, Discord, PagerDuty — Grafana supports them all) and route alerts there. Now you'll know about problems before your users do.
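Contact points can also be provisioned from YAML instead of clicked together, via a file in /etc/grafana/provisioning/alerting/. A sketch for a Slack contact point, with a placeholder webhook URL you'd replace with your own:

```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: slack-alerts
    receivers:
      - uid: slack-receiver
        type: slack
        settings:
          url: https://hooks.slack.com/services/REPLACE_ME
```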

Why Not Just Use Datadog?

Fair question. Datadog and New Relic are excellent products. But they charge per host, per metric, or per GB ingested. At 10 hosts, you're looking at $2,000–3,000/year. At 100 hosts, it's $10,000+.

Grafana + Prometheus on Elestio costs ~$29–59/month regardless of how many hosts you monitor. The metrics stay on your infrastructure. The dashboards are yours. And PromQL is the industry standard — skills transfer everywhere.

The trade-off: you manage the stack instead of paying someone else to. On Elestio, that trade-off shrinks to almost nothing — backups, updates, and SSL are handled for you.

Troubleshooting

"No data" in Grafana panels after importing dashboard 1860 The dashboard expects a job label matching node. Check your prometheus.yml — if your job is named node-exporter, either rename it to node or update the dashboard's variable to match your job name.

Prometheus can't reach Node Exporter
If both run in the same Docker Compose stack, make sure they're on the same network. Verify connectivity with docker exec prometheus wget -qO- http://node-exporter:9100/metrics.

Grafana shows "Data source is not working"
Check the Prometheus URL. If Grafana and Prometheus are on separate Elestio instances, use the Prometheus instance's public URL or set up an internal network between them.

Metrics appear but gaps exist in graphs
Usually a sign that scrapes are timing out before they complete. Increase scrape_interval from 15s to 30s and set scrape_timeout to match (it must not exceed the interval).
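The corresponding prometheus.yml change, as a sketch:

```yaml
global:
  scrape_interval: 30s
  scrape_timeout: 30s   # must not exceed scrape_interval
```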

Thanks for reading. See you in the next one 👋