Promtail Is Dead: How to Migrate Your Log Pipeline to Grafana Alloy Before It Breaks
Promtail hit end-of-life on March 2, 2026. If you're still running it, your log pipeline is now officially unsupported. No more security patches, no more bug fixes, no more updates. Grafana merged Promtail's code into Alloy over a year ago, and the Loki Helm chart is forking to a community-maintained repository on March 16. The clock is ticking.
The good news? The migration is straightforward. Here's how to do it without breaking your logging stack.
Why Grafana Killed Promtail
Promtail did one thing: tail log files and ship them to Loki. It was simple, reliable, and honestly, pretty boring in the best way.
The problem was scope. Grafana needed a unified agent that handles logs, metrics, traces, and profiles in a single binary. Maintaining separate collectors (Promtail for logs, Grafana Agent for metrics and traces) was becoming unsustainable. So they built Alloy, their distribution of the OpenTelemetry Collector, and folded everything into it.
Promtail entered long-term support in February 2025. Commercial support ended February 28, 2026, and on March 2 it went fully EOL.
What Alloy Gives You That Promtail Didn't
If you think of Alloy as "Promtail with extra steps," you're underselling it. Here's what's actually different:
Unified telemetry pipeline. One agent for logs, metrics, traces, and profiles. No more running Promtail alongside a separate Prometheus agent.
Native OpenTelemetry support. Alloy is 100% OTLP compatible. If your applications already emit OpenTelemetry data, Alloy ingests it natively without translation layers.
Component-based architecture. Instead of a monolithic config file, Alloy uses composable components that you wire together. Collectors feed into transformers, which feed into writers. You can build exactly the pipeline you need.
Kubernetes-native log collection. Promtail tailed container log files from /var/log/containers on disk. Alloy's loki.source.kubernetes component uses the Kubernetes API directly, which is cleaner and more reliable.
Built-in service discovery. Alloy discovers targets using the same relabeling logic Prometheus uses. If you're already comfortable with relabel_configs, you'll feel right at home.
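As a taste of that, here's a sketch of a discovery.relabel component that filters discovered pod targets before a log source consumes them. The component names and the kube-system filter are illustrative, not prescriptive:

```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "app_pods" {
  targets = discovery.kubernetes.pods.targets

  // Same semantics as a Prometheus relabel_config drop rule:
  // discard targets whose namespace matches the regex
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    regex         = "kube-system"
    action        = "drop"
  }
}
```

Downstream components then consume discovery.relabel.app_pods.output instead of the raw discovery targets.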
The Migration: Step by Step
1. Use the Built-in Converter
Alloy ships with a convert command that translates your Promtail config automatically:
```shell
alloy convert --source-format=promtail \
  --output=/etc/alloy/config.alloy \
  /etc/promtail/config.yml
```
This handles most configurations cleanly, and the converter will warn you about anything it can't translate automatically. If it refuses to convert at all, the --bypass-errors flag skips the unsupported blocks so you can port those by hand.
2. Review the Converted Config
The converted config uses Alloy's component syntax. A basic Promtail setup that scraped local files and pushed to Loki becomes something like this:
```alloy
local.file_match "logs" {
  path_targets = [{"__path__" = "/var/log/*.log"}]
}

loki.source.file "local_files" {
  targets    = local.file_match.logs.targets
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```
Notice the component chaining: file_match discovers files, source.file reads them, and loki.write ships them to Loki. Each component has a clear responsibility.
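That chaining also makes it easy to slot in processing. Here's a minimal sketch (the enrich component name and env label are illustrative) of a loki.process component placed between the source and the writer:

```alloy
// Insert between loki.source.file and loki.write by pointing the
// source's forward_to at loki.process.enrich.receiver
loki.process "enrich" {
  forward_to = [loki.write.default.receiver]

  // Attach a static label to every log entry passing through
  stage.static_labels {
    values = { env = "prod" }
  }
}
```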
3. Handle Kubernetes Deployments
If you're running Promtail as a DaemonSet in Kubernetes, replace it with an Alloy DaemonSet. The Helm chart makes this easy; note --set-file, which passes the config file verbatim, where a plain --set would choke on the commas and braces in Alloy syntax:

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm install alloy grafana/alloy \
  --namespace monitoring \
  --set-file alloy.configMap.content=/etc/alloy/config.alloy
```
For Kubernetes log collection, use loki.source.kubernetes instead of file tailing. It needs a discovery.kubernetes component to supply the pod targets:

```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}
```
4. Update Your Dashboards
Alloy exposes different metric names than Promtail. If you have Grafana dashboards or alerts referencing Promtail metrics, you'll need to update them. Common renames:
| Promtail Metric | Alloy Equivalent |
|---|---|
| promtail_targets_active_total | loki_source_file_targets_active_total |
| promtail_read_bytes_total | loki_source_file_read_bytes_total |
| promtail_sent_entries_total | loki_write_sent_entries_total |
5. Validate and Cut Over
Run Alloy alongside Promtail temporarily to verify log delivery:
```shell
# Check Alloy is collecting logs
curl http://localhost:12345/metrics | grep loki_write_sent

# Compare with Promtail's output
curl http://localhost:9080/metrics | grep promtail_sent
```
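If you want more than an eyeball comparison, you can sum the counters from saved metric dumps. A rough sketch: sum_counter is a hypothetical helper, and the printf demo input stands in for a real `curl -s .../metrics > promtail.prom` capture:

```shell
# Sum every sample of a counter from a Prometheus-format metrics dump
sum_counter() {
  awk -v m="$1" '$1 ~ "^"m {s += $NF} END {printf "%d\n", s}' "$2"
}

# Demo input standing in for: curl -s http://localhost:9080/metrics > promtail.prom
printf 'promtail_sent_entries_total{host="a"} 120\npromtail_sent_entries_total{host="b"} 80\n' > promtail.prom

sum_counter promtail_sent_entries_total promtail.prom   # prints 200
```

Run the same helper against Alloy's dump for loki_write_sent_entries_total and compare the totals; small drift is normal while both agents run side by side.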
Once you confirm parity, remove the Promtail deployment.
Heads Up: The Loki Helm Chart Fork
On March 16, 2026, the official Loki Helm chart moves to a community-maintained repository at grafana-community/helm-charts. If you deploy Loki via Helm, update your chart source:
```shell
helm repo add grafana-community https://grafana-community.github.io/helm-charts
helm repo update
```
The chart in the main Loki repository will only be maintained for Grafana Enterprise Logs (GEL) users going forward.
Troubleshooting Common Issues
High memory usage after migration. Alloy's Kubernetes log source talks to the API server, which can be chattier than file tailing. Consider throttling high-volume pods with a stage.limit block in a loki.process component.
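A minimal sketch of such a throttle, using Alloy's stage.limit (ported from Promtail's limit stage); the component name and rates are placeholders to tune for your volume:

```alloy
loki.process "throttle" {
  forward_to = [loki.write.default.receiver]

  // Drop entries beyond 100 lines/sec, allowing bursts up to 200
  stage.limit {
    rate  = 100
    burst = 200
    drop  = true
  }
}
```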
Tracing configuration not converted. The converter doesn't handle Promtail's tracing settings. If you had Jaeger integration enabled in Promtail, configure tracing separately in Alloy using the otelcol.exporter components.
Metric name mismatches in alerts. This is the most common post-migration issue. Grep your alerting rules for promtail_ prefixes and update them to the Alloy equivalents listed above.
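The grep-and-replace can be scripted. A minimal sketch, assuming GNU sed and using a throwaway rule file so it's safe to try before pointing it at your real rules directory:

```shell
# Demo rule file standing in for your real alerting rules
cat > promtail-alert.yml <<'EOF'
expr: rate(promtail_sent_entries_total[5m]) == 0
EOF

# Rewrite each Promtail metric name to its Alloy equivalent, in place
sed -i \
  -e 's/promtail_targets_active_total/loki_source_file_targets_active_total/g' \
  -e 's/promtail_read_bytes_total/loki_source_file_read_bytes_total/g' \
  -e 's/promtail_sent_entries_total/loki_write_sent_entries_total/g' \
  promtail-alert.yml

cat promtail-alert.yml   # expr: rate(loki_write_sent_entries_total[5m]) == 0
```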
Deploy Your Observability Stack on Elestio
If you'd rather skip the infrastructure management entirely, Elestio offers fully managed Grafana + Loki instances. Automated backups, updates, SSL, and monitoring are handled for you, starting at ~$29/month. You get a production-ready observability stack without the operational overhead of managing Helm charts, container updates, and certificate rotations yourself.
Thanks for reading. See you in the next one.