SigNoz + OpenTelemetry: Build a Complete Observability Stack for Your Microservices
Your microservices are slow. Users are complaining. And somewhere in your distributed system, there's a bottleneck you can't find because you're flying blind.
Most teams solve this by throwing money at Datadog or New Relic. Then they get the invoice and start questioning their life choices. There's a better way: SigNoz with OpenTelemetry gives you the same enterprise-grade observability without the enterprise-grade pricing.
Why OpenTelemetry Changes Everything
OpenTelemetry is the CNCF-backed standard for collecting traces, metrics, and logs. Before OTel, every observability vendor had their own proprietary agents. Switch vendors? Reinstrument everything. Not anymore.
With OpenTelemetry, you instrument once and send data anywhere. SigNoz is built natively on OpenTelemetry, which means zero vendor lock-in and full compatibility with the ecosystem.
Deploy SigNoz in 5 Minutes
The fastest way to get SigNoz running is through Elestio's managed service. Select your cloud provider, pick your instance size (2 CPU / 4GB RAM minimum for small workloads), and click deploy. You'll have a fully configured SigNoz instance with SSL, backups, and monitoring in about five minutes.
Once deployed, access your SigNoz dashboard at your instance URL. The default credentials are in your Elestio dashboard.
Instrumenting a Node.js Application
Let's instrument a real Express.js application. First, install the OpenTelemetry packages:
```shell
npm install @opentelemetry/api \
  @opentelemetry/sdk-node \
  @opentelemetry/sdk-metrics \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/exporter-metrics-otlp-http
```
Create a tracing.js file that initializes OpenTelemetry before your app starts:
```javascript
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');

// Point both exporters at your SigNoz OTLP HTTP endpoint.
const traceExporter = new OTLPTraceExporter({
  url: 'https://your-signoz-instance.elest.io/v1/traces',
});

const metricExporter = new OTLPMetricExporter({
  url: 'https://your-signoz-instance.elest.io/v1/metrics',
});

const sdk = new NodeSDK({
  serviceName: 'my-express-app',
  traceExporter,
  metricReader: new PeriodicExportingMetricReader({
    exporter: metricExporter,
    exportIntervalMillis: 60000, // push metrics every 60 seconds
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush buffered telemetry before the process exits so you don't lose spans.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```
Now modify your package.json to load tracing before your app:
```json
{
  "scripts": {
    "start": "node --require ./tracing.js app.js"
  }
}
```
That's it. Your Express routes, database queries, and HTTP calls are now automatically traced.
Understanding the Three Pillars
SigNoz gives you unified visibility across traces, metrics, and logs.
Traces show the journey of a request through your system. When a user hits your API, you see exactly which services were called, how long each took, and where failures occurred. The flamegraph view is particularly useful for spotting that one slow database query hiding in your call stack.
Metrics provide the big picture. SigNoz automatically calculates RED metrics (Rate, Errors, Duration) for your services. You can also push custom metrics for business KPIs.
Logs complete the picture. With OpenTelemetry's log correlation, you can jump from a slow trace directly to the relevant log entries. No more grep-ing through gigabytes of logs.
Cost Comparison
Here's why self-hosted observability makes financial sense:
| Provider | 100GB/month | 500GB/month |
|---|---|---|
| Datadog | ~$500 | ~$2,500 |
| New Relic | ~$400 | ~$2,000 |
| SigNoz on Elestio | ~$59 | ~$119 |
The Elestio cost is your infrastructure (8 CPU / 16GB for 500GB workloads). No per-host fees. No per-seat fees. No surprise overages.
Production Tips
Size your instance correctly. For most teams, start with 4 CPU / 8GB RAM. SigNoz uses ClickHouse under the hood, so it's memory-hungry but incredibly fast at queries.
Set retention policies. By default, SigNoz keeps data for 7 days. For production, configure retention based on your compliance needs:
```yaml
# In your SigNoz config
queryService:
  storage:
    ttl:
      traces: 168h   # 7 days
      metrics: 720h  # 30 days
      logs: 336h     # 14 days
```
Use sampling for high-volume services. If you're pushing millions of spans per minute, configure head-based or tail-based sampling in your OpenTelemetry Collector to reduce costs without losing visibility.
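As a rough illustration, tail-based sampling in the OpenTelemetry Collector (contrib distribution) might look like this — the policy names and thresholds are examples to adapt, not recommendations:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code: { status_codes: [ERROR] }
      - name: keep-slow
        type: latency
        latency: { threshold_ms: 500 }
      - name: sample-the-rest
        type: probabilistic
        probabilistic: { sampling_percentage: 10 }
```

This keeps every errored or slow trace while sampling the healthy majority, which is usually the right trade-off for high-volume services.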
Troubleshooting
Traces not appearing? Check that your OTLP endpoint URL is correct and includes /v1/traces (the OTLP HTTP receiver listens on port 4318 by default). Verify your SigNoz instance is accessible from your application network. Test connectivity with curl:

```shell
curl -v https://your-signoz-instance.elest.io/v1/traces
```

A 405 Method Not Allowed response to this GET is actually good news: it means the endpoint is reachable. The SDK sends POST requests.
High memory usage? ClickHouse caches aggressively. This is normal. If you're hitting limits, increase your instance size or reduce your retention window.
Missing automatic instrumentation? Some libraries need explicit instrumentation. Check the OpenTelemetry registry for available instrumentations for your stack.
What's Next
Once you have basic observability working, explore SigNoz's alerting features. Set up alerts for p99 latency thresholds, error rate spikes, and service availability. The alert rules use PromQL, so if you're coming from Prometheus, you'll feel right at home.
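For a flavor of what a latency alert might look like, here's a hypothetical PromQL expression — the metric name `signoz_latency_bucket` and the 500ms threshold are assumptions to adapt to your own setup:

```promql
histogram_quantile(
  0.99,
  sum(rate(signoz_latency_bucket[5m])) by (le, service_name)
) > 0.5
```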
For custom domain setup with automated SSL on your SigNoz instance, follow Elestio's domain configuration guide.
The days of paying thousands per month for observability are over. With SigNoz and OpenTelemetry, you get the insights you need to ship reliable software without the financial pain.
Thanks for reading. See you in the next one.