

In distributed systems, MongoDB powers critical workloads where flexible schemas, dynamic queries, and large-scale horizontal sharding introduce unique performance challenges. Engineers often face spikes in connection pool usage, replication lag across shards, or inefficient index usage, all of which lead to unpredictable latency and cache pressure that directly affect application responsiveness.
Instead of relying on multiple monitoring agents or database-specific plugins, OpenTelemetry (OTel) provides a standardized pipeline to collect MongoDB’s core operational and query performance signals. This unified approach helps teams surface replication delays, identify inefficient queries, and monitor resource usage consistently across environments.
When instrumenting MongoDB with OpenTelemetry, we typically choose between application-level and database-level monitoring, each exposing a different scope of telemetry depending on where the instrumentation is applied:
Application-level instrumentation captures query spans, command timings, connection events, and request-level context directly from your application through OTel SDKs integrated at the MongoDB driver or framework level.
It enables you to correlate API calls with MongoDB operations such as find, insert, aggregate, or update, helping you identify slow queries, inefficient query patterns, connection churn, and latency introduced by application logic or network boundaries.
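To make this concrete, here is a minimal sketch of the application-level approach, assuming a Python service that uses pymongo and the opentelemetry-instrumentation-pymongo package; the connection string and collection names are illustrative:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
from pymongo import MongoClient

# Configure a tracer provider; ConsoleSpanExporter stands in for your OTLP exporter
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)

# Patch the driver so every MongoDB command emits a span
PymongoInstrumentor().instrument()

# Hypothetical connection string and collection; each command below is traced
client = MongoClient("mongodb://localhost:27017")
client.appdb.orders.find_one({"status": "pending"})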
Database-level monitoring collects engine-level MongoDB metrics, including replication lag, connection pool usage, cache utilization (WiredTiger metrics), document operation rates, and index efficiency, using the OpenTelemetry MongoDB Receiver or exporter integrations.
It provides visibility into the internal behavior of the database engine, enabling detection of issues like cache pressure, unindexed queries, slow-running aggregations, or underperforming replica sets, which may not surface through application-level traces alone.
This comparison highlights how client-side instrumentation uncovers application-driven query behavior, whereas server-side monitoring exposes MongoDB’s internal engine health.
For this guide, the focus is on server-side telemetry (engine-level metrics), which offers essential visibility into query latency, replication lag, cache usage, and operation counts, giving teams broad database-level insight with minimal overhead.
With these insights in place, we can move on to monitoring MongoDB with OpenTelemetry, where metrics are pulled directly from MongoDB’s diagnostic interfaces and converted into consistent telemetry signals.
Critical insights such as index efficiency and replication health, normally exposed via commands like serverStatus and dbStats, are collected automatically by the MongoDB Receiver, eliminating the need for custom tooling and enabling unified analysis in your backend.
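To see the kind of data these interfaces expose, you can run the same diagnostic commands manually; a minimal sketch using pymongo, assuming a hypothetical local instance and database name:

from pymongo import MongoClient

# Hypothetical local instance; adjust the connection string for your deployment
client = MongoClient("mongodb://localhost:27017")

status = client.admin.command("serverStatus")
print(status["connections"])            # current/available connection counts
print(status["opcounters"])             # insert/query/update/delete counters
print(status["wiredTiger"]["cache"])    # WiredTiger cache utilization counters

stats = client.appdb.command("dbStats") # per-database storage statistics
print(stats["dataSize"], stats["indexSize"])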
Note: MongoDB Atlas vs Self-Managed MongoDB
Monitoring MongoDB with OpenTelemetry depends on your deployment model:
MongoDB Receiver – Used for self-managed MongoDB clusters. It queries diagnostic commands (serverStatus, dbStats, connection metrics, WiredTiger stats) to collect process-level telemetry.
MongoDB Atlas Receiver – Integrates with the Atlas Monitoring API to gather cluster-level metrics, events, and alerts from fully managed MongoDB deployments.
This guide focuses on the MongoDB Receiver for self-managed MongoDB environments.
The MongoDB Receiver uses the Go-based MongoDB driver internally to extract real-time metrics from serverStatus, dbStats, and other diagnostic interfaces, converting the BSON responses into structured OpenTelemetry metrics and giving visibility into overall database health.
Prerequisites: A MongoDB instance with read-only diagnostic privileges (e.g., via the clusterMonitor role) allowing access to commands like serverStatus, dbStats, and replSetGetStatus.
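If no such user exists yet, one can be created with the clusterMonitor role; a minimal sketch using pymongo, with hypothetical user names and credentials (the same can be done from mongosh with db.createUser):

from pymongo import MongoClient

# Connect as an administrative user (hypothetical credentials)
admin_client = MongoClient("mongodb://admin:adminpass@localhost:27017/?authSource=admin")

# Create a read-only monitoring user for the OpenTelemetry Collector
admin_client.admin.command(
    "createUser", "otel_monitor",  # hypothetical user name
    pwd="change_me",
    roles=[{"role": "clusterMonitor", "db": "admin"}],
)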
Define the MongoDB receiver in your OpenTelemetry Collector configuration to enable periodic scraping of MongoDB’s diagnostic metrics, as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
[...]
receivers:
  mongodb:
    collection_interval: 10s
    hosts:
      - endpoint: "<YOUR_MONGODB_ENDPOINT>/?authSource=admin"
    username: "<USER_NAME>"
    password: "<PASSWORD>"
    tls:
      insecure: true
      insecure_skip_verify: true
[...]
Replace <YOUR_MONGODB_ENDPOINT>, <USER_NAME>, and <PASSWORD> with your MongoDB host, username, and password (e.g., mongodb://user:pass@host:27017/?authSource=admin).
Ensure the monitoring user has permissions to fetch metrics via commands like serverStatus, dbStats, and replication queries.
Use TLS settings appropriate for your deployment and security requirements.
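To keep credentials out of the ConfigMap itself, the Collector also supports environment-variable substitution; a sketch of the same receiver using ${env:...} placeholders (the variable names are illustrative):

receivers:
  mongodb:
    collection_interval: 10s
    hosts:
      - endpoint: "${env:MONGODB_ENDPOINT}"
    username: "${env:MONGODB_USER}"
    password: "${env:MONGODB_PASSWORD}"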
Update the service.pipelines.metrics section to receive data from the mongodb receiver defined above:
service:
  pipelines:
    metrics:
      receivers: [mongodb]
[...]
With the MongoDB receiver enabled, the Collector continuously gathers database metrics and streams them to your chosen monitoring backend.
This setup offers agentless MongoDB visibility while keeping the telemetry flow streamlined and performant. The following diagram provides a comprehensive view of the telemetry flow, from the application and MongoDB database layers through the Collector to the backend.

The Collector queries diagnostic interfaces such as serverStatus and dbStats to gather database metrics.
This gives end-to-end visibility into application-level telemetry and MongoDB’s internal performance signals, enabling clear monitoring of runtime behavior and database health through a single OpenTelemetry pipeline that leads directly into effective metrics visualization.
When your OpenTelemetry pipeline is configured, you can integrate Prometheus as a backend to observe MongoDB metrics in near real time and define alerting rules for high-impact database conditions.
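Wiring this up typically means adding a Prometheus exporter to the metrics pipeline; a minimal sketch assuming the Collector’s prometheus exporter, with an illustrative listen address:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"   # the Collector exposes metrics here for Prometheus to scrape

service:
  pipelines:
    metrics:
      receivers: [mongodb]
      exporters: [prometheus]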
Here, we’ve queried mongodb_global_lock_time_milliseconds_total, which records the total duration the global lock has been held. This metric helps identify lock contention and execution stalls, making it easier to detect and diagnose potential performance degradation.

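Building on this metric, an alerting rule can flag sustained lock contention; a sketch of a Prometheus rule where the group name, threshold, and durations are illustrative:

groups:
  - name: mongodb-alerts            # hypothetical rule group name
    rules:
      - alert: MongoDBGlobalLockHigh
        # Average milliseconds of global lock time accrued per second over 5m
        expr: rate(mongodb_global_lock_time_milliseconds_total[5m]) > 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "MongoDB global lock time is elevated"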
To explore essential MongoDB monitoring metrics, including lock duration, replication lag, and operation throughput, and what makes them operationally significant, refer to the MongoDB Monitoring guide.
You can also integrate a tracing backend like Jaeger to visualize operations including command execution latency, query-processing time, and driver–server round-trip delays.
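One common way to wire this is to export traces from the Collector to Jaeger over OTLP, which Jaeger accepts natively; a minimal sketch assuming an otlp receiver is already defined for application traces, with an illustrative endpoint:

exporters:
  otlp/jaeger:
    endpoint: "jaeger:4317"   # Jaeger's OTLP gRPC port (illustrative host)
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]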
Here is an example of MongoDB traces being visualized on the Jaeger dashboard:

With these visualizations in place, we gain consolidated insight into MongoDB’s runtime behavior, from lock patterns to replication characteristics, enabling consistent monitoring across both the application and the database.
In this guide, we walked through how to monitor MongoDB using OpenTelemetry’s server-side instrumentation model, which provides a consistent, vendor-neutral way to collect engine-level signals from diagnostic interfaces and helps you track core database behavior without relying on external agents or custom scripts.
This configuration establishes a consistent, telemetry-driven approach to monitoring MongoDB performance across environments.
To explore more, check the official OpenTelemetry documentation, MongoDB monitoring references, and the Collector contrib GitHub repository.
