Monitoring MongoDB Metrics with OpenTelemetry

November 21, 2025
Tags:
Observability
OpenTelemetry
MongoDB Monitoring

In distributed systems, MongoDB powers critical workloads where flexible schemas, dynamic queries, and large-scale horizontal sharding introduce unique performance challenges. Engineers often face spikes in connection pools, replication lag across shards, or inefficient index usage that leads to unpredictable latency and cache pressure, all of which directly affect application responsiveness.

Instead of relying on multiple monitoring agents or database-specific plugins, OpenTelemetry (OTel) provides a standardized pipeline to collect MongoDB’s core operational and query performance signals. This unified approach helps teams surface replication delays, identify inefficient queries, and monitor resource usage consistently across environments.

Client and Server-Side MongoDB Instrumentation

When instrumenting MongoDB with OpenTelemetry, we typically choose between application-level and database-level monitoring, each exposing a different scope of telemetry depending on where it is collected:

Client-side instrumentation (Application-Level Telemetry)

This captures query spans, command timings, connection events, and request-level context directly from your application through OTel SDKs integrated at the MongoDB driver or framework level.

It enables you to correlate API calls with MongoDB operations such as find, insert, aggregate, or update, helping you identify slow queries, inefficient query patterns, connection churn, and latency introduced by application logic or network boundaries.
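As a quick illustration (client-side instrumentation is not the focus of this guide), the sketch below shows what this can look like in a Python service using pymongo; the opentelemetry-instrumentation-pymongo package, connection string, and collection names are assumptions for the example, and an OTel SDK with an exporter is assumed to be configured elsewhere in the application.

# Minimal sketch of client-side MongoDB instrumentation in Python (assumes the
# opentelemetry-instrumentation-pymongo package and a configured OTel SDK/exporter).
from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
from pymongo import MongoClient

# Patch the pymongo driver so each command (find, insert, aggregate, update, ...)
# is emitted as a span carrying the command name and its duration.
PymongoInstrumentor().instrument()

client = MongoClient("mongodb://localhost:27017")   # placeholder endpoint
orders = client["shop"]["orders"]                   # placeholder database/collection
orders.find_one({"status": "pending"})              # this query now produces a span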

Server-side instrumentation (Database Engine Telemetry)

This collects engine-level MongoDB metrics including replication lag, connection pool usage, cache utilization (wiredTiger metrics), document operation rates, and index efficiency using the OpenTelemetry MongoDB Receiver or exporter integrations.

It provides visibility into the internal behavior of the database engine, enabling detection of issues like cache pressure, unindexed queries, slow-running aggregations, or underperforming replica sets, which may not surface through application-level traces alone.

| Category | Client-Side (Application-Level) | Server-Side (Database Engine) |
| --- | --- | --- |
| Metrics | Query latency, command timings, connections | Replication lag, cache stats, op counters, index stats |
| Traces | Spans for find/insert/update/aggregate operations | Engine-level spans and internal operation timings |
| Performance | Slow queries due to app logic, driver, or network issues | Cache pressure, unindexed scans, slow aggregations |
| Errors | Driver errors, timeouts, failed commands | Replication failures, disk I/O, storage warnings |
| When to Use | Debug application logic, query efficiency, and latency | Monitor database health, throughput, and resource saturation |
| Critical Observations | Optimize workload distribution, connection pooling, and query plans | Identify internal DB bottlenecks not visible from app-level traces |

The table highlights how client-side instrumentation uncovers application-driven query behavior, whereas server-side monitoring exposes MongoDB’s internal engine health.

For this guide, the focus is on server-side telemetry (engine-level metrics), which offers essential insights into query latency, replication lag, cache usage, and operation counts, giving teams broad database-level visibility with minimal overhead.

With these insights in place, we move to monitoring MongoDB with OpenTelemetry, where engine metrics are collected and transformed into consistent telemetry signals.

Monitoring MongoDB with OpenTelemetry

When monitoring MongoDB with OpenTelemetry, metrics are pulled directly from MongoDB’s diagnostic interfaces and converted into consistent telemetry signals. 

Critical insights such as index efficiency and replication health, normally exposed via commands like serverStatus and dbStats, are collected automatically by the MongoDB Receiver, eliminating custom tooling and enabling unified analysis in your backend of choice.

Note: MongoDB Atlas vs Self-Managed MongoDB

Monitoring MongoDB with OpenTelemetry depends on your deployment model:

MongoDB Receiver – Used for self-managed MongoDB clusters. It queries diagnostic commands (serverStatus, dbStats, connection metrics, wiredTiger stats) to collect process-level telemetry.

MongoDB Atlas Receiver – Integrates with the Atlas Monitoring API to gather cluster-level metrics, events, and alerts from fully managed MongoDB deployments.

This guide focuses on the MongoDB Receiver for self-managed MongoDB environments.

Instrumenting MongoDB with OpenTelemetry

The MongoDB Receiver internally uses the Go-based MongoDB driver to extract real-time metrics from serverStatus, dbStats, and other diagnostic interfaces, converting the BSON responses into structured OpenTelemetry metrics and providing visibility into overall database health.

Prerequisites: A MongoDB instance with read-only diagnostic privileges (e.g., via the clusterMonitor role) to allow access to commands like serverStatus, dbStats, and replSetGetStatus.
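If such a user does not exist yet, one way to create it is sketched below using pymongo and an existing admin account; the admin credentials, user name, and password are placeholders.

# Hedged sketch: create a read-only monitoring user with the clusterMonitor role.
from pymongo import MongoClient

# Connect with an existing administrative account (placeholders below).
admin_client = MongoClient("mongodb://admin:<ADMIN_PASSWORD>@<YOUR_MONGODB_HOST>:27017/?authSource=admin")

# createUser runs against the admin database; clusterMonitor grants read-only
# access to serverStatus, dbStats, replSetGetStatus, and related diagnostics.
admin_client["admin"].command(
    "createUser",
    "otel_monitor",                                     # placeholder user name
    pwd="<MONITOR_PASSWORD>",                           # placeholder password
    roles=[{"role": "clusterMonitor", "db": "admin"}],
)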

Step 1: Configuring MongoDB Receiver in the OTel Collector

Define the MongoDB receiver in your OpenTelemetry Collector to enable periodic scraping of MongoDB’s diagnostic metrics, with the configuration shown below:

apiVersion: v1
kind: ConfigMap
metadata:
[...]
receivers:
  mongodb:
    collection_interval: 10s
    hosts:
      - endpoint: "<YOUR_MONGODB_ENDPOINT>/?authSource=admin"
    username: "<USER_NAME>"
    password: "<PASSWORD>"
    tls:
      insecure: true
      insecure_skip_verify: true
      [...]

  • endpoint - Specifies the MongoDB URI where the receiver connects to pull diagnostic metrics.   
  • collection_interval - Defines how frequently the Collector scrapes MongoDB monitoring data. 
  • hosts - Lists MongoDB instances to monitor, each referenced through a valid connection string.

Replace <YOUR_MONGODB_ENDPOINT>, <USER_NAME>, and <PASSWORD> with your MongoDB host, username, and password (e.g., mongodb://user:pass@host:27017/?authSource=admin).

Ensure the monitoring user has permissions to fetch metrics via commands like serverStatus, dbStats, and replication queries.

Use TLS settings appropriate for your deployment and security requirements.
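For example, a hardened variant of the receiver block above might require TLS and point at a CA bundle instead of skipping verification; this is a sketch, and the certificate path is a placeholder.

receivers:
  mongodb:
    collection_interval: 10s
    hosts:
      - endpoint: "<YOUR_MONGODB_ENDPOINT>/?authSource=admin"
    username: "<USER_NAME>"
    password: "<PASSWORD>"
    tls:
      insecure: false                           # require TLS to the MongoDB nodes
      ca_file: /etc/otel/certs/mongodb-ca.pem   # placeholder CA bundle path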

Step 2: Add the MongoDB Receiver to the Metrics Pipeline

Update the service.pipelines.metrics section to receive data from the mongodb receiver defined above:

service:
  pipelines:
    metrics:
      receivers: [mongodb]
      [...]

With the MongoDB receiver enabled, the Collector continuously gathers database metrics and streams them to your chosen monitoring backend.
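For reference, a minimal end-to-end metrics pipeline might look like the sketch below; the batch processor is a common default, and the Prometheus exporter with its listen address is an illustrative choice that can be swapped for any exporter available in your Collector distribution.

receivers:
  mongodb:
    [...]                        # receiver settings from Step 1
processors:
  batch: {}                      # batch metrics before export
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"     # illustrative address for Prometheus to scrape
service:
  pipelines:
    metrics:
      receivers: [mongodb]
      processors: [batch]
      exporters: [prometheus]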

This setup offers agentless MongoDB visibility while keeping the telemetry flow streamlined and performant. The following diagram shows the full data flow from MongoDB through the Collector to the backend.

Telemetry Flow

This diagram provides a comprehensive view of the telemetry flow using OpenTelemetry, covering both the application and MongoDB database layers.

  • Data Sources: The application (instrumented via OpenTelemetry SDKs/agents), emits metrics and traces through the OTLP protocol, while the MongoDB Receiver polls diagnostic commands like serverStatus and dbStats to gather database metrics.

  • OTel Collector: The Collector accepts telemetry over OTLP (gRPC on 4317 or HTTP on 4318), applies optional processors such as batching or sampling, and normalizes the application and MongoDB telemetry before exporting it to the configured backends (a receiver sketch follows this list).

  • Backends: Once processed, the Collector exports the telemetry to your configured backends, for example Jaeger for traces, Prometheus for metrics, and OpenSearch for logs, enabling centralized analysis and visualization across the application and MongoDB layers.
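For context, the OTLP ingestion side of the Collector referenced above is typically configured as in the sketch below; the listen addresses shown are the conventional defaults and can be adjusted per deployment.

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # application traces and metrics over gRPC
      http:
        endpoint: 0.0.0.0:4318   # application traces and metrics over HTTP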

Together, this flow offers end-to-end visibility into application-level telemetry and MongoDB’s internal performance signals, enabling clear monitoring of runtime behaviour and database health through a single OpenTelemetry pipeline and leading directly into metrics visualization.

Visualising Metrics

Once your OpenTelemetry pipeline is configured, you can integrate Prometheus as a backend to observe MongoDB metrics in near real time and define alerting rules for high-impact database conditions.

Here, we’ve queried mongodb_global_lock_time_milliseconds_total, which records the total duration the global lock has been held. This metric helps identify lock contention and execution stalls, making it easier to detect and diagnose potential performance degradation.
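As an illustration of the alerting rules mentioned above, a Prometheus rule on this counter might look like the sketch below; the rule name, threshold, and durations are placeholders to tune for your workload.

groups:
  - name: mongodb-alerts                      # illustrative rule group
    rules:
      - alert: MongoDBGlobalLockTimeHigh
        # Fires when the global lock is held for more than ~100 ms per second,
        # averaged over 5 minutes (threshold is a placeholder).
        expr: rate(mongodb_global_lock_time_milliseconds_total[5m]) > 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "MongoDB global lock time is elevated"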

To explore other essential MongoDB monitoring metrics, including lock duration, replication lag, and operation throughput, and why they are operationally significant, refer to the MongoDB Monitoring guide.

You can also integrate a tracing backend like Jaeger to visualize operations including command execution latency, query-processing time, and driver–server round-trip delays.
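A sketch of wiring traces from the same Collector to Jaeger is shown below; recent Jaeger versions ingest OTLP directly, and the jaeger hostname and insecure TLS setting are assumptions for a local, non-TLS setup.

exporters:
  otlp/jaeger:
    endpoint: "jaeger:4317"      # placeholder host; Jaeger accepts OTLP over gRPC
    tls:
      insecure: true             # assumes a local, non-TLS Jaeger endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]          # application spans received over OTLP
      exporters: [otlp/jaeger]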

Here is an example of MongoDB traces being visualized on the Jaeger dashboard:

With these visualizations in place, we gain consolidated insight into MongoDB’s runtime behavior, from lock patterns to replication characteristics, enabling consistent monitoring across both the application and the database.

Conclusion

In this guide, we walked through how to monitor MongoDB using OpenTelemetry’s server-side instrumentation model, which provides a consistent, vendor-neutral way to collect engine-level signals from diagnostic interfaces and lets you track core database behaviour without relying on external agents or custom scripts.

This configuration establishes a consistent, telemetry-driven approach to monitoring MongoDB performance across environments.

To explore more, check the official OpenTelemetry documentation, MongoDB monitoring references, and the Collector contrib GitHub repository.

Isha Bhardwaj