Observability Exporter
Export observability data to InfluxDB.
The NSO Observability Exporter (OE) package allows Cisco NSO to export observability-related data using software-industry-standard formats and protocols, such as the OpenTelemetry protocol (OTLP). It supports the export of progress traces using OTLP, as well as the export of transaction metrics based on the progress trace data into an InfluxDB database.
Observability Data Types
To provide insight into the state and working of a system, operators make use of different types of data:
- Logs: Information about events taking place in the system, usually for humans to interpret.
- Traces: Detailed information about requests as they traverse the system.
- Metrics: Measures of quantifiable aspects of the system for statistical analysis, such as the number of successful and failed requests.
Each of these data types serves a different purpose. Metrics allow you to get a high-level view of whether the system behaves in an expected manner, for example, with no or few failed requests. Metrics also help identify the load on the system (e.g., CPU usage, number of concurrent requests), but they do not tell you what is happening with a particular request or transaction, for example, the one that is failing.
Tracing, on the other hand, shows the path and the time that the request took in different parts of the overall system. Perhaps the request failed because one of the subsystems took too long to provide the necessary data. That's the kind of information a trace gives you.
However, to understand what took a specific subsystem a long time to respond, you need to consult the relevant logs.
As these are different types of data, different software solutions exist to process, store, and examine them.
For tracing, the package exports progress trace data using the standard OTLP format. Each trace carries a `trace-id` that uniquely identifies it and can be supplied as part of the request (see the Progress Trace section in the NSO Development Guide for details), allowing you to find the relevant data in a busy system. Tools such as Jaeger or Grafana (with Grafana Tempo) can then ingest the OTLP data and present it graphically for further analysis.
The Observability Exporter package also performs additional processing of the tracing data and exports the calculated metrics to an InfluxDB time-series database. Using Grafana or a similar tool, you can extract and accumulate the relevant values to produce customized dashboards, for example, showing the average transaction length for each type of service in NSO.
The package exports four different types of metrics, called measurements, to InfluxDB:

- `span`: Data for individual parts of the transaction, also called spans.
- `span-count`: The number of concurrent spans, for example, how many transactions are in the prepare phase (prepare span) at the same time.
- `transaction`: The sum of span durations per transaction, for example, the cumulative time spent in service create code when a transaction configures multiple services.
- `transaction-lock`: Details about the transaction lock, such as queue length when acquiring or releasing the lock.
Installation
To install the Observability Exporter add-on, follow the steps below:

1. Install the prerequisite Python packages: `parsedatetime`, `opentelemetry-exporter-otlp`, and `influxdb`. To install them, run the command `pip install -r src/requirements.txt` from the package folder.
2. Add the Observability Exporter package in a manner suitable for your NSO installation. This usually entails copying the package file to the appropriate `packages/` folder and performing a package reload, as sketched below. For more information, refer to the NSO product documentation on package management.
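As a rough illustration, a local installation might look like the following; the package file name and destination path are hypothetical and depend on your NSO version and run directory.

```bash
# Install the prerequisite Python packages from the package folder
pip install -r src/requirements.txt

# Copy the package into the NSO run directory (hypothetical file name and path)
cp observability-exporter.tar.gz /var/opt/ncs/packages/

# Reload packages from the NSO CLI
echo "packages reload" | ncs_cli -C -u admin
```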
Configure Data Export
Observability Exporter configuration resides under the `progress export` container in NSO. All export functions can be enabled or disabled through the top-level `enabled` leaf.

To configure the export of tracing data, use the `otlp` container. This is a presence container that controls whether export is enabled or not. In the container, you can define the target host and port for sending data, as well as the transport used. Unless configured otherwise, the data is exported to the localhost using the default OTLP port, so minimal configuration is required if you run the collector locally, for example, on the same system or as a sidecar in a container deployment.

The InfluxDB export is configured and enabled using the `influxdb` presence container, where you set the host to export metrics to. You can also customize the port number, username, password, and database name used for the connection.

Under `progress export`, you can also configure `extra-tags`, additional tag name-value pairs that the system adds to the measurements. These are currently only used for InfluxDB.
The following is a sample configuration snippet:
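A minimal sketch in Cisco-style CLI syntax, assuming the container and leaf names described above (exact leaf names may differ between package versions):

```
progress export otlp host 127.0.0.1
progress export otlp port 4317
progress export otlp transport grpc
progress export influxdb host 127.0.0.1
progress export influxdb port 8086
progress export influxdb username nso
progress export influxdb password mypassword
progress export influxdb database nso
```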
Using InfluxDB 2.x
Note that the current version of the Observability Exporter uses the InfluxDB v1 API. If you run an InfluxDB 2.x database instance, you need to enable v1 API client access with the `influx v1 auth create` command or a similar mechanism. Refer to the InfluxData documentation for more information.
Minimal Tracing Example with Jaeger
This example shows how to use the Jaeger software (https://www.jaegertracing.io) to visualize the progress traces. It requires you to install Jaeger on the same system as NSO and is therefore only suitable for demo or development purposes.
1. First, make sure that you have a running NSO instance and that you have successfully added the Observability Exporter package. To verify, run the `show packages package observability-exporter` command from the NSO CLI.
2. Download and run a recent Jaeger all-in-one binary from the Jaeger website, using the `--collector.otlp.enabled` switch:
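For example, assuming the binary is in the current directory:

```bash
# Start Jaeger with OTLP ingestion enabled; the UI listens on port 16686
./jaeger-all-in-one --collector.otlp.enabled=true
```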
3. Keep Jaeger running, and from another terminal, enter the NSO CLI to enable OTLP data export:
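A minimal sketch, relying on the default localhost collector settings described in Configure Data Export:

```
admin@ncs# config
admin@ncs(config)# progress export otlp
admin@ncs(config)# commit
```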
Jaeger should now be receiving the transaction traces. However, if you have no running transactions in the system, there will be no data. So, make sure that you have some traces by performing a trivial configuration change:
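Any small change will do; for example, adjusting a standard global setting (the specific leaf here is only an illustration):

```
admin@ncs(config)# devices global-settings connect-timeout 30
admin@ncs(config)# commit
```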
Now you can connect to the Jaeger UI at http://localhost:16686 to explore the data. In the Search pane, select the "NSO" service and click Find Traces. Clicking on one of the traces brings you to the detailed trace view.
Minimal Metrics Example with InfluxDB
This example shows you how to store and do basic processing and visualization of data in InfluxDB. It requires you to install InfluxDB on the same system as NSO and is therefore only suitable for demo or development purposes.
1. First, ensure you have a running NSO instance and have successfully added the Observability Exporter package. To verify, run the `show packages package observability-exporter` command from the NSO CLI.
2. Next, set up an InfluxDB instance. Download and install the InfluxDB 2 binaries and the corresponding `influx` CLI appropriate for your NSO system. See the InfluxData documentation for details; for example, `brew install influxdb influxdb-cli` on a macOS system.
3. Make sure that you have started the instance, then complete the initial configuration of InfluxDB. During the configuration, create an organization named `my-org` and a bucket named `nso`. Do not forget to perform the influx CLI setup. To verify that everything works in the end, run:
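For example, assuming the influx CLI has been set up against the local instance:

```bash
# List the buckets in the configured organization; the nso bucket should appear
influx bucket list
```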
4. In the output, find the ID of the `nso` bucket that you have created. For example, here it is `5d744e55fb178310`, but yours will be different. Create a username/password pair for v1 API access, using the `BUCKET_ID` that you have found in the output of the previous command:
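A sketch of the command; omit `--password` to be prompted for it instead:

```bash
influx v1 auth create \
  --username nso \
  --password <password> \
  --read-bucket <BUCKET_ID> \
  --write-bucket <BUCKET_ID>
```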
5. Now connect to the NSO CLI and configure the InfluxDB exporter to use this instance:
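A minimal sketch, assuming the leaf names in the `influxdb` container described in Configure Data Export (exact syntax may differ between versions):

```
admin@ncs# config
admin@ncs(config)# progress export influxdb host 127.0.0.1
admin@ncs(config)# progress export influxdb username nso
admin@ncs(config)# progress export influxdb password <password>
admin@ncs(config)# progress export influxdb database nso
admin@ncs(config)# commit
```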
The username and password should match those created with the previous command, while the database name (using the default of `nso` here) should match the bucket name. Make sure that you have some data for export by performing a trivial configuration change, as shown in the tracing example above.

Open the InfluxDB UI at http://localhost:8086 and log in, then select the Data Explorer from the left-hand menu. Using the query builder, you can explore and visualize the data.
For example, select the `nso` bucket, the `span` measurement, and `duration` as a field filter. Keeping other settings at their default values will graph the average (mean) times that various parts of the transaction take. If you wish, you can further configure another filter for `name`, to only show the values for the selected part.

Note that the graph shows data for multiple transactions over a span of time. If there is only a single transaction, the graph will look empty and will instead show a single data point when you hover over it.
Observability Exporter Integration with Grafana
This example shows how to integrate the Observability Exporter with Grafana to monitor NSO application performance.
1. First, ensure you have a running NSO instance and have successfully added the Observability Exporter package. To verify, run the `show packages package observability-exporter` command from the NSO CLI.
2. Next, set up an InfluxDB instance. Follow steps 2 to 4 from the Minimal Metrics Example with InfluxDB.
3. Next, set up a Grafana instance. Refer to the Grafana documentation for installing Grafana on your system: install Grafana, start the Grafana instance, and configure the Grafana organization name. A macOS example is sketched below.
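A sketch for macOS using Homebrew; the organization name can then be set in the Grafana UI or through its HTTP API:

```bash
# Install and start Grafana; the UI listens on port 3000
brew install grafana
brew services start grafana
```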
4. Add InfluxDB as a data source in Grafana. Download the file influxdb-data-source.json, replace "my-token" in the file with the actual token from the InfluxDB instance, and run the command below.
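A sketch of the kind of command this step refers to, assuming Grafana's HTTP API on localhost and the default admin/admin credentials:

```bash
# Register the InfluxDB data source through the Grafana API
curl -s -u admin:admin \
  -H 'Content-Type: application/json' \
  -d @influxdb-data-source.json \
  http://localhost:3000/api/datasources
```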
5. Set up the NSO example dashboard. This step requires the `jq` command-line tool to be installed on the system. Download the sample NSO dashboard JSON file dashboard-nso-local.json and run the command below. In the file, under `"inputs"`, replace the `"value"` field's value with the actual Jaeger UI URL where `"name"` is `INPUT_JAEGER_BASE_URL`.
6. (Optional) Set the NSO dashboard as the default dashboard in Grafana.
7. Connect to the NSO CLI and configure the InfluxDB exporter, as in the metrics example above.
8. Perform a few trivial configuration changes, then open the Grafana UI at http://localhost:3000/ and log in with username `admin` and password `admin`. With the NSO dashboard set as the default, you will see different charts and graphs showing NSO metrics:

- Panels showing metrics related to transactions, such as transaction throughput, longest transactions, transaction locks held, and queue length.
- Panels showing metrics related to services, such as the mean/max duration for `create service`, the mean duration for `run service`, and the service's longest spans.
- Panels showing metrics related to devices, such as device locks held, longest device connection, longest device sync-from, and concurrent device operations.
Observability Exporter Docker Multi-Container Setup Example
All previously mentioned databases and visualization software can also be brought up in a Docker environment with Docker volumes, making it possible to persist the metric data in the data stores after shutting down the Docker containers.

To facilitate bringing up the containers and interconnecting the database and visualization containers, a setup bash script called `setup.sh` is provided together with a `compose.yaml` file that describes all the Docker containers to create and start, as well as configuration files to configure each container.
This diagram shows an overview of the containers that Compose creates and starts and how they are connected.
To create the Docker environment described above, follow these steps:

1. Make sure Docker and Docker Compose are installed on your machine. Refer to the Docker documentation on installing Docker for your respective OS. You can verify that Docker and Compose are installed by running `docker --version` and `docker compose version` in a terminal and getting a version number as output.
2. Download the NSO Observability Exporter package from CCO, untar it, and `cd` into the `setup` folder.
3. Make the `setup.sh` script executable.
4. Run the `setup.sh` script either without arguments, to use the default ports for the containers and the default username and password for InfluxDB, or with arguments, to set a specific port for each container and provide the InfluxDB configuration. A sketch of steps 2 to 4 follows this list.
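A sketch of steps 2 to 4 with a hypothetical package file name; run `./setup.sh --help` for the actual flag names:

```bash
# Unpack the package and enter the setup folder (file name is hypothetical)
tar -xzf observability-exporter.tar.gz
cd observability-exporter/setup

# Make the script executable
chmod +x setup.sh

# Use the default values defined in the script
./setup.sh

# Or print the available flags for ports and InfluxDB configuration
./setup.sh --help
```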
To run with a secure protocol configuration, whether HTTPS or secure gRPC, use the provided setup script with the appropriate security settings. Ensure the necessary security certificates and keys are available: for HTTPS and gRPC Secure, a TLS certificate and private key files are necessary. For instructions on creating self-signed certificates, refer to Creating Self-Signed Certificate.
The script outputs NSO configuration for the Observability Exporter, as well as URLs for the dashboards of some of the containers.
You can run the `setup.sh` script with the `--help` flag to print help information about the script and see the default values used for each flag.

To enable OTLP export through HTTPS, a root certificate authority (CA) certificate file in PEM format needs to be specified in the NSO configuration for both traces and metrics.
After configuring the Observability Exporter with the NSO configuration printed by the `setup.sh` script, e.g., using the CLI `load` command or the `ncs_load` tool, trace and metric data should appear in Jaeger, InfluxDB, and Grafana as shown in the previous setup.

The setup can be brought down in two ways: bring down the containers only, or bring down the containers and also remove the volumes.
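For example, using Docker Compose from the `setup` folder:

```bash
# Bring down containers only
docker compose down

# Bring down containers and remove volumes
docker compose down --volumes
```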
Creating Self-Signed Certificate
Prerequisites: OpenSSL. Ensure that OpenSSL is installed on your system; most Unix-like systems come with OpenSSL pre-installed.

To create a self-signed certificate, follow these steps (sketched below):

1. Install OpenSSL, if it is not already present.
2. Create a root CA (certificate authority): generate a private key using OpenSSL and create a self-signed CA certificate.
3. Generate SSL certificates signed by the root CA.
4. Use the certificates in the secure protocol configuration.
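A sketch of the OpenSSL commands, with illustrative file names and validity periods:

```bash
# 1. Generate a private key for the root CA
openssl genrsa -out rootCA.key 4096

# 2. Create a self-signed root CA certificate
openssl req -x509 -new -key rootCA.key -sha256 -days 365 -out rootCA.pem

# 3. Generate a server private key and a certificate signing request
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr

# 4. Sign the server certificate with the root CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -out server.crt -days 365 -sha256

# Use server.crt/server.key for the collector endpoint and rootCA.pem in the NSO configuration
```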
Export NSO Traces and Metrics to Splunk Observability Cloud
In the previous test environment setup, we exported traces to Jaeger and metrics to Prometheus, but progress traces and metrics can also be sent to Splunk Observability Cloud.
To send traces and metrics to Splunk Observability Cloud, either the OpenTelemetry Collector Contrib or the Splunk OpenTelemetry Collector can be used.
Here is an example config that can be used with the OpenTelemetry Collector Contrib to send traces and metrics:
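A sketch of such a configuration, using the `sapm` and `signalfx` exporters referenced below; the access token and realm values are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # Traces go to Splunk Observability Cloud APM
  sapm:
    access_token: "<SPLUNK_ACCESS_TOKEN>"
    endpoint: "https://ingest.<SIGNALFX_REALM>.signalfx.com/v2/trace"
  # Metrics go to Splunk Observability Cloud Infrastructure Monitoring
  signalfx:
    access_token: "<SPLUNK_ACCESS_TOKEN>"
    realm: "<SIGNALFX_REALM>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [sapm]
    metrics:
      receivers: [otlp]
      exporters: [signalfx]
```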
An access token and the endpoint of your Splunk Observability Cloud instance are needed to start exporting traces and metrics. The access token can be found under the Settings > Access Tokens menu in your Splunk Observability Cloud dashboard. The endpoint can be constructed by replacing `<SIGNALFX_REALM>` with the realm you see in your Splunk Observability Cloud URL, e.g., https://ingest.us1.signalfx.com/v2/trace.
Traces can be accessed at https://app.us1.signalfx.com/#/apm/traces, and metrics are available when accessing or creating a dashboard at https://app.us1.signalfx.com/#/dashboards.
More options for the `sapm` and `signalfx` exporters can be found at https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/sapmexporter/README.md and https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/signalfxexporter/README.md, respectively.
In the current Observability Exporter version, the metrics derived from spans, that is, the metrics that are currently sent directly to InfluxDB, cannot be sent to Splunk.
Export NSO Traces and Metrics to Splunk Enterprise
1. Download Splunk Enterprise. Visit the Splunk Enterprise download page, select the appropriate version for your operating system (Linux, Windows, macOS), and download the installer package.
2. Install Splunk Enterprise. On Linux, transfer the downloaded `.rpm` or `.deb` file to your Linux server and install the package, as sketched below for RPM-based distributions (RedHat/CentOS) and DEB-based distributions (Debian/Ubuntu).
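A sketch of the install commands; the actual file name depends on the downloaded version:

```bash
# RPM-based distributions (RedHat/CentOS)
sudo rpm -i splunk-<version>.rpm

# DEB-based distributions (Debian/Ubuntu)
sudo dpkg -i splunk-<version>.deb
```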
   On Windows, run the downloaded `.msi` installer and follow the prompts to complete the installation.

3. Start Splunk. On Linux, start Splunk from the installation directory, as sketched below.
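A sketch, assuming the default installation path:

```bash
# Accept the license on first start
sudo /opt/splunk/bin/splunk start --accept-license
```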
   On Windows, open the Splunk Enterprise application from the Start Menu.

4. Access the Splunk web interface. Navigate to http://<splunk-host>:8000 and log in with the default credentials (admin/changeme).
5. Create an index via the Splunk web interface:
   - Click Settings in the top-right corner.
   - Under the Data section, click Indexes.
   - Click the New Index button and fill in the required details:
     - Index Name: Enter a name for your index (e.g., nso_traces, nso_metrics).
     - Index Data Type: Select the type of data (e.g., Events or Metrics).
     - Home Path, Cold Path, and Thawed Path: Leave these as default unless you have specific requirements.
   - Click the Save button.
6. Enable the HTTP Event Collector (HEC) on Splunk Enterprise. Before you can use the Event Collector to receive events through HTTP, you must enable it. For Splunk Enterprise, enable HEC through the Global Settings dialog box:
   - Click Settings > Data Inputs.
   - Click HTTP Event Collector.
   - Click Global Settings.
   - In the All Tokens toggle button, select Enabled.
   - Choose the nso_traces or nso_metrics index for the respective HEC tokens.
   - Click Save.
7. Create an Event Collector token on Splunk Enterprise. To use HEC, you must configure at least one token:
   - Click Settings > Add Data.
   - Click Monitor.
   - Click HTTP Event Collector.
   - In the Name field, enter a name for the token.
   - Click Next.
   - Click Review.
   - Confirm that all settings for the endpoint are correct and click Submit; otherwise, click < to make changes.
8. Configure the OpenTelemetry Protocol (OTLP) Collector. Create or edit the `otelcol.yaml` file to include the HEC configuration. Example configuration:
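A sketch using the `splunk_hec` exporter from the OpenTelemetry Collector Contrib, with placeholder tokens and the default HEC port:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # Traces to the nso_traces index
  splunk_hec/traces:
    token: "<HEC_TOKEN_TRACES>"
    endpoint: "http://localhost:8088/services/collector"
    index: "nso_traces"
  # Metrics to the nso_metrics index
  splunk_hec/metrics:
    token: "<HEC_TOKEN_METRICS>"
    endpoint: "http://localhost:8088/services/collector"
    index: "nso_metrics"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [splunk_hec/traces]
    metrics:
      receivers: [otlp]
      exporters: [splunk_hec/metrics]
```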
Save the configuration file.
Support
For additional support questions, refer to Cisco Support.