Containerized NSO
Deploy NSO in a containerized setup using Cisco-provided images.
NSO can be deployed in your environment using a container, such as Docker. Cisco offers two pre-built images for this purpose that you can use to run NSO and build packages (see the image descriptions below).
Migration Information
If you are migrating from an existing NSO System Install to a container-based setup, follow the migration guidelines given below.
Running NSO in a container offers several benefits that you would generally expect from a containerized approach, such as ease of use and convenient distribution. More specifically, a containerized NSO approach allows you to:
Run a container image of a specific version of NSO and your packages, which can then be distributed as one unit.
Deploy and distribute the same version across your production environment.
Use the Build Image containing the necessary environment for compiling NSO packages.
Cisco provides the following two NSO images based on Red Hat UBI. The list below maps each environment to the corresponding NSO install type:

Development Host: None or Local Install
Build Image: System Install
Production Image: System Install
Use the pre-built image as the base image in the container file (e.g., Dockerfile) and mount your own packages (such as NEDs and service packages) to run a final image for your production environment (see examples below).
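For instance, instead of a custom Dockerfile you can simply mount the packages at run time; a minimal sketch (the image tag, host paths, and mount points are illustrative assumptions):

```bash
# Run the Production Image with your own configuration and packages
# mounted in; image tag and host directories are assumptions.
docker run -d --name cisco-nso \
  -v $(pwd)/nso-etc:/nso/etc \
  -v $(pwd)/packages:/nso/run/packages \
  cisco-nso-prod:6.4
```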
The Build Image is a separate standalone NSO image with the necessary environment and software for building packages. It is provided specifically to address the developer needs of building packages.
The container provides the necessary environment to build custom packages. The Build Image adds a few Linux packages that are useful for development, such as Ant, JDK, net-tools, pip, etc. Additional Linux packages can be added using, for example, the dnf command. The dnf list installed command lists all the installed packages.
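For example, inside the Build container (the package names are examples):

```bash
# Add extra development tools to the Build Image
dnf install -y vim tcpdump
# List everything currently installed
dnf list installed
```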
To fetch and extract NSO images:
Extract the image and other files from the signed package, for example:
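```bash
# The file name is a placeholder; running the self-extracting archive
# verifies the signature and unpacks the image tarball and related files.
sh nso-6.4.container-image-prod.linux.x86_64.signed.bin
```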
To run the images, make sure that your system meets the following requirements:
A system running Linux x86_64 or ARM64, or macOS x86_64 or Apple Silicon. Use Linux for production.
A container platform. Docker is the recommended platform and is used as an example in this guide for running NSO images. You may use another container runtime of your choice. Note that the commands in this guide are Docker-specific; if you use another container runtime, make sure to use the corresponding commands.
This section covers the necessary administrative information about the NSO Production Image.
If you have NSO installed as a System Install, you can migrate to the Containerized NSO setup by following the instructions in this section. Migrating your Network Services Orchestrator (NSO) to a containerized setup can provide numerous benefits, including improved scalability, easier version management, and enhanced isolation of services.
The migration process is designed to ensure a smooth transition from a System-Installed NSO to a container-based deployment. Detailed steps guide you through preparing your existing environment, exporting the necessary configurations and state data, and importing them into your new containerized NSO instance. During the migration, consider the container runtime you plan to use, as this impacts the migration process.
Before You Start
We recommend reading through this guide to better understand the expectations, requirements, and functioning of a containerized deployment.
Determine and install the container runtime you plan to use (e.g., Docker).
Ensure that your current NSO installation is fully operational and backed up and that you have a clear rollback strategy in case any issues arise. Pay special attention to customizations and integrations that your current NSO setup might have, and verify their compatibility with the containerized version of NSO.
Have a contingency plan in place for quick recovery in case any issues are encountered during migration.
Migration Steps
Prepare:
Document your current NSO environment's specifics, including custom configurations and packages.
Perform a complete backup of your existing NSO instance, including configurations, packages, and data.
Migrate:
Stop the current NSO instance.
Save the run directory from the NSO instance in an appropriate place.
Use the same ncs.conf and High Availability (HA) setup previously used with your System Install. We assume that the ncs.conf follows best practice and uses the NCS_DIR, NCS_RUN_DIR, NCS_CONFIG_DIR, and NCS_LOG_DIR variables for all paths. The ncs.conf can be added to a volume and mounted to /nso/etc in the container.
Add the run directory as a volume mounted to /nso/run in the container, and copy the CDB data, packages, etc., from the previous System Install instance.
Create a volume for the log directory.
Start the container. Example:
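```bash
# A sketch: volume names and image tag are assumptions; the log mount
# point should match the NCS_LOG_DIR used by the image.
docker run -d --name cisco-nso \
  -v nso-etc:/nso/etc \
  -v nso-run:/nso/run \
  -v nso-log:/log \
  cisco-nso-prod:6.4
```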
Finalize:
Ensure that the containerized NSO instance functions as expected and validate system operations.
Plan and execute your cutover transition from the System-Installed NSO to the containerized version with minimal disruption.
Monitor the new setup thoroughly to ensure stability and performance.
ncs.conf File Configuration and Preference
The run-nso.sh script runs a check at startup to determine which ncs.conf file to use. The order of preference is as follows:
1. The ncs.conf file specified in the Dockerfile (i.e., ENV $NCS_CONFIG_DIR /etc/ncs/) is used as the first preference.
2. The second preference is the ncs.conf file mounted in the /nso/etc/ run directory.
3. If no ncs.conf file is found at either /etc/ncs or /nso/etc, the default ncs.conf file provided with the NSO image in /defaults is used.
If you need to perform operations before or after the ncs process is started in the Production container, you can use Python and/or Bash scripts to achieve this. Add the scripts to the $NCS_CONFIG_DIR/pre-ncs-start.d/ and $NCS_CONFIG_DIR/post-ncs-start.d/ directories to have the run-nso.sh script run them.
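For example, a minimal post-start hook (the file name and contents are hypothetical):

```bash
#!/bin/bash
# $NCS_CONFIG_DIR/post-ncs-start.d/50-log-start.sh (hypothetical example)
# Runs after the ncs process has been started by run-nso.sh.
echo "ncs started at $(date)" >> /tmp/post-start.log
```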
An admin user can be created on startup by the run script in the container. Three environment variables control the addition of an admin user:
ADMIN_USERNAME: Username of the admin user to add; the default is admin.
ADMIN_PASSWORD: Password of the admin user to add.
ADMIN_SSHKEY: Private SSH key of the admin user to add.
As ADMIN_USERNAME already has a default value, only ADMIN_PASSWORD or ADMIN_SSHKEY needs to be set in order to create an admin user. For example:
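```bash
# Image tag is an assumption; the environment variables are documented above
docker run -d --name cisco-nso \
  -e ADMIN_USERNAME=admin \
  -e ADMIN_PASSWORD=admin \
  cisco-nso-prod:6.4
```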
This can be useful when starting up a container in CI for testing or development purposes. It is typically not required in a production environment where CDB already contains the required user accounts.
The default ncs.conf NSO configuration file does not enable any northbound interfaces, and no ports are exposed externally by the container. Ports can be published outside the container when starting it, with the corresponding northbound interfaces and their ports enabled in ncs.conf.
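A sketch of publishing ports at startup (the interface-to-port mapping shown, NETCONF over SSH on 830 and the HTTPS web UI on 443, is an assumption; match it to what your ncs.conf enables):

```bash
docker run -d --name cisco-nso \
  -p 830:830 -p 443:443 \
  cisco-nso-prod:6.4
```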
The backup behavior when running NSO inside vs. outside a container is largely the same, except that when running NSO in a container, the SSH and SSL certificates are not included in the backup produced by the ncs-backup script. This differs from running NSO outside a container, where the default configuration path /etc/ncs is used to store the SSH and SSL certificates, i.e., /etc/ncs/ssh for SSH and /etc/ncs/ssl for SSL.
Take a Backup
Let's assume we start a production image container using:
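```bash
# Volume name and image tag are assumptions
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.4
```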
To take a backup:
Run the ncs-backup command. The backup file is written to /nso/run/backups.
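```bash
docker exec -it cisco-nso ncs-backup
# The backup file is written to /nso/run/backups on the mounted volume
```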
Restore a Backup
To restore a backup, NSO must be stopped. As you likely only have access to the ncs-backup tool and the volume containing CDB and other run-time data from inside the NSO container, this poses a slight challenge. Additionally, shutting down NSO will terminate the NSO container.
To restore a backup:
Shut down the NSO container:
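```bash
docker stop cisco-nso
```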
Run the ncs-backup --restore command. Start a new container with the same persistent shared volumes mounted, but with a different command: instead of running /run-nso.sh, which is the normal command of the NSO container, run the restore command.
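A sketch of such a one-off restore container (volume name and image tag are assumptions; pick an actual backup file from /nso/run/backups):

```bash
# Override the normal entrypoint with the restore command
docker run --rm -it -v NSO-vol:/nso --entrypoint ncs-backup \
  cisco-nso-prod:6.4 --restore /nso/run/backups/<backup-file>
```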
Restoring an NSO backup should move the current run directory (/nso/run) to /nso/run.old and restore the run directory from the backup to the main run directory (/nso/run). After this is done, start the regular NSO container again as usual.
The NSO image /run-nso.sh script looks for an SSH host key named ssh_host_ed25519_key in the /nso/etc/ssh directory, to be used by the NSO built-in SSH server for the CLI and NETCONF interfaces.
If an SSH host key exists, which for a typical production setup is stored in a persistent shared volume, it remains the same after restarts or upgrades of NSO. If no SSH host key exists, the script generates a private and public key.
In a high-availability (HA) setup, the host key is typically shared by all NSO nodes in the HA group and stored in a persistent shared volume. This is done to avoid fetching the public host key from the new primary after each failover.
NSO expects to find a TLS certificate and key at /nso/ssl/cert/host.cert and /nso/ssl/cert/host.key, respectively. Since the /nso path is usually on a persistent shared volume for production setups, the certificate remains the same across restarts or upgrades.
If no certificate is present, one will be generated. This self-signed certificate is valid for 30 days, making it possible to use it in both development and staging environments. It is not meant for production. You should replace it with a properly signed certificate for production, and you are encouraged to do so even for test and staging environments. Simply generate one and place it at the provided path, for example, using a command like the one used to generate the temporary self-signed certificate:
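```bash
# A sketch of the kind of openssl invocation used; the subject field is
# an illustrative assumption.
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 30 -nodes \
  -out /nso/ssl/cert/host.cert -keyout /nso/ssl/cert/host.key \
  -subj "/CN=localhost"
```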
The database in NSO, called CDB, uses YANG models as the schema for the database. It is only possible to store data in CDB according to the YANG models that define the schema.
If the YANG models are changed, particularly if nodes are removed or renamed (a rename is the removal of one leaf and the addition of another), any data in CDB for those leaves will also be removed. NSO normally warns about this when you attempt to load new packages; for example, the request packages reload command refuses to reload the packages if nodes in the YANG model have disappeared. You would then have to add the force argument, e.g., request packages reload force.
The base Production Image comes with a basic container health check. It uses ncs_cmd to get the state that NCS is currently in. Only the result status is observed, to check if ncs_cmd was able to communicate with the ncs process. The result indicates whether the ncs process is responding to IPC requests.
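You can observe the reported health status from the host; a sketch (the exact ncs_cmd invocation used by the image is an assumption):

```bash
# Health status as reported by the container runtime
docker inspect --format '{{.State.Health.Status}}' cisco-nso
# The check itself amounts to querying the NSO start phase over IPC,
# along the lines of (exact invocation is an assumption):
docker exec cisco-nso ncs_cmd -c get-phase
```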
By default, the Linux kernel allows overcommit of memory. However, memory overcommit produces an unexpected and unreliable environment for NSO since the Linux Out Of Memory Killer, or OOM-killer, may terminate NSO without restarting it if the system is critically low on memory.
Also, when the OOM-killer terminates NSO, NSO will not produce a system dump file, and the debug information will be lost. Thus, it is strongly recommended to disable memory overcommit on Linux hosts running NSO production containers, with an overcommit ratio of less than 100% (max).
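A minimal sketch of the corresponding host settings (the ratio value is an example; size it for your host):

```bash
# On the container host: disable overcommit (strict accounting) and keep
# the commit limit below 100% of RAM plus swap.
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=90
```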
The /run-nso.sh script that starts NSO is executed as an ENTRYPOINT instruction, and the CMD instruction can be used to provide arguments to the entrypoint script. Another alternative is to use the EXTRA_ARGS variable to provide arguments. The /run-nso.sh script checks the EXTRA_ARGS variable before the CMD instruction.
An example using docker run with the CMD instruction:
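```bash
# Arguments after the image name become the CMD and are passed to the
# entrypoint script; the flag shown is illustrative.
docker run -d --name cisco-nso cisco-nso-prod:6.4 --with-package-reload
```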
With the EXTRA_ARGS variable:
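```bash
# The flag shown is illustrative
docker run -d --name cisco-nso \
  -e EXTRA_ARGS="--with-package-reload" \
  cisco-nso-prod:6.4
```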
An example using a Docker Compose file, compose.yaml, with the CMD instruction:
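```yaml
# compose.yaml (sketch; image name and flag are assumptions)
services:
  nso:
    image: cisco-nso-prod:6.4
    command: ["--with-package-reload"]   # overrides the image CMD
```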
With the EXTRA_ARGS variable:
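```yaml
services:
  nso:
    image: cisco-nso-prod:6.4
    environment:
      - EXTRA_ARGS=--with-package-reload
```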
This section provides examples that demonstrate the use of the NSO images.
This example shows how to run the standalone NSO Production Image using the Docker CLI.
The instructions and CLI examples in this section are Docker-specific. If you are using a non-Docker container runtime, you will need to fetch the NSO image from the Cisco software download site, load and run the image with packages and networking, and finally log in to the NSO CLI to run commands.
Steps
Follow the steps below to run the Production Image using Docker CLI:
Start your container engine.
Next, load the image and run it. Navigate to the directory where you extracted the base image and load it. This will restore the image and its tag:
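```bash
# File and image names are placeholders matching the extracted package
docker load -i nso-6.4.container-image-prod.linux.x86_64.tar.gz
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.4
```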
Overriding Environment Variables
Overriding basic environment variables (NCS_CONFIG_DIR, NCS_LOG_DIR, NCS_RUN_DIR, etc.) is not supported and should therefore be avoided. Using, for example, the NCS_CONFIG_DIR environment variable to mount a configuration directory will result in an error. Instead, mount your configuration directory in the correct place, which is under /nso/etc.
Finally, log in to NSO CLI to run commands. Open an interactive shell on the running container and access the NSO CLI.
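```bash
docker exec -it cisco-nso bash
# ...then, inside the container:
ncs_cli -u admin
```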
You can also use the docker exec -it cisco-nso ncs_cli -u admin command to access the CLI from the host's terminal.
This example describes how to upgrade NSO to run a newer version in the container. The overall upgrade process is outlined in the steps below, where NSO is upgraded from version 6.3 to 6.4.
To upgrade your NSO version:
Start a container with the docker run command. The example below mounts the /nso directory in the container to the NSO-vol named volume to persist the data. Another option is using a bind mount of a directory on the host machine. At this point, the /cdb directory is empty.
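```bash
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.3
```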
Perform a backup, either by running the docker exec command (make sure that the backup is placed somewhere we have mounted) or by creating a tarball of /data/nso on the host machine.
Stop NSO by issuing the following command, or by stopping the container itself, which will run the ncs stop command automatically.
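```bash
docker exec -it cisco-nso ncs --stop
```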
Remove the old NSO.
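```bash
# Remove the old container; the persisted state lives on the NSO-vol volume
docker stop cisco-nso   # if it is still running
docker rm cisco-nso
```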
Start a new container and mount the /nso directory in the container to the NSO-vol named volume. This time the /cdb folder is not empty, so instead of starting a fresh NSO, an upgrade will be performed.
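```bash
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.4
```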
At this point, you only have one container running the desired version 6.4, and you do not need to uninstall the old NSO.
This example covers the necessary information to demonstrate using the NSO images to compile packages and run NSO. Using Docker Compose is not a requirement, but it is a simple tool for defining and running a multi-container setup where you want to run both the Production and Build Images in an efficient manner.
distkey: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service.
ne: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application, which adds or removes the configured public key that the netsim (ConfD) network element checks when authenticating public-key authentication clients.
docker-compose.yaml - Docker Compose File Example
A basic Docker Compose file is shown in the example below. It describes the containers running on a machine:
The Production container runs NSO.
The Build container builds the NSO packages.
A third example container runs the netsim device.
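A minimal sketch of such a Compose file is shown below; the image names, service names, profiles, and volume layout are illustrative assumptions, not the exact files shipped with the example:

```yaml
services:
  nso:                              # Production container running NSO
    image: cisco-nso-prod:6.4
    profiles: [prod]
    volumes:
      - nso-vol:/nso
      - packages-vol:/nso/run/packages
  build:                            # Build container for compiling packages
    image: cisco-nso-build:6.4
    profiles: [build]
    command: sleep infinity         # keep alive for docker compose exec
    volumes:
      - packages-vol:/packages
  example:                          # netsim (ConfD) network element
    image: netsim-example:6.4       # hypothetical netsim image
    volumes:
      - nso-vol:/nso                # shares generated init.xml and ncs.conf

volumes:
  nso-vol:
  packages-vol:
```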
Note that the packages use a shared volume in this simple example setup. In a more complex production environment, you may want to consider a dedicated redundant volume for your packages.
Follow the steps below to run the images using Docker Compose:
Start the Build container. This starts the services in the Compose file that have the profile build.
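```bash
docker compose up --profile build -d
```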
Copy the packages from the netsim-sshkey example and compile them in the NSO Build container. The easiest way to do this is by using the docker exec command, which gives more control over what to build and in what order. You can also do this with a script to make it easier and less verbose. Normally, you populate the package directory from the host; here, we use the packages from an example.
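A sketch under the assumptions of the Compose file above (the service name, mount point, and example path are assumptions; adjust to where the netsim-sshkey example lives in your NSO installation):

```bash
docker compose exec build bash -c \
  'cp -r $NCS_DIR/examples.ncs/getting-started/netsim-sshkey/packages/* /packages/'
docker compose exec build bash -c \
  'for p in /packages/*/src; do make -C "$p" all; done'
```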
Start the netsim container. This outputs the generated init.xml and ncs.conf files to the NSO Production container. The --wait flag instructs Docker Compose to wait until the health check returns healthy.
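```bash
# The service name "example" is an assumption from the Compose sketch above
docker compose up --wait example
```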
Start the NSO Production container.
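```bash
docker compose up --profile prod --wait
```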
At this point, NSO is ready to run the service example to configure the netsim device(s). A bash script (demo.sh) that runs the above steps and showcases the netsim-sshkey example is given below:
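```bash
#!/usr/bin/env bash
# demo.sh - a sketch tying the steps above together. Service names,
# profiles, and the example path are assumptions.
set -eu

docker compose up --profile build -d
docker compose exec build bash -c \
  'cp -r $NCS_DIR/examples.ncs/getting-started/netsim-sshkey/packages/* /packages/ &&
   for p in /packages/*/src; do make -C "$p" all; done'
docker compose up --wait example
docker compose up --profile prod --wait
docker compose exec nso ncs_cli -u admin
```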
This example describes how to upgrade NSO when using Docker Compose.
To upgrade to a new minor or major version, for example, from 6.3 to 6.4, follow the steps below:
Change the image version in the Compose file to the new version, here 6.4.
Run the docker compose up --profile build -d command to start the Build container with the new image.
Compile the packages using the Build container.
Run the docker compose up --profile prod --wait command to start the Production container with the new packages that were just compiled.
To upgrade to a new maintenance release version, for example, 6.4.1, follow the steps below:
Change the image version in the Compose file to the new version, here 6.4.1.
Run the docker compose up --profile prod --wait command.
Upgrading in this way does not require recompiling the packages. Docker detects the change and upgrades the image in the container to the new version.
The Red Hat UBI is an OCI-compliant image that is freely distributable and independent of platform and technical dependencies. You can read more about the Red Hat UBI and the Open Container Initiative (OCI) on their respective websites.
The Production Image is a production-ready NSO image for system-wide deployment and use. It is based on the NSO System Install and is available from the Cisco software download site.
Consult the documentation for information on installing NSO on a Docker host, building NSO packages, etc.
See the Docker Compose example above for one that uses the container to deploy an SSH-key-provisioning nano service. The README in the example provides a link to the container-based deployment variant of the example. See the setup_ncip.sh script and README in the netsim-sshkey deployment example for details.
The image is available as a signed package (e.g., nso-VERSION.container-image-build.linux.ARCH.signed.bin) from the Cisco software download site. You can run the Build Image in different ways; Docker Compose, a simple tool for defining and running multi-container Docker applications, is one of them (see the examples above).
On Cisco's official software download site, search for "Network Services Orchestrator". Select the relevant NSO version in the drop-down list, e.g., "Crosswork Network Services Orchestrator 6", and click "Network Services Orchestrator Software". Locate the binary, which is delivered as a signed package (e.g., nso-6.4.container-image-prod.linux.x86_64.signed.bin).
Verify the compatibility of your current system configurations with the containerized NSO setup. See the migration guidelines above for more information.
Set up the container environment and download/extract the NSO Production Image. See the fetch-and-extract instructions above for details.
See the System Install documentation for information on memory overcommit recommendations for a Linux system hosting NSO production containers.
If you intend to run multiple images (i.e., both Production and Build), Docker Compose is a tool that simplifies defining and running multi-container Docker applications. See the Docker Compose example above for detailed instructions.
Start a container from the image. Supply additional arguments to mount the packages and ncs.conf as separate volumes, and publish ports for networking as needed. The container starts NSO using the /run-nso.sh script. To understand how the ncs.conf file is used, see the ncs.conf File Configuration and Preference section above.
The packages used in this example are taken from the netsim-sshkey example described above (the distkey service package and the ne NED package).