Learn about different ways to deploy NSO.
By installation
By using Cisco-provided container images
Choose this way if you want to install NSO on a system. Before proceeding with the installation, decide on the install type.
The installation of NSO comes in two variants:
All the NSO examples and README steps provided with the installation are based on and intended for Local Install only. Use Local Install for evaluation and development purposes only.
System Install should be used only for production deployment. For all other purposes, use the Local Install procedure.
Choose this way if you want to run NSO in a container, such as Docker. Visit the link below for more information.
Supporting Information
If you are evaluating NSO, you should have a designated support contact. If you have an NSO support agreement, please use the support channels specified in the agreement. In either case, do not hesitate to reach out to us if you have questions or feedback.
Local Install
Local Install is used for development, lab, and evaluation purposes. It unpacks all the application components, including docs and examples. It can be used by an engineer to run multiple unrelated instances of NSO for different labs and demos on a single workstation.
System Install
System Install is used when installing NSO for a centralized, always-on, production-grade, system-wide deployment. It is configured as a system daemon that starts and stops with the underlying operating system. The default admin and operator users are not included, and the file structure is more distributed.
Explore NSO contents after finishing the installation.
Applies to Local Install.
Before starting NSO, it is recommended to explore the installation contents.
Navigate to the newly created Installation Directory, for example:
The installation directory includes the following contents:
Along with the binaries, NSO installs a full set of documentation available in the doc/ folder in the Installation Directory. There is also an online version of the documentation available on DevNet. Open index.html in your browser to explore further.
Local Install comes with a rich set of examples to start using NSO.
In order to communicate with the network, NSO uses NEDs as device drivers for different device types. Cisco has NEDs for hundreds of different devices available for customers, and several are included in the installer in the /packages/neds directory.
In the example below, NEDs for Cisco ASA, IOS, IOS XR, and NX-OS are shown. Also included are NEDs for other vendors including Juniper JunOS, A10, ALU, and Dell.
The example NEDs included in the installer are intended for evaluation, demonstration, and use with the examples in examples.ncs. These are not the latest versions available and often do not have all the features of production NEDs.
A large number of pre-built supported NEDs are available which can be acquired and downloaded by the customers from Cisco Software Download. Note that the specific file names and versions that you download may be different from the ones in this guide. Therefore, remember to update the paths accordingly.
Like the NSO installer, the NEDs are signed .bin files that need to be run to validate the download and extract the new code.
To install new NEDs:
Change to the working directory where your downloads are. The filenames indicate which version of NSO the NEDs are pre-compiled for (in this case NSO 6.0), and the version of the NED. An example output is shown below.
Use the sh command to run the signed .bin file to verify the certificate and extract the NED tar.gz and other files. Repeat for all files. An example output is shown below.
You now have three tar (.tar.gz) files. These are compressed versions of the NEDs. List the files to verify, as shown in the example below.
Navigate to the packages/neds directory of your Local Install, for example:
In the /packages/neds directory, extract the .tar.gz files into this directory using the tar command with the path to where the compressed NED is located. An example is shown below.
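A hypothetical extraction command; the archive name is an assumption and will differ for the NED versions you downloaded:

```bash
# Extract a downloaded NED archive into the current neds directory (file name is hypothetical)
tar -xzf ~/Downloads/ncs-6.0-cisco-iosxr-7.45.tar.gz
```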
Here is a sample list of the newer NEDs extracted along with the ones bundled with the installation:
The last things to note are the files ncsrc and ncsrc.tcsh. These are shell scripts for bash and tcsh that set up your PATH and other environment variables for NSO. Depending on your shell, you need to source this file before starting NSO. For more information on sourcing the shell script, see the Local Install steps.
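A minimal example, assuming a bash shell and an installation directory of ~/nso-6.1:

```bash
source ~/nso-6.1/ncsrc
```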
Start and stop the NSO daemon.
Applies to Local Install.
The command ncs -h shows the various options available when starting NSO. By default, NSO starts in the background without an associated terminal. It is recommended to add NSO to the /etc/init scripts of the deployment hosts. For more information, see ncs(1) in Manual Pages.
Whenever you start (or reload) the NSO daemon, it reads its configuration from ./ncs.conf, or ${NCS_DIR}/etc/ncs/ncs.conf, or from the file specified with the -c option.
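A short sketch of the common invocations; the -c path is only needed when the configuration file is not in one of the default locations:

```bash
ncs                       # start the daemon using ./ncs.conf or the installation default
ncs -c /path/to/ncs.conf  # start with an explicit configuration file
ncs --status              # check whether the daemon is running
ncs --stop                # stop the daemon
```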
Alter your examples to work with System Install.
Applies to System Install.
Since all the NSO examples and README steps that come with the installer are primarily aimed at Local Install, you need to modify them to run them on a System Install.
Depending on the example, this may require minor or more extensive modifications to work with the System Install structure.
For example, to port the examples.ncs/development-guide/nano-services/basic-vrouter example to the System Install structure:
Make the following changes to the basic-vrouter/ncs.conf file:
Copy the Local Install $NCS_DIR/var/ncs/cdb/aaa_init.xml file to the basic-vrouter/ folder.
Other, more complex examples may require more ncs.conf file changes, a copy of the default Local Install $NCS_DIR/etc/ncs/ncs.conf file together with the modification described above, or the Local Install tool $NCS_DIR/bin/ncs-setup, as the ncs-setup command is usually not useful with a System Install. See Migrate to System Install for more information.
Create a new NSO instance for Local Install.
Applies to Local Install.
One of the scripts included with an NSO installation is ncs-setup, which makes it very easy to create instances of NSO from a Local Install. You can look at --help or ncs-setup(1) in Manual Pages for more details, but the two options you need to know are:
--dest defines the directory where you want to set up NSO. If the directory does not exist, it will be created.
--package defines the NEDs that you want to have installed. You can specify this option multiple times.
NCS is the original name of the NSO product. Therefore, many of the commands and application features are prefaced with ncs. You can think of NCS as another name for NSO.
To create an NSO instance:
Run the command to set up an NSO instance in the current directory with the IOS, NX-OS, IOS XR, and ASA NEDs. You only need one NED per platform that you want NSO to manage, even if you have multiple versions in your installer neds directory.
Use the name of the NED folder in ${NCS_DIR}/packages/neds for the latest NED version that you have installed for the target platform. Use the Tab key to complete the path after you start typing (alternatively, copy and paste). Verify that the NED versions below match what is currently on the sandbox to avoid a syntax error. See the example below.
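A sketch of the setup command; the NED directory names and versions are assumptions and must match what is present in your ${NCS_DIR}/packages/neds directory:

```bash
ncs-setup --dest nso-instance \
  --package ${NCS_DIR}/packages/neds/cisco-ios-cli-6.77 \
  --package ${NCS_DIR}/packages/neds/cisco-nx-cli-5.23 \
  --package ${NCS_DIR}/packages/neds/cisco-iosxr-cli-7.38 \
  --package ${NCS_DIR}/packages/neds/cisco-asa-cli-6.13
```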
Check the nso-instance directory. Notice that several new files and folders are created.
Following is a description of the important files and folders:
ncs.conf is the NSO application configuration file and is used to customize aspects of the NSO instance (for example, to change ports, enable or disable features, and so on). See ncs.conf(5) in Manual Pages for information.
packages/ is the directory that has symlinks to the NEDs referenced in the --package arguments at the time of setup. See NSO Packages in Development for more information.
logs/ is the directory that contains all the logs from NSO. This directory is useful for troubleshooting.
Start the NSO instance by navigating to the nso-instance directory and typing the ncs command. You must be in the nso-instance directory each time you want to start or stop NSO. If you have multiple instances, you need to navigate to each one and use the ncs command to start or stop each of them.
Verify that NSO is running by using the ncs --status | grep status command.
Add netsim or lab devices using the command ncs-netsim -h.
Run and interact with practice examples provided with the NSO installer.
Applies to Local Install.
This section provides an overview of how to run the examples provided with the NSO installer. By working through the examples, the reader should get a good overview of the various aspects of NSO and hands-on experience from interacting with it.
This section references the examples located in $NCS_DIR/examples.ncs. The examples all have README files that include instructions related to the example.
Make sure that NSO is installed with a Local Install according to the instructions in Local Install.
Source the ncsrc file in the NSO installation directory to set up a local environment. For example:
Proceed to the example directory:
Follow the instructions in the README files located in the example directories.
Every example directory is a complete NSO run-time directory. The README file and the detailed instructions later in this guide show how to generate a simulated network and NSO configuration for running the specific examples. Basically, the following steps are done:
Create a simulated network using the ncs-netsim create-network command:
This creates three Cisco IOS devices called ios0, ios1, and ios2.
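A sketch of the command; the NED directory name is an assumption, and the README of each example gives the exact invocation:

```bash
ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios-cli-6.77 3 ios
```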
Create an NSO run-time environment using the ncs-setup command:
This command uses the --dest option to create local directories for logs, database files, and the NSO configuration file in the current directory (note that . refers to the current directory).
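A minimal sketch, assuming the simulated network was created in ./netsim:

```bash
ncs-setup --netsim-dir ./netsim --dest .
```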
Start NCS netsim:
Start NSO:
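The two start commands, run from the example directory:

```bash
ncs-netsim start
ncs
```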
It is important to make sure that you stop ncs and ncs-netsim when moving between examples, using the stop option of ncs-netsim and the --stop option of ncs.
Some of the most common mistakes are:
Enable your NSO instance for development purposes.
Applies to Local Install.
If you intend to use your NSO instance for development purposes, enable the development mode using the command license smart development enable.
Remove Local Install.
Applies to Local Install.
To uninstall Local Install, simply delete the Install Directory.
Upgrade NSO to a higher version.
Upgrading the NSO software gives you access to new features and product improvements. Every change carries a risk, and upgrades are no exception. To minimize the risk and make the upgrade process as painless as possible, this section describes the recommended procedures and practices to follow during an upgrade.
As usual, sufficient preparation avoids many pitfalls and makes the process more straightforward and less stressful.
There are multiple aspects that you should consider before starting with the actual upgrade procedure. While the development team tries to provide as much compatibility between software releases as possible, they cannot always avoid all incompatible changes. For example, when a deviation from an RFC standard is found and resolved, it may break clients that depend on the non-standard behavior. For this reason, a distinction is made between maintenance and a major NSO upgrade.
A maintenance NSO upgrade is within the same branch, i.e., when the first two version numbers stay the same (x.y in the x.y.z NSO version). An example is upgrading from version 6.0.1 to 6.0.2. In the case of a maintenance upgrade, the NSO release contains only corrections and minor enhancements, minimizing the changes. It includes binary compatibility for packages, so there is no need to recompile the .fxs files for a maintenance upgrade.
Correspondingly, when the first or second number in the version changes, that is called a full or major upgrade. For example, upgrading version 6.0.1 to 6.1 is a major, non-maintenance upgrade. Due to new features, packages must be recompiled, and some incompatibilities could manifest.
In addition to the above, a package upgrade is when you replace a package with a newer version, such as a NED or a service package. Sometimes, when the package changes are not too big, it is possible to supply the new packages as part of the NSO upgrade, but this approach brings additional complexity. Instead, a package upgrade and an NSO upgrade should, in general, be performed as separate actions and are covered as such.
To avoid surprises during any upgrade, first ensure the following:
Hosts have sufficient disk space, as some additional space is required for an upgrade.
The software is compatible with the target OS. However, sometimes a newer version of Java or system libraries, such as glibc, may be required.
All the required NEDs and custom packages are compatible with the target NSO version.
Existing packages have been compiled for the new version and are available to you during the upgrade.
Check whether the existing ncs.conf file can be used as-is or needs updating. For example, stronger encryption algorithms may require you to configure additional keying material.
Review the CHANGES file for information on what has changed.
If upgrading from a no longer supported software version, verify that the upgrade can be performed directly. In situations where the currently installed version is very old, you may have to upgrade to one or more intermediate versions before upgrading to the target version.
In case it turns out any of the packages are incompatible or cannot be recompiled, you will need to contact the package developers for an updated or recompiled version. For an official Cisco-supplied package, it is recommended that you always obtain a pre-compiled version if it is available for the target NSO release, instead of compiling the package yourself.
If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with ssh, nct, or some other remote management command. For the reference example used in this chapter, see examples.ncs/development-guide/high-availability/hcc. The management station uses shell and Python scripts that use ssh to access the Linux shell and NSO CLI, and Python Requests for NSO RESTCONF interface access.
Likewise, NSO 5.3 added support for 256-bit AES encrypted strings, requiring the AES256CFB128 key in the ncs.conf configuration. You can generate one with openssl rand -hex 32 or a similar command. Alternatively, if you use an external command to provide keys, ensure that it includes a value for AES256CFB128_KEY in the output.
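A minimal sketch of generating such a key; the resulting 64-character hex string goes into the AES256CFB128 key setting in ncs.conf:

```bash
openssl rand -hex 32
```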
Caution
The ncs-backup (and consequently the nct backup) command does not back up the /opt/ncs/packages folder. If you make any file changes, back them up separately.
However, the best practice is not to modify packages in the /opt/ncs/packages folder. Instead, if an upgrade requires package recompilation, separate package folders (or files) should be used, one for each NSO version.
The upgrade of a single NSO instance requires the following steps:
Create a backup.
Perform a System Install of the new version.
Stop the old NSO server process.
Compact the CDB files write log.
Update the /opt/ncs/current symbolic link.
If required, update the ncs.conf configuration file.
Update the packages in /var/opt/ncs/packages/ if recompilation is needed.
Start the NSO server process, instructing it to reload the packages.
The following steps suppose that you are upgrading to the 6.1 release. They pertain to a System Install of NSO, and you must perform them with Super User privileges.
As a best practice, always create a backup before trying to upgrade.
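For example, using the ncs-backup command installed by the System Install:

```bash
# Creates a backup of the NSO run-time data and configuration
ncs-backup
```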
For the upgrade itself, you must first download to the host and install the new NSO release.
Then, you stop the currently running server with the help of the init.d
script or an equivalent command relevant to your system.
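For example, with the default init script:

```bash
/etc/init.d/ncs stop
```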
Compact the CDB files write log using, for example, the ncs --cdb-compact $NCS_RUN_DIR/cdb command.
Next, you update the symbolic link for the currently selected version to point to the newly installed one, 6.1 in this case.
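A sketch, assuming the default /opt/ncs layout of a System Install:

```bash
cd /opt/ncs
rm -f current
ln -s ncs-6.1 current
```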
While seldom necessary, at this point you would also update the /etc/ncs/ncs.conf file.
Now, ensure that the /var/opt/ncs/packages/ directory has the appropriate packages for the new version. It should be possible to continue using the same packages for a maintenance upgrade. For a major upgrade, however, you must normally rebuild the packages or use pre-built ones for the new version. Ensure this directory contains the exact same version of each existing package, compiled for the new release, and nothing else.
As a best practice, the available packages are kept in /opt/ncs/packages/, and /var/opt/ncs/packages/ contains only symbolic links. In this case, to identify the release for which they were compiled, the package file names all start with the corresponding NSO version. Then, you only need to rearrange the symbolic links in the /var/opt/ncs/packages/ directory, as in the sketch below.
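A sketch of rearranging the links; the package file name is hypothetical:

```bash
cd /var/opt/ncs/packages
rm -f *
ln -s /opt/ncs/packages/ncs-6.1-router-nc-1.0.tar.gz .
```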
Please note that the above package naming scheme is neither required nor enforced. If your package filesystem names differ from it, you will need to adjust the preceding command accordingly.
Finally, you start the new version of the NSO server with the package reload flag set.
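For example, with the default init script:

```bash
/etc/init.d/ncs start-with-package-reload
```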
NSO will perform the necessary data upgrade automatically. However, this process may fail if you have changed or removed any packages. In that case, ensure that the correct versions of all packages are present in /var/opt/ncs/packages/ and retry the preceding command.
Also, note that with many packages or data entries in the CDB, this process could take more than 90 seconds and result in the following error message:
The above error does not imply that NSO failed to start, just that it took longer than 90 seconds. Therefore, it is recommended you wait some additional time before verifying.
It is imperative that you have a working copy of data available from which you can restore. That is why you must always create a backup before starting an upgrade. Only a backup guarantees that you can rerun the upgrade or back out of it, should it be necessary.
The same steps can also be used to restore data on a new, similar host if the OS of the initial host becomes corrupted beyond repair.
First, stop the NSO process if it is running.
Verify and, if necessary, revert the symbolic link in /opt/ncs/ to point to the initial NSO release.
In the exceptional case where the initial version installation was removed or damaged, you will need to re-install it first and redo the step above.
Verify if the correct (initial) version of NSO is being used.
Next, restore the backup.
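A sketch, assuming a backup file created earlier with ncs-backup; the file name is hypothetical:

```bash
ncs-backup --restore /var/opt/ncs/backups/ncs-6.0.1@2024-01-15.backup.gz
```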
Finally, start the NSO server and verify the restore was successful.
Upgrading NSO in a highly available (HA) setup is a staged process. It entails running various commands across multiple NSO instances at different times.
The procedure is almost the same for a maintenance and major NSO upgrade. The difference is that a major upgrade requires the replacement of packages with recompiled ones. Still, a maintenance upgrade is often perceived as easier because there are fewer changes in the product.
The stages of the upgrade are:
First, enable read-only mode on the designated primary, and then on the secondary that is enabled for fail-over.
Take a full backup on all nodes.
If using a 3-node setup, disconnect the 3rd, non-fail-over secondary by disabling HA on this node.
Disconnect the HA pair by disabling HA on the designated primary, temporarily promoting the designated secondary to provide the read-only service (and advertise the shared virtual IP address if it is used).
Upgrade the designated primary.
Disable HA on the designated secondary node, to allow the designated primary to become the actual primary in the next step.
Activate HA on the designated primary, which will assume its assigned (primary) role to provide the full service (and again advertise the shared IP if used). However, at this point, the system is without HA.
Upgrade the designated secondary node.
Activate HA on the designated secondary, which will assume its assigned (secondary) role, connecting HA again.
Verify that HA is operational and has converged.
Upgrade the 3rd, non-fail-over secondary if it is used, and verify that it successfully rejoins the HA cluster.
Enabling the read-only mode on both nodes is required to ensure that the subsequent backup captures the full system state, as well as to make sure the failover-primary does not start taking writes when it is promoted later on.
Disabling the non-fail-over secondary in a 3-node setup right after taking a backup is necessary when using the built-in HA rule-based algorithm (enabled by default in NSO 5.8 and later). Without it, the node might connect to the failover-primary when the failover happens, which disables read-only mode.
While not strictly necessary, explicitly promoting the designated secondary after disabling HA on the primary ensures a fast failover, avoiding the automatic reconnection attempts. If using a shared IP solution, such as the Tail-f HCC, this makes sure the shared VIP comes back up on the designated secondary as soon as possible. In addition, some older NSO versions do not reset the read-only mode upon disabling HA if they are not acting primary.
Another important thing to note is that all packages used in the upgrade must match the NSO release. If they do not, the upgrade will fail.
In the case of a major upgrade, you must recompile the packages for the new version. It is highly recommended that you use pre-compiled packages and do not compile them during this upgrade procedure, since the compilation can prove nontrivial and the production hosts may lack all the required (development) tooling. You should use a naming scheme to distinguish between packages compiled for different NSO versions. A good option is for package file names to start with the ncs-MAJORVERSION- prefix for a given major NSO version. This ensures multiple packages can co-exist in the /opt/ncs/packages folder, and the NSO version they can be used with becomes obvious.
The following is a transcript of a sample upgrade procedure, showing the commands for each step described above, in a 2-node HA setup, with nodes in their initial designated state. The procedure ensures that this is also the case in the end.
Scripting is the recommended way to upgrade the NSO version of an HA cluster. The following example script shows the required commands and can serve as a basis for your own customized upgrade script. In particular, the script requires the specific package naming convention described above, and you may need to tailor it to your environment. In addition, it expects the new release version and the designated primary and secondary node addresses as arguments. The recompiled packages are read from the packages-MAJORVERSION/ directory.
For the example script below, we configured our primary and secondary nodes with the nominal roles that they assume at startup and when HA is enabled. Automatic failover is also enabled, so that the secondary will assume the primary role if the primary node goes down.
Once the script has completed, it is paramount that you manually verify the outcome. First, check that HA is enabled by using the show high-availability command on the CLI of each node. Then connect to the designated secondaries and ensure they have the complete latest copy of the data, synchronized from the primaries.
After the primary node is upgraded and restarted, the read-only mode is automatically disabled. This allows the primary node to start processing writes, minimizing downtime. However, there is no HA. Should the primary fail at this point, or should you need to revert to a pre-upgrade backup, the new writes would be lost. To avoid this scenario, enable read-only mode on the primary again after re-enabling HA. Then disable read-only mode only after successfully upgrading and reconnecting the secondary.
To further reduce time spent upgrading, you can customize the script to install the new NSO release and copy packages beforehand. Then, you only need to switch the symbolic links and restart the NSO process to use the new version.
You can use the same script for a maintenance upgrade as-is, with an empty packages-MAJORVERSION directory, or remove the upgrade_packages calls from the script.
Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under examples.ncs/development-guide/high-availability.
We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in examples.ncs/development-guide/high-availability/hcc implements shell- and Python-scripted steps to upgrade the NSO version, using ssh to the Linux shell and the NSO CLI, or Python Requests RESTCONF, for accessing the paris and london nodes. See the example for details.
Package upgrades are frequent and routine in development but require the same care as NSO upgrades in the production environment. The reason is that the new packages may contain an updated YANG model, resulting in a data upgrade process similar to a version upgrade. So, if a package is removed or uninstalled and a replacement is not provided, package-specific data, such as service instance data, will also be removed.
In a single-node environment, the procedure is straightforward. Create a backup with the ncs-backup command and ensure the new package is compiled for the current NSO version and available under the /opt/ncs/packages directory. Then either manually rearrange the symbolic links in the /var/opt/ncs/packages directory or use the software packages install command in the NSO CLI. Finally, invoke the packages reload command. For example:
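A sketch in the Cisco-style NSO CLI; the package name is hypothetical, and the exact argument to the install command may vary between NSO versions:

```
admin@ncs# software packages install package ncs-6.1-router-nc-1.1
admin@ncs# packages reload
```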
On the other hand, upgrading packages in an HA setup is an error-prone process. Thus, NSO provides an action, packages ha sync and-reload, to minimize such complexity. This action loads new data models into NSO instead of restarting the server process. As a result, it is considerably more efficient, and the time saved can be considerable if the amount of data in the CDB is huge.
The action executes on the primary node. First, it syncs the physical packages from the primary node to the secondary nodes as tar archive files, regardless of whether the packages were initially added as directories or tar archives. Then, it performs the upgrade on all nodes in one go. The action does not perform the sync and the upgrade on nodes with the none role.
The packages ha sync action distributes new packages to the secondary nodes. If a package already exists on a secondary node, it will be replaced with the one on the primary node. Deleting a package on the primary node will also delete it on the secondary nodes. Packages found in load paths under the installation destination (by default /opt/ncs/current) are not distributed, as they belong to the system and should not differ between the primary and secondary nodes.
It is crucial to ensure that the load path configuration is identical on both primary and secondary nodes. Otherwise, the distribution will not start, and the action output will contain detailed error information.
Using the and-reload parameter with the action starts the upgrade once the packages are copied over. The action sets the primary node to read-only mode. After the upgrade is successfully completed, the node is set back to its previous mode.
If the and-reload parameter is also supplied with the wait-commit-queue-empty parameter, it will wait for the commit queue to become empty on the primary node and prevent other queue items from being added while the queue is being drained.
Using the wait-commit-queue-empty parameter is the recommended approach, as it minimizes the risk of the upgrade failing due to commit queue items still relying on the old schema.
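A sketch in the Cisco-style NSO CLI, run on the primary node; the exact parameter syntax may differ between NSO versions:

```
admin@ncs# packages ha sync and-reload { wait-commit-queue-empty }
```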
The packages ha sync and-reload command has the following known limitations and side effects:
The primary node is set to read-only mode before the upgrade starts and is set back to its previous mode if the upgrade completes successfully. However, the node will always be left in read-write mode if an error occurs during the upgrade. It is up to the user to set the node back to the desired mode by using the high-availability read-only mode command.
As a best practice, you should create a backup of all nodes before upgrading. This action creates no backups; you must do that explicitly.
Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under examples.ncs/development-guide/high-availability.
We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in examples.ncs/development-guide/high-availability/hcc implements shell- and Python-scripted steps to upgrade the primary paris package versions and sync the packages to the secondary london, using ssh to the Linux shell and the NSO CLI, or Python Requests RESTCONF, for accessing the paris and london nodes. See the example for details.
In addition, you must take special care with NED upgrades because services depend on them. For example, since NSO 5 introduced the CDM feature, which allows loading multiple versions of a NED, a major NED upgrade requires a procedure involving the migrate action.
When a NED contains nontrivial YANG model changes, that is called a major NED upgrade. The NED ID changes, and the first or second number in the NED version changes since NEDs follow the same versioning scheme as NSO. In this case, you cannot simply replace the package, as you would for a maintenance or patch NED release. Instead, you must load (add) the new NED package alongside the old one and perform the migration.
Install NSO for non-production use, such as for development and training purposes.
Complete the following activities in the given order to perform a Local Install of NSO.
Start by setting up your system to install and run NSO.
To install NSO:
Fulfill at least the primary requirements.
If you intend to build and run NSO examples, you also need to install additional applications listed under Additional Requirements.
To download the Cisco NSO installer and example NEDs:
Go to Cisco's official Software Download site.
Search for the product "Network Services Orchestrator" and select the desired version.
There are two versions of the NSO installer, i.e. for macOS and Linux systems. Download the desired installer.
If your downloaded file is a signed .bin file, it means that it has been digitally signed by Cisco, and upon execution, you will verify the signature and unpack the installer .bin.
If you only have the installer .bin, skip to the next step.
To unpack the installer:
In the terminal, list the binaries in the directory where you downloaded the installer, for example:
Use the sh command to run the signed .bin file to verify the certificate and extract the installer binary and other files. An example output is shown below.
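A hypothetical invocation; the file name will differ for the release you downloaded:

```bash
sh nso-6.1.linux.x86_64.signed.bin
```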
List the files to check if extraction was successful.
A Local Install of the NSO software is performed in a single user-specified directory, for example, your $HOME directory. It is recommended to install NSO in a directory named after the release version; for example, if the version being installed is 6.1, the directory should be ~/nso-6.1.
To run the installer:
Navigate to your Install Directory.
Run the following command to install NSO in your Install Directory. The --local-install parameter is optional.
An example output is shown below.
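For reference, a hypothetical invocation; adjust the installer name and target directory to your download and environment:

```bash
sh nso-6.1.linux.x86_64.installer.bin $HOME/nso-6.1 --local-install
```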
The installation program creates a shell script file named ncsrc in each NSO installation, which sets the environment variables.
To set the environment variables:
Source the ncsrc file to get the environment variable settings in your shell. You may want to add this sourcing command to your login sequence, such as .bashrc.
For csh/tcsh users, there is an ncsrc.tcsh file with csh/tcsh syntax. The example below assumes that you are using bash; other versions of /bin/sh may require that you use . instead of source.
Most users add source ~/nso-x.x/ncsrc (where x.x is the NSO version) to their ~/.bash_profile, but you can simply do it manually when you want it. Once it has been sourced, you have access to all the NSO executable commands, which start with ncs.
NSO needs a deployment/runtime directory where the database files, logs, etc. are stored. An empty default directory can be created using the ncs-setup command.
To create a Runtime Directory:
Create a Runtime Directory for NSO by running the following command. In this case, we assume that the directory is $HOME/ncs-run.
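A minimal example using the directory assumed above:

```bash
ncs-setup --dest $HOME/ncs-run
```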
Start the NSO daemon ncs.
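From the Runtime Directory created above:

```bash
cd $HOME/ncs-run
ncs
```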
To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses Cisco Smart Licensing, as described in Cisco Smart Licensing, to make it easy to deploy and manage NSO license entitlements. Login credentials to the CSSM account are provided by your Cisco contact, and detailed instructions on how to create a registration token can be found in Cisco Smart Licensing. General licensing information covering licensing models, how licensing works, usage compliance, etc., is covered in the Cisco Software Licensing Guide.
To generate a license registration token:
When you have a token, start a Cisco CLI towards NSO and enter the token, for example:
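A sketch in the Cisco-style NSO CLI; the token string is a placeholder for the one generated in CSSM:

```
admin@ncs# license smart register idtoken YOUR-TOKEN-STRING
```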
Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only development entitlement for the NSO instance itself is requested.
Inspect the requested entitlements using the command show license all (or by inspecting the NSO daemon log). An example output is shown below.
Frequently Asked Questions (FAQs) about Local Install.
Next Steps
Install NSO for production use in a system-wide deployment.
Complete the following activities in the given order to perform a System Install of NSO.
Start by setting up your system to install and run NSO.
To install NSO:
Fulfill at least the primary requirements.
If you intend to build and run NSO deployment examples, you also need to install additional applications listed under Additional Requirements.
To download the Cisco NSO installer and example NEDs:
Go to Cisco's official Software Download site.
Search for the product "Network Services Orchestrator" and select the desired version.
There are two versions of the NSO installer, i.e. for macOS and Linux systems. For System Install, choose the Linux OS version.
If your downloaded file is a signed .bin file, it means that it has been digitally signed by Cisco, and upon execution, you will verify the signature and unpack the installer .bin.
If you only have the installer .bin, skip to the next step.
To unpack the installer:
In the terminal, list the binaries in the directory where you downloaded the installer, for example:
Use the sh command to run the signed .bin file to verify the certificate and extract the installer binary and other files. An example output is shown below.
List the files to check if extraction was successful.
To run the installer:
Navigate to your Install Directory.
Run the installer with the --system-install option to perform a System Install. This option creates an installation of NSO that is suitable for production deployment.
For example:
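A hypothetical invocation, run with Super User privileges; the installer name will differ for your download:

```bash
sh nso-6.1.linux.x86_64.installer.bin --system-install
```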
Some older NSO releases expect the /etc/init.d/
folder to exist in the host operating system. If the folder does not exist, the installer may fail to successfully install NSO. A workaround that allows the installer to proceed is to create the folder manually, but the NSO process will not automatically start at boot.
The installation is configured for PAM authentication, with group assignment based on the OS group database (e.g., the /etc/group file). Users that need access to NSO must belong to either the ncsadmin group (for unlimited access rights) or the ncsoper group (for minimal access rights).
To set up user access:
To create the ncsadmin group, use the OS shell command.
To create the ncsoper group, use the OS shell command.
To add an existing user to one of these groups, use the OS shell command. A combined sketch of all three is shown below.
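A minimal sketch using standard Linux commands; username is a placeholder:

```bash
groupadd ncsadmin
groupadd ncsoper
usermod -a -G ncsadmin username
```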
To set environment variables:
Change to Super User privileges.
The installation program creates a shell script file in each NSO installation that sets the environment variables needed to run NSO. With the --system-install option, these settings are by default set on the shell. To explicitly set the variables, source ncs.sh or ncs.csh depending on your shell type.
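For example, for a bash-type shell, assuming the default System Install location of the profile scripts:

```bash
source /etc/profile.d/ncs.sh
```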
Start NSO.
Once you log in with a user that belongs to ncsadmin or ncsoper, you can directly access the CLI, as shown below:
As part of the System Install, the NSO daemon ncs is automatically started at boot time. You do not need to create a Runtime Directory for System Install.
To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses Cisco Smart Licensing to make it easy to deploy and manage NSO license entitlements. Login credentials to the CSSM account are provided by your Cisco contact, and detailed instructions on how to create a registration token can be found in Cisco Smart Licensing. General licensing information covering licensing models, how licensing works, usage compliance, etc., is covered in the Cisco Software Licensing Guide.
To generate a license registration token:
When you have a token, start a Cisco CLI towards NSO and enter the token, for example:
Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only development entitlement for the NSO instance itself is requested.
Inspect the requested entitlements using the command show license all (or by inspecting the NSO daemon log). An example output is shown below.
Frequently Asked Questions (FAQs) about System Install.
Convert your current Local Install setup to a System Install.
Applies to Local Install.
If you already have a Local Install with existing data that you would like to convert into a System Install, the following procedure allows you to do so. However, a reverse migration from System to Local Install is not supported.
It is possible to perform the migration and an upgrade to a newer NSO version simultaneously; however, doing so introduces additional complexity. If you run into issues, first migrate, and then perform the upgrade.
The following procedure assumes that NSO is installed as described in the NSO Local Install process, and will perform an initial System Install of the same NSO version. After following these steps, consult the NSO System Install guide for additional steps that are required for a fully functional System Install.
The procedure also assumes you are using the $HOME/ncs-run folder as the run directory. If this is not the case, modify the following paths accordingly.
To migrate to System Install:
Stop the current (local) NSO instance, if it is running.
Take a complete backup of the Runtime Directory for potential disaster recovery.
Change to Super User privileges.
Start the NSO System Install.
If you have multiple versions of NSO installed, verify that the symbolic link in /opt/ncs points to the correct version.
Copy the CDB files containing data to the central location.
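A sketch, assuming the default Local Install CDB location under the run directory and the default System Install destination:

```bash
cp $HOME/ncs-run/ncs-cdb/*.cdb /var/opt/ncs/cdb/
```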
Ensure that the /var/opt/ncs/packages directory includes all the necessary packages, appropriate for the NSO version. However, copying the packages directly could later interfere with the operation of the nct command; it is better to use only symbolic links in that folder. Instead, copy the existing packages to the /opt/ncs/packages directory, either as directories or as tarball files. Make sure that each package includes the NSO version in its name and is not just a symlink, for example:
Link to these packages in the /var/opt/ncs/packages directory.
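A sketch for a single package; the package name and version are hypothetical:

```bash
cp -r $HOME/ncs-run/packages/router /opt/ncs/packages/ncs-6.1-router
ln -s /opt/ncs/packages/ncs-6.1-router /var/opt/ncs/packages/router
```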
The reason for prepending ncs-VERSION to the file name is to allow additional NSO commands, such as nct upgrade and software packages, to work properly. These commands need to identify which NSO version a package was compiled for.
Edit the /etc/ncs/ncs.conf configuration file and make the necessary changes. If you wish to use the configuration from the Local Install, disable local authentication, unless you fully understand its security implications.
When starting NSO later on, make sure that you set the package reload option, or use start-with-package-reload instead of start with /etc/init.d/ncs.
Review and complete the steps in NSO System Install, except running the installer, which you have done already. Once completed, you should have a running NSO instance with data from the Local Install.
Remove the package reload option if it was set.
Update log file paths for Java and Python VM through the NSO CLI.
Verify that everything is working correctly.
At this point, you should have a complete copy of the previous Local Install running as a System Install. Should the migration fail at some point and you want to back out of it, the Local Install was not changed and you can easily go back to using it as before.
In the unlikely event of Local Install becoming corrupted, you can restore it from the backup.
Remove System Install.
Applies to System Install.
NSO can be uninstalled this way only if it was installed with the --system-install option. Either part of the static files or the full installation can be removed using the ncs-uninstall option. Ensure that you stop NSO before uninstalling.
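For example, to remove the complete installation:

```bash
ncs-uninstall --all
```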
Executing the above command removes the Installation Directory /opt/ncs (including symbolic links), the Configuration Directory /etc/ncs, the Run Directory /var/opt/ncs, the Log Directory /var/log/ncs, init scripts from /etc/init.d, and user profile scripts from /etc/profile.d.
To make sure that no license entitlements are consumed after you have uninstalled NSO, be sure to perform the deregister command in the CLI:
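For example, in the Cisco-style NSO CLI:

```
admin@ncs# license smart deregister
```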
Understand NSO deployment with an example setup.
This section shows an example of a typical deployment for a highly available (HA) setup. For a reference implementation of the tailf-hcc layer-2 upgrade deployment scenario described here, check the NSO example set under examples.ncs/development-guide/high-availability/hcc. The example covers the following topics:
The deployment examples use both the legacy rule-based and the recommended HA Raft setup. See the High Availability documentation for details. The HA Raft deployment consists of three nodes running NSO and a node managing them, while the rule-based HA deployment uses only two nodes.
Based on the Raft consensus algorithm, the HA Raft version provides the best fault tolerance, performance, and security and is therefore recommended.
For the HA Raft setup, the NSO nodes paris.fra, london.eng, and berlin.ger make up a cluster of one leader and two followers.
For the rule-based HA setup, the NSO nodes paris and london make up one HA pair: one primary and one secondary.
In this container-based example, Docker Compose uses a Dockerfile to build the container image and install NSO on multiple nodes, here containers. A shell script uses an SSH client to access the NSO nodes from the manager node to demonstrate HA failover; as an alternative, a Python script implements SSH and RESTCONF clients.
An admin user is created on the NSO nodes. Password-less sudo access is set up to enable the tailf-hcc server to run the ip command. The manager's SSH client uses public key authentication, while the RESTCONF client uses a token to authenticate with the NSO nodes.
The example creates two packages using the ncs-make-package command: dummy and inert. A third package, tailf-hcc, provides VIPs that point to the current HA leader/primary node.
The packages are compressed into tar.gz format for easier distribution, but that is not a requirement.
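A sketch of how the two example packages might be generated; the skeleton type is an assumption:

```bash
ncs-make-package --service-skeleton python dummy
ncs-make-package --service-skeleton python inert
```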
This example uses a minimal Red Hat UBI distribution for hosting NSO with the following added packages:
NSO's basic dependency requirements are fulfilled by adding the Java Runtime Environment (JRE), OpenSSH, and OpenSSL packages.
The OpenSSH server is used for shell access and secure copy to the NSO Linux host for NSO version upgrade purposes. The NSO built-in SSH server provides CLI and NETCONF access to NSO.
The NSO services require Python.
The rsyslog package enables storing an NSO log file from several NSO logs locally and forwarding some logs to the manager.
The arp command from the net-tools package and the ping command from the iputils package have been added for demonstration purposes.
The steps in the list below are performed as root. Docker Compose will build the container images, i.e., create the NSO installation, as root.
The admin user will only need root access to run the ip command when tailf-hcc adds the Layer 2 VIP address to the leader/primary node interface.
The initialization steps are also performed as root for the nodes that make up the HA cluster:
Create the ncsadmin and ncsoper Linux user groups.
Create and add the admin and oper Linux users to their respective groups.
Perform a System Install of NSO that runs NSO as the admin user.
The admin user is granted access to run the ip command from the vipctl script as root using the sudo command, as required by the tailf-hcc package.
The cmdwrapper NSO program gets access to run the scripts executed by the generate-token action for generating RESTCONF authentication tokens as the current NSO user.
Password authentication is set up for the read-only oper user for use with NSO only, intended for WebUI access.
The root user is set up for Linux shell access only.
The NSO installer, the tailf-hcc package, application YANG modules, scripts for generating and authenticating RESTCONF tokens, and scripts for running the demo are all made available to the NSO and manager containers.
admin user permissions are set for the NSO directories and files created by the System Install, as well as for the root, admin, and oper home directories.
The ncs.crypto_keys file is generated and distributed to all nodes.
Note: The ncs.crypto_keys file is highly sensitive. It contains the encryption keys for all encrypted CDB data, which often includes passwords for various entities, such as login credentials to managed devices.
Note: In an NSO System Install setup, not only do the TLS certificates (HA Raft) or shared token (rule-based HA) need to match between the HA cluster nodes, but the configuration for encrypted strings, by default stored in /etc/ncs/ncs.crypto_keys, also needs to match between the nodes in the HA cluster. For rule-based HA, the tokens configured on the secondary nodes are overwritten with the encrypted token of type aes-256-cfb-128-encrypted-string from the primary node when the secondary connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to be re-established, with a "Token mismatch, secondary is not allowed" error.
For HA Raft, TLS certificates are generated for all nodes.
The initial NSO configuration, ncs.conf, is updated and kept in sync (identical) on the nodes.
The SSH servers are configured to allow only SSH public key authentication (no password). The oper user can use password authentication with the WebUI but has read-only NSO access.
The oper user is denied access to the Linux shell.
The admin user can access the Linux shell and NSO CLI using public key authentication.
New keys for all users are distributed to the HA cluster nodes and the manager node when the HA cluster is initialized.
The OpenSSH server and the NSO built-in SSH server use the same private and public key pairs, located under ~/.ssh/id_ed25519, while the manager public key is stored in the ~/.ssh/authorized_keys file for both NSO nodes.
Host keys are generated for all nodes to allow the NSO built-in SSH and OpenSSH servers to authenticate the server to the client.
Each HA cluster node has its own unique SSH host keys stored under ${NCS_CONFIG_DIR}/ssh_host_ed25519_key. The SSH client (here, the manager) has the keys for all nodes in the cluster, paired with the node's hostname and the VIP address, in its /root/.ssh/known_hosts file.
The host keys, like those used for client authentication, are generated each time the HA cluster nodes are initialized. The host keys are distributed to the manager and nodes in the HA cluster before the NSO built-in SSH and OpenSSH servers are started on the nodes.
As NSO runs in containers, the environment variables are set to point to the System Install directories in the Docker Compose .env file.
NSO runs as the non-root admin user; therefore, the ncs command is used to start NSO instead of the /etc/init.d/ncs and /etc/profile.d scripts. The environment variables are copied to a .pam_environment file so that the root and admin users can set the required environment variables when they access the shell via SSH.
The start script is installed as part of the NSO System Install, and it can be customized if you would like to use it to start NSO. The available NSO start script variants can be found under /opt/ncs/current/src/ncs/package-skeletons/etc. The scripts may provide what you need and can be used as a starting point.
If you are running NSO as the root user and using systemd, the init.d script can be converted for use with systemd. Example:
The OpenSSH sshd and rsyslog daemons are started.
The packages from the package store are added to the ${NCS_RUN_DIR}/packages directory before finishing the initialization part in the root context.
The NSO smart licensing token is set.
ncs.conf Configuration
The token provided to the user is added to a simple YANG list of tokens where the list key is the username. The token list is stored in the NSO CDB operational data store and is only accessible from the node's local MAAPI and CDB APIs. See the HA Raft and rule-based HA upgrade-l2/manager-etc/yang/token.yang file in the examples.
Note: The SSL certificates that NSO generates are self-signed:
Disable /ncs-config/webui/cgi unless needed.
aaa_init.xml Configuration
The NSO System Install places an AAA aaa_init.xml file in the $NCS_RUN_DIR/cdb directory. Compared to a Local Install for development, no users are defined for authentication in the aaa_init.xml file, and PAM is enabled for authentication. NACM rules for controlling NSO access are defined in the file for users belonging to an ncsadmin user group, with read-only access for users belonging to an ncsoper user group. As seen in the previous sections, this example creates the Linux root, admin, and oper users, as well as the ncsadmin and ncsoper Linux user groups.
PAM authenticates the users using SSH public key authentication without a passphrase for NSO CLI and NETCONF login. Password authentication is used for the oper user, intended for NSO WebUI login, and token authentication for RESTCONF login.
When the NSO daemon starts and there are no existing CDB files, the default AAA configuration in aaa_init.xml is used. It is restrictive and is used for this demo with only a minor addition that allows the oper user to generate a token for RESTCONF authentication.
The NSO authorization system is group-based; thus, for the rules to apply to a specific user, the user must be a member of the group to which the restrictions apply. PAM performs the authentication, while the NSO NACM rules do the authorization.
Adding the admin user to the ncsadmin group and the oper user to the limited ncsoper group ensures that the two users are properly authorized with NSO.
Not adding the root user to any group matching the NACM groups results in zero access, as no NACM rule will match, and the default in the aaa_init.xml file is to deny all access.
The manager in this example logs into the different NSO hosts using the Linux user login credentials. This scheme has many advantages, mainly because all audit logs on the NSO hosts will show who did what and when. Therefore, the common bad practice of having a shared admin Linux user and NSO local user with a shared password is not recommended.
The default aaa_init.xml file provided with the NSO System Install must not be used as-is in a deployment without reviewing and verifying that every NACM rule in the file matches the desired authorization level.
The NSO HA, together with the tailf-hcc package, provides three features:
All CDB data is replicated from the leader/primary to the follower/secondary nodes.
If the leader/primary fails, a follower/secondary takes over and starts to act as leader/primary. This is how HA Raft works and how the rule-based HA variant of this example is configured to handle failover automatically.
At failover, tailf-hcc sets up a virtual alias IP address on the leader/primary node only and uses gratuitous ARP packets to update all nodes in the network with the new mapping to the leader/primary node.
Nodes in other networks can be updated using the tailf-hcc layer-3 BGP functionality or a load balancer. See the NSO example set under examples.ncs/development-guide/high-availability.
See the NSO example set under examples.ncs/development-guide/high-availability/hcc for references to HA Raft and rule-based HA tailf-hcc Layer 3 BGP examples.
The HA Raft and rule-based HA upgrade-l2 examples also demonstrate HA failover, upgrading the NSO version on all nodes, and upgrading NSO packages on all nodes.
Depending on your installation, e.g., the size and speed of the managed devices and the characteristics of your service applications, some default values of NSO may have to be tweaked, particularly some of the timeouts.
Device timeouts. NSO has connect, read, and write timeouts for traffic between NSO and the managed devices. The default values may not be sufficient if some devices/nodes are slow to commit or slow to deliver their full configuration. Adjust the timeouts under /devices/global-settings accordingly, as in the sketch below.
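A sketch in the Cisco-style NSO CLI; the values are examples only and should be tuned to your devices:

```
admin@ncs(config)# devices global-settings connect-timeout 60
admin@ncs(config)# devices global-settings read-timeout 300
admin@ncs(config)# devices global-settings write-timeout 300
admin@ncs(config)# commit
```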
Service code timeouts. Some service applications can sometimes be slow. Adjusting the /services/global-settings/service-callback-timeout configuration might be applicable depending on the applications. However, the best practice is to change the timeout per service from the service code, using the Java ServiceContext.setTimeout function or the Python data_set_timeout function.
There are quite a few different global settings for NSO. The two mentioned above often need to be changed.
The Cisco Smart Licensing CLI command is present only in the Cisco Style CLI, which is the default CLI for this setup.
The NSO system installations performed on the nodes in the HA cluster also install defaults for logrotate. Inspect /etc/logrotate.d/ncs and ensure that the settings are what you want. Note that the NSO error logs, i.e., the files /var/log/ncs/ncserr.log*, are internally rotated by NSO and must not be rotated by logrotate.
For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from examples.ncs/development-guide/high-availability/hcc/README; the examples integrate with rsyslog to log the ncs, developer, upgrade, audit, netconf, snmp, and webui-access logs to syslog with facility set to daemon in ncs.conf.
rsyslogd on the nodes in the HA cluster is configured to write the daemon facility logs to /var/log/daemon.log and to forward the daemon facility logs with severity info or higher to the manager node's /var/log/ha-cluster.log syslog.
NED trace logs are a crucial tool for debugging NSO installations but are not recommended for deployment. These logs are very verbose and intended for debugging only. Do not enable them in production.
Note that the NED trace logs include everything; even potentially sensitive data is logged, and no filtering is done. The NED trace is controlled through the CLI under /devices/global-settings/trace. It is also possible to control the NED trace on a per-device basis under /devices/device[name='x']/trace.
There are three different settings for trace output. For various historical reasons, the setting that makes the most sense depends on the device type.
For all CLI NEDs, use the raw
setting.
For all ConfD and netsim-based NETCONF devices, use the pretty setting. This is because ConfD sends the NETCONF XML unformatted, while pretty
means that the XML is formatted.
For Juniper devices, use the raw
setting. Juniper devices sometimes send broken XML that cannot be formatted appropriately. However, their XML payload is already indented and formatted.
For generic NED devices - depending on the level of trace support in the NED itself, use either pretty
or raw
.
For SNMP-based devices, use the pretty
setting.
Thus, controlling the NED trace only from /devices/global-settings/trace is usually not sufficient; the per-device setting is needed to select the right format for each device type, as sketched below.
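For example, a per-device trace could be enabled from the C-style CLI roughly as follows; the device name ce0 is hypothetical:

```
admin@ncs# config
admin@ncs(config)# devices device ce0 trace raw
admin@ncs(config)# commit
```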
The internal NSO log resides at /var/log/ncs/ncserr.*
. The log is written in a binary format. To view the internal error log, run the following command:
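A sketch of such a command; the exact log file name may differ on your system:

```bash
ncs --printlog /var/log/ncs/ncserr.log.1
```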
All large-scale deployments employ monitoring systems. There are plenty of good tools to choose from, open source and commercial, and all good monitoring tools can script (using various protocols) what should be monitored. It is recommended to set up a dedicated read-only Linux user without shell access, like the oper user described earlier in this chapter. A few commonly used checks, also sketched in the script after this list, include:
At startup, check that NSO has been started using the $NCS_DIR/bin/ncs_cmd -c "wait-start 2"
command.
Use the ssh
command to verify SSH access to the NSO host and NSO CLI.
Check disk usage using, for example, the df
utility.
For example, use curl or the Python requests library to verify that the RESTCONF API is accessible.
Check that the NETCONF API is accessible using, for example, the $NCS_DIR/bin/netconf-console
tool with a hello
message.
Verify the NSO version using, for example, the $NCS_DIR/bin/ncs --version
or RESTCONF /restconf/data/tailf-ncs-monitoring:ncs-state/version
.
Check if HA is enabled using, for example, RESTCONF /restconf/data/tailf-ncs-monitoring:ncs-state/ha
.
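A minimal sketch that combines several of the checks above is shown below; the hostname, RESTCONF port, and credentials are assumptions to adapt to your deployment:

```bash
#!/bin/sh
# Sketch of basic NSO health checks; adjust host, port, and credentials.
NSO_HOST=nso1.example.com
RC_URL=https://$NSO_HOST:443/restconf

# Wait for NSO to start (run locally on the NSO host)
$NCS_DIR/bin/ncs_cmd -c "wait-start 2"

# Verify the NSO version over RESTCONF
curl -ks -u oper:secret -H "Accept: application/yang-data+json" \
  "$RC_URL/data/tailf-ncs-monitoring:ncs-state/version"

# Check whether HA is enabled
curl -ks -u oper:secret -H "Accept: application/yang-data+json" \
  "$RC_URL/data/tailf-ncs-monitoring:ncs-state/ha"

# Verify that NETCONF answers with a hello (defaults to localhost; adjust as needed)
$NCS_DIR/bin/netconf-console --hello

# Check disk usage on the NSO host
df -h /var/opt/ncs /var/log/ncs
```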
RESTCONF can be used to view the NSO alarm table and subscribe to alarm notifications. NSO alarms are not events. Whenever an NSO alarm is created, a RESTCONF notification and an SNMP trap are also sent, assuming that you have a RESTCONF client registered with the alarm stream or have configured a proper SNMP target. Some alarms, like the rule-based HA ha-secondary-down alarm, require the intervention of an operator. Thus, a monitoring tool should also fetch the NSO alarm list or subscribe to the ncs-alarms RESTCONF notification stream.
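For example, the alarm list could be fetched over RESTCONF roughly as follows; the hostname, port, and credentials are assumptions:

```bash
# Fetch the current NSO alarm list over RESTCONF
curl -ks -u oper:secret -H "Accept: application/yang-data+json" \
  https://nso1.example.com:443/restconf/data/tailf-ncs-alarms:alarms/alarm-list
```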
NSO metrics are organized into contexts, each containing different counters, gauges, and rate-of-change gauges. There is a sysadmin, a developer, and a debug context. Note that only the sysadmin context is enabled by default, as it is designed to be lightweight. Consult the YANG module tailf-ncs-metric.yang to learn the details of the different contexts.
You can read counters from, for example, the CLI:
You can read gauges from, for example, the CLI:
You can read rate-of-change gauges from, for example, the CLI:
The presented configuration enables the built-in web server for the WebUI and RESTCONF interfaces. It is paramount for security that you only enable HTTPS access with /ncs-config/webui/match-host-name
and /ncs-config/webui/server-name
properly set.
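A sketch of the corresponding ncs.conf fragment is shown below; the element names follow the /ncs-config/webui paths referenced in this guide, while the server name, port, and certificate paths are placeholders to verify against the ncs.conf(5) man page:

```xml
<webui>
  <enabled>true</enabled>
  <match-host-name>true</match-host-name>
  <server-name>nso1.example.com</server-name>
  <transport>
    <tcp>
      <enabled>false</enabled>
    </tcp>
    <ssl>
      <enabled>true</enabled>
      <ip>0.0.0.0</ip>
      <port>443</port>
      <key-file>${NCS_CONFIG_DIR}/ssl/cert/host.key</key-file>
      <cert-file>${NCS_CONFIG_DIR}/ssl/cert/host.cert</cert-file>
    </ssl>
  </transport>
</webui>
```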
The AAA setup described so far in this deployment document is the recommended AAA setup. To reiterate:
Have all users that need access to NSO authenticated through Linux PAM. This may then be through /etc/passwd
. Avoid storing users in CDB.
Given the default NACM authorization rules, you should have three different types of users on the system.
Users with shell access are members of the ncsadmin
Linux group and are considered fully trusted because they have full access to the system.
Users without shell access who are members of the ncsadmin
Linux group have full access to the network. They have access to the NSO SSH shell and can execute RESTCONF calls, access the NSO CLI, make configuration changes, etc. However, they cannot manipulate backups or perform system upgrades unless such actions are added by NSO applications.
Users without shell access who are members of the ncsoper
Linux group have read-only access. They can access the NSO SSH shell, read data using RESTCONF calls, etc. However, they cannot change the configuration, manipulate backups, or perform system upgrades.
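On the Linux host, such groups and users could be created roughly as follows; the user names are examples only:

```bash
# Groups matching the default NACM rules
sudo groupadd ncsadmin
sudo groupadd ncsoper

# Fully trusted user with shell access and full system access
sudo useradd -m -G ncsadmin jane

# Network administrator without shell access on the host
sudo useradd -m -G ncsadmin -s /usr/sbin/nologin netadmin

# Read-only user without shell access on the host
sudo useradd -m -G ncsoper -s /usr/sbin/nologin oper
```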
The default aaa_init.xml
file must not be used as-is before reviewing and verifying that every NACM rule in the file matches the desired authorization level.
A considerably more complex scenario is when users require shell access to the host but are either untrusted or should not have any access to NSO at all. NSO listens to a so-called IPC socket configured through /ncs-config/ncs-ipc-address
. This socket is typically limited to local connections and defaults to 127.0.0.1:4569
for security. The socket multiplexes several different access methods to NSO.
The main security-related point is that no AAA checks are performed on this socket. If you have access to the socket, you also have complete access to all of NSO.
To drive this point home: when you invoke the ncs_cli command (a small C program that connects to the socket and tells NSO who you are), NSO assumes that authentication has already been performed. There is even a documented flag, --noaaa, which tells NSO to skip all NACM rule checks for this session.
Deploy NSO in a containerized setup using Cisco-provided images.
NSO can be deployed in your environment using a container, such as Docker. Cisco offers two pre-built images for this purpose that you can use to run NSO and build packages (see ).
Migration Information
If you are migrating from an existing NSO System Install to a container-based setup, follow the guidelines given below in .
Running NSO in a container offers several benefits that you would generally expect from a containerized approach, such as ease of use and convenient distribution. More specifically, a containerized NSO approach allows you to:
Run a container image of a specific version of NSO and your packages which can then be distributed as one unit.
Deploy and distribute the same version across your production environment.
Use the Build Image containing the necessary environment for compiling NSO packages.
Cisco provides the following two NSO images based on Red Hat UBI.
| Intended Use | Develop NSO Packages | Build NSO Packages | Run NSO | NSO Install Type |
|---|---|---|---|---|
| Development Host | ✓ | | | None or Local Install |
| Build Image | | ✓ | | System Install |
| Production Image | | | ✓ | System Install |
Use the pre-built image as the base image in the container file (e.g., Dockerfile) and mount your own packages (such as NEDs and service packages) to run a final image for your production environment (see examples below).
The $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-sshkey/README
provides a link to the container-based deployment variant of the example. See the setup_ncip.sh
script and README
in the netsim-sshkey
deployment example for details.
The Build Image is a separate standalone NSO image with the necessary environment and software for building packages. It is also a pre-built Red Hat UBI-based image provided specifically to address the developer needs of building packages.
The container provides the necessary environment to build custom packages. The Build Image adds a few Linux packages that are useful for development, such as Ant, JDK, net-tools, pip, etc. Additional Linux packages can be added using, for example, the dnf
command. The dnf list installed
command lists all the installed packages.
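For example, additional build tools could be added in a container file layered on the Build Image; the image tag below is an assumption to adjust to the version you downloaded:

```dockerfile
# Assumed tag of the NSO Build Image loaded from the signed package
FROM cisco-nso-build:6.1

# Add extra Linux packages needed by your package builds
RUN dnf install -y git make && dnf clean all
```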
To fetch and extract NSO images:
Extract the image and other files from the signed package, for example:
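A sketch of the extraction, assuming the NSO 6.1 Production Image for x86_64; the file name depends on your download:

```bash
# Run the signed self-extracting archive to verify the signature and unpack the files
sh nso-6.1.container-image-prod.linux.x86_64.signed.bin
```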
Signed Archive File Pattern
The signed archive file name has the following pattern:
nso-VERSION.container-image-PROD_BUILD.linux.ARCH.signed.bin
, where:
VERSION
denotes the image's NSO version.
PROD_BUILD
denotes the type of the container (i.e., prod
for Production, and build
for Build).
ARCH
is the CPU architecture.
To run the images, make sure that your system meets the following requirements:
A system running Linux x86_64
or ARM64
, or macOS x86_64
or Apple Silicon. Linux is required for production.
A container platform, such as Docker. Docker is the recommended platform and is used as an example in this guide for running NSO images. You may use another container runtime of your choice; note, however, that the commands in this guide are Docker-specific, so adapt them to your runtime as needed.
Docker on Mac uses a Linux VM to run the Docker engine, which is compatible with the normal Docker images built for Linux. You do not need to recompile your NSO-in-Docker images when moving between a Linux machine and Docker on Mac as they both essentially run Docker on Linux.
This section covers the necessary administrative information about the NSO Production Image.
If you have NSO installed for production use using System Install, you can migrate to the Containerized NSO setup by following the instructions in this section. Migrating your Network Services Orchestrator (NSO) to a containerized setup can provide numerous benefits, including improved scalability, easier version management, and enhanced isolation of services.
The migration process is designed to ensure a smooth transition from a System-Installed NSO to a container-based deployment. Detailed steps guide you through preparing your existing environment, exporting the necessary configurations and state data, and importing them into your new containerized NSO instance. During the migration, consider the container runtime you plan to use, as this impacts the migration process.
Before You Start
We recommend reading through this guide first to better understand the expectations, requirements, and functional aspects of a containerized deployment.
Determine and install the container orchestration tool you plan to use (e.g., Docker, etc.).
Ensure that your current NSO installation is fully operational and backed up and that you have a clear rollback strategy in case any issues arise. Pay special attention to customizations and integrations that your current NSO setup might have, and verify their compatibility with the containerized version of NSO.
Have a contingency plan in place for quick recovery in case any issues are encountered during migration.
Migration Steps
Prepare:
Document your current NSO environment's specifics, including custom configurations and packages.
Perform a complete backup of your existing NSO instance, including configurations, packages, and data.
Migrate:
Stop the current NSO instance.
Save the run directory from the NSO instance in an appropriate place.
Use the same ncs.conf
and High Availability (HA) setup previously used with your System Install. We assume that the ncs.conf
follows the best practice and uses the NCS_DIR
, NCS_RUN_DIR
, NCS_CONFIG_DIR
, and NCS_LOG_DIR
variables for all paths. The ncs.conf
can be added to a volume and mounted to /nso/etc
in the container.
Add the run directory as a volume, mounted to /nso/run
in the container and copy the CDB data, packages, etc., from the previous System Install instance.
Create a volume for the log directory.
Start the container. Example:
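A sketch of such a start command, assuming the image tag cisco-nso-prod:6.1 and named volumes for the configuration, run, and log directories; the log mount point is an assumption to verify against your image defaults:

```bash
# nso-etc holds ncs.conf, nso-run holds CDB and packages, nso-log holds the logs
docker run -d --name cisco-nso \
  -v nso-etc:/nso/etc \
  -v nso-run:/nso/run \
  -v nso-log:/nso/log \
  cisco-nso-prod:6.1
```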
Finalize:
Ensure that the containerized NSO instance functions as expected and validate system operations.
Plan and execute your cutover transition from the System-Installed NSO to the containerized version with minimal disruption.
Monitor the new setup thoroughly to ensure stability and performance.
ncs.conf File Configuration and Preference
The run-nso.sh script runs a check at startup to determine which ncs.conf file to use. The order of preference is as follows:
The ncs.conf
file specified in the Dockerfile (i.e., ENV $NCS_CONFIG_DIR /etc/ncs/
) is used as the first preference.
The second preference is to use the ncs.conf
file mounted in the /nso/etc/
run directory.
If no ncs.conf
file is found at either /etc/ncs
or /nso/etc
, the default ncs.conf
file provided with the NSO image in /defaults
is used.
If the ncs.conf
file is edited after startup, it can be reloaded using MAAPI reload_config()
. Example: $ ncs_cmd -c "reload"
.
If you need to perform operations before or after the ncs
process is started in the Production container, you can use Python and/or Bash scripts to achieve this. Add the scripts to the $NCS_CONFIG_DIR/pre-ncs-start.d/
and $NCS_CONFIG_DIR/post-ncs-start.d/
directories to have the run-nso.sh
script run them.
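For example, a hypothetical post-start hook could look like this; the script name and the XML file it loads are assumptions:

```bash
#!/bin/sh
# Example contents of $NCS_CONFIG_DIR/post-ncs-start.d/20-load-extra-config.sh:
# merge an XML file into CDB once the ncs process is up.
ncs_load -l -m /nso/etc/extra-config.xml
```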
An admin user can be created on startup by the run script in the container. Three environment variables control the addition of an admin user:
ADMIN_USERNAME
: Username of the admin user to add, default is admin
.
ADMIN_PASSWORD
: Password of the admin user to add.
ADMIN_SSHKEY
: Private SSH key of the admin user to add.
As ADMIN_USERNAME already has a default value, only ADMIN_PASSWORD or ADMIN_SSHKEY needs to be set in order to create an admin user.
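For example, a container with a bootstrap admin user could be started roughly like this; the image tag and password are placeholders:

```bash
docker run -d --name cisco-nso \
  -e ADMIN_USERNAME=admin \
  -e ADMIN_PASSWORD=admin \
  cisco-nso-prod:6.1
```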
This can be useful when starting up a container in CI for testing or development purposes. It is typically not required in a production environment where there is a permanent CDB that already contains the required user accounts.
When using a permanent volume for CDB, etc., and restarting the NSO container multiple times with a different ADMIN_USERNAME or ADMIN_PASSWORD, note that the start script uses the ADMIN_USERNAME and ADMIN_PASSWORD environment variables to generate an XML file in the CDB directory, which NSO reads at startup. If a persisted CDB configuration file already exists in the CDB directory when NSO restarts, NSO loads only the persisted configuration and no XML files at startup; the generated add_admin_user.xml in the CDB directory then needs to be loaded explicitly by the application, using, for example, the ncs_load command.
The default ncs.conf
file performs authentication using only the Linux PAM, with local authentication disabled. For the ADMIN_USERNAME
, ADMIN_PASSWORD
, and ADMIN_SSHKEY
variables to take effect, NSO's local authentication, in /ncs-config/aaa/local-authentication
, needs to be enabled. Alternatively, you can create a local Linux admin user that is authenticated by NSO using Linux PAM.
The default ncs.conf
NSO configuration file does not enable any northbound interfaces, and no ports are exposed externally to the container. Ports can be exposed externally of the container when starting the container with the northbound interfaces and their ports correspondingly enabled in ncs.conf
.
The backup behavior of running NSO in vs. outside the container is largely the same, except that when running NSO in a container, the SSH and SSL certificates are not included in the backup produced by the ncs-backup
script. This is different from running NSO outside a container where the default configuration path /etc/ncs
is used to store the SSH and SSL certificates, i.e., /etc/ncs/ssh
for SSH and /etc/ncs/ssl
for SSL.
Take a Backup
Let's assume we start a production image container using:
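A sketch of such a command, assuming the image tag cisco-nso-prod:6.1 and a named volume NSO-vol persisting /nso:

```bash
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.1
```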
To take a backup:
Run the ncs-backup
command. The backup file is written to /nso/run/backups
.
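With the container above, the backup could then be taken from the host roughly as follows:

```bash
# Run the backup tool inside the running NSO container
docker exec -it cisco-nso ncs-backup
```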
Restore a Backup
To restore a backup, NSO must not be running. This poses a slight challenge, as you likely only have access to the ncs-backup tool, the volume containing CDB, and other run-time data from inside the NSO container. Additionally, shutting down NSO terminates the NSO container.
To restore a backup:
Shut down the NSO container:
Run the ncs-backup --restore command from a new container: start a new container with the same persistent shared volumes mounted, but with a different command. Instead of running /run-nso.sh, which is the normal command of the NSO container, run the restore command.
Restoring an NSO backup moves the current run directory (/nso/run) to /nso/run.old and restores the run directory from the backup to /nso/run. After this is done, start the regular NSO container again as usual.
The NSO image /run-nso.sh
script looks for an SSH host key named ssh_host_ed25519_key
in the /nso/etc/ssh
directory to be used by the NSO built-in SSH server for the CLI and NETCONF interfaces.
If an SSH host key exists, which is for a typical production setup stored in a persistent shared volume, it remains the same after restarts or upgrades of NSO. If no SSH host key exists, the script generates a private and public key.
In a high-availability (HA) setup, the host key is typically shared by all NSO nodes in the HA group and stored in a persistent shared volume. That is, each NSO node does not generate its own host key, which avoids having to fetch a new public host key after each failover in order to access the new primary's NSO CLI and NETCONF interfaces.
NSO expects to find a TLS certificate and key at /nso/ssl/cert/host.cert
and /nso/ssl/cert/host.key
respectively. Since the /nso
path is usually on persistent shared volume for production setups, the certificate remains the same across restarts or upgrades.
If no certificate is present, a self-signed certificate valid for 30 days is generated, which makes it usable in development and staging environments. It is not meant for production. Replace it with a properly signed certificate for production, and preferably also for test and staging environments. Simply generate one and place it at the provided path; the temporary self-signed certificate is generated with an openssl command similar to the following:
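A sketch, assuming the certificate and key paths used by the image; adjust the subject to your needs:

```bash
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 30 -nodes \
  -subj "/CN=localhost" \
  -keyout /nso/ssl/cert/host.key \
  -out /nso/ssl/cert/host.cert
```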
The database in NSO, called CDB, uses YANG models as the schema for the database. It is only possible to store data in CDB according to the YANG models that define the schema.
If the YANG models are changed, particularly if nodes are removed or renamed (a rename is the removal of one leaf and the addition of another), any data in CDB for those nodes is also removed. NSO normally warns about this when you attempt to load new packages; for example, the request packages reload command refuses to reload the packages if nodes in the YANG model have disappeared. You would then have to add the force argument, e.g., request packages reload force.
The base Production Image comes with a basic container health check. It uses ncs_cmd
to get the state that NCS is currently in. Only the result status is observed to check if ncs_cmd
was able to communicate with the ncs
process. The result indicates if the ncs
process is responding to IPC requests.
The default --health-start-period duration of the health check is 60 seconds, so the container is reported as unhealthy if NSO takes more than 60 seconds to start up. To resolve this, set --health-start-period to a higher value, such as 600 seconds, or however long you expect NSO to take to start up.
To disable the health check, use the --no-healthcheck option of docker run.
By default, the Linux kernel allows overcommit of memory. However, memory overcommit produces an unexpected and unreliable environment for NSO since the Linux Out Of Memory Killer, or OOM-killer, may terminate NSO without restarting it if the system is critically low on memory.
Also, when the OOM-killer terminates NSO, NSO will not produce a system dump file, and the debug information will be lost. Thus, it is strongly recommended to disable memory overcommit on Linux hosts running NSO production containers, with an overcommit ratio of at most 100%.
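On the container host, memory overcommit could, for example, be disabled with the standard Linux sysctl settings below; size the ratio according to your RAM and swap:

```bash
# Disable heuristic memory overcommit on the NSO container host
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=100

# Persist across reboots
printf 'vm.overcommit_memory=2\nvm.overcommit_ratio=100\n' | \
  sudo tee /etc/sysctl.d/99-nso-overcommit.conf
```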
By default, NSO writes a system dump to the NSO run-time directory, by default NCS_RUN_DIR=/nso/run. If NCS_RUN_DIR is not mounted on the host, or to give the NSO system dump file a unique name, set the NCS_DUMP variable, for example NCS_DUMP="/path/to/mounted/dir/ncs_crash.dump.<my-timestamp>".
The docker run options --memory="[ram]" and --memory-swap="[ram+swap]" can be used to limit the Docker container memory usage. The default setting is max, i.e., all of the host memory is used. If the container reaches the memory limit set by the --memory option, the default Docker behavior is to terminate the container; no NSO system dump will be generated, and the debug information will be lost.
The /run-nso.sh script that starts NSO is executed as an ENTRYPOINT instruction, and the CMD instruction can be used to provide arguments to the entrypoint script. Another alternative is to use the EXTRA_ARGS variable to provide arguments. The /run-nso.sh script checks the EXTRA_ARGS variable before the CMD instruction.
An example using docker run
with the CMD
instruction:
With the EXTRA_ARGS
variable:
An example using a Docker Compose file, compose.yaml
, with the CMD
instruction:
With the EXTRA_ARGS
variable:
This section provides examples to exhibit the use of NSO images.
This example shows how to run the standalone NSO Production Image using the Docker CLI.
The instructions and CLI examples used in this example are Docker-specific. If you are using a non-Docker container runtime, you will need to: fetch the NSO image from the Cisco software download site, then load and run the image with packages and networking, and finally log in to NSO CLI to run commands.
Steps
Follow the steps below to run the Production Image using Docker CLI:
Start your container engine.
Next, load the image and run it. Navigate to the directory where you extracted the base image and load it. This will restore the image and its tag:
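A sketch of these steps, assuming the NSO 6.1 Production Image for x86_64; the archive and image names may differ for your download:

```bash
# Load the image that was extracted from the signed package
docker load -i nso-6.1.container-image-prod.linux.x86_64.tar.gz

# Run it with a named volume for /nso to persist CDB, packages, and logs
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.1
```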
Overriding Environment Variables
Overriding basic environment variables (NCS_CONFIG_DIR
, NCS_LOG_DIR
, NCS_RUN_DIR
, etc.) is not supported and therefore should be avoided. Using, for example, the NCS_CONFIG_DIR
environment variable to mount a configuration directory will result in an error. Instead, to mount your configuration directory, do it appropriately in the correct place, which is under /nso/etc
.
Loading the Packages
Loading the packages by mounting the default load path /nso/run
as a volume is preferred. You can also load the packages by copying them manually into the /nso/run/packages
directory in the container. During development, a bind mount of the package directory on the host machine makes it easy to update packages in NSO by simply changing the packages on the host.
The default load path is configured in the ncs.conf
file as $NCS_RUN_DIR/packages
, where $NCS_RUN_DIR
expands to /nso/run
in the container. To find the load path, check the ncs.conf
file in the /etc/ncs/
directory.
Logging
With the Production Image, use a shared volume to persist data across restarts. If remote (Syslog) logging is used, there is little need to persist logs. If local logging is used, then persistent logging is recommended.
By default, NSO starts a cron job to handle log rotation of the NSO logs, i.e., the CRON_ENABLE and LOGROTATE_ENABLE variables are set to true and the /etc/logrotate.conf configuration is used. See the /etc/ncs/post-ncs-start.d/10-cron-logrotate.sh script. To set how often the cron job runs, use the crontab file.
Finally, log in to NSO CLI to run commands. Open an interactive shell on the running container and access the NSO CLI.
You can also use the docker exec -it cisco-nso ncs_cli -u admin
command to access the CLI from the host's terminal.
This example describes how to upgrade your NSO to run a newer NSO version in the container. The overall upgrade process is outlined in the steps below. In the example below, NSO is to be upgraded from version 6.0 to 6.1.
To upgrade your NSO version:
Start a container with the docker run
command. In the example below, it mounts the /nso
directory in the container to the NSO-vol
named volume to persist the data. Another option is using a bind mount of the directory on the host machine. At this point, the /cdb
directory is empty.
Perform a backup, either by running the docker exec
command (make sure that the backup is placed somewhere we have mounted) or by creating a tarball of /data/nso
on the host machine.
Stop NSO by issuing the following command, or by stopping the container itself, which runs the ncs stop command automatically.
Remove the old NSO.
Start a new container and mount the /nso
directory in the container to the NSO-vol
named volume. This time the /cdb
folder is not empty, so instead of starting a fresh NSO, an upgrade will be performed.
At this point, you only have one container that is running the desired version 6.1 and you do not need to uninstall the old NSO.
This example demonstrates how to use the NSO images to compile packages and run NSO. Using Docker Compose is not a requirement, but it is a simple tool for defining and running a multi-container setup where you want to run both the Production and Build Images efficiently.
The packages used in this example are taken from the examples.ncs/development-guide/nano-services/netsim-sshkey
example:
distkey
: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service.
ne
: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application that adds or removes the configured public key, which the netsim (ConfD) network element checks when authenticating public key authentication clients.
docker-compose.yaml - Docker Compose File Example
A basic Docker Compose file is shown in the example below. It describes the containers running on a machine:
The Production container runs NSO.
The Build container builds the NSO packages.
A third example
container runs the netsim device.
Note that the packages use a shared volume in this simple example setup. In a more complex production environment, you may want to consider a dedicated redundant volume for your packages.
Follow the steps below to run the images using Docker Compose:
Start the Build container. This starts the services in the Compose file with the profile build
.
Copy the packages from the netsim-sshkey
example and compile them in the NSO Build container. The easiest way to do this is by using the docker exec
command, which gives more control over what to build and the order of it. You can also do this with a script to make it easier and less verbose. Normally you populate the package directory from the host. Here, we use the packages from an example.
Start the netsim container. This outputs the generated init.xml
and ncs.conf
files to the NSO Production container. The --wait
flag instructs to wait until the health check returns healthy.
Start the NSO Production container.
At this point, NSO is ready to run the service example to configure the netsim device(s). A bash script (demo.sh
) that runs the above steps and showcases the netsim-sshkey
example is given below:
This example describes how to upgrade NSO when using Docker Compose.
To upgrade to a new minor or major version, for example, from 6.0 to 6.1, follow the steps below:
Change the image version in the Compose file to the new version, i.e., 6.1.
Run the docker compose up --profile build -d
command to start up the Build container with the new image.
Compile the packages using the Build container.
Run the docker compose up --profile prod --wait
command to start the Production container with the new packages that were just compiled.
To upgrade to a new maintenance release version, for example, to 6.1.1, follow the steps below:
Change the image version in the Compose file to the new version, i.e., 6.1.1.
Run the docker compose up --profile prod --wait
command.
Upgrading in this way does not require a recompile. Docker detects changes and upgrades the image in the container to the new version.
Additional preparation steps may be required based on the upgrade and the actual setup, such as when using the Layered Service Architecture (LSA) feature. In particular, for a major NSO upgrade in a multi-version LSA cluster, ensure that the new version supports the other cluster members and follow the additional steps outlined in in Layered Service Architecture.
Finally, regardless of the upgrade type, ensure that you have a working backup and can easily restore the previous configuration if needed, as described in .
If you do not wish to automate the upgrade process, you will need to follow the instructions from and transfer the required files to each host manually. Additional information on HA is available in . However, you can run the high-availability
actions from the preceding script on the NSO CLI as-is. In this case, please take special care of which host you perform each command, as it can be easy to mix them up.
If the only change in the packages is the addition of new NED packages, the and-add action can replace the and-reload action for an even more optimized and less intrusive update. See for details.
In some cases, NSO may warn when the upgrade looks suspicious. For more information on this, see . If you understand the implications and are willing to risk losing data, use the force
option with packages reload
or set the NCS_RELOAD_PACKAGES
environment variable to force
when restarting NSO. It will force NSO to ignore warnings and proceed with the upgrade. In general, this is not recommended.
Migration uses the /ncs:devices/device/migrate
action to change the ned-id of a single device or a group of devices. It does not affect the actual network device, except possibly reading from it. So, the migration does not have to be performed as part of the package upgrade procedure described above but can be done later, during normal operations. The details are described in . Once the migration is complete, you can remove the old NED by performing another package upgrade, where you deinstall the old NED package. It can be done straight after the migration or as part of the next upgrade cycle.
HA is usually not optional for a deployment. Data resides in CDB, a RAM database with a disk-based journal for persistence. Both HA variants can be set up to avoid the need for manual intervention in a failure scenario, where HA Raft does the best job of keeping the cluster up. See for details.
An NSO system installation on the NSO nodes is recommended for deployments. For System Installation details, see the steps.
While this deployment example uses containers, it is intended as a generic deployment guide. For details on running NSO in a container, such as Docker, see .
To fulfill the tailf-hcc
server dependencies, the iproute2
utilities and sudo
packages are installed. See (in the section ) for details on dependencies.
The NSO IPC socket is configured in ncs.conf
to only listen to localhost 127.0.0.1 connections, which is the default setting.
By default, the clients connecting to the NSO IPC socket are considered trusted, i.e., no authentication is required, and the 127.0.0.1 IP address is used for /ncs-config/ncs-ipc-address in ncs.conf to prevent remote access. See and in Manual Pages for more details.
/ncs-config/aaa/pam
is set to enable PAM to authenticate users as recommended. All remote access to NSO must now be done using the NSO host's privileges. See in Manual Pages for details.
Depending on your Linux distribution, you may have to change the /ncs-config/aaa/pam/service
setting. The default value is common-auth
. Check the file /etc/pam.d/common-auth
and make sure it fits your needs. See in Manual Pages for details.
Alternatively, or as a complement to the PAM authentication, users can be stored in the NSO CDB database or authenticated externally. See for details.
RESTCONF token authentication under /ncs-config/aaa/external-validation
is enabled using a token_auth.sh
script that was added earlier together with a generate_token.sh
script. See in Manual Pages for details.
The scripts allow users to generate a token for RESTCONF authentication through, for example, the NSO CLI and NETCONF interfaces that use SSH authentication or the Web interface.
The NSO web server HTTPS interface should be enabled under /ncs-config/webui
, along with /ncs-config/webui/match-host-name = true
and /ncs-config/webui/server-name
set to the hostname of the node, following security best practice. See in Manual Pages for details.
Thus, if this is a production environment and the JSON-RPC and RESTCONF interfaces using the web server are not used solely for internal purposes, the self-signed certificate must be replaced with a properly signed certificate. See in Manual Pages under /ncs-config/webui/transport/ssl/cert-file
and /ncs-config/restconf/transport/ssl/certFile
for more details.
The NSO SSH CLI login is enabled under /ncs-config/cli/ssh/enabled
. See in Manual Pages for details.
The NSO CLI style is set to C-style, and the CLI prompt is modified to include the hostname under /ncs-config/cli/prompt
. See in Manual Pages for details.
NSO HA Raft is enabled under /ncs-config/ha-raft
, and the rule-based HA under /ncs-config/ha
. See in Manual Pages for details.
Depending on your provisioned applications, you may want to turn /ncs-config/rollback/enabled
off. Rollbacks do not work well with nano service reactive FASTMAP applications or if maximum transaction performance is a goal. If your application performs classical NSO provisioning, the recommendation is to enable rollbacks. Otherwise not. See in Manual Pages for details.
The NSO NACM functionality is based on the IETF RFC 8341 with NSO extensions augmented by tailf-acm.yang
. See , for more details.
This example sets up one HA cluster using HA Raft or rule-based HA with the tailf-hcc
server to manage virtual IP addresses. See and for details.
NSO uses Cisco Smart Licensing, which is described in detail in . After registering your NSO instance(s) and receiving a token (following steps 1-6 as described in the section of Cisco Smart Licensing), enter the token from your Cisco Smart Software Manager account on each host. Use the same token for all instances, and script entering the token as part of the initial NSO configuration or from the management node:
Use the audit-network-log for recording southbound traffic towards devices. Enable by setting /ncs-config/logs/audit-network-log/enabled
and /ncs-config/logs/audit-network-log/file/enabled
to true in $NCS_CONFIG_DIR/ncs.conf
. See in Manual Pages for more information.
While there is a global log for, for example, compilation errors in /var/log/ncs/ncs-python-vm.log
, logs from user application packages are written to separate files for each package, and the log file naming is ncs-python-vm-
pkg_name
.log
. The level of logging from Python code is controlled on a per package basis. See for more details.
User application Java logs are written to /var/log/ncs/ncs-java-vm.log
. The level of logging from Java code is controlled per Java package. See in Java VM for more details.
If you have more fine-grained authorization requirements than read-write and read-only, additional Linux groups can be created, and the NACM rules can be updated accordingly. See from earlier in this chapter on how the reference example implements users, groups, and NACM rules to achieve the above.
For a detailed discussion of the configuration of authorization rules through NACM, see , particularly the section .
You must protect the socket to prevent untrusted Linux shell users from accessing the NSO instance using this method. This is done by using a file in the Linux file system. The file /etc/ncs/ipc_access
gets created and populated with random data at install time. Enable /ncs-config/ncs-ipc-access-check/enabled
in ncs.conf
and ensure that trusted users can read the /etc/ncs/ipc_access
file, for example, by changing group access to the file. See in Manual Pages for details.
For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. The raft-upgrade-l2
project, referenced from the NSO example (set under examples.ncs/development-guide/high-availability/hcc
) together with this Deployment Example section, describes a reference implementation. See for more HA Raft details.
The Red Hat UBI is an OCI-compliant image that is freely distributable and independent of platform and technical dependencies. You can read more about Red Hat UBI , and about Open Container Initiative (OCI) .
The Production Image is a production-ready NSO image for system-wide deployment and use. It is a pre-built Red Hat UBI-based NSO image created on and available from the site.
Consult the documentation for information on installing NSO on a Docker host, building NSO packages, etc.
See for an example that uses the container to deploy an SSH-key-provisioning nano service.
The image is available as a signed package (e.g., nso-VERSION.container-image-build.linux.ARCH.signed.bin) from the Cisco site. You can run the Build Image in different ways; Docker Compose is a simple tool for defining and running multi-container Docker applications (see the examples below).
On Cisco's official site, search for "Network Services Orchestrator". Select the relevant NSO version in the drop-down list, e.g., "Crosswork Network Services Orchestrator 6", and click "Network Services Orchestrator Software". Locate the binary, which is delivered as a signed package (e.g., nso-6.1.container-image-prod.linux.x86_64.signed.bin
).
Verify the compatibility of your current system configurations with the containerized NSO setup. See for more information.
Set up the container environment and download/extract the NSO production image. See for details.
See in System Install for information on memory overcommit recommendations for a Linux system hosting NSO production containers.
If you intend to run multiple images (i.e., both Production and Build), Docker Compose is a tool that simplifies defining and running multi-container Docker applications. See the example () below for detailed instructions.
Start a container from the image. Supply additional arguments to mount the packages and ncs.conf
as separate volumes (), and publish ports for networking () as needed. The container starts NSO using the /run-nso.sh
script. To understand how the ncs.conf
file is used, see .