Extend product functionality by adding custom service code or exposing data through the data provider mechanism.
Run your Python code using the Python Virtual Machine (VM).
NSO is capable of starting one or several Python VMs where Python code in user-provided packages can run.
An NSO package containing a python directory is considered a Python package. By default, a Python VM will be started for each Python package that has a python-class-name defined in its package-meta-data.xml file. In this Python VM, the PYTHONPATH environment variable will point to the python directory in the package.
If any required package listed in the package-meta-data.xml contains a python directory, the path to that directory will be added to the PYTHONPATH of the started Python VM and thus its accompanying Python code will be accessible.
Several Python packages can be started in the same Python VM if their corresponding package-meta-data.xml files contain the same python-package/vm-name.
A Python package skeleton can be created by making use of the ncs-make-package command:
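For example, a Python service package skeleton can be generated like this (the package name l3vpn is only illustrative):

    ncs-make-package --service-skeleton python l3vpn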
The tailf-ncs-python-vm.yang module defines the python-vm container which, along with ncs.conf, is the entry point for controlling the NSO Python VM functionality. Study the content of the YANG model in the example below (The Python VM YANG Model). For a full explanation of all the configuration data, look at the YANG file and man ncs.conf. A description of the most important configuration parameters follows.
Note that some of the nodes beneath python-vm are by default invisible due to a hidden attribute. To make everything under python-vm visible in the CLI, two steps are required:
First, the following XML snippet must be added to ncs.conf:
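A minimal sketch of such a snippet, assuming the hidden python-vm nodes belong to the debug hide-group (verify against the ncs.conf(5) man page):

    <hide-group>
      <name>debug</name>
    </hide-group>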
Next, the unhide command may be used in the CLI session:
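For example, assuming the debug hide-group above:

    admin@ncs# unhide debug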
The sanity-checks/self-assign-warning setting controls the self-assignment warnings for Python services, with the modes off, log, and alarm (default). An example of a self-assignment:
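A minimal sketch of the kind of code that triggers the warning (class and attribute names are only illustrative):

    import ncs

    class ServiceCallbacks(ncs.application.Service):
        @ncs.application.Service.create
        def cb_create(self, tctx, root, service, proplist):
            # The callback object is shared between service invocations,
            # so assigning to self keeps state across invocations.
            self.counter = 1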
As several service invocations may run in parallel, self-assignment will likely cause difficult-to-debug issues. An alarm or a log entry will contain a warning and a keypath to the service instance that caused the warning. Example log entry:
With logging/level, the amount of logged information can be controlled. This is a global setting applied to all started Python VMs unless explicitly set for a particular VM, see Debugging of Python packages. The levels correspond to the pre-defined Python levels in the Python logging module, ranging from level-critical to level-debug.
Refer to the official Python documentation for the logging module for more information about the log levels.
The logging/log-file-prefix defines the prefix part of the log file path used for the Python VMs. This prefix will be appended with a Python VM-specific suffix which is based on the Python package name or the python-package/vm-name from the package-meta-data.xml file. The default prefix is logs/ncs-python-vm, so e.g. if a Python package named l3vpn is started, a log file with the name logs/ncs-python-vm-l3vpn.log will be created.
The status/start and status/current subtrees contain operational data. The status/start command will show information about which Python classes, as declared in the package-meta-data.xml file, were started and whether the outcome was successful or not. The status/current command will show which Python classes are currently running in a separate thread. The latter assumes that the user-provided code cooperates by informing NSO about any thread(s) started by the user code, see Structure of the User-provided Code.
The start and stop actions make it possible to start and stop a particular Python VM.
The package-meta-data.xml file must contain a component of type application with a python-class-name specified, as shown in the example below.
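A sketch of such a component entry (the names follow the l3vpn example used throughout this section):

    <component>
      <name>L3VPN Service</name>
      <application>
        <python-class-name>l3vpn.service.Service</python-class-name>
      </application>
    </component>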
The component name (L3VPN Service in the example) is a human-readable name of this application component. It will be shown when doing show python-vm in the CLI. The python-class-name should specify the Python class that implements the application entry point. Note that it needs to be specified using Python's dot notation and should be fully qualified (given the fact that PYTHONPATH is pointing to the package python directory).
Study the excerpt of the directory listing from a package named l3vpn below.
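An illustrative layout, showing only the Python-related parts:

    l3vpn/
      package-meta-data.xml
      python/
        l3vpn/
          __init__.py
          service.py
          upgrade.py
          _namespaces/
            __init__.py
            l3vpn_ns.py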
Look closely at the python directory above. Note that directly under this directory is another directory named after the package (l3vpn) that contains the user code. This is an important structural choice that eliminates the risk of code clashes between dependent packages (provided, of course, that all dependent packages use this pattern).
As you can see, service.py is located according to the description above. There is also an __init__.py (which is empty) to make the l3vpn directory be considered a module from Python's perspective.
Note the _namespaces/l3vpn_ns.py file. It is generated from the l3vpn.yang model using the ncsc --emit-python command and contains constants representing the namespace and the various components of the YANG model, which the user code can import and make use of.
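A hedged example of how such a namespace file can be generated from the compiled model (file names are illustrative; see ncsc(1) for details):

    ncsc --emit-python l3vpn_ns.py l3vpn.fxs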
The service.py file should include a class definition named Service which acts as the component's entry point. See The Application Component for details.
Notice that there is also a file named upgrade.py present, which holds the implementation of the upgrade component specified in the package-meta-data.xml excerpt above. See The Upgrade Component for details regarding upgrade components.
The application Component
The Python class specified in the package-meta-data.xml file will be started in a Python thread which we call a component thread. This Python class should inherit ncs.application.Application and should implement the methods setup() and teardown().
NSO supports two different modes for executing the implementations of the registered callpoints: threading and multiprocessing.
The default threading mode will use a single thread pool for executing the callbacks for all callpoints.
The multiprocessing mode will start a subprocess for each callpoint. Depending on the user code, this can greatly improve the performance on systems with a lot of parallel requests, as a separate worker process will be created for each service, nano service, and action.
The behavior is controlled by three factors:
The callpoint-model setting in the package-meta-data.xml file.
The number of registered callpoints in the Application.
Operating system support for killing child processes when the parent exits.
If the callpoint-model is set to multiprocessing, more than one callpoint is registered in the Application, and the operating system supports killing child processes when the parent exits, NSO will enable multiprocessing mode.
The Service class will be instantiated by NSO when started or whenever packages are reloaded. Custom initialization, such as registering service and action callbacks, should be done in the setup() method. If any cleanup is needed when NSO finishes or when packages are reloaded, it should be placed in the teardown() method.
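A minimal sketch of such an application class, assuming a servicepoint named l3vpn-servicepoint and a callback class ServiceCallbacks (both names are illustrative):

    import ncs

    class ServiceCallbacks(ncs.application.Service):
        @ncs.application.Service.create
        def cb_create(self, tctx, root, service, proplist):
            # Service mapping code goes here; avoid assigning to self
            # (see the self-assign-warning discussion above).
            self.log.info('Service create: ' + str(service._path))

    class Service(ncs.application.Application):
        def setup(self):
            # Register service (and action) callbacks here.
            self.log.info('Service app setup')
            self.register_service('l3vpn-servicepoint', ServiceCallbacks)

        def teardown(self):
            self.log.info('Service app teardown')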
The existing log functions are named after the standard Python log levels; thus, in the example above, the self.log object contains the functions debug, info, warning, error, and critical. Where to log, and with what level, can be controlled from NSO.
The upgrade Component
The Python class specified in the upgrade section of package-meta-data.xml will be run by NSO in a separately started Python VM. The class must be instantiable using the empty constructor and it must have a method called upgrade, as in the example below. It should inherit ncs.upgrade.Upgrade.
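A minimal sketch of such an upgrade class (the argument names are illustrative; check the ncs.upgrade API documentation for the exact signature):

    import ncs

    class Upgrade(ncs.upgrade.Upgrade):
        def upgrade(self, cdbsock, trans):
            # cdbsock: socket connected to CDB, for reading data stored
            #          according to the old schema.
            # trans: MAAPI transaction for writing data according to the
            #        new schema.
            return True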
Python code packages do not run with an attached console, and the standard output from the Python VMs is collected and put into the common log file ncs-python-vm.log. Possible Python compilation errors will also end up in this file.
Normally, the logging objects provided by the Python APIs are used. They are based on the standard Python logging module. This gives the possibility to control the logging if needed, e.g., getting a module-local logger to increase logging granularity.
The default logging level is set to info. For debugging purposes, it is very useful to increase the logging level:
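For example, the global level can be raised to debug from the NSO CLI (a sketch based on the python-vm model described above):

    admin@ncs(config)# python-vm logging level level-debug
    admin@ncs(config)# commit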
This sets the global logging level and will affect all started Python VMs. It is also possible to set the logging level for a single package (or multiple packages running in the same VM), which will take precedence over the global setting:
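A hedged example for a single package, assuming the per-VM level is configured under python-vm logging vm-levels (verify the exact path in tailf-ncs-python-vm.yang):

    admin@ncs(config)# python-vm logging vm-levels l3vpn level level-debug
    admin@ncs(config)# commit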
The debugging output is printed to separate files for each package, and the log file naming is ncs-python-vm-pkg_name.log.
Log file output example for package l3vpn:
There are occasions when the standard Python installation is incompatible with, or not preferred for use together with, NSO. In such cases, there are several options for telling NSO to use another Python installation when starting a Python VM.
By default, NSO will use the file $NCS_DIR/bin/ncs-start-python-vm when starting a new Python VM. The last few lines in that file read:
As seen above, NSO first looks for python3 and, if found, uses it to start the VM. If python3 is not found, NSO will try to use the command python instead. Here we describe a couple of options for deciding which Python NSO should start.
NSO can be configured to use a custom start command for starting a Python VM. This can be done by first copying the file $NCS_DIR/bin/ncs-start-python-vm to a new file and then changing the last lines of that file to start the desired version of Python. After that, edit ncs.conf and configure the new file as the start command for a new Python VM. When the file ncs.conf has been changed, reload its content by executing the command ncs --reload.
Example:
Add the following snippet to ncs.conf:
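A sketch of such a snippet, assuming the copied start script is installed as /opt/ncs/my-start-python-vm (the path is illustrative):

    <python-vm>
      <start-command>/opt/ncs/my-start-python-vm</start-command>
    </python-vm>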
The new start-command will take effect upon the next restart or configuration reload.
python3 or python
Another way of telling NSO to start a specific Python executable is to configure the environment so that executing python3 or python starts the desired Python. This may be done system-wide or can be made specific to the user running NSO.
Changing the last line of $NCS_DIR/bin/ncs-start-python-vm is of course an option, but altering any of the installation files of NSO is discouraged.
Using the multiprocessing library from Python components where the callpoint-model is set to threading can cause unexpected disconnects from NSO if errors occur in the code executed by the multiprocessing library.
As a workaround, either use multiprocessing as the callpoint-model or force the start method to be spawn by executing:
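For example, early in the component code, before any multiprocessing objects are created:

    import multiprocessing

    # Use 'spawn' instead of the platform default (e.g. 'fork' on Linux).
    multiprocessing.set_start_method('spawn')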
Start user-provided Erlang applications.
NSO is capable of starting user-provided Erlang applications embedded in the same Erlang VM as NSO.
The Erlang code is packaged into applications which are automatically started and stopped by NSO if they are located in the proper place. NSO will search all packages for top-level directories called erlang-lib. The structure of such a directory is the same as a standard lib directory in Erlang. The directory may contain multiple Erlang applications. Each one must have a valid .app file. See the Erlang documentation of application and app for more info.
An Erlang package skeleton can be created by making use of the ncs-make-package command:
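For example (package and application names are illustrative; verify the option names against the ncs-make-package man page):

    ncs-make-package --erlang-skeleton --erlang-application-name ec_my_app my-erlang-pkg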
Multiple applications can be generated by using the option --erlang-application-name NAME multiple times with different names.
All application code should use the prefix ec_ for module names, application names, registered processes (if any), and named ets tables (if any), to avoid conflicts with existing or future names used by NSO itself.
The Erlang API to NSO is implemented as an Erlang/OTP application called econfd. This application comes in two flavors. One is built into NSO to support applications running in the same Erlang VM as NSO. The other is a separate library which is included in source form in the NSO release, in the $NCS_DIR/erlang directory. Building econfd as described in the $NCS_DIR/erlang/econfd/README file will compile the Erlang code and generate the documentation.
This API can be used by applications written in Erlang in much the same way as the C and Java APIs are used, i.e. code running in an Erlang VM can use the econfd API functions to make socket connections to NSO for data provider, MAAPI, CDB, etc. access. However, the API is also available internally in NSO, which makes it possible to run Erlang application code inside the NSO daemon, without the overhead imposed by the socket communication.
When the application is started, one of its processes should make initial connections to the NSO subsystems, register callbacks, etc. This is typically done in the init/1 function of a gen_server or similar. While the internal connections are made using the exact same API functions (e.g. econfd_maapi:connect/2) as for an application running in an external Erlang VM, any Address and Port arguments are ignored, and instead, standard Erlang inter-process communication is used.
There is little or no support for testing and debugging Erlang code executing internally in NSO, since NSO provides a very limited runtime environment for Erlang to minimize disk and memory footprints. Thus, the recommended method is to develop Erlang code targeted for this by using econfd in a separate Erlang VM, where an interactive Erlang shell and all the other development support included in the standard Erlang/OTP releases are available. When development and testing are completed, the code can be deployed to run internally in NSO without changes.
For information about the Erlang programming language and development tools, refer to www.erlang.org and the available books about Erlang (some are referenced on the website).
The --printlog option to ncs, which prints the contents of the NSO error log, is normally only useful for Cisco support and developers, but it may also be relevant for debugging problems with application code running inside NSO. The error log collects the events sent to the OTP error_logger, e.g. crash reports, as well as info generated by calls to functions in the error_logger(3) module. Another possibility for primitive debugging is to run ncs with the --foreground option, where calls to io:format/2 etc. will print to standard output. Printouts may also be directed to the developer log by using econfd:log/3.
While Erlang application code running in an external Erlang VM can use basically any version of Erlang/OTP, this is not the case for code running inside NSO, since the Erlang VM is evolving and provides limited backward/forward compatibility. To avoid incompatibility issues when loading the beam files, the Erlang compiler erlc should be of the same version as was used to build the NSO distribution.
NSO provides the VM, erlc, and the kernel, stdlib, and crypto OTP applications.
Application code running internally in the NSO daemon can have an impact on the execution of the standard NSO code. Thus, it is critically important that the application code is thoroughly tested and verified before being deployed for production in a system using NSO.
Applications may have dependencies on other applications. These dependencies affect the start order. If the dependent application resides in another package, this should be expressed by using the required package in the package-meta-data.xml file. Application dependencies within the same package should be expressed in the .app file. See below.
The following config settings in the .app file are explicitly treated by NSO:
The examples.ncs/getting-started/developing-with-ncs/18-simple-service-erlang example in the bundled collection shows how to create a service written in Erlang and execute it internally in NSO. This Erlang example is a subset of the Java example examples.ncs/getting-started/developing-with-ncs/4-rfs-service.
Run your Java code using the Java Virtual Machine (VM).
The NSO Java VM is the execution container for all Java classes supplied by deployed NSO packages.
The classes, and other resources, are structured in jar files, and the specific use of these classes is described in the component tag in the respective package-meta-data.xml file. Also, as a framework, it starts and controls other utilities for the use of these components. To accomplish this, a main class com.tailf.ncs.NcsMain, implementing the Runnable interface, is started as a thread. This thread can be the main thread (running in a Java main()) or be embedded into another Java program.
When the NcsMain thread starts, it establishes a socket connection towards NSO. This is called the NSO Java VM control socket. It is the responsibility of NcsMain to respond to command requests from NSO and pass these commands as events to the underlying finite state machine (FSM). The NcsMain FSM will execute all actions as requested by NSO. This includes class loading and instantiation as well as registration and start of services, NEDs, etc.
When NSO detects the control socket connection from the NSO Java VM, it starts an initialization process:
First, NSO sends an INIT_JVM request to the NSO Java VM. At this point, the NSO Java VM will load schemas, i.e. retrieve all known YANG module definitions. The NSO Java VM responds when all modules are loaded.
Then, NSO sends a LOAD_SHARED_JARS request for each deployed NSO package. This request contains the URLs for the jars situated in the shared-jar directory in the respective NSO package. The classes and resources in these jars will be globally accessible for all deployed NSO packages.
The next step is to send a LOAD_PACKAGE request for each deployed NSO package. This request contains the URLs for the jars situated in the private-jar directory in the respective NSO package. These classes and resources will be private to the respective NSO package. In addition, classes that are referenced in a component tag in the respective NSO package's package-meta-data.xml file will be instantiated.
NSO will send an INSTANTIATE_COMPONENT request for each component in each deployed NSO package. At this point, the NSO Java VM will register a start method for the respective component. NSO will send these requests in a proper start-phase order. This implies that the INSTANTIATE_COMPONENT requests can be sent in an order that mixes components from different NSO packages.
Lastly, NSO sends a DONE_LOADING request which indicates that the initialization process is finished. After this, the NSO Java VM is up and running.
See Debugging Startup for tips on customizing startup behavior and debugging problems when the Java VM fails to start.
The file tailf-ncs-java-vm.yang defines the java-vm container which, along with ncs.conf, is the entry point for controlling the NSO Java VM functionality. Study the content of the YANG model in the example below (The Java VM YANG model). For a full explanation of all the configuration data, look at the YANG file and man ncs.conf.
Many of the nodes beneath java-vm are by default invisible due to a hidden attribute. To make everything under java-vm visible in the CLI, two steps are required:
First, the following XML snippet must be added to ncs.conf:
Next, the unhide command may be used in the CLI session:
Each NSO package will have a specific Java classloader instance that loads its private jar classes. These package classloaders will refer to a single shared classloader instance as their parent. The shared classloader will load all shared jar classes for all deployed NSO packages.
The jars in the shared-jar and private-jar directories should NOT be part of the Java classpath.
The purpose of this is, first, to keep integrity between packages, which should not have access to each other's classes other than the ones contained in the shared jars. Secondly, this way it is possible to hot-redeploy the private jars and classes of a specific package while keeping other packages in a running state.
Should this class-loading scheme not be desired, it is possible to suppress it by starting the NSO Java VM with the system property TAILF_CLASSLOADER set to false.
This will force the NSO Java VM to use the standard Java system classloader. For this to work, all jars from all deployed NSO packages need to be part of the classpath. The drawback of this is that all classes will be globally accessible and hot redeploy will have no effect.
There are four types of components that the NSO Java VM can handle:
The ned type. The NSO Java VM will handle NEDs of sub-type cli and generic, which are the ones that have a Java implementation.
The callback type. These are any forms of callbacks that are defined by the DP API.
The application type. These are user-defined daemons that implement a specific ApplicationComponent Java interface.
The upgrade type. This component type is activated when deploying a new version of an NSO package and the NSO automatic CDB data upgrade is not sufficient. See Writing an Upgrade Package Component for more information.
In some situations, several NSO packages are expected to use the same code base, e.g. when third-party libraries are used or the code is structured with some common parts. Instead of duplicating jars in several NSO packages, it is possible to create a new NSO package, add these jars to the shared-jar directory, and let the package-meta-data.xml file contain no component definitions at all. The NSO Java VM will load these shared jars, and they will be accessible from all other NSO packages.
Inside the NSO Java VM, each component type has a specific Component Manager. The responsibility of these Managers is to manage a set of component classes for each NSO package. The Component Manager acts as an FSM that controls when a component should be registered, started, stopped, etc.
For instance, the DpMuxManager controls all callback implementations (services, actions, data providers, etc.). It can load, register, start, and stop such callback implementations.
NEDs can be of type netconf, snmp, cli, or generic. Only the cli and generic types are relevant for the NSO Java VM because these are the ones that have a Java implementation. Normally, these NED components come in self-contained and prefabricated NSO packages for some equipment or class of equipment. It is, however, possible to tailor-make NEDs for any protocol. For more information on this, see Network Element Drivers (NEDs) and Writing a data model for a CLI NED in NED Development.
Callbacks are the collective name for a number of different functions that can be implemented in Java. One of the most important is the service callback, but actions, transaction control, and data provider callbacks are also in common use in an NSO implementation. For more on how to program callbacks using the DP API, see DP API.
For programs that are none of the above types but still need to access NSO as a daemon process, it is possible to use the ApplicationComponent Java interface. The ApplicationComponent interface expects the implementing classes to implement an init(), a finish(), and a run() method.
The NSO Java VM will start each class in a separate thread. The init() method is called before the thread is started. The run() method runs in a thread, similar to the run() method in the standard Java Runnable interface. The finish() method is called when the NSO Java VM wants the application thread to stop. It is the responsibility of the programmer to stop the application thread, i.e., stop the execution in the run() method when finish() is called. Note that making the thread stop when finish() is called is important so that the NSO Java VM will not hang at a STOP_VM request.
An example of an application component implementation is found in SNMP Notification Receiver.
User implementations typically need resources like Maapi, Maapi Transaction, Cdb, Cdb Session, etc. to fulfill their tasks. These resources can be instantiated and used directly in the user code. This implies that the user code needs to handle the connection and closing of the additional sockets used by these resources. There is, however, another recommended alternative, and that is to use the Resource Manager. The Resource Manager is capable of injecting these resources into the user code. The principle is that the programmer annotates the field that should refer to the resource rather than instantiating it.
This way, the NSO Java VM and the Resource Manager can keep control over used resources and can also intervene, e.g., closing sockets at forced shutdowns.
The Resource Manager can handle two types of resources: MAAPI and CDB.
For both the Maapi and Cdb resource types, a socket connection is opened towards NSO by the Resource Manager. At stop, the Resource Manager will disconnect these sockets before ending the program. User programs can also tell the Resource Manager when its resources are no longer needed with a call to ResourceManager.unregisterResources().
The resource annotation has three attributes:
type defines the resource type.
scope defines if this resource should be unique for each instance of the Java class (Scope.INSTANCE) or shared between different instances and classes (Scope.CONTEXT). For CONTEXT scope, the sharing is confined to the defining NSO package, i.e., a resource cannot be shared between NSO packages.
qualifier is an optional string to identify the resource as a unique resource. All instances that share the same context-scoped resource need to have the same qualifier. If the qualifier is not given, it defaults to the value DEFAULT, i.e., shared between all instances that have the DEFAULT qualifier.
When the NSO Java VM starts, it will receive component classes to load from NSO. Note that the component classes are the classes that are referred to in the package-meta-data.xml file. For each component class, the Resource Manager will scan for annotations and inject resources as specified.
However, the package jars can contain many classes in addition to the component classes. These will be loaded at runtime, will be unknown to the NSO Java VM, and are therefore not handled automatically by the Resource Manager. These classes can also use resource injection, but a specific call to the Resource Manager is needed for the mechanism to take effect. Before the resources are used for the first time, a call to ResourceManager.registerResources(...) will force the injection of the resources. If the same class is registered several times, the Resource Manager will detect this and avoid multiple resource injections.
The AlarmSourceCentral and AlarmSinkCentral, which are part of the NSO Alarm API, can be used to simplify reading and writing alarms. The NSO Java VM will start these centrals at initialization. User implementations can therefore expect this to be set up without having to handle the start and stop of either the AlarmSinkCentral or the AlarmSourceCentral. For more information on the alarm API, see Alarm Manager.
As stated above, the NSO Java VM is executed in a thread implemented by NcsMain. This implies that somewhere a Java main() must be implemented that launches this thread. For NSO, this is provided by the NcsJVMLauncher class. In addition to this, there is a script named ncs-start-java-vm that starts Java with NcsJVMLauncher.main(). This is the recommended way of launching the NSO Java VM and how it is set up in a default installation. If there is a need to run the NSO Java VM as an embedded thread inside another program, this can be done simply by instantiating the class NcsMain and starting this instance in a new thread.
However, with the embedding of the NSO Java VM comes the responsibility to manage the life cycle of the NSO Java VM thread. This thread cannot be started before NSO has started and is running or else the NSO Java VM control socket connection will fail. Also, running NSO without the NSO Java VM being launched will render runtime errors as soon as NSO needs NSO Java VM functionality.
To be able to control an embedded NSO Java VM from another supervising Java thread or program an optional JMX interface is provided. The main functionality in this interface is listing, starting, and stopping the NSO Java VM and its Component Managers.
Normal control of the NSO Java engine is performed from NSO, e.g. using the CLI. However, the NcsMain class and all component managers implement JMX interfaces to make it possible to control the NSO Java VM also using standard Java tools like JVisualVM and JConsole.
The JMX interface is configured via the Java VM YANG model (see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) in the NSO configuration. For JMX connection purposes, there are four attributes to configure:
jmx-address: the hostname or IP for the RMI registry.
jmx-port: the port for the RMI registry.
jndi-address: the hostname or IP for the JMX RMI server.
jndi-port: the port for the JMX RMI server.
The JMX connection server uses two sockets for communication with a JMX client. The first socket is the JNDI RMI registry where the JMX MBean objects are looked up. The second socket is the JMX RMI server from which the JMX connection objects are exported. For all practical purposes, the host/IP for both sockets is the same and only the ports differ.
An example of a JMX connection URL connecting to localhost is: service:jmx:rmi://localhost:4445/jndi/rmi://localhost:4444/ncs
In addition to the JMX URL, the JMX user needs to authenticate using a legitimate user/password from the AAA configuration. An example of JMX authentication using the JConsole standard Java tool is the following:
The following JMX MBeans interfaces are defined:
NSO has extensive logging functionality. Log settings are typically very different for a production system compared to a development system. Furthermore, the logging of the NSO daemon and the NSO Java VM is controlled by different mechanisms. During development, we typically want to turn on the developer-log. The sample ncs.conf that comes with the NSO release has log settings suitable for development, while the ncs.conf created by a System Install is suitable for production deployment.
The NSO Java VM uses Log4j for logging and will read its default log settings from a provided log4j2.xml file in ncs.jar. Following that, NSO itself has java-vm log settings that are directly controllable from the NSO CLI. We can do:
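A sketch of such a CLI session, assuming the java-logging logger list in the java-vm model:

    admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-trace
    admin@ncs(config)# commit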
This will dynamically reconfigure the log level for the package com.tailf.maapi to be at the level trace. Where the Java logs end up is controlled by the log4j2.xml file. By default, the NSO Java VM writes to stdout. If the NSO Java VM is started by NSO, as controlled by the ncs.conf parameter /java-vm/auto-start, NSO will pick up the stdout of the service manager and write it to:
(The details pipe command also displays default values.)
The section /ncs-config/japi in ncs.conf contains a number of very important timeouts. See $NCS_DIR/src/ncs/ncs_config/tailf-ncs-config.yang and ncs.conf(5) in Manual Pages for details.
new-session-timeout controls how long NSO will wait for the NSO Java VM to respond to a new session.
query-timeout controls how long NSO will wait for the NSO Java VM to respond to a request to get data.
connect-timeout controls how long NSO will wait for the NSO Java VM to initialize a DP connection after the initial socket connect.
Whenever any of these timeouts trigger, NSO will close the sockets from NSO to the NSO Java VM. The NSO Java VM will detect the socket close and exit. If NSO is configured to start (and restart) the NSO Java VM, the NSO Java VM will be automatically restarted. If the NSO Java VM is started by some external entity, e.g. if it runs within an application server, it is up to that entity to restart the NSO Java VM.
When using the auto-start feature (the default), NSO will start the NSO Java VM (as outlined in the start of this section). There are a number of different settings in the java-vm YANG model (see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) that control what happens when something goes wrong during startup.
The two timeout configurations connect-time and initialization-time are most relevant during startup. If the Java VM fails during the initial stages (during INIT_JVM, LOAD_SHARED_JARS, or LOAD_PACKAGE), either because of a timeout or because of a crash, NSO will log The NCS Java VM synchronization failed in ncs.log.
The synchronization error message in the log will also have a hint as to what happened:
closed usually means that the Java VM crashed (and closed the socket connected to NSO).
timeout means that it failed to start (or respond) within the time limit. For example, if the Java VM runs out of memory and crashes, this will be logged as closed.
After logging, NSO will take action based on the synchronization-timeout-action setting:
log: NSO will log the failure, and if auto-restart is set to true, NSO will try to restart the Java VM.
log-stop (default): NSO will log the failure, and if the Java VM has not stopped already, NSO will also try to stop it. No restart action is taken.
exit: NSO will log the failure, and then stop NSO itself.
If you have problems with the Java VM crashing during startup, a common pitfall is running out of memory (either total memory on the machine or heap in the JVM). If you have a lot of Java code (or a loaded system), perhaps the Java VM did not start in time. Try to determine the root cause, check ncs.log and ncs-java-vm.log, and if needed, increase the timeout.
For complex problems, for example with the class loader, try logging the internals of the startup:
Setting this will result in a lot more detailed information in ncs-java-vm.log during startup.
When the auto-restart setting is true (the default), it means that NSO will try to restart the Java VM when it fails (at any point in time, not just during startup). NSO will try at most three restarts within 30 seconds, i.e., if the Java VM crashes more than three times within 30 seconds, NSO gives up. You can check the status of the Java VM using the java-vm YANG model. For example, in the CLI:
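For example (the output shown is illustrative):

    admin@ncs# show java-vm
    java-vm start-status started
    java-vm status running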
The start-status can have the following values:
auto-start-not-enabled: Autostart is not enabled.
stopped: The Java VM has been stopped or is not yet started.
started: The Java VM has been started. See the leaf 'status' to check the status of the Java application code.
failed: The Java VM has been terminated. If auto-restart is enabled, the Java VM restart has been disabled due to too frequent restarts.
The status can have the following values:
not-connected: The Java application code is not connected to NSO.
initializing: The Java application code is connected to NSO, but not yet initialized.
running: The Java application code is connected and initialized.
timeout: The Java application code connected to NSO, but failed to initialize within the stipulated timeout 'initialization-time'.
The following are the .app file settings explicitly treated by NSO (see the Erlang application section above):
applications: A list of applications that need to be started before this application can be started. This info is used to compute a valid start order.
included_applications: A list of applications that are started on behalf of this application. This info is used to compute a valid start order.
env: A property list containing [{Key,Val}] tuples. Besides other keys used by the application itself, a few predefined keys are used by NSO. The key ncs_start_phase is used by NSO to determine which start phase the application is to be started in. Valid values are early_phase0, phase0, phase1, phase1_delayed, and phase2. Default is phase1. If the application is not required in the early phases of startup, set ncs_start_phase to phase2 to avoid issues with NSO services being unavailable to the application. The key ncs_restart_type is used by NSO to determine what impact a restart of the application will have. This is the same as the restart_type() type in application. Valid values are permanent, transient, and temporary. Default is temporary.