Learn about NSO SSH key management.
The SSH protocol uses public key technology for two distinct purposes:
Server Authentication: This use is a mandatory part of the protocol. It allows an SSH client to authenticate the server, i.e. verify that it is really talking to the intended server and not some man-in-the-middle intruder. This requires that the client has prior knowledge of the server's public keys, and the server proves its possession of one of the corresponding private keys by using it to sign some data. These keys are normally called 'host keys', and the authentication procedure is typically referred to as 'host key verification' or 'host key checking'.
Client Authentication: This use is one of several possible client authentication methods, i.e. it is an alternative to the commonly used password authentication. The server is configured with one or more public keys which are authorized for authentication of a user. The client proves possession of one of the corresponding private keys by using it to sign some data - i.e. the exact reverse of the server authentication provided by host keys. The method is called 'public key authentication' in SSH terminology.
These two usages are fundamentally independent, i.e. host key verification is done regardless of whether the client authentication is via public key, password, or some other method. However, host key verification is of particular importance when client authentication is done via password, since failure to detect a man-in-the-middle attack in this case will result in the cleartext password being divulged to the attacker.
NSO can act as an SSH server for northbound connections to the CLI or the NETCONF agent, and for connections from other nodes in an NSO cluster - cluster connections use NETCONF, and the server side setup used is the same as for northbound connections to the NETCONF agent. It is possible to use either the NSO built-in SSH server or an external server such as OpenSSH, for all of these cases. When using an external SSH server, host keys for server authentication and authorized keys for client/user authentication need to be set up per the documentation for that server, and there is no NSO-specific key management in this case.
When the NSO built-in SSH server is used, the setup is very similar to the one OpenSSH uses:
The private host key(s) must be placed in the directory specified by /ncs-config/aaa/ssh-server-key-dir in ncs.conf, and named either ssh_host_dsa_key (for a DSA key) or ssh_host_rsa_key (for an RSA key). The key(s) must be in PEM format (e.g. as generated by the OpenSSH ssh-keygen command), and must not be encrypted - protection can be achieved by file system permissions (not enforced by NSO). The corresponding public key(s) are typically stored in the same directory with a .pub extension to the file name, but they are not used by NSO. The NSO installation creates a DSA private/public key pair in the directory specified by the default ncs.conf.
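For example, a suitable RSA host key can be generated with OpenSSH's ssh-keygen. This is a minimal sketch: the target directory shown is an assumption and must match whatever /ncs-config/aaa/ssh-server-key-dir points to in your ncs.conf.

```bash
# Generate an unencrypted RSA host key in PEM format for the NSO built-in SSH server.
# The directory below is an assumption - use the one configured in
# /ncs-config/aaa/ssh-server-key-dir in ncs.conf.
ssh-keygen -t rsa -b 4096 -m PEM -N "" -f ${NCS_DIR}/etc/ncs/ssh/ssh_host_rsa_key

# NSO does not enforce file permissions on the key, so restrict access explicitly.
chmod 600 ${NCS_DIR}/etc/ncs/ssh/ssh_host_rsa_key
```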
The public keys that are authorized for authentication of a given user must be placed in the user's SSH directory. Refer to Public Key Login for details on how NSO searches for the keys to use.
NSO can act as an SSH client for connections to managed devices that use SSH (this is always the case for devices accessed via NETCONF, typically also for devices accessed via CLI), and for connections to other nodes in an NSO cluster. In all cases, a built-in SSH client is used. The $NCS_DIR/examples.ncs/getting-started/using-ncs/8-ssh-keys example in the NSO example collection has a detailed walk-through of the NSO functionality that is described in this section.
The level of host key verification can be set globally via /ssh/host-key-verification. The possible values are:

reject-unknown: The host key provided by the device or cluster node must be known by NSO for the connection to succeed.

reject-mismatch: The host key provided by the device or cluster node may be unknown, but it must not be different from the "known" key for the same key algorithm, for the connection to succeed.

none: No host key verification is done - the connection will never fail due to the host key provided by the device or cluster node.
The default is reject-unknown, and it is not recommended to use a different value, although it can be useful or needed in certain circumstances. E.g. none may be useful in a development scenario, and temporary use of reject-mismatch may be motivated until host keys have been configured for a set of existing managed devices.
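As a sketch (assuming the Cisco-style NSO CLI), the global level could for instance be relaxed while onboarding a set of existing devices, and then restored afterwards:

```
admin@ncs# config
! Temporarily accept unknown (but not mismatching) host keys
admin@ncs(config)# ssh host-key-verification reject-mismatch
admin@ncs(config)# commit
! ... fetch and verify host keys for the existing devices ...
! Restore the recommended default
admin@ncs(config)# ssh host-key-verification reject-unknown
admin@ncs(config)# commit
```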
The public host keys for a device that is accessed via SSH are stored in the /devices/device/ssh/host-key list. There can be several keys in this list, one each for the ssh-ed25519 (ED25519 key), ssh-dss (DSA key), and ssh-rsa (RSA key) key algorithms. In case a device has entries in its live-status-protocol list that use SSH, the host keys for those can be stored in the /devices/device/live-status-protocol/ssh/host-key list, in the same way as the device keys - however, if /devices/device/live-status-protocol/ssh does not exist, the keys from /devices/device/ssh/host-key are used for that protocol. The keys can be configured e.g. via input directly in the CLI, but in most cases, it will be preferable to use the actions described below to retrieve keys from the devices. These actions will also retrieve any live-status-protocol keys for a device.
The level of host key verification can also be set per device, via /devices/device/ssh/host-key-verification. The default is to use the global value (or default) for /ssh/host-key-verification, but any explicitly set value will override the global value. The possible values are the same as for /ssh/host-key-verification.
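For example (a sketch in the Cisco-style CLI; the device name lab0 is hypothetical), host key checking could be disabled for a single lab device while keeping the global default for all other devices:

```
admin@ncs(config)# devices device lab0 ssh host-key-verification none
admin@ncs(config)# commit
```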
There are several actions that can be used to retrieve the host keys from a device and store them in the NSO configuration:
/devices/fetch-ssh-host-keys: Retrieve the host keys for all devices. Successfully retrieved keys are committed to the configuration.
/devices/device-group/fetch-ssh-host-keys: Retrieve the host keys for all devices in a device group. Successfully retrieved keys are committed to the configuration.
/devices/device/ssh/fetch-host-keys: Retrieve the host keys for one or more devices. In the CLI, range expressions can be used for the device name, e.g. '*' will retrieve keys for all devices. The action will commit the retrieved keys if possible, i.e. if the device entry is already committed. Otherwise (i.e. if the action is invoked from "configure mode" when the device entry has been created but not yet committed), the keys are written to the current transaction but not committed.
The fingerprints of the retrieved keys will be reported as part of the result from these actions, but it is also possible to ask for the fingerprints of already retrieved keys by invoking the /devices/device/ssh/host-key/show-fingerprint action (/devices/device/live-status-protocol/ssh/host-key/show-fingerprint for live-status protocols that use SSH).
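A minimal CLI sketch (the device name ce0 is hypothetical, the show-fingerprint invocation assumes the device presented an ED25519 host key, and output is omitted):

```
! Fetch and commit the SSH host keys for one device
admin@ncs# devices device ce0 ssh fetch-host-keys
...
! Re-display the fingerprint of an already stored key
admin@ncs# devices device ce0 ssh host-key ssh-ed25519 show-fingerprint
...
```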
This is very similar to the case of a connection to a managed device; it differs mainly in the locations used - and in the fact that SSH is always used for connections to a cluster node. The public host keys for a cluster node are stored in the /cluster/remote-node/ssh/host-key list, in the same way as the host keys for a device. The keys can be configured e.g. via input directly in the CLI, but in most cases, it will be preferable to use the action described below to retrieve keys from the cluster node.
The level of host key verification can also be set per cluster node, via /cluster/remote-node/ssh/host-key-verification. The default is to use the global value (or default) for /ssh/host-key-verification, but any explicitly set value will override the global value. The possible values are the same as for /ssh/host-key-verification.
The /cluster/remote-node/ssh/fetch-host-keys action can be used to retrieve the host keys for one or more cluster nodes. In the CLI, range expressions can be used for the node name, e.g. '*' will retrieve keys for all nodes. The action will commit the retrieved keys if possible, but if it is invoked from "configure mode" when the node entry has been created but not committed, the keys will be written to the current transaction, but not committed.
The fingerprints of the retrieved keys will be reported as part of the result from this action, but it is also possible to ask for the fingerprints of already retrieved keys by invoking the /cluster/remote-node/ssh/host-key/show-fingerprint action.
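A corresponding CLI sketch for cluster nodes (the node name node-east is hypothetical, the show-fingerprint invocation assumes an RSA host key was retrieved, and output is omitted):

```
admin@ncs# cluster remote-node node-east ssh fetch-host-keys
...
admin@ncs# cluster remote-node node-east ssh host-key ssh-rsa show-fingerprint
...
```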
The private key used for public key authentication can be taken either from the SSH directory of the local user or from a list of private keys in the NSO configuration. The user's SSH directory is determined according to the same logic as for the server-side public keys that are authorized for authentication of a given user (see Public Key Login), but of course different files in this directory are used (see below). Alternatively, the key can be configured in the /ssh/private-key list, using an arbitrary name for the list key. In both cases, the key must be in PEM format (e.g. as generated by the OpenSSH ssh-keygen command), and it may be encrypted or not. Encrypted keys configured in /ssh/private-key must have the passphrase for the key configured via /ssh/private-key/passphrase.
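For instance, a PEM-format client key pair could be generated with OpenSSH's ssh-keygen; the file name used here is only an example, and whether to set a passphrase depends on how the key will be referenced (see below).

```bash
# Generate a PEM-format RSA key pair for use by NSO's built-in SSH client.
# Leave the passphrase empty for an unencrypted key, or set one and configure
# it via /ssh/private-key/passphrase or the corresponding umap passphrase leaf.
ssh-keygen -t rsa -b 4096 -m PEM -f ~/.ssh/id_rsa_nso_client
```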
The specific private key to use is configured via the same authgroup indirection and umap selection mechanisms as for password authentication - public key authentication is simply a different alternative. Setting /devices/authgroups/group/umap/public-key (or default-map instead of umap for users that are not in umap) without any additional parameters will select the default of using a file called id_dsa in the local user's SSH directory, which must hold an unencrypted key. A different file name can be set via /devices/authgroups/group/umap/public-key/private-key/file/name. For an encrypted key, the passphrase can be set via /devices/authgroups/group/umap/public-key/private-key/file/passphrase, or /devices/authgroups/group/umap/public-key/private-key/file/use-password can be set to indicate that the password used (if any) by the local user when authenticating to NSO should also be used as a passphrase for the key. To instead select a private key from the /ssh/private-key list, the name of the key is set via /devices/authgroups/group/umap/public-key/private-key/name.
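A configuration sketch in the Cisco-style CLI; the authgroup name mygroup, the user name oper, the file name, and the passphrase are all hypothetical, and only leafs named above are used.

```
! Use a key file (other than the default id_dsa) in the local user's SSH
! directory, and supply the passphrase for the encrypted key
admin@ncs(config)# devices authgroups group mygroup umap oper public-key private-key file name id_rsa_nso_client
admin@ncs(config)# devices authgroups group mygroup umap oper public-key private-key file passphrase "secret-passphrase"
admin@ncs(config)# commit
```

To instead use a key from the NSO configuration, the same umap entry would set public-key private-key name to the name of an entry that has already been added to the /ssh/private-key list.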
This is again very similar to the case of a connection to a managed device, since the same authgroup/umap scheme is used. Setting /cluster/authgroup/umap/public-key (or default-map instead of umap for users that are not in umap) without any additional parameters will select the default of using a file called id_dsa in the local user's SSH directory, which must hold an unencrypted key. A different file name can be set via /cluster/authgroup/umap/public-key/private-key/file/name. For an encrypted key, the passphrase can be set via /cluster/authgroup/umap/public-key/private-key/file/passphrase, or /cluster/authgroup/umap/public-key/private-key/file/use-password can be set to indicate that the password used (if any) by the local user when authenticating to NSO should also be used as a passphrase for the key. To instead select a private key from the /ssh/private-key list, the name of the key is set via /cluster/authgroup/umap/public-key/private-key/name.
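A corresponding CLI sketch for cluster connections (the authgroup name default, the user name admin, and the key name my-client-key are hypothetical, and the key is assumed to already exist in the /ssh/private-key list):

```
admin@ncs(config)# cluster authgroup default umap admin public-key private-key name my-client-key
admin@ncs(config)# commit
```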