Installing the Nano Agent via a public or locally hosted Helm Chart, or manually via a DaemonSet; configuring parameters (memory and CPU resources, priority class); and validating the install in a Kubernetes cluster.

Published: October 11, 2022

In a containerized environment, the Spyderbat Nano Agent can be deployed to a target Kubernetes cluster via a Kubernetes DaemonSet. To guarantee proper coverage, it is important to ensure that a single instance of the Spyderbat Nano Agent runs on every cluster node (and is optionally deployed to API server control plane nodes for self-managed clusters).

Spyderbat offers a simple deployment approach via a Helm Chart. Helm is a package manager for Kubernetes that creates the necessary pods, permissions, network rules, and so on. Instructions are also provided below for cases where the target cluster does not have internet access to the necessary artifacts; in that scenario, the deployment is executed with a plain Kubernetes DaemonSet manifest.

Infrastructure Prerequisites

The Spyderbat Nano Agent leverages eBPF technology on Linux systems to gather data and forward it to the Spyderbat backend. A full list of supported Linux operating systems can be found on our website (paragraph 4).

A successful Spyderbat Nano Agent install and new source registration in the Spyderbat UI require that the agent has outbound access on port 443 to https://orc.spyderbat.com, so that the Nano Agent can pull all needed updates and register with the Spyderbat backend. This means the pod running the Nano Agent must have outbound access from the Kubernetes cluster and target namespace to the port and domain above.

To verify a successful agent installation, the person installing the Spyderbat agent should ideally have a Spyderbat Admin account in their Spyderbat organization and be able to access that organization in the Spyderbat UI at https://app.spyderbat.com
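As a quick pre-check, the outbound connectivity requirement above can be verified from inside the target cluster. The sketch below assumes a throwaway debug pod; the pod name and the curlimages/curl image are illustrative choices, not part of the Spyderbat install:

```shell
# Launch a short-lived pod and test HTTPS reachability of the Spyderbat
# orchestrator endpoint; expect an HTTP status code rather than a timeout.
kubectl run sb-conn-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w "%{http_code}\n" https://orc.spyderbat.com
```

If the command times out or fails to resolve the hostname, check egress rules and any proxy configuration before proceeding with the install.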

Public Helm Chart Deployment: Clone Repo, Update and Install

Below is the set of deployment instructions for your Kubernetes cluster, which is also available in the Spyderbat UI under Sources -> Add New Source. This deployment runs with default settings for the parameters described below, which have been pre-populated.

helm repo add nanoagent https://spyderbat.github.io/nanoagent_helm/
helm repo update
helm install nanoagent nanoagent/nanoagent \
  --set nanoagent.agentRegistrationCode=<agent registration code> \
  --set nanoagent.orcurl=https://orc.spyderbat.com/

The agent registration code is specific to your organization (see below), and the ORC URL is the endpoint where your Nano Agents register and communicate with the Spyderbat backend.

If you wish to store your Agent Registration Code in the AWS Secrets Manager, please refer to this article for more information on how to set it up.

The Helm installation commands specific to your organization can be found in the Spyderbat UI by clicking on “New Source” under the “Sources” section of the left-hand navigation. This will lead to an agent installation wizard where the Helm chart details for your organization are available.

Customizing the Helm Chart Values

To get the Helm Chart source, you may clone the repo by running the following command:

git clone https://github.com/spyderbat/nanoagent_helmchart.git

The Spyderbat Helm Chart includes a set of yaml files and configurable parameters that can be optionally modified by the user before running the Helm Chart on a target Kubernetes cluster.


The user can specify a resource request for the containers in a pod, which kube-scheduler uses to decide which node to place the pod on. The user can also specify a resource limit for a container; the kubelet enforces that limit so the running container is not allowed to use more of the resource than the limit allows. The kubelet also reserves at least the requested amount of that system resource specifically for that container to use.

There are two resource types to configure: CPU and memory.

CPU is specified in units of Kubernetes CPUs, where 1 CPU unit is equivalent to 1 physical or virtual CPU core. For CPU resources, the expression 0.1 is equivalent to 100m, which can be read as “one hundred millicpu” or “one hundred millicores”.

Memory is specified in units of bytes, using either a plain integer or a quantity suffix; power-of-two suffixes such as Mi denote mebibytes, so 2048Mi is 2048 mebibytes (MiB).
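To make the unit conventions concrete, the following sketch converts the default request values into base units (pure shell arithmetic, no cluster required):

```shell
#!/bin/sh
# 100m CPU = 100/1000 of a core; Mi is a power-of-two suffix (1Mi = 1024*1024 bytes).
cpu_millicores=100
mem_mib=512
echo "cpu request:    ${cpu_millicores}m = $(awk "BEGIN{print ${cpu_millicores}/1000}") core"
echo "memory request: ${mem_mib}Mi = $((mem_mib * 1024 * 1024)) bytes"
```

Running this prints the request as 0.1 of a core and 536870912 bytes, which matches the defaults listed below.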

By default the resource requests are set to the following values:

  • CPU at 100m = 0.1 of a single CPU core (physical or virtual)

  • Memory at 512Mi = 512 MiB

And the resource limits are set to the following values:

  • CPU resources are hard-capped at 6 CPU cores

  • Memory resources are hard-capped at 10 GB of RAM

Priority Class

This is a non-namespaced object that defines a mapping from a priority class name to an integer priority value: the higher the value, the higher the priority. A PriorityClass object can have any 32-bit integer value smaller than or equal to 1 billion. By default, the priority will be set to the lowest value.

Once the priority class is set within the customer’s priority scale, the agent will be installed on every node in the cluster according to that priority. If the priority class is set too low, the pods could be preempted or evicted; if the user wants to ensure that an agent is installed on every node in the cluster when the pod is created, the priority should be set accordingly.

For example:

  • 100 – 1,000: low priority

  • 100K+: ultra-high priority

By default, the priority class is disabled. If it is enabled, the default value is 1000.

It is important to keep in mind that if the priority class remains disabled, the Spyderbat Nano Agent may never be scheduled on a node that has no spare capacity.
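As a reference, a standalone PriorityClass object looks like the sketch below. The name, value, and description here are illustrative; the chart's priority.yaml defines the actual object:

```shell
# Apply an example PriorityClass (illustrative name and value; requires cluster access).
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-agent-priority
value: 1000
globalDefault: false
description: "Example priority for agent pods; higher values preempt lower ones."
EOF
```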


Namespace

Namespaces provide a mechanism for isolating groups of resources within a single cluster. If the namespace parameter is set to false when the agent installer is run, a single pod will be created in the default namespace.

Once this parameter is enabled (set to “true”), the “create namespace” argument will be used to create the “spyderbat” namespace as part of the deployment.

Service Account

During Spyderbat Nano Agent deployment into the Kubernetes cluster, the DaemonSet puts an agent on every node in the cluster. ClusterMonitor creates a special agent that monitors the Kubernetes cluster itself, and it is the ClusterMonitor that needs service account permissions to enable such monitoring. The name of the service account can be changed in values.yaml; it defaults to “spyderbat-serviceaccount”.

The service account uses a ClusterRoleBinding to the cluster-admin role, which allows it to read all of the cluster configuration and gives it the ability to terminate pods to stop attacks.

If you do not wish to use preventive actions, the cluster role can be altered in values.yaml to have only “ReadOnly” and “Watch” permissions.
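For reference, a read-only variant of the role could look like the following sketch. The role name and wildcard scope are assumptions; the actual definition lives in the chart's rolebinding.yaml and values.yaml:

```shell
# Example of a read-only/watch ClusterRole; note there is no delete verb,
# so pod termination (preventive action) is not possible with this role.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spyderbat-readonly-example
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
EOF
```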

To update the desired parameters via the command line, use the following command sequence:

helm install nanoagent nanoagent/nanoagent \
  --set nanoagent.agentRegistrationCode=<agent registration code> \
  --set nanoagent.orcurl=https://orc.spyderbat.com/ \
  --set-string resources.requests.cpu=1000m \
  --set priorityClassDefault.value=10000

This replaces the default resources.requests.cpu of 100m with 1000m. For numeric settings, use --set instead of --set-string.

Below is the summary table with all the defaults for your reference:

| Parameter | Parameter from values.yaml | Default State | Default Value (if enabled) |
| --- | --- | --- | --- |
| CPU resource request | requests: cpu | — | 100m |
| CPU resource limit | limits: cpu | — | 6 CPU cores |
| Memory resource request | requests: memory | — | 512Mi |
| Memory resource limit | limits: memory | — | 10 GB |
| Priority class | priorityClassDefault | enabled: false | 1000 (when enabled: true) |
| Omit Environment | — | — | “no” emits all environment variables; “everything” omits all environment variables; “allbutredacted” uses Spyderbat’s rules to encrypt variables that look like they contain secrets and emits only those for analysis |

To configure access via a proxy, you can add additional parameters to the Helm command line:

--set nanoagent.httpproxy= \
--set nanoagent.httpsproxy=

To set the resource limits, additional parameters like these can be added to the Helm command line. We recommend a limit of 3-5% of the resources on a node:

--set resources.limits.cpu=2000m \
--set resources.limits.memory=8192M
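As a sizing illustration of the 3-5% guideline, the arithmetic below works through a hypothetical node with 16 cores and 64 GiB of RAM (the node sizes are assumptions for the example):

```shell
#!/bin/sh
# Compute a 5% resource cap for a hypothetical node.
node_cpu_millicores=16000   # 16 cores expressed in millicores
node_mem_mib=65536          # 64 GiB expressed in MiB
echo "cpu limit (5%):    $((node_cpu_millicores * 5 / 100))m"
echo "memory limit (5%): $((node_mem_mib * 5 / 100))Mi"
```

For this node, a 5% cap works out to 800m of CPU and 3276Mi of memory; scale the values to your own node sizes.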

Helm Chart Package Contents

The Helm Chart packages the following installer files:

  • nanoagent.yaml: used to ensure a copy of the pod is created on every node.

  • serviceaccount.yaml: creates a service account as part of the deployment to allow leveraging the Kubernetes APIs.

  • namespace.yaml: creates the Spyderbat namespace for resource management.

  • priority.yaml: sets the priority for Spyderbat pod deployment on all nodes in the Kubernetes cluster.

  • clustermonitor.yaml: creates a ClusterMonitor Nano Agent that collects information from the Kubernetes API.

  • rolebinding.yaml: defines the cluster role binding for the Spyderbat service account.

  • values.yaml: contains the user-configurable parameters for the Helm Chart install.

Deployment via Self-Hosted Helm Chart and Docker Container Image

In the scenario where you want to host the Helm Chart and container image locally, you may leverage the following instructions. Note that the pod running the Spyderbat Nano Agent still requires outbound internet access to https://orc.spyderbat.com on port 443.

On a machine with internet access, you can pull the Spyderbat container image into your local docker system with the following command:

docker pull public.ecr.aws/a6j2k0g1/nano-agent:latest

To see the image ID and verify that the image is local:

docker image ls

Which will return results looking like the following:

REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
<none>                               <none>    72bb338b2313   2 minutes ago   151MB
ubuntu                               latest    27941809078c   3 weeks ago     77.8MB
public.ecr.aws/a6j2k0g1/nano-agent   latest    dde533638cf2   2 months ago    148MB

You can export the image with:

docker image save dde533638cf2 > docker.image.nano_agent

The file docker.image.nano_agent can be imported into your local repository.
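One way to do that import is to load, retag, and push the image to your private registry. The registry URL below is a placeholder; the image ID matches the docker image ls example above:

```shell
# Load the exported image, then retag and push it to a private registry.
docker load < docker.image.nano_agent
docker tag dde533638cf2 registry.example.com/spyderbat/nano-agent:latest
docker push registry.example.com/spyderbat/nano-agent:latest
```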

Alternatively, you may download a compressed image like this:

curl https://spyderbat.github.io/nanoagent_helm/docker.image.nano_agent.gz \
  --output agentimage.tar.gz

This image is gzip compressed but can be installed into your registry or repository.

To get the Helm Chart for internal hosting:

curl https://spyderbat.github.io/nanoagent_helm/agent_helm.tar \
  --output nano_agent_helmchart.tar

You can unpack the Helm Chart with:

tar xvf nano_agent_helmchart.tar

In nanoagent/values.yaml, edit the image section to point to the registry where you saved the container image.

The Helm chart can be used locally, or you can host it.

Deployment via a Daemonset

Should a manual Spyderbat Nano Agent install be required, the yaml files can be extracted and applied one by one in a controlled fashion.

To extract the files from the Helm Chart available in the public GitHub repository, run the following command using your organization’s registration code (see the Public Helm Chart Deployment: Clone Repo, Update and Install section for details on how to find your agent registration code):

helm template nanoagent nanoagent/nanoagent \
  --set nanoagent.agentRegistrationCode=<agent registration code> \
  --set nanoagent.orcurl=https://orc.spyderbat.com/ \
  --set spyderbat_tags='CLUSTER_NAME=mycluster:environment=dev'

Once run, this command will produce a batch of yaml files, including those listed in the Helm Chart Package Contents section above.

You can then modify the desired parameters in the respective files as noted above and apply the files one by one to complete the Spyderbat Nano Agent installation.
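One possible workflow, for reference, is to render the templates into a directory and apply them individually. The rendered file paths below are assumptions based on standard Helm output layout:

```shell
# Render the chart into ./rendered and apply the manifests one at a time.
helm template nanoagent nanoagent/nanoagent \
  --set nanoagent.agentRegistrationCode=<agent registration code> \
  --output-dir ./rendered

# Review or edit individual files, then apply them in order, e.g.:
kubectl apply -f ./rendered/nanoagent/templates/namespace.yaml
kubectl apply -f ./rendered/nanoagent/templates/nanoagent.yaml
```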


If the installation proceeded correctly, you will receive a confirmation message.

Once the agents register with Spyderbat’s backend, you will see a number of active sources with a recent registration date, corresponding to the number of cluster nodes that were targeted with the agent.

Once the Spyderbat Nano Agents have been installed, you can validate the pods are running with the following command:

kubectl get pods -n spyderbat

You should see one pod per cluster node, each in the Running state.
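For example, on a three-node cluster the result may look like the following (the pod names, suffixes, and ages here are illustrative, not actual output):

```shell
kubectl get pods -n spyderbat
# NAME              READY   STATUS    RESTARTS   AGE
# nanoagent-7xk2p   1/1     Running   0          2m
# nanoagent-9qv4d   1/1     Running   0          2m
# nanoagent-tb58m   1/1     Running   0          2m
```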

Note that the free Spyderbat Community account allows you to monitor up to 5 nodes, i.e. register up to 5 sources in the Spyderbat UI. If you have a cluster that contains more than 5 nodes or anticipate scaling up in the near future, please visit https://www.spyderbat.com/pricing/ to sign up for our Professional tier.

Last updated

© SPYDERBAT, Inc., All Rights Reserved