Pre-Deployment Environment Data Collection Script

Optimize your Helm Chart values to ensure proper sizing of the Spyderbat Nano Agent parameters for your K8s environment.

What the Pre-Deployment Collection Script Is and How It Works

To optimally configure and size the Spyderbat Nano Agent and backend for your Kubernetes cluster, we have created a script that collects useful data and metrics. Spyderbat reviews this data to optimize the Helm installation of our agents and to size the Spyderbat backend appropriately.

Script Output Details

The script collects the following data in your environment:

  1. Summary metrics about the number of nodes, pods, deployments, replicasets, daemonsets, services and namespaces, which helps us assess the size and load on your cluster.

  2. Information about the nodes of the cluster, including their provisioned capacity and any taints applied to the nodes, which helps us understand the headroom available in your cluster to add our agents, and helps us pro-actively recommend configuring tolerations on our agents to ensure visibility on all nodes.

  3. Cumulative metrics about what resource requests currently running pods are requesting (CPU, memory), which helps us understand the headroom available in your cluster to add our agents.

  4. The names and namespaces of the deployments, daemonsets and services running on your cluster, which helps us assess whether any other daemonsets or deployments could interfere with our agents and helps us discover if your cluster has node auto-scaling configured.

  5. PriorityClasses currently present for the cluster which helps us assess whether our agent will have sufficient priority to get scheduled on any new nodes being added to the cluster.
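As a rough illustration of the kind of summary described in point 1, counts like these can be derived from the JSON that `kubectl get <resource> -o json` returns. The helper below is a hypothetical sketch of that idea, not the script's actual code:

```python
def summarize_cluster(resource_lists):
    """Count items per resource kind, as a rough cluster-size summary.

    resource_lists maps a kind name (e.g. "pods") to the parsed JSON
    document that `kubectl get <kind> -o json` would return.
    """
    return {kind: len(doc.get("items", [])) for kind, doc in resource_lists.items()}

# Toy stand-in for the output of `kubectl get pods -o json` and
# `kubectl get nodes -o json` (real documents carry many more fields).
sample = {
    "pods": {"items": [{"metadata": {"name": "a"}}, {"metadata": {"name": "b"}}]},
    "nodes": {"items": [{"metadata": {"name": "n1"}}]},
}
print(summarize_cluster(sample))  # {'pods': 2, 'nodes': 1}
```

The same per-kind counting extends naturally to deployments, replicasets, daemonsets, services, and namespaces.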

The script does NOT collect any of the following:

  • Implementation and status details in the 'spec' and 'status' sections of the pods, deployments or daemonsets.

  • Any sensitive data that might be present in these sections of the k8s resources (environment variables, configs).
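To illustrate the exclusions above, a collector can drop the `spec` and `status` sections of each resource before anything is written out. This is a hypothetical sketch of such a sanitization step, not the script's actual implementation:

```python
def sanitize(resource):
    """Keep only non-sensitive top-level fields of a k8s resource.

    Dropping 'spec' and 'status' means environment variables, configs,
    and other implementation details never reach the output file.
    """
    allowed = {"apiVersion", "kind", "metadata"}
    return {k: v for k, v in resource.items() if k in allowed}

# Toy pod manifest with a sensitive env var in its spec.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-1", "namespace": "default"},
    "spec": {"containers": [{"env": [{"name": "DB_PASSWORD", "value": "secret"}]}]},
    "status": {"phase": "Running"},
}
print(sanitize(pod))  # only apiVersion, kind, and metadata survive
```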

Script Execution Prerequisites

The Spyderbat Pre-Deployment Collection script should be run from a machine you currently use to manage your cluster.

Below are the requirements for the script to run successfully:

  1. kubectl and a valid kube config file

    The script will call on the kubectl command to collect cluster information. The cluster(s) to install Spyderbat on should be one of the contexts configured in the kube config file.

Script Execution Steps

First, you will need to download the script from this public repository.

After downloading the script, run it with the -h flag to see usage information:

./ -h

or

python3 -h

This prints the usage summary:

usage: [-h] [-c CONTEXT] [-o OUTPUT]

Here are the available options:

  • -h, --help: show this help message and exit

  • -c CONTEXT, --context CONTEXT: kubectl context to pull from (if none provided, all contexts in the kubectl config will be analyzed)

  • -o OUTPUT, --output OUTPUT: output file (default is Spyderbat-clusterinfo.json.gz)

By default, the script will collect information for all clusters configured in your kubeconfig file.

If you want to collect data for only one cluster, use the -c CONTEXT flag with the name of the context (as listed by kubectl config get-contexts).

For example:

./ -c qacluster1

By default, the output will go into a file called spyderbat-clusterinfo.json.gz. You can use the -o flag to specify another filename.

Output Delivery and Review

If the script ran successfully, please send the output file back to Spyderbat. We will review the findings with you, discuss the next steps for your deployment, and recommend how to configure your deployment parameters so that all Spyderbat Nano Agents come online, initialize fully, and register successfully with the Spyderbat backend.


If you would like to review an example of a full file output, please Contact Us.


© SPYDERBAT, Inc., All Rights Reserved