
Spyderbat


Quick Links

Getting Started

Introduction to essential features and functionality of the Spyderbat console.

  • Sign Up and Create an Organization

  • Spyderbat User and Role Management Overview

  • Three Things to Try with Spyderbat Community Edition

Spyderbat Product Docs

Everything you need to know about our cloud-native runtime security platform that will provide you with continuous security monitoring, improved observability and timely alerting.

Getting Started with Spyderbat

Navigating Spyderbat UI

Enhancing Your Spyderbat Experience

Tutorials

Integrations

Try Spyderbat Community Edition for Free

Set up your test organization and deploy up to 5 Nano Agents

Spyderbat System Requirements

Learn about infrastructure prerequisites and supported OS types before deploying Spyderbat

How to Install a Spyderbat Nano Agent onto a K8s Cluster

How-To guide and a 6-minute video to get you started

Get started with monitoring your environment at runtime

Latest runtime security findings at your fingertips

Manage your users and their access permissions

View complete list of integrations

Install the Spyderbat Event Forwarder

Export Spyderbat findings (runtime security flags, Spydertraces) for ingestion into third-party SIEM tools like Splunk, Panther, or Sumo Logic.

The event forwarder is a custom utility that consumes Spyderbat events from the API and emits files containing one of two types of data: security flags or Spydertraces. The event forwarder can be integrated with third-party systems, such as SIEMs and other alert aggregation points, to forward those files via syslog for consumption outside the Spyderbat platform, enriching your security content and improving overall observability and security awareness.

To learn more about the event forwarder and how you can use it to integrate Spyderbat with your other solutions, see this page.

Guardian

Flashback (Go Back In Time)

Notifications

Integrations

Installation

Spyderbat Nano Agent installation and deployment configuration options.

Miscellaneous

Summarize

Reference

Scout (Detections)

Concepts

The Concepts section helps you learn about the Spyderbat Platform. This section is designed to provide a comprehensive understanding of the fundamental ideas and principles that let you view and secure your environment.

Create an Organization

Spyderbat Accounts are tied to one or more Organizations. Membership to an Organization is a requirement for installing the Spyderbat Nano Agent and fully utilizing Spyderbat as a whole.

All of the data generated by Spyderbat is tied directly to an Organization. Users can be members of multiple organizations, and may have different RBAC permissions for each org.

Community Organizations

Spyderbat's free tier is called Community, and all it takes to create a Community Spyderbat Organization is a Spyderbat Account. You will receive the account creation instructions once you fill out a short contact form.

Professional/Enterprise Organizations

For those interested in trying out the professional and enterprise solutions, contact sales. Our team will set up one or more Spyderbat Organization(s) to suit your specific needs.

Reports

Overview of the Reporting section of the console, including report creation, review, download and printing.

Spyderbat provides robust reporting capabilities to help monitor and analyze the operations and security of monitored Linux machines and Kubernetes clusters.

The reporting features are accessible via the "Reports" menu of the portal.

The Reports section contains two submenus:

  • Generated: Review and manage reports generated based on your input parameters.

  • Create: Create new reports based on a predefined inventory of available report types.

Notifications

Get notified when Spyderbat detects operations issues or suspicious behavior at runtime in your environment.

Spyderbat's notification system has 3 main components:

  • Notification Targets: Named destinations to which notifications can be sent.

  • Notification Templates: Templates that define the structure and content of notifications, simplifying the setup process.

Custom Flags

What are Custom Flags?

Custom Flags in Spyderbat are a powerful feature that enable users to create tailored detection rules to monitor activities or behaviors specific to their environment.

Custom Flags are designed to address unique needs that Spyderbat's built-in detections may not cover and may be specific to your organization's requirements.

Spyctl Commands

Readthedocs

Spyctl's commands are documented on readthedocs.io.

If you have already installed Spyctl, you can also see the command documentation by running spyctl --help. You can also run spyctl <command> --help for more information about a given command.
Notifiable Objects:

Spyderbat allows users to set up Notifications for the object types below to stay informed about important events in their Spyderbat Organization.

There are three types of notifiable objects:

1. Saved Queries

What it is: Predefined searches to track specific patterns or behaviors in your data.

Why it’s useful: Automates monitoring by notifying you when new activity matches the query.

Example: Get notified when there's an unusual inbound connection.

2. Custom Flags

What it is: Custom flags enable users to create tailored detection rules to monitor activities or behaviors specific to their environment.

Why it’s useful: Helps focus on what matters to you, like unusual commands or risky actions.

Example: Flag and alert when someone runs a command that requires high privileges.

3. Agent Health Notifications

What it is: Alerts about the health and status of Spyderbat agents.

Why it’s useful: Ensures agents are functioning properly and sending data.

Example: Get notified if an agent goes "Offline" or enters a "Critical" state.

Note: To learn how to configure notifications for agent health using Spyctl, refer to the tutorial here.

Quick Start Tutorial

To quickly get started using Spyderbat Notifications, follow our tutorial using Spyctl.

How to set up Spyderbat Notifications (Spyctl CLI)



Reports can be customized with a variety of input parameters and exported in multiple formats, including JSON, YAML and PDF. Once generated, reports are stored in the "Generated" section for review, export, and printing.

Creating a Report

To create a report:

  1. Navigate to the Reports section of the portal and click on the Create menu item.

  2. Review the list of available report templates. Read the descriptions of the reports, and use the preview button to preview a sample of each report type.

  3. Select the desired report type from the list.

  4. Each report type has specific input parameters you must provide to customize the report for your environment, and specific UI controls to select them. Enter or select the required parameters, such as:

     • Cluster: The Kubernetes cluster for which the report is generated.

     • Start Time and End Time: Defines the reporting period (e.g., last 24 hours or a specific time range).

     You can give your report a specific name to make it easier to locate later. Other report types may have other selectors, such as machine selectors.

  5. Click Create to initiate the report generation process.

Reports may take several minutes to generate, depending on the size of the system and the selected time range. The UI will display a popup at the bottom of the page titled 'Creating report,' with a 'View' link that directs you to the 'Generated' section.

Viewing Generated Reports

All created reports will appear immediately in the Generated Reports section. Some reports take a while to process and render, and will not be immediately available to view. When they are ready, the 'Published' column will change from 'Scheduled' to the time and date when the report was published, and a 'View' button will be available to allow the user to view the report.

Select the report you want to access and click on its 'View' button. The report will render in a pop-up, like so:

From here you can:

  1. Inspect the report: Scroll down to see the full report contents if needed.

  2. Download the report: Export the report in any of the available formats (e.g., JSON, YAML) by clicking the 'Download' button in the bottom right corner.

  3. Export to PDF and print: Use the print-to-PDF button to export and/or print the report in PDF format using your browser's capabilities and preferred settings.

Leveraging the Spyderbat Query Language (SpyQL)

Spyderbat allows you to write Custom Flags using the Spyderbat Query Language (SpyQL). SpyQL enables you to craft precise queries that define the conditions for your Custom Flags.

SpyQL supports complex queries: you can combine multiple conditions, use logical operators (AND, OR, NOT), and apply pattern matching with the matches-pattern operator (~=), regular expressions with the ~~= operator, the equality operator (=), and more.

SpyQL is used for Historical Search in the Console and for Custom Flags. Queries are composed of two parts: the schema, or object type, you are looking for, and the query itself. In Historical Search you must also specify a time window; Custom Flags, however, operate in real time, so that section is not supported. The SpyQL query below is from Historical Search and queries for any Container where the cluster-name field matches the value integrationc2 using the equality operator (=):

(Screenshot: Historical Search query)

You can use Historical Search in the UI to test your Custom Flag queries.

Custom Flags Key Features:

  1. Real-Time Monitoring: Once set up, Custom flags operate in real-time, triggering immediate flags when a record matches the SpyQL query.

  2. Flexibility: You can define flags that range from broad conditions (e.g., anytime a new StatefulSet is created) to highly specific scenarios (e.g., a service account with the cluster-admin role created in a particular namespace by a particular user).

  3. Red Flag vs Ops Flag: You can choose between a Custom Red Flag (Security) or a Custom Ops Flag (Devops) based on your detection needs.

    • Redflag: Indicates a security issue or potential malicious activity.

    • Opsflag: Highlights operational or configuration issues that may need attention.

    Custom flags also allow you to add your own description and select severity options such as low, high, critical, or info.

  4. Integration with Spydertraces: Custom Red Flags may trigger and/or contribute to the score of a Spydertrace just like the built-in Spyderbat detections do.

  5. Custom Flag Operations: The ability to create, delete, edit, disable, and enable custom flags further enhances your control over managing the detection process, and gives the ability to evolve as required.

Getting Started

Currently, Custom Flags are only manageable using the Spyctl CLI. Management via the Console UI is coming soon.

Follow the Spyctl CLI tutorial for setting up Custom Flags here.

Configuration Guide - Kubernetes

This guide provides a detailed explanation of the various configuration options available in the YAML configuration file for the Spyderbat AWS Agent. The configuration file allows you to control aspects of the agent's behavior, such as polling, AWS account details, and integration settings.

Configuration Parameters Overview

Below are key configuration parameters that can be set in the values.yaml file of the Helm chart for the Spyderbat AWS Agent:

Credentials

1. awsSecretsManager

  • Description: Configures AWS Secrets Manager integration to store the Spyderbat registration key.

  • Fields:

    • enabled: Whether to use AWS Secrets Manager for storing the registration key.

    • secretArn: The ARN of the secret in AWS Secrets Manager containing the Spyderbat registration key.

  • Example:

    awsSecretsManager:
      enabled: false
      secretArn: <arn of the secret in secrets manager>

2. credentials

  • Description: Configures AWS credentials and the Spyderbat registration key.

  • Fields:

    • aws_access_key_id: AWS access key ID (optional).

    • aws_secret_access_key: AWS secret access key (optional).

    • spyderbat_registration_key: The Spyderbat registration key.

Spyderbat Configuration Parameters

1. spyderbat_orc_url

  • Description: URL for the Spyderbat orchestration endpoint, used by the agent to communicate with Spyderbat's backend.

  • Example:

2. cluster_name

  • Description: Specifies the cluster name where the AWS Agent is running. This helps in identifying the data source in the Spyderbat UI.

  • Example:

3. awsAgentsConfigs

  • Description: Configures the AWS accounts and services that the agent will monitor.

  • Fields:

    • aws_account_id: AWS account ID to monitor. Set to auto to auto-discover the account ID.

The Helm chart can install one or more AWS agents (one per account to be monitored). The default installation installs a single agent. You can configure multiple agents by providing multiple sections under the awsAgentsConfigs section.

For each section, the configuration options are the same as described in the configuration guide for the single-VM install, which you can consult here.
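As an illustrative sketch only (the exact nesting should be checked against the chart's bundled values.yaml; the account IDs, cluster name, and registration key below are placeholders), a multi-account configuration might look like:

```yaml
spyderbat_orc_url: "https://orc.spyderbat.com"   # orchestration endpoint
cluster_name: "my-cluster"                       # shown as the data source in the UI
credentials:
  spyderbat_registration_key: "<registration key>"
awsSecretsManager:
  enabled: false
awsAgentsConfigs:
  - aws_account_id: auto             # auto-discover the account the agent runs in
  - aws_account_id: "123456789012"   # a second, explicitly listed account
```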

Managing Configuration

  • Updating Configuration: To update the configuration, modify the values.yaml file and upgrade the release using:

  • Validation: Be sure to validate the syntax of the values.yaml file before applying changes, to avoid runtime issues.
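A hedged sketch of both steps follows; the release and chart names here are placeholders, so substitute the names used in your installation:

```shell
# Preview the rendered manifests to catch template/values errors early
helm template spyderbat-aws-agent spyderbat/aws-agent -f values.yaml > /dev/null

# Apply the updated configuration to the running release
helm upgrade spyderbat-aws-agent spyderbat/aws-agent -f values.yaml
```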

How to Configure Event Forwarder Webhook for Panther

Note that this is not meant to be a comprehensive guide to using the event forwarder with Panther.

Panther schema configuration

Panther requires an ingestion schema to ingest log data. An example schema is provided here.

Download the example schema. In the Panther console, under Configure / Schemas, click "Create New" and give the schema a name, such as SpyderbatR0.

Paste the contents of the example schema in the text box. Validate the schema, then save it.

Panther log source configuration

Configure a log source in Panther. In the Panther console, under Configure / Log Sources, click "Create New" and select "custom log formats."

Next, click "Start" under the category for HTTP logs.

Give the source a name, e.g. Spyderbat Forwarder on HOST_NAME (32 chars max)

Select the Custom.SpyderbatR0 schema created in the previous step.

Set the auth method to Bearer and click the refresh button to generate a bearer secret. Then copy the secret.

NOTE: Once you leave this screen, the secret cannot be retrieved again; it must be replaced.

Click the "Setup" button.

The bearer secret must be converted to base64. An easy way to do this with the Unix shell is to type:
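For example, using echo and base64 (YOUR_BEARER_SECRET is a placeholder for the secret copied from Panther; the -n flag keeps the trailing newline out of the encoding):

```shell
# Encode the bearer secret as base64 for the webhook configuration
echo -n 'YOUR_BEARER_SECRET' | base64
```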

Keep this base64 secret handy for the webhook configuration step.

Event forwarder configuration

Edit the /opt/spyderbat-events/etc/config.yaml configuration file.

Configure your API key and org UUID

spyderbat_org_uid and spyderbat_secret_api_key must be valid. Note that API keys are scoped to a user, not an org; it is recommended to create a service user in the Spyderbat UI, grant it access to the appropriate org, and generate the API key for that service user. API keys expire after 1 year; plan ahead to keep the key updated.
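A minimal sketch of the relevant keys in config.yaml (the values are placeholders; the rest of the file is omitted):

```yaml
# /opt/spyderbat-events/etc/config.yaml
spyderbat_org_uid: "<your org UID>"
spyderbat_secret_api_key: "<API key generated for the service user>"
```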

Add a filter expression such as the one below to capture relevant data

Add a webhook configuration

Your panther source will have an HTTP Ingest URL associated with it. Retrieve it and the secret you created earlier on, and add the webhook configuration:

Save the config.yaml file and restart the event forwarder:

sudo systemctl restart spyderbat-event-forwarder.service

Tail the logs to check for errors:

sudo journalctl -fu spyderbat-event-forwarder.service

Create a Golden Image with the Nano Agent Pre-Installed

Template Spyderbat Nano Agent install via Golden Image for environments with auto-scaling and automation requirements.

Published: July 25, 2022

Using a Golden Image, a template of a virtual machine (VM) used for consistent deployment, reduces errors, ensures consistency, and lowers the level of effort during deployment. The use of a Golden Image is also common in environments with autoscaling and automation.

Below are the steps that should be followed to include the Spyderbat Nano Agent into your Golden Image:

1. Identify the VM that you want to base your Golden Image on.

2. Install the Spyderbat Nano Agent on this VM by choosing to add a new source in the Sources section of the UI.

3. After you have successfully installed and registered the Spyderbat Nano Agent, run the following command to stop the Nano Agent (it will take a few seconds to fully stop):

sudo systemctl stop nano_agent.service

4. Remove the unique machine ID (MUID) that associates the Nano Agent with the specific VM it is running on, by executing the following command:

sudo rm /opt/spyderbat/etc/muid

5. Save this VM as your Golden Image using the respective Cloud Platform or Virtual Machine functionality.

6. The Nano Agent service will start automatically when the virtual machine is loaded and boots, at which time a new, unique MUID will be generated and associated with the specific VM.

See also How to perform an unattended Spyderbat Nano Agent installation on AWS for additional information.

Install the Nano Agent

Once you have successfully logged into your organization, the next step is to install the Nano Agent.

Installing the Spyderbat Nano Agent is a requirement for using the multitude of security and operations features that Spyderbat has to offer.

Prerequisites

  • Identify the Linux system(s) or Kubernetes Cluster you wish to secure with the Nano Agent.

  • View the supported operating systems to see if your environment is compatible.

Installation

  1. Install the Nano Agent on a Standalone VM - This installation path is tailored for setting up the Spyderbat Nano Agent on a standalone Virtual Machine (VM). Follow the comprehensive steps provided in the link. This approach is ideal for environments with one or more persistent VMs requiring the security and visibility offered by the Nano Agent.

  2. Install the Nano Agent across a Kubernetes Cluster - If you are working with a Kubernetes Cluster and wish to deploy the Nano Agent across the entire cluster, refer to this installation path. It is designed for environments utilizing Kubernetes orchestration, allowing for the automatic deployment and management of the Nano Agent across multiple nodes within the cluster.

For organizations with both types of environments, there is no issue installing some Nano Agents via option 1 and some via option 2.

Linux Standalone

This section covers: adding a source to your monitoring scope in Spyderbat console, generating installation scripts/commands, and running the agent installer on the target machine.

Published: August 23, 2021

The Spyderbat Nano Agent is an extremely lightweight collector that provides unprecedented insight into Linux systems and their causal activity and relationships. It leverages proven technology, the extended Berkeley Packet Filter ("eBPF"), to collect targeted, non-human-readable data from modern 64-bit Linux distributions on both x86 and ARM based architectures.

Step 1

Select “Sources” from the left-navigation menu. The wizard will launch automatically if you don’t have any agents installed yet. Doing so will take you to a brief wizard to guide you through the installation in a few simple steps. Click on the blue button to get started.

Step 2

You will see two choices regarding the target system you want to install the Nano Agent on.

  • If you are installing on a virtual Linux system in AWS, select “EC2 instance” – where you’ll want to give the AWS instance an IAM (read only) role to grab metadata like Cloud Tags etc.

  • If you are installing on any instance of Linux (virtual or physical) select “Standalone”.

Ensure you are installing on a Linux system supported by Spyderbat. You can view a complete list of supported versions in the Spyderbat System Requirements documentation.

After you have made your selection, hit “next step”.

Note: the system you're installing Spyderbat's Nano Agent on must have outbound access on port 443 to https://orc.spyderbat.com.
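One way to verify that connectivity before installing (assuming curl is available on the target system; any HTTP response code printed indicates the endpoint is reachable):

```shell
# Check outbound HTTPS access to Spyderbat's orchestration endpoint
curl -sS -o /dev/null -w '%{http_code}\n' https://orc.spyderbat.com
```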

Step 3

In the next step, you'll see a command you can copy and paste into a terminal on the target system. If you do not have curl installed on your system, select the wget tab to copy that command instead.

Note - You will need Sudo permissions to install the Spyderbat Nano Agent

The UI provides you with feedback by displaying check marks of the install progress. Once the Spyderbat Nano Agent is installed, registered with Spyderbat, and transmitting data, you will see that the agent was installed successfully, both in your terminal and in the Spyderbat UI.

Once you see every checkmark displayed, click on ‘Next Step’ to be directed to the Sources page. You should now see the system you just installed the Nano Agent onto.

You should see that the source is healthy, the last active time should indicate recent activity and the sparkline will start to indicate a summary of system activity over time. You can rename the Source if necessary.

You are now ready to jump into an Investigation! Clicking the "View Spydertrace" link for the source will take you, by default, into the last hour of activity for that system in the "Investigate" view.

Congratulations – you installed the Spyderbat Nano Agent!

Happy tracing!

Policies

Policies are the main way for users to configure their Spyderbat environment. They provide users with a way to generate tailored alerts and tune out noise. Currently, policies fall into one of two categories: Guardian and Suppression.

  • Guardian Policies are designed to establish the expected behavior of the resources within their scope, be it Linux services, containers, or Kubernetes clusters.

  • Suppression Policies are a way of tuning out the noise of Spyderbat's built-in detections. While Spydertraces aim to reduce the number of alerts a user must investigate, varying factors can lead to situations where suppression policies are necessary.

The available policy types are summarized below:

  • linux-service (Category: Guardian Workload). Supported Selectors: Cluster, Machine, Service. Supported Response Actions: makeRedFlag, makeOpsFlag, agentKillProcess, agentKillProcessGroup. Supports Rulesets: No.

  • container (Category: Guardian Workload). Supported Selectors: Cluster, Machine, Namespace, Pod, Container. Supported Response Actions: makeRedFlag, makeOpsFlag, agentKillProcess, agentKillProcessGroup, agentKillPod. Supports Rulesets: No.

  • cluster (Category: Guardian Ruleset). Supported Selectors: Cluster. Supported Response Actions: makeRedFlag, makeOpsFlag, agentKillPod. Supports Rulesets: Yes.

  • trace (Category: Suppression). Supported Selectors: Cluster, Machine, Trace, User. Supported Response Actions: N/A. Supports Rulesets: No.

Related Pages

  • Response Actions - Actions that can be taken by Guardian Policies.

  • Selectors - Reference documentation on the various selector types.

  • Rulesets - Rulesets supported by some policies.

Install Spydertop CLI

Learn how Spyderbat leverages kernel-level system monitoring and public APIs to expand HTOP functionality to allow analysts to look into system anomalies days or even months later.

Published: September 13, 2022

HTOP’s Strengths and Shortcomings

There is a program called "top" on most Linux systems for simple system monitoring. The tool lists the CPU and memory usage for the computer and each process, just as Task Manager does on Windows. HTOP is a more advanced and user-friendly version of top, displaying graphs in addition to raw values and adding colors for readability. Both programs are widely used for monitoring Linux systems, allowing administrators to track processes' resource usage or quickly get a list of running tasks.

These tools are only designed to show the state of the system at the current moment. They lack the ability to record and display information even over the last few seconds as Task Manager does. This limitation is understandable since neither program is designed to log system performance or give a historical understanding of the machine. But what if the behavior you want to profile is intermittent, and you cannot be on the machine to run top when it happens?

Workload Policies

Published: April 29, 2024

What are Workload Policies?

Workload Policies are the most granular form of Guardian Policy. They define the allowed process and network activity for well-defined workloads. Currently, policies are supported for the following workload types:

  • Containers

Pre Deployment Environment Data Collection Script

Optimize your Helm Chart values to ensure proper sizing of the Spyderbat Nano Agent parameters for your K8s environment.

What is Pre-Deployment Collection Script and How It Works

To optimally configure and size the Spyderbat Nano Agent and backend to support your Kubernetes cluster, we have created a script that collects useful data and metrics that Spyderbat can review to optimize the Helm installation of our agents and size the Spyderbat backend appropriately.

AWS Unattended Install

Automatic installation of the Spyderbat Nano Agent on an AWS EC2 instance with auto-scaling groups using the instance launch wizard.

Published: November 19, 2021

Introduction

In this walkthrough, we'll show how you can install the Spyderbat Nano Agent automatically when an AWS EC2 instance is created. This can be particularly useful for ephemeral instances, such as when leveraging AWS auto scaling groups. We'll walk through creating an EC2 instance in the AWS console using the instance launch wizard, and leverage the ability to pass in user data at instance creation time; for more information about user data and cloud-init, see the AWS docs. For installing the Spyderbat Nano Agent in an attended fashion, see the walkthrough guide.

Install Spyctl CLI

Spyctl, an open source CLI tool, allows you to view and manage resources within your Spyderbat environment.

Source code:

The initial step in utilizing any software package is ensuring its correct installation, so let's get started by walking through the installation process for Spyctl.
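As a sketch of that process (assuming a Python 3 environment with pip on your PATH; Spyctl is distributed via PyPI):

```shell
# Install Spyctl from PyPI
pip install spyctl

# Verify the install and list available commands
spyctl --help
```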

Prerequisites

Spydertrace Summarize

Overview

Spyderbat's Summarize feature provides a quick, structured summary of a Spydertrace investigation, enabling users to understand key details without manually analyzing the trace. This feature enhances threat detection efficiency and streamlines the investigative process.

Note: Summarize is available only on an opt-in basis per organization. It requires approval to send data to OpenAI. To enable the feature, navigate to Admin → Organization Management → AI Management. Here, you can opt in or out, track your monthly usage quota, and view the Recent Summarize Usage Log.

Actions

Overview of manual response actions that can be executed in the UI, including killing a process or killing a pod.

The Spyderbat platform includes a powerful response capability: the option to kill processes on Linux machines or pods within Kubernetes clusters directly through the UI. This feature empowers security teams to take immediate action during an investigation, stopping active threats as soon as they are identified.

Whether dealing with malicious processes on a machine or a compromised pod within a Kubernetes cluster, users can mitigate the threat swiftly and efficiently.

This capability ensures real-time responsiveness by terminating the identified threat within seconds. The action is recorded in an audit log for accountability and compliance purposes.

Who Can Kill a Process or Pod?

Killing a process or pod requires specific permissions for the logged-in user. By default, these permissions are preset for the following Spyderbat Roles:

Spyderbat AWS Agent

Overview of the AWS Agent, deployment options and how to get started

The Spyderbat AWS Agent enables AWS Context Integration in the Spyderbat Platform. This integration provides a comprehensive view of cloud assets and IAM configurations, enhancing the ability to detect and investigate potential security incidents.

For an overview of the AWS Agent and its role in the Spyderbat Platform, refer to the AWS Context integration page in the integration concepts section.

Permissions Required by the Spyderbat AWS Agent

To function effectively, the Spyderbat AWS Agent requires specific permissions to collect data from AWS APIs. Below are the key permissions grouped by AWS services:

Guardian & Interceptor

The Spyderbat Guardian Feature is designed to enhance security within your Spyderbat environment. It provides a robust framework for defining and enforcing expected behavior through Guardian Policies. These policies are crucial for maintaining the integrity of your systems and ensuring that only authorized activities are permitted.

Guardian Policies

Guardian Policies are the cornerstone of Guardian, serving as the rulebook for allowed and prohibited activity within your environment. They come in two primary forms:

Workload Policies

Workload Policies are tailored to containers and Linux services, specifying a whitelist of permitted activities. This ensures that only known, safe operations are allowed to execute, providing a first line of defense against unauthorized or malicious behavior.

Read more about Workload Policies here.

Key Components:

  • A comprehensive list of allowed process and network activity.

  • Scope: The selectors detailing the specific containers or services to which the policy applies.

  • Response: The mechanism by which the policy takes action.

Ruleset Policies

Ruleset Policies offer a more flexible approach, supporting policy-agnostic rulesets that can be applied across different environments. These rulesets contain both allow and deny rules, providing a granular level of control over the behavior within your systems.

Read more about Ruleset Policies here.

Key Components:

  • Allow Rules: Explicitly permit certain actions, overriding any broader deny rules that may be in place.

  • Deny Rules: Define actions that are explicitly prohibited, regardless of other allow rules.

  • Reusability: Rulesets are policy-agnostic and can be reused across different policies and environments.

Interceptor

The Interceptor feature set allows Guardian to take response actions based on policy violations. When a policy violation occurs, Interceptor Response Actions can trigger actions such as generating alerts, or blocking the offending activity.

More details on response actions can be found here.

Tutorials

Tutorials detailing the creation of the various policy types can be found in the tutorials section of this documentation.

  • Guardian Tutorials

Conclusion

The Spyderbat Guardian Feature is a powerful tool for maintaining security and compliance in containerized and Linux service environments. By effectively utilizing Guardian Policies, you can ensure that your systems operate within the defined parameters of expected behavior, safeguarding against potential threats.

For more detailed information and advanced configurations, please refer to the policy reference guide.


  • EC2:

    • ec2:Describe*

  • EKS:

    • eks:List*

    • eks:Describe*

  • IAM Roles and Policies:

    • iam:Get*

    • iam:List*

    • iam:Put*

  • STS (Security Token Service):

    • sts:AssumeRole

    • sts:AssumeRoleWithWebIdentity

  • The agent also supports consuming configured secrets (registration key) in AWS Secrets Manager - which would require an extra permission to access the configured secret arn.

    Permissions are configured using a custom AWS policy attached to the IAM Role that the AWS Agent assumes. How the agent assumes this role depends on the deployment options and is discussed in the more detailed deployment guides.

    Deployment Options for the AWS Agent

    Spyderbat offers multiple deployment options for the AWS Agent to suit different environments and requirements. Below are the currently available deployment methods:

    1. Hosted on an AWS VM: You can deploy the AWS Agent on a virtual machine within your AWS account. This option gives you full control over the agent and its environment.

    2. Hosted on a Kubernetes Cluster: The AWS Agent can be deployed as a Kubernetes pod within a cluster. This is suitable for users who want to integrate AWS context alongside their Kubernetes workloads.

    For detailed installation instructions for each deployment option, refer to the respective guides:

    • AWS VM Deployment Guide

    • Kubernetes Deployment Guide

    Getting Started with the AWS Agent

    To begin using the Spyderbat AWS Agent:

    1. Choose a Deployment Method: Decide whether to deploy the agent on an AWS VM or a Kubernetes cluster.

    2. Deploy the Agent: Follow the instructions in the relevant deployment guide to deploy the AWS Agent.

    Once deployed, the agent will start collecting cloud context and feeding it to the Spyderbat Platform, where it can be used for enhanced visibility, detection, and investigation.

    AWS Context integration page in the integration concepts section

    | Type | Policy Kind | Selectors | Response Actions | Rulesets |
    | --- | --- | --- | --- | --- |
    | linux-service | Guardian Workload | Cluster, Machine, Service | makeRedFlag, makeOpsFlag, agentKillProcess, agentKillProcessGroup | No |
    | container | Guardian Workload | Cluster, Machine, Namespace, Pod, Container | makeRedFlag, makeOpsFlag, agentKillProcess, agentKillProcessGroup, agentKillPod | No |
    | cluster | Guardian Ruleset | Cluster | makeRedFlag, makeOpsFlag, agentKillPod | Yes |
    | trace | Suppression | Cluster, Machine, Trace, User | N/A | No |

    secretArn: The ARN of the secret in AWS Secrets Manager containing the Spyderbat registration key.

  • aws_secret_access_key: AWS secret access key (optional).

  • spyderbat_registration_key: The Spyderbat registration key.

    sudo rm /opt/spyderbat/etc/muid
    How to perform an unattended Spyderbat Nano Agent installation on AWS
    awsSecretsManager:
      enabled: false
      secretArn: <arn of the secret in secrets manager>
    credentials:
      spyderbat_registration_key: <spyderbat registration key>
    awsAgentsConfigs:
      - aws_account_id: auto
    spyderbat_orc_url: https://orc.spyderbat.com
    cluster_name: my-cluster
    helm upgrade aws-agent spyderbat/aws-agent -f values.yaml
    echo -n YOUR_SECRET | base64
    expr: |
        schema startsWith "model_spydertrace:"
        and suppressed == false
        and (
            (score ?? 0) > 50
            or len(policy_name ?? "") > 0
        )
    webhook:
      endpoint_url: PANTHER_INGEST_URL
      compression_algo: zstd
      max_payload_bytes: 500000
      authentication:
        method: bearer
        parameters:
          secret_key: YOUR_BASE64_SECRET

    Enter Spydertop

    Spydertop is an open-source tool developed by Spyderbat that provides a solution for this currently unfulfilled use case. Utilizing Spyderbat’s kernel-level system monitoring and public APIs, it provides the same in-depth information as HTOP and extends those abilities back in time: Spydertop allows analysts to look into system anomalies days or even months after they occur.

    How it works

    Imagine a Kubernetes node that has the Spyderbat Nano Agent installed. The agent collects the data necessary for Spydertop to function; for more details, refer to the Installation Guide or watch this video on how to get it installed. On this system, there happens to be an application with a bug that causes it to continuously use up more memory. At 2:00 in the morning, the container reaches its memory limit and automatic safeguards restart the application. It begins to function correctly afterward, showing no signs of excessive memory usage.

    In the morning, an analyst sees the crash report and decides to investigate. They start Spydertop on their own machine, and it uses Spyderbat’s public API to collect all the resource usage records from that early morning crash, as well as the active processes, connections, and more. Using these records, Spydertop displays the memory usage of the machine: 95% at 1:30 AM. By stepping through time, the analyst sees the memory slowly increase until the crash. Next, they sort the running processes by memory usage, find the buggy application, and can now resolve the issue.

    How to use Spydertop

    You can try out Spydertop by checking out the public repository or running the docker image. If you don’t have an API key yet, it will guide you through setting one up. After that, it is as simple as picking a machine and what time to investigate (both of which can be passed as command-line options for convenience).

    Once the necessary data has been loaded from the API, Spydertop presents a simple CLI interface. Spydertop aims to make the transition easy for users already accustomed to HTOP, so the user interface, buttons, and keyboard shortcuts are designed to be similar.

    The first few lines display machine-wide resource usage information, such as CPU core usage and disk reads and writes. Taking up the rest of the screen is the process table, which shows resource usage and details for individual processes. Several other tabs are available in this table to show the active sessions, connections, flags, or listening sockets. At the bottom of the screen is a list of quick shortcuts, including a help menu where you can find more detailed information and a list of key binds. A description of command-line options is also available by passing the --help flag.

    Get started using Spydertop for free by installing the python CLI, or try it without an account by running the docker image with a set of example data:

    Here is the built-in help:


  • Kubernetes Pod Containers

  • Linux Services

  • Workload policies shine when you have a relatively stable set of activity. For example, if your organization uses third-party or custom containerized applications, each container will typically run a few processes and make a few network connections. This activity can be whitelisted and the policy should stabilize in a short amount of time.

    Workload policies do not perform as well for dynamic, constantly changing activity. A development container is a good example: when engineers are constantly logging in and running unique and varied commands, it becomes impractical to whitelist activity in this way, and the policy will require constant upkeep.

    Using Workload Policies

    As soon as the Spyderbat Nano Agent is installed on a machine, it automatically gathers information about the workloads running on it. Spyderbat compiles Fingerprints for each Linux Service and Container it sees. You can think of a Fingerprint as the observed process and network activity for a single workload. Fingerprints are used to build Workload Policies.

    Note: While it is technically possible to define your own Workload Policies it is not advisable to do so. In general, it is best to leverage Spyderbat's assisted creation and update features using Fingerprints and Deviations.

    Here is an example of a Fingerprint for a single Container:

    Fingerprints Page

    By default, a Workload Policy is generated by combining all related Fingerprints into a single document. In the image above you can see that this specific container image has 14 instances deployed in our organization. So if one container has network connections that the others do not, the created policy will contain that extra activity.

    Once applied, a policy will constantly monitor for deviant activity within the policy's selector scope. In the policy above, that means any container using the docker.io/library/nginx:latest image will be evaluated against this policy.

    Policy Creation

    When deviant activity is detected, multiple things can occur. First, the policy will generate a Deviation. This is a record that contains all of the information required to update the policy with this new activity. Secondly, the policy will take any configured response actions. By default, the policy will generate a Red Flag which is part of Spyderbat's Scout feature. These red flags will become part of Spydertraces that are viewable on your dashboards. Other actions include killing the deviant process, or in Kubernetes you can kill the entire pod.

    Script Output Details

    The script collects the following data in your environment:

    1. Summary metrics about the number of nodes, pods, deployments, replicasets, daemonsets, services and namespaces, which helps us assess the size and load on your cluster.

    2. Information about the nodes of the cluster, including their provisioned capacity and any taints applied to the nodes, which helps us understand the headroom available in your cluster to add our agents, and helps us pro-actively recommend configuring tolerations on our agents to ensure visibility on all nodes.

    3. Cumulative metrics about what resource requests currently running pods are requesting (CPU, memory), which helps us understand the headroom available in your cluster to add our agents.

    4. The name and namespaces of the deployments, daemonsets and services running on your cluster, which helps us assess if any other daemonsets or deployments could interfere with our agents and helps us discover if your cluster has node-auto-scaling configured.

    5. PriorityClasses currently present for the cluster which helps us assess whether our agent will have sufficient priority to get scheduled on any new nodes being added to the cluster.

    The script does NOT collect any of the following:

    • Implementation and status details in the 'spec' and 'status' sections of the pods, deployments or daemonsets.

    • Any sensitive data that might be present in these sections of the k8s resources (environment variables, configs)

    Script Execution Prerequisites

    The Spyderbat Pre-Deployment Collection script should be run from a machine you currently use to manage your cluster.

    Below are the requirements for the script to run successfully:

    1. python3 https://www.python.org/downloads/

    2. kubectl and a valid kube config file https://kubernetes.io/docs/tasks/tools/

      The script will call on the kubectl command to collect cluster information. The cluster(s) to install Spyderbat on should be one of the contexts configured in the kube config file.

    Script Execution Steps

    First you will need to download the cluster_collect.py script from this public repository.

    After downloading the script, run it as

    ./cluster_collect.py -h

    OR

    python3 cluster_collect.py -h

    This prints the usage info:

    usage: cluster_collect.py [-h] [-c CONTEXT] [-o OUTPUT]

    Here are available options:

    • -h, --help show this help message and exit

    • -c CONTEXT, --context CONTEXT kubectl context to pull from (if none provided, all contexts in the kubectl config will be analyzed)

    • -o OUTPUT, --output OUTPUT output file (default is spyderbat-clusterinfo.json.gz)

    By default, the script will collect information for all clusters configured in your kubeconfig file.

    If you want to collect only for one cluster, use the -c CONTEXT flag, with the name of the context (as available in kubectl config get-contexts) to collect for.

    For example:

    ./cluster_collect.py -c qacluster1

    By default, the output will go into a file called spyderbat-clusterinfo.json.gz. You can use the -o flag to use another filename.

    Output Delivery and Review

    If the script ran successfully, please send the output file back to Spyderbat. We will review the findings with you to discuss the next steps for your deployment and provide recommendations on how to best configure your deployment parameters to ensure that all Spyderbat Nano Agents come online, initialize fully, and successfully register with the Spyderbat backend.

    Here is an example of the output file data:


    If you would like to review an example of a full file output, please contact us.

    Step by step guide

    1) The first step is to retrieve the install command for your organization: click on the “New Source” button in the Sources section of the product.

    Spyderbat Nano Agent installation on AWS step 1

    2) Once you click on this button, you will be launched into the agent installation wizard and presented with a link to install the agent. Copy the “wget” version of the install command and save it to a notepad.

    Spyderbat Nano Agent installation on AWS step 2

    3) Now go to the AWS EC2 management console.

    4) Go to Instances and use the Launch Instances wizard to request one or more instances.

    5) Choose the desired AMI for the new instances and click Select.

    6) Choose the desired instance type. Then click Configure Instance Details.

    7) At the bottom of the “Configure Instance Details” screen, you will see an “Advanced Details” section with an input box for “User data”

    Spyderbat Nano Agent installation on AWS step 3

    8) In the user data field, we will enter a shell script to run the install command we copied to our notepad, similar to the below (for RedHat family distributions):
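The script has the following general shape for RedHat family distributions. The wget install command itself is unique to your organization, so a placeholder comment stands in for it here:

```bash
#!/bin/bash
# Make sure the utilities used by the installer are present (RedHat family).
yum install -y wget lsof
# Paste the wget install command copied from the installation wizard here,
# with the leading "sudo -E" removed (user data runs as root at boot).
```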

    The first line indicates this is a bash shell script, the second line ensures the ‘wget’ and ‘lsof’ utilities are installed, and the third line is the install command you copied from the installation wizard. Note that we have omitted “sudo -E” from the copied command, since the user data script runs as root when the instance boots. For Debian family distributions, the following can be used:
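A Debian-family equivalent differs only in the package-manager line (again, substitute your organization's install command for the placeholder):

```bash
#!/bin/bash
# Make sure the utilities used by the installer are present (Debian family).
apt-get update && apt-get install -y wget lsof
# Paste the wget install command copied from the installation wizard here,
# with the leading "sudo -E" removed (user data runs as root at boot).
```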

    9) Continue with the steps in the install wizard, or jump to Review and Launch if you are done.

    10) When the instance is created in AWS, it should now download and install the agent as part of the boot sequence (for reference, the cloud-init output log file is created at /var/log/cloud-init-output.log on the created instance) – note you should ensure the instance(s) that are created have outbound access on port 443 to https://orc.spyderbat.com.

    11) Check the “Sources” section of the Spyderbat console and you should now see your new instance appear in your list of sources.

    You can leverage the user data in a similar fashion when using other mechanisms to create AWS EC2 instances, for example when specifying a launch template for an Auto Scaling group.

    Click here for more information about Spyderbat’s Nano Agent

    Python 3.8 or newer

    Installation Command

    Installing spyctl globally requires the pipx utility.

    sudo apt install pipx

    Install spyctl using pipx.

    pipx install spyctl

    Verify the installation.

    spyctl --version

    Alternatively, you can use a virtual environment to install spyctl.

    python -m venv spyctl
    source spyctl/bin/activate
    pip install spyctl

    Verify the installation.

    spyctl --version

    Note: depending on your system, you may need to use python3 instead of python. If you go the virtual environment route, you may need to install the venv package first.

    apt install python3.X-venv

    Where python3.X is the version of python you have installed.

    To install Spyctl, simply run this command in your terminal of choice:

    To verify the installation:

    Enabling Shell Completion

    To enable shell completion, follow these steps:

    The default version of Bash for Mac OS X users does not support programmable shell completion. Guides like this will help you install a newer version of Bash.

    Create the Spyctl directory if you haven’t already.

    Generate the shell completion script.

    Add the following line to the end of ~/.bashrc.

    Generate and save the shell completion script.

    Create the Spyctl directory if you haven’t already.

    Generate the shell completion script.

    Add the following line to the end of ~/.zshrc.

    After modifying the shell config, you need to start a new shell in order for the changes to be loaded.

    https://github.com/spyderbat/spyctl

    By default, a monthly quota of 50 is provided, with each trace summary consuming one. You can contact us to request an increase.

    What is Summarize?

    The Summarize feature in Spyderbat generates a concise summary of a Spydertrace, highlighting critical security insights.

    Behind the scenes, it takes the Spydertrace as input, sends it to OpenAI, and generates a concise, easy-to-understand summary.

    How to Use Summarize

    There are two ways to generate a summary: Manual and Automatic.

    1. Manual Summarization

    To manually summarize a Spydertrace, click the Summarize button. The summary generation process may take a few seconds.

    Example 1: Search

    • Search for the relevant Spydertrace.

    • If you find a high-score Spydertrace in a restricted cluster, and want to quickly understand its details, click Summarize to generate a summary instantly.

    Example 2: Investigation

    • Within the Spyderbat Investigation view, click Summarize in the top-right to generate a summary.

    • Based on the insights, take immediate action as needed.

    2. Automatic Summarization

    Automatic summarization enables AI-powered summary generation for every Spydertrace saved search.

    • When enabled, the system automatically generates structured summaries for saved Spydertrace investigations.

    Example:

    If you want a summary for every high-score Spydertrace (e.g., score 100), follow these steps:

    • Search for the high-score Spydertrace.

    • Add it to a saved search.

    • Add a description and target as desired.

    • In Additional Settings, enable Auto AI Summarization and Save.

    Once enabled, every time a high-score Spydertrace occurs, you will receive a notification with an investigation link to review the Spydertrace. With automatic summarization, you don't have to wait for the summary to generate—it is ready instantly.

    Note: Only enable Automatic AI Summarization based on your organization's quota.

    You can also view summarized traces in AI Management's Recent Logs.

    Benefits of Summarize

    ⏳ Time Efficiency

    Reduces manual effort in analyzing complex security traces.

    ⚡ Quick Incident Response

    Enables security teams to respond faster with key insights readily available.

    🔍 Improved Security Insights

    Highlights critical security concerns such as unauthorized access, suspicious executions, and potential breaches.

    📑 Simplified Investigation

    Provides a structured view of incidents, aiding forensic analysis and remediation planning.

    Conclusion

    Spyderbat’s Summarize feature enhances security investigations by providing automated, structured, and insightful summaries of activities. By leveraging this feature, security teams can quickly detect, understand, and mitigate potential threats.

    • Admin

    • Power User

    Roles for users can be assigned or modified in the Admin section, under Organization Management. For more details, see the User and role management overview.

    How to Kill a Process or Pod

    1. Initiating an Investigation

    The process starts with an investigation, which can result from pivoting from a high-scoring trace or a security red flag, drilling down from a dashboard card, or simply from search results that surface specific processes or Kubernetes resources of interest.

    Once you have identified the process of interest in the Investigation section of the UI, you can take action.

    In this example, we identified a netcat server running on port 9000

    2. Taking the Kill Action

    The kill action can be taken either at the bottom of the graph view, or within the process details view. You can opt to kill the process, or, if the process is running in a container from a Kubernetes cluster, to kill the entire pod.

    3. Providing a Reason and Confirming the Kill Action

    After selecting the "Kill Process" action, a confirmation dialog will appear, prompting you to provide a reason for the action. This reason will be recorded in the audit log for accountability and future reference. Input a clear and concise reason for the kill action, for example "Terminating malicious process" or "Shutting down compromised pod."

    To prevent accidental terminations, Spyderbat requires confirmation before executing the kill action. After entering your reason, click 'YES, KILL PROCESS' to proceed with the process termination.

    The process for killing a pod is exactly the same, just select Kill Pod and follow the confirmation prompt.

    4. Action Execution

    Once confirmed, Spyderbat will automatically terminate the process or pod within seconds. A popup at the bottom of the page will provide confirmation.

    5. Review and Audit Logging

    After a process is killed, its icon will be updated to reflect that it is now defunct, and the process details will contain action audit log information.

    Every kill action, including the reason for the termination and user details, is recorded in Spyderbat’s comprehensive audit log. You can review this log to track all interventions taken during an investigation.

    You can find a full log of all actions taken in your account by navigating to the Reports, Action Log section.

    In the actions log you will find what type of action was taken, the action status, when it was created, the action result code, who took the action, the reason provided, and what process or pod was impacted.

    You can filter for specific actions you are looking for by clicking on "Filters", and adjust the columns view by clicking on "Columns".

    6. Next Steps

    After the kill action, you should continue to identify the root cause of the unexpected behavior you chose to terminate.

    It is possible another process is still active that could respawn the same type of process you just killed, so reviewing running processes again is a good idea.

    Similarly, if you chose to kill the pod, be mindful that a new pod might be automatically created by the cluster, exhibiting the same threat you tried to eliminate. Confirm that the behavior you want addressed is not introduced by a higher-level Kubernetes resource controller, such as a Deployment or StatefulSet that was compromised.

    Click here to learn more about Spyderbat Investigations

    Saved Searches

    What are Saved Searches?

    Saved Searches in Spyderbat provide a convenient way to store your Search queries in one place, eliminating the need to reconstruct them each time. They also allow you to set up notifications via Email, Slack, PagerDuty, or Webhooks, ensuring you're notified with full context whenever the search criteria are met.

    How to Use Saved Searches in Console

    The Saved Searches page can be accessed from the "Search" section in the side panel. Saved Searches are incredibly simple to use. Here's a quick example to get you started in 4 steps.

    Example Use Case: Monitoring new Cronjobs

    1. Run a Query

      • Enter your desired query for Cronjobs, such as metadata.name ~= "*". Saved Searches eliminate the need to repeatedly construct this query.

      • Use the "Search" button to ensure it works as expected.

    After creating a saved search, you can view and manage it on the Saved Searches page. There, you can edit the conditions, run the query, enable or disable it, or delete it as needed.


    Managing Saved Searches with Spyctl

    Overview

    Spyderbat's Spyctl offers a command-line interface to create, retrieve, and edit Saved Searches efficiently. This document provides a detailed guide on how to perform these actions.


    Retrieving Saved Searches

    To retrieve all existing Saved Searches, use the following command:

    This command lists all Saved Searches currently available in your environment.


    Creating a Saved Search

    The spyctl create saved-query command allows you to define and save a new query. To see all available options, use:

    Example Command Usage:

    Note that the spyctl search --list-schemas command provides a list of all available schemas, helping you identify which schemas are accessible for querying.


    Editing a Saved Search

    To edit an existing Saved Search, use the spyctl edit saved-query command. You need to provide the query ID or name as an argument.

    Replace <NAME_OR_ID> with the actual ID or name of the Saved Search you want to edit.

    You should see "Successfully edited Saved Query 'query:id'" after editing the YAML and applying the change.

    Notification Targets

    What are Notification Targets?

    Notification Targets are named destinations where notifications can be sent.

    You create Notification Targets to receive notifications via email, Slack, AWS SNS, and webhook. You specify which Notification Targets to use when creating Notification Configurations.

    Using Notification Targets

    Notification Targets can be referenced while configuring notifications for Notifiable Objects using Spyctl. You can either specify a Notification Target directly or a Notification Template that maps specific targets to templates, as shown below.

    Example usage with Spyctl:

    Example:

    Usage:

    The spyctl notifications configure command allows notifications to be sent either using custom templates with Targets or directly via Targets (using the default template).

    Types:

    Emails

    Email Notification Targets contain a list of email address destinations for notifications.

    Slack

    Slack Notification Targets contain a single Slack Hook URL destination for notifications.

    Webhook

    Webhook Notification Targets contain a single generic webhook URL destination for notifications.

    PagerDuty

    PagerDuty Notification Targets contain a single routing key used to send notifications to a specific PagerDuty service.

    Manage Notification Targets Using Spyctl

    To start creating Targets, follow our tutorial using spyctl:

    Quick Start Tutorial

    If you already have Targets set, start configuring Spyderbat Notifications using spyctl.

    How to Set up Agent-Health Notifications Using Spyctl

    Overview

    The spyctl create agent-health-notification-settings command in Spyctl allows you to configure notifications for agent health events. This helps you stay informed about the status (Unhealthy, Offline, Online, Healthy) of agents in your environment.

    Prerequisites

    Before configuring agent health notifications, ensure you have:

    • Spyctl installed

    • Spyctl configured with a context

    • Familiarity with Spyderbat Notifications

    • Familiarity with Notification Targets


    Step-by-Step Guide

    Step 1: Identify Notification Targets

    Before setting up agent health notifications, ensure you have configured notification targets. These can include:

    • Email

    • Slack Channel

    • Webhook

    • PagerDuty


    Step 2: Create an Agent Health Notification Setting

    Use the below command to create a new notification setting. Once configured, agent health alerts are received in real-time on the chosen targets.

    Available Options

    Option
    Description

    Creating an agent-health-notification-settings

    This command creates an agent health notification setting named Agent Health Alerts, which triggers notifications for unhealthy agents and sends them to the specified targets.


    Step 3: Edit an Existing Notification Setting

    To modify an existing agent health notification setting, use:

    For example:


    Step 4: List All Existing Notification Settings

    To view all configured agent health notification settings, run:

    To get a particular agent-health-notification-settings, use <NAME_OR_UID>.


    Step 5: Delete an Agent Health Notification Setting

    To stop receiving agent health notifications, use:

    For example:

    How to Use the Investigations Feature in Spyderbat

    Quick look at the causal graph in the process investigation section of the Spyderbat console, tips on how to add and remove data from the causal graph view and how to share Investigation permalinks.

    Published: August 20, 2021

    Any Record in Spyderbat investigated from a Search or Dashboard card can be viewed in the context of its Causal Tree. From within the Investigation section, click on the ‘star’ to the right of any Record in the Records table.

    On the top of the Causal Tree, there are a number of options.

    • Clear the Causal Tree by clicking the trash can.

    • Use the undo/redo buttons to undo or redo actions performed in the Causal Tree (e.g. adding or removing nodes).

    • The next icon auto-focuses the view.

    • The magnifying glass icons zoom in or out of the Causal Tree, the same as using the scroll wheel on your mouse.

    • The “save to datalayer” button is extremely useful – it allows you to save whatever is displayed on the graph as its own datalayer or subset of records.

    Enabling only that data layer (by disabling any others) allows you to explore only that data set in the Causal Tree and Records table. This can be used to view every process (or command that was executed) in both the tree and in a tabular format without any extraneous data. In addition, by focusing on only a Data Layer saved from the Causal Tree:

    • Use the “Previous Node” and “Next Node” buttons located at the bottom of the Causal Tree to cycle through the tree chronologically

    • Use the “Copy Investigate Link” button to share a very focused set of activity or the story of an attack with a colleague or for future reference.

    A left-click selects a node. This displays more information about the node in the Details panel. It also highlights relevant records in the Records table tab.

    Right-clicking a node is very useful for both removing and adding additional items to the Causal Tree.

    Removing nodes:

    • Selecting “remove self” is a handy way to remove a node and any dependent nodes; for example, removing a bash process also removes all the child processes under it.

    • Selecting “auto prune” removes all nodes that do not have a Flag and are not directly causally connected to a node with a Flag.

    Adding nodes:

    • Children are directly connected to the selected node.

    • Descendants are every following node causally connected to the selected node.

    • Connections are any Network connections with a causal relationship to children or descendants.

    The Causal Tree only displays records captured in the enabled Data Layers. What if there is activity outside the original query time frames of those Data Layers?

    Loading Children or Descendants:

    • Selecting to load Children or Descendants via Search performs a search for other activity across all time known to Spyderbat and brings any activity found in as another Data Layer.

    Lastly, we want to show you a powerful option for the Causal Tree under the options drop-down. Selecting “show relative time” displays a relative time on the Causal Tree for any selected node. For example, in the above screenshot the bash shell node is selected. We can see the relative time of when commands were performed in the bash shell. This is tremendously useful when viewing traces that span across time to visually understand the temporal distance between activities. In the above example, it is clear that the “whoami” command occurred 11 minutes after the previous commands.

    Thank you and Happy Tracing!

    All Operators

    String

    All string operators are case insensitive except for ~~=.

    | Symbol | Description |
    | --- | --- |
    | = | Equal to value |

    Boolean

    Symbol
    Description

    Number and Integer

    Symbol
    Description

    IP Address

    Symbol
    Description

    List

    Symbol
    Description

    Dictionary

    Symbol
    Description
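To illustrate the case-sensitivity rule above, here are some hypothetical search expressions. The field and values are examples only, and `~=` is shown as used in the Saved Searches example earlier in this documentation:

```text
metadata.name = "NGINX"      # matches "nginx" as well (case insensitive)
metadata.name ~= "web*"      # wildcard match, also case insensitive
metadata.name ~~= "web*"     # the case-sensitive exception
```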

    Secure your Registration Code with AWS Secrets Manager

    Learn how to leverage AWS Secrets Manager as a secret store for the Nano Agent Registration Code (this guide assumes you are familiar with AWS, IAM, and EKS and how the three interact).

    Overview

    The Spyderbat Nano Agent registration code is a unique alphanumeric string used to associate installed Nano Agents, and the data they collect, with your organization in the Spyderbat backend. The registration code is visible in the Spyderbat UI only to users in your organization with the relevant permissions (check out our article on User Roles and Permissions for more info).

    You may choose to store your organization's Nano Agent Registration Code in AWS Secrets Manager, either to facilitate automated agent deployment or to adhere to internal processes. In that case, follow the steps below.

    Adding the Agent Registration Code to AWS Secrets Manager

    First, store the registration code in Secrets Manager and note the ARN returned for it:

    The next step is to create an IAM policy that allows GetSecretValue and DescribeSecret on that secret. After that, add the AWS Secrets Store CSI Driver to your cluster if it is not already available.
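    As a reference, a minimal policy document could look like the following sketch (the secret ARN is a placeholder you must substitute):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "<secret_arn>"
    }
  ]
}
```

    Scoping the Resource field to the single secret ARN keeps the role's access as narrow as possible.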

    Accessing the Agent Registration Code in AWS Secrets Manager

    Create a role that has access to the above-mentioned policy and is federated to your EKS cluster (see associate-iam-oidc-provider):

    Modifying Helm Chart to Query AWS Secrets Manager

    Now that you have all of those values, you can install the Nano Agent Helm chart so that it references the secret and mounts it accordingly. You can use your own custom values.yaml file or override values via --set in the Helm CLI:

    The steps above represent one of the ways this task could be accomplished. If you have any questions feel free to contact us at .

    How to Create and Use a Spyderbat API Key

    Spyderbat relies heavily on its API. To configure a variety of inbound and outbound API integrations, you will need to generate an API key. Learn how to create, maintain, and manage your API keys.

    Published: April 20, 2022

    Setting up your API key for a user account is necessary to be able to leverage the Spyderbat API. This document outlines how to set up your first API key and perform a basic operation against the Spyderbat API to test it.

    A Note on Spyderbat RBAC

    Spyderbat leverages Role-Based Access Control for user accounts, and an API key is bound to a particular user account. A user account may belong to one or more organizations and maps to a particular role in a given organization. Spyderbat currently supports two roles: “Admin” and “Read Only” – an Admin account can perform any operation against the API, while a “Read Only” account is restricted to specific operations, including read operations from the API and viewing elements in the UI – see for full dynamically generated API docs that also list API calls by supported role.

    How to Create Your API Key

    For the account you are using (Admin or Read Only), click on the account icon in the top right corner of the UI and you will see an “API Keys” link.

    Click on the “API Keys” link and you will be taken to a page where you can create your API Key(s)

    This will bring up a modal box where you can give the key a name and click Save.

    Once the API key has been created, you can copy the API key from the UI for use with the Spyderbat API (see basic example below).

    Retrieving Your Organization ID

    In addition to the API key, you will also need your “Organization ID” (or Org ID) to leverage the Spyderbat API. This ID is a unique identifier for your organization. One way to find this is to navigate to the dashboard and examine the URL in your browser:

    Your Org ID is the string between “org/” and “/dashboard”. The URL above is:

    So our Org ID in this example is P6V31v0uIG5dtqXTHLsd

    Copy your specific Org ID to a notepad for handy reference.
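    If you prefer the command line, the same extraction can be scripted. This is just an illustrative sketch using POSIX parameter expansion, with the example URL from above:

```shell
# Extract the Org ID from a Spyderbat dashboard URL (illustrative)
URL="https://app.spyderbat.com/app/org/P6V31v0uIG5dtqXTHLsd/dashboard"
ORG_ID="${URL#*org/}"   # drop everything up to and including "org/"
ORG_ID="${ORG_ID%%/*}"  # drop everything from the next "/" onward
echo "$ORG_ID"          # prints P6V31v0uIG5dtqXTHLsd
```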

    Testing the Spyderbat API with a Basic Example

    Now that you have created your API key and have your Org ID, you can query the Spyderbat API with the simple examples below (for more details on the API see )

    To list all the organizations that my user is part of where API_key is the API key you created above:

    To list all the sources/agents in an organization, where API_key is the API key you created above and Org_id is the Org ID you retrieved for your organization:

    AWS Linux VM

    Automatic installation of the Spyderbat AWS Agent on an AWS EC2 instance

    This guide provides detailed instructions on how to deploy the Spyderbat AWS Agent on an AWS Virtual Machine (VM). The AWS Agent runs as a systemd service on the VM to ensure continuous operation. Deploying the agent on an AWS VM gives you complete control over the environment and configuration, allowing you to integrate AWS context into the Spyderbat Platform.

    Prerequisites

    Before deploying the Spyderbat AWS Agent on an AWS VM, make sure you have the following prerequisites in place:

    Kubernetes

    Automatic installation of the Spyderbat AWS Agent on a Kubernetes Cluster

    This guide provides detailed instructions on how to deploy the Spyderbat AWS Agent on a Kubernetes cluster. In a cluster deployment, the AWS Agent runs as a single-pod StatefulSet.

    The most straightforward option is to run the AWS Agent on an EKS cluster within the AWS account that you want to monitor. In this case, the required configuration is minimal, and the deployment process is streamlined. This scenario is described in this guide.

    More advanced configurations are also possible:

    • The agent can poll information from another AWS account, or you can deploy a Helm chart to monitor multiple AWS accounts within a single Helm installation. This requires configuring additional IAM roles to be assumed.

    • The Spyderbat registration key can be managed in AWS Secrets Manager if your organization uses Secrets Manager as the standard secret management technology for cluster workloads.

    Helm Chart

    Installation Prerequisites

    The event forwarder can be configured in an environment that is monitored by Spyderbat Nano Agents. Red flag events and/or spydertraces will only be exported via the event forwarder integration for those hosts where a Spyderbat Nano Agent is installed and in good health.

    Only one instance of the event forwarder should be configured per environment, as each is associated with a unique organization ID. Running multiple instances of the event forwarder in the same environment can result in duplicate ingestion of security events (red flags or spydertraces).

    Please check out to learn more about the Spyderbat Nano Agent and the installation details.

    Traditional Installer

    Installation Prerequisites

    The event forwarder can be configured in an environment that is monitored by Spyderbat Nano Agents. Red flag events and/or spydertraces will only be exported via the event forwarder integration for those hosts where a Spyderbat Nano Agent is installed and in good health.

    The event forwarder installer is extremely lightweight and undemanding in terms of resources. If the event forwarder is installed on a dedicated EC2 instance, that instance should be at least a t4g.micro (arm64) or a t3.micro (x64); anything larger is also acceptable. If the EC2 instance hosts other services and applications, at least 512 MB of memory and 1 CPU core should be available to support event forwarder operation.

    Only one instance of the event forwarder should be configured per environment, as each is associated with a unique organization ID. Running multiple instances of the event forwarder in the same environment can result in duplicate ingestion of security events (red flags or spydertraces).

    Initial Configuration

    Learn how to quickly configure Spyctl to interact with your Spyderbat data

    Prerequisites

    Search

    Detailed overview of the Search tab in the console and the Spyderbat search language it uses.

    The Spyderbat Search Language is a tool designed to easily query and extract actionable insights from your data within the Spyderbat console. This guide is designed to introduce the language and to help you understand the search functionalities and their practical applications for your organization.

    The Search section of the Spyderbat UI is located second from the top of the left hand navigation panel. When entering the page, you will be presented with a search bar, some example searches, and a list of recent searches if you have run any previously.

    To begin searching, click on the search bar. You’ll need to pick a schema, then enter a query expression in the Spyderbat search language, and finally select a time range. All of these steps will be elaborated on below in more detail. The schema and expression can also be filled by clicking on an example or recent search from below the bar. From there, you can click on the search button to begin loading results, or see a syntax error if the query was invalid.
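    For a flavor of what a query expression can look like, here are two hypothetical examples. The field names are purely illustrative (not a schema reference), and only the operators come from the All Operators reference in this documentation:

```
exe ~= "*nginx*"
remote_ip << 10.0.0.0/8
```

    The first would match records whose exe field contains "nginx" anywhere; the second would match records whose remote_ip falls inside the 10.0.0.0/8 CIDR block.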

    Spyderbat Event Forwarder

    Export Spyderbat findings (runtime security flags, Spydertraces) for ingestion into third-party SIEM tools like Splunk, Panther, or Sumo Logic.

    Spyderbat Event Forwarder Overview

    The event forwarder is a custom utility that consumes Spyderbat events from the API and emits files containing one of two types of data: security flags or spydertraces. The event forwarder can be integrated with third-party systems, such as SIEMs and other alert aggregation points, to forward those files via syslog for consumption outside the Spyderbat platform, enhancing your security content and improving overall observability and security awareness.

    For guides detailing the Spyderbat Event Forwarder installation process see .

    Notification Target Management using Spyctl

    To learn more about what Notification Targets are, see:

    Prerequisites

    If you have never used Spyctl, start with the Installation Guide to learn how to install it, then follow the Initial Configuration guide.

    Rulesets

    This section documents the various features of Spyderbat Rulesets. It explains how rules can be configured and scoped. It also details how rules are evaluated.

    What Are Rulesets

    For a summary of Rulesets and Ruleset Policies see

    docker run -it spyderbat/spydertop -i examples/minikube-sock-shop.json.gz
    apiVersion: spyderbat/v1
    kind: SpyderbatFingerprint
    metadata:
      name: docker.io/library/nginx:latest
      type: container
      cluster_name: demo_cluster
      containerName: latencytest2
      namespace: default
    spec:
      containerSelector:
        image: docker.io/library/nginx:latest
        imageID: sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          run: latencytest2
      processPolicy:
      - name: nginx
        exe:
        - /usr/sbin/nginx
        euser:
        - root
        id: nginx_0
        children:
        - name: nginx
          exe:
          - /usr/sbin/nginx
          euser:
          - systemd-resolve
          id: nginx_1
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.29.43/32
          - ipBlock:
              cidr: 192.168.43.168/32
          - ipBlock:
              cidr: 192.168.92.6/32
          ports:
          - port: 443
            protocol: TCP
          processes:
          - nginx_1
    containerSelector:
      image: docker.io/library/nginx:latest
    mode: enforce
    enabled: true
    processPolicy:
      - name: nginx
        exe:
        - /usr/sbin/nginx
        euser:
        - root
        id: nginx_0
        children:
        - name: nginx
          exe:
          - /usr/sbin/nginx
          euser:
          - systemd-resolve
          id: nginx_1
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.29.43/32
          - ipBlock:
              cidr: 192.168.43.168/32
          - ipBlock:
              cidr: 192.168.92.6/32
          - ipBlock:
              cidr: 192.168.92.7/32
          - ipBlock:
              cidr: 192.168.92.8/32
          ports:
          - port: 443
            protocol: TCP
          processes:
          - nginx_1
    response:
      default:
        - makeRedFlag:
            severity: high
      actions: []
    #!/bin/bash
    yum install -y wget lsof
    wget --quiet -O - https://orc.spyderbat.com/v1/reg/OMJBdOBVZvzFGEMLgQSt/script | /bin/sh
    mkdir -p ~/.spyctl
    _SPYCTL_COMPLETE=bash_source spyctl > ~/.spyctl/spyctl-complete.bash
    . ~/.spyctl/spyctl-complete.bash
    _SPYCTL_COMPLETE=fish_source spyctl > ~/.config/fish/completions/spyctl-complete.fish


    pip install spyctl
    spyctl --version
    Manage Notification Targets Using Spyctl
    How to setup Spyderbat Notifications (Spyctl CLI)

    -o, --output [yaml|json]

    Output format.

    -y, --yes

    Automatically answer yes to all prompts.

    -a, --apply

    Apply the agent health notification settings during creation.

    -n, --name

    Custom name for the agent health notification settings. (Required)

    -d, --description

    Description of the agent health notification settings.

    -q, --scope-query TEXT

    SpyQL query on model_agents table to determine which agents the setting applies to.

    -T, --targets

    Comma-separated list of notification targets.

    --is-disabled

    Disable the agent health notification settings on creation.

    Installation Guide
    Spyctl Initial Configuration
    Spyderbat Notifications Overview
    Notification Targets Management
    [email protected]

    Please check out this section of our portal to learn more about the Spyderbat Nano Agent and the installation details.

    Install Event Forwarder via Traditional Installer

    Before attempting the install, please make sure you have downloaded the latest version of the event forwarder (latest release).

    • Unpack the tarball:

    The release package filename will differ from the example below.

    • Run the installer:

    You should see output like this:

    • Edit the config file:

    • Start the service:

    • Check the service:

    Use ^C to interrupt the log. If you see errors, check the configuration, restart the service, and check again.

    • Enable the service to run at boot time:

    • If desired, integrate with the Splunk universal forwarder:

    Next Steps

    To learn more about the event forwarder and how you can use it to integrate Spyderbat with your other solutions, see this page.

    Initial Configuration

    In this section you will learn how to configure Spyctl to enable data retrieval from across your entire organization. To do so, you must first create an APISecret and then use that APISecret to set a Context. An APISecret encapsulates your Spyderbat API credentials; the Context specifies where Spyctl should look for data when interacting with the Spyderbat API (e.g., organization, cluster, machine, service, or container image).

    Create an APISecret

    An APISecret encapsulates your Spyderbat API credentials. You must create at least one APISecret in order for Spyctl to access your data via the Spyderbat API.

    To create an APISecret, use an API key generated from the Spyderbat Console.

    Region
    API URL

    United States

    https://api.spyderbat.com

    Mumbai, India

    https://api.mum.prod.spyderbat.com

    Frankfurt, Germany

    https://api.deu.prod.spyderbat.com

    For most users, the API URL will be the one in the United States. If you are unsure which one applies to you, contact [email protected].

    Copy a generated API key and region-specific API URL into the following command:

    For example:

    Spyctl saves APISecrets in $HOME/.spyctl/.secrets/secrets

    Set a Context

    Contexts will let Spyctl know where to look for data. The broadest possible Context is organization-wide. This means that when you run Spyctl commands, the Spyderbat API will return results relevant to your entire organization.

    For the --org field in the following command you may supply the name of your organization which can be found in the top right of the Spyderbat Console or the organization UID which can be found in your web browser’s url when logged into the Spyderbat Console: https://app.spyderbat.com/app/org/UID/dashboard.

    For example:

    You can view your configuration by issuing the following command:

    You should see something like this:

    The global configuration file is located at $HOME/.spyctl/config

    It is possible to create more specific contexts, such as a group of machines or a specific container image. You can think of the fields in your context as filters to limit your scope. Follow this link to learn more about contexts: Contexts

    At this point you should now be able to run spyctl commands that utilize the Spyderbat API.

    Install Spyctl
    Generate a key to access the Spyderbat API
    How Does the Spyderbat Event Forwarder Work?

    Let’s take a closer look at how the event forwarder collects and transports the data.

    The Spyderbat analytics system has a dedicated data store just for redflag events and spydertraces. Whenever an agent observes an event that creates a redflag on a machine or kicks off a spydertrace, the result is sent to this store.

    The Spyderbat event forwarder polls the store using the Spyderbat API to access these stored events, using a time window with a start time and end time in epoch format. It also queries the list of known hosts to augment the events with details of the known machines.

    When the event forwarder is started for the first time, it reads back one hour in time to get the last hour of events. When it is restarted, it instead reads its own logs to determine the last event it emitted and uses that as its start time: it reads all files matching the name spyderbat_events*.log in the directory specified by the log_path configuration setting, reading in all the events and keeping the latest time seen to use as its starting time.
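    The "scan the logs for the latest time" step can be pictured with a small sketch. This is illustrative only: the real files contain full JSON events, and the "time" field name here is an assumption made for the demonstration:

```shell
# Illustrative: recover the latest event time across all matching log files
DIR=$(mktemp -d)
printf '{"time":1700000100}\n' > "$DIR/spyderbat_events-1.log"
printf '{"time":1700000400}\n' > "$DIR/spyderbat_events-2.log"
# keep only the numeric times, sort them, and take the largest
LATEST=$(grep -ho '"time":[0-9]*' "$DIR"/spyderbat_events*.log | cut -d: -f2 | sort -n | tail -1)
echo "$LATEST"   # prints 1700000400
```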

    Once the event forwarder has started, it will therefore either begin by reading the last hour of data or start from the last event discovered in the logs (the files named spyderbat_events*.log). It sets the end time to the current time, so that the query encapsulates a range that ensures no events are missed.

    Once the data comes back from the query, each event is examined to make sure it isn't a duplicate and is augmented to include the details of the associated machines, and the latest event time seen is captured to use as the start time for the next query.

    The event forwarder then sleeps for 30 seconds and repeats the loop using the latest event time as the start time. Because there is a check that always expands the time window to encapsulate a full 5-minute window, it will always look 4 minutes and 30 seconds further into the past beyond the last event, to ensure that no records have been missed, and will use an LRU (least recently used) cache to de-duplicate the events it is capturing.
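    The overlap can be checked with a little arithmetic; the constants below are taken from the description above, and the epoch value is an arbitrary example:

```shell
LAST_EVENT=1700000000           # epoch time of the most recent event seen
NOW=$((LAST_EVENT + 30))        # the forwarder wakes after its 30-second sleep
START=$((NOW - 300))            # the window is expanded to a full 5 minutes
OVERLAP=$((LAST_EVENT - START))
echo "$OVERLAP"                 # prints 270, i.e. 4 minutes 30 seconds of re-read
```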

    In the case where there are no events, the start time for the query is not moved forward until either more events appear or it reaches its maximum of the 4-hour boundary. This ensures that, in the case of an outage on the backend, events will not be missed unless they exceed 4 hours of age.

    Filtering Data Using Hard-Coded Expressions

    It is possible to configure the event forwarder to only log the events that meet certain criteria. This can be achieved through optional filtering using an expression syntax. The expression must evaluate to a bool; if it is true, the event will be logged.

    The expression syntax is documented here: https://expr.medv.io/docs/Language-Definition

    If the expression fails to compile, the event forwarder will exit with an error at startup. If the expression fails to evaluate, the event will be logged and the forwarder will continue.

    The most common reason for an expression to fail to evaluate is that the event does not contain the field(s) referenced in the expression. To avoid this problem, check that the fields you are referencing are not nil, or use the short-circuiting "??" operator. The schema field is guaranteed to be present.

    Here is an example:

    In this example, the expression will log all events with a schema starting with "model_spydertrace:" and a score greater than 1000. It will log everything else except events with a schema starting with "event_redflag:bogons:" or a severity of "info", "low", or "medium".

    Event Forwarder Validation

    To ensure that the event forwarder has been deployed and configured correctly and is working as expected, the following conditions must be met and observed:

    1. The event forwarder should emit logs to standard out that look like this: "5 new records, most recent 23s ago". Here is the code reference in the forwarder:

      log.Printf("%d new records, most recent %v ago", newRecords, et.Sub(lastTime.Time()).Round(time.Second))

    2. The event forwarder should be writing to the specified output in the config file: either the specified syslog endpoint or the specified log_path directory, with filenames matching spyderbat_events*.log. You can run 'tail -f spyderbat_events*.log' in that directory to watch the logs being written in real time; you should see events appear within a 10-minute window.

    this section
    Managing Notification Targets

    Create

    To create a new Notification Target you can use the create command:

    Note: This will only create a local yaml file for you to edit. It makes no immediate changes to your Spyderbat environment.

    For example:

    This will create a default Notification Target and save it to a file called target.yaml

    Edit

    When creating new Notification Targets you will need to edit the default document to point to the proper destination. With spyctl you can use the edit command to ensure you don't accidentally introduce syntax errors.

    If you have already applied the Notification Target you may edit the resource using the following:

    For example:

    This will bring up a prompt to select a text editor unless you have already done so previously. Then, using your text editor you may fill in your desired destination or destinations.

    If you save without making any changes, nothing happens to the resource or file you're editing. If you save and there were syntax errors, Spyctl will save your draft to a temporary location and re-open it with comments detailing the errors. Finally, if your changes have no syntax errors, Spyctl will update the resource or file you're editing.

    Note: If you edit a Notification Target in a local file but the Target has already been applied, you will need to apply the file again for the updates to take effect.

    Apply

    In order for a Notification Target to be usable by the Spyderbat Notifications System you must first apply it using the apply command.

    For example:

    If the operation is successful, your Notification Target will be ready for use.

    Delete

    To remove a Notification Target from the Spyderbat Notifications System you can use the delete command.

    For example:

    View or Download

    You can use the get command to view or download your Notification Targets.

    For example:

    The default output is a tabular summary of your Notification Targets. To download a Notification Target as yaml or json, use the -o option.

    Using the > character you can save the document to a file.

    Notification Targets
    here
    Initial Configuration
    Rules

    Rules are defined as a list in the rules field of a Ruleset's spec.

    Each rule contains a target, verb, list of values, and optional selectors (for additional scoping).

    • Target: what the rule is referring to within the scope of the policy. Targets are RULE_TYPE::SELECTOR_FIELD.

      • ex. container::image means that we are allowing or denying containers using the images specified in the values field.

    • Verb: The currently available verbs for ruleset rules are allow or deny. Any object matching a deny rule will generate a Deviation.

    • Values: This is the set of values that are allowed or denied. If the target is container::image then the values should be container images that are either allowed or denied.

    • Selectors: Optional selectors that further define the scope of a single rule. For instance you may want a rule that defines allowed activity in a specific namespace within a cluster. Different rule types support different selectors.

    For a full breakdown of the available selectors see the Selectors Reference Guide
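    Putting these pieces together, a single rule in a Ruleset's spec might look like the following sketch (the image and namespace values are illustrative):

```yaml
rules:
- target: container::image
  verb: allow
  values:
  - docker.io/library/nginx:latest
  namespaceSelector:
    matchExpressions:
    - {key: kubernetes.io/metadata.name, operator: In, values: [staging, production]}
```

    Here the namespaceSelector narrows the rule so that only containers in the staging and production namespaces are affected.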

    Container Rules

    Container rules define which containers are allowed or denied.

    Supported Targets

    container::image

    container::imageID

    container::containerName

    container::containerID

    Supported Selectors

    Cluster

    Machine

    Namespace

    Pod

    Container

    Supported Verbs

    allow

    deny

    Examples:

    Allow the latest apache image in production and staging

    Deny a specific image ID in production

    Ruleset Policies Concepts
    mkdir -p ~/.spyctl
    _SPYCTL_COMPLETE=zsh_source spyctl > ~/.spyctl/spyctl-complete.zsh
    . ~/.spyctl/spyctl-complete.zsh
    spyctl notifications configure saved-query QUERY_UID \
      --target TARGET_NAME_OR_UID \
      --target-map TARGET_NAME_OR_UID=TEMPLATE_NAME_OR_UID
    
    spyctl notifications configure saved-query query:abc \
      --target OperationsTeam \
      --target-map SecurityTeam=email-template
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: Example
      type: email
    spec:
      emails:
      - [email protected]
      - [email protected]
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: Example
      type: slack
    spec:
      url: https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxx
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: Example
      type: webhook
    spec:
      url: https://my.webhook.example/location/of/webhook
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: Example
      type: pagerduty
    spec:
      routing_key: abcdef1234567890abcdef1234567890
    
    spyctl create agent-health-notification-settings -h
    spyctl create agent-health-notification-settings \
      --name "Agent Health Alerts" \
      --description "Alerts for agent health issues" \
      --targets "work-email"
    spyctl edit agent-health-notification-settings <NAME_OR_UID>
    spyctl edit agent-health-notification-settings "Agent Health Alerts"
    spyctl get agent-health-notification-settings
    spyctl get agent-health-notification-settings -o json
    spyctl delete agent-health-notification-settings <NAME_OR_UID>
    spyctl delete agent-health-notification-settings "Agent Health Alerts"
    aws secretsmanager create-secret --name <name> --region <region>
    aws secretsmanager put-secret-value --secret-id <name> --region <region> --secret-string "{\"spyderbat-registration-key\":\"<key>\"}"
    aws secretsmanager get-secret-value --secret-id <name> --region <region>
    
    helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
    
    helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace kube-system --set syncSecret.enabled=true
    
    kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
    eksctl create iamserviceaccount --name spyderbat-serviceaccount --region="<region>" --cluster "<cluster_name>" --attach-policy-arn "<policy_arn>" --approve --namespace spyderbat
    
    eksctl get iamserviceaccount --name spyderbat-serviceaccount --region="<region>" --cluster "<cluster_name>" --namespace spyderbat
    aws:
        secretsmanager:
            enabled: true
            rolearn: "<role_arn>"
            secretarn: "<secret_arn>"
    helm repo add nanoagent https://spyderbat.github.io/nanoagent_helm/
    helm repo update
    helm install nanoagent nanoagent/nanoagent \
      --set nanoagent.orcurl="<orc_url>" \
      --namespace spyderbat \
      --create-namespace \
      --set CLUSTER_NAME="<cluster_name>"
    mkdir /tmp/sef
    tar xfz spyderbat-event-forwarder.5b41e00.tgz -C /tmp/sef
    cd /tmp/sef
    sudo ./install.sh
    spyderbat-event-forwarder is installed!
    
    !!!!!!
    Please edit the config file now:
        /opt/spyderbat-events/etc/config.yaml
    !!!!!!
    
    To start the service, run:
        sudo systemctl start spyderbat-event-forwarder.service
    
    To view the service status, run:
        sudo journalctl -fu spyderbat-event-forwarder.service
    sudo vi /opt/spyderbat-events/etc/config.yaml
    sudo systemctl start spyderbat-event-forwarder.service
    sudo journalctl -fu spyderbat-event-forwarder.service
    sudo systemctl enable spyderbat-event-forwarder.service
    $ sudo splunk add monitor /opt/spyderbat-events/var/log/spyderbat_events.log
    Your session is invalid. Please login.
    Splunk username: <your splunk username>
    Password: <your splunk password>
    Added monitor of '/opt/spyderbat-events/var/log/spyderbat_events.log'.
    spyctl config set-apisecret -k <apikey> -u <apiurl> NAME
    $ spyctl config set-apisecret -k ZXlKaGJHY2lPaUpJVXpJMU5pSXNJbXRwWkNJNkluTm\
    lJaXdpZEhsd0lqb2lTbGRVSW4wLmV5SmxlSEFpT2pFM01EUTVPVGM1TWpBc0ltbGhkQ0k2TVRZM\
    016UTJNVGt4T1N3aWFYTnpJam9pYTJGdVoyRnliMjlpWVhRdWJtVjBJaXdpYzNWaUlqb2ljSGhX\
    YjBwMlVFeElXakJIY1VJd2RXMTNTMEVpZlEuZGpxWkRCOTNuUnB4RUF0UU4yQ0ZrOU5zblQ5Z2Q\
    tN0tYT081TEZBZC1GSQ== -u "https://api.spyderbat.com" my_secret
    
    Set new apisecret 'my_secret' in '/home/demouser/.spyctl/.secrets/secrets'
    spyctl config set-context --org <ORG NAME or UID> --secret <SECRET NAME> NAME
    $ spyctl config set-context --org "John's Org" --secret my_secret my_context
    Set new context 'my_context' in configuration file '/home/demouser/.spyctl/config'.
    spyctl config view
    apiVersion: spyderbat/v1
    kind: Config
    contexts:
    - name: my_context
      secret: my_secret
      context:
        organization: John's Org
    current-context: my_context
    expr: |
          (
           schema startsWith "model_spydertrace:"
           and
           (score ?? 0) > 1000
          )
          or
          (
           not
           (
            schema startsWith "model_spydertrace:"
            or
            schema startsWith "event_redflag:bogons:"
            or
            (severity ?? "") in ["info", "low", "medium"]
           )
          )
    spyctl create notification-target -n NAME -T TYPE
    spyctl create notification-target -n OperationsTeam -T emails > target.yaml
    target.yaml
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: OperationsTeam
    spec:
      emails:
      - [email protected]
    spyctl edit -f FILENAME
    spyctl edit [OPTIONS] notification-target NAME_OR_UID
    spyctl edit -f target.yaml
    spyctl apply -f FILENAME
    spyctl apply -f target.yaml
    spyctl delete [OPTIONS] notification-target NAME_OR_UID
    spyctl delete notification-target OperationsTeam
    spyctl get [OPTIONS] notification-targets [NAME_OR_UID]
    spyctl get notification-targets
    $ spyctl get notification-targets
    Getting notification-targets
    NAME              ID                                  AGE    TYPE      DESTINATIONS
    OperationsTeam    notif_tgt:XXXXXXXXXXXXXXXXXXXXXX    7d     emails               1
    spyctl get notification-targets -o yaml OperationsTeam
    $ spyctl get notification-targets -o yaml OperationsTeam
    apiVersion: spyderbat/v1
    kind: NotificationTarget
    metadata:
      name: OperationsTeam
    spec:
      emails:
      - [email protected]
      - [email protected]
    spyctl get notification-targets -o yaml OperationsTeam > target.yaml
    apiVersion: spyderbat/v1
    kind: SpyderbatRuleset
    metadata:
      createdBy: [email protected]
      creationTimestamp: 1712787972
      lastUpdatedBy: [email protected]
      lastUpdatedTimestamp: 1714162618
      name: demo-cluster-ruleset
      type: cluster
      uid: rs:xxxxxxxxxxxxxxxxxxxx
      version: 1
    spec:
      rules: []
    namespaceSelector:
      matchExpressions:
      - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
    target: container::image
    values:
    - docker.io/guyduchatelet/spyderbat-demo:1
    - docker.io/library/mongo:latest
    verb: allow
    namespaceSelector:
      matchExpressions:
      - {key: kubernetes.io/metadata.name, operator: In, values: [staging, production]}
    target: container::image
    values:
    - docker.io/apache:latest
    verb: allow
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: production
    target: container::imageID
    values:
    - sha256@XXXXXXXXXXXXXXXXXXXXXXXX
    verb: deny
    Running the query retrieves historical data; it may return matching records, or no results if none exist.
  • Save the Query

    • Click the Save Search button.

  • Set Up Notifications

    • Once you save a search, a prompt like the one shown in the image will appear.

    • A default Name is generated; you can also provide a custom name for the Saved Search (e.g., "New Cronjobs Monitoring").

    • Add an optional Description to clarify the query’s purpose.

    • Toggle the Notification Status to "Enabled" to start receiving notifications immediately. You can turn this off at any time to stop receiving notifications.

    • Click Add Target to configure your preferred notification channels. You can add multiple targets per query.

    • Note that notifications are sent to the targets when new records matching the query are observed in real time.

    • Configure notifications to be sent through various channels, such as: Email, Slack, PagerDuty, Webhook.

  • Save the Configuration

    • Once all settings are configured, click Save.

  • Sudo Permissions: You will need sudo permissions to install the Spyderbat AWS Agent.
  • Outbound Network Access: The system you’re installing Spyderbat's AWS Agent on should have outbound access on port 443 to https://orc.spyderbat.com.

  • AWS Account: You need an AWS account with administrative access to create and configure resources. The VM must be launched within the AWS account that you wish to monitor.

  • VM Instance Profile with Required IAM Role: The VM must have an instance profile attached that includes an IAM Role with the following permissions:

    • EC2: ec2:Describe*

    • EKS: eks:List*, eks:Describe*

    • IAM Roles and Policies: iam:Get*, iam:List*, iam:Put*

    • ECR: ecr:Describe*, ecr:List*, ecr:Get*

    • STS: sts:AssumeRole, sts:AssumeRoleWithWebIdentity

    • Secrets Manager (Optional): Access to the ARN of the configured secret for the registration key.

  • Here is an example permissions policy that can be used when creating the role:
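A minimal sketch of such a policy, assembled from the permissions listed above; the `Resource` scoping (and the optional Secrets Manager statement) should be tightened for your deployment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "eks:List*",
        "eks:Describe*",
        "iam:Get*",
        "iam:List*",
        "iam:Put*",
        "ecr:Describe*",
        "ecr:List*",
        "ecr:Get*",
        "sts:AssumeRole",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Resource": "*"
    }
  ]
}
```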

    Step-by-Step Deployment

    Step 1: Launch an AWS VM

    Launch an AWS VM within the AWS account you wish to monitor. The instance should be configured with the following settings:

    • Amazon Machine Image (AMI): Use an AMI that supports Linux (e.g., Amazon Linux 2, Ubuntu).

    • Instance Type: Choose an instance type suitable for your workload (e.g., t3.medium).

    • Network Settings: Ensure the instance has access to the internet or appropriate VPC configuration for accessing AWS APIs.

    • IAM Role: Attach the IAM Role created earlier with the required permissions.

    • Configure storage and other instance details as needed.

    Step 2: Connect to the VM and Install Dependencies

    1. Install Docker by following the official Docker installation guide.

    Step 3: Install the Spyderbat AWS Agent

    • Log in to the Spyderbat UI

    • Navigate to the Sources menu (top left)

    • Click on the Add Source button, and select Install AWS Agent

    Add AWS Agent Source

    This will bring you to the following screen:

    AWS VM Curl Install

    The Spyderbat UI provides the agent installation command to run on the VM. Click the 'curl' tab and copy the command shown there. If you do not have curl installed on your system, select the 'wget' tab and copy that command instead. Then run the command on the VM to install the agent.

    Here is how the curl command will look:

    Now execute this script on the AWS VM.

    Step 4: Verify Integration

    The CLI and UI both provide feedback on the process. In the UI, check marks show install progress. Once the Spyderbat AWS Agent is installed, has registered with Spyderbat, and is transmitting data, both your terminal and the Spyderbat UI will show that the agent was installed successfully.

    Managing the AWS Agent Service

    The Spyderbat AWS Agent runs as a systemd service (aws_agent.service) on the VM. You can use the following commands to manage the AWS Agent service:

    • Check Service Status:

    • Start the Service:

    • Stop the Service:

    • Restart the Service:

    • View Service Logs:
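These operations follow the standard systemd pattern for the aws_agent.service unit; a sketch of the commands:

```shell
sudo systemctl status aws_agent.service     # check service status
sudo systemctl start aws_agent.service      # start the service
sudo systemctl stop aws_agent.service       # stop the service
sudo systemctl restart aws_agent.service    # restart the service
sudo journalctl -u aws_agent.service -f     # view (follow) service logs
```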

    Troubleshooting

    • Agent Logs: Check the agent logs using the following command:

    • Permission Issues: Ensure the IAM Role attached to the VM has the correct permissions as listed in the prerequisites.

    • Network Connectivity: Verify that the VM has access to the internet or the required VPC endpoints to communicate with AWS services.

    Next Steps

    • Once the AWS Agent is successfully deployed and integrated, you can proceed to use the Spyderbat platform to monitor and investigate your assets.

    • The AWS Agent's behavior can be customized using a configuration file. For more details on advanced configuration of the agent, consult the Spyderbat AWS Agent Configuration Guide

    For more details on these configurations, please consult the AWS Agent Configuration Guide for Helm.

    Prerequisites

    Before deploying the Spyderbat AWS Agent on an AWS EKS cluster, ensure you have the following prerequisites in place:

    1. Outbound Network Access: The cluster you’re installing Spyderbat's AWS Agent on must have outbound access on port 443 to https://orc.spyderbat.com.

    2. Kubectl and Helm: Install Kubectl and Helm clients, and configure Kubectl for the cluster where you want to install the agent.

    3. AWS Account: The cluster the agent is deployed on must reside in the AWS account that you wish to monitor.

    4. IAM Role: Create an IAM Role that will be associated with the service account used by the AWS Agent.

      • Role Permissions: The role must have the following permissions attached:

        • EC2: ec2:Describe*

        • EKS: eks:List*, eks:Describe*

      Note that <account-id>, <region>, and <open-id-provider-id> are dependent on your local deployment of the EKS cluster. Take note of the ARN of this role, as it will be an input for the Helm chart deployment.

      You do not need to create the Kubernetes service account associated with the role, as the Helm chart installation will handle that.

    Installation with AWS Agent Helm Chart

    Step 1 - Copy the Helm install command from the Spyderbat UI

    • Log in to the Spyderbat UI

    • Navigate to the Sources menu (top left)

    • Click on the Add Source button, and select Install AWS Agent

    Add AWS Agent Source

    This will bring you to the following screen where you can click on the Helm tab to select installation using the Helm chart.

    AWS Helm Install

    In the input fields, enter the following:

    • Cluster Name: The name of the cluster you are deploying to. This will help the AWS Agent associate itself with the cluster and facilitate recognition in the Cluster Health and Sources UI. This is not required but recommended.

    • IAM Role ARN: Enter the ARN of the role you created earlier. This is a required field.

    Upon entering the information, the UI will generate a command that you can use to start the installation. Copy the command. It will be similar to the following (your registration key will differ):

    Step 2 - Run the Helm Command

    In your command-line shell, with Kubectl and Helm installed and configured to use the target cluster as the active context, paste the copied Helm command.

    Step 3 - Validate the Installation

    Check for any reported errors during the installation, and use the following command to validate that awsagent is installed:

    Then use:

    You should see a StatefulSet named awsagent-auto and an associated pod named awsagent-auto-0 running if the installation was successful.

    To check the logs of the AWS Agent pod, use:
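A plausible sequence for these checks, assuming the default namespace (adjust with -n as needed); the object names awsagent-auto and awsagent-auto-0 are taken from the text above:

```shell
helm list                                # confirm the AWS Agent release is installed
kubectl get statefulset awsagent-auto    # expect StatefulSet awsagent-auto
kubectl get pod awsagent-auto-0          # expect pod awsagent-auto-0 in Running state
kubectl logs awsagent-auto-0             # check the logs of the AWS Agent pod
```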

    Uninstalling the AWS Agent from Your Cluster

    To remove the AWS Agent, use Helm uninstall:
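A plausible form of the command, assuming the release was installed under the name awsagent (substitute the release name reported by helm list):

```shell
helm uninstall awsagent
```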

    Advanced Configuration

    There are various settings that can be customized to address specific needs. These can be configured using a custom values.yaml file or the --set option during Helm installation.

    For more details on these settings, please consult the AWS Agent Configuration Guide for Helm.

    Install Event Forwarder into a Kubernetes Environment via Helm Chart

    If you are monitoring a Kubernetes cluster, you can deploy the Spyderbat Event Forwarder quickly and easily via a simple Helm Chart. It writes output to stdout as well as to a PVC-backed file for easier consumption.

    You can access our GitHub public repo to retrieve this Helm Chart here.

    You have the following values to override:

    | Value | Description | Default | Required |
    | --- | --- | --- | --- |
    | spyderbat.spyderbat_org_uid | org uid to use | your_org_uid | Y |
    | spyderbat.spyderbat_secret_api_key | api key from console | your_api_key | Y |
    | spyderbat.api_host | | | |

    Note: matching_filters and expr cannot be combined. Use one or none.

    To validate if the install was successful, run the following command:
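One plausible form of the validation command, with placeholder names to substitute from your deployment:

```shell
kubectl logs <event-forwarder-pod-name> -n <namespace>
```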

    Once run, you should see output similar to the example below at the top of the logs, followed by any/all events in your organization (possibly filtered, if using matching filters) in ndjson format:

    Next Steps

    To learn more about the event forwarder and how you can use it to integrate Spyderbat with your other solutions, see this page.

    Schemas

    In Spyderbat, a schema represents a type of data collected by Spyderbat. For example, the schemas shown in the search UI include Process and Connection, which represent data associated with processes and network connections, respectively.

    There are a number of schemas available to search on, each with their own fields that can be queried, which can be found categorized in the schema selector. These schemas are not part of the text of your query, but are selected on the search page. Additionally, a full list of them can be found in the Search Reference.

    Search schemas selector

    The Query Builder

    After selecting a schema, you can either begin typing an expression or open the Query Builder. The Query Builder is a useful tool for crafting expressions: it contains all the information about the schemas, so you don't need to consult the reference and documentation.

    It contains the full set of fields for each schema, descriptions of the fields, and matches them to comparisons for you, allowing you to easily create effective and correct searches.

    Search query builder

    The Query Builder will open with the same schema as you had selected in the search page, but you can also select a different schema using the dropdown next to the query preview. Additionally, clicking on the query preview will show a list of recent queries constructed using the Query Builder. Clicking on one will populate the Query Builder and allow you to modify it or send it to the search page.

    To modify a query in the Query Builder, use the selector boxes to select a field and what to compare it with. The selectors also contain short descriptions of the fields and operators.

    Expressions

    After selecting the schema, you’ll need to specify the expression for your search. The expression tells the query engine exactly what criteria to use for filtering data. Every schema has its own set of fields that can be used in the expression, and each field has a type such as String or Number that determines the comparisons available. In addition to these simple types, the List and Map types are also available as composite types, which are discussed below.

    Comparisons

    The simplest type of expression is a comparison of a field and a value. For example, given that the Process schema has the Executable field, we can use its shortened name exe in an expression to find all processes with the executable "/usr/bin/bash":
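Using the shortened field name and value given in the text, the expression reads:

```
exe = "/usr/bin/bash"
```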

    This expression uses the = operator, which simply checks for equality with the value. Each field type has a set of available operators in addition to =, allowing for more advanced searches. Additionally, inside an expression, fields must always use their shortened name, and they can only be compared with constant values in the expression itself, not the values of other fields.

    A full list of the available comparison operators is available in the Search Reference.

    Pattern Matching

    String fields have two unique operators: ~= and ~~=, which are Unix glob style pattern matching and regular expression matching, respectively. Regular expressions can be researched elsewhere, but glob pattern matching in this context refers to a string with the characters * and ? used as wildcards. They can both be escaped in a pattern to match their literal characters instead of wildcards using the standard escape character \.

    As wildcards, * can match any number of characters, while ? matches exactly one unknown character. For example, we can modify the previous example to check for any bash executable, instead of a specific path:
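One plausible form of the modified expression, using the * wildcard to match any path ending in bash (the exact pattern is an assumption):

```
exe ~= "*/bash"
```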

    CIDR Matching

    The IP address type also has a CIDR matching operator: <<. CIDR blocks can be researched elsewhere, but the operator is essentially a way to compare against a range of IP addresses instead of individual ones. For example, we can check for connections from any IP address between 192.168.1.0 and 192.168.1.255:
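A sketch of such a comparison; the remote_ip field name and the value quoting are assumptions, while 192.168.1.0/24 is the CIDR block covering the range described:

```
remote_ip << "192.168.1.0/24"
```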

    Logic

    Multiple field comparisons can be combined together using the basic logical operations and, or, and not. They can use any capitalization and may be combined with parentheses to specify precedence. For example:
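A sketch of such a combined expression (the exe and duration short names are assumptions):

```
(exe = "/usr/bin/bash" or exe = "/usr/bin/sh") and duration > 60
```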

    This search uses parentheses to guarantee that the Duration field must always be greater than 60 seconds, regardless of the other conditions.

    Field Types

    In addition to the available operators, certain types of fields have special rules. For all types, the value used with any comparisons must be of the same type, although Number and Integer are effectively the same. In addition, all String values must be in either single or double quotes, and may escape any ending quotes inside the string using the standard escape character \.

    There are two outlier types that fields can have: List and Map. These types both have elements of another type, most commonly String. A List represents an ordered collection of elements, while a Map represents key and element pairs. Interacting with these data types is relatively straightforward and can be combined with other comparisons for more complex queries. To use a List, you can either access elements by index – for example, you would use [0] to access the first element in the list – or search for a value occurring anywhere using the [*] syntax. Elements in a Map can be queried by referencing the key in brackets or with the :keys[*] and :values[*] syntax to search for a key or value anywhere in the map, respectively.

    For example, this search finds Process objects with a first argument of “-i”:
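Assuming the argument list is exposed as a field named args (a hypothetical short name), this could read:

```
args[0] = "-i"
```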

    Process objects where any argument is “-i”:
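Using the any-element syntax with a hypothetical args field:

```
args[*] = "-i"
```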

    Kubernetes Pod objects serving the PostgreSQL database in the "production" namespace:
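A sketch with hypothetical names (the labels and namespace field names and the "postgresql" label value are assumptions):

```
labels["app"] = "postgresql" and namespace = "production"
```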

    A full list of fields and their types is available in the Search Reference.

    Filtering with related objects

    An advanced feature of the Spyderbat search language is related object queries - the ability to reference other objects of different schemas in addition to the main object and query fields on those objects. Every schema has a set of related objects alongside its fields, which point to specific other objects of either the same or a different schema.

    After a related object, any field or even related object on the related object's new schema can be chained together using the normal query syntax. This is useful when you want to filter based on information that isn’t contained in the original schema.

    The capabilities are best illustrated with an example, such as the machine reference in the Process object, which points to a related object with the Machine schema - in this case, it would be the machine that the process is running on. Using the Cloud Region field in the Machine schema, we can find all bash processes in the "us-east-1" cloud region, even though we do not gather any region data in the Process schema:
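A sketch of this query; the machine reference is from the text, while the cloud_region short name is an assumption:

```
exe = "/usr/bin/bash" and machine.cloud_region = "us-east-1"
```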

    In the example above, each process has exactly one associated machine. Some schemas have multiple other objects that can be associated, however, such as the children of a process. In that case, the reference must always be followed by [*], similar to how any element of a List can be used in a comparison. For example, the mentioned children[*] reference in the Process schema can be used to find all bash processes that ran "sudo" in a child process:
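A sketch of this query; the children[*] reference is from the text, while the glob pattern is an assumption:

```
exe = "/usr/bin/bash" and children[*].exe ~= "*/sudo"
```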

    Related objects of this type can only be queried with the [*] syntax, and attempting to use an index or a string key will cause a syntax error.

    Time Range

    After selecting a schema and creating an expression, the last part necessary for a search is selecting the time range for the query. A search returns all data that existed at any point during that time range. For example, a search with the Process schema over the previous 15 minutes returns all matching processes that were running in the past 15 minutes, even if they were started five minutes or a day ago.

    The Spyderbat UI includes a time picker with preset relative times and a custom duration picker. Relative times become constant for a search once it has been run, as opposed to the search continuing to update as time goes by. For a search to continue reporting results, save it to a Dashboard card.

    Search time picker
    Spyderbat search page
    Install Spyderbat’s Nano Agent Step 1
    Install Spyderbat’s Nano Agent Step 2

    Investigations

    Overview of the Process Investigation section, including the causal graph, records table, details section with all the metadata captured by the Nano Agent.

    Published: August 24, 2021

    To view corresponding video, click HERE.

    The Spyderbat Investigation UI

    In the Spyderbat interface, notice the left-hand navigation menu.

    Click on “Investigate” to enter the investigation area of the product.

    Under the investigate header are toggles turning on or off the various components of the investigate screen.

    Search

    The search area allows you to query for records, or all of the information Spyderbat has gathered, for one or more systems over the selected time frame.

    Here we see that we have a 1 hour query for a machine. Selecting the drop-down under ‘Hosts‘ allows you to query a different machine for the same time period.

    You’ll notice that running a new search results in a new data layer. A data layer is similar to the concept of layers in Adobe Photoshop: it allows you to bring in records for different times and different systems, or from search or dashboards, and toggle those datasets on or off for analysis.

    Records

    The Records table below the data layers acts on the data layers that you have enabled. While you can broadly filter data by enabling/disabling data layers, filtering Records allows for finer-grain views into the data set.

    For example, type in an IP address where it says ‘Filter’ and click “Save” to filter down to all records related to that IP.

    You can also use field-based filtering and “facets” to look for that IP.

    For example, click on ‘Filter’ and scroll down the field-based names to find ‘remote_ip’ to see a list of all remote IP addresses included in the enabled data layers (if any).

    Once we have narrowed down to a set of records of interest, we can plot these on the causal graph by selecting the ‘Star‘ icon to the right of the record.

    Select the ‘Flag’ facet. Flags are often a great place to start – they provide interesting security context information that can be overlaid onto the Causal Tree. Use the “add all” button to add all Flags at once to the Causal Tree.

    Causal Tree

    Looking at the causal tree on the right-hand side is a very powerful way to view the causal connections of the underlying data.

    • S nodes represent systems

    • P nodes represent processes

    • C nodes represent connections, which can relate to other connection nodes or to or from remote IPs and ports.

    The Causal Tree displays all the causal activity leading to and following an event, such as an alert I am investigating.

    The grey badges to the right of a node on the Causal Tree show changes in the effective user, or the effective rights of the user when performing tasks. This provides an immediate visual indication of the effective user’s privileges when executing commands.

    Using your mouse scroll bar or icons on the screen, zoom in and out of the Causal Tree.

    Select a node by left-clicking with your mouse to see additional details in the Details panel, or right-click to add or remove information from the Causal Tree.

    Tip – You can clear the graph with the trashcan icon, and the undo and redo buttons are very handy!

    Details

    The Details panel below the graph allows us to drill into details for nodes we’ve selected from the Causal Tree or Records table. Details include useful, context-based information about the selected node and its relationships to other nodes.

    Here we can see the time this process ran, the command line, environment variables and much more, and any related information like flags or associated connections.

    That’s a whistle-stop tour of Spyderbat investigations, we’ll go deeper into the investigation components in other videos and show how you can start or add to an existing investigation using dashboard and search capabilities.

    Thank you and Happy Tracing!

    Three Things to Try with Spyderbat Community Edition

    Review your security monitoring scope, trace your own activity at runtime, and validate detected suspicious activity via Spyderbat flags.

    Published: August 22, 2021

    OK, you installed your first Spyderbat Nano Agent (How-to Install the Spyderbat Nano Agent). Now what?

    1) Look at the last hour of activity

    If you just installed the Spyderbat Nano Agent, you will see the system as a source on the Sources screen.

    On the left, click “View Spydertrace”. This will query the last hour of activity from that system.

    2) View Your Own Activity

    Do you still have a terminal open from when you installed the agent? If not, log back into the system you installed the agent on.

    Run some simple Linux commands:
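For example (any simple commands will do; whoami and cat are the ones referenced later in this walkthrough):

```shell
ls                # list files in the current directory
pwd               # print the working directory
whoami            # print the current user
cat /etc/hosts    # read a file
```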

    Let’s jump back to the Spyderbat investigate screen.

    Under Search, click on the End Time, select the ‘Now‘ button to update the End Time to the current time, and then select the ‘Run New Search‘ button.

    That query brought in records for the requested time period as a new Data Layer.

    Look in the Records table, under the Sessions tab. Can you find your recent session? Click on the ‘Star‘ to the right of the Session Record to see what it looks like in the Causal Tree.

    The session in the above screenshot shows my session using a bash shell. Notice I was logged in as ec2-user. Right-click on the bash shell process node in the graph and select "add children".

    The Causal Tree updates to display all commands (and processes) that are immediately causally connected to the bash shell. I also see the processes selected in the Records table when I view its Process tab.

    By selecting the ‘cat’ node in the Causal Tree or process name in the Records table, the Details panel provides additional details such as the filename, the working directory, environment variables, and more!

    3) View Your First Flag

    Do you recall running the ‘whoami’ command? In our Causal Tree, it is annotated with a little flag.

    Select the ‘whoami’ node in your Causal Tree to view more information from the Details panel.

    Flags are not the same as alerts. Flags color your Causal Tree with interesting information. The source of a Flag can be third-party alerts as well as other context sources. Spyderbat continuously overlays key security and other context as Flags as they occur.

    A single Flag with no causal outcomes is a characteristic of a false positive. A trace of interest will usually include multiple Flags and multiple layers of activity. By viewing alerts and context as Flags, the Causal Tree shows you exactly how they are related, the sequence of activities, and any other activity causally connected.

    Other Things to Try

    Here are some other great things to try with your Spyderbat Community Edition:

    • If you haven’t already, check out the . Challenges are quick exercises that allow you to explore and learn about real attacks in a very fun way

    • Have a colleague do some basic admin tasks on a system that has the Spyderbat Nano Agent installed, see if you can figure out what they did in Spyderbat and compare notes with them.

    • Install Spyderbat on a Vulnhub VM from (see Spyderbat Blog – ) and hack it, and see what Spyderbat shows. Many of the vulnhub images have walkthroughs if you are not an experienced pentester.

    Thank you and happy tracing!

    AWS Integration

    Overview of the AWS Context Integration using the AWS Agent

    Role of the Spyderbat AWS Agent in the Spyderbat Platform

    The Spyderbat platform is designed to provide a central, detailed contextual view of monitored assets. To achieve this, we start by gathering accurate data regarding the assets using our agent technology.

    We then apply contextual and detection analytics to model this context, which forms the core of the platform. To communicate this context, along with security and operational insights, the platform provides a user interface for investigations (backed by an integration API), as well as Reports, Notifications and Actions.

    To provide an integrated view, we collect various types of information:

    • Machine information: This includes data such as processes, listening sockets, connections, and more, collected by the Spyderbat Nano Agent using eBPF technology.

    • Kubernetes orchestration information: This covers details about active deployments, services, and pods in the cluster, gathered by the Spyderbat ClusterMonitor through the Kubernetes API.

    • Kubernetes IAM information: This includes Service Accounts, Roles, ClusterRoles, and bindings within the cluster, also collected via the Kubernetes API.

    The newest agent in this list is the AWS Agent, which uses various AWS APIs to gather cloud context from the AWS backend hosting your assets.

    Currently, the AWS Agent collects the following information from configured AWS accounts:

    1. Cloud Compute Context from AWS EC2 and AWS EKS: This includes all EC2 instances and EKS clusters within an AWS account, along with their detailed configurations and runtime statuses as reported by the AWS API.

    2. Cloud IAM Context from AWS: This includes all AWS Roles and their associated Trust policies and Permission policies.

    The AWS Agent is designed to be extendable to collect more information from additional AWS services. Future integrations are planned for AWS Config, AWS ECR Image Registry, AWS GuardDuty, AWS EKS Audit Logs, and AWS CloudTrail.

    How Is the AWS Context Leveraged in the Platform?

    Investigations UI

    • Kubernetes Workloads: The platform highlights ServiceAccounts used by pods. If a ServiceAccount is linked to an AWS IAM Role (through a role annotation), that IAM Role is also displayed in the AWS accordion in the context of the investigation.

    Kubernetes Service Account for a pod with associated IAM Role:

    Details of the IAM Role integrated inline in the investigation UI:

    • AWS EC2 Information: Machine-associated AWS EC2 information is available in a new 'AWS' accordion in the Investigations UI, under the EC2 subtab.

    • AWS IAM Roles: IAM Roles associated with an EC2 instance (via an instance profile) are displayed within the 'AWS' accordion, in the IAM subtab of the Investigations UI.

    AWS EC2 and IAM integrated context in the investigation UI:

    These updates allow investigators to quickly locate AWS resource context involved in incidents and assess the associated permissions to evaluate potential impact.

    Reports

    Two new reports are available to leverage the AWS and Kubernetes IAM context collected:

    1. AWS Coverage Report

    This report provides an overview of all EC2 instances and EKS clusters discovered within a specified AWS account. By comparing the complete list of compute assets with those that have a Spyderbat agent deployed, the report highlights detection coverage and helps identify assets that require monitoring and protection.

    2. Cluster RBAC Report

    This report provides an analysis of RBAC (Role-Based Access Control) and permissions for all Kubernetes workloads within a cluster. It summarizes the permissions associated with Service Accounts used by workloads — covering both Kubernetes roles and permissions (defining actions workloads can perform within Kubernetes) as well as AWS permissions (if the Service Account is associated with an AWS IAM Role).

    Detection Analytics

    The following detection analytics have been added to leverage the new context available:

    • Creation of new Service Accounts in Kubernetes

    • Deletion of Service Accounts in Kubernetes

    • Creation of new Roles and ClusterRoles in Kubernetes

    • Deletion of Roles and ClusterRoles in Kubernetes

    Getting started with the AWS Agent

    To utilize this capability, an AWS Agent must be deployed to collect AWS API data. One agent is required for each AWS account you wish to monitor.

    Spyderbat offers multiple deployment options for the AWS Agent, including self-hosted on an AWS VM, deployment on a Kubernetes cluster, or a hosted solution managed by Spyderbat.

    For detailed installation and usage instructions, refer to the AWS Agent Installation Documentation.

    Notification Template Management using Spyctl

    To learn more about what Notification Templates are, see Notification Templates

    Prerequisites

    If you have never used Spyctl, start here to learn how to install it, then follow the Initial Configuration guide.

    What are Notification Templates?

    Notification Templates define the format of notifications sent via different channels such as Email, Slack, Webhook, and PagerDuty, and let you customize the notification messages. When configuring a notification, you can either specify a Notification Target on its own or provide a mapping that pairs specific targets with templates.

    Available Notification Template Types:

    1. email - Create an email notification template.

    2. pagerduty - Create a PagerDuty notification template.

    3. slack - Create a Slack notification template.

    4. webhook - Create a webhook notification template.

    Managing Notification Templates

    Create

    To create a new Notification Template, use the create command:

    Note: This will only create a local YAML file for you to edit. It makes no immediate changes to your Spyderbat environment.

    For example:

    This will create a default Slack Notification Template and save it to a file called template.yaml.

    To get the template in JSON format, use the -o json option:

    Note: Learn How to populate the Template field values

    Apply

    To make a Notification Template available for use, apply it using the apply command:

    For example:

    Get or Download

    You can use the get command to view or download your Notification Templates.

    For example:

    To download a Notification Template as YAML or JSON, use the -o option:

    Using the > character, you can save the document to a file:

    Edit

    To modify an existing Notification Template, use the edit command:

    For example:

    This will open the template in your configured text editor for modification.

    Delete

    To remove a Notification Template from the Spyderbat system, use the delete command:

    For example:

    Notification Templates

    What are Notification Templates?

    Notification Templates define the format and content of notifications sent to different destinations. They allow customization of messages based on the notification type.

    You create Notification Templates for email, Slack, PagerDuty, and webhook notifications. These templates can be used when configuring notifications alongside Notification Targets.

    Note: Notification Templates are Optional when configuring Notifications.

    How to Set Up Spyderbat to Monitor Systems From vulnhub.com

    Published: August 18, 2021

    One fantastic way to study attack techniques is to capture your attack on a vulnerable image in a Spydertrace.

    If you are not familiar with vulnhub.com, it is a site dedicated to providing “materials that allow anyone to gain practical ‘hands-on’ experience in digital security, computer software & network administration.” Vulnhub.com provides a library of vulnerable images to practice exploits and red-team attack techniques.

    Step 1: Create your attack environment

    To set up images from vulnhub.com and create a safe environment for attacking these images, we suggest using Oracle’s VirtualBox

    Dashboards

    Learn about how to create custom Spyderbat dashboards and dashboard cards using Athena Search or existing dashboard templates, as well as how to share and manage access to the custom dashboards.

    Published: July 20, 2023

    If you are looking for information on out-of-the-box Spyderbat Dashboards, please check out the related articles.

    In addition to a number of dashboard cards in 7 different default dashboard categories, Spyderbat users with the appropriate permissions are able to create custom dashboard cards and categories for their organization in the Spyderbat UI.

    In this article we will discuss:

    How to Set Up Spyderbat to Ingest Falco Alerts

    Let's talk about the best way to integrate Falco security detections with the Spyderbat platform to further enhance Spyderbat's cloud-native runtime security monitoring.

    Last Updated: August 16, 2024

    You can enhance Spyderbat detections by integrating with the Falco detection rule sets to add more security context to Spyderbat traces and living causal maps, including process details, user sessions, and network connections.

    By integrating with Falco Sidekick, you will be able to identify, collect, and send Falco events to the Spyderbat platform, and view and act on them within the Spyderbat UI.

    Spyderbat offers a simple deployment approach, and all the needed deployment instructions can be viewed here as well as retrieved via the public GitHub repository.

    Dashboards

    Detailed overview of the Dashboard section of the console, including collected types of data, data management (sorting, filtering and grouping), and shortest path to investigating suspicious activity.

    Published: July 20, 2023

    The Dashboard section of the Spyderbat UI is located at the top of the left hand navigation panel, as shown below. If there is at least one source configured in the Spyderbat UI for your organization, you will be directed to the Dashboard homepage upon successful login into the console. If you have not yet set up any Sources (data collection) within your monitoring scope, please refer to our Documentation portal to access one of our installation guides.

    The Dashboard section provides a consolidated at-a-glance overview of a variety of operational and security data points captured as a result of asset monitoring with active Spyderbat Nano Agents.

    How to Set Up Custom Flags Using Spyctl CLI

    How to Set Up Custom Detections Using Spyctl CLI

    Setting up custom detections using the Spyctl CLI is straightforward. Before you start, ensure you have the Spyctl CLI installed and your organization set as a Spyctl context. You can follow the guide for more details.

    The Spyctl CLI supports various operations for custom flags (also known as custom detections), including creating, editing, deleting, disabling, and enabling. In this section, we'll go through these operations one by one.

    $ spyctl get saved-queries
    $ spyctl create saved-query --help
    
    Options:
      -o, --output [yaml|json|ndjson|default]
      -a, --apply                     Apply the saved query during creation.
      -n, --name TEXT                 The name of the saved query.
      -q, --query TEXT                The query to be saved.
      -d, --description TEXT          A description of the saved query.
      -s, --schema TEXT               The schema of the saved query.
      -y, --yes                       Automatically answer yes to all prompts.
    
    Usage:
      spyctl create saved-query [OPTIONS]
       $ spyctl create saved-query \
         -n "Monitor Deployment with Replicas more than 5" \
         -q "spec.replicas > 5" \
         -s "Deployment"
    $ spyctl edit saved-query <NAME_OR_ID>
    sudo systemctl status aws_agent.service
    sudo systemctl start aws_agent.service
    sudo systemctl stop aws_agent.service
    sudo systemctl restart aws_agent.service
    sudo journalctl -u aws_agent.service
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Effect": "Allow",
    			"Action": [
    				"ec2:Describe*",
    				"eks:List*",
    				"eks:Describe*",
    				"organizations:ListDelegatedAdministrators",
    				"organizations:DescribeOrganization",
    				"organizations:DescribeOrganizationalUnit",
    				"organizations:DescribeAccount",
    				"organizations:ListAWSServiceAccessForOrganization",
    				"iam:Get*",
    				"iam:List*",
     				"iam:Put*",
    				"ecr:Describe*",
    				"ecr:List*",
    				"ecr:Get*"
    			],
    			"Resource": "*"
    		}
    	]
    }
    curl --retry 5 https://orc.spyderbat.com/v1/reg/<registration-key>/script?agentType=aws_agent -o installSpyderbatAws.sh
    sudo -E /bin/sh ./installSpyderbatAws.sh
    helm repo add awsagent https://spyderbat.github.io/aws_agent_helmchart/
    helm repo update
    helm install awsagent awsagent/awsagent \
      --set credentials.spyderbat_registration_key=<registrationKey> \
      --set spyderbat_orc_url=https://orc.kangaroobat.net \
      --set serviceAccount.awsRoleArn=<AWS IAM Role ARN> \
      --namespace spyderbat \
      --create-namespace \
      --set CLUSTER_NAME=<cluster-name>
    helm list
    kubectl get all -n spyderbat
    kubectl logs pod/awsagent-auto-0
    helm list -n spyderbat
    helm uninstall awsagent -n spyderbat
    git clone https://github.com/spyderbat/event-forwarder.git
    cd event-forwarder/helm-chart/event-forwarder
    helm install <release-name> . --namespace spyderbat --set spyderbat.spyderbat_org_uid=<ORG_ID> --set spyderbat.spyderbat_secret_api_key=<API_KEY> --create-namespace
    kubectl logs statefulset.apps/sb-forwarder-event-forwarder -n spyderbat
    starting spyderbat-event-forwarder (commit 4f833d1b02da96fb9df39c38cc9be725e17967fb; 2023-03-29T16:59:19Z; go1.20.2; arm64)
    loading config from ./config.yaml
    org uid: spyderbatuid
    api host: api.kangaroobat.net
    log path: /opt/local/spyderbat/var/log
    local syslog forwarding: false
    {"id":"event_alert:k75NGuJ9Sn0:Y_fKWg:3259:iptables"...
    schema: Process
    query:  exe = "/usr/bin/bash"
    schema: Process
    query:  exe ~= "*bash"
    schema: Connection
    query:  local_ip << 192.168.1.0/24
    schema: Process
    query:  (auser = "root" or euser = "root") and duration > 60
    schema: Process
    query:  args[1] = "-i"
    schema: Process
    query:  args[*] = "-i"
    schema: Pod
    query:  metadata.labels["service"] = "postgres" AND metadata.namespace = "production"
    schema: Process
    query:  exe = "/usr/bin/bash" and machine.cloud_region = "us-east-1"
    schema: Process
    query:  exe = "/usr/bin/bash" and children[*].exe ~= "*sudo"
    https://app.spyderbat.com/app/org/P6V31v0uIG5dtqXTHLsd/dashboard
    curl https://api.prod.spyderbat.com/api/v1/org/ -H "Authorization: Bearer API_key"
    curl https://api.prod.spyderbat.com/api/v1/org/Org_id/source/ -H "Authorization: Bearer API_key"

    | Option                     | Description                                                                              | Default                | Required |
    | -------------------------- | ---------------------------------------------------------------------------------------- | ---------------------- | -------- |
    |                            | api host to use                                                                          | api.prod.spyderbat.com | N        |
    | namespace                  | namespace to install to                                                                  | spyderbat              | N        |
    | spyderbat.matching_filters | only write out events that match these regex filters (json/yaml array of strings syntax) | .*                     | N        |
    | spyderbat.expr             | only write out events that match this expression                                         | true                   | N        |

  • Creation of new AWS IAM Roles

  • Deletion of AWS IAM Roles

  • Permission drift in configured Kubernetes Roles or ClusterRoles

  • Permission drift in AWS IAM Roles

  • Compliance checks against Kubernetes RBAC best practices

  • AWS Agent Installation Documentation
    AWS Integration Diagram
    Kubernetes Service Account for a pod with associated IAM Role
    Details of the IAM Role integrated inline in investigation ui
    AWS EC2 and IAM integrated context in the investigation UI
    AWS Asset Report
    Cluster RBAC Report
    Install Spyderbat’s Nano Agent Step 3.1
    Install Spyderbat’s Nano Agent Step 3.2

  • IAM Roles and Policies: iam:Get*, iam:List*, iam:Put*

  • STS: sts:AssumeRole, sts:AssumeRoleWithWebIdentity

  • Secrets Manager (Optional): Access to the ARN of the configured secret for the registration key.

  • Role Trust Policy: The IAM Role for the Spyderbat AWS Agent requires a trust policy that allows the Kubernetes Service Account associated with the AWS Agent to assume the role. Below is the trust policy:

    Using Notification Templates

    Notification Templates can be referenced while configuring notifications for Notifiable Objects using Spyctl. You can either specify a Notification Target on its own or provide a target map that pairs specific targets with templates, as shown below.

    Example usage with Spyctl:

    Example:

    Usage:

    The spyctl notifications configure command allows notifications to be sent either using custom templates mapped to Targets, or directly via Targets (using the default template).

    Types:

    Note: The examples below show YAML templates, but they can also be generated in JSON format.

    Email

    Email Notification Templates define the subject and body format for email notifications.

    Note: Users must populate subject, body_html, and body_text using placeholders to customize the email content.

    Slack

    Slack Notification Templates define the message structure for Slack notifications. Notification templates can be generated in YAML or JSON format as desired.

    After populating the template:

    Webhook

    Webhook Notification Templates define the payload structure for webhook notifications.

    After populating the template:

    PagerDuty

    PagerDuty Notification Templates define the format for alerts sent to PagerDuty.

    After populating the template:

    Use these templates to ensure consistent and structured notifications across different channels.

    Placeholder Fields and Dynamic Variables

    Some fields in Notification Templates are dynamically calculated and replaced at runtime using placeholders. These placeholders allow real-time data insertion into notification messages.

    Understanding Placeholder Fields

    Placeholder fields allow dynamic values to be inserted into notification templates. These fields are replaced with actual data when a notification is sent.

    They are represented with the syntax {{ __field__ }}.

    Some Common Spyderbat Internal Placeholder Fields are:

    {{ __source__ }} - Source of the event

    {{ __cluster__ }} - Cluster where the event occurred

    {{ __hr_time__ }} - Human-readable timestamp

    {{ __linkback__ }} - Link to view the event in Spyderbat

    {{ __time_int__ }} - Timestamp in integer format

    {{ __origin__ }} - Origin of the event

    {{ __hostname__ }} - Hostname where the event occurred

    {{ __percent__ }} - Percentage value related to the event

    {{ __pd_severity__ }} - Severity level formatted for PagerDuty

    {{ __query_name__ }} - Name of the saved query that triggered the event

    Example Usage in Email Body:

    Dereferencing Values from the Object:

    Regular placeholders ({{ severity }}, {{ description }}) reference fields that are passed directly from the model object. Static text remains unchanged and does not need placeholders.

    • {{ severity }} - Severity level of the event

    • {{ description }} - Description of the event

    By customizing Notification Templates with placeholders, users can ensure notifications provide meaningful and actionable information tailored to their needs.
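    As a conceptual illustration of how placeholder substitution works, here is a hypothetical sketch using sed. This is not Spyderbat's actual rendering engine, and the values are invented for the example:

```shell
# Hypothetical sketch of placeholder rendering -- NOT Spyderbat's
# implementation. At send time, each {{ ... }} placeholder in the
# template is replaced with a concrete value.
TEMPLATE='Alert triggered at {{ __hr_time__ }}. Details: {{ description }}'
HR_TIME='2024-09-30 21:06 UTC'                  # computed at runtime
DESCRIPTION='ReplicaSet exceeds replica limit'  # taken from the model object
echo "$TEMPLATE" \
  | sed -e "s/{{ __hr_time__ }}/$HR_TIME/" \
        -e "s/{{ description }}/$DESCRIPTION/"
# -> Alert triggered at 2024-09-30 21:06 UTC. Details: ReplicaSet exceeds replica limit
```

    Internal placeholders (double underscores) and regular object fields are replaced the same way; the difference is only where the value comes from.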

    To learn more about placeholder fields and constructing templates, read this.

    Conclusion

    By following this guide, you can create well-structured, dynamic Notification Templates for different destinations. Using placeholders correctly ensures your notifications contain relevant, real-time data.

    Manage Notification Templates Using Spyctl

    To start creating templates, follow our Spyctl tutorial: Manage Notification Templates Using Spyctl

    Quick Start Tutorial

    To quickly get started with Spyderbat Notifications, follow our Spyctl tutorial.

    How to setup Spyderbat Notifications (Spyctl CLI)

    1. Create a Custom Detection

    The create command for custom flags allows you to create a custom detection using the Spyderbat Query Language (SpyQL) in the Spyctl CLI. Spyctl provides a help option (--help) for every command. To view the help for creating a custom flag, run:

    To start, you must select the object you want to generate a flag for. This is done via the --schema option. You can view the list of available search schemas with the spyctl search --describe command.

    Next, craft a query for the schema you selected. Each schema has a number of searchable fields; you can view them with the spyctl search --describe SCHEMA command. For example, spyctl search --describe Process and spyctl search --describe model_process both retrieve the same results.

    Using the above information, let's create a simple custom flag for a K8s ReplicaSet having more than 6 replica instances:

    Explanation:

    • replica-flag - The name of the custom flag.

    • --schema "Replicaset" - The schema used for the custom flag. To view available schemas/objects for creating custom flags, run $ spyctl search --describe. The list includes processes, connections, all Kubernetes resource schemas, and more. You can also use model_k8s_replicaset for this option.

      • Note: Custom flags cannot be created for event_deviation, event_opsflag, event_redflag, or model_spydertrace Schemas.

    • --query "spec.replicas > 6" - The SpyQL query used for the custom flag. The suggested method is to utilize the search functionality in the UI under the Search Section to identify and test the queries you want to flag. Once identified, you can copy and paste the query as a value for the -q option.

    • --type "redflag" - The type of the custom flag. By default, the flag type is set to redflag.

    • --severity "high" - Specifies the perceived severity level of the flag.

    • --description "A ReplicaSet running more than 6 replicas found" - A description of the custom flag.

    You can also include other options like --content and --impact for the custom flag. These will show up in the console during an investigation. The YAML configuration generated by the create command will look like the example below. Verify the YAML before applying it.

    This step only generates the YAML. The next step is to apply this flag.

    To apply the custom flag, you have two options:

    a. Apply Immediately: Run the same command as above and include the --apply flag to apply the flag immediately.

    b. Apply from a File: Save the YAML configuration to a file and then apply it using the following command: spyctl apply -f FILENAME

    You should see "Successfully applied new custom flag with uid: flag:*" after applying the flag. Once set up, custom flags operate in real time, triggering a flag immediately whenever the query is matched.

    2. Get All Custom Flags

    To retrieve all custom flags that were created, use the following command:

    You'll see a list of custom flags like this:

    3. Edit a Custom Flag

    You can edit a custom flag if required using the command below, passing the flag ID or name.

    After editing the YAML and saving it, you should see:

    Successfully edited custom flag with uid: flag:*

    4. Delete a Custom Flag

    To remove custom flags that are no longer needed, use the command below:

    5. Disable a Custom Flag

    To temporarily turn off a custom flag without deleting it, use:

    6. Enable a Custom Flag

    If you need to re-enable a custom flag that has been disabled, use:

    sudo journalctl -u aws_agent.service
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<open-id-provider-id>"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringLike": {
              "oidc.eks.<region>.amazonaws.com/id/<open-id-provider-id>:aud": "sts.amazonaws.com",
              "oidc.eks.<region>.amazonaws.com/id/<open-id-provider-id>:sub": "system:serviceaccount:*:aws-agent"
            }
          }
        }
      ]
    }
     spyctl create notification-template [TYPE] -n NAME
     spyctl create notification-template slack -n slack-template > template.yaml
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: Example
      type: slack
    spec:
      text: ''
      blocks: []
     spyctl create notification-template slack -n slack-template -o json > template.yaml
     spyctl apply -f FILENAME
     spyctl apply -f template.yaml
     spyctl get notification-templates
     spyctl get notification-templates
    Getting notification-templates
    Page 1/1
    NAME                 UID                        TYPE       CREATED                     DESCRIPTION
    test-email-tmpl      tmpl:avgUE                 email      2024-09-30T21:06:03 UTC     Operations teams.
     spyctl get notification-templates -o yaml slack-template
     spyctl get notification-templates -o yaml slack-template > template.yaml
    spyctl edit notification-template NAME_OR_UID
     spyctl edit notification-template slack-template
    spyctl delete notification-template <NAME_OR_UID>
    spyctl delete notification-template slack-template
    spyctl notifications configure saved-query QUERY_UID \
      --target-map TARGET_NAME_OR_UID=TEMPLATE_NAME_OR_UID
    spyctl notifications configure saved-query query:abc \
      --target-map OperationsTeam=email-template \
      --cooldown 300
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: email-template
      type: email
    spec:
      subject: ''
      body_html: ''
      body_text: ''
    spec:
      subject: "Spyderbat Alert: {{ severity }} Severity Detected"
      body_html: "<p>Alert triggered at {{ __hr_time__ }}</p><p>Details: {{ description }}</p>"
      body_text: "Alert triggered at {{ __hr_time__ }}. Details: {{ description }}"
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: slack
      type: slack
    spec:
      text: ''
      blocks: []
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: slack
      type: slack
    spec:
      text: "Alert: {{ severity }} - {{ description }}"
      blocks:
        - type: section
          text:
            type: mrkdwn
            text: "*Alert Triggered at:* {{ __hr_time__ }}\n*Details:* {{ description }}"
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: webhook-template
      type: webhook
    spec:
      payload: {}
      entire_object: false
    spec:
      payload:
        severity: "{{ severity }}"
        details: "{{ description }}"
        timestamp: "{{ __hr_time__ }}"
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: pg
      type: pagerduty
    spec:
      class: null
      component: null
      source: ''
      summary: ''
      severity: ''
      dedup_key: null
      custom_details: {}
      group: null
    spec:
      summary: "Spyderbat Saved Query '{{ __query_name__ }}' Matched"
      source: "{{ __source__ }}"
      severity: "{{ __pd_severity__ }}"
      custom_details: 
        "description": "{{ description }}"
        "cluster": "{{ __cluster__ }}"
        "time": "{{ __hr_time__ }}"
        "linkback": "{{ __linkback__ }}"
    <p>Spyderbat Custom Flag "{{ custom_flag_name }}" Emitted</p>
    <ul>
        <li>Cluster: {{ __cluster__ }}</li>
        <li>Source: {{ __source__ }}</li>
        <li>Time: {{ __hr_time__ }}</li>
    </ul>
    <p>{{ description }}</p>
    <p><a href="{{ __linkback__ }}">View in Spyderbat</a></p>
    $ spyctl create custom-flag --help
    Create a custom flag from a saved query.
    
    This command allows you to write custom detections using the Spyderbat Query
    Language (SpyQL).
    
    At a minimum you must provide the following:
    - schema
    - query
    - description
    - severity
    - name
    
    To view available schema options run:
      'spyctl search --describe'
    To view available query fields for your schema run:
      'spyctl search --describe <schema>'
    Query operators are described here:
      https://docs.spyderbat.com/reference/search/search-operators
    
    Example:
    spyctl create custom-flag --schema Process --query "interactive = true and container_uid ~= '*'" --description "Detects interactive processes in containers" --severity high interactive-container-process
    
    Options:
      -o, --output [yaml|json|ndjson|default]
      -a, --apply                     Apply the custom flag during creation.
      -d, --description               A description explaining what the flag
                                      detects.  [required]
      -q, --query                     Objects matching this query + schema
                                      combination will be flagged. If used, this
                                      will create a saved query.
      -s, --schema                    The schema for the SpyQL query used by the
                                      custom flag. If used, this will create a
                                      saved query.
      -Q, --saved-query               The UID of a previously saved query. If
                                      used, this will override the query and
                                      schema options.
      -t, --type                      The type of the custom flag. One of
                                      ['redflag', 'opsflag'].
      -S, --severity                  The severity of the custom flag. One of
                                      ['critical', 'high', 'medium', 'low',
                                      'info'].  [required]
      -D, --disable                   Disable the custom flag on creation.
      -T, --tags                      The tags associated with the custom flag.
                                      Comma delimited.
      -i, --impact                    The impact of the custom flag on the
                                      organization.
      -c, --content                   Markdown content describing extra details
                                      about the custom flag.
      -N, --saved_query_name          If a new saved query needs to be created,
                                      this overrides the auto-generated name.
      -y, --yes                       Automatically answer yes to all prompts.
    
    Usage:
      spyctl create custom-flag [OPTIONS] NAME
    
    $ spyctl create custom-flag replica-flag --schema "Replicaset" --query "spec.replicas > 6" -t "redflag" --severity "high" --description "A ReplicaSet running more than 6 replicas found"
    apiVersion: spyderbat/v1
    kind: SpyderbatCustomFlag
    metadata:
      name: replica-flag
      schema: model_k8s_replicaset
    spec:
      enabled: true
      query: spec.replicas > 6
      flagSettings:
        type: redflag
        description: A ReplicaSet running more than 6 replicas found
        severity: high
    $ spyctl get custom-flags
    Getting custom-flags
    Page 1/1
    NAME                 UID       DESCRIPTION                                         SEVERITY      SCHEMA                     STATUS    AGE
    replica-flag         flag:*    A ReplicaSet running more than 6 replicas found     high          model_k8s_replicaset       ENABLED   20m
    $ spyctl edit custom-flag <NAME_OR_ID>
    $ spyctl delete custom-flag <NAME_OR_ID>
    $ spyctl disable custom-flag <NAME_OR_ID>
    $ spyctl enable custom-flag <NAME_OR_ID>

  • Stand up a honeypot or similar system on the internet that can be easily exploited to see what Spyderbat captures!

  • Want to bring in the rest of the team? Try a red team/blue team exercise where the red team attacks a set of Linux systems, and the blue team defends using Spyderbat!

  • Defend The Flag challenges
    vulnhub.com
    How to Setup Community Edition to Monitor Systems from Vulnhub

    Download and install VirtualBox here: https://www.virtualbox.org/wiki/Downloads

    Since we will attack the vulnerable image with real attack techniques/exploits, we suggest creating a separate environment. In VirtualBox, we did this by creating a “NAT Network”.

    After installing Oracle’s VirtualBox, type on the command line:
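    The exact command is not reproduced in this copy of the article; assuming VirtualBox's VBoxManage CLI, a command along these lines creates the network:

```shell
# Create a NAT network named "natnet1" with DHCP enabled
# on the 192.168.15.0/24 subnet.
VBoxManage natnetwork add --netname natnet1 \
  --network "192.168.15.0/24" --enable --dhcp on
```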

    This command creates a NAT’d network called “natnet1” and a DHCP server in VirtualBox using the 192.168.15.0/24 subnet.

    Step 2: Setup your Attack Machine

    We need to attack our vulnhub image from a different system within our NAT Network. We recommend setting up a new instance of Kali Linux.

    https://www.kali.org/get-kali/#kali-virtual-machines

    1) Import this VM into Virtual Box

    2) Under Settings, change the Network Adapter to attach to “NAT Network” with the name “natnet1”

    3) We recommend disabling USB ports and audio on the VM in Virtual Box settings.

    4) Start the machine with “Normal Start”; the default login is “kali/kali”.

    Step 3: Setup a Personal Firewall

    For safety, we want the victim VM to be able to contact the Spyderbat “Orc” but not anything else. Setting this up on a PowerBook, we found Apple’s built-in firewall was not sufficient, so we used a firewall called “Little Snitch” configured with two rules:

    Rule 1: Deny everything outbound from VirtualBox

    Rule 2: Allow traffic on TCP port 443 from Spyderbat – orc.app.spyderbat.com

    Step 4: Choose a vulnhub image for your Victim Machine

    Go to the vulnhub five86-1 https://www.vulnhub.com/entry/five86-1,417/

    Note – you can elect to find a different vulnhub image by using vulnhub.com’s search and typing “Linux”. While most options will have a VirtualBox image, not all will, so you may need to adjust the image installation process.

    Step 5: Install Spyderbat’s Nano Agent on the Victim Machine

    Before we get going, we want to capture everything we do to the victim machine. This means we need to get on the machine in single-user mode to install our Nano Agent. To boot into single-user mode (on most Linux distributions):

    1) During the Linux boot process, type ‘e’ to edit the boot loader.

    2) Replace “ro quiet” with “rw init=/bin/bash” to temporarily boot into a writable shell (known as single-user mode)

    3) Press “ctrl-x” to proceed with the boot.

    You should now be in a shell as root. This is a limited shell. To install the Nano Agent, we are going to create a new user, reboot, login as our new user, and install the agent.

    4) Type on the command-line:
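    The original command block is missing from this copy of the article; a typical sequence (assuming a Debian-style distribution, with “spyuser” as a placeholder username) would be:

```shell
# Create a new user with a home directory and a login shell,
# set a password, and grant sudo so the agent install can run.
# "spyuser" is a placeholder -- pick any username you like.
useradd -m -s /bin/bash spyuser
passwd spyuser
usermod -aG sudo spyuser
```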

    5) Reboot the machine and login as your new user.

    6) On a separate machine, use a browser to login into your Spyderbat Community Edition account.

    7) When prompted, select “Begin Tracing” and follow the instructions to install Spyderbat’s Nano Agent onto the victim machine. See Spyderbat Blog: How to Install Spyderbat’s Nano Agent

    Note – there are a few vulnhub images that have removed the ability to boot into single-user mode. In this case, you may have to get root on the box by going through the attack exercises, installing the Nano Agent once you have rooted the box, then repeating the attack exercises to capture it with Spyderbat.

    Once your Nano Agent is installed, try gaining access to the vulnerable image and then seeing your results in Spyderbat.

    See examples of an attack trace using vulnhub images in Spyderbat’s Defend the Flag Linux Challenges. Sign up here: https://app.spyderbat.com/signup

    Thanks and happy tracing!

  • Creating a brand new dashboard card from scratch

  • Creating a new dashboard card off an existing dashboard card

  • Managing custom dashboards and dashboard cards

    How To Create a Dashboard Card from Scratch

    Spyderbat allows you to create custom dashboard cards based on your specific search queries. Follow the steps below to create a custom dashboard card:

    1. Access the Search Section:

    • In the top-left corner of the Spyderbat interface, navigate to the Search section.

    • The search section includes a list of popular queries to get started with.

2. Build Your Search Query:

    • The search section provides predefined categories such as System, Operation, Security, and Kubernetes (K8s). Each category offers a variety of search objects/Schemas with relevant fields.

    • For example, if you're searching for Kubernetes Nodes, choose Node from the Kubernetes category. Click on Open Query Builder to start building your query.

    • Select the fields that you need (e.g., Cluster Name, Node Name, etc.) available from the list. You can add additional rows and conditions to refine the query.

• You can refer to the Search doc to learn how to write Search queries.

3. Set Filters and Execute the Query:

    • Once your query is built, click on Send to Search.

    • Apply the appropriate time filter (e.g., last 24 hours, custom date range).

    • Click Search to run the query and display the results.

4. Save the Query as a Dashboard Card:

    • After the results are displayed, click on Save Dashboard Card.

• You can either add the results to an existing dashboard card or create a New Dashboard Card. Both options are available in the drop-down list.

    • Click Create to finalize and add your custom card to the dashboard.

    • Your custom dashboard card will now be available in the Dashboard section, reflecting real-time data based on the query results.

    How to Build a New Dashboard Card Off an Existing Card

Perhaps an easier way to create custom dashboard cards is to tweak one of the existing out-of-the-box cards available in the Spyderbat UI.

    For example, let’s take a look at one of the Security cards named “Recent Spydertraces with Score >50”. Assume you would like to prioritize your focus on Spydertraces with much higher severity scores of 100+. The quickest way to build out a dashboard card like that would be to take the existing card and click “Run In Search”:

    You can see the full query and can easily find the parameter to modify, which would be the score:

    Once you update the score value to “>100”, you can save this as a new Dashboard Card and place it into your custom Dashboard category of choice to be easily accessible. You can also set notifications to be alerted if there is data pulling into that custom card.

Use this method for minor query changes; for more substantial changes, build your query from scratch in the Search section.

One thing to keep in mind: you cannot edit a query in a custom dashboard card. If you saved a card and then decide to tweak it further, follow the steps outlined above: select the card you wish to modify, click “Run in Search”, update the query as desired, and save it as a new dashboard card.

    How to Manage Custom Dashboards

Once you have created a number of custom dashboards and dashboard cards, they will be visible to all users in your organization in the Spyderbat UI. Users with adequate permissions will be able to rename dashboards and cards, add new dashboard cards to custom dashboards created by other users, and delete dashboard cards and entire dashboards.

All dashboard management options can be accessed by clicking the “pencil” icon in the upper right corner of the custom Dashboard you wish to modify:

    Here you can do a number of things:

    • See if you have configured notifications for any of the cards in your custom dashboard.

    • Hide a dashboard card from view by using the on/off slider on the left side of the dashboard card name

    • Delete a dashboard card by clicking the “x” icon (Note: you will not be prompted to confirm your deletion, but it won’t be applied until you click “Save” in the lower right corner of the Edit window)

    • Rename a dashboard card by clicking a “pencil” icon and then a “save” icon that looks like a floppy disk

    • Change the order of appearance for the dashboard cards within the dashboard by dragging and dropping the “=” on the right hand side of the respective dashboard card names

    • Rename the Dashboard category by clicking the ellipsis (three vertical dots) and selecting the “Edit Dashboard Name” option

    All these changes will only apply after you click “Save” and will be in effect for all users in your organization, as stated on the next screen:

    You can also delete an entire dashboard with all the cards in it by selecting “Delete Dashboard + Cards”, in which case you will need to confirm your decision:

    Custom user-created dashboards will appear in the front positions of the category menu, pushing all default out-of-the-box categories to the right. The order of dashboard categories cannot be modified at this time, and categories cannot be hidden from view.

    Infrastructure Prerequisites

At a minimum, you should have an organization set up in Spyderbat Community Edition. You can go to https://www.spyderbat.com/start-free and request a free trial to install up to 5 Spyderbat Nano Agents.

The Spyderbat Nano Agent must be installed on the machines you wish to monitor using Falco rule sets. The Spyderbat Nano Agent leverages eBPF technology on Linux systems to gather data and forward it to the Spyderbat backend. A full list of supported Linux operating systems can be found on our website here (paragraph 4).

    Please refer to the following guide on how to install Spyderbat Nano Agent into a Kubernetes cluster.

    Falco does not have to be installed in your environment prior to Spyderbat integration, as it will be taken care of as part of the integration process. We will provide instructions below on how to handle the integration without Falco running yet, as well as if Falco is already in place. For reference, here is the official installation guide available on the Falco Helm chart repository.

    Installing Falco Sidekick Using Helm Chart

You can configure the Falco Sidekick daemon to connect Falco to your existing ecosystem. This lets you forward Falco-generated events to the Spyderbat platform, where they are seamlessly integrated with Spyderbat security content and displayed in the causal activity graphs of the Spyderbat Investigation UI, supplementing and further enriching Spyderbat output.

    If you do not already have Falco installed, you can install it and configure it to use the Spyderbat integration at the same time. First, add the Falco security helm chart repository:

    Then, install Falco and the Spyderbat integration with:

    If you already have Falco installed through the Helm chart, changing helm install to helm upgrade should update it properly. Make sure to include any existing custom configuration that you are using for Falco or the Sidekick Pod.

The “orguid”, which stands for Unique Organization ID, is specific to your organization and can be retrieved from the Spyderbat UI URL once you log into your console:

Aside from enabling and configuring the Spyderbat integration, these configuration options enable additional ID information in the Falco event messages, which Spyderbat uses to tie the events into our existing context. They also set the driver type to modern_ebpf instead of the default kernel driver. If your machine does not support the new driver, you may need to remove that argument.
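Conceptually, that extra process ID is what lets Spyderbat join a Falco event to a process it is already tracking. A minimal sketch (illustrative only; the data shapes here are assumptions, not Spyderbat's internal format):

```python
# Illustrative only: how a process ID lets a Falco event be joined to a
# process Spyderbat already tracks. Data shapes are made up for the sketch.
tracked_processes = {4242: {"name": "bash", "machine": "web-1"}}
falco_event = {"rule": "Terminal shell in container", "pid": 4242}

process = tracked_processes.get(falco_event["pid"])
print(process["name"] if process else "unknown")  # bash
```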

    Please refer to the KBA “How to Set Up Your Spyderbat API Key and Use the Spyderbat API” for more information on Spyderbat API use. For your convenience, the main steps for API Key generation are listed below:

1. Log into the Spyderbat console

    2. Click on your User icon in the upper right corner and go to the “API Keys” section

    3. If you do not have any active API keys, click “+ Create API Key” and save it in your user profile

    4. Once generated, copy the API key into the clipboard:

    Validation

    If the installation proceeded correctly, you should receive no error messages and can run the following command to validate that all pods deployed successfully:

    You should see a similar output generated if everything is working as expected:

    Once Falco starts detecting suspicious activity, respective “FALCO” labeled flags will be generated in the Spyderbat data stream and made visible in the Spydergraph Investigation section. These Flags can be located by running a search query. You will select the “Search” option in the left-hand navigation menu, run your search query, and then select a Flag you wish to investigate on a visual causal graph by checking the box and clicking “Start Investigation”:

    An example of searching for Falco objects

    Once you click the “Start Investigation” button, you will be redirected to the Investigation page where you will be able to see the selected flags and all associated processes as well as other security content:

    You can also locate these flags by applying filtering options to our default Flags Dashboard and selecting the flags to start an Investigation this way:

    An example of a dashboard filtering rule for Falco.

    Please refer to our Spyderbat Overview Video for a more detailed walkthrough of the UI and its key functionality.

    To stay on top of incoming Falco findings, you can create a custom dashboard card to pull in all Falco flags with desired severity by building the following search query for Redflag objects:

    Once you have run your search, you can save the output as a custom dashboard card to be easily accessible through the UI:

    An example search, highlighting the "Save Dashboard Card" button

    Falco flags will differ in severity values that are mapped to Spyderbat severity values as follows:

Falco Severity Value | Spyderbat Severity Value
Emergency | Critical
Critical | Critical
Alert | High
Error | High
Warning | Medium
Notice
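As a quick reference, the mapping above can be expressed as a lookup table (a sketch; the Spyderbat value for Notice is not shown in the table above, so it is deliberately omitted here):

```python
# Falco -> Spyderbat severity mapping, per the table above.
# The Spyderbat value for Falco "Notice" is not shown in the table,
# so it is deliberately absent from this dict.
FALCO_TO_SPYDERBAT = {
    "Emergency": "Critical",
    "Critical":  "Critical",
    "Alert":     "High",
    "Error":     "High",
    "Warning":   "Medium",
}

def spyderbat_severity(falco_severity: str) -> str:
    """Return the mapped Spyderbat severity, or 'Unknown' if unmapped."""
    return FALCO_TO_SPYDERBAT.get(falco_severity, "Unknown")

print(spyderbat_severity("Error"))  # High
```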

    Note that the free Spyderbat Community account allows you to monitor up to 5 nodes, i.e. register up to 5 sources in the Spyderbat UI. If you have a cluster that contains more than 5 nodes or anticipate scaling up in the near future, please visit https://www.spyderbat.com/pricing/ to sign up for our Professional tier.

    Dashboard Card Overview

The Dashboard section comprises several default groups of dashboard cards. Each individual dashboard card represents the structured output of an Athena search query crafted using criteria set forth by Spyderbat security analysts.

    Click to enlarge

As you can see, all dashboard cards have the same default height, which means only so many rows can be displayed within a card, even with the scroll bar. Spyderbat dashboard cards surface the top 100 rows and indicate the total number of rows that meet the dashboard card criteria in the card header.

If you need to view or export all the data, you can do so through Search by clicking “view all [total number]”, or “view first 10K” if more than 10K rows are returned. In the latter case, it is highly advisable to apply additional search or filtering criteria to reduce the volume of data, which we will cover shortly.

    Modifying Dashboard Card Appearance

By default, all dashboard cards are of fixed height and width. While the height of a card cannot be adjusted, its width can be maximized to double the available real estate and pull more columns into the immediate card view. To do this, hover over the card you wish to expand and click the “maximize” symbol:

    You can also adjust the selection of columns displayed by clicking the Columns drop down and updating your selections to show or hide certain columns.

    You can also hide a specific column by selecting “Hide” from the drop-down menu accessible via the ellipsis on that column header:

    Finally, you can move columns around and change their order by dragging them by the column header to the left or right. You can also manually adjust the column width.

Please note that changes you make to the formatting and appearance of your dashboard cards will not persist; they are limited to the duration of your user session. If you refresh the page, or leave and log back in later, the cards will revert to the default view.

    Data Filtering and Sorting

While the search queries behind the default dashboard cards cannot be modified within the default card itself, a number of data sorting and filtering options allow you to fine-tune the data output within the default card.

The first feature to note is the option to adjust the time range for which data is pulled into the dashboard card. By default, the range is set to 24 hours, but the available options range from 1 hour back to 30 days back and can be applied in the drop-down. Your selection will persist unless you switch to a different organization or refresh the page.

The Filters option in the upper left part of the card allows you to apply additional filters to existing columns so that only data meeting the filtering criteria is displayed. To set up a filter, click the Filters icon, select the column you wish to filter on from the drop-down, and set up your filtering criteria. Make sure to click Apply Filters to save your filter settings:

You can apply multiple filters to different columns, using either an “AND” or an “OR” operator to combine them. The total number of filters applied to your dashboard card is displayed in a small blue dot on the Filters option in the upper left corner of the screen. If a filter is applied to a column, you will see a small filter icon on that column; hovering over it shows how many active filters are set up against that column, and clicking it lets you edit them directly:
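Conceptually, combining column filters with AND or OR is just predicate composition over the rows. A minimal sketch (illustrative only; not the UI's implementation, and the column names are made up):

```python
# Illustrative sketch: column filters combined with AND / OR.
rows = [
    {"severity": "high", "cluster": "prod"},
    {"severity": "low",  "cluster": "prod"},
    {"severity": "high", "cluster": "dev"},
]

filters = [
    lambda r: r["severity"] == "high",  # filter on the "severity" column
    lambda r: r["cluster"] == "prod",   # filter on the "cluster" column
]

and_rows = [r for r in rows if all(f(r) for f in filters)]  # "AND"
or_rows  = [r for r in rows if any(f(r) for f in filters)]  # "OR"
print(len(and_rows), len(or_rows))  # 1 3
```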

To remove a filter, click the X next to it and then click the Apply Filters button to save your changes:

You can also sort the data within a selected column in ascending or descending order by hovering over the column header. An arrow will then be visible next to the header to indicate the sorting applied - ascending or descending. If the arrow symbol is light gray (not white), the Un-sort option is in place:

    Alternatively, the sorting could be applied by clicking the ellipsis icon (three vertical dots) when hovering over the desired column and selecting the Sort ASC or Sort DESC from the menu.

Just like rearranging the dashboard card columns, the sorting and filtering of data in the dashboard cards will not persist and will revert to the default view if you navigate away from the Dashboards section.

Additional filtering and sorting of the data, with the intent to reduce noise and improve data quality from a security perspective, can be performed by tweaking and tuning the search query. To do this, hover over the desired dashboard card and click the “Run in Search” option, which will take you to the Search section of the UI:

    Please refer to this article to learn more about how to create new dashboard cards through Search.

    Data Grouping

In addition to filtering and sorting the data within the card, some dashboard cards allow grouping the data into summary rows by column values. By default, several cards have been selected by Spyderbat analysts to have the Grouping feature enabled, with all data grouped based on the specific criteria called out in the first column:

You can expand a selected grouping by clicking the accordion symbol:

    If you turn off grouping by moving the slider on the right from “Grouping Enabled” to “Grouping Disabled”, all rows will be displayed in an unsorted order.

When “Grouping Enabled” is on, you can also apply nested grouping based on the values in other columns: click the ellipsis (three vertical dots) on the column whose values you wish to use for nested grouping, and select “Group by [column name]”:

To remove nested grouping, follow the same steps and choose “Stop Grouping by [column name]” from the drop-down. To remove all grouping, just flip the “Grouping Enabled” slider to “Grouping Disabled”:
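Conceptually, grouping and nested grouping collect rows into summary buckets keyed by column values. A minimal sketch (illustrative only; the column names are made up):

```python
# Illustrative sketch of grouping rows by a column, with nested grouping.
from collections import defaultdict

def group_by(rows, column):
    groups = defaultdict(list)
    for row in rows:
        groups[row[column]].append(row)
    return dict(groups)

rows = [
    {"cluster": "prod", "node": "n1"},
    {"cluster": "prod", "node": "n2"},
    {"cluster": "dev",  "node": "n3"},
]

top = group_by(rows, "cluster")                            # top-level grouping
nested = {k: group_by(v, "node") for k, v in top.items()}  # nested grouping
print(sorted(top))  # ['dev', 'prod']
```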

    From Dashboards to Investigation

Besides offering extensive observability options and a holistic view of your security posture, dashboard cards let you easily segue into investigating any suspicious, or simply interesting, activity in your monitored environment. All you need to do to start an investigation is select one or more rows in one or multiple dashboard cards and click “Start Investigation”.

    Clicking the X in the “Start Investigation” pop-up, will automatically deselect all rows.

    To learn more about Spyderbat Investigation section and how to navigate it, please refer to our Investigations Tutorial.

    At any time during your investigation you can go back to the dashboards section to add more items to your existing investigation or start a brand new investigation:

    If you choose to start a new investigation, the existing open investigation will get overwritten, unless you save an Investigation Link.

    If you are focusing your investigation on K8s assets and inventory, rather than processes, the system will prompt you to run a K8s investigation.


    How-To Guides for Spyderbat Nano Agent Installation
    Click here for a tutorial on using the Causal Tree in Investigations.
    Right clicking a node in a Causal Tree
    Top of Causal Tree

    How to Put Guardrails Around Your K8s Clusters Using Spyctl

    This tutorial will walk you through the creation, tuning, and management of Cluster Ruleset Policies.

    Prerequisites

    • Install Spyctl

    • Configure Spyctl with a Context

• Install the Spyderbat Nano Agent on a cluster via helm install

    What is a Cluster Ruleset Policy?

A Cluster Ruleset Policy is a special type of Ruleset Policy focused on establishing allowed or disallowed resources or activity within a Kubernetes Cluster. Through Cluster Ruleset Policies, users can receive customized notifications when deviant activity occurs within their clusters. For example, users can specify the container images that are allowed to run within a namespace. Should a new image appear, a deviation is created, with a link to investigate the problem. Users can then take manual or automated actions to address the deviation.

    Creating a Cluster Policy

    Cluster Policies and their accompanying Cluster Rulesets are generated using the spyctl create command. First, identify which cluster you wish to create a cluster policy for.

    For example:

If the previous command does not return any results, follow the helm installation guide to install the Spyderbat Nano Agent in your K8s cluster.

    Next, consider how you would like the auto-generated rules to be scoped. Certain rule types may be scoped specifically to namespaces.

    Use the following command to generate a cluster policy and its ruleset(s).

    For example:

    By default, rules are generated using data from the last 1.5 hrs. You can use the -t option to override that.

The file you just generated, cluster-policy.yaml, now contains the Cluster Policy itself and any automatically generated rulesets the policy requires.

    You can edit or add rules if you wish, or you can apply the policy at this point. To apply this policy, run the following command:

    For example:

    To confirm that your policy applied successfully you can run the following command:

    And to view your cluster-rulesets you can run the command:

    For example:

    [Optional] Adding "Interceptor" Response Actions

By default, Cluster Policies have a single response action, makeRedFlag. This action generates a redflag that references the deviant object. For example, if a container violates one of the ruleset rules, a redflag will be generated for that container object. Redflags are used to populate security dashboards within the Spyderbat Console, but may also be forwarded to a SIEM and/or used to trigger notifications.

    Containers that violate a cluster policy rule can also be used to trigger the agentKillPod response action. You can add a default action to kill the pod of any violating container by editing the policy yaml:

    Then, under the response section of the spec you can add a new default action:

    Alternatively, you can scope the kill pod action to a sensitive namespace:

    Reviewing Policy Activity

    Using the spyctl logs command, you can see what sorts of activity are going on within the scope of your policy.

For example:

    Summary and Next Steps

At this point you should have an applied Cluster Policy in audit mode. This means your policy is in a learning phase: it will generate logs and deviations, but will not take any response actions. After you feel the policy has stabilized (not generating deviations, or generating them rarely), you can set the policy to enforce mode.

    You can create Cluster Policies for any other Kubernetes Clusters you manage.

For additional details on ruleset rules, view the Ruleset Reference Guide. There you can find additional scoping options and rule targets.

For additional details on managing policies (updating, disabling, deleting), see the Guardian Policy Management Reference Guide.

    How to Set Up Notifications Using Spyctl

    Configure Notifications using Spyctl to receive alerts for significant Security or Operations events.

    Prerequisites

    • Install Spyctl

    Overview

    Spyderbat's notification system has 3 main components:

    • Notification Targets: Named destinations to where notifications can be sent.

    • Notification Templates: Pre-built templates containing most of the information required to create a Notification Config. These templates simplify the configuration process.

• Notifications: Allow you to configure notifications to the targets for the notifiable objects.

For the full documentation of the Spyderbat Notifications System, refer to the Spyderbat Notification Concept.

    How to Set Up Notifications

    Spyderbat allows you to configure notifications for a variety of resources and targets, enabling you to streamline your workflow and stay informed about important events. Here’s how you can set up notifications for different targets:


    Step 1: Identify Notification Targets/Target Template.

    Before you can configure notifications, you need to decide where to send them. Spyderbat supports several Notification Targets, such as:

    • Email

    • Slack Channel

    • Webhook

    • PagerDuty

    Ensure that you have already configured the Notification Targets for these destinations before proceeding.

All of the commands to manage Notification Targets using Spyctl can be found here.

    Each Notification Target can be mapped to an optional Custom Notification Template, which defines the structure of the notification. Pre-configured templates help streamline the setup process.

All of the commands to manage Notification Templates using Spyctl can be found here.

    If no template is specified, Spyderbat applies its default template.


    Step 2: Access Notification Command Help

    To learn more about the spyctl notifications commands and their usage, run the following command:

    This will display the following help message:

The notifications command allows you to configure, disable, enable, and list notifications. Let's learn about them one by one below.

    Use spyctl notifications <command> --help for more information about a given command.


    Step 3: Configure Notifications for a Resource

You can configure notifications for 3 resources within Spyderbat's Spyctl CLI, namely Saved Queries, Custom Flags, and Agent Health.

In this section, we'll learn about configuring notifications for Saved Queries and Custom Flags.

To configure notifications for Agent Health, refer here.

    To begin, use the spyctl notifications configure command. Below are the available commands and options:

    View Help for Notifications Command:

    To view the available options for configuring notifications, use the following command:

    This will display the following help message:

    For a Saved Query:

    To configure notifications for a saved query, use the following command:

    This will show the usage and options for configuring notifications for a saved query:

    To configure a saved query with a target, use the following command:

    In this example:

Note: You can configure notifications for multiple targets, separated by commas.

    For a Custom Flag:

    To configure notifications for a custom flag, use the following command:

    This will display the usage and options for configuring notifications for a custom flag:

    To configure a custom flag with a target, run:

    This command configures notifications for a custom flag, sending them to the specified target (e.g., PagerDuty).

With these commands, you can easily configure notifications for saved queries and custom flags, specifying targets, templates, and additional options like cooldown periods.
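A cooldown period simply suppresses repeat notifications for the same item within a time window. Conceptually (a sketch under assumed semantics, not Spyderbat's implementation):

```python
# Illustrative cooldown sketch: suppress repeat notifications for the
# same key within a time window. Not Spyderbat's implementation.
last_sent = {}

def should_notify(key: str, now: float, cooldown_s: float = 300.0) -> bool:
    prev = last_sent.get(key)
    if prev is not None and now - prev < cooldown_s:
        return False  # still within the cooldown window; suppress
    last_sent[key] = now
    return True

print(should_notify("flag-1", 0.0))    # True  (first notification)
print(should_notify("flag-1", 100.0))  # False (suppressed by cooldown)
print(should_notify("flag-1", 400.0))  # True  (cooldown elapsed)
```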


    Step 4: Test Notifications

After configuring notifications, you can test whether they are properly set up using the command below. This can be done before enabling the notifications.

This command sends a test notification to the specified target using the provided template. You will have to provide a JSON record file for the template values:

Examples: For a Custom Flag (Redflag), provide the Redflag record. For a Custom Flag (Opsflag), provide the Opsflag record. For a Saved Query, provide the object record for which the query is saved.

    Option
    Description

You should receive a test notification shortly after setting it up.


    Step 5: Enable or Disable Notifications

After configuring and testing the notifications, you can toggle their status with the following commands to receive real-time notifications:

    Enable notifications:

    Example:

    Disable notifications:

    Example:

    Suppression & Tuning

    Overview

    Spyderbat is a powerful security tool that leverages Spydertraces to group security alerts (red flags) into scored traces of activity. This documentation page will guide you through the concepts of trace suppression to tune your Spyderbat environment.

    Spydertraces

    Configuration Guide - AWS Linux VM

    Detailed configuration guide for the Spyderbat AWS Agent installed on an AWS VM

    This guide explains how to configure the Spyderbat AWS Agent to collect information from an AWS account and send it to the Spyderbat platform. It provides detailed instructions for locating the configuration file, managing AWS credentials, and configuring all available settings.


    1. Managing the configuration

    The Spyderbat AWS Agent's configuration file is a YAML file named aws-agent.yaml. It is used to control the behavior of the agent, such as which AWS services to monitor, where to send data, and how to manage credentials.

    Spyderbat Nano Agent

    Nano Agent operational principles, compatibility, network requirements and proxy support, general FAQ

    How does Spyderbat collect data?

Spyderbat collects data by deploying a lightweight “Nano Agent” for Linux-based systems. The agent leverages eBPF (“extended Berkeley Packet Filter”) to build a continuous map of activity within and across systems.

    Selectors

Selectors are used in various places to scope policies, rules, and actions. Spyderbat's selectors are based on Kubernetes Labels and Selectors.

    Selector Primitives

    Spyderbat's selectors offer set-based selector primitives.

    >clear
    >id
    >ls -la
    > cat .profile
    > whoami
    > exit
    VBoxManage natnetwork add --netname natnet1 --network "192.168.15.0/24" --enable --dhcp on
    adduser <username>
    usermod -aG sudo <username>
    helm repo add falcosecurity https://falcosecurity.github.io/charts
    helm repo update
    helm install falco falcosecurity/falco \
        --create-namespace \
        --namespace falco \
        --set falcosidekick.enabled=true \
        --set falcosidekick.config.spyderbat.orguid="YOUR_ORG_ID" \
        --set falcosidekick.config.spyderbat.apiurl="https://api.spyderbat.com" \
        --set falcosidekick.config.spyderbat.apikey="YOUR_API_KEY" \
        --set extra.args=\{"-p","%proc.pid"\} \
        --set driver.kind=modern_ebpf
    kubectl get pods --all-namespaces
    short_name = "falco_flag"
    here
    creating api key step 1
    Spyderbat Admin Notifications UI
    creating api key step 3
    Retrieving Your Organization ID
    Loading Children or Descendants
    Causal Tree
    Spydertop displays the memory usage of a machine
    How to use Spydertop
    Spydertop built-in help
    Enter Spydertop

    --namespace

    Generate rules for all namespaces including namespace scope

    --namespace NAMESPACE_NAME

    Generate rules for a specific namespace including namespace scope

    OMITTED

    Generate rules for all namespaces scoped globally

    Spyderbat Nano Agent
    Ruleset Policy
    helm installation guide
    Ruleset Reference Guide
    Guardian Policy Management Reference Guide

    -T, --target

    Target name or UID to send a test notification to. (Required)

    -P, --template

    Template name or UID of the same type as the target. (Required)

    -f, --record-file

    File containing a JSON record used to build the notification. (Required)

    Configure Spyctl with a Context
    Spyderbat Notification Concept
    this section
    here
    here
    Refer here

    **matchFields

    pre-defined key value pairs

    **matchFieldsExpressions

    key from pre-defined list, operator, and values

    • * Matches the syntax from Kubernetes

    • ** Unique to Spyderbat's Selectors

    Expressions

    Expressions have 3 fields: key, operator, and values. They allow you to define set-based groupings.

    Example:

In the example above, whatever is being matched must have a label with the key app, and the value of that label must be either apache or mysql.

    Operators

    Operators define how the set-based expression is to be evaluated.

    In

    The key must exist and the value must be in values

    NotIn

The key must exist and the value must not be in values

    Exists

    The key must exist

    DoesNotExist

    The key must not exist
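The four operators can be illustrated with a small evaluator. This is a sketch of the set-based semantics described above, not Spyderbat's implementation; the function name and the label data are made up for illustration:

```python
# Sketch of set-based selector expression evaluation (illustrative only;
# not Spyderbat's actual implementation).
def match_expression(labels: dict, key: str, operator: str, values=()) -> bool:
    if operator == "In":            # key exists and its value is in values
        return key in labels and labels[key] in values
    if operator == "NotIn":         # key exists and its value is not in values
        return key in labels and labels[key] not in values
    if operator == "Exists":        # key exists; values are ignored
        return key in labels
    if operator == "DoesNotExist":  # key does not exist
        return key not in labels
    raise ValueError(f"unknown operator: {operator}")

labels = {"app": "apache", "tier": "frontend"}
print(match_expression(labels, "app", "In", ["apache", "mysql"]))  # True
```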

    Pod and Namespace Selectors

Pod and Namespace selectors are defined the exact same way as Kubernetes Pod and Namespace selectors. Both resource types can have user-defined labels that allow them to be grouped by selectors.

    The labels are found within the Pod and Namespace object yaml.

    Supported Primitives

    matchLabels

    matchExpressions

    Examples:

    Other Selectors

The following selectors are custom to Spyderbat's environment. They add an additional level of granularity to scoping operations.

    Supported Primitives

    matchFields

    matchFieldsExpressions

    Cluster Selector

    The Cluster Selector allows for scoping by Kubernetes Cluster. Field values may be wildcarded with an * character.
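Wildcard matching of field values behaves like shell-style globbing, where * matches any run of characters. A minimal sketch of that semantics (illustrative only; not Spyderbat's implementation):

```python
# Illustrative sketch of selector field wildcard matching.
# "*" matches any run of characters, e.g. the pattern "prod-*".
from fnmatch import fnmatchcase

def field_matches(pattern: str, value: str) -> bool:
    return fnmatchcase(value, pattern)

print(field_matches("prod-*", "prod-cluster-1"))  # True
```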

    Supported Fields
    Description

    name

    The name of the cluster as defined in Spyderbat

    uid

    The Spyderbat-provided uid of the cluster generally begins with clus:

    Example:

    Machine Selector

The Machine Selector allows for scoping by Machine. A machine in this context is a device with the Spyderbat Nano Agent installed.

    Supported Fields
    Description

    hostname

    The hostname of a host on the network

    uid

    The Spyderbat-provided uid of the machine generally begins with mach:

    Example:

    Container Selector

    The Container Selector allows for scoping by fields associated with containers.

Supported Fields:

    • image: The container's image name.

    • imageID: The container's image hash.

    • containerName: The name of a specific container instance (usually auto-generated).

    • containerID: The ID of a specific container instance (usually auto-generated).

    Example:

    Service Selector

The Service Selector allows for scoping by fields associated with Linux services.

Supported Fields:

    • cgroup: The cgroup that every process within the service falls under. Ex. systemd:/system.slice/nano-agent.service

    • name: The simple name of the Linux service. Ex. nano-agent.service

    Example:

    Trace Selector

    The Trace Selector is used by Trace Suppression Policies to suppress Spydertraces within a specific scope.

Supported Fields:

    • triggerClass: The class of flag that triggered the Spydertrace.

    • triggerAncestors: The names of the ancestor processes of the flag that triggered the Spydertrace.

    Example:

    User Selector

    The User Selector is used by Trace Suppression Policies to suppress Spydertraces triggered by a specific user or users.

Supported Fields:

    • user: The username of the offending user.

    Example:

    Process Selector

    The Process Selector is used to scope by fields associated with a Linux Process.

Supported Fields:

    • name: The name of the process.

    • exe: The executable of the process.

    • euser: The username of the process' effective user.

    • *matchLabels: user-defined key-value pairs.

    • *matchExpressions: contain a key, operator, and values.

    Kubernetes Labels and Selectors
    spyctl get clusters
    $ spyctl get clusters
    Getting clusters
    NAME            UID               CLUSTER_ID                            FIRST_SEEN            LAST_DATA
    demo-cluster    clus:VyTE0-BPVmo  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  2024-03-14T17:14:19Z  2024-05-06T18:07:24Z
    spyctl create cluster-policy -C CLUSTER [--namespace [NAMESPACE_NAME]] -n POLICY_NAME > cluster-policy.yaml
    $ spyctl create cluster-policy -C demo-cluster --namespace -n demo-cluster-policy > cluster-policy.yaml
    Validating cluster(s) exist within the system.
    Creating ruleset for cluster demo-cluster
    Generating container rules...
    Cluster(s) validated... creating policy.
    apiVersion: spyderbat/v1
    items:
    - apiVersion: spyderbat/v1
      kind: SpyderbatRuleset
      metadata:
        name: demo-cluster_ruleset
        type: cluster
      spec:
        rules:
        - namespaceSelector:
            matchExpressions:
            - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
          verb: allow
          target: container::image
          values:
          - docker.io/guyduchatelet/spyderbat-demo:1
          - docker.io/library/mongo:latest
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          verb: allow
          target: container::image
          values:
          - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.10.1-eksbuild.1
          - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.10.1-eksbuild.1
          - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.7-eksbuild.1
          - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.22.6-eksbuild.1
          - public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-58-g4ddce6a-2024.01.31.21.42
          - registry.k8s.io/csi-secrets-store/driver:v1.4.2
          - registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
          - registry.k8s.io/sig-storage/livenessprobe:v2.12.0
    - apiVersion: spyderbat/v1
      kind: SpyderbatPolicy
      metadata:
        name: demo-cluster-policy
        type: cluster
      spec:
        enabled: true
        mode: audit
        clusterSelector:
          matchFields:
            name: demo-cluster
        rulesets:
        - demo-cluster_ruleset
        response:
          default:
          - makeRedFlag:
              severity: high
          actions: []
    spyctl apply -f FILENAME
    $ spyctl apply -f cluster-policy.yaml
    Successfully applied new cluster ruleset with uid: rs:xxxxxxxxxxxxxxxxxxxx
    Successfully applied new cluster guardian policy with uid: pol:xxxxxxxxxxxxxxxxxxxx
    spyctl get policies --type cluster
    spyctl get rulesets --type cluster
    $ spyctl get policies --type cluster
    UID                       NAME                 STATUS    TYPE       VERSION  CREATE_TIME
    pol:xxxxxxxxxxxxxxxxxxxx  demo-cluster-policy  Auditing  cluster          1  2024-05-06T19:22:43Z
    $
    $ spyctl get rulesets --type cluster
    UID                      NAME                   TYPE       VERSION  CREATE_TIME           LAST_UPDATED
    rs:xxxxxxxxxxxxxxxxxxxx  demo-cluster_ruleset   cluster          1  2024-05-06T19:22:42Z  2024-05-06T19:22:42Z
    spyctl edit policy demo-cluster-policy
    response:
      default:
      - makeRedFlag:
          severity: high
      - agentKillPod:
      actions: []
    response:
      default:
      - makeRedFlag:
          severity: high
      actions:
      - agentKillPod:
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: MY_CRITICAL_NAMESPACE
    spyctl logs policy NAME_OR_UID
    $ spyctl logs policy demo-cluster-policy
(audit mode): Container image "docker.io/guyduchatelet/spyderbat-demo:2" ns:"rsvp-svc-dev" cluster:"demo-cluster" deviated from policy "demo-cluster-policy".
    (audit mode): Would have initiated "makeRedFlag" action for "cont:8vuJRMgyTEs:AAYXziCHi5g:31961a985651". Not initiated due to "audit" mode.
      spyctl notifications -h
    Usage: spyctl notifications [OPTIONS] COMMAND [ARGS]...
    
    Configure notifications for a Spyderbat resource.
    
    Commands:
      configure  Configure notifications for a Spyderbat resource.
      disable    Disable notifications for a Spyderbat resource.
      enable     Enable notifications for a Spyderbat resource.
      list       List notifications on a Spyderbat resource.
 spyctl notifications configure -h
    Configure notifications for a Spyderbat resource.
    
    Commands:
  custom-flag   Configure notifications for a custom flag.
  saved-query   Configure notifications for a saved query.
  agent-health  Configure notifications for agent health.
    
    Usage:
      spyctl notifications configure [OPTIONS] COMMAND [ARGS]...
    
     spyctl notifications configure saved-query -h
    Usage: spyctl notifications configure saved-query [OPTIONS] NAME_OR_UID
    
      Configure notifications for a saved query.
    
    Options:
      --target-map    Map target names to template names. Can be used multiple times. 
                      Usage: --target-map TGT_NAME=TEMPLATE_NAME
      --targets       The Name or UID of targets to send notifications to.
      --cooldown-by   The cooldown by field(s).
      --cooldown      The cooldown period in seconds.
      --is-disabled   Disable notifications.
    spyctl notifications configure saved-query query:uOabbGEeJ \
    --targets "email-target"
    --targets specifies the target (e.g., email) for sending notifications.
    You can also customize settings like --cooldown or --is-disabled.
    spyctl notifications configure custom-flag -h
    Usage: spyctl notifications configure custom-flag [OPTIONS] NAME_OR_UID
    
      Configure notifications for a custom flag.
    
    Options:
      --target-map    Map target names to template names. Can be used multiple times. 
                      Usage: --target-map TGT_NAME=TEMPLATE_NAME
      --targets       The Name or UID of targets to send notifications to.
      --cooldown-by   The cooldown by field(s).
      --cooldown      The cooldown period in seconds.
      --is-disabled   Disable notifications.
    spyctl notifications configure custom-flag flag:teauh \
    --targets "pagerduty-target"
    spyctl test-notification --target "email-alerts" \
    --template "default-template"
    spyctl test-notification --target "slack-channel" \
    --template "custom-template" \
    --record-file test_record.json
    spyctl notifications enable [OPTIONS] COMMAND [ARGS]...
    spyctl notifications enable saved-query query:PpEjGdOSUJ
    spyctl notifications disable [OPTIONS] COMMAND [ARGS]...
    spyctl notifications disable saved-query query:PpEjGdOSUJ
Selector examples (referenced in the selector sections above):

matchExpressions:
    - key: app
      operator: In
      values: [apache, mysql]
    podSelector:
      matchLabels:
        app: apache
      matchExpressions:
      - {key: tier, operator: In, values: [frontend, backend]}
      - {key: test, operator: DoesNotExist}
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: production
      matchExpressions:
      - {key: dedicated-node, operator: Exists}
    clusterSelector:
      matchFields:
        name: demo-cluster
    machineSelector:
      matchFieldsExpressions:
      - {key: hostname, operator: In, values: [test_node, staging_node]}
    containerSelector:
      matchFields:
        image: docker.io/apache
    serviceSelector:
      matchFields:
        cgroup: systemd:/system.slice/nano-agent.service
    traceSelector:
      matchFields:
        triggerClass: redflag/proc/command/high_severity/suspicious/netcat
    userSelector:
      matchFieldsExpressions:
      - {key: user, operator: NotIn, values: [admin, root]}
    processSelector:
      matchFields:
        exe: /bin/bash
    Spydertraces are groups of security alerts that are scored based on the activity they represent. These traces provide a comprehensive view of potentially suspicious activity within your environment and are viewable on the Spyderbat dashboard. From the dashboard, you can investigate each Spydertrace to determine the nature and severity of the activity.

    Alert Suppression

    Alert suppression in Spyderbat allows you to mark known Spydertrace activities as acceptable. Suppressing a Spydertrace reduces its score to 0 and prevents future traces that match the same activity from showing up in your Dashboards. This helps in reducing noise and focusing on genuinely suspicious activities.

    Trace Suppression Policies are the current tool that enables Spydertrace Suppression. Suppression Policies can be generated automatically using the Spyctl CLI and a valid Spydertrace UID.

    Example Suppression Policy:
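A sketch of such a policy, built from the selector fields documented above. The allowedFlags field name and the flag entries are assumptions; generate a real policy with Spyctl from a valid Spydertrace UID.

```yaml
apiVersion: spyderbat/v1
kind: SpyderbatPolicy
metadata:
  name: suppress-suspicious-nc   # hypothetical name
  type: trace
spec:
  enabled: true
  traceSelector:
    matchFields:
      triggerClass: redflag/proc/command/high_severity/suspicious/netcat
      triggerAncestors: systemd/containerd-shim/sh/python/sh/nc
  allowedFlags:                  # assumed field: flags allowed to group within the trace
  - class: redflag/proc/command/high_severity/suspicious/netcat
```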

    This example policy will suppress any Spydertraces triggered via a suspicious nc command with the specific process ancestors of systemd/containerd-shim/sh/python/sh/nc. Within that scope, the policy then specifies what other flags are allowed to be grouped within the trace.

    Should additional flags appear outside of the allowed list, the trace would no longer be suppressed and have a new score based on the severity of any new flags.

After applying suppression policies, it may take up to 24 hours for all of the suppressed Spydertraces to disappear from your dashboard. To work around this, you can adjust the dashboard to show results from the last hour.

Methods of Suppressing Spydertraces

    There are two main methods to suppress Spydertraces in Spyderbat:

    1. UI-based approach.

    2. CLI-based approach.

    With either approach, once you implement the suppression rule, any active Spydertraces that match the rule's scope and allowed flags will be immediately suppressed. New Spydertraces that fit these criteria will also be automatically suppressed going forward.

Don’t worry if you don’t know about Suppression Rules yet; we’ll get into them below.

    1. Using the UI

    The UI approach involves 4 simple steps: finding the Spydertrace to suppress, clicking Suppress Trace, creating a suppression rule, and clicking Create.

    You can find Spydertraces in three main ways: using Search, Dashboard, or Investigation.

    The key part of this process is creating a suppression rule in the UI. Let's go over that first before explaining how to find Spydertraces.

    Suppression Rule and Suppression Scope Customization:

Suppression rules allow you to reduce noise in your environment by marking known activity as acceptable, ensuring that your focus remains only on suspicious activities.

    By default, suppression rules are applied globally across your environment (org).

To target specific areas and reduce the scope of the suppression, you can customize the rule by adding selectors, such as user, machine, cluster name, container, namespace, or any other identifier available in the drop-down list.

    After you click "Suppress Trace" for a Spydertrace, this window pops up.

    Using selectors allows you to focus the suppression rule on particular components, ensuring that it only applies where necessary.

    You can limit the suppression Scope to:

    • Specific users to control which individuals the rule affects.

    • Particular machines or hosts to contain the suppression to certain hardware.

    • A particular cluster for Kubernetes-based environments.

    • Specific containers, pods, or namespaces to isolate suppression in a containerized setup.

    You can also choose the allowed flags as part of Suppression.

    Spyderbat allows you to edit the Suppression Rule context and the Suppression Rule Name.

    To make the Suppression Rule context generic, add a wildcard (*) to the Trigger Ancestor or the Trigger Class as desired. This use of wildcards makes Suppression Rules flexible, allowing you to catch a wider range of patterns and reducing the need for multiple specific rules.

    Note: You cannot edit the selectors in the console once the policy has been created, but you can still edit the raw yaml using the Spyctl CLI. For simplicity, we recommend deleting and recreating the suppression rule if you wish to edit the selectors.

    Now that you understand what suppression rules are, let’s look at 3 different ways to find Spydertraces and apply suppression rules using the UI.

    a. Searching on Spydertraces:

    In the Search section of the Spyderbat UI, you can search for various Kubernetes objects, processes, connections, and Spyderbat-specific entities like Spydertraces.

    Suppressed traces refer to Spydertraces that have been intentionally suppressed.

    To begin, select Spydertrace, open the query builder, and select the fields you want to query. Use the appropriate operators and time filters to refine your search.

    Below, we've used the score>40 query for our Search.

    Additionally, you can apply filters to the result from the top-left corner to further narrow down and investigate the data that interests you.

    Once you’ve found the trace of interest in the search results, select it and click Suppress Trace from the options when prompted. You can also go ahead and Investigate further and then Suppress the trace.

    This will open the Create Trace Suppression Rule page, where you can customize the scope and add multiple selectors from the available list.

    Refer to 'Suppression Rule and Suppression Scope Customization' above, then click Create to apply the rule.

    b. Spydertraces Dashboard Card

    Spydertraces represent potential security concerns, and by default, the ‘Security’ dashboard category in the Spyderbat UI includes a card dedicated to all security-related aspects, including Spydertraces.

    You can view several key dashboard cards, such as:

    • Recent Spydertraces with Score > 50: Displays high-priority traces for immediate attention.

    • Suppressed Traces: Lists any traces that have been suppressed.

    Spydertraces are automatically grouped by their trigger short names for easier review in Dashboard cards. If needed, you can ungroup them to focus on individual traces.

    To suppress a specific trace, select it, click Suppress Trace, then customize the scope and settings as described earlier to create the suppression rule. This allows you to refine and manage trace suppression based on your security needs.

    With this suppression rule in place, you have reduced noise in your environment by marking known activity as acceptable.

    Finally, click Create to finalize the rule. You have successfully suppressed a trace. You can also check it out in the "Suppressed Traces" dashboard card for quick review.

    You can also create your own Custom Dashboard Card dedicated to your Spydertrace query and suppress the trace from there as desired.

    c. Investigation

    Another method of suppressing a trace is through the Investigation feature in Spyderbat.

    You can start your Kubernetes investigation via the Kubernetes section or the Sources section in the UI. If you observe a Spydertrace linked to an object that has generated flags, but after review you determine it is not malicious, you can suppress the trace directly from the investigation interface.

    Alternatively, you can:

    a) Search for a Spydertrace and add it to the investigation.

    b) Once added, you can suppress it from the investigation.

    c) Customize the scope if needed, apply selectors, and create/apply the suppression rule.

    2. CLI-Based Approach

    The second method of suppressing a trace is using the Spyctl CLI.

    In the CLI-based approach, you can easily manage Spydertraces using the Spyctl CLI. Here’s a step-by-step guide to help you navigate:

    (i) View Spydertraces:

    To retrieve a list of Spydertraces, you can use the below command:
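The base command (see its --help output for filter options):

```shell
spyctl get spydertraces
```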

    This command provides summarized information, including the trigger name, count of occurrences, and more, for further investigation.

    If you're specifically interested in Spydertraces with a score above 50, you can add the --score_above option:
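For example:

```shell
spyctl get spydertraces --score_above 50
```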

    There are many other filter options available. Use spyctl get spydertraces --help for more information.

    (ii) Get the UID:

    Identify the UID of the Spydertrace you want to suppress from the list. The UID uniquely identifies each trace and is necessary for suppression.

    (iii) Suppress the Trace:

    Once you have the UID, use the following command to suppress the specific Spydertrace.
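The command takes the trace UID as its argument:

```shell
spyctl suppress trace TRACE_UID
```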

    Replace TRACE_UID with the actual UID of the trace. You can also apply additional options for more control over the suppression policy:

    For more details on how to fine-tune your commands, you can always use the --help option: spyctl suppress trace --help

    Editing a Suppression Rule/Policy:

    You cannot edit the selectors for Suppression Rule in Console, but you can edit the raw YAML for a suppression rule or policy in Spyctl CLI using the following command:
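The edit command is the same one used for other policies earlier on this page:

```shell
spyctl edit policy POLICY_NAME_OR_UID
```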

    Each suppression rule is linked to a unique policy ID. To find the policy ID, you can use the following command:
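For example, list policies and note the UID column (a type filter may also be available; check --help):

```shell
spyctl get policies
```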

    However, for simplicity, we recommend deleting and recreating the suppression rule if you need to modify the selectors.

    Managing Suppression Rules:

    There may be cases where you want to delete or disable a suppression rule so that matching Spydertraces are no longer suppressed.

    For example, if the conditions that triggered the suppression are no longer valid or if the trace needs to be re-evaluated due to changes in your security policy. Spyderbat allows you to manage these traces easily.

    How to Delete a Suppressed Rule:

    UI approach.

    To permanently remove a suppression Rule:

    • Go to the Suppression Rules section in the Spyderbat Console.

    • Find the suppression rule associated with the trace you wish to delete.

    • Click on the bin icon next to the suppression rule name.

    This will delete the Suppression Rule. Deleting a suppression rule is useful when the activity is no longer appropriate to suppress, or when you need that activity visible in your environment again.

    CLI Approach:

    You can delete the Suppression Rule/policy using Spyctl CLI with the following command:
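A sketch, assuming Spyctl's delete subcommand follows the same resource naming as get and edit:

```shell
spyctl delete policy POLICY_UID
```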

    Each suppression rule is associated with a unique policy ID. To find the policy ID, use the command:
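For example:

```shell
spyctl get policies
```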

    How to Disable a Suppression Rule:

    UI approach.

    To temporarily disable a suppression rule:

    • Go to the Suppression Rules section

    • Click on View next to the suppression rule you want to manage.

    • Navigate to Rule Settings.

    • Locate the Rule Status option.

    • Set the status from Enabled to Disabled to turn off the suppression rule.

    Disabling a suppression rule is helpful when you want to pause the suppression without permanently deleting the rule, allowing you to enable it later if necessary.

    CLI Approach:

    Edit the policy's raw YAML (as described in 'Editing a Suppression Rule/Policy' above), then set the enabled field to false.
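In the policy yaml, per the policy spec shown earlier on this page:

```yaml
spec:
  enabled: false
```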

    Locating the Configuration File

    By default, the configuration file is located at:

    This file can be edited using any text editor with root privileges. For example:
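For example (replace the placeholder with the configuration file's path):

```shell
sudo vi /path/to/aws-agent.yaml
```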

    Applying Changes

    After making changes to the configuration file, the AWS Agent service must be restarted to apply the updates. Use the following command to restart the service:
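Assuming a systemd-managed install with a unit name like aws-agent (the actual unit name may differ in your deployment):

```shell
sudo systemctl restart aws-agent
```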

    Validating Configuration

    To ensure the configuration file is valid, check the service status after restarting:
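Again assuming a systemd unit named aws-agent:

```shell
sudo systemctl status aws-agent
```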

    If there are any errors, they will be displayed in the status output. Ensure the YAML syntax is correct before restarting the service again.

    Checking agent logs

    The agent logs can be consulted based on the service journal:
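Assuming the same systemd unit name, the journal can be followed with:

```shell
sudo journalctl -u aws-agent -f
```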


    2. AWS Credentials Management

    The Spyderbat AWS Agent requires access to AWS services to collect data. The agent supports multiple methods for obtaining credentials, listed below in the order of precedence:

    1. IAM Instance Profile (Recommended)

    If the agent is deployed on an EC2 instance with an IAM role assigned, it will automatically use the instance profile credentials. This is the most secure and recommended method. No additional configuration is required for this setup.

    2. Environment Variables

    You can set the following environment variables to provide credentials explicitly:

    • AWS_ACCESS_KEY_ID

    • AWS_SECRET_ACCESS_KEY

    For example, add the variables to the environment:
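For instance, with placeholder values:

```shell
# Placeholder credentials; substitute your own values.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEYID
export AWS_SECRET_ACCESS_KEY=examplesecretaccesskey
```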

    3. From Files

    The agent can also read credentials from files. This is typically used when credentials are mounted as secrets in Kubernetes or other containerized environments. Place the credentials in the following files:

    • /etc/aws-config/secrets/aws_access_key_id

    • /etc/aws-config/secrets/aws_secret_access_key

    Note: This method is not recommended for standalone deployments.


    3. Configuration Settings

    Below is a detailed explanation of each configuration setting available in the aws-agent.yaml file.

    spyderbat_orc_url

    • Description: The URL of the Spyderbat orchestration API endpoint. This is where the agent sends the collected data.

    • Example:

    • Default: https://orc.spyderbat.com


    outfile

    • Description: Specifies a file where the agent writes the collected data instead of sending it to the Spyderbat backend. This is primarily for debugging purposes.

    • Example:

    • Default: Not set.


    cluster_name

    • Description: The name of the Kubernetes cluster, used for identification in the Spyderbat UI. This is optional for standalone deployments.

    • Example:

    • Default: Not set.


    aws_account_id

    • Description: Specifies the AWS account ID the agent monitors. Use auto for auto-discovery.

    • Example:

    • Default: auto


    role_arn

    • Description: The ARN of the IAM role the agent assumes to gather information. This is useful when explicit AWS credentials are used. It should not be used if the correct role was already assumed through an EC2 IAM Instance Profile.

    • Example:

    • Default: Not set.


    send_buffer_size

    • Description: The number of records accumulated before sending data to the Spyderbat backend.

    • Example:

    • Default: 100


    send_buffer_records_bytes

    • Description: The maximum size (in bytes) of accumulated records before sending to the backend.

    • Example:

    • Default: 1000000 (1 MB)


    send_buffer_max_delay

    • Description: The maximum delay (in seconds) before sending accumulated records, even if the buffer is not full.

    • Example:

    • Default: 30


    log_level

    • Description: Configures the logging level for the agent.

    • Options: DEBUG, INFO, WARNING, ERROR, CRITICAL

    • Example:

    • Default: INFO


    pollers

    • Description: Configures the AWS services and regions to monitor. Each entry specifies a service, polling interval, and regions.

    • Example:

    • Default: Monitors all supported services and regions if not set explicitly.

    Per service in the pollers section, the following properties can be set:

    polling_interval

    • Description: The interval in seconds at which the agent will poll the service.

    • Example:

    • Default: 30

    regions

    • Description: The regions that the agent will poll for the service. If not set, the agent will poll all regions.

    • Example:

    • Default: not set (all regions)

    Example configuration file

    You can find an example illustrated configuration yaml file here
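As a sketch using only the settings documented above (all values are illustrative, and the per-service layout of the pollers map is an assumption):

```yaml
spyderbat_orc_url: https://orc.spyderbat.com
cluster_name: demo-cluster          # optional for standalone deployments
aws_account_id: auto                # or an explicit account ID
log_level: INFO
send_buffer_size: 100
send_buffer_records_bytes: 1000000  # 1 MB
send_buffer_max_delay: 30           # seconds
pollers:
  ec2:                              # hypothetical service entry
    polling_interval: 30
    regions:
    - us-west-2
```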

    Why do I need to install an agent?

    Existing endpoint agents and system logs do not include the necessary information required by Spyderbat to build a complete, living map of causal activity within and across systems. Spyderbat’s Nano Agent is optimized to collect this information so that analysts can see the complete causal attack picture across systems, users, and time.

    What is the impact of the Spyderbat Nano Agent on the system?

    Spyderbat has observed minimal impact on system resources (CPU, memory), and minimal network bandwidth impact due to heavy compression.

    What operating systems are currently supported?

    Spyderbat currently supports the following Linux systems:

Linux Version (Architecture):

    • AlmaLinux 9: x86_64 / Power64le

    • Amazon Linux 2: x86_64 / ARM64

    • Amazon Linux 2022: x86_64 / ARM64

    • Amazon Linux 2023: x86_64

    • Amazon Linux Bottlerocket: x86_64

    • CentOS 7 up to 7.6 (with El Repo LT): x86_64

    What K8s Distributions are currently supported?

    Spyderbat Nano Agents can be currently installed on the K8s clusters utilizing the following distributions:

K8s Distribution (Node Operating System; Container Runtime):

    • EKS: Amazon Linux 2 / Bottlerocket; containerd or Docker

    • GKE: Ubuntu / GCOS; containerd

    • Red Hat OpenShift: V4.xx Power64le; containerd or Docker

    • Rancher RKE and RKE2: Ubuntu 20 LTS

    Spyderbat Nano Agents use only standard Kubernetes APIs and standard Kubernetes resources, so they should run on most Kubernetes clusters.

    What are the Nano Agent’s network requirements?

    Ensure that the systems running the Nano Agent have outbound access on port 443 to https://orc.spyderbat.com.

    Does the Nano Agent support network proxies?

    Yes. If you have a proxy configured and you have Linux environment variables like:
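For example (the proxy address is a placeholder):

```shell
export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
```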

    The installation script will automatically grab the environment variables from your terminal using the “-E” flag and pass those to the agent as required.

    Is information sent securely from the Nano Agent?

    Yes. Spyderbat securely encrypts information sent by the Nano Agent to the Spyderbat backend using TLS.

    Does the Nano Agent support systems hosted in AWS?

    The Nano Agent can be installed on any of the supported systems listed above as virtual or physical machines. Additionally, the Nano Agent collects metadata from AWS instances such as Cloud Tags, Region, Zone etc. To collect this metadata, ensure your AWS instances have an appropriate IAM (read only) role assigned to them such as “AmazonEC2ReadOnlyAccess”, see https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/security-iam-awsmanpol.html

    How do I start and stop the Nano Agent from the command line?

    To start the Nano Agent:
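Assuming a systemd-managed install with the nano-agent.service unit named in the Service Selector example above:

```shell
sudo systemctl start nano-agent
```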

    To stop the Nano Agent:
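Assuming the same systemd unit name:

```shell
sudo systemctl stop nano-agent
```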


    Ruleset Policies

    Published: April 29, 2024

    What are Ruleset Policies?

    Ruleset Policies are a way of defining a set of allow or deny rules for a given scope. Currently, ruleset policies are supported for the following scope(s):

    • Kubernetes Clusters

    Ruleset Policies themselves are very simple: they define a scope with a selector, contain a list of pointers to reusable rulesets, and define a set of response actions to take when deviations occur.

    The Rulesets used by a Ruleset Policy are policy-agnostic and as such can be defined once and used across multiple policies. Rulesets contain a set of allow or deny rules. Each rule contains a target, verb, list of values, and optional selectors (for additional scoping).

    • Target: what the rule is referring to within the scope of the policy.

      • ex. container::image means that we are allowing or denying containers using the images specified in the values field.

    • Verb: The currently available verbs for ruleset rules are allow or deny. Any object matching a deny rule will generate a Deviation.

    The following is an example rule that allows containers with the images docker.io/guyduchatelet/spyderbat-demo:1 and docker.io/library/mongo:latest in the namespaces rsvp-svc-dev and rsvp-svc-prod.
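This rule also appears in the generated ruleset shown earlier on this page:

```yaml
- namespaceSelector:
    matchExpressions:
    - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
  verb: allow
  target: container::image
  values:
  - docker.io/guyduchatelet/spyderbat-demo:1
  - docker.io/library/mongo:latest
```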

    The following rule denies the image docker.io/guyduchatelet/spyderbat-demo:2 globally.
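A sketch following the same rule structure; omitting the selector makes the rule global:

```yaml
- verb: deny
  target: container::image
  values:
  - docker.io/guyduchatelet/spyderbat-demo:2
```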

    The following is an example ruleset automatically generated from a demo cluster:

    How rules are evaluated

    Rules are evaluated based on a specific hierarchy. Scoped rules take precedence over global rules, explicit rules take precedence over wildcarded rules, deny rules are evaluated first, and anything that matches no rules is denied by default.

    Evaluation Order

    1. Scoped explicit deny

    2. Scoped explicit allow

    3. Scoped wildcarded deny

    4. Scoped wildcarded allow

    5. Global explicit deny

    6. Global explicit allow

    7. Global wildcarded deny

    8. Global wildcarded allow

    9. Default deny (no rule matched)

    Examples

    Scenario 1 (Global Explicit Allow):

    Image: docker.io/guyduchatelet/spyderbat-demo:1

    A container with the image docker.io/guyduchatelet/spyderbat-demo:1 would be allowed globally.

    Scenario 2 (Default Deny)

    Image: docker.io/guyduchatelet/spyderbat-demo:bad-tag

    A container with the image docker.io/guyduchatelet/spyderbat-demo:bad-tag would be denied by default.

    Scenario 3 (Global Explicit Allow with Global Wildcard Deny):

    Image 1: docker.io/guyduchatelet/spyderbat-demo:1

    Image 2: docker.io/guyduchatelet/spyderbat-demo:bad-tag

    Global explicit allow is evaluated before global wildcarded deny so Image 1 is allowed. Image 2 is denied by the global wildcarded deny.

    Scenario 4 (Scoped Wildcarded Allow with Global Explicit Deny):

    Image 1: docker.io/guyduchatelet/spyderbat-demo:1
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-prod}

    Image 2: docker.io/guyduchatelet/spyderbat-demo:bad-tag
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-prod}

    Image 3: docker.io/guyduchatelet/spyderbat-demo:bad-tag
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-dev}

    Since the first rule has a namespace selector, that rule is scoped. Scoped wildcarded allow rules are evaluated before global explicit deny rules, so Image 1 and Image 2 are allowed. Image 3 is denied by the global explicit deny rule.

    Scenario 5 (Scoped Explicit Allow with Scoped Wildcarded Deny):

    Image 1: docker.io/guyduchatelet/spyderbat-demo:1
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-prod}

    Image 2: docker.io/guyduchatelet/spyderbat-demo:bad-tag
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-prod}

    Image 3: docker.io/guyduchatelet/spyderbat-demo:bad-tag
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-dev}

    Both rules are scoped because they have a namespace selector. Scoped explicit allow rules are evaluated before scoped wildcarded deny rules, so Image 1 is allowed. Image 2 is denied by the scoped wildcarded deny rule. Image 3 does not match the scope of any rule, so it is denied by default.

    Scenario 6 (Scoped Explicit Allow with Scoped Explicit Deny):

    Image: docker.io/guyduchatelet/spyderbat-demo:1
    Namespace labels: {kubernetes.io/metadata.name: rsvp-demo-prod}

    Scoped explicit deny rules are evaluated before scoped explicit allow rules, so the image is denied by the scoped explicit deny rule.
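    The precedence rules illustrated by these scenarios can be sketched as a small evaluation function. This is a simplified model for illustration only, not Spyderbat's implementation; the rule dictionaries and the `evaluate` helper are hypothetical names.

```python
from fnmatch import fnmatch

def evaluate(image, namespace, rules):
    """Return "allow" or "deny" for a container image.

    Simplified model of the evaluation order: scoped rules before global
    rules, explicit values before wildcarded values, deny before allow,
    and a default deny for anything left unmatched.
    """
    def matches(rule, want_scoped, want_explicit):
        is_scoped = rule.get("namespace") is not None
        if is_scoped != want_scoped:
            return False
        if is_scoped and rule["namespace"] != namespace:
            return False
        for value in rule["values"]:
            is_explicit = "*" not in value
            if is_explicit != want_explicit:
                continue
            if is_explicit and value == image:
                return True
            if not is_explicit and fnmatch(image, value):
                return True
        return False

    for want_scoped in (True, False):          # scoped tiers first
        for want_explicit in (True, False):    # explicit before wildcarded
            for verb in ("deny", "allow"):     # deny before allow
                for rule in rules:
                    if rule["verb"] == verb and matches(rule, want_scoped, want_explicit):
                        return verb
    return "deny"  # default deny

# Scoped wildcarded allow in rsvp-demo-prod, plus a global explicit deny.
rules = [
    {"verb": "allow", "namespace": "rsvp-demo-prod",
     "values": ["docker.io/guyduchatelet/spyderbat-demo:*"]},
    {"verb": "deny", "namespace": None,
     "values": ["docker.io/guyduchatelet/spyderbat-demo:bad-tag"]},
]
print(evaluate("docker.io/guyduchatelet/spyderbat-demo:bad-tag", "rsvp-demo-prod", rules))  # allow
print(evaluate("docker.io/guyduchatelet/spyderbat-demo:bad-tag", "rsvp-demo-dev", rules))   # deny
```

    The scoped wildcarded allow wins inside rsvp-demo-prod, while the same image in another namespace falls through to the global explicit deny, matching the scenario above.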

    Quick Start Tutorial

    To quickly get started using Cluster Ruleset Policies, follow our tutorial using spyctl.

    Guardian Policy Management using Spyctl

    This reference page details the commands used to manage Guardian Workload Policies

    Creating and Applying a Policy

    See the tutorial: How To Lock Down Your Critical Workloads With Policies using Spyctl

    Updating A Policy

    Over time, Policies will generate deviations. Your Linux services and containers will continue to generate activity. Some of that activity may deviate from your policy. Investigating a deviation can lead to one of two scenarios.

    1. There is a legitimate threat; take steps to remediate, or

    2. This is additional benign activity that should be added to the policy.

    This reference guide covers the second scenario.

    Viewing Deviations

    Deviations come from processes or connections that deviated from your Guardian Workload Policies. They contain all of the information required to update your policy should you choose to merge them in. You can view Deviations with the get command:

    For example:

    Viewing the Diff

    To see how merging the deviations into your policy would affect it, you can view a git-like diff with the following command:

    For example:

    First, list the policies you have applied:

    Then select the one you want to diff:

    The default diff query uses all deviations in the last 24 hours. You can use the --latest option to diff the policy against all deviations since the policy was last updated.

    The output of the diff command will display a git-like diff of activity that doesn’t match the Policy. You can use the merge command to add the deviations to the Policy.

    [Optional] Bulk Diff

    You may have many policies, and diffing each one individually might be tiresome. To systematically diff all of your policies, use the following command:

    You can also use the -y option to avoid any prompting.

    Merging in the Deviations

    To update your policies with known-good deviations you can use the merge command.

    For example:

    The default merge query uses all deviations in the last 24 hours. You can use the --latest option to merge in all deviations since the policy was last updated.

    You will have a chance to review any changes before they are applied.

    [Optional] Bulk Merge

    You may have many policies, and merging in updates across all of them individually may be tiresome. To systematically merge in deviations across all of your policies, use the following command:

    You can use the --yes-except option to avoid all prompts except reviewing the final changes, and you can use the -y option to avoid all prompts entirely.

    Changing a Policy's Mode

    Once your policy rarely produces deviations in audit mode you can change it to enforce mode. To change the Policy to enforce mode you must edit the yaml.

    Use the edit command to edit the Policy's yaml.

    For example:

    Change the mode field in the spec:

    To:

    Then save to apply the update:

    You should now see the following when issuing the get command:

    Disabling and Re-enabling a Policy

    If you notice that a Policy is too noisy, or you want to temporarily disable it, edit the yaml and update the enabled field:

    Use the edit command to edit the Policy's yaml.

    For example:

    To:

    Then save to apply the update:

    To see that the Policy is indeed disabled, issue the command:

    To re-enable a Policy, simply remove the enabled field from the spec or change false to true, and then apply the Policy file again.

    To see that the action was successful, issue the get command again:

    Deleting a Policy

    If you wish to completely remove a Policy from the Spyderbat environment of the organization in your current Context, use the delete command:

    For example:


    Manage Users and Roles

    Learn about creating, modifying and deleting users, as well as assigning access permissions and privileges, in the Spyderbat UI.

    Published: July 20, 2023

    You have set up your organization in the Spyderbat UI, and maybe even installed a few Spyderbat Nano Agents. Now it is time to invite additional team members and set up their login credentials and access permissions. In this article we cover user and role management within your organization.

    Organization Management Section Overview

    If you are the Spyderbat Organization owner, then by default you will be set up with Admin level permissions and therefore will have access to the Admin section of the console, located at the bottom of the left hand navigation panel:

    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      name: Trace Suppression Policy for systemd/containerd-shim/sh/python/sh/nc
      type: trace
    spec:
      traceSelector:
        matchFields:
          triggerAncestors: systemd/containerd-shim/sh/python/sh/nc
          triggerClass: redflag/proc/command/high_severity/suspicious/nc
      enabled: true
      mode: enforce
      allowedFlags:
      - class: redflag/proc/tmp_exec/high_severity/nc
      - class: redflag/proc/command/high_severity/suspicious/nc
      - class: redflag/proc/suspicious_crud_command/high_severity/cat
    $ spyctl get spydertraces
    $ spyctl get spydertraces --score_above 50 
      spyctl suppress trace TRACE_UID
    
    -u, --include-users: Scope the suppression to specific users found in the trace.
    -n, --name: Provide an optional name for the suppression policy. If you don’t provide a name, one will be generated automatically.
    -y, --yes: Automatically answer "yes" to prompts, making the process non-interactive.
    $ spyctl edit trace-suppression-policy <policy_id>
    $ spyctl get policies --type trace
    $ spyctl delete policy <policy_id>
    $ spyctl get policies
    $ spyctl edit trace-suppression-policy <policy_id>
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      ...
    spec:
      allowedFlags:
        ...
      enabled: False
      ...
    spyderbat_orc_url: https://orc.spyderbat.com
    outfile: /tmp/out.json.gz
    cluster_name: staging-cluster-us-east-1
    aws_account_id: auto
    role_arn: arn:aws:iam::123456789012:role/SpyderbatRole
    send_buffer_size: 100
    send_buffer_records_bytes: 1000000
    send_buffer_max_delay: 30
    pollers:
      - service: ec2
        polling_interval: 30
        regions:
          - us-east-1
          - us-west-2
      - service: eks
        polling_interval: 30
        regions:
          - us-east-1
          - us-east-2
      - service: eks
        polling_interval: 30
      - service: eks
        regions:
          - us-east-1
          - us-east-2
    /opt/spyderbat/etc/aws-agent.yaml
    sudo vi /opt/spyderbat/etc/aws-agent.yaml
    sudo systemctl restart aws_agent.service
    sudo systemctl status aws_agent.service
    aws_agent.service - Spyderbat AWS Agent Service
         Loaded: loaded (/etc/systemd/system/aws_agent.service; enabled; preset: disabled)
         Active: active (running) since Wed 2024-12-11 18:48:48 UTC; 3 weeks 6 days ago
       Main PID: 2146512 (aws_agent)
          Tasks: 8 (limit: 1112)
         Memory: 10.1M
            CPU: 4min 7.715s
         CGroup: /system.slice/aws_agent.service
                 ├─2146512 /usr/bin/bash /opt/spyderbat/bin/aws_agent
                 └─2146528 docker run --pull always -v /opt/spyderbat/etc:/etc/aws-config --name aws-agent public.ecr.aws/a6j2k0g1/aws-agent:latest --config /etc/aws->
    
    Jan 08 12:24:30 ip-172-31-86-31.ec2.internal aws_agent[2146528]:  2025-01-08 12:24:30,479:INFO    :poller eks/us-west-1 got 2 records
    Jan 08 12:24:30 ip-172-31-86-31.ec2.internal aws_agent[2146528]:  2025-01-08 12:24:30,755:INFO    :Sending heartbeat and stat update
    Jan 08 12:24:31 ip-172-31-86-31.ec2.internal aws_agent[2146528]:  2025-01-08 12:24:31,346:INFO    :Session(region_name=None) IAM Poller got 56 roles and their inl>
    sudo journalctl -u aws_agent.service
    export AWS_ACCESS_KEY_ID=<your_access_key_id>
    export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
        https_proxy=:port
      sudo systemctl stop nano_agent.service
      sudo systemctl start nano_agent.service

    CentOS 7.6+ (with Kernel 3.10.0-957+): x86_64
    CentOS 8: x86_64
    Debian 11: x86_64
    Debian 12: x86_64 / ARM64
    Debian 13: x86_64 / ARM64
    Flatcar Container Linux (3227.2.1; 3374.2.3): x86_64
    Google Container-Optimized OS (GCOS): x86_64
    Kali 2021.2: x86_64
    RHEL 7.6+ (with Kernel 3.10.0-957+): x86_64
    RHEL 8: x86_64 / Power64le
    RHEL 9: x86_64 / Power64le
    Rocky Linux 8: x86_64 / Power64le
    Rocky Linux 9: x86_64 / Power64le
    Sangoma 16 (with El Repo LT): x86_64
    SLES: x86_64 / Power64le
    Ubuntu 18.04 LTS: x86_64
    Ubuntu 20 Desktop: x86_64
    Ubuntu 20.04 LTS: x86_64 / ARM64
    Ubuntu 20.10: x86_64
    Ubuntu 22.04: x86_64
    Ubuntu 24.04: x86_64

    containerd or Docker

    MicroK8s: Ubuntu 22 LTS, containerd
    K3s: Ubuntu 22 LTS, containerd
    AKS: Ubuntu 22 LTS, containerd
    robin.io: RockyLinux 8 or 9, containerd


  • Values: This is the set of values that are allowed or denied. If the target is container::image then the values should be container images that are either allowed or denied.

  • Selectors: Optional selectors that further define the scope of a single rule. For instance, you may want a rule that defines allowed activity in a specific namespace within a cluster.
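    Putting these pieces together, a single rule combines a target, a set of values, a verb, and an optional selector. For example (a fragment modeled on the ruleset examples in this document; the namespace and image names are illustrative):

```yaml
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: rsvp-svc-prod
  target: container::image
  values:
  - docker.io/library/mongo:latest
  verb: allow
```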

    A rule is classified by its scope and by how its values match:

  • Scoped: the rule contains a selector. Scoped rules are evaluated first, in the order: explicit deny, explicit allow, wildcarded deny, wildcarded allow.

  • Global: the rule contains no selector. Global rules are evaluated next, in the same order: explicit deny, explicit allow, wildcarded deny, wildcarded allow. Anything left unmatched falls through to the default deny.

  • Explicit: the matched value contains no wildcard characters.

  • Wildcarded: the matched value contains a wildcard character (*).
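    In Python terms, an explicit value matches only by exact string comparison, while a wildcarded value matches like a glob pattern. A simplified illustration (the variable names are ours, and real evaluation happens server-side):

```python
from fnmatch import fnmatch

image = "docker.io/guyduchatelet/spyderbat-demo:bad-tag"

# Explicit value: no wildcard characters, matches by exact comparison.
explicit = "docker.io/guyduchatelet/spyderbat-demo:1"
print(image == explicit)          # False

# Wildcarded value: contains *, matches any tag of that repository.
wildcarded = "docker.io/guyduchatelet/spyderbat-demo:*"
print(fnmatch(image, wildcarded)) # True
```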

    How to Put Guardrails Around Your K8s Cluster
    log_level: INFO
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      createdBy: [email protected]
      creationTimestamp: 1712787973
      lastUpdatedBy: [email protected]
      lastUpdatedTimestamp: 1714417836
      name: demo-cluster-policy
      selectorHash: 66e45259eba6ed4365e28e7e673a18cf
      type: cluster
      uid: pol:xxxxxxxxxxxxxxxxxxxx
      version: 1
    spec:
      clusterSelector:
        matchFields:
          name: demo-cluster
      enabled: true
      mode: audit
      rulesets:
      - demo-cluster-ruleset
      response:
        default:
        - makeRedFlag:
            severity: high
        actions: []
    namespaceSelector:
      matchExpressions:
      - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
    target: container::image
    values:
    - docker.io/guyduchatelet/spyderbat-demo:1
    - docker.io/library/mongo:latest
    verb: allow
    target: container::image
    values:
    - docker.io/guyduchatelet/spyderbat-demo:2
    verb: deny
    apiVersion: spyderbat/v1
    kind: SpyderbatRuleset
    metadata:
      createdBy: [email protected]
      creationTimestamp: 1712787972
      lastUpdatedBy: [email protected]
      lastUpdatedTimestamp: 1714162618
      name: demo-cluster-ruleset
      type: cluster
      uid: rs:xxxxxxxxxxxxxxxxxxxx
      version: 1
    spec:
      rules:
      - namespaceSelector:
          matchExpressions:
          - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        - docker.io/library/mongo:latest
        verb: allow
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        target: container::image
        values:
        - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.10.1-eksbuild.1
        - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.10.1-eksbuild.1
        - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.7-eksbuild.1
        - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.22.6-eksbuild.1
        - public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-58-g4ddce6a-2024.01.31.21.42
        - registry.k8s.io/csi-secrets-store/driver:v1.4.2
        - registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
        - registry.k8s.io/sig-storage/livenessprobe:v2.12.0
        verb: allow
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:2
        verb: deny
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: prometheus
        target: container::image
        values:
        - quay.io/prometheus/node-exporter:v1.7.0
        - quay.io/prometheus/pushgateway:v1.7.0
        - registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.1
        verb: allow
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: spyderbat
        target: container::image
        values:
        - public.ecr.aws/a6j2k0g1/aws-agent:latest
        - public.ecr.aws/a6j2k0g1/nano-agent:latest
        verb: allow
    spec:
      rules:
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: allow
    spec:
      rules:
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: allow
    spec:
      rules:
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: allow
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:*
        verb: deny
    spec:
      rules:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: rsvp-demo-prod
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:*
        verb: allow
      - target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:bad-tag
        verb: deny
    spec:
      rules:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: rsvp-demo-prod
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: allow
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: rsvp-demo-prod
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:*
        verb: deny
    spec:
      rules:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: rsvp-demo-prod
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: allow
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: rsvp-demo-prod
        target: container::image
        values:
        - docker.io/guyduchatelet/spyderbat-demo:1
        verb: deny
    spyctl get deviations [NAME_OR_UID]
    spyctl get deviations
    Getting policy deviations from 2024-01-15T23:06:33Z to 2024-01-16T23:06:33Z
    UID                       NAME              STATUS     TYPE       CREATE_TIME           DEVIATIONS_(UNIQ/TOT)
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Auditing   container  2024-01-16T15:00:43Z  2/33
    spyctl diff [OPTIONS] -p [POLICY_NAME_OR_UID,POLICY_NAME_OR_UID2,...]
    spyctl get policies
    UID                       NAME              STATUS     TYPE       CREATE_TIME
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Auditing   container  2024-01-16T15:00:43Z
    spyctl diff -p pol:CB1fSLq4wpkFG5kWsQ2r
    spyctl diff -p
    spyctl merge [OPTIONS] -p [POLICY_NAME_OR_UID,POLICY_NAME_OR_UID2,...]
    spyctl merge -p pol:CB1fSLq4wpkFG5kWsQ2r
    spyctl merge -p
    spyctl edit RESOURCE NAME_OR_UID
    spyctl edit policy pol:CB1fSLq4wpkFG5kWsQ2r
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      ...
    spec:
      ...
      mode: audit
      ...
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      ...
    spec:
      ...
      mode: enforce
      ...
    Successfully edited policy pol:CB1fSLq4wpkFG5kWsQ2r
    spyctl get policies
    UID                       NAME              STATUS      TYPE       CREATE_TIME
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Enforcing   container  2024-01-16T15:00:43Z
    spyctl edit RESOURCE NAME_OR_UID
    spyctl edit policy pol:CB1fSLq4wpkFG5kWsQ2r
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      ...
    spec:
      ...
      enabled: true
      ...
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      ...
    spec:
      ...
      enabled: false
      ...
    Successfully edited policy pol:CB1fSLq4wpkFG5kWsQ2r
    spyctl get policies
    UID                       NAME              STATUS     TYPE       CREATE_TIME
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Disabled   container  2024-01-16T15:00:43Z
    spyctl get policies
    UID                       NAME              STATUS      TYPE       CREATE_TIME
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Enforcing   container  2024-01-16T15:00:43Z
    spyctl delete RESOURCE [OPTIONS] NAME_OR_ID
    spyctl delete policy pol:CB1fSLq4wpkFG5kWsQ2r
    Successfully deleted policy pol:CB1fSLq4wpkFG5kWsQ2r

    Only Admins are able to see this section in the UI. No other user role grants access to Organization Management options.

    Here you are able to perform standard user management activities, such as:

    • Invite new users

    • Remove existing users

    • Change user roles

    Adding and Removing Users in Your Organization in Spyderbat UI

    If you are an organization Admin and would like to grant access to the Spyderbat UI to other users, you can do so by populating the user’s email address and selecting a role that they should be assigned from the Roles drop-down:

    Once the desired selections have been made, click “Add User” and you will see their email address and role come up under Accounts below and a confirmation will pop up:

    The user will then receive an email confirmation letting them know they have been added by you to your organization in the Spyderbat console.

    To remove a user from the Organization, the Admin must hover over the row with the user’s email address and click the “delete” icon next to it:

    A confirmation will pop up that the user has been removed successfully:

    No notification email will be sent to the user to let them know they have been removed from the organization.

    Once the user has been removed, if they had an active session at the time of removal, they will immediately see the following page:

    Spyderbat User Roles and Definitions

    Every organization in the Spyderbat platform offers four distinct roles that help manage access to different parts of the UI and define the permissions associated with managing the monitoring scope, the collected data, and the data consumption methods.

    The roles offered today are:

    • Admin

    • Power User

    • Agent Deployment

    • Read Only

    The permissions granted to each role are summarized below (in the order Admin, Power User, Agent Deployment, Read Only):

    • Organization and user management: Full access / No access / No access / No access

    • Nano agent install and addition of new sources to scope: Full access / Full access / View and add access / View access

    • Agent health monitoring: Full access / Full access / View access / View access

    • Dashboards review: Full access / Full access / No access / View access

    • Dashboard creation: Full access / Full access / No access / View access

    • Search query creation and execution: Full access / Full access / No access / View access

    • Process investigations: Full access / Full access / No access / View access

    • K8S investigations: Full access / Full access / No access / View access

    • Notifications setup: Full access / No access / No access / No access

    • Admin

      Users with this role are able to access all sections of the UI and have Read, Edit and Delete permissions where these actions are available. Admins are also able to manage users and access by inviting new users into the organization, deleting existing users, or upgrading/downgrading user privileges by changing the assigned roles.

      As a best practice, keep the number of Admin users small; the role should likely be limited to 1-2 users per organization.

    • Power User

      This user will have full access to all sections of the UI (except for organization and user management) and have Read, Edit and Delete permissions where these actions are available.

    • Agent Deployment

      This role is intended for the onboarding engineer(s) responsible for installing Spyderbat Nano Agents on the hosts (VMs and K8s clusters) that are part of the organization's monitoring scope. These users can access the Sources and Agent Health sections of the Spyderbat UI and have access to agent install scripts and commands.

    • Read Only

      This role is self-explanatory. Users with Read Only access are able to see all sections of the UI, except for the Admin section. They can view dashboards, all monitoring and other metadata collected by the Spyderbat Nano Agents in the Sources and Agent Health sections, as well as view process and K8s investigations and share permalinks. They will not be able to make any changes to the data, such as changing source names or archiving offline sources.

    Changing User Roles (Upgrading or Downgrading Permissions)

    If you are an organization Admin, you are able to change the roles of your organization's members in the Admin → Organization Management section. To modify a user's role, click on the Roles drop-down, select the new role, deselect the existing role and click "Save":

    Note: if you do not deselect the existing role, the user will be saved with both roles associated with their account, and the higher-permission role will prevail in that case:

    Dashboard Categories

    Details on out-of-the-box dashboard categories including: Security, User Tracking, Policies, Operational Flags, Network Info, Monitored Inventory and Kubernetes Assets.

    Published: July 20, 2023

    If this is your first time in the Spyderbat Dashboard section, please check out our .

    The Dashboard section provides a consolidated, at-a-glance overview of a variety of operational and security data points captured as a result of asset monitoring with active Spyderbat Nano Agents. This data is presented as a variety of dashboard cards.

    All out-of-the-box dashboard cards are grouped into 7 distinct categories based on the type of data as well as the specific security and monitoring objectives these groups of cards can help with. The 7 default dashboard categories include:

  • Security

  • User Tracking

  • Policy

  • Operations

  • Network

  • Inventory

  • Kubernetes

    The dashboard cards in each default category have been carefully selected by Spyderbat analysts based on industry best practices, typical security and operational use cases, and the unique ability of the Spyderbat monitoring platform to surface critical environment information in an easy-to-consume manner.

    The dashboard card selection in each category is static and may not be customized. However, users with appropriate permissions can create their own dashboards and group them into custom categories aligned with their company's security and monitoring goals. Check out this tutorial to learn more about creating your own custom dashboards and dashboard cards.

    Let’s take a look at each of these distinct categories in detail.

    Security Category

    The Security category focuses on surfacing security-related activities that may be deemed malicious, suspicious or interesting, and is targeted at SecOps needs.

    The Spyderbat Nano Agent, installed on every node in your monitoring scope and combined with a sophisticated analytics engine that uses existing databases of known security detections as well as Spyderbat's proprietary analytics and rules, makes it possible to capture and deliver a variety of security findings:

    • Recent Spydertraces with Score >50 and All Recent Spydertraces: a Spydertrace is Spyderbat's unique living graph of activity inside your monitored node or Kubernetes container. It is brought to your attention because of a combination of security detections associated with the processes, connections, user actions and other activities that are tied together by causal dependencies and are all part of the same story. So instead of looking at individual security events and trying to figure out whether any of them are related, you can investigate a complete trace of activity in which all the pieces have been linked together for you by Spyderbat's powerful analytics engine.

    • Sensitive Data Found in Environment Variables: Spyderbat will detect leaked credentials including passwords, tokens or secret keys.

    • Recently Observed Listening Sockets: Spyderbat will identify all open ports and listening sockets that wait for connections from remote clients, as they could potentially provide a vector for a remote attacker to gain access to the device. These become dangerous when the service listening on the port is misconfigured, unpatched, vulnerable to exploits or protected by poor network security rules.

    • Recent (Critical and High Severity) Security Flags: Flags are point-in-time security detections of an event and are generated using Spyderbat's database that includes MITRE ATT&CK scenarios, Spyderbat's own analytics and any third-party imported databases that you may have configured (e.g., ).

    • Processes Executed Out of /tmp: this directory is used to store temporary files, which makes it a target for malware.

    User Tracking Category

    This category focuses on abnormal and potentially suspicious user initiated activity, such as:

    • Interactive User Spydertraces: a chain of activity triggered by an interactive process (aka foreground process) launched and controlled by a user through the command line in a terminal session.

    • Interactive User Sessions: in fact, this dashboard card shows interactive processes and the associated effective users that triggered them.

    • Interactive User Sessions with Privilege Escalation: a list of interactive processes, the associated effective users that triggered them, and the user privilege change/escalation event (or multiple events) occurring within the same chain of activity.

    • Interactive Shell Inside a Container: any interactive user activity inside a container is suspicious; it is considered an anti-pattern and may indicate that something malicious is going on. An interactive shell opened inside a container could potentially lead to data exfiltration.

    Policy Category

    Spyderbat Guardian policies must be configured and applied in order to take full advantage of this dashboard category. If you are not familiar with Spyderbat Guardian, please visit Guardian Policies section of our documentation portal.

    The dashboard cards in this category are specifically tailored to surface the following Guardian findings:

    • Container Policy Deviation Spydertraces will show a chain of related activity triggered by a policy violation inside your monitored container.

    • Container Policy Deviation Flags will list all individual point-in-time security detections associated with the applied policies in your containers.

    • Linux Service Policy Deviation Flags will list all individual point-in-time security detections associated with the applied policies on your Linux VMs for the background services.

    Operations Category

    The Operations category currently offers a single dashboard card that reveals point-in-time events associated primarily with management of the monitored infrastructure and the uptime of the assets within it: for example, whether a pod is running, whether specific memory-management features aimed at preventing memory leaks are enabled or disabled, and whether any conditions or thresholds are exceeded by any of the infrastructure's critical components.

    Network Category

    In this category you will gain visibility into a variety of network-related activity in your monitored environment, including connections made in and out of the monitored hosts, the connection methods utilized, East-West communication between machines, and egress traffic flows.

    By default we are offering the following dashboard card options:

    • Long Lived Egress Connections and Egress Connections with Large Data Transfer will allow you to investigate egress connections that could pose a risk of malicious or accidental insider threat. You will be able to see egress connections that remain active for an unusually long period of time, or look at large data transfers out of your organization.

    • Cross-Machine Connections will help identify potentially malicious lateral movement within your environment.

    • Connections to DNS will help you detect unauthorized activities that could lead to network reconnaissance, malware downloads, communication with attackers' command and control servers, or data transfers out of a network.

    • Connections Initiated by an SSH Process will allow you to validate whether these connections are legitimate or suspicious: even though the protocol is inherently secure and one of the most common, it can be a valuable attack vector for hackers who could brute-force credentials and exploit SSH keys (authentication mechanisms, client-server configs and machine identities).

    Inventory Category

    This category is self-explanatory and will show all recently observed main resource assets within your monitored scope. If you bake the Spyderbat Nano Agent installer into your golden image or Kubernetes automation, then you can be sure that the new machines will be monitored the moment they come up. You can refer to this section, All About Spyderbat Nano Agent, on how to deploy into different types of infrastructure and how to use automation.

    Out of the box, Spyderbat will monitor your inventory of Linux systems, Kubernetes clusters, Kubernetes nodes, pods and containers. Besides the list of these assets, you will be able to easily access the associated asset metadata, view each asset's current state, and investigate any activity related to that asset within the desired time frame.

    Kubernetes Category

This last category is a more granular version of the Inventory section, focused specifically on the Kubernetes infrastructure assets within your monitoring scope. Besides the main asset types already displayed in the Inventory dashboards (clusters, nodes, pods, and containers), you can also review your services, deployments, replicasets, and daemonsets.

In addition, we provide a full account of kubectl “delete”, “apply”, or “create” commands executed on the monitored clusters within the desired observation period. This gives you the opportunity to confirm these major changes are authorized and expected, and to take immediate action to stop potentially damaging behavior in its tracks.


    Kubernetes

    Nano Agent install via public or locally hosted Helm Chart or manually via daemonset; configuring parameters (memory and CPU resources, priority class), and validating install into a K8s cluster.

    Published: October 11, 2022

    The Spyderbat Nano Agent in a containerized environment can be deployed via a Kubernetes Daemonset to a target Kubernetes Cluster. To guarantee proper coverage, it is important to ensure that a single instance of the Spyderbat Nano Agent runs on every cluster node (and is optionally deployed to API server Control Plane nodes for self-managed clusters).

    Spyderbat offers a simple deployment approach via Helm Chart, which is a package manager tool for Kubernetes that creates the necessary pods, permissions, network rules, etc. Instructions are also provided below for cases where the target cluster does not have internet access to the necessary artifacts, and the deployment is executed with a simple Kubernetes Daemonset manifest.

    Infrastructure Prerequisites

The Spyderbat Nano Agent leverages eBPF technology on Linux systems to gather data and forward it to the Spyderbat backend. A full list of supported Linux operating systems can be found on our website.

Successful Spyderbat Nano Agent installation and new source registration in the Spyderbat UI require that the agent has outbound access on port 443 to https://orc.spyderbat.com, so that the Nano Agent can pull all needed updates and register with the Spyderbat backend. This means that the pod running the Nano Agent must have outbound access from the Kubernetes cluster and target namespace to the port and domain above.

To verify successful agent installation, the person installing the Spyderbat agent should also ideally have a Spyderbat Admin account in their Spyderbat organization and should be able to access their organization in the Spyderbat UI at https://app.spyderbat.com.

    Public Helm Chart Deployment: Clone Repo, Update and Install

Below is the set of deployment instructions for your Kubernetes cluster, which is available in the Spyderbat UI under Sources -> Add New Source. This deployment runs with all default settings for the parameters referenced above, which have been pre-populated.

The agent registration code is specific to your organization (see below), and the ORC URL is the endpoint where your Nano Agents register and communicate with the Spyderbat backend.

If you wish to store your Agent Registration Code in AWS Secrets Manager, please refer to this article for more information on how to set it up.

    The Helm installation commands specific to your organization can be found in the Spyderbat UI by clicking on “New Source” under the “Sources” section of the left-hand navigation. This will lead to an agent installation wizard where the Helm chart details for your organization are available.

    Customizing the Helm Chart Values

    To get the Helm Chart source, you may clone the repo by running the following command:

    The Spyderbat Helm Chart includes a set of yaml files and configurable parameters that can be optionally modified by the user before running the Helm Chart on a target Kubernetes cluster.

    Resources

You can specify the resource request for containers in a pod, which enables kube-scheduler to decide which node to place the pod on. You can also specify the resource limit for a container; the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the requested amount of that system resource specifically for that container to use.

    There are two resource types to configure: CPU and memory.

CPU is specified in units of Kubernetes CPUs, where 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core. For CPU resources, the expression 0.1 is equivalent to 100m, which can be read as “one hundred millicpu” or “one hundred millicores”.

Memory is specified in units of bytes, using either an integer format or a power-of-two equivalent. For example, 2048Mi is 2048 mebibytes (MiB).
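To make the unit arithmetic concrete, here is a small illustrative sketch (not part of Spyderbat's tooling) that converts these Kubernetes-style quantity strings into plain numbers:

```python
# Illustrative helpers for Kubernetes resource quantities (not Spyderbat code).
def cpu_to_cores(quantity: str) -> float:
    """Convert a CPU quantity like '100m' or '0.5' into cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000  # millicores -> cores
    return float(quantity)

def mem_to_bytes(quantity: str) -> int:
    """Convert a memory quantity like '512Mi' or '8192M' into bytes."""
    units = {"Gi": 1024**3, "Mi": 1024**2, "G": 1000**3, "M": 1000**2}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

print(cpu_to_cores("100m"))   # 0.1
print(mem_to_bytes("512Mi"))  # 536870912
```

For instance, the default request of 100m is one tenth of a core, and the 10240Mi memory limit works out to 10240 * 1024 * 1024 bytes.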

    By default the resource requests are set to the following values:

    • CPU at 100m = 0.1 of a single CPU core (physical or virtual)

• Memory at 512Mi = 512 MiB

    And the resource limits are set to the following values:

    • CPU resources are hard capped at 6 CPU cores

• Memory resources are hard-capped at 10240Mi (10 GiB) of RAM

    Priority Class

This is a non-namespaced object that defines a mapping from a priority class name to an integer priority value: the higher the value, the higher the priority. A PriorityClass object can have any 32-bit integer value smaller than or equal to 1 billion. By default, the priority is set to the lowest value.

Once the priority class is set within your organization's priority scale, the agent is installed on every node in the cluster according to that priority. If the priority class is set too low, the pods could be preempted or evicted; if you want to ensure that an agent is installed on every node in the cluster when the pod is created, set the priority accordingly.

    For example:

• 100 to 1000: low priority

• 100K+: ultra-high priority

By default, the priority class is disabled. If it is enabled, the default value is 1000.
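For reference, an enabled priority class corresponds to a standard Kubernetes PriorityClass object along these lines (a sketch only; the name here is hypothetical, and the actual object is created by the Helm Chart):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: spyderbat-priority   # hypothetical name; the chart defines the real one
value: 1000                  # the default value when priorityClassDefault is enabled
globalDefault: false
description: "Scheduling priority for Spyderbat Nano Agent pods"
```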

It is important to keep in mind that if the priority class remains disabled, the Spyderbat Nano Agent may never get scheduled on a node that has no spare capacity.

    Namespace

    Namespaces provide a mechanism for isolating groups of resources within a single cluster. If the namespace parameter is set to false and the agent installer is run, a single pod will be created in the default namespace.

Once this parameter is enabled (set to “true”), the “create namespace” argument is used to create the “spyderbat” namespace as part of the deployment.

    Service Account

During Spyderbat Nano Agent deployment into the Kubernetes cluster, the daemonset puts an agent on every node in the cluster. ClusterMonitor creates a special agent that monitors the Kubernetes cluster itself, and it is the ClusterMonitor that needs service account permissions to enable such monitoring. The name of the service account can be changed in values.yaml; it defaults to “spyderbat-serviceaccount”.

    The service account uses a “clusterrolebinding” of cluster-admin which allows it to read all the cluster configuration and gives it the ability to terminate pods to stop attacks.

    If you do not wish to use preventive actions, the cluster role can be altered in values.yaml to only have “ReadOnly” and “Watch” permissions.
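As a sketch of what such a read-only role could look like (hypothetical names; adjust values.yaml rather than applying this directly), a minimal get/list/watch ClusterRole is:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spyderbat-readonly        # hypothetical name
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"] # read-only: no pod termination
```

With a role like this, the ClusterMonitor can still read cluster configuration and watch for changes, but the agent loses the ability to terminate pods as a response action.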

To update desired parameters via the command line, use the following command sequence:

This replaces resources.requests.cpu with 1000m instead of the default 100m. For numeric settings, use --set instead of --set-string.

Below is the summary table with all the defaults for your reference:

Parameter | Parameter from values.yaml | Default State | Default Value (if enabled)
CPU resource request | requests: cpu | N/A | 100m
CPU resource limit | limits: cpu | N/A | 6000m
Memory resource request | requests: memory | N/A | 512Mi
Memory resource limit | limits: memory | N/A | 10240Mi
Priority class | priorityClassDefault | enabled: false | 1000
Namespace | namespaceSpyderbat | enabled: true | spyderbat
Omit Environment | OMITENVIRONMENT | "no" | see below

For Omit Environment: "no" emits all environment variables, "everything" omits all environment variables, and "allbutredacted" uses our rules to encrypt variables that look like they contain secrets and emits only those for analysis.

    To configure access via a proxy you can add additional parameters to the Helm command line:

To set the resource limits, additional parameters like these can be added to the Helm command line. We recommend a limit of 3-5% of the resources on a node.

    Helm Chart Package Contents

    The Helm Chart packages the following installer files:

• Nanoagent.yaml: used to ensure a copy of the pod is created on every node.

• Clustermonitor.yaml: creates a ClusterMonitor Nano Agent that collects information from the K8s API

• Serviceaccount.yaml: creates the service account as part of the deployment to allow leveraging K8s APIs

• Rolebinding.yaml: defines the service account cluster role binding for the Spyderbat service account

• Namespace.yaml: creates the spyderbat namespace for resource management

• Priority.yaml: sets priority for Spyderbat pod deployment on all nodes in the K8s cluster

• Values.yaml: contains the user-configurable parameters for the Helm Chart install

    Deployment via Self-Hosted Helm Chart and Docker Container Image

In scenarios where you want to host the Helm Chart and container image locally, you may leverage the following instructions. Note that the pod running the Spyderbat Nano Agent still requires outbound internet access to https://orc.spyderbat.com on port 443.

    On a machine with internet access, you can pull the Spyderbat container image into your local docker system with the following command:

    To see the image id and that it is local:

    Which will return results looking like the following:

    You can export the image with:

    The file docker.image.nano_agent can be imported into your local repository.

    Or alternatively, you may download a compressed image like this:

    This image is gzip compressed but can be installed into your registry or repository.

    To get the Helm Chart for internal hosting:

    You can unpack the Helm Chart with:

    In nanoagent/values.yaml edit the image section to point to your new image registry when you save the container image.

    The Helm chart can be used locally, or you can host it.

    Deployment via a Daemonset

Should you need to run the Spyderbat Nano Agent install manually, the yaml files can be extracted and applied one by one in a controlled fashion.

To extract the files from the Helm Chart available in a public GitHub repository, run the following command using your organization registration code (see the Public Helm Chart Deployment section above for detail on how to find your agent registration code):

    Once run, this command will produce a batch of yaml files, including the following:

    You can then proceed to modify the desired parameters in the respective files as noted above and run individual files one by one to complete the Spyderbat Nano Agent installation.

    Validation

    If the installation proceeded correctly, you should receive the following confirmation:

    Once it registers with Spyderbat’s backend, you will be able to see a number of active sources with a recent registration date, corresponding to the number of nodes in the cluster that were targeted with the agent.

    Once the Spyderbat Nano Agents have been installed, you can validate the pods are running with the following command:

    You should see something like the following – one pod per cluster node:

Note that the free Spyderbat Community account allows you to monitor up to 5 nodes, i.e. register up to 5 sources in the Spyderbat UI. If you have a cluster that contains more than 5 nodes or anticipate scaling up in the near future, please visit https://www.spyderbat.com/pricing/ to sign up for our Professional tier.

    Response Actions

    Spyderbat’s Policy Response Actions provide a powerful mechanism for responding to security events and deviations within your environment. These actions allow you to automate responses, enforce security policies, and maintain operational integrity. Response actions fall under two main categories, Agent and Standard.

    Categories of Response Actions

    1. Agent Response Actions:

      • Purpose: These actions are executed directly on machines where the Spyderbat Nano Agent is installed.

      • Targeted Scope: They allow for machine-specific responses.

• Examples:

  • Kill a process.

  • Kill a pod.

  • Kill a process tree.

  • Renice a process.

    2. Standard Response Actions:

      • Purpose: These actions generate security and operations flags.

      • Insights: They provide visibility into policy violations or anomalies. They serve as alerts that can be further processed by other systems or personnel.

• Examples:

  • Creating red flags. (Security focused, can trigger Spydertraces)

  • Creating operations flags. (Operations focused, highlight potential problems with infrastructure)

    Actions

    Response actions are defined within the spec field of policies.

    For Example:

    • The default section contains global actions that apply to the entire policy. Whenever a deviation occurs, default actions are taken if applicable.

    • The actions section allows you to define more specific actions with selectors that narrow the scope of when the action should be executed.

    makeRedFlag Action

This action makes a security flag. The ultimate consumers of these flags are security personnel investigating an anomaly. These flags can trigger Spydertraces and/or be used to trigger notifications.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

    Fields

    • severity: The priority level of the red flag. Can be critical, high, medium, low, or info.

    • impact: [Optional] A string describing the security impact should the flag be generated.

    • content: [Optional] A string containing markdown that can detail next steps or who to contact.

    Example:

    makeOpsFlag Action

This action makes an operations flag. The ultimate consumers of these flags are operations personnel responsible for maintaining infrastructure.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

    Fields

    • severity: The priority level of the operations flag. Can be critical, high, medium, low, or info.

• impact: [Optional] A string describing the operations impact should the flag be generated.

• content: [Optional] A string containing markdown that can detail next steps or who to contact.

    Example:

    agentKillPod

This action tells the Spyderbat Nano Agent to kill the pod in which a deviant process is running.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Process

    Examples:

    Kill pods running deviant netcat processes.

    Kill all pods with deviations

    agentKillProcess

    This action tells the Spyderbat Nano Agent to kill a deviant process.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

    Examples:

    Kill deviant processes running the /bin/bash executable.

    Kill all deviant processes

    agentKillProcessGroup

This action kills a process and any other processes within the same process group (pgid).

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

    Examples:

    Kill process group of deviant processes running the /bin/bash executable.

    Kill all deviant processes and their associated groups.

    agentKillProcessTree

    This action instructs the Spyderbat Nano Agent to kill a deviant process along with its descendants (child processes). It is used to terminate a process tree, ensuring that both the specified parent process and all its child processes are killed.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

    Examples:

    Kill a deviant process and all its descendants:

    This example demonstrates how to use the agentKillProcessTree action to kill a deviant process along with its child processes. In this case, the process to be killed is the one running /bin/bash, and all descendant processes are also terminated.

    Kill all deviant processes and their descendants:

    This configuration will kill all deviant processes and their child processes without specifying any selectors.

    agentReniceProcess

    This action allows the Spyderbat Nano Agent to adjust the priority of deviant processes by "renicing" them. The process's priority (or "nice value") can be changed to either increase or decrease its CPU scheduling priority.

Supported Selectors: Cluster, Machine, Namespace, Pod, Container, Service, Process

Priority Range: The priority value is a string that specifies the new priority (or "nice value") for the process. The valid range for priority is -20 to 19, where:

    • -20 is the highest priority (more CPU time),

    • 19 is the lowest priority (less CPU time).

    Note: The default nice value for a process in Linux is 0.

    Examples:

    Renice a deviant process by changing its priority:

    To adjust the priority of a specific deviant process, use the agentReniceProcess action. In this example, the priority of a deviant process running the /bin/bash executable is set to 20, which is a lower priority.

    Renice a specific process by name:

    This example shows how to renice a process running the ping command. The priority is set to -1.

    Renice all deviant processes:

    To renice all deviant processes to a specific priority you can configure the action as follows:

    Related Pages

• Selectors - Reference documentation on the various selector types.

• Policies - The policies that use response actions.

    Falco

Example policy with response actions:

apiVersion: spyderbat/v1
kind: SpyderbatPolicy
metadata:
  name: demo-cluster-policy
  type: cluster
spec:
  enabled: true
  mode: audit
  clusterSelector:
    matchFields:
      name: demo-cluster
  rulesets:
  - demo-cluster_ruleset
  response:
    default:
    - makeRedFlag:
        severity: high
    actions:
    - agentKillProcess:
        processSelector:
          matchFields:
            exe: /bin/bash

makeRedFlag example:

response:
    default:
    - makeRedFlag:
        severity: high
    actions:
    - makeRedFlag:
      namespaceSelector:
        kubernetes.io/metadata.name: production
      severity: critical
      impact: Unexpected activity on this critical workload could be malicious and should be investigated immediately.
      content: '### Remediation
      1. Contact developer
      2. Confirm if activity is expected or not
      3. If not, conduct investigation
      '

makeOpsFlag example:

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - makeOpsFlag:
      namespaceSelector:
        kubernetes.io/metadata.name: production
      severity: critical
      impact: This workload appears to be behaving abnormally, operations should investigate.
      content: '### Remediation
      1. Confirm configuration
      2. Deploy fix
      '

agentKillPod example: kill pods running deviant netcat processes.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentKillPod:
        processSelector:
          matchFields:
            name: nc

agentKillPod example: kill all pods with deviations.

response:
    default:
    - makeOpsFlag:
        severity: high
    - agentKillPod:
    actions: []

agentKillProcess example: kill deviant processes running the /bin/bash executable.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentKillProcess:
        processSelector:
          matchFields:
            exe: /bin/bash

agentKillProcess example: kill all deviant processes.

response:
    default:
    - makeOpsFlag:
        severity: high
    - agentKillProcess:
    actions: []

agentKillProcessGroup example: kill the process group of deviant /bin/bash processes.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentKillProcessGroup:
        processSelector:
          matchFields:
            exe: /bin/bash

agentKillProcessGroup example: kill all deviant processes and their associated groups.

response:
    default:
    - makeOpsFlag:
        severity: high
    - agentKillProcessGroup:
    actions: []

agentKillProcessTree example: kill a deviant process and all its descendants.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentKillProcessTree:
        processSelector:
          matchFields:
            exe: /bin/bash

agentKillProcessTree example: kill all deviant processes and their descendants.

response:
    default:
    - makeOpsFlag:
        severity: high
    - agentKillProcessTree:
    actions: []

agentReniceProcess example: renice a deviant /bin/bash process.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentReniceProcess:
        priority: "20"
        processSelector:
          matchFields:
            exe: /bin/bash

agentReniceProcess example: renice a process running the ping command to priority -1.

response:
    default:
    - makeOpsFlag:
        severity: high
    actions:
    - agentReniceProcess:
        priority: "-1"
        processSelector:
          name:
            - ping

agentReniceProcess example: renice all deviant processes.

response:
    default:
    - makeOpsFlag:
        severity: high
    - agentReniceProcess:
        priority: "10"
    actions: []


    Notifications

This section documents the full capabilities of Notification Templates and the fields that exist to manipulate template content and behavior.

    The Conditions and Triggering Notifications section will briefly explain how the condition field interacts with the Spyderbat data model in order to trigger notifications.

    The Dereferencing Values section describes how you can inject values from the json record that triggered the notification into the notification itself.

    The Internal Functions section details the various functions you can use to add additional context to your notifications or manipulate existing values into a more desirable format.

Note: The following are advanced concepts. An understanding of them is not required to get started with notifications. Follow the how-to guides to quickly set up commonly-used notifications.

    Data Model Primer

Notifications in Spyderbat are driven by the data model. The Spyderbat Nano Agent generates raw telemetry and sends it to the Spyderbat Analytics Engine. The Analytics Engine processes the raw data and builds the behavior web that is viewable in the Console. Additionally, the Analytics Engine analyzes data in the behavior web for security detections, operations issues, policy violations, and more.

    The data emitted by the Analytics Engine comes in two flavors: models and events. Notifications are generated by evaluating these two types of records. Models are (potentially) long-lived objects that have a start, middle, and end in their lifecycle. Events represent detections or occurrences that happen at a single point in time.

    Take processes as an example. Spyderbat receives process telemetry and builds models to track the state of the processes themselves. What gets emitted from Spyderbat looks like this:

    Process Model

    The model above is for an interactive bash shell process running on a machine with the Spyderbat Nano Agent installed. It contains all the information required to add it into the behavior web. It also happens that this process is running with an effective user (euser) "root". That is a privileged account and we have a security detection when we see an interactive shell running as root.

    Conditions and Triggering Notifications

    Dereferencing Values

    In Spyderbat's Notification Templates, you can dynamically include specific values from the JSON objects you are monitoring in the Template fields using dereferencing syntax. The syntax for dereferencing is as follows:

    • For direct field access: {{ field_name }}

    • For subfield access within a dictionary: {{ parent_field.sub_field }}

    Let's consider an example JSON object:

Suppose you have a Saved Query for a Red Flag, i.e., a security detection, on the root bash process above. Events generally have a ref field that points to the id of the model they relate to.

    Examples of Dereferencing Values from the Object Above:

    Example with Email Template:

These examples demonstrate how you can leverage dereferencing to dynamically include specific values from the JSON object in your notifications, adding more context to the alert. Feel free to adjust the examples based on your specific use cases and requirements.
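The substitution logic can be sketched in a few lines of Python (illustrative only; Spyderbat's actual template engine may differ in edge cases, and the nested machine.hostname field below is a hypothetical example):

```python
import re

def render(template: str, obj: dict) -> str:
    """Replace {{ field }} and {{ parent.sub_field }} placeholders with values from obj."""
    def resolve(match: re.Match) -> str:
        value = obj
        for part in match.group(1).strip().split("."):
            value = value[part]  # walk nested dictionaries one key at a time
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", resolve, template)

# "name" and "euser" follow the process model example; "machine" is hypothetical.
record = {"name": "bash", "euser": "root", "machine": {"hostname": "web-01"}}
print(render("{{ name }} ran as {{ euser }} on {{ machine.hostname }}", record))
# bash ran as root on web-01
```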

    Internal Functions

In Spyderbat's Notification Templates, you can also enhance the alert by using internal functions in each Template type. The syntax for using functions is as follows:

    • {{ __FUNCTION_NAME__ [| ARG1, ARG2, ..., ARGN] }}

    Arguments are optional, depending on the function used. The return value of the function will replace the {{ __FUNCTION_NAME__ }} placeholder, or an error message will be displayed if something goes wrong.

    Example JSON object used in function examples:

    This metrics record is used to monitor the resource utilization of the Spyderbat Nano Agent.

    Functions:


    {{ __cluster__ }}

    • Arguments: This function takes 0 arguments

    • Description: Returns the name of the cluster the object is associated with or "No Cluster."

    Pagerduty Template spec Example:


    This would result in a list displayed in the notification:

    • Cluster: No Cluster

    This is because in the metrics object above, the cluster_name field is null.


    {{ __hr_time__ }}

    • Arguments: This function takes 0 arguments

    • Description: Returns a human-readable version of the time field found in the object.

    PagerDuty Template Spec Example:

    This would result in a list displayed in the notification:

• Time: 2023-12-01 20:02:58 UTC

    This converts the epoch time 1701460978.1299076 in the time field in the record above to something human readable.
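The conversion can be reproduced with standard library calls (a sketch of the behavior, not Spyderbat's implementation):

```python
from datetime import datetime, timezone

def hr_time(epoch: float) -> str:
    """Render an epoch timestamp roughly the way __hr_time__ displays it."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

print(hr_time(1701460978.1299076))  # 2023-12-01 20:02:58 UTC
```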


    {{ __linkback__ }}

    • Arguments: This function takes 0 arguments

    • Description: Returns a relevant URL linking back to the Spyderbat Console for the object being evaluated.

    PagerDuty Template Spec Example:

    This would result in a linkback URL being generated, pointing to the Agent Health page for the agent referenced above "ref": "agent:07Ax6uRpB606065sXXXX". It would display as a link "View in Spyderbat" at the bottom of your notification.


    {{ __origin__ }}

    • Arguments: This function takes 0 arguments

    • Description: Returns a string explaining why the notification was generated.

    PagerDuty Template Spec Example:

    This would result in a message like:

    Notification Origin: This notification was generated because an event_metric record matched the condition specified in notification config "Agent CPU Usage - notif:6voXLIYfRPmTky-XVAaXXX".


{{ __percent__ | number }}

• Arguments: This function takes exactly 1 argument

  • number:

    • type: String or Number

    • description: If a String is supplied, the string must be a field in the object with a numerical value.

• Description: Multiplies an input number by 100, caps the precision at 2 decimal places, and appends a percent (%) symbol.

    PagerDuty Template Spec Example:

    This would result in a list displayed in the notification:

    • CPU Usage: 4.17%
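Based on the function's description (multiply by 100, cap precision at 2 decimal places, append %), its behavior can be sketched as:

```python
def percent(number: float) -> str:
    """Format a fractional value the way __percent__ renders it."""
    return f"{number * 100:.2f}%"

print(percent(0.0417))  # 4.17%
print(percent(0.5))     # 50.00%
```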

Here is the full list of the internal function names along with their actual function:

• __hr_time__ – Human-readable timestamp of the event.

• __time_int__ – Timestamp in integer format.

• __linkback__ – URL linking back to the event in Spyderbat.

• __origin__ – The origin or source of the event.

• __cluster__ – The cluster where the event occurred.

• __source__ – The source component or entity that generated the event.

• __hostname__ – The hostname where the event took place.

• __percent__ – A percentage value associated with the event.

• __pd_severity__ – Severity level formatted specifically for PagerDuty.

• __query_name__ – The name of the saved query that triggered the event.

Conclusion: You can mix static values dereferenced from the object with Spyderbat's internal functions to enhance templates and provide direct context in a notification alert.

Add the Helm repo and install the Nano Agent:

helm repo add nanoagent https://spyderbat.github.io/nanoagent_helm/
helm repo update
helm install nanoagent nanoagent/nanoagent \
--set nanoagent.agentRegistrationCode=<agent registration code> \
--set nanoagent.orcurl=https://orc.spyderbat.com/

Clone the Helm Chart repo:

git clone https://github.com/spyderbat/nanoagent_helmchart.git

Install with updated parameters:

helm install nanoagent nanoagent/nanoagent \
--set nanoagent.agentRegistrationCode=<agent registration code> \
--set nanoagent.orcurl=https://orc.spyderbat.com/ \
--set-string resources.requests.cpu=1000m \
--set priorityClassDefault.value=10000

Proxy parameters:

--set nanoagent.httpproxy=http://123.123.123.123:2/ \
--set nanoagent.httpsproxy=http://123.123.123.123:2/

Resource limit parameters:

--set resources.limits.cpu=2000m \
--set resources.limits.memory=8192M

Pull the container image:

docker pull public.ecr.aws/a6j2k0g1/nano-agent:latest

List local images:

docker image ls
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
<none>                               <none>    72bb338b2313   2 minutes ago   151MB
ubuntu                               latest    27941809078c   3 weeks ago     77.8MB
public.ecr.aws/a6j2k0g1/nano-agent   latest    dde533638cf2   2 months ago    148MB

Export the image:

docker image save dde533638cf2 > docker.image.nano_agent

Download a compressed image:

curl https://spyderbat.github.io/nanoagent_helm/docker.image.nano_agent.gz \
--output agentimage.tar.gz

Download the Helm Chart for internal hosting:

curl https://spyderbat.github.io/nanoagent_helm/agent_helm.tar \
--output nano_agent_helmchart.tar

Unpack the Helm Chart:

tar xvf nano_agent_helmchart.tar

Render the daemonset yaml files from the Helm Chart:

helm template nanoagent nanoagent/nanoagent \
--set nanoagent.agentRegistrationCode=<agent registration code> \
--set nanoagent.orcurl=https://orc.spyderbat.com/ \
--create-namespace \
--set spyderbat_tags='CLUSTER_NAME=mycluster:environment=dev'

Validate the pods are running:

kubectl get pods -n spyderbat
    {
      "schema": "model_process::1.2.0",
      "id": "proc:_ZO7yNX2S54:ZWn4uw:753728",
      "version": 1701443785,
      "description": "bash [753728, normal] closed from 7dbab6f7-77de-494a-9490-564bc7174611",
      "cgroup": "systemd:/user.slice/user-1000.slice/session-966.scope",
      "time": 1701443785.3360424,
      "create_time": 1701443771.7009835,
      "valid_from": 1701443771.7009835,
      "muid": "mach:_ZO7yNX2S54",
      "pid": 753728,
      "ppid": 753726,
      "ppuid": "proc:_ZO7yNX2S54:ZWn4uw:753726",
      "tpuid": "proc:_ZO7yNX2S54:ZWn4uw:753726",
      "sid": "966",
      "args": [
        "/usr/bin/bash"
      ],
      "cwd": "/home/ubuntu",
      "thread": false,
      "type": "normal",
      "interactive": true,
      "environ": {},
      "duration": 9.437932014465332,
      "name": "bash",
      "title": "/usr/bin/bash",
      "auid": 1000,
      "euid": 0,
      "egid": 0,
      "container": null,
      "auser": "ubuntu",
      "euser": "root",
      "egrp": "root",
      "status": "closed",
      "data_is_complete": true,
      "ancestors": [
        "sudo",
        "bash",
        "sshd",
        "sshd",
        "systemd"
      ],
      "is_causer": false,
      "is_causee": false,
      "prev_time": 1701443781.1389155,
      "expire_at": 1701446399.999999,
      "exit": 0,
      "exe": "/usr/bin/bash",
      "valid_to": 1701443781.1389155,
      "traces": [
        "trace:_ZO7yNX2S54:AAYLdEBuJOo:753573:remote_access"
      ],
      "red_flag_count": 1,
      "red_flags": ["flag:629gia"],
      "ops_flag_count": 0,
      "ops_flags": [],
      "schemaType": "model_process",
      "schemaMajorVersion": 1,
      "record_type": "model",
      "versionedId": "proc:_ZO7yNX2S54:ZWn4uw:753728:v1701443785"
    }
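In this record the duration field lines up with valid_to minus valid_from. This is a numeric observation from this example, not a documented guarantee:

```python
# Fields copied from the model_process record above.
valid_from = 1701443771.7009835
valid_to = 1701443781.1389155
duration = 9.437932014465332

# duration appears to be the span between valid_from and valid_to
assert abs((valid_to - valid_from) - duration) < 1e-5
```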
    {
      "id": "event_alert:_ZO7yNX2S54:ZWn4uw:753728",
      "schema": "event_redflag:root_shell:1.1.0",
      "description": "ubuntu as root ran unusual interactive shell '/usr/bin/bash'",
      "ref": "proc:_ZO7yNX2S54:ZWn4uw:753728",
      "short_name": "root_shell",
      "class": [
        "redflag",
        "proc",
        "root_shell",
        "critical_severity"
      ],
      "flag_class": "redflag/proc/root_shell/critical_severity",
      "severity": "critical",
      "time": 1701443781.1389155,
      "routing": "customer",
      "version": 2,
      "muid": "mach:_ZO7yNX2S54",
      "name": "bash",
      "auid": 1000,
      "args": [
        "/usr/bin/bash"
      ],
      "auser": "ubuntu",
      "euser": "root",
      "ancestors": [
        "sudo",
        "bash",
        "sshd",
        "sshd",
        "systemd"
      ],
      "mitre_mapping": [
        {
          "sub-technique": "T1059.004",
          "sub-technique_name": "Unix Shell",
          "url": "https://attack.mitre.org/techniques/T1059/004",
          "created": "2020-03-09T14:15:05.330Z",
          "modified": "2021-07-26T22:34:43.261Z",
          "stix": "attack-pattern--a9d4b653-6915-42af-98b2-5758c4ceee56",
          "technique": "T1059",
          "technique_name": "Command and Scripting Interpreter",
          "tactic": "TA0002",
          "tactic_name": "Execution",
          "platform": "Linux"
        }
      ],
      "impact": "A shell owned by root has a dangerous level of permissions.",
      "ppuid": "proc:_ZO7yNX2S54:ZWn4uw:753726",
      "false_positive": false,
      "traces": [
        "trace:_ZO7yNX2S54:AAYLdEBuJOo:753573:remote_access"
      ],
      "traces_suppressed": false,
      "schemaType": "event_redflag"
    }
    apiVersion: spyderbat/v1
    kind: NotificationTemplate
    metadata:
      name: email-template
      type: email
    spec:
      subject: "Spyderbat Alert: {{ severity }} Severity Detected on {{ name }}"
      body_html: |
        <html>
          <body>
            <h4>Spyderbat Alert</h4>
            <p><strong>Severity:</strong> {{ severity }}</p>
            <p><strong>Description:</strong> {{ description }}</p>
            <p><strong>Process:</strong> {{ name }} (Executed by {{ auser }}, Effective user: {{ euser }})</p>
            <p><strong>Command:</strong> {{ args | join(" ") }}</p>
            <p><strong>Detection Time:</strong> {{ time }}</p>
            <p><strong>MITRE Technique:</strong> <a href="{{ mitre_mapping[0].url }}">{{ mitre_mapping[0].technique_name }}</a></p>
          </body>
        </html>
      body_text: |
        Spyderbat Alert
        Severity: {{ severity }}
        Description: {{ description }}
        Process: {{ name }} (Executed by {{ auser }}, Effective user: {{ euser }})
        Command: {{ args | join(" ") }}
        Detection Time: {{ time }}
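To make the variable mapping concrete, here is a minimal sketch that fills the body_text template from the event_redflag example earlier. Plain Python string formatting stands in for the actual template engine:

```python
# Field values copied from the event_redflag record above.
flag = {
    "severity": "critical",
    "description": "ubuntu as root ran unusual interactive shell '/usr/bin/bash'",
    "name": "bash",
    "auser": "ubuntu",
    "euser": "root",
    "args": ["/usr/bin/bash"],
    "time": 1701443781.1389155,
}

# Each {{ variable }} in the template maps to a field of the record;
# {{ args | join(" ") }} corresponds to " ".join(flag["args"]).
body_text = (
    "Spyderbat Alert\n"
    f"Severity: {flag['severity']}\n"
    f"Description: {flag['description']}\n"
    f"Process: {flag['name']} (Executed by {flag['auser']}, Effective user: {flag['euser']})\n"
    f"Command: {' '.join(flag['args'])}\n"
    f"Detection Time: {flag['time']}\n"
)
print(body_text)
```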
    {
      "schema": "event_metric:agent:1.0.0",
      "id": "event_metrics:07Ax6uRpB606065sYozQ:ZWXXXX",
      "ref": "agent:07Ax6uRpB606065sXXXX",
      "version": 1,
      "muid": "mach:5sZN4f2mXXX",
      "time": 1701460978.1299076,
      "cpu_cores": 2,
      "total_mem_B": 8173600768,
      "hostname": "example_machine",
      "cluster_name": null,
      "bandwidth_1min_Bps": 1616,
      "cpu_1min_P": {
        "agent": 0.0417,
        "authUID": 0.0002,
        "bashbatUID": 0.0002,
        "grimreaperUID": 0.0034,
        "procmonUID": 0.0003,
        "scentlessUID": 0.0355,
        "snapshotUID": 0
      },
      "mem_1min_B": {
        "agent": 355326000,
        "authUID": 29440000,
        "bashbatUID": 37820000,
        "grimreaperUID": 42692000,
        "procmonUID": 38005000,
        "scentlessUID": 71352000,
        "snapshotUID": 45604000
      },
      "mem_1min_P": {
        "agent": 0.04347239485822658,
        "authUID": 0.003601839732038158,
        "bashbatUID": 0.004627091666633258,
        "grimreaperUID": 0.005223156991853704,
        "procmonUID": 0.004649725510058091,
        "scentlessUID": 0.008729567546208785,
        "snapshotUID": 0.005579425921870522
      },
      "missed_messages": {
        "scentless": {
          "pcap_dropped": 0,
          "data_drop_delayq": 0,
          "data_drop_dns": 0,
          "missed_events": 0
        }
      }
    }
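The mem_1min_P values in this record appear to be mem_1min_B divided by total_mem_B, i.e. each component's memory as a fraction of total machine memory. This is a numeric observation from this example record:

```python
# Values copied from the event_metric record above.
total_mem_B = 8173600768
mem_1min_B = {"agent": 355326000}
mem_1min_P = {"agent": 0.04347239485822658}

# the reported fraction tracks bytes-used / total-bytes
ratio = mem_1min_B["agent"] / total_mem_B
assert abs(ratio - mem_1min_P["agent"]) < 1e-6
```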
    spec:
      custom_details: 
        "cluster": "{{ __cluster__ }}"
    spec:
      custom_details: 
        "time": "{{ __hr_time__ }}"
    spec:
      custom_details: 
        "linkback": "{{ __linkback__ }}"
    spec:
      custom_details: 
        "origin": "{{ __origin__ }}"
    spec:
      custom_details: 
        "CPU Usage": "{{ __percent__ | cpu_1min_P.agent }}"

    How to Lock Down Your Workloads With Guardian Policies Using Spyctl

    This page will teach you about Guardian Workload Policies. It will explain what they are, how to create them, how to apply them, and how to manage them.

    Prerequisites

    • Install Spyctl

    • Have at least one Spyderbat Nano Agent installed on a machine of your choosing

    What is a Guardian Workload Policy?

    Spyderbat's Guardian Workload Policy feature empowers users to define a known-good whitelist of process and network behavior for containers and Linux services. These policies serve as a proactive measure, enabling users to receive notifications when Spyderbat detects deviant activity within their workloads. Not only does the Guardian Workload Policy provide alerts, but it also offers the capability to take decisive actions by terminating deviant processes and Kubernetes pods. This comprehensive approach ensures a robust security framework, allowing users to maintain a vigilant stance against potential threats and unauthorized activities within their environments.

    Workload Policies also serve as a way to tune out certain Red Flags & Spydertraces. Red Flags and Spydertraces that would have been generated by your workloads will be marked as exceptions and reduce clutter in your security dashboards.

    Retrieving Fingerprints

    Policies are created from Fingerprints. Fingerprints are auto-generated documents with the process and network activity for a single container instance or instance of a Linux Service.

    To view all Fingerprints generated across your organization, issue the following command:

    For example:

    By default, Spyderbat queries for all Fingerprints in your organization from the last 1.5 hours. This means it will retrieve Fingerprints for any container or Linux service instance running during that time window (even if the instance started well before the window). You can increase the time range with the -t option.

    In this example organization we have 3 workloads running across multiple instances. Spyderbat has Fingerprints for two instances of mongo, 14 instances of nginx, and one instance of node that were online during the query's time window. None of these Fingerprints are covered by a policy as seen in the COVERED_BY_POLICY column.

    Download Fingerprints to a File

    To create a policy we must first download the fingerprints we wish to use to build the policy.

    For example:

    [Optional] Downloading Fingerprints from a K8s Namespace

    In certain instances you may have the same container image running in different Kubernetes namespaces with different allowed network activity. To separate allowed network activity by namespace you can use multiple policies for the same image. Using only Fingerprints from the same namespace will automatically scope the policy to that namespace.

    For example:

    This will only download Fingerprints tied to the namespace specified in the command.

    Create the Policy

    Once you have the Fingerprints to create the policy from, issue the following command:

    For example:

    Running this command does not make any changes to your Spyderbat environment. Enforcement does not take effect until you have applied the Policy.

    The Policy file we just created, policy.yaml, now contains a new resource; its kind field is "SpyderbatPolicy". This document is a merged version of the Fingerprints that went into it:

    Policies are created in audit mode by default. If you apply a Policy in audit mode it will not take response actions, but it will log the actions it would have taken. You can use the command spyctl logs policy POLICY_UID to monitor those logs.

    Generalize the Policy

    In its current form, this policy will only apply to mongo containers with the latest tag and only with the image ID sha256:68248f2793e077e818710fc5d6f6f93f1ae5739b694d541b7e0cd114e064fa11

    We can remove selector fields and wildcard values to broaden the Policy's scope.
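For instance, replacing the pinned tag with docker.io/library/mongo:* makes the selector match any mongo tag. The glob-style behavior can be illustrated with Python's fnmatch module (an analogy for how such a pattern behaves, not Spyderbat's actual matcher):

```python
from fnmatch import fnmatchcase

# Hypothetical illustration of wildcard image matching.
pattern = "docker.io/library/mongo:*"
assert fnmatchcase("docker.io/library/mongo:latest", pattern)
assert fnmatchcase("docker.io/library/mongo:6.0", pattern)
assert not fnmatchcase("docker.io/library/nginx:latest", pattern)
```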

    Using the edit command will open your default text editor and perform syntax checking when you save.

    Then we can generalize the containerSelector and the IP blocks

    In the above policy we removed the imageID field from the containerSelector. We also increased the scope of the ipBlock in ingress from multiple /32 CIDRs to a single /16 CIDR.
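You can sanity-check that kind of CIDR generalization with Python's ipaddress module: every original single-host block should fall within the broader network.

```python
import ipaddress

# The /16 used in the generalized policy above.
broad = ipaddress.ip_network("192.168.0.0/16")

# The original /32 blocks from the merged Fingerprints.
originals = [
    "192.168.0.229/32",
    "192.168.1.146/32",
    "192.168.2.221/32",
    "192.168.4.31/32",
]

# each original single-host block is contained in the /16
assert all(ipaddress.ip_network(c).subnet_of(broad) for c in originals)
```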

    Applying the Policy

    To apply a Policy you must use the apply command:

    The apply command will recognize the kind of the file, perform validation, and attempt to apply the resource to the policy database for the organization in your current Context. It accomplishes this via the Spyderbat API.

    For example, to apply the Policy we created above:

    This will apply the Policy to the organization in your current Context.

    To view the applied Policies in your current Context you can use the get command:

    For example:

    To view the yaml of the Policy you just applied, issue the command:

    The Policy will look something like this:

    [Optional] Adding "Interceptor" Response Actions

    When a new Policy is created it has a default Actions list and an empty actions list. The default Actions are taken when a policy is violated and no Action of the same type in the actions list is taken.

    By default, spyctl includes a makeRedFlag Action in the default section of the policy’s response field. This tells the Spyderbat backend to generate a redflag of high severity which will show up in the Spyderbat Console. The full list of redflag severities, in increasing severity, is as follows:

    • info

    • low

    • medium

    • high

    • critical

    The Actions in the actions field are taken when certain criteria are met. Every Action in the actions field must include a Selector. Selectors are a way of limiting the scope of an Action. For example, you can tell Spyderbat to kill a bash process that deviates from the Policy by using the processSelector:

    If you are in a Kubernetes environment you can also set up an Action to kill a pod when a Policy violation occurs. Let's say you want to kill a pod in your staging environment, the action would look like so:

    To add a kill process action, edit your policy file. For example:

    And add a kill process Action to the actions list.

    Our Policy now looks like this:

    Summary and Next Steps

    At this point you should have an applied policy in audit mode. You'll find that if you run spyctl get fingerprints --type container or spyctl get fingerprints --type linux-service the fingerprints you included in the policy will now be covered.

    While in audit mode your policy will generate logs of the actions it would have taken, in addition to any deviations it detects. To learn how to manage and update your policies, refer to Guardian Policy Management in Spyctl. It explains how to edit, update, and diff your policies. It will also explain when to graduate your policies from audit to enforce mode.

    When all of your policies are in enforce mode and every fingerprint is covered by a policy, you will have locked down your environment using Guardian. You will have established a whitelist of activity for all of your critical workloads, which allows you to be notified quickly when real threats occur and reduces red flag and spydertrace noise in your Spyderbat Dashboards.

    spyctl get fingerprints --type FINGERPRINT_TYPE [NAME_OR_UID]
    $ spyctl get fingerprints --type container docker
    Getting fingerprints from 2024-01-16T13:52:51Z to 2024-01-16T15:22:51Z
    IMAGE_NAME:TAG        IMAGEID       REPO                        COVERED_BY_POLICY    LATEST_TIMESTAMP                                                                                                                                                                                                                                                                 
    mongo:latest          8248f2793e07  docker.io/library           0/2                  2024-01-16T15:00:43Z
    nginx:latest          10d1f5b58f74  docker.io/library           0/14                 2024-01-16T15:01:08Z
    node:v3.23.5          b7f4f7a0ce46  docker.io/calico            0/1                  2024-01-16T15:01:08Z
    spyctl get fingerprints --type TYPE --output OUTPUT NAME_OR_UID > FILENAME
    spyctl get fingerprints --type container -o yaml mongo:latest > fprints.yaml
    spyctl get fingerprints --type container --output OUTPUT [--cluster CLUSTER_NAME_OR_UID] --namespace NAMESPACE NAME_OR_UID > FILENAME
    spyctl get fingerprints --type container -o yaml --namespace dev mongo:latest > fprints-dev.yaml
    spyctl get fingerprints --type container -o yaml --namespace prod mongo:latest > fprints-prod.yaml
    $ spyctl create policy --from-file FILENAME --name NAME_FOR_POLICY --mode MODE > policy.yaml
    $ spyctl create policy --from-file fprints.yaml --name mongo-policy --mode audit > policy.yaml
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      latestTimestamp: 1705435213.327981
      name: mongo-policy
      type: container
    spec:
      containerSelector:
        image: docker.io/library/mongo:latest
        imageID: sha256:68248f2793e077e818710fc5d6f6f93f1ae5739b694d541b7e0cd114e064fa11
      mode: audit
      processPolicy:
      - name: mongod
        exe:
        - /usr/bin/mongod
        id: mongod_0
        euser:
        - mongo
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.0.229/32
          - ipBlock:
              cidr: 192.168.1.146/32
          - ipBlock:
              cidr: 192.168.2.221/32
          - ipBlock:
              cidr: 192.168.4.31/32
          processes:
          - mongod_0
          ports:
          - protocol: TCP
            port: 27017
      response:
        default:
        - makeRedFlag:
            severity: high
        actions: []
    spyctl edit policy.yaml
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      latestTimestamp: 1705435213.327981
      name: mongo-policy
      type: container
    spec:
      containerSelector:
        image: docker.io/library/mongo:*
      mode: audit
      processPolicy:
      - name: mongod
        exe:
        - /usr/bin/mongod
        id: mongod_0
        euser:
        - mongo
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.0.0/16
          processes:
          - mongod_0
          ports:
          - protocol: TCP
            port: 27017
      response:
        default:
        - makeRedFlag:
            severity: high
        actions: []
    $ spyctl apply -f FILENAME
    $ spyctl apply -f policy.yaml
    $ spyctl get RESOURCE [OPTIONS] [NAME_OR_ID]
    $ spyctl get policies
    UID                       NAME              STATUS     TYPE       CREATE_TIME
    pol:CB1fSLq4wpkFG5kWsQ2r  mongo-policy      Auditing   container  2024-01-16T15:00:43Z
    $ spyctl get policies -o yaml CB1fSLq4wpkFG5kWsQ2r
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      name: mongo-policy
      type: container
      uid: pol:CB1fSLq4wpkFG5kWsQ2r
      creationTimestamp: 1673477668
      latestTimestamp: 1670001133
    spec:
      containerSelector:
        image: docker.io/library/mongo:*
      mode: audit
      processPolicy:
      - name: mongod
        exe:
        - /usr/bin/mongod
        id: mongod_0
        euser:
        - mongo
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.0.0/16
          processes:
          - mongod_0
          ports:
          - protocol: TCP
            port: 27017
      response:
        default:
        - makeRedFlag:
            severity: high
        actions: []
    response:
      default:
      - makeRedFlag:
          severity: high
      actions: []
    actions:
    - agentKillProcess:
        processSelector:
          name:
          - bash
    actions:
    - agentKillPod:
        podSelector:
          matchLabels:
            env: staging
    $ spyctl edit policy pol:CB1fSLq4wpkFG5kWsQ2r
    response:
      default:
      - makeRedFlag:
          severity: high
      actions:
      - agentKillProcess:
          processSelector:
            name:
            - bash
    apiVersion: spyderbat/v1
    kind: SpyderbatPolicy
    metadata:
      name: mongo-policy
      type: container
      uid: pol:CB1fSLq4wpkFG5kWsQ2r
      creationTimestamp: 1673477668
      latestTimestamp: 1670001133
    spec:
      containerSelector:
        image: docker.io/library/mongo:*
      mode: audit
      processPolicy:
      - name: mongod
        exe:
        - /usr/bin/mongod
        id: mongod_0
        euser:
        - mongo
      networkPolicy:
        egress: []
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.0.0/16
          processes:
          - mongod_0
          ports:
          - protocol: TCP
            port: 27017
      response:
        default:
        - makeRedFlag:
            severity: high
        actions:
        - agentKillProcess:
            processSelector:
              name:
              - bash
    $ spyctl get fingerprints --type container docker
    Getting fingerprints from 2024-01-16T13:52:51Z to 2024-01-16T15:22:51Z
    IMAGE_NAME:TAG        IMAGEID       REPO                        COVERED_BY_POLICY    LATEST_TIMESTAMP                                                                                                                                                                                                                                                                 
    mongo:latest          8248f2793e07  docker.io/library           2/2                  2024-01-16T15:00:43Z
    nginx:latest          10d1f5b58f74  docker.io/library           0/14                 2024-01-16T15:01:08Z
    node:v3.23.5          b7f4f7a0ce46  docker.io/calico            0/1                  2024-01-16T15:01:08Z

    All Related Objects

    Container

    Link to Fields

    cluster

    • New Schema: Cluster

    connections[*]

    • New Schema: Connection

    machine

    • New Schema: Machine

    node

    • New Schema: Node

    pod

    • New Schema: Pod

    processes[*]

    • New Schema: Process

    root_process

    • New Schema: Process

    Cluster

    Link to Fields

    containers[*]

    • New Schema: Container

    cronjobs[*]

    • New Schema: Cronjob

    daemonsets[*]

    • New Schema: Daemonset

    deployments[*]

    • New Schema: Deployment

    jobs[*]

    • New Schema: Job

    nodes[*]

    • New Schema: Node

    pods[*]

    • New Schema: Pod

    replicasets[*]

    • New Schema: Replicaset

    services[*]

    • New Schema: Service

    statefulsets[*]

    • New Schema: Statefulset

    Node

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Deployment

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Replicaset

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Daemonset

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Job

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Cronjob

    Link to Fields

    cluster

    • New Schema: Cluster

    Statefulset

    Link to Fields

    cluster

    • New Schema: Cluster

    pods[*]

    • New Schema: Pod

    Service

    Link to Fields

    cluster

    • New Schema: Cluster

    Pod

    Link to Fields

    cluster

    • New Schema: Cluster

    connections[*]

    • New Schema: Connection

    containers[*]

    • New Schema: Container

    daemonset

    • New Schema: Daemonset

    deployment

    • New Schema: Deployment

    job

    • New Schema: Job

    machine

    • New Schema: Machine

    node

    • New Schema: Node

    replicaset

    • New Schema: Replicaset

    statefulset

    • New Schema: Statefulset

    Connection

    Link to Fields

    container

    • New Schema: Container

    machine

    • New Schema: Machine

    peer_connection

    • New Schema: Connection

    peer_machine

    • New Schema: Machine

    peer_process

    • New Schema: Process

    pod

    • New Schema: Pod

    processes[*]

    • New Schema: Process

    Machine

    Link to Fields

    connections[*]

    • New Schema: Connection

    connections_as_peer[*]

    • New Schema: Connection

    processes[*]

    • New Schema: Process

    Process

    Link to Fields

    children[*]

    • New Schema: Process

    connections[*]

    • New Schema: Connection

    connections_as_peer[*]

    • New Schema: Connection

    container

    • New Schema: Container

    container_as_root

    • New Schema: Container

    machine

    • New Schema: Machine

    parent

    • New Schema: Process

    All Fields

    Redflag

    Ancestors

    • Type: List of Strings

    • Description: If the reference object has ancestor processes, this is a list of their names.

    • Field Name: ancestors

    Arguments

    • Type: List of Strings

    • Description: If referencing a process, the arguments of the process that generated the red flag.

    • Field Name: args

    Authenticated User Name

    • Type: String

    • Description: If referencing a process, the authenticated user name of the process that generated the red flag.

    • Field Name: auser

    Cluster Name

    • Type: String

    • Description: If red flag is associated with a cluster, or a node of a cluster, this is the name of the cluster.

    • Field Name: cluster_name

    Description

    • Type: String

    • Description: The reason the red flag was generated.

    • Field Name: description

    Effective User Name

    • Type: String

    • Description: If referencing a process, the effective user name of the process that generated the red flag.

    • Field Name: euser

    ID

    • Type: String

    • Description: The unique ID of the red flag.

    • Field Name: id

    Is Exception

    • Type: Boolean

    • Description: Is the red flag marked as an exception? If so, the red flag was generated by expected activity.

    • Field Name: false_positive

    Machine ID

    • Type: String

    • Description: The unique machine ID associated with the red flag. Generally begins with "mach:".

    • Field Name: muid

    Policy Name

    • Type: String

    • Description: If the red flag is associated with a Guardian policy, this is the name of the policy.

    • Field Name: policy_name

    Policy UID

    • Type: String

    • Description: If the red flag is associated with a Guardian policy, this is the unique ID of the policy.

    • Field Name: policy_uid

    Reference Object

    • Type: String

    • Description: The unique ID of the object that the red flag is associated with.

    • Field Name: ref

    Schema

    • Type: String

    • Description: The full schema string of the red flag.

    • Field Name: schema

    Severity

    • Type: String

    • Description: The security level of the red flag. One of: info, low, medium, high, critical.

    • Field Name: severity
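Since severity is an ordered scale, a common pattern when post-processing flags is to map each level to a numeric rank, for example to sort the most severe flags first. The helper below is a hypothetical illustration, not part of the Spyderbat API:

```python
# Severity levels in increasing order, mapped to numeric ranks.
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

flags = [
    {"id": "flag:1", "severity": "low"},
    {"id": "flag:2", "severity": "critical"},
    {"id": "flag:3", "severity": "medium"},
]

# sort most severe first
flags.sort(key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True)
print([f["severity"] for f in flags])  # ['critical', 'medium', 'low']
```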

    Spydertraces

    • Type: List of Strings

    • Description: The unique IDs of the spydertraces that this red flag is a part of.

    • Field Name: traces

    Uptime

    • Type: Number

    • Description: The uptime of the object referenced by the redflag.

    • Field Name: uptime

    Opsflag

    Agent Type

    • Type: String

    • Description: The type of agent that generated an opsflag. Used with agent-related opsflags.

    • Field Name: agent_type

    Ancestors

    • Type: List of Strings

    • Description: If the reference object has ancestor processes, this is a list of their names.

    • Field Name: ancestors

    Arguments

    • Type: List of Strings

    • Description: If referencing a process, the arguments of the process that generated the ops flag.

    • Field Name: args

    Authenticated User Name

    • Type: String

    • Description: If referencing a process, the authenticated user name of the process that generated the ops flag.

    • Field Name: auser

    Cluster Name

    • Type: String

    • Description: The name of the cluster associated with an opsflag.

    • Field Name: cluster_name

    Description

    • Type: String

    • Description: The reason the ops flag was generated.

    • Field Name: description

    Effective User Name

    • Type: String

    • Description: If referencing a process, the effective user name of the process that generated the ops flag.

    • Field Name: euser

    False positive

    • Type: Boolean

    • Description: Is the opsflag a false positive?

    • Field Name: false_positive

    Hostname

    • Type: String

    • Description: The hostname of the machine associated with an opsflag.

    • Field Name: hostname

    Is Ephemeral

    • Type: Boolean

    • Description: Is the reference object ephemeral? Used with agent-related opsflags.

    • Field Name: ephemeral

    Machine ID

    • Type: String

    • Description: The unique machine ID associated with the ops flag. Generally begins with 'mach:'.

    • Field Name: muid

    Reference Object

    • Type: String

    • Description: The unique ID of the object that the ops flag is associated with.

    • Field Name: ref

    Schema

    • Type: String

    • Description: The full schema string of the ops flag.

    • Field Name: schema

    Severity

    • Type: String

    • Description: The alert level of the ops flag. One of: info, low, medium, high, critical.

    • Field Name: severity

    UID

    • Type: String

    • Description: The unique ID of the ops flag.

    • Field Name: id

    Uptime

    • Type: Number

    • Description: The uptime of the object referenced by the ops flag.

    • Field Name: uptime

    Spydertrace

    Interactive Users

    • Type: List of Strings

    • Description: The list of interactive users associated with the spydertrace.

    • Field Name: interactive_users

    Is Interactive

    • Type: Boolean

    • Description: Is the spydertrace interactive? Interactive spydertraces are associated with interactive user processes.

    • Field Name: interactive

    Is Overtaken

    • Type: Boolean

    • Description: Has the spydertrace been overtaken by another spydertrace? When filtering, it is usually best to require this to be false, because the overtaking trace contains all of the overtaken trace's activity.

    • Field Name: overtaken

    Is Suppressed

    • Type: Boolean

    • Description: Is the spydertrace suppressed? Suppressed spydertraces are associated with expected activity.

    • Field Name: suppressed

    Machine UID

    • Type: String

    • Description: The unique machine ID associated with the spydertrace. Generally begins with "mach:".

    • Field Name: muid

    Name

    • Type: String

    • Description: The name of the spydertrace.

    • Field Name: name

    Non-Interactive Users

    • Type: List of Strings

    • Description: The list of non-interactive users associated with the spydertrace.

    • Field Name: non_interactive_users

    Root Process Name

    • Type: String

    • Description: Name of the root process of the spydertrace.

    • Field Name: root_proc_name

    Schema

    • Type: String

    • Description: The full schema string of the spydertrace.

    • Field Name: schema

    Score

    • Type: Integer

    • Description: A score ranking the severity of the spydertrace.

    • Field Name: score

    Status

    • Type: String

    • Description: Status of the spydertrace: closed or active.

    • Field Name: status

    Trigger

    • Type: String

    • Description: The unique ID for the object that triggered the spydertrace's creation.

    • Field Name: trigger

    Trigger Short Name

    • Type: String

    • Description: Short name for the object that triggered the spydertrace.

    • Field Name: trigger_short_name

    UID

    • Type: String

    • Description: The unique ID of the spydertrace.

    • Field Name: id

    Container

    Cluster Name

    • Type: String

    • Description: The name of the Kubernetes cluster the container is a part of.

    • Field Name: clustername

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat ID for the Kubernetes cluster the container is a part of.

    • Field Name: cluster_uid

    Container ID

    • Type: String

    • Description: The long identifier of the container as reported by the container runtime

    • Field Name: container_id

    Container Name

    • Type: String

    • Description: The name of the container as reported by the container runtime

    • Field Name: container_name

    Image

    • Type: String

    • Description: The fully qualified name of the image used to create the container

    • Field Name: image

    Image ID

    • Type: String

    • Description: The identifier of the image used to create the container

    • Field Name: image_id

    Machine UID

    • Type: String

    • Description: The unique Spyderbat machine ID the container is running on.

    • Field Name: muid

    Pod Labels

    • Type: Dictionary of Strings to Strings

    • Description: The Kubernetes labels for the pod the container is a part of.

    • Field Name: pod_labels

    Pod Name

    • Type: String

    • Description: The name of the Kubernetes pod the container is a part of.

    • Field Name: pod_name

    Pod Namespace

    • Type: String

    • Description: The namespace of the Kubernetes pod the container is a part of.

    • Field Name: pod_namespace

    Pod Namespace Labels

    • Type: Dictionary of Strings to Strings

    • Description: The labels for the namespace of the Kubernetes pod the container is a part of.

    • Field Name: pod_namespace_labels

    Pod UID

    • Type: String

    • Description: The unique Spyderbat ID for the kubernetes pod the container is a part of

    • Field Name: pod_uid

    Root process UID

    • Type: String

    • Description: The spyderbat ID of the root process running in the container

    • Field Name: root_puid

    Schema

    • Type: String

    • Description: The Spyderbat schema for the container model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the container

    • Field Name: id

    Node UID

    • Type: String

    • Description: The unique Spyderbat ID for the node the container is running on

    • Field Name: node_uid
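Taken together, a container model surfaces as a flat record keyed by the field names above. The sketch below shows a hypothetical record in Python — every value is an invented placeholder; only the keys come from the field list:

```python
# Hypothetical container record: keys follow the field list above,
# values are illustrative placeholders (not real Spyderbat data).
container = {
    "schema": "example-container-schema",
    "id": "cont:example-uid",
    "container_id": "0123456789abcdef",
    "container_name": "nginx",
    "image": "docker.io/library/nginx:1.25",
    "image_id": "sha256:example-digest",
    "muid": "mach:example-uid",
    "cluster_name": "prod-cluster",
    "cluster_uid": "clus:example-uid",
    "pod_name": "nginx-7d9c5f",
    "pod_namespace": "default",
    "pod_uid": "pod:example-uid",
    "pod_labels": {"app": "nginx"},
    "pod_namespace_labels": {"env": "prod"},
    "root_puid": "proc:example-uid",
    "node_uid": "node:example-uid",
}

# Group container names by (namespace, pod) for a quick per-pod view.
def group_by_pod(containers):
    index = {}
    for c in containers:
        key = (c["pod_namespace"], c["pod_name"])
        index.setdefault(key, []).append(c["container_name"])
    return index

print(group_by_pod([container]))  # {('default', 'nginx-7d9c5f'): ['nginx']}
```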

    Cluster

    Name

    • Type: String

    • Description: The name assigned to the cluster at spyderbat provisioning time

    • Field Name: name

    Schema

    • Type: String

    • Description: The Spyderbat schema for the cluster model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the cluster

    • Field Name: id

    Node

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the node belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat ID for the kubernetes cluster the node belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the node as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the node as reported in the metadata

    • Field Name: metadata.labels

    Machine UID

    • Type: String

    • Description: The unique Spyderbat machine ID for the node

    • Field Name: muid

    Name

    • Type: String

    • Description: The kubernetes name for the node as reported in the metadata

    • Field Name: metadata.name

    Schema

    • Type: String

    • Description: The Spyderbat schema for the node model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for this model

    • Field Name: id
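Note that several node fields use dotted names such as metadata.uid — these refer to keys nested under a metadata object. The sketch below illustrates that convention with a hypothetical node record (all values are placeholders) and a small helper that resolves a dotted field name:

```python
# Hypothetical node record; dotted field names like "metadata.uid"
# address nested keys. All values are invented placeholders.
node = {
    "schema": "example-node-schema",
    "id": "node:example-uid",
    "muid": "mach:example-uid",
    "cluster_name": "prod-cluster",
    "cluster_uid": "clus:example-uid",
    "metadata": {
        "uid": "k8s-uid-0000",
        "name": "ip-10-0-1-23",
        "labels": {"kubernetes.io/os": "linux"},
    },
}

def get_field(record, dotted):
    """Resolve a dotted field name like 'metadata.labels' against a record."""
    value = record
    for part in dotted.split("."):
        value = value[part]
    return value

print(get_field(node, "metadata.name"))  # ip-10-0-1-23
```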

    Deployment

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the deployment belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the deployment belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the deployment as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the deployment as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the deployment as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the deployment as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the deployment model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the deployment

    • Field Name: id
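The deployment model shares its shape with the replicaset, daemonset, job, cronjob, statefulset, and service models that follow: cluster identifiers plus kubernetes metadata. A minimal sketch with placeholder values, plus a simple completeness check over the shared required keys (the key list is an assumption for illustration, not a Spyderbat validation rule):

```python
# Hypothetical deployment record; values are placeholders.
deployment = {
    "schema": "example-deployment-schema",
    "id": "deploy:example-uid",
    "cluster_name": "prod-cluster",
    "cluster_uid": "clus:example-uid",
    "metadata": {
        "uid": "k8s-uid-0001",
        "name": "web-frontend",
        "namespace": "default",
        "labels": {"app": "web"},
    },
}

# Keys shared by the workload models above/below (illustrative, not official).
REQUIRED = ["schema", "id", "cluster_name", "cluster_uid", "metadata"]

def missing_fields(record):
    return [f for f in REQUIRED if f not in record]

print(missing_fields(deployment))  # []
```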

    Replicaset

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the replicaset belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the replicaset belongs to

    • Field Name: cluster_uid

    Deployment name

    • Type: String

    • Description: The name for the deployment the replicaset is owned by (if replicaset is owned by a deployment)

    • Field Name: deployment_name

    Deployment uid

    • Type: String

    • Description: The Spyderbat unique id for the deployment the replicaset is owned by (if replicaset is owned by a deployment)

    • Field Name: deployment_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the replicaset as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the replicaset as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the replicaset as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the replicaset as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the replicaset model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the replicaset

    • Field Name: id

    Daemonset

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the daemonset belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the daemonset belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the daemonset as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the daemonset as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the daemonset as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the daemonset as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the daemonset model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the daemonset

    • Field Name: id

    Job

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the job belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the job belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the job as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the job as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the job as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the job as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the job model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the job

    • Field Name: id

    Cronjob

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the cronjob belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the cronjob belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the cronjob as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the cronjob as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the cronjob as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the cronjob as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the cronjob model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the cronjob

    • Field Name: id

    Statefulset

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the statefulset belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the statefulset belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the statefulset as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the statefulset as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the statefulset as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the statefulset as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the statefulset model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the statefulset

    • Field Name: id

    Service

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the service belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the service belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the service as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the service as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the service as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the service as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the service model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the service

    • Field Name: id

    Pod

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the pod belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the pod belongs to

    • Field Name: cluster_uid

    Deployment UID

    • Type: String

    • Description: The spyderbat unique id for the deployment the pod is associated with

    • Field Name: deployment_uid

    Deployment name

    • Type: String

    • Description: The name of the deployment the pod is associated with

    • Field Name: deployment_name

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the pod as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the pod as reported in the metadata

    • Field Name: metadata.labels

    Machine UID

    • Type: String

    • Description: The unique machine ID associated with this pod

    • Field Name: muid

    Name

    • Type: String

    • Description: The kubernetes name for the pod as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the pod as reported in the metadata

    • Field Name: metadata.namespace

    Node UID

    • Type: String

    • Description: The spyderbat unique id for the node the pod is running on

    • Field Name: node_uid

    Owner Kind

    • Type: String

    • Description: The kind of the resource that owns the pod

    • Field Name: owner_kind

    Owner Name

    • Type: String

    • Description: The name of the resource that owns the pod

    • Field Name: owner_name

    Owner UID

    • Type: String

    • Description: The unique Spyderbat ID of the resource that owns the pod

    • Field Name: owner_uid

    Schema

    • Type: String

    • Description: The Spyderbat schema for the pod model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the pod

    • Field Name: id
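The pod model links both upward (owner_kind / owner_name / owner_uid, and the deployment_* fields when a deployment is involved) and downward (node_uid, muid). A hypothetical record and a small helper that picks a display-friendly workload for a pod — all values are placeholders, and the preference order is an illustrative assumption:

```python
# Hypothetical pod record showing the owner_* and deployment_* fields.
pod = {
    "id": "pod:example-uid",
    "metadata": {"uid": "k8s-uid-0002", "name": "web-7d9c5f-abcde",
                 "namespace": "default", "labels": {"app": "web"}},
    "owner_kind": "ReplicaSet",
    "owner_name": "web-7d9c5f",
    "owner_uid": "rs:example-uid",
    "deployment_name": "web",
    "deployment_uid": "deploy:example-uid",
    "node_uid": "node:example-uid",
    "muid": "mach:example-uid",
    "cluster_name": "prod-cluster",
    "cluster_uid": "clus:example-uid",
}

def workload_of(pod):
    """Prefer the deployment if present, else fall back to the direct owner."""
    if pod.get("deployment_name"):
        return ("Deployment", pod["deployment_name"])
    return (pod.get("owner_kind"), pod.get("owner_name"))

print(workload_of(pod))  # ('Deployment', 'web')
```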

    Role

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the role belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the role belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the role as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the role as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the role as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the role as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the role model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the role

    • Field Name: id

    Cluster Role

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the role belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the role belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the role as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the role as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the role as reported in the metadata

    • Field Name: metadata.name

    Schema

    • Type: String

    • Description: The Spyderbat schema for the cluster role model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the cluster role

    • Field Name: id

    Service Account

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the service account belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the service account belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the service account as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the service account as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the service account as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the service account as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the service account model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the service account

    • Field Name: id

    Role Binding

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the rolebinding belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the rolebinding belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the rolebinding as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the rolebinding as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the rolebinding as reported in the metadata

    • Field Name: metadata.name

    Namespace

    • Type: String

    • Description: The kubernetes namespace for the rolebinding as reported in the metadata

    • Field Name: metadata.namespace

    Schema

    • Type: String

    • Description: The Spyderbat schema for the rolebinding model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the rolebinding

    • Field Name: id

    Cluster Role Binding

    Cluster Name

    • Type: String

    • Description: The name of the kubernetes cluster the clusterrolebinding belongs to

    • Field Name: cluster_name

    Cluster UID

    • Type: String

    • Description: The unique Spyderbat id for the kubernetes cluster the clusterrolebinding belongs to

    • Field Name: cluster_uid

    Kubernetes uid

    • Type: String

    • Description: The kubernetes unique id for the clusterrolebinding as reported in the metadata

    • Field Name: metadata.uid

    Labels

    • Type: Dictionary of Strings to Strings

    • Description: The kubernetes labels for the clusterrolebinding as reported in the metadata

    • Field Name: metadata.labels

    Name

    • Type: String

    • Description: The kubernetes name for the clusterrolebinding as reported in the metadata

    • Field Name: metadata.name

    Schema

    • Type: String

    • Description: The Spyderbat schema for the clusterrolebinding model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the clusterrolebinding

    • Field Name: id

    Listening Socket

    Duration

    • Type: Number

    • Description: The duration of the model in seconds

    • Field Name: duration

    Local IP

    • Type: IP Address

    • Description: The local IP address the socket is listening on

    • Field Name: local_ip

    Local port

    • Type: Integer

    • Description: The local port the socket is listening on

    • Field Name: local_port

    Machine UID

    • Type: String

    • Description: The unique machine ID associated with this model or event

    • Field Name: muid

    Process UIDs

    • Type: List of Strings

    • Description: The unique Spyderbat IDs for the associated processes to this socket

    • Field Name: puids

    Schema

    • Type: String

    • Description: The full schema string of the listening socket

    • Field Name: schema

    Status

    • Type: String

    • Description: Status of this model: closed or active

    • Field Name: status

    UID

    • Type: String

    • Description: The unique Spyderbat ID for the listening socket.

    • Field Name: id
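Per the field list, a listening socket carries a status of "active" or "closed", so filtering a set of sockets down to currently open ports is a one-liner. A sketch with hypothetical records (all values are placeholders):

```python
# Hypothetical listening-socket records; status is "active" or "closed"
# per the field list above. Values are invented placeholders.
sockets = [
    {"id": "lsock:a", "local_ip": "0.0.0.0", "local_port": 443,
     "muid": "mach:example", "puids": ["proc:1"], "status": "active",
     "duration": 120.0},
    {"id": "lsock:b", "local_ip": "127.0.0.1", "local_port": 8080,
     "muid": "mach:example", "puids": ["proc:2"], "status": "closed",
     "duration": 5.0},
]

# Ports with at least one active listener.
open_ports = sorted(s["local_port"] for s in sockets if s["status"] == "active")
print(open_ports)  # [443]
```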

    Connection

    Bytes Received

    • Type: Integer

    • Description: The number of bytes received on the local side of the connection.

    • Field Name: bytes_rx

    Bytes Sent

    • Type: Integer

    • Description: The number of bytes sent to the remote side of the connection.

    • Field Name: bytes_tx

    Cgroup

    • Type: String

    • Description: The latest cgroup associated with the connection.

    • Field Name: cgroup

    Container UID

    • Type: String

    • Description: The unique ID of the container associated with the connection.

    • Field Name: container_uid

    Destination

    • Type: List of Strings

    • Description: The destinations of the connection (maximum 100 entries), each formatted as "ipv4|ipv6:remote_ip:remote_port".

    • Field Name: dsts

    Direction

    • Type: String

    • Description: The direction of the connection: "inbound", "outbound", or "unknown".

    • Field Name: direction

    Duration

    • Type: Number

    • Description: The duration of the connection model in seconds at time of last update.

    • Field Name: duration

    Family

    • Type: String

    • Description: The address family of the connection: IPV4 or IPV6.

    • Field Name: family

    Local IP

    • Type: IP Address

    • Description: The local IP address, or originating address of the connection

    • Field Name: local_ip

    Local port

    • Type: Integer

    • Description: The local port of the connection

    • Field Name: local_port

    Machine UID

    • Type: String

    • Description: The unique ID of the machine associated with the connection.

    • Field Name: muid

    Payload

    • Type: String

    • Description: A string representation of the payload of the connection, for example the domain name in a DNS request or response.

    • Field Name: payload

    Peer connection UID

    • Type: String

    • Description: The unique ID of the peer remote connection if seen by Spyderbat.

    • Field Name: peer_cuid

    Peer machine UID

    • Type: String

    • Description: The unique ID of the peer connection's machine if seen by Spyderbat.

    • Field Name: peer_muid

    Peer process UID

    • Type: String

    • Description: The unique ID of the peer connection's process if seen by Spyderbat.

    • Field Name: peer_puid

    Process UID

    • Type: String

    • Description: The unique ID of the latest process associated with the connection.

    • Field Name: puid

    Process UIDs

    • Type: List of Strings

    • Description: The unique IDs of the process(es) associated with the connection.

    • Field Name: puids

    Process name

    • Type: String

    • Description: The name of the process associated with the connection.

    • Field Name: proc_name

    Remote IP

    • Type: IP Address

    • Description: The IP address on the remote side of the connection.

    • Field Name: remote_ip

    Remote hostname

    • Type: String

    • Description: The hostname on the remote side of the connection.

    • Field Name: remote_hostname

    Remote port

    • Type: Integer

    • Description: The port number on the remote side of the connection.

    • Field Name: remote_port

    Schema

    • Type: String

    • Description: The full schema of the connection.

    • Field Name: schema

    Sources

    • Type: List of Strings

    • Description: The objects that are the source of the connection (maximum 100 entries).

    • Field Name: srcs

    Spydertraces

    • Type: List of Strings

    • Description: The unique IDs of the spydertraces this connection is a part of.

    • Field Name: traces

    Status

    • Type: String

    • Description: Status of the connection: closed or active.

    • Field Name: status

    UID

    • Type: String

    • Description: The unique ID for this connection.

    • Field Name: id
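Each entry in the dsts field packs the family, remote address, and remote port into one string of the form "ipv4|ipv6:remote_ip:remote_port". Since IPv6 addresses themselves contain colons, splitting on the first and last colon recovers the three parts. A small parser sketch (the sample entries are hypothetical):

```python
# Parse a dsts entry of the form "ipv4|ipv6:remote_ip:remote_port".
# Splitting on the first colon isolates the family, and on the last
# colon isolates the port, so IPv6 addresses survive intact.
def parse_dst(entry):
    family, _, rest = entry.partition(":")
    ip, _, port = rest.rpartition(":")
    return family, ip, int(port)

print(parse_dst("ipv4:93.184.216.34:443"))  # ('ipv4', '93.184.216.34', 443)
print(parse_dst("ipv6:2001:db8::1:443"))    # ('ipv6', '2001:db8::1', 443)
```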

    Machine

    Boot Time

    • Type: Number

    • Description: The time at which the machine was booted.

    • Field Name: boot_time

    CPU Architecture

    • Type: String

    • Description: The architecture of the CPU that is installed in the machine.

    • Field Name: machine_processor

    CPU Model

    • Type: String

    • Description: The model of the CPU that is installed in the machine.

    • Field Name: cpu_model

    Cloud Image ID

    • Type: String

    • Description: If from a cloud provider, the image ID.

    • Field Name: cloud_image_id

    Cloud Instance ID

    • Type: String

    • Description: If from a cloud provider, the instance ID of the virtual machine.

    • Field Name: cloud_instance_id

    Cloud Region ID

    • Type: String

    • Description: If from a cloud provider, the region ID.

    • Field Name: cloud_region

    Cloud Tags

    • Type: Dictionary of Strings to Strings

    • Description: If from a cloud provider, the tags associated with the machine.

    • Field Name: cloud_tags

    Cloud Type

    • Type: String

    • Description: If from a cloud provider, the type of cloud provider.

    • Field Name: cloud_type

    Cluster Name

    • Type: String

    • Description: The name of the cluster the machine is associated with.

    • Field Name: cluster_name

    Duration

    • Type: Number

    • Description: The amount of time the machine has been running in seconds.

    • Field Name: duration

    Hostname

    • Type: String

    • Description: The hostname of the machine.

    • Field Name: hostname

    Kernel Modules

    • Type: List of Strings

    • Description: The list of kernel modules that are installed on the machine.

    • Field Name: kernel_mods

    OS Release

    • Type: String

    • Description: The release of the operating system installed on the machine.

    • Field Name: os_release

    OS System

    • Type: String

    • Description: The system of the operating system installed on the machine. Generally "linux".

    • Field Name: os_system

    OS Version

    • Type: String

    • Description: The version of the operating system installed on the machine.

    • Field Name: os_version

    OS name

    • Type: String

    • Description: The name of the operating system installed on the machine.

    • Field Name: os_name

    Private IP Address

    • Type: List of Strings

    • Description: The private IP addresses associated with the machine.

    • Field Name: private_ip

    Public IP Address

    • Type: List of Strings

    • Description: The public IP addresses associated with the machine.

    • Field Name: public_ip

    Schema

    • Type: String

    • Description: The full schema of the machine.

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique ID for this machine.

    • Field Name: id
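The cloud_* fields are populated only when the machine comes from a cloud provider, so their presence is a quick way to distinguish cloud instances from other hosts. A hypothetical machine record (all values are placeholders) and a trivial check:

```python
import time

# Hypothetical machine record; cloud_* fields are present only when the
# machine runs at a cloud provider. All values are invented placeholders.
machine = {
    "id": "mach:example-uid",
    "hostname": "web-01",
    "boot_time": time.time() - 3600,   # booted an hour ago
    "duration": 3600,
    "cloud_type": "aws",               # placeholder provider name
    "cloud_region": "us-east-1",
    "cloud_instance_id": "i-0123456789abcdef0",
    "private_ip": ["10.0.1.23"],
    "public_ip": ["203.0.113.10"],
    "os_system": "linux",
}

def is_cloud(machine):
    """A machine with a cloud_type set came from a cloud provider."""
    return bool(machine.get("cloud_type"))

print(is_cloud(machine))  # True
```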

    Fingerprint

    status

    • Type: String

    • Field Name: status

    cgroup

    • Type: String

    • Field Name: cgroup

    service_name

    • Type: String

    • Field Name: service_name

    image

    • Type: String

    • Field Name: image

    image_id

    • Type: String

    • Field Name: image_id

    container_name

    • Type: String

    • Field Name: container_name

    container_id

    • Type: String

    • Field Name: container_id

    Machine UID

    • Type: String

    • Description: The unique Spyderbat machine ID associated with this fingerprint

    • Field Name: muid

    Root Process UID

    • Type: String

    • Description: The Spyderbat ID of the root process associated with this fingerprint

    • Field Name: root_puid

    Schema

    • Type: String

    • Description: The Spyderbat schema for the fingerprint model

    • Field Name: schema

    UID

    • Type: String

    • Description: The unique Spyderbat ID for this model

    • Field Name: id

    Process

    src_uid

    • Type: String

    • Field Name: src_uid

    Ancestors

    • Type: List of Strings

    • Description: A list of the names of the ancestor processes

    • Field Name: ancestors

    Arguments

    • Type: List of Strings

    • Description: The arguments specified when the process is started

    • Field Name: args

    Authenticated user

    • Type: String

    • Description: The authenticated user name

    • Field Name: auser

    CGroup

    • Type: String

    • Description: The Cgroup, if any, associated with the process

    • Field Name: cgroup

    Container

    • Type: String

    • Description: The container ID

    • Field Name: container

    Container UID

    • Type: String

    • Description: The spyderbat ID for the container model, if any

    • Field Name: container_uid

    Duration

    • Type: Number

    • Description: The duration of the model in seconds

    • Field Name: duration

    Effective user

    • Type: String

    • Description: The effective user who created the process

    • Field Name: euser

    Environment Variables

    • Type: Dictionary of Strings to Strings

    • Description: A map with the name and value of all environment variables set at the time of process creation

    • Field Name: environ

    Executable

    • Type: String

    • Description: The pathname of the executable associated with the process

    • Field Name: exe

    Interactive

    • Type: Boolean

    • Description: Indicates whether the process is associated with a terminal, which suggests it was likely created by a human user

    • Field Name: interactive

    Machine UID

    • Type: String

    • Description: The unique ID of the associated machine

    • Field Name: muid

    Name

    • Type: String

    • Description: The name of the process

    • Field Name: name

    Organization UID

    • Type: String

    • Description: The unique ID of the Spyderbat organization that owns this data

    • Field Name: org_uid

    PID

    • Type: Integer

    • Description: The Unix process ID for this process

    • Field Name: pid

    Parent PID

    • Type: Integer

    • Description: Unix process ID for the parent of this process

    • Field Name: ppid

    Parent process UID

    • Type: String

    • Description: The unique Spyderbat ID of the parent process object

    • Field Name: ppuid

    Schema

    • Type: String

    • Description: The Spyderbat schema for the process model, a string of the form model_process:...

    • Field Name: schema

    Session UID

    • Type: String

    • Description: The Spyderbat UID for the associated session

    • Field Name: suid

    Status

    • Type: String

    • Description: Status of this model: closed or active

    • Field Name: status

    Thread

    • Type: Boolean

    • Description: Indicates that this process is a thread

    • Field Name: thread

    Traces

    • Type: List of Strings

    • Description: An array of Spyderbat UIDs for the traces associated with this process

    • Field Name: traces

    UID

    • Type: String

    • Description: The unique Spyderbat ID for this model

    • Field Name: id
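A process record combines identity (pid, ppid, ppuid), attribution (euser, auser), and lineage (ancestors). The sketch below shows a hypothetical record — keys come from the field list above, values are invented — plus a helper that rebuilds a display command line from args:

```python
# Hypothetical process record; keys follow the field list above,
# values are illustrative placeholders.
process = {
    "id": "proc:example-uid",
    "name": "bash",
    "exe": "/usr/bin/bash",
    "args": ["bash", "-c", "curl https://example.com"],
    "pid": 4242,
    "ppid": 4100,
    "ppuid": "proc:parent-uid",
    "euser": "root",
    "auser": "alice",
    "interactive": True,
    "ancestors": ["systemd", "sshd", "bash"],
    "muid": "mach:example-uid",
    "status": "active",
}

def command_line(proc):
    """Join args into a shell-style command line for display."""
    return " ".join(proc["args"])

print(command_line(process))  # bash -c curl https://example.com
```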
