How to Put Guardrails Around Your K8s Clusters Using Spyctl

This tutorial will walk you through the creation, tuning, and management of Cluster Ruleset Policies.

Prerequisites

What is a Cluster Ruleset Policy?

A Cluster Ruleset Policy is a special type of Ruleset Policy that establishes allowed or disallowed resources and activity within a Kubernetes cluster. Through Cluster Ruleset Policies, users can receive customized notifications when deviant activity occurs within their clusters. For example, users can specify which container images are allowed to run within a namespace. Should a new image appear, a deviation is created with a link to investigate the problem. Users can then take manual or automated actions to address the deviation.

Creating a Cluster Policy

Cluster Policies and their accompanying Cluster Rulesets are generated using the spyctl create command. First, identify which cluster you wish to create a cluster policy for.

spyctl get clusters

For example:

$ spyctl get clusters
Getting clusters
NAME            UID               CLUSTER_ID                            FIRST_SEEN            LAST_DATA
demo-cluster    clus:VyTE0-BPVmo  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  2024-03-14T17:14:19Z  2024-05-06T18:07:24Z

If the previous command does not return any results, follow the Helm installation guide to install the Spyderbat Nano Agent in your K8s cluster.

Next, consider how you would like the auto-generated rules to be scoped. Certain rule types may be scoped specifically to namespaces.

--namespace

Generate rules for all namespaces, with each rule scoped to its namespace

--namespace NAMESPACE_NAME

Generate rules for a specific namespace, with the rules scoped to that namespace

(flag omitted)

Generate rules for all namespaces, scoped globally
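As a sketch of the difference (modeled on the auto-generated ruleset shown later in this tutorial; the exact shape of a globally scoped rule is an assumption here, so consult the Ruleset Reference Guide):

```yaml
rules:
# Namespace-scoped rule: applies only where the selector matches
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: rsvp-svc-dev
  verb: allow
  target: container::image
  values:
  - docker.io/library/mongo:latest
# Globally scoped rule (assumed: no namespaceSelector, applies cluster-wide)
- verb: allow
  target: container::image
  values:
  - docker.io/library/mongo:latest
```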

Use the following command to generate a cluster policy and its ruleset(s).

spyctl create cluster-policy -C CLUSTER [--namespace [NAMESPACE_NAME]] -n POLICY_NAME > cluster-policy.yaml

For example:

$ spyctl create cluster-policy -C demo-cluster --namespace -n demo-cluster-policy > cluster-policy.yaml
Validating cluster(s) exist within the system.
Creating ruleset for cluster demo-cluster
Generating container rules...
Cluster(s) validated... creating policy.

By default, rules are generated using data from the last 1.5 hours. You can use the -t option to override that window.

The file you just generated, cluster-policy.yaml, now contains the Cluster Policy itself and any automatically generated rulesets the policy requires.

apiVersion: spyderbat/v1
items:
- apiVersion: spyderbat/v1
  kind: SpyderbatRuleset
  metadata:
    name: demo-cluster_ruleset
    type: cluster
  spec:
    rules:
    - namespaceSelector:
        matchExpressions:
        - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
      verb: allow
      target: container::image
      values:
      - docker.io/guyduchatelet/spyderbat-demo:1
      - docker.io/library/mongo:latest
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      verb: allow
      target: container::image
      values:
      - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.10.1-eksbuild.1
      - 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.10.1-eksbuild.1
      - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.7-eksbuild.1
      - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.22.6-eksbuild.1
      - public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-58-g4ddce6a-2024.01.31.21.42
      - registry.k8s.io/csi-secrets-store/driver:v1.4.2
      - registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
      - registry.k8s.io/sig-storage/livenessprobe:v2.12.0
- apiVersion: spyderbat/v1
  kind: SpyderbatPolicy
  metadata:
    name: demo-cluster-policy
    type: cluster
  spec:
    enabled: true
    mode: audit
    clusterSelector:
      matchFields:
        name: demo-cluster
    rulesets:
    - demo-cluster_ruleset
    response:
      default:
      - makeRedFlag:
          severity: high
      actions: []
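The rules are plain YAML, so editing them is straightforward. For example, to also allow the :2 tag of the demo image in the dev and prod namespaces (a hypothetical edit, shown here only to illustrate the rule format), you could extend the first rule's values list:

```yaml
- namespaceSelector:
    matchExpressions:
    - {key: kubernetes.io/metadata.name, operator: In, values: [rsvp-svc-dev, rsvp-svc-prod]}
  verb: allow
  target: container::image
  values:
  - docker.io/guyduchatelet/spyderbat-demo:1
  - docker.io/guyduchatelet/spyderbat-demo:2  # hypothetical: additional tag to allow
  - docker.io/library/mongo:latest
```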

You can edit or add rules at this point, or apply the policy as is. To apply the policy, run the following command:

spyctl apply -f FILENAME

For example:

$ spyctl apply -f cluster-policy.yaml
Successfully applied new cluster ruleset with uid: rs:xxxxxxxxxxxxxxxxxxxx
Successfully applied new cluster guardian policy with uid: pol:xxxxxxxxxxxxxxxxxxxx

To confirm that your policy applied successfully, run the following command:

spyctl get policies --type cluster

To view your cluster rulesets, run:

spyctl get rulesets --type cluster

For example:

$ spyctl get policies --type cluster
UID                       NAME                 STATUS    TYPE       VERSION  CREATE_TIME
pol:xxxxxxxxxxxxxxxxxxxx  demo-cluster-policy  Auditing  cluster          1  2024-05-06T19:22:43Z
$
$ spyctl get rulesets --type cluster
UID                      NAME                   TYPE       VERSION  CREATE_TIME           LAST_UPDATED
rs:xxxxxxxxxxxxxxxxxxxx  demo-cluster_ruleset   cluster          1  2024-05-06T19:22:42Z  2024-05-06T19:22:42Z

[Optional] Adding "Interceptor" Response Actions

By default, Cluster Policies have a single response action, makeRedFlag, which generates a redflag that references the deviant object. For example, if a container violates one of the ruleset rules, a redflag is generated for that container object. Redflags populate security dashboards within the Spyderbat Console, but may also be forwarded to a SIEM and/or used to trigger notifications.

Containers that violate a cluster policy rule can also trigger the agentKillPod response action. You can add a default action that kills the pod of any violating container by editing the policy YAML:

spyctl edit policy demo-cluster-policy

Then, under the response section of the spec you can add a new default action:

response:
  default:
  - makeRedFlag:
      severity: high
  - agentKillPod:
  actions: []

Alternatively, you can scope the kill pod action to a sensitive namespace:

response:
  default:
  - makeRedFlag:
      severity: high
  actions:
  - agentKillPod:
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: MY_CRITICAL_NAMESPACE

Reviewing Policy Activity

Using the spyctl logs command, you can see what sorts of activity are going on within the scope of your policy.

spyctl logs policy NAME_OR_UID

For example:

$ spyctl logs policy demo-cluster-policy
(audit mode): Container image "docker.io/guyduchatelet/spyderbat-demo:2" ns:"rsvp-svc-dev" cluster:"demo-cluster" deviated from policy "demo-cluster-policy".
(audit mode): Would have initiated "makeRedFlag" action for "cont:8vuJRMgyTEs:AAYXziCHi5g:31961a985651". Not initiated due to "audit" mode.

Summary and Next Steps

At this point you should have an applied Cluster Policy in audit mode. This means the policy is in a learning phase: it will generate logs and deviations, but it will not take any response actions. Once the policy has stabilized (generating few or no deviations), you can set it to enforce mode.
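One way to make that switch (assuming the same spyctl edit policy workflow shown earlier for response actions; see the policy management guide for the exact procedure) is to change the mode field in the policy spec:

```yaml
spec:
  enabled: true
  mode: enforce  # changed from: audit
```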

You can create Cluster Policies for any other Kubernetes Clusters you manage.

For additional details on ruleset rules, view the Ruleset Reference Guide. There you can find additional scoping options and rule targets.

For additional details on managing policies (updating, disabling, deleting), see the Guardian Policy Management Reference Guide.

© SPYDERBAT, Inc., All Rights Reserved