Quick Start: Overview and Setup
This quickstart guide walks you through the minimum steps required to get up and running with KTA. It shows you how to
- set up a local Kubernetes cluster with k3d,
- deploy all KTA-related components, and
- apply a deterministic demo policy to the stream processing system of your choice.
Prerequisites
- Docker (tested with v28.0.1)
- k3d (see its requirements and installation instructions)
- kubectl
Setting Up a Local Kubernetes Cluster with k3d
This section shows you how to set up a local, lightweight Kubernetes cluster on your machine using k3d.
First, create a k3d cluster with 1 node.
k3d cluster create kta-quickstart --servers 1 --image rancher/k3s:v1.29.15-k3s1
Check if the cluster is running as expected using
k3d node list
The output should be similar to
NAME ROLE CLUSTER STATUS
k3d-kta-quickstart-registry.localhost registry running
k3d-kta-quickstart-server-0 server kta-quickstart running
k3d-kta-quickstart-serverlb loadbalancer kta-quickstart running
Additionally, check if you can use kubectl to manage the created cluster, e.g., by executing
kubectl get nodes
Deploy KTA Components
A KTA deployment has two main components: an instance of the KTA Kubernetes Operator and an autoscaling algorithm (user-defined logic).
KTA Kubernetes Operator
The KTA Kubernetes Operator orchestrates the autoscaling reconciliation process. It invokes the user-defined logic (user-defined functions) of the autoscaling algorithm, stores the result of each step of the algorithm for subsequent reconciliations, and executes the scaling action.
KTA is configured through a so-called KTAPolicy, a Kubernetes Custom Resource Definition (CRD). An instance of a KTAPolicy configures the autoscaling behavior of a single streaming application; that is, one KTAPolicy is responsible for scaling a single streaming query. It has to be applied to the cluster after the streaming application has been deployed, as you will see later on.
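To give a feel for what such a policy looks like, the sketch below is purely illustrative: the apiVersion and every field name are assumptions, not the actual schema shipped with the CRD.

```yaml
# Purely illustrative sketch -- the apiVersion and all field names below are
# assumptions; consult the CRD installed with the operator for the real schema.
apiVersion: kta.example.com/v1alpha1
kind: KTAPolicy
metadata:
  name: demo-policy
spec:
  # The single streaming query this policy scales (hypothetical field).
  targetQuery: my-streaming-query
  # Where the operator reaches the user-defined algorithm (hypothetical field).
  algorithmEndpoint: http://kta-quickstart-algorithm.default.svc.cluster.local:8096
```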
Deploy the operator using
kubectl apply -f https://raw.githubusercontent.com/dynatrace-oss/kubernetes-topology-autoscaler/refs/tags/v0.1.0-alpha.1/quickstart-examples/kta-quickstart-kubernetes-operator/kubernetes/quickstart-install.yml
You can verify that everything is running as expected by using the commands below.
# Should show "deployment.apps/kta-kubernetes-operator condition met"
kubectl wait --for=condition=available --timeout=240s deployment/kta-kubernetes-operator
# Should show a single endpoint
kubectl get endpoints kta-kubernetes-operator
Autoscaling Algorithm (User-Defined Logic Implemented Using the KTA Python SDK)
The KTA Python SDK assists you in implementing and deploying custom autoscaling algorithms. An autoscaling algorithm consists of up to three steps: Monitor, Analyze (optional), and Plan. In this quickstart guide, you will use a deterministic demo algorithm, which toggles between two states. The demo algorithm also serves as a reference for implementing your own autoscaling algorithms.
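The toggling behavior of the demo can be pictured roughly as follows. This is a minimal, hypothetical sketch, not the actual SDK API: the `state` dict stands in for whatever the operator persists between reconciliations, and the parallelism values are made up.

```python
# Hypothetical sketch of a deterministic "toggle" Plan step: each
# reconciliation flips the desired parallelism between two values.
# The state dict is a stand-in for the result the operator stores
# between reconciliations; the real SDK interfaces may differ.

def plan(state):
    low, high = 1, 2  # the two parallelism values to toggle between
    previous = state.get("replicas", low)
    desired = high if previous == low else low
    return {"replicas": desired}

state = {}
for _ in range(3):
    state = plan(state)
    print(state["replicas"])  # alternates: 2, 1, 2
```

Because each step's result is stored by the operator, the algorithm itself can stay stateless: the next reconciliation receives the previous result as input.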
Deploy the autoscaling algorithm and a corresponding Kubernetes Service to the cluster using
kubectl apply -f https://raw.githubusercontent.com/dynatrace-oss/kubernetes-topology-autoscaler/refs/tags/v0.1.0-alpha.1/quickstart-examples/kta-quickstart-python-sdk/kubernetes/quickstart-algorithm.yml
You can verify that everything is running as expected by using the commands below.
# Should show "deployment.apps/kta-quickstart-algorithm condition met"
kubectl wait --for=condition=available --timeout=240s deployment/kta-quickstart-algorithm
# Should show a single endpoint
kubectl get endpoints kta-quickstart-algorithm
The steps of the autoscaling algorithm are now exposed as individual endpoints under http://kta-quickstart-algorithm.default.svc.cluster.local:8096/api/v1alpha1/[monitor|analyze|plan].
These endpoints will be invoked by the KTA Kubernetes Operator one after another during the reconciliation process.
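One reconciliation pass can be pictured as calling the three steps in order, feeding each result into the next. The sketch below is a simplified, hypothetical illustration; the function bodies and data shapes are stand-ins, not the real SDK or operator API.

```python
# Hypothetical sketch of one reconciliation pass: the operator invokes the
# algorithm's endpoints one after another and pipes each result forward.
# All function bodies and field names here are made up for illustration.

def monitor():
    # In a real algorithm: collect metrics from the streaming system.
    return {"lag": 120}

def analyze(metrics):
    # Optional step: derive a higher-level signal from the raw metrics.
    return {"overloaded": metrics["lag"] > 100}

def plan(analysis):
    # Decide on the scaling action for the operator to execute.
    return {"replicas": 2 if analysis["overloaded"] else 1}

def reconcile():
    metrics = monitor()
    analysis = analyze(metrics)
    return plan(analysis)

print(reconcile())  # -> {'replicas': 2}
```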
Choose Your Stream Processing System
Next, choose your desired stream processing system from the list below and follow the steps in the respective guide to see KTA in action.