Installing the KTA Python SDK

Note

The KTA Python SDK will soon be available for download via PyPI.

Download the KTA Python SDK wheel file and place it in your project directory. You can install the KTA Python SDK using pip (pip install kta_python_sdk-<version>-py3-none-any.whl[http-backend]) or, alternatively, add it as a dependency to your pyproject.toml. In the latter case, make sure to include the http-backend extra. For further details on how to install a Python dependency from a file, please refer to the respective documentation.
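If you go the pyproject.toml route, the dependency might look like the following sketch. The file path is a placeholder for wherever you placed the wheel; only the http-backend extra is taken from the text above.

```toml
[project]
dependencies = [
    # Placeholder path and version; note the http-backend extra.
    "kta_python_sdk[http-backend] @ file:///path/to/kta_python_sdk-<version>-py3-none-any.whl",
]
```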

Implementing Custom Autoscaling Algorithms with the Python SDK

Custom autoscaling algorithms are stateless, user-defined Python functions1. To assist you in the implementation and deployment process, KTA comes with a Python SDK.

An autoscaling algorithm consists of up to 3 steps: Monitor, Analyze (optional) and Plan. Each step returns a result object that is accessible in subsequent steps. For example, the result of the Monitor step is available in the Analyze step of the current MAPE-K loop evaluation. Historical results, i.e., those of previously completed MAPE-K loop evaluations, are also accessible in each step.
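The result flow described above can be sketched in plain Python. This is illustrative only, not SDK code: the function signatures and the shape of the history are assumptions.

```python
# Illustrative sketch (not SDK code): how step results flow through one
# MAPE-K loop evaluation. Results of completed evaluations are kept as
# history, which each step can also inspect.
history = []

def run_evaluation(monitor, analyze, plan):
    m = monitor(history)        # Monitor result...
    a = analyze(m, history)     # ...is available in Analyze (the optional step)
    p = plan(m, a, history)     # Plan sees both earlier results
    history.append({"monitor": m, "analyze": a, "plan": p})
    return p
```

Each completed evaluation is appended to the history, so later evaluations can base their decisions on earlier ones.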

Result Structure

Monitor and Analyze Step

The result structure of the Monitor and Analyze step is user-defined. It can be either a Python dict or a class that extends the respective Pydantic model. Using a Pydantic model provides a typed, structured result, which is compatible with static type checkers like mypy.
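As a minimal sketch, a dict-based Monitor result could look like this. The metric names and values are made up for illustration, and the context argument stands in for whatever the SDK actually passes; for a typed result you would instead subclass the SDK's Pydantic base model (not shown here).

```python
# User-defined Monitor result as a plain dict. Metric names and values
# below are illustrative assumptions, not part of the SDK.
def monitor(ctx):
    # `ctx` stands in for whatever context object the SDK passes in.
    return {
        "consumer_lag": 1200,
        "records_per_second": 350.0,
    }
```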

Plan Step

The result structure of the Plan step is pre-defined. The result must include the entire topology; that is, the parallelism of each topology node must be given explicitly, even if unchanged. This requirement is also enforced by the KTA Kubernetes Operator. If your topology consists of a single node, you may directly return an instance of PlanUdfResult.
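The "entire topology" requirement can be sketched as follows. PlanUdfResult is named in the text above, but its actual fields are unknown here, so a stand-in dataclass is used; node names are invented for illustration.

```python
from dataclasses import dataclass

# Stand-in for the SDK's PlanUdfResult; its real fields are assumptions.
@dataclass
class PlanUdfResult:
    parallelism: int

def plan(current: dict[str, int]) -> dict[str, PlanUdfResult]:
    # Scale only the "sink" node, but still return an explicit
    # parallelism for every topology node, even the unchanged ones.
    desired = dict(current)
    desired["sink"] = desired["sink"] + 1
    return {node: PlanUdfResult(parallelism=p) for node, p in desired.items()}
```

Even though only one node changes, the returned mapping covers every node, which is what the KTA Kubernetes Operator enforces.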

Autoscaling Algorithm Implementation Options

There are two ways to implement an autoscaling algorithm.

  1. Single-class implementation: Implement all steps in one class using AutoscalingAlgorithmUDFs.
  2. Multi-class implementation: Implement each step separately (AutoscalingAlgorithmMonitorUDF, AutoscalingAlgorithmAnalyzeUDF, AutoscalingAlgorithmPlanUDF).

The latter is useful when each algorithm step should run in a separate container.
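A minimal sketch of the single-class option follows. AutoscalingAlgorithmUDFs is the SDK class named above, but its exact interface is an assumption here, so a stand-in base class is used to keep the example self-contained; the metrics and node name are invented.

```python
# Stand-in for the SDK base class; the real interface may differ.
class AutoscalingAlgorithmUDFs:
    pass

class MyAlgorithm(AutoscalingAlgorithmUDFs):
    def monitor(self):
        # Collect metrics (values are illustrative).
        return {"lag": 500}

    def analyze(self, monitor_result):
        # The optional Analyze step, consuming the Monitor result.
        return {"overloaded": monitor_result["lag"] > 1000}

    def plan(self, analyze_result):
        # Return an (assumed) explicit parallelism for each topology node.
        return {"my-node": 2 if analyze_result["overloaded"] else 1}
```

In the multi-class variant, each of these methods would instead live in its own class (AutoscalingAlgorithmMonitorUDF, AutoscalingAlgorithmAnalyzeUDF, AutoscalingAlgorithmPlanUDF), which allows each step to run in a separate container.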

For an example, please refer to the sample algorithm.

Bootstrapping and Deploying an Autoscaling Algorithm

Each of the three user-defined autoscaling algorithm steps (Monitor, Analyze, Plan) must be exposed as a separate endpoint.

The KTA Python SDK provides helper functions for application bootstrapping that simplify deployment.

For an example of how to deploy an algorithm, please refer to the corresponding quickstart files.

Additionally, you need to configure the endpoints in the KTAPolicy.

Configuring the KTA Kubernetes Operator

Configurations related to the autoscaling process are defined in a KTAPolicy.
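As a purely illustrative sketch, a KTAPolicy referencing the three step endpoints might look roughly like this. All field names, the API group, and the URLs below are assumptions; consult the quickstart policies for the actual schema.

```yaml
# Illustrative only -- field names and API group are assumptions.
apiVersion: kta.example.org/v1
kind: KTAPolicy
metadata:
  name: my-autoscaler
spec:
  endpoints:    # where the Monitor/Analyze/Plan endpoints are served
    monitor: http://my-algorithm:8000/monitor
    analyze: http://my-algorithm:8000/analyze
    plan: http://my-algorithm:8000/plan
```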

For an example, please refer to the quickstart policies.

Using Out-of-the-Box Stream Processing Algorithms

Currently, KTA only includes a dummy algorithm for testing purposes. You can find a complete example of how to use it in the Quick Start.

More built-in algorithms are on our roadmap -- stay tuned! ✨


  1. Strictly speaking, autoscaling algorithms can be written in any programming language that can handle HTTP requests. However, for languages other than Python, you currently have to handle the low-level interactions with the KTA Kubernetes Operator yourself.