# Common Use Cases
This document describes several common scenarios in which this project may be deployed, along with why and how it could be used in each case.
## Current Containerized HTTP Application In The Cloud, Migrating to Kubernetes
In this use case, an application is containerized and running on a managed cloud platform that supports containers. The platform may or may not be autoscaling.
Moving this application to Kubernetes may make sense for several reasons, but the pros and cons of that decision are out of scope of this document.
### How You’d Move This Application to KEDA-HTTP
If the application is being moved to Kubernetes, you would follow these steps to get it autoscaling and routing with KEDA-HTTP:
- Create a workload and `Service`
- Install the HTTP Add-on
- Create a single `HTTPScaledObject` in the same namespace as the workload and `Service` you created
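The final step can be sketched as a manifest like the one below. All names, the hostname, and the port are placeholders, and the exact field names of the `HTTPScaledObject` spec can vary between add-on versions, so check the documentation for the version you install:

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: myapp                  # hypothetical workload name
  namespace: myapp-ns          # same namespace as the Deployment and Service
spec:
  hosts:
    - myhost.example.com       # requests with this Host header are routed here
  scaleTargetRef:
    name: myapp                # the Deployment to scale
    service: myapp             # the Service that fronts the workload
    port: 8080                 # the Service port to route traffic to
```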
At that point, the operator creates the proper autoscaling and routing infrastructure behind the scenes, and the application is ready to scale. Any request the interceptor receives with a matching host is routed to the correct backend.
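You can verify the routing path by sending a request through the interceptor with the host your `HTTPScaledObject` declares. The interceptor Service name and namespace below assume a default Helm install of the add-on; adjust them for your cluster, and treat the hostname as a placeholder:

```shell
# Forward a local port to the interceptor proxy Service
# (name assumed from the add-on's default Helm chart).
kubectl -n keda port-forward svc/keda-add-ons-http-interceptor-proxy 8080:8080 &

# Send a request with the Host header the HTTPScaledObject routes on.
# A routed response means the interceptor found a matching backend.
curl -H "Host: myhost.example.com" http://localhost:8080/
```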
## Current HTTP Server in Kubernetes
In this use case, an HTTP application is already running in Kubernetes, possibly (but not necessarily) already serving in production to the public internet.
In this case, the reason for adding the HTTP Add-on is clear: autoscaling based on incoming HTTP traffic.
### How You’d Move This Application to KEDA-HTTP
Getting the HTTP Add-on working can be done transparently and without downtime to the application:
- Install the add-on. This step will have no effect on the running application.
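The install step is typically done with Helm. The chart repository and chart names below are those published by the KEDA project, but verify them against the current documentation for your KEDA version:

```shell
# Add the KEDA chart repository.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

# KEDA core must be installed first; the HTTP Add-on is usually
# installed into the same namespace.
helm install keda kedacore/keda --namespace keda --create-namespace
helm install http-add-on kedacore/keda-add-ons-http --namespace keda
```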
- Create a new `HTTPScaledObject`. This step activates autoscaling for the workload that you specify, and the application will immediately start scaling up and down based on incoming traffic through the interceptor that was created.
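For an application already serving production traffic, a minimal sketch of that `HTTPScaledObject` might look like the following. All names and the hostname are placeholders, and the `replicas` field (present in some add-on versions; field names vary) is shown with a non-zero minimum so the rollout stays downtime-free by avoiding scale-to-zero cold starts:

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: existing-app           # hypothetical name of the running workload
  namespace: production
spec:
  hosts:
    - existing-app.example.com
  scaleTargetRef:
    name: existing-app         # the existing Deployment
    service: existing-app      # the existing Service
    port: 80
  replicas:
    min: 1                     # keep at least one replica during the transition
    max: 10
```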