Custom Kubernetes Controller with a Real Use Case: Managing Application Configurations

Hey there! Today we’re sharing the journey of building a Custom Kubernetes Controller for a real use case: managing complex application configurations. If you’ve ever felt like stock Kubernetes wasn’t quite enough for your setup, you’re in for a treat. Here’s how we tackled dynamic configuration management with a custom solution.


Custom Kubernetes Controller: The Real Challenge

In one of our projects, we quickly realized that Kubernetes ConfigMaps couldn’t handle everything we needed. The more we dug, the more we saw the need for a custom solution that would allow us to:

  • Version configurations: Rollback and track versions easily.
  • Validate dynamically: Ensure that configurations met critical criteria before deployment.
  • Integrate with external APIs: Pull configurations from a third-party API in real-time.

We couldn’t find a ready-made solution that met our needs, so we built our own Custom Kubernetes Controller.


Step 1: Setting Up the Go Project

Our adventure began with setting up a Go project to build the controller:

mkdir litfeeds-configurator
cd litfeeds-configurator
go mod init litfeeds-configurator

Step 2: Installing Required Libraries

To make our controller work smoothly, we installed the Kubernetes client-go and controller-runtime libraries:

go get k8s.io/client-go@latest
go get sigs.k8s.io/controller-runtime@latest

These libraries provided the tools to work with Kubernetes resources and manage our custom configurations.

Step 3: Defining the Custom Resource Definition (CRD)

Our Custom Resource Definition (CRD) made it possible to create and manage a new resource type, AppConfig, tailored for our needs. Here’s what config_crd.yaml looked like:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: appconfigs.litfeeds.io
spec:
  group: litfeeds.io
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                version:
                  type: string
                data:
                  type: object
                  additionalProperties:
                    type: string
  scope: Namespaced
  names:
    plural: appconfigs
    singular: appconfig
    kind: AppConfig

Why the CRD?

Our CRD taught the API server about a new resource type, AppConfig, so we could create, list, and manage configurations with kubectl just like any built-in resource, while keeping the spec shaped exactly to our needs.
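
One piece we’re glossing over here: the controller’s Go code also needs types registered against this group and version before it can watch them. Below is a minimal sketch of how that wiring typically looks with controller-runtime’s scheme builder (the AppConfig struct appears in Step 4; AppConfigList and the deepcopy methods that controller-gen generates are sketched assumptions, not the project’s actual files):

package main

import (
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/apimachinery/pkg/runtime/schema"
  "sigs.k8s.io/controller-runtime/pkg/scheme"
)

// GroupVersion matches the group and version declared in config_crd.yaml.
var GroupVersion = schema.GroupVersion{Group: "litfeeds.io", Version: "v1"}

// SchemeBuilder lets us add AppConfig to a runtime.Scheme so clients,
// caches, and watches can serialize the custom resource.
var SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

// AppConfigList is the list type informers use when they list-and-watch
// AppConfigs; its deepcopy methods would be generated, like AppConfig's.
type AppConfigList struct {
  metav1.TypeMeta `json:",inline"`
  metav1.ListMeta `json:"metadata,omitempty"`
  Items           []AppConfig `json:"items"`
}

func init() {
  SchemeBuilder.Register(&AppConfig{}, &AppConfigList{})
}

The resulting SchemeBuilder.AddToScheme is what we later apply to the scheme the manager uses.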


Step 4: Building the Custom Kubernetes Controller

With our CRD ready, we created the controller itself. Here’s how the code looked in controller.go:

package main

import (
  "context"
  "fmt"
  "os"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/controller-runtime/pkg/client/config"
  "sigs.k8s.io/controller-runtime/pkg/controller"
  "sigs.k8s.io/controller-runtime/pkg/handler"
  "sigs.k8s.io/controller-runtime/pkg/manager"
  "sigs.k8s.io/controller-runtime/pkg/reconcile"
  "sigs.k8s.io/controller-runtime/pkg/source"
)

// AppConfig mirrors the CRD from Step 3. Embedding TypeMeta and ObjectMeta gives
// the type the standard Kubernetes metadata; the DeepCopyObject method and scheme
// registration that a real project generates with controller-gen are omitted here
// to keep the listing focused.
type AppConfig struct {
  metav1.TypeMeta   `json:",inline"`
  metav1.ObjectMeta `json:"metadata,omitempty"`

  Spec AppConfigSpec `json:"spec"`
}

type AppConfigSpec struct {
  Version string            `json:"version"`
  Data    map[string]string `json:"data"`
}

func main() {
  // Load the kubeconfig (or in-cluster config) used to reach the API server.
  cfg := config.GetConfigOrDie()

  mgr, err := manager.New(cfg, manager.Options{})
  if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to create manager: %v\n", err)
    os.Exit(1)
  }

  // Every controller needs a Reconciler; this minimal one just acknowledges events.
  ctrl, err := controller.New("appconfig-controller", mgr, controller.Options{
    Reconciler: reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
      fmt.Printf("Reconciling AppConfig %s\n", req.NamespacedName)
      return reconcile.Result{}, nil
    }),
  })
  if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to create controller: %v\n", err)
    os.Exit(1)
  }

  if err := ctrl.Watch(&source.Kind{Type: &AppConfig{}}, &handler.EnqueueRequestForObject{}); err != nil {
    fmt.Fprintf(os.Stderr, "Unable to watch AppConfig resources: %v\n", err)
    os.Exit(1)
  }

  if err := mgr.Start(context.Background()); err != nil {
    fmt.Fprintf(os.Stderr, "Unable to start manager: %v\n", err)
    os.Exit(1)
  }
}

Breaking It Down

  • AppConfig: Defines our custom resource type.
  • manager.New: Sets up a manager to handle the controller.
  • controller.New: Creates the controller, which watches over AppConfig resources.
  • ctrl.Watch: Ensures the controller tracks AppConfig changes in real time.

Here’s a closer look at each function call we used in the Custom Kubernetes Controller:

1. mgr, err := manager.New(cfg, manager.Options{})
  • Function Signature: func New(config *rest.Config, options Options) (Manager, error)
  • Package: sigs.k8s.io/controller-runtime/pkg/manager

This call creates a new Kubernetes controller Manager, which is the central point for managing and starting our controller. The manager.New function accepts two arguments: a Kubernetes configuration (cfg, which we load with config.GetConfigOrDie()) that connects to the API server, and an Options struct that allows customization of the manager’s behavior. If the manager cannot be created, an error is printed to stderr and the program exits.
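
To make the Options part more concrete, here’s a sketch of the kind of customization we mean (our illustration; the Scheme and Namespace fields come from the controller-runtime generation that still ships the source.Kind API used above):

// Sketch only: give the manager a scheme that knows about AppConfig and
// restrict its cache to one namespace. SchemeBuilder is from Step 3's sketch;
// a real setup would also add the built-in client-go types to the scheme.
// import "k8s.io/apimachinery/pkg/runtime"
appScheme := runtime.NewScheme()
_ = SchemeBuilder.AddToScheme(appScheme)

mgr, err := manager.New(cfg, manager.Options{
  Scheme:    appScheme, // how the manager (de)serializes AppConfig objects
  Namespace: "default", // optional: watch and cache a single namespace
})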


2. ctrl, err := controller.New("appconfig-controller", mgr, controller.Options{...})
  • Function Signature: func New(name string, mgr Manager, options controller.Options) (Controller, error)
  • Package: sigs.k8s.io/controller-runtime/pkg/controller

This line initializes a new Kubernetes Controller, which is responsible for reconciling the custom resources. The controller.New function requires three parameters: a name (in this case, "appconfig-controller"), the Manager instance (mgr) to attach it to, and an Options struct that must at least carry the Reconciler the controller will run (ours starts as a simple logging stub). If the controller fails to initialize, it returns an error that’s logged to stderr, followed by program termination.
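
controller.Options has a few more knobs worth knowing about. A small sketch (our illustration, not code from the project) of the same call with a concurrency limit:

// Sketch only: a no-op Reconciler stand-in plus a concurrency limit;
// MaxConcurrentReconciles defaults to 1.
ctrl, err := controller.New("appconfig-controller", mgr, controller.Options{
  Reconciler: reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
    return reconcile.Result{}, nil // real reconcile logic goes here
  }),
  MaxConcurrentReconciles: 2, // reconcile up to two AppConfigs in parallel
})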


3. ctrl.Watch(&source.Kind{Type: &AppConfig{}}, &handler.EnqueueRequestForObject{})
  • Function Signature (method on the Controller interface): Watch(src source.Source, eventHandler handler.EventHandler, predicates ...predicate.Predicate) error
  • Package: sigs.k8s.io/controller-runtime/pkg/controller

The Watch method instructs the controller to monitor events associated with a specific resource type—in this case, AppConfig. The source.Kind source wraps our custom resource’s Type, and handler.EnqueueRequestForObject ensures that any changes to AppConfig resources are added to the work queue for reconciliation. If this setup fails, it logs an error and exits.
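
The predicates parameter in that signature is optional but useful: it lets the controller skip events it doesn’t care about. A small sketch (our addition) using controller-runtime’s stock GenerationChangedPredicate, which for custom resources typically fires only when the spec changes:

// Sketch only: ignore updates where only metadata changed.
// import "sigs.k8s.io/controller-runtime/pkg/predicate"
if err := ctrl.Watch(
  &source.Kind{Type: &AppConfig{}},
  &handler.EnqueueRequestForObject{},
  predicate.GenerationChangedPredicate{},
); err != nil {
  fmt.Fprintf(os.Stderr, "Unable to watch AppConfig resources: %v\n", err)
  os.Exit(1)
}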


4. mgr.Start(context.Background())
  • Function Signature (method on the Manager interface): Start(ctx context.Context) error
  • Package: sigs.k8s.io/controller-runtime/pkg/manager

The Start function begins the manager’s control loop, which supervises the controllers and keeps them in sync with the cluster’s state. This method takes a context.Context argument for managing cancellation and termination. If startup fails, an error message is printed and the program exits.
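
One refinement we’d suggest over the bare context.Background() in the listing: controller-runtime ships a signal-aware context, so SIGINT/SIGTERM shut the manager down gracefully.

// Sketch only: stop the manager cleanly on Ctrl+C or SIGTERM.
// import "sigs.k8s.io/controller-runtime/pkg/manager/signals"
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
  fmt.Fprintf(os.Stderr, "Unable to start manager: %v\n", err)
  os.Exit(1)
}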


Step 5: Deploying the CRD and Custom Kubernetes Controller

With the CRD and controller code ready, we deployed it:

kubectl apply -f config_crd.yaml

We also built our controller into a Docker image, then deployed it to Kubernetes with the necessary permissions to manage AppConfig resources.
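
For reference, the "necessary permissions" come down to RBAC rules for the new resource type. Here’s a sketch of what they might look like (all names below are illustrative, not our production manifests):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: appconfig-controller
rules:
  - apiGroups: ["litfeeds.io"]
    resources: ["appconfigs"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: appconfig-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: appconfig-controller
subjects:
  - kind: ServiceAccount
    name: appconfig-controller
    namespace: default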


Step 6: Testing the Custom Controller with AppConfig Resource

To make sure everything was working, we created an AppConfig resource as a test. Here’s the YAML we used in appconfig_example.yaml:

apiVersion: litfeeds.io/v1
kind: AppConfig
metadata:
  name: test-config
  namespace: default
spec:
  version: "v1.0"
  data:
    key1: value1
    key2: value2

We applied it with:

kubectl apply -f appconfig_example.yaml
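
Applying the resource exercises the plumbing; the interesting part is what the Reconciler does with it. The production logic is beyond the scope of this post, but here’s a minimal sketch (the function name, the validation rule, and the log line are all illustrative) of a reconcile function that fetches the AppConfig and enforces the validation goal we started with:

// Sketch only: fetch the AppConfig named in the request and apply a trivial
// validation rule. reconcileAppConfig and its check are illustrative.
// import "sigs.k8s.io/controller-runtime/pkg/client"
func reconcileAppConfig(c client.Client) reconcile.Func {
  return func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
    var appCfg AppConfig
    if err := c.Get(ctx, req.NamespacedName, &appCfg); err != nil {
      // The object may already be gone; nothing left to reconcile.
      return reconcile.Result{}, client.IgnoreNotFound(err)
    }

    // Example validation: refuse configurations without a version string.
    if appCfg.Spec.Version == "" {
      return reconcile.Result{}, fmt.Errorf("AppConfig %s has no version set", req.NamespacedName)
    }

    fmt.Printf("Reconciled AppConfig %s (version %s, %d keys)\n",
      req.NamespacedName, appCfg.Spec.Version, len(appCfg.Spec.Data))
    return reconcile.Result{}, nil
  }
}

Wiring it in is one line: pass reconcileAppConfig(mgr.GetClient()) as the Reconciler in controller.Options.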

Creating this Custom Kubernetes Controller with a real use case was a turning point for our configuration management. It let us go beyond what ConfigMaps alone could handle and added the dynamic configuration handling that improved our deployment process. If you’ve hit a similar wall with Kubernetes, this approach might just be the solution you’re looking for.

Recommended post: Efficient Kubernetes Auto scaling
