Learn how to install Kerno on your Kubernetes cluster
We're currently aware of compatibility issues with Docker on Apple M4 machines. For now, if you're installing Kerno using Docker, please use a Mac with an Intel, M1, M2, or M3 chip. We're actively working on a fix and will share updates soon.
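If you're not sure which chip your Mac has, one quick way to check from the terminal is the stock macOS sysctl utility:

```bash
# Prints the CPU model, e.g. "Apple M4" or "Apple M1" on Apple Silicon,
# or an Intel model string on Intel Macs.
sysctl -n machdep.cpu.brand_string
```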
To install Kerno on your cluster, you will need your invite code. You can request an invite code here.
Go to app.kerno.io, sign up using your email address, and follow the onboarding instructions.
Prerequisites
AWS Login: Log into AWS with admin privileges.
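To confirm the login is active and resolves to the account you expect, the AWS CLI's standard identity check is handy:

```bash
# Shows the account, user ID, and ARN your current credentials resolve to.
aws sts get-caller-identity
```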
OIDC (OpenID Connect) Provider
An OIDC Provider is required for your EKS cluster to support Kerno’s functionality. This is typically configured for EKS clusters. Since the OIDC Provider can be used for multiple services, including Kerno, we recommend you manage it within your cluster setup.
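To check whether your cluster already has an OIDC issuer, and to associate an IAM OIDC provider if it doesn't, the standard AWS CLI and eksctl commands below work; my-cluster and us-east-1 are placeholders for your own cluster name and region:

```bash
# Print the cluster's OIDC issuer URL (no output means none is configured).
aws eks describe-cluster --name my-cluster --region us-east-1 \
  --query "cluster.identity.oidc.issuer" --output text

# Associate an IAM OIDC provider with the cluster.
eksctl utils associate-iam-oidc-provider \
  --cluster my-cluster --region us-east-1 --approve
```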
Depending on whether you have set your AWS credentials as environment variables or in the .aws/credentials file, use one of the following methods:
Using AWS Config File
Make sure your AWS credentials are stored in your ~/.aws/credentials file, add the missing values to the script, and run the script.
Install Kerno Using AWS Config File
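The exact script is generated for your account in the dashboard; as a rough sketch of the shape this method takes, a Docker-based install mounts your ~/.aws directory into the container (the image name and K4_KEY variable below are illustrative placeholders, not the real values):

```bash
# Sketch only: image name and variable names are placeholders.
# Mounting ~/.aws lets the installer read your credentials and profiles.
docker run --rm \
  -v ~/.aws:/root/.aws \
  -e K4_KEY=<k4-key> \
  kerno/installer:latest  # placeholder image
```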
Your <k4-key> is automatically generated when you create your account. Each account has a single key, so store it securely.
Using Environment Variables
Set your AWS Credentials as environment variables, add the missing values to the script, and run the script.
Install Kerno Using Environment Variables
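Again, the real script comes from the dashboard; sketched roughly, the environment variable flavor passes the credentials straight into the container instead of mounting ~/.aws (placeholders throughout):

```bash
# Sketch only: image name and variable names are placeholders.
docker run --rm \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_REGION=$AWS_REGION \
  -e K4_KEY=<k4-key> \
  kerno/installer:latest  # placeholder image
```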
If you encounter issues or have questions, message us on Slack, and we’ll gladly help.
Installing Kerno on GCP
To install, a gcloud login must be present on the host machine.
Log into GCP with admin privileges.
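Both of the following are stock Google Cloud CLI commands, useful for establishing and verifying that login:

```bash
# Log in interactively if you haven't already.
gcloud auth login

# List credentialed accounts and show which one is active.
gcloud auth list
```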
Add the missing values to the script, and run the script.
Kerno GCP Installation
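As with the AWS methods, the concrete script is generated for your account; in rough outline, a Docker-based GCP install mounts your gcloud config into the container (the image name and API_KEY variable are placeholders):

```bash
# Sketch only: image name and variable names are placeholders.
# Mounting ~/.config/gcloud gives the installer access to your gcloud login.
docker run --rm \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -e API_KEY=<API-key> \
  kerno/installer:latest  # placeholder image
```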
Your <API-key> is automatically generated when you create your account. Each account has a single key, so store it securely.
If you encounter issues or have questions, message us on Slack, and we’ll gladly help.
How it works
Kerno’s K8s-only installation mode is cloud-agnostic and does not include managed object storage by default. This means:
Logs, payloads, and stack traces are not stored persistently.
Temporary samples are written to the main container of the nanobe deployment, at /tmp/samples.
These samples are stored in ephemeral storage, which is automatically cleaned up by the container runtime. By default, the ephemeral storage is configured with the following limits:
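The exact stanza in Kerno's deployment is an assumption, but as a sketch it is the standard Kubernetes resources block, with the limit matching the 4Gi figure described below (the request value here is illustrative):

```yaml
# Sketch: standard ephemeral-storage limits on the nanobe container.
resources:
  requests:
    ephemeral-storage: "2Gi"  # illustrative assumption
  limits:
    ephemeral-storage: "4Gi"  # matches the limit described below
```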
When the container reaches the 4Gi limit, older data is automatically overwritten to make room for new samples. This ensures the container doesn’t exceed its storage quota, but it also means that sample data is not guaranteed to persist over time.
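If you want to inspect the samples directly, you can exec into the nanobe deployment; the kerno namespace below is an assumption, so substitute wherever Kerno is installed:

```bash
# List the temporary samples inside the nanobe container.
kubectl -n kerno exec deploy/nanobe -- ls -lh /tmp/samples
```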
We're working on an option to let you connect your own cloud storage (like AWS S3) for persistent storage.
Kerno still sends a limited set of non-PII metrics to our backend, including:
Resource usage: CPU and memory stats for Pods and Nodes
Kubernetes metadata: Pods, Namespaces, DaemonSets, etc.
Kubernetes events: Scheduling, restarts, failures, etc.
Requirements
Access to a Kubernetes cluster.
Cloud provider credentials mounted into the container. This is necessary because kubectl often requires access tokens retrieved through cloud SDKs or CLIs, which in turn depend on local credentials.
Ensure you mount the correct path for your cloud credentials:
- AWS: -v ~/.aws:/root/.aws
- Azure: -v ~/.azure:/root/.azure
- GCP: -v ~/.config/gcloud:/root/.config/gcloud
The cloud credential mount can be omitted if accessing the cluster does not require cloud credentials (e.g., via static kubeconfig).
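Putting the requirements together, a typical invocation mounts your kubeconfig alongside the relevant credential directory (the image name is a placeholder, and the ~/.kube mount assumes the default kubeconfig location):

```bash
# Sketch only: placeholder image name. AWS mount shown; swap in the
# Azure or GCP mount from the list above as needed.
docker run --rm \
  -v ~/.kube:/root/.kube \
  -v ~/.aws:/root/.aws \
  kerno/installer:latest  # placeholder image
```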
The manifest contains sensitive information: an auth token to be stored as a secret, an installation ID so Kerno can uniquely identify the installation, and the API key.
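For orientation, those sensitive pieces would typically live in a standard Kubernetes Secret; the snippet below is a hypothetical sketch of that shape, not Kerno's actual manifest (the resource name and key names are placeholders):

```yaml
# Hypothetical sketch: resource name and key names are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: kerno-credentials
type: Opaque
stringData:
  auth-token: <auth-token>
  installation-id: <installation-id>
  api-key: <API-key>
```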
Apply the manifest:

```bash
kubectl apply -f kerno-installation.yaml
```
Alternatively, download and apply in a single step:
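The manifest URL is provided with your installation; the one-liner has this shape (the URL below is a placeholder):

```bash
# Placeholder URL: substitute the manifest URL from your dashboard.
kubectl apply -f https://<your-kerno-manifest-url>/kerno-installation.yaml
```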