Camel K: how to deploy Apache Camel integrations in Kubernetes
2025-03-22
Abstract
In this article I’ll show how to
- create an Apache Camel integration that reads data from an AWS Redshift database, processes each row by calling a REST service, and finally saves the output to files on AWS S3;
- deploy the integration to a Kubernetes (k8s) cluster using the kamel command-line tool. I’ll use Minikube as a local Kubernetes cluster; read the official documentation for installing it.
Set up kamel
As suggested by the Camel K installation guide:
minikube start
minikube addons enable registry
kubectl create ns camel-k
kubectl config set-context --current --namespace=camel-k
kubectl apply -k github.com/apache/camel-k/install/overlays/kubernetes/descoped?ref=v2.6.0 --server-side
Create a file itp.yaml like this:
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  labels:
    app: camel-k
  name: camel-k
  namespace: camel-k
spec:
  build:
    registry:
      address: registry.io
      organization: camel-k
      insecure: true
Update the “address” field with the IP shown by:
kubectl -n kube-system get service registry -o jsonpath='{.spec.clusterIP}'
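Alternatively, itp.yaml can be generated with the address already filled in. A minimal sketch; the IP value below is a placeholder standing in for the ClusterIP returned by the command above:

```shell
# REGISTRY_IP would come from the kubectl jsonpath lookup above;
# a placeholder value is used here for illustration.
REGISTRY_IP=10.98.23.7
cat > itp.yaml <<EOF
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  labels:
    app: camel-k
  name: camel-k
  namespace: camel-k
spec:
  build:
    registry:
      address: ${REGISTRY_IP}
      organization: camel-k
      insecure: true
EOF
grep "address:" itp.yaml   # sanity check: the IP is in place
```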
Then run
kubectl apply -f itp.yaml -n camel-k
kubectl wait --for jsonpath='{.status.phase}'=Ready IntegrationPlatform camel-k --timeout 30s
Creating an integration with Camel
File dbToRestToS3.yaml:
- from:
    uri: "direct:process_record"
    steps:
      - log: "${body}"
      - log: "${body['col1']}"
      - setHeader:
          name: "S3Filename"
          expression:
            simple: "testCanBeDelete/${body['col1']}"
      - setHeader:
          name: "CamelHttpUri"
          expression:
            simple: "https://api.restful-api.dev/objects?id=${body['col1']}"
      - setHeader:
          name: "CamelHttpMethod"
          constant: "GET"
      - to: "http://RESTSERVICE"
      - log: "${body}"
      - log: "${in.headers}"
      - to:
          uri: "kamelet:aws-s3-sink/myconfig"
          parameters:
            keyName: "${in.headers['S3Filename']}"
- from:
    uri: "direct:start"
    steps:
      - setBody:
          constant: "{}"
      - to:
          uri: "kamelet:aws-redshift-sink"
          parameters:
            query: select '3' as col1 union select '4' as col1
      - split:
          expression:
            simple: "${body}"
          steps:
            - to:
                uri: "direct:process_record"
- from:
    uri: "timer:tick?period=30000"
    steps:
      - to:
          uri: "direct:start"
# - from:
#     uri: "cron:tab?schedule=*+*+*+*+?"
#     steps:
#       - to: "direct:start"
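A side note on the commented-out cron alternative: in a Camel endpoint URI, `+` in a query parameter encodes a space, so the odd-looking schedule is just a URL-encoded cron expression. A quick shell illustration:

```shell
# '+' in a Camel URI query parameter stands for a space, so the
# commented-out schedule "*+*+*+*+?" decodes to "* * * * ?"
SCHEDULE='*+*+*+*+?'
echo "$SCHEDULE" | tr '+' ' '
```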
File redshift-sink.properties:
camel.kamelet.aws-redshift-sink.serverPort=5439
camel.kamelet.aws-redshift-sink.username=myuser
camel.kamelet.aws-redshift-sink.password=myprd
camel.kamelet.aws-redshift-sink.query=select user
camel.kamelet.aws-redshift-sink.databaseName=mydb
File aws-sink.properties:
camel.kamelet.aws-s3-sink.myconfig.bucketNameOrArn=mybucket
camel.kamelet.aws-s3-sink.myconfig.region=eu-west-1
camel.kamelet.aws-s3-sink.myconfig.keyName=testCanBeDelete/replaceme
camel.kamelet.aws-s3-sink.myconfig.accessKey=myaccessKey
camel.kamelet.aws-s3-sink.myconfig.secretKey=mysecretKey
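The `myconfig` segment in these keys binds them to the named instance `kamelet:aws-s3-sink/myconfig` used in the route, which is how several differently-configured copies of the same kamelet can coexist. A small sketch of the naming convention, using a throwaway demo file rather than the real one:

```shell
# Demo file standing in for aws-sink.properties above
cat > demo-sink.properties <<'EOF'
camel.kamelet.aws-s3-sink.myconfig.bucketNameOrArn=mybucket
camel.kamelet.aws-s3-sink.myconfig.region=eu-west-1
EOF
# Stripping the camel.kamelet.<kamelet>.<instance>. prefix leaves the
# effective parameters that the /myconfig instance receives
sed -n 's/^camel\.kamelet\.aws-s3-sink\.myconfig\.//p' demo-sink.properties
```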
Deployment
kamel run dbToRestToS3.yaml --property file:redshift-sink.properties --property file:aws-sink.properties --dev
Or using secrets
kubectl create secret generic aws-sink-secret --from-file=aws-sink.properties
kubectl create secret generic redshift-sink-secret --from-file=redshift-sink.properties
kamel run --dev --config secret:redshift-sink-secret --config secret:aws-sink-secret dbToRestToS3.yaml
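A Secret created with `--from-file` stores the file content base64-encoded under a key named after the file. A cluster-free sketch of that round trip; the demo file name and property value are placeholders:

```shell
# Stand-in for one of the real properties files
printf 'camel.kamelet.aws-s3-sink.myconfig.region=eu-west-1\n' > demo.properties
# kubectl stores base64(file bytes) in the Secret's data map
ENCODED=$(base64 < demo.properties | tr -d '\n')
echo "$ENCODED" | base64 -d   # decoding recovers the original properties line
```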
Logging
In production you need to omit the “--dev” option and inspect the logs with:
kubectl get pods -n camel-k
kubectl logs camel-k-operator-XXXX
kamel get
kamel logs db-to-rest-to-s3