apiVersion: v1
kind: Namespace
metadata:
  name: mp-demo
spec:
  finalizers:
    - kubernetes
Single node instance
What is it about
This page covers use cases where just one midPoint node is used. Specifics related to cluster deployment are not handled here; for cluster deployment see Midpoint cluster.
Deployments by the repository
Since midPoint 4.4 there are two repository implementations (where the data for the midPoint application are stored).
To keep the objects logically grouped we are using the namespace mp-demo. An example of the mp-demo namespace definition is shown at the beginning of this page and is also available on GitHub.
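Assuming the namespace manifest shown at the beginning of this page is saved as mp-demo-ns.yaml (a hypothetical file name) and you have a working kubectl context, the namespace can be created and verified like this:

```shell
# Create the namespace from the manifest file
kubectl apply -f mp-demo-ns.yaml

# Verify that the namespace exists and is Active
kubectl get namespace mp-demo
```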
Generic
Even though the generic repository is deprecated, it is still available and may be useful in some "lab" use cases.
The performance is lower compared with the native repository, but the database structure is created automatically once the connection to the database is ready (or the H2 DB file is created).
This option is mentioned here to keep all the information in one place, but we do not recommend it for production or (serious) test environments.
Quick start with H2
This example starts midPoint with the default environment variables. It is the "Kubernetes alternative" to running the image directly with Docker.
apiVersion: v1
kind: Pod
metadata:
  name: mp-h2-demo
  namespace: mp-demo
  labels:
    app: mp-h2-demo
spec:
  containers:
    - name: mp-h2-demo
      image: 'evolveum/midpoint:4.4.1-alpine'
      ports:
        - name: gui
          containerPort: 8080
          protocol: TCP
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
The result is midPoint running on the generic repository stored in an H2 DB file. We are not handling the config.xml file, so the default one is used. There are no shared secrets, as none are needed (they would be needed, e.g., for a connection to an external database). The keystore is generated when the system starts, so in principle (without extra steps) it is not possible to run more nodes (clustered). From the midPoint point of view this option is just "hello world" - it is simply up and running.
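For a quick look at the GUI without defining any Service, you can (assuming a working kubectl context against the cluster) forward a local port to the pod:

```shell
# Forward local port 8080 to the pod's GUI port
kubectl port-forward -n mp-demo pod/mp-h2-demo 8080:8080
```

The GUI is then reachable at http://localhost:8080/midpoint for as long as the port-forward is running.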
Before we add the next "layer" to the configuration, let us re-declare it as a StatefulSet (an object which "controls" pods).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-h2-demo
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-h2-demo
  template:
    metadata:
      labels:
        app: mp-h2-demo
    spec:
      containers:
        - name: mp-h2-demo
          image: 'evolveum/midpoint:4.4.1-alpine'
          ports:
            - name: gui
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: IfNotPresent
  serviceName: mp-h2-demo
Even though we can see "replicas: 1", it is practically a fixed value. Do not try to increase this number (start a cluster), as it will not work in this configuration. Cluster configuration will be shown later in the document.
Note also that in this configuration the service is available only internally in the Kubernetes environment. Theoretically it can be routed, but the IPs are dynamic and will change with every pod re-creation. If you would like to make it "reachable" (the same IP, the same FQDN), consider using Services and Ingress (of course with proper names and selectors).
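As a sketch of such exposure, a NodePort Service could look like the following (the nodePort value is illustrative, chosen from the default NodePort range; it is not part of this deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mp-h2-demo
  namespace: mp-demo
spec:
  type: NodePort
  selector:
    app: mp-h2-demo
  ports:
    - name: gui
      port: 8080
      targetPort: 8080
      nodePort: 30080   # illustrative value
```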
Quick start with PostgreSQL
Now we will add complexity in the form of a PostgreSQL "store" for the midPoint objects. So far we are still using the deprecated generic repository, to understand it and get it up and running in a step-by-step approach. The benefit for now is the auto-creation of the database structure - we do not need to handle explicit DB initialization.
We will need a PostgreSQL instance. It makes sense to use a persistent volume for the data store.
Without a defined persistent store the data is located on the "shared" volume used by the node itself. With a bigger amount of data and not so big storage on the node (e.g. a local Kubernetes cluster), free space may be exhausted.
For now we will define PostgreSQL without a persistent volume, to keep the configuration to the necessary minimum for easier understanding.
It has already been mentioned that IP addresses are dynamic. Now we will create two pods - one for midPoint and one for PostgreSQL. To make the interconnection predictable we will define a Service for the database; the connection is then made via the Service's FQDN. The proper "linking" is handled by the selector pointing to the matching label.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-demo-db
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-demo-db
  template:
    metadata:
      labels:
        app: mp-demo-db
    spec:
      containers:
        - name: mp-demo-db
          image: 'postgres:13-alpine'
          ports:
            - name: db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_INITDB_ARGS
              value: '--lc-collate=en_US.utf8 --lc-ctype=en_US.utf8'
            - name: POSTGRES_USER
              value: midpoint
            - name: POSTGRES_PASSWORD
              value: SuperSecretPassword007
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
  serviceName: mp-demo-db
You can see the init values necessary for authentication: POSTGRES_USER: midpoint and POSTGRES_PASSWORD: SuperSecretPassword007. Feel free to change them, but keep them consistent with the following midPoint definition; otherwise the connection will not be established.
apiVersion: v1
kind: Service
metadata:
  name: mp-demo-db
  namespace: mp-demo
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    app: mp-demo-db
  type: ClusterIP
  sessionAffinity: None
The name is important for the FQDN construction. In this documentation we use the default Kubernetes domain - the FQDN will be <service_name>.<namespace>.svc.cluster.local. This domain may differ based on the environment settings.
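To verify that the Service FQDN resolves inside the cluster, you can run a throwaway pod (this assumes a working cluster and that the busybox image can be pulled):

```shell
# Resolve the service name from inside the cluster using a temporary pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -n mp-demo -- nslookup mp-demo-db.mp-demo.svc.cluster.local
```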
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-pg-demo
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-pg-demo
  template:
    metadata:
      labels:
        app: mp-pg-demo
    spec:
      containers:
        - name: mp-pg-demo
          image: 'evolveum/midpoint:4.4.1-alpine'
          ports:
            - name: gui
              containerPort: 8080
              protocol: TCP
          env:
            - name: MP_SET_midpoint_repository_database
              value: postgresql
            - name: MP_SET_midpoint_repository_jdbcUsername
              value: midpoint
            - name: MP_SET_midpoint_repository_jdbcPassword
              value: SuperSecretPassword007
            - name: MP_SET_midpoint_repository_jdbcUrl
              value: jdbc:postgresql://mp-demo-db.mp-demo.svc.cluster.local:5432/midpoint
          imagePullPolicy: IfNotPresent
  serviceName: mp-pg-demo
Since we are using the generic repository, we can use the "default" config.xml file. All the changes can be overridden during the start. This is realized by the MP_SET_* variables, which are handled by the Start script. The names and contents of the variables correspond to the Repository configuration.
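For illustration, each MP_SET_* variable maps its underscore-separated path segments to nested elements in config.xml; e.g. MP_SET_midpoint_repository_jdbcUrl corresponds roughly to this fragment (a simplified sketch, not the complete file):

```xml
<configuration>
  <midpoint>
    <repository>
      <jdbcUrl>jdbc:postgresql://mp-demo-db.mp-demo.svc.cluster.local:5432/midpoint</jdbcUrl>
    </repository>
  </midpoint>
</configuration>
```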
Native
Native repository came with midPoint 4.4. For the purpose of deployment there are a few specifics:

- DB related:
  - it can be operated only on PostgreSQL (PostgreSQL’s features have been utilized during the design)
  - the DB structure has to be initialized explicitly - midPoint expects an already existing structure
- midPoint related:
  - a config.xml file has to be used

PostgreSQL should not be an issue, as we will use the official PostgreSQL image. The tricky part is the second point - initializing the DB structure. The good news is that everything we need is already packed in the midPoint image.
The DB init
We will reuse what we already have. To make it happen we will utilize an init container in Kubernetes. An init container runs before the main container (there may be more than one; they run in sequence, not in parallel). The image for the init container and the main container may differ. To meet the requirement we will need a volume "shared" between the containers in the pod. It can be a persistent volume, but for now we will use an emptyDir volume. For serious deployment (even for testing) a persistent volume is a good idea. We will use the midPoint image as the init container and the postgres image for the "regular" container.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-demo-db
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-demo-db
  template:
    metadata:
      labels:
        app: mp-demo-db
    spec:
      volumes:
        - name: init-db
          emptyDir: {}
      initContainers:
        - name: mp-db-init
          image: 'evolveum/midpoint:4.4.1-alpine'
          command: ["/bin/bash","/opt/midpoint/bin/midpoint.sh","init-native"]
          env:
            - name: MP_INIT_DB_CONCAT
              value: /opt/db-init/010-init.sql
          volumeMounts:
            - name: init-db
              mountPath: /opt/db-init
          imagePullPolicy: IfNotPresent
      containers:
        - name: mp-demo-db
          image: 'postgres:13-alpine'
          ports:
            - name: db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_INITDB_ARGS
              value: '--lc-collate=en_US.utf8 --lc-ctype=en_US.utf8'
            - name: POSTGRES_USER
              value: midpoint
            - name: POSTGRES_PASSWORD
              value: SuperSecretPassword007
          volumeMounts:
            - name: init-db
              mountPath: /docker-entrypoint-initdb.d/
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
  serviceName: mp-demo-db-service
The preparation of the init-db volume happens with every restart of the DB pod. The init process of the DB itself is done only once - only when no DB data is found. The image version (tag) has to be the same for the DB's init container and for midPoint itself; this way the initialized structure corresponds with the version of midPoint you are deploying.
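If the DB pod does not come up, the init container output is usually the first place to look (the commands assume a working cluster; mp-demo-db-0 is the pod name a single-replica StatefulSet produces):

```shell
# Watch the pod's init progress
kubectl get pods -n mp-demo -w

# Show the output of the init container that generates the SQL script
kubectl logs -n mp-demo mp-demo-db-0 -c mp-db-init
```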
Secret Objects
It is possible to utilize Secret objects to store the password instead of keeping it directly in the StatefulSet configuration.
When utilizing Secrets you can choose between two approaches:

- mount the value as a file in the pod’s filesystem
  Mounting the value as a file works the same way as with a ConfigMap; an example is shown further in the document.
- point to the value
  To point to the value you can replace the definition:
...
env:
  - name: POSTGRES_PASSWORD
    value: SuperSecretPassword007
...
with the following definition:
...
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mp-demo
        key: password
...
This example expects a Secret object with the name mp-demo in the same namespace as the object where it is used (mp-demo). The value used is the content of the key password located in the Secret object.
apiVersion: v1
kind: Secret
metadata:
  name: mp-demo
  namespace: mp-demo
data:
  password: U3VwZXJTZWNyZXRQYXNzd29yZDAwNw==
type: Opaque
In the Secret object the values are provided as base64 encoded content.
For our example: SuperSecretPassword007 ⇒ U3VwZXJTZWNyZXRQYXNzd29yZDAwNw==
The same Secret object can also be created directly with kubectl:
kubectl create secret generic -n mp-demo mp-demo --from-literal=password=SuperSecretPassword007
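The base64 form can be produced (and checked back) on the command line; note the -n flag, which prevents a trailing newline from ending up in the encoded value:

```shell
# Encode the password for use in the Secret's data section
echo -n 'SuperSecretPassword007' | base64

# Decode it again to double-check the stored value
echo -n 'U3VwZXJTZWNyZXRQYXNzd29yZDAwNw==' | base64 -d
```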
Persistent storage
Once you want the environment to survive restarts / pod re-creation, persistent storage is needed. The easiest way to handle it is using volumeClaimTemplates in the StatefulSet definition.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-demo-db
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-demo-db
  template:
    metadata:
      labels:
        app: mp-demo-db
    spec:
      volumes:
        - name: init-db
          emptyDir: {}
      initContainers:
        - name: mp-db-init
          image: 'evolveum/midpoint:4.4.1-alpine'
          command: ["/bin/bash","/opt/midpoint/bin/midpoint.sh","init-native"]
          env:
            - name: MP_INIT_DB_CONCAT
              value: /opt/db-init/010-init.sql
          volumeMounts:
            - name: init-db
              mountPath: /opt/db-init
          imagePullPolicy: IfNotPresent
      containers:
        - name: mp-demo-db
          image: 'postgres:13-alpine'
          ports:
            - name: db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_INITDB_ARGS
              value: '--lc-collate=en_US.utf8 --lc-ctype=en_US.utf8'
            - name: POSTGRES_USER
              value: midpoint
            - name: POSTGRES_PASSWORD
              value: SuperSecretPassword007
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: init-db
              mountPath: /docker-entrypoint-initdb.d/
            - name: mp-demo-pg-storage
              mountPath: /var/lib/postgresql/data
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
  serviceName: mp-demo-db-service
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mp-demo-pg-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: csi-rbd-ssd
        volumeMode: Filesystem
The storageClassName definition is optional - if it is missing, the default storage class is used. With volumeClaimTemplates each replica will get its own PersistentVolumeClaim, suffixed like the pod name (-0, -1, -2, etc.).
...
volumeClaimTemplates:
  - spec:
      storageClassName: csi-rbd-ssd
...
DB Service definition
We need the Service definition so that we can target the DB in the midPoint configuration. Without the Service we cannot predict the "meeting point" in terms of IP address. In some specific cases we could predict the FQDN of the pod, but using a Service for this purpose is more than a good idea - it is the recommended approach; search Kubernetes-related resources for more information if needed.
apiVersion: v1
kind: Service
metadata:
  name: mp-demo-db
  namespace: mp-demo
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    app: mp-demo-db
  type: ClusterIP
  sessionAffinity: None
Midpoint
To start midPoint with the native repository, a "custom" config.xml file has to be used. It contains audit-related configuration which differs from the "default" config.xml and cannot be overridden by the MP_SET_* variables. A sample config.xml for the native repository is also delivered with the midPoint image. With this sample config.xml we have all we need to set the remaining values using MP_SET_* variables.
We will use an init container in a similar way as for the DB init. In this case both the init container and the main container use the same image.
For the purposes of this documentation we will use the emptyDir definition for the volume. This volume is used for the /opt/midpoint/var directory. Based on your deployment specifics you may consider a more appropriate volume type.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-pg-demo
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-pg-demo
  template:
    metadata:
      labels:
        app: mp-pg-demo
    spec:
      volumes:
        - name: mp-home
          emptyDir: {}
      initContainers:
        - name: mp-config-init
          image: 'evolveum/midpoint:4.4.1-alpine'
          command: ["/bin/bash","/opt/midpoint/bin/midpoint.sh","init-native"]
          env:
            - name: MP_INIT_CFG
              value: /opt/mp-home
          volumeMounts:
            - name: mp-home
              mountPath: /opt/mp-home
          imagePullPolicy: IfNotPresent
      containers:
        - name: mp-pg-demo
          image: 'evolveum/midpoint:4.4.1-alpine'
          ports:
            - name: gui
              containerPort: 8080
              protocol: TCP
          env:
            - name: MP_SET_midpoint_repository_database
              value: postgresql
            - name: MP_SET_midpoint_repository_jdbcUsername
              value: midpoint
            - name: MP_SET_midpoint_repository_jdbcPassword
              value: SuperSecretPassword007
            - name: MP_SET_midpoint_repository_jdbcUrl
              value: jdbc:postgresql://mp-demo-db.mp-demo.svc.cluster.local:5432/midpoint
            - name: MP_UNSET_midpoint_repository_hibernateHbm2ddl
              value: "1"
            - name: MP_NO_ENV_COMPAT
              value: "1"
          volumeMounts:
            - name: mp-home
              mountPath: /opt/midpoint/var
          imagePullPolicy: IfNotPresent
  serviceName: mp-pg-demo
Once you pass passwords (e.g. for the keystore or database) as MP_SET_* parameters, they are visible in "About" as text under "JVM properties". The more secure option may be to use the password_FILE variant instead of the password value.
Handling Secret and ConfigMap objects is very similar. To save some sample config iterations we can directly show the post-initial import as well. For this purpose the Start script offers the entry point feature; to use it, the MP_ENTRY_POINT parameter can be set. In the following example there are two XML files with user definitions in the mp-demo-poi ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mp-demo-poi
  namespace: mp-demo
data:
  test-user.xml: >
    <?xml version="1.0" encoding="UTF-8"?>
    <user>
      <name>test</name>
    </user>
  test2-user.xml: >
    <?xml version="1.0" encoding="UTF-8"?>
    <user>
      <name>test2</name>
    </user>
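Instead of writing the ConfigMap by hand, it can be generated from the XML files directly (the local file names here are assumptions matching the keys above; requires a working cluster):

```shell
# Create the ConfigMap from local object files
kubectl create configmap mp-demo-poi -n mp-demo \
  --from-file=test-user.xml --from-file=test2-user.xml
```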
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-pg-demo
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-pg-demo
  template:
    metadata:
      labels:
        app: mp-pg-demo
    spec:
      volumes:
        - name: mp-home
          emptyDir: {}
        - name: db-pass
          secret:
            secretName: mp-demo
            defaultMode: 420
        - name: mp-poi
          configMap:
            name: mp-demo-poi
            defaultMode: 420
      initContainers:
        - name: mp-config-init
          image: 'evolveum/midpoint:4.4.1-alpine'
          command: ["/bin/bash","/opt/midpoint/bin/midpoint.sh","init-native"]
          env:
            - name: MP_INIT_CFG
              value: /opt/mp-home
          volumeMounts:
            - name: mp-home
              mountPath: /opt/mp-home
          imagePullPolicy: IfNotPresent
      containers:
        - name: mp-pg-demo
          image: 'evolveum/midpoint:4.4.1-alpine'
          ports:
            - name: gui
              containerPort: 8080
              protocol: TCP
          env:
            - name: MP_ENTRY_POINT
              value: /opt/midpoint-dirs-docker-entrypoint
            - name: MP_SET_midpoint_repository_database
              value: postgresql
            - name: MP_SET_midpoint_repository_jdbcUsername
              value: midpoint
            - name: MP_SET_midpoint_repository_jdbcPassword_FILE
              value: /opt/midpoint/config-secrets/password
            - name: MP_SET_midpoint_repository_jdbcUrl
              value: jdbc:postgresql://mp-demo-db.mp-demo.svc.cluster.local:5432/midpoint
            - name: MP_UNSET_midpoint_repository_hibernateHbm2ddl
              value: "1"
            - name: MP_NO_ENV_COMPAT
              value: "1"
          volumeMounts:
            - name: mp-home
              mountPath: /opt/midpoint/var
            - name: db-pass
              mountPath: /opt/midpoint/config-secrets
            - name: mp-poi
              mountPath: /opt/midpoint-dirs-docker-entrypoint/post-initial-objects
          imagePullPolicy: IfNotPresent
  serviceName: mp-pg-demo
Persistent Storage
In the examples so far the midPoint deployment is not using persistent volumes.
Once the midPoint pod is re-created, a new keystore is generated. As a result midPoint loses the keys to decrypt encrypted values in the DB. With the re-generated keystore it is not even possible to log in as administrator. If it is just a testing environment, you have to recreate both the DB and midPoint, or utilize a persistent volume. An alternative approach is handled in the "requirements" to run clustered midPoint - keystore.
Other things that may be "lost" are exports, logs, and post-initial objects (or rather their ".done" status markers).
To prevent this situation, the midPoint home directory or a subtree of it can be located on persistent storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mp-pg-demo
  namespace: mp-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mp-pg-demo
  template:
    metadata:
      labels:
        app: mp-pg-demo
    spec:
      volumes:
        - name: mp-home
          emptyDir: {}
        - name: db-pass
          secret:
            secretName: mp-demo
            defaultMode: 420
        - name: mp-poi
          configMap:
            name: mp-demo-poi
            defaultMode: 420
      initContainers:
        - name: mp-config-init
          image: 'evolveum/midpoint:4.4.1-alpine'
          command: ["/bin/bash","/opt/midpoint/bin/midpoint.sh","init-native"]
          env:
            - name: MP_INIT_CFG
              value: /opt/mp-home
          volumeMounts:
            - name: mp-home
              mountPath: /opt/mp-home
          imagePullPolicy: IfNotPresent
      containers:
        - name: mp-pg-demo
          image: 'evolveum/midpoint:4.4.1-alpine'
          ports:
            - name: gui
              containerPort: 8080
              protocol: TCP
          env:
            - name: MP_ENTRY_POINT
              value: /opt/midpoint-dirs-docker-entrypoint
            - name: MP_SET_midpoint_repository_database
              value: postgresql
            - name: MP_SET_midpoint_repository_jdbcUsername
              value: midpoint
            - name: MP_SET_midpoint_repository_jdbcPassword_FILE
              value: /opt/midpoint/config-secrets/password
            - name: MP_SET_midpoint_repository_jdbcUrl
              value: jdbc:postgresql://mp-demo-db.mp-demo.svc.cluster.local:5432/midpoint
            - name: MP_UNSET_midpoint_repository_hibernateHbm2ddl
              value: "1"
            - name: MP_NO_ENV_COMPAT
              value: "1"
          volumeMounts:
            - name: mp-home
              mountPath: /opt/midpoint/var
            - name: db-pass
              mountPath: /opt/midpoint/config-secrets
            - name: mp-poi
              mountPath: /opt/midpoint-dirs-docker-entrypoint/post-initial-objects
            - name: mp-demo-poi-storage
              mountPath: /opt/midpoint/var/post-initial-objects
          imagePullPolicy: IfNotPresent
  serviceName: mp-pg-demo
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mp-demo-poi-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 128Mi
        storageClassName: csi-rbd-ssd
        volumeMode: Filesystem
The storageClassName definition is optional - if it is missing, the default storage class is used. With volumeClaimTemplates each replica will get its own PersistentVolumeClaim, suffixed like the pod name (-0, -1, -2, etc.).