Clustering demo

This demo shows how to run two midPoint nodes working against a common midPoint repository. For more information, see Clustering / high availability setup.

The image can be found in the Evolveum/midpoint-docker GitHub project.

Starting

$ cd demo/clustering
$ docker-compose up
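
If you prefer a quiet console, you can instead start the composition in detached mode and check the state of the containers afterwards (standard docker-compose commands, shown here as a convenience):

$ docker-compose up -d   # start the composition in the background
$ docker-compose ps      # list the three containers and their current state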

After the docker-compose up command you may see error messages from midpoint_server_node_a_1 or midpoint_server_node_b_1. You can safely ignore them: both midPoint nodes try to initialize the same repository at the same time. These errors should appear only during the first start of the demo.

When the docker-compose up command finishes successfully, you should see something like this on the console:

midpoint_server_node_b_1  | 2019-02-25 10:15:36,855 [] [main] INFO (org.springframework.boot.web.embedded.tomcat.TomcatWebServer): Tomcat started on port(s): 8080 (http) with context path '/midpoint'
midpoint_server_node_b_1  | 2019-02-25 10:15:36,879 [] [main] INFO (com.evolveum.midpoint.web.boot.MidPointSpringApplication): Started MidPointSpringApplication in 85.847 seconds (JVM running for 87.726)
midpoint_server_node_a_1  | 2019-02-25 10:15:37,140 [] [main] INFO (org.springframework.boot.web.embedded.tomcat.TomcatWebServer): Tomcat started on port(s): 8080 (http) with context path '/midpoint'
midpoint_server_node_a_1  | 2019-02-25 10:15:37,160 [] [main] INFO (com.evolveum.midpoint.web.boot.MidPointSpringApplication): Started MidPointSpringApplication in 82.624 seconds (JVM running for 85.748)

Now you can log into midPoint using the http://localhost:8080/midpoint URL for node A and the http://localhost:8081/midpoint URL for node B, as user administrator with password 5ecr3t.
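
To verify from the command line that both nodes answer HTTP requests, a quick check using curl (assuming the default port mappings described below; each command should print 200 once the respective node is fully started):

$ curl -sSL -o /dev/null -w '%{http_code}\n' http://localhost:8080/midpoint/
$ curl -sSL -o /dev/null -w '%{http_code}\n' http://localhost:8081/midpoint/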

You can safely ignore console messages like this:

clustering_midpoint_data_1  | ERROR:  could not serialize access due to read/write dependencies among transactions
clustering_midpoint_data_1  | DETAIL:  Reason code: Canceled on identification as a pivot, during write.
clustering_midpoint_data_1  | HINT:  The transaction might succeed if retried.

This is part of the standard midPoint conflict resolution process. The transactions in question really are retried and eventually succeed.
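
If you want to watch these retries yourself, you can follow the log of the database container (container names are listed in the next section):

$ docker logs -f clustering_midpoint_data_1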

Containers

The demo/clustering composition contains the following containers:

| Container name | Description |
| --- | --- |
| `midpoint_server_node_a_1` | The standard container providing midPoint functionality. It contains a standalone Tomcat running midPoint application node A. |
| `midpoint_server_node_b_1` | The standard container providing midPoint functionality. It contains a standalone Tomcat running midPoint application node B. |
| `clustering_midpoint_data_1` | Hosts the midPoint repository; this time it is implemented on a PostgreSQL 9.5 database. |
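
To poke around inside one of the server containers, you can run a command in it directly; the home directory path below is an assumption based on the standard midPoint image layout:

$ docker exec -it midpoint_server_node_a_1 ls /opt/midpoint/var   # midPoint home of node A (path assumed)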

Communication

The containers publish the following TCP ports. ("Port mapped to localhost" is the host port to which the container port is mapped, i.e. where it can be reached from the outside.)

| Container | Port number | Port mapped to localhost | Description |
| --- | --- | --- | --- |
| `clustering_midpoint_server_node_a_1` | 8080 | 8080 | HTTP port to be used to connect to the midPoint application |
| `clustering_midpoint_server_node_b_1` | 8080 | 8081 | HTTP port to be used to connect to the midPoint application |
| `clustering_midpoint_data_1` | 5432 | 5432 | Port used to connect to the PostgreSQL database |
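
Because the database port is published to the host, you can also look directly into the shared repository. A sketch assuming a locally installed psql client and the default credentials from the configuration table below (psql prompts for the password stored in the database_password.txt secret); in the generic repository schema, cluster members should be registered in the m_node table, so both NodeA and NodeB are expected to appear there:

$ psql -h localhost -p 5432 -U midpoint -c 'SELECT * FROM m_node;' midpoint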

Docker volumes

The following volumes are created to persist data and other relevant files.

| Volume name | Description | Used by container |
| --- | --- | --- |
| `clustering_midpoint_home_node_a` | The midPoint node A home directory. Contains schema extensions, logs, custom libraries, custom ConnId connectors, and so on. | `midpoint_server_node_a_1` |
| `clustering_midpoint_home_node_b` | The midPoint node B home directory. Contains schema extensions, logs, custom libraries, custom ConnId connectors, and so on. | `midpoint_server_node_b_1` |
| `clustering_midpoint_data` | Volume hosting the PostgreSQL database used by midPoint. | `clustering_midpoint_data_1` |
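
To find out where Docker stores a volume's data on the host, you can inspect it:

$ docker volume inspect clustering_midpoint_data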

Configuring the composition

The following configuration properties are supported. Please refer to the main documentation page for their explanation.

| Property | Default value |
| --- | --- |
| REPO_DATABASE_TYPE | postgresql |
| REPO_URL | default |
| REPO_HOST | midpoint_data |
| REPO_PORT | default |
| REPO_DATABASE | midpoint |
| REPO_USER | midpoint |
| REPO_PASSWORD_FILE | /run/secrets/mp_database_password.txt |
| REPO_MISSING_SCHEMA_ACTION | create |
| REPO_UPGRADEABLE_SCHEMA_ACTION | stop |
| REPO_SCHEMA_VERSION_IF_MISSING | |
| REPO_SCHEMA_VARIANT | |
| MP_MEM_MAX | 2048m |
| MP_MEM_INIT | 1024m |
| JAVA_OPTS | -Dmidpoint.nodeId=NodeA -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=20001 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/run/secrets/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/run/secrets/jmxremote.access -Dmidpoint.taskManager.clustered=true |
| MP_KEYSTORE_PASSWORD_FILE | /run/secrets/mp_keystore_password.txt |
| TIMEZONE | UTC |

Node B differs in only one property: -Dmidpoint.nodeId=NodeB. You can tailor these to your needs.
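
To check which values are actually passed to the containers, including the per-node JAVA_OPTS, you can print the fully resolved composition (run from the demo/clustering directory):

$ docker-compose config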

The following Docker secrets are used:

| Secret | Location |
| --- | --- |
| mp_database_password.txt | configs-and-secrets/midpoint/application/database_password.txt |
| mp_keystore_password.txt | configs-and-secrets/midpoint/application/keystore_password.txt |

You can modify or replace these files as needed.
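
To confirm what a container actually sees after you change a secret, you can read the mounted file inside it (the container name is assumed from the tables above; the in-container path comes from the REPO_PASSWORD_FILE default):

$ docker exec midpoint_server_node_a_1 cat /run/secrets/mp_database_password.txt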
