values.yaml file of the Helm chart used for deployment. See the Large Lab Environment's
values.yaml as an example: https://github.com/openMF/ph-ee-env-labs/blob/master/helm/payment-hub-large/values.yaml. The following list summarizes the most important tuneable parameters and their recommended values:
clustersize - number of Zeebe broker nodes in the cluster. We use 3 nodes for the Large Lab Environment.
partitionCount - we used 15 partitions for optimal distribution among broker nodes.
replicationFactor - we set this to 2 to strike a balance between fault tolerance (each partition is replicated to two brokers) and performance.
cpuThreadCount - we set this to (number of CPU cores in a node - ioThreadCount), so we used 6.
ioThreadCount - our measurements showed optimal performance using 2 IO threads.
JavaOpts - this drives the JVM tuning parameters inside the Zeebe brokers, including garbage collector settings. We experimented with various approaches; somewhat surprisingly, the best results came from these settings:
-Xms32g -Xmx32g -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:MaxRAMPercentage=25.0 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
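Taken together, the parameters above might appear in a values.yaml roughly like the sketch below. The key names and nesting here are assumptions for illustration; the linked Large Lab Environment chart is the authoritative reference.

```yaml
# Hypothetical values.yaml excerpt combining the tuning parameters above.
# Actual key paths may differ in the real chart.
zeebe:
  clustersize: 3           # Zeebe broker nodes in the cluster
  partitionCount: 15       # 15 partitions spread across 3 brokers
  replicationFactor: 2     # each partition replicated to 2 brokers
  cpuThreadCount: 6        # CPU cores per node minus ioThreadCount
  ioThreadCount: 2         # measured optimum
  JavaOpts: >-
    -Xms32g -Xmx32g -XX:+UseParallelGC
    -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10
    -XX:MaxRAMPercentage=25.0 -XX:GCTimeRatio=4
    -XX:AdaptiveSizePolicyWeight=90
```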
ph-ee-importer-es - this component provides a scalable ElasticSearch export. Again, the working setup can be checked in the Large Lab Environment's Helm chart at https://github.com/openMF/ph-ee-env-labs/tree/master/helm/payment-hub-large
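One plausible way to make the ElasticSearch export scale horizontally is to run multiple importer replicas. The snippet below is only a sketch; the replicas key and its location are assumptions, so check the linked chart for the actual structure.

```yaml
# Hypothetical: scale the ElasticSearch importer by running several replicas.
ph-ee-importer-es:
  replicas: 3
```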
3x Standard_D13_v2. This turned out to be a relatively cost-effective approach while still providing plenty of resources for the Zeebe cluster.