On each instance, the user is running a large SAP PI installation with multiple application servers and multiple server nodes. The user has to scale up the environment because of an expected increase in message volume.
There is a trade-off in the overall number of server nodes because of the internal architecture of the current PI release. Every additional server node adds overhead to the cluster communication, e.g. for EOIO message processing or CPACache operations. Adding server nodes therefore increases the PI-internal communication overhead and can also lead to stability issues, for example if one of the server nodes does not answer a cluster broadcast in time. The overall number of server nodes should therefore be kept as low as possible.
The minimum requirement is two server nodes per application server for software failover. If the hardware of the application server has spare capacity, the typical solution is to add further server nodes. Rather than adding server nodes, this note suggests changing the configuration of the existing server nodes to increase the message throughput per node. This tutorial explains the changes required to increase the overall capacity of a server node.
The suggestions below will increase the overall resource consumption, on the CPU as well as on the memory side. The prerequisite is that the hardware can handle this additional resource consumption. It is especially important that enough physical RAM is available, as no swapping should take place on a PI system.
Furthermore, the user has multiple application servers (>3) configured. To scale up such an environment further, additional server nodes would normally be required. Instead, the capacity of the existing server nodes should be increased, rather than running more than 2 server nodes per instance (in an installation with multiple application servers).
The following configuration changes should be applied on all server nodes:
Java Heap Configuration
The initial and maximum heap size should be increased considerably, from 2 GB to 4 GB, to allow a higher message throughput. This also avoids OutOfMemory errors when large messages are processed. To do so, set -Xms4096m in the Java parameters and enter the maximum heap size in the corresponding field of the configtool. (Ensure that -Xmx is not set in the list of Java parameters.) Do not set different values for the initial and maximum heap size, to avoid long GC pauses caused by heap compaction.
The new generation size of the heap should also be increased substantially. The parameter for changing the new size differs between the available Virtual Machines:
- For the SAP VM (PI 7.10 and higher) no adaptation is required, as the new size is calculated by default as a ratio (1/6 by default) of the maximum heap size.
- For HP and Sun VM set -XX:NewSize=682m -XX:MaxNewSize=682m
- For IBM VM set the parameter -Xmn1500m
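The relationship between the values above can be checked with a short calculation. The sketch below (a hypothetical helper, not part of any SAP tool) applies the SAP VM's default 1/6 ratio to the 4 GB heap recommended earlier and arrives at the 682 MB value quoted for the HP and Sun VMs:

```python
# Sketch: the SAP VM derives its default new size as a ratio of the max heap.
# Function name and signature are illustrative, not an SAP API.
def default_new_size_mb(max_heap_mb: int, ratio: float = 1 / 6) -> int:
    """Return the default new generation size in MB for a given max heap."""
    return int(max_heap_mb * ratio)

# 4096 MB max heap with the default 1/6 ratio
print(default_new_size_mb(4096))  # 682 -> matches -XX:NewSize=682m
```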
Java Perm Size
With the enlarged heap, more class information is stored in the Perm Space, and the time between two garbage collection runs also increases. Therefore the perm size has to be increased as well: -XX:PermSize=1024m and -XX:MaxPermSize=1024m
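Taken together, the heap, new size, and perm size recommendations above amount to the following parameter set per server node. This is a sketch for the HP/Sun VM case; on the SAP VM the new size entries are omitted, and the exact entry fields differ between configtool versions:

```
-Xms4096m            (initial heap; maximum heap set in the configtool field, not via -Xmx)
-XX:NewSize=682m
-XX:MaxNewSize=682m
-XX:PermSize=1024m
-XX:MaxPermSize=1024m
```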
Number of Threads
To allow a higher message throughput, the number of available system and application threads has to be increased. Increase the MaxThreadCount for application threads (managers -> ApplicationThreadManager) from 350 to 500. For system threads (managers -> ThreadManager), increase MaxThreadCount to 200.
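As a sketch, the two thread manager settings described above would be maintained in the configtool as follows (paths as named in the text; the default shown is taken from the note):

```
managers -> ApplicationThreadManager : MaxThreadCount = 500   (default: 350)
managers -> ThreadManager            : MaxThreadCount = 200
```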
If the user is utilizing a high number of (Java-based) graphical or Java mappings in the classical PI ABAP-based scenario, the number of parallel mapping connections should also be increased to achieve a higher throughput. By default, at most 20 mappings can be executed in parallel per server node, because of the limit on the available JCo connections. This should typically be increased to 30.
To do so, first raise the maximum number of allowed JCo connections: in the Visual Admin (NW 7.0x), change MaxConnections and MaxProcesses to 30 in the properties of the JCo RFC Provider service. Afterwards, set the Server Count value of the JCo destination AI_RUNTIME_<your SID> to 30.
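The three JCo-related settings above can be summarized as the following sketch (the placeholder <your SID> must be replaced with the actual system ID):

```
JCo RFC Provider service (Visual Admin, NW 7.0x):
    MaxConnections = 30
    MaxProcesses   = 30
JCo destination AI_RUNTIME_<your SID>:
    Server Count   = 30
```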
In PI 7.10 and higher, the same change has to be performed using the configtool / NWA. In the configtool the service is called Rfcengine.
1) Consumer Threads on Adapter Queues:
To increase the throughput in the Adapter Framework, the number of available consumer threads has to be adapted. No general guideline can be given here, as this depends on the adapters in use and the message volume of the user's scenarios.
2) Avoid the blocking of the interfaces:
The enlarged server nodes will have to process a higher message volume. It is therefore important to ensure that one interface cannot block others (e.g. a runtime-critical interface being blocked by a high-volume interface during an initial data load). Especially for the JDBC, File and EDI receiver adapters, the .system.queueParallelism.maxReceivers property should be evaluated.
To achieve a higher throughput, the adapters used in the scenarios should also be tuned. For example, the JDBC and File receiver adapters work sequentially by default (one message after the other). This can be changed by allowing parallel execution on adapter level, in case the backend can handle it: in the Processing tab of the Communication Channel, enter the number of messages to be processed in parallel by the receiver channel in the field "Maximum Concurrency".
Note: The changes above aim at increasing the overall throughput of an individual server node. Enlarging the server nodes means more resources will be demanded from the operating system, so it is critical to check the OS for potential bottlenecks, for example the number of sockets or the number of files that can be opened. SAP therefore suggests carrying out appropriate performance tests before applying the parameters above in a productive environment.