
Tuning Expensive SQL Statements


What are typical approaches to tune expensive SQL statements?

Depending on the results of the analysis of an SQL statement, the following optimizations can be considered.

Possible symptoms | Optimization | Optimization details
High number of executions

Reduce high number of executions

Check from application perspective if the number of executions can be reduced, e.g. by avoiding identical SELECTs or adjusting the application logic.

High number of selected records

Reduce number of selected records

Check from application perspective if you can restrict the number of selected records by adding further selection conditions or modifying the application logic.

Check if you can reduce the amount of relevant data by archiving or cleanup of technical tables (SAP doc 2388483).

High lock wait time due to record locks

Reduce record lock contention

Check from application perspective if you can reduce concurrent changes of same records.

Check from application perspective if you can reduce the critical time frame between change operation and next COMMIT.

High lock wait time due to object locks

Reduce object lock contention

Check if you can schedule critical offline DDL operations less often or at times of lower workload.

Check if you can use online instead of offline DDL operations.

High total execution time, significant amount of thread samples pointing to internal lock waits

Reduce internal lock contention

Check if you can reduce the internal lock wait time (SAP doc 1999998).
High system CPU consumption

Optimize operating system calls

See SAP doc 2100040 and check for system CPU optimizations based on the related call stacks and executed operating system calls.

Execution time higher than expected, optimal index doesn't exist

Table and column scan related thread methods and details like:

searchDocumentsIterateDocidsParallel, IndirectScanBvOutJob<range>, IndirectScanVecOutJob<range>, JEJobReadIndexChunked, JobParallelMgetSearch, scanWithoutIndex, sparseSearch, SparseScanRangeVecJob

Call stacks containing:

sse_icc_lib::mgetSearchi_SSE4impl, sse_icc_lib::mgetSearchi_SSE4, TRexUtils::IndexVectorRef::mgetSearch, TRexUtils::JobParallelMgetSearch::run(), TRexUtils::Parallel::JobBase::runEx(), TRexUtils::Parallel::JobBase::run
Synchronize index and application design

Check from an application perspective if you can adjust the database request so that existing indexes can be used efficiently.

Create a new index or adjust an existing index so that the SQL statement can be processed efficiently:

  • Make sure that selective fields are contained in the index
  • Use indexes that don't contain more fields than specified in the WHERE clause

See SAP doc 2321573 for more information.
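As an illustration of these two rules (table and column names are hypothetical), an index on the selective field of the WHERE clause might look like this:

```sql
-- Hypothetical statement: SELECT ... FROM "ZORDERS" WHERE "MANDT" = ? AND "ORDER_NUMBER" = ?
-- A single-column index on the selective field is usually sufficient; unselective
-- columns like the client (MANDT) provide little additional filtering benefit.
CREATE INDEX "ZORDERS~ONO" ON "ZORDERS" ("ORDER_NUMBER");
```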

Execution time higher than expected, negative impact by existing partitioning

Optimize partitioning layout

Consider the following possible optimization approaches:

  • Make sure that existing partitioning optimally supports the most important SQL statements (e.g. via partition pruning, load distribution and minimization of inter-host communication).
  • Define partitioning on as few columns as possible. A high number of columns used for the partitioning criteria (e.g. HASH partitioning on 10 primary key columns) can result in significant internal overhead during partition pruning evaluation. Call stacks containing QueryMediator::FilterProcessor::getPruningEntries, QueryMediator::PruningOptimization::getPruningEntries, QueryMediator::FilterProcessor::mergePruningEntries, QueryMediator::PruningOptimization::mergePruningEntries, TRexAPI::FilterExpression::insertPruningEntry or TRexAPI::Partitioning::PruningEntry::PruningEntry are good indications for this kind of overhead.
  • For technical reasons partitioning can sometimes introduce performance overhead in combination with join engine accesses. In this case you can check if it is possible to eliminate the use of the join engine, e.g. by removing a DISTINCT aggregation.
  • Use as few partitions as possible. A high number of partitions can result in a significant overhead (e.g. threads with method getColumnStat (SAP doc 2114710) when for each individual partition column statistics need to be retrieved).
  • Try to avoid changes of records that require a remote uniqueness check or a partition move (i.e. changes of partitioning key columns or primary key columns), because a significant overhead is imposed. See SAP doc 2312769 for more information.
  • UPSERT operations on partitioned tables can require significant time in module TrexStore::UdivListContainerMVCC::checkValidEqualSSN (SAP doc 2373312). Increasing the SAP profile parameter dbs/hdb/cmd_buffersize can reduce the overhead.
  • Partitioning in context of data aging (SAP doc 2416490) may result in decreased performance of UPDATE / UPSERT operations (SAP doc 2387064) in module TRexAPI::TableUpdate::execute_update_partitioning_attribute. The related thread method is SearchPartJob.
  • If it is technically possible to disable partitioning, you can consider undoing the partitioning.
  • Starting with SAP HANA 1.00.122 OLTP accesses to partitioned tables are executed single-threaded and so the runtime can be increased compared to a parallelized execution (particularly in cases where a high amount of records is processed). You can set indexserver.ini -> [joins] -> single_thread_execution_for_partitioned_tables to 'false' in order to allow also parallelized processing (e.g. in context of COUNT DISTINCT performance on a partitioned table).
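The parameter mentioned in the last bullet could be set as follows (a sketch; adjust layer and scope to your environment):

```sql
-- Allow parallelized OLTP processing on partitioned tables again
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('joins', 'single_thread_execution_for_partitioned_tables') = 'false'
  WITH RECONFIGURE;
```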
Long runtime with OR condition having selection conditions on both tables

Avoid OR in this context

If an OR concatenation is used and the terms reference columns of more than one table, a significant overhead can be caused by the fact that a Cartesian product of both individual result sets is required in some cases.

If you face performance issues in this context, you can check if the problem disappears with a simplified SQL statement accessing only one table. If yes, you should check if you can avoid joins with OR concatenated selection conditions on both tables.
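A possible rewrite, assuming hypothetical tables T1 and T2, replaces the cross-table OR with a UNION of two simpler statements:

```sql
-- Problematic: the OR references columns of both join partners
SELECT T1.ID FROM T1 JOIN T2 ON T1.ID = T2.ID
  WHERE T1.STATUS = 'A' OR T2.FLAG = 'X';

-- Possible rewrite: one selection condition per branch, duplicates removed by UNION
SELECT T1.ID FROM T1 JOIN T2 ON T1.ID = T2.ID WHERE T1.STATUS = 'A'
UNION
SELECT T1.ID FROM T1 JOIN T2 ON T1.ID = T2.ID WHERE T2.FLAG = 'X';
```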

Long runtime with join conditions concatenated with OR

Avoid OR in this context or upgrade to SAP HANA SPS 12

Join conditions concatenated with OR (e.g. "A.X = B.X1 AND A.Y = B.Y1 OR A.X = B.X2 AND A.Y = B.Y2") can impose a significant performance overhead up to SAP HANA SPS 11. In this situation you should either avoid this scenario from the application side or consider an upgrade to SAP HANA >= SPS 12 where the optimized Hashed Disjunctive Join is available.

Long runtime with non-unique multi-column index on join columns

Consider single-column index

The SAP HANA join engine may disregard columns during a join if they are also specified as selection condition in the WHERE clause ("<column> = ?"). This is a typical constellation for the client column (MANDT, CLIENT, ...). As a consequence it can happen that a secondary index on this and other columns isn't used although it looks perfect. In order to avoid this scenario, you should create non-unique column store indexes as single-column indexes rather than adding columns like client providing limited benefit in terms of filtering.

This issue doesn't apply to row store and unique / primary key indexes.

See SAP doc 2160391 for more information related to SAP HANA indexes.

Execution time higher than expected, significant portion for accessing delta storage

Optimize delta storage

Make sure that the auto merge mechanism is properly configured. See SAP doc 2057046 for more details.

Consider smart merges controlled by the application scenario to make sure that the delta storage is minimized before critical processing starts.
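A smart merge can be requested by the application before critical processing starts, e.g. (table name hypothetical):

```sql
-- Ask SAP HANA to decide, based on its smart merge criteria, whether a merge is worthwhile
MERGE DELTA OF "ZORDERS" WITH PARAMETERS ('SMART_MERGE' = 'ON');
```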

Execution time slightly higher than expected

Change to row store

In general the number of tables in the row store should be kept small, but under the following circumstances it is an option to check if the performance can be optimized by moving a table to the row store:

  • Involved table located in column store and not too large (<= 2 GB)
  • Many records with many columns selected or a very high number of quick accesses with small result sets performed
Execution time sporadically increased

Avoid resource bottlenecks

Check if peaks correlate to resource bottlenecks (CPU, memory, paging) and eliminate bottleneck situations.

Execution time higher than expected, significant portion for sorting (trex_qo trace: doSort)

Optimize sorting

Sort operations (e.g. related to ORDER BY) are particularly expensive if all of the following conditions are fulfilled:

  • Sorting of a high number of records
  • Sorting of more than one column
  • Leading sort column has rather few (but more than 1) distinct values

In order to optimize the sort performance you can check from an application side if you can reduce the number of records to be sorted (e.g. by adding further selection conditions) or if you can put a column with a high number of distinct values at the beginning of the ORDER BY.

Increased runtime of INSERT operation on table with hybrid LOB field

Check disk I/O performance
Adjust hybrid LOB memory threshold

INSERTs into hybrid LOBs have to perform disk I/O if the configured memory threshold is exceeded and the data is stored in a disk LOB. See SAP doc 2220627 for more information related to SAP HANA LOBs. In order to optimize performance you can proceed as follows:

  • Check for disk I/O bottlenecks and eliminate them in order to optimize the I/O performance (SAP doc 1999930).
  • Consider increasing the MEMORY THRESHOLD configuration for the hybrid LOB (SAP doc 1994962) so that more data is kept in memory. Be aware that this will increase the memory requirements of SAP HANA, so you have to find an individual trade-off between memory consumption and performance.
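Assuming a hypothetical table and LOB column, the memory threshold could be raised like this (see SAP doc 1994962 for the exact options):

```sql
-- Keep LOB values up to 1,000,000 bytes in memory instead of writing them to a disk LOB
ALTER TABLE "ZDOCS" ALTER ("CONTENT" BLOB MEMORY THRESHOLD 1000000);
```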

Increased runtime of INSERT operations into ABAP tables with INDX structure

Consider adjustment of cluster line size

Tables with INDX structure (e.g. INDX, BALDAT, SOC3, SSCOOKIE) are populated via EXPORT TO DATABASE commands on ABAP side. If bigger chunks of data are exported, the data is split into pieces based on the length of the CLUSTD field of the table. This can result in significant communication overhead. In this case you can reduce the number of INSERT operations and the network overhead by increasing the length of the CLUSTD column.

Starting with SAP ABAP kernel 7.49 it is possible to store the whole EXPORT data in a single table line by defining the CLUSTD column with data element INDX_CLUST_BLOB and dictionary type RAWSTRING. This will reduce the amount of INSERT operations to a minimum.

Long BW query runtime
Long execution time of TREXviaDBSL calls
Long execution time of TREXviaDBSLWithParameter calls

Optimize BW queries

If you aren't yet aware of the actual query, get in touch with the application team to understand which application operations result in long running TREXviaDBSL calls and start the analysis from that perspective. Once you are able to reproduce the issue, more detailed traces can be activated as required.

BW allows executing queries in different execution modes which control the utilization of SAP HANA internal engines. Furthermore it is possible and usually recommended to convert InfoCubes to the SAP HANA optimized type. See "BW on HANA and the Query Execution Mode" and SAP docs 1682992 and 1734002 for further information, and check if modifying the execution mode and / or the InfoCube type improves the performance of BW queries. SAP doc 2016832 describes how to adjust the query execution mode.

SAP doc 1931671 describes how you can configure BW so that queries in general or for particular tables make use of the OLAP engine or the join engine.

SAP doc 2378575 describes how you can translate TREX_EXT_GET_NEXT_AGGR_RESULT calls into SQL rather than using TREXviaDBSL calls.

Starting with SAP HANA SPS 11 it is possible to activate the SQL cache for TREXviaDBSL calls:

Parameter: indexserver.ini -> [sql] -> plan_cache_trexviadbsl_enabled

Default: false

Details: Master switch to activate / deactivate SQL cache information for TREXviaDBSL calls

Be aware that caching is only possible for OLAP engine queries, not for join engine queries.

When the SQL cache is activated for the TREXviaDBSL queries, you can find individual 'TrexViaDbsl<hex_string>' entries in addition to the summarized TREXviaDBSL entry. As a consequence the utilization of the SQL cache can increase by about 20 %.

It is planned that the <hex_string> can then be used for reproducing the statement execution on SAP HANA side, e.g.:

CALL TREXVIADBSL('olapSearch', TO_BLOB(HEXTOBIN('<hex_string>')), ?); 

SAP doc 2368960 provides a coding correction that is required to use PlanViz (SAP doc 2073964) for TREXviaDBSL calls.

SAP doc 2368305 introduces a "Generate PlanViz file" debug flag on BW side as of BW 7.50 SP 05.

TREXviaDBSL is used in single node scenarios while TREXviaDBSLWithParameter is used in scale-out environments in order to provide an optimal location of query execution. SAP doc 2515614 describes a bug where TREXviaDBSL is used in scale-out scenarios, so that some communication overhead is possible.

Long BW DTP and transformation runtime using SAP HANA execution mode

Optimize BW DTP and transformation setup

See SAP docs 2033679 (BW 7.40 SP 05 - 07) and 2067912 (BW 7.40 SP 08 - 10) in order to make sure that all recommended fixes are implemented. For example, SAP doc 2133987 provides a coding correction for SAPKW74010 in order to speed up delta extractions from a DSO.

See SAP doc 2057542 and consider SAP HANA based transformations available as of BW 7.40 SP 05.

Long runtime of FOR ALL ENTRIES query

Adjust FOR ALL ENTRIES transformation

If a FOR ALL ENTRIES selection in SAP ABAP environments takes long and consumes a lot of resources, you can consider adjusting the way the database requests are generated.

  • If multiple columns in the WHERE condition reference the FOR ALL ENTRIES list and a SQL statement based on OR concatenations is generated, you can use the DBSL hint dbsl_equi_join as described in SAP docs 1622681 and 1662726. Be aware that this option will only work if references to the FOR ALL ENTRIES list are on consecutive columns in the WHERE clause and that only "=" references are allowed. If these conditions are not fulfilled, a termination with a short dump will happen.
  • In order to take optimal advantage of the dbsl_equi_join hint in BW, you have to make sure that SAP docs 2007363 (7.40 SPS 09) and 2020193 (7.40 SPS 08) are implemented.

See SAP doc 2142945 for more information regarding SAP HANA hints.

Frequent queries with '/* Buffer Loading */' comment

Adjust SAP ABAP table buffering

These queries are triggered by SAP ABAP table buffer loads. Check and optimize the SAP ABAP table buffer configuration (transactions ST02, ST10, AL12):

  • Sufficient buffer size
  • Limited number of swaps and invalidations
  • No unnecessary buffering of large tables

See SAP doc 2103827 for configuring the SAP HANA table buffer.

Long runtime of query on SAP HANA DDIC objects

Assign CATALOG READ

If the CATALOG READ privilege is not assigned to a user, queries on SAP HANA DDIC objects like TABLES, INDEXES or TABLE_COLUMNS can take much longer, because SAP HANA needs to filter the relevant (own) data and suppress the display of information from other schemas. Make sure that CATALOG READ is assigned (either directly or indirectly via roles) to users having to access SAP HANA DDIC objects. You can use SQL: "HANA_Security_GrantedRolesAndPrivileges" (SAP doc 1969700) to check if this privilege is already assigned or not.
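The privilege can be granted directly or via a role, e.g. (user name hypothetical):

```sql
-- CATALOG READ is a system privilege; granting it avoids the per-schema filtering overhead
GRANT CATALOG READ TO MONITORING_USER;
```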

Long runtime of queries on monitoring views

Use fewer and larger selections

Select information from other sources

For technical reasons accesses to monitoring views like M_TABLE_LOCATIONS or M_TABLE_PERSISTENCE_LOCATIONS often scan the complete underlying structures regardless of the WHERE clause. Thus, you should avoid frequent selections of small amounts of data (e.g. one access per table) and use fewer selections reading larger amounts of data (e.g. for all tables of a schema at once).

Alternatively check if you can select the required information from other sources, e.g. monitoring view M_CS_TABLES.
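For example, instead of one M_TABLE_LOCATIONS access per table, a single selection for the whole schema avoids repeated full scans of the underlying structures (schema and table names hypothetical):

```sql
-- Avoid: one call per table, each scanning the complete underlying structures
SELECT HOST, PORT FROM M_TABLE_LOCATIONS
  WHERE SCHEMA_NAME = 'SAPSR3' AND TABLE_NAME = 'ZORDERS';

-- Prefer: one call for all tables of the schema, results processed on client side
SELECT TABLE_NAME, HOST, PORT FROM M_TABLE_LOCATIONS
  WHERE SCHEMA_NAME = 'SAPSR3';
```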

Similar overhead is also required for other monitoring views. The column FILTER_PUSHDOWN_TYPE in internal view SYS.M_MONITOR_COLUMNS_ provides information to what extent a column supports the pushdown of filters.

Wrong join order

Update join statistics

An obviously wrong join order can be caused by problems with join statistics. Join statistics are created on the fly when two columns are joined the first time. Up to SPS 08 the initially created join statistics are kept until SAP HANA is restarted. This can cause trouble if join statistics were created at a time when the involved tables had a different filling level, e.g. when they were empty. In this case you can restart the indexserver in order to make sure that new join statistics are created. Starting with SPS 09 SAP HANA will automatically invalidate join statistics (and SQL plans) when the size of an involved table changed significantly.

If you suspect problems with join statistics, you can create a join_eval trace (SAP doc 2119087) and check for lines like:

JoinEvaluator.cpp(01987) : getJoinStatistics: SAPSR3:CRMM_BUAG (-1)/BUAG_GUID<->SAPP25:CRMM_BUAG_H (-1)/BUAG_GUID
JoinEvaluator.cpp(01999) : jStats   clSizesL:0 clCntAL:0 cntSJL:0 TTL:0 clSizesR:0 clCntAR:0 cntSJR:0 TTR:0

Zero values in the second line can indicate that join statistics were created at a time when one table was empty.

Join statistics are a SAP HANA internal concept and so they can't be displayed or adjusted.

High runtime, not reproducible in SAP HANA Studio / DBACOCKPIT

Recompile execution plan

If the runtime of a query is longer than expected and you can't reproduce the long runtime with SAP HANA Studio or DBACOCKPIT (if bind variables are used: using a prepared statement with proper bind values), the issue can be caused by an inadequate execution plan (e.g. generated based on the bind values of the first execution or based on statistical information collected during the first execution). In this case you can check if an invalidation of the related SQL cache entry can resolve the issue:

ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY <plan_id>

You can identify the plan ID related to a statement hash by executing SQL: "HANA_SQL_SQLCache" (STATEMENT_HASH = '<statement_hash>', AGGREGATE_BY = 'NONE', DATA_SOURCE = 'CURRENT'), available via SAP doc 1969700.

Depending on the factors considered during the next parsing (e.g. set of bind values, dynamic statistics information) a better execution plan may be generated. It can happen that you have to repeat the RECOMPILE command until a good set of bind values is parsed.

Even if the problem is resolved after the RECOMPILE, there is no guarantee that it is permanently fixed, because after every eviction or restart a new parsing happens from scratch. If the problem is suspected to be linked to bind values, you can consider adjusting the application so that different classes of bind values are executed with slightly different SQL statements, so that SAP HANA can parse each statement individually.

As of SPS 09 more frequent reparses happen and so the situation can improve.

As of SPS 09 you can also think about the IGNORE_PLAN_CACHE hint as a last resort (see SAP doc 2142945). Be aware that this will make the performance more predictable, but due to the permanent parsing requirements the quick executions can significantly slow down. So it should only be used in exceptional cases.

Long preparation times

Optimize parsing

See SAP doc 2124112 and check if the amount and duration of preparations can be reduced (e.g. by increasing the SQL cache or using bind variables) or if there are other ways to improve the parsing behavior of a specific query.

High runtime with range condition on multi column index

Avoid selective ranges on multi column indexes

Range conditions like BETWEEN, "<", ">", ">=", "<=" or LIKE can't be used to restrict the search in a multi column index. As a consequence the performance can be significantly worse than expected. Possible solutions are:

  • Use a single column index rather than a multi column index if possible.
  • Use "=" or "IN" instead of a range condition.
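A sketch of the second option, assuming a hypothetical multi-column index on (MANDT, STATUS):

```sql
-- Range condition on an index column: the multi column index can't restrict the search
SELECT * FROM "ZORDERS" WHERE "MANDT" = '100' AND "STATUS" BETWEEN '1' AND '3';

-- Equivalent IN list (if the value set is known and small): the index can be used
SELECT * FROM "ZORDERS" WHERE "MANDT" = '100' AND "STATUS" IN ('1', '2', '3');
```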
High runtime scanning indexed column with SPARSE or PREFIXED compression

Optimize compression

As described in SAP doc 2112604 ("What do I have to take into account in order to make sure that the tables are compressed optimally?"), there can be different reasons why scanning indexed columns with SPARSE or PREFIXED compression doesn't happen efficiently, e.g. no index support or overhead due to bugs. Follow the instructions in SAP doc 2112604 (e.g. new optimize compression run or switch to DEFAULT compression) in order to optimize the behavior.

High runtime scanning index column with advanced compression and FULL index type

Optimize compression

Starting with certain revisions, SAP HANA may use indexes with type FULL on advanced compressed columns. On affected revisions (including 2.00.020) this can cause a performance overhead in the context of joins. See SAP doc 2516807 for more information.

High runtime with multiple OR concatenated ranges on indexed column

Decompress column

Due to a design limitation with Rev. <= 100, SAP HANA doesn't use an index on an advanced compressed column if multiple OR concatenated range conditions exist on the same column. Such statements can therefore have a long runtime and high resource consumption. As a workaround you can use DEFAULT compression for the table. See SAP doc 2112604 for more information.

High runtime with multiple EXISTS in combination with OR

Consider UNION or upgrade to Rev. >= 111

Up to SAP HANA Rev. 110 multiple EXISTS semi-joins are not evaluated optimally if combined with OR. As a workaround you can check if OR can be transformed into UNION. As a permanent solution you can use SAP HANA Rev. >= 111.

High runtime, expected single column index doesn't exist

Check why expected single column index is missing

Whenever a table column is part of a primary key or unique index, an implicit single column index structure is created. Exceptions and solutions are described in SAP doc 2160391 ("What are BLOCK and FULL indexes?" -> Index type = 'NONE' -> Column indexed = 'X'). You can use SQL: "HANA_Indexes_MissingImplicitSingleColumnIndexes" (SAP doc 1969700) to display columns without the expected single column index.

High runtime with join and TOP <n>

Avoid TOP selections on unselective joins

When a TOP <n> selection is combined with an unselective join, SAP HANA may perform a significant amount of join activity on the overall data set before finally returning the TOP <n> records. Therefore you should consider the following options:

  • Provide selective conditions in the WHERE clause so that the amount of joined data is limited.
  • Avoid TOP <n> selections in unselective joins
  • Optimize the join processing, e.g. by defining optimal indexes on the join columns
High runtime with join and thread method UnifyJob

Optimize query or upgrade SAP HANA

The thread method UnifyJob indicates a specific join engine processing of intermediate results (SAP doc 2114710). The following optimizations exist:

  • Check if the underlying database request can be simplified.
  • If the thread call stack contains a "LookupHasher" module like ltt::hashtable<... JoinEvaluator::UnifyJob::LookupHasher, ltt::identity... you can consider an upgrade to SAP HANA >= Rev. 122.05 where an optimization is implemented.
High runtime with ORDER BY and TOP <n>

Avoid TOP selections in combination with ORDER BY on large data volumes

As described in question "Can sorting be supported by an appropriate index?" below, SAP HANA indexes don't support sorting. Therefore an ORDER BY requires explicit sorting, even if an index on the related column(s) exists. In cases where a high amount of records need to be sorted before the first few records are returned, the runtime can be rather high. In general you can consider the following adjustments:

  • Check from application perspective if the ORDER BY is really required and avoid it if possible.
  • Check if you can specify more selective conditions so that the amount of sorted records is reduced.
High runtime with TOP ? and LIMIT ?

Avoid bind variables with TOP and LIMIT

SAP HANA 1.0 has some internal restrictions handling bind variables in context of TOP and LIMIT, therefore this combination should be avoided.

SAP ABAP kernel 7.49 uses "LIMIT ?", so when upgrading to this kernel level, the SAP HANA performance and resource consumption should be carefully tested beforehand.

High runtime of MIN and MAX searches

Avoid frequent MIN and MAX searches on large data volumes

Indexes in SAP HANA can't be used to identify the maximum or minimum value of a column directly. Instead the whole column has to be scanned. Therefore you should avoid frequent MAX or MIN searches on large data volumes. Possible alternatives are:

  • Sequences
  • Additional selective conditions
  • Maintaining the MIN and MAX values independently in a separate table
High runtime when LIKE condition with leading place holder is evaluated

Avoid LIKE conditions with leading place holder

The evaluation of a LIKE condition with a leading place holder (e.g. '%1234') can consume significant time on large tables. The runtime can be high even if the result set was already reduced to a small amount of records (> 0) before evaluating the LIKE condition. The related call stack typically contains modules like AttributeEngine::RoDict::_getNextPattern, AttributeEngine::RoDictDefaultPages::getNext or TRexUtils::WildcardPattern::match.

High runtime with LIKE condition, bind variables and '*' or '?' as part of the bind value

Avoid '*' and '?' in bind values for LIKE

'*' and '?' are not wildcards for LIKE conditions, but due to some internal transformations they can have a negative impact on the evaluation of LIKE conditions. Therefore you should avoid these characters in LIKE values whenever possible. If the problem happens on table STXH, the cause is the coding delivered with SAP doc 2208025, which explicitly adds a '*' to the TDNAME value. A correction is available via SAP doc 2302627.

High runtime of range condition evaluation, even if result set is already restricted

Use SAP HANA >= 112.02 or check if LIKE_REGEXP is an option

SAP HANA tends to evaluate range conditions globally, even if the already analyzed predicates have reduced the result set significantly. Starting with SAP HANA Rev. 112.02 this behavior is optimized.

In certain cases you can use LIKE_REGEXP (with the appropriate search pattern) instead of LIKE as a workaround. This should be tested thoroughly because LIKE_REGEXP can also impose overhead in other scenarios.

High runtime of COUNT DISTINCT

Use SAP HANA 1.0 >= SPS 09, SAP HANA >= 2.0 or OLAP engine

If SAP HANA <= SPS 08 is used and a COUNT DISTINCT is executed on a column with a high number of distinct values, a rather large internal data structure is created regardless of the actual number of records that have to be processed. This can significantly increase the processing time for COUNT DISTINCT. As a workaround you can check if the SQL statement is processed more efficiently using the OLAP engine by using the USE_OLAP_PLAN hint (SAP doc 2142945). As a permanent solution you have to upgrade to SPS 09 or higher, so that the size of the internal data structure takes the amount of processed records into account.

Also with later SAP HANA 1.0 Revisions the COUNT DISTINCT can take a long time and consume significant space in allocator Pool/JoinEvaluator/DictsAndDocs (SAP doc 1999997) if it is used on a large partitioned table and the join engine is implicitly used. In this case you can check if the USE_OLAP_PLAN hint (SAP doc 2142945) can improve the situation. This problem is fixed with SAP HANA 2.0.
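The hint is appended to the statement, e.g. (table and column names hypothetical):

```sql
-- Force processing via the OLAP engine instead of the join engine
SELECT COUNT(DISTINCT "MATNR") FROM "ZORDERS" WITH HINT (USE_OLAP_PLAN);
```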

See SAP doc 2396894 for more details.

High mass UPDATE runtime

Use primary key for updating

Updating a high amount of records in a single command (e.g. "UPDATE ... FROM TABLE" in ABAP systems) is more efficient than performing individual UPDATEs for each record, but it still can consume significant time. A special UPDATE performance optimization is available when the UPDATE is based on a primary key. So if you suffer from long mass UPDATE runtimes you can check if you can implement an appropriate primary key. A unique index is not sufficient for this purpose, a real primary key constraint is needed.
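If the table doesn't have one yet, a primary key constraint could be added as follows (names hypothetical; this requires unique, non-NULL key values):

```sql
-- A real primary key constraint, not just a unique index, enables the UPDATE optimization
ALTER TABLE "ZORDERS" ADD CONSTRAINT "ZORDERS_PK" PRIMARY KEY ("MANDT", "ORDER_NUMBER");
```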

High runtime of TOP 1 requests

Check TOP 1 optimization

If on SAP HANA 1.0 SPS 09 to SPS 12 a query with a TOP 1 restriction (e.g. "SELECT TOP 1 ...") runs much longer than expected and call stacks (e.g. via SQL: "HANA_Threads_Callstacks", SAP doc 1969700) indicate that most of the time is spent in UnifiedTable::MVCCObject::generateOLAPBitmapMVCC, you can check if disabling the TOP 1 optimization feature can be used as a workaround (see SAP doc 2238679):

indexserver.ini -> [search] -> qo_top_1_optimization = false

If you see a lot of time spent in UnifiedTable::MVCCObject::generateOLAPBitmapMVCC with SAP HANA SPS 11 or higher, you should check if the underlying table is merged very frequently and take appropriate actions to reduce the merge rate (SAP doc 2057046).

High runtime of anti joins

Optimize anti join processing

Anti joins (EXCEPT, subquery with NOT) are often more performance critical than normal joins. The following special situations exist where particularly bad anti join performance can be observed:

  • Long runtimes of database requests with anti joins (e.g. EXCEPT) and call stacks in JoinEvaluator::LoopJob::findJoinPairsTL_native can be caused by an SAP HANA bug that is fixed with SAP HANA 2.00.010 and a corresponding SAP HANA 1.0 revision. With SAP HANA 2.0 the fix is enabled per default. With SAP HANA 1.0 the fix is disabled per default and can be activated with the hint CONSERVATIVE_CS_ANTI_JOIN_ESTIMATION or globally with the following parameter: indexserver.ini -> [sql] -> conservative_cs_anti_join_estimation_enabled = true

    As a workaround the NO_GROUPING_SIMPLIFICATION hint (SAP doc 2142945) can be used. If triggered by BW / MDX, you can also disable the RSADMIN parameter MDX_F4_USE_SQL (SAP doc 1865554).

High runtime of certain queries with Rev. 90 to 97.01 in UnifiedTable::MVCCObject coding

Upgrade to Rev. >= 97.02

Similar to the TOP 1 issue above also other queries can suffer from long runtimes in modules like UnifiedTable::MVCCObject::isTSBlockGarbageCollected or UnifiedTable::MVCCObject::generateOLAPBitmapMVCC. One main root cause is fixed as of Rev. 97.02, so an upgrade can be considered in case the performance seriously impacts production operation.

Sporadically increased runtimes of calculation scenario accesses

Check calculation engine cache size

If accesses to calculation scenarios are sometimes slow, cache displacements can be responsible. You can use SQL: "HANA_Configuration_MiniChecks" (SAP doc 1999993, check ID 460) and SQL: "HANA_CalculationEngine_CalculationScenarios" (SAP doc 1969700) to find out more. If the cache is undersized, you can increase it using the following parameter (default: 1048576):

indexserver.ini -> [calcengine] -> max_cache_size_kb = <size_kb>

See also SAP doc 1988050 for specific problems with SAP HANA <= Rev. 73.

Increased runtime of calculation view and analytic view accesses

Check modelling

In case of slow and resource-demanding accesses to a calculation view or analytic view you can check the following aspects:

  • See the SAP HANA Modelling Guide and make sure that modelling best practices are used. Adjust the view definition if required.
  • See SAP docs 2291812 and 2223597 and check if adjusting the execution engine helps to improve the performance.
  • For SQL statements on these views the following best practices should be considered:
    • Avoid data type conversions (implicit or explicit type casts) in join definitions and WHERE clauses
    • Avoid joins on calculated columns and calculations in WHERE clauses
    • Avoid non-equi join definitions
    • Avoid joining big analytic views or calculation views; use UNION instead
    • Use UNION (with constants) to combine large data sets
    • Minimize the use of expensive calculations, row-based expressions and data manipulation, including calculated attributes
  • Make sure, e.g. using PlanViz, that push-down works for selective predicates. If not, you can adjust the view or open a SAP incident on component HAN-DB for clarification.
  • See SAP doc 2500573 for more details about column pruning limitations that can negatively impact performance and memory consumption.
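
The "UNION (with constants)" best practice above can be sketched as follows; the tables SALES_ACTUAL and SALES_PLAN are hypothetical:

```sql
-- Combine two large data sets with a constant indicator column
-- instead of joining them (table names are placeholders)
SELECT 'ACTUAL' AS source, product, SUM(amount) AS amount
  FROM sales_actual
 GROUP BY product
UNION ALL
SELECT 'PLAN' AS source, product, SUM(amount) AS amount
  FROM sales_plan
 GROUP BY product;
```
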
High runtime of joins in scale-out environments Check table locations

Make sure that tables involved in critical joins are located on the same host whenever possible, so that unnecessary communication overhead between SAP HANA nodes can be avoided.
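
The current table locations of the join partners can be checked, for example, via the monitoring view M_TABLE_LOCATIONS (schema and table names below are placeholders):

```sql
-- Show on which host / port each join partner currently resides
SELECT table_name, host, port
  FROM m_table_locations
 WHERE schema_name = 'MYSCHEMA'
   AND table_name IN ('TAB1', 'TAB2');
```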

High runtime of multi-column joins on three or more tables Upgrade to Rev. >= 102.06 or Rev. >= 112.03

Due to a bug SAP HANA doesn't evaluate filter conditions efficiently if three or more tables are joined on more than one column. See SAP doc 2311087 and consider upgrading to SAP HANA >= 102.06 or >= 112.03.

High runtime in context of self-join Avoid self-join or upgrade to Rev. >= 112.04 or Rev. >= 122

If the values of two columns of the same table are compared (e.g. "MENGE" > "WAMNG"), older Revisions of SAP HANA aren't able to take a previously reduced result set into account and always work on the complete data. Starting with SAP HANA Rev. 112.04 and 122 this behavior is optimized, resulting in better performance.

High runtime accessing row store tables Check garbage collection

An unexpectedly high runtime on row store tables can be caused by the following reasons:

  • High number of versions, e.g. due to a blocked garbage collection. See SAP doc 2169283 for more information about SAP HANA garbage collection.
  • Full table scans (call stack module ptime::Table_scan::do_fetch) can slow down due to the row store memory leak (SAP doc 2362759) with SAP HANA Rev. 111 to 112.05 and 120 to 122.01. A SAP HANA restart (even without row store reorganization) optimizes the performance again.
High runtime when using fast data access Optimize or disable FDA

Starting with SAP ABAP kernel 7.45, fast data access (FDA, SAP doc 2399993) is activated by default. As a consequence, you can see database statements originating from FOR ALL ENTRIES queries on ABAP side that look like the following pattern:

SELECT /* FDA WRITE */ DISTINCT ... FROM ... ? AS "t_00" ...

The following general options exist to improve these kinds of queries:

Action Details
Global deactivation of FDA for FOR ALL ENTRIES

You can disable FDA in the context of FOR ALL ENTRIES by setting the following SAP profile parameter (changeable online):

rsdb/prefer_join_with_fda = 0

Be aware that disabling FDA globally with this parameter can have a negative impact on certain FOR ALL ENTRIES requests.

Statement specific deactivation of FDA for FOR ALL ENTRIES

You can specify the following SAP ABAP hint (SAP doc 2142945) on ABAP side in order to selectively disable FDA for a specific FOR ALL ENTRIES query:

%_HINTS HDB '&prefer_join_with_fda 0&'

SAP doc 2372341 provides an ABAP coding correction to disable FDA for accesses to tables /LIME/NQUAN and /LIME/NTREE.

SAP doc 2307383 deactivates FDA for queries in Bank Analyzer function module /BA1/F2_DB_BT_GET_VIA_CON.

SAP doc 2421534 deactivates FDA for queries in Bank Analyzer include /BA1/LF2_DB_READF08.

Statement specific adjustment of SAP HANA execution plan via hint

It may be possible to improve the SAP HANA performance by specifying a statement specific database hint like NO_USE_OLAP_PLAN (SAP doc 2142945) on ABAP side.
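
Before anchoring such a hint in the ABAP code, it can be tested directly on SQL level; the query below is only a hypothetical example:

```sql
-- Test whether avoiding the OLAP engine improves the execution plan
-- (table and column names are placeholders)
SELECT matnr, SUM(menge)
  FROM mseg
 GROUP BY matnr
WITH HINT (NO_USE_OLAP_PLAN);
```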

Upgrade to more recent SAP HANA Revision Some FDA related problems have been eliminated with newer SAP HANA Revisions, so an upgrade to a more recent Revision may resolve performance issues.
Bad performance on specific row store table, unexpected UNION ALL in execution plan Check table containers

For various reasons (e.g. when a column is added), a row store table can consist of more than one underlying container. As a consequence, SAP HANA needs to combine the results from the individual containers using UNION ALL. Existing indexes may only cover a subset of the containers, so they aren't used in the most efficient way. In order to check the number of containers and generate cleanup commands, you can use SQL: "HANA_Tables_RowStore_TablesWithMultipleContainers" (SAP doc 1969700). A table can be merged into a single container by reorganizing it.

Be aware that this reorganization requires a table lock, so concurrent accesses to the same table may be blocked. It is therefore recommended to perform it during a time of reduced workload. Furthermore, you can set a low lock wait timeout on transaction level (e.g. 10000 ms) in order to reduce the risk of long-lasting lock waits.
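
As a sketch, the lock wait timeout and a typical reorganization command look as follows; the table name is a placeholder, and the exact cleanup commands should be taken from the output of the SQL statement referenced above (SAP doc 1969700):

```sql
-- Limit lock waits of the current transaction to 10 s
SET TRANSACTION LOCK WAIT TIMEOUT 10000;

-- Merge the containers of a row store table into a single container
-- (placeholder names; verify against the generated cleanup commands)
ALTER TABLE "MYSCHEMA"."MYTABLE" RECLAIM DATA SPACE;
```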

Bad performance in context of UNION ALL Check if disabling column store UNION ALL helps

If a database access including a UNION ALL (e.g. a compatibility view access) takes a long time, it is worth checking whether disabling the column store UNION ALL operation with the hint NO_CS_UNION_ALL has a positive effect. See SAP doc 2142945 for more information related to SAP HANA hints. This can at least serve as a workaround until a final fix is available.

Long runtime of compatibility view accesses Optimize access

Compatibility views in S/4HANA scenarios (e.g. COSS, COSP) are required to map old, non-optimized database requests to the new table structures. These compatibility views are quite complex and so there is a risk of runtime regressions. The following general optimization approaches exist:

  • Use of hints like NO_CS_UNION_ALL to force better execution plans (see "Bad performance in context of UNION ALL" above)
  • Optimizations based on the suggestions in SAP docs 2185026 and 2222535
Increased runtime due to unnecessary distributed execution Check if you can transform from distributed to local execution

In the following cases it is possible to transform a distributed query (i.e. involving more than one SAP HANA node in scale-out scenarios) into a typically more efficient local query:

  • Make sure that statement routing is used (SAP doc 2200772). Otherwise a query may be executed on a node that is different from the table location. In extreme cases this can result in performance regressions by a factor of 10.
  • Check if you can locate all tables accessed in a join on the same SAP HANA node.

Be aware that in BW, tables are partitioned and distributed across different SAP HANA nodes on purpose, so local executions are often neither desired nor possible.

High runtime of CREATE VIEW commands Upgrade to newer SAP HANA Revision

A long runtime of CREATE VIEW commands (e.g. in method drRecreateSecondarySchemas for view M_CONTEXT_MEMORY in schema _SYS_SR_SITE_<site_name>) can be caused by unnecessary statistics collection, which is recognizable in the related call stacks.

This problem is fixed with SAP HANA Rev. >= 122.09.

High runtime of COMMIT operations Check disk I/O and system replication

If COMMITs take more than a few ms on average, this is an indication of a bottleneck. See SAP doc 2000000 ("How can the performance of COMMIT operations be optimized?") for more information.

Many queries in status 'Running' and thread method 'CloseStatement' Check for concurrent repository activations

Due to low-level exception handling, a repository activation (e.g. by executing procedure REPOSITORY_REST) can block many threads in method 'CloseStatement' while purging no longer required temporary tables (call stack module: ptime::TrexMD::deleteIndex). Despite the status 'Running', the CPU consumption is quite low. In SAP ABAP environments, fast data access (FDA) accesses (SAP doc 2399993) suffer to a higher extent, as FDA is based on temporary table processing.

To avoid a severe impact, refrain from running repository activations during critical business hours.

Unexplainable long runtime Optimize internal processing

If you experience a runtime of a SQL statement that is much higher than expected, but none of the above scenarios apply, a few more general options are left.

See SAP doc 2142945 and test if the performance of SQL statements improves with certain hints (e.g. USE_OLAP_PLAN, NO_USE_OLAP_PLAN), because this can provide you with more ideas for workarounds and underlying root causes.

Check if SAP docs exist that describe a performance bug for similar scenarios.

Check if the problem remains after having implemented a recent SAP HANA patch level.

Open a SAP incident on component HAN-DB in order to get assistance from SAP support.