Some widely accepted information about the Oracle database is shown not to be entirely correct upon closer examination.
1. The Oracle cost-based optimizer always selects the access with the lowest calculated costs.
This is not always true, since the cost-based optimizer also makes rule-based decisions in certain situations. For example, the optimizer sometimes ignores an access path whose calculated costs are a hundred times lower, because the first index column is not specified with "=" in the WHERE condition and the access is therefore judged as a "Guess". For more information, see Note 176754.
2. Indexes should be set up in a way that the selection fields are located at the beginning of the index.
It is much more important that fields that are not specified with "=" in WHERE conditions and that cover a large value range are located as close to the end of the index as possible. In addition, the index should not contain any gaps, that is, indexed columns that do not appear in the WHERE condition between columns that do. See also Note 766349. If these recommendations are observed and you can still choose the field sequence (at least in part), it is usually better to place nonselective fields at the beginning, since their values tend to change less often across several query executions and are therefore always processed in the same index subtree. Index compression can also be used more effectively in this case.
3. Indexes that only index columns with few instances are superfluous.
In many cases, you cannot equate the number of distinct values in a column with the selectivity of certain conditions on this column. Often, conditions on columns with few instances are very selective. Therefore, indexes on columns with few distinct values frequently improve the performance considerably. See Note 766349 for a detailed example of selectivity compared with distinct values.
4. Histograms support the CBO in the R/3 environment during the selection of the best access path, because they contain information about the value distribution of columns.
Histograms cannot be evaluated in a useful way in the R/3 environment, because SAP uses bind variables during parsing and because Bind Value Peeking is deactivated by default. For more information, also see Note 797629.
5. To determine the amount of memory that is allocated by Oracle, I only have to determine the size of the SGA.
Although the SGA is an important memory area, the PGA and the process memory can also reach significant sizes. Occasionally, the memory area allocated for the PGA can become even larger than the SGA. Therefore, you must also include the PGA and the process memory when you calculate the Oracle memory usage. For more information, see Note 789011.
6. Local connections to the database use the IPC protocol.
If SAP and Oracle run on one computer, you can, in principle, use the IPC protocol for communication. Up to Oracle 8.0, local connections were automatically changed from a TCP connection to an IPC connection by the "AUTOMATIC_IPC=ON" sqlnet.ora parameter. However, as of Oracle 8i, this mechanism is no longer available, which means that newer systems also use a local TCP connection. For more information, see Note 562403.
7. For a good performance, it is important that the database statistics are up-to-date.
The age of the database statistics does not affect the performance. It is only important that the statistics correctly reflect the current dataset and that the cost-based optimizer chooses the best access path on the basis of the existing statistics. It is not a problem if tables with static contents have statistics that are several years old. For more information, see Note 588668.
8. I cannot specify index and join hints in the ABAP when I use views.
You can use views together with index and join hints. However, the hints must then be based on the global object representation (<view_name>.<alias>). Refer to Note 772497 for more information.
9. The resulting set of database accesses is always sorted according to the primary key even if I do not use an explicit ORDER BY.
In general, this statement is incorrect. The result set is only sorted by the primary key if Oracle uses the primary index for the access. Since the cost-based optimizer chooses the access path dynamically based on the existing statistics, you cannot assume that Oracle ALWAYS uses the primary index for a certain access.
10. For the index to be used in the best way possible, the fields in the WHERE condition must be entered in the same order as in the index.
This is not correct. The AND operator is commutative, which means that the order of AND-linked conditions is unimportant.
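This can be illustrated with any SQL database. The following minimal sketch uses SQLite from Python; the table, index, and column names are made up. The result set is identical regardless of the order of the AND-linked conditions, because how the composite index is used is decided by the optimizer, not by the textual order:

```python
import sqlite3

# Hypothetical table with a composite index; the column order in the WHERE
# clause differs between the two queries, the results do not.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
con.execute("CREATE INDEX t_idx ON t (a, b, c)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(i % 3, i % 5, i) for i in range(100)])

rows1 = con.execute("SELECT c FROM t WHERE a = 1 AND b = 2 ORDER BY c").fetchall()
rows2 = con.execute("SELECT c FROM t WHERE b = 2 AND a = 1 ORDER BY c").fetchall()
print(rows1 == rows2)  # True
```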
11. The consistency of the database can be checked with "brconnect -f check".
Although the BRCONNECT check function performs a large number of checks (parameterization, critical segments, and so on), it is NOT a consistency check. Therefore, you should not use this function to check the database for corrupt blocks. Instead, use one of the options described in Note 540463.
12. The higher the hit rate of the Oracle buffer pool, the better the database performance.
For various reasons, the hit rate of the Oracle buffer pool is not really a good measurement of the database performance and the quality of the SQL statements:
Processing-intensive SQL statements that are executed repeatedly and that almost only read blocks from the buffer pool increase the hit rate of the buffer pool. If you optimize such a statement, the hit rate decreases although the database performance improves.
The database performance is not only determined by disk accesses but also by other factors such as hardware resources or locks. These factors are not reflected in the hit rate.
There are different ways to determine the hit rate (see the related question in Note 618868). The hit rate may therefore be poor simply because the system performs a large number of direct path operations (for example, parallel query).
To obtain more exact information about the database performance and its improvability, you should use a general database performance analysis (Note 618868), a Wait Event analysis (Note 619188) or an analysis of the SQL statements (Note 766349).
13. ONLINE functions such as INDEX REBUILD ONLINE, VALIDATE STRUCTURE ONLINE, CREATE INDEX ONLINE, ANALYZE STATISTICS, ALTER TABLE SHRINK, EXPLAIN or INDEX COALESCE cannot block any other database accesses.
All of the above actions have to set at least temporary locks in the database. Therefore, you should use ONLINE operations with caution. Examples:
If you start an INDEX REBUILD ONLINE while a long-running DML operation is being executed on the relevant table, changes can no longer be made to the table due to an enqueue (see Note 682926 for more information). The same applies to ALTER TABLE SHRINK (Note 910389).
If you execute several of the operations mentioned above on the same objects at the same time, a complete access lock (that is, for both DML operations and SELECT operations) may occur due to a "library cache lock" wait. You should therefore avoid running several of these kinds of operations on the same objects in parallel. See also the "library cache lock" situation described in Note 619188.
14. If an index has more than four or five levels, it is highly fragmented.
The height of an index depends on the number of entries and the degree of fragmentation, but it also depends heavily on the lengths of the individual index entries. For example, you can easily build an index with eleven levels by creating an index with more than 1,000 (identical) entries, each of which is 6,000 bytes long (an entry of this length occupies most of an 8 KB block, so each block can hold only one entry and the tree grows very tall). Therefore, there is no direct connection between the height of an index and the degree of fragmentation.
15. The "sequential read" action in transaction SM50 or transaction SM66 means that a full table scan is executed.
The "sequential read" action indicates that a read access that may return more than one record is being executed in the database. It says nothing about which access path is used in the database or about where the main processing time is lost (on the Oracle server or, for example, in the network or the database interface). Refer to the related question in Note 618868.
16. A table with a unique index cannot contain two identical records.
Despite a unique index, identical records may occur if they contain a NULL value in all indexed columns, because such entries are not included in the index at all.
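The behavior can be reproduced outside Oracle as well. The sketch below uses SQLite from Python, which also accepts several rows whose unique-indexed columns are all NULL (SQLite reaches the same result by a different route: it treats NULL values as distinct from each other); all object names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE UNIQUE INDEX t_uni ON t (a, b)")

# Two identical records with NULL in every indexed column are accepted.
con.execute("INSERT INTO t VALUES (NULL, NULL)")
con.execute("INSERT INTO t VALUES (NULL, NULL)")
all_null_rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(all_null_rows)  # 2, despite the unique index

# A fully specified duplicate is still rejected.
con.execute("INSERT INTO t VALUES (1, 2)")
try:
    con.execute("INSERT INTO t VALUES (1, 2)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```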
17. The clustering factor of an index is improved when it is rebuilt.
The clustering factor of an index indicates how well the table is sorted in comparison to the index (see also Note 832343). If you want to reduce the clustering factor, the table itself must be rebuilt with a more favorable sort order. Rebuilding the index, however, has no effect on the clustering factor.
18. If a hint is specified, Oracle always uses the access path described there.
In many situations, Oracle cannot or does not want to use an access path, even if one is forced with a hint. Examples:
An INDEX_SS hint is ineffective in conjunction with bind variables and ranges.
A RULE hint does not work on partitioned tables or IOTs.
If the selection may return entries that contain NULL values in all indexed columns, the index is not used despite an index hint.
19. If a segment has many extents, this has a negative effect on performance.
The number of extents allocated to a segment has no influence on performance. Only the allocation or deallocation of many extents within a short period is problematic with Dictionary Managed Tablespaces, due to ST enqueues (see Note 745639).
20. The wait events displayed in V$SESSION_WAIT or in the Oracle session overview in ST04/ST04N specify which event the session is currently waiting for.
This statement is only correct if the session is actually currently waiting for a wait event. If CPU is being used instead, the LAST active wait event is displayed for the session, but this wait event may be from quite a long time ago (for example, in the case of processing-intensive SQL statements with a lot of buffer gets). The wait event is only currently active if the WAIT_TIME column contains the value 0.
As of Basis Support Package 7.00 (14) or 7.10, the session overview in transaction ST04 displays 'CPU' if WAIT_TIME is not 0. As a result, you can immediately determine whether the session is currently waiting or using the CPU.
21. If you search for table entries that either fulfill a condition or do not fulfill a condition, all table entries are returned.
The following statements, which, at first glance, appear to be the same, do not necessarily return the same number of entries:
SELECT COUNT(*) FROM <table>;
SELECT COUNT(*) FROM <table> WHERE <column> = <condition> OR
<column> != <condition>;
In fact, the second statement may return far fewer entries. This is the case if <column> contains NULL values. For Oracle, NULL is not comparable with any value, so a row with NULL in <column> fulfills neither the "=" condition nor the "!=" condition. Therefore, the second statement only returns entries that do not contain NULL in <column>. See also Note 658322 (1).
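Since these NULL semantics are part of standard SQL, the effect can also be demonstrated with SQLite from Python (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# The NULL row satisfies neither "col = 1" nor "col != 1".
matched = con.execute(
    "SELECT COUNT(*) FROM t WHERE col = 1 OR col != 1").fetchone()[0]
print(total, matched)  # 3 2
```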
22. SAP indexes must always begin with the client column.
It is not necessary that all SAP indexes start with the client column. In most cases, including the client in the index is more of a disadvantage, since it makes the index larger but - due to the small number of clients in the system - it does not improve the selectivity. Including the client at the beginning of the index is useful only in specific situations. For example:
The client column is selective because the table is filled with data in several clients.
The client is required in the context of a Unicode index.
You want to avoid the problem with ranges in the first index column that is described in Note 176754 (18).
Including the client in the index avoids the branch from the index to the table (index-only access).
The client column is required so that an ORDER BY can be supported via the index and an explicit sort operation is no longer necessary.
23. If database accesses hang for a long time, the problem is caused by a deadlock.
This assertion is incorrect. Genuine deadlocks (in other words, the mutual blocking of several transactions) are recognized by Oracle quickly and are eliminated within seconds by canceling one of the blocking transactions with an ORA-00060 error. Database accesses that hang for a long time may have a wide variety of causes (enqueues or an archiver stuck, for example), but they are not deadlocks. Notes 521264 and 619188 describe possible causes and analysis options.
24. Database enqueues are only held DURING an INSERT, UPDATE or DELETE operation.
Oracle enqueues (type TX) protect records in a table against parallel changes by several transactions, in order to guarantee the consistency of the changes. The enqueues are acquired when a change starts (INSERT, UPDATE or DELETE), but cannot be released until the change is confirmed (COMMIT) or rejected (ROLLBACK). This means that enqueues remain active until the following COMMIT or ROLLBACK, regardless of which activities are executed in the meantime or how much time elapses until the next COMMIT or ROLLBACK occurs. To reduce enqueue waits on critical tables, you must keep the period between changes and the COMMIT or ROLLBACK as short as possible. For more information, also see Note 745639.
25. A normal BRCONNECT statistical run ensures that all missing statistics are created.
BRCONNECT only checks for missing table and index statistics. This means that the third important type, column statistics, is not checked separately. It is therefore possible for column statistics to be missing for a long period, causing poor performance during database accesses. For this reason, it is useful to periodically search for missing column statistics (as described in Note 588668 (25), for example) and to recreate them, for example using the COLLECT option:
brconnect -u / -c -f stats -t <table_name> -f collect
There is only an automatic check for missing column statistics as of BRCONNECT 7.0.
26. Accesses that have low CBO costs are already optimal and do not require further optimization.
This assertion is incorrect, since the calculated costs may be quite different from the actual costs for various reasons:
The CBO lacks information about the contents of bind variables or about the data distribution.
The CBO statistics are not representative.
For this reason, you must NOT classify a processing-intensive SQL statement as optimal merely on the basis of its low calculated costs.
27. If the storage quality or leaf-row quality of an index drops below a certain value, the index must be rebuilt.
The storage quality and leaf-row quality of an index are only rough indicators of how critical the effects of index fragmentation are. Therefore, you cannot say that indexes with a quality below a certain limit (such as 70% or 50%) must be rebuilt. Load factors, such as the number of occupied blocks in the buffer pool or the number of buffer gets per execution, are far more important when identifying critical indexes. Refer to Note 771929 for more details.
28. Every SAP work process opens exactly one session in the Oracle database.
This assertion is not correct. In different situations, an SAP work process opens two or more sessions for the same database in order to be able to carry out several independent transactions. For this reason, it is not unusual that several sessions are assigned to the same SAP work process in the Oracle session overview. However, since these sessions are only managed in a single Oracle shadow process, there is a 1:1 relationship between the SAP work process and the Oracle shadow process.
29. The database time (ST03N, STAD, and so on), "Sequential Read"/"Direct Read" operations (SM50, SM66) and the database share of the ABAP trace (SE30, and so on) only contain the Oracle server time.
This assertion is incorrect; depending on the operation in question, significant portions of the measured time may be attributable to other components. The determined times include the following components:
ST03N, STAD: Oracle client, network, Oracle server
SM50, SM66, SE30: DBI, DBSL, Oracle client, network, Oracle server
If you are unsure, you can use Note 805934 to find more details on the response times by using a buffer trace in transaction ST05 and by activating work process traces for DBI and DBSL (for example, in transaction SM50).
30. SQL statements with a small number of buffer gets per record are technically already optimal and cannot be optimized any further.
This assertion is incorrect; a small number of buffer gets per record does not, by itself, say anything about the remaining technical optimization options. You must always take the specific SQL statement into account in order to determine an "optimal" number of buffer gets. SQL statements with only 1 buffer get per record can often still be optimized significantly; in extreme cases (for example, by optimizing the index clustering factor), improvements by a factor of 10 or more are possible. On the other hand, there are also SQL statements that cannot be technically optimized any further, even though they require 100 or more buffer gets per record (for example, in the case of complex joins or long IN lists).
31. Tables that are not entered in the statistics control table DBSTATC are not taken into account in the statistical run.
This assertion is incorrect; DBSTATC serves as an EXCEPTION table for the creation of statistics. It should therefore only contain tables for which SPECIAL rules apply during the creation of statistics (for example, a different accuracy, no statistics, or a forced creation of statistics). All SAP tables that are not listed in DBSTATC are processed according to the general rules during the statistics run. Therefore, it is only necessary to include tables in DBSTATC in exceptional cases. Refer to Note 588668 for more information about creating statistics.
32. Table accesses using Index Unique Scan always return a maximum of one data record.
This assertion does not apply to Oracle cluster tables. Here, despite an Index Unique Scan on the cluster index, a very large number of entries can be returned from the table. Oracle cluster tables are not used on the SAP side, but they do exist in the Oracle DDIC. For example, extensive ST enqueue problems (Note 745639) can result from an unfavorable Index Unique Scan access to the FET$ table or the UET$ table.
If index unique scans are executed in mass actions ("FROM TABLE" ABAP key word), several data records can be displayed in the shared cursor cache for each execution. The reason for this is that one execution incorporates several index unique scan accesses.
33. Oracle cannot be forced to allocate extents in a certain data file of a tablespace.
This is not the case; if required, the ALLOCATE EXTENT command can be used to allocate an extent in a certain data file:
ALTER TABLE <table_name> ALLOCATE EXTENT (DATAFILE '<path_and_file_name>');
34. The size of the shared pool is determined by the SHARED_POOL_SIZE parameter.
This assertion is incorrect; SHARED_POOL_SIZE contains many, but not all, components of the shared pool. Some components (such as the "db_block_buffers" area that is used to save block header information when the DB_BLOCK_BUFFERS parameter is used) also belong to the shared pool, but are stored outside of SHARED_POOL_SIZE with Oracle <= 9i. For this reason, in individual cases, the actual size of the shared pool can be several hundred MB larger than the memory area defined with SHARED_POOL_SIZE. For more information, also see Note 789011.
35. A new index is automatically provided with current CBO statistics.
This assertion is not correct. Neither Oracle (<= Release 9i) nor the corresponding SAP function (transport system, SE14) compiles CBO statistics during the creation of a new index. Automatic statistics are only created as of 10g.
To prevent the CBO from making wrong decisions because statistics are missing, you must take administrative measures yourself to make sure that CBO statistics are created immediately after a new index is created (for example, by using BRCONNECT or transaction DB20).
36. With an Explain, you always receive the access path that is also used for the execution.
This assertion is incorrect; the access path displayed by an Explain (for example, from transaction ST05) may differ significantly from the access path that is actually used. For more information, see Note 723879.
37. Increased COMMIT times are always caused by I/O problems in the LGWR process.
You cannot make such a sweeping statement. For example, the LGWR process may have to wait a long time for a CF enqueue while a log switch is executed. Although no I/O is performed in this case, DML operations must wait a long time for the COMMIT with the wait event "log file sync". Refer to the related information contained in Note 619188.
In the case of direct path INSERTs, during the COMMIT, the system adjusts the high water marks accordingly. This may (particularly in the case of a table with many partitions) cause increased COMMIT times, which are mainly determined by "db file sequential read" waits.
38. The I/O behavior and the CBO cost calculation for Full Table Scans are linked to each other by DB_FILE_MULTIBLOCK_READ_COUNT.
This is not the case. As of Oracle 9i, the CBO cost calculation can be decoupled from DB_FILE_MULTIBLOCK_READ_COUNT, and therefore from the I/O behavior, by using WORKLOAD system statistics. In that case, the system statistic MBRC is used for the cost calculation instead. For more information, see Note 927295.
As of Oracle 10g, the cost calculation is separated from the I/O behavior by default by means of the _DB_FILE_OPTIMIZER_READ_COUNT parameter.
39. In Oracle 10g or higher, there can be no more ORA-01502 errors as a result of SKIP_UNUSABLE_INDEXES = TRUE.
Although SKIP_UNUSABLE_INDEXES usually means that indexes flagged as UNUSABLE are skipped and therefore no ORA-01502 error occurs, there are exceptions. The following cases, for example, lead to an ORA-01502 error:
You perform a DML operation and a unique index is UNUSABLE. Because the unique index is required to check the unique constraint, the DML operation automatically fails with an ORA-01502 error.
The use of the UNUSABLE index is forced by an Oracle hint.
40. There is always "Direct Read" in transaction SM50 or SM66 for primary key accesses.
This is not the case. "Direct Read" means that no more than one data record can be returned from the database. This is mainly true of primary key accesses, but it also applies to other operations (for example, SELECT COUNT), which are therefore displayed as "Direct Read" accesses as well.
41. The shared cursor cache in transaction ST04 contains information about all SQL statements that were executed since the start of the database.
Unfortunately, reality is slightly different. SQL statements may get "lost" for various reasons:
Invalidation due to administrative measures (creating new statistics, index rebuilds, other DDL operations)
Displacement due to limited shared pool size.
BW: Invalidations because statements access temporary tables that are subsequently dropped.
Counter overflow (for example, buffer gets at 2 billion (Oracle 9i and lower) or 4 billion (Oracle 10g and higher)).
As of Oracle 10g, the history view DBA_HIST_SQLSTAT provides a better option for analyzing the SQL statements that were actually the most processing-intensive.
42. In the SAP table buffer, a table cannot be significantly larger than at Oracle level.
This statement is incorrect. In the SAP table buffer, even queries for values that do not exist are logged. This means that even if a table at Oracle level has only 10 entries, the SAP table buffer may have 1000 and more entries if many different queries are executed for which no data record exists in the table.
In addition, VARCHAR2 column values are always filled up to the defined maximum length, even if the lengths used in Oracle are shorter.
In the case of Unicode, character strings lead to another factor of 2 because SAP works with the character set UTF16, which represents each character with two bytes.
In the case of the character set UTF8, which is used by Oracle, most (western European) characters are represented by one byte.
Therefore, you can generally use the following approach for the space requirement in the SAP table buffer:
"Maximum data record length" (for Unicode * 2) * "Number of data records" + Negative buffer
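As a sketch of this rule of thumb, the following calculation uses purely hypothetical figures (the record length, record count, and negative buffer size are invented for illustration):

```python
# All input values are hypothetical examples, not measured SAP figures.
max_record_length = 500      # bytes; the defined maximum, not the average
number_of_records = 10000
negative_buffer = 200000     # bytes for logged queries that found no record
unicode_factor = 2           # UTF-16 in the SAP buffer vs. mostly 1-byte UTF8

buffered_bytes = (max_record_length * unicode_factor * number_of_records
                  + negative_buffer)
print(buffered_bytes)  # 10200000, roughly 9.7 MB
```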
43. An SQL statement is parsed in the PREPARE step.
No, in the SAP-Oracle environment this is incorrect. To avoid a roundtrip to the database, an SQL statement is only parsed during the first FETCH phase. As a result, an ST05 trace does not contain information about how long it took to parse an SQL statement. For more information, also see Note 1093485.
44. "log buffer space" wait situations are triggered by a log buffer that is too small.
In most cases, "log buffer space" waits are the result of an LGWR process that is too slow. Therefore, you must tune the LGWR (see Note 619188, section "log buffer space"). Increasing the size of the log buffer is useful only in rare cases and does not result in a significant optimization. In most cases, a log buffer of 1 MB is sufficient.
45. If the error "ORA-01652: unable to extend temp segment ... in tablespace PSAPTEMP" occurs, the table space PSAPTEMP must be increased.
In many cases, increasing PSAPTEMP is not a suitable means of avoiding ORA-01652 errors. Instead, you should always check first whether the triggering SQL statement executes unnecessarily large sort or hash operations that cause a high PSAPTEMP load. Only when the performance of the SQL statement is optimal and cannot be improved further should PSAPTEMP be increased. See also Note 659946.
46. If the error "ORA-01555: snapshot too old" occurs, the table space PSAPUNDO must be increased.
No - you should always first check whether there are database accesses that take an unusually long time and therefore encounter the ORA-01555 problems. You can often avoid ORA-01555 using measures such as tuning expensive accesses (Note 766349) or increasing the COMMIT frequency.
47. An Oracle upgrade is transparent for the SAP application. Therefore, no performance tests are required.
An upgrade to a later Oracle version may significantly impair performance, because central components such as the cost-based optimizer may change their behavior. Despite the DB-OS abstraction, extensive tests of the effects of an Oracle upgrade are therefore required before a production environment is upgraded.
48. The hint USE_NL(<table1> <table2>) forces a nested loop join with <table1> as access table and <table2> as internal table.
This is incorrect. The tables specified in USE_NL are (if possible) used as the inner tables of nested loop joins. If a dedicated access table is to be defined in a join, the LEADING(<table1>) hint must be used. Therefore, to be correct, the hints LEADING(<table1>) USE_NL(<table2>) must be specified. This also applies to the join hints USE_HASH and USE_MERGE.
49. On UNIX, the resident size (RSS) specifies how much memory is being used by the individual Oracle processes.
The resident size of the Oracle processes does not allow any useful conclusions as to how much memory is being used by Oracle. The reasons for this are as follows:
Some shared memory areas are included in the RSS. Among other things, this includes the executable text, which may form a large part of the RSS of all Oracle processes. In reality, this is a shared area so that it must be counted only once if you want to determine the memory consumption.
Other shared memory areas, such as the SGA, are not part of the RSS.
If individual Oracle processes have RSS values that have increased considerably, this is generally due to an increased PGA consumption.
For more specific details about the Oracle memory consumption, see Note 789011.
50. The column SEARCH_COLUMNS in V$SQL_PLAN or PLAN_TABLE specifies how many columns you can use to restrict the index access.
No. SEARCH_COLUMNS is not related to the access predicates and filter predicates or to FF1, FF2 and FF3 from Note 750631. Instead, the displayed value generally corresponds to the number of "=" conditions or IN conditions on indexed columns, increased by one if there is an additional range condition on an indexed column. Therefore, this information is of little practical relevance.
51. The function TO_TIMESTAMP(SYSDATE) returns the current timestamp.
This is not the case. By default, only the year, month and day are contained in the string representation of SYSDATE, but not the current hour, minute and second. For this reason, converting it into a timestamp results in 00:00 (midnight) of the current day, not in the current point in time. This may result in follow-on problems such as the Data Pump problem described in Note 1259766.
52. The default for DB_FILE_MULTIBLOCK_READ_COUNT on Oracle 10g is 128.
Even though this value is 128 in most cases, it is not always so. For smaller buffer pools in combination with a high SESSIONS value, a lower default value is used instead. The following formula roughly applies:
DEFAULT(DB_FILE_MULTIBLOCK_READ_COUNT) = MIN (DB_CACHE_SIZE / SESSIONS, 1048576) / DB_BLOCK_SIZE
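Applied with hypothetical settings (all figures below are invented examples, and real Oracle defaults may round slightly differently), the formula shows when the default drops below 128:

```python
# Sketch of the rule of thumb above using integer arithmetic.
def mbrc_default(db_cache_size, sessions, db_block_size):
    return min(db_cache_size // sessions, 1048576) // db_block_size

# Large cache, few sessions: the 1 MB cap applies -> 1048576 / 8192 = 128.
large = mbrc_default(db_cache_size=2 * 1024**3, sessions=500, db_block_size=8192)
# Small cache, many sessions: 209715200 / 2000 = 104857 -> about 12 blocks.
small = mbrc_default(db_cache_size=200 * 1024**2, sessions=2000, db_block_size=8192)
print(large, small)  # 128 12
```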
53. An Oracle parameter is set to its default value if V$PARAMETER or V$PARAMETER2 contains ISDEFAULT = TRUE.
No. ISDEFAULT = TRUE only specifies that a parameter is not set in the profile, while ISDEFAULT = FALSE means that the parameter is contained in the profile. Confusing situations may arise as a result:
If a parameter is explicitly set to its default value in the profile, ISDEFAULT is still set to FALSE.
If a parameter that is not set in the profile is changed with ALTER SYSTEM at runtime, ISDEFAULT still remains at TRUE. You can identify the change only by ISMODIFIED = SYSTEM_MOD.
54. PGA_AGGREGATE_TARGET specifies the upper limit of all PGA memories that can be allocated.
No, the PGA total can actually exceed the value specified in PGA_AGGREGATE_TARGET. The reason is that part of the PGA memory cannot be tuned (for example, memory for parsed SQL statements). However, the limit defined in PGA_AGGREGATE_TARGET is usually not exceeded by any significant degree. For more information, see Note 1266245.
55. After the creation of fixed object statistics (Note 838725), all X$ objects have statistics.
No, this is not the case. There are two reasons why statistics may be missing for X$ objects (which is normally not a problem):
When you use the standard settings to create fixed object statistics, no statistics are created for some X$ objects. However, there is at least one entry in DBA_TAB_STATISTICS with LAST_ANALYZED IS NULL and, if required, you can generate statistics using DBMS_STATS.GATHER_TABLE_STATS.
No statistics are generated for X$ tables that are derived from other X$ tables (for example, X$KGLCURSOR_CHILD). However, in this case, there is no entry in DBA_TAB_STATISTICS and you cannot create manual statistics. You can use the following query to get an overview of derived X$ tables:
SELECT KQFDTNAM FROM X$KQFDT;
56. The column CHAIN_CNT from DBA_TABLES contains the number of chained rows of a table.
Unfortunately, this is not always the case. CHAIN_CNT is only filled when statistics are created with ANALYZE TABLE. As of Oracle 10g, however, BRCONNECT uses DBMS_STATS to generate statistics, so CHAIN_CNT is no longer updated and the old value remains. It is therefore possible that a CHAIN_CNT value determined by ANALYZE years ago persists even though the actual number of chained rows has changed significantly.
For more information, seedoc 1269911.
57. The wait event "db file scattered read" (multi block reads) relates to full table scans; index accesses trigger "db file sequential read" (single block reads).
This statement is often correct. However, there are a large number of exceptions to this rule:
Index accesses using INDEX FAST FULL SCAN execute multi block reads.
As part of prefetch activities, multi block reads may also occur with other index accesses (such as INDEX RANGE SCANs).
As of Oracle 10g, the standard system "warms up" the buffer cache (_DB_CACHE_PRE_WARM = TRUE). As a result, if there are cache misses, the system often carries out a multi block read ("db file scattered read") instead of a single block read after the database is started. This is done to load further blocks that may be required to the buffer pool as a precaution.
In the case of chained rows, single block reads are also required during full table scans so that the system can continue to read the data records in other blocks.
If many table blocks already exist in the buffer pool, the system uses single block reads during full table scans to read the missing blocks as required.
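To check how these two read wait events are actually distributed in your system, you can aggregate V$SYSTEM_EVENT (a sketch):

```sql
-- Total waits and wait time for single block and multi block reads
SELECT EVENT, TOTAL_WAITS, ROUND(TIME_WAITED_MICRO / 1000000) "SECONDS"
FROM V$SYSTEM_EVENT
WHERE EVENT IN ('db file sequential read', 'db file scattered read');
```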
58. If the database is restarted, the contents of V$ views are lost.
This statement is generally correct; however, there is at least one exception: V$OBJECT_USAGE looks like a V$ view, but it is in fact a normal view that is contained in DBA_VIEWS and that refers to tables that actually exist, such as SYS.OBJECT_USAGE. The information about index usage contained in it is therefore still available after the database is restarted. Note that V$OBJECT_USAGE is filled only if index monitoring is activated (Note 912620) and that it always displays information for the current user only.
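The persistence of this information can be illustrated as follows (a sketch; the index name is only an example):

```sql
-- Activate index monitoring (Note 912620)
ALTER INDEX SAPSR3."ZZTEST~Z01" MONITORING USAGE;

-- Even after a database restart, the usage information is still
-- available, because V$OBJECT_USAGE is a normal view on real tables:
SELECT INDEX_NAME, MONITORING, USED
FROM V$OBJECT_USAGE
WHERE INDEX_NAME = 'ZZTEST~Z01';
```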
59. GRANT and REVOKE of a role cancel each other out.
It is easy to assume that a database user has the same authorizations as originally assigned if a role is temporarily assigned to the user using GRANT and then revoked using REVOKE. However, this is not always correct: a GRANT followed by a REVOKE of the DBA role, for example, results in the user no longer having the UNLIMITED TABLESPACE privilege. This can result in the errors described in Note 644104.
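The effect can be reproduced and repaired as follows (a sketch; the user name is only an example):

```sql
GRANT DBA TO sapsr3;     -- temporarily assign the DBA role
REVOKE DBA FROM sapsr3;  -- UNLIMITED TABLESPACE is now lost as well

-- Grant the system privilege explicitly to restore the original state:
GRANT UNLIMITED TABLESPACE TO sapsr3;
```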
60. The access to DBA views, USER views and ALL views is possible without specifying the user SYS.
In most cases, it is not necessary to specify the SYS user when accessing the Oracle DDIC views, since PUBLIC synonyms exist. However, some views, such as DBA_TAB_MODIFICATIONS or DBA_TRANSFORMATIONS, do not have a synonym, which means that the SYS schema must be specified (for example, SYS.DBA_TAB_MODIFICATIONS).
61. The column LIMIT_VALUE in the view V$RESOURCE_LIMIT specifies the maximum value that a resource can achieve at that time.
This is partially incorrect. LIMIT_VALUE for "parallel_max_servers" normally displays 3600 even if you used the parameter PARALLEL_MAX_SERVERS to set a much lower maximum value. The system cannot exceed the parameter value, so "parallel_max_servers" is actually restricted by INITIAL_ALLOCATION rather than by LIMIT_VALUE. LIMIT_VALUE only indicates that, if required, you can use ALTER SYSTEM to increase the value up to the specified limit without restarting the system.
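The relevant columns can be compared directly (a sketch):

```sql
-- INITIAL_ALLOCATION reflects the PARALLEL_MAX_SERVERS parameter,
-- while LIMIT_VALUE only shows the ceiling for dynamic increases:
SELECT RESOURCE_NAME, CURRENT_UTILIZATION, MAX_UTILIZATION,
       INITIAL_ALLOCATION, LIMIT_VALUE
FROM V$RESOURCE_LIMIT
WHERE RESOURCE_NAME = 'parallel_max_servers';
```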
62. Index names always match for SAP and Oracle.
No. Deviations may occur for several reasons:
The separator between the table name and the index suffix may take several forms (for example, "~", "^", or one or several "_").
If the combined length of the table name and index suffix exceeds a certain limit, the index name is truncated (for example, "/BI0/PUC_DEVICE-Z01" in SAP becomes "/BI0/PUC_DEVICE~Z0" in Oracle).
63. You can refer to the CREATE statement from transaction SE11 for the aliases for tables that are used within a view.
No, the CREATE statement for views that is displayed in transaction SE11 contains aliases that do not necessarily correspond to the aliases used in the database (for example, "T1" in SE11 compared to "T0001" in Oracle). This causes problems, for example, if you want to use global hints (see Note 772497). See DBA_VIEWS for the aliases that are actually used in the database.
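The view definition that the database actually uses, including the real aliases, can be displayed as follows (a sketch; the view name is only an example):

```sql
-- TEXT contains the SELECT statement with the aliases used by Oracle
SELECT TEXT FROM DBA_VIEWS WHERE VIEW_NAME = 'ZZMYVIEW';
```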
64. The parallelization at session level (for example, "ALTER SESSION ENABLE PARALLEL DDL ...") does not affect the degree of parallelism for segments.
Unfortunately, this is not the case. For example, if you define a degree of parallelism at session level to parallelize an index build, the index segment is created with the specified degree of parallelism. After the creation, you must therefore reset the degree of parallelism to 1 (using "ALTER INDEX ... NOPARALLEL") to avoid unwanted effects on the CBO and the selected access paths.
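The complete sequence looks as follows (a sketch; index and table names are only examples):

```sql
ALTER SESSION ENABLE PARALLEL DDL;

-- The index segment inherits the degree of parallelism of the build:
CREATE INDEX SAPSR3."ZZTEST~Z01" ON SAPSR3.ZZTEST (FIELD1) PARALLEL 8;

-- Reset the degree to 1 afterwards to avoid unwanted CBO effects:
ALTER INDEX SAPSR3."ZZTEST~Z01" NOPARALLEL;
```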
65. SAP always requests the complete result set of a database query.
No, this is not always the case. For example, the database interface may end database queries prematurely, even if the complete result set has not yet been transferred. In individual cases, this may mean that a considerable effort is required even though a comparatively small dataset is requested, for example:
Reading 10,000 data records
Sorting these 10,000 data records
Returning the first data record from the sorted dataset
In many cases, the effort can be limited by a "ROWNUM <=" condition in the database, but this is not possible in connection with ORDER BY. To determine whether the system frequently reads incomplete result sets, you can check END_OF_FETCH_COUNT in V$SQLSTATS for an SQL statement. If this value is significantly lower than the EXECUTIONS value, often only a part of the result set is requested.
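Such statements can be identified with a query like the following (a sketch; the threshold values are only examples):

```sql
-- Statements where a large share of the executions did not fetch
-- the complete result set:
SELECT SQL_ID, EXECUTIONS, END_OF_FETCH_COUNT
FROM V$SQLSTATS
WHERE EXECUTIONS > 1000
  AND END_OF_FETCH_COUNT < 0.5 * EXECUTIONS;
```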
66. Oracle TX enqueues in shared mode are triggered by an INITRANS value that is too low.
Oracle TX enqueues in shared mode can be identified by S flags in the deadlock graph (Note 84348) or by LMODE = 4 in V$LOCK. An INITRANS value that is too low is only the cause if the system waits for the wait event "enq: TX - allocate ITL entry". In the case of "enq: TX - row lock contention", the cause lies elsewhere (for example, a unique constraint or a bitmap index). For more information, see Note 745639.
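Sessions that are currently involved in shared TX enqueues can be listed as follows (a sketch):

```sql
-- Sessions holding or requesting TX enqueues in shared mode (mode 4):
SELECT SID, TYPE, LMODE, REQUEST
FROM V$LOCK
WHERE TYPE = 'TX' AND (LMODE = 4 OR REQUEST = 4);
```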
67. Oracle sessions always use CPU or wait for a wait event.
In theory, ELAPSED_TIME = CPU_TIME + WAIT_TIME should apply to SQL statements, but in practice, deviations from this occur for various reasons. Essentially, you can distinguish between the following two situations:
a) Time is recorded as CPU time and as wait time (for example, if CPU is used for "db file sequential read" waits or for filling the TCP buffer during "SQL*Net more data to client" waits).
b) Time is recorded neither as CPU time nor as wait time (for example, for direct path operations on HP-UX or for paging in the operating system).
68. A CREATE INDEX or REBUILD INDEX with Oracle 10g or higher always implicitly creates all CBO statistics that are required.
This statement is often correct, but is incorrect for partitioned indexes. In such a case, the system only creates the local partition statistics, but does not calculate realistic global statistics. Therefore, if in doubt, trigger a new statistics creation after the CREATE or REBUILD of a partitioned index.
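Such a subsequent statistics creation can be triggered explicitly (a sketch; owner and index name are only examples):

```sql
-- Create global and partition statistics for a partitioned index
-- after a CREATE INDEX or REBUILD:
BEGIN
  DBMS_STATS.GATHER_INDEX_STATS(
    ownname     => 'SAPSR3',
    indname     => 'ZZPART~0',
    granularity => 'GLOBAL AND PARTITION');
END;
/
```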
69. The execution of SQL statements using database tools such as SQLPLUS is similar to the execution of the SQL statements by the SAP system.
No, the runtime behavior with SQLPLUS can be completely different from the runtime behavior of the SAP system. In most cases, the reason for this is that SAP parses SQL statements using bind variables, while concrete values are generally transferred at SQLPLUS level. As a result, the CBO may choose completely different execution plans, which lead to very different runtimes. Therefore, whenever possible, carry out performance analyses using SAP tools such as transaction DBACOCKPIT, or use bind variables for tests with SQLPLUS and other tools.
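In SQL*Plus, a test with bind variables can be set up as follows (a sketch; table and column names are only examples):

```sql
-- Define and fill a bind variable, then execute the statement the
-- same way SAP does (parsed with a bind variable, not a literal):
VARIABLE mandt VARCHAR2(3)
EXEC :mandt := '100'

SELECT COUNT(*) FROM SAPSR3.ZZTEST WHERE MANDT = :mandt;
```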
70. Oracle ignores hints that are syntactically or semantically incorrect, but they do not cause a termination.
This is not always correct. For example, a non-existent index in an IGNORE_ROW_ON_DUPKEY_INDEX hint with Oracle 11g or higher leads to a termination with the error message ORA-38913.
71. Online redo log groups must alternate between different disk groups.
In modern storage subsystems, it is most efficient to create files with intensive I/O in disk groups consisting of as many disks as possible. Competing processes (in this case, the log writer and the archiver) can then use the maximum number of available disks. If n disks are available for the origlogs, the best performance is achieved by creating all of the origlogs in ONE stripe set consisting of n disks, instead of distributing m origlog directories across m stripe sets consisting of n/m disks each. Proceed in the same way with the mirrorlog disks. For security reasons, the origlog and mirrorlog files of the same log group must never use the same disks.
The alternating sequence is a relic of times when modern storage subsystems were not yet used.