Consistency Checks + Block Corruptions Interview Questions and Answers

FAQ: Consistency Checks + Block Corruptions

1. What is a corrupted block?

General
=================================================================================================================================================================
There are different layers where a corruption can occur.

OS layer (lowest layer)
Data is not accessible with OS routines, e.g. while trying to copy a file with OS tools you get an OS error that the data cannot be accessed or copied

DB layer
Data is accessible from OS layer.
Data is not accessible with DB routines, e.g. while trying to access data with a select, a DB error occurs, indicating that the data cannot be selected

SAP Pool/Cluster table layer (only SAP pool or cluster tables)
Data is accessible from DB layer.
While trying to extract the pool or cluster table data, the database interface of SAP raises an error because the metadata describing the structure of the application data is not valid. The metadata and the application data are stored in the same DB row.

Application layer (highest layer)
Data is accessible from DB layer (non-pool/cluster tables)
Data is accessible from SAP pool/cluster table layer (pool/cluster tables)
The data selected is wrong or does not fit the dependent data in other tables.

If a corruption occurs on a lower layer, it usually also leads to problems on the higher layers; e.g. if data cannot be read on the DB layer, it is not accessible by the application layer.

Eliminating a corruption on a lower level can leave the data on a higher layer corrupt. E.g. if the DB corruption cannot be fixed by restoring a backup that does not contain the corruption, but only by creating a non-corrupt version of the table that reads around the corrupt areas, the data in the corrupt areas will be missing in the new table. The DB corruption is fixed (no error anymore), but the application corruption (missing rows in the table) needs further investigation.

If "corrupt block" or "corruption" is mentioned in the following always corruptions on the DB layer are meant.

Methods to find corruptions on the other layers are layer dependent.
An OS layer tool is e.g. the file system check.
An SAP Pool/Cluster table layer check tool is e.g. R3check, described in note 89384.
Some applications offer different on-demand check reports to crosscheck application consistency on key tables.
Questions regarding the check tools of those layers need to be addressed to the appropriate OS, DBI or application component.

DB layer corruptions
========================================================================================================================
A corrupted block is a block that does not fulfill special predefined rules that all Oracle blocks usually fulfill. The data stored in the block is therefore no longer accessible. Rules that have to be fulfilled are for example:

redundant data stored in the block header and at the end of the block has to be equal

the successor of block n must have block number n+1

...

There are many such criteria.

A corrupted block can occur in datafiles or in redo log files.

2. What is a consistency check?

A consistency check is a check that tries to find corrupted blocks.

Note 23345 describes how the three different methods (analyze, export, dbverify) are invoked. The methods have different strategies to check for block corruptions. In simple words:

the dbverify checks whether a database block fulfills the predefined rules

the export reads the data stored in table blocks

the analyze reads table and index data and performs crosschecks
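
For illustration, a minimal sketch of typical invocations of the three methods (user, table and file names are placeholders; see note 23345 for the exact, release-dependent calls):

analyze table SAPR3.MARA validate structure cascade;
(SQL, one statement per table to be checked)

exp <user>/<password> tables=MARA rows=y file=/dev/null log=expcheck.log
(OS level; reading all the rows during the export is what performs the check)

dbv file=/oracle/XYZ/sapdata1/btabd_1/btabd.data1 blocksize=8192
(OS level, one call per datafile)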

If Oracle RMAN is used for backing up the database, an implicit consistency check of the saved blocks is performed. The checks are similar to those of dbverify, so a separate check with dbverify is no longer necessary.

All these consistency check methods can be used for datafiles only.

Corrupted blocks in redo log files can only be found by applying the redo logs during a recovery. As a consequence, a standby DB (perhaps created only for this reason and not for high availability) is necessary to permanently and reliably check the redo log files.
If you get an error when dumping a redo log file, this indicates that the file is corrupted. The reverse is not valid, however: if you dump a redo log and do not get an error, this is no sufficient proof that the redo log is OK. A redo log can be dumped using the following command:

alter system dump logfile '<logfile>' dba min 1 . 1 dba max 1 . 1;

3. What is not a consistency check?

Do not mix up a consistency check with 'sapdba -check' or 'brconnect -f check'. These tools perform other checks (free space left in the database, parameter settings, last successful backup etc.) but no consistency check.
Also, during a backup no consistency check is performed. A backup physically reads all the blocks and stores them on tape. A backup neither knows what an Oracle block is nor which rules such a block has to fulfill.

4. Why do I need a consistency check?

A consistency check does not prevent corrupted blocks in the database, but it can prevent data loss because of the corruption. If you check sufficiently often for corruptions, you can restore the file containing the corrupted blocks from a backup that does not contain the corruption and recover up to the current point in time. It is quite unlikely that the same corruption is in the archive logs, too.

5. How often should I perform a consistency check?

To answer this question, have a look at the following cases in which corruptions are detected:

Case 1: during an access (select, insert, ...) in normal operation
Case 2: during a check for corrupt blocks (database consistency check according to note 23345)

--X-----------X-----X------------X-----X------------------>  Time
  |           |     |            |     |
  T1          T2    T3           T4    T5

T1: backup
T2: consistency check
T3: backup
T4: consistency check
T5: identifying the corruption

Case 1:
You identify a corruption at time T5.
The T4 consistency check was error-free.
=>
The corruption occurred in the interval ]T4,T5] and you can correct it by restoring the T3 backup, because this backup does not contain the corruption. Afterwards you have to recover the datafiles to the current point in time.

Case 2:
During the consistency check (T4), you identify a corruption.
The consistency check T2 was error-free.
=>
The corruption occurred in the interval ]T2,T4] and you can correct it by restoring the T1 backup, since this backup does not contain the corruption. Afterwards you have to recover the datafiles to the current point in time.

Other cases must not occur.

The following rules always have to be fulfilled, otherwise your consistency check concept is inadequate:

A successful backup from before the last successful consistency check must always exist.

A continuous, error-free chain of archive logs from the beginning of this backup up to the current point in time needs to exist.

When you plan your backup/recovery concept, note that you may have to rely on the last backup checked for consistency (T1) rather than the last backup (T3), as in Case 2. This can result in you having to restore and recover considerably more archive logs.

6. What are the advantages and disadvantages of the different consistency check methods?

Analyze table validate structure cascade:

  • Runtime: long
  • locks the currently examined table against change accesses
    => only feasible in times of minimal workload or - even better - when the SAP system has been stopped
    Starting with Oracle 9, the extension 'analyze table validate structure cascade online' does not lock the table anymore.
  • checks tables and indexes
  • if an analyze (online or not) is running on a table and you try to alter storage parameters, parallel degree etc. of this table, then all further sessions accessing the table with read or write access are locked until the analyze is finished. Further details can be found in note 619188.
  • reads the blocks to be checked into the SGA, causing a possible temporary deterioration of the buffer quality of the DB block buffer
  • does NOT check data stored in LOB columns


Export:

  • Runtime: medium
  • feasible in a running system. Performance loss as a result of the export processes. If extensive tables are accessed in write mode during the export, there is a danger of ORA-1555 "snapshot too old". However, you can prevent this with sufficiently dimensioned rollback segments.
  • only checks tables
  • certain kinds of corruptions are exported without an error occurring
  • if you later try to reorganize this type of table (export/import), you will have problems during the import
  • reads the blocks to be checked into the SGA, which is why the buffer quality of the DB block buffer is adversely affected for a short while


dbverify:

  • Runtime: short
  • feasible in running operation (UNIX: all Oracle releases; Windows: Oracle 9 or newer)
  • checks not only tables and indexes but also empty DB blocks
  • does not check cross references between tables and indexes (indirectly, some special cross references are checked when block checksums are enabled; see below)
  • reads the blocks WITHOUT loading them into the SGA, which is why the buffer quality of the DB block buffer is not affected
  • the only procedure that is also possible on datafiles restored from a backup (without these files having to belong to a DB)
  • performs, together with block checksums according to note 923919, an extended check on blocks and can then find special kinds of cross reference corruptions, because those may cause wrong checksums.
    Wrong cross references caused by lost writes do NOT lead to wrong checksums and therefore cannot be detected.
    Cross reference failures that already occur in main memory BEFORE the checksums for the table and index blocks are calculated cannot be detected either, because the checksums are already based on bad data.
    Nevertheless, dbverify together with block checksums is a considerable alternative if an analyze table validate structure cascade on the complete DB takes too long.
  • checks data in LOB columns


7. Which consistency check method do you recommend?

Check note 23345 for the current recommendation.
 

8. Where are the inconsistencies logged?

Inconsistencies are logged in the logfile of the explicit check. For most inconsistencies a tracefile is generated in saptrace/usertrace in addition. It is not guaranteed that the Oracle error number is logged in this tracefile or in the alert log. Often there is just a textual description without an error code, so that a scan of the alert log for 'ORA-', e.g. as performed by 'brconnect -f check', will not inform you about the corruption. It is absolutely necessary to check the log of the consistency check itself for errors and warnings.

9. Is it possible to find the root cause of a corruption with DB tools?

The root cause of corruptions is nearly always a layer below the database (OS, firmware, hardware etc.). Even when NOWHERE on these layers did log, dump or trace files/utilities show any error, we have seen in the past, at customers with permanently new corruptions, that exchanging parts of the hardware or moving to completely new hardware stopped new occurrences of corruptions. This proved that these errors were hardware related.
The Oracle software does not cause corruptions, with the very special exceptions described in explicit SAP notes, e.g.:

547464 - Nologging Option when creating indexes
=> an INTENDED corruption marker is set

553526 - Recovery terminates with ORA-600 [3020]
=> Oracle Bug

764015 - Segment corruption in ASSM tablespace
817262 - Check tool for segment corruption in ASSM tablespace
=> Oracle Bugs

828483 - ORA-00600: [KCOAPL_BLKCHK]
=> Oracle Bug

From the DB point of view it cannot be PROVED that a lower layer caused the problem. The DB has no universal check tool for every piece of hardware of every vendor. Such tools have to be offered and used by the HW vendor support. Only the problem itself can be detected by the DB. DB Support can just tell from EXPERIENCE that lower levels have always been the reason, except for the situations mentioned in the notes above.
Keep in mind when thinking about the root cause: if it were a new Oracle bug,

it is extremely unlikely that you get a corruption A LONG TIME AFTER applying the last software change on the Oracle level

it is extremely unlikely that ONLY YOUR system is affected if it is not a brand new release, patchset or bugfix you have applied. Several other customers would have to see this corruption, so support would be aware of it.

it is extremely unlikely that you get JUST A FEW corruptions. You should get a mass of them. The HW vendor may use the same argument, but we have seen in the past that actions like syncing a disk, hot swapping of disks etc. lead to a few failures at a specific point in time, but then never again.

it is extremely unlikely that you get corruptions only on one special disk or on disks controlled by one special controller. Vice versa, spread-out corruptions usually indicate a failure in a component that every block written from memory to disk has to pass (memory, CPU, cables etc.)

it is extremely unlikely that this causes corruptions in different types of files, e.g. archive files and datafiles, because different Oracle code is responsible for each. Corruptions in non-DB files are also a clear indicator for this, but they need not occur.

10. What is an appropriate way to go productive again when the root cause for the block corruption is not known/not yet known?

Root cause analysis of corruptions usually costs more time than is available before going productive again. But if the root cause was not or cannot be determined, or if it cannot be definitely excluded that the problem was fixed by the HW vendor, using the original hardware without a fallback is not safe, because an ongoing problem is more likely than a single, never recurring root cause.


Therefore in the past several customers went this way:

Move the productive DB to another machine by DB copy (same DB name) with no reuse of any piece of hardware (including cables etc.). If no powerful machine is available, ask the HW partner for a temporary one or check whether the core business can be done on a less powerful but available machine.

Elimination of corruptions on the DB on the new hardware

Complete consistency check of the DB on the new hardware

Backup of the DB on the new hardware

Start of production on the DB on the new hardware

Search for the root cause on the original hardware until the reason/a possible reason is found and eliminated

Complete consistency check of the DB on the original hardware

Stress test or production simulation on the original hardware

Complete consistency check of the DB on the original hardware. This consistency check must not show new corruptions.

DB copy back from the DB on the new hardware to the original hardware. After this copy, the DB on the new hardware must not be opened anymore!!! A standard offline backup with brbackup with option -t offline will start the DB afterwards to write the result of the backup run to the DB. This has to be avoided!!! Use the option -t offline_stop instead if you want to create a backup that is restored on the original HW, or use ftp for the file copy.

Usage of the DB on the new hardware as a standby DB. This need not be an official standby DB as described in the Oracle guide, but can be just a database that is mounted and in permanent recovery mode (a minimal sketch follows after this list). Every archive generated on the original HW is applied directly on the standby DB. If an archive log cannot be applied because it is corrupt, switch back to the new HW, losing all changes beginning with this archive. If these changes are still in the online redo logs of the original hardware, it may be that the archive got corrupted when it was written to the archive directory while the online redo log is still OK. In that case stop the DB on the original hardware, copy the online redo log to the standby side and try to apply it instead of the corrupt archive.

Complete consistency check of the DB on the original hardware

Start of production on the DB on the original hardware

Complete consistency check of the DB on the original hardware as often as possible (e.g. on a daily basis). If during these consistency checks or during normal operation new corruptions occur, switch back to the DB on the new hardware. To do so, just the not yet applied archive logs and the current online redo log must be applied on the standby side.
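
A minimal sketch of the "permanent recovery mode" mentioned above, assuming a manual standby that was mounted with a backup controlfile (the exact commands depend on how the DB copy was created):

startup mount;
recover database using backup controlfile until cancel;
(enter the full path of each newly copied archive log when prompted; answer CANCEL to interrupt, and repeat the recover command later for newer archives)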

11. Is there a possibility to estimate how long an analyze table validate structure will take?

The following formulas are based on Oracle 9.2.0.6. The formulas may differ in other Oracle releases. They are also not valid to the last digit, but give a good rule of thumb, especially for larger tables.

Yes, via the logical reads performed by the session executing the analyze. The logical reads necessary for a table analysis can be calculated as follows:

analyze table validate structure cascade;
logical reads = <rows> * (sum(<index heights>) + 1)
              = <rows> * (sum(<index blevels>) + <number of indexes> + 1)

analyze table validate structure cascade online;
logical reads = <rows> * (sum(<index heights>) + <number of indexes> + 1)
              = <rows> * (sum(<index blevels>) + 2 * <number of indexes> + 1)
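
As a hypothetical example: for a table with 1,000,000 rows and three indexes with a blevel of 2 each, the offline variant needs about 1,000,000 * (6 + 3 + 1) = 10,000,000 logical reads, the online variant about 1,000,000 * (6 + 6 + 1) = 13,000,000.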

The <index blevels> and <number of indexes> can be found in dba_indexes, the number of <rows> in dba_tables. Keep in mind that the statistics may have changed since their last calculation!
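
A minimal sketch of this lookup for the offline variant, assuming a hypothetical table SAPR3.MARA:

select t.num_rows * (sum(i.blevel) + count(*) + 1) estimated_logical_reads
from dba_tables t, dba_indexes i
where t.owner = 'SAPR3' and t.table_name = 'MARA'
and i.table_owner = t.owner and i.table_name = t.table_name
group by t.num_rows;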

The ratio of logical reads per time can be found via ST04 -> detailed analysis -> oracle sessions: select the analyze command and check how many logical reads are performed e.g. per minute, or better per ten minutes for large tables.
Assuming the ratio stays constant, the overall runtime can be estimated.
The point in time at which an already running analyze was started can be found via the cursor cache, column 'first load time'.
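
For example, assuming the analyze statement is still in the cursor cache, its start time can be read directly via v$sql:

select sql_text, first_load_time
from v$sql
where sql_text like 'analyze table%validate structure%';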

12. Is it possible to fix a block corruption with zero downtime?

It is possible to fix corrupt blocks with nearly no influence on productive activity if a good backup exists that contains the blocks in a non-corrupt version. Only the corrupt blocks REMAIN inaccessible until the repair is finished. In other words, during the repair the SAP system can remain up and running because

the DB remains accessible

all the datafiles remain accessible (no offline datafiles)

the data stored in non-corrupt blocks of the corrupt objects remains accessible

NOTE: IF THE SOURCE OF THE CORRUPTION IS NOT CLEAR, LEAVING THE SYSTEM UP AND RUNNING CAN LEAD TO FURTHER CORRUPTIONS. THEREFORE THE REPAIR ALONE IS NOT A SUFFICIENT ACTION TO GET RID OF CORRUPTIONS.

The idea of the following procedure is to use the block recovery feature of Oracle's RMAN utility. Even if the backups are NOT done with RMAN, this feature can be used VERY EASILY without any preconfiguration of RMAN:

perform an online backup of the files containing the corruptions, just for security reasons

restore the files containing corrupt blocks from a good backup to ANOTHER location on disk (e.g. sapreorg). Do NOT overwrite the files currently belonging to the database

restore ALL archives from the restored backup up to now to oraarch.

If there is not enough space to hold ALL the archives to be restored plus the ones generated during further productive work, follow this procedure:

  • restore the archives to other destinations where enough space is available
  • prepare some commands to make RMAN aware of the restored archives. This can be done manually or - on Oracle 8 and 9 - with the small script provided here. As of Oracle 10 a whole set of archives can be registered with:
    catalog start with '<absolute path prefix>';
    The script is then superfluous.

    a) Save the following script to any directory as script.sql
    When executed, the script just creates a list of commands of this kind:
    catalog archivelog '<archivelog new path and name>';

    set serveroutput on size 1000000
    set pagesize 0
    set feedback off
    set verify off
    accept VPREFIX char prompt 'Prefix before sequence number: '
    accept VSUFFIX char prompt 'Suffix after sequence number: '
    accept VDIGITS number prompt 'Digits of sequence number: '
    accept VFROM number prompt 'First sequence number: '
    accept VTO number prompt 'Last sequence number: '
    spool script.log
    begin
    -- the first period after the VFROM variable only terminates the
    -- substitution; the remaining '..' is the PL/SQL range operator
    for i in &VFROM...&VTO loop
    dbms_output.put_line('catalog archivelog ''&VPREFIX'||
    lpad(to_char(i), &VDIGITS,'0')||'&VSUFFIX'';');
    end loop;
    end;
    /
    spool off
    exit


    b) Execute the script for each location you restored archives to, creating a script.log file that contains commands like these:
    catalog archivelog '<prefix><sequence number><suffix>';

    <prefix><sequence number><suffix> is the absolute path and file name of an archive, e.g. /oracle/XYZ/newdest/XYZarch1_13992.dbf.
    When the script asks for the prefix, in this example
    /oracle/XYZ/newdest/XYZarch1_
    has to be entered. The suffix would be
    .dbf
    The number of digits of the sequence number is 5 here.
    The first and last sequence numbers are the lowest and highest sequence numbers of the archives restored at the location the script is executed for.

type 'rman' at the OS level. The following commands are RMAN commands.

Connect to the DB with RMAN
connect target <DB SID>;
Password: / as sysdba

register the restored datafiles so that RMAN is aware of them
catalog datafilecopy '<absolute path/filenames of the restored datafiles>';
Repeat this action for every restored file.
Successful registration can be double-checked with the command 'LIST COPY;'.

Only if restore locations other than the standard archive destination specified in parameter log_archive_dest have been used for the needed archives:
register all the archives from the backup up to now.
On Oracle 8 and 9 this needs to be done individually for each archive with the command
catalog archivelog ...;
generated above.
Successful registration can be double-checked with the command 'LIST ARCHIVELOG ALL;'.

Do a block recovery for all corrupt blocks
blockrecover datafile <#1: file id> block <#1: block id>, ..., datafile <#n: file id> block <#n: block id>;
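
A hypothetical instantiation of this pattern, assuming corrupt blocks 233 and 235 in datafile 7 and corrupt block 101 in datafile 4:

blockrecover datafile 7 block 233, 235 datafile 4 block 101;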

If the log_archive_dest parameters above have been changed, change them back now:

  • for all used archive destinations (<n>=1..10), reset the new parameter
    alter system set log_archive_dest_<n>='';
  • enable the old parameter
    alter system set log_archive_dest='<path><suffix>';
  • check whether the settings are correct
    show parameter log_archive_dest
    => the parameter log_archive_dest should now have its original value

Optional Actions
ATTENTION: Be very careful with unregistering files if RMAN is used for standard backups.

  • unregister a datafile
    change datafilecopy '<absolute path/filenames of the restored datafiles>' uncatalog;
  • unregister an archivelog
    change archivelog '<absolute path and file>' uncatalog;
  • to tell RMAN that a specific file is no longer available at this location without uncataloging it, the file can be set to 'unavailable'. Use this keyword instead of 'uncatalog' in the commands above.

13. Is it possible to skip corruptions in redo log files when a recovery is performed and the recovery ends in an ORA-00600 [3020]?

Possibly. Please contact SAP Support and reference INTERNAL note 825970. This note is not public, because trying to apply the described procedures to continue the recovery can cause application inconsistencies that can only be fixed by SAP Support.

14. How can I transform a rowid to a relative file_id, block_id and slot id or vice versa?

How can I transform a (Relative) Database Block Address (R)DBA to a (relative) file_id and block_id or vice versa?

Several dumps or trace files in the area of block corruptions show a rowid, a DBA (database block address) or an RDBA (relative database block address). These need to be split into their components such as object, file, block or slot.
For selects via rowid, a rowid needs to be created.

The object_id is the corresponding column in dba_objects.

A slot is the number of a row within the block. The first row in a block has number 0.

The following functions work SYSTEM INDEPENDENTLY. They just transform the data into another format and do NOT do any existence check of files, objects etc. Using the same syntax on any system always produces the same result.

rowid -> object, file, block, slot:
select
DBMS_ROWID.ROWID_OBJECT('<ROWID>') ROWID_OBJECT,
DBMS_ROWID.ROWID_RELATIVE_FNO('<ROWID>') ROWID_RELATIVE_FNO,
DBMS_ROWID.ROWID_BLOCK_NUMBER('<ROWID>') ROWID_BLOCK_NUMBER,
DBMS_ROWID.ROWID_ROW_NUMBER('<ROWID>') ROWID_ROW_NUMBER
from
dual;

object, file, block, slot -> rowid:
select
to_char(dbms_rowid.rowid_create(1, <object id>,
<relative file id>,<block id>,<slot id>)) "ROWID"
from
dual;

(R)DBA -> relative/absolute file, block:
select
DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK('<(R)DBA>')
DATA_BLOCK_ADDRESS_BLOCK,
DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE('<(R)DBA>')
DATA_BLOCK_ADDRESS_FILE
from
dual;

relative/absolute file, block -> (R)DBA:

select
DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(<file id>, <block id>)
DATA_BLOCK_ADDRESS
from
dual;
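
A self-contained round-trip check of these functions (the component values are hypothetical; decomposing the created rowid returns exactly the inputs):

select
DBMS_ROWID.ROWID_OBJECT(r) OBJECT_ID,
DBMS_ROWID.ROWID_RELATIVE_FNO(r) RELATIVE_FILE_ID,
DBMS_ROWID.ROWID_BLOCK_NUMBER(r) BLOCK_ID,
DBMS_ROWID.ROWID_ROW_NUMBER(r) SLOT_ID
from
(select DBMS_ROWID.ROWID_CREATE(1, 22, 1, 51446, 0) r from dual);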

