SAP: Making Use of the Oracle Advanced Compression Option

Product: SAP, Oracle
Version: SAP 6.40 onward, Oracle 11g Release 1 (11.1), BR*Tools 7.20 onward

When planning for table compression, it is important to research and be aware of the following:

1. Oracle database compression restrictions
2. Do not compress tables that are updated frequently
3. Do not compress tables that need high INSERT throughput
4. Do not compress tables that need high UPDATE throughput
5. Space savings are low for high-cardinality data (few duplicates). Use the Oracle Compression Advisor PL/SQL package to estimate the ratio first
6. Tables with more than 255 columns are not supported
7. Tables with LONG columns are not supported. Use SAP BRSPACE (option long2lob) to migrate them to the LOB column type (SecureFile LOBs are recommended, since they allow additional compression)
8. DELETE operations run about 20% slower. If performance degrades by more than 100%, search the SAP Notes for an Oracle database patch for this known bug
9. BLOBs are not compressed; they must be converted to SecureFile LOBs in order to be compressed
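Point #5 can be checked up front with the Compression Advisor. A minimal sketch, assuming Oracle 11.2 or later (which ships the DBMS_COMPRESSION package; on 11.1 the advisor is the separately downloadable DBMS_COMP_ADVISOR package), and using example schema/table/tablespace names that you would replace with your own:

```sql
-- Sketch: estimate the OLTP compression ratio for one table.
-- SAPSR3, MSEG and PSAPNEW are examples; the scratch tablespace
-- must be a permanent tablespace with enough free space.
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'PSAPNEW',   -- permanent scratch tablespace (example)
    ownname        => 'SAPSR3',    -- example schema
    tabname        => 'MSEG',      -- example table
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/
```

A ratio close to 1 indicates high-cardinality data that is not worth compressing.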

SAP ships with Oracle Enterprise Edition, which includes a license to use this feature. Anyone not using Enterprise Edition will find that compression does not work. On the other hand, anyone running an Enterprise Edition that is not bundled with SAP must pay an extra license fee to use it.

Tables are normally compressed with the SAP brspace command, which SKIPS the following:
1. SAP pool tables ATAB and UTAB (reason #2)
2. SAP cluster tables CDCLS and RFBLG (reason #2)
3. INDX-type tables BALDAT and SOC3 (reason #3)
4. ABAP source and load tables REPOSRC and REPOLOAD (reason #4)
5. Update tables VBHDR, VBDATA, VBMOD, and VBERROR (reasons #3 and #4)

In an ECC 6 system, 949 objects are excluded when using the Oracle Advanced Compression Option.

The following are my recommendations when deciding to use the Oracle Advanced Compression Option for SAP:

1. Convert LONG columns to the LOB type
brspace -f tbreorg -a long2lob -c ctablob -s PSAPOLD -t PSAPNEW -p 2
This converts the affected tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW, processing two tables in parallel (-p 2).
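To confirm the conversion is complete, a quick data-dictionary check can list any remaining LONG columns (the schema name SAPSR3 below is an example):

```sql
-- Sketch: any rows returned are tables still carrying LONG columns
-- that the long2lob run did not convert.
SELECT table_name, column_name, data_type
  FROM dba_tab_columns
 WHERE owner = 'SAPSR3'
   AND data_type IN ('LONG', 'LONG RAW');
```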

2. Convert existing LOBs to the new SecureFile LOB type and compress them
brspace -f tbreorg -a lob2lob -c ctablob -s PSAPOLD -t PSAPNEW
This converts the affected tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW.

3. Convert the remaining tables
brspace -f tbreorg -a reorg -c ctab -s PSAPOLD -t PSAPNEW -p 8
This reorganizes the remaining tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW, compressing eight tables in parallel.
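After the reorganization, the data dictionary shows which tables ended up OLTP-compressed (SAPSR3 and PSAPNEW below match the example names used above):

```sql
-- Sketch: list the tables in the new tablespace that now have
-- compression enabled, with the compression mode (e.g. OLTP).
SELECT table_name, compression, compress_for
  FROM dba_tables
 WHERE owner = 'SAPSR3'
   AND tablespace_name = 'PSAPNEW'
   AND compression = 'ENABLED';
```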

[Screenshot: using the Oracle Compression Advisor]

Note: With BR*Tools 7.x and Oracle 11g on UNIX (not needed on AIX), an additional soft link must be created for BR*Tools to work, unless the SAP kernel is version 7.20_EXT with BR*Tools 7.20:

$ su - oracle
$ cd $ORACLE_HOME/lib
$ ln -s libnnz11.so libnnz10.so

Note: The following are NOT features of the Oracle Advanced Compression Option:

1. Basic (regular) table compression
2. RMAN compression
3. Index compression

Use the option "-i <tablespace>" to store indexes in a separate tablespace. I recommend this: if the index tablespace or one of its data files is corrupted, the indexes can simply be re-created from scratch.

Before starting the compression, verify the following:

1. There are no UNUSABLE indexes
2. There are no UNUSABLE index partitions
3. There are no Oracle SYS objects with status INVALID
4. PSAPTEMP has sufficient pre-allocated space to hold the largest table/index; do not rely on autoextend. When compressing in parallel, increase it to the combined size of the tables processed concurrently (for performance)
5. The online redo logs are properly sized, or temporarily add more. Increase further when compressing in parallel (for performance)
6. The Oracle initialization file (spfile) sets PGA_AGGREGATE_TARGET to at least 1 GB (for performance)
7. DB_CACHE_SIZE in the Oracle initialization file is increased to at least 1 GB
8. If not using automatic segment space management, check the table and index initial extents (INITIAL) in DBA_SEGMENTS to ensure they will not over-allocate disk space. Compression will not free up space if INITIAL is set larger than the compressed data
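Checks #1 through #3 above can be run directly against the data dictionary; each of these queries should return no rows before the compression starts:

```sql
-- Sketch: unusable indexes (check #1)
SELECT owner, index_name
  FROM dba_indexes
 WHERE status = 'UNUSABLE';

-- Sketch: unusable index partitions (check #2)
SELECT index_owner, index_name, partition_name
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';

-- Sketch: invalid SYS objects (check #3)
SELECT object_name, object_type
  FROM dba_objects
 WHERE owner = 'SYS'
   AND status = 'INVALID';
```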
