
2024-02-05

Oracle sqlplus Login Failure Handling in Shell Script

Product: Oracle RDBMS

Command: sqlplus, shell script

Overview

I would like to document how to create a shell script that reports whether the username credential stored in a file can successfully log in to the Oracle DB or not.

Without specifying any parameters, sqlplus always returns an exit code of 0 to the shell script.  I have seen people use text processing utilities to search the output for the login failure message, but they should use the "-l" parameter instead, which makes sqlplus attempt the logon only once and exit with a non-zero code on failure.

Moreover, in a shell script, I suggest turning off the login banner to make the output less verbose, by using the "-s" (silent) parameter.

Instruction

Suggested command is "sqlplus -s -l username@TNS"

To automate the password, my suggested command is "echo [password] | sqlplus -s -l username@TNS"

Later, the shell script can use the shell's built-in exit status variable $? with the following meaning:

  • 0 - login successful
  • 1 - login failed
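
Below is a minimal sketch of the whole check as a script, assuming the password sits alone in a text file; the file path and the connection names (scott, ORCL) are illustrative only:

#!/bin/sh
# Test an Oracle login using a password stored in a file.
# /secure/ora_password.txt, scott and ORCL are example names.
CRED_FILE=/secure/ora_password.txt

cat "$CRED_FILE" | sqlplus -s -l scott@ORCL > /dev/null 2>&1

if [ $? -eq 0 ]; then
    echo "login successful"    # exit code 0
else
    echo "login failed"        # -l makes sqlplus exit non-zero on a failed login
fi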


2023-02-27

Oracle Timezone: Clarification of 2 Timezone Settings

Product: Oracle Database
Version: All
OS: All

History

Many years ago, the Oracle database relied on the OS clock, and had no date column type that could store a timezone setting (similar to MS SQL Server today).

The timezone aware column type, TIMESTAMP WITH TIME ZONE, was introduced much later, but many DBAs and developers don't have good knowledge of the OS timezone.

Date DB Function

In Oracle 12c to 21c, the DB has the following 4 date functions, which behave differently with respect to timezone:
  • sysdate - uses the OS timezone
    • E.g. select sysdate from dual;
    • E.g. insert into [user table] values (sysdate);
  • systimestamp - uses the OS timezone
    • E.g. select systimestamp from dual;
    • E.g. insert into [user table] values (systimestamp);
  • current_date - uses the DB session timezone, similar to sysdate
    • E.g. select current_date from dual;
    • E.g. insert into [user table] values (current_date);
  • localtimestamp - uses the DB session timezone, similar to systimestamp
    • E.g. select localtimestamp from dual;
    • E.g. insert into [user table] values (localtimestamp);

Timezone Configuration At DB Level

I won't discuss OS timezone configuration, as this post mainly clarifies the DB level timezone settings that confuse many DBAs and developers.

In the above DB functions, the DB session's timezone depends on sessiontimezone. The default is the DB timezone, which is set by one of the following:
  • ALTER DATABASE SET TIME_ZONE = 'US/New_York';
  • CREATE DATABASE ..... TIME_ZONE = 'US/New_York';
At user level, you can configure it using
  • ALTER SESSION SET TIME_ZONE='US/Eastern';
You can check its current value by using:
  • sessiontimezone - returns the session's current timezone
    • E.g. select sessiontimezone from dual;

Troubleshooting Highlight

When verifying it, always display the sessiontimezone together with one systimestamp and one current_date, e.g.
select sessiontimezone, systimestamp, current_date from dual;

This ensures that you cover both the OS and DB timezones, and display the DB timezone value.

Note: systimestamp will display the OS timezone value
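
As a quick demonstration, changing only the session timezone should move current_date but leave systimestamp on the server offset; a minimal sketch (the timezone value is just an example):

alter session set time_zone = 'US/Eastern';
select sessiontimezone, systimestamp, current_date from dual;
-- systimestamp keeps the OS (server) offset; current_date follows US/Eastern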

Ref: Oracle 19c DB Time Functions: https://docs.oracle.com/en/database/oracle/oracle-database/19/nlspg/datetime-data-types-and-time-zone-support.html

2022-12-13

Oracle: ORA-01555: snapshot too old Tuning

Product: Oracle Database
Version: 9.0.0 - 18.0

Since Oracle Database 9i, many DBAs have kept the UNDO_MANAGEMENT=AUTO setting, and developers/support often encounter the following UNDO related error on long running SQL:

Oracle <ORCL> error message for operation <OCIStmtExecute>: <ORA-12801: error signaled in parallel query server P07G
ORA-01555: snapshot too old: rollback segment number 271 with name "_SYSSMU271_3009956035$" too small>.

If DBAs don't study the Oracle documentation on "Automatic Tuning of Undo Retention" but simply search Google/the Internet for an answer, then they often come to the following incorrect workarounds:

  1. Increase UNDO_RETENTION time - no change in the long running SQL
  2. Modify code to split the long running SQL into several shorter running SQL statements, e.g. store into an intermediate table, then select from the intermediate table
  3. Ask developers to tune the long running SQL to run faster, e.g. create an index, don't select unnecessary columns, filter out unrequired data (more WHERE criteria)
  4. Don't insert/update/delete on the table that the long running SQL is SELECTing from

What the DBA missed in the above analysis is that he/she only focused on the ORA-01555 error without considering the rest of the message.  Following are the things that the DBA missed:

  1. Parallel query server is enabled
  2. The above PQ number is P07G, with the numbering starting at P000
  3. The number of concurrently running SQL statements at that point in time, without considering the parallel query servers (excluding parallel query coordinators and slaves)
  4. The size of the UNDO tablespace, which could be a few TB in size, and bigger than the entire DB size
  5. The UNDO tablespace's dbf files are not in auto extend mode
  6. Even if the UNDO dbf files are in auto extend mode, they hit the max dbf file size
As a result, tuning UNDO_RETENTION, or even increasing the size of the UNDO dbf files by 300 GB or more, doesn't resolve the issue.
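
Before changing any parameter, it helps to see what the automatic undo tuning is actually doing, and whether the undo datafiles can still grow. A diagnostic sketch (the tablespace name UNDOTBS1 is an assumption):

-- What retention has the automatic tuning been using, and were
-- unexpired extents stolen (a symptom of undo space pressure)?
select begin_time, maxquerylen, tuned_undoretention, unxpstealcnt
from v$undostat
order by begin_time;

-- Can the undo datafiles actually grow any further?
select file_name, autoextensible, bytes/1024/1024 mb, maxbytes/1024/1024 mb_max
from dba_data_files
where tablespace_name = 'UNDOTBS1';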

The resolution is, in fact, not related to modifying UNDO_RETENTION at all.

Post a comment if you want to know the answer, as I want to know how many visitors are actually reading my blog posts.

2021-12-07

Oracle: How to drop tablespace with missing file with ORA-29857

Product: Oracle DB

Version: 19.2 - 21c

This procedure illustrates how to drop a tablespace whose dbf files are missing or corrupted.

Assumption:

  • Recycle bin is active
  • Several users have tables and indexes within the affected tablespace
  • The entire tablespace will be dropped, even if only one of its multiple dbf files is missing
  • Tablespace name is ABC

Problem simulation:

SQL> drop tablespace ABC including contents and datafiles;

ERROR at line 1:
ORA-29857: domain indexes and/or secondary objects exist in the tablespace

Workaround Procedure:

1. Find out all the dbf file IDs of the tablespace. Note down the FILE_ID values to use later

select file_id from dba_data_files where tablespace_name = 'ABC';

2. Take each of the above dbf files offline. If there are multiple files, execute this once per file

alter database datafile [file_id] offline drop;
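
If the tablespace has many files, the statements can also be generated instead of typed one per file; a small sketch:

select 'alter database datafile ' || file_id || ' offline drop;'
from dba_data_files
where tablespace_name = 'ABC';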

3. Disable recycle bin

alter system set recyclebin=off deferred;

4. Drop tablespace with all its dbf file

drop tablespace ABC including contents and datafiles;

5. Verify the dbf files are dropped. There should be no output

select file_id from dba_data_files where tablespace_name = 'ABC';

6. Re-enable recycle bin

alter system set recyclebin=on deferred;

2021-10-12

Oracle/Tomcat - Monitoring ulimit nofile (max open files) to Resolve Max Open Files Errors

Product: Any product that open files, such as Oracle DB, Tomcat
Version: Any version since using floppy disk

For software that runs on Linux (and MacOS), good vendors will document the kernel tuning parameters that their software uses, or was tested with.  In this post, I would like to discuss the maximum concurrent open files parameter, which is called the following:

  • Linux (RedHat, Fedora, Ubuntu, CentOS): nofile
  • MacOS: maxfiles (won't be discussed in this post)

This configuration falls under the responsibility of the OS administrator & the application administrator.  Application administrators could be DBAs, programmers, web admins, LDAP admins, SAP BODS admins, ETL developers, data migration consultants, Apache Parquet users, etc.  The application team needs to tell the OS admin how many files the program will access (read or write) per process concurrently, while the OS admin should monitor its usage, and alert the application team when it is congested.

Linux has 3 places to configure this per process parameter:

  • Global limit for the entire OS/machine/VM - file /etc/sysctl.conf, parameter fs.file-max; verify the current value with cat /proc/sys/fs/file-max; change it with sysctl -w fs.file-max=[new value]; monitor current usage with cat /proc/sys/fs/file-nr
  • Per user, per process soft limit, the effective limit - file /etc/security/limits.conf, parameter "soft nofile"; verify the current value with ulimit -Sn; verify current usage with ls /proc/<PID>/fd
  • Per user, per process hard limit, the value the user can raise the soft limit to without asking the SA - file /etc/security/limits.conf, parameter "hard nofile"; verify the current value with ulimit -Hn; verifying current usage is not applicable, as processes depend on the soft limit
Here are some clarifications to prevent confusion:
  • The "ulimit" command is per OS user, so it will display different values when logged in as different OS accounts
  • An OS user can only manually increase (or automate using .bash_profile) the soft limit of nofile; the hard limit (configured in /etc/security/limits.conf) must be changed by the SA (whoever can sudo or modify files owned by root)
  • Network socket connections count towards nofile usage, in addition to files
  • Pipe files count towards nofile usage, and there are at minimum 3 per process, i.e. standard input, standard error, standard output
  • Files opened/written by spawned threads count toward the nofile usage of the process. If 2 threads are spawned by the process, each reading/writing 100 files concurrently, then the total per process usage is 200 concurrent file accesses
  • The parameter covers concurrent file open or write operations, and doesn't count files which are closed
  • A DB always needs to write to all dbf files (such as Oracle), plus additional log files, so the total number of dbf files will be indirectly limited by the OS nofile soft limit (a DB bounce is required to activate a new value)
  • Vendors' nofile configuration is just for reference (unless its value is unlimited). Admins must adjust according to their concurrent file access usage
  • Direct raw storage usage, such as raw devices used by Oracle RAC, is counted, so the more raw devices, the higher the concurrent nofile usage
  • Per process monitoring is most accurate by checking /proc/<PID>/fd.  The ls command can display which files it is accessing (concurrently), netstat -anp | grep <PID> can display the network ports it uses, and lsof -p <PID> can show pipes + files + network ports
  • An extremely low value, or high usage, can leave the application unable to display the "reached max file open" error, making it near impossible to troubleshoot, as the OS doesn't capture historical values (when the process crashes/hangs/misbehaves)
Example output of lsof for PID 25661, showing entries related to nofile:
$ lsof -P -p 25661 | awk '$4 ~ /[[:digit:]]/ {print}'
COMMAND     PID USER   FD   TYPE  DEVICE  SIZE/OFF      NODE NAME
al_engine 25661  sap    0r  FIFO     0,9       0t0   2188223 pipe
al_engine 25661  sap    1w  FIFO     0,9       0t0   2188224 pipe
al_engine 25661  sap    2w  FIFO     0,9       0t0   2188225 pipe
al_engine 25661  sap    3r   REG   259,1    637448 156135934 /opt/sap/dataservices/bin/BEError_message_en.bin
al_engine 25661  sap    4r   REG   259,1      5492 156135932 /opt/sap/dataservices/bin/broker_message_en.bin
al_engine 25661  sap    5r   REG   259,1     91966 156135931 /opt/sap/dataservices/bin/BETrace_message_en.bin
al_engine 25661  sap    6u  IPv4 2188262       0t0       TCP localhost:37482->localhost:4012 (ESTABLISHED)

FD (the nofile entry, which means File Descriptor) 0, 1 and 2 are standard input/output/error.
There is 1 TCP socket open.
lsof -P means do not translate port numbers to service names, in this case port 4012 and port 37482.
File type FIFO means a pipe file.
File type REG means a regular file.
File type IPv4 means TCP/IP version 4.
COMMAND is the program that reads/writes/talks to these files/TCP sockets/pipes/devices.

Summary of useful commands and files (per process: Y/N):

  • ulimit -Sn - shows the default soft limit (effective value) of nofile for a new process/command (Y)
  • ulimit -n - same as "ulimit -Sn" (Y)
  • grep file-max /etc/sysctl.conf - checks the configured global max concurrent files for the entire OS/VM; returns nothing if not configured (N)
  • sysctl fs.file-max - same as /etc/sysctl.conf, but always returns a value, even if not defined in /etc/sysctl.conf (N)
  • sudo sysctl -w fs.file-max=[new value] - modifies the global max concurrent open files (N)
  • ls -l /proc/<PID>/fd - displays all files accessed by a specific process (Y)
  • grep "open files" /proc/<PID>/limits - displays the effective soft & hard max open files limits of a specific process; this overrides the ulimit command output, and cannot be changed unless the process is restarted (after changing ulimit) (Y)
  • lsof -P -p <PID> | awk '$4 ~ /[[:digit:]]/ {print}' - more verbose output than /proc/<PID>/fd; displays files open for read/write, TCP ports, block device names, and pipes open for read/write (Y)
  • ls /proc/<PID>/fd | wc -l - counts the nofile usage of a specific process (Y)
  • lsof -p <PID> | awk '$4 ~ /[[:digit:]]/' | wc -l - same count as above, but based on lsof output (Y)
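
To automate the "alert the application team when congested" part above, a minimal monitoring sketch might compare a process's current usage against its soft limit (the 80% threshold is an arbitrary assumption):

#!/bin/sh
# Warn when a process uses more than 80% of its soft nofile limit
PID=$1
USED=$(ls /proc/$PID/fd | wc -l)
LIMIT=$(awk '/Max open files/ {print $4}' /proc/$PID/limits)   # soft limit
if [ $((USED * 100)) -ge $((LIMIT * 80)) ]; then
    echo "WARNING: PID $PID uses $USED of $LIMIT file descriptors"
fi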

2021-08-24

Oracle: Advanced Shrink of UNDO Tablespace

Product: Oracle RDBMS
Version: 9.0.x - 19.3 (could be higher)
OS: Windows, Linux

Locally managed UNDO tablespaces were introduced about 20 years ago in Oracle 9i. Oracle's documentation (KB) about shrinking one never actually shrinks it, but replaces it with another tablespace, using the following steps:

1. Create 2nd new undo tablespace, e.g. UNDOTBS2
2. Set init parameter undo_tablespace=UNDOTBS2, the 2nd new undo tablespace above
3. Bounce DB
4. Drop old UNDOTBS1 tablespace, including its dbf files

This post goes into a more advanced technique to drop the automatically created rollback segments, and shrink the original UNDOTBS1 tablespace. The steps will drop all rollback segments that were automatically created, followed by shrinking the files. Once the UNDOTBS1 tablespace is clean, we can shrink its dbf files. This procedure uses the hidden parameter "_OFFLINE_ROLLBACK_SEGMENTS"; without it, we can't drop the auto-created offline rollback segments.

Preparation:

  1. Ensure the DB can be safely brought down for maintenance, as this will require bouncing the DB a few times
  2. Make a backup of spfile[SID].ora in the $ORACLE_HOME/dbs/ directory. The maintenance will modify it, and the backup is the fastest way to revert to the original settings
Steps:
1. List down all rollback segments in tablespace UNDOTBS1:
SELECT segment_name FROM dba_rollback_segs where tablespace_name = 'UNDOTBS1';

2. Create a statement to set the hidden parameter "_OFFLINE_ROLLBACK_SEGMENTS" like below, with the value at most 255 characters long:

alter system set "_OFFLINE_ROLLBACK_SEGMENTS" = '_SYSSMU1_2270612333$,_SYSSMU2_1117599206$,_SYSSMU3_2866369812$' scope=spfile;

3. Shutdown immediate

4. Startup

5. Generate the DROP ROLLBACK SEGMENT statements dynamically:
SELECT 'drop rollback segment "'||segment_name|| '";' FROM dba_rollback_segs where tablespace_name = 'UNDOTBS1';

6. Run the above generated DROP ROLLBACK SEGMENT statements to drop all segments that were listed in Step 2 above

7. Only the UNDO segments that we configured in Step 2 above can be dropped. In order to drop more, repeat Steps 1 (to get an updated list) through 6

8. Double check that only 1 rollback segment is left in the UNDOTBS1 tablespace, which is auto created

SELECT segment_name FROM dba_rollback_segs where tablespace_name = 'UNDOTBS1';

9. If there are more, then repeat Steps 1 - 6

10. Shrink the UNDOTBS1 dbf file; in this example, it is file ID 3, and we shrink it to 100 MB

select tablespace_name, file_id, file_name, bytes/1024/1024 mb, maxbytes/1024/1024 mb_max from dba_data_files where tablespace_name = 'UNDOTBS1';
alter database datafile 3 resize 100m;

11. Optionally set its maxsize to 1000 MB

alter database datafile 3 autoextend on maxsize 1000m;

12. Shutdown, and restore the original spfile[SID].ora. This indirectly removes the hidden parameter "_OFFLINE_ROLLBACK_SEGMENTS"

shutdown immediate
cp -p $ORACLE_HOME/dbs/spfile[SID].ora.org $ORACLE_HOME/dbs/spfile[SID].ora

13. Startup
14. Now the DB will auto create 1-2 new undo rollback segments in the UNDOTBS1 tablespace

SELECT segment_id, segment_name, tablespace_name, status, next_extent, max_extents
FROM dba_rollback_segs;

2021-04-09

Oracle - Startup Failure

Product: Oracle RDBMS
Version: 10.2 - 19.2
OS: Linux

While helping others troubleshoot Linux systemd auto startup of Oracle RDBMS, I encountered the following failure when trying to log in as SYSTEM (or any application DB user) on the Oracle server OS itself:

[oracle]$ sqlplus system/non_default_password

SQL*Plus: Release 12.1.0.2.0 Production on Fri Apr 9 13:59:13 2021
Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 500 (oracle), current egid = 501 (dba)

However, if the Oracle daemons are shut down, then started up manually, SYSTEM and other application DB users can log in without issue.

Diagnostic:

1. Check the OS username and group name of $ORACLE_HOME/bin/oracle

$ ls -ld $ORACLE_HOME/bin/oracle

-rwxr-x---. 1 oracle dba 323762270 Mar 24 19:34 /dbf/ora12.1/bin/oracle

2. Note down the username and group name, which are oracle and dba
3. Check the currently running Oracle PMON daemon (it can be any other Oracle daemon), displaying its OS user and OS group:
$  ps -eo euser,egroup,cmd | grep -i ora_pmon_$ORACLE_SID
oracle   oinstall      ora_pmon_SUPP121
4. Compare the output of ls and ps to ensure that both user & group are identical
5. Typically, the custom auto startup script has the wrong OS user group name

6. For example, a systemd script could have the following wrong entry:

Group=oracle

7. Change the above to the OS user group shown in step 1, which is "dba"
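
For reference, the corrected fragment of the unit file might look like the following (the rest of the unit is omitted; only the User/Group pair is the point here):

[Service]
# Must match the owner and group of $ORACLE_HOME/bin/oracle from step 1
User=oracle
Group=dba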

Advanced Diagnostic

Oracle daemons allocate "shared memory segments" and "semaphore arrays" in Linux (UNIX in general), which carry an OS user group that leads to this login issue:

1. Checks "shared memory segment" and "semaphore arrays" allocated to specific ORACLE_SID

$ sysresv -l RMANCAT
IPC Resources for ORACLE_SID "RMANCAT" :
Maximum shared memory segment size (shmmax): 18446744073692774399 bytes
Total system shared memory (shmall): 18446744004990070784 bytes
Total system shared memory count (shmmni): 4096
*********************** Dumping ipcs output ********************
------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 655360     oracle     640        4096       0
0x00000000 688129     oracle     640        4096       0
0x00000000 720898     oracle     640        4096       0
0xd787ced8 753667     oracle     640        16384      70
------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x720acc58 98304      oracle     640        125
0x720acc59 131073     oracle     640        125
0x720acc5a 163842     oracle     640        125
0x720acc5b 196611     oracle     640        125
0x720acc5c 229380     oracle     640        125
0x720acc5d 262149     oracle     640        125
0x720acc5e 294918     oracle     640        125
0x720acc5f 327687     oracle     640        125
0x720acc60 360456     oracle     640        125
*********************** End of ipcs command dump **************

***************** Dumping Resource Limits(s/h) *****************
core file size                         0 KB/0 KB
data seg size                     UNLIMITED/UNLIMITED
scheduling priority                    0 KB/0 KB
file size                         UNLIMITED/UNLIMITED
pending signals                       30 KB/30 KB
max locked memory                 UNLIMITED/UNLIMITED
max memory size                   UNLIMITED/UNLIMITED
open files                            64 KB/64 KB
POSIX message queues                 800 KB/800 KB
real-time priority                     0 KB/0 KB
stack size                            10 MB/10 MB
cpu time                          UNLIMITED/UNLIMITED
max user processes                    16 KB/16 KB
virtual memory                    UNLIMITED/UNLIMITED
file locks                        UNLIMITED/UNLIMITED
***************** End of Resource Limits Dump ******************
Total /dev/shm size: 14524801024 bytes, used: 2097008640 bytes
Shared Memory:
ID              KEY
688129          0x00000000
720898          0x00000000
655360          0x00000000
753667          0xd787ced8
Semaphores:
ID              KEY
98304           0x720acc58
131073          0x720acc59
163842          0x720acc5a
196611          0x720acc5b
229380          0x720acc5c
262149          0x720acc5d
294918          0x720acc5e
327687          0x720acc5f
360456          0x720acc60
Oracle Instance alive for sid "RMANCAT"

2. Let's pick the first semaphore array, ID 98304, to find out its OS user group using the ipcs command:

$ ipcs -si 98304 | head -20

Semaphore Array semid=98304
uid=500  gid=501         cuid=500        cgid=501
mode=0640, access_perms=0640
nsems = 125
otime = Fri Apr  9 16:34:50 2021
ctime = Fri Apr  9 16:34:50 2021
semnum     value      ncount     zcount     pid
0          1          0          0          3252
1          25427      0          0          3252
2          4619       0          0          3252
3          32764      0          0          3252
4          0          0          0          0
5          0          0          0          0
6          0          1          0          3307
7          0          1          0          3309
8          0          0          0          0
9          0          1          0          3318
10         0          1          0          3320
11         0          1          0          3376

3. The "gid" of the semaphore array must match the ORACLE_HOME/bin/oracle program name. Uses following command to show the file owner in gid. Second column is the gid, which is 501:

[oracle]$ ls -ldn $ORACLE_HOME/bin/oracle
-rwxr-x---. 1 500 501 323762270 Mar 24 19:34 /dbf/ora12.1/bin/oracle

4. These values are configured in the /etc/passwd and /etc/group files in UNIX

2020-11-27

Oracle: Oracle Enterprise Manager Express 12.2 Setup

Product: Oracle Enterprise Manager Express
Version: 12.2.0.1
Oracle Document URL: https://docs.oracle.com/database/121/ADMQS/GUID-BA75AD46-D22E-4914-A31E-C395CD6A2BBA.htm

I followed the above Oracle Enterprise Manager Express 12.1 documentation to check my Oracle 12.2 EM Express login issue, and found the workarounds.

Problem #1: Chrome blocks Adobe Flash content, so the Oracle EM Express login screen won't appear
Solution: Use IE 11 in Windows 10

Problem #2: Accessing http://hostname:8080 consistently prompts for credentials ("The server also reports: XDB"), and I cannot log in as the SYSTEM or SYS users

Solution: Change the URL to http://hostname:8080/em, which contains the extra "/em"


Problem #3: The HTTP port is not configured, only HTTPS. I prefer HTTP for ease of troubleshooting, and to avoid maintaining a self-signed SSL cert
Solution:

1. Login to sqlplus as system, and run the following (note it is SETHTTPPORT, for HTTP, not SETHTTPSPORT):

exec DBMS_XDB_CONFIG.SETHTTPPORT(8080);

2. Stop Listener

lsnrctl stop

3. Start Listener

lsnrctl start

4. Check that the listener shows an extra entry for port 8080

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=hostname)(PORT=8080))(Presentation=HTTP)(Session=RAW))
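
To confirm which port XDB is serving over HTTP, the matching getter function can be queried from sqlplus (assuming DBMS_XDB_CONFIG.GETHTTPPORT is available in your release):

select dbms_xdb_config.gethttpport from dual;
-- Should return 8080 after step 1 and the listener restart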

Problem #4: EM Express keeps prompting me with a credential dialog box for XDB
Solution:
1. Login to sqlplus as system, and run the following:

exec dbms_xdb_config.setglobalportenabled(TRUE);

2. Restart Oracle database

3. Access http://hostname:8080/em again; the XDB credential dialog no longer appears, and the EM Express login screen loads and accepts the login

Issue resolved

2020-07-20

Oracle 12: ORA-00600 [12811] when Dropping Table with Identity Column

Product: Oracle RDBMS
Version: 12.1.0.1 - 12.2.0.2

There were many tables in my schema with an IDENTITY column, which is a new feature of Oracle 12.1.0.1.  Randomly, Oracle throws the following error for one of them, while the rest have no issue:

ERROR at line 1:
ORA-00600: internal error code, arguments: [12811], [391537], [], [], [], [], [], [], [], [], [], []

I searched the Internet, and found that this is a known bug affecting the following versions:
  • 12.1.0.1 - 12.1.0.2
  • 12.2.0.1 - 12.2.0.2
The permanent fix is to install a patch, and the temporary workaround is to drop the identity column first before dropping the table. For example, for table "my_company" with identity column company_ID, it would be:

ALTER TABLE my_company MODIFY company_ID DROP IDENTITY;
drop TABLE my_company;

The above workaround doesn't work in my environment, because my database had dropped the identity column's internal sequence.  ALTER TABLE MODIFY DROP IDENTITY failed with the same error:

SQL> ALTER TABLE my_company MODIFY company_ID DROP IDENTITY;
ALTER TABLE my_company MODIFY company_ID DROP IDENTITY;
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [12811], [391537], [], [], [], [], [], [], [], [], [], []

In other words, even if I installed the patch, it won't re-create the sequence, and the table cannot be dropped either.

Internal of IDENTITY Feature

First, let me list the related components behind an IDENTITY column:
  1. Table - tabs
  2. Column - user_tab_columns
  3. Sequence - user_sequences
  4. Sequence to Column mapping - user_TAB_IDENTITY_COLS
A new sequence object with the name ISEQ$$_[object_id] is created for each identity column.  This can be queried from user_sequences:

select * from user_sequences where sequence_name = 'ISEQ$$_[object_id]';

In my situation, the entry for the sequence exists in user_tab_columns, but is missing in user_sequences and user_tab_identity_cols:

select data_default from user_tab_columns where table_name = 'MY_COMPANY' and column_name = 'COMPANY_ID';

"DH1030"."ISEQ$$_663000".nextval

As you can see, column user_tab_columns.data_default still shows that there is a sequence called ISEQ$$_663000, but when checked in user_sequences, it doesn't exist any more, i.e. it has been dropped:

select count(*) from user_sequences where sequence_name = 'ISEQ$$_663000';

So, it is logical that the sequence to column mapping is missing in user_TAB_IDENTITY_COLS.

Since most of the tables could be dropped, except one, I believe this is an intermittent issue. If I can re-create the following, I could drop the IDENTITY column, then drop the table:
  • sequence - a new entry exists in user_sequences, with a new ID in user_objects.object_id
  • sequence to column mapping - need to determine the internal system table that holds the value

Sequence to Column Mapping

Digging into the system view user_TAB_IDENTITY_COLS, I found the following internal table that keeps the lookup entry:
  1. sys.idnseq$ - it is a table owned by SYS
The above table still shows a leftover entry with the previous sequence object ID (user_objects.object_id) for the table, column, and sequence:

select * from idnseq$ where obj# = (select object_id from user_objects where object_name = 'MY_COMPANY');

The old sequence object ID is stored in sys.idnseq$.seqobj#.

So, I need to update this table to point to the new sequence that I manually created.

Workaround

The following steps put back the dropped sequence, in order to drop the identity feature and the table:
  1. Get the old sequence name of identity column my_company.company_id
    select data_default from user_tab_columns where table_name = 'MY_COMPANY' and column_name = 'COMPANY_ID';
    Output: "DH1030"."ISEQ$$_663000".nextval
  2. Based on above output, the old sequence name was ISEQ$$_663000
  3. Create a new sequence with same name (name doesn't matter)
    create sequence ISEQ$$_663000;
  4. Find out the object ID of the new sequence
    select object_id from user_objects where object_name = 'ISEQ$$_663000';
    Output: 761299
  5. Ensure sys.idnseq$.obj# will return 1 row for table my_company
    select count(*) from sys.idnseq$ where obj# = (select object_id from user_objects where object_name = 'MY_COMPANY') and seqobj# = 663000;
  6. Update the sequence mapping in sys.idnseq$.seqobj# to point to the new sequence's object ID
    update sys.idnseq$ set seqobj# = 761299 where obj# = (select object_id from user_objects where object_name = 'MY_COMPANY') and seqobj# = 663000;
  7. Now the following system view will show an entry that my_company.company_id maps to sequence ISEQ$$_663000
    select table_name, column_name, generation_type, sequence_name, identity_options from user_TAB_IDENTITY_COLS where table_name = 'MY_COMPANY';
  8. Next, it is possible to disable the IDENTITY feature for my_company.company_id
    ALTER TABLE my_company MODIFY company_id DROP IDENTITY;
  9. Lastly, drop the table
    drop table my_company;
Note: the sys.idnseq$.obj# predicate will return 1 row, as only 1 column in a table can have the IDENTITY feature, but adding the extra check seqobj# = 663000 eliminates any human mistake

2020-03-19

Oracle: Tuning Slow ALTER TABLE DROP COLUMN

Product: Oracle
Version: 8i - 18c

I came across an interesting question: why does Oracle take so long to drop a column while MS SQL Server is immediate?
Alter Table [Table_name] Drop ([col1], [col2], [col3], .....);

Firstly, let's clarify that MS SQL Server only logically drops the column, while Oracle physically drops the column.  This means the space taken by the dropped column is not freed up in MS SQL Server, but it is in Oracle.  However, for Oracle it is considered free space fragmentation, just as when you delete records from the table, or set the column value to NULL.  Additional defragmentation is required to see the freed space at the tablespace level. If you would like Oracle to logically delete the column, then use the following statement:

Alter Table [Table_name] Set Unused ([col1], [col2], [col3], .....);

Note: To find unused columns, query DBA_UNUSED_COL_TABS

Next, let's list what can cause Oracle column dropping to take so long:

  1. Online redo log
  2. Undo
  3. Checkpoint
  4. The table is updated once per column, even when the ALTER TABLE DROP COLUMN drops multiple columns in 1 single statement
  5. NOLOGGING still uses the UNDO tablespace
  6. If the UNDO dbf files auto extend, then there is additional overhead to extend the dbf files
The next sections discuss each item above.

Tuning Online Redo Log

Several options to tune this:
  1. Temporarily set the table to NOLOGGING before dropping the column: alter table [table1] nologging;
  2. Use a larger online redo log size to reduce the number of slow checkpoints
  3. Increase the checkpoint size, e.g. 100,000 rows (still limited by the redo log size in #2), in the drop column syntax: alter table [table1] drop (col1, col2) checkpoint 100000;
  4. Move the online redo log to NVMe SSD
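
Putting options 1 and 3 together, a typical sequence might look like the following sketch (the table and column names are placeholders; take a fresh backup afterward because of NOLOGGING):

alter table [table1] nologging;
-- checkpoint every 100,000 rows to bound undo and redo usage
alter table [table1] drop (col1, col2, col3) checkpoint 100000;
alter table [table1] logging;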

Tuning Undo

  1. Pre-allocate the undo dbf size to prevent auto extend
  2. Specify a small checkpoint size (but balance it against the online redo log tuning above): alter table [table1] drop (col1, col2) checkpoint 1000;
  3. Move the undo dbf files to NVMe SSD

Checkpoint Tuning

  1. Carefully check the alert log for online redo log switching, and configure the init parameters LOG_CHECKPOINT_INTERVAL & LOG_CHECKPOINT_TIMEOUT
  2. Oracle recommends 3 checkpoints per hour, but adjust it to provide optimal performance when dropping columns (restore the settings afterwards)
  3. These parameters can be changed in real time without shutting down Oracle
  4. Less frequent checkpoints will slow DB recovery, if it ever crashes

Table Performance

  1. Move the tablespace used by the table to NVMe SSD
  2. Set the table to NOLOGGING temporarily, and back to LOGGING after it is done
  3. Take a new DB backup after setting NOLOGGING, else the table is not restorable
  4. Use multiple DB sessions to drop columns in parallel
  5. ALTER TABLE DROP COLUMN can be interrupted, as it has CHECKPOINT
  6. It is possible to plan multiple downtime windows to perform the housekeeping
  7. It is possible to resume an interrupted column drop for a specific table using ALTER TABLE [table1] DROP COLUMNS CONTINUE CHECKPOINT [number]

Advanced Performance Analysis

If you need to gather in-depth performance statistics to further understand what affects a specific column drop's performance, query the following system views:
  • v$mystat - join with v$statname
  • v$statname - used with above
  • v$rollstat
  • v$enqueue_stat - lock statistics
Save the output to a static table before & after running ALTER TABLE DROP COLUMN, then study the statistics to understand where the performance bottleneck is.
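
A sketch of that snapshot approach (the table names my_stats_before/my_stats_after are arbitrary):

-- Snapshot session statistics before the DDL
create table my_stats_before as
select sn.name, ms.value
from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#;

-- ... run the ALTER TABLE ... DROP COLUMN here ...

create table my_stats_after as
select sn.name, ms.value
from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#;

-- Statistics that changed the most during the drop
select a.name, a.value - b.value delta
from my_stats_after a join my_stats_before b on a.name = b.name
where a.value - b.value > 0
order by delta desc;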

Throughout the period, ensure there is no contention on OS RAM, CPU, or swap (swap usage must be 0 to be optimal).

2013-07-15

Oracle Rename Database SID in RedHat - Includes OEM

Product: Oracle RDBMS 10.2 - 11.2 in RedHat

While looking online for existing steps to rename an Oracle SID and hostname, I came to realize that most DBAs blog about the Windows environment, and skip the OEM configuration. Therefore, I wrote this post to cover the Linux environment and include the missing OEM part.

Please note that OEM may not have been functioning in your database prior to this change, so my post does not cover troubleshooting an already broken OEM.

1. Login as the oracle user, or the user who owns ORACLE_HOME
2. Startup database in mount mode. You can use any approach to do so, but I am connecting locally
2.1. ORACLE_SID=[Old SID]
2.2. sqlplus '/ as sysdba'
2.3. SQL> alter system checkpoint;
2.4. SQL> shutdown immediate;
2.5. SQL> startup mount
3. nid dbname=[New SID] target=/
3.1. Press "y" to confirm rename
4. Set DB_NAME in spfile.ora
4.1. sqlplus '/ as sysdba'
4.2. startup nomount
4.3. alter system set db_name=[New SID] scope=spfile;
4.4. create pfile from spfile;
4.5. shutdown
5. Rename pfile
5.1. cd $ORACLE_HOME/dbs
5.2. mv pfile[Old SID].ora pfile[New SID].ora
6. Replace all names in the pfile with vi, but you can use any editor to do so
6.1. Type :%s/[Old SID]/[New SID]/g
7. Move all the following files to new directory name reflecting new SID
7.1. Control file
7.2. admin/[Old SID]
7.3. Online redo log
7.4. Temp file
7.5. Other tablespace
7.6. Flash recovery area
7.7. For 11.1 and 11.2: $ORACLE_BASE/diag/rdbms/[Old SID]
8. Startup database in MOUNT mode and observe alert.log to fix any missing steps
8.1. ORACLE_SID=[New SID]
8.2. sqlplus '/ as sysdba'
8.3. SQL> startup mount
9. Rename all files in database
9.1. SQL> select member from v$logfile;
9.2. SQL> alter database rename file '[Old filename]' to '[New file name]';
9.3. SQL> select name from v$datafile;
9.4. SQL> alter database rename file '[Old filename]' to '[New file name]';
9.5. SQL> alter database open resetlogs;
10. Startup database with spfile
10.1. SQL> create spfile from pfile;
10.2. SQL> shutdown immediate
10.3. SQL> startup
11. Modify /etc/oratab to rename the SID
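
The oratab entry format is [SID]:[ORACLE_HOME]:Y|N, so this is a one-line edit, e.g. (the ORACLE_HOME path is an example):

# before: OLDSID:/u01/app/oracle/product/11.2/dbhome_1:Y
NEWSID:/u01/app/oracle/product/11.2/dbhome_1:Y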





Next is Oracle Enterprise Manager, AKA the daemon started by "emctl start dbconsole"
1. The UNIX hostname is modified by changing /etc/hosts and /etc/sysconfig/network
2. Reboot UNIX. Manually shut down the Oracle database first if /etc/init.d is not set up to shut down the database
3. Rename hostname in $ORACLE_HOME/network/admin/listener.ora, if file exists
4. Rename following to reflect new hostname and SID
4.1. $ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_[Old hostname]_[Old SID]
4.2. $ORACLE_HOME/[Old hostname]_[Old SID]
5. Re-create the OEM DBConsole repository. This requires entering the SID, SYS password, SYSMAN password, and DBSNMP password (when creating)
5.1. emctl stop dbconsole
5.2. emca -deconfig dbcontrol db -repos drop
5.3. emca -config dbcontrol db -repos create

2013-01-28

Oracle: Slow Data Import With Auto Extend Data File

Product: Oracle Database
Version: 10.x - 11.2 or later

Many developers or junior DBAs may overlook the Oracle database's default auto extend configuration when they create a tablespace, or data file.

It becomes a critical performance bottleneck when a user needs to import data from a dump file.

Oracle has a default auto extend increment of 64 KB (it depends on the database block size though) for the dbf file, which will slow down database import, or table/index population, whenever the dbf file needs to extend and grow.

Use the following query, and look for the following 2 wait events to detect this bottleneck:
1. control file sequential read
2. Data file init write

select sid, event, total_waits, time_waited from v$session_event
where sid in (select sid from v$session where username='&username');

Replace &username with the user that you would like to monitor

Solution
Use the following command to make the dbf file auto grow by 1 MB each time

alter database datafile '&filename' autoextend on next 1m;

Use the following command to make the dbf file auto grow by 5 MB each time

alter database datafile '&filename' autoextend on next 5m;

Use the following command to make the dbf file auto grow by 300 MB each time

alter database datafile '&filename' autoextend on next 300m;
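
To see which datafiles are still growing in tiny increments, the current settings can be checked first; a sketch (INCREMENT_BY is in database blocks, so it is multiplied by the block size):

select file_name, autoextensible,
       increment_by * (select value from v$parameter where name = 'db_block_size') / 1024 next_kb
from dba_data_files;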


2012-09-02

Oracle: Update View Error ORA-01732 Explanation and Solution

Product: Oracle RDBMS
Version: 7.x - 11g and future version

Many junior DBAs and developers treat a database VIEW as a table which they can simply UPDATE, but they often get the ORA-01732 error, whose cause and solution I will explain.

Example of error

SQL> UPDATE vwTestView set name='SCChen' where name='scchen';
UPDATE vwTestView set name='SCChen' where name='scchen'
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

Quick Verification for View Update
There are some restrictions, but Oracle provides system data dictionary views which allow DBAs and developers to quickly find out whether a column in a view can be used for update.

The view names are as below:
1. USER_UPDATABLE_COLUMNS - filter by TABLE_NAME (which is view name)
2. ALL_UPDATABLE_COLUMNS - filter by OWNER, TABLE_NAME
3. DBA_UPDATABLE_COLUMNS - filter by OWNER, TABLE_NAME

Execute the SQL below, supplying the view's owner and view name, to see which columns are updatable:

select table_name, column_name, updatable from all_updatable_columns where table_name = '&view_name' and owner = '&view_owner';
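
For the vwTestView example above, a run might look like this (the output shown is illustrative only):

select column_name, updatable from user_updatable_columns where table_name = 'VWTESTVIEW';

COLUMN_NAME  UPD
-----------  ---
NAME         NO

Here NO tells you up front that the UPDATE will fail, before you ever hit ORA-01732.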

Explanation
A view often contains some translation: built-in functions (SUM, DISTINCT, MIN), user defined functions, combinations of multiple columns (LastName + FirstName), or a UNION of multiple tables. When these are defined in the view, the specific column, or even the whole view, cannot be used for UPDATE.

Following are common examples that lead to this error:
1. The built-in function DISTINCT is used, so it is impossible to update any column in the view

 create view vwTestView as select distinct name from employee

2. A TRIM, LTRIM, RTRIM function is used, so that specific column cannot be updated
3. An OLAP function is used. It is impossible to update any column in the view


create view vwTestView as
select a.*, first_value(sal) over() as Quarter_Salary
from employee a;


4. A GROUP BY is used
5. ORDER BY is used
6. Subquery (inner SELECT) is used
7. Keyword READ ONLY is used
8. MODEL is used
9. CONNECT BY is used
10. START WITH is used
11. UNION or UNION ALL is used
12. INTERSECT is used
13. MINUS is used
14. COUNT is used
15. MIN, MAX, AVG is used
16. SUM is used
17. SUBSTR is used

There are many reasons why Oracle (and many other vendors) restrict updates on a VIEW. Following are some of them.

Let's take DISTINCT as an example. When a view contains DISTINCT, it may display 3 rows for column GENDER (Male, Female, NULL), even if the underlying table contains 1 million records. If the database allowed the user to update the view, then the user might accidentally update 1 million records, instead of 3.

Another example... take the following view, which uses the SUBSTR function to create a virtual column:

create view vwTestView as

select SUBSTR(Name_Mid, 1, 1) Name_Mid_Init, Name_First, Name_Last
from employee;

update vwTestView set Name_Mid_Init = 'K' where Name_Mid_Init = 'n';

ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view


Column Name_Mid_Init, the middle initial, is derived from column Name_Mid. The view extracts the 1st character from that column to pass to the client application. This is commonly used by many applications. However, no database allows such a modification: the user must be aware that it is taken from another column, and simply allowing the user to update this column would break data consistency, and possibly break foreign key constraints and check constraints built on the original column Name_Mid.

Sometimes I really get tired of explaining this over and over again, as many people just say "the view has a problem, please fix it". They should study the original intention of the view design; they may not be using the correct view, or they should design another view that is suitable for the application instead.


2012-07-19

Oracle: Alter Index Online by Other Owner and Get "insufficient privileges" Error

Product: Oracle RDBMS
Version: 11.2.0.2 and above

I have a customer with the following maintenance requirements, which are common for major corporations:

1. Create another partition administration account with minimum privileges to move partitioned tables and indexes to an archive tablespace
2. This Oracle account is not the schema owner of table or index it is moving
3. The table move operation needs to be online
4. The index rebuild operation needs to be online
5. Grant minimum privilege to the partition administrator account

So this is mainly to do with Oracle database privileges, and I encountered a great challenge when doing #4.

I thought rebuilding an index only needs the "grant alter any index" privilege, but I was wrong. Due to the "ONLINE" syntax, the Oracle database will create additional tables and indexes under the original schema owner, so the following are additional privileges that must be granted to the partition administrator account:

grant create any table to sysman;
grant create any index to sysman;

Surprisingly, I don't need to give any quota on tablespaces, or "grant unlimited tablespace", to the partition administrator.

In summary, the following is the solution:

grant alter any index to [partition_admin];
grant create any table to [partition_admin];
grant create any index to [partition_admin];

Sample error message,

SQL> ALTER INDEX SCOTT.IDX_PRICELIST_1 REBUILD PARTITION P_20120425 NOCOMPRESS TABLESPACE users PARALLEL 2 online;
ALTER INDEX SCOTT.IDX_PRICELIST_1 REBUILD PARTITION P_20120425 NOCOMPRESS TABLESPACE users PARALLEL
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
 ---------------
Complete Privilege. Assume partition administrator is SYSMAN:

select * from dba_sys_privs where grantee= 'SYSMAN';

GRANTEE PRIVILEGE            ADM
------- -------------------- ---
SYSMAN  ALTER ANY INDEX      NO
SYSMAN  CREATE ANY TABLE     NO
SYSMAN  ALTER ANY TABLE      NO
SYSMAN  CREATE TABLE         NO
SYSMAN  CREATE ANY INDEX     NO

2012-07-01

Oracle: Advance Configuration of Linux HugePages

Product: Oracle RDBMS
OS: Linux in Intel x86 64-bit
Version: Any version supports hugepages

Although Oracle and many others document Linux kernel hugepages, none talk about the other implications of hugepages, which I am quite upset about after going through cycles of argument with DBAs.

Guideline summary:
1. This setting applies to any Linux distribution
2. The kernel value hugepages is a count of memory blocks of 2 MB each
3. Kernel parameter file, and value

Kernel setting file: /etc/sysctl.conf
Value: vm.nr_hugepages = [number of pages]

4. It will stop Linux from paging Oracle process memory out to the swap file
5. 60 GB of target Oracle SGA will need 30,720 huge pages (60 GB / 2 MB = 30,720 huge pages)
6. Every Oracle client connection spawns 1 Oracle shadow process (except when using Oracle Shared Server)
7. Number of private page table entries per Oracle shadow process with hugepages = 60 GB / 2 MB = 30,720 pages
8. Page Table Entry (PTE) size per Oracle shadow process = 30,720 * 8 bytes = 240 KB
9. If there are 300 Oracle client connections (like CRM, SAP, Tomcat with JDBC), with every Oracle shadow process needing 240 KB of overhead, the total is 240 KB * 300 = about 70 MB

Most DBAs use an estimate of the target Oracle SGA to derive this; the following example shows the most accurate method to calculate the kernel hugepages value:

1. Set the Oracle SGA without setting hugepages
2. Startup the Oracle database. If there are multiple databases, then start up all of them
3. Type: ipcs -m | grep oracle (where oracle is the UNIX user who starts the Oracle processes). The following shows 2 Oracle databases, each allocated about 20 GB of memory

# ipcs -m | grep oracle
0x38671d2c 3997702    oracle    660        21476933632 30

0x41271ab1 3997702    oracle    660        21476933632 30


4. If there are other Oracle processes using a different Oracle UNIX user, then grep for that username, e.g. oraias, orasap, oraweb

# ipcs -m | grep -e oraias -e orasap -e oraweb

5. The value 21476933632 bytes is the actual memory allocated to the Oracle database. Sum up all the memory used


6. To activate hugepages immediately, run this command as root: sysctl -w vm.nr_hugepages=[number of pages]
7. Start up the database, and run the following as a normal user to confirm hugepages are used:


grep HugePages /proc/meminfo
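
The manual summing in step 5 can also be scripted; a rough sketch (assumes all segments belong to the oracle user and 2 MB hugepages):

#!/bin/sh
# Sum shared memory segments owned by oracle and convert to 2 MB pages
TOTAL_BYTES=$(ipcs -m | grep oracle | awk '{sum += $5} END {print sum}')
HUGEPAGES=$(( (TOTAL_BYTES + 2097151) / 2097152 ))   # round up
echo "vm.nr_hugepages = $HUGEPAGES"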


Note:
File: /etc/security/limits.conf


The following lines are assumed to have been configured as per the Oracle documentation:



oracle          soft    memlock        unlimited
oracle          hard    memlock        unlimited



If there are other Oracle UNIX users who will start databases, then there will be additional lines to configure.




2012-06-28

Oracle RAC: Java JDBC Oracle Thin driver syntax for RAC

Product: Java, Oracle
Component: JDBC

Most people use the Java Oracle Thin driver instead of downloading the full ojdbc.jar client.

Therefore, this post shows the syntax that should be used for a multiple node active/active cluster database:


jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=[host1])(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=[host2])(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=[host3])(PORT=1521))
……
(ADDRESS=(PROTOCOL=TCP)(HOST=[hostN])(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=[service name].WORLD)
(GLOBAL_NAME=[db name].WORLD)
(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))))

Where
[host1] ... [hostN] are the hostname, VIP, or physical IP of each Oracle cluster node (normally the virtual IP)
1521 is the Oracle listener port
[service name] is the service name shown in "lsnrctl status [listener name]"
[db name] is the value used in "srvctl start database -d [db name]", and not the SID in the listener.ora
The FAILOVER_MODE is the same as the one configured in tnsnames.ora, which makes use of basic fail-over.

2012-06-13

Oracle: Example of TRIM function

Product: Oracle Database
Version: Any

The TRIM function is used to remove white space from a string, including
1. Both left and right spaces - default, or specify keyword BOTH
2. Left spaces - specify keyword LEADING
3. Right spaces - specify keyword TRAILING
4. A specific character on both left and right - specify '[1 char]' FROM
5. A specific character on the left - specify LEADING '[1 char]' FROM
6. A specific character on the right - specify TRAILING '[1 char]' FROM

Example,

Table BONUS contains a single VARCHAR2 column, called ENAME

It contains 2 records as below:
1. [space][space]Col 1[space][space]
2. ..Col 1a.

Example 1: Remove white space from both left and right side
select TRIM (ENAME) ENAME from BONUS;


ENAME
----------
Col 1
..Col 1a.



Note: Row 2 no change


Example 2: Remove left spaces

select TRIM (LEADING FROM ENAME) ENAME from BONUS;



ENAME
----------
Col 1[space][space]
..Col 1a.

Note: Row 2 no change


Example 3: Remove right spaces
select TRIM (TRAILING FROM ENAME) ENAME from BONUS;


ENAME
----------
[space][space]Col 1
..Col 1a.



Note: Row 2 no change

Example 4: Remove DOT on the left and right


select TRIM ('.' FROM ENAME) ENAME from BONUS;


ENAME
----------
[space][space]Col 1[space][space]
Col 1a

Note: Row 1 no change




Example 5: Remove DOT on the left

select TRIM (LEADING '.' FROM ENAME) ENAME from BONUS;





ENAME
----------
[space][space]Col 1[space][space]
Col 1a.

Note: Row 1 no change, and for row 2, the dot on left side removed


Example 6: Remove DOT on the right

select TRIM (TRAILING '.' FROM ENAME) ENAME from BONUS;


ENAME
----------
[space][space]Col 1[space][space]
..Col 1a


Note: Row 1 no change, and for row 2, the dot on right side removed
Example 7: Remove space and DOT on both sides
select TRIM(TRIM ('.' FROM ENAME)) ENAME from BONUS;
ENAME
----------
Col 1
Col 1a

Note: Both space and DOT characters are removed in both rows

These can be used when writing PL/SQL, when assigning to variables, like below

str_ENAME := TRIM(TRIM ('.' FROM str_ENAME));

2012-05-31

SAP: Making Use of the Oracle Advanced Compression Option

Product: SAP, Oracle
Version: SAP 6.40 onward, Oracle 11g Release 1 (11.1), BR*Tools 7.20 onward

When planning for table compression, it is important to research and be aware of the following:

1. Oracle database compression restrictions
2. Do not compress tables with frequent updates
3. Do not compress tables that need high performance throughput in INSERT
4. Do not compress tables that need high performance throughput in UPDATE
5. Low space saving on high cardinality data (less duplicate data). Use the Oracle Compression Advisor PL/SQL package to perform an estimate
6. Tables with more than 255 columns are not supported
7. Tables with LONG columns are not supported. Use SAP BRSPACE (option long2lob) to migrate to the LOB column type (SecureFile LOB is recommended, to use additional compression)
8. DELETE operations will run 20% slower. If performance degrades by more than 100%, search SAP notes for an Oracle database patch for this known bug
9. BLOB is not compressed. It needs to be converted to SecureFile in order to compress

SAP is shipped with Oracle Enterprise Edition, which is licensed to use this feature. Anyone not using Enterprise Edition will find that compression does not work. On the other hand, for anyone who has Enterprise Edition not bundled with SAP, an extra license fee is required to use it.

Normally people use the SAP brspace command to compress tables, which will SKIP these tables:
1. SAP pool tables ATAB, UTAB. This is due to reason #2
2. SAP cluster tables CDCLS, RFBLG. Due to reason #2
3. INDX-type tables BALDAT, SOC3. Due to reason #3
4. ABAP source and load tables REPOSRC and REPOLOAD. Due to reason #4
5. Update tables VBHDR, VBDATA, VBMOD, VBERROR. Due to reason #3, #4
6. RFC tables ARFCSSTATE, ARFCSDATA, ARFCRSTATE, TRFCQDATA, TRFCQIN, TRFCQOUT, TRFCQSTATE, QRFCTRACE, and QRFCLOG. Due to reason #3, #4

In an ECC 6 system, there are 949 objects that will be excluded when using the Oracle Advanced Compression Option.

Following are my recommendations when deciding to use the Oracle Advanced Compression Option for SAP:

1. Convert LONG column type to LOB
brspace -f tbreorg -a long2lob -c ctablob -s PSAPOLD -t PSAPNEW -p 2
The above will convert those tables and move them from tablespace PSAPOLD to the new tablespace PSAPNEW

2. Convert to new SecureFile LOB column type and compress
brspace -f tbreorg -a lob2lob -c ctablob -s PSAPOLD -t PSAPNEW
The above will convert those tables and move them from tablespace PSAPOLD to the new tablespace PSAPNEW


3. Convert the rest of the tables
brspace -f tbreorg -a reorg -c ctab -s PSAPOLD -t PSAPNEW -p 8
The above will convert the remaining tables and move them from PSAPOLD to PSAPNEW, compressing 8 tables in parallel


Using the Oracle Compression Advisor
exec DBMS_COMP_ADVISOR.GetRatio ('SAPPRD', 'TEST_TABLE' , 'OLTP' , 10)

Note: For BR*Tools 7.x with Oracle 11g on UNIX (not needed on AIX), you need to create an additional soft link to make it work, unless the SAP kernel is version 7.20_EXT with BR*Tools 7.20:

$ su - oracle
$ ln -s $ORACLE_HOME/lib/libnz11.so $ORACLE_HOME/lib/libnz10.so

Note: The following features are not part of the Oracle Advanced Compression Option:

1. Regular table compression
2. RMAN compression
3. Index compression

Note:
Use option "-i " if want to use different tablespace to store index. I do recommend this so that corruption of index tablespace or dbf file can be re-create from scratch.

Precaution
1. Verify there are no UNUSABLE indexes prior to compression
2. Verify there are no UNUSABLE index partitions prior to compression
3. No Oracle SYS objects with status INVALID
4. PSAPTEMP has sufficient space to hold the largest table/index, with pre-allocated space; don't rely on auto extend. If compressing in parallel, then increase it to the total size (for performance reasons)
5. The online redo log is properly sized, or temporarily add more. If compressing in parallel, increase it more (for performance reasons)
6. Modify the Oracle initialization file (spfile.ora) to have at least 1 GB for the PGA (performance reason). Parameter PGA_AGGREGATE_TARGET
7. Increase DB_CACHE_SIZE to 1 GB in the Oracle initialization file
8. If not using auto segment management, verify the table and index initial extent (INITIAL) from DBA_SEGMENTS to ensure it will not over-allocate disk space. The compression will not free up space if INITIAL is set larger than the compressed data


SAP: Analyze Tablespace Growth

Product: SAP
Version: 4.x to 2010
Transaction Code: DB02
Type of database: Oracle 8.0 and above

Use this transaction (DB02) to analyze tablespace growth. It supports raw devices and Oracle ASM managed devices as well.

However, for raw devices, the size of the tablespace is fixed, so it can only determine the % used within the allocated space.

For beginners who do not understand Oracle tablespaces: a tablespace is the visible usable space from SAP and the database, containing the space logically assigned (the total of all physical files). Underneath, it consists of 1 or multiple files with physical space allocation, with the extension dbf. DBAs generally assign a few MB to GB during tablespace creation, and allow the files to auto grow (auto extend) when they hit the pre-allocated space. It is possible NOT to allow them to grow, regardless of how much space is left on the drive. Raw devices and ASM are more complicated, so I won't explain them here.

You must navigate to the Space - Tablespace folder in order to analyze tablespace usage. SAP keeps a 30 day history in the "Detailed Analysis" folder.


In order to determine the growth of each tablespace, the dbf data file has to:
1. be shrunk (or allocated) to its minimum size, with as little free space as possible
2. have minimum tablespace free space fragmentation (fragmented free space within the dbf file)
3. If there is a lot of free space fragmentation, then the tablespace needs to be re-organized, which is time consuming and requires downtime. For partitioned tables, there is less downtime, but in all cases there will certainly be a performance impact, especially for a data warehouse SAP BW

2012-01-31

Smaller Oracle Exadata...... Oracle Database Appliance

http://www.oracle.com/us/products/database/database-appliance/index.html

I read an interesting product on the Pythian homepage today, which is the Oracle Database Appliance.

To briefly describe what it is... it is the smallest component broken down from Oracle Exadata.

This means the motherboard, CPU, RAM, disk, network card, Linux, Oracle database software, kernel, etc. are pre-installed. Since Exadata is very expensive, even in the quarter-rack model, Oracle is targeting SMB customers who like the simplification (as DBA + SA labor fees could be more expensive than the hardware) and the short provisioning offered by Exadata.

Technical specification:
  1. 4U rack-mountable chassis (must use an Oracle rack, which is deeper and more expensive, not a standard rack)
  2. Contains 2 physical computers/servers. I will use "node" to represent each server
  3. Two 6-core Intel Xeon X5675 CPUs (Oracle often sells with the most current model every few months) per node (total 4 CPUs). Cores can be deactivated using Oracle Appliance Manager to control Oracle Database licensing. Minimum is 2 enabled cores per node
  4. 96 GB RAM per node (total 192 GB) - using 12 * 8 GB memory modules
  5. 2 internal gigabit Ethernet ports. This is ready for clustering, especially Oracle RAC, or Linux OS clustering. Mainly designed to be used as the interconnect for cluster purposes
  6. 73 GB SSD disk
  7. 12 TB of storage
  8. 2 GbE onboard network ports per node (total 4 ports)
  9. 2 USB 2.0 ports per node at the rear
  10. Internal dual-port SAS2
  11. PCI dual-port SAS2
  12. One quad-port GbE network card
  13. One dual-port 10 GbE network card
  14. 20 slots for 3.5" 600 GB 15,000 rpm SAS disks (Oracle adjusts the size about yearly to the largest supported model)
  15. 4 slots of 3.5" 73 GB SAS SSD drives (292 GB SSD total)
  16. Two 2.5" 500 GB 7200 rpm SATA disks (pre-configured as mirrored root disk)
  17. One internal 4 GB USB thumb drive per node (total 8 GB)
So it has 8 usable network ports. Now the software part:
  1. Oracle Linux Release 5.5
  2. Appliance Manager
  3. Choice of database (extra fee): (a) Database Enterprise Edition 11.2.0.2 (b) Oracle RAC (c) Oracle RAC One Node (d) any other Oracle Database options (like the performance pack)
  4. License (extra fee, charged per core, not per CPU): 2, 4, 6, 8, 10, 12 per node. Oracle requires both nodes to have the same number of cores
Cons:
  1. Licenses need to be purchased for the pair of nodes
  2. Both server nodes need to upgrade to the same number of CPU cores due to licensing
  3. Can't upgrade to faster SAS fiber disks. Needs the purchase of an Exadata quarter rack for this
  4. Only has dual-redundant power supplies instead of 3
  5. 72.6 kg per box. Very heavy
  6. All hardware must be purchased from Oracle to be supported, including hard disks, SSD, NIC, CPU
  7. Oracle Linux is always very slow in supporting current hardware and drivers
  8. Not for small companies, as it is still expensive to get the cheapest Oracle Enterprise license. Hardware cost is US$50k, plus Oracle license US$47k. Minimum cost is US$97k
  9. Minimal documentation from Oracle (http://download.oracle.com/docs/cd/E22693_01/index.htm), which is based on the Sun Fire X4370 M2
  10. Does not have the Exadata Smart Scan feature
  11. Does not have Exadata Smart Flash Cache
  12. Does not have Exadata Hybrid Columnar Compression
From a business standpoint, following are the cost benefits:
  1. Linux pre-installed and pre-tuned for Oracle Database 11g R2
  2. Oracle Database (single instance or RAC) software pre-installed (extra money)
  3. If RAC, then the Oracle database cluster is pre-installed
  4. Single vendor for OS, hardware, software
  5. Option to get Pythian outsourced support for the whole box, especially the database (saves DBA salary)
  6. Saving of at least 2 weeks' time to install Linux and the Oracle database. Very short time to get it up: within a couple of hours, or less if you know Exadata, or have used it before
  7. Simplified patching. Oracle is aiming for one-button patching for the entire server box
  8. Pythian offers a cheap Oracle database upgrade path from Oracle 7 to 11g
Reference:
www.Pythian.com - remote DBA outsource service, residing in Ottawa, Canada (other offices spread across the world)

As of 2011-09-23, the Intel Xeon E7-8870 2.4 GHz CPU has 10 cores with hyperthreading (20 logical CPUs). So when Oracle offers this model, even if only 1 CPU is installed per node, the customer has to buy a 10 core CPU license per node (20 core licenses for both nodes).

It would be cheaper to build a similar machine with minimum redundancy from Dell, HP, or IBM, by adding the following:
  1. Hire a DBA, and a UNIX admin
  2. Buy fast SSDs with a larger size (600 GB per drive)
  3. Create the database on the SSD drives
  4. Use ASM for the SSD drives (tapping its software RAID and recovery features)
  5. Design the application to archive historical data to archive tables, kept in a different tablespace (dbf). Store those dbf files on regular hard disks
  6. If you want to save the developer or manual labor cost in #5, purchase the Oracle Partitioning option. Configure the tables with partitioning. Store data older than one month in a tablespace which resides on regular hard disks
  7. Set up another identical box as backup. If the primary server goes down, switch to the backup server (power on), and apply all archive logs to catch up
  8. Purchase Oracle Advanced Compression to perform real time compression on data, which indirectly extends SSD life span
  9. Buy the fastest 10-core Xeon CPU
I have seen several high availability (HA) setups: some seldom fail, some fail over too sensitively, and some fail too often. Depending on hardware reliability, environment (dust, heat), and luck, HA may not necessarily be required, and a cold standby machine with 1-2 hr of downtime can be considered.

Dell's PowerVault MD3220 supports 24 2.5" drive slots and an SSD size of 149 GB. With 24 SSD drives, the total raw size becomes 3,576 GB (3.49 TB), or 138 GB formatted per drive (3.33 TB usable) without data protection. Cost is US$94,188. Or reduce the number of SSDs, which cuts US$3,507 per drive (including a $3 drive cover for the empty slots).

Known issues with the Oracle Appliance:
1. During BIOS POST diagnostics, if the server is rebooted before it starts video initialization, it will complain about a DIMM memory ECC error. You need to run the command "set /SYS/MB/Px/Dy/ clear_fault_action = true", where Px and Dy are the last digits of the hex code
2. If a Sun quad-gigabit ethernet card is installed into PCI slot 6, the system does not boot up
3. USB keyboard and mouse will freeze under heavy load

If you run Oracle on Windows 64-bit, then there is the SuperSpeed SuperCache product, which offers a RAM disk, disk mirroring to RAM, and disk cache. These 3 features can immediately increase database throughput by up to 500x (as they run at RAM speed). This could be something to consider for the redo log, archive destination, and the TEMP and UNDOTBS tablespaces. Or simply use the disk cache (SuperCache) to improve repeating read/write I/O. This product uses physical RAM, so the Oracle database SGA may need to shrink, but I believe it will certainly be faster than the database buffer pool.
