Product: Oracle Database
Version: 10.x - 11.2 or later
Many developers and junior DBAs may overlook the autoextend default configuration of an Oracle database when they create a tablespace or data file.
It becomes a critical performance bottleneck when a user needs to import data from a dump file.
Oracle's default autoextend increment for a data file (dbf) is 64 KB (it depends on the database block size), which will slow down a database import, or any table/index population, whenever the data file needs to extend and grow.
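To check whether an existing data file is affected, its current autoextend settings can be queried first. A minimal sketch against DBA_DATA_FILES (INCREMENT_BY is reported in database blocks, so it is converted to KB here); the &tablespace_name substitution variable is just a placeholder:
-- Current autoextend increment per data file, converted from blocks to KB
select file_name, autoextensible,
       increment_by * (select to_number(value) from v$parameter where name = 'db_block_size') / 1024 as next_kb
from dba_data_files
where tablespace_name = upper('&tablespace_name');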
Use the following query and look for the following 2 wait events to detect this bottleneck:
1. control file sequential read
2. Data file init write
select sid, event, total_waits, time_waited from v$session_event
where sid in (select sid from v$session where username='&username');
Replace &username with the user that you would like to monitor.
Solution
Use the following command to make the data file autoextend by 1 MB at a time:
alter database datafile '&filename' autoextend on next 1m;
Use the following command to make the data file autoextend by 5 MB at a time:
alter database datafile '&filename' autoextend on next 5m;
Use the following command to make the data file autoextend by 300 MB at a time:
alter database datafile '&filename' autoextend on next 300m;
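The small default can also be avoided up front by specifying the increment when the tablespace is created. A minimal sketch; the tablespace name, file path, and sizes below are placeholders, not values from this post:
-- Create the tablespace with a sensible autoextend increment from the start
create tablespace import_data
  datafile '/u01/oradata/ORCL/import_data01.dbf' size 1g
  autoextend on next 100m maxsize unlimited;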
2008-04-18
Oracle Database Import: Strategies for optimum performance
The following options and tuning steps can make an Oracle dump file import as fast as possible, regardless of whether it uses imp, impdp, sqlldr, or another utility. Most of them, however, apply to imp and impdp.
- Import tables separately from indexes/constraints
- Export and import simultaneously through a named pipe (mknod, exp file=pipe_file direct=y, imp file=pipe_file buffer=)
- Create a RAM disk and mount it to hold the following files: online redo logs, undo tablespace, temp tablespace, and the system data file
- Use transportable tablespace import with exp or expdp
- Run multiple imp or impdp processes in parallel
- Tune the SGA, PGA, and sort area according to the memory constraint (see the SGA/PGA sketch after this list)
- Ensure the import is not eating into swap space; otherwise terminate other processes, reduce the SGA, or keep the swap mount point as small as possible (the OS will then not be able to store a UNIX crash dump)
- Ensure NFS throughput is higher than the combined throughput of all parallel exp/expdp sessions; it must be able to handle every concurrent export. Approaches to increasing network (copper or fiber) throughput include port aggregation, multiple network cards, Gb trunks, and compressed node special files (roughly a 5x gain for data)
- Ensure the root disk has near-zero activity, especially if swap is located on the same physical disk as the root disk
- Use "kill -SIGTSTP PID" and "kill -SIGCONT PID" to suspend and resume exp/imp process if require so you can identify contention in V$SESSION_WAITS while resolving it in real time without terminating active process
- Do not set autoextend on for data files; size them manually (see the manual sizing sketch after this list)
- Increase the online redo log size so that logs switch every 5 - 15 minutes (benchmark the throughput difference to decide the size), with 1 member per group (fewer mount points needed) and more groups (no contention); see the redo log sketch after this list
- Disable background jobs, for both the database (dbms_job) and UNIX (cron); see the job queue sketch after this list
- Rehearse the whole procedure about 6 times to identify further scenarios, constraints, bottlenecks, and tuning opportunities
- Tune the NFS configuration with a larger block size, or check the documentation for NFS tuning; this can increase throughput by 80% compared to the initial setting
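SGA/PGA sketch: a minimal example of the memory settings mentioned above. The 4 GB and 2 GB values are placeholders only and must be sized to the machine's memory constraint; SGA_TARGET and PGA_AGGREGATE_TARGET are standard 10g/11g parameters, and the example assumes an spfile is in use.
-- Placeholder sizes; adjust to the available physical memory
-- (sga_max_size must be at least sga_target; the spfile change takes effect after restart)
alter system set sga_target = 4g scope = spfile;
alter system set pga_aggregate_target = 2g scope = spfile;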
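V$SESSION_WAIT sketch: while an exp/imp process is suspended with SIGTSTP, the current waits of the remaining sessions can be checked with a query along these lines; &username is a substitution variable in the same style as the monitoring query in the autoextend post above.
-- Current wait event of each session owned by the importing user
select sid, event, state, seconds_in_wait
from v$session_wait
where sid in (select sid from v$session where username = '&username');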
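Manual sizing sketch: pre-size the data file before the import and turn autoextend off, instead of letting it grow during the load. The file name and the 10 GB size are placeholders.
-- Pre-size the data file up front, then disable autoextend for the import
alter database datafile '/u01/oradata/ORCL/users01.dbf' resize 10g;
alter database datafile '/u01/oradata/ORCL/users01.dbf' autoextend off;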
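Redo log sketch: add larger single-member groups so that log switches happen every 5 - 15 minutes during the load. Group numbers, paths, and the 1 GB size are placeholders to be benchmarked.
-- Larger single-member redo log groups (placeholder paths and sizes)
alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 1g;
alter database add logfile group 5 ('/u01/oradata/ORCL/redo05.log') size 1g;
alter database add logfile group 6 ('/u01/oradata/ORCL/redo06.log') size 1g;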
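Job queue sketch: dbms_job execution can be paused for the duration of the import by setting the job queue processes to zero (cron jobs are disabled separately at the OS level). Remember to restore the original value afterwards.
-- Stop dbms_job execution during the import; restore the original value when done
alter system set job_queue_processes = 0 scope = memory;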