
2012-07-24

SAP: ECC or Solution Manager Installation - sapinst shows blank

Product: SAP
Module: X Windows emulation with any SAP module which uses sapinst to install

It is found that sapinst displays a blank window in certain X Windows emulators.

This can happen with Xmanager, MobaXterm, Cygwin, and Hummingbird.

Solution 1: MobaXterm
Only applicable to MobaXterm (the free trial version works as well)

1. Click on the Settings icon and change "X11 server display mode" to "Windowed mode with Fvwm"
2. Restart X11
3. Ensure X11 is running

See the following screenshots:




Solution 2

Some people have found that resizing the window will make it display its content.

Solution 3

Use VNC (vncserver on the server end, and vncviewer on the PC end) to run sapinst (as root).

The VNC software can be uploaded to the server under a regular user account, so it does not need to be installed by, or owned by, root. It does not need to reside in /usr/bin/; it can be in /home/scchen/vnc/, for example.

If there is a firewall, then port 5900 needs to be opened. If you do not want to ask the network team to open it, or it is too much work, then identify a network port which is already open, such as an Oracle listener port (1521 - 1590), the SAP GUI port, or the SAP ICM port 80 or 8080.

Then, when launching the vncserver command, specify one of those ports, e.g. 1521 or 8080.

Please note that if you use a port lower than 1024 (a privileged port), then vncserver must be run as root. This is a standard security limitation on all operating systems, especially UNIX.
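
A minimal sketch of this VNC approach, assuming TigerVNC (its vncserver wrapper passes -rfbport through to Xvnc); the server hostname, directory, and the choice of port 8080 are examples only:

# On the SAP server, as root, start a VNC desktop on display :1 listening on the already-open port 8080
/home/scchen/vnc/vncserver :1 -rfbport 8080 -geometry 1280x1024

# On the PC, connect the viewer to that raw port (host::port syntax)
vncviewer sapserver::8080

# Inside the VNC desktop, point sapinst at the VNC display and start it
export DISPLAY=:1
./sapinst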

Solution 4
Use the commercial version of Reflection X.

Solution 5
Install a free Linux on your PC, and set DISPLAY=<your PC's IP>:0.0 on the remote sapinst server. The local Linux needs to allow the remote X display, enabled by typing "xhost +".
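
A minimal sketch of this setup; the PC address 192.168.1.50 and the server name are hypothetical:

# On the local Linux desktop: allow the SAP server to open windows here
xhost +sapserver        # or "xhost +" to allow any host, as noted above

# On the remote sapinst server: point X output back to the PC and start sapinst
export DISPLAY=192.168.1.50:0.0
./sapinst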

Solution 6
Install the free Oracle Solaris, and follow the steps in Solution 5 above.


Side Notes

Don't listen to anyone who tells you the following:
1. Install the Java JRE
2. Install the Java JDK
3. Your Java is not 32-bit or 64-bit
4. The installation media is corrupted
5. No privilege to execute sapinst (the program is already running, so the privilege is there)
6. Must run as root (root is only required when the installation starts; displaying the GUI does not need root)
7. Insufficient free space in /tmp (if you find /tmp is full, then clean it up and run again, but it is unlikely)
8. Use ssh only (I assume ssh with X11 forwarding is correctly configured, since you can already display the blank screen)

As long as you can run xclock, xeyes, or any X application, your setup is correct. The sapinst binary won't display a blank window if it is corrupted, because it would not even be able to reach that section of the code.
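
A quick sanity check before launching sapinst; the hostname is hypothetical and this assumes ssh with X11 forwarding:

ssh -X root@sapserver     # log in with X11 forwarding
echo $DISPLAY             # should show something like localhost:10.0
xclock &                  # if the clock appears on your PC, the X display works
./sapinst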

2012-07-14

SAP C_TADM70_40 Version 064 OS/DB Migration Questions

Product: SAP
Module: OS/DB Migration
Exam Name: SAP System: Operating System and Database Migration
Exam Code: C_TADM70_40
Last Updated: 2013-12-30

Due to low appreciation of this post, I am removing the content.

Simple questions for people who are interested in the advanced OS/DB migration exam and certification.

1) Which of the following statements is true regarding incremental migration?

- Must have enough space for logging tables in the source database
- Large tables can be exported online, but remaining tables are exported offline
- Database triggers will be created in the source database

2) What is used to restart the export?

- For R3load <= 4.6D: the log and TOC files
- For R3load >= 6.1: TSK files (the TOC file is read only to find the last write position, and is not required if there is no need to resume from the last position)
- For R3load <= 4.6D, the -r option is required, but if R3setup calls R3load, it will automatically call it with the -r option
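
A hedged sketch of restarting a failed export with the -r option mentioned above; the package name and the -e/-l flags reflect typical R3load usage, but check "R3load -h" on your release before relying on them:

R3load -e SAPAPPL1.cmd -l SAPAPPL1.log -r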

3) What must the release of R3load be for the request of the migration key, in a 4.6D system?

- R3load must have the same kernel release as the SAP kernel release. Check with "R3load -version"

4) Which programs create the CMD files?

- R3setup, sapinst, MIGMON

5) Which of the following statements is true about TABART?

- Each table is assigned to one TABART
- TABART is maintained in the ABAP Dictionary
- Customer data must be assigned TABART USER

6) To verify whether the existing objects in the *.STR files exist in the database, which check must be done?

- T-code SE11 to check database object (DBA_TABLES) consistency and runtime object (NAMETAB) consistency

7) What is asked in the Project Audit Session?

- Migration consultant name
- Time schedule for (1) test migration (2) final migration
- Target date for OS/DB Migration Check Analysis
- Target date for OS/DB Migration Check Verification
- Technical feasibility, e.g. hardware, OS, SAP version, DB version

It does not ask the following (these belong to the Migration Check Analysis):
- Maximum system downtime
- Order of system migration
- System landscape
- SAP products installed
- Source database size
- Largest tables/indexes
- Code page
- Inconsistent DDIC and DB tables
- External faces & interfaces
- Free storage
- Media to transfer dump files

8) What is the package.TSK file?

- Introduced in version 6.10
- Created by R3load from the package.STR files
- Easier to restart the export/import by changing the entry status to err or xeq
- It does not contain R3load execution parameters
- If the export/import is successful, its status will be changed to ok
- Manually change the status to ign to force R3load to skip the task
- The first column is the data type, e.g. T, D, P, I, V
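
A hypothetical excerpt of a [PACKAGE].TSK file to illustrate the entries above (object names and layout are examples only):

T MARA    C ok     # table created successfully
D MARA    I err    # data import failed; left as "err", R3load retries it on restart
P MARA~0  C xeq    # primary key creation still queued
# change "err"/"xeq" to "ign" to force R3load to skip that task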


9) What is contained in the CMD files?

- tpl
- str
- toc, ext
- tsk (R3load >= 6.10)
- multiple dump directories
- block size
- max file size

10) In which situation is an OS/DB migration consultant required?

Performing a Heterogeneous System Copy in every environment, e.g. DEV, TST, QAS, PROD


11) In the package.TSK file, if the status of an entry is changed from "ok" to "err", what will happen when the import is restarted with R3load?

- R3load will start from the next (first) entry with status xeq or err
- Delete/drop objects for tasks with status err
- For <= 4.6D, the -r option is required for the restart

12) Which of the following require a Heterogeneous System Copy?

- AIX DB2 to HP-UX Oracle
- Solaris Oracle to AIX Oracle
- Solaris Informix to AIX Oracle

13) Which program/table produces the DBSIZE.TPL or DBSIZE.XML?

- DBSIZE.TPL - R3setup before 6.10
- DBSIZE.XML - R3szchk since 6.10
- Database table DDLOAD, which stores the results of the table/index size calculation


14) The package.STR file can force R3load to order the export/import. Which of the following is true?

- Move the tab and ind entries to the top of the package.STR to load that data first
- No need to change the PACKAGE.EXT file
- If the PACKAGE.STR order is changed with multiple dump files, the import could throw an error, as R3load can't read previous dump files which have already been passed
- Package SAPNTAB.STR (if the Package Splitter is used), or SAPSDIC.STR (if the Package Splitter is not used), will always be the first STR to load, to ensure the Nametab tables are imported in the right order
- Package SAP0000.STR (ABAP Load) should not be imported; use T-code SGEN (>= 4.6B) or SAMT (< 4.6B), or report RDDGENLD (< 4.6B), to regenerate/recompile all ABAP. The ABAP Load is OS dependent, and it will not be deleted when a new SGEN run takes place


15) Which program produces the EXT files in 6.X migrations?

- R3szchk (>= 4.5A)
- R3ldctl (< 4.5A)

16) Which program produces the DDL[DBS].TPL?

- Program R3ldctl


17) What are the effects of the R3load -merge_bck option?

- The import starts automatically after the [PACKAGE].TSK merge succeeds
- Changes the status of all xeq entries to err
- During re-import, the table/index will be dropped/deleted before creating it
- [PACKAGE].log will show a warning message during re-import due to the drop error, but it can be ignored
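
A hedged sketch of the merge itself; the package name is an example and the exact invocation can differ by release, so verify with "R3load -h":

R3load -merge_bck SAPAPPL1.TSK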

18) After a POWER FAILURE in an SAP 4.X system, how to resume the load?

Delete the TOC, DUMP and LOG files of the package that was interrupted at crash time

19) Under what scenario is a migrated heterogeneous system not supported by SAP?

- T-code ICNV has been started
- T-code PREPARE has been started
- A 3rd-party database tool is used, e.g. RMAN's command CONVERT DATABASE ON TARGET PLATFORM...


21) In which directory is the file DDL[DBS].TPL created?

- Import: installation directory
- Export: [dump]/DB directory

22) Which program creates the TSK file?

- R3load (>= 6.10)

23) What can MIGMON do?

<= NW 04
- Calls R3load to create the TSK file (export/import), but won't overwrite it if it exists
- Calls R3load to create the CMD file (export/import), but won't overwrite it if it exists
- Calls R3load to export/import dump files
- Copies dump files with the rcp command
- Transfers packages
- Signals the import server that a package is ready
- User error notification
- Watches the R3load export status
- Calls R3load to import as soon as a package is available (import server mode)
- Defines the number of parallel R3load processes (export/import)
- Configures the trace level

>= NW 04S
- Calls sapinst to perform post-migration tasks, e.g. migcheck, update statistics, dipgntab, RFC exec task
- Controls the import order


24) If a table is moved to a different tablespace with a database tool without updating TABART, which of the following is true?

A) The export R3load is interrupted because it does not find the correct tablespace.
B) R3szchk is able to calculate the size without any problem

25) What files need to be transferred to the target system?

- /DATA/PACKAGE. dump files
- /DATA/*.STR
- /DATA/*.TOC
- /DB//*.EXT
- /DB//DBSIZE.*
- /DB//*.SQL, if they exist
- /DB/DDL[DBS].TPL
- /LABEL.ASC
- Does not need to transfer *.CMD, *.TSK, *.LOG (in )

26) Some tables are in the ABAP Dictionary, but not in the database. Which of the following is true?



27) What does the program DIPGNTAB do?

- After the import is completed, sapinst/R3setup will call it to update the active NAMETAB (ABAP Dictionary/DDIC) from the database dictionary
- Logs to the file dipgntab.log

28) Which of the following about R3ldctl is true?

- Database and platform specific
- It is NOT SAP kernel specific
- Creates STR files
- Creates the SAPVIEW.STR file
- Creates the DDL.TPL file
- Has SAP built-in knowledge about specific tables

29) How to detect a wrong migration key?

- /migkey.log created by "R3load -K" in the target system

- sapinst reports in a window

30) If [PACKAGE].EXT does not exist during the import, what will happen?

31) If the export dump directory is full, what is the step to resolve it?


31) If an entry is moved to the top of the STR file, what error will occur?

32) After an interrupted import on v4.6B, how to resume the import?

30) If R3load failed to create an index because the temporary tablespace in the database is full, what will happen when R3load is restarted to resume the import?

The following will happen:
1. R3load reads SAPOOL.TSK (>= 6.10) or SAPOOL.LOG (<= 4.6D) and finds the entry for the failed index with status "err"
2. R3load will truncate the table for the respective index. It will use the truncate statement in DDL[DBS].TPL under the "trcdat:" section. If this section is missing, then R3load will use a DELETE statement, which will be very slow and will use a lot of online redo log (Oracle)
3. R3load will create the index

31) Content of .nnn

1. Its content is compressed
2. Since R3load 4.5A, it contains a file checksum at the block level
3. For R3load above 4.5A, it will compare the source system's OS and DB. If they differ, then a migration key is necessary
4. Its maximum size is defined in [PACKAGE].cmd
5. The file format is not platform specific, so it is compatible with any OS (HP-UX, Linux, AIX)
6. Since 4.5A, it contains export system info, i.e. the database and OS name, which is stored in the 1st block (also called the header block, which is not available prior to R3load 4.5A)


The following topics are not in the test:
1. JLOAD or Java's data
2. Unicode conversion



Please donate CAD$10 to show appreciation for my effort in sharing my exam experience with everyone. I can share the content on demand. It took great effort to put up this article and to provide you the valuable areas to focus on prior to the exam.


2012-05-31

SAP: Making Use of the Oracle Advanced Compression Option

Product: SAP, Oracle
Version: SAP 6.40 onward, Oracle 11g Release 1 (11.1), BR*Tools 7.20 onward

When planning for table compression, it is important to research and be aware of the following:

1. Oracle database compression restriction
2. Do not compress tables with frequent update
3. Do not compress tables that needs high performance throughput in INSERT
4. Do not compress tables that needs high performance throughput in UPDATE
5. Low space saving on high cardinality (less duplicate data). Uses Oracle Advanced Advisor PL/SQL to perform an estimate
6. Tables with more than 255 columns not supported
7. Tables with LONG columns not supported. Uses SAP BRSPACE (option long2lob) to migrate to LOB column type (recommend SecureFile LOB to use additional compression)
8. DELETE operation will runs 20% slower. If performance degrade more than 100%, search SAP for Oracle database patch with this known bug
9. BLOB is not compressed. Needs to convert to SecureFile in order to compress

SAP ships with Oracle Enterprise Edition, which is licensed to use this feature. Anyone not using Enterprise Edition will find that compression does not work. On the other hand, for anyone who has an Enterprise Edition not bundled with SAP, an extra license fee is required to use it.

Normally people use the SAP brspace command to compress tables, which will SKIP these tables:
1. SAP pool tables ATAB, UTAB. This is due to reason #2
2. SAP cluster tables CDCLS, RFBLG. Due to reason #2
3. INDX-type tables BALDAT, SOC3. Due to reason #3
4. ABAP source and load tables REPOSRC and REPOLOAD. Due to reason #4
5. Update tables VBHDR, VBDATA, VBMOD, VBERROR. Due to reason #3, #4
6. RFC tables ARFCSSTATE, ARFCSDATA, ARFCRSTATE, TRFCQDATA, TRFCQIN, TRFCQOUT, TRFCQSTATE, QRFCTRACE, and QRFCLOG. Due to reason #3, #4

In an ECC 6 system, there are 949 objects that will be excluded when using the Oracle Advanced Compression Option.

The following are my recommendations when you decide to use the Oracle Advanced Compression Option for SAP:

1. Convert LONG column types to LOB
brspace -f tbreorg -a long2lob -c ctablob -s PSAPOLD -t PSAPNEW -p 2
The above converts those tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW

2. Convert to the new SecureFile LOB column type and compress
brspace -f tbreorg -a lob2lob -c ctablob -s PSAPOLD -t PSAPNEW
The above converts those tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW


3. Convert the rest of the tables
brspace -f tbreorg -a reorg -c ctab -s PSAPOLD -t PSAPNEW -p 8
The above converts those tables and moves them from tablespace PSAPOLD to the new tablespace PSAPNEW, compressing 8 tables in parallel

Example screenshot:



Using the Oracle Advanced Compression advisor:
exec DBMS_COMP_ADVISOR.GetRatio ('SAPPRD', 'TEST_TABLE' , 'OLTP' , 10)

Note: For BR*Tools 7.x with Oracle 11g on UNIX (not needed on AIX), you need to create an additional soft link to make it work, unless the SAP kernel is version 7.20_EXT with BR*Tools 7.20:

$ su - oracle
$ ln -s $ORACLE_HOME/lib/libnz11.so $ORACLE_HOME/lib/libnz10.so

Note: The following features are not part of the Oracle Advanced Compression Option:

1. Regular table compression
2. RMAN compression
3. Index compression

Note:
Use the option "-i " if you want to use a different tablespace to store the indexes. I do recommend this so that a corrupted index tablespace or dbf file can be re-created from scratch.

Precautions
1. Verify there is no UNUSABLE index prior to compression (see the example queries after this list)
2. Verify there is no UNUSABLE index partition prior to compression
3. Verify there is no Oracle SYS object with status INVALID
4. PSAPTEMP has sufficient pre-allocated space to hold the largest table/index; do not rely on autoextend. If compressing in parallel, increase it to the total size (for performance reasons)
5. The online redo logs are properly sized, or temporarily add more. If compressing in parallel, increase them further (for performance reasons)
6. Modify the Oracle initialization file (spfile.ora) to have at least 1 GB for the PGA (parameter PGA_AGGREGATE_TARGET), for performance reasons
7. Increase DB_CACHE_SIZE to 1 GB in the Oracle initialization file
8. If not using automatic segment space management, verify the table and index initial extents (INITIAL) from DBA_SEGMENTS to ensure they will not over-allocate disk space. The compression will not free up the space if the initial extent is set larger than the compressed data
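
Example checks for precautions 1-3, using standard Oracle dictionary views (run in sqlplus with a DBA account):

SELECT owner, index_name FROM dba_indexes WHERE status = 'UNUSABLE';
SELECT index_owner, index_name, partition_name FROM dba_ind_partitions WHERE status = 'UNUSABLE';
SELECT object_name, object_type FROM dba_objects WHERE owner = 'SYS' AND status = 'INVALID';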

Please use the following PayPal donation link if my post helped.

SAP: Analyze Tablespace Growth

Product: SAP
Version: 4.x to 2010
Transaction Code: DB02
Type of database: Oracle 8.0 and above

Use this transaction (DB02) to analyze tablespace growth. It supports raw devices and Oracle ASM managed devices as well.

However, for raw devices, the size of the tablespace is fixed, so DB02 can only determine the percentage used within the space allocated.

For beginners who do not understand Oracle tablespaces: a tablespace is the usable space visible from SAP and the database, containing the space logically assigned (the total of all physical files). Underneath, it consists of one or more files with physical space allocation, with the extension .dbf. DBAs generally assign a few MB to GB during tablespace creation and allow the files to auto grow (autoextend) when they hit the pre-allocated space. It is possible NOT to allow them to grow, regardless of how much space is left on the drive. Raw devices and ASM are more complicated, so I won't explain them here.
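
If you want to cross-check DB02 outside of SAP, here is a minimal sketch using standard Oracle dictionary views (run in sqlplus with a DBA account):

SELECT df.tablespace_name,
       ROUND(df.alloc_mb)        AS allocated_mb,
       ROUND(NVL(fs.free_mb, 0)) AS free_mb
FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 alloc_mb
       FROM dba_data_files GROUP BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, SUM(bytes)/1024/1024 free_mb
       FROM dba_free_space GROUP BY tablespace_name) fs
  ON fs.tablespace_name = df.tablespace_name
ORDER BY df.tablespace_name;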

You must navigate to the Space - Tablespaces folder in order to analyze the tablespace usage. SAP keeps a 30-day history in the "Detailed Analysis" folder.


In order to determine the growth of each tablespace, the dbf data file has to:
1. be shrunk (or allocated) to its minimum size, with as little free space as possible
2. have minimal tablespace free-space fragmentation (fragmented free space within the dbf file)
3. If there is a lot of free-space fragmentation, then the tablespace needs to be reorganized, which is time consuming and requires downtime. For partitioned tables there is less downtime, but in all cases there will certainly be a performance impact, especially for a data warehouse SAP BW. A shrink example is shown after this list
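
A hypothetical shrink example (the file path and size are examples only); Oracle raises ORA-03297 if used data still sits above the requested size:

ALTER DATABASE DATAFILE '/oracle/PRD/sapdata1/prd_1/prd.data1' RESIZE 2000M;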

2011-08-31

SAP BarTender (Zebra printer) Label Printer Configuration

Software: SAP
Printer software: BarTender
Printer name: Zebra Label Printer

A lot of SAP administrators have no clue about enterprise label printer setup, which commonly uses BarTender from Seagull Scientific.

Label printers are generally too simple, so BarTender offers end users extra control to minimize wasting expensive labels (tube labels, patient wristband labels).

However, this advanced software confuses many SAP administrators, who think it is just a normal printer and use the printer driver provided by Zebra (for example).

If you have ever heard of Seagull Scientific BarTender, always use the printer driver provided by Seagull Scientific, which contains an extra tab in the printer properties showing the BarTender logo.

Go to the following homepage and download the BarTender-aware printer driver:

http://www.seagullscientific.com/aspx/free-windows-printer-drivers-download.aspx

There are 2,600 printer drivers provided by them, so only download and install those which are applicable.

BarTender has a license server to track the number of printers and users in use. If you are doing high availability, remember to notify Seagull about licensing, and deploy the registration number to multiple license servers. In each BarTender client (any PC to which the BarTender Zebra label printer driver was added), configure it to point to the virtual IP of the BarTender license server.

One trick is that the BarTender printer driver may resolve strange printer behavior which you may encounter with any software, such as Crystal Reports, Siebel, SAP, Oracle APEX, etc. As it is re-written by Seagull Scientific, its functionality and behavior are different from the drivers supplied by the manufacturer, Linux, and Microsoft. Try it as a last resort if you run into a printer integration problem.

Please use the following PayPal donation link if my post helped.

2011-08-12

Superfast and Cheap Database Performance Tuning

Database performance tuning... tried all of these?

Index tuning, application tuning, query re-write, partitioning, sub-partitioning, upgrade storage, faster CPU, multi-threaded programming, add more CPU, faster fiber channel controller, faster SATA3 controller

So what's the next feasible approach... an Oracle Exadata storage server? Too expensive. Need something cheaper? Let's try a few Solid State Disk (SSD) devices and combine them with the database partitioning feature.

Regardless of the kind of application, e.g. SAP ERP, SAP BusinessObjects, Oracle Database, Oracle WebLogic, Genesys suites, Oracle Hyperion reports, you will always encounter situations where different tuning strategies are required. In terms of best ROI (return on investment), the following are common considerations:

1. Availability of product and database expertise. Even if available, cost is the next consideration. Often such experts are costly
2. Timeline. They may not understand the overall hardware, application, custom design, and business functions. It takes a minimum of a week for them to pick these up
3. Workflow. Lots of time is spent in big corporations going through the workflow to provision a major hardware change or upgrade
4. Tuning time. Although a DBA may suggest various tuning options, there are cases where the DBA can't tell precisely which option will work best
5. User testing time. After each tuning round, often QA or live users will test it. Time and cost are involved in getting them engaged, especially overtime
6. Management cost. At least one manager needs to be involved to coordinate meetings, discussions, management updates, etc. Another cost to consider
7. Scalability. When the product, database, and servers are maxed out, yet limited by regular storage capacity, SSD technology is the last thing to consider. Often, everyone you speak to will propose running everything on SSD drives. This is a very costly option

This is another tuning strategy that fits in between a hardware upgrade and database tuning.

Most databases have a partitioning feature (MS SQL Server, DB2, MySQL), and some have sub-partitioning (Oracle). The idea is to buy a small number of SSD drives to keep the most frequently accessed data.

I will use a Genesys CTI application with an Oracle database in this article. If there is any interest in the other products I indicated above, I can expand it to cover other applications. There are too many applications (IIS, web portal, reporting, ETL, ERP) for me to cover, so I would like to use one as an example.

The design is:
1. Create at least 2 partitions. For Oracle, sub-partitions can be used for fine-grained space allocation
2. One partition keeps frequently used data; another partition keeps data older than 7 days
3. It is recommended to have a pair of tablespaces for each major table in each database (see the sketch after this list)
4. For the databases in Table List #1 and #2, keep all the dbf files on the SSD drive
5. Partitions which keep current data will be created in a tablespace whose dbf files reside on the SSD drive
6. Partitions which keep old data will be created in a tablespace whose dbf files reside on a regular drive, e.g. SATA, SAS, SCSI, etc.
7. Create a weekly job which merges the current-data partition into the old partition
8. Create a new partition to keep current data, with indexes if applicable. This can be created 1 week or 1 month earlier. Note that it will take up initial extent space. For a data warehouse database, it could be 300 MB
9. Ensure a database backup with archive logs exists and is tested monthly. SSD drives will degrade depending on usage
10. To further improve recovery time, write the RMAN backup script to back up the tablespaces in the following sequence: SYSTEM, the tablespaces holding current data, then the rest of the tablespaces
11. To further improve recovery time, keep one copy of an RMAN compressed backup on disk. If you have Oracle Standard Edition, then use gzip to compress after the backup completes
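
A minimal sketch of the tablespace pairing in Oracle, assuming hypothetical tablespaces TS_SSD (dbf on the SSD drive) and TS_SATA (dbf on regular disk) and a hypothetical call log table:

CREATE TABLESPACE ts_ssd  DATAFILE '/u01/datamart/ts_ssd_01.dbf'  SIZE 2G;
CREATE TABLESPACE ts_sata DATAFILE '/u02/datamart/ts_sata_01.dbf' SIZE 20G;

CREATE TABLE call_log (
  call_id   NUMBER,
  call_time DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (call_time) (
  PARTITION p_old     VALUES LESS THAN (TO_DATE('2011-08-07','YYYY-MM-DD')) TABLESPACE ts_sata,
  PARTITION p_current VALUES LESS THAN (MAXVALUE)                           TABLESPACE ts_ssd
);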

Therefore, the application and reports will enjoy the following benefits:
1. System tables will always have fast performance
2. Day-to-day transactions will be very smooth
3. Intraday or 7-day reports will be available immediately
4. The data mart will be able to crunch 7 days of transactional data at 10x the speed
5. If the SSD is corrupted due to material aging after, say, 3 years, and database recovery is needed, it can be recovered from disk, which is very fast. Oracle allows restoring only the corrupted dbf files and the respective archive logs. Recovery is about 1 minute for a 2 GB file (see the RMAN sketch after this list)
6. Internal database engine transactions will be very smooth, which indirectly improves other applications' database performance
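
A minimal RMAN sketch for benefit #5, restoring a single corrupted dbf file while the rest of the database stays open (the file path is an example; this assumes archive log mode and a valid backup):

rman target /
RMAN> sql 'alter database datafile ''/u01/datamart/T_1_201133_01.dbf'' offline';
RMAN> restore datafile '/u01/datamart/T_1_201133_01.dbf';
RMAN> recover datafile '/u01/datamart/T_1_201133_01.dbf';
RMAN> sql 'alter database datafile ''/u01/datamart/T_1_201133_01.dbf'' online';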

Table List #1

MS SQL Server database

1. master
2. tempdb
3. config
4. datamart
5. any custom report database

Table List #2

Oracle database

1. datamart - selective tables
2. config - Keep everything on the SSD drive if possible; otherwise keep the USERS tablespace on SSD. If another tablespace is used to store Genesys data, then use that one
3. any custom report database
4. For each database, store:
4.1. Tablespace SYSTEM
4.2. Tablespace TEMP
4.3. Tablespace UNDOTBS
4.4. Online redo log
5. OCS - selective tables

For OCS Outbound Contact database,

A.
Create 2 partitions for each calling list. Use call_time to split the data between the 2 partitions, with the tablespace design as follows:

1. If call_time is null, store on the SSD drive
2. If call_time < 7 days, store on the SSD drive
3. Others are stored on regular disk

B.
Store gsw_donotcall_list in a tablespace which resides on the SSD drive. Partitioning is optional. If you need to partition, then use the TIME_STAMP column

C.
Store gsw_req_log in 2 partitions as well. Partition by the TIME_STAMP column

D.
If the OCS history file (.rep) is captured and loaded into a database table Calling_List_History (or any name), store it in 2 partitions. Partition by LOG_TIME
1. log_time within 7 days is stored on the SSD drive
2. Others are stored on regular disk

For CCA database, or DATAMART

E.
Keep the following tables in tablespaces residing on the SSD drive. No partitioning is needed:

  1. AGG_COLUMN
  2. BASIC_STAT
  3. CHUNK_LOAD_ERR_LOG
  4. CHUNK_LOG
  5. COMP_STAT
  6. COMP_STAT_CATEGORY
  7. COMP_TO_BASIC_STAT
  8. CONFIG_SERVER
  9. DM_PROPERTY
  10. ERROR_CHUNK
  11. FOLD_TEMP_TO_COMP
  12. FOLD_TO_COMP_STAT
  13. FOLDER_TEMPLATE
  14. OBJ_TO_LAYOUT
  15. OBJ_TO_OBJ
  16. OBJECT
  17. OUTCOME_AGG_COLUMN
  18. PENDING_AGG
  19. PURGING_RULES
  20. REP_TO_TAB
  21. REPORT_FOLDER
  22. REPORT_LAYOUT
  23. REPORT_TABLE
  24. REPORT_VIEW
  25. SEQUENCES
  26. SOURCE
  27. STAT_PARAM
  28. STAT_TO_PAR
  29. STATISTIC
  30. TAB_INFO_TYPE
  31. TIME_COLUMN
  32. TIME_FUN_PARAM
  33. TEMP_TFUN_PAR_VAL
  34. TIME_FUN_PARAM_VAL
  35. TIME_FUNCTION
  36. TIME_ZONE
  37. VIEW_AGG_COLUMN
  38. VIEW_TEMP_AGG_COL
  39. VIEW_TEMP_TIME_COL
  40. VIEW_TEMPLATE
  41. VIEW_TIME_COLUMN
  42. All O_nnn_OBJ_DIM
  43. All S_nnn_STAT_DIM


Partition the following tables into two: one partition stored on SSD, another on regular disk
1. LOGIN - by TIME, which is seconds since 1970-01-01
2. PURGING_LOG - by PURGE_START_TIME
3. QINFO - by STARTTIME
4. REP_REBUILD_LOG - by LAST_TIME_KEY
5. STATUS - by STARTTIME
6. R_nnn_STAT_RES - by TIMEKEY
7. T_nnn_TIME_DIM - by TIMEKEY

Create a database job or shell script to move current data into the 7-day-old partition, which resides on regular disk.

User coordination:
1. Inform users of the weekly maintenance window. Minimum 1 hour
2. Create an alert or e-mail trigger in this script to notify on failure
3. Ensure an archive log backup takes place immediately after the activity to free up the Fast Recovery Area (FRA)

Option 1: Technical steps
1. Create a control table which keeps track of partition name, table name, subpartition name, creation date, merge date, time_format, range1, range2, retention, is_current, is_archive
2. Based on the control table, determine the partition names which keep the current data (is_current=1) and the old data (is_archive=1)
3. ALTER TABLE T_1_TIME_DIM MERGE PARTITIONS T_1_201143, T_1_2008 INTO PARTITION T_1_2011 COMPRESS
4. ALTER TABLE four_seasons MODIFY PARTITION quarter_two REBUILD UNUSABLE LOCAL INDEXES. Skip this if global indexes are used
5. Dynamically add a new partition for the next period based on values from the control table. Syntax to add, based on week 33 of year 2011:

ALTER TABLE T_1_TIME_DIM ADD PARTITION T_1_201133 VALUES LESS THAN ('20110814')

6. Update the control table to indicate the partitions were merged, and set is_current to the new partition
7. This option can enable compressed tables and indexes, which will reduce the database size and save storage cost

Option 2: Technical steps
1. Don't merge partitions; instead, move the physical dbf files from the SSD drive to a regular disk drive
2. This is a much faster process because it does not need to copy the data through the database; it keeps the existing content of the dbf file
3. This approach will indirectly introduce 52 partitions per partitioned table per year
4. If housekeeping is preferred, to reduce managing so many partitions, then a quarterly merging activity can be scripted with similar logic as above
5. Prepare to move the partition that resides on SSD, with tablespace name T_1_201133 and dbf file T_1_201133_01.dbf:

alter system checkpoint;
alter system switch logfile;
alter system checkpoint;
alter tablespace T_1_201133 offline;

6. Move the file from the SSD disk in /u01 to the regular disk in /u02

mv /u01/datamart/T_1_201133_01.dbf /u02/datamart/

7. Rename the dbf file in the database

alter database rename file '/u01/datamart/T_1_201133_01.dbf' to '/u02/datamart/T_1_201133_01.dbf';

8. Perform recovery, if the database is in archive log mode

recover tablespace T_1_201133;

9. Bring the tablespace online

alter tablespace T_1_201133 online;

10. A typical 7200 rpm disk can copy at 30 MB/s, while a 15000 rpm disk can copy at 110 MB/s. In a SAN configuration, they may be able to achieve 150 MB/s.

2008-05-09

SAP Spool Request

SAP has a default maximum spool request number of 32,000, and the default clean-up job SAP_REORG_SPOOL does not clean up output requests created by background jobs.

Therefore, it is often the case that GBs of database storage are allocated for print requests (use transaction code SP12 to view), or that the maximum spool request number of 32,000 is hit.

It is highly recommended to create a new custom variant, e.g. ZPURGE090DAYS, with the following parameters:
  1. Client number: 000
  2. Username: Any user who can create variant with SE38 and modify job in SM37
  3. Variant name: ZPURGE090DAYS
  4. Expiry Date - Requests past expiration date: Disable
  5. Minimum age in days: 90
  6. Completed req. with min. age: Disable
  7. All requests with min. age: Enable
  8. Do you want to log everything?: Disable
  9. Log instead of dialog boxes?: Enable
  10. Log only without deletion?: Disable
  11. COMMIT all...Spool requests: 1,000 to 10,000
Parameters 1, 2, and 4 are important and should not be changed. Feel free to change the other values according to your requirements.

Execute transaction code SM37 and modify job SAP_REORG_SPOOL in any client. Modify the job and replace variant SAP&001 with ZPURGE090DAYS. Duplicate this job and execute it immediately. For a typical spool size of 1 GB, it takes about 15 minutes to clean up. Use SM37, SP12 (TemSe data storage), and SPAD (print request overview per client) to monitor the progress.

If after the cleanup you are still encountering one of these errors:
  1. SPOOL_INTERNAL_ERROR (assuming it is related to this article)
  2. spool overflow (assuming you have not adjusted the max spool number before)
  3. ...no more free spool request numbers...
then SAP note 48284 mentions the following changes:
  1. In Client 000, execute transaction SNRO - Number Range button - Interval. The default numbering range is 100 - 32,000 (which allows up to 31,900 requests)

  2. Define profile parameter rspo/spool_id/max_number up to 2^31
  3. Define rspo/spool_id/loopbreak to the same value as above, but I think it is optional (see the example profile entries below)
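
Hypothetical instance profile entries for steps 2 and 3 (maintained via transaction RZ10); the values shown are examples only, not a recommendation:

rspo/spool_id/max_number = 999999
rspo/spool_id/loopbreak  = 999999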
I don't see a need to have more than 31,900. If a server is holding that many requests, I believe they never purge old requests from the database. Indirectly, the database table size is going to grow to a few GB, slowing down all print requests, as well as adding unnecessary database size.

If the custom variant is not defined in Client 000, then the background job SAP_REORG_SPOOL will show the following error. Create the custom variant in Client 000 to fix it:
  • Variant ZPURGE090DAYS does not exist