Product: BusinessObject
Module: Crystal Report designer
Consultants, myself included, often design archiving solutions that move OLTP data several years old into an archive table, or even into a separate archive database.
It then becomes a challenge for the report designer to pull data from two different tables or databases. This post shows how to design a single rpt report that gets the data from both tables automatically.
1. Create a sub-report to query the archive table, which keeps data older than 1 year
2. Create a parameter called "dateQuery"
3. In the main report, modify the conditional suppression formula to hide this year's data when dateQuery is older than 1 year
If DateDiff ("m", {?dateQuery}, CurrentDate ) > 12 Then True
4. In the sub-report, modify the conditional suppression formula to hide old data when dateQuery is within the past year
If DateDiff ("m", {?dateQuery}, CurrentDate ) <= 12 Then True
This is an alternative to designing a second report to query the old data. Nonetheless, it is not a bad idea to create a separate report to prevent users from abusing it, for example by extracting 5 years of data, which can severely degrade the database and disk, or even congest the network.
2012-05-16
BusinessObject: Adjust Width of List Box Prompt
Product: BusinessObject
Module: Crystal Report designer
File: C:\Program Files (x86)\SAP BusinessObjects\Crystal Reports for .NET Framework 4.0\Common\Crystal Reports 2011\crystalreportviewers\prompting\css\promptengine2.css
In the prompt menu, all the list boxes for filtering have a fixed width. It is possible to adjust this file to make them wider.
This is a global setting, so you can't control the size of a specific list box in a specific report. Moreover, if the width is too large (say 1920px), the right side of the screen will not be visible. A "height: 100px" entry could probably be added as well, but I have never tried it.
.pePromptTextBox
{
font-size: 8pt;
background-color: #FFFFFF;
font-family: Tahoma, verdana;
width: 300px;
}
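For example, a wider list box with an explicit height might look like the following (a sketch; as noted above, the height property is untested):

.pePromptTextBox
{
font-size: 8pt;
background-color: #FFFFFF;
font-family: Tahoma, verdana;
width: 600px; /* wider, but keep well below the screen width */
height: 100px; /* untested assumption */
}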
BusinessObject: Dynamic Timezone Conversion
Product: BusinessObject
Module: Crystal Report designer
I have seen many people posting timezone conversion questions, and many propose a static conversion, e.g. -1 hour between EST and CST.
That does not work for users outside the US timezones. For example, if the server is in the USA but the user viewing the report is in London, the two countries have different DST rules.
The following solution auto-detects the server and user timezones and adjusts the time accordingly, instead of using a hard-coded time difference. For illustration purposes, the built-in function CurrentDateTime is used, but it can be replaced with a column name to convert that value to the correct timezone.
1. Convert Crystal Report server timezone to user's timezone
ShiftDateTime ( CurrentDateTime, PrintTimeZone, CurrentCEUserTimeZone)
2. Convert UK timezone to Sweden's, regardless of the server's or user's timezone
ShiftDateTime ( CurrentDateTime, "UTC,0,BST", "CET,-60,CEST")
3. Convert UTC time to server's timezone
ShiftDateTime ( CurrentDateTime, "UTC,0", PrintTimeZone)
4. Convert UTC time to user's timezone
ShiftDateTime ( CurrentDateTime, "UTC,0", CurrentCEUserTimeZone )
5. Convert server's timezone to UTC time
ShiftDateTime ( CurrentDateTime, PrintTimeZone, "UTC,0")
6. Convert user's timezone to UTC time
ShiftDateTime ( CurrentDateTime, CurrentCEUserTimeZone, "UTC,0")
7. Convert US Eastern timezone to Pacific timezone, regardless of the server's or user's timezone
ShiftDateTime ( CurrentDateTime, "EST,0,EDT", "PST,60,PDT")
8. Convert UTC time to US Pacific timezone, regardless of the server's or user's timezone
ShiftDateTime ( CurrentDateTime, "UTC,0", "PST,480,PDT")
9. Convert based on a timezone defined in a user table stored in the database
This makes use of Oracle functions instead of Crystal Reports. Create a SQL Expression in Crystal Reports with the following value, where OrderTimestamp, GlobalDataTimeZone, and TargetDataTimeZone are columns from your own tables
( Select From_TZ( OrderTimestamp , GlobalDataTimeZone) AT Time Zone TargetDataTimeZone As TzValue From Dual )
10. Same as above, but using the client's session timezone
( Select From_TZ( OrderTimestamp , GlobalDataTimeZone) AT Time Zone SESSIONTIMEZONE As TzValue From Dual )
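For instance, with literal timezone values instead of columns (a sketch; the Cast is needed because From_TZ expects a TIMESTAMP):

( Select From_TZ( Cast( SysDate As Timestamp ) , 'UTC') AT Time Zone 'America/New_York' As TzValue From Dual )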
Note on Syntax:
ShiftDateTime ( [inputDateTime], [fromTimezone] , [toTimezone] )
where fromTimezone and toTimezone have the following syntax:
"Standard,Offset,DST[;DST start date/time,DST end date/time]" with the offset in minutes
"" (an empty string) will use the server's timezone as defined in the OS
CurrentCEUserTimeZone
Returns the timezone (string) of the user viewing the report
PrintTimeZone
Returns the timezone (string) of the BusinessObject server
The timezone string must be a valid Crystal Reports value. I recommend using the CurrentCEUserTimeZone and PrintTimeZone functions to see the values on the report while designing it, and changing the OS timezone to test.
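A quick way to check both values is a throwaway formula field placed on the report header (a sketch):

// Shows the server and user timezone strings side by side
"Server: " + PrintTimeZone + "  User: " + CurrentCEUserTimeZone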
SAP BusinessObject: Hide Group Header on 2nd Page Onward
Product: SAP BusinessObject
Module: Crystal Report designer
It is possible to hide the group header using a condition. The following steps configure the report so that it displays the group header on the first page, but hides it from the second page onward.
1. Split the group header into two sections
2. On the 1st group header section, specify the suppression condition InRepeatedGroupHeader
3. On the 2nd group header section, specify the suppression condition NOT( InRepeatedGroupHeader )
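As conditional suppression formulas (a sketch; this assumes the group option "Repeat Group Header On Each Page" is enabled, since InRepeatedGroupHeader is only true for repeated occurrences):

// 1st group header section > Suppress: hide from the 2nd page onward
InRepeatedGroupHeader

// 2nd group header section > Suppress: hide on the first page
Not ( InRepeatedGroupHeader )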
End of post
2012-02-10
Crystal Report: Default input directory
Product: BusinessObject XI
AKA: Crystal Report
Module: Central Management Server (CMS)
You will often see file names like the following in the CMS web UI for Crystal Report files. If you wonder where the exact file location is, want to copy out all the rpt files, or want to change their location, this post shows the details.
For example, the rpt report "Interactive Parameters" is stored in location "frs://Input/a_178/003/..." So where is this so-called Input directory? It is a parameter name in the CMS which is configurable to any directory.
Default Directory: C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\FileStore\Input\
Log in to the CMS web portal with a username that has the privilege to view the server configuration (viewing is enough if you don't need to change anything).
Change the drop-down list box at the top to "Servers" to display the server configuration.
On the left panel, click Service Categories - Core Services, and look for the server whose name contains ".InputFileRepository". The "Kind" column will show "File Repository Server", and you will see 2 entries, one for Input and one for Output. Double-click the Input entry to view its parameters.
The placeholder %DefaultInputFRSDir% (I call it a parameter) is the location of frs://Input, while %DefaultOutputFRSDir% is frs://Output.
You can create additional placeholders and use them for report export, so that end users do not need to know the exact location. This simplifies things when there is a storage upgrade or a directory re-org; the migration will be invisible to them.
During report scheduling, the following default output destination refers to the %DefaultOutputFRSDir% parameter configured above. Change it if you need to.
End users can specify their own custom output directory as needed. In the following screenshot, I am reusing the existing Output parameter, but adding 2 sub-directories called HR and 2012-02. This is a simpler way to organize the files without adding a parameter (placeholder).
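The resulting destination looks something like this (an illustrative path; HR and 2012-02 are the sub-directories from the example above):

%DefaultOutputFRSDir%/HR/2012-02/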
2012-01-31
Solving Missing Value in Crystal Report Parameter Screen
Product: BusinessObject XI
Problem description:
On the parameter screen, any dynamic parameter that contains more than 5,000 values will only show the first 5,000
Root cause:
BusinessObject defaults to retrieving the first 5,000 records for a dynamic parameter, to limit the impact on the database server. This is a global value, configurable in the Windows Registry
Solution:
Launch Registry Editor (regedit.exe), and navigate to following
HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\Crystal Reports\DatabaseOptions\LOV
Change the following 2 keys to the maximum number of records applicable. I don't recommend more than 10,000, as it is not practical for a user to pick from 10,000 values on a parameter screen
MaxRecordsToRead - Default is 5,000
MaxRowsetRecords - Default is 5,000
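A sketch of the same change from the command line (this assumes the values are string type and that the key path matches your installation; on 64-bit Windows the keys may sit under Wow6432Node, so verify the path in regedit first):

reg add "HKLM\SOFTWARE\Business Objects\Suite 12.0\Crystal Reports\DatabaseOptions\LOV" /v MaxRecordsToRead /t REG_SZ /d 10000 /f
reg add "HKLM\SOFTWARE\Business Objects\Suite 12.0\Crystal Reports\DatabaseOptions\LOV" /v MaxRowsetRecords /t REG_SZ /d 10000 /f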
If the program is still running (such as the SAP BO Windows service), terminate it and launch it again. The program reads the registry only once at startup, not every time you refresh a report. This applies to both the designer and the web interface. Less experienced people will tell you to reboot the PC, the BOE server, the web server, and every other piece of software; don't do that if you are wise.
2011-12-13
Crystal Report: Reasons for "Encapsulation Page Error"
Product: BusinessObject Crystal Report
The following are possible causes for the "Encapsulating Page Failed" error:
1. If the report uses a sub-report, ensure the sub-report's database authentication is correct
2. If the rpt file came from another database environment, or not from the original designer, open the rpt with Crystal Reports designer, then execute it to identify any database-related error. Possible errors:
2.1. View does not exist
2.2. Column does not exist
2.3. Database function does not exist
2.4. No privilege to access the table or view, which raises a false alarm that the view or table does not exist
2.5. Database account locked, or password expired
2.6. Other database-related errors
3. If using NTFS, ensure the BusinessObject installation directory has the proper privileges. Re-install the product if you can't identify the exact NTFS privilege
4. A bug in the Crystal Report server. Install the latest Service Pack on the server, not just the Crystal Report designer
2011-08-26
BusinessObject: Sync up Data Foundation with Database
Product: BusinessObject
Sub-product: Business View Manager
I altered tables and views at the database level. Use the following procedure to sync up in Business View Manager:
1. Open the "Data Foundation" which contains the table or view
2. From the pull-down menu, choose Tools - Verify Database
3. It will download the column definitions from the database and automatically add or remove columns
BusinessObject: Login to CMC with custom port
Product: BusinessObject
Normally when you log in to the Central Management Console, no port number is specified, just the hostname or IP address. This defaults to port 6400 (protocol TCP, for firewall purposes).
However, it is possible to specify the hostname and port number as shown below. In other words, it is possible to install multiple instances of the CMC server, pointing to different repository databases, or to a single database with multiple database schemas/usernames (simplified database management) for non-PROD or research purposes.
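In the CMS logon field, the host and port are separated by a colon (the hostname and custom port here are illustrative):

boeserver01:6401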
In project development, keeping costs down by hiring fewer DBAs, VMware admins, Windows admins, and UNIX admins is one of the main savings in human resource budgeting.
Although virtual servers (VMware, Oracle VM) are often used for multiple non-PROD environments, they indirectly require additional maintenance. For example, with 6 environments (DEV1, DEV2, QA1, QA2, UAT1, UAT2), there will be 6 UNIX hosts to patch, 6 sets of UNIX user management, 6 databases, 6 Crystal Reports installations, etc.
So if it is not necessary, keep it simple, especially for databases, where there are a lot of developers and the users have no clue about their design, maintenance, recovery, tuning, patching, known bugs, etc.
2011-08-25
BusinessObject Crystal Report Firewall Configuration
Product: BusinessObject
Open the following ports in the firewall for Crystal Report Designer, Business View Manager, or any thick client to connect to the Central Management Console.
If using the web interface, then only port 80 of the web server is needed.
1. Protocol TCP port 6400
Note: You need to go to the web portal of the Central Management Console and configure a fixed port:
1. Select "Servers" from the drop-down list box in the upper left corner
2. Click "Servers List" on the left panel and look for "Kind" name "Central Management Server"
3. Double-click on it, and it will open the Properties window
4. Scroll down to the section "Common Settings"
5. Disable the "Auto Assign" checkbox
6. Fill in "Request Port" as 6400
7. Click the "Save and Close" button
There are other ports to open if you are using the following:
Web Application Server port, protocol TCP port 8080. I recommend setting it to port 80 due to common corporate security rules
Server Intelligence Agent (SIA) port, protocol TCP port 6410
Note: The ports above are default values. Each is configurable to another port.
For load balancing, provide a virtual hostname for Crystal Report designers, Business View Manager users, and the Tomcat web server to connect to the CMC. In the firewall, open the port on both the virtual IP and the physical IPs.
Although opening access to the physical IPs is optional, it is critical for the technical support team to identify problems, and it drastically shortens time-to-resolution.
2011-08-24
Genesys with BusinessObject (Crystal Report) Business View Filter
Product #1: Genesys CCA
Product #2: SAP BusinessObject
When using Crystal Reports (BusinessObject designer) to design reports for Genesys CCA (DataMart), Business View Manager is commonly used for advanced customization.
This screen shows how to enable filtering in the Business Element called "O_Queue_No_Agg be".
My advice is not to create the filter in the Data Foundation layer, but in the Business Element layer. Keep all filter and parameter definitions in this layer; they will be inherited by Business Views, Lists of Values, and Prompt Groups.
There is no way to use Business View Manager or Crystal Reports designer to see the generated SQL, or the values with the filter applied. The trick I use is to open the Business Element and choose "Test View" from the Tools pull-down menu. It will display the Crystal syntax for it.
In my example, I only want to show virtual queues whose names start with the VQ naming convention, as in the sketch below.
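A sketch of such a filter in Crystal syntax (the qualified field name is an assumption; use whatever name the Business Element exposes for the DataMart column Object_Name):

{O_Queue_No_Agg.Object_Name} StartsWith "VQ"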
For anyone who is using my design, please create an index on the O_Queue_No_Agg table on column Object_Name for performance tuning.
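A minimal index DDL sketch for Oracle (the index name is illustrative):

CREATE INDEX O_QUEUE_NO_AGG_IX1 ON O_QUEUE_NO_AGG (OBJECT_NAME);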
2011-08-12
Superfast and Cheap Database Performance Tuning
Database performance tuning... tried all of these?
Index tuning, application tuning, query re-write, partitioning, sub-partitioning, upgrade storage, faster CPU, multi-threaded programming, add more CPU, faster fiber channel controller, faster SATA3 controller
So what's the next feasible approach... an Oracle Exadata storage server? Too expensive. Need something cheaper? Let's try a few Solid State Disk (SSD) devices and combine them with the database partitioning feature.
Regardless of the kind of application, e.g. SAP ERP, SAP BusinessObject, Oracle Database, Oracle WebLogic, Genesys suites, Oracle Hyperion reports, you will always encounter situations where different tuning strategies are required. In terms of best ROI (return on investment), the following are common considerations:
1. Availability of product and database expertise. Even when available, cost is the next consideration; such experts are often costly
2. Timeline. They may not understand the overall hardware, application, custom design, and business function. It takes a minimum of a week for them to pick it up
3. Workflow. In big corporations, lots of time is spent going through the workflow to provision a major hardware change or upgrade
4. Tuning time. Although a DBA may suggest various tuning options, there are cases where the DBA can't tell precisely which option will work best
5. User testing time. After each tuning step, QA or live users will often test it. Time and cost are involved in getting them engaged, especially overtime
6. Management cost. At least one manager needs to be involved to coordinate meetings, discussions, management updates, etc. Another cost to consider
7. Scalability. When the product, database, and servers are maxed out, yet limited by regular storage capacity, SSD technology is the last consideration. Often, everyone you speak to will propose running everything on SSD drives. This is a very costly option
This is another tuning strategy that fits in between a hardware upgrade and database tuning.
Most databases have a partitioning feature (MS SQL Server, DB2, MySQL), and some support sub-partitions (Oracle). The idea is to buy a small number of SSD drives to keep the most frequently accessed data.
I will use the Genesys CTI application with an Oracle database in this article. If there is interest in the other products indicated above, I can expand it to cover other applications. There are too many applications (IIS, web portal, reporting, ETL, ERP) to cover them all, so I will use one as an example.
The design is:
1. Create at least 2 partitions. For Oracle, sub-partitions can be used for fine-grained space allocation
2. One partition keeps frequently used data. The other partition keeps data older than 7 days
3. I recommend a pair of tablespaces for each major table in each database
4. For the databases in Table List #1 and #2, keep all the dbf files on the SSD drive
5. Partitions which keep current data will be created in a tablespace whose dbf files reside on the SSD drive
6. Partitions which keep old data will be created in a tablespace whose dbf files reside on a regular drive, e.g. SATA, SAS, SCSI, etc
7. Create a weekly job which merges the current-data partition into the old partition
8. Create a new partition to keep current data, with indexes if applicable. This can be created 1 week or 1 month ahead. Note that it takes up initial extent space; for a data warehouse database, it could be 300 MB
9. Ensure a database backup with archive logs exists and is tested monthly. SSD drives degrade depending on usage
10. To further improve recovery time, write the RMAN backup script to back up the tablespaces in the following sequence: SYSTEM, tablespaces holding current data, then the rest of the tablespaces (a sketch follows after this list)
11. To further improve recovery time, keep 1 copy of the RMAN compressed backup on disk. With Oracle Standard Edition, use gzip to compress after the backup completes
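A minimal RMAN sketch of the ordering in item 10 (TS_CURRENT is a stand-in for whatever tablespaces hold the current-data partitions; BACKUP DATABASE at the end also re-copies the earlier tablespaces, which is acceptable for a sketch):

RUN {
  BACKUP TABLESPACE SYSTEM;       # critical dictionary first
  BACKUP TABLESPACE TS_CURRENT;   # hot current-data tablespaces next
  BACKUP DATABASE;                # then everything else
  BACKUP ARCHIVELOG ALL;          # archive logs for point-in-time recovery
}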
The application and reports will then enjoy the following benefits:
1. System tables will always have fast performance
2. Day-to-day transactions will be very smooth
3. Intraday and 7-day reports will be available immediately
4. The data mart will be able to crunch 7 days of transactional data at 10x the speed
5. If the SSD corrupts due to material aging after, say, 3 years, and database recovery is needed, it can recover from disk, which is very fast. Oracle allows restoring only the corrupted dbf files and the respective archive logs; recovery takes about 1 minute for a 2 GB file
6. Internal database engine transactions will be very smooth, which indirectly improves other applications' database performance
Table List #1
MS SQL Server database
1. master
2. tempdb
3. config
4. datamart
5. any custom report database
Table List #2
Oracle database
1. datamart - selective tables
2. config - keep everything on the SSD drive if possible, else keep the USERS tablespace on SSD. If another tablespace is used to store Genesys data, then use that one
3. any custom report database
4. For each database, store:
4.1. Tablespace SYSTEM
4.2. Tablespace TEMP
4.3. Tablespace UNDOTBS
4.4. Online redo log
5. OCS - selective tables
For the OCS Outbound Contact database:
A.
Create 2 partitions for each calling list. Use call_time to split the data between the 2 partitions, with the tablespace design as follows (a DDL sketch follows this list):
1. If call_time is null, store on the SSD drive
2. If call_time is less than 7 days old, store on the SSD drive
3. Everything else stores on regular disk
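A DDL sketch of this layout (cl_customers, ts_ssd, and ts_sata are hypothetical calling-list and tablespace names; in Oracle range partitioning, rows with a null partition key fall into the MAXVALUE partition, which is why null call_time lands on SSD here):

CREATE TABLE cl_customers (
  chain_id    NUMBER,
  call_time   DATE,
  record_type NUMBER
)
PARTITION BY RANGE (call_time) (
  -- older than the rolling 7-day boundary: regular disk
  PARTITION p_old     VALUES LESS THAN (TO_DATE('2011-08-07','YYYY-MM-DD')) TABLESPACE ts_sata,
  -- recent rows and null call_time: SSD
  PARTITION p_current VALUES LESS THAN (MAXVALUE) TABLESPACE ts_ssd
);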
B.
Store gsw_donotcall_list in a tablespace which resides on the SSD drive. Partitioning is optional; if you need to partition, use the TIME_STAMP column
C.
Store gsw_req_log in 2 partitions as well, partitioned by the TIME_STAMP column
D.
If the OCS history file (.rep) is captured and loaded into a database table Calling_List_History (or any name), store it in 2 partitions, partitioned by LOG_TIME:
1. log_time within 7 days stores on the SSD drive
2. Everything else stores on regular disk
For the CCA database, or DATAMART:
E.
Keep the following tables in tablespaces residing on the SSD drive. No partitioning needed:
- AGG_COLUMN
- BASIC_STAT
- CHUNK_LOAD_ERR_LOG
- CHUNK_LOG
- COMP_STAT
- COMP_STAT_CATEGORY
- COMP_TO_BASIC_STAT
- CONFIG_SERVER
- DM_PROPERTY
- ERROR_CHUNK
- FOLD_TEMP_TO_COMP
- FOLD_TO_COMP_STAT
- FOLDER_TEMPLATE
- OBJ_TO_LAYOUT
- OBJ_TO_OBJ
- OBJECT
- OUTCOME_AGG_COLUMN
- PENDING_AGG
- PURGING_RULES
- REP_TO_TAB
- REPORT_FOLDER
- REPORT_LAYOUT
- REPORT_TABLE
- REPORT_VIEW
- SEQUENCES
- SOURCE
- STAT_PARAM
- STAT_TO_PAR
- STATISTIC
- TAB_INFO_TYPE
- TIME_COLUMN
- TIME_FUN_PARAM
- TEMP_TFUN_PAR_VAL
- TIME_FUN_PARAM_VAL
- TIME_FUNCTION
- TIME_ZONE
- VIEW_AGG_COLUMN
- VIEW_TEMP_AGG_COL
- VIEW_TEMP_TIME_COL
- VIEW_TEMPLATE
- VIEW_TIME_COLUMN
- All O_nnn_OBJ_DIM
- All S_nnn_STAT_DIM
Partition the following tables into 2, one stored on SSD, the other on regular disk:
1. LOGIN - by TIME, which is seconds since 1970-01-01
2. PURGING_LOG - by PURGE_START_TIME
3. QINFO - by STARTTIME
4. REP_REBUILD_LOG - by LAST_TIME_KEY
5. STATUS - by STARTTIME
6. R_nnn_STAT_RES - by TIMEKEY
7. T_nnn_TIME_DIM - by TIMEKEY
Create a database job or shell script to move current data into the 7-day-old partition, which resides on regular disk.
User coordination:
1. Inform users of the weekly maintenance window. Minimum 1 hr
2. Build an alert or e-mail trigger into this script to notify on failure
3. Ensure an archive log backup takes place immediately after the activity to free up Fast Recovery Area (FRA) space
Option 1: Technical steps
1. Create a control table which keeps track of partition name, table name, subpartition name, creation date, merge date, time_format, range1, range2, retention, is_current, and is_archive (a DDL sketch follows after this list)
2. Based on the control table, determine the partition name which keeps the current data (is_current=1) and the one with old data (is_archive=1)
3. ALTER TABLE T_1_TIME_DIM MERGE PARTITIONS T_1_201143, T_1_2008 INTO PARTITION T_1_2011 COMPRESS
4. ALTER TABLE four_seasons MODIFY PARTITION quarter_two REBUILD UNUSABLE LOCAL INDEXES. Skip this step if using global indexes
5. Dynamically add a new partition for the next period based on values from the control table. Syntax to add, based on week 33 of year 2011:
ALTER TABLE T_1_TIME_DIM ADD PARTITION T_1_201133 VALUES LESS THAN ('20110814')
6. Update the control table to mark the partitions as merged, and set is_current to the new partition
7. This option allows compressed tables and indexes, which reduces the database size and saves storage cost
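A DDL sketch of the control table from item 1 (column types are assumptions):

CREATE TABLE partition_ctl (
  table_name     VARCHAR2(30),
  partition_name VARCHAR2(30),
  subpart_name   VARCHAR2(30),
  creation_date  DATE,
  merge_date     DATE,
  time_format    VARCHAR2(16),
  range1         VARCHAR2(16),
  range2         VARCHAR2(16),
  retention      NUMBER,
  is_current     NUMBER(1),  -- 1 = partition holding current data
  is_archive     NUMBER(1)   -- 1 = partition holding old data
);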
Option 2: Technical steps
1. Don't merge the partitions; instead, move the physical dbf files from the SSD drive to the regular disk drive
2. This is a much faster process because it does not copy the data through the database; the existing content of the dbf file is kept
3. This approach will indirectly introduce 52 partitions per partitioned table per year
4. If housekeeping is preferred, to avoid managing so many partitions, a quarterly merge activity can be scripted with logic similar to the above
5. Prepare to move the partition residing on SSD, with tablespace name T_1_201133 and dbf file T_1_201133_01.dbf:
alter system checkpoint;
alter system switch logfile;
alter system checkpoint;
alter tablespace T_1_201133 offline;
6. Move the file from the SSD disk in /u01 to the regular disk in /u02
mv /u01/datamart/T_1_201133_01.dbf /u02/datamart/
7. Rename the dbf file in the database:
alter database rename file '/u01/datamart/T_1_201133_01.dbf' to '/u02/datamart/T_1_201133_01.dbf';
8. Perform recovery, if the database is in archive log mode:
recover tablespace T_1_201133;
9. Bring the tablespace back online:
alter tablespace T_1_201133 online;
10. A typical 7200 rpm disk copies at about 30 MB/s, while a 15000 rpm disk copies at about 110 MB/s. In a SAN configuration, they may achieve 150 MB/s