
2011-08-31

SAP BarTender (Zebra printer) Label Printer Configuration

Software: SAP
Printer software: BarTender
Printer name: Zebra Label Printer

Many SAP administrators have no clue about enterprise label printer setup, which commonly uses BarTender from Seagull Scientific.

Label printers are generally very simple devices, so BarTender gives end users extra control to minimize the waste of expensive labels (tube labels, patient wristband labels).

However, this advanced software confuses many SAP administrators, who assume it is just a normal printer and use the printer driver provided by the manufacturer, Zebra for example.

If you use Seagull Scientific BarTender, always use the printer driver provided by Seagull Scientific, which adds an extra tab to the printer properties showing the BarTender logo.

Go to the following page and download the BarTender-aware printer drivers:

http://www.seagullscientific.com/aspx/free-windows-printer-drivers-download.aspx

They provide about 2,600 printer drivers, so download and install only those that apply to your printers.

BarTender has a license server to track the number of printers and users in use. If you are running high availability, remember to notify Seagull for licensing, and deploy the registration number to multiple license servers. In each BarTender client (any PC where a BarTender Zebra label printer driver is added), configure the client to point to the virtual IP of the BarTender license server.

One useful trick: the BarTender printer driver may resolve strange printer behavior that you encounter with other software, such as Crystal Reports, Siebel, SAP, or Oracle APEX. Because it is rewritten by Seagull Scientific, its functionality and behavior differ from the drivers supplied by the manufacturer, Linux, or Microsoft. Try it as a last resort if you run into a printer integration problem.

Please use the following PayPal donate button if my post helped

Genesys Hacking - Login/Logout SQL

Genesys stores agent login and logout entries in the database table LOGIN.

Use the following SQL to determine agent login and logout events in GMT:


 SELECT "LOGIN"."SWITCHDBID",
 "LOGIN"."DNDBID",
 "LOGIN"."QUEUEDBID",
 "LOGIN"."AGENTDBID",
 "LOGIN"."PLACEDBID",
 "LOGIN"."TIME" Login_Time, login_1.time Logout_Time,
 DateAdd(s, "LOGIN"."TIME", '1970-01-01') Login_Time_gmt,
 DateAdd(s, login_1.time, '1970-01-01') Logout_Time_gmt
 FROM   LOGIN LEFT OUTER JOIN LOGIN LOGIN_1 ON
 "LOGIN"."SWITCHDBID"="LOGIN_1"."SWITCHDBID"
 AND "LOGIN"."DNDBID"="LOGIN_1"."DNDBID"
 AND "LOGIN"."QUEUEDBID"="LOGIN_1"."QUEUEDBID"
 AND "LOGIN"."AGENTDBID"="LOGIN_1"."AGENTDBID"
 AND "LOGIN"."PLACEDBID"="LOGIN_1"."PLACEDBID"
 AND "LOGIN"."TIME"<"LOGIN_1"."TIME"
 WHERE
 "LOGIN"."TIME" between 1312416000 AND 1312502400
 and login.status = 1 and login_1.status = 0
 and login_1."TIME" between 1312416000 AND 1312502400
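The TIME columns hold seconds since 1970-01-01 00:00:00 GMT (the Unix epoch), which is what the SQL Server DateAdd(s, ...) calls convert to a timestamp. As a sanity check outside the database, the same conversion can be done in a short script; it confirms the BETWEEN bounds above cover one full GMT day:

```python
from datetime import datetime, timezone

def epoch_to_gmt(seconds):
    """Convert a LOGIN.TIME value (seconds since 1970-01-01) to a GMT timestamp string."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# The BETWEEN bounds used in the query above:
print(epoch_to_gmt(1312416000))  # 2011-08-04 00:00:00
print(epoch_to_gmt(1312502400))  # 2011-08-05 00:00:00
```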

2011-08-26

BusinessObject: Sync up Data Foundation with Database


Product: BusinessObject
Sub-product: Business View Manager

After altering tables and views at the database level, use the following procedure to sync them up in Business View Manager:

1. Open the "Data Foundation" which contains the table or view
2. From the pull-down menu, select Tools - Verify Database
3. It downloads the column definitions from the database and automatically adds or removes columns

Hacking Genesys: Find out all DAP's database configuration

Product: Genesys
Sub-product: Configuration Manager (CME)

Yesterday, I received an e-mail from Portugal asking for an SQL query to get a full list of DAP names, database server names, database names, and database user names.

With my expertise in hacking the Genesys backend, it took just 30 minutes to figure out the SQL:



Select app.name DAP_Name,
   prop859.prop_value DBserver_Host,
   prop864.prop_value DBname,
   prop865.prop_value Username_DB,
   prop867.prop_value DBEngine,
   case app.state when 1 then 'Y' else 'N' end is_Enabled
from cfg_flex_prop prop859,
    cfg_flex_prop prop864,
    cfg_flex_prop prop865,
    cfg_flex_prop prop867,
cfg_application app
where
    prop859.prop_name = 'dbserver'
and prop864.prop_name = 'dbname'
and prop865.prop_name = 'username'
and prop867.prop_name = 'dbengine'
and app.type = 8
and app.dbid = prop859.object_dbid
and app.dbid = prop864.object_dbid
and app.dbid = prop865.object_dbid
and app.dbid = prop867.object_dbid


DAP_Name        DBserver_Host DBname     Username_DB DBEngine Is_Enabled
--------------- ------------- ---------- ----------- -------- ----------
cscDAP_Log      iccs-db1      GLog       logowner    oracle   Y
cscDAP_ERS      iccs-db1      GERS       genesys     oracle   Y
cscDAP_OCS      iccs-db1      GOCS       ocs         oracle   Y
cscDAP_STAT     iccs-db1      GSTAT      gstat       oracle   Y
cscDAP_DataSrc  iccs-db1      GDATAM     dsourcer    oracle   Y
cscDAP_IRD1     iccs-db1      GConfig    genesys     oracle   Y
cscDAP_Interact iccs-db1      GIxn       genesys     oracle   Y
cscDAP_UCS      iccs-db1      GUCS       genesys     oracle   Y
cscDAP_CCA      iccs-db1      GDATAMART  cca         oracle   Y
cscDAP_IRD2     iccs-db1      GConfig    genesys     oracle   Y
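The query pivots one cfg_flex_prop row per property into columns by self-joining that table once per property name. The pattern can be reproduced in miniature with SQLite; note the table shapes and values below are a simplified, made-up mock for illustration, not the real CONFIG schema:

```python
import sqlite3

# In-memory mock of the two CONFIG tables used by the query above
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cfg_application (dbid INTEGER, name TEXT, type INTEGER, state INTEGER);
CREATE TABLE cfg_flex_prop (object_dbid INTEGER, prop_name TEXT, prop_value TEXT);
INSERT INTO cfg_application VALUES (101, 'cscDAP_Log', 8, 1);
INSERT INTO cfg_flex_prop VALUES
  (101, 'dbserver', 'iccs-db1'),
  (101, 'dbname',   'GLog'),
  (101, 'username', 'logowner'),
  (101, 'dbengine', 'oracle');
""")

# Same four-way self-join shape as the Genesys query
row = conn.execute("""
SELECT app.name,
       p859.prop_value, p864.prop_value, p865.prop_value, p867.prop_value,
       CASE app.state WHEN 1 THEN 'Y' ELSE 'N' END
FROM cfg_flex_prop p859, cfg_flex_prop p864, cfg_flex_prop p865,
     cfg_flex_prop p867, cfg_application app
WHERE p859.prop_name = 'dbserver'
  AND p864.prop_name = 'dbname'
  AND p865.prop_name = 'username'
  AND p867.prop_name = 'dbengine'
  AND app.type = 8
  AND app.dbid = p859.object_dbid
  AND app.dbid = p864.object_dbid
  AND app.dbid = p865.object_dbid
  AND app.dbid = p867.object_dbid
""").fetchone()
print(row)  # ('cscDAP_Log', 'iccs-db1', 'GLog', 'logowner', 'oracle', 'Y')
```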



Hacking Genesys: Find out all DAP Name, Version, Status


Product: Genesys CTI Suite
Sub-product: Configuration Manager

Although Genesys offers a very user-friendly GUI called Configuration Manager (CME), hacking it at the database level is fun, and it can get the job done much quicker.

The following SQL can be run against the CONFIG database, which CME uses behind the scenes:

select * from cfg_application where name = 'DAP_DM' or type = 8

The following information is shown at a glance:
1. dbid of the application
2. name of the application as seen in CME
3. type = 8 for DAP
4. version
5. is_server
6. state: value 1 is enabled, 2 is disabled
7. app_prototype_dbid
8. csid
9. app_prototype_csid
10. work_directory not applicable for DAP
11. command_line not applicable for DAP
12. auto_restart
13. startup_timeout
14. redundancy_type
15. is_primary
16. cmd_line_args not applicable for DAP

BusinessObject: Login to CMC with custom port

Product: BusinessObject

Normally, when you log in to the Central Management Console, no port number is specified, just the hostname or IP address. This defaults to port 6400 (protocol TCP, for firewall purposes).

However, it is possible to specify the hostname and port number as shown below. In other words, it is possible to install multiple instances of the CMC server, each pointing to a different repository database, or to a single database with multiple schemas/usernames (simplifying database management) for non-PROD or research purposes.

In project development, keeping costs down by hiring fewer DBAs, VMware admins, Windows admins, and UNIX admins is one of the main savings in the human resource budget.

Although virtual servers (VMware, Oracle VM) are often used for multiple non-PROD environments, they indirectly require additional maintenance. For example, with 6 environments (DEV1, DEV2, QA1, QA2, UAT1, UAT2), there are 6 UNIX hosts to patch, 6 sets of UNIX users to manage, 6 databases, 6 Crystal Reports installations, etc.

So if it is not necessary, keep it simple, especially for databases, where there are a lot of developers and users with no clue about their design, maintenance, recovery, tuning, patching, known bugs, etc.

2011-08-25

BusinessObject Crystal Report Firewall Configuration

Product: BusinessObject

Open the following ports in the firewall for Crystal Reports Designer, Business View Manager, or any thick client to connect to the Central Management Console.

If using the web interface, only port 80 of the web server is needed.

1. Protocol TCP port 6400

Note: You need to go to the web portal of the Central Management Console and configure a fixed port:
1. Select "Servers" from drop down list box on upper left hand corner
2. Click "Servers List" on left panel, look for "Kind" name "Central Management Server"
3. Double click on it, and it will open Properties window
4. Scroll down to section "Common Settings"
5. Disable "Auto Assign" checkbox
6. Fill in "Request Port" as 6400
7. Click "Save and Close" button

There are other ports if you are using the following:
Web Application Server port, protocol TCP port 8080. Recommended to set it to port 80 to satisfy common corporate security rules
Server Intelligence Agent (SIA) port, protocol TCP port 6410



Note: The ports above are default values; each is configurable to another port.

For load balancing, provide a virtual hostname for Crystal Reports designers, Business View Manager users, and the Tomcat web server to connect to the CMC. In the firewall, open the ports based on both the virtual IP and the physical IPs.

Although opening access to the physical IPs is optional, it is critical for the technical support team to identify problems, and it can dramatically shorten the time-to-resolution.
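After the firewall change, a quick client-side TCP connect test confirms the port is reachable. A minimal sketch (the hostname below is a placeholder, substitute your CMS host):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default CMS port on a (hypothetical) host 'cms-server'
# print(port_open("cms-server", 6400))
```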


2011-08-24

Genesys with BusinessObject (Crystal Report) Business View Filter

Product #1: Genesys CCA
Product #2: SAP BusinessObject

When using Crystal Reports (the BusinessObjects designer) to design reports for Genesys CCA (DataMart), Business View Manager is a common tool used for advanced customization.

This screen shows how to enable filtering in the Business Element called "O_Queue_No_Agg be".

My advice is not to create filters in the Data Foundation layer, but in the Business Element layer. Keep all the filter and parameter definitions in this layer; they will be inherited by the Business View, List of Values, and Prompt Groups.


Please note that by default, a filter's right is "Not Applied." The designer needs to modify the right to apply it to Everyone, or to fine-grained user/group control.


There is no way to use Business View Manager or the Crystal Reports designer to see the generated SQL, or the values with the filter applied. The trick I use is to open the Business Element and choose "Rights Test View" from the Tools pull-down menu. It will display the Crystal syntax for the filter.

In my example, I only want to show virtual queues that follow the naming convention of VQ at the front of the queue name.

For anyone using my design, please create an index on the O_Queue_No_Agg table's Object_Name column for performance.

The database principal owns a schema in the database, and cannot be dropped

In an MS SQL Server database, it is very common to set up database user names wrongly during the initial Genesys product installation. We need to create multiple databases, as well as multiple logins to each database for owner, viewer, and reporting roles.

The most frequent mistake involves the CCA database, where the installation document does not clearly describe the database credentials and expects every Genesys architect to be an MS SQL Server security expert. In development or QA environments, Genesys engineers/consultants often test with full privileges, or simply use the sa user, which is a huge security breach.

Once a test user is created, you may need to undo it and re-create it to ensure the steps are documented, or to correct the wrongly created credentials.

This post covers dropping a database user, which touches the following areas:

  1. Login user (created under MS SQL Server)
  2. Database user (created under each database, e.g. CCA)
  3. Schema user (created under each database, e.g. CCA)


How do you drop a database user when you encounter the following error?

The database principal owns a schema in the database, and cannot be dropped

In this example, I created the login name reporter, with DB user reporter_dbo (the user name specified on the User Mapping page during login creation), and set its default schema to db_datareader. The sample database used is the Genesys CCA database.

I will first assign ownership of the schema db_datareader back to the built-in DB user db_datareader. Then I will be able to drop the DB user reporter_dbo.

Remember to set the database name prior to executing the script, because this is database specific. If the DB user is assigned a default schema in multiple databases, execute this SQL in each database.

Step:
use CCA


SELECT cast(s.name as varchar(20)) schema_name, cast(u.name as varchar(20))  user_name FROM sys.schemas s, sys.sysusers u
where s.principal_id = u.uid


schema_name          user_name
-------------------- --------------------
db_accessadmin       db_accessadmin
db_backupoperator    db_backupoperator
db_datawriter        db_datawriter
db_ddladmin          db_ddladmin
db_denydatareader    db_denydatareader
db_denydatawriter    db_denydatawriter
db_owner             db_owner
db_securityadmin     db_securityadmin
dbo                  dbo
guest                guest
INFORMATION_SCHEMA   INFORMATION_SCHEMA
db_datareader        reporter_dbo
sys                  sys

(13 row(s) affected)

1. Navigate to database CCA - Security - Schemas - db_datareader. Double-click db_datareader to change its schema owner
2. Under the "General" page, click the "Search..." button
3. Click the "Browse..." button
4. Select object name "[db_datareader]" so that the schema is assigned back to the original system user
5. Click the OK button to save the change (equivalently, from a query window you can run: ALTER AUTHORIZATION ON SCHEMA::db_datareader TO db_datareader)
6. Execute this SQL (very similar to the one above) to confirm the change

SELECT cast(s.name as varchar(20)) schema_name, cast(u.name as varchar(20))  user_name FROM sys.schemas s, sys.sysusers u
where s.principal_id = u.uid and s.name = 'db_datareader'

schema_name          user_name
-------------------- --------------------
db_datareader        db_datareader

7. Now drop the DB user reporter_dbo (under CCA database - Security - Users) and the login user reporter (under server name - Security). The error will no longer appear


2011-08-23

Genesys ETL and DataSourcer Purging Configuration

Product: Genesys Call Center Analyzer

This post mainly discusses configuring purging for the DataSourcer server, rather than the ETL Service/ETL Assistant.

It is easy to find information about configuring data retention in ETL Assistant, but less so for DataSourcer.

There are 2 places where purging of DataSourcer data can be configured. Personally, I prefer to keep the raw data in DataSourcer (the database commonly called ods) for troubleshooting, as well as for providing evidence to Genesys support. In addition, with some hacking in the ETL database (commonly called datamart or cca), it is possible to redo the ETL from it (ods -> datamart).

The above screenshot shows that in DataSourcer (using the CME administration GUI), you can specify the table size for OL_DATA[nnnn] before it is dropped. The ETL Service will drop the table once it hits that size (after the data is transferred into the datamart database).

In the etl.properties file, enable the following line to automatically drop each OL_DATA[nnnn] table once all its data is transferred into the datamart database:


dropTransferredTables

Simply specifying "dropTransferredTables" means each OL_DATA[nnnn] table is dropped immediately once all its statistics are copied into the datamart database. The faster the database/server, and the more frequently the ETL service wakes up, the earlier the OL_DATA[nnnn] tables will be dropped.

Use Genesys Data Modeling Assistant (DMA) to review when an OL_DATA[nnnn] table has been dropped. When a value shows in "Delete Time," the OL_DATA[nnnn] table is dropped. There is no way to undo this, which indirectly makes troubleshooting impossible.

Using the Oracle Flashback feature, it is possible to revert to the previous data within seconds, then log in to DMA to review it.



2011-08-19

Crystal Report - Retrieve report name from database

Product: Crystal Report XI

BusinessObjects stores its repository tables as binary data. However, it provides a web interface called Query Builder which can be used to query the API-based virtual tables.

URL: http://crystalreport_cmc_web_server/AdminTools

Log in with the same ID you use for the Central Management Console.

To display the Crystal Reports schedule jobs that are going to be executed, enter the following query into the text box:


SELECT SI_NAME, SI_NEXTRUNTIME, SI_SCHEDULEINFO.SI_STARTTIME, SI_PROCESSINFO.SI_RECORD_FORMULA
FROM CI_INFOOBJECTS WHERE SI_RECURRING=1

Most forums only show the programming API to access properties of these virtual tables. In my example, I am showing that even with SQL, it is possible to extract the next-level object. The limitation is joining these virtual tables, where the API is more convenient.

Use the following query to map file names stored in frs://Input to report names. Be aware of duplicate names.


SELECT SI_ID, SI_CUID, SI_KIND, SI_NAME, SI_FILES FROM CI_INFOOBJECTS WHERE SI_PROGID = 'CrystalEnterprise.Report' and SI_NAME like '%Agent Performance - Daily Common%'

Other useful queries

Get Full Client Reports 
SELECT SI_ID, SI_NAME,SI_FILES FROM CI_INFOOBJECTS WHERE SI_KIND in( 'webi' , 'CrystalReport') 

Get All reports 
Select SI_ID, SI_NAME From CI_INFOOBJECTS Where SI_PROGID='CrystalEnterprise.Report' And SI_INSTANCE=0 

Get All Webi reports 
Select SI_ID, SI_NAME From CI_INFOOBJECTS Where SI_PROGID='CrystalEnterprise.Webi' And SI_INSTANCE=0 

Get All universes 
Select SI_ID, SI_NAME, SI_WEBI, SI_KIND From CI_APPOBJECTS where SI_KIND ='Universe' 

Get All Users 
SELECT SI_ID, SI_NAME FROM CI_SYSTEMOBJECTS WHERE SI_PROGID='CrystalEnterprise.USER' 

Get all personal categories 
Select SI_ID, SI_NAME From CI_INFOOBJECTS Where SI_KIND='PersonalCategory'

Get all groups 
Select * from CI_SYSTEMOBJECTS Where SI_KIND='UserGroup' 

Get all folders 
Select SI_ID, SI_NAME From CI_INFOOBJECTS Where SI_PROGID='CrystalEnterprise.Folder' 

Get all categories 
select SI_ID, SI_NAME From CI_INFOOBJECTS Where SI_KIND='Category' 

Tag: reverse lookup CMS report name, BO AdminTool tips

2011-08-18

Crystal Report - Delete unused dynamic parameter group

Product: Crystal Report 2008 - XI

Whenever a dynamic parameter is used with a new parameter defined, Crystal Reports automatically creates the following:
1. Data Foundation - Visible in Business View Manager
2. Business Element - Visible in Business View Manager
3. Business View - Visible in Business View Manager
4. List of Values - Visible in Crystal Report
5. Prompt Group - Visible in Crystal Report

Note: "Visible" above means that, even without using the Repository Explorer, these objects are usable from File - Open (Business View Manager) or from parameter selection (Crystal Reports)

Therefore, after the report design is finalized, a lot of unused objects are left behind. This post shows the tool and steps used to remove any of the objects above; I use a prompt group as an example.

For this example, I use a new dynamic view for a table called Budget with a parameter filter on department. The following are automatically created (this is the 2nd report using the Budget table, so "_1" is automatically appended to all the names):
1. Data Connection - Budget DC_1
2. Data Foundation - Budget DF_1
3. Business Element - Budget BE_1
4. Business View - Budget BV_1
5. List of Values - Budget > Department_1
6. Prompt Group - Budget > Department_1 - Prompt Group_1

As you can see above, if I have daily, weekly, monthly, quarterly, and yearly tables, the naming convention above becomes very confusing to reuse.

Follow these steps to delete (or rename) unwanted objects:

1. In the Crystal Reports designer, open the prompt group to find out the List-of-Values (LOV) name
2. Launch Business View Manager, and log in to the Crystal Reports server as Administrator (or any user who has the privilege)
3. Open the Repository Explorer from the pull-down menu View - Repository Explorer
4. The Repository Explorer will show up on the right panel, unless you have moved it elsewhere
5. Inside the Repository Explorer, open the folder where the objects above are stored. In my case, I use the folder name "Custom Daily"
6. Select the prompt name "Budget Prompt Prompt Group 2"
7. On the toolbar, click X to delete
8. Close any open window accessing the object. Business View Manager will show an error if the object is currently open, so inform everyone to close it

Although Crystal Reports (the designer interface) can see the Repository Explorer, it cannot delete these objects.

Automated creation of these objects uses an automated naming convention which is confusing and not easily reusable for other report designs.

Therefore, for a cleaner and proper report design, it is best to delete unused objects, or rename them.

This procedure can also be used to rename the objects: right-click on them and click Rename.

If the prompt has been removed and created again, the prompt group name becomes the following. You can see it is very messy:

Budget > Department_1 - Department_1 - Prompt Group_1

Moreover, this name needs to be updated in the select formula, which hurts readability, especially during troubleshooting.

Please leave me a positive comment if this post helped


3CX Unblock blacklisted IP

Software Type: Software IP PBX
Software Name: 3CX Phone System 10
Vendor Homepage: www.3cx.com

By default, 3CX has an anti-hacker feature which blacklists an IP when phone login fails more than 25 times. It is common during phone installation for a phone to hit this threshold within a couple of minutes.

Follow this procedure to unlock the phone by removing the blacklisted IP:

1. Login to web server, in my case it is http://localhost:5000/MainForm.wgx
2. Login as admin
3. Click on 3CX Phone System - Settings - Advanced
4. Click on tab IP Blacklist
5. Blacklisted IP will be listed with information
5.1. IP Address
5.2. Subnet Mask
5.3. Expiration Date
5.4. Description
6. Select the IP you want to delete, and click Delete on the toolbar

Works with my Cisco 7970


2011-08-12

Superfast and Cheap Database Performance Tuning

Database performance tuning... have you tried all of these?

Index tuning, application tuning, query rewrites, partitioning, sub-partitioning, storage upgrades, faster CPUs, multi-threaded programming, more CPUs, faster Fibre Channel controllers, faster SATA3 controllers.

So what is the next feasible approach... an Oracle Exadata storage server? Too expensive. Need something cheaper? Let's try a few Solid State Disk (SSD) devices and combine them with the database's partitioning feature.

Regardless of the kind of application, e.g. SAP ERP, SAP BusinessObjects, Oracle Database, Oracle WebLogic, Genesys suites, Oracle Hyperion reports, you will always encounter situations where different tuning strategies are required. In terms of best ROI (return on investment), the following are common considerations:

1. Availability of product and database expertise. Even when available, cost is the next consideration; experts are often costly
2. Timeline. They may not understand the overall hardware, application, custom design, and business functions. It takes them a minimum of a week to get up to speed
3. Workflow. In big corporations, lots of time is spent going through the workflow to provision a major hardware change or upgrade
4. Tuning time. Although a DBA may suggest various tuning options, there are cases where the DBA cannot tell precisely which option will work best
5. User testing time. After each tuning round, QA or live users often test it. Time and cost are involved in getting them engaged, especially overtime
6. Management cost. At least one manager needs to be involved to coordinate meetings, discussions, management updates, etc. Another cost to consider
7. Scalability. When the product, database, and servers are maxed out, yet limited by regular storage capacity, SSD technology is the last option to consider. Often, everyone you speak to will propose running everything on SSD drives, which is a very costly option

This is another tuning strategy that fits between a hardware upgrade and database tuning.

Most databases have a partitioning feature (MS SQL Server, DB2, MySQL), and some have sub-partitions (Oracle). The idea is to buy a small number of SSD drives to keep the most frequently accessed data.

I will use a Genesys CTI application with an Oracle database in this article. If there is interest in the other products indicated above, I can expand it to cover them. There are too many applications (IIS, web portals, reporting, ETL, ERP) to cover, so I would like to use one as an example.

The design is:
1. Create at least 2 partitions. For Oracle, sub-partitions can be used for fine-grained space allocation
2. One partition keeps frequently used data; another partition keeps data older than 7 days
3. Recommend a pair of tablespaces for each major table in each database
4. For the databases in Table List #1 and #2, keep all the dbf files on the SSD drive
5. Partitions which keep current data are created in a tablespace whose dbf files reside on the SSD drive
6. Partitions which keep old data are created in a tablespace whose dbf files reside on a regular drive, e.g. SATA, SAS, SCSI, etc.
7. Create a weekly job which merges the current-data partition into the old partition
8. Create a new partition to keep current data, with indexes if applicable. This can be created 1 week or 1 month ahead. Note that it takes up the initial extent space; for a data warehouse database, it could be as big as 300 MB
9. Ensure a database backup with archive logs exists and is tested monthly. SSD drives degrade depending on usage
10. To further improve recovery time, write the RMAN backup script to back up the tablespaces in the following sequence: SYSTEM, tablespaces holding current data, then the rest of the tablespaces
11. To further improve recovery time, keep 1 copy of the RMAN compressed backup on disk. If you have Oracle Standard Edition, use gzip to compress after the backup completes

Therefore, the application and reports will enjoy the following benefits:
1. System tables will always have fast performance
2. Day-to-day transactions will be very smooth
3. Intraday and 7-day reports will be available immediately
4. The data mart will be able to crunch 7 days of transactional data at 10x the speed
5. If an SSD becomes corrupted due to material aging after, say, 3 years, and the database needs recovery, it can recover from disk, which is very fast. Oracle allows restoring only the corrupted dbf files and the respective archive logs. Recovery takes about 1 minute for a 2 GB file
6. Internal database engine transactions will be very smooth, which indirectly improves other applications' database performance

Table List #1

MS SQL Server database

1. master
2. tempdb
3. config
4. datamart
5. any custom report database

Table List #2

Oracle database

1. datamart - selective tables
2. config - keep everything on the SSD drive if possible, else keep the USERS tablespace on SSD. If another tablespace is used to store Genesys data, use that one instead
3. any custom report database
4. For each database, store:
4.1. Tablespace SYSTEM
4.2. Tablespace TEMP
4.3. Tablespace UNDOTBS
4.4. Online redo logs
5. OCS - selective tables

For the OCS Outbound Contact database:

A.
Create 2 partitions for each calling list. Use call_time to split data between the 2 partitions, with the tablespace design as follows:

1. If call_time is null, store on the SSD drive
2. If call_time is less than 7 days old, store on the SSD drive
3. Otherwise, store on regular disk

B.
Store gsw_donotcall_list in a tablespace residing on the SSD drive. Partitioning is optional; if partitioning, use the TIME_STAMP column

C.
Store gsw_req_log in 2 partitions as well, partitioned by the TIME_STAMP column

D.
If the OCS history file (.rep) is captured and loaded into a database table Calling_List_History (or any name), store it in 2 partitions, partitioned by LOG_TIME:
1. log_time within 7 days stores on the SSD drive
2. Others store on regular disk

For the CCA database, or DATAMART:

E.
Keep the following tables in tablespaces residing on the SSD drive. No partitioning is needed:

  1. AGG_COLUMN
  2. BASIC_STAT
  3. CHUNK_LOAD_ERR_LOG
  4. CHUNK_LOG
  5. COMP_STAT
  6. COMP_STAT_CATEGORY
  7. COMP_TO_BASIC_STAT
  8. CONFIG_SERVER
  9. DM_PROPERTY
  10. ERROR_CHUNK
  11. FOLD_TEMP_TO_COMP
  12. FOLD_TO_COMP_STAT
  13. FOLDER_TEMPLATE
  14. OBJ_TO_LAYOUT
  15. OBJ_TO_OBJ
  16. OBJECT
  17. OUTCOME_AGG_COLUMN
  18. PENDING_AGG
  19. PURGING_RULES
  20. REP_TO_TAB
  21. REPORT_FOLDER
  22. REPORT_LAYOUT
  23. REPORT_TABLE
  24. REPORT_VIEW
  25. SEQUENCES
  26. SOURCE
  27. STAT_PARAM
  28. STAT_TO_PAR
  29. STATISTIC
  30. TAB_INFO_TYPE
  31. TIME_COLUMN
  32. TIME_FUN_PARAM
  33. TEMP_TFUN_PAR_VAL
  34. TIME_FUN_PARAM_VAL
  35. TIME_FUNCTION
  36. TIME_ZONE
  37. VIEW_AGG_COLUMN
  38. VIEW_TEMP_AGG_COL
  39. VIEW_TEMP_TIME_COL
  40. VIEW_TEMPLATE
  41. VIEW_TIME_COLUMN
  42. All O_nnn_OBJ_DIM
  43. All S_nnn_STAT_DIM


Partition the following tables into 2, one partition stored on SSD and the other on regular disk:
1. LOGIN - by TIME, which is sec since 1970-01-01
2. PURGING_LOG - by PURGE_START_TIME
3. QINFO - by STARTTIME
4. REP_REBUILD_LOG - by LAST_TIME_KEY
5. STATUS - by STARTTIME
6. R_nnn_STAT_RES - by TIMEKEY
7. T_nnn_TIME_DIM - by TIMEKEY

Create a database job or shell script to merge current data into the 7-day-old partition, which resides on regular disk.

User coordination:
1. Inform users of the weekly maintenance window. Minimum 1 hr
2. Build an alert or e-mail trigger into this script to notify on failure
3. Ensure an archive log backup takes place immediately after the activity to free up Fast Recovery Area (FRA) space

Option 1: Technical steps
1. Create a control table which keeps track of partition name, table name, subpartition name, creation date, merge date, time_format, range1, range2, retention, is_current, is_archive
2. Based on the control table, determine the partition name which keeps current data (is_current=1), and old data (is_archive=1)
3. ALTER TABLE T_1_TIME_DIM MERGE PARTITIONS T_1_201143, T_1_2008 INTO T_1_2011 COMPRESS
4. ALTER TABLE four_seasons MODIFY PARTITION quarter_two REBUILD UNUSABLE LOCAL INDEXES. Skip if using global indexes
5. Dynamically add a new partition for the next month based on values from the control table. Syntax to add based on week 33 of year 2011:

ALTER TABLE T_1_TIME_DIM ADD PARTITION T_1_201133 VALUES LESS THAN ('20110814')

6. Update the control table to indicate the partitions were merged, and set is_current to the new partition
7. This option can enable compressed tables and indexes, which reduces the database size and saves storage cost
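The weekly job in steps 2-6 can be scripted by generating the DDL from the control-table values. A sketch; the helper name, its arguments, and the partition names are illustrative assumptions following this post's T_1_YYYYWW naming convention:

```python
def weekly_partition_ddl(table, prefix, year, week, archive_part, range_end):
    """Generate the weekly maintenance DDL from control-table values:
    merge the no-longer-current weekly partition into the archive partition,
    rebuild its local indexes, and add next week's partition ahead of time."""
    current = f"{prefix}_{year}{week:02d}"      # e.g. T_1_201133
    nxt = f"{prefix}_{year}{week + 1:02d}"      # next week's partition
    return [
        f"ALTER TABLE {table} MERGE PARTITIONS {current}, {archive_part} "
        f"INTO PARTITION {archive_part} COMPRESS",
        f"ALTER TABLE {table} MODIFY PARTITION {archive_part} "
        f"REBUILD UNUSABLE LOCAL INDEXES",
        f"ALTER TABLE {table} ADD PARTITION {nxt} "
        f"VALUES LESS THAN ('{range_end}')",
    ]

for stmt in weekly_partition_ddl("T_1_TIME_DIM", "T_1", 2011, 33, "T_1_2011", "20110821"):
    print(stmt)
```

The script only builds the statements; executing them (and updating the control table afterwards) would be done through your usual database connection or a SQL*Plus wrapper.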

Option 2: Technical steps
1. Don't merge partitions; instead, move the physical dbf files from the SSD drive to a regular disk drive
2. This is a much faster process because it does not copy the data through the database, but keeps the existing content of the dbf file
3. This approach will indirectly introduce 52 partitions per partitioned table per year
4. If housekeeping is preferred to avoid managing so many partitions, a quarterly merge activity can be scripted with similar logic as above
5. Prepare to move the partition residing on SSD, with tablespace name T_1_201133 and dbf file T_1_201133_01.dbf:

alter system checkpoint;
alter system switch logfile;
alter system checkpoint;
alter tablespace T_1_201133 offline;

6. Move the file from the SSD disk in /u01 to the regular disk in /u02:

mv /u01/datamart/T_1_201133_01.dbf /u02/datamart/

7. Rename the dbf file in the database:

alter database rename file '/u01/datamart/T_1_201133_01.dbf' to '/u02/datamart/T_1_201133_01.dbf';

8. Perform recovery, if the database is in archive log mode:

recover tablespace T_1_201133;

9. Bring the tablespace online:

alter tablespace T_1_201133 online;

10. A typical 7200 rpm disk can copy at 30 MB/s, while a 15000 rpm disk can copy at 110 MB/s. In a SAN configuration, they may be able to achieve 150 MB/s

2011-08-09

Genesys Outbound Contact Server - Log Analysis


This post shows how to read the Genesys Outbound Contact Server (OCS) log by cross-referencing it with the generated rep file.

In my example, I loaded the OCS rep file into the database as a table named Record_History. You can use MS SQL Server SSIS or Oracle SQL*Loader to do that. A UNIX shell script which generates INSERT statements from the command line works as well. Tell me if you are interested in any of those scripts. Genesys does not provide a tool for this, but it is a common task that any Genesys outbound developer will need for custom reporting or troubleshooting through the database.

Another simple way is to open the OCS rep file in Excel or OpenOffice (StarOffice) with "|" as the field delimiter.
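The same "|" delimiter makes a quick script-based load easy too. A sketch using Python's csv module; note the sample lines and field layout below are fabricated for illustration, as real rep files carry many more columns:

```python
import csv
import io

# Fabricated two-line sample in the pipe-delimited rep format
sample = "171|0|9036345376|7|1312832429\n171|0|9036345376|8|1312832445\n"

# csv.reader handles the splitting; each row becomes a list of strings
# ready to feed into an INSERT statement
reader = csv.reader(io.StringIO(sample), delimiter="|")
rows = [row for row in reader]
print(rows[0])  # ['171', '0', '9036345376', '7', '1312832429']
```

For a real file, replace io.StringIO(sample) with open("history.rep") (a hypothetical file name).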

The following messages map to rep file entries for a preview record:

message RequestDistributeUserEvent - action 2 (preview record in GAD)
message EventDialing - no entry in rep file
message EventNetworkReached - no entry in rep file
message EventEstablished - action 7 (call established)
message EventReleased - action 8 (call released)
message EventUserEvent with 'GSW_AGENT_REQ_TYPE' 'UpdateCallCompletionStats' - action 10 (record updated), where the agent modifies or enters a value into any column in GAD
message EventUserEvent with 'GSW_AGENT_REQ_TYPE' 'RecordProcessed' - action 14 (record processed event)
message RequestDistributeUserEvent with 'GSW_USER_EVENT' 'RecordProcessedAcknowledge' - action 12 (call completed)
message EventACK with AttributeThisDN '036345376' - action 11 (record processed)

Assume phone number dialled is 036345376


For action 10, there will be an SQL statement updating the calling list table. The request and its acknowledgment can be captured to calculate database response time:

15:42:53.357 CM_DBCallRecord(402-398-966): DBServer 'DBServer_Prim for cl_Survey_Beer (528)' SQL:  update cl_Survey_Beer set call_result=28,agent_id='1101',contact_info='9036345376',call_time=1312832429,switch_id=100 ,Title='',FirstName='',MiddleInitial='',LastName='Cona Drinks Sdn Bhd',Address1='28 Jalan RALGREEN',City='Kepong',Province='KL',PostalCode='52100',HomePhone='036345376',WorkPhone='036345376',ReferenceNumber='61600',Language='E',Note='Apr 1 Called' where chain_id=171 and chain_n=0 [ReqID=26811]

Search for the ReqID to confirm the update completed:

15:42:53.373 CM_DBCallList(402-398-966): DBServer 'DBServer_Prim for cl_Survey_Beer (528)' MSG_SQLEXECUTED(DBM_SUCCESS) [ReqID=26811]
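Subtracting the request timestamp from the MSG_SQLEXECUTED timestamp gives the database response time. A sketch that parses the HH:MM:SS.mmm prefix of the two log lines above:

```python
from datetime import datetime, timedelta

def log_delta_ms(sent, acked):
    """Return milliseconds between two OCS log timestamps (HH:MM:SS.mmm)."""
    fmt = "%H:%M:%S.%f"
    delta = datetime.strptime(acked, fmt) - datetime.strptime(sent, fmt)
    return delta / timedelta(milliseconds=1)

# Timestamps from the update request and its MSG_SQLEXECUTED acknowledgment above
print(log_delta_ms("15:42:53.357", "15:42:53.373"))  # 16.0
```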

There is a small statistics section in the OCS log which summarizes the times of important events, like below:

phone                   '9036345376'
pCallRes                'Answer'
timeDialing             '15:42:15.357'
timeClientRinging       '15:42:15.576'
timeCPDFinished         '15:42:43.466'
timeAgentEstablished    '15:42:43.466'
timeAgentCallReleased   '15:42:49.810'
deltaOCS_CPD            0

The table gsw_request_log holds other call entries, and contains the following columns:
phone, request_type, switch_name, list_name, dn, agent_id, tenant_dbid, time_stamp, dnc_message

One important log entry, which indicates the call will not be made again, is below:

15:42:59.545 Trc 61007 Record is removed   Chain_id: 171 Chain_n 0 Record_id: 171 Phone: 9036345376

