Showing posts with label genesys.

2012-07-16

Genesys: Why Get Duplicate Agent/Queue in Custom Reports?

Product: Genesys
Module: DMA, CCA, Brio, Crystal Report, Hyperion

Many report designers do not understand the purpose of a primary key index; this is not limited to Genesys products.

As a result, when reports are designed around other fields that can contain duplicates, it is possible to get duplicate usernames, queues, route points, agent groups, and so on.

In this example, I will use an agent report. Many call centers identify agents by other fields, such as the Genesys login ID, phone login ID, or Genesys Employee ID. In reporting, especially for CCA (DMA), the primary key is OBJECT_ID, which maps to the agent's name and Genesys login ID (username).

Instead, many designers use the human-readable name and username as filtering criteria, which will produce duplicate records in the following situations:
1. The agent is re-created, for example a seasonal worker or someone who re-joins the company
2. The agent's name is changed, perhaps to fix a typo in the name or title
3. The agent's Genesys login ID is changed

Solution
1. Join the report tables (O_, V_, T_, S_) using column OBJECT_ID, as sketched below
2. Speak to the user/manager if they are asking for something that the standard CCA tables cannot provide, such as a join with the cfg_ tables in the CONFIG database
3. Confirm with the user whether deleted agents should be displayed. If not, that is another quick way to fix the duplicate records without joining on OBJECT_ID: add an extra WHERE filter such as "...and delete_ts is null"
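For item 1, a minimal sketch of such a join, assuming the CCA DataMart O_/T_/S_ naming used elsewhere in this blog (the time-dimension table is named T_AGENT_DAY here purely for illustration); verify the table and column names against your own DataMart first:

-- Join statistics to the object and time tables on the surrogate keys,
-- not on the agent's display name or username
SELECT o.*, t.*, s.*
FROM   S_AGENT_DAY s
JOIN   O_AGENT_DAY o ON o.OBJECT_ID = s.OBJECT_ID
JOIN   T_AGENT_DAY t ON t.TIME_KEY  = s.TIME_KEY
WHERE  o.DELETE_TIME IS NULL   -- optional filter from item 3: hide deleted agents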

2012-03-01

Genesys OCM: Error 80FF0008 Data Type Converting

Product: Genesys
Component: Outbound Contact Manager (OCM)

When clicking on a calling list in OCM, error 80FF0008 is encountered. This post covers this potential OCM error related to data type conversion.

Possibilities:

  1. The calling list has a Dialing Filter that contains a data conversion. Review and remove the conversion formula, e.g. cast, convert, to_date, to_number, field_name/100. Temporarily disable the Dialing Filter to confirm (see the sketch below)
  2. The calling list has a Viewing Filter that performs a data conversion. Temporarily disable the Viewing Filter to confirm, then review and remove the conversion formula
  3. The data type defined for the campaign is different from the type defined in the database
  4. The format of a specific column is wrong (OCM - Format - check the column)
  5. A custom data import program (not through OCM) has a bug and loads the data wrongly
  6. The data is wrongly formatted, which causes the custom data import program to load an improper value
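As a hypothetical illustration of item 1 (assuming the filter is expressed as a SQL condition on calling-list columns; the column name account_balance is invented for the example):

-- Problematic: forces a type conversion on every record and can raise the 80FF0008 error
cast(account_balance as int) > 1000

-- Safer: compare using the column's native type, no conversion needed
account_balance > 1000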


Complications:
1. This error in OCM does not generate any log on the backend OCS server, so it is impossible to tell which column is causing the error
2. The error message does not indicate whether the Dialing Filter or the Viewing Filter is at fault

2012-02-27

Genesys Report Administration - Handling of Deleted Agent


Product: Genesys
Component: Configuration Manager, Stat Server, DMA, CCA

Call centers are an industry with high agent turnover. Therefore, it is crucial to design a business process that handles this situation without degrading Genesys system performance.

Facts to consider
1. If an agent user account is left in CME unused, Stat Server and DataSourcer will still actively gather its statistics. Server utilization is impacted, and space is wasted in the DATAMART database
2. If the agent user account is disabled in CME, the situation is the same: Stat Server and DataSourcer still gather its statistics. Same result as above, with the extra security control visible to the supervisor
3. If the agent user account is set to no privilege (whether enabled or disabled), Stat Server will get a privilege error and DataSourcer will still populate a value into the database. Stat Server saves time in gathering statistics, but incurs overhead in error handling
4. If the agent user account is deleted from CME, custom reports designed to pick custom properties from CME will not be able to display historical data for that agent. This does reduce the load on Stat Server and DataSourcer, and saves storage in DATAMART
5. If the agent user account is re-created many times in CME, CCA reports will display the agent name multiple times. Custom reports must be designed to handle this situation. This still reduces the load on Stat Server and DataSourcer, and saves space in DATAMART

So the recommended setup is as follows
1. In CME, create a folder to keep old agent logins for 1 year. The PBX agent login will be removed, but the account can be reused if the agent returns
2. In CME, create a folder to hold agents pending deletion. Once an agent is deleted, Stat Server and DataSourcer will no longer gather its statistics
3. In the database, create a custom table to hold custom agent properties, such as PBX login, e-mail address, and external login
4. Ensure custom reports do not select from the CME tables (CONFIG database cfg_person, cfg_agent_login) for columns such as employee_id, login_code, address_line1. This information is not stored in ODS or DATAMART (in the CCA module)
5. If a custom report needs these columns, create a custom table to store them (see the sketch after this list)
5.1. Create a process to sync the data daily (or hourly) from cfg_person and cfg_agent_login
5.2. The custom table must have TS_CREATE, TS_MODIFY, TS_DELETE, dbid, and tenant_dbid (if there are multiple PBXs)
5.3. Get the agent's TS_CREATE and TS_DELETE from the ODS database, referring to table OBJECT, columns ADD_TIME and DELETE_TIME
5.4. The sync program will populate TS_MODIFY with the local time
5.5. Create a primary key suited to the custom report. My recommendation is dbid, username, and add_time
6. The sync program can be written as a shell script, a database stored procedure (with a database job), or a regular program (C#, Java)
7. In the custom reporting, join with this custom table and always ensure TIME_KEY or BEGIN_TIME is within ADD_TIME and DELETE_TIME (if not null), or greater than ADD_TIME (if DELETE_TIME is null)

E.g.
(BEGIN_TIME BETWEEN ADD_TIME AND DELETE_TIME)
OR (BEGIN_TIME >= ADD_TIME AND DELETE_TIME IS NULL)

8. In DMA, click on the Object tab, and double-click on any object to bring up the object selection screen
9. Clear the checkbox for agents which have been deleted
10. This will stop DataSourcer from gathering their statistics, and reduce its load
11. Nonetheless, Stat Server will still monitor the agent, so there is no reduction in Stat Server's load
12. It is not recommended to store historical values for the extra fields (e.g. address_line1); only store the latest value. This prevents duplicate records when joining with other tables
13. If the end user decides to display only active agents, create a new report that uses custom views pointing to O_AGENT_NO_AGG, O_AGENT_DAY, O_AGENT_WEEK, O_AGENT_MONTH, O_AGENT_QUARTER, O_AGENT_YEAR, with the following criteria


CREATE VIEW Z_O_AGENT_NO_AGG AS
SELECT * FROM O_AGENT_NO_AGG WHERE DELETE_TIME IS NULL

14. If the end user decides to display agents deleted in the last 31 days as well as active agents, create the view with a slightly longer WHERE clause


CREATE VIEW Z_O_AGENT_NO_AGG AS
SELECT * FROM O_AGENT_NO_AGG WHERE DELETE_TIME IS NULL OR DELETE_TIME >= SYSDATE - 31

Note: The above is for Oracle database 7.3 onward

15. The standard report does filter the number of days in the interval report. It is a hidden text box which can easily be modified. If I'm not mistaken, it is 90 days

16. This view filter applies to Queue, Route Point, Calling List, Campaign, etc. Point all the reports to these custom views instead of the standard views
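A minimal sketch of the custom table from items 5.1-5.5, shown with MS SQL Server types (use DATE/TIMESTAMP and NUMBER on Oracle). The table name Z_AGENT_INFO and the property columns pbx_login, email and external_login are examples only:

CREATE TABLE Z_AGENT_INFO (
    dbid           INT          NOT NULL,  -- agent's dbid from cfg_person
    tenant_dbid    INT          NOT NULL,  -- needed if there are multiple tenants/PBXs
    username       VARCHAR(64)  NOT NULL,  -- Genesys login ID
    pbx_login      VARCHAR(32)  NULL,      -- example custom property
    email          VARCHAR(128) NULL,      -- example custom property
    external_login VARCHAR(64)  NULL,      -- example custom property
    ts_create      DATETIME     NOT NULL,  -- from ODS OBJECT.ADD_TIME (item 5.3)
    ts_modify      DATETIME     NOT NULL,  -- set by the sync program to local time (item 5.4)
    ts_delete      DATETIME     NULL,      -- from ODS OBJECT.DELETE_TIME, NULL while active
    CONSTRAINT pk_z_agent_info PRIMARY KEY (dbid, username, ts_create)  -- per item 5.5 (ts_create = add_time)
);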


Benefits:
1. DATAMART will not store new statistics for deleted agents
2. The supervisor has the option to re-use an existing ID if the agent returns within 1 year. The DATAMART only contains extra zero-value data for these old agents
3. The parameter screen will show up faster due to the filtering of unused data, especially the agent listing
4. The end user can keep track of additional columns without linking the tables to the CONFIG database, which only contains real-time info

Other Info:
1. It is a common business requirement to show additional agent info configured in CME in custom reports. This additional information is not stored in ODS or DATAMART, but in the CONFIG database, e.g. tables cfg_person, cfg_login_info, cfg_agent_login. The custom table above can be used to copy the columns from these 3 tables so that historical reports are not affected by object deletion or permission changes in CME

2012-02-02

Genesys Hacking #3: Guess/Hack Genesys Database Password

Product: Genesys
Database Type: MS SQL Server
3rd Party Software: http://www.passwordunlocker.com Price USD$79.95

This article shows the steps to hack into the Genesys database server, if it is MS SQL Server (any version).

1. Download the Database Password Recovery Tool from the company above
2. Shut down the Genesys applications and MS SQL Server
3. Install the utility on the MS SQL Server host (if you want to hack the password)
4. Alternatively, copy the master.mdf file to the local PC where the utility is installed (if you only want the username listing). The file name must be master.mdf
5. Launch the utility, and open master.mdf for the MS SQL Server
6. A list of database usernames will be displayed. Stop here if you don't need to hack the password
7. Click on username sa, and press Ctrl-C to change the password (back up master.mdf if worried about corruption)
8. Start up MS SQL Server, and you will be able to log in as sa and reset the rest of the passwords


2012-01-20

Genesys OCS: Force Campaign Callback

Product: Genesys 5.x - 8.x
Component: Outbound Contact Server

Besides having an agent set a record to campaign callback (via GAD or a custom softphone), it is also possible to configure it directly from OCM.

Following are the steps to do it

  1. Set column record_type = Campaign CallBack. In export mode or SQL*Plus, the value is 6
  2. Set column dial_sched_time to the desired callback time in the local timezone. In SQL*Plus this is an epoch time, i.e. the number of seconds since 1970-01-01, adjusted for the timezone
  3. Set record_status = Ready. In export mode or SQL*Plus, the value is 1 (see the SQL sketch below)
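A hedged SQL sketch of the three steps above; the calling list table name, the identifying column and the epoch value are placeholders, and the epoch must be computed from the desired local callback time:

UPDATE my_calling_list
SET    record_type     = 6,           -- Campaign CallBack
       record_status   = 1,           -- Ready
       dial_sched_time = 1327049100   -- example epoch seconds; adjust for timezone
WHERE  contact_info = '5551234567';   -- assumed column; use whichever column identifies the record in your list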
Testing procedure:
  1. Log in to GAD and change status to Ready
  2. Launch OCM. The callback record's record_status will remain Ready until the scheduled time passes
  3. Wait an extra 1.5 - 2 minutes, and record_status will change to "Retrieved"
  4. The agent should get the callback record after retrieving a couple of regular records
Other factors to consider:
  1. The timezone is different, and dial_sched_time needs to be adjusted to match local time
  2. The daily_from and daily_till window does not cover the current time, so OCS won't pick up the record
  3. The agent is logged in to the physical phone only, not GAD. GAD or a custom softphone is needed
  4. The wrong date was configured in dial_sched_time. Use OCM to verify
In case troubleshooting is required, search the OCS server log using the following keywords

  1. Campaign name
  2. Calling list name
  3. @RecType = 6
  4. Buffer Size Coefficient Optimal value is set too high. Configure it lower
  5. RequestMakeCall
  6. 'Record_Type' 6
  7. Place name of the test agent
  8. EventUserEvent(PreviewRecordRequest)
  9. CallPreview[]::RequestMakeCall {
  10. message RequestDistributeUserEvent
  11. SQL: sp123423182267 @CampID = '123' (check the log to determine the actual stored procedure number, and cfg_campaign for the CampID)



2012-01-17

Genesys Database Analysis: MS SQL Server tempdb

Product: Genesys
Module: CCA, ODS, OCS
Related Product: MS SQL Server

It is common to have customizations in several Genesys components, such as reporting (ODS or CCA) and outbound (OCS).

If MS SQL Server is used, consultants or engineers must monitor tempdb space usage to identify any poorly designed component, such as an SSIS package, database stored procedure, trigger, or replication.

Although bouncing MS SQL Server will free up the space, as will shrinking the mdf file (which can be done online), finding the root cause of the poor design is the ultimate solution. This post will help to deal with the temporary disk usage, but not to determine that root cause.

For Oracle, the equivalents are the TEMP tablespace and the UNDOTBS tablespace. However, the nature of their usage is totally different from tempdb. If the TEMP tablespace grows, the buffer cache (in RAM) needs to increase. If UNDOTBS grows, review whether the data volume is too big (a potential bug if not DataMart related), the storage is too slow (SQL queues up), there is CPU contention (again SQL queues up), or reduce UNDO_RETENTION (assess the application's requirement for the undo data).

For UNDO_RETENTION, Genesys applications normally process very quickly and do not need more than a minute of retention. This parameter is normally tuned up because of customizations in ODS, CCA, and OCS.
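For Oracle, a quick hedged check of recent undo consumption (per 10-minute interval) before touching UNDO_RETENTION:

-- Recent undo blocks consumed and longest-running query per interval
SELECT begin_time, end_time, undoblks, maxquerylen
FROM   v$undostat
ORDER  BY begin_time DESC;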

The following SQL will help in understanding the usage of tempdb.


tempdb Analysis

Usage breakdown

SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8  as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage


usr_obj_kb internal_obj_kb version_store_kb freespace_kb mixedextent_kb
---------- --------------- ---------------- ------------ --------------------
2048       604096          0                36232640     4416


Top 20 Usage by session

SELECT top 20 * FROM sys.dm_db_session_space_usage
ORDER BY (user_objects_alloc_page_count +
 internal_objects_alloc_page_count) DESC


session_id database_id user_objects_alloc_page_count user_objects_dealloc_page_count internal_objects_alloc_page_count internal_objects_dealloc_page_count
---------- ----------- ----------------------------- ------------------------------- --------------------------------- -----------------------------------
108        2           0                             0                               51072                             51072
114        2           0                             0                               12480                             12480
92         2           0                             0                               1016                              0
124        2           39                            37                              0                                 0
94         2           0                             0                               16                                0
143        2           16                            7                               0                                 0
136        2           13                            8                               0                                 0
55         2           12                            6                               0                                 0
85         2           12                            6                               0                                 0
84         2           11                            0                               0                                 0
134        2           10                            6                               0                                 0
97         2           9                             0                               0                                 0
138        2           9                             9                               0                                 0
126        2           8                             5                               0                                 0
137        2           8                             8                               0                                 0
98         2           7                             6                               0                                 0
112        2           7                             6                               0                                 0
74         2           6                             1                               0                                 0
93         2           6                             0                               0                                 0
129        2           6                             6                               0                                 0


Trace the session ID and determine the username. The Genesys DAPs and Genesys DB Servers must be configured with distinct usernames in order to determine which components are affected. If a common username is used, try to determine the originating IP address or program name, as in the sketch below
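For example, a session_id from the result above can be mapped to its login, host, program and client IP (session_id 108 is taken from the sample output):

SELECT s.session_id, s.login_name, s.host_name, s.program_name, c.client_net_address
FROM   sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
WHERE  s.session_id = 108;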

Top 20 current space allocated

SELECT top 20 * FROM sys.dm_db_task_space_usage
ORDER BY (user_objects_alloc_page_count +
 internal_objects_alloc_page_count) DESC


session_id database_id user_objects_alloc_page_count user_objects_dealloc_page_count internal_objects_alloc_page_count internal_objects_dealloc_page_count
---------- ----------- ----------------------------- ------------------------------- --------------------------------- -----------------------------------
108        2           0                             0                               51072                             51072
114        2           0                             0                               12480                             12480
92         2           0                             0                               1016                              0
124        2           39                            37                              0                                 0
94         2           0                             0                               16                                0
143        2           16                            7                               0                                 0
136        2           13                            8                               0                                 0
55         2           12                            6                               0                                 0
85         2           12                            6                               0                                 0
84         2           11                            0                               0                                 0
134        2           10                            6                               0                                 0
97         2           9                             0                               0                                 0
138        2           9                             9                               0                                 0
126        2           8                             5                               0                                 0
137        2           8                             8                               0                                 0
98         2           7                             6                               0                                 0
112        2           7                             6                               0                                 0
74         2           6                             1                               0                                 0
93         2           6                             0                               0                                 0
129        2           6                             6                               0                                 0



Current active request

SELECT t1.session_id, t1.request_id, t1.task_alloc,
  t1.task_dealloc, t2.sql_handle, t2.statement_start_offset,
  t2.statement_end_offset, t2.plan_handle
FROM (Select session_id, request_id,
    SUM(internal_objects_alloc_page_count) AS task_alloc,
    SUM (internal_objects_dealloc_page_count) AS task_dealloc
  FROM sys.dm_db_task_space_usage
  GROUP BY session_id, request_id) AS t1,
  sys.dm_exec_requests AS t2
WHERE t1.session_id = t2.session_id
  AND (t1.request_id = t2.request_id)
ORDER BY t1.task_alloc DESC


session_id request_id  task_alloc           task_dealloc         sql_handle                                                                                                                         statement_start_offset statement_end_offset plan_handle
---------- ----------- -------------------- -------------------- ---------------------------------------------------------------------------------------------------------------------------------- ---------------------- -------------------- ----------------------------------------------------------------------------------------------------------------------------------
114        0           293632               53960                0x020000000BD83117633F7C85432DD122569DBCB102F7C1B6                                                                                 0                      -1                   0x060007000BD83117B8A16B33000000000000000000000000
141        0           0                    0                    0x02000000ED9BCE2B47A4FA5065E6C3EE5C61717921937364                                                                                 72                     -1                   0x06000700ED9BCE2BB8416F72000000000000000000000000
146        0           0                    0                    0x02000000F3399A2D68766038497DE81333B6EFE9EFA6D29F                                                                                 0                      -1                   0x06000100F3399A2DB8613432000000000000000000000000
1          0           0                    74968                NULL                                                                                                                               NULL                   NULL                 NULL
2          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
3          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
4          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
6          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
7          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
8          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
9          0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
10         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
12         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
13         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
14         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
15         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
16         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
17         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
18         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
19         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
20         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
21         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
22         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL
23         0           0                    0                    NULL                                                                                                                               NULL                   NULL                 NULL



Trace the sql_handle to determine the SQL statement which used up the space
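A hedged example, feeding the first sql_handle value from the output above into sys.dm_exec_sql_text to get the statement text:

SELECT text
FROM   sys.dm_exec_sql_text(0x020000000BD83117633F7C85432DD122569DBCB102F7C1B6);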

Versioning usage top 20

SELECT top 20 transaction_id, transaction_sequence_num, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC


transaction_id transaction_sequence_num elapsed_time_seconds
-------------- ------------------------ --------------------
2225005580     2662907                  3


Genesys Hacking - CCA Finding Table Name from Column Name

Product: Genesys
Component: CCA DataMart

For any consultant who is not familiar with databases, documenting and tracing CCA reports is time consuming. Especially if the report was designed by someone else, reverse engineering and analysis are often required.

This post contains tips to check table and column definitions quickly in MS SQL Server and Oracle.

#1 Find table definition by table name S_AGENT_DAY

Oracle

select owner, table_name, column_name, data_type, data_length from all_tab_columns where table_name = 'S_AGENT_DAY'

Note: Or replace all_tab_columns with dba_tab_columns


MS SQL Server

select TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH from
INFORMATION_SCHEMA.COLUMNS where table_name = 'S_AGENT_DAY'


#2 Find table definition by column name T_ANSWER
This SQL can be used to find the table name from a specific column name found in Hyperion

Oracle
select owner, table_name, column_name, data_type, data_length from all_tab_columns where column_name = 'T_ANSWER'

Note: Or replace all_tab_columns with dba_tab_columns

MS SQL Server
select TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH from INFORMATION_SCHEMA.COLUMNS where column_name = 'T_ANSWER'

2012-01-13

Genesys: Old Technology in StatServer


Product: Genesys
Version: v5 - v8

It has been more than 10 years, but Genesys StatServer is still not a multi-threaded application. I always had the impression it was, but I just read the post referenced below, dated 2010-12-30, stating that it is not.

Inspecting CPU utilization, I always see all CPUs being utilized on multi-CPU machines. I wondered how the program was written to distribute load across different CPUs. I understand that for Intel multi-core processors, Hyper-Threading distributes the load within the CPU itself, so no programming change is required.

However, for a single-threaded application such as StatServer, there must be some API which allows the program to make use of multiple CPUs. I must be behind on programming APIs.

In summary, a single-threaded application can't fully utilize a multi-CPU machine, regardless of operating system. So in an environment with high call volume, many statistics, many agents (such as many part-timers with high turnover), many objects (place, place group, virtual queue, queue group, calling list, campaign, etc.), and a high log level, StatServer may not be able to utilize all CPUs, and will either intermittently stop responding (reporting data lost) or crash.

The options to work around these are:
1. Install a faster physical CPU (not a virtual CPU) per core
2. Stay away from virtual machines to fully utilize raw CPU power
3. Invest in multi-core rather than multi-CPU
4. For Intel processors, overclock the BCLK (base frequency), disable CPU Spread Spectrum, and adjust PLL voltage, Vcore voltage, etc. This will void the warranty
5. Consider StatServer to be hitting a CPU bottleneck when any single core hits 100% continuously, not the average across cores, nor the average of one multi-core CPU. Most performance monitoring software needs custom configuration to show max per core, max per processor, and max per process. Many people look at average CPU utilization, which is totally wrong, and assume StatServer is not hitting a CPU constraint


To demonstrate the difference between single-threaded and multi-threaded applications, I borrowed an Intel Core i7-3960X (6 cores per processor) benchmark, see below

WinZip v15.5 Pro (single-threaded)
7-Zip v0.955 (multi-threaded)
Compression Elapsed Time:
WinZip 3:10 minutes
7-Zip 0:32 minutes

http://www.tomshardware.com/reviews/sandy-bridge-e-efficiency-core-i7-3960x,3075-4.html

Regardless of the size and content of the files being compressed, a multi-threaded application will be several times faster than a single-threaded one.

Moreover, on a 32-bit OS (such as 32-bit Windows), StatServer must process the data fast, or it will need to allocate more RAM for its variables (linked lists, arrays, etc.) and hit the 2 GB per-process limit. This is ugly because StatServer will crash instantly and affect the other modules it talks to (check in CME which modules use the crashing StatServer), e.g. CCA, CCPulse, URS, GIS, WFM.

Reference:
http://solutionsearch.genesyslab.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=25349&sliceId=1&docTypeID=DT_QUESTIONANDANSWER_1_1&dialogID=19570934&stateId=0 0 19568750

2011-12-30

Genesys: Solving Config Server in read-only mode

Product: Genesys CTI Suite 7.x to 8
Module: Configuration Server

Under an unknown scenario, Genesys Config Server may start and complain that another instance of Config Server is running. This happens more often in an HA environment where there are 2 machines running primary/backup Config Servers.

This post assumes you have done all checks and verification on the backup Config Server, verified in Task Manager that no other Config Server is running, and ruled out network or database issues.

Common message in Config server's log: Another Configuration Server is detected

Option 1: Possibility of a Windows runaway process
1. Try rebooting Windows if possible. If you can't, skip this. On Windows, you can't trust Task Manager alone; use Process Explorer to verify whether Config Server has really terminated

Option 2: Possibility of the backup Config Server improperly switching to the primary role, stuck in a transient state and never completing the process
1. Shut down the backup Config Server, then restart the primary Config Server

Option 3: The DB Server used by Config Server is passing wrong info
1. Restart the Genesys DB Server used by Config Server. Check the Connectivity tab of the Config Server application to determine the name of the DB Server
2. Then restart the primary Config Server

Option 4: Possibility that the Oracle listener is down
1. Check to ensure the Oracle listener is up (if using an Oracle database)
2. The Oracle database and listener are 2 different processes, so the database being up does not mean the listener is up or reachable (e.g. if the listener is set up for RAC, fail-over, or load balancing)
3. Run SQL*Plus from the Genesys DB Server host (the one used by Config Server) and log in with the username and password configured in CME to ensure it can connect. This assumes Oracle is not installed on the same box as DB Server
4. If DB Server and the Oracle database server are installed on the same box, then use the following command to test

sqlplus username/password@configdbprod

Note: Fill in the actual username and password. Replace the name configdbprod with the actual listener/service name

5. The name above may not be the database name, although it is often configured that way. If using RAC, there are many more names. Use the name configured for DB Server. If there is a problem, test the other names to determine the root cause. Ask me if you need assistance with Genesys in an Oracle RAC database environment

Option 5: Table config.cfg_refresh contains an invalid value that confuses Config Server
1. Verify the content of table cfg_refresh (see the query after this list). Ask me if you need to know how to find the database and query it; I am a database expert as well
2. During every shutdown of the primary and backup Config Servers, verify its content before and after
3. During every startup of the primary and backup Config Servers, verify its content
4. While keeping Config Server down, note down its values (there are only 2 columns in ver 7.x) before making any change, once you have recognized the pattern
5. Don't modify the value of column NOTIFY_ID
6. Change the value of column REFRESH

update cfg_refresh set refresh = 0

7. Start up the primary Config Server
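A hedged example of the verification query from step 1; the two column names follow those used above (7.x layout):

select notify_id, refresh from cfg_refresh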

Long term solution:
1. Upgrade to Config Server 8.0.300.30 or higher. This is a product bug


2011-12-21

Genesys ETL Run-time contains inappropriate object or statistic

Product: Genesys CTI
Module: ETL Runtime Service

Encountered the following error in log ETL_Service.20121213_214131_050.log?

15:39:30.593 Src.ChunkData.1 Chunk#1_2075 contains inappropriate object or statistic


The following are the steps to troubleshoot.

Connect to the ODS database

Find out the schedule # from chunk #
select  * from ol_chunk_log where log_id = 2075


LOG_ID SCHEDULE_ID BEGIN_TIME              UPDATE_TIME             CHUNK_TYPE TABLE_NAME IS_LAST_CHUNK RECEIVE_TIME            TRANSFERRED_TIME        DELETE_TIME IS_NULL_CHUNK DATA_RECORDS_NUM
------ ----------- ----------------------- ----------------------- ---------- ---------- ------------- ----------------------- ----------------------- ----------- ------------- ----------------
2075   9           2010-09-26 20:00:00.000 2010-09-26 20:15:00.000 NULL       OL_DATA3   0             2010-09-26 20:15:02.000 2010-09-26 16:19:06.560 NULL        0             227


Its value is 9

Use schedule # to find layout #
select * from ol_schedule where schedule_id = 9


SCHEDULE_ID LAYOUT_ID TIME_PROFILE_ID START_TIME              STOP_TIME REFRESH_RATE IS_NEEDS_TRANSFER FREEZE_TIME
----------- --------- --------------- ----------------------- --------- ------------ ----------------- -----------------------
9           9         1               2010-06-07 13:36:26.670 NULL      NULL         1                 2010-06-07 13:48:07.297


Its value is 9. This value is very often the same as the schedule #

select * from ol_report_layout where layout_id = 9


LAYOUT_ID OBJECT_TYPE_ID TENANT_ID LAYOUT_NAME    METAGROUP_CLASS METAGROUP_DB_ID ADD_TIME                DELETE_TIME             LAYOUT_DESCRIPTION           UPDATE_TIME TEMPLATE_NAME IS_BRKN
--------- -------------- --------- -------------- --------------- --------------- ----------------------- ----------------------- ---------------------------- ----------- ------------- -------
9         0              101       Agent_Layout_2 100             0               2010-06-10 11:42:40.147 2010-10-13 16:59:08.983 Agent Report Layout with AHT NULL        AGENT_AHT1    NULL

This means the chunk belongs to Report Layout Agent_Layout_2 (see DMA), and the schedule's start time is 2010-06-07 13:36:26

Normally this error appears if a report layout is deleted from DMA before ETL Service is able to pull the data into the DATAMART/REPORT database. If you confirm these can be deleted, take a backup of the ODS database and execute the following statement to delete the entry belonging to "Agent_Layout_2"

delete from ol_report_layout where layout_id = 9

There is no way to undo this from DMA or ETL Service. The only way to undo it is to restore from a database backup, or use flashback (Oracle) or a similar technology offered by the database server.

2011-12-16

Genesys Outbound OCM: Control Campaign Viewing Privilege/Permission

Product: Genesys CTI Suite 5.x - 8.x
Module: Outbound Contact Manager (OCM)

In order to limit agents to seeing only their own campaigns, campaign groups and calling lists, the following is required (using CME):

1. Create an Access Group, say HR_AccGrp. Grant this Access Group to the selected agents
2. Create a sub-folder under Table Access to keep the selected Table Access objects together for ease of management and privilege management
3. Create a sub-folder for Calling Lists as above
4. Create a sub-folder for Campaigns as above
5. Change the privileges of the root folders Table Access, Calling Lists, Campaigns, and Agent Groups (if any) to give read + execute access to HR_AccGrp. Do not propagate the change to the sub-folders, or the group will have full visibility, which is not desired
6. Change the privileges of each of the sub-folders created for Table Access, Calling Lists, Campaigns, and Agent Groups (if any) to give read + execute access to HR_AccGrp
7. Launch OCM with the agent login to verify that the account can only see the campaigns, calling lists, and agent groups defined in the sub-folders

If any privilege is missing, the following error will appear in OCM after it prompts for the OCS server:

Error reading configuration

Other related objects that may need to be verified, if someone has purposely revoked privileges, are:


Treatment
Filter
Time Zones
Fields
Format

Remember that the root folders, sub-folders and the objects themselves all carry privileges. Think about privileges in both CME and OCM, as the outbound manager may need access to CME to administer their own campaigns as well, such as managing users across several campaigns.

2011-12-12

Genesys - Server with this name is already running

Product: Genesys CTI Suite
Module: Genesys CPD 8.0.001.02

There are various ways that CPD Server may lose its connection to the Genesys Config Server. Many sites cover the normal scenarios, such as Config Server down, a database connectivity issue which causes Config Server to fail to start, a CPU bottleneck in CPD or Config Server, a RAM bottleneck in CPD or Config Server, the 32-bit version of CPD hitting the 2 GB per-process limit, network instability, etc. This post covers an unknown scenario.

When a CPD Server loses its connection to Config Server, Config Server should unregister the daemon so that it can start up again, or auto-reconnect (if using the ADDP protocol).

If, for some unknown reason, Config Server does not unregister it, then CPD Server will not be allowed to start up.

The following Config Server log clearly shows that it thought the CPD Server was already running, and did not allow the CPD that lost connectivity to reconnect again


13:27:31.008 Trc 04541 Message MSGCFG_CLIENTREGISTER received from 1176 ( '')

MSGCFG_CLIENTREGISTER
  attr: SATRCFG_PROTOCOLEX          value: "CfgProtocol 5.1.3.75"
  attr: SATRCFG_PROTOCOL            value: "CfgProtocol 5.1.3.54"
  attr: SATRCFG_USERPASS            value: "******"
  attr: SATRCFG_USERNAME            value: ""
  attr: SATRCFG_APPPASS             value: ""
  attr: SATRCFG_APPNAME             value: "CPDNorth"
  attr: IATRCFG_APPTYPE             value: 0 [Unknown]
  attr: IATRCFG_REQUESTID           value: 366

14:27:31.008 Std 23500 Configuration Server Error : Error  [CFGAppSecurityViolation], object [CfgApplication], property [name] Description Server with this name is already running
14:11:31.008 Trc 04542 Message MSGCFG_ERROR sent to 1176 ( '')

The following CPD log shows it trying to reconnect every 10 seconds, and it gives a false alarm that Config Server cannot be found, instead of the "already running" error found in the Config Server log


13:27:52.517 Trc 11000 CfgServerReconnectTimer timer for 10 sec. was created: 0x5fe9160
Request ConfRestoreSessionAsync to restore connection to ConfigurationServer (300)
addp-trace off
(addp_xconfig) local OFF, remote OFF, trace off
Request to restore session failed (The specified Configuration Server cannot be found). Keep trying to restore session.
ConfigurationServer cannot be contacted.
Trying to restore session.
13:27:53.501 Trc 11000 CfgServerReconnectTimer timer for 10 sec. was created: 0x68a02c8
Request ConfRestoreSessionAsync to restore connection to ConfigurationServer (300)
addp-trace off
(addp_xconfig) local OFF, remote OFF, trace off
Request to restore session failed (The specified Configuration Server cannot be found). Keep trying to restore session.
ConfigurationServer cannot be contacted.
Trying to restore session.


Anyway, bouncing the CPD Server will resolve this connectivity issue.

It looks like a bug between CPD and Config Server.

2011-12-05

Wireshark Capture Genesys T-Server SSL/TLS Network Packet

Product 1: Genesys CTI Suite (any version) for Windows
Product 2: Wireshark v1.6.4

Nobody has posted anything about capturing TLS-encrypted network packets for Genesys CTI applications, so I decided to write one.

This article covers configuring Wireshark to capture SSL-encrypted network packet communication with a T-Server running on port 3000.

Overall activities are as below:
1. Export the Windows SSL certificate as a pfx file
2. Convert the pfx certificate file to pem
3. Capture the network packets and restart the application while Wireshark is capturing
4. Configure the SSL pem certificate file in Wireshark

Step #3 is the most crucial step, the one that captures the network packets. The rest of the steps can be done after the pcap file is saved. However, I will follow the logical order above.

Export SSL Certificate
Genesys uses the Windows certificate store, the same certificates used by the IIS web server. There are many references on exporting Windows SSL certificates, so I will only show the major steps

Launch MMC and add the Certificates snap-in. Alternatively, export it from IIS Manager

Navigate to the tree node Trusted Root Certification Authorities - Certificates. Look for the certificate whose name matches the T-Server hostname configured in Genesys Configuration Manager (CME). If several T-Servers are used, export the SSL certificate from each server and upload them into a common directory. This will simplify the file gathering later

Choose the output format PKCS #12 (pfx). Ignore all the checkboxes, as Wireshark does not need any of them

Assume I have 2 T-Servers, and saved them as tserver1.pfx and tserver2.pfx.

Assume the pfx files are saved in the C:\Wireshark\ directory

Convert pfx to pem Certificate
This step is not required on Windows, as Wireshark can accept a pfx file. For other non-pfx certificate formats, follow this section

Download OpenSSL for Windows or Linux. This is a common utility that comes pre-installed on many Linux distributions. The free Windows build is provided by Shining Light Productions. Either 32-bit or 64-bit is fine, so on 64-bit Windows either version will work.

http://www.slproweb.com/download/Win32OpenSSL-1_0_0e.exe (32-bit)

The program openssl.exe is the utility needed to convert to a pem certificate file.

Go to a DOS prompt and execute the following commands
cd C:\Wireshark

\Utilities\OpenSSL\bin\openssl pkcs12 -nodes -nocerts -in tserver1.pfx -out tserver1.pem
\Utilities\OpenSSL\bin\openssl pkcs12 -nodes -nocerts -in tserver2.pfx -out tserver2.pem

The content of the pem file will be similar to below


Bag Attributes
    Microsoft Local Key set:
    localKeyID: 01 00 00 00 
    Microsoft CSP Name: Microsoft RSA SChannel Cryptographic Provider
    friendlyName: c6bad00c9d00bfd55dc217383c14f1c5_d9350150-31e7-4e1f-889d-029377e717f2
Key Attributes
    X509v3 Key Usage: 10 
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQC1PfnMaKjLpTyTZXW90FlLMYAeRsciKVpVjtx973gT6W552Tot
....cut...

FCR4IYW4+ye/IfbV4bYgDDyW4Wb1bk9bNnF6/U7pXJ3/
-----END RSA PRIVATE KEY-----

Capturing in Wireshark
In order to minimize the pcap capture file, configure Wireshark to capture only traffic to port 3000, which is the T-Server port for both T-Servers (or more)

Optionally, configure it to save the captured data to file and start a new file hourly. The example below saves to C:\Wireshark
Optionally, configure it to stop capturing after 5 hours. This is handy if you want capturing to stop automatically and avoid filling up the disk
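A command-line equivalent of the above GUI capture settings, as a hedged sketch using tshark (shipped with Wireshark); the interface number and output path are examples only:

tshark -i 1 -f "tcp port 3000" -b duration:3600 -a duration:18000 -w C:\Wireshark\tserver.pcap

Here -f is the capture filter (port 3000 only), -b duration:3600 starts a new file every hour, and -a duration:18000 stops the capture after 5 hours.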

Once the capture starts, restart any Genesys T-Server client, such as Siebel CRM, a soft phone, OCM, CCPulse, StatServer, SAP soft phone, etc. This is because the SSL certificate handshake happens at the start of the application, or at login. It contains the crucial SSL key exchange needed to decrypt the communication. If this step is missed, it is impossible to decode the network packets.

During the capture, pay attention to the specific application that needs to be traced. For example, I select a CRM CTI application which communicates with T-Server on port 64406 (client) and 3000 (T-Server), with packets marked [PSH, ACK]. Wireshark will show the alias name stm-pproc by default, but this is meaningless for Genesys T-Server troubleshooting.

Right-click on the packet, and choose Decode As, then select SSL

Click on the Transport tab, and change the TCP port to both. If the CTI application constantly changes its port number, then choose a packet sent from T-Server to the client and select source port, and another packet sent from the client to T-Server and select destination port. In either case, choose SSL as the protocol.

The packets will change to the TLSv1 protocol. At this point, the handshake appears to fail until we configure the SSL pem certificate below. The SSL configuration can be done after the pcap file is saved; this is only for display purposes, mostly for real-time analysis.

Configure SSL pem Certificate
This step is required in order for Wireshark to show the encrypted TLS packets. As long as Wireshark captured the certificate handshake packets, it is able to decrypt the content after this is configured.

Packets captured before the SSL certificate was configured can still be decrypted. Many people (who do not understand SSL and TLS security) wrongly believe otherwise.

Press Ctrl-Shift-P, or use the View - Preferences pull-down menu, to open the preferences screen

Expand Protocols in the left panel, and scroll down to SSL

Click on the RSA keys list Edit button, and fill in the pem certificate file for each T-Server. Click the Add button once for each T-Server. In this example, T-Servers 192.168.1.11 and .12 on port 3000 are added, each pointing to a different pem file.

Once this step is done, the previously encrypted SSL packets are readable.

In the above packets, we see the following:

  1. Contain "Server Hello"
  2. "Client Key Exchange" contain success message "Finished." It changed from "Encrypted Handshake Message" previously
  3. "Change Cipher Spec" contains success message "Finished." It changed from "Encrypted Handshake Message" previously
  4. Last packet changed from "Application Data" to "Continuation or non-HTTP traffic" with protocol as HTTP
Click on the HTTP with 141 bytes length show the decrypted content
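For reference, the same decryption can also be attempted from the command line, as a hedged sketch; the IP, port and paths follow the example above, and the ssl.keys_list preference format (ip,port,protocol,keyfile) is the one used by Wireshark 1.x:

tshark -r C:\Wireshark\tserver.pcap -o "ssl.keys_list:192.168.1.11,3000,http,C:\Wireshark\tserver1.pem" -V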

It was found that a pcap captured by Wireshark 1.6.4 cannot be decrypted by Wireshark 1.6.2. I am not sure whether this is a known bug in Wireshark, but it does look like a bug to me.

Post your comment if you need help or clarification

