
2024-09-22

Windows: Installer/Uninstaller error code 2203 when trying to install iTunes

Product: Windows
Version: 10

Overview

When trying to uninstall certain software in Windows 10, the uninstallation fails with the following error

"The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2203."

Procedure

1. Get a more detailed error message from Event Viewer

1.1. Press the Windows key, and type "Event Viewer" to run "Event Viewer"

1.2. On the left panel, navigate to Windows Logs > Application

1.3. Check the right panel, and focus on entries with Source = MsiInstaller

1.4. In the lower right panel, check under the General tab for the msiexec detail message.  In this example, iTunes failed during uninstallation with error code 2203


1.5. The following is displayed in the General tab:

Product: iTunes -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2203. The arguments are: C:\windows\Installer\inprogressinstallinfo.ipi, -2147286788, 

1.6. Click on the Details tab to see more detail, which contains the product ID that we need for the iTunes software


1.7. Scroll down to the "In Bytes" section and take note of the hex code, which is the iTunes product ID.  In this screenshot, the Product ID is {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4}.  A command-line alternative to the Event Viewer steps is shown below
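If you prefer the command line, the latest MsiInstaller events can also be pulled with the standard wevtutil tool (a quick sketch; run it from an elevated cmd prompt):

wevtutil qe Application /q:"*[System[Provider[@Name='MsiInstaller']]]" /c:5 /rd:true /f:text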

2. Explanation of the above error:

2.1. The Windows installer/uninstaller calls the program msiexec.exe in the background with product ID {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4}

2.2. The actual command executed was "msiexec.exe  /uninstall {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4}"

2.3. msiexec.exe will write temporary files to %LOCALAPPDATA%\temp\ of the current login user, e.g. administrator

2.3.1. If the current login user cannot create any files in the directory %LOCALAPPDATA%\temp, then the installer will fail with this error.  So, test creating a dummy file or directory in %LOCALAPPDATA%\temp\ from a "cmd" prompt, as shown below
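A minimal test from a "cmd" prompt (test_2203.txt is just an arbitrary file name for this check):

echo test > "%LOCALAPPDATA%\temp\test_2203.txt"
dir "%LOCALAPPDATA%\temp\test_2203.txt"
del "%LOCALAPPDATA%\temp\test_2203.txt"

If the first command fails with an access-denied error, msiexec.exe will hit the same permission problem.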

2.4. msiexec.exe will also write temporary files to %TEMP% as the SYSTEM user.  The value comes from the system environment variable, under the "System variables" section

2.4.1. Even if you have a TEMP variable defined under "User variables," the SYSTEM user will only use the TEMP value under "System variables"
2.4.2. You can monitor the above TEMP directory while running the uninstaller/installer to see whether it is creating any temporary files
2.4.3. You cannot run "cmd" as administrator and type the "set" command to display the value of TEMP, because that TEMP value is still inherited from the current login user
2.4.4. Ensure that the "SYSTEM" account has Full Control permission in the folder properties

2.5. msiexec.exe will create a temporary file at C:\windows\Installer\inprogressinstallinfo.ipi
2.5.1. Monitor the directory "C:\windows\Installer\" in real time during the uninstallation, and ensure the file inprogressinstallinfo.ipi gets created
2.5.2. The file gets deleted when the uninstaller fails
2.5.3. You can try to create a text file in the above directory using cmd (run as administrator) to ensure there is no permission issue.  If there is, ensure the SYSTEM user has Full Control permission (see the icacls check below)
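To review those permissions from an elevated cmd prompt instead of the folder properties dialog, icacls can list them; look for a SYSTEM entry with (F), which means Full Control (the same check can be repeated for the system TEMP directory):

icacls C:\windows\Installer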

2.6. You may find suggestions on the Internet to temporarily disable Windows Defender.  This assumes the installer created temporary files that Windows Defender automatically deleted as suspected malware.  If you are working with common software, such as iTunes, Acrobat, or Notepad++, then you can ignore those suggestions, as they are not related

3. Manually construct the uninstallation command yourself, and enable the debug log
3.1. Run notepad.exe to prepare a command to run later, which will create a debug log that shows us more info
3.2. Type: msiexec.exe /uninstall {product ID} /l*v <log file location>
3.3. For my iTunes, the product ID that we found in Event Viewer is {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4}
3.4. I would like to create the new log file as i:\temp\itune_uninstall.txt
3.5. So the final command is: C:\windows\system32\msiexec.exe /uninstall {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4} /l*v i:\temp\itune_uninstall.txt
3.6. Run cmd.exe as administrator
3.7. Copy and paste the final msiexec.exe command into cmd to start the uninstaller for iTunes, and create the debug log at i:\temp\itune_uninstall.txt
3.8. Let the uninstallation fail with error 2203

4. Get the detailed error message for error 2203
4.1. Open the debug log created in the above step
4.2. In this example, it is i:\temp\itune_uninstall.txt
4.3. Open the log file in Notepad, or Notepad++
4.4. Search for "DEBUG: Error"
4.5. In this iTunes example, the error is below

MSI (s) (68:78) [10:15:46:439]: BeginTransaction: Locking Server
MSI (s) (68:78) [10:15:46:440]: Note: 1: 2203 2: C:\windows\Installer\inprogressinstallinfo.ipi 3: -2147287038 
MSI (s) (68:78) [10:15:46:440]: Note: 1: 2203 2: C:\windows\Installer\inprogressinstallinfo.ipi 3: -2147287038 
MSI (s) (68:78) [10:15:46:440]: Server not locked: locking for product {E42D1B4D-78C2-4144-8E4D-FA82C53B68A4}
MSI (s) (68:78) [10:15:46:441]: Note: 1: 2203 2: C:\windows\Installer\inprogressinstallinfo.ipi 3: -2147286788 
Action start 10:15:46: InstallInitialize.
DEBUG: Error 2203:  Database: C:\windows\Installer\inprogressinstallinfo.ipi. Cannot open database file. System error -2147286788
MSI (s) (68:78) [10:15:48:055]: Product: iTunes -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2203. The arguments are: C:\windows\Installer\inprogressinstallinfo.ipi, -2147286788, 

MSI (c) (84:8C) [10:15:46:447]: Font created.  Charset: Req=0, Ret=0, Font: Req=MS Shell Dlg, Ret=MS Shell Dlg

The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2203. The arguments are: C:\windows\Installer\inprogressinstallinfo.ipi, -2147286788, 
Action ended 10:15:48: InstallInitialize. Return value 3.
Action ended 10:15:48: INSTALL. Return value 3.

4.6. This error message is longer than the one you see in Event Viewer, or Control Panel > Uninstall a program
4.7. The line to focus on is below:

DEBUG: Error 2203:  Database: C:\windows\Installer\inprogressinstallinfo.ipi. Cannot open database file. System error -2147286788

4.8. If you review the other log entries, you will see that no entry refers to the TEMP directory, which in my case is "I:\TEMP\Windows"

5. This post does not cover minor corruption of Windows 10 system files
6. If your Windows 10 has minor corruption, then fix it using the following steps:
6.1. Run cmd as administrator
6.2. Run: DISM.exe /Online /Cleanup-image /Scanhealth
6.3. Run: DISM.exe /Online /Cleanup-image /Restorehealth

7. A quick workaround is to change the System environment variable below:
7.1. TEMP=C:\temp

8. If you are stuck, repeat the installer/uninstaller while monitoring the following directories to ensure the permissions are correct.  Don't simply follow the many suggestions on the Internet to modify directory permissions if you can see that it is able to create temporary files
8.1. %LOCALAPPDATA%\temp - this is the value of the user's environment variable
8.2. %TEMP% as resolved for the SYSTEM account - this is the value of the global (System variables) environment variable, not the user's environment variable

2024-09-12

Oracle RDBMS: Manual Archive Log Clean Up with FAST_RECOVERY_AREA

Product: Oracle RDBMS

Version: 11.1.0 - 21c (19.3.x)

OS: Windows, Linux, HP-UX, AIX

FAST_RECOVERY_AREA was introduced more than 10 years ago, and it automatically housekeeps archive logs.  It allows a DBA to indirectly delete the physical archive log files that reside in the OS, even if they do not have OS access.

However, for DBAs or SAs who do have access to the OS, or who manage both the storage and the DB, it is often necessary to clean up archive log files in the OS first, then in Oracle's FAST_RECOVERY_AREA afterwards.

This post shows how to perform that manual clean up.

1. Verify the usage of FAST_RECOVERY_AREA to get a breakdown of what is using up the storage

col file_type format a40
set pagesize 50
set line 110

select * from v$recovery_area_usage;

2. For the purpose of this post, we cover the scenario where FILE_TYPE='ARCHIVED LOG', so the SQL is below (a space summary query follows as well)

select * from v$recovery_area_usage where FILE_TYPE='ARCHIVED LOG';
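If you also want an overall summary of the recovery area's space limit and usage in GB, the v$recovery_file_dest view returns one row per destination (a small helper query; the columns are reported in bytes, hence the conversion):

col name format a40
select name,
       space_limit/1024/1024/1024 as limit_gb,
       space_used/1024/1024/1024 as used_gb,
       space_reclaimable/1024/1024/1024 as reclaimable_gb,
       number_of_files
from v$recovery_file_dest;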

3. If the archive logs have not been backed up, and you want to back them up to disk, then use the following RMAN command to back up to disk and delete the backed-up archive logs from disk

rman target /

backup archivelog all delete input;

4. If the archive logs have already been deleted at the OS level, then you need to sync up their status with the Oracle DB.  Use this RMAN command to crosscheck the status of all the archive log files

rman target /

crosscheck archivelog all;

5. After the crosscheck marks the deleted archive log files as EXPIRED, you can run the following RMAN command to remove them from the FAST_RECOVERY_AREA's internal records.  This does not delete archive logs that are still present (not deleted) in the FAST_RECOVERY_AREA.  A verification query follows below

rman target /

delete noprompt expired archivelog all;
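To confirm the space has actually been released, re-run the usage query from step 1 and check that PERCENT_SPACE_USED for ARCHIVED LOG has dropped:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$recovery_area_usage
where file_type = 'ARCHIVED LOG';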


2024-07-14

Fresh Tomato Custom Router Firmware: Blocks Incoming Hacker IP

Product: Fresh Tomato Custom Router Firmware
Version: Firmware 2024.2 K26MIPSR2_RTAC USB AIO-64K
Router Model: ASUS RT-AC66U

Overview

I am testing out the custom firmware above on the ASUS router, using its built-in NGINX web server on a custom HTTP port, and trying to figure out how this custom firmware can block incoming hackers by their IP.

The web server log records visitors' IPs and the requested URIs, so I can figure out the attackers' IPs, which I consider to be hackers.  A quick way to pull the top attacker IPs from the log is shown below.
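As a rough sketch, the most frequent client IPs can be extracted from the access log with BusyBox tools.  This assumes the default NGINX combined log format, where the client IP is the first field, and the log path is a hypothetical example:

awk '{print $1}' /tmp/var/log/nginx/access.log | sort | uniq -c | sort -rn | head -n 20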

This gives me an idea of how aggressively the hacker community and search engines send valid and invalid requests to a random web site.

I am leveraging the built-in software firewall, which is configured using the iptables command, the configuration file /etc/iptables, and a web GUI menu called "Access Restriction" whose settings are stored in NVRAM (accessed using the commands "nvram get rrule0" and "nvram set rrule0").  Research into the iptables chains and rules found that the "Access Restriction" menu only restricts WAN to LAN traffic, and doesn't restrict the built-in NGINX web server, which is not considered part of the LAN subnet.  So this post documents how to configure the router so that NGINX traffic also goes through "Access Restriction."

Initially, I used the following methods to configure the software firewall to block hackers' IPs, but they are not persistent:
1. The /etc/iptables configuration file, with a scheduler
2. The iptables command, with a scheduler

Every few hours, Fresh Tomato seems to replace the above two files/configurations with the values from the web GUI, which I later found are stored in NVRAM.  So I lost all my firewall settings, and hackers could attack my NGINX web server again.  It took me a few days to figure out that the web GUI menu called "Access Restriction" configures iptables, and that its settings are stored permanently in NVRAM in addition to the above two files/configurations, surviving a reboot or power cycle.

A Google search did not return any info about blocking hackers with Fresh Tomato's built-in NGINX web server, but I finally found the following Fresh Tomato documentation that gave me a clue on how to configure NVRAM (so that I can block the hackers found in the NGINX access.log file):
  • https://wiki.freshtomato.org/doku.php/access_restrictions
The web GUI's "Access Restriction" configuration resides in NVRAM, and is set using the "nvram set [internal rule name]" command.  Its current configuration can be retrieved using the "nvram get [internal rule name]" command.  The internal rule names start with rrule0, followed by rrule1, rrule2, and each rule can be enabled and disabled separately.  Moreover, you can set a start/end time for every day, and the router will automatically trigger the iptables commands on that schedule.

In this post, I will assume there is no rule set yet, so I will use rrule0.  This indirectly creates the iptables chain named "rdev00."

Every time the "Access Restriction" configuration is changed, the router reloads the configuration from NVRAM, as well as certain network-related configurations.  I also need to add custom iptables commands into the "Scripts" section called "Firewall" to modify the out-of-the-box iptables chains, so that the "Access Restriction" configuration is applied to NGINX traffic in addition to LAN traffic.

Procedure

1. Create rule #1 with:
1.1. Rule name: Blocks Hacker 2024-07
1.2. Blocked IPs: 104.37.174.65, 117.146.152.81, 118.123.105.93
2. Connect to a LAN port using a cable, or WiFi
3. Assuming the WiFi router's IP is 192.168.100.1, use an ssh utility such as PuTTY to ssh to 192.168.100.1
4. Login as the root user
5. At the "#" shell prompt, create a new rule 1 with internal rule name rrule0
5.1. nvram set rrule0="1|-1|-1|127|104.37.174.65>117.146.152.81>118.123.105.93|||0|Blocks Hacker 2024-07"
5.2. Expect output:
root@rt-ac66u:~# nvram set rrule0="1|-1|-1|127|104.37.174.65>117.146.152.81>118.123.105.93|||0|Blocks Hacker 2024-07"
root@rt-ac66u:~#
6. Login to the WiFi router's web GUI to check. URL: http://192.168.100.1
7. On the left menu, click on the menu item "Access Restriction" to display all the rules
8. You will see the first rule, called "Blocks Hacker 2024-07," on the right panel as the 1st rule in the list
9. The following is the expected output

10. At this point, the "Access Restriction" settings are:
10.1. Only in the in-memory NVRAM configuration, not yet committed to flash
10.2. The current settings will be lost after a reboot, or power loss
10.3. The router has not run iptables to activate the settings.  There is a delay of about 1 hr
11. To make this setting permanent across reboots, enter: # nvram commit
12. Wait for about 1 hr, then check the firewall to see that the IPs are blacklisted.  Command: iptables -L rdev00 -n
13. Expected output:
Chain rdev00 (1 references)
target     prot opt source               destination
DROP       all  --  104.37.174.65        0.0.0.0/0
DROP       all  --  117.146.152.81       0.0.0.0/0
DROP       all  --  118.123.105.93       0.0.0.0/0
14. If you want to force the setting to activate immediately, run these 2 commands:
# nvram set action_service=restrict-restart
# kill -USR1 1
15. You will see the following in /var/log/messages:
# tail /var/log/messages | grep "Activating rule"
Jul 14 11:23:31 rt-ac66u user.info rcheck[1867]: Activating rule 0
16. Lastly, modify the iptables firewall's INPUT chain from the web GUI so that the web server port goes through the "Access Restriction" rules
16.1. In the router's web GUI, on the left panel, navigate to Administration > Scripts

16.2. Click on the Firewall tab
16.3. Enter the following commands to create a new custom chain called "hacker" and replace the existing rule for NGINX TCP port 85 with a jump to this custom chain:
logger "webUI-Scripts-Firewall Configures NGINX to follow Access Restriction"
iptables -N hacker
iptables -A hacker -j restrict
iptables -A hacker -j logaccept || iptables -A hacker -j ACCEPT
iptables -R INPUT 15 -p tcp -m tcp --dport 85 -j hacker
16.4. Sample screenshot


Maintenance

You might need to perform other maintenance on the firewall.
1. Remove/delete a rule, e.g. remove rule 10: nvram unset rrule10
2. Delete firewall rule 10 (chain rdev09):
2.1. iptables -D restrict -j rdev09
2.2. iptables --flush rdev09
2.3. iptables -X rdev09
3. Check the currently active blocked IPs for rule 1 (chain name rdev00) in memory: iptables -L rdev00 -n
4. Check the currently blocked IPs for rule 1 in the /etc/iptables configuration file: grep rdev00 /etc/iptables
5. Force Access Restriction to activate now:
5.1. Command #1: nvram set action_service=restrict-restart
5.2. Command #2: kill -USR1 1 (after the above completes)
5.3. Command #3: nvram get action_service (if completed, the command will not return any value. If not completed, then it returns "restrict-restart")
6. Use the "top" Linux command to monitor CPU usage with the new firewall rules. You will need to reduce the number of blocked IPs if you encounter CPU contention, or buy a faster/newer WiFi router

Known Limitation

  1. The web GUI's menu Administration > Logging > Syslog > Connection Logging, which controls the following, won't always capture the above traffic:
    1. Inbound: If blocked by Firewall
    2. Outbound: If blocked by Firewall
    3. Reason: the above logic/commands need to be modified to use the iptables chains "logaccept" and "logdrop", which makes them longer to write (update them to allow logging to syslog /var/log/messages)
  2. The iptables firewall configuration is based on my configuration of Fresh Tomato on an ASUS RT-AC66U, version 2024.2.  It might need further modification for other models or versions
  3. When the list of blocked IPs gets long (more than 100), the web GUI's "Access Restriction" menu won't be able to delete or modify rule #1, and will repeatedly prompt to delete IPs (after which it will work)
    1. The Fresh Tomato firmware seems to enforce a limit on the number of entries inside a rule
    2. This error can be ignored, and doesn't affect the IP blocking

2024-06-28

Linux: How to get Parent Process (PPID)?

Product: Linux
Version: All
CPU: All

Overview

Many Linux forums and documentation are often based on full desktop or server operating systems, such as RedHat, Fedora, Ubuntu, SuSE, and macOS, and leave out Linux installed on single board computers (SBC) such as WiFi routers, smartphones, tablets, Raspberry Pi, NanoPi, car head units, webcams, etc.  This leads to some standard UNIX designs and operations slowly being forgotten and replaced with newer commands.

During day-to-day troubleshooting, in addition to the process ID (PID), it is sometimes necessary to check the parent process ID (PPID) as well.  For example, to find out what spawned the process, or whether the parent process has terminated.

Procedure

Most Linux systems on SBCs use a reduced userland, such as BusyBox, which is ~ 600KB and is used in the DD-WRT and FreshTomato firmware for WiFi routers.

These smaller-footprint Linux builds are often bundled with a "ps" command that displays only limited info about a process.  For example, it does not display the parent PID (PPID).

You can use the following commands to get the PPID from the /proc virtual directory.

For example, assume the PID is 2914:

root@rt-ac66u:/tmp/home/root# ls -l /proc/2914/status
-r--r--r--    1 root     root             0 Jun 28 09:20 /proc/2914/status
root@rt-ac66u:/tmp/home/root# cat /proc/2914/status
Name:   dropbear
State:  S (sleeping)
SleepAVG:       95%
Tgid:   2914
Pid:    2914
PPid:   1
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 32
Groups:
VmPeak:     1156 kB
VmSize:     1156 kB
VmLck:         0 kB
VmHWM:       256 kB
VmRSS:       256 kB
VmData:      348 kB
VmStk:        84 kB
VmExe:       304 kB
VmLib:       384 kB
VmPTE:        12 kB
Threads:        1
SigQ:   0/2047
SigPnd: 00000000000000000000000000000000
ShdPnd: 00000000000000000000000000000000
SigBlk: 00000000000000000000000000000000
SigIgn: 00000000000000000000000000001000
SigCgt: 00000000000000000000000000024402
CapInh: 0000000000000000
CapPrm: 00000000fffffeff
CapEff: 00000000fffffeff

To display the PPID alone, use the following command (a loop that walks the whole parent chain follows after the output):
root@rt-ac66u:/tmp/home/root# grep PPid /proc/2914/status
PPid:   1
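If you need the whole ancestry, the following is a minimal sketch that walks up the parent chain using the same /proc files.  It assumes a BusyBox shell with grep and awk available, and uses PID 2914 from the example above:

pid=2914
while [ -r /proc/$pid/status ]; do
  name=$(grep '^Name:' /proc/$pid/status | awk '{print $2}')
  ppid=$(grep '^PPid:' /proc/$pid/status | awk '{print $2}')
  echo "PID $pid ($name) has parent PPID $ppid"
  [ "$ppid" -eq 0 ] && break
  pid=$ppid
done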

The virtual directory /proc is part of the very basic design of Linux, so it is always available even on these stripped-down systems, while the "ps" command might have functionality stripped off.

The above example was captured from the FreshTomato firmware on a WiFi router.

2024-05-21

Java SQL Programming - Tuning SELECT Statement Performance, JDBC Performance Tuning

Product: Java Programming
Version: 0.1 - current
OS: Any operating system, Windows, Linux, UNIX, AIX, Solaris, HP-UX
Related Product: Any programming language that uses SELECT in the world
Java API, Java class, Java method: java.sql.Statement, java.sql.PreparedStatement
Java method: java.sql.PreparedStatement.setFetchSize, java.sql.Statement.setFetchSize

Overview

Any programming language that can work with databases will offer an API to run SELECT statements.  In Java, these are the following classes:

  • java.sql.Statement
  • java.sql.PreparedStatement
Certainly there are more ways to run a SELECT statement without using the above, but this post will focus on these two, as they are indirectly involved in any SELECT statement execution from any programming language.  This has been the case since the 1980s, when database programming was introduced, and many programmers today have no clue how this began, especially why it is required.

This post explains how to use the Java programming language to tune a SELECT statement for a very specific use case:
  • the SELECT statement returns a lot of data, such as 1000 rows and above
The tuning targets the following Java JDBC classes:
  • java.sql.Statement
  • java.sql.PreparedStatement
  • java.sql.ResultSet

Architecture

In order to explain the tuning of these JDBC classes, I first need to explain how a SELECT statement works at a low level.  I will explain from a programmer's perspective, skip many low-level database execution details, and pick Oracle DB as the example.  This is mainly because I have worked more with Oracle, and Oracle KBs and blogs share more of its internal architecture than, for example, MS SQL Server.

A SELECT statement execution goes through the following process (a minimal code sketch follows the list):

  1. Craft a SELECT statement
    1. In Java programming, this is often stored as a String variable, or many SELECT statements in many String variables
  2. Create a String variable to store the above SELECT statement
  3. Create a new variable of the JDBC class java.sql.PreparedStatement using the above String (the SELECT statement), so that it becomes an object that is parsed (Oracle's keyword); in programmer's terminology, it is compiled by the DB
    1. PreparedStatement, or DB SQL parsing, is always required to run any SQL, including SELECT, INSERT, DELETE, CREATE, DROP, TRUNCATE, function calls, and stored procedure calls.  Let's call these DDL (data definition) and DML (data manipulation)
    2. PreparedStatement is indirectly used by JDBC even if the program doesn't define/specify/create this object when executing SQL
    3. There is no way to bypass SQL parsing (Java PreparedStatement) when executing any DDL or DML
    4. The object in Java takes up very little memory (Java heap memory)
      1. In Java documentation, you often read that this allocates some memory, but the documentation was written when max desktop RAM was 512MB
    5. The object in Oracle takes up very little memory
      1. In Oracle documentation, you often read that this allocates some memory, but the documentation was written when max desktop RAM was 2GB
    6. The variable defaults its row fetch size to 10 rows (the Oracle JDBC driver's default), i.e. PreparedStatement.setFetchSize(10).  Rows are then downloaded 10 at a time as ResultSet.next() consumes them.  You can call PreparedStatement.getFetchSize() to display the row fetch size
    7. This PreparedStatement variable is ready to run by calling PreparedStatement.executeQuery()
    8. The FetchSize doesn't take up memory in the PreparedStatement object
  4. Alternatively, create a commonly used variable of the JDBC class java.sql.Statement that either
    1. Uses the above PreparedStatement variable
    2. Runs the above String variable containing a SELECT statement
    3. Runs a literal SELECT statement, typically used to get the current timestamp
    4. The variable defaults its row fetch size to 10 rows, i.e. Statement.setFetchSize(10), and rows are downloaded 10 at a time as ResultSet.next() consumes them.  You can call Statement.getFetchSize() to display the row fetch size
    5. Oracle DB will indirectly perform SQL parsing inside the DB, which takes up a little memory in the Oracle DB server
    6. The FetchSize doesn't take up memory in the Statement object
  5. Run the SQL by calling one of the following JDBC methods:
    1. PreparedStatement.executeQuery()
    2. Statement.executeQuery()
  6. Oracle DB takes time to execute the SQL statement.  It will take longer if Oracle DB has a poor SQL execution plan, or a high data volume that the DB needs time to load from disk or join using CPU.  The Java program doesn't do any processing, and no data is available for the Java program to download.  In other words, the Java program stops and waits until the DB has finished executing the SQL and is ready to return data.  executeQuery() will wait as long as the Oracle DB takes to run the SELECT statement.  If the SELECT takes 50 hr to execute, then executeQuery() will wait for 50 hr
  7. After Oracle completes executing the SELECT statement, fetch/download the rows into Java using the JDBC class ResultSet
    1. Create a new variable of the JDBC class ResultSet
    2. Download 10 rows (records) at a time using the ResultSet.next() method
    3. If you set FetchSize higher using setFetchSize(), then ResultSet.next() will download more rows from the DB server per round trip
    4. ResultSet.next() will pause longer the higher the FetchSize is set.  Imagine downloading files from the Internet: the bigger the file, the longer it takes to download
    5. The more columns the SELECT statement specifies, the bigger the data volume and the longer the next() fetch takes
    6. So the total time to fetch 10 rows (or FetchSize, if set to a different value) depends on (excluding the time waiting for the DB to run the SELECT statement):
      1. Number of columns defined in the SELECT statement
      2. Number of columns with data. If no data, then no impact
      3. Data size of each column
      4. Number of rows to fetch, as defined by setFetchSize. Default is 10 rows
    7. FetchSize can be modified at any time using the JDBC method ResultSet.setFetchSize(new_value)
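The following is a minimal sketch of the flow above.  The connection URL, credentials, table, and column names are hypothetical placeholders, and it assumes an Oracle JDBC driver is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectFlowDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; replace with your own
        String url = "jdbc:oracle:thin:@//db-host:1521/orclpdb1";
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_password")) {
            // Steps 1-2: the SELECT statement stored in a String variable
            String sql = "SELECT id, name, created_at FROM demo_table WHERE status = ?";

            // Step 3: ask the DB to parse/compile the statement
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "ACTIVE");
                System.out.println("Default fetch size: " + ps.getFetchSize()); // 10 with Oracle's driver

                // Steps 5-6: this call blocks until the DB is ready to return rows
                try (ResultSet rs = ps.executeQuery()) {
                    // Step 7: rows are downloaded FetchSize at a time as next() consumes them
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }
}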

SELECT Statement Tuning

After lengthy explanation of the SELECT statement architecture, let's get into SELECT statement tuning.

We will not discuss tuning of the following, as some are business requirements, and some are SQL tuning beyond Java programming tuning:

  1. a slow-running SELECT statement that the DB takes time to process
  2. a poorly written SELECT statement
  3. a SELECT statement that joins and expects to retrieve all data from very large tables, such as 10GB - 300TB

The performance tuning in the Java program targets the FetchSize of the following Java JDBC classes:

  • java.sql.Statement
  • java.sql.PreparedStatement
  • java.sql.ResultSet
To further elaborate, the performance tuning in the Java program is to increase the number of rows downloaded from the Oracle DB per round trip by calling one of the following Java JDBC methods (see the sketch after this list):
  • java.sql.Statement.setFetchSize()
  • java.sql.PreparedStatement.setFetchSize()
  • java.sql.ResultSet.setFetchSize()
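A minimal sketch of the tuning itself, reusing the PreparedStatement ps from the earlier sketch (1000 and 200 are illustrative values only):

// Raise the fetch size before executing, so the ResultSet inherits it
ps.setFetchSize(1000);             // on a PreparedStatement
// stmt.setFetchSize(1000);        // or on a plain Statement

try (ResultSet rs = ps.executeQuery()) {
    // The fetch size can also be changed mid-stream;
    // the new value applies to subsequent fetches from the DB server
    rs.setFetchSize(200);
    while (rs.next()) {
        // process the row
    }
}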

Why Tune Default Row Fetch Size 

  1. The default of 10 rows (FetchSize) is too small by today's standard.  A desktop computer often comes with a 1Gbps network card, while servers use 10Gbps network cards.  Thirty years ago, network cards ran at 10Mbps.  Many residential Internet connections are between 50Mbps and 1Gbps.
  2. Data sizes are significantly bigger, such as when storing images, video, and audio.
  3. Max column sizes have increased. Oracle's LONG column type has been superseded by CLOB (character) and BLOB (binary), which can hold 8TB - 128TB in one column.
  4. Business users have valid reasons to display 1000 rows per screen, such as viewing reports, YouTube videos, or X tweets, so 10 rows is too little, and annoying
  5. Pauses happen too frequently when next() is called after processing/displaying every 10 rows, due to network latency
  6. It is safe to set a reasonable FetchSize, such as 1000 rows, even when the SELECT returns 1 row.  It doesn't take longer to download 1 row when FetchSize=1000

Tuning Consideration

Fully study the following when considering tuning FetchSize:
  1. Do the users/application really need to retrieve so many columns in the SELECT statement?
  2. How big (in bytes) is each column in the SELECT statement?
  3. How big (in bytes) is the total size of all columns in the SELECT statement?
  4. How many records will the SELECT statement return?
  5. Can the user adjust the FetchSize from the application?

Scenario 1: Showing number, strings, and time

90% of regular application designs return these column types in their SELECT statements; their max sizes in Oracle as non-Unicode (bytes) are:
  • number - 40 bytes
  • varchar2 - 32 KB
  • datetime - 16 bytes
Let's assume the SELECT statement returns the following number of each column type, with a total of 10 columns:
  • number - 3
  • varchar2 - 5
  • datetime - 2
So the total max bytes that next() will download from the DB server per row is
Max row size in byte = 3 * 40 + 5 * 32KB + 2 * 16
= 120 bytes + 160 KB + 32 bytes
= 163992 bytes
= 160.1484375 KB

Assuming that business users can accept a 1 sec pause to display the data (i.e. to download the fetched rows of these 10 columns from the Oracle DB server), the following is the amount of data that can be downloaded in 1 sec at different network speeds:
1 sec for 1Mbps = 1 sec * 1,000,000 bit/s = 122.070 KB
1 sec for 10Mbps = 1 sec * 10,000,000 bit/s = 1220.703 KB
1 sec for 1Gbps = 1 sec * 1,000,000,000 bit/s = 119.209 MB
1 sec for 2Gbps = 1 sec * 2,000,000,000 bit/s = 238.419 MB
1 sec for 10Gbps = 1 sec * 10,000,000,000 bit/s = 1192.093 MB

Using the above 1-sec download sizes and the 10-column max row size (160.1484375 KB), the acceptable FetchSize values are:
  • 1Mbps: FetchSize=1
  • 10Mbps: FetchSize=7
  • 1Gbps: FetchSize=762
  • 2Gbps: FetchSize=1524
  • 10Gbps: FetchSize=7622
If business users say that even though the varchar2 column type can hold 32KB of data, 99% of the data is below 2 KB, then the max row size in bytes (keeping the other column types at their max size) becomes:
Max row size in bytes = 3 * 40 + 5 * 2KB + 2 * 16
= 120 bytes + 10 KB + 32 bytes
= 10392 bytes
= 10.1484375 KB

Then the acceptable FetchSize for the different network download speeds becomes:
  • 1Mbps: FetchSize=12
  • 10Mbps: FetchSize=120
  • 1Gbps: FetchSize=12028
  • 2Gbps: FetchSize=24056
  • 10Gbps: FetchSize=120284
The default FetchSize of 10 rows is far too small for business servers, which often have 1Gbps between the Oracle server and the Java application (a small calculation helper follows).
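The following is a small helper that reproduces the arithmetic above.  It is only a sketch: the row size formula and link speeds are the estimates used in this scenario, not measured values:

public class FetchSizeEstimator {
    // Estimate how many rows fit into one acceptable pause at a given link speed
    static long estimateFetchSize(long maxRowSizeBytes, long linkBitsPerSec, double pauseSec) {
        double bytesPerPause = (linkBitsPerSec / 8.0) * pauseSec;
        return (long) (bytesPerPause / maxRowSizeBytes);
    }

    public static void main(String[] args) {
        long rowBytes = 3 * 40 + 5 * 32 * 1024 + 2 * 16;   // 163,992 bytes, as computed above
        System.out.println(estimateFetchSize(rowBytes, 1_000_000_000L, 1.0));  // ~762 rows at 1Gbps
    }
}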

Over Tune Warning

Be aware of the maximum column sizes (in bytes) in the SELECT statement when adjusting FetchSize:
  • varchar2 - 32KB max
  • LONG - 2GB max
  • BLOB - 8 - 128TB max
  • CLOB - 8 - 128TB max
  • BFILE - 4GB max
Not all SELECT statements are identical, so don't simply reuse a very high FetchSize value that was used for another SELECT statement.

The following are bad examples of over-tuning FetchSize:
  1. Business users often enter 4000 or more characters into a Note column inside a table.  The application is set to display 1000 rows (FetchSize=1000), which easily downloads 4MB of data into the Java application/servlet.  The programmer placed this screen on a frequently used home screen, which has 2000 concurrent users (4MB * 2000 ~ 8GB)
    1. Business users should consider fetching only the first 100 characters of the Note column instead of displaying the entire Note.  If a business user needs to read a specific Note, they can click and display that specific Note
  2. Business users want to auto-download a monthly pdf when clicking the "Download Monthly Check Deposit" button, similar to an invoice screen that the application already has (FetchSize=50).  However, business users didn't consider that the pdf in this screen often contains images of checks taken with a cellphone, which are easily 2MB each.  This screen is used by business customers who deposit up to 100 checks daily, and the screen is supposed to display 1 month of records.  This means the button will download 100 checks * 31 days = 3,100 rows of checks in pdf files, which at 2MB each is roughly 6.2GB of data for the entire month per business customer
    1. This functionality to download an entire month's checks should not be offered unless the end-to-end network speed is 10Gbps (and even then it takes several seconds just to download the data into the Java application)
  3. Reporting users designed a simple monthly operations report showing 5,000 columns from a few tables with big data volume and high daily transaction volume. When setFetchSize is set to a high value, the report always crashes the Java application (it runs out of heap memory)
    1. setFetchSize should be set to a lower value
    2. Java heap memory needs to be increased
    3. Control the number of rows displayed on the screen
    4. Encourage users to save to a file
    5. Offer pagination in the screen, and set setFetchSize to match the number of rows displayed on the screen, or 1/5 of the total rows (so next() will indirectly download from the DB server 5 times).  Use the following Java code to determine the number of rows (not all DB types support last(), and it requires a scrollable ResultSet); a fuller sketch follows below
      1. ResultSet rs = stmt.executeQuery(sql);
      2. int rowCount = rs.last() ? rs.getRow() : 0;
      3. rs.beforeFirst();
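A fuller sketch of that pagination idea, assuming the JDBC driver supports scrollable result sets (conn and sql are assumed from the earlier sketch; the page size is illustrative):

// last()/beforeFirst() require a scrollable ResultSet; the default type is forward-only
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                      ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery(sql);

int rowCount = rs.last() ? rs.getRow() : 0;  // jump to the last row to count the rows
rs.beforeFirst();                            // rewind before reading

int pageSize = 100;                          // rows shown per screen (illustrative)
rs.setFetchSize(pageSize);                   // fetch roughly one screen per round trip

while (rs.next()) {
    // render the row; the UI layer decides when a page is full
}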

2024-04-22

SAP BODS JobServer: Enabling IPv6

Product: SAP Data Services

Version: 4.2.x

Overview

The SAP BODS product is bundled with the following software:

  • IPS
  • Data Services JobServer - I will refer to it as JobServer
  • Tomcat

The following are the default ports each component listens on

  • IPS - TCP port 6400
  • JobServer - TCP port 3500
  • Tomcat - TCP port 8080

This post is mainly about IPv6, as certain people might have an IPv6 setup and would like to use it.  I won't discuss Tomcat, as its IPv4 & IPv6 network configuration follows Apache Tomcat's documentation.

IPS has both IPv4 and IPv6 enabled during installation.  However, JobServer only enables IPv4, and the product documentation does not show how to enable IPv6.  The confusing part is that the product documentation and SAP KBs state in several places that IPv6 is supported, yet there is no documentation on how to enable it.

Enabling IPv6 for JobServer

Follow these steps to enable IPv6 (and keep IPv4) for JobServer

1. cd $LINK_DIR/bin

2. Append SAP_IPv6_ACTIVE=Y to al_env.sh: echo "export SAP_IPv6_ACTIVE=Y" >> al_env.sh

3. Restart JobServer: svrcfg

3.1. In the menu, choose to stop the JobServer, then start the JobServer

Verify IPv6 is enabled for JobServer

When I raised an SAP support ticket, SAP kept saying IPv6 was enabled, so I had to provide proof that it was not.  You can use these commands to verify whether IPv6 is enabled:
1. Linux: netstat -an | grep 3500 | grep LISTEN
2. Windows: Run cmd as Administrator
2.1. netstat -an | findstr "3500" | findstr "LISTEN"

Sample output that shows IPv6 working on port 3500
tcp   0    0 0.0.0.0:3500    0.0.0.0:*    LISTEN
tcp6   0    0 :::3500    :::*    LISTEN

The first column, "tcp6", means the socket is for IPv6, while "tcp" means it is for IPv4

2024-04-16

IBM Cognos Analytics: Content Store JDBC Configuration for Oracle

Product: IBM Cognos Analytics

Version: 10.2.1 - 12.0.0

OS: Windows, Linux

IBM Cognos Analytics documentation about configuring the Content Store with an Oracle database is very limited.

This post lists several configurations.

Option 1: Using TNS from tnsnames.ora. Most Recommended

Using a TNS name is the preferred method for the following reasons:

1. Changing the Oracle hostname doesn't require a change to the Cognos configuration, and can be done in real time

2. You can leverage Oracle SCAN listener functionality in tnsnames.ora

3. You can leverage Oracle fail-over functionality in tnsnames.ora

4. You can leverage Oracle standby fail-over functionality in tnsnames.ora

5. You can troubleshoot Cognos content store connectivity to the Oracle DB using the tnsping [TNS name] command

6. Supports Oracle RAC through tnsnames.ora

Steps:

1. Delete all entries under Data Access > Content Manager

2. Create new Content Store, and pick "Oracle database (Advanced)"

3. Enter the pre-configured Oracle TNS name from tnsnames.ora under "Database specifier"

4. In this example, "ora21c" is the TNS name entry that is already configured in tnsnames.ora, for example as shown below
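A matching tnsnames.ora entry might look like the following sketch, reusing the hostname and service name from Options 2 and 3 below (adjust to your environment):

ora21c =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora-db-21c)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ora21c.abc.com))
  )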


Option 2: Uses URL syntax

Using the JDBC URL syntax //[hostname]:[port]/[service name] leverages Oracle service functionality, and makes the entry easy to maintain.

There is not much advantage to this approach compared to the above, so I will only list the disadvantages:

1. Changing the Oracle hostname, port, or service name requires shutting down the Cognos daemon

2. Not possible to perform a connectivity test using the exact URL

3. Doesn't support Oracle RAC

4. Doesn't support Oracle standby

5. Doesn't support Oracle fail-over

Steps:

1. Delete all entries under Data Access > Content Manager

2. Create new Content Store, and pick "Oracle database (Advanced)"

3. Enter //[hostname]:[port]/[service name] under "Database specifier"

3.1. hostname - Oracle database server's hostname

3.2. port - Oracle listener's port. Default is 1521

3.3. service name - Oracle listener's service name as shown in "lsnrctl status [listener]"

4. In this example, "//ora-db-21c:1521/ora21c.abc.com" connects to host ora-db-21c on port 1521, with Oracle service name ora21c.abc.com


Option 3: Uses long TNS syntax

This is a lengthy configuration with poor readability, but it offers some advantages:
1. You can leverage Oracle scan listener functionality in tnsnames.ora

2. You can leverage fail-over Oracle functionality in tnsnames.ora

3. You can leverage standby fail-over Oracle functionality in tnsnames.ora

4. You can troubleshoot Cognos content store connectivity to Oracle DB using tnsping [TNS name] command

5. Supports Oracle RAC through tnsnames.ora

Disadvantages:

1. Lengthy to type, and it is easy to make a mistake

2. Difficult and time-consuming to modify

3. Modification of the hostname, port, or service name requires a restart of the Cognos daemon

4. Using tnsping to perform a connectivity test is challenging for those who don't know this syntax, which has long been forgotten

Steps:

1. Delete all entries under Data Access > Content Manager

2. Create new Content Store, and pick "Oracle database (Advanced)"

3. Under "Database specifier," enter "(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora-db-21c)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = ora21c.abc.com)))"

3.1. HOST - Oracle database server's hostname

3.2. PORT - Oracle listener's port. Default is 1521

3.3. SERVICE_NAME - Oracle listener's service name as shown in "lsnrctl status [listener]"

Option 4: Uses SID

This is the least recommended approach, but it has remained in the Cognos product documentation the longest.  The main reason it is not recommended is that Oracle documentation has advised against using the SID for application logins since the late 1990s.

The syntax is hostname:port:SID, which is very similar to the URL syntax above, but it uses ":" instead of "/"

There is not much advantage to this approach compared to the above, so I will only list the disadvantages:

1. Changing the Oracle hostname, port, or SID requires shutting down the Cognos daemon

2. Not possible to perform connectivity test using exact URL

3. Doesn't support Oracle RAC

4. Doesn't support Oracle standby

5. Doesn't support Oracle fail-over

6. The SID might be different from the service name that the Oracle DBA is familiar with, which might cause a communication gap and confusion during connectivity troubleshooting

Steps:

1. Delete all entries under Data Access > Content Manager

2. Create new Content Store, and pick "Oracle database (Advanced)"

3. Under "Database specifier," enter "[hostname]:[port]:[SID]"

3.1. hostname - Oracle database server's hostname

3.2. port - Oracle listener's port. Default is 1521

3.3. SID - the Oracle DB's SID, not the service name