
2017-07-17

MS SQL Server: Ver 2014 – CXPACKET – Parallelism

Product: MS SQL Server 2014
Version: Base version, SP1, SP2 (latest), or SP2 with various patches (as of Jul 2017)
Edition: Standard, Enterprise

This post is about an MS SQL Server 2014 parallel query bug that occurs rarely and randomly, causes extremely long execution times, and is not yet widely documented.

Problem Description

A query that used to take 18 sec suddenly switched its execution plan and took more than 16 hr.  The query joins 10 tables, and the data volume had only increased by about 100 rows.  The largest table holds 40,000 rows, while each of the rest holds < 10,000 rows, yet the bug hit after a 100-row increase in 4 out of the 10 tables, which is < 0.1% of the table size.

Using SSMS Activity Monitor, one can see many entries sharing the same Session ID, all with WaitType = CXPACKET.

Backing up and restoring this DB and trying it on MS SQL Server 2014 Standard Edition, Enterprise Edition, the base version, SP1, SP2, SP2 with the latest Jul 2017 patch, VMware, a physical server, etc. all produced the same result.

The behavior is consistently reproducible by running the query through SSMS, even after removing the WHERE clause.

Analysis

CPU sits consistently at 40% (across multiple CPUs) from the moment the query fires, and stays there for the entire day.  Modifying the query to use TOP 1 brought it down to ~1 min.

Disabled parallelism by setting "Max Degree of Parallelism" to 1; it was about 50% better and the plan no longer used parallel execution, but still far slower than before.
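For reference, a minimal T-SQL sketch of that instance-wide change via sp_configure (the SSMS server properties dialog achieves the same thing):

-- Disable parallel plans instance-wide (value 1 = no parallelism)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;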

In Activity Monitor, one can see multiple entries (all with the same Session ID) due to the parallel query execution plan (the more CPU cores you have, the more workers it spawns).  All but one are waiting on CXPACKET; the remaining one waits on SOS_SCHEDULER_YIELD.  This persists at 40% total CPU utilization for > 16 hr (for something that took 20 sec a day earlier).

Ruled out storage subsystem issues, Windows patches, DB patches, DB startup parameter changes, antivirus updates, viruses, firewall, network issues, etc.

A closer look at the SQL execution plan reveals that all spawned workers under the Session ID are in SUSPENDED mode with waittype = CXPACKET, except the one with waittype SOS_SCHEDULER_YIELD, which is running.
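The same picture can be pulled from the DMVs instead of Activity Monitor.  A minimal sketch (session ID 55 is a placeholder for the SPID you see in Activity Monitor):

SELECT session_id, exec_context_id, wait_type, wait_duration_ms
FROM sys.dm_os_waiting_tasks
WHERE session_id = 55;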

Tried limiting parallelism to 1 at the query scope as follows:

SELECT ...[columns]... FROM [table1] INNER JOIN [table2]..... WHERE ...[conditions] ...  OPTION (MAXDOP 1, RECOMPILE)

Although the SQL execution plan no longer used parallelism, the execution time was still > 15 min (cancelled early).

Ref: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option

Rebuilt all indexes using the following procedure; no change in behavior.
Ref: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/reorganize-and-rebuild-indexes
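For example, per table (the table name is a placeholder):

ALTER INDEX ALL ON [table1] REBUILD;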

Checked the database and confirmed it is not corrupted.  No difference:
DBCC CHECKDB

Updated statistics for all tables.  No difference:
sp_updatestats

Checked sp_who2 to confirm there is nothing blocking, and no deadlock:
sp_who2

Found it spawned 16 child workers, 15 of which waited on CXPACKET for the entire 16 hr.

Examined the query execution plan (by picking any of the child session IDs) and saw 40% - 50% of the cost in Parallelism (Repartition Streams), and one of the indexes showing a 100x overestimate in the Estimated Number of Rows.

What if we only select TOP 1?  How long will it take?  Let's try it out:
SELECT TOP 1 ...[columns]... FROM [table1] INNER JOIN [table2]..... WHERE ...[conditions] ...

It turns out it took ~1 min to return 1 row.  So there is something very wrong with the query optimizer in MS SQL Server 2014 that cannot be worked around by adding CPUs, setting DOP=1, DOP=8, or DOP=# of CPU cores, rebuilding indexes, faster storage, or updating statistics.  As the number of rows increases progressively, the plan may just flip and suddenly take 5000x longer to run.

As a last resort, force the DB engine to use the SQL Server 2012 query optimizer, using trace flag 9481:
SELECT ...[columns]... FROM [table1] INNER JOIN [table2]..... WHERE ...[conditions] ...  OPTION (QUERYTRACEON 9481, RECOMPILE)

BINGO!  The query took 20 sec to run.  The execution plan does not use any parallel query execution, and there is no CXPACKET waittype at all (in Activity Monitor).

Ref: https://support.microsoft.com/en-ca/help/2801413/enable-plan-affecting-sql-server-query-optimizer-behavior-that-can-be

Depending on the application, if there is no way to specify this trace flag per query or per session, then it can only be specified in the global scope.
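For reference, both global options (standard SQL Server syntax; use one or the other):

-- As a startup parameter, add to the SQL Server service startup options:
-T9481

-- Or at runtime, enable the trace flag globally for all sessions:
DBCC TRACEON (9481, -1);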


Conclusion: MS SQL Server 2014 has a bug in the query optimizer engine that can make the same SQL take 5000x to 10000x longer after a < 0.1% increase in row count.  Even a 1000x slowdown would far exceed the cost of extracting all the relevant tables in the join (full table scans or full index scans).

I consider this a bug because regardless of whether DOP is 1 (disabled), 2, 8, 16, or up to the CPU core count, the SQL takes ~5000x longer to run, even when the plan is not using parallel query execution.

This SQL Server 2012 cardinality estimator behavior can also be obtained by setting the database compatibility level to SQL Server 2012 (110), which prevents the entire database from leveraging SQL Server 2014 features.
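A sketch of that statement (the database name is a placeholder):

ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 110;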

For SQL Server 2016, you can keep the database compatibility level at 2016 and instead use "ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON" for an effect equivalent to trace flag 9481.  This can also be set in the SSMS UI.
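A sketch, including a query to verify the setting afterwards (standard SQL Server 2016 syntax):

ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

SELECT name, value FROM sys.database_scoped_configurations
WHERE name = 'LEGACY_CARDINALITY_ESTIMATION';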


Still, I consider this a temporary workaround, given that without it the SQL takes 5000x longer to run.

2017-01-16

Oracle: ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Product: Oracle Database
Version: 12.1.0.2

Oracle 12c introduced a new parameter called PGA_Aggregate_Limit, a global hard limit on PGA memory intended to prevent RAM contention and virtual memory swapping to disk.

The earlier parameter PGA_Aggregate_Target is only the preferred global PGA usage for the entire database instance; because it is a soft limit, some customers have experienced RAM contention.

There is a known bug where, even if you disable PGA_Aggregate_Limit by setting it to 0, you will continue to see the following error:

ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

This bug is visible under the following conditions:

  1. Supplemental Logging is enabled for the table
  2. A spatial column is being updated
  3. An unknown version of a PSU or CPU patch has been installed (the DBA did not disclose the exact version)

To troubleshoot whether you are affected by this known Oracle bug:

1. In alert.log, confirm you are getting ORA-04036, which kills some DB sessions
2. Execute the following from sqlplus:

select value/power(1024,2) "MB" from v$pgastat where name = 'maximum PGA allocated';

2.1. Check whether it exceeded the PGA_Aggregate_Limit size set in spfile.ora/pfile.ora (the configured value can be confirmed as shown below)
2.2. If it is below the limit, then you are very likely hitting this Oracle bug
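The configured values can be checked from sqlplus (standard SQL*Plus commands):

SQL> show parameter pga_aggregate_limit
SQL> show parameter pga_aggregate_target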
3. Review the generated trc log referenced in alert.log
3.1. The log will show the PGA memory usage
3.2. If it is below the limit, then you are very likely hitting this Oracle bug
4. Determine the failing SQL statement, then execute the following from sqlplus, filling in the table name

SQL> desc user_log_groups
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                     NOT NULL VARCHAR2(128)
 LOG_GROUP_NAME                            NOT NULL VARCHAR2(128)
 TABLE_NAME                                NOT NULL VARCHAR2(128)
 LOG_GROUP_TYPE                                     VARCHAR2(19)
 ALWAYS                                             VARCHAR2(11)
 GENERATED                                          VARCHAR2(14)

SQL> select log_group_name, table_name from user_log_groups where table_name = '&table_name';

Note: Replace user_log_groups with dba_log_groups for all users

SQL> desc user_log_group_columns

 Name                Null?    Type
 ------------------- -------- ----------------------------
 OWNER               NOT NULL VARCHAR2(128)
 LOG_GROUP_NAME      NOT NULL VARCHAR2(128)
 TABLE_NAME          NOT NULL VARCHAR2(128)
 COLUMN_NAME                  VARCHAR2(4000)
 POSITION                     NUMBER
 LOGGING_PROPERTY             VARCHAR2(6)

SQL> select log_group_name, table_name, column_name from user_log_group_columns;

Note: Replace user_log_group_columns with dba_log_group_columns for all users

5. Confirm that the failing table contains a spatial column

SQL> select data_type, table_name, owner from user_tab_columns where data_type = 'SDO_GEOMETRY';

DATA_TYPE       TABLE_NAME      OWNER
--------------- --------------- --------------------
SDO_GEOMETRY    CONTACTADDRESS  TEST1

Note: Replace user_tab_columns with dba_tab_columns  for all users

Workaround

Drop supplemental logging for the failing table.  Several syntax variants are listed below for generality; note that the table name must be supplied (shown here as <table_name>):

SQL> ALTER TABLE <table_name> DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL> ALTER TABLE <table_name> DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER TABLE <table_name> DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER TABLE <table_name> DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

Note: The first statement is usually sufficient; the others are for specific use cases

If you want to disable it globally, then use the following statement:

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
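To verify the database-level supplemental logging state before and after, query v$database:

SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;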


2017-01-04

MS SQL Server: Finding JDBC Version

Product: MS SQL Server
Version: 2008R2 - 2014

Every Microsoft JDBC driver release bundles 4 sqljdbc*.jar files, one for each supported major Java JVM version.  When you download the driver from Microsoft, it includes these 4 JAR files, and you need to pick the proper JAR according to your JVM version.

For example, Microsoft JDBC 4.2 bundles sqljdbc42.jar for JVM 1.8, sqljdbc41.jar for JVM 1.7, sqljdbc4.jar for JVM 1.6, and sqljdbc.jar for JVM 1.5.

https://msdn.microsoft.com/en-us/library/ms378422(v=sql.110).aspx

From this MSDN page, you can see that JDBC driver 4.2 carries another versioning, called JDBC Version Compliance, for each file.

So when you receive a file called sqljdbc.jar, it is hard to tell which Microsoft JDBC driver release it belongs to; you can only tell that it supports JVM 1.5 or older.

Therefore, I developed the following Java application, which displays both the JDBC driver version and the JDBC Version Compliance.

Sample Output

D:\temp\sqljdbc>C:\Language\jdk1.8.0_92_64\bin\java -cp sqljdbc42.jar;. JDBCVersion
JDBC Driver version: 4.2
Driver name: Microsoft JDBC Driver 6.0 for SQL Server
Driver version: 6.0.7728.100
Product name: Microsoft SQL Server
Product version: 11.00.5058

D:\temp\sqljdbc>C:\Language\jdk1.8.0_92_64\bin\java -cp sqljdbc41.jar;. JDBCVersion
JDBC Driver version: 4.1
Driver name: Microsoft JDBC Driver 6.0 for SQL Server
Driver version: 6.0.7728.100
Product name: Microsoft SQL Server
Product version: 11.00.5058

D:\temp\sqljdbc>C:\Language\jdk1.8.0_92_64\bin\java -cp sqljdbc4.jar;. JDBCVersion
JDBC Driver version: 4.0
Driver name: Microsoft JDBC Driver 4.0 for SQL Server
Driver version: 4.0.4621.201
Product name: Microsoft SQL Server
Product version: 11.00.5058

D:\temp\sqljdbc>java -cp C:\ClaimCenter7.0.7\lib\sqljdbc4.jar;. JDBCVersion
JDBC Driver version: 4.0
Driver name: Microsoft SQL Server JDBC Driver 3.0
Driver version: 3.0.1301.101
Product name: Microsoft SQL Server
Product version: 11.00.5058

D:\temp\sqljdbc>java -cp C:\ClaimCenter9.0.0\admin\lib\sqljdbc-4.1.jar;. JDBCVersion
JDBC Driver version: 4.0
Driver name: Microsoft JDBC Driver 4.1 for SQL Server
Driver version: 4.1.5605.100
Product name: Microsoft SQL Server
Product version: 11.00.5058

Source code

Filename: JDBCVersion.java
import java.sql.*;

class JDBCVersion
{
    public static void main(String[] args) {

        Connection conn = null;

        try {
            // Edit these connection details to match your environment before compiling
            String dbURL = "jdbc:sqlserver://localhost\\SQLSERVER2012";
            String user = "sa";
            String pass = "sa";

            conn = DriverManager.getConnection(dbURL, user, pass);
            if (conn != null) {
                DatabaseMetaData dm = conn.getMetaData();
                // JDBC Version Compliance (e.g. 4.2) vs. the driver's own release version (e.g. 6.0)
                System.out.println("JDBC Driver version: " + dm.getJDBCMajorVersion() + "." + dm.getJDBCMinorVersion());
                System.out.println("Driver name: " + dm.getDriverName());
                System.out.println("Driver version: " + dm.getDriverVersion());
                System.out.println("Product name: " + dm.getDatabaseProductName());
                System.out.println("Product version: " + dm.getDatabaseProductVersion());
            }
        } catch (SQLException ex) {
            ex.printStackTrace();
        } finally {
            try {
                if (conn != null && !conn.isClosed()) {
                    conn.close();
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }
    }
}


Compiling Instructions

C:\Language\jdk1.8.0_92_64\bin\javac JDBCVersion.java

This will create JDBCVersion.class in the current directory

Execution Instructions

1. Download MS SQL Server JDBC driver to C:\temp
2. Extract all the JAR files into C:\temp\sqljdbc.  In typical usage, you would instead copy in the single JAR file whose version you want to identify
3. Run the class, specifying the JDBC jar file whose version you want to verify on the classpath:

C:\Language\jdk1.8.0_92_64\bin\java -cp sqljdbc4.jar;. JDBCVersion

where the current directory is the one containing JDBCVersion.class
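If you have no database available to connect to, a minimal variant (assuming the standard Microsoft driver class name) can read the driver's own release version without opening a connection.  Note it reports the driver release version (e.g. 6.0), not the JDBC compliance version:

import java.sql.Driver;

class DriverVersion
{
    public static void main(String[] args) throws Exception {
        // Standard driver class name used by the Microsoft sqljdbc*.jar releases
        Driver d = (Driver) Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver").newInstance();
        System.out.println("Driver version: " + d.getMajorVersion() + "." + d.getMinorVersion());
    }
}

Compile and run it the same way as JDBCVersion above, with the jar on the classpath.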

2016-12-15

PowerShell: Rename File but Keep Extension Name

Rename all the files in the current directory by trimming trailing whitespace from the base filename
Dir | where { ! $_.PSIsContainer } | Rename-Item -NewName { $_.Basename.TrimEnd() + $_.extension } -verbose

Same as above, but including all subdirectories
Dir -recurse | where { ! $_.PSIsContainer } | Rename-Item -NewName { $_.Basename.TrimEnd() + $_.extension } -verbose

Rename all the files, including subdirectories, by appending "_old" to the end of the base filename (note: Basename, not name, or the extension would be duplicated)
Dir -recurse | where { ! $_.PSIsContainer } | Rename-Item -NewName { $_.Basename + "_old" + $_.extension } -verbose

Rename all the files including subdirectories by inserting "old_" at the beginning of the filename (the name already includes the extension, so nothing needs appending)
Dir -recurse | where { ! $_.PSIsContainer } | Rename-Item -NewName { "old_" + $_.name } -verbose

Rename all the files including subdirectories by replacing "new" with "old" in the base filename
Dir -recurse | where { ! $_.PSIsContainer } | Rename-Item -NewName { $_.basename.replace("new","old") + $_.extension } -verbose

Rename all the *.trp files, including subdirectories, by removing the trailing "00" from the base filename, i.e. replacing "00.trp" with ".trp" (the dot is escaped and the match anchored, since -replace uses regex)
Dir -recurse -filter *.trp | Rename-Item -NewName { $_.name -replace '00\.trp$','.trp' } -verbose
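All of the Rename-Item examples above accept -WhatIf for a dry run, which prints the renames without performing them.  For example:

Dir -recurse -filter *.trp | Rename-Item -NewName { $_.name -replace '00\.trp$','.trp' } -WhatIf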

 

2016-12-06

MS SQL Server: bcp import unicode file

Product: MS SQL Server
Version: 2005 - 2016

Overview

The MS SQL Server bcp command line utility and the BULK INSERT statement (T-SQL) do not provide any way to debug or preview an import.  This becomes frustrating when you are using an fmt (format) file, with the additional complication of a Unicode text file.

This post focuses on the bcp command line utility, but the information is applicable to the BULK INSERT statement.

bcp in (import) displays very generic, useless error messages when the fmt file is configured in a way that does not match the data file's content.  For example:

Error #1:

Starting copy...
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Native Client]I/O error while reading BCP format file

BCP copy in failed

Error #2:

Starting copy...
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file

BCP copy in failed

Error #3:

Starting copy...
SQLState = 22001, NativeError = 0
Error = [Microsoft][SQL Native Client]String data, right truncation
SQLState = 22001, NativeError = 0
Error = [Microsoft][SQL Native Client]String data, right truncation
SQLState = 22001, NativeError = 0
Error = [Microsoft][SQL Native Client]String data, right truncation
SQLState = 22001, NativeError = 0
Error = [Microsoft][SQL Native Client]String data, right truncation
SQLState = 22001, NativeError = 0
Error = [Microsoft][SQL Native Client]String data, right truncation
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file

0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 312

Error #4 (no error, but nothing imported):

Starting copy...

0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 94

Introduction

Microsoft MSDN has very poor documentation about bcp.  The documentation on format files and Unicode is insufficient, with no examples covering the common use cases, and it is even worse for Unicode.

There is only one error file, enabled with the "-e" parameter, but most of the time it is empty; it neither helps in troubleshooting nor indicates which entry in the fmt file is wrong or incorrectly configured.

There are several challenges when configuring the fmt file with unicode:

  1. Unicode - there are several Unicode formats for the bcp data file, in addition to ANSI (regular) text files
  2. Windows, UNIX, MacOS end-of-line - MacOS uses CR, UNIX uses LF, Windows uses CR+LF, and custom programs might use LF+CR or a mixture of all 4 combinations
  3. Multi-line fields - fields containing embedded line breaks, which confuse end-of-line detection

Unicode Data File Format

In Windows Notepad's Save dialog, there are 3 Unicode choices in addition to ANSI.
MS SQL Server bcp only supports the "Unicode UCS2-Little Endian" format, via the "bcp -w" parameter or SQLNCHAR in the fmt file.  This encoding is UCS2-Little Endian with BOM, a subtype of UTF-16; bcp accepts UCS2-LE both with and without BOM.  Notepad's "UTF-8" refers to UTF-8 encoding with BOM.

Note: when you use -f to supply an fmt file, you cannot also specify the -w parameter on the command line.  When "bcp out -w" is used to export to a text file, UCS2-Little Endian with BOM is produced.

When the bcp data file is created on another OS, or downloaded from a browser, the encoding could be UTF-8, UCS2-Little Endian ("Unicode" in Notepad), or UCS2-Big Endian (e.g. from AIX).

The text file used in this post is called import1.csv, with the content below:
"Case","Status","Assginee"
"論壇101","New","無"

Keep this file in mind while reading the post, because we are going to examine its content in hex.  In particular, note the following character codes:

Line 1, first 3 characters' hex code are 0x22, 0x43, 0x61 - "Ca
Line 2, first 3 characters' hex code are 0x22, 0x8AD6, 0x58C7 - "論壇

I will highlight the BOM as well as the Traditional Chinese characters, which are critical in identifying the Unicode encoding and character set.  This is not explained in detail in the MS SQL Server MSDN documentation, and is hard to find in the Internet community.

When ANSI is used, the file is saved in the Traditional Chinese BIG5 character set.  Note that this is not a supported format for bcp and needs to be converted to UTF-16/UCS2-LE.

Use the following PowerShell script to display the first 128 bytes (32 bytes per row), and you will be able to tell exactly which encoding the data file uses.  For Unicode with BOM, the first 2 bytes will be FF FE or FE FF (the BOM), depending on the byte order (aka endianness).  For Unicode without BOM, these 2 bytes are absent.

PS> Get-Content "import1.csv" -Encoding Byte -TotalCount 128 -ReadCount 32 | ForEach-Object {$output = "";foreach ( $byte in $_ ) {$output += "{0:X2} " -f $byte}; $output}

Output 1: UCS2-Little Endian with BOM - the first 2 BOM bytes are always "FF FE", before the 0x22

FF FE 22 00 43 00 61 00 73 00 65 00 22 00 2C 00 22 00 53 00 74 00 61 00 74 00 75 00 73 00 22 00
2C 00 22 00 41 00 73 00 73 00 67 00 69 00 6E 00 65 00 65 00 22 00 0A 00 22 00 D6 8A C7 58 31 00
30 00 31 00 22 00 2C 00 22 00 4E 00 65 00 77 00 22 00 2C 00 22 00 21 71 22 00 0A 00

Output 2: UCS2-Big Endian with BOM - the first 2 BOM bytes are always "FE FF", before the 0x22

FE FF 00 22 00 43 00 61 00 73 00 65 00 22 00 2C 00 22 00 53 00 74 00 61 00 74 00 75 00 73 00 22
00 2C 00 22 00 41 00 73 00 73 00 67 00 69 00 6E 00 65 00 65 00 22 00 0A 00 22 8A D6 58 C7 00 31
00 30 00 31 00 22 00 2C 00 22 00 4E 00 65 00 77 00 22 00 2C 00 22 71 21 00 22 00 0A

Output 3: UTF-8 with BOM (3 BOM bytes before the 0x22) - the 2 Chinese characters are represented with 3 bytes each, 6 bytes total.  This is the default format created by Notepad's UTF-8 file format

EF BB BF 22 43 61 73 65 22 2C 22 53 74 61 74 75 73 22 2C 22 41 73 73 67 69 6E 65 65 22 0A 22 E8
AB 96 E5 A3 87 31 30 31 22 2C 22 4E 65 77 22 2C 22 E7 84 A1 22 0A

Output 4: UTF-8 without BOM - the file begins directly with 0x22.  The 2 Chinese characters are represented with 3 bytes each, 6 bytes total

22 43 61 73 65 22 2C 22 53 74 61 74 75 73 22 2C 22 41 73 73 67 69 6E 65 65 22 0A 22 E8 AB 96 E5
A3 87 31 30 31 22 2C 22 4E 65 77 22 2C 22 E7 84 A1 22 0A

Output 5: ANSI, Traditional Chinese BIG5 character set

22 43 61 73 65 22 2C 22 53 74 61 74 75 73 22 2C 22 41 73 73 67 69 6E 65 65 22 0A 22 BD D7 BE C2
31 30 31 22 2C 22 4E 65 77 22 2C 22 B5 4C 22

Note: ANSI and UTF-8 produce identical bytes for plain ASCII characters; they differ only for characters outside the ASCII range.  Therefore, you need to inspect the non-ASCII characters to tell them apart.

Once you have determined that the encoding is not UCS2-Little Endian, use Notepad to save the data file in "Unicode" format.  If you are using Notepad++, UltraEdit, Sublime Text, etc., you can convert it to UCS2-LE/UCS2-Little Endian with or without BOM; bcp accepts both.
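The conversion can also be scripted.  A minimal sketch in Windows PowerShell, where -Encoding Unicode means UCS2-Little Endian with BOM (note this also rewrites the line endings as CRLF):

Get-Content "import1.csv" -Encoding UTF8 | Set-Content "import1_ucs2le.csv" -Encoding Unicode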

When UTF-16/UCS2-LE is used (with or without BOM), you need to specify 2-byte characters for the field delimiters, as well as SQLNCHAR for all fields in the fmt file.  For UTF-8 or ANSI, you specify only 1 byte per delimiter character.

Misidentifying the Unicode file type will typically produce errors 1 - 3 above.

End-of-Line (EOL)

EOL is a common problem when the data file is not exported from an MS SQL Server database but received from another source, e.g. another OS, a browser download, or a custom program.  You need to describe the EOL sequence in the fmt file when importing.

Specifying the wrong EOL typically produces error 4 above, which is:

Starting copy...

0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 94

You can use the above PowerShell script, Notepad++, Sublime Text, UltraEdit, etc. to see the character sequence(s) used as EOL (a Notepad++ screenshot showing the EOL characters originally appeared here).


If you use "bcp -w" to import the data file, and not using fmt file, then you need to convert it to Windows EOF, which is CRLF, instead of LF.

Following are the 4 EOL formats for common OS:

Windows: CR+LF
UNIX/Linux: LF
MacOS: CR
Program: LF+CR

If an fmt file is used, then you can specify any of the above EOL formats in it.

If the EOL is LF+CR, I recommend converting it to Windows or UNIX EOL, or modifying the program to use one of the standard EOLs.  This format risks confusing both humans and database utilities.

Windows Notepad does not support converting the EOL, so you need to use another text editor, or PowerShell, to convert the EOL to Windows format.  Again, if you are using an fmt file, you specify the EOL in the fmt file and no file conversion is needed.
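A minimal PowerShell sketch to convert bare LF endings to CRLF while keeping the file in UCS2-LE with BOM (System.Text.Encoding.Unicode); use a full path, since .NET resolves relative paths against the process directory:

$text = [System.IO.File]::ReadAllText("D:\temp\import1.csv")
$text = $text -replace "(?<!`r)`n", "`r`n"
[System.IO.File]::WriteAllText("D:\temp\import1.csv", $text, [System.Text.Encoding]::Unicode)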

Multilines Field

When you have a multi-line field, you need to ensure the EOL sequence is unique so that it can be identified as the end of record (EOR).

Scan the bcp data file to ensure the content of multi-line fields does not conflict with the EOL.

In my example, the EOR is "LF, i.e. hex codes 0x22, 0x0A.

If there is a conflict, then you need to modify the export program (or bcp out) to use a unique character string, such as |"LF, i.e. hex codes 0x7C, 0x22, 0x0A.

Failure to do so will produce error 1 or 2 above, which complain about EOF.

bcp Format File

The MS SQL Server 2012 fmt file used for this data file is as below:

9.0
4
1  SQLNCHAR  0   0   "\"\0"        0 FIRST_QUOTE Latin1_General_100_CI_AS
2  SQLNCHAR  0   20  "\"\0,\0\"\0" 1 Case        Latin1_General_100_CI_AS
3  SQLNCHAR  0   50  "\"\0,\0\"\0" 2 Assignee    Latin1_General_100_CI_AS
4  SQLNCHAR  0   100 "\"\0\n\0"    3 Status      Latin1_General_100_CI_AS

Explanation:
  1. Line 3: this dummy field removes the opening double quote of column 1.  For UCS2-LE, the terminator "\"\0" indicates the bytes 0x22, 0x00.  For UTF-8 and ANSI, the 0x00 is not required
  2. Line 2: 3 data fields plus 1 dummy field, for a total of 4 fields
  3. Lines 3-6: the field terminators (e.g. quote-comma-quote) are always padded with "\0" for UCS2-LE.  For UTF-8 and ANSI, the 0x00 is not required
  4. Line 6: the quote followed by LF is used as the EOR; the closing double quote is removed during import
  5. The last line must be an empty line, or bcp will raise error 1 above
  6. The 2nd column of each field line must be SQLNCHAR.  If UCS2-LE is used, this type is required for both varchar and nvarchar columns, or you will get error 2 above
  7. The 3rd column (prefix length) must be 0 for UCS2-LE, or you will encounter error 2 above
  8. The 4th column (length) is double the size of the nvarchar or varchar column.  E.g. for nvarchar(10) or varchar(10), the length is 20
bcp command:
bcp Forum.Mgmt in D:\temp\import1.csv -f forum1.fmt -U admin1 -P [password for admin1] -S hostname\[instance name] -F 2 -e bcp_forum1.log

Note: -F 2 starts the import at row 2, skipping the header line

Tips on bcp Configuration/Troubleshooting


  1. Uses "bcp out -w" to output to UCS2-LE csv file and compare it against your data
  2. Create a new table with few nvarchar and varchar columns to determine the correct field deliminator and end-of-line configuration
  3. Create a csv file with 2-5 records to test the "bcp in" import, so that you are in better control of the data format
  4. Replace field deliminator with tab, comma, |, instead of multi-chracters deliminator (like my example is 3 characters deliminator).  Focus in simple field deliminaotr configuration before move on to multi-chraacters deliminator
  5. Remove multi-line records from the testing csv, and test with data that does not contains multi-line.  This allows you to confirm that your EOL deliminator works, before troubleshoot multi-line data import
  6. Manually remove double quotes or single-quotes which requires additional fmt configuration.  Make sure you have a working fmt file, before configure more complete deliminator
  7. Specify "bcp -b1" to force import to commit every row, so that you can determine which row is violating the format file.  This is used to resolve following error, which the error log is always empty, such as below
Starting copy...
1000 rows sent to SQL Server. Total sent: 1000
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file

1876 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 811    Average : (2313.19 rows per sec.)
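As promised in tip 2, here is a sketch of a scratch table whose columns mirror import1.csv (the table and column names are illustrative, not from a real schema):

CREATE TABLE dbo.ForumTest (
    CaseName nvarchar(10),
    Status   nvarchar(25),
    Assignee nvarchar(50)
);

Import your 2-5 test records into this table first, adjust the fmt file until that succeeds, and only then point bcp at the real table.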

2016-11-13

PowerShell: Rename File Name To Directory Name

Product: PowerShell

This command renames all the files in a directory to the directory's name, keeping each file's extension.  This is useful when:

1. The filenames in the directory are inconsistent, each with a unique extension
2. The directory name is meaningful but the filenames are random, so you would like the filenames to match the directory name

For the purpose of this post, I have a directory containing the following files:

Dir: 4Ever
Files: file1.trp, file1.txt, file1.idx, file1.ifo

get-childitem 4Ever | rename-item -NewName { $_.directory.name + $_.extension }

Output
PS I:\SVHD_VIDEO> dir 4Ever


    Directory: I:\SVHD_VIDEO\4Ever


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---        11/13/2016   9:20 AM 1020313600 4Ever.trp
-a---        11/13/2016   9:20 AM     857792 4Ever.idx
-a---        11/13/2016   9:20 AM       4228 4Ever.ifo
-a---        11/13/2016   9:20 AM       1480 4Ever.txt

For renaming across multiple directories, use following syntax

get-childitem  -recurse | where { ! $_.PSIsContainer } | rename-item  -NewName { $_.directory.name + $_.extension }

where PSIsContainer means the item is a directory; the exclamation mark (!) excludes directories

2015-12-09

Acura Integra 3rd Generation (1994 - 2001): Common Known Issues


  1. Camshaft seal cap/plug on side of cylinder block
    1. This is a common part with the 2001 Civic and 1992 Prelude.  The year 2000 Civic doesn't have this part
    2. It is a plastic cap with an o-ring
    3. The plastic plug with o-ring costs CAD$13
    4. The o-ring itself costs CAD$0.40; I recommend replacing it every 10 yr
    5. If the plastic cap is good, just replace the o-ring, sized about 27mm x 2.5mm or 28mm x 2.5mm (ID x CS), nitrile
    6. High temperature materials like Viton, FPM, AFLAS are a waste of money, because nitrile lasts 9 yr and is 4 to 10 times cheaper
    7. Inspect for oil leaks at the bottom of the plug as the o-ring hardens with age.  The plastic plug should be good for 15 years
    8. Use a flat head screwdriver to pop out the old plastic cap.  It is a 1 minute job
    9. Don't waste money on billet-type caps with 2 or 3 o-rings.  Their nitrile o-rings harden quickly, and the valve cover must be removed to replace them.  The Honda/Acura plastic cap design lasts equally long with 1 o-ring, and is quick to replace every 10 yr
    10. Buy a couple of o-rings if you have multiple Honda/Acura cars dated 1990 to 2011, including Civic, Prelude, Accord, Fit, Insight
    11. Camshaft plug picture
  2. Valve cover gasket
    1. Apply Ultra Grey RTV at the rounded corners when installing
    2. Inspect for oil leaks and replace as needed
    3. Cost CAD$35
  3. Front upper control arm
    1. The rubber bushings tear with age.  Inspect them, and replace whichever side needs it
    2. Cost CAD$35
    3. Car alignment is recommended
    4. Front upper control arm picture
  4. Coolant temperature sensor for the instrument cluster
    1. It can slowly degrade, showing lower than normal temperature early in its failure, and eventually the needle stops moving at end of life
    2. Cost CAD$8
    3. This is a common part with the 2001 Civic
    4. A non-OEM part might be slightly longer, but an OEM Honda part is not required
    5. This is a 1-wire sensor for the instrument cluster, not the 2-wire sensor for the ECU.  There are 2 coolant temperature sensors
    6. Coolant temperature sensor
  5. Front outer tie rod
    1. Inspect for torn rubber bushing
    2. Cost CAD$18
    3. Car alignment is recommended
  6. Front inner tie rod
    1. Inspect for torn rubber bushing
    2. Cost CAD$15
    3. Car alignment is recommended
  7. Rear trailing arm bushing
    1. The rubber bushing cracks slowly after 10 yr.  When it is totally broken, a metal knocking sound is heard when driving over uneven surfaces
    2. It affects the wheel height, but not camber or toe
    3. Recommend replacing in pairs
    4. Needs a pressing tool such as the Honda Lower Control Arm Bushing Tool / Bushing Extractor
    5. No car alignment needed
  8. Antenna auto up/down
  9. Electronic trunk release - electric motor failure
  10. Rough idle
    1. Clean the Fast Idle Thermo Valve, and tighten any internal parts in it.  This part is located directly under the plastic air intake connector to the intake body, where the butterfly valve is.  You need to remove the large plastic air intake hose
    2. This has been a common part since the 1990's, e.g. the 1992 Honda Prelude
    3. This part is DIY cleanable.  Remove it from the intake body, then remove all the bolts and clean every part with a paper towel
    4. If that doesn't resolve the issue, then remove and clean the Idle Air Control Valve
  11. Low brake fluid warning light
    1. The sensor triggers easily and intermittently on low brake fluid, even when the level is above MIN
    2. When the brake pads are 80% worn, the warning light may come on intermittently; you can add a few drops of brake fluid to shut it off, or replace the brake pads early
    3. Cost $30 (non-OEM) - $120 (OEM) per front or rear set
    4. Recommend buying brake lube when replacing.  The smallest packet costs $3
    5. When removing the reservoir cover, inspect the filter for clogging, which might trigger the warning light
    6. Inspect the wire on the cover.  A loose wire might trigger the light
    7. Recommend replacing the brake pads on all 4 wheels, then topping up the fluid.  The warning light will then eventually tell you that either the front or rear brake pads are 80% worn and need replacement.  Brake fluid never evaporates and stays in the brake system even after 15 years, so it is a good indicator of worn brake pads if you don't check them annually
  12. Water/moisture in rear brake light
    1. Replace the rubber seal
    2. Alternatively, drill a small hole at the bottom of the light assembly to let the water drain out
    3. Another cheap workaround is kitchen sink silicone sealant.  Apply it around the existing old rubber seal
