We are using Java and Oracle for development.
I have a table in an Oracle database which has a CLOB column. Some XYZ application dumps a text file into this column. The text file has multiple lines.
Is it possible that while reading that CLOB back through a Java application, the escape sequences (newline characters, etc.) may get lost?
The reason I ask is that we are going to parse this file line by line, and if the escape sequences are lost, we would be in trouble. I would have done this analysis myself, but I am on vacation and my team needs urgent help.
I would really appreciate any thoughts/inputs.
You need to ensure that you use one and the same character encoding throughout the whole process. I strongly recommend you pick UTF-8 for that: it covers every human character known to the world. Every step which involves handling of character data should be instructed to use that very same encoding.

In the SQL context, ensure that the database and the table are created with the UTF-8 character set.

In the JDBC context, ensure that the JDBC driver uses UTF-8; this is often configurable via the JDBC connection string.

In the Java code context, ensure that you use UTF-8 when reading/writing character data from/to streams; you can specify it as the second constructor argument of `InputStreamReader` and `OutputStreamWriter`.
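As for your specific line-by-line concern: as long as the encodings line up, the line terminators stored in the CLOB survive the round trip. Below is a minimal sketch of reading the CLOB line by line; the table name `DOCUMENTS`, column `CONTENT`, and connection details are my assumptions, so adjust them to your schema. The key point is that `Clob.getCharacterStream()` returns a character `Reader` (the JDBC driver performs the byte-to-character conversion), so wrapping it in a `BufferedReader` lets you consume the stored lines one at a time:

```java
import java.io.BufferedReader;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ClobLineReader {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and schema: DOCUMENTS(ID NUMBER, CONTENT CLOB).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT content FROM documents WHERE id = ?")) {
            ps.setLong(1, 1L);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Clob clob = rs.getClob(1);
                    // getCharacterStream() yields characters, not raw bytes, so the
                    // line terminators stored in the CLOB arrive intact.
                    try (BufferedReader reader =
                             new BufferedReader(clob.getCharacterStream())) {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            // Parse each line of the dumped file here.
                            System.out.println(line);
                        }
                    }
                }
            }
        }
    }
}
```

Note that `BufferedReader.readLine()` treats `\n`, `\r`, and `\r\n` all as line terminators, so whichever style the XYZ application wrote will split correctly, provided, again, that the file was stored with the encoding the database expects in the first place.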