Category Archives: Internals

Mashing up audit parameters

This is a short follow-up on my earlier post Don’t ditch AUDIT_TRAIL prematurely.
As it turns out, AUDIT_TRAIL is not the only parameter that still affects the behaviour when Unified Auditing is enabled.
Since we have reset all “old” audit parameters except AUDIT_TRAIL, the configuration looks like this:

SQL> show parameter audit
NAME                      TYPE        VALUE
------------------------- ----------- --------------------------------
audit_file_dest           string      /u01/app/oracle/admin/DEV1/adump
audit_sys_operations      boolean     TRUE
audit_syslog_level        string
audit_trail               string      DB

Although AUDIT_SYS_OPERATIONS defaults to TRUE, we wouldn’t expect any audit files to be written – we’re using Unified Auditing after all.
To my surprise the Oracle database still writes OS audit files. E.g. here on my playground VM I see the MMON slave process writing a file.

$ ll /u01/app/oracle/admin/DEV1/adump/
-rw-r-----.  1 oracle oinstall 2637 Jan 25 12:59 DEV1_m001_4546_20150125125939459040143795.aud

On customer systems I’ve also seen audit files from Scheduler job (Jnnn) processes. I haven’t figured out a pattern yet and it seems to differ between platforms. On Windows a lot more is recorded in the Windows Event Log (mainly AWR activity) than is written to ADUMP on Linux.
Solaris is a different story again. What I’ve found reproducible on Linux is connecting with a JDBC client (SQL Developer) and running any query that fails at parse time. Strangely enough, this does not happen when using SQL*Plus (OCI client). E.g.:

Sun Jan 25 14:15:33 2015 +01:00
LENGTH : '175'
ACTION :[18] 'select K from dual'
DATABASE USER:[3] 'SYS'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[3] 'btr'
CLIENT TERMINAL:[7] 'unknown'
STATUS:[3] '904'
DBID:[10] '3434795880'

What gets written to the audit files is not what we have specified in the unified audit policies, so setting AUDIT_SYS_OPERATIONS to FALSE to prevent all this extra auditing appears to be safe. So far I haven’t seen any adverse effect on the unified audit trail, but you may want to verify proper functioning on your own platform and audit configuration.
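Since AUDIT_SYS_OPERATIONS is a static parameter, turning it off takes a restart; a minimal sketch:

SQL> alter system set audit_sys_operations = false scope=spfile;
SQL> shutdown immediate
SQL> startup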

Foot note 1: If the $ORACLE_BASE/admin/$ORACLE_SID/adump directory does not exist, AUDIT_FILE_DEST defaults to $ORACLE_HOME/rdbms/audit.

Foot note 2: If AUDIT_FILE_DEST is set explicitly to a non-existing directory, you’ll see alert.log entries indicating that the OS audit trail could not be written: “OS Audit file could not be created; failing after 6 retries”

Foot note 3: All tests have been run on Oracle 12.1.0.2.

Update 10-May-2016: The issue is documented on MOS as bug 21133343 (see MOS note: 21133343.8). Patches are available for Linux x86-64 and AIX on Power.

Multitenant PDB connects with SID descriptor

When moving to the Oracle 12c multitenant architecture, one implication is that you have to switch to using service names in your connection strings for PDBs. Even SYSDBA connections to PDBs must use a SERVICE_NAME.
Well, not necessarily. I guess customers adopting this new technology have not all managed to update their scripts. Or software vendors and in-house developments have hard-coded connection strings that cannot be changed quickly.
So Oracle hastily added a new listener parameter, USE_SID_AS_SERVICE_listener. It tells the listener to treat the SID in the connect descriptor as a SERVICE_NAME.
As long as the SID in the client connect descriptor matches a service name registered on the listener, the connection is established.

$ cat /u01/app/oracle/network/admin/listener.ora 
LISTENER =
	(ADDRESS_LIST =
		(ADDRESS = 
			(PROTOCOL = TCP)
			(Host = oel6ora12cdb1.localdomain)
			(Port = 1521)
		)
	)

USE_SID_AS_SERVICE_LISTENER = ON

I have two PDBs which are registered on the listener as follows.

$ lsnrctl status
...
Services Summary...
Service "CDB1" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "testdb1" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "testdb2" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...

My client tnsnames.ora holds entries for both PDBs, one with SERVICE_NAME (TESTDB1) and one with SID (TESTDB2).

$ cat /u01/app/oracle/network/admin/tnsnames.ora 
TESTDB1 =
  (DESCRIPTION =
	(ADDRESS = (PROTOCOL = TCP)(HOST = oel6ora12cdb1.localdomain)(PORT = 1521))
	(CONNECT_DATA =
	  (SERVER = DEDICATED)
	  (SERVICE_NAME = TESTDB1)
	)
  )

TESTDB2 =
  (DESCRIPTION =
	(ADDRESS = (PROTOCOL = TCP)(HOST = oel6ora12cdb1.localdomain)(PORT = 1521))
	(CONNECT_DATA =
	  (SERVER = DEDICATED)
	  (SID = TESTDB2)
	)
  )

Next, I connect to both PDBs using the above TNS entries.

$ sqlplus /nolog

SQL> connect system/manager@TESTDB1
Connected.

SQL> connect system/manager@TESTDB2
Connected.

Well, that’s all fine.
But what happens if we set DB_DOMAIN? Then the db domain is appended to the service name, and ultimately the SID you have in the client connect descriptor no longer matches the service name.

Since you most likely use the feature described above precisely because you cannot change the client connect descriptor, this solution no longer works.
Back to square one.

Just for fun let’s try it anyway…but of course, for it to work I have to adjust the SID in the TNS entry for TESTDB2.
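For the record, this is how the domain gets set (DB_DOMAIN is a static parameter, so a restart is needed; a sketch):

SQL> alter system set db_domain = 'localdomain' scope=spfile;
SQL> shutdown immediate
SQL> startup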

SQL> show parameter db_domain

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_domain			     string	 localdomain

On the listener, the two services from my pluggable databases, TESTDB1 and TESTDB2, are now registered with the db domain appended.

$ lsnrctl status
...
Services Summary...
Service "CDB1.localdomain" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "testdb1.localdomain" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "testdb2.localdomain" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now, to match the service name I change the TNS entry for TESTDB2 to the following:

TESTDB2.localdomain =
  (DESCRIPTION =
	(ADDRESS = (PROTOCOL = TCP)(HOST = oel6ora12cdb1.localdomain)(PORT = 1521))
	(CONNECT_DATA =
	  (SERVER = DEDICATED)
	  (SID = TESTDB2.localdomain)
	)
  )

At first sight it’s a bit strange to see the db domain in the SID parameter, as this was never supposed to work this way. But at least we can connect again.

$ sqlplus system/manager@TESTDB2.localdomain
Connected.

I thought about creating a matching service using DBMS_SERVICE.CREATE_SERVICE, but Oracle will always append the db domain to the service on the listener. And since a service in the format “PDB_NAME.DB_DOMAIN” already exists, you will not be able to create it (ORA-44303: service name exists).
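For illustration, the attempt would look something like this (a sketch; container and service names assumed):

SQL> alter session set container = TESTDB2;
SQL> exec dbms_service.create_service('testdb2', 'testdb2')
-- fails with ORA-44303: service name exists, because the PDB's default service already carries this name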

I file this under half-baked solution… (Not that I think this feature should be used anyway. Go get your scripts and applications fixed!)

Don’t ditch AUDIT_TRAIL prematurely

When migrating to unified auditing in Oracle 12c I faced a strange behaviour in the recording of LOGON / LOGOFF actions.

As the Oracle 12c documentation points out, the AUDIT_* parameters have no effect once you enable unified auditing. Guess what I did in the good spirit of cleaning up obsolete settings:
I reset those parameters in the spfile after enabling unified auditing.
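The reset itself looks something like this (a sketch; the change only takes effect after a restart):

SQL> alter system reset audit_trail scope=spfile;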

Enable unified auditing by relinking the oracle binary.

$ cd ${ORACLE_HOME}/rdbms/lib
$ make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=${ORACLE_HOME}

SQL> show parameter audit_trail
NAME          TYPE        VALUE
------------- ----------- ----------
audit_trail   string      NONE

I disabled all the Oracle built-in audit policies and created my own to record LOGON / LOGOFF actions.
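Disabling the out-of-the-box policies looks like this (a sketch; on 12.1.0.2 the policies enabled by default are, to my knowledge, ORA_SECURECONFIG and ORA_LOGON_FAILURES):

SQL> noaudit policy ORA_SECURECONFIG;
SQL> noaudit policy ORA_LOGON_FAILURES;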

SQL> create audit policy audpol_connection
     actions logon, logoff;

Audit policy created.

SQL> audit policy audpol_connection;

Audit succeeded.

SQL> select * from audit_unified_enabled_policies;

USER_NAME       POLICY_NAME          ENABL SUC FAI
--------------- -------------------- ----- --- ---
ALL USERS       AUDPOL_CONNECTION    BY    YES YES

SQL> select * from audit_unified_policies
     where policy_name = 'AUDPOL_CONNECTION'
     and audit_option like '%LOGO%';

POLICY_NAME          AUDIT CONDI AUDIT_OPTION    AUDIT_OPTION_TY OBJEC OBJEC OBJEC COMMO
-------------------- ----- ----- --------------- --------------- ----- ----- ----- -----
AUDPOL_CONNECTION    NONE  NONE  LOGOFF          STANDARD ACTION NONE  NONE  NONE
AUDPOL_CONNECTION    NONE  NONE  LOGON           STANDARD ACTION NONE  NONE  NONE

This shows that my audit policy is enabled and should record LOGON and LOGOFF actions.

I also want the audit trail to be written immediately so that entries become visible in the UNIFIED_AUDIT_TRAIL view without waiting for a flush to happen (or explicitly flushing every time).

SQL> exec dbms_audit_mgmt.set_audit_trail_property(
         dbms_audit_mgmt.audit_trail_unified
       , dbms_audit_mgmt.audit_trail_write_mode
       , dbms_audit_mgmt.audit_trail_immediate_write
     )

PL/SQL procedure successfully completed.

For the sake of clean output let’s purge the existing audit trail.

SQL> exec dbms_audit_mgmt.clean_audit_trail(
         dbms_audit_mgmt.audit_trail_unified
       , false
     )

PL/SQL procedure successfully completed.

So far so good. Now the actual test case begins: connect to the database from a different terminal.

$ sqlplus system/manager@dev1

SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 21 17:39:02 2014
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Last Successful login time: Tue Oct 21 2014 17:38:49 +02:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing and Unified Auditing options

Note that I have not logged out yet…

Checking in my original session, we cannot find any LOGON event recorded, although I definitely expected to see one.

SQL> select audit_type, sessionid, dbusername, event_timestamp, action_name, return_code
     from unified_audit_trail
     where action_name like '%LOG%'
     order by event_timestamp;

no rows selected

After disconnecting from the other session the LOGOFF event becomes visible.

SQL> select audit_type, sessionid, dbusername, event_timestamp, action_name, return_code
     from unified_audit_trail
     where action_name like '%LOG%'
     order by event_timestamp;

AUDIT_TYPE SESSIONID    DBUSERNAME    EVENT_TIMESTAMP                 ACTION_NAME     RETURN_CODE
---------- ------------ ------------- ------------------------------- --------------- -----------
Standard   2253237941   SYSTEM        21-OCT-14 05.13.38.712564 PM    LOGOFF          0

I have run multiple different tests with different clients, connection methods, etc., all ending with the same result: Oracle just won’t record LOGON actions! We only get to see the LOGOFF event.

Against all odds and feeling a little desperate I eventually decided to set “AUDIT_TRAIL = DB” and give it a go.
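AUDIT_TRAIL is a static parameter, so this again means editing the spfile and bouncing the instance (a sketch):

SQL> alter system set audit_trail = db scope=spfile;
SQL> shutdown immediate
SQL> startup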

SQL> show parameter audit_trail

NAME          TYPE        VALUE
------------- ----------- ----------
audit_trail   string      DB

I purged the audit trail and ran the same test as before.

Connect from a different terminal to the database.

$ sqlplus system/manager@dev1

SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 21 17:39:02 2014
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Last Successful login time: Tue Oct 21 2014 17:38:49 +02:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing and Unified Auditing options

Check the unified audit trail – and there it is.

SQL> select audit_type, sessionid, dbusername, event_timestamp, action_name, return_code
     from unified_audit_trail
     where action_name like '%LOG%'
     order by event_timestamp;

AUDIT_TYPE SESSIONID    DBUSERNAME    EVENT_TIMESTAMP                 ACTION_NAME     RETURN_CODE
---------- ------------ ------------- ------------------------------- --------------- -----------
Standard   1706589952   SYSTEM        21-OCT-14 07.17.46.063184 PM    LOGON           0

After disconnecting from the other session I can again see the LOGOFF action.

SQL> select audit_type, sessionid, dbusername, event_timestamp, action_name, return_code
     from unified_audit_trail
     where action_name like '%LOG%'
     order by event_timestamp;

AUDIT_TYPE SESSIONID    DBUSERNAME    EVENT_TIMESTAMP                 ACTION_NAME     RETURN_CODE
---------- ------------ ------------- ------------------------------- --------------- -----------
Standard   1706589952   SYSTEM        21-OCT-14 07.17.46.063184 PM    LOGON           0
Standard   1706589952   SYSTEM        21-OCT-14 07.18.02.661384 PM    LOGOFF          0

In my opinion this does not work as designed, as I believe Oracle actually intended the AUDIT_* parameters to have no effect on unified auditing. I haven’t tested exhaustively, but I imagine there might be other problems with unified auditing when AUDIT_TRAIL is set to NONE. So, test carefully whether all your auditing works as expected.

Foot note: All tests have been run on Oracle 12.1.0.2.

Defeating ORA-12696 Double Encryption Turned On (tricking the ENVS)

This post is about configuring SQL*Net to support native SQL*Net encryption and SSL authentication for different users at the same time. Imagine your security officer comes along and demands that every connection to the database must be encrypted and that this must be enforced by the database. The application server (e.g. WebLogic) already authenticates itself to the database using SSL certificates, so that network traffic is already encrypted. All other connects use username/password authentication and must be encrypted by native SQL*Net encryption. The database server enforces encryption by setting SQLNET.ENCRYPTION_SERVER = REQUIRED. From a security point of view this is paramount.

Going ahead and configuring everything according to the docs and white papers, I bet you’ll end up with “ORA-12696 Double Encryption Turned On, login disallowed”.

[side note]
If SQLNET.ENCRYPTION_SERVER is set to REQUIRED then the client gets “ORA-28865: SSL connection closed”, but once you start digging in the SQL*Net trace file of the server process you’ll find ORA-12696.
Funnily, if SQLNET.ENCRYPTION_SERVER is set to REQUESTED then the client actually gets ORA-12696 reported.
[/side note]

The basic problem is that there is only one $TNS_ADMIN location with one sqlnet.ora containing all the configuration. Ultimately, we need a way to use multiple $TNS_ADMIN locations and to separate the sqlnet.ora configurations for the different listening end points/services.
One solution would be to fire up two listeners with different environment settings; the server processes spawned by a listener inherit its properties, namely permissions and environment variables.
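A minimal sketch of that two-listener approach (the listener name LISTENER_SSL is made up and would have to be defined in a listener.ora under the alternate TNS_ADMIN):

$ export TNS_ADMIN=/u01/app/oracle/network/adminssl
$ lsnrctl start LISTENER_SSL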
In the following, however, I’m going to show a solution with one single listener, using a little known feature: specifying the ENVS parameter in the static listener SID list.

But first let’s have a look at the current configuration which is giving us trouble. There’s a listener configuration with two listening end points (one for TCP and one for TCPS) and a mixed configuration in sqlnet.ora.
My $TNS_ADMIN points to /u01/app/oracle/network/admin, which is the default location set in the oracle user’s profile and is used when starting up the listener.

$ cat /u01/app/oracle/network/admin/listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel6ora12dev1.localdomain)(PORT = 2483))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = oel6ora12dev1.localdomain)(PORT = 2484))
    )
  )
SSL_CLIENT_AUTHENTICATION=FALSE
WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY=/u01/app/oracle/admin/DEV1/wallet/auth)))

$ cat /u01/app/oracle/network/admin/sqlnet.ora

SQLNET.ENCRYPTION_SERVER=requested
SQLNET.ENCRYPTION_TYPES_SERVER=(aes256)
SQLNET.AUTHENTICATION_SERVICES=(BEQ,TCPS)
SSL_CLIENT_AUTHENTICATION=TRUE
SSL_CIPHER_SUITES=(SSL_RSA_WITH_AES_256_CBC_SHA)
WALLET_LOCATION = (SOURCE = (METHOD=FILE)(METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/DEV1/wallet/auth)))

If we try to connect from a client using SSL authentication, it errors with “ORA-12696 Double Encryption Turned On, login disallowed”. Connecting using username/password works just fine and the network traffic is properly encrypted.

To be able to have different SQL*Net configurations we first need to create a separate directory structure. In this case I just create it next to the current network/admin directory and separate the configuration into two sqlnet.ora files:
$ mkdir /u01/app/oracle/network/adminssl
$ vi /u01/app/oracle/network/admin/sqlnet.ora

SQLNET.ENCRYPTION_SERVER=requested
SQLNET.ENCRYPTION_TYPES_SERVER=(aes256)

$ vi /u01/app/oracle/network/adminssl/sqlnet.ora

SQLNET.AUTHENTICATION_SERVICES=(BEQ,TCPS)
SSL_CLIENT_AUTHENTICATION=TRUE
SSL_CIPHER_SUITES=(SSL_RSA_WITH_AES_256_CBC_SHA)
WALLET_LOCATION = (SOURCE = (METHOD=FILE)(METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/DEV1/wallet/auth)))

Now I need to tell the listener to set the proper environment when spawning server processes for incoming connection requests. This is done by specifying the TNS_ADMIN location in the ENVS parameter:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DEV1_SSL.localdomain)
      (ORACLE_HOME = /u01/app/oracle/product/ora12101)
      (SID_NAME = DEV1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/network/adminssl")
    )
  )

Now, whenever a connection is established via the service DEV1_SSL.localdomain, the listener exports the environment variable TNS_ADMIN before spawning the server process. The server process then looks up the environment variable and reads the sqlnet.ora file in /u01/app/oracle/network/adminssl.
As you can see, next to using different listening end points I also present an additional service on the listener.
Depending on which service and end point the client uses, it gets either native SQL*Net encryption or SSL authentication including SSL encryption.

On the client side I use either one of the following TNS entries, depending on which authentication method the user uses.

DEV1_SSL.LOCALDOMAIN =
  (DESCRIPTION =
    (ADDRESS_LIST=
      (ADDRESS = (PROTOCOL = TCPS)(HOST = oel6ora12dev1.localdomain)(PORT = 2484))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DEV1_SSL.localdomain)
      (SECURITY=(SSL_SERVER_CERT_DN="CN=DEV1.localdomain"))
    )
  )

DEV1_NSE.LOCALDOMAIN =
  (DESCRIPTION =
    (ADDRESS_LIST=
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel6ora12dev1.localdomain)(PORT = 2483))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DEV1.localdomain)
    )
  )

$ sqlplus /@DEV1_SSL.LOCALDOMAIN
or
$ sqlplus system/manager@DEV1_NSE.LOCALDOMAIN

Setting different values for TNS_ADMIN at process-spawning time is useful in other situations, too. For instance, this technique can be applied if you have one listener for multiple databases and each database has its own wallet.
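A sketch of what that could look like (database names, homes and wallet paths are made up):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DB1.localdomain)
      (ORACLE_HOME = /u01/app/oracle/product/ora12101)
      (SID_NAME = DB1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/network/admindb1")
    )
    (SID_DESC =
      (GLOBAL_DBNAME = DB2.localdomain)
      (ORACLE_HOME = /u01/app/oracle/product/ora12101)
      (SID_NAME = DB2)
      (ENVS="TNS_ADMIN=/u01/app/oracle/network/admindb2")
    )
  )

Each TNS_ADMIN directory then holds a sqlnet.ora with a WALLET_LOCATION pointing to that database’s wallet.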

In a Grid Infrastructure environment there is also the possibility to set environment variables in Oracle Clusterware, which is probably more comfortable to handle:

srvctl setenv database -d <db_unique_name> -t TNS_ADMIN=<directory>

Foot note: All tests have been run on Oracle 12.1.0.1.

Deferred Segment Creation PARALLEL

Following up on Chris Antognini’s findings regarding deferred segment creation (here and here), I noticed another restriction that still finds no mention even in the latest Oracle documentation. I vaguely remember coming across some issues in earlier 11gR2 releases, some of which have been fixed in later patch sets. The problem went along the lines of:

alter session set deferred_segment_creation = true;

create table t2 parallel 4
as 
  select /*+ parallel (t1, 4) */ *
  from t1 where 1 = 2
;

select segment_name, segment_type from user_segments where segment_name = 'T2';
SEGMENT_NAME SEGMENT_TYPE
------------ ------------
T2           TABLE

Note: This test was run against Oracle 11.2.0.1, as this particular issue is fixed in 11.2.0.4 and later.

Now, why in God’s name would you run a CTAS in parallel when you know the query does not return any rows, you might think.
Well, the problem becomes more subtle when we start combining features, namely partitioning and parallel DDL. In the following I have prepared a few test cases to demonstrate the issue.

First let’s create the source table to select from:

create table t1
as
  select
      rownum id
    , round(dbms_random.value(1,4), 0) code
  from dual connect by level <= 100
;

The CODE column contains four distinct number values ranging from 1 to 4.

Next, we create a list partitioned table using CTAS (create table as select) from our source table. All parts of this SQL run serially:

create table t3
segment creation deferred
partition by list (code) (
    partition code_1 values (1)
  , partition code_2 values (2)
  , partition code_3 values (3)
  , partition code_4 values (4)
)
as
  select *
  from t1
  where code = 1
;

select segment_name, segment_type, partition_name from dba_segments where owner = 'TESTUSER' and segment_name = 'T3' order by segment_name, partition_name;
SEGMENT_NAME SEGMENT_TYPE      PARTITION_NAME
------------ ---------------   --------------
T3           TABLE PARTITION   CODE_1

This works as expected. Only the relevant partition was materialized as a segment to hold the data for CODE value 1.

This time, we create the same table structure using parallel DDL:

create table t4
segment creation deferred
parallel 4
partition by list (code) (
    partition code_1 values (1)
  , partition code_2 values (2)
  , partition code_3 values (3)
  , partition code_4 values (4)
)
as
  select *
  from t1
  where code = 1
;

select segment_name, segment_type, partition_name from dba_segments where owner = 'TESTUSER' and segment_name = 'T4' order by segment_name, partition_name;
SEGMENT_NAME SEGMENT_TYPE      PARTITION_NAME
------------ ---------------   --------------
T4           TABLE PARTITION   CODE_1
T4           TABLE PARTITION   CODE_2
T4           TABLE PARTITION   CODE_3
T4           TABLE PARTITION   CODE_4

Oracle has materialized all four partitions even though only one of them contains data. The same also happens when the source query doesn’t return any rows, e.g. when CODE = 5.

Usually, there’s not much sense in running DDL in parallel while the SELECT providing the data runs serially. I have left out parallel processing of the SELECT part for simplicity as my tests have shown that it doesn’t affect the segment creation in any way.
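If you want to end up with a parallel table without materializing all partitions, one possible workaround (a sketch I haven’t verified on every release; T5 is just an illustrative name) is to run the CTAS serially and set the parallel attribute afterwards, which is a metadata-only change:

create table t5
segment creation deferred
partition by list (code) (
    partition code_1 values (1)
  , partition code_2 values (2)
  , partition code_3 values (3)
  , partition code_4 values (4)
)
as
  select *
  from t1
  where code = 1
;

alter table t5 parallel 4;  -- only sets the attribute, creates no segments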

Foot note: All tests have been run on Oracle 12.1.0.2 and 11.2.0.4 except where noted otherwise.

DBMS_LOB and Character Sets

Running the database with a multi-byte database character set (fixed- or variable-width), e.g. AL32UTF8, has some implications on the default behaviour of certain parts of the database that you just have to be aware of. From what I can see these are mostly well documented in the Oracle documentation, which is just another case that proves the importance of reading it ;-).

Here is the relevant NLS parameter config for the case/scripts following (note that NLS_CHARACTERSET is only exposed in NLS_DATABASE_PARAMETERS, not in the session view):

SQL> select * from nls_database_parameters where parameter in ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');

PARAMETER                      VALUE
----------------------------------------------------------
NLS_CHARACTERSET               AL32UTF8
NLS_LENGTH_SEMANTICS           CHAR

Create the table and populate it:

SQL> create table t1 (id number(1), text clob);

SQL> declare
  cval clob;
begin
  dbms_lob.createtemporary(cval, false);
  for idx in 0 .. 9 loop
    dbms_lob.writeappend(cval, 4000, lpad(to_char(idx), 4000, to_char(idx)));
  end loop;
  insert into t1 values (1, cval);
  commit;
end;
/

Illustrate the behaviour of DBMS_LOB.SUBSTR:

SQL> declare
  cval clob;
begin
  select text into cval from t1 where id = 1;
  --
  dbms_output.put_line('Original length: '||dbms_lob.getlength(cval));
  dbms_output.put_line('Substr 10000 length: '||length(dbms_lob.substr(cval, 10000, 1)));
  dbms_output.put_line('Substr 1000 length: '||length(dbms_lob.substr(cval, 1000, 1)));
end;
/
Original length: 40000
Substr 10000 length: 8191
Substr 1000 length: 1000

Although we know the entire string is 40’000 characters long, the sub-string call that is supposed to return the first 10’000 characters only returns 8’191. Getting the first 1’000 characters works as expected. Here’s why:
The AL32UTF8 character set uses a variable-width multi-byte encoding scheme, which means that depending on the character it uses one to at most four bytes to store it. By definition DBMS_LOB.SUBSTR returns at most 32767/n characters, where n is the number of bytes used to store a character for fixed-width character sets, and the maximum number of bytes per character for variable-width character sets. This means in our case 32767 / 4 = 8191.75, truncated to 8191.
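A quick way to see the variable width in action (assuming an AL32UTF8 database and a client character set that passes the character through intact):

SQL> select length('ü') n_chars, lengthb('ü') n_bytes from dual;

   N_CHARS    N_BYTES
---------- ----------
         1          2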

But heck, I really need the first 10’000 characters… so I make use of DBMS_LOB.READ:

SQL> declare
  cval clob;
  vval varchar2(10000);  -- session level length semantics is CHAR
  nof_chars integer := 10000;
begin
  select text into cval from t1 where id = 1;
  --
  dbms_lob.read(cval, nof_chars, 1, vval);
  dbms_output.put_line('Read 10000 length: '||length(vval));
end;
/

Read 10000 length: 10000

This rule also applies to the DBMS_LOB.COMPARE function.
Now I just want to mention something else that I always miss when checking the DBMS_LOB function/procedure definitions in the PL/SQL Packages and Types Reference documentation. The procedures CONVERTTOBLOB, CONVERTTOCLOB and LOADCLOBFROMFILE require a character set ID, but nowhere does it point out how you can figure out the ID for a given character set (or vice versa). The SQL Reference documentation gives you the clue when looking at the functions NLS_CHARSET_ID and NLS_CHARSET_NAME:

SQL> select nls_charset_id('AL32UTF8') cs_id, nls_charset_name(873) cs_name from dual;

CS_ID         CS_NAME
--------      --------------
873           AL32UTF8
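To tie it together, here is a minimal sketch of feeding the resolved ID into DBMS_LOB.CONVERTTOCLOB (temporary LOBs and a trivial 'hello' payload assumed):

SQL> declare
  bval     blob;
  cval     clob;
  dest_off integer := 1;
  src_off  integer := 1;
  lang_ctx integer := dbms_lob.default_lang_ctx;
  warn     integer;
begin
  -- source BLOB holding 5 bytes of single-byte characters
  dbms_lob.createtemporary(bval, false);
  dbms_lob.writeappend(bval, 5, utl_raw.cast_to_raw('hello'));
  dbms_lob.createtemporary(cval, false);
  dbms_lob.converttoclob(cval, bval, dbms_lob.lobmaxsize,
                         dest_off, src_off,
                         nls_charset_id('AL32UTF8'),  -- character set ID resolved by name
                         lang_ctx, warn);
  dbms_output.put_line('Converted length: '||dbms_lob.getlength(cval));
end;
/
Converted length: 5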

Foot note: All tests have been run on an Oracle 10g (10.2.0.4) database.

Statement level rollback and exception propagation

What’s obvious from reading the Oracle Concepts and the PL/SQL Developer’s Guide is that Oracle sets an implicit savepoint before each single SQL statement, and in case the statement fails (for whatever reason) Oracle rolls back to that savepoint. That’s what’s called “statement level rollback”. Apparently, for stored procedures that are called from a client the same rules apply: before the procedure call a savepoint is set, and if that procedure raises/propagates an exception, Oracle rolls back to the implicit savepoint set just before the procedure call. It is fundamental to understand this behaviour as an Oracle database developer, especially when it comes to exception handling. Let’s examine the implications. First we create a table with a unique index and constraint:

SQL> create table t1 (id number, text varchar2(50));
Table created.

SQL> create unique index t1_idx_01 on t1 (text);
Index created.

SQL> alter table t1 add constraint t1_uk_01 unique (text) using index t1_idx_01;
Table altered.

Then a stored procedure to insert some data into table T1:

SQL> create or replace procedure insert_t1
is
begin
  dbms_output.put_line('inserting first row');
  insert into t1 (id, text) values (1, 'first');

  dbms_output.put_line('inserting second row');
  insert into t1 (id, text) values (2, 'first'); -- this will raise ORA-00001

  dbms_output.put_line('inserting third row');
  insert into t1 (id, text) values (3, 'third'); -- this statement will not be executed due to the previous exception
end insert_t1;
/
Procedure created.

This procedure tries to insert three records into table T1, but due to the unique constraint on TEXT the second insert will fail. The procedure returns with an exception propagated to the caller, and consequently everything done within the procedure is rolled back.
Let’s put that to a test:

SQL> truncate table t1;
Table truncated.

SQL> exec insert_t1
BEGIN insert_t1; END;
*
ERROR at line 1:
ORA-00001: unique constraint (TESTUSER.T1_UK_01) violated
ORA-06512: at "TESTUSER.INSERT_T1", line 8
ORA-06512: at line 1

SQL> select * from t1;
no rows selected

So far so good. Now let’s see what happens when we trap ORA-00001 (the PL/SQL named exception DUP_VAL_ON_INDEX) and swallow it, not propagating the exception to the top-level call.

SQL> create or replace procedure insert_t1
is
begin
  dbms_output.put_line('inserting first row');
  insert into t1 (id, text) values (1, 'first');

  dbms_output.put_line('inserting second row');
  insert into t1 (id, text) values (2, 'first'); -- this will raise ORA-00001

  dbms_output.put_line('inserting third row');
  insert into t1 (id, text) values (3, 'third'); -- this statement will not be executed due to the previous exception
exception
when dup_val_on_index then
  null;
end insert_t1;
/
Procedure created.

This time the procedure again tries to insert three records into table T1 and again fails on the second record. But the exception is trapped and silently swallowed, so the procedure returns without an error and without propagating the exception, thereby leaving one record in the table.

SQL> truncate table t1;
Table truncated.

SQL> exec insert_t1
PL/SQL procedure successfully completed.

SQL> select * from t1;
ID                  TEXT
---------- --------------------------------------------------
1                   first

1 row selected.

Ooops, something went wrong and we didn’t notice. If we proceed with the program flow in the application and eventually commit the transaction, we leave the database in an inconsistent state, as we expect three records to be present and not only one.

Alright, this time let’s log the error and propagate it:

SQL> create or replace procedure insert_t1
is
begin
  dbms_output.put_line('inserting first row');
  insert into t1 (id, text) values (1, 'first');

  dbms_output.put_line('inserting second row');
  insert into t1 (id, text) values (2, 'first'); -- this will raise ORA-00001

  dbms_output.put_line('inserting third row');
  insert into t1 (id, text) values (3, 'third'); -- this statement will not be executed due to the previous exception
exception
when dup_val_on_index then
  dbms_output.put_line('Error: duplicate records');
  raise_application_error(-20001, 'There can only be one "first"!');
end insert_t1;
/
Procedure created.

There we go: again, whatever the application does, we ensure the consistency of the data within the database.

SQL> truncate table t1;
Table truncated.

SQL> exec insert_t1
BEGIN insert_t1; END;
*
ERROR at line 1:
ORA-20001: There can only be one "first"!
ORA-06512: at "TESTUSER.INSERT_T1", line 15
ORA-06512: at line 1

SQL> select * from t1;
no rows selected

The propagation of exceptions can also be observed in a SQL trace: for the cursor “begin insert_t1; end;” at depth 0, an ERROR line follows the EXEC call. If the error is trapped and ignored, there is only an ERROR following the EXEC call of the cursor “INSERT INTO T1(ID, TEXT) VALUES (2, ‘first’)” at depth 1 and no ERROR for the parent cursor.

Conclusion:
Be careful with stored procedures that trap exceptions and do not propagate them, as this breaks the statement level rollback that Oracle would otherwise perform when the exceptions are not caught at all or are re-raised.
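Conceptually, what Oracle does around the top-level call can be pictured with an explicit savepoint (a sketch of the equivalent behaviour, not what the client actually sends):

SQL> savepoint before_call;
SQL> exec insert_t1
-- if the call raises an exception, Oracle performs the equivalent of:
SQL> rollback to savepoint before_call;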

Foot note: in this text “stored procedure” or “procedure” represents any named PL/SQL construct (except triggers).