Category Archives: General

Patch the patch: 12.2 DIRECTORY path name

On May 16th 2017, Oracle released the first Database Proactive Bundle Patch (12.2.0.1.170516) for Oracle Database 12.2. Unfortunately, I could not successfully apply the patch to my lab environment. It would patch CDB$ROOT and PDB$SEED but failed patching my PDB (see the PDBDEV1 lines in the output below).

$ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sat May 20 22:49:57 2017
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_4836_2017_05_20_22_49_57/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBBP:
  ID 170516 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDBDEV1
    Nothing to roll back
    The following patches will be applied:
      25862693 (DATABASE BUNDLE PATCH 12.2.0.1.170516)

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...
Patch 25862693 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/25862693/21259009/25862693_apply_DEV1_CDBROOT_2017May20_22_50_22.log (no errors)
Patch 25862693 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/25862693/21259009/25862693_apply_DEV1_PDBSEED_2017May20_22_50_47.log (no errors)
Patch 25862693 apply (pdb PDBDEV1): WITH ERRORS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/25862693/21259009/25862693_apply_DEV1_PDBDEV1_2017May20_22_50_47.log (errors)
    Error at line 32: ORA-65254: invalid path specified for the directory

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_4836_2017_05_20_22_49_57/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Sat May 20 22:51:10 2017

The logfile showed that a call to “dbms_sqlpatch.patch_initialize” had failed. Looking for what might have caused this error in the procedure, I found this:

       DIRECTORY_SQL := 'CREATE DIRECTORY dbms_sqlpatch_dir AS ' ||
         DBMS_ASSERT.ENQUOTE_LITERAL(ORACLE_HOME || '/rdbms/admin');
       DEBUG_PUT_LINE('directory sql: ' || DIRECTORY_SQL);
       EXECUTE IMMEDIATE DIRECTORY_SQL;

Why would that fail? The code should actually work. After some further investigation I noticed a sneaky little behavior change in Oracle 12.2: if you create a PDB with the “PATH_PREFIX” clause, “CREATE DIRECTORY” will no longer accept absolute path names in that PDB.

SQL> alter session set container=pdbdev1;

Session altered.

SQL> create or replace directory testdir AS '/u02/oradata/DEV1/PDBDEV1/dump';
create or replace directory testdir AS '/u02/oradata/DEV1/PDBDEV1/dump'
*
ERROR at line 1:
ORA-65254: invalid path specified for the directory


SQL> create or replace directory testdir AS './dump';

Directory created.

SQL> select directory_path from dba_directories where directory_name = 'TESTDIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u02/oradata/DEV1/PDBDEV1/./dump

SQL> create or replace directory testdir AS 'dump';

Directory created.

SQL> select directory_path from dba_directories where directory_name = 'TESTDIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u02/oradata/DEV1/PDBDEV1/dump

Oracle 12.2 will basically accept any string that doesn’t start with a forward slash “/” and append it to the PATH_PREFIX. This behavior is not reflected in the current documentation.

Oracle 12.1, on the other hand, accepts just about any string. But as you can see below, only the first directory object ends up with the intended directory path.

SQL> create or replace directory testdir AS '/u02/oradata/DEV1/PDB1';

Directory created.

SQL> select directory_path from dba_directories where directory_name = 'TESTDIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u02/oradata/DEV1/PDB1

SQL> create or replace directory testdir AS './dump';

Directory created.

SQL> select directory_path from dba_directories where directory_name = 'TESTDIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
./dump

SQL> create or replace directory testdir AS 'dump';

Directory created.

SQL> select directory_path from dba_directories where directory_name = 'TESTDIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
dump

I believe Oracle’s intention in 12.2 was to restrict directory object paths to “PATH_PREFIX” and its subfolders. They just did it in a very bad way. I’m going to raise a Service Request to find out whether this is expected behavior or a regression and will keep you posted in the comment section of this post. Either way, Oracle will have to fix one or the other.

Scalable primary keys (GUID follow-up)

In the previous post we saw that GUIDs as primary keys can lead to serious performance problems. What we need is a primary key that won’t cause excessive I/O on inserts and that reduces contention on the right-hand side index leaf block. And I’d like it to be RAC-aware.
To build a primary key that addresses the above-mentioned issues, we use three components:

  • the Oracle instance number
  • a one (or two) digit number
  • an Oracle-generated sequence number

The Oracle instance number is useful in Real Application Clusters setups to minimize data block transfers between nodes on inserts, as each instance gets its own part of the index tree to work on.
The one (or two) digit number can be anything that distributes inserts between sessions, for instance “mod(sid, 10)” or “round(dbms_random.value(0, 9), 0)”. The goal here is that each session works on a different part of the index.
The Oracle sequence number makes the key values unique.
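
To make the composition concrete, here is a minimal sketch with hypothetical values: on instance 1, a session with SID 47 (so mod(47, 10) = 7) drawing sequence value 4711 produces the key 174711.

-- hypothetical parts: instance 1, distribution digit 7, sequence value 4711
select to_number('1' || '7' || '4711') as example_key from dual;

EXAMPLE_KEY
-----------
     174711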

I’m going to use the same test setup from the previous post. Tables TESTS and RESULTS are the same except that I changed the data type from “RAW(16)” to “NUMBER(10, 0)”.

create table tests (
    test_id number(10, 0) not null
  , vc2_long varchar2(4000 byte)
) tablespace myapp_data;

create unique index tests_idx_01 on tests (test_id) tablespace myapp_index;
alter table tests add constraint tests_pk primary key (test_id) using index tests_idx_01;

create table results (
    result_id number(10, 0) not null
  , test_id number(10, 0) not null
  , dt date
  , vc2_short varchar2(200 byte)
) tablespace myapp_data;

create unique index results_idx_01 on results(result_id) tablespace myapp_index;
alter table results add constraint results_pk primary key (result_id) using index results_idx_01;
create index results_idx_02 on results(test_id) tablespace myapp_index;
alter table results add constraint results_fk_01 foreign key (test_id) references tests(test_id) on delete cascade;

And we need sequences to get uniqueness.

create sequence tests_seq_pk cache 1000 noorder;
create sequence results_seq_pk cache 10000 noorder;

From the table definition above you can see that I’m using a single NUMBER column to hold the three-part primary key values. Since the three parts must be put together in a specific order (instance, distribution digit, sequence), we cannot simply add up the numbers. I’ll use stored functions to concatenate the parts as strings and then cast the result to a number.

create or replace function tests_id_distr
return number
is
begin
  return to_number(sys_context('userenv', 'instance') || to_char(mod(sys_context('userenv', 'sid'), 10)) || to_char(tests_seq_pk.nextval) || dummy_java_call);
end tests_id_distr;
/

create or replace function results_id_distr
return number
is
begin
  return to_number(sys_context('userenv', 'instance') || to_char(mod(sys_context('userenv', 'sid'), 10)) || to_char(results_seq_pk.nextval) || dummy_java_call);
end results_id_distr;
/

What’s the “dummy_java_call” at the end there, you ask?
Well, PL/SQL is blazingly fast 🙂 …and concatenating and casting a few strings and numbers doesn’t do the Java GUID implementation justice. To get fairly comparable results I made this implementation also do a context switch to the JVM for every key generated. The Java method simply returns an empty string (or NULL, if you will, in the PL/SQL context).

create or replace and compile java source named "DummyJavaCall"
as
public class DummyJavaCall {
  public static String create() {
    return "";
  }
}
/

create or replace function dummy_java_call
return varchar2
is
language java name 'DummyJavaCall.create() return java.lang.String';
/

The test script was only slightly modified to use the new stored functions for the primary key values. Otherwise it is the same: it inserts 1000 Tests in a loop; after each Test it inserts 100 Results in the inner loop and commits.

set verify off feedback off
set serveroutput on size unlimited

<<myapp>>
declare
  test_id tests.test_id%type;
begin
  rrt.stat_cfg_default;
  rrt.stat_start('&1');
  rrt.timer_start();

  for tid in 1 .. 1000 loop
    insert into tests t (test_id, vc2_long) values (tests_id_distr(), dbms_random.string('A', trunc(dbms_random.value(200, 4000))))
    returning t.test_id into myapp.test_id;
    
    for rid in 1 .. 100 loop
      insert into results r (result_id, test_id, dt, vc2_short) values (results_id_distr(), myapp.test_id, sysdate, dbms_random.string('L', trunc(dbms_random.value(10, 200))));
    end loop;
    
    commit;
  end loop;

  dbms_output.put_line('Elapsed:' || to_char(rrt.timer_elapsed));
  rrt.stat_stop;
end;
/

exit success

I again used the shell script from the GUID test case to run 10 concurrent sessions per run. The VM and database specifications are exactly the same for both test cases. To get approximately the same data volume for the indexes I had to run this test case 12 times.
RESULTS_IDX_01 is 34 MB bigger than in the GUID test case (385 vs. 351 MB).
RESULTS_IDX_02 is 6 MB smaller than in the GUID test case (389 vs. 395 MB).
Also, the number of leaf blocks is in the same ballpark.

SQL> select segment_name, tablespace_name, blocks, extents, bytes/1024/1024 mb from dba_segments where (segment_name like 'TESTS%' or segment_name like 'RESULTS%') order by segment_name desc;

SEGMENT_NAME    TABLESPACE_NAME  BLOCKS  EXTENTS  MB    
--------------- ---------------- ------- -------- ----
TESTS_IDX_01    MYAPP_INDEX      512     4        4     
TESTS           MYAPP_DATA       41216   322      322   
RESULTS_IDX_02  MYAPP_INDEX      49792   389      389   
RESULTS_IDX_01  MYAPP_INDEX      49280   385      385   
RESULTS         MYAPP_DATA       220160  1720     1720  
SQL> select table_name, index_name, uniqueness, blevel, leaf_blocks, distinct_keys, avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor, num_rows from dba_indexes where index_name in ('TESTS_IDX_01', 'RESULTS_IDX_01', 'RESULTS_IDX_02') order by table_name desc, index_name;

TABLE_NAME  INDEX_NAME      UNIQUENESS  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS  AVG_LEAF_BLOCKS_PER_KEY  AVG_DATA_BLOCKS_PER_KEY  CLUSTERING_FACTOR  NUM_ROWS  
----------- --------------- ----------- ------- ------------ -------------- ------------------------ ------------------------ ------------------ --------
TESTS       TESTS_IDX_01    UNIQUE      1       442          120000         1                        1                        92579              120000    
RESULTS     RESULTS_IDX_01  UNIQUE      2       47849        11704420       1                        1                        1082748            11704420  
RESULTS     RESULTS_IDX_02  NONUNIQUE   2       49037        118856         1                        4                        497219             11986476  

With this setup, the aggregated session statistics show consistently low physical reads and elapsed times, irrespective of the data volume of the underlying tables and indexes.

I like this solution because it addresses the scalability issues and doesn’t rely on any fancy Oracle features (which you might need extra-cost options for).

GUID, or how to Get Ur Index from Disk

Who doesn’t want to write scalable applications these days? Then maybe you should think twice about using GUIDs as primary keys. It’s mostly not about storage overhead, or the fact that surrogate keys are not always the best solution. It is about performance!

I’m going to talk about Oracle here in particular, but most of the concepts are the same or similar in other database systems.
Usually, you’re using B-tree indexes and unique constraints to police primary keys. And if you have read the excellent book SQL Performance Explained by Markus Winand you might already know what pain you’ll be facing by using GUIDs. As long as your data volume is tiny and can fully live in the buffer cache you’re probably fine. To illustrate what happens once your dataset outgrows the buffer cache, I set up a small test kit.

The data model is simple. I have two tables, TESTS and RESULTS. Each Test has zero or more Results. To store GUIDs I will use data type RAW(16). The rest of the columns will hold random data to fill the space.

create table tests (
    test_id raw(16) not null
  , vc2_long varchar2(4000 byte)
) tablespace myapp_data;

create unique index tests_idx_01 on tests (test_id) tablespace myapp_index;
alter table tests add constraint tests_pk primary key (test_id) using index tests_idx_01;


create table results (
    result_id raw(16) not null
  , test_id raw(16) not null
  , dt date
  , vc2_short varchar2(200 byte)
) tablespace myapp_data;

create unique index results_idx_01 on results(result_id) tablespace myapp_index;
alter table results add constraint results_pk primary key (result_id) using index results_idx_01;
create index results_idx_02 on results(test_id) tablespace myapp_index;
alter table results add constraint results_fk_01 foreign key (test_id) references tests(test_id) on delete cascade;

For ease of grep’ing SQL traces I separated tables and indexes into different tablespaces. MYAPP_INDEX has one datafile with file# 20.
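
In case you want to reproduce this: the file number of a tablespace’s datafile can be looked up in the dictionary, for example like this (a quick sketch):

select file_id, file_name
from dba_data_files
where tablespace_name = 'MYAPP_INDEX';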

I wanted to use PL/SQL to keep the test case short and concise. Unfortunately, Oracle’s SYS_GUID function produces “sequential” GUIDs in my setup (Oracle 12.2 CDB on Oracle Linux 7.2). To have random GUIDs in PL/SQL I resorted to Java stored procedures (the implementation is not relevant to the case at hand):

create or replace and compile java source named "RandomGUID"
as
public class RandomGUID {
  public static String create() {
    return java.util.UUID.randomUUID().toString().replaceAll("-", "").toUpperCase();
  }
}
/

create or replace function random_java_guid
return varchar2
is
language java name 'RandomGUID.create() return java.lang.String';
/

create or replace function random_guid
return raw
is
begin
  return hextoraw(random_java_guid());
end random_guid;
/

Works like a charm 🙂

select random_guid() guids from dual connect by level <= 5;

GUIDS
--------------------------------
54746B0750374133BCF9FA85A6F2F532
C92168647BEC4A93982F19498238757E
3E4B858F41764126B177FCCB30CC73C5
4B7CD39D222D4E339482CD25F9AD6EA2
B8C5367F6B944EEA9BD71611CCB54E72

That’s it for the schema setup. To load data I use the code snippet below (test1.sql), which inserts 1000 Tests in a loop. After each Test it inserts 100 Results in the inner loop and commits. The “RRT” package is used to collect session statistics (gv$session) and track wall clock time.

set verify off feedback off
set serveroutput on size unlimited

<<myapp>>
declare
  test_id tests.test_id%type;
begin
  rrt.stat_cfg_default;
  rrt.stat_start('&1');
  rrt.timer_start();

  for tid in 1 .. 1000 loop
    insert into tests t (test_id, vc2_long) values (random_guid(), dbms_random.string('A', trunc(dbms_random.value(200, 4000))))
    returning t.test_id into myapp.test_id;
    
    for rid in 1 .. 100 loop
      insert into results r (result_id, test_id, dt, vc2_short) values (random_guid(), myapp.test_id, sysdate, dbms_random.string('L', trunc(dbms_random.value(10, 200))));
    end loop;
    
    commit;
  end loop;

  dbms_output.put_line('Elapsed:' || to_char(rrt.timer_elapsed));
  rrt.stat_stop;
end;
/

To run the script concurrently in multiple sessions I use the following shell script. It just spawns N SQL*Plus processes, each running the SQL script.

#!/usr/bin/bash

SCRIPT=$1
DEGREE=$2
EZCONN=$3

declare -i runid=$$
declare -i cnt=1

echo "Coordinator process pid ${runid}"
until [ $cnt -gt ${DEGREE} ]
do
  sqlplus -S -L ${EZCONN} @${SCRIPT} ${SCRIPT}.${runid}.${cnt}.log > ${SCRIPT}.${runid}.${cnt}.log &
  echo "Started sub-process ${cnt} with pid $!"
  cnt=$(( cnt + 1 ))
done

I ran the test case multiple times, letting the database grow bigger with every run. I kept running until the performance got unbearable. On my VirtualBox VM setup it took 9 runs with 10 concurrent processes for the performance to go way down south. I deliberately configured a small SGA of 800 MB, which resulted in about 500 MB of buffer cache, to hit the issue soon. Obviously, your mileage may vary: the larger the buffer cache, the longer you can sustain good performance.
Let’s look at the resource usage profile of one of the sessions from run 9. I processed the raw extended SQL trace file using Chris Antognini’s TVD$XTAT.
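
For reference, the traces came from extended SQL trace; a minimal sketch to switch it on for your own session before running the test script (standard DBMS_MONITOR API):

-- enable extended SQL trace (waits, no binds) for the current session
exec dbms_monitor.session_trace_enable(waits => true, binds => false)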

Wow, this single session did 16,158 random physical reads (db file sequential read). Let’s see which statements contributed to those reads.

Almost all of the random reads (16,078) came from the INSERT statement with ID 2, which is the inner-loop INSERT into RESULTS.

All this physical read I/O just to INSERT records, which is all this test case does. The I/O is done against file# 20, the datafile of the MYAPP_INDEX tablespace. What’s the distribution between objects?

$ grep 'db file.*file#=20' *ora_5592.trc | cut -d ' ' -f12 | sort | uniq -c
     42 obj#=61805
  14538 obj#=61807
   1498 obj#=61808

select object_id, object_name from cdb_objects where con_id = 4 and owner = 'MYAPP' and object_id in (61805, 61807, 61808) order by object_id;

 OBJECT_ID OBJECT_NAME
---------- ---------------
     61805 TESTS_IDX_01
     61807 RESULTS_IDX_01
     61808 RESULTS_IDX_02

Considering the data pattern it makes sense that RESULTS_IDX_01 is suffering the most. It is the unique index that gets the most index entries added. Index RESULTS_IDX_02 benefits from the repeating foreign key values per outer-loop iteration, which all go into the same leaf blocks that Oracle most likely caches.

select table_name, index_name, uniqueness, blevel, leaf_blocks, distinct_keys, avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor, num_rows from dba_indexes where index_name in ('TESTS_IDX_01', 'RESULTS_IDX_01', 'RESULTS_IDX_02') order by table_name desc, index_name;

TABLE_NAME  INDEX_NAME      UNIQUENESS  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS  AVG_LEAF_BLOCKS_PER_KEY  AVG_DATA_BLOCKS_PER_KEY  CLUSTERING_FACTOR  NUM_ROWS
----------- --------------- ----------- ------- ------------ -------------- ------------------------ ------------------------ ------------------ --------
TESTS       TESTS_IDX_01    UNIQUE      1       467          90000          1                        1                        89999              90000
RESULTS     RESULTS_IDX_01  UNIQUE      2       44237        8941107        1                        1                        8941107            8941107
RESULTS     RESULTS_IDX_02  NONUNIQUE   2       50757        92216          1                        4                        424304             9137794

Up to test run 8 Oracle could keep the most relevant parts of the indexes in the buffer cache. After that Oracle had to constantly “swap” index leaf blocks in and out of the buffer cache.

select segment_name, tablespace_name, blocks, extents, bytes/1024/1024 mb from dba_segments where (segment_name like 'TESTS%' or segment_name like 'RESULTS%') order by segment_name desc;

SEGMENT_NAME    TABLESPACE_NAME  BLOCKS  EXTENTS  MB
--------------- ---------------- ------- -------- ----
TESTS_IDX_01    MYAPP_INDEX      512     4        4
TESTS           MYAPP_DATA       31360   245      245
RESULTS_IDX_02  MYAPP_INDEX      50560   395      395
RESULTS_IDX_01  MYAPP_INDEX      44928   351      351
RESULTS         MYAPP_DATA       191744  1498     1498

I also graphed the session statistics, aggregated (average of the 10 concurrent sessions) by test run. It clearly shows a correlation between physical reads and elapsed time. Who’d a thunk it? 🙂

Due to the random character of GUIDs every new index entry potentially goes into any of the index leaf blocks. Thus, performance is good as long as you can cache the entire index, meaning every index supporting primary and foreign keys in your entire application. That’s probably too much to ask.
Now, you might know about the sequential GUIDs that Oracle and SQL Server implement and consider using them. This might alleviate the problem somewhat, but could result in the next performance issue: right-hand side index leaf block contention!
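
If you’re curious whether SYS_GUID is sequential on your platform too (it was on mine, as mentioned earlier), a quick probe; just eyeball the output for a common prefix:

select sys_guid() from dual connect by level <= 5;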

As this post is getting long already I will talk about a solution that scales in my next installment.

Sh! – silent installation (DBCA)

I’ve been playing with Oracle 12.2 lately, which also means I do lots of silent installations of it :-).
I don’t know why I just put a smiley there because there was nothing funny about it! Troubleshooting DBCA errors, combing through logs, and raising service requests with Oracle Support almost sucked the joy out of the new 12.2 release.

Here’s what I’ve stumbled upon:

  • DBCA parameter "-variables" is not processed correctly. Oracle has acknowledged this as bug 25353173. Workaround: put the name=value list in a file and use "-variablesFile <path-to-file>"
  • Controlfiles are not multiplexed to different paths as specified. The first given path is used for all controlfile members (the SR is "Work In Progress")
  • Specifying <initParam name="os_authent_prefix" value=""/> results in the parameter being set to "ops$". In 12.2 you have to specify the single quotes '' like this: <initParam name="os_authent_prefix" value="''"/>. If you want an empty value for a parameter, you’d better set it to ''. (*)
  • Maybe not a bug but still annoying: specify a DB_UNIQUE_NAME different from DB_NAME and it will be appended to your file paths.

* Actually, the new way makes more sense to me as it is closer to the way you set empty values in an ALTER SYSTEM/SESSION SET statement.

Anyway, enough with the rambling. I will now show how I set up my lab systems.

To be flexible with automation we set a few variables that are later used to feed input to DBCA. You can modify these according to your needs and environment.

NEW_ORACLE_SID=DEV1
NEW_DB_NAME=DEV1
NEW_DB_DOMAIN=localdomain
NEW_DB_CHARSET=AL32UTF8
NEW_SYS_PWD=manager
NEW_SYSTEM_PWD=manager
NEW_REDO_SIZE_MB=256

INIT_CMPA=DIAGNOSTIC+TUNING
INIT_COMPATIBLE=12.2.0.0
INIT_OFE=12.2.0.1
INIT_PGA_AGG_TGT_MB=512
INIT_PROCESSES=200
INIT_SGA_TGT_MB=4096
INIT_CPU_COUNT=4

NEW_DATA_PATH1=/u02/oradata/${NEW_DB_NAME}
NEW_REDO_PATH1=/u02/oradata/${NEW_DB_NAME}
NEW_REDO_PATH2=/u03/oradata/${NEW_DB_NAME}
NEW_ARCH_PATH1=/u04/oradata/${NEW_DB_NAME}

NEW_REDO_PATH1 and NEW_REDO_PATH2 will be used for multiplexing online redo logs as well as controlfiles. The rest of the variables should be self-explanatory.

You don’t have to modify the following variables as they are “calculated” values.

NEW_DB_UNIQUE_NAME=${NEW_DB_NAME}
NEW_GLOBAL_DBNAME=${NEW_DB_NAME}.${NEW_DB_DOMAIN}
INIT_PGA_AGG_TGT=$((${INIT_PGA_AGG_TGT_MB} * 1024 * 1024))
INIT_SGA_TGT=$((${INIT_SGA_TGT_MB} * 1024 * 1024))
INIT_LISTENER_REG=LISTENER_${NEW_DB_NAME}

Just make sure you have a listener running with the name specified in INIT_LISTENER_REG.
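
A quick way to verify is “lsnrctl” (a sketch; the listener name follows INIT_LISTENER_REG as set above):

lsnrctl status ${INIT_LISTENER_REG}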

Now, because of bug 25353173 we must put what we’d normally feed to "-variables" into a file:

echo "DATA_PATH1=${NEW_DATA_PATH1}" >  /tmp/dbca-vars.txt
echo "REDO_PATH1=${NEW_REDO_PATH1}" >> /tmp/dbca-vars.txt
echo "REDO_PATH2=${NEW_REDO_PATH2}" >> /tmp/dbca-vars.txt
echo "ARCH_PATH1=${NEW_ARCH_PATH1}" >> /tmp/dbca-vars.txt

Before we can run DBCA we need to prepare a template. I like to prepare a template that contains placeholders plus everything else that I define statically. This way I define one single template and can reuse it for all kinds of deployments.
I know it’s awfully long but you can skip reading the template if you just want to go ahead and create the database.
Save this as /tmp/dbca-newdb-linux.dbt

<DatabaseTemplate name="New Database" description="" version="12.2.0.1.0">
   <CommonAttributes>
      <option name="OMS" value="false"/>
      <option name="JSERVER" value="false"/>
      <option name="SPATIAL" value="false"/>
      <option name="IMEDIA" value="false"/>
      <option name="ORACLE_TEXT" value="false">
         <tablespace id="SYSAUX"/>
      </option>
      <option name="CWMLITE" value="false">
         <tablespace id="SYSAUX"/>
      </option>
      <option name="SAMPLE_SCHEMA" value="false"/>
      <option name="APEX" value="false"/>
      <option name="DV" value="false"/>
   </CommonAttributes>
   <Variables/>
   <CustomScripts Execute="false"/>
   <InitParamAttributes>
      <InitParams>
         <initParam name="autotask_max_active_pdbs" value="1"/>
         <initParam name="awr_pdb_autoflush_enabled" value="true"/>
         <initParam name="awr_snapshot_time_offset" value="1000000"/>
         <initParam name="commit_logging" value="IMMEDIATE"/>
         <initParam name="commit_wait" value="WAIT"/>
         <initParam name="containers_parallel_degree" value="2"/>
         <initParam name="control_files" value="(&quot;{DATA_PATH1}/ctrl-001-{DB_NAME}.dbf&quot;, &quot;{REDO_PATH1}/ctrl-002-{DB_NAME}.dbf&quot;, &quot;{REDO_PATH2}/ctrl-003-{DB_NAME}.dbf&quot;)"/>
         <initParam name="control_file_record_keep_time" value="35"/>
         <initParam name="cursor_invalidation" value="DEFERRED"/>
         <initParam name="db_block_checking" value="MEDIUM"/>
         <initParam name="db_block_checksum" value="FULL"/>
         <initParam name="db_block_size" value="8" unit="KB"/>
         <initParam name="db_files" value="100"/>
         <initParam name="db_flashback_retention_target" value="1440"/>  
         <initParam name="db_securefile" value="ALWAYS"/>
         <initParam name="deferred_segment_creation" value="FALSE"/>
         <initParam name="dg_broker_config_file1" value="{ORACLE_BASE}/admin/{DB_NAME}/instance/dr-001-{DB_NAME}.dat"/>
         <initParam name="dg_broker_config_file2" value="{ORACLE_BASE}/admin/{DB_NAME}/instance/dr-002-{DB_NAME}.dat"/>
         <initParam name="diagnostic_dest" value="{ORACLE_BASE}/adr"/>
         <initParam name="dispatchers" value="''"/>
         <initParam name="enable_pluggable_database" value="TRUE"/>
         <initParam name="filesystemio_options" value="SETALL"/>
         <initParam name="inmemory_adg_enabled" value="FALSE"/>
         <initParam name="inmemory_force" value="OFF"/>
         <initParam name="inmemory_query" value="DISABLE"/>
         <initParam name="inmemory_size" value="0"/>
         <initParam name="log_archive_dest_1" value="LOCATION={ARCH_PATH1}"/>
         <initParam name="log_archive_dest_state_1" value="ENABLE"/>
         <initParam name="log_archive_format" value="{DB_NAME}-%t-%s-%r.arc"/>
         <initParam name="max_pdbs" value="1"/>
         <initParam name="nls_language" value="AMERICAN"/>
         <initParam name="nls_nchar_conv_excp" value="TRUE"/>
         <initParam name="nls_territory" value="AMERICA"/>
         <initParam name="open_cursors" value="300"/>
         <initParam name="os_authent_prefix" value="''"/>
         <initParam name="parallel_min_servers" value="0"/>
         <initParam name="remote_listener" value="''"/>
         <initParam name="shared_servers" value="0"/>
         <initParam name="temp_undo_enabled" value="TRUE"/>
         <initParam name="undo_retention" value="1800"/>
         <initParam name="undo_tablespace" value="UNDOTS"/>
         <initParam name="uniform_log_timestamp_format" value="TRUE"/>
      </InitParams>
      <MiscParams>
         <databaseType>MULTIPURPOSE</databaseType>
         <maxUserConn>20</maxUserConn>
         <percentageMemTOSGA>40</percentageMemTOSGA>
         <archiveLogMode>true</archiveLogMode>
         <initParamFileName>{ORACLE_BASE}/admin/{DB_NAME}/create/init.ora</initParamFileName>
      </MiscParams>
      <SPfile useSPFile="true">{ORACLE_BASE}/admin/{DB_NAME}/instance/spfile{SID}.ora</SPfile>
   </InitParamAttributes>
   <StorageAttributes>
      <ControlfileAttributes id="Controlfile">
         <maxDatafiles>100</maxDatafiles>
         <maxLogfiles>16</maxLogfiles>
         <maxLogMembers>3</maxLogMembers>
         <maxLogHistory>1</maxLogHistory>
         <maxInstances>8</maxInstances>
         <image name="ctrl-001-{DB_NAME}.dbf" filepath="{DATA_PATH1}/"/>
         <image name="ctrl-002-{DB_NAME}.dbf" filepath="{REDO_PATH1}/"/>
         <image name="ctrl-003-{DB_NAME}.dbf" filepath="{REDO_PATH2}/"/>
      </ControlfileAttributes>
      <DatafileAttributes id="{DATA_PATH1}/sysaux-001-{DB_NAME}.dbf">
         <tablespace>SYSAUX</tablespace>
         <temporary>false</temporary>
         <online>true</online>
         <status>0</status>
         <size unit="MB">256</size>
         <reuse>true</reuse>
         <autoExtend>true</autoExtend>
         <increment unit="KB">8192</increment>
         <maxSize unit="MB">4096</maxSize>
      </DatafileAttributes>
      <DatafileAttributes id="{DATA_PATH1}/toolts-001-{DB_NAME}.dbf">
         <tablespace>TOOLTS</tablespace>
         <temporary>false</temporary>
         <online>true</online>
         <status>0</status>
         <size unit="MB">56</size>
         <reuse>true</reuse>
         <autoExtend>true</autoExtend>
         <increment unit="KB">4096</increment>
         <maxSize unit="MB">2048</maxSize>
      </DatafileAttributes>
      <DatafileAttributes id="{DATA_PATH1}/system-001-{DB_NAME}.dbf">
         <tablespace>SYSTEM</tablespace>
         <temporary>false</temporary>
         <online>true</online>
         <status>0</status>
         <size unit="MB">1024</size>
         <reuse>true</reuse>
         <autoExtend>true</autoExtend>
         <increment unit="KB">8192</increment>
         <maxSize unit="MB">4096</maxSize>
      </DatafileAttributes>
      <DatafileAttributes id="{DATA_PATH1}/tempts-001-{DB_NAME}.dbf">
         <tablespace>TEMPTS</tablespace>
         <temporary>false</temporary>
         <online>true</online>
         <status>0</status>
         <size unit="MB">128</size>
         <reuse>true</reuse>
         <autoExtend>true</autoExtend>
         <increment unit="KB">2048</increment>
         <maxSize unit="MB">8192</maxSize>
      </DatafileAttributes>
      <DatafileAttributes id="{DATA_PATH1}/undots-001-{DB_NAME}.dbf">
         <tablespace>UNDOTS</tablespace>
         <temporary>false</temporary>
         <online>true</online>
         <status>0</status>
         <size unit="MB">512</size>
         <reuse>true</reuse>
         <autoExtend>true</autoExtend>
         <increment unit="KB">2028</increment>
         <maxSize unit="MB">8192</maxSize>
      </DatafileAttributes>
      <TablespaceAttributes id="SYSAUX">
         <online>true</online>
         <offlineMode>1</offlineMode>
         <readOnly>false</readOnly>
         <temporary>false</temporary>
         <defaultTemp>false</defaultTemp>
         <undo>false</undo>
         <local>true</local>
         <blockSize>-1</blockSize>
         <allocation>1</allocation>
         <uniAllocSize unit="KB">-1</uniAllocSize>
         <initSize unit="KB">64</initSize>
         <increment unit="KB">64</increment>
         <incrementPercent>50</incrementPercent>
         <minExtends>1</minExtends>
         <maxExtends>4096</maxExtends>
         <minExtendsSize unit="KB">64</minExtendsSize>
         <logging>true</logging>
         <recoverable>false</recoverable>
         <maxFreeSpace>0</maxFreeSpace>
         <bigfile>false</bigfile>
         <datafilesList>
            <TablespaceDatafileAttributes id="{DATA_PATH1}/sysaux-001-{DB_NAME}.dbf"/>
         </datafilesList>
      </TablespaceAttributes>
      <TablespaceAttributes id="TOOLTS">
         <online>true</online>
         <offlineMode>1</offlineMode>
         <readOnly>false</readOnly>
         <temporary>false</temporary>
         <defaultTemp>false</defaultTemp>
         <undo>false</undo>
         <local>true</local>
         <blockSize>-1</blockSize>
         <allocation>1</allocation>
         <uniAllocSize unit="KB">-1</uniAllocSize>
         <initSize unit="KB">128</initSize>
         <increment unit="KB">128</increment>
         <incrementPercent>0</incrementPercent>
         <minExtends>1</minExtends>
         <maxExtends>4096</maxExtends>
         <minExtendsSize unit="KB">128</minExtendsSize>
         <logging>true</logging>
         <recoverable>false</recoverable>
         <maxFreeSpace>0</maxFreeSpace>
         <bigfile>false</bigfile>
         <datafilesList>
            <TablespaceDatafileAttributes id="{DATA_PATH1}/toolts-001-{DB_NAME}.dbf"/>
         </datafilesList>
      </TablespaceAttributes>
      <TablespaceAttributes id="SYSTEM">
         <online>true</online>
         <offlineMode>1</offlineMode>
         <readOnly>false</readOnly>
         <temporary>false</temporary>
         <defaultTemp>false</defaultTemp>
         <undo>false</undo>
         <local>true</local>
         <blockSize>-1</blockSize>
         <allocation>3</allocation>
         <uniAllocSize unit="KB">-1</uniAllocSize>
         <initSize unit="KB">64</initSize>
         <increment unit="KB">64</increment>
         <incrementPercent>50</incrementPercent>
         <minExtends>1</minExtends>
         <maxExtends>-1</maxExtends>
         <minExtendsSize unit="KB">64</minExtendsSize>
         <logging>true</logging>
         <recoverable>false</recoverable>
         <maxFreeSpace>0</maxFreeSpace>
         <bigfile>false</bigfile>
         <datafilesList>
            <TablespaceDatafileAttributes id="{DATA_PATH1}/system-001-{DB_NAME}.dbf"/>
         </datafilesList>
      </TablespaceAttributes>
      <TablespaceAttributes id="TEMPTS">
         <online>true</online>
         <offlineMode>1</offlineMode>
         <readOnly>false</readOnly>
         <temporary>true</temporary>
         <defaultTemp>true</defaultTemp>
         <undo>false</undo>
         <local>true</local>
         <blockSize>-1</blockSize>
         <allocation>1</allocation>
         <uniAllocSize unit="KB">1012</uniAllocSize>
         <initSize unit="KB">64</initSize>
         <increment unit="KB">64</increment>
         <incrementPercent>0</incrementPercent>
         <minExtends>1</minExtends>
         <maxExtends>0</maxExtends>
         <minExtendsSize unit="KB">64</minExtendsSize>
         <logging>true</logging>
         <recoverable>false</recoverable>
         <maxFreeSpace>0</maxFreeSpace>
         <bigfile>false</bigfile>
         <datafilesList>
            <TablespaceDatafileAttributes id="{DATA_PATH1}/tempts-001-{DB_NAME}.dbf"/>
         </datafilesList>
      </TablespaceAttributes>
      <TablespaceAttributes id="UNDOTS">
         <online>true</online>
         <offlineMode>1</offlineMode>
         <readOnly>false</readOnly>
         <temporary>false</temporary>
         <defaultTemp>false</defaultTemp>
         <undo>true</undo>
         <local>true</local>
         <blockSize>-1</blockSize>
         <allocation>1</allocation>
         <uniAllocSize unit="KB">-1</uniAllocSize>
         <initSize unit="KB">512</initSize>
         <increment unit="KB">512</increment>
         <incrementPercent>50</incrementPercent>
         <minExtends>8</minExtends>
         <maxExtends>4096</maxExtends>
         <minExtendsSize unit="KB">512</minExtendsSize>
         <logging>true</logging>
         <recoverable>false</recoverable>
         <maxFreeSpace>0</maxFreeSpace>
         <bigfile>false</bigfile>
         <datafilesList>
            <TablespaceDatafileAttributes id="{DATA_PATH1}/undots-001-{DB_NAME}.dbf"/>
         </datafilesList>
      </TablespaceAttributes>
      <RedoLogGroupAttributes id="1">
         <reuse>false</reuse>
         <fileSize unit="KB">204800</fileSize>
         <Thread>1</Thread>
         <member ordinal="0" memberName="redo-g1-m1-{DB_NAME}.dbf" filepath="{REDO_PATH1}/"/>
         <member ordinal="1" memberName="redo-g1-m2-{DB_NAME}.log" filepath="{REDO_PATH2}/"/>
      </RedoLogGroupAttributes>
      <RedoLogGroupAttributes id="2">
         <reuse>false</reuse>
         <fileSize unit="KB">204800</fileSize>
         <Thread>1</Thread>
         <member ordinal="0" memberName="redo-g2-m1-{DB_NAME}.dbf" filepath="{REDO_PATH1}/"/>
         <member ordinal="1" memberName="redo-g2-m2-{DB_NAME}.log" filepath="{REDO_PATH2}/"/>
      </RedoLogGroupAttributes>
      <RedoLogGroupAttributes id="3">
         <reuse>false</reuse>
         <fileSize unit="KB">204800</fileSize>
         <Thread>1</Thread>
         <member ordinal="0" memberName="redo-g3-m1-{DB_NAME}.dbf" filepath="{REDO_PATH1}/"/>
         <member ordinal="1" memberName="redo-g3-m2-{DB_NAME}.log" filepath="{REDO_PATH2}/"/>
      </RedoLogGroupAttributes>
   </StorageAttributes>
</DatabaseTemplate>

We’re now ready to create the database by calling DBCA:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/ora12201
export PATH=${ORACLE_HOME}/bin:${PATH}

${ORACLE_HOME}/bin/dbca -silent -createDatabase \
  -templateName "/tmp/dbca-newdb-linux.dbt" \
  -gdbName "${NEW_GLOBAL_DBNAME}" \
  -characterSet "${NEW_DB_CHARSET}" \
  -createAsContainerDatabase "true" \
  -databaseConfigType SINGLE \
  -dbOptions "ORACLE_TEXT:true" \
  -databaseType MULTIPURPOSE \
  -dvConfiguration "false" \
  -emConfiguration "NONE" \
  -enableArchive "true" \
  -archiveLogDest "${NEW_ARCH_PATH1}" \
  -listeners "${INIT_LISTENER_REG}" \
  -nationalCharacterSet "AL16UTF16" \
  -olsConfiguration "false" \
  -recoveryAreaDestination "NONE" \
  -redoLogFileSize "${NEW_REDO_SIZE_MB}" \
  -registerWithDirService "false" \
  -sampleSchema "false" \
  -sid "${NEW_ORACLE_SID}" \
  -sysPassword "${NEW_SYS_PWD}" \
  -systemPassword "${NEW_SYSTEM_PWD}" \
  -useOMF "false" \
  -variablesFile /tmp/dbca-vars.txt \
  -initParams "compatible=${INIT_COMPATIBLE},control_management_pack_access=${INIT_CMPA},db_domain=${NEW_DB_DOMAIN},db_name=${NEW_DB_NAME},db_unique_name=${NEW_DB_UNIQUE_NAME},local_listener=${INIT_LISTENER_REG},optimizer_features_enable=${INIT_OFE},pga_aggregate_target=${INIT_PGA_AGG_TGT},processes=${INIT_PROCESSES},sga_target=${INIT_SGA_TGT}"

A couple of things to note:

  • This will create an empty CDB
  • The template specifies not to install any of the options. In this example I’ve chosen to install Oracle Text (via "-dbOptions")
  • The database will be in archivelog mode
  • Because of a bug, the controlfiles are not multiplexed. You have to correct this manually after DBCA finishes
  • DBCA can be extremely flexible through the use of variables (placeholders in the template)
  • A lot can be configured via DBCA, but not everything. E.g. I like most DBs running in FORCE LOGGING mode (see the one-liner after this list)
  • It never hurts to run "dbca -silent -<command> -h" to see what it can do
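
For instance, switching the freshly created database to FORCE LOGGING mode is a one-liner once DBCA is done (run as SYSDBA; a minimal sketch):

-- enable force logging; check v$database.force_logging afterwards
alter database force logging;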

Sh! – silent installation

It’s been years since I last used a GUI to install Oracle software or create a database with DBCA. Sometimes it is baffling to see Word documents with dozens of screenshots showing how to set up Oracle at customer sites. In my opinion, showing the silent mode call (of OUI, DBCA, etc.) in the documentation is all you need. One single page, that’s it. OK, if you want to split hairs, the DBCA .dbt file is multiple pages in the appendix :-).
Knowing all this silent mode stuff will really make your life easier when it comes to automation. Think of Chef, Puppet, Ansible and what not. Even dockerized Oracle is now all the rage.

In this post we start with the Oracle software installation.

First we specify some environment variables that define which Oracle software zips to extract and where to extract them to:

export ORA_SW_TMP=/tmp/oraswtmp                                # that is where the Oracle zip files will be extracted to
export ORA_ZIP_1=/tmp/p21419221_121020_Linux-x86-64_1of10.zip  # zip one of the Oracle database software
export ORA_ZIP_2=/tmp/p21419221_121020_Linux-x86-64_2of10.zip  # zip two of the Oracle database software

Then we specify some well-known Oracle environment variables:

export ORACLE_HOSTNAME=$(hostname)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/ora12102
export INVENTORY_LOCATION=/u01/app/oraInventory

Extracting the Oracle zips:

mkdir ${ORA_SW_TMP}
unzip ${ORA_ZIP_1} -d ${ORA_SW_TMP}
unzip ${ORA_ZIP_2} -d ${ORA_SW_TMP}

Run the installer in silent mode:

${ORA_SW_TMP}/database/runInstaller -silent -waitforcompletion \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME="${ORACLE_HOSTNAME}" \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION="${INVENTORY_LOCATION}" \
  SELECTED_LANGUAGES=en \
  ORACLE_HOME="${ORACLE_HOME}" \
  ORACLE_BASE="${ORACLE_BASE}" \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.DBA_GROUP=dba \
  oracle.install.db.OPER_GROUP=oper \
  oracle.install.db.BACKUPDBA_GROUP=backupdba \
  oracle.install.db.DGDBA_GROUP=dgdba \
  oracle.install.db.KMDBA_GROUP=kmdba \
  DECLINE_SECURITY_UPDATES=true

Run the Oracle root scripts:

sudo ${INVENTORY_LOCATION}/orainstRoot.sh
sudo ${ORACLE_HOME}/root.sh

Go grab a coffee.
Then, put that in a shell script and you’re all set.

P.S. This also works on Windows, just with slightly different parameters.

set ORACLE_HOSTNAME=%COMPUTERNAME%
set ORACLE_BASE=D:\app\oracle
set ORACLE_HOME=%ORACLE_BASE%\product\ora12102
set INVENTORY_LOCATION=C:\app\oraInventory
rem use a Group Managed Service Account, otherwise you also have to provide a password
set ORA_HOME_USER=spotonoracle\OracleSA$

.\setup.exe -silent -waitforcompletion ^
  oracle.install.option=INSTALL_DB_SWONLY ^
  ORACLE_HOSTNAME="%ORACLE_HOSTNAME%" ^
  INVENTORY_LOCATION="%INVENTORY_LOCATION%" ^
  SELECTED_LANGUAGES=en ^
  ORACLE_HOME="%ORACLE_HOME%" ^
  ORACLE_BASE="%ORACLE_BASE%" ^
  oracle.install.db.InstallEdition=EE ^
  oracle.install.IsBuiltInAccount=false ^
  oracle.install.OracleHomeUserName="%ORA_HOME_USER%" ^
  DECLINE_SECURITY_UPDATES=true

Multiplex controlfiles with OMF

Even on the latest Oracle Engineered Systems like the ODA X6-2S/M, only a single controlfile is created by default. Despite the storage redundancy there’s always a risk of someone accidentally deleting a file, or of some logical corruption of the file.
Therefore, we always multiplex the controlfiles and online redo logs. But what is the easiest way of adding a second controlfile to a database using OMF on a file system?

I think it’s the following procedure, though I’m happy to be proven wrong with an even more elegant solution.

The command output has been stripped to the relevant information.

$ . oraenv
ORACLE_SID = [DUMMY] ? DEV1

$ rman target /


-- see, there is only one online log destination
RMAN> select name, value from v$parameter where name like 'db_create_online_log_dest_%';

NAME                                   VALUE
-------------------------------------- --------------------------------------
db_create_online_log_dest_1            /u03/app/oracle/redo/
db_create_online_log_dest_2
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5


-- set a second online log destination
RMAN> alter system set db_create_online_log_dest_2 = '/u02/app/oracle/oradata/DEV1' scope=both;


-- get the current controlfile path for later reference in commands
RMAN> select value from v$parameter where name = 'control_files';

VALUE
--------------------------------------------------------------------------------
/u03/app/oracle/redo/DEV1/controlfile/o1_mf_cwxjo46k_.ctl


-- this will make RMAN "restore" command create new controlfiles in all online log destinations
RMAN> alter system reset control_files;


-- restart instance so parameter changes will take effect
RMAN> shutdown immediate


-- only start in NOMOUNT as to be able to restore controlfile
RMAN> startup nomount


-- this will create two new OMF controlfiles based on existing one
--   remember, we have reset control_files parameter
RMAN> restore controlfile from '/u03/app/oracle/redo/DEV1/controlfile/o1_mf_cwxjo46k_.ctl';
...
channel ORA_DISK_1: copied control file copy
output file name=/u03/app/oracle/redo/DEV1/controlfile/o1_mf_cxf4q747_.ctl
output file name=/u02/app/oracle/oradata/DEV1/DEV1/controlfile/o1_mf_cxf4q74v_.ctl
Finished restore at 12-SEP-16


-- use the two newly created controlfiles at next startup
RMAN> alter system set control_files = '/u03/app/oracle/redo/DEV1/controlfile/o1_mf_cxf4q747_.ctl','/u02/app/oracle/oradata/DEV1/DEV1/controlfile/o1_mf_cxf4q74v_.ctl' scope=spfile;


-- the old controlfile is not needed anymore
RMAN> host "rm /u03/app/oracle/redo/DEV1/controlfile/o1_mf_cwxjo46k_.ctl";

RMAN> startup force
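
As a sanity check (my addition, not part of the original output), you can confirm both copies are in use from the same RMAN session:

RMAN> select name from v$controlfile;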

Since I’m using Oracle 12c, RMAN is capable of running SQL just like this; no need to switch between SQL*Plus and RMAN anymore.

(Please don’t debate the file system structure, it’s from an engineered system. I think it’s horrible!)

Synthetic data – nice little SQL feature

This blog was inspired by a Twitter conversation between Jeffrey Kemp and Franck Pachot about little SQL tricks.
Countless times I’ve created synthetic data to model a case or just to play around in general. For quick-and-dirty, small volumes I prefer “CONNECT BY”. When doing it properly, I use recursive CTEs.
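
For reference, a minimal recursive-CTE row generator (recursive subquery factoring, available since 11.2) looks like this:

with gen (n) as (
  select 1 from dual
  union all
  select n + 1 from gen where n < 10
)
select n from gen;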

But this puts a smile on my face every time I look at it.

select
    item.getNumberVal() num
from xmltable('1 to 10,20,30,40 to 50') item
;

       NUM
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10
        20
        30
        40
        41
        42
        43
        44
        45
        46
        47
        48
        49
        50

23 rows selected.

Here’s what I usually start with as a base and then tweak the script to my current needs.

drop table t1 purge;
create table t1 (
    id number
  , num_small number (5, 0)
  , num_big number
  , vc2_short varchar2(200 byte)
  , vc2_long varchar2(4000 byte)
  , dt date
  , cl clob
)
  tablespace users
  lob (cl) store as securefile t1_cl_seg (
    tablespace users
    disable storage in row
    retention auto
    nocache logging
    chunk 32K)
;
 
insert /*+ append */ into t1 (id, num_small, num_big, vc2_short, vc2_long, dt)
with datagen as (
  select --+ materialize
      rownum as id
  from
      dual
  connect by level <= 1e4
)
select
    rownum as id
  , trunc(dbms_random.value(1, 100)) as num_small
  , round(dbms_random.value(10000, 1000000), 4) as num_big
  , dbms_random.string('L', trunc(dbms_random.value(10, 200))) as vc2_short
  , dbms_random.string('A', trunc(dbms_random.value(200, 4000))) as vc2_long
  , trunc(sysdate + dbms_random.value(0, 366)) as dt
from
    datagen dg1
  , datagen dg2
where
    rownum <= 1e4
;
 
commit;
 
<<populate_lob>>
declare
  num_rows number;
  cl clob;
  chunk_size integer := 1024; -- 1 KB
  max_cl_size integer := 8192; -- 8 KB
begin
  select count(*) cnt
  into num_rows
  from t1
  ;
  
  for row_num in 1 .. num_rows loop
    dbms_lob.createtemporary(cl, false, dbms_lob.call);
 
    for idx in 1 .. trunc(dbms_random.value(1, (max_cl_size / chunk_size))) loop
      dbms_lob.writeappend(cl, chunk_size, dbms_random.string('A', chunk_size));
    end loop;
 
    update t1
      set t1.cl = populate_lob.cl
    where id = row_num
    ;
    commit;
 
    dbms_lob.freetemporary(cl);
  end loop;
end populate_lob;
/
 
--alter table t1 modify lob (cl) (cache logging);

I hope the above little trick amuses you as much as it did me 🙂
Imagine, it made me blog about it!

OTN is a bad place to download

Life is hard enough as it is regarding Oracle licensing compliance. So I wonder why, time and time again, I see DBAs downloading Oracle software from the Oracle Technology Network website to install on production servers.

I’m no lawyer, nor otherwise in any position to assess the legal consequences, but under no circumstances do you want to run your enterprise servers covered by the OTN License Agreement. These are the terms you agree to whenever you download software from the Oracle Technology Network website (here’s the current OTN license agreement as a PDF, in case Oracle breaks the link some time in the future).

As a general rule, DBAs should only download Oracle software from the following two sites:

  • Oracle Software Delivery Cloud (edelivery.oracle.com)
  • My Oracle Support (support.oracle.com)

To make matters worse the zip files from OTN do not have identical content compared to the ones from support.oracle.com (never mind the fact that the contents of all three sources may be different).

# file from OTN
$ stat linuxamd64_12c_database_1of2.zip
  File: ‘linuxamd64_12c_database_1of2.zip’
  Size: 1673544724	Blocks: 3268664    IO Block: 4096   regular file
  ...

# file from support.oracle.com
$ stat p17694377_121020_Linux-x86-64_1of8.zip
  File: ‘p17694377_121020_Linux-x86-64_1of8.zip’
  Size: 1673517582	Blocks: 3268600    IO Block: 4096   regular file
  ...

Also, note that on OTN you don’t necessarily get a message digest along with your zip file. So how can you be sure you got what you thought you’d get? On edelivery and support.oracle.com you always have MD5 and SHA-1 hashes to verify your downloads.
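
Verifying is then a one-liner; a sketch with a placeholder digest (substitute the MD5 hash published on the download page):

$ echo "<published-md5>  p17694377_121020_Linux-x86-64_1of8.zip" | md5sum -c -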

I hope by now I have convinced everybody that OTN downloads are a bad fit for your enterprise Oracle environment. Thank you!

Scripting mkstore

I had the pleasure of loading a wallet (secure external password store) with around 180 credentials. Faced with a repetitive task, I always try to automate things as much as possible, or at least to the level I deem sensible. In this case my urge not to waste an afternoon typing in the same stuff over and over was overwhelming.
Unfortunately, the “mkstore” utility does not have a command switch to provide the wallet password. No problem, on Linux/Unix I would have just piped in the wallet password:

echo MyWallet-Password | mkstore -wrl /u01/app/oracle/etc/wallet/auth -createCredential DEV1.localdomain SYSTEM manager

On Windows, using batch scripts, piping does not work with “mkstore”. Here’s what I came up with:

echo | set /p="MyWallet-Password" > C:\Temp\walletpwd.txt

mkstore -wrl D:\app\oracle\etc\wallet\auth -createCredential DEV1.localdomain SYSTEM manager < C:\Temp\walletpwd.txt
...
...
...

Since we were generating new passwords for the service account on all databases, I went over the Enterprise Manager repository to generate the “mkstore” calls:

select
     'mkstore -wrl "D:\app\oracle\etc\wallet\auth" -createCredential '
  || tgt.target_name
  || ' srvacc "'
  || gen_pwd
  || '" < C:\Temp\walletpwd.txt'
from sysman.mgmt_targets tgt
  join sysman.mgmt_target_properties tgsid on (tgsid.target_guid = tgt.target_guid)
where tgt.target_type = 'oracle_database'
and tgsid.property_name = 'SID'
order by tgsid.property_value
;

Now I could just copy & paste the query output into a command line window. DONE.

Please remember to delete the temporary file containing the wallet password:

del /F /S /Q C:\Temp\walletpwd.txt

Although it saved me a lot of typing, I’m not quite happy with the solution. The reason is that I temporarily write the wallet password to a file in plain text. I do not like that at all. And in some environments this might be considered a security breach, even if the file is only there for a couple of minutes.

If you come up with a solution that redirects the wallet password to STDIN without storing it in a file, I’ll be happy to hear about it.
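
One untested idea (I haven’t verified it against “mkstore”): PowerShell, unlike cmd.exe batch, feeds pipeline input to a native executable’s STDIN, so something along these lines might avoid the temporary file entirely:

"MyWallet-Password" | mkstore -wrl D:\app\oracle\etc\wallet\auth -createCredential DEV1.localdomain SYSTEM manager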