Database.fi

Blog about wonders of database technologies

Using Amazon Redshift

Posted by pparkko on May 25, 2014

The problem

I have a customer project which needs to dig into a moderately big data set over and over again. There are real performance issues which need to be tackled.

The sheer number of rows is not actually the problem: the main history table is around 31 million rows. That is something you really need indexed access for, but with an index in place a single query runs in acceptable time.
But what we have to do is go through this history table over and over again. We also need to go through a person details table and another detail table, each containing around 6 million rows. We basically must do nested loops with many sub-joins at every step, and each step takes around 7-9 seconds.

I know this problem would be solved in no time with an Enterprise edition database and partitioning (this applies to most commercially licensed databases). What we have is a Standard edition.

Partitioning vs. sharding cluster

I have come up with a couple of possible solutions:
1) Use open-source database with partitioning (MySQL, PostgreSQL)
2) Use something else that divides the load into smaller parts => I will give Amazon Redshift a go.

The main thing about optimizing SQL statement execution is reducing the work it does. You might reduce the number of logical blocks needed, the amount of data fetched from disk, the number of joins needed, the amount of data moved over the network, the amount of CPU needed (parsing) and so on.
You need to make the operation smaller, period.

Both options use mostly the same SQL statements but configure the database (or cluster) so that it actually does a lot less work underneath.

Partitioning splits the data into slices, like a monthly partition which holds data for one month. When only the last month is queried, only data for that month is read from disk. I think I could also give Galera Cluster a go at some point.
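To make the idea concrete, here is a minimal sketch of a monthly range-partitioned table in Oracle syntax (the Enterprise feature mentioned above); the table and column names are made up for illustration, and MySQL and PostgreSQL offer comparable partitioning features:

CREATE TABLE history_events (
  event_id    NUMBER,
  person_id   NUMBER,
  event_time  DATE
)
PARTITION BY RANGE (event_time) (
  PARTITION p2014_04 VALUES LESS THAN (TO_DATE('01.05.2014','DD.MM.YYYY')),
  PARTITION p2014_05 VALUES LESS THAN (TO_DATE('01.06.2014','DD.MM.YYYY')),
  PARTITION p_max    VALUES LESS THAN (MAXVALUE)
);

-- A query restricted to May only touches partition p2014_05 (partition pruning):
SELECT COUNT(*) FROM history_events
WHERE event_time >= TO_DATE('01.05.2014','DD.MM.YYYY')
  AND event_time <  TO_DATE('01.06.2014','DD.MM.YYYY');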

Sharding (Redshift) divides the data across many nodes and uses a query coordinator in front which knows on which node the data resides. I am familiar with PostgreSQL, and luckily I knew a person to ask for some details over a quick phone call, so I'm good to go. Thank you Jukka L.!

I am now starting to work with the Redshift option first.

Amazon Redshift

I was a bit disappointed to learn that Redshift, which is a PostgreSQL implementation underneath, doesn't include PL/pgSQL. I speak PL/SQL fluently because of my 19 years of Oracle history, so I need to find another programming language.

Luckily there was a weekend after the decision to try Redshift so I had time to come up with a plan.
My idea is to create an Excel frontend which will act as the UI for the whole operation. The user will load the initial data set (the outermost loop), start the operation and get the results back into the same Excel sheet with visualization. So hello 1990’s, let’s boot up VBA ;)

– to be continued –

Duplicating a running database with RMAN

Posted by pparkko on April 22, 2014

RMAN comes with many features and develops all the time.
This is actually an old feature that has become much better: you can now duplicate a running database.

Here is the easy syntax:
duplicate target database to DEST from active database;

Since I need it for Data Guard purposes, I need to add some parameters to that command (see the sketch at the end of this post).

Remember to put the auxiliary (destination) database in NOMOUNT state first.
The instance cannot register with the listener while in NOMOUNT, so you should add a static entry to the listener.ora.
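A minimal static entry looks something like this (the DEST SID and the ORACLE_HOME path below are placeholders for your destination environment):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DEST)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = DEST)
    )
  )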

All commands needed:
rman TARGET sys/password@PROD AUXILIARY /

connected to target database: PROD (DBID=1234567890)
connected to auxiliary database: DEST (not mounted)
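For the Data Guard case, the extra parameters are essentially the FOR STANDBY and SPFILE clauses. Here is a sketch of the form I use; the ORA11_STBY name and the full walkthrough are in the Data Guard post below:

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='ORA11_STBY'
  NOFILENAMECHECK;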

Oracle Data Guard Physical Standby installation

Posted by pparkko on April 22, 2014

Oracle Data Guard Physical Standby installation in Oracle Database 11g Release 2

Data Guard is Oracle’s own disaster recovery solution. Physical standby mode is included with the Oracle Enterprise Edition license, provided that the standby database server is also licensed.

Prerequisites

  • Two servers with an operating system and the Oracle software already installed.
  • The primary server already has a running instance.
  • The standby server has a software-only installation.

Primary Server Setup

Logging

Check that the primary database is in archivelog mode.

SELECT log_mode FROM v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL>

If the primary database is in noarchivelog mode, switch it to archivelog mode:

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

We need to make sure that all operations generate redo, so force logging must be enabled:

ALTER DATABASE FORCE LOGGING;
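You can check the result afterwards; V$DATABASE shows whether force logging is on:

SELECT force_logging FROM v$database;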

Initialization Parameters

Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to "ORA11" on the primary database.

SQL> show parameter db_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_name 			     string	 ORA11

SQL> show parameter db_unique_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_unique_name			     string	 ORA11

SQL>

The DB_NAME of the standby database is the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. The DB_UNIQUE_NAME values of the primary and standby database should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter. For this example, the standby database will have the value “ORA11_STBY”.

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORA11,ORA11_STBY)';

Set suitable remote archive log destinations. In this case I’m using the fast recovery area for the local location, but you could specify a location explicitly if you prefer. Notice that the SERVICE and the DB_UNIQUE_NAME for the remote location reference the standby database.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORA11_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11_STBY';
The LOG_ARCHIVE_DEST_2 destination is enabled by default, so Oracle already tries to connect to the standby database.
You might want to set it to deferred for the time being:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

Later, when the configuration is ready, you can enable it again:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
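A quick way to see whether the destination is valid and shipping redo (or what error it reports) is to query V$ARCHIVE_DEST:

SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;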

The LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters must be set to appropriate values and REMOTE_LOGIN_PASSWORDFILE must be set to exclusive.

ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='vrarc_%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30 SCOPE=SPFILE;
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;

In addition to the previous settings, it is recommended to make sure the primary is ready to switch roles and become a standby. For that to work properly we need to set the following parameters. Adjust the *_CONVERT parameters to account for the filename and path differences between the servers.

ALTER SYSTEM SET FAL_SERVER=ORA11_STBY;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

There may be a need to rename datafiles, logfiles etc. during operation. However, this adds risk, so it is best to manage without renaming if you can. We try to manage without renaming datafiles, hence the commented-out commands:

--ALTER SYSTEM SET DB_FILE_NAME_CONVERT='ORA11_STBY','ORA11' SCOPE=SPFILE;
--ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='ORA11_STBY','ORA11'  SCOPE=SPFILE;

It might be a good idea to restart the database at this point so that all parameters become active.

Service Setup

Entries for the primary and standby databases are needed in the “$ORACLE_HOME/network/admin/tnsnames.ora” files on both servers. You can create these using the Network Configuration Utility (netca) or manually. The following entries were used during this setup.

ORA11 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = primaryhost)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORA11.WORLD)
    )
  )

ORA11_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORA11.WORLD)
    )
  )

Backup Primary Database

If you are planning to use an active duplicate to create the standby database, then this step is unnecessary. For a backup-based duplicate, or a manual restore, take a backup of the primary database.

$ rman target=/

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Create Standby Controlfile and PFILE

Create a controlfile for the standby database by issuing the following command on the primary database.

ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/ORA11_stby.ctl';

Create a parameter file for the standby database.

CREATE PFILE='/tmp/initORA11_stby.ora' FROM SPFILE;

Amend the PFILE, making the entries relevant for the standby database. I’m making a replica of the original server, so in my case I only had to amend the following parameters.

*.db_unique_name='ORA11_STBY'
*.fal_server='ORA11'
*.log_archive_dest_2='SERVICE=ORA11 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11'

Standby Server Setup (Manual)

Copy Files

Create same Oracle directory structure on the standby server as on the primary.

$ mkdir -p /u01/app/oracle/oradata/ORA11
$ mkdir -p /u01/app/oracle/fast_recovery_area/ORA11
$ mkdir -p /u01/app/oracle/admin/ORA11/adump

Copy the files from the primary to the standby server.

$ # Standby controlfile to all locations.
$ scp oracle@primaryhost:/tmp/ORA11_stby.ctl /u01/app/oracle/oradata/ORA11/control01.ctl
$ cp /u01/app/oracle/oradata/ORA11/control01.ctl /u01/app/oracle/fast_recovery_area/ORA11/control02.ctl

$ # Archivelogs and backups
$ scp -r oracle@primaryhost:/u01/app/oracle/fast_recovery_area/ORA11/archivelog /u01/app/oracle/fast_recovery_area/ORA11
$ scp -r oracle@primaryhost:/u01/app/oracle/fast_recovery_area/ORA11/backupset /u01/app/oracle/fast_recovery_area/ORA11

$ # Parameter file.
$ scp oracle@primaryhost:/tmp/initORA11_stby.ora /tmp/initORA11_stby.ora

$ # Remote login password file.
$ scp oracle@primaryhost:$ORACLE_HOME/dbs/orapwORA11 $ORACLE_HOME/dbs

Remember to use exactly the same paths as on the primary server to avoid unnecessary debugging work.

Oracle environment variables

Remember that the Oracle directory structure needs to be exactly the same on the DR site. It seems that a default installation with a database and a software-only installation create different ORACLE_HOME paths: software-only creates ../dbhome_1 while instance creation creates ../db_1. If you do not wish to start renaming files, you need to use exactly the same directories on both sites. Set the environment with . oraenv.

Check that you have a sane-looking set of environment variables, for example:
ORABASE_EXEC=/opt/oracle/product/11.2.0/db_1/bin/orabase
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/11.2.0/db_1
ORACLE_SID=PRODvrweb
ORAHOME=/home/oracle
ORASID=oracle

Start Listener

Make sure the listener is started on the standby server.

$ lsnrctl start

Restore Backup

Create the SPFILE from the amended PFILE.

$ export ORACLE_SID=ORA11
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/tmp/initORA11_stby.ora';

Restore the backup files.

$ export ORACLE_SID=ORA11
$ rman target=/

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;

Create Redo Logs

Create online redo logs for the standby. It’s a good idea to match the configuration of the primary server.

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/ORA11/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/ORA11/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/ORA11/online_redo03.log') SIZE 50M;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

In addition to the online redo logs, you should create standby redo logs on both the standby and the primary database (in case of switchovers). The standby redo logs should be at least as big as the largest online redo log, and there should be one extra group per thread compared to the online redo logs. In my case, the following standby redo logs must be created on both servers.

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo04.log') SIZE 50M;
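You can verify the groups and sizes on both databases from V$LOG and V$STANDBY_LOG:

SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$standby_log;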

Once this is complete, we can start the apply process.

Standby Server Setup (DUPLICATE)

Copy Files

Create the necessary directories on the standby server.

$ mkdir -p /u01/app/oracle/oradata/ORA11
$ mkdir -p /u01/app/oracle/fast_recovery_area/ORA11
$ mkdir -p /u01/app/oracle/admin/ORA11/adump

Copy the files from the primary to the standby server.

$ # Standby controlfile to all locations.
$ scp oracle@primaryhost:/tmp/ORA11_stby.ctl /u01/app/oracle/oradata/ORA11/control01.ctl
$ cp /u01/app/oracle/oradata/ORA11/control01.ctl /u01/app/oracle/fast_recovery_area/ORA11/control02.ctl

$ # Parameter file.
$ scp oracle@primaryhost:/tmp/initORA11_stby.ora /tmp/initORA11_stby.ora

$ # Remote login password file.
$ scp oracle@primaryhost:$ORACLE_HOME/dbs/orapwORA11 $ORACLE_HOME/dbs

Start Listener

When using active duplicate, the standby server requires static listener configuration in a “listener.ora” file. In this case I used the following configuration.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORA11.WORLD)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = ORA11)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost.localdomain)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

Make sure the listener is started on the standby server.

$ lsnrctl start

Create Standby Redo Logs on Primary Server

The DUPLICATE command automatically creates the standby redo logs on the standby. To make sure the primary database is configured for switchover, we must create the standby redo logs on the primary server.

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/ORA11/standby_redo04.log') SIZE 50M;

Create Standby Using DUPLICATE

Start the auxiliary instance on the standby server using the temporary “init.ora” file.

$ export ORACLE_SID=ORA11
$ sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/tmp/initORA11_stby.ora';

Connect to RMAN, specifying a full connect string for both the TARGET and AUXILIARY instances. Do not attempt to use OS authentication.

$ rman TARGET sys/password@ORA11 AUXILIARY sys/password@ORA11_STBY

Now issue the following DUPLICATE command.

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='ORA11_STBY' COMMENT 'Is standby'
    SET LOG_ARCHIVE_DEST_2='SERVICE=ORA11 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11'
    SET FAL_SERVER='ORA11' COMMENT 'Is primary'
  NOFILENAMECHECK;

A brief explanation of the individual clauses is shown below.

  • FOR STANDBY: This tells the DUPLICATE command that it is being used to create a standby, so it will not force a DBID change.
  • FROM ACTIVE DATABASE: The duplicate will be created directly from the source datafiles, without an additional backup step.
  • DORECOVER: The DUPLICATE will include the recovery step, bringing the standby up to the current point in time.
  • SPFILE: Allows us to reset values in the spfile when it is copied from the source server.
  • NOFILENAMECHECK: Destination file locations are not checked.

Once the command is complete, we can start the apply process.

Start Apply Process

Start the apply process on standby server.

# Foreground redo apply. Session never returns until cancel.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

# Background redo apply. Control is returned to the session once the apply process is started.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
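To confirm that the managed recovery process (MRP0) is running on the standby and see which sequence it is applying, query V$MANAGED_STANDBY:

SELECT process, status, thread#, sequence#
FROM   v$managed_standby;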

If you need to cancel the apply process, issue the following command.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

If you prefer, you can set a delay between the arrival of the archived redo log and it being applied on the standby server using the following commands.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;

Provided you have configured standby redo logs, you can start real-time apply using the following command.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

Test Log Transport

On the primary server, check the latest archived redo log and force a log switch.

ALTER SESSION SET nls_date_format='DD.MM.YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time
FROM   v$archived_log
ORDER BY sequence#;

ALTER SYSTEM SWITCH LOGFILE;

Check the new archived redo log has arrived at the standby server and been applied.

ALTER SESSION SET nls_date_format='DD.MM.YYYY HH24:MI:SS';
SELECT sequence#, first_time, next_time, applied
FROM   v$archived_log
ORDER BY sequence#;

Protection Mode

There are three protection modes for the primary database:

  • Maximum Availability: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no standby location is available, it acts in the same manner as maximum performance mode until a standby becomes available again.
  • Maximum Performance: Transactions on the primary commit as soon as redo information has been written to the online redo log. Transfer of redo information to the standby server is asynchronous, so it does not impact on performance of the primary.
  • Maximum Protection: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no suitable standby location is available, the primary database shuts down.

By default, for a newly created standby database, the primary database is in maximum performance mode.

SELECT protection_mode FROM v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL>

The mode can be switched using the following commands. Note the alterations in the redo transport attributes.

-- Maximum Availability.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORA11_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- Maximum Performance.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORA11_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- Maximum Protection.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORA11_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA11_STBY';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
ALTER DATABASE OPEN;

Database Switchover

A database can be in one of two mutually exclusive roles (primary or standby). These roles can be switched at runtime without loss of data or resetting of the redo logs. This process is known as a switchover and can be performed using the following statements.

-- Convert primary database to standby
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown primary database
SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On the original standby database issue the following commands.

-- Convert standby database to primary
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
STARTUP;

Once this is complete, test the log transport as before. If everything is working fine, switch the primary database back to the original server by doing another switchover. This is known as a switchback.

Failover

If the primary database is not available, the standby database can be activated as a primary database using the following statements.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

Since the standby database is now the primary database, it should be backed up immediately. The original primary database can now be configured as a standby. If Flashback Database was enabled on the primary database, it can be reinstated as a standby relatively easily. If not, the whole setup process must be followed, but this time using the original primary server as the standby.

Cursors are dead… Long live cursors! (T-SQL)

Posted by pparkko on February 7, 2014

It’s been a while since I had a good T-SQL task. Now I have!

I am quite enthusiastic about it. My inner nerd is happy about some database modeling & coding & data crunching task.

What I meant by the title is that it has long been said in SQL Server circles: “DO NOT USE CURSORS. They are bad for performance.”

I was a bit naive to just take that as a difference between Oracle and SQL Server, without investigating further.
One should never do that.

As always – the truth is not that simple.

I read this great post about the subject and comparisons between while loops and basic cursor usage: sqlblog.com

So.. It is perfectly OK to use cursors in T-SQL.
We all know they are really fast in Oracle, although not the most modern way of doing things.

=> Going with this structure (filled in here with a placeholder variable, cursor and query so it runs as-is):

-- the cursor name, variable and query below are placeholders
DECLARE @var int;
DECLARE cur CURSOR FOR
    SELECT object_id FROM sys.objects;

OPEN cur;
FETCH NEXT FROM cur INTO @var;
WHILE @@FETCH_STATUS <> -1 /* end of cursor */
BEGIN
    IF @@FETCH_STATUS <> -2 /* row missing */
    BEGIN
        -- process @var here
    END
    FETCH NEXT FROM cur INTO @var;
END
CLOSE cur;
DEALLOCATE cur;
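For a plain read-only, forward-only loop like the one above, declaring the cursor as LOCAL FAST_FORWARD is a commonly recommended variant (the cursor name and query are again placeholders):

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT object_id FROM sys.objects;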

Another great thing about cursors is that they are so easy to understand, which is always an important thing.

BR,
Pasi

How to read Oracle Database files without Oracle?

Posted by pparkko on February 1, 2014

Let’s say that you have lost your Oracle installation and data files.
You have no real backup, no archivelogs etc.

What you have is a broken, warm copy of an Oracle database that does not start, even with hidden _allow_resetlogs_corruption parameters etc. That was the situation I faced some time ago at a customer.

It was mandatory to be able to restore the database since 5 years of data would be lost otherwise and the business pain was unbearable.

I had to start searching for DUL (data unload) tools.
Oracle Support has a tool that they use internally but do not lend out. I found a separate tool for the job and licensed it for the duration of the project.

I can say that:
a) The technology works.

b) I was able to recover & restore the database to the state of the “warm backup”

c) You can contact us if you are in deep sh*t and you still have a set of datafiles but no working Oracle.

BR,
“Norman Ibas” of databases – Pasi ;)

SQL Server tuning (for Oracle people)

Posted by pparkko on February 1, 2014

What if you’re an Oracle pro and need to start tuning SQL Server?
This might happen to you…

Here is a quickstart guide if you already know your way around the basics. BTW, no Profiler tips here.

This is a no-nonsense post, used mainly for my own work, and I will add more content to it from time to time. Maybe even some descriptions.

-- Most I/O
SELECT TOP 5
(total_logical_reads/execution_count) AS avg_logical_reads,
(total_logical_writes/execution_count) AS avg_logical_writes,
(total_physical_reads/execution_count) AS avg_phys_reads,
execution_count,
statement_start_offset as stmt_start_offset,
(SELECT SUBSTRING(text, statement_start_offset/2 + 1,
(CASE WHEN statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(MAX),text)) * 2
ELSE statement_end_offset
END - statement_start_offset)/2)
FROM sys.dm_exec_sql_text(sql_handle)) AS query_text,
(SELECT query_plan from sys.dm_exec_query_plan(plan_handle)) as query_plan
FROM sys.dm_exec_query_stats
ORDER BY (total_logical_reads + total_logical_writes)/execution_count DESC

DISK – Is my disk overburdened?

-- Pending IO tasks
select
database_id,
file_id,
io_stall,
io_pending_ms_ticks,
scheduler_address
from sys.dm_io_virtual_file_stats(NULL, NULL) t1,
sys.dm_io_pending_io_requests as t2
where t1.file_handle = t2.io_handle;

CPU – Is my CPU a bottleneck?

If this query often shows runnable tasks (many non-zero values), then the answer is yes.
The bigger the numbers and the more often you see them, the worse the situation.

-- Runnable CPU tasks
select
scheduler_id,
current_tasks_count,
runnable_tasks_count
from
sys.dm_os_schedulers
where
scheduler_id < 255;

-- Most cumulative CPU
select
highest_cpu_queries.plan_handle,
highest_cpu_queries.execution_count,
highest_cpu_queries.total_worker_time,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
from
(select top 5
qs.plan_handle,
qs.execution_count,
qs.total_worker_time
from
sys.dm_exec_query_stats qs
order by qs.total_worker_time desc) as highest_cpu_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_cpu_queries.total_worker_time desc

--PHYSICAL READS
select
highest_phys_queries.plan_handle,
highest_phys_queries.execution_count,
highest_phys_queries.last_physical_reads,
highest_phys_queries.max_physical_reads,
highest_phys_queries.total_physical_reads,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
from
(select top 5
qs.plan_handle,
qs.execution_count,
qs.max_physical_reads,
qs.last_physical_reads,
qs.total_physical_reads
from
sys.dm_exec_query_stats qs
order by qs.total_physical_reads desc) as highest_phys_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_phys_queries.total_physical_reads desc;

--LOGICAL READS
select
highest_logical_queries.execution_count,
highest_logical_queries.last_logical_reads,
highest_logical_queries.max_logical_reads,
highest_logical_queries.total_logical_reads,
q.[text],
highest_logical_queries.plan_handle
from
(select top 5
qs.plan_handle,
qs.execution_count,
qs.max_logical_reads,
qs.last_logical_reads,
qs.total_logical_reads
from
sys.dm_exec_query_stats qs
order by qs.total_logical_reads desc) as highest_logical_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_logical_queries.total_logical_reads desc

Currently running sessions
-- Currently executing sessions
SELECT TOP 1000 [session_id]
,[login_time]
,[host_name]
,[program_name]
,[host_process_id]
,[client_version]
,[client_interface_name]
,[security_id]
,[login_name]
,[nt_domain]
,[nt_user_name]
,[status]
,[context_info]
,[cpu_time]
,[memory_usage]
,[total_scheduled_time]
,[total_elapsed_time]
,[endpoint_id]
,[last_request_start_time]
,[last_request_end_time]
,[reads]
,[writes]
,[logical_reads]
,[is_user_process]
,[text_size]
,[language]
,[date_format]
,[date_first]
,[quoted_identifier]
,[arithabort]
,[ansi_null_dflt_on]
,[ansi_defaults]
,[ansi_warnings]
,[ansi_padding]
,[ansi_nulls]
,[concat_null_yields_null]
,[transaction_isolation_level]
,[lock_timeout]
,[deadlock_priority]
,[row_count]
,[prev_error]
,[original_security_id]
,[original_login_name]
,[last_successful_logon]
,[last_unsuccessful_logon]
,[unsuccessful_logons]
,[group_id]
,[database_id]
,[authenticating_database_id]
,[open_transaction_count]
FROM [master].[sys].[dm_exec_sessions]
where status = 'running'

Tempdb usage

Free tempdb space:
Select SUM (unallocated_extent_page_count)*8 as freespace_kb
From sys.dm_db_file_space_usage

SELECT TOP 1000 [database_id]
,[file_id]
,[filegroup_id]
,[total_page_count]
,[allocated_extent_page_count]
,[unallocated_extent_page_count]
,[version_store_reserved_page_count]
,[user_object_reserved_page_count]
,[internal_object_reserved_page_count]
,[mixed_extent_page_count]
FROM [master].[sys].[dm_db_file_space_usage]

Currently running SQL statements
Ordered by CPU here, change at will:
SELECT
substring(text,qs.statement_start_offset/2
,(CASE
WHEN qs.statement_end_offset = -1 THEN len(convert(nvarchar(max), text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)
,qs.plan_generation_num as recompiles
,qs.execution_count as execution_count
,qs.total_elapsed_time - qs.total_worker_time as total_wait_time
,qs.total_worker_time as cpu_time
,qs.total_logical_reads as reads
,qs.total_logical_writes as writes
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
LEFT JOIN sys.dm_exec_requests r
ON qs.sql_handle = r.sql_handle
ORDER BY 5 DESC

When tuning SQL statements, remember to put these on:

SET STATISTICS IO ON
SET STATISTICS TIME ON
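For example, running the statement you are tuning with those options on (the query here is just a placeholder) prints the logical reads per table and the CPU / elapsed times to the Messages tab:

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT COUNT(*) FROM sys.objects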

Dropping Enterprise Manager 12c repository

Posted by pparkko on October 25, 2011

Just a real quick one, since I need to do it right away…

My EM CC 12c installation blew up and needs to be re-created, so here are the instructions to remove an existing OEM CC (or OEM GC) repository from a failed installation using the following command:

$OMS_HOME/sysman/admin/emdrep/bin/RepManager -dbUser sys -dbPassword <sys_password> -dbRole sysdba -reposName sysman -reposPassword <sysman_password> -action dropall [-mwHome <middleware_home>] [-mwOraHome <oms_oracle_home>]

For Oracle Enterprise Manager Database Control (single instance) repository use this command:
emca -deconfig dbcontrol db -repos drop -cluster -SYS_PWD sys1 -SYSMAN_PWD SYS_PASSWORD

HTH,
Pasi

Analyzing a specific time period with Oracle Statspack – DBMS_SCHEDULER

Posted by pparkko on October 12, 2011

Hello, it’s been a while since the last post. Too busy :(

Here is just a small example of how to perform analysis on a specific time period. Of course with AWR it would be so simple, but if you do not have access to AWR (Standard Edition Oracle for example), you just have to make do with Statspack.

Let’s say that we have a problem which occurs late at night at a customer site.
It would be helpful to see what happens between 21:00 and 23:00.

Of course we can just make statspack do snaps every 5 minutes and then turn it off later:
BEGIN
DBMS_SCHEDULER.CREATE_JOB ( job_name => 'STATSPACK_COLLECTION',
job_type => 'STORED_PROCEDURE',
job_action => 'statspack.snap',
repeat_interval => 'FREQ=MINUTELY; BYMINUTE=00,05,10,15,20,25,30,35,40,45,50,55',
enabled => TRUE,
comments => 'Statspack Collection'
);
END;
/

But it is way too easy to forget to turn it off, and you would also allocate space in PERFSTAT for no good reason. You’d probably end up filling up the PERFSTAT tablespace or quota.

The elegant way is to run snaps only for the time period you are interested in.
For that, we need to add a schedule to dbms_scheduler, create a program and tie them together as a job.

We need to create a schedule:
- Starts at 21:00
- Ends at 23:00
- When the schedule is active, Oracle must run statspack.snap every 5 minutes

Fine. How do we do that?

We need to create a schedule like this:

-- run every 5 minutes from 21:00 to 23:00
begin
dbms_scheduler.create_schedule(
schedule_name => 'INTERVAL_EVERY_5_MINUTES_2H',
start_date => trunc(sysdate)+21/24,
repeat_interval => 'freq=MINUTELY;interval=5',
end_date => trunc(sysdate)+23/24,
comments => 'Today from 21 to 23 PM, 5 minute interval');
end;
/

We also need a program (an Oracle Scheduler term) to run statspack.snap:
-- Program to run statspack.snap
begin
dbms_scheduler.create_program
(program_name=> 'STATSPACK_SNAP',
program_type=> 'STORED_PROCEDURE',
program_action=> 'statspack.snap',
enabled=>true,
comments=>'Procedure to create statspack snap.'
);
end;
/


begin
-- Create job which will run with the new schedule and program.
dbms_scheduler.create_job
(job_name => 'STATSPACK_COLLECTION',
program_name=> 'STATSPACK_SNAP',
schedule_name=>'INTERVAL_EVERY_5_MINUTES_2H',
enabled=>true,
auto_drop=>false,
comments=>'Run statspack every 5 minutes from 21 to 23 PM today');
end;
/

That should do it.
It will run the Statspack collection from 21:00 to 23:00 at a 5 minute interval.

Tomorrow you’ll be able to run spreport.sql (Statspack reports) for that time period and dig deeper into the problem.
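To find the snapshot IDs that bound the 21:00-23:00 window before running spreport.sql, a query along these lines helps (assuming the default PERFSTAT owner):

SELECT snap_id, snap_time
FROM   perfstat.stats$snapshot
ORDER  BY snap_id;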

Hope this helps,
Pasi

Using Expdp & impdp and changing schemas with remap_schema

Posted by pparkko on May 16, 2011

What is really great about expdp & impdp is that you can export and import to new schemas very easily. That needed some work earlier with export and import.

Let’s say that I want to import into HR_NEW schema instead of HR.

We just add a remap_schema into the import command: remap_schema=HR:HR_NEW

This will actually even create the new user (schema)!

C:\Temp>impdp system/oracle dumpfile=hr.dmp directory=dmpdir schemas=hr logfile=
hr_imp.log remap_schema=HR:HR_NEW

Import: Release 11.2.0.2.0 - Beta on Ma Touko 16 11:42:28 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** dumpfile=hr.dmp direc
tory=dmpdir schemas=hr logfile=hr_imp.log remap_schema=HR:HR_NEW
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR_NEW"."COUNTRIES" 6.367 KB 25 rows
. . imported "HR_NEW"."DEPARTMENTS" 7.007 KB 27 rows
. . imported "HR_NEW"."EMPLOYEES" 16.80 KB 107 rows
. . imported "HR_NEW"."JOB_HISTORY" 7.054 KB 10 rows
. . imported "HR_NEW"."JOBS" 6.992 KB 19 rows
. . imported "HR_NEW"."LOCATIONS" 8.273 KB 23 rows
. . imported "HR_NEW"."REGIONS" 5.476 KB 4 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 11:42:47

The new user is still locked:
SQL> connect hr_new/hr_new
ERROR:
ORA-28000: the account is locked

Unlock:
C:\Temp>sqlplus system/oracle

SQL*Plus: Release 11.2.0.2.0 Beta on Ma Touko 16 11:44:30 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta

SQL> alter user hr_new account unlock;

User altered.

SQL> alter user hr_new identified by hr_new;

User altered.

SQL> connect hr_new/hr_new
Connected.

SQL> select count(*) as amount, object_type
2 from user_objects
3 group by object_type;

AMOUNT OBJECT_TYPE
---------- -------------------
3 SEQUENCE
2 PROCEDURE
2 TRIGGER
7 TABLE
19 INDEX
1 VIEW

6 rows selected.

Sometimes you also need to remap tablespaces, using remap_tablespace. What a tool!
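For example, combining both remaps in one run (the tablespace names here are just placeholders):

impdp system/oracle dumpfile=hr.dmp directory=dmpdir schemas=hr logfile=hr_imp2.log remap_schema=HR:HR_NEW remap_tablespace=USERS:HR_NEW_DATA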

That was of course also possible with exp & imp, but since it was not directly supported, I remember setting quotas to 0 and changing default tablespaces to be able to import into a different tablespace.

Hope this helps,
Pasi

Using expdp and impdp export and import in 10g and 11g

Posted by pparkko on May 16, 2011

Just a quick example of expdp and impdp.

First, check that you have an OS directory like /temp or c:\temp.

Since this example was made in the Windows world, we use c:\temp.

Log in and create an Oracle Directory:

C:\Temp>sqlplus system/oracle

SQL*Plus: Release 11.2.0.2.0 Beta on Ma Touko 16 11:05:13 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta

SQL> create directory dmpdir AS 'c:\temp';

Directory created.

You may already have a suitable Oracle directory; you can check from DBA_DIRECTORIES:

SQL> SELECT directory_name, directory_path FROM dba_directories ;

DIRECTORY_NAME
------------------------------
DIRECTORY_PATH
-------------------------------------------------------------------

DMPDIR
c:\temp

All right, we have what we need.

Let’s say that we need to export HR without knowing the HR user’s password.

From OS level:

C:\Temp>expdp system/oracle dumpfile=hr.dmp directory=dmpdir schemas=hr logfile=
hr_exp.log

Export: Release 11.2.0.2.0 - Beta on Ma Touko 16 11:29:37 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** dumpfile=hr.dmp direc
tory=dmpdir schemas=hr logfile=hr_exp.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."COUNTRIES" 6.367 KB 25 rows
. . exported "HR"."DEPARTMENTS" 7.007 KB 27 rows
. . exported "HR"."EMPLOYEES" 16.80 KB 107 rows
. . exported "HR"."JOB_HISTORY" 7.054 KB 10 rows
. . exported "HR"."JOBS" 6.992 KB 19 rows
. . exported "HR"."LOCATIONS" 8.273 KB 23 rows
. . exported "HR"."REGIONS" 5.476 KB 4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
C:\TEMP\HR.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully completed at 11:32:55

C:\Temp>dir

16.05.2011 11:30 .
16.05.2011 11:30 ..
16.05.2011 11:32 503 808 HR.DMP
16.05.2011 11:32 2 395 hr_exp.log

Import is very simple.
expdp / impdp is somewhat easier to use than exp/imp, since most of the time you can use the same parameters while exporting and importing.

Basically I changed EXPDP => IMPDP and the name of the logfile.

Impdp command:
impdp system/oracle dumpfile=hr.dmp directory=dmpdir schemas=hr logfile=hr_imp.log

What is really great about expdp & impdp is that you can export and import to new schemas very easily. That needed some work earlier with export and import.

I wrote another entry about that: see the remap_schema post above.