Introduction
Sometimes, a performance problem appears rather suddenly. One day, performance is reasonable but the next, operations are much slower. This can sometimes be caused by a change in behavior in the database layer. Oracle, for example, may change execution plans for SQL statements after updating statistics. In this article, I’ll look at why that happens with Oracle, and how you can detect and fix sub-optimal execution plans.
This will not be an exhaustive treatment of the topic, but I will focus on some of the common problems that I’ve seen with Oracle. I’ll provide links to Oracle documentation for those of you who want to learn more.
Let’s start with a little background on how SQL statements are handled.
What’s a SQL execution plan?
An execution plan is the series of steps that Oracle uses when executing a SQL statement. This covers the specific indexes, if any, that will be used plus other activities like sorts and joins. When you execute a statement for the first time, Oracle will parse it and then the Oracle optimizer will analyze the statement to find the lowest cost way of executing the statement. The optimizer will use the statistics available on the table, and will make guesses as to the number of rows that are likely to be returned by any step in the execution plan. The number of rows returned by a given step is called the cardinality. Once the parsing is completed, the execution plan is then cached; future requests to execute the statement will use the cached execution plan.
As you can probably guess, the algorithms used by the optimizer are quite sophisticated, and they evolve with each release of Oracle. But the algorithms are not perfect: in the end, they are making guesses about the cardinality of each step, and those guesses are not always correct for every workload. And so, there are many Oracle features that are intended to deal with this problem. Some features (like SQL profiles) ensure plan stability. Others (histograms, SQL plan directives, adaptive plans) improve cardinality estimates or react to bad cardinality predictions.
Bind variables
Bind variables are a way of parameterizing SQL statements. An application can supply values for the bind variables at run time. One advantage to bind variables is that they allow Oracle to reuse the cached execution plans.
If you use literals, then the following two statements would be viewed as different, and each would be parsed separately.
SELECT ITEM_ID from RESOURCE_RESOURCE WHERE ARCHIVED = '0'
SELECT ITEM_ID from RESOURCE_RESOURCE WHERE ARCHIVED = '1'
But if you use a bind variable, then Oracle only needs to parse the statement once, and it can use the cached execution plan even if the value of the bind variable changes. The statement below includes a bind variable (:1).
SELECT ITEM_ID from RESOURCE_RESOURCE WHERE ARCHIVED = :1
So, how does the optimizer decide on an execution plan when the SQL only specifies a placeholder for a value? It uses bind variable peeking: when the statement is parsed, Oracle will peek at the values of any bind variables and use those values to improve its cardinality estimates.
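You can check which bind values Oracle actually captured for a cached statement by querying v$sql_bind_capture. Here is a minimal sketch; the sql id is just an example, so substitute one from your own system:

-- Show the bind values captured for one cached statement
-- ('84ynj21zhy82r' is an example sql id)
select name, position, datatype_string, value_string
from v$sql_bind_capture
where sql_id = '84ynj21zhy82r'
order by position;

This is handy when you suspect that a cached plan was built from unrepresentative bind values.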
Histograms
A histogram is a type of column statistic that provides more detailed information about the data distribution in a table column. If the distribution of values is skewed, then a histogram can help the optimizer improve its cardinality estimates. For example, in the statement below, the ARCHIVED column can have only two values, 1 or 0, representing whether a resource has been archived or not:
SELECT ITEM_ID from RESOURCE_RESOURCE WHERE ARCHIVED = :1
But in this example, most resources are not archived. So Oracle might initially start by incorrectly assuming that the values are equally distributed. It would assume that the cardinality of “WHERE ARCHIVED = :1” would be the total number of rows divided by the number of distinct values. So, if we have 1,000,000 resources, the first estimate would be that 500,000 rows would be returned. In that case, the optimizer might decide that a full table scan is the best execution plan. But if 95% of the resources are not archived, then the cardinality of “WHERE ARCHIVED = :1” would be 50,000 when selecting for ARCHIVED = ‘1’ and 950,000 when selecting for ARCHIVED = ‘0’. A histogram will tell the optimizer about the frequency of different values, and so the optimizer can get a better cardinality estimate if it knows what value of ARCHIVED you are selecting for.
There are several different kinds of histograms. Oracle will decide which histograms are needed when running DBMS_STATS (if the METHOD_OPT parameter for a table is set to the default of “SIZE AUTO”).
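For reference, here is a sketch of a statistics-gathering call that leaves histogram creation up to Oracle; the schema and table names are taken from the examples later in this article:

-- 'FOR ALL COLUMNS SIZE AUTO' is the documented METHOD_OPT default,
-- so this is equivalent to omitting the parameter (unless table
-- preferences have been changed)
exec dbms_stats.gather_table_stats(ownname => 'RM43USER', tabname => 'RESOURCE_RESOURCE', method_opt => 'FOR ALL COLUMNS SIZE AUTO')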
How do statements become slow?
It is not unusual for SQL statements to become slow. If you Google for “Oracle SQL plan stability”, you’ll find a number of articles on why it happens and what to do about it. And in fact, there are many features in Oracle to help you deal with that.
Here I’ll look at some of the common problems I’ve encountered in applications I’ve worked with.
Bind variable peeking
If the number of rows returned by a SQL statement is sensitive to the values of the bind variables, then bind variable peeking can create a sub-optimal execution plan. Let’s use the following statement as an example:
select COUNT(*) from RM1USER.RESOURCE_RESOURCE t1 where ((t1.MODIFIED >= :1 ) and (t1.MODIFIED <= :2 ) and (t1.URI like 'storage/%' or t1.URI like 'baselines/%' or t1.URI like 'reindex/%'));
This statement has two bind variables: a start time and an end time. Let’s say that the first time the application executes this statement, it uses a wide date range, and so the statement returns over 1,000,000 rows. As the statement is compiled, the optimizer peeks at the bind variables and notices the cardinality is high, and so it selects a full table scan as the best way to execute the statement. And this execution plan is cached and will be used if this statement is executed again.
Now let’s say the code executes this statement again but this time with a narrow time range, and only a few rows are returned. Unfortunately, the cached execution plan involves a full table scan, and so this statement will perform poorly. A full table scan is not optimal if this statement returns a small number of rows – an index scan against the index on the MODIFIED column is a better choice.
The poor performance will persist until the execution plan is updated. This could take some time – the statement might need to age out of the cache first. Or, in Oracle 12c, the optimizer might notice the cardinality mis-estimate and the new adaptive plan features can come into play.
Adaptive cursor sharing is a feature which allows Oracle to select execution plans based on the values of bind variables. This is another source of execution plan instability.
Histograms
Histograms are another reason that Oracle might choose a bad execution plan.
Histograms and bind variable peeking
When a bind variable is used, then the optimizer will peek at the bind variable when creating the execution plan. If there is a histogram available for a column, the cardinality estimate will use the peeked bind value in conjunction with the histogram. Since the histogram was created because Oracle detected skewed data, the cardinality estimates are going to be sensitive to the peeked value. You are likely to get different execution plans if the value of the peeked variable occurs at a low frequency vs. a high frequency. Performance can be dramatically different when using a high frequency value with a plan generated assuming a low frequency value (and vice versa).
This is another area where adaptive cursor sharing may come into play.
Histograms and VARCHAR columns
Oracle has a limitation when generating histograms on VARCHAR columns. In Oracle 11g, it will only look at the first 32 characters of the column values. In Oracle 12c, it will only look at the first 64 characters. That means if the values in the VARCHAR column are not unique in the first 32/64 characters, then the resulting histogram will map all of the values into a single bin. Oracle may then decide that a full table scan is the best execution plan, since it would assume that any WHERE clause will select all of the values.
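One way to check whether a column is affected is to compare the total number of distinct values against the number of distinct values in the first 32 (or 64) characters. This sketch uses the URI column from the examples in this article:

-- Compare full NDV against NDV of the 32-character prefix
select count(distinct URI) full_ndv,
       count(distinct substr(URI, 1, 32)) prefix_ndv
from RM43USER.RESOURCE_RESOURCE;

If prefix_ndv is much smaller than full_ndv, a histogram on that column is likely to be misleading.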
In Oracle 12c (and in some versions of Oracle 11g), this behavior improved because Oracle would also look at the number of distinct values in the column as a way to check the accuracy of the histogram. It would use 1/(Number of distinct values) to predict the cardinality rather than use a histogram with a small number of bins. In other words, if all of the column values are mapped into one histogram bin, but the number of distinct values is more than 1, then the histogram isn’t accurate and Oracle will use 1/NDV as the cardinality.
Histograms generated inappropriately
Oracle will sometimes create a histogram on a timestamp column. Imagine a MODIFIED date column which ends up having duplicate timestamps during a period of high activity (such as when importing data). Oracle might think the values are skewed because of the duplicates, and then try to divide up the timestamps into ranges. This becomes less effective as a system ages. The range of MODIFIED timestamps becomes large for systems that have been in production for years, which makes the cardinality predictions inaccurate.
I have seen cases where Oracle created a histogram on a column which contained all unique values. For equality predicates, the cardinality for such a column would be 1/(number of distinct values) and so a histogram would be useless. I suspect the histogram was created because of queries involving LIKE clauses and wildcards. Oracle might have detected skew because “LIKE ‘string%’” queries returned more results than predicted by 1/NDV. This is speculation, because I haven’t found any documentation of the impact of LIKE on histogram generation.
Out-of-date statistics
The accuracy of the Oracle optimizer does depend on having current statistics. If you are updating statistics daily, that would normally be frequent enough to avoid problems. There are a few exceptions, however.
One application that I’ve worked with provided an import capability, so that you could create entries in bulk. Importing an unusually high number of entries over a short time could skew statistics, and so updating statistics immediately after an import can be a good idea. One example of why this matters is the MODIFIED date column I mentioned earlier. If you import a large number of artifacts, their MODIFIED date values will be later than the last value that the optimizer knows about. From the optimizer’s perspective, queries against these new entries are for dates in the future. So it has to predict cardinality based on past creation rates, and this is likely to be an underestimate.
If you are upgrading to a new release, you should consider updating statistics at least once partway through the upgrade. When an upgrade adds new tables, those tables start out empty. The cardinality of queries is 0 initially, and that can lead to poor choices. Updating the statistics partway through the upgrade gives the optimizer better information.
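In both cases, a sketch of the refresh might look like the following; the GATHER AUTO option limits the work to tables whose statistics are missing or stale:

-- Refresh only missing or stale statistics for the schema
exec dbms_stats.gather_schema_stats(ownname => 'RM43USER', options => 'GATHER AUTO')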
Understanding slow SQL statements
When trying to understand slow SQL, the first step is to get information about the execution plan for the statement. Once you know what the optimizer decided to do, you can then look more deeply into why the optimizer made its decision. Look for areas where the cardinality estimated by the optimizer doesn’t match the actual results. Also look for signs that the optimizer took some action to adjust for cardinality mis-matches (like creating histograms, or activating adaptive features).
Finding slow SQL statements
You can get a list of the slowest SQL statements by querying v$sql. The SQL below lists the statements with the highest cumulative elapsed time (the ROWNUM predicate caps the list at 19 rows; adjust it as needed). Replace the PARSING_SCHEMA_NAME value 'RM43USER' with your own schema.
column elapsed_time format 999,999,999,999
SELECT * FROM
  (SELECT sql_fulltext, sql_id, elapsed_time, child_number, disk_reads,
          executions, first_load_time, last_load_time
   FROM v$sql
   WHERE PARSING_SCHEMA_NAME = 'RM43USER'
   ORDER BY elapsed_time DESC)
WHERE ROWNUM < 20;
The output will be similar to this:
SQL_FULLTEXT                                                                     SQL_ID          ELAPSED_TIME     CHILD_NUMBER DISK_READS   EXECUTIONS   FIRST_LOAD_TIME     LAST_LOAD_TIME
-------------------------------------------------------------------------------- --------------- ---------------- ------------ ------------ ------------ ------------------- -------------------
select ITEM_ID from RM43USER.VVCMODEL_VERSION where CONFIGURATION_ITEM_ID = :1   84ynj21zhy82r      1,587,459,655            6      238,200      266,346 2018-11-02/00:10:05 2018-11-02/00:10:05
select ITEM_ID from RM43USER.VVCMODEL_VERSION where CONFIGURATION_ITEM_ID = :1   84ynj21zhy82r      1,565,295,293            8        1,079       60,489 2018-11-02/00:10:05 2018-11-02/00:10:05
select distinct t1.CONFIGURATION_ITEM_ID, t1.STORAGE, t1.VERSION, t1.CURRENT_COL brftpntuw2cwd        493,432,990            0        6,390       19,495 2018-11-01/10:00:57 2018-11-01/10:00:57
select distinct t1.CONFIGURATION_ITEM_ID, t1.STORAGE, t1.VERSION, t1.CURRENT_COL 6wp9vb17jcg7j        431,497,284            0          631       16,673 2018-11-01/09:38:50 2018-11-01/09:38:50
select /*+ FIRST_ROWS(1) */ distinct t1.VERSION from RM43USER.VVCMODEL_VERSION   gaftvsb00yjqa        272,086,087            0            0       10,552 2018-11-02/13:05:24 2018-11-02/13:05:24
select distinct t1.CONFIGURATION_ITEM_ID, t1.STORAGE, t1.VERSION, t1.CURRENT_COL baum8fwp9fpas        114,282,530            0      619,838          109 2018-11-01/09:37:52 2018-11-01/09:37:52
Insert Into RM43USER.REPOSITORY_ITEM_STATES (ITEM_UUID, KEY_UUID, MODIFIED, ITEM bjgqpffwbhus9        104,209,591            0       63,335       48,017 2018-11-01/09:38:56 2018-11-01/09:38:56
Select S.VAL_ENCODING, S.ITEM_VALUE From RM43USER.REPOSITORY_ITEM_STATES S Join  8ggwbwks137m3         84,135,464            0      117,909      162,801 2018-11-01/09:37:46 2018-11-01/09:37:46
Insert Into RM43USER.REPOSITORY_ITEM_CURRENTS (ITEM_UUID, STATE_UUID) VALUES (:1 4zwp3h4b4mt6g         55,854,907            0       25,418       26,160 2018-11-01/09:38:56 2018-11-01/09:38:56
INSERT INTO RM43USER.RESOURCE_RESOURCE(STATE_ID, ITEM_ID, CONTEXT_ID, MODIFIED,  6j112ssc6w8ty         45,780,861            0       36,647       18,047 2018-11-01/10:02:22 2018-11-01/10:02:22
You can use the values in the SQL_ID column to get more information about the execution plans for the statements, as described in the next section.
Explaining execution plans for cached statements
You can get the execution plan that Oracle used for a specific statement from the cursor cache. This is the best place to start, because the execution plan in the cursor cache accounts for the values of bind variables. If you use “EXPLAIN PLAN FOR”, you may get a different execution plan (see the discussion in the next section).
SELECT * FROM table(dbms_xplan.display_cursor('sql id', NULL, 'all +peeked_binds'));
To run this, you first need to find the SQL id for the statement of interest. You can do that by looking in v$sqlarea:
select sql_id, sql_text as sq from v$sqlarea where sql_text LIKE '%RESOURCE_RESOURCE%' order by sql_text;
The more specific you can make the filtering on sql_text, the easier it will be to find the SQL statement you want to analyze. Note that the sql id is a hash generated from the SQL statement, so it stays stable over the lifetime of the database (until upgrading Oracle, anyway).
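For example, using one of the sql ids from the v$sql listing earlier in this article:

SELECT * FROM table(dbms_xplan.display_cursor('84ynj21zhy82r', NULL, 'all +peeked_binds'));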
The output from dbms_xplan.display_cursor has the same general structure as the EXPLAIN PLAN output shown in the next section. With the +peeked_binds option, it also includes a “Peeked Binds” section showing the bind values the optimizer used when it built the plan.
There is one more thing you can do with dbms_xplan.display_cursor, but you can’t use bind variables for this. If you add a gather_plan_statistics hint to a SQL statement, you can get more information about the estimated and actual rows, like so:
select /*+ gather_plan_statistics */ t1.ITEM_ID, t1.STATE_ID, t1.JZ_DISCRIMINATOR
from RM43USER.RESOURCE_RESOURCE t1
where ((t1.MODIFIED >= TO_TIMESTAMP('2018-10-17 00:00:00','yyyy/mm/dd hh24:mi:ss'))
  and (t1.URI like CHR(1)||'storage/%' or t1.URI like CHR(1)||'baselines/%' or t1.URI like CHR(1)||'reindex/%'));

SELECT * FROM table(dbms_xplan.display_cursor(format=>'allstats last'));
The report now includes columns for both the estimated (E-Rows) and actual (A-Rows) cardinality of each step. A mismatch between the two is a warning sign that the optimizer doesn’t have good information.
For more information on how to read execution plans, refer to the Oracle documentation.
Using EXPLAIN PLAN FOR
You can also get execution plans by pre-pending “EXPLAIN PLAN FOR” to a SQL statement:
explain plan for
select t1.ITEM_ID, t1.STATE_ID, t1.JZ_DISCRIMINATOR
from RM43USER.RESOURCE_RESOURCE t1
where ((t1.MODIFIED >= TO_TIMESTAMP('2018-10-17 00:00:00','yyyy/mm/dd hh24:mi:ss'))
  and (t1.URI like CHR(1)||'storage/%' or t1.URI like CHR(1)||'baselines/%' or t1.URI like CHR(1)||'reindex/%'));

select PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
On my test database, the output is a standard plan report, showing the operations chosen, the optimizer’s row estimates, and the predicates applied at each step.
This is less useful if you have SQL with bind variables. You can get EXPLAIN output if you use SQL with bind variables, but there won’t be values for the optimizer to peek at, so the execution plan might not be the same as what Oracle will actually use. That’s why I recommend using dbms_xplan.display_cursor instead when analyzing SQL with bind variables.
Diagnosing histogram issues
If you suspect histograms are responsible for bad execution plans, there are a few ways to check on that.
First, you can get a list of all columns for which histograms have been generated using the following command (change RM43USER to your schema name).
SET LINESIZE 200
column table_name format a40
column column_name format a24
select table_name, column_name, histogram
from all_tab_col_statistics
where owner = 'RM43USER' and histogram != 'NONE'
order by table_name, column_name;
This will tell you which columns have a histogram, and what kind of histogram was generated for each column:
TABLE_NAME                               COLUMN_NAME              HISTOGRAM
---------------------------------------- ------------------------ ---------------
DIGNSTCTSTS_DGNSTC_TST_RSLT_TM           NODE_ID                  FREQUENCY
DIGNSTCTSTS_DGNSTC_TST_RSLT_TM           TEST_ID                  FREQUENCY
MARKERS_MARKER                           APPLICATION              FREQUENCY
MARKERS_MARKER                           CONTEXT_ID               FREQUENCY
MARKERS_MARKER                           PROJECT                  FREQUENCY
MARKERS_MARKER                           TYPE                     FREQUENCY
MARKERS_MARKER                           URI                      HYBRID
MARKERS_MARKER                           URI2                     FREQUENCY
MARKERS_MARKER                           USER_COL                 FREQUENCY
PROCESS_PROCESS_AREA                     ARCHIVED                 FREQUENCY
PROCESS_PROCESS_AREA                     JZ_DISCRIMINATOR         FREQUENCY
You can also get details on the distribution of values in the histogram by using the following SQL. Here, replace table_name and OWNER with values relevant to your environment. You may also want to add a predicate like column_name = 'yourcolumn' to limit this to a single column.
column density format 9.99999999
select column_name, num_distinct, num_nulls, histogram, num_buckets, density, sample_size
from all_tab_col_statistics
where table_name = 'RESOURCE_RESOURCE' and OWNER = 'RM43USER';
The output from this command would be similar to this. Watch out for columns that have a histogram, but the number of distinct values is much larger than the number of buckets.
COLUMN_NAME              NUM_DISTINCT  NUM_NULLS HISTOGRAM       NUM_BUCKETS     DENSITY SAMPLE_SIZE
------------------------ ------------ ---------- --------------- ----------- ----------- -----------
STATE_ID                     27228382          0 NONE                      1   .00000004    27228382
ITEM_ID                      27228382          0 NONE                      1   .00000004    27228382
CONTEXT_ID                         15          0 NONE                      1   .06666667    27228382
MODIFIED                     15215616          0 HYBRID                  254   .00000000        5565
MODIFIED_BY_ITEM_ID               229          0 NONE                      1   .00436681    27228382
URI                          27228382          0 HYBRID                  254   .00000000        5563
CONTENT_TYPE                       18          0 NONE                      1   .05555556    27228382
ARCHIVED                            2          0 FREQUENCY                 2   .00000002    27228382
CONTEXT_URI                        15          0 NONE                      1   .06666667    27228382
CREATOR_ITEM_ID                   229          0 NONE                      1   .00436681    27228382
CREATED                      14992384          0 NONE                      1   .00000007    27228382
MERGE_PREDECESSOR_URI               0   27228382 NONE                      0   .00000000
COPIED_FROM_URI                     0   27228382 NONE                      0   .00000000
CHANGE_ID                           0   27228382 NONE                      0   .00000000
COMPONENT_ID                      913     240870 NONE                      1   .00109529    26987512
JZ_DISCRIMINATOR                    2          0 FREQUENCY                 2   .00000002    27228382
APPLICATION_STORAGE_AREA            1   27228321 NONE                      1  1.00000000          61
_ATH_K
Oracle 12c introduces a new type of histogram called HYBRID. To look at the distribution of values for hybrid histograms, use a command like the following:
column endpoint_number format 999,999
column endpoint_actual_value format a40
select endpoint_number, endpoint_actual_value, endpoint_repeat_count
from all_tab_histograms
where owner = 'RM43USER' and table_name = 'RESOURCE_RESOURCE' and COLUMN_NAME = 'URI'
order by endpoint_number;
This output can be difficult to interpret. It is especially tricky for date/time values and for some text strings. If you need more information, use the Oracle documentation. This is a good reference for Oracle 12c, release 1: https://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL366
Identifying SQL plan directives
In Oracle 12c, a SQL plan directive can be created if there is a cardinality mis-match during SQL execution. This can complicate analysis for a couple of reasons:
- A SQL plan directive can be used by more than one statement since they are defined on query expressions. So, if a SQL plan directive is created during the execution of one SQL statement, it can have an effect on other statements. In other words, otherwise unrelated parts of an application can be coupled together by SQL plan directives.
- The optimizer can use dynamic sampling if it doesn’t have sufficient statistics for the directive. How it does dynamic sampling is impacted by the value of the parameter OPTIMIZER_DYNAMIC_SAMPLING. The default value of 2 may not be a good match for all applications (see the query after this list for a way to check the current setting).
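You can check the current setting with a quick query:

-- Check the current dynamic sampling level (and whether it was changed)
select name, value, isdefault from v$parameter where name = 'optimizer_dynamic_sampling';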
In any case, when investigating SQL plan directives, make a note of the OPTIMIZER_DYNAMIC_SAMPLING value. You can get a list of the SQL plan directives by running this command (replace RM43USER with the schema for your database).
select to_char(d.directive_id) dir_id, o.owner, o.object_name,
       o.subobject_name col_name, o.object_type, d.type, d.state, d.reason
from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
where d.DIRECTIVE_ID = o.DIRECTIVE_ID
  and o.owner in ('RM43USER')
order by 1,2,3,4,5;
Here’s what the output looks like. There are plan directives at both the table and the column level. If the STATE value is “USABLE”, then the plan directive is active. In this example, the plan directives are using dynamic sampling, which means that the parameter OPTIMIZER_DYNAMIC_SAMPLING is relevant. This example also has an active plan directive for the STORAGE column of the VVCMODEL_VERSION table – this directive could potentially be used by any SQL statement which queries that column.
DIR_ID                OWNER      OBJECT_NAME                    COL_NAME             OBJECT TYPE             STATE      REASON
--------------------- ---------- ------------------------------ -------------------- ------ ---------------- ---------- ----------------------------------------
10431885218504418103  RM43USER   VVCMODEL_VERSION               STORAGE              COLUMN DYNAMIC_SAMPLING USABLE     SINGLE TABLE CARDINALITY MISESTIMATE
10431885218504418103  RM43USER   VVCMODEL_VERSION                                    TABLE  DYNAMIC_SAMPLING USABLE     SINGLE TABLE CARDINALITY MISESTIMATE
12001892899629240141  RM43USER   PROCESS_PROCESS_AREA           ARCHIVED             COLUMN DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
12001892899629240141  RM43USER   PROCESS_PROCESS_AREA                                TABLE  DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
1307921164036279535   RM43USER   VVCMODEL_VERSION                                    TABLE  DYNAMIC_SAMPLING USABLE     JOIN CARDINALITY MISESTIMATE
1307921164036279535   RM43USER   VVCMODEL_VERSION                                    TABLE  DYNAMIC_SAMPLING USABLE     JOIN CARDINALITY MISESTIMATE
14702263942076599371  RM43USER   PROCESS_PROCESS_AREA                                TABLE  DYNAMIC_SAMPLING SUPERSEDED JOIN CARDINALITY MISESTIMATE
14702263942076599371  RM43USER   PROCESS_PROCESS_AREA                                TABLE  DYNAMIC_SAMPLING SUPERSEDED JOIN CARDINALITY MISESTIMATE
2455270644647083721   RM43USER   JAZZ_TABLE_MAP                                      TABLE  DYNAMIC_SAMPLING SUPERSEDED JOIN CARDINALITY MISESTIMATE
2936098963790875151   RM43USER   PROCESS_PROCESS_AREA                                TABLE  DYNAMIC_SAMPLING USABLE     JOIN CARDINALITY MISESTIMATE
2936098963790875151   RM43USER   PROCESS_PROCESS_AREA                                TABLE  DYNAMIC_SAMPLING USABLE     JOIN CARDINALITY MISESTIMATE
When you generate execution plans, the output will indicate if a SQL Plan directive is being used:
Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement
Adaptive cursor sharing
Adaptive cursor sharing allows a statement to use different execution plans based on the values of the bind variables. A cursor can be marked as bind-sensitive if its optimal plan may depend on the value of a bind variable. A bind-aware cursor is a bind-sensitive cursor that is eligible to use different plans for different bind values.
You can see which SQL statements are either bind-aware or bind-sensitive by executing the following command (use your own schema name below).
set PAGESIZE 100
set LINESIZE 200
COL BIND_AWARE FORMAT a10
COL SQL_TEXT FORMAT a60
COL CHILD# FORMAT 99999
COL EXEC FORMAT 9999
COL BUFF_GETS FORMAT 999999999
COL BIND_SENS FORMAT a9
COL SHARABLE FORMAT a9
SELECT SQL_TEXT, CHILD_NUMBER AS CHILD#, EXECUTIONS AS EXEC, BUFFER_GETS AS BUFF_GETS,
       IS_BIND_SENSITIVE AS BIND_SENS, IS_BIND_AWARE AS BIND_AWARE, IS_SHAREABLE AS SHARABLE
FROM V$SQL
WHERE (IS_BIND_SENSITIVE = 'Y' OR IS_BIND_AWARE = 'Y')
  AND PARSING_SCHEMA_NAME = 'RM43USER';
Example output below.
SQL_TEXT                                                     CHILD#  EXEC  BUFF_GETS BIND_SENS BIND_AWARE SHARABLE
------------------------------------------------------------ ------ ----- ---------- --------- ---------- ---------
select COUNT(*) as TST from RM43USER.RESOURCE_RESOURCE t1 WH      0     1       5199 Y         N          Y
ERE ((t1.MODIFIED >= :1 ) and (t1.MODIFIED <= :2 ) and (t1.U
RI like CHR(1)||'storage/%' or t1.URI like CHR(1)||'baseline
s/%' or t1.URI like CHR(1)||'reindex/%'))

/* SQL Analyze(322,1) */ select distinct t1.CONCEPT from RM4      0    10         40 Y         N          Y
3USER.VVCMODEL_VERSION t1 where (t1.STORAGE = :1 )

/* SQL Analyze(322,1) */ select distinct t1.CONCEPT from RM4      2    10         40 Y         N          Y
3USER.VVCMODEL_VERSION t1 where (t1.STORAGE = :1 )

select t1.ITEM_ID as TST, t1.STATE_ID, t1.JZ_DISCRIMINATOR f      0     1        124 Y         N          Y
rom RM43USER.RESOURCE_RESOURCE t1 WHERE ((t1.MODIFIED >= :1
) and (t1.MODIFIED <= :2 ) and (t1.URI like CHR(1)||'storage
/%' or t1.URI like CHR(1)||'baselines/%' or t1.URI like CHR(
1)||'reindex/%')) order by t1.MODIFIED asc, t1.URI asc

select t1.ITEM_ID as TST, t1.STATE_ID, t1.JZ_DISCRIMINATOR f      0     1       2903 Y         N          Y
rom RM43USER.RESOURCE_RESOURCE t1 WHERE ((t1.MODIFIED >= :1
) and (t1.URI like '%storage/%' or t1.URI like '%baselines/%
' or t1.URI like '%reindex/%')) order by t1.MODIFIED desc
Finding SQL statements with changed execution plans
If there isn’t any way to narrow things down, you can also use dba_hist_sqlstat to identify statements whose execution plans have changed. You can get the sql ids for these statements, and then get more information about them from v$sql or from dbms_xplan. The reasoning here is that if there is a change in behavior, there should be SQL statements for which the execution plan has changed, and that can point you in the right direction.
Change the value of parsing_schema_name below to match your own schema.
set lines 220 pages 9999 trimspool on
set numformat 999,999,999
column sql_id format a15
column plan_hash_value format 99999999999999
column min_snap format 999999
column max_snap format 999999
column min_avg_ela format 999,999,999,999,999
column avg_ela format 999,999,999,999,999
column ela_gain format 999,999,999,999,999
select sql_id,
       min(min_snap_id) min_snap,
       max(max_snap_id) max_snap,
       max(decode(rw_num,1,plan_hash_value)) plan_hash_value,
       max(decode(rw_num,1,avg_ela)) min_avg_ela,
       avg(avg_ela) avg_ela,
       avg(avg_ela) - max(decode(rw_num,1,avg_ela)) ela_gain,
       -- max(decode(rw_num,1,avg_buffer_gets)) min_avg_buf_gets,
       -- avg(avg_buffer_gets) avg_buf_gets,
       max(decode(rw_num,1,sum_exec))-1 min_exec,
       avg(sum_exec)-1 avg_exec
from (
  select sql_id, plan_hash_value, avg_buffer_gets, avg_ela, sum_exec,
         row_number() over (partition by sql_id order by avg_ela) rw_num,
         min_snap_id, max_snap_id
  from (
    select sql_id, plan_hash_value,
           sum(BUFFER_GETS_DELTA)/(sum(executions_delta)+1) avg_buffer_gets,
           sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_ela,
           sum(executions_delta)+1 sum_exec,
           min(snap_id) min_snap_id,
           max(snap_id) max_snap_id
    from dba_hist_sqlstat a
    where exists (
      select b.sql_id
      from dba_hist_sqlstat b, v$sql c
      where a.sql_id = b.sql_id
        and b.sql_id = c.sql_id
        and c.sql_text NOT LIKE '%SQL Analyze%'
        and a.plan_hash_value != b.plan_hash_value
        and b.plan_hash_value > 0
        and a.parsing_schema_name = 'RM43USER')
      and plan_hash_value > 0
    group by sql_id, plan_hash_value
    order by sql_id, avg_ela)
  order by sql_id, avg_ela)
group by sql_id
having max(decode(rw_num,1,sum_exec)) > 1
order by 7 desc;
The output from this statement looks like this:
SQL_ID          MIN_SNAP MAX_SNAP PLAN_HASH_VALUE          MIN_AVG_ELA              AVG_ELA             ELA_GAIN     MIN_EXEC     AVG_EXEC
--------------- -------- -------- --------------- -------------------- -------------------- -------------------- ------------ ------------
fdg7bs3gsrx6m        143      169      1846201654               28,892            2,406,860            2,377,968            1            8
dyap2f3b0hshd        117      283      1840836785              402,860              905,334              502,474            2            1
grhdd2q4uf434        117      283      2749462355            2,114,987            2,309,509              194,522            1            1
ctjn7p8nr0sya        265      289       324912578               33,810              143,071              109,261          204           53
4r6a185hqfckz        147      290      3129263991               34,651              130,930               96,278          213          281
0p3rhjjntvx7c        152      272      2801599874               51,909               65,276               13,368            4            4
84ynj21zhy82r        152      296      3766416807               17,783               22,418                4,635        2,232        1,370
3703yg5v136dg        283      289      2635859213                1,702                5,776                4,075          123          113
19bq4d08vp873        264      265      1783792595                4,619                7,437                2,817            7            4
2uafh8u7jpc46        264      264      1783792595                4,344                6,446                2,102            7            4
2k98g5zprfrn8        161      288      2904370115                3,262                3,947                  685          294        1,056
bsbcmk8j6ndjb        161      284      2082792978                  301                  382                   81        3,243        2,171
az33m61ym46y4        164      165      1171133436                5,671                5,671                    0          202          202
Google for “dba_hist_sqlstat” to find other ideas for analyzing the statistics history.
Understanding SQL profiles and SQL baselines
You can get a list of SQL profiles (and their status) using the following SQL.
COLUMN category FORMAT a10
COLUMN sql_text FORMAT a60
COLUMN name FORMAT a30
SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM DBA_SQL_PROFILES;
The output from this command will be similar to this:
NAME                           SQL_TEXT                                                     CATEGORY   STATUS
------------------------------ ------------------------------------------------------------ ---------- --------
SYS_SQLPROF_0166abfd50300000   select t1.ITEM_ID as TST, t1.STATE_ID, t1.JZ_DISCRIMINATOR f DEFAULT    ENABLED
                               rom RM43USER.RESOURC
You can get a list of SQL plan baselines by executing the following command:
COLUMN SQL_HANDLE FORMAT a30
COLUMN SQL_TEXT FORMAT a60
SELECT SQL_HANDLE, SQL_TEXT, ENABLED, ACCEPTED FROM DBA_SQL_PLAN_BASELINES;
The output will be similar to what’s below. You can see whether the baseline is enabled and/or accepted in this output.
SQL_HANDLE                     SQL_TEXT                                                     ENA ACC
------------------------------ ------------------------------------------------------------ --- ---
SQL_3b7e263b73afe227           SELECT changetimestamp, akey, avalue, aversion FROM KV_TABLE YES YES
                               _LDX WHERE aname=:1
Once you have a SQL handle, you can get information about the execution plans that are in the SQL plan baseline, using:
select * from table(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(sql_handle => 'your sql handle'));
The output will be similar to the below. In this example, the plan baseline only contains a single execution plan. Some plan baselines will contain multiple execution plans.
SQL> select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle => 'SQL_3b7e263b73afe227'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL handle: SQL_3b7e263b73afe227
SQL text: SELECT changetimestamp, akey, avalue, aversion FROM KV_TABLE_LDX
          WHERE aname=:1 AND akey LIKE :2 AND changetimestamp >= :3 ORDER BY
          changetimestamp DESC
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_3qzj67dtuzsj7edbc80e1         Plan id: 3988553953
Enabled: YES     Fixed: NO      Accepted: YES     Origin: MANUAL-SQLTUNE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 2964257357

------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                     |     1 |   499 |     9  (12)| 00:00:01 |
|   1 |  SORT ORDER BY                       |                     |     1 |   499 |     9  (12)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| KV_TABLE_LDX        |     1 |   499 |     8   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | SYS_PK_KV_TABLE_LDX |    18 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CHANGETIMESTAMP">=:3)
   3 - access("ANAME"=:1 AND "AKEY" LIKE :2)
       filter("AKEY" LIKE :2)

31 rows selected.
Optimizer tracing
If you can’t figure out why Oracle is choosing a given plan, there is a way to get more details about the execution plans considered (and rejected) by the optimizer. Execute the following procedure (replace sql_id with the sql id you found previously):
exec dbms_sqldiag.dump_trace(p_sql_id=>'sql_id',p_child_number=>0,p_component=>'Optimizer',p_file_id=>'MYSTMT');
This will generate a TRC file on the Oracle server, in the diagnostic trace directory. The trace file would have MYSTMT in its name, given the options above.
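If you're not sure where that directory is, the database can tell you (v$diag_info is available in Oracle 11g and later):

-- Locate the diagnostic trace directory
select value from v$diag_info where name = 'Diag Trace';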
Use this only if you are desperate. It will be hard to wade through the TRC file, but you will get information about each of the indexes Oracle considered and its cost estimates for each one. This may give you a clue as to what went wrong. It is usually all about the cardinality estimates. You will also see information about the values of the peeked bind variables in the trace file, so that can be helpful.
Here’s a portion of the TRC output (there is much more!). Some of the interesting bits:
- The optimizer is using SQL plan directives (SPD) to adjust cardinality.
- It used adaptive dynamic sampling to estimate the cost of using the index RESOURCE_RESOURCE_RESOURC_R_DX with the filter "T1"."URI" LIKE 'baselines/%'
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for RESOURCE_RESOURCE[T1]
  SPD: Directive valid: dirid = 9605947255896754416, state = 5, flags = 1, loc = 1 {C(99176)[4]}
  SPD: Return code in qosdDSDirSetup: EXISTS, estType = TABLE
  Table: RESOURCE_RESOURCE  Alias: T1
    Card: Original: 27228382.000000
  >> Single Tab Card adjusted from 0.000298 to 94.718125 due to adaptive dynamic sampling
    Rounded: 95  Computed: 94.718125  Non Adjusted: 0.000298
  Scan IO  Cost (Disk) =  328358.000000
  Scan CPU Cost (Disk) =  15441091384.480001
  Total Scan IO  Cost  =  328358.000000 (scan (Disk))
                       + 0.000000 (io filter eval) (= 0.000000 (per row) * 27228382.000000 (#rows))
                       =  328358.000000
  Total Scan CPU  Cost =  15441091384.480001 (scan (Disk))
                       + 1363870583.088047 (cpu filter eval) (= 50.090034 (per row) * 27228382.000000 (#rows))
                       =  16804961967.568048
  Access Path: TableScan
    Cost:  328781.217332  Resp: 328781.217332  Degree: 0
      Cost_io: 328358.000000  Cost_cpu: 16804961968
      Resp_io: 328358.000000  Resp_cpu: 16804961968
  kkofmx: index filter:"T1"."URI" LIKE 'baselines/%'

 ****** Costing Index RESOURCE_RESOURCE_RESOURC_R_DX
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_SCAN
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_FILTER
  Access Path: index (RangeScan)
    Index: RESOURCE_RESOURCE_RESOURC_R_DX
    resc_io: 6.000000  resc_cpu: 44079
    ix_sel: 3.6726e-08  ix_sel_with_filters: 3.6726e-08
    Cost: 6.001110  Resp: 6.001110  Degree: 1

 ****** Costing Index RESOURCE_RESOURCE_RSRC_MDFD_DX
  SPD: Directive valid: dirid = 9605947255896754416, state = 5, flags = 1, loc = 1 {C(99176)[4]}
  SPD: Return code in qosdDSDirSetup: EXISTS, estType = INDEX_SCAN
  SPD: Directive valid: dirid = 9605947255896754416, state = 5, flags = 1, loc = 1 {C(99176)[4]}
  SPD: Return code in qosdDSDirSetup: EXISTS, estType = INDEX_FILTER
  >> Index Card adjusted from 8121.072949 to 89.000000 due to adaptive dynamic sampling
  >> Index Card adjusted from 8121.072949 to 89.000000 due to adaptive dynamic sampling
Improving slow SQL statements
Once you have used the diagnostic techniques in the previous section to identify why an execution plan is inefficient, there are several things you can do in Oracle to improve matters. If you are facing an urgent performance issue, using Oracle's built-in tuning capabilities to correct the poorly performing statements will usually get you relief the fastest.
Use SQL profiles and SQL plan baselines
An excellent place to start is to use the SQL tuning advisor to find better execution plans. You should be able to resolve the majority of your problems this way.
The SQL tuning advisor can recommend a SQL profile (which can provide the optimizer with more information about the cardinality of the parts of the SQL statement). You can also use the tuning advisor to create a SQL plan baseline (which is a set of one or more accepted execution plans for a statement). Once you do this, the execution plan should stabilize. If your performance problem was introduced by a change in execution plans, then the SQL tuning advisor should be able to find a set of potential execution plans and allow you to pick out the best one.
An in-depth discussion of the tuning advisor is beyond the scope of this article. Refer to the Oracle documentation for details.
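That said, here is a minimal sketch of driving the advisor against a single cached statement. The sql id is just an example, and TASK_1234 is a placeholder for whatever task name CREATE_TUNING_TASK returns:

DECLARE
  l_task_name VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for one cached statement
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '84ynj21zhy82r');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
  DBMS_OUTPUT.PUT_LINE('Task name: ' || l_task_name);
END;
/
-- Review the findings, then accept a recommended profile if it looks good
-- (replace TASK_1234 with the task name printed above)
select dbms_sqltune.report_tuning_task('TASK_1234') from dual;
exec dbms_sqltune.accept_sql_profile(task_name => 'TASK_1234', name => 'my_sql_profile')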
Get SQL profile details
When Oracle generates a SQL profile, it stores a set of optimizer hints which can be used to improve execution plans. You can get the details on the hints by running the following SQL. This will list the details for all SQL profiles – if you know the name of one specific profile, you can change the SQL below to select just that one profile.
set linesize 999
set pagesize 100
column profile_name format a30
column created format a30
select created, profile_name, sql_text,
       TRIM(LEADING ' ' FROM dbms_lob.substr(comp_data,4000,1))
from dba_sql_profiles p, dbmshsxp_sql_profile_attr a
where p.name = a.profile_name
order by created desc, name;
The optimizer hints in the comp_data column will look like this:
<outline_data>
  <hint><![CDATA[OPT_ESTIMATE(@"SEL$58A6D7F6", TABLE, "S"@"SEL$1", SCALE_ROWS=7.116686581)]]></hint>
  <hint><![CDATA[OPT_ESTIMATE(@"SEL$58A6D7F6", JOIN, ("T"@"SEL$1", "S"@"SEL$1"), SCALE_ROWS=0.008265578193)]]></hint>
  <hint><![CDATA[ALL_ROWS]]></hint>
  <hint><![CDATA[OPTIMIZER_FEATURES_ENABLE(default)]]></hint>
</outline_data>
For more details, refer to this description of the OPT_ESTIMATE hint.
If you can find the SQL tuning task that recommended the profile, you can get the detailed report (including recommended execution plans) by running:
select dbms_sqltune.report_tuning_task('task_name') FROM dual;
Disable histograms
If a histogram is leading to the selection of a poor execution plan, you can disable the collection of histogram statistics for a column.
Histograms are not useful in the following situations, and yet, Oracle may still decide to create the histograms if it can’t resolve cardinality mis-matches.
- TIMESTAMP columns
- Columns which contain only unique values (number of distinct values = number of rows)
- Columns where the number of distinct values is much greater than the number of histogram bins
- Hybrid histograms where the repeat count of all bins is 1
- VARCHAR columns where the first 32/64 characters of the column values are not unique (mostly an Oracle 11g issue)
Remove existing histogram statistics by running:
exec dbms_stats.delete_column_stats(ownname=>'RM', tabname=>'VVCMODEL_VERSION', colname=>'CONCEPT', col_stat_type=>'HISTOGRAM');
Configure the table so that it will not collect histogram statistics in the future:
exec dbms_stats.set_table_prefs('RM', 'VVCMODEL_VERSION', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 CONCEPT');
Update statistics for the table:
exec dbms_stats.gather_table_stats('RM','VVCMODEL_VERSION');
Flush bad execution plans
Oracle may generate several different execution plans for a single SQL statement, and can switch between them. These execution plans can persist in the cursor cache for quite a while, so if you do happen to end up with a bad execution plan, it can potentially keep being selected. Oracle won’t parse the SQL again if it already has an execution plan.
You can explicitly flush bad execution plans from the cursor cache by following this procedure. You’ll need to get the SQL id first. Once you have it, execute the following command to find the address and hash value for the execution plan:
column hash_value format 9999999999
select address, hash_value from v$sqlarea where sql_id = 'grhdd2q4uf434';
That output should be something like the following. If there are multiple execution plans for a statement, you’ll see an entry for each one.
ADDRESS          HASH_VALUE
---------------- -----------
00000005078459A8  2309427300
To purge the execution plan from the cursor cache, execute this command (using the address and hash value found previously). You’ll be able to find the hash value for the poor execution plan by using the techniques described earlier in this article.
exec DBMS_SHARED_POOL.PURGE ('00000005078459A8, 2309427300', 'C');
Tune Oracle to reduce execution plan instability
The SQL tuning advisor can run as a maintenance task, and you can configure it to automatically accept SQL profiles. If this creates execution plan instability, you can disable this feature and manage SQL profiles manually.
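If you do want to turn off automatic acceptance, a sketch like the following should work, assuming Oracle 11gR2 or later (where the DBMS_AUTO_SQLTUNE package is available):

-- Stop the automatic SQL tuning task from accepting profiles on its own
exec DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER('ACCEPT_SQL_PROFILES', 'FALSE')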
If you find that SQL plan directives are being used, or you see that other adaptive optimization features are in use, you may want to disable them if they are leading to poor execution plans.
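The parameter involved depends on your release. The sketch below assumes Oracle 12c Release 1, where a single parameter controls the adaptive features; in 12c Release 2 this was split into OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS:

-- 12c Release 1 only; check your release before using this
ALTER SYSTEM SET optimizer_adaptive_features = FALSE;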
If you want to continue to use the adaptive features, then you should adjust the value of the OPTIMIZER_DYNAMIC_SAMPLING parameter. Use a value of 4 at minimum, since that is better for SQL statements with complex predicates. You may even want to use a value of 11 (which tells the optimizer to use adaptive dynamic sampling).
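For example:

ALTER SYSTEM SET optimizer_dynamic_sampling = 4;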
Watch for any of the following statements in the execution plan reports, since they are indications that adaptive features are in use.
- “this is an adaptive plan”
- “this is an adaptive plan (rows marked ‘-’ are inactive)”
- “statistics feedback used for this statement”
- “performance feedback used for this statement”
- “SQL plan directives used for this statement”
Disable the features with care, since the adaptive features are trying to account for cardinality differences, which can be a good thing. Except when it’s a bad thing!
You can also set OPTIMIZER_ADAPTIVE_REPORTING_ONLY to TRUE. In that mode, the optimizer still gathers the information needed for automatic reoptimization and adaptive plans, and reports what it would have done, but the plans themselves are not changed.