How do I learn to tune SQL?
I'm regularly asked, "How do I tune a database?" I usually suggest that "How do I tune SQL?" is a better question, since the vast majority of database performance problems are down to poorly written SQL, or good SQL written against a badly designed data model.
There are whole books written about SQL tuning, so a brief article is going to be flawed, but I wanted to put down some ideas and thought processes that might help anyone new to this subject. This article will no doubt evolve over time, but I will try to keep it really light.
If you are experienced in SQL tuning, you may find some of the topics covered here an extreme over-simplification. That is intentional. Remember, it's for beginners.
- Identify Resource Intensive SQL
- Context and Realistic Goals
- Quick Wins
- Execution Plans
- Cardinality is King
- Don't Do It
- Filter Early
- Efficient Function Calls
- Books and Videos
Identify Resource Intensive SQL
You often know exactly which SQL statement to tune, since you are working on it already, or a user has reported performance problems with a specific report or piece of application functionality. If that's the case, no problem.
In some situations you are presented with a badly performing system that you know nothing about. There are several ways you can identify resource intensive SQL, including the following.
- If you have the appropriate EE options licensed and you have Cloud Control, you can use the "Top Activity" page for the database. This displays the current activity, along with the top SQL statements that are causing that activity.
- If you have the appropriate EE options licensed, check out the ADDM Reports for the time period where system performance was bad. The ADDM reports will include SQL and PL/SQL calls that are using a lot of resources.
- If you have the appropriate EE options licensed, check out the AWR Reports for the time period where system performance was bad. The AWR reports contain a section called "SQL Statistics", with multiple sub-sections called "SQL ordered by <metric-name>". Pick the sub-section appropriate to the issues you are seeing. I give an example of the thought process here.
- If you don't have the relevant licensed options for the ADDM and AWR reports, consider using statspack. Like the AWR reports, it has lists of resource intensive SQL and PL/SQL.
- There are other system performance tools available. Just make sure using them does not break your current licensing, as several rely on database functionality that is part of separately licensed options.
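If none of those tools are available, querying V$SQL directly can give a rough list of candidates without any extra licensing. This is a minimal sketch; the ordering and the limit of 10 are arbitrary choices, not recommendations.

```sql
-- Top 10 statements by total elapsed time, from the shared pool.
-- V$SQL only covers statements still cached, so treat this as a rough guide.
SELECT *
FROM   (SELECT sql_id,
               executions,
               ROUND(elapsed_time / 1000000) AS elapsed_secs,
               buffer_gets,
               disk_reads,
               SUBSTR(sql_text, 1, 60) AS sql_text
        FROM   v$sql
        ORDER BY elapsed_time DESC)
WHERE  ROWNUM <= 10;
```

Order by whichever metric matches the symptoms you are seeing, such as CPU_TIME, BUFFER_GETS or DISK_READS rather than ELAPSED_TIME.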
Once you've found a suspect statement, you can start trying to tune it.
Context and Realistic Goals
When you are looking at resource intensive SQL you need to take a step back and put it into context. Here are two extremes to consider.
- The SQL is extremely efficient, but runs a lot. You are probably not going to make the SQL any faster, so you have to focus on ways of making it run less often, not query tuning.
- The SQL doesn't run often, but when it does it is very slow. Here you will focus on traditional query tuning.
Before you launch into tuning a SQL statement, think a little about the context and your aims for tuning the statement. You have to be realistic about what you can achieve. You should be asking yourself questions like these.
- How much data do we expect this SQL statement to return? You should have some idea about what you are trying to achieve, and therefore what the likely results will be. If you are expecting to return 1 billion rows, it's not going to be quick!
- Does the data returned by the SQL statement represent a large proportion of the data in the tables being referenced? If it does, then you are likely to see more full table scans and hash joins. If it is likely to return a small proportion of the data, then we might expect to see more index usage and nested loop joins.
- How many times a day would we expect this statement to run? If you understand the functional area, you should be able to guess at how often this particular functionality is used.
We are not necessarily looking for hard facts here, just some context to understand what we have and what we are likely to achieve. If you are a DBA, you may look at these points and think they sound a lot like development. They do, because in my opinion query tuning is more of a development task than a DBA task. If you are not well versed in SQL and application development, I think you will find query tuning a lot harder. Others may disagree.
Now that we understand some of the context and have some realistic aims, we can move forward.
Quick Wins
Assuming you have the correct licenses, tools like the SQL Tuning Advisor and the SQL Access Advisor can sometimes give you some quick wins. Nearly every statement I'm asked to tune gets checked by these advisors. Sometimes their suggestions are great. Sometimes they are rubbish. Sometimes they are rubbish, but they give you a clue about the real problem. For the sake of a few minutes, it's worth trying them to see what you get. Please don't mindlessly apply their recommendations, but if you've paid for them and they help, great!
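A run of the SQL Tuning Advisor against a single statement can be sketched as follows. The SQL_ID and task name are hypothetical placeholders; substitute the SQL_ID of the statement you identified earlier.

```sql
-- Create and execute a tuning task for a specific SQL_ID.
-- '59v4zh1ac3v2a' and 'my_tuning_task' are placeholder values.
DECLARE
  l_task_name VARCHAR2(128);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                   sql_id    => '59v4zh1ac3v2a',
                   task_name => 'my_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_tuning_task');
END;
/

-- Display the findings and recommendations.
SET LONG 1000000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_tuning_task') FROM dual;
```

Remember this requires the Tuning Pack license, as mentioned above.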
Execution Plans
With SQL tuning it's all about the execution plan. There are several ways to get execution plans, but the best option is to use the DBMS_XPLAN package to display them. It's important to look at the plan that was actually used (DISPLAY_CURSOR), rather than the estimated plan (DISPLAY), if possible.
If you have the appropriate licensing, the Real-Time SQL Monitoring feature is a great way to see the execution plan as the statement is actually running, but you should be comfortable with the basic approach too.
There are other tools too, like SQL Developer and TOAD etc. The important point is to make sure you are looking at the actual execution plan used for the last run, rather than an estimate of the plan.
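To make the distinction between estimated and actual plans concrete, here is a minimal sketch of both approaches. The query itself is a hypothetical example.

```sql
-- Estimated plan: no execution required, so it may not match reality.
EXPLAIN PLAN FOR
SELECT * FROM orders WHERE status = 'ACTIVE';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual plan: run the statement, then display the plan used by the
-- last statement executed in this session.
SELECT * FROM orders WHERE status = 'ACTIVE';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
```

The estimated plan can differ from the actual plan for reasons such as bind variable peeking and adaptive features, which is why the actual plan is preferred.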
You can find some basic information about reading execution plans here.
Cardinality is King
When we discuss cardinality in this context, what we are actually talking about is the number of rows processed by each operation in the execution plan. The optimizer uses database statistics to estimate the cost of each operation and ultimately the whole plan. It then compares several plans and picks the plan with the lowest cost. The cardinality estimate for each operation is a big factor in the cost of that operation.
When reading an execution plan it is really important to know the estimated and actual cardinalities for each operation. The GATHER_PLAN_STATISTICS hint can be used to display this information. If you see a big difference between the estimated and actual number of rows for an operation, it means the optimizer estimated the cardinality badly, so it is quite likely it made some bad choices in the execution plan. The types of bad choices could include the following.
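A sketch of this in practice, against a hypothetical table: run the statement with the hint, then display the plan with runtime statistics so the "E-Rows" (estimated) and "A-Rows" (actual) columns can be compared for each operation.

```sql
-- The hint gathers row counts and timings for each plan operation.
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   orders
WHERE  status = 'ACTIVE';

-- 'ALLSTATS LAST' includes the E-Rows and A-Rows columns for the last execution.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT => 'ALLSTATS LAST'));
```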
- The optimizer thought a filter would eliminate most of the rows from a table, so it decided to use an index to access the data. At runtime it turned out the selectivity of the filter was not correct, so most of the rows were returned from the table, in which case a full table scan would have been more efficient.
- The optimizer thought a filter would not eliminate many rows, so it decided to use a full table scan. At runtime it turned out the filter was very selective, so an index access would have been much more efficient.
- The optimizer thought it was joining together two small rowsets, so decided to perform a nested loop join. At runtime it turned out the rowsets were very big, so a hash join would have been more efficient.
Every new version of the database tries to reduce the chances of these bad decisions happening, or includes ways for the optimizer to learn from its mistakes to improve future decisions. Oracle 12c includes a variety of adaptive query optimizations to minimise the impact of bad cardinality estimates.
Bad cardinality estimates can be caused by a number of factors including the following.
- The data has changed significantly and the database statistics are no longer representative of the data. Statistics don't necessarily have to be new, but they should be representative.
- There is skew in the data, which the optimizer doesn't know about. As a result it expects an even distribution of distinct values in a column, so the selectivity of filters is estimated badly. The presence of histograms on skewed data can help the optimizer make better estimates.
- There is a relationship between multiple columns that the optimizer doesn't understand. The use of column group statistics can help the optimizer understand this relationship and make better estimates.
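As a sketch of the last two fixes, assuming a hypothetical ORDERS table with a skewed STATUS column and related COUNTRY and CITY columns:

```sql
-- Gather a histogram on the skewed column.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS', method_opt => 'FOR COLUMNS status SIZE 254');

-- Create a column group, then re-gather statistics so it is populated.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'ORDERS', '(country, city)') FROM dual;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS');
```

In recent versions the database can also identify useful histograms and column groups itself, but it helps to know what is happening under the hood.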
Ultimately, the better the cardinality estimates, the more likely the optimizer will make the right choice. If you see big differences between the estimated and actual cardinalities of an operation, that's probably something to investigate further.
Don't Do It
You would be surprised how many databases spend a lot of time running SQL that is not necessary. If you can identify the source of this and remove it, it can result in substantial improvements to system performance. The types of things you might see include the following.
- Scheduled reports that nobody uses anymore. Stop running them.
- Reports on static or semi-static data being run on demand for multiple people, rather than scheduled to run once and presented on demand. Schedule reports and cache them.
- Reports containing thousands of rows, when only the first page is ever read. Only return the data people care about.
- Dynamic dashboards containing complex aggregations that refresh too frequently. Pick a refresh rate that is more acceptable.
- Multiple users running the same dashboards that require complex aggregations. Consider performing the aggregations using one or more materialized views, so the work is done once per refresh cycle, with all user dashboards querying the pre-computed data.
- Application servers requesting huge data sets and filtering or joining the data on the middle tier, rather than filtering or joining the data on the database.
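For example, rather than returning thousands of rows when only the first page is ever read, limit the result set in the query itself. This sketch uses the 12c row limiting clause against a hypothetical ORDERS table.

```sql
-- Return only the first page of 25 rows.
SELECT order_id, created_date, total_value
FROM   orders
ORDER BY created_date DESC
FETCH FIRST 25 ROWS ONLY;
```

Subsequent pages can be retrieved with the OFFSET clause, although deep paging has its own costs.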
Filter Early
If you've not already read the reading execution plans link, you probably should before you continue, as I will make some reference to the terminology explained there.
You will sometimes hear the term "filter early" when discussing execution plans. When the optimizer is trying to determine the driving set, it will often pick the smallest set available. If we have the option to filter data out of one of the result sets, we can make it smaller and therefore make it more likely to be picked as the driving set. Often, the quicker we can eliminate rows from the query, the more efficient the statement will be, hence filter early. If we have filter predicates that can be applied against multiple tables, it might make sense to apply them against a single table, making the driving set even smaller.
Imagine a situation where we are looking for active orders that were created in the last seven days. If both the orders and order_lines tables include a created_date column, and we assume the order and order lines will always be created on the same day, we could do this.
SELECT ...
FROM   orders o
       JOIN order_lines ol ON ol.order_id = o.id
WHERE  o.status = 'ACTIVE'
AND    ol.created_date >= TRUNC(SYSDATE) - 7;
We might find it is advantageous to move the created_date filter onto the orders table, allowing us to reduce the number of rows returned from this table even more, making it a more attractive and smaller driving set.
SELECT ...
FROM   orders o
       JOIN order_lines ol ON ol.order_id = o.id
WHERE  o.status = 'ACTIVE'
AND    o.created_date >= TRUNC(SYSDATE) - 7;
You may also hear the term "filter early" used when discussing the data sent to application servers. As discussed earlier, sending large amounts of data to an application server, or client application, only for it to filter it and throw most of the data away represents a big waste of database effort.
Efficient Function Calls
You often see PL/SQL function calls in SQL statements, which can cause some performance nightmares. The subject of efficient function calls in SQL is discussed here.
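As one common example, a PL/SQL function called for every row returned can sometimes be helped by scalar subquery caching, which allows repeated calls with the same input to be satisfied from a cache. The function and table here are hypothetical.

```sql
-- The function is called once per row returned.
SELECT o.id, get_customer_name(o.customer_id) AS customer_name
FROM   orders o;

-- Wrapping the call in a scalar subquery allows the result to be cached,
-- potentially reducing the number of calls when customer_id values repeat.
SELECT o.id,
       (SELECT get_customer_name(o.customer_id) FROM dual) AS customer_name
FROM   orders o;
```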
Books and Videos
There are lots of books, training courses and videos related to performance tuning, but people often overlook the free and obvious. A good place to start is to check out these free resources.
- 2 Day + Performance Tuning Guide
- Database Performance Tuning Guide
- Oracle Database 12c Release 2 Real-World Performance Videos
For more information see:
- Automatic Database Diagnostic Monitor (ADDM)
- Automatic Workload Repository (AWR)
- Automatic SQL Tuning : SQL Tuning Advisor
- SQL Access Advisor
- DBMS_XPLAN : Display Oracle Execution Plans
- Real-Time SQL Monitoring
- Adaptive Query Optimization in Oracle Database 12c (12.1 and 12.2)
- Oracle Cost-Based Optimizer (CBO) and Database Statistics (DBMS_STATS)
- Histogram Enhancements in Oracle Database 12c Release 1 (12.1)
- Extended Statistics Enhancements in Oracle Database 11g Release 2
- Efficient Function Calls From SQL
Hope this helps. Regards Tim...