Day 5 was a presentations day for me.
I tried to make as many notes as I could, but you will see the quality and accuracy of the notes tail off as the day went along…
Tom’s Top 12 Things About the Latest Generation of Database Technology
My bullets don’t quite match Tom’s, which is why I have more than 12 things listed.
- Functions (and procedures used within those functions) can be defined in the WITH clause, giving a performance boost compared to regular unit definition. A pragma allows regular standalone functions to benefit from the same performance benefits.
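A minimal sketch of the inline function syntax (the function name and values are made up for illustration):

```sql
-- 12c: a function defined directly in the WITH clause of the query.
-- In SQL*Plus the statement is typically terminated with "/" rather than ";".
WITH
  FUNCTION double_it (p_num NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_num * 2;
  END;
SELECT double_it(21) AS answer
FROM   dual
/
```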
- The default value of a column can now use sequence.NEXTVAL.
- Identity columns: multiple levels of control over how they are used. Can use simple or more complex syntax.
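Sketches of both features (table and sequence names are made up):

```sql
-- Column default drawn from a sequence.
CREATE SEQUENCE t1_seq;

CREATE TABLE t1 (
  id  NUMBER DEFAULT t1_seq.NEXTVAL,
  val VARCHAR2(10)
);

-- Identity column: simple form.
CREATE TABLE t2 (
  id  NUMBER GENERATED ALWAYS AS IDENTITY,
  val VARCHAR2(10)
);

-- Identity column: more complex form with explicit sequence options.
CREATE TABLE t3 (
  id  NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 100 INCREMENT BY 10),
  val VARCHAR2(10)
);
```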
- Metadata-only defaults for optional columns. In previous versions this was possible only for mandatory columns.
- VARCHAR2(32767) in the database. Values of 4K or less are stored inline. Larger values are stored out of line, similar to a LOB, but simpler. Not enabled by default.
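As I understand it, this is switched on via the MAX_STRING_SIZE parameter. A rough sketch (the full procedure also involves restarting in UPGRADE mode and running a script, so check the docs before trying it):

```sql
-- Enable extended data types (simplified; requires extra upgrade-mode steps).
ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;

-- Once enabled, extended VARCHAR2 columns can be declared.
CREATE TABLE big_strings (
  txt VARCHAR2(32767)
);
```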
- Top-N queries now use the row limiting clause, e.g. “OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY”. Similar to MySQL syntax.
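For example, paging through a sorted result set (table and column names are made up):

```sql
-- Rows 11-20 of the result set, ordered by salary.
SELECT empno, ename, sal
FROM   emp
ORDER  BY sal DESC
OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY;
```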
- Row pattern matching. Quite a lot of new analytic syntax here.
- Partitioning Improvements:
– Asynchronous Global Index maintenance for DROP and TRUNCATE. Command returns instantly, but index cleanup happens later.
– Cascade for TRUNCATE and EXCHANGE partition.
– Multiple partition operations in a single DDL statement.
– Online move of a partition (without DBMS_REDEFINITION).
– Interval + Reference Partitioning.
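A couple of the partitioning improvements sketched as DDL (table, partition, and tablespace names are made up):

```sql
-- Online move of a partition, keeping indexes usable throughout.
ALTER TABLE orders MOVE PARTITION orders_q1
  TABLESPACE cold_ts
  UPDATE INDEXES ONLINE;

-- Truncate a partition and cascade down to reference-partitioned children.
ALTER TABLE orders TRUNCATE PARTITION orders_q1 CASCADE;
```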
- Adaptive Execution Plans:
– If the optimizer notices the actual cardinality is not what was expected, meaning the current plan is not optimal, it can alter subsequent plan operations to allow for the differences between the estimated and actual cardinalities.
– The stats gathered during this process are persisted as Adaptive Statistics, so future decisions can benefit from this.
– You will see STATISTICS COLLECTOR steps in the SQL trace. This can make the trace harder to read, as it can contain information about both the expected plan and the actual plan.
- Enhanced Statistics:
– Some dynamic sampling operations are persistent, so they are not lost when the SQL is aged out.
– Hybrid histograms. When the number of distinct values is greater than 254, “almost popular” values can get “lost” in the mix. A single bucket can now store the popularity of that value, effectively increasing the number of buckets without actually increasing it.
– Possibly the maximum number of buckets can be increased based on a parameter. (demo grounds)
– Statistics gathered during loads. CTAS and INSERT … SELECT automatically compute stats.
– Global temporary tables can have “session private statistics”. Previously, we had one-size-fits-all.
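If I remember correctly, session-private statistics are controlled via a DBMS_STATS table preference; a sketch, assuming the preference name is right (the table name is made up):

```sql
-- Use session-private statistics for a global temporary table.
EXEC DBMS_STATS.set_table_prefs(NULL, 'MY_GTT', 'GLOBAL_TEMP_TABLE_STATS', 'SESSION');

-- Stats gathered in this session are now visible only to this session.
EXEC DBMS_STATS.gather_table_stats(NULL, 'MY_GTT');
```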
- Temporary Undo (ALTER SESSION SET temp_undo_enabled=true):
– UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace.
– Reduces contents of regular UNDO, allowing better flashback operations.
– Reduces the size of redo associated with recovering the regular UNDO tablespace.
- Data Optimization:
– Information Lifecycle Management: Uses heat map. Colder data is compressed and moved to lower tier storage. Controlled by declarative DDL policy.
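A sketch of what the declarative policy DDL looks like, assuming heat map tracking is enabled first (table name made up):

```sql
-- Enable heat map tracking at the instance level.
ALTER SYSTEM SET heat_map = ON;

-- Declarative ILM policy: compress rows untouched for 30 days.
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 30 DAYS OF NO MODIFICATION;
```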
- Transaction Guard:
– If a failure happens, your application may not know the actual status of a transaction. If it was successful, issuing it again could cause a duplicate transaction.
– In these cases, you can mark a transaction with an “unknown” state (as far as the application is concerned) as failed, so even though it may have been successful, it will never be considered or recovered. You’ve guaranteed the outcome.
- Pluggable database:
– Oracle provided metadata and data is kept in the container database (CDB).
– User metadata and data is kept in the pluggable database (PDB).
– One container can have multiple pluggable databases.
– No namespace clashes. Allows public synonyms and database links at the PDB level, rather than the CDB level.
– Cloning is quick and simple as only user metadata and data needs to be cloned.
– Upgrades have the potential to be as simple as unplugging from the old version (12cR1) and plugging into the new version (12cR2).
– Total resource usage is reduced on lower-use databases.
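Since only user metadata and data are copied, a clone can be as simple as this sketch (PDB names made up; this assumes Oracle Managed Files, otherwise a FILE_NAME_CONVERT clause is needed, and in 12cR1 the source PDB typically has to be open read-only):

```sql
-- Clone an existing PDB within the same container database.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;

-- Open the new PDB for use.
ALTER PLUGGABLE DATABASE pdb2 OPEN;
```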
Oracle Database Optimizer: An Insider’s View of How the Optimizer Works
Oracle Database 12c is the first step on the way to making an adaptive, or self-learning, optimizer.
Alternative subplans are precomputed and stored in the cursor, so no new hard parsing will be needed as part of the adaption of an already executing plan. Statistics collectors are included in the plan execution. If the collectors cross a threshold, the plan might switch during execution from a nested loops to a hash join.
You can see information about the adaptive actions that have occurred using the DBMS_XPLAN package, with the format of “+all_dyn_plan +adaptive”. If a plan has been adapted, you will see it indicated in the v$sql.is_resolved_dynamic_plan column.
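A sketch of checking a plan for adaptive behaviour (the format string is as given in the session; I haven't verified it against the shipping release):

```sql
-- Display the last executed plan, including adaptive plan details.
SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor(format => '+adaptive'));
```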
If this functionality scares you, you can turn it off using the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter. The same work is done, but no actual adaptive action is taken.
During parallel execution, collectors can influence the data distribution method (e.g. hash vs. broadcast distribution). Shown in the plan as HYBRID HASH operations.
Dynamic statistics replace dynamic sampling. The resulting stats are cached as SHARED DYNAMIC STATS specific for the statement, including the bind values. This information is used for any session using the same statement.
Cardinality feedback can be used to re-optimize subsequent executions. Join statistics are monitored. Works with adaptive cursor sharing. Persisted on disk. The new column v$sql.is_reoptimizable shows that a subsequent run will take this into consideration. Collectors are kept, even if the SQL statement is killed part way through. The plan shows that cardinality feedback is used.
SQL Plan Directives are based on a SQL phrase (a specific join) rather than the whole statement. Cached in the directive cache, but persisted in the SYSAUX tablespace. Managed using the DBMS_SPD package.
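A sketch of poking at the directives with DBMS_SPD and the dictionary view (column list is from memory):

```sql
-- Flush directives from the directive cache down to SYSAUX.
EXEC DBMS_SPD.flush_sql_plan_directive;

-- Inspect the persisted directives.
SELECT directive_id, type, state, reason
FROM   dba_sql_plan_directives;
```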
Information gathered by the optimizer may prompt automatic creation of column groups, so the next time stats are gathered, the extended stats will be gathered too.
What’s New in Security in the Latest Generation of Database Technology
- Privilege Analysis:
– Track which direct privileges, and which privileges granted via roles, are actually being used, so you can determine the least privileges needed.
– Monitoring controlled using DBMS_PRIVILEGE_CAPTURE.
– Report what is used and what is not used.
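A sketch of a database-wide capture (the capture name is made up):

```sql
-- Create and enable a database-wide privilege usage capture.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.create_capture(
    name => 'all_privs_capture',
    type => DBMS_PRIVILEGE_CAPTURE.g_database);
  DBMS_PRIVILEGE_CAPTURE.enable_capture('all_privs_capture');
END;
/

-- After a representative workload, stop the capture and build the report.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.disable_capture('all_privs_capture');
  DBMS_PRIVILEGE_CAPTURE.generate_result('all_privs_capture');
END;
/
```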
- Data Redaction: A variation on the column masking of VPD, but it doesn’t just blank the value, and it still allows queries against the column in the WHERE clause.
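A sketch of a full redaction policy (schema, table, and policy names are made up):

```sql
-- Fully redact the SALARY column for users matching the expression.
BEGIN
  DBMS_REDACT.add_policy(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    column_name   => 'SALARY',
    policy_name   => 'redact_salary',
    function_type => DBMS_REDACT.full,
    expression    => '1=1');
END;
/
```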
- Enhanced Security of Audit Trail:
– Single unified audit trail.
– Extension of the audit management package.
– Multiple audit management privileges.
- Encryption Enhancements:
– Allow SQL creation and management of wallets, rather than command line utilities. Allows easier remote management.
– Export and import wallets/keys between pluggable databases.
– Storage of wallets in ASM.
– Much more…
- Code-Based Access Control (CBAC):
– A PL/SQL unit can have roles granted to it.
– When the unit runs, any dynamic SQL it executes can use the privileges granted via the role.
– Doesn’t affect compile time, so focussing very much on dynamic SQL.
– Useful with invoker rights, since the PL/SQL can now run with the user’s privileges plus the roles explicitly granted to the unit.
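A sketch of the grant (role, schema, and procedure names are made up):

```sql
-- Create a role carrying the privileges the unit's dynamic SQL needs.
CREATE ROLE reporting_role;
GRANT SELECT ON hr.employees TO reporting_role;

-- 12c: grant the role directly to a PL/SQL unit.
GRANT reporting_role TO PROCEDURE report_owner.run_report;
```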
- Invoker Rights:
– INHERITED RIGHTS: Control accidental privilege escalation when a privileged user calls an invoker rights unit containing malicious code.
– Invoker rights for views.
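As I understand it, the escalation control surfaces as an INHERIT PRIVILEGES grant; a sketch (user names are made up):

```sql
-- Allow invoker rights units owned by APP_OWNER to run with
-- the privileges of the invoking user ALICE.
GRANT INHERIT PRIVILEGES ON USER alice TO app_owner;
```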
- Separation of Duties:
– SYSDBA – God
– SYSOPER – More limited than SYSDBA, but still very powerful.
– SYSBACKUP – Just enough to do a backup.
– SYSDG – Just enough for data guard administration.
– SYSKM – Just enough to perform basic key management tasks.
– Roles for audit management.
The wrap up party was probably the highlight of the week, thanks to The Hives. They were freakin’ awesome. The front man is a scream. Very funny when he interacts with the audience. Makes me want to be in a band again!
I’ll follow this series up with a wrap-up post.