Generate nested JSON from SQL with ORDS and APEX 5

When creating web applications, at some point you will need to interact with JSON. Either you consume JSON or you need to generate it to be able to use that cool widget you found.

A few years ago, when I wrote about interacting with and customising charts, XML was the data format. Today JSON is more common, as it works so well with JavaScript. I needed to pass some nested JSON - here's an example: multiple customers have multiple orders, and an order consists of multiple items: 

What is the easiest way to generate that JSON data?

Luckily we have Oracle REST Data Services (ORDS), it literally is just one SQL statement!
Here you go: 
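The original post showed the statement as a screenshot; here's a sketch of what such a statement could look like, using nested cursor expressions which ORDS turns into nested JSON arrays (the customers/orders/items tables and their columns are hypothetical):

```sql
select c.customer_id,
       c.name,
       -- each cursor expression becomes a nested JSON array in the output
       cursor(select o.order_id,
                     o.order_date,
                     cursor(select i.product_name,
                                   i.quantity,
                                   i.price
                            from   items i
                            where  i.order_id = o.order_id) as items
              from   orders o
              where  o.customer_id = c.customer_id) as orders
from   customers c
```

Paste a statement like this as the source of your ORDS GET handler and the nesting comes for free.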


Save and presto, you're done :) Hit the TEST button and copy/paste the generated JSON into a validator such as JSONLint - and yep, we have valid JSON.


But that is not all - it gets even better. APEX 5 comes with a new package, apex_json, which contains so many goodies - really a wonderful piece of code. I call the same SQL statement through the package and again get the JSON I need. It's slightly different from what ORDS generated - ORDS wraps the data in an items tag - but again it's valid JSON.
Note that APEX 4.2 has some JSON support too (apex_util.json_from_sql), but it doesn't go as far as the package you find in APEX 5.0 - for example, the cursor syntax doesn't work there - but for straightforward JSON it does its job.
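As a sketch, calling apex_json with a ref cursor could look like this (the customers/orders tables are hypothetical; apex_json.write accepts a sys_refcursor in APEX 5.0):

```sql
declare
  l_cursor sys_refcursor;
begin
  open l_cursor for
    select c.customer_id,
           c.name,
           cursor(select o.order_id, o.order_date
                  from   orders o
                  where  o.customer_id = c.customer_id) as orders
    from   customers c;

  apex_json.initialize_clob_output;        -- write to a CLOB instead of HTP
  apex_json.open_object;                   -- {
  apex_json.write('customers', l_cursor);  --   "customers": [ ... ]
  apex_json.close_object;                  -- }
  dbms_output.put_line(apex_json.get_clob_output);
  apex_json.free_output;
end;
/
```

Outside an APEX page you need the initialize_clob_output/get_clob_output calls; inside an APEX process the output goes to the HTP buffer by default.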


So this shows how easy it is to generate JSON these days. It used to be different: we had to use the PL/JSON package (which was good too, but built-in is what I prefer).

And finally with Oracle Database 12c, and the JSON support straight in the database, consuming and querying JSON has been made so much easier, but that is for another post. If you can't wait, the link will bring you to the official 12c Database JSON doc.

Riga Dev Day and Oracle Midlands Event #7

There are a couple of things on the radar for the coming week and unfortunately for me they are both on the same day.

In Birmingham, UK it is Oracle Midlands Event #7 (OM7) on Thursday 22nd of January. This will be the first Oracle Midlands event I’ve missed since it started. I’m a big fan of what Mike is doing with Oracle Midlands and I appreciate the sponsorship from Redgate that means it is a free event. Please remember to get off your post-Christmas asses and attend the event. It will only keep happening if you make the effort to turn up! So far the attendance has been good, but it only takes a couple of events with small numbers to let it fizzle out and die. Please keep showing your support!

On the same day it is Riga Dev Day 2015 in Latvia. It will be my first time in Latvia, so that will be fun. I think the daughter (Hell-Squirrel) has been before, so she’ll be able to look after me. :)

Cheers

Tim…


Riga Dev Day and Oracle Midlands Event #7 was first posted on January 17, 2015 at 11:43 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only.

Understanding the APEX url – passing variables to items and (interactive) reports

Did you know you can recognise an Oracle APEX application through the url?

Here's the syntax:

f?p=appid:pageid:session:request:debug:clearcache:params:paramvalues:printerfriendly&p_trace=YES&cs=...

Let me give some examples:

  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570:::::

The first part: https://www.apexrnd.be/ords/ depends on your configuration. I'm using https, my domain name is apexrnd.be and I'm using the Oracle REST Data Services as connection to the database. This is a configuration in your webserver.

The next part is f?p=. f is a procedure in the database, called with parameters (p=). The f comes from flow - APEX was once called "Project Flows".

209 is my application id, 12 is my page id and 12351378808570 is my session.
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570:CSV::::
The first part is the same, but now the request is CSV. Page 12 is an Interactive Report, and the fastest way to see your IR data as CSV is to pass CSV as the request in the url.
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570:IR_REPORT_BASIC::::
I changed the request again, this time to the name of a saved Interactive Report (Alternative) prefixed with IR_REPORT_, so the page goes straight to that layout of the report.


  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570::YES:::
Here I'm calling the page in DEBUG mode
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570:::::
In the clear cache section you have a couple of options; specifying one of the values below will clear the cache for: 
- APP: whole application
- SESSION: current user session
- RIR: reset interactive report to primary report
- CIR: reset to primary report but with custom columns
- RP: reset pagination
- 12: page 12
- P12_ID: item P12_ID
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570:::P12_ID:1:
Here I pass a variable with the url, we'll fill P12_ID with the value 1
You can use a comma-separated list of items and values, for example: P12_ID,P12_NAME:1,DIMITRI
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570::::IREQ_CUST_LAST_NAME:Dulles
If you want to filter an interactive report you can pass an IR filter parameter (e.g. IREQ_<column>) as the item name:
In the above case we set the customer last name = Dulles.
There are many other parameters: IREQ_ (equal), IRLT_ (less than), IRLTE_ (less than or equal), IRGT_ (greater than), IRGTE_ (greater than or equal), IRLIKE_ (like), IRN_ (null), IRNN_ (not null), IRC_ (contains), IRNC_ (not contains).
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570::::::YES
In this case we run the page in Printer-friendly mode
  • https://www.apexrnd.be/ords/f?p=209:12:12351378808570::::::&p_trace=YES
This url will generate a trace file on the system for this request.

You find some more information in the APEX Documentation or you can try online at https://www.apexrnd.be/ords/f?p=209:12:0:.

One remark: if you enable Session State Protection (SSP), you might not be able to call the parameters like this, as the url then requires a checksum at the end, which you can calculate with, for example, apex_util.prepare_url().
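A minimal sketch of generating such a checksummed url with apex_util.prepare_url (the page and item names here are just the ones from the examples above):

```sql
declare
  l_url varchar2(4000);
begin
  -- prepare_url appends the &cs=... checksum that SSP expects
  l_url := apex_util.prepare_url(
             p_url => 'f?p=209:12:' || v('APP_SESSION') || ':::P12_ID:1');
  htp.p(l_url);
end;
/
```

Run inside an APEX session, this returns the same url with the &cs= checksum appended, ready to use in a link.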

And finally, if you want more readable urls, some people have written posts about that; here's one by Peter.

AWR Warehouse and SYSDBA

I’ve had a few folks ask me a similar question about the AWR Warehouse numerous times this week.

“How can I limit what the user of the AWR Warehouse can view in the AWR Warehouse?”

“How can I add source databases to the AWR Warehouse without DBA privileges?”

This topic bridges into an area of confusion between use of the EM12c environment (which I consistently promote for use by DBAs, developers and all of IT) and the AWR Warehouse, which shares its user interface as part of Cloud Control but is currently a DBA’s deep analysis and research tool.

The request to limit privileges to add source databases, limit view access to targets, but also bind values, full SQL statements, and advanced performance data impacts the purpose of the AWR Warehouse.  I fully understand the security requirements for AWR Warehouse access as stated in Oracle’s support documentation:

“You can add and remove source databases provided you have access to the database target and to database credentials with execute privileges on the sys.dbms_swrf_internal package and the DBA role.”

https://docs.oracle.com/database/121/TDPPT/tdppt_awr_warehouse.htm#CACEEAHD

Why is this important?

The first reason is space considerations.  AWR data with long-term retention could add up to considerable disk space if just any database were added without careful planning to ensure the right information is retained for the right databases.  The power of the Automatic Workload Repository (AWR) is that it’s always on, but always on means it’s always growing, and this is best left to a DBA resource to ensure that the allocation of space is used wisely.

Second, on limiting the view of data in the AWR Warehouse: when investigating an issue with any SQL execution, there are many factors to take into consideration.  One of the most significant, and often the key to answering why a performance issue occurred, is looking at the difference between the amount of data actually returned by the where clause and objects vs. the data provided to the optimizer.

If we take the following, simple where clause into consideration:

where a.col1=b.col2
and a.col2=:b1
and b.col7=:b2;

When the optimizer uses the data provided to it from statistics, histograms and any dynamic sampling, there are a number of choices it can make:

  1. Hash join on table a and b to address the join or perform nested loop if…
  2. Column a.col2 is unique, making the join quite small or…
  3. Adding b.col7 to a.col2 to the join will make it so unique that a nested loop is sufficient.

Now, what if the optimizer decided to perform a nested loop when 16 million+ rows were returned?

To the untrained eye, some may assume that the optimizer had made a poor choice or that there was a bug and would walk away.  More often, if you have the data provided by the values passed to the bind variables, along with the data provided to the optimizer, assumptions would fall away and a very different story would present itself.

This is why Oracle requires DBA privileges to add a source database to the AWR Warehouse and to work with the data provided as part of it. This feature provides an abundance of data that is most useful to the professional who knows how to work with the Automatic Workload Repository.  The professional with the access required to perform this type of analysis and research should be the database administrator, so the requirement for the AWR Warehouse makes perfect sense.

Now to return to the query, adding in the values for the bind variables, a new picture develops to research:

where a.col1=b.col2
and a.col2=6002
and b.col7='YES';

We now can verify the statistics data behind the values for both a.col2 and b.col7 and accurately diagnose where the optimizer may have been mislead due to incorrect data provided to the Cost Based Optimizer.
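One way to sketch that check is to query the standard data dictionary view user_tab_col_statistics (the table names A and B here stand in for whatever real tables sit behind the aliases a and b):

```sql
select table_name, column_name, num_distinct, num_nulls, density, histogram
from   user_tab_col_statistics
where  (table_name = 'A' and column_name = 'COL2')   -- hypothetical names
or     (table_name = 'B' and column_name = 'COL7');
```

Comparing num_distinct, density and the histogram type against the actual data behind 6002 and 'YES' shows quickly whether the optimizer was working from an accurate picture.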

This may be a simple explanation of why I believe the DBA privilege policy was chosen for source database additions and view options in the AWR Warehouse, but hopefully it sheds a bit of light on the topic.

 





Copyright © DBA Kevlar [AWR Warehouse and SYSDBA], All Rights Reserved. 2015.

Did you forget to run root.sh?

Not easy to detect, and depending on the product (agent/database), it may have only limited side effects.

Like external jobs not running, or operating system statistics not being collected.

But it is not always easy to diagnose.

For instance, if you patch from OMS 12cR2 to 12cR3 and you run the root.sh only in 12cR2, there are very few statistics missing (one is OS_STORAGE_ENTITY).

Running the root.sh doesn’t generate a log file or an entry in the inventory.

To check if it was executed, check what it is supposed to do. It is a bit different in each version. One thing it always does is change the ownership to root and set the setuid bit on a few binaries. For the database, this is done in sub-scripts called rootadd.sh (10g) or rootadd_rdbms.sh (11g/12c).


eval ls -l $(find $ORACLE_HOME -name "rootadd*sh" -exec awk '$1=="$CHOWN"&&$2=="root"{print $3}' {} \; | sort -u)

-rwsr-x--- root dba .../product/11.2.0/db_4/bin/extjob
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/jssu
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmb
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmhs
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmo
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/oradism
-rw-r----- root dba ...11.2.0/db_4/rdbms/admin/externaljob.ora

If the ownership is root, you definitely did run the root.sh.

On the 12c agent, there is a FULL_BINARY_LIST variable that points to the list of root binaries in sbin:


eval $(grep FULL_BINARY_LIST= $AGENT_HOME/root.sh)
cd $AGENT_HOME/../../sbin
ls -l $FULL_BINARY_LIST

-rws--x--- root dba nmb
-rws--x--- root dba nmhs
-rws--x--- root dba nmo

If all files exist and belong to root, it looks like you did run the root.sh.

Spatial space

One thing you (ought to) learn very early on in an Oracle career is that there are always cases you haven’t previously considered. It’s a feature that is frequently the downfall of “I found it on the internet” SQL.  Here’s one (heavily paraphrased) example that appeared on the OTN database forum a few days ago:

select table_name,round((blocks*8),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

The result from the first query is 704 kb,  the result from the second is 25.4 kb … fragmentation, rebuild, CTAS etc. etc.

The two queries are perfectly reasonable approximations (for an 8KB block size, with pctfree of zero) for the formatted space and actual data size for a basic heap table – and since the two values here don’t come close to matching it’s perfectly reasonable to consider doing something like a rebuild or shrink space to reclaim space and (perhaps) to improve performance.

In this case it doesn’t look as if the space reclaimed is likely to be huge (less than 1MB), on the other hand it’s probably not going to take much time to rebuild such a tiny table; it doesn’t seem likely that the rebuild could make a significant difference to performance (though apparently it did), but the act of rebuilding might cause execution plans to change for the better because new statistics might appear as the rebuild took place. The figures came from a test system, though, so maybe the table on the production system was much larger and the impact would be greater.

Being cautious about wasting time and introducing risk, I made a few comments about the question – and learned that one of the columns was of type SDO_GEOMETRY. This makes a big difference to what to do next, because dbms_stats.gather_table_stats() doesn’t process such columns correctly, which results in a massive under-estimate for the avg_row_len (which is basically the sum of the avg_col_len values for the table). Here’s an example (run on 12c, based on some code taken from the 10gR2 manuals):


drop table cola_markets purge;

CREATE TABLE cola_markets (
  mkt_id NUMBER,
  name VARCHAR2(32),
  shape SDO_GEOMETRY);

INSERT INTO cola_markets VALUES(
  1,
  'cola_a',
  SDO_GEOMETRY(
    2003,  -- two-dimensional polygon
    NULL,
    NULL,
    SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
    SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
          -- define rectangle (lower left and upper right) with
          -- Cartesian-coordinate data
  )
);

insert into cola_markets select * from cola_markets;
/
/
/
/
/
/
/
/
/

execute dbms_stats.gather_table_stats(user,'cola_markets')
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

analyze table cola_markets compute statistics;
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

If you care to count the number of times I execute the “insert as select” it’s 10, so the table ends up with 2^10 = 1024 rows. The 7,200 in the calculated column converts bytes to approximate blocks on the assumption of 8KB blocks and pctfree = 10. Here are the results following the two different methods for generating object statistics:


PL/SQL procedure successfully completed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
         14       1024        124               2

Table analyzed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
        109       1024        124              16

Where does the difference in Expected_blocks come from? (The Blocks figure is 124 because I’ve used 1MB uniform extents – 128 blocks – under ASSM, which means 4 space management blocks at the start of the first extent.)

Here are the column lengths after the call to dbms_stats: as you can see the avg_row_len is the sum of avg_col_lens.


select column_name, data_type, avg_col_len
from   user_tab_cols
where  table_name = 'COLA_MARKETS'
order by
        column_id
;

COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             3
NAME                 VARCHAR2                           7
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             0
SYS_NC00004$         NUMBER                             4
SYS_NC00005$         NUMBER                             0
SYS_NC00006$         NUMBER                             0
SYS_NC00007$         NUMBER                             0
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

The figures from the analyze command are only slightly different, but fortunately the analyze command uses the row directory pointers to calculate the actual row allocation, so picks up information about the impact of inline varrays, LOBs, etc. that the dbms_stats call might not be able to handle.


COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             2
NAME                 VARCHAR2                           6
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             1
SYS_NC00004$         NUMBER                             3
SYS_NC00005$         NUMBER                             1
SYS_NC00006$         NUMBER                             1
SYS_NC00007$         NUMBER                             1
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

As a basic reminder – whenever you do anything slightly non-trivial (e.g. something you couldn’t have done in v5, say) then remember that all those dinky little script things you find on the Internet might not actually cover your particular case.


The info in OTHER_XML of view DBA_HIST_SQL_PLAN

I had some time to spend, killing time, and thought about something that was “on…

Log Buffer #406, A Carnival of the Vanities for DBAs

This Log Buffer Edition covers blog posts from various bloggers of Oracle, SQL Server and MySQL.

Oracle:

Sync tables: generate MERGE using Unique constraint.

What Hardware and Software Do YOU Want Oracle to Build?

There were a number of new features introduced in Ops Center 12.2.2. One of the shiny ones is an expansion of the backup and recovery capabilities to include Proxy Controllers.

Want to Be a Better Leader? Answer One Question.

Managing a remote Oracle Database instance with “Geographic Edition”.

SQL Server:

Learn how you can use SQLCop to prevent your developers from writing stored procedures that are named sp_ something.

Data Cleaning in SQL 2012 with Data Quality Services.

Stairway to PowerPivot and DAX – Level 9: Function / Iterator Function Pairs: The DAX MAX() and MAXX() Functions.

Options to Improve SQL Server Bulk Load Performance.

Dynamically Create Tables Based on an Access Table

MySQL:

Stored Procedures: critiques and defences.

JSON UDF functions 0.3.3 have been released.

Business Scalability, Operational Efficiency and Competitive Edge with MariaDB MaxScale 1.0 GA.

MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP.

Hyper-threading – how does it double CPU throughput?

Master Detail – Detail in APEX with FOEX

In the previous post I talked about Master-Detail and that it wasn't that easy to do Master-Detail-Detail today declaratively in APEX 4.2 (and 5.0).

Below is a screenshot of FOEX (a framework built on top of APEX) with a Master-Detail-Detail in there.

You can see the live example at https://www.apexrnd.be/ords/f?p=FOEX_FDOCS:MASTER_DETAIL_DETAIL:0::NO:::

Here's a screenshot of the page behind the scenes:


At first it might seem complex, but it isn't. In FOEX you can put regions in different places on the screen (center, east, west pane, etc.), so many of the regions are there to control those areas.
The most important regions are "List of Customers", "List of Orders" and "Order Items"; those are the regions you see in the first screenshot. The other region, "Manage Order Items", is a Modal Dialog that appears when you want to add an order item.


My goal is not to explain FOEX in great detail here - you can read about it on their website - but basically they extended APEX with a custom theme, many (many!) plugins and a builder add-on (you see "Create FOEX Region" in the screenshot), so it's really like you are working natively in APEX. Here's a screenshot of what you get when you hit the button to create a FOEX region:


So almost natively you can build your Master-Detail-Detail, through their wizards.

I thought I'd mention this solution here as well: although my first choice is to make simple and clean web applications, if you do want a lot of information on your screen (like in a master-detail-detail) and you like ExtJS (which is used behind the scenes), FOEX is probably one of the best choices you have.

APEX R&D is a partner of FOEX, so if you need some more info, don't hesitate to contact us.

Master Detail (-Detail) in APEX

In the last posts we used the following tables to show (Report) and edit (Form) the data:

Another way to work with this data is using a Master-Detail report/form.
In the master we have our customer info and in the detail the products he's interested in.

Oracle APEX provides a wizard to create a Master-Detail Form.
You just follow the wizard to set it up:


By default the wizard will only show related tables, which is most logical - if you don't see your table, you probably don't have a foreign key relationship between the tables.

You have some options for the layout - edit the records directly on the detail screen or in a separate form. Which one to pick? It depends... for small detail tables I would go for a Tabular Form, but for larger or more complex ones I would go with a separate Form. Tabular Forms are not the most flexible feature in APEX, but for now they are the only declarative option you have to edit multiple records at the same time. Alternatives are to use Modal Dialogs, code your own custom solution, or use somebody else's solution. FOEX, for example, has a nice solution which I'll cover in the next post.

Tabular Forms were improved only a little in APEX 5.0, but Modal Dialogs are native in APEX 5.0. Tabular Forms will be enhanced further in APEX 5.1, which can then do master-detail-detail, and 5.1 will also come with another solution - a new "Multi-Row Edit" region type - which could work well in this case.

You find the Master Detail result online at https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:MASTER_DETAIL


What if our tables were a bit more complex and we needed Master-Detail-Detail today?
We would need to create our own custom "tabular form": basically a report where we use the apex_item API... but that is for another post.