Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Integrating PFCLScan and Creating SQL Reports

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

PFCLScan Reseller Program

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

PFCLScan Updated and Powerful features

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Oracle DB12c in Docker

Docker has been making a lot of noise in the last couple of years. It piqued my interest recently when I came across a tweet saying that Oracle had recently uploaded an Oracle Linux 7 image to the Docker repository. Containers are nothing new in the Unix space but they are relatively new in Linux. Docker was built to take advantage of Linux Containers.

I asked on Twitter if anyone had successfully installed and run the Oracle Database inside of a Linux Container using Docker. I got a couple of responses that it has been done. I looked around for a blog post or documentation on how to do it and couldn’t find one so, here it is. This is also my first use of Linux Containers and Docker so feel free to comment if there are things that I missed or could do more efficiently.

Note: These instructions presume the user is familiar with VirtualBox and is at least an intermediate user of Linux.

Create a small VM. Oracle Linux 7 doesn’t need a lot of horsepower to run. However, Oracle Database will run better with more resources. My VM has 4GB of memory and 25GB of dynamically allocated disk space. Add a second disk of the same size that will be used for the Docker images. Make sure to include a network adapter that will be able to access the internet.
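
If you prefer to script the VM build, a rough VBoxManage sketch follows; the VM name, file names and sizes are illustrative rather than taken from this post.

VBoxManage createvm --name OL7 --ostype Oracle_64 --register
VBoxManage modifyvm OL7 --memory 4096 --nic1 nat
VBoxManage createhd --filename OL7.vdi --size 25600
VBoxManage createhd --filename OL7-docker.vdi --size 25600
VBoxManage storagectl OL7 --name SATA --add sata
VBoxManage storageattach OL7 --storagectl SATA --port 0 --device 0 --type hdd --medium OL7.vdi
VBoxManage storageattach OL7 --storagectl SATA --port 1 --device 0 --type hdd --medium OL7-docker.vdi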

Start the VM. Since there is no OS on the VM and no mounted ISO, you should be presented with a screen asking which ISO you want to use as a start-up disk.


Press enter when the install screen comes up.


Choose the language.


There should only be one exclamation mark on the screen for the installation destination. Click on the exclamation mark to choose the disk layout. Select the first disk and click Done.


Click Begin Installation.


The Oracle Linux 7 install is completely different from previous installs. The tasks have become much more parallelized. You may notice that the installation has already begun at this point. While the OS installs, click Root Password.


Enter a password for root and click Done.


The OS installation will now finish.


Once the install is done, click Reboot.


Install additional packages that will be needed in the VM to support running Docker.

yum -y --enablerepo=ol7_addons install docker btrfs-progs vim bzip2 kernel-uek-devel-$(uname -r) tigervnc-server xterm xorg-x11-xauth

 

Add the Guest Additions ISO to the VM by clicking the Devices menu on the VM and clicking Insert Guest Additions CD image.


Mount the Guest Additions ISO and install.

mount /dev/cdrom /mnt
/mnt/VBoxLinuxAdditions.run

 

Format the second disk in the VM with btrfs. This will be used for the Docker images.

mkfs.btrfs /dev/sdb

 

Query the UUID of /dev/sdb. This will be used in the next step.

blkid /dev/sdb

 

Create the Docker mount config file /etc/systemd/system/var-lib-docker.mount. systemd requires a mount unit's file name to match its mount point, so var-lib-docker.mount corresponds to /var/lib/docker. Change the UUID to match that of /dev/sdb from the previous step.

[Unit]
Description = Docker Image Store

[Mount]
What = UUID=<UUID from previous step>
Where = /var/lib/docker
Type = btrfs

[Install]
WantedBy = multi-user.target

 

Enable the Docker mount.

systemctl enable var-lib-docker.mount
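
To mount it straight away and confirm Docker's storage area sits on the new disk, you can also start the unit by hand:

systemctl start var-lib-docker.mount
df -h /var/lib/docker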

 

Create the Docker configuration file.

echo 'OPTIONS=-s btrfs' > /etc/sysconfig/docker

 

Copy the Docker service to systemd to allow you to start and stop the service with systemctl.

cp /usr/lib/systemd/system/docker.service /etc/systemd/system/docker.service

 

Enable and then start Docker.

systemctl enable docker
systemctl start docker

 

Pull down the Oracle Linux Docker image. This will require a connection to the internet.

docker pull oraclelinux

 

View the images currently stored on the VM. This may include more than one since the Oracle Linux Docker image is versioned.

docker images

 

Install the necessary packages in a container to support the database. The docker run command will create a new container from the image and start it. It will only run as long as the command takes to complete. After it completes, the container will still exist but will not be running.

MYCOMMAND='yum -y --enablerepo=ol7_addons install oracle-rdbms-server-12cR1-preinstall vim bzip2 xorg-x11-xauth xeyes xhost'
docker run --name oraol7 oraclelinux $MYCOMMAND

 

List the container that was just created. Running the docker ps command without the -a parameter will only show the running containers. Since the container we created is no longer running, it would not show up with only the docker ps command.

docker ps -a

 

Commit the container just created to a new image. The new image will be a copy of the container but can also be used to create new containers.

docker commit oraol7 oraol7

 

Now you can see an additional image available in the Docker images. Notice the virtual size of the image just created is significantly larger than the original oraclelinux images.

docker images

 

The container has been committed to an image, so the changes made on top of the original image have been saved. You can now delete the container.

docker rm oraol7

 

Notice the container no longer exists.

docker ps -a

 

Load the Oracle DB12c binaries onto the VM or mount them through a shared folder. The files in this case are in the /mnt directory.

Start vncserver on the host VM. Connect to the VNC session from your PC or directly on the VM.

vncserver
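
From another machine, the connection would look something like this, where the address and display number are placeholders for your own setup:

vncviewer <vm-ip>:1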

 

Create a container from the oraol7 image and run it in interactive mode with a bash shell. Give the container a hostname of db12c. Create two directory mappings between the host and the container; one for the DB12c binaries and the other to forward the X11 traffic.

docker run -it --name oraol7 -h db12c -v /mnt:/mnt -v /tmp/.X11-unix:/tmp/.X11-unix oraol7 /bin/bash

 

You are now at a prompt inside of the running container.

[root@db12c /]$ hostname
db12c

 

Confirm that an X11 application started inside of the container appears in the VNC session.

export DISPLAY=:1.0
xeyes

 

Give the rest of the users in the container OS permission to use X11 applications. This is necessary because we need to launch the database installer as oracle. By default, only root has permissions to launch X11 applications.

xhost +

 

Create a directory in the container for the DB12c install and change the ownership.

mkdir /u01
chown oracle:oinstall /u01

 

Change user to oracle and start the DB12c installer. The installer application will appear in the VNC session.

su - oracle
export DISPLAY=:1.0
/mnt/database/runInstaller

 

When the install is complete, the database should be running inside of the container.

[oracle@db12c /]$ ps -ef | grep [p]mon
oracle      23     1  0 20:04 ?        00:00:00 ora_pmon_orcl

 

Shut down the database.
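
For example, as the oracle user inside the container:

export ORAENV_ASK=NO
export ORACLE_SID=orcl
. oraenv
sqlplus / as sysdba << EOF
shutdown immediate;
exit;
EOF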

While connected to the container and still the oracle user, create two scripts in the /home/oracle directory called start.sh and shut.sh.

start.sh

#!/bin/bash

export ORAENV_ASK=NO
export ORACLE_SID=orcl
. oraenv
echo $ORACLE_HOME

lsnrctl start
# Escape the $ so the shell doesn't expand v$instance inside the heredoc.
sqlplus / as sysdba << EOF
select status from v\$instance;
startup
select status from v\$instance;
exit;
EOF

# Keep the script (and therefore the container) alive while pmon is running.
while pgrep ora_pmon_orcl > /dev/null; do
    sleep 5
done

 

shut.sh

#!/bin/bash

export ORAENV_ASK=NO
export ORACLE_SID=orcl
. oraenv
echo $ORACLE_HOME

lsnrctl stop
# As in start.sh, escape the $ in v$instance inside the heredoc.
sqlplus / as sysdba << EOF
select status from v\$instance;
shutdown immediate;
select status from v\$instance;
exit;
EOF

 

Exit from the container and commit the container to an image.

docker ps -a
docker commit oraol7 oraol7

 

The oraol7 image virtual size is now close to 12GB.

docker images

 

Remove the oraol7 container. This is a necessary step because the next step creates a new container with different settings.

docker rm oraol7

 

Create a new container from the saved oraol7 image. This container will run in the background and start up the database instance. As long as the database instance is running, the container will continue to be active.

docker run -it -d --name oraol7 -h db12c -u oracle oraol7 /home/oracle/start.sh

 

List the containers. Notice there is one container that has a status of Up.

docker ps

 

The database is running inside of the oraol7 container. You can now connect to that running container in a separate interactive session.

docker exec -it oraol7 /bin/bash

 

While connected to the oraol7 container, try connecting to the database to confirm it is running.

. oraenv
ORACLE_SID = [] ? orcl
sqlplus system/welcome1@localhost/orcl

 

Containers can be stopped and started. Stop the oraol7 container.

docker stop oraol7

 

Now try starting it again.

docker start oraol7

 

You may run into an error having to do with systemctl when trying to start an existing container. This is a bug. The workaround is to stop the systemd scope listed in the error message.

systemctl stop docker-ebe362b4a4cd81fbf7fa5fef4870a32a423f8586ff187679bebb7e9c77c8f3de.scope

 

Then try starting the container again.

docker start oraol7

 

When you want to stop the database and the container it is running in, you could simply kill the container. However, killing the container would leave the database in an inconsistent state. The container can be closed cleanly, with a consistent database, using the shut.sh script. When the instance is stopped, the original session which ran /home/oracle/start.sh will exit automatically (the while loop in start.sh ends once pmon disappears).

docker exec -it oraol7 /home/oracle/shut.sh

 

Keep in mind that any changes that have been made are currently in the container, not the original image. In order to preserve those changes, you would need to commit the container to an image once again. To continue to use the same image name without overwriting the current image as we did previously, you can use a tag.

docker commit oraol7 oraol7:load1

 

Now you have a container with a database that you can start and stop, but that doesn’t do you a lot of good if you can only access it from within the container, so we need to make the database (more specifically, the listener) accessible from outside the container.

Remove the existing container and start a new one based on the last saved image but this time expose port 1521 on the container to the host OS.

docker rm oraol7
docker run -it -d --name oraol7 -h db12c -u oracle -p 1521 oraol7:load1 /home/oracle/start.sh
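
Note that -p 1521 on its own publishes the container's port 1521 on a randomly chosen host port (use -p 1521:1521 to pin it to the same host port). You can check the mapping Docker chose:

docker port oraol7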

 

Find the IP address of the running container.

[root@localhost ~]# docker inspect oraol7 | grep [I]PAddress
        "IPAddress": "172.17.0.19",

 

Connect to the container and add the IP address of the container to the listener.ora.

docker exec -it oraol7 /bin/bash
vim /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora

 

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.0.19)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

 

Restart the database listener.

. oraenv
ORACLE_SID = [] ? orcl
lsnrctl stop; lsnrctl start

 

You may need to re-register the database service to the listener.

sqlplus / as sysdba
alter system register;
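
You can confirm the service has registered by checking the listener status.

lsnrctl status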

 

Back on the host OS, I can now connect to the database through port 1521. In this case, I downloaded the Oracle Instant Client and connected to the IP address of the container using EZ Connect.

sqlplus64 system/welcome1@172.17.0.19/orcl

How to convert Excel file to csv

One-liner to convert Excel to CSV (or to and from any other format).
There is a bug (320369) if you have Excel in English and your locale is not American. Just change your settings to us_en before conversion.
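
A commonly used approach runs LibreOffice in headless mode; the file name here is illustrative, and csv can be swapped for any format LibreOffice knows how to write.

libreoffice --headless --convert-to csv report.xlsx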

VirtualBox 3.2.22

VirtualBox 3.2.22 arrived yesterday, and here was me thinking it was dead. :)

The downloads and changelog are in the usual places.

Happy upgrading!

Cheers

Tim…



Let’s Talk Database Replay- Part I

Database Replay is a feature in EM12c that a lot of folks are unfamiliar with, but once they learn about it, they quickly figure out how valuable it really is.  Where the ADDM Comparison Report comes in really handy when you want to see what changed, think about how much more value there is if you could test out what a change in parameters, patches, or code would do to a database and THEN use the ADDM Comparison Report to view the "what ifs". This is where Database Replay really enhances the comparison report feature.

Along with the ADDM Comparison, you also have the ability to replay with ASH Analytics, SQL Analyzer and Replay’s own Comparison Report.

Now, you can generate the workload capture reports via the GET_CAPTURE_INFO function in the DBMS_WORKLOAD_CAPTURE package, but as this is EM12c Goth Girl’s blog you’re visiting, we’re going to discuss using Enterprise Manager 12c to produce and execute the Database Replay.
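
For reference, a minimal SQL*Plus sketch of that API route, assuming a directory object named CAPTURE_DIR that points at the capture files (the directory name here is an assumption, not from the post):

set long 1000000 pagesize 0
variable cap_id number
-- Look up the capture id from the capture directory, then print the report.
exec :cap_id := dbms_workload_capture.get_capture_info('CAPTURE_DIR')
select dbms_workload_capture.report(:cap_id, 'TEXT') from dual;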

Accessing Database Replay in EM12c

Database Replay is under the Performance drop down once you log into a target database.


Once you enter the wizard, you see the steps that need to be completed to perform a successful database replay:


  1. Capture Production Workload
  2. Prepare Test Database
  3. Prepare for Replay
  4. Replay Workload on Test Database

Capture Production Workload

This is broken down into three parts:


You’ll need to click on the icon at the far right to configure the first “configurable” step, Capture Workload.  The first task is to make sure you understand the requirements to export and store the data that will be needed to run the replay.


After verifying that you have the disk space and the location to restore to, acknowledge both and click on Next.

Now you can choose what to exclude from your replay.  Keep in mind, the more you exclude, the less space and the lighter the replay demands will be.

The default looks like this:


But for our post, I’m going to trim it down…a LOT. :)

I’m going to keep the “Do not restart the database…” option, change the filter mode to “Inclusion” and just use the DBSNMP user schema.


This is all I want to capture, so I click on Next.

I now need to decide what database directory to use for my export and have the option to rename the database replay or use the default naming convention.  For this post, I’m just going to leave it with the default name.


Now it’s time to schedule the workload export.  This is done via the EM Job Service, so the job will show up in Cloud Control’s Job Activity page.


The default for the job is to run immediately, but as you can see in the example for this post, I’ve scheduled it for later (meaning you can also schedule a capture to be performed after hours, when users are less likely to be accessing the database).  You can also set a Capture Duration limit, which means the capture will stop if it exceeds this time.  The default credentials used are the Preferred Host Credentials, so try to have these set up rather than creating new ones here in the wizard, an option I’m not crazy about, but I understand why it exists.

If you don’t have the credentials set up beforehand, you will receive the following error upon proceeding:


Once you do have the credentials set, you’re able to proceed and go to the next step in the wizard.

For Part II, we’ll proceed into the next step of preparing the test database to refresh and receive the workload to replay!

Have a great Thursday and for those folks that I don’t see at RMOUG Training Days 2015 conference next week, the horror you must feel right now! :)


Parallel rownum

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code. Here’s a little trap I came across recently that’s probably documented somewhere, which could be spotted easily if you had access to the OEM SQL Monitoring screen, but would be very easy to miss if you didn’t check the execution plan very carefully. I’ll start with a little script to generate some data:


create table t1 nologging
as
select * from all_objects where rownum <= 50000
;

insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create table t2 as select * from t1;
alter table t2 add id number(10,0);

All I’ve done is create some data – 800,000 rows – and then create a table to copy it to; and while I copy it I’m going to add a temporary id to the rows, which I’ll do with a call to rownum; and since there’s a lot of data I’ll use parallel execution:


alter session enable parallel dml;

insert /*+ parallel(t2 3) */ into t2
select /*+ parallel(t1 4) */ t1.*, rownum from t1;

For the purposes of experiment and entertainment I’ve done something a little odd by supplying two possible degrees of parallelism, but this lets me ask the query: will this statement run parallel 3, parallel 4, both of the above, or neither ? (You may assume that I have parallel execution slaves available when the statement runs.)

The answer is both – because that rownum does something nasty to the execution plan (I didn’t include the 50,000 limit in my first test, which is why the plan reports 993K rows instead of 800,000):


--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |          |   993K|    92M|  1076   (1)| 00:00:13 |        |      |            |
|   1 |  PX COORDINATOR            |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)      | :TQ20001 |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT          | T2       |       |       |            |          |  Q2,01 | PCWP |            |
|   4 |     PX RECEIVE             |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | PCWP |            |
|   5 |      PX SEND ROUND-ROBIN   | :TQ20000 |   993K|    92M|  1076   (1)| 00:00:13 |        | S->P | RND-ROBIN  |
|   6 |       COUNT                |          |       |       |            |          |        |      |            |
|   7 |        PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   8 |         PX SEND QC (RANDOM)| :TQ10000 |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | P->S | QC (RAND)  |
|   9 |          PX BLOCK ITERATOR |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWC |            |
|  10 |           TABLE ACCESS FULL| T1       |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------

See that “P->S” (parallel to serial) at operation 8. The select statement runs in parallel (degree 4) to scan the data, and then sends it all to the query co-ordinator to supply the rownum; then the query co-ordinator re-distributes the data (including rownum) to another set of slaves (S->P) to do the parallel (degree 3) insert. The P->S at line 2 shows the parallel execution slaves passing details to the query co-ordinator of the private segments that they have created so that the query co-ordinator can stitch the segments together into a single data segment for the table. (If you watch closely you’ll see the query co-ordinator doing a few local writes as it tidies up the header blocks in those segment blocks.)

There are two threats to this rownum detail. The first, of course, is that the operation essentially serialises through the query co-ordinator so it’s going to take longer than you might expect; secondly, an accident of this type is typically going to allocate twice as many parallel execution slaves as you might have expected – the select and the insert are two separate data flow operations (note how the Name column shows TQ1xxxx and TQ2xxxx), each gets its own slave set, and both sets of slaves are held for the duration of the statement. If this statement is demanding twice the slaves it should be using, then you may find that other statements that start running at the same time get their degree of parallelism downgraded because you’ve run out of PX slaves. Although the rownum solution is nice and clean – it requires no further infrastructure – you probably need to introduce a sequence (with a large cache) to get the same effect without losing parallelism, as sketched below.
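
A rough sketch of that alternative, with an illustrative sequence name and cache size (remember that values handed out to PX slaves are unique but not assigned in row order):

-- a large cache stops the PX slaves serialising on the sequence
create sequence t2_seq cache 10000;

alter session enable parallel dml;

insert /*+ parallel(t2 3) */ into t2
select /*+ parallel(t1 4) */ t1.*, t2_seq.nextval from t1;

commit;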

If you look at v$pq_tqstat after running this statement the results are a little disappointing – there are a few problems connecting lines from the plan with rows in the view – here’s my original output (and you’ll now see why I chose to have two different degrees of parallelism):


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0

         2          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                                             1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 QC                  993314  113479500        125         65           0

                    1 Consumer               1 QC                       3        393          2          1           0

The first problem is that the DFO_number reported in the view doesn’t match with the :TQ1xxxx and :TQ2xxxx reported in the plan – the parallel 4 bit is the select, which is covered by :TQ10000, but it’s listed under DFO_Number = 2 in the view, and the insert is the parallel 3 bit, which is covered by :TQ20000 and :TQ20001 but listed under DFO_Number = 1.

More confusing, potentially, is that all appearances of the query coordinator have been assigned to DFO_Number = 2. Ignoring the fact that the DFO_Number column switches the 1 and 2 from the plan, what we should see is as follows:

  • The consumer at line 16 is consuming from the 4 producers at lines 11 – 14.
  • The producer at line 15 is producing FOR the 3 consumers at lines 3 – 5
  • The consumer at line 18 is consuming from the producers at lines 7 – 9

Ideally (including the correction for the DFO_Number) I think the view content should be as follows:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                      Consumer               1 QC                  993314  113479500        125         65           0

         2          0 Producer               1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0
                      Consumer               1 QC                       3        393          2          1           0

Just don’t ask me why the waits and timeouts for the QC as producer seem to be counting backwards from 2^32.


Robert G. Freeman on Oracle 2015-02-12 01:00:00

Welcome to part three of my response to this blog entry on the deprecation notice of the Oracle non-CDB architecture. The previous entries in the series start here with part one. So far I've tackled these comments:

What I want to talk about is Oracle’s attitude to its customers and what seems to me to be breathtaking arrogance. Personally I can think of three very good reasons why I might not want to use the single PDB within a CDB configuration which does not require a Multitenant license

and 

Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff

and:

Multitenant is an entirely new feature, with new code paths – which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patchset contains a section on Pluggable/Container Databases which lists no fewer than 105 items)

Now, I want to address the final comment which is:

With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible

This bit of FUD is silly. First of all, this risk already exists with various features of the Oracle database. For example, many of the OEM packs can be inadvertently used without a license, as can several of the views in the database itself. Partitioning is another example that comes to mind. Often it's installed in a database, but its use requires a license.

So, how is this any different? Well, it's not. Simply put, this is an argument for enterprise compliance auditing/management.

In the end, this blog entry is a great example of a knee-jerk response to a situation that is, frankly, still developing. A lot of assumptions are made in the points the blog presents in its attempt to throw stones at Oracle. I only hope that in the future people will not start yelling about the end of the world before the trajectory of the asteroid has actually been tracked with some degree of accuracy.