Video : TRUNCATE TABLE … CASCADE

In today’s video we demonstrate the TRUNCATE TABLE … CASCADE feature added in Oracle 12.1.
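To give you a flavour of what the article covers, here’s a minimal sketch of the feature. The DEPT and EMP tables are just made-up examples, and the child table’s foreign key has to be defined with ON DELETE CASCADE or the truncate will fail.

  CREATE TABLE dept (
    dept_id   NUMBER PRIMARY KEY,
    dept_name VARCHAR2(50)
  );

  CREATE TABLE emp (
    emp_id  NUMBER PRIMARY KEY,
    dept_id NUMBER REFERENCES dept (dept_id) ON DELETE CASCADE
  );

  -- Truncates DEPT and, because of the ON DELETE CASCADE foreign key, EMP too.
  TRUNCATE TABLE dept CASCADE;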

The video is based on this article.

The star of today’s video is Øyvind Isene, who reluctantly took a break from his coffee to let me film this clip. 🙂

Cheers

Tim…

Don’t be a moron. Install APEX in all your Oracle databases!

If you come from an Oracle background you know what Application Express (APEX) is. You know it’s an awesome low-code development environment, and you know it can turn DBAs and PL/SQL developers into productive members of society in no time at all. 🙂

Even if you don’t want to use APEX itself, you should install it anyway, because it comes with some really handy PL/SQL packages. These come to mind.

APEX_WEB_SERVICE : I’ve written my own APIs in the past, but I never use them anymore. I recommend people make web service callouts from the database using the APEX_WEB_SERVICE package. You can use it for REST and SOAP calls, and it works equally well for JSON and XML web services.
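Here’s a minimal sketch of a REST callout using APEX_WEB_SERVICE. The URL is a made-up placeholder, and it assumes the network ACLs (and wallet, for HTTPS) are already set up.

  DECLARE
    l_response CLOB;
  BEGIN
    -- Simple GET request. The response comes back as a CLOB.
    l_response := apex_web_service.make_rest_request(
                    p_url         => 'https://example.com/api/employees',
                    p_http_method => 'GET');

    dbms_output.put_line(substr(l_response, 1, 200));
  END;
  /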

APEX_JSON : Oracle included some JSON functionality in Oracle 12.1, but they only completed the implementation in Oracle 12.2. If you are working with a database version older than Oracle 12.2 and you care about JSON, you need the APEX_JSON package.
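Here’s a minimal sketch of parsing a hard-coded JSON document and pulling values out by path.

  DECLARE
    l_json CLOB := '{"employee":{"name":"Fred","id":42}}';
  BEGIN
    -- Parse the document, then read values using dot-notation paths.
    apex_json.parse(l_json);

    dbms_output.put_line(apex_json.get_varchar2(p_path => 'employee.name'));
    dbms_output.put_line(apex_json.get_number(p_path => 'employee.id'));
  END;
  /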

APEX_DATA_PARSER : This package allows you to convert CSV, JSON, XML and XLSX data into rows and columns. It’s really simple to use!
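As a sketch of how simple it is, the following query assumes a hypothetical MY_FILES table holding an uploaded CSV file as a BLOB. The parser works out the file type from the file name and returns the data as generic COL001, COL002… columns.

  SELECT p.line_number, p.col001, p.col002, p.col003
  FROM   my_files f,
         TABLE(apex_data_parser.parse(
                 p_content   => f.file_blob,
                 p_file_name => f.file_name)) p
  WHERE  f.file_name = 'employees.csv';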

APEX_ZIP : You can do some simple gzip/gunzip stuff with the UTL_COMPRESS package, but the APEX_ZIP package allows you to zip up multiple files into an archive, or extract files from an existing archive.
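Here’s a minimal sketch of building an archive from multiple files. It assumes a hypothetical MY_FILES table (FILE_NAME, FILE_BLOB) holding the files to be zipped.

  DECLARE
    l_zip BLOB;
  BEGIN
    -- Add each file to the archive.
    FOR cur_rec IN (SELECT file_name, file_blob FROM my_files) LOOP
      apex_zip.add_file(
        p_zipped_blob => l_zip,
        p_file_name   => cur_rec.file_name,
        p_content     => cur_rec.file_blob);
    END LOOP;

    -- Complete the archive. L_ZIP now holds the zip file.
    apex_zip.finish(p_zipped_blob => l_zip);
  END;
  /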

APEX_MAIL : This package gives you an API over the APEX mail functionality. It’s more flexible than the UTL_MAIL package, and is much simpler than writing your own APIs using the UTL_SMTP package.
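Here’s a minimal sketch of sending a mail from PL/SQL. When you call APEX_MAIL outside of an APEX session you have to set the workspace context first. The workspace name and email addresses are placeholders, and the instance-level mail (SMTP) settings are assumed to be configured already.

  DECLARE
    l_workspace_id NUMBER;
  BEGIN
    -- Associate the session with an APEX workspace.
    l_workspace_id := apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE');
    apex_util.set_security_group_id(p_security_group_id => l_workspace_id);

    apex_mail.send(
      p_to   => 'someone@example.com',
      p_from => 'noreply@example.com',
      p_subj => 'Test mail from APEX_MAIL',
      p_body => 'This mail was sent from PL/SQL.');

    -- Push the mail queue rather than waiting for the next scheduled run.
    apex_mail.push_queue;
  END;
  /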

APEX_STRING : I always used to use the STRING_TO_TABLE and TABLE_TO_STRING functions in the APEX_UTIL package. Those have now been moved to the APEX_STRING package. You might prefer to use the SPLIT and JOIN functions instead. There is also a bunch of other string handling stuff worth checking out.
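Here’s a minimal sketch of SPLIT and JOIN in action.

  DECLARE
    l_tab apex_t_varchar2;
  BEGIN
    -- Split a delimited string into a collection.
    l_tab := apex_string.split(p_str => 'banana,apple,orange', p_sep => ',');

    FOR i IN 1 .. l_tab.count LOOP
      dbms_output.put_line(l_tab(i));
    END LOOP;

    -- Join the collection back into a single string.
    dbms_output.put_line(apex_string.join(p_table => l_tab, p_sep => ':'));
  END;
  /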

Every release brings more goodies for PL/SQL programmers, regardless of whether you want to use APEX or not!

So do yourself a favour. Install APEX into all your Oracle databases and make your PL/SQL developers happy.

Cheers

Tim…

PS. I don’t actually think you are a moron if you don’t install APEX. If you take the titles of blog posts seriously when they are clearly clickbait, maybe you are a moron…

Update: There seems to be a little confusion on some social media comments. Installing APEX into a database doesn’t mean APEX is available for people to develop applications. You can only use APEX proper if you have a gateway (ORDS, EPG, mod_plsql) fronting it. Without a gateway, APEX doesn’t work. Someone can’t take it upon themselves to run a gateway somewhere else if they don’t know all the passwords, so installing APEX and using it are not the same thing. Installing it does make the built-in packages available for normal PL/SQL development, which is really handy. Hence this post.

If you replace ‘enterprise application servers’ with ‘Oracle DB’ – does your message stay same? or different?

Yesterday I put out a post called Does anyone care about enterprise application servers anymore?

Before I posted it I got my boss to read it. This is how the Teams chat went.

  • Boss: It seems fine. Was there anything in particular you were not happy with?
  • Me: I guess I’m a little nervous about the post because the obvious comeback is, “Isn’t that true for the Oracle database too?”, and to some extent I would have to say yes*… I did consider adding a paragraph about that, but thought I would rather not.
  • Boss: Actually that is what I was thinking while reading it.

* In the context of the chat, “yes” meant it was a valid question. Just not easy to answer.

Sure enough, not long after posting it I got a Twitter response from Timur Akhmadeev that said, “If you replace ‘enterprise application servers’ with ‘Oracle DB’ – does your message stay same? or different?” That is the title of this post. 🙂

I’m going to attempt to explain my thoughts on this by going through the same talking points.

Full disclosure

I’m an Oracle fanboy. You know that. My take on this subject is bound to be biased because of my history.

I know I am mostly known for Oracle stuff, but in my current job I have to look after MySQL and SQL Server databases. I work on one project that uses PostgreSQL, which I’m really bad at. The company recently started using Snowflake, and the plan is to move all analytics and warehouse type stuff to that. So as much as I’m an Oracle fan, my work life is a lot more diverse than you might think.

Third-party application vendors

One of the major points in my original post was about the vendors of third-party products that got rid of enterprise application servers from their stack and replaced them with more streamlined alternatives. Do I see this reflected in the database space?

From my perspective, no. We have some products that are 100% Oracle. There is nothing we can do about that other than move to completely different products. Of the products where the RDBMS engine is a choice, I don’t think we have one that offers other RDBMS engines but not Oracle. It seems Oracle is still a viable choice here. I’m not saying the vendors are recommending it, but equally they are not advising against it.

There are some vendors where MySQL is not an option, but PostgreSQL is. Interesting…

Overall, we don’t see the same pressure from the vendors in the database space as we do in the enterprise application server space.

Containers

Similar to my comments about application servers, most RDBMS engines can work within containers, but should they? It’s fine for a demo or a small system. Are you happy about running a huge data warehouse as a monolith in a container though? I don’t think this is about Oracle or not. It’s something more fundamental than that.

Administration

There is definitely a big learning curve for Oracle. I could argue it’s probably harder to get good at Oracle administration than many other engines. I think you have to be really careful of the comparison though. Take my current company for example. The big and complicated stuff happens on Oracle, and the really simple stuff happens on SQL Server or MySQL. It’s unfair to compare the administration requirements because the use cases are so different.

I think the only way you can compare them is if someone has legitimately done the same types of projects on multiple engines and really understands them. I’m not that person, so I can’t make that comparison.

But what about product X?

As with the application server comment, if you have a vendor/product that demands a specific engine, you have no choice but to stick to it, or change vendor/product entirely.

For new products I would suggest you get some information about their current customer base and go with the majority/trend. If Oracle is an option, but 90% of their customers are not using Oracle, I would not knowingly make myself a minority without good reason.

Moving to the cloud

The cloud is the great leveller, and I think this is both good and bad for Oracle.

In the case of administration, cloud services can hide a lot of the complexity and make things much more generic. Take a look at RDS on AWS and you will see MySQL, SQL Server, PostgreSQL and Oracle all feel quite similar from a basic administration perspective. If people think Oracle is too complicated, take a look at RDS on AWS, or Autonomous Database on Oracle Cloud.

Where it becomes an issue for Oracle is this: once the guts are hidden from us, how valuable is the Oracle IP? Imagine feature X is baked into the Oracle database, but requires 3 additional apps to achieve the same result with another RDBMS. On-prem this is a real pain, as it’s now my responsibility to get all this stuff working and keep it working. If a cloud service does all that for me, do I care if it is baked into the RDBMS or not? I don’t think I do. I just care about what features the service delivers and the price.

There is also the comparison of “built for the cloud” vs “monolith in the cloud”. I know this concerns a lot of people, but to be honest I don’t care. If the resulting service gives me what I need, how much do I care what’s under the hood? That’s the cloud provider’s problem, not mine.

Regardless of which engine you pick, I think you should assume you will be using a cloud database service unless there is a compelling reason not to, rather than the other way round.

RDBMS vs NoSQL?

I’m not going to get into the RDBMS vs NoSQL war because I think time has proved it to be redundant. I’ve been listening to this for about 15 years and it seems pretty clear to me the choice depends on the use case.

Some use cases fit really well with NoSQL and some don’t. There are also grey areas in between, where polyglot engines become interesting, but it all depends on where your use case fits on the spectrum…

Imagine a shop that is mostly Oracle, but needs a JSON document store. Do they use a different engine like MongoDB, or just continue using Oracle and use SODA? The answer will be different depending on how you frame the question.

What’s the real problem with Oracle?

I don’t think Oracle’s problem is their tech, or even their prices. I speak to a lot of people and the number one problem they have with Oracle is the shady business practices. People don’t trust Oracle. That is not a good thing to hear from the customer base, especially when there are more alternatives than ever.

Oracle is in dire need of a Microsoft-style transformation, and that can only happen if the senior management at Oracle really want it to happen.

Is there a conclusion?

When you consider the context of the original blog post, not just the title, I don’t think you can just substitute “Oracle DB” for “enterprise application server” in the question. It’s too simplistic.

Cheers

Tim…

PS. The answer is, “it depends!” 🙂

Does anyone care about enterprise application servers anymore?

A few years ago it seemed like everywhere you turned there was a vendor extolling the virtues of their enterprise application server. The message was very much “big is beautiful”. The more complicated the better. The more features the better. Fast forward to present time and is that message still the same?

In a recent conversation I was asked about my thoughts on this, and my response was that I don’t think these “do it all” application servers matter anymore. This post outlines my feelings on the matter.

Full disclosure

Because of my job I was forced to learn WebLogic, and I’ve always disliked it. I think it’s over-complicated, a pig to manage, and every quarter the list of vulnerabilities looks like a horror show. Keep this in mind when reading the rest of this post. Basically, I’m saying this is my biased opinion and you don’t have to agree with me. I’m not throwing shade at any particular company. In my opinion, the mindset at the time was different to now, which resulted in similar products from multiple companies.

Also, it’s worth saying I’m not an application server guru, so that probably affects my views on the subject.

Third-party application vendors

The company I work for at the moment uses a lot of third-party apps. From what I can see, the third-party application vendors that were previously looking to offer solutions on products like WebSphere and WebLogic are quite happily getting rid of them in favour of more streamlined alternatives. They are happy to forgo the additional features, or implement them with additional libraries in their own code.

Maybe that’s just the application vendors I come into contact with. Maybe your experience is different. I would be interested to know, in case I’m totally off the mark here.

Containers

With the rise of containers we’ve become accustomed to small and lightweight pieces of infrastructure that focus on doing one thing well, rather than trying to be all things to all people.

I know you can run these enterprise application servers in containers and on Kubernetes, but it just feels like trying to force monoliths into containers. That’s not my idea of progress.

When I talk about lightweight, I’m not just thinking about the size of the footprint, but also its complexity.

Administration

The administration of the monoliths is too complicated. Given the choice of training up a new person on WebLogic or Tomcat, I know which I would pick.

Even after all these years, when we get an issue on a WebLogic server a piece of me dies because I know I’ll be rooting around for ages to get the answers. In comparison, Tomcat is so much simpler.

This brings us back to my point about the footprint. The complexity of some of these products comes at a cost.

But what about product X?

I realise that some products still need these enterprise application servers. If you use a product that needs them, that’s fine. Continue doing so. I just wouldn’t be looking to buy new products that require that much investment in time and money. It’s just my opinion though.

Moving to the cloud

One of the things I’ve mentioned several times is the move to the cloud changes things significantly. To a certain extent, I don’t care about the underlying technology used by a cloud service. I just care that it meets my requirements. Does it perform well enough? Does it have the required availability? Does it come at a reasonable price? Is it really easy to administer, or preferably need no real administration? Does it support the programming language/framework I care about? At no point am I really thinking about the underlying tech.

If a cloud provider builds out their service using an enterprise application server, that’s fine. I just don’t want to pay those licensing fees, and I don’t want to see any of the management. I want the most streamlined experience I can get.

What do you think?

I’m really interested to know what you think. I come to this conversation with a lot of bias, so I understand if a lot of people don’t agree with me. Are you looking to simplify this layer of your infrastructure, or do you still lean on the side of “big is beautiful”?

Cheers

Tim…

High Availability : How much availability do you really need?

I had a discussion with a few folks about high availability (HA) and I thought I would write down some of my thoughts on it. I’m sure I’ve made many of these points before, but maybe not collectively in this form.

Before we start

This is not specifically about Oracle products. It applies equally well to any technology, but I will use Oracle as an example, because that’s what I know.

When I speak about a specific product or technology I am only talking about it in the context of HA. I don’t care what other benefits it brings, as they are not relevant to this discussion.

This is not about me saying, “you don’t need to buy/use X”. It’s me asking you to ask yourself if you need X, before you spend money and time on it.

How much downtime can you really tolerate?

This is a really simple question to ask, but not one you will always get a considered answer to. Without thinking, people will demand 24×7 operation with zero downtime, yet you ask for a downtime window to perform a task and it will get approved. Clearly this contradicts the 24×7 stance.

As a company you have to get a good grasp of what downtime you can *really* tolerate. It might be different for every system. Think about interfaces and dependencies. If system A is considered “low importance”, but it is referenced by system B that is considered “high importance”, that may alter your perception of system A, and its HA requirements.

There are clearly some companies that require as close to 100% availability as possible, but there are also a lot that don’t. Many can get away with planned downtime, and provided failures don’t happen too often, can work through them with little more than a few grumbles. We are not all the same. Don’t get led astray by thinking you are Netflix.

The more downtime you can tolerate, the more HA options are available to you, and the simpler and cheaper your solutions can become.

What is the true cost of your downtime?

The customers of some companies have no brand loyalty. If the site is down, the customers will go elsewhere. Some companies have extreme brand loyalty and people will tolerate being messed around.

If Amazon is down, I will wait until it is back online and make the purchase. There could be a business impact in terms of the flow of work downstream, but they are not going to lose me as a customer. So you can argue Amazon can’t tolerate downtime, or you can argue they can.

I used to play World of Warcraft (WOW), and it always irritated me when they did the Wednesday server restarts, but I just grumbled and waited. Once again, their customer base could tolerate planned downtime.

In some cases you are talking about reputational damage. If an Oracle website is down it’s kind-of embarrassing when they are a company that sells HA solutions. Reputational damage can be quite costly.

The cost of downtime for planned maintenance and failures has to be factored into your decision about how much downtime you can tolerate.

Can you afford the uptime you are demanding?

High availability costs money. The greater the uptime you demand, the more it’s going to cost you. The costs are multi-factored. There is the cost of the kit, the licenses and the people skills. More about people later.

If you want a specific level of availability, you have to be willing to invest the money to get it. If you are on a budget, good luck with that 99.99+% uptime… 🙂

Do you have the skills to minimize downtime?

It’s rare that HA comes for free from a skills perspective. Let’s look at some scenarios involving Oracle databases.

  • Single instance on VM: You are relying on your virtual infrastructure to handle failure. Your DBAs can have less HA experience, but you need to know your virtualization folks are on form.
  • Data Guard: Your DBAs have to know all the usual skills, but also need good Data Guard skills. There is no point having a standby database if you don’t know how to use it, or it doesn’t work when you need it.
  • Real Application Clusters (RAC): Now your DBAs need RAC skills. I think most people would agree that RAC done badly will give you less availability than a single instance database, so your people have to know what they are doing.
  • RAC plus Data Guard: I think you get the point.

We often hear about containers and microservices as the solution to all things performance and HA related, but that’s going to fail badly unless you have the correct skills.

Some of these skills can be ignored if you are willing to use a cloud service that does it for you, but if not you have to staff it! That’s either internal staff, or an external consultancy. If you skimp on the skills, your HA will fail!

What are you protecting against?

The terms high availability (HA) and disaster recovery (DR) can kind-of merge in some conversations, and I don’t want to get into a war about it. The important point is people need to understand what their HA/DR solutions can do and what they can’t.

  • Failure of a process/instance on a single host.
  • Failure of a host in a cluster located in a single data centre.
  • Failover between data centres in the same geographical region.
  • Failover between data centres in different geographical regions.
  • Failover between planets in the same solar system.

You get the idea. It’s easy to put your money down and think you’ve got HA sorted, but have you really? I think we’ve all seen (or lived through) the stories about systems being designed to failover between data centres, only to find one data centre contains a vital piece of the architecture that breaks everything if it is missing.

Are all your layers highly available?

A chain is only as strong as the weakest link. What’s the point of spending a fortune on sorting out your database HA if your application layer is crap? What’s the point of having a beautifully architected HA solution in your application layer if your database HA is screwed?

Teams will often obsess about their own little piece of the puzzle, but a failure is a failure to the users. They aren’t going to say, “I bet it wasn’t the database though!”

Maybe your attention needs to be on the real problems, not performing HA masturbation on a layer that is generally working fine.

Who are you being advised by, and what is their motivation?

Not everyone is coming to the table with the same motivations.

  • Some vendors just want to sell licenses.
  • Some consultants want to charge you for expensive skills and training.
  • Some consultants and staff want to get a specific skill on their CV, and are happy for you to pay them to do that.
  • Some vendors, consultants and staff don’t engage their brain, and just offer the same solution to every company they encounter.
  • Some people genuinely care about finding the best solution to meet your needs.

Over my career I’ve seen all of these. Call me cynical, but I would suggest you always question the motives of the people giving you advice. Not everyone has your best interests at heart.

So what do you do?

In my current company we use our virtual infrastructure for basic HA. The databases (Oracle, MySQL and SQL Server) typically run as single instances in a VM, and failover to a different host in the same data centre or a different data centre depending on the nature of the failure. There are some SQL Server databases that use AlwaysOn, but I see little benefit to it for us.

Every so often the subject of better database HA comes up. We can tolerate a certain level of downtime for planned maintenance and failures, and the cost and skills required for better HA are not practical for us at this point. This position is correct for us as we currently stand. It may not be the correct option in future, and if so we will revisit it.

For the middle tier we do the normal thing of multiple app servers (VMs or containers) behind load balancers.

I could probably build a very convincing case to do things differently to make my job a little “sexier”, but that would be a dick move. As it happens I want to move everything to the cloud so I can stop obsessing about the boring stuff and let the cloud provider worry about it. 🙂

Conclusion

There is no “one size fits all” solution to anything in life. As the mighty Tom Kyte said many times, the answer is always, “it depends”. If people are making decisions without discussing the sort of things I’ve mentioned here, I would suggest their decision process is flawed. Answers like, “but that’s what Twitter does”, are unacceptable.

Cheers

Tim…

Video : APEX_MAIL : Send Emails from PL/SQL

In today’s video we’ll demonstrate how to use the APEX_MAIL package to send emails from PL/SQL.

The video is based on this article.

You may also want to check out these articles.

The star of today’s video is Paul Vallee, who knocked it out of the park with his recent post on hybrid working.

Cheers

Tim…

The Death of Critical Thinking

I’m repeatedly getting into exchanges with people who seem to be incapable of critical thinking. Here is a quote from the Wikipedia definition of Critical Thinking.

Critical thinking calls for the ability to:

  • Recognize problems, to find workable means for meeting those problems
  • Understand the importance of prioritization and order of precedence in problem-solving
  • Gather and marshal pertinent (relevant) information
  • Recognize unstated assumptions and values
  • Comprehend and use language with accuracy, clarity, and discernment
  • Interpret data, to appraise evidence and evaluate arguments
  • Recognize the existence (or non-existence) of logical relationships between propositions
  • Draw warranted conclusions and generalizations
  • Put to test the conclusions and generalizations at which one arrives
  • Reconstruct one’s patterns of beliefs on the basis of wider experience
  • Render accurate judgments about specific things and qualities in everyday life

They are all important points, but some of them jump out at me. I could write about all of them, but that would get a bit boring, so I’ll focus on the ones people seem to really struggle with.

Gather and marshal pertinent (relevant) information

It’s surprising how many people don’t go back to the source material, and are happy to rely on someone’s interpretation of it. That’s problematic, as people often focus on what they think is important, and have a habit of conveniently forgetting about material that contradicts their viewpoint. You can twist almost anything to agree with your perspective if you carefully omit some of the content and context of the source material.

It’s important you check the source material to make sure it actually exists. In a recent interaction someone was using a quote by a university professor in support of their argument. I checked on the professor in question, and they did indeed say those words, but they were citing sources that did not exist. The professor’s statements were pure fabrication, and they’ve subsequently been removed from student interactions. I suspect they will lose their job at some point. It’s not good when scientists openly lie…

You also have to consider the source of the information. Is the person really qualified to be speaking about the issue in question? That can be quite difficult to determine for some people. If we were talking about cancer, a doctor would be the correct person, right? Well, I would take the opinion of an oncologist over my general practitioner any day of the week. When discussing a virus, would you take the opinion of a doctor (unspecified discipline) over a virologist or immunologist? It’s easy to be fooled into thinking someone is a credible source when they have lots of qualifications, but in the wrong field. I have a PhD in genetic engineering, so I’m a doctor, but you probably shouldn’t ask me for gynaecology advice. 🙂

Gathering the pertinent information does not mean finding the one paper or person that agrees with your point of view. You should consider the available information as a whole, not cherry pick what suits you.

Recognize unstated assumptions and values

Ultimately, you need to go back to the source material, and then look for unstated assumptions and values in that. You might for example assume a bias if the writer is employed by a specific company, or has a long history of pushing a certain message. You can’t totally discount this information, but you do need to take that context into account when coming to any conclusion about it.

When you start using “second hand evidence” this gets really tricky because you can lose some of the original content and context. We all come with our own set of biases, whether conscious or unconscious. If I suspect someone has a strong bias in favour of a specific stance, I am less likely to listen to their interpretation of the source material, because I’m expecting their bias to influence their interpretation. If I see something that reads as a balanced argument, I will usually give it more weight. That could still be a mistake, as it is not the source material. I may still be getting fooled.

Comprehend and use language with accuracy, clarity, and discernment

People often quote scientific papers as a means to justify their point. When you check the source material, it’s clear the person in question has totally misunderstood it. Scientific papers can be quite difficult to read. Different disciplines use language differently, and it’s easy to get the wrong end of the stick. That’s why it is super important you have more than a passing understanding of the subject matter before you launch into reading scientific papers. You need to be able to question your own understanding of what you are reading, to confirm you really do understand it. Even an abstract for a paper can be quite misleading when taken out of context.

As you read more source material, you will get a better feeling for the language used, and you will also be able to go back and check your previous understanding of things you’ve read. This is why it’s really problematic if a newbie reads a single paper and decides that is definitive proof of their opinion.

Interpret data, to appraise evidence and evaluate arguments

The process of appraising evidence is really important. It comes back to the point about source material versus other people’s interpretations. The source material could be considered the highest quality, but it will include outliers. Meta-analysis is often considered superior, as it reduces the importance of outliers. Most information on the internet is not source material. It is many layers divorced from the source material. Using a news story, blog post or tweet as part of your data set could be seriously skewing your results because of the volume of content written by unqualified people.

The weight of evidence has to be taken into account. If 99% of the quality material says X and 1% says Y, it would be a brave person who assumes that Y must be the correct answer. There is a thing called consensus. A conclusion has been made by a group of qualified people based on the body of evidence as a whole, not just a couple of outliers. Remember what I said about meta-analysis.

Consensus can change over time. As more information is gathered, the weight of the evidence may change. It’s perfectly fine to find out what you believed is wrong. You made the best judgement you could using the information at your disposal. As more and better quality information arrives, it needs to be evaluated, which could in turn alter the consensus.

Reconstruct one’s patterns of beliefs on the basis of wider experience

What I believe today may not be what I believe tomorrow. Today I side with the consensus. I am willing to adapt as that consensus changes, based on new information.

I don’t just fall in line with the opinion of a specific spokesperson. They have their own beliefs and biases. It is the underlying consensus that matters to me.

Cheers

Tim…

PS. I’m not trying to make out I’m the boss or gatekeeper of critical thinking. We all jump to conclusions based on little evidence from time to time. You’ve just got to get enough self-awareness to notice you are doing it, or accept you’ve done it when someone challenges you on it.