It’s been a couple of weeks since Oracle 23c Free was released. I’ve already put out a getting started post to help people start their journey, but I wanted to say something about why, in my opinion, you should start learning 23c now, rather than waiting.
Learning stuff takes time
I’ve written some tips about learning new things, and one of the posts was about time and repetition. It’s going to take a long time to get used to using 23c. If the latest version you have used is 19c, you’ve got the 21c and 23c new features to catch up on. If you’re using something older than 19c, then you’ve got even more to do. Sure, you can use Oracle 23c like it’s Oracle 7, and for the most part things will be fine, but really you should try and get the most out of the products you are paying for.
The sooner you start, the easier it will be, and the less rushed you will feel when you eventually have to start.
I’ve already put out a lot of 23c posts, but remember I’m on the 23c beta program, so I’ve been writing these posts for over 5 months now. They didn’t happen overnight.
Feeling comfortable with a new release takes me years. There is a difference between writing a post about something, and feeling comfortable using it in my day-to-day life. It will take some time before some of this stuff is a natural part of my toolbox. I want that to happen as quickly as possible, so that means getting stuck in now, so when the full blown 23c goes GA I can hit the ground running.
It might make you change the way you do things now
Sometimes, knowing what is coming around the corner will alter the way you approach a problem today. You may use the way Oracle has implemented a new feature as inspiration for how to solve your current problem in an older release. If possible, you may code it such that you can do a straight swap when you upgrade to 23c, throwing away your code and replacing it with built in functionality.
You don’t know what you don’t know
My initial reaction to new releases is often, “there’s nothing there I care about”. Then I start working through the new features and things jump out at me. The more I work with it, the more interesting it gets. I already feel like I can’t live without the IF [NOT] EXISTS DDL clause. On the surface it seems so trivial, but it makes a massive difference. 🙂
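As a quick illustration of why it makes such a difference, here is a minimal sketch of the 23c syntax. The table name is just an example.

```sql
-- No ORA-00955 if the table already exists, and no ORA-00942 on the
-- drop if it doesn't. Handy for rerunnable deployment scripts.
CREATE TABLE IF NOT EXISTS demo_t1 (
  id          NUMBER,
  description VARCHAR2(50)
);

DROP TABLE IF EXISTS demo_t1 PURGE;
```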
The Oracle 23c Free release contains mostly developer focused new features. We won’t see some of the administration features publicly until we get the full release, but there is still a lot to work through.
23c is new to everyone
I say this with each new release, but remember there are no Oracle 23c experts. Even Oracle staff and people on the beta program don't have what I would consider significant production experience. Just remember this when you are reading posts about the 23c new features. We are all noobs! That doesn't mean there is no value in these posts, but just remember what they represent.
It’s your choice
You are going to have to learn this stuff eventually, so you have two choices.
You can wait until the last minute, try to cram, and inevitably make mistakes when you are doing something real in your job.
You can start now and coast into the new release, so when you have to do something real with 23c, it’s not a problem.
It’s up to you, but you know what I think you should do. 🙂
In summary, Oracle 23c Free is the replacement for what would have been Oracle 23c XE, but it is a developer release, so it's not the final form of Oracle 23c Free. We should get an updated version of 23c Free once the main Oracle Database 23c release becomes GA.
Where do I get it?
If you want to install it from the RPM you can download it from here.
There is a VirtualBox appliance and a Docker image available from Oracle, so you don’t actually have to install it if you don’t want to.
I have a 23c page on the website, where I'll be posting Oracle 23c articles.
I’ve written a bunch of articles against the 23c beta 1 release, but I’ve not published any of them yet because of the beta program NDA. I’m going to work through them against the 23c Free developer release, and anything I’m allowed to publish, I will. Some of the articles will have to be held back until the GA release of 23c, as they are not covered by this release.
Basically, if it is documented in Oracle Database 23c Free, I can write about it. If not, I’m still under NDA, so I will release those articles later.
In case you didn’t know, countries occasionally change their time zones, or alter the way they handle daylight saving time (DST). To let the database know about these changes we have to apply a new database time zone file. The updated files have been shipped with upgrades and patches since 11gR2, but applying them to the database has always been a manual operation.
With the recent switch over to daylight savings time in the UK I decided to post this question on Twitter yesterday.
Fewer than 6% of people update their time zone files on a regular schedule. Nearly 45% only do the updates after a database upgrade, and nearly 50% never do it at all.
I can’t say I’m surprised by the results. In terms of the reasoning for these responses, I’ll reference some of the comments on Twitter.
“Every ru patch, also thanks to 19.18 it is included now and with out of place upgrade and autoupgrade, i dont do it anymore 🙂 all automatic.”
If you are using AutoUpgrade to patch to a new Oracle Home, then applying updated time zone files is really easy. Before 19.18 it’s just a single entry “timezone_upg=yes” in the AutoUpgrade config file. From 19.18 onward the update of the time zone file is the default action (see here).
So interestingly, there may be some people who don’t know they are applying an update of their time zone file, who actually are now…
This feels like the natural time to do it for me, and it seems many other people feel the same.
As mentioned previously, AutoUpgrade makes it simple. From 21c onward AutoUpgrade is the main upgrade approach, even for those that have resisted using it for previous versions, so this question goes away from an upgrade perspective.
We can specifically tell it not to perform the action using “timezone_upg=no”, but I’m guessing most people will just go with the default action.
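For anyone who hasn't seen one, an AutoUpgrade config file entry might look something like this, with the time zone update explicitly disabled. The `upg1` prefix, paths and SID are placeholders for your own values.

```
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_2
upg1.sid=cdb1
upg1.timezone_upg=no
```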
“NEVER. As an American-only company with very little need for time-specific data, quite unnecessary. Horrible design with no rollbacks and headaches w/data pump. Just not worth it if possible to avoid”
I totally understand this response. Many of us work with systems that are limited to our own country. Assuming our country doesn’t alter its own daylight savings time rules, then using an old time zone file is unlikely to cause an issue.
When you consider the number of people that run *very old* versions of Oracle, you can see that using old versions of the time zone file doesn’t present a major issue in these circumstances.
With reference to the data pump issue, I’ve experienced this, and it was also picked up in the comments.
“My hypothesis: Most do it when datapump tells they need to do it to get the import file they just received to load”
It seems like the time zone file version is not high on the list of priorities for most people, providing it is not causing a data pump issue. I totally understand this, and I myself only consider it during database upgrades.
I always like reading these poll results. I know the sample size is small, but it gives you a good idea of how your beliefs compare to the wider audience.
If you are interested to know how to manually upgrade your time zone file, you can read about it here.
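If you just want to check where you stand before going down that route, these queries show the time zone file version the database is currently using, and (in recent releases) the latest version shipped with the Oracle Home.

```sql
-- Version of the time zone file currently in use by the database.
SELECT * FROM v$timezone_file;

-- Latest time zone file version available in this Oracle Home.
SELECT dbms_dst.get_latest_timezone_version FROM dual;
```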
I’ve been having a play with ChatGPT over the last few weeks, and I wanted to give a little example of using it to try and write some PL/SQL code. Importantly, I wanted to know how close it would get to generating runnable code against an Oracle database.
Calling a web service from PL/SQL
Let’s start by asking it a pretty straightforward question.
I think that’s pretty impressive, and not much different to my own example on the website. If someone gave me that answer in an interview question I would think they had done a reasonable job, but does it work? Not without some additional stuff.
What about the network ACL?
The first issue is it doesn’t mention we need to create a network ACL to allow us to make a callout from the database.
Does ChatGPT understand we need an ACL? Let’s see. I ask it about the ACL, and it gives me an explanation of using an ACL.
It’s using the pre-12c syntax, but let’s create it with the correct values.
conn sys/SysPassword1@//localhost:1521/pdb1 as sysdba

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'jsonplaceholder.typicode.com_acl.xml',
    description => 'Example ACL for REST web service',
    principal   => 'APEX_220200',
    is_grant    => TRUE,
    privilege   => 'connect',
    start_date  => NULL,
    end_date    => NULL);

  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'jsonplaceholder.typicode.com_acl.xml',
    principal => 'APEX_220200',
    is_grant  => TRUE,
    privilege => 'resolve');

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl  => 'jsonplaceholder.typicode.com_acl.xml',
    host => 'jsonplaceholder.typicode.com');

  COMMIT;
END;
/
I would normally prefer to use the 12c+ syntax, but we will continue with what ChatGPT suggests rather than switching to the newer approach.
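For reference, the 12c+ approach collapses the three calls into a single APPEND_HOST_ACE call, something like this, reusing the same principal and host.

```sql
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE (
    host => 'jsonplaceholder.typicode.com',
    ace  => xs$ace_type(privilege_list => xs$name_list('connect', 'resolve'),
                        principal_name => 'APEX_220200',
                        principal_type => xs_acl.ptype_db));
END;
/
```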
The next issue is it doesn’t tell us to create a wallet to hold the root certificate for the URL. It’s using an HTTPS address, so I have to use a wallet. I downloaded the root certificate from the site and put it in the “/tmp” location on the database server, then created the wallet as follows.
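The wallet creation typically looks something like this with orapki. The wallet password and certificate file name here are placeholders, not the exact values used.

```
mkdir -p /u01/wallet

orapki wallet create -wallet /u01/wallet -pwd WalletPasswd123 -auto_login
orapki wallet add -wallet /u01/wallet -trusted_cert \
  -cert /tmp/root_cert.pem -pwd WalletPasswd123
```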
Because it didn’t tell me the wallet was needed, it also neglected to tell me the P_WALLET_PATH parameter was necessary in the MAKE_REST_REQUEST call.
Does ChatGPT understand we need a wallet? I ask it about the wallet, and include a typo. 🙂 It didn’t care about the typo, and still gave us an answer. It doesn’t tell us how to create the wallet itself, and it thinks the SET_WALLET procedure is in the APEX_WEB_SERVICE package, but it’s in the UTL_HTTP package.
I feel like that is a fail. Not only did it get the wrong package name, but we would typically use the P_WALLET_PATH parameter with APEX_WEB_SERVICE. Even so, it might give us a clue about where to look next.
What about the output from the DBMS_OUTPUT package?
Finally, it didn’t tell us to turn on serveroutput to display the output from the DBMS_OUTPUT.PUT_LINE call. If this code was called from an IDE that might not matter, but from SQL*Plus or SQLcl it’s important if we want to see the result. I asked ChatGPT why I couldn’t see the output and it produced a lot of text that kind-of alluded to the issue, but didn’t flat out tell us what to do.
Did the final solution work?
With the ACL and wallet in place, adding the P_WALLET_PATH parameter to the MAKE_REST_REQUEST call and turning on serveroutput, the answer is yes.
set serveroutput on

DECLARE
  l_url      VARCHAR2(4000) := 'https://jsonplaceholder.typicode.com/todos/1'; -- Replace with your API endpoint
  l_response CLOB;
BEGIN
  APEX_WEB_SERVICE.G_REQUEST_HEADERS(1).name  := 'Content-Type';
  APEX_WEB_SERVICE.G_REQUEST_HEADERS(1).value := 'application/json'; -- Replace with your desired content type

  l_response := APEX_WEB_SERVICE.MAKE_REST_REQUEST(
    p_url         => l_url,
    p_http_method => 'GET', -- Replace with your desired HTTP method
    p_wallet_path => 'file:/u01/wallet');

  -- Do something with the response, for example print it to the console.
  DBMS_OUTPUT.PUT_LINE(l_response);
END;
/

{
  "userId": 1,
  "id": 1,
  "title": "delectus aut autem",
  "completed": false
}

PL/SQL procedure successfully completed.
Overall it is pretty impressive. Is it perfect? No.
The interesting thing is we can ask subsequent questions, and it understands that these are in the context of what came before, just like when we speak to humans. This process of asking new questions allows us to refine the answer.
Just as we need some “Google-fu” when searching the internet, we also need some “ChatGPT-fu”. We need to ask good questions, and if we know absolutely nothing about a subject, the answers we get may still leave us confused.
We get no references for where the information came from, which makes it hard to fact check. The Bing integration does include references to source material.
Currently ChatGPT is based around a 2021 view of the world. It would be interesting to see what happens when this is repeated with the Bing integration, which does live searches of Bing for the base information.
When we consider this is AI, and we remember this is the worst it is ever going to be, it’s still very impressive.
Day 2 started pretty much the same as day 1. I arrived late to avoid the traffic.
The first session I went to was Martin Nash with “Oracle Databases in a Multicloud World”. I was a bit late to this session, but from what I saw it seemed the general view was “don’t be stupid, stupid”. Multi-cloud can add some complexity and latency, but if it’s what you need, it’s all manageable. If you do have a system where multi-cloud is not suitable, don’t do it for that system. Most things can be migrated, but some things are easier than others. Pick your fights. Sorry if I came to the wrong conclusion… 🙂
I often get the feeling that some people think everything has to be all or nothing. We have Oracle Cloud Apps on Oracle Cloud. A bunch of stuff on Azure, with more to come. One of our major systems is moving to AWS in the next couple of years. Then of course we still have a load of stuff on-prem. This isn’t because we are desperate to be multi-cloud. It’s just the way things have happened. I’m sure we’ll run into some issues along the way, but I’m also sure we’ll solve them. One size does not fit all…
BTW Martin now works for Google, so we have to hate him. 🙂
Next up was Jasmin Fluri with “The Science of Database CI/CD”. I already had the long form of this presentation because I read Jasmin’s masters thesis, but I was interested to see how she summarised some of it into a presentation. She did a great job of getting the main points into a 45 minute session, which can’t have been easy. It was also a little depressing, because I’ve come a long way, but I’ve still got such a long way to go. Ah well…
Last up for me was “The Death of the Data Scientist, But Long Live Data Science” by Brendan Tierney. To summarise Brendan talked about the recent mass layoffs of data scientists, suggesting a number of factors including a glut of data scientists on the market, low return on investment from many data science teams, and the simplification and automation of data science to the point where it had now been integrated into products and domain-specific staff roles. It’s typical hype cycle stuff. We’ve moved from the “Peak of Inflated Expectations” to the “Trough of Disillusionment”, and the job market has corrected itself because of that. It doesn’t sound like a move from DBA to data scientist is a great career move right now. 🙂
I spent some time chatting to Brendan, then it was off to beat the traffic home so I could return to real life again.
UKOUG Breakthrough 2022 is over now, and it was a good introduction back into the world of face to face conferences for me. I’m still very nervous about the thought of travelling, but based on the last few days I’m hoping I can get my conference mojo back. Just don’t expect too much too soon. 🙂
Thanks to all the conference organisers and speakers for giving up their time to make this happen. See you all again soon.
Why do I do this? As mentioned in the first link, Fedora is a proving ground for future versions of RHEL, and therefore Oracle Linux. I like to see what is coming around the corner. Doing this has no “real world” value, but I’m a geek, and this is what geeks do.
Having recently put out a post about database patching, I was interested to know what people out in the world were doing, so I went to Twitter to ask.
As always, the sample size is small and my followers have an Oracle bias, so you can decide how representative you think these numbers are…
Here was the first question.
There was a fairly even spread of answers, with about a third of people doing quarterly patching, and a quarter doing six-monthly patching. I feel like both these options are reasonable. About 20% were doing yearly patching, which is starting to sound a little risky to me. The real downer was over 22% of people never patch their databases. This is interesting when you consider the recent announcement about monthly recommended patches (MRPs).
For those people that never patch, I can think of a few reasons why, off the top of my head.
Lack of testing resource. I think patch frequency has more to do with testing than any other factor. If you have a lot of databases, the testing resource to get through a patching cycle can be quite considerable. This is why you have to invest some time and money into automated testing.
If it ain’t broke, don’t fix it. The problem is, it is broken! How long after your system has been compromised will it be before you notice? How are your customers going to feel when you have a data breach and they find out you haven’t even taken basic steps to protect them? I don’t envy you explaining this…
Fear of downtime. I know downtime is a real issue to some companies, but there are several ways to mitigate this, and you have to balance the pros and the cons. I think if most people are honest, they can afford the downtime to patch their systems. They are just using this as an excuse.
Patching is risky. I understand that patches can introduce new issues, but that is why there are multiple ways to patch, with some being more conservative from a risk perspective. I think this is just another excuse.
Out of support database versions. I think this is a big factor. A lot of people run really old versions of the database that are no longer in support, and are no longer receiving patches. I don’t even think I need to explain why this is a terrible idea. Once again, how are you going to explain this to your customers?
Lack of skills. We like to think that every system is looked after by a qualified DBA, but the reality is that is just not true. I get a lot of questions from people who are SQL Server and MySQL DBAs that have been given some Oracle databases to look after, and they freely admit to not having the skills to look after them. Even amongst Oracle DBAs there is a massive variation in skills. Oracle patching has improved over the years, but it is still painful compared to other database engines. Just saying.
Type of Patching
This was the second question.
This was a fairly even split, with In-Place winning by a small margin. Oracle recommend Out-Of-Place patching, but I think both options are fine if you understand the implications. I discussed these in my previous post.
I think of patch frequency in a similar way to upgrade frequency. If you do it very rarely, it’s really scary, and because nobody remembers what they did last time, there are a bunch of problems that occur, which makes everyone nervous about the next patch/upgrade. There are two ways to respond to this. The first is to delay patching and upgrades as long as possible, which will result in the next big disaster project. The second is to increase your patch/upgrade frequency, so everyone becomes well versed in what they have to do, and it becomes a well oiled machine. You get good at what you do frequently. As you might expect, I prefer the second option. I’ve fought long and hard to get my company into a quarterly patching schedule, and it will only decrease in frequency over my dead body!
Assuming the results of these polls are representative of the wider community, I feel like Oracle need to sit up and take notice. Patching is better than it was, but “less bad” is not the same as “good”. It is still too complicated, and too prone to introducing new issues IMHO!
I also did a Vagrant build, which includes the build of the servers, the database software installations, database creations and the prerequisites, so you can jump straight to the DG PDB configuration section in the article. You can find that build here.
So that’s the basic how-to covered, and I really do mean “basic”. There is a lot more people might want to do with it, but it’s beyond the scope of my little Vagrant build.
What do I think about it?
Well I guess you know how this is going to go, based on the title of this post. I don’t like it (yet), but I’m going to try and be a bit more constructive than that.
It is buggy! : I know 21c is an innovation release, but this is a HA/DR solution, so it needs to be bullet proof and it’s not. There are a number of issues when you come to use it, which will most likely be fixed in a future release update, or database version, but for now this is a production release and I don’t feel like it is a safe pair of hands for real PDBs. That is a *very* bad look for a product of this type.
Is it Data Guard? Really? : Once again, I know this is the first release of this functionality, but there are so many restrictions associated with it that I wonder if it is even deserving of the Data Guard name. I feel like it should have been a little further along the development cycle before it got associated with the name Data Guard. The first time someone has a problem with DG PDB, and they definitely will, they are going to say some choice words about Data Guard. I know this because I was throwing around some expletives when I was having issues with it. That’s not a feeling you want associated with one of your HA/DR products…
Is this even scriptable? : The “add pluggable database” step in the DGMGRL utility prompts for a password. Maybe I’ve missed something, but I didn’t see a way to supply this silently. If it needs human interaction it is not finished. If someone can explain to me what I’ve missed, that would be good. If I’m correct and this can’t be done silently, it needs some new arguments. It doesn’t help that it consistently fails the first time you call it, but works the second time. Ouch!
Is the standby PDB created or not? : When you run the “add pluggable database” command (and it eventually works) it creates the standby PDB, but there are no datafiles associated with it. You have to copy those across yourself. The default action should be to copy the files across. Oracle could do it quite easily with the DBMS_FILE_TRANSFER package, or some variant of a hot clone. There should still be an option to not do the datafile copy, as some people might want to move the files manually, and that is fine, but to not have a way to include the file copy seems a bit crappy.
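For what it’s worth, pulling a datafile across from the primary can be sketched with DBMS_FILE_TRANSFER, something like this. The directory objects, file name and database link name are all placeholders for your own environment.

```sql
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE (
    source_directory_object      => 'SRC_DIR',       -- directory object on the primary
    source_file_name             => 'pdb1_01.dbf',
    source_database              => 'primary_link',  -- database link to the primary
    destination_directory_object => 'DST_DIR',       -- directory object on the standby
    destination_file_name        => 'pdb1_01.dbf');
END;
/
```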
Ease of use : Oracle 21c introduced the PREPARE FOR DATA GUARD command, which automates a whole bunch of prerequisites for Data Guard setup, which is a really nice touch. Of course DG PDB has many of the same prerequisites, so you can use PREPARE FOR DATA GUARD to get yourself in a good place to start, but I still feel like there are too many moving parts to get going. I really want it to be a single command that takes me from zero to hero. I could say this about many other Oracle features too, but that’s the subject of another blog post.
Overall : A few times I got myself into such a mess the only thing I could do was rebuild the whole environment. That’s not a good look for a HA/DR product!
I’m sorry if I’ve pissed off any of the folks that worked on this feature. It wasn’t my intention. I just don’t think this is ready to be included in a production release yet. I’m hoping I can sing its praises in a future release of this functionality!