A couple of days ago I took to Twitter with a rather “incendiary” tweet caused by my frustration with MOS (My Oracle Support). It’s not about the specific SR (Service Request) or issue itself. It’s more a frustration with MOS generally and the way they handle some requests, specifically the automatic responses. I’ll explain.
I had an issue.
I Googled and didn’t find too much in the way of help.
I opened an SR about the issue, including an image to help explain it.
During that process it suggested some other stuff I might want to look at, one of which was quite interesting, but none of which were actually relevant. No problem, I thought. At least I’ve learned something…
Next thing I get some emails about updates to my call. I logged in to find these 4 responses.
I was really angry about the auto-responses, and unloaded on Twitter using some rather “choice language”…
I totally understand a request for more information. The response of, “Please upload the RDA/TFA/AHF file”, is common and understandable on many occasions. It does annoy me more than a little when you are asking a general question that is not specific to your software version, but you still have to upload it. Whatever…
So why did I lose the plot this time?
There are 4 messages, instead of one consolidated message. I hate that. It’s annoying. I just know that someone is running a report saying, “Look, we’ve done 1 gazillion responses this month”, but it’s all generated crap! This should have been one concise and clear request for additional information.
Just look at that second response. Are you kidding me? Loads of rubbish I don’t need to know and repetition of the first message. If I sent this sort of message to my users I’d be marched out of the building. If you think this is acceptable, please quit your job now! You have no place in a role that is even remotely user-facing.
How do you think people are going to respond to this? It makes me angry and I kind-of know what I’m doing. How do you expect some junior member of staff to respond to this? I’ll tell you how. They will ignore it, never fix the issue and think “Oracle is crap”. Thanks! Just what we need. I asked a colleague to look at it and their response was, “It’s like they don’t want you to continue with the request”. See?
People pay a lot of money for support, and this is what you are presented with? Really?
I’ve now deleted the tweet. I was *not* asked to delete it, and if I had been I definitely would not have, but I decided to because it was gathering too much momentum, such is the general feeling about Oracle Support, and it was not meant to be me grandstanding. It was just genuine frustration with a service my company is paying money for!
I’m a fan of automation. I understand wanting to streamline the SR process, and if automation can help, that’s great, but this is not the way to do it!
What should it look like?
It’s just my opinion, but I think something like this would be reasonable.
We need more information to continue. Please run the following Trace File Analyzer (TFA) commands and upload the files.
1) Run this command on the Agent target machine and answer the questions when prompted.
./tfactl diagcollect -srdc emtbsmetric
2) Enable debug on the OMS server using this command.
./tfactl diagcollect -srdc emdebugon
Repeat the actions in EM that you are trying to diagnose, then disable debug on the OMS server using this command.
./tfactl diagcollect -srdc emdebugoff
If you need more information about TFA or manual file collection for this issue, check out Doc ID 2279135.1.
If you would like to read more about the My Oracle Support automatic troubleshooting, check out Doc ID 1929376.1.
A single message that asks for the relevant information, and gives links if you need something more. That gets the job done, isn’t scary to new people and isn’t going to cause me to lose it on Twitter.
Feedback from Oracle
You may have noticed this post in my feed for a couple of days, but when you clicked on it, it was password protected. That’s because I wrote the post to provide some better feedback than my initial tweet, but delayed the publication while I waited for some feedback from Oracle. I was put in contact with the Vice President, Global Customer Support and the Sr. Director, DB-EM Proactive Support. Their respective responses were as follows. I’ve left out their names as not all folks like being name-checked.
“Hi Tim, Just reviewed your blog post and agree that the auto-responses are verbose. Adding our DB proactive lead who will follow up with you directly on planned next steps.”
Vice President, Global Customer Support
“Hi Tim, I have reviewed your blog regarding your experiences with SR automation. I want to thank you for providing this feedback. Direct feedback from users of SR automation is extremely important and valuable. We take the effectiveness of our SR automation very seriously. Our intention is to provide a streamlined support experience which allows us to identify information, up front in the SR, that will result in the shortest resolution time. There is a balance between casting a wide net to ensure we receive all diagnostic data required vs. the ease of consuming/executing the request to get that data. Admittedly, we don’t always strike the correct balance.
Regarding the case described in your blog, I agree that our diagnostic messaging should be more concise and consumable. I also appreciate your thoughts on using collectors, such as TFA, to simplify the instructions. We have a plan to address this specific automation flow to eliminate superfluous information and provide a clear message around what is required and how to obtain that information. Additionally, I will incorporate your feedback into our review process, which is conducted on an on-going basis for our automation flows. Please feel free to contact me if you have any other feedback or suggestions. As I said, this kind of feedback is appreciated and always welcomed.”
Sr. Director, DB-EM Proactive Support
The whole Twitter episode wasn’t my finest moment, but if nothing else I’m glad the message got through to the correct people. Of course, all of this is just words unless something substantial happens. Please don’t let us down!
To everyone else out there, please continue to add your own constructive feedback on all things (in life). There’s no point complaining about a problem, if you’ve never actually raised it. I think of it like voting. If you didn’t bother to vote, I don’t really think you are entitled to moan about the outcome.
OpenWorld and Code One 2019 are over, and here are a few thoughts…
The tech side of things was based almost exclusively at Moscone South this year. No walking around to different buildings and hotels. In part that was due to the Moscone rebuild, making it a much larger venue now, but I suspect the numbers were down a lot on previous years. It’s hard to know as wider corridors mean you are less packed in, so maybe it was an optical illusion…
The conference felt more like a tech event this year, and less like a marketing event. OpenWorld and Code One were a lot more joined up, and I would suggest this year it was actually a single conference. I’m sure the split branding will remain for political reasons, but it would make life a lot easier if it were one event with one session catalog.
The new branding for Oracle was interesting. I said in a previous post I liked it. Much softer than the old red stuff. Let’s see how people react to it, and let’s see if the company actually changes to be more customer focused. I wrote a post called Oracle: Tech Company or Service Company? a few years ago. Maybe Oracle are now catching up? We’ll see.
The VMware announcement was interesting. I expressed my opinion on this here. I just hope this isn’t short-lived and I hope sense prevails. Oracle need to build bridges now. It’s still possible. Remember when everybody hated Microsoft?
Obviously Oracle continued to push Cloud and the Autonomous brand, including the new Autonomous Linux and Autonomous JSON. If you’ve used SODA, you know what’s going on with Autonomous JSON. From my perspective, keep the autonomous services coming. The more automated the mundane stuff becomes, the better!
I got up at a reasonable time and got caught up with blog posts, then it was time to check out and get the BART to the airport. Bag drop was empty, because the rest of the planet was waiting at security. After what felt like an eternity I was through security and sat down and waited for my plane…
We boarded the flight from San Francisco to Amsterdam on time and didn’t have a significant wait for the departure slot, so the captain said we would arrive early. No luck with a spare seat on this flight. The guy next to me was about my size, but wasn’t making an effort to stay in his space. There was some serious man-spreading going on. I ended up spending most of the flight leaning into the aisle and pulling my arm across my body, so my left elbow feels knackered now. Doing that for 11 hours is not fun. I managed to watch the following films.
Rocketman – I wasn’t feeling this at the start. I’m not big on musicals, and I didn’t like the stuff when he was a kid. Once Taron Egerton started playing him it was cool. I kind-of forgot he wasn’t Elton John. If you can get past the start, it’s worth a go!
The Accountant – I liked it. Ben Affleck doing deadpan and expressionless is the perfect role for him.
John Wick: Chapter 3 – Parabellum – I got up to the final sequence, so I’m not sure how it ends. Pretty much the same as the previous films, which I liked. Just crazy fight scenes with loads of guns.
There was one bit of the flight that was odd. The in-flight entertainment died, then we hit some turbulence. Cue me deciding it was linked and we were all going to die… Pretty soon the turbulence stopped, then after about 10 minutes the screens rebooted…
I had quite a long wait at Schiphol. About 3 hours. That was pretty dull, but what are you going to do?
The flight from Amsterdam to Birmingham was delayed by a few minutes, then there was the issue of people trying to board with 15 pieces of hand luggage and a donkey. I had my bag on my feet. Luckily it was only an hour flight.
I was originally planning to get the train home, but I was so tired I got a taxi. The driver was a nice guy and we had a chat about his kids and future plans, which is a lot nicer than listening to me drone on…
I’m now home and started doing the washing…
I’ll do a wrap-up post tomorrow, with some thoughts about the event…
I started Wednesday by trying to play catch-up with some of the keynotes. I don’t like going to them, but it’s important to hear what was said, because people often put their own spin on things to make them fit their narrative.
From there I headed down to the conference to see Michael Hüttermann with “DevOps: State of the Union”. Michael managed to pull off a session where we did all the talking. How does that work? 🙂 It was really good fun, and it was interesting to hear other people’s experiences, and how they define DevOps.
Next up was Simon Coter with “Practical DevOps with Linux, Virtualization, and Oracle Application Express”. At the start of the session Simon started a Vagrant build using the “vagrant up” command, then continued with the session, describing how tools such as VirtualBox and Vagrant can help you build consistent environments. He then described this specific build and showed us the finished product. I think the session went really well, and if you follow the blog you know I’m a VirtualBox+Vagrant fan. The other thing worth mentioning was he showed how a VirtualBox VM can be exported to OCI, and maybe in future an OCI VM will be importable back into VirtualBox. The first of those two operations means you could use VirtualBox and Vagrant as your choice for custom infrastructure builds for the cloud. Interesting…
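If you’ve not used Vagrant before, the workflow Simon demonstrated looks roughly like the sketch below. This is a minimal illustration, not his actual build; the box name, memory setting and provisioning step are all things I’ve made up for the example.

```shell
# Create a minimal Vagrantfile describing the VM.
# The box name and settings below are illustrative, not Simon's actual build.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "oraclelinux/7"      # base image to build from (illustrative)
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                   # RAM for the VM in MB
  end
  # Provisioning runs on first "vagrant up", which is what makes builds repeatable.
  config.vm.provision "shell", inline: "echo 'provisioning steps go here'"
end
EOF

# Then, with VirtualBox and Vagrant installed:
#   vagrant up        # build and provision the VM
#   vagrant ssh       # connect to it
#   vagrant destroy   # throw it away; "vagrant up" rebuilds it identically
```

The point is the Vagrantfile lives in version control, so everyone on the team gets an identical environment from a single command.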
Next up was “Embracing Constant Technical Innovation in Our Daily Life”, which was a panel session made up of Gustavo Gonzalez, Sven Bernhardt, Debra Lilley, Francisco Munoz Alvarez and me. We didn’t have a big crowd, but we did get some crowd participation. I find panels fun, and some of the practical suggestions included the following.
Write stuff, and preferably put it out on the internet. Thinking someone might read it makes you up your game, and something like blogging can help some people with motivation to try out new stuff. (Writing Tips)
Do presentations, because the pressure of a deadline often makes you focus, and there is also a desire to present something new. Remember, presenting is not just about conferences. Get a group of people in your office and present stuff to the group. It’s a good skill to develop, improves your confidence, makes you more visible in the company and, of course, improves knowledge transfer! (Public Speaking Tips)
When you get good at one thing, it makes it easier to learn new things. You understand the effort it takes and you know you have to look below the surface. (Learning New Things)
Get involved with the community. A wise person learns by other people’s mistakes. Go to local meetups for subjects outside your main skill set, to give you a different perspective. It might reinforce your beliefs or challenge them.
After that it was off to see “Understanding the Oracle Linux Cloud Native Environment (OLCNE)” with Wiekus Beukes, Tom Cocozzello and Thomas Tanaka. Oracle have built a tool that allows you to install, manage and upgrade selected Cloud Native Computing Foundation projects. That tool is called OLCNE. Why is this important? Because there are loads of CNCF projects, with a load of dependencies, so trying to install, and more importantly upgrade them, can be a nightmare. This tool will make that easier, as it will manage dependencies, and keep track of which versions of project X are certified with which versions of project Y. All these versions will be tested by Oracle to make sure things just work. The idea being you want Kubernetes + CRI-O + Prometheus + Istio? Sorted. For someone like me, who is a complete noob at most of this, that is a really interesting proposition. The project will be open sourced and on GitHub. Once it gets enough non-Oracle people contributing to the project, they hope to submit it to CNCF. Maybe we are seeing the start of how to manage CNCF projects in the future?? 🙂
After that I went to see Colm Divilly speaking about “Database Management REST APIs”. The management APIs were introduced a couple of versions ago, but with each release they are adding more stuff. We now have integration with DBCA for instance and PDB lifecycle management, as well as APIs to control features like Data Pump and get performance monitoring information. I really need to spend some time playing with these, because it’s a great way to automate operations and make them available to other people. I like to think of it as breaking down the walls of the silo by presenting what you do as a service.
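As a rough sketch of what calling one of these APIs looks like, here’s the shape of a REST request against the ORDS database API. The host, port, schema, credentials and endpoint path below are all placeholder assumptions on my part; check the ORDS documentation for the exact paths in your version.

```shell
# Base URL for the ORDS database API on a hypothetical server.
# Host, port, schema and credentials are placeholders, not real values.
ORDS_BASE="https://dbhost:8443/ords/admin/_/db-api/stable"

# Listing Data Pump jobs via REST might look like this (path assumed,
# uncomment and supply real credentials to run it for real):
#   curl -s -u admin:password "${ORDS_BASE}/database/datapump/jobs/"

echo "GET ${ORDS_BASE}/database/datapump/jobs/"
```

Once operations like this are behind HTTP endpoints, other teams can consume them without ever logging on to the database server, which is the “as a service” point above.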
Once that session was over I spent a few minutes talking to the ORDS and SQL Dev folks, then it was back to my hotel to crash. I ducked out of the concert (the ticket went to a good home) and other invites because I am old and my bed was calling me.
That was my last day at OpenWorld. I leave Thursday morning US time and will be back home at some point on Friday UK time. I’ll no doubt do a post about the journey home and a wrap-up post once I get back.
Is it safe to talk about this now? The announcement has happened and Mike Dietrich has posted about it, so I think so…
A couple of massive things have happened regarding the multitenant architecture in Oracle 19c and 20c.
Prior to 19c, you were only allowed to have a single user-defined pluggable database (plus the root container and the seed) without having to license the full multitenant option. I’ve been a big proponent of single-tenant or lone-PDB, but I can understand the reluctance of people to go that way, as it’s harder to realise the benefits, even though they do exist.
Oracle have now announced that, from 19c onward, you can have 3 user-defined PDBs without licensing the multitenant option. This is similar to what we got with 18c XE. As Mike points out, the documentation has already been changed to reflect this.
“For all offerings, if you are not licensed for Oracle Multitenant, then you may have up to 3 PDBs in a given container database at any time.”
The non-CDB architecture is desupported from 20c onward, which means your 20c upgrade will also include a migration to the multitenant architecture. What I would suggest is you start down that road today by moving to multitenant in 19c, then when you have to move to the next long term support release (2?c), you won’t be getting any surprises. What’s more, the 3 PDBs thing in 19c makes that all the more attractive!
If this announcement has made you panic, don’t worry. I’ve written a bunch of stuff about multitenant over the years, and there’s a YouTube playlist too.
I was originally expecting to start Tuesday with the Cloud Native hands-on lab, but it clashed with some other non-conference stuff I had scheduled, so I had to drop out of that. I played catch-up on blog posts and upgraded VirtualBox right before my demo, then went out to a photo shoot. Yes, I’m a model…
I had to get some shots done for a magazine piece, so Oracle arranged for me to meet a photographer and I spent some time looking off into the distance in a contemplative manner. I was going to say, “proper executive stuff”, but I was in a T-shirt and combats, so I looked my normal scruffy self. I’ve asked him to photoshop the hell out of them. If I’m recognisable, I won’t be happy. 🙂 I’m not normally at home in front of a camera, but it was surprisingly good fun. On Monday I spent 3 hours running crowd control for the photographer in the Groundbreakers Hub. On Tuesday I’m in front of the camera. I guess by Wednesday I’ll be running a production company…
From there I went straight to my “The 7 Deadly Sins of SQL” session. It covers things that are already on my website, but I’ll write a post specifically about it when I get home. I was surprised how many people showed up. It was a pretty full room. A few empty seats, but a few people standing at the back. The session clashed with the keynote, and a bunch of other sessions I would happily have attended if I wasn’t speaking, so I expected low numbers. Thanks to everyone who came. I hope you got something out of it.
I bumped into Don Sullivan from VMware and chatted to him about the impact of the Oracle & VMware announcement. Since the announcement of VMware Cloud Foundation on Oracle Cloud Infrastructure I’ve already seen some people write, “Oracle is now supported on VMware”, which makes me mad, as it has been supported for a looooong time. Plenty of people run Oracle tech on VMware and never get any problems accessing support. I’m one of those people. If nothing else, the announcement from Oracle will finally kill the Fear, Uncertainty and Doubt (FUD) around this subject. The announcement does allow Oracle to take a piece of the pie as far as running VMware on the cloud goes, since VMware have already got all the other major players in the bag. I think this hybrid cloud approach will help many companies start their journey to the cloud, regardless of the cloud provider they pick to do it with.
From there I moved on to watch “The State of the Penguin” by Wim Coekaerts, which is his yearly review of what’s happening in Linux and Virtualisation at Oracle.
If you’ve watched any of the announcements, I guess you know that Autonomous Linux was announced. I’m going to miss out a bunch of stuff for sure, but some interesting points coming out of this presentation were as follows.
UEK6 is on the way, and will bring UEK to Oracle Linux 8 (OL8) for the first time.
The new Exadata X8M, which has the PMEM and RoCE stuff, is shipping with KVM. The existing non-RoCE kit is still shipping with the Xen hypervisor, but the future for Oracle’s virtualisation push is KVM. If anyone is starting something new and thinking of picking the Xen-based OVM, you probably shouldn’t. 🙂
For ages Ksplice has been available to folks running Oracle Linux in the Oracle Cloud, as the license is baked in. This is now also the case when running Oracle Linux in Azure.
The plan is to make much of the Autonomous Linux stuff available for on-prem customers too. Wim repeatedly stated that what you have on-prem is what they run in the Oracle Cloud, and what you run in Azure etc. Most of their work is on upstream Linux, rather than on their own proprietary stuff, so everyone benefits from Oracle’s OSS contributions.
They are working on some stuff to simplify the setup and management of Kubernetes. It will be open sourced and accept community contributions once it goes to GitHub.
After that session I headed down to the Groundbreakers Hub and just hung around chatting to people. I also did a 60 second Periscope, which is much scarier than a 45 minute presentation. 🙂
This was the first evening I had free. I stuck by my guns and said no to every offer. I went back to my room and crashed! Tomorrow (Wednesday) is my last day at the conference, as I leave on Thursday morning…
When I get home I’ll probably write a series of posts about the Free Tier stuff. I’ve already written about many of the components included in the Free Tier offering individually (ADW, ATP, OCI Compute etc.), along with the supporting stuff (Compartments, Virtual Cloud Networks (VCNs), Firewall stuff etc.), but it would be good to give it a consistent story for people who are fresh into Oracle Cloud, even if it’s just links to what I already have, with some updated screen shots. I’ll sign up with a new account and go through it all from scratch.
I’ve had a number of discussions about the new Oracle branding, which is a lot softer than the previous branding and almost devoid of red. It’s been mostly positive, but one comment that keeps coming up is something along the lines of, “The new branding is supposed to be more customer focused, but that’s not going to go very far if the attitude of “the business” doesn’t change!” I think you know what that means, and I have to agree. Most people don’t have an issue with the tech side of Oracle, but do have a big problem trusting the business side of Oracle. Let’s hope this branding change is the beginning of a new era on the business side of things too!