Why no GUI installations anymore?

I had the following comment on a RAC article yesterday.

“this way of training is not good. please train as previous method.”

I believe the person in question didn’t like that I no longer do those massive articles showing how to install RAC using the GUI. Instead I do automatic silent builds using Vagrant and shell scripts. My recent RAC articles just describe the Vagrant builds. I’ve written about this before, but I think it’s worth repeating.

GUI is dead to me!

For a long time I was only running the GUI installations to get screenshots for articles. All the installations I did at work were scripted. This started to feel wrong, as I was promoting something I don’t do, and don’t believe in. As a result, I stopped putting the GUI stuff in most of my articles too, instead including the silent installation and configuration commands, and typically giving a link to my Vagrant builds.
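
As an illustration, the guts of a silent database software installation is a single call to the installer, with a response file holding all the answers the GUI would normally prompt for. A minimal sketch for 19c, with illustrative paths:

  # Unzip the software into the new Oracle home, then run a silent install.
  # The response file contains every answer you would normally click through.
  cd /u01/app/oracle/product/19.0.0/dbhome_1
  unzip -oq /vagrant/software/LINUX.X64_193000_db_home.zip
  ./runInstaller -ignorePrereq -waitforcompletion -silent \
    -responseFile /tmp/db_install.rsp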

GUI takes a long time!

I run lots of builds. Every weekend I am running Vagrant and Docker builds almost constantly on OL7, Windows 10 and macOS hosts. RAC builds of various versions. Data Guard builds of various versions. Single instance builds of various versions. Docker builds of Oracle and non-Oracle stuff. If I had to do those builds manually using the GUI I would be lucky to complete a couple a day.
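
To put that in context, each of those builds is a couple of commands once the software is staged. Something like this, with an illustrative directory layout:

  # Grab the Vagrant builds, pick one, stage the software zips it expects, then...
  git clone https://github.com/oraclebase/vagrant.git
  cd vagrant/database/ol7_19
  vagrant up

  # When I'm finished with it, it gets thrown away.
  vagrant destroy -f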

Doing archaic-style builds and gathering screenshots of them takes a long time, and that’s wasted time.

What about people learning the tech?

If you are trying to learn something new, you should start straight away with the silent builds. Try to forget the GUI stuff even exists. Why?

  • Doing something like a RAC build using the GUI is really hard. There are loads of things to click on, and it’s really easy to make mistakes along the way, like typos and picking the wrong options. It’s far easier if the only thing you have to change is the parameters in a property file.
  • If you do make a mistake in a manual build and have to start again, it’s soul destroying. Throwing away a day’s work and having to start again makes you want to cry, especially if you don’t get much time to do this anyway. I have people contacting me after spending weeks trying to fix things that are beyond repair. When they finally bite the bullet and start again, their mystery problem goes away. In contrast, restarting an automatic build is no big deal. Throw it away, start a new build and go and grab some food. No drama.
  • The scripted builds make the individual steps incredibly clear. You can look at the individual scripts in order and see what is happening, and just run the builds and prove they actually work, rather than having to trust things haven’t changed since the article was written.
  • No professional DBA should be using a GUI for anything, if there is a possibility to script that action. If you have DBAs or consultants working for you that still use GUIs, you should be questioning if they are really the people you want with hands on your systems. Their work will be inaccurate, and they are wasting your money by wasting your time, in some cases taking days to complete what can be done in minutes. If scripted working is what we expect of professionals, why teach beginners a different way of working? It feels wrong to me. You are getting them into bad habits from day one.
  • You are going to have to learn the scripted way to build things at some point, so why not start there? I think we’ve been fooled into thinking the GUI is easier, but it’s really not. For most things it’s a burden, not a help.

If you still want to use the GUI, that is your choice. Just don’t expect me to enable your bad choices, and please don’t try to get a job working with me. You are not welcome. Sorry, not sorry.

What about people who are not focusing on DBA work?

Then the GUI is an even bigger disaster. If you are a developer who just needs a database to work with, use an automatic build to give you what you need, and start doing the stuff you care about. Why would you waste time learning to install a database? Fire up an Autonomous Database for free, or use a Vagrant build or a Docker image to give you what you need.
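
As a sketch of how low the barrier is, assuming you’ve already built or pulled a database image, a throwaway database is one command (the image name and password are illustrative):

  # Spin up a disposable database in a container.
  docker run -d --name mydb \
    -p 1521:1521 \
    -e ORACLE_PWD=MyPassword1 \
    oracle/database:19.3.0-ee

  # Throw it away when you're done.
  docker rm -f mydb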

I really don’t see a reasonable defence for using a GUI for basic, repeatable administrative tasks anymore.

Conclusion

This is how I feel about it. If other people feel differently and still want to produce those GUI build articles, that’s their choice, but I would urge them to consider the message they are sending to beginners. In this day and age, the last thing we need is a bunch of people making DBAs look like dinosaurs, wasting time on stupid stuff when they could be spending that time preparing for the future!

Cheers

Tim…

Update: Based on my reply to a comment here and on Twitter.

  • Companies demand self-service for everything. They don’t want to ask a DBA to do anything. The DBA must help provide these self-service features, which will require scripting.
  • Developers need Continuous Integration and Continuous Deployment (CI/CD). That is all scripted, including the creation of short-lived databases for some Test/QA systems (see the sketch after this list).
  • Companies demand infrastructure as code, with all config and builds in source control (like Git). You can’t put button presses in Git.
  • Companies are not paying you to waste time. If we did a race, with me doing a Vagrant build of RAC and you doing it manually, I would finish several hours before you. That’s wasted time and money. I’m not sure your company would be happy about that use of time.
  • I guess it’s harder to invest the time if you only ever do something once, but the counter-argument is you should be confident you can replace everything if something breaks. I am never confident without a script.
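
To give a flavour of that, a scripted database creation is a single call to DBCA rather than a trip through its screens, which is exactly what those short-lived Test/QA databases rely on (values are illustrative):

  # Create a database silently. Every answer the GUI would ask for is a parameter.
  dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName cdb1 -sid cdb1 \
    -createAsContainerDatabase true \
    -numberOfPDBs 1 -pdbName pdb1 \
    -sysPassword MyPassword1 -systemPassword MyPassword1 -pdbAdminPassword MyPassword1 \
    -storageType FS -datafileDestination /u02/oradata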

If you are using a GUI to do something that can be scripted, you are failing. You may not have realised it yet though. I’m sorry if that sounds harsh, but it’s what I believe.

Video : Decoupling to Improve Performance

In today’s video we demonstrate how to cheat your way to looking like you’ve improved performance using decoupling.

This was based on the following article.

This came up in conversation a few days ago, so I thought it was worth resurrecting this demo. It doesn’t really matter what tech stack you use, the idea is still the same.

The star of today’s video is Logan Rosenstein, formerly of OTN, and now working for Zignal Labs, and author of Building Towers by Rolling Dice.

Cheers

Tim…

Structuring Content : Think Pyramid!

This post is about making sure you get your message across by structuring your content correctly, and giving people convenient jump-off points when they’ve hit the level that is appropriate for them.

In my post about plain language, I mentioned a discussion on structuring content, and linked to a previous post called The Art of Miscommunication. I thought it was worth digging into content structure a little more.

We can think of the content structure as a pyramid. Starting at the top we keep things short and simple, then each layer down gets progressively more detailed. A person consuming the content can get to a level appropriate for them and stop, safe in the knowledge they have not missed something vital.

Level 1

What do I need to know about this content?

  • What is it about?
  • Is it really meant for me?
  • Are there any actions assigned to me?
  • Is it even worth my time reading this, or have I been included for the hell of it?

If we think about emails, blog posts and videos, it’s important we understand what the content is about early on. This allows us to decide if this is the right content for us. In my opinion this comes down to the subject line of an email, or the title of a blog post or video, along with a short first paragraph.

Using the example of an email, it might be important that some of the management chain understand a situation is happening, but they may not understand the detail of the issue, or have time to dig into it further.

Here is a silly example of an email subject and first paragraph, which represents how I think the top level of the pyramid should work.

“Payroll run failed. Will be fixed by this afternoon!

This morning’s payroll run failed. Jayne is on the case, diagnosed the problem and is confident it will be fixed by this afternoon. The P1 incident number is INC123456.”

It’s short! It tells me what I need to know. It gives me some confidence I don’t need to worry about things unless I hear different. At this point I might feel safe to jump out of the email. I know it’s a Priority 1 (P1) incident, which means it will have a wash-up meeting, so I don’t need to respond asking someone to document and communicate the root cause. I feel most higher-level managers should be happy with this, and be able to duck out now.

Level 2

This is where we add a little more detail. We are still going to keep things short and simple. We will already have lost some of the readers, so the people left behind are here because they want something with a bit more depth. Maybe something like this.

“At 06:18 we got a notification to say the payroll process had died. It got escalated to Jayne, who checked the application logs. It looked like the payroll run had been hanging for a while and then died.

She asked the DBAs to check the database while she checked the OS layer on the app server. The DBAs said the database went really quiet at that time, like no new requests were coming through from the app layer, but didn’t think it was the database that was causing the problem.

Jayne noticed a Jenkins agent on the app server was grabbing 100% of the CPU, which is what killed the payroll run.

The Jenkins agent was restarted. The payroll run was restarted. Everyone is monitoring it, and they’re confident it will complete by 13:00.”

There is no evidence here, but it clearly describes what happened, and what is being done about it. If I didn’t feel confident about closing the email after the first level, I should now.

Level 3 and Beyond

In the case of an email, I don’t think anything beyond this point makes sense. Large emails and posts look daunting, and I get the impression people just “file them” to be looked at later. Maybe that’s just me. 🙂

In most cases, I think anything from level 3 downward should be a link to something, so the people who are interested can “get their geek on”, while everyone else gets on with their day. Something like this, for example.

“Further information:

Incident : INC123456

Problem Record : PRB123456 : How do we prevent Jenkins agents killing our stuff?

Knowledge Base: KB123456 : Diagnosing Payroll Run Failures.

Knowledge Base: KB234567 : What is Jenkins and why do we use it?”

This doesn’t add much to the size of the email, but it does give people a place to go if they need more information.

I’m making the assumption that people in the company know the evidence of the issue diagnosis and corrective actions will be included in the P1 incident, so I don’t need to add it into the email. The problem record shows we’ve got some thinking to do to make sure this doesn’t happen again. The knowledge base notes give us a place to get further information, and give us some confidence that if Jayne dies, we might still get paid next month.

Another Example

I’ve been producing content for a while, and occasionally I have light-bulb moments where I realise I’ve totally missed the point. Several years after writing an article about the Oracle Scheduler I realised the vast majority of people just want a basic example they can copy/paste. I added a section to the top of the article (here). I doubt many people move beyond that. I rarely do. 🙂

Conclusion

There is little point writing something unless you think someone is going to read it, even if that someone is yourself. You need to get the correct information to the correct people as quickly as possible, and that involves thinking about the way you present your content and write your emails. I’m not saying this is perfect. I’m not an expert at this stuff. This is just how I feel about it, and I think the pyramid approach discussed in the course is a good mental cue to keep you on track.

Cheers

Tim…

PS. You are not allowed to use this against me when you see one of my rambling posts or articles.

PPS. In real life it wasn’t a payroll system, but it was a Jenkins agent that killed everything.

PPPS. Everyone knows it’s always the network! 🙂

Increasing headcount is probably not the answer!

I’m incredibly irritated by tech people using headcount as the reason for their problems. In my experience, throwing bodies at problems is rarely the correct answer.

Increasing headcount only makes sense if:

  • You understand the problem.
  • You’ve defined the corrective actions.
  • You have processes in place to make new people productive quickly.

If you don’t understand the problem, and don’t already have a plan for solving it, hiring a load of people isn’t going to help you. It can actually make things worse. At best they will sit around and do nothing. At worst, they will start working and come up with a bunch of random “solutions” to your problems, which can leave you in a worse position than when you started. Supporting a bunch of random crap is no fun at all.

My first job was a great example of doing things the right way.

  • The company signed a new customer. The software was used to track drug trials, and each trial had a unique identifier. The customer wanted to be able to refer to trials using the unique identifier, or a free text alias. This meant adding a trial alias to loads of screens in the application. There was also a need to upgrade the whole application from Forms 3.0 to Forms 4.0.
  • The analysis was done, and two procedures were defined and documented. One gave directions for performing the upgrade. The other gave directions for adding the trial alias to the forms that needed it.
  • In addition to the existing workforce, the company hired four new people. Two were computer science graduates. Two, including me, were not. None of us had previous Oracle Database or Oracle Forms experience. After some basic training, we were put to work upgrading the forms and adding in the trial alias code.
  • It worked fine, because despite us being not much more than trained monkeys, the prerequisites had been put in place to allow someone with a PhD in cabbage sex to be a productive member of the team. There were no major blockers or constraints to deal with.

I’ve also seen it done the wrong way a bunch of times, but I’m not going to go there as it’s too depressing, and the people and companies involved will recognise themselves…

There are situations where bodies can help, but randomly throwing people at something is not a great solution if you’ve not put in the necessary effort up front to make it work. You should also be asking how many of the tasks should really be automated, so humans can be allocated to something more meaningful. In these cases, even if extra bodies could work, your focus should be on solving the root cause, not papering over the cracks.

When I discuss headcount, or throwing bodies at a problem, I could be talking about hiring more permanent staff, temporary staff or outsourcing projects. There is a lot to be said for the old saying, “You can’t outsource a problem!”, but it could easily be, “You can’t headcount a problem!” 🙂

Cheers

Tim…

Plain Language : My review of the course

Last week I went on a Plain Language course. If you were following me on Twitter, you’ll know I was feeling a bit nervous about it. I find any type of “course” difficult. I don’t like being “trapped” and I prefer to learn things at my own pace. Having said that, it went really well.

What’s the point?

How you speak and write can have a big impact on how your message is received. I work for a university, which has a large number of overseas students and staff for whom English is not their first language.

A significant proportion of our user base need accessibility tools, and a similar proportion use them by choice.

Even when English is your first language, it can be difficult to understand some of the rubbish that gets produced.

Isn’t it just about dumbing down?

Some people love flowery bullshit language. I hate it. I’m not the best at reading, so every unnecessary word requires parse time, and makes it easier for me to lose my concentration.

My problem is similar to that faced by someone who doesn’t have English as a first language, or someone using accessibility tools. There is a lot of effort spent dealing with words that add no value to the meaning.

Know your audience!

You always have to consider your audience when writing and speaking. There is a difference between writing a legal document, an academic paper and instructions on how to log into the WiFi.

My statistics tell me that about 45% of people reading this will be from India, about 43% from the USA, and the remainder are spread across the rest of the world. I have no idea about the language skills of the audience in those locations, but I’m guessing they don’t track well with someone born 50 years ago and raised in the Midlands, UK. 🙂

I’ve learned through my years of presenting that often “less is more”. Try to get as much meaning into as few words as possible. I wrote a series of Public Speaking Tips, covering my experience of international presentations. I’ve tried to keep that in mind when I’m doing my YouTube videos too.

In short, write in a style that is acceptable to your audience!

What are “the rules”?

There’s a neat summary of some of the points covered by the course here.

It’s mostly about controlling your word selection, sentence size and use of active and passive verbs. There is also some information about how to structure communications to make sure the main points and actions are obvious from the start. It’s similar to what I wrote about in my post called The Art of Miscommunication.

The course doesn’t focus on punctuation or grammar. It doesn’t remove personality from your writing. It’s all about making yourself understood.

What tools are available?

If you are using Word or Outlook, you can use the built-in tools to help.

For Word:

  • Go to “File > Options > Proofing”.
  • Select the “Check grammar with spelling” and “Show readability statistics” options.
  • Click the “OK” button to exit.

For Outlook:

  • Go to “File > Options”.
  • Select “Mail” and click the “Spelling and AutoCorrect” button under the “Compose Messages” section.
  • Select “Proofing”.
  • Select the “Mark grammar errors as you type”, “Check grammar with spelling” and “Show readability statistics” options.
  • Click the “OK” buttons to exit.

In addition to the basic statistics, there is the Flesch-Kincaid readability score. There are a bunch of browser plugins that could also help here.
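
For reference, the Flesch-Kincaid grade level boils down to a simple bit of arithmetic over your text:

  Grade Level = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

Shorter sentences and shorter words push the grade level down, which is exactly what plain language is aiming for.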

Did you disagree with anything?

I really struggled with the active vs. passive stuff. I often write in the passive voice, and I find the active voice quite aggressive. Despite this, active verbs are more direct and often make sentences shorter, so I can see the value.

I’m not sure I will, or even can, take this on board. I guess time will tell.

Conclusion

It’s a good course, and despite my initial nerves I really enjoyed it. If you get the chance to take part in something like this, you really should!

Remember, the course is the beginning of the journey!

Here are the scores for this post from Word. They scientifically prove I’m amazing and can be understood by anyone! If you don’t agree Katy, you’re a poopy head! 🙂

Cheers

Tim…

Video : Schema Only Accounts in Oracle Database 18c Onward

Today’s video is a demonstration of schema only accounts, introduced in Oracle Database 18c.

This is based on the following articles.
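
For those who just want the punchline, a schema only account is a user created without a password, using the NO AUTHENTICATION clause, which you then typically access via a proxy user. A minimal sketch, with made-up names:

  sqlplus / as sysdba <<EOF
  -- No password, so nobody can log in to this account directly.
  create user app_schema no authentication quota unlimited on users;
  grant create session, create table to app_schema;
  -- Access the schema through a proxy user, rather than sharing credentials.
  alter user app_schema grant connect through my_admin_user;
  EOF

You’d then connect through the proxy with something like “sqlplus my_admin_user[app_schema]/MyPassword1@pdb1”.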

The star of today’s video is Paul Vallee, of Pythian and Tehama fame.

Cheers

Tim…

VirtualBox 6.1.2

About a month after the release of VirtualBox 6.1, we get VirtualBox 6.1.2, a maintenance release.

The downloads and changelog are in the usual places.

So far I’ve only tried it on a Windows 10 host at work, but it looks fine.

Remember, if you use Vagrant 2.2.6 and this is your first time using VirtualBox 6.1.x, you will need to make a couple of config changes to Vagrant, as discussed in this post by Simon Coter. I’m sure once Vagrant 2.2.7 is released this will no longer be necessary.

Happy upgrading! 🙂

Cheers

Tim…

Update: Once I got home I installed VirtualBox 6.1.2 on Windows 10, Oracle Linux 7 and macOS Catalina hosts. It worked fine. 🙂

Video : Oracle REST Data Services (ORDS) : SQL Developer Web

Today’s video is a quick run through SQL Developer Web, introduced in Oracle REST Data Services (ORDS) 19.4.

For those that prefer the written word, this is based on the following article.
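
From memory, the written steps boil down to switching the feature on in the ORDS configuration, REST-enabling the schema, then browsing to the “_sdw” endpoint. A rough sketch, using the ORDS 19.4 parameter names as I remember them, so check the article for the definitive version:

  # Switch on REST-enabled SQL and SQL Developer Web in ords_params.properties.
  echo "restEnabledSql.active=true" >> params/ords_params.properties
  echo "feature.sdw=true"           >> params/ords_params.properties

  # REST-enable the schema you want to log in as.
  sqlplus testuser1/testuser1@//localhost:1521/pdb1 <<EOF
  exec ords.enable_schema;
  EOF

  # Then browse to something like: http://localhost:8080/ords/testuser1/_sdw/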

You can find all my other ORDS content here.

The reluctant star of today’s video is Tuomas Pystynen, who was held at gunpoint whilst filming this. 🙂

Cheers

Tim…

Automating SQL and PL/SQL Deployments using Liquibase

You’ll have heard me banging on about automation, but one subject that’s been conspicuous by its absence is the automation of SQL and PL/SQL deployments…

I had heard of some products that might work for me, like Flyway and Liquibase, but couldn’t really make up my mind or find the time to start learning them. Next thing I knew, SQLcl got Liquibase built in, so I figured that was probably the decision made for me in terms of product. This also coincided with discussions about making a deployment pipeline for APEX applications, which kind-of focused me. It’s sometimes hard to find the time to learn something when there is not a pressing demand for it…

Despite thinking I would probably end up using the SQLcl implementation, I started playing with the regular Liquibase client first. Kind of like starting at the grass roots. If you are working in a mixed environment, you might prefer to use the regular client, as it will work with multiple database engines.
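
For anyone who hasn’t seen it, the regular client is driven from the command line. Applying all outstanding changes looks something like this, with illustrative connection details:

  # Apply any changesets in the changelog not yet recorded against this database.
  liquibase --changeLogFile=master.changelog.xml \
            --url=jdbc:oracle:thin:@localhost:1521/pdb1 \
            --username=testuser1 \
            --password=MyPassword1 \
            update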

Once I had found my feet with that, I essentially rewrote the article to use the SQLcl implementation of Liquibase. If you are focused on Oracle, I think this is better than using the standard client.
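
The SQLcl version drops the JDBC ceremony, as it just reuses your current connection. A sketch, noting the flag names have moved around between SQLcl releases, so check “lb help” in your version:

  # Connect as the schema owner and apply the changelog.
  sql testuser1/MyPassword1@//localhost:1521/pdb1 <<EOF
  lb update -changelog master.changelog.xml
  exit
  EOF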

Both these articles were written more than 3 months ago, but I held back on publishing them for a couple of reasons.

  1. I’m pretty new to this, and I realise some of the ways I’m suggesting to use these tools don’t fall in line with the way many Liquibase users would want to use them. I’m not trying to make out I know better, but I do know what will suit me. I don’t like defining all objects as XML, and formatted SQL changelogs don’t feel like a natural way to work. I want the developer to do their job in their normal way as much as possible. That means using DDL, DML and PL/SQL scripts (see the sketch after this list).
  2. I thought there was a bug in one aspect of the SQLcl implementation, but thanks to Jeff Smith, I found out it was a problem between my keyboard and seat. 🙂
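
To make the first point more concrete, here’s the sort of changelog I mean. The developer writes normal DDL and PL/SQL scripts, and the changelog just points at them. A minimal sketch, with made-up file names:

  cat > jira-123.changelog.xml <<'EOF'
  <?xml version="1.0" encoding="UTF-8"?>
  <databaseChangeLog
      xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
          http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
    <!-- Plain DDL script, run once. -->
    <changeSet id="jira-123-1" author="tim">
      <sqlFile path="tables/orders.sql" relativeToChangelogFile="true"/>
    </changeSet>
    <!-- PL/SQL, so don't split on ";", terminate on "/", and re-run if it changes. -->
    <changeSet id="jira-123-2" author="tim" runOnChange="true">
      <sqlFile path="packages/order_api.sql" relativeToChangelogFile="true"
               splitStatements="false" endDelimiter="/"/>
    </changeSet>
  </databaseChangeLog>
  EOF

The master changelog is then just an ordered list of <include file="..."/> entries, one per piece of work.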

With a little cajoling from Jeff, I finally released them last night, then found a bunch of typos that quickly got corrected. Why are those never visible until you hit the publish button? 🙂

The biggest shock for most people will probably be that it’s not magic! I’m semi-joking there, but I figure a lot of people assume these products solve your problems, but they don’t. Both Flyway and Liquibase provide a tool set to help you, but ultimately you are going to need to modify the way you work. If you are doing random-ass stuff with no discipline, automation is never going to work for you, regardless of products. If you are prepared to work with some discipline, then tools like Liquibase can help you build the type of automated deployment pipelines you see all the time with other languages and tech stacks.

The ultimate goal is to be able to progress code through environments in a sensible way, making sure all environments are left in the same state, and allowing someone to promote code without having to give them loads of passwords etc. You would probably want a commit in a branch of your source control to trigger this.

So looking back to the APEX deployments, we might think of something like this.

  • A developer finishes their work and exports the current application using APEXExport. It doesn’t have to be that tool, but humans have a way of screwing things up, so having a guaranteed export mechanism makes sense.
  • Code gets checked into your source control. This includes any DDL, DML, packages, and of course the APEX application script.
  • A new changelog is created for the piece of work, referencing any necessary scripts, including DDL and DML, as well as the APEX script, in the correct order. That new changelog is then included in the master changelog, and these are committed to source control.
  • That commit of the changelog, or more likely a merge into a branch, triggers the deployment automation.
  • A build agent pulls down the latest source, which will probably include stuff from multiple repositories, then applies it with Liquibase, using the changelog to tell it what to do (sketched below).
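
The build agent step for that can be pretty dumb. A sketch, with the tool choice and names being illustrative, and credentials coming from the CI tool’s secret store rather than the developer:

  #!/bin/bash
  # Triggered by a merge into the release branch.
  set -e

  # Pull down the latest changelogs and scripts.
  git clone --depth 1 --branch release https://example.com/repos/db-changes.git
  cd db-changes

  # Apply everything outstanding to the target environment.
  sql ${DEPLOY_USER}/${DEPLOY_PASS}@//db-test:1521/pdb1 <<EOF
  lb update -changelog master.changelog.xml
  exit
  EOF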

That sounds pretty simple, but depending on your company and how you work, that might be kind-of hard.

  • The master changelog effectively serialises the application of changes to the database, so it has to be managed carefully. If stuff is done out of order, or clashes with another developer’s work, it has to be resolved, and that’s not always a simple process.
  • You will need something to react to commits and merges in source control. In my company we use TeamCity, and I’ve also used GitLab Pipelines to do this type of thing, but if you don’t have any background in these automation tools, then that part of the automation is going to be a learning curve.
  • We also have to consider how we handle actions from privileged accounts. Not all changes in the database are done using the same user.

Probably the biggest factor is the level of commitment you need as a team. It’s a culture change and everyone has to be on board with this. One person manually pushing their stuff into an environment can break all your hard work.

I’m toying with the idea of doing a series of posts to demonstrate such a pipeline, but it’s kind-of difficult to know how to pitch it without making it too specific, or too long and boring. 🙂

Cheers

Tim…

Midlands Microsoft 365 and Azure User Group – January 2020

Last night I went to the Midlands Microsoft 365 and Azure User Group. It was co-organised by Urfaan Azhar and Lee Thatcher from Pure Technology Group, and Adrian Newton from my company.

First up was Matt Fooks speaking about “Microsoft Cloud App Security”. Matt covered a number of use cases, including shadow IT detection, log collection, checking compliance of applications and using policies to protect systems. He demoed a few of these. The flexibility of the cloud is great, but it also allows you to create a security nightmare as your cloud estate grows. MCAS gives you visibility and control over that. I guess the value of this will depend on how far down the cloud journey you are. If you’ve got a bit of IaaS that’s being managed centrally, this isn’t going to sound too interesting. Once you open the gates and let other people/teams get involved in provisioning services, you are going to need something like this to keep some level of control over the sprawl.

I heard one of the attendees mention Snowflake, so I collared him during the break to discuss it. I’m not so interested in the headline stuff. I care more about the boring day-to-day stuff, as people tend not to talk about it. It was really interesting. Networking is great.

Next up was Richard Harrison with “The Journey from being a DBA Guy for 20 years to becoming an Azure Guy”. Richard was a fellow Oracle ACE before he moved in a new direction, so it was good to see him again. We spent some time chatting before his session, and I kept him for ages after the session chatting about a bunch of cloud related stuff. As the name of the session suggests, this was about his transition. What made him decide to make the move. His first opening into the world of the cloud. Some of the steps along the way and the amount of work involved. Richard is a smart guy, so when he says it’s hard work to keep on top of things due to the rate of change in the cloud, that should be a warning sign for people who are focused on the status quo.

There were some pieces that related nicely to the first session. For example, he discussed the control/governance aspect. To paraphrase, some services like security, budget management and databases are kept under central control because of their critical nature, but people are given pretty much free rein with platforms within their resource group. Why? Because there are loads of services with loads of features, and trying to manage them centrally is practically impossible. I think of this as a move to a high-trust culture. Of course, you have tools to monitor what’s going on to stop people doing crazy stuff, but ultimately you have to start releasing control (where it’s appropriate) or people will look elsewhere. 🙂

I’m hoping Richard will come back again and tell us some more about what he is doing, and I know he’s got some really interesting Oracle-related content that will work really well at Oracle events, so hopefully he’ll throw his hat into that arena too. I don’t want to say anything more because I don’t want to steal his thunder.

Thanks to everyone who turned up to support the event, the speakers, and the sponsor Pure Technology Group. See you at the next event.

Cheers

Tim…