I just noticed KeePass 2.41 was released about a week ago.
Downloads and Changelog available from the usual places.
You can read about how I use KeePass and KeePassXC on my Windows, Mac and Android devices here.
Cheers
Tim…
I thought I would post an update about some of the things I’ve been doing that don’t necessarily fall exactly in line with my normal website content. All of it can be found on my GitHub.
Once the ‘bento/fedora-29’ box was released I created an Oracle 18c on Fedora 29 build. If you are interested in that sort of thing you can find it here.
A few of the other Vagrant builds have been updated to use the ‘bento/oracle-7.6’ box. I’ve run through them all and they seem to be fine.
As part of a recent question, I ran my RAC builds on Windows 10, Oracle Linux 7.6 and macOS Mojave hosts. They all worked fine, with no drama. I also tried them with less memory than before, as my MBP only has 16G of memory, and that worked fine too. I updated some of the “README.md” files to reflect these tests, and the option to use less memory.
I’ll be doing some stuff with Data Guard soon, so I will probably update those builds to use the latest ‘bento/oracle-7.6’ box and maybe neaten up anything that annoys me along the way. 🙂
All the Vagrant-related stuff can be found in this GitHub repository.
I’ve always assumed Vagrant was so simple it didn’t really require much in the way of explanation, but I was discussing it with someone from work, and figured it was worth a short post to explain a few things, just to save me having to repeat myself, so here it is.
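For anyone who hasn’t tried it, the whole lifecycle really is just a handful of commands. A minimal sketch (the box name here is just an example; any Vagrant box works the same way):

```shell
# Typical Vagrant lifecycle, run from a directory containing a Vagrantfile.
vagrant init bento/oracle-7.6   # generate a starter Vagrantfile for the named box
vagrant up                      # download the box (first run only) and boot the VM
vagrant ssh                     # open a shell inside the running VM
vagrant halt                    # stop the VM, keeping its state
vagrant destroy -f              # delete the VM completely
```

Everything about the VM (box, memory, networking, provisioning scripts) lives in the Vagrantfile, so the whole build is just a file in version control.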
I’ve done a few random things on Docker recently. Nothing particularly earth-shattering, but maybe worth a mention.
At UKOUG last year (a month ago 🙂 ) I was speaking to Roel Hartman about some stuff he mentioned in his Docker session. As a result of that I had a play with Portainer and Docker Swarm. I know Kubernetes has won the container orchestration war, but Swarm is so simple and does most of what I need.
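As a rough sketch of how little it takes to get going (the published port and the Portainer image name are illustrative, not a recommendation):

```shell
# Turn the current Docker host into a single-node swarm, then run
# Portainer as a service with access to the Docker socket.
docker swarm init
docker service create --name portainer \
  --publish 9000:9000 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer
```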
I also needed to make some changes to my DB and ORDS Docker images to make using host directories as persistent volumes a little easier. I wrote these up as some short posts.
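The basic pattern for using a host directory as a persistent volume looks something like this (the paths and image name below are placeholders, not the actual names from my images):

```shell
# Create a host directory and map it into the container, so the data
# survives the container being removed and recreated.
mkdir -p /u01/volumes/db_data
docker run -d --name db \
  -v /u01/volumes/db_data:/u02 \
  my-database-image
```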
All the Docker-related stuff can be found in this GitHub repository.
As always, I feel the need to mention I’m not an expert in this stuff, and I don’t consider any of it “production ready”. It’s just stuff I’m playing with to learn the tech. If you find it useful, great. If not, that’s OK too. 🙂
Cheers
Tim…
I’ve already made this point in a previous post, but I thought it was worth mentioning in a little more detail.
One of the neat things about automation is it gives you the ability to quickly build/replace test environments, so you know you have a consistent starting point. This is especially important for automated testing (unit, integration etc.), but it also applies to your learning experience.
I’m currently learning about a bunch of Oracle 18c new features. Some of those features are limited to engineered systems and Oracle Database Cloud Services. Not to worry, there is a little hack that gets you round some of those restrictions for testing purposes.
alter system set "_exadata_feature_on"=true scope=spfile;
shutdown immediate;
startup;
In some cases I’m enabling extended data types. I’m also building additional test instances, and multiple test users, each requiring different levels of privileges.
So I finish learning about feature X and I move on to learning about feature Y. What am I bringing along with me for the ride? What problems will I run into, or not run into, as a result of the hacks I put in place for the previous test? I have no way of knowing.
This is where automation comes in really useful. You can quickly burn and build your test environment and start with a clean slate. This can also be really useful to check your understanding, by rerunning your tests on clean kit. Did you really remember to write down everything you did? 🙂
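With Vagrant, the burn-and-build cycle is literally two commands:

```shell
# Throw away the current VM and rebuild it from scratch, giving a
# known clean starting point for the next round of testing.
vagrant destroy -f
vagrant up
```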
It’s kind-of obvious I know, but it’s really surprising how often I’m rebuilding my testing kit these days. I’m literally talking multiple times a day just when I’m messing about with stuff. Earlier in the week someone asked me a question about RAC builds and I did the following in a 3 hour period, while I was doing other stuff. 🙂
There’s no way I could have contemplated that without automation.
When you are learning new stuff, the last thing you need to worry about is being thrown off target by crap left over from previous tests, so just start again with a clean slate!
Check out the rest of the series here.
Cheers
Tim…
PS. I know sometimes you can learn interesting stuff by making mistakes, like finding out that feature X and feature Y are incompatible, but I think you should approach those sort of tests in a controlled and conscious manner. Learning the basics first is far more important in my opinion.
I posted a few days ago about the release of WordPress 5.0. As I said at the time, you can always expect a rash of new updates after a major release and this is the second maintenance release since then. There’s no drama, as these maintenance releases are applied automatically, so by the time you read this, you will probably already have it. The point of this post is to remind you to check for the other updates that aren’t automatic.
Since the release of version 5.0 I’ve had a lot of updates to plugins and some theme updates. The 5.0.2 release has come with another bunch of theme updates too. None of the plugin and theme updates happen automatically, so if you are self-hosting, remember to check the updates to these components. Running old versions of plugins and themes can present a security risk, as well as leading to unexpected behaviour of your site.
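If you manage your site with WP-CLI (assuming you have it installed on the server), the non-automatic updates can be applied from the command line:

```shell
# Update all plugins and themes, then check the core files are intact.
wp plugin update --all
wp theme update --all
wp core verify-checksums
```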
Happy upgrading. 🙂
Cheers
Tim…
Especially if you are self-hosting WordPress, you might have noticed that WordPress 5.0 has been born.
I’m not a WordPress aficionado, so I don’t really pay much attention to most of the WordPress new features, but something you can’t avoid is the new editor. It’s completely different.
The new editor has been available for some time for the previous WordPress version as the “Gutenberg” plugin. The dashboard has been encouraging you to try it for ages. Once you get to WordPress 5.0 you can switch back to the original editor using the “Classic Editor” plugin, which will allegedly be supported until 2021.
What are/were my first impressions? I previously tried the Gutenberg plugin, pretty much hated it, and switched back right away. 🙂 Now it is the main editor I’m going to try and stick with it.
I think the first thing that might freak you out is the idea of blocks. At first it seems really odd, as it implied to me I’d have to add a new block every time I wanted a new paragraph. Not so! You just type and it figures the block thing out for you. Hit return and you start a new block. I think I’m probably guilty of over-thinking a lot of this stuff, rather than going with the flow and just seeing what happens.
I find it interesting how in some aspects of my life I’m quick to embrace change, like in the Oracle world, but in other parts of my life changes cause me problems. I think it probably comes down to what I’m interested in. I’m just not interested in blogging tools. I’m interested in blogging itself.
I’m also acutely aware that I often resist change, then a couple of weeks down the line I can barely remember a time before the change. I’m pretty sure that will be the case here. Today it took me a few minutes to figure out how to put that WordPress logo in the top-left of this post, whereas previously it took a second. I think it’s actually easier now and more WYSIWYG than it was before, but when it’s different, it feels wrong. 🙂
So that’s it. Give it a go and see what you think!
Cheers
Tim…
PS. I expect a whole bunch of updates to come in the next few weeks as they discover all the bugs and security holes they’ve put into the new version. 🙂
A few months ago I decided to write a post about the lost time associated with the hand-offs between teams. It was relevant to a conversation I wanted to have, and I wanted to order my thoughts before I went into that conversation. That post accidentally became a series of posts, which I’ve listed below.
I’m not an expert at automation and I’m far from being an expert at DevOps. These posts were just a useful exercise for me, so I thought they might be of interest to other people.
I’m not sure if I’ll write any more, but if I do, I’ll add them to this page.
I’ve added an Automation category to the blog, which I’ve been using to categorise these posts, and other things like my posts about Docker and Vagrant.
Cheers
Tim…
I was going to include Technical Debt in yesterday’s post about Unplanned Work, but I thought it deserved a post of its own.
What is it? You can read a definition here, but essentially it comes down to a short-term approach to solving problems. It can be applied to many situations, but here are two.
In both these cases, it might actually be the correct decision to just move forward, as you may not have the necessary time and skills yet to do something “better”. It’s not the specific decision that matters as much as the recognition of the implications of that decision. By moving forward with this, you have to recognise you’ve added to your technical debt.
In the case of the development example it’s quite obvious. You now have yet another application that will have to be upgraded/rewritten in the future. You’ve added to your future workload.
In the case of the server it may be less obvious. If everything were done properly, with no human errors, you may have a beautifully consistent and perfect server, but the reality is that isn’t going to happen and you’ve just added another “non-standard” server to your organisation, that will probably result in more unplanned work later, and should immediately go on the list of things that needs replacing, once an automated and standardised approach is created.
Technical debt is insidious because it’s so easy to justify that you made the right decision, and turn a blind eye to the problems down the road.
What’s this got to do with automation? In this case it’s about removing obstacles. Improving your infrastructure and application delivery pipelines makes it far easier to make changes in the future, and one thing we know about working in technology is everything is constantly changing. I see automation as an enabler of change, which can help you make decisions that won’t add to your technical debt.
Check out the rest of the series here.
Cheers
Tim…
In a previous post I talked about lost time associated with manual processes and hand-offs between teams, but in this post I want to look at time from a different perspective…
One of the big arguments I hear against automation is, “We don’t have time to work on automation!” If you don’t think you have time now, how are you going to make time when you have to deal with another 10, 100, 1000 servers? I don’t know about you, but every week I have to deal with more stuff, not less. If I waited for a convenient opportunity to work on automation, it would never happen.
I think a lot of this comes from a flawed mindset as far as automation is concerned. There seems to be this attitude that we have to get from where we are now to a full blown private cloud solution in a single step/project. Instead we should be trying to incrementally improve things. This idea of continuous improvement has been part of agile and DevOps for years. It doesn’t have to be great leaps. It can be small incremental changes that, over time, amount to something big.
As a DBA we might think of these baby steps along the path.
If all you have time to do is steps 1 & 2, you will still have saved yourself some time, as you can start a script and do something else until it finishes. That could be working on improving your automation. Added to that you’ve improved the reliability of those steps of the process, so you won’t have to redo things if you’ve made mistakes, or live with those mistakes forever.
I understand that company politics or internal company structure can make some things difficult. Believe me, I run into this all the time. I can build whole systems with a single command at home, but at work I have to break up some of my automation processes into separate steps because other teams have to perform certain tasks, and they haven’t exposed their work to me as a service. As frustrating as that can be, it doesn’t stop you improving your work, and maybe trying to gently nudge those around you to join in.
Remember, each time you save some time by automating something, invest some of that “saved” time into improving your automation, and automation skill set. Over time this will allow you to take on more work with the same number of staff, or even branch out into some new areas, so you aren’t left out on a limb when everything becomes autonomous. 🙂
Check out the rest of the series here.
Cheers
Tim…
PS. Continuous improvement (Kaizen – Masaaki Imai – 1986)
“Kaizen means ongoing improvement involving everybody, without spending much money.”
“The starting point for improvement is to recognize the need. This comes from recognition of a problem. If no problem is recognized, there is no recognition of the need for improvement. Complacency is the archenemy of Kaizen.”
Yesterday I went to Birmingham City University (BCU) to do a talk on “Graduate Employability” to a bunch of second year undergraduate IT students. I’ve done this a few times at BCU, and also at UKOUG for a session directed at students.
The session is what originally inspired my series of blog posts called What Employers Want.
As I’ve mentioned before, these sessions are a little different to your typical conference sessions. Perhaps you should try reaching out to a local college or university to see if they need some guest speakers, and try something outside your comfort zone.
Thanks to Jagdev Bhogal and BCU for inviting me again. See you again soon.
Cheers
Tim…
Danger, Will Robinson! Obligatory warning below.
So here we go…
Fedora 29 has been out for a bit over a week now. Over the weekend I had a play with it and noticed a couple of differences between Fedora 28 and Fedora 29 as far as Oracle installations are concerned. There are some extra packages that need to be installed. Also, one of the two symbolic links that were needed for the Oracle installation on Fedora 28 is now present in Fedora 29, but pointing to the wrong version of the package.
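The symbolic link fix follows the usual pattern: check where the existing link points, and repoint it if it references the wrong version. The library names below are placeholders for illustration, not the actual Fedora packages:

```shell
# Illustration only: "libfoo" stands in for the real library. The pattern
# is to create or repoint a compatibility link at the name the installer
# expects, targeting the version the distro actually ships.
cd "$(mktemp -d)"
touch libfoo.so.2                  # the version shipped with the distro
ln -sfn libfoo.so.2 libfoo.so.1    # compatibility link the installer expects
readlink libfoo.so.1               # prints: libfoo.so.2
```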
Here are the articles I did as a result of this.
It’s pretty similar to the installation on Fedora 28, with the exception of the extra packages and a slight alteration to the symbolic links.
Once the “bento/fedora-29” box becomes available I’ll probably do a Vagrant build for this, but for the moment it was the old-fashioned approach. 🙂
So now you know how to do it, please don’t! 🙂
Cheers
Tim…