Lunchtimes…

Over the weekend I decided to start using my weekday lunchtimes for something other than reading/writing blogs and geeking out on the internet. Instead I’ve started swimming, something I’ve not done regularly since I was at University many years ago. OK, I go every week with my nephews, but I tend to float next to them rather than actually swim!

Monday – It was just plain dire. What’s more, I was completely knackered when I got to Karate in the evening.

Tuesday – It wasn’t so hard, but it wasn’t good either.

Wednesday – It felt almost like I remembered how to swim again and I wasn’t too tired later on.

Thursday (today) – I managed to do a mile. It was no walk in the park, but I’m still alive to tell the tale.

Friday (tomorrow) – Do I go for another mile, or do I go to the pub? The jury is out at the moment. 🙂

Cheers

Tim…

SQL Developer in production…

Some time towards the end of last week I saw a blog post saying a new early adopter release of SQL Developer was available. When I went to OTN to download it today, there was an official production release available. That came out pretty sharpish! 🙂

Cheers

Tim…

What a day!

Yesterday was a tough day…

I had a call at 02:30 about an extract job (written in Java) that was taking hours to run. After 2 hours of pratting about trying to get it working, I gave up and went back to bed. As soon as I got to work I recoded the process in PL/SQL. It was a case of “if it compiles it must work”, because we had very little time to test the process before the next run was required. Fortunately, it worked fine. Did it improve the situation? The original extract took several hours; the PL/SQL version took 23 seconds. Sweet!

What was slowing down the original process? Some bright spark thought it would be better to pull back a huge table into an array and loop through it searching for data, rather than writing a query to pull back a single row. This was repeated for every line written to the extract file. There was an assortment of other classic bits of code too, including (the names have been changed to protect the “not so” innocent):

switch (getType())
{
    case TYPE_CONST1:
        sb.append(TYPE_CONST1);
        break;
    case TYPE_CONST2:
        sb.append(TYPE_CONST2);
        break;
}

Now, I’m no Java Guru, but for a mandatory item with only two allowed values, I instantly spotted that this actually meant:

sb.append(getType());

Conclusion: If you want data-intensive code to run fast, put it in the database and get someone who understands databases to write it!

Once all the fuss was over, I noticed that the front page of my website wasn’t working properly. It turned out that the OTN RSS news feed was broken, and my dodgy PHP code didn’t trap the error very well. I fixed the error trapping and informed Oracle about the news feed. Within a few minutes the OTN news feed was restored, so all was fine again.
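Something along these lines would have saved the blushes (a minimal sketch using cURL and SimpleXML, assuming an RSS 2.0 style feed; it’s not my actual front page code):

function fetch_feed_items($url)
{
    // Pull the feed, but don't let a slow or dead server hang the front page.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $xml_text = curl_exec($ch);
    curl_close($ch);

    if ($xml_text === false || trim($xml_text) === '') {
        return array();  // No feed? Show the page without the news box.
    }

    // Collect XML errors quietly instead of spraying warnings across the page.
    libxml_use_internal_errors(true);
    $xml = simplexml_load_string($xml_text);
    if ($xml === false) {
        return array();  // Broken feed XML gets the same treatment.
    }

    $items = array();
    foreach ($xml->channel->item as $item) {
        $items[] = array('title' => (string) $item->title,
                         'link'  => (string) $item->link);
    }
    return $items;
}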

Conclusion: My PHP isn’t as good as my PL/SQL 🙂

Cheers

Tim…

Blog Aggregation and RSS…

I decided to write a basic blog aggregator using PHP and MySQL at the weekend, which is now on my website:

https://oracle-base.com/aggregator/index.php

I created a couple of tables in a MySQL database, wrote a few lines of PHP and “Bob’s your uncle”, as they say. I’m not sure if it will stay, but I thought it was worth a few minutes of my time to play around with it, just for the experience.
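The storage side really is trivial. Assuming a couple of made-up tables along the lines of feeds(id, title, url) and entries(feed_id, title, link, published), the insert loop is little more than this (a sketch using mysqli; the names and data are illustrative, not my real schema or code):

$db = new mysqli('localhost', 'agg_user', 'secret', 'aggregator');

// Entries as returned by the feed parser (sample data for illustration).
$feed_id = 1;
$entries = array(
    array('title' => 'Example post',
          'link'  => 'http://example.com/post',
          'date'  => '2006-03-01 12:00:00')
);

$stmt = $db->prepare('INSERT INTO entries (feed_id, title, link, published) VALUES (?, ?, ?, ?)');

foreach ($entries as $entry) {
    // 'isss' = one integer (feed_id) followed by three strings.
    $stmt->bind_param('isss', $feed_id, $entry['title'], $entry['link'], $entry['date']);
    $stmt->execute();
}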

By far the most time-consuming part of the job was writing the XML parser so it could deal with all the feed formats of the blogs I currently read (RSS, Atom and RDF). Even within these formats, there are variations in tag names and date formats. It’s a real pain in the ass.
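To give a flavour of it, the normalisation boils down to something like this (a simplified sketch assuming SimpleXML; the tags shown are just the common RSS/Atom/RDF variants, not everything my parser has to cope with):

function normalise_entry($entry)
{
    // Dates: RSS uses <pubDate>, Atom uses <published> or <updated>, and RDF
    // feeds tend to use <dc:date>. The formats differ too (RFC 822 vs ISO 8601).
    $dc = $entry->children('http://purl.org/dc/elements/1.1/');
    if (isset($entry->pubDate)) {
        $raw_date = (string) $entry->pubDate;
    } elseif (isset($entry->published)) {
        $raw_date = (string) $entry->published;
    } elseif (isset($entry->updated)) {
        $raw_date = (string) $entry->updated;
    } elseif (isset($dc->date)) {
        $raw_date = (string) $dc->date;
    } else {
        $raw_date = '';
    }

    // strtotime copes with both date styles, so everything ends up in one format.
    $published = ($raw_date != '') ? date('Y-m-d H:i:s', strtotime($raw_date)) : null;

    // Links: RSS and RDF use <link>text</link>, Atom uses <link href="..."/>.
    $link = (string) $entry->link;
    if ($link == '' && isset($entry->link['href'])) {
        $link = (string) $entry->link['href'];
    }

    return array('title' => (string) $entry->title,
                 'link'  => $link,
                 'date'  => $published);
}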

Just like everything else in IT, you take a simple idea (RSS), then let it diverge into a monster…

Cheers

Tim…