Multiplexed redo logs and archiving by default?


After yet another post from someone whose database has crashed while running without archivelog mode and without multiplexed redo logs, I think it's about time Oracle changed the default installation to include both these things.

Over the last few versions, Oracle have consistently made the database easier to install and use, but they still leave these gaping holes. Yes, archivelog mode is set if you choose to set up backups during the installation, but there's nothing stopping them defaulting to this setting even when backups are not configured during the installation.

I realise some people will react by saying it's up to the DBA to make this decision, but there are obviously lots of people out there who either don't understand the issue, or don't even know about it. It would seem sensible to me for Oracle to install the product in the safest mode possible. After all, it's no problem backing these settings out if you don't need them.
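For anyone wanting to make this change manually, a minimal sketch of enabling archivelog mode looks like the following (run as SYSDBA; the archive destination path is a placeholder, so adjust it to your own layout):

```sql
-- Archivelog mode can only be switched while the database is mounted,
-- not open, hence the restart.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Optionally, direct archived logs to an explicit destination
-- (placeholder path shown).
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/app/oracle/arch' SCOPE=BOTH;
```

You can confirm the change with `ARCHIVE LOG LIST` or by querying `V$DATABASE.LOG_MODE`.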

I for one would rather have people complaining about disks filled with archived redo logs, than having unrecoverable databases.

Rant over. 🙂



Author: Tim...

DBA, Developer, Author, Trainer.

12 thoughts on “Multiplexed redo logs and archiving by default?”

  1. Agreed, archived logs out of the box would be safer and better overall. I’d only add that a default policy that archives until some x GB have been filled and then deletes the oldest to make space would be best.

    The result would be a cycle similar to the online redo logs, but going /much/ further back in time. The full benefits of archivelog mode wouldn't be realised, but an Oracle database wouldn't inevitably fill a disk from a default install.
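The size-capped policy described above can be roughly approximated with the fast recovery area settings (an assumption here: a 10g-or-later FRA; the size and path are illustrative only):

```sql
-- Cap the fast recovery area; when space is under pressure Oracle
-- reclaims files that are obsolete or already backed up.
ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
```

One caveat: archived logs that have never been backed up are not deleted automatically, so this is not quite the rolling delete-the-oldest cycle suggested above; without a backup the FRA can still fill up.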

  2. Tim,

    I couldn’t agree more, I definitely think it should force the user/DBA to make a conscious decision to *not* run in Archivelog mode rather than the other way round.


  3. What happened to a simple Oracle installation on one single disk? Do you still want to duplex your redo logs if you have only a single disk?
    I admit this must be very, very unusual.
    But is it possible for one redo log member to be corrupt and the other member to be perfectly fine in that particular scenario, where you have a single disk?

  4. Hi.

    You should always multiplex redo logs. Delete one member of a redo log group, then ask me if it's possible to have a problem with one member of a group on a single disk. 🙂



  5. I can see why duplexing redo logs is required if you have multiple disks and multiple RAID controllers. But if you only have one disk and you duplex them on the same disk, what problem do you avoid in that case? If the disk fails, you lose both members anyway. Is it possible for one redo log member to get corrupted and not the other if both members reside on the same disk?

  6. Certainly, if the disk fails you lose everything. This is why nobody should run a production system on a server without disk redundancy.

    As I said in my previous post, it is always possible for a file to get corrupted or lost, so having two copies is safer, even on a single-disk system. If you accidentally delete one file, you still have the other copy. If one file gets corrupted, you still have the other copy.
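Adding a second member to each existing redo log group is a one-liner per group. A sketch, assuming a three-group configuration; the group numbers and file paths are hypothetical, so check `V$LOG` and `V$LOGFILE` for your own layout first:

```sql
-- Add a second member to each redo log group. Ideally the new members
-- live on a different disk; on a single-disk system they still protect
-- against accidental deletion or corruption of one file.
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/DB1/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/DB1/redo02b.log' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/DB1/redo03b.log' TO GROUP 3;
```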

    Of course, if you don't care about your database and you're happy to lose it, by all means don't multiplex your redo logs and don't use archivelog mode. 🙂



  7. In that case a default backup job, ready to run, is also needed; otherwise the FRA can treat the archives as reclaimable space. That's one issue with the idea: whether you take the backup as an image copy with incremental block apply, or as a backupset. The first option assumes you have (or need) more space for your FRA; the second uses less space, but it isn't the recommended backup method for Oracle 10g.

  8. I don't think you need to force a backup scenario. People will soon know there is a problem when the flash recovery area fills up, which is a better problem to have than a lost database.

    Also, this will force people to read up on how they should be doing backups.

    I’m not suggesting Oracle can make everything perfectly safe, but they can nudge people in the correct direction. Forcing people into a backup scenario seems too extreme to me. It’s something that needs consideration.



  9. While they are at it, why not make Standard Edition the default for an install (or neither, and force the person to decide)?
    An Enterprise Edition user is more likely to be experienced and to know when not to pick the default. And it is easier to push a Standard Edition database to Enterprise Edition than the other way around.
    I'd suspect there are more Standard Edition installs than Enterprise anyway.

  10. I just had a requirement for a flat file at a remote site, or else an ODBC connection to the production system. So I went, “hey, why don't I give them XE, push the data over there, and they won't have too many problems when their flaky network link dies.”

    I was glad to see archiving was off by default. I think it is only relevant for a properly designed transactional system, not one with no real DBA around and no need for recovery. How many of those are there with XE, really? Weren't they thinking of not even allowing transactional recovery with XE for a while?

    Maybe the install needs to ask “is this a toy database or a professional database?” 🙂

  11. I can see a need for a production system running on a free DB, like MySQL or XE. Even in these cases, I would hope the database is recoverable.

    The problem I see is that when it fails, people won't say it failed because they made a mistake. They will say it failed because Oracle is rubbish, the same way they used to say Oracle performed badly when they had no stats.

    I see the introduction of these features as a way of protecting their reputation as much as anything.



Comments are closed.