I'm not really sure I am the person to ask on this. I think I would ask the question of one of the performance gurus.
So as far as you are concerned:
- The query is the same.
- The volume of data has increased.
- The execution plan is *exactly* the same, except that the cardinalities of the operations have presumably increased.
- The amount of memory used by a hash join operation has reduced.
Like I said, I'm no performance guru, but I would be wondering if the hash table is overflowing to disk, so Oracle is not bothering to allocate more memory, in a kind-of, "I'm already having to overflow, so what's the point in grabbing more memory", type scenario.
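If you wanted to test that theory, one place to look would be `V$SQL_WORKAREA`, which records how each workarea (including hash joins) executed. This is just a sketch, assuming you can identify the `SQL_ID` of the statement in question, with `:sql_id` as a placeholder bind:

```sql
-- Check how the hash-join workareas for a given statement executed.
-- :sql_id is a placeholder; substitute the SQL_ID of the query in question.
SELECT operation_type,
       last_execution,          -- OPTIMAL = fully in memory; ONE PASS / MULTI-PASS = spilled
       last_memory_used,        -- PGA memory used by the last execution (bytes)
       last_tempseg_size,       -- temp segment size; non-NULL means it overflowed to disk
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea
WHERE  sql_id = :sql_id
AND    operation_type = 'HASH-JOIN';
```

If `LAST_EXECUTION` comes back as ONE PASS or MULTI-PASS, or `LAST_TEMPSEG_SIZE` is populated, the hash table is indeed spilling, which would at least be consistent with the "why grab more memory" idea.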
This is just me thinking out loud. I have no evidence to say the optimizer will make those sorts of decisions...