I'm trying to establish a relationship between an Oracle Doc ID, a system view and file response times.
I was reading "How to Tell if the I/O of the Database is Slow (Doc ID 1275596.1)". This great doc explains many things and shows a table with expected response times; in brief, it says that "A typical multi-block synchronous read of 64 x 8k blocks (512kB total) should have an average of, at most, 20 milliseconds before worrying about 'slow IO'. Smaller requests should be faster (10-20ms) whereas for larger requests, the elapsed time should be no more than 25ms".
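If I understood the note correctly, the thresholds could be sketched like this (a rough Python sketch; the numbers come from the note, the function name is mine):

```python
# Thresholds as stated in Doc ID 1275596.1 (my reading of them):
#   reads up to 512 KB -> worry above ~20 ms average
#   larger reads       -> worry above ~25 ms average
def is_read_slow(avg_ms, request_kb):
    threshold_ms = 20 if request_kb <= 512 else 25
    return avg_ms > threshold_ms

print(is_read_slow(18, 512))   # False: within the 20 ms budget
print(is_read_slow(30, 1024))  # True: above the 25 ms budget
```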
Then I was playing with DBA_HIST_FILESTATXS, and it contains, for example, the number of physical reads done (PHYRDS) and the time spent doing reads (READTIM, in hundredths of a second).
Then I thought: if I add up all the time spent doing physical reads and divide it by the total number of physical reads done, I get the average time spent to read a single block. Assuming the system has a balance between multi-block and single-block reads, I can multiply this average by 64 (standard 8kB block size * 64 = 512kB) to get the average time spent to read 512kB from the disk.
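To make the arithmetic concrete, here is the calculation with made-up numbers (the totals below are hypothetical, not from a real system):

```python
# Suppose over a week the AWR data showed these totals:
read_time_cs = 120_000    # sum of READTIM, in hundredths of a second
phys_reads   = 1_500_000  # sum of PHYRDS

avg_ms_per_read = read_time_cs / phys_reads * 10  # cs -> ms (x10)
avg_ms_512kb    = avg_ms_per_read * 64            # 64 x 8 KB blocks = 512 KB

print(round(avg_ms_per_read, 2))  # 0.8 ms per single-block read
print(round(avg_ms_512kb, 2))     # 51.2 ms per 512 KB
```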
I generated the following query:
SELECT FILENAME "Name of the datafile",
       TSNAME "Name of the tablespace",
       BLOCK_SIZE "Block size of the datafile",
       SUM (PHYRDS) "# of physical reads done",
       SUM (PHYWRTS) "# times DBWR required to wrt",
       SUM (SINGLEBLKRDS) "# of single block reads",
       SUM (READTIM * 10) "Time (ms) spent doing reads", --if the TIMED_STATISTICS parameter is true; 0 if TIMED_STATISTICS is false
       ROUND (SUM (READTIM) / NULLIF (SUM (PHYRDS), 0) * 10, 2) "Avg time (ms) to read 8kb", --NULLIF avoids division by zero for files with no reads
       ROUND (SUM (READTIM) / NULLIF (SUM (PHYRDS), 0) * 64 * 10, 2) "Avg time (ms) to read 512kb", --See metalink note "How to Tell if the IO of the Database is Slow [ID 1275596.1]"
       SUM (WRITETIM * 10) "Time (ms) spent doing writes", --if the TIMED_STATISTICS parameter is true; 0 if TIMED_STATISTICS is false
       SUM (SINGLEBLKRDTIM * 10) "Sum single blocks read (ms)", --cumulative single-block read time (stored in hundredths of a second)
       SUM (PHYBLKRD) "# of physical blocks read",
       SUM (PHYBLKWRT) "# of blocks written to disk", --which may be the same as PHYWRTS if all writes are single blocks
       SUM (WAIT_COUNT) "Wait Count",
       SUM (TIME) "Wait Time"
  FROM DBA_HIST_FILESTATXS
 WHERE SNAP_ID >= (SELECT MIN (SNAP_ID)
                     FROM DBA_HIST_SNAPSHOT
                    WHERE BEGIN_INTERVAL_TIME >= SYSDATE - 7)
 GROUP BY FILENAME, TSNAME, BLOCK_SIZE
 ORDER BY 8 DESC;
I multiply some columns by ten because the view stores those values in hundredths of a second and I want them in milliseconds.
Do you think this reasoning is right? And do you think the Oracle Doc ID is talking about 20ms to read blocks directly from the disk, or might that time include cached reads?
A typical 15K RPM disk has a 3-12ms seek time. If all 64 of these blocks are on different tracks of the disk, then the minimum seek time needed to fetch them is 192ms (64 * 3ms), so I can't understand how to reach 20ms for 512kB, unless they are all contiguous. However, the expected sequential read time is not very different, and I find it hard to imagine 512kB being read sequentially all in one piece.
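Here is the arithmetic behind my doubt, with the best and worst cases side by side (the 100 MB/s sequential throughput is my assumption, just for illustration):

```python
SEEK_MS = 3   # best-case seek time for a 15K RPM disk
BLOCKS  = 64  # 64 x 8 KB blocks = 512 KB

# Worst case: every block on a different track -> one seek per block
worst_case_ms = BLOCKS * SEEK_MS
print(worst_case_ms)  # 192

# Best case: contiguous 512 KB -> one seek plus the transfer time,
# assuming ~100 MB/s sequential throughput (an assumed figure)
transfer_ms = 512 / (100 * 1024) * 1000
print(round(SEEK_MS + transfer_ms, 1))  # 8.0
```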
Thanks in advance!