Post by Keith:
Hello Paul,
HD Tune: Hitachi HTS547550A9E384 Error Scan
Scanned data : 476749 MB
Damaged Blocks : 0.0 %
Elapsed Time : 169:50
Keith
This is as close as I could get. Due to the wavy-gravy
nature of this plot, it's hard to say whether your
drive falls in line with the curve here exactly,
or not. It's in the right ballpark.
Your drive might be three-head; this drive could be
the four-head version. Disks use two heads per platter
(top and bottom). A three-head drive is just a four-head
drive, with one head being ignored during read/write. But
it still flies along, and helps balance forces on either
side of the platter.
http://dyski.cdrinfo.pl/benchmark/hdtune/hdtune-1516-107204-943aLDRGqpf1O.png

Notice the seek dots are all over the place. I occasionally
have a couple seek dots off the beaten path, so they
don't have to be perfect. But if it's "snowing" off
the main axis, that spells some sort of trouble.
Like, uneven performance in day to day usage.
One possibility for your "slow" portion is that
the envelope for the partition is
bigger than it should be. The 32Kbit/sec section
could be one timeout after another, while seeking
to places that don't exist. But Macrium would
stop immediately if that were the case.
Partitions have two size parameters. There is
the physical size (in Windows 7, likely rounded
to some number of 1048576 byte "megabytes"). But
inside the physical partition, the virtual information
declares that some number of clusters make up the file system.
The two sizes do not have to be equal. If there is
a mishap during a Windows Disk Management partition
resize, there have been cases where the physical
size was 1TB, while the virtual size was 500GB. Which
means half of the partition is completely
inaccessible. The backup would dutifully record
that, without an issue, and reproduce it given
a chance. All the tools are happy if the
situation arises. Only the user is unhappy.
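
If you want to check what "virtual" size the file system itself
is claiming, one rough way (just a sketch, assuming an NTFS C:
and an elevated prompt; the drive letter is only an example) is
to read it out of the NTFS boot sector and compare against the
partition size Disk Management reports:

# Sketch: read the NTFS boot sector of C: and compute the size
# the file system claims ("virtual" size). Needs administrator
# rights; the volume letter is just an example.
import struct

VOLUME = r"\\.\C:"

with open(VOLUME, "rb") as f:
    boot = f.read(512)                       # NTFS boot sector

bytes_per_sector = struct.unpack_from("<H", boot, 0x0B)[0]
total_sectors    = struct.unpack_from("<Q", boot, 0x28)[0]   # NTFS TotalSectors

virtual_bytes = total_sectors * bytes_per_sector
print("File system claims %d bytes (%.2f GB)"
      % (virtual_bytes, virtual_bytes / 1e9))
# Compare this with the partition size in Disk Management. If the
# physical partition is far larger, that's the Physical > Virtual case.

NTFS normally reports one sector less than the partition (it keeps
a backup boot sector at the very end), so a tiny difference is
expected. The problem case is a difference of hundreds of gigabytes.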
Linux does this sort of thing on purpose. The physical
and virtual are handled as two separate steps in
GParted. Whereas Windows tries not to expose such
details to the user.
If increasing the size of a partition, you increase
the Physical first, then increase the Virtual. If
decreasing the size of a partition, you decrease
the Virtual size first, then adjust the Physical (update
the partition table) right after that. Not that any of
this is relevant. I just wanted to point out that one
failure mode is for the Physical to be quite a bit
larger than the Virtual.
The other way around wouldn't work. If Virtual was bigger
than Physical, the partition would corrupt as you
were filling it with data. But what would happen
if backing up ? Would the backup software
try to seek past the end of the partition ?
Dunno.
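
To make those two mismatch cases concrete, a toy comparison
(nothing tool-specific here; the byte counts are made-up examples):

# Sketch: compare physical (partition table) size against virtual
# (file system) size and report which situation applies.
def diagnose(physical_bytes, virtual_bytes):
    if virtual_bytes > physical_bytes:
        return ("Virtual > Physical: file system thinks it has room "
                "that isn't there; corrupts as it fills with data")
    if physical_bytes > virtual_bytes + 1024**2:   # allow ~1MB of slack
        return ("Physical > Virtual: part of the partition is walled "
                "off and inaccessible")
    return "Sizes agree (within rounding)"

# Example: 1TB partition wrapped around a 500GB file system, as above.
print(diagnose(1_000_000_000_000, 500_000_000_000))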
*******
You've already done the bad block scan. It shows
zero percent bad.
Perhaps this is one of those cases where you
run the Macrium backup again, then run ProcMon
and collect a trace. Save out the trace, then
examine all the "ReadFile" calls. Check the
addresses on the ReadFile calls. They could be
relative to the start of the partition. If
some of those addresses are disproportionate
(outside the partition), then that might account
for bad behavior.
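
If you go that route, one way to post-process the trace is to
export it to CSV (File > Save in ProcMon offers CSV) and scan the
Detail column, which carries the offset of each ReadFile. A sketch,
assuming that export format; the file name and the partition size
are placeholders you would fill in:

# Sketch: flag ReadFile operations whose offset lands beyond the
# partition, i.e. the "seeking to places that don't exist" case.
# "trace.csv" and PARTITION_BYTES are placeholders.
import csv
import re

PARTITION_BYTES = 500 * 10**9                 # your real partition size here
OFFSET_RE = re.compile(r"Offset:\s*([\d,]+)")

with open("trace.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row.get("Operation") != "ReadFile":
            continue
        m = OFFSET_RE.search(row.get("Detail", ""))
        if not m:
            continue
        offset = int(m.group(1).replace(",", ""))
        if offset > PARTITION_BYTES:
            print("Suspicious ReadFile at offset", offset, "on", row.get("Path"))

Anything printed by that would be the "disproportionate" reads
mentioned above.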
I had another idea, which is to resize C: a little
bit. Shrink it down by 10GB. Then run another backup
and time it. The purpose of this is to give Windows 7
a chance to examine the partition and perhaps put things
right.
But the thing is, Macrium works best if the partition
table stays constant over a set of backups. While
you can resize on a restore now, it's a bit disconcerting
to have the tool complaining it cannot restore the MBR
because the partitions are different sizes. The danger of
modifying the MBR is the possibility of running into
trouble on a restore (if restoring a 2-year-old backup,
say).
Did CHKDSK approve of your partition ?
I've studied a Macrium backup from end to end
with Sysinternals Procmon, and the trace was
around 9GB in size (20 minutes worth). You'll need
a 64-bit OS, so that the 64-bit version of ProcMon
runs automatically, in order to collect
traces that big. Then, convert the trace to another
format, for post-analysis. While you can certainly
scroll through the trace, you may want some other
way to check it out. And we know text editors on
Windows suck, and there aren't a lot of good
choices there (from Microsoft itself). My best
tool now for examining files (though not suited to this
purpose) is the HxD hex editor. Finally, I can
edit a 30GB file with a hex editor, and it actually
runs at a decent speed. Now, if I could only
find a text editor that works that well.
When the backup is running, the clusters should
be backed up in sequential order, with "gaps"
where nothing is stored. You know the trouble
happens at the end of the backup, so maybe you'll
only have to scroll through the last 100,000 lines
on the screen :-)
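
Or skip the scrolling entirely: if the trace has been exported to
a CSV/text file, something like this pulls out just the last
100,000 lines by reading backwards from the end of the file (the
file name is a placeholder):

# Sketch: grab the last N lines of a multi-GB text export without
# reading the whole file, by seeking backwards in 1MB chunks.
import os

def tail_lines(path, n=100_000, chunk=1 << 20):
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        end = f.tell()
        data = b""
        while end > 0 and data.count(b"\n") <= n:
            start = max(0, end - chunk)
            f.seek(start)
            data = f.read(end - start) + data
            end = start
        return data.splitlines()[-n:]

for line in tail_lines("trace.csv"):
    print(line.decode("utf-8", errors="replace"))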
The last really good text editor I had was
BBEdit Lite on the Mac (they don't make a PC
version). Which for its time and situation,
was fast. The text editors I've used since then,
are embarrassingly bad.
Paul