
Thread: Defrag HDD

  1. #1

    Defrag HDD

    I recently tried to defrag my HDD. Before that, I ran the analysis to check whether defragmenting was necessary. The analysis reported that the HDD did not need to be defragmented. Nevertheless, I proceeded to defrag anyway.

    It took almost 35 minutes for the system to defrag only 2%, so naturally I decided to stop the process as it was taking too long. I have never experienced such a long delay.

    I would like to know what could be causing the delay, and whether it was advisable to proceed with the defrag even though it was not necessary.

  2. #2
    Join Date
    Jun 2001
    Location
    Scottish Borders
    Posts
    3,519
    It might possibly be that you have a security program running in the background.

    Even though your drive might not be badly fragmented, defragging sorts/moves your programs so that they start quicker, and this is what takes up the time. (Hope I have explained that bit correctly.)

    Best way to defrag (IMHO) is:
    Disconnect from the net.
    Close ALL running programs.
    Run Defrag and then reboot.

    If your drive is badly fragmented it will take a long time. It is also worth running it in Safe Mode if possible.
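
    If you want a quick check of whether a defrag is needed at all before committing to the long wait, something like the sketch below works. It assumes Windows XP's built-in defrag.exe (with its -a analyse-only switch) and an account with Administrator rights; adjust the drive letter to suit.

        # Rough sketch: run the built-in defrag.exe in analyse-only mode and
        # print its report. Assumes an XP-era defrag.exe on the PATH and
        # admin rights; purely illustrative.
        import subprocess

        def analyse_drive(drive="C:"):
            # "-a" asks defrag.exe to analyse the volume without defragmenting it.
            result = subprocess.run(["defrag", drive, "-a"],
                                    capture_output=True, text=True)
            print(result.stdout)
            return result.returncode

        if __name__ == "__main__":
            analyse_drive("C:")

    If the report says the volume doesn't need defragmenting, you can skip the defrag entirely.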
    Ernie

    The difference between perseverance and obstinacy is that one is made from strong will, and the other from strong won't
    Henry Ward Beecher
    Do you have reading problems? Don't let it deter you. This is what YOU can do if you try http://www.erniek.eclipse.co.uk

  3. #3
    In addition, if you are using a FAT partition it will take longer to defragment than an NTFS partition would.
    Just do your best. We appreciate the effort; the result is secondary.

  4. #4
    Join Date
    Mar 2002
    Location
    west Lothian, Scotland.
    Posts
    13,304
    Was the defrag continually restarting?
    You fix that by defragging in Safe Mode with modem powered off.

    Does your HDD have only one HUGE partition?
    I keep my C: partition [3GB, 1.8GB used] as small as possible by moving all data files off C: so that it only holds Windows & Program Files [they account for 95% of used space] plus a few odds & bods.

    The data partitions [although much bigger] don't need defragging as often as C:
    I'm using FAT32 partitions:
    I use small clusters for small partitions holding small files...
    Large clusters for large partitions holding large files.
    Hence the large partitions are still relatively easy to defrag because the files are held in HUGE clusters. [I try to keep the number of clusters to a file somewhere around a reasonable number like ten, if possible.]
    Can you imagine trying to defrag a single HUUUUuuuge file made up of hundreds of thousands of tiny clusters?
    And multiplying that by thousands of files?
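
    As a rough illustration of how cluster size changes the cluster count for a single file (the figures here are made up for the example, not taken from a real drive):

        # Rough arithmetic: how many clusters does one file occupy at
        # different cluster sizes? Figures are illustrative only.
        KB = 1024

        def clusters_needed(file_size_bytes, cluster_size_bytes):
            # A file always occupies whole clusters, so round up.
            return -(-file_size_bytes // cluster_size_bytes)

        file_size = 700 * 1024 * KB          # e.g. a 700MB video file
        for cluster in (4 * KB, 32 * KB, 64 * KB):
            print(f"{cluster // KB:>2}KB clusters -> "
                  f"{clusters_needed(file_size, cluster):,} clusters")

        # Illustrative output:
        #  4KB clusters -> 179,200 clusters
        # 32KB clusters -> 22,400 clusters
        # 64KB clusters -> 11,200 clusters

    The same file, the same data, but far fewer pieces for the defragmenter to track when the clusters are big.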

    It can be quite instructive if you watch the process.
    You can see the number of clusters to a file [the big block of coloured rectangles that clear in one go once complete].
    You don't want to see that a single file consists of half a screenful or more of tiny clusters [the little coloured rectangles].
    NTFS is different I believe; don't know how that works.

    Probably best to begin the defrag about 30 to 60 minutes before you go to bed and let it continue while you sleep.

  5. #5
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    Both FAT and NTFS file systems can become fragmented. I am not aware of any reference or reason for FAT taking longer to defrag than NTFS.

    Nor does the cluster size have much, if anything, to do with the amount of fragmentation. As long as the clusters are contiguous it doesn't matter if they are large or small clusters. Contiguous blocks can always be moved around for any file i/o more quickly than if they are fragmented for what should seem to be obvious reasons.

    The things that tend to cause a lot of fragmentation are the creation of very large files on very full drives, or of files that keep changing their sizes. The pagefile is well known for its effects on fragmentation when it has no fixed size. Both file systems try to write files as contiguous blocks, but as a drive gets filled up there comes a point when there is no nice space for a whole contiguous file to fit in. Thereafter fragmentation gets more common, and one other effect of a fragmented drive is that it is then even more prone to become even more fragmented.
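
    A toy model makes the "nothing fits in one piece" point clearer. This is purely illustrative - a simple first-fit allocator over a list of clusters, nothing like the real placement logic in FAT or NTFS:

        # Toy first-fit allocator: on a nearly full "disk" the free space is
        # scattered, so a new file has to be split across several free runs.
        # Purely illustrative - real file systems place data far more cleverly.

        def free_runs(disk):
            """Return (start, length) for each contiguous run of free clusters."""
            runs, start = [], None
            for i, used in enumerate(disk + [True]):     # sentinel closes the last run
                if not used and start is None:
                    start = i
                elif used and start is not None:
                    runs.append((start, i - start))
                    start = None
            return runs

        def allocate(disk, n_clusters):
            """First-fit: fill free runs in order; count the fragments created."""
            fragments = 0
            for start, length in free_runs(disk):
                take = min(length, n_clusters)
                for i in range(start, start + take):
                    disk[i] = True
                n_clusters -= take
                fragments += 1
                if n_clusters == 0:
                    break
            return fragments

        # A mostly full disk whose few free clusters are scattered about:
        disk = [True] * 100
        for i in (5, 6, 7, 30, 31, 60, 61, 62, 63, 90):
            disk[i] = False
        print(allocate(disk, 8), "fragments for an 8-cluster file")   # -> 3

    On an emptier disk the same 8-cluster file would have found a single free run and landed in one piece.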

    The effects of fragmentation on NTFS volumes are, however, of less impact on file I/O because of the way files are mapped under NTFS versus FAT. What is bad under NTFS is if the MFT gets fragmented, since this is "the spine" of the whole file system.

    A large drive will take longer to defrag than a small one, and a heavily fragmented drive will take longer than a mildly fragmented one. This can be fairly easily confirmed by defragging a drive immediately after it has been defragged.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.

  6. #6
    Join Date
    Mar 2002
    Location
    west Lothian, Scotland.
    Posts
    13,304
    "Nor does the cluster size have much, if anything, to do with the amount of fragmentation. As long as the clusters are contiguous it doesn't matter if they are large or small clusters."
    1. Imagine a large file held on 4 large clusters...
    2. Versus that same large file held on 1000 tiny clusters...
    In 1:
    Even if the clusters were fragmented they would be easy to put back together [make contiguous once more]. There are only 4 clusters to find and put back in order.
    In 2:
    If those 1000 clusters became fragmented it would tend to take a lot of time and effort to put them all back together.
    And because there are more of them, there is a greater probability that they will become fragmented.
    Hence it makes sense to avoid storing files in thousands of small clusters.

    Imagine if someone was delivering orange juice [to one customer] and milk [to another] on a lorry.
    If the orange was in one huge container...
    And the milk was in another huge container...
    There is almost no opportunity for a mixup.

    But if the orange is in thousands of cartons...
    And the milk is in another few thousands of cartons...
    And somehow they become mixed up....
    [The lorry is involved in an accident perhaps]
    Sorting the milk from the orange could take an AWFUL lot of time and effort when it comes to delivery.

  7. #7
    Join Date
    Mar 2002
    Posts
    12,206
    Blog Entries
    2
    Yes, but only if the volume consists entirely of large files. On a normal-usage NTFS volume (such as the Windows partition), a cluster size larger (or smaller) than the default 4KB would result in a big performance loss. The overhead of 64KB clusters each holding a mere 100-byte file would tend to get in the way of the clusters belonging to large files. Remember that Windows uses an enormous number of small files just to keep itself running. Large cluster sizes should only be used on secondary storage volumes that contain nothing but large files.
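
    To put that overhead into rough numbers (the file count and sizes below are invented for the example):

        # Rough slack-space arithmetic: small files on big clusters waste
        # space, because each file occupies at least one whole cluster.
        # Figures are illustrative, not measured from a real volume.
        KB = 1024

        def wasted_slack(file_size, cluster_size, n_files):
            used = -(-file_size // cluster_size) * cluster_size   # whole clusters
            return (used - file_size) * n_files

        n_small_files = 50_000            # e.g. lots of tiny system files
        for cluster in (4 * KB, 64 * KB):
            waste = wasted_slack(100, cluster, n_small_files)     # 100-byte files
            print(f"{cluster // KB:>2}KB clusters: ~{waste / (1024 * 1024):.0f} MB wasted")

        # Illustrative output:
        #  4KB clusters: ~191 MB wasted
        # 64KB clusters: ~3120 MB wasted

    Tens of thousands of tiny files at 64KB per cluster means gigabytes of dead space sitting between the clusters of your big files.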

    In your example of delivering milk and orange juice, what if the lorry was also responsible for delivering a teaspoon of apple juice to 1000 other customers? Using large containers at that point (since the cluster size must be the same for a single volume) would be out of the question.

  8. #8
    Join Date
    Nov 2000
    Location
    The Mountain State
    Posts
    23,389
    In a Windows world, cluster size is always going to be a choice among several evils...the question is which can you live with.
    AV, Anti-Trojan List;Browser and Email client List;Popup Killer List;Portable Apps
    “When men yield up the privilege of thinking, the last shadow of liberty quits the horizon.” - Thomas Paine
    Remember: "Amateurs built the ark; professionals built the Titanic."

  9. #9
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    There seems to be an assumption that fragmenting files (or mixing a thousand milk and orange cartons) would fragment two 1000-cluster files into 1000 clusters each, scattered all over the place. That just wouldn't happen. Windows would attempt to keep the fragments to a minimum, and you would be far more likely to have three or four blocks of contiguous data, say 250 small clusters each. This data has to be "copied and pasted" to another part of the drive sector by sector in order to defragment the files in question, and there is not all that much searching around for a horrendous number of small fragments of the same file. If any file of fixed size is fragmented then all the sectors have to be rewritten, and there is not a significant impact on defrag performance from gathering in the data from its fragmented segments - regardless of the cluster size.

    You can begin to get a grasp of how the Windows file management works by copying just a few files to a newly formatted FAT partition. Just take two files and they will occupy the first available sectors of the data area. If you now enlarge the first file, Windows will in fact move all the data to a new area rather than fragment it across the second file.

    One of the most significant aspects of NTFS partitions is that, regardless of cluster size, small files will reside totally within the MFT and neither waste space nor be involved in any fragmentation issues.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.

  10. #10
    Join Date
    Mar 2002
    Location
    west Lothian, Scotland.
    Posts
    13,304
    Saphaline
    "only if the volume consists entirely of large files"
    In the real world you would never have a partition consist 100% of "large" files...
    But what if it was 99%, or 95%?
    I attempt to separate my file types into different partitions.
    It's not going to be perfect, but an approximation of the ideal.

    "Large cluster sizes on...the Windows partition...would result in a huge performance loss"
    My idea is to try to choose a suitable cluster size to match the most common file size [I studied statistics].
    Since I move all the [tend to be larger] data files off C: then the remaining Windows and program files tend to be on the smaller side.
    Imagine a [bell shaped] "normal" distribution of file sizes.
    What would be the "mean" file size?
    Ideally, the cluster size would be about a quarter or a tenth of that [about 4 to 10 clusters to hold an average file].
    On average, half a cluster per file is wasted "slack", so too few clusters per file is bad.
    But having too many clusters per file is also bad because...
    a. There's a greater POSSIBILITY of fragmentation [Murphy's 1st Law = If it can happen, it probably WILL happen].
    b. It doesn't help if you force the PC to rush around trying to [successfully] link up, in the correct order, its 1000 [possibly fragmented] parts. Much better if it only had to find 5 parts.
    c. And doesn't each file part have to be listed in the FAT [don't know about NTFS], so that the FAT can become HUGE? One HUGE partition with tiny clusters is to be avoided. [A rough calculation follows below.]
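
    A back-of-the-envelope version of that trade-off, with invented numbers [a 20GB FAT32 partition, an average file of 96KB, and the fact that each FAT32 entry is 4 bytes]:

        # Back-of-the-envelope for the trade-off above: average clusters per
        # file, typical slack per file, and FAT32 table size, for a chosen
        # partition size and cluster size. Numbers are hypothetical.
        KB, MB, GB = 1024, 1024**2, 1024**3

        def tradeoff(partition_size, mean_file_size, cluster_size):
            clusters_per_file = mean_file_size / cluster_size
            slack_per_file = cluster_size / 2                  # ~half a cluster wasted
            fat_size = (partition_size // cluster_size) * 4    # 4 bytes per FAT32 entry
            return clusters_per_file, slack_per_file, fat_size

        for cluster in (4 * KB, 16 * KB, 32 * KB):
            cpf, slack, fat = tradeoff(20 * GB, 96 * KB, cluster)
            print(f"{cluster // KB:>2}KB: ~{cpf:.0f} clusters/file, "
                  f"~{slack / KB:.0f}KB slack/file, FAT ~{fat // MB}MB")

        # Illustrative output:
        #  4KB: ~24 clusters/file, ~2KB slack/file, FAT ~20MB
        # 16KB: ~6 clusters/file, ~8KB slack/file, FAT ~5MB
        # 32KB: ~3 clusters/file, ~16KB slack/file, FAT ~2MB

    Bigger clusters mean fewer pieces per file and a smaller FAT, at the cost of more slack per file - which is exactly the balance I'm trying to strike by matching cluster size to the files on each partition.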

    "what if the lorry was also responsible for delivering a teaspoon of apple juice to 1000 other customers?"
    My idea is that you suit the container size to the amount to be held.
    Hence, tiny quantities would be held in tiny containers in small vehicles.
    A van would deliver the small items; a tanker would deliver the huge loads.
    Small partitions [C: D:] holding small files in small clusters...
    Big partitions [E: F:] holding big files [video, MP3, wave] in big clusters.
    Imagine if there was a program that would...
    Sort files according to size; make partitions with sizes to suit, and make the cluster sizes to suit.

    Paul
    "That just wouldn't happen. Windows would attempt to keep the fragments to a minimum"
    Perhaps so, and perhaps it would succeed [increased probability of error?], but to keep such good order WORK must be expended.
    If the right things are done, it is possible to minimise the necessity of Windows to expend effort.
    [Climbs on soapbox]
    There is a modern tendency to solve all problems by EXPENDING...
    Money...Energy...Species...Land...Lives.
    Increased efficiency...Reduced conservation of resources.
    [Gets off soapbox]
    Take it to the extreme...Imagine if each file was held in 1,000,000 clusters.
    Doesn't it take effort to find and correctly recombine all of those?
    Isn't there a greater possibility of mistakes being made?
    Last edited by Sylvander; 06-10-2006 at 04:17 AM.

  11. #11
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    In the real world you would never have a partition consist 100% of "large" files...
    I have partitions that only contain image files - usually 600MB or 2GB in size.

    But that's not the real point. There are two distinct issues that are being overlapped: cluster size per se and its effects on file management generally, and on defragmentation in particular. When a file is to be loaded into RAM (or swapped into virtual memory) some "intelligence" is used, so it isn't a case of one cluster being copied, then the next cluster's position being worked out and copied, and so on. The file's whole layout is mapped first into where and how many sectors are to be copied from each contiguous block. A contiguous block of data can be copied quickest not only because of "read ahead" but also because the heads don't have to make any dramatic movements. The cluster size is thus not important, though the number of non-contiguous file fragments is. It wouldn't make any difference if each byte (or each bit) was contiguous and mapped in an analogous manner.
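
    A small sketch of that "map the layout first" idea (the data structures here are invented for illustration and bear no relation to the real NTFS or FAT on-disk formats):

        # Sketch of "map the whole layout first, then copy run by run": the
        # cost that matters is the number of non-contiguous runs (head moves),
        # not the number of clusters. Hypothetical data, not real NTFS/FAT.

        def runs_from_clusters(cluster_list):
            """Collapse an ordered cluster list into contiguous (start, length) runs."""
            runs = []
            for c in cluster_list:
                if runs and c == runs[-1][0] + runs[-1][1]:
                    runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend current run
                else:
                    runs.append((c, 1))                          # new run = one head move
            return runs

        # The same file described with small clusters (many entries)...
        small_clusters = list(range(1000, 1500)) + list(range(9000, 9500))
        # ...and with clusters four times bigger (a quarter of the entries):
        big_clusters = list(range(250, 375)) + list(range(2250, 2375))

        print(len(runs_from_clusters(small_clusters)), "runs")   # -> 2
        print(len(runs_from_clusters(big_clusters)), "runs")     # -> 2

    Either way the file is read in two sweeps of the heads; the number of clusters in between makes no odds.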

    Errors shouldn't be a problem (barring bad sectors or untoward interruption) since error checking goes on with every file I/O; that issue is a distraction. As for expending energy - that is a distraction also, and it is indeed hard to know exactly what you mean. The CPU seldom works flat out, so there is a lot of spare capacity there, just for one example.

    The software developers of operating systems haven't spent millions of hours designing and improving how the kernel operates without giving due regard to efficiency and speed. At times some personal tweaking of systems can produce small but noticeable effects, but there is so much extra capacity in modern systems that it seems somewhat anal to want to keep one's house so absolutely clean and tidy and tweaked to the last.

    The poster's original question relates to just such a phenomenon. The advice was that the volume didn't need defragmentation - but it was defragmented nonetheless. Why? Possibly because it was believed that this would make a noticeable difference to performance, or possibly because the thought of a grid with a few "misplaced" fragments didn't look pristine enough.

    In the winter I have a herd of cattle kept in sheds that have to be cleaned out every day or so. One could go mad trying to lift every bit of manure (contiguous or not!!) and keep the sheds constantly and beautifully clean, but as soon as one pat is swept up another one is being deposited. It's only when there is a significant build-up of manure that the sheds must be emptied, or the cows would get dirty or there would be problems with calving. One only needs to defrag a file system when the fragmentation has reached the stage of impairing function.

    Take it to the extreme...Imagine if each file was held in 1,000,000 clusters.
    A million is hardly extreme when you consider that a modern CPU can operate at some 4 thousand million cycles every second. And a million 4KB clusters would be a 4GB file - not unheard of by any means.
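
    The arithmetic, for what it's worth (a quick check rather than anything profound):

        # Quick check: a million 4KB clusters is roughly a 4GB file.
        clusters = 1_000_000
        cluster_size = 4 * 1024                      # 4KB in bytes
        print(clusters * cluster_size / 1024**3)     # ~3.8 GiB, i.e. about 4GB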
    Last edited by Paul Komski; 06-10-2006 at 07:32 AM.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.
