Partitioning a 160gb hdd
07-30-2004, 10:02 AM
I just bought a Seagate Barracuda 160gb, and am a bit confused about how to partition it. I will be using my old 20gb drive exclusively for linux :) I guess the usable space on a 160gb drive should be around 150gb.
All drives will be FAT32...
c:\ 5gb - win98 + prgs in same drive
d:\ 10gb - winXP
e:\ 5gb - temp folder + temp internet files + swap file
f:\ 40gb - games + programmes
g:\ 60gb - setups + songs + pics + videos + data
h:\ 10gb - future expansion
i:\ 20gb - future expansion
Win98 is only for backup purposes: just in case something goes wrong with winXP, my work shouldn't stop.
d:\ will have winXP ONLY, all other application programs will be installed in f:\.
e:\ and f:\ are likely to be defragged the most, so i am putting games and software on f:\ and the temp files and swap file on e:\, so that i can defrag all that stuff in one go.
g:\ will have relatively static data.
Also, i am planning to put the 160gig drive on Primary Master, the 20gig linux drive on Primary Slave, and the CD-writer on Secondary Master. Since at any given time either linux or windows will be in operation, the primary channel will never be "shared".
Any Suggestions on partitioning or the hardware setup ?
WinXP cannot format FAT32 partitions greater than 32gb, but that shouldn't be a problem as i will partition with the win98 startup disk, and win98 has no such limitation. What i am worried about is: will it affect the performance of winXP?
Also, i read somewhere that, cluster size will also increase, and so there will be more wastage of space, so should i just keep the partition sizes under 32gb :confused:
08-01-2004, 04:06 AM
Is your BIOS compatible with 48-bit LBA (http://support.microsoft.com/default.aspx?scid=kb;en-us;q303013)?
A Win98 startup disk is likely to have issues partitioning a HDD as large as this (regardless of the BIOS) so consider a third party utility or Seagate's Disk Wizard Starter Edition (http://www.seagate.com/support/disc/drivers/discwiz.html) to do the partitioning.
Well, as partitions get bigger and bigger the cluster size increases in parallel. The waste of space really depends on the average file size that you are storing. There would be no waste of space for large image or media files but significant wastage if you have loads and loads of 1 kb or other "small" files. On the up side, you will have less fragmentation with large clusters since more files will reside on single or small numbers of clusters. Personally speaking I don't get hung up on this but then I use a RAID1 which "wastes" all the space on one hard drive.
The master and slave, hdd and atapi, channel one and two debates go back a long way but on a modern system the performance effects are pretty negligible whatever way you configure them.
08-01-2004, 05:36 AM
On average, about half a cluster is wasted for every file in the partition.
Hence, to find the total wasted space [or "slack"] in a partition, you multiply half the cluster size by the number of files in the partition.
So if you use really big clusters in partitions where you store relatively few HHUUUGGgge files, there will be a relatively small quantity of slack.
If you use tiny clusters in partitions where there are ENORMOUS numbers of small files, there will still be relatively little slack.
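To put numbers on that rule of thumb, a quick Python sketch (the file counts here are made up purely for illustration):

```python
# On average each file wastes half a cluster, so expected slack is:
def expected_slack_bytes(num_files, cluster_size_bytes):
    return num_files * cluster_size_bytes // 2

# 50,000 small files on 32 KB clusters vs. 500 huge files on 64 KB clusters
print(expected_slack_bytes(50_000, 32 * 1024) // 2**20)  # 781 MB of slack
print(expected_slack_bytes(500, 64 * 1024) // 2**20)     # 15 MB of slack
```

So it's the file *count*, not the file sizes, that drives slack: a few huge files waste almost nothing even on big clusters.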
08-01-2004, 07:51 AM
The default settings when Win2K/XP format a drive are worth pondering and should make one think about the benefits, including the cluster-size-efficiency, of using NTFS. NTFS has other efficiencies in this area because small files are actually incorporated within the master file table (the $mft file, which is itself a single file) and don't spill out onto the HDD at all.
Utilising NTFS the cluster size will be 4KB for partitions from 256MB - 2TB.
Utilising FAT32 the cluster size will be 4KB for partitions of 2-8GB, 8KB for 8-16GB and 16KB for 16-32GB (the max using Win2K/XP).
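Those defaults can be written as a small lookup (Python sketch; it encodes only the ranges quoted above, and the behaviour exactly at the 8 and 16 GB boundaries is a simplification):

```python
# Default FAT32 cluster sizes for the Win2K/XP formatter, per the
# ranges quoted above (2-8GB: 4KB, 8-16GB: 8KB, 16-32GB: 16KB).
def fat32_default_cluster_kb(partition_gb):
    if not 2 <= partition_gb <= 32:
        raise ValueError("outside the 2-32 GB range Win2K/XP will format")
    if partition_gb <= 8:
        return 4
    if partition_gb <= 16:
        return 8
    return 16

print(fat32_default_cluster_kb(5))   # 4 (KB)
print(fat32_default_cluster_kb(20))  # 16 (KB)
```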
You can of course, within any allowable limits, use both these NT OSes and 3rd-Party partitioning/formatting utilities to use other cluster sizes (though this may affect file compression) but in the main most people don't bother to do this - and one would need to have fairly sound reasons for doing so.
HDD space is nowadays so cheap and huge compared to just a few years ago that tweaking such settings only complicates things - IMHO. Most of the time, particularly with NTFS, one is going to be using an 8-sector cluster (or file allocation unit) of 4KB in size. If the average file size is approaching or above this value then you can't really get any more "efficiency". There will always be some file slack unless one used single-sector clusters with one-byte sectors - clusters just one byte in size!!
To take this efficiency thinking to its extreme one needs to consider the waste of unused zero bits in each byte. This isn't theoretical and it is done in the partition tables with the CHS values where bytes are split into two and just a part of a byte is used. There was a time when each and every bit was jealously guarded!! ;)
08-02-2004, 07:42 AM
thanks for replying guys..
After researching and pondering for over 2 days over the partition sizes / cluster size / fragmentation / slack space / performance issues relating to xp, etc etc, i finally partitioned the disk this way -
c:\ 10gb - WinXP
d:\ 10gb - Win98 + Swap file of WinXP
e:\ 32gb - Software
f:\ 32gb - Games
g:\ 32gb - Setups, Linux ISOs :)
h:\ 32gb - Data: Songs,Pics etc
i:\ 12gb - Future Expansion
Paul, thanks for the 48-bit LBA patch :) ..and ya, i used Seagate's Disk Wizard Starter Edition for partitioning.
08-06-2004, 08:59 PM
Nice drive, I only have one suggestion. Putting the swap file for XP on a different partition of the same drive will not improve disk performance. Putting it on another Drive would though.
08-07-2004, 06:46 AM
hi Variable, thanks for the advice, but i am using my other hard drive solely for linux. ;)
08-07-2004, 09:30 AM
I've just readjusted the partition sizes on my 10 GB Master HDD using Samsung's version of "Ontrack Disk Manager".
Increased the C: partition to 1,800 MB; OS specified as Win98, FAT32 selected; C: partition 4 kB clusters; E: partition 64 kB clusters.
Then restored backups taken immediately before re-partitioning.
Doing this so I can make some more room on the C: drive to install Sun Java 2 RunTime Environment, version 1.4.2_05.
By-the way, I've just made and tested my HDD's using Seagate's "Seatools Desktop, Bootable Diskette".
It was telling me there were problems with my drives although they are working fine and I don't believe there's anything wrong with them.
08-07-2004, 10:13 AM
Sylvander... HUH ? :confused:
08-07-2004, 11:52 AM
Well rahulkothari, it wasn't necessarily intended to specifically relate to you and your 160 GB [not gb] drive, but:
"After researching and pondering for over 2 days over the partition sizes / cluster size / fragmentation / slack space / performance issues "
1. What cluster sizes are YOU using?
2. If your partitions began to fill, how would YOU re-arrange the sizes of each? I use restored backups.
3. Because I keep the C: drive small [1,600 MB (not mb)], de-fragmentation is quicker.
4. Because I use cluster sizes related to the file size and keep [to some degree] large files in one partition and small files in another, slack is kept to a minimum.
5. Because I keep my C: partition as small as possible, the FAT is as small as possible, there are fewer file entries and finding files is quicker. It also uses less memory to store the FAT. It can become enormous if the partition is large and FAT32 is used.
I also wondered if [since you have Seagate HDD] you might like to try their HDD testing software.
Wouldn't it be nice if...
Files had a "Virtual Home", but were actually stored in different partitions on the basis of their size, and the partitions and clusters were sized to suit?
And it was all done automatically?
Now there's an idea for a new program. :)
08-07-2004, 01:23 PM
To put things into some sort of meaningful and quantitative context. When 4kb clusters are used under FAT32, each FAT occupies, as near as damnit, 0.1% of the total space on the partition. On a 5gig partition this takes up less space than the download of a Mozilla Firebird installation.
For a given cluster size the FATs occupy the same percentage space of the partition whatever the partition size. Halving the cluster size approximately doubles the size of the FATs since there would be approx twice as many clusters to reference; and vice versa.
So having smaller clusters loses HDD space because the FATs themselves get larger but save on file-slack space on the data area of the partition itself; looks like swings and roundabouts there then. There would have to be very good reasons indeed for me to change the default settings on a modern system. Running Win31 on a 300MB HDD would be a quite different story than running WinXP on a 120GB one.
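As a rough illustration of that 0.1% figure (Python sketch; assumes the usual 4-byte FAT32 entry and the standard two copies of the FAT):

```python
# Each cluster needs one 4-byte entry in the FAT, and FAT32 keeps
# two copies of the table, so the overhead fraction is simply:
def fat_overhead_fraction(cluster_bytes, entry_bytes=4, num_fats=2):
    return num_fats * entry_bytes / cluster_bytes

# With 4 KB clusters each FAT is ~0.1% of the partition (~0.2% for both);
# halving the cluster size doubles the overhead, and vice versa.
print(f"{fat_overhead_fraction(4096):.3%}")  # 0.195% for both FATs
print(f"{fat_overhead_fraction(2048):.3%}")  # 0.391%
```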
08-07-2004, 07:37 PM
I just defragmented my E:drive and C: drive.
The files on the E:drive are typically huge.
A wave file I looked at was 90,000,000 bytes.
I think that's about 1,374 clusters of 64 kB for one file.
That's an awful lot of containers to hold something in.
And that's despite using large clusters.
If I had used 4 kB clusters that would be about 21,973 clusters for a file of that size.
And there are a LOT of files of that size on the E: drive, this file size is not untypical.
When the drive is being defragmented, typically each file will occupy almost a whole screenful of clusters.
If I could use an even bigger cluster size I would.
Now, on the C: drive, the cluster size is only 4 kB.
A major proportion of the files on the drive are smaller than a single cluster. The larger files occupy 2, 3, or 4 clusters.
The biggest file on the drive is about 5,000,000 bytes, using about 1,220 clusters, but that's most untypical.
You only need to watch the drives being defragmented to get a feel for the situation.
Using thousands of clusters to hold a file doesn't make sense.
Using a 64 kB cluster to hold a 1 kB file, then multiplying that 63 kB of waste by many thousands of files doesn't make sense either.
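The cluster counts quoted above are just the file size divided by the cluster size, rounded up to the next whole cluster; a quick Python check:

```python
import math

# Number of clusters a single file occupies at a given cluster size.
def clusters_for(file_bytes, cluster_bytes):
    return math.ceil(file_bytes / cluster_bytes)

wav = 90_000_000  # the ~90 MB wave file mentioned above
print(clusters_for(wav, 64 * 1024))  # 1374 clusters at 64 kB
print(clusters_for(wav, 4 * 1024))   # 21973 clusters at 4 kB
```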
I think of it like delivering milk.
If you supply 1 or 2 pints or 1 or 2 litres to each customer, you don't supply it in 1 or 5 gallon containers, each one of which is mostly empty. That would be wasteful and inefficient.
Similarly if you're supplying 100,000 gallons in bulk to lots of local distribution points around the nation, you don't supply each with 800,000 full pint containers if you can avoid it. Or even worse, 800,000 mostly empty gallon containers!
So you may say "Trucks are cheap these days", "and so is petrol", "and so are the roads". But perhaps that's an unfair metaphor?
Anyway, I figure waste is a bad idea, and somewhere along the line it costs, and someone pays.
08-07-2004, 08:44 PM
lol, Sylvander you are a funny fellow. I still think you probably have the most tidy and orderly home out of everyone who posts here. :)
Most people do not suffer from lack of space on their hard drive. Quite the contrary. HD space is cheap. I also think that speculating on average file size on any given PC is just that, speculation. Another thing: you're using an OS that was first released almost 7 years ago. That is about 49 years ago in dog years, which is comparable to IT years ;) NTFS has several other benefits besides size. It is much more robust. You should try jumping to a 2000-era OS and trying NTFS; then maybe you wouldn't need to reformat all the time, 98 tends to get cluttered.
I think maybe we could all pitch in and buy Sylvander a really huge drive. He would be utterly lost with all the space he has and his reaction would be wonderful to see in person. Unfortunately for me, I have no idea where West Lothian, Scotland is but without a doubt it is pretty far from the Piedmont of North Carolina.
08-07-2004, 08:54 PM
If I could use an even bigger cluster size I would.
I don't see the logic of using large cluster sizes just to accommodate large files. The main reason for using large cluster sizes is to allow the FAT file system to accommodate large partitions - since there is a limit to the number of clusters that any FAT can hold. With a finite number of clusters, the only way to increase partition sizes is to make the clusters larger. Though to be fair, this is much more of an issue under FAT16 than FAT32, which can theoretically support partitions of 2 to 4 Terabytes.
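A sketch of where those theoretical limits come from (Python; assumes the commonly cited 28-bit FAT32 cluster number, with slightly fewer clusters usable in practice because some values are reserved):

```python
# FAT32 cluster numbers are 28 bits wide, so a FAT can address at
# most about 2**28 clusters; maximum partition size is then roughly
# cluster count times cluster size (sector-count limits also apply).
MAX_FAT32_CLUSTERS = 2 ** 28

def max_fat32_partition_tb(cluster_kb):
    return MAX_FAT32_CLUSTERS * cluster_kb * 1024 / 2 ** 40

print(max_fat32_partition_tb(8))   # 2.0 TB with 8 kB clusters
print(max_fat32_partition_tb(16))  # 4.0 TB with 16 kB clusters
```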
That's an awful lot of containers to hold something in. And that's despite using large clusters.
Containers isn't a word I would have used for something as virtual as a cluster. Whether such a file is stored in 2,746 x 32kB (the max cluster size under FAT) clusters or 21,968 x 4kB clusters doesn't materially affect the number of 512-byte sectors utilised (or are sectors not containers in the same sense?). Whether defragmentation is being performed on 2,746 or 21,968 clusters, what is more relevant is the number of fragments or non-contiguous areas rather than the number of clusters per se. A completely contiguous file, even with a large number of clusters, won't need to be defragmented at all - though for other reasons it might be moved to another area of the drive.
If one really wants files to be managed much better then use NTFS where both large and small files are managed very intelligently and where defragmentation is much less of an issue than under FAT.
LOL :D - I hadn't seen Variable's post.
08-07-2004, 10:24 PM
I have to agree that today NTFS is the way to go and with the size and price of drives now, cluster size just isn't important.
As for the pagefile, while putting it on another drive may be ok, putting it on another "channel" is even better. If you can, splitting it between two drives on channels independent of the OS drive is better still.
On my main Windows system, I have a motherboard with 4 SATA connectors, and two IDE connectors, that's six independent channels.
I use four hard drives. Two 36G Raptor drives, configured in RAID 1 for mirroring, with a 128K stripe size and default cluster size, which I use for XP and all my programs. I have two other 80G SATA drives which I use for scheduled imaging/backup and data storage (as well as a second install of XP for experimenting). I have split my pagefile between the two 80G drives. By doing so, I almost get a RAID 0 effect when writing to the pagefile, writing to both at the same time (image below showing the pagefile being written to both drives at the same time, white and black lines, the red pages/sec). Mind you, I have a gig of RAM, so I am rarely accessing the pagefile.
I realize not everyone has the luxury of multiple drives, but interesting nonetheless.
08-08-2004, 02:55 PM
I used the default cluster sizes alloted by Seagate's Disk Wizard.
I made the partitions a max of 32gb only 'cause i dont want to face any problems in future, as WinXP can't format FAT32 partitions bigger than that. Sylvander, i liked your idea about keeping the c:\ drive small, but i just dont trust XP, it keeps expanding as time passes :(
As far as fragmentation is concerned, i dont think my data, software, games and winXP drives will ever get filled up, and so there will always be enough free space (more than 15%) for effective defragmentation. For the drive that holds setups, i dont care about fragmentation, because thats the drive i will be accessing the least.
Regarding performance issues, I have a pretty decent system, 1.7ghz AMD, 256mb Ram, Seagate Barracuda, WinXP... so i dont think the cluster size will make much difference.
This is what i concluded after researching and pondering for over 2 days over the partition sizes / cluster size / fragmentation / slack space / performance issues. ! ;)