
Thread: The effects of "Block Size" on USB Storage

  1. #1
    Join Date
    Nov 2008
    Location
    Aachen
    Posts
    5

    The effects of "Block Size" on USB Storage

    Hi everybody,

    I have used "dd" and "time dd" on Windows XP and Linux (SUSE) to run performance tests on some USB drives, both external hard disks and flash drives.

    On Windows XP I used Filemon and USBlyzer to monitor the traffic between the host and the USB device; on Linux I simply used usbmon.
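
    For reference, a minimal usbmon capture looks roughly like this (bus number 2 is just an example; check which bus the drive is on first):

    Code:
    modprobe usbmon                                  # load the usbmon module if needed
    mount -t debugfs none /sys/kernel/debug          # mount debugfs if not already mounted
    cat /sys/kernel/debug/usb/usbmon/2u > usb.log    # text traces for bus 2 ("0u" captures all buses)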

    I varied the "block size" (bs) over several values from 512, 1K, 2K, ... up to 1M, 2M, ... and generated files of 100 MB, 500 MB, 1 GB and 1.5 GB.
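
    A sketch of the kind of sweep I mean, writing the same 100 MB at each block size (the mount point /mnt/usb is a placeholder):

    Code:
    total=$((100 * 1024 * 1024))        # 100 MB per run
    for bs in 512 1024 4096 65536 1048576 2097152; do
        count=$((total / bs))           # keep the total size constant across block sizes
        sync
        time dd if=/dev/zero of=/mnt/usb/test.img bs=$bs count=$count conv=fsync
        rm /mnt/usb/test.img
    done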

    As a result I found that on Linux the throughput does not differ much between block sizes (bs).
    On Windows XP, however, there are some significant differences; for example, bs=4K gave the worst performance.

    Does anybody know why the block size has such a large effect on Windows XP but not on Linux? Please let me know if there are any further references where I can learn more about block size.

    These results also differ from the sqlio benchmark on Windows, where the throughput increases slightly with block size and then stays at a constant level. Does anybody know how to set up the sqlio block-size parameters so that the test is comparable to the dd test?
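
    In case it helps to line the two tools up: the sqlio switch that corresponds to dd's bs= is -b, given in KB, so a sequential-write sweep could look something like this (file name and duration are placeholders):

    Code:
    sqlio -kW -s30 -fsequential -t1 -o1 -b4   testfile.dat    # 4 KB blocks
    sqlio -kW -s30 -fsequential -t1 -o1 -b64  testfile.dat    # 64 KB blocks
    sqlio -kW -s30 -fsequential -t1 -o1 -b256 testfile.dat    # 256 KB blocks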

    Thanks a lot for your attention.

  2. #2
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    Does anybody know why the block size has such a large effect on Windows XP but not on Linux?
    It is likely to be due to different types of memory management: the amount of RAM and virtual memory on Windows versus the swap file under Linux. There can come a point when the OS "runs out of memory", particularly when using large block sizes, though in general large block sizes will result in faster file I/O operations. There would, however, be little point in using a 1 GB block size to copy a 1 KB file (or a couple of sectors if you prefer). The block size needs to be chosen in line with the amount of memory and the total amount of data to be copied.
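
    To put numbers on it: dd moves bs bytes per request, so the same total amount of data can mean very different numbers of requests:

    Code:
    dd if=/dev/zero of=test.img bs=1M count=100      # 100 MiB as 100 requests of 1 MiB each
    dd if=/dev/zero of=test.img bs=4K count=25600    # the same 100 MiB as 25,600 requests of 4 KiB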

    In addition, dd is native to Linux; even though there is a "dd for Windows" application (is that what you were using?), I doubt that it is totally analogous in the way it works.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.

  3. #3
    Join Date
    Nov 2008
    Location
    Aachen
    Posts
    5
    Hi
    Thanks a lot, Paul, for your reply. Yes, I am using "dd for Windows" under cygwin.

    How large should the virtual memory be in order to speed up USB file transfers? Currently my machine has 2 GB of RAM, and the virtual memory is set to a custom size: initial 2046 MB and maximum 4092 MB on drive C, which has 23657 MB of available space. Could you please give me suggestions or point me to any further material?

    I found that somebody posted to change "LargeSystemCache" from 0 to 1 in the registry at SYSTEM -> CurrentControlSet -> Control -> Session Manager -> Memory Management, but it doesn't work on my machine. In which cases can I use this parameter?

    As we know, the maximum payload of a USB transfer is 64 KBytes.
    I used Filemon to compare read block sizes: cp uses 8 KB and rsync uses 256 KB, so rsync is a bit faster than cp because of the bulkier transfers and lower overhead.
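
    That difference should be easy to reproduce with dd alone by copying the same file at the two observed block sizes (source file and mount point are placeholders):

    Code:
    time dd if=bigfile of=/mnt/usb/copy1 bs=8K       # cp-sized reads
    time dd if=bigfile of=/mnt/usb/copy2 bs=256K     # rsync-sized reads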

    I'm curious to know: which file transfer utilities, and how large a file, can provide the best throughput to USB storage?

    3ts

  4. #4
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    I found that somebody posted to change "LargeSystemCache" from 0 to 1 in the registry at SYSTEM -> CurrentControlSet -> Control -> Session Manager -> Memory Management, but it doesn't work on my machine. In which cases can I use this parameter?
    Not sure what you mean by "doesn't work", but the same change can be made by right-clicking My Computer >> Properties >> Advanced >> Performance >> Advanced >> Memory Usage >> Programs or System Cache. There is an MSKB article on this issue, so I guess you'll just have to try both settings with as little else as possible running on the PC in the background.
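
    For what it's worth, the same change can also be scripted from the command line; a reboot is needed before the new value takes effect:

    Code:
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f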

    I suppose that one of the problems with Windows is that so much can be going on in the background that any sort of "benchmarking" is going to be a bit compromised. I have no idea whether the Linux emulation under cygwin would function in Safe Mode but, if it did, it should cut down on the resources allocated by drivers and so forth.

    One might also think that, with the sort of direct output dd produces, it is DMA that would be of the greatest relevance, but now I'm just guessing, because when copying files with dd one imagines that the file system must first allocate some mapped area of the drive to copy the data to. This might be done with sparse files on NTFS or ext2 systems - but I'm definitely guessing now. Having a large system cache would seem to be the logical choice for dd, which one also imagines would need only a few resources for the Windows API hosting cygwin to operate in.

    There are obviously a number of issues at the heart of trying to compare dd and dd for Windows. At the most basic level one must ask: is the same partition, and thus the same file system, being accessed in both cases? The Linux NTFS drivers, for example, are different from the private code that implements NTFS natively in the NT-based versions of Windows.

    I'm curious to know: which file transfer utilities, and how large a file, can provide the best throughput to USB storage?
    If by file transfer you mean the rate of transfer along the bus, rather than the rate of writing the data to the device itself, then I doubt it makes much difference, since the data transmission at this stage is just serial transmission. At the ports and at the file-writing end of things there could be major differences between magnetic media and the various flash memory technologies.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.

  5. #5
    Join Date
    Nov 2008
    Location
    Aachen
    Posts
    5
    Thanks a lot, Paul, for the nice suggestions.
    Concerning cache and virtual memory management on Windows, I have found some interesting information about improving file copies:

    http://blogs.technet.com/markrussinovich/
    http://blogs.technet.com/markrussino...4/2826167.aspx
    http://www.codinghorror.com/blog/archives/001058.html

    In my case, as mentioned above, setting only the "LargeSystemCache" parameter to 1 doesn't improve the copy performance from my machine to USB storage. At least I realize that there are some variable latencies related to the "wear leveling" of the flash drive and to the seek/spin-up delays of the external hard drive. On the same machine with the same USB external drive, the transfer rate on Windows XP is more than 10 times slower than on Linux!

  6. #6
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    One thing you might like to try would be to use a Windows disk hex editor such as WinHex to copy the same block of data within Windows. It would essentially be doing the same thing as dd but using only the Windows API. It is a great forensic utility, though not free if you want to actually write to disk.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.

  7. #7
    Join Date
    Nov 2008
    Location
    Aachen
    Posts
    5
    Quote Originally Posted by Paul Komski View Post
    One thing you might like to try would be to use a Windows disk hex editor such as WinHex to copy the same block of data within Windows. It would essentially be doing the same thing as dd but using only the Windows API. It is a great forensic utility, though not free if you want to actually write to disk.
    I am also using WinHex to explore what data is stored on the drive. I had never thought that it could be used for copying files; I will give it a try some time.

    BTW, "eseutil" from the Microsoft Exchange Server analyzer tools is another interesting tool for speeding up large file copies. Details can be found at:

    http://blogs.technet.com/askperf/arc...py-issues.aspx
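
    If I read the article right, eseutil's copy mode (/y) does large, unbuffered sequential reads and writes; a sketch of the usage (both paths are placeholders):

    Code:
    eseutil /y bigfile.dat /d "E:\bigfile.dat"    # copy bigfile.dat to E: using unbuffered I/O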

  8. #8
    Join Date
    Oct 2001
    Location
    N of the S of Ireland
    Posts
    20,504
    Nice article; I will probably try eseutil because I always like a new toy to play with. WinHex, btw, is the only Windows application I have come across that can analyse NTFS metadata files such as the $MFT in a very straightforward manner, and can outline the exact map of a file's data on disk, LBA by LBA.
    Take nice care of yourselves - Paul - ♪ -
    Help to start using BiNG. Some stuff about Boot CDs & Data Recovery Basics & Back-up using Knoppix.
