SCOUG-HELP Mailing List Archives
 
 
It's not immediately clear where the speed bottleneck is, if you
can even call it a "bottleneck" when there are 44,000 directory
entries to process!
 
One of the shortcomings of the OS/2 HPFS driver is its 2MB
software cache limit; HPFS386 allows an OS cache of up to 64MB.
Perhaps the 2MB cache once seemed large, but it no longer does.
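
For reference, the cache is set on the IFS line in CONFIG.SYS; a
minimal sketch (the paths and the /CRECL and /AUTOCHECK values
here are typical examples, not a prescription):

    REM Plain HPFS: /CACHE is in KB and tops out at 2048 (2MB)
    IFS=C:\OS2\HPFS.IFS /CACHE:2048 /CRECL:4 /AUTOCHECK:C

    REM HPFS386 replaces that line; its (much larger) cache is
    REM configured in HPFS386.INI rather than here
    IFS=C:\IBM386FS\HPFS386.IFS /AUTOCHECK:C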
 
Then there's HD and bus speed as possible limiting factors. I
don't think a 200MHz Pentium would be the limiting factor in
processing that command; I'd suspect the HD. Seek time is a big
factor when reading lots of small pieces of data, in this case
directory entries, located "all over" the drive.
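
A rough back-of-envelope, with all figures assumed rather than
measured: say ~12ms average seek for a drive of that era, 2KB
HPFS directory blocks holding on the order of 40 entries each,
and one seek per block:

    44,000 entries / ~40 per 2KB block  =  ~1,100 blocks
    ~1,100 seeks x ~12ms per seek       =  ~13 seconds

Add rotational latency and re-reads once the 2MB cache starts
evicting blocks, and "the best part of a minute" is at least the
right order of magnitude.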
 
Defraggers don't reorganize the HPFS directory entries, which
are kept alphabetical in bands every 8MB across the disk. So the
logical file hierarchy doesn't correspond to the physical file
placement in any organized way, at least not in any way that
groups the *.lst entries together so the heads don't have to
seek all over the place. Seeking is the most time-consuming
operation on older HDs, and still a major factor (roughly half
the access time) on the fastest modern drives.
 
I guess what I'm trying to say is that this particular command
doesn't actually read the files, only the directory entries, and
those aren't reorganized by the defragger for fast access; the
HPFS driver instead forces them to be alphabetical by filename.
 
Perhaps someone who understands HPFS data organization better   
than I would have a different perspective.    
 
  --Steve    
 
  --=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--     
On 9/16/03, Peter Hooper wrote, in part:  
> ...  
>The Graham Utilities work great,   
>  
>Problem is that I still have the same problem I thought it would fix.   
>Which is that it takes the best part of a minute to do a   
> "DIR *.LST>NUL"   
>where there are 44,000 ".LST" files!  
>  
>If anyone has any ideas on how to optimise
>(I've played with the cache size - it only goes up to 2MB)  :(
>  
>(It's a 200MHz Pentium with 64MB RAM)
>  
>Any suggestions greatly appreciated!   
>Even if it's just to say that's how long it takes for that many files.
 
 
 