[RFA/cache.c] large fread fails on NetApp share...
Joel Brobecker
brobecker@adacore.com
Wed Apr 30 02:09:00 GMT 2008
Hello,

This is on Windows. One of our customers was having problems debugging his application, because GDB was unable to read the contents of the .debug_info section. After some investigation, we found that the reason for the failure is that his .debug_info section was pretty large (>100MB). Normally, this isn't a problem, but it turned out that his executable was also sitting on a NetApp share. Apparently, large reads fail there for no valid reason.

The workaround is to copy the executable to a local disk first, and then debug that copy. Another option that seems to work, although I'm not sure why, is to turn oplocks on - but apparently that's not recommended when using ClearCase.

The customer is pursuing solutions with both NetApp and ClearCase to see if they can get to the bottom of this, but they also noticed that splitting the read into smaller chunks does seem to work. Here is a patch that does just that. This is the only place I could find that does an fread or a read of a possibly large block, so I implemented the idea there only.

2008-04-29  Joel Brobecker  <brobecker@adacore.com>

        * cache.c (cache_bread_1): Renames cache_bread.
        (cache_bread): New function.

Tested with the GDB testsuite using two values for max_chunk_size: 0x100 and 0x800000. The first run was to make sure that we would iterate in the loop more than once, even with the smallish executables that we produce in the GDB testsuite. Note that one section of our testsuite does its testing by debugging GDB itself, which has a reasonable size (total size is about 16MB, and .debug_info size is 0x00af4747).

I think this would be a useful addition, as it could help others who are in the same situation. But it is a workaround for what I consider a bug in the filesystem, so we may not want to complicate the code to handle this case... Opinions?
--
Joel

-------------- next part --------------
Index: cache.c
===================================================================
RCS file: /cvs/src/src/bfd/cache.c,v
retrieving revision 1.32
diff -u -p -r1.32 cache.c
--- cache.c	14 Mar 2008 18:39:41 -0000	1.32
+++ cache.c	29 Apr 2008 17:30:32 -0000
@@ -250,7 +250,7 @@ cache_bseek (struct bfd *abfd, file_ptr
    first octet in the file, NOT the beginning of the archive
    header.  */
 static file_ptr
-cache_bread (struct bfd *abfd, void *buf, file_ptr nbytes)
+cache_bread_1 (struct bfd *abfd, void *buf, file_ptr nbytes)
 {
   FILE *f;
   file_ptr nread;
@@ -301,6 +301,33 @@ cache_bread (struct bfd *abfd, void *buf
 }
 
 static file_ptr
+cache_bread (struct bfd *abfd, void *buf, file_ptr nbytes)
+{
+  file_ptr nread = 0;
+
+  /* Some filesystems are unable to handle reads that are too large
+     (for instance, NetApp shares with oplocks turned off).  To avoid
+     hitting this limitation, we read the buffer in chunks of 8MB max.  */
+  while (nread < nbytes)
+    {
+      const file_ptr max_chunk_size = 0x800000;
+      file_ptr chunk_size = nbytes - nread;
+      file_ptr chunk_nread;
+
+      if (chunk_size > max_chunk_size)
+        chunk_size = max_chunk_size;
+
+      chunk_nread = cache_bread_1 (abfd, buf + nread, chunk_size);
+      nread += chunk_nread;
+
+      if (chunk_nread < chunk_size)
+        break;
+    }
+
+  return nread;
+}
+
+static file_ptr
 cache_bwrite (struct bfd *abfd, const void *where, file_ptr nbytes)
 {
   file_ptr nwrite;