From Tim on Wed, 08 Sep 1999
Hello,
I have a box on my network running RedHat 6.0 (x86) that will be used primarily for backing up large database files. These files are presently about 25 GB each. While attempting a backup over Samba, I realized that the filesystem would not allow me to write a file larger than 2 GB to disk. I tried a large-file patch for kernel 2.2.9, but that only allowed me to write 16 GB, and even that seemed buggy: an 'ls -l' showed the backup's file size as about 4 GB, yet the block total for the directory (with no other files in it) indicated a much larger amount, like so:
    [root@backup ]# ls -l
    total 16909071
    -rwxr--r--   1 ntuser   ntuser   4294967295 Sep  2 19:45 file.DAT
I am well aware that a 64-bit system would be the best solution at this point, but unfortunately I do not have those resources. I know BSDi can write files this big, and so can NT on 32-bit systems. I am left wondering: why can't Linux?
Thanks -Tim
Linux doesn't currently support large files on 32-bit platforms: the off_t file offset there is a signed 32-bit integer, which caps a single file at 2^31 - 1 bytes, just under 2 GB. Your 'ls' output shows the classic overflow symptom: the size field is pinned at 4294967295 bytes (2^32 - 1, the largest value a 32-bit counter can hold), while the block total (16909071 blocks of 1 KB, roughly 16 GB) reflects what was actually written. I wouldn't trust an experimental patch with this job.
Use FreeBSD for this (I've heard that it supports 63-bit lseek() offsets; off_t is a signed 64-bit type there, so the largest usable offset is 2^63 - 1). Samba works just as well on FreeBSD as it does on Linux.
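Here's a minimal sketch of what seeking and writing past the 2 GB barrier looks like through the POSIX API, assuming a platform with a 64-bit off_t (FreeBSD natively, or a libc built with large-file support). The filename and the 5 GB target are just for illustration:

    /* sketch.c: seek and write past 2 GB; assumes a 64-bit off_t.
       With a 32-bit off_t the 5 GB target below can't even be
       represented, which is exactly the wall Tim is hitting. */
    #define _FILE_OFFSET_BITS 64  /* ask libc for a 64-bit off_t where supported */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("file.DAT", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        off_t target = (off_t)5 * 1024 * 1024 * 1024;  /* 5 GB, past 2^31 - 1 */
        if (lseek(fd, target, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }
        if (write(fd, "x", 1) != 1) { perror("write"); return 1; }

        close(fd);
        return 0;
    }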
If you really need to use Linux for this project, then run it on an Alpha, where off_t is already 64 bits.
Note: You could back these up to raw partitions (without filesystems made on them), which sidesteps the per-file size limit entirely. However, I wouldn't recommend that.
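For completeness, the mechanics of the raw-partition approach are just a sequential copy onto the device node. A minimal sketch, assuming /dev/sdb1 is a dedicated spare partition (a placeholder name; writing to the wrong device destroys its contents, which is a large part of why I don't recommend this):

    /* rawcopy.c: stream stdin onto a raw partition, no filesystem.
       /dev/sdb1 is a placeholder; double-check the device name,
       because this overwrites whatever is on it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64 * 1024];
        ssize_t n;
        int out = open("/dev/sdb1", O_WRONLY);  /* the spare partition */
        if (out < 0) { perror("open /dev/sdb1"); return 1; }

        /* Plain sequential copy: no filesystem metadata is involved,
           so there is no per-file size field to overflow. */
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }
        }
        close(out);
        return 0;
    }

In practice 'dd' does the same job in one line; the point is just that the device itself imposes no 2 GB cap.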