FC6 and disk larger than 2 terabytes

Place to discuss Fedora and/or Red Hat
Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Apr 05, 2007 3:41 pm

Just be aware that xfs isn't officially supported by Fedora so if things go wrong you might have a harder time getting help. Here's the quote from the fedorafaq link:
None of these file systems are officially supported by the Fedora Project. (That means that you can use them, but you won't find a lot of official help from the Fedora Project if things go wrong.)

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Thu Apr 05, 2007 3:46 pm

Yeah, I saw that, but if I need help I don't call Red Hat, I call Void Main, lol.
But I need to install that, right? Cos I can't find any tools to use.
Found the source at: http://mirrors.sunsite.dk/xfs/download/ ... 8-1.tar.gz

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Apr 05, 2007 4:10 pm

What OS and version are you running? The xfs tools should be included with FC6 since you can install it from the installation CD. Do an "apt-cache search xfs" or "yum search xfs" or "smart search xfs" and install the appropriate bits. They should also be on your install CD. Looks like "xfsprogs" would be the key package to install. I just did an "apt-get install xfsprogs" on the Void server and now I have the xfs tools and man pages:

$ man mkfs.xfs

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Thu Apr 05, 2007 4:32 pm

I use FC6. I installed from source but now I've removed that and got the RPM instead :)
Last edited by Basher52 on Thu Apr 05, 2007 5:37 pm, edited 1 time in total.

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Apr 05, 2007 5:30 pm

Here's something interesting to note if you want to install FC6 on an xfs partition:

http://blog.twisty-industries.com/users ... mpressions

I don't think you'll have that issue when creating secondary file systems as xfs, though.

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Thu Apr 05, 2007 5:49 pm

Interesting read :)

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Fri Apr 06, 2007 1:02 am

Now this XFS format was fast; it finished in 2 seconds :)

Code:

[root@ftp ~]# mkfs.ext
mkfs.ext2  mkfs.ext3  
[root@ftp ~]# mkfs.xfs /dev/sdb 
meta-data=/dev/sdb               isize=256    agcount=32, agsize=6100992 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=195231744, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=4096   blocks=0, rtextents=0

Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb              745G  1.1M  745G   1% /root/test

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Fri Apr 06, 2007 8:33 am

So you are using the whole-disk method, I see (not creating a partition). I did a lot of searching around the net last night and saw some posts that indicated more than just partitioning problems with file systems >2TB. I saw one where, under certain fairly common conditions, people would get file system corruption on several different file system formats, including xfs. Then I saw another post where a guy had created two 4TB arrays and used LVM to paste them together into one 8TB file system and didn't have any problems. I don't recall whether he was using xfs or ext3, though.
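For reference, the LVM approach described above would look roughly like this. This is only a sketch: the device and volume names are made up, I haven't verified this is what that poster did, and these commands destroy any data on the devices, so only try them on scratch disks.

```shell
# Hypothetical devices; everything on them will be wiped.
pvcreate /dev/sdb /dev/sdc            # mark both 4TB arrays as LVM physical volumes
vgcreate bigvg /dev/sdb /dev/sdc      # pool them into one volume group
lvcreate -l 100%FREE -n bigvol bigvg  # one logical volume spanning both arrays
mkfs.xfs /dev/bigvg/bigvol            # ~8TB xfs file system on top
```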

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Fri Nov 14, 2008 2:18 pm

Bumping this, cos now I'm stuck again lol
This time it's not about what TYPE of file system, cos that'll be XFS, and the non-partition version, i.e. made on /dev/sdc and not /dev/sdc1.

I've read this thread over and over, thinking about how to lower the inode count in this new setup.

[root@ftp ~]# df -i
/dev/sdc 4294967295 3 4294967292 1% /mnt/test

I can't use tune2fs -l /dev/sdc; it only tells me "Couldn't find valid filesystem superblock" and some other stuff.

That inode count of over 4 billion is just a bit too much, I think. There are also only going to be bigger files, and I'm thinking of this, from an earlier post:

Code:

-T fs-type
    Specify how the filesystem is going to be used, so that mke2fs can choose optimal filesystem parameters for that use. The supported filesystem types are:

        news
            one inode per 4kb block
        largefile
            one inode per megabyte
        largefile4
            one inode per 4 megabytes 
I've been googling how to set both of these but haven't found anything, so I wonder if you know or can find anything about this.

My parameters would be 1 million inodes, which would be about 3 or 4 times what I need, and the sizes of the files would be above the 'largefile4' size shown above.

If you can find anything that'll save some space, dang, I would buy you a toaster :) You just name your color and I'll fix it :P


Oh, btw, if you know whether it's possible to change this AFTER files are already in place, that'd be great. Even though I figure it isn't, I've started to copy files to the new disks, so if it's possible I'd save some time; otherwise I'd only lose some energy lol

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Fri Nov 14, 2008 5:35 pm

I'm not an expert on XFS by any means, but the way I understand it, XFS allocates inodes dynamically, so I don't believe changing the inode defaults will gain you much. There are inode options you can specify for mkfs.xfs, and you can set a maximum amount of space that inodes can take up, but as I said, I don't think that would buy you much. You can also change that setting after the file system is created using the xfs_growfs command. Here are the man pages:

$ man mkfs.xfs
$ man xfs_growfs

Again, I could be talking completely out of my butt.
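The inode options mentioned in the post above look roughly like this. A sketch only: the device name is hypothetical, mkfs.xfs destroys any existing data, and both commands need root, so test on a scratch disk.

```shell
# Cap inode space at 5% of the filesystem at creation time
# (hypothetical device; wipes whatever is on it):
mkfs.xfs -i maxpct=5 /dev/sdX

# Or lower the cap on an already-mounted xfs filesystem:
xfs_growfs -m 5 /mount/point
```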

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Sat Nov 15, 2008 1:44 pm

thx, but as always all that text makes no sense to me lol
I read it 4 times and tried some stuff to up the block size and to lower the number of inodes, but I don't seem to be able to do that.
So I'm asking you, who really understands these man pages, to translate them for me.

B52


UPDATE: I'll go with the setup I've got now and hope that I can reclaim some of the 400G later.
thx for all help tho :)

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Sat Nov 15, 2008 7:51 pm

I'm not sure if this is what you're after, but I just created a 100MB xfs file system with the default imaxpct of 25%, which yields 100k+ possible inodes. Then I changed the imaxpct to 5% using xfs_growfs, which caused the new max number of inodes to be only just over 20k. I'm not sure you gain anything by doing this (like you would with ext2/3), but this is how I did it:

Code:

[root@laplinux ~]# cd /tmp


[root@laplinux tmp]# dd if=/dev/zero of=xfs.dat bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 2.00808 s, 52.2 MB/s


[root@laplinux tmp]# mkfs.xfs xfs.dat 
meta-data=xfs.dat                isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096  
log      =internal log           bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0


[root@laplinux tmp]# mount xfs.dat xfs -o loop


[root@laplinux tmp]# df -i xfs
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/tmp/xfs.dat          102400       3  102397    1% /tmp/xfs


[root@laplinux tmp]# xfs_growfs -m 5 xfs
meta-data=/dev/loop0             isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
inode max percent changed from 25 to 5


[root@laplinux tmp]# df -i xfs
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/tmp/xfs.dat           20480       3   20477    1% /tmp/xfs
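The numbers in the session above follow from simple arithmetic: the inode cap is imaxpct percent of the data space, divided by the on-disk inode size. A quick sketch of that calculation (my own back-of-the-envelope formula, checked against the df -i output above, not an official XFS reference):

```python
def xfs_max_inodes(blocks, bsize, isize, imaxpct):
    """Approximate XFS inode cap: imaxpct percent of the data
    blocks, divided by the on-disk inode size."""
    return blocks * bsize * imaxpct // 100 // isize

# The 100MB loopback example: 25600 blocks of 4096 bytes, 256-byte inodes.
print(xfs_max_inodes(25600, 4096, 256, 25))  # 102400, matches df -i at imaxpct=25
print(xfs_max_inodes(25600, 4096, 256, 5))   # 20480, matches df -i after xfs_growfs -m 5
```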

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Mon Nov 17, 2008 2:18 am

just tried this, thx

Code:

[root@ftp test]# df -i
/dev/sdc             195336592       3 195336589    1% /mnt/test

[root@ftp test]# df -h
/dev/sdc              4.6T  5.1M  4.6T   1% /mnt/test
Still got 400G lost, but the inode numbers are lowered quite a bit.
Even though I won't get any space back, I think the numbers are better, but still high.
Think I'll read up more on the block size thing too, since the files placed here are gonna be bigger than 4MB, and maybe that will help some.

UPDATE: Just saw that no block size bigger than 4096 is possible, so I'll let it be.
thx again for all help :)

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Mon Nov 17, 2008 9:33 am

Ahh, I thought you were only concerned about the number of inodes, as you didn't mention in your first post that it was the 400GB of disk space you wanted to be able to utilize. I thought you were referring to the 4 billion inodes being a bit much. I don't think reducing the max # of inodes will give you any of that 400G back, and like you said, there is a 4K upper limit on the block size (the page size), so you're probably as good as you can get.

Basher52
guru
Posts: 916
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Mon Nov 17, 2008 3:26 pm

yeah think so too
