mount two HDDs into the same mountpoint

Basher52
guru
Posts: 923
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

mount two HDDs into the same mountpoint

Post by Basher52 » Wed Nov 17, 2004 5:36 am

This just popped into my mind, and since my ISP is having problems I can't reach my box at home to try it... and since my memory isn't that good I'll probably have forgotten about it by the time I get back home from work, so I'm asking here.

It isn't possible to mount two different HDDs into the same mountpoint, is it?
E.g. you've got 'hda1' and 'hdb1' and you mount them both onto '/test/mount'.

Personally I think I would get some sort of error trying this, right?

agent007
administrator
Posts: 254
Joined: Wed Feb 12, 2003 11:26 pm

Post by agent007 » Thu Nov 18, 2004 12:13 pm

The previous mount isn't actually unmounted; the new one is just mounted over the top of it and hides it until you unmount the new one.
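
You can check it for yourself with something like this (using the 'hda1'/'hdb1' names from the example above; adjust the device names for your own box):

  mkdir -p /test/mount
  mount /dev/hda1 /test/mount       # contents of hda1 are visible here
  mount /dev/hdb1 /test/mount       # hdb1 now hides hda1 under the same mountpoint
  grep /test/mount /proc/mounts     # both mounts are still listed, hdb1 on top
  umount /test/mount                # drops the hdb1 mount, hda1 shows through again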

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Nov 18, 2004 12:39 pm

If you want to be able to use two drives under the same mount point you will have to create a logical volume or RAID set, which combines the two drives into one logical device that can be mounted under a single mount point. This is extremely common practice when you start working with larger systems.
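
With LVM2, for example, the setup might look roughly like this (only a sketch: 'hda1'/'hdb1' and '/test/mount' are taken from the example above, and the 'testvg'/'testlv' names are made up):

  pvcreate /dev/hda1 /dev/hdb1             # mark both partitions as LVM physical volumes
  vgcreate testvg /dev/hda1 /dev/hdb1      # pool them into one volume group
  lvcreate -l 100%FREE -n testlv testvg    # one logical volume spanning both drives
  mkfs.ext3 /dev/testvg/testlv             # put a filesystem on the logical volume
  mount /dev/testvg/testlv /test/mount     # both drives now sit behind one mountpoint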

Calum
guru
Posts: 1349
Joined: Fri Jan 10, 2003 11:32 am
Location: Bonny Scotland

Post by Calum » Thu Nov 18, 2004 1:33 pm

A quick question, from somebody who has no knowledge of this sort of thing:

So the two volumes get treated as one by a system using such a set, but is it then impossible to use each one separately (say if one suddenly dies, or whatever), or can they both still be used as separate volumes with whatever happens to be on them? Just curious.

Basher52
guru
Posts: 923
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Post by Basher52 » Thu Nov 18, 2004 3:04 pm

That I do know :D I think.
I used to use LVM, but when one disk went bad I couldn't get any data back. As far as I know you should be able to get the "complete" data back from the drive that is still functioning. Let's say you've got a big file that spans two different drives: that file will be corrupt, but the files that sit entirely on the functioning drive should still be recoverable. I couldn't recover them, though :P

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Nov 18, 2004 3:32 pm

Well, that has never been my experience and I don't believe it works that way. If you lose a disk in a plain old LVM then your file system is busted and you would be lucky to get any complete data back from the remaining one; that's where RAID comes in. If you use at least 3 disks (or partitions in Linux) then you can set up a RAID 5 set. Then you can lose one drive (or partition) and still have *all* of your data intact. You will lose 1/3 of your total space to parity, though.

You could also set those 2 drives up as RAID 1 (mirror); then if one fails you won't lose any of your data, but you lose the use of 50% of your total space as a penalty. A plain old logical volume is much like a RAID 0 (stripe) set: if one drive fails you lose everything.
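
In Linux software RAID terms that might look something like this (only a sketch with made-up device names; a hardware RAID controller does the equivalent behind the scenes):

  # RAID 5 across three partitions: survives the loss of any one of them,
  # at the cost of one partition's worth of space for parity
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /test/mount

  # Or, with just the two drives, a RAID 1 mirror instead: survives the loss
  # of either one, but only half of the total space is usable
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1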

Calum
guru
Posts: 1349
Joined: Fri Jan 10, 2003 11:32 am
Location: Bonny Scotland

Post by Calum » Sat Nov 20, 2004 3:57 pm

So RAID 5 trades storage space for intelligent redundancy. This is actually interesting; I will look some stuff up about this, thanks...

Void Main
Site Admin
Posts: 5716
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Sat Nov 20, 2004 5:00 pm

Calum wrote: So RAID 5 trades storage space for intelligent redundancy. This is actually interesting; I will look some stuff up about this, thanks...
Yes, I actually prefer hardware RAID to software RAID and run at least 5 disks in a RAID 5 set plus a 6th disk as a hot spare. That way you can actually lose 2 disks and still keep running as if nothing had happened: if any one of the 5 disks in the RAID set fails, the 6th disk is automatically rebuilt as the one that failed, and you can then still lose 1 of the original 5 and keep going. On the systems I have managed over the years this happened quite often. I can replace the failed disk and never have to bring the system down (the hardware has to support this, of course).

You need a minimum of 3 disks in a RAID 5 set, but as I said, when using 3 disks you give up 1/3 of your total capacity for parity. If you use 5 disks then you only give up 1/5 of your capacity for parity, a much smaller penalty, and you can go with even more than 5 disks in a set. I have a couple of reasons for preferring hardware RAID to software RAID:

1) You move the processing required for calculating parity onto the RAID controller, which has its own CPU.
2) No configuration is required on the OS side; it just sees the RAID set as one big disk.
3) Usually easier to deal with when a disk fails.

Of course, hardware RAID is usually much more expensive, but it is common on medium and high-end servers.
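
Just to illustrate the hot-spare idea in Linux software RAID terms (a hardware controller handles this itself; the device names below are placeholders):

  # 5-disk RAID 5 plus one hot spare: one disk's worth of space goes to parity,
  # so usable capacity is 4 disks out of the 5; if any member fails, the spare
  # is pulled in and rebuilt automatically
  mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1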
