1. Post #1
    Gold Member
    IpHa's Avatar
    March 2005
    1,987 Posts
    I'm having a rather annoying problem with a fresh Arch install; my hard drives don't like to keep the same device name.

    I've installed Arch on what should be /dev/sda5-7, but about half the time it fails to boot because my boot drive decides to be /dev/sdb.

    Does anyone know what's going on?

  2. Post #2
    CPPNOOB's Avatar
    March 2010
    419 Posts
    The kernel can name drives differently from how they appeared when you installed GRUB/LILO. This is why it's common practice to use UUIDs.

  3. Post #3
    Gold Member
    IpHa's Avatar
    March 2005
    1,987 Posts
    That would make them different, but it shouldn't make them change from boot to boot.

  4. Post #4
    Gold Member
    florian's Avatar
    January 2005
    292 Posts
    That would make them different, but it shouldn't make them change from boot to boot.
    AFAIK, it all comes down to how quickly each drive is detected.

  5. Post #5
    Wyzard's Avatar
    June 2008
    1,243 Posts
    Yep. Device files in /dev are assigned to disks sequentially in the order that the disks are detected by drivers. The udev daemon detects all the SATA, IDE, and SCSI controllers in your computer and issues commands to load the drivers for each, but for efficiency, it loads them all at the same time, rather than waiting for each one to finish loading before starting the next. When two drivers are loading simultaneously, it's generally a toss-up as to which will finish first (the term in programming is "race condition"), so your disks seem to rearrange themselves in /dev because the order that the drivers finish loading can vary from one boot to another.

    It's generally preferred to use stable identifiers, like filesystem labels or UUIDs, rather than /dev filenames, in your /etc/fstab. Those always identify the same partition regardless of which driver happened to load first, so you can rely on them to stay the same across reboots.
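    For example, an fstab that names partitions by UUID or label instead of /dev path might look like this (the UUIDs and label below are made-up placeholders; run blkid as root to list the real ones for your partitions):

    ```
    # <file system>                              <mount point>  <type>  <options>  <dump> <pass>
    UUID=6c29a5c2-0000-0000-0000-000000000001    /              ext3    defaults   0 1
    UUID=6c29a5c2-0000-0000-0000-000000000002    /home          ext3    defaults   0 2
    LABEL=swap                                   swap           swap    defaults   0 0
    ```

    These entries keep pointing at the same partitions regardless of whether the disk shows up as sda or sdb.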

  6. Post #6
    Gold Member
    IpHa's Avatar
    March 2005
    1,987 Posts
    I think I've fixed it. I was missing my GRUB device.map file; I fixed that and now I've been through 5 boots without a problem.

    Looks like the mappings in this file take precedence over detection order, and without it my SATA and IDE drives were switching places.

  7. Post #7
    Wyzard's Avatar
    June 2008
    1,243 Posts
    The GRUB device.map is used for translating /dev names (based on hardware detected by Linux drivers) into GRUB names (based on BIOS drive numbers). It doesn't have any influence on how things are created in /dev.
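    For reference, device.map is just a two-column table pairing BIOS drive numbers with Linux device names, something like (devices here are an example for a two-disk machine):

    ```
    (hd0)   /dev/sda
    (hd1)   /dev/sdb
    ```

    GRUB consults it when you run grub-install or the grub shell; Linux never reads it when populating /dev.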

    You didn't clearly explain the "fails to boot" part in your original post, though, so it's not clear how GRUB is involved. Do you get an error from GRUB itself? Or does GRUB successfully boot the kernel but then you get a kernel panic because it couldn't mount the root filesystem?
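    (If it's the root-mount failure: with Arch's stock initramfs you can also give the kernel its root as a stable identifier rather than a /dev name, e.g. in menu.lst — the UUID below is a placeholder, and the image name assumes the standard kernel26 package:

    ```
    kernel /boot/vmlinuz26 root=/dev/disk/by-uuid/6c29a5c2-0000-0000-0000-000000000001 ro
    ```

    The /dev/disk/by-uuid symlinks are created by udev inside the initramfs, so this form only works when an initramfs is in use.)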

  8. Post #8
    Gold Member
    IpHa's Avatar
    March 2005
    1,987 Posts
    The kernel would start to boot, then I would get dropped into a recovery shell when it failed to mount /dev/sda7.

    Another thought: maybe it's working now because I compiled my own kernel with the modules I need built in, rather than using an initramfs.

  9. Post #9
    Wyzard's Avatar
    June 2008
    1,243 Posts
    Another thought: maybe it's working now because I compiled my own kernel with the modules I need built in, rather than using an initramfs.
    This is what made the difference. When drivers are compiled into the kernel, they always load in the same relative order, so there's no race condition.
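    Concretely, the difference is whether the controller drivers are built as modules or into the kernel image. In the kernel .config, =m builds a module (loaded asynchronously at boot, so the order can race) while =y links the driver into the kernel, where drivers initialize in a fixed link order. The exact option names depend on your controllers; for a typical SATA + IDE box it might be:

    ```
    CONFIG_SATA_AHCI=y   # SATA controller driver, built in
    CONFIG_PATA_AMD=y    # IDE (PATA) controller driver, built in
    ```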