[LFS Trac] #2057: Udev-122

Gerard Beekmans gerard at linuxfromscratch.org
Wed May 21 21:41:03 PDT 2008

I've just caught up on all the recent comments, and I think I understand 
the root problem regarding multiple interfaces: the need to configure 
them before the first reboot so that, once you do reboot, you have a 
chance to log in remotely.

After the first reboot it's not always known which of multiple cards 
ends up being eth0, eth1, and so on, since the assignment is effectively 
random by default, unless we come up with a way to make it deterministic 
during our chroot stage.

The comments further went on about certain kernel versions, having udev 
running on the host system, etc. That's problematic ground to walk on. 
We don't know if the host system has a new enough kernel. It may not 
even run udev.

I hope I'm not repeating old ideas here. I didn't see these listed so 
here goes.

Rather than trying to fix udev (it sounds like every solution there is 
hackish and risks breaking any day now with new releases), how about we 
fix our network setup instead? We can easily get a lot more advanced at 
this.

This email may get long. Please bear with me.


Scenario 1: you have multiple network cards but only one card is plugged 
into an actual cable and thus has a link. That card, whichever it is, 
needs to be activated.

Possible solution 1: install dhclient in LFS rather than BLFS. Configure 
the bootscripts to run dhclient on every interface. Only one interface 
will receive an IP address from a router, i.e. the address the machine 
needs in order for you to get remote access.
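A minimal sketch of what such a bootscript fragment might look like, 
assuming iproute2 and a mounted sysfs; the function name and the 
SYSFS_NET override are my own inventions for illustration, not actual 
LFS bootscript code:

```shell
# Hypothetical bootscript fragment, not actual LFS code.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}   # overridable for testing

start_dhcp_all() {
    for dev in "$SYSFS_NET"/*; do
        iface=${dev##*/}
        [ "$iface" = lo ] && continue    # skip the loopback device
        ip link set "$iface" up          # bring the interface up
        dhclient -nw "$iface"            # -nw: background, don't block boot
    done
}
```

Whichever interface actually has a router behind it gets the lease; the 
others simply never hear an offer.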

At this point you don't yet care whether the kernel called that 
interface eth0, eth1, or something else. You can fix that after the 
first reboot if needed.

Possible solution 2: if a static IP is needed, configure every network 
interface with the same IP address.

Enhance the network boot scripts to first check for a link before 
assigning the IP address.
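A rough sketch of such a link check, assuming sysfs exposes a carrier 
attribute per interface (it reads 1 when a cable is detected); the 
address, the function names, and the SYSFS_NET override are hypothetical:

```shell
# Hypothetical sketch, not actual LFS bootscript code.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}   # overridable for testing
STATIC_IP=${STATIC_IP:-192.168.1.2/24}   # example address, an assumption

has_link() {
    # carrier reads 1 when the NIC detects a cable, 0 (or error) otherwise
    [ "$(cat "$SYSFS_NET/$1/carrier" 2>/dev/null)" = 1 ]
}

assign_static_ip() {
    for iface in "$@"; do
        ip link set "$iface" up
        sleep 2                          # give the link time to negotiate
        if has_link "$iface"; then
            ip addr add "$STATIC_IP" dev "$iface"
            echo "$iface"                # report which interface won
            return 0
        fi
    done
    return 1
}
```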

Scenario 2: multiple cards are all plugged into networks. Likely 
different physical networks so "solution 2" above isn't going to work.

Solution 1 from above may work, but only if the location you are at 
offers DHCP. Most data centers won't offer that, so you're back to 
having to assign two static IP addresses to the proper interfaces. Mix 
them up and nothing works.

It wouldn't be an unfair assumption that only one of the plugged-in 
networks is the one over which you, the builder, can reach the machine 
(the public Internet or a LAN).

You know your current IP address. You know the IP address you need the 
server to have in order for you to connect to it. Have a bootscript 
assign the server's IP address to the first interface. Have the server 
try to ping you. If the ping fails, have the script move the IP address 
to the next interface until it's able to "find" your own computer.
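A rough sketch of that probe loop; the addresses, interface list, and 
function name are hypothetical, and the ping flags assume iputils ping 
(-c count, -W per-ping timeout):

```shell
# Hypothetical sketch; the addresses are examples, not real settings.
SERVER_IP=${SERVER_IP:-10.0.0.2/24}   # address the server must end up with
BUILDER_IP=${BUILDER_IP:-10.0.0.1}    # your own machine's address

find_reachable_iface() {
    for iface in "$@"; do
        ip link set "$iface" up
        ip addr add "$SERVER_IP" dev "$iface"
        # three pings, two-second timeout each (iputils flags)
        if ping -c 3 -W 2 "$BUILDER_IP" >/dev/null 2>&1; then
            echo "$iface"             # this card reaches the builder
            return 0
        fi
        ip addr del "$SERVER_IP" dev "$iface"   # move the address along
    done
    return 1
}
```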

Then you log in and clean up the networking files, now that you know 
what is what.


I would wager that "scenario 1" is the one we will primarily encounter. 
If not, would it be fair to assume that when "scenario 2" applies, you 
ask a person on-site to unplug the extra network cables so only one is 
plugged in, the one through which you are building LFS? Later on you 
can have the on-site techs plug the other cables back in.

There are a lot more scenarios that come to mind, having worked in data 
centers and set up ISP networks. You're going to find situations where 
you have multiple network cards all connected to the public Internet 
via load-balancing setups. Or just redundancy, and then you have your 
lovely BGP setups. How many do we need to support for that first 
reboot, though, after which things will change anyway as your config 
settles down?
