[lfs-dev] perl tests loop - possibly sorted

Ken Moffat zarniwhoop at ntlworld.com
Tue Aug 9 13:39:51 PDT 2016


On Mon, Aug 08, 2016 at 08:10:15PM -0500, Bruce Dubbs wrote:
> Ken Moffat wrote:
> > I think I might have found the main problem with these builds
> > (ignoring the sporadic segfaults on this box) - When I booted this
> > box to start the first build, I noticed that icewm's CPU monitor on
> > its taskbar was mainly showing red for the active percentage,
> > instead of the normal green [ normal is occasionally green with a
> > small amount of red in some circumstances ] and it had continued
> > like that.  Looking at the icewm source the other night, I _think_
> > that red means time in syscalls.  Now, it is back to green.
> > 
> > So, I guess that CONFIG_LEGACY_VSYSCALL_EMULATE=y was a very bad
> > idea.
> 
> I'm not familiar with that item, but checking:
> 
> $ grep CONFIG_LEGACY_VSYSCALL_EMULATE config*
> config-4.4.2-lfs-7.9-rc2:CONFIG_LEGACY_VSYSCALL_EMULATE=y
> config-4.6.2-lfs-7.9-1:CONFIG_LEGACY_VSYSCALL_EMULATE=y
> 
> My kernel is running config-4.6.2-lfs-7.9-1 and that's what I was running
> with my last test build.

If I can, I always prefer to upgrade to the same version of the
kernel (if I have not already done that) before starting a build -
if there is a problem with the new version on my hardware, much
nicer to find that out whilst I still have a full system.  Plus, I
like to test kernels to watch for breakage.

I suspect that most people don't need the emulation, and probably
many people don't test it.  I had turned it off on the haswell
because I had to go through all the details of the config, but on
the phenom something in the changes I listed has solved the
problem (I've now run memtest86+ for 12 hours in round-robin mode,
and got past perl in the new build).
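
(For anyone else poking at this: as I understand it, in current 4.x
kernels the vsyscall page is one of CONFIG_LEGACY_VSYSCALL_NATIVE,
CONFIG_LEGACY_VSYSCALL_EMULATE or CONFIG_LEGACY_VSYSCALL_NONE, and you
can check what a running kernel was built with - assuming
CONFIG_IKCONFIG_PROC, or a config saved in /boot:

  $ zcat /proc/config.gz | grep LEGACY_VSYSCALL
  $ grep LEGACY_VSYSCALL /boot/config-$(uname -r)

There is also a vsyscall=native|emulate|none boot parameter, so the
choice can be tested without rebuilding.)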
> 
> I will note that when I was testing the new glibc and binutils, I was
> testing one at a time and did get spurious hangs in a couple of random
> tests, once in findutils, once in another non-toolchain package (can't
> remember which one though), and some other issues.  In at least one case,
> the whole system hung.  At times top showed a load level of 15 (at -j1). In
> a couple of cases I was able to terminate the build, but then tried
> re-running make which picks up with the last package not completed.  In each
> case the build then completed normally.
> 

Fun!  BTW, thanks for all your efforts on those two packages.
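
I assume your build is driven by a jhalfs-style Makefile; for anyone
puzzled by make "picking up", it is just stamp-file targets - a
minimal sketch (the script names here are invented):

all: binutils

# each completed package leaves a stamp file; recipe lines are tabs
glibc:
	./build-glibc.sh && touch glibc

binutils: glibc
	./build-binutils.sh && touch binutils

If a script dies, its stamp is never touched, so re-running make redoes
only the unfinished package and everything after it.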

> In other words the error was transient.
> 

The best errors are - otherwise they would be easier to replicate
and test fixes for.

> I've built with both of the latest glibc and binutils twice now and the
> problems did not recur, so I am thinking there is some interaction there.
> 
> > The test of perl-5.24 on the old (20150610) host system was *much*
> > better:
> > 
> > All tests successful.
> > Elapsed: 624 sec

And in chroot the tests only took 601 sec elapsed time - something
seems to have improved slightly.
> 
> I have strictly Intel systems, but I'm attaching my current config file.
> 
>   -- Bruce
> 
> 

The problem with a full config is that there is so much which may
be specific to a particular machine ( hardware monitoring, perhaps
different filesystems, different nic, different firmware, perhaps
different cpufreq, scheduler, io scheduler ).

When I diffed the configs of the haswell and the phenom, both using
the 4.7.0 release, I had over 1600 lines of output (at least 'view'
colours it), and much of it was just noise.  It also showed that I
have not bothered to fix the config on the haswell for qemu - things
like CONFIG_HIGH_RES_TIMERS are not set.  And some things I thought
I didn't want, such as CONFIG_CGROUPS, are pulled in by
CONFIG_SCHED_AUTOGROUP, which sounds like a good idea on this
power-hungry desktop box.  The haswell uses less power, as well as
being quicker, so I won't bother enabling it there.
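
If anyone wants a less noisy comparison, the kernel tree ships a
helper for exactly this:

  $ cd /path/to/linux-4.7
  $ scripts/diffconfig /path/to/config.haswell /path/to/config.phenom

(the config file names are placeholders) - it prints only the options
which differ, one per line, instead of a full diff.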

ĸen
-- 
`I shall take my mountains', said Lu-Tze. `The climate will be good
for them.'     -- Small Gods

