[links-list] Re: bug? (scanning NUL characters is VERY slow)

Ludvik Tesar tesar at mee.tcd.ie
Fri Nov 29 09:39:38 PST 2002

It is very interesting. I cannot reproduce this in links -g. However, I
can reproduce it in the text version of links (in an xterm): it took almost
2 minutes to render your testfile.html document. I have links-2.1pre7.


On Fri, 29 Nov 2002, José Luis González González wrote:

> Hi,
> I noticed that both Links and ELinks take a lot of time parsing files
> with NUL characters.  The time seems to grow at least in proportion to
> the number of NUL characters.
> It's easy to reproduce:
> $ cat testfile.html
> <html>
> <head>
> <title>Testfile</title>
> </head>
> <body>
> <p>This file includes NUL characters</p>
> $ dd if=/dev/zero bs=1k count=70 >>testfile.html
> $ echo '</body></html>' >>testfile.html
> $ time links -dump testfile.html >/dev/null # This will be very slow
> Since some of you may think an HTML document should never contain them,
> take a look at http://www.joelonsoftware.com/navLinks/fog0000000247.html
> NUL characters should be ignored when scanning, so where does the
> overhead come from?  Are they actually not ignored?
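
As to where the overhead might come from: one plausible cause (an
assumption on my part, not something confirmed from the links source) is
that somewhere the parser uses C string functions, which treat NUL as an
end-of-string terminator, and then re-scans the buffer to resynchronize
after each one. A toy model of that behavior shows how the cost blows up
with the number of NUL bytes:

```python
# Toy model only -- NOT links' actual code. Compares a scanner that
# ignores NUL bytes in a single pass with one that, strlen()-style,
# stops at every NUL and re-walks the consumed prefix to find its place.

def scan_skipping_nuls(buf: bytes) -> int:
    """One pass; NULs are simply skipped. Returns bytes examined."""
    return len(buf)  # every byte is looked at exactly once

def scan_restarting_at_nuls(buf: bytes) -> int:
    """Model of strlen()-based scanning: each NUL ends the 'string',
    and the caller re-scans from the start of the data to resume."""
    examined = 0
    pos = 0
    while pos < len(buf):
        end = buf.find(b"\x00", pos)   # strlen() walks to the next NUL
        if end == -1:
            examined += len(buf) - pos
            break
        examined += end - pos + 1      # bytes walked up to the NUL
        examined += end                # re-scan of the consumed prefix
        pos = end + 1
    return examined

data = b"<p>x</p>" + b"\x00" * 1000
print(scan_skipping_nuls(data))        # linear in input size
print(scan_restarting_at_nuls(data))   # grows quadratically with NULs
```

With 70 KB of NULs, as in the testfile, that quadratic term would easily
account for minutes of CPU time; but this is only a guess until someone
checks the actual scanning code.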

Ludvik Tesar
Dept. of Electronic & Electrical Engineering,
University of Dublin, Trinity College, Dublin 2, Ireland
E-mail: tesar at mee.tcd.ie
Tel. +353-1-6083818   Fax. +353-1-6772442

