View Full Version : javascript & sys requirements



TomK
08-28-2006, 06:23 PM
Are there any system requirements for running javascript?

In playing with the "Ultimate Fade In Slide Show" it works fine on systems with a P3 and up but will not work on P2 or lower using IE. All systems work fine with firefox (imagine that).

All systems loaded the same with the same OS and svc packs Etc.

Thanks

Twey
08-28-2006, 06:28 PM
It's impossible to judge accurately how many resources JavaScript will take up, since every browser does things differently. I'm quite surprised at those results, actually; Firefox is generally considered to be a bit of a resource hog.

TomK
08-29-2006, 02:22 AM
There is a site using the script I'm talking about, if you want to try it. I hope someone might have more insight into the problem. The P2 works with FF but NOT IE6 SP1.

www.thewharfal.com

Thanks

Twey
08-29-2006, 02:17 PM
Unfortunately, due to a bug in my build of Firefox, I'm unable to test it for you :-\

blm126
08-29-2006, 04:04 PM
Works fine in Firefox 2 beta 1 and Internet Explorer 6 SP1 for me.

TomK
08-29-2006, 04:57 PM
thanks for trying this.

blm126, if you don't mind, what processor are you using? What I'm wondering is why that script won't work with a processor below a P3. A lot of other sites have scripts that appear to work fine on those machines.

It's just one of those things I would like to understand.

Thanks again

blm126
08-29-2006, 06:12 PM
AMD Sempron 2600+
Also tested a P3 866MHz.
Both worked. Sorry, that is the slowest computer I own (well, I do own a P2, but it is a FreeBSD machine with no GUI).

TomK
08-29-2006, 06:32 PM
thanks for trying.
It must be an IE thing, since FF works fine on the P2. I just never realized there might be a problem; I discovered it by accident. btw, FF loads faster on the P2 than IE.

Thanks

Twey
08-29-2006, 06:59 PM
FF loads faster on the P2 than IE

That's not good news for Microsoft at all, since most of IE is already preloaded :p Firefox should take much longer to start.

mwinter
08-29-2006, 07:17 PM
... Firefox is generally considered to be a bit of a resource hog.

As far as I'm aware, that's only in relation to how it handles graphics; it tends to accumulate the data in memory rather than flush it out periodically. I'm not sure if that's a platform-specific problem (I seem to remember it being described in reference to X), but certainly something similar occurs in Windows.

Whilst taking a quick look at Bugzilla, I saw that part of my problem could be due to the AdBlock extension, in which serious memory leaks have apparently been discovered in the past (the bug report itself was regarding something else).

Mike

Twey
08-29-2006, 07:34 PM
Is that so? I'll disable it and see if it speeds her up any.

TomK
08-29-2006, 11:23 PM
Sorry, I should have been clearer. FF takes a little longer to start up, but not much, and I believe the pages load faster.

You have probably seen this already, but I read the following and it seems to help some.


"Here's something for broadband people that will really speed Firefox up:

1.Type "about:config" into the address bar and hit return. Scroll down and look for the following entries:

network.http.pipelining
network.http.proxy.pipelining
network.http.pipelining.maxrequests

Normally the browser will make one request to a web page at a time. When you enable pipelining it will make several at once, which really speeds up page loading.

2. Alter the entries as follows:

Set "network.http.pipelining" to "true"

Set "network.http.proxy.pipelining" to "true"

Set "network.http.pipelining.maxrequests" to some number like 30. This means it will make 30 requests at once.

3. Lastly, right-click anywhere and select New -> Integer. Name it "nglayout.initialpaint.delay" and set its value to "0". This value is the amount of time the browser waits before it acts on information it receives.

If you're using a broadband connection you'll load pages MUCH faster now!"
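
For what it's worth, I gather the same settings can also go in a user.js file in the Firefox profile folder, so they survive pref resets. A rough sketch using just the pref names from the quote above (the values are the ones the tip suggests, not a recommendation):

    // user.js -- read and applied each time Firefox starts
    user_pref("network.http.pipelining", true);
    user_pref("network.http.proxy.pipelining", true);
    user_pref("network.http.pipelining.maxrequests", 30);
    // this pref doesn't exist by default; user.js creates it
    user_pref("nglayout.initialpaint.delay", 0);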

mwinter
08-30-2006, 02:33 AM
Here's something for broadband people that will really speed Firefox up:

Claims like that should be taken with a pinch of salt. Lots of factors affect network performance. Fiddling with a few settings may improve performance under some conditions, but not always.



Normally the browser will make one request to a web page at a time. When you enable pipelining it will make several at once, which really speeds up page loading.

That is, assuming the server supports pipelining, implements it properly, and dynamic content behaves itself. That's quite a lot of "if"s, particularly the last one, which typically doesn't hold.

To understand why there's so much dependency on network performance, it's necessary to understand how pipelining works.

Under the traditional HTTP/1.0 connection behaviour, a client will connect to a server, send a request, receive the response, then terminate the connection. If another request needs to be made, this connection needs to be re-established. This causes a lot of packets to be generated, increasing congestion.
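
To make that concrete, here's roughly what fetching a page and one image looks like on the wire under HTTP/1.0 (example.com and the file names are just placeholders):

    -- connection opened --
    GET /index.html HTTP/1.0
    Host: www.example.com

    HTTP/1.0 200 OK
    ...page data...
    -- connection closed by server --

    -- new connection opened --
    GET /logo.gif HTTP/1.0
    Host: www.example.com

    HTTP/1.0 200 OK
    ...image data...
    -- connection closed --

Every fetch pays the full cost of setting up and tearing down a TCP connection.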

HTTP/1.1 introduced persistent connections. Instead of dropping the connection once the client receives the response, it is maintained and reused to send the next request. This reduces the number of packets used, but it's still not very efficient. There's also a limitation: the length of each request and response must be known. Without that information, there's no reliable way to distinguish between a pause and the end of the data stream. A request (typically POST) or response (almost any) that contains a message body, but doesn't include a Content-Length header (or some other transfer length) must terminate the connection to indicate the end of data. This sets us back to something like HTTP/1.0.
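
The same two fetches over a single persistent connection would look something like this (placeholder names and lengths again):

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Length: 1542
    ...exactly 1542 bytes of page data...

    GET /logo.gif HTTP/1.1        (same connection, reused)
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Length: 8034
    ...exactly 8034 bytes of image data...

The Content-Length headers (or a chunked transfer coding) are what let the client tell where one response ends and the next begins; without them, the server has no way to mark the end of the body except by closing the connection.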

The pipelining mechanism specified by HTTP/1.1 builds upon persistent connections, but rather than sending a request and waiting for the response to be completed before sending the next request, several are sent at once and the responses are returned consecutively in request order. This further improves performance by allowing data to be bundled into larger packets, but it also inherits the limitations of persistent connections. It also adds to them, as only idempotent methods should use pipelining: if the connection fails because the server doesn't support pipelining, resending a non-idempotent request may cause unwanted side-effects. The other drawback is that if a server doesn't support pipelining, bandwidth will be wasted as the server will refuse the requests and they will need to be sent again.
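
With pipelining, the requests go out back-to-back before any response arrives, and the responses come back in the same order (same placeholder names):

    GET /index.html HTTP/1.1
    Host: www.example.com

    GET /logo.gif HTTP/1.1
    Host: www.example.com

    GET /style.css HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK        (response to /index.html)
    ...
    HTTP/1.1 200 OK        (response to /logo.gif)
    ...
    HTTP/1.1 200 OK        (response to /style.css)
    ...

All three requests can travel in one or two packets instead of six or more.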

As I said previously, dynamic content has a significant influence on this and persistent connections in general. Most dynamically-generated content does not send a transfer length. As a result, requests for that data will always cause connections to be dropped. Such content also tends not to send useful caching information, which is another significant feature that HTTP/1.1 builds upon and has the potential for enormous savings. The best-case effect is to obviate the need for requests entirely. In most other cases, the message body of a response is unnecessary.
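
As a sketch of that last point (the date and names are invented): a properly cached image costs just a revalidation,

    GET /logo.gif HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Tue, 29 Aug 2006 10:00:00 GMT

    HTTP/1.1 304 Not Modified

and no message body crosses the wire at all; the browser reuses its local copy. With an Expires or Cache-Control: max-age header, even that request can be skipped until the copy goes stale.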

I don't know for certain how well pipelining is supported server-side across the Web, and it may not be easy to find out. However, there certainly are servers that are broken and may cause sites to load incorrectly.



Set "network.http.pipelining.maxrequests" to some number like 30. This means it will make 30 requests at once.

There are two gross miscalculations there. The first is the obvious wasted bandwidth as described above. The second is load balancing.

Of all of the data that will be downloaded, some resources will be larger than others. There's no way for the client to know what the distribution is without querying the server (which would be a waste of bandwidth in itself). This may mean that one connection gets lumbered with much more data than the other. Though the data will still be downloaded very quickly, there will be an illusion of a reduction in performance. Usually there will be two persistent connections at a time (if anyone tells you to increase that number, beat them to death with a shovel; it's bad for the network!), and this means that data for two resources can be received asynchronously. If all of the data eventually comes down only one connection, resources will be loaded synchronously.

Though you can increase the number of pipelined requests, thirty is too high: ten is probably the useful limit.



3. Lastly right-click anywhere and select New-> Integer. Name it "nglayout.initialpaint.delay" and set its value to "0". This value is the amount of time the browser waits before it acts on information it receives.

Laying out a document as soon as there's data isn't really that sensible; you'll just increase CPU usage with more frequent reflowing. With so much table-based stuff on the Web, browsers can't render documents without having a fairly good idea of the structure anyway.



If you're using a broadband connection you'll load pages MUCH faster now!

Again: with a pinch of salt! A lot of information about how to increase connection speed is rubbish and only gives the illusion of better performance. As noted with the number of connections, some of this information also has the potential effect of adding strain to networks by flooding servers and routers.

Mike

TomK
08-30-2006, 12:38 PM
Good explanation. Like I said, it was something I read and thought I would pass along. The mods are probably not worth doing after all.

I agree, I take everything on the internet with a grain of salt.

Thanks

TomK
08-30-2006, 12:54 PM
Follow-up to original question:

I used the P2 machine, went to the image script section on Dynamic Drive, and looked at the demos with IE. Some of the demos did not work at all; some worked fine. The carousel scripts worked, but were jerky.

All scripts worked fine and smoothly with FF on the same machine. I guess that's just the way it is. It must be a conspiracy to make everyone buy a new system.

FireFox Rules!

Thanks