Friday, February 4, 2011

JavaScript and CSS parsing performance

I am trying to improve the performance of a web application. I have metrics I can use to optimise the time taken to return the main HTML page, but I'm concerned about the external CSS and JavaScript files included from these HTML pages. These are served statically, with HTTP Expires headers, but are shared between all the pages of the application.

I'm concerned that the browser has to parse these CSS and JavaScript files for each page it displays, so combining all of the site's CSS and JavaScript into common files will hurt performance. Should I split these files up so that each page links only to the CSS and JavaScript it actually needs, or would I get little return for my effort?

Are there any tools that could help me generate metrics for this?

  • I believe YSlow does, but be aware that unless all requests are over a loopback connection you shouldn't worry: the HTTP overhead of split-up files will hurt performance far more than parsing will, unless your CSS/JS files exceed several megabytes.

  • Context: While it's true that HTTP overhead is more significant than parsing JS and CSS, ignoring the impact of parsing on browser performance (even if you have less than a meg of JS) is a good way to get yourself in trouble.

    YSlow, Fiddler, and Firebug are not the best tools for monitoring parsing speed. Unless they've been updated very recently, they don't separate the time spent fetching JS over HTTP (or loading it from cache) from the time spent parsing the actual JS payload.

    Parse speed is somewhat difficult to measure (a crude way to approximate it is sketched after these answers), but we've chased this metric a number of times on projects I've worked on, and the impact on pageloads was significant even with ~500k of JS. Obviously the older browsers suffer the most... hopefully Chrome, TraceMonkey and the like help resolve this situation.

    Suggestion: Depending on the type of traffic you have at your site, it may be well worth your while to split up your JS payload so that large chunks of JS that will never be used on the most popular pages are never sent down to the client. Of course, this means that when a new client hits a page where that JS is needed, you'll have to send it over the wire.

    However, it may well be the case that, say, 50% of your JS is never needed by 80% of your users due to your traffic patterns. If so, you should definitely use smaller, packaged JS payloads only on the pages where that JS is necessary (see the on-demand loading sketch after these answers). Otherwise 80% of your users will suffer unnecessary JS parsing penalties on every single pageload.

    Bottom Line: It's difficult to find the proper balance of JS caching and smaller, packaged payloads, but depending on your traffic pattern it's definitely well worth considering a technique other than smashing all of your JS into every single pageload.

    From kamens
  • To add to kamens' great answer, I would say that on some browsers, the parse time for larger JS resources grows non-linearly: that is, a 1 meg JS file will take longer to parse than two 500k files. So if a lot of your traffic is people who are likely to have your JS cached (return visitors), and all your JS files are cache-stable, it may make sense to break them up even if you end up loading all of them on every pageview (a trivial example follows these answers).

    From levik
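
Since none of the tools mentioned above isolate parse time, one crude way to approximate it is to take a timestamp immediately before a script tag and another in an inline script right after it. Classic script tags block, so on a repeat view, when the file comes from cache, the delta is dominated by parse and execute time rather than network time. This is only a minimal sketch; "app.js" is a placeholder name, not a file from the question.

    <script>var parseStart = new Date().getTime();</script>

    <!-- the script whose cost we want to approximate; "app.js" is a placeholder -->
    <script src="app.js"></script>

    <script>
      // A classic script tag blocks, so we only reach this point after
      // app.js has been fetched (or read from cache), parsed, and executed.
      var parseEnd = new Date().getTime();
      // With a primed cache the network cost is near zero, so this delta
      // mostly reflects parse + execute time.
      console.log('app.js load+parse+execute: ' + (parseEnd - parseStart) + 'ms');
    </script>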
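
And a minimal sketch of the on-demand loading kamens suggests: inject a script tag only on pages that actually need a feature, so every other page never pays the download or parse cost. The function name, element id, and file paths below are all illustrative, and the onload handler assumes a standards-compliant browser (older IE needs onreadystatechange).

    // Minimal on-demand loader; all names below are illustrative.
    function loadScript(url, callback) {
      var script = document.createElement('script');
      script.src = url;
      // Fires once the script has been fetched, parsed and executed.
      script.onload = function () {
        if (callback) callback();
      };
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    // Only pages containing the (hypothetical) reports widget ever pay
    // the download and parse cost of reports.js.
    if (document.getElementById('reports-container')) {
      loadScript('/js/reports.js', function () {
        initReports(); // assumed to be defined by reports.js
      });
    }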
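
Finally, the split levik describes is just a matter of serving the same code as several cache-stable files instead of one; the file names are again placeholders.

    <!-- Before: one combined, cached file on every page -->
    <script src="all.js"></script>

    <!-- After: the same code split into two cache-stable halves. Every
         page still loads both, but on browsers where parse time grows
         non-linearly, two 500k files can parse faster than one 1 meg file. -->
    <script src="core.js"></script>
    <script src="extras.js"></script>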
