Update on AMO's performance tests · 2011-05-18 18:02 by Wladimir Palant

Disclaimer: I am not associated with AMO in any way, simply an add-on developer.

I forgot to update my overview of AMO’s performance measurements for their latest test run on May 14th; I have done that now. As Alice Nodelman notes, a number of bugs have been fixed on the technical side. Six weeks after the campaign announcement it finally looks like the most critical bugs are fixed and the numbers are mostly reliable.

Remaining issues:

What’s still missing IMHO is giving the user some idea of what these numbers mean instead of simply using the metric with the highest scare factor. Personally, I would prefer bug 648742 to be fixed.

Comment [4]

  1. Harsh86 · 2011-05-18 22:03

    I agree bug 648742 should be fixed. At the very least they should be using the dirty profiles. I know they don’t represent real-world profiles perfectly, but at least they’re more realistic than a fresh profile.

    Also, shouldn’t the benchmark be using tpaint instead of ts now?

  2. DM · 2011-05-21 19:48

    As for relaying what the numbers mean to general users: how about a page letting people tick which add-ons they have installed and calculating the total startup delay in seconds (+ms)?

    Reply from Wladimir Palant:

    The problem that AMO tries to solve is that these numbers will be different for everybody. One way or another, the numbers they report will be unrealistic (because of being measured on a clean profile, for warm startup and a bunch of other reasons). My point is simply that percentages are more unrealistic than absolute numbers. But calculating a total delay would make things even worse, by suggesting a degree of precision that simply isn’t there.
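
    To make the precision point concrete, here is a hypothetical sketch (all add-on names and numbers are invented) of what such a tick-box page would have to do: add up per-add-on startup deltas that were measured on a clean profile. Even before accounting for interactions between add-ons or differences between profiles, the run-to-run spread of the individual measurements already swamps the apparent precision of a total reported "in seconds (+ms)".

    ```typescript
    // Hypothetical illustration only: names and numbers are made up.
    // A "total startup delay" page would have to sum per-add-on deltas
    // measured on a clean profile.
    interface AddonMeasurement {
      name: string;
      tsDeltaMs: number; // mean startup (ts) delta vs. a clean profile, in ms
      stdDevMs: number;  // run-to-run spread of that measurement
    }

    const measured: AddonMeasurement[] = [
      { name: "Example Toolbar",    tsDeltaMs: 120, stdDevMs: 40 },
      { name: "Example Blocker",    tsDeltaMs:  60, stdDevMs: 25 },
      { name: "Example Downloader", tsDeltaMs: 200, stdDevMs: 80 },
    ];

    // Naive total: just add the means.
    const totalMs = measured.reduce((sum, a) => sum + a.tsDeltaMs, 0);

    // Combined uncertainty, optimistically assuming the measurements are
    // independent (they are not once the add-ons run in the same profile).
    const combinedStdDevMs = Math.sqrt(
      measured.reduce((sum, a) => sum + a.stdDevMs ** 2, 0)
    );

    console.log(`Total delay: ${totalMs} ms (give or take ${Math.round(combinedStdDevMs)} ms)`);
    // On a real profile (dirty, differently configured, cold vs. warm start)
    // the actual delay can be off by far more than this error bar suggests.
    ```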

  3. pd · 2011-06-05 18:35

    Although, Wladimir, you’ve done a great job pointing out AMO’s very poor implementation of startup performance naming and shaming, I very much support the idea.

    However there are equally important targets for naming and shaming, such as memory leaks!

    Yet again my Firefox 4 just hit over 4 million page faults and 1.3 GB of memory when I have less than 1 GB of physical memory available! Result: Not Responding. That is how bad Firefox’s memory management is. Assuming this is not just the browser’s fault, but also that of some extensions (I run several), I really would like to see some naming and shaming of extensions that cause memory leaks.

    Whilst in an ideal world a new extension API like Jetpack would at least be written in such a manner that developers could not cause memory leaks, this is unlikely. The Jetpack SDK has some memory profiling built in. All developers of Jetpack extensions should at least be conscious of their memory footprint, and any leaks should be resolved. For developers of XUL API extensions, tools should be made available to cut their footprints as well.

    Furthermore, the glacially moving attempt to get decent memory profiling built into Firefox itself looks scheduled to miss out on divulging memory consumption for individual extensions. Nicholas Nethercote seems to be working hard on this aspect of Firefox. Perhaps more resources should be assigned to help him? If not, with the new major-version-whenever-someone-yawns release cycle, Firefox 10 will still have horrible memory leaks and we all will still be none the bloody wiser!

  4. DM · 2011-06-07 21:52

    The prevailing view seems to be “programs should use memory, that’s what it’s there for”, and I would agree with that if it weren’t for the fact that the Firefox UI gets slower and slower the more memory it uses.

Commenting is closed for this article.