JavaScript Performance For Madmen
JavaScript performance is terrifying and unpredictable. If you want things to run fast, you'll need a dowsing rod, but these test cases might help:
- Object.defineProperty versus regular property assignment
- Performance hit for initializing properties in different orders in your constructor (Hidden Classes; see the sketch after this list)
- Deeply nested prototype chains
- String concatenation versus array.join
- Truncating numbers
- Object.create vs new
- Closures versus globals (and const)
- Deeply nested property fetches
- Try blocks deoptimizing functions
- Inconsistent hidden classes deoptimizing functions
- Code duplication = faster JS
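A minimal sketch of the hidden-classes item above (Point and SloppyPoint are made-up names): V8 and similar engines assign an object's hidden class based on the order its properties are added, so a constructor that adds the same properties in different orders produces objects with different hidden classes, and code that sees both shapes can be deoptimized.

```js
// Consistent shape: every Point gets x then y, so all instances share one
// hidden class and property accesses stay fast.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Inconsistent shape: the branch changes the order in which properties are
// added, so instances end up with different hidden classes and code that
// sees both shapes can be deoptimized.
function SloppyPoint(x, y) {
  if (y > x) {
    this.y = y;
    this.x = x;
  } else {
    this.x = x;
    this.y = y;
  }
}
```

Here new SloppyPoint(1, 2) and new SloppyPoint(2, 1) end up with different hidden classes even though the resulting objects have identical properties.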
SpiderMonkey (Firefox) notes:
- Functions containing try { } finally { } are run by the interpreter (and never optimized).
- Any array instance that isn't initialized sequentially, or that has non-indexed properties, gets deoptimized to a regular old Object (hash table). This includes adding a named property to an array after initializing it normally (see the sketch below).
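A minimal sketch of the note above (the array sizes and values are arbitrary):

```js
// Stays a dense array: elements are written sequentially starting at index 0.
var dense = [];
for (var i = 0; i < 1000; i++) {
  dense[i] = i * 2;
}

// Risks deoptimization to a plain Object: the write far past the end leaves
// holes, and the named property is non-indexed.
var sloppy = [];
sloppy[999] = 1;          // non-sequential write
sloppy.lastUpdated = 0;   // named property on an array
```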
Based on the Type Inference paper:
Local variables fall into three categories, in decreasing order of performance:
- The best kind of local variable always holds the same type (all objects count as one type). Unfortunately, this includes the initial value, so var x; x = null; is slower than var x = null. If possible, strive to ensure that all variables used for numeric computation are of this type. Note that in many cases the optimizer will be able to convert the first, slower statement into the faster one, but in more complex functions it may not be able to do so (see the sketch after this list).
- The second best kind is, well, to quote the paper: "either strings or objects (but not both), and also at most one of the undefined, null, or a boolean value."
- The rest fall into the worst category and are even more expensive than the first two.
- See http://www.youtube.com/watch?v=XAqIpGU8ZZk and http://www.youtube.com/watch?v=UJPdhx5zTaw for various tips
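A minimal sketch of the first category above, assuming a simple summation loop (sumFast and sumSlow are made-up names): in sumFast the accumulator is a number from its initializer onward, while in sumSlow it starts out undefined and later holds numbers, so it no longer counts as "always the same type".

```js
// Best category: total is a number for its entire lifetime, starting from its
// initializer, so numeric computation stays on the fast path.
function sumFast(values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Slower: total starts out undefined and later holds numbers, so the variable
// has seen more than one type and falls into a slower category.
function sumSlow(values) {
  var total;
  for (var i = 0; i < values.length; i++) {
    total = (total === undefined) ? values[i] : total + values[i];
  }
  return total;
}
```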
V8 (Chrome) notes:
- Any array instance that is not sequentially initialized may end up as a 'sparse array', which is basically a hash table. Whether or not this happens depends on heuristics involving the size of the array and its capacity. (Note that named properties can live in an array, unlike in SpiderMonkey.)
- This page describes how to pass flags to the V8 runtime when starting Chrome. Some flags make V8 tell you when it fails to optimize a function. Unfortunately, they do not seem to be documented on the wiki, so see the series of blog posts in the next item.
- This series of blog posts goes into depth on various V8 performance gotchas and describes how to diagnose some of the problems.
- V8 cannot represent integers larger than 31 bits as integers; they get promoted to floating-point values.
- Floating-point values are almost universally stored in the heap by V8 (which means each one is an allocation).
- Any function containing a try { } block is never optimized, regardless of whether it has any catch or finally blocks (see the sketch after this list).
- Functions that are too long (including comments and newlines/whitespace) are not inlined and may not be optimized.
- Long-lived closures can hold onto variables from outer scopes that they never use, keeping them alive as garbage. To address this, set any outer-scope variables you don't intend to use in the closure to null or undefined (see the sketch after this list).
- Calling functions with different hidden classes from a single call site will deoptimize the call site. Setting/removing properties on a function instance (like debugName, displayName, or toString) will cause its hidden class to diverge from built-in functions. Some built-in functions like the result of .bind() also have different classes from normal functions. Strict mode functions have different hidden classes from non-strict functions. See this example.
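Two minimal sketches of the try-block and closure notes above (sumSquares, sumSquaresSafe, makeHandler, computeConfig, and the buffer size are all made up for illustration): keep the hot code in a function that contains no try block and call it from a thin wrapper that does, and clear large captured values that a long-lived closure will never read.

```js
// Keep the hot loop in its own try-free function so it can still be optimized.
function sumSquares(values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i] * values[i];
  }
  return total;
}

// Confine the try block to a thin wrapper that does almost no work itself.
function sumSquaresSafe(values) {
  try {
    return sumSquares(values);
  } catch (e) {
    return 0;
  }
}

// A long-lived closure can keep its whole outer scope alive. Clearing the
// large temporary once setup is done lets it be collected even though the
// returned handler never touches it.
function makeHandler(computeConfig) {
  var scratch = new Float64Array(1 << 20); // only needed during setup
  var config = computeConfig(scratch);
  scratch = null; // drop the reference so the closure doesn't pin it
  return function handler(value) {
    return value > config.threshold;
  };
}
```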
Profiling tools:
- SPS: A sampling profiler that can record mixed native/JavaScript stacks so you can see why a particular JS function is slow. Lets you share profiles on the web with other people! Amazing! Default accuracy is somewhat low, but you can adjust the sampling rate and recording size in about:config.
- JIT Inspector: Tells you various things about what the SpiderMonkey JIT believes about your code (and to an extent, how it is performing). Completely inscrutable unless you read this PDF, at which point it is only partially inscrutable. Also, doesn't display actual numbers anywhere or let you save profiles... Activating this deoptimizes your JS!
- Firebug: If you can get the profiler to work instead of crashing the browser entirely, apparently it's pretty good. I've never gotten it to work.
- Web Inspector's Profiles tab: This is a sampling profiler with poor accuracy that often omits entire native call paths from your profiles, so the data is often a lie. Simply opening the Web Inspector deoptimizes your JS, and activating the profiler double-deoptimizes it.
- chrome://tracing/: You can instrument your JS to show up here via console.time/console.timeEnd (see the example after this list). Can only trace a few seconds at a time, but is fairly accurate. You can save/load traces.
- WebGL Inspector: Let's pretend for a moment that WebGL performance is JS performance, since it sort of is. WebGL inspector gives you pretty accurate timings and recordings for your WebGL calls, so you can combine it with the built-in profiler to understand why your renderer is 50x slower than the one you wrote in Python.
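A minimal example of the console.time/console.timeEnd instrumentation mentioned for chrome://tracing/ above (expensiveWork and the label are arbitrary):

```js
// Some work worth timing; any function you care about goes here.
function expensiveWork(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += Math.sqrt(i);
  }
  return total;
}

console.time("expensiveWork");    // start a named timer
expensiveWork(1e6);               // the code being measured
console.timeEnd("expensiveWork"); // stop the timer; the duration is recorded
```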