So the research thus far shows a dramatic improvement from bumping max_inlined_source_size to 800, with 1000 adding only a small further improvement on top of that. That suggests we have some long hot-path functions that could benefit from inlining but currently exceed the limit.
So... that leaves three real options:

1. Figure out which functions are over 600 characters long and refactor them to be smaller.
2. Find a way to build/munge the source to get them under that limit.
3. Increase the inlining limits programmatically (potentially dangerous, but safe if done in the CLI tools; see the sketch after this list).
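For completeness, option 3 is basically a one-liner. This is only a sketch of the idea, not something we ship: it uses the core `v8` module's `setFlagsFromString()`, and flipping V8 flags after startup is officially use-at-your-own-risk, which is why it would only belong in a CLI entry point we control.

```js
// Sketch of option 3 only -- not in the tree. Bump the inlining limit from a
// CLI entry point, before the hot paths get optimized.
const v8 = require('v8');

// Equivalent to launching with: node --max_inlined_source_size=800 server.js
v8.setFlagsFromString('--max_inlined_source_size=800');
```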
Of those options, #1 seems the most feasible. So, keeping the defaults below in mind, we should identify the functions that exceed them and work on refactoring them to be shorter; a rough way to find them is sketched below.
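Something like the following could surface the offenders. It is a sketch of my own, not existing tooling; it leans on `Function.prototype.toString()`, which returns the raw source, comments and whitespace included, so it measures the same thing the inliner does.

```js
'use strict';

// audit-function-size.js -- a rough sketch, not existing tooling. Walks an
// object's enumerable properties and reports every function whose raw source
// (comments and whitespace included) is over the inlining limit.
const LIMIT = 600;

function findLargeFunctions(obj, prefix = '', seen = new Set()) {
  const hits = [];
  if (obj === null || seen.has(obj)) {
    return hits;
  }
  seen.add(obj);

  for (const key of Object.keys(obj)) {
    const val = obj[key];
    if (typeof val !== 'function') {
      continue;
    }
    const size = val.toString().length;
    if (size > LIMIT) {
      hits.push({ name: prefix + key, size });
    }
    // Old-style constructors hang their methods off the prototype.
    if (val.prototype) {
      hits.push(...findLargeFunctions(val.prototype, `${prefix}${key}.prototype.`, seen));
    }
  }
  return hits;
}

// Example usage (the path is hypothetical):
// console.log(findLargeFunctions(require('./lib')));
```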
For reference, the improvement I saw with 10000 requests was a few MB of memory (2-3 MB, not much) but a 30% reduction in the longest response time.
Other options that are interesting, or potentially so:
```
--max_inlining_levels (maximum number of inlining levels)
      type: int  default: 5
--max_inlined_source_size (maximum source size in bytes considered for a single inlining)
      type: int  default: 600
--max_inlined_nodes (maximum number of AST nodes considered for a single inlining)
      type: int  default: 196
--max_inlined_nodes_cumulative (maximum cumulative number of AST nodes considered for inlining)
      type: int  default: 400
```
One last thought before I finish for the evening: moving comments outside the body of functions would also be useful where possible, since comment characters count against the 600-character limit.
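To make that concrete, here is a contrived before/after (the names and body are made up); the documentation is identical in both versions, only its position relative to the function changes.

```js
// Before: the comment sits inside the body, so its characters count toward
// the 600-byte source-size limit when V8 considers inlining this function.
function trimPartsBefore(header) {
  // Splits a comma-separated header and trims each piece. Imagine several
  // more lines here quoting the relevant RFC, describing edge cases, etc.
  return header.split(',').map(function (part) {
    return part.trim();
  });
}

// After: the same documentation, hoisted above the function, where it no
// longer counts against the function's measured source size.
// Splits a comma-separated header and trims each piece. Imagine several
// more lines here quoting the relevant RFC, describing edge cases, etc.
function trimPartsAfter(header) {
  return header.split(',').map(function (part) {
    return part.trim();
  });
}
```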
Look at creating a built JS file to replace index.js (add `"main": "built/index.js"` to package.json) to see whether it speeds up performance. The ideas driving this are reducing the comment size and whitespace to improve inlining of functions on hot paths.
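One possible shape for that build step, purely as a sketch: the source path, output path, and the choice of uglify-js (3.x API) are all assumptions, and it only strips comments and whitespace, leaving names unmangled so stack traces stay readable.

```js
// build.js -- a sketch only; the paths and the choice of uglify-js are assumptions.
'use strict';

const fs = require('fs');
const path = require('path');
const UglifyJS = require('uglify-js'); // assumes uglify-js 3.x

const srcPath = path.join(__dirname, 'lib', 'index.js'); // hypothetical source
const outDir = path.join(__dirname, 'built');
const outPath = path.join(outDir, 'index.js');

const result = UglifyJS.minify(fs.readFileSync(srcPath, 'utf8'), {
  compress: false,            // don't rewrite logic
  mangle: false,              // keep names readable in stack traces
  output: { comments: false } // drop comments; whitespace goes too
});

if (result.error) {
  throw result.error;
}

if (!fs.existsSync(outDir)) {
  fs.mkdirSync(outDir);
}
fs.writeFileSync(outPath, result.code);
```

With `"main": "built/index.js"` in package.json, `require()` would pick up the stripped file without any call-site changes.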
Measuring can probably be done with statsd or with the soon-to-land Server-Timing header.
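For the header route, a core-Node sketch of what the measurement could look like (the metric name `handler` and the port are made up; statsd would just be another sink for the same duration):

```js
// Sketch only: time the work with process.hrtime() and report it via a
// Server-Timing header.
'use strict';

const http = require('http');

http.createServer(function (req, res) {
  const start = process.hrtime();

  // ... actual request handling would go here ...

  const diff = process.hrtime(start);
  const ms = diff[0] * 1e3 + diff[1] / 1e6;
  res.setHeader('Server-Timing', 'handler;dur=' + ms.toFixed(3));
  res.end('ok\n');
}).listen(8080);
```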