
Performance & diagnostics best practices - call for ideas #256

Closed
goldbergyoni opened this issue Sep 26, 2018 · 58 comments

@goldbergyoni
Owner

goldbergyoni commented Sep 26, 2018

We're kicking off the performance best practices section. Any idea regarding performance best practices or great bibliography to extract ideas from (youtube, blog post, etc)?

This issue will summarize all the ideas and bibliography and serve as a TOC for the writers. This is also a call for writers: anyone who would like to contribute, or to learn by practicing, discussing and writing, is welcome.

@sagirk @BrunoScheufler @js-kyle

@goldbergyoni
Owner Author

goldbergyoni commented Sep 26, 2018

Title: Monitor customer-facing metrics first
Gist: Following Google's SRE book, monitoring should focus on metrics that immediately impact the customer, practically the four golden signals: latency, traffic, errors, and saturation. The RED framework and the USE method are also relevant.

Anything else you monitor (e.g. event loop) is a cause and not a symptom. Quote from My Philosophy on Alerting:

"But," you might say, "I know database servers that are unreachable results in user data unavailability." That's fine. Alert on the data unavailability. Alert on the symptom: the 500, the Oops!, the whitebox metric that indicates that not all servers were reached from the database's client. Why?

Examples: we will show how to achieve this in Node.js
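As a minimal sketch of symptom-first instrumentation in plain Node.js (the `metrics` object and the `recordRequest`/`p95LatencyMs` helpers are illustrative names, not an established API; in production you would export these counters to a monitoring system such as Prometheus):

```javascript
// Track three of the golden signals in-process: traffic, errors, latency.
// Illustrative sketch only; real apps would use a metrics client library.
const metrics = {
  requestCount: 0, // Traffic
  errorCount: 0,   // Errors (the customer-facing symptom to alert on)
  latenciesMs: [], // Latency samples
};

function recordRequest(latencyMs, statusCode) {
  metrics.requestCount += 1;
  metrics.latenciesMs.push(latencyMs);
  if (statusCode >= 500) metrics.errorCount += 1;
}

// p95 latency is a typical customer-facing number to alert on
function p95LatencyMs() {
  const sorted = [...metrics.latenciesMs].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length * 0.95)];
}
```

Saturation, the fourth signal, usually comes from infrastructure-level metrics rather than in-process counters.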

@profnandaa

Perhaps debugging can be part of it? This is a good place - https://www.joyent.com/node-js/production/debug

@js-kyle
Contributor

js-kyle commented Sep 27, 2018

The express documentation has good tips: https://expressjs.com/en/advanced/best-practice-performance.html

We may want to review the current production practices section, as some bullets may overlap or be more appropriate in the new section, for example the bullet about serving assets from a CDN rather than from Node.js

@sagirk
Contributor

sagirk commented Sep 27, 2018

@sagirk
Contributor

sagirk commented Sep 27, 2018

Use a version of node that ships with new TurboFan JIT compiler rather than just the older Crankshaft compiler.

Reasons why:
https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de

@AbdelrahmanHafez
Contributor

Deal with databases and external APIs in batches: a developer should favor fetching 100 entities with a single HTTP request over issuing 100 requests that fetch a single document each.

The same goes for database operations: reading and writing data are faster when done in batches rather than as many individual operations.
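A hedged sketch of the idea (`fetchByIds` is a hypothetical batched endpoint passed in by the caller, not a real API):

```javascript
// Split a list of ids into batches and issue one request per batch
// instead of one request per id. Sketch only; `fetchByIds` is assumed
// to accept many ids per call and return the matching entities.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

async function fetchAllInBatches(ids, fetchByIds, size = 100) {
  const results = [];
  for (const batch of chunk(ids, size)) {
    results.push(...await fetchByIds(batch)); // 1 request per `size` ids
  }
  return results;
}
```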

@goldbergyoni
Owner Author

Copy pasting from the reddit discussion:

"I usually write to log/monitor on DB query start and query end so I can later identify avg query time and identify the slowest queries. Using Sequelize you can pass the flag {benchmark:true} and get the query times logged. Later you can bake/export to your monitoring system and create metrics based on it"

@goldbergyoni
Owner Author

Use factories or constructors to ensure objects are created with the same schema, so that V8 doesn't have to generate new hidden classes at runtime (a.k.a. polymorphic vs. monomorphic functions)

https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de
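A small sketch of the point: a factory fixes the property set and insertion order, so every object it returns can share one hidden class and call sites that receive them stay monomorphic:

```javascript
// Every object from the factory has the same keys in the same order,
// so V8 can reuse a single hidden class (shape) for all of them.
function createUser(name, age) {
  return { name, age }; // always the same property order
}

const a = createUser('Ada', 36);
const b = createUser('Grace', 45);

// By contrast, ad-hoc literals with different key orders, e.g.
// { age: 45, name: 'Grace' } vs { name: 'Ada', age: 36 },
// would produce two distinct shapes.
```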

@VinayaSathyanarayana
Contributor

@goldbergyoni
Owner Author

goldbergyoni commented Oct 2, 2018

When evaluating alternatives and measuring performance, use benchmarking tools like autocannon and Benchmark.js, which provide more precise results (microsecond timing, many iterations) and better benchmarking throughput (e.g. issuing more calls per second)
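The dedicated tools are preferable, but the underlying idea can be sketched with nothing more than `process.hrtime` (a toy harness, without the warm-up and statistical rigor autocannon/Benchmark.js provide):

```javascript
// Toy micro-benchmark: average nanoseconds per call over many iterations.
function bench(fn, iterations = 1e5) {
  fn(); // one warm-up call so first-run compilation cost is not measured
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / iterations; // avg ns per call
}
```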

@TheHollidayInn
Contributor

Analyzing and monitoring CPU bottlenecks or memory leaks with Node Clinic/Prometheus

@TheHollidayInn
Contributor

A few others:

  • Use gzip compression
  • Serve static assets from a CDN
  • Use a priority queue for heavy DB usage / long-running CPU processes
  • Optimize queries (e.g. with indexes)
  • Parallelize operations where possible
  • Use HTTP/2
  • Use the cluster module

References:

@barretojs

I think a good performance practice is to use a load balancer for requests, and another to distribute the Node.js app across the multiple cores of your CPU (since Node.js is single-threaded). The latter can be achieved extremely easily using pm2: a single flag enables multi-core load balancing automatically.
Besides that, pm2 makes it really easy to monitor the memory and CPU usage of your app, and it has a lot of other great features.

http://pm2.keymetrics.io/

https://medium.com/iquii/good-practices-for-high-performance-and-scalable-node-js-applications-part-1-3-bb06b6204197

@goldbergyoni
Owner Author

@TheHollidayInn Great list!

A few remarks/questions:

  1. Use gzip compression & serve from a CDN: these are more frontend-related tips. Are we covering frontend as well? I haven't dealt with this question before, and I tend to think not. What do @sagirk, @BrunoScheufler and @js-kyle think?
  2. HTTP/2: do we have evidence/references that HTTP/2 reduces the load on the server?

@goldbergyoni
Owner Author

@barretojs First, welcome aboard, good to have you here. I'm not sure about this advice, as PM2 uses the cluster module under the hood (I think so at least, does it?), which seems to be slower than a "real" router like nginx or iptables

https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272

What do you think?

@TheHollidayInn
Contributor

@i0natan Np!

  1. For gzip and CDN, I included these because they are decisions you'd make while building out web apps (like using Express). If you serve static content with Express, gzip will be helpful. It might also be good to simply note the benefits of using a CDN rather than serving your own static content.
  2. I'm not sure load is improved, although multiplexing may help. Here is a list of benefits: https://webapplog.com/http2-node. To me it seems more helpful for the front end, but I'll admit I'm still new to HTTP/2.

@barretojs

@i0natan you're right. I wasn't aware of iptables' capability, and the numbers are impressive. I think there's no doubt that for pure performance it distributes the load way better. Thanks for the great in-depth article!

@goldbergyoni
Owner Author

/remind me in 2 days

@reminders reminders bot added the reminder label Oct 7, 2018
@reminders

reminders bot commented Oct 7, 2018

@i0natan set a reminder for Oct 9th 2018

@js-kyle
Contributor

js-kyle commented Oct 7, 2018

I'd definitely +1 using a CDN for static assets - we have this as a production best practice at the moment. I think the performance and production sections will have a lot of potential crossover, so it's a good time to clear up the differences

Again, with gzip & other CPU-intensive tasks, we actually recommend that they be handled outside of Node for performance reasons. So I guess the question is whether we want to adjust our view on this or, as part of this section, recommend not to use Node for them. Thoughts? See current bullets 5.3 & 5.11

@reminders reminders bot removed the reminder label Oct 9, 2018
@reminders

reminders bot commented Oct 9, 2018

👋 @i0natan,

@goldbergyoni
Owner Author

Adding one more which exists in our production best practices: set NODE_ENV=production when rendering on the server, as it enables the templating engine's cache

@goldbergyoni
Owner Author

Also, I highly recommend including this video in our bibliography: https://www.youtube.com/watch?v=K8spO4hHMhg&vl=en

@mohemos

mohemos commented Dec 20, 2018

For MySQL Users:

  • Always run chains of queries in a TRANSACTION; let MySQL handle rollback and commit rather than doing it yourself. This helps you move faster with less maintenance.
  • Create a pool of connections in order to use more than one connection concurrently, and make sure you release each connection when you're done.
  • Promisify your DB connections and run queries in parallel via async/await and Promise.all()
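The third bullet can be sketched as follows; `query` here is a stand-in for any promisified DB call, and `loadDashboard` is an illustrative name:

```javascript
// Run independent queries concurrently with Promise.all: total time is
// roughly max(query times), not their sum.
const query = (sql, ms = 50) =>
  new Promise((resolve) => setTimeout(() => resolve(`result of ${sql}`), ms));

async function loadDashboard() {
  // Both queries are started before either is awaited
  const [users, orders] = await Promise.all([
    query('SELECT * FROM users'),
    query('SELECT * FROM orders'),
  ]);
  return { users, orders };
}
```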

@goldbergyoni
Owner Author

@Berkmann18 Great set of advice. And welcome aboard.

Why does removing orphaned dependencies (tree shaking) improve performance? And what is "memoising"?

@mohemos Welcome & Thank you. Adding some of the advice to the pool #302

@Berkmann18
Contributor

Berkmann18 commented Dec 21, 2018

@Berkmann18 Thank you.
Tree shaking improves performance because it includes only the code that is actually used, instead of importing/requiring every exported function/data structure from modules/libraries/frameworks, which makes the resulting code lighter and easier to transfer and parse.
Here's another explanation of what tree shaking is.

Example:
Importing everything but using only add;

//NodeJS
const maths = require('maths');

//ES
import * as maths from 'maths';

Only importing add.

//NodeJS
const { add } = require('maths');
//Or
const add = require('maths').add;

//ES
import { add } from 'maths';
import add from 'maths/add';

Memoising is where code, instead of re-doing work that would waste resources producing the exact same result, reuses the result it has already computed and stored in a variable.

Example:
Without memoising:

const fx = (x, y) => {
  console.log('The result is', doALongCalculation(x, y)); // expensive call #1
  return doALongCalculation(x, y); // expensive call #2: same work done twice
};

With memoising:

const fx = (x, y) => {
  const result = doALongCalculation(x, y); // computed once...
  console.log('The result is', result);
  return result; // ...and reused
};

So it's basically where the code memorises some data in a variable instead of re-doing the computation/fetching process again and again.
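Note that the example above avoids recomputation within a single call; memoisation in the usual sense also caches results across calls. A minimal sketch (single argument, unbounded cache; `slowSquare` is an illustrative stand-in for an expensive function):

```javascript
// Wrap a pure function so repeated calls with the same argument
// hit a cache instead of recomputing.
function memoize(fn) {
  const cache = new Map();
  return function memoized(x) {
    if (!cache.has(x)) cache.set(x, fn(x));
    return cache.get(x);
  };
}

let calls = 0;
const slowSquare = (n) => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);

fastSquare(9); // computes
fastSquare(9); // served from the cache, slowSquare not called again
```

An unbounded Map is fine for a sketch; production caches usually need an eviction policy (LRU, TTL) to bound memory.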

@goldbergyoni
Owner Author

@Berkmann18 Thanks for the comprehensive and great explanation.

Memoising sounds like a good fit for the General bullet - the bullet that holds all generic practices that are not Node.js-specific

About tree shaking: I'm familiar with it, and for frontend the benefit is obvious, but why would it improve backend performance? At runtime all the files are parsed anyway (this is how 'require' works). Do you mean to also re-bundle the code and pack it into fewer files? I'm looking for the exact part that boosts performance

@Berkmann18
Contributor

Berkmann18 commented Dec 25, 2018

@i0natan You're welcome 😄 .
Tree shaking prevents the server from loading things it doesn't need, which lets it get to the main code faster and avoids going through more resources than necessary at run time.
It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit).
When it comes to bundling, it allows the bundling tool to include less code, which encourages a smaller, more optimised bundle.

@VinayaSathyanarayana
Contributor

We should add throttling to the list

@VinayaSathyanarayana
Contributor

We should add
"Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop

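A classic sketch of the pitfall: closures created inside a loop share the loop variable when it is declared with var, so they all observe its final value:

```javascript
// Pitfall: every closure captures the SAME `var i` binding.
const callbacks = [];
for (var i = 0; i < 3; i++) {
  callbacks.push(() => i);
}
callbacks.map((cb) => cb()); // [3, 3, 3], not [0, 1, 2]

// Fix: `let` creates a fresh binding per iteration.
const fixed = [];
for (let j = 0; j < 3; j++) {
  fixed.push(() => j);
}
fixed.map((cb) => cb()); // [0, 1, 2]
```

So this one is mostly about correctness; allocating a fresh function object per iteration is also the performance concern raised later in this thread.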
@goldbergyoni
Owner Author

@Berkmann18

It might not make much of a difference on static dependencies (aside from the size of the backend codebase) but for dynamic ones, it does (at least a tiny bit).

Can you provide an example, maybe even with code, and some data/benchmarks that show the impact? I just want to ensure we address popular issues that leave a serious performance footprint

@goldbergyoni
Owner Author

@VinayaSathyanarayana Thanks for joining the brainstorm :]

"Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop

Can you provide a (pseudo)code example? And are we concerned about performance, or about code readability/errors?

@Berkmann18
Contributor

@i0natan Sure, I've made a repo with benchmarks using only math as an example.

@TheHollidayInn
Contributor

@AbdelrahmanHafez For your proposed idea Deal with database and external APIs in batches, do you have any links or references for this?

@migramcastillo

* Promisify your DB connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a series of awaits should be a performance best practice in general. E.g. if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().

@goldbergyoni
Owner Author

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

@goldbergyoni
Owner Author

@js-kyle Maybe add this idea to our TOC issue?

@migramcastillo

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

I'd be glad to, in which branch should I make the PR?

@sagirk
Contributor

sagirk commented Mar 21, 2019

I'd be glad to, in which branch should I make the PR?

@migramcastillo If you meant to ask what branch to raise your PR against, that would be master. If you meant to ask what branch to make your changes in, feel free to create a new branch in your fork.

@stale

stale bot commented Jun 19, 2019

Hello there! 👋
This issue has gone silent. Eerily silent. ⏳
We currently close issues after 100 days of inactivity. It has been 90 days since the last update here.
If needed, you can keep it open by replying here.
Thanks for being a part of the Node.js Best Practices community! 💚

@stale stale bot added the stale label Jun 19, 2019
@stale stale bot closed this as completed Jun 29, 2019
@abhishekshetty

abhishekshetty commented Apr 10, 2020

* Promisify your DB connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a series of awaits should be a performance best practice in general. E.g. if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().

Although Promise.all is better than sequential non-dependent promises in most cases, it works best only when the number of calls/promises is known beforehand, or is at least small. With a dynamic number of promises, e.g. batching at runtime, a significantly larger number of calls might end up exhausting the connection pool, creating deadlocks, or, in the case of HTTP calls, spamming the downstream service. Sample Stack Overflow link: https://stackoverflow.com/questions/53964228/how-do-i-perform-a-large-batch-of-promises
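A minimal sketch of capping concurrency for a dynamic number of promises (libraries such as p-limit do this more robustly; `mapWithConcurrency` is an illustrative name):

```javascript
// Run `worker` over `items` with at most `limit` tasks in flight.
// Each of the `limit` worker loops pulls the next index synchronously,
// so no index is processed twice.
async function mapWithConcurrency(items, worker, limit = 5) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const index = next++; // claimed synchronously before any await
      results[index] = await worker(items[index], index);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(workers);
  return results;
}
```

Unlike a bare Promise.all over thousands of calls, this keeps only `limit` connections or requests open at a time.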

@abhishekshetty

abhishekshetty commented Apr 10, 2020

  1. Maybe have a section on in-memory caches and best practices around implementing and maintaining a cache. There are a few packages that support this functionality. Also, it seems no shared in-memory cache implementation is possible in cluster mode.

  2. I would like to highlight a recent video released by the Chrome team, https://www.youtube.com/watch?v=ff4fgQxPaO0, that talks about how to make apps faster when dealing with large object literals.

@victor-homyakov
Contributor

victor-homyakov commented Oct 12, 2020

When writing an NPM module, it's a good idea to measure the time needed to require() the module you are developing, e.g.

console.time('require');
const myModule = require('my-module');
console.timeEnd('require');

Reason: this time is added to the overall startup time of any code that uses your module. It doesn't play a big role for long-running servers, but it is very important for CLI utilities (pre-commit hooks, linters, etc.).

Few examples:

How to fix/mitigate:

  • provide granular exports in your module, like ramda and lodash do - consumers may require() only what they need
  • use exact imports, e.g. const complement = require('ramda/src/complement'); instead of const R = require('ramda');
  • use https://www.npmjs.com/package/import-lazy or something similar, and require modules only when you are 100% sure you need them; e.g. ESLint should not load every plugin listed in the config, only those needed to lint the specified files (don't load TSX rules when checking a plain JS file; don't load mocha-specific rules if there is no mocha test file to lint)

@Aschen

Aschen commented Jan 6, 2021

Reduce the number of hidden classes

Benefit from the inline cache and let TurboFan generate optimized machine code.

Use the ESLint sort-keys rule to ensure your objects always share the same internal hidden class.

This allows the code to fully benefit from inline caching, and TurboFan will be able to compile the JS into optimized machine code.

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Hidden classes and Inline cache
Inline Caches with Monomorphism, Polymorphism and Megamorphism
Interpreter and Compiler: hidden classes, inline caching, polymorphism and megamorphism
Turbofan Speculative Optimization

@Aschen

Aschen commented Jan 6, 2021

Prefer monomorphic functions

Prefer monomorphic functions over polymorphic functions.

This allows:

  • TurboFan to compile them into optimized machine code
  • better inline caching for fast property access
Example:
// don't
function add (a, b) {
  return a + b
}
add(21, 42);
add(21.2, 42.3);

// do
function addInt(a, b) {
  return a + b;
}
function addFloat(a, b) {
  return a + b;
}
addInt(21, 42);
addFloat(21.2, 42.3);

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Inline Caches with Monomorphism, Polymorphism and Megamorphism
Interpreter and Compiler: hidden classes, inline caching, polymorphism and megamorphism

@Aschen

Aschen commented Jan 6, 2021

Avoid dynamic allocation of functions

Avoid declaring functions inside other functions.

Functions are objects, and object allocation is costly, as is the resulting garbage collection.
Also, since the inner function is re-declared on each call, V8 will not be able to optimize it (compilation into optimized machine code, inline caching for fast property access)

Example:
// don't
function add(a, b) {
  const addInt = (a, b) => {
    return a + b;
  }

  if (typeof a === 'number') {
    return addInt(a, b);
  }
  
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}

// do
function addInt(a, b) {
  return a + b;
}

function add(a, b) {
  if (typeof a === 'number') {
    return addInt(a, b);
  }
  
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Closures Compilation and Allocation
How Closures Works

@kibertoad

Avoid process.env; copy and cache it instead: https://www.reddit.com/r/node/comments/7thtlv/performance_penalty_of_accessing_env_variables/
The whole concept of object shapes: https://mathiasbynens.be/notes/shapes-ics (previously mentioned in the context of monomorphic params, but it's also important in the context of initializing variables)
Avoid using express.js (it is really, really slow: https://github.com/fastify/benchmarks)
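A sketch of the first tip: read process.env once at startup into a frozen config object and have hot paths read the cached copy (the property names here are illustrative):

```javascript
// Read process.env once at startup; each process.env access is
// comparatively slow, so hot paths should use the cached object.
const config = Object.freeze({
  nodeEnv: process.env.NODE_ENV || 'development',
  port: Number(process.env.PORT) || 3000,
});

// Hot paths read the cached value, never process.env directly
const isProduction = config.nodeEnv === 'production';
```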

@Aschen

Aschen commented Jan 10, 2021

@goldbergyoni Are you still looking for contributors on this topic?

@rluvaton
Collaborator

rluvaton commented Mar 24, 2022

There are really good ideas here! We should add them

I also have some:

  1. Prefer branchless programming (but make sure it doesn't hurt readability, and always measure first to check whether it's worth it, as with all performance tips)
  2. Wrap large data with closure so it can get garbage collected
// Bad - the really large file is not garbage collected until run() returns
function run() {
   const reallyLargeFile = readFile()
   const transformed = transformData(reallyLargeFile)

   // do some extra work

}

// Good - the really large file can be garbage collected before the extra work
function run() {
   let transformed;

    // prefer extracting this into a named function instead of a bare block
   {
     const reallyLargeFile = readFile()
     transformed = transformData(reallyLargeFile)
   }

   // do some extra work

}
  3. Read large files in streams instead of all at once

@victor-homyakov
Contributor

  2. Wrap large data with closure so it can get garbage collected
   // prefer extract to function instead of empty closure
   {
     const reallyLargeFile = readFile()
     transformed = transformData(reallyLargeFile)
   }
   // do some extra work

Looks like in order to benefit from this change, the "extra work" part

  • either should have a significant execution time, e.g. 5 seconds - then in the original code reallyLargeFile will be unavailable for GC during these 5 seconds, while in the modified code it could be GCed if the engine decides to start a GC during that time
  • or should have high memory consumption, to trigger Full GC (usually the engine gives it a last try and tries to free all possible memory before throwing the "out of memory" error)

I think these details should be noted - the readers should not follow the advice blindly but should have some understanding of why and when it helps.

Also, it would be great to have some proofs or benchmarks to illustrate this idea.

  3. Read large files in streams instead of all at once

But we've already seen const reallyLargeFile = readFile() in advice number 2. Could you please make the code example from advice 2 consistent with advice 3 (i.e. use something else instead of reading a large file at once)?
