diff --git a/.remarkrc b/.remarkrc
index 247f8de914743..1b2f58f2f2a27 100644
--- a/.remarkrc
+++ b/.remarkrc
@@ -7,7 +7,6 @@
     ["remark-lint-maximum-line-length", false],
     ["remark-lint-no-file-name-articles", false],
     ["remark-lint-no-literal-urls", false],
-    ["remark-lint-no-shortcut-reference-link", false],
     ["remark-lint-no-trailing-spaces", false],
     ["remark-lint-no-undefined-references", false],
     ["remark-lint-no-unused-definitions", false],
diff --git a/locale/ar/get-involved/node-meetups.md b/locale/ar/get-involved/node-meetups.md
index ebb4a429df28a..3526baa9a371e 100644
--- a/locale/ar/get-involved/node-meetups.md
+++ b/locale/ar/get-involved/node-meetups.md
@@ -329,7 +329,7 @@ layout: contribute.hbs

 ##### إرفين

-* [اللقاء]https://www.meetup.com/Node-JS-OC/)
+* [اللقاء](https://www.meetup.com/Node-JS-OC/)
 * تردد اللقاء - شهريا
 * كيف تقدم طلب محاضرة ؟ اتصل بالمنظمين في صفحة اللقاءات.
 * المنظم - فرشيد عاطف
diff --git a/locale/en/blog/feature/streams2.md b/locale/en/blog/feature/streams2.md
index 4193dcc19ddfb..6bd483fd4808e 100644
--- a/locale/en/blog/feature/streams2.md
+++ b/locale/en/blog/feature/streams2.md
@@ -196,7 +196,7 @@ Note that `stream.Readable` is an abstract class designed to be extended
 with an underlying implementation of the `_read(size)` method. (See
 below.)

-### new stream.Readable([options])
+### new stream.Readable(\[options\])

 * `options` {Object}
   * `highWaterMark` {Number} The maximum number of bytes to store in
@@ -458,7 +458,7 @@ can be `'utf8'`, `'utf16le'` (`'ucs2'`), `'ascii'`, or `'hex'`.
 The encoding can also be set by specifying an `encoding` field to the
 constructor.

-### readable.read([size])
+### readable.read(\[size\])

 * `size` {Number | null} Optional number of bytes to read.
 * Return: {Buffer | String | null}
@@ -479,7 +479,7 @@ a future `'readable'` event will be emitted when more is available.
 Calling `stream.read(0)` will always return `null`, and will trigger a
 refresh of the internal buffer, but otherwise be a no-op.

-### readable.pipe(destination, [options])
+### readable.pipe(destination, \[options\])

 * `destination` {Writable Stream}
 * `options` {Object} Optional
@@ -515,7 +515,7 @@ reader.on("end", function() {
 Note that `process.stderr` and `process.stdout` are never closed until
 the process exits, regardless of the specified options.

-### readable.unpipe([destination])
+### readable.unpipe(\[destination\])

 * `destination` {Writable Stream} Optional

@@ -549,7 +549,7 @@ Note that `stream.Writable` is an abstract class designed to be extended
 with an underlying implementation of the `_write(chunk, encoding, cb)`
 method. (See below.)

-### new stream.Writable([options])
+### new stream.Writable(\[options\])

 * `options` {Object}
   * `highWaterMark` {Number} Buffer level when `write()` starts
@@ -595,7 +595,7 @@ the class that defines it, and should not be called directly by user
 programs. However, you **are** expected to override this method in
 your own extension classes.

-### writable.write(chunk, [encoding], [callback])
+### writable.write(chunk, \[encoding\], \[callback\])

 * `chunk` {Buffer | String} Data to be written
 * `encoding` {String} Optional. If `chunk` is a string, then encoding
@@ -612,7 +612,7 @@ the buffer is full, and the data will be sent out in the future. The
 The specifics of when `write()` will return false, is determined by
 the `highWaterMark` option provided to the constructor.

-### writable.end([chunk], [encoding], [callback])
+### writable.end(\[chunk\], \[encoding\], \[callback\])

 * `chunk` {Buffer | String} Optional final data to be written
 * `encoding` {String} Optional. If `chunk` is a string, then encoding
@@ -698,7 +698,7 @@ Rather than implement the `_read()` and `_write()` methods, Transform
 classes must implement the `_transform()` method, and may optionally
 also implement the `_flush()` method. (See below.)

-### new stream.Transform([options])
+### new stream.Transform(\[options\])

 * `options` {Object} Passed to both Writable and Readable constructors.

diff --git a/locale/en/docs/guides/simple-profiling.md b/locale/en/docs/guides/simple-profiling.md
index 16f4e5e1b56e7..7a2eeda6718c6 100644
--- a/locale/en/docs/guides/simple-profiling.md
+++ b/locale/en/docs/guides/simple-profiling.md
@@ -157,7 +157,7 @@ up by language. First, we look at the summary section and see:
 This tells us that 97% of all samples gathered occurred in C++ code and that
 when viewing other sections of the processed output we should pay most
 attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next
-find the [C++] section which contains information about which C++ functions are
+find the \[C++\] section which contains information about which C++ functions are
 taking the most CPU time and see:

 ```
@@ -174,7 +174,7 @@ taken up by a function called PBKDF2 which corresponds to our hash generation
 from a user's password. However, it may not be immediately obvious how the
 lower two entries factor into our application (or if it is we will pretend
 otherwise for the sake of example). To better understand the relationship between these
-functions, we will next look at the [Bottom up (heavy) profile] section which
+functions, we will next look at the \[Bottom up (heavy) profile\] section which
 provides information about the primary callers of each function. Examining this
 section, we find:

diff --git a/locale/en/get-involved/node-meetups.md b/locale/en/get-involved/node-meetups.md
index df74acc7a5d4f..784f7b1f83929 100644
--- a/locale/en/get-involved/node-meetups.md
+++ b/locale/en/get-involved/node-meetups.md
@@ -356,7 +356,7 @@ REQUIREMENTS
 * Frequency of meetups - every 6-9 month
 * How to submit a talk? Contact organizers in the meetup page or use contacts information below
 * Organizer name - Denis Izmaylov
-* Organizer contact info - [Telegram](https://t.me/DenisIzmaylov) [Twitter](https://twitter.com/DenisIzmaylov] [Facebook](https://facebook.com/denis.izmaylov)
+* Organizer contact info - [Telegram](https://t.me/DenisIzmaylov) [Twitter](https://twitter.com/DenisIzmaylov) [Facebook](https://facebook.com/denis.izmaylov)

 ### South Africa

diff --git a/locale/es/docs/guides/simple-profiling.md b/locale/es/docs/guides/simple-profiling.md
index aa0392569fb75..321424de97011 100644
--- a/locale/es/docs/guides/simple-profiling.md
+++ b/locale/es/docs/guides/simple-profiling.md
@@ -129,7 +129,7 @@ Opening processed.txt in your favorite text editor will give you a few different
    215    0.6%  Unaccounted
 ```

-This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the [C++] section which contains information about which C++ functions are taking the most CPU time and see:
+This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the \[C++\] section which contains information about which C++ functions are taking the most CPU time and see:

 ```
 [C++]:
@@ -139,7 +139,7 @@ This tells us that 97% of all samples gathered occurred in C++ code and that whe
   3165    8.4%    8.6%  _malloc_zone_malloc
 ```

-We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the [Bottom up (heavy) profile] section which provides information about the primary callers of each function. Examining this section, we find:
+We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the \[Bottom up (heavy) profile\] section which provides information about the primary callers of each function. Examining this section, we find:

 ```
    ticks parent  name
diff --git a/locale/fr/docs/guides/simple-profiling.md b/locale/fr/docs/guides/simple-profiling.md
index aa0392569fb75..321424de97011 100644
--- a/locale/fr/docs/guides/simple-profiling.md
+++ b/locale/fr/docs/guides/simple-profiling.md
@@ -129,7 +129,7 @@ Opening processed.txt in your favorite text editor will give you a few different
    215    0.6%  Unaccounted
 ```

-This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the [C++] section which contains information about which C++ functions are taking the most CPU time and see:
+This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the \[C++\] section which contains information about which C++ functions are taking the most CPU time and see:

 ```
 [C++]:
@@ -139,7 +139,7 @@ This tells us that 97% of all samples gathered occurred in C++ code and that whe
   3165    8.4%    8.6%  _malloc_zone_malloc
 ```

-We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the [Bottom up (heavy) profile] section which provides information about the primary callers of each function. Examining this section, we find:
+We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the \[Bottom up (heavy) profile\] section which provides information about the primary callers of each function. Examining this section, we find:

 ```
    ticks parent  name
diff --git a/locale/it/about/community.md b/locale/it/about/community.md
index 3de739391f20f..d45d608bc6e93 100644
--- a/locale/it/about/community.md
+++ b/locale/it/about/community.md
@@ -21,7 +21,7 @@ Ci sono quattro tipi di coinvolgimenti con il Comitato della Community:
 * Un **Osservatore** è un individuo che ha richiesto o a cui è stato richiesto di assistere ad un incontro del CommComm. È anche il primo step per diventare un Membro.
 * Un **Membro** è un collaboratore con diritti di voto che ha soddisfatto i requisiti di partecipazione ed è stato eletto dalla procedura di votazione del CommComm.

-Per la lista attuale dei membri del Comitato della Community, vedere il [README.md] del progetto (https://github.com/nodejs/community-committee).
+Per la lista attuale dei membri del Comitato della Community, vedere il [README.md del progetto](https://github.com/nodejs/community-committee).

 ## Contributors and Collaborators (Contributori e Collaboratori)

diff --git a/locale/ja/docs/guides/simple-profiling.md b/locale/ja/docs/guides/simple-profiling.md
index d80478cd5b7b8..4446a978edc99 100644
--- a/locale/ja/docs/guides/simple-profiling.md
+++ b/locale/ja/docs/guides/simple-profiling.md
@@ -233,7 +233,7 @@ up by language. First, we look at the summary section and see:
 This tells us that 97% of all samples gathered occurred in C++ code and that
 when viewing other sections of the processed output we should pay most
 attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next
-find the [C++] section which contains information about which C++ functions are
+find the \[C++\] section which contains information about which C++ functions are
 taking the most CPU time and see:
 -->

@@ -241,7 +241,7 @@ taking the most CPU time and see:
 処理された出力の他のセクションを見るときは (JavaScript ではなく)
 C++ で行われている作業に最も注意する必要があることを示しています。
 これを念頭に置いて、次にどの C++ 関数が最も CPU 時間を消費しているかについての情報を含む
-[C++] セクションを見てみます。
+\[C++\] セクションを見てみます。

 ```
 [C++]:
@@ -258,7 +258,7 @@ taken up by a function called PBKDF2 which corresponds to our hash generation
 from a user's password. However, it may not be immediately obvious how the
 lower two entries factor into our application (or if it is we will pretend
 otherwise for the sake of example). To better understand the relationship between these
-functions, we will next look at the [Bottom up (heavy) profile] section which
+functions, we will next look at the \[Bottom up (heavy) profile\] section which
 provides information about the primary callers of each function. Examining this
 section, we find:

@@ -270,7 +270,7 @@ CPU 時間の少なくとも 51.8% が占められていることが分かり
 (またはそうである場合は例のために別のふりをすることになる)、
 すぐには明らかにならないかもしれません。
 これらの関数間の関係をよりよく理解するために、
-次に各関数の主な呼び出し元に関する情報を提供する [Bottom up (heavy) profile] セクションを見ていきます。
+次に各関数の主な呼び出し元に関する情報を提供する \[Bottom up (heavy) profile\] セクションを見ていきます。
 このセクションを調べると、次のことがわかります。

 ```
diff --git a/locale/ko/docs/guides/simple-profiling.md b/locale/ko/docs/guides/simple-profiling.md
index 504b89687bd5b..eb73c88cb9c8c 100644
--- a/locale/ko/docs/guides/simple-profiling.md
+++ b/locale/ko/docs/guides/simple-profiling.md
@@ -315,7 +315,7 @@ up by language. First, we look at the summary section and see:
 This tells us that 97% of all samples gathered occurred in C++ code and that
 when viewing other sections of the processed output we should pay most
 attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next
-find the [C++] section which contains information about which C++ functions are
+find the \[C++\] section which contains information about which C++ functions are
 taking the most CPU time and see:

 ```
@@ -329,7 +329,7 @@ taking the most CPU time and see:

 이 부분을 보면 C++ 코드에서 수집된 샘플이 97%를 차지하는 것을 볼 수 있으므로
 처리된 결과에서 다른 부분을 볼 때 C++에서 이뤄진 작업에 대부분의 관심을 기울여야 합니다.(JavaScript 대비)
-그래서 C++ 함수가 대부분의 CPU 시간을 차지한 정보를 담고 있는 [C++] 부분을 찾아볼 것입니다.
+그래서 C++ 함수가 대부분의 CPU 시간을 차지한 정보를 담고 있는 \[C++\] 부분을 찾아볼 것입니다.

 ```
 [C++]:
@@ -346,7 +346,7 @@ taken up by a function called PBKDF2 which corresponds to our hash generation
 from a user's password. However, it may not be immediately obvious how the
 lower two entries factor into our application (or if it is we will pretend
 otherwise for the sake of example). To better understand the relationship between these
-functions, we will next look at the [Bottom up (heavy) profile] section which
+functions, we will next look at the \[Bottom up (heavy) profile\] section which
 provides information about the primary callers of each function. Examining this
 section, we find:

@@ -370,7 +370,7 @@ section, we find:
 해시를 생성하는 PBKDF2 함수 호출이 최소 51.8%의 CPU 시간을 차지한 것을 바로 눈치챌 수 있습니다.
 하지만 더 낮은 비율을 가진 두 부분은 애플리케이션의 어떤 부분인지 바로 알 수 없습니다.(아니면 예제를 위해서 그런 척 할 것입니다.)
 이러한 함수 간의 관계를 더 이해하려면 각 함수의 주요 호출자 정보를
-제공하는 [Bottom up (heavy) profile] 부분을 봐야 합니다.
+제공하는 \[Bottom up (heavy) profile\] 부분을 봐야 합니다.
 이 부분을 찾아보면 다음과 같이 나와 있습니다.

 ```
diff --git a/locale/pt-br/blog/community/transitions.md b/locale/pt-br/blog/community/transitions.md
index eb9045a5e3872..580cea5425073 100644
--- a/locale/pt-br/blog/community/transitions.md
+++ b/locale/pt-br/blog/community/transitions.md
@@ -8,8 +8,7 @@ slug: transitions
 layout: blog-post.hbs
 ---

-Em Fevereiro, nós anunciamos a [Fundação Node.js]
-(https://www.joyent.com/blog/introducing-the-nodejs-foundation),
+Em Fevereiro, nós anunciamos a [Fundação Node.js](https://www.joyent.com/blog/introducing-the-nodejs-foundation),
 que irá administrar o futuro do Node.js e abri-lo para a
 comunidade de uma maneira que não estava disponível antes.
 Organizações como IBM, SAP, Apigee, F5, Fidelity, Microsoft, PayPal, Red Hat, e outras estão
diff --git a/locale/pt-br/docs/guides/dont-block-the-event-loop.md b/locale/pt-br/docs/guides/dont-block-the-event-loop.md
index e6979b79354e0..b5b769322fd01 100644
--- a/locale/pt-br/docs/guides/dont-block-the-event-loop.md
+++ b/locale/pt-br/docs/guides/dont-block-the-event-loop.md
@@ -102,7 +102,7 @@ Você nunca deve bloquear o Event Loop. Em outras palavras, cada um de seus
 callbacks JavaScript devem ser concluídos rapidamente.
 Isto, obviamente, também se aplica aos seus `wait`'s , seus `Promise.then`'s, e assim por diante.

-Uma boa maneira de garantir isso é estudar sobre a ["complexidade computacional"] (https://en.wikipedia.org/wiki/Time_complexity) de seus callbacks.
+Uma boa maneira de garantir isso é estudar sobre a ["complexidade computacional"](https://en.wikipedia.org/wiki/Time_complexity) de seus callbacks.
 Se o seu callback executar um número constante de etapas, independentemente de seus argumentos, você sempre dará a cada cliente pendente uma chance justa.
 Se seu callback executa um número considerável de etapas, dependendo de seus argumentos, pense em quanto tempo os argumentos podem demorar.

@@ -241,7 +241,7 @@ Em um servidor, *você não deve usar as seguintes APIs síncronas desses módul
   * `zlib.inflateSync`
   * `zlib.deflateSync`
 * Sistema de arquivo:
-  * Não use as APIs do sistema de arquivos síncronas. Por exemplo, se o arquivo que você acessar estiver em um [sistema de arquivos distribuído](https://en.wikipedia.org/wiki/Clustered_file_system#Distributed_file_systems) como [NFS](https://en.wikipedia.org/wiki/ Network_File_System), os tempos de acesso podem variar bastante.
+  * Não use as APIs do sistema de arquivos síncronas. Por exemplo, se o arquivo que você acessar estiver em um [sistema de arquivos distribuído](https://en.wikipedia.org/wiki/Clustered_file_system#Distributed_file_systems) como [NFS](https://en.wikipedia.org/wiki/Network_File_System), os tempos de acesso podem variar bastante.
 * Child process:
   * `child_process.spawnSync`
   * `child_process.execSync`
@@ -348,7 +348,7 @@ Para uma tarefa complicada, mova o trabalho do Event Loop para uma Worker Pool.
 ##### Como fazer offload

 Você tem duas opções para uma Work Pool de destino no qual descarregar o trabalho.
-1. Você pode usar a Worker Pool built-in do Node desenvolvendo um [addon C++](https://nodejs.org/api/addons.html). Nas versões mais antigas do Node, crie seu complemento C++ usando [NAN](https://github.com/nodejs/nan) e nas versões mais recentes use [N-API](https://nodejs.org/api/n -api.html). [node-webworker-threads](https://www.npmjs.com/package/webworker-threads) oferece uma maneira JavaScript-only para acessar a Worker Pool do Node.
+1. Você pode usar a Worker Pool built-in do Node desenvolvendo um [addon C++](https://nodejs.org/api/addons.html). Nas versões mais antigas do Node, crie seu complemento C++ usando [NAN](https://github.com/nodejs/nan) e nas versões mais recentes use [N-API](https://nodejs.org/api/n-api.html). [node-webworker-threads](https://www.npmjs.com/package/webworker-threads) oferece uma maneira JavaScript-only para acessar a Worker Pool do Node.
 2. Você pode criar e gerenciar sua própria Worker Pool dedicada à computação, em vez da Worker Pool de I/O do Node. As maneiras mais simples de fazer isso são usando [Child Process](https://nodejs.org/api/child_process.html) ou [Cluster](https://nodejs.org/api/cluster.html).

 Você *não* deve simplesmente criar um [Child Process](https://nodejs.org/api/child_process.html) para cada cliente.
diff --git a/locale/ro/docs/guides/simple-profiling.md b/locale/ro/docs/guides/simple-profiling.md
index aa0392569fb75..321424de97011 100644
--- a/locale/ro/docs/guides/simple-profiling.md
+++ b/locale/ro/docs/guides/simple-profiling.md
@@ -129,7 +129,7 @@ Opening processed.txt in your favorite text editor will give you a few different
    215    0.6%  Unaccounted
 ```

-This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the [C++] section which contains information about which C++ functions are taking the most CPU time and see:
+This tells us that 97% of all samples gathered occurred in C++ code and that when viewing other sections of the processed output we should pay most attention to work being done in C++ (as opposed to JavaScript). With this in mind, we next find the \[C++\] section which contains information about which C++ functions are taking the most CPU time and see:

 ```
 [C++]:
@@ -139,7 +139,7 @@ This tells us that 97% of all samples gathered occurred in C++ code and that whe
   3165    8.4%    8.6%  _malloc_zone_malloc
 ```

-We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the [Bottom up (heavy) profile] section which provides information about the primary callers of each function. Examining this section, we find:
+We see that the top 3 entries account for 72.1% of CPU time taken by the program. From this output, we immediately see that at least 51.8% of CPU time is taken up by a function called PBKDF2 which corresponds to our hash generation from a user's password. However, it may not be immediately obvious how the lower two entries factor into our application (or if it is we will pretend otherwise for the sake of example). To better understand the relationship between these functions, we will next look at the \[Bottom up (heavy) profile\] section which provides information about the primary callers of each function. Examining this section, we find:

 ```
    ticks parent  name
diff --git a/locale/ru/docs/guides/simple-profiling.md b/locale/ru/docs/guides/simple-profiling.md
index 47df11ba72073..027ea1d9e6ab3 100644
--- a/locale/ru/docs/guides/simple-profiling.md
+++ b/locale/ru/docs/guides/simple-profiling.md
@@ -158,7 +158,7 @@ node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
 Это говорит нам о том, что 97% всех собранных замеров происходили в коде C++,
 и что при просмотре других разделов обработанного вывода мы должны уделять
 больше внимания работе, выполняемой именно в C++ (а не, к примеру, JavaScript).
-Имея это в виду, мы затем находим раздел [C++], который содержит информацию о том,
+Имея это в виду, мы затем находим раздел \[C++\], который содержит информацию о том,
 какие функции C++ отнимают больше всего процессорного времени, и видим:

 ```
@@ -176,7 +176,7 @@ node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
 не сразу очевидно, как две нижние записи влияют на наше приложение (если же
 вы догадываетесь об этом, мы притворимся, что это не очевидно, в целях примера).
 Чтобы лучше понять взаимосвязь между этими функциями, мы обратимся затем к
-разделу [Bottom up (heavy) profile], который предоставляет информацию о том,
+разделу \[Bottom up (heavy) profile\], который предоставляет информацию о том,
 где чаще всего вызывается каждая функция. Исследуя этот раздел, мы находим:

 ```
diff --git a/locale/zh-cn/docs/guides/simple-profiling.md b/locale/zh-cn/docs/guides/simple-profiling.md
index 830b6ab1ca332..528e73e15c318 100644
--- a/locale/zh-cn/docs/guides/simple-profiling.md
+++ b/locale/zh-cn/docs/guides/simple-profiling.md
@@ -129,7 +129,7 @@ node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
    215    0.6%  Unaccounted
 ```

-这告诉我们:收集到的所有样本中有 97% 是在 C++ 代码中进行的。当查看处理的输出的其它部分时,我们应该最注意 C++ 中所做的工作(而不是 JavaScript)。考虑到这一点,我们接下来会找到 [C++] 部分,其中包含有关 C++ 函数占用最多 CPU 时间的信息,然后查看一下:
+这告诉我们:收集到的所有样本中有 97% 是在 C++ 代码中进行的。当查看处理的输出的其它部分时,我们应该最注意 C++ 中所做的工作(而不是 JavaScript)。考虑到这一点,我们接下来会找到 \[C++\] 部分,其中包含有关 C++ 函数占用最多 CPU 时间的信息,然后查看一下:

 ```
 [C++]:
@@ -139,7 +139,7 @@ node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
   3165    8.4%    8.6%  _malloc_zone_malloc
 ```

-我们看到,前 3 个条目占了程序占用的 CPU 时间的 72.1%。从这个输出中,我们立即看到至少 51.8% 的 CPU 时间被称为 PBKDF2 的函数占用。它与用户密码中的哈希生成相对应。然而,较低的两个条目的因素是如何进入我们的应用程序(或者我们为了例子而假装如此)不会立即明显得看出来。为了更好地理解这些函数之间的关系,接下来我们将查看[自下而上(重)配置文件]部分,该节提供有关每个函数的主要调用方的信息。检查此部分,我们会发现:
+我们看到,前 3 个条目占了程序占用的 CPU 时间的 72.1%。从这个输出中,我们立即看到至少 51.8% 的 CPU 时间被称为 PBKDF2 的函数占用。它与用户密码中的哈希生成相对应。然而,较低的两个条目的因素是如何进入我们的应用程序(或者我们为了例子而假装如此)不会立即明显得看出来。为了更好地理解这些函数之间的关系,接下来我们将查看\[自下而上(重)配置文件\]部分,该节提供有关每个函数的主要调用方的信息。检查此部分,我们会发现:

 ```
    ticks parent  name