by Samer Buna
Node.js Child Processes: Everything you need to know
How to use spawn(), exec(), execFile(), and fork()
Update: This article is now part of my book “Node.js Beyond The Basics”.
Read the updated version of this content and more about Node at jscomplete.com/node-beyond-basics.
Single-threaded, non-blocking performance in Node.js works great for a single process. But eventually, one process in one CPU is not going to be enough to handle the increasing workload of your application.
No matter how powerful your server may be, a single thread can only support a limited load.
The fact that Node.js runs in a single thread does not mean that we can’t take advantage of multiple processes and, of course, multiple machines as well.
Using multiple processes is the best way to scale a Node application. Node.js is designed for building distributed applications with many nodes. This is why it’s named Node. Scalability is baked into the platform and it’s not something you start thinking about later in the lifetime of an application.
This article is a write-up of part of my Pluralsight course about Node.js. I cover similar content in video format there.
Please note that you’ll need a good understanding of Node.js events and streams before you read this article. If you haven’t already, I recommend that you read these two other articles before you read this one:
Understanding Node.js Event-Driven Architecture
Most of Node’s objects — like HTTP requests, responses, and streams — implement the EventEmitter module so they can…
Streams: Everything you need to know
Node.js streams have a reputation for being hard to work with, and even harder to understand. Well I’ve got good news…
The Child Processes Module
We can easily spin a child process using Node’s child_process module, and those child processes can easily communicate with each other with a messaging system.
The child_process module enables us to access Operating System functionalities by running any system command inside a, well, child process.
We can control that child process input stream, and listen to its output stream. We can also control the arguments to be passed to the underlying OS command, and we can do whatever we want with that command’s output. We can, for example, pipe the output of one command as the input to another (just like we do in Linux) as all inputs and outputs of these commands can be presented to us using Node.js streams.
Note that the examples I’ll be using in this article are all Linux-based. On Windows, you need to switch the commands I use with their Windows alternatives.
There are four different ways to create a child process in Node: spawn(), fork(), exec(), and execFile().
We’re going to see the differences between these four functions and when to use each.
Spawned Child Processes
The spawn function launches a command in a new process and we can use it to pass that command any arguments. For example, here’s code to spawn a new process that will execute the pwd command.
const { spawn } = require('child_process');
const child = spawn('pwd');
We simply destructure the spawn function out of the child_process module and execute it with the OS command as the first argument.
The result of executing the spawn function (the child object above) is a ChildProcess instance, which implements the EventEmitter API. This means we can register handlers for events on this child object directly. For example, we can do something when the child process exits by registering a handler for the exit event:
child.on('exit', function (code, signal) {
console.log('child process exited with ' +
`code ${code} and signal ${signal}`);
});
The handler above gives us the exit code for the child process and the signal, if any, that was used to terminate the child process. This signal variable is null when the child process exits normally.
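As a quick, hedged illustration (my own addition, not part of the original example), killing the child from the parent shows the other case: when a signal terminates the process, the exit handler reports a null code and the signal name.
const { spawn } = require('child_process');
const child = spawn('find', ['/', '-type', 'f']);
child.on('exit', (code, signal) => {
  // Terminated by a signal: code is null and signal is 'SIGTERM' here
  console.log(`child process exited with code ${code} and signal ${signal}`);
});
// child.kill() sends SIGTERM by default
setTimeout(() => child.kill(), 100);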
The other events that we can register handlers for with the ChildProcess instances are disconnect, error, close, and message.
The disconnect event is emitted when the parent process manually calls the child.disconnect function.
The error event is emitted if the process could not be spawned or killed.
The close event is emitted when the stdio streams of a child process get closed.
The message event is the most important one. It’s emitted when the child process uses the process.send() function to send messages. This is how parent/child processes can communicate with each other. We’ll see an example of this below.
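Here’s a small sketch (my own addition, following the spawn example above) of wiring up handlers for the error and close events. The disconnect and message events only apply when an IPC channel exists, which is the case with fork, covered below.
const { spawn } = require('child_process');
const child = spawn('pwd');
child.on('error', (err) => {
  // Emitted if the process could not be spawned or killed
  console.error('child process error:', err);
});
child.on('close', (code, signal) => {
  // Emitted once the child's stdio streams have been closed
  console.log(`child stdio streams closed with code ${code} and signal ${signal}`);
});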
Every child process also gets the three standard stdio streams, which we can access using child.stdin, child.stdout, and child.stderr.
When those streams get closed, the child process that was using them will emit the close event. This close event is different than the exit event because multiple child processes might share the same stdio streams and so one child process exiting does not mean that the streams got closed.
Since all streams are event emitters, we can listen to different events on those stdio streams that are attached to every child process. Unlike in a normal process though, in a child process, the stdout/stderr streams are readable streams while the stdin stream is a writable one. This is basically the inverse of those types as found in a main process. The events we can use for those streams are the standard ones. Most importantly, on the readable streams, we can listen to the data event, which will have the output of the command or any error encountered while executing the command:
child.stdout.on('data', (data) => {
console.log(`child stdout:\n${data}`);
});
child.stderr.on('data', (data) => {
console.error(`child stderr:\n${data}`);
});
The two handlers above will log both cases to the main process stdout and stderr. When we execute the spawn function above, the output of the pwd command gets printed and the child process exits with code 0, which means no error occurred.
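On a typical run (the directory path below is just a placeholder), the combined output of those handlers looks something like this:
child stdout:
/Users/samer/example-dir
child process exited with code 0 and signal null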
We can pass arguments to the command that’s executed by the spawn function using the second argument of the spawn function, which is an array of all the arguments to be passed to the command. For example, to execute the find command on the current directory with a -type f argument (to list files only), we can do:
const child = spawn('find', ['.', '-type', 'f']);
If an error occurs during the execution of the command, for example, if we give find an invalid destination above, the child.stderr data event handler will be triggered and the exit event handler will report an exit code of 1, which signifies that an error has occurred. The error values actually depend on the host OS and the type of error.
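For instance, here’s a hedged sketch of that failure case (the path below is made up): the stderr handler receives find’s error message and the exit handler reports the non-zero code.
const { spawn } = require('child_process');
const child = spawn('find', ['/no/such/dir', '-type', 'f']);
child.stderr.on('data', (data) => {
  // find reports something like "No such file or directory" here
  console.error(`child stderr:\n${data}`);
});
child.on('exit', (code, signal) => {
  // A non-zero code (typically 1) signals the failure
  console.log(`child process exited with code ${code} and signal ${signal}`);
});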
A child process stdin is a writable stream. We can use it to send a command some input. Just like any writable stream, the easiest way to consume it is using the pipe function. We simply pipe a readable stream into a writable stream. Since the main process stdin is a readable stream, we can pipe that into a child process stdin stream. For example:
const { spawn } = require('child_process');
const child = spawn('wc');
process.stdin.pipe(child.stdin)
child.stdout.on('data', (data) => {
console.log(`child stdout:\n${data}`);
});
In the example above, the child process invokes the wc command, which counts lines, words, and characters in Linux. We then pipe the main process stdin (which is a readable stream) into the child process stdin (which is a writable stream). The result of this combination is that we get a standard input mode where we can type something and when we hit Ctrl+D, what we typed will be used as the input of the wc command.
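A hypothetical terminal session with the script above (saved here as wc-example.js, an assumed file name) might look like this after typing two lines and pressing Ctrl+D; the last line is wc’s count of lines, words, and characters:
$ node wc-example.js
hello world
hello again
child stdout:
      2       4      24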
We can also pipe the standard input/output of multiple processes on each other, just like we can do with Linux commands. For example, we can pipe the stdout of the find command to the stdin of the wc command to count all the files in the current directory:
const { spawn } = require('child_process');
const find = spawn('find', ['.', '-type', 'f']);
const wc = spawn('wc', ['-l']);
find.stdout.pipe(wc.stdin);
wc.stdout.on('data', (data) => {
console.log(`Number of files ${data}`);
});
I added the -l argument to the wc command to make it count only the lines. When executed, the code above will output a count of all files in all directories under the current one.
Shell Syntax and the exec function
By default, the spawn function does not create a shell to execute the command we pass into it. This makes it slightly more efficient than the exec function, which does create a shell. The exec function has one other major difference. It buffers the command’s generated output and passes the whole output value to a callback function (instead of using streams, which is what spawn does).
Here’s the previous find | wc example implemented with an exec function.
const { exec } = require('child_process');
exec('find . -type f | wc -l', (err, stdout, stderr) => {
if (err) {
console.error(`exec error: ${err}`);
return;
}
console.log(`Number of files ${stdout}`);
});
Since the exec function uses a shell to execute the command, we can use the shell syntax directly here, making use of the shell pipe feature.
Note that using the shell syntax comes at a security risk if you’re executing any kind of dynamic input provided externally. A user can simply do a command injection attack using shell syntax characters like ; and $ (for example, command + '; rm -rf ~').
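As a hedged illustration of that risk (the userInput value below is hypothetical), interpolating raw input into an exec call lets the shell interpret metacharacters, while passing it as an array element to execFile keeps it a plain, literal argument.
const { exec, execFile } = require('child_process');
const userInput = '.; echo INJECTED'; // imagine this arrived from an untrusted source
// Dangerous: the shell splits this into "find ." followed by "echo INJECTED -type f"
exec(`find ${userInput} -type f`, (err, stdout) => console.log(stdout));
// Safer: no shell is involved, so the whole string is treated as one path argument
execFile('find', [userInput, '-type', 'f'], (err, stdout) => {
  // find simply fails here: there is no file named '.; echo INJECTED'
  if (err) return console.error(err.message);
  console.log(stdout);
});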
The exec function buffers the output and passes it to the callback function (the second argument to exec) as the stdout argument there. This stdout argument is the command’s output that we want to print out.
The exec function is a good choice if you need to use the shell syntax and if the size of the data expected from the command is small. (Remember, exec will buffer the whole data in memory before returning it.)
The spawn function is a much better choice when the size of the data expected from the command is large, because that data will be streamed with the standard IO objects.
We can make the spawned child process inherit the standard IO objects of its parents if we want to, but also, more importantly, we can make the spawn function use the shell syntax as well. Here’s the same find | wc command implemented with the spawn function:
const child = spawn('find . -type f | wc -l', {
stdio: 'inherit',
shell: true
});
Because of the stdio: 'inherit' option above, when we execute the code, the child process inherits the main process stdin, stdout, and stderr. This causes the child process data event handlers to be triggered on the main process.stdout stream, making the script output the result right away.
Because of the shell: true option above, we were able to use the shell syntax in the passed command, just like we did with exec. But with this code, we still get the advantage of the streaming of data that the spawn function gives us. This is really the best of both worlds.
There are a few other good options we can use in the last argument to the child_process functions besides shell and stdio. We can, for example, use the cwd option to change the working directory of the script. For example, here’s the same count-all-files example done with a spawn function using a shell and with a working directory set to my Downloads folder. The cwd option here will make the script count all files I have in ~/Downloads:
const child = spawn('find . -type f | wc -l', {
stdio: 'inherit',
shell: true,
cwd: '/Users/samer/Downloads'
});
Another option we can use is the env option to specify the environment variables that will be visible to the new child process. The default for this option is process.env, which gives any command access to the current process environment. If we want to override that behavior, we can simply pass an empty object as the env option or new values there to be considered as the only environment variables:
const child = spawn('echo $ANSWER', {
stdio: 'inherit',
shell: true,
env: { ANSWER: 42 },
});
The echo command above does not have access to the parent process’s environment variables. It can’t, for example, access $HOME, but it can access $ANSWER because it was passed as a custom environment variable through the env option.
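If the goal is only to add a variable on top of the parent’s environment rather than replace it entirely, one common pattern (a sketch of my own, not from the article) is to spread process.env into the env option:
const { spawn } = require('child_process');
const child = spawn('echo $HOME $ANSWER', {
  stdio: 'inherit',
  shell: true,
  // Keep the parent's variables and add ANSWER on top of them
  env: { ...process.env, ANSWER: 42 },
});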
One last important child process option to explain here is the detached option, which makes the child process run independently of its parent process.
Assuming we have a file timer.js that keeps the event loop busy:
setTimeout(() => {
// keep the event loop busy
}, 20000);
We can execute it in the background using the detached option:
const { spawn } = require('child_process');
const child = spawn('node', ['timer.js'], {
detached: true,
stdio: 'ignore'
});
child.unref();
The exact behavior of detached child processes depends on the OS. On Windows, the detached child process will have its own console window, while on Linux the detached child process will be made the leader of a new process group and session.
If the unref function is called on the detached process, the parent process can exit independently of the child. This can be useful if the child is executing a long-running process, but to keep it running in the background the child’s stdio configurations also have to be independent of the parent.
The example above will run a node script (timer.js) in the background by detaching and also ignoring its parent stdio file descriptors, so that the parent can terminate while the child keeps running in the background.
The execFile function
If you need to execute a file without using a shell, the execFile function is what you need. It behaves exactly like the exec function, but does not use a shell, which makes it a bit more efficient. On Windows, some files cannot be executed on their own, like .bat or .cmd files. Those files cannot be executed with execFile and either exec or spawn with shell set to true is required to execute them.
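Here’s a minimal sketch of execFile (assuming a node executable is available on the PATH): it runs the file directly, with no shell in between, and buffers the output for the callback just like exec does.
const { execFile } = require('child_process');
execFile('node', ['--version'], (err, stdout, stderr) => {
  if (err) {
    console.error(`execFile error: ${err}`);
    return;
  }
  console.log(`node version: ${stdout}`);
});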
The *Sync function
The functions spawn, exec, and execFile from the child_process module also have synchronous blocking versions that will wait until the child process exits.
const {
spawnSync,
execSync,
execFileSync,
} = require('child_process');
Those synchronous versions are potentially useful when trying to simplify scripting tasks or any startup processing tasks, but they should be avoided otherwise.
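For example, in a small startup script, a hedged usage sketch of execSync might look like this; it blocks until the shell command finishes and returns the buffered output (a Buffer unless an encoding is given).
const { execSync } = require('child_process');
// Blocks the event loop until the command completes
const output = execSync('find . -type f | wc -l', { encoding: 'utf8' });
console.log(`Number of files ${output}`);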
The fork() function
The fork function is a variation of the spawn function for spawning node processes. The biggest difference between spawn and fork is that a communication channel is established to the child process when using fork, so we can use the send function on the forked process along with the global process object itself to exchange messages between the parent and forked processes. We do this through the EventEmitter module interface. Here’s an example:
The parent file, parent.js:
const { fork } = require('child_process');
const forked = fork('child.js');
forked.on('message', (msg) => {
console.log('Message from child', msg);
});
forked.send({ hello: 'world' });
The child file, child.js:
process.on('message', (msg) => {
console.log('Message from parent:', msg);
});
let counter = 0;
setInterval(() => {
process.send({ counter: counter++ });
}, 1000);
In the parent file above, we fork child.js (which will execute the file with the node command) and then we listen for the message event. The message event will be emitted whenever the child uses process.send, which we’re doing every second.
To pass down messages from the parent to the child, we can execute the send function on the forked object itself, and then, in the child script, we can listen to the message event on the global process object.
When executing the parent.js file above, it’ll first send down the { hello: 'world' } object to be printed by the forked child process and then the forked child process will send an incremented counter value every second to be printed by the parent process.
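Running node parent.js with the two files above should print something roughly like this (the counter keeps going every second):
Message from parent: { hello: 'world' }
Message from child { counter: 0 }
Message from child { counter: 1 }
Message from child { counter: 2 }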
Let’s do a more practical example about the fork function.
Let’s say we have an http server that handles two endpoints. One of these endpoints (/compute below) is computationally expensive and will take a few seconds to complete. We can use a long for loop to simulate that:
const http = require('http');
const longComputation = () => {
let sum = 0;
for (let i = 0; i < 1e9; i++) {
sum += i;
};
return sum;
};
const server = http.createServer();
server.on('request', (req, res) => {
if (req.url === '/compute') {
const sum = longComputation();
return res.end(`Sum is ${sum}`);
} else {
res.end('Ok')
}
});
server.listen(3000);
This program has a big problem: when the /compute endpoint is requested, the server will not be able to handle any other requests because the event loop is busy with the long for loop operation.
There are a few ways to solve this problem depending on the nature of the long operation, but one solution that works for all operations is to just move the computational operation into another process using fork.
We first move the whole longComputation function into its own file and make it invoke that function when instructed via a message from the main process:
In a new compute.js file:
const longComputation = () => {
let sum = 0;
for (let i = 0; i < 1e9; i++) {
sum += i;
};
return sum;
};
process.on('message', (msg) => {
const sum = longComputation();
process.send(sum);
});
Now, instead of doing the long operation in the main process event loop, we can fork the compute.js file and use the messages interface to communicate messages between the server and the forked process.
const http = require('http');
const { fork } = require('child_process');
const server = http.createServer();
server.on('request', (req, res) => {
if (req.url === '/compute') {
const compute = fork('compute.js');
compute.send('start');
compute.on('message', sum => {
res.end(`Sum is ${sum}`);
});
} else {
res.end('Ok')
}
});
server.listen(3000);
When a request to /compute happens now with the above code, we simply send a message to the forked process to start executing the long operation. The main process’s event loop will not be blocked.
Once the forked process is done with that long operation, it can send its result back to the parent process using process.send.
In the parent process, we listen to the message event on the forked child process itself. When we get that event, we’ll have a sum value ready for us to send to the requesting user over http.
The code above is, of course, limited by the number of processes we can fork, but when we execute it and request the long computation endpoint over http, the main server is not blocked at all and can take further requests.
Node’s cluster module, which is the topic of my next article, is based on this idea of child process forking and load balancing the requests among the many forks that we can create on any system.
That’s all I have for this topic. Thanks for reading! Until next time!
Learning React or Node? Check out my books:
Translated from: https://www.freecodecamp.org/news/node-js-child-processes-everything-you-need-to-know-e69498fe970a/