Releases: moleculerjs/moleculer
v0.13.8
v0.13.7
v0.13.6
New
Secure service settings
To protect your tokens & API keys, define a $secureSettings: []
property in service settings and set the protected property keys.
The protected settings won't be published to other nodes and they won't appear in the Service Registry. They are only available under `this.settings` inside the service functions.
Example
```js
// mail.service.js
module.exports = {
    name: "mailer",
    settings: {
        $secureSettings: ["transport.auth.user", "transport.auth.pass"],
        from: "[email protected]",
        transport: {
            service: "gmail",
            auth: {
                user: "[email protected]",
                pass: "yourpass"
            }
        }
    }
    // ...
};
```
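The effect is that the listed dotted key paths are stripped from the settings object before it is published. A minimal sketch of such path-based filtering (illustrative only; `omitPaths` is a hypothetical helper, not Moleculer's actual implementation):

```javascript
// Hypothetical sketch: remove dotted key paths from a settings object
// before publishing it. NOT Moleculer's actual implementation.
function omitPaths(obj, paths) {
    // Deep-clone so the original settings stay intact
    const copy = JSON.parse(JSON.stringify(obj));
    for (const path of paths) {
        const keys = path.split(".");
        let node = copy;
        for (let i = 0; i < keys.length - 1 && node; i++)
            node = node[keys[i]];
        if (node) delete node[keys[keys.length - 1]];
    }
    return copy;
}

const settings = {
    from: "[email protected]",
    transport: {
        service: "gmail",
        auth: { user: "[email protected]", pass: "yourpass" }
    }
};

const published = omitPaths(settings, ["transport.auth.user", "transport.auth.pass"]);
// `published.transport.auth` is now an empty object,
// while `settings` itself is unchanged.
```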
Changes
- fix `cacher.clean` issue #435
- add `disableVersionCheck` option for broker transit options. It can disable the protocol version checking logic in Transit. Default: `false`
- improve TypeScript definition file. #442 #454
- `waitForServices` accepts versioned service names (e.g. `v2.posts`).
- update dependencies (plus using semver ranges in dependencies)
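The new transit option would go under the broker's `transit` options; a minimal config sketch (illustrative only, placement assumed from the release note):

```javascript
// Sketch: disable protocol version checking in Transit
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
    transit: {
        disableVersionCheck: true // default: false
    }
});
```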
v0.13.5
New
Conditional caching
It's a common issue that you enable caching for an action but sometimes you don't want to get data from the cache. To solve it, set `ctx.meta.$cache = false` before calling, and the cacher won't return cached responses.
Example
```js
// Turn off caching for this request
broker.call("greeter.hello", { name: "Moleculer" }, { meta: { $cache: false }});
```
Another solution is to use a custom function which enables or disables caching for every request. The function receives the `ctx` Context instance, so it has access to all params and meta data.
Example
```js
// greeter.service.js
const chalk = require("chalk");

module.exports = {
    name: "greeter",
    actions: {
        hello: {
            cache: {
                // Disable caching when the request sets `noCache: true`
                enabled: ctx => ctx.params.noCache !== true,
                keys: ["name"]
            },
            handler(ctx) {
                this.logger.debug(chalk.yellow("Execute handler"));
                return `Hello ${ctx.params.name}`;
            }
        }
    }
};
```
```js
// Use the custom `enabled` function to turn off caching for this request
broker.call("greeter.hello", { name: "Moleculer", noCache: true });
```
LRU memory cacher
An LRU memory cacher has been added to the core modules. It uses the familiar lru-cache library.
Example
```js
let broker = new ServiceBroker({ cacher: "MemoryLRU" });
```

With options:

```js
let broker = new ServiceBroker({
    logLevel: "debug",
    cacher: {
        type: "MemoryLRU",
        options: {
            // Maximum items
            max: 100,
            // Time-to-Live
            ttl: 3
        }
    }
});
```
Changes
- throw the error further in `loadService` method so that Runner prints the correct error stack.
- new `packetLogFilter` transit option to filter packets in debug logs (e.g. HEARTBEAT packets) by @faeron
- the Redis cacher `clean` & `del` methods handle array parameter by @dkuida
- the Memory cacher `clean` & `del` methods handle array parameter by @icebob
- fix to handle `version: 0` as a valid version number by @ngraef
v0.13.4
Changes
- catch errors in `getCpuUsage()` method.
- support multiple urls in AMQP transporter by @urossmolnik
- fix AMQP connection recovery by @urossmolnik
- add `transit.disableReconnect` option to disable reconnecting logic at broker starting by @Gadi-Manor
- catch `os.userInfo` errors in health action by @katsanva
- allow specifying `0` as `retries` #404 by @urossmolnik
- fix `GraceFulTimeoutError` bug #400
- fix event return handling to avoid localEvent error handling issue in middleware #403
- update fastest-validator to the 0.6.12 version
- update all dependencies
v0.13.3
Changes
- update dependencies
- fix MQTTS connection string protocol from `mqtt+ssl://` to `mqtts://` by @AndreMaz
- Moleculer Runner supports TypeScript configuration file `moleculer.config.ts`
- fix to call service start after hot-reloading.
- fix Bluebird warning in service loading #381 by @faeron
- fix `waitForServices` definition in `index.d.ts` #358
- fix `cpuUsage` issue #379 by @faeron
v0.13.2
Changes
- update dependencies
- add Notepack (another MsgPack) serializer
- add `skipProcessEventRegistration` broker option to disable `process.on` shutdown event handlers which stop the broker.
- make service dependencies unique
- add `socketOptions` to AMQP transporter options. #330
- fix unhandled promise in AMQP transporter `connect` method.
- add `autoDeleteQueues` option to AMQP transporter. #341
- ES6 support has improved. #348
- add `qos` transporter option to MQTT transporter. Default: `0`
- add `topicSeparator` transporter option to MQTT transporter. Default: `.`
- fix MQTT transporter disconnect logic (waiting for in-flight messages)
- add support for non-defined defaultOptions variables #350
- update ioredis to v4
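Several of the new transporter options above belong in the transporter definition; a minimal config sketch for the MQTT ones (values are illustrative, placement assumed from the option names):

```javascript
// Sketch: MQTT transporter with the new 0.13.2 options
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
    transporter: {
        type: "MQTT",
        options: {
            qos: 1,             // default: 0
            topicSeparator: "/" // default: "."
        }
    }
});
```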
v0.13.1
v0.13.0
Migration guide from v0.12.x to v0.13.x is here.
Breaking changes
Streaming support
Built-in streaming support has been implemented. Node.js streams can be transferred as request `params` or as the response. You can use it to transfer an uploaded file from a gateway, or to encode/decode or compress/decompress streams.
Why is it a breaking change?
Because the protocol has been extended with a new field, which caused a breaking change in schema-based serializers (ProtoBuf, Avro). Therefore, if you use ProtoBuf or Avro, you won't be able to communicate with previous (<=0.12) brokers. With the JSON or MsgPack serializer there is nothing extra to do.
Examples
Send a file to a service as a stream
```js
const fs = require("fs");

const stream = fs.createReadStream(fileName);
broker.call("storage.save", stream, { meta: { filename: "avatar-123.jpg" }});
```
Please note, the `params` must be a stream; you cannot add any other variables to the request. Use the `meta` property to transfer additional data.
Receiving a stream in a service
```js
const fs = require("fs");

module.exports = {
    name: "storage",
    actions: {
        save(ctx) {
            const s = fs.createWriteStream(`/tmp/${ctx.meta.filename}`);
            ctx.params.pipe(s);
        }
    }
};
```
Return a stream as response in a service
```js
const fs = require("fs");

module.exports = {
    name: "storage",
    actions: {
        get: {
            params: {
                filename: "string"
            },
            handler(ctx) {
                return fs.createReadStream(`/tmp/${ctx.params.filename}`);
            }
        }
    }
};
```
Process received stream on the caller side
```js
const fs = require("fs");

const filename = "avatar-123.jpg";
broker.call("storage.get", { filename })
    .then(stream => {
        const s = fs.createWriteStream(`./${filename}`);
        stream.pipe(s);
        s.on("close", () => broker.logger.info("File has been received"));
    });
```
AES encode/decode example service
```js
const crypto = require("crypto");
const password = "moleculer";

module.exports = {
    name: "aes",
    actions: {
        encrypt(ctx) {
            const encrypt = crypto.createCipher("aes-256-ctr", password);
            return ctx.params.pipe(encrypt);
        },
        decrypt(ctx) {
            const decrypt = crypto.createDecipher("aes-256-ctr", password);
            return ctx.params.pipe(decrypt);
        }
    }
};
```
Better Service & Broker lifecycle handling
The ServiceBroker & Service lifecycle handler logic has been improved. The reason for the change was a problem that occurred when loading multiple services locally: they could call each other's actions before the `started` handlers had run. This generally caused errors if a database connection was established in the `started` event handler.
This problem has been fixed, with a possible side effect: it causes errors (mostly in unit tests) if you call local services without `broker.start()`.
It works in the previous version
```js
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker();
broker.loadService("./math.service.js");
broker.call("math.add", { a: 5, b: 3 }).then(res => console.log(res));
// Prints: 8
```
From v0.13 it throws a `ServiceNotFoundError` exception, because the service is only loaded, not started yet.
Correct logic
```js
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker();
broker.loadService("./math.service.js");
broker.start().then(() => {
    broker.call("math.add", { a: 5, b: 3 }).then(res => console.log(res));
    // Prints: 8
});
```
or with await
```js
broker.loadService("./math.service.js");
await broker.start();
const res = await broker.call("math.add", { a: 5, b: 3 });
console.log(res);
// Prints: 8
```
A similar issue has been fixed at broker shutdown. Previously, while a stopping broker was shutting down its local services, it still accepted incoming requests from remote nodes.
The shutdown logic has also been changed: when you call `broker.stop`, the broker first publishes an empty service list to remote nodes, so they route requests to other instances.
Default console logger
You no longer need to set `logger: console` in broker options, because ServiceBroker uses `console` as the default logger.
```js
const broker = new ServiceBroker();
// It will print log messages to the console
```
Disable logging (e.g. in tests)
```js
const broker = new ServiceBroker({ logger: false });
```
Changes in internal event sending logic
The `$` prefixed internal events will be transferred if they are emitted with `emit` or `broadcast`. If you don't want to transfer them, use the `broadcastLocal` method.
> From v0.13, the `$` prefixed events are built-in core events instead of internal "only-local" events.
Improved Circuit Breaker
Threshold-based circuit-breaker solution has been implemented. It uses a time window to check the failed request rate. Once the threshold
value is reached, it trips the circuit breaker.
```js
const broker = new ServiceBroker({
    nodeID: "node-1",
    circuitBreaker: {
        enabled: true,
        threshold: 0.5,
        minRequestCount: 20,
        windowTime: 60, // in seconds
        halfOpenTime: 5 * 1000,
        check: err => err && err.code >= 500
    }
});
```
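The trip decision is rate-based: within the time window, the broker counts total and failed requests and trips once the failure rate reaches `threshold`, provided at least `minRequestCount` requests were seen. A minimal sketch of that condition (illustrative only; `shouldTrip` is a hypothetical helper, not Moleculer's internal code):

```javascript
// Illustrative sketch of the threshold check, not Moleculer's internals.
function shouldTrip(failedCount, totalCount, opts) {
    // Not enough samples in the window yet: keep the circuit closed
    if (totalCount < opts.minRequestCount) return false;
    return failedCount / totalCount >= opts.threshold;
}

const opts = { threshold: 0.5, minRequestCount: 20 };
shouldTrip(10, 20, opts); // true  (50% failure rate, enough samples)
shouldTrip(5, 20, opts);  // false (25% failure rate)
shouldTrip(10, 15, opts); // false (too few requests in the window)
```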
Instead of the `failureOnTimeout` and `failureOnReject` properties, there is a new `check()` function property in the options. The circuit breaker uses it to detect which errors count as failed requests.
You can override these global options in the action definition, as well.
```js
module.exports = {
    name: "users",
    actions: {
        create: {
            circuitBreaker: {
                // All CB options can be overwritten from broker options.
                threshold: 0.3,
                windowTime: 30
            },
            handler(ctx) {}
        }
    }
};
```
CB metrics events removed
The metrics circuit breaker events have been removed due to internal event logic changes.
Use the $circuit-breaker.*
events instead of metrics.circuit-breaker.*
events.
Improved Retry feature (with exponential backoff)
The old retry feature has been improved. It now uses exponential backoff for retries; the old solution retried the request immediately after a failure.
The retry options have also been changed in the broker options. Every option is now under the `retryPolicy` property.
```js
const broker = new ServiceBroker({
    nodeID: "node-1",
    retryPolicy: {
        enabled: true,
        retries: 5,
        delay: 100,
        maxDelay: 2000,
        factor: 2,
        check: err => err && !!err.retryable
    }
});
```
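With these options, the wait before attempt `n` grows as `delay * factor^(n-1)`, capped at `maxDelay`. A minimal sketch of that formula (illustrative only; `backoffDelay` is a hypothetical helper assumed to match the described policy):

```javascript
// Illustrative sketch of exponential backoff delays, not Moleculer's internals.
function backoffDelay(attempt, { delay, maxDelay, factor }) {
    // attempt is 1-based: the first retry waits `delay` ms
    return Math.min(delay * Math.pow(factor, attempt - 1), maxDelay);
}

const policy = { delay: 100, maxDelay: 2000, factor: 2 };
backoffDelay(1, policy); // 100
backoffDelay(3, policy); // 400
backoffDelay(6, policy); // 2000 (capped at maxDelay)
```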
Overwrite the `retries` value in the calling options
The `retryCount` calling option has been renamed to `retries`.
```js
broker.call("posts.find", {}, { retries: 3 });
```
There is a new `check()` function property in the options. The Retry middleware uses it to detect which errors are failed requests that need a retry. The default function checks the `retryable` property of errors.
These global options can be overridden in action definition, as well.
```js
module.exports = {
    name: "users",
    actions: {
        find: {
            retryPolicy: {
                // All Retry policy options can be overwritten from broker options.
                retries: 3,
                delay: 500
            },
            handler(ctx) {}
        },
        create: {
            retryPolicy: {
                // Disable retries for this action
                enabled: false
            },
            handler(ctx) {}
        }
    }
};
```
Changes in context tracker
There are also some changes in context tracker configuration.
```js
const broker = new ServiceBroker({
    nodeID: "node-1",
    tracking: {
        enabled: true,
        shutdownTimeout: 5000
    }
});
```
Disable tracking in the calling options
```js
broker.call("posts.find", {}, { tracking: false });
```
The shutdown timeout can be overwritten with the `$shutdownTimeout` property in service settings.
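A minimal sketch of such a per-service override (service name and value are illustrative):

```javascript
// Sketch: override the tracking shutdown timeout for one service
module.exports = {
    name: "posts",
    settings: {
        $shutdownTimeout: 10000 // in milliseconds
    }
};
```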
Removed internal statistics module
The internal statistics module (`$node.stats`) has been removed. If you still need it, download it from here, load it as a service, and call `stat.snapshot` to retrieve the collected statistics.
Renamed errors
Some errors have been renamed in order to follow name conventions.
- `ServiceNotAvailable` -> `ServiceNotAvailableError`
- `RequestRejected` -> `RequestRejectedError`
- `QueueIsFull` -> `QueueIsFullError`
- `InvalidPacketData` -> `InvalidPacketDataError`
Context nodeID changes
The `ctx.callerNodeID` has been removed. `ctx.nodeID` contains the target or caller nodeID. If you need the current nodeID, use `ctx.broker.nodeID`.
Enhanced ping method
It returns a `Promise` with the ping responses. Moreover, the method has been renamed to `broker.ping`.
Ping a node with a 1 second timeout
```js
broker.ping("node-123", 1000).then(res => broker.logger.info(res));
```
Output:
```js
{
    nodeID: 'node-123',
    elapsedTime: 16,
    timeDiff: -3
}
```
Ping all known nodes
```js
broker.ping().then(res => broker.logger.info(res));
```
Output:
```js
{
    "node-100": {
        nodeID: 'node-100',
        elapsedTime: 10,
        timeDiff: -2
    },
    "node-101": {
        nodeID: 'node-101',
        elapsedTime: 18,
        timeDiff: 32
    },
    "node-102": {
        nodeID: 'node-102',
        elapsedTime: 250,
        timeDiff: 850
    }
}
```
Amended cacher key generation logic
When you don't define `keys` for caching, the cacher...