Sparkplug Decode
The Sparkplug specification version 3.0 requires all payloads to be sent in a standardised Protobuf format. This is very efficient in bandwidth use, but is not human readable. The Sparkplug Decode plugin can be used to decode the Protobuf messages and republish them as JSON.
The plugin has two decoding modes, basic and enhanced, and can be disabled at
runtime.
Use the setDecodeMode API command to switch modes.
Prior to version 3.2, only the basic mode was supported. This remains the
default mode if no other configuration is present.
Plugin Activation
To enable the plugin, load it into the broker by adding the following to your
mosquitto.conf:
global_plugin /usr/lib/cedalo_sparkplug_decode.so
Further configuration is carried out over the API.
Basic decoding
This mode automatically processes all Sparkplug messages arriving on topics
matching spBv1.0/#, decodes them into JSON and republishes them to
spJv1.0/#.
This provides a human-readable version of each message, which is useful for
observation and debugging. Only valid Sparkplug messages are decoded and
forwarded to the spJv1.0/# topic; all others are ignored.
The decoding is a stateless, one-to-one conversion: a message with multiple metrics produces a single JSON message containing all of those metrics, and metric aliases are left as-is.
The JSON schema used is taken from the first draft of the Sparkplug B JSON schema. Note, however, that JSON payloads are not currently part of the Sparkplug specification. A standard JSON payload for Sparkplug may appear in the future, at which point the output of this plugin may become incompatible. For this reason, it is recommended that you do not build your device management on the JSON output, but use it only for observation and debugging.
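For example, a decoded NDATA message might look like the following. This is an illustrative payload based on the draft schema; exact field names and values depend on the source message:

```json
{
  "timestamp": 1672531200000,
  "metrics": [
    {
      "name": "Temperature",
      "timestamp": 1672531200000,
      "dataType": "Float",
      "value": 21.5
    }
  ],
  "seq": 4
}
```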
Enhanced decoding
This mode processes NBIRTH, NDATA, DBIRTH, and DDATA messages only. It
republishes each metric individually to its own topic and stores the metric
name/alias mapping so that metrics in NDATA/DDATA messages are published
with the correct name.
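The alias bookkeeping can be pictured as follows. BIRTH metrics carry both a name and (optionally) a numeric alias; later DATA metrics may carry only the alias, which is resolved back to the stored name. This is a simplified sketch, not the plugin's actual implementation:

```python
class AliasResolver:
    """Sketch of the name/alias mapping kept in enhanced mode."""

    def __init__(self):
        self.aliases = {}  # alias (int) -> metric name (str)

    def on_birth(self, metrics):
        # BIRTH metrics carry a name and, optionally, an alias.
        for m in metrics:
            if "alias" in m:
                self.aliases[m["alias"]] = m["name"]

    def resolve(self, metric):
        # DATA metrics may omit the name and send only the alias.
        if "name" not in metric and "alias" in metric:
            return self.aliases.get(metric["alias"])
        return metric.get("name")

r = AliasResolver()
r.on_birth([{"name": "Temperature", "alias": 1, "value": 21.5}])
print(r.resolve({"alias": 1, "value": 22.0}))  # Temperature
```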
By default, the metrics are published to an extended Sparkplug topic. For edge nodes, the topic is of the form:
spBv1.0/<group name>/NBIRTH/<edge node id>//metrics/<metric name>
spBv1.0/<group name>/NDATA/<edge node id>//metrics/<metric name>
For devices the topic is of the form:
spBv1.0/<group name>/DBIRTH/<edge node id>/<device id>/metrics/<metric name>
spBv1.0/<group name>/DDATA/<edge node id>/<device id>/metrics/<metric name>
Thus it is possible to subscribe to all JSON messages using a topic
subscription to spBv1.0/+/+/+/+/metrics/#.
A custom topic prefix can also be specified using the setTopicPrefix API
command. In this case, the spBv1.0 prefix will be replaced with the custom
prefix to produce topics of the form:
<topic prefix>/<group name>/NBIRTH/<edge node id>//metrics/<metric name>
<topic prefix>/<group name>/NDATA/<edge node id>//metrics/<metric name>
<topic prefix>/<group name>/DBIRTH/<edge node id>/<device id>/metrics/<metric name>
<topic prefix>/<group name>/DDATA/<edge node id>/<device id>/metrics/<metric name>
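Putting the pieces together, the republish topic for a single metric can be built roughly like this (a sketch; the device id is empty for edge-node messages, which produces the double slash in the edge-node topic forms):

```python
def metric_topic(prefix, group, msg_type, edge_node, device, metric):
    # An empty device id for NBIRTH/NDATA yields the double slash
    # seen in the edge-node topic form.
    return f"{prefix}/{group}/{msg_type}/{edge_node}/{device}/metrics/{metric}"

print(metric_topic("spBv1.0", "GroupA", "NDATA", "edge01", "", "Temperature"))
# spBv1.0/GroupA/NDATA/edge01//metrics/Temperature
print(metric_topic("factory", "GroupA", "DDATA", "edge01", "dev1", "Pressure"))
# factory/GroupA/DDATA/edge01/dev1/metrics/Pressure
```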
The template metric datatype is not currently supported.
Decoding disabled
To disable the processing of incoming messages, set the decode mode to none
using the setDecodeMode command.
Decoding nesting limits
The Sparkplug propertySet, propertySetList, and template metric types all
allow nested structures to be created. By default the plugin allows a maximum
nesting depth of 10 levels. This can be configured using the setMaxDepth
command. If a metric uses more than this number of levels, the metric will
still be published; however, the levels beyond the maximum will be left as
empty objects.
The currently configured limit can be read using the getMaxDepth command.