Rolling Out Analytics Builder Models: From Single Asset to Global Fleet

When launching an IoT project, the first “Proof of Concept” is usually a success because it’s easy to manage one or two machines. The real challenge—the “Scale Gap”—happens when you try to move that logic to 100, 500, or 5,000 assets.

If your analytics strategy relies on manual setup for every new machine, your operational costs will grow as fast as your fleet, wiping out your ROI. To scale profitably, you need an architecture that separates the logic (how the math works) from the configuration (the specific limits for each machine).

In this guide, we will walk through the architectural patterns for scaling a predictive maintenance use case. We’ll use the fictional company Strong-Vac Solutions as our example. They need to monitor pump health by calculating differential pressure (ΔP) across filters: ΔP = P_{Inlet} - P_{Outlet}. If the difference exceeds a certain limit, the filter is likely clogged, and a maintenance alarm must be triggered.
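The core check is simple enough to sketch in a few lines of Python. This is a stand-alone illustration of the math only; inside Analytics Builder the same logic is wired together from blocks, and the function name and units are illustrative:

```python
def filter_clogged(p_inlet_mbar: float, p_outlet_mbar: float,
                   dp_limit_mbar: float) -> bool:
    """Return True if the differential pressure across the filter
    exceeds the configured limit (the filter is likely clogged)."""
    delta_p = p_inlet_mbar - p_outlet_mbar  # ΔP = P_Inlet - P_Outlet
    return delta_p > dp_limit_mbar

# A clogging filter increases the pressure drop across it:
filter_clogged(1000.0, 750.0, 200.0)  # ΔP = 250 mbar, above the 200 mbar limit
```

Everything that follows is about where the `dp_limit_mbar` value comes from, and how to avoid baking it into 500 copies of the model.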

A Single Hardcoded Asset

When building a prototype, it’s common to create a model where the input blocks are tied to specific Device IDs and the threshold is a fixed number.

The Problem: This works for one pump. But if Strong-Vac has 500 pumps, copying this model 500 times is a maintenance nightmare. If you later decide to change the alarm text, or to add an integration with a CMMS system, you have to edit 500 models.

Taking Advantage of Built-in Scaling

Rather than copying a single model with hardcoded devices and thresholds, let’s take advantage of the built-in capabilities of Streaming Analytics to make our logic portable and reusable.

Case A: Homogeneous Fleets (Using Groups)

If you have a group of identical pumps that all share the same threshold, you don’t need to specify individual devices.

  1. Group Context: In the model, select a Group of devices as Input Source instead of a single device.

  2. Automatic Partitioning: The engine automatically spawns a separate execution context for every device that is a member of that group.

  3. Efficiency: One model definition now monitors N devices simultaneously, each within its own “silo” of data.

Case B: Template Models (Using Parameters)

Case A works fine if all the pumps in that group share the same threshold value. Environment-specific factors (like altitude or humidity) mean that Pump A might need a threshold of 200 mbar, while Pump B needs 250 mbar and Pump C yet another value. To solve this without duplicating the model, you can leverage the built-in concept of Model Templates and Instances.

Implementation:

  • In your input block, instead of selecting a specific device or group of devices, you define a parameter (e.g., pump).

  • In your threshold block, instead of typing 200, you define a parameter (e.g., dP threshold).

Deployment: To activate the model for a given pump, you create an instance of the model template and, for that instance, set pump to a specific device and dP threshold to a specific value.

The Result: You have one “Master Logic” (the Template) and multiple “Instances” with their instance-specific parameter values. If you update the Master Logic, all Instances are updated automatically, while keeping their unique threshold values.

Note that this approach does require you to configure an instance of the model template for each and every pump, which can become cumbersome (also in terms of maintenance) as your device fleet grows.

Pro Tip: Instances can also be applied to (Sub-)Groups. If you have 10 pumps in a “High-Stress” environment, you can deploy one instance to that specific (sub-)group with a tighter threshold.
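Conceptually, the template pattern separates one piece of logic from many parameter sets. The separation can be sketched in plain Python (the names, pump IDs, and threshold values below are illustrative, not an Analytics Builder API):

```python
# One "master logic" function: the Template.
def check_pump(delta_p_mbar: float, dp_threshold_mbar: float) -> bool:
    """The analytic 'math', written once."""
    return delta_p_mbar > dp_threshold_mbar

# Many Instances, each binding the template parameters
# ("pump" and "dP threshold") to concrete values.
instances = {
    "pump-A": {"dP threshold": 200.0},
    "pump-B": {"dP threshold": 250.0},
    "pump-C": {"dP threshold": 230.0},
}

def evaluate(pump_id: str, delta_p_mbar: float) -> bool:
    """Run the shared logic with the instance-specific parameters."""
    params = instances[pump_id]
    return check_pump(delta_p_mbar, params["dP threshold"])

evaluate("pump-A", 220.0)  # exceeds the 200 mbar limit
evaluate("pump-B", 220.0)  # still below the 250 mbar limit
```

Updating `check_pump` changes the behavior of every instance at once, while each instance keeps its own threshold, which is exactly the property described above.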

Dealing with High-Diversity Fleets

As a fleet grows into the thousands with high diversity, managing hundreds of Instances, each with its own threshold setting, in the Analytics Builder UI can still become cumbersome. The more robust architectural pattern is then to decouple the configuration from the model entirely.

In this pattern, you store the threshold as a property on the Managed Object itself, and inside the Analytics Builder model, you retrieve the property from the Managed Object and use that for threshold comparison.

How to implement it:

  1. Define the Property: On each pump’s Managed Object, add a custom fragment:

     "$pump": {
        ...
        "dP_threshold_limit": 225
     }
  2. The Analytics Model: Instead of a parameter, use an Input block that pulls the dP_threshold_limit value from the Managed Object of the device to which the model applies.

    1. It is recommended to check the “Capture Start Value” box to fetch the current value of the Managed Object property at the moment the model is activated; otherwise the model only starts working once the property is next updated.

    2. A small adaptation of the model is needed in this step: the Threshold block used in the previous steps cannot take the threshold value as an input. Replace it with an Expression block, which can cope with an additional input. In this case the expression input1 > float.parse(input2) does the job.
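Because the configuration now lives on the Managed Object, onboarding or retuning a pump is just an inventory update. The platform’s Inventory REST API supports this with a partial update (PUT on the Managed Object). A minimal Python sketch using only the standard library; the base URL, bearer-token auth, and device ID are placeholder assumptions (your tenant may use Basic auth instead):

```python
import json
import urllib.request

def build_threshold_update(threshold_mbar: float) -> dict:
    """Payload that updates only the custom fragment. A PUT on a
    Managed Object is a partial update: other properties are untouched."""
    return {"$pump": {"dP_threshold_limit": threshold_mbar}}

def set_dp_threshold(base_url: str, token: str, device_id: str,
                     threshold_mbar: float) -> None:
    """Write the device-specific threshold to its Managed Object."""
    req = urllib.request.Request(
        url=f"{base_url}/inventory/managedObjects/{device_id}",
        data=json.dumps(build_threshold_update(threshold_mbar)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure

# Example (placeholder tenant URL and device ID):
# set_dp_threshold("https://example.tenant.com", "<token>", "12345", 225.0)
```

With “Capture Start Value” unchecked, the running model picks the new value up on this very update; with it checked, a freshly activated model also starts with the current value.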

The Result: The model becomes a generic “Engine”:

  • calculates ΔP,

  • looks at the device’s own properties to see what its specific limit is,

  • compares ΔP to this threshold,

  • triggers the creation of an alarm if the comparison is true.
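Put together, the generic “Engine” described above boils down to one function that receives the device’s own properties alongside its telemetry. This is a stand-alone Python illustration, not the Analytics Builder runtime, and the alarm type and severity are hypothetical:

```python
def evaluate_pump(managed_object: dict,
                  p_inlet_mbar: float, p_outlet_mbar: float):
    """Generic engine: the threshold lives on the device, not in the model.
    Returns an alarm-like dict if the filter looks clogged, else None."""
    # 1. Calculate ΔP.
    delta_p = p_inlet_mbar - p_outlet_mbar
    # 2. Look up the device's own limit (mirrors float.parse(input2)).
    limit = float(managed_object["$pump"]["dP_threshold_limit"])
    # 3./4. Compare, and raise an alarm if the comparison is true.
    if delta_p > limit:
        return {"type": "pump_FilterCloggedAlarm",   # hypothetical alarm type
                "severity": "MAJOR",
                "text": f"Filter clogged: dP {delta_p} mbar > {limit} mbar"}
    return None

mo = {"$pump": {"dP_threshold_limit": 225}}
evaluate_pump(mo, 1000.0, 700.0)  # ΔP = 300 mbar: alarm
evaluate_pump(mo, 1000.0, 900.0)  # ΔP = 100 mbar: no alarm
```

Note that nothing pump-specific appears in the function body: the same engine serves every device, however diverse the fleet.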

The (additional) value it brings

  • Centralized Management: All configurations live in the Inventory (Digital Twin), which is the single source of truth for the asset’s state.

  • Zero Downtime Updates: Changes to thresholds take effect instantly without needing to restart or redeploy the model.

  • Empowers Field Staff: A technician/operator/… can onboard a new device without having to touch the underlying model or manually enter parameters.

Summary

The transition from a single-pump pilot to a global fleet often fails when teams treat each asset as a unique software project. As you’ve seen with the Strong-Vac example, Analytics Builder provides the architectural tools to avoid this “manual-scale” trap by treating logic as a dynamic template rather than a static script.

By leveraging the following core capabilities, you shift the workload from manual maintenance to automated orchestration:

  • Logic Decoupling: Using Template Parameters allows you to separate the analytic “math” from the site-specific “thresholds.” You manage one master logic file while supporting hundreds of unique operational environments.

  • Parameter-Driven Configuration: Moving parameters to the Managed Object turns your Digital Twin into a configuration layer. This allows the streaming engine to remain generic, while individual device behavior is governed by the data in the inventory.

  • Lifecycle Management: Because a single model can target groups or entire fleets, updates to your maintenance algorithms can be pushed globally without the risk and overhead of redeploying hundreds of individual models.

By using these built-in scaling patterns, you ensure your predictive maintenance solution remains manageable, whether you are monitoring one pump or ten thousand.
