From lithography machines to medical instruments, real-world edge systems demand deterministic behavior, reliable sensor interpretation and carefully engineered algorithms that go far beyond simply training an AI model.
Artificial intelligence has been glorified as the future of automation: the ultimate solution for efficiency, decision-making and innovation, marketed as a transformative technology for everything from healthcare and finance to autonomous systems and industrial processes.
However, this narrative doesn’t reflect the present reality. AI in its current form remains too limited to be relied upon for mission-critical applications that require deterministic behavior, such as those stipulated by ISO/IEC standards. Although it performs impressively in controlled settings, AI often struggles when exposed to the complexity, variability and unpredictability of real-world environments – particularly when those conditions fall outside the assumptions used to train the model.
This degradation in performance occurs because AI lacks common-sense reasoning and struggles with real-world subtlety, ie it doesn’t understand the real world in the same way that humans do. Trained largely on synthetic or otherwise limited datasets, it has no intrinsic grasp of the physical or situational subtleties present in operational environments. When real-world conditions differ from those seen during training, AI systems often misinterpret context, leading to unreliable or misleading outcomes – an unacceptable limitation for any mission-critical application.
An application is classed as mission-critical when results must be delivered predictably, repeatably and within guaranteed timing constraints defined by system and stakeholder requirements. In many operational environments, unreliable or delayed behavior can lead directly to financial loss, disrupted processes or regulatory non-compliance. In embedded edge systems like lithography machines, factory automation, medical instruments and logistics automation, algorithms interact directly with physical processes under strict constraints such as bounded latency, limited computational resources, low power consumption and long-term reliability, while often needing to comply with IEC/ISO standards.
At the heart of these systems are sensors whose signals reflect real-world physical processes but are often corrupted by noise, interference and environmental disturbances. This broader challenge is increasingly discussed under the banner of “physical AI” – intelligent systems that perceive, reason and act within the physical world through sensors, signals and real-time control. Achieving this in practice requires the integration of deterministic digital signal processing (DSP) algorithms, machine learning (ML) and embedded systems design.

Evolution of intelligent systems
The idea of embedding intelligence into machines predates modern physical AI by decades. Early automation systems implemented rule-based behavior using mechanical logic and later electromechanical control systems. With the advent of digital electronics, these concepts evolved into embedded systems built on programmable logic controllers (PLCs), microcontrollers (MCUs) and later digital signal processors (DSPs), allowing complex signal analysis and control algorithms to be implemented in software while maintaining robust deterministic behavior.
Regardless of the technology used, these systems ultimately rely on measurements obtained from sensors observing the physical world. This data is rarely clean, though, as signals are affected by measurement noise, powerline interference and environmental disturbances. As such, extracting meaningful information has traditionally relied on signal processing techniques, implemented either in analog electronics or digitally on a DSP or microcontroller. These techniques perform tasks such as filtering, spectral analysis and feature extraction: filtering removes noise, spectral analysis reveals hidden patterns and feature extraction transforms raw measurements into structured information describing the underlying system behavior. All three are deterministic methods that make signal interpretation predictable and verifiable.
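To make this concrete, the following minimal sketch in C shows such a deterministic stage: a single-pole low-pass filter suppresses high-frequency noise, after which a fixed-length window of filtered samples is reduced to two simple features (mean and RMS). The filter coefficient, window length and sample values are illustrative assumptions, not taken from any specific system.

```c
#include <math.h>
#include <stdio.h>

#define WINDOW_LEN 8            /* illustrative window length */

/* Single-pole IIR low-pass: y[n] = a*x[n] + (1-a)*y[n-1].
 * The coefficient 'a' (0..1) trades noise suppression against lag. */
static float lowpass_step(float x, float *state, float a)
{
    *state = a * x + (1.0f - a) * *state;
    return *state;
}

/* Deterministic feature extraction: reduce a window of filtered
 * samples to a mean and an RMS value for downstream classification. */
static void extract_features(const float *w, int n, float *mean, float *rms)
{
    float sum = 0.0f, sq = 0.0f;
    for (int i = 0; i < n; i++) {
        sum += w[i];
        sq  += w[i] * w[i];
    }
    *mean = sum / (float)n;
    *rms  = sqrtf(sq / (float)n);
}

int main(void)
{
    /* Synthetic noisy measurement, purely for illustration. */
    const float raw[WINDOW_LEN] = { 1.0f, 1.3f, 0.8f, 1.1f,
                                    0.9f, 1.2f, 1.0f, 1.1f };
    float filtered[WINDOW_LEN];
    float state = raw[0];

    for (int i = 0; i < WINDOW_LEN; i++)
        filtered[i] = lowpass_step(raw[i], &state, 0.25f);

    float mean, rms;
    extract_features(filtered, WINDOW_LEN, &mean, &rms);
    printf("mean = %.3f, rms = %.3f\n", mean, rms);
    return 0;
}
```

Because the arithmetic and memory footprint of a stage like this are fixed at compile time, its worst-case execution time can be bounded and verified, which is precisely the property that mission-critical applications demand.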
The early embedded platforms were typically designed as isolated control systems, operating with deterministic control loops. However, as semiconductor manufacturers began to provide developers with connectivity modules, such as Zigbee, BLE, LoRa, Wi-Fi and Ethernet, together with more advanced low-cost microcontrollers, these systems increasingly became part of the Internet of Things (IoT), where devices transmit measurements to cloud infrastructure for storage and analysis.
As processor technology continued to improve and more software libraries became available, engineers began augmenting IoT systems with ML models. This convergence gave rise to AIoT (Artificial Intelligence of Things) systems, enabling devices to perform increasingly autonomous decision-making without a human in the loop.
While AIoT enables powerful functionality, it also introduces limitations such as network latency, bandwidth consumption and dependence on continuous connectivity. For many real-world systems, the processing must therefore move closer to the data source, leading to edge AI and ultimately real-time edge intelligence (RTEI).

Integrated approach
RTEI systems operate directly on real-world physical signals originating from sensors (eg temperature, pressure or strain). These signals are degraded by noise and constrained by bandwidth, sampling behavior and numerical precision. The systems execute under real-time, deterministic timing constraints, where latency and scheduling behavior form part of the specification, typically managed by an RTOS running on the embedded computing platform. The output is traceable and physically interpretable, with the behavior largely governed by DSP algorithms derived from human reasoning rather than data-driven inference alone.
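As a portable illustration of such a timing contract, the sketch below runs a processing step on a fixed 1 ms period using POSIX absolute-deadline sleeps, a common stand-in for an RTOS periodic task; on a real device, the same structure would typically be expressed as an RTOS task. The period and the process_cycle placeholder are assumptions for illustration only.

```c
#define _POSIX_C_SOURCE 200809L
#include <time.h>

#define PERIOD_NS 1000000L      /* 1 ms processing period (illustrative) */

/* Placeholder: the sensing + DSP + inference step would run here. */
static void process_cycle(long n) { (void)n; }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (long n = 0; n < 1000; n++) {
        process_cycle(n);

        /* Advance the absolute deadline by exactly one period... */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        /* ...and sleep until that deadline, not for a relative
         * duration, so per-cycle jitter doesn't accumulate. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```

Using absolute deadlines rather than relative sleeps prevents small per-cycle delays from drifting into long-term timing error, which is the essence of the deterministic scheduling behavior described above.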
ML models operate most effectively on a few well-designed features grounded in the physics of the sensor data. To facilitate this, the DSP algorithms form an essential preprocessing step for signal enhancement and feature extraction, so that the models can perform their classification tasks on high-quality feature data. It’s this combination that can deliver high classification accuracy, even when conditions deviate slightly from the original training datasets.
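A minimal sketch of this division of labor, continuing the C example above: the two DSP features feed a tiny logistic-regression-style classifier. The weights and bias here are invented placeholders; in practice, they would come from an offline training pipeline and be validated against the stakeholder requirements.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative logistic-regression weights; real values would be
 * produced by an offline training pipeline, not hand-written. */
static const float W[2] = { 1.8f, -2.4f };
static const float BIAS = 0.3f;

/* Classify one feature vector (eg the mean and RMS produced by the
 * DSP stage). Returns the probability of the "anomalous" class. */
static float classify(const float feat[2])
{
    float z = BIAS;
    for (int i = 0; i < 2; i++)
        z += W[i] * feat[i];
    return 1.0f / (1.0f + expf(-z));    /* sigmoid */
}

int main(void)
{
    const float features[2] = { 1.05f, 1.07f };   /* from the DSP stage */
    float p = classify(features);
    printf("p(anomaly) = %.3f -> %s\n", p, p > 0.5f ? "alert" : "normal");
    return 0;
}
```

Because the model sees only a handful of physically meaningful features rather than raw samples, its decisions are easier to inspect and tend to degrade gracefully when operating conditions drift from the training data.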
In an RTEI architecture, sensing, signal processing and ML inference run locally at the edge. Cloud connectivity is optional and typically used only for device management and data storage.
The DSP algorithms, the ML models and the underlying embedded platform are developed together in a standards-driven process, ensuring compliance with IEC or ISO requirements for commercial deployment while remaining aligned with stakeholder requirements. This integrated approach improves reliability and yields traceable, verifiable systems suitable for long-term deployment in real-world edge applications.

