Robots Atlas
May 5, 2026 · 5 min read · Physical AI · Embodied AI · robotics governance

Physical AI brings new governance demands for autonomous systems

When an AI model controls a machine

The term "physical AI" refers to systems in which AI models make decisions that translate directly into physical actions — a robot arm movement, a machine instruction, a sensor-based navigation choice. This is fundamentally different from purely software-based automation, because errors carry material consequences: damaged objects, safety incidents, flawed decisions in production environments.

Deployments are scaling quickly. According to the International Federation of Robotics, 542,000 industrial robots were installed worldwide in 2024 — more than double the annual figure from a decade earlier. The federation forecasts 575,000 units in 2025 and over 700,000 by 2028. Analysts at Grand View Research estimate the global Physical AI market at $81.64 billion in 2025, projecting growth to $960.38 billion by 2033 — though such estimates vary widely depending on how vendors define "intelligence" in physical systems.

The central governance question is: how do you oversee a system that doesn't just interpret data, but executes sequences of actions in a real-world environment?

Gemini Robotics as an architectural example

Google DeepMind addressed this directly in March 2025 with two model releases. Gemini Robotics is a vision-language-action (VLA) model built on Gemini 2.0, designed for direct robot control. Gemini Robotics-ER focuses on embodied reasoning — spatial understanding and task planning — without directly actuating hardware.

In its launch materials, Google DeepMind described three properties it considers essential for useful robots: generality (handling unfamiliar environments and objects), interactivity (adapting to changing human instructions), and dexterity (executing fine physical tasks). Demonstrated capabilities included folding paper, packing items into a bag, and manipulating objects unseen during training.

In April 2026, the company made Gemini Robotics-ER 1.6 available in preview through the Gemini API. According to Google's developer documentation, the model combines visual scene interpretation, spatial reasoning, and planning from natural-language commands — along with success detection: the system must evaluate whether a task has been completed correctly, whether it should retry, or whether it should stop.

Success detection is a function that traditional automation handled through hard-coded rules. In AI models, it is probabilistic — and a mistaken assessment can trigger further actions based on a false premise.
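The contrast can be sketched in a few lines. This is an illustrative example, not Gemini's actual interface: the function names, thresholds, and the `ModelAssessment` type are all hypothetical, chosen only to show how a deterministic rule differs from a probabilistic estimate that a governance layer must convert into a bounded retry/stop/proceed decision.

```python
from dataclasses import dataclass

def rule_based_success(gripper_closed: bool, weight_on_scale_g: float) -> bool:
    """Traditional automation: the part counts as picked iff the gripper
    closed and the scale lost the part's weight (hard-coded thresholds)."""
    return gripper_closed and weight_on_scale_g < 5.0

@dataclass
class ModelAssessment:
    success_probability: float  # model's estimate that the task completed

def decide_next_action(assessment: ModelAssessment,
                       attempts: int,
                       max_attempts: int = 3,
                       accept: float = 0.9,
                       reject: float = 0.3) -> str:
    """Model-based automation: map a probabilistic success estimate to a
    bounded action. Proceed only above a high-confidence threshold, retry
    in the uncertain band, and stop (escalating to a human) when
    confidence is low or retries are exhausted -- otherwise a mistaken
    'success' would trigger further actions on a false premise."""
    if assessment.success_probability >= accept:
        return "proceed"
    if assessment.success_probability <= reject or attempts >= max_attempts:
        return "stop"
    return "retry"

print(decide_next_action(ModelAssessment(0.95), attempts=1))  # proceed
print(decide_next_action(ModelAssessment(0.60), attempts=1))  # retry
print(decide_next_action(ModelAssessment(0.60), attempts=3))  # stop
```

The key design choice is that the uncertain middle band never silently proceeds: ambiguity is resolved by retrying or escalating, never by assuming success.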

The governance problem: defining boundaries in the physical world

In software environments, governing AI agents typically involves four layers: defining which resources and tools the system can access, specifying which actions require human approval, logging activity for review, and setting escalation paths for failures. In robotics, physical controls are added on top: force and acceleration limits, collision detection, stability monitoring.
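The four software layers can be sketched as a single policy gate in front of tool execution. Everything here is a hypothetical illustration under stated assumptions — the tool names, the allow-list, and the `human_approves` placeholder are invented for the example, not taken from any real agent framework.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

ALLOWED_TOOLS = {"read_sensor", "move_arm"}   # layer 1: resource/tool access
REQUIRES_APPROVAL = {"move_arm"}              # layer 2: human approval

def human_approves(tool: str, args: dict) -> bool:
    # Placeholder: in practice this would page an operator and block
    # until a decision arrives.
    return True

def execute(tool: str, args: dict) -> None:
    """Single choke point: every tool call passes the allow-list, the
    approval gate, and the audit log before it can run."""
    if tool not in ALLOWED_TOOLS:
        log.error("blocked disallowed tool: %s", tool)  # layer 4: escalation
        raise PermissionError(f"{tool} is not on the allow-list")
    if tool in REQUIRES_APPROVAL and not human_approves(tool, args):
        raise PermissionError(f"{tool} denied by operator")
    log.info("executing %s with %s", tool, args)        # layer 3: audit log
    # ... dispatch to the actual tool implementation here

execute("read_sensor", {"id": 7})   # allowed, logged
# execute("open_valve", {})         # would raise PermissionError
```

In robotics, the point made above is that this gate is necessary but not sufficient: force limits, collision detection, and stability monitoring sit below it in the stack.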

Google DeepMind describes robot safety as a layered problem — lower layers cover physical and electrical constraints, while higher layers involve contextual reasoning: whether a given instruction is safe to execute in the current situation.
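A minimal sketch of that layering, with invented limits and an intentionally crude context rule: a command must first clear hard physical constraints, and only then a context-dependent check of whether it is safe in the current situation. None of these thresholds or names come from Google DeepMind's materials.

```python
from dataclasses import dataclass

@dataclass
class Command:
    force_n: float
    acceleration_mps2: float
    description: str

# Lower layer: hard physical and electrical constraints (assumed values).
MAX_FORCE_N = 50.0
MAX_ACCEL_MPS2 = 2.0

def physically_admissible(cmd: Command) -> bool:
    """Lower layer: reject anything exceeding fixed actuator limits."""
    return cmd.force_n <= MAX_FORCE_N and cmd.acceleration_mps2 <= MAX_ACCEL_MPS2

def contextually_safe(cmd: Command, humans_in_workspace: bool) -> bool:
    """Higher layer: context-dependent reasoning (illustrative rule only:
    slow down sharply whenever people share the workspace)."""
    return not (humans_in_workspace and cmd.acceleration_mps2 > 0.5)

def admit(cmd: Command, humans_in_workspace: bool) -> bool:
    """A command executes only if every layer approves it."""
    return physically_admissible(cmd) and contextually_safe(cmd, humans_in_workspace)

cmd = Command(force_n=20.0, acceleration_mps2=1.0, description="move bin")
print(admit(cmd, humans_in_workspace=False))  # True
print(admit(cmd, humans_in_workspace=True))   # False: too fast near people
```

The structural point: the same command can be physically admissible yet contextually unsafe, which is exactly the gap the higher layers exist to close.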

The company also released ASIMOV, a dataset designed to evaluate semantic safety in robotics and embodied AI. According to Google DeepMind, the dataset tests whether systems can understand safety-related instructions and avoid unsafe behaviour in physical settings.

Comparing this to governance requirements for software agents reveals an important asymmetry: a software agent can be stopped by revoking API access; a physical agent requires stop mechanisms that remain effective even when the supervisory system fails.
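One common pattern for that requirement is a heartbeat watchdog, sketched below as a hypothetical illustration: the controller demands a fresh supervisory heartbeat every cycle and falls back to a safe stop when the supervisor goes silent, so the stop still works when the higher-level system has failed — unlike revoking an API key, which presupposes a functioning supervisor.

```python
import time

class Watchdog:
    """Fail-safe default: motion is permitted only while heartbeats from
    the supervisory system keep arriving within the timeout."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        """Called periodically by the supervisory system while healthy."""
        self.last_beat = time.monotonic()

    def motion_permitted(self) -> bool:
        """Checked by the controller on every control cycle; silence
        longer than the timeout triggers a safe stop."""
        return (time.monotonic() - self.last_beat) < self.timeout_s

wd = Watchdog(timeout_s=0.5)
wd.beat()
print(wd.motion_permitted())   # True right after a heartbeat
time.sleep(0.6)
print(wd.motion_permitted())   # False once the supervisor is silent
```

The design inverts the default: instead of "run until explicitly stopped", the system must continuously earn permission to keep moving.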

The organisational maturity gap

McKinsey's 2026 AI trust research found that only about one-third of organisations reported maturity levels of 3 or higher — on a scale of 1 to 5 — in AI strategy, AI governance, and agentic AI governance, even as those same companies deploy increasingly autonomous systems.

This gap is particularly significant in the context of physical AI, where governance immaturity translates into not just operational risk but potential physical safety consequences. Frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 provide structures for managing AI risk across the system lifecycle — but adapting them to robotic environments requires additional work: accounting simultaneously for model behaviour, connected machines, and the operating environment.

Google DeepMind has been building out its physical AI ecosystem through partnerships. In March 2025, the company announced collaboration with Apptronik on humanoid robots using Gemini 2.0, and named Boston Dynamics, Agility Robotics, Agile Robots, and Enchanted Tools as trusted testers for Gemini Robotics-ER. The April 2026 update described tasks with Boston Dynamics including instrument reading — a use case that depends on visual scene understanding and reliable assessment of physical conditions.

Why this matters

Physical AI is not a niche category — it is the direction in which most industrial robotics, logistics, and infrastructure inspection applications are heading. The integration of language and vision models directly into robot control systems changes the nature of the governance problem: where deterministic systems could be audited rule by rule, the behaviour of probabilistic models in changing physical environments must be overseen continuously.

The availability of models like Gemini Robotics-ER 1.6 through public APIs lowers the barrier to entry for developers but does not reduce safety and compliance requirements. The absence of mature governance frameworks — as the number of autonomous physical systems grows — creates a widening gap between technological capability and organisational readiness. Companies that ignore this gap risk not only operational incidents but also increasing regulatory pressure: the EU AI Act covers high-risk systems, a category that encompasses many robotic applications.

What's next?

  • Gemini Robotics-ER 1.6 remains in preview — tracking the full release schedule through the Gemini API will indicate when the technology moves into production readiness.
  • Ongoing deployments with Boston Dynamics and Apptronik will be the first scalability tests for physical AI governance in real-world conditions.
  • Industry governance frameworks (NIST RMF, ISO/IEC 42001) need extension with modules specific to robotic systems — an area where standardisation initiatives are expected in 2026–2027.
