We are excited to announce that Priya Shivakumar is joining us as Chief Product Officer, overseeing Archetype’s product strategy as we expand Physical AI deployments for enterprise customers worldwide. Priya brings over 15 years of experience leading product strategy and execution across AI, cloud, and enterprise software at Lightning AI, Confluent, and VMware.
Priya’s appointment comes as Archetype enters its next phase of growth and product expansion. In November 2025, we launched the Archetype platform and introduced Physical Agents: ready-to-deploy applications that bring real-time intelligence to factories, buildings, and infrastructure worldwide. From monitoring safety to anticipating failures in complex physical systems, our Physical Agents turn raw data into operational decisions, so reliability, operations, and engineering teams can act on real-world signals without needing to become machine-learning experts.
We asked Priya a few questions about what customers and developers can expect from Archetype in 2026.
Newton is a powerful foundation model, but models alone don’t solve problems. What role does the platform play in making Physical AI actually usable for enterprise teams?
Priya: A foundation model on its own doesn’t change how work gets done. The platform is what makes Physical AI operational by handling real-world constraints like latency, scale, data governance, and system integration so teams don’t have to. It bridges the gap between what Newton understands and how operators actually work, turning sensor data into decisions that hold up in production, not just in demos.
What does it take for an enterprise to move from “interesting pilot” to “this runs our operations”?
Priya: Physical AI only matters if it can actually be deployed where the data and decisions live. In 2025, the conversation shifted from impressive demos to real questions about scale, latency, and control. In 2026, the winners will be teams that can choose how and where AI runs based on their infrastructure and business needs, and evolve that choice as operations grow.
What does it mean to “get Physical AI right” in 2026? What’s the bar?
Priya: Getting Physical AI right means moving beyond novelty into reliability. The bar is no longer “does it work in a demo?” but “can it run continuously, adapt to real environments, and earn trust in critical operations?” Success in 2026 will be defined by robustness, scalability, and the ability to operate across real-world constraints without constant re-engineering or overspending on ML tools that don’t deliver scalable outcomes.
What’s your framework for deciding what the product should do versus what the model should do?
Priya: Models should focus on understanding the physical world: learning from sensor data, for example, and uncovering patterns humans can’t easily see. The product’s role is to make that understanding actionable: deciding how insights surface, how decisions are triggered, and how humans stay in the loop. Clear separation here is what allows both the model and the platform to scale independently and evolve faster.
LLMs and Physical AI are fundamentally different technologies. How does that difference shape product strategy for Physical AI?
Priya: LLMs operate in a largely digital, text-first world. Physical AI operates in environments defined by time, sensors, and physical constraints. That difference changes everything about product strategy from deployment models to interfaces. Physical AI products need to live inside real environments, respond in real time, and integrate with existing infrastructure, rather than sitting behind a chat interface or being limited to robots as the sole form of spatial intelligence.
Physical AI is changing how organizations understand and operate in the real world. Priya’s track record of turning complex platforms into enterprise-ready products makes her the right leader to bring that intelligence to more customers, faster.




