Why I’m Writing About the Future of Robotics
Exploring the challenges, innovations, and ideas driving the next wave of automation
Hi there,
Welcome to the first edition of Built for the Future. This newsletter is all about deep-tech robotics—exploring the challenges, breakthroughs, and big ideas shaping the future of automation. Whether you’re an engineer, researcher, or just fascinated by cutting-edge technology, my goal is to provide you with insights that are both practical and thought-provoking.
This inaugural post sets the stage: why I’m starting this newsletter, the critical challenges robotics is addressing today, and my perspective on where the field is headed. Let’s jump in.
What Drives This Exploration
At silana, we’ve been tackling one of robotics’ hardest problems: enabling machines to handle soft materials, like textiles, with precision.
Why is this so difficult? Unlike rigid objects, textiles behave unpredictably: they stretch, wrinkle, and deform under even slight forces. Traditional robotic systems, optimized for rigid, repetitive tasks, fail when faced with the variability of soft materials. The key is moving beyond fixed programming to systems that can sense, predict, and adapt in real time.
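To make the sense-predict-adapt idea concrete, here is a toy version of such a control cycle. Everything in it is illustrative: the fabric state representation, the forward model, and the gains are invented for this sketch and are not a description of any real system.

```python
# Hypothetical sense-predict-adapt loop for soft-material handling.
# All names, models, and constants are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class FabricState:
    """Simplified state of a fabric patch."""
    stretch: float  # relative elongation (1.0 = rest length)
    wrinkle: float  # 0.0 = flat, 1.0 = heavily wrinkled

def sense(raw_depth: list[float]) -> FabricState:
    """Estimate fabric state from (mock) depth-sensor readings."""
    mean = sum(raw_depth) / len(raw_depth)
    variance = sum((d - mean) ** 2 for d in raw_depth) / len(raw_depth)
    return FabricState(stretch=mean, wrinkle=min(variance, 1.0))

def predict(state: FabricState, grip_force: float) -> FabricState:
    """Naive forward model: more force means more stretch, fewer wrinkles."""
    return FabricState(
        stretch=state.stretch + 0.1 * grip_force,
        wrinkle=max(0.0, state.wrinkle - 0.2 * grip_force),
    )

def adapt(state: FabricState, target_stretch: float, force: float) -> float:
    """Proportional correction of grip force toward a target stretch."""
    return force - 0.5 * (state.stretch - target_stretch)

# One iteration of the loop.
force = 1.0
state = sense([0.9, 1.1, 1.0, 1.2])
state = predict(state, force)
force = adapt(state, target_stretch=1.0, force=force)
```

The point of the sketch is the loop structure, not the physics: a real system replaces each of these three stubs with learned perception, a learned or simulated deformation model, and a full controller.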
Recent advancements in research underscore the need for interdisciplinary approaches to this problem:
Deformable Object Simulation: Physics-based models, such as those pioneered by MIT’s Interactive Materials Group, are helping robots better predict how textiles respond to manipulation.
Tactile AI: Innovations like Google DeepMind’s tactile reinforcement learning demonstrate the potential for teaching robots to “feel” and respond dynamically to soft objects.
Soft Robotics: Flexible grippers, such as those developed at Harvard’s Wyss Institute, show how material compliance can improve the grasping of delicate fabrics.
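For a flavor of what physics-based deformable simulation involves, here is a deliberately tiny 1-D mass-spring chain, the simplest ancestor of the cloth models used in this research. All parameters are made up for illustration; real cloth simulators use 2-D meshes, bending and shear springs, and far more careful integrators.

```python
# Minimal 1-D mass-spring chain: a toy deformable-object model.
# Think of it as one thread of fabric hanging from a gripped corner.
K = 50.0       # spring stiffness (N/m), illustrative
DAMPING = 0.5  # velocity damping rate (1/s)
MASS = 0.1     # kg per node
REST = 0.1     # rest length between nodes (m)
G = -9.81      # gravity (m/s^2)
DT = 0.001     # time step (s)

# Vertical chain of 5 nodes; node 0 is pinned (the gripped corner).
pos = [-REST * i for i in range(5)]
vel = [0.0] * 5

def step(pos, vel):
    """One semi-implicit Euler step of the chain dynamics."""
    forces = [MASS * G for _ in pos]
    for i in range(len(pos) - 1):
        # Hooke's law on the spring between node i and node i+1.
        ext = (pos[i] - pos[i + 1]) - REST
        f = K * ext
        forces[i] -= f      # stretched spring pulls node i down
        forces[i + 1] += f  # ... and node i+1 up
    new_pos, new_vel = pos[:], vel[:]
    for i in range(1, len(pos)):  # node 0 stays pinned
        new_vel[i] = (vel[i] + DT * forces[i] / MASS) * (1 - DAMPING * DT)
        new_pos[i] = pos[i] + DT * new_vel[i]
    return new_pos, new_vel

for _ in range(5000):
    pos, vel = step(pos, vel)
# After settling, each spring is stretched in proportion to the weight
# hanging below it, so the chain ends up longer than its rest length.
```

Even this toy shows why textiles are hard: the state is high-dimensional, the dynamics couple every node to its neighbors, and a controller only ever touches a few points on the boundary.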
Building on these insights, we’ve developed an early-stage handling and perception system that allows robots to manipulate textiles by dynamically adapting to their movements. This marks a significant step toward automating industries like fashion and healthcare, where soft material handling has long been a barrier.
This breakthrough isn’t just about solving a single task—it offers a glimpse into the future of robotics. It has shaped our vision of creating systems that adapt, learn, and collaborate with their surroundings to overcome high-variability challenges in real-world environments.
Robotics Today: At the Crossroads of Intelligence and Action
We are witnessing a transformative era in robotics, driven by the convergence of cutting-edge trends that bridge intelligence and physical action. Three key developments are shaping the future of this field:
1. Foundation Models for Robotics
Inspired by advanced AI architectures like GPT-4, foundation models are revolutionizing robotics by enabling learning that generalizes across tasks. These systems let robots apply knowledge to new scenarios, reducing the need for task-specific programming.
For instance, research initiatives such as Google’s RT-1 Robotics Transformer illustrate how robots can transfer skills to new tasks and objects without retraining. This approach marks a significant departure from narrow, single-purpose applications, ushering in an era of multi-task, adaptable robots.
Recent Development: Physical Intelligence’s π0 (Pi-Zero)
Physical Intelligence recently announced π0 (Pi-Zero), a general-purpose AI foundation model for robots. Built on the PaliGemma vision-language model, Pi-Zero was trained on data from 7 robots performing 68 tasks, as well as the Open X-Embodiment dataset. It outperformed baseline models such as OpenVLA and Octo on tasks like folding laundry and bussing a table.
Pi-Zero’s architecture combines vision, robot joint data, and language commands to generate action tokens. It includes an “action expert” module for robot-specific tasks and can decompose high-level commands into simpler steps, improving performance on complex tasks like setting a table.
Described as a GPT-1 moment for robotics, Pi-Zero exemplifies the potential of large-scale models to merge semantic understanding with diverse robot data, paving the way for advanced dexterity and adaptability in robots.
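The input/output structure of such models can be sketched abstractly. The toy code below shows only the shape of the vision-language-action mapping (observations plus a command in, a chunk of future actions out); it is not the Pi-Zero architecture or API, and every name and number in it is invented.

```python
# Toy sketch of the vision-language-action (VLA) pattern: observations
# plus a language command map to a chunk of actions. Illustrative only.
from dataclasses import dataclass

@dataclass
class Observation:
    image_embedding: list[float]  # stand-in for a vision-encoder output
    joint_angles: list[float]     # current robot joint state (radians)

def encode_command(command: str) -> list[float]:
    """Stand-in text encoder: one small feature per word."""
    return [sum(map(ord, w)) % 1000 / 1000.0 for w in command.lower().split()]

def policy(obs: Observation, command: str, horizon: int = 4) -> list[list[float]]:
    """Toy 'action expert': emit a chunk of future joint deltas.

    A real VLA model learns this mapping end to end; here we just nudge
    every joint toward zero, scaled by command length, to show the I/O shape.
    """
    cmd = encode_command(command)
    scale = 0.01 * len(cmd)
    return [[-scale * a for a in obs.joint_angles] for _ in range(horizon)]

obs = Observation(image_embedding=[0.2, 0.8], joint_angles=[0.5, -0.3, 1.0])
chunk = policy(obs, "fold the towel")
# chunk is a list of `horizon` joint-delta vectors, one per control step.
```

The interesting design choice this mimics is chunked action output: predicting several future steps at once, rather than a single action, is one way such models achieve smooth, dexterous motion.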
2. Embodied Intelligence
Embodied intelligence refers to a robot’s capability to navigate, manipulate, and adapt effectively within its physical environment. This includes precision in motion, handling of complex objects, and resilience in dynamic, unstructured settings.
Pioneering examples include Boston Dynamics’ Atlas robot, which showcases remarkable balance and mobility. Similarly, startups like Figure AI are integrating advanced AI with physical control systems for tasks ranging from industrial automation to delicate material handling. These advancements are crucial for deploying robots in real-world environments where variability is the norm.
3. Real-Time Perception and Decision-Making
Real-time perception systems, driven by advancements in computer vision and sensor fusion, enable robots to interpret and act on their surroundings with unprecedented precision.
Projects such as NVIDIA’s Isaac Sim exemplify this trend, offering simulation environments where robots can train and refine their perception systems. Combining these capabilities with cutting-edge machine learning frameworks enhances adaptability in tasks like seam detection in textiles or assessing product quality in manufacturing.
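At the heart of sensor fusion is weighting each source by its reliability. As an assumed, minimal example, here is the standard one-dimensional variance-weighted (Kalman-style) update, with made-up numbers for a seam-position estimate:

```python
# Minimal variance-weighted fusion of two noisy estimates of the same
# quantity, e.g. a seam position from a camera and from a tactile sensor.
# This is the 1-D Kalman measurement update; the numbers are invented.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two Gaussian estimates; the less noisy sensor gets more weight."""
    w = var_b / (var_a + var_b)              # weight on sensor A
    est = w * est_a + (1 - w) * est_b
    var = (var_a * var_b) / (var_a + var_b)  # fused variance always shrinks
    return est, var

# Camera says the seam is at 10.0 mm (noisy, variance 4.0);
# tactile says 10.4 mm (precise, variance 1.0).
est, var = fuse(10.0, 4.0, 10.4, 1.0)
# The fused estimate lies closer to the tactile reading, with lower
# variance than either sensor alone.
```

This is the simplest possible case, but it captures why fusion matters for robots in unstructured settings: no single sensor is trustworthy everywhere, and combining them yields an estimate better than either one.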
A New Paradigm for Robotics
These trends collectively signify a paradigm shift for robotics, moving from rigid automation to systems capable of intelligence, adaptability, and collaboration. This fusion of technology and physical action is not just advancing the field—it’s redefining what robots can achieve in industries as diverse as fashion, healthcare, and manufacturing.
Why This Matters
These trends aren’t just about building better machines—they’re about fundamentally rethinking what robots can do. We’re moving toward a world where robots won’t just work in factories or labs; they’ll collaborate with humans in dynamic environments, solving problems that require adaptability and creativity.
This shift has profound implications:
For Industry: Robots that can adapt to variability will unlock automation in industries like textiles, agriculture, and healthcare—areas traditionally resistant to robotics.
For Society: As robots become more versatile, they’ll free humans from repetitive, unsafe tasks, allowing us to focus on more meaningful work.
For Innovation: The fusion of AI and robotics will push the boundaries of what’s possible, from humanoids that assist in everyday life to machines that perform tasks we can’t yet imagine.
In future posts, I’ll dive deeper into topics like:
How foundation models are reshaping robotics.
The technical challenges of building robots for unpredictable environments.
Practical insights from building a robotics startup.
In the next issue, I’ll explore how foundation models are enabling robots to move beyond rigid programming, making them adaptable to real-world complexity.
Join the Journey
Robotics is hard, messy, and incredibly rewarding—and I want to share that journey with you. If this resonates with you, I’d love for you to subscribe and be part of this exploration.
Let’s build the future together.
— Anton
P.S.
What’s the biggest challenge you think robotics faces today? Reply to this email—I’d love to hear your thoughts!