I am excited to share a groundbreaking development in autonomous robotics. At Ottonomy, we have unveiled Contextual AI 2.0, a transformative leap in robotic intelligence that brings general-intelligence capabilities to the edge. This innovation is not just about improving autonomous delivery; it is about redefining how robots perceive, understand, and interact with the world around them.
Contextual AI 2.0 is the next evolution of autonomous decision-making, powered by Vision-Language Models (VLMs) running directly on Ambarella's N1 system-on-chips (SoCs). Unlike traditional AI systems that rely on pre-programmed rules or cloud-based processing, Contextual AI 2.0 enables robots to process and interpret complex environments in real time, entirely on the edge. This means our robots can understand context, make intelligent decisions, and adapt to dynamic situations without relying on external infrastructure.
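To make the idea concrete, here is a minimal sketch of how a VLM's free-text scene description might be mapped to a high-level robot action. Ottonomy's actual model, prompts, and action set are not public, so every name here is hypothetical and the keyword rules merely stand in for a real on-device policy:

```python
# Hypothetical sketch: turning an on-device VLM's scene description into a
# navigation decision. The VLM inference itself is assumed to have already
# produced the text; only the decision mapping is shown.

def decide_action(scene_description: str) -> str:
    """Map a free-text scene description to a high-level action.

    Keyword matching is an illustrative stand-in for a real policy.
    """
    text = scene_description.lower()
    if "person" in text or "pedestrian" in text:
        return "yield"           # give way to people in the robot's path
    if "blocked" in text or "obstacle" in text:
        return "replan_route"    # look for an alternative path
    if "crosswalk" in text:
        return "stop_and_check"  # pause and re-assess before crossing
    return "proceed"

# Example: a description a VLM might emit for a sidewalk scene.
print(decide_action("A pedestrian is crossing ahead near a parked cart."))
```

Because the description is natural language rather than a fixed class label, the same decision layer can handle objects and situations the robot was never explicitly trained to categorize.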
One of the most exciting aspects of Contextual AI 2.0 is its ability to bring general intelligence to autonomous robots. General intelligence refers to a robot's capacity to perform a wide range of tasks in diverse environments, much like a human would. With VLMs, our Ottobots can now understand context, make intelligent decisions, and adapt to dynamic situations across diverse environments.
This level of intelligence is a significant step forward in making autonomous robots more versatile, reliable, and capable of handling the complexities of real-world scenarios.
A key enabler of Contextual AI 2.0 is its ability to run VLMs on the edge. Edge computing eliminates the need for constant cloud connectivity, reducing latency and ensuring that our robots can operate efficiently even in areas with limited or no internet access. This not only enhances performance but also improves privacy and security, as data is processed locally rather than being transmitted to external servers.
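The edge-first design described above can be sketched as a control loop in which perception always runs locally, so the decision never blocks on a network round trip and connectivity is used only for optional telemetry. All function names, the scene labels, and the telemetry hook below are illustrative assumptions, not Ottonomy's actual software:

```python
# Illustrative sketch of an edge-first control step: inference happens on
# the local SoC, and cloud connectivity is never required for the decision.

import time

def perceive_locally(frame: bytes) -> str:
    """Stand-in for on-SoC VLM inference; returns a scene label."""
    return "clear_path" if frame else "unknown"

def control_step(frame: bytes, cloud_online: bool) -> dict:
    start = time.perf_counter()
    scene = perceive_locally(frame)  # always local: no network round trip
    action = "proceed" if scene == "clear_path" else "stop"
    latency_ms = (time.perf_counter() - start) * 1000
    if cloud_online:
        pass  # hypothetical telemetry upload; never gates the decision
    return {"action": action, "latency_ms": latency_ms}

# The robot keeps operating even with no connectivity at all.
result = control_step(b"\x00" * 16, cloud_online=False)
print(result["action"])
```

Keeping raw sensor data on the device is also what yields the privacy benefit: only the decision (and, optionally, summary telemetry) ever leaves the robot.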
The integration of general intelligence and edge computing into autonomous robots has far-reaching implications for how robots are deployed and operated in the real world.
At Ottonomy, we believe that the future of robotics lies in creating systems that are not only autonomous but also intelligent and adaptable. Contextual AI 2.0 is a testament to our commitment to innovation and our vision of a world where robots seamlessly integrate into everyday life. As we continue to refine this technology, we are excited to collaborate with businesses, communities, and partners to bring the benefits of autonomous robots to more people.
Intelligence across use cases and Operational Design Domains (ODDs)
The era of general intelligence in robotics is here, and it’s transforming the way we think about automation. Together, let’s build a future where robots are not just tools but intelligent partners in our daily lives.
Join me and Ottonomy as we pioneer the next generation of autonomous robots. Let’s shape the future together!