Mimus is a giant industrial robot that's curious about the world around her. Unlike in traditional industrial robotics, Mimus has no pre-planned movements: she is programmed with the freedom to explore and roam about her enclosure. Mimus has no eyes, however — she uses sensors embedded in the ceiling to see everyone around her simultaneously. If she finds you interesting, Mimus may come in for a closer look and follow you around. But her attention span is limited: if you stay still for too long, she will get bored and seek out someone else to investigate.

Our interactive installation responds to a commonly cited social fear: robots are taking work from humans. However, we believe in a more optimistic future, where robots do not replace our humanity, but instead amplify and expand it. Ordinarily, robots like Mimus are completely segregated from humans as they do highly repetitive tasks on a production line. With Mimus, we illustrate how wrapping clever software around industry-standard hardware can completely reconfigure our relationship to these complex, and often dangerous, machines. Rather than view robots as a human adversary, we demonstrate a future where autonomous machines, like Mimus, might be companions that co-exist with us on this planet.

Industrial robots are the foundation of our robotic infrastructure, and they have remained relatively unchanged for the past 50 years. With Mimus, we highlight an untapped potential for this old industrial technology to work with people, not against them. Our software illustrates how small, strategic changes to an automation system can take a one-ton beast of a machine from spot welding car chassis in a factory, to curiously following a child around a museum like an excited puppy. We hope to show that despite our collective anxieties surrounding robotics, there is the potential for empathy and companionship between humans and machines.

Robots are creatures, not things

Every aspect of Mimus — from the interaction design to her physical environment — is designed for visitors to forget they are looking at a machine, and instead see her as a living creature. This lets us use the robot’s body language and posturing to broadcast a spectrum of emotional states to visitors: when Mimus sees you from far away, she looks down at you using a fairly intimidating pose, like a bear standing on its hind legs; when you walk closer to her, Mimus approaches you from below, like a dog that is excited to see you.
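At its simplest, this posturing can be read as a mapping from a visitor's distance to a pose. The sketch below is purely illustrative: the thresholds and pose names are hypothetical, not taken from Mimus's actual code.

```cpp
#include <string>

// Hypothetical sketch: choose a posture from a visitor's distance.
// Thresholds and pose names are made up for illustration.
std::string choosePose(float distanceMeters) {
    if (distanceMeters > 4.0f)
        return "rear-up";   // far away: look down, like a bear on its hind legs
    if (distanceMeters > 1.5f)
        return "close-in";  // mid-range: move in for a closer look
    return "crouch";        // nearby: approach from below, like an excited dog
}
```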

When something responds to us with lifelike movements — even when it is clearly an inanimate object — we cannot help but project our emotions onto it. For Mimus, her body language acts as a medium for cultivating empathy between museum goers and a piece of industrial machinery. This primitive, yet fluid, means of communication equips visitors with an innate understanding of the behaviors, kinematics, and limitations of a robot. Mimus’ movements may not always be predictable, but they are always comprehensible to the people around her.

From Dominance to Co-existence

Our current model for robotics and automation primarily consists of systems for optimization and control: we tell the robots what to do, and they do it to maximum effectiveness. This human-robot relationship has served us very well, and over the past 50 years robotic automation has led to unprecedented innovation and productivity in agriculture, medicine, and manufacturing.

However, we are reaching an inflection point. Rapid advancements in machine learning and artificial intelligence are making our robotic systems smarter and more adaptable than ever, but these advancements also inherently weaken our direct control over, and relevance to, autonomous machines. Similarly, robotic manufacturing, despite its benefits, comes at a great human cost: the World Economic Forum predicts that the rapid growth of robotics in global manufacturing will place the livelihoods of 5 million people at stake by 2020. What should be clear by now is that the robots are here to stay. So rather than continue down the path of optimizing our own obsolescence, now is the time to rethink how humans and robots are going to co-exist on this planet. What is needed now is not better, faster, or smarter robots, but an opportunity for us to pool our collective ingenuity, intelligence, and relentless optimism to invent new ways for robots to amplify our own human capabilities.

Image by ATONATON, LLC. / Autodesk, Inc.

As we go from operating robots to cohabitating with them, one of the biggest challenges we face is in communicating with these machines. Take, for example, autonomous vehicles. Currently, there is no way for a pedestrian to read the intentions of a driverless car, and this lack of legibility can lead to disastrous results. As non-humanoid, autonomous robots become increasingly prevalent in our daily lives — like drones, cars, trucks, and robotic co-workers — we will need more effective ways of communicating with these machines.

Implementation Details

Mimus uses three layers of custom-built software to transform an ABB IRB 6700 industrial robot into a living, breathing mechanical creature. The first layer handles all the data streaming from the eight depth sensors embedded in the ceiling. Our software stitches together depth data from each individual sensor to form a single point cloud of the perimeter around the robot enclosure. This unified point cloud provides the 3D information needed to do low-latency people tracking and basic gesture detection. The sensor array has an effective tracking area of approximately 45m², tracking from 500 millimeters to 2.2 meters (around 18” to 7’) in height. This portion of the codebase was developed in openFrameworks, a C++-based open-source arts-engineering toolkit.
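As a rough sketch of the stitching step, each sensor's points can be transformed by a calibrated pose into a shared world frame and merged into one cloud. The types, calibration poses, and height filter below are illustrative assumptions, not the installation's openFrameworks code.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of the sensor-fusion step, assuming each ceiling sensor
// reports points in its own coordinate frame. The sensor poses are
// hypothetical calibration values.
struct Vec3 { float x, y, z; };

struct SensorPose {
    float r[9];  // 3x3 rotation matrix, row-major
    Vec3  t;     // translation of the sensor in world (enclosure) space
};

// Transform one sensor-local point into the shared world frame.
Vec3 toWorld(const SensorPose& p, const Vec3& v) {
    return {
        p.r[0]*v.x + p.r[1]*v.y + p.r[2]*v.z + p.t.x,
        p.r[3]*v.x + p.r[4]*v.y + p.r[5]*v.z + p.t.y,
        p.r[6]*v.x + p.r[7]*v.y + p.r[8]*v.z + p.t.z
    };
}

// Merge the eight per-sensor clouds into one unified point cloud,
// keeping only points inside the tracked height band (0.5 m to 2.2 m).
std::vector<Vec3> stitch(const std::vector<std::vector<Vec3>>& clouds,
                         const std::vector<SensorPose>& poses) {
    std::vector<Vec3> merged;
    for (std::size_t i = 0; i < clouds.size(); ++i) {
        for (const Vec3& v : clouds[i]) {
            Vec3 w = toWorld(poses[i], v);
            if (w.z >= 0.5f && w.z <= 2.2f) merged.push_back(w);
        }
    }
    return merged;
}
```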

Each detected person is tracked and assigned attributes as they move around the space. Some attributes are explicit — like position, age, proximity, height, and area — and others are implicit — like activity level and engagement level. Mimus uses these attributes to find the "most interesting person" in her view. Our software dynamically weights these attributes so that, for example, on one day Mimus may favor people with lower heights (e.g., kids) and on another day, Mimus may favor people with the greatest age (i.e., people who have been at the installation the longest). Once a person grabs Mimus's attention, they have to work to keep it: once they are no longer the most interesting person, Mimus will get bored and go find someone else to investigate.
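A minimal sketch of this selection might combine each person's attributes into a single weighted score and pick the maximum. The attribute set mirrors the text, but the weights, signs, and scoring formula here are hypothetical.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch of "most interesting person" selection: each
// tracked person's attributes are folded into one score using weights
// that can be retuned from day to day.
struct Person {
    int   id;
    float age;        // seconds since this person was first tracked
    float proximity;  // distance to the robot enclosure, in meters
    float height;     // estimated height, in meters
    float activity;   // how much they have been moving recently
};

struct Weights {
    float age = 0.2f, proximity = 0.4f, height = 0.1f, activity = 0.3f;
};

float score(const Person& p, const Weights& w) {
    // Closer and more active people score higher; flipping the sign of
    // the height weight would shift Mimus's preference toward children.
    return w.age * p.age - w.proximity * p.proximity
         + w.height * p.height + w.activity * p.activity;
}

const Person* mostInteresting(const std::vector<Person>& people, const Weights& w) {
    auto it = std::max_element(people.begin(), people.end(),
        [&](const Person& a, const Person& b) { return score(a, w) < score(b, w); });
    return it == people.end() ? nullptr : &*it;
}
```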

The second software layer runs directly on the robot’s onboard computer. Written in RAPID, the programming language used to control ABB industrial robots, this program simply listens for specific movement commands sent by our PC. Once a command is received and parsed, the robot physically moves to the position it was given. The final software layer acts as a bridge between our sensing software and the robot. It enforces hard limits and checks to ensure that the robot can’t move into potentially damaging positions, but otherwise Mimus is free to roam within the limits that are set for her.
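The safety checks in this bridge layer could look something like the clamp below, assuming Cartesian targets are streamed from the PC to the RAPID listener. The envelope bounds and types are placeholders, not the real limits used in the installation.

```cpp
#include <algorithm>

// Minimal sketch of the bridge layer's safety check: clamp every
// requested target into a safe envelope before it is ever sent to the
// robot, so Mimus can roam freely but never reach a damaging position.
struct Target { float x, y, z; };

struct Envelope {
    float minX, maxX, minY, maxY, minZ, maxZ;
};

Target clampToEnvelope(Target t, const Envelope& e) {
    t.x = std::clamp(t.x, e.minX, e.maxX);
    t.y = std::clamp(t.y, e.minY, e.maxY);
    t.z = std::clamp(t.z, e.minZ, e.maxZ);
    return t;
}
```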

Physical Design

We approached the physical design of the installation as if we were bringing a wild animal into a museum gallery. And as is the case for zoos and menageries, the design of the installation is two-fold: the staging and enclosure for the creature, and its interactions with visitors. For the physical design of the installation, the challenge was to integrate the necessary safety and sensing infrastructure in a way that still facilitated awe, wonder, and spectacle when visitors interact with our robot.


Project Credits

Mimus was commissioned by The Design Museum in June 2016 for their inaugural exhibition, Fear and Love: Reactions to a Complex World. She will be living at the museum for five months: from November 24th, 2016 to April 23rd, 2017.

Development Team: Madeline Gannon, Kevyn McPhail, Ben Snell

Sponsors: Autodesk, Inc., ABB Ltd., The Frank-Ratchye Studio for Creative Inquiry 

Fear and Love Catalog

Press Kit