Manus

Exploring Pack Behaviors in Autonomous Robots


Manus is a set of ten industrial robots that are programmed to behave like a pack of animals. While each robot moves independently, they share the same central brain. So instead of acting in isolation, they have intertwined behaviors that ripple through the group as people walk by. Manus uses 12 depth cameras to detect and track the body language of approaching visitors. The robots then use this depth data to decide who to move towards, and whether to look at their hands or face. Like many of today’s intelligent autonomous machines, the robots in Manus don’t look like us and don’t act like us, yet they can still connect with us in meaningful ways.

In a world of robotic autonomy, Manus explores more desirable ways for humans and intelligent machines to co-exist with one another. Instead of programming robots to look and act more like humans, we show the many benefits of allowing robots to move and behave in a manner that is native to their own bodies. For one, embracing the visual and kinematic constraints of a distinctively non-humanoid robot helps avoid the “Uncanny Valley”: the cognitive dissonance felt when interacting with almost-human machines. The robots in Manus avoid this common pitfall by being more animalistic than anthropomorphic: their primal behaviors help people become acutely aware of subtle cues from their familiar yet alien body language.

Manus’ animalistic behaviors provide baseline transparency between what the robots are thinking and what they are about to do. They broadcast subtle, non-verbal cues that make their intentions and motivations more legible to the people around them. Simple, subtle design decisions make a big impact: for example, when the robots notice a new point of interest, they look towards it before moving. The robots will also relax their bodies over time, as if they get tired from carrying the weight of their own bodies. And they are almost always moving: they never hold a pose too long before shifting their weight. These behaviors are completely unnecessary from the perspective of the robot: these machines can hold heavy payloads cantilevered from their bodies indefinitely, without any extra strain or effort. However, they are essential for broadcasting a continuous stream of useful, low-level information on a frequency that humans can’t turn off or ignore. From a person’s perspective, they now have a way of knowing whether or not the robot sees them, whether they have its attention, or whether there’s something more interesting in its field of view.
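In code, these rules reduce to a thin behavior layer sitting on top of the motion planner. The sketch below is a minimal illustration of that idea only; it is not the installation’s actual implementation, and every name and constant in it is hypothetical. It captures the three behaviors described above: look toward a new point of interest before moving the body, keep a subtle idle sway running at all times, and slowly relax the posture as the robot “tires.”

// Illustrative sketch only (not Manus' actual code): look-before-move,
// constant idle motion, and gradual relaxation. All values are made up.
#include <algorithm>
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct RobotBehavior {
    Vec3  pointOfInterest;      // where the robot's gaze currently points
    float gazeProgress = 1.0f;  // 0..1, how far the look-at motion has gone
    float idlePhase    = 0.0f;  // drives the constant, subtle weight shifting
    float fatigue      = 0.0f;  // grows over time, relaxing the posture

    // A new point of interest resets the gaze; the body follows only after
    // the "look" has mostly completed.
    void setPointOfInterest(const Vec3& p) {
        pointOfInterest = p;
        gazeProgress    = 0.0f;
    }

    bool readyToMoveBody() const { return gazeProgress > 0.8f; }

    void update(float dt) {
        gazeProgress = std::min(1.0f, gazeProgress + dt * 2.0f);  // look first
        idlePhase   += dt;                                         // never freeze
        fatigue      = std::min(1.0f, fatigue + dt * 0.01f);       // slowly tire
    }

    // Small sinusoidal sway layered on top of any pose, scaled down as the
    // robot "tires" so the whole pack visibly relaxes over time.
    float idleSwayOffset() const {
        return 0.02f * (1.0f - 0.5f * fatigue) * std::sin(idlePhase * 0.7f);
    }
};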


Implementation Details

Manus features ten off-the-shelf ABB IRB 1200-5/0.9 industrial robot arms. These machines are more commonly found in factories, handling food or painting car chassis. For Manus, however, we developed custom vision and communication software to embed autonomous behaviors into these machines. Our vision system improves upon our previous system used in Mimus, and places 12 depth sensors in the base of the installation to give the robots a worm’s-eye view of the world. This gives us a 1.5 meter tracking region all the way around the base. The tracking system extracts the 3D positions of a person’s head and hands and passes them on to the robot control software.
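Conceptually, the hand-off between the vision system and the control software is small: each visitor is reduced to a handful of 3D points per frame. The sketch below shows what that per-person data might look like; it is a hypothetical illustration, and the struct and field names are ours rather than the project’s.

// Hypothetical sketch of the data the vision system hands to the robot
// control software: per-person 3D positions for the head and hands,
// fused from the 12 depth sensors around the base.
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct TrackedPerson {
    int  id;            // stable id while the person stays in view
    Vec3 head;          // meters, in the installation's coordinate frame
    Vec3 leftHand;
    Vec3 rightHand;
    bool handsVisible;  // hands can drop out of the worm's-eye view
};

// One frame of tracking data, published to the robot controller each update.
struct TrackingFrame {
    double timestamp;
    std::vector<TrackedPerson> people;  // everyone inside the ~1.5 m ring
};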

“Worm’s-eye” view from depth sensor system.


3D view from depth sensor system.

The robot control software uses tracking information from the vision system as input for interaction. Each robot decides whether and how to engage with tracked people, based on the person’s spatial relationship to the robot (their body language). If a person of interest is too far away, a robot may decide it is not interested enough to look at them. Conversely, if a person is very close, robots may change their gaze to look at the person’s hands or head.
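These engagement rules boil down to a simple per-robot distance check. The following sketch illustrates that logic with hypothetical thresholds and a hypothetical hand/head heuristic; the installation’s actual values and tuning are not reproduced here.

// Illustrative sketch of the distance-based engagement described above.
// Thresholds, names, and the hand/head choice are assumptions.
enum class GazeTarget { None, Body, Head, Hands };

// distanceToPerson: meters from this robot's base to the tracked person.
// handsAreRaised:   e.g. the person is reaching toward the enclosure.
GazeTarget chooseGaze(float distanceToPerson, bool handsAreRaised) {
    const float trackingLimit = 1.5f;  // edge of the sensed ring around the base
    const float closeRange    = 0.6f;  // near enough to pick out head vs. hands

    if (distanceToPerson > trackingLimit) return GazeTarget::None;  // not interested
    if (distanceToPerson < closeRange)                              // very close:
        return handsAreRaised ? GazeTarget::Hands : GazeTarget::Head;
    return GazeTarget::Body;  // otherwise, track the person as a whole
}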

A single PC runs the vision and robot control software, and communicates with the physical industrial robots over ethernet using ABB’s Externally Guided Motion (EGM) protocol. We built a UDP server to manage message passing between the real and virtual robots, and to better coordinate the desired versus actual poses of the ten robots. Manus’ vision and control software were all developed in openFrameworks, a C++-based open-source arts-engineering coding toolkit.
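To give a sense of what that coordination layer does, here is a hypothetical sketch of the bookkeeping such a UDP pose server might keep for the ten arms: the pose the behavior engine wants (desired) versus the pose last reported by the physical robot (actual). It does not use ABB’s actual EGM message format, and all names and fields are illustrative.

// Hypothetical sketch of desired-vs-actual pose bookkeeping for ten arms.
// Not ABB's EGM wire format; the UDP send/receive happens downstream.
#include <array>
#include <cstdint>

constexpr int kNumRobots = 10;
constexpr int kNumJoints = 6;   // the IRB 1200 is a six-axis arm

struct JointPose {
    std::array<float, kNumJoints> anglesDeg{};  // one angle per joint
};

struct RobotChannel {
    JointPose desired;         // latest target from the virtual (simulated) robot
    JointPose actual;          // latest feedback received from the real robot
    uint32_t  lastSeqSent = 0;
    uint32_t  lastSeqRecv = 0;
};

struct PoseServer {
    std::array<RobotChannel, kNumRobots> robots;

    // Called when a feedback packet arrives from robot `id` over UDP.
    void onFeedback(int id, const JointPose& reported, uint32_t seq) {
        robots[id].actual      = reported;
        robots[id].lastSeqRecv = seq;
    }

    // Called every control tick to queue the next target for robot `id`.
    JointPose nextTarget(int id, const JointPose& wanted) {
        robots[id].desired = wanted;
        robots[id].lastSeqSent++;
        return robots[id].desired;
    }
};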

Custom kinematic software for simulation, planning, and interaction design with the pack of robots.

Physical Design

The physical design of Manus engages visitors passing between two main points of entry onto a mezzanine in a great hall. Manus’ ten robot arms sit in a row along a 9 meter illuminated base. The linear layout of the installation alludes to its manufacturing roots and the assembly line. However, we made a number of design decisions to help subvert people’s expectations of how these industrial robots should behave. For example, the robots are spaced very close together, within striking distance of one another, which undermines their basic utility. Moreover, there are no tool attachments placed on the ends of the robots: they are left naked, just as they were born from the factory. The LED top surface of the base is also a visibly delicate material that would never be used as a work surface. This interior lighting combines with Manus’s clear acrylic enclosure to visually amplify the number of robots in the installation, adding mirrored counterparts in its many reflections. The enclosure also provides a necessary layer of separation between visitors and the pack of robots.

Gallery

The following photographs were taken at the World Economic Forum’s 2018 Annual Meeting of New Champions in Tianjin, China.


Project Credits

Manus was commissioned by The World Economic Forum for their 2018 Annual Meeting of New Champions in Tianjin, China.

Development Team: Madeline Gannon, Kevyn McPhail, Ben Snell

Sponsors: NVIDIA and ABB Ltd.

Made with love in openFrameworks, with help from the following contributor addons: ofxCv, ofxEasing, ofxGizmo, ofxOneEuroFilter


