Why humanoid robots need their own safety rules

Last year, a humanoid warehouse robot named Digit set to work handling boxes of Spanx. Digit can move boxes of up to 16 kilograms between trolleys and conveyor belts, taking over some of the heavier work from its human colleagues. It works in a restricted, defined area, separated from human workers by physical panels or laser barriers. That’s because while Digit is usually steady on its robot legs, which have a distinctive backwards knee-bend, it sometimes falls. At a trade show in March, for example, it appeared to be capably shifting boxes until it suddenly collapsed, face-planting on the concrete floor and dropping the container it was carrying.

The risk of that sort of malfunction happening around people is pretty scary. No one wants a 1.8-meter-tall, 65-kilogram machine toppling onto them, or a robot arm accidentally smashing into a sensitive body part. “Your throat is a good example,” says Pras Velagapudi, chief technology officer of Agility Robotics, Digit’s manufacturer. “If a robot were to hit it, even with a fraction of the force that it would need to carry a 50-pound tote, it could seriously injure a person.”

Physical stability—i.e., the ability to avoid tipping over—is the No. 1 safety concern identified by a group exploring new standards for humanoid robots. The IEEE Humanoid Study Group argues that humanoids differ in key ways from other robots, such as industrial arms or existing mobile robots, and therefore require a new set of standards to protect operators, end users, and the general public. The group shared its initial findings with MIT Technology Review and plans to publish its full report later this summer. It identifies distinct challenges, including physical and psychosocial risks as well as issues such as privacy and security, that it believes standards organizations need to address before humanoids are deployed in more collaborative scenarios.

While humanoids are just taking their first tentative steps into industrial applications, the ultimate goal is to have them operating in close quarters with humans; one reason for making robots human-shaped in the first place is so they can more easily navigate the environments we’ve designed around ourselves. This means they will need to be able to share space with people, not just stay behind protective barriers. But first, they need to be safe.

One distinguishing feature of humanoids is that they are “dynamically stable,” says Aaron Prather, a director at the standards organization ASTM International and the IEEE group’s chair. This means they need power in order to stay upright; they exert force through their legs (or other limbs) to stay balanced. “In traditional robotics, if something happens, you hit the little red button, it kills the power, it stops,” Prather says. “You can’t really do that with a humanoid.” If you do, the robot will likely fall—potentially posing a bigger risk.

Slower brakes

What might a safety feature look like if it’s not an emergency stop? Agility Robotics is rolling out some new features on the latest version of Digit to try to address the toppling issue. Rather than instantly depowering (and likely falling down), the robot could decelerate more gently when, for instance, a person gets too close. “The robot basically has a fixed amount of time to try to get itself into a safe state,” Velagapudi says. Perhaps it puts down anything it’s carrying and drops to its hands and knees before powering down.
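
Agility hasn’t published how this works under the hood, but the behavior Velagapudi describes (a fixed time budget to reach a safe state, rather than an instant power cut) can be sketched as a simple fallback sequence. The Python below is a minimal illustration only; the Robot interface, the action sequence, and the two-second budget are all assumptions made for the example, not Agility’s actual design.

```python
import time

class Robot:
    """Stand-in for a real control interface; entirely hypothetical."""
    def set_down_payload(self): print("setting payload down")
    def lower_to_kneel(self):   print("lowering to hands and knees")
    def cut_power(self):        print("power off")

def controlled_stop(robot: Robot, budget_s: float = 2.0) -> None:
    """Reach a safe state within a fixed time budget instead of cutting
    power instantly, which would topple a dynamically stable robot."""
    deadline = time.monotonic() + budget_s
    # Ordered sequence of actions, each leaving the robot in a safer state.
    safe_sequence = [robot.set_down_payload, robot.lower_to_kneel]
    for action in safe_sequence:
        if time.monotonic() > deadline:
            break  # out of time: power down from whatever pose was reached
        action()
    robot.cut_power()  # always end unpowered, ideally from a low, stable pose

controlled_stop(Robot())
```

The key design point is that cutting power is the last step, not the first: each action in the sequence is skippable if the time budget runs out, so the robot always ends up unpowered, just from the safest pose it managed to reach.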

Different robots could tackle the problem in different ways. “We want to standardize the goal, not the way to get to the goal,” says Federico Vicentini, head of product safety at Boston Dynamics. Vicentini is chairing a working group at the International Organization for Standardization (ISO) to develop a new standard dedicated to the safety of industrial robots that need active control to maintain stability (experts at Agility Robotics are also involved). The idea, he says, is to set out clear safety expectations without constraining innovation on the part of robot and component manufacturers: “How to solve the problem is up to the designer.”

Trying to set universal standards while respecting freedom of design can pose challenges, however. First of all, how do you even define a humanoid robot? Does it need to have legs? Arms? A head? 

“One of our recommendations is that maybe we need to actually drop the term ‘humanoid’ altogether,” Prather says. His group advocates a classification system for humanoid robots that would take into account their capabilities, behavior, and intended use cases rather than how they look. The ISO standard Vicentini is working on refers to all industrial mobile robots “with actively controlled stability.” This would apply as much to Boston Dynamics’ dog-like quadruped Spot as to its bipedal humanoid Atlas, and could equally cover robots with wheels or some other kind of mobility.

How to speak robot

Aside from physical safety issues, humanoids pose a communication challenge. If they are to share space with people, they will need to recognize when someone’s about to cross their path and communicate their own intentions in a way everyone can understand, just as cars use brake lights and indicators to show the driver’s intent. Digit already has lights to show its status and the direction it’s traveling in, says Velagapudi, but it will need better indicators if it’s to work cooperatively, and ultimately collaboratively, with humans. 

“If Digit’s going to walk out into an aisle in front of you, you don’t want to be surprised by that,” he says. The robot could announce its intentions by voice, but audio alone isn’t practical in a loud industrial setting. And with multiple robots in the same space, things could get even more confusing: which one is trying to get your attention?
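
Standards bodies could eventually pin down a shared vocabulary of such cues, much as brake lights and turn signals are standardized across carmakers. As a purely hypothetical sketch, a minimal signal set might look like this in code; the intents, light patterns, and per-robot addressing below are invented for illustration, not drawn from any existing standard or from Digit’s actual indicators.

```python
from enum import Enum

class Intent(Enum):
    """A hypothetical shared vocabulary of robot intent signals."""
    IDLE = "solid green"
    ENTERING_AISLE = "white chevrons pulsing in the direction of travel"
    YIELDING = "slow blue pulse"     # the robot has seen you and is waiting
    CONTROLLED_STOP = "flashing red"

def announce(robot_id: str, intent: Intent) -> str:
    # Tagging each cue with a robot ID disambiguates which machine is
    # signaling when several robots share the same space.
    return f"[{robot_id}] {intent.name}: {intent.value}"

print(announce("digit-07", Intent.ENTERING_AISLE))
```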

There’s also a psychological effect that differentiates humanoids from other kinds of robots, says Prather. We naturally anthropomorphize robots that look like us, which can lead us to overestimate their abilities and get frustrated if they don’t live up to those expectations. “Sometimes you let your guard down on safety, or your expectations of what that robot can do versus reality go higher,” he says. These issues are especially problematic when robots are intended to perform roles involving emotional labor or support for vulnerable people. The IEEE report recommends that any standards should include emotional safety assessments and policies that “mitigate psychological stress or alienation.”

To inform the report, Greta Hilburn, a user-centered designer at the US Defense Acquisition University, conducted surveys with a wide range of non-engineers to get a sense of their expectations around humanoid robots. People overwhelmingly wanted robots that could form facial expressions, read people’s micro-expressions, and use gestures, voice, and haptics to communicate. “They wanted everything—something that doesn’t exist,” she says.

Escaping the warehouse

Getting human-robot interaction right could be critical if humanoids are to move out of industrial spaces and into other contexts, such as hospitals, elderly care environments, or homes. It’s especially important for robots that may be working with vulnerable populations, says Hilburn. “The damage that can be done within an interaction with a robot if it’s not programmed to speak in a way to make a human feel safe, whether it be a child or an older adult, could certainly have different types of outcomes,” she says.

The IEEE group’s recommendations include enabling a human override, standardizing some visual and auditory cues, and aligning a robot’s appearance with its capabilities so as not to mislead users. If a robot looks human, Prather says, people will expect it to be able to hold a conversation and exhibit some emotional intelligence; if it can actually only do basic mechanical tasks, this could cause confusion, frustration, and a loss of trust. 

“It’s kind of like self-checkout machines,” he says. “No one expects them to chat with you or help with your groceries, because they’re clearly machines. But if they looked like a friendly employee and then just repeated ‘Please scan your next item,’ people would get annoyed.”

Prather and Hilburn both emphasize the need for inclusivity and adaptability when it comes to human-robot interaction. Can a robot communicate with deaf or blind people? Will it be able to adapt to waiting slightly longer for people who may need more time to respond? Can it understand different accents?

There may also need to be some different standards for robots that operate in different environments, says Prather. A robot working in a factory alongside people trained to interact with it is one thing, but a robot designed to help in the home or interact with kids at a theme park is another proposition. With some general ground rules in place, however, the public should ultimately be able to understand what robots are doing wherever they encounter them. It’s not about being prescriptive or holding back innovation, he says, but about setting some basic guidelines so that manufacturers, regulators, and end users all know what to expect: “We’re just saying you’ve got to hit this minimum bar—and we all agree below that is bad.”

The IEEE report is intended as a call to action for standards organizations, like Vicentini’s ISO group, to start the process of defining that bar. It’s still early for humanoid robots, says Vicentini—we haven’t seen the state of the art yet—but it’s better to get some checks and balances in place so the industry can move forward with confidence. Standards help manufacturers build trust in their products and make it easier to sell them in international markets, and regulators often rely on them when coming up with their own rules. Given the diversity of players in the field, it will be difficult to create a standard everyone agrees on, Vicentini says, but “everybody equally unhappy is good enough.”