The New Breed review: The case for treating robots like animals

The New Breed: How to Think About Robots

Kate Darling

Allen Lane

BEFORE dawn, a Roomba sweeps the floor in my home. Suckubus (as we call it) gets tangled up in shoelaces or carpet tassels and needs rescuing. At the local supermarket, a robot called Marty patrols the aisles looking for spills, loudly summoning staff for clean-ups. Its skulking presence annoys customers.

In cities around the world, free-roaming robots are poised to work alongside humans. Will these machines steal jobs? Might they harm the people they work with? And could social robots alter human relationships?

Luckily, robot ethicist and MIT Media Lab researcher Kate Darling is on hand. In her book The New Breed, she reminds us that we have welcomed non-humans into our lives before. Why not think of robots as more like animals than machines?

Throughout history, we have incorporated animals into our lives – for transport, physical labour or as pets. In the same way, robots can supplement, rather than supplant, human skills and relationships, she says.

When it comes to making robots safe to interact with, sci-fi fans tend to fixate on Isaac Asimov’s laws of robotics: a robot may not harm a human; a robot must obey orders; a robot must protect its own existence. Later, Asimov added a law to precede the others: a robot may not harm humanity or, by inaction, allow humanity to come to harm. But in the real world, says Darling, such “laws” are impractical, and we don’t know how to code for ethics.

So what happens if a robot does accidentally harm a human in the workplace? Because robots are created and trained by people, it may be easier to assign blame than we think, says Darling.

It is the social robots, designed to interact with us as companions and helpers, that trigger the most dystopian visions. Human relationships are messy and take work. What if we abandon them for agreeable robots instead?

Darling offers a helpful perspective. Nearly five decades ago, she writes, psychologists worried that the growing popularity of pets might replace our relationships with other humans. Today, few would say pets make us antisocial.

If we are open to a new category of relationships, says Darling, there are interesting possibilities. At some care homes, residents with dementia benefit from the company of a furry robotic seal, which appears to act as a mood enhancer. Elsewhere, autistic children may respond better to coaching when there is a robot in the room.

Research shows that people tend to bond with well-engineered social robots. And as Darling writes, we often project human feelings and behaviour onto animals, so it is no surprise that we personify robots, particularly those with infantile features, and form attachments to them.

Even in a military context, where robots are built as tools, soldiers have mourned the loss of bomb disposal robots. Darling cites a soldier who sprinted under gunfire to “rescue” a fallen robot, much as their predecessors rescued horses in the first world war. The question isn’t whether people will become attached to a robot, but whether the company that made it can exploit that attachment. Corporations and governments shouldn’t be able to use social robots to manipulate us, she says.

Unlike animals, robots are designed, sold and controlled by people, Darling reminds us. Her timely book urges us to focus on the legal, ethical and social issues surrounding consumer robotics, to make sure the robotic future works for all of us.

