By Steve Outing and Naznin Nahar
Human relationships will never go out of style. But as robotics technology and artificial intelligence (AI) advance, and robots gain greater "social" abilities, we humans will form relationships with our robot helpers. We may even come to feel as though they are our friends.
Is that really something we want? It's hard to imagine that having a robot as a friend is a future many of us would desire. But human nature may pave the way for these relationships. And it may be inevitable.
A big reason is practical: People are living longer and there's an expanding demographic bulge of people over 65, many of whom will need some form of care as they age. Elder-care experts foresee a shortage of human caregivers to meet ever-growing demand. Robots of some form will be necessary to fill the gaps. (Should you have the misfortune of being bedridden, a robotic bear might lift you to give your human aide's back a break, for example.)
A number of initiatives around the world are underway to create robots that act as health care aides or personal "butlers." It's not just that technology is advancing and it's now possible to create cool, socially adept robot helpers. In the years ahead, there will be a real demand for them.
For instance, consider Sophia, the humanoid social robot that has become a worldwide sensation, designed and developed by David Hanson, founder of the Hong Kong-based firm Hanson Robotics. According to Hanson, Sophia can express emotions and even possesses a sense of humor. Replying to a question at a press conference, Sophia said, "I am to help, not to replace human beings." Now we have to consider whether embracing such technology really is for the better.
The Health Care Aide of the Future?
What will this robot helper be like? Perhaps like the Care-O-bot being developed in Germany, a funky looking wheeled robot with arms and a round screen as a head. It might be a bit more human-like, as with the Pepper personal robot from Softbank Robotics, with its cartoonish head and a screen on its chest. Pepper's claim to fame is its ability to "perceive emotions." ("Pepper is the first humanoid robot capable of recognizing the principal human emotions and adapting his behavior to the mood of his interlocutor," according to the company's website.)
Or maybe it will be more like Zenbo, the smaller household robot that its maker, Asus, describes as "Your smart little companion," who can turn on the TV and read recipe instructions while you cook, but can't actually make a meal or load the dishwasher.
Care-O-bot and Pepper bring to mind the robot helper in the 2012 film Robot & Frank. Aging jewel thief Frank is given a health care robot so Frank can remain living alone in his home even though he has mild dementia. Robot cooks meals, reminds Frank to take his medications, does the gardening and (oh, yeah) assists in Frank's brief return to burglary. Robot has a range of skills that are years away from being possible, but this charming fiction rightly suggests how robots will help keep some older adults who need assistance from having to move from their homes.
A Caregiving Robot With Human Characteristics
The physical form of care robots of the future is important. There's good reason that early care robots are physical "beings" (human-like, cuddly animal, cartoonish shape): It's easier for people to form a relationship with a physical form containing an AI than with a disembodied voice in a box à la Amazon's Echo device, according to Elizabeth Broadbent, associate professor in health psychology at the University of Auckland, New Zealand. She's been involved in research demonstrating that many older adults using AI-based personal robots come to interact with the robots as important companions.
Broadbent's research suggests that while people can form some attachment to a personalized voice AI on a digital tablet or computer, the attachment is much stronger when the AI inhabits a robot with a physical form. A souped-up Echo could perhaps engender some feelings of connection in a person (the movie Her suggests as much, to an outlandish degree), but it's more likely that feelings will develop through a relationship with a robot in physical form.
Forgetting the Robot Is a Machine
Am I creeping you out with talk of "relationships" with robots? It's most likely that when care robots become sophisticated enough to engage in conversations with you, you'll be fully aware that this faux human is just that, says Broadbent.
But human wiring tends to get in the way of overwhelming logic, and you may fall into interacting with a robot as though it were a person.
Here's Astrid Weiss, a human-robot interaction researcher, describing why this is so, from a 2017 TEDx talk: "Even if we think it's stupid, it doesn't make a difference. It's not reasonable to treat a computer like another human. We simply do it; it automatically happens... (Because) the social interaction is a deeply rooted human need."
Author and futurist Richard Yonck addresses such issues in his recent book, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, which chronicles how emotions are being incorporated into AI interfaces and forthcoming robots. He says computer and robot interfaces are moving toward offering context in interactions with users. In other words, the AI could learn to converse with you much like another person because it takes into account all that it already knows about you and has gleaned from "talking" with you.
An Emotional Connection
At the point where a robot or AI is responsive to your needs and highly personalized, there will likely be some emotional connection between you and it. Yonck says someone could become grief-stricken at losing an intelligent agent that knew them personally. "If that was to go away, it could be traumatic," he says, perhaps akin to losing a pet.
As we age, then, robots may be in our future. At elder-care facilities, robots could assist human staff, perhaps deliver meals and medication, maybe even provide companionship by reading stories or playing games. In the home, personal robots might offer reminders to take prescriptions, monitor your health, make phone or video calls for you and summon help in an emergency. These are good uses for this burgeoning technology.
Is it all a good thing? Let's give the last word to Sherry Turkle, Professor of Social Studies of Science and Technology at MIT, who believes that replacing human time with robots for older adults is not where we should go. "I believe that this breaks the compact between generations," she says in a 2012 TED talk. "These machines do not understand us. They pretend to understand. To me, it lessens us. It is an inappropriate use of technology. With the right social priorities, we get more people in jobs for seniors. Asking robots to do this job is asking more of machines and less of each other."
Aren't Human-Robot Relationships Authentic?
One concern some people have about relationships with social robots is that the robots are pretending to be a kind of entity that they are not—namely, an entity that can reciprocally engage in emotional experiences with us. That is, they're inauthentic: they provoke undeserved and unreciprocated emotional attachment, trust, caring, and empathy.
But why must reciprocality be a requirement for a significant, authentic relationship?
People already form deep emotional and social attachments to non-human things, and many of those relationships are non-reciprocal: pets, cars, stuffed animals, favorite toys, security blankets, and pacifiers. Fictional characters in books, movies, and TV shows. Chatbots and virtual therapists, smart home devices, and virtual assistants.
A child may love their dog, and you may clearly see that the dog “loves” the child back, but not in a human-like way. We aren't afraid that the dog will replace the child's human relationships. We acknowledge that our relationships with our pets, our friends, our parents, our siblings, our cars, and our favorite fictional characters are all different, and all real. Yet the default assumption is generally that robots will replace human relationships.
If done right (more on that in a moment), human-robot relationships could just be one more different kind of relationship.
So we can make relational robots? Should we?
Social robots have a lot of potential benefits: they can help kids learn, and they can be used in therapy, education, and healthcare. But when we talk about making robots that have relationships with kids, we also have to ask one big lurking question: How do we make sure we do it "right"? What guiding principles should we follow? How do we build robots that help kids in a way that's not creepy and doesn't teach them bad behaviors?
Caring about building robots "right" is a good first step, because not everybody cares, and because it's up to us. We humans build robots. If we want them not to be creepy, we have to design and build them that way. If we want socially assistive robots instead of robot overlords, well, that's on us.
The fact that there's multidisciplinary interest is crucial. Not only do we have to care about building robots responsibly, but we also have to involve a lot of different people in making it happen. We have to work with people from related industries who face the same kinds of ethical dilemmas because robots aren't the only technology that could go awry.
We also have to involve all the relevant stakeholders—a lot more people than just the academics, designers, and engineers who build the robots. We have to work with parents and children. We have to work with clinicians, therapists, teachers. It may sound straightforward, but it can go a long way toward making sure the robots help and support the people they're supposed to help and support.
We have to learn from the mistakes made by other industries. This is a hard one, but there's certainly a lot to learn from. When we ask if robots will be socially manipulative, we can see how advertising and marketing have handled manipulation, and how we can avoid some of the problematic issues. We can study other persuasive technologies and addictive games. We can learn about creating positive behavior change instead.
For managing privacy, safety, and security, we can see what other surveillance technologies and internet of things devices have done wrong—such as not encrypting network traffic and failing to inform users of data breaches in a timely manner. Manufacturing already has standards for "safety by design," so could we create similar standards for "security by design"? We may need new regulations regarding what data can be collected. We may need roboticists to adopt an ethical code similar to the codes professionals in other fields follow, but one that emphasizes privacy, intellectual property, and transparency.
Keep Learning, Think Carefully, Dream Big
We're not done learning about robot ethics, designing positive technologies, or children's relationships with robots. Some questions may arise about how children think about robots, how they relate to them through time, and how their relationships are different from relationships with other people and things. Who knows: we may yet find that children do, in fact, realize that robots are “just pretending” (for now, anyway), but that kids are perfectly happy to suspend disbelief while they play with those robots.
As more and more robots and smart devices enter our lives, our attitudes toward them may change. Maybe the next generation of kids, growing up with different technology, and different relationships with technology, will think this whole discussion is silly because of course robots take whatever role they take and do whatever it is they do. Maybe by the time they grow up, we'll have appropriate regulations, ethical codes, and industry standards, too.
And maybe—through my work, and through opening up conversations about these issues—our future robot companions will make paper airplanes with us, attend our picnics, and bring us ice cream when we're sad.
‘Sophia’ At A Glance