
A study finds that children welcome our new robot overlords [Collision Course #8]

By Tommy Collison • Issue #8
Hello and welcome to the 8th issue of Collision Course, a newsletter about tech policy, consumer privacy, and the future. As always, click here to read previous issues of this newsletter.
Today — can you be peer-pressured by a robot?

Yes, if you’re young. That’s what a team of researchers, led by Anna-Lisa Vollmer at Bielefeld University, has found in a study released this week.
Carolyn Y. Johnson over at the Washington Post summarized it well: 
“In the experiment, two groups of children were asked to complete a simple task: choose which two of several lines are the same length. One group did the task alone, and the other did the task while seated at a table with three autonomous robots that gazed at the same puzzle, paused and answered the question — incorrectly. The children who faced misleading robot peer pressure did less well, and three-quarters of their wrong answers were the same as the robots’ bad answers.”
The test is an updated version of a classic behavioral study (the Asch conformity experiments), which found that adults are influenced by an incorrect-but-unanimous answer given by a peer group. This most recent study also tested 60 adults, and while some of them could be swayed some of the time, the adults largely resisted the peer pressure of the robots.
Credit: CITEC/Bielefeld University
Previous research has shown that people treat robots as social beings, whether that’s by being polite to them or displaying gender stereotypes toward computers with stereotypically “masculine” or “feminine” voices. This new study takes it a step further: children can be actively swayed by robots when making decisions.
From Alexa and Siri to robot toys, robots are becoming increasingly integrated into our daily lives, and the daily lives of our kids. In particular, we ought to be thinking about the effects this conformity has on young children. (One caveat: it’s not immediately clear to me that conformity is always bad. A robot could, say, encourage young children to brush their teeth twice a day.)
The researchers are also thinking through the repercussions. “A discussion is required about whether protective measures, such as a regulatory framework, should be in place that minimize the risk to children during social child-robot interaction,” the researchers note. In an interview with Motherboard, Vollmer suggested that this could be similar to the framework already in place for advertisements aimed at children.
The genie is out of the bottle as far as robots in the home go — pick a household appliance, and there’s an internet-connected version of it. It’s no longer a question of if we interact with them, but how.
Bonus round: more questions than answers
As with most studies, Vollmer and co. raise more questions than they answer. Here are some questions I’ve been noodling on since I read the news on Wednesday:

  • Why weren’t the adults swayed? Is it because the robots were small and cute? I’d love to see the experiment repeated with adult-size robots.
  • Are there better examples of “positive conformity” than my tooth-brushing example?
  • What specific robot behavior induces conformity?
Answers on a postcard (or just an email reply).
As always, I welcome your feedback, and I’d love to hear your suggestions for what you’d like to see covered in this newsletter. I’m @tommycollison on Twitter, or you can email tommy@collison.ie. Please get in touch! 📩📬

