Can robots educate children?

Gert Braspenning
arinti
Jun 17, 2019


Pepper

Introduction

Arinti has been working on a robotics research project named ROBO-CURE. The purpose of ROBO-CURE is to explore how Social Robotics, the Internet of Things and Artificial Intelligence can work together for medical purposes. The project is part of the ICON programme and therefore includes multiple partners from the private and public sector.

We at Arinti teamed up with our colleagues at The Learning Hub to seamlessly merge E-Learning with Social Robotics. Our target was to create digital learning content that could be taught by a humanoid robot, without the need for specialist programming. We call our solution StoryLine 2 Pepper.

The humanoid robot of choice is Pepper, which has been used for robotics research worldwide. Pepper is specifically suited for social robotics, which makes it an ideal candidate for ROBO-CURE.

To view the full scope of the ROBO-CURE project, take a look at http://smit.vub.ac.be/project/robo-cure

Can a robot educate?

Researchers at Plymouth have previously shown that children respond very well to robots when it comes to influencing opinions. This makes robots well suited for the role of educator.

A humanoid robot combines the benefits of the digital world with social interaction: the robot can react to the behaviour and actions of the child. Although a tablet could be used for the same kinds of interaction, the lack of social interaction is a key difference. To broaden our scope, however, the proposed solution also supports the use of a browser when no robot is available.

End-user programming & E-learning

We chose to use StoryLine, which can be integrated with an LRS (Learning Record Store) such as Learning Locker to monitor the education of the patient. StoryLine is a leader in the E-Learning community and is similar to PowerPoint. The learning content created in StoryLine can be exported to HTML with integrated LRS support. This content can be adapted to add an interface that provides interaction with the robot. The interface can be accessed from within StoryLine using its JavaScript support. This might sound technical, but every action can be achieved with a single line of code.
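For illustration, this is roughly what a statement in a Learning Record Store could look like when a patient answers a question. The actor, object and URLs below are hypothetical; the exact statements produced by StoryLine may differ.

// Hypothetical xAPI statement, as an LRS such as Learning Locker
// would store it when a patient answers a question.
const statement = {
    actor:  { name: 'Patient A', mbox: 'mailto:patient-a@example.org' },
    verb:   { id: 'http://adlnet.gov/expapi/verbs/answered',
              display: { 'en-US': 'answered' } },
    object: { id: 'http://example.org/modules/math-quiz/question-1' },
    result: { success: true, response: 'five' }
};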

We refer to the exported HTML content, after it has been processed by StoryLine 2 Pepper, as a learning module.

This is an example of a question and the corresponding answers. After an answer is recognized, a variable with the same name in StoryLine is incremented. The end-user can subscribe to changes of this variable and configure StoryLine to act accordingly.

Listen("What is 1 + 4", ["five", "four", "eight", "ten"]);

In the second command, the user can specify whether the robot should automatically proceed to the next slide after pronouncing the sentence.

Say("Hello, I am pepper!", { goToNextSlide: false });

Since there are only two commands available, adding the interaction in StoryLine is very convenient. Adding browser support also has the benefit that all learning modules can be fully tested without the need for a robot.

When browser support is activated, the learning module makes use of modern browser features such as the Web Speech API (SpeechRecognition and SpeechSynthesis) to handle speech recognition and pronunciation.
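As a minimal sketch (not the exact project implementation), the two commands could be mapped onto the Web Speech API like this:

// Pronounce a sentence using the browser's speech synthesis.
function saySentence(text) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Listen for speech and report it back when it matches an expected word.
function listenFor(expectedWords, onRecognized) {
    const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    const recognition = new Recognition();
    recognition.onresult = function (event) {
        const heard = event.results[0][0].transcript.trim().toLowerCase();
        if (expectedWords.includes(heard)) onRecognized(heard);
    };
    recognition.start();
}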

Another available feature is the normalization of audio in all .mp4 files found within the StoryLine output.
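The article does not specify which tool performs this normalization; as a hypothetical illustration, it could be done with ffmpeg's loudnorm filter invoked from Node:

const { execFileSync } = require('child_process');

// Hypothetical example: normalize the audio track of one exported .mp4
// with ffmpeg's EBU R128 loudness filter, keeping the video stream as-is.
execFileSync('ffmpeg', [
    '-i', 'story.mp4',
    '-af', 'loudnorm',
    '-c:v', 'copy',
    'story-normalized.mp4',
]);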

When a learning module is started, Pepper will always introduce itself before asking the patient whether he wants to proceed. Using the visual capabilities of the tablet, it is possible to hint at which words are expected by the robot, to improve recognition. Some tolerance has been built in: when the robot is unable to recognize what has been said, or cannot map it to a correct behaviour, it will ask the patient to repeat. This process continues until the robot can proceed. To make sure the patient does not get stuck, every interaction is also achievable using the tablet. This is, however, seen as a backup option, since it reduces the advantages of a social robot.
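In pseudo-JavaScript, that tolerance behaviour could be sketched as follows. The recognize() and showHintOnTablet() helpers are hypothetical and not part of the actual interface; saySentence() is the browser sketch from above.

// Hypothetical sketch of the repeat-until-understood behaviour.
async function askUntilUnderstood(question, expectedAnswers) {
    showHintOnTablet(expectedAnswers); // hint the expected words visually
    while (true) {
        const answer = await recognize(question, expectedAnswers);
        if (answer !== null) {
            return answer; // recognized and mapped: proceed
        }
        // Not recognized: ask the patient to repeat and try again.
        saySentence('Sorry, I did not understand. Could you say that again?');
    }
}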

Deploying the learning content to the robot


The modules created in the previous step now need to be displayed on the robot. This is where the other partners jump in. A task scheduler has been put in place by imec. This scheduler launches the modules at the right time for the patient, with monitoring provided by the LRS integration. The only technical requirement is that the modules made with the StoryLine application are either uploaded to the robot or hosted on a web server.
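For example, hosting an exported module could be as simple as serving the folder with a static web server. The folder name and port below are assumptions for illustration, not project specifics.

const express = require('express');

const app = express();
// Serve the processed StoryLine export so the scheduler can point
// the robot (or a browser) at it.
app.use(express.static('learning-module'));
app.listen(8080, function () {
    console.log('Learning module served on port 8080');
});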

Conclusion

We have made it possible to produce learning content without technical know-how, content that can then be delivered to the audience by a humanoid robot. By building on the highly interactive StoryLine application, our solution augments the functionality provided by a recognized leader in E-Learning. Educational results can be monitored easily thanks to the native integration between StoryLine and LRS systems. At the time of writing, the integration only supports the Pepper robot, but it should be relatively easy to add other robots, provided they have a tablet and a JavaScript API.

Thanks to all the involved partners

  • Cronos — Arinti
  • Cronos — The Learning Hub
  • QBMT
  • UZ Brussel
  • Medtronic Belgium
  • imec — IDLab — UGent
  • imec — SMIT — VUB
  • VUB — GRON
  • VUB — R&MM

ROBO-CURE in the press:

https://www.vrt.be/vrtnws/nl/2019/06/14/sociale-robot-in-uz-brussel-voor-kinderen-met-diabetes/

https://nieuws.vtm.be/binnenland/medische-primeur-zorgrobot-helpt-diabeteskinderen

https://www.hln.be/nieuws/binnenland/sociale-robot-helpt-en-begeleidt-kinderen-met-diabetes-in-het-uz-brussel~a59f64df/

https://www.nieuwsblad.be/cnt/dmf20190614_04460520

https://www.rtbf.be/info/societe/detail_un-robot-pour-aider-les-enfants-atteints-de-diabete?id=10246203
