Robots in the classroom

Sofia Serholt works in the Computer Science & Engineering department at Chalmers University of Technology and is a member of the LIT community in Gothenburg. She is one of the leading European researchers in the area of robots and education. In this interview, Sofia talks with Neil Selwyn about her recent work, and the growing interest being shown in AI and education.

NEIL: So, first off, for the uninitiated, exactly how are robots being used in schools? What sort of technologies are we talking about here?

SOFIA: A big issue in education right now is programming robots – using robots as tools to learn different programming languages. However, my research is mostly focused on robots that actually teach children things and/or learn from them, and perhaps interact with them socially. So, these are humanoid robots that are similar to humans in different ways. Then, we also see robots that can be remotely controlled in classrooms – like Skype on Wheels – and these can be used by children who can’t be in the classroom for different reasons such as if they’re currently undergoing cancer treatment. We have some very rare situations where teachers have remotely controlled robots to be able to teach a class as well. So, that’s basically how the field is right now.

NEIL: Now, this all sounds really kind of futuristic and interesting in theory, but in practice, one of the main issues is how people actually accept these machines, and how they gain trust in them. What have you found out about people’s reactions to these technologies in the classroom?

SOFIA: In the research I've done in the classroom, I have put actual robots in classrooms for several months and studied interactions with them. I haven't done any rigorous research about acceptance in those cases, however. What I do see is that children are generally optimistic and positive about it … outwardly anyway. And teachers, of course, appreciate having a researcher there and being part of a project, and it's fun and interesting. But teachers haven't been very involved in the actual robot and the interactions. Instead it's been on the sidelines of what they're doing. When I talk to teachers and students that are not part of any study, I can see that students are concerned about certain things that robots can do or shouldn't do. And the privacy issue is one thing they are very concerned about. For example, they don't want to be recorded by a robot, which might be necessary for it to be able to interpret different things about the person it's interacting with. Children also don't want to be graded by a robot. And this is something that teachers resonate with too – they don't want to give away their authority in that sense. So, there are a lot of open questions right now – for example, how this technology might impact children in the long run. We don't know very much about this. We know that our children have toys and grow up with toys. But these robots are a new thing – kind of like a social interaction partner that is not really human, but does kind of mirror human behaviour. And it has certain restrictions, like it might not be able to speak with a humanlike intonation. So how does that affect children if they were to grow up with these robots? In reality, that's a kind of study we can't conduct.

NEIL: So, can you just tell us a bit about the research you’ve done yourself on robots in the classroom? 

SOFIA: I've looked at interaction breakdowns. In one study I selected situations or instances where children either became notably upset or became inactive when interacting with the robot. Not because they were bored with the actual game … that didn't actually happen, because it was very engaging and fun. But these were instances when students couldn't do anything – they couldn't proceed – and also when they started doing other stuff in the room, or began to talk to their friends instead of working on the topic. And what I saw was that when the robot doesn't understand what the child is saying, it generates a situation where on the one hand you have a robot that can express a lot of stuff and tell you what to do. But if the child can't ask the robot a question or show uncertainty, it creates a very difficult situation when they're alone with this robot. And this is what leads to these breakdowns – when students need help from the outside. I looked at six randomly picked students over the course of the three and a half months that I was in the school. And out of those sessions, I spotted 41 breakdowns. Some children were very upset about not understanding what to do, and not being able to move on. Some got really angry at the robot for disrupting and destroying the strategy they were using. But I think the worst cases were when students actually felt that they were not good enough and put the blame on themselves. You know, because robots and computers have a lot of authority in that sense. You don't think that your calculator lies to you, do you?

NEIL: Yeah, yeah.

SOFIA: You trust your calculator more than your own calculations. And I spotted a similar tendency here: 'If the robot broke down when I was with it, then it must not like me.' Having to deal with those kinds of situations is ethically problematic, I think.

NEIL: So, is that a design problem? Can we design robots that aren't quite so imprecise – or, as you say, that don't break down the magic between the student and the machine?

SOFIA: I think we could obviously have accomplished a lot more than we did in that study. We used a teaching robot, and there were a lot of outside things that affected it. You know, sunlight affects it, heat in the room affects it, how long it's been running affects the ability of the robot to work like it should. But nevertheless, we have to ask ourselves how far along that road we can go to uphold this illusion – the illusion that this is a sentient being in the eyes of the student. I think that's a question for philosophy and ethics, really.

NEIL: So, you've moved very quickly from looking at these things as teaching and learning technologies to ethical questions. These are big issues to be grappling with. So, what are the main ethical questions that need to be asked here? We've got privacy …

SOFIA: Yeah, and we have issues of responsibility. For example, who is responsible for the robot? We see a lot of companies developing these technologies and selling them. But where does their responsibility end and where does the teacher's responsibility begin? According to teachers, they want to be the responsible party in terms of what's going on in the classroom. At the same time, they feel that they can't have this responsibility if they can't monitor what's going on. The idea with a teaching robot is that it's supposed to work autonomously, and it's not supposed to be under the control of teachers. And oftentimes the teachers don't even know how it works, right, so they can't control it.

But one teacher asked how this benefits them … because if they have to walk around and keep an eye on the robot all the time, then what kind of sacrifice is that for their role as teachers? And we also have the inevitable fact that robots do break, and robots don't support physical interaction as much as we are led to believe. Because they look humanoid, it is tempting to think you can shake their hand, give them a high five, give them a hug … but this usually doesn't work unless the robot is programmed to go along with it.

NEIL: So, this issue of the robot going along with it leads me to think about questions of deception. If the robot is mimicking certain behaviour, is that an ethical issue as well?

SOFIA: Well, I think it might become an issue if robots have these kinds of social interaction features. Of course, there is a level of deception in that, because they're not social. You can erase the program, or you can accidentally erase a log about one child, and then the robot won't remember that child anymore. That's a big issue that I think we're going to be seeing a lot more of.

NEIL: So, just backtracking from the ethics for a second, one of the things that springs to mind is why on earth we should be using these machines if there are all these issues. Presumably there are very strong learning and teaching rationales for using robots in the classroom. What sort of things do we know about the learning that can take place around a robot?

SOFIA: We have certain indications that robots are preferred by students over virtual agents – for example, intelligent tutoring systems whose virtual agent has different levels of animation. So, the more humanlike and the more physical the embodiment of the robot is, the better the learning outcomes. However, these are not long-term field studies that I'm talking about here. These findings come from very controlled experiments, often not even with children. So we honestly don't know too much about the learning outcomes. In my study, I had a robot that taught geography and map-reading to children, and also sustainability issues. The learning goal was that the children should be able to reason about sustainability and the economic and social issues involved – a complex interaction. We didn't see any learning outcomes in that regard. In map-reading there was a slight improvement, but not as much as one would hope after a month's worth of interactions with this robot.

NEIL: So, this is very future-focused research, it's a very future-focused area of education, and doing research in this area must be really, really tricky. And there's a lot of hype in this area as well. So, looking to the future, what do you realistically think we'll see in 20 years' time? And what is actually hype?

SOFIA: I think as soon as you talk about the social aspects of interaction, then we have a problem. If we talk about AI as being very clever at certain specific tasks, then yes, we have this already – this is technology that is coming. But if we talk about a general social intelligence that's supposed to make its own decisions and deductions based on how you are, that gets to know you in human terms, and gets to reason and think, then I'm not sure I believe this is going to happen at all. This would require a very complex form of programming and machine learning, so I guess we'll see … I'm a bit reluctant to answer that question, because – who knows?

NEIL: Now, I just wanted to finish on a nice easy question. It's often said that robots and AI actually raise this existential question of 'what does it mean to be human in a digital age'? I was wondering if your work has led you to any such insights? What will it mean to be human in the 21st century? And what implications might this have for education?

SOFIA: I think we’re going to start to see that there is something else to human nature that technology might not be able to fill. The question is how we want to proceed knowing this. And children are what we define as a vulnerable group in society that we have some sort of duty of care towards. And, if we see all these problems with technology, if we see problems and potential suffering, then maybe we should talk about those issues and not just sweep them under the carpet. I don’t think there’s going to be any revolutionary situation where you see that robots somehow make us question our own sense of being in the world. But, I do think that if we interact with them too much, then we’re going to have problems knowing what we are. So it’s important that we don’t put this technology in the hands of children who are too young to be able to critically assess what’s going on.

Informal learning through online platforms

Neil Selwyn, Professor of Education in the Digital Education Research group at Monash University, produces a podcast called Meet the Education Researcher, in which he interviews researchers during his travels about the things they are working on and find interesting. A few months ago, Neil, who has been a visiting professor with our research group, came back to Gothenburg. During his visit he interviewed me about my work on the ways informal learning plays out on online platforms.

NS: Now, first off before we get into projects and things that you’re working on, what’s the big idea that your research addresses? What are the big questions that you’re interested in?

TH: The big thing I’m interested in at the moment is networked learning. I would say my interest has evolved over time and now I realise that I’m quite interested in the phenomenon of collective intelligence and wisdom of the crowd, but not so much at scale. Instead I’m interested in bridging what we know about these phenomena from scale down to the individual level. To try to understand what the experiences of the individual are, and what their relationship is with the systems they use, how the platforms that they use assemble these knowledges and work as a collective … and kind of break down these phenomena in a way that gives us something more tangible we can work with.

NS: So this is a really interesting approach to take to education and technology, and I guess a lot of it is not school-based. So, your background I guess is not in school teaching?

TH: No, it's not. My background's actually in industrial design, and I worked as a computer interaction designer and as an industrial designer doing things like furniture and sporting goods and lots of weird things. Then I worked designing museum exhibits and technologies for education, and got into graduate school that way – realising that I actually knew nothing about learning or what the people I was designing for were doing, and that I was making it up as I went along! That's where my interest emerged, and so I do tend to look at a lot of out-of-school learning situations, on the internet, in museums. Partly I do that because I'm not sure school is actually the best place to look for learning, and I think maybe we can learn a lot of things applicable to formal schooling from so-called 'informal' learning.

NS: I wanted to go through some of those examples of informal learning. Now, you’ve done a few projects that I’m aware of that have really fascinated me. The first one is citizen science. Can you explain what citizen science is, and what sort of platforms and networks does it take place on?

TH: Yeah, we had a project studying the phenomenon of citizen science – and citizen social science, citizen natural science, citizen humanities … there's a large umbrella of these projects now. Traditionally, citizen science has been people counting birds in their backyard, or using buckets to test the water quality in their local stream. But in the past decade or so, online platforms have sprung up and the kind of activity that people are involved in has changed. There have been a lot of classification projects, where scientists have huge data sets that they need analysed in some way, and they ask volunteers to log into platforms online and code them. One of the big ones that we've looked at is called Zooniverse, which is a platform for doing lots of different kinds of citizen science projects, and it started with a galaxy-classifying project in particular. So, people looked at pictures of galaxies and over time got more and more advanced. People picked out whether there were spirals or circles, or what shapes the galaxies were. And they guided the scientists toward which pictures they should be looking at.

NS: So, presumably, these members of the public identifying galaxies are not all professional astronomers? I mean, how are people actually engaging with this content?

TH: No, mostly they're not professional at all. In fact, interestingly, most of the people we've talked to as part of the project have no background even in amateur astronomy. Most of them were first attracted simply by the fact that they got to see these pretty pictures – because these are quite beautiful pictures of galaxies – and then they developed an interest in astronomy over time. So one of the things we've been doing is to study the discussion forums that go on around these activities, and to see what kinds of knowledges these people develop over time, somewhat to the side of the main activity they're asked to do. The main activity they're asked to do is very, very simple. Generally speaking, these citizen science projects build on the fact that you don't really need a lot of background in order to do the classification work of deciding whether a galaxy is circular or oblong.

NS: So, what are people talking about on the discussion forums?

TH: They're talking about all kinds of different things. What's interesting is that often they're using the pictures they're presented with on the platform not to go through and classify as many as they can quickly, but to stop and do their own analysis, and to discuss the different scientific databases and scientific papers they can use to do that analysis. There are even examples of volunteers using the material they find on the system to produce their own scientific articles and get them published. With a colleague, Dick Kasperowski, I've recently published a paper looking at how they develop knowledge about the imaging processes. So when the telescopes take these pictures, and when the different computational processes process these images, oftentimes artefacts appear – little errors in the images. And you'll actually find quite a lot of discussion where people are learning to identify these errors, break them down, and learn to read what they tell them about the kind of instruments that have been used to produce the images. That's often a gateway into breaking down images in the way that a professional astronomer would. Professional astronomers generally don't look at the visual images that much – they look at graphs and they look at different wavelengths, and it's a much more complicated process. So looking at these errors in the images is often a kind of gateway into looking at different representations of the images and taking a much more analytical approach.

NS: So, these are really powerful forms of informal learning and, as you say, the system is acting as a bridge into these other forms of knowledge.

TH: Exactly, and it's a completely unexpected form of informal learning. It's not at all something planned by the project, nor is it part of any initiative by the scientists to inform the public they're working with. It's very much a grassroots re-assemblage of the platform and what's available to them.

NS: Now, you've done another project on teachers' use of Facebook groups. And, just for anyone thinking that we're talking about a few hundred people here, I know this Facebook group had 18,000 members at its peak.

TH: It got up to about 18,000. When we actually assembled our corpus of data from the group I think it was about 13,500 users. We assembled three years of activity from this Facebook group where teachers talked about a particular kind of pedagogical approach, and we looked at the patterns in their usage, and also the character of their discussions. We did analysis of how the norms in the group functioned and how they were put in place, and the different footings that teachers took in the discussions, and the kind of topics that they worked with.

NS: Now, you’re making this sound very straightforward, but I guess methodologically assembling data from 18,000 teachers on a Facebook group’s quite tricky.

TH: It's not easy … and I should say, this was pre-Cambridge Analytica when we collected the data, so I'm not even sure you can do it in the same way we did. I actually wrote code in the Python programming language to query the Facebook database through what's called the application programming interface – the API – and downloaded the information we wanted. Of course, we had permission from the group owner, we'd posted in the group to show what we were doing, and we encouraged members to let us know if they didn't want to be part of it – in which case we deleted their data from the data set. Now it's a lot more difficult to get that kind of access to the Facebook database.
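For the technically curious, a minimal sketch of the kind of script this involved might look as follows. The group ID and token are placeholders, and the pre-2018 Graph API group endpoints assumed here have since been locked down, exactly as noted above:

```python
import requests

GROUP_ID = "1234567890"    # placeholder – the study used the real group's ID
ACCESS_TOKEN = "EAAB..."   # placeholder token, granted with the owner's permission

def fetch_group_feed(group_id, token):
    """Page through a Facebook group's feed via the (pre-2018) Graph API."""
    url = f"https://graph.facebook.com/v2.12/{group_id}/feed"
    params = {
        "access_token": token,
        "fields": "id,message,created_time,comments,likes",
        "limit": 100,
    }
    posts = []
    while url:
        response = requests.get(url, params=params)
        response.raise_for_status()
        data = response.json()
        posts.extend(data.get("data", []))
        # The API returns a 'paging.next' URL until the feed is exhausted.
        url = data.get("paging", {}).get("next")
        params = None  # the 'next' URL already carries the query string
    return posts

# posts = fetch_group_feed(GROUP_ID, ACCESS_TOKEN)
```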

NS: So, this is ethically tricky, and you had to teach yourself to code. But all those caveats aside, what did you actually find out? What learning was taking place?

TH: A lot of the literature on these kinds of social media teacher groups suggests that they're quite superficial – a lot of tips and tricks and sharing of apps, that kind of thing. But we actually found that if you dig a little deeper into these seemingly superficial threads, you find a lot of exchange of pedagogical ideas, and it isn't uncommon to have quite serious discussions. We also found that it was very uncommon to see anyone challenging the norms of the group. The pedagogical principle that this group was framed around was the Flipped Classroom, and anyone coming in and challenging the Flipped Classroom as an approach was met with quite a strong response. There was very much an orientation toward maintaining cohesion in the group.

NS: But there was no trolling or flaming?

TH: There was very little trolling or flaming, and that's interesting because it's very different from what you would find in a Reddit group or a more general internet discussion forum, where a lot of the moderation is actually policing behaviour. This kind of professional space is similar to the citizen science projects in that respect – possibly because you have this professional or thematic orientation. Most of the moderation work is actually about supporting, guiding or mentoring, and maintaining certain norms – not the 'hard policing' of bad behaviour that you would expect in a lot of internet forums.

NS: So was this a case of 18,000 teachers all learning, all participating?

TH: No, I'm not sure you could say that. Our data shows that a core group of maybe 25 teachers was responsible for the vast majority of activity. The data breaks down to show that about half the teachers in the group over the three-year period had posted, commented or liked something. It's a private group, so you have to apply to be a member and to see the posts. But the other half of the members had never actively contributed to the group in any way.

NS: Never clicked at all?

TH: No, and you could construe that as them being 'lurkers'. But on the other hand they may be getting a lot out of simply 'reading' the group.

NS: That’s really interesting … and then your third project which caught my attention was your research regarding YouTube and informal learning. So, how on earth is learning taking place on YouTube?

TH: Well, it's a massive area, and what we're specifically interested in is instructional videos. Instructional video has been around since the dawn of television, but its availability is enormous now. The YouTube statistic is that 300 hours of footage are uploaded every minute.

NS: And are these instructional videos uploaded by educational institutions? These are formal offerings?

TH: No, not at all. They’re very rarely that. In fact, they’re mostly people that are interested in a specific topic. It’s generally manual skills – repairing your washing machine, putting on makeup, cutting your hair, doing all these mundane tasks. But it’s this interesting space where people are not just demonstrating, but they’re taking a kind of pedagogical agency and showing other people how to do things, explaining them.

NS: So, what sort of people are uploading videos about how to mend a washing machine or put makeup on? Who are the actual content creators?

TH: It's interesting. There are kind of two different types of creators. There are people that have an interest in washing machine repair, perhaps – that's their hobby and they're just showing people how to do it. And then there are people that are really trying to make a career of doing instructional video. Because actually, if you look at the statistics, instructional videos are a relatively small proportion of the videos on YouTube, but they're the second most viewed category. So, if you produce a successful instructional video, you generally get a lot of views on it, and then you get a lot of advertising revenue from that. For instance, there's one user we've been following – a young woman who does makeup videos. She's gone from being a student to all of a sudden making about US$4 million a year in advertising revenue. It can be incredibly lucrative if you get a channel that really becomes popular.

NS: And, so what type of learning is taking place? You can earn $4 million doing it but what are people getting out of watching these videos?

TH: Well they’re learning to put makeup on, or they’re learning to get their washing machine working again [laughs]. Sometimes the videos can address a more abstract topic, but most often it’s a very concrete kind of activity. So, whether you’ve actually learnt something or not can be measured in whether you’ve fixed your bike chain or whatever.

NS: Are you finding anything interesting about this learning from the educational perspective?

TH: We're finding quite a lot about how demonstration works, and about the sequentiality between telling someone to do something and showing them how to do it. There's something about demonstration and video that works well. The video provides you with a specific case to work with, but the narration provides a general description of the kind of activity that's going on. So, you have these two levels, with the visual and the audio working together. That provides a situation where, in many cases, you can learn something more than just the video you're looking at. It's not just learning how to fix your washing machine – you also learn something about how electrical components work together, because the person telling you is filling in aspects of that while working with the specific concrete case of that specific washing machine.

NS: These are fascinating topics to be looking at in terms of education research. You’ve got millions of people online, engaging in learning every day. I’m interested what you’re going to look at next. You’ve looked at these particular projects, what’s on the horizon?

TH: Well, I'm really interested now in these large-scale educational movements which are non-traditional in a sense. So, moving away from things like MOOCs as the focus of trying to understand education online. Instead, I'm interested in things like Stack Overflow, where millions of people are learning to program together – there's a whole suite of Stack Exchange platforms. And I'm interested in the platform mechanics and what these platforms can teach us about learning in formal LMS systems, or even classrooms. I'm interested in looking at the way millions of people who have no extrinsic motivation to learn get engaged in doing something.

NS: That's absolutely fascinating … So my final question – as an industrial designer, what do you actually make of education research? It's a very different industry to be working in.

TH: It is an incredibly different industry to be working in. Industrial designers are also a group that sometimes works quite conceptually, but generally they end up having to produce a product at the end. So you can't really get away with just problematizing things when you have to actually produce something. I think we're quite limited in educational research at the moment in the ways we can represent our findings. Maybe educational researchers can learn something from designers, in the sense that designers have tools and methods for reaching audiences in ways that are simple and elegant, and can sometimes cause people to think, but don't do it in a heavy-handed way. I think that's something that can be brought to educational research.

NS: But to end on a positive note, you are working as an educational researcher. So why do you do this job? What do you enjoy?

TH: I’m really, really interested in figuring out why and how people learn. This connection between learning and the German word Bildung – this lifelong pursuit of knowledge. I’m interested in everybody feeling that love of learning. I think that’s why I’m interested in these online spaces, because there’s evidence of people with a self-motivated love of learning, and these platforms work for them for some reason. I kind of want to know why.

AI in the future of education

Speculation runs high about what the next global business opportunity will be. As reported by the World Economic Forum, the futurist Thomas Frey argues that the next sector to be disrupted by technological innovation is going to be education. Education is undeniably a massive societal enterprise, and whoever can sell a believable product that plugs into this activity would find themselves in a favorable financial position.

I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet (Thomas Frey, DaVinci Institute)

The arguments for his predictions come from a belief in a combination of artificial intelligence and personalized learning approaches. While it will surprise no one that AI is currently much hyped, the idea of tailoring instruction to each student’s individual needs, skills, and interests is also gaining considerable traction at the moment.

A report on the effects of personalized learning carried out by the Rand Corporation points to favorable outcomes. The reported findings indicate that, compared to their peers, students in schools using personalized learning practices made greater progress over the course of the two school years studied.

Although results varied considerably from school to school, a majority of the schools had statistically significant positive results. Moreover, after two years in a personalized learning school, students had average mathematics and reading test scores above or near national averages, after having started below national averages. (p. 8)

The authors are, however, careful to point out that the observed effects were not purely additive over time.

[For] example, the two-year effects were not as large as double the one-year effects (p. 10)

Since they also had problems separating actual school effects from personalized learning effects, they urge readers to be careful in extrapolating from the results.

While our results do seem robust to our various sensitivity analyses, we urge caution regarding interpreting these results as causal. (p. 34)

So how are these kinds of results interpreted by futurists and business leaders? By conjecture and hyperbole, is the short answer. Thomas Frey speculates that these new, effective forms of education will enable students to learn at four to ten times the ordinary speed. Others, like Joel Hellermark, promise even greater gains. Hellermark is CEO of Sana Labs, a company specializing in adaptive learning systems.

Adaptive learning systems use computer algorithms to orchestrate the interaction with students so as to continuously provide customized resources. The core idea is to adapt the presentation of educational materials by varying pace and content according to the specific needs of the individual learner. Those needs are assessed by the system, based on the student's previous and current performance. Since Sana Labs use artificial intelligence to run this process, they are a good candidate for the kind of company Frey thinks might grow considerably in the near future. They are currently attracting funding from big investors, but in order to gain the interest of the likes of Mark Zuckerberg and Tim Cook you must be selling a compelling idea.
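The core loop that such systems implement can be sketched in a few lines of Python. This is an illustration of the general idea only – the moving-average mastery estimate, the thresholds and the item pool are invented for the example and do not reflect Sana Labs' actual algorithms:

```python
from collections import deque

class AdaptiveTutor:
    """Toy adaptive tutor: pick the next exercise from a running
    estimate of mastery built from the student's recent answers."""

    def __init__(self, items):
        # items: list of (exercise, difficulty) pairs, difficulty in [0, 1]
        self.items = sorted(items, key=lambda pair: pair[1])
        self.recent = deque(maxlen=5)  # last five outcomes (True/False)

    def record(self, correct):
        self.recent.append(correct)

    def mastery(self):
        # Crude estimate: the share of recent answers that were correct.
        if not self.recent:
            return 0.5
        return sum(self.recent) / len(self.recent)

    def next_item(self):
        # Target a difficulty just above the current mastery estimate,
        # so the student is stretched but not overwhelmed.
        target = min(1.0, self.mastery() + 0.1)
        return min(self.items, key=lambda pair: abs(pair[1] - target))
```

Real systems replace the crude moving average with far more elaborate statistical models, but the structure – assess, adapt, present – is the same.

In this regard, the Rand report provides fertile ground, with Hellermark interpreting the results thus: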

If you understand how people learn, you can also personalize content to them. And the impact of personalised education is extraordinary. In one study by the Bill and Melinda Gates foundation [i.e., the Rand report] students using personalised technique had 50% better learning outcomes in a year. Meaning that over a 12-year period due to compounding these students would learn a hundred times more. – Joel Hellermark
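The arithmetic behind that final figure appears to be straight compounding – a reconstruction on my part, as Hellermark does not spell it out:

$$ 1.5^{12} \approx 130 $$

That is, learning 50% more each year, sustained every single year for twelve years, would compound to roughly a hundredfold difference.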

However, the size of the gap between interpretations like this – made by business leaders aiming to sell adaptive learning systems to schools – and those of the Rand report's authors themselves is impressive.

Something also pointed out in the Rand report is the importance of heterogeneous classes for learning. This becomes something of a complication for the personalized learning approach espoused by many purveyors of adaptive learning systems, which focus entirely on the student as an individual learner. But why should heterogeneity and sociality add to an individual's learning? A partial answer to this question might lie in the notion of social responsivity (as found specifically in the works of Johan Asplund, and in the sociological tradition of ethnomethodology more generally).

Social responsivity builds on the simple idea that humans are fundamentally social beings, interconnected by means of communication and other forms of social action. This connectedness is built up as chains of interactions, through responses to earlier actions. It is such a fundamental aspect of who we are that stripping a person of their ability to belong in this world – to see and to be seen by others – can be used as a form of punishment, with solitary confinement perhaps the clearest example.

Following this line, there is but a small step from the mechanics of adaptive learning systems – which function by tracking all the activity of individual learners – to the ideas of the English philosopher Jeremy Bentham and his original formulation of the Panopticon, or what he also called the Inspection-House. Way down the list, after institutions such as penitentiaries, factories, mad-houses and hospitals, Bentham ventured to apply his principles of an architecture for inspection to schools as well. In his plan, students would work as solitaries under the watchful eye of the master. On this, he wrote:

 All play, all chattering – in short, all distraction of every kind, is effectually banished by the central and covered situation of the master, seconded by partitions or screens between the scholars (Bentham, 1787)

Even though he advocated that the idea should be tested for education, he expressed severe qualms about the power one would hand over to whoever would govern such an institution.

Doubts would be started — Whether it would be advisable to apply such constant and unremitting pressure to the tender mind, and to give such herculean and ineludible strength to the gripe of power? (Bentham, 1787)

The central issue and challenge that I would like to raise here is that modern personalized learning approaches, typified in adaptive learning systems, build – in parallel with the views of Bentham – on the general understanding that traditional whole-class education is defective. And while personalized education criticizes traditional teaching for being a blunt instrument, it remains blissfully ignorant of the entire dimension of social responsivity. Therefore, even with the best of intentions, it works to create learning environments built around a-sociality.

So, what could be done? We don't simply want to repeat the past. From Thorndike, via Pressey and Skinner, to the systems we see today, the idea that machines might provide efficient ways of adapting instruction to the needs of students has been aired and tested on and off for over a century. If we now wish to use AI to make a new attempt to better adapt instruction to students, we also need to devise additional measures of student progress. What we need are concepts and measures that can incorporate and operate on the level of the group – because we must acknowledge that human interaction, and the social responsivity it supports, is a vastly important aspect of how we live and learn.

Like the AI-driven adaptive learning systems that are emerging today, the systems I would like to see should make suggestions for what learners should do next – but not only for individual learners. They should also be able to support teachers in deciding what class activities to offer next, or which students should work together (see the sketch below). If we can develop adaptive learning systems that support rather than obstruct social responsivity, then I think they can begin to have a real impact.
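As a toy illustration of what a 'group-aware' suggestion could look like, here is one deliberately simple heuristic, invented for the example, that proposes working pairs from the same mastery estimates an adaptive system already tracks:

```python
def suggest_pairs(mastery):
    """Toy heuristic: pair the weakest remaining student with the
    strongest, so each pair mixes levels and can support each other.

    mastery: dict mapping student name -> mastery estimate in [0, 1]
    """
    ranked = sorted(mastery, key=mastery.get)  # weakest first
    pairs = []
    while len(ranked) > 1:
        pairs.append((ranked.pop(0), ranked.pop(-1)))
    return pairs

# Example:
# suggest_pairs({"Ann": 0.9, "Ben": 0.3, "Cid": 0.6, "Dee": 0.5})
# -> [("Ben", "Ann"), ("Dee", "Cid")]
```

Whether mixing levels is in fact the pedagogically right grouping is, of course, exactly the kind of value judgement such a system would be making on the teacher's behalf.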

Smart or Dumb?

AI and the transformation of knowing

The development of human knowledge is very much a tale of tools. Tools, as extensions of the human mind, have been transforming human practices for millennia. Now the digital transformation has sparked a revolution in which most of the change might still lie in the decades ahead. At this juncture we can identify an interesting shift in one of the many dimensions constituting the relation between humans and technology.

Physical and intellectual instruments (even many of the digital ones), functioning as mediational means in the service of human activities, have traditionally established a sort of stable relationship between the user and her task (at an ontogenetic level). This stability, or predictability, stems from a basic form of ignorance held by the artificial.

For example, the modern power drill enables me to accomplish many things that would be hard to do purely by manual labor. But I still have to learn how best to use the tool and to choose a suitable drill bit according to the materials I'm working with. In a similar fashion, the spell checking in my word processor helps in getting the words right. However, so far, it has not acknowledged my changing skills in the language, nor has it taken into account for what purposes a specific text is being written. Such artifacts are generally not context dependent; they aren't altering their behavior in response to their anticipation and analysis of what I am doing.

This, in turn, necessitates mastery in their use – a combination that has proved most successful. The idea that a competent user wielding a powerful technology has been key to the proliferation of the human species is a central underpinning of socio-cultural-historical theory. We could summarize this picture by saying that:

In the old world, the tools, as servants, were blind to the needs of their masters.

Looking ahead, what happens when technologies start to anticipate my actions and alter their operations based on such assumptions? We can introduce a thought experiment to clarify this idea, departing from Gregory Bateson's discussion of the blind man and his stick:

 [Consider] a blind man with a stick. Where does the blind man’s self begin? At the tip of the stick? At the handle of the stick? Or at some point halfway up the stick? These questions are nonsense, because the stick is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind man’s locomotion.  (Steps to an Ecology of Mind, 1972)

In Bateson's example, the stick in question is simply a 'dumb' stick that does nothing but afford a pathway carrying vibrations between the ground and the blind man. We could, however, envision a next version of such a stick. Perhaps a 'smart' stick would start to learn about its master's preferences. Gradually it builds a model separating the tactile forms of feedback generated by hard surfaces from the soft forms provided by the roadside. It can also extrapolate the blind man's clear preference for one type over the other.

But what if the stick itself could also alter its shape so as to translate the 'soft' feedback into 'hard' feedback? Then it could adapt the presentation of information to reflect its user's preferences, rather than simply transmitting whatever surfaces it encounters. In this simple example we can easily grasp that such a development would lead to disaster, and that a stick of that ilk would be of no use.
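A toy sketch makes the information loss explicit. The model below is entirely invented for the thought experiment – the point is simply that the adaptive step throws away the very signal the tool exists to transmit:

```python
class SmartStick:
    """Toy model of the 'smart' stick: it has learned that its user
    prefers 'hard' feedback, and starts presenting that regardless."""

    def __init__(self):
        self.preferred = "hard"  # learned preference (assumed)

    def sense(self, surface):
        # Raw reading: what the ground actually is.
        return "hard" if surface in ("asphalt", "concrete") else "soft"

    def feedback(self, surface):
        actual = self.sense(surface)
        # The adaptive step: present what the user prefers, not what
        # is there. The information in 'actual' is destroyed.
        return self.preferred

stick = SmartStick()
print(stick.feedback("grass"))    # 'hard' – but the ground is soft
print(stick.feedback("asphalt"))  # 'hard'
# Whatever the ground is, the user feels 'hard': the difference between
# surfaces – the signal that guided the blind man – has been erased.
```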

But can we always be so sure about other implementations that adapt their presentation of information, or change the way they operate, according to whatever assumptions they make about what the user needs? Do we even know when this happens? And when it is implemented, how should such technology-held assumptions be communicated?

Filmed keynotes from #PopUpDig18

In June this year, #PopUpDig – the conference on schools and digitalization – was held for the second time. The theme was digital competence, and here you can watch the keynotes.

Annika Lantz-Andersson: Introduction to #PopUpDig18 on digital competence [8:28]

Keynote Luci Pangrazio: Developing students' critical digital literacies [36:18] – PDF: Keynote gothenburg_pangrazio

Luci Pangrazio, PhD, is a Research Fellow at Deakin University in the Research for Educational Impact (REDI) centre. Her research focuses on critical digital literacies and the changing nature of digital texts.

The ethical dilemma of the robot teacher

The rise of automated teaching technologies

We need to talk about robots.

Specifically, we need to talk about the new generation of AI-driven teaching technologies now entering our schools. These include the various 'autonomous interactive robots' developed for classroom use in Japan, Taiwan and South Korea. Alongside these physical robots are the software-based 'pedagogical agents' that now provide millions of students with bespoke advice, support and guidance about their learning. Also popular are 'recommender' platforms, intelligent tutoring systems and other AI-driven adaptive tutors – all designed to provide students with personalised planning, tracking, feedback and 'nudges'. Capturing thousands of data-points for each of their students on a daily basis, vendors such as Knewton can now make a plausible claim to know more about any individual's learning than their 'real-life' teacher ever could.

One of the obvious challenges thrown up by these innovations is the altered role of the human teacher. Such technologies are usually justified as a source of support for teachers, delivering insights that "will empower teachers to decide how best to marshal the various resources at their disposal". Indeed, these systems, platforms and agents are designed to give learners their undivided attention, spending indefinitely more time interacting with an individual than a human teacher would be able to. As a result, it is argued that these technologies can provide classroom teachers with detailed performance indicators and specific insights about their students. AI-driven technology can therefore direct teachers' attention toward the neediest groups of students – acting as an 'early warning system' that points out the students most in need of personal attention.

On one hand, this might sound like welcome assistance for over-worked teachers. After all, who would not welcome an extra pair of eyes and an expert second opinion? Yet rearranging classroom dynamics along these lines prompts a number of questions about the ethics, values and morals of allowing decisions to be made by machines rather than humans. As recent AI-related controversies in healthcare, criminal justice and national elections have made evident, the algorithms that power these technologies are not neutral, value-free confections. Any algorithm is the result of somebody deciding on a set of complex coded instructions and protocols to be repeatedly followed. Yet in an era of proprietary platforms and impenetrable coding, this logic typically remains imperceptible to most non-specialists. This is why non-specialist commentators sometimes apply the euphemism of 'secret sauce' when talking about the algorithms that drive popular search engines, news feeds and content recommendations. Something in these coded recipes seems to hit the spot, but only very few people are 'in the know' about the exact nature of these calculations.

This brings us to a crucial point in any consideration of how AI should be used in education.

If implementing an automated system entails following someone else’s logic then, by extension, this also means being subject to their values and politics.

Even the most innocuous logic of [IF X THEN Y] is not a neutral, value-free calculation. Any programmed action along these lines is based on pre-determined understandings of what X and Y are, and what their relation to each other might be. These understandings are shaped by the ideas, ideals and intentions of programmers, as well as by the cultures and contexts those programmers are situated within. So key questions to ask of any AI-driven teaching system include: who is now being trusted to program the teaching? Most importantly, what are their values and ideas about education? And in implementing any technological system, what choices and decisions are now being pre-programmed into our classrooms?
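To make this concrete, consider a deliberately simple triage rule of the kind an AI-driven teaching system might run. Everything here is invented for illustration – no real product is being quoted – but every line embodies a contestable educational value judgement:

```python
def who_to_help_first(students):
    """Toy triage rule: flag students whose predicted score falls below
    a cut-off, then rank them by deficit.

    students: list of dicts with 'name' and 'predicted_score' keys.
    """
    AT_RISK_THRESHOLD = 0.5  # why 0.5? somebody decided
    flagged = [s for s in students if s["predicted_score"] < AT_RISK_THRESHOLD]
    # Ranking by lowest score assumes 'most behind' means 'most deserving
    # of the teacher's attention' – itself a contestable position.
    return sorted(flagged, key=lambda s: s["predicted_score"])
```

The choice of metric, the threshold, and the decision to rank by deficit rather than by, say, recent effort or well-being are all values and politics, pre-programmed.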

The ethical dilemma of robot teachers

The complexity of attempting to construct a computational model of any classroom context is echoed in the 'Ethical Dilemma of the Self-Driving Car'. This test updates a 1960s thought experiment known as 'the Trolley Dilemma', which posed a simple question: would you deliberately divert a runaway tram to kill one person rather than the five unsuspecting people it is currently hurtling toward? The updated test – popularised by MIT's 'Moral Machine' project – explores human perspectives on the moral judgements made by the machine intelligence underpinning self-driving cars. These hypothetical scenarios involve a self-driving car that is imminently going to crash through a pedestrian crossing. The car can decide to carry on along the same side of the road, or veer onto an adjacent lane and plough into a different group of pedestrians. Sometimes another option allows the car to self-abort by swerving into a barrier and sacrificing its passengers.

Unsurprisingly, this third option is very rarely selected by respondents. Few people seem prepared to ride in a driverless car that is programmed to value the lives of others above their own. Instead, people usually prefer to choose one group of bystanders over the other. Contrasting choices in the test might include hitting a homeless man as opposed to a pregnant woman, an overweight teenager or a healthy older couple. These scenarios are complicated further by considering which of these pedestrians is crossing on a green light or jaywalking. These are extreme scenarios, yet they neatly illustrate the value-laden nature of any 'autonomous' decision. Every machine-based action has consequences and side-effects for sets of 'users' and 'non-users' alike. Some people get to benefit from automated decision-making more than others, even when the dilemma relates to the more mundane decisions implicit in the day-to-day life of the classroom.

So what might an educational equivalent of this dilemma be? What might the ‘Ethical Dilemma of the Robot Teacher’ look like? Here we might imagine a number of scenarios addressing the question: ‘Which students does the automated system direct the classroom teacher to help?’. For example,

who does the automated system tell the teacher to help first – the struggling girl who rarely attends school and is predicted to fail, or a high-flying ‘top of the class’ boy?

Alternatively, what logic should lie behind deciding whether to direct the teacher toward a group of students who are clearly coasting on a particular task, or a solitary student who seems to be excelling? What if this latter student is in floods of tears? Perhaps there needs to be a third option focused on the well-being of the teacher. For example, what if the teacher decides to ignore her students for once, and instead grabs a moment to summon some extra energy?

#1 Who should the robot help next?

#2 Who should the robot help next?

The limits of automated calculations in education

Even these over-simplified scenarios involve deceptively challenging choices, quickly pointing to the complexity of classroom work. Tellingly, most teachers quickly get frustrated when asked to engage in educational versions of the dilemma. Teachers complain that these scenarios seem insultingly simplistic. There are a range of other factors that one needs to know in order to make an informed decision. These might include students’ personalities and home lives, the sort of day that everyone has had so far, the nature of the learning task, the time of academic year, assessment priorities, and so on. In short, teachers quickly complain that their working lives are not this black-and-white, and that their professional decisions are actually based on a wealth of considerations.

This ethical dilemma is a good illustration of the skills and sensitivities that human teachers bring to the classroom setting. Conversely, all the factors that are not included in the dilemma point to the complexity of devising algorithms that might be considered appropriate for a real-life classroom. Of course, many system developers consider themselves well capable of measuring thousands (if not millions) of different data-points to capture this complexity. Yet such confidence in quantification quickly diminishes in light of the intangible, ephemeral factors that teachers will often insist should be included in these hypothetical dilemmas. The specific student that a teacher opts to help at any one moment in a classroom can be a split-second decision based on intuition, broader contextual knowledge about the individual, and a general 'feel' for what is going on in the class. There can be a host of counter-intuitive factors that prompt a teacher to go with their gut feeling rather than what is considered to be professional 'best practice'.

So, how much of this is it possible (let alone preferable) to attempt to measure and feed into any automated teaching process? A human teacher's decision to act (or not) is based on professional knowledge and experience, as well as personal empathy and social awareness. Much of this might be intangible, unexplainable and spur-of-the-moment, leaving good teachers trusting their own judgement over what a training manual might suggest they are 'supposed' to do. The 'dilemmas' just outlined reflect situations that any human teacher will encounter hundreds of times each day, with each response dependent on the nature of the immediate situation. What other teachers 'should do' in similar predicaments is unlikely to be something that can be written down, let alone codified into a set of rules for teaching technologies to follow. What a teacher decides to do in a classroom is often a matter of conscience rather than a matter of computation. These are very significant but incredibly difficult issues to attempt to 'engineer'. Developers of AI-driven education need to tread with care. Moreover, teachers need to be more confident in telling technologists what their products are not capable of doing.


The two ‘dilemma’ images were illustrated using graphics designed by Katemangostar / Freepik


Author: Neil Selwyn

Neil Selwyn is a Professor in the Faculty of Education, Monash University and previously Guest Professor at the University of Gothenburg. Neil’s research and teaching focuses on the place of digital media in everyday life, and the sociology of technology (non)use in educational settings.

@neil_selwyn is currently writing a book on the topic of robots, AI and the automation of teaching. Over the next six months he will be posting writing on the topic in various education blogs … hopefully resulting in: Selwyn, N. (2019) Should Robots Replace Teachers? Cambridge, Polity

AI and the medical expert of tomorrow

In my research I have addressed the consequences of the continuous shift and development of technologies in different work settings, and what that means for the skills we develop. For a number of years, our group has been heading interdisciplinary research initiatives in the medical area. This work brings together radiologists, dentists, surgeons, radio-physicists and social scientists to jointly study the management of different technological advancements in medicine. We also design workplace-learning environments in which both experienced professionals and novices can develop and improve essential skills. The next step for us is both to develop AI applications in these areas and to scrutinise their adoption and their consequences for practice.

If we look at medicine and many other complex work settings, what we find today is an inherent dependence on various technological set-ups. Work proceeds, by necessity, through the incorporation and use of a vast array of technologies, physical as well as digital. As a consequence, when these tools and technologies change and develop, practitioners have to adapt and re-skill.

The general trend here is that tasks of lower complexity can be automated and taken over by technology. The more complex tasks, however, have so far tended to require expert involvement. The more recent developments of AI for medicine can be seen as a continuation of a long trend. But they might also represent change of a different order of magnitude. In the near future we will most probably see systems extending and going beyond the current limits of possible performance. This implies that medical experts will take on new and even more advanced roles, as supervisors or as developers of new forms of knowledge and inquiry.

Now, these are not new arguments. What I want to highlight here is an issue that I find missing in the current discussion about how AI transforms work. When we promote the current workforce and let them take on more advanced tasks today, we do so from a pool of people who have undergone traditional training and who have become experts under certain conditions. And this is a long process. But these trajectories of becoming an expert are themselves being shifted in these transformations. What will it mean to be knowledgeable, or an expert radiologist, say 15-20 years from now? Surely something different from today. But how is an individual going to end up so knowledgeable about a professional domain when, during training, a system can outperform her every move and diagnosis for years on end? How will we motivate people to keep training and learning so that they will one day be able to contribute to the development of new knowledge?

While I don't think this is an unsolvable problem, we need to start this discussion alongside whatever powerful systems we introduce into medical practice. Otherwise we might be striking a devil's bargain, profiting in the short term whilst depleting the knowledge base in the long run.

Trade shows and conferences central when the agenda for school digitalization is set

Conferences, school trade shows and social media are often described as important and democratic meeting places for schools and teachers. According to that picture, teachers can make their voices heard there, raise and discuss relevant pedagogical questions, and share successful teaching concepts and methods. A study from the University of Gothenburg presents an entirely different picture.

"Forums and events of this type are often portrayed as a new kind of grassroots movement that, through its meeting places, can influence decision-makers and strengthen the teaching profession. But what events in the IT area actually allow in terms of exchange for teachers and school leaders is very limited and one-sided," says Catarina Player-Koro, who conducted the study together with Annika Bergviken Rensfeldt and Neil Selwyn.

Trade shows, conferences and social media have gained ever greater influence over policy questions concerning school digitalization. These are places where private and public actors and interests meet, and where for-profit IT and education companies, technology and infrastructure vendors, and school actors present digital technology as the solution to often complex problems in schools.

In the study, the researchers used the IT trade show SETT as an example of an event that has had a major impact. SETT describes itself as "Scandinavia's largest trade show and conference within modern and innovative learning" and has been held annually in Stockholm since 2011. A range of different actors – for-profit IT and education companies, technology and infrastructure vendors, but also municipalities and trade unions – co-organize the show. Internationally there are corresponding events such as BETT, BETT Latin America, EdTechXAsia and others.

In their study, the researchers followed the trade show before, during and after the event: visiting it, interviewing teachers, and analysing the show in social media. Their main finding is that SETT and similar trade shows should be seen as part of a global policy network in which ideas, technology and speakers of various kinds move across countries and contexts, thereby setting the agenda for school digitalization. This often makes the agenda uniform, superficial, and adapted to commercial rather than pedagogical interests.

"The SETT show is part of a new type of policy process that we see internationally, driven by an economic agenda. What is problematic is that a significant part of the policy work – directed at school content, curriculum, resources and technology – takes place outside the usual democratic forums for school decisions, such as classrooms, schools, municipalities and the state," says Annika Bergviken Rensfeldt.

The study describes in detail the conditions for trade-show attendees and how the show offers a stage-managed delivery of messages, with little opportunity for the exchange of knowledge and information between teachers. Teachers at the event have few opportunities to make their voices heard, to influence, question or otherwise express critical comments, or to be invited into longer discussion. Teachers at the event also prove to have difficulty perceiving who is behind the messages, and technology companies, app makers, teachers and researchers can have equal influence over what is deemed valuable.

"In relation to this, the requirement that all teaching should rest on a scientific basis and proven experience becomes impossible for teachers to assess, and they come away perceiving the trade show as conveying what they are expected to do in the classroom," says Catarina Player-Koro.

Instead of an exchange of knowledge and information, messages are offered in the form of short slogans as solutions to the complex challenges facing the education system. Programming and inclusion alike have their place: "Programming and coding – accessible to everyone", "Improved study results", "Inclusion for immigrants". The format is usually powerful stories of successful methods, and a marketplace for sellable applications and methods. Challenges and problems with digital technology are rarely presented from the perspective of everyday school life.

"Since we have now opened up schools to both private and public interests in various ways, it is also important to examine the consequences of that. If this type of IT trade show is what teachers are offered as professional development in IT in schools, that is very problematic. At the same time, there is democratic potential in a more open discussion about IT in schools – but then the public interests must be given greater influence over which questions matter for teachers and schools," says Catarina Player-Koro.

Catarina Player-Koro, Annika Bergviken Rensfeldt, Neil Selwyn

Read the article Selling tech to teachers: education trade shows as policy events in the Journal of Education Policy: http://www.tandfonline.com/doi/abs/10.1080/02680939.2017.1380232


How can EdTech help save the ocean?

Since the beginning of the industrial revolution, human activities have had a growing negative impact on our planet. The ocean – covering more than 70% of Earth and providing goods and services our species depends upon – is not spared: increased temperatures, acidification, destruction of large marine habitats, and so on. In other words, our everyday behaviors (e.g., what we eat, how we travel, what we buy) impact the ocean, and humans are so dependent on the marine environment that destroying it threatens our own survival as well as the survival of the ocean's inhabitants.


Training non-technical skills in simulator environments: How does it work in practice?

In professional education with high safety requirements – such as healthcare, pilot and maritime training – training and assessment today increasingly take place in simulator-based learning environments. To ensure that future sea captains are competent and can act safely, today's maritime education is regulated by international conventions that specify the requirements for various ship qualifications and certificates. These regulations increasingly emphasize the importance of training and certifying technical as well as so-called non-technical skills. How skills deemed non-technical should be trained and assessed in simulator training is today both contested and uncertain.

Image copyright KONGSBERG Group, used with permission.
