Interview with John Bailey

50CAN CEO Marc Porter Magee interviews John Bailey, Senior Fellow at AEI and advisor to the Chan Zuckerberg Initiative and the Walton Family Foundation. Marc and John discuss how developments in artificial intelligence will impact the field of education over the coming years.

Editor’s Note: The transcript of this interview was copyedited and formatted by ChatGPT AI.

Transcript

Marc: All right. Well, John, thanks so much for joining us to talk about education and AI, which feels like it is everywhere now. And just by way of introduction, for people who don’t know you: you are a key thinker in the world of education. Currently, you’re a trusted advisor to the Chan Zuckerberg Initiative and the Walton Family Foundation, and a senior fellow at AEI. You became a real force in education policy when you were working for the Foundation for Excellence in Education, and before that in the White House. And a lot of your work here dates back to the ’90s, when you started working on all these interesting issues of technology and education. I’m sure for you, this feels a little bit like coming full circle, but also maybe a little bit different at this point than what we used to talk about.

John: It does. This moment feels very different from other moments, when there was a bunch of personalized learning hype that happened in a couple of hype cycles. Look, I was a part of that. We were all trying to push the technology to do something, but often what it delivered fell far short of expectations. This feels, though, similar to two other moments in my life. And this makes me feel very old, but one was the first time I had the chance to engage with the internet through a web browser, when Mosaic came out. You just sort of played with it and you’re like, “This is going to be different. This is going to change a lot of things.” And the second time I felt this way was when the iPhone came out. The iPhone felt like a very similar moment to now, where it was easy to point out its limitations. It didn’t have a keyboard. At the time, everyone was obsessed with the BlackBerry keyboards. And it didn’t have an app store. It was easy to list all the things it didn’t have. And yet, now the leading cell phone manufacturers of that era are footnotes and everyone has a smartphone. This feels very similar. I don’t think I’ve experienced a technology that’s been introduced, seen multiple improvements in the span of 100 days, and also been scaled the way you’re seeing with Microsoft. And then we just saw Google make some announcements this week, and Anthropic, which also is doing an early release. So the pace at which this is improving upon itself and also getting scaled across the ecosystem is just unprecedented. It makes your head spin.

Marc: Yeah. And it is one of these things where obviously to your point, we’re surrounded by technology and some of that has completely changed what we do. And some of it is kind of overhyped, but it does feel like this is something that’s catching on really fast. It’s almost hard to keep up with. So for those people who aren’t deep in the world of AI, what is this thing that we’re talking about now? What is this leap forward in AI technology?

John: Well, the way to think about it, there are multiple different types of AI technologies. There are AI technologies powering your self-driving car, powering drones. AI is getting used in a whole bunch of different ways. What has captured everyone’s imagination – through ChatGPT and now Bing, and Google also has Bard coming out – is a form of AI called large language models. These are models trained on just enormous amounts of text, billions and billions of words: pretty much every book that’s been published, everything on Wikipedia, a lot of different websites. A lot of these companies have LLMs: Meta has an LLM, Google has an LLM, and OpenAI’s LLM, large language model, is called GPT. They released this in a way that lets you query it, ask it questions, and give it different types of tasks; it looks for patterns within all that data and then produces a response. The way I sort of think about this: Google is almost like a library. You go to Google and say, “I want to find something about 50CAN,” and it tells you where to go to find something about 50CAN. This LLM, and GPT in particular, is more like having an analyst in your pocket. You don’t ask, “Where do I go to find something?” You say, “Tell me what 50CAN is.” “Do an analysis of the leading market segmentations for smart thermostats.” You give it tasks and it comes back with an analysis and a result. In the early days of this, Marc and I were on Twitter making it write, you know, interesting lyrics and a 10-point advocacy plan. But it does much more powerful things now, too. It can build strategic plans. With four prompts, I had it acting as an adaptive tutor. It does lesson plans really well, and lesson activities, even some formative assessments that teachers can use. And so you just start playing with it, and you get to see all the different ways this can eliminate a lot of the tedious tasks of our lives and teachers’ lives, and also unlock a lot of other productivity opportunities, whether it’s at work, in teaching or, for a lot of kids I think, in learning too.
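Editor’s Note: For readers who want to see what “giving it a task” looks like in practice, here is a minimal sketch in Python using OpenAI’s chat API. It assumes the openai package is installed and an API key is set in your environment; the model name and prompt text are illustrative, not a recommendation.

```python
# Minimal sketch: handing a large language model a task, the way you
# might brief an entry-level analyst. Assumes `pip install openai`
# and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any available chat model works
    messages=[
        {"role": "system", "content": "You are a helpful analyst."},
        {"role": "user", "content": (
            "Write a one-week 8th-grade lesson plan on the water cycle, "
            "with one short formative assessment per day."
        )},
    ],
)

print(response.choices[0].message.content)
```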

Marc: Yeah. And it really is the kind of thing where you have to use it and play with it to start to get a sense of it. Like you were saying, it’s kind of a magical thing. You bring it your problems or your tasks and, almost in an instant, it seems to be able to do things that we previously thought only humans could do. Some of the possibilities in education spring to mind right away. But as with any technology, and I was crowdsourcing questions earlier, people’s minds go to some of the dangers: the Skynet-type fears of AI that we’ve been trained on by sci-fi, but also some practical concerns. If a student can give it the kind of prompt a teacher might assign, like writing an essay, and it’ll write the draft of the essay for you, how should we think about that? There are some obvious upsides you can build on top of, but maybe some downsides too: it could discourage kids who might be falling behind from trying to do the hard work of learning to write. How should we think about that side of the work?

John: A couple of things. One, I think you’re so right. The only way really to understand this technology is to use it and to just keep playing with it. Really just have fun with it, and keep in mind that metaphor: what would you ask a teaching assistant, a twenty-two-year-old fresh out of college who’s just been certified, to help you with? That gives you a sense of how to think about this and how to use it, and also what not to trust. I think one of the biggest problems with this technology is that it sounds right more often than it actually is. In fact, the model that just got released this week, GPT-4, has improved immensely on some of the wrong answers and the hallucinating, producing false answers, that it was doing in the past. It’s not perfect, though. But again, what you would ask a teaching assistant to do, what the 22-year-old would give you, is probably going to have errors in it too, so you always want to double-check it. I do worry, and this becomes part of the ethical issue, that you can imagine a very busy teacher just delegating some tasks and not taking the time to double-check them. And then, all of a sudden, you’re spreading wrong information in the classroom. And in the example you give, we had similar debates in history when calculators came out and when smartphones came out: is this going to make kids lazy, or is it another tool? I think we’re at the beginning stages of figuring out how to use it as a tool in the classroom in a way that’s going to cultivate kids’ thinking and learning, and not as a crutch that replaces students actually learning things.

Marc: Yeah. And it feels like one of these disruptive technologies where maybe it’s not up to us to say whether it should change things or not; it’s going to change things. It’s that Napster-type thing, where the music industry had to adjust because the ability to download songs was now a reality. The Walton Family Foundation has a poll out that shows large numbers of teachers are experimenting with AI right now in the classroom. So do you have some advice on the rights and wrongs of that, of how people should be going about it? Or do we just need to be pretty transparent about it and say, “Here’s what we’re trying, here’s what we’ve learned”? Is this just a moment where we have to experiment and, you know, fail and try again? What do you think the appropriate uses are for AI in education, and how should we be thinking about this technology as a tool to help students and teachers?

John: Yeah, I would say the way a lot of the AI companies talk about this is with a sort of risk-based framework. That’s a little too technical for us in education, but the way to think about it is: for things that carry a lot of consequence, maybe experiment, but don’t rely on it. One of the things that was particularly concerning coming out of the Walton survey was that a large number of teachers were using it to grade students. Now, the survey didn’t really ask, “Grade what?” Is it essays? Is it homework assignments? But given the inaccuracies of these systems, you don’t want to rely on one of these open systems for student grades right now. On the other side, you do want to experiment, you want to play with it, on those lower-consequence sorts of activities. It’s great for lesson planning. It’s great for giving students some feedback as part of an exercise; I just would stop a little bit short of grading. The other thing we’re seeing, and this goes back to the speed of all this: ChatGPT was released just over 100 days ago, GPT-4 came out this week, and already two organizations have used it to introduce their own smart tutors: Khan Academy, which is a nonprofit, and Duolingo. I think what you’re going to start seeing is that a lot of publishers and curriculum companies that don’t necessarily have smart digital tools on top of their content can now add them, because the cost of and barriers to operating an intelligence layer on that stack have just gone dramatically down. Now you can have a smart tutor built on all the Scholastic materials and Scholastic books. You can have amazing formative assessments and activities layered on using this technology, but in a more controlled and gated environment. And even Duolingo and Khan Academy are experimenting with this. They’re prototyping it, releasing it in a limited way, and trying to learn how people are using it, where its strengths are, and where some of its weaknesses are.

Marc: Yeah, and this whole idea of using it for tutoring in a personalized way is obviously really exciting, in part because we know tutoring works for a lot of kids. And we know it’s been really hard to roll out tutoring, especially post-COVID, when we’ve been trying to figure out how to help kids catch back up. There were some numbers released out of Chicago last week saying only 3% of their students were getting tutoring. Part of that may be district bureaucracy and slow-moving change, but I think at least part of it is the whole question of staffing: how are we going to find all these qualified tutors? Do you think tutoring is one of those areas where this might really take off?

John: Yeah, I do. And Marc, I could probably even send you a video of the pre-recording I did, where just four text prompts created a pretty powerful adaptive tutor. It’s rudimentary, but if I can do that with four text prompts, on any subject, in the free version, it’s pretty amazing to think about what curriculum companies can do by training the GPT model on their data. We’re seeing this with Varsity Tutors, which has transcripts of tutoring sessions: they just rolled out an AI-based tutor that is trained in part on the best tutoring sessions they have transcripts from. I think you’re going to see a lot of innovation in that space too. So yeah, this feels like a natural fit. I don’t think it should ever take the place of in-person tutoring for some kids, but it’s been pretty alarming just how deep the academic loss is that students have experienced, the gap between where they are and where they need to be, and that they’re just not getting access to tutors. Some of that is for legitimate reasons: you can’t find tutors, they’re hard to find in rural areas. But this might be a good bridge solution.
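Editor’s Note: John’s four-prompt tutor isn’t reproduced here, but a rough sketch of the idea, steering a chat model into tutoring behavior with a short system prompt and a running conversation loop, might look like the Python below. The prompt wording is our own illustration, not John’s.

```python
# Rough sketch of a prompt-based adaptive tutor. A short system prompt
# steers the model, and the accumulated message history carries the
# student's answers so the model can adjust difficulty. Illustrative only.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are a patient middle-school math tutor. Ask one question at a time. "
    "If the student answers correctly, ask a slightly harder question. "
    "If they answer incorrectly, explain the mistake and ask an easier one. "
    "Never give away an answer before the student has tried."
)

messages = [{"role": "system", "content": TUTOR_PROMPT}]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    tutor_turn = reply.choices[0].message.content
    print("Tutor:", tutor_turn)
    messages.append({"role": "assistant", "content": tutor_turn})

    student_turn = input("Student: ")
    if student_turn.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": student_turn})
```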

Marc: Yeah, and I liked the metaphor you were using earlier: this version of AI is kind of like a new employee. You can put them to work, and maybe you should have some check-ins, a little insight and oversight into how it’s going, but they can be incredibly useful. Is that a fair description?

John: Yeah.

Marc: Yeah.

John: Yeah, I think, again, just picture an entry-level person fresh out of a really good college. They’re smart, and you trust them, but you also don’t totally trust them; that’s why you do checks. But they get you most of the way there. That’s kind of how this technology feels to me right now. And again, that’s part of the reason it might be really great for a teacher, saving time on lesson plans and coming up with activities, but I would be a little bit more hesitant about using it for grading right now. And I think we’re seeing some real-life tutoring experiments. Super exciting. I can’t wait to see what’s going to happen over the next year.

Marc: One of the questions I got when I crowdsourced for this interview cut across a few different categories: students, teachers, curriculum providers, community members, family members. Where do you think this is going to be most disruptive and most powerful? And where, maybe, are we kind of overhyping it?

John: Well, so, the one other type of AI technology we’ve not talked about: the way to think about what we’ve been discussing is that it’s generative text. I give it a text prompt and it generates text back. There are other parallel technologies emerging even more rapidly in some ways. One is text-to-image, so you see things like Stable Diffusion, DALL-E and Midjourney, where you give it text and it creates amazing pictures. Midjourney just had a new version release this year, and for some prompts it’s really difficult to tell the difference between a stock photograph and something created by Midjourney. It does Pixar-style images really well. And there are other technologies coming out for video. So I think this is going to change a lot of the way we create content in schools. It takes a lot of the amazing curriculum and content that districts have made, and now you add a tutoring layer, an intelligence layer, on top of it. You can imagine Scholastic and others being able to write new content at different levels by training on their own data, and then generating the images with one of these image generators. I also think you can see it being really useful for translating. I did the lesson plan example so I could come up with an activity. Imagine Fairfax; I think Fairfax serves something like 118 language groups. You can have it translate your activity into 50 languages in a matter of minutes. And again, it feels like it’s going to take a lot of the tedium out of teaching and help free teachers up to do what they do best, which is instruction and working with kids. So I think that’s where I see a lot of the promise over the next couple of years. As for the hype, I’m not sure I worry about the hype at this point as much as I worry about some of the risks. And the risks are the ethical issues you just raised, and also, again, that this spits out some fairly convincing but totally wrong answers. That’s why I don’t think we should completely rely on it for medical information, or as a trusted education tutor in all situations; you have to check it. You and I were talking about this before: Bing is going to solve that a little bit because it gives footnotes, so when it produces a result you at least know where some of the sources are coming from. I think Google is going to be doing that as well. That helps, but again, a little bit of trust is still needed.
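Editor’s Note: The translation use case John describes is easy to sketch. Assuming the same openai setup as in the earlier examples, looping one lesson activity over a list of target languages might look like this; the activity text and language list are illustrative placeholders.

```python
# Sketch of batch-translating one lesson activity into several languages.
# The activity text and the language list are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

activity = "Draw and label the four stages of the water cycle."
languages = ["Spanish", "Vietnamese", "Korean", "Amharic", "Arabic"]

translations = {}
for lang in languages:
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"Translate this classroom activity into {lang}, "
                f"keeping the same reading level:\n{activity}"
            ),
        }],
    )
    translations[lang] = reply.choices[0].message.content

for lang, text in translations.items():
    print(f"--- {lang} ---\n{text}\n")
```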

Marc: Yeah. And it gets to this question: a lot of times in education we’ve talked about foundational skills, and then we also hope that our kids are becoming critical thinkers, able to assess information and make choices. Does this accelerate the need for that? How does this connect to those kinds of higher-order skills and knowledge?

John: Yeah, I think it accelerates it. Critical thinking, critical analysis, that’s true for teachers, but it’s also going to be true for the kids. One area away from education where I’m really worried about this technology is misinformation. We saw how disinformation spread through Twitter using photoshopped images and bots sponsored by Russia and China. All of a sudden, you can now generate a lot of other persuasive types of misinformation, and it will just spread. It’s going to require all of us, as citizens and as students, to be more critical in how we evaluate the information we’re seeing, whether it’s visual, text or video. I worry we’re in a new era for how deepfakes and misinformation can spread. That’s particularly worrisome.

Marc: Yeah. Yeah. I’m not looking forward to the incredibly persuasive spam text exchanges, or the other things. When people start to deploy the really smart stuff, it’s going to get harder to deal with, I guess. Obviously, 50CAN lives in the world of education policy, and we’ve always tried to create a system that’s adaptive and flexible and learning. Are there any obvious policies that come to mind that could help emphasize the positive aspects of this?

John: It’s a great question, and one I was just talking to some of the companies about this week. You know, the other thing that’s very interesting about this is that usually, whenever a new innovation comes out, you have the CEOs coming to DC and saying, “Don’t regulate us.” This is one of the first times I’ve seen all the CEOs saying, “Regulate us.” They want some guardrails, some rules of the road for what ethical and responsible use of AI should look like broadly. And then, of course, within education we have a lot of specific issues to deal with, whether it’s students using this for cheating on essays, or the ethical implications and the right ways of using this within the teaching profession. So I think the most important thing right now is trying this, generating those questions, and then beginning to have a process by which to think them through. I don’t know if there are policy solutions quite yet, and I’d be a little bit cautious and worried about policymakers getting out over their skis. This is changing so fast that even if they instituted a policy, it would be outdated or would run into a new innovation tomorrow. But beginning to have those conversations and establishing, especially at the school level, a set of principles that are going to guide the use of AI seems really, really important right now. The one other thing that I think is really important is that schools should be very careful about submitting sensitive personally identifiable information about students into these systems. There are some privacy protections, but you don’t want to be taking a lot of sensitive student data and just uploading it into one of these open systems. You can have a lot more trust when it’s with Khan Academy or Duolingo, something that’s a little bit more gated. But I do worry a little bit about teachers taking sensitive data and feeding some of these models with it.

Marc: Yeah, that’s a great caution. Well, thanks, John, for joining us to quickly assess the [laughter] ever-evolving world of AI. I’m sure over the next couple of months it’s going to evolve even more, so I’d love to check back in. Is there any way people can follow along as you think through some of these things?

John: Sure. Well, first, I’m tweeting about it, or just trying to elevate people who are sharing really interesting stuff on Twitter. And I’m doing a little bit of writing about this area of AI. I don’t know, rarely do I get excited about technology, but this is something that’s exciting, so if folks want to talk, reach out. In the early days of this, Marc and I were trading tweets of all the different things we were able to get these systems to do. So follow Marc and me on Twitter.

Marc: Yeah. Good Twitter. We promised to be good Twitter. We’re not going to attack anyone. Just having a little fun with the technology.

John: Hopefully. Hopefully.

Marc: Awesome. All right. Thanks, John.

John: Good. Thank you so much.
