For Joseph Qualls, it all started with video games.
That got him “messing around with an AI program,” and ultimately led to a PhD in electrical and computer engineering from the University of Memphis. Soon after, he started his own company, called RenderMatrix, which focused on using AI to help people make decisions.
Much of the company’s work has been with the Defense Department, particularly during the wars in Iraq and Afghanistan, when the military was at the cutting edge of using sensors and exploring how AI could help train soldiers to function in hostile, unfamiliar environments.
Qualls is now a clinical assistant professor and researcher at the University of Idaho’s College of Engineering, and he hasn’t lost any of his fascination with the potential of AI to change many aspects of modern life. While the military has been the leading edge in applying AI—where machines learn by recognizing patterns, classifying data, and adjusting to mistakes they make—the corporate world is now pushing hard to catch up. The technology has made fewer inroads in education, but Qualls believes it’s only a matter of time before AI becomes a big part of how children learn.
AI is often seen as a key component of personalized education, where each student follows a unique mini-curriculum based on his or her particular interests and abilities. AI, the thinking goes, can not only help children zero in on areas where they’re most likely to succeed, but can also, drawing on data from thousands of other students, help teachers shape the most effective way for each student to learn.
Smithsonian.com recently talked to Qualls about how AI could profoundly affect education, and also some of the big challenges it faces.
So, how do you see artificial intelligence affecting how kids learn?
People have already heard about personalized medicine. That’s driven by AI. Well, the same sort of thing is going to happen with personalized education. I don’t think you’re going to see it as much at the university level. But I do see people starting to interact with AI when they’re very young. It could be in the form of a teddy bear that begins to build a profile of you, and that profile can help guide how you learn throughout your life. From the profile, the AI could help build a better educational experience. That’s really where I think this is going to go over the next 10 to 20 years.
You have a very young daughter. How would you foresee AI affecting her education?
It’s interesting because people think of them as two completely different fields, but AI and psychology are inherently linked now. Where the AI comes in is that it will start to analyze the psychology of humans. And I’ll throw a wrench in here: psychology is also starting to analyze the psychology of AI. Most of the projects I work on now have a full-blown psychology team, and they’re asking questions like “Why did the AI make this decision?”
But getting back to my daughter. What AI would start doing is trying to figure out her psychology profile. It’s not static; it will change over time. But as it sees how she’s going to change, the AI could make predictions based on data from my daughter, but also from about 10,000 other girls her same age, with the same background. And, it begins to look at things like “Are you really an artist or are you more mathematically inclined?”
It can be a very complex system. This is really pie-in-the-sky artificial intelligence. It’s really about trying to understand who you are as an individual and how you change over time.
More and more AI-based systems will become available over the coming years, giving my daughter faster access to an education far superior to any we ever had. My daughter will be exposed to ideas faster, and at her personalized pace, always keeping her engaged and allowing her to indirectly influence her own education.
What concerns might you have about using AI to personalize education?
The biggest issue facing artificial intelligence right now is the question of “Why did the AI make a decision?” AI can make mistakes. It can miss the bigger picture. In terms of a student, an AI may decide that a student does not have a mathematical aptitude and never begin exposing that student to higher math concepts. That could pigeonhole them into an area where they might not excel. Interestingly enough, this is a massive problem in traditional education. Students are left behind or are not happy with the outcome after university. Something was lost.
Personalized education will require many different disciplines working together to solve many issues like the one above. The problem we have now in research and academia is the lack of collaborative research concerning AI from multiple fields—science, engineering, medical, arts. Truly powerful AI will require all disciplines working together.
So, AI can make mistakes?
It can be wrong. We know humans make mistakes. We’re not used to AI making mistakes.
We have a hard enough time telling people why the AI made a certain decision. Now we have to try to explain why AI made a mistake. You really get down to the guts of it. AI is just a probability statistics machine.
Say, it tells me my child has a tendency to be very mathematically oriented, but she also shows an aptitude for drawing. Based on the data it has, the machine applies a weight to certain things about this person. And, we really can’t explain why it does what it does. That’s why I’m always telling people that we have to build this system in a way that it doesn’t box a person in.
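As a rough illustration of the “probability statistics machine” Qualls describes—this is not anything from RenderMatrix or his projects, and every feature name, weight, and number below is invented—a system like this might score weighted features of a student’s profile and turn the scores into probabilities:

```python
import math

def aptitude_probabilities(features, weights):
    """Score each aptitude as a weighted sum of a student's features,
    then convert the scores to probabilities with a softmax."""
    scores = {
        aptitude: sum(w * features[name] for name, w in ws.items())
        for aptitude, ws in weights.items()
    }
    # Softmax: exponentiate and normalize so probabilities sum to 1.
    # Subtracting the max score keeps the exponentials numerically stable.
    max_score = max(scores.values())
    exps = {a: math.exp(s - max_score) for a, s in scores.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

# Hypothetical feature profile for one student (all names invented).
student = {"drawing_time": 0.8, "math_quiz": 0.4, "puzzle_play": 0.3}

# Hypothetical learned weights for two aptitudes.
weights = {
    "artist": {"drawing_time": 2.0, "math_quiz": 0.2, "puzzle_play": 0.5},
    "mathematician": {"drawing_time": 0.3, "math_quiz": 2.0, "puzzle_play": 1.5},
}

probs = aptitude_probabilities(student, weights)
```

The sketch also shows the worry Qualls raises: the weights fully determine the outcome, but nothing in them explains *why* one aptitude outscored another, or guarantees the system won’t box a student in.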
If you go back to what we were doing for the military, we were trying to be able to analyze if a person was a threat to a soldier out in the field. Say one person is carrying an AK-47 and another is carrying a rake. What’s the difference in their risk?
That seems pretty simple. But you have to ask deeper questions. What’s the likelihood of the guy carrying the rake becoming a terrorist? You have to start looking at family backgrounds, etc.
So, you still have to ask the question, “What if the AI’s wrong?” That’s the biggest issue facing AI everywhere.
How big a challenge is that?
One of the great engineering challenges now is reverse engineering the human brain. You get in and then you see just how complex the brain is. As engineers, when we look at the mechanics of it, we start to realize that there is no AI system that even comes close to the human brain and what it can do.
We’re looking at the human brain and asking why humans make the decisions they do to see if that can help us understand why AI makes a decision based on a probability matrix. And we’re still no closer.
Actually, what drives reverse engineering of the brain and the personalization of AI is not research in academia, it’s more the lawyers coming in and asking “Why is the AI making these decisions?” because they don’t want to get sued.
In the past year, most of the projects I’ve worked on, we’ve had one or two lawyers, along with psychologists, on the team. More people are asking questions like “What’s the ethics behind that?” Another big question that gets asked is “Who’s liable?”
Does that concern you?
The greatest part of AI research now is that people are asking that question “Why?” Before, that question was relegated to the academic halls of computer science. Now, AI research is branching out to all domains and disciplines. This excites me greatly. The more people involved in AI research and development, the better chance we have of alleviating our concerns and, more importantly, our fears.
Getting back to personalized education. How does this affect teachers?
With education, you’re still going to have monitoring. You’re going to have teachers who will be monitoring data. They’ll become more like data scientists who understand the AI and can evaluate the data about how students are learning.
You’re going to need someone who’s an expert watching the data and watching the student. There will need to be a human in the loop for some time, maybe for at least 20 years. But I could be completely wrong. Technology moves so fast these days.
It really is a fascinating time in the AI world, and I think it’s only going to accelerate. We’ve gone from programming machines to do things to letting the machines figure out what to do. That changes everything. I certainly understand the concerns that people have about AI. But when people push a lot of those fears, it tends to drive people away. You start to lose research opportunities.
It should be more about pushing a dialogue about how AI is going to change things. What are the issues? And, how are we going to push forward?