E21: Thinking with AI | Erin Solovey | Computer Science
In this episode of The WPI Podcast, Erin Solovey, associate professor in the Department of Computer Science, introduces the concept of “thinking with AI,” a vision of human-centered intelligent systems designed as collaborative partners that enhance decision-making and creativity. She also discusses her research that seeks to make technology more intuitive and responsive. Solovey’s work lies at the intersection of AI, human-computer interaction, and neuroscience and is focused on how people interact with artificial intelligence.
Related links:
Human-Computer Interaction Lab
Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary
Transcript
Jon Cain: What do you think about artificial intelligence? It's a question that elicits so many different answers. Maybe you think of AI as a tool that gives you more time to focus and be creative, or maybe you see the technology as a threat to your job. And then there's the open question of what AI will mean for society as a whole. At the heart of all these questions is the need to explore and understand how people interact with AI, computers and technology. My next guest does exactly that by doing research on human computer interaction. It's work that combines AI, computer science, and neuroscience. Today we're exploring thinking with AI. Hi, I'm Jon Cain from the Marketing Communications Division at Worcester Polytechnic Institute. This is The WPI Podcast, your home for news and expertise from our classrooms and labs. I'm here at the WPI Global Lab in the Innovation Studio on campus. I'm happy to be joined by Erin Solovey. She's an associate professor in the Department of Computer Science at WPI. She's also a recent fellow of the Radcliffe Institute for Advanced Study at Harvard University. Erin, thanks for being part of The WPI Podcast.
Erin Solovey: Yeah, thank you.
Cain: Your work crosses a lot of disciplines and applications from computers to autonomous systems, to education, to accessibility and more. How would you summarize the research you do? Uh, give me the elevator pitch.
Solovey: Yeah. So, at the heart of my work as a human computer interaction researcher is the exploration of enabling technologies to make interactions with technology and AI systems more natural, intuitive, and responsive. So, through interdisciplinary collaborations, my research lab develops technology that is capable of sensing and interpreting behavior, brain activity, and cognitive states to design meaningful interaction between humans and machines across different domains. This could be in education, accessibility, health, safety-critical domains, and more.
Cain: So, what drives you to study how humans interact with technology and specifically AI?
Solovey: So, my path into human computer interaction began as a software engineer shortly after graduating from college with a degree in computer science from Harvard. I remember being invited to observe sessions of the usability tests on the product that I had been working on for about a year. I didn't know it initially, but this meant watching users struggle with the software we built. And it struck me that the problem wasn't these users, it was how we were designing our interfaces. And this realization that technology should be tailored to meet user needs and not the other way around was a really pivotal moment for me. It was also connected to a second realization: there were people in my own company who had a job studying these problems, which I couldn't believe. And so, I started meeting with them, and after several coffee and lunch chats, I decided to shift my career towards human computer interaction and towards this mission of making technology more intuitive and developing emerging technology to the point that it can improve human experiences and performance.
Cain: You know, that's really interesting. I think a lot of people can relate to sometimes the frustration of, uh, software maybe not working the way that they'd hoped or getting frustrated with using something like that. So, it's interesting that there is this whole area of study that looks into that. And I think it's really interesting you use the term interaction. I haven't really thought of my use of technology as being an interaction, so I'm wondering if you can tell us why it's really a two-way street when you're using something like a computer or an AI tool.
Solovey: Yeah. So, when you think about how you use technology today, whether it's a laptop, a tablet, smartphone, smart home, or the car you drive, usually you are giving it some input via a computer keyboard, a mouse, touchscreen, or your voice. And so, every time you use a computer or an AI system, it is also responding to you and you're adapting to it, and that's the interaction. Um, and so designing for that dynamic relationship is central to human computer interaction and human AI collaboration. When we communicate with other people, we also have these interactions. And so, some of the technologies that are being developed are starting to look more and more like the way that we communicate and adapt in our conversations with other people. And so, a lot of human computer interaction is inspired by that as well.
Cain: Wow. So, this is really fascinating to me. I'm wondering, in your research, how do you measure and understand that interaction and what's going on in a person's mind when they're using a technology, a computer, an AI tool, something like that?
Solovey: Yeah, so there's a lot of ways to explore this. In my lab, we do a lot of work combining neuroscience tools with behavioral data, as well as things like user interviews, to understand what people are thinking and feeling during their interaction. We may give them tasks to do and observe them. There are a lot of different approaches, and it really depends on what we're trying to learn about that interaction.
Cain: I understand in some of the research you're using brain sensors. Can you tell me a little bit about how you use them and what information you receive, um, as you're monitoring someone who's using a technology?
Solovey: Sure. Yeah. So, we're in an exciting time. The devices for looking at the brain are getting better. They're cheaper, they're easier to use. And so, for me, this was really an opportunity for human computer interaction research, because at the core we're always trying to understand what is going on in your mind when you're interacting or doing a task. And this gives us the tools to actually do that directly. So, in my lab we have many different types of brain sensors. One is called functional near-infrared spectroscopy, which sometimes is called fNIRS, F-N-I-R-S. It looks almost like a swim cap that has sensors in it, and each sensor can go in a different place on the head. Depending on where that sensor is placed, we get information from that part of the brain. This device is also portable, so someone could be wearing the cap, and it does have wires coming out of it, but they are not tethered. They don't have to sit down. They could walk around. I also use EEG, or electroencephalography. EEG is made up of several electrodes that you put on the head, and at each electrode you can pick up the electrical activity that reaches the scalp from the neurons that are firing in your brain. We also use other physiological sensors, like heart rate, skin conductance, and eye tracking, and all of these tools can help us get directly at this question of what's going on in the mind while you're interacting.
Cain: It must be really interesting to see some of the results. And I imagine there's some surprises as you, you know, discover that maybe there's something going on beneath the surface that wouldn't be obvious from a user reporting, like their own experience with the use of a technology.
Solovey: Yeah, that's true. So, we do also ask people about their experience, and that does give us a lot of information about how they feel the interaction went. Um, but there are some things that we can't get from that, partly because we have to wait until after they're done doing the task. In order to ask them how it was, they have to complete the task, whereas physiological and brain sensing data can be collected all throughout the task. So, we can actually look at what's going on continuously, without interrupting their task. And then we use that as well as the data that we get from interviews and other self-report data.
Cain: A lot of your research, Erin, falls under the umbrella of what you’ve described to me as thinking with AI. Can you tell me what you mean by that term?
Solovey: Sure. When I talk about thinking with AI, I'm referring to a vision where AI doesn't just automate tasks, it becomes a true partner in human thinking. That means designing intelligent systems that help us make better decisions, stay focused, be more creative, and even overcome our own cognitive biases. So, my research explores how we can build AI that understands what's going on in the human mind, using signals from the brain and the body, and adapt in real time. For example, if someone is overwhelmed or distracted, the system could adjust how it presents information or offers support. We're also studying how people collaborate with AI in high stakes situations like supervising autonomous systems or making legal judgements and how to design interfaces that foster trust, transparency, and effective teamwork. So ultimately thinking with AI is about building technology that works with us, not just for us, and that enhances our strengths while respecting our limitations. And it's a way to ensure that AI, as AI becomes more powerful, it also becomes more human-centered.
Cain: Erin, I wanna talk about a few specific research projects that you're working on. Can you tell me about some of the work you've done with students at WPI that's focused on computer systems and workload?
Solovey: Yeah. So, throughout my research, I've always been interested in understanding the role of user interfaces in helping people to do their tasks. And so, one of the things that I look at a lot is a user's workload, particularly their cognitive workload. Depending on the task that someone is doing, there should be more or less workload. Sometimes the work that you have to do is hard work, and so an increase in cognitive workload isn't necessarily bad. But if the workload that you're feeling, or the stress that you're feeling, is due to the user interface you're working with, then maybe we can change that user interface to better support you. Some of the work that we do involves people doing very complex tasks that we expect to cause high levels of workload, but then we might give them different versions of a user interface and see which one helps support them to stay in this engaged level of workload, where you have enough that you're not bored and zoned out, but you're not so overloaded that you're making a lot of errors and mistakes because it's too much. And so a lot of the work is trying to find that middle, and build systems that either are designed from the beginning to keep you in that middle stage or that adapt in real time to the tasks that are coming up, as well as to what we can measure in the brain about your cognitive workload.
Cain: So, it's interesting when you talk about the idea that that interface could adapt and modify based on the feedback of the user and that feedback could be not even, uh, direct or verbal, how does that work? Is that AI picking up on the signals from the user and then the software could be updated to make adjustments on the fly through, machine learning? Is that how that would work or?
Solovey: Yeah, absolutely. So, there's different ways. There have always been systems that adapt to something, whether it's recognizing that you always go to this page or this menu item, and so maybe that menu item becomes more prominent. That's one example of an interface adapting. Uh, the work that I'm looking at, and that I've been doing for a long time, is trying to understand if we could deeply understand the user, the task that they're doing, the environment that they're in, and their goals when interacting or doing a task, so that we can support them in just the right way. It's similar to having a really good teammate who adapts their behavior as they get to know you: you start working really well together, and you know when someone needs help, and when you should or shouldn't interrupt them. And so, we're trying to build responsive systems that do the same thing.
Cain: I’m talking with Erin Solovey from the Department of Computer Science at WPI about Thinking with AI. We’re going to take just a moment away from this conversation to invite YOU, our listeners, to help us with an upcoming episode. We’re going to be diving into holiday nostalgia, and we want to hear from you. So, here’s the question: if you could bring back one beloved item from your childhood to gift—or get—this holiday season, what would it be? Record a voice memo on your phone and send it to us by visiting wpi-dot-edu-slash-plus-voice. That’s wpi dot edu slash – and then the plus sign – and the word voice. We may use your message in the podcast. Be sure to start your message by identifying yourself. Erin, let’s get back to the conversation. I know that you were part of a team recently that explored what judges in the United Kingdom are thinking about as the possible applications for using artificial intelligence in their work, um, and some of their concerns about AI as well. What did the judges tell you?
Solovey: Yeah, this is a fascinating project. We conducted focus groups with judges across the UK judicial system, including five members of the UK Supreme Court, to understand how they perceive the role of AI in their work. And what we found was a nuanced and very thoughtful perspective. So, judges saw potential for AI to improve their efficiency, especially in areas like legal research, summarizing documents, and handling administrative tasks. Some even imagined AI helping with small claims or generating public-facing summaries of judgments. These kinds of tools could help reduce the backlog and improve access to justice. But the judges were also very clear about the limits. They emphasized that justice is fundamentally a human process, especially when it comes to making decisions that require empathy, moral reasoning, or understanding of complex human context. For example, in cases involving child custody or criminal sentencing, they felt strongly that a human judge must be the one making the final call. They also raised a lot of concerns about reliability. Many of them had tried AI tools and found them prone to errors or hallucinations, and they worried about over-reliance on AI leading to the de-skilling of judges, especially newer judges who are still developing their judgment and reasoning skills.
Cain: Was there anything that surprised you from talking with the judges about this?
Solovey: Yeah, so what surprised me most was how open the judges were to the idea of using AI despite their concerns. They weren't dismissive or fearful. Instead, they were asking really thoughtful questions about how to integrate AI responsibly. They talked about the importance of preserving trust in the legal system and ensuring that AI doesn't undermine the legitimacy of judicial decisions themselves. One judge even said that using AI without proper oversight could make the public feel like judges are just rubber-stamping machine-generated outcomes. And that's a powerful reminder that it's not just about whether AI is technically capable, it's about how it affects public confidence in justice.
Cain: In your view, is that why it's so important to make sure that any use of AI in the judicial system is really fully understood and done properly?
Solovey: Yeah, absolutely. I think there are opportunities for it, and it is going to change the way that people do their work in every field, including in the justice system. But it's really important that people are learning about how people do their jobs today without AI, so that when we're building systems that have AI in them, we're not taking away really important parts of the process. Um, and so there are a lot of things that are really human, particularly in this domain, and we wanna make sure that we're designing and testing AI tools in collaboration with judges or lawyers or other people throughout the process, and creating tools that support rather than replace human judgment. This is some of the work that we're planning on doing in the future, where we expand it. We wanna conduct broader surveys across different countries and across different levels. One thing we learned was that the way people see AI being part of their work really depended on the role they're in, what level of the justice system they work in, and what their everyday work looks like. One size doesn't fit all, so you're gonna have to build systems that take that into account. We're also exploring some experimental studies to see how using AI might be influencing decision making, both positively and negatively, because our ultimate goal is to help shape AI systems that are not only effective, but also ethical, transparent, and aligned with the values of the people that use them.
Cain: , I understand you'll also be doing some research about what's happening in the brain when we make decisions. What will you be looking for in this research and how does this relate to thinking with AI?
Solovey: Yeah, so I've always been interested in decision making, and even this work with the judges is related to that. Um, they make a lot of decisions every day, but so do people in many fields. And so, a lot of my work is trying to understand how we can build AI to help people make better decisions, and decisions that they're happier with. This involves understanding where AI can help people and how AI is changing the way we make decisions. Depending on your experience with AI, you may start to feel this, or there are stories of people maybe not thinking as much because they're using AI to make decisions for them. And that's really not what I think is the best outcome here. I don't think we want AI just replacing our human decision making, but I do think it can help augment our decision making. And so we're looking at it both in how we design those AI systems for that purpose, as well as looking at it from the other side, looking at the neuroscience of it and seeing, is our brain working in a different way when we're working with AI, and what does that mean for our decision making?
Cain: And when you're sort of doing the research to understand what's happening in the brain, when you're making a decision, how do you get that information? Is it, um, looking for brain signatures through brain sensors?
Solovey: In the more foundational neuroscience work, what we would be doing is looking at decisions that also have some ground truth. Not all decisions do, but sometimes we know what a good decision is in a particular context and what is not a good decision. And so, we can create tasks for people to do while we're doing neuroimaging, and we can look at what's happening in their brain. We can look at different people, and we can also have their brain data along with the results of their decision making. And so, then we can start to see what does it look like when someone's making a good decision versus, maybe a not ideal decision. And start to learn that and maybe be able to recognize it in more complex decision-making processes as well, so that we can help people to recognize when their decision making might not be optimal.
Cain: I imagine, you know, one of the key factors is maybe if someone is fatigued or if they are not able to really focus on a particular question at hand due to other factors that they're thinking about. Uh, I'm wondering how you would potentially through this research, get at that, um, question of inattentive decision making and how that would be sort of an example of human technology interaction if you are having tools that could, you know, alert people potentially to the fact that they're not at a most attentive state of mind at the moment.
Solovey: We know that people don't always make attentive decisions, and there are lots of papers and articles you can read about decisions made before or after lunch, or after a team wins. We know that people don't always make optimal decisions, and so I'm interested in helping people recognize that. That being said, you know, I am a human computer interaction researcher, and so I would be concerned if we're constantly interrupting people and telling them you're making a good or bad decision all the time. That's not my goal. But depending on the context and the importance of the decision, the right support tool, if it's really critical, might make sure people are thinking through everything, maybe helping them synthesize all of the facts that are related to a case or a decision that they have to make and making sure that they've thought through all of the angles. This is also why it matters to understand how people currently make their best decisions. We want to always be the best version of ourselves, and so if we can make AI that helps us with that, then that would be my goal.
Cain: Can you tell me a little bit about how students get involved in some of the work that you do and whether that's building specific technologies and systems or improving it?
Solovey: So, in my lab I typically have several PhD students as well as master's students and undergraduate students. For the undergraduate students, this often is through the MQPs at WPI, and I'm usually supporting many different MQP projects in my lab across different areas. Some of them come directly from work that's in collaboration with the PhD students in my lab, and some of them are ideas that students come to me with that they want to build out, and I kind of am able to put the human computer interaction lens on them. Sometimes they're collaborations with other faculty where we're co-advising undergraduate students. So, I've done a lot of that, as well as in the summer, typically I'll have undergraduate research students that are in my lab from WPI and sometimes from other labs and other universities as well.
Cain: In the spring, I understand you're planning a new course at WPI called “Thinking with AI.” What's gonna be in the lesson plans?
Solovey: Yes. I'm really excited about this new course, “Thinking with AI,” because it brings together ideas from human computer interaction, artificial intelligence, and neuroscience to explore how we can design intelligent systems that truly work with people. This course looks at how humans think with, through, and alongside AI. We'll dive into topics such as prompt engineering, conversational agents, explainability, and trust, but also explore how cognitive neuroscience can inform better designs. An example might be, how does the brain manage attention or cognitive load? And how can we build AI systems that support rather than overwhelm users? In the class, students are gonna read and critique state-of-the-art research papers each week and take turns leading discussions. We'll cover things from multimodal interfaces and affective computing to ethics and alignment in human AI systems. A big part of the course is a semester-long project where students will design or study a human AI system. That could mean building a prototype, conducting user research, or exploring how AI can augment human decision making or creativity. And the goal is to really push the boundaries of what it means to think with AI, and to do so in a way that's human-centered, ethical, and grounded in real-world needs.
Cain: What inspired you to create the new course?
Solovey: Yeah, so I really enjoy getting to teach special topics courses. For many years I've been teaching a special topics course on brain computer interfaces. It was very tied to my research, but it also helped to bring students from other disciplines in. So, I didn't always have computer science students; sometimes I'd have students in learning sciences or IMGD in my courses. And as I was getting ready to come back from my sabbatical, I was thinking about what is one of the main things on my mind right now, and it is this idea of going beyond just brain computer interfaces and bringing in brain sensing with AI and with human computer interaction. Looking at it from all those directions seemed really exciting to me, because I've also seen that a lot of AI is inspired by neuroscience, and neuroscience work is now done using machine learning and AI. And so those are feeding back and forth to each other. Uh, and the same thing with human computer interaction and AI: you can use AI to build more interesting or useful interactive systems, but we also need that human-centered perspective as we're building AI. So, for me, this intersection of neuroscience, AI, and human computer interaction is just what I'm excited about, and I thought it's the next evolution of the special topics courses that I've been teaching over the years.
Cain: Sounds like a great opportunity for students. It strikes me that your work crosses a lot of different disciplines, but your training is in computer science and I'm wondering if you can talk a little bit about why it's so important for computer scientists who might be developing the technology of the future to be working with experts in a lot of other fields as they do their work.
Solovey: Yeah, so I am trained in computer science. All my degrees have been in computer science. And, um, I think it's important to have that background and understand what computers can do, um, and what's happening underneath. But now computing is touching every single field, um, whether at work as well as people's lives at home. Uh, and it's just everywhere. So, it's, the impact can't be contained within computer science. And so it's really important that if we're gonna build responsible, effective AI, that computer scientists are working with psychologists, educators, ethicists, domain experts, um, and so all of these problems, these real world problems where AI can really help us, um, but they're very complex and so solving them requires these diverse perspectives. For me, I really enjoy having the opportunity to talk about AI with people in other domains. I always learn a lot about how AI is impacting their work and their lives in a way that's very different from when you're just talking to computer scientists.
Cain: A lot of your research has me thinking about AI and the question about what the future of work looks like with AI in the mix. I'm wondering how you envision work changing and what are your hopes and what are your concerns for that?
Solovey: Yeah, this is something I'm really interested in and concerned about. Just recently I was at a conference called CHIWORK, which is about human computer interaction and the future of work. At that conference, we all come together to talk about how human computer interaction is going to look as work changes, and a lot of that is about how AI is changing work. This is something really important to me, and I've been involved in the organization of this conference since its beginning. At this conference I helped to co-organize a workshop about the future of work and how AI can support flourishing at work. So, for me, what I'm interested in, and where I feel like I can contribute, is trying to help people think about where AI can actually have a positive impact. In this workshop, we spent the day thinking about this, and one of the aspects that I think is important is to not only think about success as efficiency, so how efficient we are or how much faster we are now that we're using AI, but also how interested we are in our work, and how meaningful our work is to us. Sometimes AI can actually help. We can build tools where AI is helping us recognize the important parts of our work and helping us to see the value in the work that we're doing. My hope is that we can work towards this place where AI is having a positive influence on our work.
Cain: The question that everybody seems to ask these days is AI good for us or bad for us, or somewhere in between? Uh, what are your thoughts and how does that question come to you and your work?
Solovey: Yeah, and I would say that it doesn't have to be one extreme or the other. Um, it's a little bit of both and also somewhere in between. So, I have a lot of enthusiasm for AI, but a lot of concerns about AI, and people are always asking me how I feel about it. And I do feel both. I think it has a lot of potential to solve a lot of the hardest problems that we have in society and in the world, but there's this huge responsibility that we also have to be careful about how it's being used. And so, just as much optimism as I have, I have pessimism and fear. For me, it's really important that we're always working in a responsible way, and I think that the more people that are involved in the creation of AI systems, and the more perspectives that are being brought in, the better AI systems we'll have. When it's only in the hands of smaller groups of people or companies, there's always this possibility that AI will take one direction, and we want it to serve all of us. I think it's really hard when people make it into this binary of AI is evil, or AI is gonna save us all. That discussion I'm not as interested in, and I feel like in my work, the only way that I can make a contribution is to try to make the AI that I hope we have.
Cain: Well, your work is definitely contributing to the knowledge base in this area and helping us push into the future with possibility and greater understanding. So, um, Erin, thanks so much for taking the time to talk with me about your work and thinking with AI.
Solovey: Thank you so much.
Cain: Erin Solovey is an associate professor in the Department of Computer Science at WPI. You can learn more about this research and our academic programs like AI, Computer Science, and Neuroscience by visiting our website, wpi.edu. This has been The WPI Podcast. You can hear more episodes of this podcast and more podcasts from across campus at wpi.edu/listen. There you can also find audio versions of stories about our students, faculty, and staff. Please follow this podcast and WPI News on your favorite audio platform. You can also ask Alexa to open WPI. This podcast was produced at the WPI Global Lab in the Innovation Studio. I had audio engineering help today from PhD candidate Varun Bhat. Tune in next time for another episode of The WPI Podcast. I'm Jon Cain. Thanks for listening.