#30. The Intersection of AI, Philosophy, and Humanity with Upol Ehsan

44:39

Today's guest is Upol Ehsan. Upol is a researcher at Georgia Tech and an affiliate at the Data and Society Research Institute. Combining AI and philosophy, his work in explainable AI aims to ensure stakeholders who aren't at the table do not end up on the menu. Most notably, his work has created the field of human-centered explainable AI. His award-winning work is also regularly featured in major media outlets like MIT Technology Review, Vice, and VentureBeat. Outside of research, he's also an advisor to Alor Asha, an educational institute he started for underprivileged children subjected to child labor. He's also a social entrepreneur and has co-founded Desh Labs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

Sheikh Shuvo: Upol, thank you so much for joining us today.

Upol Ehsan: Pleasure.

Sheikh Shuvo: Well, Upol, you've had such an interesting career with so many different projects. Can you take a step back and tell us what your career story is? What were some of the inflection points that led to where you are now?

Upol Ehsan: At the highest level, it's non-linear. It's very non-linear. I don't think that at any point after I turned 18 I could have predicted being where I am today. So everything seems like a first time. I went to a small liberal arts school in Lexington, Virginia, called Washington and Lee, and one of the things I fell in love with was philosophy. And as you can imagine, being a South Asian person, I had to add the engineering to satisfy my father that I would get a job, but philosophy was always my love. If I had a trust fund, if I could do anything, that would have been it. So after I got a degree in philosophy and a degree in electrical engineering, I spent some time in the management consulting world, then did a startup. And then I decided to torture myself more, because pain is something I guess I enjoy: I had to do more academics. So then I ended up at Georgia Tech.

Initially, I was working on assistive technology to help children with autism, which is very different from what I do now, and then I was working on some other projects in the Global South. Oddly enough, everything is coming back now. But at some point, there was this inflection point when I joined the lab of my current mentor, and I asked, you know, what do you have for me? And he's like, not much, but here's this one project that no one wants to take because we don't know what it is or what to do with it. And I said, what is it? And he's like, it's this thing called explainable AI. We have no idea what the hell to do with this. This is 2017, right? So 2016, 2017. And I was like, yep, I'm very good at ill-defined, messy things. I will take that off your plate. And the rest is kind of history. I have had a very privileged, but challenging, trajectory through this.

But I think one part of it is that every single bit of who I am today is informed by that journey in my undergrad. To be honest with you, not a day goes by that I don't use my philosophy degree. In fact, the way I get insights into the kind of work that I do is very, very informed by my training in philosophy. Other than the math, I have actually never used my electrical engineering background in my life.

Sheikh Shuvo: Don't tell your dad that.

Upol Ehsan: Yeah, to this day I think my dad completely forgets I have a philosophy degree. Anytime he introduces me, he's like, "Oh, he did this and now he's doing computer science." So the stereotypes have been fulfilled. And I'll never forget the time I told him, 'cause it was a minor at first, 'cause that's how I eased him into it. I'm like, "I'm doing a minor in philosophy. It's not a big deal." And then I said, "I'm going to do a major in philosophy." And he's like, "Why are you not doing it with computer science?" Because he wanted me to do electrical and computer science. I said, "Oh, actually, I'm getting a whole degree in philosophy." Washington and Lee was this very unique place that allowed you to take two degrees, so you walk away with two diplomas, rather than one diploma and two majors. So it was a very unique place. I did the degree in philosophy, and to this day I still thank it for whatever it has done for me.

Sheikh Shuvo: Looking back at those degrees and the power they've had in your framing of problems, what's some of the coursework and the work you did there that really resonated with you and stays with you now?

Upol Ehsan: There were two courses that fundamentally changed how I looked at the world. The first one was Philosophy of Science, and what it fundamentally showed me is that science is nothing but a bunch of people working together, right? Up till then, science had this very pristine, objective aura to it, and it felt like it came out of nowhere. When I took that class, all we did the entire class was read communications between major scientists, what today would maybe be Twitter battles between major scientific figures, right? There was this letter exchange between Schrödinger and Einstein that I'll never forget. In this letter, Schrödinger is telling Einstein, like, "Hey, man, you've got to watch what you're saying, because what you say influences my students. Stop dunking on quantum, this is real, stop saying it's 'spooky action at a distance,'" right? That's what Einstein used to say. And Einstein died a non-believer, and I use the word believer in a very dogmatic way, in quantum mechanics. He didn't believe in it; he thought it was not science. And Einstein comes back and says, you know, "Tell me when it's real, please." And there's this long exchange of letters between these people, and you can really see their beliefs about how the world works. And Schrödinger kind of calls out Einstein, saying, "Hey, it's kind of funny that all of a sudden, after the solar eclipse, which was the first proof that light had mass, because you could see the light rays bend, it's kind of really convenient for you that you, like me, were an anti-realist before that stupid solar eclipse happened, because your entire general theory of relativity was posited on that notion." So, the difference between anti-realism and realism is that anti-realists don't really care about the actual existence of a quote-unquote electron, let's say.

Anti-realism posits the existence of the electron and then builds theories on it. As long as the experimental observations are consistent with the theory, for all intents and purposes, the electron exists. The realists, on the other hand, would say, "No, no, no, there is an existential thing called an electron; we just haven't found a way to see it yet, but it exists." So there is a deep metaphysical tension between the two camps. And you will often see people at the bleeding edge of things be anti-realists, and then, when the science becomes a little more established, become realists. So that course was fundamentally important for me because it showed me that the sociology of science was as important as the work itself. Where you are, who you are, where you come from, what you think about: all of this governs what comes out of you. The second course that changed me was Philosophy of Mind. That one really helped me understand the different camps in theories of mind, right?

And this really dovetails into my work in explainable AI. In fact, there is this paper I read as a junior in undergrad on the "Language of Thought Hypothesis" from Jerry Fodor. And it goes something like this: why is it that we can talk in multiple languages, but somehow still be able to think? Do we think in multiple languages, or is there a language of thought? So Jerry Fodor posited this Language of Thought Hypothesis: that we think in a language called 'Mentalese', kind of a wordplay on 'legalese', right? So there's this language of thought, and then there's a language of communication, and a translation between the two, which, to some extent, I think Chomsky built on later in his theory of how people retain multiple languages.

So, how does all of this play in? My first paper in explainable AI was called "Rationale Generation." One of the fundamental criticisms people had of that work, and the way philosophy got me out of that rubble, was this. Back then, the kind of stuff that we accept as the status quo in deep learning wasn't the status quo; you had to argue for it. Nowadays, nobody has to argue for a lot of things. So one of the critiques was, "Hey, in this technique called rationale generation, you are not really explaining what's going on in the agent's head or the agent's mind. You're just creating a mapping from the internal data structures that they have to some natural language. You're not really explaining anything, right?" This is how I used Jerry Fodor's work to get out of that bind, which was actually one of the main reasons the paper got accepted and ended up charting a very interesting path in explainability. The whole notion of rationale generation kind of comes from that work.

It's like, "Look, as human beings, I do not have conscious access to the neural firings that are going on right now. Right? Yet, isn't it interesting that despite you not knowing what I am thinking in my head, and I am not knowing what exactly is going on in my head, if we speak in a shared language, we can develop a mutual theory of mind. If that is the case between two humans and I replace one of the humans with an agent, what's the difference?" So that was basically how I have been using philosophy and certain things that I've learned in undergrad to this day inform the kind of work that we do.

So, in summary, two things: Philosophy of Science taught me that the sociology of science matters as much as the work itself, and Philosophy of Mind really helped me understand the different camps and how people thought about how we think, from the Cartesian "I think, therefore I am" way of looking at it to the connectionist way of looking at it, which is very informative for deep learning, right? Connectionism was built on neural networks. So those are the two courses that have really played a formative part in how I do things.

Sheikh Shuvo: This reminder that science is just a group of people talking, and is very much infused with opinions and emotion, is a great framework for evaluating the events of this weekend with all the OpenAI drama.

Upol Ehsan: At the end of the day, there are a bunch of people managing their egos.

Sheikh Shuvo: Thank you for lifting the veil on this. One of the things you mentioned, Upol, is growing up in the Global South. You're from Bangladesh and grew up in Dhaka. How do you think this geographic background influenced your research, if at all?

Upol Ehsan: That's a really good question. Thank you for asking; nobody has ever asked me that. I think Bangladeshis are fundamentally very resourceful people. As a nation, we have been vulnerable, but we're very resilient. I think there is a trait in that. In Bangla, there is a word, "bidrohi," which kind of means a rebel. "Rebel" is different, it doesn't really capture the essence; a righteous rebel, I guess. And I think there is that intensity. I take that into a lot of my work, in the sense that there is a moral imperative to a lot of the things that I do. And there is almost a level of resourcefulness and sheer grit: I don't care what the world's going to throw at me, I'm going to keep on going, because my countrymen keep on going, right? The fact that we got liberated in '71, despite not having an army, and kicked out an occupying force; you can't imagine that in the present-day world, right?

But we did it. So there is this underdog mentality that has always helped me. There is a level of resilience that I take from looking at how my country got liberated and kind of flourished. So those are some of the things that inform a lot of the ways I do things, not what I do, but how I do them: trying to be very frugally innovative. That was the other part, you know: how can you do a lot with a little? People who grew up in very resource-rich environments had the luxury of doing a lot with a lot, right? I have always seen people around me doing a lot with very little, and I take that frugally innovative mentality into a lot of the work.

Sheikh Shuvo: I love that; as someone who was born in Bangladesh too, I definitely resonate with it. It would make a great T-shirt. Looking at things you've published, you recently wrote a paper that introduces a great framing called "Seamful Explainable AI," a term I love, and it focuses on how awareness of blind spots and uncertainties in technology can help users leverage the technology better. Could you talk a little more about what you mean by seams, and what drew the inspiration for the project?

Upol Ehsan: Thank you for bringing that up. We just had some good news. The project has now been accepted to a top-tier conference as of a few weeks ago. So hopefully in 2024, I'll get to do a more formal introduction to the world on this. In terms of motivations, this has been something that I've been playing around with for around three years.

And here is the main lightbulb moment. During one of our reading group's discussions, I came across this concept called "seamful design." Matthew Chalmers coined the term and introduced it in the world of ubiquitous computing. The world of ubiquitous computing in the modern day is kind of IoT: connected stores, connected houses, right?

And there is a massive push in the world of UbiComp to make things seamless. In fact, anyone in technology would have heard the phrase "seamless design." You want seamless interactions. But sometimes, in seamlessness, there's also powerlessness, right? When you make a cell phone without a headphone jack to make it seamless, you're taking away the user's power to use an analog earphone; you're forcing them to use a Bluetooth headset. Or take the Dell XPS: some of the models removed the etching on the touchpad. You know how normal laptops have a little etching that you can tactilely feel? They tried to make it seamless, to look really sleek. But then how can people with visual impairments, or any other impairments, who rely on touch figure out whether they're touching the right place or not?

So I started understanding, okay, there's this interesting parallel between seamlessness and black-boxness, because at the end of the day, what is more seamless than a black box? A bunch of inputs and outputs, and you don't know anything about what's going on in between. And that has been the central focus of all of explainable AI. So, in a very weird way, all that we do in explainable AI, I argue, is undoing some of that seamlessness. We're trying to open the box. And I started reading a lot of work that had nothing to do with technology: how do people make seams in woodworking? How do they make seams in clothing? What purposes do seams serve physically? And I started realizing seams are super interesting things. They bind things together. They tell you that two things have been joined. They also hold things together, right? Think about the seams in our clothing. There's certain clothing that you want seamless, and then there's certain clothing where the seams are the beauty. In any really good formal wear, like the one I'm wearing, the seams are actually adding structural aesthetics. So there was this interesting idea: to undo the seamlessness.

Right? So then what? I'd been thinking about this, talking about it to a lot of people. And at some point, I reached out to Matthew Chalmers, and, you know, God bless him, one of the nicest people on earth. I basically emailed him out of the blue one day and said, "Hey, I have this weird, crazy idea. I want to take this thing that you did in the world of ubiquitous computing and transfer it to the world of AI." Because seams mean different things in different contexts, what is a seam in the context of AI? I think an example best illustrates the point. Imagine you're a loan applicant and I'm a loan officer. You and I have known each other for a long time, and I know you should be getting the loan, but my company has recently introduced this system; let's call it LoanDAO. I input your data into this LoanDAO system, and I know you, and I know you should be getting the loan, right? But somehow you're rejected. And here's the problem: the LoanDAO system is kind of explainable. It gives me the top five features that it considered before it tells you no, and none of those features are wrong. So I know that the computer is technically right, but the decision is wrong.

What do I do here? I'm expecting this thing to work for you, right? I don't care where it was made; I'm expecting it to work for you. Now, what if I knew that the system was trained on data in the U.S., and you and I are in Bangladesh? The context in which it was trained does not match the deployment context. That's a seam right there. Seams, in my view, are the difference between expectations and reality: what we ideally expect the technology to do and what it actually does in reality. Ideally, we might expect the loan system to work for all people, everywhere, for all time. In reality, it might only work for a certain group of people, for a certain amount of time, in a certain geography. So this is what a seam is. But now, what do you do with it? Central to this idea of seamful design is not just anticipating these seams or finding them; that's part one. The more important part is what you do with them. So I'll use a non-AI example, and then I'll tie it back into the loan example to show how this helps.

So the non-AI example that everyone listening might relate to, is if I told you and gave you a map of where in your house the Wi-Fi does not work, you would be best able to use the Wi-Fi. In other words, you're not going to hold a meeting in a dead zone. You're going to put a piece of furniture there or something like that, like a bookcase, right? So this is a seam. This is called seamful awareness. You're leveraging the naturally occurring seams in the infrastructure to help you do your job.

So in this case, if I, as the loan officer, knew that you are a farmer, and that wealth is encoded very differently in the geography we're in than in the training dataset from, let's say, the U.S., I might feel more empowered to contest the AI's decision, knowing the AI isn't dumb. Seams give very important information that traditional XAI methods cannot, which is information about the "why not." Traditional XAI is about why something happened, which is all these features, and that's good. But "why did you not give him the loan?" is the question that a seam can answer. The answer is that there is a mismatch between the training and deployment contexts. Other seams include model drift and data drift; all of these are examples of seams.
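To make that concrete in code: below is a minimal sketch, in Python, of a seam as an expectation-versus-reality mismatch, using the loan example from the conversation. The Seam structure, the field names, and the strings are purely illustrative assumptions, not an artifact from the paper.

```python
from dataclasses import dataclass

@dataclass
class Seam:
    stage: str        # where in the AI lifecycle the gap lives, e.g. "training data"
    expectation: str  # what we ideally expect the system to do
    reality: str      # what it actually does in deployment

    def describe(self) -> str:
        return f"[{self.stage}] expected: {self.expectation}; in reality: {self.reality}"

# The seam the loan officer would want surfaced alongside LoanDAO's decision:
context_seam = Seam(
    stage="training data",
    expectation="works for all applicants, everywhere, for all time",
    reality="trained on U.S. data, deployed in Bangladesh",
)
print(context_seam.describe())
```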

Seams are undeniable; they're inevitable; they exist. Seamful design is about making them visible to the user, but also giving the user the power to harness them. In seamless design, seams are often thought of as negative: they're a bad thing, I want to make it seamless, they're a hindrance. What seamful design does is turn that negative into a positive, and help you understand that certain interactions are better off seamful. Because guess what: if you make it seamless, then I, as the loan officer, wouldn't know how to calibrate my reliance. I might over-trust the system, which is a massive issue in AI systems these days, right? Because we don't know where the data comes from. We don't know where anything comes from. We're just given a decision. And even if there's an explanation on top of it, it's very hard for us to make an educated judgment on whether to rely on it or not.

Sheikh Shuvo: It seems that understanding seamful design is very much accepting that these seams exist, and creating a mental model throughout the design process to understand them and see how you can adapt. But say I'm a PM at a company working on shipping an AI feature. Outside of facilitating discussions and making seamful design a cultural shift, are there any aspects of this that you think could be deployed in an automated fashion? How can I scale this across a giant team, beyond a purely cultural shift?

Upol Ehsan: That's a good question. In this paper, we offer a design process that people can use. It's a three-step process. It starts with the notion of breakdowns: breakdowns are what could go wrong. Then you ask, okay, where could they go wrong? Because it's not just one place; there are different stages of the AI's life cycle where you can start predicting the reasons for those breakdowns. The reasons for those breakdowns are the seams. And then there's a filtering step: during this process, you might have highlighted, say, 50 seams. You can't show all of them to the user, right? So which ones do you hide, and which ones do you show? Similar to our clothing, certain seams are hidden and certain seams are shown. This lets you methodically anticipate harms, or anything that could go wrong, and then not just proactively anticipate them, but actually turn the negative into a positive. Like, how do I show the fact that there is a limitation? ChatGPT is a good example: the last update was from September 2021. That's really good information for me to know as a user. It's a seam, because otherwise, unmindfully, I might assume that this is a live system, it's so powerful, I expect it to work on the news that happened today. But if I know the data stops in 2021, that very simple seam allows me to calibrate my trust on things that happened after 2021 and my questions around them.

So there is a process, and Fortune 500 companies, which I can't name because of NDAs, are currently using it to do responsible AI work in massive multinational contexts. And the beautiful part of the design process is that it doesn't have to be done in one day, and it doesn't have to be done by one team. Multiple passes can be taken asynchronously. It's a Mural board that we created, and people can just take it and use it. And then the best part is the deliberation: once you figure out what could go wrong and where it could go wrong, multiple teams, like the data science team, the legal team, the product team, come together and deliberate: can we show this seam to the user? There are seams that are really useful for the user but also open you up to liability. Those are some of the ways large companies, as of today, are using these efforts.
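As a rough sketch of the three-step process just described: (1) anticipate breakdowns, (2) locate where in the AI lifecycle they arise, which surfaces the seams, and (3) deliberate on which seams to show or hide. The structure, field names, and examples below are assumptions for illustration, not the paper's actual Mural-board artifact.

```python
from dataclasses import dataclass

@dataclass
class CandidateSeam:
    breakdown: str              # step 1: what could go wrong
    lifecycle_stage: str        # step 2: where in the AI lifecycle it could go wrong
    reason: str                 # the seam: why the breakdown happens
    show_to_user: bool = False  # step 3: show or hide, decided in team deliberation

seams = [
    CandidateSeam(
        breakdown="creditworthy applicant is denied a loan",
        lifecycle_stage="training data",
        reason="trained on U.S. data, deployed in Bangladesh",
    ),
    CandidateSeam(
        breakdown="answers about recent events are wrong",
        lifecycle_stage="maintenance",
        reason="training data stops at a fixed cutoff date",
    ),
]

# Step 3: after deliberation, surface only the seams that help the user
# calibrate their reliance (e.g., the knowledge cutoff).
seams[1].show_to_user = True
for seam in [s for s in seams if s.show_to_user]:
    print(f"Heads up: {seam.reason}")
```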

Sheikh Shuvo: Outside of ChatGPT's use of dates, are there any other tech products available today that you think are a good example of showing the seams in tech, AI or not?

Upol Ehsan: It's a good point. I think if you start analyzing products, you'll find a lot of seams that they have not shown. ChatGPT, I mean, I'm not saying it's the best seamfully designed product out there, but there is this one interesting thing that I really liked. Any product that talks a bit about its weaknesses in very clear terms up front, that's the base level of seams. Here is a more advanced seam: regulated technology, let's say medical decision support. The model does not update in tandem with the regulations, so there are times when you might have a very outdated model because of regulations. These are very complicated factors. Let's say you're doing a cancer diagnosis, and you know there is a new standard of care, but the AI is recommending an old standard of care. It's not that the AI is stupid or the AI doesn't know; it's that there are very realistic considerations happening around the world that are impacting the AI's recommendation.

So I don't recall any off the top of my head, but I would invite anyone watching to be on the lookout. Are there certain products that are proactively talking about their weaknesses? Are there products that are being very honest about what they cannot do? Seams are these ideas about mismatches and gaps, and they're fundamentally about performance. So, what would be interesting is to think about the Gender Shades work that came out a few years ago, where they showed that certain facial recognition systems cannot reliably recognize non-white, non-male faces. Imagine if that label were attached all the way up top; how would that impact user behavior?

There's another example where I wish there had been a seam involved. An embassy had deployed this automated camera where you just have to look in and it creates the right-dimension photo for you, which is a massive pain at many different embassies. And there was an East Asian gentleman who was continuously being asked to open his eyes. It's a very frustrating example, right? He was opening his eyes, and apparently the computer couldn't recognize that fact. But what if the embassy staff had at least known that this thing was trained on data with a demographic skew that has nothing to do with the population they're working with? I hope they would discontinue using it or update it. These are really good examples of where seamful awareness can really change how you react to technology.

Sheikh Shuvo: Seems like it's very related to the idea of model cards, but going a couple steps deeper and being able to display those in a publicly consumable way.

Upol Ehsan: You're 100 percent right, and you bring up a very important point. System cards, model cards, datasheets for datasets: they're amazing tools for transparency. Seamful design is about actionability. What do you do with it? Now that you know a seam exists, can you use it to your advantage? So it takes that extra step. For people who are more involved in the user-centered design process: if you take one of the diamonds in the double-diamond model, there's a generation process and then a filtering process, right? Most responsible AI tools are very good at the generation part: I will envision harms for you. What seamful design does is build on that and extend it to also tell you what to do with those harms. We have this artifact called Reflection Cards, on which you can show seamful information in a way that is contextually appropriate for the user. This is where that modulation comes in: when do I show which seam, and for what? That is the question teams deliberate, and it's beautiful to see that negotiation going on in team environments. And because it's there on this Mural board, it's very visible to everyone, and you can share it. So it's an artifact that you can share within the organization. That was the part we didn't see coming; it only became clear when things got adopted. I started realizing, holy crap, the end product of the design process is itself a good artifact for the organization to have. I thought the process would be the process, and we'd get rid of this whiteboard at the end. Turns out, no: you can clean it up and make it a very resourceful way of also training people, like new employees coming in.

Because everyone has a seamless-design ideal in their head. I have never seen a single person question seamless design. Never. I have always seen people ask, why seamful design? There is a mismatch, right? One of the two enjoys this unquestioned design-virtue status: everything should be seamless, there's this positivity around it. The moment I utter the word seamful, people are like, what are you trying to do? Why would you do this? No one asks that of the other one, and my argument is that a lot of harms come from not questioning the first one. Like, why are we trying to make this seamless?

Sheikh Shuvo: It all goes back to your philosophy of science and thought class.

Upol Ehsan: Yep. I think so.

Sheikh Shuvo: Now, you first published this paper in 2022, and since then the pace of AI development in particular has only accelerated. Are there any aspects of the design process you recommend in it that you would revise, given the technology shifts?

Upol Ehsan: It's a work in progress; good question. I don't know if there's anything I would revise, because this notion of seamful design in XAI was fundamentally built on an understanding that we are accounting for the design materiality of AI. And what do I mean by that? AI as a design material is very finicky. It's very unpredictable. It's stochastic. Its use cases are unbounded, unlike traditional software, say in cybersecurity. So, because it's a design process, we had to balance specificity with interpretive flexibility. In terms of the design process, I don't see a lot I would change, but the use cases I envisioned for it have definitely evolved. And that reflects the kind of work that I do. I'm not the kind of researcher who drops a paper and moves on to the next project. My whole medium of research is that I have a consulting business, and I intertwine the two: what I make is what I use, rather than making something, dropping it, and moving on. So I'm pretty sure I'll have a better answer for you in maybe three years, once a lot of the things we're currently doing reach some kind of maturation; then we can go back and say, okay, here are the ways we could revise it. Because one really interesting way this is becoming useful, which we didn't really plan for, is red-teaming large language models.

Sheikh Shuvo: Very timely.

Upol Ehsan: Yeah.

Sheikh Shuvo: Tell us more about that. Especially with Biden's recent executive order, red teaming has become a priority for many teams. What are some of the ways that seamful design applies to red-teaming large language models?

Upol Ehsan: First of all, I think red teaming is the right philosophy, because it's a proactive philosophy. Most of the time, when we deal with AI harms, it's been reactive: we deploy something, something bad happens, then we react. So proactive awareness is the right thing, and philosophically I'm very much on board with the idea of red teaming. I think seamful design can help you do red teaming in a better way, and here's how. Traditional notions of red teaming, and often the methods themselves, are incorporated and translated from traditional domains like cybersecurity, where the software as a design material is much more predictable, much more bounded in its use cases, and has far less stochastic and non-deterministic behavior.

When you transfer that to the world of AI, I think that's where things start going very wrong, because now you're dealing with things that have a level of stochasticity. Especially with these large language models, you don't know where they will actually be used, right? What use cases will come out of them? Because they're general-purpose. Seamful design has a native compatibility here: it acknowledges that this is a very finicky design material that is very hard to pin down. So there's no one-and-done process; the design process evolves as the use cases evolve.

So that notion of what could go wrong is a changing answer as you go along. Right now, there are two companies re-imagining what red teaming looks like. These are companies in regulated environments, so they really need to get their act right; otherwise, they will risk client data safety, among other things.

So this is the big difference. I primarily work with non-large tech companies, because I think that's where the need is greatest. If you're a Fortune 500 company that is not FAANG-esque, I think the need for these kinds of things is highest. In the FAANG-esque world, there are many amazing people doing responsible AI work and a lot of in-house teams building these frameworks out, so I think the need is less there. But let's say you're an oil and gas company working with systems that have mission criticality to them: people can die if they goof up. These are the places where I see a lot of this technology, especially seamful thinking, making a real dent. And oddly enough, these are the places where I encounter the least friction, because everyone is trying to incorporate some generative AI feature into their product lines these days. So this process helped them proactively take stock not just of what could go wrong, but of what they can do with that knowledge. If I were to change something from when we wrote this, maybe I'd revise it to add an LLM section that wasn't there, or relevant, at the time we did the actual research. But in terms of the design process itself, I feel it's pretty robust: specific enough to sink your teeth into, but malleable enough that you can morph it into whatever you want it to be.

Sheikh Shuvo: As you think more broadly about this research, or about what you're exploring next, it very much seems to come down to how humans interact with the AI system. Do you draw inspiration from any other fields or modalities, things outside of the tech world?

Upol Ehsan: Absolutely. I think I read more non-tech than tech, and, this might sound weird, I deliberately read old research. One of my criteria is: what happened in the world 30, 40 years ago? What was the state of the art back then? What conversations were people having? For instance, one of the things I'm known for is this notion of social transparency in explainable AI. It was the first work that showed that you could increase the explainability of an AI system without touching a line of code in the model. And that was really provocative at the time. People were not happy to hear it, because it fundamentally says that algorithmic transparency is necessary but not sufficient for explainability. Because if I hold algorithmic transparency constant, and I add social transparency on top, and people's understanding of the AI system goes up, then clearly explainability has this other component. The way I came to that idea was by reading a piece of work published in the 90s in the organizational sciences, not in computer science. And I was like, huh, I wonder what blending that with this would do.

So, I did a lot of work in science and technology studies, STS, and I read a lot of old organizational science work from the dawn of collaborative computing, when computing became about more than one person. Beautiful pieces of work were written then; IBM Research actually produced a lot of them, because they were the first company that had these collaborative systems. And then there are people like Geoffrey Bowker and Susan Leigh Star, who, I think, wrote almost everything that was ever useful here; their work continues to inform a lot of what I do. I frankly take reading to be a very active part of my research in computer science. I would argue we have become a write-only community: we only write, we don't read much. Thanks to my philosophy upbringing, I think reading is a fundamental part of research. And not just reading what's coming out, because, here's another piece of practical advice: if you're trying to create new things, reading the stuff that just came out means you're already behind the game, because those people did that work six months ago. If you read my paper today and you want to know where I'm going next, tough luck, because the paper that came out today is the stuff I was thinking about two years ago. So if you really want to buck the trend and get ahead of it, you are better served by looking at out-there ideas.

So there's this thing called critical technical practice that I ascribe to; it's a concept developed by Phil Agre, who was one of the AI luminaries. The central notion of critical technical practice is: by doing what we're doing today, what are we missing? By, let's say, using a boom mic like this right now, what are we missing? It's questioning the status quo. For example: by focusing too much on the algorithm, by focusing too much on opening the black box, what are we missing? And when you question the center, marginal insights automatically appear. Then the question is, okay, can I fold the margin back into the center, like an omelet almost, right? You're folding. And do you see a different technology? The social transparency work is exactly a byproduct of this thinking. By focusing too much on the algorithm, what are we missing?
We're missing the fact that black boxes by themselves don't do the work.

Humans with black boxes do the work. In organizational settings, when there is collaborative work, a lot of transparency is needed about the socio-organizational context. Who's capturing that? Turns out, no one. Well, can I try to do that? Turns out, yes; very simply: who did what, when, and why. It turns out that when you attach these four things to every AI decision, to give the person a historical understanding of what others have done in the past and why, their understanding of the AI's blind spots starts becoming clearer.

So I read a lot of this old literature. I continue to read a lot of philosophy, especially analytic philosophy. I don't really bother myself with the Immanuel Kants and the Wittgensteins of the world; I had my classes, I know the basics, but I don't deal with them. I really deal with, I think, the 19th century onwards. Remember, philosophy is where cognitive science came out of, and computer science kind of came out of the cognitive world, right? So I look at analytic philosophy, post-19th century, and a lot of American authors, because the world's center of power also shifts. That's the other thing to be mindful of: at certain stages of our world, certain parts had more money than others, right? Wherever the resources are, scientists will go there and do their work. So that's another marker I think about: if I want to study the 1600s, what was the world like in the 1600s? Where did they have the most money? Where did they have the most resources? Then go find literature from that era.
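Picking up the "who did what, when, and why" idea above: here is a minimal sketch of what attaching that socio-organizational context to an AI decision might look like. The record structure and the example entry are illustrative assumptions, not the actual design from the social transparency paper.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionContext:
    who: str    # which colleague acted on a past AI recommendation
    what: str   # what they did (accepted, overrode, escalated, ...)
    when: date  # when they did it
    why: str    # their stated reason

# History shown next to the AI's current recommendation, so the user can
# calibrate their reliance on it:
history = [
    DecisionContext(
        who="Loan officer A",
        what="overrode an AI denial",
        when=date(2023, 3, 2),
        why="applicant's wealth is held in farmland, which the model undervalues",
    ),
]

for record in history:
    print(f"{record.when:%Y-%m-%d}: {record.who} {record.what} because {record.why}")
```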

Sheikh Shuvo: On top of engineer, scientist, and philosopher, you'll need to add a historian just to confuse your dad even a little bit more.

Upol Ehsan: More recently, and this is a very concerted effort, I have started better understanding innovations from Muslim scholars throughout time: things that got whitewashed, things that got adopted without credit, things that got stolen.

Sheikh Shuvo: Algebra and geometry.

Upol Ehsan: Exactly. And our algorithms, for that matter, because I think there's a cultural notion of looking at algorithms. I also read and spend a lot of time with literature, both fiction and nonfiction. Isaac Asimov's work to this day informs 80 percent of what I do, if I break it down enough. So, in addition to academic stuff, I have a very strong inclination toward reading very out-there speculative stuff, because it helps me keep my creativity fresh. One of the things I've realized over time is that once you get somewhat familiar with a field, the field becomes your box, right? And I use these things as forcing functions to look outside the box. I often maintain a willful ignorance of certain things, so as not to get too entrenched in them, so that I can think a little differently.

Sheikh Shuvo: That'll be my very last question for you then, what fiction books are by your bed right now? What would you recommend that the world read?

Upol Ehsan: I continue to read and reread all the works of Isaac Asimov. Robot Dreams, for one; though my favorite book of his has nothing to do with robots: it's called The End of Eternity. It's a weird masterpiece of time travel. The reason I like all of these is that at the heart of all of Asimov's books is this issue of the human desire to control. Think about all his books on robots: they're about how the robots found a way to break the three laws of robotics. And who made those laws? The humans made them. Why? To control the creation they had made. It's a very poignant conversation now, with generative AI and our wondering whether we can control these technologies, whether we can make sure they don't harm us. There's a deep philosophy in these conversations he has about what it means for us to be the creators rather than the created. And I often think: if you have any faith, or if you believe there is a creator, how does my relationship to my creator compare to my relationship to the things that I create? In a very weird way, this informs a lot of the work I do.

I've also been reading Gayatri Spivak's Can the Subaltern Speak?, so I'm now starting to reread a lot of South Asian literature and scholars. Gayatri Spivak is a phenomenal person who pointed out: think about you and me. We are speaking in a quote-unquote colonizer's tongue, so there are emotions we have that the English language cannot afford us. By making people talk, act, and think in ways that are not native to them, are you even giving them the language to express themselves? This becomes very relevant when we think about AI harms, and how technologies developed in the Global North get exported to the Global South without any notion of context, or voices from there. None of the user studies take those people into account: the billion people we are, quote-unquote, trying to serve.

So those have been some of the things I've been reading outside academia: a lot of rereading of Asimov's books, in a new light. Some of my Asimov books are extremely well-worn now, because I've been reading them since high school, and the different layers of my highlights almost chart my own growth. I would highlight a line and comment in the margins, and now I'm like, what the hell was I thinking then? And then there are certain lines where I'm like, oh my God, I can't believe I was thinking like that at that time. My handwriting changes over time, too, so I can roughly date when something was written. I think revisiting stuff we have already read is a very good exercise for seeing your own growth, or lack thereof. If you see the exact same interpretation as you did five years ago, you should probably do some soul-searching.

Sheikh Shuvo: Awesome. Well, that is sage advice, and I'm pretty sure we could talk about this for a couple of days straight, but, Upol, thank you so much for sharing your world. Lots of thoughtful things there to apply day to day. The very first thing I'll be doing after this is downloading some Asimov.

Upol Ehsan: Yeah, I think you should. Thank you for having me. I appreciate it.
