#20. On the Intersection of AI, Ethics, and Philosophy with Ravit Dotan

Episode 20 · 22:43

Sheikh Shuvo:
Hi, everyone. I'm Sheikh, and welcome back to the Humans of AI, where we meet all the wonderful people building the magic that's changing the world. Today, we're meeting a very special guest who's going to talk all about the implications of ethics and responsible AI development. Ravit, thank you so much for joining us.

Ravit Dotan:
Thank you for inviting me. It's great to be here.

Sheikh Shuvo:
Yeah, Ravit, the very first question. How would you describe what you do to a 5-year-old?

Ravit Dotan:
Yes. I would say I'm trying to make computers do better things for us.

Sheikh Shuvo:
Yes. That's great.

Ravit Dotan:
Yeah, I actually have a four-year-old niece, so I should try it on her and see.

Sheikh Shuvo:
Yeah. Awesome. Awesome. You've had such an interesting career, starting in software and product management. Could you tell us what your career story is? And what were some of the inflection points along the way that led to your current focus?

Ravit Dotan:
Okay. I'm going to start chronologically. My very first job actually was in tech. That's how I started my career, but I wasn't thinking about it seriously because my goal was to be a scientist, and I thought, how will I make the most money in the shortest amount of time? I could be a waitress, but no, I'll work in tech because that's going to make me more money.

So I found myself working in tech, but more as a side hustle. That was actually my introduction to the tech industry. One of the first companies I worked at was an AI company. I didn't think about it much at the time; it was more than a decade ago. Yeah, but that's how I got started. Then I did my undergrad in physics and chemistry because I wanted to be a scientist. But as I was doing that, I realized that the questions I was asking were not actually physics questions. They were philosophy questions. So I switched to philosophy. In philosophy, what I was very interested in was reasoning, especially reasoning in science, because science is stereotyped as the best kind of reasoning, right? When we reason well, that's when we do science. Or at least, that's the stereotype.

Towards the end of my undergrad, I was actually questioning that, right? What is so good about the scientific reasoning that we do? What is supposed to distinguish it from other things? That's not a science question, it's a philosophy question. It was one of the reasons I switched to philosophy.

Sheikh Shuvo:
Interesting.

Ravit Dotan:
Yeah, and I was especially interested in values. What is the role of values in this scientific reasoning? It's supposed to be objective or something, but what does that mean? And so, at the beginning of the 20th century... I know it sounds like a tangent, but you will see how it's related to AI in a minute.

Sheikh Shuvo:
This is a great tangent, at least.

Ravit Dotan:
So at the beginning of the 20th century, in the discipline of philosophy of science, people thought that when science is at its best, it's what they call value-free, right? So leave your values at the door, leave your values to the politicians. As a scientist, you're supposed to create something objective that people could later use for political purposes, but not in science; leave it at the door.

So this was the beginning of the 20th century. With time, people realized that this view of science (or, for me, it's not about science per se, it's more about reasoning) is not really a view that makes a lot of sense. As the 20th century evolved, people realized that, no, values are actually an inherent part of science; we can't pull them out. As time progressed, they embraced the role of values more, and I came onto the scene in the 21st century. At that point in the discipline of philosophy of science, people were no longer asking so much whether values are part of science, but rather, given that they are part of science, how do we manage them?

So it's not about shutting them out. It's about identifying them and utilizing them, right? There's a great analogy that values in science are like knives in the kitchen. Yes, if you use them irresponsibly, they're going to be dangerous, but if you don't use them at all, you're just not going to make a lot of progress.

I was engrossed in this whole debate, which focuses mostly on natural sciences like physics and social sciences like anthropology and sociology, and I thought, this is great. However, what about this new discipline, the new kid in town, machine learning? We've been talking about physics for ages.

But this new discipline is a combination, I think, of science and engineering. And I'm hearing a lot of the same voices: it's value-free, it's objective, it's just math; I'm an engineer, leave me alone, go talk to the ethicists if you want to talk about values. And I thought, wait a minute, though. This sounds awfully familiar.

And actually, many of the lessons that we've learned in the philosophy of science, I think, apply to machine learning, too. It's not actually value-free. That's a misconception, and it's a harmful misconception, because when people don't realize how social and political values integrate into their work, that's when the irresponsibility is gonna happen, right? Because they're not gonna notice. They're just not gonna notice. So I did an undergrad in physics and chemistry. Then I switched to philosophy because I realized my questions were actually philosophy questions. And then I realized that I really care about what's going on in this discipline of AI, because the lessons we've been talking about actually apply to machine learning.

So this was during my PhD; I got a PhD in philosophy at Berkeley. My first task was, I want to show that the discipline is not value-free. I want to show how political and social values are actually deeply integrated into what's happening in the discipline. With time, just like the shift that happened in philosophy of science, I, too, realized it's not about showing that the values are there.

It's about managing them. That became increasingly important to me. I didn't want to just show it in a paper; I didn't want to write academic papers arguing for what people could theoretically do. No, I wanted to shift the industry in a different direction. I wanted to shift how the development and use of AI is happening.

So that it would, first of all, recognize the political and social aspects, and then also act to make them better. And that's how I find myself here today.

Sheikh Shuvo:
It seems like the summary of your career is to question everything and get to the roots of what the values are.

Ravit Dotan:
It's funny you should say it that way. In a way, yes. The first thing I learned to question is people who say they question everything.

Sheikh Shuvo:
Nice. That's awesome. On top of your current consulting work and academic research, I saw your work with Bria, where you're working as the responsible AI leader. Could you share any examples of processes or ideas that you introduced that helped shape the product development process there?

Ravit Dotan:
Yeah. Okay. So I'm going to answer this question in a broader way. And maybe I should also say more about what I actually do right now. What I do right now is a mix of things in the field of responsible AI. There's a lot that's unknown when it comes to what companies should actually be doing, so there are two ways that I'm approaching this.

First, working with companies hands-on. Bria, as you mentioned, is one of the companies I work with. And then also doing research, specifically academic research, to figure out what to do. What we have right now in this space is a lot of documents from various companies and consulting firms.

Unfortunately, many of those documents are not as useful. Sometimes they're more like marketing documents, right? Because those organizations want clientele. And academia is not producing as much of this as we might want, so there's a research gap. So that's what I'm doing.

I'm doing a hybrid of research and working with organizations. And I should also say there are two angles to this question. One is, what should companies be doing? How do they become responsible? The other end of the spectrum, though, is what is going to be the motivation? Regulation is great.

However, it's not the only thing. I'm focusing on following the money: all of those actors that funnel money into the AI ecosystem, such as investors, especially VCs, and procurement, especially procurement in public administration. They should also be a part of the responsible AI game. And they are often motivated, but just like the tech companies, they don't know what to do, because there isn't enough out there for them.

So it's the same with what I do. It's research, right? To figure out, conceptually, what they should be doing, and then also working hands-on with organizations. So that's where my answer is going to be coming from, and why I'm going to give you a more general answer. I think a really common mistake that companies make is they start with, "Oh, let's put up some kind of a document. Let's write our AI ethics principles." The problem is that we don't have any indication that implementation follows from those activities. I just finished a research study on this, so I actually have some data about what companies are doing, and we just haven't seen any evidence of this correlation, that companies can start with writing those broad commitments, documents, whatever, and move on to implementing.

It just doesn't really seem to happen in practice as much. There could be many reasons for that. That's, I think, one bottleneck; this approach just hasn't really proved itself. And I think an alternative approach is to do a bottom-up kind of effort. It could start with a team or a product or a feature, right? And try to build up. So that's one aspect of it.

The other aspect, which I think is equally important, is that people often perceive AI ethics as some kind of side project, a nice-to-have. There are two aspects that I want to highlight here. One is that it's a nice-to-have, so anytime something urgent comes up, and something urgent is always going to come up, it's just going to be pushed aside. That's the nice-to-have aspect.

And then the side-project aspect is that it's not integrated into the other things that the company is doing. It's not integrated into the business model. It's not in the revenue streams, which is why it's really difficult to get buy-in, buy-in from senior management and also buy-in from the actual employees who are supposed to be doing the work.

And it's connected with the first point that I made: why aren't the AI ethics documents succeeding? I'm hearing again and again from people: we have this document that we're trying to get buy-in for from management, and it's difficult, and we're struggling to get resources. Why? I think it's because of those two reasons.

It's considered a nice-to-have. People don't see the connection to revenue. And it's a side project. Maybe they think about it as something that the engineering team is supposed to do, but it's not related to marketing or sales. These are the things that are getting in the way, I think.

So my opinion is that if you seriously want to do AI ethics, first of all, understand how it impacts your business model and make it a part of your business plan. Understand how it impacts all teams, because otherwise it's just not going to happen. Also, I seriously believe that it does help with profitability. Maybe that's not the main reason to do it, but it does help.

So if you're not seeing how, you should look into that. I would say first figure out how it fits in your business model. It's just not going to happen otherwise. That's my broad advice.

Sheikh Shuvo:
You mentioned that, based on those public disclosures, there's no evidence for many companies of those commitments actually leading to implementation. But along that spectrum, are there any companies or organizations you discovered that are making quite meaningful changes? Any ideas as to what's different about those companies and their cultures that's made that possible?

Ravit Dotan:
Probably best not to name names. Yeah, but here's a good rule of thumb. When I'm looking at a company and I want to know whether it's doing well or not, the first question I'm asking is: what are they measuring? It's great if they have a document in which they say, we believe in fairness and we're against bias.

We totally want our system to be non-biased. Great. What kind of bias are you measuring? How are you measuring it? These are the first two questions to ask. Unfortunately, given the maturity of the market right now, I'm not even asking what they're doing to mitigate.

What are you measuring? When you say fair, what do you mean? Some companies, in their public documents, do actually talk about that. For example, Duolingo. I live in Pittsburgh, so go Pittsburgh; Pittsburgh is the headquarters of Duolingo, that's why. For one of their products, they have a really detailed AI ethics document, and they talk, in more detail than many companies I've seen, about what they measure and then what they do with those measurements.

I can't speak to what they're doing internally, but that document, at least, gives us some indication of what they do. And there's the question of culture, which you've raised, which I think is key. What is it that makes a culture support responsible AI activity versus not?

To me, that's an empirical question. It's actually one of the projects I have on the stove right now, because there's a theory in the field of organizational psychology called organizational climate theory. It says something that is super commonsensical: when the employees at a company perceive that some facet is important, like privacy, for example.

So when the employees perceive that privacy is important, that's when the company is going to do better at that thing. If the employees perceive that the company genuinely values privacy, they're actually going to work on it, and then the company's going to improve.

So this is the theory, organizational climate theory. And one of my ongoing projects, for which I don't have conclusions yet, is: in the case of AI, what are the things that are going to make the employees perceive that the topic is genuinely important to the company? What I can say, based on my work so far, is what is not doing that job: the principles documents. I don't think they're doing that job. We're not seeing that.

Moreover, as a part of this project, I did start some interviews with practitioners. I asked them, "Do you think AI ethics is important to your company?" And they'll always say, "Yes, of course, so important." And I'll say, "Great, what makes you think it's so important? What are those indicators?" And often I'll hear: yes, we have a team that thinks about that, we have a document about that. And I'll say, "Great. Does it impact your work at all? Your day-to-day work?" And they'll say, "No, it's not my job. It's someone else's job." So you see this gap. What are the factors that are going to push culture in a different way? I have some guesses myself, but the reality is, I do think this is something we should look into empirically, in a rigorous way.

So that's what I'm doing. Okay, maybe this is a bit unsatisfying, so I'll say what I think: it goes back to the measurements. What does the company actually measure, and what are the implications of those measurements? I would start with that.

Sheikh Shuvo:
Good. Then outside of those public disclosures, when you're doing research or advising a company, as you try to help them figure out what to measure, what are the things that you look at internally? Is it product roadmap documents and engineering artifacts? Who are you talking to, to get a holistic understanding of what these responsible AI frameworks should be?

Ravit Dotan:
Yeah. Okay. It's really gonna depend on the company, and especially on the size of the company. If it's a startup, up to say 50 people, it has to be the CEO who's gonna push it forward. The CEO has to be committed. If it's larger, then it really depends. It could be the engineering team; if there's an AI ethics team, it could be them. It could be the privacy team. I actually think it could also be marketing and sales, those kinds of teams, because it's good to make it a part of the business case. Sometimes the external-facing teams, customer success, marketing, sales, they're the eyes and ears of the company.

If they can identify that the customer is concerned about something, they're the ones who are going to bring those priorities into the company. And then I think the effort must involve people from all of those teams that I've mentioned, right? It's gotta be engineering, some kind of external-facing team, and whatever privacy, compliance, or cybersecurity function they have. It needs to be a collaboration between all of those parties. And I think the first step would be to identify, to think seriously about, the risks that are relevant to you or your customers. Ask your customers, ask your investors; you can do a survey.

Or just have a conversation with them. Identify the things that you genuinely think are important for your company values, for your revenue. Identify three, say privacy, fairness, and transparency. Now that you've chosen them, ask all the difficult questions. What will you actually measure? Do not put it all on the engineering team, because it's actually not their skill set, which is why they push back, right?

What they often need, they'll say, is: give us a metric, give us something to optimize on, and we'll do that. So give them something to optimize on. Make those difficult choices. When you say fairness, what does that mean? For example, which groups are included? Is it gender? Is it race? Which genders? Which races? You have to answer those questions, and you have to know what that even means numerically. There are no obviously right choices, though there are some wrong ones. These are value decisions that must be made.
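As an illustration of the kind of concrete, optimizable metric described above, here is a minimal sketch (not from the episode) of one possible fairness measurement, a demographic parity gap across groups. The data, column names, groups, and threshold are hypothetical placeholders; choosing them is exactly the kind of value decision being discussed.

```python
# Minimal sketch (not from the episode): turning "fairness" into something an
# engineering team can optimize against. The column names, groups, and the
# 0.05 threshold are hypothetical placeholders.
import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())


# Hypothetical scored loan applications: 1 = approved, 0 = denied.
applications = pd.DataFrame({
    "gender": ["woman", "woman", "man", "man", "nonbinary", "nonbinary"],
    "approved": [1, 0, 1, 1, 0, 1],
})

gap = demographic_parity_gap(applications, group_col="gender", outcome_col="approved")
print(f"Demographic parity gap by gender: {gap:.2f}")

# The team still has to decide which groups to include, which notion of
# fairness to use (parity of outcomes, equalized error rates, ...), and what
# gap is acceptable. 0.05 here is an arbitrary illustrative threshold.
if gap > 0.05:
    print("Gap exceeds the chosen threshold; flag for review.")
```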

Sheikh Shuvo:
Interesting. I know that a lot of your research also focuses on the role of power dynamics in the interpretation and development of ML, and how political the process can get as people bring their own biases and viewpoints to it. Within that context, what are your views on the recent executive order from last week? Are there any aspects of it that you think carry a certain level of bias?

Ravit Dotan:
Yeah, that's a great question. That executive order is very high-level, so it focuses mostly on what federal offices should be doing. However, something that comes up frequently in the AI space is a focus on large corporations more than on smaller businesses. That is a very important part of the dynamic. Often, it's the big corporations who have a seat at the table. Sometimes they're accused of pushing for regulation to suppress competition. And when I see some of the regulation that comes out, for example, the executive order recommends the NIST AI RMF, the Risk Management Framework, which is, I think, the leading document on AI governance. When you read that document, I think it has more of an emphasis on those large corporations, and it can have unintended consequences for smaller companies. That's not an unavoidable result. I'm actually working on a maturity model based on it that's going to be friendlier to a wider range of companies.

Sheikh Shuvo:
Yeah, that'll be great to see. Cool. Shifting gears a bit: there's obviously been an explosion in AI tools, services, and companies, and many people on the buying side are getting inundated with pitches for all of these AI tools. As a buyer of technology, let's say at any tech company, what are some of the questions I should be asking the AI vendors selling their tech in order to identify potential ethical risks?

Ravit Dotan:
Exactly. I love this question, because I also have a project on this. Ask them what they measure. Do not settle for whether they have AI ethics principles. Do not settle for whether you see leadership commitment. Definitely do not settle for whether they have personnel. Sometimes they'll push back and say, "Yeah, we employ all these people." Great, but what do they do? Do they just try to look good, or is there more? The best thing to start with is to make sure the tool is even accurate. Do not assume the tool is accurate. And if it is accurate, do not assume it's accurate for your target population, because of those biases.
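To make the "is it even accurate for your target population" check concrete, here is a minimal sketch (again, not from the episode) of evaluating a vendor's predictions on a small sample you have labeled yourself, broken down by subgroup rather than as a single aggregate number. The data and column names are hypothetical.

```python
# Minimal sketch (not from the episode): checking a vendor's tool on a sample
# you have labeled yourself, broken down by subgroup instead of one aggregate
# number. The data and column names are hypothetical.
import pandas as pd


def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Per-group accuracy of the vendor's predictions against your labels."""
    correct = df["prediction"] == df["label"]
    return correct.groupby(df[group_col]).mean()


# Hypothetical vendor predictions scored on a small sample from your own population.
sample = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 0, 0, 1, 0],
    "region":     ["US", "US", "US", "US", "EU", "EU", "EU", "EU"],
})

overall = (sample["prediction"] == sample["label"]).mean()
print(f"Overall accuracy: {overall:.2f}")

# An acceptable overall number can hide a much weaker subgroup.
print(accuracy_by_group(sample, "region"))
```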

Sheikh Shuvo:
What would be an example of a good metric to measure? And is there any sense of acceptable benchmarks out there?

Ravit Dotan:
Yeah, no benchmarks, unfortunately. This question is inherently difficult. I can't give you a clear-cut answer, but I'll give you an analogy. Think of DEI. In the DEI world, we want companies to be diverse, but what does that mean? Does it mean employing women? How many? In senior management? Everywhere? Okay, do we want racial diversity? Do we want sexual orientation diversity? What does it mean? What exactly do we need to measure? I can't tell you, because these are deeply political questions. But I can tell you that it makes a difference when a company decides to measure something. When it decides to measure something, it increases the likelihood that it will make progress. Also, when you see what it measures, you can think about how that aligns with your own values. If they only measure how many women they have overall and not how many are in senior management, you can say, that's not good enough for me. Only when something is measured can you start a meaningful conversation. So it's not about having strict benchmarks or knowing exactly what the right thing to measure is, or what the right number is.

Sheikh Shuvo:
Interesting. Okay. That's a good segue to my next and last question for you, Ravit. Let's say I'm a manager at a tech company working on an AI product. I'm really interested in ethics and responsibility, but there's no company-wide practice quite yet. On the individual level, where can I go, and what resources would you recommend for me to learn more about this world and how I can apply it to my day-to-day work?

Ravit Dotan:
Okay. I would say, follow the news. There are many online courses and resources available, and people might be interested in more abstract ethics views. As I said, my PhD is in philosophy, so I encounter people asking, oh, what if we apply consequentialism? I don't think that's the right way to go. The skill you need to develop is practical, and you need to practice it: thinking about an AI system, identifying the impact that system has in the world, and how it can be harmful or helpful. If you are an engineer, think about what you can do. People tend to blame datasets. Yes, datasets are part of the issue, but they're not the only thing; changing the dataset you train on is not the only thing you can do. So I would say, as a first step, think about real case studies. Look at the news. Just today I read about a chatbot that a mental health organization built back in June, and let's just say it fell by the wayside. You can read about something like that and go through this process yourself. What happened, or what could happen? What could I do to fix it?

Sheikh Shuvo:
Awesome. That's great practical advice. That's all I have for now, Ravit. Thank you so much for the wonderful and thoughtful chat.

Ravit Dotan:
Thank you for having me.
