#11. On the Crossroads of AI and Inclusive Change with Cari Miller

15:51

Sheikh Shuvo: Hi, everyone. I'm Sheikh. Welcome back to Humans of AI, where we learn about the people that make AI magic happen. Today, we're chatting with Cari Miller, the founder of the Center for Inclusive Change, where she focuses on AI governance and research. Cari, thank you so much for joining us.

Cari Miller: Thank you so much for having me.

Sheikh Shuvo: You have such a fascinating career, but the first thing I'd like to start with is, if you had to describe your job to a five-year-old, what would you say you do, Cari?

Cari Miller: Well, I try to help make sure that the new things that go inside of computers are safe for everybody to use. That would be the basics of it.

Sheikh Shuvo: That's an amazing answer that I wish was on t-shirts.

Cari Miller: Yeah, right. Cool.

Sheikh Shuvo: Well, going into your career then a bit more, it seems you started in marketing and evolved into being a leader in AI governance. Can you share a bit more about what your career story is and how you got where you are?

Cari Miller: Yeah, sure. My undergraduate degree is in international business, and I took Spanish for a long time; studying cultures was interesting to me. When I got into business, I was in corporate strategy. What that really meant was that I had a front-row seat to digital transformation, because I got into business in the late 90s, and through the 2000s everybody was doing digital transformation. Because of the position I was in, I was always on those teams. I always had a fascination with using data to answer questions. I never wanted to rely only on the qualitative, the things people felt in their gut. Like, let me just scope that out with the data and make sure you're telling me what I think you're telling me. I got sucked into it that way.

My master's degree, though, is in marketing, and that's where my AI trigger started. I feel like that's where I really got hold of the world, in the marketing sphere, when Google took hold and Facebook came along and the algorithms started to consume our souls, you know, and feed us what they wanted us to see. I started to see that in the way some of the advertising was working. But it wasn't until I saw how some of the employment advertising was following the same patterns you would see for high heels or dog food that I thought, wait, you can't do that the same way. That's not fair and equal, and it's going to create prejudice. That was what shifted me into the governance aspect of some of that stuff.

Then, I had been with a couple of these companies for a while, and I wanted to follow a passion project, so I quit and did that. At the same time, I had a secret bucket list item I never really told anyone about, which was to get a doctorate degree, so I followed that too. When I got into the doctorate program, that's when I doubled down on governance and risk mitigation and really honed in on the employment life cycle and the amount of AI that's occurring in that space, which is extremely eye-opening.

Sheikh Shuvo: Oh, yeah, within that doctorate program, it seems like a very new type of degree that's responding to our world right now. Could you talk a bit about what type of coursework was involved there and what your research focus was within that?

Cari Miller: Well, it's actually just a doctorate in business administration, so it's literally just regular business concepts and theories. I love this question because I really debated hard on doing a PhD versus a DBA. A PhD has a lot of philosophy with it, and you're discovering new theories and concepts. A DBA is more practical - rubber on the road. It's about getting things done. That's me; I make stuff happen. It was all the practical stuff, and that's why I went down that road. You take everything practical, take your topic, and figure out how to make it happen. That's why I picked AI governance: I can tell a company exactly how to make governance happen for AI right now.

Sheikh Shuvo: Interesting. And looking at a lot of your recent workshops online, you write a lot about AI procurement. Could you explain what that means?

Cari Miller: What I found is that the more I looked at the governance topic, all roads seemed to lead back to procurement. You need the right people in the right places, with the right talents and skills - that's governance. You need to have policies in place. But then you have to have the protocols, practices, and processes, and that really comes down to what you buy: whether you let the good stuff in or the bad stuff, and how you deal with it once it's in there. Everything kept coming back to procurement for me, so I started looking at how you govern procurement. What I loved about it is that it doesn't take a law or regulation; anybody buying something can decide to ask questions, put rules in place, figure the vendor out, and say we're not going to let the bad stuff in. That's what attracted me to procurement. It's an interesting little nugget to unpack.

Sheikh Shuvo: What are the categories of questions that go into your AI procurement framework? As a buyer of tech, what should I be asking my vendors?

Cari Miller: Well, it really depends on the risk that you're dealing with in the procurement. If you're just trying to buy a little frequently asked questions tool that's going to look at your HR manual and generate responses, that's a pretty low-risk tool. You don't have to worry too much about that. A higher-risk tool would be something you use maybe to help decide salaries or evaluate call center employees. Those are higher risk because livelihood and well-being could be involved.

What I want to know from a vendor is about their responsible practices. First, where did they get their training data? Are they allowed to have it? Did they follow the rules in obtaining that data? Do they have good governance? Do they have policies about their own responsible AI uses? What does their leadership think about these issues?

Then, I want to know more about what's inside their machine. Did they choose the most difficult, black box model they could find, or did they use something more basic like linear regression that they can actually explain? Why did they make those choices? Tell me about those trade-offs. Maybe there's a reason for it, but maybe they don't have a good reason and just wanted to be fancy.

Sheikh Shuvo: Yeah, and when you're advising different companies on their AI governance policies, generally, what team do you start the conversation with? Is this a conversation starting with the C-suite, the engineering team, the product side? How does that look?

Cari Miller: Bring me whoever you want. If you're willing to start something, we will talk. There's been no pattern. It's a lot of, "It's not my fault. It's their problem. They'll do it." It starts wherever it starts, with whoever is willing to raise their hand. It's so new. A lot of times, companies don't think they need it, or they're not sure. So it's a bit like pulling teeth. It's usually one person, kind of timidly going, "I don't know, maybe." Which is fine. You start where you want to start. It's crawl, walk, run.

Sheikh Shuvo: Is there any industry or sector you're focusing on right now?

Cari Miller: No, it's all over the board. It really is. Because it's so new, it's whoever's willing to raise their hand. The technology is pervasive, especially in the employment space, which is where I tend to do most of my work.

Sheikh Shuvo: With AI governance becoming more of a mainstream topic, and more research being done in the field, is there an area of research you're particularly excited about right now that might not be as widely known?

Cari Miller: I can see a trend in HR tech that is bugging me. There's a convergence of things going on. They call it skills-based hiring. Several large companies are creating what they call skills clouds, which is an interesting approach. But I'm not a fan of the way they're doing it, because they haven't provided enough explainability for me to understand what's going on. It feels a bit dangerous. It sounds like what they're doing is scraping skills and assuming skills for individuals. They'll show you some parts of what they've scraped up for you, and some parts they don't. Then these skills proliferate into deciding compensation, raises, bonuses, promotions, job suggestions, project suggestions, and resume-building. It starts to feel a bit like, what are we doing here?

So put that in a box. Similar companies, the same companies, are also creating their own LLMs (large language models). Now you have this skill box, and then you have an LLM that's like, "Oh, now I can ask questions." I don't know how these things are interacting, but they're all driving towards determining compensation, raises, bonuses. It's a lot of AI in a very sensitive domain. I'm just like, what are we doing here? Can I see some explainability, please?

Sheikh Shuvo: Interesting. Let's say I'm an AI team member working on a new feature. As I think about what type of models I should be using, what type of QA and validation work I need, is there anything that comes close as an industry standard for what I should be benchmarking myself against?

Cari Miller: I don't know. There's the Department of Labor, which publishes a site called O*NET. It provides lists of job types and the duties that go along with them. It's very thoughtfully done. Whether or not everybody's using that, I don't know. But to me, it feels like pulling a slot machine arm on some of this stuff. Is it the right job? Did you match the right skills? What does your skill set say? It feels very lottery-like to me right now.

Sheikh Shuvo: When it comes to regulation, that's about as scary as it gets.

Cari Miller: Yeah.

Sheikh Shuvo: The very last question I have for you, Cari, is if I'm just starting my career and I really want to get involved in the world of AI governance, how and where do I get involved?

Cari Miller: There are some colleges out there that are starting to have good programs, but they're kind of sporadic. I believe All Tech Is Human has a list of them. I can tell you how I did it: I just started reading and listening to podcasts. It's an interesting field because it's changing by the minute. Like, I created a program for workplace employees on generative AI and how to use it safely, and before it was even done, I'm like, "Microsoft did what?" It's one of those industries that's moving at light speed. Even if you go to college for it, you still need to find the influencers in the industry, follow them on LinkedIn, listen to their podcasts, and read journals and articles. Just do it slowly. Find a niche that interests you. You can't cover all of it. I like workplace tech and ed tech, and it's hard to keep up with even two domains.

Sheikh Shuvo: And if there are any listeners out there who want to get in touch with you, what would be the best way to find you online?

Cari Miller: Through LinkedIn is probably the easiest, most reliable way. I'm always on LinkedIn.

Sheikh Shuvo: Awesome. Well, Cari, this has been a wonderful conversation, lots of fun stuff to consider. Thank you so much for taking the time.

Cari Miller: Thanks for having me. I appreciate it.
