#14. The Role of Responsible AI in Enterprise with Rob Katz

· 40:30

Sheikh Shuvo: Hi, everyone. I'm Sheikh. Welcome to Humans of AI, where we meet the people shaping the tech that's changing our world. Today's guest is Rob Katz, VP of Product for Responsible AI and Tech at Salesforce. Thanks so much for joining, Rob.

Rob Katz: Thank you for having me, Sheikh.

Sheikh Shuvo: Rob, you've had such a cool career and so many different experiences. But before diving into any of that, I want to ask you my favorite question.

How would you describe your work to a five-year-old?

Rob Katz: Well, that's a great question because I have a six-year-old and a four-year-old. So if you average them together, I would say that I work with other grown-ups to help ensure that the technologies and the tools that we use are designed and built in a way that's safe and inclusive for everybody.

Sheikh Shuvo: That's a great answer. I think your kids might have a bigger vocabulary.

Well, let's take a step backward then. Tell us about your career story, Rob, and how you got where you are.

Rob Katz: Thanks. I think that the real takeaway is serendipity. It's always your friend, or it was for me. I've always had an interest in trying to work at the intersection of business and positive social impact.

So when I was in undergrad, I was very interested in how businesses, political organizations, and economies functioned in a way that could be more equitable for everybody. After I graduated from undergrad, I worked in a nonprofit think tank in Washington, D.C., called the World Resources Institute.

I did some work on how businesses can be engines of growth in emerging markets and developing economies. Then, I made a move to a for-impact venture capital fund called Acumen. At Acumen, I was on the team that helped make the investment decisions and capture the learnings from those investments as we invested in entrepreneurs.

They were building businesses that serve middle- and low-income people, often forgotten in emerging markets, with key products and services like healthcare, agricultural services, clean water, and clean energy. For example, Acumen invested in a couple of entrepreneurs who were building an off-grid energy company in northern India that uses discarded rice husks, typically an agricultural waste product, as the fuel for gen-sets that could be built and operated locally and could displace the use of kerosene in those communities. People were paying for kerosene, and instead of paying for kerosene, they would pay for the appliances they used.

So, one lightbulb cost a certain amount per month; two lightbulbs cost a certain amount per month; two lightbulbs and a fan; two lightbulbs, a fan, and a television; and so on and so forth. It became a great opportunity for an investor. It was not something that a typical venture fund wanted to get into because it was higher risk and it was in a very hard-to-reach area.

Yet, that was something that Acumen was very interested in. During that stint, I had the opportunity and the pleasure to live and work outside of the United States. For someone who grew up in the suburbs of Philadelphia, that was pretty eye-opening in a good way.

While I was living and working in India, I applied to and was accepted into the Stanford Graduate School of Business off the waitlist. Here's to the people who just sneak in under the line. It was a great opportunity and a real privilege to go to the Graduate School of Business.

While I was there, two important things happened. Most importantly, I met my wife. At the time, we were not married, but she was in the class ahead of me. We'll get back to that in a minute. The second is that I was in Silicon Valley during the halcyon days of tech: tech can do great things, tech is good, all things tech are great.

There were a lot of really cool things happening in the tech space in 2012, 2013, 2014, which is when I was in grad school. It was also really clear that the technologies and the companies that were building these technologies and the people that were building these technologies were where the next opportunity for impact was going to lie.

I was able to spend time at Stanford, getting a little bit more aware of and involved in the technology landscape. Then, when Clara, my now wife, got a job in Seattle after she graduated in 2013, I focused my job search on getting a job in Seattle when I graduated in 2014.

I went to the on-campus recruiting opportunity for Amazon that was hiring product managers, and I exercised my one-time option as an MBA student who didn't have a background in tech or product management or computer science to convince a group of Amazon interviewers I was a smart enough generalist to be effective at Amazon.

I passed that test. There you go. The bar was raised, or maybe not, and I got a job at Amazon, which I started in 2014. I moved to Seattle and had a great first job in Amazon working for a wonderful manager who's still a good friend of mine.

Her name is Meg, and we were working on a payments product, nothing to do with investing for impact, not really having much to do with social impact. But it was a really great introduction to Amazon because we were building a product that was competing with Square. So rewind the clock to 2014. It was all about plugging a little dongle into your phone and swiping cards at the food truck or the ice cream store or the farmer's market or the flea market. We were competing with Square, and there's a whole strategic reasoning for why Amazon was in that business.

But the cool thing that I got to do was all kinds of different product management. I was dealing with in-stock management and vendor management. I was dealing with third-party partners that we were assorting the product with. We had a shift in strategy, and then I became the app product manager, so the merchant using the system was actually using the software that my team and I were building together. I was working with iOS apps and Android apps, and I basically got thrown into the deep end of, "Build this technology product that's also a hardware product and a software product." So I got about a year's worth of hardware, software, and other product management, plus an intro to Amazon.

I had the great opportunity to do this training that I don't know if they still do, where they send the management types to the warehouse for a week to learn what it's like to work at the distribution center. I think they call them fulfillment centers, but it's the warehouse. And that was a really cool and eye-opening week. It really gave me a lot of appreciation for the folks who are working in those fulfillment centers. I was very, uh, sore after a week of working in a fulfillment center.

So I was on my honeymoon. This is October 2015, and I got a call, saying, "Hey, um, I hope you're having fun on your honeymoon. Don't worry, you still have a job, but we're shutting down the business that you work on. Amazon's made a strategic decision. We're not gonna be competing with Square on this anymore. RIP Amazon Register. Have a great honeymoon. Call us back if you have questions." So I was like, great, finish the honeymoon, had a good time, came back, and I started looking for another job in Amazon. A VP I liked and respected was working on a secret project, and I went and talked to him. I went and talked to a bunch of other people who I was lucky enough to network in and meet. He couldn't tell me what the secret project was, but he said it would be interesting and hard.

So I took a bit of a flyer, and this is another serendipitous opportunity in this story: I ended up working on the Alexa team, unbeknownst to me, because it had not yet launched. So I was working on the Alexa team, and I was trying to introduce shopping by voice to Alexa, which is a really hard problem, because I bet most of your listeners and viewers have used Amazon.

And it's a very visually rich experience. There's an image. There's a product description. There are ratings. There's all kinds of reviews. There's all kinds of information. People who like this also like. You might like. All this stuff on the Amazon detail page. And you just strip all of that out, and you have to be able to describe what it is someone's buying in three to five seconds. The product descriptions are not written for short pithy, you know, voice responses. So it was a really hard problem. And I got to work on that hard problem for a couple of years.

After a couple of years working on it, we shipped some really cool stuff. I think the feature I'm proudest of is "Where's my stuff," which is package tracking. At the time, I didn't realize this, but one of the number one sources of customer service calls to Amazon was, and it makes sense, "Where's my stuff?" "Where's my package?" It's not here. And that costs Amazon a lot of money because they have real people answering those questions. So we launched a very straightforward feature that would do package tracking by voice. And we figured out that it saved the company a lot of money.

There was then an opportunity to work on another new team. So I guess I like doing these sort of zero-to-one type things. Another great manager, whose name is Beatrice, was building out a new function to do two different things. One was to build features that would target older users and the people who were in their care community. And the second was to deal with and think about privacy, not from the perspective of engineering or compliance, which Amazon was very good at, but from the perspective of product.

I really wanted to go work on the Alexa for aging and Alexa for care community product because I had a great uncle who was a retired professor. He was blind because he had macular degeneration, and he had Parkinson's, so he couldn't use Braille because he had tremors. And he had cancer. He had been dealt a very bad hand. Really smart guy, really gregarious and outgoing. We gave him an Echo device so that he could ask for the time, get medication reminders, and play the Yankees game. Now, I'm a Phillies fan because I grew up outside of Philadelphia, so it pains me to say that my Uncle Bob wanted to listen to the Yankees, but much love, he was a Yankees fan. That's what he wanted to listen to. And actually, the person who liked this device the most wasn't Uncle Bob. It was Aunt Wilma, who didn't have to constantly be telling Uncle Bob what time it was or tuning the radio to the Yankees game. And it gave him a sense of independence. Well, there was one problem.

Beatrice had hired an excellent person to work on this aging and care community thing like a week before I learned about it. So she said, "Well, I can't hire you for that, but do you want to come work on privacy?" I was like, "Well, privacy, who cares about privacy?"

And I did some thinking about it, and I started talking to a few friends, and I started to realize that privacy and data ethics and sort of consent management was a really big problem for the tech industry, and it was about to be a bigger one. It seemed like a really interesting place to go and have some impact.

So I took a flyer on privacy, and I went to work for Beatrice, and I was the first product manager working on privacy in Alexa. We were building features for the Echo devices, like a privacy center. And we were also handling, like, gnarly stuff around compliance with the General Data Protection Regulation.

Everybody and their friends know about the GDPR. And if you're a product manager, or you want to be a product manager, and you want to work in a technology company that uses data, if you don't know about GDPR, you will know about GDPR.

Sheikh Shuvo: And you're smiling because clearly you've been exposed to the GDPR fun. It's my favorite four-letter word.

Rob Katz: My second-favorite four-letter word is CCPA. Or CPRA.

Sheikh Shuvo: Well, HIPAA is not too far behind that.

Rob Katz: SOC2, COPPA. I mean, you got it. All the great compliance acronyms. That's a conversation over bourbon.

Okay, I don't think we have enough bourbon for going that long. But my point is, I got an opportunity to, again, be in this sort of zero-to-one space. So stay tuned on that. And it was great because it was really hard. It's almost an existential question for Alexa: is it listening when I'm not talking to it? And the answer, by the way, is no, it is not listening except for the wake word.

The way the technology works is that it's effectively streaming, in little 30-second to two-minute increments, all of the background noise happening, and it's only listening for the wake word, which can be Alexa, Echo, computer, or Amazon. I think you might now be able to do your own custom wake word; at the time there were just the four, and everything else is white noise or wake word. And then it deletes on the device. It's all on device. It deletes that two-minute stream on a rolling basis. And you can disable the microphones by pressing a physical button on the device, and then the whole thing turns red. At least that's how it was when I was working there, and it was very clear.

And yet we had a terrible time convincing people that it wasn't always listening to them, because, as it turns out, Amazon's recommendation algorithms are really accurate. So if you were talking about something, it was likely that your behavioral data somewhere in the background was indicating that you might actually be interested in buying that new Nalgene bottle.

And then, when you were talking about it with your roommate or your spouse or your friend, and then you went onto Amazon and it said, like, "Are you sure you're interested? Hey, this Nalgene bottle is on sale," you'd be like, "Oh my God, it's listening to me." It's not listening to you. Believe me, Amazon has better things to do with its compute than process all of the background conversations happening in all of the households and places where Echo devices are.

That being said, it was a big barrier to purchase, right? So we were building things like "Alexa, why did you say that?" and the privacy features. And while I was doing that, you know, this is 2018, 2019, I was becoming more and more aware of the question of algorithmic bias, and it had to do with algorithmic bias in speech patterns and algorithmic bias in dialects, because I was working in voice and natural language understanding and speech recognition.

But I also became aware of it through the work of Dr. Joy Buolamwini, who has a new book and who founded the Algorithmic Justice League, and other folks who are working on this, like Meg Mitchell and Timnit Gebru, really wonderful human beings who were raising questions around the accuracy, fairness, and bias of recommendation systems, especially facial recognition.

And the unintended consequences associated with AI systems making consequential decisions in high-risk areas. So, being an Amazonian, I wrote a six-page document arguing that we should be investing in a tech ethics leader inside of Alexa for these reasons. And, you know, it was a pretty good document.

By this point, I had worked at Amazon for five years, and I knew how to write a six-page document at least reasonably well. I was doing this on nights and weekends, which was fine, very Amazon. And the long story short is that the folks with whom I reviewed it were like, "This is a great idea. We should totally do this. But not right now."

And I just mistimed it. And Sheikh, you know this: Amazon goes through these cycles of hiring and digesting headcount, and we were in a digestion period. We had just hired a bunch of folks, and then it was like, "Hey, we don't have a ton of net new headcount to go take a flyer on Rob's good, but unproven, tech ethics or ethical AI concept."

So I was told not now. And right around that time, I got this serendipitous call from someone with whom I had worked in the past. I had worked with her because she worked at a company called Omidyar Network that I had gotten to know at Stanford. I was the course assistant for the head of Omidyar Network, who was teaching a class on the side.

He was teaching a class about impact investing. I had been doing impact investing. I needed a job to help pay for Stanford because Stanford is great. And if you're thinking about going to Stanford for graduate school, especially the business school, do not pass Go, do not collect $200. Go directly to the Stanford Graduate School of Business by all means.

You should do that. It's not cheap. So I needed to supplement my walking-around money with some extra income. And I got a job working for this guy who was the head of Omidyar Network, whose name is Matt Bannick, and Matt had someone on his team help develop the course with me. And her name is Paula Goldman.

And so Paula had recently taken a job at Salesforce as our Chief Ethical and Humane Use Officer. She called me up, and we were chatting, and she said, "Look, I'm curious. Do you know anybody who can translate ethical use principles into the product and software development life cycle?" And I said, "No, I don't, but I would be willing to try myself."

And so I wrote her a proposal about what that would look like. I did a couple of interviews, then a couple more interviews, and I had this serendipitous opportunity to join Paula as one of the first folks in our Office of Ethical and Humane Use of Technology, which was started at Salesforce in 2018. I joined in 2019 to basically build out a new function, back to the zero-to-one.

It was as if I came in, and there was a piece of paper where they had written down, "Embed ethics into the software development lifecycle/product development lifecycle; hire someone," with "someone" struck through and replaced with "Rob." And it was like, "Here's the paper, go figure it out." Which was great, you know; as a product manager and as a nascent leader, it was great to be able to identify and build out a roadmap like that.

And I was very lucky because at Salesforce, we already had on board someone who was at the time working in our AI research organization. And her name is Kathy Baxter. Kathy's our principal architect of ethical AI and is the co-lead of the team that I work on. So she and I co-lead our responsible AI work because inside large companies there are sometimes reorgs, and we brought Kathy in to work for Paula, and I was working for Paula, and Kumbaya.

We've built out this new function in the company. Fast forward four-plus years later, and we're elbows-deep in all things responsible AI and tech. This is a great job. I feel incredibly lucky to have been in the right place at the right time more than once, but to have always followed that North Star from all the way back: how can we use business to create positive impact in the world?

And I feel like we could always be doing more, and we could always be doing better. And it's not without its pitfalls, but we are doing the right thing for the right reasons with the right people. And that is how I feel about my job.

Sheikh Shuvo: What an amazing journey right there. I think I counted nine distinct moments of serendipity, so that's definitely the guiding philosophy there. One of the things I wanted to come back to, Rob, is, um, you mentioned that when you first started at Amazon, it was very eye-opening to go to a fulfillment center and see what operations is like and experience it yourself. When you came to Salesforce and started diving into what building responsible tech meant, what was the equivalent of spending a week in the fulfillment center?

Rob Katz: That's a great question. Salesforce is very different from Amazon because most of Amazon's products that I worked on were consumer-facing. Alexa, even the Amazon Register product, was merchant-facing, and it wasn't too much of a logical leap to see what it would be like to be a merchant using it; we could go talk to merchants, and I would. Clara hated it because we would always go to farmers' markets, and I would always be like, "So why did you choose that payment processor and not ours?"

And, like, "What about this one? And let me see the app." And she's like, "Can we just buy it already and go?" Um, so anyway, it's harder to do user research with enterprise CRM.

But to that point, I became good friends with someone on our team, on our sister team in our research and insights organization, which is our user experience researchers. This person's name is Emily Witt, and she is our primary liaison to the ethical and humane use work inside of user research. So the goal was to really understand what it's like as a Salesforce admin or a Salesforce user to use the products that we're building, and then to really understand the products. Who are our users? They're salespeople, service agents, marketers, website merchandisers, data analysts; you know, it's a business, an enterprise user persona. And so I really got deep into those user personas.

One of the best ways to do that was to read a lot of user research, but also to do work on Trailhead. Salesforce has this free online learning platform called Trailhead. It's all national-park-themed, so you can be a mountaineer and you can be a ranger, and there are all these ranks, and it's gamified. It's actually great because you can learn all about how the software works. And for me as a product manager, if you don't know how the software works, then you can't make good decisions about the software. So I needed to get smart about it.

Sheikh Shuvo: Awesome. Salesforce has been in the game since 2018, early for big tech. Has the focus on responsible AI practices set you apart from other vendors, especially considering Salesforce's extensive sales cycles?

Rob Katz: If anything, it's becoming a crucial differentiator now due to the challenges introduced by generative AI. These challenges touch on trust, data privacy, and ethics. Large language models often produce what we call "hallucinations," though I prefer not to anthropomorphize the technology. These are essentially confidently incorrect responses. They can also generate biased or toxic content, and there's a risk of corporate and personal data seeping into their training data. Major companies are understandably concerned about confidently incorrect answers, especially in critical areas like freight routing or legal decision-making. Bias and toxicity pose genuine threats to reputation. Furthermore, organizations with proprietary data are wary of it getting into large foundation models used for training.

At Salesforce, we're addressing these complex issues and more. Every customer is now inquiring about responsible AI, and thanks to our long-term investments, we have an initial answer, though it's not flawless. We've built specific components and have a detailed plan for how it all comes together. It might seem like an overnight success, but in reality, it's the result of five years of dedicated work, with a touch of serendipity: being in the right place at the right time.

Sheikh Shuvo: Shifting gears, Salesforce is known for its complexity, given its status as one of the pioneers in the cloud world. As you've been working on instilling a culture of responsible AI and integrating these frameworks into the product, what challenges or internal friction have you encountered along the way?

Rob Katz: The main challenge we've faced relates to perception. As a product manager, I'm always focused on reducing friction and streamlining the path from idea to execution, while ensuring safety.

And so, when people perceive the ethical use team as a tax collector, meaning "do this checklist, go through this process, slow down," then we are less effective. So we've positioned ourselves as amplifiers and accelerators to our product and engineering colleagues. In fact, we are in the product organization; we report up through the product organization intentionally because we want to be a partner to our product and engineering colleagues. As a result, we are able to actively work together with those colleagues to create ethical differentiators that are actually helping us get to market faster, not positioning ourselves as a tax collector or a barrier.

Sheikh Shuvo: It's a great perspective. Um, looking at product management in general, in your experience as a PM, do you view doing product management for an AI product to be meaningfully different from a non-AI product?

Rob Katz: It's becoming different. And the reason is that we're moving from a deterministic to a probabilistic product. Right? Product management could be distilled down, at some level, to sitting in a room like this conference room I'm sitting in, using a whiteboard like that one, and figuring out all of the if-then statements. If the user does this, then the product should do that. If the user does this, then the product should do that. Corner case: if the user does this, but this thing fails, then the product should do that. A product requirements document is, at its sort of atomic level, that list of if-then statements, so that we can then work with our engineering and design colleagues to figure out, you know, how to build it.

Generative AI has turned deterministic into probabilistic, where the system itself doesn't always do the same thing based on the input. It does something slightly different. And so we're moving towards a world in which product management actually becomes all about prompt writing, where you are developing natural language instructions to a system to work with another system: within these guardrails, do this thing, and constrain itself this way, or amplify that. It's becoming, I think, much more artistic and much less deterministic. So I anticipate product management having a much different flavor over the next 12 to 18 months, especially when you're working on AI tools, but everybody can do this too. Um, so I think it's a really interesting time to be in product management.

Sheikh Shuvo: What aspects of that have translated into internal PM trainings at Salesforce? Are there any habits people have to unlearn?

Rob Katz: Yeah. I mean, everybody is using our own internal instance of our Einstein tools. Einstein is the brand name for our generative AI technology. And so we have a playground, and people can use it to put inputs in and get outputs out. The other day, someone sent out the meeting notes from a very long, day-long meeting. They took the transcript and their own notes, dumped them into the playground, and generated a summary and the key takeaways. You know, that is something that might have taken a product manager, or whoever drew the short straw, a long time to do, and the system was able to help them do it much more easily. Similarly, you can scan for personally identifiable information in a much more effective way if you're using a tool for it rather than, um, a checklist, you know.

Sheikh Shuvo: Interesting. Uh, looking at another side of Salesforce, it's obviously a very global company. As you think about what ethics by design means and what the day-to-day actions are, are there any regional or industry variances, given Salesforce's global blueprint? Do different product teams look at it differently in different parts of the world?

Rob Katz: We will. Right now, most of our generative AI tools are available in English and in the U.S. We're recording this in late October of 2023, so if you're watching this later, hopefully that won't be true anymore, because we're working on non-English and non-U.S. support. And this is all a, you know, forward-looking statement; don't make your buying decisions based on what you're listening to on this podcast. Nostradamus. Thank you. Um, but from a responsible AI perspective, sociocultural biases are very location-dependent.

And so what may be biased or somewhat biased or toxic or offensive in the U.S. and in English is going to vary across the world, and we have to be very cognizant of that. For example, certain Spanish-language words are perfectly colloquial in Spain but are offensive in Mexico or Colombia. So it's not simply about translation; it's also about localization. And doing that requires a combination of people and technology in order to do that localization in a way that optimizes for accuracy, safety, and honesty.

Sheikh Shuvo: Absolutely. Looking more broadly as well, beyond Salesforce, responsible AI is definitely an industry-wide duty, not just the mission of a single company. What does your collaboration look like with other companies when it comes to building it?

Rob Katz: You know, I'm really lucky. I mentioned Paula before. So Paula Goldman, our Chief Ethical and Humane Use Officer, is on the National AI Advisory Committee, and she chairs its generative AI subcommittee. So she's working at the policy level. Kathy Baxter, whom I mentioned earlier, our principal architect of ethical AI, is working with the National Institute of Standards and Technology on their risk management framework for generative AI. And we recently signed on to the White House's commitments to building safe and responsible generative AI, along with a number of peer companies, like Microsoft and Google and Amazon and others. So that's the official stance.

The unofficial stance is that it's still a somewhat small community, and we all get a chance to talk to one another from time to time, either during workshops or at academic conferences or in sort of stakeholder engagement opportunities, like through the World Economic Forum. There's an upcoming AI governance summit that the World Economic Forum is hosting, and I'm fortunate enough to be on one of their committees and going, so I'll be able to meet a lot of my colleagues and peers from around the industry at that kind of meeting, where we're sharing best practices and approaches. Right now, it's very much a "the more we can all do this well, the better off we all are as an industry" mindset. It's a rising tide kind of situation.

Sheikh Shuvo: Nice. Uh, shifting gears a bit, looking at AI on a personal level, you mentioned using the Einstein playground to do a lot of mundane internal tasks, like summarizing meeting notes. But as an individual, what are some ways that you use AI in your personal workflows?

Rob Katz: Oh, I mean, summarizing stuff is absolutely top of mind. You know, getting started with "write an outline for this document" or "organize this strategy," it can take the first crack at something, especially when I'm starting from scratch. I mentioned a blank sheet of paper earlier, but it's more like a blank Google document, or a blank Quip document, or a blank canvas in Slack.

And because Einstein is within the corporate boundary, I can put corporate information in there to help me with any number of tasks. You know, we're going to start working on next year's plan, so I'm going to ask, "Hey, take this year's plan, and I know we're going to tweak A, B, and C. Write me an outline for next year's plan." It can save me 15 minutes, easy, doing that kind of thing.

Sheikh Shuvo: Wonderful. Um, now my very last question for you, Rob: traveling back in time a bit, let's say I'm a PM just beginning my career and I really want to work on AI tech. What questions should I be asking the companies that I'm interviewing with to make sure that it'll be worth my time?

Rob Katz: Great question. Well, it's hard to say. I mean, as you know, I didn't start out as a product manager, and I kind of backed into it because I needed to move to Seattle for a romantic relationship, uh, which worked out.

So step one is to meet a partner. Step one. Yeah. And it's harder to find a good partner than it is to find a good job, so, you know, always use that life advice. Um, but you know, I would say that curiosity is a great tool for a PM, and the ability to learn is another great tool. When I'm hiring people, I'm always screening for curiosity and willingness to learn, and also willingness to change your mind, because there are certain things that you might want to believe are true, and then you might learn over time that they're not quite as true as you thought. Um, but as far as questions to ask, it's, you know, questions around: okay, what are the foundational technologies that we're going to build on? What are the jobs to be done with AI?

Candidly, there are a lot of really fun, but ultimately trivial, AI toys out there. You know, "Write me a haiku about eating pizza in Seattle." It'll do that. "Generate me an image of Sheikh riding the ferry to Bainbridge Island." It'll do that too. That's cute and fun and good for my text threads with my friends. But, like, that's not actually changing anything. And here's where I think I'm biased.

I work in enterprise software, which is usually kind of a boring kind of thing, right? But this is where generative AI is going to have a huge impact, because it's going to augment human work, and it's going to remove a lot of those rote and repetitive aspects of work, or of school, or of all kinds of things. I mean, Clara and I have a couple of kids, like I mentioned, and we have a lot of rote management stuff that we have to do.

When we have virtual meetings about stuff for the house, we call it a board meeting for our bad nonprofit called our household. Um, and anyhow, it can really augment work, and it can create opportunities for humans to exercise judgment and empathy, but only when you have a good job to be done. When you're looking to be a PM in an AI organization, are they looking at everything as a nail, with AI as the hammer? Or are they actually trying to solve a real problem that you can get your heart and mind behind? I would argue, again, I'm biased, that the tools of AI are much more interesting and worthwhile to address and run after than the toys. Um, but maybe I'm just no fun because I'm a middle-aged guy and I don't like toys anymore. So you can take it for what it's worth.

Sheikh Shuvo: Well, the toys just get more complicated and more expensive, but, uh, they're still fun. Well, Rob, that's all I got for now. This was super fun to chat about together. I'll probably come back to you in 18 months and see whether your predictions about how product management is changing are accurate or not.

Rob Katz: Awesome. Uh, well, I will look forward to my AI meeting assistant, uh, scheduling our next podcast recording episode.

Sheikh Shuvo: Awesome. Thanks, Rob.

Rob Katz: Thank you, Sheikh.
