Inside Hugging Face with Lewis Tunstall
Sheikh Shuvo: Hi, I'm Sheikh and welcome to Humans of AI. Today, we're going to meet Lewis Tunstall, an ML engineer at Hugging Face based in Switzerland, where he focuses on LLMs and research. Lewis literally wrote the book on natural language processing and transformers. We'll dive more into that in just a bit. Lewis, thank you so much for joining.
Lewis Tunstall: Thank you so much for having me, Sheikh. Excited for the chat.
Sheikh Shuvo: Now, the very first question I ask my guests: How would you describe the work that you do to a five-year-old?
Lewis Tunstall: That's a really nice question because I have a two-year-old. Actually, no, a three-year-old now. And he is going to ask me quite soon, "What do you do all day except sit in a chair and stare at a screen?"
Sheikh Shuvo: If you can translate this to a three-year-old instead, more power to you.
Lewis Tunstall: All right, I'll take a stab at it. So, um, let's see. If I was going to explain this, I would say something like: right now we have some machines, or let's call them artificial intelligences, which are essentially ways of trying to understand or compress much of the world's human knowledge behind a single interface. We can nowadays talk to these machines, and a five-year-old has probably already used ChatGPT, so I would say that's the kind of thing I'm talking about. Part of my work has been around trying to create something similar to ChatGPT using open source tools, libraries, and models. The concept of open source is this idea that when we create new technology, it is typically better done in a transparent way. So we publish all the code and all the data that goes into producing these systems. And the hope is that the community can collectively work together to build these very capable machines, which can then help you do your homework. That would be my first shot at it.
Sheikh Shuvo: Sounds like you're going to have a very bright five-year-old. Yeah.
Lewis Tunstall: Yeah, yeah. We'll see, maybe he'll be a carpenter.
Sheikh Shuvo: Yeah, absolutely. Now, taking a step back in time, could you tell us what your career story is and how exactly you landed where you are here at Hugging Face?
Lewis Tunstall: Yeah, so it's a bit of a convoluted story because it wasn't planned out in any way. So, in fact, I'll go even as far back as saying originally when I was younger, I really wanted to be a musician.
Sheikh Shuvo: I see the collection of guitars in the back.
Lewis Tunstall: Exactly, exactly. And so, when I left high school, I actually didn't go to university. I spent a few years playing in a band in Australia. And during the process of writing...
Sheikh Shuvo: What was the name of the band?
Lewis Tunstall: A very silly name called Mad Uncle. And so we were kind of like a punk rock band. And at some point, well, one of my favorite bands and inspirations is a band called Muse, and they wrote an album called Origin of Symmetry. I was reading the lyrics and I was like, "Oh, this seems kind of weird." And it turned out it was based on a physics book called "Hyperspace." So I read this book from the library. It's one of those popular science books that physicists write to give the public a sense of what's going on in physics. And I got really hooked. It was basically about string theory and extra dimensions and all this kind of weird science fiction stuff. And I said to myself, "Oh, maybe I'll go to uni now as a hobby and just do a bit of physics on the side while I play music." And so I did that, but it was a bit of a long journey because I didn't have any math or science background. So I had to do night school for about a year to learn all the basics. And then I went to uni and did my physics undergraduate in Tasmania in Australia, and then later in another place called Adelaide. And I got really addicted to theoretical physics, which is about trying to deeply understand how the universe works at its most fundamental level. When I was finishing my PhD in Australia, as most students do, you apply for postdocs, and I kind of applied all over the world, and one option was to come to Switzerland. And in Australia, you grow up a little bit isolated, so you don't really know your geography very well, and at first I had no idea where Switzerland was. But then I had a look at the map, and especially at Bern, and it looked like a beautiful city, so I said, "Alright, let's come here."
Lewis Tunstall: And, um, for a few years of my postdoc, I was doing what's called particle physics. This is essentially trying to model the interactions that happen in proton collisions at the Large Hadron Collider at CERN. A large part of this is very mathematical – heavy, analytic calculations.
One of my friends showed me one day this piece of code he'd written using TensorFlow, which automated a large chunk of that work. Particle physicists have to work out which detected particles in the collider belong to which category – was it an electron or a photon that was detected? – and then use this to figure out what the new physics beyond the Standard Model is, if such a thing exists. This showed me that neural nets, which I had been ignoring my whole life, actually work, and maybe I should pay attention to them.
Shortly after that, a friend and I said, let's just teach ourselves how to do this stuff. So we joined a Kaggle competition about predicting housing prices in Russia. From there, I got hooked on this idea of teaching neural nets to learn these complex mathematical functions.
I reached a point in my postdoc where I could continue down academia or join industry. I was thinking the chances of having a real-world impact are much higher in industry because most of my work in academia probably would take 50 years to be validated. So I did a massive career pivot into machine learning. My initial job was at a small startup in Bern, Switzerland, where, funnily enough, I did no machine learning, just lots of data engineering and stream processing. Gradually, I started to get my hands dirty in more complex projects.
Then, around 2018, there was a conference in Switzerland where one of the authors of the transformer paper was giving a talk about this brand-new architecture. It was a similar experience to when I was reading the physics book. The auditorium was packed, with people stretching out into the corridor to listen to this guy talk. This idea that you no longer had to train these neural nets from scratch to get good performance was pivotal. I decided to really focus on NLP.
My co-author, Leandro von Werra, and I were working together and said, let's dive deep into these transformers. While we were doing that, we noticed that it was quite hard at the time to get started if you were a practitioner. We were just data scientists trying to apply transformers to use cases beyond text classification. This inspired the idea to write a book where we would try to communicate our experience of making this work in the real world. We wrote a few chapters, and then my wife suggested contacting Hugging Face, as I kept talking about them. She thought it would be a shame if we and Hugging Face each released a book at the same time.
Lewis Tunstall: So, we just cold emailed Thomas Wolf, who's one of the co-founders of Hugging Face, and I was expecting no chance he'd reply to these two unknown data scientists. To my surprise, he said, "Yeah, let's have a chat." He read a couple of drafts and was pretty excited about the project, so we joined forces. This was the start of our collaboration, which, about a year later, led to us joining Hugging Face.
Sheikh Shuvo: Now, if you listen to "Hyper Music" today, does your interpretation of it change?
Lewis Tunstall: A little bit, yes. One of the things you realize, having been on both sides, as the expert and as the non-expert, is that explaining complex topics sometimes requires metaphors or analogies, which don't always capture the content exactly. I've experienced it myself. When I was learning how transformers worked, I was using Jay Alammar's blog posts, which are spectacular conceptual explanations of how transformers work. But then, if you look at the code, you're like, "Well, how do I relate these things together?" So I think there's a learning process that often goes from the metaphorical or conceptual level to the actual technical details. That's a process I'm also quite passionate about: helping others bridge that gap.
Sheikh Shuvo: Yeah, it seems that the best careers out there are definitely the ones defined by pivots and serendipity, and you definitely have lived that.
Lewis Tunstall: Yeah, it was never planned, and it's one of these funny things. Sometimes people ask, "Do you have advice for young developers on what they should do?" And it's really hard because honestly, if I had given myself the same advice, it wouldn't have worked, right? It's very hard. And so the only lesson I've taken from all this is that being open-minded to change has helped. Being willing to jump into a new technology or a new idea has been really helpful. And the other thing that's been super helpful is having friends who want to do it with you. Going on these journeys with someone else is a lot more fun than grinding by yourself.
Sheikh Shuvo: Absolutely. That's great advice right there. As a trained theoretical physicist, do you think that academic background has influenced your perspective or given you a certain lens, versus others, as you tackle challenges in machine learning?
Lewis Tunstall: Yeah, it's a bit hard to say. For example, in my day job, I do very little mathematics. I spend most of my time debugging distributed training systems, which is a very different kettle of fish. The one thing I think physics helps a lot with is this idea of first principles thinking or trying to derive things yourself. A lot of the time when I start a new project, a large part of this is me just trying to understand where the foundational layer is. Like, can I re-implement many of these things for myself?
Lewis Tunstall: And that way you understand how they really work. Then at that point, you switch to a higher level API. I think that kind of methodology of trying to derive things for oneself tends to be pretty common amongst physicists and also many of my computer science colleagues at Hugging Face. I think that might be the main distinguishing factor. And, of course, the math is a bit easier in machine learning than in physics.
Sheikh Shuvo: If it seems easier, that's definitely a win. Yeah, exactly. Cool. Cool. Well, you mentioned the great story of the book that you wrote, "Natural Language Processing with Transformers." The last revised edition was in May 2022. I'm wondering, looking at the past year and a half of explosive change in the field, what parts of that book need another revision?
Lewis Tunstall: Yeah, that's a great question. And I'm also quite happy we wrote the book before ChatGPT because it would have taken us another year to finish it.
Sheikh Shuvo: You had OpenAI Dev Day just yesterday as well. It probably requires some more revisions.
Lewis Tunstall: Exactly. Yes. So, when we wrote the book, the common paradigm around LLMs and natural language processing was that if you wanted a ready-made solution, you used an API, like text-davinci from OpenAI, which was one of the best proprietary language models. But if you wanted to do something very task-specific, like named entity recognition or summarization, you were better off taking a pre-trained language model and fine-tuning it on your data for that task. The book is structured around that paradigm of adapting pre-trained models.
What's changed with ChatGPT, and especially GPT-4, is the realization that once language models become sufficiently capable, they can do many specific tasks without needing to be fine-tuned. This idea of using prompt engineering or in-context learning works really well for these next-generation models. The capability of the proprietary models is very impressive. That doesn't mean you shouldn't still fine-tune models, because there are caveats with proprietary models, like concerns around data privacy, or the model itself changing over time in ways you can't control.
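To make the two paradigms Lewis contrasts concrete, here is a minimal sketch using the Hugging Face transformers pipeline API. The specific checkpoints (dslim/bert-base-NER as a fine-tuned task model, HuggingFaceH4/zephyr-7b-beta as an instruction-tuned chat model) are illustrative choices, not models named in the conversation:

```python
# Paradigm 1 vs. paradigm 2: a task-specific fine-tuned model versus
# in-context learning with a general instruction-tuned model.
from transformers import pipeline

# Paradigm 1: a model fine-tuned specifically for named entity recognition.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(ner("Lewis Tunstall works at Hugging Face in Switzerland."))

# Paradigm 2: steer a capable chat model to the same task with a prompt
# alone -- no fine-tuning, just in-context instructions.
chat = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
prompt = (
    "Extract the people, organizations, and locations from this sentence:\n"
    "'Lewis Tunstall works at Hugging Face in Switzerland.'"
)
print(chat(prompt, max_new_tokens=64)[0]["generated_text"])
```

The trade-off Lewis describes falls out of the sketch: the first model is small and predictable but only does one task; the second handles arbitrary tasks but inherits the caveats of a general-purpose model.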
The other side that has been interesting is the explosion of interest around the alignment of language models. ChatGPT, as far as we know, was trained using reinforcement learning from human feedback, where you collect human annotator labels indicating which response was better or worse for a given prompt.
Lewis Tunstall: There are now at least 10 different algorithms from the academic community trying to find alternative ways of doing reinforcement learning from human feedback. The extra thing I would include in the book would be how to push the capability of fine-tuned chat models to the next level.
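One well-known example from that wave of alternatives is Direct Preference Optimization (DPO, Rafailov et al., 2023), which drops the separate reward model and RL loop and trains the policy directly on (chosen, rejected) preference pairs. Below is a minimal sketch of its loss, assuming the summed log-probabilities of each response have already been computed; the function name and toy values are illustrative:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # How strongly the policy prefers the chosen over the rejected response,
    # measured relative to a frozen reference model so the policy can't
    # drift arbitrarily far from where it started.
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    logits = beta * (policy_margin - ref_margin)
    # Maximize the log-probability that the chosen response "wins".
    return -F.logsigmoid(logits).mean()

# Toy usage with a batch of two preference pairs (values are made up).
loss = dpo_loss(torch.tensor([-12.0, -9.0]), torch.tensor([-15.0, -11.0]),
                torch.tensor([-13.0, -9.5]), torch.tensor([-14.0, -10.5]))
print(loss)
```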
Sheikh Shuvo: Makes sense. Interesting. Now, in terms of growth at Hugging Face, it's become a core part of an ML practitioner's experience. In your two and a half to three years at the company during this hyper-growth phase, what are some of the ways that the company's changed culturally in response?
Lewis Tunstall: It's really interesting because I was there when the company was 30 people, and now we're getting close to 200. So in just about two years, it's grown more than sixfold. Hugging Face was founded as a remote-first company, which seems to help maintain the culture. Everyone is distributed across the world – in Europe, the U.S., China, India. The common mission is to make ML systems as accessible as possible to the community. We have verticals like open source, monetization, product, and research. Each vertical has small teams of two to four people, very autonomous, without the concept of managers. It's very flat. These teams decide what would be the most impactful thing for the community and optimize for that.
In my case, I'm part of the broader research team, focusing on how to make the alignment of language models more accessible. A large part of that is around reading papers, testing ideas to see if they work in practice, and then sharing those results with the community.
So, to wrap up on the culture question: when I joined the company, the culture was very much about autonomy and optimizing for the impact you can have on the community. As we've grown, keeping teams small and autonomous has let us preserve that part of the culture, and also the helpfulness. People will very happily help you on very technical issues, ethics issues, policy issues. That, for me, is one of the biggest strengths of the company – the willingness of people to help each other.
Sheikh Shuvo: That explains the smiling face as the company logo too. Now, one of the things you mentioned, Lewis, is on the model evaluation side. And I know one of the things you've worked on has been the Evaluation on the Hub concept. Since releasing that part of Hugging Face into the world about a year and a half ago, what have been some of the surprising ways, if any, that the community has been using it?
Lewis Tunstall: Yeah, this project started about a year and a half ago. The challenge was to make evaluation of transformer models on the Hub more accessible to the community. For context, we have somewhere between one and two hundred thousand transformer models on the Hub, and around a hundred thousand datasets. A very common question from enterprise customers was which model they should use for their use case. The goal was to centralize that information. With Evaluation on the Hub, we built infrastructure so users can launch evaluations directly from the Hub with just a few clicks through a simple application. We were possibly a bit too early in trying to make this engaging for the community. The usage led to about 8% of models getting evaluated on the Hub. It's not zero, but it's not a huge number.
One of the big surprises was about a year later, when my colleague Ed Beeching created the Open LLM Leaderboard. It's the same concept – you submit your language model to be evaluated on a set of tasks. Because of the huge interest from the community in figuring out which language model is best, it really exploded. This particular application of evaluation is now probably one of the most viral apps on the Hub and one of the most visited pages. It shows that timing is very important. But I think evaluation still remains a pretty big challenge for the open source community, especially with other systems like ChatGPT being extremely good.
Lewis Tunstall: When someone says, "I've got a new chatbot whose responses human annotators prefer as often as ChatGPT's," that can be true. But then if you actually talk to ChatGPT, you quickly realize it is of a different caliber. Figuring out how to make evaluation work in a scalable way for the next generation of language models will remain a big challenge.
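For a sense of what "evaluated on a set of tasks" means mechanically, here is a minimal sketch of how leaderboard-style multiple-choice benchmarks are commonly scored: each answer option is appended to the prompt, and the option the model assigns the highest log-likelihood wins. The tiny model and toy question are illustrative stand-ins, not the leaderboard's actual harness:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # tiny stand-in; the leaderboard runs much larger models
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Summed log-probability the model assigns to `option` after `question`.

    Assumes the question's tokenization is a prefix of the combined
    tokenization, which holds for typical GPT-2 style tokenizers when
    options start with a leading space.
    """
    q_len = tok(question, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(question + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-prob of each token, predicted from the position before it.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    per_token = logprobs.gather(2, full_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    # Keep only the tokens that belong to the answer option.
    return per_token[:, q_len - 1:].sum().item()

question = "Q: In which city is CERN's Large Hadron Collider based?\nA:"
options = [" Geneva", " Paris", " Berlin"]
scores = {o: option_logprob(question, o) for o in options}
print(max(scores, key=scores.get))  # highest log-likelihood option wins
```

Accuracy over many such questions gives one axis of a leaderboard score; the scalability problem Lewis raises is that this style of scoring says little about open-ended chat quality.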
Sheikh Shuvo: I love the leaderboard concept, and I imagine that most executives just have it pinned to their desktops.
Lewis Tunstall: To be honest, a lot of companies now ping us, saying, "Hey, we've got a new model coming. What would it take to get it evaluated or put on the leaderboard?" So, I think it has become a kind of industry standard. When we were developing it, we weren't necessarily thinking about that context. The leaderboard team is working on adding some new benchmarks soon, which will expand the axes that we measure language models on.
Sheikh Shuvo: My suggestion would be to do something with the next World Cup then. Cool. Shifting gears just a bit, looking at the team that you work with at Hugging Face, outside of just raw technical skills and aptitude, what are some of the qualities in your teammates that you look for as you're recruiting researchers and ML engineers? How can someone really stand out at this time?
Lewis Tunstall: That's a great question, and it's more complex in a remote company. Often, I haven't actually met the people on my team face to face. The first thing I personally look for in colleagues is a scrappy attitude: focusing on building something first, even if it's not great, and then iterating on it. That's a great quality to have at Hugging Face because the field moves so fast; it's difficult to adopt the perfectionist attitude often found in academia.
Being humble and helpful are very good personal qualities at Hugging Face. There are many people around me who are much smarter than me, so being humble is a good way to remind yourself that you have a lot to learn. Another element that is harder to measure is how community-driven you are as an individual. Hugging Face wouldn't exist without the open-source AI community's support, and a large part of that involves building good relationships with members of the community.
Lewis Tunstall: Whether it's large companies like Meta, individuals, or the many new indie research labs that have sprung up around chat models, working with them is both very fulfilling and a nice way to remind yourself that what we do is in service of the community. So, those are the three qualities I look for.
Sheikh Shuvo: Absolutely. You mentioned the challenges of just keeping up with the fast-paced nature of the industry. Are there any main ways that you stay on top of the latest research and industry trends? Any favorite blogs, podcasts, events?
Lewis Tunstall: It's basically an impossible task. The number of papers written on LLMs is astronomical. At Hugging Face, our internal Slack acts like a filter where people recommend interesting papers or events. Outside of that, I enjoy a newsletter by Jack Clark, head of policy at Anthropic. He shares a diverse range of news items, from policy to technical developments. For podcasts, I recently got addicted to Dwarkesh Patel's podcast, with its long interviews often focused on existential risk. It's helpful for understanding the perspective of those concerned about AGI. Another resource I find useful is a blog called Shtetl-Optimized by Scott Aaronson, covering everything from quantum computing to LLMs.
Sheikh Shuvo: Awesome. Those are great recommendations. I'll include links to those in the show notes. The last question I have for you, Lewis, is: since your inspiration largely started with music, if you were to go back to Muse or any other band, what song would you say best characterizes the AI space right now?
Lewis Tunstall: That's a really good question. Let me think. So, the field feels extremely frenetic. Every week there's a new model landing, typically on the Hugging Face Hub, which is the next state of the art. Then people go wild, adapting it for chat models and stuff like that. At the same time, OpenAI and Anthropic are making significant progress towards improving the capabilities of language models. So, on the one hand, you've got the scrappy community indie hackers style, and then you've got the large-scale efforts. Something that captures that would probably have to be punk. So maybe like the Offspring or early Sex Pistols.
Sheikh Shuvo: Sex Pistols?
Lewis Tunstall: Yeah. Or at least for me, the Offspring was one of the big influences. So let's say maybe "The Kids Aren't Alright."
Sheikh Shuvo: That's a song that will bring back memories. Awesome. Cool. Well, Lewis, thank you so much for your time and for sharing about your world. It's been lovely to connect with you and learn what your inspiration has been.
Lewis Tunstall: Thank you very much, Sheikh. It was a pleasure to be here.