Episode 2240: Parmy Olson on the race for global AI supremacy between OpenAI and DeepMind
It’s the race that will change the world. In Supremacy, shortlisted as one of the FT’s six best business books of the year, Bloomberg columnist Parmy Olson tells the story of what she sees as the key battle of our digital age, between Sam Altman’s OpenAI and Demis Hassabis’ DeepMind. Altman and Hassabis, Olson argues, are fighting to dominate our new AI world, and this war, she suggests, is as much one of personal style as of corporate power. It’s a refreshingly original take on an AI story which tends to be reported with either annoyingly utopian glee or equally childish dystopian fear. And Olson’s narrative on our brave new AI world is a particularly interesting take on the future of Alphabet, DeepMind’s parent corporation, which, she suggests, might, in the not-too-distant future, have Demis Hassabis as its CEO.

“There's a very human story behind the development of AI.” -Parmy Olson

TRANSCRIPT:

AK: Hello, everybody. A few weeks ago, about three weeks ago, the Nobel Prizes were awarded. And it was the year for AI in physics. John Hopfield and Geoffrey Hinton, Geoffrey Hinton being known as the godfather of AI. Hinton had worked for Google for a while, and then in chemistry, the prize went to three scientists, including Demis Hassabis and John Jumper of Google DeepMind. Hassabis is a remarkable fellow on many different levels. One person who, I think, follows Hassabis with a great deal of care and interest is my guest today, Parmy Olson. She's a London-based Bloomberg opinion columnist and the author of a very intriguing new book, Supremacy: AI, ChatGPT, and the Race That Will Change the World. Parmy is joining us from the Bloomberg office in London. Parmy, would it be fair to call this new book, which has actually been shortlisted for the F.T. Business Book of the Year Award, a kind of parallel narrative of Demis Hassabis at DeepMind and, of course, Sam Altman at OpenAI? 
Is that the narrative of your book? PARMY OLSON: That's a big part of it. I wanted to tell the story of the AI boom and some of the possible risks that could come from AI, particularly around the control of AI, by talking about the humans behind it. So, I think there's a very human story behind the development of AI. And so, that's why I structured the first half of the book as a tale about the careers and lives and accomplishments, and failures as well, of Demis Hassabis and Sam Altman, including their rivalry. AK: Yeah, it's interesting and kind of ironic given that AI's about smart machines. Some people fear that it might turn us humans into footnotes, some people have suggested that AI is our last invention. And Hassabis has always been presented as the good face, the nice guy, obviously a genius, but at the same time quite reasonable. Whereas, of course, Altman is a much more controversial figure. He's not quite Elon Musk, but he's certainly closer to being like Musk than Hassabis. Is that a fair generalization, or do you reveal that Hassabis is actually a rather more complicated figure than his public persona suggests? PARMY OLSON: Yes, we could say that. I mean, first of all, I would say that publicly, in terms of how both men position themselves and come across, I think Hassabis comes across as a nice guy and someone who is very scientifically minded and very focused on pushing ahead scientific frontiers and discovery, whereas Sam is much more of a business person. You could see him more as a capitalist, someone who really wants to grow his power and influence. Demis is a little bit more driven by prestige. He has wanted, for years, to win a Nobel Prize. That was very much a— AK: Who doesn't, Parmy? We all want Nobel prizes, except most of us, we're not going to be considered, I think, by the committee. 
PARMY OLSON: Sure, but which CEOs actually sit down with their engineers and say "the way we're going to measure success is by winning two or three Nobel Prizes over the next ten years"? It was an actual concrete metric for success within his company. So, prestige was very important to him. I think in both cases, though, both men—and this was something I really wanted to get across with the book—had these very big humanitarian ideals around building powerful AI. Demis would talk about using it—when they eventually build AGI, which is artificial general intelligence, or AI that surpasses our brains—to solve all sorts of problems that we can't solve, for example, curing cancer or solving climate change. He would often talk about that in interviews, and Sam wanted to do the same thing, but for a slightly different reason. He wanted to bring abundance to humanity and elevate the wealth of everyone and just improve everyone's well-being and lives. But what ended up happening over the years, of course, is on their journeys to trying to build AGI, the economics of that endeavor were such that they had to align themselves with larger tech companies. And those objectives, those humanitarian goals, ultimately faded into the background. And whether you see one as more Machiavellian than the other—I don't think either of them really had that kind of intent—both ended up helping to enrich and extend the power and wealth of the world's largest tech companies. AK: And those, of course, are Google and OpenAI. PARMY OLSON: And Microsoft. AK: It's interesting that you focus, initially, on the ethics in terms of comparing Hassabis and Altman. The reviews of the book, Parmy, of course, have been very good. As I suggested, you are on the shortlist for the F.T. Book of the Year. 
But a couple of reviewers, in the LA Times, suggested that you—you yourself as the author—didn't address the ethical questions associated with AI, and The Wall Street Journal reviewer concurred. Is that, I won't say a fair criticism, but do you think that that was part of your job, or, given that you were focusing on two remarkable individuals, Hassabis and Altman, with very clear ethical goals, for better or worse (some people might suggest that some of those ethics aren't for real), that it wasn't your job as an author to get involved in making judgments yourself? PARMY OLSON: Oh, but I completely disagree with that analysis, because—and I mean, as the author, of course I'd push back against those reviews—but in the middle of the book— AK: The reviews were good. It was just that one— PARMY OLSON: Oh, sure. Yeah. Okay. But there's a whole section in the middle of the book which talks about the ethics of AI, and AI research, and why academic research into AI is not measuring the success of AI in terms of well-being for humans, fairness, justice, those sorts of things, but instead in terms of capability and power and growth, because the academic field that researches artificial intelligence is completely funded by big tech. And that has been increasingly the case over the last ten years. And a few years ago, there were some researchers at Google who warned about the ethical problems that were inherent in the design of these large language models, like the ones that underpin ChatGPT, and Anthropic's Claude, and all these other ones that are coming to the fore now. And unfortunately, that whole effort became quite controversial. The researchers were fired. It was quite a messy situation. 
They did get the word out, though—which I think was very important—and people started to pay more attention to the problems, for example, around bias in some of these language models, and the training data that's used to actually create these models. Also, the last 25% of the book is almost like a polemic by me against some of the ethical downfalls of the designs of these systems. So I do go into it quite a bit. I almost worried when I was writing the book that I was pushing a little bit too hard. So it's funny people are interpreting it in different ways. Some people have said, well, you're quite kind to the founders. And I think that's because I, again, don't see them as having malintent. I think they were caught up in a system where the force of gravity around big tech companies is so strong that what they were trying to build just got sucked into that, and their ideals, and their efforts around governance—which we can talk about more—just really fell apart. And they ended up just kind of becoming de facto product arms of these companies. And the ethical considerations, just like the humanitarian ideals, really just got pushed to the wayside. AK: Parmy, it's interesting you mention that Google researcher who got fired. Her name is Margaret Mitchell. She was actually on the show, and she talked a little bit about being a female in all this. I mean, obviously, Demis Hassabis is male, as is Sam Altman. They both are very well-educated men. They're not from the ruling class, of course. Hassabis is from a working-class family in England, but he graduated with a double first from Oxford. PARMY OLSON: Cambridge. AK: Altman dropped out, in classic fashion, from Stanford after a couple of years. 
Is the fact that they're men in a business where men tend to dominate—we all know the fate of Marissa Mayer, and perhaps even Sheryl Sandberg—does that tell us that, in some ways, whilst Hassabis and Altman are very different kinds of men, they might have more in common than divides them? PARMY OLSON: Because they're both men? AK: Yeah, well, they're both men. Classic men in tech, highly intelligent. Hassabis sat down with his DeepMind people and said, well, one of the things we need to do over the next ten years is win three Nobel Prizes. That's a very...not just a male thing to do, but a very...I mean, he's from the UK, but it's a very Silicon Valley kind of thing. PARMY OLSON: I think there's actually more that sets them apart. They're just so different in terms of their approach, even to building companies. You know, Demis, as we were talking about with the Nobel Prizes, was very focused on prestige. And that's kind of how he set up DeepMind as well. It was quite hierarchical. And people who worked at the company who were scientists or researchers who had PhDs were like the rockstars within the company. And they were the ones that got face time with Demis, whereas other people who didn't have that status did not have that kind of access. And you look at somewhere like OpenAI under Sam Altman, it was a much more flat kind of organization. Doors were open. If you wanted to talk to Sam, you could. He would interview new recruits himself, often for hours at a time. He spent something like a third of his time on recruitment. That was a big part of how he spent his day: crafting together the most effective potential workforce. And OpenAI wasn't made up of research scientists with PhDs. It was engineers and hackers and former startup founders from the Y Combinator accelerator. So it was a very different kind of culture, almost freewheeling. And I think that reflected the personalities and the approach of both men. 
AK: Let's remind ourselves, Parmy, because not everyone knows this about the history of DeepMind. It was founded by Hassabis and...who was his co-founder? Suleyman... PARMY OLSON: Yeah, Mustafa Suleyman, and— AK: He's been on the show, he was on the shortlist last year for the F.T. Book of the Year. He had a book out on AI, which I'm sure you're very familiar with. PARMY OLSON: I've read it, yes. It's very good. AK: I think they may have met at Oxford. When did they found DeepMind? Tell us about the story. Everyone knows about OpenAI, particularly given what happened last summer. But I think fewer people are familiar with the story of DeepMind. PARMY OLSON: So it was the two of them and also Shane Legg, who was a research scientist who was one of the early proponents of artificial general intelligence. This was 2010. When we talk about AGI now, it's kind of become a mainstream discussion point. But in 2010, 14 years ago, it was a fringe theory. And if you were a scientist, you were liable to be laughed out of the room if you talked about it. And so, Shane and Demis knew each other from University College London, where they had both been doing PhDs. And Demis knew Mustafa as a childhood friend, like the two of— AK: Yeah, they're London boys, aren't they? PARMY OLSON: London boys. And Mustafa, known as Moose, was actually friends with Demis's brother, and he and his brother and Demis, they actually played poker together, like serious professional poker. And at one point they went to Las Vegas and took part in a poker tournament. And they had this whole kind of like game plan, and had their tactics, and that was just something they had done in their 20s. And then, basically, when Demis was at UCL, Mustafa was interested in coming over and possibly studying there. And Demis said, well, why don't you come along, join us in some of these lunch lectures that we do. 
And then the three of them ended up just kind of coalescing together and having lunch together in this restaurant called Carluccio's, just around the corner from the university. Shane told me, they— AK: I just want to remind everyone, if anyone wants to write a novel about this, that if you go into UCL, the first person you see is a dead person: Jeremy Bentham. And the spirit of Bentham's Utilitarianism and his way of framing the world is perhaps the dominant one today. So, there are a lot of fictional qualities to this narrative, I mean, although you've done enough. PARMY OLSON: Well, someone really can go right in and write the novel if they want. I did not know that. That's a cool little aside there. AK: He's hard to miss, he's right in the middle when you walk into UCL. PARMY OLSON: Oh, boy. Okay. So it sets the tone. AK: He's not alive of course. Physically. His spirit is still there. PARMY OLSON: The three of them felt that they couldn't really talk about AGI on the premises of UCL, or this is what Shane Legg told me anyway. So they met in Carluccio's, where they just had a little bit more space and they were— AK: Is it, kind of, trashy Italian— PARMY OLSON: Ish. You have to go there. Don't knock it till you've tried it. AK: Well it's not exactly the Ritz, is it? PARMY OLSON: No, it's not. But they were...I don't know how much money they had at the time, but they were talking about potentially doing a company. And, you know, Demis and Shane talked about trying the wildest, most galactically ambitious idea you can think of: let's build AGI, the most powerful AI mankind has ever seen. How are they going to do it? They couldn't do it within a university setting, because they wouldn't get the funding they needed. They needed big computers to run the training systems. They needed to build these algorithms. Demis knew that from the beginning. He realized they had to start a company. 
And Mustafa, having had some experience with starting companies in the past—he had already co-founded another company—the three of them worked together and started DeepMind, and they were very secretive to begin with. And they also really struggled to raise money in the UK. Andrew, if you've spoken to tech people in the UK, you just don't have the same scale in funding and crazy visionary thinking that you get in Silicon Valley. AK: And these were just north London kids. Nobody had heard of Demis Hassabis. PARMY OLSON: With the crazy idea. Yeah, exactly. With a crazy idea. AK: Suleyman has a checkered reputation now. When Google acquired DeepMind, he went over there, and then got fired for his behavior. Was Demis the genius and Mustafa the street hustler? What was their relationship like at DeepMind? PARMY OLSON: So, I think both of them were quite charismatic figures within the company. Shane was a little bit more in the background. Demis certainly was the brain. But Mustafa was, I've heard him described as, the Pied Piper. So he would interview people from academia, civil servants, to join DeepMind. And people have told me that within 20 minutes of speaking to Mustafa, they felt they had to join DeepMind, because they absolutely believed that this was a company that was going to change the world, that was going to build artificial general intelligence. They were already getting way ahead within just a few years of starting. They were starting to attract some of the top scientists in deep learning. AK: I want to get to the Google acquisition in a second, but did they know Geoffrey Hinton? Geoffrey Hinton, of course, is British and ended up in Canada as a researcher. But did they have meetings with Hinton? PARMY OLSON: I don't know. Yes, I believe they did. Gosh, I'll have to go back and look over my notes, but I think when they actually did the Google acquisition, that Hinton was part of the entourage that came over to London to talk to them. AK: Wow. 
That's part of the movie. So. So what did they build? What year did they get acquired by Google? PARMY OLSON: 2014. And the reason for that was: they were struggling to keep their engineers from getting poached. So by that time, big tech companies like Facebook and Google and Microsoft were realizing, hey, deep learning is a thing. This field had been kind of a backwater in AI, and suddenly there were some milestones that were achieved and it was like, we need to hire the best— AK: And this was before Hinton's LLMs became huge, right? This was pre-LLM? PARMY OLSON: Very much pre large language models. AK: So AI was still potential rather than, sort of, actuality. PARMY OLSON: Yes, but some tests had shown that deep learning was very good for vision recognition. So, recognizing that a cat was a cat. And so, when Facebook, Google, Microsoft wanted to hire the top deep learning scientists, turned out they were all working for DeepMind, because Demis and Mustafa had done such a good job of hiring the world's best deep learning scientists. And they were offering two, three times the salary of what DeepMind could pay. And so Demis reached this realization like, we're going to have to take some money from a large tech player, or even be acquired, if we're going to reach AGI. And Facebook initially put 800 million dollars on the table, but DeepMind rejected it, because they wanted to have an ethics board where there would be some independent members of the board who would have legal control of AGI when they eventually built it. And Mark Zuckerberg, spoiler alert, said definitely not to that, walked away, and then Google came along, offered 650 million, and they agreed on that, because Google also agreed to have this ethics board, which, by the way, never actually happened. AK: It's a fascinating story, Parmy, on a lot of levels. Firstly, DeepMind's focus on scientific genius is very Google-like, isn't it? 
I assume that Demis in particular has quite a lot in common with Larry and Sergey as a personality type. Would that be fair? PARMY OLSON: Very much so. In fact, Demis and Larry had a very special bond, and that remained very important to DeepMind's position within Google for years. Larry allowed Demis and DeepMind to have quite a lot of independence, because Larry just trusted what Demis was doing. His father was a computer scientist involved in artificial intelligence, and Larry was deeply interested in artificial intelligence himself. He believed in mind uploading and very much believed in artificial general intelligence, this kind of potential utopia with AI. So he was very much on board with what Demis was trying to do. The problem was, of course, as you know, Larry eventually stepped down as CEO of Google. And over time, that link that Demis had to the top person at Google just became...he lost that. AK: So what's your reading of the Suleyman story, I mean, it was a little bit of a scandal at the time. Now he's at Microsoft. He's the AI supremo there. Why did he get thrown out of Google, in your view? PARMY OLSON: I've reported on that. I did a very long investigative story at The Wall Street Journal, and I think I broke the story on that. So essentially what happened was, there were allegations of bullying within DeepMind. DeepMind hired an independent investigator, a.k.a. a lawyer, to come in and see what was going on. And Mustafa was removed from his management position within DeepMind. He went on leave for a little while and then was welcomed to Google in Mountain View with open arms and took a vice presidential position within the company. But he didn't have— AK: Just to be clear, DeepMind had been acquired by Google. PARMY OLSON: Correct. AK: So, DeepMind was part of the Google, or rather the Alphabet, corporate network. Mustafa Suleyman was, what, the president at DeepMind. 
He got pushed out of DeepMind because of bullying, and he went to Google. Wasn't that a bit odd? PARMY OLSON: Odd in what way? I mean, it's odd in the sense that Google kind of— AK: Why would they hire a guy who's already accused of bullying? PARMY OLSON: Well yes, if that's what you mean, then 100%. And Google has this reputation and a history of giving a hero's welcome to executives who have not behaved very well. The top leader of Android was accused of harassment. There was a very big payout. You know, it's not the best look for Google.

“I think the management team [at Google] have realized that this is the age they're living in, where companies like Perplexity AI are coming up with real, viable competition.” -Parmy Olson

AK: They appreciate the naughty boys more than other big tech companies. I always assume there are a lot of these kinds of characters, particularly in Silicon Valley. PARMY OLSON: I mean, here's what I think. I think that's a moral failing of Google. And I think that, simply put, it's kind of the bro culture of Silicon Valley, that these people are able to get away with that kind of behavior. AK: Although the bros don't run...I mean, Google is less of a bro company, certainly, than Uber, and perhaps in some ways, Amazon or Facebook. Anyway— PARMY OLSON: Depends on how you define that. AK: What about the relationship between Demis and Mustafa? Did they fall out? PARMY OLSON: It's hard to tell now. I think they're still in touch. There was a really interesting article, I think in The New York Times by Cade Metz, where Demis made some comment—this is literally just in the last few months—and he said something like, "Mustafa is sort of where he is today in large part because of me." I'm completely paraphrasing that. But it was kind of a passive...Well, the reason I bring it up is because I've been to a few DeepMind events and AI events here in London, and that keeps coming up. 
People keep saying, did you read that comment that he made to The New York Times? So it does feel a little bit tense between the two. And no surprise. Demis is running AI for Google. And Mustafa is a big honcho for AI at Microsoft, and, of course, they're going head to head. AK: Right, and of course, Microsoft's use of OpenAI is incredibly complicated and controversial. PARMY OLSON: That's right. AK: So, before we get to the emergence of OpenAI, for these first few years, did Google essentially leave DeepMind alone and allow them to do all their research, hire these brilliant scientists and just pursue AGI, this big vision of machines that will have a kind of consciousness? Is that fair? PARMY OLSON: Yeah. 100%, they left DeepMind alone. I don't know if their intention was for DeepMind to create technology that had consciousness, but certainly to create powerful AI. AK: Well, that's what AGI is, isn't it? PARMY OLSON: There are different definitions. Consciousness hasn't always been part of the definition. Well, I'll just answer your question, then we can go to that. But for a long time, DeepMind operated very independently, and it actually spent years, at one point, trying to break away from Google, because remember, I said that Google agreed to DeepMind having the ethics board. Well, a couple of years after the acquisition, if you recall, Google turned into a conglomerate called Alphabet. And as part of that restructuring, various bets within the Alphabet umbrella were able to spin out, like Verily, the life sciences group, and Waymo, and organizations like that. And the DeepMind founders were told, you can also spin out, and you can become your own autonomous bet. And so the founders spent years talking to lawyers, drafting legal documents, to become a new type of company. They were going to be called a "general interest corporation." 
They actually took their entire staff on a plane up to Scotland for a retreat, where they announced this to the staff and said, we're going to be a separate organization. And the reason they were doing this was because they wanted to protect their future AGI from the control of a single corporation, a.k.a. Google. Their new organization was going to have a board. It was going to be staffed with very high-ranking former political figures. They were reaching out to people like Al Gore and Barack Obama to be on this board. And they proposed all this to Google. And Google said yes. Google even—and this is what I found through my reporting for the book—Google signed a term sheet where they agreed to fund this new organization, DeepMind, to the tune of 15 billion dollars over ten years as a kind of endowment. And so, the founders ran with this. They waited for it to happen. We're going to protect AGI. The staff loved it. And then during Covid, the Google executives told Demis: actually, we're not going to let you spin out. We're going to draw you in even tighter. AK: Who at Google told him? PARMY OLSON: I actually don't know which particular exec—I assume it was Sundar, but Demis, at one point in April, I think of 2020 or 2021, had a meeting with the entire staff of DeepMind and just told them that the negotiations, which had been going on for years to spin out, were coming to an end and it was not going to happen. And I can tell you that the vast majority of people within DeepMind at the time were very, very disappointed. AK: I can imagine. And then, there's a remarkable symmetry to the narrative here. You can't make this stuff up, Parmy, which you haven't, of course. Meanwhile, at OpenAI, similar sorts of issues were brewing. Is that fair? I mean, Altman's involvement with the company, Musk, of course, was one of the co-founders, as were so many other people in Silicon Valley. It was founded as a nonprofit with similar idealistic concerns and goals. Is that fair? 
PARMY OLSON: Absolutely. It was just being approached from a different direction, right? So, they started off as a nonprofit. Elon Musk co-founded it with Altman, in part because he was also concerned about Google having sole control of AGI. Musk was one of the early funders of DeepMind, and so he'd made a little bit of money when DeepMind sold to Google. But he knew what kind of research DeepMind was doing. He felt they were very much on the cutting edge and they were getting close, and he felt: we need to start an organization that isn't beholden to anyone financially, that isn't going to be opaque and closed, but is going to be transparent, that's going to be cooperative and work with organizations, because when AGI comes, when we eventually build it, we can't have one single company controlling it, because then that would not benefit humanity. That would only benefit that one company.

“I don't see them as having malintent. I think they were caught up in a system where the force of gravity around big tech companies is so strong that what they were trying to build just got sucked into that, and their ideals, and their efforts around governance…really fell apart.” -Parmy Olson

AK: And then tell me if I'm wrong. But there's so many ironies here. So, large language model technology was pioneered by Hinton, of course, who won the physics prize this year, and a team in Toronto. Hinton somehow convinced Google to buy him and his researchers. But then Google didn't develop the large language model technology. And that got developed almost as a hunch by OpenAI. Is that a fair summary of this bizarre narrative? PARMY OLSON: Very much. And I would say actually Hinton didn't have as big a role in language models. He was more deep learning. It was another team of scientists within Google who worked on language models. In particular, they came up with this architecture called "the Transformer." So that's the T in ChatGPT. 
And there was a group of them who wrote this paper, and it became— AK: That was in Toronto, wasn't it? PARMY OLSON: This was actually in Google's headquarters. They were all working in Google's headquarters in Mountain View. And they wrote this paper, and they released it. And OpenAI took that finding, built on top of the Transformer, and built their own versions of language models that eventually became ChatGPT. That's a big oversimplification of a lot of work over several years. But that's essentially what happened. And part of the reason— AK: It's an astonishing irony, Parmy, that the idea of the Transformer was developed by Google scientists, Google researchers, but it was the people at OpenAI who were willing to invest in the idea. They had a hunch it could work, whereas Google essentially passed on it. It's almost like a VC who missed the investment in Google or Facebook or Amazon or something. PARMY OLSON: 100%. And I've spoken to people who were at OpenAI when they were building this, and they've told me when they were building the early versions of ChatGPT, they were so worried, like really, truly scared, that Google was just about to release the exact same thing, because the culture within the field of AI is just to release new innovations: when you come up with a new architecture like the Transformer, you put it out into the world. And that's why other people, like the folks at OpenAI, could play with it. One person told me it was like, we're playing with Google's toys, and they're not doing anything. So they were shocked when they were releasing their own— AK: They were playing with Google's toys. PARMY OLSON: Yeah, they were shocked when they could put out their own versions and they were just waiting for Google to come out with the same thing. And it never did. 
And actually, there were language models being developed within Google, including by the guy who ended up starting Character.AI, which was a very big AI chatbot company. And they just wouldn't release it. And a big part of it is the innovator's dilemma, right? So Google knew, I think some part of them knew, if they put a big powerful chatbot out onto the internet, well, people might just use that instead of Google search. So why would they release something like that? And there was also a lot of concern about chatbots being online and saying toxic, crazy, unpredictable things, which could really hurt the company's reputation. So that was a big reason also why they didn't release it. AK: So meanwhile, while all this is going on, OpenAI does ChatGPT, which changes the world. What was the response within Google, and particularly for Demis? He must have been, in a way, furious that OpenAI had jumped on this technology, which, in some ways, the Google people had been pioneering. How did the DeepMind people allow Google to ignore the Transformer and the work that Hinton's people were doing? Or is that just a separate division, sort of parallel worlds? PARMY OLSON: Well, I think at DeepMind there wasn't as much interest in language models, and that was actually a little bit of a source of tension between Mustafa and Demis. Mustafa was very interested in the potential of large language models. This was even a couple of years before ChatGPT came out, but Demis was very interested in games and training AI through chess, or Starcraft, or Go, which is of course— AK: Which was their big breakthrough, on Go, wasn't it? PARMY OLSON: It was their very big breakthrough, a very big PR moment for them as well. It was kind of like their version of IBM's Deep Blue beating Garry Kasparov, the chess champion. Now, they were beating someone who was able to, you know, master an even more complex game, which was Go. 
And that got them a lot of really positive press attention a few years ago. But it wasn't quite the same kind of public breakthrough. Again, this was something that appealed more to higher-level scientists, whereas ChatGPT, remember we were talking about kind of the differences in approach, Sam Altman was more product-oriented, Demis is more about sort of prestige and science. With ChatGPT, this was something the entire public could play with. It was just a web page anybody could access. But that approach to AI was not something that Demis was as interested in. He wanted to try and build AGI by approaching it in lots of different ways, whereas OpenAI and Sam, they saw the potential of language models as their one route. They wanted to just pick that one thing and stick obsessively with it. And that's what they did. AK: Yeah, very focused, which is the startup way. Meanwhile, Hassabis has come around to recognizing this. He gave an interview last month to Axios suggesting that he now sees a watershed moment for AI. And Google has reorganized itself: its Gemini app team, Gemini being, in a sense, I guess, their version of ChatGPT, is now managed by the DeepMind group. So has Hassabis, through perhaps some clever politics, become Mr. AI at Google? PARMY OLSON: He absolutely is. In fact, there's every possibility he could become the next CEO of Google. That is what I've heard from former executives of DeepMind and Google. AK: Wow. Who are his rivals at Google? Who are the other people who you think could take over from the current CEO? PARMY OLSON: I mean, maybe someone like Ruth Porat, who is sort of in the CFO role. AK: The former CFO, who now has a stupid role there. PARMY OLSON: I don't know who else could, really— AK: I mean, there's a German, I can't remember his last name, called Philip, who runs that business side, who I think is quite powerful.
PARMY OLSON: Yeah, well, Jeff Dean was the person who could have ended up doing what Demis is doing now. But Demis sort of took that role instead. And we could talk about why that is. But I think ultimately, he is one of the most powerful people in Google right now. He's running their AI efforts in Silicon Valley and in London. So he absolutely is Mr. AI Guy. I think the big question is whether he would want to uproot his family and move to Silicon Valley if he ever wanted to take that even further. And, you know, take that top position, which might be something, you know, shareholders, activist investors might want to push for because— AK: What kind of lifestyle does he have? Altman has a notoriously weird Silicon Valley one, doesn't seem to move anywhere, doesn't seem to care about money. Does Hassabis enjoy the cash that he's got? I mean, he's a very rich man. PARMY OLSON: I think they all enjoy the cash. Have you seen the video of Sam Altman in his sports car, driving around San Francisco? AK: What kind of life? He's in London. Where does Hassabis live? PARMY OLSON: I don't know specifically where he lives. I presume it's somewhere in north London. But he does live in London with his family. AK: He's probably got the whole of the Caledonian Road, where I used to live. The novelist Andrew O'Hagan was on the show with his Caledonian Road. I've got a few more questions, this is such interesting stuff from Parmy. You mentioned the innovator's dilemma at Google, and that this new AI, these large language models, the Transformer, would eat up their search dominance. Is there acknowledgment now, do you think, within Google that that innovator's dilemma is something they have to address, and that they have to somehow embrace AI, and even AGI, even if it in some ways undermines the traditional notions of search? PARMY OLSON: Yeah, I think in a way they're kind of standing at an abyss and they have to just jump in. It's all very unpredictable.
Google has always been a company that does not do a lot of radical change, which feels weird to say, because you think, "Google. Tech company, cutting edge, AI. Surely, they..." But actually, just look at the Google home page. It has barely changed in a decade. There really aren't a lot of big changes that Google ever makes to its core products, because it derives 80% of all its revenue from advertising. And so, messing around with that formula is a potential recipe for disaster. But I think you're right. I think the management team have realized that this is the age they're living in, where companies like Perplexity AI are coming up with real, viable competition. This is a chatbot that a lot of people are using instead of Google search, because it just gives you a singular answer, and a very comprehensive answer. Whereas when you look something up on Google, whether it's something related to a transaction or advice, and, I don't know how often you use Google, but you know, there's a ton of ads. AK: But of course, OpenAI also came out last week with its own version of search. And again, history is, in so many ways, repeating itself. Now, OpenAI is to Google what Google once was to Microsoft. Meanwhile, Suleyman is at Microsoft. So many ironies. I know you've got to run. Finally, Parmy, and we'll have to get you back on the show because there's so much more to talk about here, you probably heard there was an election in the US this week. PARMY OLSON: Hah. Oh, really? AK: A certain Donald Trump now is running the show. His supposed right-hand man and left-hand man is a certain Elon Musk, who put his fortune on the line. He was one of the co-founders of OpenAI. Lots of questions now about what Trump will do with AI. One piece by Scott Rosenberg in Axios suggests that young AI just got a ticket to run wild. Given your narrative in Supremacy, is this moment, historically, now the opportunity for both Hassabis and Sam Altman to really profoundly change the world?
They won't have to deal with Lina Khan in DC or any other kind of regulation? PARMY OLSON: I actually have kind of a contrarian view to the one you just described from Axios, and a few other people have said, "Oh, Trump's in power, light-touch regulation, Elon Musk is whispering in his ear, he's got an AI company. Obviously, things are going to be fast and loose for AI companies." I actually am not entirely sure that's how it's going to play out. And again, it's just because of all that I've read and researched about Elon Musk. He is a guy who can put ideology before his own business interests, and he has done that before. And this guy truly is worried about AI as an existential risk. This has been years in the making for him. He broke up his friendship with Larry Page, the co-founder of Google, over an argument about the risks of AI destroying human civilization. He's publicly stated that. And it is absolutely a reason why he co-founded OpenAI, because he was worried about Google controlling AGI. So I think this means that if Trump comes into power—Trump didn't even really talk about AI on the campaign trail. He mentioned it a couple of times, but didn't really talk about it. AK: He's more into crypto, I think. PARMY OLSON: I think he just doesn't really care that much about AI. I don't think it really interests him, which leaves the door open for someone else to steer. And I think that person could well be Elon, because he cares about it so much. I don't think it would be JD Vance, because JD is more on the whole "big tech, little tech" thing. I think Musk is really the AI guy in the administration, and I think he would push for safety measures. And the current executive order that Biden put in place on AI, Trump has said he'll get rid of it. Sure, he might do that, but he'll put something very similar in its place that will still push companies to check their AI models for safety. But they will also be told: you don't have to have filters in place on toxic content.
Because both Trump and Musk are so anti-censorship, so anti-woke-censorship, I think we're going to have chatbots in the next couple of years that say a lot of wild things. AK: Yeah, well, you've said some very sensible things, Parmy. Your book, Supremacy: AI, ChatGPT, and the Race That Will Change the World, is on the shortlist for the F.T. Business Book of the Year. That will be announced, I think, in London on December 9th. If you win, are you going to share the proceeds with Hassabis and Altman? PARMY OLSON: Or maybe I should put it into the race to build AGI, because that's for the sake of humanity, right? That would be the charitable thing to do. AK: Right. Well, Parmy Olson, really honored to have you on the show. Fascinating subject. So much more to talk about. We'll get you back on the show in the not-too-distant future. Thank you so much. PARMY OLSON: Thank you. Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. A former reporter for the Wall Street Journal and Forbes, she is the author of We Are Anonymous and a recipient of the Palo Alto Networks Cybersecurity Canon Award. Olson has been writing about artificial intelligence systems and the money behind them for seven years. Her reporting on Facebook's $19 billion acquisition of WhatsApp and the subsequent fallout resulted in two Forbes cover stories and two honourable mentions in the SABEW business journalism awards. At the Wall Street Journal she investigated companies that exaggerated their AI capabilities and was the first to report on a secret effort at Google's top AI lab to spin out from the company in order to control the artificial superintelligence it created. Named one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best-known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show.
He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
From "Keen On"