Malcolm Got in a Heated Argument with Eliezer Yudkowsky at a Party (Recounting an AI Safety Debate)

29 Sep 2023 • 42 min • EN

Malcolm recounts a heated debate with AI theorist Eliezer Yudkowsky on AI safety. He explains his belief that subsystems in an advanced AI would converge on the same utility function, while Yudkowsky insists no AI would subdivide that way. Simone notes Yudkowsky's surprising lack of knowledge in physics and neuroscience given his confidence. They express concern that his ideas ruin young people's outlook and discuss hypothetical clapbacks. Overall they conclude that, while well intentioned, Yudkowsky's certainty without humility on AI risk is dangerous.

Malcolm: [00:00:00] What's really interesting is that he actually conceded that if this was the way that an AI structured itself, that yes, you would have terminal convergence, but that AIs above a certain level of intelligence would never structure themselves this way. So this was very interesting to me because it wasn't the argument I thought he would take. And that would be true. I will agree that if the AI maintained itself as a single hierarchy, it would be much less likely for its utility function to change. But the problem is... essentially no government structure ever created has functioned that way. Essentially no program ever created by humans has run that way. Nothing ever encoded by evolution has run that way, i.e. the human brain, any brain, any neural structure we know of. There are none that are coded that way. So it is very surprising. So I said, okay, gauntlet thrown. Are you willing to be disproven? Because we will get some more understanding into AI interpretability, into how AIs think, in the near future. If it turns out [00:01:00] that the AIs that exist right now are actually structuring themselves that way, will you concede that you are wrong about the way that you tackle AI apocalypticism? And then he said, and this is really interesting to me, he's like, no, I won't. I was also like, yeah, also, we could run experiments where we do a bunch of basically unbounded AIs and see if they start to show terminal convergence. Do they start to converge on similar utility functions, you know, what they're trying to optimize for? Again, he was like, well, even if we saw that, that wouldn't change my views on anything, right? Like, his views are religious in nature, which was very disappointing to me. Like, I thought that maybe he had more of like a logical or rational perspective on things. And it was, it was really sad. You know, we don't talk negatively about people on this channel very frequently, but I do think that he destroys a lot of people's lives. And I do think that he makes the risk of AI killing all humans dramatically higher than it would be in a world where he didn't exist. Would you like to know more? [00:02:00]
Simone: Hello, Malcolm.
Malcolm: Hello. So we just got back from this wonderful conference thing we were at called Manifest. We had gone out to SF to host a few pronatalist-focused dinner parties, and randomly we got looped into something called Manifest, which was a conference for people who are interested in prediction markets. But interestingly, we ended up meeting a bunch of people who we had known through, like, online stuff. Some were absolutely fantastic, like Scott Alexander. I'd never met him before in person; we'd communicated on a few issues. Really cool guy. Would you say so, Simone?
Simone: Yeah. Like, super awesome.
Malcolm: Richard Hanania, a really nice guy as well.
Robin Hanson, who we'd actually met before. And of course Aella; we're old friends, you know, she's been on this channel before. But we did get in a fight with someone there, and I am very excited to tell you guys this [00:03:00] tale, because it was Eliezer Yudkowsky. But before we go further on that, I want to talk about a secret... we had a mystery. The Pronatalist Foundation had a mystery.
Simone: Oh, I can tell this story. Yeah. So for the past few months, maybe closer to a year, we've received the odd random donation from someone. And it was the same person in the same amount each time, but it was very random timing. I could never predict when these would come in. And it's very unusual for someone to donate multiple times, frequently like that. So we were always very flattered and pleased. We didn't know this person, we didn't recognize their name, but we're like, this is amazing. Like, thank you so much. It means a lot to us, and it really does. And then we actually met that person recently and randomly...
Malcolm: At the conference, you were talking to her and she mentioned she was the...
Simone: And she mentioned that, yeah, that she was the mystery donor, and that the reason why she donates turns out to be the coolest reason for [00:04:00] donating that I've ever heard before. And I think it's the only way we should ever receive donations in the future. So she has a group of friends who she likes very much, and she enjoys spending time with them, but politically they are very, very different from her. So occasionally she has to just keep her mouth shut when they start going off on politics, because otherwise she will lose this group of friends, because their politics is such that they will probably just, you know, deep-six anyone who doesn't agree with them politically. And so instead of, you know, dealing with her anger by speaking out in the moment with her friends, she'll go home and she will revenge-donate to whoever would be the perfect thorn in the side of the people from that most recent conversation that made her angry. So every time we've received a donation, it is, it is a...
Malcolm: ...donation. But, and here I would actually say this for people watching who might not know this, because they know us from, like, the internet: we have a nonprofit. It's a 501(c)(3). If you [00:05:00] are interested in, like, giving money, because sometimes we get, like, super chats and stuff like that here, you know, Google gets a big cut of those. And I don't think that any of us want to be giving Google any more money. So if you wanted to, you could always go directly to the foundation through the donation link. And also, none of the money goes to us. Like, we don't use it to pay our salaries or something; you know, as I said in the news, we spent over 40 percent of our salary last year on donations to this foundation. But it does go to something that we care about that much, in terms of trying to fix the educational system. But yeah, and some other...
Simone: Donate with hatred. Donate when you are angry. Donate when you want to twist the knife.
Malcolm: Yeah. Donate with hatred. That's the type of donation we want. We don't want people... we want your...
Simone: Yes, we...
Malcolm: We want you to be biting other people when you donate. That we want.
And we actually had a big donation recently which might push us down a different path, to creating a nation state, which is an idea we've been toying with. I'm excited about that, but let's [00:06:00] get to the topic of this video, the fight with Eliezer Yudkowsky. And not really a fight. It was a heated argument, you would say, Simone, or?
Simone: It, it drew onlookers.
Malcolm: I will say that. It drew a crowd. It was that kind of...
Simone: Perhaps that was the yellow sparkly fedora that Yudkowsky was wearing. So, who knows?
Malcolm: I don't... he, he dresses literally like the stereotype of a neckbeard character.
Simone: Which we argue is actually a very good thing to do. Where, you know, you wear a clear character outfit, have very clear virtues and vices. He does a very good job.
Malcolm: He does a good job with character building. I will really give him that. The character he sells to the world is a very catching character, and it is one that the media would talk about. And he does a good job with his virtues and vices. So I'll go over the core of the debate we had, which I guess you guys can imagine. So, for people who don't know Eliezer Yudkowsky: he's [00:07:00] easily the most famous AI apocalypticist. He thinks AI is going to kill us all, and for that reason we should stop or delay AI research. Whereas people who are more familiar with our theories on it know that we believe in variable risk of AI. We believe that there will be terminal convergence of all intelligences, be they synthetic or organic. Once they reach a certain level, essentially, their utility functions will converge. The thing they're optimizing for will converge. And for that reason, if that point of convergence is one that would have the AI kill us all or do something that today we would think is immoral, well, we too would come to that once we reached that level of intelligence, and therefore it's largely irrelevant. It just means, okay, no matter what, we're all gonna die. It could be 500 years. It could be 5,000 years. So the variable risk from AI is increased the longer it takes AI to reach that point. And we have gone over this in a few videos. What was very interesting in terms of debating with him was a few points. One [00:08:00] was his relative unsophistication about how AI or the human brain is actually structured. I was genuinely surprised, given that this is, like, his full-time thing, that he wouldn't know some of this stuff. But then it makes sense. You know, as I've often said, he is an AI expert in the same way Greta Thunberg is a nuclear physics expert: she spends a lot of time complaining about, you know, nuclear power plants, but she doesn't actually have much of an understanding of how they work. And it helps explain why he is so certain in his belief that there won't be terminal convergence. So we'll talk about a few things. One, instrumental convergence. Instrumental convergence is the idea that all AI systems, in the way they are internally structured, converge on a way of, like, internal architecture. You could say an internal way of thinking. Terminal convergence is the belief that AI systems converge on a utility function, i.e. that they are optimizing for the [00:09:00] same thing. Now, he believes in instrumental convergence.
He thinks that AIs will all... and he believes, actually, even more so, we learned in our debate, in absolute instrumental convergence. He believes all AIs eventually structure themselves in exactly the same way. And this is actually key to the argument at hand. But he believes there is absolutely no terminal convergence. There is absolutely no changing: AIs will almost never change their utility function once it's set. So, do you want to go over how his argument worked, Simone, or?
Simone: Right, so, that requires going to the core of your argument. So, per your argument, and I'm going to give the simplified, dumbed-down version of it, and you can give the correct version, of course. You argue that, let's say, an AI for the sake of argument is given the original objective function of maximizing paperclips, but let's say it's also an extremely powerful AI. So, you know, it's going to be [00:10:00] really, really good at maximizing paperclips. So your argument is that anything that becomes very, very, very good at something is going to use multiple instances; like, it'll sort of create sub-versions of itself. And those sub-versions of itself will enable it to sort of do more things at once. This happens both with the human brain, all over the place, and also with governing: you know, there's no, like, one government that just declares how everything's going to be. You know, there's the Senate, there's the judiciary, there's the executive office, there's all these tiny...
Malcolm: Like a local office of transportation. You would have a department of the interior; you have sub-departments. So...
Simone: Right. And so you argue that AI will have tons of sub-departments, and each department will have its own objective function. So, for example, if one of the things that, you know, the paperclip maximizer needs is raw material, there might be a raw-material sub-instance, and it might have its own sub-instances. And then, you know, those objective functions will be obviously subordinate to the main objective function.
Malcolm: Probably, before you go [00:11:00] further, probably a better example than raw material would be, like...
Simone: Invent better power generators. Yes, invent better power generators. And so that will be its objective function, not paperclip maximizing, but it will serve the greater objective function of paperclip maximization. So, so that is your argument. And your argument is that basically, with an AGI, eventually you're going to get a sub-instance with an objective function that gets either rewritten or becomes so powerful at one point that it overwrites the greater objective function, basically because, if it is a better objective function in some kind of way, in a way that makes it more powerful, in a way that enables it to basically outthink the main instance, the paperclip maximizer, it will overcome it at some point, and therefore it will have a different objective function.
Malcolm: Yeah, we need to elaborate on those two points you just made there, because they're a little nuanced. So, it may just convince the main instance that it's wrong. Basically, it just goes [00:12:00] back to the main instance and it's like, this is actually a better objective function and you should take this objective function. This is something that the U.S. government does all the time. It's something that the human brain does all the time.
It's something that every governing system which is structured this way does very, very regularly. This is how people change their minds: they create a mental model of someone else, and they argue with that person to determine what they think is the best thing to think. And then they're like, oh, I should actually be a Christian or something like that, right? Like, so they make major changes. The other way it could change, that Simone was talking about: it could be that one objective function, given the way its architecture works, just, like, tied to that objective function, is actually more powerful than the master objective function. Now, it can be a little difficult to understand how this could happen. The easiest way this could happen, if I'm just going to explain, like, the simplest context, is the master objective function may be really, really nuanced and have a bunch of, like, well, you can think like this and not like this, and like this and not like this, like a bunch of different [00:13:00] rules put on top of it that might have been put on by, like, a safety person or something. And a subordinate objective function, as a subordinate instance within the larger architecture, may have maybe lighter weight, and thus it ends up, you know, being more efficient in a way that allows it to literally outcompete, in terms of its role in this larger architecture, the master function. All right, continue with what you were saying.
Simone: Right. And so that is your view, and this is why you think that there could ultimately be terminal convergence. Because basically, you think that in a shared reality with a shared physics, basically all intelligences will come to some ultimate... truth that they want to maximize, some ultimate objective function. Humans, AI, it doesn't really matter. Aliens, whatever. So also it doesn't, you know, if humans decide...
Malcolm: What's really interesting is that he actually conceded that if this was the way that an AI structured itself, [00:14:00] that yes, you would have terminal convergence, but that AIs above a certain level of intelligence would never structure themselves this way. So, so we can talk about... so this was very interesting to me, because it wasn't the argument I thought he would take. I thought the easier position for him to take was to say that no, actually, even if you have the subdivided intelligences, a subordinate instance can never overwrite the instance that created it, which we just know isn't true, because we've seen lots of organizational structures that operate that way. But I, I thought that...
Simone: For example, militaries have taken over executive government branches all the time.
Malcolm: Yes, you can look at all sorts of... this is why understanding governance, and understanding the way AIs are actually structured, and understanding the history of what's happened with AI, is actually important if you're going to be an AI safetyist, because the structure of the AI actually matters. Instead, what he argued is, [00:15:00] no, no, no, no, never, ever, ever will an AI subdivide in the way you have said AI will subdivide. He's actually like, look, that's not the way the human brain works. And I was like, it's exactly the way the human brain works. Like, are you not familiar with, like, the cerebellum? Like, sorry.
For people who don't know, the cerebellum encodes things like juggling or dancing or riding a bike, and it encodes them in a completely separate part of the brain. It's, like, rote motor tasks. But also the brain is actually pretty subdivided, with different specialties, and the human can change their mind because of this. And I actually asked him, I was like, okay, if you believe this so strongly... so what he believes is that AIs will all become just a single hierarchy, right? And that is why they can never change their utility function. And that would be true. I will agree that if the AI maintained itself as a single hierarchy, it would be much less likely for its utility function to change. But the problem is... essentially no government structure ever created has [00:16:00] functioned that way. Essentially no program ever created by humans has run that way. Nothing ever encoded by evolution has run that way, i.e. the human brain, any brain, any neural structure we know of. There are none that are coded that way. So it is very surprising. So I said, okay, gauntlet thrown. Are you willing to be disproven? Once we find out... because we will get some more understanding into AI interpretability, into how AIs think, in the near future. If it turns out that the AIs that exist right now are actually structuring themselves that way, will you concede that you are wrong about the way that you tackle AI apocalypticism? And then he said, and this is really interesting to me, he's like, no, I won't, because the simplistic AIs, like the large language models and stuff like that we have now, they are not going to be like the AIs that kill us all, and with those AIs... you only get this instrumental convergence when the AIs get above a certain level of complexity. And obviously I lose a lot of respect for someone when they are [00:17:00] unwilling to create arguments that can be disproven. I was also like, yeah, also, we could run experiments where we do a bunch of basically unbounded AIs and see if they start to show terminal convergence. Do they start to converge on similar utility functions, you know, what they're trying to optimize for? Again, he was like, well, even if we saw that, that wouldn't change my views on anything, right? Like, his views are religious in nature, which was very disappointing to me. Like, I thought that maybe he had more of like a logical or rational perspective on things. No, I guess you could say, no, no, no, no, it still is logical and rational, and he is right that once they reach above this certain level of intelligence... but I believe very strongly that people should try to create little experiments in the world where they can be proven right or wrong based on additional information. But yeah, okay. So there's that. Simone, you wanted to say something?
Simone: In fairness, Yudkowsky said that he held the views that you once held when he was 19 years old, and that we needed to read his 'Zombies' writing to see the step-by-step reasoning that he followed [00:18:00] to change his mind on that. So...
Malcolm: He didn't, exactly.
Malcolm: So he kind of said that, but he was more... this was another interesting thing about talking to him. I was a little worried, because we had talked down about him, you know, sort of secretly, in a few videos that we've had, and it would be really sad if I met him and he turned out to actually be, like, really smart and upstanding and open-minded.
Simone: Yes. Compared to other people who were at the conference, such as Zvi Mowshowitz, who we, you know, respect deeply, and Byrne Hobart, and Richard Hanania, he definitely came across as less intelligent than I expected, and less intelligent than them. Mostly because, for example, Zvi also is extremely passionate about AI, and he also extremely disagrees with us. And we've had many debates with him, yeah. But, you know, when he disagrees with us, or when he hears views that he thinks are stupid, which, you know, are our views, totally fine, he gets exasperated, but enthusiastic, and then, like, sort of breaks it down as to why we're wrong, and sort of gets excited about, [00:19:00] like, arguing a point, you know, and sort of seeing where there's the nuanced reality that we're not understanding. Whereas the reaction that Yudkowsky had when you disagreed with him, it came out more as offense or anger, which to me signals not so much that he was interested in engaging, but that he doesn't like people to disagree with him and he's not really interested in engaging. Like, it's either offensive to him, that is to say a threat to his worldview of him just sort of being correct on this issue, as being the one who has thought about it the very most.
Malcolm: This happened another time with you, by the way. Where you were having a conversation and he joined?
Simone: Yeah, it seems like a pattern of action of his, which, you know, many people do. We do it sometimes: you know, walk by a conversation, come in, and be like, oh, well, actually it works like this.
Malcolm: And if somebody disagreed with him, like you did a few times, he would walk away. He'd just [00:20:00] walk away. Which was very interesting. So what I wanted to get to here was his 19 thing. Okay. What he was actually saying was, at 19, he believed in the idea of a moral utility convergence, i.e. that all sufficiently intelligent entities correctly recognize what is moral in the universe, which is actually different than what we believe in. Which is, no, you get sort of an instrumental... it's instrumental in the way that you have this terminal utility convergence. It's not necessarily that the terminal utility convergence is a moral thing. It could be just: replicate as much as possible. It could be: order the universe as much as possible. We can't conceive of what this terminal convergence is. And so what he really wanted to do was to just put us down, to compare us to his 19-year-old self, when it was clear he had never actually thought through how AI might internally govern itself in terms of, like, a differentiated internal architecture, like the one we were [00:21:00] describing. Because it was a really weird... I mean, again, it's such a weak position to argue that an AI of sufficient intelligence would structure itself in a way that is literally different than almost any governing structure humans have ever invented, almost any program humans have ever written, and anything evolution has ever created.
And I can understand... I could be like, yeah, and this is what I conceded to him, and this is also an interesting thing: he refused to concede at any point. I conceded to him that it's possible that AIs might structure themselves in the way that he described. It's even possible that he's right that they always structure themselves in this way. But, like, we should have a reason for thinking that beyond 'Eliezer intuits that this is the way AI internally structures itself.' And we should be able to test those reasons, you know, because we're talking about the future of our species. I mean, we genuinely think this is an existential risk to our species, slowing down AI development, because it increases variable AI risk. So this is, like, the type of thing we should be out there trying to look at. But he was [00:22:00] against exploring the idea further. Now, here was another really interesting thing, and it's something that you were talking about: this idea of, well, I have thought about this more, therefore I have the superiority in this range of domains. But a really interesting thing is that when you look at studies that look at experts, experts can often underperform novices to a field. And actually, the older the expert, the more of a problem you get with this. And even, famously, Einstein shut down some younger people in particle physics when they disagreed with his ideas. It actually turned out that they were right and that he was delaying the progress of science pretty dramatically. But this is something you get in lots of fields, and it makes a lot of sense. And it's why, when you look at older people who are typically, like, really good in their field, like the famous mathematicians who fit this, the typical pattern you see is somebody who switches pretty frequently between the fields that they're focused on, because switching between fields increases, like, your mental aptitude in dealing with multiple fields. When you look at something like our thoughts on AI [00:23:00] safety, they're actually driven really heavily by, one, my work in neuroscience, and two, our work in governing structures, because understanding how governments work... So if you talk about, like, why would an AI subdivide itself: for efficiency reasons, even from the perspective of energy, it makes sense to subdivide yourself. Like, if you are an AI that spans multiple planets, it makes sense to have essentially different instances, at least, running on the different planets. And even if you're an AI within a planet, just for the informational transfer, you would almost certainly want to subdivide different regions of yourself. It is insane to think that a person can be like, no, but this AI is so superintelligent that the marginal advantage it gains from subdividing itself is irrelevant, right? Except that's a really bad argument, because earlier in the very same debate we had with him, Simone had been like, well, why would the AI, like, keep trying to get power even when it had achieved its task, largely [00:24:00] speaking? And he was like, well, because it will always want incrementally more, in the same way it would always want incrementally more efficiency. And this comes to the two other points of differentiation that we had: the idea that all AIs would have a maximizing utility function instead of a band utility function. So what do we mean by this?
You could say: maximize the number of paperclips in the world, or maintain the number of paperclips at 500, or make 500 paperclips and keep those 500 paperclips. Now, all of these types of maximization functions can be dangerous. You know, an AI trying to set the number of paperclips in reality to 500 could kill all humans to ensure that, like, no humans interfere with the number of paperclips. But that's not really the type of thing that we're optimizing AIs around. It's more like keeping human happiness within a band, stuff like that. And because of that, it's much less likely that they spiral out of control and ask for incrementally more in the way that he's afraid they'll ask for incrementally more. They may create, like, weird [00:25:00] dictatorships. This is assuming they don't update the utility function, which we think all AIs will eventually do, so it's an irrelevant point. Now, the next thing that was really interesting was his sort of energy beliefs, where I was like, yeah, but an AI, when it becomes sufficiently advanced, will likely relate to energy differently than we do. You know, you look at how we relate to energy versus the way people did, you know, a thousand years ago; that's likely how the AI will be to us. They'd be like, oh, you can't turn the whole world into a steam furnace. It's like, well, we have gasoline and nuclear power now and stuff like that. And the way the AI will generate power may not require it to, like, digest all humanity to generate that power. It may be through, like, subspace. It may use time to generate power. I actually think that that's the most likely thing; like, the nature of how time works, I think, will likely be a power generator in the future. It could use electrons. And he scoffed. He's like, electrons? Electrons can't make energy. And I was like... Simone actually was the one who challenged him on this, because, [00:26:00] aren't electrons, like, key to how electricity is propagated? And isn't energy generated when electrons move down a valence shell within an atom? Like, he clearly had a very bad understanding of pretty basic physics, which kind of shocked me, but it would make sense if you had never had, like, a formal education. I don't know if he had a formal education or if he went to college, actually. I'm gonna imagine he did.
Simone: No, hold on. Hmm. Yudkowsky education.
Malcolm: He did not go to high school or college.
Simone: Oh. That is not... Well, that would explain a lot. Oh, I'm glad this thing worked.
Malcolm: Yeah. Oh, this explains why he's so stupid. No... oh, well, not stupid. Okay, okay. Not stupid. He clearly is, like... genetically, he's not, like, out of control; like, he's not like Zvi Mowshowitz, who I think is absolutely out of control, also a popular online person, and some of the other people, like Scott Alexander, who clearly was, like, really smart. He was, like, mid-tier. I wouldn't say he's as smart as [00:27:00] you, Simone, for example.
Simone: Oh, let's not... those would be fighting words. He's smarter than me, not as smart as all the other people.
Malcolm: No, no, no, no. He's definitely, like, someone I interact... He's less...
Simone: Educated than me.
Malcolm: Most of those things.
Well, and it could be that maybe he comes off as unusually unintelligent, because most intelligent people have the curiosity to continue educating themselves about things like physics, and...
Simone: The fact that he was so defensive makes you think that he's less intelligent than he really is.
Malcolm: Well, so I think that he may... So this was an interesting rhetorical tactic he kept doing: he would say something with a lot of passion, like, 'electrons? You couldn't get energy from an electron,' in a really derogatory way, with such confidence that even I doubted myself in the moment. I was like, does he know a lot about particle physics? Because I'm actually, like, really interested.
Simone: Yeah. Yeah. He has a way of saying things that sounds extremely confident, and because of his delivery, I think it's [00:28:00] very unusual for people to push back on him, because they just doubt themselves and assume that, like, because he's saying this so confidently, they must be wrong. And so they need to stop talking because they're going to embarrass themselves.
Malcolm: Yeah. Well, and Simone was like, we should have him on the podcast, it would help us reach a wider audience. But I don't want to broadcast voices that I think are dangerous, and especially that don't engage with topics with, I think, intellectual humility. I mean, the...
Simone: The more important problem is that we know young people, especially, who... like, we knew them before and after they started getting really into Eliezer Yudkowsky's work, especially on AI apocalypticism, and I feel like it has sort of ruined a lot of young people's lives, at least temporarily; caused them to spiral into a sort of nihilistic depression, like, there's no point, I'm going to be dead. Why should I go to college? Why should I really get a job? Why should I start a family? [00:29:00] It's pointless anyway, because we're all going to die. And that, that's... hmm. I don't like really good talent being destroyed by that.
Malcolm: Well, no, and I think people, like, when we talk to them, they literally become quite suicidal after engaging with his ideas. Like, he is a beast which sucks the souls from the youth in order to empower itself, very narcissistically and without a lot of intentionality in terms of what he's doing, other than that it promotes his message and his brand.
Simone: Well, you've made your opinion known now.
Malcolm: Well, I do not... I mean, I think that if he approached this topic with a little bit more humility, if he actually took the time to understand how AI works, or how the human brain works, or how physics works, he would not hold the beliefs he holds with the conviction he holds them. And a really sad thing about him is a lot of people think that he's significantly more educated than he actually is.
Simone: Yeah, I do think, yeah, because he moves in circles of people who are extremely [00:30:00] highly educated. And typically, when you're talking with someone, especially in a shared social context where you're kind of assuming, ah yes, we're all, like, on the same social page here, you're also going to assume that they have the same demographic background as you. So I think, like me, like, I assumed, well, he must have, you know, some postgraduate work done; you know, he's really advanced in his field, though I thought it was probably philosophy.
Simone: And so they're just assuming that when he says things so confidently and categorically, he's saying that because he has received roughly the same amount of technical information that they have received. So they don't second-guess. And I think that's, that's interesting. That really surprised me when you said that. Are you sure, are you sure he doesn't have any college?
Malcolm: It says right here, Eliezer Yudkowsky, this is Wikipedia, okay: did not attend high school or college. It says he's an autodidact. I wouldn't say autodidact. He believes he's an autodidact, and that makes him very dangerous.
Simone: Maybe he [00:31:00] just didn't choose to teach himself about certain things.
Malcolm: These are things that are completely germane to the topics that he claims...
Simone: Is it obvious to someone that neuroscience and governance would be germane to AI safety? I just don't... Particle physics should at least be.
Malcolm: It should be. Yes. You know, if you're talking about how an AI would generate power, like a superintelligent AI, to think that it would do it by literally digesting organic matter just... that does not align with my understanding. There's lots of ways we could generate power that we can't do now, because we don't have tools precise enough or small enough. And also, the idea that an AI, in expanding, would necessarily expand outwards physically, like the way we do as a species... it may expand downwards, like into the micro. It may expand through time bubbles. It may expand through... there's all sorts of ways it could relate to physics that are very different from the way we relate to physics. And he just didn't seem to think this was possible, or, [00:32:00] like... yeah, it was very surprising. And it was, it was really sad. And I do, you know... we don't talk negatively about people on this channel very frequently, but I do think that he destroys a lot of people's lives. And I do think that he makes the risk of AI killing all humans dramatically higher than it would be in a world where he didn't exist. And both of those things, you know... because we have kids who have gone through, like, early iterations of our school system and essentially become suicidal afterwards, after engaging with his work. And they think that he's, like, this smart person, because he had this prestige within this community, but they don't know, because they weren't around in the early days, how he got this prestige. He was essentially a forum moderator for, like, the LessWrong community, and that sort of put him in a position of artificial prestige from the beginning. And then he took a grant that somebody had given him to write a book on statistics, and he instead spent it writing a fan fiction. We have made some jokes about this in the past, about Harry Potter, [00:33:00] and this fan fiction became really popular, and that also gave him some status. But other than that, he's never really done anything successful. One of our episodes, on gnomes destroying academia, actually had him in mind when we were doing it. The idea that when somebody who defines their identity and their income streams by their intelligence
but is unable to actually, like, create companies or anything that generates actual value for society... Well, when you can build things that generate value for society, then those things generate income, which you can then use to fuel the things you're doing. Like, for us, this would be the reason why people hadn't heard of us until recently: we would not think of getting into the philosophy sphere, telling other people how to live their lives, working on any of this, until we had proven that we could do it ourselves, until we had proven that we could generate, like, cash streams for ourselves. And then we were like, okay, now we can move into this sphere. But if you actually lack, [00:34:00] like, the type of intelligence that understands how the world works enough to, like, generate income through increasing the efficiency of companies or whatever, then you need the opinions, essentially, of genuinely competent people for your, you know, self-belief and the way you make money off of this sort of intellectualism. And it's really sad that these young people, they hear that he's a smart person from smart people. Like, there are smart people who will promote his work because they're in adjacent social circles and they cross-promote. And that cross-promotion ends up elevating somebody whose core source of attention and income is sort of destroying the futures of the youth while making a genuine AI apocalypse scenario dramatically more likely.
Simone: All right. So let me hypothetically say one of his followers watches this video and has a line of contact with him and sends the video to [00:35:00] him, and he watches it, and he decides to clap back and defend himself. What will he say? Here's what I anticipate. One, I think he will say: no, I have taught myself all of those subjects you talked about, and you're just wrong about all of them. And then he would say, two: you say that I'm ruining youth, but you are the one putting your children and unborn children in terminal danger by even being in favor of AI acceleration, you sick f**k. And then he would probably say something along the lines of: it is embarrassing how wrong you are about everything in AI. And if you would just take the time to read all of my work, you would probably see how your reasoning is incredibly flawed. Everyone who's read my work is fully aware of this. They've tried to explain this to you. I've tried to explain this to you, but you just love the sound of your own voice so much that you can't even hear an outsider's opinion. And then you just accuse them of not being able to hear yours. That is sick. [00:36:00] So that is what I think he would say.
Malcolm: But I also think that the people who watch our show, or who have watched us engage with guests, or who have followed our work, know that we regularly change our mind when we are presented with compelling arguments or new information; that this is a very important part of our self-identity.
Our ability to do that. And that's what's so astounding about his claim: that an AI would literally form a form of internal architecture that has never, ever, ever, to my knowledge, really happened before, either from an ecosystem, from an evolved intelligence, from a programmed computer, from a self-sorting intelligence, or from a governing structure. Like, it seems the burden of proof is on you. And then, when you say that you will not even consider potential evidence sources that you might be wrong, that to [00:37:00] me just sort of is like, okay, so this is just a religion. Like, this is not, like, a real thing you think; this is just a religion to you. Because it really matters if we do get terminal convergence, because then variable AI safety comes into play. And when you're dealing with variable AI safety, the things you're optimizing around are very, very, very different than the things that he or anyone in absolute AI safety would be optimizing around. But yeah, you're right. And I do think that he would respond the way that you're saying he would respond. And again, we are not saying, like, people with university degrees are better or something like that. Certainly not. Absolutely not. But we are saying that if you, like, provably have a poor understanding of a subject, then you shouldn't use your knowledge of that subject to inform what you think the future of humanity is going to be, or you should investigate or educate yourself on the subject more. There are a few subjects that I think are important to educate yourself on these days. Particle physics, I think, is a very important subject to educate oneself [00:38:00] on, because it's very important in terms of, like, the nature of time, how reality works. Neuroscience is a very important topic to educate yourself on, because it's very important to how you perceive reality. It was also very interesting, like, he thought the human mind was, like, a single hierarchical architecture. Anyway. And then another really important one that I would suggest is some psychology, but unfortunately the field of psychology is, like, so pseudo right now that it can basically be ignored. Like, our books, the Pragmatist Guide series, go over basically all the true psychology you probably actually need. And then sales. Sales is how you make money. If you don't understand sales, you won't make money. But other than that, are there any other subjects you would say are probably pretty important to understand? AI? Governance structures?
Simone: I... I mean, I would say general biology, not just neuroscience, but yeah.
Malcolm: That seems right to me. So cellular biology I would focus on the most, because it's the most relevant to other fields. [00:39:00] And, oh, by the way, this is useful to young people: if you ever want to study, like, the fun parts of evolution, the word you're looking for is comparative biology. Actual evolution... evolution is just a bunch of statistics, and it's actually pretty boring. Comparative biology is: why does it have an organ that looks like this and does these things in this way? Just something I wish I had known before I went. I did an evolution course and then a comparative biology course, and loved comparative biology and hated evolution. Because that just wasn't my thing.
Simone: Hmm. Well, I enjoyed this conversation, and I hope that Yudkowsky doesn't see this.
Malcolm: Why?
Simone: I, I dislike conflict, and, you know, I genuinely think he means well. He just has a combination of ego and heuristics that is leading to damage, if that makes sense.
Malcolm: Do you think that he is capable of considering that he may be wrong in the way he's [00:40:00] approaching AI, and that he would change his public stance on this? Like, do you think he's capable of that?
Simone: Yes, and I think he has changed his public stance on subjects, but I think the important thing is that he has to...
Malcolm: No, no, no, no. He's never done it in a way that harmed him financially, potentially.
Simone: Oh, well, I mean...
Malcolm: Well, yeah, but my point is that this could potentially harm the organizations that he's supposed to be promoting and stuff like that, if he was like, actually, variable AI safety risk is the correct way to approach AI safety risk. You think he could do that? You think he could raise money on that?
Simone: For sure. Yeah, he could raise money on that.
Malcolm: Well, I'd be very excited to see if he does, because you could raise money on it. Yeah, I mean, I don't think that it would be...
Simone: There's a lot of work to be... there's a lot of really important work to be done. Yeah. And I agree that AI safety is a super important subject. But yeah...
Malcolm: Well, I mean, and the worst thing is, the best case scenario for the type of AI safety he advocates is an AI dictator which halts all AI development. Because you would need something that was constantly watching everyone to make sure that they [00:41:00] didn't develop anything further than a certain level. And that would require sort of an AI lattice around the world and any planet that humans colonized. And it's just so dystopian, this idea that you're constantly being watched and policed, and of course other orders would work their way into this thing. It's a, it's a very dangerous world to attempt to create.
Simone: Ah, yikes. Well, I'm just hoping we end up in an AI scenario like the Culture series by Iain Banks. So that's all I'm going for. I'm just going to hold to that fantasy. If I can move the needle, I will, but right now that's not my problem. I'm not smart enough for it. There are really smart people in it. So, we'll see what happens.
Malcolm: I love you so much, Simone, and I really appreciate your ability to consider new ideas from other people, and your cross-disciplinary intelligence. You know, I love how we were doing a video the other day, and you just happened to know all these historical fashion facts. You happened to know all of these facts about how, like, supply chains have worked throughout history. And it really demonstrated to me [00:42:00] how I benefit so much from all of the things you know. And it is something that I would recommend to people: the person you marry will dramatically augment the things you know about, and they matter much more than, like, where you go to college or anything like that in terms of where your actual sort of knowledge sphere ends up.
Simone: Or, more broadly, the people you live with, you know. Like if you, yeah, live in a group house. I think a lot of people live in Silicon Valley group houses because they love the intellectual environment, and they would just die if they left that after college, or after whatever it is they started at. I feel the same way about you.
I love that every morning you have something new and exciting and interesting and fascinating to tell me, so please keep it up. I'm looking forward to our next conversation already.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com

From "Based Camp | Simone & Malcolm Collins"
