Weyl Versus the Rationalists


Author’s Note: This post is based in part on Discord conversations and a couple of email exchanges with Glen Weyl, but is largely original

Recently, the author of one of the articles my earlier post engaged with wrote a response to another of the articles that post engaged with. This is a disagreement I spent a few days following the way some people follow celebrity divorces. Part of my interest was due to the people involved, Glen Weyl and Scott Alexander, whom I had rarely seen acknowledge each other before. An even bigger part, however, was the long-standing curiosity, frustration, and confusion with which I have followed Weyl’s criticisms of Rationalists. Fair warning: this will be long and rambling to match my thoughts in the wake of this, and the links will be very Twitter-heavy. Much of what I say here will be based on this particular encounter, but it will also draw on earlier comments and a couple of email exchanges I had with Weyl concerning his criticisms (which I have gotten his permission to share some details of). There will also be some footnotes based on a subsequent phone call.

Whenever one of Weyl’s critiques surfaces, I wind up thinking about it way too long and way too hard for my own good. The first time I read his “Why I’m Not a Technocrat” article, I was left pacing in my school’s library for at least an hour, and was pretty much unable to do work for the rest of the day. On reflection, I credit this to a couple of things. One is that he has made this out to be a big problem. At one point in his recent comments, he described the Technocratic attitude he sees Rationalists as manifesting as “Darth Vader”. In one of our later email exchanges, I defended an anti-work automation futurist position, and he replied that he found views of this sort “incredibly problematic and unethical”. He didn’t elaborate at that point because he didn’t have the time/bandwidth (and make no mistake, when he had time in our exchanges he was extremely generous and thorough; his first email response to me was one of the nicest emails I have ever received). This urgency is worrying to me if I can’t see what’s wrong. I don’t want to be doing anything seriously bad without noticing. I don’t know how constructive this heuristic actually is, but… I really don’t want to be one of “the baddies”.

While I am not perfectly in line with the Rationalists or anything (alright, for the purposes of this discussion at least, I might as well just call myself a Rationalist; it is pretty descriptive), I share several of their ideas and admire several associated figures. When I read or interact with them, I don’t find anything nearly as objectionable as Weyl does, which I think should be worrying if we are looking at the same thing.

That’s not all of it, though; I have seen harsh criticisms of Rationalist and Effective Altruist thought that I have been able to shrug off. Part of it is that I have a lot of prior respect for Weyl’s views, but maybe an even bigger part is that I understood those other criticisms and simply think they are wrong. A big part of why I can’t let Weyl’s criticisms go is that the claims he makes about Rationalists are hard for me to judge, because they rarely connect to anything all that specific, and I often just disagree with the loose impressions he gives. One of the most frustrating things has been a sort of “it’s obvious when I interact with them” line I have seen him repeat a couple of times. As a way of demonstrating what is wrong with Rationalists to outsiders, this has the potential to be effective: maybe they will read the exchanges and simply notice what’s wrong.

As a way of talking to Rationalists, it is frustratingly unhelpful. After all, Rationalists speak to (and, hell, as) Rationalists pretty frequently. If there were something to notice in how Rationalists talk, they would have noticed it. If Weyl can’t explain what it is that’s wrong in his view, pointing at Rationalists and, in effect, just saying “you!” does not offer clarity. The main specific thing I can think of that he’s pointed to as problematic was his conversation with Eliezer Yudkowsky. It’s true that this went badly, but, putting aside whether it went sooo uniquely badly for a Twitter argument, it is probably the worst exchange I have personally seen Weyl have with a Rationalist; Yudkowsky apologized for it later; and Yudkowsky is, in my opinion, not the most agreeable person in the community to begin with (I have had issues with how he approaches some conversations myself). Weyl also mentioned to me that some of his impressions of the community have come from more junior figures he is uncomfortable calling out, which I can respect. It is also possible that if I regularly hung out on LessWrong or something, I would get a similar impression myself (for all I know, my own interactions with him might have contributed to his negative impressions of the community as well). As it is, my interaction is almost exclusively reading the writing of more senior community figures, and interacting with self-described Rationalists like Nick, who is, if anything, in my probably biased opinion, even more open and intellectually receptive than the average community member I know of 1. It is possible that the difference is just in the different corners of Rationalism we interact with, but this doesn’t explain my disagreement with his assessments of those we both look at. 2

Still, I will admit that another big part of why I was so interested in Scott Alexander’s response to Weyl is that I thought their interactions would go much better. I think they did, but it still felt like they were talking past each other much of the time. In particular, Alexander started off with at least two big misunderstandings of Weyl’s work. The first is that Alexander claimed that on a spectrum of “mechanism” versus “judgment” (systems that produce dynamic, individually uncontrolled outcomes, versus subjective interpretations by some individual or body), Weyl favors policies that are less mechanistic and more judgment-based. As Weyl himself pointed out, this contradicts pretty much his whole career, which has been focused on designing mechanisms that are better and more responsive. 3

The other major misconception of Alexander’s is reading Weyl’s claim, that Technocrats don’t value “continental philosophy and the humanistic social sciences” enough, as basically a suggestion for how to appeal to the non-Technocrat masses. What Weyl actually means is that part of what is required to appeal to the masses, or to a diverse array of people from the masses, is writing in ways that appeal to various worldviews and memespaces, including ones very uncomfortable to the average Rationalist, like -gasp- continental philosophy. This is not a suggestion that a standardized message couched in Heideggerian jargon is going to be more accessible to the average person on the street than what Rationalists are doing now.

Still, Weyl’s responses to this particular misunderstanding leave much to be desired as well. Late in the comments thread, he seems to imply that the difficulty Rationalists like Alexander have understanding his article is evidence that being legible to outsiders is not something they really think about or value. Aside from the fact that one of my favorite write-ups on the value of translations is from a Rationalist blogger, this seems like a far less obvious takeaway than the simpler one: that Weyl himself did not succeed in translating into the language of the Rationalists. Indeed, another blogger wrote up a translation of Weyl’s piece that Alexander finally understood.

While I agree that there aren’t many write-ups of Rationalism in continental terms (which, considering few people exclusively think in continental philosophy, seems forgivable), it is plain wrong to claim that Rationalists don’t engage with humanities-based explanations or more accessible, publicly legible modes of communication. Scott Alexander himself has written a fully online novel. Many people were drawn into the Rationalist community through a fanfiction website due to the stories of Yudkowsky and others. Bloggers like Tim Urban use common vernacular, doodles, and graphs to make concepts more easily digestible 4. Kelsey Piper writes for Vox. Peter Singer (more Effective Altruist than Rationalist) helped bring ethics out of the ivory tower and into the public eye with popular works like “Animal Liberation”. Bryan Caplan recently wrote a serious policy book as a quick, fun graphic (novel? non-fiction?) that has been praised for its innovative style of economics communication by occasional Rationalist critic Tyler Cowen.

This is not everyone, but it is not insignificant. And yet, when this was pointed out to Weyl after his Technocrat article was released, his rough response was to acknowledge the problems with his own community without conceding an inch on the Rationalists, preferring to say that all of their efforts were only to make their work more accessible to white men… I have to wonder which part of the efforts mentioned in Lantz’s thread suggested “white men” to Weyl. Surely fanfiction is notoriously the most reactionary subculture on the internet.

I think that Weyl’s whole perspective on the Rationalists is also wrongheaded in a key way, or at least limited. In particular, I think Weyl compares the Rationalists to casual or political discourse too much, and to scientific institutions not nearly enough. Nearly every criticism he brings against the Rationalists applies far more strongly to science, to science-deferent skeptics, and even to conventional popular science. Scientists do very important original work, but their institutions and incentives are sort of terrible, and have led to a major epistemic crisis. Professional scientists make a habit of writing almost unreadably in order to be taken seriously by journals, which then put up paywalls that are prohibitively limiting to anyone in the public who would tolerate this unnecessary style anyway. Most of all, scientists have tremendous power over policy and technology, much more than self-described Rationalists. None of this is ignored by Weyl; in our emails he said that the discipline of economics, for example, is a key focus of his criticism as well, but I haven’t seen him do much to connect his critiques of science back to the Rationalists.

In particular, the story he gave in our emails of his view of the Rationalists is that the movement started as a way of withdrawing from the irrational masses and creating a space where smart people could make actual intellectual progress together. If you start by looking at the sciences, this is already pretty much how things work, and the early evolution of Rationalism actually runs in reverse from this.

Unlike scientists, Rationalists attempt to write with minimal jargon and obfuscation, and are open to different mediums and genres for this communication. There is an attempt to expand the epistemology of science so that it doesn’t just take studies as holy texts, but is applicable in day-to-day life and on difficult, speculative questions, and can offer meta-critiques of things like the replication crisis and the assumptions that drove it, by appealing to concepts more basic than science. The major works of the movement are published for free online. I have been reading The Sequences for the first time recently, and am struck by the criticisms of science and of skeptics, in particular those who adhere to science as an aesthetic or genre. Most of all I am struck by the fact that Rationalists aren’t just science popularizers. Popular science still distinguishes the scientists (“One day, with work and expensive higher degrees, maybe you can join us… if our administrators let you!”) from the people being taught. It is free to participate in the discussion spaces of Yudkowsky, Alexander, and Hanson. You are all Rationalists.

One story of The Sequences is of a condescending nerd who needed to pronounce the inferiority of the biased masses as loudly as possible. Another is of a high-school dropout who liked science fiction and was aggravated by condescending self-proclaimed skeptics telling him his weird research interests were unserious. Both are OK origin stories (and neither works as a full story of The Sequences, let alone Rationalism), but if I can only choose one, the latter is more persuasive to me. It is also one that puts the Rationalists, as a movement, significantly in line with how Weyl sees RadicalXChange versus Technocracy.

I have spent some time pushing back against Weyl’s criticisms, so where do they work? Many of his specific criticisms are ineffective on me, but he was writing them for some reason. This wasn’t just some deep personal vendetta; by his own admission he is not an expert on the Rationalists, and this probably drives a good deal of what falls flat in the specifics 5. First, since I haven’t written any really dedicated criticism of Effective Altruism here yet, I want to point out a significant way I think the movement is Technocratic in the way Weyl describes.

I roughly agree with Robert Wiblin both that Weyl’s specific criticisms of EA are often importantly misguided, and that his general points about Technocracy are relevant to the movement. The biggest part of this is the focus on controlling how the movement is perceived and how it expands. I recall hearing semi-official advice (directed at someone else) not to form an EA club unless you are really careful and sure that you know enough about the movement yourself. I started my own school’s club while something of a baby EA, with many blind spots and hesitations. To this day the RIT EA club is extremely loose and casual. The Discord often features discussions of important cause areas, but also cartoons and prog rock and Sherlock Holmes video games. It is one of my favorite spaces, and it likely wouldn’t exist if I had followed the current conventional EA wisdom.

I get why the consensus is currently like this: backtracking on early associations between the movement and “Earning to Give” has been an unending slog. Still, it is ultimately unsustainable, and if the existing memes in EA spaces only work well under a semi-esoteric system, the better approach will have to be improving the memes themselves, to lower the risk from spreading them. But this is not too hard of a fix in my opinion. It’s non-trivial, but I expect it will happen.

Weyl’s harshest criticisms have been of the Rationalists, and his specific Technocracy arguments, in my opinion, just don’t apply as well there. One part of his unique concern over Rationalists, as he has said both in our emails and in public, is that the Rationalists are uniquely positioned to influence the development of AI. I have strong disagreements with Weyl’s views on AI, which I have communicated to him, but he didn’t have much time, so I didn’t get to know his counter-arguments well.

His Technocracy article in particular (Alexander didn’t interact with this section at all, finding it so egregious that he thought “it would be kinder to all involved to just pass over it entirely”, one admittedly clear case of condescension) seems to suggest that the AI alignment field is dominated by people who want a benevolent AI dictator, or what amounts to one. Many unfortunately do, but, for reasons he himself gives, many do not (I do not know Yudkowsky’s own specific views, but I am pretty sure from the relevant Sequences I’ve read so far that the answer is “not”).

Furthermore, the specifics of the AI alignment problem are controversial; there is a good deal of speculation that different architectures may not have this problem at all. One of the most prominent figures in AI alignment, Stuart Russell, believes this. Weyl seems to take a view like this (though he phrases it differently from others), but it still seems to me that he has a bad habit of:

  1. Assuming homogeneous opinion among people in AI alignment circles on the specifics of this issue

  2. Assuming Rationalists have way more power to shape AI than others he might criticize (if he believes, for instance, that OpenAI will produce the first superintelligent AGI, he may be right, but as per point 1, I see no reason he couldn’t target OpenAI more directly if so. Rationalists do have a ton of power in the emerging field of AGI safety, but AGI safety research seems like a strictly good thing that they wish others would get involved with more as well)

  3. Pathologizing the disagreements of his detractors. That is, insisting there is something deeply wrong with them or their ideology because they disagree with him on this difficult set of questions (in a way that other voices in the field with similar views, like the aforementioned Stuart Russell, simply don’t)

There are some additional related concerns I have: for instance, that he doesn’t see value in architectures that would almost certainly have the alignment problem, even though I think there are some convincing reasons people want them; also that he isn’t risk-averse enough about AI alignment in the event that a dangerous architecture is in fact built, or in the event that he is wrong about it not being a problem for his preferred architecture. I won’t dwell on this point; it is probably the biggest part of the reason Weyl cares so much about the Rationalists, but I disagree with it significantly as a criticism.

That said, there are other reasons why I think Weyl is particularly suspicious of the Rationalist movement, even if they aren’t as big a part of why he cares so much. In particular, there are three things visible about Rationalism from the outside which do not directly explain what is wrong with the Rationalist movement, but which would make a reasonable person very worried that something is wrong.

One is that some people leave the movement fairly traumatized, feeling that it was unhealthy and cult-like (they often call themselves “Post-Rationalists”). If I recall correctly, Weyl has said that he knows several ex-Rationalists in the RadicalXChange movement (sadly I’ve been searching and can neither find a link, nor him saying this in our email exchanges; I believe I am thinking of an old tweet), so he is likely aware of this. Then again, the one concrete example I am aware of him giving is Taiwan’s Digital Minister Audrey Tang, about whom Gwern Branwen found evidence that they likely weren’t very involved with the Rationalists, and who, though commenting only briefly, gave the impression of being ambivalent at worst about the community (though I suppose this could have been politeness).

Regardless of who Weyl knows (and he may not wish them identified), people fitting this description do exist. This thread in particular gives a disturbing overview of one person’s experience. My impression is that this mostly happens to people who are deeply immersed in Rationalist memes, which prove unhealthy for many. There are others who find these spaces extremely nice and healthy. I am not super familiar with “Post-Rationalist” figures like this, but the fact that these people clearly exist, with varying levels of trauma, gives a reason to worry.

Another issue is that Rationalists are not very diverse at all 6, as Weyl notes implicitly in his comments on Frank Lantz’s thread. This is especially true along race and gender lines. There are other ways the community is fairly diverse; for instance, there are many queer and neurodivergent people in the movement. That said, this low diversity is a bad sign, especially considering that most of the survey respondents are left of center, which means non-men and non-whites are likely even more underrepresented than we would expect against the background population (I haven’t done an analysis on the full data at all; I’m just looking at the aggregates. I lack the skills to go beyond this, and don’t know the stories doing so might tell).

The final factor is one that I’ve mentioned before, and which is an especially common criticism, one Weyl also notes in his Technocracy article: Rationalist spaces have served as an incubator for Neo-Reactionary thought. In my aforementioned article, I point out that Neo-Reactionaries are still uncommon among Rationalists. Still, going off the data set again, they appear to be around 5.1% of Slate Star Codex readers. Considering how small the Neo-Reactionary movement is, this is probably way, way bigger than their proportion in other communities. Why are these far-right weirdos so much more keen to hang out with Rationalists than elsewhere, given that “Rationalism”, on its face, has little to do with Neo-Reaction (in fairness, SlateStarCodex might overrepresent NRX relative to other Rationalist spaces, since the blog has made posts directly arguing with their positions in the past)? I think in terms of direct effects, Neo-Reaction is simply too small to have a huge effect on Rationalist ideals as a whole, but it is certainly worth being suspicious when you have nearly half as many monarchists as women and nearly ten times as many monarchists as black people in your community (again, exclusively drawing from the SlateStarCodex survey).

The trouble with each of these observable issues is that on their own they don’t present obvious causes. I think a big part of Weyl’s discomfort with Rationalists comes from these outside observations, but most of his criticisms are of the movement’s ideology. The truth is, it is not perfectly clear what causes each of these, and I remain somewhere between confused and unpersuaded by his explanations. In fact, I can imagine relatively innocent explanations for each of them. 7

The Post-Rationalists are hard to explain innocently, but it may be something as simple as “don’t get too hyper-sucked in, because that is damaging, like other excesses of ideology our brains weren’t meant to take”. Without knowing precisely how common this phenomenon is, I can’t tell how unusual it should seem for some people to get in too deep. The low diversity might stem, in large part or fully, from sampling from less diverse spaces. I’m not an expert on how diverse CS and AI spaces are, but as I recall they are hardly representative of the population, and in my experience these are among the spaces where Rationalism is most popular. This is not good, but it would be a problem with the relevant fields rather than the community. As for the Neo-Reactionaries (and explaining them away will take some doing, so bear with me), freer speech norms are a significant part of Rationalist culture. As I’ve discussed, there is a natural tendency for this to make more extreme and horrifying ideas more prominent. Any free-speech space will develop some version of the Alt-Right, and NRX is the Rationalist version. Given this, there is nothing unusual about them hanging out in these spaces, and if it is allowed that they are less horrible than the Alt-Right (I think this is probably true; very racist monarchists are still better than genocidally racist fascists, though these aren’t 100% accurate descriptions), then the Rationalist movement may have actually improved this inevitable fringe through its norms (though not nearly enough).

All of these are nice rationalizations, and each is probably part of the story, but none of them convinces me totally. Here are a few more sinister stories. QC is right about the abusive dynamics at work in some Rationalist organizations, and the apparent urgency of their work leads many to violate norms that are important for the mental health of others, or to use this as an excuse at least. The low diversity may have started as some base-rate problem or whatever, but over time the low diversity and the presence of NRXes turned potentially interested people off, disproportionately those from underrepresented and marginalized groups. The NRXes stick around Rationalist spaces because they are put up with, but they started out there because some Rationalists really like SJW-bashing, to an unhealthy degree. All of these stories seem at least as plausible, and there are undoubtedly more sinister, if less plausible, stories that also have some force.

Roughly speaking, there are three ways these types of externally visible things can be a problem. As mentioned, they can indicate that a problem exists. Even if it turns out that they don’t, however, they might directly cause a problem. Low diversity, for instance, can leave blind spots in the ideas and priorities of a group. They can also indirectly cause problems by making potential allies suspicious. This sounds a little dirty to even mention, but for the types of reasons I have already given, these external factors should make a reasonable outsider suspicious, and if it can only be confirmed from the inside that there is in fact no problem, many good people will never even get that far (and in this case, even from my relatively inside vantage point, it is not clear to me that there is no problem). This is where I see Weyl. To go out on a bit of a limb 8, he notices these very suspicious things, which might in fact indicate serious problems, and is understandably uncomfortable. He is not particularly inside the community, so his diagnoses are often flawed, but they come from somewhere very significant. All else being equal, this indirect concern gives at least some reason to avoid obvious red flags like these, if you can change them.

I think most of these things are true to a lesser extent, when at all, of the Effective Altruism community. As far as I can tell, NRX EAs are basically non-existent. Based on this survey, only 0.9% of EAs even identify as being right of center-right (arguably a problem in itself, but that’s another story). Proportionally, there are more than twice as many female EAs as female Rationalists, but the proportion of white to non-white people is about the same. I don’t think traumatized ex-EAs are unheard of, but as far as I can tell they are not a huge phenomenon.

To an extent, I think EAs treat these things as problems more than Rationalists do. EAs promote strong norms against violating commonsense morality (or generally being a dick) for the greater good, and despite often endorsing a formally demanding ethic, also discourage berating the less committed. As the post containing this survey indicates, there is also already concern within the community about the low diversity. As for NRX, maybe caring about animal rights and sick people overseas is just fundamentally too bleeding-heart for people who are to the right of the last two centuries. I am far more attached to the Effective Altruism movement than the Rationalist movement, so this is all pretty good for me, but I still badly want the Rationalists to improve if they can. I believe the movement has good bones, and many figures I greatly respect. If there is something that can be done about these types of apparent problems, it should be a priority to discuss what it is, and fix it. At the moment, I don’t see that happening much amongst Rationalists, and that is a pretty serious criticism in itself.


  1. Ed. Note: And same to you! Although, on the opposite end, I fear my openness doesn’t have enough skepticism at the object level. Someday, I should write a post about false beliefs I had during the $GME thing… ↩︎

  2. After I showed the draft of this post to Weyl for his approval, he tried to clarify some of his concerns to me in a call. Some of these remain points of vague, hard-to-explain disagreement between us, but other things gave me a better idea of what, in his eyes, broadly differentiates Rationalists from other flawed online tribes.

    A big part, it seems, is his view that Rationalists come off as saying they are smarter than everyone they disagree with. Some of this is superficial, like the fact that they call themselves “Rationalists” at all (and I agree this name, like “Effective Altruists”, is unfortunate), but another part is how the ingroup versus outgroup is defined. Most online tribes are based on things like political beliefs, so the outgroup is condescended to for having different beliefs, whereas Rationalists are defined more by caring about methods for improving reasoning and beliefs under conditions of uncertainty, so these are the things the outgroup is defined by when it is condescended to. It is hard to think of a solution to this other than “stop being condescending to the outgroup ever”, which would be awfully nice, or just always basing movements on things like political beliefs rather than reasoning norms. I think we remain in disagreement about how good an idea the latter is. ↩︎

  3. I also join the chorus of people in the comments shocked that Alexander had never read “Radical Markets” before, and I sincerely hope he decides to, as I think he will really like it even when he disagrees with it. ↩︎

  4. Ed. Note: “The Cook and the Chef” is still one of the best introductions to rationality, period. ↩︎

  5. In our subsequent call, Weyl defended his difficulty making specific criticisms of Rationalist ideas by noting that he can never get a straight answer on what specific ideas Rationalists stand for. There are some, I believe Nick included, who would contradict me on this, but I actually agree with Weyl (and Alexander) that this is at least in part because there is no defining Rationalist idea, nothing necessary and sufficient. ↩︎

  6. I don’t know where to find a more official survey, so I will defer to the results of the most recent SlateStarCodex community survey in evaluating this. Consequently, don’t take my cited stats allll that seriously. ↩︎

  7. Ed. Note: One SSC article in particular might explain a bit of this. ↩︎

  8. And based on our call, I think Weyl would endorse much of this himself as well. ↩︎


If you enjoyed this post, help us write more by donating to our Patreon.

