Appendices: "Complexities of Free Speech: Some Approaches, Problems, and Reasons to Care"

Author’s Note: this post is a response to an earlier post of mine

For some context on this post, read Appendix A of this earlier post.

Appendix A

I am a little hesitant to write these appendices, in large part because of the feedback pattern so far: the last article I wrote appendices for got a good deal of feedback, most of it neutral to negative, while this one got very little feedback, most of it neutral to positive. Still, these are the appendices I have planned for longer, because there were a number of areas of my original post that I left somewhat unsatisfying or misleading, and which I have subsequently had more thoughts on.

This area of discussion is one I am unusually interested in in general, as you might notice from my prolific presence on our “discourse” tag, and since I already have more relevant thoughts, and areas of discomfort with where I leave things by the end of this post, it is entirely possible that I will write a second set of appendices for this article in the future. Some more brief housekeeping: I would strongly, strongly encourage you to read my original article before this one. It is much longer, and I won’t review most of its concepts here, but I do want to briefly clarify how I will be using the phrase “free speech” for those who don’t read the original anyway. When I say free speech, I don’t mean some legal, rules-based definition akin to the US First Amendment; I mean a state of affairs in which people feel comfortable expressing a wide range of opinions and arguments without fear of direct consequences for those points. This would be something more akin to the ideals described in John Stuart Mill’s “On Liberty”. I have my reasons for this which I don’t want to get into, but when I refer to some sort of “deplatforming” as a way of restricting free speech, I don’t mean that deplatforming literally violates the First Amendment, and likewise, when I say that it is worth considering restricting free speech in some way, I don’t mean that we should carve out exceptions to the First Amendment.

The other reason for my writing this is sort of the elephant in the room. Since my last wave of posts, the core interest of that article, “free speech”, has been discussed a lot more. This has not only caused me to think more about some of the issues relevant to my original post, such as a couple of additional heuristics and the flaws and advantages of the ones I already discussed, but it also feels obligatory for me to express some opinion of my own, since I have already written about the topic a fair bit. I therefore wanted to start out these appendices with a discussion relevant to recent controversies. Hopefully this opinion will not be too unoriginal to be worth voicing, but I think it is something that hasn’t been dissected quite how I want to dissect it. This is the issue of firings (side note: when I first started working on this appendix, this issue was much more discussed than it seems to be at this point; possibly more people explicitly endorse a position like mine now, as there seems to have been some movement in that direction). Some of the most egregious recent firings have already been prominently discussed by left-IDW people (is that the best descriptor? It at least seems to fit) like Yascha Mounk. In my opinion, the most significant issue in the arena of free speech today, or at least the one I have thought about and developed the strongest investment in, is firings of this sort (or, more broadly, the major career penalties that go along with them, such as having a very difficult time finding work at all). Thinking about this has led me to the view that penalties of this sort are not a good idea, or at least are very rarely a good idea, and that we should have a strong presumption against normalizing them.

The biggest part of this is organization. When you get someone fired, presumably you are sending some sort of a signal. Indeed, since getting fired is very bad for most people, it sends an incredibly strong signal, arguably on the level of some legal penalties (though not similar to legal penalties in every relevant way). As with many speech signals, however, it tends to be disorganized on a matter where a more specific signal seems to be needed, and difficult to achieve. Emmanuel Cafferty, one of the oft-repeated recent examples, said of his sudden firing (for making the okay sign, unaware that it is sometimes used as a dog whistle), “what am I supposed to learn from this? It’s like I was struck by lightning.” If there is not a clear rulebook for what will and won’t get you fired, the signal will not work as intended, whether or not the rule being signaled is a good one. Before creating predictable norms of this sort, it seems as though you need an idea of what makes such a rule sufficiently just to begin with. Here, three things all need to be decided. The first is how bad a belief has to be to justify a firing. The second is how big a penalty the firing is for this person. The third is what sort of penalty the firing is supposed to be. This is in descending order of how often I have seen each one discussed, but all seem relevant, and indeed some of the less discussed ones seem to need answering before the more discussed ones can be answered well.

The relevance of the first is fairly obvious: firing is a much harsher, and more discussion-chilling, penalty than things like not inviting someone to speak, banning someone from a social media platform, and so on. Insofar as we believe that speech penalties of some sort can be justified, it seems that something like a firing in particular could only be justified at a certain unique degree or type of harm. I don’t think this one is too controversial, but even it is arguably pretty hard to isolate; deciding which speech to restrict in the first place is pretty difficult to find a good, consistent standard for, as discussed in my original post.

The second admits a huge range of possible values; I think on its own it is enough to make penalties of this sort very questionable. Getting fired is a very different penalty for a millionaire (of course this also applies to billionaires, but I’m not sure I can even think of any billionaires who have been successfully “canceled”) than for someone living paycheck to paycheck. It gets worse. Your average person living paycheck to paycheck has almost no personal influence compared to many millionaires. It gets worse still. It is much, much easier to fire someone who isn’t wealthy or powerful over very little than to fire someone who is very influential. Cafferty is a key example of this, since he wasn’t even fired over views he did hold, just ones he was accused of holding, based on exceedingly dubious evidence, because he wasn’t powerful enough to defend himself with any force.

In fact, I have seen the difficulty of exacting lasting career penalties on the powerful cited as a reason that “cancel culture” is not really a thing, or at least not a problem. When defined as backlash specific to the powerful, I tend to agree, although this is not how I see many modern critics of “cancel culture” define it, nor is it clear to me that it can be easily divided off as a phenomenon independent of similar backlash against the less powerful. In my view, what these cases of the less powerful facing significant consequences teach is that it is plausibly a problem. What cases of the unaccountable powerful teach is that it just isn’t much of a solution. Applying a standard that assigns a flat rate of backlash to anyone who expresses a certain view will have a very hard time reaching the important targets, and even when they are reached, they are often wealthy and influential enough to continue loudly saying what they want. As for incentives, even if a celebrity is fully, successfully “canceled”, they are likely to have enough money to lead a comfortable life in retirement. Meanwhile, the livelihoods of less wealthy people who have done much less are the plausible collateral damage of normalizing this tactic.

The final factor, which is most relevant for people who are not wealthy, is what is expected to happen after the firing. Wealthy people will perhaps just undergo an early retirement, but should the fired person get another job, or be permanently unemployed? What sort of other job should they be able to get? Permanent unemployment seems like a pretty harsh penalty for, well, almost anything. Perhaps it would be a decent fate if welfare covered someone’s cost of living (factoring in location and family size) indefinitely. As it happens, since the 90s in particular, welfare in the US is usually conditional on unemployment being short term. If you get someone fired in the US, which by default communicates to possible future employers that they are a liability, you likely condemn them to homelessness (at least those who don’t have a loved one willing to provide for them indefinitely). If I had to choose between being homeless and being in prison in the US for a given period of time, I’m honestly not sure which I would pick. A further factor is that if you are the center of a social media controversy, you may be getting a few even more sinister messages, including death threats. This is not a situation in which you want to have insecure housing and healthcare. A possible solution is to improve the welfare state so that this is not as much of an issue. While I believe improving and expanding welfare is a good idea anyway, as a solution here it is not terribly convincing. If someone could either live off of long-term welfare or get a low-paying job, it is not clear that the former is preferable as a punishment, and it is certainly more expensive for the state.

A less harsh penalty might be that certain employers should be able to hire someone, but not others: for instance, lower-paying work, or work that is less influential on discourse. This is another of several reasons why Cafferty’s case is often held up as especially disturbing. His job was mapping utility lines; he was not being paid millions, and he was hardly an influencer of some sort. It is very unclear what career penalty could have been intended here.

There are stranger possible intentions, for example that someone should only get hired by places that share their stated values: for instance, if a conservative New York Times writer gets fired, they should start writing for Fox. This reshuffling doesn’t seem clearly worthwhile to me, but based on the “not aligned with our values” statements companies often issue in the face of controversies, it seems to at least be part of the picture. It seems to me to be more of an explanation than a justification. Another strange possibility is that you want someone fired, but just once, and then anyone else can hire them: a sort of brief penalty in stress and loans and time wasted finding something else. As mentioned, however, in practice it is hard to get someone fired without implicitly labeling them as a liability. A final possibility, almost as disturbing to me as wanting someone’s penalty to be homelessness, is wanting them to apologize before they are allowed back into the workforce. Coercing workers into issuing transparently insincere apologies on pain of joblessness is certainly a power move, but come on. That said, I like to think the motive is rarely quite this; perhaps something lighter, like requiring an apology to be allowed in certain roles, is more common, though not much more conducive to sincerity.

Still, any of these apparent desired penalties (homelessness, a different type of job, a differently politically aligned job, a coerced apology) requires the ability to communicate precise information about what you want to happen. And organizing who it happens to in the first place requires yet more consensus around how harsh a penalty someone deserves, how harsh a penalty being fired is for a given person, and what the conditions of the firing are. All of these things need deciding. I don’t see attempts of this sort to get someone fired producing a message anywhere near this clear. Even if a firing were based on a signed open letter that was specific on all of these points, there would be a different signed open letter every time. To a degree, penalties of this sort will still feel more like being struck by lightning than like getting one’s just deserts.

There is an additional problem: people often want to criticize views without wanting those they criticize to lose their jobs. It is my impression that, in the case of firings like those highlighted by Mounk, pretty few people supported them. I have not examined each case specifically, but there are some signs that a significant part of the backlash that may have caused firings was not an attempt to get the person fired. In a recent article on another of the common examples of unpopular recent firings, David Shor, Matthew Yglesias pointed out that while Shor’s incriminating tweet was pretty widely criticized, most people seem to agree the firing went too far (and, contrary to the official story of Civis, as both this article and Mounk’s note, this tweet is pretty much certainly why he was fired). Indeed, the two tweets Yglesias highlights as examples of backlash, while both quite critical, did not call for Shor to be fired.

One criticism of the critics of “cancel culture” I have seen goes along the lines of “cancel culture is just people complaining about being criticized”. While I think most versions of this are just bad criticisms (either attempts to play dumb about people who oppose things like firings, or notes on the trivial fact that people who are less likely to personally face career penalties for criticizing things like this are also more likely to publicly criticize them), there is a significant charitable interpretation as well. It may be that a ton of people criticize someone’s views, a small handful call for them to be fired or face some other serious penalty, and, in part based on the size of the larger backlash, not just on how many people demand the specific penalty, this person is penalized in a significant way. If you criticize someone’s views, and the next day they get fired, are you at fault? Or more to the point, if you know someone will fire a person if you express your disagreement with them, but you don’t think they should be fired, should you avoid expressing this disagreement? This should not even have to be a question.

Now we see a secondary effect of things like firings. If firings are widely unpopular, and most people don’t want them to happen to most people they would criticize, then that also does something to silence the critics. Those who complain that defenders of free speech don’t respect the free speech of critics could be talking about a very real effect that should not exist.

The aforementioned Matthew Yglesias was himself recently involved in a bizarre dynamic of this sort. His coworker Emily VanDerWerff wrote a statement criticizing him for signing the notorious “Harper’s Letter”. Subsequently, some people criticized her for making this statement, claiming that it was an attempt to get Yglesias fired, even though she specifically said in the statement that she didn’t want him to be fired. Then, after VanDerWerff received a bunch of threats, there was backlash against the people who had criticized her criticism, for criticizing her and so sending a Twitter horde in her direction. It would almost be funny if it weren’t all so horrible to watch: a backlash over meta-free-speech standards triggering backlash at a criticism for things the criticism didn’t advocate, in turn triggering backlash against critics of the critic for things their criticisms didn’t advocate. This… does not look terribly healthy.

This type of dynamic is hard to fix, and I’m not certain what to do myself. I think one thing all of this shows is that severe career consequences are usually best taken off the table altogether. Less risky and more egalitarian penalties, like social media deplatforming, should probably take their place in cases warranting a strong response. The role of employment in these discussions immediately raises the stakes of all criticism; it can have a chilling effect both on people who fear getting fired and on people who fear getting them fired, or who just fear accusations of attempting to. Still, telling companies to please not fire people carelessly seems pretty toothless as a fix to me. Broadly, I agree with Zaid Jilani’s article for the aforementioned Mounk’s own publication “Persuasion” that there may be a unique opportunity, and unique reasons, to implement better legal protections for workers in this area instead (and that we are in a unique time when this may be politically palatable for both the left and the right). There are some possible issues with implementing legal protections as well, which I might talk about if I write yet another set of appendices, but I don’t want to get bogged down for now.

It also seems like there are clear cases where firing someone for their beliefs, at least from certain jobs, should still be possible or even demanded. If you find a police officer whose favorite book is The Turner Diaries, I don’t think you should wait to see whether their policing appears subpar or measurably biased before kicking them off the force. But I think broader policy protections against the career penalty in particular are my main contribution to the current free speech controversy.

Appendix B

In my original article, I wrote:

“Another possibility is the idea that free speech shouldn’t argue against something really morally essential, like whether a group belongs in our moral circle for making judgments at all (this is my attempt to define a consistent standard of what is often called ‘hate speech’. As I hope to show, it doesn’t work that well, but without some consistent standard there is little reason to separate ‘hate speech’ from speech that is worth restricting in general, and I believe the other ways of formalizing ‘hate speech’ I don’t mention here have serious problems as well).”

I have thought a lot more about this since then, and have come to the conclusion that not only is my definition for “hate speech” unsatisfying, it is questionable how meaningful it is in most situations. Back in 2019, the conservative philosopher Robert George and the progressive philosopher Cornel West came to my school at the time, Rochester Institute of Technology, to discuss their friendship, and generally how to have a positive relationship with someone you disagree with. I wasn’t able to see it myself, but I did watch the video of it afterwards.

Much of the talk appealed to values I cannot especially relate to, like virtue, and “humanity”, but the part of it that stuck with me the most was the second to last audience question. A transgender non-binary lesbian noted that Robert George had written in opposition to gay marriage, and called transgenderism “absurd”, and they asked Cornel West how they could simply agree to disagree with someone who didn’t recognize their “right to exist”. West pushed back by insisting that George did not question their right to exist or, in his words, their “human preciousness and pricelessness”, but just disagreed on a different level. In other words, the questioner was within George’s moral circle, in the same way as all other human beings.

If I understand West correctly, this is the type of distinction I was trying to discuss, but what precisely does this consist in? West describes “preciousness and pricelessness”, but in practice, more is needed to define what it means to be included within the moral circle. A plausible interpretation of this is that moral patients ought to be treated equally to others within this moral circle by a certain set of standards. In this case the disagreement over whether George denies the questioner their human rights, and thus “humanity”, seems to consist largely in a different view of human rights.

This type of disagreement may not seem like it is quite as strong as challenging someone’s moral patienthood or “right to exist”. I am significantly tempted to outright agree with West that the questioner’s interpretation is too strong in this case, a whole different level from “right to exist”, but this distinction is not as overwhelming as it at first appears. Recall my biggest criticism of the moral circle heuristic in the original article, when I point out that a slave owner could say that their slaves are within their moral circle, if this doesn’t entail any specific assumptions beyond self-declared inclusion by the offending party. In this case the purported view could be that slaves, as human beings, deserve equal access to human rights, but that it is not contrary to human rights to be property, and once you are sold to someone, it is proper for you, and any of your offspring, to be slaves of your current owner. The better view is that humans have a fundamental right not to be property, and that therefore, you are not treating someone as a real person by using them as property.

In the case of Robert George, he likely holds the view that everyone equally has a right to marry someone of the opposite sex and to live as the gender they were assigned at birth, and that therefore his views are inclusive of gay and trans people. The questioner, on the other hand, likely felt excluded on the grounds that everyone within the moral circle should equally have a right to marry the person they love, and to live as the gender they identify with. Robert George’s views are therefore interpretable as a version of this view that outright excludes gay and trans people from the moral circle. Unlike the slavery example, I think that this disagreement is likely in good faith. Nonetheless, it seems to me that an argument of the sort West makes does not satisfy on its own, because we can, and often do, argue that someone fails to include a group in the proper moral circle if they fail by a certain essential standard, rather than merely by their own self-declared standard.

That being said, there is a substantial difference between a self-declared standard applying equally to all members of a group, and a self-declared standard that altogether excludes that group from equal consideration, especially in cases of good faith. If you are included within a concept of “human rights”, then you must be treated by the same standards, even if these standards are terribly wrong or arbitrary in some way, whereas being outside of these standards means you don’t even have this much security.

Peter Singer, for instance, emphasizes this point in “All Animals Are Equal”, the seminal first chapter of Animal Liberation. Examining common moral intuitions, Singer comes to the conclusion that our basis for treating non-human animals the way we do is not something like intelligence or capacity to feel, since we do not treat humans differently on the basis of intelligence in a way consistent with our treatment of non-humans, and many animals we mistreat probably have the capacity for conscious feeling. Our treatment is explainable only on the basis of a double standard along the lines of mere species membership.

In the case of Cornel West and Robert George’s appeal to common humanity, this is not directly true, as they are both Christian and can claim to be discriminating on the basis of immortal souls, for instance (Singer might still make the indirect case that this view is defensible only when a double standard is applied in how different teachings of scripture are reinterpreted in light of the modern world), but species membership is one of the broadest popular lines for drawing a discrete “moral circle” currently accepted (that is, a double standard that appeals to mere group membership). A question remains, however, of how necessary this difference is: in what ways do the moral circle and the equal rights of its members relate to each other?

There is one bit in the Futurama episode “Neutopia” which seems to capture the obvious objection to the idea of a discrete “moral circle” in a memorable way.

Hermes: “Sorry. It’s in your contract. ‘All female employees must pose nude if requested.’”

Leela: “That’s discriminatory!”

Hermes: “No, it’s in all our contracts.”

The manipulation could be more subtle than this. To go back to the Peter Singer example, there is, rather notoriously, something of a cottage industry around trying to figure out in what ways humans are special. Whether it is some notion of reflective self-consciousness, or tool use, or language, or creativity, or some other categorically significant level of mental life, a candidate trait is often proposed and, at least while it is considered a candidate, upheld as highly morally significant, due to the fact that it is unique to humans. The obvious interpretation is that we want to be able to distribute a contract to everyone, and don’t want it to say “all humans have a right to use non-humans how they want, and all non-humans are to be used how humans want”, so the cottage industry rewards the proposal and discovery of effective proxies for species membership. (A more generous interpretation is that humans have clearly done things that are extremely bizarre for any other similarly biologically homogeneous group to have done. While bacteria are quite accomplished at things other groups are not, they are also a very biologically diverse group. Discovering “humanness” may therefore just be a proxy for discovering, on narrow scales of time and biology, what the hell just happened!? I accept this counter-interpretation, and maintain that the function I describe also exists for some people, and is therefore a valid example of the phenomenon in question.)

It seems like the right answer is that our moral circle should be the one it is natural to draw given our moral principles. I consider myself a utilitarian, specifically a hedonistic utilitarian, which means that I believe that everyone in my moral circle should have their positive and negative feelings counted equally relative to anyone else. This leads to a very obvious way of drawing the line, whoever is able to feel is within my moral circle. I could frame this either as a contract saying “your feelings count equally to everyone else’s” given to everyone, or the same exact contract given only to those who are able to feel, and the result is identical.

Utilitarianism can pretty easily extrapolate the appropriate moral circle from non-arbitrary given rules, but the tradeoff is that it is not at all detailed in practice. The only rule it fundamentally assumes is that we ought to maximize the positive balance of welfare, which requires discussion of what it is that maximizes welfare. Essentially, under the purest version of the moral circle framework as applied by utilitarianism, someone could still defend slavery. Although they would be fighting an uphill argumentative battle, and other standards could be used to restrict the view, there is nothing in this heuristic itself that would absolutely justify restricting their arguments.

The trade-off with something more rights based is that connecting a moral circle to a non-arbitrary moral rule is difficult. You can say that slavery is simply intolerable, but you first need to prove why this is the correct way to balance a rule and a circle under a view like yours. What is it that makes a certain group, say humans or feeling beings, not able to be property? After all, they are not incapable of being property in a physical sense, nor is their outgroup required to be property by definition.

At the end of the day, moral circles are not an incredibly useful discursive category. They provide some limits when they are based on self-declaration, if the one declaring their standard does so in good faith, but it is possible to replicate defenses of nearly any sort of exclusion without appealing to a different moral circle, especially if you are trying to. It is comparatively difficult to find a way of deeply connecting a moral circle to basic rules to counter this effect. Appeals to the moral circle may always face prohibitive limitations, as may stronger, broader definitions of hate speech that appeal to a certain stable set of guaranteed human (or feeling, or something else) rights.

Appendix C

In my original post, when discussing the heuristic of disproportionately listening to those who are affected by an issue, I briefly mention that another heuristic which I don’t discuss is disproportionately listening to experts. As with listening to those affected, I actually think this is one of the relatively better heuristics, and important to apply to at least some degree. Some simple problems I highlighted with the other heuristic apply to this one as well. I mentioned that the “listen to those affected” heuristic has the problem that it sometimes undervalues expertise, in particular in those cases where the affected parties consistently lack expertise, such as children. The converse problem might exist as well, although arguably it usually takes a somewhat different form.

I can think of almost no social/political issues that wouldn’t benefit from referencing the relevant literature as well as listening to on the ground voices. After all, the point of policies is generally to accomplish something on the ground, which doesn’t specifically correspond to the experience of someone affected by it. To extend the torturer example I used heavily in the original article, you may want to listen to the person who is going to be tortured about whether they want that or not, but you should listen to whoever knows how to remove their restraints when trying to free them.

That said, barring a reason to doubt a group’s expertise, such as the children example, being in the affected group has the potential to correspond to expertise, simply because those affected have a strong, sincere reason for wanting to find a solution. This is one of the key differences between the expertise and affected-parties heuristics: most people have strong biases in what they care about and what they are aiming for. In fact, because of the perverse incentives that often exist in the journal/academia seat of expertise, having the wrong biases may be one of the stronger points against just uncritically listening to “expertise” in many fields. There is a good case that one of the most important projects in modern science is the improvement of academic incentives.

Affected parties, like everyone, also tend to be biased, but their biases are aligned with serving the interests of those affected by an issue because they are the ones affected by it. They have more reason, all else being equal, to seek out effective solutions to the problem at hand. If the person who is going to be tortured and the other people in the room have both had the opportunity to study whatever they want in the dungeon library, the person strapped down is much more likely to have sought out the most reliable books on how to unfasten their straps than the others. This is a significant part of the impetus behind ideas like “Futarchy”, in which there is an attempt to use monetary rewards to artificially provide technocrats with real skin in the game.

Still, even a well-meaning, more well-calibrated model of expertise may face problems when overvalued relative to the experiences of those with local knowledge (not intrinsically, but usually, the group affected by the decision). This is the problem of technocracy highlighted by Glen Weyl in his previously mentioned article “Why I’m Not a Technocrat” (which as I understand it draws on James C. Scott’s classic Seeing Like a State, although I’ve never read it). In particular he points out cases where a lack of input from people on the ground seemed to lead to decisions with significant unforeseen problems like post-Soviet “shock therapy”.

On-the-ground experiences may also be important to understanding value in harder-to-describe ways: how bad or good something actually is. A classic example is waterboarding. There has been controversy about whether or not waterboarding qualifies as torture. In 2009 the conservative talk radio host Erich “Mancow” Muller voluntarily underwent waterboarding to prove that it wasn’t torture. He came out convinced that it was torture. This should be incredible when we think about it from the perspective of other types of disagreement. How many times have you gone into an argument, or started reading an article, with a confident opinion on the relevant issue, and come out convinced that you were entirely wrong?

A tricky thing about this type of information is that it’s often difficult to communicate or compare. This is especially problematic because it may be difficult to know which types of experiences have an effect similar to waterboarding in terms of the decisiveness of experience. Local-information issues of this sort are unfortunately pretty immune to the type of bias realignment embodied by things like Futarchy, though other proposals, like Weyl’s own “Quadratic Voting”, try to capture something like this with a budget for weighting your votes, making it costly to signal your personal qualification in a given domain. Either system seems to have strong theoretical advantages over classical expertise technocracy of any sort, and although I believe they both have issues, I think either might have a valuable place if implemented right. Imitating either in casual discourse seems almost hopeless, however, and in forming opinions, both seem to indicate the importance of understanding and paying attention to surveys and polls.
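The cost structure that makes this kind of signaling expensive under Quadratic Voting can be sketched in a few lines. This is a hypothetical illustration of the standard quadratic cost rule; the function names and credit budget are my own assumptions, not details from Weyl’s proposal:

```python
import math

# Toy sketch of the quadratic voting cost rule (illustrative assumptions):
# casting n votes on a single issue costs n**2 credits from a fixed
# personal budget, so strong signals on any one issue are quadratically
# expensive relative to spreading votes across many issues.
def vote_cost(n_votes: int) -> int:
    return n_votes ** 2

def max_votes(budget: int) -> int:
    """Most votes you can afford on a single issue with a given budget."""
    return math.isqrt(budget)

budget = 100
print([vote_cost(n) for n in (1, 2, 5, 10)])  # [1, 4, 25, 100]
print(max_votes(budget))  # 10: going "all in" on one issue buys only 10 votes
```

The point of the quadratic shape is exactly the one in the text: you can afford to signal that you are especially qualified or especially affected on a given issue, but only by giving up influence elsewhere, which makes the signal costly and therefore more credible.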

The other major problem expertise has in common with listening to those affected is that it provides an excuse. Just as saying that young children aren’t qualified to speak for themselves opens up excuses for not listening to groups that are qualified, but who there is prejudice against, saying that expertise in emerging fields like psychology and sociology is worth doubting opens you up to anti-vaxxers and climate-change deniers. Yet it is undeniable that some fields of expertise provide more authoritative information than others; we have a remarkably good understanding of physics at the right scale, while by comparison the most policy-relevant sciences, like economics, sociology, and psychology, just aren’t anywhere near as reliable.

There are other areas where it’s controversial whether “expertise”, of the sort which allows any degree of deferral, exists at all. As David Chalmers noted in a recent interview, it is controversial to say that a philosopher should let what other philosophers believe affect their own beliefs at all (superficially, resistance is understandable, but at the extreme it seems as though being the only moral realist or only physicalist in the world should make you at least a bit wary).

Appendix D

Along with commenting more on things I already talked about in my original post, I want to use some of these appendices to, as with Appendix A, discuss some more heuristics related to speech restriction. Although I didn’t lead with these because they have less of an “elephant in the room” quality to them, they share other things with Appendix A. Namely, they are meant to be generalized limits on the ways it is reasonable to constrain speech, as opposed to suggestions, like those my original article focused on, for deciding which speech to restrict in the first place.

The first of these possible limits is “majoritarianism” of some sort. Crucially, this is inappropriate as a guide for which speech to restrict in the first place; it only works as a way of deciding which speech you should avoid restricting. I don’t think most people believe we should restrict every view that only a minority of people hold, but in brief, doing this would first of all ensure permanent ideological stagnation, and second of all be incoherent without some sort of specific categorization rules. As an example, let’s say that 1/3 of people believe that all guns should be legal, 1/3 of people believe all guns should be illegal, and 1/3 of people believe some guns should be legal and some illegal. At this point, a majoritarian view holds that all of these views are impermissible, since none commands a majority; but so is the view that all of them are wrong, since no one holds that view.
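The incoherence can be made concrete with a quick tally of the gun example (a toy sketch; the labels are just shorthand for the three camps):

```python
# Shares of the population holding each view; under strict majoritarianism,
# a view is permissible only if a strict majority (> 1/2) holds it.
views = {
    "all guns legal": 1 / 3,
    "all guns illegal": 1 / 3,
    "some guns legal, some illegal": 1 / 3,
    "all three camps are wrong": 0.0,  # the view no one actually holds
}

permissible = [view for view, share in views.items() if share > 0.5]
print(permissible)  # []: every view fails the majority test, including the
                    # rejection of all of them, so nothing may be voiced
```

With the camps split evenly, no position clears 50%, so the rule that only majority views are permissible leaves nothing permissible at all, including the meta-position that every camp is wrong.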

On the other hand, the constraint that a majority view should not be restricted is coherent. It may also be an important, measurable quality for some other proposed lenses, like the one that says we should maintain a moral circle expansion when it happens (which involves figuring out what distribution of beliefs exists at a given time). Majoritarianism is not the only possible cutoff for determining matters of this sort, but it is the most obvious, and appears to be the intuitive point converged on by a number of different arguments in this space. While this appears forceful at first, a key problem is that what this “majority” means is not totally clear, and it appears to be most naturally drawn in different ways given the different arguments that might be called in its favor.

Among the simplest arguments is an appeal to the political dynamics of liberal democracy. If a majority of people support a given legal rule, limiting speech about it will not prevent it from winning elections; meanwhile, it shuts down discussion of the topic, and so prevents you from swaying this majority. That is, even if you’re not sure that debate will go your way, you have more to lose than to gain politically by restricting the discussion. 1

One fairly reasonable counter-argument to this is that if you shut down arguments in favor of a view, you don’t shut down arguments against it, and so this majority might still be swayed. This seems unlikely to me. First of all, spite seems to work against it: people are likely much less motivated to enter an argument with an openness to being swayed if they think that they will not, in turn, be allowed to try to sway the other side. Second, it seems unlikely to work because arguments are generally dialectical, in that they are a series of responses to other arguments. Absent a real understanding of the other side’s arguments, the presented arguments are less likely to be convincing. Finally, as I will discuss in the next majoritarian argument, completely silencing a view seems more impractical than only silencing it in some spaces, and partial silencing may lead to echo chambers. If people feel unwelcome in one speaking place, where their opponents are allowed to argue but not them, they may look for a speaking space where their voice is welcome, such as one filtered to reinforce, and even radicalize, their views.

Since there are a number of considerations and counter-considerations concerning the efficacy of this argument for majoritarianism, I won’t belabor the point, but I do want to ask what majority means here. In this case it seems pretty straightforward because it is tied to politics: you should not restrict the voicing of a view a majority of voters hold which they might vote on (presumably views totally outside of the legal sphere are not touched by this argument). This is itself a bit complicated, especially in a federalist country like the US. We might hold elections on an issue that will have legal ramifications at the city, state, or federal level. Since the majority view is likely to differ at all three levels, which one do you need a majority of? If you restrict a view which is unpopular countrywide, but a majority view in many states or districts, then you may not be risking these problems at the federal political level, but you will risk them more locally. This would suggest that, all else being equal, restriction at wider jurisdictional levels is more problematic. Counter-intuitively (depending on your intuitions), this may also suggest that the wider your influence is, the less restrictive you should be in what you are willing to discuss in many cases.

The next majoritarian argument is related to the previously mentioned point: people whose speech is not tolerated on one stage will, if they have the ability, take it elsewhere. To draw on a recent example, will the regulars of the now-banned r/The_Donald and r/ChapoTrapHouse subreddits be more likely to stop making their extremist points anywhere online, or to start frequenting 4chan? If they go to a platform that is inclusive towards speech that is restricted elsewhere, this is likely to make the banned people more rather than less extreme.

In the real world, of course, things are more complex than that. If your views are banned on Reddit, they may be allowed on Twitter. It is significant whether the most available alternative discussion spaces are frequented by a proportionate sample of the population’s views, or disproportionately represent banned speech from elsewhere 2 (arguably a significant dynamic at work here is that any space that has not banned some speech will somewhat disproportionately represent banned speech, but that the recently banned will prefer whichever space they are allowed in that is most ideologically similar to their last one, or just the one with the biggest audience).

Painted broadly, however, the argument here goes something like this. When you exclude someone from a discussion space, you are not preventing them from being further radicalized, and are plausibly making it more likely that they will be. In theory, you are trading this for the insurance that the people who remain able to speak in this space will not be radicalized at all. Perhaps this is in many cases a worthwhile form of damage control, a sort of ideological quarantine, but if you exclude the views of more than half of the participants, it is a method that risks radicalizing more people than it protects from radicalization. This type of argument is also subject to counters, although some are less convincing. For instance, perhaps even excluding more than half of a group is justified, if the excluded parties are less likely to be severely further radicalized than the included parties would be to be radicalized if the excluded were to remain. Again, I will not interact with this much more; it is an empirical question I don’t have the research to say much on.

The alternative, pro-free-speech side of this says that the spaces where more controversial speech is tolerated will be the spaces where good ideas most flourish, and that over time they might naturally expand as more and more people see how nice they are. This is something like Scott Alexander’s apparent suggestion in his classic article “In Favor of Niceness, Community, and Civilization”. The trouble is that Alexander’s own spaces for niceness have not vindicated this suggestion. One of the most common criticisms he has gotten is that, not only does he interact with and tolerate too many views in his discussion spaces, including dangerously bad ones like Neo-Reaction, but these spaces have come to overrepresent those views relative to their proportion in his larger readership, such as the “Culture War” thread on his subreddit, which he eventually ejected into the not-officially-associated subreddit r/TheMotte.

If this criticism holds up, where has Alexander’s view failed? Perhaps he should have seen this coming given the composition of anonymous online forums like 4chan, but maybe he believed his rules of niceness were what was missing from those spaces (and I think they are a significant improvement at least). A tempting answer is to think of it as a failure of free speech in general, but I think coordination issues are more likely. If there are only a few spaces you are allowed in (for people with controversial ideas, the spaces with the freest speech norms), you will spend your time in those spaces, try especially hard to defend your belonging to, and to an extent ownership over, them, and outcompete the people who can afford to share their views elsewhere, who are more willing to concede these spaces, and who don’t have endless energy to respond in them. This tendency is weakened if most other spaces restrict a huge portion of possible views (in which case only a few people have alternative spaces) and/or if most spaces have similarly permissive speech norms (in which case the people with alternatives don’t have many alternatives). One possible consequence of this perspective, however, is some optimism if you are a strong pro-free-speech advocate: if other spaces restrict a majority, the unrestrictive ones may improve overall. If you are on the inside of the restriction, however, the direction taken by a space consisting entirely of views you wish to reject probably won’t look great anyway. Thus, those doing the restricting still have reason not to restrict, on this argument.

The major change this version of majoritarianism introduces is ambiguity as to whether what is required is for the majority view on each issue to be protected, or for a majority of people to have no untolerated views. Since all that is required for most people to be excluded is one issue on which the majority view is excluded, the latter is inherently a stricter measurement. Because of the formation of ideological coalitions, based around associated elements of different issues and old-fashioned conformity, people’s positions on one issue can be highly predictive of their positions on others. Still, this is likely a much stricter standard, one which requires a more general way of picking your battles, and in which your standards for restriction on one issue cannot be judged independently of your standards on others.

It is not clear that this standard is the one that is required by the radicalization argument. After all, people might only change platforms to interact with the other restricted people after a certain portion of their views are not allowed to be voiced on the original, not whenever they have any view that is considered impermissible. On the other side though, some people who only hold tolerated beliefs might still leave a restrictive platform if they would prefer more interaction with those they disagree with than this platform allows. This is, again, a tricky empirical question, but the ideal standard would presumably be somewhere between not restricting the majority view on any given issue and not restricting any view of a majority of people.

Something similar can be said in other areas, for instance specific instances of deplatforming. People are often restricted from speaking on the basis of holding some restricted view, regardless of whether they are speaking about that particular view. To draw on a personally salient example, Robin Hanson’s invitation to speak at an EA Munich event was recently canceled based on some (yes, frankly pretty dumb) blogposts and tweets, despite the fact that he wasn’t slated to speak about related subjects, but rather about tort law. Perhaps this is the wrong way to determine whether someone should be allowed to speak in a venue, but if it is the right standard, then it presumably requires much more permissiveness than a standard that is restrictive on an issue-by-issue basis.

Another way a standard of this sort may come up is in a more deontological or justice-based view of speech restriction. It may be argued that speech restriction is appropriate only insofar as it is a response to someone who is in fact a bad person, based on their views. That is, speech restriction is a punishment, and a punishment cannot be applied to the innocent, those who act permissibly. Although I do not share this view of ethics, and also tend to reject demandingness arguments in ethics, a further possible argument is that a standard of virtue cannot consider the majority of the population bad people. If you are better than the majority of the population, you are by definition a good person, and undeserving of punishment for your level of virtue.

If someone’s virtue is judged by which beliefs they hold, then, the argument might go, it must not be a standard that considers a majority of people deserving of punishment for their views. Therefore punishing by a standard that would punish the median partisan is punishing the innocent, and is impermissible even if it serves the greater good. The most naïve version of this argument is the most demanding version of this sort of majoritarianism, because not only does it measure based on whether someone has any restricted opinions, but since it is a measure of virtue, it arguably requires, as its scope, a majority of all people. Not just people nearby, but across the world, and not just now, but throughout history, since any of these people might be judged for their level of virtue.

The more realistic version of this doesn’t view beliefs themselves as virtues, but as virtuous relative to one’s upbringing or knowledge. There are certain trade-offs involved in this. Those who wish to limit speech, those on the left in particular (though even most on the right or elsewhere will tend to find their particular slate of views much rarer at any other point in history; people tend to have political views highly idiosyncratic to their time), face a problem: their ability to apply even somewhat stringent limits on the basis of virtue, indeed quite possibly their ability to limit the speech of nearly anyone currently alive, requires excluding most people throughout human history from the same majoritarian pool. Doing so, on the basis of relativizing this virtue to the wisdom of their time, requires the assumption that discourse has advanced reliably and significantly, to a sufficient extent that current people really do know that much better than people of the past.

Doing this is a tremendous vindication of the natural progress of discourse in a positive direction. That is, supporting a version of the virtue argument that doesn’t pretty much entirely prevent you from restricting modern speech seriously erodes the tactical justification for manipulating speech. After all, if someone in the modern day should know better than anyone before the 19th century, then modern discourse must have advanced in incredible ways over time. The virtue and tactical justifications for speech restriction do not entirely undermine each other, but, especially for those on the left, they have a strong tendency to; justifying strong restrictions based on one undermines the arguments for the other. This is the part of Alexander’s article that does hold up, and makes it a classic: the reinforcement of the idea that liberal and left values are functionally some sort of “terrifying unspeakable eldritch god” with incredible power over history and discourse.

Something similar could be, and often is, said about epistemic justifications. If you are so confident in a modern-day ideology that contradicts most people throughout history that you consider it appropriate to restrict these historically popular views because of this certainty, then you have made a very strong argument that the quality of our ideas has an incredible tendency to improve, and so there is less reason to restrict discourse. Uncertainty and tactics have a mutually undermining nature similar to that of virtue and tactics.

In the end, some form of majoritarian limit on speech restriction seems to be compelling in many anti-speech-restriction arguments, but there is not a single standard they converge on. Some majoritarian arguments give an overwhelming presumption against any restriction on current speech, but they all seem to indicate, at the least, that a view which is the national majority view should probably not be restricted in discourse. It is only under strange or unstable circumstances that a majority view can be restricted like this anyway, and it seems doing so might create highly unstable social dynamics. Overall, I think the proper takeaway is that we should have at least somewhat looser speech norms, and possibly significantly looser ones.

Appendix E

A final limit on speech restriction I wanted to discuss is meta-deplatforming. Broadly put, this is punishing someone for thinking that a third party’s views should be tolerated, when you think those views are intolerable. I rarely see precisely this heuristic used, but some degree of a view like this is implied in a number of contexts, such as the general category of guilt by association, or cases where the editor or editors of a controversial piece of writing are made to step down from a publication, as in the case of Tom Cotton’s recent New York Times article on military intervention in Black Lives Matter protests. The intuitive logic of this is clear: some speech is unacceptable, and if you enable unacceptable speech, then you are culpable as well. There are three problems with this that make me think it is not a great response.

The first is the most basic. While first-level speech restriction standards can be permissive of some disagreement, treating the defense of airing views you consider impermissible on the first level as itself impermissible is absolutely intolerant. Meta-disagreements about free speech, that is, disagreements about which ideas should be allowed to be aired, leave no range of reasonable disagreement, because any disagreement over which views have a right to be aired implicates the more permissive party in enabling views that are intolerable by the less permissive party’s standards.

The second reason is very simply that, while people seem to act on something like this rule in settings like the cited publication example, I think it is unlikely that it is a view most people would formally endorse. In particular, if I were to ask you what range of meta-views about how to regulate speech should be open for discussion, independently of asking about the guilt-by-association principle, I think very few people would intuitively say “oh, everyone who has an even slightly more permissive view of speech than me should not be tolerated in discourse”. Many people would probably say that on the meta-level they are pretty much absolutely permissive, in the sense that “discourse should be totally unbounded” is a view they consider worth interacting with, even if they disagree with it. While I fall into this category, a question remains about how things change when you move into another context such as, say, journalism.

While I would tolerate someone who argues that we should listen to Nazis, if someone persistently publishes writing by Nazis, would I just stand by because they are sincerely manifesting tolerable meta-speech views? I think that in certain positions a more restrictive standard is warranted, but the question is whether this gets us all the way to the full guilt-by-association principle mentioned earlier, one by which the manifestation of meta-disagreement is absolutely intolerable. Although this seems crazy superficially, there is at least one context where absolute intolerance is accepted as the norm: political elections. Even if all candidates are ones you consider tolerable, it is considered perfectly reasonable to just vote for the one you agree with the most, applying an absolutely discerning standard (though this point itself only holds up under a first-past-the-post system). It may be argued that editors at publications, for instance, set publication policies in a way akin to lawmakers, and so responding to them as you would to political representatives is appropriate. Only those who most agree with you get a pass, and any degree of disagreement is worth factoring in as a point against them.

There are some key differences that make this a poor comparison, however. One is that, in theory at least, having many different perspectives in the media helps keep the press ecosystem healthy and accountable, which requires you to allow for editors who disagree with you. While there is only one open position in any given election, reducing the number of perspectives represented in the press down to only your own directly hampers its function. This function is arguably prior to that of elected representatives: the press is meant to influence voters, including by challenging their current views, whereas politicians are merely supposed to represent the views their electorate ends up with. Presumably this applies to meta-views on speech as well.

The more significant difference is that you choose between candidates in elections. It is appropriate to be absolutely discerning, because the full choice is captured by the possibilities you see. If, as with editor firings, you merely voted on whether you approved of a given politician, an entirely intolerant standard of the sort applied when choosing between candidates would be unreasonable, as you would pretty much always vote against the given option. Instead, what is needed for something like this is a threshold. In the case of meta-speech disagreements, it is in fact very hard to land on something precise enough to be adopted. As I highlighted in my original article, it is very hard to find a specific heuristic that totally makes sense to agree on as the standard, and a disorganized threshold has even less chance of becoming one.

Perhaps on some issues an absolutely intolerant position would be possible to defend. For instance, and I’m not advocating this to be clear, you could remove anyone who does not agree with you on who they will vote for in a given office, and a very sizable portion of the population would be left to you because of the limited nature of the options. Standards for acceptable discourse, on the other hand, as well as standards for handling less and more acceptable discourse, are in fact very hard to agree on, and there are far more options. Arguably the plurality view, precisely because every restrictive standard is so specific by its nature, is complete tolerance and non-intervention in the marketplace of ideas. Aside from the fact that the standard form of this approach would not restrict other meta-views, for reasons I discussed in the previous article, a version modified only to be restrictive of other meta-speech standards is apparently hypocritical, although not necessarily paradoxical.

However, meta-speech restrictions of this sort are not in fact absolutely intolerant, in practice at least; otherwise what we would see in their application would merely be chaos. People’s meta-speech views are only ever restricted for being too tolerant. If you publish a story someone thinks should not have seen the light of day, you have transgressed, but as a publisher you never have to signal who you think should be restricted, presumably, because you just wouldn’t publish them to begin with. Pro-free-speech publications like Quillette and the aforementioned Persuasion, for example, will not conceivably be penalized by their pro-free-speech reader base for failing to publish articles they consider outside their range of tolerance. Despite this, both seem to have relatively clear partisan leanings overall, and there are some articles you can find in one that would be odd to see published in the other. Positive standards are not enforced, except occasionally in retrospect after compiling data about apparently consistent absences. Negative standards are more immediately enforceable, and can be manifested through responses to specific events.

This means that the standard of meta-speech guilt by association I am discussing only ever creates pressure for publications to be more, rather than less, restrictive. I would compare this to the unilateralist’s curse, in that the most restrictive popular standard is the one that will presumably win, so by nearly every other standard the result is too restrictive, and it probably is. It may actually be worse than that, however: there’s no reason to believe that publications won’t respond to the most restrictive standard on each specific issue, making for an overall standard that may be far more restrictive than even the most restrictive popularly endorsed threshold.

As with the career issue of Appendix A, it is not totally clear what to do about this individually. After all, even if you have an average meta-speech standard which you do not try to enforce yourself, the group dynamic will still lead to pressure from the most restrictive popular standard. Unlike with Appendix A, I see no promising policy solutions to the larger group-dynamic problem, short of doing something like awarding journal editors tenure. I nonetheless think there are enough other reasons why meta-speech guilt-by-association standards are damaging that the group-dynamic argument is not needed to convince the individual not to adopt them. As for what standard one should apply to meta-speech, especially in powerful speech-curating positions like editing, I’m not sure, but considering that group dynamics push toward too restrictive a standard, it seems generally valuable to at least push back. Prefer to underestimate rather than overestimate your own standard. Extend the benefit of the doubt, not because those you are judging simply deserve it, but because when your standard is the one making the marginal difference, it is probably because it is too harsh.

  1. Ed. Note: This may be related to how Trump supporters feel in the current political climate. ↩︎

  2. Ed. Note: This is what happened to Voat and BitChute. ↩︎
