
I agree that government-level banning of hate speech is problematic for the reasons you describe.

However, as a member of a minority group who grew up being the butt of many racist jokes, I would like to have the freedom to participate in an online community where hate speech is moderated. It’s fine to me if other people want to participate on platforms where hate speech isn’t banned, but I will stick to ones that are less unpleasant for me personally. I don’t think I’m alone in this point of view.



No one who advocates against hate speech laws doesn't want you to have that freedom. What we do want is for the government to stop trying to make it illegal (as in throw-your-ass-in-jail illegal) to make those jokes online.


I also disagree with government-level censorship. At least on the American right, there is some political pressure and discourse around pushing free speech as a principle from the government level (where I agree with it) down to the platform level (where I do not).

It seems like you are not one of those people advocating for that, but this is why I would push back on the statement “No one who advocates against hate speech laws doesn’t want you to have that freedom.”


I think this is a bit tricky when computers are involved. "Facebook" is not a community; it's a technological encapsulation of millions of communities. So if Facebook bans something globally, that's not the same as keeping your individual community the way you'd like it.

If some on the American right are saying "Facebook is too big to moderate all communication, and it should allow everyone to speak", that doesn't disallow you from having one or more communities that are moderated more locally in whatever way is allowed there. It's just that Facebook the corporation won't be able to reach into any community and have veto power over speech.


It’s certainly not true that Facebook has a monopoly on communication (here we are communicating without Facebook involved).

I do agree to some extent that when a platform gets very large, it makes sense to have some regulation around the censorship decisions it makes. Specifically, rules around transparency make the most sense to me. On the other hand, a blunt regulation, such as saying that large tech platforms are not allowed to do any platform- or country-wide moderation, doesn’t make as much sense to me. My experience with all unmoderated online communities is that they devolve into something extremely unpleasant, and forcing Facebook to push ALL of the moderation work down to specific groups or to users themselves is not a workable approach to this problem.


Company towns weren't a monopoly either. You could always walk outside town and enjoy your free speech there. But free speech laws applied because they functioned as public places.

Facebook is such a public place.

Additionally, Facebook and the rest did function as a cartel when they coordinated to ban Alex Jones from all platforms, even the ones where he wasn't present.


In a very real sense a lot of people had no choice but to stay in company towns. There are certainly similarities between a company town and Facebook, but choosing not to use it is not a matter of survival for anyone.

I agree that Facebook, like company towns, should be regulated given its scope. But I think we should evaluate bannings like Alex Jones's more from first principles, such as whether there is more harm in allowing him to stay on these platforms or in banning him, rather than adhering strictly to the principle of free speech.


> It’s certainly not true that Facebook has a monopoly on communication

For sure - I didn't say it did.

The rest of what you say I don't really agree with. Expecting a professional moderator from Facebook to moderate a community is the unworkable approach. It has to be done by those communities.

Here's another issue: if someone's country legally bans them from using some or all of Facebook, should Facebook comply? In some cases it makes sense (e.g. someone convicted of a crime involving children should probably be banned from communicating with children) but not in others, say where lower-caste/class people are legally banned from speaking.

Facebook is a big enough place that these things really matter, and I don't think we or Facebook should assume that the decision to veto speech is Facebook's.

To repeat my key point: people demanding that Facebook not be the veto decider are not trying to threaten your online communities. They're just looking at the power dynamics and concluding that that power should be held by those communities (or the law of the land(s)), and not by Facebook.


Maybe I'm misunderstanding you but it seems like you are arguing for a fairly hard-line stance of: Facebook shouldn't be allowed to censor or take down any content, and all decisions to do this should be pushed down to individual users or group moderators.

This is certainly unworkable given the volume of bots, spam, etc., which would be untenable for a volunteer moderator on a Facebook group, or an individual user, to deal with.

Maybe I am misunderstanding you and your point is more subtle: the government should have some rules around what Facebook can and cannot censor. Then I don't disagree with you in principle, I think we would have to get into specific proposals and weigh their pros and cons.

A blunt approach of "free speech, no censorship" isn't a convincing argument, but given the scale of impact that site-wide or nation-wide moderation decisions have on Facebook, I could be convinced that government oversight in certain cases makes sense.


Thanks - that helps me clarify.

Bots don't get any speech because they're bots. I'm all for people who we know are people talking, and no-one else (although if their identity is hidden I don't mind so much, as long as it's one account per person). Then we can moderate people.

Highly non-partisan rules, such as not allowing explicit images of children, are important, along with any other general enforcement of the law of the land.

The fact that there are multiple laws and lands makes this tricky, so to some extent I would say Facebook should decide where it wants to operate, and if its fundamental values contradict a country's, then it shouldn't operate there (e.g. one rule could be "if a country bans a group - other than children without their parents' consent - from using Facebook, then we don't operate there").

Note that in the above system, it allows people with values I don't like to speak. That includes people whose values are pro an American corporation censoring the world. Sadly for me, that is one of my values (-:


Their moderation policies are already highly country-specific though? Why does FB need to ban anything globally?


Sure. I may have misphrased my previous explanation, but "country" is still far bigger than "community".


I see what you mean. Yes, I agree, it’s in some sense suboptimal for a community to outsource moderation to a larger entity.

However I think moderation is difficult (and this fact is under-appreciated) and so the limit of “perfect moderation per community” is probably not practical. (Indeed I’m not even sure that’s a possible thing, since each community member might disagree, even though there are emergent norms at any given time within a community.)


Absolutely, and those norms are best enforced by that community.

I feel like I'm describing a social network I should be building. Or has already been built.


> No one who advocates against hate speech laws doesn't want you to have that freedom

I've recently heard a number of "free speech" advocates try to expand that to a right to be heard, i.e. a right to a captive audience and other people's attention.

These same advocates also argued that their rights were being trampled upon when certain communities rejected them due to disagreements over the contents of their speech.


What some other people say doesn't invalidate his argument.


His argument was "No one who advocates against hate speech laws doesn't want you to have that freedom" -- while saying "no one..." is always dangerous (as there is always someone, somewhere), I'd say it is fairly common in my experience for people opposed to hate speech laws to also think companies like Twitter should be forced to allow free speech.


People who say "no one" or "everyone" do not mean 100%. That isn't how English works. It's why English provides qualifiers like "absolutely no one", or "100% of people".

Just like if I say "everybody likes ice cream", you and I both know that there's somebody, somewhere who loathes ice cream.


> I've recently heard a number of "free speech" advocates try to expand that to be a right to be heard, i.e. a right to a captured audience & other people's attention.

You're confusing two different aspects: free speech as a guaranteed right and free speech as a principle.

As a right, it means the state shouldn't prevent you from communicating something. An example is the US's First Amendment.

As a principle, it means you support speech, even when you disagree with it. An example of this is "I disapprove of what you say, but I will defend to the death your right to say it."

As a publisher, you have the free speech right to NOT publish something you disagree with, but you can't then say you support free speech as a principle.

It's quite an imposition to force someone to say something he doesn't desire to say. But the free speech advocates you speak of (like me) only want the law to be changed for quasi-public-square entities such as Facebook and Twitter.


I'm not confusing anything; I merely stated my observations on people expanding it (if your distinction applies, both the right and the principle).

That statement should be amended to "I disapprove of what you say, but I will defend to the death your right to say it to those who choose to listen"

Just as real-life town squares allow people to walk away, boo or shout back at speakers, the same rights should be preserved for audiences on Facebook and Twitter.


*I would like to have the freedom to participate in an online community where hate speech is moderated. It’s fine to me if other people want to participate on platforms where hate speech isn’t banned, but I will stick to ones that are less unpleasant for me personally.*

That's great, but it doesn't have anything to do with government-mandated censorship. I too would avoid exclusively using unmoderated online communities.


I was responding to the parent comment and tried to make clear that I too am against government-mandated censorship. I’m talking about platform-level censorship because I thought it would be useful to the discussion to clarify these two different levels. At least in American discourse about tech platforms, I often feel the two are conflated.



