It's certainly not true that Facebook has a monopoly on communication (here we are, communicating without Facebook involved).
I do have some agreement that when a platform gets very large, it makes sense to have some regulation around the censorship decisions it takes. Specifically, rules around transparency make the most sense to me. On the other hand, a blunt regulation, such as saying that large tech platforms are not allowed to do any platform- or country-wide moderation, doesn't make as much sense to me. My experience with unmoderated online communities is that they all devolve into something extremely unpleasant, and forcing Facebook to push ALL of the moderation work down to specific groups or to users themselves is not a workable approach to this problem.
Company towns weren't a monopoly either. You could always walk outside the town and enjoy your free speech there. But free speech laws still applied, because company towns functioned as public places.
Facebook is such a public place.
Additionally, Facebook and the rest did function as a cartel when they coordinated to ban Alex Jones from all platforms, even the ones where he wasn't present.
In a very real sense, a lot of people had no choice but to stay in company towns. There are certainly similarities between a company town and Facebook, but choosing not to use Facebook is not a matter of survival for anyone.
I agree that Facebook, like company towns, should be regulated given its scope. But I think we should evaluate bans like Alex Jones's from first principles, asking whether there is more harm in allowing him to stay on these platforms or in banning him, rather than adhering strictly to the principle of free speech.
> It’s certainly not true that Facebook has a monopoly on communication
For sure - I didn't say it did.
The rest of what you say I don't really agree with. Expecting a professional moderator from Facebook to moderate a community is the unworkable approach. It has to be done by those communities.
Here's another issue: if someone breaks a law in their country and the penalty bans them from using some or all of Facebook, should Facebook comply? In some cases it makes sense (e.g. someone convicted of something related to children should probably be banned from communicating with children), but not in others, say where lower-caste or lower-class people are legally banned from speaking.
Facebook is a big enough place that these things really matter, and I don't think we or Facebook should assume that the decision to veto speech is Facebook's.
To repeat my key point: people demanding that Facebook not be the veto decider are not trying to threaten your online communities. They're just looking at the power dynamics and concluding that that power should be held by those communities (or the law of the land(s)), and not by Facebook.
Maybe I'm misunderstanding you, but it seems like you are arguing for a fairly hard-line stance: Facebook shouldn't be allowed to censor or take down any content, and all decisions to do so should be pushed down to individual users or group moderators.
This is certainly unworkable: the volume of bots, spam, etc. would be untenable for a volunteer moderator of a Facebook group, or for an individual user, to deal with.
Or maybe your point is more subtle: the government should have some rules around what Facebook can and cannot censor. In that case I don't disagree with you in principle; I think we would have to get into specific proposals and weigh their pros and cons.
A blunt "free speech, no censorship" approach isn't a convincing argument, but given the scale of impact that Facebook's site-wide or nation-wide moderation decisions have, I could be convinced that government oversight makes sense in certain cases.
Bots don't get any speech, because they're bots. I'm all for letting people we know are people talk, and no one else (although if their identity is hidden I don't mind so much, as long as it's one account per person). Then we can moderate people.
Highly non-partisan issues, such as not allowing explicit images of children, are important, along with any other general enforcement of the law of the land.
The fact that there are multiple laws and lands makes this tricky, so to some extent I would say Facebook should decide where it wants to operate, and if its fundamental values contradict a country's laws then it shouldn't operate there (e.g. one rule could be "if a country bans a group, other than children without their parents' consent, from using Facebook, then we don't operate there").
Note that the above system allows people with values I don't like to speak. That includes people whose values are in favor of an American corporation censoring the world. Sadly for me, letting them speak is one of my values (-: