This article argues that platforms have a moral responsibility to moderate wrongful speech posted by users. Several duties together ground and shape this responsibility. First, platforms have duties to defend others from harm when they can do so at reasonable cost. Second, platforms have a moral duty to avoid complicity in users' wrongfully harmful or dangerous speech. I will argue that one can be complicit in wrongs committed by others by supplying them with a space in which they will foreseeably commit those wrongs. For platforms, proactive content moderation is required to avoid such complicity. Further, platforms have an especially stringent complicity-based duty not to amplify users' wrongful speech, thereby increasing its harm or danger. Finally, platforms have a duty not to enable new wrongs by amplifying otherwise innocuous speech that becomes wrongfully harmful only through amplification. I close by considering the objection that content moderation by platforms constitutes an objectionable form of private censorship, and explaining how it can be answered.
A link will be posted when the article is published.