Sigh. Just as a follow-up to the previous post, where I was talking about tactics for confronting the Trump administration: I was watching Real Time with Bill Maher last night, and one of the guests was Joe Manchin. On a side note, I have my disagreements with Joe Manchin, but I have a lot more respect for him than for Kyrsten Sinema, because Joe Manchin will tell you exactly why he isn’t supporting a particular piece of legislation and will propose changes, whereas Kyrsten Sinema will leave everyone guessing. Anyway, getting back to the point: during ‘Overtime’ (the after-show discussion, which runs for around 15-20 minutes), Joe Manchin said something about Section 230 protection, namely that it should be taken away from online platforms.

It appears he doesn’t understand what Section 230 does, because getting rid of that protection would make platforms liable for what is published on them, with the consequence that most platforms would simply shut up shop, move overseas, or impose censorship regimes so severe that they’d have a chilling effect on freedom of speech. Don’t get me wrong, I’m certainly no fan of the way internet platforms have used algorithms to hook people into doom scrolling, not to mention amplifying misinformation, disinformation and rage bait, but calling to remove Section 230 protections demonstrates a misunderstanding of what the provision is there for. Now, if Joe and others do wish to rein in these internet platforms, then one option is a rule along these lines: if a platform uses algorithmic curation, it loses its Section 230 protection; if it doesn’t use algorithmic curation (in other words, it only shows what people subscribe to, in chronological order) and people have to opt in to follow someone’s timeline (rather than, as currently on Facebook, for example, being automatically signed up to someone’s timeline the moment you add them as a friend), then it keeps its Section 230 protection.
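To make the distinction concrete, here’s a minimal sketch of what a ‘no algorithmic curation’ feed boils down to. The `Post` class and function name are mine, purely for illustration; this isn’t any platform’s actual code:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    text: str

def chronological_feed(posts: list[Post], followed: set[str]) -> list[Post]:
    """Only posts from accounts the user explicitly opted in to follow,
    newest first. No ranking model, no engagement signals."""
    return sorted(
        (p for p in posts if p.author in followed),
        key=lambda p: p.created_at,
        reverse=True,
    )
```

That’s the whole idea: the only inputs are who you chose to follow and when they posted, so there’s nothing for the platform to editorialise with.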

The way algorithms curate timelines is by inserting content that is receiving a lot of engagement, but the problem is that the algorithm has no way of knowing whether the engagement is good or bad, or whether it is fake (see bot farms); all it knows is that people are engaging with it. Platforms take your past behaviour and demographic information and then target you with what they believe is content you may find interesting. The problem is that bot farms exploit this to create fake engagement and thereby amplify disinformation and misinformation: content that would otherwise never have seen the light of day through organic sharing between actual humans gets amplified to an audience far larger than it would ever have reached through people sharing with friends and family members. There’s a toy example of that blindness below. Something tells me, though, that I won’t see a reform like that happening in my lifetime, if ever.
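Here’s that toy example. The scoring weights and the `Post` fields are invented for illustration; no platform’s real ranking formula is anywhere near this simple, but the blindness to fake engagement is the same in principle:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    # Made-up weights; the point is that the score only counts clicks,
    # so a bot-farm like is indistinguishable from a human one.
    return post.likes + 2 * post.shares + 3 * post.comments

organic = Post("photo of someone's dinner", likes=40, shares=5, comments=10)
botted = Post("disinformation piece", likes=5000, shares=800, comments=300)

# The botted post tops the feed; the ranker has no way of knowing
# its engagement came from a bot farm rather than real people.
feed = sorted([organic, botted], key=engagement_score, reverse=True)
```

Garbage in, garbage out: as long as raw engagement is the input, fake engagement buys real reach.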
