Over the last 15–20 years there has been a cottage industry of people and/or organisations spreading disinformation and misinformation, which is then amplified not only by large networks of bots but by people whose first reaction is ‘like first, ask questions never’ because it happens to tickle the lizard brain. Unfortunately such an environment has had catastrophic consequences for society, upending politics so that whoever has the loudest and most obnoxious voice spouting the most insane nonsense gets the attention, and in some cases creating an environment where individuals take matters into their own hands with disastrous consequences, including the loss of life. Social media tends to get a lot of the focus because of the speed at which such ideas can spread. In the past you may have had a right-wing shock jock ranting down the microphone on an AM station (see The Simpsons’ parody of Rush Limbaugh in the form of Birch Barlow), but their reach was limited – the ability to cut and share a clip was restricted to the people you knew and, at most, a small online community.
So what is the solution to the spread of disinformation and misinformation, particularly when we’ve seen the most extreme manifestations result in the loss of life? The heavy hand of the state, in the form of the ‘Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2024’ that was proposed in Australia but which the government has since decided not to proceed with? The use of soft power by governments, in the hope that they can nudge social networks into doing the ‘right thing’ through technology that analyses content and flags for manual review anything falling under misinformation, disinformation, or the promotion of violence and/or hatred? If that is the avenue taken, then who defines the difference between misinformation, disinformation and a simple difference of opinion, where there is agreement on the facts but different conclusions are drawn? That is particularly problematic when you think of qualified doctors producing legitimate content who simply draw different conclusions when analysing the same data – which side do you come down on?
There are three steps that I believe need to be taken to address the issue of misinformation and disinformation:
- Ban the use of algorithms for user created content. Algorithms are great for scenarios where you have a defined set of data points – for example, you are a music subscription business with information such as track name, title, year of release, genre, beats per minute, etc. If someone is listening to ‘Bob Marley and the Wailers’ and, based on their IP address, they’re located in New Zealand, the algorithm may suggest ‘Fat Freddy’s Drop’ and ‘The Black Seeds’, and depending on the time of year might offer an ad for the ‘Raggamuffin Music Festival’. If the algorithm is more elaborate it may learn from what other people who also listen to ‘Bob Marley and the Wailers’ are playing and suggest that to the user (a rough sketch of this idea appears after the list). In other words, a finite set of data points and nothing strange will happen. The moment you start using algorithms with user created content, the problem is that the system doesn’t know the difference between good engagement and bad engagement; it cannot parse the content or understand the nuances of language and context, with the end result that perfectly innocent content gets flagged while crafty individuals fly below the radar. On top of that you add non-state actors (funded by hostile states) gaming the algorithm with bot farms that generate fake engagement, which amplifies the content and in turn results in it being suggested to more people. TL;DR: ban the use of algorithms on platforms with user created content, or at the very least default every user to a chronological timeline and give each user their own algorithm that they can tweak and customise (see BlueSky, which does a pretty good job).
- Make the settings on social media opt in. For example, if you set up an account with Facebook and add a friend, at the moment you’re automatically signed up to their timeline; what I suggest is that when you add someone as a friend you should have to opt in to following their timeline. Making it opt in (combined with the banning of algorithms for user created content) should break the virality effect and the gaming of the system by bot farms. In other words, end users would only be shown posts from those whom they have opted in to seeing, rather than the platform automatically assuming that the end user would like a particular feature enabled by default (a sketch of this also follows the list).
- Not only in schools but through public education campaigns: media literacy and social media literacy – being able to notice that something excites your lizard brain, and having the self-awareness to recognise that agreeing with it doesn’t necessarily make it true. A good example of this is a recent article from NPR (link): yes, even the left are being fed conspiracy theories such as “they’re installing incinerators at Alligator Alcatraz” (as outlined in the article) when there is no evidence, with many on the left believing it because it paints an already horrible president in an even worse light (I’m no fan of Trump, but if you’re going to push back against the Trump regime then it needs to be done based on facts, not by spreading unhinged conspiracy theories). When I went through school we used to have ‘newspapers in classrooms’, where we would look at stories, see how they were written, the headlines being used and so on. Being able to dissect stories allowed one to uncover any bias, whether the author was trying to elicit a reaction from the reader, what was missing, what details were left out, etc. For example, I like to keep track of what is happening with KiwiRail and the rail developments occurring in Wellington and Auckland (specifically the City Rail Link), and although the media gives an OK top-level overview, what you tend to find, if you read the white papers and meeting documents that city councils make available, is that a lot more detail is left out – the rationale for why certain decisions were made, and so on.
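To make the first step concrete, here is a minimal sketch in Python of what recommendations over a finite set of data points look like. The catalogue, field names and scoring rule are entirely made up for illustration – no real streaming service works exactly like this – but the point stands: every input is a known, structured field, so the output stays predictable.

```python
# A made-up music catalogue with a fixed, finite set of data points per track.
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    artist: str
    genre: str
    country: str  # country the artist is most associated with
    bpm: int


CATALOGUE = [
    Track("Exodus", "Bob Marley and the Wailers", "reggae", "JM", 76),
    Track("Wandering Eye", "Fat Freddy's Drop", "reggae", "NZ", 90),
    Track("So True", "The Black Seeds", "reggae", "NZ", 95),
    Track("Blue Monday", "New Order", "synth-pop", "GB", 130),
]


def suggest(listening_to: Track, listener_country: str, limit: int = 3) -> list[Track]:
    """Rank other tracks purely on overlap with fixed metadata fields."""
    def score(t: Track) -> int:
        s = 0
        if t.genre == listening_to.genre:
            s += 2  # same genre: strongest signal
        if t.country == listener_country:
            s += 1  # gentle nudge towards local artists
        return s

    candidates = [t for t in CATALOGUE if t.artist != listening_to.artist]
    scored = [(score(t), t) for t in candidates]
    scored = [pair for pair in scored if pair[0] > 0]  # drop irrelevant tracks
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in scored[:limit]]


# Listening to Bob Marley from New Zealand surfaces the two NZ reggae acts.
print([t.artist for t in suggest(CATALOGUE[0], "NZ")])
```

Contrast that with ranking user created content, where the only real signal is engagement and the system has no way of knowing whether that engagement is genuine, outraged, or manufactured by a bot farm.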
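And for the second step, a similarly hand-wavy sketch of an opt-in, chronological timeline: adding a friend changes nothing in the feed, only an explicit opt-in does, and posts are simply shown newest first. Again, the types and method names are invented for illustration rather than taken from any real platform.

```python
# Opt-in, chronological feed: friendship alone puts nothing in your timeline.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    posted_at: int  # e.g. a Unix timestamp


@dataclass
class Account:
    name: str
    friends: set[str] = field(default_factory=set)
    follows: set[str] = field(default_factory=set)  # explicit opt-ins only

    def add_friend(self, other: str) -> None:
        self.friends.add(other)  # friendship alone changes nothing in the feed

    def opt_in(self, other: str) -> None:
        if other in self.friends:
            self.follows.add(other)  # a separate, deliberate choice


def timeline(account: Account, all_posts: list[Post]) -> list[Post]:
    """Chronological feed: only opted-in authors, no engagement-based ranking."""
    visible = [p for p in all_posts if p.author in account.follows]
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)


me = Account("me")
me.add_friend("alice")
me.add_friend("bob")
me.opt_in("alice")  # I chose to see Alice's posts; Bob's never appear

posts = [Post("alice", "kia ora", 2), Post("bob", "look at this!", 3)]
print([p.text for p in timeline(me, posts)])  # -> ['kia ora']
```

Because there is no ranking step at all, there is nothing for fake engagement to game: a million bot likes on Bob’s post cannot push it into my feed if I never opted in.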
Will the above solve every problem? I don’t think that is possible, but I believe that taking away the incentives for people to spread misinformation and disinformation, and giving ordinary people the tools to spot it, will do a better job long term than hoping an automated system – one prone to flagging legitimate campaigns while failing to pick up on truly harmful content or the scams run through social media advertising networks – will step in. Ideally it is up to the individual to take responsibility for the content they consume, to decide whether they believe it to be true before sharing it, while we take away the very technology that bad actors exploit to expand their reach.
