Tag Archives: Social Media

How to Spot Propaganda Online


How do you spot propaganda on social media? The spread of propaganda online is a pressing issue. We are now faced with a constant onslaught of manipulative information campaigns from government and corporate sources across the political and economic spectrum.

Artificial Evidence and the Death of Public Truth


Truth in the public sphere is dying, and there doesn’t seem to be much we can do to stop it. What’s needed is a movement towards critical thinking and the interrogation of every narrative presented to the public, building a more nuanced picture of truth that does not rely solely on what were previously considered reliable forms of evidence – video, audio and quotes from trusted sources.

Rapid advancements in technology are allowing mediums such as video and audio to be artificially constructed. These advancements will allow anyone to show people saying or doing something they never did, committing an act they were never present for, or appearing in media reports of activities that never happened. AI can now create people who never existed, with expressive faces presenting a convincing facade of lived experience behind the empty pixels.

It’s always been the case that written reports could be fabricated and propaganda easily presented as fact. The danger now is that we have come to believe our new forms of evidence are more credible, often treating them as near irrefutable. Perhaps we believe these sources because they come directly to our personal feeds and feel more intimately tailored to us. Whatever the cause, it is clear that we are losing the kind of collective common sense needed to interrogate the agendas behind different reports.

Beyond just the ability for lies to be presented as fact, the truly worrying aspect is that real truths can be brushed aside with a simple ‘fake news’ defence. The ability to artificially construct evidence completely undermines the concept of truth in the public sphere. This can already be seen in recent examples, such as people dismissing video interviews with Julian Assange in the belief that he had been assassinated or incarcerated without the public’s knowledge.

The widespread use of this technology can both dominate and undermine political discourse. Would the Watergate scandal have played out the same way if the audio recordings were doctored? What if the widely shared remarks used to highlight Donald Trump’s misogynistic and abusive personality were faked by his political opponents? I’ve heard others say the same thing about footage of Joe Biden’s behaviour around the children of supporters – dismissed as either out of context or forged entirely. We’ve also heard that footage of Osama Bin Laden was faked for years in order to maintain the terrorism threat narrative (on both sides).

I’m not supporting any side, but rather trying to show how devastating this can be to the idea of public truth. As more people become aware of the (technically amazing) ability we now have to fabricate video and audio of any individual – to the same degree that photographic or written evidence can be forged – then how do we effectively debate important evidence-based issues? If we are questioning the validity of all material evidence, then certainty becomes impossible and division in society all but assured.  Perhaps this is the ultimate goal of those who seek control, as there’s no need to convince the populace of something if you can make us doubt everything.

Assange by Thierry Ehrmann

The other worrying side of this is how readily we accept propaganda as fact; consider how often people accept ‘anonymous sources’ as a valid form of evidence. Such statements should instantly be treated with suspicion, or at least as requiring other evidence to back up whatever claims are being put forward. Instead, journalists are increasingly accepting these sources at face value – perhaps in order to maintain the kind of access to government channels that Chomsky talks about – without digging deeper than the surface level of what they are told. They rarely pause even to add phrases such as ‘allegedly’ that would help nuance the conversation, or, if they do, include them only later, once the damage has been done.

With this approach becoming widely accepted as valid journalism, what hope do we have when far more convincing mediums of evidence than ‘someone said’ can be easily fabricated and presented as fact?

One canary in the coal mine was the rapid onset of online astroturfing and the manipulation of debate. Astroturfing is the creation of a fabricated ‘grassroots’: fake comments, accounts, blog posts and the like that build up a seemingly strong public viewpoint, while behind the facade lies a confusing nest of paid agents, false narratives and algorithmic voting patterns. Interestingly, this tactic has become known to most people because it has been highlighted as a form of Russian interference in Western democracies. Bot farms infect Twitter; fake campaign groups and ads run on Facebook; and every comment online comes under suspicion of being just another Russian shill.
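To illustrate why this kind of coordination leaves traces, here is a minimal sketch in Python. The account names, comments and similarity threshold are all hypothetical, and real platforms rely on far richer signals; this only flags near-identical comments posted from different accounts.

from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical sample data: (account name, comment text)
comments = [
    ("user_a", "The new policy is a disaster for ordinary families!"),
    ("user_b", "This new policy is a disaster for ordinary families!!"),
    ("user_c", "Lovely weather for the march on Saturday."),
    ("user_d", "The new policy is a disaster for ordinary familes!"),
]

SIMILARITY_THRESHOLD = 0.9  # arbitrary cut-off for "near-duplicate"

def similarity(a, b):
    # Rough string similarity between two comments, from 0.0 to 1.0.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of different accounts posting near-identical text:
# one crude signal (among many) of a coordinated campaign.
for (acct_1, text_1), (acct_2, text_2) in combinations(comments, 2):
    if acct_1 != acct_2 and similarity(text_1, text_2) >= SIMILARITY_THRESHOLD:
        print(acct_1, "and", acct_2, "posted near-identical comments")

Even a crude measure like this shows why the same talking points repeated across many accounts tend to stand out; serious detection work layers in posting times, account age and network structure as well.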

It’s certainly the case that Russia is engaged in a wide range of internet-based psy-ops… but they’re not alone. Correct the Record is a famous example that supported Hillary Clinton’s 2016 presidential campaign; it was matched (evidently even more successfully) by a legion of Republican-organised campaigns that spread falsehoods and memes throughout the US election. Each side was trying to dominate the media narrative and manipulate any online commentary that followed it. This even extends offline, to hiring crowds to appear at political rallies and bolster the sense of support for particular candidates. The UK referendum that led to Brexit was caught up in similar turmoil, in what can only be described as widespread, systemic and technology-led abuse of the democratic process.

We’ve reached a state of such division on popular online platforms that often there is only minimal dialogue occurring between opposing viewpoints.  We are all caught up in our own echo chambers (groups/subreddits/hashtags), multiplied by the influence of a sea of fake accounts that quickly drown out and downvote dissenting opinion.

There’s also evidence of this drastic shift in perception in seemingly more mundane, but perhaps no less damaging ways.  Photo touch-ups have long been a standard aspect of advertising campaigns, celebrity image and the world of Instagram models.  Recently we’ve seen apps that allow you to doctor photos instantly and almost seamlessly, putting this kind of reality fabrication in the hands of everybody with a social media account.  Celebrities are now even fighting against the use of their likeness in pornography, which reached mainstream viability in the last few years with the emergence of ‘deep fake’ videos and the software to produce them.

Given that these advancements in artificially constructed media are unstoppable – indeed, they are already available and being used – how should we respond to help bolster the ability for public truth to exist?

As with many problems, the first answer relies upon education. We need people from all demographics and political leanings to understand that these forms of ‘evidence’ can be forged, and that all information – particularly when used for political purposes – therefore needs to be properly scrutinised. This should start at the same time that our children begin to explore the internet and navigate their identities in collaboration with their peers and everyone else they come into contact with online. Families, schools and media creators all need to take shared responsibility for this education. It needs to be pervasive, authentic and convincing to the children and young adults who are coming into contact with a near-infinite source of opinion-forming ideologies and information streams.

Thankfully, we are seeing some widespread acknowledgement of the issue, starting with the infamous and terrifyingly convincing Obama fake voiced by Jordan Peele and continuing more recently with videos from Shane Dawson covering the topic and reaching tens of millions of viewers. Developers and academics are coming out to highlight their concerns and discuss the social impact of these advancements, but worryingly one of the main responses seems to be to put the capacity into the hands of everyday people – both to draw attention to the issue and to monetise it. I’m not necessarily against democratising the ability to create artificial evidence – better in the hands of everyone than just an elite few – but the widespread bullying and abuse that can result is clear and already happening.

The second element we need to figure out is journalistic accountability. A large part of the problem is journalists who play the part of government mouthpieces, perhaps unknowingly, and allow the line between propaganda and news to be blurred beyond recognition. Pressure needs to be put on outlets that allow mistruths to be published, and a variety of organisations that independently assess trust and truth in journalism need to be developed to restore faith in the accuracy of our news outlets. The hard part, of course, will be making sure those organisations aren’t themselves hotbeds of partisan propaganda and shadow-funded agendas.

The third area lies in diagnostics. The ability to run analysis on audio-visual sources, or to attach cryptographic signatures to them, in order to determine their authenticity will be vital – perhaps arriving at some kind of percentage-based trust rating. Whether or not this proves to be a long-term solution is unknown, but the hope is that software developers can find ways to identify artificially constructed media and, ideally, release their programmes into the public sphere.
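As a rough illustration of the signature idea, here is a minimal sketch in Python using the open-source cryptography library. The workflow and the stand-in video bytes are hypothetical rather than any established standard: a publisher signs the hash of a file at source, and anyone holding the public key can check that their copy is unaltered.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the raw bytes of a video file; in practice these would be
# read from disk, e.g. open("interview.mp4", "rb").read().
video_bytes = b"raw bytes of the published video"

# Publisher side: generate a keypair once, publish the public key openly,
# then sign the hash of each file on release.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(hashlib.sha256(video_bytes).digest())

# Viewer side: check that the received copy still matches what was signed.
received_digest = hashlib.sha256(video_bytes).digest()
try:
    public_key.verify(signature, received_digest)
    print("Signature valid: this copy matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: the file was altered or not signed by this key.")

Of course, a valid signature only proves that a file is unchanged since a particular key signed it; it says nothing about whether the original footage was honest in the first place, so the question of which keys we choose to trust remains.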

Finally, we need to develop methods of dissemination that don’t rely on the public sphere.  This takes us back to the days before global telecommunications; to trusted peer networks, collaborative organisations and locally sourced, know-your-neighbour politics.  This kind of approach would hopefully look more like a consensus-driven model based on compromise and debate rather than identity politics absolutes. When we can understand that truth is difficult, in some cases impossible, to arrive at then perhaps a collaborative model will prove more effective in bringing harmony to the greatest number of people.

The images, sounds and words put before us are not what they seem.  They are constructs designed to convince us to follow and submit to a higher authority.  Even if authentic, they have been carefully chosen and interpreted. Whether from political, commercial or personal sources, our minds and psyches are being sought after for the attention, wealth and power of others.  The ability to construct artificial evidence only takes us further away from any universal foundation of truth that might enable us to build towards more compassionate and inclusive forms of being. What price are we willing to pay and how can we defend these foundations of truth against the relentless goose-steps of technological progress?

Because when every medium of evidence is able to be artificially constructed, truth will cease to exist in the public sphere.

Header image by Blake Patterson, Flickr, Creative Commons.


9 Reasons Why You Should Delete Facebook


This is a moment of reckoning for Mark Zuckerberg and Facebook’s relentless pursuit of social media dominance. There is a widespread call for accountability that could change the conversation around digital privacy, but it will ultimately rely on enough of us moving away from the platform to ensure that lasting changes are made.

The Spirituality of Technology: Foundations of a New Humanity

What impact is technology having on the driving motivations for human endeavour and activity? Are we formulating a clear view of what progress means in this newly emergent age? Can we talk about a spirituality of technology, and what might that mean?

A Citizen’s Guide to Protecting Yourself from Mass Surveillance

Here are some perfectly legal, morally sound, and relatively easy steps that every conscientious citizen should be taking to say no to untargeted government surveillance.

TIDAL and the Tipping Point of Glamour

The recent launch of TIDAL from Jay-Z and Co. has given us another perfect example of this particular combination of elitism, self-regard and detachment from everyday reality.