Tag Archives: artificial intelligence

Artificial Evidence and the Death of Public Truth


Truth in the public sphere is dying, and there doesn’t seem to be much we can do to stop it.  What’s needed is a movement towards critical thinking and the interrogation of every narrative presented to the public, building a more nuanced picture of truth that does not rely merely on what were previously considered irrefutable forms of evidence – video, audio and quotes from ‘reliable sources’.

Rapid advances in technology are allowing media such as video and audio to be artificially constructed. These advances will allow anyone to show people saying or doing things they never did, to place them at acts they were never present for, and to build up media reports of events that never happened.  AI can now create people who never existed, with expressive faces presenting a convincing facade of lived experience behind the empty pixels.

It has always been the case that written reports could be fabricated and propaganda easily presented as fact. The danger now is that we have come to believe that our newer forms of evidence are more credible, often treating them as near irrefutable.  Perhaps we believe these sources because they arrive directly in our personal feeds and feel intimately tailored to us. Whatever the cause, it is clear that we are losing the collective common sense needed to interrogate the agendas behind different reports.

Beyond the ability for lies to be presented as fact, the truly worrying aspect is that real truths can be brushed aside with a simple ‘fake news’ defence. The ability to artificially construct evidence completely undermines the concept of truth in the public sphere. This can be seen in recent examples, such as people doubting video interviews with Julian Assange, believing instead that he had been assassinated or incarcerated without the public’s knowledge.

The widespread use of this technology can both dominate and undermine political discourse.  Would the Watergate scandal have played out the same way if the audio recordings were doctored?  What if the widely shared remarks used to highlight Donald Trump’s misogynistic and abusive personality had been faked by his political opponents?  I’ve heard others say the same thing about footage of Joe Biden’s behaviour around the children of supporters – dismissed as either out of context or forged entirely.  We’ve also heard claims that footage of Osama Bin Laden was faked for years in order to maintain the terrorism threat narrative (on both sides).

I’m not supporting any side, but rather trying to show how devastating this can be to the idea of public truth. As more people become aware of the (technically amazing) ability we now have to fabricate video and audio of any individual – to the same degree that photographic or written evidence can be forged – how do we effectively debate important evidence-based issues? If we are questioning the validity of all material evidence, then certainty becomes impossible and division in society all but assured.  Perhaps this is the ultimate goal of those who seek control: there’s no need to convince the populace of something if you can make us doubt everything.

Assange by Thierry Ehrmann

The other worrying side of this is how readily we accept propaganda as fact; consider how easily people accept ‘anonymous sources’ as a valid form of evidence.  Such statements should instantly be treated as suspicious, or at least as requiring other evidence to back up whatever claims are being put forward. Instead, journalists are increasingly accepting these sources at face value – perhaps in order to maintain the kind of access to government channels that Chomsky talks about – without digging deeper than the surface level of what they are told.  They rarely pause even to add phrases such as ‘allegedly’ that would nuance the conversation, or, if they do, only include them later, once the damage has been done.

With this approach becoming widely accepted as valid journalism, what hope do we have when far more convincing mediums of evidence than ‘someone said’ can be easily fabricated and presented as fact?

One canary in the coal mine was the rapid onset of online astroturfing and the manipulation of debate.  Astroturfing is the creation of fabricated ‘grassroots’ support – fake comments, accounts, blog posts and so on that build up a seemingly strong public viewpoint, behind which lies a confusing nest of paid agents, false narratives and algorithmic voting patterns.  Interestingly, this tactic has become widely known because it has been highlighted as a form of Russian interference in Western democracies. Bot-farms infect Twitter; fake campaign groups and ads run on Facebook; and every comment online comes under suspicion of being just another Russian shill.

It’s certainly the case that Russia is engaged in a wide range of internet-based psy-ops, but it is not alone.  Correct the Record is a famous example that supported Hillary Clinton’s 2016 presidential campaign; it was met (evidently even more successfully) by a legion of Republican-organised campaigns that spread falsehoods and memes throughout the US election.  Each side was trying to dominate the media narrative and manipulate any online commentary that followed it.  This even extends offline, to hiring crowds to appear at political rallies and bolster the sense of support for particular candidates.  The UK referendum that led to Brexit was caught up in similar turmoil, in what can only be described as widespread, systemic and technology-led abuse of the democratic process.

We’ve reached a state of such division on popular online platforms that there is often only minimal dialogue between opposing viewpoints.  We are all caught up in our own echo chambers (groups, subreddits, hashtags), their effects multiplied by a sea of fake accounts that quickly drown out and downvote dissenting opinion.

There’s also evidence of this drastic shift in perception in seemingly more mundane, but perhaps no less damaging, ways.  Photo touch-ups have long been a standard aspect of advertising campaigns, celebrity image and the world of Instagram models.  Recently we’ve seen apps that allow photos to be doctored instantly and almost seamlessly, putting this kind of reality fabrication in the hands of everyone with a social media account.  Celebrities are now even fighting against the use of their likeness in pornography, a practice that reached mainstream viability in the last few years with the emergence of ‘deepfake’ videos and the software to produce them.

Given that these advancements in artificially constructed media are unstoppable – indeed, they are already available and being used – how should we respond to help bolster the ability for public truth to exist?

As with many problems, the first answer relies upon education. We need people from all demographics and political leanings to understand that these forms of ‘evidence’ can be forged, and that all information – particularly when used for political purposes – therefore needs to be properly scrutinised.  This should start at the same time that our children begin to explore the internet and navigate their identities in collaboration with their peers and everyone else they come into contact with online. Families, schools and media creators all need to take shared responsibility for this education.  It needs to be pervasive, authentic and convincing to the children and young adults who are coming into contact with a near-infinite source of opinion-forming ideologies and information streams.

Thankfully, we are seeing some widespread acknowledgement of the issue.  It started with the infamous and terrifyingly convincing Obama fake voiced by Jordan Peele, and has continued recently with videos from Shane Dawson covering the topic and reaching tens of millions of viewers.  Developers and academics are coming out to highlight their concerns and discuss the social impact of these advancements, but worryingly one of the main responses seems to be to put the capability into the hands of everyday people – both to draw attention to the issue and to monetise it.  I’m not necessarily against democratising the ability to create artificial evidence – better in the hands of everyone than just an elite few – but the widespread bullying and abuse that can result is clear and already happening.

The second element we need to figure out is journalistic accountability.  A large part of the problem is journalists who play the part of government mouthpieces, perhaps unknowingly, and allow the line between propaganda and news to be blurred beyond recognition. Pressure needs to be put on outlets that allow mistruths to be published, and a variety of organisations that independently explore trust and truth in journalism needs to be developed to restore faith in the accuracy of our news outlets.  The hard part, of course, will be making sure those organisations aren’t themselves hotbeds of partisan propaganda and shadow-funded agendas.

The third area lies in diagnostics.  The ability to run analysis on audio-visual sources, or to attach cryptographic signatures to them, in order to determine their authenticity will be vital – perhaps arriving at some kind of percentage-based trust rating.  Whether or not this proves to be a long-term solution is unknown, but the hope is that software developers can find ways to identify artificially constructed media and, ideally, release their programmes into the public sphere.
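To make the signature idea a little more concrete, here is a minimal sketch of how a publisher might sign a piece of footage at source so that anyone could later verify it has not been altered.  It assumes the third-party Python cryptography package; the sign_media and verify_media helpers and the example file name are purely illustrative, not part of any existing verification standard.

```python
# Minimal sketch: sign a media file at capture/publication time, verify it later.
# Assumes the third-party 'cryptography' package; names here are illustrative only.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_media(path: str, private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Hash the file and sign the digest with the publisher's private key."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return private_key.sign(digest)


def verify_media(path: str, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the file is byte-for-byte what the publisher signed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Example usage with a hypothetical file name:
# key = ed25519.Ed25519PrivateKey.generate()
# sig = sign_media("interview.mp4", key)
# print(verify_media("interview.mp4", sig, key.public_key()))
```

A scheme like this can only prove that a file is unchanged since signing and that the signer holds a particular key; it says nothing about whether the original capture was itself genuine, which is why the analysis and trust-rating side of diagnostics would still matter.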

Finally, we need to develop methods of dissemination that don’t rely on the public sphere.  This takes us back to the days before global telecommunications: to trusted peer networks, collaborative organisations and locally sourced, know-your-neighbour politics.  This kind of approach would hopefully look more like a consensus-driven model based on compromise and debate rather than the absolutes of identity politics. When we understand that truth is difficult, and in some cases impossible, to arrive at, then perhaps a collaborative model will prove more effective in bringing harmony to the greatest number of people.

The images, sounds and words put before us are not what they seem.  They are constructs designed to convince us to follow and submit to a higher authority.  Even if authentic, they have been carefully chosen and interpreted. Whether they come from political, commercial or personal sources, our minds and psyches are being targeted for the attention, wealth and power they can deliver to others.  The ability to construct artificial evidence only takes us further away from any universal foundation of truth that might enable us to build towards more compassionate and inclusive forms of being. What price are we willing to pay, and how can we defend these foundations of truth against the relentless goose-steps of technological progress?

Because when every medium of evidence is able to be artificially constructed, truth will cease to exist in the public sphere.

Header image by Blake Patterson, Flickr, Creative Commons.


Automation and the New Socioeconomic Reality


Image above by Ken Clare (CC, Flickr)

Automation is a topic that has long been of interest to futurists.  I first put a marker down in 2010 with two prediction posts, and it has proven to be an area of increasing relevance surrounded by a growing call for attention.  It will be interesting to see whether this develops into a long-term policy response or falls by the wayside as another flavour of the month, dropped for something new.  Indeed, the impact and uncertainty of the UK referendum could easily contribute to that…

Putting that aside, there has been a noticeable increase in activity around the topic in recent months.  The Financial Times named Rise of the Robots: Technology and the Threat of a Jobless Future their book of the year for 2015; the Economist ran a conference last week on Navigating the Changing World of Work; the Oxford Martin School continues to publish their groundbreaking research; and Chatham House is today hosting a high-level conference on the Future of Work which dedicates significant time to questions arising from automation.

Even amongst all of this it’s a tricky subject to get our heads around, primarily because it is one of those areas of technological advancement that will emerge very quickly.  It won’t necessarily look like it’s going to happen on a large scale – until it suddenly does.  At the moment, not many people outside of the tech sector are really taking the notion very seriously.  Policymakers aren’t yet ready to prioritise the issue, it seems, despite some very clear indicators from the private sector that shifting business models are already bringing some deep-seated repercussions.

The impact of automation came back to the forefront of my attention upon reading Paul Mason’s book PostCapitalism: Envisaging a Shared Future, which was released at the end of last year.  His passionate take on the subject, and the idea that capitalism is being systematically undermined by the fruits of its own innovation, was a powerful rallying call that he has continued to make convincingly (read my full review of the book here).  More recently, I have been at three different events that explored the subject – either tangentially or directly – and it was fascinating to see the different ways the topic is being treated.

The first of these was a conference at Goldsmiths College run by the Political Economy Research Centre (PERC) – Beyond the Zombie Economy: Building a Common Agenda for Change – that aimed to explore viable economic alternatives to the predominantly neoliberal status quo.  Whilst covering everything from systems-based modelling, to the impact of debt, to feminist economics, the first day of the conference made no mention of the shift that automation might bring.  I was not alone in thinking this was an omission, and whilst I was looking for a good moment to raise the question it popped up from others in a number of different sessions on the second day.  The response on each occasion took on a quite dismissive tone.  Automation (or mechanisation, as it was referred to – more on that in a moment…) was seen as a fanciful and unrealistic framework in which to place discussions about the potential for new economic systems.

I found this response intriguing, particularly given the topic of the conference and the radical makeup of its audience.  The closest it came to a serious discussion of the issue revolved around Universal Basic Income (UBI) – and even this was looked upon suspiciously by left-wing commentators as little more than a power grab from the right, intended to dismantle welfare states and absolve governments of responsibility for social care.  It was a great conference for many different reasons, but it was clear that the topic of automation just wasn’t going to be taken seriously… at least not yet.

Circular Economy by lulupinney (CC, Flickr)

This experience contrasted with the next two occasions.  The first was a discussion held under the Chatham House Rule geared towards exploring models of circular economy.  It didn’t take long before automation was brought to the table – highlighted as something which will redefine labour markets, radically alter notions of productivity and consumption, and which unfortunately has the potential to exacerbate inequality even further if not directed towards a more citizen-focused, devolved economy.  The concept of UBI arose as a necessary consumption antidote to the shift, at least in transition, and measures of growth such as GDP were looked upon as already outdated and in need of a complete overhaul in order to accurately measure the flow of an economy within this new socioeconomic reality (or even within the current one).

The final event was a discussion centred upon a talk by Charles Ross, Chair of the Brain Mind Forum.  He takes the notion of automation very seriously and spoke about three key areas: robotics and employment, the ethics of artificial intelligence, and the long-term impact of convergence between our technology, our bodies and our minds.  It was his contention that much of the political discord we are seeing in developed economies at the moment (with specific mention of the rise of Donald Trump, to which one could add the result of the recent UK referendum) is primarily the result of wage stagnation and depression, which is itself being caused by the ICT revolution and the inevitable move towards an automated economy.

This technological paradigm shift has the capacity to generate vast amounts of wealth and creative capacity, as is already self-evident, but at the same time it begins to break down the relationship between labour and productivity that our modern socioeconomic context relies upon.  This is causing a great deal of disaffection, which in turn is being projected onto wrongly perceived external threats such as immigration.  The discussion this time took the rise of automation almost entirely as a given – perhaps unsurprising for a group who all had connections to IT – although there was disagreement on whether the end result would take a utopian form or prove devastating to our collective psychology and a threat to the very foundations of the human condition.

As an emerging topic that is not yet fully recognised in some circles, there is a lot of work to be done to bring the imminent paradigm shift of automation into the public eye and the attention of policymakers.  It’s also important to recognise that such radical change could occur on a timeframe even quicker than the devastating impact of climate change.  In many respects it is because of the enormity of the climate change challenge that the potential problems of automation are not being addressed, and those currently paying the most attention are doing so primarily because of workflow gains, profitability and improved efficiency in business models.

There is so much to add to this topic that I will be revisiting it over the coming months.  For now there are two things that I would add to the discussions I have been part of recently.  My intention here is to contribute to the conversation and help bring it to the attention of those looking into areas of new economics and policymakers who are interested in strengthening economic resilience.

Barber Play by brunurb (CC, Flickr)

The first is that we need to define more clearly the difference between mechanisation and automation.  A large part of the dismissal of this subject is the idea that “we’ve been hearing this for decades, and it never happened”.  Many people just don’t believe this time will be any different, but that is because they are misinterpreting the issue being discussed.

Even leaving aside the fact that General Motors at its peak had more than 850,000 global employees while Facebook has about 15,000 for a far larger market capitalisation – i.e. the hyper-efficiency of the digital economy as it already stands – what people are missing when they refer to the automation revolution as mechanisation is one of its core components: artificial intelligence.

It is incorrect to think that the process of automation consists merely of factory workers being replaced by robots running through pre-programmed and repetitive tasks.  This isn’t just an Industrial Revolution 2.0, and we need to make this clear to those dismissing its impact on such grounds.  The changes that will result from this revolutionary transition are not just extensions of the mechanical capacity of human beings; they fundamentally change the notion of productivity through the power of increasingly omniscient system awareness and instantaneous, plural decision-making.  Machine learning and artificial intelligence go well beyond an increase in production capacity and delivery speed, which were the hallmarks of the industrial revolution, because they have the potential to change the inherent nature of the workforce entirely.  This is in addition to the removal of risk factors and workplace conditions that human beings should never be made to endure, which could be a very positive outcome of the automation process.
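As a deliberately toy sketch of the distinction drawn above – and nothing more – the snippet below contrasts a ‘mechanised’ rule, fixed once by an engineer, with an ‘automated’ one whose behaviour is derived from data.  The quality-control scenario, the numbers and the function names are all invented purely for illustration.

```python
# Toy contrast between mechanisation and automation as used above.
# The inspection scenario, numbers and names are invented for illustration only.

def mechanised_check(weight_g: float) -> bool:
    """Mechanisation: a fixed, pre-programmed rule that never changes."""
    return 95.0 <= weight_g <= 105.0


def learn_cutoff(samples):
    """'Automation' in the sense used above: the decision boundary is
    derived from labelled observations rather than hard-coded."""
    good = [w for w, ok in samples if ok]
    bad = [w for w, ok in samples if not ok]
    # Place the cut-off midway between the average good and bad weights.
    return (sum(good) / len(good) + sum(bad) / len(bad)) / 2


def learned_check(weight_g: float, cutoff: float) -> bool:
    """Accept anything below the cut-off learned from the data."""
    return weight_g < cutoff


observations = [(99.8, True), (100.4, True), (101.1, True),
                (112.6, False), (115.0, False)]
cutoff = learn_cutoff(observations)

print(mechanised_check(103.0))       # True: the rule is fixed forever
print(learned_check(103.0, cutoff))  # True: but this rule shifts as new observations arrive
```

The point of the contrast is simply that the second kind of system changes its own behaviour as conditions change, which is what makes the current wave qualitatively different from a faster version of the industrial revolution.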

The final point I would add (for now) is about how we might embrace some of the opportunities afforded by automation: opportunities to redefine industries and social roles that are currently undervalued by a product-driven economic system – particularly those interactions that rely on relationships over transactions.  One of the points put forward very strongly at the PERC conference was how deeply undervalued the care industry, and caring for people in general, is within our current economic status quo.  Most care activity occurs unpaid and increasingly unsupported, often in terrible working conditions, yet it is one of the true cornerstones of a healthy and flourishing society.

By radically changing the landscape of labour participation in many previously booming sectors, perhaps we might be able to elevate areas that are currently economically marginalised.  Creative industries and service-driven relationships that require deep human-to-human emotional interaction have the potential to be uplifted to new levels of status and recognition – not to mention pay and employment conditions – once the more mechanical aspects of our workforce become fully automated.

The transition to a radically different system has already begun.  It is now our collective responsibility to openly discuss what this means – not just for the business models that might be able to take immediate advantage of it, but for the changes it foreshadows to the socioeconomic realities that we all feed into and rely upon.  This is fundamentally different to the industrial revolution, because it gets closer to the core of the human condition.  It requires us to formulate new ways of seeing ourselves as productive members of society, to connect with others in a multiplicity of communities both fixed and transitory, and to build together new narratives for what it means to positively contribute to the inter-generational project of humanity.

We have an opportunity to elevate global society away from the difficulties of subsistence living, empowering ourselves to seek out new landscapes of meaning and collaborative vision.  Yet there is also the risk of doing a great deal of damage to our collective psyche, of enabling this next phase of human evolution to enslave rather than liberate us.  Automation is a systemic revolution that runs throughout the structures that we currently build society upon – our production, consumption, education, politics, health and identity.  We need to work diligently to bring this vital topic into the public discourse, and find ways to make the challenges it presents immediately relevant to policymakers, innovators and citizens alike.

Automation is happening.  It’s important now to ask just what it is happening for – and for whom.

[Review] Ex Machina – The Future is Now

Like the best of futurist thinking, this is a film that demands that we question ourselves and the world we are co-creating.

[Review] Her – Love, Sentience and the Transition of our Times

‘Her’ gives us a glimpse beyond the veil, both outward to the temporal future and inward into the eternal capacity of the human soul.

Beyond Cute Robots: Towards a New Concept of Sentience

As we begin to assimilate artificial intelligence ever more closely with highly advanced engineering, we should not see ourselves as limited by the biological necessities that have previously created a boundary for physical existence and the expression of identity.

Robot maids and the human-tech future?

As with most things of a robotic nature, it seems that Japan and Korea are the places to watch for any significant advancements – particularly where consumer products are concerned. The latest announcement coming out of the Korea Institute of Science and Technology continues this trend with a demo of a pair of domestic robots that are capable of autonomous movement and activity within the household.