Friday 5: Five things to be aware of in online ethics

Digital and data ethics came under the spotlight last week after the UK’s Online Safety Bill was announced in the Queen’s Speech. In this week’s ‘Friday 5’, Velvet’s Andy Riley delves into recent headlines to see who’s on the nice list and who’s naughty when it comes to the murky world of digital ethics.

Do the right thing and read on to find out more … 

  1. The Online Safety Bill fails to keep everyone happy: who saw that one coming? Years in the making, the legislation formerly known as the Online Harms Bill was conceived as a means to control online content that could be harmful to children, such as pornography. The remit has been extended to cover racism and terrorism, as well as disinformation and online scams. Most controversially, it also seeks to protect ‘democratically important’ content.

Why should I care?

Speaking as the parent of a young child, I thought this legislation was well overdue. After all, surely no-one could really complain about protecting kids from the darkest corners of the internet? That’s easier said than done, of course, so it does make sense that the UK government should focus on those platforms where harmful materials can easily be shared – even if you aren’t actively seeking them out.

So, what’s not to like? The issue lies in what constitutes ‘democratically important content’ in the era of ‘fake news’. In protecting freedom of speech around big issues like an election, a referendum or even whether you should get the jab, the Bill seems a little conflicted.

Any content published under the auspices of an ‘official’ news source does not come under scrutiny unless a complaint is raised with Ofcom. This is as it should be in a democracy – even if that story is a contentious opinion masquerading as news. However, the moment someone shares the same story on a social network, it could be deemed ‘harmful’, and the platform owner can then be fined up to £18 million. One commentator has labelled it “a censor’s charter… outsourced to Silicon Valley.”

The Bill came from a good place, yet somehow it’s ended up in the badlands. While clearly there is a problem with disinformation on social networks, it remains to be seen if this system is in any way workable. But that’s just my opinion of course…

 

  2. Fake news begone. Again: Facebook is itself attempting to limit the spread of disinformation with a new feature that asks users to think twice before sharing a news story they haven’t read.

Why should I care?

The day before the official unveiling of the Online Safety Bill – which is an uncanny coincidence when you think about it – Facebook announced it is taking proactive steps to purge ‘fake news’ from its platform.

If someone tries to share any news story without having first clicked through to read it, they will be hit by a pop-up warning them: “You’re about to share this article without opening it.”

Twitter trialled a similar feature in June last year for people retweeting stories they hadn’t read. While Facebook is yet to announce any metrics, Twitter revealed users were 40% more likely to click through to the article after receiving the prompt.

 

  3. Musk be greener: Tesla has backtracked on its decision to allow consumers to use Bitcoin to buy its vehicles. Elon Musk tweeted that he was concerned about the amount of fossil fuel expended in Bitcoin mining.

Why should I care?

For a company that builds electric vehicles and solar roofs, Tesla sometimes has a ‘surprising’ record on sustainability. Notably, the company made $518 million in Q1 by selling on emissions credits – awarded by 14 US states for exceeding emissions and fuel economy standards – to other automakers that were missing their targets.

More eyebrows were raised when Elon Musk became the poster boy for cryptocurrency after his company bought $1.5 billion worth of Bitcoin in February.

The problem with cryptocurrencies is the amount of electricity expended in mining them. In fact, mining Bitcoin uses more power each year than the whole of Argentina. 70% of this activity happens in China, where miners invariably opt for the cheapest electricity tariffs, typically generated by coal-fired power stations.

Analysts suggest it was most likely Tesla’s concerned investors who were behind the decision to pull out of Bitcoin. However, Musk isn’t finished with crypto yet: Bitcoin contributed $101 million to Tesla’s Q1 revenues, and he is looking at other, less polluting cryptocurrencies – at least until Bitcoin mining transitions to more sustainable energy.

 

  4. Social media detox: Facebook’s oversight board followed Twitter’s lead and upheld the decision to ban Donald Trump from the platform. Meanwhile, Twitter is taking steps to discourage users from replying in haste.

Why should I care?

I once posted an opinion about Brexit on Twitter. This proved to be a mistake. I can report that social media can be a highly toxic place. Eventually my nemesis and I agreed to disagree but for a couple of hours I had some insight into what those in the public eye must go through every day.

Without wishing to perpetuate any stereotypes, a lot of the keyboard warriors you come across do tend to be angry old men. As such, it probably makes sense that Facebook should permanently unfriend Donald Trump. In response, The Donald set up his own ‘social media’ site, albeit without the social bit – users aren’t able to disagree with the site’s one power user… although they are able to share his wisdom via Twitter and Facebook.

The already Donald-less Twitter decided to further detoxify its network by releasing a new feature for its iOS and Android apps that asks you to reconsider hastily written replies that might contain harmful or offensive language. While falling short of censorship, the feature asks users if they’d like to edit or rethink their reply before sending. Twitter suggests these prompts successfully convince a third of users to think again before hitting the ‘send’ button.

 

  5. AIs have feelings too (sort of): A study in Italy is trying to understand AI decision-making by listening to the internal monologue of a robot called Pepper as it completes set tasks. By asking Pepper to think out loud, scientists hope to better understand how the subconscious is mimicked in a machine.

Why should I care?

First, have you seen Battlestar Galactica? If so, you probably don’t need to read any further.

In less genocidal terms, the study seeks to understand why AIs are sometimes unable to complete user requests.

In one experiment, the researchers trained Pepper in table-laying etiquette but then asked the robot to do so incorrectly. Poor old Pepper was heard to say to itself: “This situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants.”

As someone who suffers from terrible internal guilt, I will now never again be rude to Alexa.