A crime is a crime, even if it's online – here are six ways to stop cyberhate
If you're abused by a stranger in the street, there's help available. It shouldn't be any different online, write Nicole Vincent and Emma Jane.
OPINION: Picture this. You're confronted by a stranger in the park who calls you a "fat, ugly, c***" and threatens to smash your face in with a hammer next time they see you there.
Or you discover nude photos of yourself stuck to light poles around your suburb – accompanied by your name, address, and an open invite to everyone to pop around for rough sex.
Or your boss tells you to get a sense of humour and stop being so difficult when you report a co-worker who is bullying you for being gay, or for belonging to a racial or religious minority.
Offline, police, courts, legislation, workplace regulations and a range of government institutions would be there to help.
The machinery of the state does its best to maintain public order, ensure safety, protect our physical and emotional wellbeing, and keep things running smoothly.
Guidelines are also in place for the design of safer and more inclusive environments, including adequate street lighting and wheel-friendly ramps as well as stairs.
So why, despite an increasingly large portion of our lives taking place in the cybersphere, is no-one there to help when situations such as those described above happen online? More importantly, how can this staggering void be addressed?
This was the topic of a cyberhate symposium we staged in Sydney recently. Sixty participants from around Australia – including police, lawyers, activists, representatives from government and the media, online platform moderators, designers and owners, gamers, coders, academics and cyberhate targets – brainstormed solutions to cyberhate problems.
Here are six concrete ideas:
First, it is well past time we recognised the real impact of online violence. To do this, we need to stop referring to it with vague, innocuous-sounding and meaningless terminology.
Online abuse will not be taken seriously if violent threats such as "Raped? I'd do a lot worse things than rape you!!" and worse keep being referred to with generic labels like "strong language", "in bad taste", or "trolling".
Second, alongside more accurate language, new categories of criminal offences are needed.
Provisions must be made to levy fines for recognised offences, as we already do for parking and speeding violations, or for lewd, disruptive, threatening and menacing conduct on public transport and in public spaces.
Third, civil remedies are also needed. These should include protection orders and litigation against individual offenders, as well as class-action lawsuits against software designers and platform operators who create and maintain unsafe environments (see the fifth point below).
Fourth, police need reassurance that these offences, despite occurring in the Nowheresville of cyberspace, do indeed fall within their jurisdictions.
They also need training and clear guidance about what evidence to collect, and the resources to swiftly follow up when victims report online abuse.
Fifth, to support all this, our technology is in dire need of an upgrade. Not the sort that produces faster processing, more storage or yet another suite of cute emoticons, but an ethical upgrade.
Consider, for instance, a ban on instant, disposable accounts, accompanied by the gradual unlocking of full account functionality on various platforms as users prove themselves to be good netizens.
While so-called "real name" policies are open to criticism, it is also worth considering the advantages of requiring new account applicants to provide enough evidence that they are real, flesh-and-blood humans: details that authorities could then use to track down offenders, regardless of whether they abandon their accounts after committing abuse.
Software designers and platform managers must take responsibility for designing safer spaces, just as we build safety into offline environments (for instance, by not designing streets and walkways with dark and dingy nooks and crannies where innocent passers-by can be cornered and attacked).
Why should safety standards only apply to streets and buildings, when online highways and platforms are increasingly becoming the places we spend our time?
Our view is that if software designers and platform managers do not discharge their responsibility with due diligence, then they must accept potential fines, liability, and even criminal sanctions when their patrons get hurt.
Sixth, to design the right technological solutions, ethics needs to be taught to engineering and design students. Not as a soft subject or "politically correct" inconvenience, but as a way of baking ethical, and not just practical, functionality into software and platforms.
In fact, our case is that learning to build ethical functionality into artefacts and environments should be an integral part of every designer's and engineer's training, and just as important as learning to build and program any other functional requirement.
The technology writer Nilay Patel has observed that we don't do things "on the internet" any more — we just do things. Accordingly, in addition to updating our language, laws, and policing practices, we think an ethical upgrade of our tech via what is known as "value-sensitive design" should become as essential as improving broadband speeds and site clickability.
Dr Nicole Vincent is a fellow at Macquarie University and Dr Emma Jane is a senior research fellow at UNSW Sydney. Their latest published research can be found in the new collection Cybercrime and its Victims. This article was originally published by ABC Online.