Fighting the ‘rising tide’ of hate speech and misinformation

The Chief Censor’s Office used to worry about the odd boundary-pushing book, film or video game. These days it’s extreme material published online that forms the bulk of the work. David Shanks leaves the top job as the government embarks on a re-jig of regulating all media content to “better protect us from harm.” Could that compromise the media’s fundamental freedoms?

Chief Censor David Shanks launches the ‘Edge of the Infodemic’ report. Photo: RNZ Mediawatch

Twenty-five years ago Massey University media experts Chris Watson and Roy Shuker surveyed the state of censorship in a book called In the Public Good?

There are chapters about film, TV, music, literature and computer games, with classic cases that made headlines down the years like Lady Chatterley’s Lover, Lolita, Last Tango in Paris and Mortal Kombat.

Only the final chapter of the book pondered digital technology and the future – and most of that was about alarm over adult content beamed in on satellite TV. 

But the internet was already up and running by then – and in 1995 National MP Trevor Rogers drafted a Bill to oblige phone companies to block online pornography and violence. 

It went nowhere at that time – and Rogers only lasted another year as an MP.  

But the system of regulating media content – and censoring it – has barely changed since then. 

A snapshot of censorship in New Zealand back in 1998. Photo: RNZ Mediawatch

Back in 1998 Watson and Shuker said media headlines about internet pornography had an air of ‘moral panic’. 

Young people back then weren’t overly interested in it, judging by what they’d seen on TV One.   

“On the Holmes Show, boys and girls from a city college were firm in their assertion that they had only found material that Holmes wanted for his programme at his insistence,” the book noted.

“It is not really very easy to find, download, or reproduce free of charge,” they said. 

But that wasn’t the case for long – likewise, content created by extremists and even terrorists.

Facebook and YouTube were up and running within just a few years – and live-streamed violence was possible long before the March 15 atrocity in Christchurch sickened this country a quarter of a century later. 

In recent years, online indecency and violence has dominated the official censors’ workload.

In March 2019, the chief censor David Shanks hit headlines by banning the video of the Christchurch mosque atrocity and the racist manifesto behind it. 

In 2020, the law was urgently changed to make livestreaming objectionable content a specific criminal offence and to empower the Chief Censor to classify a publication immediately to stop the online spread. 

But the government dropped proposals to filter the internet and block entire websites after pushback from free speech advocates and some politicians.  

The then-leader of the opposition Simon Bridges warned Parliament such powers could even interfere with legitimate newsgathering, citing George Floyd-style eyewitness video or Newsroom’s revealing videos of Oranga Tamariki uplifts that year.

“All that this sort of thing is doing . . . is forcing material underground, where we can’t see it. That’s not the New Zealand way,” Bridges told Parliament.

National’s media and broadcasting spokesperson Melissa Lee called it “the start of the next national debate on free speech and censorship in New Zealand”. 

It wasn’t – but one could be sparked by subsequent government plans to reform media regulation and censorship. 

The Chief Censor David Shanks speaking at a conference about social media and democracy at the University of Otago this week. Photo: screenshot / Facebook

Aside from the Classification Office, only the Broadcasting Standards Authority – with jurisdiction over television and radio and broadcasters’ own online content – has statutory powers and members appointed by government.

The New Zealand Media Council acts as a watchdog for news publishers, and the Advertising Standards Authority enforces rules for ads. Both are self-regulatory and funded by the publishers and advertisers themselves.

Almost a decade after the Law Commission first called for a body with a “consistent set of standards” across all media, the Ministry of Internal Affairs proposed a new system to “better protect New Zealanders from harmful content.”

But not only is that mission very different to upholding agreed standards – ‘harm’ also means very different things to different people, who have different ideas about the limits of freedom of expression. 

When David Shanks came to the end of his five-year term last week, he said the government’s Content Regulatory Review provides a real opportunity to address “the rising tide of misogyny, hate speech, racism and misinformation across so much of the internet.”

News media leaders are wary of a single regulator with the power to censor and block online content also having power over them.  

“I don’t believe they should be worried but I can understand some reservation about what change will mean,” David Shanks told Mediawatch. 

“Right across the world, you’ve got regulatory structures that reflect the world as it was. And it’s important to keep some of those distinctions clear when we’re working through whatever comes next. But I think you can do that,” he said. 

“Freedom of the press is important – and the ability to report without anxiety and concern that there might be some legal liability for some kind of unlawful material,” he said.

“One of the most interesting overseas comparators that I have seen is Ireland where they are looking to . . . an integrated office, but within that a broadcasting commissioner or a content commissioner with separate jurisdiction. That sort of model makes sense to me,” he said. He also cited the EU’s Digital Services Act which aims to create “a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.”

“They’re saying: ‘Look, if you’re operating an online platform and distributing content and profiting from that, in a way you are a broadcaster – but we’ve got to accept that you don’t have the sort of editorial control and oversight that a traditional broadcaster has. So we’re going to apply some standards of transparency, some baseline standards of what you need to do to ensure that your users are kept safe – moderation of hate speech and the like.’ So it’s a bit of a mix.”

Any egregious anti-social breaches by media here can be dealt with under other laws, such as the Harmful Digital Communications Act and the Crimes Act. Hate speech laws reining in the media could soon follow. 

The government’s content review has already noted the news media pose a ‘low risk’ of harm.   

Why not make news media exempt from any new regulator with authority over content created without professional oversight?

“The question that immediately comes up is: what is classified as news media?” Shanks said. 

“By and large I think news media take a responsible professional approach and make some very, very difficult and testing calls in real time about risk and reporting while not creating harm,” he said. 

“But we’ve got online channels calling themselves news channels and broadcasters which have none of those levels of professionalism and balance. It really doesn’t take much imagination or much looking around to find overseas examples. So any system we put in place has got to accommodate that,” he said. 

The government’s Content Regulatory Review says content can cause harm to individuals’ “social, emotional, and/or mental wellbeing.”

It also says content can harm “communities and identity groups . . . when members of a community experience harm relating to their membership of that community.”

It also says content can cause harm to wider society.

“This might look like individuals or communities losing trust in, or access to, key public institutions,” the review document states. 

But all these kinds of ‘harm’ could be caused by legitimate reporting that’s clearly in the public interest. 

Think of reporting on a community like Gloriavale, or recent reports lifting the lid on the conduct of Arise church’s leaders. Or recent reporting about arranged marriages within migrant communities, which caused offence within those communities. 

“We cannot be designing a system to prevent any harm at all, or any risk of harm in terms of the feelings that people will naturally experience. But there are some things that get to the point where you are not just reporting the news, you’re inflicting harm or vicarious trauma upon your viewers and readers.

“But I think the only way you unravel that knot is in working . . . through exactly these scenarios and understanding where there will be freedom and where there will be expectations of a balance,” he said.
