The struggle to balance freedom of expression and social responsibility in the online world is heating up. The European Union is debating legislative proposals to regulate internet services, including new rules against abusive or illegal content. In the United States, a social media app filed suit after being dropped by its web hosting service. Meanwhile, President Joe Biden's call to revise the law that protects social media platforms from legal action over online content has sparked a backlash.
These and other jurisdictions would do well to look at the experience of Australia. The country created the world's first online safety agency in 2015, following the suicide of a television presenter who had faced abuse from internet trolls. Julie Inman Grant, a former Microsoft executive who heads the agency as eSafety Commissioner, has made progress in getting social media platforms to take down cyberbullying and nonconsensual intimate photos. But at a time when the pandemic has shifted much social and economic activity online, this remains a major uphill battle. Between March and September last year, reports to the commissioner's office of child sexual abuse material online more than doubled, while abuse involving adults jumped 49 percent.
Regulators can make people aware of the risks and of ways to protect themselves better, Inman Grant says, but the really big breakthroughs need to come from social media companies and other service providers building safety into their products from the outset, a concept known as “safety by design.”
“This is a change in ethos from moving fast and breaking things to deliberately making the experience less toxic, more civil, and ultimately safer,” says Inman Grant. She spoke recently with Paul Mee, an Oliver Wyman partner who leads the firm's Cyber Risk platform and co-heads the Oliver Wyman Forum Cyber initiative.
The eSafety Commissioner started by focusing on children’s online safety. How big is the problem?
The research that we've done has stayed relatively consistent over the past 10 years: One in five Australian children is cyberbullied. The average age is about 14. Girls are bullied more than boys. Over that period, we were starting to see increases in mental health distress and, while there isn't a direct causal link, a lot more teen suicides. Cyberbullying was really tearing at the fabric of society.
We're not here to do content moderation for the platforms; they need to do that themselves. But we serve as a safety net and advocate on behalf of children when things fall through the cracks and serious cyberbullying isn't dealt with. We've helped thousands of young children get content taken down that wouldn't otherwise have been removed. There is an inherent power imbalance between the platforms and individual users. Even as companies start to introduce appeals processes and oversight boards, these don't operate effectively at the speed or scale required.
How do you know you’re making a difference?
First, we focus on preventing harms from happening. We know it takes a long time to achieve meaningful behavior change. You can't lead with fear-based messages or judgment, and you need to use the media and language that young people are using.
One of our most successful programs is called Rewrite Your Story, a series of video vignettes based on real-life scenarios. We've developed educational resources for teachers, who show the videos in class and guide the conversation so that children are problem-solving; we're not solving the problem for them.
We also know that only 50 percent of young people will speak to a trusted adult when something goes wrong online. So, we need to encourage parents to become that front line of defense, to engage in their children's online lives just as they do in their everyday lives: to set time limits, and to talk to them about where to go if things go wrong.
Prevention is great, but what happens when things go wrong?
With youth-based cyberbullying, it's been relatively easy. We've had a 100 percent compliance rate with the major social media sites; we haven't had to use our powers to fine tech or media firms or compel takedowns in any case we've brought to date. With image-based abuse, the nonconsensual sharing of intimate images and videos, we have an 85 percent success rate in getting content removed.
Do you anticipate a time when there will be fewer signs of self-harm because of the steps you're taking?
I am very cautious about saying cyberbullying leads to teen suicide. We know that suicide is very individualized, very complex, and there are usually underlying mental health issues. Some research suggests that when underlying mental health issues combine with both face-to-face bullying and cyberbullying, the person feels they can't escape, and suicidal ideation can escalate. But one of the reasons you don't want people saying this causes that is that it removes the important step of getting children to seek help, to get mental health support, to report abuse when it happens, and to get their parents or their schools involved.
You also work with the elderly and minority populations. Are their issues different?
The least digitally engaged population is Australians over the age of 65. They're also a much more trusting generation, so they are more susceptible to scams and social engineering.
Just over four years ago, we started a program called Be Connected, which is all about engaging older Australians with basic digital literacy skills as well as online safety fundamentals. It was challenging getting seniors engaged. However, with COVID, we've seen a huge spike in interest in things like how to video conference or bank and shop online. We're grateful that it's there because the online world has played such an important role in ensuring that those most susceptible to social isolation can actually connect.
We also know that those who are more at risk in the real world also tend to be more at risk online. The prevalence of targeted online abuse and hate is much higher for women than for men. If you're an Indigenous person, if you identify as LGBTQI, or if you have a disability, you're three times more likely to become a victim of some form of online abuse.
Does the technology sector bear any responsibility for online abuse?
I think you've seen a lot about bias in artificial intelligence. That bias reflects who's doing the technology development. When I joined Microsoft back in 1995, it was 70 percent men to 30 percent women.
The ratio is much the same today: still the same proportion of men, usually white Anglo-Saxon men, though the workforce is slowly becoming more diverse. That imbalance has definitely influenced the algorithms that drive which content is promoted online. And so we're creating massive filter bubbles where particularly negative or extreme views are elevated and amplified. Once you get into a filter bubble, it's very hard to get out.
How do you cut through that bias?
Security by design and privacy by design have been embraced by companies because they help drive trust and sales. When you get into your car today, you take for granted that the brakes will work, the airbags will deploy, and seat belts will be effective.
We need to apply the same kind of thinking to the online world. Tech companies know how their platforms can be, and are being, weaponized. To me it's a question of corporate will. They have the resources, the intellectual capability, and the advanced technology. So why aren't they thinking actively about the risks to their users and building in protections at the front end, rather than bolting them on after something terrible happens?
If there's a regulatory threat, a reputational threat, or a revenue threat, companies act. So I think the safety-by-design movement is picking up speed. People are losing patience. Why should we become casualties on the digital roads?
Clearly, this goes beyond any single jurisdiction. What thoughts do you have on how to promote safety on a global scale?
Many governments have come to us. We're building a tool kit now regarding how our investigations and our regulations work. I expect in the near term that the UK will have an independent online harms regulator, Canada will have one, and Ireland will have a digital safety commissioner. It's interesting to note that Joe Biden and Kamala Harris have talked about standing up a task force to look at the intersection of online harassment, violence against women, and cyber exploitation, which is what we call image-based abuse.
A small regulator in the South Pacific is not going to be able to go to war with the internet. But what I think we've shown over the past five years is that a dedicated regulator can bring the global behemoths to heel when it needs to. While it is important to have a big stick, there's something of a dance: We want to do as much cooperatively as we can. There are good people working in these companies who want to do the right thing. But we need more leadership and much more proactive action.
While the internet is global, nation-states are sovereign. I'm hoping there will be a vibrant, active network of like-minded online harms regulators like the eSafety Commissioner, just as there are data protection authorities around the globe. Such a network would enable us to work together and achieve the critical mass needed to change things for the better.