Safer Digital Spaces: The Vital Role of Technology in Combating CSAM and Preventing Online Harms 

Forensic Fix

In this episode, Adam Firman is joined by Tom Farrell QPM and Jesse Nicholson from SafeToNet. They delve into the remarkable work undertaken by SafeToNet, while also sharing their personal journeys and experiences within the industry. 

Tom Farrell is the Chief Operating Officer at SafeToNet. Prior to joining the organization, Tom spent nearly two decades as a law enforcement officer. He dedicated the latter part of his policing career, approximately 12-13 years, to combating online child sexual abuse and exploitation. His exceptional work in this field was acknowledged with the prestigious Queen’s Police Medal. 

Jesse Nicholson, the Chief Technology Officer at SafeToNet, has an impressive background in the software industry spanning 17 years, and over 20 years in the tech sector altogether. In 2021, he brought his extensive expertise to SafeToNet, where alongside a dedicated team, he’s been working on pushing the boundaries of technology in order to safeguard innocent victims from online harms.  

The trio had an engaging and insightful discussion revolving around the pertinent topic of digital forensics and the progress in the ongoing fight against CSAM. They explored the role of technology in preventing online harms, making it an inspiring episode that highlights the importance of continuous research and implementation efforts aimed at creating a safer world. 

 

Show Links:

Listen to the Podcast at: 

Key Takeaways

 

Their Career Paths: How Did They End Up Working with SafeToNet? 

Tom Farrell and Jesse Nicholson are two key individuals contributing their expertise to SafeToNet. With their extensive backgrounds in law enforcement and software development, respectively, they play vital roles in the organization’s efforts to combat online child sexual abuse and exploitation and to develop innovative technology solutions for online safety. 

[02:33] Tom Farrell: Most of my work for the last probably 12, 13 years has been in the space of online child sexual abuse and exploitation. Adam and I used to work together at Norfolk and Suffolk Constabularies in that kind of area. And I moved on from Norfolk and Suffolk to the Home Office and then to SafeToNet a couple of years back to carry on the journey, but in a slightly different way. 

[03:05] Jesse Nicholson: So I’ve had a long career in both the hardware and software side of computers, software for the longest. I started my own business doing software development when I was about 20. I’ll skip a whole bunch of long, boring history, but I got into content filtering somewhere around late 2011, and I started developing my own content filtering software, which eventually led to my company being acquired by SafeToNet in 2021, when I came on board, and here I am today. 

 

On Their Collaboration: From Ideation to Implementation 

Working collaboratively, their dynamic partnership brings a balance between visionary ideas and technical feasibility. Tom’s ability to conceptualize and Jesse’s technical expertise enable them to shape innovations that appeal to law enforcement agencies and tech companies, fostering adoption and implementation. 

[06:00] Tom Farrell: The reality is that I can come up with ideas in my head about generally how I think things should go, but Jesse can bring me back down to Earth with: You can’t technically do that, or you can technically do it. And I can then try and shape some of his technical innovations in such a way that the end customer, be that law enforcement or be that tech companies, will want it and want to implement it onto their platforms or into their digital investigations processes. 

 

What Is SafeToNet? 

[01:53] Tom Farrell: SafeToNet is a UK-founded cyber safety tech company. So, SafeToNet fits neatly in between the tech industry and those who are looking to put tools to the greater good to protect children and other vulnerable users online. SafeToNet is established in the UK, and we have staff in the UK, Germany, the USA, and Jesse in Canada. We work remotely, but we work together to try and develop solutions to protect children online.   

 

What Does SafeToNet Do? 

At its core, SafeToNet develops technology to make the internet a safer place, especially for children. There’s a wide range of actions and initiatives they undertake with that goal in mind.

[06:56] Tom Farrell: We do some of the more conventional parental controls type of work. We as a company acquired the US organization Net Nanny several years ago. So, we are in the process of redeveloping our parental control solution, Net Nanny, which will be released later this year with a more child-focused approach to it, respecting child privacy and adhering to the ICO Children’s Code, for example. And then we also take part in projects with other organizations. We just launched a project with seven other participants for the European Union, and this is to build a preventative solution to help those who are trying to stop themselves from viewing child sexual abuse online. So, we do some of those kinds of innovative approaches. Probably our most interesting development of the last year or so has been SafeToWatch, which is Jesse’s invention. 

 

What is SafeToWatch?  

SafeToWatch is a real-time solution from SafeToNet designed to detect and prevent harmful content in images and videos. 

[08:01] Jesse Nicholson: SafeToWatch is a software development kit that does predictive analysis on computer vision. So, the idea is that you can feed it images or video and it’ll give you an idea of what it strongly believes that imagery contains. When I first came on board SafeToNet, I had developed something like that in my own software, but specifically for pornography. CSAM wasn’t even something that was on my mind or radar. SafeToWatch is a really good illustration of how Tom and I work so well together. So, we expanded that initial concept to include CSAM. SafeToWatch can recognize pornography independent of CSAM – it can tell the difference between them. And we’ve done that in partnership with the IWF. We had this initial foundation of that technology, and it was Tom who said: can we tackle CSAM with this as well? And we put in a lot of time, development back and forth, planning, but we got there. It’s actually up there, production-ready now, and it’s ongoing work that we will only continue to make better. [16:57] SafeToWatch is all visual space. So, it’s “looking” at what the content is, and it’s making a judgment call just like a human moderator would. Maybe not as perfectly as an expert moderator would, but that’s essentially all it’s doing. It’s “looking” and then giving you its best advice as to what it “sees”. 
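The call pattern Jesse describes – feed the classifier an image, get back per-category confidence scores, and act on the strongest one – can be sketched roughly as follows. This is a toy illustration, not the real SafeToWatch API: every name here (classify_frame, verdict, the category list, the fake pixel-statistics "model") is hypothetical, and a real SDK would run an actual on-device vision model.

```python
# Toy sketch of a classify-then-decide pipeline, in the spirit of what
# Jesse describes. All names and the scoring logic are illustrative only;
# they are NOT SafeToNet's real API or model.

CATEGORIES = ("neutral", "pornography", "csam")

def classify_frame(frame):
    """Stand-in for an on-device vision model: returns a confidence
    score per category. Here we fake the scores from simple pixel
    statistics purely so the example runs end to end."""
    brightness = sum(frame) / len(frame) / 255.0
    scores = {
        "neutral": 1.0 - brightness,
        "pornography": brightness * 0.7,
        "csam": brightness * 0.3,
    }
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def verdict(frame, block_threshold=0.5):
    """Make the 'judgment call': report the strongest category and
    whether the content should be blocked at this threshold."""
    scores = classify_frame(frame)
    top = max(scores, key=scores.get)
    blocked = top != "neutral" and scores[top] >= block_threshold
    return top, scores[top], blocked
```

The key design point from the episode survives even in this toy: the model only "looks" at pixels and emits advice (scores plus a best guess); what to do with that advice – block, warn, log – is the integrating application's decision.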

 

Bridging the Gap: SafeToWatch Provides a Privacy-Focused Approach to Tackling Online Harms 

Amidst the polarized discourse on online safety versus privacy, SafeToWatch emerges as a purposeful middle ground, deliberately addressing the critical issue of online harms while maintaining a privacy-focused approach. 

[10:54] Tom Farrell: We’ve seen so much over the last couple of years with the polarized debate around online safety sitting at one end of the spectrum, privacy sitting at the other. We’ve very deliberately not been part of that debate, and the reason is not because we’re not passionate about it, but because we think SafeToWatch can be the effective compromise that sits in the middle. It’s privacy focused, but also deals with the awful online harms that we all know exist.  

 

Taking a Preventative Approach to Online Harms and Child Abuse Material 

The focus of SafeToNet technology is on its truly preventative nature rather than a punitive approach. Its primary goal is to prevent the creation and consumption of harmful content online, in all its forms. 

[11:21] Tom Farrell: So what I’d say, and what we do say when we’re speaking to organizations regarding SafeToWatch: here is an approach that is preventative – truly preventative at heart. Its core function is not designed to catch people. The core function is designed to prevent the creation and consumption of child abuse material, material generated via sextortion, material that children have innocently sent to each other but maybe afterwards come to regret – but it’s on the internet already, so there’s nothing they can do about it.  

Our approach purely is: We have a solution. If you want to try it out and you want to test it and see whether it works on your platform, you can test it. We’re happy for anyone to test it. We won’t charge for that testing, and then we’ll enter into discussions about how it can be embedded in time. So that’s not a cop out. It’s just that we don’t think that we need to be part of that kind of debate. We think we can be part of the solution to it. 

 

Preventing CSAM Across Devices, Applications, and Networks 

SafeToWatch offers versatile integration options, ranging from embedding it directly at the device level to potentially incorporating it into applications, operating systems, and even browser extensions. 

[13:12] Jesse Nicholson: It can actually be embedded directly at the device level. So, as you say, a handset manufacturer can have machine learning at the system level that can do things for you. You can do that with SafeToWatch as well. So you could, in theory, prevent any application from using a camera to capture CSAM in some fashion, or from being able to select a CSAM file and put it into any app. [16:14] I suppose you can deploy it in a browser extension as well. I mean, you can deploy it pretty much at any level. So you can put it into the operating system. You can put it into an application. You can put it into the network. A lot of corporations monitor their own network for security – viruses and whatnot. That exact same pipeline can be used for detecting CSAM and keeping it off the network.   

 

Comprehensive Prevention and the Importance of Projects Such as Stop It Now Helpline 

One of the main challenges in online safety and combating CSAM is recognizing the need for comprehensive prevention. 

[14:08] Tom Farrell: I think one of the hardest things in online safety and tackling online harms is an acceptance that prevention has to encompass everything. In the project we’re talking about, the European project, there are a huge number of people who are voluntarily putting their hands up saying: I’ve got a problem consuming child sexual abuse material. They haven’t been caught by the police. So, this isn’t a response to being caught by the police. They’re actively contacting organizations such as the Stop It Now helpline run by the Lucy Faithfull Foundation. And they’re saying: is there a technical way to stop me? For CSAM, at the moment, the answer largely is no, there isn’t. So, this is a variation using the core SafeToWatch type technology: when they seek to view that kind of content, it’s built in, it’s on their device – they can’t do it. It’s to aid them; it’s just another part of the whole prevention approach. Not just trying to prevent at the victim end or the potential victim end, but also on the part of those who are offenders but don’t want to be an offender. 

 

On Their Technology’s Performance 

[18:09] Jesse Nicholson: It can run in real time even on older mobile phones. We’ve benchmarked it. I dug up an old, like seven-year-old Intel Atom netbook and benchmarked it on there. We were able to do video in real time. That thing should have been in a recycling bin a long time ago, but anyway. So yeah, the real-time performance is there pretty much regardless of what kind of hardware you’re running it on. In terms of accuracy – because we want to see SafeToWatch implemented directly into applications that are user facing, a concern has always been how we can keep false positives down to zero, if possible. For our current model, we did an extensive joint evaluation paper with the IWF where we measured that specifically. To give a rough idea: normally, when someone comes up with a new type of neural network, there’s a standardized test called ImageNet, and they evaluate its performance with about a hundred test images per category. Well, we threw about 22,000 test images of CSAM at this model, and we threw 100,000 neutral images at it to measure the false positive rate. Raw, without post-processing, it was around a 1.34% false positive rate, and we drove it all the way down to 0.34% by thresholding it a little bit. Now, you lose some true positives if you do that, but it’s less of a headache. And in video, it’s an order of magnitude lower because of our temporal algorithm – it was 0.02% in classified video. 
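The two mechanisms Jesse mentions – thresholding the model's confidence, and smoothing scores over consecutive video frames – can be illustrated with a minimal sketch. This is not SafeToNet's actual evaluation code; the scores, labels, and window size below are made-up numbers chosen only to show the trade-off: a higher threshold cuts the false-positive rate, at the cost of missing some true positives.

```python
# Minimal sketch (illustrative data, not SafeToNet's evaluation) of the
# threshold trade-off and temporal smoothing described in the episode.

def rates(scores, labels, threshold):
    """scores: model confidence that each item is harmful, in [0, 1].
    labels: True if the item really is harmful.
    Returns (false-positive rate, true-positive rate) at this threshold."""
    fp = sum(1 for s, l in zip(scores, labels) if s >= threshold and not l)
    tp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l)
    negatives = sum(1 for l in labels if not l)
    positives = sum(1 for l in labels if l)
    return fp / negatives, tp / positives

def smooth(frame_scores, window=3):
    """Toy temporal algorithm: average scores over a sliding window of
    recent frames, so a single noisy frame can't trigger a detection."""
    out = []
    for i in range(len(frame_scores)):
        chunk = frame_scores[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical evaluation set: confidence scores and ground-truth labels.
scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [True, True, True, False, False, True, False]

low = rates(scores, labels, 0.5)   # lenient threshold: more FPs, more TPs
high = rates(scores, labels, 0.7)  # strict threshold: fewer FPs, fewer TPs
```

With these toy numbers, raising the threshold from 0.5 to 0.7 eliminates the false positive but also drops a true positive – exactly the "you lose some true positives, but it's less of a headache" trade-off. Averaging per-frame scores over a window is one simple reason video can achieve a much lower false-positive rate than single images.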

 

Education Goes Both Ways: Acknowledging the Wisdom Kids Hold in Navigating the Digital World 

In Episode 4, Austin Berrier talked about the need for parents to educate their children about online threats and set a positive example.  

This time, the tables are slightly turning. Recognizing the valuable knowledge that children possess about online trends and activities, there is a need for a shift in perspective where children also educate their parents about the digital landscape.  

[22:05] Tom Farrell: Children need to educate their parents about what is going on online. We often hear about education in terms of how parents need to tell their children what they should and shouldn’t do online. Well, as a parent of three daughters and somebody who’s involved in the tech space, I would still say they know an awful lot more about what’s going on online, and particularly new current trends as well. [26:31] Some of my best information about what the new current trends are comes from a fairly innocuous 20 minutes in the car with my daughters, when I ask them questions like: What are people doing on TikTok to get around the fact that there’s age verification, et cetera, on there? And I remember months and months ago, they told me: Ah, the way people do it is they all have the same login to the same private account, and they share content there. Well, last week, there was a big exposé published about how people are publishing child sexual abuse material on TikTok via single log-ons to private accounts. So, the children are the ones who are noticing this, probably way before anyone else. It would be daft not to listen to them and gain some of our education that way. 

 

Hopes for the Future of SafeToNet 

[30:00] Jesse Nicholson: I hope a couple of things. One, that it demonstrates that something can be done where it’s not going to negatively impact your application or your service or whatever it is. And with that being demonstrated, I hope that SafeToWatch, and any technology like SafeToWatch, would generally become implemented, because I feel like in the technological revolution, we’re very much in the wild west where everything’s unregulated. And there was once upon a time, with the industrial revolution, where it was the same way. And we look back on that now and say: that’s crazy, why did we do things that way? We can do better. And I think SafeToWatch is the proof that we can. And I just hope we do collectively start to do better with technologies like SafeToWatch. 

 

Thank you for joining us on the fifth episode of Forensic Fix.