© 2022 CyberNews - Latest tech news,
product reviews, and analyses.


Dr. Rachel O'Connell, TrustElevate: “children are 51 times more likely to be victims of identity fraud”


While personalized ads or video recommendations might seem like a useful feature, they can also lead children into a rabbit hole of harmful content.

While the internet can be a place to learn and access entertainment, it can also prove to be harmful – particularly for children. Experts say that the root of the problem is not only inattentive parents but also big companies who take advantage of young users’ data for marketing purposes.

Dr. Rachel O'Connell believes that data harvesting is not only wrong but can be especially damaging for children. Our guest is the founder and CEO of TrustElevate – a company making sure the internet is a safe space for everyone.

Tell us more about your story. What inspired you to create TrustElevate?

My PhD examined the implications of online pedophile activity for investigative strategies. The findings informed the 1998 Child Pornography Act (Ireland) and the 2003 Sexual Offences Act (UK). From 2000, I led a series of Pan-European projects, funded by the European Commission, coordinating a network of Internet Safety Centres located in 19 countries.

In 2006, I joined Bebo and was instrumental in operationally building the business and defining its child safety processes. I chaired a number of cross-industry groups focused on self-regulation and developed, e.g. the EU Safer Social Networking Principles.

A growing body of evidence indicates that if platforms reliably knew the ages of their users, they could create safer spaces. I worked with the UK Government on their Verify digital identity initiative. I am the technical author of the British Standards Institution PAS 1296 Age Checking code of practice.

In 2016, my tech Co-Founder and I architected a zero data eKYC service that enables child age verification and parental consent for social media and gaming companies, and for banks and fintech to enable digital onboarding of child bank accounts in alignment with the anti-money laundering Directive and the Payment Services Directive requirements.

TrustElevate's child age verification and parental consent technology is the implementation of years of research, thought leadership and commercial experience in the field as well as the result of a lifelong commitment to upholding the fundamental rights of children, stipulated in the UNCRC, Convention 108+ and expanded upon in the GDPR and the ICO's Children’s Code.

Can you introduce us to what you do? Why does handling children’s data require special attention?

Processing of data relating to children is recognized to carry particular risks. Furthermore, where online services are provided to a child and consent is relied on as the basis for the lawful processing of his or her data, consent must be given or authorized by a person with parental responsibility for the child; in the UK, this applies to children under 13.

The proper handling of children’s data is core to our business since mishandled data can have such an impact on people’s lives, especially children who are 51 times more likely to be victims of identity fraud with mishandled and leaked data central to those activities.

We facilitate verification across all age bands and throughout the customer lifecycle on our trusted platform, and we do not rely on biometrics or document scanning. Also, we are a Zero Data company, meaning we do not store data, which reduces regulatory exposure for our customers and helps them meet other environmental, social, and governance targets.

Companies deploy our services to meet regulatory requirements, facilitate safe digital parenting, and build transparency and accountability into how they handle young people’s data.

It is a requirement of GDPR that service providers must check whether a user is a child and adjust their data operations accordingly. TrustElevate’s age verification service enables that process and verifies that the person giving consent is actually that child’s parent, thus fulfilling the requirements of the law.

Additionally, what issues can arise if proper measures are not implemented?

One example is that AI frictionlessly connects adults with a sexual interest in children with kids who are innocently live streaming dances, and it enables these adults to act as a pack and request, cajole, and coercively control children to engage in increasingly sexually explicit acts. This has directly led to a spike in what is referred to by INHOPE as self-generated child sexual abuse material. Adults with a sexual interest in children no longer need to search extensively for children online to groom. They simply watch a few videos produced by children in a specific age band, gender, and ethnicity that meets their preferences. The algorithm then connects them with both children that match these criteria and other adults who share their predilections.

These automated processes also facilitate the repeated surfacing of increasingly harmful content to children who may have viewed content related to, for example, self-harm, disordered eating, or suicide. This repeated exposure may increase a child’s vulnerability or harm their sense of wellbeing. Data-driven harms are symptomatic of a systemic failure to protect children online.

TrustElevate’s age and consent verification facilitates the enforcement of those minimum age requirements by sharing the user’s age band with the service provider. In this way, children can be catered to appropriately and protected from adult-oriented and legal-but-harmful content and contact online, including contact from adults with a sexual interest in children.

This may seem like an extreme jump from data handling to grooming and child sexual abuse, but in digital environments there is so little oversight and accountability that poor safeguarding practices do lead to both, with real psychological and physical consequences for children and young people.

How do you think the pandemic affected the global state of cybersecurity? Did you add any new features to your services as a result?

The UN's Global Humanitarian Response Plan to COVID-19 states that child sexual exploitation is an anticipated indirect impact of the pandemic and of increased internet usage without the necessary solutions or protections in place. The pandemic has strengthened companies' need for our service and for its core tenet: the zero data model.

Incorporating a zero data model into the TrustElevate service means that we do not retain or transfer personal data. Zero data proofs allow parties to prove something, e.g., a user’s age band, such as <13, without revealing the actual date of birth. This is critical to ensuring privacy and combating cyber crimes, which are often facilitated by insecure data and vulnerable transfers.
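To make the idea concrete, here is a minimal sketch of the zero-data principle: the relying party receives only a coarse age band (plus a salted commitment it could later check against a trusted verifier), never the date of birth. This is an illustrative assumption of how such an attestation could be shaped, not TrustElevate's actual protocol; all names (`age_band`, `attestation`, the band boundaries) are hypothetical.

```python
from datetime import date
import hashlib

# Illustrative only -- not TrustElevate's actual protocol.
# Band boundaries below are assumed for the example.
AGE_BANDS = [(0, 12, "<13"), (13, 15, "13-15"), (16, 17, "16-17"), (18, 200, "18+")]

def age_band(dob: date, today: date) -> str:
    """Map a date of birth to a coarse age band."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    for lo, hi, label in AGE_BANDS:
        if lo <= age <= hi:
            return label
    raise ValueError("age out of range")

def attestation(dob: date, today: date, salt: bytes) -> dict:
    """What a verifying party could release: the band plus a salted
    commitment to the underlying record -- but never the DOB itself."""
    band = age_band(dob, today)
    commitment = hashlib.sha256(salt + dob.isoformat().encode()).hexdigest()
    return {"age_band": band, "commitment": commitment}

proof = attestation(date(2012, 6, 1), date(2022, 3, 1), salt=b"random-salt")
print(proof["age_band"])  # the service provider sees only "<13"
```

The design point is that the service provider's systems never hold a date of birth, so a breach on their side cannot leak it.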

Cybersecurity is the application of technologies, processes, and controls to protect systems, networks, programs, devices, and data from cyberattacks. Until recently, internet safety was regarded as a matter of keeping people safe through education programs. However, a distinction between cybersecurity and internet safety is no longer tenable as the requirement to exercise a greater duty of care toward people and their data gains traction.

Data-driven operations that put people's data at risk continue to grow in sophistication. The centralization of personal data is itself a major security challenge: in 2018, Google suffered a data breach potentially exposing over 50 million personal profiles.

Often hidden from users, companies use a variety of tactics based on personal data, including behavior modification, AI, predictive analytics, and profiling, to manipulate users in ways that are known to pose a significant risk of harm to people and society.

What would you consider to be the most serious data privacy issues at the moment?

I believe the harvesting of data to build personalized psychographic profiles is one of the major issues of our time, and we haven’t yet seen the true extent of the ways they can be used against the people they’re based on.

Again, we can look to their intersection with recommendation systems to see how the attention economy, and the big tech platforms that participate in it, prey on our vulnerabilities to keep users engaged. Children are particularly vulnerable to such tactics.

The most recent story about this was Frances Haugen’s revelations about Facebook’s toxicity to young women, and to their body image in particular. We’re seeing the impact of social media detecting an interest in diet content and continuously recommending more of it: rates of disordered eating are spiraling among children exposed to such content. NHS data for April - October 2021 showed a 41% increase, across all regions, in hospital admissions of children (<17) for eating disorders compared to 2020.

Another major story we shouldn’t forget comes from 2017, when it was uncovered that Facebook told advertisers it could monitor posts and photos in real time to determine when young people feel “stressed”, “defeated”, “overwhelmed”, etc., for targeting purposes.

These are data privacy issues in that they rely on invading users’ privacy to extract data for purposes that users never gave explicit and informed consent for. The principle of privacy is violated, and children’s own personal (datafied) experiences are manipulated to increase their susceptibility to advertising.

Besides implementing various security measures, what other actions can parents take to help their kids maintain a healthy relationship with modern devices and the internet?

Digital parenting is absolutely vital in this day and age, but it can be a real challenge. Social norms are shifting around what is acceptable, often disarming our children against harmful behaviors, and online environments don’t have bouncers to guard their digital doors. Parents often feel overwhelmed and ill-equipped to deal with these challenges.

There have been precedents set in the real world regarding creating safer environments for children. Before health and safety regulations had an impact on real-world playgrounds, they could pose a risk to children's safety and well-being. Parents spearheaded campaigns to ensure local councils created safer spaces. New regulations require companies to deploy Safety Tech services such as TrustElevate’s age verification and parental consent, which will ensure that age-appropriate digital playgrounds can be created.

Parents should urge companies to use such tools, advocate for companies to take more responsibility for children's safety, and mobilize to hold companies to account. Complacency on the part of parents means that companies are slow to act. Follow the example set by Duncan McCann, who has taken a class action against YouTube for violations of UK children’s privacy.

Be sure to have open and honest communication about the internet, your relationships with it, and the importance of telling someone about anything that upsets them.

Parents interested in integrating into their children’s digital lives, parallel to what one would expect from a cinema or a sports club in the real world, should connect with TrustElevate and keep up to date with our journey.

In the age of online learning, what would you consider to be the essential security measures organizations should implement?

Distanced learning has become a mainstay of education, and as part of the administrative activities of educational work, many edtech platforms gather and store students’ personal data. The collection of sensitive information by edtech presents unique exploitation opportunities. Edtech providers should minimize data collection and protect whatever is stored and shared using a zero-trust model.

Organizations should also work to assess any third parties with which they share data and prove that the standards of that party are up to their own.

Schools have several responsibilities under the Data Protection Act 2018 when procuring edtech services. These include due diligence, safeguarding, and oversight. Guidance on the ICO’s website indicates that “Schools must think carefully about the responsibilities they and the edtech provider will hold under a specific contractual agreement. More specifically, the degree to which the edtech provider will be able to influence how children’s data is used. Schools need to consider who is acting as a sole data controller, or whether they are joint controllers and processors.”

This is not explicitly a security consideration, but it does relate to the fact that there are different security responsibilities associated with data controllership and processing.

Schools and edtech providers looking to minimize children’s exposure to harmful content, contact, and conduct online while securing their personal information should consider TrustElevate’s zero-trust age verification service. The technology has far-reaching implications, such that once it has been integrated in, say, the educational context, it can be used across the internet to ensure both wellbeing and data security.

As for personal Internet browsing, which tools would you recommend for safe online usage?

I would recommend that people use a reputable and secure VPN and opt for privacy-preserving services like Brave, a browser that automatically blocks ads and trackers, and DuckDuckGo, a search engine that does not store users’ personal information.

Also, be aware of how your data is used and potentially misused. For example, when a webpage loads, data about users’ interests are broadcast to tens or hundreds of companies to allow companies representing advertisers to compete to buy ad space to show specific users their ads. This process is known in the online advertising industry as “real-time bidding”.

Real-time bidding is such a problematic practice because privacy is not upheld. Companies use dark patterns (manipulative UI design, such as hard-to-find opt-out buttons) to extract ‘consent’ - which is neither informed nor explicit - from users before subjecting their data to these incredibly large-scale, privacy-insensitive auctions. These practices are currently under investigation by several regulators, including the ICO, the Irish Data Protection Commission, and others. Taking the time to become aware of these data practices and related investigations is an important step to being educated and informed.
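The broadcast flow described above can be sketched in a few lines. This is a deliberately simplified toy model of real-time bidding, with generic, assumed field names rather than any exchange's actual API; its point is that every bidder receives the user's interest data, not just the auction winner.

```python
# Toy RTB model -- field names and bidder logic are illustrative assumptions.

def build_bid_request(user_profile):
    """The exchange packages the user's interests into a bid request
    that is broadcast to every bidder. This is the privacy leak:
    all bidders see the data, whether or not they win."""
    return {"segments": user_profile["interests"], "geo": user_profile["geo"]}

def run_auction(request, bidders):
    """Collect a bid from every bidder and pick the highest."""
    bids = {name: bidder(request) for name, bidder in bidders.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner], len(bids)  # len(bids) bidders saw the request

bidders = {
    "dsp_a": lambda req: 0.8 if "diet" in req["segments"] else 0.1,
    "dsp_b": lambda req: 0.5,
    "dsp_c": lambda req: 0.9 if req["geo"] == "UK" else 0.2,
}

winner, price, seen_by = run_auction(
    build_bid_request({"interests": ["diet", "fitness"], "geo": "UK"}), bidders
)
print(winner, price, seen_by)  # one winner, but all 3 bidders saw the data
```

Only one bidder wins the impression, yet in this model all three received the user's interest segments, which mirrors the data-dissemination concern regulators are examining.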

And finally, would you like to share what’s next for TrustElevate?

TrustElevate is creating the world’s most effective child online safety measures. We are currently developing a Child Rights Impact Assessment (CRIA) as part of a UK government-funded piece of work. A CRIA is a series of interrelated decision trees requiring engineers, data scientists, and commercial teams to consider risks, harms, and safeguards associated with product features made accessible to children in specific age bands.
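The "interrelated decision trees" structure can be illustrated with a small sketch. The questions, risk labels, and outcomes below are hypothetical examples invented for illustration, not the content of the actual government-funded CRIA.

```python
# Hypothetical CRIA-style decision tree -- questions and verdicts are
# illustrative assumptions, not the real assessment's content.
from dataclasses import dataclass

@dataclass
class Node:
    question: str
    yes: "Node | str"   # either a follow-up question or a verdict string
    no: "Node | str"

live_streaming = Node(
    question="Is the feature accessible to users under 13?",
    yes=Node(
        question="Does the feature allow contact from unverified adults?",
        yes="HIGH RISK: require age verification and verified parental consent",
        no="MEDIUM RISK: apply age-appropriate defaults and review data flows",
    ),
    no="LOW RISK: document the age-gating mechanism and monitor",
)

def assess(node, answers):
    """Walk the tree with a sequence of yes/no answers until a verdict
    (a string leaf) is reached."""
    for ans in answers:
        node = node.yes if ans else node.no
        if isinstance(node, str):
            return node
    return "INCOMPLETE: more answers needed"

print(assess(live_streaming, [True, True]))  # reaches the HIGH RISK verdict
```

A real CRIA would link many such trees and record the reasoning at each step, but the shape is the same: product teams answer structured questions per age band and arrive at documented risk decisions.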

In today's competitive marketplace, it's important to meet industry-wide standards of care in cybersecurity, and with the emergence of the Child Rights Impact Assessment and Accountability Index, the processes and technologies that platforms deploy to safeguard the security and safety of users will be a booming area. On 26 January, the EU Commission put forward a declaration on digital rights and principles for everyone in the EU.

We foresee this becoming a major tool in organizations’ belts, enabling them to implement both Age Appropriate and Safety by Design principles and thereby much more effectively protecting their users while participating in the ever-expanding ESG market.

Alongside this, we are continuing to develop our child age verification and parental consent solution to ensure that it is as accurate and secure as possible. So, what’s next is ensuring that those who do have a duty of care toward children online work to implement that duty with emerging technical solutions like TrustElevate’s. Our aim is to make the internet a safe place for children and young people to learn, play and grow, and we’re going to do that in collaboration with those platforms on which children are trying to do that.

COVID-19’s impact on the digital landscape has been enormous. Organizations including the FBI, UN, and NCA are warning that because Facebook, Google, and other sites have sent human moderators home during COVID-19, children's exposure to harmful content and contact online is skyrocketing. Consequently, child abuse images are circulating unchecked, and online sexual grooming is increasing rapidly.

This is a marked increase in the threat to children's safety online at a time when they are using the internet more than ever. The UN's Global Humanitarian Response Plan to COVID-19 states that child sexual exploitation is an anticipated indirect impact of the pandemic.

And the FBI’s Internet Crime Complaint Center (IC3) revealed in its most recent internet crime report that the losses reported to the IC3 by internet users as a result of cybercrime amounted to over $146 million, a 171 percent increase in losses from 2019.


