
At a basketball tournament, one of my friends pointed across the court and asked me what instrument the band member was holding. I couldn’t quite see it, so I pulled out my phone and focused on the instrument using the Google identification app.

At least, I thought I did. When I clicked "search," it identified the young musician, showed where she lives, and linked to her Instagram profile. I could have accessed more, but I was so horrified at the availability of information about her that I immediately closed the app after showing my friend what Google had provided.

A while ago, I served on a human trafficking awareness panel aimed at informing a high school audience about what human trafficking is and how to recognize and avoid it. One of my fellow panelists, representing a local sexual assault prevention organization, disputed the information I presented on the dangers of sharing personal photos on social media.

As a mother of a 14-year-old, she said there was no danger. As a trained expert, I worried for the safety of innocent children. I get it, though. It’s hard to wrap our heads around the fact that a sexual predator’s only job is to live out sexual fantasies using children’s photos or, worse, locate them and exploit them.

Each year, as schools resume their activities, I post a plea on social media asking everyone not to post photos of their children. Share photos privately with family instead. Know that from a single photo, the unscrupulous can learn everything they need to exploit the photo's subject. Even institutions such as schools should not photograph students of any age and share those images throughout the system. It's not only a privacy issue; it's a safety issue.

With the advancement of AI, as demonstrated by Google, we have gone far beyond what was safe. I have always been an early adopter of technology; AI is no exception. It is, however, in large part, uncharted territory. We are responsible for our creations. Photographing and posting photos without faces is not protection. Photos can be manipulated.


What can we do? The responsibility lies everywhere. One remedy is the development of laws and policies that protect victims and provide a private right of action against violators.

The bipartisan DEFIANCE Act was introduced in January 2024 to "hold accountable those responsible for the proliferation of non-consensual, sexually explicit deep-fake videos and images." (The US Senate Committee on the Judiciary) Governments must prosecute social media owners as co-conspirators.

Should we hold everyone accountable? I’ve long thought children’s photos should not be posted because children aren’t old enough to consent. Lately, as I see parents exploiting their children on social media to sell products, I am disgusted and wonder, how do we educate them that this isn’t right?

The rise of AI has increased the dangers of posting photos of children, but the danger has always been there. Educate and empower. Safety first. Be curious, not fearful.
