
One of the sad truths of American life is that sexual material tends to be on the cutting edge of technology. This probably dates back to Gutenberg, but more recently we’ve seen it with VCRs and then streaming video. And now, artificial intelligence images called deepfakes are hitting the news the same way, and they are sure to become an issue in future conflicts and elections.

In the latest case, teenage girls in New Jersey were victimized by artificially created, sexualized images, and there’s little they can do about it. "At least one student had used girls’ photos found online to create the fake nudes and then shared them with other boys in group chats," according to the New York Post. One victim was just 14 years old.

There’s a school investigation and police are involved, but whatever happens, the penalty won’t equal what the girl faces. Those false images could be shared online and follow her around forever.

This is only the latest example of how AI’s potential and its threats are both creeping into our daily lives. Thankfully, it’s not all bad. Beatles fans just got an interesting hint at the possibilities of artificial intelligence. The lost Beatles song "Now and Then" features an official music video that blends new footage of the band’s surviving members with old clips that are at times difficult to tell apart.

Singer Paul McCartney begins the video playing guitar, and the other surviving member of the band, Ringo Starr, is shown singing. At times, all four members appear together. It’s both inspiring and disturbing. It’s also easy to see the entertainment possibilities: new movies starring John Wayne or Humphrey Bogart, or new comedy bits from Robin Williams. Maybe new songs by Amy Winehouse or Kurt Cobain.

But as Spider-Man can tell you, with great power comes great responsibility. And not enough people are exercising it with AI. With powerful tools available to everyone, there is bound to be abuse.

Who owns those images? Wayne and Bogart signed contracts decades before anyone envisioned technology that could recreate them. Even Winehouse and Cobain probably didn’t address that in their contracts. Their estates would have an intellectual property claim, but that’s not going to stop everyone.

It all ends in a nightmare of lawsuits and copyright fights, while many ordinary users pay no attention to the rules, in a new era reminiscent of Napster’s widespread downloading of copyrighted songs. The potential to own, rent and sell manufactured works, whether in the U.S. or abroad, almost guarantees people will be bombarded by fakes.

Famous women, who are already subject to online harassment, will now face countless examples of fan-created deepfakes depicting them in false ways – first in photos and then in realistic audio and video.

Actress Scarlett Johansson is suing a company that used her likeness and voice to advertise on X. (Cindy Ord/Getty Images)

Actress Scarlett Johansson is suing a company that used her likeness and voice to advertise on X, which used to be known as Twitter. The 22-second ad promoted a deepfake image-generating app named "Lisa AI: 90s Yearbook & Avatar."

Variety reported that the ad started out with the real Scar-Jo and transitioned into the AI version. "Fine print under the advertisement reads: ‘Images produced by Lisa AI. It has nothing to do with this person.’" That’s what lawyers are for. But lawsuits take time and are very difficult to win if the violators are outside the U.S. 

Now, picture the propaganda images that could be created in a war like the one Israel is fighting against Hamas terrorists. These won’t be like the one Pallywood actor who keeps cropping up in all the Hamas photos. They will look believable and could be set in Gaza, Ukraine or any other global hotspot. Just like other false reporting, those who want to believe will believe. And that problem will escalate as technology improves.

Any news event is just as vulnerable to abuse, especially elections. There are reportedly more than 60 major elections across the globe next year, and AI will surely play a role in some, many or perhaps all of them. A new AP poll found that 58% of adults worry AI "will increase the spread of false and misleading information during next year's elections."

Campaigns are already exploring ways to mimic reality with ads. What’s to stop corrupt individuals from manufacturing material? Remember, that’s what social media companies claimed had happened with the Hunter Biden laptop. That time they were wrong. What about next time? Even a few incidents will once more encourage social media firms and leftist politicians to control online speech.

That’s a result no free person should want. 

Government is trying hard to maybe, sort of, deal with the problem. You can guarantee it will either fail or over-regulate. Those are the two things government typically does with complicated topics.

Where that leaves us is demanding more from each other. More as users of news and social media. More as parents. More as citizens. We need to be better with what we post and what we believe. I know, that’s hard when the major media themselves push entirely false narratives like Russian collusion. 

Here are five rules to help you:

Who do you trust?

That means outlets and people: friends, family and co-workers. If you trust them, the believability of what they tell you goes up. If you don’t, it goes down. That also puts pressure on those closest to you to do better.

Be suspicious

If something looks too good to be true, assume it is. No matter how much you want a story to be true, if it looks questionable, don’t rush to share it.

You don’t have to be first

I have spent decades in journalism and media criticism. We all love to be first. You get the credit, and you get the followers who see you as an influencer (ugh, that word). If you aren’t in the news business, forget being first. Focus on being correct.

First, do no harm

You probably aren’t a doctor, but that motto isn’t bad for this. If a story says awful things about a person, group or organization, be extra careful. It’s not just a question of being sued. It’s about doing no harm. Remember how quickly many blamed the Covington Catholic High School students for offenses they never committed, simply because one was caught on camera smirking.

Admit when you are wrong

Journalists hate this one. But if you say something about someone that’s wrong, don’t just delete it. Admit the error, then delete. Of course, it’s easier if you never make the mistake in the first place.

Government can also help by moving forward with privacy legislation. If companies aren’t as free to collect your data, it becomes harder for AI to mine what it knows about you. Social media and tech companies say they will do their part, but we’ve heard that before, and they have used such rules as an excuse to restrict speech they didn’t like.

That leaves it up to us to get the best out of AI and keep the worst from taking over.
