Seasonal Scam Alert
CISA, the Cybersecurity and Infrastructure Security Agency, has issued a warning about scams in the aftermath of natural disasters.
You can view this warning (and others) on CISA’s website, but essentially, they want us all to be on alert for criminals using “email or malicious websites to solicit personal information by posing as a trustworthy organization, notably as charities providing relief. Exercise caution in handling emails with hurricane/typhoon-related subject lines, attachments, or hyperlinks to avoid compromise. In addition, be wary of social media pleas, texts, or door-to-door solicitations related to severe weather events.”
Welcome to Hurricane Season, where the weather is not even the biggest part of the disaster anymore.
AI News to Know – 3 Headlines
AI-generated Photo Impacts the Market
You may remember the Pope’s puffer jacket image and story from April. This is a bit more serious.
Last week an AI-generated image of an explosion near the Pentagon was posted on social media and shared widely, even by some verified accounts. Officials confirmed no such event had happened, but not before the markets dipped.
The good news is that no lasting damage was done.
The bad news is that this is going to happen again. And we need to know how to recognize these fake images.
Al Jazeera reports “Artificial intelligence still has a difficult time recreating locations without introducing random artefacts… This can result in people having extra limbs and objects that are morphed with their surroundings.”
You can verify buildings by comparing to Google Street View.
And we should all keep in mind that news doesn’t happen in a vacuum. A single report with no eyewitnesses and no other corroboration should be treated with skepticism.
Lawyer Submits Fake Cases Cited by ChatGPT
A case now before a District Court in New York is dealing with AI issues that will have ripple effects.
Simon Willison breaks it down on his blog:
“The TLDR version
A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.
ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air.
When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings.
At some point, they asked ChatGPT to confirm that the cases were real… and ChatGPT said that they were. They included screenshots of this in another filing.
The judge is furious. Many of the parties involved are about to have a very bad time.”
Willison thinks there may be more to the story than we know right now, and he heard from other lawyers that this is happening in other places as well.
So what’s the lesson here?
Pay attention to the fine print. There is a footer on every page of ChatGPT stating, “ChatGPT may produce inaccurate information about people, places, or facts.”
And as Cat always likes to say, ‘Trust, but verify.’ I talked about some of the possibilities and limitations of AI in this video, and specifically pointed out the importance of fact checking and doing your own research.
I’m no lawyer, but I would have expected an associate of mine to independently confirm the cases he or she was going to name in a court filing or present to a judge.
AI Presents Political Peril for 2024
As if election season weren’t already long and aggravating enough, the Associated Press recently wrote about AI’s potential to mislead voters in the upcoming presidential election.
“Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.
The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.”
The article includes disinformation examples we have already seen go viral, and these tools are only getting more sophisticated.
So what can we do?
First, we need to accept that deepfakes are becoming more common. They’re easier to make, and they are becoming more convincing. So we have to be more critical of everything we consume on social media.
Consider the source of who’s posting and whether there is verifiable information on other reputable websites, not just social media.
Look for indications of deepfakes, such as movements that look ‘wrong’ somehow or that don’t sync with the audio.
Confirm that something is authentic before sharing it, and report posts that you know are fraudulent. Helping to stop the spread of misinformation can go a long way in shutting down the deception.
Note: I’m not sharing these articles because I think AI is evil. It’s not, and I don’t. It’s a tool. I just want you to be aware of the ways it can be used against you — ideally before that happens — so you will recognize the warning signs.
Good News
Spain’s National Police have taken down a scam email and text operation in Madrid, Seville, and Guadalajara, making over 40 arrests.
KnowBe4 reports on the Los Trinitarios gang, which is believed to have defrauded 300,000 people. And cybercrime was just their side gig: their main criminal activity involved weapons and narcotics, which the cybercrime helped to fund.
As much as I want you to be aware of all the threats out there, it’s good to keep in mind that agencies all over the world are working tirelessly to stop them, too.