Terminator

You’ve probably read about the “existential threat” posed by Artificial General Intelligence (AGI). It’s a dark future where super-intelligent machines outsmart us and cause humanity to go extinct. We may be mesmerized by this high-stakes narrative, but we’re also being misled. The real threats of AI are already here, lurking in our everyday digital experiences. While tech titans and the media hype a dystopian AI future, they’re drawing our attention away from the AI and data privacy issues we need to solve right now.

The “Intelligent” Tools That Are Already Threatening Us

Here are ten ways today’s narrow, application-specific AI can weaponize data to do serious harm to individuals, companies, and even governments. (The list could be much longer.) None of these use cases requires superintelligence or AGI. All are possible today with existing, readily available tools.

1. AI-Powered Personalization: Manipulates individual choices by analyzing personal data for targeted ads. AI iterates on these ads until they reach the desired level of engagement. Think of it as automated persuasion.

2. Deepfakes and Misinformation: Hyper-realistic fake audio and video can damage reputations and disrupt democratic processes.

3. AI-Powered Hacking: Automated large-scale data breaches can expose individuals and companies to significant risks.

4. Financial Market Manipulation: By combining rapid trades executed by AI trading bots with coordinated AI messaging bots, bad actors can manipulate stock prices and destabilize markets.

5. AI-Enforced Discrimination: In law enforcement and recruitment, AI can unintentionally amplify biases, leading to injustice. This is especially hard to detect right now because certain questions are prohibited and the corresponding data are simply missing. For example, you can’t ask a job candidate whether they are male or female, so that attribute is never stored. How can engineers check a system for bias when the very data they would use to check are missing?

6. Mass Surveillance: Governments can use AI technologies such as facial recognition, phone IMEIs, device MAC addresses, and Wi-Fi signals for mass surveillance, potentially leading to civil liberty violations. Some employers already do a version of this with both on-site and remote workers.

7. Biased Healthcare Treatments: Skewed data can lead to inadequate or incorrect treatments for underrepresented groups.

8. Price Discrimination: AI can (and does) adjust prices based on consumer data, leading to unfair pricing practices.

9. Cyberwarfare: Weaponized AI can disrupt critical infrastructure, enabling devastating cyberattacks on nations.

10. Data Privacy: Current laws fail to adequately protect personal, corporate, and government data against misuse.

These current challenges (and dozens I have not listed) pose an immediate threat and demand immediate action. We have needed comprehensive data privacy laws for a long time, and we still need them. We also need to champion safe, ethical AI practices, which have yet to be fully defined.

Regulation

Can the AI industry self-regulate? That is what it is hoping for. From the industry’s point of view, if regulators say no to self-regulation, the next best thing would be for governments to impose regulations ASAP. Inept government regulation (and that is what we are likely to get) would practically guarantee that big tech stays big and small companies are shut out of the AI innovation race, because big tech is far better positioned to “game” whatever rules regulators impose.

Today’s Danger Is Clear and Present

AI is not a future threat; it’s a current reality. It’s time we update our conversations about AI to reflect this fact, turning our collective gaze from the future to the immediate and very real challenges we face. As our fascination with AGI continues, let’s remember the pressing issues that need our attention right now. We shouldn’t just fear a future where AI becomes too intelligent. We should be concerned about how AI is being used today – because the true threat of AI isn’t just about what’s to come. It’s already here.

If you want to learn more about how AI works and how it can be used to enhance productivity, check out our free online course Generative AI for Execs. It will help you frame this important issue.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He is a bestselling author and the creator of the popular free online course Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
