In February 2024, Google’s newly released generative AI tool Gemini sparked controversy by incorrectly depicting historical figures as people of color in a wayward attempt to promote diversity. The embarrassment was prolonged when Google unveiled its Search Generative Experience (SGE), resulting in outlandish AI-generated results circulating on social media. One suggested adding glue to pizza to help the cheese stick; another advised eating at least one rock to fight an iron deficiency.

These incidents underscore the fundamental issue of relying on AI to convey truth accurately. While AI technologies can offer productive benefits, they are susceptible to errors, malfunction, and manipulation, posing significant ethical, legal, and security consequences for organizations and individuals alike. Let’s explore these concerns briefly.

Ethical implications

AI decision-making algorithms draw conclusions based on the data they process. If the underlying data is biased, incomplete, or inaccurate, the AI algorithm will imbibe those properties and characteristics as well. This may result in discriminatory decisions, especially if the AI is being used solely in lieu of sound human judgement, such as making hiring decisions, criminal justice matters, or for loan approvals.

But it’s not just biased data that can lead to biased outputs.

FEATURED VIDEO

An Inc.com Featured Presentation

According to NIST, biases in AI systems can result from both human and systemic biases. If the organization running the AI system engages in practices that harm specific segments of society, it can negatively influence the AI’s decision-making. Ethical concerns include lack of transparency (where users are unaware of the data or algorithms used in AI decision-making), and autonomy issues (where AI systems filtering data based on certain criteria may limit the diversity of information, potentially resulting in discriminatory or biased outputs). AI technologies are also prone to hallucinations, or wild assertions not grounded in truth or reality.

Legal implications

Organizations involved in the development or use of AI technologies must anticipate legal challenges related to privacy, data protection, liability, and intellectual property rights. AI systems analyze deep pools of data, which can bring up concerns regarding user consent for data analysis, and whether privacy and data protection measures are adequate for safeguarding personal information.

In terms of liability, who is held responsible if an AI system makes an error in calculation or writes something defamatory? Intellectual property infringements such as theft and plagiarism are also becoming problematic. Determining ownership of copyright for content created by AI can lead to legal challenges. Artists and illustrators have brought lawsuits against AI image generators for this reason.

Security implications

AI generates information based on its training model. But what if this training data is manipulated or tampered with by bad actors? NIST has warned organizations about data poisoning attacks. AI technology can be exploited by adversaries in a number of different ways, such as producing hyper-targeted phishing campaigns construed by analyzing the online behavior and personal details of a specific mark. Bad actors can use AI to deconstruct code and uncover vulnerabilities; they can leverage synthetic media (AI-doctored or cloned audio, video, and images) to spread disinformation and social engineer users. AI can also be used to create malicious payloads and to design highly evasive malware that evades detection by conventional security controls. Using malicious prompts (a.k.a. prompt injection), threat actors can make large language models (LLMs) perform unintended or unauthorized actions.

How can organizations mitigate AI risks?

As AI technologies mature and proliferate, the ethical, legal, and security risks of AI will become even more insidious and widespread. Here’s how organizations can protect themselves:

Avoid blindly trusting AI content: Always double-check to verify information generated by AI tools, especially for critical tasks or decisions. AI is imperfect and can produce inaccuracies. Use multiple sources to confirm the authenticity of facts.

Be aware of biases: Assess your training data. Understand that current AI systems can absorb and amplify biases. Learn to be critical and cautious of using AI-supplied information.

Protect intellectual property: Ensure that your organization only uses copyrighted content for training of AI models, especially when it’s feasible to license such content.

Foster critical thinking: Educate employees about the potential risks and errors AI can introduce to the organization. Direct users to consume information from reputable sources, to avoid spreading misinformation, and to be wary of deepfakes and other social engineering threats.

Simulate AI attacks for training purposes: Take advantage of phishing simulation platforms to train users on AI-powered disinformation, synthetic media like deepfakes and cloned voices, and other social engineering techniques. Aim to establish a healthy culture of security vigilance and resilience.

Demand transparency: Always insist that third party and supply chain partners disclose how their AI systems work and how they consume your personal and business information.

Always review policies: Before utilizing AI services, ensure that your business verifies whether data is being managed in compliance with specified regulations, policies, and best practices.

Like any advanced technology AI arrives with its own unique risks and challenges. As a society, we should aknowledge these risks and collaboratively address them. Understanding the strengths and weaknesses of AI tools, exercising caution instead of blind trust in AI-based decisions, and educating staff about AI-related risks are crucial steps organizations must take to develop the critical thinking skills necessary to navigate this new AI-driven world.