As the AI market in South Africa heads toward a projected valuation of $2.4bn by the end of the year, with growth expected at 21 percent annually until 2030, these technological advances come accompanied by a spectrum of risks, as highlighted by Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 AFRICA.
Locally, AI technology holds promise in mitigating security risks, improving decision-making processes, and fostering positive societal impacts. However, Collard cautions against overlooking associated risks, shedding light on significant concerns that demand attention.
‘Generative AI models, reliant on diverse data sources, lack proper verification, context, and regulation,’ explains Collard. While AI efficiently handles administrative tasks, its reliability becomes a concern when it is entrusted with pivotal decisions affecting people’s lives.
Collard references Kate Crawford, a professor at the University of Southern California and Microsoft researcher, who emphasises the imperfect nature of AI. Crawford underscores the risks stemming from AI’s reliance on flawed and biased human-generated data, cautioning users about potential long-term consequences.
Collard outlines six predominant risks associated with AI:
- AI Hallucinations: Instances where AI fabricates plausible-sounding but false or nonsensical outputs, typically when prompted beyond its training data, as in the case of a New York attorney who relied on a conversational chatbot for legal research and was given fictitious case citations.
- Deepfakes: Use of sophisticated AI technology like Generative Adversarial Networks (GANs) to create realistic yet misleading images, audio, and video, raising concerns about political manipulation and misinformation.
- Automated and Enhanced Attacks: Cybercriminals exploit deepfakes for impersonation attacks, employing manipulated audio or video to deceive victims into fraudulent actions, further automating phishing techniques.
- Media Equation Theory: The human tendency to attribute human traits to machines, leading to over-trust in AI interactions and rendering individuals more susceptible to manipulation and social engineering.
- Manipulation Problem: AI’s capacity to simulate emotions and respond to sensory input in real time enables the dissemination of predatory content, misinformation, and scams.
- Ethical Concerns: The presence of biases in data, lack of AI regulation, and the ethical implications of AI development raise substantial concerns, particularly in South Africa’s AI landscape.
Collard stresses the urgency of addressing these issues, highlighting the need for vigilance when sharing sensitive information with AI models and advocating critical thinking in AI engagement. She emphasises the importance of understanding how data is used and of fact-checking AI-generated outputs before relying on them.
‘AI, while a valuable tool, demands critical thinking and mindfulness in its usage. Caution should prevail, and reliance on AI should be balanced against verified information,’ says Collard, underscoring the importance of scrutinising data usage and engaging AI judiciously.