The transition from COP 27 to COP 28 has underscored a crucial lesson: Artificial Intelligence (AI) tools play a pivotal role in mitigating risks associated with sensitive data across all sectors, in developed and developing economies alike. In today's landscape, generative AI tools such as ChatGPT and innovations such as Microsoft 365 Copilot have become commonplace, prompting a critical examination of how organisations can safeguard their sensitive data in the pursuit of sustainability.
The proliferation of AI, particularly in the form of generative tools, has sparked significant interest. However, this surge in AI innovation also raises profound concerns about protecting sensitive information. To address these concerns, cybersecurity and IT leaders, among others, must employ specific measures to mitigate risks associated with tools like ChatGPT.
Ensuring data integrity is a paramount responsibility for sector leaders. The adoption of generative AI, particularly tools that ingest sensitive data as part of their operation, necessitates robust safety measures. One effective strategy is to deploy security service edge (SSE) solutions that intercept, redact, or block sensitive data inputs, thereby maintaining data integrity at the points of communication.
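The intercept-and-redact step an SSE gateway performs can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the pattern names, placeholder tokens, and the two regular expressions stand in for the far richer detection (classifiers, dictionaries, exact-data matching) a real SSE product applies.

```python
import re

# Hypothetical patterns a data-loss-prevention filter might match;
# real SSE deployments use much broader and more accurate detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    prompt is forwarded to an external generative-AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# → Contact [REDACTED-EMAIL] about card [REDACTED-CARD]
```

The same interception point can also block the request outright rather than redact it, depending on the policy in force.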
A proactive stance, especially the strategic use of block options at web connection points and APIs, is critical for preserving the confidentiality of sensitive data and ensuring consistent adherence to security protocols across the organisation.
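In practice, a gateway enforces this stance by mapping each request's data classification to an action at the connection point. The classification labels and three-way policy below are assumptions sketched for illustration; real SSE policies are centrally managed and far more granular.

```python
# Illustrative gateway policy, assuming three data classifications;
# the labels and actions here are hypothetical, not a real product's rules.
POLICY = {
    "public": "allow",        # open data may flow to generative-AI endpoints
    "internal": "redact",     # strip sensitive fields before forwarding
    "confidential": "block",  # never leaves the organisation
}

def decide(classification: str) -> str:
    """Return the action a web-gateway rule would take for a request
    carrying data of the given classification."""
    # Fail closed: anything unrecognised is treated as confidential.
    return POLICY.get(classification, "block")
```

Failing closed on unknown labels is the key design choice: a misclassified or unlabelled request is blocked rather than allowed through by default.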
Additionally, a meticulous examination of information security protocols is imperative, especially concerning commercial off-the-shelf (COTS) generative AI solutions like Microsoft 365 Copilot. While these tools offer promising possibilities, organisations must rigorously assess the data security measures underpinning these tools, especially when dealing with proprietary or client data.
Tailoring the use of generative AI tools to the nature of the data can optimise innovation and productivity in open-data environments, while stringent evaluation becomes imperative when handling confidential information. This underscores the need for an integrative and sustainable strategy that harmonises the power of generative AI with data integrity and regulatory compliance.
Organisations seeking maximum control over data protection can consider developing bespoke generative AI applications on top of foundational models, for example through Microsoft's Azure OpenAI service. This approach lets organisations craft applications that align precisely with their unique data security requirements, offering flexibility in development while upholding security obligations.
Moreover, institutions equipped with substantial resources can explore training domain-specific large language models (LLMs) like BloombergGPT using proprietary data. By training LLMs from scratch, these entities can fortify AI models to adhere closely to their data security parameters, establishing robust defences against data leakage and other vulnerabilities.
As the landscape of generative AI continues to evolve, it brings both promise and peril. Security experts, armed with a carefully devised roadmap, can navigate this terrain, harnessing the potential of ChatGPT and generative AI while safeguarding the sanctity of sensitive data.
In conclusion, a symbiotic relationship between innovation and protection enables companies to unlock the true potential of AI without compromising data integrity. Embracing AI sustainably ensures the safeguarding of sensitive data while driving progress and innovation in a rapidly evolving digital era.
Professor Ojo Emmanuel Ademola is the first Nigerian Professor of Cyber Security and Information Technology Management, and the first Professor of African descent to be awarded Chartered Manager status, and by extension Chartered Fellow (CMgr FCMI), by the Chartered Management Institute.