A REGULATORY FRAMEWORK FOR ARTIFICIAL INTELLIGENCE

SONNY IROCHE argues for an AI regulatory framework to protect the financial services sector, and safeguard the electoral process

As Nigeria approaches the 2027 general elections, the nation once again stands at a crossroads. The rapid advancement of artificial intelligence (AI) offers unprecedented opportunities to enhance economic growth, improve financial services, and streamline governance. However, these advancements come with significant risks—cybersecurity threats to the banking sector, the proliferation of malware and AI-enabled blackmail, plagiarism in academic and creative spheres, and the weaponization of deepfakes and propaganda in political campaigns. Without a robust regulatory framework, these threats could undermine Nigeria’s financial stability, democratic integrity, and societal trust.

The global landscape provides stark lessons. From the United States to India, democracies have grappled with AI-driven disruptions, often with devastating consequences when unprepared. Nigeria, with its vibrant democracy and burgeoning digital economy, cannot afford to lag behind. This article explores the urgent need for an AI regulatory framework to protect the banking and financial services sector, curb cyber threats, and safeguard the electoral process, drawing on international examples to inform a proactive Nigerian strategy.

Nigeria’s banking sector has embraced digital transformation, with mobile banking, fintech innovations, and online transactions becoming the norm. The Central Bank of Nigeria (CBN) reported that digital payments surged by 89% between 2019 and 2022, reflecting the sector’s reliance on technology. However, this digital leap has exposed vulnerabilities, particularly as AI empowers cybercriminals with sophisticated tools.

AI can analyze vast datasets to identify weaknesses in financial systems, enabling targeted attacks. In 2016, the Bangladesh Bank heist saw hackers use malware to steal $81 million via the SWIFT network, exploiting lax cybersecurity measures. While not explicitly AI-driven, experts note that modern iterations of such attacks increasingly leverage AI to evade detection. In Nigeria, the 2021 ransomware attack on a major bank, which demanded millions in cryptocurrency, hints at the potential for AI-augmented malware to wreak havoc.

AI also facilitates blackmail by synthesizing personal data into coercive tools. In the United States, the 2017 Equifax breach exposed the data of 147 million people, later used for identity theft and extortion. An AI regulatory framework could mandate stringent data protection standards, preventing such incidents from crippling Nigeria’s financial institutions.

South Africa offers a regional example. The 2013 “Dexter” malware attack on retailers, which stole millions of rands, underscored the need for proactive cybersecurity regulation. South Africa responded with the Cybercrimes Act of 2020, which includes provisions for emerging technologies like AI. Nigeria must follow suit, tailoring regulations to its unique financial ecosystem.

Malware remains a persistent threat, evolving with AI to become more elusive. In 2023, the “WormGPT” tool—a malicious AI variant—emerged, enabling cybercriminals to craft phishing emails and ransomware with alarming precision. Nigeria’s digital economy, projected to reach $88 billion by 2027, is a prime target.

The 2017 WannaCry ransomware attack affected over 200,000 systems worldwide, including the UK’s National Health Service, costing billions. AI could have accelerated its spread by adapting to antivirus defenses in real-time. The European Union’s response—the General Data Protection Regulation (GDPR)—imposes strict penalties for data breaches, encouraging preemptive cybersecurity investments. Nigeria lacks a comparable framework, leaving its systems vulnerable.

Local incidents, such as the 2020 hacking of Nigeria’s National Identity Management Commission (NIMC) database, highlight the stakes. An AI regulatory framework could enforce standards for software development, mandating “secure-by-design” principles to thwart malware proliferation.

AI tools like ChatGPT have revolutionized content creation but also fueled plagiarism. In academia and media, unoriginal work undermines credibility and intellectual property rights—a concern for Nigeria’s educational and creative sectors.

In 2022, a US university expelled students for using AI to generate essays, sparking debates over academic integrity. India faced a similar crisis in 2023, when AI-generated articles flooded online platforms, prompting calls for regulatory oversight. These cases illustrate AI’s dual nature: a tool for innovation and a vector for ethical breaches.

Nigeria’s universities, already battling plagiarism, face a new frontier with AI. A regulatory framework could establish guidelines for AI use in education, ensuring transparency and accountability while fostering innovation.

Perhaps the most alarming AI-driven threat is its potential to distort democracy. Deepfakes—AI-generated audio or video mimicking real individuals—can sway elections through misinformation. With Nigeria’s 2027 elections looming, the stakes are high.

Historical Examples in Democracies

 One, United States (2020): During the 2020 presidential election, deepfake videos of candidates surfaced, though their crude quality limited their impact. By 2024, improved technology had amplified their reach, with a fabricated video of Joe Biden “confessing” to voter fraud gaining millions of views before being debunked.

 Two, Slovakia (2023): Days before parliamentary elections, an AI-generated audio falsely depicted a candidate discussing vote-buying. The damage was swift, eroding trust despite official denials. Slovakia lacked a framework to counter such threats, a gap Nigeria must avoid.

 Three, India (2024): In the world’s largest democracy, deepfake ads targeting opposition leaders during state elections fueled unrest. India’s subsequent AI policy draft emphasizes real-time detection and public awareness—measures Nigeria could emulate.

Nigeria’s history of election-related misinformation—e.g., the 2019 spread of fake results via WhatsApp—makes it ripe for AI exploitation. A deepfake of a presidential candidate conceding defeat or inciting violence could ignite chaos. Without regulation, political opponents could deploy such tactics unchecked.

Beyond deepfakes, AI enables broader political interference. Bots and algorithms can amplify divisive narratives, while hacks expose sensitive campaign data.

Global Case Studies

 • United States (2016): The Russian-linked Internet Research Agency used AI-driven bots to spread propaganda, influencing the presidential election. The Mueller Report highlighted regulatory gaps that allowed this interference.

 • France (2017): The Macron Leaks—hacked emails released before the election—were amplified by AI bots. France’s swift legal response mitigated damage, a lesson in proactive governance.

 • Kenya (2022): AI-generated social media campaigns exacerbated ethnic tensions during elections, prompting calls for digital oversight.

Nigeria’s 2027 Outlook: Nigeria’s polarized political landscape, coupled with widespread social media use (over 30 million active users), amplifies these risks. A regulatory framework could mandate transparency in digital campaigns, penalize malicious AI use, and bolster cybersecurity for electoral bodies like INEC.

The threats outlined above—cybersecurity breaches, malware, plagiarism, deepfakes, and political interference—share a common thread: AI’s unchecked potential. A regulatory framework is not a luxury but a necessity to protect Nigeria’s financial stability and democratic integrity.

Safeguarding Banking and Finance

 • Mandate AI Audits: Require financial institutions to audit AI systems for vulnerabilities, mirroring the US Treasury’s 2024 AI risk management guidelines.

 • Enhance Data Protection: Update the Nigeria Data Protection Regulation (NDPR) to address AI-specific risks, drawing from GDPR’s success.

 • Foster Collaboration: Establish a public-private task force, akin to South Africa’s cybersecurity hubs, to share threat intelligence.

Curbing Malware and Blackmail

 • Secure-by-Design Standards: Enforce software development protocols to preempt AI-enhanced malware, inspired by the UK’s secure AI guidelines.

 • Penalize Malicious Use: Legislate harsh penalties for deploying AI in blackmail or cyberattacks, deterring perpetrators.

Addressing Plagiarism

 • Educational Guidelines: Regulate AI use in schools and universities, balancing innovation with integrity, as seen in India’s draft policies.

 • Detection Tools: Invest in AI-driven plagiarism detectors, ensuring accountability across sectors.

Protecting Elections

 • Deepfake Detection: Partner with tech firms to deploy real-time detection tools, as tried in India’s 2024 elections.

 • Campaign Transparency: Require digital campaign ads to disclose AI use, emulating France’s post-2017 reforms.

 • Cybersecurity for INEC: Equip the Independent National Electoral Commission with AI-resistant systems, learning from Kenya’s 2022 upgrades.

Challenges to Implementation in Nigeria

Crafting and enforcing an AI regulatory framework is not without hurdles:

 One, Technical Capacity: Nigeria lacks sufficient AI expertise, necessitating investment in training and international partnerships.

 Two, Political Will: Bureaucratic inertia and competing priorities may delay action, a pitfall seen in early US AI debates.

 Three, Cost: Funding a robust framework could strain budgets, though global examples (e.g., EU’s AI Act funding model) suggest phased approaches.

 Four, Balancing Innovation: Overregulation risks stifling AI-driven growth, a concern India navigates through regulatory sandboxes.

To address these challenges and seize the opportunity, Nigeria should adopt a multi-pronged strategy:

 • Legislative Action: Enact an AI Governance Act by 2026, drawing on the EU AI Act’s risk-based approach.

 • Capacity Building: Partner with organizations like UNESCO and the African Union to train regulators and technologists.

 • Public Awareness: Launch campaigns to educate citizens on AI risks, mirroring the UK’s AI Safety Summit outreach.

 • International Cooperation: Join global AI governance bodies, ensuring Nigeria’s voice shapes emerging norms.

As Nigeria prepares for the 2027 elections and beyond, the non-implementation of the National AI Strategy Report, prepared by the committee constituted in April 2024 by Bosun Tijani, the Minister of Communications, Innovation and Digital Economy, does not augur well for an AI regulatory framework in Nigeria.

The banking and financial sector remains vulnerable to cyberattacks, while elections face manipulation through deepfakes and propaganda. Global democracies—from the US to Slovakia—offer cautionary tales and solutions. Nigeria must act decisively, crafting a framework that safeguards cybersecurity, curbs malicious AI use, and preserves democratic integrity. The stakes are too high to delay. Let 2027 be a testament to Nigeria’s foresight, not a repeat of the failings that have characterized elections in the country since independence in 1960.

 Iroche holds a postgraduate degree in Artificial Intelligence from the Saïd Business School, University of Oxford, UK, and is a member of the Technical Working Group of UNESCO on AI Readiness Assessment Methodology.
