Dr Ayodele Bakare, Assistant Director, Cybersecurity Department, National Information Technology Development Agency (NITDA), has identified Artificial Intelligence (AI)-driven disinformation and misinformation as one of the most significant emerging cybersecurity threats.
Bakare said this yesterday in an interview with the News Agency of Nigeria (NAN) in Abuja, while speaking on the evolving risks associated with AI.
He described the growing misuse of AI technologies to manipulate information as a form of “cognitive cyber warfare” capable of distorting public perception and influencing societal realities.
According to him, the increasing sophistication of AI tools has made it easier for malicious actors to generate convincing false content, particularly through deepfake technology.
“We will continue to see AI-driven disinformation and misinformation that can alter our reality and this is one of the greatest threats posed by AI in cyberspace.
“The ability of AI systems to generate highly realistic fake videos, images and audio has made it increasingly difficult even for trained cybersecurity experts to easily distinguish between authentic and manipulated content,” he said.
The NITDA official said that addressing the threat required collective efforts involving government, the private sector, academia, civil society, the international community and individuals.
According to him, the government must strengthen policy responses by improving existing frameworks such as the National Artificial Intelligence Strategy.
Bakare also called for a review of the National Cybersecurity Policy and Strategy to include a dedicated section addressing AI-driven cybersecurity threats.
He further urged citizens to develop digital awareness and acquire the necessary knowledge to identify and respond to emerging AI-related cyber risks.
“If you are aware of the threats that can affect you and understand your vulnerabilities, you have a better chance of protecting yourself,” he said.
He emphasised that public awareness and training would play a critical role in helping individuals detect manipulated content and protect themselves from AI-enabled cyber threats.