The Computer Society of Kenya

Since 1986

The Cyber Signals report warns that these kinds of attacks will only become more sophisticated as AI evolves social engineering tactics. This is of particular concern for businesses in Africa, which remains a global cybercrime hotspot.

While Nigeria and South Africa estimate annual losses to cybercrime at around $500 million and R2.2 billion respectively, Kenya recorded its highest-ever number of cyberattacks last year: 860 million.

A KnowBe4 survey of hundreds of employees across the continent revealed that 74 percent of participants were easily manipulated by a deepfake. Fortunately, AI can also be used to help companies disrupt fraud attempts. In fact, Microsoft records around 2.5 billion cloud-based, AI-driven detections every day.

AI-powered defence tactics can take multiple forms. Beyond the use of tools like Copilot to enhance security posture, Microsoft’s Cyber Signals report offers four additional recommendations for local firms to better defend themselves in a rapidly evolving cybersecurity landscape.

Adopt a Zero Trust approach

The key is to ensure the organisation’s data remains private and controlled from end to end. Conditional Access policies can provide clear, self-deploying guidance to strengthen the organisation’s security posture, and will automatically protect tenants based on risk signals, licensing and usage.

Enabling multifactor authentication for all users, especially for administrator functions, can also reduce the risk of account takeover by more than 99 percent.

Drive employee awareness

Aside from educating employees to recognise phishing emails and social engineering attacks, IT leaders can proactively share their organisations’ policies on the use and risks of AI. This includes specifying which designated AI tools are approved for enterprise use and providing points of contact for access and information.

Apply vendor AI controls

Through clear and open practices, IT leaders should assess areas where AI can come in contact with their organisation’s data, including through third-party partners and suppliers. Anytime an enterprise introduces AI, the security team should assess the relevant vendors’ built-in features to ascertain the AI’s access to employees and teams using the technology.

Protect against prompt injections

Finally, it’s important to implement strict input validation for user-provided prompts to AI. Context-aware filtering and output encoding can help prevent prompt manipulation. Cyber risk leaders should also regularly update and fine-tune large language models to improve the models’ understanding of malicious inputs and edge cases.
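As an illustration of the input-validation step described above, here is a minimal Python sketch of screening user-supplied prompts before they reach a model. The length limit, pattern list and function names are illustrative assumptions, not a specific Microsoft tool; a production system would use a broader, regularly updated set of checks.

```python
import html
import re

# Illustrative patterns that often signal a prompt-injection attempt.
# A real deployment would maintain a much larger, regularly updated list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

MAX_PROMPT_LENGTH = 2000  # assumed limit for this sketch


def validate_prompt(user_input: str) -> str:
    """Apply strict input validation to a user-provided prompt.

    Raises ValueError when the input looks like a manipulation attempt;
    otherwise returns a normalised, output-encoded version of the text.
    """
    if len(user_input) > MAX_PROMPT_LENGTH:
        raise ValueError("prompt exceeds maximum allowed length")

    lowered = user_input.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("prompt matches a known injection pattern")

    # Output-encode so the text cannot smuggle markup into downstream
    # components that render model context as HTML.
    return html.escape(user_input.strip())
```

In practice this kind of filter is only one layer: context-aware filtering on the model’s side and regular fine-tuning against malicious inputs, as noted above, complement it.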

As we look to secure the future, we must ensure that we balance preparing securely for AI and leveraging its benefits, because AI has the power to elevate human potential and solve some of our most serious challenges.

The writer is Microsoft Country Manager for Kenya.
