By Kelly Butler,
Cyber Practice Leader, Marsh Specialty
05/03/2024 · 3-minute read
In a complex and evolving global risk landscape, cybercrime and cyber insecurity have emerged as urgent concerns for business leaders worldwide — both in the immediate term and over the long term.
According to the 2024 Global Risks Report, created by the World Economic Forum (WEF) in collaboration with Marsh McLennan and other partners, business leaders ranked cyber insecurity as the fourth most significant risk over the next two years, and eighth over the next decade. For risk managers in the UK, high-profile ransomware attacks have pushed the conversation around cybercrime further into the spotlight.
Groundbreaking advancements like artificial intelligence (AI) bring new threats — especially as the rapid acceleration and integration of these technologies can expose businesses to unforeseen digital vulnerabilities. Equally, innovations relating to AI could prove essential to tackling cyber threats. The question is: how can you strike a balance between embracing the best aspects of innovation and protecting your organisation against cyber risks, while becoming more resilient?
The advent of AI has revealed potentially transformative opportunities for companies to increase efficiency, improve decision-making, and strengthen their cybersecurity strategy. One significant opportunity is enabling computers to detect and filter phishing scams out of email, mitigating the risk of malware attacks.
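To make the phishing-filtering idea concrete, here is a minimal sketch of signal-based email screening. It is purely illustrative: production filters use trained models rather than keyword lists, and the phrases and threshold below are assumptions, not a recommended configuration.

```python
# Illustrative signals only; a real filter would rely on a trained model
# scoring many features, not a hand-written phrase list.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent",
    "click here",
    "password",
    "suspended",
    "wire transfer",
]

def phishing_score(subject: str, body: str) -> int:
    """Count how many suspicious phrases appear in the subject or body."""
    text = f"{subject} {body}".lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

def is_suspicious(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag a message when it matches at least `threshold` signals."""
    return phishing_score(subject, body) >= threshold
```

Even in this toy form, the sketch shows why human oversight matters: the choice of signals and the flagging threshold are judgment calls that determine both catch rate and false alarms.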
That said, AI can also have a harmful impact on a company’s cybersecurity — from spreading misinformation to exploiting vulnerabilities in digital networks. While companies continue to harness the benefits of AI, it’s important that employees are educated about how to use this technology effectively and are trained to understand the inherent risks that impact cybersecurity.
As companies continue to embrace new technologies while strengthening their approach to cyber risk, it’s critical to consider how AI and other tools can support human decision-making and problem-solving, rather than replace human judgment and expertise.
With deep learning models, for example, if the rules set for the AI algorithms or the data sets they learn from are imprecise, the answers will be as well. Humans are needed to ensure the quality, diversity, and scale of the data provided is sufficient, and to validate the outputs. Humans can also address potential biases and ethical considerations, ultimately enhancing the model's performance and trustworthiness.
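The data-quality review described above can itself be partly automated. The sketch below shows two basic checks a human reviewer might run before training — class balance and missing values. The function names and thresholds are illustrative assumptions, not industry standards.

```python
from collections import Counter

def check_label_balance(labels, max_ratio=3.0):
    """Return True when no class outnumbers another by more than max_ratio.

    Heavily skewed labels are one common source of biased model outputs.
    """
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values()) <= max_ratio

def missing_fraction(values):
    """Fraction of records with no value recorded, a basic quality signal."""
    return sum(v is None for v in values) / len(values)
```

Checks like these do not replace human judgment; they surface the questions — is the data representative, is it complete — that a reviewer still has to answer.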
Cyber risk is a company-wide, strategic business issue that impacts every corner of a business. Now, more than ever, proactive planning is critical to responding to and recovering from a cyberattack. However, the effectiveness and efficiency of an incident response plan relies on the preparedness and engagement of the people involved.
For this reason, it’s essential that companies have robust principles around implementing, using, and updating technologies. This includes enforcing proper testing, training, monitoring, and auditing practices down the chain of command, beginning with the C-suite and filtering down to every employee. This can also help ensure compliance with local and global regulations relating to how technology is developed and used. Regular testing of procedures and reassessment of processes is an essential part of this control.
Innovation creates opportunity — both in the way companies implement new technologies and re-evaluate their cyber risk management.
As business leaders worldwide made clear in the 2024 Global Risks Report, cyber risks are not going anywhere, and threat actors are only finding more sophisticated ways to evade business cybersecurity measures. To enhance preparedness, organisations must consider both the benefits and risks associated with new tools like AI, ensure people are well equipped to use and validate these tools, and create alignment on a unified response across the company.