{"id":21198,"date":"2024-07-30T15:10:39","date_gmt":"2024-07-30T22:10:39","guid":{"rendered":"https:\/\/webdev.securin.xyz\/?p=21198"},"modified":"2024-10-22T09:49:35","modified_gmt":"2024-10-22T16:49:35","slug":"how-evolving-ai-regulations-impact-cybersecurity","status":"publish","type":"post","link":"https:\/\/webdev.securin.xyz\/articles\/how-evolving-ai-regulations-impact-cybersecurity\/","title":{"rendered":"How Evolving AI Regulations Impact Cybersecurity"},"content":{"rendered":"\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

This article by Ram Movva and Aviral Verma was featured in InfoWorld.<\/a><\/em><\/strong><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t

\n\t\t\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

While their business and tech colleagues are busy experimenting and developing new applications, cybersecurity leaders are looking for ways to anticipate and counter new, AI-driven threats.<\/p>

It\u2019s always been clear that AI impacts cybersecurity, but it\u2019s a two-way street. Where AI is increasingly being used to predict and mitigate attacks, these applications are themselves vulnerable. The same automation, scale, and speed everyone\u2019s excited about are also available to cybercriminals and threat actors. Although far from mainstream yet, malicious use of AI has been growing. From generative adversarial networks to massive botnets and automated DDoS attacks, the potential is there for a new breed of cyberattack that can adapt and learn to evade detection and mitigation.<\/p>

In this environment, how can we defend AI systems from attack? What forms will offensive AI take? What will the threat actors\u2019 AI models look like? Can we pentest AI\u2014when should we start and why? As businesses and governments expand their AI pipelines, how will we protect the massive volumes of data they depend on?<\/p><\/div><\/div><\/div>

It\u2019s questions like these that have seen both the US government and the European Union placing cybersecurity front and center as each seeks to develop guidance, rules, and regulations to identify and mitigate a new risk landscape. Not for the first time, there\u2019s a marked difference in approach, but that\u2019s not to say there isn\u2019t overlap.<\/p>

Let\u2019s take a brief look at what\u2019s involved, before moving on to consider what it all means for cybersecurity leaders and CISOs.<\/p><\/div><\/div><\/div>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t

\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t

US AI Regulatory Approach \u2013 An Overview<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

Executive Order<\/a> aside, the United States\u2019 decentralized approach to AI regulation is underlined by states like\u00a0California developing<\/a> their own legal guidelines. As the home of Silicon Valley, California\u2019s decisions are likely to heavily influence how tech companies develop and implement AI, all the way to the data sets used to train applications. While this will absolutely influence everyone involved in developing new technologies and applications, from a purely CISO or cybersecurity leader perspective, it\u2019s important to note that, while the US landscape emphasizes innovation and self-regulation, the overarching approach is risk-based.<\/p><\/div><\/div><\/div>

The United States\u2019 regulatory landscape emphasizes innovation while also addressing potential risks associated with AI technologies. Regulations focus on promoting responsible AI development and deployment, with an emphasis on industry self-regulation and voluntary compliance.<\/p>

For CISOs and other cybersecurity leaders, it\u2019s important to note that the Executive Order instructs the National Institute of Standards and Technology (NIST) to\u00a0develop standards<\/a>\u00a0for red team testing of AI systems. There\u2019s also a call for \u201cthe most powerful AI systems\u201d to be obliged to undergo penetration testing and share the results with government.<\/p><\/div><\/div><\/div>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t

\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t

The EU\u2019s AI Act \u2013 An Overview<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

The European Union\u2019s more precautionary approach bakes cybersecurity and data privacy in from the get-go, with mandated standards and enforcement mechanisms. Like other EU laws, the AI Act<\/a> is principle-based: The onus is on organizations to prove compliance through documentation supporting their practices.<\/p><\/div><\/div><\/div>

For CISOs and other cybersecurity leaders, Article 9.1<\/a> has garnered a lot of attention. It states that:<\/p>

High-risk AI systems shall be designed and developed following the principle of\u00a0security by design and by default<\/em>. In light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their life cycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.<\/p><\/blockquote>

At the most fundamental level, Article 9.1 means that cybersecurity leaders at critical infrastructure and other high-risk organizations will need to conduct AI risk assessments and adhere to cybersecurity standards. Article 15 of the Act covers cybersecurity measures that could be taken to protect, mitigate, and control attacks, including ones that attempt to manipulate training data sets (\u201cdata poisoning\u201d) or models. For CISOs, cybersecurity leaders, and AI developers alike, this means that anyone building a high-risk system will have to take cybersecurity implications into account from day one.<\/p><\/div><\/div><\/div>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t

\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t

EU AI Act VS. US AI Regulatory Approach \u2013 Key Differences<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t

What AI Regulations Mean for CISOs and Other Cybersecurity Leaders<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

Despite the contrasting approaches, both the EU and US advocate for a risk-based approach. And, as we\u2019ve seen with GDPR, there is plenty of scope for alignment as we edge towards collaboration and consensus on global standards.<\/p><\/div><\/div><\/div>

From a cybersecurity leader\u2019s perspective, it\u2019s clear that regulations and standards for AI are in the early levels of maturity and will almost certainly evolve as we learn more about the technologies and applications. As both the US and EU regulatory approaches underline, cybersecurity and governance regulations are far more mature, not least because the cybersecurity community has already put considerable resources, expertise, and effort into building awareness and knowledge.<\/p>

The overlap and interdependency between AI and cybersecurity have meant that cybersecurity leaders have been more keenly aware of emerging consequences. After all, many have been using AI and machine learning for malware detection and mitigation, malicious IP blocking, and threat classification. For now, CISOs will be tasked with developing comprehensive AI strategies to ensure privacy, security, and compliance across the business, including steps such as:<\/p>