Bringing in the BoM Squad, Part 2: AI/ML Libraries and the Vulnerabilities Within

One of the most pressing concerns in AI security is the presence of vulnerabilities within AI/ML libraries. These libraries are the building blocks for developing sophisticated AI models and applications, but they can harbor critical security flaws that, if exploited, could have severe consequences.

Such vulnerabilities create pathways for attackers to compromise AI systems. Understanding and managing them is essential for maintaining a robust AI security posture.

We combined our AIBoM research with Securin Vulnerability Intelligence to bring a comprehensive, proactive approach to identifying, analyzing and mitigating vulnerabilities in AI/ML libraries, enhancing the security posture and resilience of AI-driven systems. Here’s how we did it.

A Closer Inspection of AI/ML Libraries

In our previous blog, we detailed the generation and construction of an AI Bill of Materials (AIBoM) across ~500K models in the Hugging Face repository. The programming libraries and packages that form the foundation of AI and ML development are a crucial element of our AIBoMs, providing essential tools and functionalities for building, training and deploying intelligent models and applications. We identified 3,000+ libraries and ranked them by frequency, noting expected guests such as:

  • transformers: 295,242 models
  • pytorch: 175,423 models
  • safetensors: 146,254 models
  • tensorboard: 90,142 models
  • diffusers: 25,049 models
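The frequency ranking above can be sketched as a simple tally over model metadata. This is a hedged illustration, not our production pipeline: the `tags` field mirrors the style of Hugging Face model card tags, and the sample records are placeholders rather than real repository data.

```python
from collections import Counter

# Illustrative model records -- in practice these would come from the
# Hugging Face repository metadata gathered during AIBoM generation.
models = [
    {"id": "org/model-a", "tags": ["transformers", "pytorch", "safetensors"]},
    {"id": "org/model-b", "tags": ["transformers", "tensorboard"]},
    {"id": "org/model-c", "tags": ["diffusers", "pytorch"]},
]

def rank_libraries(models):
    """Count how many models reference each library tag."""
    counts = Counter()
    for model in models:
        counts.update(set(model["tags"]))  # de-duplicate within one model
    return counts.most_common()

print(rank_libraries(models))  # library tags ranked by model count
```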

Apples & Oranges: Categorizing AI/ML Libraries

Categorizing AI/ML libraries helps streamline the selection process based on specific needs and functionalities. We could clearly see the necessity of differentiating Data Manipulation and Analysis libraries like Pandas and NumPy vs. Deep Learning frameworks such as TensorFlow and PyTorch. Our approach therefore categorized the libraries into 14 buckets that enhance efficiency, clarity and specialization in AI and ML development tasks, ensuring each library is correctly placed within the supply chain:

  • Data Manipulation and Analysis – 106
  • Machine Learning – 93
  • Utilities – 73
  • Deep Learning – 66
  • Natural Language Processing (NLP) – 58
  • Data Visualization – 53
  • Model Deployment and Serving – 43
  • Computer Vision – 41
  • Data Collection and Web Scraping – 39
  • Unknown – 39
  • Reinforcement Learning – 29
  • Data Storage and Databases – 27
  • Explainability and Interpretability – 18
  • Optimization and Hyperparameter Tuning – 12
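The bucketing step can be sketched as a lookup table from library name to category, with "Unknown" as the fallback bucket. The mapping below is a small hypothetical excerpt for illustration, not the full table behind the counts above.

```python
# Excerpt of a hand-maintained library -> category mapping.
CATEGORY_MAP = {
    "pandas": "Data Manipulation and Analysis",
    "numpy": "Data Manipulation and Analysis",
    "tensorflow": "Deep Learning",
    "pytorch": "Deep Learning",
    "matplotlib": "Data Visualization",
}

def categorize(library: str) -> str:
    """Return the category bucket for a library, defaulting to Unknown."""
    return CATEGORY_MAP.get(library.lower(), "Unknown")

print(categorize("NumPy"))      # Data Manipulation and Analysis
print(categorize("fancy-lib"))  # Unknown
```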

Navigating Critical Risks in the Supply Chain

On to the forefront of supply chain security:  our exploration navigates the landscape of potential risks posed by vulnerabilities within these critical frameworks. By uncovering vulnerabilities and their implications, we illuminate the path toward fortifying AI systems with robust defenses and proactive security measures.

We analyzed the Top 300 Python AI/ML libraries, leveraging Securin’s Vulnerability Intelligence to map 838 known CVEs across their unique package versions.
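Securin’s Vulnerability Intelligence is proprietary, but the general shape of mapping a package version to known CVEs can be sketched against a public source such as the OSV.dev API. The helper names below are our own, and the live query requires network access; the demonstration at the end runs offline on a trimmed response shape.

```python
import json
from urllib import request

def query_osv(package: str, version: str) -> dict:
    """Ask OSV.dev which advisories affect a given PyPI package version."""
    payload = json.dumps({
        "package": {"name": package, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def extract_cves(osv_response: dict) -> set:
    """Collect CVE IDs from the aliases of each returned advisory."""
    return {
        alias
        for vuln in osv_response.get("vulns", [])
        for alias in vuln.get("aliases", [])
        if alias.startswith("CVE-")
    }

# Offline demonstration on a trimmed, illustrative response:
sample = {"vulns": [{"aliases": ["GHSA-xxxx", "CVE-2021-0001"]}]}
print(extract_cves(sample))  # {'CVE-2021-0001'}
```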

With 426 CVEs, TensorFlow takes the first spot among AI/ML Python libraries ranked by number of direct vulnerabilities. It’s interesting to note the mix of top affected libraries, with Deep Learning packages such as TensorFlow, Data Manipulation and Analysis libraries like NumPy, and utilities like Django making up the Top 10.

The threats associated with the vulnerabilities are concerning, with the number of weaponized CVEs (122) and the chatter around them in deep/dark web forums (353) indicating a growing interest from cybercriminals in exploiting them.

One of these, CVE-2023-4863, a heap buffer overflow in libwebp that has been exploited in the wild via Google Chrome, also impacts the Pillow library.

Let’s go another layer deeper, drilling down to the weaknesses that foster vulnerabilities within AI/ML libraries. Twelve of the MITRE Top 25 Weaknesses also appear among the top 25 weaknesses across AI/ML library CVEs. Some standouts are:

  • CWE-369: Divide By Zero – 51 CVEs
  • CWE-824: Access of Uninitialized Pointer – 14 CVEs
  • CWE-1333: Inefficient Regular Expression Complexity – 13 CVEs
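The overlap check between the two lists reduces to a set intersection. The entries below are illustrative excerpts, not the complete Top 25 lists, so the intersection here is smaller than the twelve entries found in the full dataset.

```python
# Illustrative excerpts of each weakness list -- not the complete Top 25s.
MITRE_TOP_25 = {"CWE-79", "CWE-787", "CWE-89", "CWE-476", "CWE-20"}
AI_ML_TOP_25 = {"CWE-369", "CWE-824", "CWE-1333", "CWE-476"}

# Weaknesses common to both lists; across the full lists this
# intersection contains twelve entries.
overlap = MITRE_TOP_25 & AI_ML_TOP_25
print(sorted(overlap))  # ['CWE-476']
```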

These weaknesses are rare occurrences in the larger CVE dataset but become far more prominent within AI/ML frameworks. You won’t see a software engineer leave behind a “Divide by Zero” bug! That’s why data scientists and engineers are so different!

Operationalizing within the AI/ML Supply Chain

So now we have the models, their AIBoMs, and visibility across the AI/ML libraries by analyzing each unique deployment. For example, consider the mistral-inference repository used to run the Mistral 7B, 8x7B and 8x22B models. The repository’s Poetry configuration specifies minimum package versions such as numpy (>=1.21.6) and tensorflow (>=2.11.0).

Considering a deployment of mistralai/Mistral-7B-Instruct-v0.2 with these minimum package requirements, we can identify the vulnerabilities it inherits through its AIBoM (note that the package versions are under the control of the developers, not the LLM provider).
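A minimal sketch of that check: compare each component’s minimum allowed version against the first fixed version for each known CVE, and flag any package whose floor still admits vulnerable releases. The CVE identifiers and fix versions below are hypothetical placeholders; only the numpy and tensorflow minimums come from the repository’s Poetry configuration.

```python
def vtuple(version: str) -> tuple:
    """Parse a simple dotted version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Minimum versions from the deployment's dependency specification.
deployment = {"numpy": "1.21.6", "tensorflow": "2.11.0"}

# Hypothetical CVE -> (package, first fixed version) table.
fixes = {
    "CVE-EXAMPLE-1": ("numpy", "1.22.0"),
    "CVE-EXAMPLE-2": ("tensorflow", "2.9.0"),
}

def flag_vulnerabilities(deployment, fixes):
    """Flag CVEs whose fix version exceeds the deployed minimum."""
    findings = []
    for cve, (pkg, fixed_in) in fixes.items():
        if pkg in deployment and vtuple(deployment[pkg]) < vtuple(fixed_in):
            findings.append((cve, pkg))
    return findings

# numpy >=1.21.6 still admits releases below the 1.22.0 fix, so it is
# flagged; tensorflow's 2.11.0 floor is already past the 2.9.0 fix.
print(flag_vulnerabilities(deployment, fixes))  # [('CVE-EXAMPLE-1', 'numpy')]
```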

What’s next? We’re glad you asked…

Frameworks, Vulnerabilities and AIBoM Components

Part three of this series will explore the AI Attack Surface: MITRE ATLAS, OWASP Top 10 for LLMs & ML and how it all fits over the AI Bill of Materials. Our research will provide a comprehensive AI Attack Surface overview, neatly tying together the frameworks, vulnerabilities and AIBoM components.

Part 3 Coming Soon:

The AI Attack Surface – MITRE ATLAS, OWASP Top 10 for LLMs & ML, and how it all fits over the AI Bill of Materials. Our research provides a comprehensive AI Attack Surface overview, neatly tying together the frameworks, vulnerabilities and AIBoM components.

Go Back and Read Part 1:

Defining and Generating AI Bill of Materials – AI tools are revolutionizing the tech supply chain. AIBoMs will help us to evolve securely and responsibly. Here’s what you need to know.

Learn more about AI and ML from our experts to strengthen your cybersecurity posture.
