The Evolution of Enterprise AI: Navigating the Risks and Rewards of an Emerging Landscape
Fitz Nowlan
January 27, 2025

AI is moving at breakneck speed – can your enterprise keep up without breaking down?

As businesses race to adopt AI, the push for innovation is relentless. But while AI promises to accelerate development and unlock new opportunities, it also opens the door to unforeseen risks. Malicious models, supply chain vulnerabilities, and unpredictable behavior can derail even the most well-intentioned AI initiatives.

The Rise of AI and Open-Source Supply Chains

Open-source software has long been a cornerstone of enterprise development. Open-source consumption continues to grow at an astonishing rate – with trillions of download requests across major ecosystems annually. Open source now permeates nearly every piece of software used today.

However, the AI revolution has introduced a parallel supply chain – the AI supply chain – which intersects with open-source ecosystems. Developers increasingly rely on AI frameworks such as TensorFlow and scikit-learn, and on model hubs such as Hugging Face, to build cutting-edge applications. This convergence has sparked innovation, but it has also amplified the risks of unvetted dependencies and malicious code, not to mention erroneous or unintentional logic.
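
One simple guardrail, sketched below with the huggingface_hub client, is to pin every model download to a specific, reviewed revision rather than a mutable branch. This is a minimal illustration, not a complete vetting process; the repository name and commit hash are placeholders.

```python
# A minimal sketch: pin a Hugging Face model download to a reviewed commit.
# The repo_id, filename, and revision values are placeholders – substitute
# the artifact your team has actually vetted.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="org/vetted-model",        # placeholder repository
    filename="model.safetensors",      # prefer safetensors over pickle-based formats
    revision="<reviewed-commit-sha>",  # an exact commit, not a mutable branch like "main"
)
print(f"Downloaded pinned artifact to {model_path}")
```

Pinning to a commit means a later, possibly tampered-with upload under the same model name cannot silently replace the artifact your team reviewed.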

One area where this is becoming particularly evident is in QA and software testing. GenAI is already disrupting QA practices by automating repetitive tasks and enhancing test coverage. According to industry analysis, nearly half of large organizations believe that GenAI is impacting their operations today, with another third expecting significant effects within the next 18 months. This shift is driven by the need to keep pace with continuous delivery and the rapid development cycles demanded by agile and DevOps practices.

Emerging Risks in the AI Supply Chain

One of the most troubling trends is the rise of malicious software disguised as open-source packages or AI models. Over the past few years, there has been a staggering 400% year-over-year increase in malicious packages targeting developers. These aren’t simple vulnerabilities – they are designed to hijack development environments, exfiltrate sensitive data, or install backdoors.

As enterprises rush to adopt AI tools, they risk importing not only models but also threats lurking within the AI supply chain. High-profile supply chain incidents like SolarWinds have already demonstrated the far-reaching consequences of these vulnerabilities. Another major example of hostile behavior in open-source development was the 2024 backdoor planted in XZ Utils, a compression library shipped with most Linux distributions.

Furthermore, observability is becoming a crucial factor in managing AI-driven applications. Traditional software observability focuses on infrastructure and application performance, but AI observability introduces new complexities. Monitoring large language models (LLMs), transformers, and AI-generated code requires tracking how models behave over time and ensuring they align with business logic. Because model behavior can drift as data, prompts, and model versions change, outputs that were once reliable can degrade silently, making robust observability practices essential.
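
As an illustration of the pattern (not any particular vendor’s API), even a thin wrapper around model calls gives teams a baseline for spotting drift. The sketch below logs latency and output size for each call and flags responses that fall outside expected bounds; call_model is a stand-in for whatever client your stack uses.

```python
# A minimal sketch of LLM observability: wrap each model call, log timing
# and output characteristics, and flag anomalies for review. call_model is
# a placeholder for a real LLM client.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-observability")

MAX_LATENCY_S = 5.0       # assumed thresholds; tune to your workload
MAX_OUTPUT_CHARS = 4000

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM client call."""
    return "stub response"

def observed_call(prompt: str) -> str:
    start = time.monotonic()
    output = call_model(prompt)
    latency = time.monotonic() - start
    log.info(json.dumps({
        "latency_s": round(latency, 3),
        "prompt_chars": len(prompt),
        "output_chars": len(output),
    }))
    if latency > MAX_LATENCY_S or len(output) > MAX_OUTPUT_CHARS:
        log.warning("response outside expected bounds; routing for review")
    return output
```

Even this crude baseline produces the longitudinal record that drift detection and audits depend on.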

AI’s Expanding Role in Enterprise Software

AI’s rapid adoption is reflected in the numbers. Use of traditional machine learning models continues to climb, and LLM adoption has surged even faster. Enterprises are embedding AI across products and workflows, driven by open-source models such as LLaMA and Stable Diffusion.

This surge mirrors the broader industry trend – organizations feel the urgency to integrate AI features into their offerings. GenAI and machine learning are becoming essential components in products and services, reflecting widespread market adoption.

In QA, for example, GenAI is being used to automate test case creation, execute tests faster, and reduce the need for manual testing. By 2028, GenAI is expected to write 80% of software tests, driving improvements in usability and overall software quality. This not only accelerates development but also reduces the cost and impact of bugs in production.
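
A common integration pattern, sketched below with a placeholder LLM client since the exact API depends on your provider, is to feed a function’s source to a model, ask for pytest cases, and keep a human review step before anything lands in the suite.

```python
# A minimal sketch: ask an LLM to draft pytest cases for a function.
# llm_complete is a placeholder for a real provider call; generated tests
# should always be reviewed by a human before being committed.
import inspect

def slugify(text: str) -> str:
    """Example function under test."""
    return "-".join(text.lower().split())

def llm_complete(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real provider call.
    return ("def test_slugify_basic():\n"
            "    assert slugify('Hello World') == 'hello-world'\n")

def draft_tests(func) -> str:
    source = inspect.getsource(func)
    prompt = ("Write pytest test functions covering normal, edge, and error "
              "cases for the following Python function:\n\n" + source)
    return llm_complete(prompt)

print(draft_tests(slugify))
```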

The Unique Risk Categories of AI

While the benefits of AI are undeniable, enterprises must contend with distinct risks that accompany AI integration. There are seven primary risk areas that organizations must address:

  1. Security vulnerabilities – Ensuring AI tools and models behave as intended without exposing systems to attacks.
  2. Maliciously trained models – Preventing the use of models that might perform hidden, harmful actions when triggered.
  3. Data exfiltration – Avoiding unauthorized access to sensitive data by AI agents or models.
  4. Licensing issues – Managing the legal implications of using AI models under restrictive or proprietary licenses.
  5. Model quality – Ensuring the reliability and accuracy of AI outputs, especially for mission-critical tasks.
  6. Legal and copyright concerns – Addressing the uncertainty around the copyrightability of AI-generated content, as well as the implications of models having been trained on copyrighted material.
  7. Cultural and ethical considerations – Understanding how AI aligns with organizational values and user expectations.

Practical Steps for Managing AI Risk

The first step in managing AI risk is gaining visibility into where AI is being used across the SDLC. Enterprises must extend their supply chain monitoring practices to include AI models and components, treating them like any other third-party dependency.

Asking the question, “Where exactly are we using AI technologies and models?” can uncover hidden risks within development pipelines. By automating AI audits and integrating them into existing policies, organizations can mitigate vulnerabilities before they become costly incidents.
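
One lightweight starting point, sketched below, is scanning dependency manifests for packages on an internal AI watchlist so that they enter the same review process as any other third-party code. The watchlist here is illustrative, not exhaustive.

```python
# A minimal sketch: flag known AI/ML dependencies in requirements files so
# they can be routed into the same review process as other third-party code.
from pathlib import Path

AI_WATCHLIST = {"tensorflow", "torch", "transformers", "scikit-learn", "langchain"}

def scan_requirements(root: str = ".") -> dict[str, list[str]]:
    findings: dict[str, list[str]] = {}
    for req in Path(root).rglob("requirements*.txt"):
        for line in req.read_text().splitlines():
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name in AI_WATCHLIST:
                findings.setdefault(str(req), []).append(name)
    return findings

if __name__ == "__main__":
    for path, packages in scan_requirements().items():
        print(f"{path}: {', '.join(packages)}")
```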

The Role of Tooling and Automation

To stay ahead of evolving threats, enterprises should leverage existing SDLC tools to monitor and secure AI components. Automated auditing tools enable organizations to assess AI models, enforce licensing policies, and generate AI bills of materials (BOMs).

By applying automated controls, enterprises can prevent malicious AI models from entering development environments and ensure compliance with internal policies. Reusing existing tooling, rather than reinventing the wheel, allows organizations to move fast without breaking things. Additionally, these automated tools provide a valuable audit trail of how and when the performance and state of AI-based systems have changed over time.
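
The shape of such a BOM can be quite simple. The sketch below emits a minimal JSON inventory; the fields and the example entry are illustrative, and production tooling would typically emit a standard format such as CycloneDX.

```python
# A minimal sketch of an AI bill of materials: a JSON inventory of models in
# use, with license and integrity fields. Entries are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: str) -> str:
    """Content hash for verifying a model artifact's integrity."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_ai_bom(entries: list[dict]) -> str:
    return json.dumps({"ai_bom_version": "0.1", "models": entries}, indent=2)

example = [{
    "name": "org/vetted-model",       # placeholder model identifier
    "revision": "<commit-sha>",       # pinned source revision
    "license": "apache-2.0",
    "sha256": "<artifact-hash>",      # e.g. sha256_of("model.safetensors")
}]
print(build_ai_bom(example))
```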

Regulatory Landscape and Future Considerations

The regulatory environment surrounding AI is evolving rapidly. Governments and industry bodies are introducing new frameworks to govern AI usage, from the European Union’s AI Act to sector-specific guidelines. While regulations set minimum standards, enterprises are ultimately responsible for safeguarding their AI ecosystems.

Most organizations recognize that they cannot wait for regulators to catch up. Proactively establishing AI governance policies will not only ensure compliance but also build trust with customers and stakeholders.

Conclusion

AI is accelerating enterprise innovation, but it also introduces new risks – security vulnerabilities, unpredictable system behavior, and operational disruption. As the AI supply chain grows, organizations must address these complexities to avoid costly setbacks and protect their competitive edge.

With the right strategies, AI can transform software development – enhancing quality, speeding delivery, and driving resilience. By integrating AI governance, observability, and automated testing, enterprises can turn potential risks into opportunities for growth.

Ready to harness AI with confidence? Connect with SmartBear to explore how our enterprise-ready AI solutions can help you secure your development pipeline and stay ahead of the curve.
