11 May 2025
Thought leadership
Read time: 3 Min
19k

The Future of Enterprise AI Belongs to Open Source

By DIRK NEUMANN

Open source transforms artificial intelligence. Beyond cost savings and technical flexibility, a profound shift reshapes how organizations build, deploy, and govern AI systems. The April 2025 McKinsey report reveals that 75% of organizations plan to increase their use of open-source AI technologies, signaling a fundamental change in enterprise technology strategy.

This growing adoption represents more than a passing trend. It marks the emergence of a new paradigm where open-source and proprietary AI technologies combine strategically across the enterprise stack. Organizations increasingly recognize that the question isn't whether to choose open source or proprietary AI, but rather how to leverage both approaches for maximum strategic advantage.

Why Organizations Embrace Open Source Beyond Cost

While cost efficiency drives initial interest in open-source AI, deeper factors fuel sustained adoption. "The primary driver is control and adaptability," explains Dirk Neumann, CEO of Brisken. "Organizations want the freedom to fine-tune models for their unique use cases, deploy them securely on their own infrastructure, and avoid lock-in from closed vendors."

This control imperative grows more urgent as AI becomes mission-critical. Companies need assurance they can maintain operational continuity regardless of vendor pricing changes, service interruptions, or strategic pivots. Open-source models provide a foundation of stability and sovereignty that proprietary APIs cannot match.

Transparency also drives adoption as regulatory scrutiny intensifies. Open-source models allow organizations to inspect and audit how AI works, reducing legal and reputational risks. This inspectability creates a foundation for responsible AI governance that closed systems struggle to provide.

Perhaps most significantly, open-source AI acts as a powerful talent magnet. The McKinsey report notes experienced developers are 40% more likely to use open source than less experienced peers. "Senior developers don't just prefer open source, they expect it," Neumann observes. "They want to experiment, debug, and extend models without black-box constraints. Organizations adopting open-source AI aren't just saving money, they're building technical credibility that attracts top-tier engineers."

The Rise of Hybrid AI Ecosystems

Rather than an either/or proposition, the future belongs to hybrid approaches that combine open-source foundations with proprietary differentiation. "We're heading toward a true hybrid ecosystem, but the long-term center of gravity is shifting toward openness," Neumann predicts.

This shift follows a familiar pattern seen in previous technology transformations. "In the next 3-5 years, we'll see foundational models increasingly commoditized, whether open-source from day one like Mistral or Falcon, or de facto open due to leak, reverse-engineering, or competitive pressure. At that layer, open source will likely dominate."

However, value creation extends beyond base models. "What differentiates value isn't just the base model. It's how you orchestrate it, the data you feed it, the user experience, and how it integrates into workflows. That's where proprietary layers still matter."

This hybrid approach mirrors the evolution of other foundational technologies. "The playbook is similar to what we saw with Linux or Kubernetes. Open-core infrastructure with proprietary extensions, services, and SLAs."

Learning From Previous Open Source Transformations

Organizations developing AI strategies can draw valuable lessons from earlier open-source transformations. Neumann identifies three critical insights:

"First, openness wins platforms, not just minds. Both Linux and Kubernetes became dominant not because they were cheaper, but because they attracted the most builders. That network effect created the richest ecosystems, the best integrations, and the fastest innovation cycles. The same is happening in AI: the winning platform won't be the most closed or even the most performant. It'll be the one everyone builds on."

"Second, you don't need to open everything, just the right thing. Red Hat didn't own Linux. It owned support, tooling, and opinionated enterprise packaging. HashiCorp built business value around orchestration, not just code. Similarly, AI companies can open models, frameworks, or agent infrastructure while retaining value in enterprise-grade workflows, fine-tuning, or compliance layers."

"Third, community is strategy, not charity. Open source isn't about giving things away. It's about getting more than you could build alone. Contributions, extensions, bug fixes, community testing. It's compounding leverage. The smartest AI strategies today treat open source as a force multiplier for product-market fit and ecosystem growth."

The conclusion is clear: "You win in AI like you won in infrastructure: by owning the right slice of open."

The Talent Dimension as Competitive Advantage

With 81% of developers reporting that open-source experience is highly valued in their field, the talent dimension becomes a critical competitive factor. This preference is reshaping organizational power dynamics and technology decision-making.

"When your AI stack is open and composable, developers don't just execute. They choose," Neumann explains. "They pick the model, shape the agentic flow, plug in vector stores, fine-tune performance. That creative control moves decision-making closer to the builder layer, reducing reliance on top-down platform mandates."

This shift makes recruitment a strategic differentiator. "If your organization can't attract or empower AI-savvy developers, you'll fall behind. Today's top developers aren't asking 'What vendor are you using?' They're asking 'Can I contribute? Can I customize? Can I build on top of it?' Open source is both their resume and their sandbox."

Internal platform teams also gain influence as they become critical enablers of scale. "To keep up, they'll need to speak the language of OSS: opinionated defaults, plug-and-play modularity, and strong guardrails that don't get in the way of experimentation."

Perhaps most significantly, veto power flips. "In traditional enterprise IT, vendors had the upper hand. With open source, developers can prototype a better solution over a weekend. The proof of concept becomes the pitch. And if that proof outperforms a top-down vendor product, it wins."

The Future Enterprise AI Stack

As hybrid approaches mature, a clearer picture emerges of how enterprise AI stacks will evolve. "In five years, the ideal enterprise AI stack will be modular, hybrid, and highly composable. Open-source components at the core and proprietary layers wrapped around value, control, and compliance," Neumann predicts.

The open-source core will likely include foundation models (LLMs like Mistral, Falcon, and LLaMA), model orchestration and agent frameworks, data pipelines and vector databases for retrieval-augmented generation, and evaluation and safety tooling.

Proprietary layers will focus on areas of unique business value: enterprise context integration with systems of record, custom workflows and orchestration logic, security and compliance wrappers, and fine-tuned models trained on private data.

"The net effect? A stack where open drives innovation, and proprietary ensures defensibility. That's the future-proof hybrid," Neumann concludes.

From Multi Cloud to Multi Model

The parallels between open-source AI adoption and cloud computing evolution provide a useful framework for understanding future developments. Just as organizations moved from single-cloud lock-in to multi-cloud strategies, AI is following a similar trajectory.

"In the early cloud era, enterprises locked into AWS or Azure and built around their constraints. Eventually, they realized the risk and demanded multi-cloud to regain control," Neumann explains. "In AI, we're watching the same story unfold. Early adopters rushed to closed APIs like OpenAI. Now, they're asking: 'What if pricing changes? What if the API goes down? What if I need full control?'"

The AI equivalent of multi-cloud is emerging as "multi-model, multi-agent, multi-surface AI" with open tooling as the connective tissue. This approach might include a summarizer using Mistral, a translator using a closed API, a pricing agent using a fine-tuned open LLaMA model hosted locally, all coordinated by an open agentic runtime.

This architectural evolution follows other familiar patterns: from monoliths to modularity (microservices to agentic decomposition), from centralization to edge and sovereignty (more on-prem LLMs and privacy-first deployments), and from consumers to creators (building custom copilots and coworkers rather than just using ChatGPT).

Implementation Challenges and Evolution

Despite growing adoption, organizations face significant challenges when implementing open-source AI. Neumann identifies five key hurdles based on Brisken's enterprise customer experience:

"First, integration complexity. Open models don't come with batteries included. You need to wire up context management, access controls, observability, and user experience. For a typical enterprise IT team, this is like assembling IKEA furniture with missing parts and no manual."

"Second, lack of support and SLAs. Unlike API-first vendors, open-source AI doesn't come with 24/7 support. When something breaks, who do you call?"

"Third, governance uncertainty. Legal, security, and compliance teams still don't know how to audit open weights, validate training data lineage, or assess license risk. Without standards, they slow everything down."

"Fourth, MLOps and deployment maturity. Enterprises often underestimate the engineering required to operationalize an open model: containerizing it, scaling inference, updating prompts, managing prompt drift."

"Fifth, skills and culture gaps. Even with OSS tools available, many teams lack the in-house talent to deploy them safely and strategically."

However, these challenges represent growing pains rather than permanent barriers. "These are exactly the kind of friction that open ecosystems historically smooth out over time. We're just in the messy middle," Neumann observes.

The ecosystem is already evolving to address these issues through open-source blueprints for common use cases, service wrappers around open components, emerging governance frameworks and audits, AI deployment pipelines modeled on DevOps practices, and training and co-piloting programs for enterprise teams.

Managing Risk in Open Source AI

The McKinsey report highlights significant concerns about open-source AI risks, particularly cybersecurity (62%), regulatory compliance (54%), and intellectual property issues (50%). However, these concerns represent maturation rather than fundamental barriers.

"Every transformative tech wave starts with risk confusion, then evolves toward structured management. We're now in that shift with open-source AI," Neumann explains.

For cybersecurity risks, organizations are developing architectural approaches: model signing and provenance tracking, isolating LLMs in VPC-secured inference clusters, implementing prompt firewalls and behavior filters, and adopting open-source red teaming tools. "Security shifts from reactive to architectural. You don't just defend AI, you design for secure-by-default AI."

Regulatory compliance challenges are being addressed through layer-specific governance (separating model layer, data layer, and output layer), LLMOps platforms with full audit trails, and open-source explainability tools. "At Brisken, we're embedding agent-level explainability so every decision or generation is traceable. That's essential in finance and enterprise AI."

For IP and licensing concerns, organizations are building risk matrices for AI similar to those used for software, choosing models with clear licenses like Apache 2.0 or MIT, and increasing scrutiny on training data transparency. Long-term, "clean room" model training pipelines using verified public data, licensed corpora, and synthetic augmentation will become standard practice.

"The future of open-source AI isn't about being risk-free. It's about being risk-accountable," Neumann concludes.

Timeline to Performance Parity

The McKinsey report notes that open-source models are closing performance gaps with proprietary solutions. Based on current trends, we can project when open-source models will achieve parity or even surpass proprietary models in enterprise applications.

For immediate term (2024-2025), open-source models are already sufficient for text classification, summarization, question answering, and retrieval-augmented generation when paired with domain-tuned prompts and smart orchestration. "Enterprises using OnePilot are deploying these models in finance, procurement, and treasury workflows right now," Neumann notes.

In the short term (2025-2026), open source will close remaining gaps in conversational AI, multi-turn agents, and structured reasoning through more public fine-tunes, reinforcement learning loops, and agentic memory layers. "Expect to see open agentic frameworks rivaling proprietary copilots, especially in regulated industries where transparency and sovereignty matter."

By mid-term (2026-2028), open-source models will likely match or exceed proprietary ones in multimodal, multilingual, and high-accuracy tasks. "We'll see open multimodal stacks trained on curated, community-scale data."

Long-term advantages for open-source models include superior adaptability (rapid fine-tuning to niche contexts), inspectability and safety (enterprises will increasingly trust what they can audit), and innovation velocity (the community now replicates frontier research in approximately 6-12 months).

"By 2026, open models will power the majority of enterprise agent stacks, especially where sovereignty, compliance, and customization matter more than squeezing out a few extra benchmark points," Neumann predicts. "Proprietary models will still lead at the absolute frontier, but open models will own the workflows that run the business."

Brisken OnePilot as Hybrid AI in Practice

Brisken's OnePilot framework exemplifies the hybrid approach that balances open-source foundations with proprietary differentiation. "OnePilot was built from day one as a modular and model-agnostic digital coworker framework, so leveraging both proprietary and open-source components is in its DNA," Neumann explains.

The framework currently integrates open source in several ways: using open-source orchestration frameworks as a foundation with a proprietary agentic architecture layered on top, supporting open-source LLMs like LLaMA and Mistral alongside commercial APIs, and building on open infrastructure for retrieval and memory while enriching it with proprietary skillsets and safety layers.

Proprietary components focus on domain-specific value: orchestration logic for SAP-integrated workflows and financial operations, enterprise context management bridging systems like SAP and Bloomberg, and security guardrails including prompt firewalls and role-aware logic.

Looking ahead, Brisken plans to open-source portions of OnePilot's agent framework, particularly the runtime and skill interface, while maintaining proprietary value in enterprise extensions and domain-specific capabilities. "In that future, Brisken becomes not just a vendor but a steward of an open AI coworker ecosystem, providing the hardened enterprise extensions, compliance wrappers, and finance-specific skills on top."

This approach delivers what enterprises increasingly demand: "Control, transparency, and compounding value, powered by both open innovation and proprietary insight."

The Inevitable Open Future

The trajectory of open-source AI parallels previous technology transformations, suggesting an inevitable shift toward openness as the foundation of enterprise AI stacks. Just as Linux eventually dominated infrastructure and open-source databases became industry standards, AI is following a similar path.

This doesn't mean proprietary technologies will disappear. Rather, they will evolve to focus on areas where they add unique value: specialized capabilities, enterprise-grade support, compliance frameworks, and industry-specific solutions.

Organizations that recognize this shift early gain significant advantages: greater control over their AI destiny, enhanced ability to attract and retain technical talent, reduced vendor lock-in risk, and more flexible adaptation to regulatory requirements.

The future belongs not to those who build closed AI empires, but to those who strategically combine open foundations with proprietary differentiation. As Neumann succinctly puts it: "You win in AI like you won in infrastructure: by owning the right slice of open."

For enterprises navigating this transformation, the message is clear: open source isn't just an alternative approach to AI. It's becoming the essential foundation upon which competitive AI strategies are built. The question isn't whether to incorporate open-source AI, but how to leverage it most effectively while managing the associated challenges.

The democratization of AI through open source represents not just a technical evolution, but a fundamental shift in how organizations build, deploy, and govern intelligent systems. Those who embrace this shift strategically will find themselves well-positioned for the multi-model, multi-agent AI future that lies ahead.

media-contact-avatar
CONTACT DETAILS

Email for press purposes only

imt@hitech.com

NEWSLETTER

Receive news by email

Press release
Company updates
Thought leadership

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply

You have successfully subscribed to the news!

Something went wrong!