Artificial intelligence is making its way into IT service management. Combined with the parallel rise of open source in the same field, this raises crucial questions for IT departments and users of ITSM tools. As companies look to leverage AI to improve their support, incident management and maintenance processes, IT managers face a new dilemma: proprietary or open source?
AI in Open Source: choosing a model
- Proprietary AI on an open source tool
Many ITSM players choose to integrate proprietary AI bricks into existing open source tools. The undeniable advantage of this combination is rapid implementation, often through ready-to-use APIs, with a high level of integration. The vendor provides support, updates and sometimes even training of the model on anonymized business data.
But this approach creates a structural dependency on the supplier. It often comes with recurring costs that can be difficult to anticipate. The model itself is a black box that is rarely audited, which can be a problem in environments subject to compliance or traceability obligations.
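In practice, this kind of API integration often boils down to serializing ticket data, ideally anonymized first, and posting it to the vendor’s endpoint. The sketch below is purely illustrative: the field names, labels and model identifier are invented for the example, not a real vendor API.

```python
import json
import re

def anonymize(text: str) -> str:
    """Strip obvious personal data before it leaves the ITSM tool."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<email>", text)  # email addresses
    text = re.sub(r"\b\d{2,}\b", "<number>", text)              # long numbers (IDs, times)
    return text

def build_classification_request(ticket: dict) -> dict:
    # Payload shape and model name are hypothetical, for illustration only.
    return {
        "model": "vendor-itsm-classifier",
        "input": anonymize(ticket["description"]),
        "labels": ["incident", "service request", "change"],
    }

payload = build_classification_request(
    {"description": "VPN down for jane.doe@example.com since 09:15"}
)
print(json.dumps(payload, indent=2))
```

Even in this simple form, the trade-off is visible: the anonymization step stays under the company’s control, but everything after the POST happens inside the vendor’s black box.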
- Self-hosted open-source AI
At the other end of the spectrum, some companies choose to deploy their own open source models, locally, on their own infrastructure. This solution provides the company with maximum control over its data, its architecture and its processing. It can adapt the models to its business processes, train them with its own corpora, and guarantee a high level of confidentiality.
However, autonomy comes at a cost. It requires advanced technical skills, both for integration and for model maintenance. It also requires appropriate computing infrastructure to manage upgrades, monitor performance and secure the entire AI pipeline.
- Managed open-source AI (third-party cloud)
To ease these technical constraints, some companies are turning to open source models accessible via managed cloud platforms. This option makes it possible to benefit from the richness of open source while delegating operational management. The models are quickly accessible, with regular updates and immediate scalability.
However, this solution means partly giving up control. Data is transferred to third-party environments, sometimes subject to non-European regulations. What’s more, pricing is often consumption-based, which can mean significant variations in operating costs.
- Hybrid approach
More and more organizations are adopting a hybrid strategy, combining proprietary and open source models, self-hosted or managed, depending on use cases and business requirements. This approach combines flexibility, security and performance.
But hybridization adds a further layer of complexity. It requires clear governance, a precise definition of responsibilities, and an architecture that can absorb diversity without creating silos or inconsistencies.
The real questions to ask when making a choice
1. What are the real benefits of open source for AI?
The main advantage of open source is transparency. Unlike proprietary models, which are often opaque, open models can be audited, understood and modified. This guarantees better traceability, essential for meeting strict regulatory or industry requirements.
Community innovation is another powerful driver. Ecosystems such as Hugging Face and PyTorch are boosting research and experimentation, while making increasingly high-performance models accessible. The openness of these tools makes it possible to create tailor-made AI to suit the realities of ITSM businesses, rather than imposing generic solutions.
Also, initiatives such as the Model Context Protocol (MCP) are moving towards standardizing exchanges between IT tools and AI components. This could simplify integration considerably and foster the emergence of a common technological foundation.
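For context, MCP builds its messages on JSON-RPC 2.0, and `tools/call` is the method the protocol defines for invoking a tool. The sketch below shows what a tool invocation from an AI assistant to an ITSM system could look like; the tool name and its arguments are invented for illustration.

```python
import json

# An MCP-style JSON-RPC 2.0 request. "tools/call" is defined by the
# protocol; "create_ticket" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",  # a hypothetical ITSM tool exposed over MCP
        "arguments": {"title": "Printer offline in building B", "priority": "low"},
    },
}
print(json.dumps(request, indent=2))
```

The appeal of such a standard is that the same message shape works whether the model behind the assistant is proprietary, self-hosted or managed.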
2. Can open source guarantee quality AI models compared to the cloud giants?
The cloud giants have a considerable lead in terms of technology and computing power. Their models are robust, trained on a large scale, and continuously improved. Open source models are evolving rapidly, but suffer from a lack of testing and production resources in comparison.
The choice between a large generalist model and a small specialized model comes down to context. In ITSM, a lightweight model trained on internal tickets may be more appropriate than a very broad model with little contextualization. The decision involves a trade-off between scale, precision and adaptability.
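To make the “small specialized model” idea concrete, here is a deliberately tiny sketch: a naive Bayes classifier trained on a handful of invented internal tickets. The training data, labels and vocabulary are toy assumptions; a real deployment would use a proper ML library and far more ticket history.

```python
import math
from collections import Counter, defaultdict

# Toy "internal tickets" corpus: six made-up one-line tickets, three labels.
TRAINING_TICKETS = [
    ("cannot connect to vpn from home", "network"),
    ("wifi keeps dropping in meeting room", "network"),
    ("need a new license for office suite", "software"),
    ("application crashes on startup", "software"),
    ("laptop screen is flickering", "hardware"),
    ("keyboard keys not responding", "hardware"),
]

def train(tickets):
    """Count word frequencies per label and label frequencies overall."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in tickets:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log prior + log likelihood."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        # Add-one smoothing so unseen words do not zero out a label.
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING_TICKETS)
print(classify("vpn connection fails", word_counts, label_counts))
```

Crude as it is, the sketch illustrates the point: a model this small runs anywhere, its behaviour is fully inspectable, and its accuracy depends entirely on the quality of the internal corpus it is trained on.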
One of open source’s advantages is mutualization. Shared datasets, collaborative fine-tuning tools and open-weights models enable the open source ecosystem to advance collectively, without depending on any single player.
3. Is open source a guarantee of technological sovereignty in AI?
Sovereignty is a core issue for CIOs today. As the market is dominated by a few platforms, open source seems to offer an alternative solution. It enables control over models, data and infrastructure.
But that sovereignty is only real if the company has the skills and resources to exploit those tools to the full. Control of the entire value chain, from training to deployment, is an absolute must.
Regulations such as the EU AI Act will reinforce this requirement for control. They will demand greater documentation, transparency and accountability in the use of AI. Open source models will have to comply if they are to remain credible in mission-critical environments.
4. Finding the right balance between a sustainable economic model and community logic
Open source does not mean cost-free. AI, in particular, requires computing power, energy, storage and networks. These resources have a cost, which is often invisible at first glance, but very real.
The infrastructure is funded in part by private players. Some major cloud providers actively support open source projects while integrating them into the products they market. Harnessing open source in this way can be viewed as a threat to the ecosystem’s neutrality, or as an opportunity for large-scale distribution.
Models like Llama 2 illustrate this evolution. Semi-open, with restrictive licenses, they represent a form of hybridization between open source and proprietary models. This raises a question: are we moving away from the open source utopia towards a more realistic, but also more ambiguous, model?
5. Is open source AI really more ethical and sustainable?
Affordability is often cited as a means of facilitating inclusion. In theory, open source democratizes access to AI. In practice, only players with the skills and resources to harness it can get real value out of it. Openness alone is not enough to bridge the gap between organizations.
From an environmental standpoint, open source AI has its share of critics. The proliferation of models, systematic retraining and multiple test runs contribute to a significant energy footprint. The idea of “clean” AI is beginning to emerge, with lighter, more targeted approaches, but these remain marginal for now.
Finally, the issue of intellectual property remains unclear. Many models are trained on public data whose legal status is not always clear. Who really owns the rights to the content? And what happens when a model trained on that data comes to market? Open source still needs to clear up these grey areas to establish its legitimacy.
At Combodo, we believe that open source represents a strategic tool for building systems that are more agile, more transparent and better aligned with business realities. That belief is even stronger when it comes to AI: open source makes it possible to keep control of data, to understand automated decisions, and to choose appropriate levels of automation.
But AI is still a choice. A binding one. When used properly, it’s a choice that can save support staff considerable time, improve the user experience and streamline operations. The time saved has a value, and it’s only natural that it should have a cost.
However, we oppose any attempt to use AI as a justification for raising prices or creating technological lock-in. At Combodo, we advocate a balanced approach: open but pragmatic, sovereign but interoperable, innovative but under control. AI must not be an inaccessible black box or a luxury reserved for the few. It must remain a tool that benefits agents, improving processes and service quality.