At the AI Festival, in the panel dedicated to new intelligence in the food supply chain, a common line emerged from corporations and start-ups: in food, there is no need to chase the rhetoric of ‘being ahead’, but rather to start integrating AI into real processes. In a fragmented and regulated value chain such as the agri-food sector, where quality, safety, logistics, sustainability and customer relations are key, AI is only useful if it becomes an operational component of daily work, not a separate layer of analysis.
The starting point for large-scale distribution is stated in no uncertain terms. Gianni Montrone (quality manager, Conad Nord Ovest): “AI only makes sense if it is based on clear processes and reliable data and passes scalability, integrability and security tests.” The rest is governance: sustainability/TCO assessments, integration criteria (web, e-commerce, cloud) and cross-functional projects with shared responsibilities. AI does not ‘create’ quality; if anything, it makes quality readable and traceable when upstream processes are defined, measured and respected.
The picture is completed by the last mile. Chiara Ninci (Conad Nord Ovest): “We are evaluating how AI agents, which make our quality system processes more efficient, can bring measurable improvements in terms of the quality perceived by customers for the service and for the freshest products in the store.” AI therefore makes sense if it reduces friction between the various stages (purchasing centre, logistics, point of sale) and if it promotes a stable experience for shoppers: same standards, same thresholds, transparency on origin, storage, allergens and sustainability. It is a matter of interfaces, not fireworks: getting systems that were not designed to talk to each other to communicate.
From the start-up side, the message remains pragmatic. Marco Papalini (Biorsaf) summarises the enabling condition for moving from analysis to action: “With structured data, AI can anticipate problems, not just analyse them.” The quantum leap, even for agentic solutions that perform tasks, comes when datasets, roles and responsibilities are clear: anomalies identified in time, priorities suggested on a risk basis, operational prescriptions that cut minutes and ambiguity for teams in the field (warehouse, HACCP, inspections, recalls). In other words, AI becomes a process tool: it reduces variance, standardises repetitive decisions, and leaves non-standard cases and merit-based choices to people.
When the horizon shifts to an industrial scale, the issue ceases to be purely technical and becomes one of values. Gianluca Cristallo (CAMST Group) puts it this way: “For us, AI must help us make better decisions, consistent with our market and with being a B Corp.” In collective catering, this translates into reliability and replicability: menus and supplies to be optimised, waste to be reduced, special diets to be managed without errors. Here, ‘agentic’ means tracking, justifying and verifying: a system that proposes an order or a shift plan must explain the constraints and priorities on which it is based. Responsibility is not shifted onto the machine; it is documented with the machine.
The research theme completes the picture. Chiara Bertaso (Peptofarm) points to a choice of method that has immediate repercussions on timing and impact: “Using AI before the laboratory means reducing unnecessary testing and directing research towards more effective and sustainable solutions from the outset.” Here, AI acts as a filter and compass: it selects hypotheses with a higher probability of success, shortens the exploratory phase, reduces materials and energy consumption, and moves sustainability upstream in the process rather than treating it as a downstream outcome. It is an approach that affects laboratories as much as product and purchasing functions: fewer random attempts, more iterative cycles guided by data and real constraints.
What does it mean, in concrete terms, to have ‘started’ with Agentic AI in the food supply chain? The panel discussion revealed some common features.

Anchoring to processes: start with a clear map of flows (quality, safety, logistics, sales) and insert AI capabilities where a decision is repetitive or slow. You don’t just ‘add’ an algorithm at the end; you rewrite a step so that it becomes measurable and automatable.

Useful data, not just lots of it: data quality beats quantity. Clean records, consistent labels and definitions shared between functions reduce noise and allow AI to anticipate rather than comment after the fact.

Short cycles, real impact: sprint projects (8–12 weeks) with simple indicators (minutes saved, waste avoided, non-compliance reduced) are worth more than monumental roadmaps. Adoption grows when operations improve immediately.

Finally, governance and explainability: especially in regulated contexts, every automation must leave a trace and a reason. Trust comes from the why as well as the what.
In the background remains the temptation, typical of conferences and announcements, to measure oneself with superlatives. The AI Festival panel showed another way: the value of AI in the food supply chain does not lie in declaring oneself ‘ahead’, but in laying the first brick where there is currently a bottleneck, a recurring error, or downtime. It is a patient construction, based on processes, data, and shared responsibilities. The difference today is not between those who are ‘ahead’ and those who are ‘behind’. It is between those who have already started to integrate AI into their way of working, with measurable results, even small ones, and those who are waiting for the perfect moment. In food, as Conad Nord Ovest, CAMST Group, Biorsaf and Peptofarm pointed out, getting started is often the most competitive step you can take.
ALL RIGHTS RESERVED ©