Artificial intelligence is everywhere in business conversations right now - and that’s why data foundations matter. Organisations are experimenting with AI tools, trialling copilots and exploring how automation and analytics can improve productivity. But while the excitement around AI is understandable, many organisations are still struggling to move beyond experimentation and into meaningful, scalable outcomes.
According to Francois Le Roux, Director of Enterprise Architecture & Security at PerData, the key difference between experimentation and real value comes down to one thing: data.
“AI works most effectively when data is unified, governed and trusted,” he says. “The technology itself is powerful, but without the right data foundations, you’re unlikely to see consistent or scalable results.”
Many organisations today are experimenting with AI in small ways - running pilots, testing use cases or exploring new tools. That’s an important step, but it’s not the same as deploying AI across a business.
Francois describes this progression as a journey from experimentation to commercialisation.
“In the early phase, you want flexibility,” he explains. “You want to be able to bring data together, explore it and run experiments without worrying too much about whether everything is perfectly structured or certified.”
Platforms such as Microsoft Fabric, combined with AI development tools like AI Foundry, support this journey by providing an environment where organisations can test ideas quickly. Data can be ingested in raw form, analysed in sandbox environments and used to explore possible AI-driven insights.
But as those experiments mature, the requirements change.
“Once you move toward production use, especially if the outputs are influencing financial or operational decisions, you need certified, governed data,” Le Roux says. “That’s where proper data platforms and governance frameworks become critical.”
One of the biggest misconceptions organisations have about AI is that all data needs to be perfectly structured before it can be used. In reality, AI works with a wide range of data types.
Francois describes three main categories: structured, semi-structured and unstructured data.
The type of data required depends heavily on the use case.
“If a CFO asks AI how to optimise operating profit, the system needs highly structured, verified financial data,” Francois explains. “But if you’re analysing customer sentiment or market trends, you’re working with unstructured data like social media conversations.”
In those cases, the power of AI lies in its ability to interpret language, detect patterns and extract meaning from messy, real-world information.
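The contrast between the two scenarios can be made concrete with a toy sketch (the data and the keyword-based sentiment scorer are invented for illustration, not any specific product): a structured financial query is an exact aggregation, while unstructured text has to be interpreted before it yields a number.

```python
# Structured data: verified financial records support exact aggregation.
ledger = [
    {"unit": "retail", "revenue": 120.0, "cost": 90.0},
    {"unit": "online", "revenue": 80.0, "cost": 50.0},
]
operating_profit = sum(r["revenue"] - r["cost"] for r in ledger)

# Unstructured data: messy text needs interpretation first.
posts = ["Love the new app!", "Checkout keeps failing, terrible", "great support"]
POSITIVE, NEGATIVE = {"love", "great"}, {"terrible", "failing"}

def sentiment(text: str) -> int:
    """Crude sentiment score: positive hits minus negative hits."""
    words = {w.strip("!.,") for w in text.lower().split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

scores = [sentiment(p) for p in posts]
print(operating_profit)  # 60.0
print(scores)            # [1, -2, 1]
```

Real sentiment analysis uses language models rather than keyword lists, but the shape of the problem is the same: the answer has to be extracted from messy, real-world text rather than read off a verified ledger.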
Another factor organisations often overlook is the speed at which data needs to move.
Some AI scenarios rely on historical data processed in batches, such as forecasting or trend analysis. Others require near real-time or streaming data, particularly in areas such as IoT, logistics or robotics.
Modern data platforms support multiple data velocities simultaneously, allowing organisations to combine historical insights with real-time signals.
“The right platform allows you to ingest data in real time, near real time, or scheduled batches,” Francois says. “Different AI use cases require different approaches.”
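The difference between those velocities can be sketched in a few lines (a minimal illustration, assuming no particular platform): the same metric computed once over a historical batch, and incrementally as events stream in.

```python
from typing import Iterable, Iterator

def batch_average(readings: Iterable[float]) -> float:
    """Batch mode: all historical data is available at once."""
    values = list(readings)
    return sum(values) / len(values)

def streaming_average(readings: Iterable[float]) -> Iterator[float]:
    """Streaming mode: update a running average as each event arrives."""
    total, count = 0.0, 0
    for value in readings:
        total += value
        count += 1
        yield total / count

history = [10.0, 12.0, 11.0, 13.0]
print(batch_average(history))            # 11.5
print(list(streaming_average(history)))  # [10.0, 11.0, 11.0, 11.5]
```

Forecasting and trend analysis look like the first function; IoT, logistics and robotics scenarios look like the second, where each new signal updates the answer immediately.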
Data silos remain one of the biggest barriers to effective AI adoption.
Many organisations maintain separate systems for finance, CRM, HR and other operational functions. While these systems often serve as reliable sources of truth, problems arise when they cannot easily be correlated.
“The real value appears when you connect those domains,” Francois explains. “For example, linking CRM data with financial performance gives you a much clearer picture of customer value.”
Modern data platforms such as Microsoft Fabric help bring these sources together into a unified environment where data can be harmonised, validated and analysed across the organisation.
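The value of correlating silos is easy to see in miniature. In this hypothetical sketch (the customer IDs and field names are invented), CRM and finance records keyed by a shared customer ID are harmonised into a single customer-value view:

```python
# Two silos that each hold part of the picture.
crm = {
    "C001": {"name": "Acme Ltd", "segment": "enterprise"},
    "C002": {"name": "Brightside", "segment": "smb"},
}
finance = {
    "C001": {"annual_revenue": 250_000.0},
    "C002": {"annual_revenue": 40_000.0},
}

# Join the domains on the shared key into one unified view.
customer_value = {
    cid: {**crm[cid], **finance.get(cid, {"annual_revenue": 0.0})}
    for cid in crm
}
print(customer_value["C001"])
```

In practice this correlation happens inside the data platform rather than in application code, but the principle is the same: each silo is a reliable source of truth on its own, and the richer insight only appears once the records are linked on a common key.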
As AI becomes more embedded in business operations, data governance and security become even more important.
In some cases, AI will access data directly from collaboration platforms such as SharePoint, Teams, Outlook or OneDrive. In others, it may use curated datasets inside a data platform.
Either way, organisations need clear classification, access controls and data policies in place.
“AI should only be able to access the data it’s allowed to see,” Francois says. “And if it produces outputs, those outputs should only be visible to the people who are authorised to consume them.”
Tools such as Microsoft Purview play a key role here by automatically classifying sensitive data and enforcing governance policies across an organisation’s information environment.
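The underlying idea can be illustrated with a minimal sketch (the labels and clearance levels are invented and do not reflect Purview's actual API): data is classified by sensitivity, and an AI agent is only shown what its clearance allows.

```python
# Invented sensitivity labels, ordered from least to most restricted.
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

documents = [
    {"id": "press-release", "label": "public"},
    {"id": "staff-handbook", "label": "internal"},
    {"id": "q3-forecast", "label": "confidential"},
]

def visible_to(clearance: str, docs: list[dict]) -> list[str]:
    """Return only documents at or below the caller's clearance level."""
    limit = LEVELS[clearance]
    return [d["id"] for d in docs if LEVELS[d["label"]] <= limit]

print(visible_to("internal", documents))      # ['press-release', 'staff-handbook']
print(visible_to("confidential", documents))  # all three
```

The same filter applies in both directions: it constrains what the AI can read, and the labels on its outputs constrain who is authorised to see the results.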
Despite all the technical considerations, Francois believes organisations should begin their AI journey with a simple question: Where is the return on investment?
Many executives are enthusiastic about AI, but they still need to justify the investment.
“A CFO might say, ‘I’m happy to invest in AI, but show me the return,’” he says. “That’s where targeted use cases become important.”
Instead of deploying AI broadly without a clear objective, organisations should focus on specific problems, such as:
Reducing operational waste
Improving operating profit
Increasing market share
Identifying new customer opportunities
Pilot projects in these areas allow organisations to test AI capabilities while building evidence for broader investment.
“You don’t need to wait until everything is perfect,” Francois says. “But you do need to experiment in a structured way that helps you identify real business value.”
In other words, AI success isn’t just about adopting the latest technology - it’s about building the right data foundation to turn experimentation into meaningful outcomes.