Smarter Labs, Faster Science: Turning Data into Predictive Power

At a recent industry event, Wolfgang Colsman, CEO and Founder of ZONTAL, explored a question that is top of mind across life sciences: how do we move from simply digitizing the lab to making it truly intelligent?

While Artificial Intelligence continues to dominate the conversation, the reality is that most organizations are still working through foundational challenges. The opportunity is not just adopting AI, but enabling it—by ensuring data is connected, contextualized, and ready to be used.

Putting AI into Perspective

AI is not new to science. For decades, organizations have experimented with machine learning, but early efforts were limited by infrastructure and fragmented data.

Today, with cloud technologies and the rise of generative AI, the landscape has changed. However, one core issue remains: scientists still spend a significant amount of time preparing and cleaning data rather than using it.

The challenge has not disappeared—it has evolved. And solving it requires more than new tools; it requires a new approach to data itself.

The Role of Predictive AI in the Lab

Not all AI delivers the same value. As described in the session, the most impactful category for science today is limited memory AI—systems that learn from historical data to improve decisions in the present.

These systems can analyze decades of experimental results, connect them to real-time measurements, and recommend the next best experiment. Rather than replacing scientists, they augment decision-making by providing deeper insight and context.
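To make the "limited memory" idea concrete, here is a minimal sketch of what such a recommendation step could look like: a simple model is fit on hypothetical historical results and used to score candidate conditions for the next experiment. The data, feature names, and model choice are illustrative assumptions, not a description of any specific product's implementation.

```python
# Minimal sketch: rank candidate experiments with a model fit on historical results.
# All condition names, values, and the model choice are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Historical experiments: (temperature_C, pH, reagent_conc_mM) -> observed yield
X_hist = np.array([
    [25, 7.0, 10],
    [30, 7.4, 20],
    [37, 6.8, 15],
    [42, 7.2, 5],
])
y_hist = np.array([0.42, 0.55, 0.61, 0.38])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)

# Candidate conditions proposed for the next round
candidates = np.array([[35, 7.0, 18], [37, 7.2, 12], [40, 6.9, 25]])
predicted_yield = model.predict(candidates)

# Recommend the candidate with the highest predicted yield
best = candidates[np.argmax(predicted_yield)]
print("Suggested next experiment:", best, "predicted yield:", predicted_yield.max())
```

In practice the historical table would span years of connected experiments rather than a few rows, which is exactly why the quality and accessibility of that data matters so much.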

However, this level of capability depends entirely on the quality and accessibility of the underlying data.

Why Data Remains the Bottleneck

Despite advances in technology, most laboratories still operate with highly fragmented data environments. Information is spread across electronic lab notebooks (ELNs), laboratory information management systems (LIMS), instruments, file shares, and cloud systems, often in inconsistent formats with missing metadata.

This fragmentation limits the ability of AI to understand relationships between data points. Without context, even the most advanced models cannot generate meaningful or reliable insights.

The real frontier is not data collection—it is data harmonization.

The Three Barriers to Scalable AI

The session highlighted three core barriers that continue to limit progress:

  1. Fragmented data ecosystems remain the most visible challenge: disconnected systems force scientists to spend time moving and cleaning data instead of using it.
  2. Legacy infrastructure adds complexity, with outdated systems continuing to store valuable data but limiting accessibility. These “zombie systems” create both cost and risk, while preventing data from being reused effectively.
  3. Interoperability is the third barrier, as tools and platforms often operate in isolation. Without shared standards, AI systems cannot communicate or collaborate across workflows.

Together, these challenges prevent AI from becoming part of everyday scientific operations.

From Data Management to Data Intelligence

Addressing these barriers requires a shift in mindset.

Rather than focusing on individual systems, organizations are beginning to adopt open, data-centric platforms that bring information together across the lab. These platforms harmonize data, preserve context, and make it accessible regardless of where it was originally generated.

This is where the transition from data management to data intelligence begins. Data is no longer just stored—it is understood, connected, and continuously reused.
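As a concrete illustration of harmonization, the sketch below normalizes results exported from two hypothetical sources (an ELN and an instrument file) into one common record that preserves units and provenance. The field names, units, and target schema are assumptions made for this example, not a prescribed standard.

```python
# Minimal sketch: harmonize records from two hypothetical sources into one schema.
# Field names, units, and the target schema are illustrative assumptions.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class HarmonizedResult:
    sample_id: str
    analyte: str
    value: float
    unit: str
    source_system: str          # provenance: where the record originated
    instrument: Optional[str] = None

def from_eln(record: dict) -> HarmonizedResult:
    # The ELN export uses its own field names and reports mg/mL
    return HarmonizedResult(
        sample_id=record["sampleRef"],
        analyte=record["compound"],
        value=float(record["conc_mg_per_ml"]),
        unit="mg/mL",
        source_system="ELN",
    )

def from_instrument(record: dict) -> HarmonizedResult:
    # The instrument file reports ug/mL; convert so consumers see one unit
    return HarmonizedResult(
        sample_id=record["id"],
        analyte=record["target"],
        value=float(record["ug_per_ml"]) / 1000.0,
        unit="mg/mL",
        source_system="instrument",
        instrument=record.get("serial"),
    )

records = [
    from_eln({"sampleRef": "S-001", "compound": "IgG", "conc_mg_per_ml": "1.2"}),
    from_instrument({"id": "S-001", "target": "IgG", "ug_per_ml": "1180", "serial": "HPLC-07"}),
]
print([asdict(r) for r in records])
```

Once records from different systems share a schema, units, and provenance like this, they can be connected and reused without manual reconciliation.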

Releasing Data from Legacy Systems

One of the most immediate opportunities lies in unlocking data trapped in legacy systems.

By separating data from the systems that created it, organizations can preserve its full scientific and regulatory value while making it available for real-time use. This allows companies to reduce costs, minimize risk, and ensure that historical knowledge remains accessible.

It also creates a foundation for AI, where past and present data can be used together to drive better outcomes.

The Next Frontier: Interoperable AI

As AI adoption grows, interoperability becomes critical.

Emerging approaches such as model context protocols and agent-to-agent communication are enabling AI systems to work together rather than in isolation. This creates the possibility of coordinated workflows, where different AI agents handle planning, execution, and analysis in a connected ecosystem.
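A minimal sketch of that coordination idea follows, assuming three hypothetical agents that exchange plain structured messages. It is not an implementation of any specific protocol such as the Model Context Protocol; it only illustrates planning, execution, and analysis roles passing context to one another.

```python
# Minimal sketch: three hypothetical "agents" cooperating through structured messages.
# The roles and message format are illustrative assumptions, not a real protocol.

def planning_agent(goal: str) -> dict:
    # Decide what to run next and hand the plan to the executor
    return {"role": "planner", "goal": goal, "plan": ["prepare_sample", "run_assay"]}

def execution_agent(plan_msg: dict) -> dict:
    # Carry out each planned step and attach the raw results
    results = {step: f"{step}: done" for step in plan_msg["plan"]}
    return {"role": "executor", "goal": plan_msg["goal"], "results": results}

def analysis_agent(exec_msg: dict) -> dict:
    # Interpret results and report back so the planner can decide the next round
    summary = f"{len(exec_msg['results'])} steps completed for goal '{exec_msg['goal']}'"
    return {"role": "analyst", "summary": summary}

# One pass through the coordinated workflow: plan -> execute -> analyze
plan = planning_agent("optimize buffer composition")
executed = execution_agent(plan)
report = analysis_agent(executed)
print(report["summary"])
```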

The result is not just automation, but orchestration—where systems collaborate to support the full scientific process.

Toward Autonomous, Learning Laboratories

When data is harmonized and accessible, AI can move beyond reactive use cases.

Instead of simply responding to inputs, systems can begin to design experiments, execute workflows, interpret results, and continuously learn from outcomes. This creates a closed loop between data, decision-making, and discovery.
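The loop structure itself can be stated very compactly. In the sketch below the propose, run, and record steps are hypothetical placeholders; the point is only that each iteration's outcome is fed back into the history that informs the next proposal.

```python
# Minimal sketch of the closed loop: propose -> execute -> interpret -> learn.
# propose_experiment and run_experiment are hypothetical placeholders.

history = []  # accumulated (conditions, outcome) pairs

def propose_experiment(history):
    # Pick conditions informed by everything learned so far (placeholder logic)
    return {"temperature_C": 30 + len(history)}

def run_experiment(conditions):
    # Stand-in for lab execution; returns a measured outcome
    return {"yield": round(0.4 + 0.01 * conditions["temperature_C"], 3)}

for _ in range(3):
    conditions = propose_experiment(history)
    outcome = run_experiment(conditions)
    history.append((conditions, outcome))  # knowledge compounds across iterations

print(f"Ran {len(history)} experiments; latest outcome: {history[-1][1]}")
```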

Over time, the value compounds as knowledge builds across experiments, teams, and organizations.

Closing Thought

The future of the lab is not defined by AI alone—it is defined by the data that powers it.

As Wolfgang Colsman emphasized, the question is no longer whether AI will transform science. It already has. The real question is whether our data strategies will evolve fast enough to support it.

By embracing open standards, harmonized data, and interoperable systems, organizations can move beyond digital transformation and begin to redefine how science learns.

Turn your lab data into predictive, actionable intelligence.

Get in Touch