From Keynote to Practice: Governing Data, Software, and Risk in Digital Labs

Introduction

Ahead of his keynote at AIDDD 2025 in Boston, MA, ZONTAL CEO Wolfgang Colsman participated in a pre-interview to discuss the state of digital transformation in life sciences and what it will take to make AI-driven discovery scale beyond isolated success.

The conversation did not center on models or algorithms. Instead, it focused on a more fundamental question:

How do you make AI-driven discovery repeatable, auditable, and reliable across the enterprise?

In both the interview and the keynote that followed, a consistent theme emerged. What ultimately determines whether AI scales is not the intelligence layer. It is the structure of the environment in which that intelligence operates.

The Limiting Factor Is No Longer Technology

Most large organizations no longer struggle to access advanced tooling. The ecosystem has matured. Platforms, vendors, and models are readily available.

Yet outcomes remain inconsistent.

Data pipelines degrade under variation. Integration efforts remain localized rather than systemic. Validation is often introduced after the fact instead of designed as part of the system. The result is a pattern that is now familiar: isolated success followed by difficulty scaling.

This is not a limitation of technology. It is a limitation of system design. When the underlying environment does not enforce structure across data, processes, and interfaces, variability accumulates faster than capability. Over time, that variability becomes the dominant constraint.

What Governs a Digital Lab Environment

At a practical level, digital lab environments are governed by how software is defined, how data is structured, and how responsibility is distributed across the organization.

These are often treated as separate concerns—technical, operational, and organizational—but in practice they are tightly coupled. Decisions made in one layer propagate quickly into the others.

Software scope, for example, is not simply a contractual boundary. It defines how systems evolve, how changes are introduced, and how dependencies are managed over time. Where that scope is unclear or inconsistently applied, the effects appear quickly in integration timelines, validation effort, and ultimately in data quality.

In parallel, data governance cannot be treated as a policy applied after deployment. It must be embedded into how systems operate. Data that lacks context, lineage, or consistency cannot be reliably reused, regardless of how advanced the analytical layer becomes.
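
As a loose sketch of what it can mean to embed governance into how systems operate, the hypothetical Python snippet below (the field names and structure are illustrative assumptions, not a ZONTAL schema) attaches context and lineage to a measurement the moment it is created, so every derived value carries its own history:

```python
# Hypothetical sketch: a measurement record that carries its own context and
# lineage, so reuse does not depend on reconstructing provenance afterwards.
from dataclasses import dataclass, replace
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageStep:
    operation: str        # e.g. "baseline_correction"
    performed_by: str     # system or user responsible for the change
    performed_at: str     # ISO 8601 timestamp

@dataclass(frozen=True)
class MeasurementRecord:
    sample_id: str
    instrument_id: str
    method_version: str
    value: float
    unit: str
    recorded_at: str
    lineage: tuple[LineageStep, ...] = ()

def derive(record: MeasurementRecord, new_value: float,
           operation: str, actor: str) -> MeasurementRecord:
    """Return a new record; the original is never mutated, and the
    transformation is appended to the lineage trail."""
    step = LineageStep(operation, actor,
                       datetime.now(timezone.utc).isoformat())
    return replace(record, value=new_value, lineage=record.lineage + (step,))
```

Nothing in this sketch is specific to any one platform; the point is simply that context and lineage travel with the data instead of being reconstructed after the fact.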

These dynamics are not theoretical. They manifest directly in onboarding timelines, rework cycles, and the effort required to demonstrate compliance.

From Capability to Accountability

A second point, made in both the keynote and the interview, is that capability and accountability do not advance together.

Platforms increase what organizations are able to do. They do not reduce what organizations are responsible for.

Responsibility for data integrity, validation, and appropriate use remains with the organization operating the system. This becomes more—not less—important as environments become more automated and more interconnected.

In regulated settings, systems must be demonstrated to perform according to their intended use. That requirement does not end at deployment. It persists throughout the lifecycle of the system and must be maintained as conditions change.

Acceptance, in this context, is not a milestone. It is an operating condition. It depends on continued alignment between system behavior and real-world use, supported by traceability and controlled change.

The practical implication is straightforward: automation shifts the location of oversight, but it does not eliminate it.

Risk Is Not Transferred—It Is Reshaped

As digital platforms become more central to lab operations, there is often an implicit expectation that they will absorb or mitigate risk.

In practice, risk is not transferred. It is reshaped.

Technology providers define capabilities and constraints. They enable new ways of working, but they do not determine outcomes. Scientific conclusions, operational decisions, and regulatory positions remain the responsibility of the organization.

This is reflected structurally in how software relationships are defined. Liability is typically bounded, while responsibility for use and outcome remains internal.

This is not a limitation of the model. It is a necessary condition for operating in complex and regulated environments. It also reinforces the need for organizations to develop internal frameworks—processes, controls, and governance mechanisms—that match the capabilities of the systems they deploy.

Trust Becomes the Scaling Constraint

As digital lab environments expand across sites, systems, and teams, the central constraint shifts from capability to trust.

The question is no longer whether data can be generated or accessed. It is whether that data can be relied upon, audited, and reproduced under scrutiny.

These are not abstract concerns. They determine whether decisions can be defended, whether collaboration can extend beyond local teams, and whether results can be reused with confidence.

Data protection and confidentiality are necessary components of this, but they are not sufficient on their own. Trust also depends on structure—on consistent formats, controlled access, and traceable transformations.

Without that structure, scale introduces friction. With it, scale becomes manageable.
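
One way to picture what "traceable transformations" can look like in practice is a hash-chained trail of changes. The sketch below is a simplified, hypothetical illustration, not a description of any specific product: each entry commits to the one before it, so a later edit to the history becomes detectable.

```python
# Hypothetical sketch of a traceable transformation trail: each entry commits
# to the previous one via a hash, so tampering with the history is detectable.
import hashlib
import json

def append_entry(trail: list[dict], operation: str, actor: str, detail: str) -> None:
    previous_hash = trail[-1]["entry_hash"] if trail else "0" * 64
    entry = {
        "operation": operation,
        "actor": actor,
        "detail": detail,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)

def verify(trail: list[dict]) -> bool:
    previous_hash = "0" * 64
    for entry in trail:
        if entry["previous_hash"] != previous_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        previous_hash = entry["entry_hash"]
    return True
```

In real deployments this role is typically played by the audit-trail capabilities of the systems involved; the sketch only illustrates the underlying idea of transformations that can be verified rather than merely asserted.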

Building for Repeatability

The long-term shift underway in life sciences is not simply toward more digital systems. It is toward more repeatable systems.

Repeatability in how data is generated, how integrations are delivered, and how validation is performed and maintained.

This is what allows organizations to move from project-based execution to operational capability. It is also what makes AI viable at scale.

AI systems depend on consistent, structured, and governed inputs. Where those conditions are not met, outputs become difficult to interpret and harder to trust. Where they are met, the same systems can deliver materially better results.

What This Means in Practice

For organizations evaluating their current environment, the question is not whether additional tools are required. It is whether the existing foundation can support what comes next.

This includes whether data is structured in a way that supports reuse, whether systems can be validated without disproportionate effort, whether ownership is clearly defined across teams, and whether integration and change can be managed without introducing additional variability.

Where these conditions are absent, adding new layers of technology will increase complexity without improving outcomes. Where they are present, the same technologies can scale effectively.

Closing Thought

The transition to AI-driven labs is often framed as a question of innovation.

In practice, it is a question of structure.

Organizations that invest in governed, repeatable systems will find that innovation becomes easier to scale and easier to defend. Those that do not will continue to make progress—but struggle to extend it beyond isolated use cases.

Watch the Keynote

This perspective was discussed in more detail during Wolfgang Colsman’s AIDDD 2025 keynote.

👉 Watch the AIDDD Boston Keynote

Preserve and protect your scientific data for long-term use.

Get in Touch