Artificial Intelligence is advancing at a pace few technologies in human history have matched. Every few months, new models, new capabilities and new applications emerge that reshape how organisations operate, how decisions are made and how societies function.
Amid this rapid expansion, a parallel conversation has taken shape among governments, research institutions and industry leaders around the world: the need to establish AI standards.
At first glance, the idea appears straightforward. Standards create order. Standards create trust. Standards allow technologies to scale across borders and industries. But a deeper question is rarely asked.
Do standards merely support innovation, or do they also shape the boundaries within which innovation can occur?
Understanding this distinction is critical, because throughout history, standards have played a powerful role not only in enabling progress, but also in defining the range of possibilities that society explores.
What Are AI Standards?
In simple terms, AI standards are agreed rules, frameworks and technical parameters that guide how artificial intelligence systems are designed, developed, tested and deployed.
These standards typically address several key areas.
• Safety and reliability: AI systems must perform consistently and avoid harmful outcomes.
• Transparency and explainability: users and regulators must understand how decisions are being made.
• Ethical use: AI systems must respect privacy, fairness and human rights.
• Interoperability: different AI systems must be able to communicate and function across platforms and industries.
• Risk management: developers and organisations must assess and manage potential unintended consequences.
Organisations such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have already begun developing global frameworks through committees like ISO/IEC JTC 1/SC 42, which focuses specifically on Artificial Intelligence.
At the national level, countries are also developing their own policy frameworks. In Malaysia, initiatives connected to the National AI Office (NAIO) aim to guide responsible AI development while positioning the country within the global AI ecosystem.
The intention behind these efforts is clear: to ensure that AI develops in a way that is safe, trustworthy and beneficial for society.
But foresight demands that we also ask a less obvious question: what do standards do beyond ensuring safety?
The Hidden Power of Standards
Standards are often seen as neutral instruments. In reality, they are not.
Standards quietly define how problems are framed, what solutions are considered acceptable, and which directions innovation is likely to follow.
Once a standard becomes widely adopted, it begins to shape entire industries.
For example:
• Electrical standards determined how power grids were built across nations
• Internet protocols shaped the architecture of global communication
• Aviation safety standards defined how aircraft are designed and operated
In each case, standards enabled massive growth and coordination. Yet they also created structural pathways that influenced how technologies evolved.
AI will likely follow the same trajectory.
The standards that emerge today will influence not only how AI systems operate, but also how researchers, developers and organisations imagine the future of artificial intelligence.
When Standards Enable Innovation
To be clear, standards are not inherently restrictive. In many cases, they are essential for progress.
Without standards:
• companies cannot integrate systems effectively
• governments cannot regulate technology responsibly
• users cannot trust the safety of AI-driven decisions
Standards create shared confidence, allowing technologies to move from experimental laboratories into real-world environments.
For example:
• Healthcare AI requires clear safety and validation standards before it can assist doctors in diagnosing disease.
• Autonomous vehicles require rigorous technical standards before they can operate safely on public roads.
• Financial AI systems must follow governance frameworks to prevent manipulation or systemic risk.
In these contexts, standards are not barriers to innovation. They are foundations that allow innovation to scale responsibly.
When Standards Quietly Narrow Possibilities
However, there is another side to the story.
When standards are introduced too early, or designed with excessive rigidity, they can unintentionally narrow the space for experimentation.
Innovation often emerges from unconventional ideas that initially fall outside established frameworks.
If standards define the acceptable architecture of AI systems too tightly, they may favour certain approaches while discouraging others.
History offers many examples of this dynamic: technologies that initially appeared unconventional or impractical, such as packet-switched networking, later became breakthroughs.
Rigid frameworks can create invisible guardrails that steer innovation along predictable paths while discouraging exploration beyond them.
This is not necessarily intentional. It is simply the nature of systems that prioritise order and stability.
But from a foresight perspective, this dynamic matters greatly.
Because the future is rarely shaped by the ideas that fit comfortably within existing rules.
It is often shaped by the ideas that initially fall outside them.
A Foresight Perspective on AI Standards
Strategic foresight is not about predicting the future. It is about understanding how decisions made today influence the range of futures that become possible.
AI standards should therefore be approached not only as regulatory frameworks, but also as architectures of possibility.
The key question is not whether standards should exist. They must.
The more important question is how they are designed.
Effective AI standards must strike a delicate balance.
They must be strong enough to ensure safety, trust and accountability.
Yet flexible enough to allow new ideas, alternative architectures and unexpected discoveries to emerge.
In other words, standards should guide innovation without silently confining it.
This requires continuous review, open dialogue between policymakers and innovators, and the willingness to adapt frameworks as technology evolves.
The Future Will Not Be Shaped by AI Alone
Artificial Intelligence is often discussed as the defining technology of our era.
Yet history teaches us something important.
Technologies alone do not shape the future.
The systems, rules and structures surrounding those technologies often determine how they evolve.
Standards are one of those invisible structures.
They can accelerate innovation.
They can build trust across industries and societies.
But they can also define the boundaries within which imagination operates.
From a foresight perspective, this is the real issue.
Because the standards we establish today will influence which possibilities are explored and which remain unexplored.
And in the rapidly evolving world of Artificial Intelligence, that distinction may shape far more than we realise.