
Dispelling the fear and embracing the potential of artificial intelligence.

Artificial intelligence (AI) is everywhere – and that’s something to be marvelled at. AI is powering everything from advanced web searches to social media recommendations and video game design. But it could do infinitely more.

AI has the potential to revolutionize our societies and economies. Discussions about the future of AI tend to focus on the risks; but issues around data bias, lackluster transparency and privacy are in fact driven by unscrupulous use of the technology, not the technology itself.

We will never realize the benefits of AI by solely focusing on the negatives. An AI-positive future is possible, but we need to actively pursue it. If we approach AI with a positive mindset, placing societal needs such as ethics and sustainability at the heart of its development, then we can unlock its full potential.

The promise of AI

Imagine if the advances we have seen in AI in the last year had happened even half a decade ago. Could it have accelerated the development of coronavirus vaccines? Could it have averted the global economic downturn we experienced this year? Questions and potential scenarios like these lend credence to the argument that it could be unethical not to develop AI.

AI, like any other powerful technology, can be a double-edged sword. Take for instance the automotive industry: its early days were just as contentious as the AI revolution is today. There were inherent risks in the use of automobiles. And yes, there were complications, but we built a system of guardrails that made cars safer, and we have continued to improve those guardrails for more than a century. It's time now to put the guardrails in place for AI.

AI for good

A positive outcome for AI won’t simply happen – we need to steer towards it. This requires us to advance its responsible and ethical development, underpinned by international, cross-sectoral cooperation. AI’s capabilities shouldn’t be defined only by what is technically possible, but also by what society needs and expects.

We need a diverse range of expert inputs covering various geographies, industries and roles, with everyone across the AI ecosystem involved in the process of setting standards. International Standards play an important role as the interface between technological possibilities and societal expectations. This will provide a robust foundation for the equitable development of AI.

If it is developed ethically and responsibly, AI could help to usher in a new era of innovation and inclusion. This cutting-edge technology could be used to make our world safer and better, opening up possibilities that seemed like science fiction just a few years ago.

A collaborative ecosystem

The applications of AI are vast and varied. To ensure the standards we develop are fit for purpose, we need an international ecosystem that includes diverse organizational perspectives and reflects the multiple ways in which the technology will be used.

Standards incorporating desired societal and ethical outcomes serve as a foundational framework for developing, deploying and regulating AI systems. Gone are the days when performance, cost and scalability were given priority over sustainability and trustworthiness. The future of IT, including AI, requires that we deal with all these considerations simultaneously.

With this philosophy in mind, our AI experts are leveraging the full toolset of the ISO system to develop standards that will ensure the widest and most responsible adoption of AI. We need to continue working closely with other international organizations, regulators, policymakers and end users in a collaborative and cohesive ecosystem.

Looking into the future

As with any new technology or ground-breaking product, there are inflection points as development and adoption progress. It is precisely at these points that ISO and its peers have a real opportunity to make a tangible, positive impact.

International Standards offer a framework for the creation and development of responsible, resilient AI systems based on the input and voices of all stakeholders. Standards help to foster interoperability, safety, and transparency across AI applications, ensuring the benefits of AI are accessible, comprehensible, and meaningful to everyone.

One thing is for sure: AI will not remain static. It will continually evolve as new use cases appear. As we push the boundaries of AI, we will also adapt our standards to encompass new innovations, applications and scenarios.

The future of AI is filled with opportunity, but we must advance with foresight and responsibility. If we embrace collaboration and bring all voices on board, we can help steer the technology toward the betterment of humankind.

(Source: Wael William Diab, Chair of the joint ISO/IEC subcommittee on AI (ISO/IEC JTC 1/SC 42))

What is artificial intelligence?

AI is a technology that makes machines and computer programs smart, enabling them to do tasks that typically require human intelligence. It includes things like understanding human language, recognizing patterns, learning from experience and making decisions. In general, AI systems work by processing vast amounts of data, looking for patterns by which to model their own decision making.

While this definition resonates with the lay person, it is not entirely accurate. So what exactly is artificial intelligence? According to ISO/IEC 22989:2022, AI is the “capability to acquire, process, create and apply knowledge, held in the form of a model, to conduct one or more given tasks”. This definition is more accurate from the technological perspective and is not limited to fields where AI is already being used, but allows space for further development.

About AI management systems

So how does AI work? An AI system works on the basis of input, including predefined rules and data, which can be provided by humans or machines, to perform specific tasks. In other words, the machine receives input from the environment, then computes and infers an output by processing the input through one or more models and underlying algorithms.
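The input-model-output loop described above can be sketched in a few lines of Python. This is a deliberately toy illustration, not any real AI system: the "model" is just a threshold learned from a handful of hypothetical example pairs, but the flow — human-provided input, a model derived from it, and an output inferred for new environmental input — mirrors the description.

```python
def make_model(training_data):
    """'Learn' a model from example (value, label) pairs.

    Here the learned parameter is simply the average of the
    positively labelled values, used as a decision threshold.
    Real AI systems learn far richer models, but the principle
    of deriving a model from input data is the same.
    """
    positives = [value for value, label in training_data if label]
    threshold = sum(positives) / len(positives)  # crude learned parameter
    return lambda x: x >= threshold              # the resulting model


# Predefined rules and data, provided by humans (hypothetical examples).
examples = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
model = make_model(examples)

# Inference: the system receives new input from the environment and
# computes an output by processing it through the model.
print(model(0.95))  # input above the learned threshold -> True
print(model(0.05))  # input below it -> False
```

The point of the sketch is only the separation of concerns: training input produces a model, and the model then turns each new input into an output.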

As the capabilities of AI grow exponentially, there are deep concerns about privacy, bias, inequality, safety and security. Looking at how AI risk impacts users is crucial to ensuring the responsible and sustainable deployment of these technologies. More than ever, businesses today need a framework to guide them on their AI journey. ISO/IEC 42001, the world’s first AI management system standard, meets that need. 

ISO/IEC 42001 is a globally recognized standard that provides guidelines for the governance and management of AI technologies. It offers a systematic approach to addressing the challenges associated with AI implementation in a recognized management system framework covering areas such as ethics, accountability, transparency and data privacy. Designed to oversee the various aspects of artificial intelligence, it provides an integrated approach to managing AI projects, from risk assessment to effective treatment of these risks. 

Monday, January 8, 2024