Artificial intelligence (AI) technologies and their applications continue to grow and evolve. AI technologies are now being deployed across almost every industry and sector, including transportation, healthcare, defence, finance and manufacturing. But what exactly are these technologies? How prevalent are they? And with AI developing so rapidly, how will International Standards respond to these challenges?
As artificial intelligence (AI) becomes increasingly ubiquitous in various industry sectors, establishing a common terminology for AI and examining its various applications is more important than ever. In the international standardization arena, much work is being undertaken by ISO/IEC’s joint technical committee JTC 1, Information technology, subcommittee SC 42, Artificial intelligence, to establish a precise and workable definition of AI. Through its working group WG 4, SC 42 is looking at various use cases and applications. The Convenor of SC 42/WG 4 is Dr Fumihiro Maruyama, Senior Expert on AI at Fujitsu Laboratories.
Currently, the working group is examining a total of 70 use cases. Health, for example, is a fascinating area to explore. Dr Maruyama himself describes one use case in which a program builds a “knowledge graph” from ten billion pieces of information drawn from existing research papers and databases in the medical field. The application then attempts to trace a path through the graph representing the likely development from a given gene mutation to the disease that deep learning has predicted from that mutation.
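The path-finding idea can be illustrated in miniature. The sketch below is a toy example with invented entity names (not real medical data): it uses breadth-first search to trace an explanatory chain from a mutation to a predicted disease, much as the use case describes on a vastly larger scale.

```python
from collections import deque

# Toy knowledge graph: each edge links a biomedical entity to a related one.
# Entities and relations here are illustrative placeholders, not real data.
GRAPH = {
    "mutation:GENE_X": ["protein:P1"],
    "protein:P1": ["pathway:cell_growth"],
    "pathway:cell_growth": ["disease:D"],
}

def find_path(graph, start, goal):
    """Breadth-first search for a shortest explanatory path from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no explanatory path found

print(find_path(GRAPH, "mutation:GENE_X", "disease:D"))
```

A production system would of course search billions of weighted edges rather than three, but the output has the same shape: an ordered chain of entities explaining how the mutation could lead to the disease.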
Solutions for health
Dr Radouane Oudrhiri is Chief Data Scientist at Eagle Genomics, whose work is undertaken “in silico” – that is, primarily through computer- or data-driven innovation. One area of focus is the microbiome, which comprises all of the genetic material of the micro-organisms (bacteria, viruses and fungi) within a given environment, such as the human gut, mouth or skin. Microbiomes aren’t limited to humans and other animals: oceans, soils and rivers all host microbiome communities that impact entire ecosystems. Microbiome data is very complex, being both high-dimensional and compositional. Dr Oudrhiri’s colleagues analyse it using AI and machine-learning computational tools to spot associations that humans simply cannot. This radically improves productivity and enables revolutionary discoveries, identifying new, sustainable ingredients and therapeutic targets and informing safer, more efficient industry practices.
AI technologies have been used to analyse human tumours for some time now, but as Prof. Frank Rudzicz, Canada’s representative for SC 42, Director of AI at Surgical Safety Technologies Inc., and Associate Professor of Computer Science at the University of Toronto, points out in an interview for this article, this is just one application among several. For instance, an application has recently been deployed to identify early-onset dementia in elderly patients. Residents at care facilities, normally assessed by a doctor for 15 minutes once every six months, have instead been issued with a computer tablet and asked to respond verbally to a series of questions. The program then alerts the medical team if anything seems awry, such as a change in the patient’s voice patterns, or an inability to spot obvious relationships in an image of a family group.
Dr Oudrhiri has also been working with a company that has developed an AI solution, initially designed to make shoes smarter by collecting biomechanical metrics, measuring aspects such as shoe usage and sporting performance. It works via a chip inserted into the sole. The application has been so successful that advances in technology will soon allow it to detect the likelihood of developing diseases – such as Parkinson’s – simply by analysing the way an individual walks.
The AI of everything
Health, of course, is not the only field that the work of SC 42 will impact. Dr Maruyama also cites the example of an AI program that uses ultrasonic waves to inspect wind turbines. The program flags any portions of the turbines that may have defects, clearing the way for human inspection experts to make an informed choice about any subsequent course of action. Crucially, because the program undertakes the initial inspection, human experts are freed up to inspect more turbines.
Intelligent transportation systems (ITS) is another sector that already relies heavily on AI. Dr Mahmood Hikmet, Head of Research and Development at Ohmio Automotion, a company that focuses on ITS, points to lidar technology, which measures the distance to an object using laser light rather than sound or radio frequencies. If several of these lasers are stacked on top of one another and spun round at high speed, the result is a three-dimensional “point cloud” showing how far away a given object might be. All of this takes place “tens or hundreds of times a second”. It’s an application that can be used in driverless cars, with the capability even to distinguish between different blades of grass.
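The underlying geometry is simple to sketch. The toy example below assumes an idealized time-of-flight measurement and illustrative beam angles: each laser return is timed, converted to a range, and placed in 3-D space using the sensor’s spin angle (azimuth) and the stacked beam’s elevation.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(t_seconds):
    """A lidar measures distance from the round-trip time of a laser pulse."""
    return SPEED_OF_LIGHT * t_seconds / 2

def to_xyz(distance, azimuth_deg, elevation_deg):
    """Convert one laser return (spinning azimuth, stacked-beam elevation)
    into a 3-D point; many such returns per revolution form the point cloud."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A pulse returning after roughly 66.7 nanoseconds corresponds to a
# target about 10 metres away.
d = range_from_time_of_flight(66.7e-9)
point = to_xyz(d, azimuth_deg=45.0, elevation_deg=2.0)
print(point)
```

Repeating this for every beam in the stack, at every azimuth, tens or hundreds of times a second, is what produces the dense point clouds that driverless cars rely on.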
Dr Hikmet also highlights crowd-counting AI for driverless cars, a predictive analysis application drawn from data on the infrastructure side of ITS (as opposed to the cars themselves). This involves cameras that monitor people walking back and forth, tracking them throughout an entire shot, whilst predicting their likely “route” as they interact with others. This data is then picked up by the car and used to prevent any possible collisions.
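A minimal stand-in for such route prediction is constant-velocity extrapolation from a tracked path. Real systems use far richer models of pedestrian interaction, but the sketch below, with invented coordinates, conveys the basic idea of projecting a tracked person’s position forward in time.

```python
def predict_position(track, steps_ahead):
    """Constant-velocity extrapolation of a pedestrian track.

    track: list of (x, y) positions sampled at equal time intervals;
    steps_ahead: how many intervals into the future to predict.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per interval
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

# A pedestrian moving diagonally at one unit per interval,
# predicted two intervals ahead.
print(predict_position([(0.0, 0.0), (1.0, 1.0)], 2))
```

In the infrastructure-side setup Dr Hikmet describes, predictions like these are broadcast to nearby vehicles, which can then slow or re-route before a potential collision develops.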
Behaviour training for machine learning
YOLO – You Only Look Once – is an object-recognition technology that detects and distinguishes multiple objects in an image in a single pass. It has obvious applications in safety and security contexts. Behavioural cloning is another field of AI, in which a machine learns a series of tasks from examples of the desired behaviour, refined through reinforcement training. It’s “a way of punishing and rewarding a neural network for doing things right or wrong,” Dr Hikmet explains. The network learns from the reward or punishment signals it receives from the human user how it is supposed to “behave”.
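The reward-and-punishment mechanism Dr Hikmet describes can be sketched in miniature. The toy example below replaces a neural network with a simple pair of action preferences, nudged up or down by a scalar feedback signal; the task, actions and learning rate are invented for illustration, but the principle of learning from reward and punishment is the same.

```python
import random

random.seed(0)  # deterministic for the sake of the example

# Two candidate behaviours and a numeric "preference" for each.
preferences = {"left": 0.0, "right": 0.0}
LEARNING_RATE = 0.1

def choose_action():
    # Pick the currently preferred action, exploring occasionally.
    if random.random() < 0.1:
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)

def give_feedback(action, reward):
    # Reward (+1) strengthens the action taken; punishment (-1) weakens it.
    preferences[action] += LEARNING_RATE * reward

for _ in range(200):
    action = choose_action()
    # Hypothetical task: "right" is the correct behaviour.
    give_feedback(action, +1 if action == "right" else -1)

print(max(preferences, key=preferences.get))  # the learned behaviour
```

A real system would use a neural network and a gradient-based update rather than a lookup table, but the feedback loop – act, receive a reward or punishment, adjust – is the core of what the network ends up learning.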
Venture capital is key to certain aspects of Dr Oudrhiri’s work. One exciting area of research seeks to digitize and systematize nothing less than “the entire entrepreneurship process”. By gathering data throughout the venture life cycle, identifying innovation challenges and categorizing information, the platform provides predictive models of a company’s performance, growth potential and valuation. A risk profile is thereby established, assisting in the selection process and throughout the start-up’s evolution. Until now, information of this kind has been collected through human responses to surveys. These are more aggregate in nature, do not lend themselves to easily built predictive models, and often lead to unwittingly biased conclusions. After all, it is only natural that company owners will want their projects to succeed.
These examples are as ingenious as they are effective. And yet the vast majority of us are unlikely to have heard of these specific AI technologies, still less to have an awareness of their impact. Current AI solutions are often developed in silos and built for very specialized applications; their true power will be properly realized when they are considered in a holistic framework, such as the horizontal frameworks SC 42 is developing.
A role for standards
For this and other reasons, International Standards are now under development. Dr Oudrhiri suggests that standards are needed to “cut through the hype” so that fears and objections to AI can be either taken on board or simply rebutted as groundless. Radical ideas for AI applications are often promoted with great fanfare in the media and other public forums – for better or worse – yet, as Dr Maruyama points out, many, if not most, of these ideas never get past the Proof of Concept (PoC) phase.
Consumers do need to be protected – from physical harm, certainly, but also from companies that use the phrase “artificial intelligence” as a way of promoting a product simply to spike its share price. And given that AI data is at the intersection of many different fields – software engineering, neuroscience, decision making – it is hugely important that a common framework is developed, so that consumers, producers and regulators can speak a common language.
Such a scenario is not as far-fetched as it may first sound. Experts talk of “AI winters”, in which previous generations of AI technology peaked, only to fall away because of misplaced experimentation and the consequent withdrawal of funding. The same could happen again, undoing much of today’s progress.
STATE OF THE PRACTICE
It is precisely because AI technologies are developing so quickly that International Standards are so needed. In the words of Dr Oudrhiri, they should focus on the “state of the practice, not the art”. SC 42 has already produced draft technical reports, with standards under development. The subcommittee is working with technical committee ISO/TC 69, Applications of statistical methods, on mapping the terminology and concepts of the machine-learning world across statistics, software engineering, AI, data science and operational research. An entire working group – SC 42/WG 3 – is looking solely at trustworthiness.
Dr Maruyama believes the best approach to developing International Standards is to converge around a limited number of alternatives and to “focus on where technology is already stable”. A common language and criteria are being created to help projects get beyond the PoC stage. Another area of focus is describing the process and life cycle for developing AI applications. The standards will also help capture the broad requirements of consumer needs, which must include the ethical and societal considerations of use cases and applications. A third area of focus is model validation. This is highly technical and statistical in nature, but will one day ensure that programs and machines do what they are supposed to do.