Overview of Artificial Intelligence in Health Care
Notes from the Internet – Denham Pole, July 2015

At its simplest, 'intelligence' is human mental ability. To cynics this is nothing more than the ability to get a high score on the Stanford-Binet intelligence test. The test, mainly given to children and young adults, consists of a number of puzzles of varying complexity covering different types of problems that they may need to solve in their daily life, using only mental skills. The test has been continually refined to remove questions with cultural bias, so that it can be used to measure intelligence in children from different ethnic and cultural backgrounds.

The 'quotient' originally compared the score a child achieved with the age of children who would typically achieve that same score. So if a gifted 12-year-old scored like a typical 14-year-old, their IQ would be calculated as 14 ÷ 12 × 100 ≈ 117. Nowadays more complicated algorithms involving percentiles are used, but the results are expressed on the familiar IQ scale.

These examples from an early test by Alfred Binet, showing 'beautiful' and 'ugly' faces, compared with a modern version available as a self-test on the Internet, indicate how the test has evolved:

Intelligence tests from 1900 to 2000
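The original ratio IQ is simple enough to compute directly. A minimal sketch in Python, using the worked example from the text (the function name `ratio_iq` is invented for illustration):

```python
# Ratio IQ as originally defined for the Stanford-Binet test:
# IQ = (mental age / chronological age) * 100.

def ratio_iq(mental_age: float, chronological_age: float) -> int:
    """Return the classic ratio IQ, rounded to the nearest integer."""
    return round(mental_age / chronological_age * 100)

# The gifted 12-year-old who scores like a typical 14-year-old:
print(ratio_iq(14, 12))  # -> 117
```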
Located 35 miles south of San Francisco and 20 miles north of San Jose, Stanford is nestled in the heart of Northern California's dynamic 'Silicon Valley', home to Google, Hewlett-Packard, Yahoo!, Apple and other cutting-edge companies. Many of them were started, and continue to be led, by Stanford alumni and faculty. You will read a lot about Stanford if you research Artificial Intelligence (AI) – the history of AI is closely intertwined with Stanford University:
Stanford University in Silicon Valley

Overview of AI in medicine.odt
The usual way of quantifying human intelligence is by separating it into other, less abstract measures of intellectual capability: judgment, reason, understanding, acumen, wit, sense, insight, perception, perspicacity, penetration, discernment, sharpness, quick-wittedness, smartness, astuteness, intuition, acuity, alertness, cleverness, brilliance, aptness, ability, talent, intellect...

But some of those measures of mental capability are deceptive – intuition seems to reduce to nothing more than the speed of reaching a conclusion by taking short-cuts; judgment and common sense seem to be based on rules learned in childhood that are not questioned in later life. The more we try to pin down a definition of intelligence separate from learning, the more difficult it becomes. In common usage, human intelligence seems to be a measure of how successfully an individual can solve a problem that they have not seen before.

Because of the difficulty in defining intelligence, many of those active in the field do not like the term 'Artificial Intelligence', as they feel it sounds unscientific. They argue that the word 'artificial' suggests lesser or fake intelligence, more like science fiction than academic research. They prefer terms such as 'computational neuroscience', or ones that emphasize the particular subset of the field where they work, like 'semantic logic' or 'machine learning'. Nevertheless, the term 'Artificial Intelligence' has gained popular acceptance and graces the names of various international conferences and university courses. In a direct comparison to human intelligence, however, it has not yet been fully simulated by computers. In fact, it is only Hollywood that has produced really convincing AI so far (as seen, for example, in the Steven Spielberg film 'Artificial Intelligence').
Artificial Intelligence – definition

The term 'Artificial Intelligence' was coined by Dartmouth mathematician John McCarthy (who also invented the programming language LISP). It was used as the theme of a conference at Dartmouth College, New Hampshire, in 1956, to study every aspect of learning and intelligence and to design machines to simulate them. This would include making machines that use language, form abstractions and concepts, and solve the kinds of problems now reserved for humans. Such machines should even have been able to improve themselves. Just as 'intelligence' is difficult to define, McCarthy (and his conference) were not able to define 'AI' precisely, and in the decades since this conference, no single consensus has emerged as to what exactly 'AI' means. Current thinking falls into three categories:
• Simplistic – can pass the Turing test (a machine that reacts like a human being in impromptu conversation);
• Pragmatic – can perform a function that would require intelligence when performed by humans, or can perform equal to or better than humans on specific human tasks;
• Methodological – capable of solving problems that are normally handled by humans (using expert systems, neural nets, machine learning, genetic algorithms, deep learning, fuzzy logic etc.).
Classification of AI

• Strong/Broad AI (behaving generally like a human with a mind) vs. Weak/Narrow AI (able to do only a certain number of skilled tasks that normally require some intelligence);
• Stand-alone systems vs. integrated/complementary systems;
• Neat (systems that copy/emulate human ways of doing things) vs. Scruffy (just get the job done without copying how humans would do it). Neats consider that their solutions should be elegant, clear and provably correct (in other words 'neat'), whereas Scruffies believe that human intelligence is too complicated to emulate at this stage of IT development.
However, some of the greatest successes in AI have come from combining neat and scruffy approaches. For example, there are many cognitive models matching human psychological data built into the Soar and ACT-R programs. While both of these systems have formal representations and execution systems, the rules put into the systems to create the models are generated ad hoc. It is perhaps safe to say at this stage of development that medical AI will have to remain Scruffy until we have more complete models of how human intelligence works.
Early history of AI in health care

The early attempts at producing AI health care systems after the Dartmouth College conference all had knowledge built in, but represented it in different ways:
• MYCIN represented its knowledge predominantly as rules, with recursive control mechanisms that continually scanned through the rules looking for significant ones;
• the Digitalis Therapy Advisor had a set of patient-specific models built in and searched for answers with expectation-driven procedures;
• CASNET/Glaucoma used a causal-associational network with computational 'points of interest';
• INTERNIST and PIP (Present Illness Program) had disease frames with partitioning heuristics.
All these early systems had powerful reasoning mechanisms that accessed expert knowledge.
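MYCIN-style rule scanning can be sketched as a tiny forward-chaining loop: the engine repeatedly scans the rule base, fires any rule whose conditions are satisfied by the known facts, and stops when a pass adds nothing new. The rules below are invented toy examples, not MYCIN's actual clinical knowledge:

```python
# A minimal forward-chaining rule engine in the spirit of MYCIN's
# "continually scan the rules" control mechanism. The medical rules
# are invented toy examples, not real clinical knowledge.

rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ({"fever", "cough"}, "suspect_pneumonia"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly scan the rules, adding conclusions until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are known and
            # its conclusion has not been derived yet.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "stiff_neck"})))
# -> ['fever', 'order_lumbar_puncture', 'stiff_neck', 'suspect_meningitis']
```

Note how the second rule only fires on the second pass, once the first rule has derived 'suspect_meningitis' – this chaining of intermediate conclusions is what the repeated scanning buys.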
AI Winter

Immediately after the Dartmouth Conference there was great enthusiasm for AI, but it was not sustained. The high expectations announced were not fulfilled, and among those working in health care computing there was no consensus on what computers should do and how they should do it. Public support for the exciting new developments pioneered at Dartmouth followed the trend, well known in marketing, described as the 'Gartner hype cycle'. This affected attempts to produce 'Strong AI' more than the more successful 'Weak AI'.

The Gartner hype cycle
Situation today

Articles about medical diagnostic decision support (MDDS) systems often begin with a disclaimer such as, 'despite many years of research and millions of dollars of expenditures on medical diagnostic systems, none is in widespread use at the present time'. While this statement is true in the sense that no single diagnostic system is in widespread use, it is misleading with regard to the state of the art of these systems.¹ Diagnostic systems are now ubiquitous, and research on MDDS systems is growing. For example, in clinical laboratory work and signal analysis (ECG, EEG, EMG, monitors) there has been a lot of AI development.
¹ Examples of successful AI: banking software, personal assistants, fuzzy logic controllers, heuristic searching, data mining/data analytics, machine translation, industrial robots, video games, speech recognition, spell checkers, Google's search engine, IBM's Watson, Google's DeepMind...
In trying to explain the 'AI winter', it should be pointed out that the early (pre-1990) AI systems had obvious weaknesses:
• the need for duplicate data entry;
• each program covered only a separate, narrow area;
• output was often puerile and could hardly qualify as 'intelligent'.
However, these drawbacks are no longer present, and the nature of clinical AI systems has diversified over the last 30 years. In fact, the prospect for adoption of large-scale diagnostic systems is better now than ever before, due to the development of electronic medical records that can feed data directly into these systems. Diagnostic decision support systems in Europe and the United States have become an established component of medical technology.

Weak AI – In many imaginative developments, from automatic vacuum cleaners to self-driving cars, weak AI is now ubiquitous. In aeroplanes, nuclear plants, baby nurseries, smartphones, hospitals and casinos, you will find many weak AI applications. The field extends from the trivial (pocket calculators) to BBC-front-page stuff (drones that find terrorists). In a hospital, AI systems may hide in infusion pumps, bathroom scales, surgical theaters and even the doctors' wrist watches. In health care, there are already systems providing alerts, diagnostic assistance, therapy critiquing/planning, clinical guidelines and information at the bedside, while others may be simply helping staff to conform to official regulations and prescribing rules.

Strong AI – Although there has been no major breakthrough in health care, this is an area of enormous financial investment. We will certainly see more of this in the coming few years, especially in imaging and machine learning. Although not yet 'intelligent', surgical robotic devices are starting to appear throughout the US, thanks partly to the success of the leader in the field, the company Intuitive Surgical, and their 'da Vinci' laparoscopic machine:
A robotic surgical device to assist in laparoscopies
Another important area of information technology that needs intelligent solutions is called 'Big data'. This is where extremely large data sets need to be analysed computationally to reveal patterns, trends and associations, especially data relating to human behaviour and interactions.
Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization and information privacy. In medicine, big data is expected to have most impact in intelligent searches that answer questions, post-marketing surveillance of pharmaceuticals, analysis of genomic information, computing hypotheses, providing in silico patients to do research on, supporting learning in medicine, and automatically tracking medications and infusions from manufacture to patient administration.
The immediate future

In the patient-doctor interaction, the amount of information that a clinician needs to process quickly and accurately can be enormous. If computers are to provide intelligent suggestions in this scenario, they could perhaps start with some of the simpler problems first:
• Identify the more serious patients to be seen first;
• Propose differential diagnoses on the first review, to be further looked into;
• Display the costs of different treatments;
• Base diagnosis/treatment on local prevalence, weather patterns and migration trends, which change from day to day;
• Show whether the doctor is conforming to the bulk of their colleagues in frequency of diagnosis, therapy, indications for surgery and many other benchmarking measures.
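The first suggestion above – seeing the more serious patients first – is, at its simplest, a sorting problem over a severity score. A minimal sketch, with an invented scoring rule (real triage scales such as the Emergency Severity Index are clinically validated and far more involved):

```python
# Toy triage: sort patients by a severity score so the sickest are
# seen first. The scoring weights below are invented for illustration.

def severity(patient: dict) -> float:
    """Compute a crude severity score from a few vital signs."""
    score = 0.0
    if patient.get("chest_pain"):
        score += 5
    if patient.get("temperature", 37.0) > 39.0:
        score += 2
    # Penalise tachycardia: 0.1 point per beat over 100 bpm.
    score += max(0, patient.get("heart_rate", 80) - 100) * 0.1
    return score

def triage_order(patients: list) -> list:
    """Return the patients sorted most-serious first."""
    return sorted(patients, key=severity, reverse=True)

queue = [
    {"name": "A", "temperature": 37.2, "heart_rate": 78},
    {"name": "B", "chest_pain": True, "heart_rate": 110},
    {"name": "C", "temperature": 39.5, "heart_rate": 95},
]
print([p["name"] for p in triage_order(queue)])  # -> ['B', 'C', 'A']
```

A real decision-support system would of course draw these inputs directly from the electronic medical record rather than from hand-typed dictionaries, which is exactly why the EMR developments mentioned earlier matter.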
Concern about intelligent machines taking over from the human species

The thought of machines becoming superior in our core area of intelligence, which seems to separate us from other living things, has led some people to express existential concerns. Elon Musk (Tesla), Stephen Hawking and Sam Harris, among others, are concerned about intelligent machines taking over from humans, competing for resources and perhaps enslaving or even eliminating us. A four-day conference held in Puerto Rico early this year made three claims:
• Current AI research seeks to develop intelligent agents. The foremost goal of research is to construct systems that perceive and act in various environments at (or above) human level;
• AI research is advancing very quickly and has great potential to benefit humanity. Fast and steady progress in AI forecasts a growing impact on society. The potential benefits are unprecedented, so emphasis should be on developing 'useful AI' rather than simply improving capacity;
• With great power comes great responsibility. AI has great potential to help humanity, but it can also be extremely damaging. Hence, great care is needed in reaping its benefits while avoiding potential pitfalls.
However, the Stanford graduate and AI expert Stuart Russell (now at UC Berkeley) thinks that the problems can be handled. His speech (https://www.youtube.com/watch?v=GYQrNfSmQ0M) is worth watching. It is a good antidote to the more pessimistic tone set by Sam Harris (http://www.samharris.org/blog/item/can-we-avoid-a-digital-apocalypse).
Suggested further reading

Beeler PE, Bates DW, Hug BL. 'Clinical decision support systems.' Swiss Med Wkly. 2014 Dec 23;144:w14073. doi: 10.4414/smw.2014.14073.