What is Artificial Intelligence?
Artificial intelligence (AI). Two simple words that conjure up all sorts of terrifying thoughts of a future run amok. The concept isn't new; science fiction writers have peppered pop culture with books and films about the dangers of computers thinking for themselves and turning on their creators: armies of robots, or a monolithic tower of microchips that learns how to be a better hunter in an ever faster race to wipe out the last humans. But the idea of AI has its roots in something far more benign: simple efficiency. AI learns from past iterations to make future runs faster and easier. In real life, the only thing it hunts down is inefficiency.
The Encyclopedia Britannica defines AI as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings."1 IBM gives us humans a little more credit. It defines AI as leveraging "computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."2 AI can be further defined based on its specific applications across industries. This means AI is no longer a single entity but has become an overarching technology that can touch every aspect of human life. AI has spawned several subfields as research into its uses and applications grows.
Types of Artificial Intelligence
- Weak (or Narrow) AI: Weak AI is what we primarily see in operation today. It is also called narrow AI because it is narrow in scope, goal-oriented, and designed to perform a particular task or a set of closely related tasks.3 This is the type of AI behind some of the most commonly used applications, such as autonomous vehicles, IBM’s Watson, and Alexa and Siri, the digital assistants from Amazon and Apple, respectively.
- Strong (or General) AI: This is a theoretical form of AI in which machines would have intelligence levels similar to humans. They would have the ability to be self-aware and to think creatively and strategically.3 While weak AI uses past data to inform future decisions, strong AI would also draw on past feelings and experiences, and it is speculated that it would someday surpass human intelligence. This is the kind of AI portrayed in most sci-fi movies and TV shows, and, to the likely relief of many, researchers have not yet achieved it: there are no real-world examples of strong AI, though it continues to attract the interest of researchers and developers.
Subsets of Artificial Intelligence
Two subsets of AI include natural language processing (NLP) and machine learning (ML). NLP is defined as the ability of a system to understand speech and text the way humans do.4 The use of NLP is widely evident in chatbots and similar applications, allowing for interaction with humans in a natural and personalized manner.
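As a rough illustration of the idea, the sketch below (written in Python and assuming the scikit-learn library is available; the example phrases and intent labels are invented) turns a handful of chatbot-style messages into word-count features and trains a tiny intent classifier on them.

```python
# Minimal sketch: bag-of-words features plus a simple classifier,
# a common first step toward chatbot-style intent detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training utterances and their intents.
phrases = [
    "where is my order",
    "track my package",
    "I want a refund",
    "how do I return this item",
]
intents = ["order_status", "order_status", "refund", "refund"]

vectorizer = CountVectorizer()                  # tokenize and count words
features = vectorizer.fit_transform(phrases)    # text -> numeric feature matrix

classifier = MultinomialNB().fit(features, intents)

new_message = ["when will my package arrive"]
print(classifier.predict(vectorizer.transform(new_message)))  # ['order_status']
```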
As defined by AI pioneer Arthur Samuel in 1959, ML is the “field of study that gives computers the ability to learn without being explicitly programmed.”5 ML systems evolve along their own path, and when applied in combination with data, analytics, and automation, they can help businesses achieve their varied goals.
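To make Samuel's definition concrete, here is a minimal sketch (Python, again assuming scikit-learn; the toy sales figures are invented) in which no rule is hand-written: the model infers the relationship between advertising spend and sales purely from example data.

```python
# Minimal sketch: the model learns a pattern from examples instead of
# being given an explicit formula by a programmer.
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: advertising spend (in $1,000s) vs. units sold.
spend = [[10], [20], [30], [40], [50]]
units_sold = [120, 205, 310, 395, 505]

model = LinearRegression().fit(spend, units_sold)   # the "learning" step

# The learned relationship generalizes to a spend level it has never seen.
print(model.predict([[60]]))   # roughly 600 units, inferred from the data
```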
ML itself has a subset: deep learning (DL). DL automatically processes complex data sets with minimal feature engineering.6 A further difference is that much of the data processing in DL requires no human intervention; it can work from raw, unstructured data, making it more scalable. In contrast, the broader ML process requires human experts and more structured data to learn.
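As a rough sketch of that distinction (Python, assuming the PyTorch library; the layer sizes are arbitrary), a deep learning model can take raw pixel values directly, with its stacked layers learning their own intermediate features rather than relying on hand-engineered ones.

```python
# Minimal sketch: a small deep neural network that consumes raw 28x28
# grayscale images directly, learning its own features layer by layer.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),             # raw pixels in, no hand-crafted features
    nn.Linear(28 * 28, 128),  # first hidden layer learns low-level patterns
    nn.ReLU(),
    nn.Linear(128, 64),       # deeper layer combines them into higher-level ones
    nn.ReLU(),
    nn.Linear(64, 10),        # scores for 10 possible classes
)

fake_image = torch.rand(1, 1, 28, 28)   # stand-in for one raw, unstructured input
print(model(fake_image).shape)          # torch.Size([1, 10])
```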
History of Artificial Intelligence
The idea of a “thinking machine” has been discussed as far back as ancient Greece. But the foundations for usable AI were laid with the onset of electronic computing in the mid-20th century. In 1950, Alan Turing, famous for breaking the Nazis’ Enigma cipher during WWII, published a paper called Computing Machinery and Intelligence.7 In it, he tries to answer the question, “Can machines think?” and describes the Turing test he developed to determine whether a machine can demonstrate the same thought process as a human. This was the first practical framework for testing a machine’s “thinking” capabilities.
The following year, the first artificial neural network was developed. An artificial neural network simulates a biological brain, with interconnected nodes playing the role of neurons. During initial training, those nodes and connections are weighted for importance depending on the accuracy of their outputs. In the 1951 setup, 3,000 vacuum tubes were wired together to simulate 40 neurons, and the network successfully modeled how a rat would escape a maze.8
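The weighting idea can be shown in a few lines. The sketch below (Python with NumPy; the numbers are invented) passes inputs through a single artificial neuron and nudges its connection weights toward a desired output, which is the essence of how a network's nodes are "weighted for importance."

```python
# Minimal sketch of one artificial "neuron": weighted inputs, an output,
# and a small weight update based on how wrong the output was.
import numpy as np

inputs = np.array([0.5, 0.1, 0.9])     # signals arriving from other nodes
weights = np.array([0.2, 0.8, -0.4])   # importance of each connection
target = 1.0                           # the output we want for these inputs

for step in range(20):
    output = 1 / (1 + np.exp(-np.dot(inputs, weights)))  # weighted sum + squashing
    error = target - output
    weights += 0.5 * error * inputs    # adjust connections in proportion to the error

print(weights, output)                 # the weights drift toward producing the target
```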
In 1955, John McCarthy coined the term “artificial intelligence” in a proposal to hold the first-ever conference on the subject the following year at Dartmouth College.9 In the same year, the first-ever running AI software program, Logic Theorist, was created by J.C. Shaw, Allen Newell, and Herbert Simon.9
During the 1980s, the aforementioned neural networks were increasingly used in AI applications. The next major milestone came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov in a six-game match.10
In the 2000s, work on AI accelerated as the computing power of widely available machines increased, and AI became more common. A milestone of particular significance was the defeat of Lee Sedol, a world-champion Go player, in a match against DeepMind’s AlphaGo program, which was powered by a deep neural network. The result was extraordinary given the vast number of possible configurations (over 5 trillion) after only five moves of the game.11
Benefits and Applications of Artificial Intelligence
From its nascent stage, the purpose of enabling a machine to think has been to answer the question: "What can machines do for us if they are developed with the ability to think and become smarter and more intelligent?"
Benefits:
AI offers a plethora of benefits in almost every industry, not least because it reduces the scope for human error. Some of the most apparent benefits of AI include the following:
- Increased Efficiency: AI can take a holistic view of an organization and eliminate friction points while improving analytics and resource utilization, resulting in significant cost reductions. It can automate complex processes, predict maintenance needs, and minimize downtime. But AI is not just about streamlining laborious and repetitive tasks: with ML and DL, it learns from the results and constantly improves its performance and insights.
- Improved Decision-making and Accuracy: AI is not supposed to replace human intelligence but rather augment it with deep analytics and pattern prediction capabilities that will enhance the quality, effectiveness, and depth of human decisions.
- Creativity: Humans spend a large part of their time at work doing repetitive and mundane tasks. Such activities can be assigned to AI, and humans can utilize the time to do more creative and high-value work that will drive more productivity and encourage innovation. AI can also help all workers increase productivity, leading to a happier workforce and better work-life balance.
- Smarter Products and Services: AI-enabled systems can analyze problems and assess situations differently than humans. This helps the systems recognize issues and opportunities more quickly, empowering companies to launch better products faster and with more innovative services through various channels and business models.
- 24/7 Customer Service: This has been one of the earliest applications of AI. Companies provide various avenues for interacting with customers, and AI has transformed this significantly. Many basic tasks are now handled by AI chatbots that can provide information to customers around the clock, as they are not time-bound like humans. AI-based apps also enable businesses to gather real-time information, providing deeper insights that drive greater overall customer satisfaction.
- Computer Vision: This form of AI derives meaningful information from the analysis of videos, digital images, and other kinds of visual input. Uniquely, computer vision creates an interactive link between the digital and physical worlds. The neural networks used in computer vision have applications in social media, medical imaging, and helping autonomous vehicles make sense of their surroundings; a minimal sketch follows this list.
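As a small illustration of the last bullet (Python, assuming PyTorch and torchvision are installed; "street_scene.jpg" is a placeholder file name), a pretrained convolutional neural network can pull a meaningful label out of a raw image.

```python
# Minimal sketch: classify an image with a pretrained convolutional network.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT         # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()   # small pretrained CNN

preprocess = weights.transforms()                 # matching resize/normalize pipeline
image = Image.open("street_scene.jpg")            # placeholder input image
batch = preprocess(image).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    top = logits.softmax(dim=1).argmax(dim=1).item()

print(weights.meta["categories"][top])            # human-readable class label
```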
Applications of AI:
New ways of leveraging AI are being developed every day. Deriving optimal benefits from it depends on an organization’s ability to identify where AI can be used and to what level. There is a multitude of real-world applications, from low-level task management to advanced techniques in research and development (R&D) and product development. Some of the most common industries using AI include robotics, surveillance, space exploration, retail, finance, drug development, and more. They have all utilized AI to augment human intelligence, providing deeper and broader access to information. AI has been used to improve enterprise-level performance, increase productivity, and monitor tasks that, until very recently, required full-time human attention.
AI really shines in its scalability: it can extract useful information from data at a level beyond human capability. This prowess in analyzing data brings substantial benefits and advantages for businesses. AI technologies are invaluable in marketing, where swift decision-making is essential: strategies can be formulated and tweaked based on data collection, analysis, and shifts in economic or search trends. For example, online retail companies use AI engines to constantly improve product recommendations, showing customers items they are far more likely to purchase. These insights are derived from analyzing customers' browsing, purchasing, and search data.
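One common way such recommendation engines work is collaborative filtering. The sketch below (Python with NumPy; the tiny purchase matrix is invented) scores items for a shopper by comparing their purchase history with those of similar customers.

```python
# Minimal sketch of user-based collaborative filtering: recommend items
# that customers with similar histories bought, weighted by similarity.
import numpy as np

# Rows = customers, columns = products; 1 means "purchased" (invented data).
purchases = np.array([
    [1, 0, 1, 1, 0],   # customer A
    [1, 1, 0, 1, 0],   # customer B
    [0, 1, 0, 1, 1],   # customer C
])

target = purchases[0]                                    # recommend for customer A
norms = np.linalg.norm(purchases, axis=1) * np.linalg.norm(target)
similarity = purchases @ target / norms                  # cosine similarity to A

scores = similarity @ purchases                          # similarity-weighted item scores
scores[target == 1] = 0                                  # ignore items A already owns
print(np.argsort(scores)[::-1])                          # best candidates first
```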
Artificial Intelligence in Life Sciences and Healthcare
The impact of AI in the life sciences and healthcare space has been tremendous, and its adoption has accelerated in the past decade. It is utilized in various areas, transforming organizations at all levels, from drug research to clinical trials to market launch and day-to-day operations. It is helping to bring life-saving therapies to market faster than ever while streamlining operations. Applying AI to big data in life sciences ushers in a range of opportunities, from reshaping manufacturing and business models, to enhancing research and clinical trial data flow, to product intelligence. AI enables pharmaceutical companies to gain deep insights from massive data sets at blistering speed: it efficiently processes data, creates automated workflows, and delivers actionable insights.
Another crucial application of AI is in the operating room. Technologies such as augmented reality (AR), based on computer vision, help surgeons conduct highly critical operations. AR uses cameras to enhance what the surgeon sees, enabling more precise movements.
Ethics in Artificial Intelligence
The introduction of every new technology raises questions of ethics, morality, and codes of conduct. This is especially apparent with AI, as it opens possibilities no other technology has in recent times; it has the potential to change the landscape of humanity itself. The scope and use-case scenarios of AI are so vast that it can touch every corner of our lives and even become all-pervasive. The idea of a ubiquitous AI is frightening for some and exciting for others. Whether or not it becomes what we fear, the responsibility is ours: it lies in how we proceed with its development and use while ensuring it continues to conform to established ethics, morals, and trust.
Organizations need to build a foundation of trust and accountability with their customers, clients, employees, and the public in general. AI needs to be designed and deployed in such a way that it complements the abilities of an organization and its employees while also creating a positive impact on society at large. That will, in turn, imbue trust and confidence in companies to develop and deploy AI at scale.
Focus areas should be privacy, data security, trust, and transparency; without these, AI deployment can run into several challenges. If AI is to develop to its true potential, everything associated with it must be transparent, accountable, and free from built-in biases. There must also be a clear definition of the control, ownership, and morality embedded in it.
AI should never be allowed to become the bane of humanity; instead, it should become the force that propels us into the next phase of human evolution.
The Future of Artificial Intelligence
AI is deeply intertwined with humanity’s future. It will be the face of the next industrial revolution. What will that new AI-empowered world look like? To start, AI will not be as limited in scale as it is today; it will have expanded to encompass almost every facet of our lives. AI will know you and your interests and will search for and present the information you seek daily. By taking care of mundane daily tasks, it will allow us to focus on more creative work that will advance our species at an unprecedented pace. It will control almost everything that makes industrialized human civilization work.
If properly harnessed, AI has the potential to generate enormous prosperity and opportunity. It can immensely improve quality of life, help cure diseases, and make us all safer by reducing the chances of human error. But we must realize that AI only knows what it has been taught: the data that has been fed into its systems and the information it has been able to derive from that data. Access to this data will decide whether human rights and privileges are trampled upon, whether by AI or by those who control it. There must be democratization in the control of AI so that a few people do not end up having dominion over the entire set of resources.
Currently, AI excels at processing data and uncovering insights from it, but it does not yet have the creativity or intelligence to innovate. Will the theory of strong AI, a general AI that can learn and grow its intelligence like humans, come into being? Will we have sentient androids walking down the streets, standing guard at high-security buildings, or helping an elderly person cross the street? If so, can we trust them? AI is already more successful than humans at detecting some cancers in patients, so will we see androids operating on human patients in the future? That scenario, if it ever comes to pass, is still far off. Rest easy; we are safe from those fictional movie-style robot overlords. However, we are on the verge of seeing AI scale up in a way almost as revolutionary as the assembly line. We are much closer to seeing our generation’s version of Henry Ford’s Model T than James Cameron’s Model T-1000.
References
- Copeland BJ. Artificial intelligence [Internet]. Chicago: Encyclopedia Britannica; 1998 Jul 20 [updated 2022 Nov 11; cited 2023 Feb 2]. Available from: https://www.britannica.com/technology/artificial-intelligence
- IBM. What is artificial intelligence (AI)? [Internet]. Armonk (NY): IBM [cited 2023 Feb 2]. Available from: https://www.ibm.com/topics/artificial-intelligence
- Jajal T. Distinguishing between narrow AI, general AI and super AI [Internet]. San Francisco: Medium; 2018 May 21 [cited 2023 Feb 2]. Available from: https://medium.com/mapping-out-2050/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22
- Lutkevich B. What is natural language processing? [Internet]. Newton (MA): TechTarget; 2023 [cited 2023 Feb 2]. Available from: https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP
- Brown S. Machine learning, explained [Internet]. Cambridge (MA): MIT Sloan School of Management; 2021 Apr 21 [cited 2023 Feb 2]. Available from: https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
- Bhamidipati SK, Chickarmane V, Lorenzo R. Predicting patient adopters through deep learning [Internet]. Berkeley Heights (NJ): Axtria Inc.; 2019 Oct 3 [cited 2023 Feb 2]. Available from: https://medium.com/@axtria/predicting-patient-adopters-through-deep-learning-d684fae4efa4
- Turing AM. Computing machinery and intelligence. Mind [Internet]. 1950 Oct 1 [cited 2023 Feb 2];LIX(236):433–460. Available from: https://academic.oup.com/mind/article/LIX/236/433/986238. DOI: https://doi.org/10.1093/mind/LIX.236.433
- Toosi A, Bottino A, Saboury B, Siegel E, Rahmin A. A brief history of AI: how to prevent another winter. PET Clinics [Internet]. 2022 Dec 9 [cited 2023 Feb 2];16(4):449–469. Available from: https://arxiv.org/pdf/2109.01517.pdf. DOI: https://doi.org/10.1016/j.cpet.2021.07.001
- McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence [Internet]. Stanford: Stanford University; 1955 Aug 31 [cited 2023 Feb 2]. Available from: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
- IBM. Deep Blue [Internet]. Armonk (NY): IBM; [cited 2023 Feb 2]. Available from: https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
- Cho A. Huge leap forward: computer that mimics human brain beats professional at game of Go [Internet]. Washington, DC: Science; 2016 Jan 27 [cited 2023 Feb 2]. Available from: https://www.science.org/content/article/huge-leap-forward-computer-mimics-human-brain-beats-professional-game-go