AI Reboots: Now Businesses Must Adapt



Don Grust
08/22/2017

Artificial Intelligence (AI) is transforming our world in subtle but profound ways. I am not referring to the futuristic Westworld version of AI but the more immediate, practical version that portends a new era of computing – an era in which powerful algorithms unlock valuable insight from data and where data becomes an asset that can be monetized.

The rise of AI parallels that of other historic computing advances. I joined the mobile industry in 1991 when it seemed that a powerful and exciting market was emerging – in hindsight we can safely say that we were witnessing the onset of a new era in computing. Today, AI and its cousins (machine learning, deep learning, neural networks and so forth) present the same way that mobility did in 1991 and the dawn of the internet before that: potent technological advances mixed with extraordinary results.

Regardless of how you frame this, successful businesses will increasingly use data and AI to inform decision making and augment business processes. Consequently, business executives need to come up to speed on AI to be able to assess the business opportunities and threats and to discern the signals from the noise. This does not require a PhD in math, but it will require some effort to understand what problems these tools can solve and how to connect that to meaningful business value.

AI That “Learns” Unlocks Powerful New Applications

Artificial Intelligence refers broadly to machines that exhibit intelligence. This includes robots, self-driving cars, intelligent agents like Siri and Alexa, and smart software systems such as Google Search, Facebook feeds and Amazon product recommendation systems. These systems do not think or reason per se. Instead, they use an AI programming technique called Machine Learning to teach these systems to do things like recognize speech, learn the kinds of books you like and predict price movements.

Early AI systems were “rules-based” systems programmed to perform specific tasks. A famous example of a rules-based system is Deep Blue, IBM’s chess-playing computer that defeated Garry Kasparov, the reigning world chess champion, in 1997. Deep Blue intelligently simulated the next 6-20 moves and chose the one that had the best chance of winning.

Rules-based systems, also known as expert systems, worked reasonably well for some commercial applications. However, many problems are not easily described by rules. To overcome this limitation, much of the AI research focused on designing systems that can learn. For example, instead of telling a computer how to play chess by explicitly programming it with all the rules, you use machine learning to “train” a computer model by showing it a large number of games from the first move to the last (win, lose or draw). With each pass through the data, the model adjusts its parameters until it learns how to win with high accuracy. Then you use the model with the winning parameters to play a real game. And it works, stunningly well.
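The train-then-predict workflow described above can be sketched in a few lines of code. The example below is purely illustrative (it uses synthetic data and scikit-learn, not any real chess corpus): a model's parameters are adjusted by repeatedly showing it labeled examples, and the trained model is then used on examples it has never seen.

```python
# Minimal sketch of the "train, then use" machine learning workflow.
# The dataset here is synthetic and illustrative, not from the article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "games": each row is a feature vector, each label an outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()   # the model whose parameters get adjusted
model.fit(X_train, y_train)    # "training": tune parameters against the data

# Use the trained model on examples it has never seen.
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The key point mirrors the chess example: no rules are programmed in. The model's parameters are learned from examples, then reused on new inputs.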

Some problems that are best solved if the computer can learn the solution include:

  • Recognizing faces and expressions (photo tagging)
  • Recognizing objects (autonomous driving)
  • Identifying and interpreting spoken words and text (intelligent agents, search, sentiment analysis of tweets and customer service logs)
  • Recommending books or movies to a user (à la Amazon and Netflix)
  • Playing the game of Go
  • Analyzing MRIs
  • Recognizing anomalous sensor readings in an industrial plant
  • Uncovering / preventing hacking attempts
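To make one of the items above concrete, here is a deliberately simplistic take on spotting anomalous sensor readings: flag any value that deviates far from the rest. This is a toy statistical sketch, not how industrial systems actually work (those learn patterns across many correlated signals), but it conveys the idea of detecting "readings that don't fit."

```python
# Toy anomaly detector: flag readings more than 2 standard deviations
# from the mean. The readings are made-up; real systems use learned
# models over many correlated sensor streams.
import statistics

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 35.7, 20.2, 20.1]
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

anomalies = [r for r in readings if abs(r - mean) > 2 * stdev]
print(anomalies)  # the 35.7 reading stands out
```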

The AI Market Reawakens

The seminal AI algorithms were invented in the ʼ70s and ʼ80s, but by the ʼ90s, many researchers had given up, leading to the start of an “AI Winter.” Much of the theory focused on neural networks, which are powerful learning models inspired by the brain. The theory was sound, yet the algorithms did not work well in practice, particularly when applied to complex tasks like those listed above.

But by the late ʼ00s, marked improvements in speech and image recognition rekindled interest in neural networks. It turns out that the algorithms were not the problem: the real issue was that the computers were too slow and there was not enough data. Hence, the availability of massive amounts of data, plummeting processing and storage costs, and blazing fast math processors unlocked the power of the algorithms and triggered amazing progress in AI over the last 6-8 years.

  • Massive amounts of data: We know we are swimming in data. IBM actually tried to quantify it: “Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone.” (Quintillion: add 9 zeroes to a billion.) More training data leads to more accurate models.
  • Plummeting processor and storage costs: Thank you Amazon, Microsoft, Google and others for the cloud computing price wars.
  • Blazing fast math processors: Machine learning algorithms are math-intensive. It takes billions of calculations to train a neural network made up of millions of “neurons” on millions of training examples. To speed this up, one of the coolest developments has been to deploy special purpose processors known as GPUs (graphics processing units) and their many variants to speed numerical computing roughly tenfold. (See the amusing Mythbusters video on the subject.)
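The “billions of calculations” point can be made concrete: at its core, one layer of a neural network is a large matrix multiplication, which is exactly the operation GPUs accelerate. The NumPy sketch below uses illustrative sizes (not from the article) to show how quickly the arithmetic adds up.

```python
# One neural network layer is essentially a matrix multiply followed
# by a nonlinearity. Sizes below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 1000))     # 64 examples, 1000 inputs each
weights = rng.standard_normal((1000, 500))  # a layer of 500 "neurons"

# 64 * 1000 * 500 = 32 million multiply-adds for this one small layer.
activations = np.maximum(0, batch @ weights)
print(activations.shape)  # (64, 500)
```

Scale the layer sizes up to millions of neurons and repeat the multiply for every pass over millions of training examples, and the total easily reaches billions of operations – hence the appeal of hardware built for exactly this kind of math.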

On The Cusp of a New Era

Driven by these advancements, AI and machine learning now touch us every day in the form of Alexa, Siri, Google searches, Facebook photo tagging, LinkedIn and Facebook feeds, spam detection, mobile check deposit, Google Maps, Amazon and Netflix recommendations, and Uber, just to name a few.

For developers, a surge of open source AI software has made powerful tools and platforms readily available. In addition, cloud-based machine learning solutions from Google, Amazon, Microsoft, IBM and others include APIs for speech recognition, text analysis, facial/image recognition and stream analytics (for IoT data); a wide range of pre-built learning algorithms including advanced neural networks; and much, much more.

A few other indicators that signal the emergence of AI include:

AI Investment and M&A

According to CB Insights, venture capital firms invested $5 billion in 550 AI companies in 2016, a 5-year high. CB Insights also reported that nearly 140 private AI companies have been acquired since 2011, with over 40 acquisitions taking place in 2016.

Data scientist and “business translator” shortages

A recent McKinsey report projects a shortfall of some 250,000 data scientists over the next few years to meet the demand of companies integrating data and analytics into their organizations. This includes top data scientists with PhDs in math, computer science and other sciences who advance the cutting edge, as well as practitioners who expertly apply advanced models to solve business problems and ensure reliable and scalable deployment of these models. This shortage is also driving “acqui-hire” deals, with elite startups commanding $5 million to $10 million per employee. For example, Google acquired DeepMind (the company that beat the Go champion) and their 75 employees for $600 million, or $8 million per employee.

McKinsey also identified a shortage of what they call business translators. These are business subject matter experts who also understand the technology and can team with the analytics and AI talent to pragmatically solve valuable business problems. McKinsey estimates that there will be a shortage of 2-4 million business translators over the next decade.

AI in the mainstream media

You increasingly hear the terms AI, machine learning and algorithms in the news. It is even starting to capture the attention of the mainstream press:

The NY Times Magazine ran a terrific cover story in its December 14, 2016 issue entitled, “The Great A.I. Awakening.” It was bylined, “How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.” According to the article, Google Translate now boasts 500 million users per month with 140 billion words translated daily.

The Economist also put out a cover story and a special section on AI that delves into the technical advances along with ethical issues (June 25, 2016).

Towards A Strategic Vision for AI in Business

The great AI reboot heralds a call to action for business executives to get up to speed on what is new and possible with AI. As with the internet and mobile eras before it, successful businesses will adapt their business strategies to use AI and data science to create competitive advantage.