Intro to AI for Digital Health Marketers


Overview

To best introduce you to Artificial Intelligence (AI), I’m going to start out with a little history lesson.  Next I’ll introduce and define a few terms like machine learning, neural networks, and deep learning.  Then I’ll cover structured and unstructured data and get into natural language processing.  And, finally, I’ll sum it up with a couple of good examples of AI in healthcare and some steps for really making it happen within your organization.

A Brief History of Artificial Intelligence

In 1948, in response to a comment at a lecture that it was impossible for a machine to think, John von Neumann stated, “You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!”

Twenty years earlier, in 1928, von Neumann had founded the field of game theory as a mathematical discipline.  Game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”  Originally, it addressed zero-sum games, in which one person’s gains result in losses for the other participants.  Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in both humans and computers.  Game theory ultimately led to the study of both perfect- and imperfect- (or incomplete-) information games with very complex combinatorial structures (like chess, Go, or backgammon), for which no provably optimal strategies have been found.

In 1950, in a paper titled “Computing Machinery and Intelligence,” Alan Turing proposed a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.  Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses.  At the time he considered a text-only interface but, as we’ll see later, this idea has grown dramatically since the 1950s.

AI continued to grow and evolve throughout the 1950’s and 60’s.  If you’d like to check out a detailed history, here’s a great timeline on Wikipedia (https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence).

Then, in the early 1970s, Mycin was created as a doctoral project at Stanford University.  Mycin was an early expert system that used artificial intelligence to identify bacteria that caused severe infections, such as bacteremia and meningitis.  Mycin also recommended antibiotics, with the dosage adjusted for the patient’s body weight.  Its performance was shown to be comparable to, and sometimes more accurate than, that of Stanford infectious disease faculty, though it was never used in practice (because it could not be integrated with patient records and physician workflow).  The creator of Mycin, Edward Hance Shortliffe, went on to help found biomedical informatics (now known as health informatics), a multidisciplinary field that uses health information technology (HIT) to improve health care via any combination of higher quality, higher efficiency (lower cost and greater availability), and new opportunities.

While not directly related to digital health, in 1979 Hans Berliner’s backgammon-playing AI, BKG, defeated the world backgammon champion, making it the first computer program to defeat a world champion in any game.  (As a side note, I love backgammon.  If anyone else out there is a fan, reach out.  I’d love to roll a few games.)

This was a trend that would continue, with IBM’s Deep Blue chess machine defeating the then world chess champion, Garry Kasparov, in 1997.  In 2011 IBM’s Watson computer competed on the television game show Jeopardy! and won, beating two of its greatest champions, Brad Rutter and Ken Jennings.  Then in 2015 Google DeepMind’s AlphaGo defeated the three-time European Go champion, 2-dan professional Fan Hui, 5-0, stepping that up in 2016 to defeat Lee Sedol, a 9-dan professional Korean Go champion.  Most recently, in 2017, Carnegie Mellon’s Libratus won against four top players at no-limit Texas hold ’em.  Unlike Go and chess, poker is a game in which some information is hidden (the other players’ cards), which makes it much harder to model.

Defining Some Terms

The reason it is important to understand the history of AI is not just because it’s fun to have interesting factoids at geek dinner parties; there’s also a lot of relevance to it.  It’s important to note that early versions of AI date back to the 1950s, but they were not AI as you have come to know it today.

Expert Systems

From the 1950s through the 1980s, AI primarily consisted of rules-based programs called “expert systems.”  Expert systems mimic the human decision-making process through hard-coded if-then rules.  An expert system is divided into two subsystems: the inference engine and the knowledge base.  The knowledge base represents facts and rules; the inference engine applies the rules to the known facts to deduce new facts.  Mycin was a good example of this.
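To make this concrete, here is a minimal sketch of a rules-based expert system in Python.  The medical rules and facts are purely illustrative (not Mycin’s actual knowledge base): the knowledge base holds if-then rules, and the inference engine applies them to known facts until nothing new can be deduced.

```python
# A minimal, illustrative expert system: a knowledge base of if-then
# rules plus an inference engine. The rules below are made up for
# demonstration and are not Mycin's real medical knowledge.

knowledge_base = [
    # (conditions that must all hold, fact to conclude)
    ({"gram_stain": "negative", "morphology": "rod"}, ("likely_organism", "e. coli")),
    ({"likely_organism": "e. coli", "site": "blood"}, ("assessment", "possible bacteremia")),
]

def infer(facts):
    """Inference engine: apply rules to known facts until nothing new is deduced."""
    changed = True
    while changed:
        changed = False
        for conditions, (key, value) in knowledge_base:
            already_known = facts.get(key) == value
            if not already_known and all(facts.get(k) == v for k, v in conditions.items()):
                facts[key] = value
                changed = True
    return facts

print(infer({"gram_stain": "negative", "morphology": "rod", "site": "blood"}))
# -> now also contains 'likely_organism' and 'assessment', deduced from the rules
```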

Figure 1: Artificial intelligence timeline, from NVIDIA (source: https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai)

Starting in the 1980s and dominating through about 2010, a new approach to AI called machine learning became mainstream.  Machine learning does not rely on hard-coded rules and decisions; instead, the practice gives computers the ability to learn without being explicitly programmed.

Machine Learning

Instead of hand-coded rules, machine learning relies on data sets to train.  Through feedback, which can be provided by a machine or a subject matter expert, the computer learns.  Once trained, the AI can perform accurately, and with high confidence, on tasks and datasets it has never seen.  To really understand what this looks like, I encourage you to watch the documentary about IBM Watson, “Smartest Machine on Earth,” on YouTube (https://www.youtube.com/watch?v=bTIxX857KcM).  The core idea, in layman’s terms, is to keep asking the computer questions and tell it when it gets an answer correct or incorrect.  Over time it will learn to increase its confidence (in the statistical sense of the word) until it’s reached a satisfactory point.  Then you test it with questions it’s never seen before.
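To see what training and testing look like in code, here is a minimal sketch using the open source scikit-learn library.  The data is synthetic stand-in data; a real project would use thousands of labeled examples.

```python
# A minimal sketch of supervised machine learning with scikit-learn.
# The data here is synthetic; real training sets are far larger.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out "questions" the model has never seen before.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Confidence, in the statistical sense: predicted class probabilities.
print(model.predict_proba(X_test[:3]))
print("accuracy on unseen data:", model.score(X_test, y_test))
```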

Neural Networks & Deep Learning

The real term I’d like to define is deep learning, because that’s one you’ll hear a lot.  But to understand deep learning, you’ll first have to understand the basics of neural networks.  Neural networks are, simply put, computer hardware and software structured similarly to the human brain.  Given this, the goal of a neural network is to solve problems the same way the human brain would.  The foundational unit of a neural network is the neuron.

The architecture of a neuron is conceptually quite simple.  There is a set of inputs and an output; in the middle there is a function (for the sake of this explanation, it can be almost any function).  What’s important is that each neuron’s function weights the inputs and creates an output.  That output then becomes the input of the next neuron.
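In code, a single neuron is only a few lines.  Here’s a minimal Python sketch with a sigmoid as the function in the middle; the inputs and weights are made-up numbers for illustration.

```python
import numpy as np

# A single artificial neuron: weighted inputs passed through a function.
def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias   # weight the inputs
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation function

# Illustrative inputs and weights; the output would feed the next neuron.
output = neuron(np.array([0.5, 0.2, 0.9]), np.array([0.4, -0.6, 0.3]), bias=0.1)
print(output)
```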

Bilwaj Gaonkar of UCLA explained it on Quora: think of a bucket which you are filling with water.  After a certain amount of water goes into the bucket, it overflows.  Now imagine that every time a bucket overflows, you can start filling another bucket with the overflowing water by connecting it with a hose.  One could construct a large series of buckets, or entire (deep) networks of buckets, connected by hoses, where multiple overflows go to a single bucket and so on.

Now suppose that five streams of water are running down a mountain in America, and a gardener sets them up to flow into five buckets.  He wants to water three gardens: one contains cactus plants (which require little water), another chrysanthemums (which require medium water), and another water hyacinths (which require a lot of water).  We can consider this the training data set.

The gardener starts with hoses and buckets of equal size to construct a network that takes the five streams and waters the three gardens.  The first year all his plants die because he did not have the right set of hoses going to the right plants.  So the next year he changes the hose and bucket sizes, and the cacti live but the other two gardens don’t bloom.  The gardener keeps meddling with the hose and bucket sizes until all the gardens bloom.  His “network” of buckets is now trained.

When his friend in India has the same problem, the gardener tells him not to do the whole thing again.  Instead, he gives his friend the bucket and hose sizes so his friend’s gardens can flourish as well.  This is the equivalent of introducing a testing data set – something the AI has not seen before – so we can see how it performs.

Deep Learning

Now that we understand neural networks, deep learning is only a step away.  As I mentioned before, the output of a neuron is connected to the input of one, or more, other neurons.  Just as neurons are organized into layers in the human brain, so too are neurons in neural networks.  They are layered like a lasagna.  Neurons on the bottom layer receive signals from the inputs.  These inputs could be voice, an image, a video, data, text, etc.  As the data passes through a neuron, its function performs some action and passes a value out the top.  In doing this, each layer modifies the data by the function within, then passes it up and out the top.  By the time the data has moved through the layers and reached the top, the system has high confidence in the correct solution.  These layers are what make up the “deep” in deep learning.  And, as I already described above, the link between the neurons (or how we arrange the hoses in the example) is the “learning” part.
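As a rough sketch, here’s what those stacked layers look like as a forward pass in Python.  The weights here are random, standing in for the hose and bucket sizes that training would tune, so the output is meaningless until the network is trained.

```python
import numpy as np

def layer(x, W, b):
    """One layer of neurons: weight the inputs, apply a function, pass it up."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
x = rng.random(4)  # the input signal (voice, image, text... here just numbers)

# Three stacked layers: the "deep" in deep learning. Training would
# adjust these weights; random weights produce meaningless output.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(3, 8)), np.zeros(3)

out = layer(layer(layer(x, W1, b1), W2, b2), W3, b3)
print(out)
```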

So there you have it: “deep learning.”

Let’s Talk Data

When we think about all the problems we’ll want to solve with AI, there’s no shortage of ideas.  AI has already been used for games, cancer diagnosis and treatment, and even cooking the world’s best cookie.  With no shortage of problems to solve, one of the most important things you’ll have to consider is what kind, and how much, data you have to support the AI.  AI, like any other intelligence, is just an empty brain without information.

The first thing you’ll have to know about data is that for the purposes of AI there are two main types of data – structured data and unstructured data.  Both are very important, but the approach to using them will be dramatically different.

Structured data has a high degree of organization.  Think of an Excel spreadsheet with rows and columns.  Those of you who are more technical can think of a relational database.  Both are good examples of structured data.  Each column has a header, so you have a good idea what data will be below it.  It’s predictable because it follows a pre-set structure, and often the data itself is typed (which means it’s pre-defined as an integer, character, timestamp, etc.).  Because of the high degree of organization, structured data is easily queried and indexed.  This allows the data to be easily recognized and used by the computer.

Unstructured data is just the opposite.  Unstructured data does not have a pre-defined format.  This blog post, for instance, is unstructured data.  It can contain text, dates, numbers, HTML links, images, videos, and many more types of data in no particular order.  This is what makes unstructured data notoriously difficult to use.  The lack of organization results in irregularities and ambiguities that make it especially difficult for computers to process and understand compared to structured data.
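A quick sketch makes the contrast concrete.  Assuming the pandas library is available, the structured table below can be queried in one line, while the free-text note cannot (the patient data is invented for illustration).

```python
import pandas as pd

# Structured: typed rows and columns that are trivially queried.
patients = pd.DataFrame(
    {"patient_id": [101, 102], "age": [54, 61], "smoker": [True, False]}
)
print(patients[patients["smoker"]])  # easy: filter on a typed column

# Unstructured: free text with no pre-set schema. There is no column
# to filter on; understanding it requires natural language processing.
note = "Patient came in complaining of chest pain and shortness of breath."
```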

In 1998 Merrill Lynch coined a rule of thumb that between 80% and 90% of usable business data may originate in unstructured form.  More recent estimates put that number closer to 70% – 80%, but the key takeaway is that unstructured data makes up most of the data we have access to.  If you think about it, web pages, blog posts, journal articles, medical records (at least the free-form fields), books, emails, audio, video, and much more are all unstructured data.

An example of unstructured data could be the spoken or text words of a doctor:

“I reviewed Jane Smith at the Jonestown clinic today.  She was referred to us from Doctor Lee for further evaluation. The patient came in complaining of chest pain, shortness of breath, and lingering headaches. She does not have a cough, smokes one pack of cigarettes a day and has no family history of heart disease.  The patient has been experiencing similar symptoms for the past 12 hours….”

Let’s think about what’s really going on here.  As I read it, it makes sense and the message is pretty clear – but how would an AI read this?  Just some of the things that will need to be done are language identification, lexical analysis, classification, disambiguation, entity extraction, fact extraction, concept extraction, relationship extraction, and probably a whole bunch more “extractions”.  Together this is called Natural Language Understanding (NLU).  What all those complicated terms are really doing is figuring out the “who” of these words since they refer to “I, Jane Smith, she, us, Dr. Lee, the patient, family”.  The AI is also trying to figure out the “what”, or “clinic, chest pain, shortness of breath, headaches, cough, cigarettes”.  There’s also the “when”, or “today, day, history, 12 hours”.
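As a small taste of what one of those steps looks like, here is a sketch of entity extraction using the open source spaCy library.  This assumes the en_core_web_sm model has been installed, and it only scratches the surface of full NLU.

```python
# Entity extraction, one small piece of NLU, sketched with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I reviewed Jane Smith at the Jonestown clinic today. "
          "She was referred to us from Doctor Lee for further evaluation.")

# Print the "who", "where", and "when" the model pulled out of the text.
for ent in doc.ents:
    print(ent.text, ent.label_)
```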

I could keep going, but you should be starting to get the point.  AI has a lot of work to understand what we as humans may take for granted as just six sentences.

Natural Language Processing (NLP)

Once we have the structured and unstructured data feeds processing and the deep learning online, we can start thinking about how to interface with the artificially intelligent computer.  At this point you may have taken it for granted, but we haven’t talked about interfaces yet.  Interfaces for AI can be quite broad.  It may be a chatbot, or your AI could be answering the phone.  AIs are used in robotics, used to label images, and much more.  So a big part of considering what you want to use an AI for is understanding how users will interface with it.

With that in mind, one of the core tenets of an AI is the ability to simply talk to it.  Whether you remember HAL from 2001: A Space Odyssey, Iron Man’s JARVIS, or the unnamed computer that ran the bridge of the Enterprise in Star Trek, the ability to speak to a computer and have it talk back is something we’ve been thinking about how to do since the 1950s.  But what does it really take to do this?

There are two main parts to talking to a computer: taking your speech and turning it into text, and taking that text and making it meaningful to the AI.  At the core of the latter is Natural Language Processing (NLP), the field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages and, in particular, with programming computers to fruitfully process large natural language samples.  For the former, NLP relies on a technology called Automated Speech Recognition (ASR), which deals with the methodologies and technologies that enable the recognition and translation of spoken language into text by computers.
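For a sense of the ASR half, here is a minimal sketch using the third-party SpeechRecognition package for Python.  The audio file name is hypothetical, and the recognizer shown here sends audio to a web service, so treat this as an illustration rather than a production pipeline.

```python
# ASR sketch: spoken audio in, text out.
# Assumes: pip install SpeechRecognition (the file name is hypothetical).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("doctor_dictation.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

# Send the audio to a recognition engine and get text back; that text
# would then flow into the NLU steps described above.
text = recognizer.recognize_google(audio)
print(text)
```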

I can keep going on this, because we as humans take in information in a variety of ways, many of which have computerized counterparts or metaphors.  Computer vision, and the ability of self-driving cars to see the road, spot people, and obey signs, is just one of those examples.

Natural Language Generation (NLG)

An NLG system is like a translator that converts data into a natural language representation.  The key to NLG is that it’s effectively the opposite of NLP: where NLP breaks language down, NLG has to make decisions about how to put a concept into words.

This language representation can take several forms.  In some cases, it could be as simple as filling in variables in a form letter.  An example of this could be a tweet that announces the winner of a football game with the score.  More complex systems actually generate the text, in which case the NLG will have to consider the rules of grammar for the language it’s outputting (because it can be multilingual).  Building on this further is actually giving the AI a voice.  This has been historically difficult because it’s not just content and grammar, but the expressiveness of a voice.
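The simplest, template-based form of NLG fits in a few lines.  Here’s a sketch of the football-score tweet; the game data is made up.

```python
# Template-based NLG: fill variables into a form sentence.
game = {"winner": "Eagles", "loser": "Giants", "score": "24-17"}

tweet = "Final: the {winner} beat the {loser}, {score}.".format(**game)
print(tweet)  # Final: the Eagles beat the Giants, 24-17.
```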

The key to natural language generation is getting it right.  There’s a theory in animation called “the uncanny valley,” and I think it can be applied to NLG as well.  As the theory goes, we can continue to make human replicas seem more and more human.  But there is a point at which the replication gets very good yet still does not quite seem real.  It’s at this point that we, as humans, flip out (which is a technical term).  We actually don’t like it.  It’s close enough to real, but not real, so we resist and repel it.  The uncanny valley was meant for animation, but I think it applies to language as well.

Getting Started with Artificial Intelligence

Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing are still all keywords and catchphrases at this point. If you want to go deeper and actually leverage AI for a project, then there are some considerations when getting started.

What AI Is and Isn’t

Congratulations, you’ve already taken the first step in implementing an AI project.  It may sound silly at first, but one of the most important steps to implementing AI is understanding what it is and what it is not – what it can and can’t do well.  Because of how AI learns, you can see how it can become very good at doing one narrow thing.  Much to the chagrin of the AI fear-mongers, there is no good general AI right now.

Many AI gurus will start off by asking you to identify a problem.  While I don’t disagree with that, I also believe it’s an unreasonable request.  AI is super complex, and despite the research and detail I’ve put into this article, most people will still only have an extremely high-level idea of what AI is and where it can or should be used.  There’s a great Dilbert cartoon by Scott Adams that explains it quite well.


Because of the limited scope of current AI implementations, the best problems are those with a defined input and a defined output, where decisions can be made very quickly and repetitively.  Here are some good examples from Andrew Ng’s Harvard Business Review article, What Artificial Intelligence Can and Can’t Do Right Now:

Figure: What machine learning can do, by Andrew Ng, via HBR.org

The main idea is that it is typically not appropriate to ask a non-expert how best to implement something as complex as an AI.  At the same time, effective implementation of an AI requires the input of business owners and subject matter experts.  What makes this so difficult is that AI, as a final product, is not well defined.  The features, however, as described above, are.  So here are a few things to consider when analyzing a problem to see if AI will be a good fit.

Data Sources

Data is the fuel for any artificial intelligence system.  Without data, or with bad data, there’s just an empty box of cool technology that doesn’t do a whole lot or, to keep with the engine metaphor, a race car engine that won’t start.  Of course, with machine learning and deep learning principles we’re no longer limited to considering only structured data (now you know what that is), which opens up many new sources of information.  Nevertheless, we must consider what data we have access to in order to solve any given problem with AI.

At this stage, the right question to ask is: what data do I own or have access to that will support the AI’s decision making?  If, for instance, you’d like to use an AI to support a call center, what will be the primary sources of data from which to pull answers?  Do you log the most common questions and answers?  How do you evaluate the success or quality of an answer?  How many languages will it need to support?  And the one that many people forget about when it comes to AI: how will you generate a question-and-answer set large enough to train and test it?

It should also be noted that when I refer to data, I should really qualify that with HUGE amounts of data.  Smart photo recognition systems are neat to play with on Facebook and Google Photos, but the reality is that behind the scenes there are tens, if not hundreds, of thousands of example photos, already tagged with the correct answer, that have been used for training and testing the AI.

Data, it should be noted, is also a scarce resource.  In most cases technology can be copied, but data is a differentiator.

Problem Identification

Problem identification is listed here as the second thing to consider, but really you’ll need to think about the problem and the data together.  Without either of these things your project will never get out of the gate.  The few times I’ve been lucky enough to run a project like this, we started out with an intensive working session to cover these two topics: what an AI is, followed quickly by brainstorming problems.

As noted above, a good AI problem has an input, some type of interpretation of that input based on what the AI has learned, and some type of response.  Another feature of a good problem is that it’s repetitive, or that it involves consuming and making sense of large amounts of information and then applying that knowledge to a particular problem.

Oftentimes the user journey will cross with the brand goals to produce a list of desired behaviors and primary barriers.  From there we architect a solution, most commonly leveraging channels and content to play some role in behavior change with a tracked outcome.  While a sound model, that kind of thinking will naturally eliminate many excellent internal options for AI implementations.

Recall that AI is good at repetitive tasks that require the evaluation of an input against a large set of data.  Nowhere in that definition did anyone say it has to be consumer-only.  An AI, for example, can replace a call center with a 24-hour chatbot that helps users find the right solution.  We’ve seen AI play as complex a role as cancer diagnostics and compound identification based on genetic research.  We’ve also seen it help plan out your meeting calendar, tell you the weather, and write articles for newspapers, Twitter, and blogs.  In short, AI is helping us do a lot, which is why it’s best to make sure your team understands AI before you start problem solving.

A 4 Step User-Centric Approach to AI Projects

Now I’m not saying this is the exact right approach for everyone, but I would like to leave you with a tangible takeaway on how to start from scratch and get an AI project off the ground.  While your problem or industry may be different, hopefully these steps will apply. If you have a different approach, I’d love to hear about it.

My assumption here is that the AI project you are looking to take on is of some significance.  There are plenty of “off the shelf” AI-driven tools out there, but I don’t consider them true AI projects so much as projects you would have done anyway that happen to contain AI technology.  An example of this is a service I evaluated that leveraged natural language processing to map customer sentiment, versus the traditional approach of surveying customers.  While very interesting and quite neat, the AI was not core to the functionality I was looking to implement so much as an added feature or selling point above and beyond it.

Step 1. Document the User Journey

Most organizations I’ve worked with have a pretty good understanding of the sales cycle; some even have it written down and documented.  From there they have an informed perspective on the user journey as it applies to the sales cycle.  But very few have actually documented it with respect to content and experience.

When I say document it, I mean really document it: a flow diagram that maps all the various channels, messages, and key customer interaction points with your organization.  Examples could include digital channels like CRM, web, and banners, but it may also include direct mail, rep visits, and incoming phone calls for information.  This is NOT an easy thing to do and, to make it more difficult, it’s constantly changing with each new initiative.  But I believe that if you are serious about taking on a project like AI integration, a documented user journey is an invaluable tool.

I should note that a user journey does not have to be external.  There are many internal processes, like cancer research, that can benefit from AI.  Plotting your user journey for these things can be just as helpful as doing it for external customers. A well-documented user journey is like a good business plan – the true value is not in the having of it, but in the making of it.  That’s when you’ll find the opportunities.

Step 2. Review the User Journey Considering Where AI could Help

Once you have the user journey, you’ll start to see the opportunities.  Are there areas where your customers can get “stuck” and may need help?  Do you have important tasks that just won’t scale well with increased business or geography?  Are there repetitive tasks that could be automated?  Are there choices that require a lot of data, processing of information, or validation of submissions?  Is there an interaction that could be reinforced with information to make the experience better?  These areas and more are all great opportunities to consider whether AI could be right to help you out.

Step 3. Data

I said it before and I’ll say it again now: data is the fuel that makes the AI motor run.  I can solve the problem of getting to Mars by building a rocket, but without fuel it’s just an expensive paperweight.  Once you have your journey and have used it to identify points where AI can help, the next thing to do is consider the data.  Take a look at each of your potential AI projects and consider what kind of data you’ll need to support it.  Ask yourself whether you have the data or need to get it.  If you’ll need to leverage multiple data sets, figure out what it will take to integrate them.  Data can be a differentiator, but it can also be a show stopper, so think about data early in the AI process.

Step 4. Find an AI that Meets Your Needs

As you know by now, AI is not just one thing – it could be anything.  Now that you have a really firm grasp of your user experiences and have identified a few potential points for improving them, it’s time to start your research.  There are hundreds, and soon to be thousands, of AI vendors.  Some of them just use AI, and some of them build AI.  Now that you have a much better idea of what AI can do (check out the sections above if you skipped them), you are much better prepared to begin finding the right solution for you.

Some of the Most Popular AI Platforms

Most people, if they’re not thinking of HAL or the Terminator, will recall IBM’s Deep Blue beating Kasparov at chess or IBM’s Watson beating Jennings on Jeopardy!.  Through a platform called Bluemix, IBM has opened up Watson for use.  Of course you’ll have to pay for the privilege, but along with a whole lot of functionality you’ll get IBM Watson’s services team walking you through the process (and a really excellent and informative demo if you can get to their offices).

Here are a few more options in no particular order:

TensorFlow is an open source AI platform.  It was originally developed by the Google Brain team and later open sourced to the general public.  This is a good option if you are looking to geek out a bit and dig into AI programming on a budget.  Another open source option is Alchemy.  It has many of the features you would want as a developer but, like most open source, the support is community based.

Microsoft’s Azure Machine Learning platform is focused more toward the business customer, much like IBM’s Watson.  It’s a fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions, among other things… many other things.

Amazon’s AWS AI services are also pretty comprehensive, including features for image recognition, text-to-speech, voice and text chatbots, machine learning, and more.  What’s nice about this is that Amazon has deliberately set it up in four levels, ranging from AI Services down to AI Infrastructure, depending on how deep, technical, and customized you want to go.

A Word of Caution

The last group consists of everyone else.  According to VentureScanner there are nearly 1,000 different AI companies and start-ups spanning approximately a dozen categories.  In most cases, unless you are an innovative team looking to do something completely new that the world has not seen before, you’ll be going with one of these.

Figure: 957 artificial intelligence companies, from VentureScanner

Finding the right one, however, is the real key.  And I tell you, it’s not easy.  There’s an old joke that sums up the AI (and many other) marketplaces very well:

A group of people die and go to heaven.  At the pearly gates Saint Peter says, “We’re not quite sure what to do here.  You’re not good, you’re not bad, so we don’t know what to do with you.  Rather than make a choice, we’re going to give you the option: explore both heaven and hell for one day each, and on the third day choose where you’d like to spend eternity.”  So the group, already pretty confident, decides to check out heaven first.  They walk around and it’s very serene.  People are reading on fluffy clouds in the sun, listening to harp music.  Everyone seems content and happy.

The next day they all head downstairs to evaluate the underworld.  Much to their surprise, there is a party going on.  The weather is perfect, people are dancing and drinking, the devil is the DJ, music is playing, and everyone is more than content – they’re having an epic time!  The devil waves them in and, astonished, they join the party.  After hitting a few rounds of golf with him, as fast as a day can go by, they wave good-bye to the party and head back to the gates to make a choice.  “Well,” they say to Saint Peter, “we can’t believe we’re saying this, but we’re choosing the netherworld!”  “Very well,” says Saint Peter, and they are cast down.

No sooner do they get there than they realize it’s hot, there’s no more music, and everyone is miserable.  “Hey devil,” they shout, “what gives?  What happened to the people, the dancing, and the partying?”  “That was the demo,” he replies.

While it is a joke, just remember that AI can do a lot.  So if you find an organization that is good with AI, odds are they’re open to trying a lot of things.  This is good and bad.  Without focus, they’re stuck in perpetual beta and you’re essentially buying their AI expertise.  When it comes time to deliver, you need to be confident they’re the right team to deliver the solution to your problem.

Closing Remarks

Okay, so that’s AI, machine learning, and deep learning in a nutshell.  It can be highly technical and somewhat overwhelming, but I find there is a trick to making sense of it: think of the AI as a newborn baby, an intelligent being with all the possibilities of becoming anything in life.  S/he could grow up to be a doctor, lawyer, president, or your best sales rep.  They could become your partner in reporting the news, as is the case with Heliograf at The Washington Post, or your partner in cancer care, as at Memorial Sloan Kettering.

But unlike with that child, the decision as to what they become is up to you.  You will choose what data it learns from and teach it right from wrong.  You will direct and shape its intelligence to help solve your business and marketing problems.  It’s really up to you.  The best thing you can do is have focus and a solid understanding of AI, the problem you are solving, and the data you’ll use to support it.
