Interview: Bob Rogers on AI, Augmented Intelligence, and Their Applications
By Gemma Joyce
Published August 2nd 2017
I recently sat down to chat with Bob Rogers, Intel’s Chief Data Scientist for Analytics and AI.
I sought out answers to some of the most popular questions related to artificial and augmented intelligence. The entire interview was illuminating, shedding light on aspects that many people wouldn’t know about.
So what is AI? Artificial intelligence is human-like intelligence that works in a way loosely similar to our brains, though not quite the same, of course. It supports our capabilities, simplifies our lives, and improves our ability to make use of information.
AI can do what we can’t in the sense that it can process far more data than we can at any given instant. Current AI technology is powerful enough to identify a kitten in a picture, or to identify an object lying on the roadside and then determine the chances of that object moving onto the road.
Autonomous vehicles are a great example of AI technologies in action.
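The kitten-in-a-picture example is worth unpacking. Production systems use deep convolutional networks, but the core idea, mapping an input into a feature space and picking the closest known class, can be sketched with a simple nearest-centroid rule. Everything below (the feature vectors, the class names) is invented for illustration.

```python
import numpy as np

def nearest_centroid(features, centroids):
    """Return the label whose centroid is closest to the feature vector."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical class centroids, as if learned from labeled examples.
centroids = {
    "kitten": np.array([0.9, 0.1, 0.2]),
    "roadside_debris": np.array([0.1, 0.8, 0.7]),
}

# Hypothetical features extracted from a new photo by some feature model.
photo_features = np.array([0.85, 0.15, 0.25])
print(nearest_centroid(photo_features, centroids))  # -> kitten
```

A real classifier learns both the feature extractor and the decision rule from data; this sketch only captures the "closest known class" step.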
In the future, AI will be able to identify the most important elements in huge data chunks, and then take advantageous actions.
So here’s the question that most of us ask: How will AI show up in our lives? For the foreseeable future, AI is there augmenting our capabilities, allowing us to do more, with better accuracy, in less time.
Let’s see what Bob Rogers has to say about AI, augmented intelligence, and their interesting applications.
There is a fine line between AI and augmented intelligence. Bob Rogers describes it as, “the difference in how we apply the technology.”
AI has progressed rapidly in the recent past; the era began with us describing information and extracting meaning from it. Today’s AI solutions are far more advanced: they can process huge amounts of data in a matter of seconds.
Analytical systems in the past were just basic question-answering systems.
So for instance, if you wanted to know how many products had been sold in the past quarter, the technology would prepare a simple report.
Later on, machine learning algorithms were used, and somewhat more advanced questions were answered, for instance grouping customers meaningfully based on their characteristics.
This is referred to as clustering, which paved the way for supervised learning. Today, AI can perform much more complex tasks, such as determining which customers are most likely to buy another product, and when, based on previous buying habits.
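The customer-grouping step described above can be sketched with a minimal k-means implementation. The customer data here is invented (say, monthly spend and visits per month); real pipelines would use a library implementation and far richer features.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: alternate between assigning points to the
    nearest center and moving each center to its group's mean."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Invented customers: [monthly_spend, visits_per_month].
customers = np.array([[10.0, 1], [12, 2], [11, 1],   # low spend, rare visits
                      [90, 8], [95, 9], [88, 7]])    # high spend, frequent
labels, centers = kmeans(customers, k=2)
```

After a few iterations the two natural groups separate, which is exactly the "grouping customers meaningfully" capability the text describes.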
AI can now understand elements of unstructured data, such as images, text and videos. It can now perform simple image recognition tasks faster, more accurately and more consistently than humans can.
Going back to the kitten in the photo, AI becomes more interesting when we can start asking contextual questions.
Are there other objects in the photo? What else is there in the background? A simple example of this kind of AI in action is automatically organizing similar-themed photos on your phone or via an app.
Most of us worry that AI will replace human cognitive work. Think about it this way: AI can identify people breaking a specific traffic rule in tens of millions of locations simultaneously, and capture instantaneous images to document its findings.
Humans can’t work at this kind of scale, and frankly wouldn’t want to spend all day on a single cognitive task. AI is not really replacing us; it is augmenting our capabilities.
Technology promises to harness human-like intelligence to solve complex challenges and global issues. The “Intel Inside, Safer Children Outside” project, aimed at ending child sexual abuse, is one such example.
A non-profit organization matches children in escort ads to children who have gone missing. The database of missing children includes 460,000 images which must be searched every time an escort ad containing a child’s image is identified by law enforcement.
While humans have been doing this work manually, an AI system can achieve results far more quickly. When we combine AI and human intelligence, the results improve significantly.
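Searching a large image database for a match is typically done with perceptual hashing or learned embeddings rather than pixel comparison. As a hedged stand-in, the sketch below represents images by short binary hashes (invented values, invented case IDs) and matches a query by nearest Hamming distance.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def best_match(query_hash, database):
    """Return (record_id, distance) of the closest database entry."""
    return min(((rid, hamming(query_hash, h)) for rid, h in database.items()),
               key=lambda t: t[1])

# Hypothetical database of image hashes (values and IDs invented).
database = {"case-001": 0b10110010,
            "case-002": 0b01001101,
            "case-003": 0b11110000}

query = 0b10110011  # hash of an image flagged by law enforcement
match, dist = best_match(query, database)
```

A small Hamming distance suggests the same underlying image even after re-encoding or minor edits, which is why perceptual hashes are popular for this kind of matching at scale.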
In other words, AI lets us connect the dots that form the big picture.
AI developers are looking for opportunities to augment our abilities, and to develop tools to do tasks that we cannot do.
Consider healthcare professionals, who only have seconds or minutes to make huge decisions.
For example, trauma surgeons typically monitor a large number of different patient factors to help determine which actions to take. These factors are related in very complex ways, so AI can track, integrate, and highlight the most relevant changes in a patient in real time, allowing the surgeon to assess the situation and respond faster to save lives.
We don’t want AI to make the care decisions, but doctors can do their job much better if AI gives them the information they need in order to make these decisions.
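One simple way to surface "the most relevant change" to a clinician is to flag readings that deviate sharply from a patient's recent baseline. The sketch below is purely illustrative (not a clinical tool) and uses invented heart-rate readings with a basic z-score rule.

```python
import statistics

def flag_anomaly(history, reading, threshold=3.0):
    """Return True if `reading` is more than `threshold` standard
    deviations from the mean of the recent `history`."""
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        return reading != mean
    return abs(reading - mean) / sd > threshold

# Invented recent readings forming the patient's baseline.
heart_rate_history = [72, 75, 71, 74, 73, 72, 76, 74]
```

A reading of 120 would be flagged for attention, while 75 would not; the decision of what to do with the flag stays with the doctor, which matches the point above.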
Or here’s another example: Can you identify the most important information in a collection of a billion documents when a keyword is queried?
AI can not only identify it, but prioritize it as well. Search engines learn continuously to deliver better results in the future.
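Real search engines combine many ranking signals, but the basic prioritization idea can be sketched with TF-IDF scoring: a document ranks highly when the query term is both frequent in it and distinctive across the collection. The documents below are invented examples.

```python
import math
from collections import Counter

def tfidf_rank(docs, term):
    """Return document indices sorted by TF-IDF score for `term`, best first."""
    term = term.lower()
    tokenized = [d.lower().split() for d in docs]
    # Document frequency: how many documents contain the term at all.
    df = sum(1 for toks in tokenized if term in toks)
    idf = math.log(len(docs) / (1 + df)) + 1  # smoothed inverse doc frequency
    # Term frequency in each document, weighted by idf.
    scores = [Counter(toks)[term] / len(toks) * idf for toks in tokenized]
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)

docs = [
    "intel builds chips and ai accelerators",
    "ai and augmented intelligence support clinicians",
    "ai ai everywhere ai helps search engines rank results",
]
```

Here `tfidf_rank(docs, "ai")` puts the third document first, since "ai" makes up a third of its tokens; scaling the same idea to a billion documents is an engineering problem (inverted indexes, sharding) rather than a conceptual one.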
Augmented intelligence in real-life applications lets us make use of images and video, recognize speech, and interpret text.
Going back to the clinical decision-making example from above, AI applications are often seen in the healthcare industry.
They can help identify patients who are transitioning into a life-threatening state, and suggest appropriate treatments. AI can provide real-time actionable insights so that clinicians can make more informed decisions.
AI can coordinate disparate systems in healthcare and distill key information from unstructured data, such as doctors’ dictation and patient history embedded in scanned documents. As a result, doctors in one healthcare organization can receive the critical information they need from other healthcare information systems, bridging the gaps between them.
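Distilling structure from free-text dictation is normally an NLP-model task, but the flavor of it can be shown with a simple pattern match. The dictation text and medication below are invented, and a regex is a deliberately crude stand-in for a clinical NLP pipeline.

```python
import re

# Invented free-text dictation, standing in for a scanned or dictated note.
dictation = (
    "Patient seen for follow-up. Continue lisinopril 10 mg daily. "
    "History of hypertension noted in scanned records."
)

def extract_medications(text):
    """Find 'drug dose unit' patterns such as 'lisinopril 10 mg'."""
    return re.findall(r"([a-z]+)\s+(\d+)\s*(mg|mcg|g)\b", text, re.IGNORECASE)

meds = extract_medications(dictation)
```

A production system would handle abbreviations, misspellings, and negation ("discontinue lisinopril"), which is precisely where learned models outperform hand-written rules.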
Machine learning can also group patients naturally on the basis of similar conditions. Similarly, it can identify heart failure patients and flag those at greatest risk of heart failure complications.
Recently, contextually intelligent agents, or CIAs, have emerged as a powerful tool to help patients navigate their care following acute illness or surgery. A CIA is a coach or assistant that uses AI to interact with a patient in a natural way through text or speech, and also uses AI to prioritize recommended actions to guide the patient through the recovery process.
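The "prioritize recommended actions" half of a CIA can be sketched with a priority queue of recovery tasks. The tasks and priorities below are invented; a real agent would also use NLP for the conversational half and would learn priorities rather than take them as fixed numbers.

```python
import heapq

class RecoveryCoach:
    """Toy recovery assistant: surfaces the most urgent remaining task."""

    def __init__(self):
        self._tasks = []  # min-heap of (priority, task); lower is more urgent

    def add_task(self, priority, task):
        heapq.heappush(self._tasks, (priority, task))

    def next_action(self):
        """Recommend the most urgent remaining task, or report completion."""
        if not self._tasks:
            return "All recovery tasks complete"
        _, task = heapq.heappop(self._tasks)
        return f"Next step: {task}"

coach = RecoveryCoach()
coach.add_task(2, "walk for ten minutes")
coach.add_task(1, "take antibiotics with breakfast")
```

Calling `next_action()` repeatedly walks the patient through tasks in priority order, which is the guided-recovery behavior the text describes.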
Similar applications can be found in the fields of fraud detection, financial services and retail.
Today, businesses can capture and store important data, but here is the question: How can value be derived from this data? For most companies, moving forward strategically is the difficult part.
Bob suggests that you should first think of a question for which you want an answer. Think of AI as doing a simple task, and providing you with valuable information about it. After all, as Bob says, “Take it from a rocket scientist, this is NOT rocket science!”
Training can be done quickly and efficiently through a variety of machine learning algorithms, including deep learning using techniques like transfer learning.
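The transfer-learning idea can be sketched without a deep-learning framework: reuse a frozen "pretrained" feature extractor and train only a small head on the new task. Everything below is an invented stand-in, with a fixed random projection playing the role of frozen pretrained layers and synthetic data playing the role of the new task.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(2, 8))  # frozen "pretrained" layer, never updated

def features(X):
    """Fixed feature map standing in for a pretrained network's layers."""
    return np.tanh(X @ W_frozen)

# Synthetic new task: classify points by whether x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Train only the small linear head via plain gradient descent.
w = np.zeros(8)
for _ in range(300):
    p = 1 / (1 + np.exp(-features(X) @ w))          # sigmoid predictions
    w -= 0.5 * features(X).T @ (p - y) / len(X)     # logistic-loss gradient step

acc = ((1 / (1 + np.exp(-features(X) @ w)) > 0.5) == y).mean()
```

Because only the 8 head weights are trained, training is fast even on modest hardware, which is the practical appeal of transfer learning that the text alludes to.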
So how can you incorporate AI into your business operations?
In practice, you will begin with your current systems and your traditional algorithms. Gradually, you want to add new capabilities that let you deal with new and difficult problems, and that allow you to introduce unstructured data in the form of images and videos.
More data means better insights. So the key is starting with your expertise, what you are best at, and building upwards from there. Intel has already helped several organizations incorporate AI using state-of-the-art tools for machine learning, deep learning, and memory-based learning, which form AI’s core.
If you’re interested in exploring AI and beyond, come and join us at the Now You Know Europe Conference this October, in London. Find out more here.