
Q&A: What Machines Can Learn From People And What We Can Learn From Them

A woman holds up a tablet to showcase data analytics conducted by IBM's Watson technology. (IBM)

Guruduth Banavar is an executive at IBM leading the team developing a new generation of cognitive systems — don't call it artificial intelligence — known as Watson.

Watson, of course, is the supercomputer most famous for its victory against two men on Jeopardy! in 2011. IBM is now lending Watson's computing power to startups and businesses across all types of industries, including healthcare, energy, education and even food. It sort of tried to sing to Bob Dylan in a new commercial.

Banavar is one of the researchers working to develop a new Turing test, a way to assess a machine's intelligence. I spoke with him this week about the possibilities, limitations and ethics of creating ultra-smart machines. The interview has been edited for length and clarity.

Alina Selyukh: I was hoping to use the Turing test to make sure you're not a machine.

Guruduth Banavar, IBM's vice president of cognitive computing. (IBM)

Guru Banavar: Go ahead.

I guess I don't really have a panel of judges for you to convince.

If someone offers to administer a Turing test, a machine would not say, "Go ahead!" Or maybe it would, I don't know.

Where does your work on updating the Turing test stand?

It's a scientific journal article that we've written; it will probably appear in one of the AAAI (Association for the Advancement of Artificial Intelligence) magazines. It was a very interesting forum, where many of the luminaries of AI got together to take a retrospective of how far we have come with the Turing test and of what a future Turing test should look like. It cannot be this idea of imitating humans, mimicking humans, because that's very limiting. The intent in the bigger picture of what Alan Turing had in mind was to do everything the way a person would. And I'm actually not convinced that that is the right way to think about the machine, at all.

So you see a distinction between artificial and cognitive intelligence. Can you explain the nuance?

The whole history of artificial intelligence was around building machines that mimic humans. Every aspect of humanity, whether doing simple analytical problems or very complex natural language kinds of dialogue, or some strategic thinking about complex issues that have to do with large geopolitical kinds of environments — all of those things were in the scope of what AI was going to do. And it includes interpersonal relationships and emotion and intuition. But we think of cognitive computing not as competing with or mimicking humans, but rather as complementing people.

Doesn't the ability to complement human intelligence require the ability to mimic a human in the first place?

Yes and no. There are aspects of intelligence that of course are embodied in a human, like basic logic and basic arithmetic, but there are lots of things humans cannot do. ... If you think about finding patterns in enormous amounts of data, which is what we have everywhere around the world, nobody can really understand all the patterns in that data, even when it is fairly well-curated. Humans fundamentally cannot do lots of things, but the machines can. ... The amount of data that's available today in the world is just enormous, and most of that data is never processed or even seen; we like to refer to it as "dark data." Think about, say, video footage from the streets of New York City; nobody will ever see all of that. But if somebody was able to see the behavior of certain kinds of traffic at a particular intersection, maybe they'd notice something's not working. ... All of the data exists, but we don't extract insights and act on those insights, and we'd like Watson to do that where there's business value.
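To make the "dark data" point concrete, here is a minimal, purely illustrative sketch of the kind of pattern-spotting Banavar describes. The intersections, counts and threshold are all invented for this example; nothing here reflects how Watson actually works.

```python
# Toy illustration of surfacing a signal from otherwise-unwatched data:
# flag an intersection whose latest traffic count strays far from its
# historical pattern. All data below is invented.
from statistics import mean, stdev

# Hypothetical hourly vehicle counts per intersection; the last entry
# in each list is the most recent reading.
counts = {
    "5th & Main":   [120, 115, 130, 118, 125, 122, 119, 30],  # sudden drop
    "Oak & Center": [80, 85, 78, 82, 79, 81, 84, 83],
}

for intersection, hourly in counts.items():
    history, latest = hourly[:-1], hourly[-1]
    baseline, spread = mean(history), stdev(history)
    # Flag readings more than three standard deviations from the baseline.
    if abs(latest - baseline) > 3 * spread:
        print(f"{intersection}: count {latest} vs. usual ~{baseline:.0f} "
              "-- maybe something's not working")
```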

So machines can do more than humans, but do they first have to be able to do all the things that humans can do? I guess, put more simply: what are the limitations of cognitive systems?

There are some aspects of human behavior, like value judgments or common sense reasoning or creating completely new and interesting artistic expression — all of those things are uniquely human, and we all believe that that is what enables us to make some pretty relevant and sometimes very complex decisions to make progress. Cognitive systems are not intended to simply try and recreate the same kinds of decision processes that humans go through; rather, they are intended to discover and analyze and reason in ways that humans can't. ... Cognitive systems learn at scale. They can also reason with a specific goal in mind. The goal itself is not something that computers or cognitive systems come up with autonomously; they're not self-directed like humans.

You say autonomy isn't the goal. Does that mean there will always be a human to oversee AI's work?

When given a goal, the work to get to the goal can be done by the cognitive systems without supervision, but the goal itself is something that has to be provided to the system. All the cases we're working on ... are motivated by practical applications, where people are confronted with very difficult tasks but are not able to process all the information to get to those goals. ... I don't believe that cognitive systems, at least, are going to be built for self-direction.

What about liability then? Watson does work in healthcare; let's say a computer misdiagnoses a patient, who's responsible?

A cognitive system is a decision-support system, and the ultimate responsibility for the decision rests with the physician, who is going through not only the digitized data about the patient but also a lot of the factors that may not be digitized, which includes various social and other cultural or historical aspects of the patient. Regulation requires doctors to be at the point of decision, and that's how we expect the cognitive-enhanced or -augmented medical professions to look in the future as well.

Do you think there should be a code of ethics for AI?

It is essential for technologists to be aware of and thinking about all of the ethical implications of their work every day. There's always been this idea that when you have powerful technologies, at the end of the day it's about how we put them to use. We should all be educated and constantly be thinking about it.

Without a code of ethics, how do you think people should mitigate or follow up on potential unintended side-effects?

You're getting into the realm of regulations and controls. I'll take an example: when we had a lot of data available on the Internet, there were a bunch of people who said that we need privacy for data, and that's a human right. Because of the activism of a bunch of people who decided that they want privacy, eventually we came up with laws like HIPAA. And that is a social and political process: where human rights are at stake, people will stand up and say, "This makes sense and therefore we need to have this kind of law around it." So I expect that social process to work in any new environment, including cognitive computing. I don't think there's any difference.

Of all of Watson's projects, which is the most fascinating to you?

To me, the drug discovery work is really fascinating. In a typical drug discovery scenario there is a lot of investment, but also it takes a long time. ... People may not be very good at finding some remote experiment that somebody did in a remote corner of the world and connecting it to some experiment they're trying to do today in another part of the world. But Watson can go off and make those connections. When those connections are made, what used to take years can take weeks. And that means we can probably come up with better drugs and come up with them faster, and we can test them in the field faster and come up with cures faster.
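As a toy illustration of the connection-making he describes (the papers, compounds and targets below are invented, and a real system like Watson reads unstructured text rather than tidy records), one could group far-flung experiments that mention the same compound:

```python
# Invented example: link experiments from different labs that studied
# the same compound, so a researcher in one part of the world can find
# related work from another.
experiments = [
    {"paper": "Lab A, 2009", "compound": "compound-17", "target": "kinase-X"},
    {"paper": "Lab B, 2015", "compound": "compound-17", "target": "kinase-Y"},
    {"paper": "Lab C, 2012", "compound": "compound-42", "target": "kinase-X"},
]

# Bucket experiments by compound; any bucket with more than one entry
# is a potential cross-lab connection.
by_compound = {}
for exp in experiments:
    by_compound.setdefault(exp["compound"], []).append(exp)

for compound, hits in by_compound.items():
    if len(hits) > 1:
        papers = ", ".join(h["paper"] for h in hits)
        print(f"{compound} was studied independently in: {papers}")
```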

In a recent paper, IBM says that cognitive systems will help us learn. I'd posit most people would say technology is actually pretty distracting. How will these systems help our society learn?

We fundamentally think that cognitive computing can help us design material for educating every individual in a unique way, in a way that aligns well with the learning style of that person. Every person is actually strong at certain ways of learning, and discovering that is a difficult process. ... We are working pretty hard on building cognitive systems that discover a person's learning style and then personalize the material to suit the individual, including the kinds of questions and exercises, or the modality: images versus text, symbols versus words.
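A drastically simplified sketch of that personalization idea (the learners, modalities and scores are invented, and this makes no claim about how Watson models learning styles): serve each learner the modality on which they have performed best so far.

```python
# Invented example: pick the presentation modality on which a learner
# has historically scored best, and serve new material in that form.
quiz_scores = {
    "learner-1": {"images": 0.91, "text": 0.62, "symbols": 0.70},
    "learner-2": {"images": 0.58, "text": 0.88, "symbols": 0.81},
}

def preferred_modality(learner: str) -> str:
    scores = quiz_scores[learner]
    # The learner's best-scoring modality so far is our guess at their style.
    return max(scores, key=scores.get)

for learner in quiz_scores:
    print(learner, "->", preferred_modality(learner))
```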

Do you think it's implausible for AI systems to achieve human qualities like intuition or empathy?

Many things that we said were implausible are no longer implausible, so I don't want to say something is implausible. But the question is, what is our design point, and it is to do the things that humans can't do so easily, because that's where we think the business value is.

So you're not teaching Watson to feel empathy or gain intuition?

We are certainly teaching Watson to understand people when they express some kind of emotion. That's called affective computing. ... But your question is: will Watson ever feel any of these things, like empathy?

Is it too far into sci-fi territory?

Exactly.

Do you watch AI sci-fi movies or avoid them?

I grew up with them! I've already watched "The Martian" twice in two weeks. It's really good.

What's your opinion of HAL?

That kind of predates me, I would say.

What do you think of "Her"? Can I count on Watson to keep me company on lonely rainy weekends one day?

I think it's a great movie.

But is it a plausible scenario?

It's a great movie.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alina Selyukh is a business correspondent at NPR, where she follows the path of the retail and tech industries, tracking how America's biggest companies are influencing the way we spend our time, money, and energy.