When you have a large amount of data that is labeled so a computer knows what it means, and you have a large amount of computing power, and you're trying to find patterns in that data, we've found that deep learning is unbeatable.
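To make the ingredients in that claim concrete, here is a minimal, hypothetical sketch of the recipe the quote describes: labeled examples, a small neural network, and a held-out check on the patterns it finds. The synthetic data and model choices are illustrative assumptions, not anything from the quoted speaker.

```python
# Minimal sketch of the supervised-learning recipe described above:
# labeled data in, a small neural network, and a check on the patterns found.
# The synthetic dataset and model size are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "A large amount of data that is labeled": here, a small synthetic stand-in.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Trying to find patterns in that data": a tiny neural-network classifier.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Held-out accuracy: how well the learned patterns generalize.
print("test accuracy:", model.score(X_test, y_test))
```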

It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.

An AI utopia is a place where people have a guaranteed income because their machines are working for them. Instead, they focus on activities they want to do, activities that are personally meaningful, like art or science, where human creativity still shines.

I'd like to make a fundamental impact on one of the most exciting intellectual questions of all time. Can we use software and hardware to build intelligence into a machine? Can that machine help us solve cancer? Can that machine help us solve climate change?

Some people have proposed universal basic income, UBI, basically making sure that everybody gets a certain amount of money to live off of. I think that's a wonderful idea. The problem is, we haven't been able to guarantee universal healthcare in this country.

I became interested in AI in high school because I read 'Gödel, Escher, Bach,' a book by Douglas Hofstadter. He showed how all their work in some ways fit together, and he talked about artificial intelligence. I thought 'Wow, this is what I want to be doing.'

People thrive on genuine connections - not with machines, but with each other. You don't want a robot taking care of your baby; an ailing elder needs to be loved, to be listened to, fed, and sung to. This is one job category that people are - and will continue to be - best at.

I'm not a big fan of self-driving cars where there's no steering wheel or brake pedal. Knowing what I know about computer vision and AI, I'd be pretty uncomfortable with that. But I am a fan of a combined system - one that can brake for you if you fall asleep at the wheel, for example.

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals and have its own will and will use its faster processing abilities and deep databases to beat humans at their own game.

A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research.

Cloud computing, smartphones, social media platforms, and Internet of Things devices have already transformed how we communicate, work, shop, and socialize. These technologies gather unprecedented data streams leading to formidable challenges around privacy, profiling, manipulation, and personal safety.

We don't want A.I. to engage in cyberbullying, stock manipulation, or terrorist threats; we don't want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don't want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Instead of expecting truck drivers and warehouse workers to rapidly retrain so they can compete with tireless, increasingly capable machines, let's play to their human strengths and create opportunities for workers as companions and caregivers for our elders, our children, and our special-needs population.

Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines. Yet at the same time, the things that are easiest for a person - like making sense of what they see in front of them, speaking in their mother tongue - the machines really struggle with.

I think it's important for us to have a rule that if a system is really an AI bot, it ought to be labeled as such. 'AI inside.' It shouldn't pretend to be a person. It's bad enough to have a person calling you and harassing you, or emailing you. What if they're bots? An army of bots constantly haranguing you - that's terrible.

What are we going to do as automation increases, as computers get more sophisticated? One thing that people say is we'll retrain people, right? We'll take coal miners and turn them into data miners. Of course, we do need to retrain people technically. We need to increase technical literacy, but that's not going to work for everybody.

Netbot was the first comparison shopping company. We realized comparison shopping can be quite tedious if you are driving from one furniture store to another. On the Internet, you can automatically look at a bunch of different stores and see where you can get the best price on a computer or some such thing, so that was the motivation.
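The mechanism behind that idea is simple to sketch: query several stores for the same item and report the cheapest offer. The store names and prices below are invented for illustration; this is not Netbot's actual implementation.

```python
# Toy sketch of automated comparison shopping: look across several
# (hypothetical) store catalogs and return the best price for an item.
def best_price(item: str, catalogs: dict[str, dict[str, float]]) -> tuple[str, float]:
    """Return (store, price) for the cheapest listing of `item`."""
    offers = {store: prices[item] for store, prices in catalogs.items() if item in prices}
    store = min(offers, key=offers.get)
    return store, offers[store]

# Invented example catalogs; a real system would fetch these from store sites.
catalogs = {
    "StoreA": {"laptop": 999.0, "desk": 150.0},
    "StoreB": {"laptop": 949.0},
    "StoreC": {"laptop": 1020.0, "desk": 140.0},
}
print(best_price("laptop", catalogs))  # ('StoreB', 949.0)
```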

AI works really well when you couple AI in a raisin bread model. AI is the raisins, but you wrap it in a good user interface and product design, and that's the bread. If you think about raisin bread, it's not raisin bread without the raisins. Right? Then it's just bread, but it's also not raisin bread without the bread. Then it's just raisins.

Scientists need the infrastructure for scientific search to aid their research, and they need it to offer relevancy and ways to separate the wheat from the chaff - the useful from the noise - via AI-enabled algorithms. With AI, such an infrastructure would be able to identify the exact study a scientist needs from the tens of thousands on a topic.
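As a rough illustration of the "relevancy" idea, the sketch below ranks a handful of invented paper abstracts against a scientist's query so the most useful ones surface first. TF-IDF similarity is a deliberately simple stand-in for the AI-enabled ranking the passage envisions, and all names and texts here are assumptions for illustration.

```python
# Minimal sketch of relevance ranking: score each abstract against a query
# and surface the best matches first. TF-IDF is a simple stand-in for the
# AI-enabled ranking described above; the abstracts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning for tumor detection in radiology images.",
    "A survey of reinforcement learning in robotics.",
    "Climate model ensembles for regional precipitation forecasts.",
]
query = "machine learning for cancer imaging"

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(abstracts + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Highest-scoring abstract is the best match for the query.
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```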
