Humans have 3 percent human error, and a lot of companies can't afford to be wrong 3 percent of the time anymore, so we close that 3 percent gap with some of the technologies. The AI we've developed doesn't make mistakes.
There's a reason the Chinese government is very concerned about Ai Weiwei. It's because he has all of these ingredients in his life that allow him to attract enormous attention across a very broad spectrum of the population.
Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops and working closely with the vibrant Chinese AI research community.
The biggest ethical challenge AI is facing is jobs. You have to reskill your workforce not just to create a wealthier society but a fairer one. A lot of call centre jobs will go away, and a radiologist's job will be transformed.
We're at a point now where we've built AI tools to detect when terrorists are trying to spread content, and 99 percent of the terrorist content that we take down, our systems flag before any human sees them or flags them for us.
Elon Musk is worried about AI apocalypse, but I am worried about people losing their jobs. The society will have to adapt to a situation where people learn throughout their lives depending on the skills needed in the marketplace.
A lot of the game of AI today is finding the appropriate business context to fit it in. I love technology. It opens up lots of opportunities. But in the end, technology needs to be contextualized and fit into a business use case.
The government adoption of AI will not bring about a government being run by robots. Instead, our government will continue to be run by people, with help from algorithms dramatically improving government services for all Americans.
I often tell my students not to be misled by the name 'artificial intelligence' - there is nothing artificial about it. AI is made by humans, intended to behave by humans, and, ultimately, to impact humans' lives and human society.
It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.
As one of the leaders in the world for AI, I feel tremendous excitement and responsibility to create the most awesome and benevolent technology for society and to educate the most awesome and benevolent technologists - that's my calling.
Even companies like Baidu and Google, which have amazing AI teams, cannot do all the work needed to get us to an AI-powered society. I thought the best way to get us there would be creating courses to welcome more people to deep learning.
We really believe that long-term, the way AI will drive is similar to the way humans drive - we don't break the problem down into objects and vision and localization and planning. But how long it will take us to get there is questionable.
Many researchers are exploring other forms of AI, some of which have proved useful in limited contexts; there may well be a breakthrough that makes higher levels of intelligence possible, but there is still no clear path yet to this goal.
I'm a geek through and through. My last job at Microsoft was leading much of the search engine relevance work on Bing. There we got to play with huge amounts of data, with neural networks and other AI techniques, with massive server farms.
I think that AI will lead to a low cost and better quality life for millions of people. Like electricity, it's a possibility to build a wonderful society. Also, right now, I don't see a clear path for AI to surpass human-level intelligence.
I co-founded Affectiva with Professor Rosalind W. Picard when we spun out of MIT Media Lab in 2009. I acted as Chief Technology and Science Officer for several years until becoming CEO mid-2016, one of a handful of female CEOs in the AI space.
If you were a computer and read all the AI articles and extracted out the names that are quoted, I guarantee you that women rarely show up. For every woman who has been quoted about AI technology, there are a hundred more times men were quoted.
The real use of AI in industry is generally for very narrow pattern-matchers - a better search algorithm, an object-detection algorithm, etc. These things are tools which we can use - for good or evil. But they're nothing like self-aware beings.
As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. Understanding what AI can do and how it fits into your strategy is the beginning, not the end, of that process.
AI has been making tremendous progress in machine translation, self-driving cars, etc. Basically, all the progress I see is in specialised intelligence. It might be hundreds or thousands of years or, if there is an unexpected breakthrough, decades.
An AI utopia is a place where people have income guaranteed because their machines are working for them. Instead, they focus on activities that they want to do, that are personally meaningful like art or, where human creativity still shines, in science.
Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they'd never do anything are now happy to call them AI and try and get some of the money.
'Sunspring,' the first known screenplay written by an AI, was produced recently. It is awesome. Awesomely awful. But it's worth watching all ten minutes of it to get a taste of the gap between a great screenplay and something an AI can currently produce.
I know we've had AI films, but they've been quite specific in their scope. The scope of 'Humans' is a world set up where this technology is universally accepted. I haven't seen anything that's dealt with it in that multi-layered, every-layer-of-society way.
One of the things Baidu did well early on was to create an internal platform that made it possible for any engineer to apply deep learning to whatever application they wanted to, including applications that AI researchers like me would never have thought of.
I became interested in AI in high school because I read 'Goedel, Escher, Bach,' a book by Douglas Hofstadter. He showed how all their work in some ways fit together, and he talked about artificial intelligence. I thought 'Wow, this is what I want to be doing.'
I see the 'z' in 'Humanz' as referring to robots, AI, programming, brainwashing, indoctrination. And it's a question to us: are we human, or are we humanz? Have we lost the ability to think for ourselves? Do we just believe what we're told? That's how I see it.
I am often asked what the future holds for Emotion AI, and my answer is simple: it will be ubiquitous, engrained in the technologies we use every day, running in the background, making our tech interactions more personalized, relevant, authentic and interactive.
I don't think there's a particular technology that will set the trajectory for us moving forward. We don't want to be one of the companies that say AI is the next big thing, let's go build an AI application for Robinhood. That might not work. It might be awkward.
'Indigo Prophecy' already brought a lot of new features to the traditional adventure genre, including the Action system, MultiView, Bending Stories, etc. 'Heavy Rain' will include features like advanced physics and AI, realistic characters and living environments.
When AI approximates Machine Intelligence, then many online and computer-run RPGs will move towards actual RPG activity. Nonetheless, that will not replace the experience of 'being there,' any more than seeing a theatrical motion picture can replace the stage play.
The conceptual artist Ai WeiWei illustrates the schizoid society that rapid change has produced - sometimes by reassembling Ming-style furniture into absurd and useless arrangements, or by carefully painting and antiquing a Coca-Cola logo on an ancient Chinese pot.
I will continue my work to shepherd in this important societal change... In addition to working on AI myself, I will also explore new ways to support all of you in the global AI community so that we can all work together to bring this AI-powered society to fruition.
On the path to ubiquity of AI, there will be many ethics-related decisions that we, as AI leaders, need to make. We have a responsibility to drive those decisions, not only because it is the right thing to do for society but because it is the smart business decision.
Making AI more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains.
The amount of money and industrial energy that has been put into accelerating AI code has meant that there hasn't been as much energy put into thinking about social, economic, ethical frameworks for these systems. We think there's a very urgent need for this to happen faster.
Baidu's AI is incredibly strong, and the team is stacked up and down with talent; I am confident AI at Baidu will continue to flourish. After Baidu, I am excited to continue working toward the AI transformation of our society and the use of AI to make life better for everyone.
Even though chess isn't the toughest thing that computers will tackle for centuries, it stood as a handy symbol for human intelligence. No matter what human-like feat computers perform in the future, the Deep Blue match demands an indelible dot on all timelines of AI progress.
I am looking into quite a few ideas in parallel and exploring new AI businesses that I can build. One thing that excites me is finding ways to support the global AI community so that people everywhere can access the knowledge and tools that they need to make AI transformations.
There is a lot of work out there to take people out of the loop in things like medical diagnosis. But if you are taking humans out of the loop, you are in danger of ending up with a very cold form of AI that really has no sense of human interest, human emotions, or human values.
The development of exponential technologies like new biotech and AI hint at a larger trend - one in which humanity can shift from a world of constraints to one in which we think with a long-term purpose where sustainable food production, housing, and fresh water is available for all.
I'm not a big fan of self-driving cars where there's no steering wheel or brake pedal. Knowing what I know about computer vision and AI, I'd be pretty uncomfortable with that. But I am a fan of a combined system - one that can brake for you if you fall asleep at the wheel, for example.
The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals and have its own will and will use its faster processing abilities and deep databases to beat humans at their own game.
In many areas, the E.U. regulates to help the worst sort of giant corporate looters defending their position against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics.
A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research.
AI does not keep me up at night. Almost no one is working on conscious machines. Deep learning algorithms, or Google search, or Facebook personalization, or Siri, or self-driving cars, or Watson - those have the same relationship to conscious machines as a toaster does to a chess-playing computer.
OpenAI is doing important work by releasing tools which promote AI to be developed in the open. Compute power is largely produced by NVIDIA and Intel and still relatively expensive but openly purchasable. Blockchains may be the key final ingredient by providing massive pools of open training data.
Every company has messy data, and even the best of AI companies are not fully satisfied with their data. If you have data, it is probably a good idea to get an AI team to have a look at it and give feedback. This can develop into a positive feedback loop for both the IT and AI teams in any company.
I imagine if you're one of those genius people working on AI, the desire to find out what's possible is presumably the driving factor, but I hope there are just as many people who are thinking about what we actually want. Just because something's possible it doesn't mean it's going to be good to us.