We didn't evolve through billions of years to remain animals.
Friedrich Nietzsche was important to me, in teaching that it's okay to strive to improve the human being.
My goal is to try to tell the public that America could use more science and technology in all aspects of its life.
It amazes me that we spend 20% of the US budget on defense and far-off wars, and not on fighting cancer, disease, and aging.
Science and technology can solve all the world's problems, and historically they have been shown to make the world better and better.
Transhumanism literally means "beyond human." It's using science and technology to radically change and improve the human species and experience.
Transhumanist technology will do much for the world that the world can't really imagine yet, including overcoming some of the climate issues the world is facing.
The American Dream has become a death sentence of drudgery, consumerism, and fatalism: a garage sale where the best of the human spirit is bartered away for comfort, obedience and trinkets. It's unequivocally absurd.
Maybe we want to keep A.I. to the level of a 16-year-old or a 17-year-old adolescent, rather than some fully maxed-out artificial intelligence that becomes 10,000 times smarter than us in just a matter of years. Who knows what could happen? It could be a very dangerous scenario.
I think we're already getting to a stage where the basic artificial intelligences are discovering moral systems. I think, in many ways, moral systems are simply things that we have programmed into ourselves, either through childhood or just through genetic, ingrained ideas. So the same thing applies when you talk about machines.
I'm a fan of the simulation theory. I tend to think that most of our existence, if not all of it, is part of a hologram created by some type of other life form, or some type of other artificial intelligence. Now, it may be impossible for us to ever know that, but a bunch of recent studies in string theory physics have proved that.
I advocate as a futurist and also as a member of the Transhumanist Party, that we never let artificial intelligence completely go on its own. I just don't see why the human species needs an artificial entity, an artificial intelligence entity, that's 10,000 times smarter than us. I just don't see why that could ever be a good thing.
Whoever creates an artificial intelligence first has such a distinct military advantage over every other nation on the planet that they will forever, or at least indefinitely, rule the planet. It's very important that a nice country, a democratic country, develops A.I. first, to guard against other A.I.s being developed that might be negative, or evil, or used for military purposes.
I think whatever nation or whoever develops one artificial intelligence will probably make it so that artificial intelligence always stays ahead of any other developing artificial intelligence at any other point in time. It might even do things like send viruses to a second artificial intelligence, just so it can wipe it out, to protect its grounds. It's gonna be very similar to national politics.
If there is any person that I do follow somewhat closely, at least whose ideas I like, it's been Friedrich Nietzsche, but he's been dead a few hundred years. And at the same time, I wouldn't say that I actually, from a political standpoint, like many of his ideas. His ideas just happened to be at the core of a lot of my own beliefs about trying to modify my body and live indefinitely. What really applies is an evolutionary instinct to become a better entity altogether.
The chances of human beings being the only intelligent form of life in the universe are so minuscule that it's really kind of crazy to actually - no scientist could ever argue that we would be alone. It's much more likely that there are hundreds of thousands of other intelligences and other life forms out there in the universe just based on a strictly mathematical formula. And what that means is that artificial intelligence has probably already occurred in the universe.
What I advocate for is that, as soon as we get to the point when artificial intelligence can take off and be as smart, or even 10 times more intelligent than us, we stop that research and focus on the research of cranial implant technology or brainwave technology. And we make that so good so that, when artificial intelligence actually decides - when we actually decide to flip the on-switch - human beings will also be a part of that intelligence. We will be merged, basically directly.
Once you've created an intelligence so smart, the real job of that intelligence is to protect itself from other intelligences becoming more intelligent than it. It's just kind of like human beings. The way you look at money or the way you look at the success of your child, you always want to make sure that as far as it gets, it can protect itself and continue forward. So I think any type of intelligence, no matter what it is, is going to have this very basic principle to protect the power that it has gained.
If there's something else already out there in the universe, it would almost certainly have put limits on our growth of intelligence. And the reason it would have put limits on us is because it doesn't want us to grow so intelligent that we would one day maybe take away their superpowered intelligence. Whatever advanced intelligence evolves, it always puts a roadblock in the way of other intelligences evolving. And the reason this happens is so nobody can take away one's power, no matter how far up the ladder they've gone.
TEF is predicated on logic, a simple wager that every human faces: if a reasoning human being loves and values life, they will want to live as long as possible, which is the desire to be immortal. Nevertheless, it's impossible to know whether they're going to be immortal once they die. To do nothing doesn't help the odds of attaining immortality, since it seems evident that everyone will die someday and possibly cease to exist. To try to do something scientifically constructive towards ensuring immortality beforehand is the most logical conclusion.
In the future, it's very possible you could have an artificial intelligence system that can run the country better than a human being. Because human beings are naturally selfish. Human beings are naturally after their own interests. We are geared towards pursuing our own desires, but oftentimes those desires conflict with the benefit of society at large, or with the greater good. Whereas, if you have a machine, you will be able to program that machine to, hopefully, benefit the greatest good, and really go after that.