The dangers of pushing tech too far. Elon Musk donates $10 million to help save humanity from human-level artificial intelligence


Elon Musk made his billions in the technology space, but he still recognises the dangers of pushing tech too far.

Musk recently made a $10 million donation to the Future of Life Institute (FLI), a volunteer-run non-profit research and outreach organisation “working to mitigate existential risks facing humanity.”

FLI’s first area of concern is the “potential risks from the development of human-level artificial intelligence.”

“AI is potentially more dangerous than nukes.” Just what we need: more methods to destroy the human race in the hands of government sociopaths. RT reports:

Beginning January 22, Musk’s donation will support an open grant competition for AI researchers and AI-related research in fields such as economics, law, ethics, and policy. Musk, the CEO of Tesla and SpaceX, has called AI potentially more dangerous than nukes.

“Here are all these leading AI researchers saying that AI safety is important,” said Musk in a released statement. “I agree with them, so I’m…committing $10M to support research aimed at keeping AI beneficial for humanity.”

There are mounting concerns among technology and scientific leaders that too much emphasis and money goes towards research into “speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems,” while little is spent on ensuring that these new advances actually benefit society.

“While heavy industry and government investment has finally brought AI from niche academic research to early forms of potential world-transforming technology, to date relatively little funding has been available to help ensure that this change is actually a net positive one for humanity,” said Professor Anthony Aguirre, FLI co-founder.

Musk joined Stephen Hawking and other technologists in an open letter calling on the artificial intelligence science community to devote time to research to make sure the advances have positive outcomes and can be controlled.

FLI’s suggested research priorities include how to keep AI automation from leading to job destruction and further income inequality, ethical questions around autonomous vehicle collisions, and the implications of autonomous weapons complying with humanitarian law.

“It’s best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive,” Musk said, according to The Verge. “This is a case where the range of negative outcomes, some of them are quite severe. It’s not clear whether we’d be able to recover from some of these negative outcomes. In fact, you can construct scenarios where recovery of human civilization does not occur. When the risk is that severe, it seems like you should be proactive and not reactive.”




The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of The Duran.
