About a week ago, Elon Musk called artificial intelligence a "fundamental existential risk for human civilisation." He said robots would be able to do everything better than humans, and really emphasised the EVERYTHING part of the comment. On Sunday, during his now-infamous public BBQ session, Mark Zuckerberg called the comments "pretty irresponsible" and said Musk was drumming up a doomsday scenario. So who is correct? What do you think about the future of AI? Comment below.
e27 Discussions: Is Artificial Intelligence an existential risk to humanity?
Elon Musk says Artificial Intelligence is a "fundamental existential risk"; Mark Zuckerberg called his comments irresponsible. Who is correct?