In an episode of The Twilight Zone called "A Thing About Machines," the main character Bartlett Finchley is a bit of an oddball. He lives a solitary life, aside from visits from the machine repairmen that he seems to need quite often. Turns out, his typewriter, television and other appliances are trying to tell him something: Get out!

Since the birth of the computer age in the 1940s, futurists have speculated that within a few decades machines would become smarter than humans, take over, and enslave the human race. The next prediction is in... the machines will have to wait a few more decades for world domination.

Nick Bostrom, director of the Future of Humanity Institute at Oxford University, wrote a book called "Superintelligence: Paths, Dangers, Strategies," which spells out the results of four surveys of artificial intelligence researchers, according to Bloomberg.

For Bostrom's purposes, human-like intelligence is defined as "one that can carry out most human professions at least as well as a typical human," according to Bloomberg.

This means intuition and logic, Mr. Spock, not mere imitation. The ability to learn. Emotions.

According to Bloomberg, Bostrom suggests that the creation of AI isn't the big deal; rather, the "what happens next" factor is the AI big bang. Once machines know how to reason and how to improve themselves, and reach superintelligence, old Bartlett Finchley is in for it.

Tesla CEO Elon Musk has warned of the foreseeable dangers of AI. In October, he said that AI could eliminate humans like "spam" email, according to news.com.au. On Aug. 3, Musk tweeted: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

Will technology keep advancing to the point where a cyborg visits from the future to kill Sarah Connor and prevent her from having a son? Humans aren't very good at predicting the future - not now, and apparently not in the future either (proof: an identical cyborg later came back to protect John Connor. Make up your mind!).

And who says that artificial intelligence will still be a priority in a few decades?

Theoretical physicist Stephen Hawking asked in a May 2014 column in The Independent whether we are taking AI seriously enough. "Success in creating AI would be the biggest event in human history," wrote Hawking. "It might also be the last."