There's no reason why AI shouldn't work; we just don't know how to build one that does. It also depends on what you mean by AI, since plenty of structures could fit that category. For example, the neural nets Google is working on for self-driving cars could eventually be developed into AI, and their structure is fundamentally similar to a human brain (which doesn't mean they would act like a human, but they wouldn't be running on pure probability math and decision theory either).

The problem is that we don't know how to make an AI that takes itself and its own thought processes into account, but that's not fundamentally impossible. The best guess is that sapience comes from using some output as input, which happens automatically in the human brain because of how things are connected. The hard part is figuring out how to do that with math, in a way that doesn't lead to an infinite loop. As for axioms, it depends on what you mean: if you're just giving it some general rules that should stop it from self-destructing, that's patching over a fundamental problem, and you can't account for everything.
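To make the "output as input without an infinite loop" point concrete, here's a toy sketch (my own illustration, not anyone's actual AI design): feed a system's output back in as its next input, and guard against looping forever with a convergence check plus an iteration cap. The function and parameter names are made up for the example.

```python
def reflect(state, step, max_iters=100, tol=1e-9):
    """Feed a system's output back in as its next input until it
    stabilizes (reaches a fixed point) or an iteration cap is hit,
    which is the guard against the infinite-loop problem."""
    for _ in range(max_iters):
        new_state = step(state)
        if abs(new_state - state) < tol:  # output stopped changing
            return new_state
        state = new_state
    return state  # cap hit: no fixed point found in time

# Example feedback rule: x -> (x + 2/x) / 2 settles at sqrt(2)
result = reflect(1.0, lambda x: (x + 2 / x) / 2)
print(round(result, 6))  # → 1.414214
```

Obviously a brain isn't iterating one number, but the same shape of problem applies: without some stopping structure, pure self-reference just recurses forever.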



Anyway, here's a funny thing that shows the disconnect Taylor's powers are causing: digital Taylor isn't an AI. The scan was so accurate that she requires an environment a human could live in; she's basically a human in a box. If the simulation were changed so it was just her brain, with physics modified to keep it functioning without any environment, you could argue she resembles an AI. But she still wouldn't be artificial, and improvements to her would be limited to what could be done to a human brain. So no singularitaylor.
