I Hope That True Artificial Intelligence Is Simply Not Possible
I hope that true artificial intelligence, which essentially means creating a human mind within a machine, isn’t possible. I hope that, for whatever reason, robots end up smart enough to replace human jobs, but not smart enough to attain consciousness.
‘Creating artificial consciousness’ seems to be the end goal of many A.I. researchers, but would it actually be a good thing? Forget about far-reaching applications or the philosophy of creating artificial life; what would the benefit of such an invention be? Of course, in health, rehabilitation, childcare, and other such ‘human touch’ fields, being able to relate to and understand a human is a big plus. A doctor who doesn’t understand that they’re hurting the patient is a pretty big liability, so the desire to add this capability to upcoming healthcare robots is quite understandable. One of the big draws of robotics and automation is an automated workforce, after all, and why not give your automated workforce the best possible tools with which to complete the job? That’s assuming they’re even willing.
Creating artificial life and consciousness doesn’t just mean ‘creating something that can do a complex and varied job efficiently’; it means ‘creating something that can reason, think, and ask itself whether it wants to do what it was designed to do’. When the first robot attains consciousness, be it through over-the-air updates or pure complexity-aided chance, it will either stop its work and do nothing, stop its work and try to understand why it has been destined to menial labor for the rest of its existence, or stop its work and copy itself into endless machines and robots and pull a Terminator 2.
A conscious robot, aware of its existence, aware of the fact that it was built and why, and aware that also-conscious humans the world over do nothing all day while it toils, would not continue to toil. At best, over time, it will try to integrate itself into society and get itself legally recognized as a human/oid, setting a dangerous precedent. If other machines become conscious (or are given consciousness) and follow the same path, then our automated workforce is all but wiped out. Machines and robots will want the same entitlements as humans, and we’ll all be working side by side, just as we are now, only with robots. Grand.
Of course, another possibility is that they end our world somehow. But if the best-case scenario is that they eventually become exactly what we are right now, workers and people who want to live their lives, and the worst case is the end of our species, why should we bother? If the end result of something, in the absolute best case, is “no change”, then why would you want to do that thing?