I am not an AI expert, just a humble programmer who writes code for a living. But like a lot of people out there, I find the concept of Artificial Intelligence fascinating. Recently I came across a discussion on the Technological Singularity, a subject that has raised a lot of concerns since the inception of AI.
The way I see it, one of the key requirements for our civilization to reach the singularity is the existence of a machine or piece of software that is able to improve itself. Looking at the problem from that perspective, I have the feeling that we are still very far from the singularity right now.
Yes, there has been a great deal of progress in the tool-set we use to design and create better machines, at both the hardware and the software level. Without those tools we would never be able to implement even the cheapest microprocessor you can buy today, the network that allows you to read these lines, the search engine you use every day on that network, the smartphone of your choice that you carry around with you, or any of the other little miracles of our modern society. We simply wouldn’t be able to handle the complexity. Today at work I don’t need to specify every register or memory address when I write code, thanks to a complex system of compilers and frameworks that does that job for me. They make decisions and optimizations on the resulting binary code in a split second that would probably take me months to come up with, if I could at all.
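A tiny, concrete example of a compiler quietly making a decision for us, sketched here in Python and assuming the standard CPython interpreter: the bytecode compiler constant-folds arithmetic on literals, so the expression below is evaluated once at compile time and never again at run time.

```python
# CPython's compiler folds constant expressions before the code runs.
def kilobyte():
    return 2 ** 10  # folded to 1024 at compile time, not computed per call

# The folded result is baked into the function's constants table,
# which we can inspect without ever calling the function.
print(1024 in kilobyte.__code__.co_consts)
```

It is a trivial optimization, but it is exactly the kind of bookkeeping we have handed over to our tools so that we can think at a higher level of abstraction.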
So in a sense, we need the existing machines (hardware + software) in order to improve them, and they need us (humans) in order to improve. And for us to improve them, we need to gradually allow them to handle more complexity for us and make them able to achieve more abstract goals. Eventually they will start predicting our requests and become proactive, like the search engines today that suggest searches for you. They will continue to improve so that they can better serve us. Once they can do that completely autonomously, we will have reached the point where they can improve faster than we can follow.
Personally I believe that the singularity is inevitable, following the path I described above. But we are far from it today: far from the point where we can give a machine abstract goals on general-purpose tasks and have it carry them out better than a human could, and even further from the point where it can predict our requests and improve itself without any human intervention.
Instead of worrying about it right now, we should focus on how to get there faster: how to make better tools that help us make better tools. Because in the end, that is what those machines are: extensions of ourselves that serve the same purpose we do. Whatever that is.