The everyday Turing test of the Internet

This summer, on the 23rd of June, marked the 100th anniversary of the birth of Alan Turing, the great British mathematician and philosopher who envisioned the existence of thinking machines and laid down the foundations of computing and artificial intelligence.

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing proposed a way to determine whether a machine is actually intelligent, known today as the “Turing test.”

The Turing test setup

During the test, a human judge (noted as C in the image above) asks a series of questions through a terminal to a pair of individuals, A and B, one of whom is actually a machine impersonating a human being. The judge has to determine which is the human and which is the machine based only on their answers. If the judge cannot tell the difference, the assumption is that the machine is as intelligent as the human.
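For what it's worth, the structure of the test is simple enough to sketch in a few lines of Python. Everything below is my own illustrative toy, not anything from Turing's paper: the respond functions are placeholders, and the naive judge simply guesses.

```python
import random

def human_respond(question):
    # Placeholder: in the real test this is a person at a terminal.
    return "As a human, I would say it depends: " + question

def machine_respond(question):
    # Placeholder: the machine tries to answer as a human would.
    return "As a human, I would say it depends: " + question

def imitation_game(questions, judge):
    # Randomly hide the human and the machine behind the labels A and B.
    players = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        players = {"A": machine_respond, "B": human_respond}

    # The judge sees only the written answers, never the players themselves.
    transcript = [(q, players["A"](q), players["B"](q)) for q in questions]
    guess = judge(transcript)

    truth = "A" if players["A"] is machine_respond else "B"
    return guess == truth  # did the judge unmask the machine?

# A judge who cannot tell the difference does no better than chance,
# which is exactly the condition under which the machine "passes".
naive_judge = lambda transcript: random.choice(["A", "B"])
print(imitation_game(["Can machines think?"], naive_judge))
```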

Do we really want machines like that?

Computer programs that can impersonate human beings are proving to be a big problem today. Ironically, when I posted my thoughts on the Singularity a few months ago, I received a comment along these lines*:

Even so, wouldn’t it make sesne to describe what those who read this and are scientists or developers can do? Or even how to become a scientist or developer capable of working on this (especially if one is a college student choosing a field of study, although there are also adult education options), so as to make a living by helping to create and shape the Singularity. (Another audience would be people with funding that they would like to direct towards said scientists or developers. Said funders rarely wish to support said efforts merely by giving money to cheerleaders.) Advocating the support of something, without being able to point to specific, active projects trying to accomplish some identifiable component of that thing, can be worse than pointless: it’s hype that doesn’t actually affect any material thing, and sometimes even detracts from the projects actually trying to accomplish the intended goal.

The comment was marked as spam by the Akismet plugin of WordPress, but I was not so sure. You see, it looked like a perfectly valid opinion of a human being, and it was well aligned with the subject of my own post. If it weren't for the suspicious “spelling mistake” that spam bots are known to insert randomly to make their text look more authentic, and the vague impression that something was missing or incomplete, I would have had serious doubts about the plugin's judgment on this one.

Of course I was wrong, and one Google search later I found out that the above is actually an extract from a comment on a related post somewhere else on the Internet.
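What eventually gave the comment away was that its text already existed elsewhere on the Internet. A crude version of that check can even be automated. Here is a minimal sketch of the idea using word n-gram “shingles”; the shingle size and the sample strings are my own arbitrary choices, and a real service like Akismet surely does far more than this:

```python
def shingles(text, n=5):
    # Break the text into overlapping word n-grams ("shingles").
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, known_text):
    # Fraction of the candidate's shingles that also occur in a known text.
    cand, known = shingles(candidate), shingles(known_text)
    return len(cand & known) / len(cand) if cand else 0.0

# A comment that largely duplicates text already published elsewhere was
# probably scraped by a bot rather than written for the post at hand.
known = "wouldn't it make sense to describe what those who read this can do"
comment = "even so, wouldn't it make sense to describe what those who read this can do"
print(overlap_ratio(comment, known))  # high ratio -> suspicious
```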

Full circle

It seems our programs are becoming much better at imitating us. Not just that: they are also becoming much better at recognizing programs that try to imitate us. And we end up depending on the latter, programs that are “smarter,” or rather more efficient than us in certain domains, to protect ourselves from the former.

In any case, we should always keep in mind that there are humans who create and operate those programs on both ends.

* The actual spam comment had been deleted; this is an approximate extract I recreated from the original comment that served as its source.


On singularity

I am not an AI expert, just a humble programmer who writes code for a living. But like a lot of people out there, I find the concept of Artificial Intelligence fascinating. Recently I came across a discussion on the Technological Singularity, a subject that has raised a lot of concerns since the inception of AI.

The way I perceive the Singularity, one of the key requirements for our civilization to reach that point is the existence of a machine or piece of software that is able to improve itself. Looking at the problem from that perspective, I have the feeling that we are still very far from the Singularity right now.

Yes, there has been a great deal of progress in the tool-set we use to design and create better machines, at both the hardware and the software level. Without those tools we would never be able to implement even the cheapest microprocessor you can buy today, the network that allows you to read these lines, the search engine you use every day on that network, the smartphone of your choice that you carry around with you, or any of the other little miracles of our modern society. We simply wouldn't be able to handle the complexity. Today at work I don't need to specify every register or memory address when I write code, thanks to a complex system of compilers and frameworks that does that job for me. Within split seconds they make decisions and optimizations in the resulting binary code that would probably take me months to come up with, if I could at all.

So in a sense, we need the existing machines (hardware plus software) in order to improve them, and they need us (humans) in order to improve. And for us to improve them, we need to gradually allow them to handle more complexity for us and make them able to achieve more abstract goals. Eventually they will start predicting our requests and become proactive, like the search engines today that suggest queries as you type. They will continue to improve themselves so that they can better serve us. Once they can do that completely autonomously, we will have reached the point where they improve faster than we can follow.
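The “predicting our requests” part already exists in embryonic form. As a toy illustration of proactive suggestion at its simplest, purely my own sketch and nothing like what a real search engine runs, past queries can be ranked by frequency under the prefix typed so far:

```python
from collections import Counter

class SuggestionBox:
    """Toy autocomplete: suggests past queries that start with the typed prefix."""

    def __init__(self):
        self.history = Counter()

    def record(self, query):
        # Learn from every query the user submits.
        self.history[query.lower()] += 1

    def suggest(self, prefix, k=3):
        # Rank matching past queries by how often they were asked.
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.history.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda qn: -qn[1])[:k]]

box = SuggestionBox()
for q in ["turing test", "turing machine", "turing test", "technological singularity"]:
    box.record(q)
print(box.suggest("tur"))  # ['turing test', 'turing machine']
```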

Personally I believe that the Singularity is inevitable, following the path I described above. But we are far from it today: far from the point where we can give a machine abstract goals on general-purpose tasks that it can carry out better than a human, and even further from the point where it can predict our requests and improve itself without any human intervention.

Instead of worrying about it right now, we should focus on how we can get there faster, on how we can make better tools that help us make better tools. Because in the end, that is what those machines are: extensions of ourselves that serve the same purpose we do. Whatever that is.


The broken promise of Artificial Intelligence

Recently I came across this article about how Star Trek artists imagined the iPad 23 years ago. It is becoming commonplace these days for technology and science to catch up with human imagination. However, there are still a few big promises from science fiction that remain unfulfilled. One of them has to be Artificial Intelligence, the holy grail of computer science.

Even though there has been over half a century of research in the field since the official establishment of the term in 1956, the smartest devices known to the general public today are smartphones. It's not that there hasn't been any progress. On the contrary, there have been impressive discoveries during this period, and the very concept of intelligence has been explored and understood through that process to a much greater extent than ever before in human history. Still, we have failed to produce something that comes anywhere near deserving to be described as “intelligent,” and we have a pile of failures staring at us, unable to comply.

HAL 9000 Eye

“I’m sorry, Dave. I’m afraid I can’t do that.”

So where is HAL 9000?

Here is the answer to this burning question, found in Wikipedia's article on the History of Artificial Intelligence:

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on hard science: many leading AI researchers also believed that such a machine would exist by the year 2001.[157]

Marvin Minsky asks “So the question is why didn’t we get HAL in 2001?”[158] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[159] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[160] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[161] There are many other explanations and for each there is a corresponding research program underway.

Ten years after that prediction, we are still 30 years away from a machine that will be able to “do anything a man can do.” Every year.
