“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra

The Turing Test, proposed by the mathematician Alan Turing in 1950, was developed
to examine whether an AI has reached human-level intelligence.

Simplified:
a person conducts text chats with both a human and an AI;
if the person is unable to discern which chat partner is the AI,
then the AI has passed the Turing Test.

The Loebner Prize holds a Turing Test contest every year.

It took me some time to realize that the Turing Test is not so much about
intelligence as it is about lying and empathy.

If an AI wants to pass the Turing Test, it has to lie to its chat partner;
and to be able to lie, it has to develop some level of empathy
and some level of self-awareness.

Among other criticisms, the Chinese Room Argument states that no consciousness
is needed to perform such a task, and therefore other tests have been developed.

Personally, I prefer the Metzinger Test:
a hypothetical event in which AIs begin to debate with human philosophers
and successfully defend their own theory of consciousness.

I am not sure whether the Singularity is going to take off,
but I guess that the philosophers' corner is one of the last domains that AIs
are going to conquer, and if they succeed, we can be pretty sure to have another
apex on Earth.

Turing predicted that by the year 2000 machines would fool 30% of human judges.
He was wrong; the Loebner Prize still has no Silver Medal winner in the
25-minute text chat category.

So: -1 point for the Singularity taking off.