Luddite - is the Singularity near?

The Singularity

In physics, a singularity is a point in spacetime where our current theories
are no longer valid; we are literally unable to describe what happens inside,
because the density becomes infinite.

The technological Singularity, as described by Transhumanists,
is a stage of technological development at which humans are no longer able
to understand the ongoing process. The technological environment starts to
feed its own development in a feedback loop - computers help to build better
computers, which help to build better computers, which help to build better
computers... and so on.

So, when will the technological Singularity take off?

Considering the feedback loop, it is already present, maybe since the first
computers were built.

Considering the density of information processing that exceeds human
understanding, we may have reached that point too.

Imagine a computer technique that is easy to set up and use,
that outperforms any human at its task,
but whose inner workings we cannot really explain - it is a black box.

Such a technique is already present (and currently hyped) => ANNs,
Artificial Neural Networks.

Of course we do know what happens inside, because we built the machine, but
when it comes to the question of reasoning - why the machine did this or
that - we really have a black box in front of us.
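To illustrate the point, here is a minimal sketch in Python (a toy two-layer
network with made-up random weights, not any real system): every number
inside the model can be inspected, yet none of them reads like a reason.

  import numpy as np

  # Toy two-layer network: 3 inputs -> 4 hidden units -> 1 output.
  # The weights are random placeholders, purely for illustration.
  rng = np.random.default_rng(0)
  W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
  W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

  def forward(x):
      h = np.tanh(x @ W1 + b1)                     # hidden activations
      return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # output "decision"

  x = np.array([0.2, -1.0, 0.7])
  print("decision:", forward(x))  # a number we can act on...
  print("weights:", W1)           # ...and numbers we can read,
                                  # but no human-readable reasoning.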

So, humans already build better computers with the help of better computers,
and humans use machines that outperform them at specific tasks whose results
they cannot really explain...

obviously, +1 points for the Singularity to take off.

Zuse's Devil's Wire

German computer pioneer Konrad Zuse discussed the mechanism of a feedback
loop between computation result and executed program in his 1983 lecture
"Faust, Mephistopheles and Computer" and coined the term Devil's Wire.

In the early days of computer history, the program to compute and the
data to compute on were kept separate.

Nowadays computers use the same memory for both,
so it is possible to write programs that manipulate their own code.
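As a minimal sketch of this idea (in Python, with a made-up toy function,
nothing taken from Zuse's lecture): a running program that treats its own
code as data and rewrites it.

  # Code and data share the same memory, so code is just data.
  source = "def step(x):\n    return x + 1\n"   # the program, stored as data
  namespace = {}
  exec(source, namespace)                        # turn the data into code
  print(namespace["step"](41))                   # 42

  # The running program now rewrites its own program text...
  source = source.replace("x + 1", "x * 2")
  exec(source, namespace)                        # ...and executes the new version
  print(namespace["step"](41))                   # 82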

Zuse says that behind every technology Mephistopheles stands and grins,
yet the modern world needs computers to solve current and upcoming
problems - but better read the lecture for yourself...

+1 points for the Singularity to take off.

Super AI in Sci-Fi

Books and movies address our collective fears, hopes and wishes,
and there seem to be mainly five story lines concerning AI in Sci-Fi...

Super AI takes over the world
Colossus, Terminator, Matrix

Something went wrong
2001: A Space Odyssey, Das System, Ex Machina

Super AI evolves, more or less peacefully
Golem XIV, A.I., Her

The Cyborg scenario, man merges with machine
Ghost in the Shell, #9, Transcendence

There are good ones, and there are bad ones
Neuromancer, I, Robot, Battlestar Galactica


+1 points for the Singularity to take off.

The Turing Test

“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra

The Turing Test, proposed by the mathematician Alan Turing in 1950, was
developed to examine whether an AI has reached human-level intelligence.

Simplified:
a person holds text chats with a human and with the AI;
if the person is not able to discern which chat partner is the AI,
then the AI has passed the Turing Test.
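As a toy sketch of the setup (in Python; the two reply functions are invented
stand-ins, not real chat bots): the judge only sees text labeled A and B and
has to guess which one is the machine.

  import random

  def human_reply(prompt):                 # stand-in for the human partner
      return "Well, that depends on how you look at it."

  def ai_reply(prompt):                    # stand-in for the AI partner
      return "Well, that depends on how you look at it."

  def turing_test(judge_guess, prompt="Do you ever lie?"):
      partners = {"A": human_reply, "B": ai_reply}
      if random.random() < 0.5:            # hide who is who behind the labels
          partners = {"A": ai_reply, "B": human_reply}
      print("A:", partners["A"](prompt))
      print("B:", partners["B"](prompt))
      ai_label = "A" if partners["A"] is ai_reply else "B"
      return judge_guess != ai_label       # True means the AI fooled the judge

  print("AI passed:", turing_test(judge_guess="A"))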

The Loebner Prize holds a Turing Test contest every year.

It took me some time to realize that the Turing Test is not so much about
intelligence, but about lying and empathy.

If an AI wants to pass the Turing Test it has to lie to the chat partner,
and to be able to lie, it has to develop some level of empathy,
and some level of self-awareness.

Besides other criticism, the Chinese Room argument states that no consciousness
is needed to perform such a task, and therefore other tests have been developed.

Personally, I prefer the Metzinger Test:
a hypothetical event in which AIs start to debate with human philosophers
and successfully defend their own theory of consciousness.

I am not sure if the Singularity is going to take off,
but I guess that the philosophers' corner is one of the last domains that AIs
are going to conquer, and if they succeed we can be pretty sure to have another
apex species on Earth.

Turing predicted that by the year 2000 machines would fool 30% of human judges;
he was wrong, the Loebner Prize still has no Silver Medal winner for the
25-minute text chat category.

So, -1 points for the Singularity to take off.

On Peak Human

One of the early Peak Human prophets was Malthus;
in his 1798 book, 'An Essay on the Principle of Population',
he postulated that the human population grows exponentially
while food production grows only linearly,
so population growth will fluctuate around an upper limit.
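A back-of-the-envelope sketch of the argument in Python (the growth rates
below are invented numbers; only the shape of the two curves matters):

  # Malthus in numbers: population grows exponentially, food only linearly.
  population = 1.0   # arbitrary units
  food = 2.0         # starts out plentiful

  for generation in range(8):
      population *= 1.5          # exponential growth
      food += 0.5                # linear growth
      status = "enough" if food >= population else "shortage"
      print(f"gen {generation}: population {population:5.1f}, food {food:4.1f} -> {status}")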

Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968)
that we would reach a limit in the 1980s.

Meadows et al. concur in 'The Limits to Growth: The 30-Year Update' (2004)
that we already reached an upper limit in the 1980s.

In 2015, Emmott concluded in his movie 'Ten Billion' that we have already
passed the upper bound.

UN predictions say we may hit 9 billion humans by 2050,
so the exponential population growth rate is already declining,
but the effects of a wasteful economy pop up in many corners.

Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al.
were right.

It is not about how many people Earth can feed,
but how many people can live in a comfortable yet sustainable manner.

What does Peak Human mean for the Technological Singularity?

The advent of computers was driven by the exponential population growth of
the 20th century; all the groundbreaking work was done in that century.

When we face a decline in population growth,
we also have to face a decline in newly developed technologies.

Because it is not only about developing new technologies,
but also about maintaining the old knowledge.

Here is where AI steps in: mankind's population growth is slowing,
but the whole AI sector is growing and expanding.

Therefore the question is: can AI compensate for the decline?

Time will tell.

I guess the major uncertainty is
how Moore's Law will live on beyond 2021,
when 4 nm transistor production is reached,
a point some scientists consider a physical and economic barrier.

I predict that by the time we hit the 8 billion mark,
we will have developed another groundbreaking technology,
similar to the advent of the transistor, the integrated circuit and the microchip.

So, considering the uncertainty of Peak Human vs. the Rise of AI,
I give ±0 points for the Singularity to take off.

 

The Rise Of The Matrix

Looking at the tag cloud of this blog, there are two major topics,
pro and contra Singularity:
AI (Artificial Intelligence) vs. ELE (Extinction Level Event).

So we slide, step by step, towards an event called the Singularity,
but concurrently we face, more and more, the extinction of mankind.

What about combining those two events?

Let us assume we damage our ecosphere lastingly,
but at the same moment our technology advances to a level where it is possible
to connect directly to cyberspace via a brain-computer interface.

People already spend more and more time in virtual realities;
with the advent of smartphones they are connected to cyberspace all the time,
they meet people in digital social networks,
they play games in computer-generated worlds,
they create and buy virtual goods with virtual money,
and, essentially, they like it.

To prevent an upcoming ELE, we would need to cut our consumption of goods
significantly, but the mass of people wants more and more.

So, let us give them more and more, in the virtual, computer-generated worlds.

Let us create the Matrix, where people can connect directly with their brain,
and buy whatever experience they wish.

A virtual car would need only some electricity and silicon to run on,
and the harm to Mother Earth would be significantly less than that of a real car.

We could create millions or billions of new jobs, all busy with designing
virtual worlds, virtual goods, and virtual experiences.

And Mother Earth would get a break, to recover from the damage billions of
consuming people have caused.

ELE + Singularity => Matrix

+1 points for the Singularity to take off.

AlphaZero - The Krampus Has Come

Okay, this one affected me personally.

Google's DeepMind team adapted their AlphaZero approach to the games of
chess and shogi and dropped the bomb on the 5th of December.

https://arxiv.org/abs/1712.01815

For chess they trained the deep neural network for 4 to 9 hours on a
cluster with 5000+64 TPUs (1st+2nd gen) and reached superhuman level.

Unlike in Go, they did not compete against humans, because chess engines are
already at super-grandmaster level; instead, they competed against the world's
strongest open source engine, Stockfish. The result:

A 100 game match with 28 wins, 72 draws, and zero losses for AlphaZero.
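As a rough worked example (using the standard Elo expected-score formula, not
a number from the paper): 28 wins plus 72 draws are 64 points out of 100,
which corresponds to an edge of roughly 100 Elo points.

  import math

  # Elo difference implied by the match score (standard logistic Elo model).
  wins, draws, losses = 28, 72, 0
  score = (wins + 0.5 * draws) / (wins + draws + losses)   # 0.64
  elo_diff = -400 * math.log10(1 / score - 1)
  print(f"score {score:.2f} -> roughly {elo_diff:.0f} Elo points ahead")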

This is definitely a smack in the face for all computer chess programmers out
there. Next stop Neanderthal Man.

So, with thanks to the Krampus,
+1 points for the Singularity to take off.

Super AI Doomsday Prophets

They are smart, they have money, and they predict the Super AI Doomsday:

Stephen Hawking
"The development of full artificial intelligence could spell the end of the human race”

James Lovelock
"Before the end of this century, robots will have taken over"

Nick Bostrom
"Some little idiot is bound to press the ignite button just to see what happens."

Elon Musk
"Artificial intelligence is our biggest existential threat"

So, obviously, +1 points for the Singularity to take off.
