Luddite - is the Singularity near?

On Artificial Neural Networks

It is non-stop in the news; every week it pops up in another corner: AIs based on Deep Neural Networks. So I will give it a try and write a little, biased article about this topic...

The brain

The human brain consists of about 100 billion neurons, roughly as many as there are stars in our galaxy, the Milky Way, and each neuron is connected via synapses to about 1000 other neurons, resulting in 100 trillion connections.

For comparison, the game-playing AI AlphaZero by Google DeepMind used about 50 million connections to play chess at a superhuman level.

The inner neurons of our brain are connected to the outer world via our senses: eyes, ears, etc.

One neuron has multiple weighted inputs and one output; if a certain threshold of input is reached, its output is activated and the neuron fires a signal to other neurons.
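As a toy illustration, here is a minimal sketch of the artificial counterpart: a single neuron with weighted inputs and a threshold activation. The inputs, weights and threshold below are made-up example values, not anything measured.

    # A single artificial neuron: weighted inputs, one output.
    # Inputs, weights and threshold are made-up illustration values.
    def neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted input sum reaches the threshold."""
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # Three input signals from other neurons, each with its own synaptic weight.
    print(neuron(inputs=[1, 0, 1], weights=[0.4, 0.9, 0.3], threshold=0.5))  # prints 1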

The activation of the synapse is an electrical and chemical process; neurotransmitters can restrain or foster the activation potential, just consider the effect alcohol or coffee has on your cognitive performance.

Common artificial neural networks do not emulate the chemical part.

The brain wires these connections between neurons during learning, so they can act as memory, or can be used for computation.

The "Von Neumann" architecture

Most of today's computers are based on the von Neumann architecture; they have no neurons or synapses, but transistors.

The main components are the ALU, Arithmetic Logic Unit, memory for program and data, and various inputs and outputs.

Artificial Neural Networks have to be built in software, running on these von Neumann computers.

Von Neumann said that his proposed architecture was inspired by the idea of how the brain works: memory and computation. And in his book, "The Computer and the Brain", he gives a comparison of computers with the knowledge about biological neural networks of that time.

Dartmouth

The first work on ANNs was published as early as the 1940s, and in 1956 the "Dartmouth Summer Research Project on Artificial Intelligence" was held, coining the term Artificial Intelligence and marking one milestone in AI. The work on ANNs continued, and the first neuromorphic chips were developed.

AI-Winter

In the 1970s the AI-Winter occurred: problems in computational theory and the lack of compute power needed by large ANNs resulted in funding cuts and in splitting the work into strong and weak AI.

Deep Neural Networks

With the rise of compute power (driven by GPGPU), further research, and Big Data, it became possible in the 21st century to train better and larger networks faster.

The term Deep Neural Networks, for deep hierarchical structures or deep learning techniques, was coined.

One of the first and most common uses of ANNs was, and still is, pattern recognition, for example character recognition.

You can train a neural network with a set of differently looking samples of the same character, with the aim that the ANN will recognize that character in its various appearances.

With a deeper topology of the neural network, it is possible to identify, for example, pictures of cars, with different net layers for color, shape, etc.
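A minimal sketch of this idea in Python/NumPy: a tiny network with one hidden layer is trained on noisy variants of two made-up 3x3 "characters" and then classifies an unseen variant. The characters, network size and training parameters are arbitrary illustration values, not any particular historical system.

    # Toy character recognition with a small two-layer network (NumPy only).
    # The "characters", network size and training parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    X_char = [1,0,1, 0,1,0, 1,0,1]   # a 3x3 "X" shape, flattened
    O_char = [1,1,1, 1,0,1, 1,1,1]   # a 3x3 "O" shape, flattened

    def noisy(bitmap, flips=1):
        """Return a differently looking variant with a few pixels flipped."""
        b = np.array(bitmap, dtype=float)
        for i in rng.choice(len(b), size=flips, replace=False):
            b[i] = 1.0 - b[i]
        return b

    # Training set: several noisy samples of each character.
    X = np.array([noisy(X_char) for _ in range(20)] + [noisy(O_char) for _ in range(20)])
    y = np.array([1.0] * 20 + [0.0] * 20).reshape(-1, 1)   # 1 = "X", 0 = "O"

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 6 neurons, one output neuron.
    W1 = rng.normal(0, 0.5, (9, 6)); b1 = np.zeros(6)
    W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros(1)

    lr = 0.5
    for _ in range(2000):
        h = sigmoid(X @ W1 + b1)       # hidden layer activations
        out = sigmoid(h @ W2 + b2)     # network output
        # Backpropagate the squared error and take a gradient descent step.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

    # An unseen noisy "X" should score close to 1, an unseen noisy "O" close to 0.
    print(sigmoid(sigmoid(noisy(X_char) @ W1 + b1) @ W2 + b2))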

The Brain vs. The Machine

A computer can perform arithmetic and logical operations very fast; that is what its transistors are used for.

In contrast, the neural network of our brain works massively parallel.

The synapses of the human brain are clocked at 10 to 100 hertz, which means they can fire signals to other neurons up to 100 times per second.

Today's computer chips are clocked at about 4 gigahertz, which means they can compute 4,000,000,000 operations per second per ALU.

The brain has 100 billion neurons and 100 trillion connections and consumes ~20 watts; today's biggest chips have 12 billion transistors with a power draw of 250 watts.

We cannot compare the compute power of a brain directly with a von Neumann computer, but we can estimate what kind of computer we would need to map the neural network of a human brain.

Assuming 100 trillion connections, with about 4 bytes per weight, we would need about 400 terabytes of memory to store the weights of the neurons. Assuming 100 hertz as clock rate, we would need at least 40 petaFLOPS (floating point operations per second) to compute the activation potentials.
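As a back-of-envelope sketch (the 4 bytes per weight and the ~4 floating point operations per connection update are my assumptions, not measured values):

    # Back-of-envelope estimate of memory and compute for a brain-sized network.
    # 4 bytes per weight and ~4 FLOPs per connection update are assumptions.
    connections = 100e12          # ~100 trillion synapses
    bytes_per_weight = 4          # one 32-bit float per connection
    firing_rate_hz = 100          # upper bound of the brain's "clock rate"
    flops_per_connection = 4      # multiply, add, compare, update (rough guess)

    memory_bytes = connections * bytes_per_weight
    flops = connections * firing_rate_hz * flops_per_connection

    print(memory_bytes / 1e12, "terabytes")   # -> 400.0 terabytes
    print(flops / 1e15, "petaFLOPS")          # -> 40.0 petaFLOPS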

For comparison, the current number one high performance computer in the world is able to perform ~93 petaFLOPS and has ~1 petabyte of memory, but at a power consumption of more than 15 megawatts.

So, considering simply the energy efficiency of the human brain,
I give -1 points for the Singularity to take off.

Super AI in Sci-Fi

Books and movies address our collective fears, hopes and wishes, and there seem to be mainly five story-lines concerning AI in Sci-Fi...

Super AI takes over world domination
Colossus, Terminator, Matrix

Something went wrong
Odyssey 2001, Das System, Ex Machina

Super AI evolves, more or less peacefully
Golem XIV, A.I., Her

The Cyborg scenario, man merges with machine
Ghost in the Shell, #9, Transcendence

There are good ones, and there are bad ones
Neuromancer, I, Robot, Battlestar Galactica


+1 points for the Singularity to take off.

Robophilosophy 2018

Human philosophers discuss the impact of social robots on mankind; still no Strong AI in sight to join the debate.

Cherry picking...

The Moral Life of Androids - Should Robots Have Rights?
Edward Howlett Spence

"The question I explore is whether intelligent autonomous Robots will have moral rights. Insofar as robots can develop fully autonomous intelligence, I will argue that Robots will have moral rights for the same reasons we do. ..."

Robot Deus
Robert Trappl

"The ascription of god-like properties to machines has a long tradition. Robots of today invite to do so. We will present and discuss god-like properties, to be found in movies as well as in scientific publications, advantages and risks of robots both as good or evil gods, and probably end with a robot theology."

+1 points for the Singularity to take off.

The Turing Test

“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra

The Turing Test, proposed by the mathematician Alan Turing in 1950, was developed to examine whether an AI has reached human-level intelligence.

Simplified: a person performs text chats with a human and with the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.

The Loebner Prize holds a Turing Test contest every year.

It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.

If an AI wants to pass the Turing Test it has to lie to the chat partner, and to be able to lie, it has to develop some level of empathy and some level of self-awareness.

Besides other criticism, the Chinese Room Argument states that no consciousness is needed to perform such a task, and therefore other tests have been developed.

Personally I prefer the Metzinger-Test, a hypothetical event in which AIs start to debate with human philosophers and successfully defend their own theory of consciousness.

I am not sure if the Singularity is going to take off, but I guess that the philosophers' corner is one of the last domains that AIs are going to conquer, and if they succeed we can be pretty sure to have another apex on Earth.

Turing predicted that by the year 2000 machines would fool 30% of human judges; he was wrong, the Loebner Prize still has no Silver Medal winner in the 25-minute text chat category.

So, -1 points for the Singularity to take off.

On Peak Human

One of the early Peak Human prophets was Malthus: in his 1798 book, 'An Essay on the Principle of Population', he postulated that the human population grows exponentially but food production only linearly, so population growth will fluctuate around an upper limit.

Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.

Meadows et al. concur in 'The Limits to Growth - The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.

In 2015, Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.

UN predictions say we may hit 9 billion humans in 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.

Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.

It is not about how many people Earth can feed, but how many people can live in a comfortable but sustainable manner.

What does Peak Human mean for the Technological Singularity?

The advent of Computers was driven by the exponential population growth in the 20th century. All the groundbreaking work was done in the 20th century.

When we face a decline in population growth, we also have to face a decline in new technologies being developed.

Because it is not only about developing new technologies, but also about maintaining the old knowledge.

Here is where AI steps in: mankind's population growth is slowing down, but the whole AI sector is growing and expanding.

Therefore the question is: is AI able to counter the decline?

Time will tell.

I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached, which some scientists consider a physical and economical barrier.

I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, the integrated circuit and the microchip.

So, considering the uncertainty of Peak Human vs. Rise of AI,
I give ±0 points for the Singularity to take off.

 

The Rise Of The Matrix

Looking at the tag cloud of this blog, there are two major topics, pro and contra Singularity: AI (Artificial Intelligence) vs. ELE (Extinction Level Event).

So we slide, step by step, towards an event called the Singularity, but concurrently we face more and more the extinction of mankind.

What about combining those two events?

Let us assume we damage our ecosphere lastingly, but at the same moment our technology advances to a level where it is possible to connect directly to cyberspace via a Brain-Computer-Interface.

People already spend more and more time in virtual realities; with the advent of smartphones they are connected to cyberspace all the time, they meet people in digital social networks, they play games in computer-generated worlds, they create and buy virtual goods with virtual money, and, essentially, they like it.

To prevent an upcoming ELE, we would need to cut our consumption of goods significantly, but the mass of people wants more and more.

So, let us give them more and more, in the virtual, computer-generated worlds.

Let us create the Matrix, where people can connect directly with their brain, and buy whatever experience they wish.

A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than that of a real car.

We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.

And Mother Earth would get a break to recover from the damage billions of consuming people have caused.

ELE + Singularity => Matrix

+1 points for the Singularity to take off.
