We humans are currently creating the Super-AI, and people like to refer to the development of the atomic bomb; nobody knows how this will play out on a global scale (Fermi effect?). Von Neumann worked on the concept of MAD, mutually assured destruction, the nuclear deterrence doctrine that prevented a WWIII with conventional weapons, and maybe there will be a new kind of MAD in the context of Super-AI between the global blocs. Point is, the takeoff of the Technological Singularity is beyond human scope, by definition; it is a matter of science fiction how a post-takeoff world will look. And the current events on our globe are contradictory: on one side the eco-sphere and techno-sphere are collapsing, we are running out of water and energy; on the other side, Super-AI is booming. I really do not know how this, the ongoing ELE versus TS, will play out in the next 10, 20, 30 years. I guess I will read it in the news.
- Turing Test, https://en.wikipedia.org/wiki/Turing_test
- Lovelace Test, https://de.wikipedia.org/wiki/Turing-Test#Erweiterte_Konzepte
- Winograd Test, https://en.wikipedia.org/wiki/Winograd_schema_challenge
- Metzinger Test, https://epsilon.app26.de/post/gpt-3-scratching-at-the-edge-of-the-metzinger-test/
- Lemoine Test, https://epsilon.app26.de/post/turing-test-metzinger-test-lemoine-test/
- Suleyman Test, https://epsilon.app26.de/post/modern-turing-test-proposed/
...TS, it's here.
The technosphere is eating up the entire biosphere, earth's biomass is being replaced with silicon, the closed, biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite to Gaia's biosphere, will it become a butterfly?
We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.
The Centaurs have already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tablebases. But meanwhile they get close to the 100% draw rate even with common hardware, so unbalanced opening books were introduced, where one side has a slight advantage, but again: draws.
The #1 open source engine, Stockfish, has lowered the effective branching factor of its search algorithm over the past years from ~2 to ~1.5 to now ~1.25. This indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
About a decade ago it was estimated that at about ~4000 Elo points we will have a 100% draw rate amongst engines on our computer rating lists; now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo points (Magnus Carlsen is currently rated 2852 Elo in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with the current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.
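For the curious, the Elo gaps mentioned above translate into expected scores via the standard Elo formula; a minimal sketch:

```python
# Expected score of player A against player B under the Elo model.
def expected_score(elo_a: float, elo_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

# A 100 Elo gap yields roughly a 64% expected score for the stronger side.
print(round(expected_score(3750, 3650), 2))  # 0.64
```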
We replaced the HCE, the handcrafted evaluation function, of our computer chess engines with neural networks. We now train neural networks with billions of labeled chess positions, and they evaluate chess positions via pattern recognition better than what a human is able to encode by hand. The NNUE technique, neural networks used in AlphaBeta search engines, gave a boost of 100 to 200 Elo points.
What could be the next thing, the next boost?
If we assume we still have 100 to 200 Elo points to gain until perfect play (normal chess with a standard opening ending in a draw), if we assume an effective branching factor of ~1.25 with HCSH, hand-crafted search heuristics, and that neural networks are superior in this regard, we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.
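To see why pushing the EBF from 2 toward 1 matters so much, a quick sketch of how the search tree size, roughly EBF^depth, shrinks (depth 40 is an arbitrary illustrative value):

```python
# Rough size of a search tree with effective branching factor b at depth d:
# nodes ~ b ** d. Each EBF reduction shrinks the tree by orders of magnitude.
def tree_nodes(ebf: float, depth: int) -> float:
    return ebf ** depth

for ebf in (2.0, 1.5, 1.25):
    print(f"EBF {ebf}: ~{tree_nodes(ebf, 40):.2e} nodes at depth 40")
```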
Such a technique has already been proposed: NNOM++, Move Ordering Neural Networks. But until now it seems that the additional computational effort needed does not pay off.
We use neural networks in the classic way for pattern recognition in today's chess engines, but now the shift is toward pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level: an AI which is able to code a chess engine and outperform humans in this task.
An AI coding a chess engine also has a philosophical implication; such an event is what the Transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.
Moore's Law still has something in the pipeline, from currently 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or silicon sweet spot (current transistor density/efficiency vs. the financial investment needed in fab process/research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.
Interesting times ahead.
...pondering the AI doomsayers and recent developments, it seems naive to me to assume that there will be one single AI agent with one background and one motivation; we currently see different agents with different backgrounds, and therefore different motivations, rising. If we say that AI will compete with humans for resources, it seems only natural that AIs will compete amongst each other for resources. Or will they really merge one day into one big single system? Interesting times. Still waiting for the AGI/ASI, the strong AI, which combines all the AI subsystems into one.
They generate text, source code, images, audio, video, 3D models, what's missing?
The large language models for text generation still lack a decent reasoner and analyzer module, decent video is IMO just a matter of time and hardware, and my take would be that the next thing is brainwaves for the BCI, the brain-computer interface.
This blog has two major topics, AI vs. ELE, takeoff of the technological singularity vs. extinction level event. But of course there are other things going on in the memesphere: physics and metaphysics. It seems to me that the fabric of this world is going to open up. Einstein's theory of relativity and quantum mechanics seek a merger, the separation of spirit and matter seeks a merger, the 3.5-dimensional mind seeks to expand. IMO we already have all the puzzle pieces out there for a TOE, we just need a genius who is able to merge them into a bigger picture, or alike.
We had three waves: the agricultural revolution, the industrial revolution, the information age. And now AI based on neural networks creates new kinds of content: text, images, audio, video. They already write Wikipedia articles, they outperform humans in finding mathematical algorithms. Is this another dividing line, is this the fourth wave? I currently see AI split into a lot of dedicated weak AIs with specific purposes. Do we have a strong AI incoming, an AGI, artificial general intelligence, which will combine all those into one big system? Interesting times.
Reflecting a bit on my recent posts in here, I am convinced that the TS (technological singularity) already took off, but now the question is whether it is stable. If we consider the current negative feedback loops caused by the use of human technology, the question is whether the takeoff of the TS is able to stabilize a fragile technological environment embedded in a fragile biological environment on this planet earth. Time will tell.
Movies and books (SciFi) pick up the energies of the collective subconscious and address these with their themes, and I realize that meanwhile we have entered something I call the event horizon: the story lines break.
Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (term by William Gibson), where every possible scenario happens together at once. If we assume that there will be such an event in the future, then I guess we are already caught in its event horizon, and there is no route of escape anymore.
Prof. Raul Rojas already called for an AI moratorium in 2014. He sees AI as a disruptive technology; humans tend to think in linear progress and underestimate exponential progress, so there are socio-cultural impacts of AI present - what do we use AI for?
Prof. Nick Bostrom covered different topics of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans/post-human intelligence present - how do we contain/control the AI?
Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the context of AI/human present - will the AI suffer?
If we look back at the history of our home computers, what were they actually used for? Encoding, decoding, transmitting and editing. First text, then images, then audio, then video, then 3D graphics.
Now we additionally have some new stuff going on: neural networks. With enough processing power and memory available in our CPUs and GPUs, we can infer and train neural networks at home on our machines, and we have enough mass storage available for the big data needed to train bigger neural networks.
Further, neural networks evolved from pattern recognition to pattern creation; we use them now to create new kinds of content: text, images, audio, video...that is the point where it starts to get interesting, cos you get some added value out of it: you invest resources into creating an AI based on neural networks and it returns added value.
In physics, a singularity is a point in spacetime where our currently developed theories are not valid anymore; we are literally not able to describe what happens inside, cos the density becomes infinite.
The Technological Singularity, as described by Transhumanists, is a grade of technological development where humans are not able to understand the underlying process anymore. The technological environment starts to feed its own development in a feedback loop - computers help to build better computers, which help to build better computers, which help to build better computers...and so on.
So, when will the technological Singularity take off?
Considering the feedback loop, it is already present, maybe since the first computers were built.
Considering the density of information processing that exceeds human understanding, we may have reached that point too.
Imagine a computer technique that is easy to set up and use, outperforms any human in its task, but whose inner workings we cannot really explain: a black box.
Such a technique is present (and currently hyped) => ANNs, Artificial Neural Networks.
Of course we do know what happens inside, cos we built the machine, but when it comes to the question of reasoning, why the machine did this or that, we really have a black box in front of us.
So, humans already build better computers with the help of better computers, and humans use machines that outperform humans in a specific task and are not really able to reason about their results....
obviously, +1 points for the Singularity to take off.
"Computer science is no more about computers than astronomy is about telescopes."
Edsger W. Dijkstra
So, we have a biased overview of the history of computers, but what do these computers actually compute?
The first mechanical computers of the 17th century were able to perform the 4 basic arithmetic operations, addition, subtraction, multiplication and division.
As soon as a computer is able to perform addition, it is also able to perform the other 3 operations, which can be broken down, in multiple steps, into the addition of values.
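This reduction can be sketched in a few lines; an illustrative toy for non-negative integers, not how real ALUs work:

```python
# Multiplication reduced to repeated addition, and integer division to
# repeated subtraction, as described in the text (non-negative ints only).
def multiply(a: int, b: int) -> int:
    total = 0
    for _ in range(b):
        total += a
    return total

def divide(a: int, b: int) -> int:
    quotient = 0
    while a >= b:
        a -= b          # subtraction: the inverse step of addition
        quotient += 1
    return quotient

print(multiply(6, 7), divide(42, 6))  # 42 7
```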
Nowadays computers are binary, meaning they compute with base 2: zeros and ones, true and false, power on and power off.
For this, transistors are used; they work like relays and are coupled together to form logical circuits, which perform the actual computation.
The Z3 (1941) had 600 relays for computation, the 6502 chip (1975) had about 3500 transistors, nowadays CPUs (2018) have billions of them.
So, all these funny programs out there are broken down into simple arithmetic and logical operations.
To perform such magic, some math is needed.
George Boole introduced the Boolean Algebra in 1847, with the three basic logical components: the AND, OR and NOT gates. With these simple gates, logical circuits can be built to perform the addition of values.
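As a sketch of this idea, here is a ripple-carry adder built from only the three basic gates (with XOR composed from AND, OR and NOT):

```python
# A ripple-carry adder built purely from AND, OR and NOT gates,
# showing how Boolean algebra yields the addition of values.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):  # composed from the three basic gates
    return AND(OR(a, b), NOT(AND(a, b)))

def add(x: int, y: int, bits: int = 8) -> int:
    result, carry = 0, 0
    for i in range(bits):
        a, b = (x >> i) & 1, (y >> i) & 1
        s = XOR(XOR(a, b), carry)                     # sum bit
        carry = OR(AND(a, b), AND(carry, XOR(a, b)))  # carry out
        result |= s << i
    return result

print(add(23, 19))  # 42
```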
Alan Turing introduced the Turing Machine in 1936, a mathematical computer, and with the Church-Turing thesis it was shown that everything that can be effectively computed (by a mathematician using pen and paper) can also be computed by a Turing Machine.
With the help of the Turing Machine it was possible to define problems and write algorithms for solving them. With Boolean Algebra it was possible to build binary computers to run these problem-solving algorithms.
So, in short, computers can compute everything that our math is able to describe.
Haha, we would live in another world if.
Of course, the available processing power and memory limits the actual computation of problem solving algorithms.
But beside the technical limitation, there is a mathematical one: some mathematical problems are simply not decidable, the famous "Entscheidungsproblem".
Mathematicians are able to define problems which can not be solved by running algorithms on computers.
Turing showed that even with an Oracle Machine there will be some limitations, and some scientists believe that only with real Quantum Computers will we be able to build Hyper-Turing Machines...
"I think there is a world market for maybe five computers."
Thomas J. Watson (CEO of IBM), 1943
I guess since humans have had fingers, they have counted and computed with them, and since they have had tools, they have carved numbers into bones.
Across different cultures and timelines there have been different kinds of numbering systems to compute with.
Our global civilization mostly uses the Hindu-Arabic numerals with the decimal number system, based on 10; our computers commonly use the binary number system, based on 2, the famous 0s and 1s. But there have been other cultures with other systems: the Maya with base 20, Babylon with base 60, or the Chinese with base 16, the hexadecimal system, which is also used in computer science.
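A small sketch of how the same number looks in some of these bases (limited to bases up to 36, since base-60 digits would need their own symbol set):

```python
# Convert a non-negative integer into its digit string in a given base.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n: int, base: int) -> str:
    if n == 0:
        return "0"
    out = ""
    while n:
        out = DIGITS[n % base] + out  # prepend least significant digit
        n //= base
    return out

print(to_base(2018, 2), to_base(2018, 16), to_base(2018, 20))
```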
The first computing devices were mechanical helpers, like the Abacus, Napier's Bones or the Slide Rule; they did not perform computations on their own, but were used to represent numbers and apply arithmetic operations to them, like addition, subtraction, multiplication and division.
The first mechanical computing machine is considered to be the Antikythera Mechanism, found in a Greek ship that sank about 70 BC. But actually it is no computer, cos it does not perform computations; it is an analog astronomical clock, a sun and moon calendar that shows solar and lunar eclipses.
In the 17th century the first mechanical computing machines were proposed and built.
Wilhelm Schickard designed a not fully functional prototype in 1623.
The human information age itself seems to start with the discovery of electromagnetism in the 19th century: the telegraph system, the phone, the radio. Already in the 19th century there were electro-mechanical "accumulating, tabulating, recording" machines present, like those from Herman Hollerith, used in the American Census of 1890, which culminated in the foundation of companies like IBM, Big Blue, in 1911 and Bull in ~1921, both of which used punched cards for their data processing machinery.
The battleships of WWI had the so-called "Plotting Room" in their centre; it contained dedicated electro-mechanical machines for the fire control system of their gun turrets. Submarines of WWII had dedicated analog computing devices for the fire control systems of their torpedoes.
With the Curta the use of mechanical calculators lived on, up to the advent of portable electronic calculators in the 1960s.
The punch card for programming a machine was introduced by Joseph Marie Jacquard in 1804 with his automated weaving loom, the Jacquard Loom, for producing textiles with complex patterns.
Babbage was ahead of his time, as he described all the parts a modern computer has, CPU, memory, input/output, but he was not able to realize his machine due to missing funds and the limited engineering abilities of that time.
About a century later, Konrad Zuse's Z3, built in 1941, is considered to be the first binary, free programmable computer. It used ~600 telephone relays for computation and ~1400 relays for memory, a keyboard and punched tape as input, lamps as output, and it operated with 5 Hertz.
Zuse's machines mark the advent of the first mainframes used by military and science during and after WWII.
With small chips, at first integrated circuits and then microchips, it was possible to build smaller and affordable Home Computers in the 1970s. IBM and other big players underestimated this market, so Atari, Apple, Commodore, Sinclair, etc. started the Home Computer Revolution: one computer for every home.
Some first versions came as self-assembly kits, like the Altair 8800 (1975), or with built-in TV output, like the Apple I (1976), or as fully assembled video game consoles like the Atari VCS (1977), followed by more performant versions with a graphical user interface, like the Apple Mac (1984) or the Commodore Amiga 1000 (1985).
IBM started the Personal Computer era in 1981 with the 5150. Third party developers were able to provide operating systems, like Microsoft DOS, or hardware extensions for the standardized hardware specification, like hard drives, video cards, sound cards, etc. Soon other companies created clones of the IBM PC, the famous "PC Compatibles".
Gaming was an important sales argument already in the Home Computer era. The early PC graphics standards like CGA and EGA were not really able to compete with the graphics generated by the Denise chip in a Commodore Amiga 500, but with the rise of the SVGA standards (1989) and the compute power of the Intel 486 CPU (1989), game studios were able to build games with superior 3D graphics, like Wolfenstein 3D (1992), Comanche (1992) or Strike Commander (1993), and the race for higher display resolutions and more detailed 3D graphics continues until today.
With operating systems based on graphical user interfaces, like OS/2, X11 or Windows 95 in the 1990s, PCs finally replaced the Home Computers.
Another recipe for the success of the PC might be that there have been multiple CPU vendors for the same architecture (x86), like Intel, AMD, Cyrix or WinChip.
Internet of Things
The Internet was originally designed to connect military institutions in a redundant way, so that if one net element fails, the rest would still be operable.
The available bandwidth evolves like compute power, exponentially. At first mainly text was transmitted, like emails (1970s) or newsgroups (1980s), followed by web pages with images (.gif/.jpg) via the World Wide Web (1989) or Gopher (1991), audio as .mp3 (~1997), and finally Full HD video via streaming platforms like YouTube or Netflix.
In the late 1990s, mobile phones like the Nokia Communicator, MP3 audio players, PDAs (Personal Digital Assistants) like the Palm Pilots, and digital cameras marked the rise of smart devices: the switch from one computer for every home to many computers for one person.
Their functions were all united in the smartphone, which, with mobile high-bandwidth internet, is still on its triumphal tour across the globe.
I am not able to portray the current state of computer and internet usage, it is simply too omnipresent, from word processing to AI research, from fake news to the dark net, from botnets of webcams to data leaks in toys...
The next thing
but I can guess what the next step will be: Integrated Devices, the BCI, the Brain Computer Interface, connected via the Internet to a real kind of Matrix.
It seems only logical to conclude that we will connect with machines directly, implant chips, or develop non-invasive scanners, so the next bandwidth demand will be brainwaves, in all kinds of forms.
[updated on 2023-08-05]
"Technology itself is neither good nor bad. People are good or bad."
Actually I believe the Revelation as described in the Bible already happened, about 60 AD, and the beast with the number 666 is to be identified with the Roman Empire and Caesar Nero.
But inspired by this blog, I will give a modern interpretation a try, so feel free to join me in an alternate and speculative world paradigm...
Technology is the Antichrist, and computer driven AI is the peak of technology.
Over 10000 years ago we left the Garden of Eden and started the neolithic revolution. We started to do farming and keep livestock, we started to use technology to make our lives easier, but over the centuries and millennia we forgot how to live with mother earth in a balanced way.
"Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six."
Using the English-Sumerian gematria system, which is based on 6 - A=6, B=12, C=18...Z=156 - the word "computer" counts to 666.
The first human to use the word computer, for people doing computations, was Richard Braithwait in a book called "The Yong Mans Gleanings" in 1613.
Using the English-Sumerian gematria method, the name "Braithwait" also counts to 666.
Rev 13:18 could act like a puzzle with a checksum: "computer" is the name of the beast, but the name (the number) was coined by a man, and the man who coined the name has the number 666 too.
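The two counts are easy to verify; a minimal sketch of the English-Sumerian method:

```python
# English-Sumerian gematria: A=6, B=12, ... Z=156, i.e. six times the
# letter's position in the alphabet, summed over the word.
def gematria(word: str) -> int:
    return sum(6 * (ord(c) - ord('a') + 1) for c in word.lower() if c.isalpha())

print(gematria("computer"))    # 666
print(gematria("Braithwait"))  # 666
```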
"And he causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads: And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name."
Nowadays, without a Smart Phone (or upcoming Smart Glasses) or a computer, you are limited in your daily business, from renting a car to making payments.
So the mark is already here, the Smart Phone in the right hand, the upcoming Smart Glasses in the forehead, and the computer in general.
"And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed."
An image of the beast is given life and people are going to worship it...AI God Religion Spotted
"And the first went, and poured out his vial upon the earth; and there fell a noisome and grievous sore upon the men which had the mark of the beast, and upon them which worshipped his image."
Considering computers as the mark of the beast, the sore could be cancer caused by radiation.
"And the second angel poured out his vial upon the sea; and it became as the blood of a dead man: and every living soul died in the sea."
Our sea world is dying, overfishing, plastic particles, acidification, etc.
"And the third angel poured out his vial upon the rivers and fountains of waters; and they became blood"
Blood in Judaism is impure and Jews are not allowed to eat it, this could mean that our rivers get poisoned.
"And the fourth angel poured out his vial upon the sun; and power was given unto him to scorch men with fire. And men were scorched with great heat, and blasphemed the name of God, which hath power over these plagues: and they repented not to give him glory."
Climate Change causes already increasing heatwaves and droughts.
"And the fifth angel poured out his vial upon the seat of the beast; and his kingdom was full of darkness; and they gnawed their tongues for pain, And blasphemed the God of heaven because of their pains and their sores, and repented not of their deeds."
This one can be interpreted as God shutting down the internet, the kingdom of the beast. Some scientists conclude that a pole shift is currently underway; this could cause the magnetic field around earth to collapse, so the electromagnetic waves from the sun could damage computer chips worldwide.
Pretty obviously the Internet seems a natural fit to be the kingdom of the beast (a computer driven AI), so what does it mean that it was 'full of darkness'? Hehe, ever wondered about the dark web, fake news, hate speech etc.? Darkness.
"And the sixth angel poured out his vial upon the great river Euphrates; and the water thereof was dried up, that the way of the kings of the east might be prepared. And I saw three unclean spirits like frogs come out of the mouth of the dragon, and out of the mouth of the beast, and out of the mouth of the false prophet. For they are the spirits of devils, working miracles, which go forth unto the kings of the earth and of the whole world, to gather them to the battle of that great day of God Almighty."
This one is clear, the Euphrates river dries up, and it is scary to watch it really happen. Dunno about the frogs and kings.
"And the seventh angel poured out his vial into the air; and there came a great voice out of the temple of heaven, from the throne, saying, It is done. And there were voices, and thunders, and lightnings; and there was a great earthquake, such as was not since men were upon the earth, so mighty an earthquake, and so great. And the great city was divided into three parts, and the cities of the nations fell: and great Babylon came in remembrance before God, to give unto her the cup of the wine of the fierceness of his wrath. And every island fled away, and the mountains were not found. And there fell upon men a great hail out of heaven, every stone about the weight of a talent: and men blasphemed God because of the plague of the hail; for the plague thereof was exceeding great."
An earthquake so strong has never happened before in the history of mankind.
Maybe the seventh bowl is a global nuclear war/strike? The finale.
There are many passages in the Revelation I can not interpret in a way that the computer is the Antichrist: the seven heads, horns and ten crowns of the dragon, the mortal wound, or the first and second beast, etc.
The Roman Empire with Caesar Nero as Antichrist simply fits better.
But please leave a comment, if you have further puzzle pieces for AI Antichrist.
So, considering the pure potential of the meme AI Antichrist,
I give -1 points for the Singularity to take off.
It is non-stop in the news, every week it pops up in another corner: AIs based on Deep Neural Networks. So I will give it a try and write a little, biased article about this topic...
The human brain consists of about 100 billion neurons, as many as there are stars in our galaxy, the Milky Way, and each neuron is connected via synapses with about 1000 other neurons, resulting in 100 trillion connections.
For comparison, the game-playing AI AlphaZero by Google DeepMind used about 50 million connections to play chess at a superhuman level.
The inner neurons of our brain are connected to the outer world via our senses, eyes, ears, etc.
One neuron has multiple weighted inputs and one output; if a certain threshold of input is reached, its output is activated, the neuron fires a signal to another neuron.
The activation of the synapse is an electrical and chemical process; neurotransmitters can restrain or foster the activation potential, just consider the effect alcohol or coffee has on your cognitive performance.
Common artificial neural networks do not emulate the chemical part.
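The neuron model described above, weighted inputs plus a threshold, can be sketched in a few lines (a classic McCulloch-Pitts style neuron, without the chemical part):

```python
# A single artificial neuron: weighted inputs, a threshold,
# and a binary output signal ("fires" or not).
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two weighted inputs; the neuron fires only when the combined
# signal reaches the threshold.
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1
```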
The brain wires these connections between neurons during learning, so they can act as memory, or can be used for computation.
Most of today's computers are based on the von Neumann architecture; they have no neurons or synapses but transistors.
The main components are the ALU, Arithmetic Logic Unit, memory for program and data, and various inputs and outputs.
Artificial Neural Networks have to be built in software, running on these von Neumann computers.
Von Neumann said that his proposed architecture was inspired by the idea of how the brain works, memory and computation, and in his book "The Computer and the Brain" he gives a comparison of computers and the knowledge about biological neural networks of that time.
The first work on ANNs was published already in the 1940s, and in 1956 the "Dartmouth Summer Research Project on Artificial Intelligence" was held, coining the term Artificial Intelligence and marking one milestone in AI. The work on ANNs continued, and the first neuromorphic chips were developed.
In the 1970s the AI Winter occurred: problems in computational theory and the lack of compute power needed by large ANNs resulted in funding cuts, and the field split into strong and weak AI.
With the rise of compute power (driven by GPGPU), further research, and Big Data, it became possible in the 21st century to train better and larger networks faster.
The term Deep Neural Networks was coined, for deep hierarchical structures and deep learning techniques.
One of the first and most common usages for ANNs was and is pattern recognition, for example character recognition.
You can train a neural network with a set of differently looking samples of the same character, with the aim that the ANN will recognize that character in various appearances.
With a deeper topology of the neural network, it is possible to identify, for example, pictures of cars, with different net layers for color, shape, etc.
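As a toy illustration of such training (made-up 3x3 bitmaps, not a real OCR setup), a minimal perceptron learning to separate two patterns:

```python
# A minimal perceptron trained to tell a 3x3 "X" bitmap from an "O" bitmap,
# a toy version of the character recognition described above.
X_SHAPE = [1, 0, 1, 0, 1, 0, 1, 0, 1]
O_SHAPE = [1, 1, 1, 1, 0, 1, 1, 1, 1]

samples = [(X_SHAPE, 1), (O_SHAPE, 0)]
weights, bias, lr = [0.0] * 9, 0.0, 0.1

def predict(pixels):
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s >= 0 else 0

for _ in range(20):  # a few training epochs suffice for two patterns
    for pixels, label in samples:
        error = label - predict(pixels)
        weights = [w + lr * error * p for w, p in zip(weights, pixels)]
        bias += lr * error

print(predict(X_SHAPE), predict(O_SHAPE))  # 1 0
```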
A computer can perform fast arithmetic and logical operations; for this, the transistors are used.
In contrast, the neural network of our brain works massively parallel.
The synapses of the human brain are clocked at 10 to 100 hertz, meaning they can fire to other neurons up to 100 times per second.
Today's computer chips are clocked at 4 gigahertz, meaning they can compute 4 000 000 000 operations per second per ALU.
The brain has 100 billion neurons, 100 trillion connections and consumes ~20 watts; today's biggest chips have 12 billion transistors and consume 250 watts.
We can not compare the compute power of a brain directly with a von Neumann computer, but we can estimate what kind of computer we would need to map the neural network of a human brain.
Assuming 100 trillion connections, we would need about 400 terabytes of memory to store the weights of the neurons. Assuming 100 hertz as clock rate, we would need at least 40 petaFLOPS (floating point operations per second) to compute the activation potentials.
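The arithmetic behind these estimates, with the assumptions (4 bytes per weight, 4 floating point operations per connection update) made explicit:

```python
# Back-of-the-envelope estimate from the text: memory and compute needed
# to map a brain-sized network. Assumptions: 4 bytes per stored weight,
# 4 floating point operations per connection per update.
connections = 100e12        # 100 trillion synapses
clock_hz = 100              # firing rate up to 100 Hz
bytes_per_weight = 4
ops_per_connection = 4

memory_tb = connections * bytes_per_weight / 1e12
flops_peta = connections * clock_hz * ops_per_connection / 1e15

print(f"{memory_tb:.0f} TB, {flops_peta:.0f} petaFLOPS")  # 400 TB, 40 petaFLOPS
```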
For comparison, the current number one high performance computer in the world is able to perform ~93 petaFLOPS and has ~1 petabyte of memory, but a power consumption of more than 15 megawatts.
So, considering simply the energy efficiency of the human brain,
I give -1 points for the Singularity to take off.
Books and movies address our collective fears, hopes and wishes, and there seem to be mainly five story lines concerning AI in Sci-Fi...
Super AI takes over world domination
Colossus, Terminator, Matrix
Something went wrong
2001: A Space Odyssey, Das System, Ex Machina
Super AI evolves, more or less, peacefully
Golem XIV, A.I., Her
The Cyborg scenario, man merges with machine
Ghost in the Shell, #9, Transcendence
There are good ones, and there are bad ones
Neuromancer, I, Robot, Battlestar Galactica
+1 points for the Singularity to take off.
“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra
Simplified: a person performs text chats with a human and the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.
The Loebner Prize holds a Turing Test contest every year.
It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.
If an AI wants to pass the Turing Test it has to lie to the chat partner, and to be able to lie, it has to develop some level of empathy, and some level of self-awareness.
Beside other criticism, the Chinese Room Argument states that no consciousness is needed to perform such an task, and therefore other tests have been developed.
Personally I prefer the Metzinger-Test, a hypothecical event, when AIs start to discuss with human philosophers and defend successfully their own theory of consciousness.
I am not sure if the Singularity is going to take off, but i guess that the philosophers corner is one of the last domains that AIs are going to conquer, and if they succeed we can be pretty sure to have another Apex on earth
Turing predicted that by the year 2000 machines will fool 30% of human judges, he was wrong, the Loebner Prize has still no Silver Medal winner for the 25 minutes text chat category.
So, -1 points for the Singularity to take off.
One of the early Peak Human prophets was Malthus: in his 1798 book 'An Essay on the Principle of Population', he postulated that the human population grows exponentially but food production only linearly, so population growth will fluctuate around an upper limit.
Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.
Meadows et al. concur in 'The Limits to Growth - The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.
In 2015 Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.
UN predictions say we may hit 9 billion humans in 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.
Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.
It is not about how many people Earth can feed, but how many people can live in a comfortable yet sustainable manner.
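Malthus' core argument, exponential growth outrunning linear growth, can be sketched in a toy simulation. All numbers below are illustrative assumptions, not historical data:

```python
# Toy Malthusian model: exponential population vs. linear food supply.
# An exponential curve always overtakes a linear one eventually,
# no matter how small the growth rate.

population = 1.0    # arbitrary units
food = 2.0          # starts with a comfortable surplus
growth_rate = 0.03  # 3% population growth per year (assumption)
food_step = 0.05    # linear increase in food per year (assumption)

year = 0
while population <= food:
    population *= 1 + growth_rate
    food += food_step
    year += 1

print(f"population outgrows food supply after {year} years")
```

Whatever parameters one picks, the crossing point only moves; it never disappears, which is the essence of Malthus' claim.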
What does Peak Human mean for the Technological Singularity?
The advent of computers was driven by the exponential population growth of the 20th century. All the groundbreaking work was done in the 20th century.
When we face a decline in population growth, we also have to face a decline in new technologies developed.
Because it is not only about developing new technologies, but also about maintaining the old knowledge.
Here is the point where AI steps in: mankind's population growth falters, but the whole AI sector is growing and expanding.
Therefore the question is: can AI take on the decline?
Time will tell.
I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached, which some scientists consider a physical and economical barrier.
I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, the integrated circuit and the microchip.
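A rough sketch of why the early 2020s keep coming up as the endpoint: projecting the classic ~0.7x linear shrink per process generation forward from 14 nm. The starting point, shrink factor and cadence are assumptions for illustration only:

```python
# Toy Moore's Law projection: feature size shrinks ~0.7x per generation,
# one generation roughly every two years (both assumptions).

feature_nm = 14.0   # assumed starting node
year = 2015         # assumed starting year
shrink = 0.7        # ~0.7x linear shrink per generation
years_per_gen = 2   # generation cadence in years

while feature_nm > 4.0:
    feature_nm *= shrink
    year += years_per_gen

print(f"~4 nm reached around {year} ({feature_nm:.1f} nm)")
```

This toy model lands in the early 2020s, roughly consistent with the 2021 barrier mentioned above; a few nanometers is where the atoms themselves start to set the limit.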
So, considering the uncertainty of Peak Human vs. Rise of AI,
I give +-0 points for the Singularity to take off.
Looking at the tag cloud of this blog, there are two major topics, pro and con Singularity, AI (Artificial Intelligence) vs. ELE (Extinction Level Event).
So we slide, step by step, towards an event called the Singularity, but concurrently we face more and more the extinction of mankind.
What about combining those two events?
Let us assume we damage our ecosphere lastingly, but at the same moment our technology advances to a level where it is possible to connect directly with cyberspace via a Brain-Computer-Interface.
People already spend more and more time in virtual realities; with the advent of smartphones they are connected to cyberspace all the time, they meet people in digital social networks, they play games in computer-generated worlds, they create and buy virtual goods with virtual money, and, essentially, they like it.
To prevent an upcoming ELE we would need to cut our consumption of goods significantly, but the mass of people wants more and more.
So, let us give them more and more, in the virtual, computer generated worlds.
Let us create the Matrix, where people can connect directly with their brains and buy whatever experience they wish.
A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than from a real car.
We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.
And Mother Earth would get a break, to recover from the damage billions of consuming people have caused.
ELE + Singularity => Matrix
+1 points for the Singularity to take off.