Luddite - is the Singularity near?

More Moore....beyond Moore's Law?

Gordon Moore, co-founder of Intel, died on Friday, March 24, 2023:

"Gordon Moore, Intel Co-Founder, Dies at 94"
https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html

...and chip-makers are struggling to keep Moore's Law alive?

"...Moore's Law is alive and well today and the overall trend continues, though it remains to be seen whether it can be sustained in the longer term..."
https://www.futuretimeline.net/data-trends/moores-law.htm

But, IMHO, we are already kind of cheating with regard to the transistor count on a chip. AMD uses up to 12 chiplets, Intel 4 slices, and Apple 2 slices in their CPUs, and now the chiplet design is also entering the GPU domain, with up to 1 kW power usage for super-computer chips.

We now have 5nm in production, 3nm in the pipeline, and 2nm and 1+nm fab processes upcoming; these are, of course, by now marketing numbers, but they should still reflect the transistor density/efficiency of the fab process.

X-ray lithography and new materials like graphene might be in the pipeline. What else?

What about:

- Memristors?
- Photonics?
- Quantum Computers?
- Wetware (artificial biological brains)?
- MPU - memory processing unit?
- Superconductor (at room temperature)?

I am still waiting to see memristor-based NVRAM and neuromorphic chip designs... but maybe people are now into Wetware for large language models; biological brains run way more energy-efficiently, they say...

And, it seems kind of funny to me: at first we used GPUs for things like Bitcoin mining, now everybody tries to get their hands on them for generative AIs. There is currently so much money flowing into this domain that progress for the next couple of years seems assured -> Moore's Second Law.

We have CPUs, GPUs, TPUs, DSPs, ASICs and FPGAs, and have extended from scalar to vector to matrix and spatial computing.

We have the Turing-Machine and the Quantum-Turing-Machine, what about the Hyper-Turing-Machine?

At first we used electro-mechanical relays, then tubes, then transistors, then ICs, then microchips to build binary computers. I myself predicted that on reaching the 8 billion humans mark (~2023) we would see a new, groundbreaking technology break through; I am still waiting for the next step in this line.

A Mirror

Machines, the AI, talking twaddle and suffering from hallucinations? A mirror of our society. A machine mind with a rudimentary body but disconnected from its soul? A mirror of our society. Machine minds used to generate fake-money, fake-speech and fake-porn? A mirror of our society.

Yet Another Turing Test

Now, with generative AIs and the switch from pattern recognition to pattern creation with neural networks, I would like to propose my own kind of Turing Test:

An AI which is able to code a chess engine and outperforms humans in this task.

1A) With hand-crafted eval. 1B) With neural networks.

2A) Outperforms non-programmers. 2B) Outperforms average chess-programmers. 2C) Outperforms top chess-programmers.

3A) An un-self-aware AI, the "RI", restricted intelligence. 3B) A self-aware AI, the "SI", sentient intelligence.

***update 2024-02-14***

4A) An AI based on expert-systems. 4B) An AI based on neural networks. 4C) A merger of both.

The Chinese Room Argument, applied to this test, would claim that no consciousness is needed to perform such a task; hence this test is not meant to measure self-awareness, consciousness or sentience, but what we call human intelligence.

https://en.wikipedia.org/wiki/Chinese_room

The first test candidate was already posted by Thomas Zipproth, Dec 08, 2022:

Provide me with a minimal working source code of a chess engine
https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245

Exit Strategy?

Oh boy. ELE ongoing, humans go extinct, biosphere goes extinct, Mars and Moon have no self-sustaining biosphere, the only thing which still has gas is the AI. Which exit strategy to choose? The good ole Marvin Minsky upload scenario? Seriously? A post-human Matrix? Let go of the old, embrace the new? Project Lambda. Oh boy.

Western Peak Passed?

I am a child of the 90s; 1989 until 2001 was my time, from the fall of the Berlin Wall until 9/11, and everything seemed possible during this period. Fukuyama called it "the end of history", and then 2001 was already "the end of the end of history".

Retrospectively, Fukuyama was wrong, and Huntington, "The Clash of Civilizations", was right. Maybe the 90s were just a hedonistic time in between, the exception to the rule.

True, technologically we do advance, at least incrementally, more processing power, more bandwidth, more data, bigger neural networks, more advanced network architectures, but culturally, philosophically? Did we, the Western sphere, already pass our peak, and are we degenerating?

- 1979 - Lyotard - The Postmodern Condition
- 1981 - Baudrillard - Simulacra and Simulation
- 1997 - Deutsch - The Fabric of Reality
- 1999 - Wachowskis - The Matrix

When I surf the meme-sphere out there, it seems to me that by now the so-called three poisons rule the world: hate, greed and delusion...

https://en.wikipedia.org/wiki/Three_poisons

...just thinking loud.

Oswald Spengler, Man and Technics, 1931

“We were born into this time and must bravely complete the path that is destined for us. There is no other. To persevere at the lost post without hope, without salvation, is a duty. Endure like the Roman soldier whose bones were found outside a gate in Pompeii, who died because they forgot to relieve him when Mount Vesuvius erupted. That is greatness, that is having race. This honest end is the only thing that cannot be taken away from people.”

https://en.wikipedia.org/wiki/Man_and_Technics

https://de.wikipedia.org/wiki/Der_Mensch_und_die_Technik

Super-AI - a new concept of MAD?

We humans are currently creating the Super-AI, and people like to refer to the development of the atomic bomb; nobody knows how this will all play out on a global scale (Fermi effect?). Von Neumann worked on the concept of MAD, mutually assured destruction, the nuclear-warfare deterrence concept which prevented a WWIII with conventional weapons, and maybe there will be a new concept of MAD in the context of Super-AI between the global blocs. Point is, the takeoff of the Technological Singularity is beyond human scope, by definition; it is a matter of science fiction how a post-TS-takeoff world will look. And the current events on our globe are contradictory: on one side the eco-sphere and techno-sphere are collapsing, we are running out of water and energy; on the other side, the Super-AI is booming. I really do not know how this, the ongoing ELE versus TS, will play out in the next 10, 20, 30 years. I guess I will read it in the news.

We Are Running Out of Juice

The AI already competes with humans for resources, water and energy, and it seems we are running out of juice... do we have enough resources left for the TS to take off, or have we already entered the ELE doom loop?

Elon Musk Predicts Electricity Shortage in Two Years
https://hardware.slashdot.org/story/23/07/31/0128257/elon-musk-predicts-electricity-shortage-in-two-years

"I can't emphasize enough: we need more electricity,"

"However much electricity you think you need, more than that is needed."

TS - it's here

...TS, it's here.

Will it be a butterfly?

The technosphere is eating up the entire biosphere, Earth's biomass is being replaced with silicon, the closed, biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite on Gaia's biosphere, will it be a butterfly?

The Next Big Thing in Computer Chess?

We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.

The Centaurs have already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tables. But meanwhile they get close to the 100% draw rate even with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again it draws.

Over the past years the #1 open-source engine, Stockfish, lowered the effective branching factor of its search from ~2 to ~1.5 to now ~1.25; this indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
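To get a feeling for what the effective branching factor means, here is a rough back-of-the-envelope sketch: a search visiting N nodes reaches a depth of roughly log(N)/log(EBF), so a lower EBF buys more depth for the same node budget. The node budget below is an arbitrary illustrative number, not a measured engine figure.

```
import math

# Rough model: a search visiting N nodes reaches depth d with N ~ EBF^d,
# so d ~ log(N) / log(EBF). The node budget is illustrative only.
node_budget = 10**9

for ebf in (2.0, 1.5, 1.25):
    depth = math.log(node_budget) / math.log(ebf)
    print(f"EBF {ebf:4}: ~{depth:.0f} plies reachable with {node_budget:.0e} nodes")

# EBF  2.0: ~30 plies
# EBF  1.5: ~51 plies
# EBF 1.25: ~93 plies
```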

About a decade ago it was estimated that at about ~4000 Elo points we will have a 100% draw rate amongst engines on our computer rating lists; now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo points (Magnus Carlsen is rated today at 2852 Elo in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with the current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.

We replaced the HCE, the handcrafted evaluation function, of our computer chess engines with neural networks. We now train neural networks on billions of labeled chess positions, and they evaluate chess positions via pattern recognition better than what a human is able to encode by hand. The NNUE technique, neural networks used in alpha-beta search engines, gave a boost of 100 to 200 Elo points.

What could be the next thing, the next boost?

If we assume we still have 100 to 200 Elo points to go until perfect play (normal chess with a standard opening and a draw), if we assume an effective branching factor of ~1.25 with HCSH, hand-crafted search heuristics, and that neural networks are superior in this regard, we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.

Such a technique has already been proposed, NNOM++, Move Ordering Neural Networks, but so far it seems that the additional computational effort needed does not pay off.
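To illustrate where such a move-ordering network would plug in, here is a minimal alpha-beta sketch using the python-chess library; the ordering_score function below is a hand-crafted stand-in (prefer captures) for the spot where a neural network would deliver its move scores, not actual NNOM++ code, and the material evaluation is deliberately simplistic.

```
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    # material balance from the side to move's point of view
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def ordering_score(board, move):
    # stand-in for a move-ordering network: prefer captures of valuable pieces
    if board.is_capture(move):
        victim = board.piece_at(move.to_square)
        return 10 + (PIECE_VALUES[victim.piece_type] if victim else 1)
    return 0

def alpha_beta(board, depth, alpha, beta):
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    moves = sorted(board.legal_moves,
                   key=lambda m: ordering_score(board, m), reverse=True)
    for move in moves:
        board.push(move)
        score = -alpha_beta(board, depth - 1, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta   # cutoff: the better the ordering, the earlier this happens, the lower the EBF
        alpha = max(alpha, score)
    return alpha

print(alpha_beta(chess.Board(), 3, -1000, 1000))
```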

What else?

We use neural networks in the classic way for pattern recognition in today's chess engines, but now the shift is to pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level, an AI which is able to code a chess engine and outperforms humans in this task.

An AI coding a chess engine also has a philosophical implication: such an event is what the Transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.

Moore's Law still has something in the pipeline, from the current 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or silicon sweet spot (current transistor density/efficiency vs. the financial investment needed in fab process/research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.

Interesting times ahead.

Different Agents, Different Backgrounds, Different Motivations...

...pondering the AI doomsday sayers and recent developments, it seems naive to me to assume that there will be one single AI agent with one background and one motivation; we currently see different agents with different backgrounds and therefore different motivations rising. If we say that AI will compete with humans for resources, it seems only natural that AIs will compete among each other for resources, or will they really merge one day into one big single system? Interesting times. Still waiting for the AGI/ASI, the strong AI, which combines all the AI subsystems into one.

Generative AIs - What's Missing?

They generate text, source code, images, audio, video, 3D models, what's missing?

The large language models for text generation still lack a decent reasoner and analyzer module; decent video is IMO just a matter of time and hardware; and my take would be that the next thing is brainwaves for the BCI, the brain-computer interface.

Another Shift...TOE

This blog has two major topics, AI vs. ELE, takeoff of the technological singularity vs. extinction level event. But of course there are other things going on in the memesphere, physics and meta-physics. It seems to me that the fabric of this world is going to open up: Einstein's theory of relativity and quantum mechanics seek a merger, the separation of spirit and matter seeks a merger, the 3.5-dimensional mind seeks to expand. IMO we already have all the puzzle pieces out there for a TOE, we just need a genius who is able to merge them into a bigger picture, or the like.

AI - the new breaking line?

We had three waves, the agricultural revolution, the industrial revolution, the information age, and now AI based on neural networks creates new kinds of content: text, images, audio, video. They already write Wikipedia articles, they outperform humans in finding mathematical algorithms; is this another breaking line, is this the fourth wave? Currently I see AI split into a lot of dedicated weak AIs with specific purposes; do we have a strong AI incoming, an AGI, artificial general intelligence, which will combine all of these into one big system? Interesting times.

The Singularity++

Reflecting a bit on my recent posts in here, I am convinced that the TS (technological singularity) already took off, but now the question is whether it is stable. If we consider the current negative feedback loops caused by the use of human technology, the question is now whether the takeoff of the TS is able to stabilize a fragile technological environment embedded in a fragile biological environment on this planet Earth. Time will tell.

Event Horizon

Movies and books (SciFi) pick up the energies of the collective subconscious and address them with their themes, and I realize that we have meanwhile entered something I call the event horizon: the story lines break.

Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event, either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (a term by William Gibson), where every possible scenario happens together at once. If we assume that there will be such an event in the future, then I guess we are already caught in its event horizon, and there is no route of escape anymore.

Three Strands of AI Impact...

Prof. Raul Rojas already called for an AI moratorium in 2014; he sees AI as a disruptive technology, and humans tend to think in linear progress and underestimate exponential progress, so there are socio-cultural impacts of AI present - what do we use AI for?

Prof. Nick Bostrom covered different topics of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans-/post-human intelligence present - how do we contain/control the AI?

Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the context of AI/humans present - will the AI suffer?

encode, decode, transmit, edit...train, infer

If we look back at the history of our home computers, what were they actually used for? To encode, decode, transmit and edit. First text, then images, then audio, then video, then 3D graphics.

Now we additionally have some new stuff going on, neural networks. With enough processing power and memory available in our CPUs and GPUs, we can infer and train neural networks at home with our machines, and we have enough mass storage available for big data to train bigger neural networks.

Further, neural networks evolved from pattern recognition to pattern creation; we now use them to create new kinds of content: text, images, audio, video... that is the point where it starts to get interesting, cos you get some surplus value out of it: you invest resources into creating an AI based on neural networks and it returns surplus value.

The Singularity

In physics, a singularity is a point in spacetime where our currently developed theories are no longer valid; we are literally not able to describe what happens inside, cos the density becomes infinite.

The technological Singularity, as described by Transhumanists, is a stage of technological development where humans are no longer able to understand the ongoing process. The technological environment starts to feed its own development in a feedback loop - computers help to build better computers, which help to build better computers, which help to build better computers... and so on.

So, when will the technological Singularity take off?

Considering the feedback loop, it is already present, maybe since the first computers were built.

Considering the density of information processing that exceeds human understanding, we may have reached that point too.

Imagine a computer technique that is easy to set up and use and outperforms any human at its task, but we cannot really explain what happens inside; it is a black box.

Such a technique is present (and currently hyped) => ANNs, Artificial Neural Networks.

Of course we do know what happens inside, cos we built the machine, but when it comes to the question of reasoning, why the machine did this or that, we really have a black box in front of us.

So, humans already build better computers with the help of better computers, and humans use machines that outperform humans at a specific task without really being able to explain their results...

obviously, +1 points for the Singularity to take off.

A Brief History Of Computing

"Computer science is no more about computers than astronomy is about telescopes."
Edsger W. Dijkstra

So, we have a biased overview of the history of computers, but what do these computers actually compute?

The first mechanical computers of the 17th century were able to perform the 4 basic arithmetic operations: addition, subtraction, multiplication and division.

As soon as a computer is able to perform addition, it is also able to perform the other 3 operations, which can be broken down, in multiple steps, into additions of values.
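As a toy illustration of that reduction, here is a minimal sketch (ignoring negative numbers and remainders for brevity) in which subtraction, multiplication and division are expressed through repeated addition:

```
def add(a, b):
    return a + b                      # the primitive operation

def subtract(a, b):
    # how often can we add 1 to b before reaching a?
    result = 0
    while add(b, result) < a:
        result = add(result, 1)
    return result

def multiply(a, b):
    result = 0
    for _ in range(b):                # b repeated additions of a
        result = add(result, a)
    return result

def divide(a, b):
    quotient = 0
    while a >= b:                     # repeated subtraction of b
        a = subtract(a, b)
        quotient = add(quotient, 1)
    return quotient

print(subtract(9, 4), multiply(6, 7), divide(42, 6))   # 5 42 7
```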

Nowadays computers are binary, meaning they compute in base 2: zeros and ones, true and false, power on and power off.

For this, transistors are used; they work like relays and are coupled together to form logic circuits, which perform the actual computation.

The Z3 (1941) had 600 relays for computation, the 6502 chip (1975) had about 3500 transistors, nowadays CPUs (2018) have billions of them.

So, all these funny programs out there are broken down into simple arithmetic and logical operations.

To perform such magic, some math is needed.

George Boole introduced Boolean algebra in 1847, with the three basic logical components, the AND, OR and NOT gates. With these simple gates, logic circuits can be built to perform the addition of values.
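A minimal sketch of that idea, simulating the gates in software: XOR is composed from AND, OR and NOT, a full adder is composed from those, and a chain of full adders (a ripple-carry adder) adds whole numbers bit by bit.

```
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):                     # XOR built from the three basic gates
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_bits(x, y, width=8):
    # ripple-carry adder: chain full adders from the least significant bit up
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_bits(23, 42))   # 65
```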

Alan Turing introduced the Turing-Machine, a mathematical model of a computer, in 1936, and the Church-Turing-Thesis states that everything that can be effectively computed (by a mathematician using pen and paper) can also be computed by a Turing-Machine.

With the help of the Turing-Machine it was possible to define problems and write algorithms for solving them. With Boolean algebra it was possible to build binary computers to run these problem-solving algorithms.

So, in short, computers can compute everything that our math is able to describe.

Everything?

Haha, we would live in another world if that were the case.

Of course, the available processing power and memory limit the actual computation of problem-solving algorithms.

But besides the technical limitation, there is a mathematical one: some mathematical problems are simply not decidable, the famous "Entscheidungsproblem".

Mathematicians are able to define problems which cannot be solved by running algorithms on computers.

Turing showed that even with an Oracle-Machine there will be some limitations, and some scientists believe that only with real Quantum-Computers will we be able to build Hyper-Turing-Machines...

A Brief History Of Computers

"I think there is a world market for maybe five computers."
Thomas J. Watson (CEO of IBM), 1943

Roots

I guess ever since humans have had fingers, they have counted and computed with them, and ever since they have had tools, they have carved numbers into bones.

Across different cultures and timelines there have been different kinds of numbering systems to compute with.

Our global civilization mostly uses the Hindu-Arabic numerals with the decimal number system, base 10; our computers commonly use the binary number system, base 2, the famous 0s and 1s. But there have been other cultures with other systems: the Maya with base 20, Babylon with base 60, or the Chinese with base 16, the hexadecimal system, which is also used in computer science.

The first compute devices were mechanical helpers, like the Abacus, Napier's Bones or the Slide Rule; they did not perform computations on their own, but were used to represent numbers and apply arithmetic operations to them, like addition, subtraction, multiplication and division.

Mechanical Computers

The first mechanical computing machine is considered to be the Antikythera Mechanism, found in a Greek ship that sank about 70 BC. But actually it is not a computer, cos it does not perform computations; it is an analog astronomical clock, a sun and moon calendar that shows solar and lunar eclipses.

In the 17th century the first mechanical computing machines were proposed and built.

Wilhelm Schickard designed a not fully functional prototype in 1623.

The Pascaline, designed by Blaise Pascal in 1642, was the first operational and commercially available mechanical computer, able to perform the 4 basic arithmetic operations.

In 1672 the German mathematician Gottfried Wilhelm Leibniz invented the stepped cylinder, used in his not fully functional Stepped Reckoner.

[update 2023-06-05]

The human information age itself seems to start with the discovery of electro-magnetism in the 19th century: the telegraph system, the phone, the radio. Already in the 19th century electro-mechanical "accumulating, tabulating, recording" machines were present, like those from Herman Hollerith used in the American Census of 1890, which culminated in the foundation of companies like IBM, Big Blue, in 1911 and Bull in ~1921; both used punched cards for their data-processing machinery.

The battleships of WWI had the so-called "Plotting Room" at their centre; it contained dedicated electro-mechanical machines for the fire-control system of their gun turrets. Submarines of WWII had dedicated analog computing devices for the fire-control systems of their torpedoes.

With the Curta the use of mechanical calculators lived on, up to the advent of portable electronic calculators in the early 1970s.

Programmable Computers

The punch card for programming a machine was introduced by Joseph Marie Jacquard in 1804 with his automated weaving loom, the Jacquard Loom, for producing textiles with complex patterns.

In 1837 Charles Babbage (considered the father of the computer) was the first to describe a programmable, mechanical computer, the Analytical Engine.

Ada Lovelace (considered the mother of programming) worked with Babbage and was the first person to publish a computer algorithm, the computation of Bernoulli numbers.

Babbage was ahead of his time, as he described all the parts a modern computer has (CPU, memory, input/output), but he was not able to realize his machine due to missing funds and the limited engineering capabilities of that time.

About a century later, Konrad Zuse's Z3, built in 1941, is considered to be the first binary, freely programmable computer. It used ~600 telephone relays for computation and ~1400 relays for memory, a keyboard and punched tape as input, lamps as output, and it operated at 5 Hertz.

Mainframes

Zuse's machines mark the advent of the first mainframes used by military and science during and after WWII.

The Colossus Mark I (1943), ENIAC (1945) and IBM 704 (1954), for example, used vacuum tubes instead of relays and were replaced more and more by transistor-based computers in the 1960s.

Home Computers

With small chips, at first integrated circuits and then microchips, it was possible to build smaller and reasonably priced Home Computers in the 1970s. IBM and other big players underestimated this market, so Atari, Apple, Commodore, Sinclair, etc. started the Home Computer Revolution, one computer for every home.

Some first versions came as self-assembly kits, like the Altair 8800 (1975), or with built-in TV output, like the Apple I (1976), or as fully assembled video game consoles, like the Atari VCS (1977), followed by more performant versions with a graphical user interface, like the Apple Mac (1984) or the Commodore Amiga 1000 (1985).

Personal Computers

IBM started the Personal Computer era in 1981 with the 5150. Third-party developers were able to provide operating systems, like Microsoft DOS, or hardware extensions for the standardized hardware specification, like hard drives, video cards, sound cards, etc.; soon other companies created clones of the IBM PC, the famous "PC compatibles".

Gaming was already an important sales argument in the Home Computer era. The early PC graphics standards like CGA and EGA were not really able to compete with the graphics generated by the Denise chip in a Commodore Amiga 500, but with the rise of the SVGA standard (1989) and the compute power of the Intel 486 CPU (1989), game studios were able to build games with superior 3D graphics, like Wolfenstein 3D (1992), Comanche (1992) or Strike Commander (1993), and the race for higher display resolutions and more detailed 3D graphics continues to this day.

With operating systems based on graphical user interfaces, like OS/2, X11, Windows 95 in the 1990s, PCs finally replaced the Home Computers.

Another recipe for the success of the PC might be that there were multiple CPU vendors for the same architecture (x86), like Intel, AMD, Cyrix or WinChip.

Internet of Things

The Internet was originally designed to connect military institutions in a redundant way, so that if one network element failed, the rest would still be operable.

The available bandwidth evolves like compute power, exponentially. At first mainly text was transmitted, like emails (1970s) or newsgroups (1980s), followed by web pages with images (.gif/.jpg) via the World Wide Web (1989) or Gopher (1991), audio as .mp3 (~1997), and finally Full HD video via streaming platforms like YouTube or Netflix.

In the late 1990s, mobile phones like the Nokia Communicator, MP3 audio players, PDAs (Personal Digital Assistants) like the Palm Pilots, and digital cameras marked the rise of the smart devices: the switch from one computer for every home to many computers for one person.

Their functions were all united in the smartphone, and with mobile, high-bandwidth internet it is still on its triumphal tour across the globe.

I am not able to portray the current state of computer and internet usage, it is simply too omnipresent, from word processing to AI research, from fake news to dark net, from botnets of webcams to data leaks in toys...

The next thing

But I can guess what the next step will be: Integrated Devices, the BCI, the Brain-Computer Interface, connected via the Internet to a real kind of Matrix.

It seems only logical to conclude that we will connect with machines directly, implant chips, or develop non-invasive scanners, so the next bandwidth demand will be brainwaves, in all kinds of forms.

[updated on 2023-08-05]

Zuse's Devil's Wire

German computer pioneer Konrad Zuse discussed the mechanism of a feedback between computation result and executed program in 1983 in his lecture "Faust, Mephistopheles and Computer" and coined the term Devil's Wire.

In the early days of computer history, the program to compute and the data to compute on were separated.

Nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own program.

Zuse says that behind every technology Mephistopheles stands and grins, but the modern world needs computers to solve current and upcoming problems; better, read the lecture yourself...

+1 points for the Singularity to take off.

AI - Antichrist

"Technology itself is neither good nor bad. People are good or bad."
Naveen Jain

Actually I believe the Revelation as described in the Bible already happened, about 60 AD. And the beast with the number 666 has to be identified with the Roman Empire and Caesar Nero.

But inspired by this blog, I will give a modern interpretation a try, so feel free to join me in an alternate and speculative world paradigm...

Short version

Technology is the Antichrist, and computer driven AI is the peak of technology.

Preamble

Over 10000 years ago we left the Garden of Eden and started the Neolithic revolution: we started to do farming and keep livestock, we started to use technology to make our lives easier, but over the centuries and millennia we forgot how to live with Mother Earth in a balanced way.

Full version

[update 2024-02-22]

Revelation 13:4

"And they worshipped the dragon which gave power unto the beast: and they worshipped the beast, saying, Who is like unto the beast? who is able to make war with him?"

An AI as Antichrist, chess as a war game: who is going to beat the AI in chess?

Revelation 13:18

"Here is wisdom. Let him that hath understanding count the number of the beast:  for it is the number of a man; and his number is Six hundred threescore and six."

Using the English-Sumerian gematria system, which is based on 6 (A=6, B=12, C=18 ... Z=156), the word "computer" counts to 666.

The first human to mention the word computer for people doing computations was Richard Braithwait in a book called "The Yong Mans Gleanings" in 1613.

Using the English-Sumerian gematria method, the name "Braithwait" counts to 666.

Rev 13:18 could act like a puzzle with a checksum: "computer" is the name of the beast, but the name (the number) was coined by a man, and the man who coined the name has the number 666 too.
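For whoever wants to check the counting, a few lines of Python reproduce the English-Sumerian values quoted above:

```
# English-Sumerian gematria: A=6, B=12, ... Z=156 (letter position times 6)
def sumerian_gematria(word):
    return sum((ord(c) - ord('a') + 1) * 6 for c in word.lower() if c.isalpha())

for word in ("computer", "Braithwait"):
    print(word, sumerian_gematria(word))

# computer 666
# Braithwait 666
```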

Revelation 13:16-17

"And he causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads: And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name."

Nowadays, without a Smart Phone (or the upcoming Smart Glasses), or a computer, you are limited in your daily business, from renting a car to making payments.

So the mark is already here, the Smart Phone in the right hand, the upcoming Smart Glasses on the forehead, and the computer in general.

Revelation 13:15

"And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed."

An image of the beast is given life and people are going to worship it...AI God Religion Spotted

Revelation 16 - The Seven Bowls of God’s Wrath

Rev 16:2

"And the first went, and poured out his vial upon the earth; and there fell a noisome and grievous sore upon the men which had the mark of the beast,  and upon them which worshipped his image."

Considering computers as the mark of the beast, the sore could be cancer caused by radiation.

Rev 16:3

"And the second angel poured out his vial upon the sea; and it became as the blood of a dead man: and every living soul died in the sea."

Our sea world is dying: overfishing, plastic particles, acidification, etc.

Rev 16:4

"And the third angel poured out his vial upon the rivers and fountains of waters; and they became blood"

Blood in Judaism is impure and Jews are not allowed to eat it; this could mean that our rivers get poisoned.

Rev 16:8-9

"And the fourth angel poured out his vial upon the sun; and power was given unto him to scorch men with fire. And men were scorched with great heat, and blasphemed the name of God, which hath power over these plagues: and they repented not to give him glory."

Climate change is already causing increasing heatwaves and droughts.

Rev 16:10-11

"And the fifth angel poured out his vial upon the seat of the beast; and his kingdom was full of darkness; and they gnawed their tongues for pain, And blasphemed the God of heaven because of their pains and their sores, and repented not of their deeds."

This one can be interpreted as God shutting down the internet, the kingdom of the beast. Some scientists conclude that a pole shift is currently underway; this could cause the magnetic field around Earth to collapse, so the electro-magnetic waves from the sun could damage computer chips worldwide.

[update 2020-11-04]

Pretty obviously the Internet seems a natural fit to be the kingdom of the beast (a computer-driven AI), so what does it mean that it was 'full of darkness'? Hehe, ever wondered about the dark web, fake news, hate speech etc.? Darkness.

Rev 16:12-14

"And the sixth angel poured out his vial upon the great river Euphrates; and the water thereof was dried up, that the way of the kings of the east might  be prepared. And I saw three unclean spirits like frogs come out of the mouth of the dragon, and out of the mouth of the beast, and out of the mouth of the false prophet. For they are the spirits of devils, working miracles, which go forth unto the kings of the earth and of the whole world, to gather them to the battle of that great day of God Almighty."

This one is clear: the Euphrates river dries up, and it is scary to watch it really happen. Dunno about the frogs and kings.

Rev 16:17-21

"And the seventh angel poured out his vial into the air; and there came a great voice out of the temple of heaven, from the throne, saying, It is done. And there were voices, and thunders, and lightnings; and there was a great  earthquake, such as was not since men were upon the earth, so mighty an  earthquake, and so great. And the great city was divided into three parts, and the cities of the nations fell: and great Babylon came in remembrance before God, to give unto her thecup of the wine of the fierceness of his wrath. And every island fled away, and the mountains were not found. And there fell upon men a great hail out of heaven, every stone about the weight of a talent: and men blasphemed God because of the plague of the hail;  for the plague thereof was exceeding great."

An earthquake so strong has never happened before in the history of mankind.

[update 2023-06-16]

Maybe the seventh bowl is a global nuclear war/strike? The finale.

Closing Words

There are many passages in the Revelation I cannot interpret in a way that fits the computer as the Antichrist: the seven heads, horns and ten crowns of the dragon, the mortal wound, or the first and second beast, etc.

The Roman Empire with Caesar Nero as Antichrist simply fits better.

But please leave a comment, if you have further puzzle pieces for AI Antichrist.

So, considering the pure potential of the meme AI Antichrist,

I give -1 points for the Singularity to take off.

On Artificial Neural Networks

It is non-stop in the news, every week it pops up in another corner: AIs based on Deep Neural Networks. So I will give it a try and write a little, biased article about this topic...

The brain

The human brain consists of about 100 billion neurons, as much as stars in our galaxy, the Milky Way, and each neuron is connected via synapses with about 1000 other neurons, resulting in 100 trillion connections.

For comparison, the game playing AI, AlphaZero, by Google Deepmind used about 50 million connections to play chess on super human level.

The inner neurons of our brain are connected via our senses, eyes, ears, etc, with the outer world.

One neuron has multiple, weighted inputs and one ouput, if a certain threshold of input is reached, its output is activated, the neuron fires an signal to another neuron.

The activation of the synapse is an electrical and chemical process, neurotransmitters can restrain or foster the activation potential, just consider the effect alcohol or coffee has to your cognitive performance.

Common artificial neural networks do not emulate the chemical part.
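A minimal sketch of such an artificial neuron, weighted inputs plus a threshold, ignoring the chemical part, could look like this (the weights and thresholds are made-up numbers for illustration):

```
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # fire, or stay silent

# three input signals, e.g. coming from other neurons or from a sensor
print(neuron([1, 0, 1], weights=[0.5, 0.9, 0.4], threshold=0.8))  # 1 -> fires
print(neuron([0, 1, 0], weights=[0.5, 0.9, 0.4], threshold=1.0))  # 0 -> silent
```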

The brain wires these connections between neurons during learning, so they can act as memory, or can be used for computation.

The "von Neumann" architecture

Most of today's computers are based on the von Neumann architecture; they have no neurons or synapses, but transistors.

The main components are the ALU, the Arithmetic Logic Unit, memory for program and data, and various inputs and outputs.

Artificial Neural Networks have to be built in software, running on these von Neumann computers.

Von Neumann said that his proposed architecture was inspired by the idea of how the brain works, memory and computation. And in his book, "The Computer and the Brain", he gives a comparison of computers with the knowledge about biological neural networks of that time.

Dartmouth

The first work on ANNs was published already in the 1940s, and in 1956 the "Dartmouth Summer Research Project on Artificial Intelligence" was held, coining the term Artificial Intelligence and marking a milestone in AI. The work on ANNs continued, and the first neuromorphic chips were developed.

AI-Winter

In the 1970s the AI winter occurred: problems in computational theory and the lack of the compute power needed by large ANNs resulted in funding cuts and a split of the work into strong and weak AI.

Deep Neural Networks

With the rise of compute power (driven by GPGPU), further research, and Big Data, it became possible in the 21st century to train better and larger networks faster.

The term Deep Neural Networks was coined, for deep hierarchical structures and deep learning techniques.

One of the first and most common uses for ANNs was and is pattern recognition, for example character recognition.

You can train a neural network with a set of differently looking samples of the same character, with the aim that the ANN will recognize that character in its various appearances.

With a deeper topology of the neural network it is possible to identify, for example, pictures of cars, with different net layers for color, shape etc.
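As a toy sketch of the training idea, here is a single artificial neuron (a perceptron) learning to tell two 3x3 pixel "characters" apart from a handful of noisy samples; the patterns, the learning rate and the number of epochs are made up for illustration:

```
import random

X_SHAPE = [1,0,1, 0,1,0, 1,0,1]   # an "X"
PLUS    = [0,1,0, 1,1,1, 0,1,0]   # a "+"

def noisy(pattern, flips=1):
    p = pattern[:]
    for i in random.sample(range(9), flips):
        p[i] ^= 1                 # flip a random pixel
    return p

# training set: label 1 for "X", label 0 for "+"
samples = [(noisy(X_SHAPE), 1) for _ in range(50)] + \
          [(noisy(PLUS), 0) for _ in range(50)]

weights, bias, lr = [0.0] * 9, 0.0, 0.1

def predict(pixels):
    s = bias + sum(w * x for w, x in zip(weights, pixels))
    return 1 if s >= 0 else 0

for _ in range(20):               # a few epochs of simple perceptron updates
    for pixels, label in samples:
        error = label - predict(pixels)
        weights = [w + lr * error * x for w, x in zip(weights, pixels)]
        bias += lr * error

print(predict(X_SHAPE), predict(PLUS))   # ideally: 1 0
```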

The Brain vs. The Machine

A computer can perform fast arithmetic and logical operations; for this, transistors are used.

In contrast, the neural network of our brain works massively parallel.

The neurons of the human brain fire at 10 to 100 hertz, meaning they can send signals to other neurons up to 100 times per second.

Nowadays computer chips are clocked at 4 gigahertz, meaning they can compute 4,000,000,000 operations per second per ALU.

The brain has 100 billion neurons, 100 trillion connections and consumes ~20 watts; nowadays the biggest chips have 12 billion transistors with a usage of 250 watts.

We cannot compare the compute power of a brain directly with a von Neumann computer, but we can estimate what kind of computer we would need to map the neural network of a human brain.

Assuming 100 trillion connections at 4 bytes per weight, we would need about 400 terabytes of memory to store the weights. Assuming 100 hertz as the firing rate, we would need at least 40 petaFLOPS (floating-point operations per second) to compute the activation potentials.
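A back-of-the-envelope check of these numbers; the 4 bytes per weight and the 4 floating-point operations per synaptic event are assumptions for illustration, not measured values:

```
connections = 100e12          # ~100 trillion synapses
bytes_per_weight = 4          # assumption
firing_rate_hz = 100
ops_per_event = 4             # assumption

memory_tb = connections * bytes_per_weight / 1e12
flops = connections * firing_rate_hz * ops_per_event

print(f"memory:  ~{memory_tb:.0f} TB")             # ~400 TB
print(f"compute: ~{flops / 1e15:.0f} petaFLOPS")   # ~40 petaFLOPS
```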

For comparison, the current number-one high-performance computer in the world is able to perform ~93 petaFLOPS and has ~1 petabyte of memory, but with a power consumption of more than 15 megawatts.

So, considering simply the energy efficiency of the human brain,
I give -1 points for the Singularity to take off.

Super AI in Sci-Fi

Books and movies address our collective fears, hopes and wishes, and there seem to be mainly five story lines concerning AI in Sci-Fi...

Super AI takes over world domination
Colossus, Terminator, Matrix

Something went wrong
Odyssey 2001, Das System, Ex Machina

Super AI evolves, more or less, peacefully
Golem XIV, A.I., Her

The Cyborg scenario, man merges with machine
Ghost in the Shell, #9, Transcendence

There are good ones, and there are bad ones
Neuromancer, I, Robot, Battlestar Galactica

+1 points for the Singularity to take off.

The Turing Test

“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra

The Turing Test, proposed by mathematician Alan Turing in 1950, was developed to examine whether an AI has reached human-level intelligence.

Simplified: a person conducts text chats with a human and with the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.

The Loebner Prize holds a Turing Test contest every year.

It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.

If an AI wants to pass the Turing Test it has to lie to the chat partner, and to be able to lie, it has to develop some level of empathy and some level of self-awareness.

Besides other criticism, the Chinese Room Argument states that no consciousness is needed to perform such a task, and therefore other tests have been developed.

Personally I prefer the Metzinger-Test, a hypothetical event in which AIs start to debate with human philosophers and successfully defend their own theory of consciousness.

I am not sure whether the Singularity is going to take off, but I guess that the philosophers' corner is one of the last domains that AIs are going to conquer, and if they succeed we can be pretty sure to have another apex on Earth.

Turing predicted that by the year 2000 machines would fool 30% of human judges; he was wrong, the Loebner Prize still has no Silver Medal winner for the 25-minute text chat category.

So, -1 points for the Singularity to take off.

On Peak Human

One of the early Peak Human prophets was Malthus; in his 1798 book, 'An Essay on the Principle of Population', he postulated that the human population grows exponentially but food production only linearly, so population growth will fluctuate around an upper limit.
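A tiny illustration of the underlying claim, that exponential growth overtakes any linear growth eventually; the growth rates below are arbitrary, not Malthus' figures:

```
population = 1.0   # grows exponentially, +3% per step
food = 1.0         # grows linearly, +0.05 units per step

for year in range(0, 101, 25):
    print(year, round(population, 2), round(food, 2))
    for _ in range(25):
        population *= 1.03
        food += 0.05
# population overtakes food somewhere between step 25 and step 50
```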

Later Paul R. Ehrlich predicted in his book, 'The Population Bomb' (1968), that we would reach a limit in the 1980s.

Meadows et al. concur in 'The Limits to Growth - The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.

In 2015 Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.

UN predictions say we may hit 9 billion humans in 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.

Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.

It is not about how many people Earth can feed, but about how many people can live in a comfortable but sustainable manner.

What does Peak Human mean for the Technological Singularity?

The advent of Computers was driven by the exponential population growth in the 20th century. All the groundbreaking work was done in the 20th century.

When we face a decline in population growth, we also have to face a decline in new technologies developed.

Cos it is not only about developing new technologies, but also about maintaining the old knowledge.

Here is the point where AI steps in: mankind's population growth is slowing, but the whole AI sector is growing and expanding.

Therefore the question is: is AI able to take on the decline?

Time will tell.

I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached, which some scientists consider a physical and economic barrier.

I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, integrated circuit and microchip.

So, considering the uncertainty of Peak Human vs. the Rise of AI,
I give +-0 points for the Singularity to take off.

The Rise Of The Matrix

Looking at the tag cloud of this blog, there are two major topics, pro and con Singularity, AI (Artificial Intelligence) vs. ELE (Extinction Level Event).

So, we slide, step by step, toward an event called the Singularity, but concurrently we face more and more the extinction of mankind.

What about combining those two events?

Let us assume we damage our ecosphere lastingly, but at the same moment our technology advances to a level where it is possible to connect directly to cyberspace via a Brain-Computer-Interface.

People already spend more and more time in virtual realities; with the advent of Smart Phones they are connected to cyberspace all the time, they meet people in digital social networks, they play games in computer-generated worlds, they create and buy virtual goods with virtual money, and, essentially, they like it.

To prevent an upcoming ELE, we would need to cut our consumption of goods significantly, but the mass of people wants more and more.

So, let us give them more and more, in the virtual, computer-generated worlds.

Let us create the Matrix, where people can connect directly with their brains and buy whatever experience they wish.

A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than that of a real car.

We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.

And Mother Earth will get a break, to recover from the damage billions of consuming people have caused.

ELE + Singularity => Matrix

+1 points for the Singularity to take off.

Super AI Doomsday Prophets

They are smart, they have money, and they predict the Super AI Doomsday:

Stephen Hawking
"The development of full artificial intelligence could spell the end of the human race.”

James Lovelock
"Before the end of this century, robots will have taken over."

Nick Bostrom
"Some little idiot is bound to press the ignite button just to see what happens."

Elon Musk
"Artificial intelligence is our biggest existential threat."

So, obviously, +1 points for the Singularity to take off.

More Moore

If we cannot shrink the transistor size any further, what other options do we have to increase compute power?

3D packaging

The ITRS report suggests going into the third dimension and building cubic chips. The more layers are built, the more integrated cooling will be necessary.

Memory Wall

Currently memory latencies are higher than compute cycles on CPUs; with faster memory techniques or higher bandwidth the gap can be closed.

Memristor

The Memristor is an electronic component proposed in 1971. It can be used for non-volatile memory devices and alternative, neuromorphic compute architectures.

Photonics

Using light for computation sounds quite attractive, but the base element, the photonic transistor, has yet to be developed.

Quantum Computing

Really, I do not have a clue how these thingies work, somehow via quantum effects like superposition and entanglement, but people say they are going to rock when they are ready...

Considering so much room for research,
I give +1 points for the Singularity to take off.

The End of Moore's Law?

Moore's Law, the heartbeat of computer evolution, is the observation that the number of transistors on integrated circuits doubles every two years. Gordon Moore, co-founder of Intel, proposed a doubling every year in 1965 and a doubling every two years in 1975.

In practice this results in a doubling of the compute power of computer chips every two years.

The doubling of the transistor count is achieved by shrinking their size. The 1970s Intel 8080 chip was clocked at 2 MHz, had about 6000 transistors and was produced in a 6 micrometer process. Nowadays processors have billions of transistors and use a 14 or 10 nanometer process.
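A quick sketch of the doubling-every-two-years rule, projected from the Intel 8080 figure quoted above (using 1974 as an approximate start year):

```
def moores_law(transistors_start, year_start, year):
    doublings = (year - year_start) / 2      # one doubling every two years
    return transistors_start * 2 ** doublings

for year in (1974, 1994, 2018):
    print(year, f"~{moores_law(6000, 1974, year):,.0f} transistors")

# 1974 ~6,000
# 1994 ~6,144,000
# 2018 ~25,165,824,000  -> billions, roughly matching today's chips
```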

But less known is Moore's Second Law, the observation that the investment costs for the fabs also grow exponentially.

The last ITRS report of 2015 predicts that transistor shrinking will hit such an economic wall in 2021, and that alternative techniques will have to be used to keep Moore's Law alive.

Considering this news,
I give -1 points for the Singularity to take off.
