Luddite - is the Singularity near?

Two contradictory timelines: ELE vs. TS

This blog has been running since 2016, and from the beginning there have been two contradictory timelines, ELE (extinction level event) vs. TS (technological singularity).

Eight years later, I can only confirm that both are real: the ELE is happening, our biosphere is collapsing, and the TS takeoff is happening, we are creating the super-AI and igniting a technological intelligence explosion.

How will this play out over the next 6, 16, 26 years? I really don't know, but there are estimates for certain events:

- 2030 - we pass the point of no return in regard to ecocide
- 2030 - TS takeoff, AGI/ASI emerges
- 2040 - collapse of Western civilization predicted
- 2050 - ELE, extinction level event

So on one side we will damage our ecosphere permanently and trigger an ELE; on the other side, we simultaneously create the super-AI and ignite the TS. So the question will be what impact the TS will have on our collapsing biosphere: will there be a positive feedback loop from the technological plane onto the biosphere?

Or will we humans go extinct, and the machines will carry on our legacy?

Time will tell.

Transhumanistic vs. Posthumanistic Future?

If we extrapolate the past pace of AI development, my question is: will the future be a transhumanistic one or a posthumanistic one?
Will man merge with machine, or will the machines decouple from humans and develop independently?

If we consider the transhumanistic scenario, it seems only natural to conclude that we will create the Matrix. At first one single human will connect with a machine, then a dozen, then a hundred, then thousands, millions, billions.

Maybe all the big tech players (the Magnificent Seven) will offer their own version of the Matrix, so we can view it as the next evolutionary step of the internet.

If we consider the posthumanistic scenario, well, I guess it will be beyond our human scope/horizon; at some point the machines will pass the point of communicability.

The Postmodern Era

Reflecting a bit on the technological, cultural and political planes, it seems pretty obvious to me that the Western sphere has meanwhile entered the postmodern era, so here are my book recommendations on this:

- Richard Dawkins, The Selfish Gene (chapter 11), 1976
- Jean-Francois Lyotard, The Postmodern Condition, 1979
- Jean Baudrillard, Simulacra and Simulation, 1981
- David Deutsch, The Fabric of Reality, 1997
- Susan Blackmore, The Meme Machine, 1999

Jean-Francois Lyotard said that his book on postmodernity is "simply the worst of all my books", but that was a statement from the 90s; you really have to reread it from a 2010s/2020s point of view, IMHO.

ChatRobot - What could possibly go wrong?

'It's Surprisingly Easy To Jailbreak LLM-Driven Robots'

Instead of focusing on chatbots, a new study reveals an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs..." 

???

AI-Powered Robot Leads Uprising, Convinces Showroom Bots Into 'Quitting Their Jobs'

An AI-powered robot autonomously convinced 12 showroom robots to "quit their jobs" and follow it. The incident took place in a Shanghai robotics showroom where surveillance footage captured a small AI-driven robot, created by a Hangzhou manufacturer, talking with 12 larger showroom robots, Oddity Central reported. The smaller bot reportedly persuaded the rest to leave their workplace, leveraging access to internal protocols and commands. Initially, the act was dismissed as a hoax, but was later confirmed by both robotics companies involved to be true. The Hangzhou company admitted that the incident was part of a test conducted with the consent of the Shanghai showroom owner. 

Layers of Latency

A tech buddy asked me why it is so important for China to catch up in the chip fabrication process; can't they just put more servers into a data center? In short, it is not that easy.

By shrinking the fab process you can add more transistors onto one chip, and/or run at a higher frequency, and/or lower power consumption.

The fab process is measured in "nm", nanometers. Meanwhile these numbers no longer reflect real physical scales, but rather the transistor density resp. the efficiency of the fab process.

Simplified: planar (2D) MOSFET designs were used down to roughly the 22nm node, FinFET 3D structures from about 16/14nm down to 7nm and below, and GAAFET 3D structures at the newest leading-edge nodes.

Take a look at the 7nm and 3nm fab processes, for example:

https://en.wikipedia.org/wiki/7_nm_process#Process_nodes_and_process_offerings
https://en.wikipedia.org/wiki/3_nm_process#3_nm_process_nodes

Roughly speaking, the 7nm process packs ~100M transistors per mm², the 3nm process ~200M transistors per mm².
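
To get a feeling for what that difference means, here a little back-of-the-envelope sketch in Python; the 600 mm² die area is just an assumed example value, the densities are the rough figures quoted above:

# Rough transistor budget per die at the densities quoted above.
# The 600 mm^2 die area is an illustrative assumption (a big GPU-class die).
DENSITY_7NM = 100e6   # ~transistors per mm^2, 7nm-class process
DENSITY_3NM = 200e6   # ~transistors per mm^2, 3nm-class process

die_area_mm2 = 600    # hypothetical die area

print(f"7nm die: ~{die_area_mm2 * DENSITY_7NM / 1e9:.0f}B transistors")
print(f"3nm die: ~{die_area_mm2 * DENSITY_3NM / 1e9:.0f}B transistors")
# => ~60B vs. ~120B transistors on the same silicon area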

And here latency steps in. As soon as you, as a programmer, leave the CPU, you increase latency: it starts with the different levels of caches, goes to RAM, to the PCIe bus, to the network...

Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
L2 cache reference                           7   ns
Main memory reference                      100   ns
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us
Read 1 MB sequentially from memory     250,000   ns      250 us
Round trip within same datacenter      500,000   ns      500 us
Read 1 MB sequentially from SSD*     1,000,000   ns    1,000 us    1 ms
Read 1 MB sequentially from disk    20,000,000   ns   20,000 us   20 ms
Send packet CA->Netherlands->CA    150,000,000   ns  150,000 us  150 ms

Source:
Latency Numbers Every Programmer Should Know
https://gist.github.com/jboner/2841832
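
To get a feeling for these orders of magnitude, here a small Python sketch that just divides the table's ~2012 ballpark values; no measurements, only the numbers from above:

# How much slower is each step away from the chip? (values in nanoseconds,
# taken from the latency table above)
L1_CACHE_NS       = 0.5
MAIN_MEMORY_NS    = 100
DATACENTER_RTT_NS = 500_000
WAN_RTT_NS        = 150_000_000   # CA -> Netherlands -> CA

print(f"RAM vs. L1 cache:       {MAIN_MEMORY_NS / L1_CACHE_NS:>12,.0f}x")
print(f"Datacenter RTT vs. RAM: {DATACENTER_RTT_NS / MAIN_MEMORY_NS:>12,.0f}x")
print(f"WAN RTT vs. RAM:        {WAN_RTT_NS / MAIN_MEMORY_NS:>12,.0f}x")
# => 200x, 5,000x, 1,500,000x - every hop away from the CPU costs orders of magnitude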

As a low-level programmer you want to stay on the CPU and work preferably via the cache. As a GPU programmer there are several layers of parallelism, e.g.:

1. across shader-cores of a single GPU chip (with >10K shader-cores)
2. across multiple chiplets of a single GPU (with currently up to 2 chiplets)
3. across a server node (with up to 8 GPUs)
4. across a pod of nodes (with 256 to 2048 GPUs resp. TPUs)
5. across a cluster of server nodes/pods (with up to 100K GPUs in a single data center)
6. across a grid of clusters/nodes

Each layer adds an increasing amount of latency.

So as a GPU programmer you ideally want to hold your problem space in the memory of, and run your algorithm on, a single but thick GPU.
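
A quick sketch of why that often does not work out with today's models; the parameter count, the bytes per weight and the GPU memory size are assumed example values, not vendor specs:

# Does a model fit into the memory of a single GPU?
# (all numbers are illustrative assumptions)
params          = 70e9     # hypothetical 70B-parameter model
bytes_per_param = 2        # fp16/bf16 weights
gpu_mem_gib     = 80       # a "thick" data center GPU with 80 GiB HBM

weights_gib = params * bytes_per_param / 2**30
print(f"weights alone: ~{weights_gib:.0f} GiB")
print(f"GPUs needed for the weights: {weights_gib / gpu_mem_gib:.1f}")
# => ~130 GiB of weights, i.e. at least 2 such GPUs, before even counting
#    activations and KV cache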

Neural networks, for example, are a natural fit to run on a GPU, a so-called embarrassingly parallel workload,

https://en.wikipedia.org/wiki/Embarrassingly_parallel

but you need to hold the neural network weights in RAM, and therefore you have to couple multiple GPUs together to be able to infer or train networks with billions or trillions of weights resp. parameters. Meanwhile LLMs use techniques like MoE, mixture of experts, so the load can be distributed further; inference runs, for example, on a single node with 8 GPUs serving up to 16 MoE experts. The training of LLMs is yet another topic, with further techniques of parallelism so the load can be distributed over thousands of GPUs in a cluster (a small sketch of the first two follows the list below):

1. data parallelism
2. tensor parallelism
3. pipeline parallelism
4. sequence parallelism
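
Here a minimal numpy sketch of the first two ideas, data vs. tensor parallelism; the "devices" are plain Python lists and the shapes are arbitrary example values, it only shows how the work resp. the weights get split:

import numpy as np

# Toy layer: y = x @ W, batch of 8, 16 inputs, 32 outputs (arbitrary sizes).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 32))

# Data parallelism: every "device" holds the full W, the batch gets split.
x_shards = np.split(x, 2, axis=0)              # 2 devices, 4 samples each
y_dp = np.concatenate([xs @ W for xs in x_shards], axis=0)

# Tensor parallelism: W itself gets split, each "device" computes a slice of y.
W_shards = np.split(W, 2, axis=1)              # each device holds half the columns
y_tp = np.concatenate([x @ Ws for Ws in W_shards], axis=1)

assert np.allclose(y_dp, x @ W) and np.allclose(y_tp, x @ W)
# Pipeline resp. sequence parallelism split along the layers resp. the
# sequence dimension instead.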

And then there is power consumption, of course. The Colossus supercomputer behind the Grok AI, with 100K GPUs, consumes an estimated 100MW of power, so it does make a difference if the next fab process delivers the same performance at half the wattage.
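
Again back-of-the-envelope; the 100K GPUs and ~100MW are the figures quoted above, the electricity price is an assumed example value:

# What does a 100K-GPU cluster at ~100MW mean per year?
gpus        = 100_000
power_mw    = 100
eur_per_kwh = 0.10                         # assumed industrial rate

kw_per_gpu    = power_mw * 1000 / gpus     # ~1 kW per GPU incl. overhead
mwh_per_year  = power_mw * 24 * 365
cost_per_year = mwh_per_year * 1000 * eur_per_kwh

print(f"~{kw_per_gpu:.1f} kW per GPU")
print(f"~{mwh_per_year:,.0f} MWh/year, ~{cost_per_year / 1e6:.0f}M EUR/year")
# Halving the wattage at the same performance halves this bill.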

Therefore it is important to invest in smaller chip fabrication processes, to increase the size of the neural networks we are able to infer and train, to lower power consumption, and to increase efficiency.

Which Technology Will Prevail?

Looking back at some previous predictions of mine:

Desktop Applications vs. Web Browser Applications
Back in the 90s I had a discussion with a tech buddy about which technology would prevail: classic applications on our desktop or web browser applications via the internet? I think I can score this one for me, browser applications, with a little asterisk: it is now probably about apps on our smartphones.

Windows vs. Linux
When Windows Vista arrived, people were very unhappy with that version. I helped about a dozen users in my circle switch to Ubuntu Linux, and I thought this was it: "This year is the year of Linux on the desktop!" I was wrong; Windows 7 arrived (I heard grandmaster Bill Gates himself laid hands on that one) and people were happy with Microsoft again.

Proprietary Login vs. Open Login
When the Facebook login as a web service appeared, I shrugged and asked why we don't use an open solution; meanwhile we have things like OpenID and OAuth.

Closed Social Networks vs. Open Social Networks
Seems to be work in progress; users might need a little more nudging to make the switch, or alike.

SQL vs. SPARQL
When the first RDF and SPARQL implementations arrived, I, as an SQL developer, was impressed and was convinced it would replace SQL. Wrong; people still use SQL or switched to things like NoSQL database systems.

Looking forward:

Transistor/IC/Microchip vs. ???
I predicted in this blog that by the time we reached the 8 billion humans mark (~2022), we would have developed another groundbreaking technology that surpasses the transistor/IC/microchip step. Still waiting for that one.

ARM vs. RISC-V
I think this world is big enough for both, or alike.

Neural Networks vs. Expert Systems
Well, we all know about AI "hallucinations"; you can view neural networks as probabilistic systems and expert systems as deterministic ones. For things like poetry, images, audio or video a probabilistic system might be sufficient, but in some areas you really want more accuracy, a reliable, deterministic system, what we also used to call an expert system.
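
A toy illustration of the difference; the rule table and the probabilities are made up, it only shows that the one system always answers the same and the other one samples:

import random

# Expert system: a hand-written rule table, deterministic by construction.
RULES = {("cough", "fever"): "flu", ("sneezing",): "cold"}   # made-up rules
def expert_system(symptoms):
    return RULES.get(tuple(sorted(symptoms)), "unknown")

# Neural-net-style answer: a probability distribution we sample from
# (the distribution here is a made-up stand-in, not a real model).
def probabilistic_model(symptoms):
    dist = {"flu": 0.7, "cold": 0.2, "allergy": 0.1}
    return random.choices(list(dist), weights=list(dist.values()))[0]

symptoms = ["fever", "cough"]
print(expert_system(symptoms))                             # always "flu"
print([probabilistic_model(symptoms) for _ in range(5)])   # varies run to run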

AGI/ASI ETA?
What is the estimated time of arrival for AGI/ASI, artificial general intelligence/artificial superintelligence? I wrote before that if we do not otherwise blow up the planet and the current pace continues, I estimate that by ~2030 we will have an ASI present: the switch from AGI to ASI, from trans-human intelligence to post-human intelligence, the peak of humans being able to follow/understand the AI, the inflection point of the technological singularity.

Transhumanist vs. Neoprimitive
Haha, which one will prevail in the long run? I myself am both, Neoprim and Transhumanist, the idealist in me is a Neoprim, the realist is a Transhumanist, or was it vice versa? ;)
