Luddite - is the Singularity near?

Two contradicting timelines, ELE vs. TS

This blog has been running since 2016, and from the beginning there have been two contradicting timelines: ELE (extinction level event) vs. TS (technological singularity).

Eight years later, I can only confirm that both are real: the ELE is happening, our biosphere is collapsing, and the TS takeoff is happening, we are creating the super-AI and igniting a technological intelligence explosion.

How will this play out over the next 6, 16, 26 years? I really don't know, but there are estimates for certain events:

- 2030 - we pass the point of no return in regard to ecocide
- 2030 - TS takeoff, AGI/ASI emerges
- 2040 - collapse of Western civilization predicted
- 2050 - ELE, extinction level event

So on one side we will lastingly damage our ecosphere and trigger an ELE; on the other side, we simultaneously create the super-AI and ignite the TS. So the question will be what impact the TS will have on our collapsing biosphere: will there be a positive feedback loop from the technological plane back onto the biosphere?

Or will we humans go extinct, and the machines will carry on our legacy?

Time will tell.

Transhumanistic vs. Posthumanistic Future?

If we extrapolate the past pace of AI development, my question is: will the future be a transhumanistic one or a posthumanistic one?
Will man merge with machine, or will the machines decouple from humans and develop independently?

If we consider the transhumanistic scenario, it seems only natural to conclude that we will create the Matrix. At first one single human will connect with a machine, then a dozen, then a hundred, then thousands, millions, billions.

Maybe all big tech players (the magnificent seven) will offer their own version of the Matrix, so we can view it as the next evolutionary step of the internet.

If we consider the posthumanistic scenario, well, I guess it will be beyond our human scope/horizon; at some point the machines will pass the point of communicability.

The Postmodern Era

Reflecting a bit on the technological, cultural, and political planes, it seems pretty obvious to me that the Western sphere has meanwhile entered the postmodern era, so here are my book recommendations on this:

- Richard Dawkins, The Selfish Gene (chapter 11), 1976
- Jean-Francois Lyotard, The Postmodern Condition, 1979
- Jean Baudrillard, Simulacra and Simulation, 1981
- David Deutsch, The Fabric of Reality, 1997
- Susan Blackmore, The Meme Machine, 1999

Jean-Francois Lyotard said that his book on post-modernity was "simply the worst of all my books", but that was a statement from the 90s; you really have to reread it from a 2010s/2020s point of view IMHO.

Which Technology Will Prevail?

Looking back at some previous predictions of mine:

Desktop Applications vs. Web Browser Applications
Back then in the 90s I had a discussion with a tech-buddy about which technology would prevail: classic applications on our desktop, or web browser applications via the internet? I think I can score this one for me, browser applications, with a little asterisk: it is now probably about apps on our smartphones.

Windows vs. Linux
When Windows Vista arrived, people were very unhappy with that version. I helped about a dozen users in my circle switch to Ubuntu Linux, and I thought this was it: "This year is the year of Linux on the desktop!" I was wrong; Windows 7 arrived (I heard grandmaster Bill Gates himself laid hands on that one), and people were happy with Microsoft again.

Proprietary Login vs. Open Login
When the Facebook login appeared as a web service, I shrugged and asked why we don't use an open solution; meanwhile we have things like OpenID and OAuth.
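To illustrate what such an open login looks like in practice, here is a minimal sketch of the first redirect step of an OAuth 2.0 / OpenID Connect authorization-code flow; the endpoint, client id, and redirect URI are made-up placeholders, not a real provider.

```python
# Minimal sketch: building the authorization redirect URL for an OAuth 2.0 /
# OpenID Connect login. All endpoint and client values are hypothetical.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://auth.example.org/authorize"  # placeholder provider
params = {
    "response_type": "code",                   # ask for an authorization code
    "client_id": "example-blog-client",        # placeholder client id
    "redirect_uri": "https://blog.example/callback",
    "scope": "openid profile",                 # OpenID Connect style scopes
    "state": "fresh-random-anti-csrf-value",   # should be a new random string per login
}
print(f"{AUTH_ENDPOINT}?{urlencode(params)}")
# The provider authenticates the user and redirects back with ?code=...,
# which the client then exchanges server-side for tokens.
```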

Closed Social Networks vs. Open Social Networks
Seems to be a work in progress; users might need a little more nudging to make the switch.

SQL vs. SPARQL
When the first RDF and SPARQL implementations arrived, I, as an SQL developer, was impressed and was convinced they would replace SQL. Wrong: people still use SQL or switched to things like NoSQL database systems.
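For illustration, a tiny sketch of the same question asked in both worlds: the SQL side uses Python's built-in sqlite3, the SPARQL side assumes the third-party rdflib package is installed; the data and queries are toy examples.

```python
# Toy comparison: one question, once in SQL (relational) and once in SPARQL (RDF).
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF

# --- SQL: a relational table of books ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
db.execute("INSERT INTO books VALUES ('The Postmodern Condition', 'Lyotard', 1979)")
print(db.execute("SELECT title, year FROM books WHERE author = 'Lyotard'").fetchall())

# --- SPARQL: the same data modeled as RDF triples ---
EX = Namespace("http://example.org/")
g = Graph()
book = EX.book1
g.add((book, RDF.type, EX.Book))
g.add((book, EX.title, Literal("The Postmodern Condition")))
g.add((book, EX.author, Literal("Lyotard")))
g.add((book, EX.year, Literal(1979)))

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?title ?year WHERE {
        ?b ex:author "Lyotard" ;
           ex:title  ?title ;
           ex:year   ?year .
    }
""")
for title, year in results:
    print(title, year)
```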

Looking forward:

Transistor/IC/Microchip vs. ???
I predicted in this blog that by the time we reach the 8 billion humans mark (~2022), we would have developed another groundbreaking technology that surpasses the transistor/IC/microchip step. Still waiting for that one.

ARM vs. RISC-V
I think this world is big enough for both.

Neural Networks vs. Expert Systems
Well, we all know about AI "hallucinations": you can view neural networks as probabilistic systems and expert systems as deterministic ones. For things like poetry, images, audio, or video a probabilistic system might be sufficient, but in some areas you really want more accuracy, you want a reliable, deterministic system, what we also used to call an expert system.
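A toy sketch of that distinction: a hand-written rule base that always gives the same, traceable answer, next to a single sigmoid "neuron" that only returns a probability; the rules and weights are made up purely for illustration.

```python
# Toy contrast between a deterministic rule base ("expert system") and a
# probabilistic scorer (a single artificial neuron). Illustration only.
import math

def expert_system_diagnose(temp_c: float, has_rash: bool) -> str:
    # Deterministic if/then rules: same input -> same, explainable output.
    if temp_c >= 38.0 and has_rash:
        return "suspect measles, refer to doctor"
    if temp_c >= 38.0:
        return "fever, monitor"
    return "no action"

def toy_neural_score(temp_c: float, has_rash: bool) -> float:
    # One "neuron": weighted sum + sigmoid; the weights are invented here,
    # a real network would have learned them from data.
    z = 0.9 * (temp_c - 37.0) + 1.5 * float(has_rash) - 1.0
    return 1.0 / (1.0 + math.exp(-z))

print(expert_system_diagnose(38.5, True))                  # always the same rule-based answer
print(f"p(illness) = {toy_neural_score(38.5, True):.2f}")  # just a probability, no explanation
```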

AGI/ASI ETA?
What is the estimated time of arrival for AGI/ASI, artificial general intelligence/artificial super intelligence? I wrote before that if we do not otherwise blow up the planet and the current pace continues, I estimate that by ~2030 we will have an ASI present: the switch from AGI to ASI, from trans-human intelligence to post-human intelligence, the peak of humans being able to follow/understand the AI, the inflection point of the technological singularity.

Transhumanist vs. Neoprimitive
Haha, which one will prevail in the long run? I myself am both, Neoprim and Transhumanist, the idealist in me is a Neoprim, the realist is a Transhumanist, or was it vice versa? ;)

Oh Boy - Project Stargate

100 billion dollars for a data center with 5GW (nuclear) power consumption to

secure enough computing capacity to eventually power "self-improving AI" that won't rely on rapidly depleting human-generated data to train new models

OpenAI asked US to approve energy-guzzling 5GW data centers, report says
https://arstechnica.com/tech-policy/2024/09/openai-asked-us-to-approve-energy-guzzling-5gw-data-centers-report-says/

Well, these guys know what they are up to.
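For perspective, a quick back-of-the-envelope calculation of what a continuous 5 GW draw adds up to over a year (rounded, illustrative numbers):

```python
# Back-of-the-envelope: yearly energy consumption of a constant 5 GW load.
power_gw = 5.0
hours_per_year = 24 * 365
energy_twh = power_gw * hours_per_year / 1000   # GW * h = GWh; /1000 -> TWh
print(f"~{energy_twh:.0f} TWh per year")         # ~44 TWh/year
# For scale: a large nuclear reactor delivers roughly 1 GW, so a constant
# 5 GW load corresponds to about five such reactors running flat out.
```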

Ray Kurzweil: Technology will let us fully realize our humanity

Ray Kurzweil: Technology will let us fully realize our humanity
https://www.technologyreview.com/2024/08/27/1096148/ray-kurzweil-futurist-ai-medicine-advances-freedom/

"By freeing us from the struggle to meet the most basic needs, technology will serve our deepest human aspirations to learn, create, and connect."

"As superhuman AI makes most goods and services so abundant as to be almost free, the need to structure our lives around jobs will fade away."

"And material abundance will ease economic pressures and afford families the quality time together they've long yearned for."

Haha, definitely a techno-optimist. But I still don't get this AI -> material abundance thing.

AGI/ASI and TS takeoff

People talk a lot about AGI/ASI these days, artificial general intelligence and artificial super intelligence, the strong AI, with different definitions and time estimates for when we will reach such a level. But the actual point of TS takeoff is the feedback loop: when the system starts to feed its own development in a feedback loop and exceeds human understanding. As mentioned, we already have a human <-> computer feedback loop, better computers help us humans to build better computers, but I am still waiting and watching for the direct link: AI builds better AI, computers build better computers, the AI autopoiesis.

https://en.wikipedia.org/wiki/Autopoiesis
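A toy numerical sketch of that difference: with only a fixed, human-driven rate of progress, capability grows roughly linearly; once the system's own capability feeds back into its rate of improvement, growth compounds. All rates and units here are arbitrary illustration values.

```python
# Toy model of the takeoff feedback loop: capability growth with and without
# the system improving itself. Numbers are arbitrary, for illustration only.

def simulate(years: int = 30, feedback: bool = True) -> float:
    capability = 1.0      # arbitrary units, 1.0 = today's level
    human_rate = 0.1      # fixed yearly progress contributed by humans
    for _ in range(years):
        gain = human_rate
        if feedback:
            gain += 0.05 * capability   # AI improving AI: gain scales with capability
        capability += gain
    return capability

print("year 30, human-driven only:  ", round(simulate(feedback=False), 1))
print("year 30, with self-feedback: ", round(simulate(feedback=True), 1))
```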

Exit Strategy?

Oh boy. ELE ongoing, humans go extinct, biosphere goes extinct, Mars and Moon have no self-sustaining biosphere, the only thing which still has gas is the AI. Which exit strategy to choose? The good ole Marvin Minsky upload scenario? Seriously? A post-human Matrix? Let go of the old, embrace the new? Project Lambda. Oh boy.

Western Peak Passed?

I am a child of the 90s; 1989 till 2001 was my time, from the fall of the Berlin Wall until 9/11, everything seemed possible during this period. Fukuyama called it "the end of history", and then 2001 was already "the end of the end of history".

Retrospectively, Fukuyama was wrong, and Huntington, "The Clash of Civilizations", was right. Maybe the 90s were just a hedonistic time in between, the exception to the rule.

True, technologically we do advance, at least incrementally, more processing power, more bandwidth, more data, bigger neural networks, more advanced network architectures, but culturally, philosophically? Did we, the Western sphere, already pass our peak, and are we degenerating?

- 1979 - Lyotard - The Postmodern Condition
- 1981 - Baudrillard - Simulacra and Simulation
- 1997 - Deutsch - The Fabric of Reality
- 1999 - Wachowskis - The Matrix

When I surf the meme-sphere out there, it seems to me that meanwhile the so-called three poisons rule the world: hate, greed, and delusion...

https://en.wikipedia.org/wiki/Three_poisons

...just thinking loud.

We Are Running Out of Juice

The AI already competes with humans for resources, water and energy, and it seems we are running out of juice... do we have enough resources left for the TS to take off, or did we already enter the ELE doom loop?

Elon Musk Predicts Electricity Shortage in Two Years
https://hardware.slashdot.org/story/23/07/31/0128257/elon-musk-predicts-electricity-shortage-in-two-years

"I can't emphasize enough: we need more electricity,"

"However much electricity you think you need, more than that is needed."

TS - it's here

...TS, it's here.

Modern Turing Test Proposed

DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots

Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.

Will it be a butterfly?

The technosphere is eating up the entire biosphere, Earth's biomass is being replaced with silicon, the closed, biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite on Gaia's biosphere, will it be a butterfly?

It's Water...

The world is facing an imminent water crisis, with demand expected to outstrip the supply of fresh water by 40% by the end of this decade, experts have said on the eve of a crucial UN water summit. From a report: Governments must urgently stop subsidising the extraction and overuse of water through misdirected agricultural subsidies, and industries from mining to manufacturing must be made to overhaul their wasteful practices, according to a landmark report on the economics of water. Nations must start to manage water as a global common good, because most countries are highly dependent on their neighbours for water supplies, and overuse, pollution and the climate crisis threaten water supplies globally, the report's authors say. Johan Rockstrom, the director of the Potsdam Institute for Climate Impact Research and co-chair of the Global Commission on the Economics of Water, and a lead author of the report, told the Guardian the world's neglect of water resources was leading to disaster. "The scientific evidence is that we have a water crisis. We are misusing water, polluting water, and changing the whole global hydrological cycle, through what we are doing to the climate. It's a triple crisis." Rockstrom's fellow Global Commission on the Economics of Water co-chair Mariana Mazzucato, a professor at University College London and also a lead author of the report, added: "We need a much more proactive, and ambitious, common good approach. We have to put justice and equity at the centre of this, it's not just a technological or finance problem."

https://science.slashdot.org/story/23/03/17/175225/global-fresh-water-demand-will-outstrip-supply-by-40-by-2030-say-experts

In a world in need of fresh/drinking water, why the AI?

Event Horizon

Movies and books (SciFi) pick up the energies of the collective subconscious and address them with their themes, and I realize that we have meanwhile entered something I call the event horizon: the story lines break.

Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (a term by William Gibson), where every possible scenario happens at once. If we assume that there will be such an event in the future, then I guess we are already caught in its event horizon, and there is no route of escape anymore.

Negative Feedback Loop

...one major topic of this blog was AI vs. ELE, the takeoff of the Technological Singularity vs. an Extinction Level Event. A negative feedback loop from the ELE is already present:

'Taiwan is facing a drought, and it has prioritized its computer chip business over farmers.'

'U.S. Data Centers Rely on Water from Stressed Basins'

'Musk Wades Into Tesla Water Wars With Berlin's "Eco Elite"'

With an incoming ELE, is there still enough momentum in the pipeline for the TS to take off?

TS Feedback Loop

DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.

https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer

TS Feedback Loop

Google is using AI to design its next generation of AI chips more quickly than humans can. Designs that take humans months can be matched or beaten by AI in six hours.

https://www.theverge.com/2021/6/10/22527476/google-machine-learning-chip-design-tpu-floorplanning

Introducing GitHub Copilot: your AI pair programmer

Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.

Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go.

https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/

On Peak Human

One of the early Peak Human prophets was Malthus: in his 1798 book, 'An Essay on the Principle of Population', he postulated that the human population grows exponentially while food production grows only linearly, so population growth will fluctuate around an upper limit.
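A tiny numerical sketch of that argument, with made-up starting values: geometric population growth against arithmetic food growth, and the generation at which the former overtakes the latter.

```python
# Toy Malthus model: geometric (exponential) population vs. arithmetic (linear)
# food production. Starting values and rates are arbitrary illustration numbers.
population = 1.0   # arbitrary units
food = 2.0         # food supply, in "population units it can feed"

for generation in range(1, 11):
    population *= 1.5   # geometric: +50% per generation
    food += 0.5         # arithmetic: +0.5 units per generation
    marker = "  <-- population exceeds food" if population > food else ""
    print(f"gen {generation:2d}: population={population:6.2f}  food={food:4.1f}{marker}")
```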

Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.

Meadows et al. concur in 'The Limits to Growth - The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.

In 2015 Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.

UN predictions say we may hit 9 billion humans by 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.

Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.

It is not about how many people Earth can feed, but how many people can live in a comfortable but sustainable manner.

What does Peak Human mean for the Technological Singularity?

The advent of computers was driven by the exponential population growth of the 20th century. All the groundbreaking work was done in the 20th century.

When we face a decline in population growth, we also have to face a decline in new technologies being developed.

Because it is not only about developing new technologies, but also about maintaining the old knowledge.

Here is the point where AI steps in: mankind's population growth is changing, but the whole AI sector is growing and expanding.

Therefore the question is: is AI able to compensate for the decline?

Time will tell.

I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached, which some scientists consider a physical and economic barrier.
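Whatever the barrier, the naive extrapolation behind that worry is easy to write down; a small sketch assuming a doubling every two years and an illustrative, rounded 2021 starting count:

```python
# Naive Moore's-law extrapolation: transistor count doubling every two years.
# The starting count is a rounded, illustrative figure (2021 flagship chips
# were on the order of tens of billions of transistors), not exact chip data.
start_year, start_count = 2021, 50e9
for year in range(start_year, start_year + 11, 2):
    doublings = (year - start_year) // 2
    print(f"{year}: ~{start_count * 2**doublings / 1e9:.0f} billion transistors")
```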

I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, integrated circuit, and microchip.

So, considering the uncertainty of Peak Human vs. Rise of AI,
I give ±0 points for the Singularity to take off.
