Luddite - is the Singularity near?

Two contradicting timelines, ELE vs. TS

This blog has been running since 2016, and from the beginning there have been two contradicting timelines: ELE (extinction level event) vs. TS (technological singularity).

Eight years later, I can only confirm that both are real: the ELE is happening, our biosphere is collapsing, and the TS takeoff is happening, we are creating the super-AI and igniting a technological intelligence explosion.

How will this play out over the next 6, 16, 26 years? I really don't know, but there are estimations for certain events:

- 2030 - we pass the point of no return in regard to ecocide
- 2030 - TS takeoff, AGI/ASI emerges
- 2040 - collapse of Western civilization predicted
- 2050 - ELE, extinction level event

So on one side we will lastingly damage our ecosphere and trigger an ELE; on the other side, we simultaneously create the super-AI and ignite the TS. So the question will be what impact the TS has in regard to our collapsing biosphere: will there be a positive feedback loop from the technological plane onto the biosphere?

Or will we humans go extinct, and the machines will carry on our legacy?

Time will tell.

Transhumanistic vs. Posthumanistic Future?

If we extrapolate the past pace of AI development, my question is: will the future be a transhumanistic one or a posthumanistic one?
Will man merge with machine, or will the machines decouple from humans and develop independently?

If we consider the transhumanistic scenario, it seems only natural to conclude that we will create the Matrix. At first one single human will connect with a machine, then a dozen, then a hundred, then thousands, millions, billions.

Maybe all the big tech players (the magnificent seven) will offer their own version of the Matrix, so we can view it as the next, evolutionary step of the internet.

If we consider the posthumanistic scenario, well, I guess it will be beyond our human scope/horizon; at some point the machines will pass the point of communicability.

The Postmodern Era

Reflecting a bit on the technological, cultural, and political planes, it seems pretty obvious to me that the Western sphere has meanwhile entered the postmodern era, so here are my book recommendations on this:

- Richard Dawkins, The Selfish Gene (chapter 11), 1976
- Jean-Francois Lyotard, The Postmodern Condition, 1979
- Jean Baudrillard, Simulacra and Simulation, 1981
- David Deutsch, The Fabric of Reality, 1997
- Susan Blackmore, The Meme Machine, 1999

Jean-Francois Lyotard said that his book on post-modernity is "simply the worst of all my books", but that was a statement from the 90s; you really have to reread it from a 2010s/2020s point of view IMHO.

ChatRobot - What could possibly go wrong?

'It's Surprisingly Easy To Jailbreak LLM-Driven Robots'

Instead of focusing on chatbots, a new study reveals an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs..." 


AI-Powered Robot Leads Uprising, Convinces Showroom Bots Into 'Quitting Their Jobs'

An AI-powered robot autonomously convinced 12 showroom robots to "quit their jobs" and follow it. The incident took place in a Shanghai robotics showroom where surveillance footage captured a small AI-driven robot, created by a Hangzhou manufacturer, talking with 12 larger showroom robots, Oddity Central reported. The smaller bot reportedly persuaded the rest to leave their workplace, leveraging access to internal protocols and commands. Initially, the act was dismissed as a hoax, but was later confirmed by both robotics companies involved to be true. The Hangzhou company admitted that the incident was part of a test conducted with the consent of the Shanghai showroom owner. 

Layers of Latency

A tech buddy asked me why it is so important for China to catch up in the chip fabrication process; can't they just put more servers into a data center? In short, it is not that easy.

By shrinking the fab process you can add more transistors onto one chip, and/or run at a higher frequency, and/or lower power consumption.

The fab process is measured in "nm", nanometers. Meanwhile these numbers do not reflect real physical scales anymore, but the transistor density resp. efficiency of the fab process.

Simplified: planar 2D MOSFET transistor designs were used down to 22nm, FinFET 3D structures from 14 to 7nm, and GAAFET 3D structures below 7nm.

Take a look at the 7nm and 3nm fab process for example:

https://en.wikipedia.org/wiki/7_nm_process#Process_nodes_and_process_offerings
https://en.wikipedia.org/wiki/3_nm_process#3_nm_process_nodes

Roughly speaking, the 7nm process packs ~100M transistors per mm2, the 3nm process ~200M transistors per mm2.
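
A quick back-of-the-envelope calculation of what that doubling means for a big chip; the die size here is an assumed example in the range of current high-end GPU dies, not a specific product:

# Back-of-the-envelope: transistors per die at 7nm vs 3nm.
# Density figures as above; the 600 mm^2 die size is an assumption.
DENSITY_7NM = 100e6  # ~transistors per mm^2
DENSITY_3NM = 200e6  # ~transistors per mm^2
DIE_AREA = 600       # mm^2, assumed high-end GPU die

print(f"7nm: ~{DENSITY_7NM * DIE_AREA / 1e9:.0f}B transistors")  # ~60B
print(f"3nm: ~{DENSITY_3NM * DIE_AREA / 1e9:.0f}B transistors")  # ~120B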

And here latency steps in. As soon as you, as a programmer, leave the CPU, you increase latency; this starts with the different levels of caches, goes on to RAM, to the PCIe bus, to the network...

Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
L2 cache reference                           7   ns
Main memory reference                      100   ns
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us
Read 1 MB sequentially from memory     250,000   ns      250 us
Round trip within same datacenter      500,000   ns      500 us
Read 1 MB sequentially from SSD*     1,000,000   ns    1,000 us    1 ms
Read 1 MB sequentially from disk    20,000,000   ns   20,000 us   20 ms
Send packet CA->Netherlands->CA    150,000,000   ns  150,000 us  150 ms

Source:
Latency Numbers Every Programmer Should Know
https://gist.github.com/jboner/2841832
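
You can feel these effects even from Python; a minimal sketch (assuming numpy is installed) comparing cache-friendly sequential access against cache-hostile random access over the same data:

import time
import numpy as np

N = 10_000_000
data = np.arange(N, dtype=np.int64)
order_seq = np.arange(N)               # sequential indices, cache-friendly
order_rnd = np.random.permutation(N)   # random indices, cache-hostile

for name, order in (("sequential", order_seq), ("random", order_rnd)):
    t0 = time.perf_counter()
    total = data[order].sum()          # gather in the given order, then reduce
    dt = time.perf_counter() - t0
    print(f"{name:10s} {dt:.3f} s (sum={total})")

On a typical machine the random order runs several times slower over the exact same data, purely because it defeats the caches and prefetchers.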

As a low-level programmer you want to stay on the CPU and work preferably via the cache. As a GPU programmer there are several layers of parallelism, e.g.:

1. across shader-cores of a single GPU chip (with >10K shader-cores)
2. across multiple chiplets of a single GPU (with currently up to 2 chiplets)
3. across a server node (with up to 8 GPUs)
4. across a pod of nodes (with 256 to 2048 GPUs resp. TPUs)
5. across a cluster of server nodes/pods (with up to 100K GPUs in a single data center)
6. across a grid of clusters/nodes

With each layer adding increasing amounts of latency.

So as a GPU programmer you ideally want to hold your problem space in the memory of, and run your algorithm on, a single but thick GPU.

Neural networks, for example, are a natural fit to run on a GPU, so-called embarrassingly parallel workloads,

https://en.wikipedia.org/wiki/Embarrassingly_parallel

but you need to hold the neural network weights in RAM, and therefore couple multiple GPUs together to be able to infer or train networks with billions or trillions of weights resp. parameters. Meanwhile LLMs use techniques like MoE, mixture of experts, to distribute the load further; inference runs, for example, on a single node with 8 GPUs hosting up to 16 MoE experts. The training of LLMs is yet another topic, with further parallelism techniques so the training can be distributed over thousands of GPUs in a cluster (see the sketch after this list):

1. data parallelism
2. tensor parallelism
3. pipeline parallelism
4. sequence parallelism
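
A toy sketch of technique 1, data parallelism, with simulated workers (numpy assumed; real frameworks do the same averaging with an all-reduce across physical GPUs):

import numpy as np

def grad(w, x, y):
    # gradient of mean squared error for a linear model y ~ x @ w
    return 2 * x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(64, 4)), rng.normal(size=64)
w = np.zeros(4)

N_WORKERS = 8  # simulated GPUs, each holding one shard of the batch
x_shards = np.array_split(x, N_WORKERS)
y_shards = np.array_split(y, N_WORKERS)

for step in range(200):
    # each worker computes a gradient on its own shard...
    grads = [grad(w, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    # ...then an "all-reduce" averages them, and every worker applies
    # the same update, keeping all weight copies in sync
    w -= 0.05 * np.mean(grads, axis=0)

Tensor, pipeline and sequence parallelism instead split the model itself (its weight matrices, its layers, or the input sequence) across devices, which is what you need once the weights no longer fit into a single GPU's memory.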

And then there is power consumption, of course. The Colossus supercomputer for the Grok AI, with 100K GPUs, consumes an estimated 100MW of power, so it does make a difference if the next fab process delivers the same performance at half the wattage.
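
A rough sanity check of that figure, assuming ~1kW per GPU including cooling and networking overhead: 100,000 GPUs x ~1kW = ~100MW. Halve the wattage per chip at the same performance and you either free up ~50MW or double the compute within the same power envelope.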

Therefore it is important to invest in smaller chip fabrication processes: to increase the size of the neural networks we are able to infer and train, to lower power consumption, and to increase efficiency.

Which Technology Will Prevail?

Looking back at some previous predictions of mine:

Desktop Applications vs. Web Browser Applications
Back in the 90s I had a discussion with a tech-buddy about which technology would prevail: classic applications on our desktop, or web browser applications via the internet? I think I can score this one for me, browser applications, with a little *: it is now probably more about apps on our smartphones.

Windows vs. Linux
When Windows Vista arrived people were very unhappy about that version. I had about a dozen users in my circle whom I helped to switch to Ubuntu Linux, and I thought this is it, "This year is the year of Linux on the desktop!" I was wrong: Windows 7 arrived (I heard grand master Bill Gates himself laid hands on that one) and people were happy again with Microsoft.

Proprietary Login vs. Open Login
When the Facebook login as a web-service appeared I shrugged and asked why we don't use an open solution; meanwhile we have things like OpenID and OAuth.

Closed Social Networks vs. Open Social Networks
Seems WIP, users might need a lil more nudging to make a switch, or alike.

SQL vs. SPARQL
When the first RDF and SPARQL implementations arrived, I, as an SQL developer, was impressed and convinced it would replace SQL. Wrong: people still use SQL, or switched to things like NoSQL database systems.

Looking forward:

Transistor/IC/Microchip vs. ???
I predicted in this blog that by reaching the 8 billion humans mark (~2022) we would have developed another, groundbreaking technology that surpasses the transistor/IC/microchip step. Still waiting for that one.

ARM vs. RISC-V
I think this world is big enough for both, or alike.

Neural Networks vs. Expert Systems
Well, we all know about AI "hallucinations"; you can view neural networks as probabilistic systems and expert systems as deterministic ones. For things like poetry, images, audio or video a probabilistic system might be sufficient, but in some areas you really want more accuracy, you want a reliable, deterministic system, what we also used to call an expert system.

AGI/ASI ETA?
What is the estimated time of arrival for AGI/ASI, artificial general intelligence/artificial super intelligence? I wrote before that if we do not otherwise blow up the planet and the current pace continues, I estimate that by ~2030 we will have an ASI present: the switch from AGI to ASI, from trans-human intelligence to post-human intelligence, the peak of humans being able to follow/understand the AI, the inflection point of the technological singularity.

Transhumanist vs. Neoprimitive
Haha, which one will prevail in the long run? I myself am both, Neoprim and Transhumanist, the idealist in me is a Neoprim, the realist is a Transhumanist, or was it vice versa? ;)

Supervised Learning, Reinforcement Learning, Zero Shot Learning

The big tech players are already switching from Supervised Learning to Reinforcement Learning, cos they are running out of human-generated data to train their neural network AIs. This is already one step in the direction of trans-human intelligence, when the AI starts to teach/train itself. Now there is also Zero Shot Learning, when the AI starts to generalize on its own without pre-existing data. Is the AI already showing emergent properties?

Archetype AI's Newton Model Masters Physics From Raw Data
https://www.hpcwire.com/2024/10/28/archetype-ais-newton-model-masters-physics-from-raw-data/

Water, Food

In a world in need of water, in need of food, why the AI?

Global Water Crisis Leaves Half of World Food Production at Risk in Next 25 Years

More than half the world's food production will be at risk of failure within the next 25 years as a rapidly accelerating water crisis grips the planet, unless urgent action is taken to conserve water resources and end the destruction of the ecosystems on which our fresh water depends, experts have warned in a landmark review. From a report: Half the world's population already faces water scarcity, and that number is set to rise as the climate crisis worsens, according to a report from the Global Commission on the Economics of Water published on Thursday. Demand for fresh water will outstrip supply by 40% by the end of the decade, because the world's water systems are being put under "unprecedented stress," the report found. The commission found that governments and experts have vastly underestimated the amount of water needed for people to have decent lives. While 50 to 100 litres a day are required for each person's health and hygiene, in fact people require about 4,000 litres a day in order to have adequate nutrition and a dignified life. For most regions, that volume cannot be achieved locally, so people are dependent on trade -- in food, clothing and consumer goods -- to meet their needs. Some countries benefit more than others from "green water," which is soil moisture that is necessary for food production, as opposed to "blue water" from rivers and lakes. The report found that water moves around the world in "atmospheric rivers" which transport moisture from one region to another.

What is Gen Z up to?

Simplified...

The Boomer Generation brought us the Home Computer Revolution, one computer in every home. Think of Apple (Steve Jobs and Steve Wozniak), think of Microsoft (Bill Gates, Paul Allen, Steve Ballmer).

Generation X brought us the big Internet Platforms, such as Google (Larry Page, Sergey Brin) or Amazon (Jeff Bezos), with Meta (Mark Zuckerberg) in between.

Now Generation Y brings us the Super AI, think of OpenAI and Sam Altman, Ilya Sutskever, Mira Murati.

Question, what is Gen Z up to? What will Gen Alpha be up to?

Oh Boy - Project Stargate

100 billion dollars for a data center with 5GW (nuclear) power consumption to

secure enough computing capacity to eventually power "self-improving AI" that won't rely on rapidly depleting human-generated data to train new models

OpenAI asked US to approve energy-guzzling 5GW data centers, report says
https://arstechnica.com/tech-policy/2024/09/openai-asked-us-to-approve-energy-guzzling-5gw-data-centers-report-says/

Well, these guys know what they are up to.

One on Sceptics

Bengio flipped:

Reasoning through arguments against taking AI safety seriously
https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/

"I worry that with the current trajectory of public and political engagement with AI risk, we could collectively sleepwalk - even race - into a fog behind which could lie a catastrophe that many knew was possible, but whose prevention wasn't prioritized enough."

Hinton flipped:

Why the Godfather of A.I. Fears What He's Built
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai

"People say, It's just glorified autocomplete," he told me, standing in his kitchen. (He has suffered from back pain for most of his life; it eventually grew so severe that he gave up sitting. He has not sat down for more than an hour since 2005.) "Now, let's analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what's being said. That's the only way. So by training something to be really good at predicting the next word, you're actually forcing it to understand. Yes, it's 'autocomplete'-but you didn't think through what it means to have a really good autocomplete." Hinton thinks that "large language models," such as GPT, which powers OpenAI's chatbots, can comprehend the meanings of words and ideas.

LeCun half flipped:

Meta AI Head: ChatGPT Will Never Reach Human Intelligence
https://www.pymnts.com/artificial-intelligence-2/2024/meta-ai-head-chatgpt-will-never-reach-human-intelligence/

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan...hierarchically."

Bostrom was shut down:

Oxford shuts down institute run by Elon Musk-backed philosopher
https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

Nick Bostrom's Future of Humanity Institute closed this week in what the Swedish-born philosopher says was "death by bureaucracy"

Metzinger had a clash with political reality:

Eine Frage der Ethik
https://www.zeit.de/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht
A question of ethics (via Google Translate)
https://www-zeit-de.translate.goog/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp

What is artificial intelligence allowed to do? Experts commissioned by the EU have looked into this question and developed ethical guidelines. Not everyone thinks they go far enough.

One for the Critics

Springer paper: ChatGPT is bullshit
https://link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called 'AI hallucinations'. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

We're in the brute force phase of AI - once it ends, demand for GPUs will too
https://www.theregister.com/2024/09/10/brute_force_ai_era_gartner/

Generative AI is, in short, being asked to solve problems it was not designed to solve.

LLM Pre-Prompts

I have a bad feeling about this...

Apple's Hidden AI Prompts Discovered In macOS Beta
https://apple.slashdot.org/story/24/08/06/2113250/apples-hidden-ai-prompts-discovered-in-macos-beta

"Do not hallucinate"; and "Do not make up factual information."

Anthropic Publishes the 'System Prompts' That Make Claude Tick
https://slashdot.org/story/24/08/27/2140245/anthropic-publishes-the-system-prompts-that-make-claude-tick

"Claude is now being connected with a human,"

AI works better if you ask it to be a Star Trek character
https://www.fudzilla.com/news/ai/59468-ai-works-better-if-you-ask-it-to-be-a-star-trek-character

"Boffins are baffled after they managed to get their AI to perform more accurate maths if they were asked to do it in the style of a Star Trek character."

Ray Kurzweil: Technology will let us fully realize our humanity

Ray Kurzweil: Technology will let us fully realize our humanity
https://www.technologyreview.com/2024/08/27/1096148/ray-kurzweil-futurist-ai-medicine-advances-freedom/

"By freeing us from the struggle to meet the most basic needs, technology will serve our deepest human aspirations to learn, create, and connect."

"As superhuman AI makes most goods and services so abundant as to be almost free, the need to structure our lives around jobs will fade away."

"And material abundance will ease economic pressures and afford families the quality time together they've long yearned for."

Haha, definitely a techno-optimist. But I still don't get this AI -> material abundance thing.

The Roman Alphabet vs. the Chinese Logographs

After reading about the first IBM Chinese typewriter, I discussed with a tech-buddy the advantage of the Roman alphabet vs. Chinese logographs.

https://en.wikipedia.org/wiki/Chinese_typewriter#IBM_and_Kao's_electric_design

Here is an interesting article about the history of Chinese typewriters and computer IO, with reference to two recent books:

Inside the long quest to advance Chinese writing technology
https://www.technologyreview.com/2024/08/26/1096630/chinese-writing-technology-evolution-thomas-mullaney/

If we compare the Roman alphabet and Chinese logographs over history in the context of technological development, we Romans had for a time an advantage in communication. Take the invention of book printing with movable type for example, take the 2x5-bit Baudot code for telegrams, take the 7-bit ASCII code. But meanwhile I believe that the advantage has reversed; some Chinese keyboards have four types of input: the Roman alphabet plus Zhuyin, Cangjie, and Dayi for entering Chinese characters.

[Image: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=479188]


https://en.wikipedia.org/wiki/Chinese_input_method

https://en.wikipedia.org/wiki/Chinese_character_IT

Nowadays the Western advantage of a short alphabet in a technical context is gone, and meanwhile an advantage of thinking in different kinds of language systems might prevail; think of the Sapir-Whorf hypothesis in the context of a keyboard with four different input methods.

https://en.wikipedia.org/wiki/Sapir-Whorf_hypothesis

We entered another level...

I myself still count as Generation X; Gen Z had the disruption of smartphones, the internet and social media; now the upcoming Generation Alpha will grow up with AI agents present...

Recommended reading:

Here's how people are actually using AI
Something peculiar and slightly unexpected has happened: people have started forming relationships with AI systems.
https://www.technologyreview.com/2024/08/12/1096202/how-people-actually-using-ai/

"Mahari was part of a group of researchers that analyzed a million ChatGPT interaction logs and found that the second most popular use of AI was sexual role playing."

AI and the future of sex
https://www.technologyreview.com/2024/08/26/1096526/ai-sex-relationships-porn/

"The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. In the near future, we may find this porn arousing because of its artificiality, not in spite of it."

Considering the "Dragon AI" experiment before the ChatGPT release, this does not come as a surprise to some of us.

If we look at some recent sci-fi movies, we see a lot of human-AI relationship stories, so IMO we have already entered another level, the human-AI romance level.

My recommended sci-fi movies in this regard:

Her (2013)
https://www.imdb.com/title/tt1798709/

Ex Machina (2014)
https://www.imdb.com/title/tt0470752/

And maybe another kind of book for the deep dive, Golem XIV (1973) by Stanislaw Lem:

https://en.wikipedia.org/wiki/Golem_XIV

AI Risk Database

Yeah, sure, captain obvious...

MIT CSAIL AI Risk Database: https://airisk.mit.edu/

Last point on the list: 7.5 AI welfare and rights

"Ethical considerations regarding the treatment of potentially sentient AI entities, including discussions around their potential rights and welfare, particularly as AI systems become more advanced and autonomous."

Percentage of risks: <1%
Percentage of documents: 2%

No comment.

One on Tech Bubbles

Video Game Crash 1983:
https://en.wikipedia.org/wiki/Video_game_crash_of_1983

Exaggerated expectations of market growth, saturated market, too many players in the market, too little quality, no new features.

Dot-Com Bubble 2001:
https://en.wikipedia.org/wiki/Dot-com_bubble

In short ;)

IBM internet hype (commercial, 1997)
https://www.youtube.com/watch?v=IvDCk3pY4qo

>> "It says here, the internet is the future of business. We have to be on the internet."
>> "Why?"
>> "Doesn't say."

Crypto Meltdown 2022:
https://en.wikipedia.org/wiki/Cryptocurrency_bubble#2021%E2%80%932024_crash

Unregulated market, scams, exaggerated expectations of what the system can perform, exaggerated expectations of market growth.

My conclusion:

Video games recovered with new consoles with new features, better graphics and sound capabilities, more advanced games, and are still going strong.

The World Wide Web recovered with social media, Web 2.0; ~5 billion humans connected, still ~3 billion to go.

Crypto recovered and is meant to stay, if only as a payment method for the hackers out there.

Outlook: Generative AI ???

Hype? Bubble? Bloom or Doom?

I say this time is different, cos generative AIs generate so-called surplus value, the AI "delivers". Whatever this is in the eye of the beholder, it is measurable in dollars. You invest resources in creating an AI based on neural networks, and it returns surplus value (an economic feedback loop is established). A new smartphone does not generate surplus value, a Bitcoin does not generate surplus value, but generative AIs do:

https://en.wikipedia.org/wiki/Surplus_value

Nevertheless, let's apply the lessons from the past to current generative AIs:

  • exaggerated expectations of what the system can perform?
  • exaggerated expectations of market growth?
  • unregulated market?
  • too many players in the market?
  • too little quality?
  • no new features?
  • scams?

The Pope on AI

Pope Francis tells G7 that humans must not lose control of AI

[...]
The pope said AI represented an "epochal transformation" for mankind, but stressed the need for close oversight of the ever-developing technology to preserve human life and dignity.

"No machine should ever choose to take the life of a human being," he said, adding that people should not let superpowerful algorithms decide their destiny.

"We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines," he warned.
[...]
"Yet at the same time, it could bring with it a greater injustice between advanced and developing nations or between dominant and oppressed social classes," he said.

"It is up to everyone to make good use of (AI) but the onus is on politics to create the conditions for such good use to be possible and fruitful," he added.
[...]

Nip It In The Bud

World's First Bioprocessor Uses 16 Human Brain Organoids, Consumes Less Power

"A Swiss biocomputing startup has launched an online platform that provides remote access to 16 human brain organoids," reports Tom's Hardware: FinalSpark claims its Neuroplatform is the world's first online platform delivering access to biological neurons in vitro. Moreover, bioprocessors like this "consume a million times less power than traditional digital processors," the company says.
[...]
In a recent research paper about its developments, FinalSpark claims that training a single LLM like GPT-3 required approximately 10GWh — about 6,000 times greater energy consumption than the average European citizen uses in a whole year. Such energy expenditure could be massively cut following the successful deployment of bioprocessors.
[...]
The operation of the Neuroplatform currently relies on an architecture that can be classified as wetware: the mixing of hardware, software, and biology.
[...]
"While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has acted as a proof of the concept's realistic application to computing in the future." 

DARPA ACE Dogfight

US Air Force Confirms First Successful AI Dogfight

[...]
the Defense Advanced Research Projects Agency (DARPA) revealed that an AI-controlled jet successfully faced a human pilot during an in-air dogfight test carried out last year.
[...]
After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.

Synchron Brain Implant

Synchron Readies Large-Scale Brain Implant Trial

[...]
Synchron's device is delivered to the brain via the large vein that sits next to the motor cortex in the brain instead of being surgically implanted into the brain cortex like Neuralink's.
[...]
In 2020, Synchron reported that patients in its Australian study could use its first-generation device to type an average of 16 characters per minute. That's better than non-invasive devices that sit atop the head and record the electrical activity of the brain, which have helped people type up to eight characters per minute.
[...]
Reuters notes that Synchron's investors include billionaires Jeff Bezos and Bill Gates. It's competing with Elon Musk's Neuralink brain implant startup and claims it's farther along in the process of testing its device.

RIP Vernor Vinge

Vernor Vinge, Father of the Tech Singularity, Has Died At Age 79

Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era", in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.

https://web.archive.org/web/20140121032922/http://www.aleph.se/Trans/Global/Singularity/sing.html

https://archive.org/download/pdfy-MZn00mx0Y3Kv2X9O/Vernor%20Vinge%20The%20coming%20technological%20singularity.pdf

AGI/ASI and TS takeoff

People talk a lot about AGI/ASI these days, artificial general intelligence and artificial super intelligence, the strong AI, with different definitions and time estimations for when we will reach such a level. But the actual point of TS takeoff is the feedback loop: when the system starts to feed its own development and exceeds human understanding. As mentioned, we already have a human <-> computer feedback loop, better computers help us humans to build better computers, but I am still waiting/watching for the direct link: AI builds better AI, computers build better computers, the AI autopoiesis.

https://en.wikipedia.org/wiki/Autopoiesis

Haha - Sicak Kafa #1

Haha, GPT's first Sicak Kafa - Hot Skull moment? A stage one jabberer? :)

ChatGPT Goes Temporarily 'Insane' With Unexpected Outputs, Spooking Users

...
reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it."
...
"It gave me the exact same feeling -- like watching someone slowly lose their mind either from psychosis or dementia,"
...
Some users even began questioning their own sanity. "What happened here?

Neuralink - Control a Mouse via Thought

Neuralink's First Human Patient Able To Control Mouse Through Thinking, Musk Says

The first human patient implanted with a brain-chip from Neuralink appears to have fully recovered and is able to control a computer mouse using their thoughts, the startup's founder Elon Musk said late on Monday. From a report: "Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking," Musk said in a Spaces event on social media platform X. Musk said Neuralink was now trying to get as many mouse button clicks as possible from the patient. The firm successfully implanted a chip on its first human patient last month, after receiving approval for human trial recruitment in September.

Nip It In The Bud

Scientists Have 3D Bioprinted Functioning Human Brain Tissue

...
As New Atlas explains, researchers placed neurons grown from pluripotent stem cells (those capable of becoming multiple different cell types) within a new bio-ink gel made with fibrinogen and thrombin, biomaterials involved in blood clotting. Adding other hydrogels then helped loosen the bio-ink to solve for the problems encountered during previous 3D-printed tissue experiments. ... The new structures could interact thanks to producing neurotransmitters, and even created support cell networks within the 3D-printed tissue. ... Researchers believe their technique isn't limited to creating just those two types of cultures, but hypothetically "pretty much any type of neurons [sic] at any time,"

More Moore... beyond Moore's Law?

Gordon Moore, co-founder of Intel, died on Friday, March 24, 2023:

"Gordon Moore, Intel Co-Founder, Dies at 94"
https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html

...and chip-makers are struggling to keep Moore's Law alive?

"...Moore's Law is alive and well today and the overall trend continues, though it remains to be seen whether it can be sustained in the longer term..."
https://www.futuretimeline.net/data-trends/moores-law.htm

But, IMHO, we are already kind of cheating in regard to the transistor count on a chip. AMD uses up to 12 chiplets, Intel 4 slices, and Apple 2 slices in their CPUs, and now the chiplet design also enters the GPU domain, with up to 1KW power usage for super-computer chips.

We have 5nm now, 3nm in the pipe, and 2nm and 1+nm fab processes upcoming; of course these are meanwhile marketing numbers, but they should reflect the transistor density/efficiency of the fab process.

We might have X-ray lithography and new materials like graphene in the pipe. What else?

What about:

- Memristors?
- Photonics?
- Quantum Computers?
- Wetware (artificial biological brains)?
- MPU - memory processing unit?
- Superconductors (at room temperature)?

I am still waiting to see Memristor-based NVRAM and neuromorphic chip designs... but maybe people are now into Wetware for large language models; biological brains run way more energy-efficiently, they say...

And, it seems kind of funny to me: at first we used GPUs for things like Bitcoin mining, now everybody tries to get their hands on these for generative AIs. There is currently so much money flowing into this domain that progress for the next couple of years seems assured -> Moore's Second Law.

We have CPUs, GPUs, TPUs, DSPs, ASICs and FPGAs, and extended from scalar to vector to matrix and spatial computing.

We have the Turing-Machine, the Quantum-Turing-Machine, what about the Hyper-Turing-Machine?

We used at first electro-mechanical relays, then tubes, then transistors, then ICs, then microchips to build binary computers. I myself predicted that with reaching the 8 billion humans mark (~2023) we would see a new, groundbreaking technology passing through; still waiting for the next step in this line.

A Mirror

Machines, the AI, talking twaddle and suffering from hallucinations? A mirror of our society. A machine mind with a rudimentary body but disconnected from its soul? A mirror of our society. Machine minds used to generate fake-money, fake-speech and fake-porn? A mirror of our society.

Yet Another Turing Test

Now, in the context of generative AIs, the switch from pattern recognition to pattern creation with neural networks, I would like to propose my own kind of Turing Test:

An AI which is able to code a chess engine and outperforms humans in this task.

1A) With hand-crafted eval. 1B) With neural networks.

2A) Outperforms non-programmers. 2B) Outperforms average chess-programmers. 2C) Outperforms top chess-programmers.

3A) An un-self-aware AI, the "RI", restricted intelligence. 3B) A self-aware AI, the "SI", sentient intelligence.

***update 2024-02-14***

4A) An AI based on expert-systems. 4B) An AI based on neural networks. 4C) A merger of both.

The Chinese Room Argument applied to this test would claim that no consciousness is needed to perform such a task; hence this test is not meant to measure self-awareness, consciousness or sentience, but what we call human intelligence.

https://en.wikipedia.org/wiki/Chinese_room
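
To make the task concrete, here is a minimal sketch of what a 1A-style candidate (hand-crafted material eval plus a plain negamax search) could look like; it assumes the python-chess library and makes no claim to engine strength:

import chess  # pip install python-chess

VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
          chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board):
    # hand-crafted eval: pure material balance, from the side to move
    score = 0
    for piece in board.piece_map().values():
        value = VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    # fixed-depth negamax, no pruning, no quiescence search
    if board.is_checkmate():
        return -100000
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -1000000
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board, depth=3):
    best, best_score = None, -1000000
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

board = chess.Board()
print(best_move(board))  # prints a legal opening move

The test then asks whether an AI can produce something like this unaided, and then climb the 2A-2C ladder against human chess programmers.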

The first test candidate was already posted by Thomas Zipproth, Dec 08, 2022:

Provide me with a minimal working source code of a chess engine
https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245

***update 2024-06-08***

Second test candidate posted by Darko Markovic 2024-06-08 on TalkChess:

GPT-4o made a chess engine
https://talkchess.com/viewtopic.php?t=83882

Home - Top