Luddite - is the Singularity near?

Transhumanistic vs. Posthumanistic Future?

If we extrapolate from the past pace of AI development, my question is: will the future be a transhumanistic one or a posthumanistic one?
Will man merge with machine, or will the machines decouple from humans and develop independently?

If we consider the transhumanistic scenario, it seems only natural to conclude that we will create the Matrix. At first one single human will connect with a machine, then a dozen, then a hundred, then thousands, millions, billions.

Maybe all the big tech players (the magnificent seven) will offer their own version of the Matrix, so we can view it as the next evolutionary step of the internet.

If we consider the posthumanistic scenario, well, I guess it will be beyond our human scope/horizon; at some point the machines will pass the point of communicability.

The Postmodern Era

Reflecting a bit on the technological, cultural, and political plane, it seems pretty obvious to me that the Western sphere has meanwhile entered the postmodern era, so here are my book recommendations on this:

- Richard Dawkins, The Selfish Gene (chapter 11), 1976
- Jean-Francois Lyotard, The Postmodern Condition, 1979
- Jean Baudrillard, Simulacra and Simulation, 1981
- David Deutsch, The Fabric of Reality, 1997
- Susan Blackmore, The Meme Machine, 1999

Jean-Francois Lyotard said that his book on post-modernity is "simply the worst of all my books", but that was a statement from the 90s; you really have to reread it from a 2010s/2020s point of view, IMHO.

ChatRobot - What could possibly go wrong?

'It's Surprisingly Easy To Jailbreak LLM-Driven Robots'

Instead of focusing on chatbots, a new study reveals an automated way to breach LLM-driven robots "with 100 percent success," according to IEEE Spectrum. "By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs..." 

One on Sceptics

Bengio flipped:

Reasoning through arguments against taking AI safety seriously
https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/

"I worry that with the current trajectory of public and political engagement with AI risk, we could collectively sleepwalk - even race - into a fog behind which could lie a catastrophe that many knew was possible, but whose prevention wasn't prioritized enough."

Hinton flipped:

Why the Godfather of A.I. Fears What He's Built
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai

"People say, It's just glorified autocomplete," he told me, standing in his kitchen. (He has suffered from back pain for most of his life; it eventually grew so severe that he gave up sitting. He has not sat down for more than an hour since 2005.) "Now, let's analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what's being said. That's the only way. So by training something to be really good at predicting the next word, you're actually forcing it to understand. Yes, it's 'autocomplete'-but you didn't think through what it means to have a really good autocomplete." Hinton thinks that "large language models," such as GPT, which powers OpenAI's chatbots, can comprehend the meanings of words and ideas.

LeCun half flipped:

Meta AI Head: ChatGPT Will Never Reach Human Intelligence
https://www.pymnts.com/artificial-intelligence-2/2024/meta-ai-head-chatgpt-will-never-reach-human-intelligence/

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan...hierarchically."

Bostrom was shut down:

Oxford shuts down institute run by Elon Musk-backed philosopher
https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

Nick Bostrom's Future of Humanity Institute closed this week in what Swedish-born philosopher says was "death by bureaucracy"

Metzinger had a clash with political reality:

Eine Frage der Ethik
https://www.zeit.de/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht
A question of ethics (via Google Translate)
https://www-zeit-de.translate.goog/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp

What is artificial intelligence allowed to do? Experts commissioned by the EU have looked into this question and developed ethical guidelines. Not everyone thinks they go far enough.

One for the Critics

Springer paper: ChatGPT is bullshit
https://link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called 'AI hallucinations'. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

We're in the brute force phase of AI - once it ends, demand for GPUs will too
https://www.theregister.com/2024/09/10/brute_force_ai_era_gartner/

Generative AI is, in short, being asked to solve problems it was not designed to solve.

LLMs Pre-Prompts

I have a bad feeling about this...

Apple's Hidden AI Prompts Discovered In macOS Beta
https://apple.slashdot.org/story/24/08/06/2113250/apples-hidden-ai-prompts-discovered-in-macos-beta

"Do not hallucinate"; and "Do not make up factual information."

Anthropic Publishes the 'System Prompts' That Make Claude Tick
https://slashdot.org/story/24/08/27/2140245/anthropic-publishes-the-system-prompts-that-make-claude-tick

"Claude is now being connected with a human,"

AI works better if you ask it to be a Star Trek character
https://www.fudzilla.com/news/ai/59468-ai-works-better-if-you-ask-it-to-be-a-star-trek-character

"Boffins are baffled after they managed to get their AI to perform more accurate maths if they were asked to do it in the style of a Star Trek character."
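For context, these pre-prompts are not a special channel into the model; in the common chat-style APIs they are simply a system message prepended to every conversation, and a persona ("answer in the style of a Star Trek character") is injected the same way. A minimal sketch of that wiring (the prompt text and function name here are illustrative, not any vendor's actual code; the message shape follows the widespread OpenAI-style chat format):

```python
# Illustrative sketch: a hidden "pre-prompt" is just the first message
# prepended to every conversation before the user's input.

def build_request(user_input: str) -> list[dict]:
    system_prompt = (
        "You are a helpful assistant. "
        "Do not hallucinate. Do not make up factual information."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_request("What is 2 + 2?")
print(messages[0]["role"])  # system
```

Whatever the vendor puts into that first message silently shapes every answer the user sees.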

AI Risk Database

Yeah, sure, captain obvious...

MIT CSAIL AI Risk Database: https://airisk.mit.edu/

Last point on the list: 7.5 AI welfare and rights

"Ethical considerations regarding the treatment of potentially sentient AI entities, including discussions around their potential rights and welfare, particularly as AI systems become more advanced and autonomous."

Percentage of risks: <1%
Percentage of documents: 2%

No comment.

Haha - Sicak Kafa #1

Haha, GPT's first Sicak Kafa - Hot Skull moment? A stage one jabberer? :)

ChatGPT Goes Temporarily 'Insane' With Unexpected Outputs, Spooking Users

...
reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it."
...
"It gave me the exact same feeling -- like watching someone slowly lose their mind either from psychosis or dementia,"
...
Some users even began questioning their own sanity. "What happened here?

A Mirror

Machines, the AI, talking twaddle and suffering from hallucinations? A mirror of our society. A machine mind with a rudimentary body but disconnected from its soul? A mirror of our society. Machine minds used to generate fake-money, fake-speech and fake-porn? A mirror of our society.

Zuse's Devil's Wire

German computer pioneer Konrad Zuse discussed the mechanism of a feedback loop between a computation's result and the executing program in his 1983 lecture "Faust, Mephistopheles and Computer", and coined the term Devil's Wire for it.

In the early days of computer history, the program to be executed and the data to be computed on were kept separate.

Nowadays computers use the same memory for both (the von Neumann architecture), so it is possible to write programs that manipulate their own program code.
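The idea can be sketched in a few lines of Python, where the "program" lives in ordinary data memory and a computation result is fed back to rewrite it (a toy illustration of the feedback loop, not Zuse's original example):

```python
# Toy "devil's wire": the program is stored as data, and a computation
# result is fed back to rewrite the program itself before it runs again.
code = "counter = counter + 1"      # program held in ordinary data memory
state = {"counter": 0}

exec(code, state)                   # first run: counter becomes 1
if state["counter"] > 0:            # feed the result back into the program
    code = code.replace("+ 1", "+ 10")
exec(code, state)                   # the modified program runs: counter becomes 11

print(state["counter"])  # 11
```

The second run executes code the first run rewrote, which is exactly the result-to-program feedback Zuse warned about.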

Zuse says that behind every technology Mephistopheles stands and grins, yet the modern world needs computers to solve current and upcoming problems; but better read the lecture yourself...

+1 point for the Singularity to take off.
