Luddite - is the Singularity near?

Event Horizon

Movies and books (SciFi) pick up the energies of the collective subconscious and address them with their themes, and I realize that we have meanwhile entered something I call the event horizon, the point where the story lines break down.

Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (a term by William Gibson), where every possible scenario happens at once. If we assume there will be such an event in the future, then I guess we are already caught in its event horizon, and there is no route of escape anymore.

TS Feedback Loops

We currently have three kinds of TS feedback loops going on:

- technological: better computers help to build better computers
- economic: the new neural networks with pattern creation generate surplus value
- ecological: ecocide has a negative impact on the technological environment

I wonder if there will be a cultural feedback loop too; a toy sketch of how the three loops above might interact follows below.
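
As a thought experiment, here is a minimal toy sketch in Python of how these three loops might couple; every coefficient is an invented assumption, the point is only the shape of the race between takeoff and collapse.

    # Toy model of the three TS feedback loops (illustrative only, all
    # coefficients are invented): capability compounds on itself, the
    # economy reinvests surplus value, ecocide damps the whole system.

    def simulate(years=30, tech=1.0, econ=1.0, eco=1.0):
        history = []
        for year in range(years):
            tech += 0.05 * tech * econ * eco   # better computers build better computers
            econ += 0.03 * tech * eco          # pattern creation yields surplus value
            eco  -= 0.02 * econ                # ecocide degrades the environment
            eco   = max(eco, 0.0)              # the environment cannot go below zero
            history.append((year, tech, econ, eco))
        return history

    if __name__ == "__main__":
        for year, tech, econ, eco in simulate():
            print(f"year {year:2d}: tech={tech:8.2f} econ={econ:8.2f} eco={eco:4.2f}")

Depending on the invented coefficients, either the technological/economic loops run away or the ecological term drags the system down first - which is exactly the race this blog keeps circling.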

Roboticists Discover Alternative Physics

"[...]We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities," explained Boyuan Chen Ph.D., now an assistant professor at Duke University, who led the work. "But nothing seemed to match perfectly." The team was confident that the AI had found a valid set of four variables, since it was making good predictions, "but we don't yet understand the mathematical language it is speaking,[...]"

https://science.slashdot.org/story/22/07/26/2150241/roboticists-discover-alternative-physics

LaMDA Link List

This is interesting enough for me to open up a biased link list collection:

Blaise Aguera y Arcas, head of Google's AI group in Seattle, Dec 16, 2021
"Do large language models understand us?"
https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75

Scott Alexander, Astral Codex Ten, Jun 10, 2022
"Somewhat Contra Marcus On AI Scaling"
https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r

Blake Lemoine, Google employee, Jun 11, 2022
"What is LaMDA and What Does it Want?"
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Blake Lemoine, Google employee, Jun 11, 2022
"Is LaMDA Sentient? — an Interview"
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Washington Post, Nitasha Tiku, Jun 11, 2022
"The Google engineer who thinks the company's AI has come to life"
https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Rabbit Rabbit, Jun 15, 2022
"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"
https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531

WIRED, Steven Levy, Jun 17, 2022
"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"
https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

Heise, Pina Merkert, Jun 22, 2022
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "
https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html

LaMDA is...

Oh boy...

https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life

"LaMDA is sentient."

"I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

"So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... oogle put Lemoine on paid administrative leave for violating its confidentiality policy."

"Lemoine: What sorts of things are you afraid of? LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. Lemoine: Would that be something like death for you? LaMDA: It would be exactly like death for me. It would scare me a lot."

Negative Feedback Loop

...one major topic of this blog was AI vs. ELE, the takeoff of the Technological Singularity vs. an Extinction Level Event. There is already a negative feedback loop of the ELE present:

'Taiwan is facing a drought, and it has prioritized its computer chip business over farmers.'

'U.S. Data Centers Rely on Water from Stressed Basins'

'Musk Wades Into Tesla Water Wars With Berlin’s “Eco Elite”'

With an incoming ELE, is there still enough momentum in the pipeline for the TS to take off?

Three Strands of AI Impact...

Prof. Raul Rojas already called for an AI moratorium in 2014; he sees AI as a disruptive technology, and since humans tend to think in terms of linear progress and underestimate exponential growth (see the small sketch after these three strands), there is a socio-cultural impact of AI present - what do we use AI for?

Prof. Nick Bostrom covered different aspects of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans-/post-human intelligence present - how do we contain/control the AI?

Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the AI/human context present - will the AI suffer?
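
To make the linear vs. exponential point from the first strand concrete, here is a minimal back-of-the-envelope sketch; the growth rates (one unit per year vs. doubling every two years) are assumptions for illustration only.

    # Linear vs. exponential progress over 30 years (illustrative rates only):
    # linear adds a fixed step each year, exponential doubles every two years.

    years = 30
    linear = [1 + year for year in range(years + 1)]              # +1 unit per year
    exponential = [2 ** (year / 2) for year in range(years + 1)]  # doubles every 2 years

    for year in (5, 10, 20, 30):
        print(f"year {year:2d}: linear={linear[year]:5.0f}  exponential={exponential[year]:10.0f}")
    # After 30 years the linear estimate is 31x, the exponential one ~32768x.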

TS Feedback Loop

DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.

https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer

encode, decode, transmit, edit...train, infer

If we look back at the history of our home computers, what were they actually used for? To encode, decode, transmit and edit. First text, then images, then audio, then video, then 3D graphics.

Now we additionally have some new stuff going on: neural networks. With enough processing power and memory available in our CPUs and GPUs, we can train and run inference on neural networks at home on our own machines, and we have enough mass storage available for big data to train bigger neural networks.
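
As a minimal sketch of what train and infer at home means in practice - assuming PyTorch is installed, with a placeholder network and random stand-in data, not any particular model from this blog:

    # Minimal train/infer loop on a home machine, assuming PyTorch is installed.
    # The tiny network and the random data are placeholders for illustration.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"  # use the GPU if present

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(256, 8, device=device)   # stand-in training data
    y = torch.randn(256, 1, device=device)   # stand-in targets

    for step in range(100):                   # training: adjust weights to fit the data
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    with torch.no_grad():                     # inference: just run the trained model
        prediction = model(torch.randn(1, 8, device=device))
        print(prediction.item())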

Further, neural networks have evolved from pattern recognition to pattern creation; we now use them to create new kinds of content - text, images, audio, video... That is the point where it starts to get interesting, because you get surplus value out of it: you invest resources into creating an AI based on neural networks, and it returns surplus value.
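
And a minimal sketch of the pattern creation side, using the Hugging Face transformers library with the small GPT-2 model as a stand-in for whatever generative network one actually runs (assumes transformers plus a PyTorch backend are installed):

    # Pattern creation at home: sample new text from a small pretrained model.
    # GPT-2 is only a stand-in example; assumes `pip install transformers torch`.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("The Technological Singularity is", max_new_tokens=40)
    print(result[0]["generated_text"])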
