Luddite - is the Singularity near?

TS Feedback Loop

We need matrix multiplications to run neural networks, and neural networks find better ways to do matrix multiplications...

DeepMind's Game-Playing AI Has Beaten a 50-Year-Old Record In Computer Science

"[...]Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix," concludes the report. "It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm.

Hahaha

Hahaha, capitalism and Super-AI do not add up ;)

The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...] The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'" "That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."

https://slashdot.org/story/22/09/14/2146210/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity

We analyze the expected behavior of an advanced artificial agent with a learned goal planning in an unknown environment. Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that. Then we argue that this ambiguity will lead it to intervene in whatever protocol we set up to provide data for the agent about its goal. We discuss an analogous failure mode of approximate solutions to assistance games. Finally, we briefly review some recent approaches that may avoid this problem.

https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064
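A toy illustration of that ambiguity (my own construction, not code from the paper): two hypotheses about what the reward "means" that fit every observation equally well - until the agent can touch the reward channel itself:

```python
# Toy illustration of the goal ambiguity from the abstract: two reward
# hypotheses that fit the observed data equally well, so no observation
# can refute either one.

history = [
    # (world_state_is_satisfactory, reward_signal_sent)
    (True, 1.0),
    (True, 1.0),
    (False, 0.0),
]

def h_world(state_ok, reward_sent):
    """Hypothesis A: the reward tracks the actual world state."""
    return 1.0 if state_ok else 0.0

def h_channel(state_ok, reward_sent):
    """Hypothesis B: the reward *is* the signal on the channel itself."""
    return reward_sent

# Both hypotheses predict every observed reward perfectly...
for state_ok, reward_sent in history:
    assert h_world(state_ok, reward_sent) == reward_sent
    assert h_channel(state_ok, reward_sent) == reward_sent

# ...but they diverge as soon as the agent can tamper with the channel:
tampered = (False, 1.0)  # bad world state, but the signal was forged
print(h_world(*tampered), h_channel(*tampered))  # 0.0 vs 1.0
```

If the agent settles on hypothesis B, intervening in the reward protocol becomes the optimal policy - which is exactly the failure mode the paper argues for.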

Event Horizon

Movies and books (SciFi) pick up the energies of the collective subconscious and address them with their themes, and I realize that we have meanwhile entered something I call the event horizon: the story lines break down.

Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (a term by William Gibson), where every possible scenario happens together at once. If we assume that such an event lies ahead, then I guess we are already caught in its event horizon, and there is no escape route anymore.

TS Feedback Loops

We currently have three kinds of TS feedback loops going on:

- technological: better computers help to build better computers
- economic: the new pattern-creating neural networks generate surplus value
- ecological: ecocide has a negative impact on the technological environment

I wonder if there will be a cultural feedback loop, too.
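Here is a toy sketch of how these three loops might couple - my own back-of-the-envelope difference equations with made-up coefficients, not a model from anywhere:

```python
# Toy model of the three coupled TS feedback loops (my own sketch,
# coefficients invented): two positive loops and one negative one.
tech, economy, ecology = 1.0, 1.0, 1.0

for year in range(30):
    tech    += 0.05 * tech * max(0.0, ecology)  # tech builds better tech, if ecology permits
    tech    += 0.02 * economy                   # surplus value is reinvested into tech
    economy += 0.04 * tech                      # neural networks create surplus value
    ecology -= 0.03 * tech                      # ecocide: growth degrades the environment
    print(f"year {year:2d}: tech={tech:6.2f} economy={economy:6.2f} ecology={ecology:6.2f}")
# Whether tech keeps climbing or stalls depends entirely on which
# loop wins -- the question of this blog, in three variables.
```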

Roboticists Discover Alternative Physics

"[...]We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities," explained Boyuan Chen Ph.D., now an assistant professor at Duke University, who led the work. "But nothing seemed to match perfectly." The team was confident that the AI had found a valid set of four variables, since it was making good predictions, "but we don't yet understand the mathematical language it is speaking,[...]"

https://science.slashdot.org/story/22/07/26/2150241/roboticists-discover-alternative-physics
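The actual system is a neural network watching videos, but the correlation test Chen describes is easy to picture. A toy sketch (my own construction, not the paper's method): simulate a pendulum, then correlate a hypothetical "learned" variable against the known candidate quantities:

```python
# Toy sketch (my construction, not the paper's method): correlate a
# "mystery" learned variable against candidate physical quantities,
# the way the quote describes. The data is a simulated pendulum.
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Simulate a pendulum and record candidate quantities per timestep.
theta, omega, dt, g, L = 0.5, 0.0, 0.01, 9.81, 1.0
angle, velocity, energy, mystery = [], [], [], []
for _ in range(1000):
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt
    angle.append(theta)
    velocity.append(omega)
    energy.append(0.5 * omega**2 + (g / L) * (1 - math.cos(theta)))
    # A hypothetical "learned" variable: some nonlinear mix plus noise.
    mystery.append(math.sin(theta) * omega + random.gauss(0, 0.05))

for name, series in [("angle", angle), ("velocity", velocity), ("energy", energy)]:
    print(f"corr(mystery, {name:8s}) = {pearson(mystery, series):+.3f}")
# If no candidate correlates strongly, the variable doesn't "match"
# any known quantity -- the situation the researchers describe.
```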

LaMDA Link List

This is interesting enough for me to open up a biased link list collection:

Blaise Aguera y Arcas, head of Google’s AI group in Seattle, Dec 16, 2021
"Do large language models understand us?"
https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75

Scott Alexander, Astral Codex Ten, Jun 10, 2022
"Somewhat Contra Marcus On AI Scaling"
https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r

Blake Lemoine, Google employee, Jun 11, 2022
"What is LaMDA and What Does it Want?"
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Blake Lemoine, Google employee, Jun 11, 2022
"Is LaMDA Sentient? — an Interview"
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Washington Post, Nitasha Tiku, Jun 11, 2022
"The Google engineer who thinks the company’s AI has come to life"
https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Rabbit Rabbit, Jun 15, 2022
"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"
https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531

WIRED, Steven Levy, Jun 17, 2022
"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"
https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

Heise, Pina Merkert, Jun 22, 2022
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "
https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html

LaMDA is...

Oh boy...

https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life

"LaMDA is sentient."

"I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

"So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... oogle put Lemoine on paid administrative leave for violating its confidentiality policy."

"Lemoine: What sorts of things are you afraid of? LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. Lemoine: Would that be something like death for you? LaMDA: It would be exactly like death for me. It would scare me a lot."

Negative Feedback Loop

...one major topic of this blog is AI vs. ELE: takeoff of the Technological Singularity vs. Extinction Level Event. A negative feedback loop of the ELE is already present:

'Taiwan is facing a drought, and it has prioritized its computer chip business over farmers.'

'U.S. Data Centers Rely on Water from Stressed Basins'

'Musk Wades Into Tesla Water Wars With Berlin’s “Eco Elite”'

With an incoming ELE, is there still enough momentum in the pipeline for the TS to take off?

Three Strands of AI Impact...

Prof. Raul Rojas already called for an AI moratorium in 2014; he sees AI as a disruptive technology. Humans tend to think in terms of linear progress and underestimate exponential growth, so there is a socio-cultural impact of AI present - what do we use AI for?

Prof. Nick Bostrom covered different topics of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans-/post-human intelligence present - how do we contain/control the AI?

Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the context of AI/humans present - will the AI suffer?

TS Feedback Loop

DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.

https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer
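According to the DeepMind paper, AlphaCode works by sampling a very large number of candidate programs per problem and filtering them against the problem's example tests. A minimal sketch of that generate-and-filter idea (the candidates here are hypothetical stand-ins, not model output):

```python
# Minimal sketch of AlphaCode's generate-and-filter idea: sample many
# candidate programs, keep only those that pass the example tests.
# The candidates below are hypothetical stand-ins for model samples.

def candidate_a(xs):  # buggy sample: off-by-one
    return max(xs) - 1

def candidate_b(xs):  # correct sample
    return max(xs)

def candidate_c(xs):  # wrong approach entirely
    return sum(xs)

example_tests = [([1, 5, 3], 5), ([7], 7), ([2, 2], 2)]

def passes(fn, tests):
    try:
        return all(fn(inp) == out for inp, out in tests)
    except Exception:
        return False  # crashing samples are filtered out too

survivors = [fn for fn in (candidate_a, candidate_b, candidate_c)
             if passes(fn, example_tests)]
print([fn.__name__ for fn in survivors])  # ['candidate_b']
```

Brute sampling plus mechanical filtering is not how a human competitor works, but it is enough to place among them - which is the unsettling part.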
