This blog has two major topics, AI vs. ELE: the takeoff of the technological singularity vs. an extinction level event. But of course there are other things going on in the memesphere, physics and metaphysics. It seems to me that the fabric of this world is going to open up: Einstein's theory of relativity and quantum mechanics seek a merger, the separation of spirit and matter seeks a merger, the 3.5-dimensional mind seeks to expand. IMO we already have all the puzzle pieces out there for a TOE, we just need a genius who is able to merge them into a bigger picture, or the like.
We have had three waves, the agricultural revolution, the industrial revolution, the information age, and now AI based on neural networks creates new kinds of content: text, images, audio, video. These systems already write Wikipedia articles, they outperform humans in finding mathematical algorithms. Is this another dividing line, is this the fourth wave? Currently I see AI split into a lot of dedicated weak AIs with specific purposes; is a strong AI incoming, an AGI, artificial general intelligence, which will combine all of those into one big system? Interesting times.
Reflecting a bit on my recent posts in here, I am convinced that the TS (technological singularity) already took off, but now the question is whether it is stable. Considering the current negative feedback loops caused by the use of human technology, the question is whether the takeoff of the TS is able to stabilize a fragile technological environment embedded in a fragile biological environment on this planet Earth. Time will tell.
- Cooling failure brings down Google Cloud data center in London on UK's hottest day
- Twitter's data center knocked out by extreme heat in California
- California Warns of Possible Summer Blackouts as Power Runs Low
- Why is Texas suffering power blackouts during the winter freeze?
- New Bill Would Ban Bitcoin Mining Across New York State for Three Years
We need matrix multiplications for running neural networks, and neural networks find better ways to do matrix multiplications...
"[...]Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix," concludes the report. "It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm."
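The kind of saving AlphaTensor searches for can be illustrated at the smallest scale with Strassen's classic 1969 trick: multiplying two 2x2 matrices with 7 scalar multiplications instead of the naive 8. A minimal sketch in plain Python (this is the historical hand-found algorithm, not AlphaTensor's tensor-decomposition machinery):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (nested lists) with 7 scalar
    multiplications instead of the naive 8 (Strassen, 1969)."""
    a, b = A[0]
    c, d = A[1]
    e, f = B[0]
    g, h = B[1]
    # The 7 products -- each line costs exactly one multiplication.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine with additions only.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Matches the naive result:
# strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) -> [[19, 22], [43, 50]]
```

Applied recursively to block matrices, this lowers the asymptotic cost below n^3; AlphaTensor's contribution was to discover analogous (and sometimes better) decompositions for larger sizes automatically.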
Hahaha, capitalism and Super-AI do not add up ;)
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...] The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'" "That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."
We analyze the expected behavior of an advanced artificial agent with a learned goal planning in an unknown environment. Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that. Then we argue that this ambiguity will lead it to intervene in whatever protocol we set up to provide data for the agent about its goal. We discuss an analogous failure mode of approximate solutions to assistance games. Finally, we briefly review some recent approaches that may avoid this problem.
Movies and books (SciFi) pick up the energies of the collective subconsciousness and address these with their themes, and I realize that we have meanwhile entered something I call the event horizon: the story lines break down.
Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (term by William Gibson), where every possible scenario happens together at once. If we assume that there will be such an event, then I guess we are already caught in its event horizon, and there is no escape route anymore.
We currently have three kinds of TS feedback loops going on:
- technological, better computers help to build better computers
- economic, the new pattern-creating neural networks generate surplus value
- ecological, ecocide has a negative impact on the technological environment
Wonder if there will be a cultural feedback loop.
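The interplay of these loops can be sketched as a toy dynamical system. Purely illustrative: the loop strengths `alpha`, `beta`, `gamma` are invented parameters, not measurements of anything:

```python
def simulate(steps, alpha=0.05, beta=0.03, gamma=0.04):
    """Toy model of the loops above: technology level T grows on
    itself (the positive technological/economic loops), while
    ecological damage E, driven by technology use, drags T back
    down (the negative ecological loop). All coefficients are
    made up for illustration only."""
    T, E = 1.0, 0.0
    for _ in range(steps):
        T += alpha * T - gamma * E * T  # self-reinforcement minus ecological drag
        E += beta * T                   # damage accumulates with technology use
    return T, E
```

Whether T keeps growing or collapses depends entirely on the invented coefficients, which is exactly the open question: can the positive loops outrun the negative one?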
"[...]We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities," explained Boyuan Chen Ph.D., now an assistant professor at Duke University, who led the work. "But nothing seemed to match perfectly." The team was confident that the AI had found a valid set of four variables, since it was making good predictions, "but we don't yet understand the mathematical language it is speaking,[...]"
This is interesting enough for me to open up a biased link list collection:
Blaise Aguera y Arcas, head of Google’s AI group in Seattle, Dec 16, 2021
"Do large language models understand us?"
Scott Alexander, Astral Codex Ten, Jun 10, 2022
"Somewhat Contra Marcus On AI Scaling"
Blake Lemoine, Google employee, Jun 11, 2022
"What is LaMDA and What Does it Want?"
Blake Lemoine, Google employee, Jun 11, 2022
"Is LaMDA Sentient? — an Interview"
Washington Post, Nitasha Tiku, Jun 11, 2022
"The Google engineer who thinks the company’s AI has come to life"
Rabbit Rabbit, Jun 15, 2022
"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"
WIRED, Steven Levy, Jun 17, 2022
"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"
Heise, Pina Merkert, Jun 22, 2022
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize!"