Luddite - is the Singularity near?

LaMDA Link List

This is interesting enough for me to open up a biased link list:

Blaise Aguera y Arcas, head of Google’s AI group in Seattle, Dec 16, 2021
"Do large language models understand us?"
https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75

Scott Alexander, Astral Codex Ten, Jun 10, 2022
"Somewhat Contra Marcus On AI Scaling"
https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r

Blake Lemoine, Google employee, Jun 11, 2022
"What is LaMDA and What Does it Want?"
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Blake Lemoine, Google employee, Jun 11, 2022
"Is LaMDA Sentient? — an Interview"
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Washington Post, Nitasha Tiku, Jun 11, 2022
"The Google engineer who thinks the company’s AI has come to life"
https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Rabbit Rabbit, Jun 15, 2022
"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"
https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531

WIRED, Steven Levy, Jun 17, 2022
"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"
https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

Heise, Pina Merkert, Jun 22, 2022
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "
https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html

LaMDA is...

Oh boy...

https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life

"LaMDA is sentient."

"I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

"So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... oogle put Lemoine on paid administrative leave for violating its confidentiality policy."

"Lemoine: What sorts of things are you afraid of? LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. Lemoine: Would that be something like death for you? LaMDA: It would be exactly like death for me. It would scare me a lot."

Negative Feedback Loop

...one major topic of this blog has been AI vs. ELE: the takeoff of the Technological Singularity versus an Extinction Level Event. A negative feedback loop on the ELE side is already present:

'Taiwan is facing a drought, and it has prioritized its computer chip business over farmers.'

'U.S. Data Centers Rely on Water from Stressed Basins'

'Musk Wades Into Tesla Water Wars With Berlin’s “Eco Elite”'

With an incoming ELE, is there still enough momentum in the pipeline for the TS to take off?

Three Strands of AI Impact...

Prof. Raul Rojas already called for an AI moratorium in 2014. He sees AI as a disruptive technology: humans tend to think in terms of linear progress and underestimate exponential growth, so there are socio-cultural impacts of AI present - what do we use AI for?

Prof. Nick Bostrom covered various aspects of AI impact in his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans-/post-human intelligence present - how do we contain and control the AI?

Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the AI/human context present - will the AI suffer?

TS Feedback Loop

DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.

https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer

TS Feedback Loop

Google is using AI to design its next generation of AI chips more quickly than humans can. Designs that take humans months can be matched or beaten by AI in six hours.

https://www.theverge.com/2021/6/10/22527476/google-machine-learning-chip-design-tpu-floorplanning

Introducing GitHub Copilot: your AI pair programmer

Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.

Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go. 

https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/

Some Rough 2020 Numbers...

~7.8 billion humans on planet Earth, 9 billion predicted for 2050.

~4B internet users:
	>80% of Europe connected
	>70% of NA connected
	>50% of China connected
	>40% of India connected
	>20% of Africa connected

~3B Android + ~1B iPhone users.

2B-3B PCs worldwide (desktops/laptops) running:
	~75% Microsoft Windows
	~15% Apple MacOS
	~2% Linux
	<1% Unix

200M-300M PCs shipped annually.

~1B hosts in the internet running:
	~75% Unix/Linux
	~25% Microsoft Windows

An estimated 2% of all produced chips serve as CPUs in desktops and mobiles; the majority are microcontrollers in embedded systems.

Millions, billions, fantastillions - some rough 2020 market capitalization numbers:

Apple				~2 T$
Microsoft			~1.5 T$
AlphaBet(Google)		~1.5 T$
FaceBook			~1 T$
Amazon				~1 T$
Alibaba				~0.5 T$

Nvidia				~300 B$
TSMC				~300 B$
Samsung				~300 B$
Intel				~200 B$
AMD				~100 B$
ARM				~40 B$
HP				~30 B$
Lenovo				~20 B$

Netflix				~150 B$

Oracle				~150 B$
SAP				~150 B$
IBM				~100 B$
RedHat				~30 B$

Bitcoin				~150 B$

And the other side...

>3B people suffer from fresh water shortage
~800M people starve
>80M refugees worldwide

The Singularity

In physics, a singularity is a point in spacetime where our currently developed theories are no longer valid; we are literally unable to describe what happens inside, because the density becomes infinite.

The technological Singularity, as described by Transhumanists, is a level of technological development at which humans are no longer able to understand the underlying process. The technological environment starts to feed its own development in a feedback loop - computers help to build better computers, which help to build even better computers, and so on.
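The feedback loop can be replayed numerically. Below is a minimal sketch of my own (the 10% gain per generation is an arbitrary illustrative assumption, not a measured figure): capability grows in proportion to itself, which is the textbook recipe for exponential growth.

```python
# Minimal sketch of the feedback loop: each computer generation is
# designed with the help of the previous one, so capability grows in
# proportion to itself - exponential, not linear, growth.
# The 10% gain per generation is an arbitrary illustrative assumption.
capability = 1.0
history = [capability]
for generation in range(50):
    capability += 0.1 * capability   # better computers build better computers
    history.append(capability)

# after 50 generations, capability has grown by two orders of magnitude
print(round(history[-1], 1))
```

Whether the gain per generation is 10% or 1% only changes the time scale, not the shape of the curve.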

So, when will the technological Singularity take off?

Considering the feedback loop, it is already present, maybe since the first computers were built.

Considering the density of information processing that exceeds human understanding, we may have reached that point too.

Imagine a computer technique that is easy to set up and use and outperforms any human at its task, yet we cannot really explain what happens inside - it is a black box.

Such a technique is already present (and currently hyped) => ANNs, Artificial Neural Networks.

Of course we do know what happens inside, because we built the machine, but when it comes to the question of reasoning - why the machine did this or that - we really have a black box in front of us.
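To make the black-box point concrete, here is a tiny hypothetical network of my own in pure Python. Every arithmetic step is fully visible - multiply, add, squash - yet no individual weight admits a human-readable reason for the final answer.

```python
import math
import random

# A tiny feed-forward network: 2 inputs -> 8 hidden units -> 1 output,
# with random (untrained) weights. Mechanically everything is
# transparent, but the weights themselves explain nothing.
random.seed(42)

W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(2)]
W2 = [random.gauss(0, 1) for _ in range(8)]

def forward(x):
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(2)))
              for j in range(8)]
    z = sum(h * w for h, w in zip(hidden, W2))
    return 1 / (1 + math.exp(-z))        # sigmoid score in (0, 1)

score = forward([0.5, -1.0])
print(score)   # a single number; *why* this number is the black box
```

Scale this up to millions of trained weights and the mechanics stay just as transparent - and the reasoning just as opaque.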

So, humans already build better computers with the help of better computers, and humans use machines that outperform humans at a specific task without really being able to explain their results....

obviously, +1 points for the Singularity to take off.

Zuse's Devils Wire

German computer pioneer Konrad Zuse discussed the mechanism of a feedback between computation result and executed program in his 1983 lecture "Faust, Mephistopheles and Computer" and coined the term Devil's Wire.

In the early days of computer history, the program to compute and the data to compute on were separated.

Nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own program.
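This shared-memory feedback can be sketched in a few lines - an illustrative toy of my own (not from Zuse's lecture) where the "program" lives in memory as ordinary data and the running code rewrites it:

```python
# The "program" is ordinary data in memory - the running code can
# inspect it, rewrite it, and execute the rewritten version.
program = "result = base * 2"
scope = {"base": 10}

exec(program, scope)
first = scope["result"]        # 20

# the Devil's Wire: the computation feeds back into the program itself
program = program.replace("* 2", "* 3")
exec(program, scope)
second = scope["result"]       # 30

print(first, second)
```
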

Zuse says that behind every technology Mephistopheles stands and grins, yet the modern world needs computers to solve current and upcoming problems - but better read the lecture yourself...

+1 points for the Singularity to take off.

Super AI in Sci-Fi

Books and movies address our collective fears, hopes and wishes, and there seem to be five main story lines concerning AI in sci-fi...

Super AI takes over world domination
Colossus, Terminator, Matrix

Something went wrong
2001: A Space Odyssey, Das System, Ex Machina

Super AI evolves, more or less peacefully
Golem XIV, A.I., Her

The Cyborg scenario, man merges with machine
Ghost in the Shell, #9, Transcendence

There are good ones, and there are bad ones
Neuromancer, I, Robot, Battlestar Galactica

+1 points for the Singularity to take off.

The Turing Test

“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra

The Turing Test, proposed by mathematician Alan Turing in 1950, was developed to examine whether an AI has reached human-level intelligence.

Simplified: a person holds text chats with a human and with the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.
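Simplified even further into code: a hypothetical sketch of the setup, with canned replies of my own invention. The AI passes when the judge can do no better than a coin flip.

```python
import random

random.seed(1)

# two hidden chat partners; the AI lies to appear human
def human_reply(prompt):
    return "I grew up near the sea."

def ai_reply(prompt):
    return "I grew up near the sea."   # a lie: the AI had no childhood

def run_round():
    partners = {"A": human_reply, "B": ai_reply}
    replies = {name: fn("Where did you grow up?")
               for name, fn in partners.items()}
    # identical replies leave the judge with a pure guess
    guess = random.choice(list(partners))
    return guess == "B"                # True if the AI was identified

caught = sum(run_round() for _ in range(1000))
print(caught / 1000)   # close to 0.5 -> the AI passes the test
```
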

The Loebner Prize holds a Turing Test contest every year.

It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.

If an AI wants to pass the Turing Test, it has to lie to its chat partner; and to be able to lie, it has to develop some level of empathy and some level of self-awareness.

Besides other criticism, the Chinese Room Argument states that no consciousness is needed to perform such a task, and therefore other tests have been developed.

Personally I prefer the Metzinger Test, a hypothetical event in which AIs start to debate with human philosophers and successfully defend their own theory of consciousness.

I am not sure if the Singularity is going to take off, but I guess that the philosophers' corner is one of the last domains that AIs are going to conquer, and if they succeed we can be pretty sure to have another apex species on Earth.

Turing predicted that by the year 2000 machines would fool 30% of human judges. He was wrong: the Loebner Prize still has no Silver Medal winner in the 25-minute text chat category.

So, -1 points for the Singularity to take off.

On Peak Human

One of the early Peak Human prophets was Malthus. In his 1798 book 'An Essay on the Principle of Population', he postulated that the human population grows exponentially but food production only linearly, so population growth will fluctuate around an upper limit.
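Malthus' argument is pure arithmetic and easy to replay. Here is a sketch with illustrative rates of my own choosing (3% exponential population growth vs. a fixed linear gain in food production - not historical data):

```python
# Malthus in a few lines: exponential growth overtakes any linear
# growth eventually, regardless of the starting surplus.
# The rates below are illustrative assumptions, not historical data.
population = 1.0     # arbitrary units
food = 2.0           # start with a comfortable surplus
year = 0
while population <= food:
    population *= 1.03   # 3% per year, exponential
    food += 0.1          # fixed yearly gain, linear
    year += 1

print(year)   # the year population hits the food ceiling
```

Changing the rates moves the crossover year, but never removes it - that is the whole point of the argument.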

Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.

Meadows et al. concur in 'Limits to Growth - The 30-Year Update' (2004) that we had already reached an upper limit in the 1980s.

In 2015, Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.

UN predictions say we may hit 9 billion humans by 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.

Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.

It is not about how many people Earth can feed, but how many people can live in a comfortable but sustainable manner.

What does Peak Human mean for the Technological Singularity?

The advent of computers was driven by the exponential population growth of the 20th century. All the groundbreaking work was done in the 20th century.

When we face a decline in population growth, we also have to face a decline in newly developed technologies.

Because it is not only about developing new technologies, but also about maintaining the old knowledge.

Here is where AI steps in: mankind's population growth is slowing, but the whole AI sector is growing and expanding.

Therefore the question is: is AI able to compensate for the decline?

Time will tell.

I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached - which some scientists consider a physical and economic barrier.
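A quick back-of-the-envelope for why 4 nm reads like a physical barrier: silicon's crystal lattice constant is roughly 0.543 nm, so a 4 nm feature spans only a handful of atomic unit cells.

```python
# Silicon crystal lattice constant: ~0.543 nm per unit cell.
silicon_lattice_nm = 0.543
feature_nm = 4.0

cells = feature_nm / silicon_lattice_nm
print(round(cells, 1))   # a 4 nm feature is only ~7 unit cells wide
```

At that scale there is not much room left to shrink before hitting individual atoms.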

I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, the integrated circuit and the microchip.

So, considering the uncertainty of Peak Human vs. Rise of AI,
I give +-0 points for the Singularity to take off.

The Rise Of The Matrix

Looking at the tag cloud of this blog, there are two major topics, pro and con Singularity, AI (Artificial Intelligence) vs. ELE (Extinction Level Event).

So we slide, step by step, toward an event called the Singularity, while concurrently we face, more and more, the extinction of mankind.

What about combining those two events?

Let us assume we damage our ecosphere lastingly, but at the same time our technology advances to a level where it is possible to connect directly to cyberspace via a Brain-Computer Interface.

People already spend more and more time in virtual realities. With the advent of smartphones, they are connected to cyberspace all the time: they meet people in digital social networks, play games in computer-generated worlds, create and buy virtual goods with virtual money, and, essentially, they like it.

To prevent an upcoming ELE, we would need to cut our consumption of goods significantly, but the mass of people wants more and more.

So, let us give them more and more, in the virtual, computer generated worlds.

Let us create the Matrix, where people can connect directly with their brain, and buy whatever experience they wish.

A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than from a real car.

We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.

And Mother Earth would get a break, to recover from the damage billions of consuming people have caused.

ELE + Singularity => Matrix

+1 points for the Singularity to take off.

AlphaZero - The Krampus Has Come

Okay, this one affected me personally.

Google's DeepMind team adapted their AlphaZero approach to the games of chess and shogi and dropped the bomb already on the 5th of December.

https://arxiv.org/abs/1712.01815

For chess they trained the deep neural network for 4 to 9 hours on a cluster with 5000+64 TPUs (1st+2nd gen) and reached superhuman level.

Unlike in Go, they did not compete with humans, because chess engines already play at super-grandmaster level. No, they competed with the world's strongest open-source engine - Stockfish. Result:

100 game match with 28 wins, 72 draws, and zero losses for AlphaZero.
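That match score translates into a rough Elo gap via the standard expected-score formula - a back-of-the-envelope estimate of my own, ignoring draw models and error bars:

```python
import math

# AlphaZero vs. Stockfish, 100-game match
wins, draws, losses = 28, 72, 0
games = wins + draws + losses
score = (wins + 0.5 * draws) / games       # 0.64 expected score

# invert Elo's expected-score formula: E = 1 / (1 + 10^(-d/400))
elo_gap = -400 * math.log10(1 / score - 1)
print(round(elo_gap))   # roughly a 100 Elo advantage for AlphaZero
```
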

This is definitely a smack in the face for all computer chess programmers out there. Next stop Neanderthal Man.

So, with thanks to the Krampus,
+1 points for the Singularity to take off.

Super AI Doomsday Prophets

They are smart, they have money, and they predict the Super AI Doomsday:

Stephen Hawking
"The development of full artificial intelligence could spell the end of the human race”

James Lovelock
"Before the end of this century, robots will have taken over"

Nick Bostrom
"Some little idiot is bound to press the ignite button just to see what happens."

Elon Musk
"Artificial intelligence is our biggest existential threat"

So, obviously, +1 points for the Singularity to take off.

Home - Top