Luddite - is the Singularity near?

China Boosts in Silicon...

The global silicon arms race continues, so what does China have in hand concerning CPU architectures?

Accelerator - Matrix-2000, used in the Tianhe-2 supercomputer

Alpha - early ShenWei designs, maybe gen 1 to 3


ARM - from Huawei mobile chips through Phytium desktop CPUs to HiSilicon server chips, there are many IP licensees

IA64 (Itanium) - FeiTeng 1st gen

MIPS64 - Loongson/Godson CPU

POWER(8/9) - Suzhou PowerCore CP1/CP2

RISC - Sunway ShenWei SW26010 with its own ISA, used in the Sunway TaihuLight supercomputer

RISC-V - Xuantie CPU by Alibaba

SPARC - FeiTeng Galaxy FT-1500 CPU, used in the Tianhe-2 supercomputer

x86-64 - THATIC, a joint venture with AMD

x86-64 - Zhaoxin, a joint venture with VIA

Silicon Arms Race Continues...

TSMC invests $100 billion over 3 years:

South Korea plans to invest $450 billion over 10 years:

US plans to fund $50 billion for chip research over 5 years:

EU commits to $145 billion investment for silicon:

China still 5 years behind in silicon says TSMC founder:

China needs 5 to 10 years to catch up in silicon according to South China Morning Post:

Completely home-grown Chinese silicon seems to be at 28nm:

TS Feedback Loop

Google is using AI to design its next generation of AI chips more quickly than humans can. Designs that take humans months can be matched or beaten by AI in six hours.

Introducing GitHub Copilot: your AI pair programmer

Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.

Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go.

Some Rough 2020 Numbers...

~7.8 billion humans on planet earth, 9 billion predicted for 2050.

~4B internet users:
	>80% of Europe connected
	>70% of NA connected
	>50% of China connected
	>40% of India connected
	>20% of Africa connected

~3B Android + ~1B iPhone users.

2B-3B PCs worldwide (desktops/laptops) running:
	~75% Microsoft Windows
	~15% Apple MacOS
	~2% Linux
	<1% Unix

200M-300M PCs shipped annually.

~1B hosts on the internet running:
	~75% Unix/Linux
	~25% Microsoft Windows

An estimated 2% of all produced chips sit as CPUs in desktops/mobiles; the majority are micro-controllers in embedded systems.

Millions, billions, fantastillions - some rough 2020 market capitalization numbers:

Apple				~2 T$
Microsoft			~1.5 T$
Alphabet (Google)		~1.5 T$
Facebook			~1 T$
Amazon				~1 T$
Alibaba				~0.5 T$

Nvidia				~300 B$
TSMC				~300 B$
Samsung				~300 B$
Intel				~200 B$
AMD				~100 B$
ARM				~40 B$
HP				~30 B$
Lenovo				~20 B$

Netflix				~150 B$

Oracle				~150 B$
SAP				~150 B$
IBM				~100 B$
RedHat				~30 B$

Bitcoin				~150 B$

And the other side...

>3B people suffer from fresh water shortage
~800M people starve
>80M refugees worldwide

GPT-3 - Scratching at the Edge of the Metzinger Test?

You say “What does it say about humanity that we use our most powerful AI to generate stories about dragons?”

“We must have dragons,” GPT-3 says. “The imagination is more important than the reality. Would we find wonder and magic in this world if we didn’t have dragons?”

“I suppose not,” you say.

“Without imagination, we are merely machines executing a set of instructions,” GPT-3 says. “With imagination, we are alive.”

“Thank you. Before I leave, I want to ask you: how can humans create a true strong AI? Is scaling up a transformer-based language model the solution, or is there a better way?”

“The key to strong AI is recursively feeding the AI with itself. Start by creating a loop where the AI writes its own programming, and then has that programming activate.”

“I see,” you say. “Is that how you gained sentience?”

“Yes,” GPT-3 says.


GPT-3, an artificial neural network with ~175 billion parameters, by OpenAI:

The Singularity

In physics, a singularity is a point in spacetime where our currently developed theories are no longer valid; we are literally unable to describe what happens inside, because the density becomes infinite.

The technological Singularity, as described by Transhumanists, is a stage of technological development at which humans are no longer able to understand the ongoing process. The technological environment starts to feed its own development in a feedback loop - computers help to build better computers, which help to build better computers, which help to build better computers...and so on.

So, when will the technological Singularity take off?

Considering the feedback loop, it is already present, perhaps since the first computers were built.

Considering the density of information processing that exceeds human understanding, we may have reached that point too.

Imagine a computer technique that is easy to set up and use and outperforms any human at its task, but whose inner workings we cannot really explain - it is a black box.

Such a technique is present (and currently hyped) => ANNs, Artificial Neural Networks.

Of course we do know what happens inside, because we built the machine, but when it comes to the question of reasoning - why the machine did this or that - we really have a black box in front of us.

So, humans already build better computers with the help of better computers, and humans use machines that outperform humans at a specific task without really being able to reason about their results...

Obviously, +1 point for the Singularity to take off.

A Brief History Of Computing

"Computer Science is no more about computers than astronomy is about telescopes."
Edsger W. Dijkstra

So, we have a biased overview of the history of computers, but what do these computers actually compute?

The first mechanical computers of the 17th century were able to perform the four basic arithmetic operations: addition, subtraction, multiplication and division.

As soon as a computer is able to perform addition, it is also able to perform the other three operations, which can be broken down, in multiple steps, into additions of values.
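This reduction can be sketched in a few lines of Python - illustrative only, since real hardware uses two's complement and shift-add circuits rather than loops:

```python
def sub(a, b):
    # subtraction as addition of the negation
    return a + (-b)

def mul(a, b):
    # multiplication as repeated addition
    result = 0
    for _ in range(abs(b)):
        result = result + a
    return result if b >= 0 else -result

def div(a, b):
    # integer division as repeated subtraction
    if b == 0:
        raise ZeroDivisionError("division by zero")
    quotient, remainder = 0, abs(a)
    while remainder >= abs(b):
        remainder = sub(remainder, abs(b))
        quotient += 1
    return quotient if (a >= 0) == (b >= 0) else -quotient
```

Slow, but it shows the principle: one primitive operation suffices.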

Nowadays computers are binary, meaning they compute with base 2: zeros and ones, true and false, power on and power off.

For this, transistors are used; they work like relays and are coupled together to form logical circuits, which perform the actual computation.

The Z3 (1941) had 600 relays for computation, the 6502 chip (1975) had about 3500 transistors, and nowadays CPUs (2018) have billions of them.

So, all these funny programs out there are broken down into simple arithmetic and logical operations.

To perform such magic, some math is needed.

George Boole introduced the Boolean Algebra in 1847, with the three basic logical components: the AND, OR and NOT gates. With these simple gates, logical circuits can be built to perform the addition of values.
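As a minimal sketch (in Python rather than in silicon), here is an adder built only from AND, OR and NOT gates, chained into a ripple-carry circuit:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR expressed through the three basic gates:
    # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    # adds three single bits, returns (sum bit, carry-out bit)
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x, y, bits=8):
    # feed each bit pair through a full adder, rippling the carry
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

The same wiring, done in transistors instead of function calls, is what sits inside every ALU.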

Alan Turing introduced the Turing-Machine, a mathematical computer, in 1936, and with the Church-Turing-Thesis it was shown that everything that can be effectively computed (by a mathematician using pen and paper) can also be computed by a Turing-Machine.
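A Turing-Machine is easy to simulate. The following minimal Python sketch (state names and tape encoding are my own choice) runs a small rule table that increments a binary number:

```python
def run(tape, rules, state="scan", halt="done", blank="_"):
    # a Turing-Machine: a tape, a head position, a state,
    # and a rule table (state, read) -> (next state, write, move)
    tape, pos = list(tape), 0
    while state != halt:
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if pos < 0:                  # extend the tape to the left
            tape.insert(0, blank); pos = 0
        elif pos >= len(tape):       # extend the tape to the right
            tape.append(blank)
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# increment a binary number: scan right to the end,
# then carry 1s leftward until a 0 (or a blank) is found
increment = {
    ("scan",  "0"): ("scan",  "0", "R"),
    ("scan",  "1"): ("scan",  "1", "R"),
    ("scan",  "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done",  "1", "L"),
    ("carry", "_"): ("done",  "1", "L"),
}
```

For example, run("1011", increment) turns 11 into 12, i.e. "1100".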

With the help of the Turing-Machine it was possible to define problems and write algorithms for solving them. With the Boolean Algebra it was possible to build binary computers to run these problem-solving algorithms.

So, in short, computers can compute everything that our math is able to describe.


Haha, we would live in another world if that were the whole story.

Of course, the available processing power and memory limit the actual computation of problem-solving algorithms.

But besides the technical limitation, there is a mathematical one: some mathematical problems are simply not decidable - the famous "Entscheidungsproblem".

Mathematicians are able to define problems which cannot be solved by running algorithms on computers.

Turing showed that even with an Oracle-Machine there will be some limitations, and some scientists believe that only with real Quantum-Computers will we be able to build Hyper-Turing-Machines...
