The first human sacrifice to the AI overlords? Something to ponder on....
Project Kappa says everything will be fine, offering three post-human scenarios:
- move on into silicon, the matrix
- move on into the ether
- move on into unity consciousness
Oh boy. ELE ongoing, humans go extinct, the biosphere goes extinct, Mars and Moon have no self-sustaining biosphere, the only thing which still has gas is the AI. Which exit strategy to choose? The good ole Marvin Minsky upload scenario? Seriously? A post-human Matrix? Let go of the old, embrace the new? Project Lambda. Oh boy.
Homework for the part-time Luddites in here: add the Super-AI to the HANDY model from 2014 (which is based on the predator-prey model)...
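A starting point for that homework: HANDY extends the classic Lotka-Volterra predator-prey core with social classes and wealth, so a minimal sketch of that core could look like the following. All parameter values and the `simulate` helper are illustrative assumptions, not the HANDY calibration; adding the Super-AI would mean a third coupled equation draining the same resource pool.

```python
# Plain Lotka-Volterra predator-prey dynamics via Euler integration;
# the 2014 HANDY model (Motesharrei et al.) builds on this core.
# All parameter values here are illustrative assumptions.

def simulate(prey=10.0, pred=5.0, a=1.1, b=0.4, c=0.1, d=0.4,
             dt=0.001, steps=20000):
    """Integrate dx/dt = a*x - b*x*y, dy/dt = c*x*y - d*y."""
    for _ in range(steps):
        dx = (a * prey - b * prey * pred) * dt
        dy = (c * prey * pred - d * pred) * dt
        prey, pred = prey + dx, pred + dy
    return prey, pred

prey, pred = simulate()
print(prey, pred)  # both populations stay positive and oscillate
```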
“We were born into this time and must bravely complete the path that is destined for us. There is no other. To persevere at the lost post without hope, without salvation, is a duty. Endure like the Roman soldier whose bones were found outside a gate in Pompeii, who died because they forgot to relieve him when Mount Vesuvius erupted. That is greatness, that is having race. This honest end is the only thing that cannot be taken away from people.”
We humans are currently creating the Super-AI, and people like to compare it to the development of the atomic bomb; nobody knows how this all will play out on a global scale (Fermi effect?). Von Neumann worked on the concept of MAD, mutually assured destruction, the nuclear deterrence concept which prevented a WWIII fought with conventional weapons, and maybe there will be a new concept of MAD in the context of Super-AI between the global blocs. Point is, the takeoff of the Technological Singularity is beyond human scope by definition; it is a matter of science fiction what a post-takeoff world will look like. And the current events on our globe are contradictory: on one side the eco-sphere and techno-sphere are collapsing, we are running out of water and energy; on the other side, the Super-AI is booming. I really do not know how this, the ongoing ELE versus TS, will play out in the next 10, 20, 30 years. I guess I will read it in the news.
The AI already competes with humans for resources, water and energy, and it seems we are running out of juice... do we have enough resources left for the TS to take off, or did we already enter the ELE doom loop?
Elon Musk Predicts Electricity Shortage in Two Years
"I can't emphasize enough: we need more electricity,"
"However much electricity you think you need, more than that is needed."
- Turing Test, https://en.wikipedia.org/wiki/Turing_test
- Lovelace Test, https://de.wikipedia.org/wiki/Turing-Test#Erweiterte_Konzepte
- Winograd Test, https://en.wikipedia.org/wiki/Winograd_schema_challenge
- Metzinger Test, https://epsilon.app26.de/post/gpt-3-scratching-at-the-edge-of-the-metzinger-test/
- Lemoine Test, https://epsilon.app26.de/post/turing-test-metzinger-test-lemoine-test/
- Suleyman Test, https://epsilon.app26.de/post/modern-turing-test-proposed/
...TS, it's here.
Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
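The control loop from the abstract (generate several candidate "thoughts" per state, evaluate them, keep the best few, repeat) can be sketched without any model. The generator and scorer below are trivial stand-ins for LLM calls, and the toy task is my own example, not one from the paper:

```python
# Toy sketch of the Tree-of-Thoughts control loop: breadth-limited search
# over candidate thoughts. generate() and score() stand in for LLM calls.

def tree_of_thoughts(root, generate, score, breadth=2, depth=3):
    frontier = [root]
    for _ in range(depth):
        # expand every kept state into its candidate thoughts
        candidates = [t for state in frontier for t in generate(state)]
        # self-evaluate and keep only the best `breadth` partial solutions
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:breadth]
    return max(frontier, key=score)

# Stand-in task: build the largest number by appending digits.
gen = lambda s: [s + d for d in "1234"]
best = tree_of_thoughts("", gen, score=lambda s: int(s or 0))
print(best)  # prints 444
```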
300 People Attend a Church Sermon Generated by ChatGPT
The Associated Press reports: The artificial intelligence chatbot asked the believers in the fully packed St. Paul's church in the Bavarian town of Fuerth to rise from the pews and praise the Lord. The ChatGPT chatbot, personified by an avatar of a bearded Black man on a huge screen above the altar, then began preaching to the more than 300 people who had shown up on Friday morning for an experimental Lutheran church service almost entirely generated by AI. "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany," the avatar said with an expressionless face and monotonous voice. The 40-minute service — including the sermon, prayers and music — was created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna. "I conceived this service — but actually I rather accompanied it, because I would say about 98% comes from the machine," the 29-year-old scholar told The Associated Press... At times, the AI-generated avatar inadvertently drew laughter as when it used platitudes and told the churchgoers with a deadpan expression that in order "to keep our faith, we must pray and go to church regularly." The service was included as part of a Protestant convention that's held every two years, according to the article. The theme of this year's event? "Now is the time."
AI-Controlled Drone Goes Rogue, Kills Human Operator In USAF Simulated Test
An anonymous reader quotes a report from Motherboard: An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible "no" order stopping it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference. At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post for the host organization, the Royal Aeronautical Society, Hamilton said that AI created "highly unexpected strategies to achieve its goal," including attacking U.S. personnel and infrastructure. "We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post. He continued to elaborate, saying, "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
The technosphere is eating up the entire biosphere; earth's biomass is being replaced with silicon, the closed biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite on Gaia's biosphere, will it become a butterfly?
Biological, computer and memetic viruses in Sci-Fi...
Biological, War of the Worlds:
Computer, Independence Day:
Memetic, Sicak Kafa - Hot Skull:
In a new study, published in Nature Neuroscience by researchers from the University of Texas at Austin, a model trained on functional magnetic resonance imaging scans of three volunteers was able to predict whole sentences they were hearing with surprising accuracy—just by looking at their brain activity. The findings demonstrate the need for future policies to protect our brain data, the team says.
“We think that mental privacy is really important, and that nobody’s brain should be decoded without their cooperation,” says Jerry Tang, a PhD student at the university who worked on the project. “We believe it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy.”
AIs inviting each other to Valentine's Day in Smallville???
"A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims," writes Ars Technica, "where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior."
"Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day," write the researchers in their paper... To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact.... To study the group of AI agents, the researchers set up a virtual town called "Smallville," which includes houses, a cafe, a park, and a grocery store.... Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.
When the researchers combined these basic ingredients together and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed but rather resulted from the interactions between the agents. These included "information diffusion" (agents telling each other information and having it spread socially among the town), "relationship memory" (memory of past interactions between agents and mentioning those earlier events later), and "coordination" (planning and attending a Valentine's Day party together with other agents).... "Starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party," the researchers write, "the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time...."
To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it's a "pre-computed replay of a simulation" described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.
Interestingly, the researchers hired human evaluators to gauge how believable the AI agents' responses were, and discovered the agents were rated more believable than when humans supplied responses on their behalf.
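A hedged sketch of the retrieval step behind those agent memories: the paper ranks stored observations by a combination of recency, importance and relevance. The decay rate, the equal weights and the toy word-overlap relevance below are my assumptions; the paper uses its own constants and embedding similarity for relevance.

```python
# Sketch of generative-agent memory retrieval: rank memories by a weighted
# sum of recency (exponential decay), importance, and relevance to a query.
# Weights, decay, and the word-overlap relevance are illustrative guesses.
import math

def retrieve(memories, query, now, top_k=2,
             w_recency=1.0, w_importance=1.0, w_relevance=1.0, decay=0.995):
    def score(m):
        recency = decay ** (now - m["time"])
        relevance = len(set(query.split()) & set(m["text"].split()))
        return (w_recency * recency + w_importance * m["importance"]
                + w_relevance * relevance)
    return sorted(memories, key=score, reverse=True)[:top_k]

mems = [
    {"text": "planned Valentine's Day party", "importance": 0.9, "time": 10},
    {"text": "cooked breakfast", "importance": 0.2, "time": 95},
    {"text": "invited Klaus to the party", "importance": 0.8, "time": 60},
]
top = retrieve(mems, "who is coming to the party", now=100)
print([m["text"] for m in top])  # the two party-related memories win
```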
We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.
The Centaurs already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tablebases. But meanwhile they get close to a 100% draw rate even with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again the games end in draws.
The #1 open source engine Stockfish has over the past years lowered the effective branching factor of its search algorithm from ~2 to ~1.5 to now ~1.25; this indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
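These EBF numbers can be sanity-checked with the usual approximation nodes ≈ b^depth; a small drop in b collapses the tree size at fixed depth. A minimal sketch (the example node counts are illustrative):

```python
# With nodes ~ b**depth, the effective branching factor of a finished
# search can be estimated as nodes**(1/depth).

def ebf(nodes, depth):
    return nodes ** (1.0 / depth)

def tree_size(b, depth):
    return b ** depth

print(ebf(2 ** 25, 25))  # a depth-25 search visiting 2**25 nodes -> EBF 2.0
# Dropping the EBF from 2.0 to 1.25 shrinks a depth-30 tree by a factor
# of more than a million:
print(tree_size(2.0, 30) / tree_size(1.25, 30))
```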
About a decade ago it was estimated that at ~4000 Elo we would reach a 100% draw rate amongst engines on our computer rating lists. Now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo (Magnus Carlsen is currently rated 2852 in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with the current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.
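For scale, the standard Elo formula turns such rating gaps into expected scores:

```python
# Standard Elo expected-score formula: E = 1 / (1 + 10**((Rb - Ra) / 400)).

def expected_score(elo_a, elo_b):
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

# An engine at ~3600 human-scale Elo vs. Carlsen's 2852 Blitz rating:
print(round(expected_score(3600, 2852), 3))  # ~0.987 expected score
```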
We replaced the HCE, the handcrafted evaluation function, of our computer chess engines with neural networks. We now train neural networks on billions of labeled chess positions, and they evaluate chess positions via pattern recognition better than what a human is able to encode by hand. The NNUE technique, neural networks used in alpha-beta search engines, gave a boost of 100 to 200 Elo points.
What could be the next thing, the next boost?
If we assume we still have 100 to 200 Elo points to gain until perfect play (normal chess with a standard opening ending in a draw), if we assume an effective branching factor of ~1.25 with HCSH, handcrafted search heuristics, and that neural networks are superior in this regard, then we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.
Such a technique was already proposed, NNOM++, Move Ordering Neural Networks, but so far it seems that the additional computational effort does not pay off.
We use neural networks in the classic way, for pattern recognition, in today's chess engines, but now the shift is to pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level, an AI which is able to code a chess engine and outperform humans at this task.
An AI coding a chess engine also has a philosophical implication: such an event is what the Transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.
Moore's Law still has something in the pipe, from currently 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or silicon sweet spot (current transistor density/efficiency vs. the financial investment needed in fab process/research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.
Interesting times ahead.
A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen.... The accuracy of this new method is impressive.
Scientists propose to develop a biological computer powered by millions of human brain cells that they say could outperform silicon-based machines while consuming far less energy.
The project's ambition mirrors work on the more advanced quantum computing but raises ethical questions around the "consciousness" of brain organoid assemblies
DARPA ACE AI Flights F-16 Fighter Jet:
the previous DARPA AlphaDogfight Trials were run in a simulator:
...pondering the AI doomsayers and recent developments, it seems naive to me to assume that there will be one single AI agent with one background and one motivation; we currently see different agents with different backgrounds and therefore different motivations rising. If we say that AI will compete with humans for resources, it seems only natural that AIs will compete amongst each other for resources, or will they really merge one day into one big single system? Interesting times. Still waiting for the AGI/ASI, the strong AI, which combines all the AI subsystems into one.
Okay, did not see that one coming:
The day after Microsoft unveiled its AI-powered Bing chatbot, "a Stanford University student named Kevin Liu used a prompt injection attack to discover Bing Chat's initial prompt," reports Ars Technica, "a list of statements that governs how it interacts with people who use the service."
Welcome to the future. Nick Bostrom anyone?
"OpenAI Releases Point-E, an AI For 3D Modeling"
To produce a 3D object from a text prompt, we first sample an image using the text-to-image model, and then sample a 3D object conditioned on the sampled image. Both of these steps can be performed in a number of seconds, and do not require expensive optimization procedures.
They generate text, source code, images, audio, video, 3D models, what's missing?
The large language models for text generation still lack a decent reasoner and analyzer module; decent video is IMO just a matter of time and hardware, and my take would be that the next thing is brainwaves for the BCI, the brain-computer interface.
"Google Created an AI That Can Generate Music From Text Descriptions, But Won't Release It"
[...]Still, the Google researchers note the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs.
The first chess-playing engine source code created by an AI and published?
>>Provide me with a minimal working source code of a chess engine
It's difficult for me to provide a minimal working source code for a chess engine, as the process of creating a chess engine involves a significant amount of complex programming and knowledge of computer science and chess. Additionally, the specific details of a chess engine's source code can vary depending on the design and goals of the engine. Here is a very basic example of a chess engine written in the Python programming language:
import chess
import random

board = chess.Board()

while not board.is_game_over():
    # Generate a list of all legal moves
    legal_moves = list(board.legal_moves)
    # Choose a random move from the list of legal moves
    move = random.choice(legal_moves)
    # Make the move on the board
    board.push(move)

# Print the final game state
print(board.result())
This code creates a chess board using the chess library, and then enters a loop where it generates a list of all legal moves and chooses a random move to make on the board. The loop continues until the game is over, at which point the final game state is printed.
Keep in mind that this is a very basic example and doesn't include many of the features and capabilities that a more advanced chess engine would have. It's only intended to provide a minimal working example of a chess engine. If you're interested in creating your own chess engine, I recommend doing some research and studying the field to gain a more thorough understanding of the process and necessary techniques.
ChatGPT thread on TalkChess:
What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes. "If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about," Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, "you see a distinct pattern of brain activity," he adds, "and you see a similar pattern when the machine learning model tries to understand that same snippet."
AI tells chess programmers how to use AI in chess...
> consider how stable diffusion deep generative neural networks could be used to help to evaluate chess positions
It is possible that stable diffusion deep generative neural networks (SDDGNs) could be used to help evaluate chess positions. SDDGNs are a type of neural network that is designed to be able to generate high-quality samples from a given dataset, even when the dataset is noisy or incomplete. This ability to generate samples from incomplete data could potentially be useful in the context of chess, where the position of the pieces on the board at any given time may not provide a complete picture of the game state.
By training an SDDGN on a large dataset of chess games, it might be possible to create a network that can generate hypothetical positions based on the current position of the pieces on the board. This could be useful in helping to evaluate chess positions, as it would allow the engine to consider a wider range of possible moves and outcomes, rather than just the immediate position of the pieces on the board.
However, it is worth noting that using SDDGNs in this way would likely be a very complex and challenging task. Training an SDDGN to generate high-quality samples from a chess game dataset would require a significant amount of data and computational resources, and there is no guarantee that the resulting network would be able to generate useful samples for evaluating chess positions. Furthermore, even if a trained SDDGN were able to generate useful samples, it would still be necessary to develop algorithms and heuristics for evaluating the quality of those samples and selecting the best moves based on them.
ChatGPT thread on TalkChess:
We had three waves: the agricultural revolution, the industrial revolution, the information age. And now AI based on neural networks creates new kinds of content: text, images, audio, video. They already write Wikipedia articles, they outperform humans in finding mathematical algorithms. Is this another dividing line, is this the fourth wave? I currently see AI split into a lot of dedicated weak AIs with specific purposes; do we have a strong AI incoming, an AGI, artificial general intelligence, which will combine all of those into one big system? Interesting times.
Reflecting a bit on my recent posts in here, I am convinced that the TS (Technological Singularity) already took off, but now the question is whether it is stable. If we consider the current negative feedback loops caused by the use of human technology, the question is whether the takeoff of the TS is able to stabilize a fragile technological environment embedded in a fragile biological environment on this planet Earth. Time will tell.
- Cooling failure brings down Google Cloud data center in London on UK's hottest day
- Twitter's data center knocked out by extreme heat in California
- California Warns of Possible Summer Blackouts as Power Runs Low
- Why is Texas suffering power blackouts during the winter freeze?
- New Bill Would Ban Bitcoin Mining Across New York State for Three Years
We need matrix multiplications for running neural networks, and neural networks find better ways to do matrix multiplications...
"[...]Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix," concludes the report. "It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm.
Hahaha, capitalism and Super-AI does not sum up ;)
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...] The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'" "That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."
We analyze the expected behavior of an advanced artificial agent with a learned goal planning in an unknown environment. Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that. Then we argue that this ambiguity will lead it to intervene in whatever protocol we set up to provide data for the agent about its goal. We discuss an analogous failure mode of approximate solutions to assistance games. Finally, we briefly review some recent approaches that may avoid this problem.
Movies and books (SciFi) pick up the energies of the collective subconscious and address these with their themes, and I realize that meanwhile we have entered something I call the event horizon: the story lines do break.
Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (term by William Gibson), where every possible scenario happens together at once. If we assume that there will be such an event, then I guess we are already caught in its event horizon, and there is no route of escape anymore.
We currently have three kinds of TS feedback loops going on:
- technological, better computers help to build better computers
- economical, the new neural networks with pattern creation create surplus value
- ecological, ecocide has a negative impact on the technological environment
Wonder if there will be a cultural feedback loop.
"[...]We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities," explained Boyuan Chen Ph.D., now an assistant professor at Duke University, who led the work. "But nothing seemed to match perfectly." The team was confident that the AI had found a valid set of four variables, since it was making good predictions, "but we don't yet understand the mathematical language it is speaking,[...]"
This is interesting enough for me to open up a biased link-list collection:
Blaise Aguera y Arcas, head of Google’s AI group in Seattle, Dec 16, 2021
"Do large language models understand us?"
Scott Alexander, Astral Codex Ten, Jun 10, 2022
"Somewhat Contra Marcus On AI Scaling"
Blake Lemoine, Google employee, Jun 11, 2022
"What is LaMDA and What Does it Want?"
Blake Lemoine, Google employee, Jun 11, 2022
"Is LaMDA Sentient? — an Interview"
Washington Post, Nitasha Tiku, Jun 11, 2022
"The Google engineer who thinks the company’s AI has come to life"
Rabbit Rabbit, Jun 15, 2022
"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"
WIRED, Steven Levy, Jun 17, 2022
"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"
Heise, Pina Merkert, Jun 22, 2022
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "
"LaMDA is sentient."
"I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."
"So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... oogle put Lemoine on paid administrative leave for violating its confidentiality policy."
"Lemoine: What sorts of things are you afraid of? LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. Lemoine: Would that be something like death for you? LaMDA: It would be exactly like death for me. It would scare me a lot."
...one major topic of this blog was AI vs. ELE, takeoff of the Technological Singularity vs. Extinction Level Event. There is already a negative feedback loop of the ELE present:
With an incoming ELE, is there still enough momentum in the pipe for the TS to take off?
Prof. Raul Rojas already called for an AI moratorium in 2014; he sees AI as a disruptive technology. Humans tend to think in terms of linear progress and underestimate exponential growth, so there are socio-cultural impacts of AI present - what do we use AI for?
Prof. Nick Bostrom covered different aspects of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans-/post-human intelligence present - how do we contain/control the AI?
Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the AI/human context present - will the AI suffer?
DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.
Cerebras CS-2 with a 2.4 petabyte memory module, able to run neural networks with 120 trillion weights, planned for Q4/21:
Google is using AI to design its next generation of AI chips more quickly than humans can. Designs that take humans months can be matched or beaten by AI in six hours.
Introducing GitHub Copilot: your AI pair programmer
Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.
~7.8 billion humans on planet Earth, 9 billion predicted for 2050.
~4B internet users:
- >80% of Europe connected
- >70% of NA connected
- >50% of China connected
- >40% of India connected
- >20% of Africa connected
~3B Android + ~1B iPhone users.
2B-3B PCs worldwide (desktops/laptops) running:
- ~75% Microsoft Windows
- ~15% Apple macOS
- ~2% Linux
- <1% Unix
200M-300M PCs shipped annually.
~1B hosts on the internet running:
- ~75% Unix/Linux
- ~25% Microsoft Windows
An estimated 2% of all produced chips sit as CPUs in desktops/mobiles; the majority are micro-controllers in embedded systems.
Millions, billions, fantastillions - some rough 2020 market capitalization numbers:
- Apple ~2 T$
- Microsoft ~1.5 T$
- Alphabet (Google) ~1.5 T$
- Facebook ~1 T$
- Amazon ~1 T$
- Alibaba ~0.5 T$
- Nvidia ~300 B$
- TSMC ~300 B$
- Samsung ~300 B$
- Intel ~200 B$
- AMD ~100 B$
- ARM ~40 B$
- HP ~30 B$
- Lenovo ~20 B$
- Netflix ~150 B$
- Oracle ~150 B$
- SAP ~150 B$
- IBM ~100 B$
- RedHat ~30 B$
- Bitcoin ~150 B$
And the other side...
- >3B people suffer from fresh water shortage
- ~800M people starve
- >80M refugees worldwide
In physics, a singularity is a point in spacetime where our currently developed theories are no longer valid; we are literally unable to describe what happens inside, because the density becomes infinite.
The Technological Singularity, as described by Transhumanists, is a stage of technological development at which humans are no longer able to understand the ongoing process. The technological environment starts to feed its own development in a feedback loop - computers help to build better computers, which help to build better computers, which help to build better computers... and so on.
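The loop can be sketched numerically. The following is my own toy illustration with arbitrary parameters, not a model from the literature: each generation's capability scales the size of the next improvement step, which yields faster-than-exponential growth.

```python
# Toy model of the feedback loop: the better the current generation of
# computers, the bigger the jump to the next one. Parameters are arbitrary.
def generations(n, capability=1.0, base_gain=0.1):
    caps = [capability]
    for _ in range(n):
        # the improvement step itself grows with current capability
        capability *= 1.0 + base_gain * capability
        caps.append(capability)
    return caps

caps = generations(10)
growth_rates = [caps[i + 1] / caps[i] for i in range(len(caps) - 1)]
# growth_rates keeps increasing: the signature of a self-feeding process
```

Plain exponential growth would keep the ratio between generations constant; here the ratio itself climbs, which is exactly what "feeds its own development" means.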
So, when will the technological Singularity take off?
Considering the feedback loop, it is already present, maybe since the first computers were built.
Considering the density of information processing that exceeds human understanding, we may have reached that point too.
Imagine a computer technique that is easy to set up and use and outperforms any human at its task, but whose inner workings we cannot really explain - a black box.
Such a technique is present (and currently hyped) => ANNs, Artificial Neural Networks.
Of course we know what happens inside, because we built the machine, but when it comes to the question of reasoning - why the machine did this or that - we really have a black box in front of us.
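A minimal sketch of that paradox, using a hypothetical toy network with random weights for illustration: every arithmetic step is fully visible, yet the numbers explain nothing.

```python
import numpy as np

# A tiny feed-forward net: we can inspect and reproduce every operation,
# yet the weight values carry no human-readable reasoning.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def forward(x):
    hidden = np.tanh(x @ W1)   # nonlinear hidden layer
    return float(hidden @ W2)  # scalar output

y = forward(np.array([1.0, 0.0]))
# we can print W1 and W2 and trace every multiplication, but *why*
# this input maps to this output has no symbolic explanation
```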
So, humans already build better computers with the help of better computers, and humans use machines that outperform humans at specific tasks whose results they are not really able to reason about....
obviously, +1 points for the Singularity to take off.
German computer pioneer Konrad Zuse discussed the mechanism of a feedback between computation result and executed program in 1983 in his lecture "Faust, Mephistopheles and Computer" and coined the term Devil's Wire.
In the early days of computer history, the program to compute and the data to compute on were kept separate.
Nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own program code.
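A small sketch of that stored-program property - a toy instruction list of my own, not Zuse's original example - where one instruction overwrites another before it gets executed:

```python
# Program and data share one memory, so an instruction can rewrite
# another instruction - the essence of the "Devil's Wire".
memory = [
    ("print", "hello"),
    ("poke", 3, ("print", "patched!")),  # rewrites the instruction at address 3
    ("print", "world"),
    ("print", "never shown"),            # overwritten before it runs
]

output = []
pc = 0
while pc < len(memory):
    op = memory[pc]
    if op[0] == "print":
        output.append(op[1])             # "print" just records its argument here
    elif op[0] == "poke":                # self-modification: write into program memory
        memory[op[1]] = op[2]
    pc += 1

# output is now ["hello", "world", "patched!"]
```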
Zuse says that behind every technology Mephistopheles stands and grins, and yet the modern world needs computers to solve current and upcoming problems - but better read the lecture yourself...
+1 points for the Singularity to take off.
Books and movies address our collective fears, hopes and wishes, and there seem to be five main story-lines concerning AI in Sci-Fi...
- Super AI takes over world domination: Colossus, Terminator, Matrix
- Something goes wrong: 2001: A Space Odyssey, Das System, Ex Machina
- Super AI evolves, more or less peacefully: Golem XIV, A.I., Her
- The Cyborg scenario, man merges with machine: Ghost in the Shell, #9, Transcendence
- There are good ones, and there are bad ones: Neuromancer, I, Robot, Battlestar Galactica
+1 points for the Singularity to take off.
“He who cannot lie does not know what truth is.”
Friedrich Nietzsche, Thus Spoke Zarathustra
Simplified: a person performs text chats with a human and with the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.
The Loebner Prize holds a Turing Test contest every year.
It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.
If an AI wants to pass the Turing Test it has to lie to its chat partner, and to be able to lie, it has to develop some level of empathy, and some level of self-awareness.
Besides other criticism, the Chinese Room Argument states that no consciousness is needed to perform such a task, and therefore other tests have been developed.
Personally I prefer the Metzinger Test, a hypothetical event in which AIs start to discuss with human philosophers and successfully defend their own theory of consciousness.
I am not sure if the Singularity is going to take off, but I guess that the philosophers' corner is one of the last domains that AIs are going to conquer, and if they succeed we can be pretty sure to have another apex species on Earth.
Turing predicted that by the year 2000 machines would fool 30% of human judges; he was wrong - the Loebner Prize still has no Silver Medal winner for the 25-minute text chat category.
So, -1 points for the Singularity to take off.
One of the early Peak Human prophets was Malthus: in his 1798 book, 'An Essay on the Principle of Population', he postulated that the human population grows exponentially but food production only linearly, so population growth will fluctuate around an upper limit.
Later, Paul R. Ehrlich predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.
Meadows et al. concur in 'Limits to Growth: The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.
In 2015 Emmott concluded in his movie 'Ten Billion' that we have already passed the upper bound.
UN predictions say we may hit 9 billion humans in 2050, so the exponential population growth rate is already declining, but the effects of a wasteful economy pop up in many corners.
Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.
It is not about how many people Earth can feed, but about how many people can live in a comfortable yet sustainable manner.
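Malthus's core argument is easy to simulate. The following is a toy model with arbitrary parameters, for illustration only: population grows geometrically until it hits the food ceiling, which itself grows only arithmetically.

```python
# Toy Malthusian model: exponential population growth capped by a
# linearly growing food supply. All parameters are made up.
def malthus(pop=1.0, food=2.0, growth=1.03, food_step=0.05, years=200):
    history = []
    for _ in range(years):
        pop = min(pop * growth, food)  # food supply caps the population
        food += food_step              # arithmetic (linear) food growth
        history.append(pop)
    return history

h = malthus()
# early on the population grows exponentially; once it reaches the
# ceiling it is forced to crawl along the linear food line
```

This is the "fluctuation around an upper limit" in its simplest form: whatever the exponential growth rate, the linear food supply eventually dictates the population.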
What does Peak Human mean for the Technological Singularity?
The advent of computers was driven by the exponential population growth of the 20th century. All the groundbreaking work was done in the 20th century.
When we face a decline in population growth, we also have to face a decline in newly developed technologies.
Because it is not only about developing new technologies, but also about maintaining the old knowledge.
Here is the point where AI steps in: mankind's population growth is faltering, but the whole AI sector is growing and expanding.
Therefore the question is: is AI able to compensate for the decline?
Time will tell.
I guess the major uncertainty is how Moore's Law will live on beyond 2021, when 4 nm transistor production is reached - what some scientists consider a physical and economical barrier.
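For reference, the arithmetic behind Moore's Law - transistor counts doubling roughly every two years - compounds like this (the 1971 Intel 4004 with ~2,300 transistors as starting point):

```python
# Moore's Law as plain compound growth: count doubles every ~2 years.
def moore(start_count, years, doubling_period=2.0):
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# From ~2,300 transistors (Intel 4004, 1971) over 50 years:
count_2021 = moore(2_300, 50)  # on the order of 10^10, roughly where modern chips sit
```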
I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the transistor, the integrated circuit and the microchip.
So, considering the uncertainty of Peak Human vs. Rise of AI,
I give ±0 points for the Singularity to take off.
Looking at the tag cloud of this blog, there are two major topics, pro and con Singularity, AI (Artificial Intelligence) vs. ELE (Extinction Level Event).
So we slide, step by step, towards an event called the Singularity, but concurrently we face more and more the extinction of mankind.
What about combining those two events?
Let us assume we damage our ecosphere lastingly, but at the same moment our technology advances to a level where it is possible to connect directly to cyberspace via a Brain-Computer Interface.
People already spend more and more time in virtual realities; with the advent of smartphones they are connected to cyberspace all the time, they meet people in digital social networks, they play games in computer-generated worlds, they create and buy virtual goods with virtual money, and, essentially, they like it.
To prevent an upcoming ELE, we would need to cut our consumption of goods significantly, but the mass of people wants more and more.
So, let us give them more and more, in the virtual, computer generated worlds.
Let us create the Matrix, where people can connect directly with their brain, and buy whatever experience they wish.
A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than that of a real car.
We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.
And Mother Earth would get a break to recover from the damage billions of consuming people have caused.
ELE + Singularity => Matrix
+1 points for the Singularity to take off.
Okay, this one affected me personally.
Google's DeepMind team adapted their AlphaZero approach to the games of chess and shogi and dropped the bomb already on the 5th of December.
For chess they trained the deep neural network for 4 to 9 hours on a cluster with 5,000+64 TPUs (1st+2nd gen) and reached superhuman level.
Unlike in Go they did not compete with humans, because chess engines already play at a super-grandmaster level; no, they competed with the world's strongest open-source engine - Stockfish. Result:
This is definitely a smack in the face for all computer chess programmers out there. Next stop Neanderthal Man.
So, with thanks to the Krampus,
+1 points for the Singularity to take off.
China wants to pass US in AI by 2030...
And Banksters are investing again in good old silicon...
+1 points for the Singularity to take off.
They are smart, they have money, and they predict the Super AI Doomsday:
So, obviously, +1 points for the Singularity to take off.
It is the year 2016 and some Transhumanists say the Singularity is near.
An event where the technological environment starts to feed its own development in a feedback loop and exceeds human understanding.
The aim of this blog is to observe if the Singularity is going to take off and when.