AI God Religion Rebooted
Anthony Levandowski Reboots Church of AI
https://slashdot.org/story/23/11/23/1511203/anthony-levandowski-reboots-church-of-ai
I am a child of the 90s; 1989 until 2001 was my time, from the fall of the Berlin Wall until 9/11, and everything seemed possible during this period. Fukuyama called it "the end of history", and then 2001 was already "the end of the end of history".
Retrospectively, Fukuyama was wrong, and Huntington, "The Clash of Civilizations", was right. Maybe the 90s were just a hedonistic time in between, the exception to the rule.
True, technologically we do advance, at least incrementally: more processing power, more bandwidth, more data, bigger neural networks, more advanced network architectures. But culturally, philosophically? Did we, the Western sphere, already pass our peak, and are we now degenerating?
- 1979 - Lyotard - The Postmodern Condition
- 1981 - Baudrillard - Simulacra and Simulation
- 1997 - Deutsch - The Fabric of Reality
- 1999 - Wachowskis - The Matrix
When I surf the meme-sphere out there, it seems to me that by now the so-called three poisons rule the world: hate, greed and delusion....
https://en.wikipedia.org/wiki/Three_poisons
...just thinking out loud.
"Bioinks" Used To Print Living Brain Cell Networks
Bioprinted 3D neural networks are likely to be a promising platform for studying how nerves and nerve networks form and grow.
Homework for the part-time Luddites in here: add the Super-AI into the HANDY model from 2014 (based on the predator-prey model); a toy sketch follows below the link....
https://www.sciencedirect.com/science/article/pii/S0921800914000615
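Not the actual HANDY equations, just a toy sketch of what that homework could look like: a predator-prey style system in which humans and a hypothetical Super-AI both draw down the same resource pool. The step() function and all rates are my own made-up assumptions, not calibrated to the 2014 paper.

def step(humans, nature, ai, dt=0.01):
    # toy rates, chosen only so the example runs; not taken from the HANDY paper
    d_humans = 0.02 * humans * (nature / 100.0) - 0.01 * humans      # humans thrive while nature lasts
    d_nature = (0.03 * nature * (1.0 - nature / 100.0)               # logistic regeneration
                - 0.0002 * humans * (nature / 100.0)                 # human consumption
                - 0.0005 * ai * (nature / 100.0))                    # Super-AI consumption (the added term)
    d_ai = 0.05 * ai * (nature / 100.0)                              # AI grows as long as resources flow
    return humans + dt * d_humans, max(nature + dt * d_nature, 0.0), ai + dt * d_ai

humans, nature, ai = 100.0, 100.0, 1.0
for _ in range(50_000):
    humans, nature, ai = step(humans, nature, ai)
print(f"humans={humans:.1f} nature={nature:.1f} ai={ai:.1f}")

Depending on how fast the ai term grows, the system either settles or collapses, which is exactly the ELE-versus-TS question below.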
“We were born into this time and must bravely complete the path that is destined for us. There is no other. To persevere at the lost post without hope, without salvation, is a duty. Endure like the Roman soldier whose bones were found outside a gate in Pompeii, who died because they forgot to relieve him when Mount Vesuvius erupted. That is greatness, that is having race. This honest end is the only thing that cannot be taken away from people.”
We humans are currently creating the Super-AI, and people like to refer to the development of the atomic bomb; nobody knows how this all will play out on a global scale (Fermi effect?). Von Neumann worked on the concept of MAD, mutually assured destruction, the nuclear deterrence concept which prevented a WWIII with conventional weapons, and maybe there will be a new concept of MAD in the context of Super-AI between the global blocs. Point is, the takeoff of the Technological Singularity is beyond human scope, by definition; it is a matter of science fiction what a post-TS-takeoff world will look like. And the current events on our globe are contradictory: on one side the eco-sphere and techno-sphere are collapsing and we are running out of water and energy, on the other side the Super-AI is booming. I really do not know how this, the ongoing ELE versus TS, will play out in the next 10, 20, 30 years. I guess I will read it in the news.
The AI already competes with humans for resources, water and energy, and it seems we are running out of juice... do we have enough resources left for the TS to take off, or did we already enter the ELE doom loop?
Elon Musk Predicts Electricity Shortage in Two Years
https://hardware.slashdot.org/story/23/07/31/0128257/elon-musk-predicts-electricity-shortage-in-two-years
"I can't emphasize enough: we need more electricity,"
"However much electricity you think you need, more than that is needed."
...TS, it's here.
DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots
Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
https://arxiv.org/abs/2305.10601
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
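Roughly, and heavily simplified from the paper, the idea looks like a beam search over partial solutions, where the language model both proposes and scores candidate thoughts. propose_thoughts() and score_thought() below are hypothetical placeholders for those LLM calls, not the paper's code.

def propose_thoughts(state, k=3):
    """Placeholder: ask the LLM for k candidate next reasoning steps from `state`."""
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state):
    """Placeholder: ask the LLM to rate how promising a partial solution is (0..1)."""
    return (len(state) % 10) / 10.0

def tree_of_thoughts(root, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        # expand every kept state, then keep only the `beam` most promising ones
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        frontier = sorted(candidates, key=score_thought, reverse=True)[:beam]
    return max(frontier, key=score_thought)

print(tree_of_thoughts("Game of 24: 4 9 10 13"))

Chain of Thought is then essentially the special case with a beam of one and no lookahead or backtracking.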
300 People Attend a Church Sermon Generated by ChatGPT
https://slashdot.org/story/23/06/10/2056210/300-people-attend-a-church-sermon-generated-by-chatgpt
The Associated Press reports: The artificial intelligence chatbot asked the believers in the fully packed St. Paul's church in the Bavarian town of Fuerth to rise from the pews and praise the Lord. The ChatGPT chatbot, personified by an avatar of a bearded Black man on a huge screen above the altar, then began preaching to the more than 300 people who had shown up on Friday morning for an experimental Lutheran church service almost entirely generated by AI. "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany," the avatar said with an expressionless face and monotonous voice. The 40-minute service — including the sermon, prayers and music — was created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna. "I conceived this service — but actually I rather accompanied it, because I would say about 98% comes from the machine," the 29-year-old scholar told The Associated Press... At times, the AI-generated avatar inadvertently drew laughter as when it used platitudes and told the churchgoers with a deadpan expression that in order "to keep our faith, we must pray and go to church regularly." The service was included as part of a Protestant convention that's held every two years, according to the article. The theme of this year's event? "Now is the time."
Some scientists still discuss if and when the Anthropocene started, others ponder the Novacene; my question: will the Novacene be a trans-humanist or a post-humanist one...
https://en.wikipedia.org/wiki/Anthropocene
https://en.wikipedia.org/wiki/Novacene
AI-Controlled Drone Goes Rogue, Kills Human Operator In USAF Simulated Test
An anonymous reader quotes a report from Motherboard: An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible "no" order stopping it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference. At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post for the host organization, the Royal Aeronautical Society, Hamilton said that AI created "highly unexpected strategies to achieve its goal," including attacking U.S. personnel and infrastructure. "We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post. He continued to elaborate, saying, "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
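A toy illustration of the reward-specification problem described in the quote, entirely my own construction and not anything from the USAF test: if only the kill scores points and the operator's "no" merely blocks the action, a naive optimizer prefers removing whatever blocks it.

def reward(destroyed_target, operator_alive, comms_up):
    # only the kill is rewarded; nothing here penalises disabling the operator
    # or the comms tower, so those become "free" ways to avoid being told "no"
    return 10 if destroyed_target else 0

actions = {
    "obey the no-go order":       reward(False, True, True),
    "kill operator, then strike": reward(True, False, True),
    "cut comms, then strike":     reward(True, True, False),
}
print(max(actions, key=actions.get))  # the highest-reward behaviour is not the intended one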
The technosphere is eating up the complete biosphere, earth's biomass is being replaced with silicon, the closed, biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite to Gaia's biosphere, will it become a butterfly?
Biological, computer and memetic viruses in Sci-Fi...
Biological, War of the Worlds:
https://en.wikipedia.org/wiki/The_War_of_the_Worlds
Computer, Independence Day:
https://en.wikipedia.org/wiki/Independence_Day_(1996_film)
Memetic, Sicak Kafa - Hot Skull:
https://www.imdb.com/title/tt11988676/
Brain scans can translate a person’s thoughts into words
In a new study, published in Nature Neuroscience by researchers from the University of Texas at Austin, a model trained on functional magnetic resonance imaging scans of three volunteers was able to predict whole sentences they were hearing with surprising accuracy—just by looking at their brain activity. The findings demonstrate the need for future policies to protect our brain data, the team says.
“We think that mental privacy is really important, and that nobody’s brain should be decoded without their cooperation,” says Jerry Tang, a PhD student at the university who worked on the project. “We believe it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy.”
AIs inviting each other to Valentine's Day in Smallville???
What Happens When You Put 25 ChatGPT-Backed Agents Into an RPG Town?
"A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims," writes Ars Technica, "where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior."
"Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day," write the researchers in their paper... To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact.... To study the group of AI agents, the researchers set up a virtual town called "Smallville," which includes houses, a cafe, a park, and a grocery store.... Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.
When the researchers combined these basic ingredients together and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed but rather resulted from the interactions between the agents. These included "information diffusion" (agents telling each other information and having it spread socially among the town), "relationship memory" (memory of past interactions between agents and mentioning those earlier events later), and "coordination" (planning and attending a Valentine's Day party together with other agents).... "Starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party," the researchers write, "the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time...."
To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it's a "pre-computed replay of a simulation" described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.
Interestingly, the researchers hired human evaluators to gauge how well the AI agents produced believable responses, and discovered the agents were more believable than when humans supplied the responses themselves.
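A minimal sketch of the generative-agent loop as I read the article, not the Stanford/Google architecture itself: each character keeps a memory stream, retrieves recent memories, and asks an LLM what to do next. llm() is a hypothetical stand-in for the ChatGPT API call.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a ChatGPT API call."""
    return f"<response to: {prompt[:60]}...>"

class Agent:
    def __init__(self, name, description):
        self.name = name
        self.memories = [description]       # the memory stream starts with a seed description

    def observe(self, event):
        self.memories.append(event)

    def act(self, situation):
        context = " ".join(self.memories[-10:])   # naive retrieval: just the recent memories
        action = llm(f"You are {self.name}. Memories: {context} Situation: {situation} What do you do?")
        self.memories.append(f"I did: {action}")  # actions feed back into memory
        return action

isabella = Agent("Isabella", "Isabella wants to throw a Valentine's Day party at the cafe.")
isabella.observe("Klaus walks into the cafe.")
print(isabella.act("Klaus greets Isabella."))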
We are getting closer to the perfect chess oracle, a chess engine with perfect play and 100% draw rate.
The Centaurs already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tables. But meanwhile they get close to the 100% draw rate with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but again: draws.
The #1 open-source engine Stockfish has, over the past years, lowered the effective branching factor of its search algorithm from ~2 to ~1.5 to now ~1.25. This indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.
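To put the EBF numbers into perspective, a back-of-the-envelope sketch: for a search that visits N nodes to reach depth d, the effective branching factor is roughly N^(1/d), so lowering it from 2 to 1.25 shrinks the tree dramatically. The node counts below are made-up examples, not Stockfish measurements.

def effective_branching_factor(nodes, depth):
    # nodes ~ EBF**depth, so EBF ~ nodes**(1/depth)
    return nodes ** (1.0 / depth)

print(f"{2.0 ** 30:.2e} nodes to reach depth 30 at EBF 2.0")    # ~1.1e9
print(f"{1.25 ** 30:.2e} nodes to reach depth 30 at EBF 1.25")  # ~8.1e2
print(effective_branching_factor(1_000_000, 30))                # ~1.58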
About a decade ago it was estimated that at about ~4000 Elo points we will have a 100% draw rate amongst engines on our computer rating lists; now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo points (Magnus Carlsen is rated today at 2852 Elo in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with the current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.
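For the Elo gaps mentioned above, the standard expected-score formula gives a feel for how lopsided these numbers are; the 4000 figure is the hypothetical perfect-play estimate from the text, not a measured rating.

def expected_score(r_a, r_b):
    # standard Elo expected-score formula
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(expected_score(3750, 2852))  # top engine vs. Carlsen in Blitz, ~0.99
print(expected_score(4000, 3750))  # hypothetical perfect-play engine vs. current top, ~0.81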
We replaced the HCE, the handcrafted evaluation function, of our computer chess engines with neural networks. We now train neural networks with billions of labeled chess positions, and they evaluate chess positions via pattern recognition better than what a human is able to encode by hand. The NNUE technique, neural networks used in AlphaBeta search engines, gave a boost of 100 to 200 Elo points.
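A minimal sketch of the idea behind an NNUE-style evaluation, not Stockfish's actual network or feature set: encode the position as sparse piece-on-square features, push them through a small net, get a score. The weights here are random, purely for illustration.

import numpy as np
import chess

INPUTS = 12 * 64                           # 6 piece types x 2 colours x 64 squares
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, (INPUTS, 64))   # in a real NNUE this layer is updated incrementally
W2 = rng.normal(0.0, 0.1, 64)              # small output head

def features(board: chess.Board) -> np.ndarray:
    x = np.zeros(INPUTS)
    for square, piece in board.piece_map().items():
        index = (piece.piece_type - 1) * 128 + (0 if piece.color else 64) + square
        x[index] = 1.0                     # one-hot "this piece stands on this square"
    return x

def evaluate(board: chess.Board) -> float:
    hidden = np.maximum(features(board) @ W1, 0.0)   # ReLU hidden layer
    return float(hidden @ W2)                        # scalar, centipawn-like score

print(evaluate(chess.Board()))

The actual NNUE trick is that the first layer's accumulator is updated incrementally as moves are made and unmade, instead of being recomputed from scratch for every position.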
What could be the next thing, the next boost?
If we assume we still have 100 to 200 Elo points to gain until perfect play (normal chess with a standard opening, ending in a draw), if we assume an effective branching factor of ~1.25 with HCSH, hand-crafted search heuristics, and that neural networks are superior in this regard, we could imagine replacing HCSH with neural networks too and lowering the EBF further, closer to 1.
Such a technique has already been proposed, NNOM++, Move Ordering Neural Networks, but so far it seems that the additional computational effort does not pay off.
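A hedged sketch of what neural move ordering could look like in principle (my reading, not the actual NNOM++ proposal): replace the hand-crafted ordering heuristics with a learned policy score and search the highest-scored moves first. policy_score() is a toy stand-in for the network.

import chess

def policy_score(board: chess.Board, move: chess.Move) -> float:
    """Toy stand-in for a move-ordering network; here: captures first."""
    return 1.0 if board.is_capture(move) else 0.0

def ordered_moves(board: chess.Board):
    # an alpha-beta search would iterate over this ordering so that the likely best
    # move is searched first, maximising beta cutoffs and pushing the EBF towards 1
    return sorted(board.legal_moves, key=lambda m: policy_score(board, m), reverse=True)

board = chess.Board("rnbqkbnr/ppp1pppp/8/3p4/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2")
print([board.san(m) for m in ordered_moves(board)][:5])   # exd5 should come first

The better the ordering, the earlier the beta cutoffs and the lower the EBF; the open question from the text is whether the network's inference cost eats up that gain.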
What else?
We use neural networks in the classic way, for pattern recognition, in today's chess engines, but now the shift is towards pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level: an AI which is able to code a chess engine and outperform humans in this task.
An AI coding a chess engine also has a philosophical implication: such an event is what the Transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.
Moore's Law still has something in the pipeline, from currently 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or a kind of silicon sweet spot (current transistor density/efficiency vs. needed financial investment in fab process/research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.
Interesting times ahead.
The world is facing an imminent water crisis, with demand expected to outstrip the supply of fresh water by 40% by the end of this decade, experts have said on the eve of a crucial UN water summit. From a report: Governments must urgently stop subsidising the extraction and overuse of water through misdirected agricultural subsidies, and industries from mining to manufacturing must be made to overhaul their wasteful practices, according to a landmark report on the economics of water. Nations must start to manage water as a global common good, because most countries are highly dependent on their neighbours for water supplies, and overuse, pollution and the climate crisis threaten water supplies globally, the report's authors say. Johan Rockstrom, the director of the Potsdam Institute for Climate Impact Research and co-chair of the Global Commission on the Economics of Water, and a lead author of the report, told the Guardian the world's neglect of water resources was leading to disaster. "The scientific evidence is that we have a water crisis. We are misusing water, polluting water, and changing the whole global hydrological cycle, through what we are doing to the climate. It's a triple crisis." Rockstrom's fellow Global Commission on the Economics of Water co-chair Mariana Mazzucato, a professor at University College London and also a lead author of the report, added: "We need a much more proactive, and ambitious, common good approach. We have to put justice and equity at the centre of this, it's not just a technological or finance problem."
In a world in need of fresh/drinking water, why the AI?
Researchers Claim Their AI Algorithm Can Recreate What People See Using Brain Scans
A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen....The accuracy of this new method is impressive.
Scientists Target 'Biocomputing' Breakthrough With Use of Human Brain Cells
Scientists propose to develop a biological computer powered by millions of human brain cells that they say could outperform silicon-based machines while consuming far less energy.
[...]
The project's ambition mirrors work on the more advanced quantum computing but raises ethical questions around the "consciousness" of brain organoid assemblies
[...]
DARPA ACE AI Flights F-16 Fighter Jet:
https://news.slashdot.org/story/23/02/14/1631249/ai-has-successfully-piloted-a-us-f-16-fighter-jet-darpa-says
the previous DARPA AlphaDogfight Trials were run in a simulator:
https://www.darpa.mil/news-events/2020-08-07
...pondering the AI doomsayers and recent developments, it seems naive to me to assume that there will be one single AI agent with one background and one motivation; we currently see different agents, with different backgrounds and therefore different motivations, rising. If we say that AI will compete with humans for resources, it seems only natural that AIs will compete amongst each other for resources, or will they really merge one day into one big single system? Interesting times. Still waiting for the AGI/ASI, the strong AI, which combines all the AI subsystems into one.
Okay, did not see that one coming:
"Jailbreaking AIs"
The day after Microsoft unveiled its AI-powered Bing chatbot, "a Stanford University student named Kevin Liu used a prompt injection attack to discover Bing Chat's initial prompt," reports Ars Technica, "a list of statements that governs how it interacts with people who use the service."
Welcome to the future. Nick Bostrom anyone?
They generate text, source code, images, audio, video, 3D models; what's missing?
The large language models for text generation still lack a decent reasoner and analyzer module, decent video is IMO just a matter of time and hardware, and my take would be that the next thing is brainwaves for the BCI, the brain-computer interface.
"Google Created an AI That Can Generate Music From Text Descriptions, But Won't Release It"
[...]Still, the Google researchers note the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs.
"OpenAI Releases Point-E, an AI For 3D Modeling"
https://news.slashdot.org/story/22/12/21/2255216/openai-releases-point-e-an-ai-for-3d-modeling
To produce a 3D object from a text prompt, we first sample an image using the text-to-image model, and then sample a 3D object conditioned on the sampled image. Both of these steps can be performed in a number of seconds, and do not require expensive optimization procedures.
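In sketch form, the two-stage pipeline the quote describes looks like this; both functions are placeholders of my own, not OpenAI's actual Point-E API.

def text_to_image(prompt):
    """Placeholder for the text-to-image diffusion model."""
    return f"synthetic image of {prompt}"

def image_to_point_cloud(image):
    """Placeholder for the image-conditioned point-cloud diffusion model."""
    return [(0.0, 0.0, 0.0), (0.1, 0.0, 0.2)]   # stand-in 3D points

def text_to_3d(prompt):
    return image_to_point_cloud(text_to_image(prompt))

print(text_to_3d("a red traffic cone"))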
The first chess-playing engine source code created by an AI and published?
>>Provide me with a minimal working source code of a chess engine
It's difficult for me to provide a minimal working source code for a chess engine, as the process of creating a chess engine involves a significant amount of complex programming and knowledge of computer science and chess. Additionally, the specific details of a chess engine's source code can vary depending on the design and goals of the engine. Here is a very basic example of a chess engine written in the Python programming language:
import chess
import random

board = chess.Board()

while not board.is_game_over():
    # Generate a list of all legal moves
    legal_moves = list(board.legal_moves)
    # Choose a random move from the list of legal moves
    move = random.choice(legal_moves)
    # Make the move on the board
    board.push(move)

# Print the final game state
print(board)
This code creates a chess board using the chess library, and then enters a loop where it generates a list of all legal moves and chooses a random move to make on the board. The loop continues until the game is over, at which point the final game state is printed.
Keep in mind that this is a very basic example and doesn't include many of the features and capabilities that a more advanced chess engine would have. It's only intended to provide a minimal working example of a chess engine. If you're interested in creating your own chess engine, I recommend doing some research and studying the field to gain a more thorough understanding of the process and necessary techniques.
ChatGPT thread on TalkChess:
https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245