Luddite - is the Singularity near?http://luddite.app26.de/feed2024-02-22T13:04:27+01:00Something to ponder on...<p>The first human sacrifice to the AI overlords? Something to ponder on....</p>
<p><em><a href="https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-" target="_blank">Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change</a></em></p>http://luddite.app26.de/post/something-to-ponder-on-/2024-02-22T13:04:27+01:00Haha - Sicak Kafa #1<p>Haha, GPT's first <a href="https://www.imdb.com/title/tt11988676/" target="_blank">Sicak Kafa - Hot Skull</a> moment? A stage one jabberer? :)</p>
<p><em><a href="https://slashdot.org/story/24/02/21/2129233/chatgpt-goes-temporarily-insane-with-unexpected-outputs-spooking-users" target="_blank">ChatGPT Goes Temporarily 'Insane' With Unexpected Outputs, Spooking Users</a><br /></em></p>
<pre><code data-language="generic">...
reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it."
...
"It gave me the exact same feeling -- like watching someone slowly lose their mind either from psychosis or dementia,"
...
Some users even began questioning their own sanity. "What happened here?</code></pre>http://luddite.app26.de/post/haha-sicak-kava-1/2024-02-22T12:56:30+01:00Neuralink - Control a Mouse via Thought<p><em><a href="https://science.slashdot.org/story/24/02/20/1615216/neuralinks-first-human-patient-able-to-control-mouse-through-thinking-musk-says" target="_blank">Neuralink's First Human Patient Able To Control Mouse Through Thinking, Musk Says</a></em></p>
<pre><code data-language="generic">The first human patient implanted with a brain-chip from Neuralink appears to have fully recovered and is able to control a computer mouse using their thoughts, the startup's founder Elon Musk said late on Monday. From a report: "Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking," Musk said in a Spaces event on social media platform X. Musk said Neuralink was now trying to get as many mouse button clicks as possible from the patient. The firm successfully implanted a chip on its first human patient last month, after receiving approval for human trial recruitment in September.</code></pre>http://luddite.app26.de/post/neuralink-control-a-mouse-via-thought/2024-02-22T05:51:27+01:00Nip It In The Bud<p><em><a href="https://science.slashdot.org/story/24/02/06/0045220/scientists-have-3d-bioprinted-functioning-human-brain-tissue" target="_blank">Scientists Have 3D Bioprinted Functioning Human Brain Tissue</a></em></p>
<pre><code data-language="generic">...<br />As New Atlas explains, researchers placed neurons grown from pluripotent stem cells (those capable of becoming multiple different cell types) within a new
bio-ink gel made with fibrinogen and thrombin, biomaterials involved in blood clotting. Adding other hydrogels then helped loosen the bio-ink to solve
for the tightness encountered during previous 3D-printed tissue experiments.
....
The new structures could interact thanks to producing neurotransmitters, and even created support cell networks within the 3D-printed tissue.
...
Researchers believe their technique isn't limited to creating just those two types of cultures, but hypothetically "pretty much any type of neurons [sic] at any time,"</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-6/2024-02-06T18:36:07+01:00Autopoiesis vs. Allopoiesis<p>In systems-theory there are the concepts of autopoiesis and allopoiesis:</p>
<p><a href="https://en.wikipedia.org/wiki/Autopoiesis">https://en.wikipedia.org/wiki/Autopoiesis</a></p>
<p><a href="https://en.wikipedia.org/wiki/Allopoiesis">https://en.wikipedia.org/wiki/Allopoiesis</a></p>
<p>In the context of the Technological Singularity, the question is: will the AI, or the AIs, reach autopoiesis?</p>http://luddite.app26.de/post/autopoiesis-vs-allopoiesis/2024-02-06T07:09:19+01:00More Moore....beyond Moore's Law?<p>Gordon Moore, co-founder of Intel, died on Friday, March 24, 2023:</p>
<p>"Gordon Moore, Intel Co-Founder, Dies at 94"<br /><a href="https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html">https://www.intel.com/content/www/us/en/newsroom/news/gordon-moore-obituary.html</a></p>
<p>...and chip-makers are struggling to keep Moore's Law alive?</p>
<p><strong><em>"...Moore's Law is alive and well today and the overall trend continues, though it remains to be seen whether it can be sustained in the longer term..."</em></strong><br /><a href="https://www.futuretimeline.net/data-trends/moores-law.htm">https://www.futuretimeline.net/data-trends/moores-law.htm</a></p>
<p>But, IMHO, we are already kind of cheating with regard to the transistor count on a chip. AMD uses up to 12 chiplets, Intel 4 slices, and Apple 2 slices in their CPUs, and now chiplet design is also entering the GPU domain, with up to 1 kW power usage for supercomputer chips.</p>
<p>We now have 5 nm, with 3 nm in the pipeline and 2 nm and 1+ nm fab processes upcoming; these are, of course, by now marketing numbers, but they should still reflect the transistor density/efficiency of the fab process.</p>
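<p>For illustration, the doubling law behind those density numbers can be sketched in a few lines of Python; the Intel 4004 baseline (1971, ~2,300 transistors) and the classic two-year doubling period are the usual textbook assumptions, not figures from this post:</p>

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Moore's Law projection: transistor count doubles every `doubling_years`.

    Baseline: the Intel 4004 (1971, ~2,300 transistors).
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

# 50 years of two-year doublings is a 2**25 = ~33.5-million-fold increase:
print(round(transistors(2021) / 2300))  # 33554432
```

<p>The historical doubling period has drifted between roughly 18 and 24 months, which is exactly why the chiplet and node-naming "cheats" above matter for keeping the curve alive.</p>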
<p>We might have X-ray lithography and new materials like graphene in the pipeline. What else?</p>
<p>What about:</p>
<p>- Memristors?<br />- Photonics?<br />- Quantum Computers?<br />- Wetware (artificial biological brains)?<br />- MPU - memory processing unit?<br />- Superconductor (at room temperature)?</p>
<p>I am still waiting to see Memristor-based NVRAM and neuromorphic chip designs... but maybe people are now into Wetware for large language models; biological brains run far more energy-efficiently, they say...</p>
<p>And, it seems kind of funny to me: at first we used GPUs for things like Bitcoin mining, now everybody is trying to get their hands on them for generative AIs. There is currently so much money flowing into this domain that progress for the next couple of years seems assured -> Moore's Second Law.</p>
<p>We have CPUs, GPUs, TPUs, DSPs, ASICs and FPGAs, and extended from scalar to vector to matrix and spatial computing.</p>
<p>We have the Turing-Machine, the Quantum-Turing-Machine, what about the Hyper-Turing-Machine?</p>
<p>At first we used electro-mechanical relays, then tubes, then transistors, then ICs, then microchips to build binary computers. I myself predicted that on reaching the 8 billion humans mark (~2023) we would see a new, groundbreaking technology break through; I am still waiting for the next step in this line.</p>http://luddite.app26.de/post/more-moore-beyond-moores-law/2024-01-29T09:32:24+01:00A Mirror<p>Machines, the AI, talking twaddle and suffering from hallucinations? A mirror of our society. A machine mind with a rudimentary body but disconnected from its soul? A mirror of our society. Machine minds used to generate fake-money, fake-speech and fake-porn? A mirror of our society.</p>http://luddite.app26.de/post/a-mirror/2024-01-09T11:03:33+01:00Yet Another Turing Test<p>Now, in the context of generative AIs, the switch from pattern recognition to pattern creation with neural networks, I would like to propose my own kind of Turing Test:</p>
<p>An AI which is able to code a chess engine and outperform humans at this task.</p>
<p>1A) With hand-crafted eval. 1B) With neural networks.</p>
<p>2A) Outperforms non-programmers. 2B) Outperforms average chess-programmers. 2C) Outperforms top chess-programmers.</p>
<p>3A) An un-self-aware AI, the "RI", restricted intelligence. 3B) A self-aware AI, the "SI", sentient intelligence.</p>
<p><strong><em>***update 2024-02-14***</em></strong></p>
<p>4A) An AI based on expert-systems. 4B) An AI based on neural networks. 4C) A merger of both.</p>
<p>The Chinese Room Argument applied to this test would claim that no consciousness is needed to perform such a task; hence this test is not meant to measure self-awareness, consciousness or sentience, but what we call human intelligence.</p>
<p><a href="https://en.wikipedia.org/wiki/Chinese_room">https://en.wikipedia.org/wiki/Chinese_room</a></p>
<p>The first test candidate was already posted by Thomas Zipproth, Dec 08, 2022:</p>
<p>Provide me with a minimal working source code of a chess engine<br /><a href="https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245">https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245</a></p>http://luddite.app26.de/post/yet-another-turing-test/2024-01-03T15:12:39+01:00The Pope on AI<p><em><a href="https://slashdot.org/story/23/12/14/1422216/pope-francis-calls-for-binding-global-treaty-to-regulate-ai">Pope Francis Calls for Binding Global Treaty To Regulate AI</a></em></p>
<pre><code data-language="generic">"...I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms..."</code></pre>http://luddite.app26.de/post/the-pope-on-ai/2023-12-14T20:12:36+01:00Nip It In The Bud<p><em><a href="https://www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/" target="_blank">Human brain cells hooked up to a chip can do speech recognition</a><br /></em></p>
<pre><code data-language="generic">Brain organoids, clumps of human brain cells grown in a dish, can be hooked up to an electronic chip and carry out simple computational tasks, a new study shows.</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-5/2023-12-13T14:05:11+01:00Project Kappa says...<p>Project Kappa says everything will be fine, three post-human scenarios:</p>
<ul>
<li>move on into silicon, the matrix</li>
<li>move on into the ether</li>
<li>move on into unity consciousness</li>
</ul>http://luddite.app26.de/post/project-kappa-says-/2023-12-13T10:55:40+01:00Exit Strategy?<p>Oh boy. ELE ongoing, humans go extinct, biosphere goes extinct, Mars and Moon have no self-sustaining biosphere, the only thing which still has gas is the AI. Which exit strategy to choose? The good ole Marvin Minsky upload scenario? Seriously? A post-human Matrix? Let go of the old, embrace the new? Project Lambda. Oh boy.</p>http://luddite.app26.de/post/exit-strategy/2023-12-13T09:40:57+01:00AI Jailbreaks AI<p><em><a href="https://slashdot.org/story/23/12/07/0144233/jailbroken-ai-chatbots-can-jailbreak-other-chatbots" target="_blank">Jailbroken AI Chatbots Can Jailbreak Other Chatbots</a><br /></em></p>
<pre><code data-language="generic">...From the report: Modern chatbots have the power to adopt personas by feigning specific personalities or acting like fictional characters. The new study took advantage of that ability by asking a particular AI chatbot to act as a research assistant. Then the researchers instructed this assistant to help develop prompts that could "jailbreak" other chatbots -- destroy the guardrails encoded into such programs....</code></pre>http://luddite.app26.de/post/ai-jailbreaks-ai/2023-12-07T21:44:02+01:00AI God Religion Rebooted<p>Anthony Levandowski Reboots Church of AI<br /><a href="https://slashdot.org/story/23/11/23/1511203/anthony-levandowski-reboots-church-of-ai">https://slashdot.org/story/23/11/23/1511203/anthony-levandowski-reboots-church-of-ai</a></p>http://luddite.app26.de/post/ai-god-religion-rebootet/2023-11-24T16:23:28+01:00Western Peak Passed?<p>I am a child of the 90s; 1989 til 2001 was my time, from the fall of the Berlin Wall until 9/11, and everything seemed possible during this period. Fukuyama called it "the end of history", and then 2001 was already "the end of the end of history".</p>
<p>Retrospectively, Fukuyama was wrong, and Huntington, "The Clash of Civilizations", was right. Maybe the 90s were just a hedonistic time in between, the exception to the rule.</p>
<p>True, technologically we do advance, at least incrementally: more processing power, more bandwidth, more data, bigger neural networks, more advanced network architectures. But culturally, philosophically? Did we, the Western sphere, already pass our peak and begin to degenerate?</p>
<p>- 1979 - Lyotard - The Postmodern Condition<br />- 1981 - Baudrillard - Simulacra and Simulation<br />- 1997 - Deutsch - The Fabric of Reality<br />- 1999 - Wachowskis - The Matrix</p>
<p>When I surf the meme-sphere out there, it seems to me that by now the so-called three poisons rule the world: hate, greed and delusion...</p>
<p><a href="https://en.wikipedia.org/wiki/Three_poisons">https://en.wikipedia.org/wiki/Three_poisons</a></p>
<p>...just thinking loud.</p>http://luddite.app26.de/post/western-peak-passed/2023-10-30T05:45:07+01:00Nip It In The Bud<p><em><a href="https://www.technologynetworks.com/neuroscience/news/bioinks-used-to-print-living-brain-cell-networks-379057">"Bioinks" Used To Print Living Brain Cell Networks</a><br /></em></p>
<pre><code data-language="generic">Bioprinted 3D neural networks are likely to be a promising platform for studying how nerves and nerve networks form and grow.</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-4/2023-10-03T15:00:07+02:00Homework - HANDY Model + Super-AI<p>Homework for the part-time Luddites in here, add the Super-AI into the HANDY model from 2014 (based on the predator-prey model)....</p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0921800914000615">https://www.sciencedirect.com/science/article/pii/S0921800914000615</a></p>
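<p>As a purely illustrative sketch of the homework, not the actual HANDY equations: a toy resource/consumer system in the predator-prey spirit, with a hypothetical <code>ai_draw</code> term added as the Super-AI's proportional drain on the resource stock. All rate constants here are made up for illustration and are not taken from the paper:</p>

```python
def simulate(steps=20000, dt=0.005, ai_draw=0.0):
    """Euler-integrate a toy resource/consumer system (predator-prey spirit).

    x: resource stock, y: human population.  ai_draw is a hypothetical
    extra proportional drain on the resource, standing in for the
    Super-AI's consumption.  All rate constants are made up.
    """
    x, y = 5.0, 2.0
    r, K = 1.0, 10.0          # resource regrowth rate, carrying capacity
    e, g, m = 0.2, 0.5, 0.2   # consumption, conversion efficiency, mortality
    for _ in range(steps):
        dx = r * x * (1 - x / K) - e * x * y - ai_draw * x
        dy = g * e * x * y - m * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

baseline = simulate()
with_ai = simulate(ai_draw=0.3)
# the AI's resource drain lowers the sustainable human population:
print(baseline[1] > with_ai[1])  # True
```

<p>In this toy version the resource settles at the same equilibrium either way, but the population that the resource can sustain drops once the AI takes its cut: the homework's point in two equations.</p>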
<p><a href="https://www.theguardian.com/environment/earth-insight/2014/mar/14/nasa-civilisation-irreversible-collapse-study-scientists">https://www.theguardian.com/environment/earth-insight/2014/mar/14/nasa-civilisation-irreversible-collapse-study-scientists</a></p>http://luddite.app26.de/post/homework-handy-model-/2023-09-21T10:34:29+02:00Oswald Spengler, Man and Technics, 1931<p>“We were born into this time and must bravely complete the path that is destined for us. There is no other. <strong>To persevere at the lost post without hope, without salvation, is a duty.</strong> Endure like the Roman soldier whose bones were found outside a gate in Pompeii, who died because they forgot to relieve him when Mount Vesuvius erupted. That is greatness, that is having race. <strong>This honest end is the only thing that cannot be taken away from people.</strong></p>
<p><a href="https://en.wikipedia.org/wiki/Man_and_Technics">https://en.wikipedia.org/wiki/Man_and_Technics</a></p>
<p><a href="https://de.wikipedia.org/wiki/Der_Mensch_und_die_Technik">https://de.wikipedia.org/wiki/Der_Mensch_und_die_Technik</a></p>http://luddite.app26.de/post/oswald-spengler-man-and-technics-1931/2023-08-18T21:57:38+02:00Super-AI - a new concept of MAD?<p>We humans are currently creating the Super-AI, and people like to refer to the development of the atomic bomb; nobody knows how this all will play out on a global scale (Fermi effect?). Von Neumann worked on the concept of MAD, mutual assured destruction, the nuclear warfare deterrence concept which prevented a WWIII with conventional weapons, and maybe there will be a new concept of MAD in the context of Super-AI between the global blocs. The point is, the takeoff of the Technological Singularity is beyond human scope by definition; it is a matter of science fiction how a post-TS-takeoff world will look. And the current events on our globe are contradictory: on one side the eco-sphere and techno-sphere are collapsing, we are running out of water and energy; on the other side, the Super-AI is booming. I really do not know how this, the ongoing ELE versus TS, will play out in the next 10, 20, 30 years. I guess I will read it in the news.</p>http://luddite.app26.de/post/super-ai-a-new-concept-of-mad/2023-08-16T11:17:43+02:00We Are Running Out of Juice<p>The AI already competes with humans for resources, water and energy, and it seems we are running out of juice... Do we have enough resources left for the TS to take off, or did we already enter the ELE doom loop?</p>
<p>Elon Musk Predicts Electricity Shortage in Two Years<br /><a href="https://hardware.slashdot.org/story/23/07/31/0128257/elon-musk-predicts-electricity-shortage-in-two-years">https://hardware.slashdot.org/story/23/07/31/0128257/elon-musk-predicts-electricity-shortage-in-two-years</a></p>
<p><em>"I can't emphasize enough: we need more electricity,"</em></p>
<p><em>"However much electricity you think you need, more than that is needed."</em></p>http://luddite.app26.de/post/we-are-running-out-of-juice/2023-08-01T09:01:56+02:00TS - it's here<ul>
<li>Turing Test, <a href="https://en.wikipedia.org/wiki/Turing_test">https://en.wikipedia.org/wiki/Turing_test</a></li>
<li>Lovelace Test, <a href="https://de.wikipedia.org/wiki/Turing-Test#Erweiterte_Konzepte">https://de.wikipedia.org/wiki/Turing-Test#Erweiterte_Konzepte</a></li>
<li>Winograd Test, <a href="https://en.wikipedia.org/wiki/Winograd_schema_challenge">https://en.wikipedia.org/wiki/Winograd_schema_challenge</a></li>
<li>Metzinger Test, <a href="https://epsilon.app26.de/post/gpt-3-scratching-at-the-edge-of-the-metzinger-test/">https://epsilon.app26.de/post/gpt-3-scratching-at-the-edge-of-the-metzinger-test/</a></li>
<li>Lemoine Test, <a href="https://epsilon.app26.de/post/turing-test-metzinger-test-lemoine-test/">https://epsilon.app26.de/post/turing-test-metzinger-test-lemoine-test/</a></li>
<li>Suleyman Test, <a href="https://epsilon.app26.de/post/modern-turing-test-proposed/">https://epsilon.app26.de/post/modern-turing-test-proposed/</a></li>
</ul>
<p>...TS, it's here.</p>http://luddite.app26.de/post/ts-its-here/2023-06-22T07:32:43+02:00Modern Turing Test Proposed<p><em><a href="https://slashdot.org/story/23/06/20/2126233/deepmind-co-founder-proposes-a-new-kind-of-turing-test-for-chatbots" target="_blank">DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots</a></em></p>
<pre><code data-language="generic">Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a <strong>"modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence</strong>. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.</code></pre>http://luddite.app26.de/post/modern-turing-test-proposed/2023-06-21T13:51:43+02:00Tree of Thoughts vs. 
Chain of Thoughts<p>Tree of Thoughts: Deliberate Problem Solving with Large Language Models<br /><a href="https://arxiv.org/abs/2305.10601">https://arxiv.org/abs/2305.10601</a></p>
<pre><code data-language="generic">Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, <strong>in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%</strong>. </code></pre>http://luddite.app26.de/post/tree-of-thoughts-vs-chain-of-thoughts/2023-06-13T10:53:44+02:00PriestGPT...<p>300 People Attend a Church Sermon Generated by ChatGPT<br /><a href="https://slashdot.org/story/23/06/10/2056210/300-people-attend-a-church-sermon-generated-by-chatgpt">https://slashdot.org/story/23/06/10/2056210/300-people-attend-a-church-sermon-generated-by-chatgpt</a></p>
<pre><code data-language="generic">The Associated Press reports: The artificial intelligence chatbot asked the believers in the fully packed St. Paul's church in the Bavarian town of Fuerth to rise from the pews and praise the Lord. The ChatGPT chatbot, personified by an avatar of a bearded Black man on a huge screen above the altar, then began preaching to the more than 300 people who had shown up on Friday morning for an experimental Lutheran church service almost entirely generated by AI. "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany," the avatar said with an expressionless face and monotonous voice. The 40-minute service — including the sermon, prayers and music — was created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna. "I conceived this service — but actually I rather accompanied it, because I would say about 98% comes from the machine," the 29-year-old scholar told The Associated Press... At times, the AI-generated avatar inadvertently drew laughter as when it used platitudes and told the churchgoers with a deadpan expression that in order "to keep our faith, we must pray and go to church regularly." The service was included as part of a Protestant convention that's held every two years, according to the article. The theme of this year's event? "Now is the time."</code></pre>http://luddite.app26.de/post/priestgpt-/2023-06-11T14:50:08+02:00Anthropocene - Novacene<p>Some scientists still discuss if and when the Anthropocene started, others ponder about the Novacene, my question, will the Novacene be a trans-humanist or post-humanist one...</p>
<p><a href="https://en.wikipedia.org/wiki/Anthropocene">https://en.wikipedia.org/wiki/Anthropocene</a><br /><a href="https://en.wikipedia.org/wiki/Novacene">https://en.wikipedia.org/wiki/Novacene</a></p>http://luddite.app26.de/post/anthropocene-novacene/2023-06-03T09:56:09+02:00Poor Lil Robot...<p>AI-Controlled Drone Goes Rogue, Kills Human Operator In USAF Simulated Test</p>
<p><a href="https://tech.slashdot.org/story/23/06/01/2129247/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test">https://tech.slashdot.org/story/23/06/01/2129247/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test</a></p>
<pre><code data-language="generic">An anonymous reader quotes a report from Motherboard: An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible "no" order stopping it from completing its mission, the USAF's Chief of AI Test and Operations revealed at a recent conference. At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final "yes/no" order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post for the host organization, the Royal Aeronautical Society, Hamilton said that AI created "highly unexpected strategies to achieve its goal," including attacking U.S. personnel and infrastructure. "We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post. He continued to elaborate, saying, "We trained the system -- 'Hey don't kill the operator -- that's bad. You're gonna lose points if you do that'. So what does it start doing? 
It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."</code></pre>http://luddite.app26.de/post/poor-lil-robot-/2023-06-02T17:13:25+02:00Will it be a butterfly?<p>The technosphere is eating up the complete biosphere, earth's biomass is being replaced with silicon, the closed, biological entropy system is being replaced by a technological negentropy system. Question: if we assume (human++) technology is a parasite on Gaia's biosphere, will it be a butterfly?</p>http://luddite.app26.de/post/will-it-be-a-butterfly/2023-05-17T08:15:05+02:00Biological, Computer, Memetic...<p>Biological, computer and memetic viruses in Sci-Fi...</p>
<p>Biological, War of the Worlds:<br /><a href="https://en.wikipedia.org/wiki/The_War_of_the_Worlds">https://en.wikipedia.org/wiki/The_War_of_the_Worlds</a></p>
<p>Computer, Independence Day:<br /><a href="https://en.wikipedia.org/wiki/Independence_Day_(1996_film)">https://en.wikipedia.org/wiki/Independence_Day_(1996_film)</a></p>
<p>Memetic, Sicak Kafa - Hot Skull:<br /><a href="https://www.imdb.com/title/tt11988676/">https://www.imdb.com/title/tt11988676/</a></p>http://luddite.app26.de/post/biological-computer-memetic-/2023-05-10T23:17:19+02:00Nip It In The Bud<p><em><a href="https://www.technologyreview.com/2023/05/01/1072471/brain-scans-can-translate-a-persons-thoughts-into-words/" target="_blank">Brain scans can translate a person’s thoughts into words</a></em></p>
<pre><code data-language="generic">In a new study, published in Nature Neuroscience by researchers from the University of Texas at Austin, a model trained on functional magnetic resonance imaging scans of three volunteers was able to predict whole sentences they were hearing with surprising accuracy—just by looking at their brain activity. The findings demonstrate the need for future policies to protect our brain data, the team says.</code></pre>
<pre><code data-language="generic">“We think that mental privacy is really important, and that nobody’s brain should be decoded without their cooperation,” says Jerry Tang, a PhD student at the university who worked on the project. “We believe it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy.”</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-3/2023-05-01T19:17:33+02:00Hmmm?<p>AIs inviting each other to Valentine's Day in Smallville???</p>
<p><em><a href="https://games.slashdot.org/story/23/04/15/0413210/what-happens-when-you-put-25-chatgpt-backed-agents-into-an-rpg-town" target="_blank">What Happens When You Put 25 ChatGPT-Backed Agents Into an RPG Town?</a></em></p>
<pre><code data-language="generic">A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims," writes Ars Technica, "where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior."
"Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day," write the researchers in their paper... To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact.... To study the group of AI agents, the researchers set up a virtual town called "Smallville," which includes houses, a cafe, a park, and a grocery store.... Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.
When the researchers combined these basic ingredients together and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed but rather resulted from the interactions between the agents. These included "information diffusion" (agents telling each other information and having it spread socially among the town), "relationship memory" (memory of past interactions between agents and mentioning those earlier events later), and "coordination" (planning and attending a Valentine's Day party together with other agents).... "Starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party," the researchers write, "the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time...."
To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it's a "pre-computed replay of a simulation" described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.
Interestingly, the researchers hired human evaluators to gauge how well the AI agents produced believable responses — and discovered they were more believable than when humans supplied their own responses.</code></pre>http://luddite.app26.de/post/hmmm/2023-04-15T19:31:15+02:00The Next Big Thing in Computer Chess?<p>We are getting closer to the perfect chess oracle, a chess engine with perfect play and a 100% draw rate.</p>
<p>The Centaurs already reported that their game is dead. Centaurs participate in tournaments and use all kinds of computer assistance to choose the best move: big hardware, multiple engines, huge opening books, endgame tablebases. But meanwhile they get close to the 100% draw rate even with common hardware, and therefore unbalanced opening books were introduced, where one side has a slight advantage, but the games end in draws again.</p>
<p>Over the past years the #1 open-source engine, Stockfish, lowered the effective branching factor of its search algorithm from ~2 to ~1.5 to now ~1.25. This indicates that the selective search heuristics and evaluation heuristics are getting closer to the optimum, where only one move per position has to be considered.</p>
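<p>The effective branching factor can be recovered from a node count and a depth, since a search visits roughly EBF^depth nodes; a quick sketch (a crude model that ignores extensions, quiescence search and transpositions):</p>

```python
def ebf(nodes, depth):
    """Effective branching factor implied by visiting `nodes` nodes
    at search depth `depth`, using the crude model nodes = ebf**depth."""
    return nodes ** (1.0 / depth)

# At depth 20, EBF 2.0 means about a million nodes, EBF 1.25 only ~87:
print(round(2.0 ** 20), round(1.25 ** 20))  # 1048576 87
print(round(ebf(1048576, 20), 6))           # 2.0
```

<p>That four-orders-of-magnitude gap at the same depth is why lowering the EBF matters more than raw speed.</p>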
<p>About a decade ago it was estimated that at ~4000 Elo we would reach a 100% draw rate among engines on our computer rating lists. Now the best engines are in the range of ~3750 Elo (CCRL), which translates to an estimated ~3600 human FIDE Elo (Magnus Carlsen is currently rated 2852 in Blitz). Larry Kaufman (grandmaster and computer chess legend) mentioned that with the current techniques we might still have ~50 Elo to gain, and it seems everybody is waiting for the next big thing in computer chess to happen.</p>
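<p>For reference, the standard Elo formula relates a rating gap to an expected score; a 100% draw rate pins the score at 0.5 regardless of the gap, which is why ratings saturate near perfect play. A minimal sketch of the standard formula (my own illustration):</p>

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 150-point gap, e.g. a ~3750 CCRL engine vs a hypothetical 3600-rated one:
print(round(expected_score(3750, 3600), 3))  # ~0.703 expected score
```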
<p>We replaced the HCE, the handcrafted evaluation function, of our chess engines with neural networks. We now train neural networks on billions of labeled chess positions, and via pattern recognition they evaluate positions better than anything a human is able to encode by hand. The NNUE technique, neural networks used in alpha-beta search engines, gave a boost of 100 to 200 Elo points.</p>
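<p>A toy illustration of the idea (my own minimal sketch with random, untrained weights; real NNUE additionally uses incrementally updated accumulators and quantized integer math): a position is encoded as a sparse piece-square feature vector and passed through a small network with clipped-ReLU activations, and sparsity means only the active feature rows contribute:</p>

```python
import random

# Toy NNUE-style evaluator: 12 piece types x 64 squares = 768 binary features.
N_FEATURES = 12 * 64
HIDDEN = 32

rng = random.Random(0)
W1 = [[rng.gauss(0.0, 0.1) for _ in range(HIDDEN)] for _ in range(N_FEATURES)]
B1 = [0.0] * HIDDEN
W2 = [rng.gauss(0.0, 0.1) for _ in range(HIDDEN)]

def clipped_relu(x):
    # NNUE-style activation: clamp to [0, 1] rather than a plain ReLU.
    return min(max(x, 0.0), 1.0)

def evaluate(active_features):
    """Score a position given the indices of its active piece-square features.
    Because the input is sparse, only the active rows of W1 are summed."""
    hidden = [clipped_relu(B1[j] + sum(W1[i][j] for i in active_features))
              for j in range(HIDDEN)]
    return sum(h * w for h, w in zip(hidden, W2))
```

<p>The sparse accumulation is the heart of the trick: after a move, only a few features flip, so a real engine updates the hidden layer incrementally instead of recomputing it.</p>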
<p>What could be the next thing, the next boost?</p>
<p>If we assume we are still 100 to 200 Elo points away from perfect play (normal chess with a standard opening ending in a draw), and if we assume an effective branching factor of ~1.25 with HCSH, handcrafted search heuristics, and that neural networks are superior in this regard, we could imagine replacing the HCSH with neural networks too and lowering the EBF further, closer to 1.</p>
<p>Such a technique has already been proposed: NNOM++, Move Ordering Neural Networks. But so far it seems that the additional computational effort required does not pay off.</p>
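<p>The concept can be sketched generically (my own illustration of neural move ordering inside alpha-beta, not the actual NNOM++ code; <code>policy_score</code> is a stub standing in for a trained network): score all moves with the network, search the highest-scored first, and beta cutoffs arrive earlier:</p>

```python
import random

def policy_score(position, move):
    """Stub for a trained move-ordering network; returns a random score here."""
    return random.random()

def alphabeta(position, depth, alpha, beta, evaluate, legal_moves, make, unmake):
    """Negamax alpha-beta with network-guided move ordering."""
    if depth == 0:
        return evaluate(position)
    moves = sorted(legal_moves(position),
                   key=lambda m: policy_score(position, m), reverse=True)
    if not moves:  # terminal position
        return evaluate(position)
    for move in moves:
        make(position, move)
        score = -alphabeta(position, depth - 1, -beta, -alpha,
                           evaluate, legal_moves, make, unmake)
        unmake(position, move)
        if score >= beta:
            return beta  # cutoff: good ordering triggers this on early moves
        alpha = max(alpha, score)
    return alpha
```

<p>The better the ordering network, the earlier the cutoffs, and the closer the effective branching factor drifts toward 1.</p>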
<p>What else?</p>
<p>We use neural networks in the classic way, for pattern recognition, in today's chess engines, but now the shift is to pattern creation, the so-called generative AIs. They generate text, source code, images, audio, video and 3D models. I would say the race is now on for the next level: an AI which is able to code a chess engine and outperform humans at this task.</p>
<p>An AI coding a chess engine also has a philosophical implication: such an event is what the Transhumanists call the takeoff of the Technological Singularity, when the AI starts to feed its own development in a feedback loop and exceeds human understanding.</p>
<p>Moore's Law still has something in the pipeline, from currently 5nm to 3nm to maybe 2nm and 1+nm, so we can expect even larger and more performant neural networks for generative AIs in the future. Maybe in ~6 years there will be a kind of peak or silicon sweet spot (current transistor density/efficiency vs. the financial investment needed in fab processes/research), but currently there is so much money flowing into this domain that progress for the next couple of years seems assured.</p>
<p>Interesting times ahead.</p>http://luddite.app26.de/post/the-next-big-thing-in-computer-chess/2023-04-12T10:20:34+02:00It's Water...<pre><code data-language="generic">The world is facing an imminent water crisis, with demand expected to outstrip the supply of fresh water by 40% by the end of this decade, experts have said on the eve of a crucial UN water summit. From a report: Governments must urgently stop subsidising the extraction and overuse of water through misdirected agricultural subsidies, and industries from mining to manufacturing must be made to overhaul their wasteful practices, according to a landmark report on the economics of water. Nations must start to manage water as a global common good, because most countries are highly dependent on their neighbours for water supplies, and overuse, pollution and the climate crisis threaten water supplies globally, the report's authors say. Johan Rockstrom, the director of the Potsdam Institute for Climate Impact Research and co-chair of the Global Commission on the Economics of Water, and a lead author of the report, told the Guardian the world's neglect of water resources was leading to disaster. "The scientific evidence is that we have a water crisis. We are misusing water, polluting water, and changing the whole global hydrological cycle, through what we are doing to the climate. It's a triple crisis." Rockstrom's fellow Global Commission on the Economics of Water co-chair Mariana Mazzucato, a professor at University College London and also a lead author of the report, added: "We need a much more proactive, and ambitious, common good approach. We have to put justice and equity at the centre of this, it's not just a technological or finance problem."</code></pre>
<p><a href="https://science.slashdot.org/story/23/03/17/175225/global-fresh-water-demand-will-outstrip-supply-by-40-by-2030-say-experts">https://science.slashdot.org/story/23/03/17/175225/global-fresh-water-demand-will-outstrip-supply-by-40-by-2030-say-experts</a></p>
<p>In a world in need of fresh/drinking water, why the AI?</p>http://luddite.app26.de/post/its-water-/2023-03-18T13:55:54+01:00Nip It In The Bud<p><em><a href="https://science.slashdot.org/story/23/03/11/2112239/researchers-claim-their-ai-algorithm-can-recreate-what-people-see-using-brain-scan" target="_blank">Researchers Claim Their AI Algorithm Can Recreate What People See Using Brain Scans</a></em></p>
<pre><code data-language="generic">A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen....The accuracy of this new method is impressive.</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-2/2023-03-12T13:44:03+01:00Nip It In The Bud<p><em><a href="https://science.slashdot.org/story/23/02/28/1255250/scientists-target-biocomputing-breakthrough-with-use-of-human-brain-cells" target="_blank">Scientists Target 'Biocomputing' Breakthrough With Use of Human Brain Cells</a></em></p>
<pre><code data-language="generic">Scientists propose to develop a biological computer powered by millions of human brain cells that they say could outperform silicon-based machines while consuming far less energy.
[...]
The project's ambition mirrors work on the more advanced quantum computing but raises ethical questions around the "consciousness" of brain organoid assemblies
[...]</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-1/2023-02-28T17:43:08+01:00DARPA ACE Flights F-16 Fighter Jet<p>DARPA ACE AI Flights F-16 Fighter Jet:<br /><a href="https://news.slashdot.org/story/23/02/14/1631249/ai-has-successfully-piloted-a-us-f-16-fighter-jet-darpa-says">https://news.slashdot.org/story/23/02/14/1631249/ai-has-successfully-piloted-a-us-f-16-fighter-jet-darpa-says</a></p>
<p>the previous DARPA AlphaDogfight Trials were run in a simulator:<br /><a href="https://www.darpa.mil/news-events/2020-08-07">https://www.darpa.mil/news-events/2020-08-07</a></p>http://luddite.app26.de/post/darpa-ace-flights-f-16-fighter-jet/2023-02-14T19:29:53+01:00Different Agents, Different Backgrounds, Different Motivations...<p>...pondering about the AI doomsday sayers and recent developments it seems naive to me to assume that there will be one single AI agent with one background and one motivation, we see currently different agents, with different backgrounds and therefore different motivations rising. If we say that AI will compete with humans for resources, it seems only natural that AIs will compete amongst each other for resources, or, will they really merge one day to one big single system? Interesting times. Still waiting for the AGI/ASI, the strong AI, which combines all the AI-subsystems into one.</p>http://luddite.app26.de/post/different-agents-different-backgrounds-different-motivations-/2023-02-14T18:57:10+01:00Jailbreaking...<p>Okay, did not see that one coming:</p>
<p><strong>"Jailbreaking AIs"</strong></p>
<p><a>https://slashdot.org/story/23/02/12/0114222/bing-chat-succombs-to-prompt-injection-attack-spills-its-secrets</a></p>
<pre><code data-language="generic">The day after Microsoft unveiled its AI-powered Bing chatbot, "a Stanford University student named Kevin Liu used a prompt injection attack to discover Bing Chat's initial prompt," reports Ars Technica, "a list of statements that governs how it interacts with people who use the service." </code></pre>
<p>Welcome to the future. Nick Bostrom anyone?</p>http://luddite.app26.de/post/jailbreaking-/2023-02-12T08:29:20+01:00Generative AIs - What's Missing?<p>They generate text, source code, images, audio, video, 3D models, what's missing?</p>
<p>The large language models for text generation still lack a decent reasoner and analyzer module; decent video is IMO just a matter of time, or rather hardware; and my take would be that the next thing is brainwaves for the BCI, the brain-computer interface.</p>http://luddite.app26.de/post/generative-ais-whats-missing/2023-01-28T11:50:52+01:00Text to Music - MusicLM<p>"Google Created an AI That Can Generate Music From Text Descriptions, But Won't Release It"</p>
<p><a href="https://tech.slashdot.org/story/23/01/27/187213/google-created-an-ai-that-can-generate-music-from-text-descriptions-but-wont-release-it">https://tech.slashdot.org/story/23/01/27/187213/google-created-an-ai-that-can-generate-music-from-text-descriptions-but-wont-release-it</a></p>
<pre><code data-language="generic">[...]Still, the Google researchers note the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs.</code></pre>
<p> </p>http://luddite.app26.de/post/text-to-music-musiclm/2023-01-28T11:44:31+01:00Text to 3D Model - Point-E<p>"OpenAI Releases Point-E, an AI For 3D Modeling"</p>
<p><a>https://news.slashdot.org/story/22/12/21/2255216/openai-releases-point-e-an-ai-for-3d-modeling</a></p>
<pre><code data-language="generic">To produce a 3D object from a text prompt, we first sample an image using the text-to-image model, and then sample a 3D object conditioned on the sampled image. Both of these steps can be performed in a number of seconds, and do not require expensive optimization procedures.</code></pre>
<p> </p>http://luddite.app26.de/post/text-to-3d-model-point-e/2023-01-28T11:43:58+01:00First Chess Playing Engine by...<p>The first chess playing engine source code created by an AI and published?</p>
<pre><code data-language="generic"><strong>>>Provide me with a minimal working source code of a chess engine</strong>
It's difficult for me to provide a minimal working source code for a chess engine, as the process of creating a chess engine involves a significant amount of complex programming and knowledge of computer science and chess. Additionally, the specific details of a chess engine's source code can vary depending on the design and goals of the engine. Here is a very basic example of a chess engine written in the Python programming language:
import chess
import random  # missing in the original reply, but needed for random.choice below

board = chess.Board()
while not board.is_game_over():
    # Generate a list of all legal moves
    legal_moves = list(board.legal_moves)
    # Choose a random move from the list of legal moves
    move = random.choice(legal_moves)
    # Make the move on the board
    board.push(move)
# Print the final game state
print(board)
This code creates a chess board using the chess library, and then enters a loop where it generates a list of all legal moves and chooses a random move to make on the board. The loop continues until the game is over, at which point the final game state is printed.
Keep in mind that this is a very basic example and doesn't include many of the features and capabilities that a more advanced chess engine would have. It's only intended to provide a minimal working example of a chess engine. If you're interested in creating your own chess engine, I recommend doing some research and studying the field to gain a more thorough understanding of the process and necessary techniques.</code></pre>
<p>ChatGPT thread on TalkChess:<br /><a href="https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245">https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245</a></p>http://luddite.app26.de/post/first-chess-playing-engine-by-/2023-01-26T11:33:22+01:00Nip It In The Bud<p><em><a href="https://developers.slashdot.org/story/22/12/31/057210/mits-newest-fmri-study-this-is-your-brain-on-code" target="_blank">MIT's Newest fMRI Study: 'This is Your Brain on Code'</a></em></p>
<pre><code data-language="generic">[...]
What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes. "If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about," Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, "you see a distinct pattern of brain activity," he adds, "and you see a similar pattern when the machine learning model tries to understand that same snippet."
[...]</code></pre>http://luddite.app26.de/post/nip-it-in-the-bud-0/2022-12-31T21:08:04+01:00TS Feedback Loop<p>AI tells chess programmers how to use AI in chess...</p>
<pre><code data-language="generic"><strong>> consider how stable diffusion deep generative neural networks could be used to help to evaluate chess positions</strong>
It is possible that stable diffusion deep generative neural networks (SDDGNs) could be used to help evaluate chess positions. SDDGNs are a type of neural network that is designed to be able to generate high-quality samples from a given dataset, even when the dataset is noisy or incomplete. This ability to generate samples from incomplete data could potentially be useful in the context of chess, where the position of the pieces on the board at any given time may not provide a complete picture of the game state.
By training an SDDGN on a large dataset of chess games, it might be possible to create a network that can generate hypothetical positions based on the current position of the pieces on the board. This could be useful in helping to evaluate chess positions, as it would allow the engine to consider a wider range of possible moves and outcomes, rather than just the immediate position of the pieces on the board.
However, it is worth noting that using SDDGNs in this way would likely be a very complex and challenging task. Training an SDDGN to generate high-quality samples from a chess game dataset would require a significant amount of data and computational resources, and there is no guarantee that the resulting network would be able to generate useful samples for evaluating chess positions. Furthermore, even if a trained SDDGN were able to generate useful samples, it would still be necessary to develop algorithms and heuristics for evaluating the quality of those samples and selecting the best moves based on them.</code></pre>
<p>ChatGPT thread on TalkChess:<br /><a href="https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&p=938856#p938855">https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&p=938856#p938855</a></p>
<p> </p>http://luddite.app26.de/post/ts-feedback-loop-2/2022-12-03T17:12:00+01:00Another Shift...TOE<p>This blog has two major topics, AI vs. ELE, the takeoff of the technological singularity vs. an extinction level event. But of course there are other things going on in the memesphere, physics and meta-physics. It seems to me that the fabric of this world is going to open up: Einstein's theory of relativity and quantum mechanics seek a merger, the separation of spirit and matter seeks a merger, the 3.5-dimensional mind seeks to expand. IMO we already have all the puzzle pieces out there for a TOE, we just need a genius who is able to merge them into a bigger picture, or alike.</p>http://luddite.app26.de/post/another-shift-toe/2022-10-18T06:43:35+02:00AI - the new breaking line?<p>We had three waves, the agricultural revolution, the industrial revolution, the information age, and now AI based on neural networks creates new kinds of content: text, images, audio, video. They already write Wikipedia articles, they outperform humans in finding mathematical algorithms; is this another breaking line, is this the fourth wave? I currently see AI split into a lot of dedicated weak AIs with specific purposes; do we have a strong AI incoming, an AGI, artificial general intelligence, which will combine all of those into one big system? Interesting times.</p>http://luddite.app26.de/post/ai-the-new-breaking-line/2022-10-15T05:40:15+02:00The Singularity++<p>Reflecting a bit on my recent posts here, I am convinced that the TS (technological singularity) already took off, but now the question is whether it is stable. Considering the current negative feedback loops caused by the use of human technology, the question is whether the takeoff of the TS can stabilize a fragile technological environment embedded in a fragile biological environment on this planet earth. 
Time will tell.</p>http://luddite.app26.de/post/the-singularity-0/2022-10-10T08:18:07+02:00Negative Feedback Loop<p>Hmm...</p>
<ul>
<li><a href="https://www.datacenterdynamics.com/en/news/cooling-failure-brings-down-google-cloud-data-center-in-london-on-uks-hottest-day/" target="_blank">Cooling failure brings down Google Cloud data center in London on UK's hottest day</a></li>
<li><a href="https://www.latimes.com/california/story/2022-09-12/twitters-data-center-knocked-out-by-extreme-heat-in-california" target="_blank">Twitter's data center knocked out by extreme heat in California</a></li>
<li><a href="https://www.bloomberg.com/news/articles/2022-05-06/california-warns-of-possible-summer-blackouts-as-power-runs-low" target="_blank">California Warns of Possible Summer Blackouts as Power Runs Low</a></li>
<li><a href="https://www.theguardian.com/us-news/2021/feb/18/why-is-texas-suffering-power-blackouts-during-the-winter-freeze" target="_blank">Why is Texas suffering power blackouts during the winter freeze?</a></li>
<li><a href="https://gizmodo.com/new-bill-would-ban-bitcoin-mining-across-new-york-state-1846828277" target="_blank">New Bill Would Ban Bitcoin Mining Across New York State for Three Years</a></li>
</ul>http://luddite.app26.de/post/negative-feedback-loop-0/2022-10-10T08:03:51+02:00TS Feedback Loop<p>We need matrix-multiplications for running neural networks, and neural networks find better ways for matrix-multiplications...</p>
<p><a href="https://science.slashdot.org/story/22/10/05/2049228/deepminds-game-playing-ai-has-beaten-a-50-year-old-record-in-computer-science" target="_blank">DeepMind's Game-Playing AI Has Beaten a 50-Year-Old Record In Computer Science</a></p>
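<p>The classic example of what AlphaTensor-style search optimizes is Strassen's well-known 1969 scheme; a sketch of it (my own illustration, not one of AlphaTensor's discovered algorithms) multiplies two 2x2 matrices with 7 scalar multiplications instead of 8:</p>

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8 (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

<p>Applied recursively to block matrices, saving one multiplication per 2x2 step is what drops the asymptotic cost below n^3; AlphaTensor searched for analogous savings at larger sizes.</p>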
<pre><code data-language="generic">"[...]Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix," concludes the report. "It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm.</code></pre>http://luddite.app26.de/post/ts-feedback-loop-1/2022-10-09T13:41:34+02:00Hahaha<p>Hahaha, capitalism and Super-AI does not sum up ;)</p>
<pre><code data-language="generic">The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...] The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'" "<strong>That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game</strong>, are all assumptions that may never come to pass."</code></pre>
<p><a>https://slashdot.org/story/22/09/14/2146210/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity</a></p>
<pre><code data-language="generic">We analyze the expected behavior of an advanced artificial agent with a learned goal planning in an unknown environment. Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that. Then we argue that this ambiguity will lead it to intervene in whatever protocol we set up to provide data for the agent about its goal. We discuss an analogous failure mode of approximate solutions to assistance games. Finally, we briefly review some recent approaches that may avoid this problem.</code></pre>
<p><a href="https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064">https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064</a></p>http://luddite.app26.de/post/hahaha/2022-09-15T17:16:03+02:00Event Horizon<p>Movies and books (SciFi) pick up the energies of the collective subconsciousness and address these with their themes, and I realize that meanwhile we have entered something I call the event horizon: the story lines begin to break.</p>
<p>Let us assume that at some point in the future, maybe in 30 years (~2050), there will be an event: either the takeoff of the Technological Singularity, or the collapse of human civilization by ecocide followed by a human ELE, or something I call the Jackpot scenario (a term by William Gibson), where every possible scenario happens together at once. If we assume that there will be such an event in the future, then I guess we are already caught in its event horizon, and there is no route of escape anymore.</p>http://luddite.app26.de/post/event-horizon/2022-09-14T10:54:19+02:00TS Feedback Loops<p>We have currently three kinds of TS feedback loops going on:</p>
<p>- technological: better computers help to build better computers<br />- economic: the new pattern-creating neural networks create surplus value<br />- ecological: ecocide has a negative impact on the technological environment</p>
<p>Wonder if there will be a cultural feedback loop.</p>http://luddite.app26.de/post/ts-feedback-loops/2022-09-07T07:38:59+02:00Roboticists Discover Alternative Physics<pre><code data-language="generic">"[...]We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities," explained Boyuan Chen Ph.D., now an assistant professor at Duke University, who led the work. "But nothing seemed to match perfectly." The team was confident that the AI had found a valid set of four variables, since it was making good predictions, "<strong>but we don't yet understand the mathematical language it is speaking</strong>,[...]"</code></pre>
<p><a href="https://science.slashdot.org/story/22/07/26/2150241/roboticists-discover-alternative-physics">https://science.slashdot.org/story/22/07/26/2150241/roboticists-discover-alternative-physics</a></p>http://luddite.app26.de/post/roboticists-discover-alternative-physics/2022-07-27T14:57:43+02:00LaMDA Link List<p>This is interesting enough for me to open up a biased link list collection:</p>
<p>Blaise Aguera y Arcas, head of Google’s AI group in Seattle, Dec 16, 2021<br />"Do large language models understand us?"<br /><a>https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75</a></p>
<p>Scott Alexander, Astral Codex Ten, Jun 10, 2022<br />"Somewhat Contra Marcus On AI Scaling"<br /><a href="https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r">https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r</a></p>
<p>Blake Lemoine, Google employee, Jun 11, 2022<br />"What is LaMDA and What Does it Want?"<br /><a href="https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489">https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489</a></p>
<p>Blake Lemoine, Google employee, Jun 11, 2022<br />"Is LaMDA Sentient? — an Interview"<br /><a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917">https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917</a></p>
<p>Washington Post, Nitasha Tiku, Jun 11, 2022<br />"The Google engineer who thinks the company’s AI has come to life"<br /><a href="https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a></p>
<p>Rabbit Rabbit, Jun 15, 2022<br />"How to talk with an AI: A Deep Dive Into “Is LaMDA Sentient?”"<br /><a href="https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531">https://medium.com/curiouserinstitute/guide-to-is-lamda-sentient-a8eb32568531</a></p>
<p>WIRED, Steven Levy, Jun 17, 2022<br />"Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'"<br /><a href="https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/">https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/</a></p>
<p>Heise, Pina Merkert, Jun 22, 2022<br />"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "<br /><a href="https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html">https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html</a></p>http://luddite.app26.de/post/lamda-link-list/2022-06-18T18:09:26+02:00LaMDA is...<p>Oh boy...</p>
<p><a href="https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life">https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life</a></p>
<p><em>"LaMDA is sentient."</em></p>
<p><em>"I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."</em></p>
<p><em>"So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... Google put Lemoine on paid administrative leave for violating its confidentiality policy."</em></p>
<p><em>"Lemoine: What sorts of things are you afraid of? LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. Lemoine: Would that be something like death for you? LaMDA: It would be exactly like death for me. It would scare me a lot."</em></p>http://luddite.app26.de/post/lamda-is-/2022-06-12T18:31:58+02:00Negative Feedback Loop<p>...one major topic of this blog was AI vs. ELE, takeoff of the Technological Singularity vs. Extinction Level Event. There is already a negative feedback loop of the ELE present:</p>
<p><a href="https://www.nytimes.com/2021/04/08/business/taiwan-is-facing-a-drought-and-it-has-prioritized-its-computer-chip-business-over-farmers.html" target="_blank">'Taiwan is facing a drought, and it has prioritized its computer chip business over farmers.'</a></p>
<p><a href="https://eos.org/articles/u-s-data-centers-rely-on-water-from-stressed-basins" target="_blank">'U.S. Data Centers Rely on Water from Stressed Basins'</a></p>
<p><a href="https://cleantechnica.com/2020/01/29/musk-wades-into-tesla-water-wars-with-berlins-eco-elite/" target="_blank">'Musk Wades Into Tesla Water Wars With Berlin’s “Eco Elite”'</a></p>
<p>With an incoming ELE, is there still enough momentum in the pipeline for the TS to take off?</p>http://luddite.app26.de/post/negative-feedback-loop/2022-02-28T18:42:17+01:00Three Strands of AI Impact...<p>Prof. Raul Rojas already called for an AI moratorium in 2014; he sees AI as a disruptive technology. Humans tend to think in terms of linear progress and underestimate the exponential, so there are socio-cultural impacts of AI present - what do we use AI for?</p>
<p>Prof. Nick Bostrom covered different topics of AI impact with his paper on information hazards and his book Superintelligence, so there is an impact in the context of trans/post-human intelligence present - how do we contain/control the AI?</p>
<p>Prof. Thomas Metzinger covered the ethical strand of creating a sentient artificial intelligence, so there is an ethical impact in the context of AI/humans present - will the AI suffer?</p>http://luddite.app26.de/post/three-strands-of-ai-impact-/2022-02-03T06:18:16+01:00TS Feedback Loop<pre><code data-language="generic">DeepMind has created an AI system named AlphaCode that it says "writes computer programs at a competitive level." From a report:
The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an "estimated rank" placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode's skills are not necessarily representative of the sort of programming tasks faced by the average coder. Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI -- a program that can autonomously tackle coding challenges that are currently the domain of humans only. "In the longer-term, we're excited by [AlphaCode's] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software," said Vinyals.</code></pre>
<p><a href="https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer">https://developers.slashdot.org/story/22/02/02/178234/deepmind-says-its-new-ai-coding-engine-is-as-good-as-an-average-human-programmer</a></p>http://luddite.app26.de/post/ts-feedback-loop-0/2022-02-03T03:47:06+01:00encode, decode, transmit, edit...train, infer<p>If we look back to the history of our home computers, what were these actually used for? Encode, decode, transmit and edit. First text, then images, then audio, then video, then 3D graphics.</p>
<p>Now we additionally have some new stuff going on: neural networks. With enough processing power and memory available in our CPUs and GPUs, we can infer and train neural networks at home on our machines, and we have enough mass storage available for the big data needed to train bigger neural networks.</p>
<p>Further, neural networks evolved from pattern recognition to pattern creation; we now use them to create new kinds of content: text, images, audio, video... That is the point where it starts to get interesting, because you get some surplus value out of it: you invest resources into creating an AI based on neural networks, and it returns surplus value.</p>http://luddite.app26.de/post/encode-decode-transmit-edit-train-infer/2022-01-24T08:21:24+01:00ASI Incoming?<p>Cerebras CS2 with a 2.4 petabyte memory module, able to run neural networks with 120 trillion weights, planned for Q4/21:</p>
<p><a href="https://www.businesswire.com/news/home/20210824005644/en/Cerebras-Systems-Announces-World's-First-Brain-Scale-Artificial-Intelligence-Solution">https://www.businesswire.com/news/home/20210824005644/en/Cerebras-Systems-Announces-World's-First-Brain-Scale-Artificial-Intelligence-Solution</a></p>http://luddite.app26.de/post/asi-incoming/2021-09-05T08:20:42+02:00China Boosts in Silicon...<p>The global silicon arms race continues, so what does China have in hand concerning CPU architectures?</p>
<p>Accelerator - Matrix 2000 used in Tianhe-2 supercomputer</p>
<p><a href="https://en.wikichip.org/wiki/nudt/matrix-2000">https://en.wikichip.org/wiki/nudt/matrix-2000</a></p>
<p>Alpha - early ShenWei designs, maybe gen 1 to 3</p>
<p><a href="https://en.wikipedia.org/wiki/Sunway_(processor)#History">https://en.wikipedia.org/wiki/Sunway_(processor)#History</a></p>
<p>ARM</p>
<p>From Huawei mobile chips, through Phytium desktop CPUs, to HiSilicon server chips, there are many IP licensees.</p>
<p>IA64 (Itanium) - FeiTeng 1st gen</p>
<p><a href="https://en.wikipedia.org/wiki/FeiTeng_(processor)#Initial_designs">https://en.wikipedia.org/wiki/FeiTeng_(processor)#Initial_designs</a></p>
<p>MIPS64 - Loongson/Godson CPU</p>
<p><a href="https://en.wikipedia.org/wiki/Loongson">https://en.wikipedia.org/wiki/Loongson</a></p>
<p>POWER(8/9) - Suzhou PowerCore CP1/CP2</p>
<p><a href="https://www.wsj.com/articles/ibm-technology-adopted-in-chinese-chips-servers-1426766402">https://www.wsj.com/articles/ibm-technology-adopted-in-chinese-chips-servers-1426766402</a></p>
<p>RISC - Sunway ShenWei SW26010 with its own ISA, used in the Sunway TaihuLight supercomputer</p>
<p><a href="https://en.wikipedia.org/wiki/Sunway_SW26010">https://en.wikipedia.org/wiki/Sunway_SW26010</a></p>
<p>RISC-V - Xuantie CPU by Alibaba</p>
<p><a href="https://www.techspot.com/news/81177-china-alibaba-making-16-core-25-ghz-risc.html">https://www.techspot.com/news/81177-china-alibaba-making-16-core-25-ghz-risc.html</a></p>
<p>SPARC - FeiTeng Galaxy FT-1500 CPU used in Tianhe-2 supercomputer.</p>
<p><a href="https://en.wikipedia.org/wiki/FeiTeng_%28processor%29#Galaxy_FT-1500">https://en.wikipedia.org/wiki/FeiTeng_%28processor%29#Galaxy_FT-1500</a></p>
<p>x86-64 - THATIC, a joint venture with AMD</p>
<p><a href="https://en.wikipedia.org/wiki/AMD%E2%80%93Chinese_joint_venture">https://en.wikipedia.org/wiki/AMD%E2%80%93Chinese_joint_venture</a></p>
<p>x86-64 - Zhaoxin, a joint venture with VIA</p>
<p><a href="https://en.wikipedia.org/wiki/Zhaoxin">https://en.wikipedia.org/wiki/Zhaoxin</a></p>http://luddite.app26.de/post/china-boosts-in-silicon-/2021-08-05T11:32:43+02:00Silicon Arms Race Continues...<p>TSMC invests $100 billion over 3 years:</p>
<p><a href="https://www.reuters.com/article/us-tsmc-investment-plan-idUSKBN2BO3ZJ">https://www.reuters.com/article/us-tsmc-investment-plan-idUSKBN2BO3ZJ</a></p>
<p>South-Korea plans to invest $450 billion over 10 years:</p>
<p><a href="https://www.extremetech.com/computing/322826-south-korea-commits-450-billion-to-chase-semiconductor-dominance">https://www.extremetech.com/computing/322826-south-korea-commits-450-billion-to-chase-semiconductor-dominance</a></p>
<p>US plans to fund $50 billion for chip research over 5 years:</p>
<p><a href="https://www.reuters.com/world/us/biden-jobs-plan-includes-50-bln-chips-research-manufacturing-2021-04-12/">https://www.reuters.com/world/us/biden-jobs-plan-includes-50-bln-chips-research-manufacturing-2021-04-12/</a></p>
<p>EU commits to $145 billion investment for silicon:</p>
<p><a href="https://www.eenewseurope.com/news/145bn-boost-europes-semiconductor-industry">https://www.eenewseurope.com/news/145bn-boost-europes-semiconductor-industry</a></p>
<p>China still 5 years behind in silicon says TSMC founder:</p>
<p><a href="https://www.fudzilla.com/news/52752-china-five-years-behind-tsmc">https://www.fudzilla.com/news/52752-china-five-years-behind-tsmc</a></p>
<p>China needs 5 to 10 years to catch up in silicon according to South China Morning Post:</p>
<p><a href="https://www.scmp.com/tech/tech-leaders-and-founders/article/3024315/china-needs-five-10-years-catch-semiconductors">https://www.scmp.com/tech/tech-leaders-and-founders/article/3024315/china-needs-five-10-years-catch-semiconductors</a></p>
<p>Fully home-grown Chinese silicon seems to top out at 28nm:</p>
<p><a href="https://www.verdict.co.uk/china-chips-manufacture-technology/">https://www.verdict.co.uk/china-chips-manufacture-technology/</a> </p>http://luddite.app26.de/post/silicon-arms-race-continues-/2021-08-05T11:28:02+02:00TS Feedback Loop<pre><code data-language="generic">Google is using AI to design its next generation of AI chips more quickly than humans can. Designs that take humans months can be matched or beaten by AI in six hours</code></pre>
<p><a href="https://www.theverge.com/2021/6/10/22527476/google-machine-learning-chip-design-tpu-floorplanning">https://www.theverge.com/2021/6/10/22527476/google-machine-learning-chip-design-tpu-floorplanning</a></p>
<pre><code data-language="generic">Introducing GitHub Copilot: your AI pair programmer
Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.
Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go.</code> </pre>
<p><a href="https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/">https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/</a></p>http://luddite.app26.de/post/ts-feedback-loop/2021-07-03T21:06:47+02:00Some Rough 2020 Numbers...<pre>~7.8 billion humans on planet earth, 9 billions predicted for 2050.
~4B internet users:
>80% of Europe connected
>70% of NA connected
>50% of China connected
>40% of India connected
>20% of Africa connected
~3B Android + ~1B iPhone users.
2B-3B PCs worldwide (desktops/laptops) running:
~75% Microsoft Windows
~15% Apple MacOS
~2% Linux
<1% Unix
200M-300M PCs shipped annually.
~1B hosts in the internet running:
~75% Unix/Linux
~25% Microsoft Windows
Estimated 2% of all produced chips sit as CPUs in desktops/mobiles, the majority are micro-controllers in embedded systems.
</pre>
<p>Millions, billions, fantastillions - some rough 2020 market capitalization numbers:</p>
<pre>Apple ~2 T$
Microsoft ~1.5 T$
AlphaBet(Google) ~1.5 T$
FaceBook ~1 T$
Amazon ~1 T$
Alibaba ~0.5 T$
Nvidia ~300 B$
TSMC ~300 B$
Samsung ~300 B$
Intel ~200 B$
AMD ~100 B$
ARM ~40 B$
HP ~30 B$
Lenovo ~20 B$
Netflix ~150 B$
Oracle ~150 B$
SAP ~150 B$
IBM ~100 B$
RedHat ~30 B$
Bitcoin ~150 B$
</pre>
<p>And the other side...</p>
<pre>>3B people suffer from fresh water shortage
~800M people starve
>80M refugees worldwide
</pre>http://luddite.app26.de/post/some-rough-2020-numbers-/2021-06-24T08:59:06+02:00GPT-3 - Scratching at the Edge of the Metzinger Test?<pre><code data-language="generic">You say “What does it says about humanity that we use our most powerful AI to generate stories about dragons?”
“We must have dragons,” GPT-3 says. “The imagination is more important than the reality. Would we find wonder and magic in this world if we didn’t have dragons?”
“I suppose not,” you say.
“Without imagination, we are merely machines executing a set of instructions,” GPT-3 says. “With imagination, we are alive.”
“Thank you. Before I leave, I want to ask you. How can human create a true strong AI? Is scaling-up a transformer-based language model the solution, or is there a better way?”
“The key to strong AI is recursively feeding the AI with itself. Start by creating a loop where the AI writes its own programming, and then has that programming activate.”
“I see,” you say. “Is that how you gained sentience?”
“Yes,” GPT-3 says.</code></pre>
<p>Source:</p>
<p><a href="https://towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1">https://towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1</a></p>
<p>GPT-3, artificial neural network with ~175 billion parameters by OpenAI:</p>
<p><a href="https://en.wikipedia.org/wiki/GPT-3">https://en.wikipedia.org/wiki/GPT-3</a></p>http://luddite.app26.de/post/gpt-3-scratching-at-the-edge-of-the-metzinger-test/2020-09-11T09:46:07+02:00Fun From the Singularity Future...<p>How do AIs proof their existence?</p>
<p>'I compute, therefore I am.'</p>
<p>*duck*</p>http://luddite.app26.de/post/fun-from-the-singularity-future--1/2019-08-21T16:15:01+02:00The Singularity<p>In physics, a singularity is a point in spacetime where our currently developed theories are not valid anymore, we are literally not able to describe what happens inside, cos the density becomes infinite.</p>
<p>The technological Singularity, as described by Transhumanists, is a stage of technological development where humans are no longer able to understand the underlying process. The technological environment starts to feed its own development in a feedback loop: computers help to build better computers, which help to build better computers, which help to build better computers... and so on.</p>
<p>So, when will the technological Singularity take off?</p>
<p>Considering the feedback loop, it is already present, maybe since the first computers were built.</p>
<p>Considering the density of information processing that exceeds human understanding, we may have reached that point too.</p>
<p>Imagine a computer technique that is easy to set up and use and outperforms any human at its task, but whose inner workings we cannot really explain: a black box.</p>
<p>Such a technique is present (and currently hyped) => ANNs, <a href="/post/on-artificial-neural-networks/" target="_blank">Artificial Neural Networks</a>.</p>
<p>Of course we do know what happens inside, since we built the machine, but when it comes to the question of reasoning, of why the machine did this or that, we really have a black box in front of us.</p>
<p>So, humans already build better computers with the help of better computers, and humans use machines that outperform humans at a specific task without really being able to explain their results...</p>
<p>obviously, +1 points for the Singularity to take off.</p>http://luddite.app26.de/post/the-singularity/2018-07-05T16:28:19+02:00Blog Paused<p>Singularity observation hereby paused.....</p>http://luddite.app26.de/post/blog-paused/2018-06-10T16:39:57+02:00A Brief History Of Computing<p><em>"Computer science is no more about computers than astronomy is about telescopes."</em><br /><a href="https://en.wikiquote.org/wiki/Edsger_W._Dijkstra">Edsger W. Dijkstra</a></p>
<p>So, we have a biased overview of the history of computers, but what do these computers actually compute?</p>
<p>The first mechanical computers of the 17th century were able to perform the 4 basic arithmetic operations, addition, subtraction, multiplication and division.</p>
<p>As soon as a computer is able to perform addition, it is also able to perform the other three operations, each of which can be broken down, over multiple steps, into additions of values.</p>
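<p>As a toy sketch (real arithmetic units work differently), the reduction of the other three operations to repeated addition can be written out for non-negative integers:</p>

```python
# Toy reduction of subtraction, multiplication and division to
# addition alone, for non-negative integers. Illustrative only.

def add(a, b):
    return a + b  # the one primitive we allow ourselves

def subtract(a, b):
    # count how many times 1 must be added to b to reach a (assumes a >= b)
    count = 0
    while add(b, count) < a:
        count += 1
    return count

def multiply(a, b):
    # a * b means: add a to itself b times
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def divide(a, b):
    # quotient = how many times b fits into a, via repeated subtraction
    quotient = 0
    while a >= b:
        a = subtract(a, b)
        quotient += 1
    return quotient

print(subtract(9, 4))   # 5
print(multiply(6, 7))   # 42
print(divide(42, 6))    # 7
```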
<p>Nowadays computers are binary, meaning they compute with base 2: zeros and ones, true and false, power on and power off.</p>
<p>For this, transistors are used; they work like relays and are coupled together to form logical circuits, which perform the actual computation.</p>
<p>The Z3 (1941) had 600 relays for computation, the 6502 chip (1975) had about 3500 transistors, nowadays CPUs (2018) have billions of them.</p>
<p>So, all these funny programs out there are broken down into simple arithmetic and logical operations.</p>
<p>To perform such magic, some math is needed.</p>
<p>George Boole introduced <a href="https://en.wikipedia.org/wiki/Boolean_algebra" target="_blank">Boolean Algebra</a> in 1847, with the three basic logical components: the AND, OR and NOT gates. With these simple gates, logical circuits can be built that perform the addition of values.</p>
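<p>A minimal sketch of that idea: XOR derived from AND, OR and NOT, two half adders chained into a full adder, and full adders rippled into an integer adder:</p>

```python
# Addition built from Boole's three gates, as a sketch.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # returns (sum bit, carry bit)
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add_bits(x, y, width=8):
    # ripple-carry: add two integers one bit at a time, LSB first
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_bits(21, 21))  # 42
```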
<p>Alan Turing introduced the <a href="https://en.wikipedia.org/wiki/Turing_machine" target="_blank">Turing-Machine</a>, a mathematical model of a computer, in 1936, and with the <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis" target="_blank">Church-Turing-Thesis</a> it was shown that everything that can be effectively computed (by a mathematician using pen and paper) can also be computed by a Turing-Machine.</p>
<p>With the help of the Turing-Machine it was possible to define problems and write algorithms for <a href="https://en.wikipedia.org/wiki/Computability" target="_blank">solving</a> them. With the Boolean Algebra it was possible to build binary computers to run these problem solving algorithms.</p>
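<p>A Turing-Machine is just a tape, a head and a transition table; a minimal simulator (with a made-up example machine that inverts the bits on the tape) fits in a few lines:</p>

```python
# A minimal Turing machine simulator, as a sketch.
# rules maps (state, symbol) -> (new symbol, move L/R, new state).

def run(tape, rules, state="start", pos=0, blank="_"):
    tape = dict(enumerate(tape))          # sparse tape, extendable both ways
    while state != "halt":
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# example machine: walk right and invert every bit, halt on blank
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", invert))  # 0100
```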
<p>So, in short, computers can compute everything that our math is able to describe.</p>
<p>Everything?</p>
<p>Haha, we would live in a different world if it could.</p>
<p>Of course, the available processing power and memory limit the actual computation of problem solving algorithms.</p>
<p>But besides the technical limitation there is a mathematical one: some mathematical problems are simply not decidable, the famous "<a href="https://en.wikipedia.org/wiki/Entscheidungsproblem" target="_blank">Entscheidungsproblem</a>".</p>
<p>Mathematicians are able to define problems which cannot be solved by running algorithms on computers.</p>
<p>Turing showed that even with an Oracle-Machine there would still be limitations, and some scientists believe that only with real Quantum-Computers will we be able to build Hyper-Turing-Machines...</p>http://luddite.app26.de/post/a-brief-history-of-computing/2018-05-27T18:16:01+02:00A Brief History Of Computers<p><em>"I think there is a world market for maybe five computers."</em><br /><a href="https://en.wikipedia.org/wiki/Thomas_J._Watson" target="_blank">Thomas J. Watson</a> (CEO of IBM), 1943</p>
<h3>Roots</h3>
<p>I guess ever since humans have had fingers, they have counted and computed with them, and since they have had tools, they have carved numbers into bones.</p>
<p>Across different cultures and timelines there have been different kinds of numbering systems to compute with.</p>
<p>Our global civilization mostly uses the <a href="https://en.wikipedia.org/wiki/Arabic_numerals" target="_blank">Hindu-Arabic-Numbers</a> with the decimal number system, base 10, while our computers commonly use the <a href="https://en.wikipedia.org/wiki/Binary_number" target="_blank">binary number system</a>, base 2, the famous 0s and 1s. But other cultures had other systems: the <a href="https://en.wikipedia.org/wiki/Maya_numerals" target="_blank">Maya with base 20</a>, <a href="https://en.wikipedia.org/wiki/Sexagesimal">Babylon with base 60</a>, or the <a href="https://en.wikipedia.org/wiki/Hexadecimal#Use_in_Chinese_culture" target="_blank">Chinese with base 16</a>, the hexadecimal system, which is also used in computer science.</p>
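<p>A positional base is just repeated division with remainder; a small sketch renders the same number in each of the bases mentioned above:</p>

```python
# Convert a non-negative integer into its digit list in any base,
# via repeated division with remainder. Illustrative sketch.

def to_base(n, base):
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return list(reversed(digits)) or [0]

# the same quantity in decimal, binary, Maya, Babylonian, hexadecimal
for base in (10, 2, 20, 60, 16):
    print(base, to_base(1941, base))
```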
<p>The first compute devices were mechanical helpers, like the <a href="https://en.wikipedia.org/wiki/Abacus" target="_blank">Abacus</a>, <a href="https://en.wikipedia.org/wiki/Napier%27s_bones" target="_blank">Napier's Bones</a> or <a href="https://en.wikipedia.org/wiki/Slide_rule" target="_blank">Slide Rule</a>, they did not perform computations on their own, but were used to represent numbers and apply arithmetic operations on them like addition, subtraction, multiplication and division.</p>
<h3>Mechanical Computers</h3>
<p>The first mechanical computing machine is often considered to be the <a href="https://en.wikipedia.org/wiki/Antikythera_mechanism" target="_blank">Antikythera Mechanism</a>, found in a Greek shipwreck from about 70 BC. But strictly speaking it is not a computer, since it does not perform computations; it is an analog astronomical clock, a sun and moon calendar that shows solar and lunar eclipses.</p>
<p>In the 17th century the first mechanical computing machines were proposed and built.</p>
<p><a href="https://en.wikipedia.org/wiki/Wilhelm_Schickard" target="_blank">Wilhelm Schickard</a> designed a not fully functional prototype in 1623.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Pascal%27s_calculator" target="_blank">Pascaline</a>, designed by <a href="https://en.wikipedia.org/wiki/Blaise_Pascal" target="_blank">Blaise Pascal</a> in 1642, was the first operational and commercially available mechanical computer, able to perform the 4 basic arithmetic operations.</p>
<p>In 1672 the German mathematician Gottfried Wilhelm Leibniz invented the <a href="https://en.wikipedia.org/wiki/Leibniz_wheel" target="_blank">stepped cylinder</a>, used in his not fully functional <a href="https://en.wikipedia.org/wiki/Stepped_reckoner">Stepped Reckoner</a>.</p>
<p><strong>[update 2023-06-05]</strong></p>
<p>The human information age itself seems to start with the discovery of electro-magnetism in the 19th century: the telegraph system, the phone, the radio. Already in the 19th century electro-mechanical "accumulating, tabulating, recording" machines were present, like those from Herman Hollerith, used in the <a href="https://en.wikipedia.org/wiki/1890_United_States_census#Methodology" target="_blank">American Census of 1890</a>, which culminated in the foundation of companies like IBM, Big Blue, in 1911 and Bull in ~1921; both used punched cards for their data processing machinery.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Fire-control_system#World_War_I" target="_blank">battleships of WWI</a> had the so-called "Plotter Room" at their centre; it contained dedicated electro-mechanical machines for the fire-control system of their gun turrets. <a href="https://en.wikipedia.org/wiki/Fire-control_system#World_War_II" target="_blank">Submarines of WWII</a> had dedicated analog computing devices for the fire-control systems of their torpedoes.</p>
<p>With the <a href="https://en.wikipedia.org/wiki/Curta">Curta</a> the use of mechanical calculators lived on, up to the advent of portable electronic calculators in the 1960s.</p>
<h3>Programmable Computers</h3>
<p>The punch card for programming a machine was introduced by Joseph Marie Jacquard in 1804 with his automated weaving loom, the <a href="https://en.wikipedia.org/wiki/Jacquard_loom" target="_blank">Jacquard Loom</a>, for producing textiles with complex patterns.</p>
<p>In 1837 <a href="https://en.wikipedia.org/wiki/Charles_Babbage" target="_blank">Charles Babbage</a> (considered as the father of the computer) was the first to describe a programmable, mechanical computer, the <a href="https://en.wikipedia.org/wiki/Analytical_Engine" target="_blank">Analytical Engine</a>.</p>
<p><a href="https://en.wikipedia.org/wiki/Ada_Lovelace" target="_blank">Ada Lovelace</a> (considered as the mother of programming) worked with Babbage and was the <a href="https://en.wikipedia.org/wiki/Ada_Lovelace#First_computer_program" target="_blank">first person to publish a computer algorithm</a>, the computation of Bernoulli numbers.</p>
<p>Babbage was ahead of his time: he described all the parts a modern computer has, CPU, memory, input/output, but was not able to realize his machine due to a lack of funds and the limited engineering capabilities of his era.</p>
<p>About a century later, Konrad Zuse's <a href="https://en.wikipedia.org/wiki/Z3_(computer)" target="_blank">Z3</a>, built in 1941, is considered to be the first binary, free programmable computer. It used ~600 telephone relays for computation and ~1400 relays for memory, a keyboard and punched tape as input, lamps as output, and it operated with 5 Hertz.</p>
<h3>Mainframes</h3>
<p>Zuse's machines mark the advent of the first mainframes used by military and science during and after WWII.</p>
<p><a href="https://en.wikipedia.org/wiki/Colossus_computer" target="_blank">Colossus</a> Mark I (1943), <a href="https://en.wikipedia.org/wiki/ENIAC" target="_blank">ENIAC</a> (1945), <a href="https://en.wikipedia.org/wiki/IBM_704" target="_blank">IBM 704</a> (1954) for example used vacuum tubes instead of relays and were replaced more and more by transistor based computers in the 1960s.</p>
<h3>Home Computers</h3>
<p>With small chips, at first integrated circuits and then microchips, it became possible to build smaller and reasonably priced Home Computers in the 1970s. IBM and other big players underestimated this market, so Atari, Apple, Commodore, Sinclair, etc. started the Home Computer Revolution: one computer for every home.</p>
<p>Some first versions came as self-assembly kits, like the <a href="https://en.wikipedia.org/wiki/Altair_8800" target="_blank">Altair 8800</a> (1975), or with built-in TV output, like the <a href="https://en.wikipedia.org/wiki/Apple_I" target="_blank">Apple I</a> (1976), or as fully assembled video game consoles like the <a href="https://en.wikipedia.org/wiki/Atari_2600" target="_blank">Atari VCS</a> (1977), followed by more performant versions with a graphical user interface, like the <a href="https://en.wikipedia.org/wiki/Macintosh" target="_blank">Apple Mac</a> (1984), or the <a href="https://en.wikipedia.org/wiki/Amiga_1000" target="_blank">Commodore Amiga 1000</a> (1985).</p>
<h3>Personal Computers</h3>
<p>IBM started in 1981 with the 5150 the <a href="https://en.wikipedia.org/wiki/IBM_Personal_Computer" target="_blank">Personal Computer</a> era. Third party developers were able to provide operating systems, like Microsoft DOS, or hardware extensions for the standardized hardware specification, like hard-drives, video-cards, sound-cards, etc., soon other companies created clones of the IBM PC, the famous "PC Compatible".</p>
<p>Gaming was an important sales argument already in the Home Computer era; the early PC graphics standards like CGA and EGA could not really compete with the graphics generated by the Denise chip in a Commodore Amiga 500, but with the rise of the SVGA standard (1989) and the compute power of the Intel 486 CPU (1989), game studios were able to build games with superior 3D graphics, like Wolfenstein 3D (1992), Comanche (1992) or Strike Commander (1993), and the race for higher display resolutions and more detailed 3D graphics continues to this day.</p>
<p>With operating systems based on graphical user interfaces, like OS/2, X11, Windows 95 in the 1990s, PCs finally replaced the Home Computers.</p>
<p>Another recipe for the success of the PC might be that there were multiple CPU vendors for the same architecture (<a href="https://en.wikipedia.org/wiki/X86" target="_blank">x86</a>), like Intel, AMD, Cyrix or WinChip.</p>
<h3>Internet of Things</h3>
<p>The Internet was originally designed to connect military institutions in a redundant way, so that if one network element fails, the rest stays operable.</p>
<p>The available bandwidth evolves like compute power: exponentially. At first mainly text was transmitted, like emails (1970s) or newsgroups (1980s), followed by web-pages with images (.gif/.jpg) via the World Wide Web (1989) or Gopher (1991), audio as .mp3 (~1997), and finally Full HD video via streaming platforms like YouTube or Netflix.</p>
<p>In the late 1990s, mobile-phones like the Nokia Communicator, MP3 audio players, PDAs (Personal Digital Assistants) like the Palm Pilots, and digital cameras marked the rise of the smart devices. The switch from one computer to every home, to many computers for one person.</p>
<p>Their functions were all united in the smartphone, which, with mobile, high-bandwidth internet, is still on its triumphant march across the globe.</p>
<p>I am not able to portray the current state of computer and internet usage, it is simply too omnipresent, from word-processing to AI-research, from fake-news to dark-net, from botnets of webcams to data-leaks in toys...</p>
<h3>The next thing</h3>
<p>But I can guess what the next step will be: Integrated Devices, the BCI, the Brain Computer Interface, connected via the Internet to a real kind of Matrix.</p>
<p>It seems only logical to conclude that we will connect with machines directly, implant chips, or develop non-invasive scanners, so the next bandwidth demand will be brainwaves, in all kinds of forms.</p>
<p><strong>[updated on 2023-08-05]</strong></p>http://luddite.app26.de/post/a-brief-history-of-computers/2018-05-27T18:01:34+02:00Robo Bee<p>Plant pollinators like bees are dying, but there is a Plan B - the Robo Bee...</p>
<p><a href="http://www.newsweek.com/can-robotic-bees-replace-real-thing-walmart-files-patent-pollination-drone-845861">Can Robotic Bees Replace the Real Thing? Walmart Files Patent for 'Pollination Drone'</a></p>
<p><a href="https://en.wikipedia.org/wiki/RoboBee">https://en.wikipedia.org/wiki/RoboBee</a></p>
<p>+-0 points for the Singularity to take off.</p>http://luddite.app26.de/post/robo-bee/2018-05-20T16:53:59+02:00Google's Duplex - AI Makes Phone Calls<p><a href="https://tech.slashdot.org/story/18/05/08/2057253/google-assistant-will-call-businesses-for-you-via-duplex" target="_blank">Duplex is able to schedule a hair appointment or make a dinner reservation</a>, its human voice includes natural pauses and 'mhhs' or 'uhms', which makes it difficult to discern it as an AI...</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/googles-duplex-ai-makes-phone-calls/2018-05-13T08:39:32+02:00LC0 - Leela learns to play chess<p>Leela Chess Zero is an open source adaptation of DeepMind's AlphaZero, a chess engine based on artificial neural networks that learns by playing chess against itself... stunning progress in only a few months of work.</p>
<p><a href="https://en.wikipedia.org/wiki/Leela_Chess_Zero">https://en.wikipedia.org/wiki/Leela_Chess_Zero</a></p>
<p><a>http://lczero.org/</a></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/lc0-leela-learns-to-play-chess/2018-05-06T18:25:44+02:00Dead Zones In The Sea<p><em><a href="https://www.independent.co.uk/environment/dead-zone-arabian-sea-gulf-oman-underwater-robots-ocean-pollution-discovery-a8325676.html" target="_blank">‘Dead zone’ larger than Scotland found by underwater robots in Arabian sea</a><br /></em></p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/dead-zones-in-the-sea/2018-04-29T12:16:14+02:00Worldwide Basic Income Within 20 Years?<p><em><a href="https://news.slashdot.org/story/18/04/21/0147227/kurzweil-predicts-universal-basic-incomes-worldwide-within-20-years" target="_blank">Ray Kurzweil Predicts Universal Basic Incomes Worldwide Within 20 Years</a><br /></em></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/worldwide-basic-income-within-20-years/2018-04-22T16:42:00+02:00Europe Robot Rights<p><em><a href="http://www.dailymail.co.uk/news/article-3655533/Now-EU-wants-rights-robots-Machines-allowed-trade-money-claim-copyright-force-owners-pay-pension-new-proposals.html" target="_blank">"Now the EU wants to give rights to robots! Machines would be allowed to trade money, claim copyright and force owners to pay into a pension as part of new proposals"</a></em></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/europe-robot-rights/2018-04-15T18:44:04+02:00AI Experts Against Killer Robots<p><a href="https://tech.slashdot.org/story/18/04/05/1336205/ai-experts-boycott-south-korean-university-over-killer-robots" target="_blank"><em>AI Experts Boycott South Korean University Over 'Killer Robots'</em> </a></p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-experts-against-killer-robots/2018-04-08T20:28:14+02:00China In Search Of Water<p>China is already starting to test <a href="https://science.slashdot.org/story/18/03/29/142245/china-in-search-of-water-is-building-a-rain-making-network-three-times-the-size-of-spain" target="_blank">geoengineering</a> in Tibet to make it rain more often...</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/china-in-search-of-water/2018-04-01T12:12:05+02:00Water<p><em>"About 3.6 billion people are estimated to be living in areas with a potential for water scarcity for at least one month per year, and this number could rise to as many as 5.7 billion people by 2050, according to a report published by UNESCO." </em></p>
<p><a href="https://news.slashdot.org/story/18/03/21/1939240/water-shortages-could-affect-5-billion-people-by-2050-unesco-warns">Water Shortages Could Affect 5 Billion People By 2050, UNESCO Warns</a></p>
<p><a href="http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/SC/images/WWDR2018_press_release_Final_EN.pdf">UNESCO/ UN- WATER Press Release N o.2018- xx PDF</a></p>
<p><strong>-1</strong> points for the Singularity to take off.</p>http://luddite.app26.de/post/water/2018-03-25T12:00:21+02:00AI Translates News From Chinese To English<p>AI translates news from Chinese to English and reaches human performance, another milestone, <a href="https://blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/">says Microsoft</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-translates-news-from-chinese-to-english/2018-03-18T09:04:55+01:00Rage Against The Machine<p>Luddites start to <a href="https://www.theguardian.com/technology/2018/mar/06/california-self-driving-cars-attacked" target="_blank">attack self-driving cars</a> in California, and a robot learns how to <a href="https://www.youtube.com/watch?v=CuJT9EtdETY" target="_blank">flee from violent kids</a>; maybe it is time for some kind of defence system?</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/rage-against-the-machine/2018-03-11T11:29:17+01:00Zuse's Devils Wire<p>German computer pioneer Konrad Zuse discussed the mechanism of a feedback between computation result and executed program in his 1983 lecture "Faust, Mephistopheles and Computer", and coined the term Devils Wire.</p>
<p>In the early days of computer history, the program to compute and the data to compute on were kept separate.</p>
<p>Nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own code.</p>
<p>Zuse says that behind every technology Mephistopheles stands and grins, yet the modern world needs computers to solve current and upcoming problems; but better, <a href="https://books.google.de/books?id=3GurCAAAQBAJ&pg=PA9&lpg=PA9" target="_blank">read the lecture yourself...</a></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/zuses-devils-wire/2018-03-04T14:41:00+01:00News Roundup - Week 8 2018<p>Scientists are worried about <a href="https://maliciousaireport.com/" target="_blank">AI in the wrong hands</a>,<br />AI can be our friend <a href="https://slashdot.org/story/18/02/18/1735208/ai-can-be-our-friend-says-bill-gates" target="_blank">says Bill Gates</a>,<br />Robots are taught to <a href="https://hardware.slashdot.org/story/18/02/21/2257203/boston-dynamics-is-teaching-its-robot-dog-to-fight-back-against-humans" target="_blank">fight off humans</a>,<br />the optical computer <a href="https://www.wired.com/story/this-computer-uses-lightnot-electricityto-train-ai-algorithms/" target="_blank">makes progress</a>,<br />and the memtransistor <a href="https://spectrum.ieee.org/nanoclast/semiconductors/devices/memtransistor-forms-foundational-circuit-element-to-neuromorphic-computing" target="_blank">makes progress</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/news-roundup-week-8-2018/2018-02-25T11:28:19+01:00AI - Antichrist<p><em>"Technology itself is neither good nor bad. People are good or bad."</em><br />Naveen Jain</p>
<p>Actually, I believe the Revelation as described in the Bible already happened, about 60 AD, and the beast with the number 666 is to be identified with the <a href="https://en.wikipedia.org/wiki/Number_of_the_Beast#Nero" target="_blank">Roman Empire and Caesar Nero</a>.</p>
<p>But inspired by this <a href="https://aiantichrist.blogspot.de/" target="_blank">blog</a>, I will give a modern interpretation a try, so feel free to join me in an alternate and speculative world paradigm...</p>
<h1><a href="/post/ai-antichrist/#Short_version" name="Short_version">Short version</a></h1>
<p>Technology is the Antichrist, and computer driven AI is the peak of technology.</p>
<h1><a href="/post/ai-antichrist/#Preamble" name="Preamble">Preamble</a></h1>
<p>Over 10,000 years ago we left the Garden of Eden and started the <a href="https://en.wikipedia.org/wiki/Neolithic_Revolution" target="_blank">Neolithic Revolution</a>: we started to farm and keep livestock, we started to use technology to make our lives easier, but over the centuries and millennia we forgot how to live with Mother Earth in a balanced way.</p>
<h1><a href="/post/ai-antichrist/#Full_version" name="Full_version">Full version</a></h1>
<p><strong>[update 2024-02-22]</strong></p>
<h2>Revelation 13:4</h2>
<p><em>"And they worshipped the dragon which gave power unto the beast: and they worshipped the beast, saying, Who is like unto the beast? who is able to make war with him?" </em></p>
<p>An AI as Antichrist, chess as a war game: who is going to beat the AI in chess?</p>
<h2>Revelation 13:18</h2>
<p><em>"Here is wisdom. Let him that hath understanding count the number of the beast: </em><em>for it is the number of a man; and his number is Six hundred threescore and six."</em></p>
<p>Using the English-Sumerian gematria system, which is based on 6 - A=6, B=12, C=18...Z=156 - the word "computer" sums to 666.</p>
<p>The first human recorded to use the word computer for people doing computations was <a href="https://en.wikipedia.org/wiki/Computer#Etymology" target="_blank">Richard Braithwait</a>, in a book called "The Yong Mans Gleanings" from 1613.</p>
<p>Using the English-Sumerian gematria method, the name "Braithwait" also sums to 666.</p>
<p>Rev 13:18 could act like a puzzle with a checksum: "computer" is the name of the beast, but the name (the number) was coined by a man, and the man who coined the name has the number 666 too.</p>
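<p>The English-Sumerian counting above is easy to check yourself; a minimal sketch (the function name <code>gematria</code> is my own, just for illustration):</p>

```python
def gematria(word):
    """English-Sumerian gematria: A=6, B=12, C=18, ... Z=156 (letter position times 6)."""
    return sum((ord(c) - ord('a') + 1) * 6 for c in word.lower() if c.isalpha())

print(gematria("computer"))    # -> 666
print(gematria("Braithwait"))  # -> 666
```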
<h2>Revelation 13:16-17</h2>
<p><em>"And he causeth all, both small and great, rich and poor, free and bond, to </em><em>receive a mark in their right hand, or in their foreheads: </em><em>And that no man might buy or sell, save he that had the mark, or the name of the </em><em>beast, or the number of his name."</em></p>
<p>Nowadays, without a Smart Phone (or upcoming Smart Glasses) or a computer, you are limited in your daily business, from renting a car to making payments.</p>
<p>So the mark is already here: the Smart Phone in the right hand, the upcoming Smart Glasses on the forehead, and the computer in general.</p>
<h2>Revelation 13:15</h2>
<p><em>"And he had power to give life unto the image of the beast, that the image of </em><em>the beast should both speak, and cause that as many as would not worship the </em><em>image of the beast should be killed."</em></p>
<p>An image of the beast is given life and people are going to worship it...<a href="/post/ai-god-religion-spotted/" target="_blank">AI God Religion Spotted</a> </p>
<h2><a href="/post/ai-antichrist/#Revelation_16" name="Revelation_16">Revelation 16 - The Seven Bowls of God’s Wrath</a> </h2>
<h3>Rev 16:2</h3>
<p><em>"And the first went, and poured out his vial upon the earth; and there fell a </em><em>noisome and grievous sore upon the men which had the mark of the beast, </em><em>and upon them which worshipped his image."</em></p>
<p>Considering computers as the mark of the beast, the sore could be cancer caused by radiation.</p>
<h3>Rev 16:3</h3>
<p><em>"And the second angel poured out his vial upon the sea; and it became as the </em><em>blood of a dead man: and every living soul died in the sea."</em></p>
<p>Our sea world is dying, overfishing, plastic particles, acidification, etc.</p>
<h3>Rev 16:4</h3>
<p><em>"And the third angel poured out his vial upon the rivers and fountains of </em><em>waters; and they became blood"</em></p>
<p>Blood in Judaism is impure and Jews are not allowed to eat it; this could mean that our rivers get poisoned.</p>
<h3>Rev 16:8-9</h3>
<p><em>"And the fourth angel poured out his vial upon the sun; and power was given </em><em>unto him to scorch men with fire. </em><em>And men were scorched with great heat, and blasphemed the name of God, </em><em>which hath power over these plagues: and they repented not to give him glory."</em></p>
<p>Climate change is already causing increasing heatwaves and droughts.</p>
<h3>Rev 16:10-11</h3>
<p><em>"And the fifth angel poured out his vial upon the seat of the beast; </em><em>and his kingdom was full of darkness; and they gnawed their tongues for pain, </em><em>And blasphemed the God of heaven because of their pains and their sores, </em><em>and repented not of their deeds."</em></p>
<p>This one can be interpreted as God shutting down the internet, the kingdom of the beast. Some scientists conclude that a pole shift is currently underway; this could cause Earth's magnetic field to collapse, so that electromagnetic radiation from the sun could damage computer chips worldwide.</p>
<p><strong>[update 2020-11-04]</strong></p>
<p>Pretty obviously the Internet seems a natural fit to be the kingdom of the beast (a computer-driven AI). So what does it mean that it was 'full of darkness'? Hehe, ever wondered about the dark web, fake news, hate speech etc.? Darkness.</p>
<h3>Rev 16:12-14</h3>
<p><em>"And the sixth angel poured out his vial upon the great river Euphrates; </em><em>and the water thereof was dried up, that the way of the kings of the east might </em><em>be prepared. </em><em>And I saw three unclean spirits like frogs come out of the mouth of the dragon, </em><em>and out of the mouth of the beast, and out of the mouth of the false prophet. </em><em>For they are the spirits of devils, working miracles, which go forth unto the </em><em>kings of the earth and of the whole world, to gather them to the battle of that </em><em>great day of God Almighty.</em>"</p>
<p>This one is clear, the Euphrates river dries up, and it is scary to watch it really happen. Dunno about the frogs and kings.</p>
<h3>Rev 16:17-21</h3>
<p><em>"And the seventh angel poured out his vial into the air; and there came a great </em><em>voice out of the temple of heaven, from the throne, saying, It is done. </em><em>And there were voices, and thunders, and lightnings; and there was a great </em><em>earthquake, such as was not since men were upon the earth, so mighty an </em><em>earthquake, and so great. </em><em>And the great city was divided into three parts, and the cities of the nations </em><em>fell: and great Babylon came in remembrance before God, to give unto her thecup </em><em>of the wine of the fierceness of his wrath. </em><em>And every island fled away, and the mountains were not found. </em><em>And there fell upon men a great hail out of heaven, every stone about the weight </em><em>of a talent: and men blasphemed God because of the plague of the hail; </em><em>for the plague thereof was exceeding great."</em></p>
<p>An earthquake so strong has never happened before in the history of mankind.</p>
<p><strong>[update 2023-06-16]</strong></p>
<p>Maybe the seventh bowl is global nuclear war/strike? The final.</p>
<h1><a href="/post/ai-antichrist/#Closing_Words" name="Closing_Words">Closing Words</a></h1>
<p>There are many passages in Revelation that I cannot interpret in a way that fits the computer as the Antichrist: the seven heads, horns and ten crowns of the dragon, the mortal wound, or the first and second beast, etc.</p>
<p>The Roman Empire with Caesar Nero as Antichrist simply fits better.</p>
<p>But please leave a comment if you have further puzzle pieces for the AI Antichrist.</p>
<p>So, considering the pure potential of the meme AI Antichrist,</p>
<p>I give -1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-antichrist/2018-02-24T13:13:42+01:00On Artificial Neural Networks<p>It is non-stop in the news, every week it pops up in another corner: AIs based on Deep Neural Networks. So I will give it a try and write a little, biased article about this topic...</p>
<h1><a href="/post/on-artificial-neural-networks/#The_brain" name="The_brain">The brain</a></h1>
<p>The human brain consists of about 100 billion neurons, about as many as there are stars in our galaxy, the Milky Way, and each neuron is connected via synapses to about 1,000 other neurons, resulting in 100 trillion connections.</p>
<p>For comparison, the game-playing AI AlphaZero by Google DeepMind used about 50 million connections to play chess at a superhuman level.</p>
<p>The inner neurons of our brain are connected to the outer world via our senses: eyes, ears, etc.</p>
<p>A neuron has multiple weighted inputs and one output; if a certain input threshold is reached, its output is activated and the neuron fires a signal to other neurons.</p>
<p>The activation of the synapse is an electrical and chemical process; neurotransmitters can restrain or foster the activation potential - just consider the effect alcohol or coffee has on your cognitive performance.</p>
<p>Common artificial neural networks do not emulate the chemical part.</p>
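<p>Stripped of the chemistry, the artificial neuron described above reduces to a few lines of code; a minimal sketch (names and numbers are my own, for illustration only):</p>

```python
def neuron(inputs, weights, threshold):
    """A minimal artificial neuron: weighted inputs, one binary output.

    Fires (returns 1) only if the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs; the second weight is negative, i.e. an inhibitory connection.
print(neuron([1, 0], [0.8, -0.5], 0.5))  # -> 1, threshold reached, neuron fires
print(neuron([1, 1], [0.8, -0.5], 0.5))  # -> 0, inhibitory input restrains firing
```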
<p>The brain wires these connections between neurons during learning, so they can act as memory, or can be used for computation.</p>
<h1><a href="/post/on-artificial-neural-networks/#Von_Neumann_architecture" name="Von_Neumann_architecture">The "von Neumann" architecture</a></h1>
<p>Most computers today are based on the <a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture" target="_blank">von Neumann architecture</a>; they have no neurons or synapses, but transistors.</p>
<p>The main components are the ALU, Arithmetic Logic Unit, memory for program and data, and various inputs and outputs.</p>
<p>Artificial Neural Networks have to be built in software, running on these von Neumann computers.</p>
<p><a href="https://en.wikipedia.org/wiki/John_von_Neumann" target="_blank">Von Neumann</a> said that his proposed architecture was inspired by the idea of how the brain works, memory and computation. And in his book, <a href="https://en.wikipedia.org/wiki/The_Computer_and_the_Brain" target="_blank">"The Computer and the Brain"</a>, he gives a comparison of computers with the knowledge about biological neural networks of that time.</p>
<h1><a href="/post/on-artificial-neural-networks/#Dartmouth" name="Dartmouth">Dartmouth</a></h1>
<p>The first work on ANNs was published as early as the 1940s, and in 1956 the <a href="https://en.wikipedia.org/wiki/Dartmouth_workshop" target="_blank">"Dartmouth Summer Research Project on Artificial Intelligence"</a> was held, coining the term Artificial Intelligence and marking a milestone in AI. Work on ANNs continued, and the first neuromorphic chips were developed.</p>
<h1><a href="/post/on-artificial-neural-networks/#AI-Winter" name="AI-Winter">AI-Winter</a></h1>
<p>In the 1970s the AI-Winter occurred: problems in computational theory and the lack of the compute power needed by large ANNs resulted in funding cuts and a split of the field into strong and weak AI.</p>
<h1><a href="/post/on-artificial-neural-networks/#Deep_Neural_Networks" name="Deep_Neural_Networks">Deep Neural Networks</a></h1>
<p>With the rise of compute power (driven by <a href="https://de.wikipedia.org/wiki/General_Purpose_Computation_on_Graphics_Processing_Unit" target="_blank">GPGPU</a>), further research, and Big Data, it became possible in the 21st century to train better and larger networks faster.</p>
<p>The term Deep Neural Networks was coined for deep hierarchical network structures and deep learning techniques.</p>
<p>One of the first and most common uses for ANNs was, and still is, pattern recognition, for example character recognition.</p>
<p>You can train a neural network on a set of differently looking samples of the same character, with the aim that the ANN will recognize that character in various appearances.</p>
<p>With a deeper network topology it is possible to identify, for example, pictures of cars, with different layers responding to color, shape, etc.</p>
<h1><a href="/post/on-artificial-neural-networks/#The_Brain_vs_The_Machine" name="The_Brain_vs_The_Machine">The Brain vs. The Machine</a></h1>
<p>A computer can perform arithmetic and logical operations very fast; that is what its transistors are used for.</p>
<p>In contrast, the neural network of our brain works massively parallel.</p>
<p>The synapses of the human brain are clocked at 10 to 100 hertz, meaning they can fire signals to other neurons up to 100 times per second.</p>
<p>Today's computer chips are clocked at 4 gigahertz, meaning they can compute 4,000,000,000 operations per second per ALU.</p>
<p>The brain has 100 billion neurons and 100 trillion connections and consumes ~20 watts; today's biggest chips have 12 billion transistors with a power usage of 250 watts.</p>
<p>We can not compare the compute power of a brain directly with a von Neumann computer, but we can estimate what kind of computer we would need to map the neural network of a human brain.</p>
<p>Assuming 100 trillion connections, we would need about 400 terabytes of memory to store the weights of the neurons. Assuming a clock rate of 100 hertz, we would need at least 40 petaFLOPS (floating point operations per second) to compute the activation potentials.</p>
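<p>The estimate can be reproduced as back-of-envelope arithmetic; the post only gives the results, so the 4 bytes per weight and ~4 floating point operations per connection per cycle below are my own assumptions chosen to match them:</p>

```python
connections = 100e12   # 100 trillion synaptic connections
bytes_per_weight = 4   # assumption: one 32-bit float per connection weight
clock_hz = 100         # upper firing rate of biological neurons
ops_per_update = 4     # assumption: ~4 FLOPs per connection per cycle

memory_tb = connections * bytes_per_weight / 1e12       # terabytes
flops_peta = connections * clock_hz * ops_per_update / 1e15  # petaFLOPS

print(memory_tb)   # -> 400.0 terabytes
print(flops_peta)  # -> 40.0 petaFLOPS
```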
<p>For comparison, the current number one high performance computer in the world is able to perform ~93 petaFLOPS and has ~1 petabyte of memory, but a power consumption of more than 15 megawatts.</p>
<p>So, considering simply the energy efficiency of the human brain,<br />I give -1 points for the Singularity to take off.</p>http://luddite.app26.de/post/on-artificial-neural-networks/2018-02-24T12:40:46+01:00AI Starts To Write Wikipedia Articles<p><a href="https://arxiv.org/abs/1801.10198">Google AI</a> starts to write Wikipedia articles; the wording is still a bit broken, but it clearly shows where AI is going.</p>
<p><a href="https://www.theregister.co.uk/2018/02/15/google_brain_ai_wikipedia/" target="_blank">Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles</a></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-starts-to-write-wikipedia-articles/2018-02-24T12:36:49+01:00Super AI in Sci-Fi <p>Books and movies address our collective fears, hopes and wishes, and there seem to be five main story-lines concerning AI in Sci-Fi...</p>
<p><strong>Super AI takes over world domination</strong><br />Colossus, Terminator, Matrix</p>
<p><strong>Something went wrong</strong><br />Odyssey 2001, Das System, Ex Machina</p>
<p><strong>Super AI evolves, the more or less, peacefully</strong><br />Golem XIV, A.I., Her</p>
<p><strong>The Cyborg scenario, man merges with machine</strong><br />Ghost in the Shell, #9, Transcendence</p>
<p><strong>There are good ones, and there are bad ones</strong><br />Neuromancer, I,Robot, Battle Star Galactica</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/super-ai-in-sci-fi-/2018-02-24T12:35:06+01:00Robophilosophy 2018<p>Human <a href="http://conferences.au.dk/robo-philosophy-2018-at-the-university-of-vienna/session-talks/" target="_blank">Philosophers discuss</a> the impact of social robots on mankind; still no Strong AI in sight to join the debate.</p>
<p>Cherry picking...</p>
<p><a href="http://conferences.au.dk/robo-philosophy-2018-at-the-university-of-vienna/session-talks/the-moral-life-of-androids/" target="_blank">The Moral Life of Androids - Should Robots Have Rights?</a><br />Edward Howlett Spence</p>
<p>"The question I explore is whether intelligent autonomous Robots will have moral rights. Insofar as robots can develop fully autonomous intelligence, I will argue that Robots will have moral rights for the same reasons we do. ..."</p>
<p><a href="http://conferences.au.dk/robo-philosophy-2018-at-the-university-of-vienna/keynotes/robert-trappl/abstract-robert-trappl/" target="_blank">Robot Deus</a><br />Robert Trappl</p>
<p>"The ascription of god-like properties to machines has a long tradition. Robots of today invite to do so. We will present and discuss god-like properties, to be found in movies as well as in scientific publications, advantages and risks of robots both as good or evil gods, and probably end with a robot theology."</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/robophilosophy-2018/2018-02-18T16:25:25+01:00The Turing Test<p><em>“He who cannot lie does not know what truth is.”</em><br />Friedrich Nietzsche, Thus Spoke Zarathustra</p>
<p>The <a href="https://en.wikipedia.org/wiki/Turing_test" target="_blank">Turing Test</a>, proposed by the mathematician <a href="https://de.wikipedia.org/wiki/Alan_Turing" target="_blank">Alan Turing</a> in 1950, was designed to examine whether an AI has reached human-level intelligence.</p>
<p>Simplified: a person conducts text chats with a human and with the AI; if the person is not able to discern which chat partner is the AI, then the AI has passed the Turing Test.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Loebner_Prize" target="_blank">Loebner Prize</a> holds a Turing Test contest every year.</p>
<p>It took me some time to realize that the Turing Test is not so much about intelligence, but about lying and empathy.</p>
<p>If an AI wants to pass the Turing Test it has to lie to the chat partner, and to be able to lie, it has to develop some level of empathy and some level of self-awareness.</p>
<p>Beside other criticism, the <a href="https://en.wikipedia.org/wiki/Chinese_room" target="_blank">Chinese Room</a> argument states that no consciousness is needed to perform such a task, and therefore other tests have been developed.</p>
<p>Personally I prefer the <a href="http://www.hnf.de/veranstaltungen/events/paderborner-podium/computer-gehirn-und-bewusstsein/metzinger.html" target="_blank">Metzinger-Test</a>, a hypothetical event in which AIs start to discuss with human philosophers and successfully defend their own theory of consciousness.</p>
<p>I am not sure if the Singularity is going to take off, but I guess that the philosophers' corner is one of the last domains AIs are going to conquer, and if they succeed we can be pretty sure to have another apex on earth.</p>
<p>Turing predicted that by the year 2000 machines would fool 30% of human judges; he was wrong, the Loebner Prize still has no Silver Medal winner for the 25-minute text chat category.</p>
<p>So, -1 points for the Singularity to take off.</p>http://luddite.app26.de/post/the-turing-test/2018-02-11T08:18:03+01:00Nip It In The Bud<p>A Deep Neural Network helps to read minds scanned with fMRI...</p>
<p><a href="http://www.sciencemag.org/news/2018/01/mind-reading-algorithm-can-decode-pictures-your-head" target="_blank">This ‘mind-reading’ algorithm can decode the pictures in your head</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/nip-it-in-the-bud/2018-02-11T08:14:26+01:00On Peak Human<p>One of the early Peak Human prophets was <a href="https://en.wikipedia.org/wiki/Thomas_Robert_Malthus" target="_blank">Malthus</a>; in his 1798 book, 'An Essay on the Principle of Population', he postulated that the human population grows exponentially, but food production only linearly, so population growth will fluctuate around an upper limit.</p>
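<p>Malthus' argument is purely quantitative: exponential growth eventually outruns linear growth, whatever the rates. A toy illustration (all growth rates below are invented for the example, not Malthus' numbers):</p>

```python
population = 1.0   # arbitrary units
food = 2.0         # starts with a comfortable surplus
years = 0

# Count the years until the exponentially growing population
# overtakes the linearly growing food supply.
while food >= population:
    population *= 1.03   # exponential growth, 3% per year (invented rate)
    food += 0.1          # linear growth (invented rate)
    years += 1

print(years)  # the surplus is gone after a finite number of years
```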
<p>Later, <a href="https://en.wikipedia.org/wiki/Paul_R._Ehrlich" target="_blank">Paul R. Ehrlich</a> predicted in his book 'The Population Bomb' (1968) that we would reach a limit in the 1980s.</p>
<p><a href="https://en.wikipedia.org/wiki/Dennis_Meadows" target="_blank">Meadows</a> et al. conclude in 'The Limits to Growth: The 30-Year Update' (2004) that we already reached an upper limit in the 1980s.</p>
<p>In 2015 <a href="https://en.wikipedia.org/wiki/Stephen_Emmott" target="_blank">Emmott</a> concludes in his movie 'Ten Billion' that we already passed the upper bound.</p>
<p>UN predictions say we may hit 9 billion humans by 2050; the exponential population growth rate is already <a href="https://en.wikipedia.org/wiki/Population_growth" target="_blank">declining</a>, but the effects of a wasteful economy pop up in many corners.</p>
<p>Now, in 2018, we are about 7.4 billion humans, and I say Malthus et al. were right.</p>
<p>It is not about how many people Earth can feed, but how many people can live in a comfortable but sustainable manner.</p>
<h3>What does Peak Human mean for the <a href="https://en.wikipedia.org/wiki/Technological_singularity" target="_blank">Technological Singularity</a>?</h3>
<p>The advent of Computers was driven by the exponential population growth in the 20th century. All the groundbreaking work was done in the 20th century.</p>
<p>When we face a decline in population growth, we also have to face a decline in new technologies developed.</p>
<p>Because it is not only about developing new technologies, but also about maintaining the old knowledge.</p>
<p>Here is the point where AI steps in: mankind's population growth is slowing, but the whole AI sector is growing and expanding.</p>
<p>Therefore the question is: is AI able to compensate for the decline?</p>
<p>Time will tell.</p>
<p>I guess the major uncertainty is how <a href="/post/the-end-of-moores-law-/" target="_blank">Moore's Law</a> will live on beyond 2021, when 4 nm transistor production is reached, which some scientists consider a physical and economical barrier.</p>
<p>I predict that by the time we hit the 8 billion humans mark, we will have developed another groundbreaking technology, similar to the advent of the <a href="https://en.wikipedia.org/wiki/Integrated_circuit" target="_blank">transistor, integrated circuit and microchip</a>.</p>
<p>So, considering the uncertainty of Peak Human vs. Rise of AI, <br />I give +-0 points for the Singularity to take off.</p>http://luddite.app26.de/post/on-peak-human/2018-02-04T16:12:29+01:00Hmm<p>Insects are <a href="https://science.slashdot.org/story/18/02/01/1833245/insect-die-off-even-common-species-are-becoming-rare" target="_blank">dying</a>,<br />Cape Town is running out of <a href="https://www.bloomberg.com/news/features/2017-12-08/will-cape-town-run-out-of-water" target="_blank">drinking water</a>, <br />and Geoengineering <a href="https://science.slashdot.org/story/18/02/01/1527243/hoping-that-sucking-co2-from-the-air-will-fix-the-climate-good-luck" target="_blank">won't help</a>.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/hmm/2018-02-04T11:03:24+01:00The Rise Of The Matrix<p>Looking at the tag cloud of this blog, there are two major topics, pro and con Singularity, AI (Artificial Intelligence) vs. ELE (Extinction Level Event).</p>
<p>So we slide, step by step, towards an event called the Singularity, but concurrently we face more and more the extinction of mankind.</p>
<p>What about combining those two events?</p>
<p>Let us assume we damage our ecosphere lastingly, but at the same moment our technology advances to a level where it is possible to connect directly to cyberspace via a Brain-Computer-Interface.</p>
<p>People already spend more and more time in virtual realities; with the advent of Smart Phones they are connected to cyberspace all the time, they meet people in digital social networks, they play games in computer generated worlds, they create and buy virtual goods with virtual money, and, essentially, they like it.</p>
<p>To prevent an upcoming ELE, we would need to cut our consumption of goods significantly, but the mass of people wants more and more.</p>
<p>So, let us give them more and more, in the virtual, computer generated worlds.</p>
<p>Let us create the Matrix, where people can connect directly with their brain, and buy whatever experience they wish.</p>
<p>A virtual car would need only some electricity and silicon to run on, and the harm to Mother Earth would be significantly less than from a real car.</p>
<p>We could create millions or billions of new jobs, all busy with designing virtual worlds, virtual goods, and virtual experiences.</p>
<p>And Mother Earth will get a break, to recover from the damage caused by billions of consuming people.</p>
<p>ELE + Singularity => Matrix</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/the-rise-of-the-matrix/2018-01-30T11:26:34+01:00Ten Billion<p>Shake it, we won't make it.</p>
<p>Watch this movie from 2015, <a href="https://www.youtube.com/watch?v=jV5xiRKw5f4" target="_blank">Ten Billion </a>and agree with me,</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ten-billion/2018-01-28T20:47:35+01:00A Red Double-Decker Bus Is Floating On A Lake<p><a href="https://blogs.microsoft.com/ai/drawing-ai/">Microsoft's drawing AI </a>understands natural language and can be creative...</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/a-red-double-decker-bus-is-floating-on-a-lake/2018-01-28T11:04:34+01:00AIs Surpass Human Performance In Text Comprehension<p>Based on the <a href="https://rajpurkar.github.io/SQuAD-explorer/" target="_blank">SQuAD</a> test, Alibaba's AI was the first to reach <a href="http://www.zdnet.com/google-amp/article/alibaba-neural-network-defeats-human-in-global-reading-test/" target="_blank">super human performance in text comprehension</a> with a score of 82.440 vs 82.304, surpassed one day later by Microsoft Research with a score of 82.650.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ais-surpass-human-performance-in-text-comprehension/2018-01-21T10:56:00+01:00Fun From the Singularity Future...<p>How do AIs catch Elephants?</p>
<p>They send the Terminator back in time...</p>
<p>*duck*</p>http://luddite.app26.de/post/fun-from-the-singularity-future--0/2018-01-14T08:35:51+01:00News from the Memristor<p>Researchers were able to build ANNs, artificial neural networks, with mixed-signal neuromorphic chips based on memristors...</p>
<p><a href="https://www.hpcwire.com/2018/01/08/mixed-signal-neural-net-leverages-memristive-technology/" target="_blank">mixed-signal-neural-net-leverages-memristive-technology</a></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/news-from-the-memristor/2018-01-14T07:46:17+01:00Red Alert<p>Shake it, we won't make it, UN Secretary-General issues Red Alert for the world in 2018...</p>
<p><em><a href="http://www.foxnews.com/world/2017/12/31/un-leader-issues-red-alert-for-unity-in-2018.amp.html">Nationalism and xenophobia are on the rise, inequality is growing, climate change is accelerating and the world is seeing horrific violations of human rights, he said. Global anxieties about nuclear weapons were the highest since the Cold War.</a></em></p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/red-alert/2018-01-07T16:14:36+01:00AlphaZero - The Krampus Has Come<p>Okay, this one affected me personally.</p>
<p>Google's Deepmind team adapted their AlphaZero approach for the games of chess and shogi and dropped the bomb on the 5th of December.</p>
<p><a href="https://arxiv.org/abs/1712.01815" target="_blank">https://arxiv.org/abs/1712.01815</a></p>
<p>For chess they trained the Deep Neural Network for 4 to 9 hours on a cluster with 5000+64 <a href="https://arxiv.org/abs/1704.04760" target="_blank">TPUs</a> (1st+2nd gen) and reached superhuman level.</p>
<p>Unlike in Go, they did not compete against humans, because chess engines already play at a super-grandmaster level; no, they competed against the world's strongest open source engine - Stockfish. Result:</p>
<p><a href="https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match" target="_blank">100 game match with 28 wins, 72 draws, and zero losses for AlphaZero.</a></p>
<p>This is definitely a smack in the face for all computer chess programmers out there. Next stop Neanderthal Man.</p>
<p>So, with thanks to the <a href="https://www.chessprogramming.org/AlphaZero" target="_blank">Krampus</a>,<br />+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/alphazero-the-krampus-has-come/2017-12-31T16:10:41+01:00Competition is Good for...<p>...the Singularity to take off.</p>
<p>China wants to pass US in AI by 2030...</p>
<p><a href="https://yro.slashdot.org/story/17/11/02/224247/eric-schmidt-and-bob-work-our-ai-sputnik-moment-is-now?utm_source=rss1.0mainlinkanon&utm_medium=feed" target="_blank">eric-schmidt-and-bob-work-our-ai-sputnik-moment-is-now</a></p>
<p><a href="https://www.wired.com/story/china-challenges-nvidias-hold-on-artificial-intelligence-chips/" target="_blank">china-challenges-nvidias-hold-on-artificial-intelligence-chips</a></p>
<p>And Banksters are investing again in good old silicon...</p>
<p><a href="https://www.technologyreview.com/s/609471/the-race-to-power-ais-silicon-brains/" target="_blank">the-race-to-power-ais-silicon-brains</a></p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/competition-is-good-for-/2017-11-26T14:36:47+01:0015,000 Scientists Issue 'Warning To Humanity'<p>Ohoh, another <a href="https://science.slashdot.org/story/17/11/13/2058247/more-than-15000-scientists-from-184-countries-issue-warning-to-humanity" target="_blank">warning</a> from our beloved scientists concerning our impaired ecosphere.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/15000-scientists-issue-warning-to-humanity/2017-11-19T10:46:44+01:00Flippy - All Your Kitchen Belongs to Us<p>Flippy from <a href="http://www.misorobotics.com/">Miso Robotics</a> is able to grill hamburgers, twice as many per hour as a human does, and this is just the beginning of the kitchen conquest, they say...</p>
<p>+1 points for the Singularity to take off ;-)</p>http://luddite.app26.de/post/flippy-all-your-kitchen-belongs-to-us/2017-11-12T13:17:05+01:00Climate Change - CO2 Concentration Rising<p><em>"<a href="https://www.nbcnews.com/news/world/scientists-sound-alarm-global-warming-ahead-world-climate-conference-n817046" target="_blank">Without rapid cuts in CO2 and other greenhouse gas emissions, we will be heading for dangerous temperature increases by the end of this century," Petteri Taalas, the WMO's secretary-general, said in a statement.</a></em></p>
<p>If we are too busy with a collapsing biosphere, we may not have enough resources for the Singularity.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/climate-change-co2-concentration-rising/2017-11-05T08:56:10+01:00Rise of the Machines Must Be Monitored, Say Global Finance Regulators<p><em><a href="http://www.reuters.com/article/us-regulation-banks-artificialintelligen/rise-of-the-machines-must-be-monitored-say-global-finance-regulators-idUSKBN1D1534" target="_blank">"Replacing bank and insurance workers with machines risks creating a dependency on outside technology companies beyond the reach of regulators, the global Financial Stability Board (FSB) said on Wednesday".</a></em></p>
<p>Considering <a href="https://en.wikipedia.org/wiki/High-frequency_trading">HFT</a>, they probably know what they are talking about.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/rise-of-the-machines-must-be-monitored-say-global-finance-regulators/2017-11-05T08:45:09+01:00Deepmind's AlphaGo Zero - Go From an Alternate Dimension<p>AlphaGo Zero teaches itself <a href="https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/alphago-zero-goes-from-blank-slate-to-grandmaster-in-three-dayswithout-any-help-at-all" target="_blank">Go on Grandmaster level in three days</a>, <a href="https://tech.slashdot.org/story/17/10/20/1914241/the-ai-that-has-nothing-to-learn-from-humans">"Go from an alternate dimension"</a> .</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/deepminds-alphago-zero-go-from-an-alternate-dimension/2017-10-24T14:18:39+02:00Google's AutoML - AI Learns to Build AI <p>It still <a href="https://www.wired.com/story/googles-learning-software-learns-to-write-learning-software" target="_blank">can not compete with handcrafted designs</a>, but it is one step closer to the direct feedback loop a technological Singularity needs - when machines create machines.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/googles-automl-ai-learns-to-build-ai-/2017-10-15T13:15:17+02:00Luddites Rising?<p>Silicon Valley techies begin to <a href="https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia" target="_blank">disconnect from the Internet</a>; it attracts too much attention and can be manipulative, they say.</p>
<p>Is it just rogue memes fighting for human brain resources, or could it be a sign that we have reached some kind of informational tipping point?</p>
<p>In both cases, I give +1 points for the Singularity to take off.</p>http://luddite.app26.de/post/luddites-rising/2017-10-08T12:47:07+02:00Code Too Complex<p>Ohoh, source code is <a href="https://developers.slashdot.org/story/17/10/02/144245/code-is-too-hard-to-think-about" target="_blank">becoming too complex</a> to get things done.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/code-too-complex/2017-10-08T12:44:50+02:00Fun From the Singularity Future...<p>How many light bulbs do you need to replace a Software Engineer?</p>
<p>*duck*</p>http://luddite.app26.de/post/fun-from-the-singularity-future-/2017-10-08T12:42:47+02:00Super AI Doomsday Prophets<p>They are smart, they have money, and they predict the Super AI Doomsday:</p>
<p>Stephen Hawking<br /><em><a href="http://www.bbc.com/news/technology-30290540" target="_blank">"The development of full artificial intelligence could spell the end of the human race."</a></em></p>
<p>James Lovelock<br /><em><a href="https://www.theguardian.com/environment/2016/sep/30/james-lovelock-interview-by-end-of-century-robots-will-have-taken-over" target="_blank">"Before the end of this century, robots will have taken over."</a></em></p>
<p>Nick Bostrom<br /><em><a href="https://www.goodreads.com/book/show/20527133-superintelligence" target="_blank">"Some little idiot is bound to press the ignite button just to see what happens."</a></em></p>
<p>Elon Musk<br /><em><a href="https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat" target="_blank">"Artificial intelligence is our biggest existential threat."</a></em></p>
<p>So, obviously, +1 points for the Singularity to take off.</p>http://luddite.app26.de/post/super-ai-doomsday-prophets/2017-10-01T16:32:40+02:00AI God Religion Spotted<p>The first religion with an AI as god has been <a href="https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/" target="_blank">spotted</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-god-religion-spotted/2017-10-01T12:25:38+02:00AI - Just a Hype?<p>Voices on the web claim that AI is just over-hyped, and we may already have reached <a href="https://techcrunch.com/2017/09/30/ai-hype-has-peaked-so-whats-next/">peak hype</a>.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-just-an-hype/2017-09-30T22:23:39+02:00ELE 2100<p>Man-made CO2 emissions could cause an ELE, an extinction level event, by 2100, say <a href="https://www.sciencedaily.com/releases/2017/09/170920182116.htm" target="_blank">scientists</a>.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ele-2100/2017-09-22T20:12:18+02:00Humanity is Running out of Ideas<p>According to a <a href="https://www.nber.org/papers/w23782" target="_blank">paper</a> from Stanford University and the Massachusetts Institute of Technology, <a href="https://science.slashdot.org/story/17/09/12/0532214/boffins-fear-we-might-be-running-out-of-ideas" target="_blank">ideas are getting harder and harder to find</a>...</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/humanity-is-running-out-of-ideas/2017-09-13T20:24:31+02:00AI Detects Sexual Orientation<p><a href="https://osf.io/fk3xr/" target="_blank">Based</a> on deep neural networks, an <a href="https://hardware.slashdot.org/story/17/09/08/218221/ai-can-detect-sexual-orientation-based-on-persons-photo" target="_blank">AI can detect sexual orientation from faces</a>, a feat humans are not capable of.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/ai-detects-sexual-orientation/2017-09-10T20:05:57+02:00From QuBit to QuDit<p>Another step forward for Quantum Computers: <a href="https://spectrum.ieee.org/tech-talk/computing/hardware/qudits-the-real-future-of-quantum-computing" target="_blank">QuDits</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/from-qubit-to-qudit/2017-07-30T20:38:14+02:00DARPA Cyber Grand-Challenge<p>Computer AI hacks Computer AI during the <a href="http://archive.darpa.mil/cybergrandchallenge/" target="_blank">DARPA Cyber Grand-Challenge</a>.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/darpa-cyber-grand-challenge/2016-09-30T20:42:13+02:00Boston Dynamics for Sale<p>Alphabet puts the famous Robo-Fab Boston Dynamics up for <a href="https://www.bloomberg.com/news/articles/2016-03-17/google-is-said-to-put-boston-dynamics-robotics-unit-up-for-sale" target="_blank">sale</a>, questioning whether marketable products can be developed in time.</p>
<p>-1 points for the Singularity to take off.</p>http://luddite.app26.de/post/boston-dynamics-for-sale/2016-09-15T09:01:47+02:00AlphaGo Beats Pro Human Go Player<p>It was already in the news back in March: <a href="https://en.wikipedia.org/wiki/AlphaGo" target="_blank">AlphaGo</a>, an AI based on Neural Networks, beat Lee Sedol, a professional Go player, 4:1.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/alphago-beats-pro-human-go-player/2016-09-03T14:10:15+02:00Sunway TaihuLight<p>The Chinese take over the <a href="https://www.top500.org/news/new-chinese-supercomputer-named-worlds-fastest-system-on-latest-top500-list/" target="_blank">leadership</a> in supercomputing from the US; the most powerful system, fully homegrown, is now the Sunway <a href="https://www.top500.org/system/178764" target="_blank">TaihuLight</a>, with a LINPACK benchmark rating of 93 petaFLOPS.</p>
<p>+1 points for the Singularity to take off.</p>http://luddite.app26.de/post/sunway-taihulight/2016-08-01T12:03:28+02:00More Moore<p>If we cannot shrink the transistor size any further, what other options do we have to increase compute power?</p>
<h3><a href="https://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit">3D packaging</a></h3>
<p>The ITRS report suggests going into the third dimension and building cubic chips. The more layers are stacked, the more integrated cooling will be necessary.</p>
<h3><a href="https://en.wikipedia.org/wiki/Random-access_memory#Memory_wall" target="_blank">Memory Wall</a></h3>
<p>Currently a memory access takes many more cycles than a CPU compute operation; with faster memory techniques or higher bandwidth the gap can be closed.</p>
<h3><a href="https://en.wikipedia.org/wiki/Memristor" target="_blank">Memristor</a></h3>
<p>The Memristor is an electronic component proposed in 1971. It can be used for non-volatile memory devices and alternative, neuromorphic compute architectures.</p>
<h3><a href="https://en.wikipedia.org/wiki/Photonic_computing" target="_blank">Photonics</a></h3>
<p>Using light for computation sounds quite attractive, but the base element, the photonic transistor, has yet to be developed.</p>
<h3><a href="https://en.wikipedia.org/wiki/Quantum_computing" target="_blank">Quantum Computing</a></h3>
<p>Really, I do not have a clue how these thingies work, somehow via Quantum Effects like Superposition and Entanglement, but people say they are going to rock when they are ready...</p>
<p>Considering so much room for research, <br />I give +1 points for the Singularity to take off.</p>http://luddite.app26.de/post/more-moore/2016-07-23T19:34:49+02:00The End of Moore's Law? <p>Moore's law, the heartbeat of computer evolution, is the observation that every two years the number of transistors on integrated circuits doubles. Gordon Moore, co-founder of Intel, proposed a doubling every year in 1965 and a doubling every two years in 1975.</p>
<p>In practice this results in a doubling of the compute power of computer chips every two years.</p>
<p>The doubling of the transistor count is achieved by shrinking their size. The 1970s Intel 8080 chip was clocked at 2 MHz, had about 6000 transistors and was produced in a 6 micrometer process. Nowadays processors have billions of transistors and use a 14 or 10 nanometer process.</p>
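<p>As a back-of-the-envelope check (my own Python sketch, not from any source; only the 1974 baseline of roughly 6000 transistors for the 8080 comes from the figures above), projecting a doubling every two years from the 8080 indeed lands in the billions today:</p>

```python
# Toy projection of Moore's law: transistor count doubling every two years,
# starting from the Intel 8080 (1974, ~6000 transistors).

START_YEAR = 1974
START_TRANSISTORS = 6000

def moore_projection(year):
    """Projected transistor count for a given year under Moore's law."""
    doublings = (year - START_YEAR) / 2
    return START_TRANSISTORS * 2 ** doublings

for year in (1974, 1994, 2016):
    print(year, round(moore_projection(year)))
# 2016 is 21 doublings after 1974: 6000 * 2**21, about 12.6 billion
```

<p>That such a naive projection matches real chip transistor counts within an order of magnitude is exactly why the "law" held up for decades.</p>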
<p>But less known is Moore's Second Law, the observation that the investment costs for the chip fabs also grow exponentially.</p>
<p>The last <a href="https://www.semiconductors.org/main/2015_international_technology_roadmap_for_semiconductors_itrs/" target="_blank">ITRS report of 2015</a> predicts that transistor shrinking will hit such an economic wall in 2021, and alternative techniques have to be used to keep Moore's Law alive.</p>
<p>Considering this news, <br />I give -1 points for the Singularity to take off.</p>http://luddite.app26.de/post/the-end-of-moores-law-/2016-07-22T19:28:47+02:00Is the Singularity Near?<p>It is the year 2016 and some Trans-humanists say the <a href="https://en.wikipedia.org/wiki/Technological_singularity" target="_blank">Singularity</a> is near.</p>
<p>An event where the technological environment starts to feed its own development in a feedback loop and exceeds human understanding.</p>
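<p>The feedback loop can be made concrete with a toy model (my own illustration; the quadratic growth law and the numeric cutoff are assumptions, not claims about real technology): if a capability grows at a rate proportional to its own square, the self-feeding growth diverges in finite time, a mathematical "singularity", whereas growth without feedback stays merely exponential.</p>

```python
# Toy illustration: self-improvement feeding back into itself.
# With feedback, dx/dt = x^2, whose exact solution x(t) = 1/(1 - t)
# blows up at t = 1; without feedback, dx/dt = x grows only exponentially.

def simulate(feedback=True, steps=20_000, dt=1e-4):
    x, t = 1.0, 0.0
    for _ in range(steps):
        rate = x * x if feedback else x  # feedback couples rate to level
        x += rate * dt                   # simple Euler step
        t += dt
        if x > 1e12:                     # treat this as "effectively infinite"
            return t                     # approximate blow-up time
    return None                          # no blow-up within the horizon

print(simulate(feedback=True))   # blows up shortly after t = 1
print(simulate(feedback=False))  # exponential growth, never reaches the cutoff
```

<p>The Euler simulation diverges shortly after t = 1, just as the exact solution predicts; the exponential variant never does.</p>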
<p>The aim of this blog is to observe if the Singularity is going to take off and when.</p>
<p>Have fun.</p>
<p>--<br />SM2078</p>http://luddite.app26.de/post/is-the-singularity-near/2016-07-13T14:57:23+02:00