
Fear Power, Not Intelligence

Fluidity

Release Date: 10/22/2023

A Better Future, Without Backprop

This concludes "Gradient Dissent", the companion document to "Better Without AI". Thank you so much for listening! You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

Better Text Generation With Science And Engineering

Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators John McCarthy’s paper “Programs with common sense”: http://www-formal.stanford.edu/jmc/mcc59/mcc59.html Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20 Petroni et...

Classifying Images: Massive Parallelism And Surface Features

Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns. https://betterwithout.ai/images-surface-features This episode has a lot of links: David Chapman's earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that "cheat": twitter.com/Meaningness/status/698688687341572096 Jordana Cepelewicz’s...

Do AI As Engineering Instead

Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. This episode has a lot of links! Here they are. Michael Nielsen’s “The role of ‘explanation’ in AI”: https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI Subbarao Kambhampati’s “Changing the Nature of AI Research”: https://dl.acm.org/doi/pdf/10.1145/3546954 Chris Olah and his collaborators: “Thread:...

Do AI As Science Instead

Few AI experiments constitute meaningful tests of hypotheses. As a branch of machine learning research, AI science has concentrated on black-box investigation of training-time phenomena. The best of this work has been scientifically excellent. However, the hypotheses tested are mainly irrelevant to user and societal concerns. https://betterwithout.ai/AI-as-science This chapter references Chapman's essay, "How should we evaluate progress in AI?" https://metarationality.com/artificial-intelligence-progress "Troubling Trends in Machine Learning Scholarship", Zachary C. Lipton and Jacob...

Do AI As Science And Engineering Instead

We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies. Run-Time Task-Relevant Algorithmic Understanding - The type of scientific and engineering understanding most relevant to AI safety is run-time, task-relevant, and algorithmic. That can lead to more reliable, safer systems...

Backpropaganda: Anti-Rational Neuro-Mythology

Current AI results from experimental variation of mechanisms, unguided by theoretical principles. That has produced systems that can do amazing things. On the other hand, they are extremely error-prone and therefore unsafe. Backpropaganda, a collection of misleading ways of talking about “neural networks,” justifies continuing in this misguided direction. https://betterwithout.ai/backpropaganda You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a...

Artificial Neurons Considered Harmful, Part 2

The conclusion of this chapter. So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad. Sayash Kapoor and Arvind Narayanan’s "The bait and switch behind AI risk prediction tools": https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk A video titled "Latent Space Walk": Another video showing a walk through latent space: You can support the podcast and get episodes a week early, by supporting the Patreon: If you like the show,...

Gradient Dissent - Artificial Neurons Considered Harmful, Part 1

This begins "Gradient Dissent", the companion material to "Better Without AI". The neural network and GPT technologies that power current artificial intelligence are exceptionally error prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives. Artificial Neurons Considered Harmful, Part 1 - So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and...

Futurism, Politics, and Responsibility

The five short chapters in this episode are the conclusion of the main body of Better Without AI. Next, we'll begin the book's appendix, Gradient Dissent. Cozy Futurism - If we knew we’d never get flying cars, most people wouldn’t care. What do we care about? https://betterwithout.ai/cozy-futurism Meaningful Futurism - Likeable futures are meaningful, not just materially comfortable. Bringing one about requires imagining it. I invite you to do that! https://betterwithout.ai/meaningful-future The Inescapable: Politics - No realistic approach to future AI can avoid questions of power...

Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.
 
 
For much of the AI safety community, the central question has been “when will it happen?!” That is futile: we don’t have a coherent description of what “it” is, much less how “it” would come about. Fortunately, a prediction wouldn’t be useful anyway. An AI apocalypse is possible, so we should try to avert it.
 
 
You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
 
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
 
Original music by Kevin MacLeod.
 
This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.