Fluidity
This concludes "Gradient Dissent", the companion document to "Better Without AI". Thank you so much for listening! You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Fluidity
Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators
John McCarthy’s paper “Programs with common sense”: http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20
Petroni et...
Fluidity
Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns. https://betterwithout.ai/images-surface-features
This episode has a lot of links:
David Chapman's earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that "cheat": twitter.com/Meaningness/status/698688687341572096
Jordana Cepelewicz’s...
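The "massive parallelism over shallow patterns" picture is perhaps easiest to see in code. Here is a minimal sketch, not taken from the episode, using NumPy with random filters standing in for a trained network's learned ones: each tiny filter checks every image location for one simple local pattern, and the stack of presence maps is all that deeper layers get to see.

```python
# A toy version of the "many shallow pattern detectors in parallel" view
# of image classifiers. The filters are random stand-ins for learned ones.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))      # toy grayscale "image"
filters = rng.random((64, 3, 3))  # 64 tiny 3x3 patterns

def pattern_presence_map(image, filt):
    """Slide one small pattern over the image, scoring each location."""
    h, w = image.shape
    fh, fw = filt.shape
    out = np.empty((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+fh, j:j+fw] * filt)
    return out

# "Massive parallelism": every filter is checked at every position.
maps = np.stack([pattern_presence_map(image, f) for f in filters])
print(maps.shape)  # (64, 30, 30): one presence map per shallow pattern
```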
Fluidity
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems.
This episode has a lot of links! Here they are.
Michael Nielsen’s “The role of ‘explanation’ in AI”: https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
Subbarao Kambhampati’s “Changing the Nature of AI Research”: https://dl.acm.org/doi/pdf/10.1145/3546954
Chris Olah and his collaborators: “Thread:...
Fluidity
Few AI experiments constitute meaningful tests of hypotheses. As a branch of machine learning research, AI science has concentrated on black box investigation of training time phenomena. The best of this work has been scientifically excellent. However, the hypotheses tested are mainly irrelevant to user and societal concerns. https://betterwithout.ai/AI-as-science
This chapter references Chapman's essay "How should we evaluate progress in AI?": https://metarationality.com/artificial-intelligence-progress
"Troubling Trends in Machine Learning Scholarship", Zachary C. Lipton and Jacob...
Fluidity
Do AI As Science And Engineering Instead - We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies.
Run-Time Task-Relevant Algorithmic Understanding - The type of scientific and engineering understanding most relevant to AI safety is run-time, task-relevant, and algorithmic. That can lead to more reliable, safer systems....
Fluidity
Current AI results from experimental variation of mechanisms, unguided by theoretical principles. That has produced systems that can do amazing things. On the other hand, they are extremely error-prone and therefore unsafe. Backpropaganda, a collection of misleading ways of talking about “neural networks,” justifies continuing in this misguided direction. https://betterwithout.ai/backpropaganda
You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Fluidity
The conclusion of this chapter. So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad.
Sayash Kapoor and Arvind Narayanan’s "The bait and switch behind AI risk prediction tools": https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk
A video titled "Latent Space Walk", and another video showing a walk through latent space (both embedded in the original episode page).
You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
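A "latent space walk" like the ones in those videos is just a generator run along a straight line between two latent vectors, rendering each intermediate point. Here is a minimal sketch under that assumption; the `decode` function is a hypothetical stand-in, not code from the videos or from any particular model.

```python
# Sketch of a latent space walk: interpolate between two latent vectors
# and render the generator's output at each step. `decode` is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
z_start, z_end = rng.standard_normal((2, 128))  # two points in a 128-d latent space

def decode(z):
    """Hypothetical stand-in for a trained generator's decoder."""
    return np.tanh(z)  # a real model would return an image here

frames = []
for t in np.linspace(0.0, 1.0, num=10):
    z = (1 - t) * z_start + t * z_end  # linear interpolation in latent space
    frames.append(decode(z))           # successive frames morph smoothly
print(len(frames))  # 10 interpolated frames
```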
Fluidity
This begins "Gradient Dissent", the companion material to "Better Without AI". The neural network and GPT technologies that power current artificial intelligence are exceptionally error prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives. Artificial Neurons Considered Harmful, Part 1 - So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and...
Fluidity
The five short chapters in this episode are the conclusion of the main body of Better Without AI. Next, we'll begin the book's appendix, Gradient Dissent.
Cozy Futurism - If we knew we’d never get flying cars, most people wouldn’t care. What do we care about? https://betterwithout.ai/cozy-futurism
Meaningful Futurism - Likeable futures are meaningful, not just materially comfortable. Bringing one about requires imagining it. I invite you to do that! https://betterwithout.ai/meaningful-future
The Inescapable: Politics - No realistic approach to future AI can avoid questions of power...