Classifying Images: Massive Parallelism And Surface Features

Fluidity

Release Date: 01/05/2025


Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns.

https://betterwithout.ai/images-surface-features
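As an illustrative sketch (not from the episode), the "massive parallelism over shallow patterns" idea can be caricatured in a few lines of numpy: score each class by summing the responses of many small local filters over every image patch, with no global shape analysis at all. This is essentially the bag-of-local-features strategy examined in the ICLR 2019 paper linked below; the filters here are random stand-ins for learned ones, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2 classes, 16 random 3x3 "texture" filters per class.
# A real network learns these; random ones suffice to show the mechanism.
n_classes, n_filters, k = 2, 16, 3
filters = rng.standard_normal((n_classes, n_filters, k, k))

def class_scores(image):
    """Sum each class's filter responses over every 3x3 patch.

    Every patch is checked against every filter independently -- the
    'vast number of simple, shallow patterns' caricature. No global
    shape information is ever computed.
    """
    h, w = image.shape
    scores = np.zeros(n_classes)
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = image[i:i + k, j:j + k]
            # ReLU: only positive (pattern-present) evidence accumulates
            responses = np.tensordot(filters, patch, axes=([2, 3], [0, 1]))
            scores += np.maximum(responses, 0).sum(axis=1)
    return scores

image = rng.standard_normal((8, 8))
scores = class_scores(image)
print(scores, "-> predicted class", int(np.argmax(scores)))
```

The point of the sketch is what is absent: nothing in it relates one patch to another, so "leopard texture everywhere" and "leopard-print sofa" are indistinguishable by construction.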

This episode has a lot of links:

David Chapman's earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that "cheat": https://twitter.com/Meaningness/status/698688687341572096

Jordana Cepelewicz’s “Where we see shapes, AI sees textures,” Quanta Magazine, July 1, 2019: https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/

“Suddenly, a leopard print sofa appears”, May 2015: https://web.archive.org/web/20150622084852/http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html

“Understanding How Image Quality Affects Deep Neural Networks” April 2016: https://arxiv.org/abs/1604.04004
 
Goodfellow et al., “Explaining and Harnessing Adversarial Examples,” December 2014: https://arxiv.org/abs/1412.6572

“Universal adversarial perturbations,” October 2016: https://arxiv.org/pdf/1610.08401v1.pdf

“Exploring the Landscape of Spatial Robustness,” December 2017: https://arxiv.org/abs/1712.02779

“Overinterpretation reveals image classification model pathologies,” NeurIPS 2021: https://proceedings.neurips.cc/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Paper.pdf

“Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet,” ICLR 2019: https://openreview.net/forum?id=SkfMWhAqYQ

Baker et al.’s “Deep convolutional networks do not classify based on global object shape,” PLOS Computational Biology, 2018: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006613

François Chollet's Twitter threads about AI producing images of horses with extra legs: https://twitter.com/fchollet/status/1573836241875120128 and https://twitter.com/fchollet/status/1573843774803161090

“Zoom In: An Introduction to Circuits,” 2020: https://distill.pub/2020/circuits/zoom-in/

Geirhos et al., “ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness,” ICLR 2019: https://openreview.net/forum?id=Bygh9j09KX

Dehghani et al., “Scaling Vision Transformers to 22 Billion Parameters,” 2023: https://arxiv.org/abs/2302.05442

Hasson et al., “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks,” February 2020: https://www.gwern.net/docs/ai/scaling/2020-hasson.pdf
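
The Goodfellow et al. paper linked above introduces the fast gradient sign method (FGSM): perturb an input by a small step in the direction of the sign of the loss gradient. As a hedged sketch, here it is applied to plain logistic regression, where the gradient has a closed form; the weights and data are made up solely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logistic-regression "classifier": made-up weights, illustration only.
d = 100
w = rng.standard_normal(d)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)  # P(class = 1)

def fgsm(x, y, eps):
    """FGSM (Goodfellow et al. 2014): x_adv = x + eps * sign(grad_x loss).

    For logistic loss with true label y, grad_x loss = (p - y) * w.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.standard_normal(d)
y = 1.0 if predict(x) > 0.5 else 0.0   # take the model's own label as "true"
x_adv = fgsm(x, y, eps=0.25)

print(predict(x), "->", predict(x_adv))  # confidence in y drops sharply
```

Each coordinate moves by only 0.25, yet the per-coordinate nudges all align with the weight vector, so their effect on the logit adds up across all 100 dimensions: shallow linear structure is exactly what makes the attack cheap.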

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
 
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
 
Original music by Kevin MacLeod.
 
This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.