AI with AI: Artificial Intelligence with Andy Ilachinski
AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, and discusses the technological and military implications. Join Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field. The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors.
All Good Things
02/24/2023
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February, but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle: we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a “not super-difficult” method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
Up, Up, and Autonomy!
02/10/2023
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School’s Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, to include a discussion about the DoDD 3000.09 update and the high-altitude balloon incident.
Dr. GPT
01/29/2023
Andy and Dave discuss the latest in AI news and research, starting with an education program that teaches US Air Force personnel the fundamentals of AI, tailored to three types of personnel: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department’s approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat posed by ChatGPT and other AI technology. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research paper abstracts that pass plagiarism checkers, and that human reviewers were only able to correctly identify 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. CheckPoint Research demonstrates how cybercriminals can use ChatGPT for nefarious exploits (including people without any experience in creating malicious tools). Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone’s voice with only three seconds of sample input. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI- and autonomy-related information from Russia as well as Ukraine.
EmerGPT
01/13/2023
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Stanford’s Institute for Human-Centered AI that assesses progress (or lack thereof) in implementing the three pillars of America’s strategy for AI innovation. The Department of Energy is offering up a total of $33M for research in leveraging AI/ML for nuclear fusion. China’s Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation on “deepfakes,” requiring users to give consent and prohibiting the technology for fake news, among many other things. Xiamen University and other researchers publish a “multidisciplinary open peer review dataset” (MOPRD), aiming to provide ways to automate the peer review process. Google executives issue a “code red” for Google’s search business over the success of OpenAI’s ChatGPT. New York City schools have blocked access for students and teachers to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to “polish” writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, showing a strong capacity for abstract pattern induction. Research from Google Research, Stanford, Chapel Hill, and DeepMind shows that certain abilities only emerge from large language models that have a certain number of parameters and a large enough dataset. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of Coevolving Machines and the Origins of the Social Universe.
The Kwicker Man
12/16/2022
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of “AI” and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board of Supervisors voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate response from a “No Killer Robots” campaign, in early December the board passed a revised version of the policy that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a “drone-based loitering munition” that can carry lethal or non-lethal payloads and appears to have many functions similar to the “slaughterbots,” except for autonomous targeting. Neuralink shows the latest updates on its research for putting a brain-chip interface into humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story that we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily banned users from sharing responses generated by ChatGPT, because the output of the algorithm might look good, but it has “a high rate of being incorrect.” Researchers at the Weizmann Institute of Science demonstrate that, with a simple neural network, it is possible to reconstruct a “large portion” of the actual training samples. NOMIC provides an interactive map to explore over 6M images from Stable Diffusion. Steve Coulson creates “AI-assisted comics” using Midjourney.
Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week from Ricard Sole at the Santa Fe Institute explores mapping the cognition space of liquid and solid brains.
Battledrone Galactica
12/02/2022
Andy and Dave discuss the latest in AI news and research, including the introduction of a lawsuit against Microsoft, GitHub and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of biometric data of Texans without their consent. DARPA flies its final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael’s DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines Large Language Models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts structures from some 600 million proteins, “mostly unknown.” And Meta also releases (then takes down due to misuse) Galactica, a 120B parameter language model for scientific papers. In a similar, but less turbulent vein, Explainpaper provides the ability to upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes online for free Data Science and Machine Learning: Mathematical and Statistical Methods, a thorough text for upper-class college or grad-school level. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, with a video on the Cybernetic Brain, and the book of the same name, published in 2011.
The AI Who Loved Me
11/25/2022
Andy and Dave once again welcome Sam Bendett, research analyst with CNA’s Russia Studies Program, to the podcast to discuss the latest unmanned and autonomous systems news from the Russia-Ukraine conflict. The group discusses the use and role of commercial quadcopters, the recent Black Sea incident involving unmanned systems, and the supply of Iranian systems to Russia. They also discuss the Wagner Group’s research and development center, and its potential role in the conflict. Will Ukraine deploy lethal autonomous drones against Russia? Related topics: the PMC Wagner Center, Russia’s Lancet, the coordinated drone attack at Sevastopol, the Iranian supply of drones to Russia, and Russia’s “brain drain” problem.
Drawing Outside the Box
11/04/2022
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI, and requiring OMB to work with GSA to develop the curriculum. Various top robot companies pledge not to add weapons to their technologies and to work actively at not allowing their robots to be used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available for everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey in conjunction with New York University, providing AI researchers’ views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research and politics to safety, as well as some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). The Foundation of Robotics is the open-access book of the week from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute.
Rebroadcast: AI-chemy 2: This Time It's Personal (Part 2)
10/07/2022
A CNA researcher joins the podcast to discuss the impacts of global sanctions on Russia’s technology and AI sector.
Keep Watching the AIs!
09/23/2022
Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK’s National Cyber Security Centre, providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its “AI Hype Cycle,” which qualitatively plots the position of various AI efforts along the “hype cycle.” PromptBase opens its doors, promising to provide users with better “prompts” for text-to-image generators (such as DALL-E) to generate “optimal images.” Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. MetaAI announces a nascent ability to decode speech from a person’s brain activity, without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its Aliengo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples; for example, their model is able to generate “pixel perfect” predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer by controlling them at the level of joint torques, combining that low-level control with longer-term goal-directed behavior; the agents demonstrate jostling for the ball and other behaviors. Researchers at Urbana-Champaign and MIT demonstrate a Composable Diffusion model to tweak and improve the output of text-to-image transformers. Google Research publishes results on AudioLM, which generates “natural and coherent continuations” given short prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine, arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend.
NOMARS Attacks!
09/09/2022
Andy and Dave discuss the latest in AI news and research, starting with DARPA moving into Phase 2 of its No Manning Required Ship (NOMARS) program, having selected Serco Inc. for its Defiant ship design. The UK releases a roadmap on automated vehicles, Connected & Automated Mobility 2025, and describes new legislation that will place liability for the actions of self-driving vehicles onto manufacturers, and not the occupants. The DOD’s Chief Digital and AI Office is preparing to roll out Tradewinds, an open solutions marketplace geared toward identifying new technologies and capabilities. The US bans NVIDIA and AMD from selling or exporting certain types of GPUs (mostly for high-end servers) to China and Russia. A report in Nature examines the “reproducibility crisis” involving machine learning in scientific articles, identifying eight types of “data leaks” in research that give cause for concern. Google introduces a new AI image noise reduction tool that greatly advances the state of the art for low lighting and resolution images, using RawNeRF, which makes use of the previous neural radiance fields approach, but on raw image data. Hakwan Lau and Oxford University Press make available for free In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. And Sam Bendett joins Andy and Dave to discuss the latest from Russia’s Army 2022 Expo and other recent developments around the globe.
EPIC BLOOM
08/26/2022
Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that states and local governments have passed, introduced, or failed during the 2021-2022 legislative season. In robotics, Xiaomi introduces CyberOne, a 5-foot 9-inch robot that can identify “85 types of environmental sounds and 45 classifications of human emotions.” Meanwhile, at a recent Russian arms fair, Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST updates its AI Risk Management Framework to the second draft, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a “bigger than GPT-3” model covering 46 languages, created by a group of over 1,000 AI researchers, which anyone can download and tinker with for free. Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China’s Comprehensive National Science Center claims that it has developed “mind-reading AI” capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate, by examining results directly from neural activity, that human brains can detect deepfakes that people cannot consciously identify.
Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. GoogleAI publishes research on Minerva, using language models to solve quantitative reasoning problems and dramatically advancing the state of the art. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems “at a human level.” CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at UC Merced’s Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science.
Searching for Robot Pincher
08/12/2022
Andy and Dave discuss the latest in AI news and research, including an announcement from DeepMind that it is freely providing a database of 200+ million protein structures as predicted by AlphaFold. Researchers at the Max Planck Institute for Intelligent Systems demonstrate how a robot dog can learn to walk in about one hour using a Bayesian optimization algorithm. A chess-playing robot breaks the finger of a seven-year-old boy during a chess match in Moscow. A bill before the Senate Armed Services Committee would require the Department of Defense to accelerate the fielding of new technology to defeat drone swarms. The Chief of Naval Operations Navigation Plan 2022 aims to add 150 uncrewed vessels by 2045. The text-to-image transformer DALL-E is now available in beta. Researchers at Columbia University use an algorithm to identify possible state variables from the observation of systems (such as a double pendulum) and discover “alternate physics”; the algorithm discovers the intrinsic dimension of the observed dynamics and identifies a candidate set of state variables, but in most cases, the scientists found it difficult (if not impossible) to decode those variables to known phenomena. Wolfram Media and Etienne Bernard make Introduction to Machine Learning: Mathematica available for free. And Jeff Edmonds and Sam Bendett join for a discussion on their latest report, Russian Military Autonomy in Ukraine: Four Months In – a closer look at the use of unmanned systems by both Russia and Ukraine.
AI-chemy 2: This Time It's Personal (Part 2)
07/29/2022
A CNA researcher joins the podcast to discuss the impacts of global sanctions on Russia’s technology and AI sector. Report: A Technological Divorce: The impact of sanctions and the end of cooperation on Russia’s technology and AI sector.
AI-chemy 2: This Time It's Personal
07/15/2022
Andy and Dave discuss the latest in AI news and research, including an update from DARPA on its Machine Common Sense program, with demonstrations of rapid adaptation to changing terrain, carrying dynamic loads, and understanding how to grasp objects [0:55]. The Israeli military fields new tech from Camero-Tech that allows operators to ‘see through walls,’ using pulse-based ultra-wideband micro-power radar in combination with an AI-based algorithm for tracking live targets [5:01]. In autonomous shipping [8:13], the Suzaka, a cargo ship powered by Orca AI, makes a nearly 500-mile voyage “without human intervention” for 99% of the trip; the Prism Courage sails from the Gulf of Mexico to South Korea “controlled mostly” by HiNAS 2.0, a system by Avikus, a subsidiary of Hyundai; and Promare’s and IBM’s Mayflower Autonomous Ship travels from the UK to Nova Scotia. In large language models [10:09], a Chinese research team unveils a 174-trillion-parameter model, Bagualu (‘alchemist pot’), and claims it runs an AI model as sophisticated as a human brain (not quite, though); Meta releases the largest open-source AI language model, OPT-66B, a 66-billion-parameter model; and Russia’s Yandex opens its 100-billion-parameter YaLM to public access. Researchers from the University of Chicago publish a model that can predict future crimes “one week in advance with about 90% accuracy” (referring to general crime levels, not specific people and exact locations), and also demonstrate the potential effects of bias in police response and enforcement [13:32]. In a similar vein, researchers from Berkeley, MIT, and Oxford publish attempts to forecast future world events using the neural network system Autocast, and show that forecasting performance still comes in far below a human-expert baseline [16:37]. Angelo Cangelosi and Minoru Asada provide the (graduate) book of the week, with Cognitive Robotics.
the sentience of the lamdas
07/02/2022
Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan’s “infamously toxic” Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly receiving condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems. Visit our website to explore the links mentioned in this episode.
RAI, consumers’ co-operative
06/17/2022
CNA colleagues Kaia Haney and Heather Roff join Andy and Dave to discuss Responsible AI. They discuss the recent Inclusive National Security seminar on AI and National Security: Gender, Race, and Algorithms. The keynote speaker, Elizabeth Adams, spoke on the challenges that society faces in integrating AI technologies in an inclusive fashion, and she identified ways in which consumers of AI-enabled products can ask questions and engage on the topic of inclusivity and bias. The group also discusses a variety of topics around the many challenges that organizations face in operationalizing these ideas, including a revisit of the findings from recent medical research, which found an algorithm was able to identify the race of a subject from X-rays and CAT scans, even with identifying features removed. Sign up for the InclusiveNatSec mailing list.
Top Gan: Swarmaverick
06/03/2022
Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09 on “Autonomous Weapons,” with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an autonomous drone carrier ship, though Dave wonders about the use of the terms “unmanned” and “autonomous” [8:59]. Stanford University and the Human-Centered AI Center build on their initiative for foundation models by releasing a call to the community for developing norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and Hong Kong University of S&T demonstrate a swarm of physical micro flying robots, fully autonomous, able to navigate and communicate as a swarm, with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but can’t explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18].
El Gato Altinteligento
05/20/2022
Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), finding that the EU should not always regulate AI as a technology, but use intervention proportionate to the type of risk, among other recommendations [1:31]. Synchron enrolls the first patient in the U.S. clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. MetaAI releases its 175B-parameter transformer for open use, Open Pre-trained Transformers (OPT), to include the codebase used to train and deploy the model, and their logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a “single generalist agent,” which, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that “the game is over” and it’s all about scale now, to which others counter that using massive amounts of data as a substitute for intelligence is perhaps “alt intelligence” [8:48]. In the opinion essay of the week, Steve Johnson pens “AI is mastering language, should we trust what it says?” [18:07]. Daedalus’s Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48].
Leggo my Stego!
05/06/2022
Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense should improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly in its lack of accounting for the full costs to develop and operate such systems, but also recommends the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning for Lyft and former professor at the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present a new method for completely unsupervised label assignment to images, with STEGO (self-supervised transformer with energy-based graph optimization), allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas on research topics [24:24]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm EST on June 7th.
/episode/index/show/aionai/id/23031278
info_outline
The Amulet of NeRFdor
04/22/2022
The Amulet of NeRFdor
Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act. [0:57] NVidia updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations. [2:53] Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a “roadmap,” on large-scale models. [4:13] In research, GoogleAI introduces PaLM (Pathways Language Model), at 540B parameters, which demonstrates the ability to perform logical inference and explain jokes. [7:09] OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused by mislabeled items; interestingly, it demonstrates greater resolution and diversity than GLIDE, OpenAI’s similar earlier technology, though humans did not rate its outputs as highly, and DALL-E 2 still has challenges with ‘binding attributes.’ [11:32] A white paper from Gary Marcus looks at ‘Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?’ which includes an examination of a symbol-manipulation system that beat the best deep learning systems at playing the ASCII game NetHack. [16:10] Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments, including the upcoming Department of Robotics and a robotics undergraduate degree. [19:10] https://www.cna.org/CAAI/audio-video
/episode/index/show/aionai/id/22880021
info_outline
Bridge on the River NukkAI
04/08/2022
Bridge on the River NukkAI
Andy and Dave discuss the latest in AI news and research, including DoD’s 2023 budget for research, development, test, and evaluation at $130B, around 9.5% higher than the previous year. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available. The European Parliament’s Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations, though the report is still in draft (including that the EU should not regulate AI as a technology, but rather focus on risk). Other EP committees debated the proposal for an “AI Act” on 21 March, with speakers including Tegmark, Russell, and many others. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback. Researchers from Collaborations Pharmaceuticals and King’s College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons. And NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods. And Kevin Pollpeter, from CNA’s China Studies Program, joins to discuss the role (or lack thereof) of Chinese technology in the Ukraine-Russia conflict.
/episode/index/show/aionai/id/22728548
info_outline
A PIG GR_PH
03/25/2022
A PIG GR_PH
Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine’s defense ministry has begun to use Clearview AI’s facial recognition technology and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast – using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan, and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, to include systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; it provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA’s Dr. Anya Fink joins Andy and Dave for a discussion about the uses of disinformation in the Ukraine-Russia conflict [17:15]. https://www.cna.org/CAAI/audio-video
/episode/index/show/aionai/id/22569875
info_outline
Slightly Unconscionable
03/11/2022
Slightly Unconscionable
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI – Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to, but distinct from, DARPA’s OFFSET demo [4:11]. A ‘pitch deck’ from Clearview AI reveals its intent to expand beyond law enforcement and to have 100B facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations based on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and Lancaster University shows that humans can no longer distinguish between real and fake (GAN-generated) faces [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI look at trends in machine learning computation, identifying three eras and trends, including a ‘large-scale model’ trend in which large corporations conduct massive training runs [13:37]. A tweet from the chief scientist at OpenAI, speculating on the ‘slightly conscious’ attribute of today’s large neural networks, sparks much discussion [17:23]. A white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, placing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes an open-access book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30]. https://www.cna.org/CAAI/audio-video
/episode/index/show/aionai/id/22405397
info_outline
Short Circuit RACER
02/25/2022
Short Circuit RACER
Andy and Dave discuss the latest in AI news and research, starting with the Aircrew Labor In-Cockpit Automation System (ALIAS) program from DARPA, which flew a UH-60A Black Hawk autonomously and without pilots on board, including autonomous (simulated) obstacle avoidance [1:05]. Another DARPA program, Robotic Autonomy in Complex Environments with Resiliency (RACER), entered its first phase, focused on high-speed autonomous driving in unstructured environments, such as off-road terrain [2:39]. The National Science Board releases its State of U.S. Science and Engineering 2022 report, which shows the U.S. continues to lose its leadership position in global science and engineering [4:30]. The Undersecretary of Defense for Research and Engineering, Heidi Shyu, formally releases her office’s technology priorities, 14 areas grouped into three categories: seed areas, effective adoption areas, and defense-specific areas [6:31]. In research, OpenAI creates InstructGPT in an attempt to better align language models to follow human instructions, resulting in a model with 100x fewer parameters than GPT-3 that produced a user-favored output 70% of the time, though it still suffers from toxic output [9:37]. DeepMind releases AlphaCode, which has succeeded in programming competitions with an average ranking in the top 54% across 10 contests with more than 5,000 participants each, though it approaches the problem through more of a brute-force approach [14:42]. DeepMind and EPFL’s Swiss Plasma Center also announce they have used reinforcement learning algorithms to control nuclear fusion (commanding the full set of control coils of a tokamak magnetic controller). Venture City publishes Timelapse of AI (2028 – 3000+), imagining how the next 1,000 years will play out for AI and the human race [18:25].
And finally, with the Russia-Ukraine conflict continuing to evolve, CNA’s Russia Program experts Sam Bendett and Jeff Edmonds return to discuss what Russia has in its inventory when it comes to autonomy and how they might use it in this conflict, wrapping up insights from their recent paper on Russian Military Autonomy in a Ukraine Conflict [22:52]. Listener Note: The interview with Sam Bendett and Jeff Edmonds was recorded on Tuesday, February 22 at 1 pm. At the time of recording, Russia had not yet launched a full-scale invasion of Ukraine.
/episode/index/show/aionai/id/22254659
info_outline
Xenopus in Boots
02/11/2022
Xenopus in Boots
Andy and Dave discuss the latest in AI news and research, including a report from the School of Public Health in Boston that shows why most “data for good” initiatives failed to impact the COVID-19 health crisis [0:45]. The Department of Homeland Security tests the use of robot dogs (from Ghost Robotics) for border patrol duties [5:00]. Researchers find that public trust in AI varies greatly depending on its application [7:52]. Researchers from Stanford University and Toyota Research Institute find extensive label and model errors in training data, such as over 70% of validation scenes (for publicly available autonomous vehicle datasets) containing at least one missing object box [12:05]. And principal researchers Josh Bongard and Mike Levin join Andy and Dave for more discussion on the latest Xenobots research [18:21]. Follow the link below to visit our website and explore the links mentioned in this episode. https://www.cna.org/CAAI/audio-video
/episode/index/show/aionai/id/22103801
info_outline
Xenadu
01/28/2022
Xenadu
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, including one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA’s GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announced DoD’s technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opened an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT’s CSAIL publishes (worrisome) research on convolutional neural networks that still achieve high accuracy even in the absence of “semantically salient features” (such as when most of the image is grayed out); the research also contains a useful list of known image classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, published a white paper surveying recent developments in Collective Intelligence for Deep Learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian Optimization. And Doug Blackiston returns to chat about the latest discoveries in the Xenobots research and kinematic self-replication [21:54].
/episode/index/show/aionai/id/21933569
info_outline
Three Amecas!
01/14/2022
Three Amecas!
Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with “natural-looking” expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented reality environments [13:16]. In research, scientists at Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g., left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong via electric signals and demonstrate that the cells learn much faster than current neural networks, reaching after 10 or 15 rallies the point that computer-based AIs reach after 5,000 rallies [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a 3.5B-parameter text-to-image generation model that generates even higher quality images than DALL-E, though it still has trouble with “highly unusual” scenarios [29:30].
The Santa Fe Institute publishes The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic, 800 pages on how complexity interwove through the pandemic [33:50]. And Chris Peter presents an algorithm that created a short movie after “watching” Hitchcock’s Vertigo 20 times [35:22]. Please visit our website to explore the links mentioned in this episode.
/episode/index/show/aionai/id/21778556
info_outline
Rebroadcast: AI Today, Tomorrow, & Forever
12/31/2021
Rebroadcast: AI Today, Tomorrow, & Forever
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education.
/episode/index/show/aionai/id/21644711
info_outline
Is it alive or is it Xeno-rex?
12/17/2021
Is it alive or is it Xeno-rex?
Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating the new position of Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, helping to monitor and evaluate, and protecting the environment [3:26]. European Digital Rights and 119 civil society organizations launch a collective call for an AI Act to articulate fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher attempts to temper the claims’ utility [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in Human-Machine Teaming, why it’s important, and where the research will be going [21:44].
/episode/index/show/aionai/id/21522251