Idea Machines
Idea Machines is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see the outputs of innovation systems everywhere but rarely dig into how they work. Idea Machines digs below the surface into crucial but often unspoken questions to explore themes of how we enable innovations today and how we could do it better tomorrow. Idea Machines is hosted by Benjamin Reinhardt.
Speculative Technologies with Ben Reinhardt [Macroscience cross-post]
05/27/2024
Tim Hwang turns the tables and interviews me (Ben) about Speculative Technologies and research management.
Industrial Research with Peter van Hardenberg [Idea Machines #50]
02/10/2024
Peter van Hardenberg talks about Industrialists vs. Academics, Ink & Switch's evolution over time, the Hollywood Model, internal lab infrastructure, and more! Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab. References (and their many publications) Transcript Peter van Hardenberg [00:01:21] Ben: Today I have the pleasure of speaking with Peter van Hardenberg. Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab. I talked to Adam Wiggins, one of Ink & Switch's founders, [00:01:35] way back in episode number four. It's amazing to see the progress they've made as an organization. They've built up an incredible community of fellow travelers and consistently released research reports that gesture at possibilities for computing that are orthogonal to the current hype cycles. Peter frequently destroys my complacency with his ability to step outside the way that research is normally done and ask: how should we be operating, given our constraints and goals? I hope you enjoy my conversation with Peter. Would you break down your distinction between academics and industrialists? [00:02:08] Peter: Okay. Academics are people whose incentive structure is connected to the institutional rewards of the publishing industry, right? You publish papers and you get tenure, and, like, it's not so cynical or reductive, but fundamentally the time cycles are long, right? Like, you have to finish work according to when, you know, submission deadlines for a conference are. You're [00:02:35] working on something now; you might come back to it next quarter or next year or in five years, right? Whereas when you're in industry, you're connected to users, you're connected to people at the end of the day who need to touch and hold and use the thing. And, you know, you have to get money from them to keep going.
And so you have a very different perspective on, like, time and money and space and what's possible. And the real challenge in terms of connecting these two... you know, I didn't invent the idea of pace layers, right? They operate at different pace layers. Academia is often intergenerational, right? Whereas industry is like, you have to make enough money every quarter to keep the bank account from going below zero or everybody goes home. [00:03:17] Ben: Right. Was it Stewart Brand who invented pace layers? [00:03:22] Peter: I believe it was Stewart Brand. Pace layers. Yeah. [00:03:25] Ben: I'd actually never put those two together, but I think about impedance mismatches between [00:03:35] organizations a lot. And that really clicks with pace layers exactly right. Where it's like [00:03:39] Peter: Yeah, absolutely. And I think in a big way what we're doing at Ink & Switch, on some level, is trying to provide, like, synchromesh between academia and industry, right? Because the academics are moving on a time scale and with an ambition that's hard for industry to match, right? But also academics, often I think in computer science, have a shortage of good understanding about what the real problems people are facing in the world today are. They're not disinterested. [00:04:07] Ben: Just computer science? [00:04:08] Peter: Those communication channels don't exist cuz they don't speak the same language, they don't use the same terminology, they don't go to the same conferences, they don't read the same publications. Right. [00:04:18] Ben: Yeah. [00:04:18] Peter: And so, vice versa, you know, we find things in industry that are problems, and then it's like, you go read the papers and talk to some scientists, and it's like, oh dang, we know how to solve this. It's just nobody's built it. [00:04:31] Ben: Yeah.
[00:04:32] Peter: Or more accurately, it would be to say [00:04:35] there's a pretty good hunch here about something that might work, and maybe we can connect the two ends of this together. [00:04:42] Ben: Yeah. Often I think of it as: someone has a quote-unquote solved problem, but there are a lot of quote-unquote implementation details, and those implementation details require a year of work. [00:04:56] Peter: Yeah, a year or many years? Or an entire startup, or a whole career or two? Yeah. Ben: And speaking of Ink & Switch, I don't know if we've ever talked about... so Ink & Switch has been around for more than half a decade, right? [00:05:14] Peter: Yeah, seven or eight years now, I think. I could probably get the exact number, but yeah, about that. [00:05:19] Ben: And I don't have a good idea in my head of what, over that time, has changed about Ink & Switch's conception of itself and, like, how you do things. What are some of the biggest things that have changed over that time? [00:05:35] Peter: So I think a lot of it could be summarized as professionalization. But I'll give a little brief history. Ink & Switch began because the, you know, original members of the lab wanted to do a startup. That was Adam, James, and Orion. But they recognized that they weren't happy with computing and where computers were, and they knew that they wanted to make something that would be a tool that would help people who were solving the world's problems work better. That's kind of a vague one, but, you know, they were like, well, we're not physicists, we're not social scientists. You know, we can't solve climate change or radicalization directly, or, you know, the journalism crisis or whatever, but maybe we can build tools, right? We know how to make software tools. Let's build tools for the people who are solving the problems.
Because right now a lot of those systems they rely on are getting, like, steadily worse every day. And I think they still are: the move to the cloud, disempowerment of the individual, like, you [00:06:35] know, surveillance technology, distraction technology. And Tristan Harris is out there now, like, hammering on some of these points. But there's just a lot of things that are, like, slow and fragile and bad and not fun to work with, and that lose your, you know, lose your work product. You know, [00:06:51] Ben: Yeah, software as a service more generally. [00:06:54] Peter: Yeah. And, like, there's definitely advantages. It's not like, you know, people are rational actors, but something was lost. And so the idea was, well, go do a bit of research, figure out what the shape of the company is, and then just start a company and, you know, get it all solved and move on. And I think the biggest difference, at least, you know, aside from scale and actual knowledge, is just kind of the dawning realization at some point that there won't really be an end state to this problem. Like, this isn't a thing that's transitional, where you kind of come in and you do some research for a bit, and then we figure out the answer and, like, fold up the card table and move on to the next thing. It's like, oh no, this thing's gotta stick around, because these problems aren't gonna [00:07:35] go away. And when we get through this round of problems, we already see what the next round are. And that's probably gonna go on for longer than any of us will be working. And so the vision now, at least from my perspective as the current lab director, is much more like: how can I get this thing to a place where it can sustain for 10 years, for 50 years, however long it takes? And, you know, to become a place that has a culture that can sustain, you know, grow and change as new people come in, but that can sustain operations indefinitely. [00:08:07] Ben: Yeah. And so to circle back to the.
The jumping off point for this, which is: since it began, what have been some of the biggest changes in how you operate? Or just, like, the model more generally, or things that you were [00:08:30] Peter: Yeah, so the beginning was very informal. So maybe I'll skip over the first, like, [00:08:35] little period where it was just sort of, like, finding our footing. But around the time when I joined, we were just four or five people. And we did one project, all of us together, at a time. Someone would write a proposal for what we should do next, and then we would argue about whether it was the right next thing, and, you know, eventually we would pick a thing, and then we would go and do that project, and we would bring in some contractors. And we called it the Hollywood model. We still call it the Hollywood model, because it was sort of structured like a movie production. To our little core team, we'd bring in a couple specialists, you know, the equivalent of a director of photography or, like, a casting director or whatever. You bring in the people that you need to accomplish the task. Oh, we don't know how to do Bluetooth on the web? Okay, find a Bluetooth person. Oh, there's a bunch of crypto stuff, cryptography stuff, on this upcoming project? We better find somebody who knows, you know, the ins and outs of, like, which cryptography algorithms to use, or how to build stuff in C# for the Windows platform or Surface, whatever the project was. Over time, you know, we got pretty good at that, and I think one of the biggest changes, sort of after we kind of figured out how to actually do work, was the realization that.
Writing about the work not only gave us a lot of leverage in terms of our visibility in the community and our ability to attract talent, but also, the more we put into the writing, the more we learned about the research. Before, we would do something and then write a little internal report and then move on. But the process of taking the work that we do and making it legible to the outside world, and explaining why we did it and what it means and how it fits into the bigger picture: being very diligent and thorough in documenting all of that greatly increases our own understanding of what we did. [00:10:35] And that was, like, a really pleasant and interesting surprise. I think one of my concerns as lab director is that we got really good at that, and we write all these, like, obscenely long essays that people claim to read, you know, that Hacker News comments on extensively without reading. But I always worry about the orthodoxy of doing the same thing too much and whether we're sort of falling into patterns, so we're always tinkering with new kinds of project systems or new ways of working or new kinds of collaborations. And so, yeah, that's ongoing. But the key elements of our system are: we bring together a team that has both longer-term people with domain context about the research and any required specialists who understand interesting or important technical aspects of the work. And then we have a specific set of goals to accomplish [00:11:35] with a very strict time box. And then, when it's done, we write and we put it down. And I think this avoids a number of the real pitfalls in more open-ended research. It has its own shortcomings, right? But one of the big pitfalls it avoids is the kind of, like, meandering off and losing sight of what you're doing. And you can get great results from that in kind of a general research context.
But we're very much an industrial research context. We're trying to connect real problems to specific directions to solve them. And so the time box kind of creates the fear of death. You're like, well, I don't wanna run outta time and not have anything to show for it. So you really get focused on trying to deliver things. Now, sometimes that's at the cost of, like, the breadth or ambition of a solution to a particular thing, but I think it helps us really keep moving forward. [00:12:21] Ben: Yeah, and you no longer have everybody in the lab working on the same projects, right? [00:12:28] Peter: Yeah. So today, at any given time, the population of the lab fluctuates between, sort of, [00:12:35] like, eight and 15 people, depending on, you know, whether we have a bunch of projects in full swing, or, you know, how you count contractors. But at the moment we have, sort of, three tracks of research that we're doing. And those are local-first software, programmable ink, and malleable software. [00:12:54] Ben: Nice. And so I actually have questions both about the write-ups that you do and the Hollywood model. So, on the Hollywood model: do you think that the Hollywood model working in an industrial research lab is particular to software, in the sense that I feel like in the software industry people change jobs fairly frequently, contracting is really common, and contractors are fairly fluid? [00:13:32] Peter: You mean in terms of being able to staff and source people? [00:13:35] Ben: Yeah, and people take, like, these long sabbaticals, right? Where it's, like, not uncommon in the software industry for someone to take six months between jobs. [00:13:45] Peter: I think it's very hard for me to generalize about the properties of other fields, so I want to try and be cautious in my evaluation here.
What I would say is that I think the general principle of having a smaller core of longer-term people, who think and gain a lot of context about a problem, and pairing them up with people who have fresh ideas and relevant expertise, does not require you to have any particular industry structure. Right? There are lots of ways of solving this problem. Go to another research organization and write a paper with someone from [00:14:35] an adjacent field, if you're in academia, right? If you're in a company, you can do a partnership, you know, hire... you know, I think a lot of fields of science have much longer cycles, right? If you're doing material science, you know, it takes a long time to build test apparatus and to formulate chemistries. Like, you might need [00:14:52] Ben: Yeah. [00:14:52] Peter: someone for several years, right? Like, that's fine. Get a detachment from another part of the company and bring someone in as a secondment. Like, I think the general principle, though, of putting together a mixture of longer- and shorter-term people with the right set of skills: yes, we solve it a particular way in our domain, but I don't think that's unique to software. [00:15:17] Ben: Would it be overreaching to map that onto professors and postdocs and grad students, where you have the professor, who is the person who's been working on the program for a long time and has all the context, and then you have postdocs and grad students [00:15:35] coming through the lab? [00:15:38] Peter: Again, I need to be thoughtful about how I evaluate fields that I'm less experienced with, but both my parents went through grad school and I've certainly gotten to know a number of academics. My sense of the relationship between professors and their PhD students is that it's much more likely that the PhD students are given, sort of, a piece of the professor's vision to execute. [00:16:08] Ben: Yeah.
[00:16:09] Peter: And that is more about scaling the research interests of the professor. And I don't mean this in, like, a negative way, but I think it's quite different [00:16:21] Ben: different. [00:16:22] Peter: than, like, how DARPA works or how Ink & Switch works with our research tracks, in that it's a bit more prescriptive, and it's a bit more of, like, a mentor-mentee kind of relationship. [00:16:33] Ben: Yeah. More training. [00:16:35] Peter: Yeah. And, you know, that's great. I mean, postdocs are a little different again, but I think that's different than, say, how DARPA works, or, like, other institutional research groups. [00:16:49] Ben: Yeah. Okay. I wanted to see how far I could stretch the analogy. [00:16:55] Peter: Well, in academia there's famous stories about Erdős, who would turn up on your doorstep, you know, with a suitcase and a bottle of amphetamines and say, my brain is open, or something to that effect. And then you'd co-author a paper and pay his room and board until you found someone else to send him to. I think that's closer, in the sense that, right, like, here's this great problem solver with a lot of domain skills, and he would parachute into a place where someone was working on something interesting and help them make a breakthrough with it. [00:17:25] Ben: Yeah. I think the thing that I want to figure out, just, you know, longer term, is how to make those [00:17:35] short-term collaborations happen. Like, I think there's some Coasean tension, in the sense of, like, Ronald Coase, around organizational boundaries when you have people coming in and doing things in a temporary sense. [00:17:55] Peter: Yeah, academia is actually pretty good at this, right? With, like, paper co-authors. I mean, again, this is, like, the pace layers thing.
When you have a whole bunch of people organized in an industry and a company around a particular outcome, you tend to have very specific goals and commitments, and you're trying to execute against those, and it's much harder to get that kind of more fluid movement between domains. [00:18:18] Ben: Yeah, and [00:18:21] Peter: That's why I left working in companies, right? Cause, like, I have run engineering processes and built products and teams, and it's like, someone comes to me with a really good idea and I'm like, oh, it's potentially very interesting, but, like, [00:18:33] Ben: but we [00:18:34] Peter: We got [00:18:35] customers who have outages who are gonna leave if we don't fix the thing, we've got users falling out of our funnel cause we don't do basic stuff. Like, you just really have a lot of work to do to make the thing go [00:18:49] Ben: Yeah. [00:18:49] Peter: as a business. And, you know, my experience of research labs within businesses is that they're almost universally unsuccessful. There are exceptions, but I think they're more coincidental than designed. [00:19:03] Ben: Yeah. And I think less and less successful over time is my observation. [00:19:11] Peter: Interesting. [00:19:12] Ben: Yeah, there's a great paper that I will send you called, what is the name? Oh, "The Changing Structure of American Innovation" by Ashish Arora. I actually did a podcast with him because I liked the paper so much. [00:19:35] And so, going back to your amazing write-ups: you all have clearly invested quite a chunk of time and resources into some amount of internal infrastructure for making those really good. And I wanted to get a sense of, like, how do you decide when it's worth investing in internal infrastructure for a lab? [00:19:58] Peter: Ooh. Ah, that's a fun question. At least at Ink & Switch, it's always been, like, sort of demand-driven.
I wish I could claim to be more strategic about it, but, like, we had all these essays; they were actually all hand-coded HTML at one point. You know, real indie cred there. But it was a real pain when you needed to fix something or change something, cause you had to go and, you know, edit all this HTML. So at some point we were doing a smaller project and I built, like, a Hugo templating thing [00:20:35] just to do some lab notes, and I faked it. And I guess this is actually maybe a somewhat common thing, which is: you do one in a one-off way, and then, if it's promising, you invest more in it. [00:20:46] Ben: Yeah. [00:20:46] Peter: And it ended up being a bigger project to build a full-on... I mean, it's not really a CMS, it's sort of a...
MACROSCIENCE with Tim Hwang [Idea Machines #49]
11/27/2023
A conversation with Tim Hwang about historical simulations, the interaction of policy and science, analogies between research ecosystems and the economy, and so much more. Topics: Historical simulations; Macroscience; Macro-metrics for science; Long science; The interaction between science and policy; Creative destruction in research; "Regulation" for scientific markets; Indicators for the health of a field or science as a whole; "Metabolism of science"; Science rotation programs; Clock speeds of regulation vs. clock speeds of technology. References Transcript [00:02:02] Ben: Wait, so tell me more about the historical LARP that you're doing. [00:02:07] Tim: Oh, yeah. So this comes from something I've been thinking about for a really long time. You know, in high school I did Model UN and Model Congress, and... actually, this is still on my to-do list: to look into the back history of how that became an extracurricular in American history. It has all the vibe of, like, after World War II, the UN is a new thing, we've got to teach kids about international institutions. Anyways, it started as a joke where I was telling my [00:02:35] friend, like, we should have, like, a model administrative agency. You know, kids should do, like, model EPA. Like, we're gonna do a rulemaking, kids need to submit comments, and, like, you know, there'll be Chevron deference and you can challenge the rule, and, like, do that whole thing. Anyways, it kind of led me down this idea that our notion of simulation, particularly for institutions, is interestingly narrow, right? And particularly when it comes to historical simulation, where, like, well, we have Civil War reenactors, who are kind of a weird dying breed, but they're there, right?
But we don't have, like, other types of historical reenactments, and it might be really valuable and interesting to create communities around that. And so, like I was saying before we started recording, I really want to do one that's a simulation of the Cuban Missile Crisis. But, like, a serious one, like you would a historical reenactment, right? Yeah. Everybody would really know their characters. You know, if you're McNamara, you really know what your motivations are and your background. And literally the dream would be a weekend simulation where you have three teams. One would be the Kennedy administration. The other would be, you know, Khrushchev [00:03:35] and the Presidium. And the final one would be the Cuban government. Yeah. And to really, just blow by blow, simulate that entire thing. You know, the players would attempt to not blow up the world, would be the idea. [00:03:46] Ben: I guess that's actually the thing to poke at, in contrast to Civil War reenactment. [00:03:51] Tim: Sure, like, you know how that's gonna end. [00:03:52] Ben: Right, and I think that's the difference, maybe, in my head, between a simulation and a reenactment, where I could imagine a simulation going [00:04:01] Tim: differently. Sure, right. [00:04:03] Ben: Right. And maybe, like, is the goal to make sure the same thing happened that did happen, or is the goal to act as faithfully to [00:04:14] Tim: the character as possible. Yeah, I think that's right, and I think both are interesting and valuable, right? But one of the things I'm really interested in is, you know, I want to simulate all the characters, but, like, I think one of the most interesting things reading the historical record is just, like, operating under deep uncertainty about what's even going on, right?
Like, for a period of time, the American [00:04:35] government is not even sure what's going on in Cuba. And there's this whole question of, like, well, do we preemptively bomb Cuba? We don't even know if the warheads on the island are active. And I think I would want to create similar uncertainty, because I think that's where the strategic vision comes in, right? You have the full pressure of: maybe there's bombs on the island, maybe there's not even bombs on the island, right? And kind of creating that dynamic. And so I think simulation is where there's a lot of value, but I think even reenactment for some of these things is sort of interesting. Like, we talk a lot about, like, oh, the Cuban Missile Crisis. Or, the other joke I had was, we should do the Manhattan Project, but the Manhattan Project as, like, historical reenactment, right? And, you know, we have these very off-the-cuff, kind of stereotyped visions of how these historical events occur. And they're very stylized. Yeah, exactly, right. And so the benefit of a reenactment that is really in detail [00:05:35] is, like, oh yeah, there's this one weird moment, you know, that ends up being a really revealing historical example. And so even if you can't change the outcome, I think there's also a lot of value in just doing the exercise. Yeah. [00:05:40] Ben: Yeah, the thought of: in order to drive towards this outcome that I know actually happened, I would, as the character, have needed to do X. That's, like, a weird, nuanced, unintuitive thing, [00:05:50] Tim: right?
Right. And there's something I think about even building into the game, right, which is: at the very beginning, the Russian team can make the decision on whether or not they've even actually deployed weapons into Cuba at all, yeah, right? And so, like, I love that kind of outcome, right? Because, like, a lot of this happens on the background of: we know the history. Yeah. Right? And so, having the US team put under some pressure of uncertainty, yeah, about, like, oh yeah, they could have made the decision at the very beginning of this game that this is all a bluff, doesn't mean anything. Like, it's potentially really interesting and powerful, so. [00:06:22] Ben: One precedent I know for this, from a completely different historical era: there's a historian, Ada Palmer, who runs a simulation of a papal election in her class every year. [00:06:30] Tim: That's so good. [00:06:36] Ben: And, you know, it is not a simulation. [00:06:40] Tim: Or, [00:06:41] Ben: sorry, excuse me, it is not a reenactment, in the sense that the outcome is indeterminate. [00:06:47] Tim: Like, the students [00:06:48] Ben: can determine the outcome. But what tends to happen is, like, structural factors emerge, in the sense that there's always a war. Huh. The question is: who's on which sides of the war? Right, right. And what do the outcomes of the war actually entail? That's right. Who [00:07:05] Tim: dies? Yeah, yeah. And I [00:07:07] Ben: find that it sort of gets at the heart of the great [00:07:12] Tim: man theory versus the structural forces theory. That's right. Yeah. Like, how much can these structural forces actually be changed? Yeah.
And I think that's one of the most interesting parts of the design that I'm thinking about right now: what are the things that you want to randomize to impose different types of structural factors that could have been in that event? Right? Yeah. So, like, one of the really big parts of the debate at ExComm in the [00:07:35] early phases of the Cuban Missile Crisis is, you know, McNamara, who runs the Department of Defense at the time. His point is basically, like, look, whether you have bombs in Cuba or you have bombs in Russia, the situation has not changed from a military standpoint. Like, you can fire an ICBM; it has exactly the same implications for the U.S. And that's basically his argument in the opening phases of the Cuban Missile Crisis. Yeah. Which is actually pretty interesting, right? Because that's true. But, like, Kennedy can't just go to the American people and say, well, we've already had missiles pointed at us; some more missiles off, you know, the coast of Florida is not going to make a difference. Yeah. And so, like, that deep politics, and particularly the politics of the Kennedy administration being seen as weak on communism, yeah, is, like, a huge pressure on all the activity that's going on. And so it's almost kind of interesting thinking about the Cuban Missile Crisis not as, like, you know, us about to blow up the world because of a truly strategic situation, but more because, like, the local politics make it so difficult to create, you know, situations where both sides can back down [00:08:35] successfully, basically. Yeah. [00:08:36] Ben: The one other thing that my mind goes to, actually, to your point about Model UN in schools, huh, right, is: okay, what if you use this as a pilot, and then you get people to do these [00:08:49] Tim: simulations at [00:08:50] Ben: scale. Huh. And that's actually how we start doing historical counterfactuals. Huh.
Where you look at, okay, you know, a thousand schools all did a simulation of the Cuban Missile Crisis, and in those, you know, 700 of them blew [00:09:05] Tim: up the world. Right, right. [00:09:07] Ben: And I think that's the closest [00:09:10] Tim: thing you can get to, like, running the tape again. Yeah. I think that's right. And yeah, so I think it's a really underused medium in a lot of ways. And particularly, you know, just talking pedagogically: it's interesting that there seems to me to have been a moment in American pedagogical history where, like, this was a good way of teaching kids about different types of institutions. But it [00:09:35] hasn't really matured since that point, right? Of course, we live in all sorts of interesting institutions now, and under all sorts of different systems that we might really want to simulate. Yeah. And so, yeah, there's this whole idea that there's lots of things you could teach if we kind of opened up this way of thinking about educating about institutions. Right? So [00:09:54] Ben: That is so cool. Yeah, I'm going to completely [00:09:59] Tim: change. Sure. Of course. [00:10:01] Ben: So I guess, and the answer could be no, but are there connections between this and your sort of newly launched Macroscience [00:10:10] Tim: project? There is and there isn't. Yeah, you know, I think the whole bet of Macroscience, which is this project that I'm doing as part of my IFP fellowship, yeah, is really the notion that, like, okay, we have all these sort of interesting results that have come out of metascience that kind of give us, like, the beginnings of a shape of: okay, this is how science might work and how we might get progress to happen. And, you know, we've got [00:10:35] a bunch of really compelling hypotheses. Yeah.
And I guess my bit has been like, I kind of look at that and I squint and I'm like, we're, we're actually like kind of in the early days of like macro econ, but for science, right? Which is like, okay, well now we have some sense of like the dynamics of how the science thing works. What are the levers that we can start, like, pushing and pulling, and like, what are the dials we could be turning up and turning down? And, and, you know, I think there is this kind of transition that happens in macro econ, which is like, we have these interesting results and hypotheses, but there's almost another generation of work that needs to happen into being like, oh, you know, we're gonna have this thing called the interest rate. Yeah, and then we have all these ways of manipulating the money supply, and like, this is a good way of managing like this economy. Yeah, right. And, and I think that's what I'm chasing after with this kind of like Substack, but hopefully the idea is to build it up into like a more coherent kind of framework of ideas about like, how do we make science policy work in a way that's better than just like more science now quicker, please? Yeah, right. Which is, I think, where [00:11:35] we're very much at at the moment. Yeah, and in particular I'm really interested in the idea of chasing after science almost as like a dynamic system, right? Which is that like the policy levers that you have, you would want to, you know, tune up and tune down, strategically, at certain times, right? And just like the way we think about managing the economy, right? Where you're like, you don't want the economy to overheat. You don't want it to be moving too slow either, right? Like, I am interested in kind of like, those types of dynamics that need to be managed in science writ large. And so that's, that's kind of the intuition of the project. [00:12:04] Ben: Cool.
I guess, like, looking at macro, how did we even decide, macro econ, [00:12:14] Tim: how did we even decide that the things that we're measuring are the right things to measure? Right? Like, [00:12:21] Ben: isn't it, it's like kind of a historical contingency that, you know, it's like we care about GDP [00:12:27] Tim: and the interest rate. Yeah. I think that's right. I mean, in, in some ways there's a triumph of like, it's a normative triumph, [00:12:35] right, I think is the argument. And you know, I think a lot of people, you hear this argument, and it'll be like, ah, all econ is made up. But like, I don't actually think that like, that's the direction I'm moving in. It's like, it's true. Like, a lot of the things that we selected are arguably arbitrary. Yeah. Right, like we said, okay, we really value GDP because it's like a very imperfect but rough measure of like the economy, right? Yeah. Or like, oh, we focus on, you know, the money supply, right? And I think there's kind of two interesting things that come out of that. One of them is like, there's this normative question of like, okay, what are the building blocks that we think can really shift the financial economy writ large, right, of which money supply makes sense, right? But then the other one I think which is so interesting is like, there's a need to actually build all these institutions that actually give you the lever to pull in the first place, right? Like, without a federal reserve, it becomes really hard to do monetary policy. Right. Right? Like, without a notion of, like, fiscal policy, it's really hard to do, like, Keynesian, as, like, demand side stuff. Right. Right? And so, like, I think there's another project, which is a [00:13:35] political project, to say... Okay, can we do better than just grants? Like, can we think about this in a more, like, holistic way than simply we give money to the researchers to work on certain types of problems.
And so this kind of leads to some of the stuff that I think we've talked about in the past, which is like, you know, so I'm obsessed right now with like, can we influence the time horizon of scientific institutions? Like, imagine for a moment we had a dial where we're like, on average, scientists are going to be thinking about a research agenda which is 10 years from now versus next quarter. Right. Like, and I think like there's, there's benefits and deficits to both of those settings. Yeah. But man, if I don't hope that we have a, a, a government system that allows us to kind of dial that up and dial that down as we need it. Right. Yeah. The, the, [00:14:16] Ben: perhaps, like, I guess a question of like where the analogy holds and breaks down that I, that I wonder about is, when you're talking about the interest rate for the economy, it kind of makes sense to say [00:14:35] what is the time horizon that we want financial institutions to be thinking on. That's like roughly what the interest rate is for, but it, and maybe this is, this is like, I'm too, [00:14:49] Tim: my note, like I'm too close to the macro, [00:14:51] Ben: but thinking about the fact that you really want people doing science on like a whole spectrum of timescales. And, and like, this is an ill-phrased question, [00:15:06] Tim: but like, I'm just trying to wrap my mind around it. Are you saying basically like, do uniform metrics make sense? Yeah, exactly. For [00:15:12] Ben: like timescale, I guess maybe it's just an aggregate thing. [00:15:16] Tim: Is that? That's right. Yeah, I think that's, that's, that's a good critique. And I think, like, again, I think there's definitely ways of taking the metaphor too far. Yeah. But I think one of the things I would say back to that is it's fine to imagine that we might not necessarily have an interest rate for all of science, right?
So, like, you could imagine saying, [00:15:35] okay, for grants above a certain size, like, we want to incentivize certain types of activity. For grants below a certain size, we want different types of activity. Right, another way of slicing it is for this class of institutions, we want them to be thinking on these timescales versus those timescales. Yeah. The final one I've been thinking about is another way of slicing it is, let's abstract away institutions and just think about what is the flow of all the experiments that are occurring in a society? Yeah. And are there ways of manipulating, like, the relative timescales there, right? And that's almost like, kind of like a supply based way of looking at it, which is... All science is doing is producing experiments, which is like true macro, right? Like, I'm just like, it's almost offensively simplistic. And then I'm just saying like, okay, well then like, yeah, what are the tools that we have to actually influence that? Yeah, and I think there's lots of things you could think of. Yeah, in my mind. Yeah, absolutely. What are some, what are some that you're thinking of? Yeah, so I think like the two that I've been playing around with right now, one of them is like the idea of like, changing the flow of grants into the system. So, one of the things I wrote about in Macroscience just the past week was to think [00:16:35] about, like, sort of what I call long science, right? And so the notion here is that, like, if you look across the scientific economy, there's kind of this rough, like, correlation between size of grant and length of grant. Right, where so basically what it means is that like long science is synonymous with big science, right? You're gonna do a big ambitious project. Cool.
You need lots and lots and lots of money. Yeah. And so my kind of like piece just briefly kind of argues like, but we have these sort of interesting examples, like the, you know, like Framingham Heart Study, which are basically like low expense, taking place over a long period of time, and you're like, we don't really have a whole lot of grants that have that. Yeah. Right? And so the idea is like, could we encourage that? Like imagine if we could just increase the flow of those types of grants, that means we could incentivize more experiments that take place like at low cost over long term. Yeah. Right? Like, you know, and this kind of gets this sort of interesting question is like, okay, so what's the GDP here? Right? Like, or is that a good way of cracking some of the critical problems that we need to crack right now? Right? Yeah. And it's kind of where the normative part gets into [00:17:35] it is like, okay. So, you know, one way of looking at this is the national interest, right? We say, okay, well, we really want to win on AI. We really want to win on, like, bioengineering, right? Are there problems in that space where, like, really long term, really low cost is actually the kind of activity we want to be encouraging? The answer might be no, but I think, like, it's useful for us to have, like, that color in our palette of things that we could be doing. Yeah. In like...
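The size-versus-length framing above can be pictured as a two-by-two: bucket each grant by cost and by duration, and look at how populated each quadrant is. A toy sketch (the grant list, cutoffs, and dollar figures are all invented for illustration, not from any real portfolio):

```python
# Bucket a grant portfolio by cost and duration to surface the
# under-populated "long science" quadrant: low cost, long duration.
GRANTS = [
    # (name, cost in $M, duration in years) -- made-up examples
    ("big ambitious project", 50.0, 10),
    ("standard project grant", 2.0, 4),
    ("small pilot study", 0.3, 1),
    ("Framingham-style cohort", 0.5, 25),
]

def quadrant(cost_musd, years, cost_cutoff=5.0, year_cutoff=8):
    """Classify a grant into one of four cost/duration buckets."""
    size = "big" if cost_musd >= cost_cutoff else "small"
    length = "long" if years >= year_cutoff else "short"
    return f"{size}/{length}"

for name, cost, years in GRANTS:
    print(f"{name:24s} -> {quadrant(cost, years)}")
```

In a real portfolio the "small/long" bucket, cheap experiments sustained for decades, is the one Hwang argues is scarce and could be deliberately grown.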
/episode/index/show/ideamachines/id/28798163
info_outline
Idea Machines with Nadia Asparouhova [Idea Machines #48]
10/03/2022
Idea Machines with Nadia Asparouhova [Idea Machines #48]
Nadia Asparouhova talks about idea machines on idea machines! Idea machines, of course, being her framework around societal organisms that turn ideas into outcomes. We also talk about the relationship between philanthropy and status, public goods and more. Nadia is a hard-to-categorize doer of many things: In the past, she spent many years exploring the funding, governance, and social dynamics of open source software, both writing a book about it called “” and putting those ideas into practice at GitHub, where she worked to improve the developer experience. She explored parasocial communities and reputation-based economies as an independent researcher at Protocol Labs and put those ideas into practice as employee number two at Substack, focusing on the writer experience. She’s currently researching what the new tech elite will look like, which forms the base of a lot of our conversation. Completely independently, the two of us came up with the term “idea machines” to describe the same thing — in her words: “self-sustaining organisms that contains all the parts needed to turn ideas into outcomes.” I hope you enjoy my conversation with Nadia Asparouhova. Links Transcript [00:01:59] Ben: I really like your way of, of defining things and sort of bringing clarity to a lot of these very fuzzy words that get thrown around. So, so I'd love to sort of just get your take on how we should think about so a few definitions to start off with. So I, in your mind, what, what is tech, when we talk about like tech and philanthropy, what, what is that, what is that entity? [00:02:23] Nadia: Yeah, tech is definitely a fuzzy term. I think it's best defined as a culture, more than a business industry. And I think, yeah, I mean, tech has been [00:02:35] associated with startups historically, but, but like, I think it's transitioning from being this like pure software industry to being more like, more like a, a way of thinking.
But personally, I don't think I've come across a good definition for tech anywhere. It's kind of, you know? [00:02:52] Ben: Yeah. Do, do you think you could point to some like very sort of like characteristic mindsets of tech that you think really sort of set it apart? [00:03:06] Nadia: Yeah. I think the probably best known would be, you know, failing fast and moving fast and breaking things. I think like the interest in the sort of like David and Goliath model of an individual that is going up against an institution or some sort of complex bureaucracy that needs to be broken apart. Like the notion of disrupting, I think, is a very tech sort of mindset of looking at a problem and saying like, how can we do this better? So it, in a [00:03:35] weird way, tech is, I feel like it's sort of like, especially in relation, in contrast to crypto, I feel like it's often about iterating upon the way things are or improving things, even though I don't know that tech would like to be defined that way necessarily, but when I, yeah, sort of compare it to like the crypto mindset, I feel like tech is kind of more about breaking apart institutions or, or doing, yeah, trying to do things better. [00:04:00] Ben: A, as opposed. So, so could you then dig into the, the crypto mindset by, by contrast? That's a, I think that's a, a subtle difference that a lot of people don't go into. [00:04:10] Nadia: Yeah. Like I think the crypto mindset is a little bit more about building a parallel universe entirely. It's about, I mean, well, one, I don't see the same drive towards creating monopolies in the way that, and I don't know if that was like always a, you know, core value of tech, but I think in practice, that's kind of what it's been of. You try to be like the one thing that is like dominating a market. Whereas with crypto, I think people are [00:04:35] because they have sort of like decentralization as a core value, at least at this stage of their maturity.
It's more about building lots of different experiments or trying lots of different things and enabling people to sort of like have their own little corner of the universe where they can, they have all the tools that they need to sort of like build their own world. Whereas the tech mindset seems to imply that there is only one world, the world is sort of like dominated by these legacy institutions, and it's tech's job to fix those problems. So it's like very much engaged with what it sees as kind of like that, that legacy world or [00:05:10] Ben: Yeah, I, I hadn't really thought about it that way. But that, that totally makes sense. And I'm sure other people have, have talked about this, but do, do you feel that is an artifact of sort of the nature of the, the technology that they're predicated on? Like the difference between, I guess sort of the internet and the, the internet of, of like SaaS and servers, and then the [00:05:35] internet of like blockchains and distributed things. [00:05:38] Nadia: I mean, it's weird. Cause if you think about sort of like early computing days, I don't really get that feeling at all. I'm not a computer historian or a technology historian, so I'm sure someone else has a much more nuanced answer to this than I do, but yeah. I mean, like when I think of like sixties computing or whatever, it, it feels really intertwined with like creating new worlds. And that's why, like, I mean, because crypto is so new, maybe we can only really observe what's happening right now. I don't know that crypto will always look exactly like this in the future. In fact, it almost certainly will not.
So it's hard to know like, what are, like, its core distinct values, but I, I just sort of noticed the contrast right now, at least. But probably, yeah, if you picked a different point in, in tech's history, sort of like pre startups, I guess, and, and pre, or like that commercialization phase or that wealth accumulation phase, it was also much more, I guess, like pie in the sky. Right. But yeah, it feel, it feels like at least the startup mindset, or like whenever that point of [00:06:35] history started, all this sort of like big successes were really about like overturning legacy industries, the, yeah. The term disruption was like such a buzzword. It's about, yeah, taking something that's not working and making it better, which I think is like very intertwined with like programmer mindset. [00:06:51] Ben: It's, yeah, it's true. And I'm just thinking about sort of like my impression of, of the early internet, and it, and it did not have that same flavor. So, so perhaps it's an artifact of like the stage of a culture or ecosystem than like the technology underlying it. I guess [00:07:10] Nadia: And it's strange. Cause I, I feel like, I mean, there are people today who still sort of maybe fetishize, it's too strong a word, but just like embracing that sort of early computing mindset. But it almost feels like a subculture now or something. It doesn't feel, yeah, I don't know. I don't, I don't find that that's like sort of the prevalent mindset in, in tech. [00:07:33] Ben: Well, it, it feels like the, the sort of [00:07:35] like mechanisms that drive tech really do sort of center. I mean, this is my bias, but like, I feel like the, the way that tech is funded is primarily through venture capital, which only works if you're shooting for a truly massive result, and the way that you get a truly massive result is not to build like a little niche thing, but to try to take over an industry. [00:08:03] Nadia: It's about arbitrage [00:08:05] Ben: yeah.
Or, or like, or even not even quite arbitrage, but just like the, the, to like, that's, that's where the massive amount of money is. And, and like, [00:08:14] Nadia: I mean, like, financially. I feel like when I think about the way that venture capital works, it's, it's [00:08:19] Ben: yeah, [00:08:20] Nadia: sort of exploiting, I guess, the, the low margin like cost models. [00:08:25] Ben: yeah, yeah, definitely. And like then using that to like, take over an industry, whereas if maybe like, you're, you're not being funded in a way [00:08:35] that demands that sort of returns, you don't need to take as, as much of a, like, take over the world mindset. [00:08:41] Nadia: Yeah. Although I don't think like those two things have to be at odds with each other. I think it's just like, you know, there's like the R and D phase that is much more academic in nature and much more exploratory, and then venture capital is better suited for the point in which some of those ideas can be commercialized or have a commercial opportunity. But I don't think, yeah, I don't, I don't think they're like fighting with each other either. [00:09:07] Ben: Really? I, I guess, I, I don't know. It's like, so can I, can I, can I disagree and, and sort of say, like, it feels like the, the, the stance that venture type funding comes with, like, forces on people is a stance of like, we are, we might fail, but we're, we're setting out to capture a huge, huge amount of value, and like, [00:09:35] and, and, and just like in order for venture portfolios to work, that needs to be the mindset. And like there, there are other, I mean, there are just like other funding, ways of funding things that sort of like ask for more modest returns. And they can't, I mean, they can't take as many risks. They come with other constraints, but, but like the, the need for those, those power law returns does drive a, the need to be like very ambitious in terms of scale.
[00:10:10] Nadia: I guess, like, what's an example of something that has modest financial returns, but massive social impact, that can't be funded through philanthropy and academia or through venture capital? [00:10:29] Ben: Well, I mean, like, are, I mean, like, I think that there's, [00:10:35] I think that, that, that, [00:10:38] Nadia: or I guess it [00:10:39] Ben: yeah, I think the philanthropy piece is really important. Sorry, go ahead. [00:10:42] Nadia: Yeah. I guess always just like, I feel like it was like different types of funding for different, like, I, I sort of visualized this pipeline of like, yeah, when you're in the R and D phase, venture capital is not for you. There's other types of funding that are available. And then like, you know, when you get to the point where there are commercial opportunities, then you switch over to a different kind of funding. [00:11:01] Ben: Yeah. Yeah, no, I, I definitely agree with that. I, I, I think, I think what we're like where, where, where I was at least talking about is like that, that venture capital, sort of in the tech world, is, is like the, the, the thing, the go-to funding mechanism. [00:11:16] Nadia: Yeah. Yeah. Which is partly why I'm interested in, I guess, idea machines and other sources of funding that feel like they're at least starting to emerge now. Which I think gets back to those kinds of roots, that, I mean, it's actually surprising to me that you can talk to people in tech who don't always make the connection that tech started as an, [00:11:35] you know, academically and government funded enterprise, and not venture capital. Venture capital came along later, right? And so, yeah, maybe we, we're kind of at that point where there's been enough wealth generated that can kind of start that cycle again. [00:11:47] Ben: yeah. And, and speaking of that, another distinction that, that you've made in your writing that I think is really important is the difference between charity and philanthropy.
Do you mind unpacking how you think about that? [00:12:00] Nadia: Yeah. Charity is, is more like direct services. So you're not, there's sort of like a one to one, you put something in, you get sort of similar equal measure back out of it. And there's, I mean, charity is, you know, you can have like emergency relief or disasters or, yeah, just like charitable services for people that need that kind of support. And to me, it's, it's just sort of strange that it always gets lumped in with philanthropy, which is a different enterprise entirely. Philanthropy is more of the early stage pipeline [00:12:35] for it. It's, it's more like venture capital, but for public goods: in the same way that venture capital is very early stage financing for private goods, philanthropy is very early stage financing for public goods. And if those public goods show promise or, yeah, need to be scaled, then you can go to government to get more funding to sustain it. Or maybe there are commercial opportunities, or, you know, there are multiple paths that can, they can branch out from there. But yeah, philanthropy at its heart is about experimenting with really wild and crazy ideas that benefit public society, that could have massive social returns if successful. Whereas charity is not really about risk taking. Charity is really about providing a stable source of financing for those who really need it in the moment. [00:13:21] Ben: And, and the, there's, there's two things I, I, I want to poke at there. Like, so, so you describe philanthropy as like crazy risk taking. Do, do you think that most [00:13:35] philanthropists see it that way? [00:13:37] Nadia: Today? No. And yeah, philanthropy has had this very varied history over the last, like, let's say, like, modern philanthropy in its current form has only really existed since the late 1800s, early 1900s. So we've got whatever, like a hundred, hundred fifty years.
Most of what we think about in philanthropy today, for, you know, most, let's say, adults, that have really only grown up in the phase of philanthropy that you might call, like, late stage modern philanthropy, to be a little cynical about it. And, and part of that has just come from, I mean, just an abridged history of philanthropy, but you know, early on, or premodern philanthropy, we had the, the church, which kind of maybe played more of that, that role, or was that, that force, in both like philanthropic experiments and direct services. And then, like, when, in the age of sort of like, yeah, post-Gilded Age, post-Industrial Revolution, you had people who made a lot of, lot of self-made wealth. And you had people that were experimenting with new ideas [00:14:35] to provide public goods and services to society. And government at the time was not really playing a role in that. And so all that was coming from private citizens and private capital. And so those are, yeah, there was a time in which philanthropy was much more experimental in that way. But then as government sort of stepped in around, you know, the mid-1900s to become sort of like that primary provider and funder of public services, that diminished the role of philanthropy. And then in the late 1960s, foundations just became much more heavily regulated. And I think that was sort of like the turning point where philanthropy went from being this like highly experimental and, and just sort of like aggressive risk taking sort of enterprise to much more like safe, because it was just sort of like hampered by all these like accountability requirements. So yeah, I think like philanthropy today is not representative of what philanthropy has been historically or what it could be. [00:15:31] Ben: And what are, what are some of your favorite, like, weird, [00:15:35] risky, pre-regulation philanthropic things? [00:15:40] Nadia: Oh, I don't do favorites, but [00:15:42] Ben: Oh, okay.
Well what, what are, what are some, some amusing examples of, of risky philanthropic takes? [00:15:51] Nadia: one I mean, [00:15:52] Ben: Take a couple. [00:15:54] Nadia: Probably like the most famous example would be like Carnegie public libraries. So like our public library system started as a privately funded experiment. And for each library that was created, Andrew Carnegie would ask the government, the, the local government or the local community: he would help fund the creation of the libraries, and then the government would have to find a way to, like, continue to sustain it and support it over the years. So it was this nice sort of like, I guess, public private type partnership. But then you have, I mean, also scientific research and public health initiatives that were philanthropically supported and funded. So Rockefeller's eradication of hookworm as a, yeah, public health initiative, finding a cure for yellow fever. Those are some [00:16:35] examples. Yeah. I mean, the public school education system in the South did not exist until there was sort of like an initiative to say, why aren't there public schools in the South, and how do we just create them and, and fund them. So, and then also, like, the state of American private universities, which were sort of modeled after European universities at the time, but also came about after private philanthropists were funding research into understanding, like, why is our American higher education not very good? You know, at the time it was like, not that good compared to the German university models. And so there was a bunch of research that was produced from that. And then they kind of like set out to, yeah, reform American universities. And, yeah. So, I mean, there, there're just like so many examples of people just sort of saying, and, and I think like, I, I, one thing I do wanna caveat is like, I'm not regressive in the sense of, wow, this thing, you know, worked really well a hundred years ago.
And why don't we just do the exact same thing again? I feel like that's like a common pitfall in history. It's not that I think, you know, [00:17:35] everything about the world is completely different today versus, let's say, 1900, but [00:17:39] Ben: in the past. And so it could be different in the [00:17:41] Nadia: exactly, that, that's sort of the takeaway, is like, where we're at right now is not a terminal state, or it doesn't have to be a terminal state. Like philanthropy has been through many different phases, and it can continue to have other phases in the future. They're not gonna look exactly like they did historically, but yeah. [00:17:56] Ben: That, that's, that's such a good distinction. And it goes for, for so many things, where, like, like when you point to historical examples, I don't know, like, I, I think that I, I suffer the same thing, where I, you know, it's like you point to, to historical examples, and it's like, not, it's not bringing up the historical examples to say, like, we should go back to this. It's to say, like, it has been different and it could be different. [00:18:18] Nadia: Something I think about, and this is a little, it just, I don't know. I, I just think of like any, any adult today, in, like, let's say, like, anyone who's like active in the workforce. We're talking about the span of, like, a, you know, like, 30 year institutional memory or something. Like, and so [00:18:35] like anything that we think about, like, what is like possible or not possible, is just like limited by like our biological lifespans. Like anyone you're talking, like, all we ever know is like what we've grown up with in, like, let's say, the last 30 ish years, for anyone.
And so it's like, the reason why it's important to study history is to remind yourself that, like, everything that you know about, you know, what I think about philanthropy right now, based on the inputs I've been given in my lifetime, is very different from if I study history and go, oh, actually it's only been that way for, like, a pretty short amount of time. Only a few decades. [00:19:06] Ben: Yeah, totally. And I, I, I guess this is, this might be a, a slightly, people might disagree with this, but from, from my perspective, there's been sort of less institutional change within the lifetime of most people in, in the workforce, and especially most people in tech, which tends to skew younger, than there was in the past, [00:19:30] Nadia: Yeah. [00:19:32] Ben: like, or, or like to put, put a finer point on it, [00:19:35] like there's, there seems to have been less institutional change in the, like, latter half of the, the 20th century than in the...
/episode/index/show/ideamachines/id/24565185
info_outline
Institutional Experiments with Seemay Chou [Idea Machines #47]
09/01/2022
Institutional Experiments with Seemay Chou [Idea Machines #47]
Seemay Chou talks about the process of building a new research organization, ticks, hiring and managing entrepreneurial scientists, non-model organisms, institutional experiments and a lot more! Seemay is the co-founder and CEO of — a research and development company focusing on under-researched areas in biology and specifically new organisms that haven't been traditionally studied in the lab. She’s also the co-founder of — a startup focused on harnessing molecules in tick saliva for skin therapies and was previously an assistant professor at UCSF. She has thought deeply not just about scientific problems themselves, but the meta questions of how we can build better processes and institutions for discovery and invention. I hope you enjoy my conversation with Seemay Chou. Links Transcript [00:02:02] Ben: So since a lot of our conversation is going to be about it, how do you describe Arcadia to a smart, well-read person who has never actually heard of it before? [00:02:12] Seemay: Okay. I, I actually don't have a singular answer to this. Smart and educated in what realm? [00:02:19] Ben: oh, good question. Let's assume they have taken some undergraduate science classes, but perhaps are not deeply enmeshed in, in academia. So, so like, [00:02:31] Seemay: enmeshed in the meta science community? [00:02:35] Ben: No, no, no, no, but they've, they, they, they, they they're aware that it's a thing, but [00:02:40] Seemay: Yeah. Okay. So for that person, I would say we're a research and development company that is interested in thinking about how we explore under-researched areas in biology, new organisms that haven't been traditionally studied in the lab. And we're thinking from first principles about all the different ways we can structure the organization around this to also yield outcomes around innovation and commercialization. [00:03:07] Ben: Nice. And how would you describe it to someone who is enmeshed in the, the meta science community?
[00:03:13] Seemay: In the meta science community, I would, I would say Arcadia is a meta science experiment on how we enable more science in the realm of discovery, exploration and innovation. And it's, you know, that, that's where I would start. And then there's so much more that we could click into on that. Right. [00:03:31] Ben: And we will, we will absolutely do that. But before we get there, I'm actually really [00:03:35] interested in, in Arcadia's backstory. Cuz, cuz when we met, I feel like you were already well down the, the path of spinning it up. So what's, there's, there's always a good story there. What made you wanna go do this crazy thing? [00:03:47] Seemay: So, so the backstory of Arcadia is actually Trove. Trove was my first startup that I spun out together with my co-founder, Kira Poskanzer. It started from a point of frustration around a set of scientific questions that I found challenging to answer in my own lab in academia. So we were very interested in my lab in thinking about all the different molecules in tick saliva that manipulate the skin barrier when a tick is feeding, but basically the, the ideal form of a team around this was, you know, like a very collaborative, highly skilled team that was, you know, a strike team for, like, biochemical fractionation, mass spec, developing itch assays, to get this done. It was [00:04:35] not a PhD style project of, like, one person sort of open-endedly exploring a question. So I was struggling to figure out how to get funding for this, but that wasn't even the right question, because even with the right money, like, it's still very challenging to set up the right team for this in academia. And so it was during this frustration that I started exploring with Kira about, like, what is even the right way to solve this problem, because it's not gonna be through writing more grants. There's a much bigger problem here. Right? And so we started actually talking to people outside of academia.
Like, here's what we're trying to achieve. And actually the outcome we're really excited about is whether it could yield information that could be acted on for an actually commercializable product. There are skin diseases galore that this could potentially be helpful for. So I think that transition was really important, because it went from sort of a passive idea to: oh wait, how do we act as agents to figure out how to set this up correctly? [00:05:35] We started talking to angel investors, VCs, people in industry. And that's how we learned that itch is a huge area, an unmet need, and we had tools at our disposal to potentially explore that. So that's how Trove started. And that, I think, was the beginning of the end, or the start of the beginning, however you wanna think about it. Because the process of starting Trove was so fun, and it was not at all in conflict with the way I was thinking about my science; the science that was happening on the team was extremely rigorous. And I experienced a different structure. And that was the light bulb in my head: that not all science should be structured the same way. It really depends on what you're trying to achieve. And then I went down this rabbit hole of trying to study the history of what you might call meta science. Like, what are the different structures and iterations of this that have happened over the history of even the United States? And it hasn't always been the same. Right? And then I think, [00:06:35] as a scientist, once you grapple with that, that the way things are now is not how they always have been, suddenly you have an experiment in front of you. And so that is how Arcadia was born, because I realized couched within this Trove experiment are so many things that I've been frustrated about, that I don't feel like I've been maximized as the type of scientist that I am.
And I really want to think in my career now about not how I fit into the current infrastructure, but what other infrastructures are available to us. Right? [00:07:08] Ben: Nice. [00:07:09] Seemay: Yeah. So that was the beginning. [00:07:11] Ben: And so you then, and I'm just gonna extrapolate one more step, you sort of looked at the type of work that you really wanted to do and determined that the structure of Arcadia that you've built is perhaps the right way to go about enabling that. [00:07:30] Seemay: Okay. So a couple things. I don't even know yet if Arcadia is the right way to do it. So I [00:07:35] feel like it's important for me to start this conversation there, that I actually don't know. But yeah, it's a hypothesis. And I would also say that that is a beautiful summary, but it was still a little clunkier than the way you described it. So there's this gap there then of like, okay, what is the optimal place for me to do my science? How do we experiment with this? And I was still acting in a pretty passive way. You know, I was around people in the bay area thinking about new orgs. And I had heard about this from like ju and Patrick Collison and others, people very interested in funding and experimenting with new structures. So I thought, oh, if I could find someone else to create an organization that I could maybe help advise and be a part of. And so I started writing up this proposal that I was trying to pitch to other people: oh, would you be interested in leading something like this? [00:08:35] And the more that went on, and I had lots and lots of conversations with other scientists in academia trying to find who would lead this, it took probably about six months for me to realize: oh, in the process of doing this, I'm actually leading this.
I was trying to find someone to hand the keys over to, when actually I seemed to be the most invested so far. And so I wrote up this whole proposal trying to find someone to lead it, and it came down to: oh, I've already done this legwork. Maybe I should consider myself leading it. And I've definitely asked myself a bunch of times, was that some weird internalized sexism on my part? Cause I was looking for someone, some other dude or something, to actually be in charge here. So that's actually how it started. And a couple people started suggesting this to me: if you feel so strongly about this, why aren't you doing this? And I know [00:09:35] it's always an important question for a founder to ask themselves. [00:09:38] Ben: Yeah, no, that's really clutch. I appreciate you going into the not-straight paths of it. Because I guess when we put these things into stories, we always like to make it nice and linear: okay, then this happened and this happened, and here we are. But in reality, it's always that ambiguity. Can I actually ask two questions based on that story? One is, you mentioned that in academia, even if you had the money, you wouldn't be able to put together that strike team that you thought was necessary. Can you unpack that a little bit? [00:10:22] Seemay: Yeah. I mean, I think there's a lot of reasons why. One of the important reasons, which is absolutely not a criticism of academia (in fact, it's maybe my support of the [00:10:35] mission in academia), is around training and education. That part of our job as PIs, and the research projects we set up, is to provide an opportunity for a scientist to learn how to ask questions, how to answer those, how to go through the whole scientific process.
And that requires a level of openness and willingness to allow the person to take the reins, which I think is very difficult if you're trying to hit very concrete, aggressive milestones with a team of people. Another challenge is the way we set up incentive structures around publishing. We also don't set up the way we publish articles in journals to be very collaborative, or as collaborative as you would want in this scenario. At the end of the day, there's a first author and there's a last author. And that is just a reality we all struggle with, despite everyone's best intentions. And so that inherently sets up yeah, [00:11:35] another situation where you're trying to figure out how you weave this collaborative effort with this reality. And even in the best case scenario, it doesn't always feel great. It just makes it harder to do the thing. And then finally, for the way we fund projects in academia: this wasn't a very hypothesis-driven project. It's very hard to lay out specific aims for it beyond just the things we're gonna be trying, like, what is our process that we can lay [00:12:08] Ben: Yeah, it's a [00:12:09] Seemay: I can't tell you, yeah, what the outcomes are gonna be. So I did write grants on that, and that was repeatedly the feedback. And then finally, there's this other thing, which is that we didn't want to accidentally land on an opportunity for innovation. We explicitly wanted to find molecules that could be engineered for products. That was [00:12:35] our hypothesis, if there are any: that by borrowing the innovation from ticks, who have evolved to feed for days to sometimes over a week, we are skipping steps to figure out the right natural product for manipulating processes in the skin that have been so challenging to solve.
So we didn't want it to be an accident. We wanted to be explicitly "translational," quote unquote. So that again poses another challenge within an academic lab, where you have a different responsibility, right? [00:13:05] Ben: Yeah. And there's that tension there between setting out to do that and setting out to do something that is publishable, right? [00:13:14] Seemay: Mm-hmm. Yeah. And I think one of the hard things that I'm always trying to think about is: out of the things that I just listed, what are the things that are appropriately different about academia, and what are the things that maybe are worth a second look? [00:13:31] Ben: Mm. [00:13:32] Seemay: They might actually be holding us back even [00:13:35] within academia. So the first thing I would say is non-negotiable is that there's a training responsibility. That has to be true, but it's not necessarily mutually exclusive with also having the opportunity for this other kind of team. For example, we don't really have great ways in academia to properly support staff scientists at a high level. There's a very limited opportunity for that. And, you know, I'm not arguing with people about the millions of reasons why that might be. That's just a fact, so that's not my problem to solve. I just see that as a challenge. Also, of course, publishing, right? Like I think [00:14:13] Ben: Yeah, [00:14:14] Seemay: in a best case scenario, science should be in the driver's seat and publishing should be supporting those activities. I think we do see, and I know there's a spectrum of opinions on this, that there are definitely more and more cases now where publishing seems to be in the [00:14:35] driver's seat, [00:14:36] Ben: Yeah, [00:14:36] Seemay: dictating how the science goes on many levels.
And I can only speak for myself, but I felt that to be increasingly true as I advanced in my career. [00:14:47] Ben: Yeah. And just to make it really explicit: the publishing is driving because that's how you make your tenure case, that's how you build any sort of credibility. Everybody's gonna be judging you based on what you're publishing, as opposed to anything else. [00:15:08] Seemay: Right. And more, I think the reason it felt increasingly heavy as I advanced in my career was not even for those reasons, to be honest. It was because of my trainees. [00:15:19] Ben: Hmm. [00:15:20] Seemay: If I wanna be out doing my crazy thing, I have a huge responsibility now to my students, and that is something I'm not willing to take a risk on. And so now my hands are tied in this other way. Their [00:15:35] careers are important to me, and if they wanna go into academia, I have to safeguard that. [00:15:40] Ben: Yeah. I mean, it suggests sort of a distinction, regardless of academia or not academia, between training labs and maybe focused labs. And you could say, yes, you want trainees to be exposed to focused research. But at least thinking about those differences seems really important. [00:16:11] Seemay: Yes. Yeah. And in fact, you know, because I don't like to spend too much time criticizing people in academia: we even grapple with this internally at Arcadia. [00:16:25] Ben: Yeah. [00:16:25] Seemay: Like there is a fundamentally different phase of a project that we're talking about, sort of creating new ideas, [00:16:35] exploring, de-risking, and then some transition that happens where it is a sort of strike team effort of, how do you expand on this? How do you make sure it's executed well?
And there's probably many more buckets than just the two I said, but it's worthy of a little more thought around the way we set up approvals and budgets and management, because they're two fundamentally different things, you know? [00:17:01] Ben: Yeah, that's actually something I wanted to ask about more explicitly, and this is a great segue: where do ideas come from at Arcadia? You know, there's some spectrum from everybody working on their own thing to you dictating everything, and everything in between. So can you go more into how that flow works? [00:17:29] Seemay: So I might even reframe the question a little bit to [00:17:35] not where do ideas come from, but how do ideas evolve? Because it's [00:17:39] Ben: Please. Yeah. That's a much better reframing. [00:17:41] Seemay: Because it's rarely the case, regardless of who the idea is coming from at Arcadia, that it ends where it starts. And I think that fluidity is the magic sauce. Right. And so by and large, the ideas tend to come from the scientists themselves. Occasionally of course I will have a thought, or Che will have a thought, but I see our roles as much more being there to shepherd ideas in the most strategic and productive direction. And so I spend a lot of time thinking about, well, what kind of resources would this take? And Che definitely thinks about that piece as well, as well as what would actually be the impact of this if it worked, in terms of both our innovation and the knowledge base outside of Arcadia. Practically speaking, something we've started doing has been really helpful, because we've already gone through different iterations of this too. Like we [00:18:35] started out like, oh, let's put out a Google survey.
People could fill it out to pitch a project to us. And that fell really flat, because there's no conversation to be had there, and they're basically writing a proposal. Maybe more streamlined, but it's not that qualitatively different of a process. So then we started doing these things called sandboxes, which I'm actually really enjoying right now. Every Friday we have an hour-long session. The entire company goes, and someone's up at the dry erase board. We call it throwing them in the sandbox. They present some idea, or set of ideas, or even something they're really struggling with, for everybody to basically converse with them about it. And this has actually been a much more productive way for us to source ideas, and also for me to think collaboratively with them about the right level of resources, the right inflection points for when we decide go or no-go on things. So that's how we're currently doing it. I mean, we're [00:19:35] just shy of about 30 people. This process will probably break again once we hit like 50 people or something, cuz it's just logistically a lot of people to cram into a room, and there's a level of formality that starts to happen when there's that many people in the room. So we'll see how it goes, but that's how it's currently working today. [00:20:00] Ben: That's really cool. And so then let's keep following the evolutionary path. So an idea gets sandboxed, and you collectively come to some conclusion that, okay, this idea is well worth pursuing. Then what happens? [00:20:16] Seemay: So then, and actually we're very much still under construction right now around this, we're trying to figure out how we think about budget and stuff for this type of step. But then presumably, okay, the person starts working on it.
I can tell you where we're trying to go; I'm not sure we're there yet. Where we're trying to go is turning our [00:20:35] publications into a way to actually integrate into this process. Like, ideally I would love it as CEO, if I...
/episode/index/show/ideamachines/id/24233781
info_outline
DARPA and Advanced Manufacturing with William Bonvillian [Idea Machines #46]
08/02/2022
DARPA and Advanced Manufacturing with William Bonvillian [Idea Machines #46]
William Bonvillian does a deep dive about his decades of research on how DARPA works and his more recent work on advanced manufacturing. William is a Lecturer at MIT and the Senior Director of Special Projects at MIT's Office of Digital Learning. Before joining MIT he spent almost two decades as a senior policy advisor for the US Senate. He's also published many papers and a detailed book exploring the DARPA model. Links Transcript [00:00:35] In this podcast, William Bonvillian and I do a deep dive about his decades of research about how DARPA works and his more recent work on advanced manufacturing. William is a lecturer at MIT and a senior director of special projects at MIT's office of digital learning. Before joining MIT, he spent almost two decades as a senior policy advisor for the US Senate. He's published many papers and a detailed book exploring the DARPA model. I've wanted [00:01:35] to compare notes with him for years, and it was a pleasure and an honor to finally catch up with him. Here's my conversation with William. [00:01:42] Ben: The place that I'd love to start off is: how did you get interested in DARPA and the DARPA model in the first place? You've been writing about it for more than a decade now, and you're probably one of the foremost people who've explored it. So how'd you get there in the first place? [00:01:58] William: You know, I worked for the US Senate as an advisor for about 15 years before coming to MIT. And I worked for a US Senator who was on the armed services committee. And so I began doing a substantial amount of that staffing, given my interest in science, technology, and R&D, and, you know, got early contact with DARPA, with some of DARPA's program managers and the DARPA directors, and kind of got to know the agency that way, spent some time with them over in their [00:02:35] offices.
You know, really kind of got to know the program and began to realize what a dynamic force it was. And, you know, we're talking 20-plus years ago, when frankly DARPA was a lot less known than it is now. So, kind of suddenly finding this jewel box buried in there, it was a real discovery for me, and I became very interested in the kind of model they had, which was so different than the other federal R&D agencies. [00:03:05] Ben: Yeah. And actually, in your mind, for people who I think tend to see different federal agencies that give money to researchers as all being in the same bucket, what would you describe the difference between DARPA and the NSF as being? [00:03:24] William: Well, I mean, there's a big difference. So the NSF model is to support basic research. And they have, you know, the equivalent of project [00:03:35] managers there, but they don't do the selecting of the research projects. Instead they queue up applicants for funds and then they supervise a peer review process of experts, largely from academia, who evaluate a host of proposals in a given R&D area and make evaluations as to which ones qualify: which are the best, most competitive applicants for NSF's basic research. So DARPA's got a different project going on. It doesn't work from the bottom up. It has strong program managers who are in effect empowered to go out and create new things. So they're not just responding to grant applications for basic research; they come into DARPA and develop a [00:04:35] vision of a new breakthrough technology area they wanna stand up. And there's no peer review here. It's really: you hire talented program managers, and you unleash them, you turn them loose, you empower them to go out and find the best work that's going on in the country.
And that can be from universities, often in this breakthrough technology area they've identified, but it also could be from companies, often smaller companies. And typically they'll construct a kind of hybrid model where they've got academics and companies working on a project. The companies are always oriented to getting the technology out the door, right, cause they have to survive, but the researchers are often in touch with some of the more breakthrough capabilities behind the research. So bringing those two together is something that the program manager at DARPA does. So while at [00:05:35] NSF the program manager equivalent's big job is getting grants out the door and supervising a complex selection process by committee, for the DARPA program manager, selecting the award winners is just the beginning of the job. Then in effect you move into their home, right? You work with them on an ongoing basis. DARPA program managers are spending at least one third of their time on the road, linking up with their grantees, the folks they've contracted with, sort of helping them along in the process. And since they typically fund a group of research awards in an area, they'll also work on putting together a kind of thinking community amongst those award winners and contract winners, so that they begin to share their best ideas. And that's not easy, right? Yeah. If you're an academic [00:06:35] or a company, trading ideas is a complicated process, but that's one of the tasks that the DARPA program manager has: to really build these thinking communities around problems. And that's what they're driven to do. So it's a very, very different situation.
This is the different world here that DARPA has created. [00:07:01] Ben: And sort of to click on how DARPA program managers interact with ideas: do you have a sense of how they incentivize that idea sharing? Is it just the concept that if you share these ideas, they might get funded in a way that they wouldn't otherwise? How do they construct that trust, so that people could actually be sharing those ideas? [00:07:28] William: Yeah. In some ways it starts out at an early stage, before a new [00:07:35] program manager even arrives at DARPA. And, I mean, this could be ARPA-E, it could be IARPA, which work in slightly different ways but with a similar kind of approach. ARPA-E is our energy DARPA; IARPA is our intelligence DARPA. Right. And then soon we'll have a health DARPA, which has now been funded. [00:07:55] Ben: Yeah, I wanna get your opinion on that later. [00:07:57] William: Okay. Well, we're working away on this model here. You know, you hire a program manager, somebody who's gonna be talented and dynamic and kind of entrepreneurial in standing up a new program. They get to DARPA and they begin to work on this new technology area. And a requirement of DARPA is that it really be a breakthrough. They don't wanna fund incremental work that somebody else may be doing. They wanna find new territory. That's their job: revolutionary breakthroughs. To get there, they'll often convene workshops, one, two, three workshops with some of the best thinkers around the country, including people [00:08:35] who may be applying for the funding. They'll look for the best people, bring them together, and get a day-long process going, often in several different locations, to kind of think through the technology advance opportunity: how it might shape up, what might contribute to it, how you might organize it.
What research might go into it, what research areas, and that kind of begins the thinking process of building a community around a problem. And then they'll make grant awards. And then they're gonna be frequently convening this group. Everybody could sit on their hands and keep their mouth shut, but you know, that's not often the way technologists work. They'll get into a problem and start wanting to share ideas and brainstorm. And that's typically what then takes place, and part of the job of the program manager at DARPA is to really encourage that kind of dialogue, get a lot of ideas on the table, and really promote it. Yeah. [00:09:34] Ben: [00:09:35] And then also with those ideas, having looked at this so much, do you have a sense of how much there's this tension? You know, people generally do the best research when they feel a lot of ownership over their own ideas and they feel like they're really working on the thing that they want to work on. But at the same time, for a project to play into a broader program, you often need to adjust ideas towards a bigger system or a bigger goal. Do you have an idea of how much program managers shape what people are working on, versus just enabling people to work on things that they would want to work on otherwise? [00:10:24] William: Yeah. The program manager works in communication with DARPA's office directors and director. Right. So it's a very flat organization. You know, there'll be an office director and a number of program managers working with that office director, for example in the field of biological technologies, a fairly new DARPA office set up about a decade ago. Yeah.
You know, there'll be a group of DARPA program managers with expertise in that field, and they will often have a combination of experiences: some company experience as well as some academic research experience, so they're kind of walking on both sides. They'll come into DARPA often with some ideas about things they want to pursue, right? And then they'll start the whittle-down process to get after what they really wanna do. And that's a very critical stage. They'll do it often in dialogue with fellow program managers at DARPA, who will contribute ideas, and often with their office director, who kind of oversees the portfolio and can feed that DARPA program manager into other areas of expertise around DARPA. So you come up with a big breakthrough idea, then [00:11:35] you test it out in these workshops, as I mentioned, as well as in dialogue with your colleagues at DARPA. And then if it looks like it's gonna work, you can move it rapidly to the approval process. But DARPA is, I mean, it's what its name says: it's the advanced research projects agency. So it's not just doing research; it very much wants to do projects. And it's an agency, and it's a defense agency, so the projects are gonna have to be related to the defense sector, although there's often spillover into huge areas of the civilian economy, like in the IT world, where it really pioneered a lot. But essentially the big idea to pursue is developed by the program manager and refined by the program manager. And then they'll put out, you know, often what's called a broad agency announcement, a BAA: we wanna get a technology that will do this. Right. Give us your best [00:12:35] ideas. And they put this broad agency announcement out and get people to start applying.
And if the area is somewhat iffy, they can proceed with smaller awards to see how it tests out, rather than going into a full, larger award process: kind of seedlings they'll plant. So there's a variety of mechanisms that it uses, but getting that big breakthrough, revolutionary idea is the key job of a program manager. And then they're empowered to go out and do it. And look, DARPA's very cooperative; the program managers really work with each other. Yeah. But in addition, it's competitive, and everybody knows whose technology is getting ahead, whose technology is moving out, and what breakthroughs it might lead to. So there's a certain amount of competition amongst the program managers too, as to how their revolution is coming along. Nice. [00:13:28] Ben: And then sort of to go one level down the hierarchy, if you will: when [00:13:35] they put out these BAAs, do you have a sense of how often the performers will shift their focus towards a DARPA program, or how much haggling there is between the performer and the program manager, in terms of finding this balance between work that supports the broader program goals and work that supports a researcher's already existing agenda? Right, because people in their labs sort of have these things that they're pursuing, and maybe they're roughly in the same direction as a program, but need to be shifted. [00:14:20] William: Yeah. You know, the role of the program manager is to put out a new technological vision, some kind of new breakthrough territory that's gonna really be a very significant [00:14:35] advance that can be implemented. It's gonna be applied. It's not discovery; it's implementation that they're oriented to. They want to create a new thing that can be implemented.
So they're gonna put the vision out there, and look, the evaluation process is gonna look hard at exactly the question you're raising: whether or not the applicant researcher is kind of doing their own thing, or can actually contribute to the implementation of the vision. And that's gonna be the cutoff. Will it serve the vision or not? If it's not, it's not gonna get the award. So look, that's an issue with DARPA. DARPA is going at their particular technology visions. NSF's funding is driven by the applicants: they will think of ideas they wanna pursue and see if they can get NSF funding for them. At DARPA it's the other way around: the program manager has the vision [00:15:35] and then sees who's willing to pursue that vision with him or her. Yeah. Right. So I won't say top down, because DARPA's very collaborative, but it's more of a top-down approach, as opposed to NSF, which is bottom up. They're going for technology visions, not to see what neat stuff is out there. Right. [00:15:56] Ben: Yeah. And just to shift a little bit: you mentioned IARPA and ARPA-E as other government agencies that use the same model. You wrote an article in 2011 about ARPA-E, and I'm interested in how you think it has played out over the past decade. How well do you think they have implemented the model? Do you think that it does work there? And, I guess, do you have a sense of how to know whether the DARPA [00:16:35] model is applicable to an area more broadly? [00:16:39] William: Yeah. I mean, look, that's kind of a key question. You know, if you wanna do a DARPA-like thing, is it gonna work in the territory that you wanna work in? But let's look at this energy issue.
You know, I was involved in some of the early discussions about creating an ARPA for energy, and the net result of that was that a Congressman named Bart Gordon led an effort on the house science committee to really create an ARPA for energy. That approach had been recommended by a national academies committee, and it seemed to make a ton of sense. So what was going on in energy at the time of the formulation of this, like the rough 2007, 2008 time period? What was happening was that there was a significant amount of investment [00:17:35] moving in venture capital towards new energy, clean tech technologies. So the venture capital sector in that 2006, 2007 time period was ramping up its venture funding in cleantech. And that's when ARPA-E was being proposed and considered. So it looked to us, it looked to everybody, like there would be a way of doing the scale-up. Right. In other words, it's not enough just to have cool things that come out of an agency; you need to implement the technology. So who's gonna implement it? Who's gonna do that scale-up into actual implementation? And that's a very key underlying issue to consider when you're trying to set up a DARPA model. DARPA has the advantage of a huge defense procurement budget. So, right, it can formulate a new technology breakthrough, say stealth, [00:18:35] or UAVs and drones, and then it can turn to the defense department, which will spend procurement money to actually stand up the model, on a good day. Cause that doesn't always happen; it doesn't always go right. But it's there. What's the scale-up model gonna be for energy? Well, we thought there was gonna be venture capital money to scale up cleantech. And then the bottom fell out of the cleantech venture funding side in the 2008, 2009 timetable, and venture money really pulled out.
So, you know, 2009 is when ARPA-E first received its significant early funding: it had been authorized through the science committee, and then it got an appropriation of $400 million. And there was a big risk there. So look, ARPA-E was then created and had a very dynamic leader named Arun Majumdar, who's now at Stanford leading the energy initiatives there. Arun [00:19:35] saw the challenge, and he frankly rose to it. If they weren't gonna get these technologies scaled up through venture capital, like everybody assumed would work, how were they gonna do scale-up? So he did a whole series of very creative things. There was some venture money left, so they maintained, you know, good relations with the venture world, but also with the corporate world, because there were a lot of corporations that were interested in kind of moving in some of these directions, if these new technologies complemented technologies they were already pursuing, right? So Arun created this annual ARPA-E summit, where all of its award winners would, you know, present their technologies, with, you know, fabulous presentations and booths all around this conference. It rapidly became the leading energy technology conference in the US, widely attended by thousands of people. Venture capital may not have been funding much, but they were there. But more importantly, [00:20:35] companies were there, you know, looking at what these technologies were to see how they could get stood up. So that was a way of exposing what ARPA-E was doing in a really big way. Right. Another approach they tried, very successfully, was to create what they call the tech-to-market group. So in addition to your program manager at ARPA-E, when you stand up a new project, assigned to that project would be somebody with expertise in the commercialization of technology by whatever route the financing might be obtained.
And they brought in a series of experts who had done this, who knew venture, who knew startups, who also knew federal government contracting in case the feds were gonna buy this stuff, particularly the DOD. And this tech-to-market group became part of the discipline of standing up a project: to really make sure there was gonna be a pathway to commercialization. In fact, that approach [00:21:35] was so successful that DARPA, a number of years later, hired away ARPA-E's tech-to-market director to set up and run its own tech-to-market program. Right. Which was, you know, the new child just taught the parent a lesson here, is what the point was. So there's now a tech-to-market group at DARPA as well. Another approach they...
/episode/index/show/ideamachines/id/23936715
Philanthropically Funding the Foundation of Fields with Adam Falk [Idea Machines #45]
07/02/2022
Philanthropically Funding the Foundation of Fields with Adam Falk [Idea Machines #45]
In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades. They’ve funded everything from iPython Notebooks to the Wikimedia foundation to an astronomical survey of the entire sky. If you’re like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding give him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. Links - - - Highlight Timestamps - How do you measure success in science? [00:01:31] - Thinking about programs on long timescales [00:05:27] - How does the Sloan Foundation decide which programs to do? [00:08:08] - Sloan's Matter to Life Program [00:12:54] - How does the Sloan Foundation think about coordination? [00:18:24] - Finding and incentivizing program directors [00:22:32] - What should academics know about the funding world and what should the funding world know about academics? 
[00:28:03] - Grants and academics as the primary way research happens [00:33:42] - Problems with grants and common grant applications [00:44:49] - Addressing the criticism of philanthropy being inefficient because it lacks market mechanisms [00:47:16] - Engaging with the idea that people who create value should be able to capture that value [00:53:05] Transcript [00:00:35] In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by the eponymous founder of General Motors and has been funding science and education efforts for almost nine decades. They've funded everything from IPython [00:01:35] notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding give him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. [00:02:06] Ben: Let's start with a really tricky thing that I'm myself always thinking about, which is that, you know, it's really hard to measure success in science, right? Like, you know this better than anybody. And so at the foundation, how do you think about success? What does success look like? What does the difference between success and failure mean to you? [00:02:35] Adam: I mean, I think that's a really good question.
And I think it's a mistake to think that there are some magic metrics, that if only you were clever enough to build them out of citations and publications, you could get some fine-tuned measure of success. I mean, obviously if we fund in a scientific area, we're funding investigators who we think are going to have a real impact with their work, individually and then collectively. And so of course, you know, if they're not publishing, it's a failure. We expect them to publish. We expect people to publish in high-impact journals, but we look for broader measures as well if we fund a new area. So for example, a number of years ago, we had a program in the microbiology of the built environment, kind of studying all the microbes that live inside, which turns out to be a very different ecosystem than outside. When we started in that program, there were a few investigators interested in this question, and there weren't a lot of tools that were good for studying it. [00:03:35] By 10 years later, when we'd left, there was a journal, there were conferences, there was a community of people who were doing this work. And that was another really tangible measure of success: that we entered a field that needed some support in order to get going, and by the time we got out, it was going strong, and the community of people doing that work had an identity and funding paths and a real future. Yeah. [00:04:01] Ben: So I guess one way that I've been thinking about it, it's almost like counterfactual impact, right? Whereas if you hadn't gone in, then it wouldn't be [00:04:12] Adam: there. Yeah. I think that's the way we think about it. Of course that's hard to measure. Yeah. But I think that since a lot of the work we fund is not close to technology, right, we don't have available to ourselves, you know, did we spin out products? Did we spin out?
Companies? A lot of the things that might directly connect that work [00:04:35] to activities outside of the research enterprise, things that in other fields you can measure impact with, aren't available. So the impact is pretty internal. That is, for the most part, it is, you know, has it had an impact on other parts of science that, again, we think might not have happened if we hadn't funded what we funded? As I said before, have communities grown up? Another interesting measure of impact, from a project that we've funded for about 25 years now, the Sloan Digital Sky Survey, is in papers published, in the following sense: one of the innovations, when the Sloan Digital Sky Survey launched in the early 2000s, was that the data that came out of it, which was all, for the first time, digital, was shared broadly with the community. That is, this was a survey of the night sky that looked at millions of objects, so these are very large databases. And the investigators who built the [00:05:35] telescope certainly had first crack at analyzing that data. But there was so much richness in the data that the decision was made early on, at Sloan's urging, that this data after a year should be made public. Ninety percent of the publications that came out of the Sloan Digital Sky Survey have not come from collaborators but have come from people who used that data after it was publicly released. Yeah. So that's another way of seeing impact and success of a project: it's reached beyond its own borders. [00:06:02] Ben: And you mentioned that timescale, right? That 25 years. Something that I think is just really cool about the Sloan Foundation is how long you've been around and sort of your capability of thinking on a quarter-century timescale. And I guess, how do you think about timescales on things? Right.
Because it's like, on the one hand, obviously science can take [00:06:35] 25 years; on the other hand, you know, you can't just sort of do nothing for 25 years. [00:06:44] Adam: So if you had told people back in the nineties that the Sloan Digital Sky Survey was going to still be going after a quarter of a century, they probably never would have funded it. So, you know, I think that you have an advantage in the foundation world, as opposed to the federal funding world, which is that you can have some flexibility about the timescales on which you think. And so you don't have to simply go from grant to grant, and you're not kind of at the mercy of a Congress that changes its own funding commitments every couple of years. We at the Sloan Foundation tend to think that it takes five years at a minimum to have impact in any new field that you go into. When we enter a new science field, as we just did, we just started a new program, Matter to Life, which we can talk about. [00:07:35] That's initially a five-year commitment to put about $10 million a year into this discipline, understanding that if things are going well, we'll re-up for another five years. So we kind of think of that as a decadal program. And I would say the timescale we think on for programs is decades. The timescale we think of for grants is about three years, right? But a program itself consists of many grants, and maybe a large number of investigators, and that's really the timescale where we think you can have an impact over that time. But we're constantly re-evaluating. I would say the timescale for rethinking a program is shorter; that's more like five years. So in our ongoing programs, about every five years, we'll take a step back and do a review.
You know, whether we're having an impact with the program; we'll get some outside perspectives on it, and on whether we need to keep it going exactly as it is, or adjust in some [00:08:35] interesting ways, or shut it down and move the resources somewhere else. [00:08:39] Ben: I like that you almost have a hierarchy of timescales, right? Like you have multiple going at once. I think that's underappreciated. And so one thing I want to ask about, and maybe the Matter to Life program is a good case study in this, is how do you decide what programs to do, right? Like, you could do anything. [00:09:04] Adam: So that is a terrific question and a hard one to get right. And we just came out of a process of thinking very deeply about it, so it's a great time to talk about it. Let's do it. So to frame the problem in the largest sense: if we want to start a new grantmaking program where we are going to allocate about $10 million a year over a five-to-ten-year period, which is typical for us, the first thing you realize is that that's not a lot of money on the scale that the federal government [00:09:35] invests. So if your first thought is, well, let's figure out the most interesting science that people are doing, you quickly realize that those are things where there's already a hundred times that much money going in, right? I mean, quantum materials would be something that everybody is talking about. The Sloan Foundation putting $10 million a year into quantum materials is not going to change anything interesting. So you start to look for structural reasons that there's a field, or an emerging field, and I'll talk about what some of those might be, where an investment at the scale that we can make can have a real impact. And so what might some of those areas be?
There are fields that are very interdisciplinary in ways that make it hard for individual projects to find a home in the federal funding landscape. One overly simplified, but maybe helpful, way to think about it is that the federal funding landscape [00:10:35] is organized largely by disciplines: if you look at the NSF, there's a division and a director of chemistry, and one of physics, and so forth. But many questions don't map well onto a single discipline. And some questions, such as some of the ones we're exploring in the Matter to Life program, which I can explain more about, require collaborations that are not naturally fundable in any of the silos the federal government has. So very interdisciplinary work is one area. Second is emerging disciplines. And again, that often couples to interdisciplinary work, in that disciplines often emerge in interesting ways at the boundaries of other disciplines. Sometimes the subject matter is the boundary. Sometimes it's a situation where techniques developed in one discipline are migrating to being used in another discipline. And that often happens with physics: the [00:11:35] physicists figure out how to do something, like grab the end of a molecule and move it around with a laser, and suddenly the biologists realize that's a super interesting thing for them, and they would like to do that. So then there's work that's at the boundary of those disciplines. You know, a third area is scale: you can have work that needs to happen at a certain scale that is too big to be a single investigator but too small to qualify for the kind of big-project funding that you have in the federal government. And you could also certainly find things that are not funded because they're not very interesting.
And those are not the ones we want to fund, but you often have to sift through quite a bit of that to find something. So that's what you're looking for. Now, the way you look for it is not that you sit in a conference room and get real smart and think that you're going to see [00:12:35] things other people aren't going to see. Rather, you source it out in the field, right? And so we had an 18-month process in which we invited proposals for what you could do in a program at that scale from major research universities around the country. We had more than a hundred ideas. We had external panels of experts who evaluated these ideas. And that's what led us in the end to this particular framing of the new program that we're starting. And that process was enough to convince us that this was interesting, that it was, you know, emergent as a field, that it was hard to fund in other ways, and that the people doing the work are truly extraordinary. Yeah. And that's what you're looking for. And I think in some ways there are pieces of that in all of the programs, particularly the research programs. [00:13:29] Ben: And so actually, could you describe the Matter to Life program [00:13:35] and sort of highlight how it fits into all of those buckets? [00:13:38] Adam: Absolutely. So the Matter to Life program is an investigation into the principles, particularly the physical principles, that matter uses in order to organize itself into living systems. The first distinction to make is that this is not a program about how life evolved on Earth; it's actually meant to be a broader question than how life on Earth is organized. The idea behind it is that life on Earth is a particular example of some larger phenomenon, which is life. And I'm not going to define life for you. That is, we know what things are living and we know things that aren't living, and there's a boundary in between.
And part of the purpose of this program is to explore that. Think of it as being out there in the field, mapmaking: you know, over here is [00:14:35] a block of ice, that's not alive, and over here is a frog, and that's alive, and there's all sorts of intermediate space in there. And there are interesting ideas out there, for example, at the cellular level: how is information communicated around a cell? What role might things like non-equilibrium thermodynamics be playing? Can systems that are non-biological be induced to evolve in interesting ways? And so we're studying both biotic and non-biotic systems. There are three strands in this. One is building life. It was said by, I think, Feynman that if you can't build something, you don't understand it. And so, there are people who want to build an actual cell. I think that's a hard thing to do, but we have people who are building little biomolecular machines in the laboratory and understanding how that might [00:15:35] work. We fund people who are constructing protocells, thinking about the ways that liquids separating might provide divisions between inside and outside, within which chemical reactions could take place. We've funded people who have made tiny little, you know, micron-scale magnets that you mix together, and you can get them to organize themselves in interesting ways. Yeah. What are the ways in which emergent behaviors couple into this? So that's kind of building life: can you build systems that have features that feel essential to life, and by doing that, learn something general about, say, the reproduction of DNA, or something simple about how inside gets differentiated from outside?
The second strand is principles of life, and that's a little bit more around: are [00:16:35] there physics principles that govern the organization of life? And again, are there ways in which the kinds of thinking that informed thermodynamics, which is kind of the study of piles of gas and liquid and so forth, those kinds of thinking about bulk properties and emergent behavior, can tell us something about what's the difference between matter that's alive and matter that's not alive? And the third strand is signs of life. You know, we have all of these telescopes that are out there now discovering thousands of exoplanets, and of course the thing we all want to know is: is there life on them? We're never going to go to them, or maybe if we go, we'll never come back. And yet we can look and see the chemical composition of these planets; we're just starting to be able to see that. As they transit in front of a star, the atmospheres of these planets absorb light from the star, and the [00:17:35] light that's absorbed tells you something about the chemical composition of the atmosphere. So there's a really interesting chemical question: are there elements of the chemical composition of an atmosphere that would tell you that life is present there, and life in general? You know, if you're going to look for kind of DNA or something, that might be way too narrow a thing to look for, right? So we've made a very interesting grant to a collaboration that is trying to understand the general properties of atmospheres of rocky planets. And if you knew all of the things that an atmosphere of an Earth-like planet might look like, and then you saw something that isn't one of those, you'd think, well, something else might have done that. Yeah. So that's a bit of a flavor. What I'd say about the nature of the research is that it is, as you can tell, highly interdisciplinary. Yeah. Right.
So this last project I mentioned requires geoscience and astrophysics and chemistry and geochemistry and volcanology and ocean science. [00:18:35] And who's going to fund that? Yeah. Right. It's also a very emerging area, because it comes at the boundary between geoscience, the understanding of what's going on on Earth, and absolutely cutting-edge astrophysics, the ability to look out into the cosmos and see other planets. So people are working at that boundary; it's where interesting things often happen. [00:18:59] Ben: And you mentioned that when you're looking at programs, you're looking for things that are sort of bigger than a single PI. And how do you think about the different projects, like individual projects within a program, becoming greater than the sum of their parts? Like, you know, there's one end of the spectrum where you just sort of say, like, go do your things. And everybody's sort of...
/episode/index/show/ideamachines/id/23496056
Managing Mathematics with Semon Rezchikov [Idea Machines #44]
05/30/2022
Managing Mathematics with Semon Rezchikov [Idea Machines #44]
In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, working at different levels of abstraction, and a lot more! Semon is currently a postdoc in mathematics at Harvard where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of abstraction — doing extremely hardcore math while at the same time paying attention to *how* he’s doing that work and the broader institutional structures that it fits into. Semon is worth listening to both because he has great ideas and also because in many ways, academic mathematics feels like it stands apart from other disciplines. Not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. Links Transcript [00:00:35] Welcome back to Idea Machines. Before we get started, I'm going to do two quick pieces of housekeeping. I realize that my updates have been a little bit erratic; my excuse is that I've been working on my own idea machine. That being said, I've gotten enough feedback that people do get something out of the podcast, and I have enough fun doing it, that I am going to try to commit to a once-a-month cadence, probably releasing on the first or second [00:01:35] day of the month. The second thing is that I want to start doing more experiments with the podcast. I don't hear enough experiments in podcasting, and I'm in this sort of unique position where I don't really care about revenue or listener numbers. I don't actually look at them. And I don't make any revenue. So with that in mind, I want to try some stuff. The podcast will continue to be a long-form conversation; that won't change. But I do want to figure out if there are ways to experiment: maybe something like fake commercials for lesser-known scientific concepts, or micro-interviews. If you have ideas, send them to me in an email or on Twitter.
So that's the housekeeping. In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, and working at different levels of abstraction. Semon is currently a postdoc in mathematics at Harvard, where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of [00:02:35] abstraction, doing extremely hardcore math while at the same time paying attention to how he's doing the work and the broader institutional structures that it fits into. He's worth listening to both because he has great ideas, and also because in many ways, academic mathematics feels like it stands apart from other disciplines, not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. So it's worth sort of poking at why that happened, and perhaps how other fields might be able to replicate some of the healthier parts of mathematics. So without further ado, here's our conversation. [00:03:16] Ben: I want to start with the notion that I think most people have about the way that mathematicians (a) go about working on things and (b) think about what to work on, which is that you go in a room, and you maybe read some papers, and you think really hard, and then [00:03:35] you find some problem, and then you spend some number of years at a blackboard, and then you come up with a solution. But apparently that's not how it actually works. [00:03:49] Semon: Okay. I don't think that's a complete description. So definitely people spend time in front of blackboards. I think the typical length of a project can definitely vary between disciplines, and, yeah, within mathematics too. But also, on the other hand, it's hard to define what is a single project.
As you know, there might be kind of a single intellectual arc through which several papers are produced, where you don't even quite know the end of the project when you start. And so, you know, two years on a single project is probably kind of a significant project for many people, because that's just a lot of time. But it's true that even a graduate student might spend several years working on at least a single kind of larger set of ideas, because the community does have enough [00:04:35] sort of stability to allow for that. But it's not entirely true that people work alone. I think these days mathematics is pretty collaborative. Yeah. If you're doing math, you know, in the end you probably are making a lot of stuff up and doing self-consistency checks through this sort of formal algebra, this sort of technique of proof. It helps you stay sane. But when other people can think about the same objects from a different perspective, usually things go faster, and at the very least it helps you decide which parts of the mathematical ideas are real. So often, you know, people work with collaborators, or there might be a community of people who are kind of talking about some set of ideas, and maybe they're misunderstanding one another a little bit, and then they're kind of biting off pieces of a sort of collectively imagined [00:05:35] mathematical construct to kind of make real on their own or with smaller groups of people. So all of those happen. [00:05:40] Ben: And how do these collaborations come about, and how do you structure them? [00:05:44] Semon: That's a great question. So I think there are probably several different models. I can tell you some that I've run across. So sometimes there are conferences, and then people might start.
So recently I was at a conference, and I went out to dinner with a few people, and after dinner we were talking about some of our recent work and trying to understand where it might go. And somebody, you know, was like, oh, you know, I didn't get to ask you any questions; here's something I've always wanted to know from you. And they were like, oh yes, this is how this should work, but here's something I don't know. And then somehow we realized that, you know, there was a very reasonable guess as to what the answer to something that needed to be known would be. So I guess now we're writing a paper together; [00:06:35] hopefully that guess works. So that's one way to start a collaboration: you go out to a fancy dinner, and afterwards you're like, hey, I guess we maybe solved a problem. There are other ways. Sometimes two people might just realize they're confused about the same thing. So I have a collaboration like that: coming from somewhat different types of technical backgrounds, we both realized we were confused about a related set of ideas, and we were like, okay, well, I guess maybe we can try to get unconfused together. [00:07:00] Ben: Can I interject? Like, I think that actually realizing that you are confused about the same problem as someone who's coming at it from a different direction is actually hard in and of itself. Yes. Yes. How does, like, what is actually the process of realizing that the problem that both of you have is in fact the same problem? [00:07:28] Semon: Well, you probably have to understand a little bit about the other person's work, and you probably have to in some [00:07:35] way have some baseline amount of rapport with the other person first, because, you know, you're not going to get yourself to engage with this different foreign language unless you kind of like them to some degree. So that's actually a crucial thing: the personal aspect of it.
Then, you know, because maybe you kind of like this person's work, and maybe you like the way they go about it, that's interesting to you. Then you can try to, you know, talk about what you've recently been thinking about, and then the same mathematical object might pop up. And truly any mathematical object worth studying usually has incarnations in different formal languages, which are related to one another through kind of highly non-obvious transformations. So for example, everyone knows about a circle, but a circle, you could think of that as like the set of points at distance one, or you could think of it as some sort of closed knot, right? There are many different concrete [00:08:35] intuitions through which you can grapple with this sort of object. And usually if that's true, that sort of tells you that it's an interesting object. If a mathematical object only exists because of a technicality, it maybe isn't so interesting. So that's why it's maybe possible to notice that the same object occurs in two different people's misunderstandings. [00:08:53] Ben: Yeah. But I think the cruxy thing for me is that at the end of the day, it's a really human process. There's not a way of sort of colliding what you both know without hanging out. [00:09:11] Semon: So people can try to communicate what they know through text. So people write reviews. I gave a few talks recently, and a number of people have asked me to write, like, a review of this subject. There's no subject, just to be clear; I kind of gave the talks with the impression that there is a subject to be worked on, but nobody's really done any work on it yet. You're kind of bringing this subject into existence; that's definitely part of your job as an academic.
But, you know, that's one way of explaining — I think that can be a little bit less one-on-one, less personal. A different version of that is that people write problem statements: I think these are interesting problems, and here is our goal. So there are all these famous lists of conjectures in any given discipline. Usually when people decide, oh, there's an interesting mathematical area to be developed, at some point they have a conference and somebody writes down a list of problems. And the conditions for these problems are that they should kind of matter — they should help you understand the larger structure of this area — and the problems to solve should be precise enough that you don't need some very complex motivation to be able to engage with them. So that's part of, I think, the trick in mathematics. You know, different people have very different internal understandings of something, but you reduce the statements or [00:10:35] the problems or the theorems, ideally, down to something that you don't need a huge superstructure in order to engage with, because then people with different techniques or perspectives can engage with the same thing. So that depersonalizes it. Yeah, that's true — it's kind of a deliberate tactic, I think. And [00:10:51] Ben: do you think that mathematics is unique in its ability to have those clean problem statements? And — I get the sense that it's almost higher status in mathematics to just declare problems, whereas it feels like in other disciplines, one, the problems are much more implicit: anybody in some specialization has an idea of what they are, but they're very rarely made explicit.
And then two, pointing out [00:11:35] problems is fairly low status, unless you simultaneously point out the problem and then solve it. Do you think there's a cultural difference there? [00:11:45] Semon: Potentially. So yeah, anyone can make conjectures, but usually if you make a conjecture, it's either wrong or uninteresting — it's true, but the resulting proof is boring. So to get anyone to listen to you when you state problems, you need to have a certain amount of credibility. Simultaneously, you know, maybe if you have a cell, well, it's clear: okay, you don't understand the cell, you don't understand what's in it, it's a blob that does magic. Okay, the problem is: understand the magic. In math you can't see the thing, right? So in some sense, defining problems is part of that. It's very similar to somebody showing somebody: look, here's a protein. Oh, interesting. That's a very [00:12:35] similar process. And I do think that pointing out — look, here's a protein that we don't understand, and you didn't know about the existence of this protein — that can be fairly high status work in, say, biology. So that might be a better analogy. Yeah. [00:12:46] Ben: Yeah, no, I like that a lot — that math does not have, you could almost say, the substrate, the context of reality. [00:12:56] Semon: I mean, it's there, right? It's just that you have to know what to look for in order to see it. So, right, number theorists love examples like this: oh, everybody knows about the natural numbers, but they just love pointing out, here's this crazy pattern — you would never think of this pattern, because you don't have this kind of overarching perspective on it that they have developed over a few thousand years. [00:13:22] Ben: Has number theory really been around for a few thousand years? [00:13:25] Semon: It's pretty old. Yeah.
[00:13:27] Ben: What would you — this is just curiosity — what would you call the first [00:13:35] instance of number theory in history? [00:13:38] Semon: I'm not really sure. I'm not a historian in that sense. I mean, certainly, you know, Pell's equation is related to all kinds of problems in — like, I think, Greece or something. I don't exactly know when the Chinese remainder theorem is from — I'm just not a historian, unfortunately. [Ben:] I'm just curious. [Semon:] But I do think the basics are very old. I mean, the square root of two is a very old thing, right? The irrationality of the square root of two is really ancient, so it must predate that by quite a bit, because that's a very sophisticated question. [00:14:13] Ben: Okay. Yeah. So then, going back to collaborations: I think a surprising thing that you've told me about in the past is that collaborations in mathematics are — like, people have different specializations, in the sense that the collaborations are not just completely flat, with everybody just sort of [00:14:35] stabbing at a problem, and that you've actually had pretty interesting collaboration structures. [00:14:43] Semon: Yeah. So I think different people are naturally drawn to different kinds of thinking, and so they naturally develop different thinking styles. Some people, for example, are very interested in — there are different parts of mathematics, like analysis or algebra or, you know, technical questions in topology or whatnot — and some people just happen to know certain techniques better than others. That's one axis on which you could classify people. A different axis is a question of taste: what they think is important. Some people want to have a very kind of rich, formal structure.
Other people want to have a very concrete, intuitive structure, and those lead to very different questions. Which, you know, is something I've had to navigate recently, where there's a group of people who are mathematical physicists, and they like a very rich, formal structure, and there are other [00:15:35] people who do geometric analysis — kind of geometric objects defined by partial differential equations — and they want something very concrete. And there are relations between questions in these areas, so I've spent some time trying to think about how one can profitably move from one to the other. But that forces you to navigate a certain kind of tension. So maybe you have different axes along which people vary. Here's one: there's the frogs-and-birds dichotomy. And, you know, this is a very strong phenomenon in mathematics. [00:16:09] Ben: That was originally Dyson's, right? [00:16:11] Semon: Maybe — I'm not sure — but it's certainly a very helpful framework. I think some people really want to take a single problem and kind of stab at it; other people want to see the big picture and how everything fits. And both of these types of work can be useful or useless depending on the flavor of the way the person approaches it. So often collaborations have one person who's more birdlike and one who's more froglike, and that can be very productive. [00:16:40] Ben: Let's dig into that a little bit. What are the situations — what are both the success and failure modes of birds, and the success and failure modes of [00:16:54] Semon: frogs? Good. I feel like this is somehow very clearly known.
So, what frogs fail at is that they can get stuck on a technical problem which does not matter to the larger structure of the subject. And so in the long run they can spend a lot of work resolving technical issues which, in the end, didn't really matter for progress. What they can do is discover something that is not obvious from any larger superstructure, right? By directly [00:17:35] engaging with the lower-level details of mathematical reality, they can show the birds something they could never see. And simultaneously, they often have a lot of technical capacity, so there might be some hard problem which no large perspective can help you solve — you just have to actually understand that problem — and they can remove the problem, and that can open up a whole new world. That's the frog. The birds have the opposite success and failure modes. The success mode is that they point out: oh, here's something you could have done that was easier; here's a missing piece in the puzzle — and then it turns out that's the easy way to go. Mathematical physicists have a history of being birds in this way, where they point out: well, you guys were studying this equation to study the topology of four-manifolds — instead you should study a different equation, which is much easier and will tell you all the same things. And the reason for this is sort of incomprehensible to mathematicians, but it made it much easier to solve a lot of problems. That's kind of the [00:18:35] ultimate bird success. The failure mode is that you spend a lot of time piecing things together, but then you only work on problems which make sense from this huge perspective.
And those problems end up being uninteresting to everyone else, and you end up being trapped by the elaborate complexity of your own perspective. So you start working on something abstruse — you're computing some quantity which is interesting only if you understand this vast picture, and it doesn't really shed light on anything that's simple for people to understand. That's usually...
/episode/index/show/ideamachines/id/23273849
info_outline
Scientific Irrationality with Michael Strevens [Idea Machines #43]
01/18/2022
Scientific Irrationality with Michael Strevens [Idea Machines #43]
Professor Michael Strevens discusses the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more.
/episode/index/show/ideamachines/id/21812921
info_outline
Distributing Innovation with The VitaDAO Core Team [Idea Machines #42]
01/02/2022
Distributing Innovation with The VitaDAO Core Team [Idea Machines #42]
A conversation with the VitaDAO core team. VitaDAO is a decentralized autonomous organization — or DAO — that focuses on enabling and funding longevity research.
/episode/index/show/ideamachines/id/21652943
info_outline
The Nature of Technology with Brian Arthur [Idea Machines #41]
10/03/2021
The Nature of Technology with Brian Arthur [Idea Machines #41]
Dr. Brian Arthur talks about how technology can be modeled as a modular and evolving system and about combinatorial evolution more broadly, and digs into some fascinating technological case studies that informed his book The Nature of Technology.
/episode/index/show/ideamachines/id/20687351
info_outline
Philosophy of Progress with Jason Crawford [Idea Machines #40]
09/29/2021
Philosophy of Progress with Jason Crawford [Idea Machines #40]
In this conversation, Jason Crawford and I talk about starting a nonprofit organization, changing conceptions of progress, why "what happened in 1971" may really be about WWII, 26 years earlier, and more. Jason is the proprietor of Roots of Progress, a blog and educational hub that has recently become a full-fledged nonprofit devoted to the philosophy of progress. Jason’s a returning guest to the podcast — we first spoke in 2019, relatively soon after he went full time on the project. I thought it would be interesting to do an update now that Roots of Progress is entering a new stage of its evolution. Links Transcript So what was the impetus to switch from being an independent researcher to actually starting a nonprofit? I'm really interested in that. Yeah. The basic thing was understanding, or getting a sense of, the level of support that was actually out there for what I was doing. In brief, people wanted to give me money, and one of the best ways to receive and manage funds is to have an actual nonprofit organization. And I realized there was actually enough support to support more than just myself — which I had been doing, you know, as an independent researcher for a year or two. There was actually enough to have some help around me, to basically just make me more effective and further the mission. So I've already been able to hire research [00:02:00] assistants. Very soon I'm going to be putting out a wanted ad for a chief of staff — or, you know, sort of an everything assistant — to help with all sorts of operations and project management and things. And having these folks around me is going to help me do a lot more; it's going to let me delegate everything that I can possibly delegate and focus on the things that only I can do, which is mostly research and writing. Nice. And it seems like it would be possible to take money and hire people and do all that without forming a nonprofit.
So what, in your mind, made it worth it? Well, for one thing, it's a lot easier to receive money when you have an organization that is designated with 501(c)(3) tax status in the United States — that is a status that makes donations tax-deductible, whereas donations to other types of organizations are not. I had had issues in the past: one organization wanted to [00:03:00] give me a grant as an independent researcher, but they didn't want to give it to an individual — they wanted it to go through a 501(c)(3). So then I had to get another organization to receive the donation for me and then turn around and re-grant it to me. And that was just, you know, complicated overhead, and some organizations didn't want to do that all the time. So it was just much simpler to keep doing this if I had my own organization. And do you have a broad vision for the organization? Absolutely, yes. And it is essentially the same as the vision for my work, which I recently articulated in an essay on rootsofprogress.org: we need a new philosophy of progress for the 21st century, and establishing such a philosophy is my personal mission — and is the mission of the organization. To very briefly frame this: the 19th century had a very strong and positive, you know, pro-progress vision of what progress was and what it could do for humanity, and in the [00:04:00] 20th century that optimism faded into skepticism and fear and distrust. And I think there are ways in which the 19th-century philosophy of progress was perhaps naively optimistic — I don't think we should go back to that at all — but I think we need to rescue the idea of progress itself.
The 20th century sort of fell out of love with it, and we need to find ways to acknowledge and address the very real problems and risks of progress while not losing our fundamental optimism and confidence and will to move forward. We need to recapture that idea of progress and that fundamental belief in our own agency, so that we can go forward in the 21st century with progress — you know, while doing so in a way that is fundamentally safe and benefits all of humanity. And since you mentioned philosophy, I'm just going to ask you a very weird question that's related to something I've been thinking about. [00:05:00] In addition to the fact that I completely agree the philosophy of progress needs to be updated — recreated — it feels like the same thing needs to be done with the idea of classical liberalism. I think both of these philosophies (a) are related, and (b) were created in a world that just had different assumptions than we have today. Have you thought about how those two philosophical updates relate? Yeah. So first off, just on that question of reinventing classical liberalism, I think you're right. Let me take this as an opportunity to plug a couple of publications that I think are exploring this concept. So the first I'll mention is Palladium. I mention this because the founding essay of Palladium, which was written by Jonah Bennett, is I think a good statement of the problem of why classical liberalism — or, I think he called it the liberal order, which is maybe a slightly different thing — is in question. But, you know, the basic idea of representative democracy — or constitutional republics with representative democracy — and basic ideas of freedom of speech and other human rights and individual rights.
You know, all of that being sort of the basic world order — Jonah was saying that that is in question now. And — okay, I'm going to frame this my own way; I don't know if this is exactly how Jonah would put it — there's basically now a fight between the abolitionists and the reformists, right? Those who think that the liberal order is fundamentally corrupt and needs to be burned to the ground and replaced, versus those who think it's fundamentally sound but may have problems and therefore needs reform. And, you know, I think Jonah is on the reform side, and I'm on the reform side. I think the institutions of — you know, Western institutions, the institutions of the Enlightenment, let's say — are [00:07:00] fundamentally sound and need reform rather than just being razed to the ground. This was also a theme towards the end of Enlightenment Now by Steven Pinker: a lot of why he wrote that book was to counter the fundamental narrative of declinism. If you believe that the world is going to hell, then it makes sense to question the fundamental institutions that have brought us here, and it kind of makes sense to have a burn-it-all-to-the-ground mentality, right? And so those things go together. Whereas if you believe that we've actually made a lot of progress over the last couple of hundred years, then you say, hey, these institutions are actually serving us very well, and if there are problems with them, let's address those problems in a reformist type of approach, not an abolitionist type of approach. So Jonah Bennett was one of the co-founders of Palladium, and that's an interesting magazine I recommend checking out. Another publication that's addressing some of these concepts is, I would say, Persuasion by Yascha Mounk. Yascha was part of The Atlantic, as I recall.
[00:08:00] And he basically wanted to make a home for people who were maybe left-leaning or, you know, would call themselves liberals, but did not like the new sort of woke ideology that is arising on the left, and wanted to carve out a space for free speech and for — I don't know, just a different, non-woke liberalism, let's say. And so Persuasion is a Substack and a community. That's an interesting one. And then the third one that I'll mention is called Symposium, and that is done by a friend of mine, Rob Tracinski, who himself would maybe consider himself a bit more right-leaning, or maybe would just call himself more of an individualist or an independent or, you know, something else. But I think he maybe appeals more to people who are a little more right-leaning. He also wanted — something that I think a lot of people, maybe both on the right and the left, are wanting — to break away both from wokeism and from Trumpism and find something that's neither of those things. And so we're seeing this interesting phenomenon where people on the right and left are actually maybe [00:09:00] coming together to try to find a third alternative to where those two sides are going. So Symposium is another publication where people are coming together to discuss: what is this idea of liberalism? What does it mean? I think Tracinski said that he wanted Symposium to be the kind of place where Steven Pinker and George Will could come together to discuss what liberalism means — and then he literally had that as a podcast episode, with those two people. So anyway, I recommend checking it out, and Rob is a very good writer. So Palladium, Persuasion, and Symposium — those are the three that I recommend checking out to explore this kind of idea. Nice. Yeah. And in my head it actually hooks in — it's extremely coupled to progress.
Because I think in a lot of places there's almost this tension between ideas of classical liberalism, like property rights, and things that we would see as progress. Right? It's like, okay, you want to build your [00:10:00] Hyperloop, but then you need to build that Hyperloop through a lot of people's property, and there's this fundamental tension there. And look, I don't have a good answer for that, but I've just been thinking about that vis-à-vis... It's true. And at the same time, I think it's a very good and healthy and important tension. I agree, because — so, you know, I tend to think that there were at least two big ideas in the Enlightenment, maybe more than two. One of them was reason, science, and the technological progress that hopefully those would lead to. But the other was individualism and liberty, you know, those concepts. And I think what we saw in the 20th century is that when you have one of those without the other, it leads to disaster. So in particular, the communists of the Soviet Union were [00:11:00] enamored of some concept of progress that they had — a concept of progress that got the science and industry part but not the individualism and liberty part. And when you do that, what you end up with is a concept of progress that's actually detached from what it ought to be founded on, which is — ultimately progress, to me, means progress for individual human lives and their happiness and thriving and flourishing. And when you detach those things, you end up with an abstract concept of progress — somehow progress for society — that ends up not being progress for any individual.
And that, as I think we saw in the Soviet Union and other places, is a nightmare: it leads to totalitarianism, and it leads to — specifically in the case of the Soviet Union — mass famine, not to mention oppression. So one of the big lessons — going back to what I said towards the beginning — is that the 19th-century philosophy of progress had, I think, a bit of a naive optimism. And part of the naivete of that optimism was the hope that all forms of progress would go together, hand in hand: that technological progress and moral and social progress would go together. In fact, towards the end of [00:12:00] the 19th century, some people were hopeful that the expansion of industry and the growth of trade between nations would lead to a new era of world peace. And the 20th century obviously proved this wrong — a devastating, dramatic proof. My hypothesis right now is that it was the world wars that really shattered the optimism of the 19th century. They really proved that technological progress does not automatically lead to moral progress, and the dropping of the atomic bomb was just a horrible exclamation point on this entire lesson, right? The nuclear bomb was obviously a product of modern science, modern technology, and modern industry, and it was the most horrifically destructive [00:13:00] weapon ever. So I think with that, people saw that these things don't automatically go together. And I think the big lesson from that era and from history is that technological progress and moral and social progress are independent things that we have to pursue, you know, each in their own right. Technological progress does not create value for humanity unless it is embedded in the context of good moral and social systems.
And I think that's the lesson of, for instance, the cotton gin and American slavery. It's the lesson of the Soviet agricultural experiments that ended in famine. It's the lesson of the Chinese Great Leap Forward, and so forth. In all of those cases, what was missing was liberty and freedom and individual rights. So those are things that we must absolutely protect, even as we move technological and industrial progress forward. Technological progress ultimately is [00:14:00] progress for people, and if it's not progress for people — progress for individuals, not just collectives — then it is not progress at all. I agree with all of that, except the thing I would poke at is: I feel like the 1950s might be a counterpoint to the world wars destroying 20th-century optimism. Or do you think there was almost a delayed effect? I think the 1950s were a holdover. I think these things take a generation to really sink in. And so this is my fundamental answer, at the moment, to "what happened in 1971" — you know, people ask this question, or 1970, or 1973, or whatever date around there. I think the right question to ask is: what happened in 1945 that took 25 years to sink in? And my answer is the world wars. And I think it is around this time that [00:15:00] you really start to see it. Even in the 1950s, if you read intellectuals and academics who were writing about this stuff, you start to read things like: well, you know, we can't just unabashedly promote quote-unquote "progress" anymore — people are starting to question this idea of progress, and so forth. Now, I haven't yet done enough of the intellectual history to be certain that that's really where it begins, but that's the impression I've gotten anecdotally.
And so the hypothesis that's forming in my mind is that that's about when there was a real turning point. Now, to be clear, there were always skeptics of progress. From the very beginning of the Enlightenment there was an anti-Enlightenment, sort of reactionary, romantic backlash; from the beginnings of the Industrial Revolution there were people who didn't like what was happening — Jean-Jacques Rousseau, you know, Mary Shelley, Karl Marx, you name it. But I think what was going on was that, essentially, the progress movement — or whatever you call the people who were actually going forward and making scientific and technological progress — they [00:16:00] were doing that. They were winning, and they were winning because people could see the inventions coming. I mean, imagine somebody born around 1870 or so, and just think of the things they would have seen happen in their lifetime: the telephone, the automobile and the airplane, the electric light bulb and the electric motor, the first plastics, indoor plumbing, water sanitation, vaccines, and — if they lived long enough — antibiotics. Oh, and the Haber-Bosch process and synthetic fertilizer. So there was just an enormous number of these amazing inventions that they would have seen happen. And I basically think that the reactionary voices against technology and against progress were just drowned out by all of the cheering for the new inventions. And then my hypothesis is that what happened after World War II is — it wasn't so much that [00:17:00] the people who believed in progress suddenly stopped believing in it.
But I think what happens in these cases is that the people who believed in progress had their belief shaken: they lost some of their confidence, they became less vocal, and their arguments started feeling a little weaker and carrying less weight. Conversely, the reactionary, anti-progress folks were suddenly emboldened, and people were listening to them. And so they could come to the fore and say: see, we told you so; we've been telling you this for generations; we always knew this was going to happen. So there was just a shift in who had the confidence, who was outspoken, and whose arguments people were listening to. And I think when you then have a whole generation of people who grew up in this new [00:18:00] milieu, you get essentially the counterculture of the 1960s, and you get Silent Spring, and you get protests against industry and technology and capitalism and civilization. Do you think — this is just literally off the cuff — there might also be some kind of hedonic treadmill effect, where you see some rate of progress and it starts to be normalized, and then... It's true. It's true. And it's funny, because even before the world wars — even in the late 1800s and early 1900s — you can find people saying things like, essentially, kids these days don't realize how good they have it; people don't even know the history of progress. I wrote about this, actually — I have an essay about this called something like "19th Century Progress Studies" — because before the transcontinental railroad was built in the US in the 1860s, there was this guy who, in the 1850s or so, [00:19:00] was campaigning for it.
And he wrote this whole big, long pamphlet promoting the idea of a transcontinental railroad, and he was trying to raise private money for it. And — true to the 19th century, it was this long, wordy document — in one part of it he starts going into the whole history of transportation, back to the 17th or 16th century: the post roads that were established in Britain and how those improved transportation, but also how, even in that era, people were speaking out against the post roads and opposing them. No...
/episode/index/show/ideamachines/id/20638589
info_outline
Fusion, Planning, Programs, and Politics with Stephen Dean [Idea Machines #39]
08/30/2021
Fusion, Planning, Programs, and Politics with Stephen Dean [Idea Machines #39]
In this conversation, Dr. Stephen Dean talks about how he created the 1976 US fusion program plan, how it played out and the history of fusion power in the US, technology program planning and management more broadly, and more.
/episode/index/show/ideamachines/id/20302580
info_outline
Policy, TFP, and airshiPs with Eli Dourado [Idea Machines #38]
07/27/2021
Policy, TFP, and airshiPs with Eli Dourado [Idea Machines #38]
Eli Dourado on how the sausage of technology policy is made, the relationship between total factor productivity and technological progress, airships, and more.
/episode/index/show/ideamachines/id/19941650
info_outline
In the Realm of the Barely Feasible with Arati Prabhakar [Idea Machines #37]
01/25/2021
In the Realm of the Barely Feasible with Arati Prabhakar [Idea Machines #37]
In this conversation I talk to the Amazing Arati Prabhakar about using Solutions R&D to tackle big societal problems, gaps in the innovation ecosystem, DARPA, and more.
/episode/index/show/ideamachines/id/17672165
info_outline
Shaping Research by Changing Context with Ilan Gur [Idea Machines #36]
12/18/2020
Shaping Research by Changing Context with Ilan Gur [Idea Machines #36]
In this conversation I talk to Ilan Gur about what it really means for technology to “escape the lab”, the power of context to shape the usefulness of research, the inadequacies of current institutional structures, how activate helps technology escape the lab *by* changing people’s context, and more.
/episode/index/show/ideamachines/id/17241332
info_outline
Your Equity is a Product with Luke Constable [Idea Machines #35]
11/25/2020
Your Equity is a Product with Luke Constable [Idea Machines #35]
In this conversation I talk to Luke Constable about the complicated tapestry of finance, funding projects, incentives, organizational and legal structures, social technologies, and more.
/episode/index/show/ideamachines/id/16956095
info_outline
Venture Research with Donald Braben [Idea Machines #34]
11/09/2020
Venture Research with Donald Braben [Idea Machines #34]
In this conversation I talk to Donald Braben about his venture research initiative, peer review, and enabling the 21st century equivalents of Max Planck. Donald has been a staunch advocate of reforming how we fund and evaluate research for decades. From 1980 to 1990 he ran BP’s venture research program, where he had a chance to put his ideas into practice. Considering the fact that the program cost two million pounds per year and enabled research that both led to at least one Nobel prize and a centi-million dollar company, I would say the program was a success. Despite that, it was shut down in 1990. Most of our conversation centers heavily around his book “” which I suspect you would enjoy if you’re listening to this podcast. Links Transcript [00:00:00] In this conversation I talked to Donald Braben about his venture research initiative, peer review, and enabling the 21st century equivalents of Max Planck. Donald has been a staunch advocate of reforming how we fund and evaluate research for decades. From 1980 to 1990 he ran BP's venture research program, where he had a chance to put his ideas into practice. [00:01:00] Considering the fact that the program cost about 2 million pounds per year and enabled research that both led to at least one Nobel prize and a centi-million dollar company, I would say the program was a success. Despite that, it was shut down in 1990. Most of our conversation centers heavily around his book, Scientific Freedom, which just came out from Stripe Press and which I suspect you would enjoy if you're listening to this podcast. So here's my conversation with Donald Braben. Would you explain, in your own words, the concept of the Planck Club? Well, it's just my name for the outstanding scientists of the 20th century, starting with Max Planck, who looked at thermodynamics, and it took him 20 years to reach his conclusion that matter was quantized.
You know, and he developed quantum mechanics; that was followed by Einstein and Rutherford and a [00:02:00] whole host of scientists. And in order to be succinct, I call these 500 or so scientists who dominated the 20th century the Planck Club, so I don't have to keep saying Einstein, Rutherford, et cetera — it's an easy shorthand. Right. And so there's a raging debate about whether the existence of the Planck Club was due to the time and place and the things that could be discovered in physics in the first half of the 20th century, versus a more structural argument. Where do you come down on that? [00:03:00] Are you asking whether there will be a 21st century Planck Club — do you think it's possible? Right now? No, it's not, because peer review forbids it. In the early parts of the 20th century, scientists did not necessarily have to deal with peer review — that is, the opinions of a few expert colleagues. They just got a university position, which was as difficult then as it is now to get. But once you got a university position — in the first part of the century, up to about 1970 — then, providing your requirements were modest — you didn't [00:04:00] need huge amounts of money — you could do anything you wanted, and you didn't have to worry about your peers' opinions. I mean, you did in your department, when people were saying, oh, he's mad, he's looking at this, that, and the other, but you could get on with it. You didn't have to pay too much attention to them.
But now, in the 21st century, consensus dominates everything, and it is a serious, serious problem. Yeah. What keeps me going is that I seriously believe it is possible for there to be a Planck Club in the 21st century. It is possible — but right now it won't happen. There have been reams written on peer review — an absolutely huge literature — [00:05:00] but most of it seems to have been written by people who favor the status quo. And so they conclude that peer review is great, except perhaps for multidisciplinary research, which might cause problems. This is the establishment view. And so they take steps to try to ease the progress of multidisciplinary research, but still using peer review. Now, multidisciplinary research is absolutely essential to venture research, because what every venture researcher is doing is looking at the universe and the world we inhabit in a new way. That's bound to create new disciplines, new thought processes. So when the funding agencies say there's a problem with multidisciplinary research, they're saying there's a problem with venture research. Yeah. And so we won't have a Planck Club until that problem is [00:06:00] solved. And I proposed the solution in the book, of course. Yeah, exactly. And so with the book, I actually think of it as a really well done, eloquent, almost policy proposal — I feel like you could actually take the book and hand it to a policymaker and say, do this. But clearly nobody's done that, right? Did you ever do that — did you actually go to government agencies, or even billionaires?
The amount of money that you're talking about is almost shockingly small. What are people's responses — why not do this? Patrick Collison is the only billionaire who has responded. I've met about half a dozen billionaires, and they all want to do things [00:07:00] their way — which is fair enough. They all want to see the universe through their own eyes. They are not capable of opening their eyes and listening to what scientists really want to do. And to get at what scientists really want to do, you can't just ask them straight off — you've got to talk to them for a long time before they will reveal what they really want to do. And then only a few of them will be capable of being a potential member of the Planck Club of the 21st century. But it's a wonderful process. It's exciting. And I don't know why — well, I think I do, actually — why the conventional authorities do not do this. I believe the reason is more or less as follows: for 20 or 30 years following the expansion of the universities in about 1970 — for political reasons, [00:08:00] not at all for scientific reasons — there was a huge expansion in the universities and in the number of academics. I really mean factors of two, three, four, something like that, depending on the country. Really huge. And so the old system, where freedom for everyone was more or less guaranteed — which is what I would advocate, freedom for everyone as a right — could no longer be sustained. So what we have done now is to develop absolute selection rules for selecting venture researchers. That's taken some time to develop, but they work well, and they open up the world to completely new ways of looking at it.
Yeah, look — the track record seems very good, right? You [00:09:00] enabled research that would not have happened otherwise and that led to Nobel prizes. I don't see what more evidence one could present that your method works. Well, over the years, you see, the scientists who work for the funding agencies have advised politicians on ways to ration research without affecting it, and they have come up with the method of peer review, which is now de rigueur. It's regarded as absolutely essential [00:10:00] by every funding agency in the world — I've not come across one that does not use it, apart from our own operation, of course. We don't use it; we find ways around it. And the conventional wisdom is that there are no ways around it — peer review is regarded as the only way to ensure research excellence. People keep saying it's the only way, but we have demonstrated, with the BP venture research unit and the one at UCL, that there is another way. And I guess — is the response from the people you propose this to simply that they don't believe it can work because it isn't peer reviewed? Is that the main contention? Any idea now must survive peer review, and venture research of course would not, so what we're saying is deemed inadmissible. Now, a few people — the 50 or so of my supporters, very senior supporters — regard what we [00:11:00] are doing as essential, but their voice is still tiny compared with the millions of researchers and the funding agencies.
Now, the funding agencies have kept saying — they have advised politicians over the years — that the only way to ensure the scientific enterprise is healthy is to adhere to peer review. They cannot now say, ah yes, Braben points out there's a serious flaw. They cannot do that. So they do not acknowledge that I exist, or that the problem exists. So, because they have doubled down so hard on peer review being the acid test for research quality, they're [00:12:00] lashed to that. Okay. And so — I know at least in the NSF, I think shortly after your book came out originally, so in 2008, 2009, I read about an initiative to try to do more of what I think they termed transformative research. That NSF initiative was pioneered by Nina Fedoroff, who is another great supporter of mine. She was, I think, the chairman of the National Science Board or something like that, which oversees the NSF. And she set up a special task force to look at, mainly, what I was trying to do. She invited me to go to Washington on three occasions, and we sat in this huge room at the National Science [00:13:00] Foundation headquarters, and we had two or three two-to-three-day meetings on venture research, and the task force concluded that it was the only way to go. So that's what they recommended to the NSF. But what did the NSF do? It decided that it would accept Nina Fedoroff's recommendations, but that they should be administered by each of the divisions separately. Well, that means they don't do anything they wouldn't do normally. And so, I guess one thing — I'm not sure if you mention it in the book — what do you think about HHMI Janelia, and
the effort that they make there? Because it is much closer to your recommendations. Howard Hughes, you mean? Yeah, the Howard Hughes Medical Institute — specifically their Janelia campus, where my understanding is [00:14:00] that they give people whole, free funding for five years and really just let them explore what they want to explore. But they insist, I think, on them going to the central laboratories. Yeah, that is a problem. How so? Because scientists all have roots — they've all ended up wherever they are, and that's where they prefer to work. That's why in venture research we allowed them to work in their old environment, but now with total freedom. And they radically transformed a little segment of what was done there — and they would have transformed even more had BP allowed venture research to continue past 1990. There [00:15:00] would have been more than 14 major breakthroughs, because in 1990, when BP closed us down, these people could no longer rely on venture research support: the essential feedback we gave them, the meetings we arranged of all the venture researchers — which we had to work out how the hell to do, because these were scientists and engineers all coming together. And I don't know that that's been done since. But anyway, we were no longer allowed to provide that support, and so they were on their own, exposed to the full rigors of peer review, applying for funds before they were ready for it. Yeah.
The successful ones among the venture research people — the one with his ionic liquids, for instance — jumped over the line [00:16:00] into mainstream science and became part of the mainstream. Yeah, and the same with the other people who were successful. But there were a few groups who were left high and dry, and they had to cut their cloth according to the funding. Yeah. Do you keep track of people who today would make good venture researchers? Do people still send you letters and say, I want to do this crazy thing? No, I'm afraid I can't do that, because I would be raising their hopes way beyond what I'm able to provide. At UCL we've done that: we supported one person, Nick Lane, whose work has been prodigiously successful. He could not get support from anybody [00:17:00] before we backed him. I persuaded the university to come up with 150,000 pounds over three years, which is a trivial amount of money. Totally. And since then, he's more or less stepped over the line and become mainstream, and he has since raised 5 million pounds — 5 million compared to the 150,000. So that's profitable as far as the university is concerned. But even so, even with UCL, it still hasn't caught on. Yeah. And I guess I also have a question about the people who might be really good researchers but don't even make it to the point where [00:18:00] they would be able to raise venture research money. There's also the fact that
in venture research, you were entirely supporting people — except for, I believe, one case — who were already in academia, right? They'd already gone through all the hoops of getting a PhD and getting some sort of position. So do you have a sense of how many possibly amazing people get weeded out even before that point? Oh — I mean, to be a venture researcher, you've got to have a university position, I would guess. Well, with the only engineer we supported, he was working for a company, and we enabled him to leave. I took great care to inquire of him — he would have to give up his job, because an industrial company couldn't support him if he was working for another company — so I had to be sure that [00:19:00] he really was serious about this. And so we arranged a university appointment for him at the nearest university to where he lived, which was Surrey — just down the road, so to speak. But even that created problems; he was never really accepted by the university hierarchy. And why do you think the university association is important, as opposed to someone just doing research? What if they built a lab in their basement, or were doing mostly theory? If they'd done that, then of course we'd listen to them, but they must be reasonably proficient in what they do — I mean, they're coming with a proposal to do something, right? And for that, [00:20:00] you've got to have done something else; you've got to prepare the ground, so to speak. Yeah. So getting a university appointment today is no more difficult than it was in, say, 1970.
You've still got to go through a degree, a PhD maybe, and then convince the university that you're worthy of an appointment. But then, as I said before, you used to automatically qualify for this modest amount of funding, at least in Britain. You automatically qualified for that — but now you qualify for nothing. Once you're appointed by the university, you then start this game of trying to convince funding agencies to support you. And if you don't, you're dead. You don't get anywhere, you've got no tenure, you just disappear. It's an unforgivable system, and it's extremely [00:21:00] inefficient. Yes. I guess the question is: is efficiency even the thing worth shooting for? It seems like it's always going to be inherently inefficient because of the uncertainty. I always worry when efficiency comes up as a metric around research, because then you start having to calculate: okay, how much value is this, what is our return on investment, how efficient is that. Do you think that's the right way to think about it? Well, it's certainly not a bad way. But minds are closed, you know. I've been in touch with so many people over the years — I've been at this now for 20 years, since BP terminated my contract, so to speak — and I've tried, every single minute [00:22:00] of those 20 years, to find new ways of doing this. It does sound a bit — what I do has a large element of the crank about it — but I'm so convinced of the value of this venture research and its contribution to humanity, so convinced that it will make an enormous contribution, that I keep on going. Yeah. I mean, I have no money.
Yeah, I'm not paid to do this. And of the many very rich people I've come across, the first person who has responded has been Patrick Collison, who offered to publish my book — and at a fraction of the price Wiley were charging for it. Why do they charge $75 for a paperback? He's charging less than $20 for a hardback. Yeah — well, I think he realizes that it's important for people to [00:23:00] actually read it. That's good. Just before he met me, he took part in a blog or something like that — it's on YouTube — and he said he was very impressed when he met me, and I changed the way he looked...
/episode/index/show/ideamachines/id/16742144
info_outline
Focusing on Research with Adam Marblestone [Idea Machines #33]
10/26/2020
Focusing on Research with Adam Marblestone [Idea Machines #33]
A conversation with Adam Marblestone about his new project - Focused Research Organizations. Focused Research Organizations (FROs) are a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in the white paper that Adam released with Sam Rodriques. Links Transcript [00:00:00] In this conversation, I talk to Adam Marblestone about Focused Research Organizations. What are Focused Research Organizations, you may ask? It's a good question, because as of this recording, they don't exist yet. They are a new initiative that Adam is working on to address gaps in current institutional structures. You can read more about them in the white paper that Adam released recently with Sam Rodriques — I'll put it in the show notes. [00:01:00] Just a housekeeping note: we say FROs a lot, and that's just the abbreviation for Focused Research Organizations. Just to start off, in case listeners have committed the grave error of not yet reading the white paper, could you explain what an FRO is? Sure. So FRO stands for Focused Research Organization. The idea is really fundamentally very simple — and maybe we'll get into, on this chat, why it sounds so trivial and yet isn't completely trivial in our current system of research structures — but an FRO is simply a special-purpose organization to pursue a defined problem over a finite period of time, irrespective of any financial gain (unlike a startup), and separate from any existing academic structure or existing national lab or things [00:02:00] like that. It's just a special-purpose organization to solve a research and development problem. Got it. And you go into much more depth in the paper, so I encourage everybody to read that. I'm actually also really interested in the backstory that led to this initiative. Yeah — there's kind of a long story, I think, for each of us.
And I'd be curious about your backstory of how you got involved in thinking about this as well. But I can tell you, in my personal experience, I had been spending a number of years working on neuroscience and technologies related to neuroscience. And the brain is a particularly hard technology problem in a number of ways, where I think I ran up against our existing research structures — in addition to just my own abilities and [00:03:00] everything — but I think I ran up against some structural issues too in dealing with the brain. So basically, one thing we want to do is make a map of the brain, and to do that in a scalable, high-speed way. What does it mean to have a map of the brain? What would I see if I was looking at this map? Yeah — well, we could take the example of a mouse brain, for instance. There are a few things you want to know. You want to know how the individual neurons are connected to each other — often through synapses, but also through some other types of connections called gap junctions. And there are many different kinds of synapses, and many different kinds of neurons. There's also the incredibly multi-scale nature of this problem, where a neuron's axon — the wire that it sends out — can shrink down to a hundred nanometers in [00:04:00] thickness or less, but can also extend maybe a centimeter, or, if you're talking about the neurons that go down your spinal cord, could be a meter long. So it's incredibly multi-scale. Even irrespective of other problems like brain-computer interfacing or real-time communication and so on, it poses really severe technological challenges just to make the neurons visible and distinguishable.
And you have to do it in a way where you can use microscopy to image at high speed while still preserving all of the information that you need — like which molecules are where, and which neuron are we even looking at right now. So I think there are a few different ways to approach that technologically. One — the more mature technology — is the electron microscopy approach, where basically you look at just the membranes of the neurons: at any given pixel, in black and white [00:05:00] or grayscale, is there a membrane present here or not? And then you have to stitch together images across this very large volume. But because you're only able to see which pixels have membrane or not, you have to image at very fine resolution to be able to stitch that together later into a 3D reconstruction — and you're potentially missing some information about where the molecules are. And then there are some other, less mature technologies that use optical microscopes along with technologies like DNA-based barcoding or protein-based barcoding to label the neurons. Lots of fancy stuff — but no matter how you do this, this is not the kind of problem that I think can be addressed by a small group of students and postdocs working in an academic lab, and we can go a little bit into why. Yeah — why not? They can certainly make big contributions toward being able to do this, but I think ultimately, if we're talking about something like mapping a mouse brain, it's not [00:06:00] going to be single-investigator science. Well, it depends on how you think about it. One way to think about it: if you're just talking about scaling up, quote unquote, the existing technologies — which in itself entails a lot of challenges — there's a lot of work that isn't necessarily academically novel.
It's things like improving the reliability with which you can cut the brain into tiny slices, or making sure they can be loaded onto the microscope in an automated, fast way. Those are more engineering problems — technology or process optimization problems. That's one issue. And so why couldn't that just — isn't that what grad students are for? Pipetting things, doing graduate work. Why couldn't that be done in the lab? That's not why [00:07:00] they're ultimately there. Although, you know, I was a grad student and did a lot of pipetting too. But ultimately grad students are there to distinguish themselves as scientists, publish their own papers, and really generate a unique academic brand for their work. Got it. So there are problems that are lower-hanging fruit for generating that type of academic brand but don't necessarily fit into the systems engineering problem of putting together a connectome mapping system. There's also the fact that grad students in, say, neuroscience may not be professional-grade engineers who, for example, know how to deal with the data handling or computation here, where you would need to be paying people much higher salaries to actually do the kind of industrial-grade data piping and many other [00:08:00] aspects. But here's the fundamental thing that I realized — and that I think Sam Rodriques, my coauthor on this white paper, also realized — through working on problems that are as hard as connectomics and as multifaceted a system-building problem as this.
I think the key is that there are certain classes of problems that are hard to address in academia because they're system-building problems, in the sense that maybe you need five or six different activities to be happening simultaneously, and if any one of them doesn't follow through completely, you don't have something novel and exciting — you need all the pieces put together. So I don't have something individually exciting on my own as a paper unless you, and also three other people, separately do very expert-level work, which is itself not academically that interesting. Now, having the connectome is academically [00:09:00] interesting, to say the least. But not only my incentives but everybody else's incentives are to spend, say, 60% of their time doing academically novel things for their thesis and only 40% of their time on building the connectome system. Then the probability of the whole thing fitting together drops — and everyone can perceive that. So basically, the incentives don't align well for what you would think of as team science or team engineering or systems engineering. Yeah. And — I think everybody knows that I'm actually very much in favor of this thing, so I'm going to play devil's advocate to tease out what I think are important things to think about. One counterargument would be: well, what about projects like CERN, right? That [00:10:00] is a government-led project, it requires a lot of systems engineering, there's probably a lot of work that is not academically interesting — and yet it happens. So there's clearly a proof of concept. Why don't we just have more things like CERN for the brain? Yeah.
And I think this gets very much into why we want to talk about a category of focused research organizations, and also a certain scale, which we can get into. So I think CERN is actually in many ways a great example — this kind of team science and team engineering is incredible. And there are many others, like LIGO, or big observatories, or the Human Genome Project. These are great examples. I think the problem there is simply that these are multibillion-dollar initiatives that really take decades of sustained government involvement to make happen. And so once they get going, and [00:11:00] once that flywheel starts spinning, then you have it. So that is a nonacademic research project — and the physics and astronomy communities, I think, have more of a track record and pipeline overall, perhaps because it's easier in the physical sciences than in some of these emerging areas of biology or next-gen fabrication or other areas where there's less of a grounded set of principles. So for CERN, everybody in physics can basically agree you need to get to a certain energy scale, right? None of the theoretical physicists who work on higher-energy systems are going to be able to experimentally validate what they're doing without a particle accelerator of a certain level; none of the astronomers are going to be able to do deep-space astronomy without a space telescope. So you can agree, community-wide, that this is something worth doing, and I think there's a lot of incredible innovation that happens in those projects. With focused research organizations, we're thinking about a scale [00:12:00] that's sort of medium science — as opposed to small science, which is one academic lab or a few labs working together, or big science: the Human Genome Project, for example, was $3 billion.
For example, it was scoped to be about $1 per base pair. I don't know what it actually came out to, but the human genome has 3 billion base pairs, so that was a good number. FROs are supposed to be medium-scale, so maybe similar to the size of a DARPA project, which is maybe between, say, $25 million and $100 or $150 million over a finite period of time. The idea is also that they can be catalytic. So there's a goal you could deliver over some time period; it doesn't have to be five years, it could be seven years, but there's some definable goal over a definable time period, which is then also catalytic. So in some ways it would be more equivalent, for the genome project example, to what happened after the genome project, where the [00:13:00] cost of genome sequencing was brought down through new technologies, basically by a millionfold or so, as George Church likes to say it: inventing new technologies and bringing them to a level of readiness where they can then be used catalytically. Whereas CERN, you know, is just a big experiment that really has to keep going, right? It's also sort of a research facility. There are also permanent institutes. That is certainly a model that can do team science, and many of the best efforts in the brain-mapping space, many of the largest-scale connectomes in particular, have come either from Janelia or from the Allen Institute for Brain Science, which are both permanent institutes that are non-academic or semi-academic. But that's also a different thing, in the sense that it takes a lot of activation energy to create an institute, and that then becomes a permanent career path rather than focusing solely on what's the shortest path to [00:14:00] some innovation. The permanence.
So the flip side of the permanence is, I guess: how are you going to convince people to do this temporary thing? Someone asked on Twitter about how, if it's being run by the government, these people are probably going to get government salaries. So you're getting a government salary without the one upside of a government job, which is the security. So what is the incentive for people to come do this? Yeah. And I think it depends on whether it's government or philanthropic. Philanthropic FROs are also definitely an option, and maybe in many ways more flexible, because the government has to contract in a certain way and compete out contracts in a certain way; they can't just decide on the exact set of people to do something, for example. So the government side has [00:15:00] both a huge opportunity, in the sense that I think this is a very good match for a number of things the government really would care about, and the government has the money and resources to do this, but philanthropic is also one we should consider. In any case, there are questions about who will do an FRO, and why. And I think the basic answer comes down to this: it's not a matter of cushiness or career certainty. These are for problems that are not doable any other way. This is actually in many ways the definition: you're only going to do this if this is the only way to do it, and if it's incredibly important. So it really is a medium-scale moonshot; you would have to be extremely passionate about it. That being said, there are reasons, in a proximate sense, why one might want to do it, both in terms of initiating one and in terms of being part of one. [00:16:00] One is simply that you can do science
that is for a fundamental purpose, purely driven toward your passion to solve a problem, and yet can have potentially a number of the affordances of industry, such as industry-competitive salaries. I think with the government we have to ask what the government can do, but in a philanthropic setting you could do it. Another aspect that I think a lot of scientists find frustrating in the academic system is precisely that they have to spend so much work differentiating themselves and doing something that's completely separate from what their friends are doing, in order to pay the bills, basically. If you don't eventually go and get your own, you know, tenure-track job or so on and so forth, the career paths available in academia are much, much fewer, and often not super well compensated. And [00:17:00] so there are a number of groups of people that I've seen in, if you want, critical-mass labs or environments, where, despite the incentive to differentiate, they're working as a group of three or four together. And they would like to stay that way, but they can't stay that way forever. So it's also an opportunity, if you have a group of people that wants to solve a problem, to create something a little bit like a SEAL team. When I was a kid, and I'm not generally a very militaristic person, but when I was a kid I was very obsessed with the Navy SEALs. Anyway, I think the SEAL team, a very tight-knit kind of special-forces operation that works together on one project, is something that a lot of scientists and engineers want, and the problem is just that they don't have a structure in which they can do that. Yeah. So then finally, I think that, although in many cases it's maybe essentially built into the structure, FROs make sense.
We can [00:18:00] talk about this as nonprofit organizations. These are the kinds of projects where you would be getting a relatively small team together to basically create a new industry. And if you're in the right place at the right time, then after an FRO is over, you would be in the ideal place to start the next startup, in an area where it previously hadn't been possible to do startups because the horizons for a venture investment would have been too long to make it happen from the beginning. Well, that's actually a great transition to a place I'm still not certain about, which is what happens after an FRO, because you said that it's an explicitly temporary organization. How do you make sure that it actually achieves its goal? Because you can see so many of these projects that sound really great, and they go in and possibly could do good work, and then somehow it all just sort of diffuses. [00:19:00] So have you thought about how to make sure that it lives on? Well, this is a tricky thing, as we've discussed in a number of settings. And I'd like to maybe throw that question back to you after I answer it, because I think you have interesting thoughts about it too. But in short, it's a tricky thing. The FRO is entirely goal-focused. There's no expectation that it would continue by default, simply because it's a great group of people or because it's been doing interesting work. It is designed to fulfill a certain goal, and it should be designed from the beginning to have a plan for the transition. It could be a nonprofit organization where it is explicitly intended that at the end, assuming success, one or more startups could be created.
One or more datasets could be released, and then a much less expensive and intensive nonprofit structure could be there to [00:20:00] host the data and provide it to the world. It could be something where the government would use it as a sort of prototyping phase for something that could then become a larger project or be incorporated into a larger moonshot project. So I think you explicitly want a finite term to it, and also an explicit, upfront deployment or transition plan being central to it, much more so than any publication, of course. At the same time, there is the pitfall that when you have a milestone-driven or goal-focused organization, the funder could try to micromanage it and say, well, actually,...
/episode/index/show/ideamachines/id/16543583
info_outline
Hanging Out in the Valley of Death with Michael Filler and Matthew Realff [Idea Machines #32]
10/19/2020
Hanging Out in the Valley of Death with Michael Filler and Matthew Realff [Idea Machines #32]
Michael Filler and Matthew Realff discuss Fundamental Manufacturing Process innovations. We explore what they are, dig into historical examples, and consider how we might enable more of them to happen. Michael and Matthew are both professors at Georgia Tech and Michael also hosts an excellent podcast about nanotechnology called Nanovation.
/episode/index/show/ideamachines/id/16461026
info_outline
The Decline of Unfettered Research with Andrew Odlyzko [Idea Machines #31]
09/01/2020
The Decline of Unfettered Research with Andrew Odlyzko [Idea Machines #31]
A conversation with Professor Andrew Odlyzko about the forces that have driven the paradigm changes we've seen across the research world in the past several decades. Andrew is a professor at the University of Minnesota and worked at Bell Labs before that. The conversation centers around his paper "The Decline of Unfettered Research," which was written in 1995 but feels even more timely today. Key Takeaway The decline of unfettered research is part of a complex web of causes: incentives, expectations, specialization, and demographic trends. The sobering consequence is that any single explanation is probably wrong, and any single intervention probably won't be able to shift the system. Links (Automated, and thus mistake-filled) Transcript [00:00:00] In this conversation I talk to Professor Andrew Odlyzko about the forces that have driven the paradigm changes we've seen across the research world in the past several decades. Andrew is a professor at the University of Minnesota and worked at Bell Labs before that. Our conversation centers around his paper, "The Decline of Unfettered Research," which was written in 1995 but feels even more timely today. I've linked to it in the show notes, [00:01:00] along with a Twitter thread where I wrote down my own thoughts. I highly recommend checking out one of them, either now or after listening to this conversation. I realize that it might be a little weird to be talking about a paper that you wrote 25 years ago, but when I read it, it sort of blew my mind, because all of it seemed so true today. So I was wondering, first: do you think the core thesis of that paper still holds up? How would you amend it if you had to write it again today? Oh, absolutely. I'm convinced that the basic thesis is correct, and the last quarter century has provided much more evidence to support it.
And basically, if I were writing it today, I would simply draw on the experience of those 25 years. Yeah, okay, cool. So I sort of wanted to [00:02:00] establish the baseline that asking questions about it is still super relevant. So, just for the listeners, would you go through how you think of what unfettered research means? I think many people have heard of sort of basic or curiosity-driven research, but I think the distinction is actually really important. Well, yes. Basically, unfettered research is essentially curiosity-driven research, very closely related, with maybe some shades of difference. The idea is that you find the best people you can, the most promising researchers, and give them essentially complete freedom: give them resources, give them complete freedom to pursue the most interesting problems that they see. And that is something which many people still think of as the main mode of operations, and it's still thought of [00:03:00] as the best type of research. But it's definitely been fading. Yeah. So would you make the argument... what is the most powerful argument that unfettered research is actually not the best kind of research? Well, why is it not the best kind of research? Again, this is not so much an issue of what's best in some global optimization sense. My essay was really addressed to the forces that were influencing the conduct of science and technology research. I'm not quite saying that what's happening is ideal. I said: here are the reasons, and given the society we live in and the institutions, the general framework, here is what's happened and why it's happening. Yeah. [00:04:00] Now, in particular.
Yes, one argument coming out of my discussion was that this unfettered research was becoming a much smaller fraction of the total, and that this was actually quite justified. Even so, to a large extent unfettered research did dominate for a certain period of time. That era was ending; it was likely to be consigned to a few small niches involving a small number of people, and much more of the work was going to be oriented towards particular projects. Yeah. The thing that I really like about the term unfettered research, that draws a distinction between it and curiosity-driven, is that the idea of fettered versus unfettered feels like it refers to [00:05:00] external constraints on a researcher, whereas curiosity-driven versus not curiosity-driven is about the motivation, where curiosity is the internal motivation of a researcher. My whole framework is around incentives, so it's like: what are the incentives on researchers? And fettered versus unfettered really touches on that. Yes. Personally, I don't draw a very sharp distinction between the two; I think you'd get into very fine gradations, and I'm not sure they're necessarily that meaningful. The thing is that when we're talking about curiosity-driven or unfettered research, people are never totally acting in isolation based on their own curiosity. They always react to opportunities; they react to what they hear from other people. And very often they are also striving for recognition: [00:06:00] invitations to Stockholm to receive a Nobel Prize and so on. That's something many people in the appropriate disciplines certainly keep in mind. So there are always some constraints coming from a particular group. In that sense, I treat these terms as almost synonymous.
Yeah, that makes a lot of sense. So the upshot of "The Decline of Unfettered Research" for me was kind of mind-blowing, and it makes so much sense when you put it this way: that research has become a commodity. I'm not sure how much you've been paying attention to what I would call the stagnation literature; there's been a lot of writing around the idea of scientific stagnation. And I realized that at the core of that was this assumption about [00:07:00] research being a commodity. You look at these economic models and it's just, okay, we need more researchers to produce more research, and it's this undifferentiated thing. So, in your mind, what are the implications of research becoming a commodity? Right. Let me maybe push back a little bit. I'm not sure commodity is quite the right term. I think we can relate it to something that has been documented and discussed very extensively in various areas, such as sports, or maybe music, and so on. What happens is that these fields become very competitive: schools crank out people, selecting them for the ability to perform at a certain level, training them, and then letting them go on stage and compete. And what you find, for [00:08:00] example in sports, is that typically the gap between the top performers, say the gold medal winner and the silver medal winner, has been narrowing. Performance has been increasing in practically all areas of sports: people jump higher, they run faster, and so on. Again, that seems to be leveling off in many cases. People studying human physiology argue, with some quantitative models, that we're approaching the limits of what's possible to do with the human body, unless we go to some other planet or other environments. So you have all these people, and you still have the best ones among them.
Usain Bolt, you know, the sprinter winning repeatedly, is a good example. And so it's not quite correct to say that the hundred-meter [00:09:00] sprinters are a commodity. There is definitely differentiation there, and there is a reason to encourage them to compete and train to do better and better. On the other hand, you come to a situation where losing any one of the top runners makes less and less of a difference to the performance you observe. And I think something similar is happening with research. So I think that presupposes something that I'd love your take on, which is that there are natural limits to human physiology. I think that's pretty clear, right? But there's not as clearly a limit to technological ability, or to the amount that we can know about how the universe works. [00:10:00] And so this feels almost philosophical, but the analogy to sports would presuppose some natural limit on the amount of science and technology that we could do. So do you think that that's the case? Okay, yes, there definitely is a difference with general research and science: we don't have these very obvious, reasonably well-defined limits. On the other hand, what we're coming up against is the fact that these fields are still becoming more and more competitive. The sciences keep growing, the number of subfields is growing, and the volume of available information is growing, which also means that any single individual can master a [00:11:00] smaller and smaller fraction of that total. So in some sense, you could say that human society is becoming much more knowledgeable.
Although each individual, we can say, is becoming less knowledgeable, knowing less and less about the world, and we depend much more on the information we get from others. There's extensive concern right now about the post-truth world and all of these filter bubbles and such that are being created, and this is almost inevitable, because: how do you actually know anything? Surveys show that maybe 10% of people believe the earth is flat, and that all those theories and all those pictures from space are fake, creations of people with video editing tools and so on. And, well, most people can [00:12:00] live quite well with that mental model of the world, as long as they are not in charge of plotting rocket trajectories or airplane trajectories and so on. Same thing with vaccinations. How do you know that vaccination is good? I'm assuming you're not... I believe that vaccinations are pretty good. How would you prove to me that vaccinations work? Again, there's a whole long chain of reasoning and data that has to be put together to really come to the conclusion that vaccinations work. Sometimes I ask my students whether they can prove that the earth is round. Now, you, from Caltech, may remember enough physics to be able to come up with a convincing argument. Most people can't. Okay, that's what they were taught; it's consistent with everything, sounds fine. So the [00:13:00] result is that we have large groups of people working very hard, in many cases very competitively, and many projects require extensive collaborations. This has been documented in quantitative terms; in some of my presentation decks I had a slide showing the degree of collaboration among mathematicians, and similar graphs could be drawn for other disciplines.
Many disciplines moved towards a more collaborative form ahead of mathematics, which moved slower. In mathematics, around 1940 (I forget the exact numbers now) about 95% of the papers were single-authored. By the year 2000, 60 years later, it was down to under 50%. Wow. And by now, I haven't [00:14:00] gotten the latest numbers, but I suspect it's probably well under 40%. And what does that reflect? I suspect, to a large extent, and I think this is consistent with what other people who studied it more carefully found in other disciplines, the need to combine different types of expertise: not knowing enough on your own to be able to carry out the project. That's crazy. And so this paints, for me, a really sobering picture of a world in which, as you need to collaborate on more things, there's more specialization, so you need more people to collaborate, which by its very nature increases coordination costs. So it feels like there's just more and more friction in the system, and each new project involves more friction. [00:15:00] So is this the inevitable trajectory, for things to stall out, or is there an escape hatch from this conundrum? Well, I'd say we simply have to deal with it. I don't see any kind of silver bullet; I don't see a big breakthrough. People tout AI, and I'm not downplaying the usefulness of various AI tools, but I still think they are likely to be fairly limited in the real creative sense. So we'll simply have to deal with the fact that things are getting messier and require more effort. Much of the low-hanging fruit has been picked; we'll have to work harder. And there will also be many highly [00:16:00] undesirable features.
People going off on tangents, creating their own alternate realities, going astray; building up elaborate alternate realities where certain facts are assembled together into convincing pictures. I think we'll have to deal with that. Yeah. And so another piece that's core to the thesis is this increasing sense of competition. Would it be too extreme to say that the game has changed from an absolute game to a relative game in a lot of research, where instead of trying to produce the [00:17:00] best thing, it's just trying to produce something that's better than the other person's? I'm not sure whether I would put it in those terms. I mean, there was always this element of competition; simply look at the bitter disputes, Newton versus Leibniz about calculus, for example, and other cases. Sometimes they were resolved amicably: Darwin and evolution, and so on. But again, people often reacted to competition; Darwin rushed getting his book into print because he heard that Wallace was coming out with the work, and so on. That's a really good point. Things like that. So I think the competitive aspect was always there. It's actually very important for getting people to exert themselves and do their best, so I think that has always been important. Probably much more important now than it used to be [00:18:00] is the need for collaboration: the need to assemble a group and work with groups towards some common goals. Especially at universities, you often see it now where the professor is less the
investigator and more almost like a thought leader or manager, because they assemble the ideas, you know, get the grants, and bring in graduate students and postdocs who execute the program. And the head of the lab gets his or her name on the publications, not necessarily because that person really is the inspiration for the original ideas. The role is very different from what it used to be, say, a hundred years ago. Yeah. Even a hundred [00:19:00] years ago you saw some of it. Edison was a very good example: this large lab working under his guidance, trying out various things, all the different materials for light bulb filaments and such. It was clear that Edison was driving it, but lots of people were working on it, and so on. But Edison was very unusual for that period; these days, that is how research operates. Yes. And the piece that you allude to in your paper is that there's more competition and, what I would call, less slack. I think of those as two counterposed forces: if you [00:20:00] have them both, competition is what drives you to some equilibrium, and then slack is what lets you jump out of local equilibria. And the thing that really drove this home for me was the example you give of the contrast between Xerox, having years and years to do development around their patent and build up additional patents, versus the superconductor research, where multiple groups discovered the same thing within weeks of each other. And I wonder if there's
a chance that that sort of phenomenon is actually playing into the stagnation piece. This is probably not true in and of itself, but is it possible that the reason we don't have room-temperature superconductors is actually that nobody could profit from them, could actually build up a patent portfolio around them [00:21:00] to the point where it would be profitable, and so this competition is actually driving out paradigm shifts? Well, it's hard to say, because here we're talking about real natural barriers: whether room-temperature superconductors even exist. We don't know for certain. On the other hand, what you can observe is that there have been a few labs established over the last couple of decades which tried to come up with these moonshots and so on. Well, Google has its X lab, [00:22:00] I think that's what it's been called; it hasn't produced very much. Allen, who collaborated with Bill Gates on creating Microsoft, had this kind of silver-bullet lab in Silicon Valley, I forget its name right now; again, not much has come out of it. So I think it's simply very difficult to come up with breakthrough ideas. And, you know, the main area that I can speak to is mathematics itself. There have been a few really incisive ideas, new breakthroughs, in the last few decades, but I would say many fewer than there used to be. Likewise for areas closer to applications, like cryptography, where I used to work a lot. I would say much of what has been done over the last couple of decades has been pretty much incremental; there haven't been all that many significant breakthroughs. If you look at something like Bitcoin, it has excited the attention of many people, a piece of work produced almost a dozen years ago.
On the other hand, all the basic [00:23:00] technologies underlying it have been known for at least 30 years. So I think it's more a case that it's really harder to achieve breakthroughs; the low-hanging fruit has been picked. Only a few pieces are maybe still hanging around, and occasionally somebody will find one, but not too often. Yeah. I guess I find the low-hanging-fruit explanation sort of unsatisfying, and I'm always trying to at least tease that apart, because, you know, it's sort of like there are low...
/episode/index/show/ideamachines/id/15796595
info_outline
On the Cusp of Commerciality with Eleonora Vella [Idea Machines #30]
08/23/2020
On the Cusp of Commerciality with Eleonora Vella [Idea Machines #30]
A conversation with Eleonora Vella about getting the right people in the room, finding research on the cusp of commercializability, and generally how TandemLaunch’s unique system works.
/episode/index/show/ideamachines/id/15717446
info_outline
Innovating Through Time with Anton Howes [Idea Machines #29]
08/06/2020
Innovating Through Time with Anton Howes [Idea Machines #29]
A conversation with Dr Anton Howes about The Royal Society of Arts, cultural factors that drive innovation, and many aspects of historical innovation. Anton is a historian of innovation whose work focuses especially on 18th and 19th century England as a hotbed of creativity. He recently released an excellent book that details the history of the Royal Society of Arts called “Arts and Minds: How the Royal Society of Arts Changed a Nation” and he publishes an excellent newsletter at Age of Invention.
/episode/index/show/ideamachines/id/15510173
info_outline
Inventors, Corporations, Universities, and Governments with Ashish Arora [Idea Machines #28]
07/09/2020
Inventors, Corporations, Universities, and Governments with Ashish Arora [Idea Machines #28]
A conversation with Ashish Arora about how and why the interlocking American institutions that support technological change have evolved over time, their current strengths and weaknesses, and how they might change in the future.
/episode/index/show/ideamachines/id/15149477
info_outline
Invention, Discovery, and Bell Labs with Venkatesh Narayanamurti [Idea Machines #27]
05/29/2020
Invention, Discovery, and Bell Labs with Venkatesh Narayanamurti [Idea Machines #27]
In this episode I talk to Venkatesh Narayanamurti about Bell Labs, running research organizations, and why the distinction between basic and applied research is totally wrong. Venkatesh has led organizations across the research landscape: he was a director at Bell Labs during its Golden Age, a VP at Sandia National Lab, the Dean of Engineering at UC Santa Barbara and started Harvard’s engineering school.
/episode/index/show/ideamachines/id/14620943
info_outline
Roadmapping Science with Adam Marblestone [Idea Machines #26]
04/20/2020
Roadmapping Science with Adam Marblestone [Idea Machines #26]
In this episode I talk to Adam Marblestone about technology roadmapping, scientific gems hidden in plain sight, and systematically exploring complex systems.
/episode/index/show/ideamachines/id/14065931
info_outline
Distributed Innovation with Jude Gomilla [Idea Machines #25]
03/30/2020
Distributed Innovation with Jude Gomilla [Idea Machines #25]
In this episode I talk to Jude Gomilla about distributed innovation systems, focused especially on the bottom-up response to the coronavirus crisis. Jude is a physicist, founder and CEO of the knowledge compilation platform Golden, and a prolific angel investor. He's also been in the thick of the distributed response to the coronavirus crisis from day one.
/episode/index/show/ideamachines/id/13764974
info_outline
Analogies, Context, and Zettleconversation with Joel Chan [Idea Machines #24]
03/17/2020
Analogies, Context, and Zettleconversation with Joel Chan [Idea Machines #24]
Intro
In this episode I talk to Joel Chan about cross-disciplinary knowledge transfer, Zettelkasten, and too many other things to enumerate. Joel is a professor in the and a member of their . His research focuses on understanding and creating generalizable configurations of people, computing, and information that augment human intelligence and creativity. Essentially: how can we expand our knowledge frontier faster and better? This conversation was also an experiment. Instead of a normal interview, where the host mostly directs the conversation, Joel and I let the conversation be directed by his notes. We both use a note-taking system called a Zettelkasten that’s based around densely linked notes, and we realized that it might be interesting to record a podcast structured as Joel walking through the notes where his main lines of research originated. For those of you who just want to hear a normal podcast, don’t worry: this episode listens like any other episode of Idea Machines. For those of you who are interested in the experiment, I’ve put a longer-than-normal post-pod at the end of the episode.
Key Takeaways
Context and synthesis are two critical pieces of knowledge transfer that we don’t talk or think about enough. There is so much exciting progress to be made in how we generate and execute on new ideas.
Show Notes
More meta-experiments:
- Wright brothers
- Wing warping
- Control is the core problem
- Boxes have nothing to do with flying
- George de Mestral and velcro
- The canonical way you’re supposed to do a scientific literature review
- Even good practice: find the people via the literature
- Incubation effect
- The infrastructure has no way of knowing whether a paper has been contradicted
- No way to know whether a paper has been refuted, corroborated, or expanded
- Incentives around references
- Herb Simon and Allen Newell: problem solving as searching in a space
- Continuum from ill-structured problems to well-structured problems
- Figuring out the parameters: what is the goal state, what are the available moves
- Cybersecurity is both cryptography and social engineering
- How do we know what we know?
- The only infrastructure we have for sharing is the published literature
- Consequences of science as a career
- Art in science
- As the literature fragments, it’s harder to synthesize and actually figure out what the problem is
- Canonical unsolved problems
- Review papers are hard to write and career suicide
- Formulating a problem requires synthesis
- Three levels of synthesis: 1. listing citations, 2. listing by idea, 3. synthesis
- Social markers: “yes, I’ve read X; it wasn’t useful”
- Conceptual flag citations: there may actually be no relation between claims and the claims in the cited paper
- Types of knowledge synthesis and their criteria
- If you’ve synthesized the literature, you’ve exposed fractures in it
- To formulate a problem you need to synthesize; to synthesize you need to find the right pieces; finding the right pieces is hard
- Individual synthesis systems: Zettelkasten, Tinderbox, Roam
- The graveyard of systems that have tried to create a centralized knowledge repository
- The memex as the philosopher’s stone of computer science
- Semantic web
- Shibboleth words
- Open problem: “What level of knowledge do you need in a discipline?”
- The Feynman sense of knowing a word
- Information work at interdisciplinary boundaries (Carol Palmer)
- Different modes of interdisciplinary research
- “Surface areas of interaction”
- Causal modeling in the Judea Pearl sense
- Sensemaking is moving from unstructured things toward more structured things, and the tools matter
/episode/index/show/ideamachines/id/13586915
info_outline
Funding Breakthrough Research with Anna Goldstein - [Idea Machines #23]
02/26/2020
Funding Breakthrough Research with Anna Goldstein - [Idea Machines #23]
In this episode I talk to Anna Goldstein about how the ARPA (Advanced Research Projects Agency) model works and what makes it unique. We focus on ARPA-E, the Department of Energy’s version of DARPA, which funds breakthrough energy research.
/episode/index/show/ideamachines/id/13306754
info_outline
Systems of Progress with Jason Crawford - [Idea Machines #22]
02/16/2020
Systems of Progress with Jason Crawford - [Idea Machines #22]
In this episode I talk to Jason Crawford about his work on the history of progress, funding and incentivizing inventions, ideas behind their time, and more. Jason is the author of the Roots of Progress blog, where he focuses on telling the story of human progress in an amazingly accessible way.
/episode/index/show/ideamachines/id/13168412