
"Survey advice" by Katja Grace

LessWrong Curated Podcast

Release Date: 08/26/2022

"Survey advice" by Katja Grace

LessWrong Curated Podcast

Things I believe about making surveys, : If you write a question that seems clear, there’s an unbelievably high chance that any given reader will misunderstand it. (Possibly this applies to things that aren’t survey questions also, but that’s a problem for another time.) A better way to find out if your questions are clear is to repeatedly take a single individual person, and sit down with them, and ask them to take your survey while narrating the process: reading the questions aloud, telling you what they think the question is asking, explaining their thought process in answering...

https://www.lesswrong.com/posts/oyKzz7bvcZMEPaDs6/survey-advice

Things I believe about making surveys, after making some surveys:

  1. If you write a question that seems clear, there’s an unbelievably high chance that any given reader will misunderstand it. (Possibly this applies to things that aren’t survey questions also, but that’s a problem for another time.)
  2. A better way to find out if your questions are clear is to sit down with one person at a time and ask them to take your survey while narrating the process: reading the questions aloud, telling you what they think each question is asking, explaining their thought process in answering it. If you do this repeatedly with different people until some are not confused at all, the questions are probably clear.
  3. If you ask people very similar questions in different-sounding ways, you can get very different answers (possibly related to the above, though that’s not obviously the main thing going on).
  4. One specific case of that: for some large class of events, if you ask people how many years until there is a 10%, 50%, or 90% chance of event X occurring, you will get an earlier distribution of times than if you ask for the probability that X will happen in 10, 20, or 50 years. (I’ve only tried this with AI-related things, but my guess is that it at least generalizes to other low-probability-seeming things. Also, if you just ask about 10% on its own, the answer is consistently different from asking about 10% alongside 50% and 90%.) There’s a sketch of one way to compare the two framings after this list.
  5. Given the complicated landscape of people’s beliefs about the world and proclivities to say certain things, there is a huge amount of scope for choosing questions to get answers that sound different to listeners (e.g. support a different side in a debate).
  6. There is also scope for helping people think through a thing in a way that they would endorse, e.g. by asking a sequence of questions. This can also change what the answer sounds like, but seems ethical to me, whereas applications of 5 seem generally suss.
  7. Often your respondent knows thing P and you want to know Q, and it is possible to infer something about Q from P. You then have a choice about which point in this inference chain to ask the person about. It seems helpful to notice this choice. For instance, if AI researchers know most about what AI research looks like, and you want to know whether human civilization will be imminently destroyed by renegade AI systems, you can ask about a) how fast AI appears to be progressing, b) when AI will reach a certain performance bar, or c) whether AI will cause something like human extinction. In the 2016 survey, we asked all of these.
  8. Given the choice, if you are hoping to use the data as information, it is often good to ask people about things they know about. In 7, this points to aiming your question early in the reasoning chain, then doing the inference yourself (there’s a toy sketch of this after the list).
  9. Interest in surveys doesn’t seem very related to whether a survey is a good source of information on the topic being surveyed. One of the strongest findings of the 2016 survey, IMO, was that surveys like that are unlikely to be a reliable guide to the future.
  10. This makes sense because surveys fulfill other purposes. Surveys are great if you want to know what people think about X, rather than what is true about X. Knowing what people think is often the important question. It can be good for legitimizing a view, or letting a group of people have common knowledge about what they think so they can start to act on it, including getting out of bad equilibria where everyone nominally supports claim P because they think others will judge them if not.
  11. If you are surveying people with the intention of claiming a thing, it is helpful to think ahead about what you want to claim, and to make sure you ask questions that will let you claim it in a simple way. For instance, it is better to be able to say ‘80% of a random sample of shoppers at Tesco said that they like tomato more than beans’ than to say ‘80% of a sample of shoppers who were mostly at Tesco but also at Aldi (see footnote for complicated shopper selection process) say that they prefer tomato to peas, or (using a separate subset of shoppers) prefer peas to beans, from which we can infer that probably about 80% of shoppers in general, or more, prefer tomato to beans’. You want to be able to describe the setup and question in a way that is simple enough that the listener understands what happened and can see the significance of the finding. (A sketch of backing a simple claim like this with a confidence interval follows the list.)
  12. If you are running a survey multiple times, and you want informative answers about whether there were differences in views between those times, you should probably run exactly the same survey and not change the questions even a tiny bit unless there is very strong reason to. This follows from 3. (A sketch of the between-wave comparison this enables also follows the list.)
  13. Qualtrics costs thousands of dollars to use, and won’t let you sign up for an account or even find out how much it might cost unless you book a meeting with someone whose job is to sell it to you. Guidedtrack.com seems pretty nice, but I might not have been trying to do such complicated things there.
  14. Running surveys seems underrated as an activity.
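
To make item 4 concrete, here is a minimal sketch (mine, not from the post; every number is invented) of one way to compare the two framings: take a respondent’s “years until a 10%/50%/90% chance” answers, linearly interpolate the implied cumulative probability curve, and read off what it implies at the fixed horizons the other framing asks about.

```python
# A sketch of comparing the two elicitation framings in item 4.
# All numbers are hypothetical illustrations, NOT survey results.
import numpy as np

# Framing A: "how many years until a 10% / 50% / 90% chance of X?"
# One hypothetical respondent's answers, in years:
years_at_prob = {0.10: 10.0, 0.50: 25.0, 0.90: 60.0}

# Framing B: "what is the probability of X within 10 / 20 / 50 years?"
# The same hypothetical respondent, asked the other way:
prob_at_years = {10: 0.03, 20: 0.10, 50: 0.30}

# Put framing A on framing B's axis by linearly interpolating the
# implied CDF between the three elicited points (a crude but simple choice).
xs = sorted(years_at_prob.values())   # [10.0, 25.0, 60.0]
ps = sorted(years_at_prob.keys())     # [0.10, 0.50, 0.90]
for horizon, direct_p in prob_at_years.items():
    implied_p = np.interp(horizon, xs, ps)
    print(f"{horizon:>3} yrs: framing A implies {implied_p:.2f}, "
          f"framing B stated {direct_p:.2f}")

# If the effect in item 4 holds, the framing-A column sits systematically
# above the framing-B column: fixed-probability questions pull timelines in.
```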
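Items 7 and 8 amount to: elicit what respondents actually observe, then do the downstream inference yourself, with assumptions you can inspect and argue with. A toy sketch, with hypothetical numbers and a deliberately crude linear-progress model (nothing here is from the 2016 survey):

```python
# Toy sketch of items 7-8: ask experts about (a), the thing they observe
# (here, recent rates of progress), and infer (b), "when is the bar hit?",
# yourself. All inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# (a) Hypothetical elicited data: each expert's estimate of annual progress,
# in "benchmark points per year", on some fixed performance scale.
reported_rates = np.array([1.5, 2.0, 2.5, 3.0, 4.0])

current_level = 60.0   # assumed current performance on that scale
target_level = 90.0    # assumed bar we care about

# (b) The analyst's inference step: resample expert rates and assume
# (questionably!) that progress is linear and current rates persist.
samples = rng.choice(reported_rates, size=10_000, replace=True)
years_to_bar = (target_level - current_level) / samples

for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(years_to_bar, q):.1f} years")

# The point is the structure, not the numbers: the shaky assumptions live in
# your model, where they can be criticized, not inside respondents' heads.
```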
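For item 11, a simple claim is also easy to attach honest uncertainty to. A minimal sketch with invented counts, using a Wilson score interval (one standard choice for a proportion):

```python
# Sketch of stating item 11's "simple claim" with a confidence interval.
# Counts are invented for illustration.
from math import sqrt

n_yes, n = 160, 200   # hypothetical: 160 of 200 sampled Tesco shoppers
p_hat = n_yes / n     # said they like tomato more than beans
z = 1.96              # ~95% confidence

# Wilson score interval (better behaved than the naive p +/- z*se interval)
denom = 1 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
halfwidth = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))

print(f"{p_hat:.0%} of sampled shoppers preferred tomato to beans "
      f"(95% CI {center - halfwidth:.0%} to {center + halfwidth:.0%}, n={n})")
```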
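And for item 12, keeping the wording identical across waves is what licenses a direct comparison between them. A minimal sketch with invented counts, using a two-proportion z-test:

```python
# Sketch of the between-wave comparison item 12 enables.
# Same question wording in both waves; counts are invented.
from math import sqrt, erf

yes1, n1 = 120, 300   # wave 1: 120 of 300 answered "yes"
yes2, n2 = 150, 300   # wave 2: 150 of 300 answered "yes"

p1, p2 = yes1 / n1, yes2 / n2
p_pool = (yes1 + yes2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx.

print(f"wave 1: {p1:.0%}, wave 2: {p2:.0%}, z = {z:.2f}, p = {p_value:.3f}")

# With even slightly different wording, item 3 says this comparison is
# confounded: you can't tell framing effects from real changes in views.
```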