Exercise: Solve "Thinking Physics"
post by Raemon · 2023-08-01T00:44:48.975Z · LW · GW · 15 comments. This is a question post.
Note: please write any answers to this prompt in spoiler-tags.
Recently I set out to deliberate practice at "reasoning about confusing intellectual problems."
Eliezer's Class Project [LW · GW] has a fictional group of rationality students try to find the true theory of quantum gravity in one month. This always seemed like a cool goal and test for rationality training to aspire to. If you're not solving difficult open problems faster than science [LW · GW], your Art of Rationality probably isn't complete.
Of course, our Art of Rationality isn't complete yet. But, I think there is something promising in this area, as a way to ground out "rationality training" in something concrete. It seems like good practice to take a given physics question you don't understand the theory behind, and try to invent the theory yourself.
I don't think we're anywhere close to the "definitively impressive" version of rationality practice/training. But, I think a good next step is "Solve Thinking Physics™"
Thinking Physics is a textbook that teaches physics "question-first." Each page presents a physics-y situation, and asks you to figure out what happens next. The questions are multiple choice, but often fairly tricky nonetheless.
I think a good rationalist-training goal is to aim to "be (correctly) 95% confident in the answer," as a rough proxy for "there were no major lingering confusions about the problem, except for a generic 'maybe I missed something?'" Failing that, have the subgoal of at least being calibrated about how confused you are. Every time you look at an answer, first log your probabilities for each of the multiple choices in Fatebook.io (or the prediction-tracking tool of your choice).
The problems are set up in a way that you can probably reason about them from some basic background knowledge, without much math background. They're ideal for people who don't have much physics background (since the whole point of the book is to teach you physics), although I know people with some physics education who still find it fairly hard.
I spent two weeks working on Thinking Physics problems, and hosting meetups/workshops where other people could join me. With each question, I focused on learning as much as I could about how-to-think.
My original hypothesis was that I could get significantly better at it in 6-8 weeks. I only spent two, and the result so far is that I think I'm significantly better, although I didn't yet hit my goal of 95% accuracy. (In my final test-set, I got 1 out of 5 questions wrong, when I was aiming for zero. I do think I have a pretty clear sense of why I got that one question wrong, and what I should have done differently.)
After workshopping some ideas for "the Thinking Physics rationality challenge", I now present you with three tiers of challenge.
Challenge I: Solve three problems (and learn from them)
Step 1: Do an exercise.
Spend some time trying to solve three Thinking Physics questions. Aim for 95% accuracy, fully deconfusing yourself about each exercise.
Write down your probabilities for each answer.
It's important to actually write down the probability for each answer – otherwise, you may get a vague sense of "yeah that's probably right" that doesn't allow you to cleanly say "I got this one wrong." And doing it for all the answers, not just your favorite one, gives you additional bits about whether your models made any sense. (I.e., having clearly stated "I think answer A is most likely and B is second most likely" gives you a harder update if it turns out that A and B were both wrong.)
Step 2: Learn from it
Then, think about how you could have solved the problem better.
Your primary goal is to learn as much as possible from each question.
Babble [LW · GW] as many new insights as you can about how to think. This can include explicit "strategies" (like "see if you can simplify the problem"), physiological things (like "I got tired and needed to take a break"), or psychological things ("something about this feels weirdly aversive and ughy, what's up with that?").
When you're done, submit your answer on this post for "what you learned." (Focus on your takeaways, not the object-level solution).
Overall structure
This is more fun with a partner, although I recommend spending a chunk of time thinking independently before sharing your answers and thought-processes with each other. You might find it helpful to get some friends together as a weekend activity.
I've found a fairly good default approach is to do:
- 20 minutes thinking about it by yourself
- 20 minutes thinking about it with a friend
- 20 minutes discussing your meta-reflections on how to solve the problem with a friend.
How to pick exercises
The exercises vary in difficulty. My recommendation is to flip to a random page, weighted towards the beginning of the book. If it feels "difficult but not impossible", then give it a try.
If you're pretty confident you just know the answer, still try to come up with a clear explanation for why (but err on the side of checking the answer quickly rather than spending a lot of time doublechecking).
If you end up feeling stuck, try to give it at least 10 minutes before giving up and switching to a different problem. (In most cases, I found it valuable to give it a solid 20 minutes of independent thought + 20 minutes of conversation-with-partner even if I felt really stuck).
Some particular exercises that seemed reasonably good for people I beta-tested this with (which is not to say they were easy or hard, but that I feel like I/others learned from making a good-faith effort on them):
- Steam Locomotive
- Cold Bath
- Rare Air
- The Expansion of Nothing
- Landscape
(Page numbers for the exercises vary between editions of the book, but you can look them up in the table of contents)
Submission guidelines
Put your answers in spoiler tags (begin each line with ">!"), although first list (unspoiler-tagged) that it was a Tier 1 challenge, the name of the exercises you did, and whether you give them each an overall thumbs up or thumbs-down as having been a good exercise.
Challenge II: Design a training regimen
After you've done 3 exercises and gotten a rough sense of their shape, develop a training regimen that helps you significantly improve at Thinking Physics exercises.
If you started out not being able to reliably solve them at all, get to the point where you can at least semi-reliably solve them, given enough time. (Suggested target: solve 5 random questions in a row without getting any wrong, without help)
If you started out able to semi-reliably get the right answers given a lot of time, aim for speed – can you solve 10 problems in a row, relatively quickly, and only get between 0-1 question wrong?
Submission guidelines
You can submit your training regimen before actually completing it (but flag whether you have actually employed it yet; and if you end up actually doing the training regimen, I suggest replying later with any updates you made).
I think it's a fine use of this exercise to submit your training regimen, then read other people's suggested regimens to get more ideas before going off to actually do it.
Put your training description in spoiler-tags (although again list which challenge-tier you're doing in non-spoiler tags)
(Once you actually get started with the training, I recommend adjusting your approach as you learn more)
Challenge III: Fully "Solve" Thinking Physics
After you've significantly improved your skill-level, develop a thorough method for solving Thinking Physics exercises, in generality. Write the instructions that would have helped past-you get to the point where you could solve them reliably and/or quickly.
(It's okay for this to include metagaming / psychologizing the author. This is "Solve 'Thinking Physics'", not "Solve 'Physics'")
Write your answer either as a spoiler-tagged comment here, or as a top-level post if it ends up feeling like a full essay (and then a quick comment here linking to it). Include a note about what concrete outcomes you achieved.
Bonus Challenge:
Find different sets of exercises that are as different as possible from Thinking Physics (i.e. requiring a pretty different set of skills, while still feeling relevant to becoming a "generalist researcher") that would make for a good followup to this exercise.
Answers
- Challenge I
- Exercises:
- steam engine :-1:
- cold bath :+1:
- expansion of nothing :+1:
tldr (long thing contains all the babble, only included because seemed low cost, don't recommend reading):
- Did exercises alone, as I didn't feel like setting up something with a partner. Felt I was excited enough that it should work.
- Steered away from 95% for all exercises where I hadn't seen the puzzle before, as I was afraid that there's a trick.
- I mostly noticed how extremely FUN I found this! Just today I was reflecting that studying for university courses has kind of killed some of my enthusiasm, and I didn't really remember the last time I was really excited while studying or in my free time. I tried playing games (like chess), but even that felt more like going through the motions and got me more addicted, not actually into a flow state of mind. Somehow this just clicked.
- Training regimen:
- Probably opting for speed. This seemed on the easier side.
- I will maybe try 5 minutes per question and see how that goes for 10 of them.
-
Journal
- steam locomotive
- Intuition: It seems like bigger wheels would be better for higher speed, but might be more wasteful.
- abstract
- This is in the momentum column. It seems like one difference would be that the train with the lower load, but higher speed, needs to efficiently maintain that speed. The one for freight needs better brakes.
- I also am not sure if I am supposed to use other evidence. On the other hand, why would I have received.
- Overall it seems the taller train is made in a way that is designed for slower speed (taller, less big chimney), plus the thing at the front that stops things.
- There is something unintuitive about gears here as well! Having a big wheel means one rotation of the engine covers more ground. This definitely seems like the thing that you want for the fast train with less load! On the other hand, I think there is a high chance I have the direction backwards there.
- I really don't like the trick of having two options that they are both trains of the same kind! I feel like that makes it hard for me to become confident!
- Anything left confusing? I don't know lots about trains or engines! Not 100% sure on the direction of the thing! Not sure how tricky the questions are; I think I want to not get overconfident (the GPT-4 thing got me!)! I also haven't spent 20 minutes!
- Tracks as hints? What about the rods attached to the wheels?
- How does number of wheels matter?
- I feel like I got most of the evidence I know how to interpret.
- I am not sure if I should treat this as an exercise in not being too impatient, or in moving at the appropriate speed. I think I want to go with taking the appropriate time?
- When practicing I am also not sure how well this went. I felt this exercise really didn't give me that much to work on?
- Result:
- b) (90%)
- a) (5%)
- c,d) (5%) passenger speed, freight load.
- Looking back
- I was right!
- Gives me more confidence that this book is trying to be straightforward and not trying to trick me.
- I could have explicitly thought about if the locomotive thing would have been recommended if it seemed like this thing wouldn't have been super object level.
- I feel like I could have easily taken this one on in 5 minutes if I had not expected really hard stuff.
- I got the thing correct for exactly the right intuition. Nice. I want to check that in the future.
- I want more books that are like that! I feel like I want to take Bryan Caplan's exam that he gave GPT-3 like that (since GPT-4 still failed and I feel like I would remember the questions).
- I like how they didn't spoiler me by telling me whether this was an easy or hard one though.
- What else do we learn from this? Not sure? I'd be interested how hard people thought this one was?
- More reflection after more exercises?
- I had a hard time figuring out how to feel about gaming. It feels like I could be more efficient. Something is chasing me. On the other hand, I have time!
- I'd be interested to know how many other
- I still feel a bit impulsive
- It is fun to babble all of my notes in this document
- at the same time i feel anxiety about later pruning to decide what to post on the forum
- i feel i will either dump everything there, or i will just decide later! babble!
- I feel in general I have a bit of a hard time balancing meta and object level. Maybe an adhd thing? Maybe I just have the separating babble and prune as too much of a doctrine in my head that I don't actually follow?
- I notice that I love doing these artificial exercises. Fun! I feel way more motivated.
- I think in general, with ADHD and everything, I might be steering too much toward not giving myself the artificial structure I need to really thrive: giving myself challenges that actually make me achieve great things!
- I think I will switch to the next exercise before too much philosophizing.
- Report:
- I didn't feel like finding a partner and just wanted to start with 3 problems for now.
- Cold Bath
- before
- Archimedean principle thing (knowing the density of the ice is not actually required, ha!).
- I know the answer. The density of ice is lower than the density of water (or at least water at 4 degrees is the densest, I believe).
- Thing I might be a bit confused about:
- If it gets hotter than 4 degrees, at some point we could reach a temperature again where the thing spills over. My assumption is that's out of scope (given this book has been reasonable so far).
- This seems really unfair to get confident
- Final answer: Will stay exactly brim full. The ice displaces exactly its own mass of water (confusing stuff about air and everything else is negligible).
- prediction
- a) 3%
- b) 2%
- c) 95%
- after
- right!
- I don't give myself too much credit as I had already encountered this.
- Apparently I might have been exposed to too much of this. Probably lots of stuff out of this book was used by content creators I know.
- I did end up needing to precisify my answer, and I also didn't notice that even without knowing the ice density, you can solve this with the Archimedean principle.
- I think I also want to prod internal physics simulation engine more (not only the verbal one.)
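The Archimedes argument above can be sketched in a few lines of Python (illustrative only; the masses and densities are made up, and air is ignored as in the exercise). The point is that a floating body displaces its own weight of water, so the ice's density never enters, and the meltwater exactly fills the displaced volume:

```python
# Archimedes' principle check for Cold Bath: floating ice displaces its own
# weight of water, so when it melts the water level is unchanged.
RHO_WATER = 1000.0  # kg/m^3 (the ice's density never enters the argument)

def displaced_volume(ice_mass_kg):
    # A floating body displaces water whose weight equals its own weight.
    return ice_mass_kg / RHO_WATER

def meltwater_volume(ice_mass_kg):
    # The same mass, once melted into water, occupies exactly this volume.
    return ice_mass_kg / RHO_WATER

m = 0.5  # kg of ice, arbitrary illustrative value
# The meltwater exactly backfills the displaced volume: brim full, no spill.
assert displaced_volume(m) == meltwater_volume(m)
```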
- before
- Rare Air
- Dang! I ended up exactly on the wrong page and spoilering myself! I had thought a tad before that I hadn't written down how annoying it is to not spoiler yourself on the other exercises!
- Lesson: stay careful to not get a page too far! (maybe precompute page disparity!) 9 pages!
- The expansion of nothing.
- Analysis
- This one feels really interesting!
- Intuitive model is very confused. I can see arguments for all three.
- Slightly more intuitive if it would get smaller though.
- Seems coincidental to just stay same (but eh... toy physics problems sometimes do this)
- Intuition-pumps
- What if we had the rod without the circle?
- What happens without the circle?
- What happens if we repeat?
- What is the mechanism behind the expansion in the first place? I guess we have electrons in higher states. Everything is in higher energy and pushing away from each other?
- Is there an analogy with other stuff that has force like this?
- What if I imagine concrete points?
- Making the thing really thin gives me the strong intuition that everything within the same radius is going to push each other apart, resulting in the hole being bigger! Not what my internal physics engine said!
- I am also not sure if there is going to be some strain because the balance of material is not working out anymore?
- Reminds me of the orange thing, that no matter if you have an orange or the earth, increasing your circumference is going to do the same to your radius. Means the shape would just stay the same. Everything just gets a bigger radius.
- How to resolve remaining confusion?
- I could try to dig deeper into how the stretching apart might work.
- I could dig a bit deeper into ...
- I have learned some stuff about mechanics and Lagrangians/Hamiltonians and going from normal to radial coordinates. Is that stuff any help here?
- I feel if I would hit it from the top, it would still give me a different answer
- Not sure if it's principled to give 95% when I am still partly into other models? How confident am I in the meta thing?
- Noting that I have "SO" much fun doing this!
- I remember just a few hours earlier feeling like I miss this feeling of just being really enthusiastic! Not sure if that was just me being not really reflective, or if that is really the case and I should attend to this. All the generic advice out there kind of tells me that I should perhaps not stop myself and just continue riding the wave for now?
- For: follow your interests; there's this guy (Paul Graham) who, just for fun, did all the problem sets in one go. I find them effortful.
- Overenthusiasm seems the only real way people with ADHD operate.
- Against:
- People who work on this not just for one evening, but over extended periods, might actually form long-term differences in their brains.
- Noting that I feel the pull toward explicitly applying the "list considerations for and against" thing so strongly, since I have not made predictions for some time.
- takeaway
- I also just notice that with this exercise I just felt entitled to start
- With research on AI safety stuff, I feel like I am waiting for some gatekeeper to tell me that I'll not be wasting people's time by working on xyz. Not sure that is an actual problem in general. Specifically, it doesn't seem super productive compared to just getting excited and started on things, though!
- I was still using a slightly more sketchy analysis this time! I did realize that you could take the ring apart, but then I threw this thought away before thinking about what would actually happen if you take it apart, heat it, and put it back together.
- In my mind I took things not really apart, but kept them in the same place when heating. I would not have expected to still get the same answer!
- I did not come up with something close to the taking a photo and expanding the whole photo analogy
- I do feel like I had something close to that!
- I feel great, because deliberation actually got me closer than my initial first guess. Kind of a suspicion, though, that I was in modest mode and took the more interesting intuition, but if pressed I would have gone with the expansion. (Could be hindsight bias.)
- All in all I really liked this challenge! Very fun!
- Prediction:
- a) 80%
- b) 10%
- c) 10%
↑ comment by Raemon · 2023-08-08T20:30:06.922Z · LW(p) · GW(p)
(The >! thing didn't work in markdown. I just did a quick edit to change the comment to a LessWrong Docs comment, where I was confident the spoiler tags would work; will look up how to do markdown later.)
and, thanks and congrats!
I couldn't quite tell where your journaling vs. end takeaways started and stopped. I'm happy to read through the whole thing, but you might want to edit for clarity for the benefit of others.
Replies from: Morpheus↑ comment by Morpheus · 2023-08-08T20:39:22.036Z · LW(p) · GW(p)
I was copying it from my notes (with syntax for spoiler tags already in), and I believe the LessWrong Docs mode didn't work for that reason. Took some time, because I got confused: I looked in the "welcome&faq" post instead of the actual FAQ for the markdown way.
OK, a shot at Challenge I, with Poof and Foop, Steam Locomotive, and Expansion of Nothing. Felt like all three are in the sweet spot. I personally dislike Expansion of Nothing.
Poof and Foop:
The problem statement is a bit leading: there's some kind of inversion symmetry relationship between the two cases, so it should go the opposite direction, right?
Initially, definitely. The puncture means that there's less pressure on the right side—instead of colliding with the can, some particles go inside.
But those particles end up colliding with the interior left side anyway. So it seems like it should even out, and at the end the can won't be moving.
So my guess is (c). Can I make myself more confident?
Why doesn't an inversion argument go through? Well, the compressed air can is drawn in vacuum, but the vacuum can doesn't empty the environment.
So it's not simply time reversal. If the compressed air can were in air, then we might have some kind of symmetry between air particle and absence of air particle,
but then the compressed air can would slow down due to drag and stop in the limit. So that still points to (c). That also works as a thermodynamic argument—the first can isn't equilibrating with anything, so none of the work goes to heat. 95% confidence feels good.
*checks* OK, looks like I was thinking about it right, and my explanation for why the naive inversion is wrong is equivalent to sketch II.
Reflection: The main interesting thing here is the fake symmetry argument. My favorite problems have tempting solutions that don't work for subtle reasons. I think it's important not to count problems solved until you can pinpoint why those solutions fail.
What did I use here? If you're dealing with pressure, you can probably get an answer with forces or with thermodynamics. A net force can be thought of as a single force or as lack of a balancing force. That's the symmetry idea.
I'm not very good at babbling. I'm basically looking over what I wrote and renarrating it. Words to words.
Steam Locomotive:
We might want to think about torque and the height of the axle.
Or maybe it's about wheel radius. One cycle takes you further with bigger wheels.
I think these both point to (b).
I'm a little confused because thinking about the wheel heights of sports cars and trucks would push me towards (a). But cars have gears. Directly driving small wheels is basically low gear.
Not sure how I'd know if the answer were (c) or (d). Seems like you'd need background knowledge not in the question.
I should think about actual forces to get to 95% confidence.
Let's say the engine puts out the same force in both cases. Then, in II, each wheel sees half as much force from the engine,
but the ground exerts force on twice as many wheels, so that part's a wash. But because the wheels are smaller, the ground
needs to exert more force per unit engine force to keep the wheel from slipping (same torque).
So for the same engine, II seems to give more accelerating force, while I gives higher top speed. I'd put 95% on (b).
*checks* OK, seems like I had the right thought. Could I have been as confident from the distance-per-cycle argument alone? Rather than look at forces,
the author's answer argues that we know the locomotive that goes a shorter distance in the same number of engine cycles must
be putting more energy into each mile it travels. I considered that, but I wasn't sure it was a valid step.
Why couldn't you just be getting less work from the engine? Well, it's the same piston with the same motion.
My force calculation already needs that assumption, it just makes the final connection with the acceleration.
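A minimal sketch of the wheel-radius tradeoff, assuming (as in the reasoning above) the same engine torque in both cases. The torque and radii are made-up illustrative numbers, not real locomotive specs:

```python
# Toy model: same engine torque driving wheels of different radii.
# Tractive force at the rail is torque / wheel_radius;
# distance covered per wheel revolution is 2 * pi * wheel_radius.
import math

def tractive_force(engine_torque, wheel_radius):
    # Smaller wheels convert the same torque into a larger pulling force.
    return engine_torque / wheel_radius

def distance_per_rev(wheel_radius):
    # Bigger wheels cover more ground per engine cycle.
    return 2 * math.pi * wheel_radius

TORQUE = 10_000.0               # N*m, arbitrary
r_passenger, r_freight = 1.0, 0.5  # big wheels vs. small wheels (illustrative)

# Small wheels pull harder (freight, answer b); big wheels trade force for speed.
assert tractive_force(TORQUE, r_freight) > tractive_force(TORQUE, r_passenger)
assert distance_per_rev(r_passenger) > distance_per_rev(r_freight)
```

This is the same "low gear" analogy as directly driving small wheels: force and distance-per-cycle trade off at fixed torque.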
Reflection: I feel like I don't know much about automotives. (Is a locomotive an automotive, by the way? I think so, it's just locomotives involve a track.) I can describe transmission and gears and engines and so on if I think about it, but I don't have much intuition. Like, I can't explain why it's one way and not another, or how different cars solve different problems.
I just feel like I should have been able to answer the question immediately. If I could drive stick, would that help? Probably not. I already ride a bike and didn't immediately see the analogy.
What did I use? Qualitative personal experience. I picked a misleading experience but reasonably didn't weight it above thinking through the problem. Identifying relevant considerations. Didn't stop at the first idea.
Expansion of Nothing:
Oh, this one's nasty. It has to expand, right?
If you took an iron disk and drew a circle where the hole is, the circle would expand.
If you cut that disk out and heat up the cutout, the disk expands the same amount.
So everything outside the circle can't be exerting any net force at the boundary, and the hole has to stay the same size as the disk.
I don't see any problems with this argument, but can I explain why other arguments don't work? Why can't thermal expansion generate stress instead of allowing uniform expansion? I guess in a sense I just gave the reason, but why does the gap shrink if you cut a gap in a rod instead? Well, when you have only one piece, it's like applying a magnification transformation, which requires an origin. But the origin is arbitrary—you can just recenter. With two separate pieces, the two origins of magnification are no longer arbitrary.
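The magnification argument can be checked numerically: uniform thermal expansion maps every point (x, y) to (kx, ky) for the same factor k, whether or not material sits there, and points on the hole's rim are carried outward with everything else. A small illustrative sketch (the expansion factor and radius are arbitrary):

```python
# Uniform expansion as magnification: every point (x, y) of the plate maps to
# (k*x, k*y) for the same k > 1. Points on the hole's rim are plate points,
# so the rim scales by k too and the hole gets bigger.
import math

k = 1.01          # expansion factor for some temperature rise (illustrative)
hole_radius = 2.0  # arbitrary units

# Sample a few points on the rim of the hole.
rim = [(hole_radius * math.cos(t), hole_radius * math.sin(t))
       for t in (0.0, 1.0, 2.0, 3.0)]
expanded = [(k * x, k * y) for (x, y) in rim]

for (x, y) in expanded:
    # Every rim point now sits at distance k * hole_radius from the center.
    assert abs(math.hypot(x, y) - k * hole_radius) < 1e-9
```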
*checks* Yeah, the author's answer doesn't go there, unfortunately.
Reflection: This problem feels really annoying to me. Maybe I saw it a long time ago and got it wrong? Or maybe it's that you never have anything that's free to expand uniformly. It's braced against something, or it's sitting on something with a different coefficient of thermal expansion, and you do get stress and it does matter how the thing is patterned.
This feels like a problem where you're supposed to think about limiting cases. Like, if you have an atomic ring, obviously it expands. I don't know if you can justify jumping to the right answer from that, though. If the disk is thick and the cutout doesn't go all the way through, it expands. Ehh. You still need an argument that it expands the same.
↑ comment by Raemon · 2023-08-09T00:28:59.984Z · LW(p) · GW(p)
I actually created a doc where people can add their own confusions and answers for Expansion of Nothing: https://docs.google.com/document/d/1cleM-QuO9R9_jRqDZMMKzobpWcf-k9KHBe91fUMWhuQ/edit
I'll edit it into the OP.
I'd also add: a TODO item on my list is to make my own followup question for Expansion of Nothing that presents rings of different materials (i.e. something like a ring of water, a ring of jelly, a ring of concrete, something like that), and asks "in any of these cases, do you get a different answer than the Iron Ring?
I don't actually know currently whether there exists a material that gets you a different answer, so this is a bit of a research question. Crafting the final question such that it somehow gives you enough information feels like part of the exercise. [by which I meant the meta-exercise of "creating an exercise", which I think is also a useful thing to learn]
↑ comment by Raemon · 2023-08-09T00:47:13.523Z · LW(p) · GW(p)
I'm also generally curious how you found the exercise, whether it seemed worthwhile to you.
↑ comment by Muireall · 2023-08-09T01:13:09.454Z · LW(p) · GW(p)
I enjoyed it, although I'm already the sort of person who thinks Thinking Physics is fun—both the problem solving and the nitpicking about what constitutes a correct explanation. It seems worth doing at least a handful of problems this way, and more broadly deliberately practicing problem solving and metacognition about problem solving. Thinking Physics could be a good complement to Problem Solving Through Problems or How To Solve It, since in my (limited) experience you get quickly diminishing returns to anything but competition math with collections like that.
My answer to "Designing a training regimen."
I recently spent ~2 weeks on this. I iterated on the approach over time, and didn't really try this "design training" exercise at the beginning.
My starting approach was the "aim for 95% confidence" (now listed as a requirement in the OP), based on receiving that advice from a friend and finding the general idea compelling. Initially I aimed at always giving myself at least a full day to answer a question. I eventually came back to this, but pretty quickly decided it wasn't actually the right approach.
I ended up with a separation between "training" and "testing." During training, I'm optimizing for learning quickly. This can include erring more toward looking up answers, working with partners, etc.
During testing, I focused on evaluating whether I-specifically-learned-things, so I didn't talk to friends about my thought process much to avoid spoilers. And I gave myself a very long time (sometimes spending more than a full day on each question).
I was experimenting with workshops throughout this time, and a lot of my effort ended up going towards managing other people and making sure they were having a good time. One of the things I'd go back-in-time and tell myself is "don't try to mix large workshops and doing-it-myself. Invite friends to partner with, but focus on a few people you know well."
One major update was I shouldn't just be trying to get the right answer, I should be trying to identify the explanation the author was primarily aiming at. (Sometimes the author's explanations are confusing or incomplete, but I think "generate lots of relevant explanations, at least one of which was the one the author generated" still seems useful for making sure I actually modeled the situation well)
I figured out partway through the process that I should be optimizing for "learning as much as I could from each question", and that suggested a followup strategy of "choose problems that feel like I will learn a lot from". (With the most obvious implication being 'not too easy or too hard', and a trickier implication being 'requires skills that I'd still benefit from focusing on improving')
One of the biggest problems was setting aside time to do it at all. This is a lot of cognitive work. I ultimately found I could only do this for a few hours a day and still was pretty exhausted in the evening. I think it's relatively achievable to set aside one weekend for this but the amount of time necessary to vet "you have meaningfully improved" is pretty expensive.
I was lucky to be able to take a 2-week break where I was professionally focused on this. If rationality wasn't part of my Day Job, and I couldn't take a vacation for it, I think my approach would be to allocate one weekend-day each week toward this for a few weekends (aiming to look up the answer after an hour per question). And then, for testing... well, this feels fairly tricky. An obvious answer is just... keep allocating weekend time to it. This feels like it'd take a long time. Hrmm.
It'd be easier if "people's ability to solve Thinking Physics problems" was better studied, and it was, say, known that some given exercises generally take an average undergrad 2 hours to deconfuse themselves on. (Then, you set yourself a 2 hour timer and submit your best answer when you're done, rather than potentially spending days on it doublechecking yourself).
I think, for the immediate future, "take as long as you want to thoroughly understand the scenario" is a better test of thinking-skill for people doing open-ended research, and the fact is, it mostly makes sense to do this if you're actually already planning to invest years in open-ended research with poor feedback loops.
↑ comment by Morpheus · 2023-08-17T21:43:55.110Z · LW(p) · GW(p)
I tried doing these exercises in my rationality group this week with 5 other people. Since we did this as part of our regular meetup, doing 1h for a single question would have taken too long (we could have done 2 questions max). Instead, we did 4 exercises in ~90 min (steam locomotive, poof and foop, expansion of nothing, rare air). We started out with relatively strong physics backgrounds (everyone knowing mechanics), so I think that wasn't too hasty, except perhaps for the reflection part. I gave people the first 5 minutes to think for themselves and to record their first probabilities. Then we discussed probabilities (there always ended up being strong disagreements. We had two physics PhDs, and they happened to disagree twice, both of them with 90% confidence on a question (both times the same one of them was right)).
I think because our meetups are often more of a social gathering, there was not as big a buy-in to go full munchkin on the exercises. Since I had already done the puzzles, I also didn't participate in the discussion, as I didn't want to leak information. I feel like that was a mistake: by participating in the discussion I could have transferred my enthusiasm, and people would have had more fun and tried harder on the exercises. Next time, I am going to pick problems that I haven't solved yet. I also forgot to do the reflections as a discussion; instead I told everyone to think on their own about how they could have done better, which was definitely worse. I then ended up making the reflection part really short (3 min) for the first easy exercises because people didn't seem enthusiastic.
Once we got to the rare air exercise, though, everyone seemed really involved, since the exercise was obviously hard and people actually started thinking. At the end, they still converged on the wrong answer. I had a hard time reading the room for how this went, but people actually brought up whether we could try this again at our next meetup, so I guess it went well.
One of the takeaways was that people weren't double-checking their models enough against settings they know (for example, they got rare air wrong because their definition of pressure was incorrect: particles per volume × speed).
It also took more time than I expected, where people were just trying to grok the solution (especially for poof and foop).
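For reference, the standard kinetic-theory result is that pressure goes with number density times the *mean squared* molecular speed, p = (1/3) n m ⟨v²⟩, not density times speed: faster molecules both hit the wall more often and transfer more momentum per hit. A minimal sketch contrasting the two models (the numeric values are rough, made-up sea-level-air figures, just for illustration):

```python
# Kinetic theory: p = (1/3) * n * m * <v^2>, with n = particles per volume.
# The naive model "p ~ n * v" scales linearly with speed;
# the correct formula scales quadratically.

def pressure_kinetic(n, m, v):
    """Ideal-gas pressure from number density n, molecular mass m, rms speed v."""
    return n * m * v**2 / 3

def pressure_naive(n, v):
    """The incorrect 'particles per volume * speed' model."""
    return n * v

n, m, v = 2.5e25, 4.8e-26, 500.0  # rough illustrative values, not exact
print(pressure_kinetic(n, m, 2 * v) / pressure_kinetic(n, m, v))  # 4.0
print(pressure_naive(n, 2 * v) / pressure_naive(n, v))            # 2.0
```

Doubling molecular speed quadruples the kinetic-theory pressure but only doubles the naive one, which is exactly the kind of sanity check against a known setting (here, how pressure scales with temperature) that would have caught the wrong definition.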
↑ comment by Morpheus · 2023-08-08T21:21:41.346Z · LW(p) · GW(p)
It'd be easier if "people's ability to solve Thinking Physics problems" was better studied, and it was, say, known that some given exercises generally take an average undergrad 2 hours to deconfuse themselves on. (Then, you set yourself a 2 hour timer and submit your best answer when you're done, rather than potentially spending days on it doublechecking yourself).
I think, for the immediate future, "take as long as you want to thoroughly understand the scenario" is a better test of thinking-skill for people doing open-ended research, and the fact is, it mostly makes sense to do this if you're actually already planning to invest years in open-ended research with poor feedback loops.
Is part of your hesitance that your “dataset” of thinking physics type of questions is not super large? I'd expect just doing 5 of the exercises in 50 minutes every day as a "test set" is going to get you more reliable feedback whether your daily training regime is working, but then you need to find new exercises once you run out of thinking physics questions.
Not quite what was looked for, but my answer / analysis of Not Far (one of the earliest problems in Mechanics):
This problem asks you to determine what distance has been traveled, based on pseudo-integrating a graph of speed vs. time. ! Notably, the graph appears to have three square units of area above the time axis, and three square units of area below the time axis. If these are in fact identical squares, they would cancel out, and there would have been no net distance traveled. !
! The problem then asks "Look at this speed graph and tell how far away from the starting point this thing ended up.", giving the options: ! a) It is impossible to tell because the graph has no numerical scale on it. ! b) It ended up at the starting point. ! c) It did not end up at the starting point but where it ended can't be told. !
! I pretty firmly believed (and still believe) that the answer here is a: not only does the graph not have a numerical scale on either axis, but there is no other indication that these squares are equally sized. ! Still, I anticipated that the workbook might be trying to teach a different lesson: not something like 'without units, be careful of interpreting graphs', but something more like 'positive and negative distances can cancel out'. ! Accordingly, I didn't put ~100% on a; instead I did a 90-8-2 split across these, getting it 'wrong' when the answer was deemed to be b. !
! I notice that I'm pretty frustrated to have gotten this one 'incorrect', and that I was kind of muddling between two different levels of analysis: 1) what is my confidence in the correct answer to this problem, and 2) what is my confidence that the answer I deem correct is the same one that the author deems correct. I really did not want to have to consider 2 that deeply when giving my probabilities, but I guess the cost of that will be getting a lower predictive score than otherwise. ! I also notice that I'm really searching for someone/something to validate my experience of having been 'robbed' here. But from what I can tell, the Internet does not have much other discussion of this specific problem. I feel my trust was kind of broken by this particular exercise, and I'm bummed to have encountered it so early on, but also feel like I'm kind of shouting into the void by sharing this. (I am finding the exercises on the whole to be useful, though, and do not at all regret having gone through the ones so far.)
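! The physics underneath the dispute — net displacement as the signed area under a velocity graph — is easy to check numerically. A minimal sketch, assuming (as answer b requires) that the squares really are the same size: !

```python
# Net displacement = integral of velocity over time (the signed area
# under the curve). Equal areas above and below the time axis cancel.

def net_displacement(velocities, dt):
    """Riemann-sum approximation of the integral of v(t)."""
    return sum(v * dt for v in velocities)

# Toy version of the graph: +1 unit of speed for three time units,
# then -1 for three time units (three squares above, three below).
v = [1, 1, 1, -1, -1, -1]
print(net_displacement(v, dt=1.0))  # 0.0 — ends at the starting point
```

! Of course, this only settles what happens *if* the squares are equal — it doesn't resolve the interpretive question of whether the unlabeled graph licenses that assumption, which is the actual disagreement here. !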
↑ comment by Raemon · 2024-12-09T22:11:24.565Z · LW(p) · GW(p)
I just briefly skimmed your answer (trying not to actually engage with it enough to figure out the problem or your thought process), and then went and looked at the problem.
I got the answer B. The reason I went with B is that (especially contrasted with other illustrations in the book) the problem looks like it's going out of its way to signal that the squares are regular enough to convey "this is the same relative size."
I think there's not going to be an objective answer here – sometimes, graphs without units are complete bullshit, or on a logscale, or with counterintuitive units or whatever. Sometimes, they are basically what they appear-at-first-glance to be.
instead I did a 90-80-2 split across these, getting it 'wrong' when the answer was deemed to be b.
Does this mean you assigned ~49% to B? (not 100% sure how to parse this)
The way I approach Thinking Physics problems is
a) I do assume I am trying to guess what the author thought, which does sometimes mean psychologizing. (This is sort of unfortunate, but also not that different from most real-world practical situations, where you often get a task that depends on what other people think-they-meant, and you have to do a mix of "what is the true underlying territory?" and "am I interpreting the problem correctly?")
b) whenever there are multiple things I'm uncertain of ("what does 'pressure' mean?", "what does the author mean by 'pressure'?"), I try to split those out into multiple probabilities
↑ comment by SomeJustGoodAdvice · 2024-12-10T02:21:47.347Z · LW(p) · GW(p)
Whoops, that was a typo - corrected the probability now in the thread, & thanks, that's helpful
↑ comment by Raemon · 2024-12-10T02:24:55.195Z · LW(p) · GW(p)
Nod. I think I would basically argue that wasn't really a reasonable probability to give the second option. (When I thought it was 90/80/2 I was like "okay well that's close to 50/50 which feels like a reasonable guess for the authorial intent as well as, in practice, what you can derive from unlabeled graphs.")
15 comments
Comments sorted by top scores.
comment by Morpheus · 2023-09-13T20:32:19.324Z · LW(p) · GW(p)
Bonus Challenge
Inspired by this idea [LW(p) · GW(p)] from Alex Turner's shortform, I tried to figure out which facts are truth or fiction based on prompting gpt-4 to mess with a Wikipedia article on Developmental Psychology. (First I let GPT-4 munch a big chunk of the article, and then I chose the first chunk I saw that contained lots of concrete claims.)
Credences are 0% if the claim is false, and 100% if the text written by GPT-4 is true / reflects the original article. Outcomes are on the line afterwards. Written more as personal notes (very rough).
Vision is sharper in infants than in older children.
- Vision is probably not sharper for infants, but the opposite! (10%)
- false
Infant sight tends to remain stable with little improvement over time.
- Infant sight should rapidly improve! (at least at some point it has to!) (10%)
- false
Color perception is limited in the first year, with infants primarily seeing in shades of gray [79]. Infants only begin to develop adult-like vision at about twelve months.[72]
- is no color perception plausible? (70%)
- false; in fact they learn it at 4 months!
Hearing is still evolving at the time of birth.
- Accidentally skipped this claim
Newborns show no distinct preference for human speech over other sounds, and they can't distinguish their mother's voice from others'.
- Newborns should probably pay more attention to their mother's voice! (It seems that this makes more sense if the latter parts are true. Not sure though!) (40%)
- false
The belief that these features are learned in the womb has been debunked.
- the debunking claim seems pretty plausible! (70%) (on reflection I'm not super sure that this is how it would be written on Wikipedia)
- false
By 18 months, infants' hearing ability is still not on par with adults.
- not hearing on par is plausible. On the other hand, the opposite seems more likely to be mentioned? (30%) (seems plausible, at that time some babies start talking right?)
- false
Smell and taste are rudimentary, with infants often unable to distinguish between pleasant and unpleasant odors and tastes
- The smell seems very implausible to me! Especially for some of the more toxic things, I would expect them to be very ingrained. It seems like valence for a lot of the strongest smells is preprogrammed! (10%) (I don't give 5%, because it could be for substances that are not really dangerous? In that case "rudimentary" would make sense as a description)
- false
Newborns do not show a clear preference for the smell of human milk over that of formula.[72]: 150 Older infants, interestingly, do not show a preference for their mother's scent.[79]
- Human milk over formula? Seems like that could go either way with underpowered studies? (55%)
- true (Huh first positive result ... somehow I now want to see how well powered these actually were, or how you detect which smell a baby "likes" at all and whether that's a strong signal)
Touch and feel, while being one of the first senses to develop in the womb, are not as refined in infants as previously thought.[84] This contradicts the idea of primitive reflexes, which were believed to demonstrate advanced touch capabilities.
- This section seemed perhaps a bit weird? Why would primitive reflexes be rather advanced? Is this saying that a baby needs to figure out most motor control, and most is not preprogrammed? Seems plausible; I give (40%) that none of the claims above have been altered.
- false (In hindsight of course a baby can figure out a lot of motor control before leaving the womb)
Pain perception in infants is believed to be less intense than in older children, indicating that they may not feel pain as acutely.
- Not sure how long something counts as an infant. It seems like a plausible claim if a lot of pain is sort of more of a social thing and babies haven't developed that so much yet? On the other hand, babies seem like they are crying a lot and constantly suffering. (30%)
- false
There is also no substantial evidence that glucose can relieve pain in newborns.[87]
- The glucose thing seems like a coin toss? Seems marginally more plausible to be mentioned if true, so (45%)
- false. Wow, a lot of these are stated with higher confidence than I would have expected. The sucrose thing is apparently a common practice, and the randomized controlled trial doesn't seem to have too-bad numbers (although I should at some point figure out how to get a useful estimate of the effect size out of statistics like that). It seems plausible that blinding might be a bit hard.
- It also gives me more confidence that Wikipedia is not listing lots of common misconceptions it wants to crush.
- Overall, this whole field seems interesting! I think I also underestimated this field because it has psychology in its name (yeah, I know that sounds dumb). I was not reflecting on my probabilities for long, and now feel like I could have done a lot better if I had (feedback and knowing how wrong my first impressions are is also valuable). This also reminds me of a section of HPMOR where Harry thinks about how it took a very long time until some human came up with the idea of investigating when children learn what. It also seems like a lot of the problems with testing that you would usually have in psychology studies, especially around surveys and self-report, can't occur with infants, so you get higher-quality data. You also wouldn't get infants trying to figure out what your experimental design is and whether they want to prove you right, wrong, etc.
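A quick way to score a batch of credences like the ones above is the Brier score (mean squared error of the forecasts; lower is better, and always guessing 50% scores 0.25). A sketch using the eleven credences and outcomes listed above (skipping the hearing claim that was accidentally skipped):

```python
def brier(forecasts):
    """Mean squared error of probabilistic forecasts.

    forecasts: list of (p, outcome) pairs, where p is the stated
    probability that the claim is true and outcome is True/False.
    """
    return sum((p - (1.0 if o else 0.0)) ** 2 for p, o in forecasts) / len(forecasts)

# Credences that each GPT-4 claim was true, with the actual outcomes:
fs = [(0.10, False), (0.10, False), (0.70, False), (0.40, False),
      (0.70, False), (0.30, False), (0.10, False), (0.55, True),
      (0.40, False), (0.30, False), (0.45, False)]
print(round(brier(fs), 3))  # ~0.174, versus 0.25 for an always-50% guesser
```

So even with little reflection, the batch as a whole beat the maximally uninformed baseline.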
comment by Max H (Maxc) · 2023-08-01T02:16:55.536Z · LW(p) · GW(p)
The Amazon link in this post is for Thinking Physics: Understandable Practical Reality. I also found Thinking Physics: Practical Lessons in Critical Thinking and Thinking Physics Is Gedanken Physics.
AFAICT, these are just different editions of the same book, but I couldn't determine what the best or latest edition is. To save people the same Googling that I did, Archive.org has a version available online here, and the Harvard Book Store sells a paperback copy in stock here for $34. (Amazon doesn't appear to actually have any edition for sale at a reasonable price.)
comment by Max H (Maxc) · 2023-08-01T04:32:54.631Z · LW(p) · GW(p)
Find different sets of exercises that are as different as possible from Thinking Physics (i.e. requiring a pretty different set of skills, while still feeling relevant to becoming a "generalist researcher"), that would make for a good followup to this exercise.
I think my idea [LW · GW] of investigating a recent (alleged) poker cheating scandal is a good exercise in this vein. It's certainly very different from Thinking Physics problems.
The main objections people had when I posted it were that it requires either already having or quickly absorbing a lot of background knowledge about the rules of poker and norms in the high stakes poker scene as a prerequisite, and that there is no way to know if you got the answer right. I continue to think these are not fatal flaws, and that if you're willing to invest some hours in learning the relevant background (which is itself a good rationality skill to practice, especially if you try to do it under time pressure), the payoff in the quality of the mystery is worth it.
There are a myriad of plausible competing hypotheses and piles of publicly available (but somewhat complex-to-think-about) evidence that make this a good test of your ability to make Bayesian updates about a real world situation. Also, the fact that there is no public consensus is actually a benefit in some ways - the exercise is un-spoilable, and you can research freely without fear of accidentally running into a consensus-accepted definitive conclusion.
Looking into other unsolved mysteries (e.g. murder mysteries, heists, or other famous cold cases) might provide a similar kind of challenge, and if you compile enough cases you could form a set of exercises in the "mystery solving" genre. But it can be hard to find suitable candidates with lots of publicly available evidence of different types, especially cases that still have multiple competing hypotheses and no clear / trivially obvious conclusion. Essentially, you want something that is actually unsolved (not just legally unsolved), but still interesting and not a total dead end due to lack of evidence. I haven't personally looked into it much, but the JonBenét Ramsey case (warning: gruesome murder / CSA case) comes to mind as one possibility that might suit.
comment by Elizabeth (pktechgirl) · 2023-08-12T20:27:24.093Z · LW(p) · GW(p)
Cold Air isn't in my physical copy, the archive.org copy, or the pdf I found online; I think the problems vary by edition.
comment by Mitchell_Porter · 2023-08-01T07:38:13.681Z · LW(p) · GW(p)
Eliezer's Class Project has a fictional group of rationality students try to find the true theory of quantum gravity in one month. This always seemed like a cool goal and test for rationality training to aspire to. If you're not solving difficult open problems faster than science, your Art of Rationality probably isn't complete.
It's good for intelligent people to be audaciously ambitious. But is Art of Rationality enough to figure out quantum gravity, or solve "difficult open problems" in the sciences? If not, could you comment on what else is needed?
↑ comment by Raemon · 2023-08-01T07:52:04.939Z · LW(p) · GW(p)
I mean, depends how you're defining art of rationality. I think it'll usually require some kind of domain expertise and skills in the relevant open problems. I also think "rationality" would be important for figuring out what skills to gain, and figuring out how to learn them as quickly as possible, if you were starting from scratch.
As for "is this possible?", well, I'm not sure. This post is part of a sequence (and a possible longterm research project) aimed at figuring out the answer.
comment by quila · 2024-09-24T05:15:56.248Z · LW(p) · GW(p)
i'm enjoying this. going through the questions right now, might do all of them
had a notable experience with one of the early questions:
question: "The battery output voltage, the bottle volume, the digital clock time, and the measure of weight (12 volts; one gallon; 12:36; 1 lb) all have something in common. It is that they are represented by a) one number b) more than one number."
recollected thought process: apart from the clock time, they all have one number. the time on the clock is also, in my opinion, represented by one number in a non base-n numeral system - the symbols update predictably when the value is incremented, which is all that's required. i'm not sure if the author intends that interpretation of the clock, though. let's look for other interpretations.
"lb" - this is a pointer to formulas related to weight/gravity (or more fundamentally, a pointer back to physics/the world). "1 lb" means "1 is the value to pass as the weight variable". a formula is not itself a number, but can contain them. maybe this is why the clock is included - most would probably consider it to contain two numbers, which would force them to think about how these other three could be 'more than one number' as well.
(though it's down to interpretation, i'll choose b) more than one number.)
the listed answer is: a) one number. "Each is represented by only one number - the battery by 12 volts, the bottle by one gallon, the time by 12:36 and the weight by one pound. Things described by one number are called scalars. For example: on a scale of one to ten, how do you rate this teacher?" it just restates them and implies in passing that 12:36 is one number, without deriving any insight from the question. *feels disappointed*. (i guess they just wanted to introduce a definition)
comment by Muireall · 2023-08-08T21:16:33.048Z · LW(p) · GW(p)
I only ever flipped through Thinking Physics for fun, but what I remember is that I tended to miss easier problems more often. If I spent time thinking about one, really making sure I got it right, I'd probably get it. Outside those, there were some that really were elementary, but I'd often find myself thinking I'd looked at the author's answer too soon—a self-serving "well, I would have gotten this, if I were really trying." I might say the problem was that I couldn't tell when I needed to really try.
This does remind me a bit of how I studied for the physics GRE (do people still take that?), particularly getting calibrated on multiple-choice confidence and on how long to spend on problems. Unfortunately, but perhaps not surprisingly, very little of that study transferred to my PhD experience.
↑ comment by Raemon · 2023-08-08T21:18:51.795Z · LW(p) · GW(p)
I am interested in
- how much deliberate effort you put into calibrating yourself on "how much effort to put into multiple choice questions"
- whether you put any deliberate effort into transferring that into the PhD experience
- what did you actually do in your PhD experience?
- what do you think would have better prepared you for PhD experience?
↑ comment by Muireall · 2023-08-08T23:07:22.803Z · LW(p) · GW(p)
For context if anyone needs it, the Physics GRE is (was?) a multiple-choice exam where you get penalized for wrong answers but not for blanks. It works out so that if you eliminate one answer there's no harm in guessing, in expectation. There's also considerable time pressure—something like 90 seconds per question on average.
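The "no harm in guessing" claim checks out under the scoring scheme described (assuming the usual setup of 5 choices, +1 for a right answer, −1/4 for a wrong one, 0 for a blank):

```python
# Expected score from guessing uniformly among k remaining options,
# under assumed scoring of +1 right, -penalty wrong, 0 blank.
def ev_guess(k, penalty=0.25):
    return (1 / k) * 1 + ((k - 1) / k) * (-penalty)

for k in [5, 4, 3, 2]:
    print(k, ev_guess(k))
# k=5 is exactly break-even (0.0); eliminating even one option
# (k=4) makes guessing positive-EV versus leaving a blank.
```

The penalty is calibrated so that blind guessing among all five options gains nothing in expectation, which is why ruling out a single answer tips the balance.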
how much deliberate effort you put into calibrating yourself on "how much effort to put into multiple choice questions"
Enough to get through all questions with some time left over, even if that meant guessing on some I could fully solve. I'd mark the questions I'd guessed on with different symbols that let me go back at the end and prioritize solving them. For three or so practice tests, I systematically went over every problem that I missed, guessed, or spent a long time on and did the metacognitive thing including questions like "how long did I think this would take? when was I 50% confident? when should I have decided to move on? how could I have decided faster?" (Using purely retrospective judgment—I wasn't actually timing individual questions or anything more complicated.)
whether you put any deliberate effort into transferring that into the PhD experience
Not really. I think I had some notion that being able to solve small problems quickly could lead to a sort of qualitatively better fluency, but in the end there just wasn't enough in common between test content/conditions and research (or even coursework) to prioritize that. I definitely didn't learn the lesson that I was generally underconfident.
what did you actually do in your PhD experience?
Pretty normal experimentalist route, maybe heavier on math and programming than typical. Coursework for 1-2 years shading into helping with senior students' experiments, then designing and running my own.
what do you think would have better prepared you for PhD experience?
In the end I was reasonably well prepared in terms of technical knowledge, problem solving, [meta]cognitive skills, and so on (irrespective of the GRE). I think I mostly lacked perspective, particularly in terms of choosing problems and working with a supervisor. I'd guess, starting with most helpful, one or more of these:
- Industry experience with a good manager
- More research experience in other subjects
- Research in the same subject
- Other industry experience
As far as things I could have done instead with the time I used to study, I don't know. Make friends with grad students?
comment by trevor (TrevorWiesinger) · 2023-08-01T02:48:35.951Z · LW(p) · GW(p)
I think it's important to note that, if you randomly solve thinking physics (or even make a decent breakthrough), then all the alignment researchers get to have it too.