Posts

“Thinking Physics” as an applied rationality exercise 2023-08-27T15:31:00.814Z
“Thinking Physics” as an applied rationality exercise 2023-08-10T08:32:01.075Z
Karlsruhe Rationality Meetup: Inadequate Equilibria pt2 2022-11-10T10:00:44.150Z
Karlsruhe Rationality Meetup: Inadequate Equilibria pt1 2022-11-02T09:15:57.489Z
What Is the Idea Behind (Un-)Supervised Learning and Reinforcement Learning? 2022-09-30T16:48:06.523Z
Does the existence of shared human values imply alignment is "easy"? 2022-09-26T18:01:10.661Z
Karlsruhe Rationality Meetup: Predictions 2022-09-06T16:56:57.021Z
Moneypumping Bryan Caplan's Belief in Free Will 2022-07-16T00:46:03.176Z
Returns on cognition of different board games 2022-02-13T20:40:49.163Z
Coping with Undecidability 2022-01-27T10:31:00.520Z
Time until graduation as a proxy for picking between (German) universities 2022-01-24T18:27:32.984Z
Are "non-computable functions" always hard to solve in practice? 2021-12-20T16:32:25.118Z
What is the evidence on the Church-Turing Thesis? 2021-09-19T11:34:49.377Z
Chance that "AI safety basically [doesn't need] to be solved, we’ll just solve it by default unless we’re completely completely careless" 2020-12-08T21:08:47.575Z
Morpheus's Shortform 2020-08-07T22:35:57.530Z

Comments

Comment by Morpheus on Luck based medicine: angry eldritch sugar gods edition · 2023-09-19T15:49:23.745Z · LW · GW

I just realized you'd expect your blood sugar to go down or at least move a little.

Comment by Morpheus on Luck based medicine: angry eldritch sugar gods edition · 2023-09-19T15:34:41.563Z · LW · GW

I’d always avoided diet soda on the belief that no-calorie sweeteners spiked your insulin and this led to sugar cravings that left you worse off. But when I tried stevia-sweetened Zevia with the CGM, my blood glucose levels didn’t move at all, and I didn’t feel any additional drive to eat sugar

Maybe I am missing something, but why would a spike in your insulin be visible on a glucose monitor? If it wouldn't, then perhaps your previous stance on the sweeteners was right?

If your graph is right about your sugar intake, another hypothesis for the cravings would be your total sugar intake: it seems you are consuming less sugar in the winter.

Comment by Morpheus on Morpheus's Shortform · 2023-09-16T19:52:30.604Z · LW · GW

Epistemic Status: Anecdote

Two weeks ago, I was dissatisfied with the number of workouts I do. When I considered how to solve the issue, my brain generated the excuse that while I like running outside, I really don’t like doing workouts with my dumbbells in my room, even though that would be a more intense and therefore more useful workout. Somehow I ended up actually thinking and asked myself why I don’t just take the dumbbells with me outside. Which was, of course, met by resistance because it looks weird. It’s even worse: I don’t know how to “properly” do curls or whatnot, and other people would judge me on that. I noticed that I don’t actually care that much about people in my dorm judging me. These weirdness points have low cost. In addition, this muscle of rebellion seems useful to train, as I suspect it to be one of the bottlenecks that hinders me from writing posts like this one.

Comment by Morpheus on Exercise: Solve "Thinking Physics" · 2023-09-13T20:32:19.324Z · LW · GW

Bonus Challenge

Inspired by this idea from Alex Turner's shortform, I tried to figure out which claims are fact or fiction after prompting GPT-4 to mess with a Wikipedia article on developmental psychology. (First I let GPT-4 munch a big chunk of the article, and then I chose the first chunk I saw that contained lots of concrete claims.)

Credences are 0% if the claim is false, and 100% if the text written by GPT-4 is true/reflects the original article. Outcomes are on the line afterwards. Written more as personal notes (very rough).
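
For anyone who wants to replicate the setup, here is a minimal sketch, assuming the 2023-era openai Python client; the prompt wording is my own illustration, not the exact prompt used:

```python
# Minimal sketch (2023-era `openai` client); the prompt wording is illustrative.
import openai

article_chunk = "..."  # paste a chunk of the Wikipedia article here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Rewrite the following encyclopedia text, silently altering "
                    "some of its concrete factual claims while keeping the style."},
        {"role": "user", "content": article_chunk},
    ],
)
# Grade each claim in the output before comparing against the original article.
print(response.choices[0].message.content)
```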

Vision is sharper in infants than in older children.

  • Vision is probably not sharper for infants, but the opposite! (10%)
  • false

Infant sight tends to remain stable with little improvement over time.

  • Infant sight should rapidly improve! (At least at some point it has to!) (10%)
  • false

Color perception is limited in the first year, with infants primarily seeing in shades of gray [79]. Infants only begin to develop adult-like vision at about twelve months.[72]

  • Is no color perception plausible? (70%)
  • false. In fact, they learn it at 4 months!

Hearing is still evolving at the time of birth.

  • Accidentally skipped this claim

Newborns show no distinct preference for human speech over other sounds, and they can't distinguish their mother's voice from others'.

  • Newborns should probably pay more attention to their mother's voice! (It seems that this makes more sense if the latter parts are true. Not sure though!) (40%)
  • false

The belief that these features are learned in the womb has been debunked.

  • The debunking seems pretty plausible! (70%) (On reflection, I am not super sure that this is how it would be written on Wikipedia.)
  • false

By 18 months, infants' hearing ability is still not on par with adults.

  • Not hearing on par is plausible. On the other hand, the opposite seems more likely to be mentioned? (30%) (Seems plausible; at that time some babies start talking, right?)
  • false

Smell and taste are rudimentary, with infants often unable to distinguish between pleasant and unpleasant odors and tastes

  • The smell claim seems very implausible to me! Especially for some of the more toxic things, I would expect the aversions to be very ingrained. It seems like the valence of a lot of the strongest smells is preprogrammed! (10%) (I don't give 5%, because it could be about substances that are not really dangerous? In that case, rudimentary would make sense as a description.)
  • false

Newborns do not show a clear preference for the smell of human milk over that of formula.[72]: 150  Older infants, interestingly, do not show a preference for their mother's scent.[79]

  • Human milk over formula? Seems like that could go either way with underpowered studies? (55%)

  • true (Huh, first positive result... somehow I now want to see how well-powered these actually were, or how you detect which smell a baby "likes" at all and whether that's a strong signal)

Touch and feel, while being one of the first senses to develop in the womb, are not as refined in infants as previously thought.[84] This contradicts the idea of primitive reflexes, which were believed to demonstrate advanced touch capabilities.

  • This section seemed perhaps a bit weird? Why would primitive reflexes be rather advanced? Is this saying that a baby needs to figure out most motor control, and most of it is not preprogrammed? Seems plausible. I give (40%) that none of the claims above have been altered.
  • false (In hindsight of course a baby can figure out a lot of motor control before leaving the womb)

Pain perception in infants is believed to be less intense than in older children, indicating that they may not feel pain as acutely.

  • Not sure how long something counts as an infant. It seems like a plausible claim if a lot of pain is sort of more of a social thing and babies haven't developed that so much yet? On the other hand, babies seem like they are crying a lot and constantly suffering. (30%)
  • false

There is also no substantial evidence that glucose can relieve pain in newborns.[87]

  • The glucose thing seems like a coin toss? Seems marginally more plausible to be mentioned if true, so (45%)
  • false. Wow, a lot of these claims are stated with higher confidence than I would have expected. The sucrose thing is apparently a common intervention, and the randomized controlled trial doesn't seem to have too bad numbers (although I should at some point figure out how to get a useful estimate of the effect size out of statistics like that). It seems plausible that blinding might be a bit hard.
  • It also gives me more confidence that Wikipedia is not listing lots of common misconceptions it wants to crush.
  • Overall, this whole field seems interesting! I think I also underestimated this field because it has psychology in its name (yeah, I know that sounds dumb). I was not reflecting on my probabilities for long, and now feel like I could have done a lot better if I had (feedback and knowing how wrong my first impressions are is also valuable). This also reminds me of a section of HPMOR where Harry thinks about how it took a very long time until some human came up with the idea of investigating when children learn what. It also seems like a lot of the problems with testing that you would usually have in psychology studies, especially around surveys and self-report, don't arise here, since you can't do that with infants, so you get higher-quality data. You also wouldn't get infants that are trying to figure out what your experimental design is and whether they want to prove you right, wrong, etc.
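
A quick way to put a number on the above is a proper scoring rule; here is a minimal Python sketch using the Brier score on the eleven graded claims (credences and outcomes as listed above; outcome 1 means the text was true/unaltered):

```python
# Brier score: mean squared distance between credence and outcome (0 = perfect;
# always answering 50% scores 0.25). Credences/outcomes as listed above.
credences = [0.10, 0.10, 0.70, 0.40, 0.70, 0.30, 0.10, 0.55, 0.40, 0.30, 0.45]
outcomes  = [0,    0,    0,    0,    0,    0,    0,    1,    0,    0,    0]

brier = sum((p - o) ** 2 for p, o in zip(credences, outcomes)) / len(credences)
print(f"Brier score: {brier:.3f}")  # ~0.174 here
```
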
Comment by Morpheus on Defunding My Mistake · 2023-09-07T22:22:34.452Z · LW · GW

Certain risks around groupthink, not knowing how to select for behaviors or memes that are "safe" to tolerate in whatever memetic/status gradient you find yourself in, even just defining terms like blindspot or bias: they all seem made a lot worse by young EAs/rats who didn't previously learn to navigate a niche ideology/subculture.

Why does it have to be niche? I haven't met many nonrationalists whose minds don't go haywire once you start on politics or religion. Where did these EAs/rats grow up if they weren't exposed to that?

Comment by Morpheus on [deleted post] 2023-08-20T21:18:27.186Z

I changed the section on spoiler blocks to reflect the actual behavior of the editor. One might also consider changing this paragraph, as the "markdown syntax" for spoiler tags is not supported in this markdown editor (or fixing the bug itself).

The LW Docs editor actually supports a bunch of markdown syntax!

  • You can use #, ##, ### at the beginning of a line to insert Heading Level 1, 2, 3
  • > at the beginning of a paragraph makes it a quote block 
  • >! makes for a spoiler tag on a paragraph 
  • Three dashes will insert a horizontal divider
Comment by Morpheus on Morpheus's Shortform · 2023-08-20T21:00:09.988Z · LW · GW

Testing a claim from the lesswrong_editor tag about the spoiler feature: first trying ">!":

! This should be hidden

Apparently markdown does not support ">!" for spoiler tags. Now trying ":::spoiler ... :::":

It's hidden!

works.

Comment by Morpheus on 6 non-obvious mental health issues specific to AI safety · 2023-08-19T00:17:05.245Z · LW · GW

What is this 0 point?

Comment by Morpheus on Exercise: Solve "Thinking Physics" · 2023-08-17T21:43:55.110Z · LW · GW

I tried doing these exercises with my rationality group this week with 5 other people. Since we did this as part of our regular meetup, spending 1h on a single question would have taken too long (we could have done 2 questions max). Instead, we did 4 exercises in ~90 min (steam locomotive, poof and foop, expansion of nothing, rare air). We started out with a relatively strong physics background (everyone knowing mechanics), so I think that wasn't too hasty, except perhaps for the reflection part. I gave people the first 5 minutes to think for themselves and record their first probabilities. Then we discussed probabilities (there always ended up being strong disagreements; our physicist was twice >90% confident in the wrong answer).

I think because our meetups are often just more of a social meetup, there was not as big of a buy-in to go full munchkin on the exercises. Since I had already done the puzzles, I was not participating in the discussion, as I didn't want to leak information. I feel like that was a mistake: by participating in the discussion I could have transferred my enthusiasm, and people would have had more fun and tried harder on the exercises. Next time, I am going to pick problems that I haven't solved yet. I also forgot to do the reflections as a discussion; instead I told everyone to think on their own about how they could have done better, which was definitely worse. I then just ended up making the reflection part really short (3 min) for the first easy exercises because people didn't seem enthusiastic.

Once we got to the rare air exercise, everyone seemed to be really involved, though, since the exercise was obviously hard and people actually started thinking. At the end, they still converged on the wrong answer. I had a hard time reading the room for how this went. But people actually brought up whether we can try this again at our next meetup, so I guess it went well.

One of the takeaways was that people weren't double-checking their models enough against settings they know (for example, they got rare air wrong because their definition of pressure was incorrect: particles per volume * speed).
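
For reference, the standard kinetic-theory expression (textbook physics, added here for clarity) scales with the squared speed, not the speed:

```latex
P = \tfrac{1}{3}\, n\, m\, \langle v^2 \rangle = n\, k_B\, T
```

where n is the number of particles per volume, m the particle mass, and ⟨v²⟩ the mean squared speed; equating the two forms is exactly what connects pressure to temperature.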

It also took more time than I expected for people to just grok the solutions (especially for poof and foop).

Comment by Morpheus on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T20:04:07.936Z · LW · GW

Since I had only heard the term “stochastic parrot” from skeptics who obviously didn't know what they were talking about, I hadn't realized what a fitting phrase stochastic parrot actually is. One might even argue it's overselling language models, as parrots are quite smart.

Comment by Morpheus on Morpheus's Shortform · 2023-08-15T12:50:07.850Z · LW · GW

Inspired by John's post on How To Make Prediction Markets Useful For Alignment Work, I made two markets (see below):

I feel like there are pretty important predictions to be made around things like whether the current funding situation is going to continue as it is. It seems hard to tell, though, what kind of question to ask that would provide someone more value than just reading something like the recent post on what the marginal LTFF grant looks like.

Comment by Morpheus on Tune Your Cognitive Strategies · 2023-08-14T15:13:04.814Z · LW · GW
Comment by Morpheus on Exercise: Solve "Thinking Physics" · 2023-08-08T21:21:41.346Z · LW · GW

It'd be easier if "people's ability to solve Thinking Physics problems" was better studied, and it was, say, known that some given exercises generally take an average undergrad 2 hours to deconfuse themselves on. (Then, you set yourself a 2 hour timer and submit your best answer when you're done, rather than potentially spending days on it doublechecking yourself).

I think, for the immediate future, "take as long as you want to thoroughly understand the scenario" is a better test of thinking-skill for people doing open-ended research, and the fact is, it mostly makes sense to do this if you're actually already planning to invest years in open-ended research with poor feedback loops.

Is part of your hesitance that your “dataset” of Thinking Physics type questions is not super large? I'd expect just doing 5 of the exercises in 50 minutes every day as a "test set" is going to get you more reliable feedback on whether your daily training regime is working, but then you need to find new exercises once you run out of Thinking Physics questions.

Comment by Morpheus on Exercise: Solve "Thinking Physics" · 2023-08-08T20:39:22.036Z · LW · GW

I was copying it from my notes (with the syntax for the spoiler tag already in), and I believe the lesswrong-docs mode didn't work for that reason. Took some time because I got confused and looked in the "welcome&faq" post instead of the actual FAQ for the markdown way.

Comment by Morpheus on Exercise: Solve "Thinking Physics" · 2023-08-08T20:25:38.582Z · LW · GW
  • Challenge I
    • Exercises:
      • steam engine :-1:
      • cold bath :+1:
      • expansion of nothing :+1:
  • tldr (the long thing contains all the babble; only included because it seemed low cost, don't recommend reading):

    • Did the exercises alone, as I didn't feel like setting up something with a partner. Felt I was excited enough that it should work.
    • Steered away from 95% for all exercises where I hadn't seen the puzzle before, as I was afraid that there's a trick.
    • I mostly noticed how extremely FUN I found this! Just today I was reflecting that studying for university courses has kind of killed some of my enthusiasm, and I couldn't really remember the last time I was really excited while studying or in my free time. Tried playing games (like chess), but even that felt more like going through the motions and got me more addicted, not actually into a flow state of mind. Somehow this just clicked.
    • Training regimen:
      • Probably opting for speed. This seemed on the easier side.
      • I will maybe try 5 minutes per question and see how that goes for 10 of them.
  • Journal

    • steam locomotive
      • Intuition: It seems like bigger wheels would be better for higher speed, but might be more wasteful.
      • abstract
        • This is in the momentum column. It seems like one difference would be that the train with the lower load but higher speed needs to efficiently maintain that speed. The one for freight needs better brakes.
        • I also am not sure if I am supposed to use other evidence. On the other hand, why would I have received...
        • Overall, it seems the taller train is designed for slower speed (taller, smaller smokestack; the thing at the front for stopping things).
        • There is something unintuitive about gears here as well! Having a big wheel means one rotation of the engine covers more ground. This definitely seems like the thing that you want for the fast train with less load! On the other hand, I think there is a high chance I have the direction backwards there.
        • I really don't like the trick of having two options that are both trains of the same kind! I feel like that makes it hard for me to become confident!
        • Anything left confusing? I don't know lots about trains or engines! Not 100% sure on the direction of the thing! Not sure how tricky the questions are; I think I want to not get overconfident (the GPT-4 thing got me!)! I also haven't spent 20 minutes!
        • Tracks as hints? What about the rods attached to the wheels?
        • How does the number of wheels matter?
        • I feel like I got most of the evidence I know how to interpret.
        • I am not sure if I should treat this as an exercise in not being too impatient, or in moving at the appropriate speed. I think I want to go with taking the appropriate time?
        • When practicing, I am also not sure how well this went. I felt this exercise really didn't give me that much to work on?
      • Result:
        • b) (90%)
        • a) (5%)
        • c,d) (5%) passenger speed, freight load.
      • Looking back
        • I was right!
        • Gives me more confidence that this book is trying to be straightforward and not trying to trick me.
        • I could have explicitly thought about whether the locomotive problem would have been recommended if it weren't super object-level.
        • I feel like I could have easily taken this one on in 5 minutes if I had not expected really hard stuff.
        • I got it correct for exactly the right intuition. Nice. I want to check that in the future.
        • I want more books that are like that! I feel like I want to take Bryan Caplan's exam that he gave GPT-3 like that (since GPT-4 still failed, and I feel like I would remember the questions).
        • I like how they didn't spoil me by telling me whether this was an easy or hard one, though.
        • What else do we learn from this? Not sure? I'd be interested in how hard people thought this one was.
        • More reflection after more exercises?
        • I had a hard time figuring out how to feel about gaming. It feels like I could be more efficient. Something is chasing me. On the other hand, I have time!
        • I'd be interested to know how many other...
        • I still feel a bit impulsive
        • It is fun to babble all of my notes in this document
        • At the same time, I feel anxiety about later pruning to decide what to post on the forum.
          • I feel I will either dump everything there, or I will just decide later! Babble!
        • I feel in general I have a bit of a hard time balancing meta and object level. Maybe an ADHD thing? Maybe I just hold separating babble and prune as too much of a doctrine in my head that I don't actually follow?
        • I notice that I love doing these artificial exercises. Fun! I feel way more motivated.
        • I think in general, with ADHD and everything, I might be steering too much away from giving myself the artificial structure I need to really thrive: challenges that actually make me achieve great things!
        • I think I will switch to the next exercise before too much philosophizing.
      • Report:
        • I didn't feel like finding a partner and just wanted to start with 3 problems for now.
    • Cold Bath
      • before
        • Archimedean principle thing (knowing the density of the ice is not actually required, ha!).
        • I know the answer. The density of ice is lower than the density of water (or at least, water at 4 degrees is the densest, I believe).
        • Thing I might be a bit confused about:
          • If it gets hotter than 4 degrees, at some point we could reach a temperature again where the thing spills over. My assumption is that this isn't the intended trick (given this book has been reasonable so far).
        • This seems really unfair for getting confident.
        • Final answer: it will stay exactly brim full. The floating ice displaces exactly its own mass in water (confusing stuff about air and everything else is negligible; see the worked equation after this journal).
      • prediction
        • a) 3%
        • b) 2%
        • c) 95%
      • after
        • right!
        • I don't give myself too much credit, as I had already encountered this.
        • Apparently I might have been exposed to too much of this. Probably lots of stuff out of this book was used by content creators I know.
        • I did end up needing to make my answer more precise, and I also didn't notice that even without knowing the density of ice, you can solve this with the Archimedean principle.
        • I think I also want to prod my internal physics simulation engine more (not only the verbal one).
    • Rare Air
      • Dang! I ended up exactly on the wrong page and spoiled myself! I had thought a tad earlier that I hadn't written down how annoying it is to avoid spoiling yourself on the other exercises!
      • Lesson: stay careful not to go a page too far! (Maybe precompute the page disparity!) 9 pages!
    • The expansion of nothing.
      • Analysis
        • This one feels really interesting!
        • Intuitive model is very confused. I can see arguments for all three.
          • Slightly more intuitive that it would get smaller, though.
          • Seems coincidental for it to just stay the same (but eh... toy physics problems sometimes do this)
        • Intuition-pumps
          • What if we had the rod without the circle?
          • What happens without the circle?
          • What happens if we repeat?
          • What is the mechanism behind the expansion in the first place? I guess we have electrons in higher states. Everything is at a higher energy and pushing away from everything else?
          • Is there an analogy with other stuff that has force like this?
          • What if I imagine concrete points?
          • Making the thing really thin gives me the strong intuition that everything within the same radius is going to push everything else apart, resulting in the hole being bigger! Not what my internal physics engine said!
            • I am also not sure if there is going to be some strain because the balance of material does not work out anymore?
          • Reminds me of the orange thing: no matter if you have an orange or the Earth, increasing your circumference does the same to your radius. That means the shape would just stay the same; everything just gets a bigger radius.
          • How to resolve remaining confusion?
            • I could try to dig deeper into how the stretching apart might work.
            • I could dig a bit deeper into ...
          • I have learned some stuff about mechanics and Lagrangians/Hamiltonians and going from normal to radial coordinates. Is that stuff any help here?
          • I feel if I hit it from the top, it would still give me a different answer.
            • Not sure if it's principled to give 95% when I am still into other models? How confident am I in the meta thing?
        • Noting that I have "SO" much fun doing this!
          • I remember just a few hours earlier feeling like I miss this feeling of just being really enthusiastic! Not sure if that was just me not being very reflective, or if that is really the case and I should attend to this. All the generic advice out there kind of tells me that I should perhaps not stop myself and just continue riding the wave for now?
          • For: follow your interests; there's this guy (Paul Graham) who just for fun did all the problem sets in one go. I find them effortful.
            • Overenthusiasm seems like the only real way people with ADHD operate.
          • Against:
            • People who work on this not just for one evening but over extended periods might actually form long-term differences in their brains.
          • Noting that I feel the pull of explicitly applying the "find considerations for and against" thing so strongly, since I have not made predictions for some time.
      • takeaway
        • I also just noticed that with this exercise I just felt entitled to start.
        • With research on AI safety stuff, I feel like I am waiting for some gatekeeper to tell me that I won't be wasting people's time by working on xyz. Not sure that is an actual problem in general. It specifically doesn't seem super productive compared to just getting excited and started on things, though!
        • I was still using slightly more sketchy analysis this time! I did realize that you could take the ring apart, but then I threw this thought away before thinking about what would actually happen if I took it apart, heated it, and put it back together.
          • In my mind I took things not really apart, but kept them in the same place when heating. I would not have expected to still get the same answer!
          • I did not come up with anything close to the "take a photo and expand the whole photo" analogy.
            • I do feel like I had something close to that!
        • I feel great because deliberation actually got me closer than my initial guess. I have a suspicion, though, that I was in kinda modest mode and took the more interesting intuition; if pressed, I would have gone with the expansion. (Could be hindsight bias.)
        • All in all I really liked this challenge! Very fun!
      • Prediction:
        • a) 80%
        • b) 10%
        • c) 10%
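
The worked equation for Cold Bath referenced above (standard Archimedes, written out for my future self): a floating ice cube of mass m displaces exactly its own mass in water, and melting turns it into exactly that much water, so the level cannot move:

```latex
V_{\text{displaced}} = \frac{m_{\text{ice}}}{\rho_{\text{water}}}
  \quad\text{(the ice floats, so it displaces its own weight in water)}
V_{\text{meltwater}} = \frac{m_{\text{ice}}}{\rho_{\text{water}}} = V_{\text{displaced}}
```

Note that the density of ice never enters, which is why you don't actually need to know it.
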
Comment by Morpheus on My Trial Period as an Independent Alignment Researcher · 2023-08-08T17:19:38.238Z · LW · GW

I'm also planning to participate in the Trojan Detection Challenge, where I can hopefully win some prize money.

You want to collaborate? DMed you.

Comment by Morpheus on Solving for the optimal work-life balance with geometric rationality · 2023-08-06T20:31:21.212Z · LW · GW

If you know that your ability to make the world better is somewhat below average (across all possible worlds you could have found yourself in)

 

What counterfactual are we looking for here? This makes me very confused.

Sort of thinking of this tweet:


For which are you most thankful: (A): existing at all, (B): given (a), existing as human. (C): given (b) your time & place in human history, or (D) given (c) your particular role & associates.

Comment by Morpheus on Improvement on MIRI's Corrigibility · 2023-06-12T22:20:54.626Z · LW · GW

You mean my link to arXiv? The PDF there should be readable. Or do you mean the articles linked in the PDF? They seem to work just fine as well.

Comment by Morpheus on Improvement on MIRI's Corrigibility · 2023-06-09T22:45:32.972Z · LW · GW

I haven't read your post in detail, but 'effective disbelief' sounds similar to Stuart Armstrong's work on indifference methods.

Comment by Morpheus on Ages Survey: Results · 2023-06-06T11:28:25.474Z · LW · GW

Was thinking the same thing when I thought about myself at 11, who was more than capable of staying home alone just fine. I don't really get what is so special about being home alone at night.

Comment by Morpheus on Against Deep Ideas · 2023-03-20T09:28:41.612Z · LW · GW

For what it's worth, my brain thinks of all of these as 'deep interesting ideas', which your post might intuitively have pushed me away from. Just noting that I'd be super careful not to use this idea as a curiosity-killer.

Comment by Morpheus on "You'll Never Persuade People Like That" · 2023-03-12T05:54:21.493Z · LW · GW

And that's what explains the attractiveness of the appeal-to-persuading-third-parties. What "You'll never persuade people like that" really means is, "You are starting to persuade me against my will, and I'm laundering my cognitive dissonance by asserting that you actually need to persuade someone else who isn't here."

 

Big if true. Going to look out for this in future conversations. 

Comment by Morpheus on What is Evidence? · 2023-03-10T17:02:37.354Z · LW · GW

Your example still seems confused to me. Maybe try something simpler, like "Will it rain tomorrow?", say because you want to pack for a trip. There are lots of things you can inquire into to figure out whether this is likely. For example, if it's cloudy now, that probably has some bearing on whether it will rain. You can look up past weather records for your region. More recently, we have detailed models informing forecasts that you can access through the internet to inform you about the weather tomorrow. All of these are evidence.

There are also lots of observations you can make that are, for all you know, uncorrelated with whether it will rain tomorrow, like the outcome of a dice throw. These do not constitute evidence toward your question, or at least not very informative evidence.
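
To make the contrast concrete, here is a toy Bayes update; all the numbers are made-up assumptions for illustration, not real weather statistics:

```python
# Bayes update: how much does observing "cloudy today" move P(rain tomorrow)?
# All numbers are illustrative assumptions, not real weather statistics.
p_rain = 0.3                 # prior: P(rain tomorrow)
p_cloudy_given_rain = 0.8    # P(cloudy today | rain tomorrow)
p_cloudy_given_dry = 0.4     # P(cloudy today | no rain tomorrow)

p_cloudy = p_cloudy_given_rain * p_rain + p_cloudy_given_dry * (1 - p_rain)
posterior = p_cloudy_given_rain * p_rain / p_cloudy
print(f"P(rain | cloudy) = {posterior:.2f}")  # ~0.46, up from the 0.30 prior

# A die throw is uncorrelated with rain: P(roll | rain) == P(roll | dry),
# so the same update leaves the prior unchanged -- it is not evidence.
```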

Comment by Morpheus on A case for capabilities work on AI as net positive · 2023-02-28T12:12:50.381Z · LW · GW

Also, if you are very concerned about yourself, cryonics seems like the more prosocial option. 0.1-10% still seems kinda high for my personal risk preferences.

Comment by Morpheus on A case for capabilities work on AI as net positive · 2023-02-28T11:42:46.077Z · LW · GW

Thus, capabilities work shifts from being net-negative to net positive in expectation.

This feels too obvious to say, but I am not against building AGI ever; it's just that because the stakes are so high and the incentives are aligned all wrong, I think on the margin speeding up is bad. I do see the selfish argument and understand that not everyone would like to sacrifice themselves, their loved ones, or anyone likely to die before AGI is around for the sake of humanity. Also, making AGI happen sooner is on the margin not good for taking over the galaxy, I think. (Somewhere on the EA Forum is a good estimate for this; the basic argument is that the value from space colonization only grows like O(n^2) or O(n^3), so very slowly.)

Comment by Morpheus on A case for capabilities work on AI as net positive · 2023-02-27T23:46:19.656Z · LW · GW

I now think the probabilities of AI risk have steeply declined to only 0.1-10%, and all of that probability mass is plausibly reducible to ridiculously low numbers by going to the stars and speeding up technological progress.

I think this is wrong (how does speeding up reduce risk? What do you want to speed up?). I'd actually be interested in the case for this that I was promised in the title.

Comment by Morpheus on Somewhat against "just update all the way" · 2023-02-24T11:58:01.251Z · LW · GW

Past me is trying to give himself too much credit here. Most of it was epistemic luck/high curiosity that led him to join Søren Elverlin's reading group in 2019, and then I just got exposed to the takes from the community.

Comment by Morpheus on [deleted post] 2023-02-22T15:42:34.127Z

I am in a situation where I have literally zero people in my network who grok what I consider basic arguments (about AI safety), the main motivator for this post.

Yeah, that also seems like the first thing to fix to me. Private messaged you.

Some other next steps:

Comment by Morpheus on Somewhat against "just update all the way" · 2023-02-20T10:36:19.114Z · LW · GW

I don’t think many with monotonically increasing doom pay attention to current or any alignment research when they make their updates

Maybe I am just one of the “not many”. But I think this depends on how closely you track your timelines. Personally, my timelines are uncertain enough that most of my substantial updates have been in the earlier direction (like from a median of ~2050 to a median of 2030-2035). This probably happens to a lot of people who newly enter the field, because they naturally first put more emphasis on surveys like the one you mentioned. I think my biggest ones were:

  • Going from "taking the takes of capabilities researchers at face value, not having my own model, and going with Metaculus" to "having my own views".
  • GPT-2 (…and the log loss still goes down), and then the same with GPT-3. In the beginning, I still had substantial probability mass (30%) on this trend just not continuing.
  • Minerva (apparently getting language models to do math is not that hard), which was basically my last “trip wire” going off.

I do think my P(doom) has slightly decreased from seeing everyone else finally freaking out.

Comment by Morpheus on Human beats SOTA Go AI by learning an adversarial policy · 2023-02-19T10:14:07.196Z · LW · GW

I can believe that it's possible to defeat a Go professional by some extremely weird strategy that causes them to have a seizure or something in that spirit. But, is there a way to do this that another human can learn to use fairly easily? This stretches credulity somewhat.

Or there are just different paths to get AGI that involve different weaknesses and blind spots? Human children also seem exploitable in lots of ways. Couldn't you argue similarly that humans are not generally intelligent, because alpha-beta pruning + some mediocre evaluation function beats them at chess consistently, and they are not even able to learn to beat it?
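
For concreteness, the kind of "simple search" meant here is just textbook alpha-beta; a generic negamax sketch, demoed on a toy game tree (the tree and leaf scores below are hypothetical placeholders, not a chess engine):

```python
# Textbook alpha-beta in negamax form. `children` lists moves from a state;
# `evaluate` scores a position for the side to move (a mediocre heuristic is fine).
def alphabeta(state, depth, children, evaluate,
              alpha=float("-inf"), beta=float("inf")):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    value = float("-inf")
    for child in moves:
        value = max(value, -alphabeta(child, depth - 1, children, evaluate,
                                      -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:  # prune: the opponent would never allow this line
            break
    return value

# Toy tree: states are tuples; leaves are scored for the side to move there.
tree = {(): [(0,), (1,)], (0,): [(0, 0), (0, 1)], (1,): [(1, 0), (1, 1)]}
leaf_scores = {(0, 0): 3, (0, 1): -2, (1, 0): 5, (1, 1): -4}
print(alphabeta((), 2, lambda s: tree.get(s, []),
                lambda s: leaf_scores.get(s, 0)))  # -2
```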

Comment by Morpheus on Stuff I Recommend You Use · 2023-02-08T04:59:12.422Z · LW · GW

These elastic laces that I don’t have to tie could save me hundreds of hours over the course of my life, I guess.

That, or learn to tie your shoelaces fast! I have completely forgotten the slow way by now.

Comment by Morpheus on What fact that you know is true but most people aren't ready to accept it? · 2023-02-04T19:21:14.539Z · LW · GW

Well, this sounds kinda intriguing, but I am not sure whether this is the kind of area where I am currently epistemically helpless. Thankfully, prediction markets exist.

Comment by Morpheus on What fact that you know is true but most people aren't ready to accept it? · 2023-02-04T11:23:35.907Z · LW · GW

I find it kinda suspicious that this species' niche seems to only make sense with Homo sapiens around? Who would that hominid need to run away from if not for Homo sapiens? I don't have great intuitions for the numbers here, but it seems like Homo sapiens invading the Americas would probably not leave enough time to adapt.

Comment by Morpheus on What fact that you know is true but most people aren't ready to accept it? · 2023-02-03T14:07:35.713Z · LW · GW

Any belief worth having entails predictions. The disagreement feature seems to handle these answers well.

Comment by Morpheus on Saying things because they sound good · 2023-01-31T06:21:43.834Z · LW · GW

On the other hand, maybe other people don't do the same. Or maybe they do but hearing the "party line" of "breakfast is the most important meal of the day" over and over again adds up and leads to you eventually believing it.

This reminds me of a revelation I recently had about going to a Steiner school (an alternative "holistic" school system popular in Germany) as a kid. When I talked to my sister, who goes to the same school, I noticed that she thought that the stories her teachers were telling her were literally true. I then tried to make it clear that humans had not started planting wheat because someone put a golden dagger into the ground, and that this was mostly an entertaining story. I would not even have known how to start explaining to her which things her teachers said were just stories and which were actual things worth remembering. Often they intentionally invent weird stories to teach you things like the alphabet. As a child, I had just found this a little frustrating, because it meant we were learning only one letter a day. We also got told a lot of classics (like the Greek Anthology), though, which I do actually appreciate. I am also not sure if we got taught a lot more bullshit than regular schools.

Comment by Morpheus on Running by Default · 2023-01-05T14:37:39.618Z · LW · GW

Running upstairs is also a good example (the trick is getting to the point where you can run up the stairs fast enough that you only feel exhausted when you are already up). Or jumping downstairs. Generally, optimizing for fun is probably the most sustainable way of doing this. I don't enjoy most sports that require endurance, but I like doing things like running in short bursts, so it works pretty well for me. The biggest downside is probably strangers looking at you weird, but I mostly don't notice and certainly don't mind. Walking in a group is different, of course.

Comment by Morpheus on Morpheus's Shortform · 2023-01-05T13:47:59.463Z · LW · GW

Agree that the meaning of the ranges is very ill-defined. I think I am most often drawn to this when I have a few different heuristics that seem applicable. Example of the internals: one is just how likely this feels when I query one of my predictive engines, and another is some very crude "outside view"/eyeballed statistic estimating how well I did on this in the past. Weighing these against each other causes lots of cognitive dissonance for me, so I don't like doing it.

Comment by Morpheus on Morpheus's Shortform · 2023-01-03T16:19:46.612Z · LW · GW

Probably silly

Quantifying uncertainty is great and all, but it also exhausts precious mental energy. I am getting quite fond of giving probability ranges instead of point estimates when I want to communicate my uncertainty quickly. For example: “I'll probably (40-80%) show up to the party tonight.” For some reason, translating natural-language uncertainty words into probability ranges feels more natural (at least to me), so it requires less work for the writer.

If the difference is important, the other person can ask, but it still seems better than just saying 'probably'.

Comment by Morpheus on My first year in AI alignment · 2023-01-02T01:48:27.561Z · LW · GW

Messaged you about the coworking.

Comment by Morpheus on Extreme risk neutrality isn't always wrong · 2023-01-01T02:09:46.256Z · LW · GW

A fractional Kelly Bet gets excellent results for the vast majority of risk preferences. I endorse using 0.2 to 0.5 Kelly to calibrate personal risk-taking in your life.

Why not normal Kelly?
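
For context, a sketch of the standard formula for a bet paying b-to-1 with win probability p, and what fractional Kelly means (numbers illustrative; one common answer to "why not full Kelly?" is in the comments):

```python
# Full Kelly stakes f* = p - (1 - p) / b of the bankroll; "0.2 to 0.5 Kelly"
# means staking that multiple of f*. Fractional Kelly trades a little expected
# growth for a large cut in variance/drawdowns.
def kelly_fraction(p: float, b: float, c: float = 1.0) -> float:
    """Fraction of bankroll to stake; c=1.0 is full Kelly, c=0.3 is 0.3 Kelly."""
    f_star = p - (1 - p) / b
    return max(0.0, c * f_star)

print(kelly_fraction(0.6, 1.0))       # full Kelly on an even-odds 60% bet: ~0.20
print(kelly_fraction(0.6, 1.0, 0.3))  # 0.3 Kelly: ~0.06
```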

Comment by Morpheus on Morpheus's Shortform · 2022-12-15T23:18:43.242Z · LW · GW

Thanks! This looks like the solution I was looking for!

Comment by Morpheus on Morpheus's Shortform · 2022-12-15T14:44:40.434Z · LW · GW

Has someone bothered moving the content on Arbital into a format where it is (more easily) accessible? By now I have figured out that, and where, you can see all the math- and AI-alignment-related content, but I only found it by accident, when Arbital's main page actually managed to load, unlike the other 5 times I clicked on its icon. I had already assumed it was nonexistent, but it's just slow as hell.

Comment by Morpheus on Morpheus's Shortform · 2022-12-14T20:34:05.328Z · LW · GW

I often do read the comments, though I don't really do so intentionally, so I don't have a good estimate of how often I read comments or how many I read (I probably read most comments if I find the topic interesting and I feel like the points in the post weren't obvious before I read it). I almost never scroll through the "Recent discussion" stuff. So I miss a lot of comments if I read a post early on and people then make comments that I never see.

Comment by Morpheus on Morpheus's Shortform · 2022-12-14T19:16:02.184Z · LW · GW

Do you read the comments?

Whoops... that might be a bug to fix. My excuse might be that I read the post before you made the comment, but I am not sure if that is true.

It's likely that it won't tell you The Answer, but if there isn't an answer, you should wish to believe there is not an answer. You should not force yourself to "internalise" an answer you can't personally understand, and that has objections to it.

I think you are definitely pointing out a failure mode I've fallen into a few times recently. But mostly, I am not sure I understood what you mean. I also think my original comment failed to communicate how my views have actually shifted: after fiddling with binary strings a bit and trying to figure out how I would model any causal chains in them, I noticed that the simple way I wanted to do that didn't work, and my naive notion of how causes work broke down. I now think that when you have a system that is fully deterministic, "probabilistic causality" is a property of the maps of agents in such worlds, but mostly I am still very confused. I don't actually have anything that I would call a solution.

Comment by Morpheus on Morpheus's Shortform · 2022-12-14T15:35:35.928Z · LW · GW

Yeah, I know that post. I give Jaynes most of the credit for further corrupting me. I was mostly hoping for good links on how to think about causality, something pointing towards the solution to the problems mentioned in this post. I kinda skimmed "The Book of Why", but did not feel like I really understood the motivation behind do-calculus. I still don't really understand the justification for saying that x, y, z are random variables. It seems like saying "these observations should all be bagged into the same variable X" is already doing huge legwork in terms of what is able to cause what. I kinda wonder whether you could do a thing similar to implications in logic, where you say, "assuming we put these observations all in the same bag, that implies this bag causes this other bag to have a slightly different composition", but if we bag them a bit differently, causation looks different.

Comment by Morpheus on Morpheus's Shortform · 2022-12-14T15:07:30.128Z · LW · GW

Hm… maybe? Do you have a specific example, or links, in mind when you say this? I am still having trouble wrapping my head around this and plan to think more about it.

Comment by Morpheus on Morpheus's Shortform · 2022-12-14T08:49:49.032Z · LW · GW

“Causality is part of the map, not the territory.” I think I had already internalized that this is true for probabilities, but not for “causality”, a concept that I don't have a solid grasp on yet. This should be sort of obvious; it's probably written somewhere in the Sequences. But not realizing this made me very confused when thinking about causality in a deterministic setting after reading the post on finite factored sets in pictures (causality doesn't seem to make sense in a deterministic setting). Thanks to Lucius for making me realize this.

Comment by Morpheus on The No Free Lunch theorem for dummies · 2022-12-06T21:47:00.817Z · LW · GW

Yes, but in worlds where not every sequence in {0,1}* is equally likely (e.g., your possible worlds have ANY structure), there will be predictors that outperform random predictors (like AIXI, for example). (This is not literally true up to maximum pedantry: e.g., there are infinitely many measures on languages where AIXI/Solomonoff induction never works, but for all of those, see my other comment.)
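
A toy illustration of this point (my own example, not from Wolpert's paper): in a "sticky" bit-sequence world, predicting a repeat of the last bit beats chance, which no predictor can do over uniformly random sequences:

```python
import random

# Structured toy world: the next bit repeats the previous one 90% of the time.
def sample_sequence(n, stickiness=0.9):
    seq = [random.randint(0, 1)]
    for _ in range(n - 1):
        seq.append(seq[-1] if random.random() < stickiness else 1 - seq[-1])
    return seq

random.seed(0)
seq = sample_sequence(10_000)
# "Predict a repeat of the last bit" exploits the structure; coin flips cannot.
acc_repeat = sum(seq[i] == seq[i - 1] for i in range(1, len(seq))) / (len(seq) - 1)
acc_random = sum(random.randint(0, 1) == seq[i] for i in range(1, len(seq))) / (len(seq) - 1)
print(f"repeat-predictor accuracy: {acc_repeat:.2f}")  # ~0.90
print(f"random-predictor accuracy: {acc_random:.2f}")  # ~0.50
```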

Comment by Morpheus on The No Free Lunch theorem for dummies · 2022-12-06T21:17:57.260Z · LW · GW

Well... I don't know about you, but even if I believed that the most likely explanation for my observations was that I am a Boltzmann brain, my current beliefs would lead me to effectively act as if I have 0 credence in that belief (since these worlds have no implications for my policy). As long as I put 0 value on this frame, I can actually discard it, even if I have Knightian uncertainty about which is the right prior to use. (Logical uncertainty makes this more complicated than it needs to be, and I think the basic point still stands. I am basically appealing to pragmatism.)

This might not apply to every theorem that has ever been called an NFL theorem. I think that what I wrote is true for the stuff that Wolpert shows in this paper.

Comment by Morpheus on Infra-Bayesian physicalism: a formal theory of naturalized induction · 2022-12-06T10:24:05.128Z · LW · GW

A physicalist hypothesis is a pair (Φ, Θ), where Φ is a finite[4:2] set representing the physical states of the universe and Θ represents a joint belief about computations and physics. [...] Our agent will have a prior over such hypotheses, ranging over different Φ.

I am confused about what the state space Φ is adding to your formalism and how it is supposed to solve the ontology identification problem. Based on what I understood, if I want to use this for inference, I have this prior over hypotheses, and now I can use the bridge transform to project Φ out again to evaluate my loss in different counterfactuals. But when looking at your loss function, it seems like most of the hard work is actually done by your relation that determines which universes are consistent, and its definition does not seem to depend on Φ. How is that different from having a prior that is just over Γ (the computations) and taking the loss, if Φ is projected out anyway and thus not involved?