Comments
Alexey Guzey, walkthrough of his computer setup and productivity workflow.
- Founder of New Science. Popular blogger (e.g., author of Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors).
It seems like Guzey has changed his mind about a bunch of things, including needing all those huge monitors.
Makes me think this video is no longer relevant.
Blogpost
So you're saying that for running, it's better to do a more intense, shorter-duration run (uphill) than a less intense, longer-duration run (flat terrain)?
If I understand that correctly, it would imply that, for cardio, the rule is the reverse of the one for weights: "heavier" for "fewer reps"?
WRT cardio, besides rowing more, I also do more of my running up hills, as it substantially lowers impact and allows higher volume.
What do you mean by 'impact' in this context?
Peterson
Petersen*
After giving it some thought, I do see a lot of real-life situations where you get to such a place.
For instance-
I was recently watching The Vow, the documentary about the NXIVM cult ("nexium").
In very broad strokes, one of the core fucked-up things the leader does is gaslight the members into thinking that pain is good. If you resist him, don't like what he says, etc., there is something deficient in you. After a while, even when he's not in the picture, so it would make sense for everyone to suffer less and cut each other some slack, people punish each other for being deficient or weak.
And now that I wrote it about NXIVM I imagine those dynamics are actually commonplace in everyday society too.
Thanks for pointing to your clarification. I find it a lot clearer than the OP.
Downvoted because there is no "disagree" button.
I strongly disagree with the framing that one could control their emotions (both from the EA quote and from OP). I’m also surprised that most comments don’t go against the post in that regard.
To be specific, I'm pointing to language like "should feel", "rational to feel", etc.
That was very interesting, thank you!
It was useful to me to read your footnote "I am autistic" at the beginning. It gave me better context and I expect I would have just been confused by the post otherwise.
I'd suggest adding it to the main body, or even starting with it as the first sentence.
A general intelligence may also be suppressed by an instinct firing off, as sometimes happens with humans. But that’s a feature of the wider mind the GI is embedded in, not of general intelligence itself.
I actually think you should count that as evidence against your claim that humans are General Intelligences.
Qualitatively speaking, human cognition is universally capable.
How would we know if this wasn't the case? How can we test this claim?
My initial reaction here is to think "We don't know what we don't know".
I think this is evidence that should increase our p(aliens), but not enough evidence to make the claim "either all are lying, or aliens are real".
It's also evidence of something like "they are wrong but honest", "the instruments are buggy", "something about reality we don't get which is not aliens", etc.
Gotcha. Thanks for clarifying!
I am confused. Why does everyone else select the equilibrium temperature? Why would they push it to 100 in the next round? You never explain this.
I understand you may be starting from a theorem that I don't know. To me the obvious course of action would be something like: the temperature is way too high, so I'll lower the temperature. Wouldn't others appreciate that the temperature is dropping and getting closer to their own preference of 30 degrees?
Are you saying that what you're describing makes sense, or are you saying that it's a weird (and meaningless?) consequence of Nash's theorem?
Hey! I appreciate you for making this.
I live alone in Sweden and I've been feeling very stressed about AI over the last few days.
It was a nice video to watch, and I entertained myself listening to you speak Finnish. Thanks!
Ok, thanks for the correction! My definition was wrong but the argument still stands that it should be teachable, or at least testable.
Maybe I don't know what I'm talking about, and we've obviously tried this already.
I've heard Eliezer mention that the ability to understand AI risk is linked to Security Mindset.
Security Mindset is basically: you can think like a hacker, about exploits and ways to abuse the rules, so you can defend against hacks and exploits. You don't stop at a basic "looks safe to me!"
There are a lot of examples of this Security/Hacker Mindset in HPMOR. When Harry learns the exchange rate between magical coins and compares it to the prices he knows for gold, silver, etc., he instantly thinks of a scheme to trade between the magical world and the muggle world to make infinite money.
Eliezer also said that Security Mindset is something you either have or you don't.
I remember thinking: that can't be true!
Are we bottlenecking AI alignment on not having enough people with Eliezer-level Security Mindset, and saying "Oh well, it can't be taught!"?!
(That's where I've had the "people are dropping the ball" feeling. But maybe I just don't know enough.)
Two things seem obvious to me:
- Couldn't one devise a Security Mindset test, and get the high scorers to work on alignment?
(So even if we can't teach it, we get more people who have it. I assume a similar process was used to find Superforecasters.)
- Have we already tried really hard to teach Security Mindset, so that we're sure it can't be taught?
Presumably, Eliezer did try, and concluded it wasn't teachable?
I won't be the one doing this, since I'm unclear on whether I'm Security gifted myself (I think a little, and I think more than I used to, but I'm too low g to play high level games).
I thought the title was a joke or a metaphor.
Yes! Eliezer did on another post.
Here it is if you want it:
https://discord.gg/45fkqBZuTB
Thank you :))
My view of PC's P(Doom) came from (IIRC) Scott Alexander's posts on Christiano vs Yudkowsky, where I remember a Christiano quote saying that although he imagines there will be multiple AIs competing, as opposed to one emerging through a singularity, this would possibly be a worse outcome because it'd be much harder to control. From that, I concluded "Christiano thinks P(doom) > 50%", which I realize is pretty sloppy reasoning.
I will go back to those articles to check whether I misrepresented his views. For now I'll remove his name from the post 👌🏻
Hey! Thanks for sharing the debate with LeCun, I found it very interesting and I’ll do more research on his views.
Thanks for pointing out that even a 1% existential risk is worth worrying about. I imagine it's true even in my moral system, if I just realize that a 1% probability that humanity is wiped out means 70 million expected deaths (1% of 7 billion), plus all the expected humans who would never come to be.
That's the logic.
Emotionally, I find it WAY harder to care about a 1% X-risk. Scope insensitivity. I want to think about where else in my thinking this is causing output errors.
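To spell out the arithmetic, here is a rough expected-value sketch using the 1% and 7-billion figures above (it ignores future people, which would only make the number larger):

$$\mathbb{E}[\text{deaths}] = 0.01 \times 7 \times 10^{9} = 7 \times 10^{7} = 70\ \text{million}$$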
Yes... except that it was only given once.
I was catching up then, so I didn't want Discord access and all the spoilers. And now I can't find the link.
Unrelated, but I don't know where else to ask:
Could somebody here provide me with the link to Mad Investor Chaos's Discord, please?
I'm looking for the discord link! It was linked at some point, but I was catching up then, so I didn't want to see spoilers and didn't click or save the link.
But now I'd like to find it, and so far all my attempts have failed.
TLDR+question:
I appreciate you for writing that article. Humans seem bad at choosing what to work on. Is there a sub-field of AI alignment where a group of researchers focuses solely on finding the most relevant questions to work on and maintaining a list, so that others can pick from it?
• • •
(I don't think this is an original take)
Man, I genuinely think you make me smarter and better at my goals. I love reading from you. I appreciate you for writing this.
I notice how easily I do "let's go right into thinking about this question"/"let's go right into doing this thing" instead of first asking "is this relevant? How do I know that? How does it help my goals better than something else?" and then "drop if not relevant".
In (one of) Eliezer's doom article(s), he says that he needs to literally stand behind people while they work for them to do anything sensible (that's a misquote, and maybe hyperbolic on his end). From that, I judge that people in AI (and me, and probably people generally), even people who are very rigorous when it comes to execution, do NOT have sufficient rigor when it comes to choosing which task to do.
From what EY says, from what you say, I make the judgment that AI researchers (and people in general, and me) choose tasks more in terms of "what has keywords in common with my goals" + "what sounds cool" + "what sounds reasonable" + "what excites me", which is definitely NOT the same as "what optimizes my goals given my resources", and there aren't many excuses for doing that.
Except maybe this one: it's a bias we realized humans have. You didn't know; now that you know, stop doing it.
(I understand that researchers walk a difficult line where they may ALSO need to optimize for "projects that will get funded", which may involve "sounds cool to grant-givers". But the point still holds I believe.)
There may be an additional problem of humility, whereby people assume that already-selected problems must be relevant "because my smart colleagues wouldn't be working on them otherwise", instead of just making it policy that the reasons for doing projects are known and challengeable.
Hi,
Somewhat unrelated, but my question is about dissolution. What is the empirical evidence behind it? Could someone point me to it, preferably something short about brain structures?
Otherwise, it would seem too subject to hindsight bias: you've seen people make a mistake, and you build a brain model that makes that mistake. But it could be another brain model; you just don't know, because your dissolution is unfalsifiable.
Thank you!
Ideal format for beginning rationalists, thank you so much for that. I am reading it every day, going to the full articles when I want more depth. It's also helped me "recruit" new rationalists among my friends. I think this work may have wide and long-lasting effects.
It would be extra nice, though I don't have the skills to do it myself, to have the links go to this LW 2.0. Maybe you have reasons against it that I haven't considered?