LessWrong 2.0 Reader
Thinking about it, I suspect I was not getting what "authenticity and openness" means. It's not "being yourself and letting go", but more "being honest", I guess? Could you give me two or more examples of a person being "authentic and open"?
raemon on some thoughts on LessOnline: Young people (metaphorically or literally) are welcome!
seth-herd on jacquesthibs's Shortform: I think future more powerful/useful AIs will understand our intentions better IF they are trained to predict language. Text corpora contain rich semantics about human intentions.
I can imagine other AI systems that are trained differently, and I would be more worried about those.
That's what I meant by current AI understanding our intentions possibly better than future AI.
richard_kennaway on Introducing AI Lab Watch: "AI Watch."
raemon on Raemon's Shortform: Are the disagree reacts saying 'small icons are good for this reason (enough to override other concerns)' or 'I didn't update previously'?
d0themath on We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming": I will also suggest the questions: 1) What are the things I'm really confident in? 2) What are the things those I often read or talk to are really confident in? 3) Are there simple arguments, which just involve bringing in little-thought-about domains of effect, that throw that confidence into question?
jesse-hoogland on Examples of Highly Counterfactual Discoveries?: Anecdotally (I couldn't find confirmation after a few minutes of searching), I remember hearing a claim that Darwin was particularly ahead of the curve with sexual selection and mate choice, and that without Darwin it might have taken decades for biologists to come to the same realizations.
review-bot on Cohabitive Games so Far: The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
flewrint-ophiuni on Cooperation is optimal, with weaker agents too - tldr: Thank you for the references! I'm reading your writings; they're interesting.
I posted the super-cooperation argument expecting that LessWrong would likely not be receptive, but I'm not sure which community would engage with all this and find it pertinent at this stage.
More concrete and empirical work seems needed.
I really like the idea of making songs to powerfully remind yourself about things. TODO.
Step 1: Set an alarm for the morning.
Step 2: Set the alarm tone to this song.
Step 3: Make the alarm snooze for 30 minutes after the song has played.
Step 4: Make the alarm only dismissable by solving a puzzle.
Step 5: Only ever dismiss the alarm after you have already left the house for the walk.
Step 6: Always have an umbrella for when it is rainy, and have an alternative route without muddy roads.
I currently (until I get around to making a better system...) have an AI voice say reminders to myself based on calendar events I've set up to repeat every day (or any period I've defined). The event description is JSON, and if the "prompt" field (e.g. "prompt": "Time to take a walk!") is nonempty, the voice says what's in the prompt.
I don't have any routines that are too forceful (like "only dismissable by solving a puzzle"), because I want to minimize whip and maximize carrot. If I can only do what's good because I force myself to do it, it's much less effective than if I just *want* to do what's good all the time.
...But whip can often be effective, so I don't recommend never using it. I'm just especially weak to it, due to not having much social backup-motivation and a heavy tendency to fall into deep depressive equilibria.