LessWrong 2.0 Reader
Hey, I'm new to LessWrong and working on a post. However, at some point the guidelines that pop up at the top of a fresh account's "new post" screen went away, and I cannot find the same language in the New Users Guide or elsewhere on the site.
Does anyone have a link to this? I recall a list of suggestions like "make the post object-level," "treat it as a submission for a university," "do not write a poetic/literary post until you've already gotten a couple object-level posts on your record."
It seems like a minor oversight that certain moderation guidelines and tips become impossible to find once you've already saved a draft or posted a comment.
I am not terribly worried about running headfirst into a moderation filter, since I can barely manage to write a comment that isn't the highest-effort explanation I can come up with - but I do want that specific piece of text for reference, and now it appears to have evaporated into the shadow realm.
Am I just missing a link that would appear if I searched something else?
(Edit: also, sorry if this is the wrong place for this. I would've tried the "intercom" feature, but I am currently on the mobile version of the site, and that feature appears to be entirely missing there - and yes, I checked my settings to make sure it wasn't "hidden".)
fowlertm on fowlertm's Shortform
We recently released an interview with independent scholar John Wentworth:
It mostly centers around two themes: "abstraction" (forming concepts) and "agency" (dealing with goal-directed systems).
Check it out!
At least Eliezer has been extremely clear that he is in favor of a stop not a pause (indeed, that was like the headline of his article "Pausing AI Developments Isn't Enough. We Need to Shut it All Down"), so I am confused why you list him with anything related to "pause".
My guess is that Eliezer and I are both in favor of a pause, but mostly because a pause seems like it would slow down AGI progress, not because the next 6 months in particular will be the most risky period.
raemon on Deep Honesty
So there's "being honest" and "trying to convince people of things you think are true", and I think those are at least somewhat different projects. I feel like the first is more obviously good than the second.
I would first ask "what's my goal?" (and double-check why it's your goal and whether you're being honest with yourself). Like, "I want to be able to say my true thoughts out loud and have an honest, open relationship with my relatives" is different from "I don't want my relatives to believe false things" (the win condition for the former is about you; for the latter, about them). The latter is subtly different from "I want to have presented my best case to them, one they'll actually listen to, but then let them make up their own mind."
I'd also note there are additional soft skills you can gain like:
Thinking about it, I suspect I was not getting what "authenticity and openness" means. Like, it's not "being yourself and letting go", but more "being honest", I guess? Could you give me two or more examples of a person being "authentic and open"?
raemon on some thoughts on LessOnline
Young people (metaphorically or literally) are welcome!
seth-herd on jacquesthibs's Shortform
I think future, more powerful/useful AIs will understand our intentions better IF they are trained to predict language. Text corpora contain rich semantics about human intentions.
I can imagine other AI systems that are trained differently, and I would be more worried about those.
That's what I meant by current AI understanding our intentions possibly better than future AI.
richard_kennaway on Introducing AI Lab Watch
"AI Watch."
raemon on Raemon's Shortform
Are the disagree reacts saying ‘small icons are good for this reason (enough to override other concerns)’ or ‘I didn’t update previously’?
d0themath on We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming"
I will also suggest these questions: 1) What are the things I’m really confident in? 2) What are the things that those I often read or talk to are really confident in? 3) Are there simple arguments, just bringing in little-thought-about domains of effect, which throw that confidence into question?