LessWrong 2.0 Reader
That sounds like a cross between learned helplessness and madman theory.
The madman theory angle is "If I don't respond well to threats of negative outcomes, people (including myself) have no reason to threaten me". The learned helplessness angle is "I've never been able to get good sets of tasks and threats, and trying to figure something out usually leads to more punishment, so why put in any effort?"
Combine the two and you get "Tasks with risks of negative outcomes? Ugh, no."
With learned helplessness, the standard mechanism for (re)learning agency is being guided through a productive sequence by someone who can ensure the negative outcomes don't happen, getting more and more control over the sequence each time until you can do it on your own, then adapting it to more and more environments.
Avoiding tasks with possible negative outcomes isn't really feasible, so getting hands-on help with handling the threat of negative consequences seems useful, probably from a mental coach or psychologist.
The app doesn't help people who struggle with setting reasonable tasks with reasonable rewards and punishments. Akrasia is an umbrella term for "something somewhere in the chain to actually getting to do things is stopping the process", so it makes sense that one person's "solution" to akrasia isn't going to work for a lot of people.
I think it's healthy to see these kinds of posts as procedural inspiration. As a reader it's not about finding something that works for you, it's about analysing the technique someone used to iterate on their first hint of a good idea until it became something that thoroughly helped them.
johannes-c-mayer on Examples of Highly Counterfactual Discoveries?
I am also not sure how useful it is, but I would be very careful about saying that R programmers not using it is strong evidence that it is not that useful. That was basically the point I wanted to make with the original comment. Homoiconicity might be hard to learn and use compared to learning a for loop in Python, and that might be why people don't learn it: they don't understand how it could be useful. Probably most R users have not even heard of homoiconicity, and if they had, they would ask "Well, I don't know how this is useful." But again, that does not mean it is not useful.
Probably many people at least vaguely know the concept of a pure function. But probably most don't actually use it in situations where it would be advantageous to use pure functions because they can't identify these situations.
Probably they don't even know the basic arguments for why one would care about making functions pure, because they've never heard them. With your line of argument, we would now conclude that pure functions are clearly not very useful in practice, which I think is, at minimum, an overstatement. Clearly, they can be useful. My current model says they are actually very useful.
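A minimal sketch (in Python, not from the original comments) of the contrast being described; the function names here are invented purely for illustration:

```python
# Impure: reads and mutates shared external state, so the same call
# can return different results and is hard to test or parallelize.
totals = []

def add_to_totals(x):
    totals.append(x)        # side effect: mutates shared state
    return sum(totals)      # result depends on hidden history

# Pure: output depends only on the inputs, and nothing is mutated.
def running_total(previous_totals, x):
    new_totals = previous_totals + [x]   # returns new data instead
    return new_totals, sum(new_totals)

state, total = running_total([], 3)
state, total = running_total(state, 4)
print(total)  # 7
```

The pure version can be called in any order, retried, or memoized safely, which is the kind of advantage the comment suggests people fail to recognize in practice.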
[Edit:] Also, R is not homoiconic, lol. At least not in a strong sense like Lisp, according to this guy on GitHub. I would also guess that's correct from remembering how R looks, and from looking at a few code samples now. In Lisp your program is a bunch of lists; in R it is not. What is the data structure instance that is equivalent to this expression: %sumx2y2% <- function(e1, e2) {e1 ^ 2 + e2 ^ 2}
?
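For contrast, here is a rough sketch of what "your program is a bunch of lists" means, using Python lists as a stand-in for Lisp lists; the tiny `evaluate` helper and `OPS` table are invented for illustration:

```python
# Lisp-style: the expression e1^2 + e2^2 is literally the nested list
# (+ (expt e1 2) (expt e2 2)); here modeled with Python lists.
expr = ["+", ["expt", "e1", 2], ["expt", "e2", 2]]

OPS = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    "expt": lambda a, b: a ** b,
}

def evaluate(node, env):
    """Tiny evaluator: strings are variables, lists apply their head as an operator."""
    if isinstance(node, str):
        return env[node]
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*(evaluate(a, env) for a in args))
    return node  # a literal number

print(evaluate(expr, {"e1": 3, "e2": 4}))  # 25

# Because the program is just a list, ordinary list surgery rewrites code:
expr[0] = "*"   # now (* (expt e1 2) (expt e2 2))
print(evaluate(expr, {"e1": 3, "e2": 4}))  # 144
```

In a strongly homoiconic language the rewrite step at the end is the everyday mechanism behind macros, which is the capability being debated here.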
Universal guide to magic via anthropics:
Either a strong self-sampling assumption is false
Of course it is false. What are the reasons to even suspect that it might be true?
and-or path-based identity is true.
Note that path-dependent identity also has its own paradoxes: two copies can have different "weights" depending on how they were created, while having the same measure. For example, if two copies of me are created while I sleep, and one of those copies is then copied again, there will be 3 copies in the morning in the same world; but if we calculate the chances of being each of them based on paths, they will be ½, ¼ and ¼.
This actually sounds about right. What's paradoxical here?
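For what it's worth, the path weights in the copying example above can be reproduced in a few lines (a sketch; the `split` helper is invented for illustration):

```python
# Each copy event divides the parent's path weight equally among children.
def split(weight, n):
    return [weight / n] * n

a, b = split(1.0, 2)    # overnight: two copies, 1/2 each
b1, b2 = split(b, 2)    # one copy is copied again: 1/4 each
path_weights = [a, b1, b2]
print(path_weights)     # [0.5, 0.25, 0.25]

# Counting the three morning copies uniformly instead gives 1/3 each,
# which is the tension between path-based and copy-counting measures.
uniform = [1 / 3] * 3
```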
arthur-conmy on Refusal in LLMs is mediated by a single direction
I think this discussion is sad, since it seems both sides assume bad faith from the other side. On one hand, I think Dan H and Andy Zou have improved the post by suggesting writing about related work, and signal-boosting the bypassing refusal result, so should be acknowledged in the post (IMO) rather than downvoted for some reason. I think that credit assignment was originally done poorly here (see e.g. "Citing others" from this Chris Olah blog post), but the authors resolved this when pushed.
But on the other hand, "Section 6.2 of the RepE paper shows exactly this" and accusations of plagiarism seem wrong @Dan H. Changing experimental setups and scaling them to larger models is valuable original work.
(Disclosure: I know all authors of the post, but wasn't involved in this project)
(ETA: I added the word "bypassing". Typo.)
We'll send out location details to anyone who buys a ticket (and also feel free to ping us and we'll tell you).
I've had some experience with people trying to disrupt events, and the trivial inconvenience of having to figure out the address makes a non-negligible difference in whether people do that sort of thing.
ann-brown on Thoughts on seed oil
You don't actually have to make any adjustments to the downsides for the beneficial statistical stories to be true. One point I was getting at, specifically, is that it is also better than being dead, or suffering in specific alternative ways. There can be real and clear downsides to carrying around a significant amount of weight, especially depending on what that weight is, and that can still show up in the data in the first place for good reasons.
I'll invoke the 'plane that comes back riddled with bullet holes, so you armor where the bullet holes are' meme. The plane that came back still came back; we armored the worst places, and now its other struggles are visible. It's not a negative trend that we have more planes with damage now than we did when they didn't come back at all.
I do think it's relevant that the U.S. once struggled with nutritional deficiencies around corn, answered with enriched and fortified products that helped address those, and likely still retains some of the root issues (that our food indeed isn't as nutritious as it should be, outside those enrichments). That the Great Depression happened at all; and the Dust Bowl. There are questions here not just of personal health, but of history; and when I look at some of the counterfactuals, given available resources, I see general trade-offs that can't be ignored when looking at - specifically - the statistics.
We would still have to explain the downsides of obesity, and not just in the long-term health effects like heart disease or diabetes risks, but in the everyday life of having to carry around so much extra weight.
Despite that, I'd still agree that being overweight is better than being underweight.
heramb on Heramb's Shortform
Everyone writing policy papers or doing technical work seems to keep generative AI at the back of their mind when framing their work or impact.
This narrow focus on gen AI may well be net-negative for us: we unknowingly or unintentionally ignore ripple effects of the gen AI boom in other fields (like robotics companies getting more funding, leading to more capabilities, which leads to new types of risks).
And guess who benefits if we do end up getting good evals/standards in place for gen AI? It seems to me companies/investors are clear winners because we have to go back to the drawing board and now advocate for the same kind of stuff for robotics or a different kind of AI use-case/type all while the development/capability cycles keep maturing.
We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.
snewman on We are headed into an extreme compute overhang
All of this is plausible, but I'd encourage you to go through the exercise of working out these ideas in more detail. It'd be interesting reading, and you might encounter some surprises / discover some things along the way.
Note, for example, that the AGIs would be unlikely to focus on AI research and self-improvement if there were more economically valuable things for them to be doing. And if (very plausibly!) there were not more economically valuable things for them to be doing, why wouldn't a big chunk of the 8 billion humans already have been working on AI research (such that an additional 1.6 million agents working on it might not be an immediate game changer)? There might be good arguments that the AGIs would make an important difference, but I think it's worth spelling them out.
snewman on We are headed into an extreme compute overhang
Can you elaborate? This might be true, but I don't think it's self-evidently obvious.
In fact it could in some ways be a disadvantage; as Cole Wyeth notes in a separate top-level comment, "There are probably substantial gains from diversity among humans". 1.6 million identical twins might all share certain weaknesses or blind spots.