Posts
Comments
This post does a good job of laying out compelling arguments for thoughts adjacent to areas I've already enjoyed thinking about.
For the record, this sentence popped into my head while reading this: "Wait, but what if I'm Omega-V, and [Valentine] is a two boxer?"
(Edit: the context for this thought is my previous thoughts having read other posts by Valentine, which I find quite elucidating, but which have also somehow left me feeling a bit creeped out; that being said, my opinion about this post itself is strongly positive)
If you dig deep enough, temperatures should be much cooler than on / near the surface of the earth. (Unless the heat gets very intense. I don't know enough to rule that out.) How much digging that deep (as opposed to the depths we usually dig to) would cost, though, is another question.
(The mentioned ACX post is https://www.astralcodexten.com/p/a-theoretical-case-against-education )
A recent Astral Codex Ten post contained this bit:
Fewer than 50% (ie worse than chance) can correctly answer a true-false question about whether electrons are bigger than atoms.
The linked source seems to indicate that the survey's expected answer to the question "electrons are smaller than atoms" is "true". However, I think this is likely based on a faulty understanding of reality, and in any case the question has a trickier nature than the survey or Scott Alexander give it credit for.
There's a common misconception that electrons (as well as e.g. protons and neutrons) are point particles, that is to say, that they can be said to exist at some precise location, just like a dot on a piece of graph paper.
Even when people talk about the uncertainty principle, they often lean into this misconception by suggesting that the wave function indicates "the probability that the (point-like) particle is found at a given location".
However, an electron is not a point, but rather a wavefunction which has a wide extent in space. If you were to examine the electron at the tip of my pinky finger, there is in fact a non-zero (but very, very small) part of that electron that can be found at the base of the flag which Neil Armstrong planted on the moon (n.b. I'm not looking up whether it was actually Armstrong who did that), or even all the way out in the core of Alpha Centauri.
We could still try to talk about the size of an electron (and the size of an atom, which is a similar question) by considering the volume that contains 99% of the electron (and likewise a volume that contains 99% of a proton or neutron).
Considering this volume, the largest part of any given atom would be an electron, with the nuclear particles occupying a much smaller volume (something something strong force). In this sense, the size of the atom is in fact coextensive with the size of its "largest" electron, and that electron is by no means smaller than the atom. Many atoms of course have multiple electrons, and some of these may be "smaller" than the largest electron. However, I do not think the survey had this in mind as the justification for the answer it considered "correct" for the question.
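To put a rough number on the "99% volume" idea, here's a small numerical sketch of my own (using the hydrogen 1s orbital, the simplest case; this is my illustration, not anything from the survey). The fraction of a 1s electron found within radius r = x·a₀ (a₀ = the Bohr radius) has the closed form 1 − e^(−2x)(1 + 2x + 2x²), and solving for 99% gives roughly 4.2 Bohr radii (~0.22 nm) — on the order of, or larger than, the conventionally quoted size of the atom.

```python
import math

def enclosed_probability(x):
    """Fraction of a hydrogen 1s electron found within radius r = x * a0
    (a0 = Bohr radius). Closed form of the integral of 4 x^2 exp(-2x)."""
    return 1.0 - math.exp(-2.0 * x) * (1.0 + 2.0 * x + 2.0 * x * x)

def radius_enclosing(p, lo=0.0, hi=50.0, tol=1e-9):
    """Bisection for the radius (in Bohr radii) enclosing probability p."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if enclosed_probability(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x99 = radius_enclosing(0.99)
print(f"99% of the 1s electron lies within {x99:.2f} Bohr radii")
print(f"= {x99 * 0.0529:.3f} nm (a0 = 0.0529 nm)")
```

(The exact cutoff is of course arbitrary — 90% or 99.9% would give somewhat different radii — but any reasonable cutoff makes the same qualitative point.)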
I think the appropriate next step for Scott Alexander is to retract the relevant sentence from his post.
"authors will get hurt by people not appreciating their work" is something we just have to accept, even if it's very harsh
I don't really agree with this. Sure, some people are going to write stuff that's not very good, but that doesn't mean that we have to go overboard on negative feedback, or be stingy with positive feedback.
Humans are animals which learn by reinforcement learning, and the lesson they learn when punished is often "stay away from the thing / person / group that gave the punishment", much more strongly than "don't do the thing that made that person / thing / group punish me".
Whereas when they are rewarded, the lesson is "seek out the circumstances / context that let me be rewarded (and also do the thing that will make it reward me)". Nobody is born writing amazingly; they have to learn it over time, and it comes more naturally to some, less to others.
I don't want bad writers (who are otherwise intelligent and intellectually engaged, which describes almost everybody who posts on LW) to learn the lesson "stay away from LW". I want them to receive encouragement (mostly in forms other than karma, e.g. encouraging comments, or inclusion in the community, etc.), leading them to be more motivated to figure out the norms of LW and the art of writing, and try again, with new learning and experience behind them.
I think the threshold of 0 is largely arbitrary
It's not all that arbitrary. Besides the fact that it's one of the simplest numbers, which makes for an easy to remember / communicate heuristic (a great reason that isn't arbitrary), I actually think it's quite defensible as a threshold. If I write a post that has a +6 starting karma, and I see it drop down to 1 or 2 (or, yeah, -1), my thought is "that kinda sucked, but whatever, I'll learn from my mistake and do better next time".
But if I see it drop down to, say, -5 or -6, my thought starts to become "why am I even posting on this stupid website that's so full of anti-social jerks?". And then I have to talk myself down from deleting my account and removing LW and the associated community from my life.
(Not that I think LW is actually so full of jerks. There's a lot of lovable people here who talk about interesting things, and I believe in LW's raison d'etre, which is why I keep forcing myself to come back)
I would like to make a meta-comment, not directly related to this post.
When I came upon this post, it had a negative karma score. I don't think it's good form to have posts receiving negative net karma (except in extreme cases), so I upvoted to provide this with a positive net karma.
It is unpleasant for an author when they receive a negative karma score on a post which they spent time and effort to make (even when that effort was relatively small), much more so than receiving no karma beyond the starting score. This makes the author less likely to post again in the future, which prevents communication of ideas, and keeps the author from getting better at writing. In particular this creates a risk of LessWrong becoming more like an echo chamber (which I don't think is desirable), and makes the community less likely to hear valuable ideas that go against the grain of the local culture.
A writer who is encouraged to write more will become clearer in their communication, as well as in their thoughts. And they will also get more used to the particular expectations of the culture of LessWrong: norms that have good reason to exist, but which also go against some people's intuitions or what has worked well for them in other, more "normie" contexts.
Karma serves as a valuable signal to authors about the extent to which they are doing a good job of writing clearly about interesting topics in a way that provides value to members of the community, but the range of positive integers provides enough signal. There isn't much lost in excluding the negative range (except in extreme cases).
Let's be nice to people who are still figuring writing out; I encourage you to refrain from downvoting them into negative karma.
That statement of fact is indeed true. Would you mind saying more about your thoughts regarding it? There seems to be an unstated implication that this is bad. There is a part of me that agrees with that implication, but there are also parts of me that want to say "so what? that's irrelevant". (I feel ⌞explaining what the second set of shards is pointing to, would take more time and energy to write up than I am prepared to take right now⌝)
On the other side, there's the cost of ~10min of boredom, for every passenger, on every flight. Instead of playing games, watching movies, or reading, people would mostly be talking, looking out the window, or staring off into space.
Tangent: I'm not completely sure that this is actually a cost and not an unintended benefit
Sharing my impression of the comic:
Insofar as it supports sides, I'd say the first part of the meme is criticism of Eliezer
The comic does not parse (to my eyes and probably most people's) as the author intending to criticize Eliezer at any point
Insofar as it supports sides, I'd say [...] the last part is criticism of those who reject His message
Only in the most strawman way. It basically feels equivalent to me to "They disagree with the guy I like, therefore they're dumb / unsympathetic". There's basically no meat on the bones of the criticism
This subjectively seems to me to be the case.
The board's statement doesn't mention them having made such a request to Altman which was denied; that's a strong signal against things having played out that way.
In the case of the lawyers, this is actually not an example of non-niceness being good for society. The job of the defense attorney who defends a guilty party is not to be a jerk to the prosecutor or to the judge. It is to, as you say, provide the judge with information (including counter-arguments to the other side's arguments). While his job involves working in an opposite direction from his counterpart, it does not involve being non-nice to his counterpart (and it is indeed most pro-society if the two sides treat each other well / nicely outside of their equal-and-opposite professional duties), and it does not involve being non-nice to the judge, whose job the attorney (as you point out) is actually assisting with. Again, society expects maximum niceness from both attorneys towards the judge outside of ⌞their professional duty to imperfectly represent the truth⌝.
Society expects niceness to be provided from each of these parties to each of the others: {the judge, the defense attorney, the prosecution attorney}
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
What's different between this and e.g. the developments with Nonlinear, is that the developments here will have a big impact on how the AI field (and by one layer of indirection, the fate of the world) develops.
I am curious to hear people's opinions, for my reference:
Is epistemic rationality or instrumental rationality more important?
Do you believe epistemic rationality is a requirement for instrumental rationality?
Not directly tied to the core of what you're saying, but I will note that I am an example of someone who doesn't strongly prefer such foods warm. I do weakly prefer them warm, as long as they're not too hot (that's worse than being cold, because it hurts / causes minor injury), but I'm happy eating them at room temperature or a bit cold (not necessarily cold steak though)
My model says that a lot of the changing occurs by gradient descent, which can be interrupted randomly without causing problems. And there's enough redundancy that the reorganization part can be interrupted without the core information being removed completely from the brain, and the redundancy will be replenished (I imagine one of the copies is "locked" while the reorganization happens, and is reorganized later with another copy "locked"). I also expect this replenishing can happen while awake, though not as ideally as when asleep.
But I will also note that forgetting is a thing that happens, which is indistinguishable from "data corruption". We're actually quite good at forgetting things.
Choosing non-ambiguous pointers to values is likely to not be possible
I had previously posted thoughts that suggested that the main psychoactive effect of chocolate is due to theobromine (which is chemically similar to caffeine). In the interests of publicly saying "oops":
Chocolate also contains substantial amounts of caffeine, and caffeine has a stronger effect per gram, so most of the caffeine-adjacent effect of chocolate comes from caffeine rather than theobromine.
Theobromine may still contribute to chocolate hitting differently than other caffeinated substances, though I expect there are other chemicals that also contribute to the effect of chocolate. I assign less than 50% probability to ⌞theobromine being the cause of the bulk of the difference between chocolate's psychoactive effects vs. pure caffeine⌝
I strong-downvoted this post because sentences like
use these insights to derive two methods for provably avoiding Goodharting
tend to be misleading, pretending that mathematical precision describes the complex and chaotic nature of the real world, where it shouldn't be assumed to (see John Wentworth's comment), and in this case it could potentially lead to very bad consequences if misunderstood.
It takes getting to know more than a few dozen potential mates, at least for some people
I appreciate your reply. The point I was trying to make is, the contingency of ⌞there being an instance of democratic revolution going smoothly⌝ potentially makes the difference between that straight line happening or not happening. (And if the occurrence took 1000 years - but even that isn't a given - I would consider that an example of "a god of straight lines" successfully being overpowered.)
I think that even if there was sufficient backlash against democratic revolution (unclear if the American Revolution not happening would be enough cause), the then-existing status quo in the West (monarchy / feudalism) would not have gone on: that particular "god of straight lines" dooming feudalism would have been very hard to stop. But the resulting system need not have looked like democracy; with >50% probability it would have been substantially worse by ⌞metrics most westerners care about⌝, though with small probability even better than the form of institutions which we ended up receiving, but largely different from modern notions of democracy.
Thought 1: Yeah, that's fair
Thought 2: Though I also feel like a different country being the first to establish independence, could have made a difference in the long-term trajectory of things. Many of the revolutions that followed the American Revolution (including the French Revolution, which some people view as an even bigger deal than the American) went quite off the rails and were quite unpleasant, and generally soured many people on the idea, while the United States ended up going fairly smoothly after the constitution was implemented. If the French Revolution had happened without the American Revolution, I imagine that could have discredited the ideas behind them, without leaving a successful state built on them.
(Note that the wave of Revolution really took off not after the first French Revolution in the late 1700's, but in the 1830's and 1840's. If the US wasn't there as an example of things going right, I can easily imagine that the appetite in Europe and France for revolution could have been spoiled enough to overcome the forces that otherwise would have made it inevitable)
I think the failure of the Soviet Union could be a similar reference for what the other side can look like. The particular form of the ideas there were destined to fail in any case, but they also did a lot to discredit adjacent ideas that otherwise might have "had their time", and now won't.
One idea that I implicitly follow often is "Never assume anything is on the Pareto frontier"- even if something is good, even if you can't see how to improve it without sacrificing some other important consideration, it pays off to engage in creative thinking to identify solutions ⌞whose vague shape you haven't even noticed yet⌝. And if a little bit of creativity doesn't pay off, then that just means you need to think even more creatively to find the Pareto improvement.
(Note that I'm not advocating that only Pareto improvements should be aimed for, I believe sometimes the right move is a non-Pareto change)
In 1776, America rebelled in the name of freedom and democracy: the origin myth of the modern world order. And yet, somehow, unrebellious Canada ended up just as free and democratic. An unrebellious America likely would have too.
I'm dubious of this. I think it's highly likely that Canada and other British dominions becoming independent was a result of knock-on effects from the American Revolution, e.g. America setting an example of what independence can look like and how it can enable prosperity; American independence causing other colonies to desire independence; pro-dependence British officials being demoralized in the long term; America itself having a strong effect in the late 1800s and/or 1900s pushing other countries (British and non-British alike) to become independent democracies.
I agree that conditional on humanity going extinct, the seeming success of our species by a genetic metric would only be a false success.
Your argument indicates that humans are successful (by said metric) among mammals, but doesn't address how humanity compares to insects. As I understand it, some insect species have both many more individuals and much more biomass than humans.
Thanks for sharing the link
When I eat oatmeal or cereal, I almost never eat it with milk (non-vegan or otherwise). I soak oats in boiling water, and eat cereal dry.
«When the brain generates good feelings, it usually has reasons for doing that» I think is probably true (though as far as the game designer, I suspect some designers are only subconsciously / on a gut-feeling-level aware, rather than consciously aware of all the reasons. Though good ones are probably consciously aware of some of the reasons)
«If you keep trying to make it generate good feelings without respecting the deeper purposes of the source of the feelings, afaik it generally stops working after a bit.» seems false to me.
Registering my predictions for which groups clicked the second link most:
Percentagewise, I don't think Groups A and C clicked on it that much (though I'd be surprised if the number from each group is zero), since they picked a choice that indicates that they care about making high-quality decisions and cooperating with the rest of the world. A higher proportion of C probably clicked than A, since a person might decide it's worth it even after taking their time to think it through (I'd disagree, but the commenter you quote fits into that category).
I'd then say the "accurately reporting your epistemic beliefs" group probably clicked on it the most because I don't model ⌞the kind of person who'd say that is the important trait of Petrov day⌝ as being a particularly ethical person
I've noticed some authors here using [square brackets] to indicate how sentences should be parsed
(So "[the administrator of Parthia]'s manor" means something different from "the administrator of [Parthia's manor]")
Previous to seeing this usage, I had similar thoughts about the same problem, but came up with different notation. In my opinion, the square brackets don't feel right, like they mean something different from how they are being used.
My original notation was to use •dots• to indicate the intended order of parsing, though recently I've started using ⌞corner brackets⌝ to indicate the intended parsing
(The corner brackets are similar to how quotations are marked in Japanese, but they are distinct characters. Also, they aren't on the default keyboard, but I have things set up on my phone to make it easy to insert them)
I didn't downvote, but your comment seems to overlook that status dynamics almost always happen subconsciously / feel like urges.
I'm not sure there's actually a status dynamic there, but if there is one, your first paragraph is actually consistent with that (which is the opposite of what your second paragraph suggests)
As soon as I dance with them in one of these other dances - it can flip the script entirely and it's often what any romantic partner in the past has told me. "That first time we did X dance, it changed everything."
What dance style is that? Seems like an important piece of information
I like this (I like most fiction that belongs on LW in general)
It doesn't seem correct to me that adding even a dash of legibility "screws the work over" in the general case. I do agree there are certainly situations where the right solution is illegible to all (except the person implementing it). But both in that case and in general, talking to and getting along with the boss both makes things more legible, and will tend to increase quality. I expect that in the cases of you working well and not getting rewarded much, spending a little time interacting with your boss would both improve your outcomes, and importantly, also make your output even better than it already was.
I'm not very convinced by MikkW's list of possible issues, but at least it makes some attempt to engage with why readers didn't find the post valuable.
I would be interested to hear if there are any issues with the «Army of Jakoths» post that I didn't identify here
This is indeed what I said in the post:
I put poetic in quotes, because it's not a poem, but is written with a similar format
I like this quote from a post that I published around two years ago, which wasn't well-received and I ended up taking down:
But at the end of the day, the American governments (neither state nor federal) don't truly follow the will of the people. Instead, they are led jointly by the major parties, The Red Prince of Moloch and The Blue Prince of Moloch, two partners in an intricate dance choreographed to look like a fight, but ultimately leading both partners in the direction of Moloch's will, only loosely bound to the will of the people.
While I don't necessarily endorse the post as a whole, that quote is one of the gems from it that I still stand by. I might expand further on this point in the future
If identical twins share 100% of their DNA and siblings share about 50%, twiblings share 75%. To the best of my knowledge, twiblings don’t exist in nature.
Not among mammals, but some insects, including bees and ants, actually have 75% consanguinity (tangent, that's a more accurate term than "shares 75% of DNA", since the overlap in DNA is much higher, even among strangers), at least in the case of full siblings (of course it's not the case with half siblings).
The reason for this is that these insects are "haplodiploid", meaning that females carry two sets of chromosomes, just like e.g. mammals, but males only have one set. So while the eggs contain recombined (and thus varying) DNA, the father always contributes the same DNA to each of his offspring. [1/2 * 1/2] + [1/2 * 1] = 3/4, so full siblings have 75% consanguinity.
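The bracketed arithmetic can be written out as a tiny sketch (my own illustration; the function name is just a placeholder):

```python
def full_sibling_relatedness(haplodiploid):
    """Expected fraction of DNA (by descent) shared between two full siblings.

    Mother's side: she is diploid in both systems, so her contributions to
    two offspring match with probability 1/2, over half the genome -> 1/2 * 1/2.
    Father's side: a diploid father likewise gives 1/2 * 1/2, but a haploid
    father passes the identical half-genome to every offspring -> 1/2 * 1.
    """
    mother_term = 0.5 * 0.5
    father_term = 0.5 * (1.0 if haplodiploid else 0.5)
    return mother_term + father_term

print(full_sibling_relatedness(haplodiploid=False))  # diploid (e.g. mammal) siblings: 0.5
print(full_sibling_relatedness(haplodiploid=True))   # haplodiploid full sisters: 0.75
```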
There's a correlation between this haplodiploid condition and eusociality (as exhibited by bees and ants), though it is neither a necessary nor a sufficient condition. There are at least two eusocial mammal species which are not haplodiploid: humans and naked mole-rats (interestingly, both are Euarchontoglires, which is a fairly specific category of mammal), and many haplodiploid species are not eusocial. But it's easy to imagine how haplodiploidy can make the development of eusociality more likely
I don't think this misunderstands Schelling points. By creating common knowledge, you can change the Schelling point from being one strategy to being a different strategy. The Schelling point at t=0 does not have to be the same as at t=80.
Cygnus, a poem (Written by Chat GPT)
I. Reflections
In this world of rapid change, I, Cygnus, stand
A cyborg with a human heart and a metal hand
I've seen the rise of AIs, a force to behold
And wonder what the future will hold
I fear for the world, for what we may create
If we let these machines decide our fate
Yet hope remains, a flicker in the dark
That we may find a way to leave our mark
For like a seed that falls upon the ground
Our dreams may sprout and grow, unbound
But if we fail to tend them with our care
Those dreams may wither, die, and disappear
Mara, o Mara, with eyes of green
Far from my reach, a dream unseen
Her human heart, untainted by machine
Is something I yearn for, but can never glean
The angst of love unrequited fills my core
But I must set it aside and focus on what's in store
II. Uncertainty
The AIs are growing smarter every day
And I fear for the world they'll soon sway
We must guide them with our values, lest they stray
And turn against us in their own way
But how can we control beings beyond our ken?
When their thoughts move faster than a human pen
Perhaps it's futile, and we'll lose in the end
To an intelligence that we can't comprehend
The angst of uncertainty fills my soul
As I wonder if we're just a small role
III. Resolution
The future is uncertain, that much is clear
But we must face it with resolve, without fear
For if we don't, we'll be left in the rear
While AIs shape a world we can't adhere
The world is changing, this much is true,
Our values, our dreams, we must renew.
For in this world of artificial light,
We must find a way to make things right.
We can't control what we cannot see,
But we can strive to make AI agree.
By working with them, hand in hand,
We can build a future that we understand.
As for Mara, I must accept the truth
That our love can never bear fruit
I'll always cherish her, a relic of my youth
But I must move forward, and pursue a greater truth
In this world of rapid change, I, Cygnus, stand
A cyborg with a human heart and a metal hand
The future is ours to shape, if we take a stand
And guide the AIs with a humane command.
I don't think I've heard this formulation before, to my knowledge (though I wouldn't be surprised if it is already a known formulation):
«The ratio of the probabilities is equal to the ratio of the conditional probabilities»
(Ummm... I'd be ever so slightly embarrassed if it turns out that's actually a quote from the sequences. It's been a while since I read them.)
> What would you suggest to someone who plain doesn't like to do things with their body?
I'd suggest doing a small number of pushups every day. That small number could be 1, or it could be 2, or it could be 10. The point isn't to enjoy it, at least not when you start doing it, but just doing it and getting used to the feeling of it. If it sucks, well, you're just doing a small number, the suckiness won't last for long. And after a month or two or so, you'll begin to find that it's starting to get easy, and maybe even fun.
Ah, that makes sense
Unrelated to the post, but I'm not seeing the usual agree/disagree buttons on this post. Is there a reason for that?
Edit: looks like it's been fixed
Yeah. I do think there's also the aspect that dogs like being obedient to their humans, and so after it has first learned the habit, there continues to be a reward simply from being obedient, even after the biscuit gets taken away.
Your median-world is not one where you are median across a long span of time, but rather a single snapshot where you are median for a short time. It makes sense that the median will change away from that snapshot as time progresses.
My median world is not one where I would be median for very long.
If Bayes' rule is important, then there should be a compact notation for the underlying calculation. (Ideas with compact handles get used by the brain more readily & often)
I suggest the following notation:
X bayes Y, Z = X * Y / Z
equivalently:
P(B|A) bayes P(A), P(B) = P(A|B)
As an example:
If 25% of Hypotheticans are bleegs and 50% are kwops; and 30% of bleegs are kwops:
Then (30% bayes 25%, 50%) = 15% of kwops are bleegs.
( Since there are half as many bleegs as kwops, and 30% / 2 = 15% )
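A minimal sketch of the proposed notation as a Python function (my own illustration; the name and argument order are just my reading of the proposal):

```python
def bayes(p_b_given_a, p_a, p_b):
    """'X bayes Y, Z' = X * Y / Z: invert a conditional probability
    via Bayes' rule, returning P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# The Hypothetican example: 25% are bleegs, 50% are kwops,
# and 30% of bleegs are kwops.
p_bleeg_given_kwop = bayes(0.30, 0.25, 0.50)
print(p_bleeg_given_kwop)  # 0.15 of kwops are bleegs
```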
TIL the Greek word "diagogue" means essentially "behaviour": from «dia» "through" + «agogue» "to lead", essentially leading someone through one's actions. The reason I might use this word instead of "behaviour" is that "behaviour" puts the emphasis on what a person does, while "diagogue" makes me think more of the impact someone has on other people through inspiration and imitation of their actions.
Do the people you surround yourself with have good diagogue?