LessWrong 2.0 Reader
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2006-01-01T08:00:05.370Z · comments (6)
Technically, another explanation is that those 18% were 60% sure that the Sun goes around the Earth, and wanted to maximize their chances of being right.
mateusz-baginski on AI Alignment Metastrategy
For people who (like me immediately after reading this reply) are still confused about the meaning of "humane/acc", the header photo of Critch's X profile is reasonably informative.
mateusz-baginski on Open Thread Spring 2024
I have the mild impression that Jacqueline Carey's Kushiel trilogy is somewhat popular in the community?[1] Is it true and if so, why?
E.g. Scott Alexander references Elua in Meditations on Moloch, and I know of at least one prominent LWer who was a big enough fan of it to reference Elua in their Discord handle.
I read the beginning and skimmed through the rest of the linked post. It is what I expected it to be.
We are talking about "probability" - a mathematical concept with a quite precise definition. How come we still have ambiguity about it?
Reading E.T. Jaynes might help.
Probability is what you get as a result of some natural desiderata related to payoff structures. When anthropics are involved, there are multiple ways to extend those desiderata, producing different numbers that you should say, depending on what you get paid for / what you care about, and accordingly different math. When there's only a single copy of you, there's only one such function, and everyone agrees on it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about reality, and different generalisations of probability are possible.
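The two generalisations can be made concrete with a quick Monte Carlo sketch of the standard Sleeping Beauty setup (a hypothetical illustration of my own; the function names and accounting choices are assumptions, not the commenter's): counting per experiment gives one number, counting per awakening gives another, from the same fair coin.

```python
import random

# Sleeping Beauty sketch: Heads -> one awakening, Tails -> two awakenings.
# We tally the same coin two ways: per experiment and per awakening.

def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
    per_experiment = heads_experiments / trials          # "halfer" accounting
    per_awakening = heads_awakenings / total_awakenings  # "thirder" accounting
    return per_experiment, per_awakening

per_exp, per_awake = simulate()
print(round(per_exp, 2), round(per_awake, 2))  # approximately 0.5 and 0.33
```

Both numbers come from the same sequence of coin tosses; which one a payoff scheme rewards depends on whether you get paid once per experiment or once per awakening.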
yair-halberstadt on From the outside, American schooling is weird
The way the auditing works in the UK is as follows:
Students will be given an assignment, with a strict grading rubric. This grading rubric is open, and students are allowed to read it. The rubric will detail exactly what needs to be done to gain each mark. Interestingly, even students who read the rubric often fail to get these marks.
Teachers then grade the coursework against the rubric. Usually two from each school are randomly selected for review. If the external grader finds the marks more than 2 points off, all of the coursework will be remarked externally.
The biggest problem with this system is that experienced teachers will carefully go over the grading rubric with their students and explain precisely what needs to be done to gain each mark. They will then read through drafts of the coursework and point out which marks the student is failing to get. When they mark the final coursework they will add exactly one point to the total.
Meanwhile, less experienced teachers don't actually understand what the marking rubric means. They will pattern-match the student's response to the examples in the rubric and give their students too high a mark. It will then be regraded externally, and the students will end up with a far lower grade than they had expected.
Thus much of the difference in grades between schools is explainable by the difference in teacher quality/experience. This is bad for courses which are mostly graded in coursework, but fortunately most academic subjects are 90% written exams.
yair-halberstadt on From the outside, American schooling is weird
I believe that the US is nearly unique in not having national assessments. Certainly in both the UK and Israel, most exams with some impact on your future life are externally marked, and those few that are not are audited. From my perspective the US system seems batshit insane; I'd be interested in what a steelman of "have teachers arbitrarily grade the kids, then use that to decide life outcomes" could be.
Another huge difference between the education system in the US and elsewhere is the undergraduate/postgraduate distinction. Pretty much everywhere else, an undergraduate degree is focused on a specific field and meant to teach you sufficiently well to immediately get a job in that field. When 3 years isn't enough for that, the length of the degree is increased by a year or two and you come out with a master's or a doctorate at the end. For example, my wife took a 4-year course and now has a master's in pharmacy, allowing her to work as a pharmacist. Friends took a 5- or 6-year course (depending on the university) and are now doctors. Second degrees are pretty much only necessary if you want to go into academia or research.
Meanwhile, in the US it seems that all an undergraduate degree means is that you took enough courses in anything you want to get a certificate, and then you have to go on to a postgraduate course to actually learn the material relevant to your particular career. Eight years total seems to be standard to become a doctor in the US, yet graduating doctors actually have a year or two less medical training than doctors in the UK. This seems like a total deadweight loss.
prometheus on Vernor Vinge, who coined the term "Technological Singularity", dies at 79
"To the best of my knowledge, Vernor did not get cryopreserved. He has no chance to see the future he envisioned so boldly and imaginatively. The near-future world of Rainbows End is very nearly here... Part of me is upset with myself for not pushing him to make cryonics arrangements. However, he knew about it and made his choice."
ape-in-the-coat on Beauty and the Bets
To be frank, it feels as if you didn't read any of my posts on Sleeping Beauty before writing this comment. That you are simply annoyed when people argue about substanceless semantics - and, believe me, I sympathise enormously! - assumed that I'm doing the same, based on shallow pattern matching ("talks about Sleeping Beauty -> semantic disagreement"), and spilled your annoyance at me without validating whether your assumption is actually correct.
Which is a shame, because I've designed this whole series of posts with people like you in mind. Someone who starts from the assumption that there are two valid answers, because it was the assumption I myself used to be quite sympathetic to until I actually went forth and checked.
If it's indeed the case, please start here [LW · GW] and then I'd appreciate if you actually engaged with the points I made, because that post addresses the kind of criticism you are making here.
If you had actually read all my Sleeping Beauty posts, and seen me highlight the very specific mathematical disagreements between halfers and thirders and how utterly ungrounded the idea of using probability theory with "centred possible worlds" is, I don't really understand how this kind of appeal to both sides still having a point can be a valid response.
Anyway, I'm going to address your comment step by step.
Sleeping Beauty is an edge case where different reward structures are intuitively possible
Different reward structures are possible in any probability-theory problem. "Make a bet on a coin toss, but if the outcome is Tails the bet is repeated three times, and if it's Heads you get punched in the face" is a completely possible reward structure for a simple coin-toss problem. Is it not very intuitive? Granted, but this is beside the point. Mathematical rules are supposed to always work, even in non-intuitive cases.
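For concreteness, here is a sketch of the expected value of that strange reward structure (the stake and punch disutility are illustrative numbers of my own, not from the post). Note the coin stays fair throughout; only the payoffs are unusual.

```python
# Hypothetical payoffs: win/lose one STAKE per resolved bet; the punch on
# Heads costs PUNCH_COST in disutility regardless of which side you bet.
STAKE = 1.0
PUNCH_COST = 5.0

def expected_value(bet_on_tails: bool) -> float:
    if bet_on_tails:
        # Tails: the winning bet is repeated three times.
        # Heads: you lose the single bet and also get punched.
        return 0.5 * (3 * STAKE) + 0.5 * (-STAKE - PUNCH_COST)
    else:
        # Heads: you win once but still get punched.
        # Tails: the losing bet is repeated three times.
        return 0.5 * (STAKE - PUNCH_COST) + 0.5 * (-3 * STAKE)

print(expected_value(True))   # -1.5  (betting on Tails)
print(expected_value(False))  # -3.5  (betting on Heads)
```

The optimal bet is determined by the reward structure, while P(Heads) remains 1/2 throughout: betting behaviour and probability come apart exactly as described.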
Once the payout structure is fixed, the confusion is gone.
People should agree on which bets to make - this is true and this is exactly what I show in the first part of this post. But the mathematical concept of "probability" is not just about bets - which I talk about in the middle part of this post. A huge part of the confusion is still very much present. Or so it was, until I actually resolved it in the previous post.
Sleeping beauty is about definitions.
There definitely is a semantic component to the disagreement between halfers and thirders. But it's the least interesting one, and that's why I'm postponing the discussion of it until the next post.
The thing you seem to be missing is that there is also a real, objective disagreement which is obfuscated by the semantic one. People noticed that halfers and thirders use different definitions, concluded that semantics is all there is, and decided not to look further. But they totally should have.
My last two posts are about these objective disagreements. Is there an update on awakening or is there not? There is a disagreement about it even between thirders who apparently agree on the definition of "probability". Are the ways halfers and thirders define probability formally correct? It's a strictly defined mathematical concept, mind you, not some similarity-cluster category border like "sound". Are Tails&Monday and Tails&Tuesday mutually exclusive events? You can't just define mutual exclusivity however you like.
Probability is something defined in math by necessity.
Probability is a measure function over an event space. And if for some mathematical reason you can't construct an event space, your "probability" is ill-defined.
You all should just call these two probabilities two different words instead of arguing which one is the correct definition for "probability".
I'm doing both. I've shown that only one of these things is formally a probability, and in the next post I'm going to define the other thing and explore its properties.
dave-orr on Would you have a baby in 2024?
Heh, that's why I put "strong" in there!
gerald-monroe on AI #57: All the AI News That's Fit to Print
https://twitter.com/perrymetzger/status/1772987611998462445 - just wanted to bring this to your attention.
It's unfortunate that some snit between Perry and Eliezer over events 30 years ago stopped much discussion of the actual merits of his arguments, as I'd like to see what Eliezer or you have to say in response.
Eliezer responded with https://twitter.com/ESYudkowsky/status/1773064617239150796. He calls Perry a liar a bunch of times, and does give a substantive reply:
the first group permitted to try their hand at this should be humans augmented to the point where they are no longer idiots -- augmented humans so intelligent that they have stopped being bloody idiots like the rest of us; so intelligent they have stopped hoping for clever ideas to work that won't actually work. That's the level of intelligence needed to build something smarter than yourself and survive the experience.