Comments
True; in addition, places vary a lot in their freak-tolerance.
If I lived in Wyoming and wanted to go to a fetish event, I guess I'm driving to maybe Denver, around 3h40 away? I know this isn't a consideration for everyone but it's important to me.
Why the 6in fan rather than the 8in one? Would seem to move a lot more air for nearly the same price.
Thank you!
Reminiscent of Freeman Dyson's 2005 answer to the question: "what do you believe is true even though you cannot prove it?":
Since I am a mathematician, I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is.
Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and so on. Numbers that are exact powers of five are 5, 25, 125, 625 and so on. Given any number such as 131072 (which happens to be a power of two), the reverse of it is 270131, with the same digits taken in the opposite order. Now my statement is: it never happens that the reverse of a power of two is a power of five.
The digits in a big power of two seem to occur in a random way without any regular pattern. If it ever happened that the reverse of a power of two was a power of five, this would be an unlikely accident, and the chance of it happening grows rapidly smaller as the numbers grow bigger. If we assume that the digits occur at random, then the chance of the accident happening for any power of two greater than a billion is less than one in a billion. It is easy to check that it does not happen for powers of two smaller than a billion. So the chance that it ever happens at all is less than one in a billion. That is why I believe the statement is true.
But the assumption that digits in a big power of two occur at random also implies that the statement is unprovable. Any proof of the statement would have to be based on some non-random property of the digits. The assumption of randomness means that the statement is true just because the odds are in its favor. It cannot be proved because there is no deep mathematical reason why it has to be true. (Note for experts: this argument does not work if we use powers of three instead of powers of five. In that case the statement is easy to prove because the reverse of a number divisible by three is also divisible by three. Divisibility by three happens to be a non-random property of the digits).
It is easy to find other examples of statements that are likely to be true but unprovable. The essential trick is to find an infinite sequence of events, each of which might happen by accident, but with a small total probability for even one of them happening. Then the statement that none of the events ever happens is probably true but cannot be proved.
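Dyson's "easy to check" step really is easy. Here's a quick sketch (my own, not part of the quote) that confirms no power of two below a billion has a digit-reversal that is a power of five:

```python
def is_power_of_five(n: int) -> bool:
    """Return True iff n is 5, 25, 125, ... (or 1)."""
    if n < 1:
        return False
    while n % 5 == 0:
        n //= 5
    return n == 1

counterexamples = []
p = 2
while p < 10**9:  # "powers of two smaller than a billion"
    if is_power_of_five(int(str(p)[::-1])):
        counterexamples.append(p)
    p *= 2

print(counterexamples)  # prints [] -- no counterexamples found
```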
You're not able to directly edit it yourself?
On Twitter I linked to this saying
Basic skills of decision making under uncertainty have been sorely lacking in this crisis. Oxford University's Future of Humanity Institute is building up its Epidemic Forecasting project, and needs a project manager.
Response:
I'm honestly struggling with a polite response to this. Here in the UK, Dominic Cummings has tried a Less Wrong approach to policy making, and our death rate is terrible. This idea that a solution will somehow spring from left-field maverick thinking is actually lethal.
For the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.
I look back and say "I wish he had been right!"
Britain was in the EU, but it kept the pound sterling; it never adopted the Euro.
How many opportunities do you think we get to hear someone make clearly falsifiable ten-year predictions, and have them turn out to be false, and then have that person have the honour necessary to say "I was very, very wrong"? Not a lot! So any reflections you have to add on this would I think be super valuable. Thanks!
Hey, looks like you're still active on the site, would be interested to hear your reflections on these predictions ten years on - thanks!
It is, of course, third-party visible that Eliezer-2010 *says* it's going well. Anyone can say that, but not everyone does.
I note that nearly eight years later, the preimage was never revealed.
Actually, I have seen many hashed predictions, and I have never seen a preimage revealed. At this stage, if someone reveals a preimage to demonstrate a successful prediction, I will be about as impressed as if someone wins a lottery, noting the number of losing lottery tickets lying about.
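For anyone unfamiliar with the mechanism being discussed: a hashed prediction is just a hash commitment. A minimal sketch (the prediction text and salt here are placeholders, not anyone's real prediction):

```python
import hashlib
import secrets

# Commit: publish only the hash; keep the salted prediction private.
salt = secrets.token_hex(16)
prediction = "Example prediction text"  # hypothetical placeholder
preimage = f"{salt}|{prediction}".encode()
commitment = hashlib.sha256(preimage).hexdigest()
print("publish this now:", commitment)

# Reveal (later): publish the preimage; anyone can re-hash and compare.
assert hashlib.sha256(preimage).hexdigest() == commitment
print("reveal this later:", preimage.decode())
```

The point stands: the reveal step is the part that, in practice, never seems to happen.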
Half-formed thoughts towards how I think about this:
Something like Turing completeness is at work, where our intelligence gains the ability to loop in on itself, and build on its former products (eg definitions) to reach new insights. We are at the threshold of the transition to this capability, half god and half beast, so even a small change in the distance we are across that threshold makes a big difference.
As such, if you observe yourself to be in a culture that is able to reach technological maturity, you're probably "the stupidest such culture that could get there, because if it could be done at a stupider level then it would've happened there first."
Who first observed this? I say this a lot, but I'm now not sure if I first thought of it or if I'm just quoting well-understood folklore.
May I recommend spoiler markup? Just start the line with >!
Another (minor) "Top Donor" opinion. On the MIRI issue: agree with your concerns, but continue donating, for now. I assume they're fully aware of the problem they're presenting to their donors and will address it in some fashion. If they do not, I might adjust next year. The hard thing is that MIRI still seems the most differentiated org, in approach and talent, that can use funds (vs OpenAI, DeepMind, and well-funded academic institutions).
I note that this is now done. As I have for so many things here. Great work team!
Spoiler space test
Rot13'd content, hidden using spoiler markup:
Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.
Despite FHI producing very high quality research, GPI having a lot of promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the requirement to pre-fund researchers’ entire contract significantly increases the effective cost of funding research there. On the other hand, hiring people in the bay area isn’t cheap either.
This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year.
I think of CSER and GCRI as being relatively comparable organisations, as 1) they both work on a variety of existential risks and 2) both primarily produce strategy pieces. In this comparison I think GCRI looks significantly better; it is not clear their total output, all things considered, is less than CSER’s, but they have done so on a dramatically smaller budget. As such I will be donating some money to GCRI again this year.
ANU, Deepmind and OpenAI have all done good work but I don’t think it is viable for (relatively) small individual donors to meaningfully support their work.
Ought seems like a very valuable project, and I am torn on donating, but I think their need for additional funding is slightly less than some other groups.
AI Impacts is in many ways in a similar position to GCRI, with the exception that GCRI is attempting to scale by hiring its part-time workers to full-time, while AI Impacts is scaling by hiring new people. The former is significantly lower risk, and AI Impacts seems to have enough money to try out the upsizing for 2019 anyway. As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019.
The Foundational Research Institute have done some very interesting work, but seem to be adequately funded, and I am somewhat more concerned about the danger of risky unilateral action here than with other organisations.
I haven’t had time to evaluate the Foresight Institute, which is a shame because at their small size marginal funding could be very valuable if they are in fact doing useful work. Similarly, Median and Convergence seem too new to really evaluate, though I wish them well.
The Future of Life Institute grants for this year seem more valuable to me than the previous batch, on average. However, I prefer to directly evaluate where to donate, rather than outsourcing this decision.
I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. The current situation, with a binary employed/not-employed distinction, and upfront payment for uncertain output, seems suboptimal. I also hope to significantly reduce overhead (for everyone but me) by not having an application process or any requirements for grantees beyond having produced good work. This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.
I think the Big Rationalist Lesson is "what adjustment to my circumstances am I not making because I Should Be Able To Do Without?"
Just to get things started, here's a proof for #1:
Proof by induction that the number of bicolor edges is odd iff the ends don't match. Base case: a single node has matching ends and an even number (zero) of bicolor edges. Extending with a non-bicolor edge changes neither condition, and extending with a bicolor edge changes both; in both cases the induction hypothesis is preserved.
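The induction can also be sanity-checked by brute force. Here's a quick sketch (my own, not part of the proof) over all 2-colorings of short paths:

```python
from itertools import product

# For every 2-coloring of a path with up to 12 nodes, check that the number
# of edges whose endpoints differ in color ("bicolor edges") is odd exactly
# when the two end nodes have different colors.
for n in range(1, 13):
    for colors in product((0, 1), repeat=n):
        bicolor = sum(colors[i] != colors[i + 1] for i in range(n - 1))
        assert (bicolor % 2 == 1) == (colors[0] != colors[-1])

print("checked all 2-colorings of paths up to 12 nodes")
```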
From what I hear, any plan for improving MIRI/CFAR space that involves the collaboration of the landlord is dead in the water; they just always say no to things, even when it's "we will cover all costs to make this lasting improvement to your building".
Of course I should have tested it before commenting! Thanks for doing so.
Spoiler markup. This post has lots of comments which use ROT13 to disguise their content. There's a Markdown syntax for this.
I note that this is now done.
I note that this is now done.
"If you're running an event that has rules, be explicit about what those rules are, don't just refer to an often-misunderstood idea" seems unarguably a big improvement, no matter what you think of the other changes proposed here.
I notice your words are now larger thanks to the excellence of this comment!
Excellent, my words will finally get the prominence they deserve!
When does voting close? EDIT: "This vote will close on Sunday March 18th at midnight PST."
I thought of a similar example to you for big-low-status, but I couldn't think of an example I was happy with for small-high-status. Every example I could think of was one where someone is visually small, but you already know they're high status. So I was struck when your example also used someone we all know is high status! Is there a pose or way of looking which both looks small and communicates high status, without relying on some obvious marker like a badge or a crown?
Ainslie, not Ainslee. I found this super distracting for some reason, partly because his name is repeated so often.
A plausible strategy would be to buy say 100 bitcoins for $1 each, then sell 10 at $10, 10 at $100, and so on. With this strategy you would have made $111,000 and hold 60 bitcoins.
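Checking the arithmetic (assuming "and so on" means continuing up to selling 10 at $10,000, which is what the 60 coins still held imply):

```python
# Illustrative check of the strategy described above.
cost = 100 * 1                                             # buy 100 BTC at $1 each
sales = [(10, 10), (10, 100), (10, 1_000), (10, 10_000)]   # sell 10 at each price level
proceeds = sum(qty * price for qty, price in sales)
held = 100 - sum(qty for qty, _ in sales)
print(proceeds - cost, held)   # 111000 net profit, 60 BTC still held
```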
"Even though gaining too much in pregnancy" is missing the word "weight" I think.
I can't work out where you're going with the Qubes thing. Obviously a secure hypervisor wouldn't imply a secure system, any more than a secure kernel implies a secure system in a non-hypervisor based system.
More deeply, you seem to imply that someone who has made a security error obviously lacks the security mindset. If only the mindset protected us from all errors; sadly it's not so. But I've often been in the situation of trying to explain something security-related to a smart person, and sensing the gap that seemed wider than a mere lack of knowledge.
Please don't bold your whole comment.
Looks like this hasn't been marked as part of the "INADEQUATE EQUILIBRIA" sequence: unlike the others, it doesn't carry this banner, and it isn't listed in the TOC.
I agree, if the USA had decided to take over the world at the end of WWII, it would have taken absolutely cataclysmic losses. I think it would still have ended up on top of what was left, and the world would have rebuilt, with the USA on top. But not being prepared to make such an awful sacrifice to grasp power probably comes under a different heading than "moral norms".
There are many ways to then conclude that AGI is far away where far away means decades out. Not that decades out is all that far away. Eliezer conflating the two should freak you out. AGI reliably forty years away would be quite the fire alarm.
I don't think I understand this point. Is the conflation "having a model of the long-term that builds on a short-term model" and "having any model of the long term", in which case the conflation is akin to expecting climate scientists to predict the weather? If so I agree that that's a slip up, but my alarm level isn't raised to "freaked out" yet, what am I missing?
I move in circles where asking "why is X bad" is as bad as X itself. So for the avoidance of doubt, I do not think that your comment here makes you a bad person.
I'm trying to imagine a conversation where one person expresses a preference about the other's pubic hair that wouldn't be inappropriate, and I'm struggling a little. Here's what I've come up with:
- A BDSM context in which that sort of thing is a negotiated part.
- The two have been playing for a while and are intimate enough for that to be appropriate.
- The other person asks, and gets an honest answer.
It sounds like none of these are what you have in mind; can you paint me a more detailed example?
Which parts do you think are not needed?
Dawkins's "Middle World" idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.
Thank you! Hooray for this sort of thing :)
Also I have already read them all more than once and don't plan to do so again just to get the badge :)
Facebook-like reactions.
I would like to be able to publicly say eg "hear hear" on a comment or post, without cluttering up the replies. Where the "like" button is absent eg on Livejournal, I sorely miss it. This is nothing to do with voting and should be wholly orthogonal; voting is anonymous and feeds into the ranking algorithm, where this is more like a comment that says very little and takes up minimal screen real estate, but allows people to get a quick feel for who thinks what about a comment.
Starting with "thumbs up" would be a big step forward, but I'd hope that other reactions would become available later, eg "disagree connotationally" or "haha" or "don't like the tone" or "I want to help with this". Each should be associated with a small graphic, with a hover-over to show the meaning as well as who applied the reaction. Like emoji in eg Discord and unlike Facebook, a single user can apply multiple reactions to the same comment, so I can say both "agree" and "don't like the tone".
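To be concrete about the data model this implies (a sketch only; the names are made up, this isn't an actual LW schema): each (comment, user) pair maps to a *set* of reactions rather than a single one, Discord-style.

```python
from collections import defaultdict

# Hypothetical reaction store: public, per-user, multiple reactions allowed.
reactions = defaultdict(set)  # (comment_id, user) -> set of reaction names

def react(comment_id: int, user: str, reaction: str) -> None:
    reactions[(comment_id, user)].add(reaction)

react(42, "ciphergoth", "agree")
react(42, "ciphergoth", "don't like the tone")  # same user, second reaction
print(reactions[(42, "ciphergoth")])
```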
I apologise for having buried this feature request in the depths of not one but two comment threads before putting it here :)
I think these are two wholly orthogonal functions: anonymous voting, and public comment badges. For badges, I'd like to see something much more like eg Discord where you can apply as many as you think apply, rather than Facebook where you can only apply at most one of the six options (eg both "agree" and "don't like tone").
EDIT: now a feature request.
I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.
Thank you all so much for doing this!
Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.
I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Shulman's votes even if he never commented.
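To sketch what I mean by "rooted in the trust of a few accounts" (this is only an illustration; the graph, seed list, and damping factor are all made up): something like personalized PageRank with the restart mass on the seed accounts, so sockpuppets that only vote for each other accumulate no weight.

```python
# Hypothetical "who upvotes whom" graph; "admin" plays the configured seed.
votes = {
    "admin": ["alice", "bob"],
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "sybil1": ["sybil2"], "sybil2": ["sybil1"],   # mutual-voting sockpuppets
}
seeds = {"admin"}      # accounts named in the LW configuration
damping = 0.85

users = sorted(set(votes) | {u for vs in votes.values() for u in vs})
trust = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in users}

for _ in range(50):    # power iteration; restart mass goes only to the seeds
    new = {u: ((1 - damping) / len(seeds) if u in seeds else 0.0) for u in users}
    for voter, targets in votes.items():
        share = damping * trust[voter] / max(len(targets), 1)
        for t in targets:
            new[t] += share
    trust = new

print({u: round(trust[u], 3) for u in users})  # sybil accounts end up at 0.0
```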
The feature map link seems to be absent.
Thinking about it, I'd rather not make the self-rating visible. I'd rather encourage everyone to assume that the self-rating was always 2, and encourage that by non-technical means.