I've used microcovid occasionally, to make sure my intuitive feelings about risk were not completely crazy (and that did cause some updates; notably, putting numbers to staying outdoors had an influence.) I'm not a heavy user, but I do appreciate the work you've done!
I'd basically like to see more of the same - update microcovid.org for omicron and keep it going.
(FWIW, I'm in the Netherlands, where we just entered a new lockdown for omicron. So COVID unfortunately isn't "over".)
You're right that negative affect toward NFTs in particular / blockchain stuff in general is part of the reaction, but I don't see the reasoning error in
- "<X> causes greater electricity consumption;
- on the margin, greater electricity consumption currently causes <more pollution / finite resources to be consumed faster / more birds to die due to windmills / ...>, which is bad;
- this is a downside to <X>."
It's probably the case that NFTs do not directly cause greater electricity consumption, but NFTs do plausibly indirectly cause greater electricity consumption, e.g. via making Ethereum more valuable, thus increasing mining rewards, thus increasing competition.
Although I've heard the advice to leave after a year, my experience has been different - after three years, I'm still learning a lot and I'm beginning to tackle the really hard problems. Basically, I find myself agreeing with Yossi Kreinin's reply to Patrick McKenzie's advice, at least so far. (Both links are very much worth reading.)
Of course, you do need to push for interesting assignments and space to learn. Also, be sure to pick a company that actually does something interesting in the first place - I work on embedded crypto devices for the government market, in a company that's young enough that there's still plenty of flexibility.
- Batman is a murderer no less than the Joker, for all the lives the Joker took that Batman could've saved by killing him. ch. 85
- "It's not fair to the innocent bystanders to play at being Batman if you can't actually protect everyone under that code." ch. 91
- Harry had no intention of saying it out loud, of course, but now that he'd failed decisively to prevent any deaths during his quest, he had no further intention of being restrained by the law or even the code of Batman. ch. 97
Thanks, Nancy, for putting in this effort.
Some people do need to see that link, but note that it, too, is rather dangerous.
And, of course, encouraging homeownership makes this worse. Good thing that most of the Western world hasn't made that an explicit policy goal for the past decade...
I was pretty happy about that, actually.
I assume that TheAncientGeek has actually submitted the survey; in that case, their comment is "proof" that they deserve karma.
I, too, took the survey. (And promptly forgot to claim my karma; oh well.)
I didn't exactly disagree with the content, right?
Part of the problem is just that writing something good about epistemic rationality is really hard, even if you stick to the 101 level - and, well, I don't really care about 101 anymore. But I have plenty of sympathy for those writing more practical posts.
This is not nice - could you try to find a more pleasant way to say this?
Also, LW does do epistemic rationality - but it's easier to say something useful and new about practical matters, so there are more posts of that kind.
Note, though, that (a) "Lisp doesn't look like C" isn't as much of a problem in a world where C and C-like languages are not dominant, and (b) something like Common Lisp doesn't have to be particularly functional - that's a favored paradigm of the community, but it's a pretty acceptable imperative/OO language too.
"Doesn't run well on my computer" was probably a bigger problem. (Modern computers are much faster; modern Lisp implementations are much better.)
Edit: still, C is clearly superior to any other language. ;-)
The Dutch figures [are closer to yours than I expected](https://www.swov.nl/ibmcognos/cgi-bin/cognos.cgi?b_action=powerPlayService&m_encoding=UTF-8&BZ=1AAAB7pUZHH542oVOXW~CIBT9M1C3F3Oh1o_HPtBSo8ummzXZM7PXhrUFQxuX7NePWhNjlmU3cM7J4cAhyLfjfL~dZWsZt511uJYPlHM9S6eMQyKFWLIJiGweZnKazMVSzESSJNJnHoP_biZ26epV7Fcx5cuDNR2azqujrQt0NEroBIxqkIZytEFv1coU7YhG8o~QTrf6YK_BkzpUqsT7xDu6Cmv9WSHloJTpVO1FYQs0ngdwpu106dW5D6NrS~yyZkicfCWHpi48OtTfuvTnla5tg51XfXUg83ScbjebLN2vPYmXLL6rtc1ZmfL~x4LkLT4CEAYAjAEhBMg0isLoikB67xm7FuvLpyksnpRyngjlc8pDoBwZ5R_ULwaD3Qzya9hl9WIovezb~ACpc4us) (link in Dutch); I'd expect us to do quite a bit better than that, since people here are very used to bicyclists. Unfortunately, cyclists still die at 12 per 10^9 km traveled, pedestrians at 14 per 10^9 km, but drivers at 2 per 10^9 km (i.e. 1 to 6 instead of 1 to 10+, but still not very good.)
I do wonder how much of this effect can be explained by the fact that travelling (by car or otherwise) in a city or on a country road is much harder than highway driving. Or by the fact that people standing still die at a rate of infinity per km traveled. (And standing still near traffic is indeed measurably dangerous!)
Surveyed.
Also, spoiler: the reward is too small and unlikely for me to bother thinking through the ethics of defecting; in particular, I'm fairly insensitive to the multiplier for defecting at this price point. (Morality through indecisiveness?)
Start from "The very existence of flame-throwers proves that some time, somewhere, someone said to themselves, You know, I want to set those people over there on fire, but I'm just not close enough to get the job done", I guess.
Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.
Unless I am badly mistaken, "indemnify" would mean that Harry has to pay etc. if e.g. Dumbledore decides to demand recompense of his own. (Note that Dumbledore may well have similar power over her as he has over Harry himself.)
This is obviously much worse than just giving up his own claim ("exonerate").
Relatedly, most TCP schedulers are variants of the Reno algorithm, which basically means that they increase transmission rate until (the network is so congested that) packets begin dropping. In contrast, Vegas-type schedulers increase transmission rate until packets begin to take longer to arrive, which starts happening in congested networks shortly before packets are actually lost. A network of Vegas machines has considerably lower latency, and only minimally worse throughput, than a network of Reno machines.
Unfortunately, since Reno backs off later than Vegas, a mixed Vegas/Reno network ends with the Reno machines consuming the vast majority of bandwidth.
Interestingly, while almost all TCP schedulers are Reno variants (i.e. efficient in the presence of likely neighbours), there is basically no-one who entirely foregoes a scheduler and just sends as fast as possible, which was the original pre-Reno behaviour (and which is pretty optimal for the individual, at least until the entire internet collapses due to ridiculous levels of congestion. This has happened.)
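To make the difference concrete, here's a minimal sketch of the two update rules (Python, per round trip; the function names, thresholds, and units are my own simplifying assumptions, not the actual kernel implementations):

```python
def reno_update(cwnd: float, packet_lost: bool) -> float:
    """Loss-based (Reno-style) rule: push harder until a packet is dropped."""
    if packet_lost:
        return max(cwnd / 2, 1.0)   # multiplicative decrease on loss
    return cwnd + 1.0               # additive increase each round trip

def vegas_update(cwnd: float, rtt: float, base_rtt: float,
                 alpha: float = 2.0, beta: float = 4.0) -> float:
    """Delay-based (Vegas-style) rule: back off when round-trip times grow,
    i.e. when queues start building, before any packet is actually lost."""
    expected = cwnd / base_rtt               # throughput with empty queues
    actual = cwnd / rtt                      # throughput actually observed
    queued = (expected - actual) * base_rtt  # estimated packets sitting in queues
    if queued < alpha:
        return cwnd + 1.0                    # little queueing: speed up
    if queued > beta:
        return cwnd - 1.0                    # queues building: slow down pre-loss
    return cwnd                              # in the target band: hold steady
```

The asymmetry in the mixed-network case is visible right in the code: the Reno-style rule only ever reacts to an actual loss, while the Vegas-style rule yields as soon as latency starts creeping up.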
Is she particularly powerful, though? She's extraordinarily talented, very knowledgeable for her age, and has more raw power than anyone in her year including Draco; but Rita is more experienced, and most importantly older - it has been repeatedly pointed out that HP lacks the raw power for something-or-other, and the twins are far stronger than he despite not being particularly talented. It seems that Rita should have an edge in the "raw power" department, and I'd expect this effect to key off raw power.
Note that it's also sufficient to assume that Quirrell and/or Mary's room can suppress this effect.
This is a bit un-LW-ian, but: I'm earnestly happy for you. You sound, if not happier, more fulfilled than in your first post on this site. (Also, ambition is good.)
Sounds like the Buddha and his followers to me.
patio11 is something of a "marketing engineer", and his target audience is young software enthusiasts (Hacker News). What makes you think that this isn't pretty specific advice for a fairly narrow audience?
Spoiler: Gura ntnva, gur nyvra qbrf nccneragyl znantr gb chg n onpxqbbe va bar bs gur uhzna'f oenvaf.
I agree that the AI you envision would be dangerously likely to escape a "competent" box too; and in any case, even if you manage to keep the AI in the box, attempts to actually use any advice it gives are extremely dangerous.
That said, I think your "half an inch" is off by multiple orders of magnitude.
My comment was mostly inspired by (known effective) real-world examples. Note that relieving anyone who shows signs of being persuaded is a de-emphasized but vital part of this policy, as is carefully vetting people before trusting them.
Actually implementing a "N people at a time" rule can be done using locks, guards and/or cryptography (note that many such algorithms are provably secure against an adversary with unlimited computing power, "information theoretic security").
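As a toy illustration of that last point (a sketch of n-of-n XOR secret sharing, not a recommendation of any specific scheme; the function names and key size are my own), splitting the unlock key like this means any subset of fewer than n keyholders learns literally nothing, regardless of the adversary's computing power:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are required to reconstruct it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))  # XOR of all shares equals the secret
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

# A five-person rule: no four of them can learn anything about the key.
key = secrets.token_bytes(16)
assert reconstruct(split_secret(key, 5)) == key
```

(A real deployment would more likely use a k-of-n threshold scheme such as Shamir's, so one unavailable keyholder doesn't lock everyone out, but the information-theoretic argument is the same.)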
Note that the AI box setting is not one which security-minded people would consider "competent"; once you're convinced that AI is dangerous and persuasive, the minimum safeguard would be to require multiple people to be present when interacting with the box, and to only allow release with the assent of a significant number of people.
It is, after all, much harder to convince a group of mutually-suspicious humans than to convince one lone person.
(This is not a knock on EY's experiment, which does indeed test a level of security that really was proposed by several real-world people; it is a knock on their security systems.)
For me, high (insight + fun) per (time + effort).
(Are you sure you want this posted under what appears to be a real name?)
I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe - it's just a very peculiar kind of wireheading.
As you point out, it'd be obvious, on reflection, that one's sense of rightness has changed; but that doesn't necessarily make it a different quale, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.
Consider this explanation, too.
I don't think it's unfair to put some restrictions on the universes you want to describe. Sure, reality could be arbitrarily weird - but if the universe cannot even be approximated within a number of bits much larger than the number of neurons (or even atoms, quarks, whatever), "rationality" has lost anyway.
(The obvious counterexample is that previous generations would have considered different classes of universes unthinkable in this fashion.)
It's not too hard to write Eliezer's 2^48 (possibly invalid) games of non-causal-Life to disk; but does that make any of them real? As real as the one in the article?
It's true that intelligence wouldn't do very well in a completely unpredictable universe; but I see no reason why it doesn't work in something like HPMoR, and there are plenty of such "almost-sane" possibilities.
This comment is relevant.
Mostly, what David_Gerard says, better than I managed to express it; in part, "be nice to whatever minorities you have"; and finally, yes, "this is a good cause; we should champion it". "Arguments as soldiers" is partly a valid criticism, but note that we're looking at a bunch of narratives, not a logical argument; and note that very little "improvement of the other's arguments" seems to be going on.
All of what you say is true; it is also true that I'm somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on "How minimal is our intelligence". However, the comments here do seem to pick far more nits than, say, the comments on "How to have things correctly".
The first post is heavily fact-based and defends a thesis based on - of necessity - incomplete data and back-projection of mechanisms that are not fully understood. I don't mean to say that it is a bad post; but there are certainly plenty of legitimate alternative viewpoints and footnotes that could be added, and it is no surprise that there are a lot of both in the comments section.
The second post is an idiosyncratic, personal narrative; it is intended to speak a wider truth, but it's clearly one person's very personal view. It, too, is not a bad post; but it's not a terribly fact-based one, and the comments find fewer nits to pick.
This post seems closer to the second post - personal narratives - but the comment section more closely resembles that of the first post.
As to the desirability of this effect: it's good to be a bit more careful around whatever minorities you have on the site, and this goes double for when the minority is trying to express a personal narrative. I do believe there are some nits that could be picked in this post, but I'm less convinced that the cumulative improvement to the post is worth the cumulative... well, not quite invalidation, but the comments section does bother me, at least.
If a post has 39 short comments saying "I want to see more posts like this post" and 153 nitpicks, that says something about the community reaction. This is especially relevant since "but this detail is wrong" seems to be a common reaction to these kinds of issues on geek fora.
(Yes, not nearly all posts are nitpicks, and my meta-complaining doesn't contribute all that much signal either.)
One relevant datum: when I started my studies in math, about 33% of the students were female. In the same year, about 1% (i.e. one) of the computer science students was female.
It's possible to come up with other reasons - IT is certainly well-suited to people who don't like human interaction all that much - but I think that's a significant part of the problem.
It bothers me how many of these comments pick nits ("plowing isn't especially feminine", "you can't unilaterally declare Crocker's Rules") instead of actually engaging with what has been said.
(And those are just women's issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
From AlexanderD's comment:
"The point, though, is that the narrowness of focus in the adventure precluded exploration of a large set of options."
If playing D&D with a bunch of girls consistently leads to solutions being proposed that do not fit the traditional D&D mold, that can teach us something about how well that mold fits a bunch of girls. More generally, the author is a pretty smart woman who thought this was a good example - you'd do well to take a second look.
If you interpret the father's statement as "all else being equal, being a better cook is good" and you completely divorce it from a historical and cultural context, it is indeed not really problematic. But given that we are, in fact, talking culture here, I do not think that this is the interpretation most likely to increase your insight.
That's not a high bar. I love my IT job, but IT is shamefully bad at this.
Automatic dishwashers are really cheap per hour saved. The actual costs will vary widely (esp. in the US, where the cost of electricity is much lower than where I live), but our best estimate at the time of buying was $2/hour saved (based on halving the 30 minutes we need to do the dishes, and assuming it breaks the moment it's out of warranty - not entirely unreasonable, since we pretty much bought the cheapest option.) Locally, about half of that is depreciation of the dishwasher and half is electricity/washing powder/water (the water being negligible).
(I've brought this up before: http://lesswrong.com/lw/9pk/rationality_quotes_february_2012/5tsb.)
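For what it's worth, a back-of-the-envelope version of that estimate (every number below is an illustrative assumption, not our actual receipts):

```python
machine_cost = 250.0           # assumed price of a cheap dishwasher, in dollars
warranty_days = 730            # write it off over a two-year warranty
running_cost_per_cycle = 0.25  # assumed electricity + powder; water is negligible
hours_saved_per_day = 0.25     # half of the 30 minutes of washing up by hand

cost_per_day = machine_cost / warranty_days + running_cost_per_cycle
print(cost_per_day / hours_saved_per_day)  # ~2.4 dollars per hour saved
```

With these made-up numbers it lands in the same ballpark as the $2/hour figure, with depreciation and running costs each contributing roughly half.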
Computers have revolutionized most fields of science. I take it as a general "yay science/engineer/computers" quote.
Sure, thorium reactors do not appear to immediately allow nuclear weapons - but the scientific and technological advances that lead to thorium reactors are definitely "dual-use".
I'm not entirely convinced of either the feasibility or the ethics of the "physicists should never have told politicians how to build a nuke" argument that's been made multiple times on LW (and in HPMOR), but the existence of thorium reactors doesn't really constitute a valid argument against it - an industry capable of building thorium reactors is very likely able to think up, and eventually build, nukes.
Aren't you just confusing distributions (2d2) and samples ('3') here?
This is true in theory, but do you think it's an accurate description of our real world?
(Nuclear power is potentially great, but with a bit more patience and care, we could stretch our non-nuclear resources quite a bit further, which would have given us more time to build stable(r) political systems.)
I'm not completely aware of the correct protocol here, but "with what gender do you primarily identify? ... M (transgender f -> m) ..." is not something I would expect a transgender person to say - if I'd made that much of an effort to be (fe)male, I'd want to be "(fe)male", not "(fe)male (transgender ...)".
Splitting out blog referrals from general referrals seems odd; is there a reason you cannot use "[ ] some blog, [ ]" and "[ ] other, [ ]"?
I see no benefit to "revealed" in "What is the probability that any of humankind's revealed religions is more or less correct?".
Calibration IQ: "... is greater than the reported IQ..."
We already have a poll about whether this is useful content, and it's currently at +32. I can imagine a few reasons why you made this second poll, but none of them are exactly complimentary.