Comment by gjm on Epistemic Tenure · 2019-02-19T23:46:04.130Z · score: 2 (1 votes) · LW · GW

If Bob's history is that over and over again he's said things that seem obviously wrong but he's always turned out to be right, I don't think we need a notion of "epistemic tenure" to justify taking him seriously the next time he says something that seems obviously wrong: we've already established that when he says apparently-obviously-wrong things he's usually right, so plain old induction will get us where we need to go. I think the OP is making a stronger claim. (And a different one: note that OP says explicitly that he isn't saying we should take Bob seriously because he might be right, but that we should take Bob seriously so as not to discourage him from thinking original thoughts in future.)

And the OP (at least as I read it) doesn't seem to stipulate that Bob is strikingly better epistemically than his peers in that sort of way. It says:

Let Bob be an individual that I have a lot intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful.

which isn't quite the same. One of the specific ways in which Bob might have earned that "lot of intellectual respect" is by believing true things long before everyone else, but that's just one example. And someone can merit a lot of intellectual respect without being so much better than everyone else.

For an "intellectual venture capitalist" who generates a lot of wild ideas, mostly wrong but right more often than you'd expect, I do agree that we want to avoid stifling them. But we do also want to avoid letting them get entirely untethered from reality, and it's not obvious to me what degree of epistemic tenure best makes that balance.

(Analogy: successful writers get more freedom to ignore the advice of their editors. Sometimes that's a good thing, but not always.)

Comment by gjm on Epistemic Tenure · 2019-02-19T14:43:47.905Z · score: 2 (1 votes) · LW · GW

I think I'm largely (albeit tentatively) with Dagon here: it's not clear that we don't _want_ our responses to his wrongness to back-propagate into his idea generation. Isn't that part of how a person's idea generation gets better?

One possible counterargument: a person's idea-generation process actually consists of (at least) two parts, generation and filtering, and most of us would do better to have more fluent _generation_. But even if so, we want the _filtering_ to work well, and I don't know how you enable evaluations to propagate back as far as the filtering stage but to stop before affecting the generation stage.

I'm not saying that the suggestion here is definitely wrong. It could well be that if we follow the path of least resistance, the result will be _too much_ idea-suppression. But you can't just say "if there's a substantial cost to saying very wrong things then that's bad because it may make people less willing or even less able to come up with contrarian ideas in future" without acknowledging that there's an upside too, in making people less inclined to come up with _bad_ ideas in future.

Comment by gjm on The Case for a Bigger Audience · 2019-02-15T17:26:46.563Z · score: 2 (3 votes) · LW · GW

Sure: the author of a particular article may just want it to be read and shared as widely as possible. But what's locally best for them is not necessarily the same as what's globally best for the LW community.

Put yourself in a different role: you're reading something of the sort that might be on LW. Would you prefer to read and discuss it here or on Facebook? For me, the answer is "definitely here". If your answer is generally "Facebook" then it seems to me that you want your writings discussed on Facebook, you want to discuss things on Facebook, and what would suit you best is for Less Wrong to go away and for people to just post things on Facebook. Which is certainly a preference you're entitled to have, but I don't think Less Wrong should be optimizing for people who feel that way.

Comment by gjm on The Case for a Bigger Audience · 2019-02-15T03:17:29.499Z · score: 13 (4 votes) · LW · GW

I personally would prefer everything to do with Facebook, Twitter, etc., to stay as far away from LW as possible. Also, adding social-media sharing buttons seems to be asking to have more of the discussion take place away from LW, which is the exact opposite of what I thought was being discussed here.

Comment by gjm on The Hamming Question · 2019-02-12T04:00:39.102Z · score: 2 (1 votes) · LW · GW

Rereading the original text, I think he is talking about all three of (1) doing something that has a substantial impact on the world, (2) doing something that brings you major career success, and (3) doing something that turns you into a better scientist and a better person. (The last of those is mostly not very apparent in what he says, but there's this: "I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.")

Comment by gjm on Minimize Use of Standard Internet Food Delivery · 2019-02-11T16:50:09.876Z · score: 2 (3 votes) · LW · GW

When I go out to a restaurant rather than getting a takeaway (whether I pick it up or someone else delivers it) I'm not (in my mind, at least) primarily choosing "the seating and decorations and stuff". Rather, I prefer (1) freshly prepared food rather than food that's been sat in takeaway containers for the last half hour, (2) food that hasn't had to be optimized for coping well with sitting in takeaway containers for half an hour, and (3) not having to put up with any of the hassle of preparing a meal and clearing up afterwards.

To elaborate a little: 1. Many kinds of food will just not taste as good if they've been kept warm for half an hour after preparation. 2. Some kinds suffer badly enough that they just aren't available for takeaway. 3. After a takeaway/delivery meal it's still necessary to clean up dishes and cutlery and dispose of the containers and any leftover food (which may involve cleaning up the containers too, if they're of a kind you feel you ought to recycle). If you go and sit down at the restaurant you get food that's tastier because it's freshly prepared, you have the option of having a meal of a sort that wouldn't survive transporting from where it's prepared to where you live, and you don't have to do any cleanup at all.

Of course you may not care about those things, or may not think them worth the hassle of going out to eat, but it seems clear to me that they are genuine benefits (as you might think "seating and decorations and stuff" aren't) that a person might reasonably be willing to pay for in time or money or both.

Comment by gjm on Minimize Use of Standard Internet Food Delivery · 2019-02-11T16:42:14.656Z · score: 2 (1 votes) · LW · GW

I don't find that view hard to understand or hard to agree with. I wonder whether we're interpreting that passage differently from one another. Here's what I take it to mean:

"To value 15 seconds of _your_ time more than $5 of _the local pizza place's_ money is to be excessively selfish. If you cost them $5 to save yourself 15s, then you are making the world a substantially worse place on net for a trifling benefit to yourself. Decent people do not do that, and if you do it you should feel bad about it."

I can totally understand how someone might disagree with that (way 1: "yes, I really do care that much more about myself than about random other people, and I don't see any reason why I should be ashamed of that"; way 2: "in this situation the local restaurant has clearly decided that they don't mind getting $5 less in order to save their customer a phone call and hence make it more likely that they get that customer at all, and if they're OK with that decision I don't see why I shouldn't be") but it seems clearly reasonable to me.

Comment by gjm on The Hamming Question · 2019-02-11T16:36:22.018Z · score: 6 (3 votes) · LW · GW

I agree that Hamming is talking about how to be a successful scientist, but I think "as measured by things like promotions, publications, and reputation" gives the wrong impression: that Hamming's talking about how to optimize for personal success as opposed to overall impact. But the "have a reasonable attack" criterion is necessary for optimizing impact on the world, too, and I don't think Hamming would have changed his advice if he'd been convinced that (e.g.) the way to maximize promotions, publications, and reputation is to get better at self-promotion or to falsify your results or something.

Comment by gjm on Is the World Getting Better? A brief summary of recent debate · 2019-02-07T18:15:53.674Z · score: 21 (4 votes) · LW · GW

Also highly relevant:

  • Steven Pinker's reply to Hickel, posted on Jerry Coyne's "Why Evolution is True" blog.
    • Hickel's Guardian article suggests that the improving-world narrative is specifically being pushed by the very rich and their acolytes, but there's a much broader consensus around it than that.
    • Poverty has reduced no matter where you set the threshold: the whole distribution has changed. (Same as Wiseman's first point.)
    • Hickel's Guardian article suggests that it's pretty much just China that's got much less poor, but actually lots of other poor countries have improved a lot.
    • Hickel's suggestion that in pre-colonial times people in those very poor countries were less poor than GDP-based measures suggest, because they had highly non-financial assets like (communal) access to water, livestock, grazing land, etc., is "a romantic fairy tale".
    • The decline in poverty shows up in metrics like life expectancy, literacy, etc., as well as when you just measure money. (Same as Wiseman's second point.)
    • If you ask anyone who's actually spent time in these countries, they'll agree that they've got much better.
    • Hickel is a "far leftist" and he only says this stuff because he doesn't want to face the reality that capitalism has made poor people's lives much better. (Boo to the Left!)
  • Hickel's subsequent reply to Pinker, posted on his own blog.
    • Much elaboration on the alleged deficiencies of the numbers purporting to give long-term trends in poverty: they involve combining further-past figures that are mostly just about GDP (and in particular pretty much completely ignore those non-financial assets) with newer figures that actually look at consumption but are only available from 1981 onwards. You can't just splice these together and assume you get something meaningful.
    • No, your account is the fairy tale without citations. (A couple of paragraphs later, he does name some books that he says support his account of things.)
    • Reiteration of his argument from the Guardian article that in the colonial period the people of many poor countries got severely screwed over in ways that don't show up well in GDP graphs.
    • No, he's not saying that Industrialization Is Bad, he's saying that the specific way it happened in the "global south" was bad in ways that are obscured by the figures Pinker likes to quote.
    • Much elaboration on the business of whether $1.90 is a reasonable line to look at. Hickel says: no, it's way too low, for lots of reasons, and if you try to measure what's happening to poverty by looking at what's happened to the fraction of people living on <=$1.90/day then you will badly mislead yourself.
    • Yes, the whole distribution has shifted, but different bits of it have shifted in very different ways; e.g., if you use $7.40/day as your threshold instead of $1.90 (a figure one person Hickel quotes has suggested) then instead of the huge decrease Pinker likes to cite you see a decrease from 71% to 58%, which doesn't seem nearly so impressive.
    • Absolute numbers of people in poverty have actually increased, if you use the higher thresholds, because those reductions in proportion are cancelled out by increases in population. So the world has more very poor people now than it used to have. Especially if you ignore China, which is (1) rather a special case and (2) not where you want to be pointing if your argument is that neoliberal capitalism has made everyone better off, since China has taken a not-at-all-neoliberal path to greater prosperity. In poor countries outside East Asia, where in many cases neoliberal policies were enforced by the IMF and the World Bank, things look very different -- and in fact even relative poverty increased (if you use that $7.40 threshold) outside China between 1980 and 2000 when this was happening.
    • Since that time, the biggest gains have been (1) in China and (2) in Latin American countries with socialist or social-democratic governments. (Yay to the Left! Boo to the Right!)
    • Actually, we shouldn't be looking either at absolute poverty numbers or at the fraction of the world that's poor: we should be comparing the amount of poverty with the world's capacity to reduce it. That's actually worse than it used to be since the world as a whole has been getting richer much faster than the poorest parts of the world have, and it will take a shamefully long time to eradicate even "$1.90 poverty" at our present rate.
    • On those other measures of quality-of-life: Yes, they're getting better, and that's good, but they don't indicate that people are getting out of poverty, and in many cases what has improved them is not the march of neoliberal capitalist globalization but simple public health interventions. And on one specific important one, hunger, Pinker's relying on figures that are bad in the same sort of way as "$1.90 poverty" figures are bad.

I'm left somewhat less than satisfied by all of this; it seems like Pinker and Hickel are ignoring some of each other's points, presumably because they don't have nice snappy responses to them.

[EDITED substantially after posting, to include rough summaries of the two things I'm citing.]

Comment by gjm on X-risks are a tragedies of the commons · 2019-02-07T18:11:33.741Z · score: 4 (2 votes) · LW · GW

The boundary between disagreement about whether something is real and different preferences about the costs of mitigation is, alas, somewhat porous. E.g., when there was debate about the dangers of smoking, you were much less likely to think there was a lot of danger if you were either a smoker or an employee of a tobacco company than if you were neither of those things.

I don't know how strong this effect is in the domain of existential risk; it might depend on what things one counts as existential risks.

Comment by gjm on STRUCTURE: Reality and rational best practice · 2019-02-02T01:47:01.740Z · score: 6 (3 votes) · LW · GW

This seems like an outline sketch of something you might write in the future, and so do the previous things in the sequence. Is the intention (1) to edit these posts later to flesh them out, (2) to leave these here as sketches and then write separate more detailed things later, or (3) just to write bullet-point-list sketches and leave them as such?

Comment by gjm on Prediction Contest 2018: Scores and Retrospective · 2019-01-27T21:01:34.630Z · score: 6 (3 votes) · LW · GW

Are you familiar with Metaculus? That functions somewhat as a continuously-running prediction contest along similar lines, and it has a lot more than three participants. (Including some people who are active on LW.)

Comment by gjm on Open Thread January 2019 · 2019-01-16T17:54:57.175Z · score: 6 (2 votes) · LW · GW

The paper discusses two specific blackmail scenarios. One ("XOR Blackmail") is a weirdly contrived situation and I don't think any of what wo says is talking about it. The other ("Mechanical Blackmail") is a sort of stylized version of real blackmail scenarios, and does assume that the blackmailer is a perfect predictor. The paper's discussion of Mechanical Blackmail then considers the case where the blackmailer is an imperfect (but still very reliable) predictor, and says that there too an FDT agent should refuse to pay.

wo's discussion of blackmail doesn't directly address either of the specific scenarios discussed in the FDT paper. The first blackmail scenario wo discusses (before saying much about FDT) is a generic case of blackmail (the participants being labelled "Donald" and "Stormy", perhaps suggesting that we are not supposed to imagine either of them as any sort of perfectly rational perfect predictor...). Then, when specifically discussing FDT, wo considers a slightly different scenario, which again is clearly not meant to involve perfect prediction, simulation, etc., because he says things like " FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed" and "FDT agents who are known as FDT agents have a lower chance of getting blackmailed" (emphasis mine, in both cases).

So. The paper considers a formalized scenario in which the blackmailer's decision is made on the basis of a perfectly accurate prediction of the victim, but then is at pains to say that it all works just the same if the prediction is imperfect. The blog considers only imperfect-prediction scenarios. Real-world blackmail, of course, never involves anything close to the sort of perfect prediction that would make it take a logical inconsistency for an FDT agent to get blackmailed.
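
To make the perfect-vs-imperfect comparison concrete, here's a toy expected-cost calculation. All the numbers (demand, harm, predictor accuracy) are mine, not from the paper; it's a sketch of a Mechanical-Blackmail-style setup, not the paper's own model:

```python
# Assumption: the blackmailer sends the demand iff she predicts the victim
# will pay, and her prediction is correct with probability p.
def expected_cost(pays: bool, p: float, demand: float, harm: float) -> float:
    if pays:
        # A payer is (correctly) predicted to pay with probability p, then pays.
        return p * demand
    # A refuser is blackmailed only on a misprediction, then suffers the harm.
    return (1 - p) * harm

for p in (1.0, 0.99, 0.75):
    print(f"p={p}: pay={expected_cost(True, p, 1_000, 50_000):.0f}, "
          f"refuse={expected_cost(False, p, 1_000, 50_000):.0f}")
# p=1.0:  pay=1000, refuse=0     -- with a perfect predictor, refusers are never blackmailed
# p=0.99: pay=990,  refuse=500   -- the refusing policy still wins for a very reliable one
# p=0.75: pay=750,  refuse=12500 -- but not for a sloppy one
```

The point being that "refuse" can remain the better policy under imperfect prediction, but only while the predictor stays reliable enough relative to the stakes.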

So taking the paper and the blogpost to be talking only about provably-perfect-prediction scenarios seems to me to require (1) reading the paper oddly selectively, (2) interpreting the blogpost very differently from me, and (3) not caring about situations that could ever occur in the actual world, even though wo is clearly concerned with real-world applicability and the paper makes at least some gestures towards such applicability.

For the avoidance of doubt, I don't think there's necessarily anything wrong with being interested primarily in such scenarios: the best path to understanding how a theory works in practice might well go via highly simplified scenarios. But it seems to me simply untrue that the paper, still more the blogpost, considers (or should have considered) only such scenarios when discussing blackmail.

Comment by gjm on Open Thread January 2019 · 2019-01-16T02:50:02.217Z · score: 2 (1 votes) · LW · GW

It might. It's pretty long. I don't suppose you'd like to be more specific?

(If the answer is no, you wouldn't, because all you mean is that it seems like the sort of discussion that might have useful things in it, then fair enough of course.)

Comment by gjm on Open Thread January 2019 · 2019-01-15T11:23:37.140Z · score: 9 (4 votes) · LW · GW

Even if you are known to be the sort of person who will blow the whistle on blackmailers it is still possible that someone will try to blackmail you. How could that possibly involve a logical inconsistency? (People do stupid things all the time.)

Comment by gjm on Open Thread January 2019 · 2019-01-13T11:51:35.607Z · score: 6 (4 votes) · LW · GW

You say

So he's using a counterexample that's predicated on a logical inconsistency and could not happen. If a decision theory fails in situations that couldn't really happen, that's actually not a problem.

but that counterexample isn't predicated on a logical inconsistency. An FDT agent can still get blackmailed, by someone who doesn't know it's an FDT agent or who doesn't care that the outcome will be bad or who is lousy at predicting the behaviour of FDT agents.

Comment by gjm on What is a reasonable outside view for the fate of social movements? · 2019-01-08T20:02:11.546Z · score: 4 (2 votes) · LW · GW

None. Just giving impressions. I didn't do anything like 25% either :-).

Comment by gjm on What is a reasonable outside view for the fate of social movements? · 2019-01-07T14:11:11.519Z · score: 10 (6 votes) · LW · GW

I'm not particularly interested in bounties and therefore am not trying to randomize anything. I've worked backwards from the end of the (alphabetically-ordered) list and find the following:

  • World Cleanup Day: Too early to tell.
  • Women's suffrage movement: Unqualified success.
  • Women's liberation movement: Big success. Movement (or something like it) is still around, but its present-day balance of good versus harm is controversial (or should I say "problematic"?).
  • Women Against War: actually two movements. One in the 1950s was anti-Vietnam-War; it doesn't appear to have had much success, but the war did eventually end. The more recent one is, I think, too recent to evaluate.
  • Wikimedia: Still around and going strong. Not clear to me to what extent "social movement" is a good description of it, though.
  • Western Cape Anti-Eviction Campaign: Seems like it still exists and is still campaigning against evictions in South Africa. Appears to have inspired similar campaigns elsewhere in the world. I don't know how much actual success it's had.
  • White Wednesdays: this seems to be one person's campaign rather than an actual movement, and at only ~18mo old too young to assess for this sort of problem.
  • Voluntary Human Extinction Movement: clearly hasn't _succeeded_ since the human race is still here and its population is still growing; my impression is that VHEMT hasn't had any impact to speak of, but I haven't heard of it suffering anything Eternal-September-like.
  • Via Campesina: Still seems to be around and pursuing something like its original goals. I have no idea how much difference it's actually making (still less whether any difference is actually beneficial, but that's not really the point here).
  • Veganism: Still around, still pursuing original goals; I think roughly constant over time.

Comment by gjm on The Categories Were Made For Man, Not Man For The Categories · 2019-01-07T13:06:29.376Z · score: 2 (1 votes) · LW · GW

Yes, some categories are more useful than others for understanding the universe. Or for various other purposes. Categories more useful for one thing are not always more useful for another. (E.g., Scott's example of hypothetical-Solomon's Ministry of Dag and Ministry of Behemah; hypothetical-Solomon would not have been well served by trying to have a Ministry of Mammals instead.)

The fact that some categories are more useful than others doesn't stop it being true that "the categories were made for man". It just means that our choices of categories aren't 100% arbitrary. And that's OK, because Scott is not claiming that they are.

Comment by gjm on Optimization Regularization through Time Penalty · 2019-01-02T02:43:52.205Z · score: 2 (1 votes) · LW · GW

I'm not convinced that relativity is really a problem: it looks to me like you can probably deal with it as follows. Instead of asking about the state of the universe at time T and making T one parameter in the optimization, ask about the state of the universe within a spacetime region including O (where O is a starting-point somewhere around where the AI is to start operating), where now that region is a parameter in the optimization. Then instead of a penalty that grows with T, use a penalty that grows with some measure of the size of that region. (You might use something like total computation done within the region, but that might be hard to define and, as OP suggests, it might not penalize everything you care about.) You might actually want to use the size of the boundary rather than of the region itself in your regularization term, to discourage gerrymandering. (Which might also make some sense in terms of physics because something something holographic principle something something, but that's handwavy motivation at best.)
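
To make that slightly more concrete, here's one way the modified objective might be written (notation entirely mine, not from the original post): pick a policy π and a spacetime region R containing O, and optimize

$$\max_{\pi,\; R \ni O}\; \mathbb{E}_\pi\big[U(\text{state of the universe within } R)\big] \;-\; \lambda\,\mu(\partial R)$$

where μ(∂R) is some measure of the size of R's boundary and λ plays the same regularizing role that the coefficient on T did in the original one-parameter version.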

Of course, optimizing over the exact extent of a more-or-less-arbitrary region of spacetime is much more work than optimizing over a single scalar parameter. But in the context we're looking at, you're already optimizing over an absurdly large space: that of all possible courses of action the AI could take.

Comment by gjm on An Extensive Categorisation of Infinite Paradoxes · 2018-12-15T13:51:52.294Z · score: 4 (2 votes) · LW · GW

OK! I'll look forward to those future posts.

(I'm a big surreal number fan, despite the skeptical tone of my comments here, and I will be extremely interested to see what you're proposing.)

Comment by gjm on An Extensive Categorisation of Infinite Paradoxes · 2018-12-14T20:35:08.439Z · score: 9 (3 votes) · LW · GW

So far as anyone knows, no actual processes in the actual world are accurately described by surreal numbers. If none are, then I suggest the same goes for the "nearest possible worlds" in which, say, it is possible for Mr Trump to be faced with the sort of situation described under the heading "Trumped". But you can have, in a universe very much like ours, an endless succession of events of order-type ω. If the surreal numbers are not well suited to describing such situations, so much the worse for the surreal numbers.

And when you say "I don't think the ordinary notion of sequence makes sense", what it looks like to me is that you have looked at the ordinary notion of sequence, made the entirely arbitrary choice that you are only prepared to understand it in terms of surreal numbers, and indeed not only that but made the further arbitrary choice that you are only prepared to understand it if there turns out to be a uniquely defined surreal number that is the length of such a sequence, observed that there is not such a surreal number, and then said not "Oh, whoops, looks like I took a wrong turn in trying to model this situation" but "Bah, the thing I'm trying to model doesn't fit my preconceptions of what the model should look like, therefore the thing is wrong". You can't do that! Models exist to serve the things they model, not the other way around.

It's as if I'd just learned about the ordinals, decided that all infinite things needed to be described in terms of the ordinals, was asked something about a countably infinite set, observed that such a set is the same size as ω but also the same size as ω+1 and ω·2, and said "I don't think the notion of countably infinite set makes sense". It makes perfectly good sense, I just (hypothetically) picked a bad way to think about it: ordinals are not the right tool for measuring the size of a (not-necessarily-well-ordered) set. And likewise, surreal numbers are not the right tool for measuring the length of a sequence.

Don't get me wrong; I love the surreal numbers, as an object of mathematical study. The theory is gorgeous. But you can't claim that the surreal numbers let you resolve all these paradoxes, when what they actually allow you to do is to replace the paradoxical situations with other entirely different situations and then deal with those, while rejecting the original situations merely because your way of trying to model them doesn't work out neatly.

Comment by gjm on An Extensive Categorisation of Infinite Paradoxes · 2018-12-14T03:11:38.421Z · score: 10 (5 votes) · LW · GW

If the number of days is, specifically, ω, then the days are numbered 0, 1, 2, ..., with precisely the (ordinary, finite) non-negative integers occurring. They are all smaller than ω. The number ω isn't the limit or the least upper bound of those finite integers, merely the simplest thing bigger than them all.
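
In standard Conway bracket notation (mine, added for illustration), this is visible directly from the simplest-number construction:

$$\omega = \{\,0, 1, 2, 3, \dots \mid \,\}, \qquad \omega - 1 = \{\,0, 1, 2, 3, \dots \mid \omega\,\},$$

so ω−1 is also bigger than every finite integer while being smaller than ω, which is why ω can't be any sort of least upper bound.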

If you are tempted to say "No! What I mean by calling the number of days ω is precisely that the days are numbered by all the omnific integers below ω." then you lose the ability to represent a situation in which Trump suffers this indignity on a sequence of days with exactly the order-type of the first infinite ordinal ω, and that seems like a pretty serious bullet to bite. In particular, I think you can't call this a solution to the "Trumped" paradox, because my reading of it -- even as you tell it! -- is that it is all about a sequence of days whose order-type is ω.

Rather a lot of these paradoxes are about situations that involve limiting processes of a sort that doesn't seem like a good fit for surreal numbers (at least so far as I understand the current state of the art when it comes to limiting processes in the surreal numbers, which may not be very far).

Comment by gjm on A hundred Shakespeares · 2018-12-13T19:00:20.343Z · score: 6 (4 votes) · LW · GW

Though spelling was pretty flexible in and after Shakespeare's time! He doesn't seem to have used "Shakespear" himself (it looks as if "Shakspere" may have been his usual spelling) but that was the usual spelling in the late 18th century :-).

Comment by gjm on The housekeeper · 2018-12-11T15:12:33.391Z · score: 7 (2 votes) · LW · GW

Was the name "Event Horizon" chosen specifically in reference to what happens when you pack things as densely into a small space as you can?

Comment by gjm on LW Update 2018-11-22 – Abridged Comments · 2018-12-11T14:56:57.576Z · score: 20 (7 votes) · LW · GW

A bunch of us have commented to say how much we hate the comment-abridging "feature". It would be good to hear from the other side.

Hear ye, hear ye! If you (1) think comment-abridgement is an improvement and (2) aren't one of the admins responsible for implementing it (whose reasons I think we already know), please comment below and tell us why. Thanks!

Comment by gjm on LW Update 2018-11-22 – Abridged Comments · 2018-12-10T02:20:39.393Z · score: 14 (4 votes) · LW · GW

I've already been whingeing at Ben in PMs about the comment-abridging system; Ben, if you're reading this, feel free to ignore it since I promised I'd shut up about it :-). Anyway, this is just to go on record as really disliking this "feature", which I think encourages superficial skimming over careful reading, makes it a big pain to engage fully with what others have said (because you have to expand lots of things separately), and just generally feels to me as if it does nothing I can see why I should actually want.

I agree with Raemon that any choice about how comments behave will shape what sort of conversations are had. I think this particular choice will encourage the wrong sort of conversations. And it makes LW harder to read.

Comment by gjm on Moving Factward · 2018-11-30T15:45:35.525Z · score: 4 (2 votes) · LW · GW

There are various different culinary measures called a cup, but if filled with water they all give you between 1/5 and 1/4 of a kilogram. Actual cups-for-drinking-from can give wildly different amounts but a kilogram would correspond to a litre of water, and vessels that large usually have names other than "cup".

Comment by gjm on Summary: Surreal Decisions · 2018-11-30T15:42:53.675Z · score: 2 (1 votes) · LW · GW

For me right now (Firefox on Windows 10) neither those mysterious extra symbols nor any placeholders for them appear.

The actual stream of bytes looks something like this: Completeness%3A%20%E2%88%80x%2C%20y%20%E2%88%88%20X%2C%20either%20x%20%E2%89%BC%20y%20or%20y%20%E2%89%BC%5C%5Cu0016%20x

The result of "unquoting" this is: Completeness% [forall]x, y [element] X, either x [leq] y or y [leq]\\u0016 x where the things in square brackets represent single Unicode characters each represented by three UTF-8-encoded octets.

This stuff is all inside a <script type="text/inject-data"> element, which seems to be some Meteor thing and I don't know what gets done with it -- but presumably it's being processed by something that interprets backslashed Unicode escapes. \u0016 is an old ASCII control character (yes, the ASCII control characters have Unicode code points assigned to them), the one called SYN. I have absolutely no idea what is the "correct" behaviour for a web browser asked to display a SYN character.
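
If anyone wants to reproduce the decoding, the percent-unquoting step can be done with the Python standard library (the byte stream here is just the one quoted above):

```python
from urllib.parse import unquote

raw = ("Completeness%3A%20%E2%88%80x%2C%20y%20%E2%88%88%20X%2C%20either"
       "%20x%20%E2%89%BC%20y%20or%20y%20%E2%89%BC%5C%5Cu0016%20x")
print(unquote(raw))
# Completeness: ∀x, y ∈ X, either x ≼ y or y ≼\\u0016 x
```

Note that the two literal backslashes before u0016 survive the unquoting; it's whatever consumes the <script> element afterwards that must be turning \u0016 into an actual SYN character.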

Comment by gjm on Moving Factward · 2018-11-30T02:20:20.550Z · score: 4 (2 votes) · LW · GW

You must have larger cups than me.

Comment by gjm on Moving Factward · 2018-11-29T15:55:36.418Z · score: 10 (6 votes) · LW · GW

I think the analogy to the round earth is unhelpful, for essentially the reasons Said Achmiz's comment alludes to. More relevant would be an infinite plane (where you can always move west and there is a consistent global notion of west-ness).

Comment by gjm on Genetically Modified Humans Born (Allegedly) · 2018-11-29T15:53:35.473Z · score: 4 (3 votes) · LW · GW

In fairness, "bureaucracy" is an unusually difficult word to spell correctly (though splitting it as bureau/-cracy, the former being an English word and the latter the suffix in democracy, autocracy, etc., helps) and Christian is not a native anglophone.

Comment by gjm on Summary: Surreal Decisions · 2018-11-27T23:59:56.390Z · score: 19 (8 votes) · LW · GW

It's an appealing idea (and one that has been informally around in LW-space for many years). But I wonder how useful it really is. Consider two classes of infinite-utility scenario.

The first is the sort considered in this paper: some outcome is merely decreed to be infinitely good or bad (e.g., because Christians contend that eternal salvation is a good infinitely superior to anything earthly). In this case, an obvious question is how to map this alleged infinite goodness or badness to a concrete surreal value. Are the glories of heaven worth exactly ω utility? How do we know it's that rather than ω+1 or 2ω or something?

The second (and to my mind more interesting) is where the infinite utilities arise from combining infinitely many finite utilities. Rather than just decreeing that heaven is infinitely good, perhaps we should consider it as an infinite succession of finitely good days (though theologians would quibble with that on multiple grounds). Or perhaps the universe is spatially infinite and contains (e.g.) infinitely many exact copies of our earth, and we need to model that somehow. Or perhaps we're contemplating an Everett-style quantum multiverse and the underlying Hilbert space is too big for the measures we care about to be finite-valued. (Note: this one may be bullshit; I haven't thought about it carefully.) This sort of scenario seems like a better prospect for formalization: we can calculate which infinities we need just by adding up the finite ones. Except that we can't, because there doesn't appear to be a Right Way to compute infinite sums in the surreal numbers. For instance, consider the sum 1+1+1+... with ω terms. That's gotta be ω, right? It certainly looks like it should be -- but note, e.g., that ω certainly isn't the least upper bound of the finite sums we encounter on the way; for instance, ω−1 and ω/2 are smaller upper bounds.
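
Spelled out (again in my notation, not the original's): every finite partial sum is a natural number n, and

$$n \;<\; \omega/2 \;<\; \omega - 1 \;<\; \omega \quad\text{for every finite } n,$$

so the set of partial sums has no least upper bound in the surreals at all, and "the sum of ω ones" has no canonical value.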

Let's suppose we somehow have a solution to these problems. Are we ready to start using surreal numbers (or, who knows?, some other number system bigger than the reals) to solve infinite-utility decision problems? Nope. Consider e.g. the following problem, which if it isn't one of the motivational examples in the paper under discussion here is at least of the same type. There are infinitely many people. Infinitely many are really happy (utility +1) and infinitely many are really unhappy (utility −1). We have the choice between (1) leaving them all alone, (2) making a million unhappy people happy, and (3) making a million happy people unhappy. Naive real-valued decision theory is no good here because all the utilities are undefined (infinity minus infinity). But, even if we suppose we've got a way of computing infinite sums of surreal numbers, and it works kinda like the infinite sums we already know how to compute, we're still screwed, because those infinite sums are order-dependent. If we line our people up as +1, +1, −1, +1, +1, −1, ... then we "obviously" get infinite positive utility. If we line them up as −1, −1, +1, −1, −1, +1, ... then we "obviously" get infinite negative utility. But there's no obvious way to choose the ordering, and what do we do if that action that makes a million unhappy people happy also rearranges them to make the second order more natural somehow when the first was more natural before?
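
To see the order-dependence explicitly (grouping mine): both arrangements use exactly the same people, countably many at +1 and countably many at −1, but

$$\underbrace{(+1+1-1)}_{+1}+\underbrace{(+1+1-1)}_{+1}+\cdots \quad\text{vs.}\quad \underbrace{(-1-1+1)}_{-1}+\underbrace{(-1-1+1)}_{-1}+\cdots$$

has partial sums growing without bound in the first arrangement and shrinking without bound in the second.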

Nothing in the Chen&Rubio paper seems to me to shed any light on these issues, and without that it seems to me we're not really any better off with surreal utilities than we were with real utilities: the only problems we can solve better than before are ones artificially constructed to be solvable with the new machinery.

Comment by gjm on Quantum Mechanics, Nothing to do with Consciousness · 2018-11-27T00:00:25.967Z · score: 8 (4 votes) · LW · GW

I think there are two quite separate issues here.

1. Does consciousness play a role in quantum effects? (This idea is pretty clearly in the "woo" category, although there are physicists who seem to take it seriously, and dismissing it as monstrously improbable given that all evidence to date is that the basic principles of the universe operate at a level much lower than that of consciousness seems quite safe.)

2. Do quantum effects play a role in consciousness? (This idea -- which of course needs some elaboration since in some sense everything is quantum effects, deep down -- isn't altogether crazy, though specific theories along these lines have been very handwavy and from what little I know not at all plausible, and personally I would be very surprised if anything of the sort turned out to be right.)

I think OP is entirely concerned with #1, but doesn't make it explicit that #2 is not in view (and e.g. if I am understanding Volodymyr Frolov's comment correctly, #2 is what he's addressing, so at the very least it's possible to think that the article is about #2 as well as #1). Might be worth clarifying.

Comment by gjm on If You Want to Win, Stop Conceding · 2018-11-23T12:12:51.765Z · score: 5 (3 votes) · LW · GW

What are "karma" type reasons?

Comment by gjm on [Insert clever intro here] · 2018-11-22T12:57:51.611Z · score: 6 (4 votes) · LW · GW

Welcome! Please don't take the downvotes as a sign that you aren't welcome here. (They probably do indicate that things that look like proselytizing won't be well received, though.)

I think translating "λόγος" as "rationality" is a bit of a stretch. I don't know of any English translations that even render it as "reason", which is more defensible. I expect you're right that the authors of the New Testament didn't see any conflict between their beliefs and reason; people usually don't, whether such conflict exists or not; in any case, our epistemic situation isn't the same as theirs and it's possible that in the intervening ~2k years we've learned and/or forgotten things that make the most reasonable conclusion for us different from the most reasonable conclusion for them. (Examples of the sort of thing I mean: 1. The success of science over the last few centuries means that the proposition "everything that happens has a natural explanation" is more plausible for us than for them. 2. The author of John's gospel, or his sources, may have actually met Jesus, and perhaps something about doing so was informative and convincing in a way that merely reading about him isn't. 3. We know the history of Christianity since their time, which might make it more credible -- after all, it survived 2k years and became the world's dominant religion, which has to count for something -- or less credible -- after all, people have done no end of terrible things in its name, which makes it less likely that a benevolent god is looking on it with special approval. 4. We have different examples available to us of other religious movements and how they've developed; e.g., we might compare the early days of Christianity with those of something like Mormonism, and they might compare it with the Essenes.)

Comment by gjm on Preschool: Much Less Than You Wanted To Know · 2018-11-22T12:39:22.797Z · score: 14 (4 votes) · LW · GW

You don't know that doing nothing would have achieved the same outcome with the Lego bricks. Perhaps what she needed was to have someone show her what to do and then have some time elapse.

(That's not an argument for trying to teach her every day, of course. But if you did and she eventually figured it out, you wouldn't necessarily be wrong to give your teaching some of the credit. Explaining once and waiting might be just as effective, but that doesn't mean that explaining not at all and just waiting would have been.)

Comment by gjm on Topological Fixed Point Exercises · 2018-11-19T23:17:22.938Z · score: 14 (4 votes) · LW · GW

Inappropriately highbrow proof of #4 (2d Sperner's lemma):

This proves a generalization: any number of dimensions, and any triangulation of the simplex in question. So, the setup is as follows. We have an n-dimensional simplex, defined by n+1 points in n-dimensional space. We colour the vertices with n+1 different colours. Then we triangulate it -- chop it up into smaller simplexes -- and we extend our colouring somehow in such a way that the vertices on any face (note: a face is the thing spanned by any subset of the vertices) of the big simplex are coloured using only the colours from the vertices that span that face. And the task is to prove that there are an odd number of little simplexes whose vertices have all n+1 colours.

This colouring defines a map from the vertices of the triangulation to the vertices of the big simplex: map each triangulation-vertex to the simplex-vertex that's the same colour. We can extend this map to the rest of each little simplex by linear interpolation. The resulting thing is continuous on the whole of the big simplex, so we have a continuous map (call it f) from the big simplex to itself. And we want to prove that we have an odd number of little simplices whose image under f spans the whole thing. (Call these "good" simplices.)

We'll do it with two ingredients. The easy one is induction: when proving this in n dimensions we shall assume we already proved it for smaller numbers of dimensions. The harder one is homology, a standard tool in algebraic topology. More precisely we'll do homology mod 2. It associates with each topological space X and each dimension d an abelian group H_d(X), and the key things you need to know are (1) that if you have f : X -> Y then you get an associated group homomorphism f* : H_d(X) -> H_d(Y), (2) that H_d(a simplex) is the cyclic group of order 2 if d=0, and the trivial group otherwise, and (3) that H_d(the boundary of a simplex) is the cyclic group of order 2 if d=0 or d = (dimension of the simplex) - 1, and the trivial group otherwise. Oh, and one other crucial thing: if you have f : X -> Y and g : Y -> Z then (gf)* = g*f*: composition of maps between topological spaces corresponds to composition of homomorphisms between their homology groups.

(You can do homology "over" any commutative ring. The groups you get are actually modules over that ring. It happens that the ring of integers mod 2 is what we want to use. A simplex is, topologically, the same thing as a ball, and its boundary the same thing as a sphere.)

OK. So, first of all suppose not only that the number of good simplices isn't odd, but that it's actually zero. Then f maps the whole of our simplex to its boundary. Let's also consider the rather boring map g from the boundary to the whole simplex that just leaves every point where it is. Now, if the thing we're trying to prove is true in lower dimensions then in particular the map gf -- start on the boundary of the simplex, stay where you are using g, and then map to the boundary of the simplex again using f -- has an image that, so to speak, covers each boundary face of the simplex an odd number of times. This guarantees -- sorry, I'm eliding some details here -- that (gf)* (from the cyclic group of order 2 to the cyclic group of order 2) doesn't map everything to the identity. But that's impossible, because (gf)* = g*f* factors through H_{n-1}(the whole simplex), which is the trivial group.
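
In diagram form (notation mine), with Δ the big simplex and ∂Δ its boundary, the contradiction is that the composite

$$H_{n-1}(\partial\Delta) \xrightarrow{\;g_*\;} H_{n-1}(\Delta) \xrightarrow{\;f_*\;} H_{n-1}(\partial\Delta)$$

must be the zero map because the middle group is trivial, while the induction hypothesis forces it to be the identity on the cyclic group of order 2.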

Unfortunately, what we actually need to assume in order to prove this thing by contradiction is something weaker: merely that the number of good simplices is even. We can basically do the same thing, because homology mod 2 "can't see" things that happen an even number of times, but to see that we need to look a bit further into how homology works. I'm not going to lay it all out here, but the idea is that to build the H_d(X) we begin with a space of things called "chains" which are like linear combinations (in this case over the field with two elements) of bits of X, we define a "boundary" operator which takes combinations of d-dimensional bits of X and turns them into combinations of (d-1)-dimensional bits in such a way that the boundary of the boundary of anything is always zero, and then we define H_d(X) as a quotient object: (d-dimensional things with zero boundary) / (boundaries of (d+1)-dimensional things). Then the way we go from f (a map of topological spaces) to f* (a homomorphism of homology groups) is that f extends in a natural way to a map between chains, and then it turns out that this map interacts with the boundary operator in the "right" way for this to yield a map between homology groups. And (getting, finally, to the point) if in our situation the number of good simplices is even, then this means that the map of chains corresponding to f sends anything in n dimensions to zero (essentially because it means that the interior of the simplex gets covered an even number of times, and when working mod 2, even numbers are zero), which means that we can think of f* as mapping not to the homology groups of the whole simplex but to those of its boundary -- and then the argument above goes through the same as before.

I apologize for the handwaving above. (Specifically, the sentence beginning "This guarantees".) If you're familiar with this stuff, it will be apparent how to fill in the details. If not, trying to fill them in will only add to the pain of what's already too long a comment :-).

This is clearly much too much machinery to use here. I suspect that if we took the argument above, figured out exactly what bits of machinery it uses, and then optimized ruthlessly we might end up with a neat purely-combinatorial proof, but I regret that I am too lazy to try right now.

Comment by gjm on Is Clickbait Destroying Our General Intelligence? · 2018-11-19T22:39:22.867Z · score: 2 (1 votes) · LW · GW

I'm like 96% sure it was intended to apply to the question of how much of the work in making an AGI is about "cultural general-intelligence software". But yeah, I agree that if we destroy our civilization it could take a long time to get it back. Not just because building a civilization takes a long time; also because there are various resources we've probably consumed most of the most accessible bits of, and not having such easy access to coal and oil and minerals could make building a new civilization much harder. But I'm not sure what hangs on that (as opposed to the related but separate question of whether we would rebuild civilization if we lost it) -- the destruction of human civilization would be a calamity, but I'm not sure it would be a much worse calamity if it took 300k years to repair than if it took "only" 30k years.

Comment by gjm on History of LessWrong: Some Data Graphics · 2018-11-17T23:12:22.084Z · score: 3 (2 votes) · LW · GW

If it's a computed trend-line rather than something someone eyeballed then in my book that is a fitted curve. Anyway, that makes sense; presumably it goes below zero somewhere a little to the left of where it stops. Given the obvious discontinuity, it might have made more sense to plot separate lines for before and after...

Comment by gjm on Is Clickbait Destroying Our General Intelligence? · 2018-11-17T14:53:31.006Z · score: 6 (3 votes) · LW · GW

If it took 300k years to develop human software, and 4-13M years to develop human hardware (starting from our common ancestor with chimpanzees), that seems consistent with Eliezer's claim that developing the software shouldn't take all that long _compared with the hardware_. (Eliezer doesn't say "hardware" but "hard-software", but unless I misunderstand he's talking about something fairly close to "software that implements what human brain hardware does".)

[EDITED to add:] On the other hand, you might expect software to evolve faster than hardware, at any given level of underlying complexity/difficulty/depth, because the relevant timescales for selection of memes are shorter than those for genes. So actually I'm not sure how best to translate timelines of human development into predictions for AI development. There's no very compelling reason to assume that "faster for evolution" and "faster for human R&D" are close to being the same thing, anyway.

Comment by gjm on Is Clickbait Destroying Our General Intelligence? · 2018-11-17T14:48:36.133Z · score: 2 (1 votes) · LW · GW

Dunno whether it's an unpopular reaction in any particular circles, but it's pretty much how I felt about _Neuromancer_ too.

Comment by gjm on The Inspection Paradox is Everywhere · 2018-11-16T20:16:12.625Z · score: 6 (4 votes) · LW · GW

It's a name for an important special case of "different denominators lead to different averages", where the cause of the perhaps-unexpected denominator is that some quantities you're interested in estimating correlate with how likely you are to observe them.

That correlation is a key point here, and any description of the effect that doesn't include it is describing at most part of it.
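
A quick numerical illustration of that correlation (numbers mine), using the classic class-size version of the inspection paradox:

```python
sizes = [10, 10, 10, 100]  # four classes at a hypothetical school

# Average class size from the school's point of view (denominator: classes):
per_class = sum(sizes) / len(sizes)                    # 32.5

# Average class size experienced by a random student (denominator: students);
# bigger classes are proportionally more likely to be the one you observe:
per_student = sum(s * s for s in sizes) / sum(sizes)   # ~79.2

print(per_class, per_student)
```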

Comment by gjm on History of LessWrong: Some Data Graphics · 2018-11-16T20:10:40.708Z · score: 3 (2 votes) · LW · GW

That fitted curve looks pretty dubious in its earlier parts. (Maybe I'm misunderstanding and it isn't a fitted curve at all?)

Comment by gjm on Debate Rules In Benjamin Franklin's Junto · 2018-10-31T01:38:17.671Z · score: 10 (3 votes) · LW · GW

It looks to me as if 1 and 2 are indeed describing rules of Junto debate, while 3 and 4 (despite the heading that appears in the linked document above the section from which they are taken) are rather describing Franklin's conduct later in life, inspired by his experiences in the Junto.

[EDITED to add:] In the actual autobiography that heading does not appear; where Franklin says what he did "agreeably to the old laws of our Junto" I don't think he is claiming that the practice he describes is itself required by those laws; the term "old" is interesting, but I think the Junto was still active at that time -- Franklin says it was at about the same time as the establishment of the Philadelphia public library, which was ~1730, and elsewhere in the autobiography he implies the Junto's continued operation in the late 1730s.

Comment by gjm on Debate Rules In Benjamin Franklin's Junto · 2018-10-31T01:35:47.713Z · score: 27 (12 votes) · LW · GW

I think it's possible that you're interpreting some key words in that provision differently from how Franklin meant them. But I am not very confident either about how he meant them, or about how you are taking them. I think "warmth" may mean "anger" or "heated argument" (and I think you may be taking it to mean "friendliness"); I think "positive" may mean "forceful" (and I think you may be taking it to mean "approving"). The bit about "direct contradiction" seems like evidence for the meanings I am conjecturing; but, again, I am not confident in my guesses about the nuances of 18th-century American English.

Comment by gjm on In praise of heuristics · 2018-10-30T03:34:19.140Z · score: 2 (1 votes) · LW · GW

I still don't understand what you're saying about that first objection. What's this model in which it "cannot be true" that neither A nor B has higher status than the other?

If you're saying that that can never happen in a "purely relative" system, then what I don't understand is why you think that. If you're saying something else, then what I don't understand is what other thing you're saying.

It seems to me that there's no inconsistency at all between a "purely relative" system and equal or incomparable statuses. Equal status for A and B means that all status effects work the same way for A as for B (and in particular that if there's some straightforward status-driven competition between A and B then, at least as far as status goes, they come out equal). Incomparable status would probably mean that there are different sorts of status effect, and some of them favour A and some favour B, such that in some situations A wins and in some B wins.

I don't dispute (indeed, I insist on) the point that it's vanishingly rare to have no other factors. And I bet you're right that cleanly separating status effects from other effects is very difficult. It's not clear to me that this is much of an objection to "purely relative" models of status in contrast to other models. I guess the way in which it might be is: what distinguishes a "purely relative" model is that all you are entitled to say about status is what you can determine from examining who wins in various "status contests", and since pure status contests are very rare and disentangling the effects in impure status contests is hard you may not be able to tell much about who wins. That's all true, but I think there are parallel objections to models of "non-relative" type: if it's hard to tell whether A outranks B because status effects are inseparable from other confounding effects, I think that makes it just as hard to tell (e.g.) what numerical level of status should be assigned to A or to B.

Comment by gjm on In praise of heuristics · 2018-10-26T23:31:42.596Z · score: 2 (1 votes) · LW · GW

I don't think I understand your first objection. It seems to say that when there's a dispute between A and B, and neither A nor B has higher status than the other, onlookers don't give the benefit of the doubt to either ... which is precisely what the relative-status model we're talking about predicts should happen. How is this an objection?

On the second objection: I agree that many may assume that higher formal-hierarchy position makes a person more likely to be in the wrong. But status is not quite the same thing as position in a formal hierarchy, and I think it's possible to have both "assume lower-status people have wronged higher-status people rather than the other way around" and "assume formal superiors have wronged formal inferiors rather than the other way around" as heuristics. Also ... consider why people might have that latter heuristic. Presumably it's because higher-ups not infrequently do abuse their authority. Which is to say, they wrong people lower down in the hierarchy and get away with it because of their position.

Of course "no other factors at all" is a vanishingly rare situation. My expectation is that status effects are frequently present but often not alone, and I focused on the situation where there are no other effects for the sake of clarity. When (as usual) there are other effects, the final outcome will result from combining all the effects; the specific effects of status will be hard to disentangle but I see no reason to expect them to vanish just because other things are also present.

Comment by gjm on In praise of heuristics · 2018-10-26T13:28:36.751Z · score: 2 (1 votes) · LW · GW

First of all, to clarify, questions like those will never be governed purely by considerations of status, and in some cases other factors will matter much more. (Bob might be much higher-status than Carol but I might like Carol much better, or be hoping to persuade her to sleep with me or offer me a job or something.) But to whatever degree those questions' answers are influenced by status it will be relative "zero-sum" status that matters, because those are relative zero-sum questions.

What I wrote makes it sound like I was suggesting that status is the only, or the dominant, thing determining the answers to those questions. My apologies for writing unclearly. (I think it was just unclarity -- I don't think I thought status was dominant in determining those answers. But it's easy to forget one's past states of mind.)

I don't know whether that suffices to clear things up. In case it doesn't, some more words about the "Bob and Carol in a fight" scenario: Suppose you see two acquaintances having an argument. Usually this indicates that at least one of them has been unreasonable somehow. Your initial assumption on seeing them cross at one another might be that Bob has been unreasonable to Carol, or that Carol has been unreasonable to Bob. (If you have a sufficiently well trained mind, you may be able to avoid such assumptions. I think many people can't.) In the -- admittedly unlikely -- event that there are no other factors at all to favour one of those assumptions over the other, I am guessing (and it is only a guess) that on balance the higher-status person would tend to get the benefit of the doubt, and more people would jump to "Low-Status-Guy has done something stupid / creepy / offensive" than to the equivalent guess about High-Status-Guy.

I agree that celebrity endorsements are probably too complicated to be useful here. I picked on them because they initially seemed like they might be a nice example of non-relative status effects, but the more I thought about it the less convinced I was of that.

Comment by gjm on In praise of heuristics · 2018-10-25T22:06:57.929Z · score: 4 (2 votes) · LW · GW

Aside from what I take to be a slip-up (your postulate 3 should have {Alice, Bob, Carol} where it currently has {Alice, Carol, Dave}): yes, you've correctly described the purely-relative picture of status that I think Bucky had in mind.

I think there is some non-tautologous content associated with this sort of model -- namely, the claim that actual status relations are well modelled by this sort of system. That'll be true in so far as status governs the answers to questions like "if I have to pick one person to do a favour to, which will it be?" or "if Bob and Carol are in a fight, which of them do I expect to be in the right before I have any other information?" and false in so far as status governs the answers to questions like "if it's Dave's birthday and I'm buying him a present, how much time and money shall I put into it?" or "when Carol makes a statement, how inclined am I to believe it?".

It feels to me -- but I have no reason to take my feelings as authoritative -- as if status is mostly relative but some of these "absolute" questions are influenced by status too. Think about celebrity endorsements for products (of the kind where the celebrity isn't famous for being expert in a relevant domain): their point is that when people see a very high-status person using a particular brand of $thing and saying "$thing is great!" they're more inclined to use $thing themselves, and I don't think it's plausible that this is driven by some kind of comparison of $famous_person and all the other people in the world who might not be endorsing $thing.

... But maybe this is still relative; maybe the influence I gain, if I become famous, over what brand of shirt or phone people buy, comes at the cost of other famous people's influence. This would be true e.g. if everyone allocated a roughly fixed amount of attention to seeing what high-status other people are doing so as to copy it, so that when I become famous I'm competing for that attention with Bill Gates and Kanye West, and they get just a bit less of it. So maybe celebrity endorsements aren't a good example of non-relative status effects after all.

"Future of Go" summit with AlphaGo

2017-04-10T11:10:40.249Z · score: 3 (4 votes)

Buying happiness

2016-06-16T17:08:53.802Z · score: 38 (38 votes)

AlphaGo versus Lee Sedol

2016-03-09T12:22:53.237Z · score: 19 (19 votes)

[LINK] "The current state of machine intelligence"

2015-12-16T15:22:26.596Z · score: 3 (4 votes)

[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem

2015-08-17T08:41:45.179Z · score: 15 (15 votes)

Group Rationality Diary, March 22 to April 4

2015-03-23T12:17:27.193Z · score: 6 (7 votes)

Group Rationality Diary, March 1-21

2015-03-06T15:29:01.325Z · score: 4 (5 votes)

Open thread, September 15-21, 2014

2014-09-15T12:24:53.165Z · score: 6 (7 votes)

Proportional Giving

2014-03-02T21:09:07.597Z · score: 10 (13 votes)

A few remarks about mass-downvoting

2014-02-13T17:06:43.216Z · score: 27 (42 votes)

[Link] False memories of fabricated political events

2013-02-10T22:25:15.535Z · score: 17 (20 votes)

[LINK] Breaking the illusion of understanding

2012-10-26T23:09:25.790Z · score: 19 (20 votes)

The Problem of Thinking Too Much [LINK]

2012-04-27T14:31:26.552Z · score: 7 (11 votes)

General textbook comparison thread

2011-08-26T13:27:35.095Z · score: 9 (10 votes)

Harry Potter and the Methods of Rationality discussion thread, part 4

2010-10-07T21:12:58.038Z · score: 5 (7 votes)

The uniquely awful example of theism

2009-04-10T00:30:08.149Z · score: 36 (47 votes)

Voting etiquette

2009-04-05T14:28:31.031Z · score: 10 (16 votes)

Open Thread: April 2009

2009-04-03T13:57:49.099Z · score: 5 (6 votes)