Why can't the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I've found a factor to update downwards. This feels like it relies on the reasoner's embedding, though, so maybe it's cheating, but it's not clear to me why it doesn't count.
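(A toy numerical version of that update, as a minimal sketch; the likelihoods are invented stand-ins for a fallible factor-finding process, not canonical values:)

```python
# Toy Bayesian update on "4 is prime" after observing a found factor.
# All numbers here are illustrative assumptions.
prior_prime = 0.5              # the 50-50 prior that 4 is prime
p_factor_if_prime = 0.001      # 'finding' a nontrivial factor of a prime
                               # should be near-impossible (reasoning error)
p_factor_if_composite = 0.99   # composites usually yield a factor

# Bayes' theorem: P(prime | factor found)
posterior = (p_factor_if_prime * prior_prime) / (
    p_factor_if_prime * prior_prime
    + p_factor_if_composite * (1 - prior_prime)
)
print(posterior)  # ~0.001: the deduction acted as strong evidence downwards
```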
Begin here and read up to part 5 inclusive. On the margin, getting a basic day-in, day-out wardrobe of nice, well-fitting jeans/chinos (maybe chino or cargo shorts if you live in a hot place) and t-shirts is far more valuable when you start approaching fashion than hats. Hats are a flair that comes after everything else in the outfit you're wearing them with. Maybe you want to just spend a few hours one-off choosing a hat and don't want to think about all the precursors, but that can actually make you backslide. If you look at their advice about hats, you'll see that pork pie hats and fedoras are recommended, but it's well known how badly a fedora can backfire if you aren't very careful.
(For example, I'm still in the 'trying new t-shirts/shirts/jeans/chinos/shoes with an occasional jumper purchase' phase after about a year to 18 months. Still haven't even got to shorts. You might progress faster if you do shops more often or have a higher shopping budget. But suffice it to say, hats are a long way in.)
There is a known phenomenon of guys walking around with a fedora or brimmed hat or whatever with a poorly coordinated outfit, dirty clothes, odour, bad fit, etc. basically not having the basics down before going intermediate. In these cases you will lose points with a lot of people because they will cringe or think you're trying to compensate. You may or may not have been engaging in similar thinking when making this thread, but watch out for that failure mode.
Supplementary reading and good to get a yay or nay before buying something, or to get recommendations within a type of garment: /r/malefashionadvice/
Fashionability and going for safety helmets/caps might be divergent strategies though. If you were purely optimizing the former, what I say above might be relevant. If the latter, just getting some Crasches and calling it a day might be enough.
Yes!! I've also independently come to the conclusion that basic real analysis seems important for these sorts of general lessons. In fact I suspect that seeing the reals constructed synthetically, or the Peano --> Integers --> Rationals --> Dedekind cuts construction, or some similar rigorous construction of an intuitively 'obvious' concept, is probably a big boost in accessing the upper echelons of philosophical ability. Until you've really seen how axioms work and broken some intuitive thing down to the level that you can see how a computer could verify your proofs (at least in principle), I kind of feel like you haven't done the work to really understand the concepts of proof or definition, or seen a really full reduction of a thing to basic terms.
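(Tangentially, and purely as my own illustration of the 'computer could verify your proofs' point: in a proof assistant like Lean, even kindergarten arithmetic is checked mechanically against the inductive definition of the naturals.)

```lean
-- Lean 4: Nat is built inductively from `zero` and `succ`, so `2 + 2 = 4`
-- reduces by computation and reflexivity (`rfl`) closes the goal.
example : 2 + 2 = 4 := rfl

-- A law proved from the definition of addition (here via a core lemma):
example (n : Nat) : n + 0 = n := Nat.add_zero n
```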
What specifically did you mean here?
What I mean is, if you have the resources (time, energy, etc.) to do so, consider trying to get the data where the script returned '0' values because the source you used didn't have that bit of data. But make it clear that you've done independent research where you find the figures yourself, so that the user realises it's not from the same dataset. And failing that, e.g. if there just isn't enough info out there to put a figure on it, state that you looked into it but there isn't enough data. (This lets the user distinguish between 'maybe the data just wasn't in the dataset' and 'this info doesn't even exist, so I shouldn't bother looking for it'.)
I think the big problem with trying to determine "related jobs" is that, more often than not, in the actual job market, the relationship between similar jobs is in name only.
Sure it would again be more resource-intensive, but I was thinking you could figure out yourself which careers are actually related, or ask people in those fields what they actually think are the core parts of their job and which others jobs they'd relate it to.
I like the graph that shows salary progression at every age. Often career advice just gives you the average entry figure and the average and peak senior figures, which kinda seems predicated upon the 'Career for life' mentality which locks people into professions they dislike. Suggestions, to do or not do with as you see fit, no reply necessary:
- Ability to compare multiple jobs simultaneously.
- Make a note saying the graph will appear once you pick a job, or have it pop up by default on a default job.
- Center the numerical figures in their cells.
- Make the list of jobs and/or the list of categories searchable, and associate search keywords with jobs. For example, if I want to find 'Professor', it seems to come under postsecondary teachers, which wouldn't have been something I would have thought of without trawling the list of educators, but I would have found it if I could search by 'Professor' and get the result returned. (A rough sketch of this, and of the duplicate check in the next suggestion, follows after the list.)
- 'Actuaries', 'Statisticians', and 'Mathematicians' seem to have duplicate entries. Check the database for other duplicates by querying for where job names coincide.
- Have the graph update to say which job you're currently looking at, so the user can be sure it's updated.
- When hovering on the graph, have the box say e.g. 'Age 40' rather than just '40' to make it obvious what '40' refers to.
- When hovering on the graph, have the order of the figures in the box correspond to the order on the graph, i.e. give the upper, then median, then bottom figures rather than the opposite, as it currently is.
- Track down the figures where you don't have data, or establish that there is not enough data, and let the user know which is the case, so they know the provenance of researched or omitted figures.
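Here's the promised sketch. Everything in it (the field name, the alias table, the in-memory list standing in for your database) is a made-up assumption for illustration:

```python
from collections import Counter

# Hypothetical in-memory stand-in for the job database.
jobs = [
    {"name": "Actuaries"}, {"name": "Statisticians"},
    {"name": "Actuaries"}, {"name": "Postsecondary Teachers"},
]

# Duplicate check: query for coinciding job names.
counts = Counter(job["name"] for job in jobs)
print([name for name, n in counts.items() if n > 1])  # ['Actuaries']

# Keyword search: map search terms to the jobs they should surface.
aliases = {"professor": "Postsecondary Teachers", "actuary": "Actuaries"}

def search(query: str) -> list:
    q = query.lower()
    target = aliases.get(q)
    return [j for j in jobs if q in j["name"].lower() or j["name"] == target]

print(search("professor"))  # finds 'Postsecondary Teachers' via the alias
```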
In general, I think a lot of the time the user will want to come in from an angle of having relatively specific jobs in mind and going from there, rather than working from broad categories to increasingly specific jobs. I'm not immediately sure if or how this should cash out into specific suggestions, though. But maybe something to bear in mind while you're developing the product. Perhaps you could have a mode like the current one and a 'wandering' mode where you start with a specific job then have it compared and linked to related or similar jobs (where the relational and similarity data would have to be put into the database somehow). Maybe a graph interface with nodes?
Thanks to Luke for his exceptional stewardship during his tenure! You'll be awesome at GiveWell!
And Nate you're amazing for taking a level and stepping up to the plate in such a short period of time. It always sounded to me like Luke's shoes would be hard for a successor to fill, but seeing him hand over to you I mysteriously find that worry is distinctly absent! :)
I used to have an adage to the effect that if you walk away from an argument feeling like you've processed it before a month has passed, you're probably kidding yourself. I'm not sure I would take such a strong line nowadays, but it's a useful prompt to bear in mind. Might or might not be related to another thing I sometimes say, that it takes at least a month to even begin establishing a habit. While a perfect reasoner might consider all hypotheses in advance or be able to use past data to test new hypotheses, in practice it seems to me that being on the lookout for evidence for or against a new idea is often necessary to give the idea a fair shake, which feels like a very specific case of noticing (namely, noticing when incoming information bears on some new idea you heard and updating).
This premise sounds interesting, but I feel like concrete examples would really help me be sure I understand it.
I didn't follow everything in the post, but it seems like the motivating problem is that UDT fails in an anti-Newcomb problem defined in terms of the UDT agent. But this sounds a lot like a fully general counterargument against decision algorithms; for any algorithm, we can form a decision problem that penalizes exactly that and only that agent. Take any algorithm running on a physical computer and place it in a world where we specify, as an axiom, that any physical instantiation of that algorithm is blasted by a proton beam as soon as it begins to run, before it can act. This makes any algorithm look bad, but this is just a no free lunch argument that every algorithm seems to fail in some worlds, especially ones where the deck is stacked against the algorithm by defining it to lose.
A decision process 'failing' in some worlds is a problem only if there exists some other decision process that does not suffer analogously. (Possible open problem: What do we mean by analogous failure or an analogously stacked world?)
Of course, it's possible this objection is headed off later in the post where I start to lose the train of thought.
I wondered about this too before I tried it. I thought I had a higher-than-average risk of being very sensitive to my own perspirations/sheddings. But I haven't detected any significant problems on this front after trying it. It goes both ways: Now I know that I'm not very sensitive to my own trouser sweat, it means I can wear trousers longer after they've been washed (i.e. exposed to potentially irritant laundry products), which possibly reduces the risk of skin problems from the laundry products (another problem that I think I have a higher-than-average chance of having; the two aren't mutually exclusive).
(Insert disclaimer about this maybe being very dependent on lots of factors, e.g. maybe I'll move to another city with an imperceptibly different climate and get screwed over by wearing jeans for more than a day.)
Not sure if it's in addition to what you're thinking of or it is what you're thinking of, but Tommy Hilfiger 'never' 'washes his Levis'. I heard this and confirmed with a fashion- and clothing-conscious friend that they (the friend) had tried it. I used to wash jeans and chinos after a few consecutive days of wearing them. For the past five or six weeks I've been trying out the 'no wash' approach. I wore one pair of jeans for about thirty-five days (maybe split into two periods of continuous wearing) and washed them probably once or never during that time. So far as I could tell, they did not smell anywhere near enough to be offensive, and I only stopped wearing them because I got too small for them. This included doing some form of exercise like pushups, circuits, or timed runs at the track in the jeans (and then not showering for a few hours afterwards) on most days.
After those jeans I've been wearing the same pair of chinos for eight days and they seem to be fine. It's worth giving a try to see if it works for you too, in your circumstances. It is very plausible that climate, bathing frequency, sensitivity to own sweat, sensitivity to laundry products, underpants use etc. provide enough variation between people that doing it is a no-brainer for some and not doing it is probably right for others.
During this period, before showering each night, I take the trousers off, shake them off, then (assuming I don't have any reason to think the outside of them had accumulated much ickiness during that day) drape them inside out over a chair, which hopefully lets them air out and let moisture evaporate off. (In fact, I now do this with most of my clothes, and it seems like it might indeed make them smell fresher for longer.)
https://www.google.co.uk/search?q=tommy+hilfiger+wash+jeans http://www.dailymail.co.uk/femail/article-2459720/Tommy-Hilfiger-thinks-crazy-throw-jeans-laundry-wear.html
Thanks for reminding me to do a meetup report! I've added it at the end of the announcement for this Sunday's meetup. Let me know in the comments there whether you think you can still make it this weekend.
Currently expecting at least two others with joint probability >70%, so I'll still do the original day. But I'll bear the next week in mind; we might do two weeks in a row.
You more-or-less said, "gwern is imperfect but net-positive. So deal with it. Not everyone can be perfect." I think such a response, in reply to someone who feels bullied by a senior member and worries the community is/will close ranks, is not the best course of action, and in fact is better off not being made. Even assuming your comment was not a deontological imperative, but rather shorthand for a heuristic argument, I am very uncertain as to what heuristic you are suggesting and why you think it's a good heuristic.
Even if you ignored all that and rewrote your original comment differently, that might be sufficient to make headway.
Does that make things clearer? If this line of inquiry also seems too unwieldy to begin replying to, can you go up meta levels and suggest a way to proceed?
I'm not sure exactly which parts you're referring to, so can you quote the parts you find odd or by which you are confused?
Those aren't weird deontological rules and you're just throwing in those words to describe those phrases as boo lights. MOST things people say aren't meant as strict rules, but as contextual and limited responses to the conversation at hand.
There is a very particular mental process of deontological thinking that epistemic rationalists should train themselves to defuse, in which an argument is basically short-circuited by a magic, invalid step. If the mental process that actually takes place in someone's head is, 'This person criticised a net-positive figure. Therefore, they must be belittled', and that's as far as their ability to justify actually goes, that seems like the kind of thinking an epistemic rationalist would want to be alerted to and detrain, if it's taking place subconsciously.
You're proposing the alternative that shminux could justify it further but is using it as a shorthand, and that I'm confusing that omission for an absence of recursive justification. The bare bones of shminux's comment would be "gwern is imperfect but hugely net positive. So deal with it. Not everyone can be perfect." If that's not deontological thinking, then it remains such a general heuristic argument, bare of any specific details of the case at hand, that it's a crappy comment to make to someone who feels that they've been bullied by a senior member and is probably worried the community will close ranks. It's not just a matter of 'What is the most charitable interpretation of shminux's comment', it's also e.g. 'What is the distribution over interpretations that would actually occur to someone who feels bullied and aggrieved?'
It looks like I'm making a fully general counterargument against arguments by calling anything short of a computer-verifiable argument deontological. It looks like you're making a fully general counterargument against accusations of deontological argument.
Your point (3) is an example of a recurring thing where I question a particular comment someone makes to a post, and then someone comes along and makes a bunch of arguments about why the original poster is in fact an idiot or defector or whatever, and gets a bunch of upvotes by (intentional or otherwise) sleight of hand; they look like they're refuting my comment, but all they've done is justify general skepticism of the original poster, rather than a specific justification of the response that I questioned. It introduces a false dichotomy between belittling the original poster and 'opening the floodgates', and (intentionally or otherwise) makes me look like the naive idiot who wants to open the floodgates and the other person like the heroic, gritty defender of the forum, when all I was saying was that being mean in that specific way isn't the best thing from a consequentialist perspective. Specifically:
You can't treat everyone who complains about being bullied by the community seriously.
This is the false dichotomy. You are (intentionally or otherwise) completely misrepresenting what I'm saying. It looks to me like I got rounded off in your mind to 'naive person who thinks all claims of bullying deontologically have to be taken seriously', which is what annoyed you. You should be more careful when interpreting in future in such situations.
That's like auto-cooperating in a world full of potential defectors.
Or I'm not using a deontological or generalised heuristic, and I'm just making the specific claim that the exact response from this exact person in this exact case was not great. Apply your own skepticism of assumptions of deontology to me, if you will insist they be applied to shminux.
It creates an incentive to punish anyone you dislike by starting a thread about how mean they are to you
It's not obvious to me that this slippery slope is slippery enough to justify the specific response in this specific case.
and also has a chilling effect on conversation in general.
If I'm correct and shminux's reply was inappropriate, then that also has a chilling effect on those who have grievances. Additionally, I found shminux's reply and the amount of support it originally had very off-putting. I knew that I'd have to take a long time responding to it to try to point out what was wrong with it, and risk downvotes and obnoxious responses to do so. Then I found that some of the responses I did actually get (including yours) made me feel emotionally disgusted enough, and seemed so fundamentally crappy down several inferential layers, that it took me this long to respond and even begin to be able to roughly convey my position. I say this not as a definitive assertion that nobody should have challenged me, but to point out that you only mentioned the chilling effects on the accused without mentioning the effects on the accuser and other community members.
Despite the rudeness, Gwern's replies in the linked conversation were lengthy and tried to convey information and thoughts. I've seen plenty of examples of people afraid to talk because they might offend someone online, and I don't really want the threshold for being punished for rudeness to be that low on Lesswrong.
This seems very far away from my specific criticisms of shminux's comment.
Point (4) also does not connect to the specifics of shminux's comment.
Point (5) is defused by the observation that I was not defending ThisSpaceAvailable's post, but rather was criticising shminux's comment on the grounds that there are better responses than shminux's to the post. I find it extremely telling that you then state there are much better ways to make Less Wrong less rude, when you failed to understand that my comment was saying to shminux that there are much better ways of responding to a post like this than making a comment that pattern-matches extremely strongly to closing ranks around a senior community member. I.e. the form of your (5) is similar to the form of my comment, yet you missed what my comment was saying, and this seems like significant evidence to me that you were mindkilled by my comment.
Whoever downvoted my earlier comment (or this one), please explain your downvote.
I think my comment was on-point, truthful...
If you're claiming that your claiming these attributes justifies your post, I note that that's circular reasoning. Otherwise, on to the next:
...pithy...
Even in conjunction with the other attributes you list, I'm not sure that pith is even close to being a good thing more often than not. See ciphergoth's post on never being sarcastic.
...and not overly rude...
It took me up to now to figure out a plausible (though not necessarily probable), non-insulting interpretation of your comment. Originally it came across as you calling the original poster laughable and naive, and belittling them for being an unsuccessful campaigner. I also originally thought you were laughing at their advocacy of immortality because you are against immortality, but I now think that that might have been me misinterpreting to an extent you couldn't have reasonably avoided.
I genuinely think the post is hilarious, because it shows so many cognitive biases in service of "rationalism."
This is a justification for your comment being truthful, not for it being useful.
The poster claims he wants to reduce X-risk. But his proposed solution is to stand in the street with placards saying "Stop Existential Risks!" And then magically a solution appears, because of "awareness." What would we say about, for example, a malaria charity that used such a tactic?
This comes across to me as you pattern-matching the original poster to the Clueless Activist stereotype and then being uncharitable to them retroactively because of that pattern match. Omitting the details of how street activism and awareness-raising cause good outcomes is not the same as there not being any mechanism by which they would work. This feels like it should be immediately obvious if you were trying at all to be empathetic or trying to identify non-crazy interpretations of the original post.
What would we say about, for example, a malaria charity that used such a tactic?
Effective Altruists wouldn't respond like you did to someone suggesting street activism just because it pattern-matches to stereotypical clueless non-effective altruism. EAs don't belittle conventional interventions.
More to the point, your third bullet point doesn't constitute a valid argument for making your comment. Even if the original poster is proposing a magical, non-effective intervention, you haven't shown why this is significant evidence in favour of your actual comment, rather than just for not taking their proposal too seriously. This argumentative omission seems to be a recurring theme to me when I question someone who belittles a new or inexperienced user; it's easy to mock a new user or make an inexperienced user seem silly, and get upvotes, because that rhetorical sleight of hand makes it look like you've actually justified your specific response to them, when all you've done is justify general skepticism of their suggestion.
I seem to recall that policy debates shouldn't appear one-sided. Yet all his slogans are ridiculous. Consider, for example, "Prevent Global Catastrophe!" Do you think that people who don't take existential risks seriously are in favour of global catastrophe? What does it even mean to say there is a 50% chance of a global catastrophe?
As above.
Perhaps the funniest part is that the poster has already organised street actions for immortality. Presumably, he must believe that those made great strides to solving the problem of immortality(!!!), which is why he's now using the same tactics to tackle existential risk more generally...
Unless you have significant specific knowledge of the effectiveness of street action, this collapses down to 'My prior belief, pending further information, is that street action is ineffective.' Which conspicuously isn't a justification (beyond being weak heuristic evidence) for your actual comment rather than general skepticism.
But in another way, his street actions for immortality were presumably successful, because they made the participants (at Burning Man, no less!) feel good about themselves, and superior to the rest of the common flock. So the second part of my comment was a double-edged sword.
Insomuch as your comment was self-effacing and potentially supportive, you did not communicate that clearly.
I could go on. Ultimately, if you make a ridiculous post, you can't expect people not to laugh.
This is a linguistically Clever argument that wins rhetorical points but is extremely non-obvious, particularly in the context of Less Wrong. Again this fails to specifically justify your comment rather than having the reaction of laughing at the post (but not necessarily commenting to that effect).
With all these points, I can make lots of guesses at what the filled-out form of your argument would be, but I have lots of uncertainty over interpretations, and it would be a lot more efficient if you spelled out your arguments.
Whoever downvoted this comment, please explain your downvote.
turchin's proposed action makes me uneasy, but how would you justify this comment? Generally such comments are discouraged here, and you would've been downvoted into oblivion if you'd made such a response to a proposal that weren't so one-sidedly rejected by Less Wrong. What's the relevant difference that justifies your comment in this case, or do you think such comments are generally okay here, or do you think you over-reacted?
Oops, I didn't actually read 7 and assumed it said that public opinion had grown more positive. Given the two choices actually presented, I'd say 7 is more likely.
Edit: Relative credences (not necessarily probabilities since I'm conditioning on there being significant effect sizes), generated naively trying not to worry too much about second-guessing how you distributed intuitive and counterintuitive results:
1 vs 7: 33:67
2 vs 8: 33:67
3 vs 9: 67:33
4 vs 10: 40:60
5 vs 11: 45:55
6 vs 12: 85:15
In another spaced repetition project, I used Anki to learn to distinguish colors that I didn't distinguish beforehand.
I think I managed to do this when learning flags, with Chad and Romania. It seemed like I got to the point where I could reliably distinguish their flags on my phone, whereas when I started, I did no better than chance. I did consciously explain this to somebody else as something interesting, but now that I think about it, I failed to find it as interesting as I should have, because the idea that seeing a card a few times on Anki can increase my phenomenal granularity, or decrease the amount of phenomenal data my brain throws away, is pretty amazing.
I found typing to be a massive deterrent personally. Lots of my Anki is done in bed or on trains on my phone, and I found Memrise (on a laptop) much less compelling and harder to get myself to do than Anki because of all the typing, multiple choice, and drag-n-drop (and it would switch between those which would break my focus). I don't want to have to type 'London' when I'm asked what the capital of the UK is or click it on a multiple choice. Maybe if it were just typing on a fully-fledged computer, like you describe, it wouldn't be so bad?
I still don't think I self-deluded to any actionable extent, but I probably should mention that sometimes I would mark a card as Easy, see the answer and Just Know the answer was different from what I would have answered, undo, and mark the card as Again. I can see how you'd be much more confident I was self-deluding without that detail, which I forgot.
This post is brilliant.
(Sensations of potential are fascinating to me. I noticed a few weeks ago that after memorizing a list of names and faces, I could predict in the first half second of seeing the face whether or not I'd be able to retrieve the name in the next five seconds. Before I actually retrieved the name. What??? I don't know either.)
Right! When telling people about Anki, I often mention the importance of not self-deluding about whether one knows the answer. But sometimes I also mention how I mark a card as 'Easy' before I've retrieved or subvocalized the answer. It definitely felt like the latter was not self-delusion (especially when Anki was asking me what the capital of the UK was, say). But I felt unable to communicate why it was not self-delusion, and worried that without the other person understanding that mental phenomenon, they would think I was self-deluding and conclude that self-delusion is actually okay after all.
I vaguely noticed that awkwardness to some degree, but I still need to work on the skill of noticing such impasses and verbalizing them. And I certainly wasn't conscious enough of it, or didn't dwell enough on it, to think more about noticing.
First, how is average utilitarianism defined in a non-circular way?
If you can quantify a proto-utility across some set of moral patients (i.e. something that is measurable for each thing/person we care about), then you can call your utility the average of proto-utility over moral patients. For example, you could define your set of moral patients to be the set of humans, and each human's proto-utility to be the amount of money they have, then average by summing the money and dividing by the number of humans.
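(A minimal sketch of that averaging move; the people and figures are made up, and money-as-proto-utility is just the example from above:)

```python
# Average a proto-utility function over a hypothetical set of moral patients.
humans = [
    {"name": "A", "money": 10_000},
    {"name": "B", "money": 55_000},
    {"name": "C", "money": 250_000},
]

def average_utility(patients, proto_utility):
    """Sum the proto-utility over patients and divide by their number."""
    return sum(proto_utility(p) for p in patients) / len(patients)

print(average_utility(humans, lambda h: h["money"]))  # 105000.0
```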
I don't necessarily endorse that approach, of course.
Has he been officially "impressed" yet?
I think Eliezer says he's still confused about anthropics.
What reading can I do on anthropics to get an idea of the major ideas in the field?
So far as I know, Nick Bostrom's book is the orthodox foremost work in the field. You can read it immediately for free here. Personally, I would guess that absorbing UDT and updateless thinking is the best marginal thing you can do to make progress on anthropics, but that's probably not even a majority opinion on LW, let alone among anthropics scholars.
Woah, well done everyone who donated so far. I made a small contribution. Moreover, to encourage others and increase the chance the pooled donations reach critical mass, I will top up my donation to 1% of whatever's been donated by others, up to at least $100 total from me. I encourage others to pledge similarly if you're also worried about making a small donation or worried the campaign won't reach critical mass.
Daniel, did you go ahead with this? Learn anything interesting?
(A): There exists a function f : R -> R
and the axioms, for all r in R:
(A_r): f(r) = 0
(The graph of f is just the x-axis.)
This might be expressible with a finite axiomatisation (e.g. by building functions and arithmetic in ZFC), and indeed I've given a finite schema, but I'm not sure it's 'fair' to ask for an example of a theory that cannot be compressed beyond uncountably many axioms; that would be a hypertask, right? I think that's what Joshua's getting at in the sibling to this comment.
I don't think there's stuff directly on dissolving (criminal) justice in LessWrong posts, but I think lots of LessWrongers agree or would be receptive to non-retributive/consequentialist justice and applying methods described in the Sequences to those types of policy decisions.
Some of your positions are probably a bit more fringe (though maybe would still be fairly popular) relative to LW, but I agree with a lot of them. E.g. I've also been seriously considering the possibility that pain is only instrumentally bad due to ongoing mental effects, so that you can imagine situations where torture is actually neutral (except for opportunity cost). One might call this 'positive utilitarianism', in opposition to negative utilitarianism.
The Fun Theory Sequence might be of interest to you if you haven't read it yet.
But anyway, awesome introduction comment! Welcome to LessWrong; I'm looking forward to hearing more of your ideas!
The prospect of being formally in a study pair/group makes me anxious in case I'm a flake and feel like I've betrayed the other participant(s) by being akratic or being unable to keep up and then I will forever after be known as That Flake Who Couldn't Hack Model Theory That Everybody Should Laugh At etc. etc. I should probably work on that anxiety, but in the interim, as a more passive option, I've just created this Facebook group. Has the benefit that anybody who stumbles across it or this comment can join and dip in at their leisure.
I don't really know what to expect from the group and I'm fairly content at this point to let its direction be driven by whoever joins, but I would say that if you're unsure and hesitating whether to join or post a question or whatever, please Just Do It, rather than hovering, timing out, and giving up. Even if you're just curious or think you might want to join the group in future to comment but don't right now, feel free to join now and turn off notifications from the group to eliminate the Trivial Inconvenience for your future self.
Also, please do feel free to join if you're not actively studying FAI but want to help others!
I'm not sure if it's because I'm Confused, but I'm struggling to understand if you are disagreeing, or if so, where your disagreement lies and how the parent comment in particular relates to that disagreement/the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.
I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)
Can you give a concrete example of what you see as an example of where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write 'halfer' and 'thirder' computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
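(To make the payoff problem concrete, here's a toy simulation. Everything in it, the Brier scoring and the report-one-credence policy in particular, is my own simplifying assumption, not a settled formalisation:)

```python
import random

# A 'policy' is just the credence in Heads that Beauty reports at every
# awakening; we compare two scoring rules to see which credence 'wins'.

def avg_penalty(credence, per_awakening, trials=100_000):
    """Average Brier penalty (lower is better) under one scoring rule."""
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2   # Heads: woken once; Tails: twice
        penalty = (credence - (1.0 if heads else 0.0)) ** 2
        # Score at every awakening, or just once per experiment:
        total += penalty * awakenings if per_awakening else penalty
    return total / trials

for p in (1/2, 1/3):
    print(p, avg_penalty(p, True), avg_penalty(p, False))
# Per-awakening scoring is minimised near p = 1/3 ('thirder'); per-experiment
# scoring near p = 1/2 ('halfer'). Which program 'wins more' is fixed by the
# payoff definition, which is the dissolution.
```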
Ah, that's good to know. Thanks for the suggestion!
It's not enough to say "the act of smoking". What's the causal pathway that leads from the lesion to the act of smoking?
Exactly, that's part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That's the hard edge of the problem, and even if the correct answer turns out to be 'it depends' or 'the question doesn't make sense' or involves a dissolution of reference classes or whatever, then one paragraph isn't going to provide a solution and cut through the confusions behind the question.
It seems like your argument proves too much because it would dismiss taking Newcomb's problem seriously. 'It's not enough to say the act of two-boxing...' I don't think your attitude would have been productive for the progression of decision theory if people had applied it to other problems that are more mainstream.
It has a clear answer (smoking doesn't cause cancer), and it's only interesting because it can trip up attempts at mathematising decision theory.
That's exactly the point Wei Dai is making in the post I linked!! Decision theory problems aren't necessarily hard to find the correct specific answers to if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories, and they make us draw up more robust general decision processes or illuminate our own decisions.
If you had said
If so, I don't think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn't apply to the smoker, as they're using a different decision-making process.
in response to Newcomb's problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you're right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who do not already believe that, and given the world we live in, can come across as a way of bragging about your non-confusion or lowering their 'status' by making it look like they're confused about an easily settled issue, even if that's not what you're (consciously) doing.
If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they're wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!
I don't think you're taking the thought experiment seriously enough and are prematurely considering it (dis)solved by giving a Clever solution. E.g.
If it's not the urge, what is it?
Obvious alternative that occurred to me in <5 seconds: It's not the urge, it's the actual act of smoking or knowing one has smoked. Even if these turn out to not quite work, you don't show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem, looking for a reduction that does not leave us feeling confused.
Edit: In fact, James already effectively said 'the act of smoking' in the comment to which you were replying!
becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
If I took the time to write a comment laying out a decision theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed and suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others engaging with my query.
I've been frustrated enough times by people nitpicking or derailing (even if only with not-supposed-to-be-derailing throwaway jokes) my attempts to introduce a hypothetical that by this point I would guess that in most cases it's actually rude to respond like this unless you're really, really sure that your nitpick of a premise actually significantly affects the hypothetical or that you've got a really good joke. In Should World people would evaluate the seriousness of a thought experiment on its merits and not by the immediate non-serious responses to it, but experience says to me that's not a property of the world we actually live in.
If I'm interpreting your comment correctly, you're either stating that it's not the case that people's brains make rational probability estimates (which everybody on friggin' LessWrong will already know!), or denying a very specific, intentionally artificial statement about the relation between credences and anxiety that was constructed for a decision theory thought experiment. In either case I'm not sure what the benefits of your comment are.
Am I missing something that you and the upvoters saw in your comment?
Edit: Okay, it occurs to me that maybe you were making an extremely tongue-in-cheek, understated rejection of the premise for comical effect--'Haha, the thought experiments we use are far divorced from the actual vagaries of human thought'. The fact I found it so hard to get this suggests to me that others probably didn't get the intended interpretation of your comment, which still leaves potential for it to have the negative effects I mentioned above. (E.g. maybe someone got your joke immediately, had a hearty laugh, and upvoted, but then the other upvoters thought they were upvoting the literal interpretation of your post.)
Yep, I find the world a much less confusing place since I learned capitals and locations on the map. I had (and to some extent still do have) a mental block on geography, which was ameliorated by it.
Rundown of positive and negative results:
In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it's the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with you, but I learned to love it.
I suspect that learning the dates of monarchs and Prime Ministers (e.g. of England/UK) would have a similar benefit in contextualising and de-intimidating historical facts, but I never finished those decks and haven't touched them in a while, so never reached the critical mass of knowledge that allowed me to have a good handle on periods of British history. I found it pretty difficult to (for example) keep track of six different Georges and map each to dates, so slow progress put me off. Let me know if you're interested and want to set up a pact, e.g. 'We'll both do at least ten cards from each deck a day and report back to the other regularly' or something. In fact that offer probably stands for any readers.
I installed some decks for learning definitions in areas of math that I didn't know, but found memorising decontextualised definitions hard enough that I wasn't motivated to do it, given everything else I was doing and Anki-ing at the time. I still think repeat exposure to definitions might be a useful developmental strategy for math that nobody seems to be using deliberately and systematically, but I'm not sure Anki is the right way to do it. Or, if it is, that shooting so far ahead of my current knowledge was the best way to do it. Similarly with a LaTeX deck I got, having pretty much never used LaTeX and not practising it while learning the deck.
Canadian provinces/territories I have not yet found useful beyond feeling good for ticking off learning the deck, which was enough for me since I did them in a session or two.
Languages Spoken in Each Country of the World (I was trying to do not just country --> languages, but country --> languages with proportions of the population speaking the languages) was so difficult and unrewarding in the short term that I lost motivation extremely quickly (this was months ago). The mental association between 'Berber' and 'North Africa' has come up a surprising number of times, though. Most recently, last night.
Periodic table (symbol <--> name, name <--> number) took lots of time and hasn't been very useful for me personally (I pretty much just learned it in preparation for a quiz). Learning just which elements are in which groups/sections of the periodic table might be more useful and a lot quicker (since by far the main difficulty was name <--> number).
I relatively often find myself wanting demographic and economic data, e.g. populations of countries, populations of major world cities, populations of UK places, GDPs. Ideally I'd not just do this for major places, since I want to get a good intuitive sense of these figures from very large or major places on down to tiny places.
Similarly if one has a hobby horse it could be useful. Examples off the top of my head (not necessarily my hobby horse): Memorising the results from the LessWrong surveys. Memorising the results from the PhilPapers survey. Memorising data about resource costs of meat production vs. other food production. Memorising failed AGI timeline predictions. Etc.
I found starting to learn Booker Prize winners on Memrise has given me a few 'Ah, I recognise that name, and literature seems less opaque to me, yay!' moments, but there are probably higher-priority decks for you to learn unless that's more your area.
So my mind state is more likely in a five-sibling world than a six-sibling one, but using it as anthropic evidence would just be double-counting whatever evidence left me with that mind state in the first place.
Yep; in which case the anthropic evidence isn't doing any useful explanatory work, and the thesis 'Anthropics doesn't explain X' holds.
Yes! There's a lot of ways to remove the original observer from the question.
The example I thought of (but ended up not including): If all one's credence were on simula(ta)ble (possibly to arbitrary precision/accuracy even if perfect simulation were not quite possible) models and one could specify a prior over initial conditions at the start of the Cold War, then one could simulate each set of initial conditions forward then run an analysis over the sets of initial conditions to see if any actionable causal factors showed up leading to the presence or absence of a nuclear exchange.
A problem with this is that whether one would expect such a set of simulations to show a nuclear exchange to be the usual outcome or not is pretty much the same as one's prior for a nuclear exchange in the non-simulated Cold War, by conservation of expected evidence. But maybe it suffices to at least show that the selection effect is irrelevant to the causal factors we're interested in. Certainly it gives a way to ask such questions that has a better chance of circumventing anthropic explanations in which one might not be interested.
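(A toy sketch of what I mean, where the model dynamics, the parameters, and the prior are all invented for illustration rather than drawn from anything real:)

```python
import random

# Sample 'initial conditions' from a prior, run a grossly simplified,
# entirely made-up model forward, then compare worlds with and without an
# exchange to look for actionable factors. No selection effect: every
# simulated world gets analysed, including the ones with exchanges.

def simulate(hotline_quality, crisis_rate, years=45):
    for _ in range(years):
        if random.random() < crisis_rate:           # a crisis occurs...
            if random.random() > hotline_quality:   # ...de-escalation fails
                return True                         # nuclear exchange
    return False

worlds = [(hq, cr, simulate(hq, cr))
          for hq, cr in ((random.uniform(0.5, 1.0), random.uniform(0.0, 0.2))
                         for _ in range(10_000))]

exchanged = [w for w in worlds if w[2]]
peaceful = [w for w in worlds if not w[2]]
mean = lambda ws, i: sum(w[i] for w in ws) / len(ws)
print("hotline quality:", mean(exchanged, 0), "vs", mean(peaceful, 0))
print("crisis rate:   ", mean(exchanged, 1), "vs", mean(peaceful, 1))
```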
Is this in support of or in opposition to the thesis of the post? Or am I being presumptuous to suppose that it is either?
Haha. I did seriously consider it when that example was less central to the text, but ended up just going for playing it straight when it was interleaved, since I didn't want to encourage second-guessing/paranoia.
Thanks. I've edited the post pointing to lukeprog's more recent post about the matching drive, since I'd consider this one fully obsolete now the Stellar offer is so low.
Good chance you've seen both of these before, but:
http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html
I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.
Damn, if only someone had created a thread for that, ho ho ho
Strategic incompetence?
I'm not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?
There will probably be holes, and this may not quite capture exactly what I mean, but I'll take a shot. Let me know if this is not rigorous or detailed enough and I'll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tab, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.
Helplessness about topic X - One is not able to attain a knowably stable and confident opinion about X given the amount of effort one is prepared to put in or the limits of one's knowledge or expertise etc. One's lack of knowledge of X includes lack of knowledge about the kinds of arguments or methods that tend to work in X, lack of experience spotting crackpot or amateur claims about X, and lack of general knowledge of X that would allow one to notice one's confusion at false basic claims and reject them. One is unable to distinguish between ballsy amateurs and experts.
Learned helplessness about X - The helplessness is learned from experience of X; much like the sheep in Animal Farm, one gets opinion whiplash on some matter of X that makes one realise that one knows so little about X that one can be argued into any opinion about it.
(This has ended up more like a bunch of arbitrary properties pointing to the sense of learned helplessness rather than a slick definition. Is it suitable for your purposes, or should I try harder to cut to the essence?)
Rant about learned helplessness in physics: Puzzles in physics, or challenges to predict the outcome of a situation or experiment, often seem like they have many different possible explanations leading to a variety of very different answers, with the merit of these explanations not being distinguishable except to those who have done lots of physics and seen lots of tricks, and maybe even then maybe you just need to already know the answer before you can pick the correct answer.
Moreover, one eventually learns that the explanations at a given level of physics instruction are probably technically wrong in that they are simplified (though I guess less so as one progresses).
Moreover moreover, one eventually becomes smart enough to see that the instructors do not actually even spot their leaps in logic. (For example, it never seemed to occur to any of my instructors that there's no reason you can't have negative wavenumbers when looking at wavefunctions in basic quantum. It turns out that when I run the numbers, everything rescales, since the wavefunction for -n is just the wavefunction for n with its sign flipped, and one normalizes the wavefunction anyway, so that it doesn't matter; but one could only know this for sure after reasoning it out and justifying discarding the negative wavenumbers. It basically seemed like the instructors saw an 'n' in sin(n*pi*x/L) or whatever and their brain took it as a natural number, without any cognitive reflection that the letter could just as easily have been a k or z or something, and without checking that the notation was justified by the referent having to be a natural.)
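(A quick numeric check of that claim for the infinite square well; the well and normalization conventions are the standard textbook ones, but treat this as a sketch:)

```python
import numpy as np

# A negative wavenumber gives the same physical state: psi_{-n} = -psi_n is
# an overall sign (phase) that normalization and probabilities can't see.
L = 1.0
x = np.linspace(0.0, L, 10_001)
n = 3

psi_pos = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
psi_neg = np.sqrt(2 / L) * np.sin(-n * np.pi * x / L)

print(np.allclose(psi_neg, -psi_pos))        # True: just an overall sign flip
print(np.trapz(psi_pos**2, x))               # ~1.0: normalised
print(np.trapz(psi_neg**2, x))               # ~1.0: same norm
print(np.allclose(psi_pos**2, psi_neg**2))   # True: identical probabilities
```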
Moreover, it takes a high level of philosophical ability to reason about physics thought experiments and their standards of proof. Take the 'directly downwind faster than the wind' problem. The argument goes back and forth, and, like the sheep, at every point the side that's speaking seems to be winning. Terry Tao comes along and says it's possible, and people link to videos of carts with propellers apparently going downwind faster than the wind and wheels with rubber bands attached allegedly proving it. But beyond deferring to his general hard-sciences problem-solving ability, one has no inside-view way to verify Tao's solution; what are the standards of proof for a thought experiment? After all, maybe the contraptions in the video only work (assuming they do work as claimed, which isn't assured) because of slight side-to-side effects rather than directly downwind motion, or some other property of the test conditions implicitly forbidden by the thought experiment.
Since any physical experiment for a physics thought experiment will have additional variables, one needs some way to distinguish relevant and irrelevant variables. Is the thought experiment the limit as extraneous variables become negligible, or is there a discontinuity? What if different sets of variables give rise to different limits? How does anyone ever know what the 'correct' answer is to an idealised physics thought experiment of a situation that never actually arises? Etc.
You seem to be making an assertion about me in your last paragraph, but doing so very obliquely.
Apologies for that. I don't think that that specific failure mode is particularly likely in your case, but it seems plausible to me that other people thinking in that way has shifted the terms of discourse such that that form of linguistic relativism is seen as high-status by a lot of smart people. I am more mentioning it to highlight the potential failure mode; if part of why you hold your position is that it seems like the kind of position that smart people would hold, but I can account for those smart people holding it in terms of metacontrarianism, then that partially screens off that reason for endorsing the smart people's argument.
It looks like you submitted your comment before you meant to, so I shall probably await its completion before commenting on the rest.
This is really dismissive and, if I'm honest, I'm disappointed it's been upvoted so much. It's very convenient to say something like this and score points by signalling self-sacrificing stoicism and tough skin, and a lot less convenient to take the time to actually try looking for solutions or even just hold off from making dismissive comments.
I believe I remember when I hopped on #lesswrong (on which I've spent maybe between fifteen and ninety minutes' active time, so it's telling that this happened), and within a few minutes you'd complained to me (when I wasn't talking to you, if I recall correctly) about gwern not respecting others' norms or something. I didn't imply you were a crybaby or use my ignorance of your beef as an excuse to be Above It All. In fact, I made conscious effort to preempt my brain labelling it as 'weirdly outspoken whining' and to not be prejudicial in assessing what you'd said, but to treat it as a potentially legitimate complaint.
When someone is upset and potentially feels bullied by the community, telling them to deal with it as if they are inherently the problem by daring to rock the boat is unacceptable. It might be the case that they turn out to be in the wrong, but putting it like you've put it is basically never ever ever helpful and smacks of failure of consequentialism (only looking at whether gwern is net-positive, rather than if there are marginal improvements to be made).
If it's your honest belief that there is nothing that can be done and that this thread shouldn't have been made, there are much nicer, less dismissive, more effective ways of doing it, or actions you can take. For example, not commenting and moving on. Like seriously, if anybody should be above 'Stop rocking the boat', it should be LessWrongers, given how many of us have probably encountered that attitude dozens or hundreds of times in pursuit of truth or been bullied for being nerds, and how many of us have black belts in Social Justice theory (even those who have questioned or renounced that art). And if anybody should be above weird deontological rules like 'Don't criticize net-positive figures' or 'if u dont like it u can just leave', it should be people who pride themselves on clearheaded thinking.
Actually, I could imagine you reading that comment and feeling it still misses your point that 0.999... is undefined or has different definitions or senses in amateur discussions. In that case, I would point to the idea that one can make propositions about a primitive concept that turn out to be false about the mature form of it. One could make claims about evidence, causality, free will, knowledge, numbers, gravity, light, etc. that would be true under one primitive sense and false under another. Then minutes or days or months or years or centuries or millennia later, it turns out that the claims were false about the correct definition.
It would be a sin of rationality to assume that, since there was a controversy over definitions, and some definitions proved the claim and some disproved it, no side was more right than another. One should study examples of where people made correct claims about fuzzy concepts, to see what we might learn in our own lives about how these things resolve. Were there hints that the people who turned out to be incorrect ignored? Did they fail to notice their confusion? Telltale features of the problem that favoured a different interpretation? Etc.
It's "Here's a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?" Which is just a silly argument to have.
It's not. The "0.999... doesn't equal 1" meme is largely crackpottery, and promotes amateur overconfidence and (arguably) mathematical illiteracy.
Terms are precious real estate, and their interpretations really are valuable. Our thought processes and belief networks are sticky; if someone has a crap interpretation of a term, then it will at best cause unnecessary friction in using it (e.g. if you define the natural numbers to include -1, ..., -10 and have to retranslate theorems because of this), and at worst one will lose track of the translation between interpretations and end up propagating false statements ("2^n can sometimes be less than 2 for n natural").
the correct response (unless they have sufficient real analysis background) is not "Well, here's a proof of that claim", it's "Well, there are various axioms and definitions that lead to that being treated as being equal to 1".
It would be an accurate response (even if not the most pragmatic or tactful) to say, "Sorry, when you pin down what's meant precisely, it turns out to be a much more useful convention to define the proposition 0.999...=1 such that it is true, and you basically have to perform mental gymnastics to try to justify any usage where it's not true. There are technically alternative schemas where this could fail or be incoherent or whatever, but unless you go several years into studying math (and even then maybe only if you become a logician or model theorist or something), those are not what you'll be encountering."
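(For reference, the precise convention being pointed at, sketched in the standard real-analysis reading:)

```latex
% 0.999... is defined as the limit of its partial sums, which is exactly 1:
0.\underbrace{9\ldots9}_{n\text{ nines}} = \sum_{k=1}^{n} \frac{9}{10^{k}}
  = 1 - 10^{-n},
\qquad
0.999\ldots := \lim_{n\to\infty} \bigl(1 - 10^{-n}\bigr) = 1.
```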
One could define 'marble' to mean 'nucleotide'. But I think that somebody who looked down on a geneticist for complaining about people using 'marble' as if it means 'nucleotide', and who said it was a silly argument as if the geneticist and the person who invented the new definition were Just As Bad As Each Other, would be mistaken, and I would suspect they were more interested in signalling their Cleverness via relativist metacontrarianism than getting their hands dirty figuring out the empirical question of which definitions are useful in which contexts.
A recurring problem with these forms of civilizational inadequacy is bystander effect/first-mover disadvantage/prisoners' dilemma/etc, and the obvious solutions (there might be others) are coordination or enforcement. Seeing if there's other solutions and seeing how far people have already run with coordination and enforcement seems promising. Even if one is pessimistic about how easily the problems can be addressed and thinks we're probably screwed anyway but slightly less probably screwed if we try, then the value of information is still very high; this is a common feature of FHI's work, which, by the way, I consider extremely valuable!
What reasons might we have to believe or disbelieve that we can do better than (or significantly improve) governments, the UN, NATO, sanctions, treaty-making, etc.?
Generalising from 'plane on a treadmill': a lot of incorrect answers to physics problems, and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like 'because of Newton's Nth law', 'because of drag', 'because of air resistance', 'but this is unphysical so it must be false', etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physical experiments, since most people don't have a good enough physical intuition to know in advance what types of physical arguments go through, so they should be in a state of epistemic learned helplessness with respect to physics.
Where does one draw the line, if at all? "1+1 does not inherently equal 2; rather, by convention, it is understood to mean 2. The debate is not about the territory, it is about what the symbols on the map mean." It seems to me that--very 'mysteriously'--people who understand real analysis never complain "But 0.999... doesn't equal 1"; sufficient mathematical literacy seems to kill any such impulse, which seems very telling to me.
Good point!