Comment by themajor on Insights from Linear Algebra Done Right · 2019-07-17T11:40:38.020Z · score: 1 (1 votes) · LW · GW

Just to share my two cents on the matter: the distinction between abstract vectors and maps on the one hand, and columns with numbers in them (confusingly also called vectors) and matrices on the other hand, is a central headache for Linear Algebra students across the globe (and by extension also for the lecturers). If the approach this book takes works for you then that's great to hear, but I'm wary of `hacks' like this that only supply a partial view of the distinction. In particular matrix-vector multiplication is used almost everywhere; if you need several translation steps to make use of it, that could be a serious obstacle. Also the base map that limerott mentions is of central importance from a category-theoretic point of view and is essential in certain more advanced fields, for example in differential geometry. I'm therefore not too keen on leaving it out of a Linear Algebra introduction.
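To make the distinction concrete, here is a standard worked example (my own illustration, not taken from the book). The map $T : \mathbb{R}^2 \to \mathbb{R}^2$ defined by $T(x, y) = (y, 0)$ is one fixed abstract object, but the box of numbers representing it depends on the chosen basis. With respect to the standard basis, and with respect to the basis $b_1 = (1,1)$, $b_2 = (1,-1)$:

$$[T]_{\text{std}} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad [T]_{b} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} \end{pmatrix}.$$

Same map, two different matrices; a column of numbers only pins down a vector (and a matrix only pins down a map) after a basis has been fixed.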

Unfortunately I don't really know what to do about this; like I said, this topic has always caused major confusion, and the trade-off between completeness and conciseness is extremely complicated. But do beware that, based on only my understanding of your post, you might still be missing important insights about the distinction between numerical linear algebra and abstract linear algebra.

Comment by themajor on The Competence Myth · 2019-07-01T21:21:35.826Z · score: 1 (1 votes) · LW · GW

I think 'competent' should in this context mean something like 'has the ability to, after being pointed to a gap in the market, build and/or keep functional a company that fills this gap'. This agrees fully with what you said: there is an awful lot of wiggle room between 'total retard' and this sense of competent (in fact, I think almost everybody lives in this wiggle room). Furthermore, I think it makes sense to naively think that the abundance of successful companies suggests a lot of people are competent in this sense, whereas I claim this is not the case.

Comment by themajor on The Competence Myth · 2019-07-01T12:16:26.665Z · score: 6 (4 votes) · LW · GW

Very interesting observations! Personally I'd perhaps phrase it the other way around: not 'incompetence is killing corporations' but something more like 'what changed in the past 70 years that allowed people to build long-lived corporations back then and not now, assuming today's regular company deaths are caused by incompetence?'. My personal guess is that either back when these long-lived companies were founded (~1890s) there was much more low-hanging fruit on the market, allowing less efficient companies to still survive, or alternatively that today's economic environment is much more risk-tolerant, so the selection for competence happens much more *after* founding a company.

I agree fully with the government bureaucracy remark, although I suspect there are a ton of other very important effects at work there too (for example, out of all organisations I expect governments in particular to have high accountability and regular run-ins with Chesterton's fence, both of which increase bureaucratic load).

Comment by themajor on The Competence Myth · 2019-07-01T09:22:24.722Z · score: 5 (3 votes) · LW · GW

I personally think we don't need to posit a mechanism that explains why people's wrong beliefs don't cause immediate disaster for companies. In my worldview this is fully explained by selection effects in the market, both at the level of organisations and at the level of individual employees. Since long-term views are very hard to link to individual outcomes, the selection pressure is weaker here.

I'd like to point out that this does suggest that organisations and companies fail and go bankrupt regularly, we just don't hear that much about the quick failures (which I think fits reasonably well with observations, but I haven't looked into this all that much).

This is in fact also my answer to the non-rhetorical question of why anything works at all. I disagree with Kirkpatrick in attributing this to individuals, which seems to suggest there is some class of millions of managers who have attained some mystical level of competence that somehow doesn't scale to groups.

Comment by themajor on Coherent decisions imply consistent utilities · 2019-05-13T14:14:24.615Z · score: 7 (5 votes) · LW · GW

This is part of the meaning of 'utility'. In real life we often have risk-averse strategies where, for example, a 100% chance of 100 dollars is preferred to a 50% chance of losing 100 dollars and a 50% chance of gaining 350 dollars. But, under the assumption that our risk-averse tendencies satisfy the coherence properties from the post, this simply means that our utility is not linear in dollars. As far as I know this captures most of the situations where risk-aversion comes into play: often you simply cannot tolerate extremely negative outliers, meaning that your expected utility is mostly dominated by some large negative terms, and the best possible action is to minimize the probability that these outcomes occur.
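As a concrete sketch of how a concave utility function produces exactly this preference (my own illustration; the square-root utility and the 200-dollar starting wealth are arbitrary choices):

```python
import math

def expected_utility(outcomes, utility, wealth=200):
    """Expected utility of a bet given as (probability, dollar change) pairs."""
    return sum(p * utility(wealth + dx) for p, dx in outcomes)

u = math.sqrt  # any concave utility function encodes some risk aversion

sure_thing = [(1.0, 100)]               # 100% chance of +$100 (EV = +$100)
gamble     = [(0.5, -100), (0.5, 350)]  # EV = +$125, but risky

# The gamble has the higher expected dollar amount but the lower expected
# utility, so a coherent agent with this utility function refuses it.
print(expected_utility(sure_thing, u))  # ~17.32
print(expected_utility(gamble, u))      # ~16.73
```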

Also there is the following: consider the case where you are repeatedly offered bets like the example you give (B versus C). You know this in advance, and are allowed to redesign your decision theory from scratch (but you cannot change the definition of 'utility' or the bets being offered). What criteria would you use to determine if B is preferable to C? The law of large numbers (/central limit theorem) states that in the long run, with probability 1, the option with higher expected value will give you more utilons - and in fact that this is the only number you need to figure out which option is the better pick in the long run.
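A quick simulation of that long-run claim (a sketch with stand-in bets; the actual B and C are the ones defined in the post above):

```python
import random

def play(bet, rounds):
    """Total utilons from repeatedly taking a bet of (probability, utilons) pairs."""
    total = 0.0
    for _ in range(rounds):
        r, acc = random.random(), 0.0
        for p, u in bet:
            acc += p
            if r < acc:
                total += u
                break
    return total

B = [(1.0, 1.0)]              # a certain 1 utilon        (EV = 1.0)
C = [(0.5, 0.0), (0.5, 3.0)]  # a coin flip for 3 utilons (EV = 1.5)

# Over many repetitions the higher-EV option comes out ahead almost surely.
print(play(B, 100_000), play(C, 100_000))
```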

The tricky bit is the question of whether this also applies to one-shot problems or not. Maybe there are rational strategies that use, say, the aggregate median instead of the expected value, which has the same limit behaviour. My intuition is that this clashes with what we mean by 'probability' - even if this particular problem is a one-off, at least our strategy should generalise to all situations where we talk about probability 1/2, and then the law of large numbers applies again. I also suspect that any agent that uses more information than the expected value to make this decision (in particular, occasionally deliberately chooses the option with lower expected utility) can be cheated out of utilons with clever adversarial selections of offers, but this is just a guess.

Comment by themajor on Tales From the American Medical System · 2019-05-10T10:26:08.916Z · score: 1 (1 votes) · LW · GW

I think your first remark is exactly the point. If the visits are useless then this is a crappy doctor scamming money and time out of patients and insurance companies; if the visits are important then asking OP's friend to come in (for being over 4 months late on a 3-month checkup) sounds very reasonable to me. I think Zyryab's suggestion of asking a doctor to Turing Test this makes a lot of sense - maybe the checkups are more valuable in certain life stages/demographics/early after diagnosis? Maybe the checkup is something more complicated than recording the HbA1c levels? I'm surprised to hear that without outside medical information the doctor is guilty until proven innocent.

Comment by themajor on Tales From the American Medical System · 2019-05-10T10:15:38.823Z · score: 1 (6 votes) · LW · GW

I'm really surprised this is being downvoted so much.

As far as I can tell (and frankly I don't care enough to put serious effort towards finding more information, but I do note nobody in the comments started with "I am a doctor" or "After talking about this with my own doctor, ...") OP's friend was in a life-threatening situation, the solution to which is a renewed insulin prescription. On top of that, the doctor/medical establishment enforces the rule that people (only young people? only people who recently developed diabetes? There could be a good medical reason here, I don't know) with Type I Diabetes have regular checkups.

Now I imagine there are all sorts of reasons for wanting to skip this checkup. Maybe the checkup isn't needed, and is just a money scam (small aside: if my doctor tells me I need a regular checkup, this is not my first thought. But individual situations can vary). Maybe the doctor's schedule is so unreasonable that it's impossible to make an appointment. There could be thousands of valid reasons. The problem as I see it is that, from the point of view of both the doctor and the nurse, they are only negotiating over the checkup. You mention right at the start that the nurse offered a solution ("drop everything and come see your doctor tomorrow") - from that point on the situation was no longer life-threatening! There was no realistic scenario in which this would cost your friend more than the plans they made for the next day! You were just haggling over what is more important, your friend's schedule or the rules set by the medical establishment that you need an active prescription to get insulin and you need a checkup to renew your prescription. Guess which one the nurse is going to find more important.

I understand if it feels like your friend is being blackmailed by the doctor (and in fact it seems like they are), but by refusing to visit the next day you are the ones who escalated the situation. And then escalated even further by threatening with media exposure. I think from the point of view of the nurse your friend is showing rather hostile behaviour. I'll take the liberty of going through the phone call as you posted it, filling in how I expect nurses to act:

The nurse tells my friend he needs to go see his doctor, because it has been seven months, and the doctor feels he should see his doctor every three.

Probably standard procedure. At any rate this decision is out of the nurse's hands, so they are just providing information here.

My friend replies that he agrees he should see his doctor, and he has made an appointment in a few weeks when he has the time to do that.
The nurse says that he can’t get his prescription refilled until he sees the doctor.

Still standard. Nurses don't get to overrule conditions doctors set for medication, if the doctor says a checkup is needed then the nurse has no way of handing over insulin.

My friend explains that he does not have the time to drop what he is doing and see the doctor the next day. That he is happy to see the doctor in a few weeks. But that until then, he requires insulin to live.
The nurse says that he can’t get his prescription refilled until he sees the doctor. That if he wants it earlier he can find another doctor.

Still the same issue. The nurse doesn't have the authority to overrule the conditions set by the doctor. Also I'm missing a sentence here: who introduced talking to the doctor the very next day?

My friend explains again that he does not have the time to see any doctor the next day, nor can one find a doctor on one day’s notice in reasonable fashion. And that he has already made an appointment, and needs insulin to live. And would like to speak with the doctor.
The nurse refuses to get the prescription filled. The nurse does not offer to let him speak to the doctor, and says that he can either wait, make an appointment for the next day, or find a new doctor.

So apparently making an appointment on one day's notice is very doable on the doctor's side. By this point you are solidly haggling about time, not medicine. I also think the nurse could have let you speak with the doctor here. But I think it's also plausible that they get (or have gotten in the past) phone calls from all kinds of entitled weirdos who refuse to show up to appointments, and at this moment it's really not clear your friend is not one of them. Why would their day plans be more important?

My friend points out that without insulin, he will die. He asks if the nurse wants him to die. Or what the nurse suggests he do instead, rather than die.
This seems not to get through to the nurse, because my friend asks these questions several times. The nurse does not offer to refill the prescription, or let my friend talk to the doctor.
My friend says that if the doctor does not give him access to life saving medicine and instead leaves him to die, he will post about it on social media.
The nurse now decides, for the first time in the conversation, that my friend should perhaps talk to his doctor.

Really? Your friend escalates from "I don't want to visit you tomorrow" to "that means you must want me to die", which of course the sensible nurse ignores, and your strategy was to repeat it a few more times? Yeah, you really showed them there. I bet the nurse immediately realised they were wrong the first time, and connected you through with the doctor before you got to the third repetition. From their point of view you've refused a good solution to the problem and are now just bugging them to make your life easier (who likes going to checkups? Nobody. So who haggles about not wanting to show up? Well, not everybody, but more than just your friend I bet). And at that point your strategy is to escalate even more by threatening media exposure, and put even more pressure on that poor nurse? I'm not surprised the doctor claimed you are blackmailing them after this.

What was your goal in the conversation with the nurse in the first place? You need a doctor's prescription for the insulin, so shouldn't you have aimed for talking with the doctor? And if that was your goal, what purpose did it serve to tighten the screws on the nurse? You should have acted like a model patient and calmly requested to speak with the doctor, who can (and did) overrule the normal medical process just to give you life-saving medicine.

I guess that became a far longer monologue than I planned; I'm not going to go through the phone call with the doctor because it's just more of the same. I think OP is in the wrong here, at the very least in their interaction with the nurse. And I do agree that this is a bad medical system, but you really can't throw the co-pay costs, the lack of automatic prescription extensions/sufficiently large prescriptions to last you a long time, and your interaction with the nurse and doctor onto one heap and pretend this is all the fault of "the American medical system". The overall structure sucks, but some of these people are just local actors who cannot make a change, and your friend threatened them just to avoid having to change his schedule.

Comment by themajor on Best reasons for pessimism about impact of impact measures? · 2019-05-03T16:54:19.675Z · score: 13 (5 votes) · LW · GW

I have a bit of time on my hands, so I thought I might try to answer some of your questions. Of course I can't speak for TurnTrout, and there's a decent chance that I'm confused about some of the things here. But here is how I think about AUP and the points raised in this chain:

  • "AUP is not about the state" - I'm going to take a step back, and pretend we have an agent working with AUP reasoning. We've specified an arcane set of utility functions (based on air molecule positions, well-defined human happiness, continued existence, whatever fits in the mathematical framework). Next we have an action A available, and would like to compute the impact of that action. To do this our agent would compare how well it would be able to optimize each of those arcane utility functions in the world where A was taken, versus how well it would be able to optimize these utility functions in the world where the rest action was taken instead. This is "not about state" in the sense that the impact is determined by the change in the ability for the agent to optimize these arcane utilities, not by the change in the world state. In the particular case where the utility function is specified all the way down to sensory inputs (as opposed to elements of the world around us, which have to be interpreted by the agent first) this doesn't explicitly refer to the world around us at all (although of course implicitly the actions and sensory inputs of the agent are part of the world)! The thing being measured is the change in ability to optimize future observations, where what is a 'good' observation is defined by our arcane set of utility functions.
  • "overfitting the environment" - I'm not too sure about this one, but I'll have a crack at it. I think this should be interpreted as follows: if we give a powerful agent a utility function that doesn't agree perfectly with human happiness, then the wrong thing is being optimized. The agent will shape the world around us to what is best according to the utility function, and this is bad. It would be a lot better (but still less than perfect) if we had some way of forcing this agent to obey general rules of simplicity. The idea here is that our bad proxy utility function is at least somewhat correlated with actual human happiness under everyday circumstances, so as long as we don't suddenly introduce a massively powerful agent optimizing something weird (oops) to massively change our lives we should be fine. So if we can give our agent a limited 'budget' - in the case of fitting a curve to a dataset this would be akin to the number of free parameters - then at least things won't go horribly wrong, plus we expect these simpler actions to have less unintended side-effects outside the domain we're interested in. I think this is what is meant, although I don't really like the terminology "overfitting the environment".
  • "The long arms of opportunity cost and instrumental convergence" - this point is actually very interesting. In the first bullet point I tried to explain a little bit about how AUP doesn't directly depend on the world state (it depends on the agent's observations, but without an ontology that doesn't really tell you much about the world), instead all its gears are part of the agent itself. This is really weird. But it also lets us sidestep the issue of human value learning - if you don't directly involve the world in your impact measure, you don't need to understand the world for it to work. The real question is this one: "how could this impact measure possibly resemble anything like 'impact' as it is intuitively understood, when it doesn't involve the world around us?" The answer: "The long arms of opportunity cost and instrumental convergence". Keep in mind we're defining impact as change in the ability to optimize future observations. So the point is as follows: you can pick any absurd utility function you want, and any absurd possible action, and odds are this is going to result in some amount of attainable utility change compared to taking the null action. In particular, precisely those actions that massively change your ability to make big changes to the real world will have a big impact even on arbitrary utility functions! This sentence is so key I'm just going to repeat it with more emphasis: the actions that massively change your ability to make big changes in the world - i.e. massive decreases of power (like shutting down) but also massive increases in power - have big opportunity costs/benefits compared to the null action for a very wide range of utility functions. So these get assigned very high impact, even if the utility function set we use is utter hokuspokus! Now this is precisely instrumental convergence, i.e. the claim that for many different utility functions the first steps of optimizing them involves "make sure you have sufficient power to enforce your actions to optimize your utility function". So this gives us some hope that TurnTrout's impact measure will correspond to intuitive measures of impact even if the utility functions involved in the definition are not at all like human values (or even like a sensible category in the real world at all)!
  • "Wirehead a utility function" - this is the same as optimizing a utility function, although there is an important point to be made here. Since our agent doesn't have a world-model (or at least, shouldn't need one for a minimal working example), it is plausible the agent can optimize a utility function by hijacking its own input stream, or something of the sorts. This means that its attainable utility is at least partially determined by the agent's ability to 'wirehead' to a situation where taking the rest action for all future timesteps will produce a sequence of observations that maximizes this specific utility function, which if I'm not mistaken is pretty much spot on the classical definition of wireheading.
  • "Cut out the middleman" - this is similar to the first bullet point. By defining the impact of an action as our change in the ability to optimize future observations, we don't need to make reference to world-states at all. This means that questions like "how different are two given world-states?" or "how much do we care about the difference between two two world-states?" or even "can we (almost) undo our previous action, or did we lose something valuable along the way?" are orthogonal to the construction of this impact measure. It is only when we add in an ontology and start interpreting the agent's observations as world-states that these questions come back. In this sense this impact measure is completely different from RR: I started to write exactly how this was the case, but I think TurnTrout's explanation is better than anything I can cook up. So just ctrl+F "I tried to nip this confusion in the bud." and read down a bit.
Comment by themajor on 1960: The Year The Singularity Was Cancelled · 2019-04-23T12:50:19.211Z · score: 3 (2 votes) · LW · GW

Thanks, that addresses the concerns I had!

Comment by themajor on 1960: The Year The Singularity Was Cancelled · 2019-04-23T09:14:10.958Z · score: 2 (2 votes) · LW · GW

I think the evidence presented is way too weak to support the type of conclusions drawn in this piece. I mean really, we're computing doubling times by taking logarithms of estimated GDP, inserting an arbitrary offset in our definition of the horizontal axis and then plotting THAT on a log-log scale? What were you expecting to find?

More specifically: the horizontal labels of the most recent data points are heavily influenced by the particular choice of the 2020 offset. I've taken the liberty of repeating (I hope) Scott's analysis with the data from the paper; swapping the offset to 2050 or even 2100 bunches the last data points a lot closer together, allowing a linear fit to pretty much pass through them. I think some argument can be made that we need a higher time resolution in an era with a doubling time of ~20 years compared to an era with a doubling time of ~500 years, but I'm still not happy with how sensitive this analysis is and would love to hear why 2020 is a better choice than 2100.
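For anyone who wants to poke at this themselves, roughly the transformation I mean (a sketch with placeholder numbers, not the paper's actual series):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder (year, gross world product) points standing in for the paper's
# data; the sensitivity argument doesn't depend on the exact values.
years = np.array([-10000, -5000, -1000, 0, 1000, 1500, 1800, 1900, 1950, 2000])
gwp   = np.array([1e9, 4e9, 2e10, 6e10, 1.2e11, 2.5e11, 7e11, 2e12, 5e12, 4e13])

# Doubling time between consecutive points, from the local growth rate.
growth   = np.diff(np.log(gwp)) / np.diff(years)
doubling = np.log(2) / growth
mid      = (years[:-1] + years[1:]) / 2

# The horizontal axis is "years before <offset>" on a log scale, so where
# the most recent points land depends heavily on the arbitrary offset.
for offset in (2020, 2100):
    plt.loglog(offset - mid, doubling, "o", label=f"offset {offset}")

plt.xlabel("years before offset")
plt.ylabel("doubling time (years)")
plt.legend()
plt.show()
```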

Also I notice that Scott left a bunch of data points from the paper out of the graph. I can live with excluding the really early ones (before 10000 B.C.), but why do you skip over the ones near 0 A.D.? The 1100-1200's? And where are the data points with negative doubling times (i.e. declining GDP)? Maybe I missed it but I don't see mention of these at all.

Comment by themajor on Rule Thinkers In, Not Out · 2019-02-28T23:24:11.376Z · score: 6 (4 votes) · LW · GW

Yes, I think you're right. Personally I think this is where the charitable reading comes in. I'm not aware of Einstein specifically stating that there have to be hidden variables in QM, only that he explicitly disagreed with the nonlocality (in the sense of general relativity) of Copenhagen. In the absence of experimental proof that hidden variables are wrong (through the EPR/Bell test experiments) I think hidden variables was the main contender for a "local QM", but all the arguments I can find Einstein supporting are more general/philosophical than this. In my opinion most of these criticisms still apply to the Copenhagen Interpretation as we understand it today, but instead of supporting hidden variables they now support [all modern local QM interpretations] instead.

Or more abstractly: Einstein backed a category of theories, and the main contender of that category has been solidly busted (ongoing debate about hidden variables blah blah blah I disagree). But even today I think other theories in that pool still come ahead of Copenhagen in likelihood, so his support of the category as a whole is justified.

Comment by themajor on Rule Thinkers In, Not Out · 2019-02-28T15:27:55.422Z · score: 11 (4 votes) · LW · GW

I feel like I'm walking into a trap, but here we go anyway.

Einstein disagreed with some very specific parts of QM (or "QM as it was understood at the time"), but also embraced large parts of it. Furthermore, on the parts Einstein disagreed with there is still to this day ongoing confusion/disagreement/lack of consensus (or, if you ask me, plain mistakes being made) among physicists. Discussing interpretations of QM in general and Einstein's role in them in particular would take way too long but let me just offer that, despite popular media exaggerations, with minimal charitable reading it is not clear that he was wrong about QM.

I know far less about Einstein's work on a unified field theory, but if we're willing to treat absence of evidence as evidence of absence here then that is a fair mark against his record.

Comment by themajor on So You Want to Colonize The Universe · 2019-02-28T15:14:53.910Z · score: 0 (3 votes) · LW · GW

I think this is an interesting idea, but it doesn't really intersect with the main post. The marginal benefits of reaching a galaxy earlier are enormous. This means that if we are ever in the situation where we have some probes flying away, and we have the option right now to build faster ones that can catch up, then this makes the old probes completely obsolete even if we give the new ones identical instructions. The (sunk) cost of the old probes/extra new probes is insignificant compared to the gain from earlier arrival. So I think your strategy is dominated by not sending probes that you feel you can catch up with later.

Comment by themajor on One Website To Rule Them All? · 2019-01-24T08:48:00.234Z · score: 1 (1 votes) · LW · GW

Well, I still don't have any experience with this. But maybe possible avenues include:

  • Looking into moderation rules.
  • Including some kind of reputation/point/reward system, and other methods to keep your users engaged.
  • Tracking metrics on the growth of the Site, and ideally having some advance expectations/plans on how to respond to different rates of growth/decline.
  • A more radical approach might be to give up phases 2 and beyond in their entirety, and settle for a target audience of people close enough to you that you can reasonably trust them.

The survivorship bias is a very valid point, but [not doing research on how to make websites grow] is also a poor strategy. Personally I'd still look into the advice, but I'm afraid what you're trying to do is simply very difficult.

Comment by themajor on One Website To Rule Them All? · 2019-01-23T12:06:29.822Z · score: 1 (1 votes) · LW · GW

Epistemic status: worried about effort/time lost.

I am by no means experienced with any of this, and seriously considered not writing anything at all. But it only takes me a bit of time (an hour max) to write down why I feel the odds are very strongly against you, and if you are serious about pursuing this idea then even at a low probability of my comment being helpful to you, writing it is worth it on average. So here we go.

During my read of the post, top-to-bottom, at the part

On matters of truth, it needs to support epistemic arguments for why we should believe or not believe particular claims. On matters of action, it needs to provide important pro/cons of taking that action. Site must have a method of allowing the best arguments to rise to the top.

my internal monologue went "The first bit is difficult but perhaps possible. The second is a mess. Oh dear, the third is basically impossible!". The sentence immediately after, explaining that this functionality would be the bare basics, shocked me quite a lot. I think aiming for the quoted section is nigh-impossible, and that's before we get to the possible additional features you mention. Your post strongly reminds me of Benjamin Hoffman's piece on Anglerfish (in my opinion worth reading in full), and also a bit of a segment (near the start) in one of Eliezer's posts on security mindset - where the character Amber makes the mistake of thinking that the critical part of her startup is the technology, when really it is the security. I think in a similar manner your Site would, besides depending on the UI, the back-end, the marketing etc., also depend critically on its ability to continue growing during certain critical phases, and the lack of discussion of this as a plausible failure mode is making me rather pessimistic.

In my mind, conditional on Site eventually operating as intended, it should grow through several phases. First you have a low number of users (~100 regular users? Sorry, I don't have experience with this) who basically filtered in from your social circles, and are able to aggregate their opinions/thoughts as intended. Then in the next phase Site grows more popular as people notice this is a valuable source of truth/plans/speculation, and they provide new questions and answers covering broader topics. After that there should be some third phase where Site is diverse and big enough that all those extra features you mentioned might become plausible to implement (I'll come back to this later).

My problem lies with the second phase. Benjamin's piece suggests that as soon as Site is big enough to have any real value, this immediately creates incentives for outsiders to try to abuse/free-ride on the project (for example through manipulating the questions or voting). This would be worse for discussions of *actions*, which is why at the start I mentioned that those are more difficult than discussing *truth*. Your wish to keep Site crowd-sourced makes it more difficult to guard against this phenomenon, and to me Eliezer's writing on security mindset suggests that if you don't treat this problem as central the odds are strongly against you. It is unclear to me what motivates people to keep coming back to Site in this second phase if they disagree with a large part of the demographic/consensus, or in general why echo-chamber effects would not apply. In fact, it is unclear to me why people would spend time participating in discussions outside their immediate interests at all (see also for example evaporative cooling).

Lastly I think a large part of Site would only function once you have some critical mass of users to sustain discussion on a lot of different topics. This is troubling, as it means those parts will exist at all only if Site is already a success. In the spirit of "If you're not growing you're shrinking" I think a lot more time and effort should be focused on figuring out how to obtain and keep a userbase; introducing fancy features is downstream from this.

Sorry for being so critical and nonconstructive. I don't know how to solve any of these problems, but like I said at the start it felt like a wrong strategy to just stay quiet. I hope I'm wrong about most/all of this, and let me in closing mention again that I don't have experience with this at all.

Comment by themajor on Playing Politics · 2018-12-07T11:09:53.502Z · score: 1 (1 votes) · LW · GW

In one of my social groups 'send out a Doodle' has become a meme indicating that a committee has failed at organising itself/is never going to be productive, and the chairman of the committee (thankfully we always assign this role) has to take action right now and suggest a meeting time and place.

In my personal experience Doodle is useful but also one of those trivial inconveniences (reading an email, clicking a link, checking 10 suggested times against your calendar, waiting for another email telling you a timeslot has been chosen, putting the time into your calendar), which is a downside.

Comment by themajor on Playing Politics · 2018-12-07T11:02:19.634Z · score: 2 (2 votes) · LW · GW

I don't think this is different at all. It just sounds like nobody took charge in explicitly defining the sub-committees, so instead the socially savvy committee members self-organised to actually get things* done.

*these `things' may or may not align with the purpose of the committee.

Comment by themajor on A compendium of conundrums · 2018-11-07T21:43:39.889Z · score: 1 (1 votes) · LW · GW

Prisoners and boxes: yeah, we are probably thinking of the same solution. Vg vaibyirf gur cebonovyvgl bs svaqvat n x-plpyr va na neovgenel ryrzrag bs gur crezhgngvba tebhc ba a ryrzragf.

Battleships: that's the intended solution, yes. I don't know of any nicer one

Comment by themajor on A compendium of conundrums · 2018-11-07T11:13:17.109Z · score: 1 (1 votes) · LW · GW

I know the 7 races solution, but this proof that 6 doesn't work is nice!

Comment by themajor on A compendium of conundrums · 2018-11-07T11:10:10.964Z · score: 1 (1 votes) · LW · GW

V'z abg fher vs jr'er raqvat hc jvgu gur fnzr fgenvtug yvar gubhtu? Znlor zl frg bs genafpraqragny rdhngvbaf unf gur fvzcyr fbyhgvba bs gur gnatrag, juvpu V'ir bireybbxrq, ohg V guvax gur natyr bs gur yvar j.e.g. gur pvepyr bs enqvhf 1/z fubhyq qrcraq ba gur png'f fcrrq.

Comment by themajor on A compendium of conundrums · 2018-11-06T14:55:30.193Z · score: 1 (1 votes) · LW · GW

Ahh, I love puzzles like these! I knew some, but not all of them, so thank you for posting this here! Some thoughts:

25 Horses:

Does anybody have a short proof of optimality, i.e. that if you claim the solution is $n$ races, it cannot be accomplished in $n-1$?

Pirate treasure:

I think some sort of tie-break information is required: if a pirate knows that their vote has no influence on the number of coins they will obtain, will they vote 'no' out of spite? Or 'yes' to preserve the crew?

Knights and knaves, and What is the name of this god:

If you're willing to spend considerable time you can find a fully general solution to puzzles of this type, although the answers will be long logical constructions.

Blind maze:

Is there some short elegant solution to this? The shortest answer I have is some concatenation of solutions that works but is ugly.

Prisoners and boxes:

Am I confused, or is there a 100% guaranteed strategy to save all prisoners? There is a similar but harder problem where the game is not `made easier' and the prisoners are just sent in (one by one) without warning, and if they all find their own piece of paper they are all set free but if at least one fails then they are all kept in prison. Can you find the optimal solution (along with its expected success rate) now?

Nine dots puzzle:

There are actually two different solutions to this one!

And lastly a contribution of my own:

Having become bored with the classical game of Battleship, you and a friend decide to upgrade the rules a bit. Instead of playing on an nxn-square you will play on the entire planar lattice. Furthermore, you always found it unrealistic that these ships are just sitting there while they are getting bombed, so now at the start of the game you pick a movement vector (with integer components) for each ship, and at the start of each of your turns each ship moves by its movement vector. For simplicity we assume ships can occupy the same square at the same time. To clarify: you still only command the standard fleet of the original Battleship game.

Find a shooting strategy which is guaranteed to sink all your friend's ships.

Hint: Svefg pbafvqre gur pnfr jvgu bayl bar 1k1-fvmrq fuvc cynlvat ba gur yvar bs vagrtref, vafgrnq bs gur 2Q ynggvpr.
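(Spoiler warning: the sketch below gives away the hinted sub-problem. A rough outline of the standard enumeration trick for the 1x1 ship on the line of integers - my own write-up, not part of the original puzzle statement:)

```python
from itertools import count

def all_pairs():
    # Enumerate every (start, velocity) pair of integers exactly once,
    # in square rings of increasing size, so any given pair is reached
    # after finitely many steps.
    for r in count(0):
        for a in range(-r, r + 1):
            for b in range(-r, r + 1):
                if max(abs(a), abs(b)) == r:
                    yield a, b

def shots():
    # On turn t, guess that the ship started at a with velocity b, where
    # (a, b) is the t-th pair; if that guess is right, the ship now sits
    # at a + t*b. The true pair eventually comes up, and that shot hits.
    for t, (a, b) in enumerate(all_pairs()):
        yield a + t * b
```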

Comment by themajor on A compendium of conundrums · 2018-11-06T14:27:17.015Z · score: 1 (1 votes) · LW · GW

I disagree. EDIT: I'm no longer sure I disagree

Sbe n svkrq fcrrq $z$, gur bcgvzny fbyhgvba sbe gur qhpx vf gb svefg tb gb (gur vafvqr bs) gur pvepyr jvgu enqvhf $1/z$ naq pvepyr guvf hagvy vg unf ernpurq bccbfvgr cunfr bs gur png - juvpu nterrf jvgu lbhe fbyhgvba. Gur erznvavat dhrfgvba vf ubj gb ernpu gur rqtr sebz guvf arj fgnegvat cbfvgvba.

N sevraq bs zvar cbvagrq bhg gung nsgre guvf cbvag gur png jvyy or pvepyvat gur bhgfvqr bs gur pvepyr ng znkvzhz fcrrq va n fvatyr qverpgvba (nf bccbfrq gb gheavat nebhaq rirel abj naq ntnva). Guvf qverpgvba vf qrgrezvarq ol gur genwrpgbel bs gur qhpx vzzrqvngryl nsgre oernxvat $e = 1/z$, ohg nsgre gung vf svkrq fvapr gur nathyne fcrrq bs gur qhpx vf ybjre guna gung bs gur png. Gurersber sbe nal tvira cbvag ba gur havg pvepyr, ertneqyrff bs gur pheir jr pubbfr gb trg gurer, gur png jvyy or geniryvat gb gung cbvag ng znkvzhz fcrrq bire gur ybatre nep (juvpu jr pna rafher ol gnxvat na neovgenel fznyy qrgbhe) bs gur havg pvepyr. Va bgure jbeqf: gur qhpx'f genwrpgbel qbrf abg vasyhrapr gur png nalzber. Gurersber gur bcgvzny cngu gb gnxr sebz urer ba bhg vf fvzcyl gur fubegrfg cngu, v.r. n fgenvtug yvar. Nyy gung erznvaf gb or qrgrezvarq vf juvpu fgenvtug yvar gb gnxr.

Gelvat gb fbyir sbe juvpu natyr vf bcgvzny yrnqf gb n flfgrz bs gjb genaprqragny rdhngvbaf va gjb inevnoyrf (nf vf bsgra gur pnfr jvgu gurfr tbavbzrgevp ceboyrzf), ohg ertneqyrff gurl ner abg pbzcngvoyr jvgu lbhe qrfpevcgvba bs enqvhf naq natyr nf n shapgvba bs gvzr.

Comment by themajor on A compendium of conundrums · 2018-11-06T13:50:25.484Z · score: 1 (1 votes) · LW · GW

Similar puzzles to this one sometimes allow `out-of-the-box' thinking, where you use a single cut (as in: cleaving action) to cut vertically through all links in a single chain, producing 6 half-links at once.

Comment by themajor on Birth order effect found in Nobel Laureates in Physics · 2018-10-16T10:57:23.210Z · score: 2 (2 votes) · LW · GW

You're right, I should have double-checked the standard deviations before suggesting regression to the mean. I agree that regression doesn't plausibly explain the data.

Comment by themajor on Are you in a Boltzmann simulation? · 2018-10-16T10:40:21.634Z · score: 3 (2 votes) · LW · GW

While insightful I think the numbers are not to be taken too seriously - surely the uncertainty about the model itself (for example the uncorrelated nature of quantum fluctuations all the way up to mass scales of 1kg) is much larger than the in-model probabilities given here?

Comment by themajor on Birth order effect found in Nobel Laureates in Physics · 2018-10-15T11:31:48.584Z · score: 1 (1 votes) · LW · GW

I would suggest Regression to the Mean instead - we are only interested in this hypothesis because of its unusually high number on the survey in the first place.

Comment by themajor on Turning Up the Heat: Insights from Tao's 'Analysis II' · 2018-08-31T20:00:04.221Z · score: 13 (4 votes) · LW · GW

Hey there, sorry for the late reply. I wanted to let you know that every now and again I answer TurnTrout's math questions via Discord, and wanted to let you (and anybody else reading this while working through undergrad math textbooks) know that I'd love to help if you have any questions! I'm a math grad student and have been a teaching assistant for over 5 years now, and honestly I just love explaining math. While my time is limited and irregular, please don't hesitate to shoot me a question if you're stuck on anything and would like some advice.

Comment by themajor on Boltzmann Brains and Within-model vs. Between-models Probability · 2018-08-16T07:27:01.109Z · score: 1 (1 votes) · LW · GW

Isn't that exactly what the 'takes lots of bits to specify which observer you are' part can take care of, though? Also I'm not sure what it means for a world to contain literally infinite copies of something - do you mean that on average there is some fixed (non-zero) density over a finite volume, and the universe is infinitely large? I think this issue with infinities is unrelated to the core point of this post.

Comment by themajor on Boltzmann Brains and Within-model vs. Between-models Probability · 2018-08-14T10:48:11.256Z · score: 1 (1 votes) · LW · GW

Isn't it more natural to let all programs that explain your observations contribute to the total probability, a la 'which hypotheses are compatible with the data'? This method works well in worlds with lots of observers similar to you - on the one hand it takes a lot of bits to specify which observer you are, but on the other hand all the other observers (actually the programs describing the experiences of those observers) contribute to the total posterior probability of that world.
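A back-of-the-envelope version of that cancellation (my own sketch in Solomonoff-style notation, not from the post): if a world-program has description length $K(w)$ and contains $N$ observers similar to you, pointing at any one of them costs roughly $\log_2 N$ extra bits, but all $N$ observer-programs contribute to the posterior:

$$N \cdot 2^{-\left(K(w) + \log_2 N\right)} = 2^{-K(w)},$$

so the penalty for specifying which observer you are washes out once you sum over all programs compatible with your observations.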

Comment by themajor on Book review: Pearl's Book of Why · 2018-08-13T09:33:14.856Z · score: 4 (3 votes) · LW · GW

I'm a month late to the party, but wanted to chime in anyway.

RCTs (and p-values) don't seem to be popular in physics or geology. I'm curious why Pearl doesn't find this worth noting. I've mentioned before that people seem to care about statistical significance mainly where powerful interest groups might benefit from false conclusions.

Yes, there certainly seems to be a correlation. But I think in this case this can be mostly understood by considering the subject matter as a confounding variable - powerful interest groups mostly care about research that can influence policy-making, which are usually the Social Sciences. And it just so happens that designing a replicable study that measures exactly what you are looking for is a lot easier in physics/geology/astronomy than it is in social science (who knew, the brain is complicated). So Social Science gets stuck with proxies and replication crises and wiggle room for foul play. It is a relatively sensible reaction then to demand some amount of standardization (standard statistical tools, standard experimental methods etc.) within the field. Or, put more bluntly, if you cannot have good faith in the truth of the conclusions of your peers' research (mostly just because the subject is more difficult than any training prepared them for, not because your peers are evil) you need to artificially create some common ground to start repairing that sweet exponential growth curve.

Comment by themajor on [Math] Towards Proof Writing as a Skill In Itself · 2018-06-20T09:08:36.443Z · score: 5 (2 votes) · LW · GW

https://en.wikipedia.org/wiki/Constructive_proof

I am very inexperienced in this particular part of formal set theory, but I was always informally told that the main reason for the distinction between constructive and non-constructive proofs is the Curry-Howard Correspondence, which informally states that constructive proofs can be rewritten as computer algorithms, whereas non-constructive proofs cannot.
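A minimal illustration of the correspondence (Lean 4 syntax; the theorem and its name are my own example):

```lean
-- Under Curry-Howard a constructive proof *is* a program: this proof of
-- A ∧ B → B ∧ A is literally the function that swaps the two components
-- of a pair, so it can be run as an algorithm.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```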

Comment by themajor on Oops on Commodity Prices · 2018-06-20T08:42:37.463Z · score: 6 (2 votes) · LW · GW

Epistemic status: uncertain.

While I don't have a full answer for you, there are some ideas that might be worth trying out.

  • Maybe there is a way to do smart-people stuff at your own pace, like learning from books/youtube videos instead of in a public setting. Books and videos have infinite patience. People all have different paces for everything, and if you notice that yours is lower than your peers' it might be worth carefully trying not to keep up for a while. This can be dangerous though, so be cautious with this.
  • Personally I've always been frustrated with my pace of learning, and this feeling has always vanished when I look back and see how far I've come. Some (very good) lecturers explained this to me both in terms of a maze ("At first you don't know which path to take, so you have to spend a lot of time running into dead ends, and then when you're done it looks like you didn't cover that much distance. But really you did.") and in terms of exponential growth ("At the end of a learning curve you look at your rate of progress, which is the derivative of your knowledge with respect to time, and say [I could have gotten here way faster, look at how high the derivative is now and how low my total knowledge level is]. But that's not how exponentials work, since learning also increases the rate of learning."). This has really helped me think of asking 'stupid' questions as an investment, and if I look back at the things I didn't know half a year ago I tend to be quite proud of my growth.
Comment by themajor on Inadequate Equilibria vs. Governance of the Commons · 2018-06-19T13:18:56.832Z · score: 9 (6 votes) · LW · GW

Wow, what a great piece! I'm really surprised by the lack of comments here. This is well-written, informative, and challenges the attitude I had towards coordination failure in a constructive way. Simply amazing.

Comment by themajor on Hold On To The Curiosity · 2018-04-24T09:01:37.178Z · score: 12 (2 votes) · LW · GW

From a purely mathematical point of view I don't see why the exponent should be an integer. But p=2 is preferred over all other real values because of the Central Limit Theorem.

Comment by themajor on The First Rung: Insights from 'Linear Algebra Done Right' · 2018-04-23T14:45:23.705Z · score: 23 (6 votes) · LW · GW

Interesting stuff!

I'm a math grad student and have been an assistant in (amongst other subjects) Linear Algebra courses for almost 5 years now. If you or anybody else here on LessWrong has questions on any math subject hit me up with a message. While I do not have the time to be some sort of online tutor I really love teaching, mathematics is very much my area of expertise and LessWrong readers are (on average) above-average students with more passion for understanding than the average textbook aims for, so I definitely think that investing a non-zero amount of time in this is positive-sum for the community. I'd love to help where I can!

Comment by themajor on The Eternal Grind · 2018-04-03T08:52:58.976Z · score: 6 (2 votes) · LW · GW

In case you're wondering why Midrange is called Midrange, here's a 'short' explanation from Hearthstone.

Generally speaking, for a card to make it all the way from your collection into a game it has to pass through three bottlenecks: first you have to put it in your deck, then you have to draw it during a game, and lastly you have to play it. In Hearthstone there are roughly 4 types of decks (this is a Fake Framework but it's a very good one): Aggro, Midrange, Control and Combo. Combo is just weird so we'll ignore it. The other three each target one of these three bottlenecks.

Control usually just says "Bring your puny pile of cards, I'll remove everything and still have gas left over by the end". This targets the bottleneck of putting strong/expensive/powerful cards in your deck, and indeed Control mirror matches can be best understood as each player trying to make a pairing between their 30 cards and their opponents' 30 cards that maximizes the amount of gas they have left at the end of the game ('playing the value game').

Aggro targets the bottleneck of playing the cards from your hand to the board. The usual win condition is just reducing your opponents' life to 0 while they don't have enough mana(/lands) to stop you, despite possibly having very good counters in their deck/hand. The name of the game here is to go fast and make full use of all the mana you have every turn - missing 'the curve' in Hearthstone (spending all your mana each turn) can be instantly losing for an aggressive deck.

Lastly Midrange targets the bottleneck of getting good cards from the deck into your hand. It does this by playing threatening minions as often as possible, drawing out all the good answers from the opponents' hand. An ideal Midrange deck does this each turn, starting at turn one. Usually when a Midrange deck wins a game the opponents' hand is either empty or chock-full of cards that cannot target minions. The reason this strategy is called 'Midrange' is that these decks have completely different roles and strategies depending on the opponents' deck and the state of the game, often switching a few times between aggressor and defender in a single match (I think this link explains this better than I do) - you have to be aggressive if your opponent is sitting on a bunch of minion removal and defensive if they're going to burn through the cards in their hand anyway. This sort of puts these decks in the middle between Aggro and Control. From the point of view of an Aggro player most Midrange decks consist very much of a "bigger minions than thou"-strategy where they survive and then play a big dude, whereas from a Control point of view Midrange is just Aggro with more top-end. Usually Midrange decks pay a price in consistency for trying to do so many things at the same time.

Comment by themajor on Explicit Expectations when Teaching · 2018-02-06T14:34:44.800Z · score: 9 (4 votes) · LW · GW

Personally I really dislike it when concepts are used that you're only supposed to understand later (the most common example is a textbook which gives an equation with several new variables and explains below what those variables mean). It makes me backtrack through my notes the moment I run into the equation, and by the time we get to the explanation my study flow is completely interrupted.

In my experience as lecturer/while giving presentations this can be easily avoided by giving short summaries before making complicated claims. Something along the lines of 'I am now going to tell you X, which requires concept Y that you haven't heard of yet. We need this to ultimately do Z. So bear with me.' I've had a lot of positive feedback on this (and it has the additional benefit of slowing the presentation down and forcing you to make the structure of the talk explicit).

Comment by themajor on Are these arguments valid? · 2018-01-15T15:23:19.081Z · score: 8 (3 votes) · LW · GW

There is a technique in the card game Bridge that is similar to your point [2], so I wanted to mention it briefly (I believe this specific example has been mentioned on LessWrong before but I can't seem to find it).

The idea is that you have to assume that your partner holds cards such that your decisions influence the outcome of the game (i.e. that the part of the universe outside of your control is structured in a way that makes your choices matter). Following this rule will let you play better in the games where it is true, and has no impact on the other games. Is this similar to what you are trying to say?

Comment by themajor on Pascal’s Muggle Pays · 2017-12-17T09:59:41.655Z · score: 3 (2 votes) · LW · GW

Wonderful post! As I mentioned in the comments on Against the Linear Utility Hypothesis and the Leverage Penalty I am in the process of writing a reply to this as well, and about half of what I had planned to write was this. Thank you for writing this up better than I would have!

I would like to add that not paying the mugger before the rift in the sky, but paying him afterwards, might not be solely a result of the decision theory (although it's certainly very related): there is also something interesting going on with the probabilities. It is a priori pretty likely that, given that a poorly-dressed person walks up to you on the street, you are about to hear some outlandish claim and a request for money. Therefore your updated probability of this being a random mugger instead of a matrix lord is still pretty high even after hearing the offer. However this probability vanishes to almost nothingness after observing the reality-breaking sky rift - and how close to nothingness it vanishes is directly dependent on the absurdity of the event witnessed.

In practice I would just pony up the money at the end of your scenario, for the same reasons as you give. So I guess my true rejection of Pascal's Mugging is the one you give. I think I'll still write out the other half of my ideas though, for agents with slightly more computing power than we have.

Comment by themajor on Against the Linear Utility Hypothesis and the Leverage Penalty · 2017-12-15T23:41:12.246Z · score: 2 (2 votes) · LW · GW

I think I will try writing my reply as a full post, this discussion is getting longer than is easy to fit as a set of replies. You are right that my above reply has some serious flaws.

Comment by themajor on Meaning wars · 2017-12-15T13:08:00.525Z · score: -1 (3 votes) · LW · GW

I'm not convinced this idea is useful. You explain that communities can be better understood through what they find meaningful, and how this becomes a valuable signal to identify status in each culture. But if I were to walk up to a culture I had little experience with, I would previously have asked 'What do these people do, who do these people respect and why?'. If I instead ask 'I want to understand this culture, so what do they find meaningful?', how exactly have I become better at understanding the culture?

In other words, could you taboo the word 'meaningful' and perhaps give one or two counterfactual examples of cultural behaviour that could not be explained by it?

Comment by themajor on Against the Linear Utility Hypothesis and the Leverage Penalty · 2017-12-15T11:39:28.777Z · score: 2 (2 votes) · LW · GW

I agree with your conclusions and really like the post. Nevertheless I would like to offer a defense of rejecting Pascal's Mugging even with an unbounded utility function, although I am not all that confident that my defense is actually correct.

Warning: slight rambling and half-formed thoughts below. Continue at own risk.

If I wish to consider the probability of the event [a stranger blackmails me with the threat to harm 3↑↑↑3 people] we should have some grasp of the likeliness of the claim [there exists a person/being that can willfully harm 3↑↑↑3 people]. There are two reasons I can think of why this claim is so problematic that perhaps we should assign it on the order of 1/3↑↑↑3 probability.

Firstly, while the Knuth up-arrow notation is cutely compact, I think it is helpful to consider the set of claims [there exists a being that can willfully harm n people] for each value of n. Each claim implies all those before it, so the probabilities of this sequence should be decreasing. From this point of view I find it not at all strange that the probability of this claim can make explicit reference to the number n and behave as something like 1/n. The point I'm trying to make is that while 3↑↑↑3 is only 5 symbols, the claim to be able to harm that many people is so much more extraordinary than a lot of similar claims that we might be justified in assigning it really low probability. Compare it to an artificial lottery where we force our computer to draw a number between 0 and 10^10^27 (deliberately chosen larger than the 'practical limit' referred to in the post). I think we can justly assign the claim [The number 3 will be the winning lottery ticket] a probability of 1/10^10^27. Something similar is going on here: there are so many hypotheses about being able to harm n people that, in order for the probabilities to sum to 1, we are forced to assign probabilities on the order of 1/n (in fact decaying slightly faster, since the harmonic series diverges) to each of them.
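Slightly more precisely (my own formalization of the normalization point): write $p_n$ for the probability that the largest number of people any being can willfully harm is exactly $n$. These hypotheses are mutually exclusive, so

$$\sum_{n=1}^{\infty} p_n \le 1,$$

and if $p_n \ge c/n$ held for all large $n$ and some fixed $c > 0$ the sum would diverge. So for every $c > 0$ the probability must drop below $c/n$ infinitely often, which is the sense in which claims of astronomical power are forced to carry roughly-$1/n$ priors.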

Secondly (and I hope this reinforces the slight rambling above), consider how you might be convinced that this stranger can harm 3↑↑↑3 people, as opposed to only having the ability to harm 3↑↑↑3 - 1 people. I think the 'tearing open the sky' magic trick wouldn't do it - this will increase our confidence in the stranger being very powerful by extreme amounts (or, more realistically, convince us that we've gone completely insane), but I see no reason why we would be forced to assign significant probability to this stranger being able to harm 3↑↑↑3 people, instead of 'just' 10^100 or 10^10^26 people or something. Or in more Bayesian terms - which evidence E is more likely if this stranger can harm 3↑↑↑3 people than if it can harm, say, only 10^100 people? Which E satisfies P(E|stranger can harm 3↑↑↑3 people) > P(E|stranger can harm 10^100 people but not 3↑↑↑3 people)? Any suggestion along the lines of 'present 3↑↑↑3 people and punch them all in the nose' justifies having a prior of 1/3↑↑↑3 for this event, since showing that many people really is evidence with that likelihood ratio. But, hypothetically, if all such evidence E is of this form, are our actions then not consistent with the infinitesimal prior, since we require this particular likelihood ratio before we consider the hypothesis likely?

I hope I haven't rambled too much, but I think that the Knuth up-arrow notation is hiding the complexity of the claim in Pascal's Mugging, and that the evidence required to convince me that a being really has the power to do as is claimed has a likelihood ratio close to 3↑↑↑3:1.

Comment by themajor on Bayes and Paradigm Shifts - or being wrong af · 2017-12-15T10:56:59.857Z · score: 1 (1 votes) · LW · GW

I think you're making a mistake here. The connection between P(the sun will rise tomorrow/there will be a source of light in the sky tomorrow) and P(the sun will continue to exist forever/this will continue forever) is not trivial, and you are confusing high confidence in P(the sun will rise tomorrow) with a mistaken confidence in the additional claim "and this will continue to be true forever".

I think the correct way to do Bayesian updating in this situation is to consider these two hypotheses separately. P(the sun will rise tomorrow) will behave according to Laplace's rule of succession in your world. But P(the sun will continue to exist forever) should have a very low initial prior, as anything existing forever is an extraordinary claim, and observing an additional sunrise is only weak evidence in its favour. Conversely, the decay of other objects around you (a fire running out, for example) is weak evidence against this claim, if only by analogy.
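As a minimal sketch of the two separate updates (toy numbers of my own choosing, not from the original post):

```python
# Laplace's rule of succession: after s sunrises in n mornings,
# P(sunrise tomorrow) = (s + 1) / (n + 2).
def laplace_next(successes, trials):
    return (successes + 1) / (trials + 2)

p_tomorrow = laplace_next(10_000, 10_000)
print(p_tomorrow)  # ~0.9999: near-certainty about tomorrow

# Separate hypothesis F = "the sun will exist forever", with a very low
# prior. One extra sunrise is certain under F and ~0.9999 under not-F,
# so the likelihood ratio is barely above 1: weak evidence.
prior_F = 1e-12                  # toy prior for the "forever" claim
lr = 1.0 / p_tomorrow            # ~1.0001
odds = (prior_F / (1 - prior_F)) * lr
print(odds / (1 + odds))         # ~1.0001e-12: essentially unmoved
```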

In the spirit of only trying to explain that which is actually true, I think it's also worth noting that the sun visibly changes quite a lot before extinguishing, so an ideal Bayesian would correctly deduce that something extraordinary is happening. In the presence of a sun that will soon extinguish, our Bayesian agent will remark that the inductive argument 'the sun rose every morning of my life, therefore it will exist forever' doesn't properly explain the changes that are visible, and the hypothesis will take a corresponding hit in probability.

Comment by themajor on Qualitative differences · 2017-11-19T17:10:36.197Z · score: 1 (1 votes) · LW · GW

" Suppose you have beliefs A, B, C, and belief D: "At least one of beliefs A, B, C is false." The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent. "

But beliefs are not binary propositions; they are probability statements! It is perfectly consistent to assert that I have ~68% confidence in A, in B, in C, and in "At least one of A, B, C is false".
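To put numbers on that (my own toy calculation, assuming for simplicity that the three beliefs are independent): with P(A) = P(B) = P(C) = 0.68,

$$P(\neg A \lor \neg B \lor \neg C) = 1 - 0.68^3 \approx 0.686,$$

so ~68% confidence in the fourth statement sits perfectly consistently alongside the first three.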

Comment by themajor on Qualitative differences · 2017-11-19T17:05:05.649Z · score: 1 (1 votes) · LW · GW

No, the utility function just needs to be sublinear. A popular example is that many toy models assign log(X) utility to X dollars.
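As a quick illustration (my own numbers) of how a sublinear utility function tames large payoffs without being bounded:

$$\tfrac{1}{2}\log(50) + \tfrac{1}{2}\log(200) = \tfrac{1}{2}\log(10000) = \log(100),$$

so a log-utility agent is exactly indifferent between a sure $100 and a 50/50 gamble on $50 or $200, even though the gamble's expected value is $125.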

Comment by themajor on Multidimensional signaling · 2017-10-24T09:49:30.571Z · score: 1 (1 votes) · LW · GW

I think Berkson's paradox is explained by Why the tails come apart, with the added comment that the explanation seems to apply not just to the tails of the distribution but to any selected band (although the effect is most extreme in the tails). Simpson's paradox is a paradox about ratios and different sample sizes, showing that averaging fractions gives different results than adding numerators and denominators, whereas Berkson's paradox is selection bias (from the Wikipedia page: conditioning on [A or B] anti-correlates A and B).
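A minimal numerical sketch of the Simpson's paradox half of that (toy numbers of my own choosing):

```python
from fractions import Fraction as F

# (successes, attempts) per subgroup.
a = [(1, 5), (6, 8)]   # option A: 20% and 75%
b = [(2, 8), (4, 5)]   # option B: 25% and 80% - better in BOTH subgroups

for (sa, na), (sb, nb) in zip(a, b):
    assert F(sa, na) < F(sb, nb)   # B wins each subgroup...

# ...but adding numerators and denominators flips the conclusion:
pooled_a = F(sum(s for s, _ in a), sum(n for _, n in a))  # 7/13 ~ 0.538
pooled_b = F(sum(s for s, _ in b), sum(n for _, n in b))  # 6/13 ~ 0.462
assert pooled_a > pooled_b         # A wins overall
```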

Comment by themajor on Is Spirituality Irrational? · 2016-02-10T00:26:05.243Z · score: 1 (1 votes) · LW · GW

Could you taboo 'are [...] about' in your "what people call "spiritual experiences" are largely about community and shared subjective experience."?

Also your main point, that religious people reach their conclusions partly because they have experienced different things than non-religious people, is simply true. But why would you write a long, metaphor-riddled piece about this and give it the clickbait title "Is Spirituality Irrational?" And even with this formulation there is still some Motte-and-Bailey going on if you intend to reconcile spirituality and rationality - just because different experiences were a contributing factor in accepting spirituality does not strongly support the claim that spirituality and rationality can go hand-in-hand. Most importantly, your final claim doesn't seem to help in answering my 'core conflict' above.

Comment by themajor on Is Spirituality Irrational? · 2016-02-09T22:57:35.311Z · score: 1 (1 votes) · LW · GW

The strong part of the claim is not "There exists a feeling of belonging, and religion is particularly good at inducing it" or even "Religion is among the best if not outright the very best method for maintaining social cohesion", which as you say are not claims that I think would receive a lot of pushback (here, at least). The strong part is "Do you truly think that most of spirituality is an attempt to communicate a feeling of belonging" - i.e. when the stories found in the Bible were first told, were they claims of truth or mostly persuasion tricks?

I would accept that most of the modern function of spirituality is to provide cohesion, but at the same time spirituality also claims to have insight into some factual matters (history, for example) and moral dilemmas. I don't see how accepting that these insights were generated with the purpose/function of maintaining group cohesion says anything about whether they are true. I think this is the core conflict of Spirituality vs Rationality, the title of the post: not that maintaining group cohesion is irrational, but that accepting answers to factual and sometimes moral questions through dogma instead of evidence cannot be reconciled with rationality.

If there were a spirituality where all the participants acknowledged that the main purpose is group cohesion, all spoken and written text is to be interpreted as metaphor at best and, say, regular church-going makes everybody happier all around, then I think most rationalists would be all for that. But this doesn't look at all like the spirituality found in the world around us.

Comment by themajor on Is Spirituality Irrational? · 2016-02-09T15:43:05.123Z · score: 5 (7 votes) · LW · GW

I have started writing a comment multiple times, only to remove what I wrote mid-sentence. I think I figured out why that is: your post tempts us to argue against the existence of experiences that cannot be communicated (do you mean 'not perfectly communicated' or 'not even hinted at'? Communication is not binary), and with the sentences:

The reason I want to convince you to entertain this notion is that an awful lot of energy gets wasted by arguing against religious beliefs on logical grounds, pointing out contradictions in the Bible and whatnot. Such arguments tend to be ineffective, which can be very frustrating for those who advance them. The antidote for this frustration is to realize that spirituality is not about logic.

you attempt to ban a whole class of arguments that might well be relevant. Your post is a wonderful piece of rhetoric (although some of the analogies get stretched a bit thin), but it hardly communicates anything. Other than

people might profess to believe in God for reasons other than indoctrination or stupidity. Religious texts and rituals might be attempts to share real subjective experiences

there doesn't seem to be a single claim in the whole text. Do you truly think that most of spirituality is an attempt to communicate a feeling of belonging that one also gets when giving up after being bullied for a week? And that this feeling is both incommunicable and easily induced with some practice (you give meditation as an example)?

Comment by themajor on What's wrong with this picture? · 2016-01-28T20:58:40.193Z · score: 1 (1 votes) · LW · GW

I have seen this argument on LessWrong before, and don't think the other explanations are as clear as they could be. They are correct though, so my apologies if this just clutters up the thread.

The Bayesian way of looking at this is clear: the prior probability of any particular sequence is 1/2^[large number]. Alice sees this sequence and reports it to Bob. Presumably Alice intends to tell Bob the truth about what she saw, so let's say there is a 90% chance that she reports correctly; the remaining 10% covers everything from misremembering/misreading a flip to outright lying. The point is that if Alice is lying, this 10% has to be divided up among the 2^[large number]-1 other possible sequences - if Alice is going to lie, any particular sequence is very unlikely to be the one she presents as the true sequence, since there are a lot of ways for her to lie. So, assuming that Alice intended to speak the truth, her giving that sequence is very strong evidence (in my example 9*(2^[large number]-1):1) that that particular sequence was indeed the true one over any specific other sequence - 'coincidentally' precisely strong enough to turn Bob's posterior belief that the sequence is correct into 90%.
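A small numerical sketch of this update (with a toy 10-flip sequence standing in for [large number], and my 90%/10% split):

```python
N = 10                  # number of coin flips
num_seq = 2 ** N        # 1024 possible sequences
prior = 1 / num_seq     # prior for the specific sequence Alice reports

p_report_if_true = 0.9                   # Alice reports the truth
p_report_if_false = 0.1 / (num_seq - 1)  # the 10% spread over all lies

# Bayes: P(sequence is true | Alice reports it)
posterior = (prior * p_report_if_true) / (
    prior * p_report_if_true + (1 - prior) * p_report_if_false
)
print(posterior)  # ~0.9: the tiny prior cancels against the
                  # (num_seq - 1)-way dilution of the lying branch
```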

A fun side remark is that the above also clearly shows why Bob should be more skeptical when Alice presents sequences like HHHHHHHHHH or HTHTHTHTHT. If Alice were planning to lie, these are exactly the sequences she might pick with greater-than-uniform probability out of all the sequences that were not thrown. Each possible actual sequence therefore contributes a higher-than-average amount of probability to Alice presenting one of these special sequences, so Alice informing Bob of such a sequence is weaker evidence for that particular sequence over any other one than it would be in the regular case, and Bob ends up with a lower posterior that the sequence is actually correct.
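Continuing the toy numbers above: if a lying Alice would report a special sequence like HHHHHHHHHH with, say, probability 0.01 rather than the uniform 0.1/1023 (an invented figure, purely for illustration), Bob's posterior collapses:

```python
# Same setup as before, but a lying Alice picks this particular
# 'special' sequence with probability 0.01 instead of 0.1/1023.
p_report_special_if_false = 0.01

posterior_special = (prior * p_report_if_true) / (
    prior * p_report_if_true + (1 - prior) * p_report_special_if_false
)
print(posterior_special)  # ~0.08: far below the ~0.9 for a generic sequence
```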