Posts

Conceptual problems with utility functions, second attempt at explaining 2018-07-21T02:08:44.598Z · score: 15 (5 votes)
Conceptual problems with utility functions 2018-07-11T01:29:42.585Z · score: 23 (12 votes)
Are long-term investments a good way to help the future? 2018-04-30T14:41:56.640Z · score: 28 (7 votes)

Comments

Comment by dacyn on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-24T16:00:17.893Z · score: 1 (1 votes) · LW · GW

The OP didn't give any argument for SPECKS>TORTURE; they said it was "not the point of the post". I agree my argument is phrased loosely, and that it's reasonable to say that a speck isn't a form of torture. So replace "torture" with "pain or annoyance of some kind". It's not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.

Comment by dacyn on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-23T21:06:14.249Z · score: 5 (3 votes) · LW · GW

Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs the certainty of any lesser outcome that is not Archimedean-comparable to it. And if the probabilities are exactly balanced, it is a better use of your time to do more research so that they stop being balanced than to act on the basis of a hierarchically less important outcome.

For example, if we cared infinitely more about not dying in a car crash than about reaching our destination, we would never drive, because there is a small but positive probability of crashing (and the same goes for any degree of horribleness you want to add to the crash, up to and including torture -- it seems reasonable to suppose that leaving your house at all very slightly increases your probability of being tortured for 50 years).
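
Here is a minimal sketch of that collapse, using two-tier lexicographic utilities as a stand-in for a non-Archimedean framework; the stay-home/drive lotteries and all the numbers are only my own illustration:

```python
from fractions import Fraction

def lex_expected(lottery):
    """Expected value of each tier of a two-tier lexicographic utility.

    lottery: list of (probability, (u_hi, u_lo)) pairs, where u_hi is the
    infinitely-more-important tier (e.g. not crashing / not being tortured)
    and u_lo is the lesser tier (e.g. reaching your destination).
    """
    e_hi = sum(p * u[0] for p, u in lottery)
    e_lo = sum(p * u[1] for p, u in lottery)
    return (e_hi, e_lo)  # Python compares tuples lexicographically

eps = Fraction(1, 10**12)                 # tiny probability of the catastrophic outcome
stay_home = [(Fraction(1), (0, 0))]
drive = [(eps, (-1, 0)), (1 - eps, (0, 1))]

# Any eps > 0 makes driving lose on the top tier, so the certain lower-tier
# gain from reaching your destination never matters: you never drive.
print(lex_expected(stay_home) > lex_expected(drive))  # True
```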

For the record, EY's position (and mine) is that torture is obviously preferable. It's true that there will be a boundary of uncertainty regardless of which answer you give, but the two types of boundaries differ radically in how plausible they are:

  • If SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer that 10N people be tortured at level X rather than that N people be tortured at a slightly higher level X'. This is unreasonable, since X is only slightly lower than X', while you are forcing 10 times as many people to suffer the torture.
  • On the other hand, if TORTURE is preferable to SPECKS, then there must exist some number of specks N such that N-1 specks is preferable to torture, but torture is preferable to N+1 specks. But this is not very counterintuitive, since the fact that torture costs around N specks means that N-1 specks is not much better than torture, and torture is not much better than N+1 specks. So knowing exactly where the boundary is isn't necessary to get approximately correct answers.

Comment by dacyn on Explaining "The Crackpot Bet" · 2019-06-24T16:40:17.738Z · score: 9 (4 votes) · LW · GW

To repeat what was said in the CFAR mailing list here: This "bet" isn't really a bet, since there is no upside for the other party; they are worse off than when they started in every possible scenario.

Comment by dacyn on What are your plans for the evening of the apocalypse? · 2018-08-03T23:13:37.877Z · score: 1 (1 votes) · LW · GW

I don't think that chapter is trying to be realistic (it paints a pretty optimistic picture).

Comment by dacyn on Counterfactuals, thick and thin · 2018-08-01T15:58:29.931Z · score: 5 (3 votes) · LW · GW

Sure, in that case there is a 0% counterfactual chance of heads; your words aren't going to flip the coin.

Comment by dacyn on Counterfactuals, thick and thin · 2018-07-31T18:37:57.886Z · score: 7 (4 votes) · LW · GW

The question "how would the coin have landed if I had guessed tails?" seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying "heads" or "tails" (as well as quantum fluctuations). It's not clear to me what the answer to this question is, though I would guess that the coin's counterfactual probability of landing heads is somewhere strictly between 0% and 50%.

Comment by dacyn on The Feedback Problem · 2018-07-30T18:32:13.201Z · score: 3 (3 votes) · LW · GW
Reviewer is obliged to find all errors.

Not true. A reviewer's main job is to give a high-level assessment of the quality of a paper. If the assessment is negative, then usually they do not go looking for all the specific errors in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the authors can revise it before it is published), but even then many reviewers do not provide one (which is why it is common to find peer-reviewed papers with errors in them).

At least, this is the case in math.

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-30T18:21:25.532Z · score: 1 (1 votes) · LW · GW

You don't harbor any hopes that after reading your post, someone will decide to cooperate in the twin PD on the basis of it? Or at least, if they were already going to, that they would conceptually connect their decision to cooperate with the things you say in the post?

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-30T18:07:00.778Z · score: 5 (3 votes) · LW · GW

I am not sure how else to interpret the part of shminux's post quoted by dxu. How do you interpret it?

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-29T17:30:12.692Z · score: 1 (1 votes) · LW · GW

My point was that intelligence corresponds to status in our world: calling the twins not smart means that you expect your readers to think less of them. If you don't expect that, then I don't understand why you wrote that remark.

I don't believe in libertarian free will either, but I don't see the point of interpreting words like "recommending", "deciding", or "acting" to refer to impossible behavior rather than using their ordinary meanings. However, maybe that's just a meaningless linguistic difference between us.

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-29T17:17:25.673Z · score: 3 (2 votes) · LW · GW

A mind-reader looks to see whether this is an agent's decision procedure, and then tortures them if it is. The point of unfair decision problems is that they are unfair.

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-29T17:15:41.677Z · score: 6 (4 votes) · LW · GW

dxu did not claim that A could receive the money with 50% probability by choosing randomly. They claimed that a simple agent B that chose randomly would receive the money with 50% probability. The point is that Omega is only trying to predict A, not B, so it doesn't matter how well Omega can predict B's actions.

The point can be made even more clear by introducing an agent C that just does the opposite of whatever A would do. Then C gets the money 100% of the time (unless A gets tortured, in which case C also gets tortured).

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-28T23:35:13.163Z · score: 4 (3 votes) · LW · GW
I note here that simply enumerating possible worlds evades this problem as far as I can tell.

The analogous unfair decision problem would be "punish the agent if they simply enumerate possible worlds and then choose the action that maximizes their expected payout". Not calling something a decision theory doesn't mean it isn't one.

Comment by dacyn on Decisions are not about changing the world, they are about learning what world you live in · 2018-07-28T23:22:03.140Z · score: 1 (1 votes) · LW · GW
Again, this is just a calculation of expected utilities, though an agent believing in metaphysical free will may take it as a recommendation to act a certain way.

Are you not recommending that agents act in a certain way? You are answering questions from EYNS of the form "Should X do Y?", and answers to such questions are generally taken to be recommendations for X to act in a certain way. You also say things like "The twins would probably be smart enough to cooperate, at least after reading this post", which sure sounds like a recommendation of cooperation (if they do not cooperate, you are lowering their status by calling them not smart).

Comment by dacyn on Conceptual problems with utility functions · 2018-07-28T22:50:45.199Z · score: 1 (1 votes) · LW · GW

Games can have multiple Nash equilibria, but agents still need to do something. The way they are able to do something is that they care about something other than what is strictly written into their utility function so far. So the existence of a meta-level on top of any possible level is a solution to the problem of indeterminacy of what action to take.

(Sorry about my cryptic remark earlier, I was in an odd mood)

Comment by dacyn on Physics has laws, the Universe might not · 2018-07-28T22:43:10.676Z · score: 3 (2 votes) · LW · GW

There I was using "to be" in the sense of equality, which is different from the sense of existence. So I don't think I was tabooing inconsistently.

Comment by dacyn on Computational efficiency reasons not to model VNM-rational preference relations with utility functions · 2018-07-25T23:06:41.441Z · score: 3 (2 votes) · LW · GW

Maybe there is no absolutely stable unit, but it seems that there are units that are more or less stable than others. I would expect a reference unit to be more stable than the unit "the difference in utility between two options in a choice that I just encountered".

Comment by dacyn on Computational efficiency reasons not to model VNM-rational preference relations with utility functions · 2018-07-25T15:35:54.593Z · score: -4 (4 votes) · LW · GW

This seems like a strawman. There's a naive EU calculation that you can do just based on price, tastiness of the sandwich, etc., that gives you what you want. And this naive EU calculation can be understood as an approximation of a global EU calculation. Of course, we should always use computationally tractable approximations whenever we don't have enough computing power to compute an exact value. This doesn't seem to have anything to do with utility functions in particular.

Regarding the normalization of utility differences by picking two arbitrary reference points, obviously if you want to systematize things then you should be careful to choose good units. QALYs are a good example of this. It seems unlikely to me that a re-evaluation of how many QALYs buying a sandwich is worth would arise from a re-evaluation of how valuable QALYs are, rather than a re-evaluation of how much buying the sandwich is worth.

Comment by dacyn on Sleeping Beauty Resolved? · 2018-07-24T22:41:09.068Z · score: 1 (1 votes) · LW · GW

Right, so it seems like our disagreement is about whether it is relevant whether the value of a proposition is constant throughout the entire problem setup, or only throughout a single instance of someone reasoning about that setup.

Comment by dacyn on Conceptual problems with utility functions, second attempt at explaining · 2018-07-22T01:40:24.192Z · score: 1 (1 votes) · LW · GW

I agree with the matching of the concepts, but I don't think it means that there is a clear difference between instrumental and terminal values.

Comment by dacyn on Conceptual problems with utility functions, second attempt at explaining · 2018-07-21T23:03:26.324Z · score: 3 (2 votes) · LW · GW

Fair enough, maybe I don't have enough familiarity with non-MIRI frameworks to make an evaluation of that yet.

Comment by dacyn on A Step-by-step Guide to Finding a (Good!) Therapist · 2018-07-19T19:14:25.848Z · score: 5 (4 votes) · LW · GW

Incidentally here is another rationalist guide on how to get therapy, which I have been told is good.

Comment by dacyn on Sleeping Beauty Resolved? · 2018-07-16T18:45:52.300Z · score: 1 (1 votes) · LW · GW

Hmm. I don't think I see the logical rudeness. I interpreted TAG's comment as "the problem with non-timeless propositions is that they don't evaluate to the same thing in all possible contexts", and I brought up Everett branches in response to that. I interpreted your comment as saying "actually the problem with non-timeless propositions is that they aren't necessarily constant over the course of a computation", so I replied to that, without bringing up Everett branches since they aren't relevant to your comment. Anyway, I'm not sure exactly what kind of explanation you are looking for; it feels like I have explained my position already, but I realize there can be inferential distances.

Comment by dacyn on Conceptual problems with utility functions · 2018-07-16T18:13:05.367Z · score: 1 (1 votes) · LW · GW

Fair enough. Though in this case valuing fairness is a big enough change that it makes a difference to how the agents act, so it's not clear that it can be glossed over so easily.

Comment by dacyn on Conceptual problems with utility functions · 2018-07-15T18:27:35.762Z · score: 1 (1 votes) · LW · GW

It is not the problem, but the solution.

Comment by dacyn on Conceptual problems with utility functions · 2018-07-15T18:26:10.494Z · score: 1 (1 votes) · LW · GW

Sure, their ability to model each other matters. Their inability to model each other also matters, and this is where non-utility values come in.

Comment by dacyn on Conceptual problems with utility functions · 2018-07-15T18:24:15.021Z · score: 1 (1 votes) · LW · GW

I don't understand what it means to say that an agent who "values fairness" does better than another agent. If two agents have different value systems, how can you say that one does better than another? Regarding EY and the Prisoner's Dilemma, I agree that EY is making that claim but I think he is also making the claim "and this is evidence that rational agents should use FDT".

Comment by dacyn on Conceptual problems with utility functions · 2018-07-15T18:24:11.409Z · score: 11 (3 votes) · LW · GW

1) The notion of a "perfectly selfish rational agent" presupposes the concept of a utility function. So does the idea that agent A's strategy must depend on agent B's, which must depend on agent A's. It doesn't need to depend; you can literally just do something. And that is what people do in real life. And it seems silly to call it "irrational" when the "rational" action is a computation that doesn't converge.

2) I think humanity as a whole can be thought of as a single agent. Sure maybe you can have a person who is "approximately that selfish", but if they are playing a game against human values, there is nothing symmetrical about that. Even if you have two selfish people playing against each other, it is in the context of a world infused by human values, and this context necessarily informs their interactions.

I realize that simple games are only a proxy for complicated games. I am attacking the idea of simple games as a proxy for attacking the idea of complicated games.

3) Eliezer definitely says that when your decision is "logically correlated" with your opponent's decision then you should cooperate regardless of whether or not there is anything causal about the correlation. This is the essential idea of TDT/UDT. Although I think UDT does have some valuable insights, I think there is also an element of motivated reasoning in the form of "it would be nice if rational agents played (C,C) against each other in certain circumstances rather than (D,D), how can we argue that this is the case".

Comment by dacyn on Repeated (and improved) Sleeping Beauty problem · 2018-07-12T16:57:23.265Z · score: 3 (2 votes) · LW · GW

I'm confused, isn't the "objective probability" of heads 1/2 because that is the probability of heads in the definition of the setup? The halfer versus thirder debate is about subjective probability, not objective probability, as far as I can tell. I'm not sure why you are mentioning objective probability at all; it does not appear to be relevant. (Though it is also possible that I do not know what you mean by "objective probability".)

Comment by dacyn on The Dilemma of Worse Than Death Scenarios · 2018-07-11T03:46:54.966Z · score: 2 (2 votes) · LW · GW
As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape the scenario. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place.

There seems to be a leap of logic here. One can strongly prefer an outcome without being "willing to do anything" to ensure it. Furthermore, just because someone in an extreme situation has an extreme reaction to it does not mean that we need to take that extreme reaction as our own -- it could be that they are simply being irrational.

In addition, the very low probability of the scenarios is balanced by their extreme disutility. This inevitably leads to Pascal's Mugging.

I am confused -- being a Pascal's Mugging is usually treated as a negative feature of an argument?

I do think that it is worthwhile to work to fight S-risks. It's not clear to me that they are the only thing that matters. The self-interestedness frame also seems a little off to me; to be honest, if you're selfish, I think the best thing to do is probably to ignore the far future and just live a comfortable life.

Solving AI alignment doesn't seem like the easiest way for humanity to do a controlled shutdown, if we decide that that's what we need to do. Of course, it may be more feasible for political reasons.

Comment by dacyn on Repeated (and improved) Sleeping Beauty problem · 2018-07-11T01:29:56.692Z · score: 3 (2 votes) · LW · GW

This argument seems to depend on the fact that Sleeping Beauty is not actually copied, but just dissociated from her past self, so that from her perspective it seems like she is copied. If you deal with actual copies then it is not clear what the sensible way is for them to record their experiences: all pass around a single science journal, all keep their own journals, all keep their own and then recombine them somehow, or whatever. Though if this thought experiment gives you SIA intuitions on the Sleeping Beauty problem then maybe those intuitions will still carry over to other scenarios.

Comment by dacyn on Probability is fake, frequency is real · 2018-07-11T01:07:26.691Z · score: 3 (2 votes) · LW · GW

I don't know what you mean by "should be allowed to put whatever prior I want". I mean, I guess nobody will stop you. But if your beliefs are well approximated by a particular prior, then pretending that they are approximated by a different prior is going to cause a mismatch between your beliefs and your beliefs about your beliefs.

[Nitpick: The Kelly criterion assumes not only that you will be confronted with a large number of similar bets, but also that you have some base level of risk-aversion (concave utility function) that repeated bets can smooth out into a logarithmic utility function. If you start with a linear utility function then repeating the bets still gives you linear utility, and the optimal strategy is to make every bet all-or-nothing whenever you have an advantage. At least, this is true before taking into account the resource constraints of the system you are betting against.]
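
As a rough numerical illustration of the bracketed point (the win probability and the grid of stake sizes below are arbitrary choices of mine):

```python
import numpy as np

# Repeated even-money bet won with probability p; you stake a fixed fraction f
# of your bankroll each round.
p = 0.6
fs = np.linspace(0.0, 0.99, 1000)

# Log utility: per-round expected log growth is maximized at the Kelly fraction 2p - 1.
log_growth = p * np.log(1 + fs) + (1 - p) * np.log(1 - fs)
print(fs[np.argmax(log_growth)])   # ~0.20

# Linear utility: per-round expected gain is 1 + (2p - 1) * f, increasing in f,
# so the "optimal" stake is everything you have (here, the top of the grid).
linear_gain = p * (1 + fs) + (1 - p) * (1 - fs)
print(fs[np.argmax(linear_gain)])  # 0.99
```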

Comment by dacyn on Stories of Summer Solstice · 2018-07-09T06:19:16.406Z · score: 10 (4 votes) · LW · GW

Regarding silence after the last pixel of sun, "no pre-planning" is not exactly right, there were some people passing around the message that that was what we were supposed to do. It was a little ad-hoc though.

Comment by dacyn on Wirehead your Chickens · 2018-06-27T21:08:22.916Z · score: 4 (1 votes) · LW · GW

I guess it just seems to me that it's meaningless to talk about what someone would prefer if they knew about/understood X, given that they are incapable of such knowledge/understanding. You can talk about what a human in similar circumstances would think, but projecting this onto the animal seems like anthropomorphizing to me.

You do have a good point that physiological damage should probably still be considered harmful to an animal even if it doesn't cause pain, since the pre-modified animal can understand the concept of such damage and would prefer to avoid it. However, this just means that giving the animal a painkiller doesn't solve the problem completely, not that it doesn't do something valuable.

Comment by dacyn on Wirehead your Chickens · 2018-06-25T21:35:57.438Z · score: 4 (1 votes) · LW · GW

It is not clear that there is any such base state: what would it mean for an animal to "be aware of the possibility" that it could be made to have a smaller brain, have part of its brain removed, or modified so that it enjoys pain? Maybe you have more of a case with amputation and the desire to be eaten, since the animal can at least understand amputation and understand what it means to be eaten (though maybe not what it would mean to not be afraid of being eaten). But "The proposals above all fail this standard" seems to be overgeneralizing.

Comment by dacyn on UDT can learn anthropic probabilities · 2018-06-25T20:35:21.046Z · score: 6 (2 votes) · LW · GW

Yes (once you have uploaded your brain into a computer so that it can be copied). If lots of people do this, then in the end most agents will believe that SIA is true, but most descendants of most of the original agents will believe that SSA is true.

Comment by dacyn on Paradoxes in all anthropic probabilities · 2018-06-25T18:55:19.534Z · score: 4 (1 votes) · LW · GW

They can't put themselves in the shoes of their past selves, because in some sense they are not really sure whether they have past selves at all, rather than merely being duplicates of someone. Just because your brain is copied from someone else doesn't mean that you are in the same epistemological state as them. And the true descendants are also not in the same epistemological state, because they do not know whether they are copies or not.

Comment by dacyn on Wirehead your Chickens · 2018-06-25T18:35:26.291Z · score: 4 (1 votes) · LW · GW

Ah sorry, I seem to have misread your comment. Makes sense now, thanks!

Comment by dacyn on Wirehead your Chickens · 2018-06-24T19:27:22.151Z · score: 7 (3 votes) · LW · GW

There can be amounts of things other than suffering, though. Caring about the "number of chickens that lead meaningful lives" doesn't mean that one isn't a utilitarian. (For the record, I agree with the OP that the notion of "leading meaningful lives" isn't so important for animals, but I think it's possible to disagree with this and still be advocating an EA intervention.)

Comment by dacyn on Paradoxes in all anthropic probabilities · 2018-06-22T22:42:25.387Z · score: 4 (1 votes) · LW · GW

With regards to your SIA objection, I think it is important to clarify exactly what we mean by evidence conservation here. The usual formulation is something like "If I expect to assign credence X to proposition P at future time T, then I should assign credence X to proposition P right now, unless by time T I expect to have lost information in a predictable way". Now if you are going to be duplicated, then it is not exactly clear what you mean by "I expect to assign ... at future time T", since there will be multiple copies of you that exist at time T. So, maybe you want to get around this by saying that you are referring to the "original" version of you that exists at time T, rather than any duplicates. But then the problem seems to be that by waiting, you will actually lose information in a predictable way! Namely, right now you know that you are not a duplicate, but the future version of you will not know that it is not a duplicate. Since you are losing information, it is not surprising that your probability will predictably change. So, I don't think SIA violates evidence conservation.
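
In symbols, the evidence-conservation condition under discussion is something like the reflection principle. Writing $P_0$ for your current credences and $P_T$ for the credences you expect to hold at time $T$ (notation of my own, just to make the condition precise):

$$P_0(A) = \mathbb{E}_0\!\left[P_T(A)\right],$$

which is only supposed to hold when no information is expected to be lost or forgotten between now and $T$; the duplication scenario is exactly a case where that proviso fails.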

Incidentally, here is an intuition pump that I think supports SIA: suppose I flip a coin and if it is heads then I kill you, tails I keep you alive. Then if you are alive at the end of the experiment, surely you should assign 100% probability to tails (discounting model uncertainty of course). But you could easily reason that this violates evidence conservation: you predictably know that all future agents descended from you will assign 100% probability to tails, while you currently only assign 50% to tails. This points to the importance of precisely defining and analyzing evidence conservation as I have done in the previous paragraph. Additionally, if we generalize to the setting where I make/keep X copies of you if the coin lands heads and Y copies if tails, then SIA gives the elegant formula X/(X+Y) as the probability for heads after the experiment, and it is nice that our straightforward intuitions about the cases X=0 and Y=0 provide a double-check for this formula.
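
A quick Monte-Carlo sketch of the X/(X+Y) formula (the function and its parameters are just my own illustration): pool all the copies produced across many runs of the experiment and ask what fraction of them live in heads-worlds.

```python
import random

def sia_posterior(X, Y, trials=100_000):
    """Fraction of all copies (across many runs) that find themselves in a
    heads-world, which is what SIA takes as the posterior probability of heads."""
    heads_copies = tails_copies = 0
    for _ in range(trials):
        if random.random() < 0.5:   # fair coin
            heads_copies += X       # heads: X copies of you exist afterwards
        else:
            tails_copies += Y       # tails: Y copies of you exist afterwards
    return heads_copies / (heads_copies + tails_copies)

print(sia_posterior(0, 1))   # ~0.0, the kill-on-heads case: certainty of tails
print(sia_posterior(3, 1))   # ~0.75 = 3 / (3 + 1)
```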

Comment by dacyn on Order from Randomness: Ordering the Universe of Random Numbers · 2018-06-22T17:48:42.080Z · score: 6 (2 votes) · LW · GW

I don't know what "power spectrum" is, but the second-to-last graph looks pretty obviously like Brownian motion. This makes sense because the differences between consecutive points in the third graph will be approximately Poisson and independently distributed, so if you renormalize so that the expected value of the difference is zero, then the central limit theorem will give you Brownian motion in the limit.
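
For what it's worth, here is a quick way to see that picture, assuming the gaps really are (approximately) Poisson; the rate and sample size below are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt

# Centered Poisson increments: i.i.d. with mean zero and variance lam, so by the
# central limit theorem (Donsker's theorem) the rescaled partial sums look like
# Brownian motion.
rng = np.random.default_rng(0)
lam, n = 4.0, 10_000
increments = rng.poisson(lam, size=n) - lam
walk = np.cumsum(increments) / np.sqrt(lam * n)

plt.plot(np.linspace(0.0, 1.0, n), walk)
plt.title("Centered Poisson random walk (approximate Brownian motion)")
plt.show()
```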

Anyway regarding the relation of your post to Tegmark's theory, a random sequence can be a perfectly well-defined mathematical object (well maybe you need to consider pseudo-randomness, but that's not the point) so you are not getting patterns out of something non-mathematical (whatever that would mean) but out of a particular type of mathematical object.

Comment by dacyn on Loss aversion is not what you think it is · 2018-06-22T00:55:48.606Z · score: 7 (3 votes) · LW · GW

The validity of the author's point seems to depend on how best to interpret the phrase "losses hurt more than equivalent gains". Two ways you could interpret it under which it would be a consequence of loss aversion but not of DMU:

  • "Having your wealth decrease from X to Y decreases your satisfaction more than having your wealth increase from Y to X increases it."
  • "The pain of a small loss is significantly more than the pleasure of a small gain."

It seems to me that most of the quotes at the end, if you interpret them charitably, mean something like the above. So the post seems like a nitpick to me. It's great to explain the difference between loss aversion and DMU for people who don't necessarily know about them, but it's not clear to me that it means that the quoted people were actually wrong about something.

I would also disagree with point #3, e.g. the last sentence of the Economist quote seems valid as an intuitive explanation of loss aversion but not of DMU.

Comment by dacyn on How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey · 2018-06-20T15:54:25.202Z · score: 5 (2 votes) · LW · GW

Sure, probably some of them mean that, but you can't assume that they all do.

Comment by dacyn on Physics has laws, the Universe might not · 2018-06-19T15:26:19.403Z · score: 6 (2 votes) · LW · GW

"Exists" is one of the words I tend to taboo. People usually just use it to mean "is part of the Everett branch that I am currently in" but there are also some usages that seem to derive their meaning by analogy, like the existence of mathematical objects. I'm not sure if there is a principled distinction being drawn by those kinds of usages.

Instead I would talk about whether we can sensibly talk about something. And I can imagine people trying to talk about something, and not making any sense, but it doesn't seem to mean that there is a "thing" they are talking about that "doesn't exist".

Comment by dacyn on How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey · 2018-06-19T15:16:54.392Z · score: 5 (2 votes) · LW · GW

When people say that a morality is "objectively correct", they generally don't mean to imply that it is supported by "universally compelling arguments". What they do mean might be a little hard to parse, and I'm not a moral realist and don't claim to be able to pass their ITT, but in any case it seems to me that the burden of proof is on the one who claims that their position does imply heterogonality.

Comment by dacyn on In Defense of Ambiguous Problems · 2018-06-18T00:37:07.718Z · score: 4 (1 votes) · LW · GW

I see that someone posted in the other thread that they thought the most obvious answer is 1/2, but why is this the case? I don't see any obvious intuitive argument for why 1/2 is a reasonable answer.

Edit: I guess the idea is to just not perform any update on the statement the guard makes but just use it to infer that "Vulcan Mountain" is equivalent to "Vulcan", and then answer based on the fact that the latter probability is 1/2.

Comment by dacyn on How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey · 2018-06-16T19:49:49.405Z · score: 12 (6 votes) · LW · GW

Moral realism plus moral internalism does not imply heterogonality. Just because there is an objectively correct morality does not mean that any sufficiently powerful optimization process would believe that that morality is correct.

Comment by dacyn on The Curious Prisoner Puzzle · 2018-06-16T04:11:08.416Z · score: 6 (2 votes) · LW · GW

If you assume that the guard's probability of making this statement (and only this statement) is the same in all circumstances where the statement is true, then the answer is 1/3. Otherwise, it depends on what you know about the psychology of the guard.
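
To spell out where the 1/3 comes from, on the assumption (my reconstruction of the puzzle, which may be off) that the four cells are equally likely a priori and that the guard's statement $S$ is true in exactly three of them, including Vulcan Mountain:

$$P(\text{Vulcan Mountain}\mid S)=\frac{P(S\mid \text{Vulcan Mountain})\cdot\frac14}{\sum_{\text{cells }c}P(S\mid c)\cdot\frac14}=\frac{1\cdot\frac14}{3\cdot\frac14}=\frac13,$$

where the last step uses the assumption that the guard makes this statement with the same probability in every cell where it is true; if the guard instead chooses among several true statements, the $P(S\mid c)$ terms change and so does the answer.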

Comment by dacyn on On the Chatham House Rule · 2018-06-15T19:31:36.226Z · score: 14 (6 votes) · LW · GW

On the Chatham House website I see

Q. Can participants in a meeting be named as long as what is said is not attributed?
A. It is important to think about the spirit of the Rule. For example, sometimes speakers need to be named when publicizing the meeting. The Rule is more about the dissemination of the information after the event - nothing should be done to identify, either explicitly or implicitly, who said what.

which seems reasonable. The comment about not circulating the attendee list beyond the participants is a response to the question "Can a list of attendees at the meeting be published?", and my impression is that it is only meant as an answer to this question: i.e. such a list should not be published outside of the meeting, but it is OK if some people happen to come across it randomly. So I think you are just taking the Chatham House Rule much more literally than it is intended.

Comment by dacyn on Reflections on Berkeley REACH · 2018-06-15T16:13:58.607Z · score: 7 (3 votes) · LW · GW

I moved to Berkeley last week and have been coming to coworking and several events at REACH. It is certainly nice to have a place to hang out with rationalist people and start to feel integrated in the community. On the first night I was here I already got to experience some of the rationalist culture here: a doom circle. I don't think that kind of experience would have been possible without REACH.

I seem to recall people saying of the old meetups that they mostly only allow new people and transients to interact with each other and not with established community members, and I think there is an element of this at REACH, but I have certainly seen a few established people here in the short time I've been here. Some people even bring their kids sometimes so if you like playing with kids (which I do) then that is a rewarding experience.

I have been experiencing pretty serious mental health and other problems recently, and I think the community here has been pretty supportive. In particular, I told Sarah/Stardust I'd probably go crazy if I couldn't find some people to have one-on-one conversations with, and she was able to help me out by finding someone to put me in contact with.

All in all I think this is a great community and a great community center, keep up the good work!