Comments

Comment by simon on Pseudorandomness contest: prizes, results, and analysis · 2021-01-31T11:27:51.140Z · LW · GW

P.S.:

With (1) (total number of 1's) excluded, but all of (2), (3), (4) included:

Confidence level: 61.8

Score: 20.2

With (2) (total number of runs) excluded, but all of (1), (3), (4) included:

Confidence level: 59.4

Score: 13.0

With ONLY (1) (total number of 1's) included:

Confidence level: 52.0

Score: -1.8

With ONLY (2) (total number of runs) included:

Confidence level: 57.9

Score: 18.4

So really it was the total number of runs doing the vast majority of the work. All calculations here do include setting the probability for string 106 to zero, both for the confidence level and final score.

Comment by simon on Pseudorandomness contest: prizes, results, and analysis · 2021-01-31T10:51:34.273Z · LW · GW

I think it depends a lot more on the number of strings you get wrong than on the total number of strings, so I think GuySrinivasan has a good point that deliberate overconfidence would be viable if the dataset were easy. I was thinking the same thing at the start, but gave it up when it became clear my heuristics weren't giving enough information.

My own theory, though, was that most overconfidence wasn't deliberate, but simply came from people not thinking through how much information they were getting from apparent non-randomness (i.e. the way I compared my results to what would be expected by chance).

Comment by simon on Pseudorandomness contest: prizes, results, and analysis · 2021-01-31T10:32:46.265Z · LW · GW

Whoops, missed this post at the time.

In response to:

(4) average XOR-correlation between bits and the previous 4 bits (not sure what this means -Eric)

This is simply XOR-ing each bit (starting with the 5th one) with the previous 4 and adding it all up. This test was to look for a possible tendency (or the opposite) to end streaks at medium range (the other tests were either short-range or looked at the whole string). I didn't throw in more tests using numbers other than 4, since using different tests with any significant correlation on random input would lead to overconfidence unless I did something fancy to compensate.
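
As a minimal sketch of this statistic (in Python, with the string as a list of 0s and 1s; "with the previous 4" read as XOR-ing against each of the four preceding bits separately):

```python
def xor_corr_prev4(bits):
    """Test (4): XOR each bit, from the 5th onward, with each of the
    previous 4 bits, and add it all up."""
    total = 0
    for k in range(4, len(bits)):
        for j in range(1, 5):
            total += bits[k] ^ bits[k - j]
    return total
```

On a truly random 150-bit string this is 584 fair-coin comparisons, so it should average 292; values far from that in either direction are evidence of non-randomness.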

In response to:

“XOR derivative” refers to the 149-bit substring where the k-th bit indicates whether the k-th bit and the (k +1)-th bit of the original string were the same or different. So this is measuring the number of runs ... (3) average value of second XOR derivative and (4) average XOR-correlation between bits and the previous 4 bits...

I’m curious how much, if any, of simon’s success came from (3) and (4). 
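
Before the numbers, a minimal sketch of statistics (2) and (3) as defined in that quote (the strings are 150 bits, so the first XOR derivative has 149):

```python
def xor_derivative(bits):
    # k-th bit is 1 iff the k-th and (k+1)-th bits of the input differ
    return [a ^ b for a, b in zip(bits, bits[1:])]

def num_runs(bits):
    # (2): every 1 in the XOR derivative marks a run boundary
    return 1 + sum(xor_derivative(bits))

def second_derivative_mean(bits):
    # (3): average value of the second XOR derivative (148 bits)
    d2 = xor_derivative(xor_derivative(bits))
    return sum(d2) / len(d2)
```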

Values below. Confidence level refers to the probability of randomness assigned to the strings that weren't in the tails of any of the tests I used.

Actual result:

Confidence level: 63.6

Score: 21.0

With (4) excluded:

Confidence level: 61.4

Score: 19.4

With (3) excluded:

Confidence level: 62.2

Score: 17.0

With both (3) and (4) excluded:

Confidence level: 60.0

Score: 16.1

Score in each case calculated using the probabilities rounded to the nearest percent (as they were or would have been submitted ultimately). Oddly, in every single case the rounding improved my score (20.95 v. 20.92, 19.36 v. 19.33, 16.96 v. 16.89, and 16.11 v. 16.08).

So, it looks like I would have only gone down to fifth place if I had only looked at the total number of 1's and the number of runs. I'd put that down to not messing up calibration too badly, but it looks like that would have still put me in sixth in terms of post-squeeze scores? (I didn't calculate the squeeze, just comparing my hypothetical raw score with others' post-squeeze scores.)

Comment by simon on What is going on in the world? · 2021-01-20T01:48:14.608Z · LW · GW

True, the typical argument for the great silence implying a late filter is weak, because an early filter is not all that a priori implausible. 

However, the OP (Katja Grace) specifically mentioned "anthropic reasoning".

As she previously pointed out, an early filter makes our present existence much less probable than a late filter. So, given our current experience, we should weight the probability of a late filter much higher than the prior would be without anthropic considerations.

Comment by simon on Centrally planned war · 2021-01-06T15:59:46.809Z · LW · GW

Individuals may be bad at foresight, but if there's predictably going to be a good price for 100,000 coats in a few months, someone's likely to supply them, unless of course there's some anti-"price gouging" legislation.

Comment by simon on D&D.Sci Evaluation and Ruleset · 2020-12-12T21:40:15.034Z · LW · GW

If you didn’t account for selection effects, you may have correctly avoided boosting DEX because you thought it was actively harmful instead of merely useless. 

I immediately considered a selection effect, but then I tricked myself into believing DEX did matter, using a method that corrected for the selection effect but was vulnerable to randomness/falsely seeing patterns. Oops. Specifically, I found the average DEX of successful and failed adventurers at each total non-DEX stat value, but had the results listed in an inconvenient big column with lots of gaps. Looking at some differences, it seemed that for middle values of the non-DEX stats, successful adventurers consistently had lower average DEX than failed ones, while that reversed for extreme values. When I make a bar chart out of the data (which I did now, not at the time), it's a lot clearer that there's no good evidence for any effect of DEX on success:

[bar chart: average DEX of successful v. failed adventurers at each total of the non-DEX stats]
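
For concreteness, a sketch of that per-total comparison (the file and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("adventurers.csv")  # hypothetical: one row per adventurer
df["non_dex_total"] = df[["str", "con", "int", "wis", "cha"]].sum(axis=1)

# Average DEX of successful v. failed adventurers at each non-DEX total;
# comparing within a fixed total is what corrects for the selection effect.
avg_dex = df.groupby(["non_dex_total", "success"])["dex"].mean().unstack()
avg_dex.plot.bar()  # the bar chart referred to above
```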

If you didn’t look for interactions, you may have dodged the WIS<INT penalty just because WIS seemed like a better place to put points than INT. 

Yep. Thing is, I *did* look for interactions - with DEX. I had the idea that DEX might be bad due to such interactions, and when I didn't find anything, I more or less stopped looking for interactions.

And I’m pretty sure even the three people who submitted optimal answers on the last post (good job simon, seed, and Ericf) didn’t find them by using the right link function

For sure in my case. I calculated the success/fail ratios for each value of each stat individually (no smoothing), and found the reachable stat combo that maximized the product of those ratios. This method found the importance of reaching 8. I was never confident that this wasn't random, though.
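
A sketch of that method, reusing the hypothetical dataframe from above (the base stats and 10-point budget match my answer further down this page, but treat everything here as illustrative):

```python
import itertools

stats = ["str", "con", "dex", "int", "wis", "cha"]

# Per-stat odds ratio = successes / failures at each observed value (no smoothing)
odds = {
    s: df[df.success].groupby(s).size() / df[~df.success].groupby(s).size()
    for s in stats
}

base = {"str": 6, "con": 14, "dex": 13, "int": 13, "wis": 12, "cha": 4}
budget = 10

best = (0.0, None)
for alloc in itertools.product(range(budget + 1), repeat=len(stats)):
    if sum(alloc) != budget:
        continue
    score = 1.0
    for s, extra in zip(stats, alloc):
        # combos reaching a stat value never seen in the data score zero
        score *= odds[s].get(base[s] + extra, 0.0)
    if score > best[0]:
        best = (score, alloc)
print(best)
```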

When I did later start simming guesses, what I simmed would have given smoothed results: a bunch of stat checks against a d20, with success if the total number of passed stat checks was greater than a threshold. The actual test would have been pretty far down the list of things I would have checked given infinite time.
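
And a toy version of that kind of simulation (the pass rule and threshold are assumptions for illustration):

```python
import random

def succeeds(stat_block, checks_needed=4):
    # one d20 check per stat: a check passes if the roll is at most the stat
    passed = sum(random.randint(1, 20) <= v for v in stat_block.values())
    return passed > checks_needed

print(succeeds({"str": 8, "con": 15, "dex": 13, "int": 13, "wis": 15, "cha": 8}))
```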

Comment by simon on D&D.Sci · 2020-12-11T08:04:29.220Z · LW · GW

In reply to:

         Graduate stats likely come from 2d10 drop anyone under 60 total

I think you're right. The character stats data seems consistent with starting with 10,000 candidates, each with 6 stats independently rolled as 2d10, and tossing out everyone with a total below 60.

One possible concern with this is the top score being the round number of 100, but I tested it and got only one score above 100 (it was 103), so this seems consistent with the 100 top score being coincidence.
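
A minimal version of the test I describe (any rerun will of course give a different maximum):

```python
import random

candidates = [
    [random.randint(1, 10) + random.randint(1, 10) for _ in range(6)]
    for _ in range(10000)
]
graduates = [c for c in candidates if sum(c) >= 60]
# how many survive the cutoff, and the top stat total among them
print(len(graduates), max(sum(c) for c in graduates))
```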

Comment by simon on D&D.Sci · 2020-12-07T16:17:33.294Z · LW · GW

You do indeed miss out on some gains from a jump - WIS gets you a decline in success at +1 but a big gain at +3. (Edit: actually, my method uses the odds ratio (successes divided by failures), not probabilities (successes divided by total), so it may not be equivalent to detecting jump gains for your method. Also, my method tries to maximize multiplicative gain, while your words "greatest positive" suggest you maximize additive gain.)

STR - 8 (increased by 2)

CON - 15 (increased by 1)

DEX - 13 (no change)

INT - 13 (no change)

WIS - 15 (increased by 3)

CHA - 8 (increased by 4)

calculation method: spreadsheet adhockery resulting in tables for each stat of:

per point gain = ((success odds ratio at (current stat + n))/(success odds ratio at current stat))^(1/n); find the n and table giving the highest per-point gain, generate a new table for that stat from the new stat starting point, and repeat.
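
In code form, a sketch of that loop, with `odds` as per-stat lookup tables (e.g. dicts mapping stat value to success odds ratio; all names illustrative):

```python
def allocate(odds, base, budget):
    stats = dict(base)
    remaining = budget
    while remaining > 0:
        best = None  # (per-point gain, stat, n)
        for s, table in odds.items():
            cur = table.get(stats[s])
            if not cur:
                continue
            for n in range(1, remaining + 1):
                new = table.get(stats[s] + n)
                if not new:
                    continue
                gain = (new / cur) ** (1.0 / n)  # per-point multiplicative gain
                if best is None or gain > best[0]:
                    best = (gain, s, n)
        if best is None:
            break  # no reachable value left in any table
        _, s, n = best
        stats[s] += n  # new starting point for that stat's next table
        remaining -= n
    return stats
```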

Comment by simon on D&D.Sci · 2020-12-07T11:23:11.976Z · LW · GW

str +2 points to 8, con +1 point to 15, cha +4 points to 8, wis +3 points to 15, based on assuming a) that different stats have a multiplicative effect (no other stat interactions), b) that the effect of any stat is accurately represented by looking at the overall data in terms of just that stat, and c) that the true distribution is exactly the data distribution, with no random variation. I have not done anything to verify that these assumptions make sense.

dex looks like it actually has a harmful effect. I don't know whether the apparent effect is or is not too large to be explained by it helping bad candidates meet the college's apparent 60-point cutoff.

Comment by simon on Anti-EMH Evidence (and a plea for help) · 2020-12-05T21:54:51.815Z · LW · GW

I would worry in a lot of these cases that there's some risk that your model isn't taking account of, so you could be "picking up pennies in front of a steamroller". Not in all cases though - 70-200% isn't pennies.

But things like supposedly equivalent assets that used to be closely priced now diverging seem highly suspicious.

Comment by simon on Can preference falsification be reduced with Ring Signatures? · 2020-11-30T01:46:47.893Z · LW · GW

You need to have a private key to sign, otherwise it would be useless as a "signature".

For signing (in the non-ring case), you encrypt with your private key and they decrypt with your public key, whereas in normal encryption (again, non-ring) you encrypt with their public key and they decrypt with their private key.
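
A concrete sketch with RSA via the Python `cryptography` package (strictly, modern schemes define sign/verify as their own operations rather than literal encryption, but the private/public key roles are as described):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I endorse this statement"

# signing: only the private key holder can produce this
signature = private_key.sign(message, pss, hashes.SHA256())

# verification: anyone with the public key can check it;
# raises InvalidSignature if the signature or message was tampered with
public_key.verify(signature, message, pss, hashes.SHA256())
```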

Comment by simon on Ongoing free money at PredictIt · 2020-11-12T08:15:46.324Z · LW · GW

It's not necessarily structural inefficiency at PredictIt specifically that is causing most of this; to a large extent it's bettors pricing in the odds of Trump still winning the election. Apparently Betfair's odds of Trump winning are still around 10% - from a link I found searching for articles on betting odds from the last day; I wasn't able to find the odds at Betfair itself.

Comment by simon on The Born Rule is Time-Symmetric · 2020-11-02T02:52:42.603Z · LW · GW

Yes, if you consider one branch of the wavefunction, you have less than full information about the full state you branched from. But the analogous situation would apply to a merger of different branches: in one of the initial branches, you would have less than full information about the full resulting merged state.

Comment by simon on On the Dangers of Time Travel · 2020-10-27T05:53:22.546Z · LW · GW

I have (semi-*)seriously considered the possibility that time travel would in fact instantly destroy the universe (not the multiverse, though). One of the theories of what happens when you time travel is based on "postselection" (see also a YouTube video by Seth Lloyd): what you get at the end is what you'd expect if you just discarded any final state that led to a contradiction with what you put in.

Now, Seth Lloyd says you then renormalize the probabilities. But this renormalization seems like an extraneous assumption to me: a more natural interpretation is that the amplitude that would otherwise be associated with the contradictory possibilities simply disappears. It would be hard to time travel without a camel-through-the-eye-of-a-needle level of contortion to prevent some contradiction, so time travel would in effect reliably destroy the universe if it were ever used.

*I am bad at taking things seriously, or I probably wouldn't post this - e.g. imagine if our lack of time travel is due to the anthropic principle. Edit: we should have a greater expectation for hypotheses that have more observers in our situation, so we should expect that, conditional on time travel being deadly, it's probably too hard for us so far, and not merely that the worlds where it has been invented are gone. Whew, that's a relief (for now)! If anyone reading this does discover an easy time travel method, though, DO NOT USE IT.

Comment by simon on The Darwin Game - Rounds 0 to 10 · 2020-10-25T08:26:09.284Z · LW · GW

I'm not so optimistic about your bot... if the clones will be getting 250 per round and you will be getting 200, you'll lose about 1/5 of your copies per round, which is like a 3-round half-life. There's not going to be anything left at round 90 at that rate.
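
The arithmetic behind the half-life: $(200/250)^n = 1/2$ gives $n = \ln(1/2)/\ln(0.8) \approx 3.1$ rounds per halving.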

Comment by simon on The Darwin Game - Rounds 0 to 10 · 2020-10-24T18:11:22.385Z · LW · GW

Ah, I had misunderstood how the system works. I had not read carefully and assumed some kind of weighted round robin. Random pairings allow for a lot more random variation.

Comment by simon on The Darwin Game - Rounds 0 to 10 · 2020-10-24T07:29:36.260Z · LW · GW

All clones should act identically against non-clones until the showdown round. I guess some outsider bots could be adjusting their behavior based on finding certain patterns in the code, and the relevant patterns occur in the payloads of some clones?

FWIW, doing better or worse in any given round has a multiplicative effect between rounds, not additive. So that might affect the level of randomness, though even with 100 it seems really big to be random.

Comment by simon on Local Solar Time · 2020-10-24T04:33:19.372Z · LW · GW

The main objection in the link seems to be a presupposition that solar time information about different places would be less available than time zone conversions are now. That seems probably false. Also, sleep schedules depending on culture, and not necessarily lining up with solar time, is not any less of an issue now.

Comment by simon on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T03:45:46.792Z · LW · GW

People want different things, and the different possible disagreement resolving mechanisms include the different varieties of utilitarianism.

In this view, the fundamental issue is whether you want the new entity to be directly counted in the disagreement resolving mechanism. If the new entity is ignored (except for impact on the utility functions of pre-existing entities, including moral viewpoints if preference utility is used*), then there's no need to be concerned with average v. total utilitarianism.

A general policy of always including the new entity in the disagreement resolving mechanism would be extremely dangerous (utility monsters). Maybe it can be considered safe to include them under limited circumstances, but the Repugnant Conclusion indicates to me that the entities being similar to existing entities is NOT sufficient to make it safe to always include them.

(*) hedonic utility is extremely questionable imo - if you were the only entity in the universe and immortal, would it be evil not to wirehead?

Comment by simon on The Darwin Game · 2020-10-20T00:34:01.936Z · LW · GW

I'd predict three-bot is the most likely simple winner

I hope not too many other clique members submitted 3-bot (we will destroy each other then, assuming multicore hasn't already taken over by the showdown time).

Comment by simon on The Darwin Game · 2020-10-19T23:51:20.777Z · LW · GW

Thanks!

Comment by simon on The Darwin Game · 2020-10-19T15:33:18.286Z · LW · GW

Yeah, I put it in at top level.

Comment by simon on The Darwin Game · 2020-10-19T15:15:24.088Z · LW · GW

I fell for

Eventually we just settled on "the exact line foo = 'bar' is permitted as an exception", and I didn't see what I could do with that.

Later, lsusr told me that that line would get me disqualified. I didn't say anything, in the hopes some clique member would wonder what it was for, include it in their bot just in case, and get disqualified.

I thought it would be harmless, and that there was some chance (even though it would make more sense to read some codeword directly from the payload) that someone would be using it as a recognition signal to boost a chosen clique member's bot using a non-clique member's bot.

Is it really disqualifiable? I am not using foo for anything other than setting it.

Comment by simon on Have the lockdowns been worth it? · 2020-10-13T00:47:37.187Z · LW · GW

By ‘lockdown’ we refer to the thing that the US, UK and China have been doing, and what Sweden and Italy didn’t

It seemed to me that Italy (after the initially hit areas were pretty much saturated) did a much harder lockdown than much of the US has been doing.

https://en.wikipedia.org/wiki/COVID-19_pandemic_lockdown_in_Italy

Comment by simon on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T00:04:00.191Z · LW · GW

It's currently as I was suggesting, which makes me wonder if it was already like that and I was just confused; I had been thinking that the "what happened" link went to the original Petrov Day post, and that that was the only post linked.

Comment by simon on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T22:04:30.567Z · LW · GW

Can the nuked front page link this post for "what happened" and also the Petrov Day post for context, instead of just the Petrov Day post, so people coming in actually know why there is no front page?

Comment by simon on The Bayesian Tyrant · 2020-08-21T13:35:45.478Z · LW · GW

If everyone else changes to my prior, that's great. But if I change from my prior to their prior, I am just (from the point of view of someone with my prior, which obviously includes myself) making myself vulnerable to being beaten in ordinary betting by other agents that have my prior.

Comment by simon on Money creation and debt · 2020-08-13T17:31:28.626Z · LW · GW

Either definition could be used, as long as you keep track of what definition you're using and the consequences that follow.

There's a point of view called "Modern Monetary Theory" (MMT) which defines savings to exclude investment, resulting in Savings = 0 instead of the conventional Savings = Investment. But adherents of MMT tend to misapply this, arguing that government debt is needed for, e.g., people to be able to save for retirement, which is false once you take investment into account.
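
The accounting, as a quick sketch (closed economy, so ignoring the foreign sector): with output $Y = C + I + G$ and private saving $S = Y - T - C$,

$$S - I = G - T,$$

i.e. saving net of investment equals the government deficit. Define "savings" to exclude investment and you get zero whenever the budget is balanced; include investment and people can save for retirement with no government debt at all.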

Comment by simon on Maximal Ventilation · 2020-07-11T15:48:28.608Z · LW · GW

I think it's just that 1 mile per hour = 88 feet per minute, which is close to 90.

Comment by simon on Intuitive Lagrangian Mechanics · 2020-06-14T03:20:21.757Z · LW · GW

FWIW, if I were asked before reading this why you subtract the potential energy from the kinetic energy, I wouldn't have had a quick answer - I think "it's minimizing the overall time dilation from gravity and speed" is a really neat way to think about it.
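
A sketch of why that works, as I understand it (weak field $\Phi$, speeds well below $c$, with $V = m\Phi$): proper time along a path expands as

$$d\tau \approx dt\left(1 + \frac{\Phi}{c^2} - \frac{v^2}{2c^2}\right),$$

so that

$$\tau \approx t - \frac{1}{mc^2}\int\left(\tfrac{1}{2}mv^2 - m\Phi\right)dt = t - \frac{S}{mc^2}.$$

Extremizing the action $S = \int (T - V)\,dt$ is then the same as extremizing the proper time, i.e. the overall time dilation.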

As to why time dilation would be relevant, if you've read QED by Richard Feynman he has a visualization where you think of each (version of a) particle of having a clock with a hand that goes around and around, and you add up the hands of all the clocks for all the particles that took all the different paths.

In the end only the (versions of the) particle that took the paths very close to where the time taken is extremized add up to the final result, everything else cancels because tiny differences in path lead to opposite directions of the clock hand.

Comment by simon on Is a near-term, self-sustaining Mars colony impossible? · 2020-06-06T20:41:42.670Z · LW · GW

I think there's at least 2 very different related questions:

  1. Is it possible to have a not-too-large-to-build colony on Mars be capable of surviving if contact with Earth is cut off?

  2. Can you make a colony on Mars that's capable of (1) and is economically viable?

I think the answer to (1) is yes, probably with a lot fewer people than you might think, but the answer to (2) is likely no, regardless of number of people.

Robin Hanson, in other discussions, has made a big deal of the importance of specialization and division of labor. While he's right that it's pretty important to our modern economy, another lesson of economics is that things usually have substitutes. We can use less division of labour; it just costs more and gives less good results.

Another commenter brought up electronics. I agree that electronics would be hard to do without a lot of people. If we are talking about (1), raw survival, then I disagree that electronics is needed. For (1), the question is whether people can survive at all - we can assume that we have a colony full of very capable, hard-working people who work long hours to maintain a bare-survival standard of living. The minimum size of the colony is likely going to largely depend on one or a few products that they absolutely need and require some amount of division of labour to produce - possible examples might be spacesuits and equipment for power generation or mining. By contrast, something like farming or habitats for farming in might take up most of the economic effort, but not lower-bound the colony size so much since less division of labour is needed.

For (2), on the other hand, to have an economically viable colony you need people to be willing to come over from Earth. Earth has free air and cheap water, and you can grow stuff outdoors. No amount of division of labour on Mars can make up for this advantage, since Earth already has a lot of division of labour in addition to being habitable. So you need a lot of imports from Earth per person to get a comparable standard of living on Mars, and I don't see enough plausible exports to sustain that large amount of imports.

Of course, if people are willing to sustain a large enough drop in material standard of living in order to live on Mars, it could be economically viable, but you'd need a decent number of people willing to have a really massive drop in standard of living. Are there enough such people? I doubt it, but would be happy to be proved wrong.

Comment by simon on Solar system colonisation might not be driven by economics · 2020-04-23T18:29:37.606Z · LW · GW

Yes, but a person on Earth can create value in space that they obtain by moving into space. An example would be low gravity retirement communities paid for by the retirees.

Comment by simon on Antimemes · 2020-01-04T11:07:51.169Z · LW · GW

I think a relevant difference between Lisp macros and the textual macros of other languages is that in Lisp, the program is in the form of a data structure (a list) that has relevant structure. So the macro is manipulating data at a level that is more relevant than text.

Thinking that there is no such relevant difference might contribute to the antimeminess of Lisp, imo.
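
A rough Python analogue of the distinction, using the `ast` module so the "macro" edits a syntax tree rather than pasting text:

```python
import ast

tree = ast.parse("result = 1 + 2")

class SwapAddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()  # a structural edit, not a string edit
        return node

new_tree = ast.fix_missing_locations(SwapAddToMul().visit(tree))
exec(compile(new_tree, "<ast>", "exec"))
print(result)  # 2: the tree rewrite turned 1 + 2 into 1 * 2
```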

Aside:

I think what would be even better would be macros in a language where not only is the program in the form of a data structure with relevant structure, but the language is also statically typed (with a suitably complex type system), with any program or subprogram having a type. The compiler could then provide protection against mistakes, allowing more complex stuff to be done in practice. Haskell partially does this by defining types for functions and allowing you to make functions that spit out other functions, but (afaik) doesn't apply this to other aspects of Haskell programs, which seems to me a huge wasted opportunity.

Comment by simon on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T16:39:05.831Z · LW · GW

Home ownership is common in part because it has large tax advantages.

Comment by simon on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-10-16T16:34:13.254Z · LW · GW

For some further perspective, $10 per ton of olivine at 1.25 tons of CO2 removed per ton would be $8 per ton of CO2. So, we could theoretically pay for it with a relatively affordable $8 per ton global carbon tax, if that could somehow be made to work politically.

Comment by simon on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-10-12T09:35:06.702Z · LW · GW

They propose mining and transporting 11 cubic kilometers* of olivine per year, at $10 per ton when scaled up, which comes out to roughly $365 billion per year assuming metric tonnes (and olivine's density of roughly 3.3 tonnes per cubic metre). Might or might not be considered "reasonably cheaply" depending on what you think of the alternatives.

*they also mention 7 cubic miles, which would be almost a trillion dollars per year, but this would be a lot more than would be needed to offset world carbon emissions if their claim of 1.25 to 1 ratio of CO2 to olivine is correct - so I think that's a misconversion from the 11 cubic km figure rather than the other way around.

Comment by simon on Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (II) · 2017-07-27T09:48:55.732Z · LW · GW

Thanks for the kind words.

However, I don't agree. The additional 8 rooms are an unbiased sample of the remaining 80 rooms for Beauty. The additional 8 rooms would only be an unbiased sample of the full set of 81 rooms for Beauty if the first room were also an unbiased sample (but I would not consider it a sample, rather part of the prior).

Actually I found a better argument against your original anti-thirder argument, regardless of where the prior/posterior line is drawn:

Imagine that the selector happened to encounter a red room first, before checking out the other 8 rooms. At this point in time, the selector's state of knowledge about the rooms, regardless of what you consider prior and what posterior, is in the same position as Beauty's after she wakes up (from the thirder perspective, which I generally agree with in this case). Then they both sample 8 more rooms. The selector considers this an unbiased sample of the remaining 80 rooms. After both have taken this additional sample of 8, they again agree. Since they still agree, Beauty must also consider the 8 rooms to be an unbiased sample of the remaining 80 rooms. Beauty's reasoning and the selector's are the same regarding the additional 8 rooms, and Beauty has no more "supernatural predicting power" than the selector.

About only thirding getting the attention: my apologies for contributing to this asymmetry. For me, the issue is that I found the perspectivism posts at least initially hard to understand, and since subjectively I feel I already know the correct way to handle this sort of problem, that reduces my motivation to persevere and figure out what you are saying. I'll try to get around to carefully reading them and providing some response eventually (no time right now).

Comment by simon on Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (II) · 2017-07-26T01:36:34.663Z · LW · GW

Well argued; you've convinced me that most people would probably define what's prior and what's posterior the way you say. Nonetheless, I don't agree that what's prior and what's posterior should be defined that way. I see this sort of info as better thought of as a prior, precisely because waking up shouldn't be thought of as new info [edit: clarification below]. I don't regard the mere fact that the brain instantiating the mind that has this info is physically continuous with an earlier-in-time brain instantiating a mind with different info as sufficient reason not to think of the info as a prior.

Some clarification on my actual beliefs here: I'm not a conventional thirder believing in the conventional SIA. I prefer, let's call it, "instrumental epistemic rationality". I weight observers, not necessarily equally, but according to how much I care about the accuracy of the relevant beliefs of that potential observer. If I care equally about the beliefs of the different potential observers, then this reduces to SIA. But there are many circumstances where one would not care equally, e.g. one is in a simulation and another is not, or one is a Boltzmann brain and another is not.

Now, I generally think that thirdism is correct, because I think that, given the problem definition, for most purposes it's more reasonable to value the correctness of the observers equally in a sleeping beauty type problem. E.g. if Omega is going to bet with each observer, and beauty's future self collects the sum of the earnings of both observers in the case there are two of them, then 1/3 is correct. But if e.g. the first instance of the two observer case is valued at zero, or if for some bizarre reason you care equally about the average of the correctness of the observers in each universe regardless of differences in numbers, then 1/2 is correct.
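
A toy version of that Omega-betting setup (heads = one waking, tails = two wakings whose winnings both go to Beauty's future self):

```python
import random

def expected_profit(price, trials=100_000):
    # at each waking, Beauty pays `price` for a ticket paying 1 if heads
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        for _ in range(1 if heads else 2):
            total += (1.0 if heads else 0.0) - price
    return total / trials

print(expected_profit(1 / 3))  # ~0: the thirder price breaks even
print(expected_profit(1 / 2))  # ~-0.25: the halfer price loses on average
```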

Now, I'll deal with your last paragraph from my perspective. The first room isn't a sample; it's guaranteed red. If you do regard it as a sample, it's biased in the red direction (maximally) and so should have zero weight. The prior is that the probability of R is proportional to R. The other 8 rooms are an unbiased sample of the remaining rooms. The set of 9 rooms is a biased sample (biased in the red direction) such that it provides the same information as the set of 8 rooms. So use the red-biased prior and the unbiased 8-room sample (out of the remaining rooms after the first room is removed) to get the posterior estimate. This will result in the same answer the selector gets, because you can imagine the selector found a red room first, and then break down the selector's information into that first sample and a second unbiased sample of 8 of the remaining rooms.

Edit: I didn't explain my concept of prior v. posterior clearly. To me, it's conceptual, not time-based, in nature. For a problem like this, what someone knows from the problem definition, from the point of view of their position in the problem, is the prior. What they then observe leads to the posterior. Here, waking Sleeping Beauty learns nothing on waking up that she does not know from the problem definition, given that she is waking up in the problem. So her beliefs at this point are the prior. Of course, her beliefs are different from Sleeping Beauty's before she went to sleep, due to the new info. That new info told her she is within the problem, when she wasn't before, so she updated her beliefs to new beliefs, which would be posterior beliefs outside the context of the problem, but within the context of the problem constitute her prior.

Comment by simon on Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (II) · 2017-07-23T00:42:06.077Z · LW · GW

We may just be arguing over definitions.

For the priors, I would consider Beauty's expectations from the problem definition, before she takes a look at anything, to be a prior, i.e. she expects an 81-times-higher probability of R=81 than R=1 right from the start.

SIA states that you should expect to be randomly selected from the set of possible observers. That doesn't imply that you are in a position randomly selected from some other set (only if observers are randomly selected from that set). Here, observers start in red rooms only, so clearly you can't expect your room to be of a randomly selected colour if you believe in SIA.

Comment by simon on Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (II) · 2017-07-21T01:28:01.337Z · LW · GW

I don't think it's accurate to say that thirders accept A and B. It seems to me that thirders reject A. Indeed, the fact that the thirder agrees with the selector in terms of posterior indicates that they must consider the 9 rooms to be a biased sample: they have a different prior, so they need to treat the sample as biased to come up with the same posterior.

Comment by simon on Open thread, May. 1 - May. 7, 2017 · 2017-05-08T23:42:12.606Z · LW · GW

I changed my mind on the space-o-gel though, at least for now.

Nice idea with the spinning ring. With relativity it should be fine as long as light itself isn't pulled in when travelling parallel to the ring.

Comment by simon on Open thread, May. 1 - May. 7, 2017 · 2017-05-08T16:38:53.940Z · LW · GW

Sketch of proof: you proved that a stick collapses (compression scaling as Log(L)).

Well, every connected object is either a stick, a curvy stick, or one of those things plus some extra atoms. So: prove that making things curvy or adding atoms doesn't help (enough). E.g., thickening the stick in the middle won't save it, since you'd need infinite thickness.

for a black hole to form the shell must be pushed beyond the limit of its compressive strength.

hmmm... we've been talking as if in a space without dark energy. But with dark energy, a sufficiently large shell could be balanced by the antigravity of the dark energy within it; the acceleration caused should scale linearly with the radius. So that shell could be under no stress at all. But I'm not sure it wouldn't form a black hole at large radius. As the interior gets bigger and bigger, eventually you get a cosmological event horizon forming, so the interior forms a white hole - light can't leave the shell to the interior. Since it's balanced, for symmetry reasons I'd expect the same to apply on the exterior. So you have this shell black hole between an interior and exterior that are both 'outside' the black hole.

Of course, this shell would actually be crushed by the stress in the radial direction, it's only not under stress circumferentially. But, now that we've got this example, we can extend to an ultralight aerogel (space-o-gel?) that balances the dark energy everywhere. I'd expect this to look externally the same as the shell example, so it should also eventually form a black hole. These are just guesses though - not actually calculated.

Edit: I'm now very suspicious of this analogy between the sphere and space-o-gel and will have to think about it more.

Comment by simon on Open thread, May 8 - May 14, 2017 · 2017-05-08T08:54:55.232Z · LW · GW

Parallel lines appear to intersect according to perspective. But, the more distant parts of the lines are the parts that appear to intersect. Here, where the lines actually do intersect, the more distant parts are away from the intersection. If these are ideal lines such that one could not gauge distance, and one is only looking downward, such as a projection onto a plane, then they are visually indistinguishable from parallel lines. Whether that's the same thing as them appearing to be parallel may be ... a matter of perspective. But, since this is a bird with 360 degree view, it can see that the lines do not also extend above the bird as parallel lines would, so they do not appear parallel to it.

Comment by simon on Open thread, May. 1 - May. 7, 2017 · 2017-05-08T08:15:29.972Z · LW · GW

This would in fact turn into a black hole since a black hole's radius is proportional to its mass. Also, it would collapse due to compressive forces since the force between any pair of hemispheres is proportional to r^2 but the surface to bear the force is only proportional to r.
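
Roughly, for a thin shell of fixed areal density (so $M \propto r^2$):

$$F \sim \frac{GM^2}{r^2} \propto r^2, \qquad \text{load-bearing cross-section} \propto r, \qquad \text{stress} \propto \frac{F}{\text{cross-section}} \propto r,$$

which grows without bound as the shell gets bigger.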

Comment by simon on Use and misuse of models: case study · 2017-04-29T12:03:50.824Z · LW · GW

Another issue: as far as I can tell, he does not account for people switching from regular jobs to the basic job and the corresponding loss of productivity from not working regular jobs.

I also noticed, when trying to figure out whether he accounted for that or not:

Here, "non_worker" actually refers to workers outside the basic job

But then he defines the number of basic workers in such a way that a decreasing number of non-workers also leads to a decreasing number of people on the basic job.

Edit: and probably much more importantly, he counts the basic income as a cost as given, but not as a benefit as received. It's a monetary transfer, and thus destroys wealth only to the extent that it changes incentives (wealth effect -> less incentive to work for the poor; higher marginal taxes -> less incentive to work for the rich). A correct calculation needs to assess the effect of these incentives and not count the transfer as if it were destruction of wealth.

Further edit: of course, that point is part of "it does not account for the extra value for individuals of having a basic income", now also paying attention to the people in the labour force receiving a basic income, and also looking at it from a wealth standpoint v. utility standpoint. I suppose in the end it should be a utility standpoint that should be used, but one needs to take into account effects on wealth in assessing the utility as well.

Comment by simon on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-18T15:06:54.518Z · LW · GW

Regarding your first 4 paragraphs: as it happens, I am human.

Regarding your last paragraph: yes most likely, but we can assess our options from our own point of view. Most likely our own point of view will include, as one part of what we consider, the point of view of what we are choosing to replace us. But it won't likely be the only consideration.

Comment by simon on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-17T23:08:57.773Z · LW · GW

OK, I guess I was equivocating on intuition.

But on your second paragraph: I don't think I actually disagree with you about what actually exists.

Here are some things that I'm sure you'll agree exist (or at least can exist):

  • preferences and esthetics (as you mentioned)
  • tacitly agreed on patterns of behaviour, or overt codes, that reduce conflict
  • game theoretic strategies that encourage others to cooperate, and commitment to them either innately or by choice

Now, the term "morality", and related terms like "right" or "wrong", could be used to refer to things that don't exist, or they could be used to refer to things that do exist, like maybe some or all of the things in that list, or other things that are like them and also exist.

Now, let's consider someone who thinks, "I'm intuitively appalled by this idea, as is everyone else, but I'm going to do it anyway, because that's the morally obligatory thing to do even though most people don't think so" and analyze that in terms of things that actually exist.

Some things that actually exist that would be in favour of this point of view are:

  • an aesthetic preference for a conceptually simple system combined with a willingness to bite really large bullets
  • a willingness to sacrifice oneself for the greater good
  • a willingness to sacrifice others for the greater good
  • a perhaps unconscious tendency to show loyalty for one's tribe (EA) by sticking to tribal beliefs (Utilitarianism) in the face of reasons to the contrary

Perhaps you could construct a case for that position out of these or other reasons in a way that does not add up to "fanatic adherent of an insane moral system", but that's what it's looking like to me.

Comment by simon on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-17T16:06:30.019Z · LW · GW

If 100% of humanity are intuitively appalled by an idea, but some of them go ahead and do it anyway, that's just insanity. If the people going ahead with it think that they need to do it because that's the morally obligatory thing to do, then they're fanatic adherents of an insane moral system.

It seems to me that you think that utilitarianism is just abstractly The Right Thing to Do, independently of practical problems, any intuitions to the contrary including your own, and all that. So, why do you think that?

Comment by simon on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-17T16:01:08.237Z · LW · GW

"Better" from whose perspective?

If it's "This thing is so great that even all of us humans agree that it killing us off is a good thing" then fine. But if it's "Better according to an abstract concept (utility maximization) that only a minority of humans agree with, but fuck the rest of humanity, we know what's better" then that's not so good.

Sure, we're happy that the dinosaurs were killed off, given that it allows us to replace them. That doesn't mean the dinosaurs should have welcomed that.

Comment by simon on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-16T22:26:40.934Z · LW · GW

Because it doesn't seem right to me to create something that will kill off all of humanity even if it would have higher utility.

There are (I feel confident enough to say) 7 billion plus of us actually existing people who are NOT OK with you building something to exterminate us, no matter how good it would feel about doing it.

So, you claim you want to maximize utility, even if that means building something that will kill us all. I doubt that's really what you'd want if you thought it through. Most of the rest of us don't want that. But let's imagine you really do want that. Now let's imagine you try to go ahead anyway. Then some peasants show up at your Mad Science Laboratory with torches and pitchforks demanding you stop. What are you going to say to them?