Comments sorted by top scores.
↑ comment by ChristianKl · 2021-10-09T19:55:44.715Z · LW(p) · GW(p)
The way to fight this is: when someone attempts such an attack, create a fork that invalidates all of the attacker's ownership.
↑ comment by Dagon · 2021-10-07T17:37:12.670Z · LW(p) · GW(p)
I like the idea, but it requires a definition of which citizenship rights/privileges get downscaled with less than full 1.0 citizenship (and upscaled with increasing shares). Can I buy myself up to 10,000 citizenships? Or retain some rights that are important to me (freedom to travel and permanent residence/work ability) with 0.01 citizenships?
I strongly suspect that disaggregating the "rights" into tradeable licenses is a more workable mechanism than fractional citizenship. And, of course, once it's no longer considered a "right" that's acquired with birth and/or naturalization, it'll stop being granted that way, and only rentable from the authority for a fee.
↑ comment by Vladimir_Nesov · 2021-10-05T13:56:40.850Z · LW(p) · GW(p)
given a set of options, the correct option is the one that increases how many options (you) have
It's useful for a wide variety of goals (see instrumental convergence [? · GW]), but not for every goal. So it's usually the case, but there's no contradiction in it failing to be the case in unusual circumstances.
It does seem contradictory to use your agency to decrease the amount of agency
This phrasing usually indicates agreement, but it can't be agreement with my comment, since the comment you replied to doesn't express that point.
↑ comment by Measure · 2021-10-06T20:22:14.314Z · LW(p) · GW(p)
Interesting thought. I assume people are automatically granted 1.0 (or 0.9?) citizenships on birth in this country, but what happens when someone dies? Can you will your extra citizenship(s) to your heirs? What about your base 0.9? Can you sell below 0.9 (and no longer be a full citizen)? Are votes tied to owned citizenships?
Reminds me a bit of the premise of In Time.
↑ comment by Vladimir_Nesov · 2021-10-04T18:32:28.379Z · LW(p) · GW(p)
If a goal is best served by destruction of rationality, that is the course of action rational cognition would advise as effective for achieving the goal.
Things in general shouldn't admit classification into rational and irrational, just as things in general shouldn't admit classification into apples and nonapples. Is temperature an apple or a nonapple? Rationality is a property of cognitive designs/faculties/habits, distinguishing those that do well in their role within the overall process. Other uses should refer back to that.
↑ comment by avturchin · 2021-11-30T23:56:28.914Z · LW(p) · GW(p)
The SSA/SSI reference class of simulation-paranoid observers is huge
It is huge only if we add simulated beings and thus assume that the simulation hypothesis is true. If not, it is only a few thousand LW and Bostrom readers.
There is a computer-independent version of the simulation argument: it says that illusions are computationally cheaper than most real things and thus more frequent. Examples: movies, dreams.
↑ comment by Adam Zerner (adamzerner) · 2021-11-21T05:06:03.240Z · LW(p) · GW(p)
A relevant phrase is "temporal discounting", although I'm not sure how helpful it'd be to know that (if you don't already).
A Crash Course in the Neuroscience of Motivation [LW · GW] might be helpful, but I'm skeptical about that being helpful as well. For the people who read posts like that, I don't get the sense that they then turn around and win more than the rest of us. I think the harsh reality is just that this is a really hard problem that we haven't made much progress on yet, and that the same thing is true for a lot of things in the field of instrumental rationality [? · GW].
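For readers who haven't met the term, here is a minimal sketch of what temporal discounting means (my own illustration with invented parameters, not something from the comment or the linked post). The hyperbolic form is the one usually associated with preference reversals like procrastination.

```python
# Illustrative sketch of temporal discounting (parameters invented for the example).
# Hyperbolic discounting produces preference reversals; exponential does not.

def exponential_value(amount, delay_days, daily_factor=0.99):
    # Time-consistent: relative rankings never flip as delays grow.
    return amount * (daily_factor ** delay_days)

def hyperbolic_value(amount, delay_days, k=0.05):
    # Steep near the present, flat far away -- the usual model of impulsivity.
    return amount / (1 + k * delay_days)

# Choice: $45 now vs. $100 in 30 days, under hyperbolic discounting.
print(hyperbolic_value(45, 0), hyperbolic_value(100, 30))     # 45.0 vs 40.0 -> grab the $45
# Same choice pushed a year into the future: the ranking flips.
print(hyperbolic_value(45, 365), hyperbolic_value(100, 395))  # ~2.3 vs ~4.8 -> wait for the $100
# Exponential discounting, by contrast, prefers the $100 in both cases.
print(exponential_value(45, 0), exponential_value(100, 30))    # 45.0 vs ~74.0
print(exponential_value(45, 365), exponential_value(100, 395)) # ~1.1 vs ~1.9
```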
↑ comment by Rafael Harth (sil-ver) · 2021-11-06T22:37:41.804Z · LW(p) · GW(p)
In regards to "the measure either increases by 0 or by a minuscule amount (depending on whether many worlds means infinitely many)", please go into that a bit more. Does a finite amount of worlds allow for a 0 increase in measure per coin flip? I need help wrapping my head around that. To me it seems like the case of 0 measure increase would only, and only might, make sense with infinitely many worlds since then then it doesn't matter whether you walk the left path or the right path today,
No, this is exactly what I meant: infinite worlds 0 measure; finite worlds tiny measure. (Although I don't know how quantum coins are implemented; if you shoot one photon and look at the position, you'd be guaranteed to have 1 heads and 1 tails in two worlds, so in that case both measures are equal, right? I think you need a hypothetical perfect coin for the thought experiment to work. But I haven't thought deeply about this.)
↑ comment by Rafael Harth (sil-ver) · 2021-11-06T12:45:31.649Z · LW(p) · GW(p)
If I understand the question correctly, I'd say it very much depends on whether the coin is real or hypothetical/perfect. In the latter case, I think the measure either increases by 0 or by a minuscule amount (depending on whether many worlds means infinitely many). But if it's a real coin, it should come down to questions like 'at what point in the process of flipping is the result determined', which sounds like a messy (quantum) physical and biological question.
↑ comment by Matt Goldenberg (mr-hire) · 2021-10-12T17:12:06.303Z · LW(p) · GW(p)
There's quite a bit of discussion of this in discussions of various proof-of-stake algorithms and their strengths and weaknesses (or there used to be).
↑ comment by Vladimir_Nesov · 2021-10-05T13:39:35.792Z · LW(p) · GW(p)
why isn't the very first step to achieving said goal to stop behaving rationally yourself?
Perhaps it is! (But also, see the rest of my comments on how using "rational" to label behavior is inflationary.)
↑ comment by Vladimir_Nesov · 2021-10-04T14:05:19.914Z · LW(p) · GW(p)
This surprisingly seems like a plausible reference to the concept of rationality, even though it pattern-matches inflationary use of the word; see Rationality: Appreciating Cognitive Algorithms [LW · GW]. If exercising does improve cognition and health, that should help, for example, with the ability to be agentic, although the effect is too general to say that it's specifically about that. Promotion of personal rationality in the world, and, say, development of better coordination tech, admit some sort of collectivist version of rationality, for example making society more agentic. (The latter is not necessarily a good thing: a more rational society or organization might more reliably fail to be aligned with human values.)
↑ comment by ChristianKl · 2021-10-04T08:03:54.921Z · LW(p) · GW(p)
Rationality is not about "should". It's not a value system.
↑ comment by Vladimir_Nesov · 2021-10-04T14:57:05.074Z · LW(p) · GW(p)
In the instrumental reading of "should", rational behavior should promote use of good cognitive algorithms. It's a bit inflationary to label any behavior that is not directly a habit of cognition "rational", but if anything is to be labeled that way, it's the things that lead to more systematic use of rational habits of cognition. This is in contrast to beliefs and actions merely generated as a result of using rational habits of cognition, calling those "rational" is obscenely inflationary.
↑ comment by ChristianKl · 2021-10-04T16:50:40.722Z · LW(p) · GW(p)
I'm not sure why an instrumental reading of "should" would result in "should" not being about creating obligations. In my experience, most of the time when people use the word "should" and then say that they aren't speaking about obligations, they aren't really clear about what they are saying.
In the case of the OP, I expect that he is thinking about whether there's an obligation to exercise.
↑ comment by Vladimir_Nesov · 2021-10-04T17:39:03.918Z · LW(p) · GW(p)
Most concepts can be thought of as purposes, inducing normativity [LW · GW] over things in the vicinity of the concept, pointing them in the direction of becoming more central examples of it. So a sphere shouldn't have bumps on it, and a guillotine should be sharp. There is usually equivocation with the shouldness of human values because many concepts are selected for being benign, including concepts for useful designs like cars and chairs, but the sense that emphasizes the purpose of a particular concept is more specific. This way rationality the concept is a property of ingredients of cognition, while rationality the purpose advises how ingredients of cognition should change to become more rational. This is the sense of being instrumental I meant, instrumental to fitting a concept better.
The idea of concepts as purposes is relevant to non-agentic behavior, where the emphasis is on coexistence of multiple purposes, not one preference, and for continued operation of specific agent designs, including rational human cognition, where parts of the design should keep to their purpose and resist corruption from consequentialist considerations, like with beliefs chosen by appeal to consequences or breaking of moral principles for the greater good [LW · GW].
↑ comment by Dagon · 2021-10-03T03:49:04.484Z · LW(p) · GW(p)
I don't think there's enough information to answer beyond the basic obvious expectation. 50% is my prior for coin flips, and unless you specify a VERY small number of voters and a known distribution of their votes, the coin flip is lost in the noise, so there's no evidence in the result. Assuming "always" is a mathematical certainty rather than just my opponent's intent (which could be misleading in the results), it must be 1 - coinflip.
↑ comment by Measure · 2021-11-04T00:59:11.299Z · LW(p) · GW(p)
[The following is in response to a deleted comment.]
You would need rather a large expanse of near-vacuum to avoid destabilizing the planet's orbit as the chaos raced inward to fill the void, and vacuum is quite an ordered state.
I agree with you about the non-consecutive sense of identity, but that should imply that most Boltzmann thought-instants would have random, incoherent memories rather than complete, laws-of-physics-obeying memories. This is, I think, a much stronger argument for a "normal" sequential world than relative size/complexity of brains vs. solar systems, which I still think points in the other direction. I have poor intuitions for these sorts of scales though, so I could easily be mistaken, and maybe there's something I'm not thinking of that would mean even non-consecutive Boltzmann brains would often have coherent memories (maybe the unlikeliness of coherent memories is dwarfed by the relative likeliness of momentary brains).
↑ comment by Measure · 2021-11-03T21:46:59.157Z · LW(p) · GW(p)
The difference is that the solar system is surrounded by ~vacuum that doesn't interfere with it whereas the hypothetical "large ordered structure" would be surrounded by a chaotic soup that would quickly disrupt it before anything could evolve.
↑ comment by Measure · 2021-11-03T20:32:31.851Z · LW(p) · GW(p)
The argument is that chaos -> brain is more likely than chaos -> large ordered structure -> habitable environment -> living organisms -> brain.
Most complex thoughts in a thermodynamic soup will be produced by the simplest structures capable of producing such thoughts - probably much smaller than an entire brain.
↑ comment by MikkW (mikkel-wilson) · 2021-10-15T19:08:54.918Z · LW(p) · GW(p)
If you don't know what you expect future / counterfactual versions of you to want, it will be hard to co-operate, so I recommend regularly spending time reflecting on what they might want, especially in relation to things that you have done recently. Reflect on what actions you have done recently (consider both the most trivial and the most seemingly important), and ask yourself how future and counterfactual versions of you will react to finding out that (past) you had done that. If you don't get a gut feeling that what you did was bad, test it out by trying to create and simulate a specific counterfactual version of yourself that would react in a maximally horrified way, reflect on what factors made that version of you horrified, and reflect on how likely those or similar factors are. You could spend ~7-10 mins each day doing this reflection, or ~30 mins each week, to develop a habit of thinking in this way. I'd recommend starting with the daily version, so you can really get used to it, before maybe moving to the weekly version, but you can start with the weekly version if that's more convenient, and get good results from that too.
Also remember that the way other humans will treat counterfactual versions of you will depend on their predictions of what you will do in this branch of reality. So try to act in such a way that, if the people interacting with counterfactual_you predicted or learned you would act that way, they would be maximally willing to do what counterfactual_you wants them to do.
↑ comment by Dirichlet-to-Neumann · 2021-10-14T18:09:05.748Z · LW(p) · GW(p)
Just take out coal/oil and a stable technological level seems possible. Also, I'm not sure those stable fantasy worlds really exist in literature; most examples I can think of have (sometimes magical) technological growth or decline.
Tolkien's Middle Earth is very young - a few thousand years. This means no coal, no oil, and no possibility of an industrial revolution. Technology would still slowly progress toward an 18th-century level, but I can see it happening slowly enough to make the state of technology we see in LOTR acceptable. On the other hand, magical technology is declining because the elves' magical power is slowly declining (both in quality and quantity) as they leave Middle Earth.
Sanderson's Roshar civilisation is regularly wiped out by an all-out war between good and evil, resetting technology to bronze-age level each time. Then (for some reason) no war happens for 3000 years, and during this time we see steady progress in magical technology (although Sanderson's inability to write in anything other than close third person means an unlikely amount of technological progress just happens wherever the heroes are).
Also, I'm pretty sure the environmental conditions on Roshar do not allow oil and coal to form, so once again no industrial revolution is possible.
↑ comment by JBlack · 2021-10-14T06:11:07.730Z · LW(p) · GW(p)
If the rules of the world preferentially destroy cultures that develop beyond the "standard fantasy technology level" (whatever that is) then I expect that over time, cultures will very strongly disfavour development beyond that level. I'm pretty sure that this will be a stable equilibrium.
If the rules are sufficiently object-level (such as in a computer game), then technological progress based on exploiting finer grained underlying rules becomes impossible. You can't work out how to crossbreed better crops if crops never crossbreed in the first place, and likewise for other things.
If intelligence itself past some point is a serious survival risk, then it will be selected against. You may get an equilibrium where the knowledge discovered between generations is (on long-term average) equal to knowledge lost.
... and so on.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2021-10-13T17:23:53.240Z · LW(p) · GW(p)
But as civilization develops people might rediscover magic, beginning the whole cycle anew.
↑ comment by JBlack · 2021-10-13T06:03:13.388Z · LW(p) · GW(p)
Fantasy worlds almost universally do have gods that actively work to maintain desired states of the world, so "it would take active maintenance to achieve" isn't any sort of theoretical evidence against the possibility of their existence.
Even that aside, it's pretty easy to think of barriers that mean you won't end up with a modern, industrialized society no matter how many Henry Fords you have. Even more so when you can invoke arbitrary things like "magic".
↑ comment by Yair Halberstadt (yair-halberstadt) · 2021-10-12T18:24:42.081Z · LW(p) · GW(p)
Imagine magic was super useful, but every use had a 1 in 100 billion chance of wiping out 99% of civilization. Then, by the prisoner's dilemma, people keep on using magic, and civilization keeps on being wiped out and having to start from scratch.
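A rough back-of-the-envelope sketch of why such a tiny per-use risk could still reset civilization on fantasy-history timescales. The per-cast probability is from the comment; the number of casters and the casting rate are my own invented assumptions.

```python
# Back-of-the-envelope sketch: how often civilization gets wiped if everyone
# keeps casting. Per-cast risk is from the comment; usage figures are assumed.
p_wipe_per_cast = 1e-11           # 1 in 100 billion per use
casters = 1_000_000               # assumption: a million active magic users
casts_per_caster_per_year = 100   # assumption: routine everyday use

casts_per_year = casters * casts_per_caster_per_year
expected_wipes_per_year = casts_per_year * p_wipe_per_cast

print(f"expected wipes per year: {expected_wipes_per_year:.4f}")
print(f"mean years between wipes: {1 / expected_wipes_per_year:,.0f}")
# ~0.001 wipes per year, i.e. roughly one reset per thousand years -- short
# enough that a pre-industrial civilization never gets far before collapsing.
```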
↑ comment by JBlack · 2021-10-04T08:27:08.257Z · LW(p) · GW(p)
That's not likely to be something you can calculate, and certainly not from the given information. At the very least, you'd want to know the ratio between P(A wins with 60% of the vote | you vote A) and P(A wins with 60% of the vote | you vote B).
For large numbers of voters who are unaffected by your decision, these are likely to be very close to each other, and so the posterior probability that the coin flip landed heads is very close to 50%.
For smaller numbers (e.g. a board meeting) and/or where your decision may influence other people it's much more complicated. The fact that in the follow-up question you have an enemy who votes against you implies that the vote is not a secret ballot and your vote does influence at least some other people. This means that the posterior distribution needs to be taken over all sorts of social dynamics and situations.
Even so, the posterior probability of heads isn't likely to be much different from 50% except in very unusual circumstances.
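A quick Monte Carlo sketch of the large-electorate, secret-ballot case described above. This is my own illustration, not JBlack's or the simulation referenced later in the thread; the function name, voter count, and the tolerance for "wins with ~60%" are arbitrary assumptions.

```python
# Monte Carlo sketch (illustrative parameters): with many voters whose behavior
# doesn't depend on you, conditioning on "A wins with ~60%" reveals almost
# nothing about the coin flip that decided your own vote.
import random

def posterior_heads(trials=20_000, voters=1_001, p_a=0.6, tol=0.01):
    a60 = heads_and_a60 = 0
    for _ in range(trials):
        heads = random.random() < 0.5                      # coin decides your vote
        others_a = sum(random.random() < p_a for _ in range(voters - 1))
        share_a = (others_a + (1 if heads else 0)) / voters
        if abs(share_a - 0.6) < tol:                       # observed: A wins with ~60%
            a60 += 1
            heads_and_a60 += heads
    return heads_and_a60 / a60 if a60 else float("nan")

print(posterior_heads())  # ~0.5: the result is essentially no evidence about the flip
```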
↑ comment by tslarm · 2021-10-03T20:23:01.787Z · LW(p) · GW(p)
(Sorry if I'm misreading anything; my excuse is that I'm operating on 3 hours' sleep and am not very familiar with Python syntax.)
I ran your 'regular run' version, modified to keep a count of 1-vote victories, and the results were as I would have predicted: https://imgur.com/Y17ecLq
I'm a bit confused by the 'random voter sample' version -- which scenario is that illustrating, and what's the deal with the 'myvote = random.randrange(-voters, voters)' and ' if votes*myvote > votes*votes:' lines?
↑ comment by tslarm · 2021-10-03T04:41:47.914Z · LW(p) · GW(p)
I wrote a long response to a related comment chain here: https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past?commentId=jRo2cGuXBbkz54E4o [LW(p) · GW(p)]
My short answer to this question is the same as Dagon's: if we're assuming a negligible probability that the election was close enough for your vote to be decisive, 50% in both cases.
I tried to explain the conflicting intuitions in that other comment. It turned out to be one of those interesting questions that feels less obvious after thinking about it for a couple of minutes than at first glance, but I think I resolved the apparent contradictions pretty clearly in the end.