Comment by makoyass on One Website To Rule Them All? · 2019-01-17T09:04:59.677Z · score: 1 (1 votes) · LW · GW

I'm very excited about what might happen if we got ten people like us in a channel. I think that's a community/project I'd give a lot of energy to, but it didn't occur to me until partway through reading your post, so I have not been collecting any names until this point, sorry. Maybe we should wait till we have a few more than two before I start sending out invites (by the time we do, there might be something nicer for async group chats than slack).

(weirdsuns are... analytic surrealists. I don't know if I'd say they're influential, but as a name for a certain kind of thinker, those unmoored by their artificial logics from the complacency of common sense, they're a good anchor on which to ground a label.)

Comment by makoyass on The E-Coli Test for AI Alignment · 2019-01-17T03:04:03.119Z · score: 1 (1 votes) · LW · GW

A correct implementation of the function DesireOf(System) should not have a defined result for this input. Sitting and imagining that there is a result for this input might just lead you further away from understanding the function.

Maybe if you tried to define much simpler software agents that do have utility functions, which are designed for very very simple virtual worlds that don't exist, then try to extrapolate that into the real world?

Comment by makoyass on Is there a.. more exact.. way of scoring a predictor's calibration? · 2019-01-17T00:40:46.289Z · score: 1 (1 votes) · LW · GW


n_k the number of forecasts with the same probability category

Doesn't that indicate that it's using histogram buckets? What I'm trying to say is that I'm looking for methods that avoid grouping probabilities into an arbitrary number of categories (chosen by the analyst). For instance, in the (possibly straw) histogram method I discussed in the question: if a predictor makes a lot of 0.97 bets and no corresponding 0.93 bets, their [0.9, 1] category will be called slightly pessimistic even if those forecasts came true exactly 0.97 of the time. I wouldn't describe anything in that genre as exact, even if it is the best we have.
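To make the artifact concrete, here's a toy sketch (the one-bucket/midpoint convention is my assumption about the straw method, not any standard scoring rule):

```python
# Toy demonstration of the artifact described above: a predictor who
# only ever says 0.97, and whose 0.97 forecasts come true exactly 97%
# of the time, still reads as "slightly pessimistic" when the [0.9, 1]
# bucket is scored against its midpoint.

forecasts = [0.97] * 100
outcomes = [True] * 97 + [False] * 3

# Histogram method: every forecast in [0.9, 1] lands in one bucket,
# represented here by its midpoint.
bucket_midpoint = 0.95
empirical_rate = sum(outcomes) / len(outcomes)

print(empirical_rate)                     # 0.97
print(empirical_rate > bucket_midpoint)   # True: looks underconfident
```

A perfectly calibrated predictor gets marked down by 0.02 purely because of where the analyst drew the bucket edges.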

Is there a.. more exact.. way of scoring a predictor's calibration?

2019-01-16T08:19:15.744Z · score: 4 (1 votes)
Comment by makoyass on Open Thread January 2019 · 2019-01-16T05:39:39.120Z · score: 3 (2 votes) · LW · GW

I'll say it again in different words: I did not understand the paper (and consequently, the blog) to be talking about actual blackmail in a big messy physical world. I understood them to be talking about a specific, formalized blackmail scenario, in which the blackmailer's decision to blackmail is entirely contingent on the victim's counterfactual behaviour, in which case resolving to never pay and still being blackmailed isn't possible; in full context, it's logically inconsistent.

Different formalisations are possible, but I'd guess the strict one is what was used. In the softer ones you still generally won't pay.

Comment by makoyass on One Website To Rule Them All? · 2019-01-14T09:15:25.888Z · score: 1 (1 votes) · LW · GW

Hey uh, I've been thinking all of those thoughts too. We should probably nucleate a community (a slack channel or something, somewhere to hang out and share our findings and make plans), because I'm pretty sure there are at least 10 people knocking around just here who have their heads as far into this as we do. I heard Eliezer was absolutely overflowing with discursive technologies when Arbital was being planned; his concepts were fractal. I've been that way. I guess I pulled back a bit when I started to understand that having infinite visions of sophisticated collective intelligence augmentation systems isn't really the hard part; the hard part is building any of it, funding it, and holding users.

I do see some ways to do those parts.

I'm just gonna start talking excitedly about the most recent piece of the puzzle I turned up, because until this moment I have not had many people to talk to about this (lots of friends who're interested, but not many who'd ever take what I was saying and do anything with it).

Yesterday I flipped out a little when I remembered that article Scott Aaronson did about eigenmorality (eigentrust), and I realised it's exactly the algorithm I've been looking for, for months, for doing a basic implementation of the thing you're calling "Contrast Voting"... (I'm going to keep calling it order-voting and graph rank recovery, if you don't mind? Idk, I think there are more standard terms than that.) I haven't tried it yet (I just found it yesterday. Also I want to port it to Rust), but I'm pretty sure it'll do it. Basically, what we need to implement order voting is a way of taking a whole lot of ordered-pair judgements/partial rankings from different users and combining them into a single global ranking. With eigentrust (similar to all the other stuff I've been trying), what we'll do is build a network graph where each edge represents the sum of user judgements over the two candidates, then run eigentrust on the graph (it's a similar technology to pagerank, if you've ever had that explained to you: score flows along directed links until an equilibrium is reached), and then we have an overall ranking of the candidates. We'll need to do some special extra stuff to allow the system to notice when two clusters of candidates aren't really comparable with the data we have, and it'll probably need to try to recognise voting cabals, because there's a dark horse problem where-...
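For the record, here's a rough sketch of the kind of aggregation I mean. This is my own guess at the shape of the thing, not the eigentrust paper's actual algorithm; the function names, the damping constant, and the normalisation are all assumptions:

```python
# Sketch of pairwise-judgement rank aggregation via a PageRank-style
# power iteration: each candidate's score flows to whoever beat it,
# in proportion to how often they did, until equilibrium.

def aggregate_ranking(candidates, judgements, damping=0.85, iters=200):
    """judgements: list of (loser, winner) pairs from user comparisons."""
    n = len(candidates)
    idx = {c: i for i, c in enumerate(candidates)}
    # beats[w][l] = number of judgements saying w beat l
    beats = [[0.0] * n for _ in range(n)]
    for loser, winner in judgements:
        beats[idx[winner]][idx[loser]] += 1.0
    losses = [sum(beats[w][l] for w in range(n)) for l in range(n)]
    score = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for l in range(n):
            if losses[l] == 0:           # undefeated: share score evenly
                for w in range(n):
                    new[w] += damping * score[l] / n
            else:                        # pass score to those who beat you
                for w in range(n):
                    new[w] += damping * score[l] * beats[w][l] / losses[l]
        score = new
    return sorted(zip(candidates, score), key=lambda p: -p[1])

# Three candidates, with judgements consistently saying A < B < C:
ranking = aggregate_ranking(
    ["A", "B", "C"],
    [("A", "B")] * 3 + [("B", "C")] * 3 + [("A", "C")] * 3,
)
print([c for c, _ in ranking])   # C comes out on top
```

The real thing would need the extra machinery mentioned above (incomparable clusters, cabal detection), but the core equilibrium step really is this small.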

I should really write this out properly somewhere.

The reason I haven't done that already is that I'm not sure how many of our concepts should be exposed publicly.

These technologies are actually powerful. Even just order voting alone would speed up content sorting by like 20x; imgur could use it for recommending fucking cat pictures and they would become even more compulsive than they already are. (They might already be using it in a hidden way; I think netflix is.) Power isn't good or bad on its own, but some powers are more likely to be put to good uses than bad. Collective intelligence platforms are more likely to be put towards good uses than AGI is. They're inherently made of humans, so they're more likely to reflect roughly human values even when they go wrong, but in their worst incarnations they can still end up becoming completely insane demonic egregores like... dare I even speak their names? No, no I daren't, because I don't want to draw their millions of eyes towards me. Let's just say that some of the social media platforms I frequent most often are basically incapable of forming sound epistemic structures, and I'm afraid of most of the segments there, and I really hope that the words they're saying never become much more than words.

The ideas I have for technologies that'd gather and harmonise users quickly and efficiently are also some of the ones that scare me the most. I know how to summon an egregore, but making the egregore come out of the portal sane takes a special extra step. It's absolutely doable, but I wouldn't trust anyone who's not at least weirdsun-adjacent to understand the problems well enough, to stop and think about what they're doing, to put in the work to make it all turn out human, and to maybe not release it onto the internet before it's sound.

I think the first step is to make something that gathers information that people want. A place where people will feel comfortable forming communities and spending time. A humane place, something that respects peoples' attention, rewards it.

The world needs platforms where good mass discourses can exist; currently we have, actually, none.

I actually think this should be an EA cause. At some point, if we can gather a decent team, we should start asking for funding. Maybe move to the EA Hotel in Blackpool and grind on it for a bit once we have a 1.0 vision.

Comment by makoyass on When is CDT Dutch-Bookable? · 2019-01-14T05:10:49.061Z · score: 1 (1 votes) · LW · GW

Hmm. I don't think I can answer the question, but if you're interested in finding fairly realistic ways to Dutch-book CDT agents, I'm curious: would the following be a good method? Death in Damascus would be very hard to do IRL, because you'd need a mindreader, and most CDT agents will not allow you to read their minds, for obvious reasons.

A game with a large set of CDT agents. They can each output Sensible or Exceptional. If they output Sensible, they receive $1. Those who output Exceptional don't get anything in that stage.

Next, if their output is the majority output, an additional $2 is subtracted from their score. If they're exceptionally clever, if they manage to disagree with the majority, then $2 is added to their score. A negative final score means they lose money to us. We will tend to profit because, generally, they're not exceptional: there are more majority bettors than minority bettors.

CDT agents act on the basis of an imagined future where their own action is born from nothing, and has no bearing on anything else in the world. As a result of that, they will reliably overestimate⬨ (or more precisely, reliably act as if they have overestimated) their ability to evade the majority. They are exceptionalists. They will (act as if they) overestimate how exceptional they are.

Whatever method they use to estimate⬨ the majority action, they will tend to come out with the same answer, and so they will tend to bet the same way, and so they will tend to lose money to the house continuously.

⬨ They will need to resort to some kind of an estimate, won't they? If a CDT agent tries to simulate itself (with the same inputs), that won't halt (the result is undefined). If a CDT-like agent can exist in reality, it'll use some approximate method for this kind of recursive prediction work.

After enough rounds, I suppose it's possible that their approximations might go a bit crazy from all of the contradictory data and reach some kind of equilibrium where they're betting different ways somewhere around 1:1 and it'll become unprofitable for us to continue the contest, but by then we will have made a lot of money.
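Here's a toy run of the game, under the assumption that every agent shares one approximate estimator of the majority action (the particular estimator is invented for illustration), so they always bet the same way:

```python
# Toy run of the majority game described above, with identical
# CDT-style agents that all share one estimator and so all make the
# same bet every round, handing the house a steady profit.

def shared_estimate(history):
    # Everyone's shared guess: this round's majority will repeat last
    # round's (an arbitrary, purely illustrative estimator).
    return history[-1] if history else "Sensible"

def play(n_agents=11, rounds=6):
    history, house = [], 0.0
    for _ in range(rounds):
        predicted_majority = shared_estimate(history)
        # Each agent tries to dodge the predicted majority...
        choice = "Exceptional" if predicted_majority == "Sensible" else "Sensible"
        # ...but identical reasoning means that choice IS the majority:
        # $1 for outputting Sensible, minus the $2 majority penalty.
        payoff = (1.0 if choice == "Sensible" else 0.0) - 2.0
        house -= payoff * n_agents
        history.append(choice)
    return house

print(play())   # house winnings over 6 rounds: 99.0
```

Every round, every agent "evades" a majority it is itself part of, and pays for it.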

Comment by makoyass on Open Thread January 2019 · 2019-01-13T21:30:34.245Z · score: 3 (2 votes) · LW · GW

I read that under a subtext where we were talking about the same blackmail scenario, but, okay, others are possible.

In cases where the blackmail truly seems not to be contingent on its policy (and in many current real-world cases), the FDT agent will pay.

The only cases when an FDT agent actually will get blackmailed and refuse to pay are cases where being committed to not paying shifts the probabilities enough to make that profitable on average.

It is possible to construct obstinate kinds of agents that aren't sensitive to FDT's acausal dealmaking faculties. Evolution might produce them often. They will not be seen as friends. As an LDT-like human, my feeling towards those sorts of blackmailers is that we should destroy all of them as quickly as we can, because their existence is a blight on ours. In light of that, I'm not sure they have a winning strategy. When you start to imagine the directed ocean of coordinated violence that an LDT-aligned faction (so, literally any real-world state with laws against blackmail) points in your direction as soon as they can tell what you are, you may start to wonder if pretending you can't understand their source code is really a good idea.

I imagine a time when the distinction between CDT and LDT is widely understood. By then, the very idea of blackmail will have come to seem very strange. We will wonder how there was this era when a person could just say "If you don't do X, then I will do the fairly self-destructive action Y, which I gain nothing from doing" and have everyone just believe them unconditionally, just believe this unqualified statement about their mechanism. Wasn't that stupid? To lie like that? And even stupider for their victims to pretend that they believed the lie? We will not be able to understand it any more.

Imagine that you see an agnostic community head walking through the park at night. You know it's a long shot, but you amble towards him, point your gun at him and say "give me your wallet." He looks back at you and says, "I don't understand the deal. You'll shoot me? How does that help you? Because you want my wallet? I don't understand the logic there, why are those two things related? That doesn't get you my wallet."

Only it does, because when you shoot someone you can loot their corpse, so it occurs to me that muggers are a bad example of blackmail. I imagine they've got to have a certain amount of comfort with actually killing people, to do that. It's not really threatening to do something self-destructive, in their view, they still benefit a little from killing you. They still get to empty your pockets. To an extent, mugging is often just a display of a power imbalance and consequent negotiation of a mutually beneficial alternative to violence.

The gang can profit from robbing your store at gunpoint, but you and them both will profit more if you just pay them protection money. LDT only refuses to pay protection money if it realises that having all of the other entangled LDT store owners paying protection money as well would make the gang profitable enough to grow, and that having a grown gang around would have been, on the whole, worse than the amortised risk of being robbed.

Comment by makoyass on Open Thread January 2019 · 2019-01-13T08:12:49.288Z · score: 21 (7 votes) · LW · GW

I'm not gonna go comment on his blog, because his confusion about the theory (supposedly) isn't related to his rejection of the paper, and also because I think talking to a judge about the theory out of band would bias their judgement of the clarity of the writing in future (it would come to seem more clear and readable to them than it is, just as it would to me), and is probably bad civics. But I just have to let this out, because someone is wrong on the internet, dammit:

FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You are being blackmailed.

So he's using a counterexample that's predicated on a logical inconsistency and could not happen. If a decision theory fails in situations that couldn't really happen, that's actually not a problem.


If you are in Newcomb's Problem with Transparent Boxes and see a million in the right-hand box, you again fare better if you follow CDT. Likewise if you see nothing in the right-hand box.

is the same deal: if you take the right box, that's logically inconsistent with the money having been there to take. That scenario can't happen (or happens only rarely, if he's using that version of Newcomb's Problem), and it's no mark against a decision procedure if it doesn't win in those conditions. It will never have to face those conditions.

What if someone is set to punish agents who use FDT, giving them choices between bad and worse options, while CDTers are given great options? In such an environment, the engineer would be wise not build an FDT agent.

What if someone is set to punish agents who use CDT, giving them choices between bad and worse options, while FDTers are given great options? In such an environment, the engineer would be wise not to build a CDT agent.

What if a skeleton pops out in the night and demands that you recite the Magna Carta, or else it will munch your nose off? Will you learn to recite the Magna Carta in light of this damning thought experiment?

It is impossible to build an agent that wins in scenarios that are specifically contrived to foil that kind of agent. It will always be possible to propose specifically contrived situations for any proposed decision procedure.

Aaargh this has all been addressed by the arbital articles! :<

Comment by makoyass on Open Thread January 2019 · 2019-01-13T07:15:35.971Z · score: 1 (1 votes) · LW · GW

What's meant by "Moral" here?

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-13T06:26:13.128Z · score: 1 (1 votes) · LW · GW
I'm definitely more of the Denettian "consciousness is a convenient name for a particular sort of process built out of lots of parts with mental functions" school.

I'm in that school as well. I'd never call correlates of anthropic measure, like integrated information, "consciousness"; there's too much confusion there. I'm reluctant to call the purely mechanistic perception-encoding-rumination-action loop consciousness either. For that I try to stick, very strictly, to "conscious behaviour". I'd prefer something like "sentience" to take us even further from that mire of a word.

(But when I thought of the mirror chamber it occurred to me that there was more to it than "conscious behaviour isn't mysterious, it's just machines". Something here is both relevant and mysterious. And so I have to find a way to reconcile the schools.)

Anthres ∝ mass is not supposed to be intuitive. Anthres ∝ number is very intuitive; what about the path from there to anthres ∝ mass didn't work for you?

Comment by makoyass on Combat vs Nurture & Meta-Contrarianism · 2019-01-12T22:21:12.604Z · score: 1 (1 votes) · LW · GW

I'm currently of the view that anything below level three is a complete waste of time, and that if we can't find a way to elevate the faith level quickly and efficiently then we have better things to be doing and shouldn't engage much at all. (This is mere opinion, and it's a very bold opinion, so I encourage people to try to wreck it, if they think they can.)

Comment by makoyass on Combat vs Nurture & Meta-Contrarianism · 2019-01-12T22:12:48.438Z · score: 5 (3 votes) · LW · GW

Let's call this process of {exposing our guessed interpretations of the other person's position}.. uh.. "batleading"

I wonder how often that impulse to batlead is not correctly understood by the batleader. When people respond as if we're strawmanning, or failing to notice our confusion and trying prematurely to dismiss a theory we ought to realise we haven't understood (when really we just want to batlead), we tragically lack the terms, or the introspection, to object to that erroneous view of our state of mind, and things just degenerate from there.

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T20:21:04.906Z · score: 1 (1 votes) · LW · GW
Treating a merely larger brain as more anthropically important is equivalent to saying that you can draw this boundary inside the brain

I really can't understand where this is coming from. When we weigh a bucket of water, this imposes no obligation to distinguish between individual water molecules. For thousands of years we did not know water molecules existed, and we thought of the water as continuous. I can't tell whether this is an answer to what you're trying to convey.

Where I'm at is... I guess I don't think we need to draw strict boundaries between different subjective systems. I'll probably end up mostly agreeing with Integrated Information theories. Systems of tightly causally integrated matter are more likely as subjectivities, but at no point are supersets of those systems completely precluded from having subjectivity; for example, the system of me plus my cellphone also has some subjectivity. At some point, the universe experiences the precise state of every transistor and every neuron at the same time. (This does not mean that any conscious-acting system is cognisant of both of those things at the same time. Subjectivity is not cognisance. It is possible to experience without remembering or understanding. Humans do it all the time.)

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T02:01:52.648Z · score: 1 (1 votes) · LW · GW

I haven't read their previous posts, could you explain what "who has the preferences via causality" refers to?

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T01:00:15.641Z · score: 1 (1 votes) · LW · GW

Will read. I was given pause recently when I stumbled onto If a tree falls on Sleeping Beauty, where our bets (via LDT reflectivist pragmatism, I'd guess) end up ignoring anthropic reasoning

The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter

2019-01-11T22:26:29.887Z · score: 11 (4 votes)
Comment by makoyass on Bottle Caps Aren't Optimisers · 2019-01-08T08:09:26.604Z · score: 1 (1 votes) · LW · GW

A larger set of circumstances... how are you counting circumstances? How are you weighting them? It's not difficult to think of contexts and tasks where boulders outperform individual humans under the realistic distribution of probable circumstances.

Comment by makoyass on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T21:06:49.844Z · score: 1 (1 votes) · LW · GW

Yeah. I think I did notice it talking about a stochastic policy at one point, and on reflection I don't see any other reasonable way to do that. This interpretation also accords with making the agent's actions part of the observation history. If they were a pure function of the observations, we wouldn't need them to be there.

Comment by makoyass on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-01T08:14:10.508Z · score: 5 (3 votes) · LW · GW

In the FHI's indifference paper, they define policies as mapping observation-action histories to a distribution over actions instead of just actions ("π : H → ∆(A)"). Why is that? Is that common? Does it mean the agent is stochastic?
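For concreteness, here's a minimal sketch of what a π : H → Δ(A) policy could look like. The history format and the particular toy distribution are invented here, not taken from the paper:

```python
# A minimal sketch of a policy with type H -> Δ(A): it maps an
# observation-action history to a probability distribution over
# actions, and the agent then samples from that distribution.
import random

ACTIONS = ["left", "right"]

def policy(history):
    """Toy stochastic policy: the longer the history, the more it
    favours 'right'. Returns a dict mapping action -> probability."""
    p_right = min(0.9, 0.5 + 0.1 * len(history))
    return {"left": 1.0 - p_right, "right": p_right}

def act(history, rng=random.Random(0)):
    """Sample one action from the policy's distribution over actions."""
    dist = policy(history)
    return rng.choices(list(dist), weights=list(dist.values()))[0]

dist = policy([("obs0", "left")])   # one (observation, action) step so far
print(dist)
print(act([("obs0", "left")]) in ACTIONS)   # True
```

A deterministic policy would be the special case where every distribution puts probability 1 on a single action.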

Comment by makoyass on Open and Welcome Thread December 2018 · 2019-01-01T00:18:08.285Z · score: 2 (2 votes) · LW · GW

I'm certainly interested in playing with reallocation systems in existing cities, but if we can go beyond that, we must.

"Gentrification", for me, includes the effect where land prices increase without any increase in value. That pricing does useful work by allocating land to its most profitable uses, but it does that through costly bidding wars and ruthless extraction of rent, which have horrible side-effects: reducing the benefits regular people derive from living in cities by, I'd guess, maybe 80%? (Reminder: not only is your rent too damn high, but so is the rent of the businesses you frequent), and allocating vast quantities of money to the landowning class, who often aren't producing anything (especially often in San Francisco). If we can make a system that allocates land to its most productive use without those side-effects, then we no longer need market pricing as a civic mechanism, and we should be trying like hell to get away from it. Everyone should be trying like hell to get away from it, but people who believe they have a viable, mostly side-effect-free substitute should be trying especially hard.

A large part of the reason I'm attracted to the idea of building in a rural or undeveloped area is that it will probably be easier to gain the use of eminent domain in that situation. If we're building amid farmland, and we ask the state for the right to buy land directly adjacent to the city at a price of, say, double the inflation-adjusted price of local farmland as of the signing of the deal, it's hard to argue that anyone loses out much. There wasn't really much of a chance that land was going to rise to that price on its own; any rise would have been an obvious exploitation of the effects of the city. If you ask for a similar privilege on urban land, forced sale at a capped price is a lot more messy (and, of course, the price cap will be like 8x higher). For one thing, raising land prices in response to adjacent development is just what landowners are used to in cities, and they will throw a very noisy fit if someone threatens that.

Comment by makoyass on Editor Mini-Guide · 2018-12-30T09:01:37.736Z · score: 3 (2 votes) · LW · GW

Oh I see the index on the left is constructed automatically

Okay so how did you make that index on the left side of the page? :p

Comment by makoyass on What makes people intellectually active? · 2018-12-30T03:59:46.982Z · score: 11 (5 votes) · LW · GW

I receive an original idea every time I face an uncomfortably vague but demanding obsession, a question I didn't know how to ask. I think about it until I do know how to ask the question, until the vague obsession becomes precise. Out comes a payoff. I can write it down and people will like it, usually (if I can convince anyone to read it).

I can't imagine that there are a lot of people who don't get these leading obsessions, these tractable neuroses, these itching intuitions that there is something over there that we should be trying to get to know. I think there is a difference between people, some people go after those sorts of smells, others are repelled. A lot of good work comes from people who, through circumstance or psychology, cannot ignore their difficult questions.

I don't know what use this observation is for creative engineering work. I've been stuck on a simple game design problem for weeks, and I'm pretty sure that's because I never learned to direct my creativity (or, to phrase it another way: the thing that is directing my creativity does not respect and listen to the thing that knows what problems I'm supposed to be working on right now). Something in the design is missing, shallow, but this problem.. in my mind.. never asserts itself as one of those kinds of unarticulated questions that can and must be answered. I just want to turn away. I want to do something else. Some crucial party in me is not interested, and I can't tell it it's wrong to be disinterested. I hate this game. I know that one day I will love it again; unfortunately I love it when it needs my love the least, and I hate it when it needs my love the most. Maybe I need to divorce the concept of the game as it exists (the current build) from the vision, the game as it should exist. A terrible thing to be confused about, but it feels like that's what's going on.

I've thought this before: a finished, released game will always be a thin shadow of an experience it is alluding to. A lot of the time games make that very obvious, practically explicit. I didn't want it to be obvious; I wanted the game to be honest, just what it appears to be, and so I have to go through hell to bring the being far enough forward to align with the appearance.

Comment by makoyass on Spaghetti Towers · 2018-12-24T21:25:38.338Z · score: 6 (2 votes) · LW · GW

Suddenly very inspired with the idea of a programming language where even the most carelessly constructed spaghetti towers are fairly easy to refactor.

I want to design a medium that can be assembled into a fairly decent set of shelves by just progressively throwing small quantities of it at a wall.

I want to design tools that do not require foresight to be used well, because if you're doing something that's new to you, that challenges you- and I want to live in a world where most people are- foresight is always in scarce supply.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-23T23:27:09.359Z · score: 2 (2 votes) · LW · GW

They have very little to be afraid of if their commitment is true, and if it's not, we don't want it. The commitment thing isn't just a marketing stunt. It's a viability survey. The data has to be good.

I guess I should add, on top of the process for forgiving commitments under unavoidable mitigating circumstances, there should be a process for deciding whether the city met its part of the bargain. If the facilities are not what was promised, fines must be reduced or erased.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-23T07:54:30.230Z · score: 2 (2 votes) · LW · GW

There are many kinds of commerce I don't know much about. I'm going to need help with figuring out what a weird city where the cost of living is extremely low is going to need in order to become productive. The industries I do know about are fairly unlikely to require proximity to a port, but even in that set.. a lot of them will want proximity to manufacturing, and manufacturing in turn will want to be near a port?

Can you think of any reasons we couldn't make the coordinated city's counterpart to the FSP's Statement of Intent contract legally binding, imposing large fines on anyone who fails to keep to their commitment? (While attempting, where possible, to provide exceptions for people who can prove they were not in control of whatever kept them from keeping it.) Without that, I doubt those commitments will amount to much.

For a lot of people, a scheme like this will be the only hope they'll ever have of owning (a share in) any urban property, if they can be convinced of the beneficence of the reallocation algorithms (I imagine there will be many opportunities to test them before building a fully coordinated city). I don't really understand what it is about the FSP that libertarians find so exciting, but I feel like the coordinated city makes more concrete promises of immediate and long-term QoL than the FSP ever did. Note, the allocator includes the promise of finding ourselves surrounded by like-minded individuals.

Comment by makoyass on 0 And 1 Are Not Probabilities · 2018-12-21T00:48:46.735Z · score: 0 (2 votes) · LW · GW

/r/badmathematics is shuttered now, apparently.

"This community has become something of a shitshow. Setting badmath to private while we try to decide on a way forward with the subreddit."

Oh no, really? Who would have thought that the sorts of people who have learned to enjoy indulging contempt would eventually turn on each other.

I really wanted to see that argument, though. Tell me, to what extent was it an argument? Cause I feel like if a person in our school wanted to settle this, they'd just distinguish the practical cases EY's talking about from the mathematical cases the conversants are talking about, and everyone would immediately wake up and realise how immaterial the disagreement always was (though some of them might decide to be mad about that instead). But also, maybe Eliezer kind of likes getting people riled up about this, so maybe dispersing the confusion never crossed his mind. Contempt vampires meet contempt bender. Kismesis is forged.

I shouldn't contribute to this "fight", but I can't resist. I'd have recommended he bring up how the brunt of the causal network formalization explicitly disallows certain or impossible events at the math level once you cross into a certain level of sophistication (I forget where the threshold was, but I remember thinking "well, the bayesian networks that support 0s and 1s sound pretty darn limited, and I'm going to give up on them just as my elders advised").

Ultimately, the "can't be 0 or 1" restriction is pretty obviously needed for a lot of the formulas to work robustly. You can't even use the definition of conditional probability without restricting the prior of the evidence, because there's a division in it! There are lots of divisions in probability theory!
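A tiny illustration of where the division bites (this is just standard Bayes' rule, nothing specific to the thread):

```python
# Why the formulas want probabilities strictly inside (0, 1): under
# Bayes' rule a prior of exactly 0 or 1 is absorbing (no evidence can
# ever move it), and conditioning on a probability-0 event is a
# division by zero.

def bayes(prior, likelihood_if_h, likelihood_if_not_h):
    evidence = prior * likelihood_if_h + (1 - prior) * likelihood_if_not_h
    return prior * likelihood_if_h / evidence   # undefined if evidence == 0

print(bayes(0.5, 0.9, 0.1))   # an ordinary update: 0.9
print(bayes(0.0, 0.9, 0.1))   # certainty never moves: 0.0
print(bayes(1.0, 0.9, 0.1))   # certainty never moves: 1.0
# bayes(0.0, 0.9, 0.0) raises ZeroDivisionError: conditioning on P = 0
```

Once a weight hits 0 or 1, no amount of evidence can ever move it again, which is exactly the radical-skepticism point.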

So I propose that we give a name to that restriction, and I offer the name "credences". (Currently, it seems the word "credence" is just assigned to a bad overload of "probability" that uses percent notation instead of the normal [0, 1] range. I doubt anyone will miss it.)

A probability is a credence iff it is neither 0 nor 1. A practical, real-world, rightly and justly radically skeptical bayesian reasoner should probably restrict a large, well-delineated subset of its evidence weights to being credences.

And now we can talk about credences and there's no need for any more confusion, if we want.

Comment by makoyass on What is abstraction? · 2018-12-17T07:40:12.776Z · score: 1 (1 votes) · LW · GW
I get the impression from hearing other people talk about it that there is a single meaning, and that I'm not understanding what that single meaning is

People are often wrong about that.

A person who understands this effect can use it to exploit people, and when they do, it is called "equivocation": using two different senses of the same word in quick enough succession that nobody notices the words aren't really pointing at the same thing, then using the inconsistencies between the word senses to reach impossible conclusions.

I wish I could drop a load of examples but I've never been good at that. This deserves a post. This deserves a paper, there are probably whole philosophical projects that are based on the pursuit of impossible chimeras held up by prolonged, entrenched equivocation...

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-16T01:24:09.824Z · score: 3 (2 votes) · LW · GW

Update on preference graph order recovery

I decided to stop thinking about the Copeland method (the method where you count how many victories each candidate has had and sort everyone according to that). They don't mention it in the analysis (pricks!), but the flaw is so obvious I'm not gonna be humble about this.

Say you have a set of order judgements like this:

< = { (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (p u) (p u) (p u) (p u) }

It's a situation where the candidate "s" is a strawman. No one actually thinks s is good. It isn't relevant and we probably shouldn't be discussing it. (But we must discuss it, because no informed process is setting the agenda, and this system will be responsible for fixing the agenda. Being able to operate in a situation where the attention of the collective is misdirected is mandatory)

p is popular. p is better than the strawman, but that isn't saying much.

u is the ultimate, and is known by some to be better than p in every way. There is no controversy about that, among those who know u.

Under the Copeland method, u still loses to p, because p has fought more times and won more times.

The Copeland method is just another popularity contest. It is not meritocratic. It cannot overturn an incumbency by helping a few trusted seekers to spread word about their finding. It does not spread findings. It cannot help new things rise to prominence. Disregard the Copeland method.
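The failure is easy to reproduce. A sketch of the victory-counting rule as described above, on the example judgement set (note this counts raw victories, as in the description; it is not a claim about every published variant of Copeland):

```python
from collections import Counter

# The judgement multiset from above, as (loser, winner) pairs:
# nine judgements put p above s, four put u above p.
judgements = [("s", "p")] * 9 + [("p", "u")] * 4

# Victory counting: sort candidates by how many times they've won.
wins = Counter(winner for _loser, winner in judgements)
ranking = sorted({"s", "p", "u"}, key=lambda c: -wins[c])
print(ranking)  # ['p', 'u', 's']: p outranks u despite losing to u every time
```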


A couple of days ago I started thinking about defining a metric by treating every edge in the graph (every judgement) as having a "charge", then defining a way of reducing serial wires and a way of reducing parallel wires, then getting the total charge between each pair of points (it'll have O(n^3) time complexity at first, but I can think of lots of ways to optimise that; I wouldn't expect much better from a formal objective measure), then assembling that into a ranking.

Finding serial and parallel reducers with the right properties didn't seem difficult (I'm currently looking at parallel(a, b) → a + b and serial(a, b) → 1/(1/a + 1/b)). That was very exciting to realise. The current problem is that it's not clear every tangle can be trivially reduced to an expression of parallels and serials; consider the paths between the top-left and bottom-right nodes in a network shaped like "▥", for instance.
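As a sketch, the two reducers under consideration (they have the same shapes as conductances combined in parallel and in series, with "charge" playing the role of conductance):

```python
def parallel(a: float, b: float) -> float:
    # Independent paths accumulate charge.
    return a + b

def serial(a: float, b: float) -> float:
    # A chain combines harmonically, like conductances in series.
    return 1.0 / (1.0 / a + 1.0 / b)

# Two unit edges in a chain, in parallel with a third unit edge:
print(parallel(serial(1.0, 1.0), 1.0))  # 1.5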

Calculating the conductance between two points in a tangled circuit may be a good analogy here... and I have a little intuition that this would be NP-hard in the most general case, despite being deceptively tractable in real-world cases. Someone here might be able to dismiss or confirm that. I'm sure it's been studied, but I can't find a general method, nor a proof of hardness.

If true, it would make this not so obviously useful as a formal measure sufficient for use in elections.
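(For what it's worth, the electrical version of the question does have a general method that sidesteps series-parallel reduction entirely: build the graph Laplacian from the edge charges and solve a linear system, which is a polynomial-time computation. A sketch, with unit charge on every edge of a bridge network of the "▥" kind; whether this number is the right measure for judgement graphs is a separate question:)

```python
def effective_conductance(n, edges, s, t):
    """n nodes, edges as (u, v, charge) triples; returns the effective
    conductance between nodes s and t, via the graph Laplacian."""
    # Build the Laplacian: L[i][i] = total charge at i, L[i][j] = -charge(i, j).
    L = [[0.0] * n for _ in range(n)]
    for u, v, c in edges:
        L[u][u] += c
        L[v][v] += c
        L[u][v] -= c
        L[v][u] -= c
    # Ground node t (delete its row and column), inject unit flow at s.
    keep = [i for i in range(n) if i != t]
    A = [[L[i][j] for j in keep] for i in keep]
    b = [1.0 if i == s else 0.0 for i in keep]
    # Plain Gaussian elimination with partial pivoting.
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for k in range(col, m):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, m))) / A[r][r]
    # The potential at s is the effective resistance; invert for conductance.
    return 1.0 / x[keep.index(s)]

# Square 0-1-3-2-0 with a bridging edge 1-2: not series-parallel reducible,
# but by symmetry the bridge carries nothing, so the answer is 1.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 1.0), (1, 2, 1.0)]
print(effective_conductance(4, edges, 0, 3))
```

The solve itself is cheap; the open question is whether effective conductance means the right thing for preference graphs, not whether it can be computed.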

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-15T23:06:22.848Z · score: 9 (5 votes) · LW · GW
Also, what are LessWrong's views on the idea of a continuous consciousness?

It's kind of against the moderation guidelines of "Make personal statements instead of statements that try to represent a group consensus" for anyone to try to answer that question hahah =P

But, authentically relating just for myself as a product of the local meditations: There is no reason to think continuity of anthropic measure uh.. exists? On a metaphysical level. We can conclude from Clones in Rooms style thought experiments that different clumps of matter have different probabilities of observing their own existence (different quantities of anthropic measure or observer-moments) but we have no reason to think that their observer-moments are linked together in any special way. Our memories are not evidence of that. If your subjectivity-mass was in someone else, a second ago, you wouldn't know.

An agent is allowed to care about the observer-states that have some special physical relationship to their previous observer-states, but nothing in decision theory or epistemology will tell you what those physical relationships have to be. Maybe the agent does not identify with itself after teleportation, or after sleeping, or after blinking. That comes down to the utility function, not the metaphysics.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-09T01:40:30.129Z · score: 13 (10 votes) · LW · GW

Most of the land around you is owned by people who don't know you, who don't support what you're doing, who don't particularly want you to be there, and who don't care about your community. If they can evict the affordable vegan food dispensary and replace it with a cheesecake factory that will pay higher rent, they will do that, repeatedly, until your ability to profit from your surroundings as a resident is as close to zero as they can make it without driving you away to another city, and if you did go to another city, you would watch the same thing happen all over again. You are living in the lands of tasteless lords, who will allow you to ignore the land-war that's always raging, it's just a third of your income, they tell you, it's just what it costs.

That's not what it costs. We can get it a lot cheaper if we coordinate. And whatever we use to coordinate can probably be extended to arranging a much more livable sort of city.

So I've been thinking a lot about what it would take to build a city in the desert where members' proximity desires are measured, clustered and optimised over, where rights to hold land are awarded and revoked on the basis of that. There would be no optimal method, but we don't need an optimal method. All we need is something that works well enough to beat the clusterfuck of exploitation and alienation that is a modern city. The system would gather us all together and we would be able to focus on our work.

I'll need more algorithms before I can even make a concrete proposal. Has anyone got some theory on preference aggregation algorithms? I feel like if I can learn a simple, flexible preference-graph order recovery algorithm, I'll be able to do a lot with that.

It'll probably involve quadratic voting on some level. Glen Weyl has a lot of useful ideas.

Comment by makoyass on Worth keeping · 2018-12-08T01:20:06.708Z · score: 3 (1 votes) · LW · GW

It's an interesting tradeoff, but it doesn't come up much, for me. I think, in most relevant domains, people aren't actually good at hiding their problems. Humans seem too complex, too expressive, too transparent. We were not adapted to effectively wielding privacy. We cannot fake important skills or insights that we do not have: We don't know what we don't know, we don't know the tells.

For most people, the only way to present a convincing picture of a human being, clear enough for anyone to trust you with anything, is to tell the truth.

Comment by makoyass on Summary: Surreal Decisions · 2018-12-01T19:32:04.508Z · score: 1 (1 votes) · LW · GW

Yeah, there's still difficult stuff to grapple with. Mathematics isn't my specialization and I'm not in any way disagreeing that surreal numbers might be relevant here. I've been thinking about digging into Measure Theory.

Comment by makoyass on Summary: Surreal Decisions · 2018-11-30T21:49:21.239Z · score: 5 (2 votes) · LW · GW

Have infinite ethics been reexamined in light of the logical decision theories?

I then come along and punch 100 people destroying 100 utility

Under a logical decision theory, your decision procedure is reflected an infinite number of times across the universe; you can't just punch 100 people and then stop there. If you decide to punch any people, an infinite number of reflections of you punch an infinite number of people. The assumption "the outcomes of your decisions are usually finite" is thrown out.

Modelling potential actions as isolated counterfactuals is wrong and doesn't work. We've known this for a while.

Comment by makoyass on Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn! · 2018-11-18T04:59:21.101Z · score: 5 (4 votes) · LW · GW

resist being controlled by our motivations

That's a funny thing to say. The point of an agent is for it to be controlled by its motivations. But I think I know what you mean. Part of this skill is maintaining a high-level overview of everything we value, never getting destructively obsessed with a few passions to the detriment of the others, yes. The hard thing about this is that it really feels like the weightings of the components of the utility function change over time. If I were drunk and mad, for instance, I'd have to ask myself whether maybe I really do care more, in that moment, about punching that guy over there than I care about not getting arrested. I can think the thought "but if I assault someone I'll get arrested" and go on to think "it's worth it. I have to". And maybe that's not a malfunction. Maybe that's just what humans like to be. And maybe that means I should take care to avoid ever getting into situations where I might get drunk and mad.

Or maybe part of the eschatology skill is developing a stable heart, an unwavering sense of good, or a sense of some underlying unwavering good. Like a Kokoimudji always knows where north is, perhaps we must learn to always see roughly where the longest-term good is, even when we're lost among our passions.

Comment by makoyass on Act of Charity · 2018-11-18T04:41:34.645Z · score: 8 (4 votes) · LW · GW

self-deception is a choice

I get in trouble when I live this belief; I don't recommend it. I might find it easier to be around people if I thought of their self-deception as a sort of demonic presence that needs to be exorcised and isn't naturally a part of them. Yes, it is behaving as if there is agency in it, defending itself against correction, ignoring the warnings that something might be wrong with it, but at least we can claim that it is a subagent, just a broken part in the back rather than the main thing. That will let us sleep soundly at night; that will let us pretend there is something here that deserves to be saved.

Comment by makoyass on Open Thread November 2018 · 2018-11-18T03:27:13.954Z · score: 2 (2 votes) · LW · GW

I think I remember hearing that a brighter working area boosts alertness, but I don't know if it replicated, or if it applies to just monitors (although you should always make sure the monitor is at least as bright as your surroundings, or you risk visual strain).

Comment by makoyass on Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn! · 2018-11-18T02:53:37.171Z · score: 3 (3 votes) · LW · GW

I discovered I was still missing some fundamental attributes that nobody responsible for my education had noticed or figured out how to instill in me.

Name them?

My fav crucial yet untaught skills are

  • basic civic theory: how to structure a working society, how to arrange specializations, how specializations that imbue their practitioners with deep differences in worldview can understand each other well enough to avoid getting into stupid fights with each other. You cannot have a decent society without teaching these skills to everyone in it, but nobody is doing that.

  • noticing people's qualities and investing appropriately

  • eschatology. Knowing our own utility function. Being able to say firm but true things about our sense of beauty, justice, joy, fulfillment etc, that will generalize to new kinds of situations. Crucial for exercising any agency, but I don't know where to begin in training it.

Comment by makoyass on Conversational Cultures: Combat vs Nurture · 2018-11-11T05:48:24.424Z · score: 6 (4 votes) · LW · GW

This is a great example of how people should go about playing competitive games with their friends: always be ready to point out ways they can do better, even if that would let them win against you.

Not so sure about treating conversation as a competitive game though heh

Comment by makoyass on In favor of tabooing the word “values” and using only “priorities” instead · 2018-10-29T03:01:19.719Z · score: 3 (1 votes) · LW · GW
it makes personal and societal “values” seem hard to define/operationalize, incommensurable, uncountable, frequently in conflict with each other and usually in unclear relationships to each other and to those who hold them.

I'm fairly sure that's how human terminal values are. If you wanted to formalize a single human's values, having the utility function be a sum of unaligned modules that actually change over time, in response to the moral apologia the human encounters and the hardships they experience, would get you something pretty believable.

Comment by makoyass on In favor of tabooing the word “values” and using only “priorities” instead · 2018-10-28T23:01:30.466Z · score: 1 (1 votes) · LW · GW
I have limited resources and it costs things to go after what I want and succeed at getting it.

Is always true. Time is a constrained resource. Manpower is a constrained resource.

Comment by MakoYass on [deleted post] 2018-10-28T21:23:48.580Z

The thing I'm most unsure about here is the giving so much attention to the reference class {alive or intelligent}'s relationship with anthropic measure, to the extent of treating that relationship as an anomaly that needs to be explained. If you have one data point about the class of red things, "Golg is Red", and you know nothing about redness aside from that, does it really make sense to start making guesses about what is making Golg Red? Golg has the worst breath, should we then guess that Golg's bad breath is what makes Golg Red? How confident should we be?

It seems notable that the thing we know to have anthropic measure is human. The reasons we associate humanness, intelligence or aliveness with that anthropic measure, though, do not seem entirely legit. It seems conspicuous that the thing we consider to have anthropic measure happens to also be the only thing that can loudly claim to have it.

Comment by MakoYass on [deleted post] 2018-10-28T21:08:41.647Z
That sounds a *lot* like .

It's a genre. I sort of hope we never actually give rise to any simulist religions that people come to earnestly believe in, but we probably will. Most of those religions won't be true. Some of them might be. I don't know.

It does not sound a lot like any existing variant of Panpsychism

Not sure what you mean. Disambiguate "it"? The presented theory (Concentrated Existence) is not something I would call panpsychism. It might be implied by panpsychism. It should still have its own name.

Comment by makoyass on Verbal Zendo · 2018-10-22T04:46:05.174Z · score: 4 (2 votes) · LW · GW

Relatedly there's a nice little singleplayer zendolike for android here

I'd recommend it

One of the things you might like to steal for future iterations of this is its testing process. To demonstrate that you know the rule, before you're given the green light to move on, it quizzes you on a large set of examples and you have to be able to categorize them all correctly.

Comment by makoyass on Verbal Zendo · 2018-10-22T04:43:04.649Z · score: 6 (3 votes) · LW · GW

Here's something we can run in our URL bars to get this now:


javascript:document.getElementById("theInput").addEventListener('change', ()=> test())


Comment by makoyass on UBI for President · 2018-10-20T22:12:42.350Z · score: 1 (1 votes) · LW · GW

Another way this might help employers: There's a possibility that having a social safety net like this will reduce the incentive for a person who has found themself in a bullshit job to defend that bullshit job's existence, which may lead to lower rates of bureaucratic parasitism?

Comment by makoyass on Asymptotic Decision Theory (Improved Writeup) · 2018-09-30T04:33:32.862Z · score: 1 (1 votes) · LW · GW

The link to Tsvi's post seems to be broken

Comment by makoyass on Annihilating aliens & Rare Earth suggest early filter · 2018-09-24T23:15:46.264Z · score: 6 (2 votes) · LW · GW

Mm. I think I oppose that intuition. It's hard to see how there can be much of a distinction between existing at low measure and simply existing less, or being less likely to have occurred, or to have been observed. So, for a garden to be considered successful I would expect its caretakers to at least try to ensure that its occupants have high anthropic measure, and at least some of the time they would succeed.

Incisive question... All I can think of is... human organizations are often a lot more conscious (behaviorally) than any individual pretends to be, and I find that I am an individual rather than an organization. I am immersed in the sensory experience of one human sitting at one terminal, rather than the immense, abstract sensory experience of, say, Wikipedia, or the US intelligence community. It's conceivable that organizations with tightly integrated knowledge-bases and decisionmaking processes do have a lot of anthropic measure, but maybe there just aren't very many of them yet.

I'm trying to imagine speaking to some representative of the state of knowledge of a highly integrated organization, and hearing it explain that its prior for the anthropic measure of organizations is higher than its anthropic measure for individuals (multiplied by the number of individuals), but I don't know what a hive-mind representative would even act like. At what point does it stop saying "we" and start saying "I"? Humans' orgs are more like ant colonies than brains, at this point; there is collective intelligence, but there's no head to talk to.

Comment by makoyass on Annihilating aliens & Rare Earth suggest early filter · 2018-09-23T04:06:21.706Z · score: 3 (1 votes) · LW · GW

Some parts of your anthropic argument don't seem right... and it's led me to a related thought.

It seems likely that an expanding civilization would capture much more anthropic measure than what exists on earth. Billions of times more. So, why should we find ourselves here where the cosmic body of life is still so small? Why weren't we born hundreds of thousands of years later in the heavenly gardens of an alien diaspora?

What if... Siblings Are Dangerous: what if most life-supporting universes have in common a dynamic where technology significantly increases the danger of coexisting with other living things? Creating new life on a grand scale is made unviable; our gardens will tend to evolve competitors who will eventually kill everything around them, until only they are left, or until nothing is left. The theory is that life is at its most numerous when it is too dumb and too weak to pose much of a threat to itself, as we are.

I'd guess that expanding one's own mind into well-controlled slave cortexes would capture similar amounts of new anthropic measure.. but maybe once you go over a certain scale, having those be independent minds, sufficient to observe their own existence, maybe it just isn't possible to keep that many star-sized consciousnesses under control.

(You might ask why anthropic measure should be tied to independently-intelligent minds. I don't know. It just seems to be. I don't make the rules.)

Any late filter would explain it, though.

Comment by makoyass on Book review: Why we sleep · 2018-09-22T23:57:24.211Z · score: 3 (1 votes) · LW · GW
Humans seem to be naturally biphasic: modern hunter-gatherer tribes sleep for 7-8 hours at night, and then nap for 30-60 minutes in the afternoon.

I'm having fun imagining a workplace where this sort of pattern is encouraged. A large part of the difficulty would be dealing with self-destructive work cultures where nobody wants to violate a norm in a way that risks making them look lazy. I'd want to start by having the highest performers try napping after lunch, so that people come to associate it positively with productivity; frame napping as the opposite of lazy, like exercise: it's not work, but it's something that hard workers do. But there's a chance the high performers are exactly the people who wouldn't benefit from napping (there is such a thing as a short-sleeping gene in humans, and a lot of CEOs seem to have it), so starting with them might soil the whole thing.

I remember hearing stories of a lot of workplaces getting nap pods and telling their employees that it is "okay" to nap. I don't think this should be taken seriously. If you don't have enough beds for everyone in the office to sleep, it won't become a norm. It certainly won't become a habit.

I'd want to experiment with assigning a sample of people (or a set of volunteers) to napping every day, that's a design we could take seriously.

Comment by makoyass on The Steampunk Aesthetic · 2018-09-11T09:49:03.865Z · score: 1 (1 votes) · LW · GW
Steampunk fiction often also involves Victorian Era fashion and culture, but I think this is largely superfluous

Hmm. I wonder what essential steampunk costume would look like, then. I'm sure it would have lots of pockets. Maybe the clothes would always look like they had been designed from scratch to accommodate the specializations of the wearer.

Police would probably wear some kind of small shield on their shoulder, suggesting that they're ready to stop/protect someone at any moment, as that is their purpose.

Affluence would be signaled not with high-quality versions of ordinary clothes, but with a special kind of rectangular, intricately patterned leather bag that the rich would carry under-arm. These would be referred to as (and would have evolved from) "wallets", the suggestion being that a copious amount of currency needs to be accessible to them at all times, in case they face a sudden need to buy something expensive, as that is their purpose.

Comment by makoyass on Zetetic explanation · 2018-09-11T07:59:28.195Z · score: 6 (3 votes) · LW · GW

Stories were probably the first information format

Imagine a time before language. The information you get from your environment comes as a series of events happening over time. That's the kind of information you're good at integrating into your active knowledge. Now our blind idiot creator bestows us with language; what kind of information structure is going to allow us to convey information to our conspecifics in a way that they'll be able to digest and internalize? Just the same: a description of a series of events, spoken over time, which they may now experience as if those events were happening again in front of them.

And this kind of information is very easy for us to produce. We don't need to be able to assemble any complex argument structures; we just need to dump words relating to the nouns and verbs we saw, in the order that they occurred. Stir in an instinct to dump episodic memories in front of people who weren't present in those memories, and there: they will listen, they'll get a lot out of it, and now we have the first spoken sentences.

In light of this, if it turns out storytelling was not the first kind of extended speech, I will be shocked.

The story of bread is not the most succinct way to encode the information about bread that a person most needs; an idea is only useful if it helps a person anticipate the futures of the things that matter to them, in consequence of their available actions. Our past is not our future: we can't affect the past, and a chunk of the past will not always tell us much about the future. However, a story, a relaying of events from the past, is extremely digestible. There is no way to arrange information that an animal would find easier to make sense of.

If you can find a way to explain what happened, in chronological order, that led the ingredients of bread to become abundant, then made it easy for us to make bread, and then ensured that we would be able to digest it, you've explained why and how bread is important, in the form of a story. It will be not just useful information; it will be very easy for us to integrate.

And that, it seems, is what a zetetic explanation does? This... explaining by selecting parts of the history that can be assembled into a complete proof of the thing's importance... I think it does deserve a name.

The end of public transportation. The future of public transportation.

2018-02-09T21:51:16.080Z · score: 7 (7 votes)

Principia Compat. The potential Importance of Multiverse Theory

2016-02-02T04:22:06.876Z · score: 0 (14 votes)