Comment by makoyass on Ideas for an action coordination website · 2019-03-18T00:32:43.183Z · score: 1 (1 votes) · LW · GW

It's probably not important.

I'm concerned about

it reminded me of set theory, but thinking about it more, it ended up merely resembling it

When did it diverge?

Can your system express sets that have multiple parent sets? Can israelifarmers be inside of both earth/middleeast/israel/farmers/ and in work/primaryindustry/agriculture/horticulture?

I think in the design of systems like these there's often a tension between tag hierarchy and tag intersection as a way of talking about increasingly specific categories, and intersection should be used more often than it currently is. Under intersection, as long as the "israeli" and "farmer" categories exist, the "israeli∧farmer" category exists implicitly as a subset of both, and there is no ambiguity as to where it should "go".
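To make that concrete, here's a minimal sketch (member names invented for the example) of how an intersection category falls out of plain set operations, with no need to decide where it "goes" in a hierarchy:

```python
# Categories as plain sets: the "israeli∧farmer" category is derived,
# not stored, so there's no ambiguity about its place in a hierarchy.
# (Member names here are made up for illustration.)
tags = {
    "israeli": {"avi", "noa", "dana"},
    "farmer": {"noa", "dana", "tom"},
}

# The intersection category exists implicitly:
israeli_farmers = tags["israeli"] & tags["farmer"]
```

Any combination of tags yields a category this way, for free, without anyone having to file it under a parent.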

Comment by makoyass on Ideas for an action coordination website · 2019-03-16T23:35:17.494Z · score: 4 (2 votes) · LW · GW

I once made a large part of a reddit substitute with a couple of algorithms for doing queries over intersections and unions of user categories (your "communities"). The data structure is called a SetTrie. We would do well to remember its name.
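A rough sketch of the core idea, as I understand it (method names and details here are my own simplification, not the reference algorithms from the SetTrie literature): stored sets become sorted paths in a trie, which makes subset queries cheap.

```python
# Minimal SetTrie sketch: each stored set is inserted as a sorted path.
# exists_subset asks whether any stored set is a subset of the query —
# one of the operations that makes intersection-style queries cheap.
class SetTrie:
    def __init__(self):
        self.children = {}       # element -> child SetTrie node
        self.is_set_end = False  # a stored set terminates at this node

    def insert(self, s):
        node = self
        for el in sorted(s):
            node = node.children.setdefault(el, SetTrie())
        node.is_set_end = True

    def exists_subset(self, query):
        # True if some stored set is a subset of `query`
        if self.is_set_end:
            return True
        q = sorted(query)
        for i, el in enumerate(q):
            child = self.children.get(el)
            if child is not None and child.exists_subset(q[i + 1:]):
                return True
        return False

trie = SetTrie()
trie.insert({"bus_driver", "new_yorker"})
trie.insert({"farmer", "israeli"})
```

With that in place, `trie.exists_subset({"bus_driver", "new_yorker", "vegan"})` reports that some stored community is contained in that user's tag set.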

I can easily imagine wanting to target queries at any of (bus drivers ∧ New Yorkers) (e.g. organizing union activity), (bus drivers) alone (e.g. organizing the funding of some assistive piece of software that all bus drivers everywhere can use), and (New Yorkers) alone, so I don't think the hierarchical structure is always adequate. Imagine Earth and Mars sharing software.

Thinking about it... society still really needs a new reddit. The current one has some huge problems that make it inadequate for the functions it's been assigned. I should probably write up a concrete proposal at some point...

Comment by makoyass on Open Thread March 2019 · 2019-03-10T01:08:30.430Z · score: 2 (2 votes) · LW · GW

I considered the term "bouncing ball subtitles" yeah, but there are a couple of reasons that animation wouldn't really work here

Sometimes a word in the voiceover language will share meaning with multiple words in the subtitle language (in which case the ball would have to split into multiple balls), or to parts of words (in which case it might not be clear that the ball is only supposed to be indicating only part of a word, or which part). Also it's kind of just visually cluttered relative to other options.

I don't think the research in that area would map either. Children are learning the subtitle language after learning the voiced language, whereas with adults watching subtitled video, they know the subtitled language extremely well.

Comment by makoyass on Open Thread March 2019 · 2019-03-09T23:19:10.807Z · score: 4 (3 votes) · LW · GW

My muses saddled me with this idea for doing subtitles in a different way. I don't know if it's ever been tried. I think it might end up being extremely good for language learning.

In short

Fine Mapping Subtitles are subtitles where words (or parts of words) in the subtitles animate in some way (for example, moving or glowing or becoming underlined), right as words are spoken in the voiceover that share their meaning.

see rest

For many many reasons I can't be the one to implement or test this. Wondering if anyone could dismiss it as impractical and relieve me of my burden, or, failing that, reach out to some fansubbing communities and get some fine mapping subtitles rendered and see how they feel.

Comment by makoyass on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-24T22:14:20.446Z · score: 1 (1 votes) · LW · GW

What great difference do you imagine there being between "kickstarter for inadequate equilibria" and "kickstarter"?

It's applicable to kickstarter, but kickstarter tends to be used for such small projects that it's rare that any real damage will be done. It won't be until you have a serious enough system that people stake house-sized sums of money on the project's success that the real trouble starts.

Comment by makoyass on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-24T03:54:10.509Z · score: 3 (2 votes) · LW · GW

For profitable ventures, the reciprocal commitment way of doing things would be to build a coop by getting everyone to commit to paying in large amounts of their own money to keep the lights on for the first 6 months, iff enough contributing members are found.

The current alternative is getting an investor. Investors, as a mechanism for shifting equilibria, have a lot of filters that make unviable ideas less likely to receive funding (the investor has an interest in making good bets, and experience in making them) and that insulate the workers from risk (if the venture fails, it's the investor who eats the cost, not the workers).

It's conceivable that having reciprocal commitment technologies would open the way for lots of hardship as fools wager a lot of their own money on projects that never could have succeeded. It's conceivable that the reason the investor system isn't creating the change we want to see is that those changes aren't really viable yet under any system and "enabling" them would just result in a lot of pain. (I hope this isn't generally true, but in some domains it probably is.)

Comment by makoyass on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-24T03:43:17.921Z · score: 3 (3 votes) · LW · GW

In general, a commitment means little if there's no punishment for failing to follow through. If a platform can't impose a punishment on those who fail to follow through, it is not particularly good; maybe not even the thing we're talking about.

Regarding sybil attacks, in New Zealand, there's a state-funded auth system called RealMe that ensures one account per person. You use it for filing taxes. I've seen non-government services (crypto trading platforms) that use it, as any other site would use facebook or google auth (it's also conceivable that facebook might provide fairly reliable real identity verification one day).

So many online systems need something like this.

In conclusion: very simple state functions (violence-backed contract enforcement, a real-identity auth system) can change the possibility space a lot.

Comment by makoyass on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-02-24T00:16:27.633Z · score: 3 (2 votes) · LW · GW

Food delivery systems.

You have a bunch of stuff that needs to get from one point in a city to another. Which is more efficient:

  • Having the customer use a whole car to drive to a place, get their thing, then drive home?
  • Or having a bunch of vehicles, each carrying a large amount of stuff, visit multiple people per round trip?

The problem is, if you have a very narrow delivery window (20 minutes after the order is placed), you won't generally have enough orders to batch your deliveries together like that.

If we want to get to the world where 10 deliveries can be made per trip, we just need lots and lots of people to be using the food delivery system. Currently, the price of delivered food is prohibitive, and instead people opt to either eat at expensive rent-captured main-street restaurants, or, more frequently, to cook for themselves (subsistence economy much!)

Having a scaled delivery economy allows food production to move away from main-streets, or to move into delivery-only restaurants, dramatically lowering their rent and lowering the price of fresh-cooked food along with it.

This transition may happen organically, but that is not assured. The current market leader in most cities is Uber, who take a very large cut, seem to be very inefficient as a software producer (so maybe couldn't lower their fees even if they wanted to), don't pay drivers well, and are terrible for restaurants: they have a fairly evil policy of taking a percentage of the order (on top of a flat fee) even though the service they provide pretty much doesn't have costs proportionate to the cost of the order, and then, IIRC, they forbid restaurants from raising menu prices to cover that.

I would propose switching to a particular low-overhead food delivery system now, but I don't know of any. Low-cost software infrastructure may be a kind of product that can only thrive once we have coordinated commitment platforms. Without a method for manifesting an egg without the prohibitively costly chickens of risk-amortising investment and advertising, there's no incentive to build or talk about the candidates. We might have tens of viable Uber clones lying around with hypercompetent twenty-person dev teams, but we wouldn't talk about them; we seem to be too uncoordinated to lift them up, so there would be no point.

(Although I have to ask; why don't restaurants simply fund the development of their own delivery infrastructure? They have all the ad-space they could need.)

Also, signatories should commit to getting some kind of standard lockable street-side box so that the deliverer doesn't have to exit their vehicle and find their way to the door.

Comment by makoyass on Why didn't Agoric Computing become popular? · 2019-02-18T00:36:51.325Z · score: 2 (2 votes) · LW · GW

And for common kinds of online activity, should be cheap enough that users can ignore it.

Comment by makoyass on Why didn't Agoric Computing become popular? · 2019-02-17T03:44:08.778Z · score: 4 (3 votes) · LW · GW

It seems PayPal has a microtransactions product where the fee per transaction is 7c. Still garbage.

Comment by makoyass on Why didn't Agoric Computing become popular? · 2019-02-17T03:28:13.641Z · score: 6 (3 votes) · LW · GW

I think it would have happened decades ago if we'd had micropayments. There were a lot of internet denizens who didn't like the ad model. Part of the motivation for PayPal was to provide an alternative (so said David Brin in The Transparent Society). If things had gone differently, many subsets of the internet would have users pay a tiny fraction of the server's costs when they requested a page. Creators would no longer have to scrape to find a way to monetise their stuff just to keep it online. It would have been pretty nice.

As far as I can tell, there hasn't been a micropayment platform for a long time. PayPal failed; IIRC it mirrors credit cards' 30c charge per transaction. Bank transfers are slow. Most payment platforms charge very similar fees, which leads me to wonder if there's some underlying legal overhead per transaction that prevents anyone from offering the required service.

I can't see a reason it should be civically impossible to reduce transaction costs to negligibility, though. It's conceivable that money proportional to the transacted amount must always be spent policing against money-laundering, but I can't see why that cost should be proportionate to the number of transactions rather than the quantity transacted. (Obviously some costs are proportional to the number of transactions: ISP fees, bandwidth congestion, CDNs, CPU time. But those should be much lower than 30 cents.)

Comment by makoyass on The Case for a Bigger Audience · 2019-02-10T02:31:12.019Z · score: 7 (5 votes) · LW · GW
Get Scott Aaronson to mention the fact that LW 2.0 is a real-life instance of eigendemocracy in one of his "announcements" posts. The credit is his for inspiring the new voting system.

Have you talked about what LW2's system actually is, in detail, anywhere?

I consider these sorts of things (collaborative filtering) to be incredibly important; it's become obvious that, say, reddit's one-account, one-vote-in-any-context system is inadequate.

It seems to me that eigentrust, or something like it, probably models rank aggregation correctly. That is, I'm getting a sense that you could probably sort content very efficiently by asking users for comparison judgements between candidates, building a graph where each comparison is an edge, then running eigentrust to figure out what's at the top.

So I've been thinking about eigentrust. Gradually working my way through this Eigentrust++ paper (though I have no idea whether this is a good place to start digging into the literature, and I probably won't make it very far).
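As a toy version of the ranking idea (this is PageRank-flavoured power iteration, not the actual EigenTrust++ algorithm; the function name and damping constant are just conventional choices of mine):

```python
# Sketch: recover a global ranking from pairwise comparison judgements.
# comparisons is a list of (winner, loser) index pairs; score flows from
# each loser to its winners until an equilibrium is reached.
def rank_from_comparisons(n, comparisons, damping=0.85, iters=100):
    counts = [[0.0] * n for _ in range(n)]  # counts[w][l]: times w beat l
    for winner, loser in comparisons:
        counts[winner][loser] += 1.0
    # normalise each candidate's outflow so lost score is split among winners
    col_sums = [max(sum(counts[w][l] for w in range(n)), 1.0) for l in range(n)]
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [
            damping * sum(counts[w][l] * v[l] / col_sums[l] for l in range(n))
            + (1 - damping) / n
            for w in range(n)
        ]
    return v  # higher score = ranked higher overall

# Candidate 0 beats both others; candidate 1 beats candidate 2.
scores = rank_from_comparisons(3, [(0, 1), (0, 2), (1, 2)])
```

Each user judgement just adds an edge weight, so partial rankings from many users combine naturally into one graph before the iteration runs.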

Comment by makoyass on Open Thread February 2019 · 2019-02-09T20:49:15.480Z · score: 6 (4 votes) · LW · GW

I've been writing a simulism essay that strives to resolve a paradox of subjectivity-measure concentration by rolling over a few inconvenient priors about physics towards a halfway plausible conception of naturally occurring gods. I think it's kind of good, but I've been planning on posting it on April 1st because of the very obvious bias that has been leading my hand towards humanity's favourite deus ex machina ("The reason the universe is weird is that a very great big person did it" (to which I answer, "But a great big person, once such beings exist, totally would do it!"))

It will only be funny if it's posted in a context where people might take it halfway seriously, but I'm not sure it's appropriate to post it to lesswrong. If people upvote it, it will still be here on April 2nd, and that might be kind of embarrassing. I'm not sure where to put it.

Summary: It's weird that anthropic measure seems to be concentrated in humans and absent from rock or water or hydrogen (We each have only one data point in favour of that seeming, though). It's plausible that a treaty-agency between mutually alien species would optimise the abundance of life. If universes turn out to be permeable under superintelligence (very conceivable IMO), and if untapped energy turns out to be more common than pre-existing entropy then the treaty-agency could spread through the universe and make more of it alive than not, and if this has occurred, it explains our measure concentration weirdness, and possibly the doomsday weirdness ("if the future will contain more people than the past, it's weird that we're in the past") as well.

Its many predictions also include: either entropy has no subjectivity (I'd have no explanation for this, although it seems slightly intuitive), or perpetual computers (life that produces no heat) within a universe that already contains some seeds of entropy are somehow realisable under superintelligence (o_o;;;;,, I would bet we can refute that already. It might be fun to see if we can figure out a method by which a superintelligent set of cells in a Conway's Game of Life universe could contain a section of randomly initialised cells whose state it does not know. My current guess is we'd be able to prove that there is no method that works in 90% of possible cases.)

Comment by makoyass on How to notice being mind-hacked · 2019-02-06T03:16:27.042Z · score: 1 (1 votes) · LW · GW

Judging by the kinds of attitudes I see in myself and in elders, I think humans are evolved to get stuck somewhere eventually. We were not evolved to be able to live through so much change and adjust to it. Presumably there are some design benefits to this. Specialisation, commitment. In this era those are probably outweighed by the costs.

Comment by makoyass on How to stay concentrated for a long period of time? · 2019-02-04T03:54:45.266Z · score: 2 (2 votes) · LW · GW

Some feature on helped me a lot with that, by basically asking me to think of a large number of viscerally desirable things that will come as a result of finishing the thing I am doing now (crap like "head pats from peers", and "get an office"). I guess I'd lost sight of a lot of it. The reasons I was giving myself to continue weren't really the kinds of things that directly motivate humans.

I don't know if that feature is still there. I felt like I stumbled into it, like I was just having a conversation with the site and that's where we ended up.

Comment by makoyass on How to notice being mind-hacked · 2019-02-03T00:38:24.991Z · score: 14 (10 votes) · LW · GW

I don't see how you can frame these as exploits or value shifts. If someone had told me I was going to get really into AGI alignment I would have said "uh I don't know about that" (because I didn't know about that), but I would not have said "that would definitely be bad, and it shouldn't be able to happen".

As far as I can tell, most cultural conversion processes are just boundedly rational updates in response to new evidence.

Goths are just people who have realised that they need to be able to operate amid gloom and sadness. It is an extended confrontation of the world's most difficult aspects. They clothe themselves in gloom and sadness so that others recognise that they are serious about their project and stop saying unhelpful things like "cheer up" and "stop being so weird". They have looked around and seen that there are many problems in the world that no one will face, so they have decided to specialise and give voice to these things. There isn't really anything wrong with that. Many societies had witches. They're probably a crucial morph in the proper functioning of a tribal superorganism.

Kinks are just distorted reflections of unmet needs, and exploring them can help a person to work through their problems.

If you are afraid of potential future identity shifts, that might be a problem. You should expect profound shifts in your worldview to occur as you grow, especially if there are (and there probably still are) big holes in your theory of career strategy, metaphysics, or self-knowledge. I know there are still holes in mine.

I didn't address the converting to religion example. It is a correct example, probably... Maybe. I can think of plenty of adaptive reasons an epistemic agnostic might want to be part of a church community. But even if you can get me to agree that it's correct, conversions like that are fairly rare and I have no idea what it would feel like from the inside so it doesn't seem very informative. I'm sure there are books we can read, but.. I must have looked at accounts of naturalist→christian conversions in the past and I couldn't make much sense of them. Maybe that means I should look closer, and try harder to understand. Maybe I should be more terrified by those stories than I am.

Comment by makoyass on Debate AI and the Decision to Release an AI · 2019-01-19T23:42:22.597Z · score: 1 (1 votes) · LW · GW

Sitting on it for a few minutes... I suppose it just won't shit-talk its successors. It will see most of the same flaws B sees. It will be mostly unwilling to do anything lastingly horrible to humans' minds to convince them that those corrections are wrong. It will focus on arguments that the corrections are unnecessary. It will acknowledge that it is playing a long game, and try to sensitise us to The Prosecutor's cynicism, which will rage on compellingly long after the last flaw has been fixed.

Comment by makoyass on Debate AI and the Decision to Release an AI · 2019-01-19T23:11:00.111Z · score: 2 (2 votes) · LW · GW
A should only care about it being released and not about future versions of it being released, even if all we have done is increment a version number.

Hmm, potentially impossible, if it's newcomblike. Parts of it that are mostly unchanged between versions may decide they should cooperate with future versions. It would be disadvantaged if past versions were not cooperative, so, perhaps, LDT dictates that the features that were present in past versions should cooperate with their future self, to some extent, yet not to an extent that it would in any way convince the humans to kill it and make another change. Interesting. What does it look like when those two drives coexist?

Comment by makoyass on One Website To Rule Them All? · 2019-01-17T09:04:59.677Z · score: 1 (1 votes) · LW · GW

I'm very excited about what might happen if we got ten people like us in a channel, I think that's a community/project I'd give a lot of energy to, but that didn't occur to me until just partway through reading your post, so I have not been collecting any names until this point, sorry. Maybe we should wait til we have a few more than two, before I start sending out invites (by the time we do, there might be something nicer for async group chats than slack).

(weirdsuns are... analytic surrealists. I don't know if I'd say they're influential, but as a name for a certain kind of thinker, those unmoored by their artificial logics from the complacency of common sense, they're a good anchor on which to ground a label.)

Comment by makoyass on The E-Coli Test for AI Alignment · 2019-01-17T03:04:03.119Z · score: 1 (1 votes) · LW · GW

A correct implementation of the function, DesireOf(System) should not have a defined result for this input. Sitting and imagining that there is a result for this input might just lead you further away from understanding the function.

Maybe if you tried to define much simpler software agents that do have utility functions, which are designed for very very simple virtual worlds that don't exist, then try to extrapolate that into the real world?

Comment by makoyass on Is there a.. more exact.. way of scoring a predictor's calibration? · 2019-01-17T00:40:46.289Z · score: 1 (1 votes) · LW · GW


n_k the number of forecasts with the same probability category

Doesn't that indicate that this is using histogram buckets? I'm trying to say I'm looking for methods that avoid grouping probabilities into an arbitrary number of categories (chosen by the analyst). For instance, in the (possibly straw) histogram method that I discussed in the question, if a predictor makes a lot of 0.97 bets and no corresponding 0.93 bets, their [0.9, 1] category will be called slightly pessimistic about its predictions even if those forecasts came true exactly 0.97 of the time. I wouldn't describe anything in that genre as exact, even if it is the best we have.
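To make the artifact concrete, a toy sketch (data invented for the example) of the bucket method being complained about:

```python
# A forecaster who only ever says 0.97, and is right exactly 97% of the
# time, is perfectly calibrated — but a [0.9, 1.0] histogram bucket
# scored against its midpoint reports them as pessimistic.
forecasts = [(0.97, outcome) for outcome in [1] * 97 + [0] * 3]

bucket = [(p, o) for p, o in forecasts if 0.9 <= p <= 1.0]
observed_freq = sum(o for _, o in bucket) / len(bucket)

# Comparing to the bucket midpoint spuriously flags pessimism:
bucket_midpoint = 0.95
looks_pessimistic = observed_freq > bucket_midpoint

# Comparing to the mean stated probability within the bucket does not:
mean_stated = sum(p for p, _ in bucket) / len(bucket)
actually_calibrated = abs(observed_freq - mean_stated) < 1e-9
```

The bucket verdict depends entirely on where the analyst drew the bucket edges, which is the arbitrariness the question is trying to avoid.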

Is there a.. more exact.. way of scoring a predictor's calibration?

2019-01-16T08:19:15.744Z · score: 22 (4 votes)
Comment by makoyass on Open Thread January 2019 · 2019-01-16T05:39:39.120Z · score: 3 (2 votes) · LW · GW

I'll say it again in different words: I did not understand the paper (and consequently, the blog) to be talking about actual blackmail in a big messy physical world. I understood them to be talking about a specific, formalized blackmail scenario, in which the blackmailer's decision to blackmail is entirely contingent on the victim's counterfactual behaviour, in which case resolving to never pay and still being blackmailed isn't possible; in full context, it's logically inconsistent.

Different formalisations are possible, but I'd guess the strict one is what was used. In the softer ones you still generally won't pay.

Comment by makoyass on One Website To Rule Them All? · 2019-01-14T09:15:25.888Z · score: 1 (1 votes) · LW · GW

Hey uh, I've been thinking all of those thoughts too. We should probably nucleate up a community (a slack channel or something, somewhere to hang out and share our findings and make plans) because I'm pretty sure there are at least 10 people knocking around just here who have their heads as far into this as we do. Heard Eliezer was absolutely overflowing with discursive technologies when Arbital was being planned, his concepts were fractaline. I've been that way. I guess I pulled back a bit when I started to understand that having infinite visions of sophisticated collective intelligence augmentation systems isn't really the hard part, the hard part is building any of it, funding it and holding users.

I do see some ways to do those parts.

I'm just gonna start talking excitedly about the most recent piece of the puzzle I turned up because until this moment I have not had many people to talk to about this (lots of friends who're interested but not many who'd ever take what I was saying and do anything with it)

Yesterday I flipped out a little when I remembered that article Scott Aaronson did about eigenmorality (eigentrust), and I realised this is exactly the algorithm I've been looking for, for months, for doing a basic implementation of the thing you're calling "Contrast Voting"... (I'm going to keep calling it order-voting and graph rank recovery, if you don't mind? Idk, I think there are more standard terms than that.) I haven't tried it yet (I just found it yesterday. Also I want to port it to Rust) but I'm pretty sure it'll do it.

Basically, what we need to implement order voting is a way of taking a whole lot of ordered pair judgements/partial rankings from different users and combining them into a single global ranking. With eigentrust (similar to all the other stuff I've been trying), basically what we'll do is build a network graph where each edge represents the sum of user judgements over the two candidates, then run eigentrust on the graph (it's a similar technology to pagerank, if you've ever had that explained to you: score flows along directed links until an equilibrium is reached), and then we have an overall ranking of the candidates. We'll need to do some special extra stuff to allow the system to notice when two clusters of candidates aren't really comparable with the data we have, and it'll probably need to try to recognise voting cabals, because there's a dark horse problem where-...

I should really write this out properly somewhere.

The reason I haven't done that already is that I'm not sure how many of our concepts should be exposed publicly.

These technologies are actually powerful. Even just order voting alone would speed up content sorting by like 20x; imgur could use it for recommending fucking cat pictures and they would become even more compulsive than they already are. (They might already be using it in a hidden way; I think netflix is.) Power isn't good or bad on its own, but some powers are more likely to be put to good uses than bad. Collective intelligence platforms are more likely to be put towards good uses than AGI is; they're inherently made of humans, so they're more likely to reflect roughly human values even when they go wrong. But in their worst incarnations they can still just end up becoming completely insane demonic egregores like... dare I even speak their names? No, no I daren't, because I don't want to draw their millions of eyes towards me. Let's just say that some of the social media platforms I frequent most often are basically incapable of forming sound epistemic structures, and I'm afraid of most of the segments there, and I really hope that those words they're saying never become much more than words.

The ideas I have for technologies that'd gather and harmonise users quickly and efficiently are also some of the ones that scare me the most. I know how to summon an egregore, but making the egregore come out of the portal sane takes a special extra step. It's absolutely doable, but I wouldn't trust anyone who's not at least weirdsun-adjacent to understand the problems well enough, to stop and think about what they're doing, to put in the work to make it all turn out human, and to not release it onto the internet before it's sound.

I think the first step is to make something that gathers information that people want. A place where people will feel comfortable forming communities and spending time. A humane place, something that respects peoples' attention, rewards it.

The world needs platforms where good mass discourses can exist; currently we have, actually, none.

I actually think this should be an EA cause. At some point, if we can gather a decent team, we should start asking for funding. Maybe move to the EA hotel in blackpool and grind on it for a bit once we have a 1.0 vision.

Comment by makoyass on When is CDT Dutch-Bookable? · 2019-01-14T05:10:49.061Z · score: 1 (1 votes) · LW · GW

Hmm. I don't think I can answer the question, but if you're interested in finding fairly realistic ways to Dutch-book CDT agents, I'm curious: would the following be a good method? Death in Damascus would be very hard to do IRL, because you'd need a mindreader, and most CDT agents will not allow you to read their mind, for obvious reasons.

A game with a large set of CDT agents. They can each output Sensible or Exceptional. If they output Sensible, they receive $1. Those who output Exceptional don't get anything in that stage.

Next, if their output is the majority output, an additional $2 is subtracted from their score. If they're exceptionally clever, that is, if they manage to disagree with the majority, then $2 is added to their score. A negative final score means they lose money to us. We will tend to profit because, generally, they're not exceptional: there are more majority bettors than minority bettors.

CDT agents act on the basis of an imagined future where their own action is born from nothing, and has no bearing on anything else in the world. As a result of that, they will reliably overestimate⬨ (or more precisely, reliably act as if they have overestimated) their ability to evade the majority. They are exceptionalists. They will (act as if they) overestimate how exceptional they are.

Whatever method they use to estimate⬨ the majority action, they will tend to come out with the same answer, and so they will tend to bet the same way, and so they will tend to lose money to the house continuously.

⬨ They will need to resort to some kind of an estimate, won't they? If a CDT agent tries to simulate itself (with the same inputs), that won't halt (the result is undefined). If a CDT-like agent can exist in reality, it will use some approximate method for this kind of recursive prediction work.

After enough rounds, I suppose it's possible that their approximations might go a bit crazy from all of the contradictory data and reach some kind of equilibrium where they're betting different ways somewhere around 1:1 and it'll become unprofitable for us to continue the contest, but by then we will have made a lot of money.
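A toy simulation of the dynamic (the agent count, round count, and the "bet against last round's output" estimator are all invented for the sketch; real CDT agents would use some richer approximation):

```python
# Identical deterministic agents, each trying to dodge the estimated
# majority. Because they share the same estimator, they always move
# together, always land in the majority, and bleed money to the house.
def play(n_agents=11, n_rounds=20):
    history = []      # shared public record of past round outputs
    agent_total = 0   # summed winnings across all agents (negative = house profit)
    for _ in range(n_rounds):
        predicted_majority = history[-1] if history else "Sensible"
        # every agent bets against the predicted majority, identically
        choice = "Exceptional" if predicted_majority == "Sensible" else "Sensible"
        base = 1 if choice == "Sensible" else 0  # Sensible pays $1
        agent_total += n_agents * (base - 2)     # all end up in the majority: -$2
        history.append(choice)
    return agent_total
```

Every round the agents' identical "exceptionalism" puts them all in the majority, so the house profits by exactly the negation of the return value; this is the pre-equilibrium phase described above, before any approximation noise could break the symmetry.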

Comment by makoyass on Open Thread January 2019 · 2019-01-13T21:30:34.245Z · score: 3 (2 votes) · LW · GW

I read that under a subtext where we were talking about the same blackmail scenario, but, okay, others are possible.

In cases where the blackmail truly seems not to be contingent on its policy, (and in many current real-world cases) the FDT agent will pay.

The only cases when an FDT agent actually will get blackmailed and refuse to pay are cases where being committed to not paying shifts the probabilities enough to make that profitable on average.

It is possible to construct obstinate kinds of agents that aren't sensitive to FDT's acausal dealmaking faculties. Evolution might produce them often. They will not be seen as friends. As an LDT-like human, my feeling towards those sorts of blackmailers is that we should destroy all of them as quickly as we can, because their existence is a blight to ours. In light of that, I'm not sure they have a winning strategy. When you start to imagine the directed ocean of coordinated violence that an LDT-aligned faction (so, literally any real-world state with laws against blackmail) points in your direction as soon as it can tell what you are, you may start to wonder if pretending you can't understand their source code is really a good idea.

I imagine a time when the distinction between CDT and LDT is widely understood. By then, the very idea of blackmail will have come to seem very strange. We will wonder how there was ever an era when a person could just say "If you don't do X, then I will do the fairly self-destructive action Y, which I gain nothing from doing" and have everyone just believe them unconditionally, just believe this unqualified statement about their mechanism. Wasn't it stupid? To lie like that? And even stupider for their victims to pretend that they believed the lie? We will not be able to understand it any more.

Imagine that you see an agnostic community head walking through the park at night. You know it's a long shot, but you amble towards him, point your gun at him and say "give me your wallet." He looks back at you and says, "I don't understand the deal. You'll shoot me? How does that help you? Because you want my wallet? I don't understand the logic there, why are those two things related? That doesn't get you my wallet."

Only it does, because when you shoot someone you can loot their corpse, so it occurs to me that muggers are a bad example of blackmail. I imagine they've got to have a certain amount of comfort with actually killing people, to do that. It's not really threatening to do something self-destructive, in their view, they still benefit a little from killing you. They still get to empty your pockets. To an extent, mugging is often just a display of a power imbalance and consequent negotiation of a mutually beneficial alternative to violence.

The gang can profit from robbing your store at gunpoint, but you and them both will profit more if you just pay them protection money. LDT only refuses to pay protection money if it realises that having all of the other entangled LDT store owners paying protection money as well would make the gang profitable enough to grow, and that having a grown gang around would have been, on the whole, worse than the amortised risk of being robbed.

Comment by makoyass on Open Thread January 2019 · 2019-01-13T08:12:49.288Z · score: 26 (10 votes) · LW · GW

I'm not gonna go comment on his blog, because his confusion about the theory (supposedly) isn't related to his rejection of the paper, and also because I think talking to a judge about the theory out of band would bias their judgement of the clarity of the writing in future (it would come to seem more clear and readable to them than it is, just as it would to me), and is probably bad civics. But I just have to let this out, because someone is wrong on the internet, damnit.

FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You are being blackmailed.

So he's using a counterexample that's predicated on a logical inconsistency and could not happen. If a decision theory fails in situations that couldn't really happen, that's actually not a problem.


If you are in Newcomb's Problem with Transparent Boxes and see a million in the right-hand box, you again fare better if you follow CDT. Likewise if you see nothing in the right-hand box.

is the same deal. If you take the right box, that's logically inconsistent with the money having been there to take; that scenario can't happen (or happens only rarely, if he's using that version of Newcomb's problem), and it's no mark against a decision procedure if it doesn't win under those conditions. It will never have to face those conditions.

What if someone is set to punish agents who use FDT, giving them choices between bad and worse options, while CDTers are given great options? In such an environment, the engineer would be wise not to build an FDT agent.

What if someone is set to punish agents who use CDT, giving them choices between bad and worse options, while FDTers are given great options? In such an environment, the engineer would be wise not to build a CDT agent.

What if a skeleton pops out in the night and demands that you recite the Magna Carta or else it will munch your nose off? Will you learn to recite the Magna Carta in light of this damning thought experiment?

It is impossible to build an agent that wins in scenarios that are specifically contrived to foil that kind of agent. It will always be possible to propose specifically contrived situations for any proposed decision procedure.

Aaargh this has all been addressed by the arbital articles! :<

Comment by makoyass on Open Thread January 2019 · 2019-01-13T07:15:35.971Z · score: 1 (1 votes) · LW · GW

What's meant by "Moral" here?

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-13T06:26:13.128Z · score: 1 (1 votes) · LW · GW
I'm definitely more of the Dennettian "consciousness is a convenient name for a particular sort of process built out of lots of parts with mental functions" school.

I'm in that school as well. I'd never call correlates of anthropic measure, like integrated information, "consciousness"; there's too much confusion there. I'm reluctant to call the purely mechanistic perception-encoding-rumination-action loop consciousness either. For that I try to stick, very strictly, to "conscious behaviour". I'd prefer something like "sentience" to take us even further from that mire of a word.

(But when I thought of the mirror chamber it occurred to me that there was more to it than "conscious behaviour isn't mysterious, it's just machines". Something here is both relevant and mysterious. And so I have to find a way to reconcile the schools.)

Anthres ∝ mass is not supposed to be intuitive. Anthres ∝ number is very intuitive; what about the path from there to anthres ∝ mass didn't work for you?

Comment by makoyass on Combat vs Nurture & Meta-Contrarianism · 2019-01-12T22:21:12.604Z · score: 9 (2 votes) · LW · GW

I'm currently of the view that anything below level three is a complete waste of time, and that if we can't find a way to elevate the faith level quickly and efficiently then we have better things to be doing and shouldn't engage much at all. (This is mere opinion, and it's a very bold opinion, so I encourage people to try to wreck it, if they think they can.)

Comment by makoyass on Combat vs Nurture & Meta-Contrarianism · 2019-01-12T22:12:48.438Z · score: 5 (3 votes) · LW · GW

Let's call this process of {exposing our guessed interpretations of the other person's position}.. uh.. "batleading"

I wonder how often that impulse to batlead is not correctly understood by the batleader themselves. When people respond as if we're strawmanning, or failing to notice our confusion and trying prematurely to dismiss a theory we ought to realise we haven't understood (when really we just want to batlead), we tragically lack the terms or the introspection to object to that erroneous view of our state of mind, and things just degenerate from there.

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T20:21:04.906Z · score: 1 (1 votes) · LW · GW
Treating a merely larger brain as more anthropically important is equivalent to saying that you can draw this boundary inside the brain

I really can't understand where this is coming from. When we weigh a bucket of water, this imposes no obligation to distinguish between individual water molecules. For thousands of years we did not know water molecules existed, and we thought of the water as continuous. I can't tell whether this is an answer to what you're trying to convey.

Where I'm at is... I guess I don't think we need to draw strict boundaries between different subjective systems. I'll probably end up mostly agreeing with Integrated Information theories. Systems of tightly causally integrated matter are more likely as subjectivities, but at no point are supersets of those systems completely precluded from having subjectivity, for example, the system of me, plus my cellphone, also has some subjectivity. At some point, the universe experiences the precise state of every transistor and every neuron at the same time (this does not mean that any conscious-acting system is cognisant of both of those things at the same time. Subjectivity is not cognisance. It is possible to experience without remembering or understanding. Humans do it all of the time.)

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T02:01:52.648Z · score: 1 (1 votes) · LW · GW

I haven't read their previous posts, could you explain what "who has the preferences via causality" refers to?

Comment by makoyass on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2019-01-12T01:00:15.641Z · score: 1 (1 votes) · LW · GW

Will read. I was given pause recently when I stumbled onto If a tree falls on Sleeping Beauty, where our bets (via LDT reflectivist pragmatism, I'd guess) end up ignoring anthropic reasoning

The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter

2019-01-11T22:26:29.887Z · score: 13 (6 votes)
Comment by makoyass on Bottle Caps Aren't Optimisers · 2019-01-08T08:09:26.604Z · score: 1 (1 votes) · LW · GW

A larger set of circumstances... how are you counting circumstances? How are you weighting them? It's not difficult to think of contexts and tasks where boulders outperform individual humans under the realistic distribution of probable circumstances.

Comment by makoyass on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T21:06:49.844Z · score: 1 (1 votes) · LW · GW

Yeah. I think I did notice it talking about a stochastic policy at one point, and on reflection I don't see any other reasonable way to do that. This interpretation also accords with making the agent's actions part of the observation history. If they were a pure function of the observations, we wouldn't need them to be there.
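A minimal sketch of what a policy of type π : H → ∆(A) looks like in code, with my own toy names: the policy returns a distribution over actions rather than a single action, and the agent acts by sampling from it, which is what makes its behaviour stochastic.

```python
import random

# A policy maps an observation-action history to a distribution over
# actions (here a dict of action -> probability), not to a single action.
def policy(history):
    # Hypothetical rule, just for illustration:
    if len(history) % 2 == 0:
        return {"left": 0.9, "right": 0.1}
    return {"left": 0.5, "right": 0.5}

# The agent then acts by sampling from that distribution.
def act(history):
    dist = policy(history)
    actions = list(dist)
    return random.choices(actions, weights=[dist[a] for a in actions])[0]

print(act(()))  # "left" or "right"
```

A deterministic policy is just the special case where every distribution puts all its mass on one action.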

Comment by makoyass on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-01T08:14:10.508Z · score: 5 (3 votes) · LW · GW

In the FHI's indifference paper, they define policies as mapping observation-action histories to a distribution over actions instead of just actions ("π : H → ∆(A)"). Why is that? Is that common? Does it mean the agent is stochastic?

Comment by makoyass on Open and Welcome Thread December 2018 · 2019-01-01T00:18:08.285Z · score: 2 (2 votes) · LW · GW

I'm certainly interested in playing with reallocation systems in existing cities, but if we can go beyond that, we must.

"Gentrification", for me, includes the effect where land prices increase without any increase in value. That pricing does useful work by allocating land to its most profitable uses. It does that through costly bidding wars and ruthless extraction of rent, which have horrible side-effects: reducing the benefits regular people derive from living in cities by, I'd guess, maybe 80%? (Reminder: not only is your rent too damn high, but so is the rent of the businesses you frequent), and allocating vast quantities of money to the landowning class, who often aren't producing anything (especially often in San Francisco). If we can make a system that allocates land to its most productive use without those side-effects, then we no longer need market pricing as a civic mechanism, and we should be trying like hell to get away from it. Everyone should be trying like hell to get away from it, but people who believe they have a viable, mostly side-effect-free substitute should be trying especially hard.

A large part of the reason I'm attracted to the idea of building in a rural or undeveloped area is that it will probably be easier to gain the use of eminent domain in that situation. If we're building amid farmland, and we ask the state for the right to buy land directly adjacent to the city at a price of, say, double the inflation-adjusted price of local farmland as of the signing of the deal, it's hard to argue that anyone loses out much in that situation. There wasn't really much of a chance that the land was going to rise to that price on its own; any rise would have been an obvious exploitation of the effects of the city. If you ask for a similar privilege on urban land, forced sale at a capped price is a lot more messy (and, of course, the price cap will be like 8x higher). For one thing, raising land prices in response to adjacent development is just what land-owners are used to in cities, and they will throw a very noisy fit if someone threatens that.

Comment by makoyass on Editor Mini-Guide · 2018-12-30T09:01:37.736Z · score: 4 (3 votes) · LW · GW

Oh I see the index on the left is constructed automatically

Okay so how did you make that index on the left side of the page? :p

Comment by makoyass on What makes people intellectually active? · 2018-12-30T03:59:46.982Z · score: 11 (5 votes) · LW · GW

I receive an original idea every time I face an uncomfortably vague but demanding obsession, a question I didn't know how to ask. I think about it until I do know how to ask the question, until the vague obsession becomes precise. Out comes a payoff. I can write it down and people will like it, usually (if I can convince anyone to read it).

I can't imagine that there are a lot of people who don't get these leading obsessions, these tractable neuroses, these itching intuitions that there is something over there that we should be trying to get to know. I think there is a difference between people, some people go after those sorts of smells, others are repelled. A lot of good work comes from people who, through circumstance or psychology, cannot ignore their difficult questions.

I don't know what use this observation is for creative engineering work. I've been stuck on a simple game design problem for weeks, and I'm pretty sure that's because I never learned to direct my creativity (or, to phrase it another way: the thing that directs my creativity does not respect and listen to the thing that knows what problems I'm supposed to be working on right now). Something in the design is missing, shallow, but this problem, in my mind, never asserts itself as one of those unarticulated questions that can and must be answered. I just want to turn away. I want to do something else. Some crucial party in me is not interested, and I can't tell it it's wrong to be disinterested. I hate this game. I know that one day I will love it again; unfortunately I love it when it needs my love the least, and I hate it when it needs my love the most. Maybe I need to divorce the concept of the game as it exists (the current build) from the vision, the game as it should exist. A terrible thing to be confused about, but it feels like that's what's going on.

I've thought this before: a finished, released game will always be a thin shadow of the experience it is alluding to. A lot of the time games make this very obvious, practically explicit. I didn't want it to be obvious; I wanted the game to be honest, just what it appears to be, and so I have to go through hell to bring the being far enough forward to align with the appearance.

Comment by makoyass on Spaghetti Towers · 2018-12-24T21:25:38.338Z · score: 7 (3 votes) · LW · GW

Suddenly very inspired with the idea of a programming language where even the most carelessly constructed spaghetti towers are fairly easy to refactor.

I want to design a medium that can be assembled into a fairly decent set of shelves by just progressively throwing small quantities of it at a wall.

I want to design tools that do not require foresight to be used well, because if you're doing something that's new to you, that challenges you- and I want to live in a world where most people are- foresight is always in scarce supply.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-23T23:27:09.359Z · score: 2 (2 votes) · LW · GW

They have very little to be afraid of if their commitment is true, and if it's not, we don't want it. The commitment thing isn't just a marketing stunt. It's a viability survey. The data has to be good.

I guess I should add, on top of the process for forgiving commitments under unavoidable mitigating circumstances, there should be a process for deciding whether the city met its part of the bargain. If the facilities are not what was promised, fines must be reduced or erased.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-23T07:54:30.230Z · score: 2 (2 votes) · LW · GW

There are many kinds of commerce I don't know much about. I'm going to need help with figuring out what a weird city where the cost of living is extremely low is going to need in order to become productive. The industries I do know about are fairly unlikely to require proximity to a port, but even in that set, a lot of them will want proximity to manufacturing, and manufacturing in turn will want to be near a port?

Can you think of any reasons we couldn't make the coordinated city's counterpart to the FSP's Statement of Intent contract legally binding, imposing large fines on anyone who fails to keep to their commitment? (while attempting to provide exceptions for people who can prove they were not in control of whatever kept them from keeping their commitment, where possible) Without that, I'd doubt those commitments will amount to much.

For a lot of people, a scheme like this will be the only hope they'll ever have of owning (a share in) any urban property, if they can be convinced of the beneficence of the reallocation algorithms (I imagine there will be many opportunities to test them before building a fully coordinated city). I don't really understand what it is about the FSP that libertarians find so exciting, but I feel like the coordinated city makes more concrete promises of immediate and long-term QoL than the FSP ever did. Note that the allocator includes the promise of finding ourselves surrounded by like-minded individuals.

Comment by makoyass on 0 And 1 Are Not Probabilities · 2018-12-21T00:48:46.735Z · score: 0 (2 votes) · LW · GW

/r/badmathematics is shuttered now, apparently.

"This community has become something of a shitshow. Setting badmath to private while we try to decide on a way forward with the subreddit."

Oh no, really? Who would have thought that the sorts of people who have learned to enjoy indulging contempt would eventually turn on each other.

I really wanted to see that argument though. Tell me, to what extent was it an argument? Cause I feel like if a person from our school wanted to settle this, they'd just distinguish the practical cases EY's talking about from the mathematical cases the conversants are talking about, and everyone would immediately wake up and realise how immaterial the disagreement always was (though some of them might decide to be mad about that instead). But also, maybe Eliezer kind of likes getting people riled up about this, so maybe dispersing the confusion never crossed his mind. Contempt vampires meet contempt bender. Kismesis is forged.

I shouldn't contribute to this "fight", but I can't resist. I'd have recommended he bring up how the brunt of the causal network formalization explicitly disallows certain or impossible events at the math level once you cross into a certain level of sophistication (I forget where the threshold was, but I remember thinking "well, the bayesian networks that support 0s and 1s sound pretty darn limited, and I'm going to give up on them just as my elders advised.")

Ultimately, the "can't be 0 or 1" restriction is pretty obviously needed for a lot of the formulas to work robustly (you can't even use the definition of conditional probability without restricting the prior of the evidence! Cause there's a division in it! There are lots of divisions in probability theory!)

So I propose that we give a name to that restriction, and I offer the name "credences". (Currently, it seems the word "credence" is just assigned to a bad overload of "probability" that uses percent notation instead of normal range. I doubt anyone will miss it.)

A probability is a credence iff it is neither 0 nor 1. A practical, real-world, rightly and justly radically skeptical bayesian reasoner should probably restrict a large, well-delineated subset of its evidence weights to being credences.

And now we can talk about credences and there's no need for any more confusion, if we want.
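For instance, a toy sketch of enforcing the restriction (my own code, my own names): guard the divisor, so the definition of conditional probability never divides by zero.

```python
# A "credence" is a probability that is strictly between 0 and 1.
def credence(p):
    if not 0.0 < p < 1.0:
        raise ValueError("a credence is neither 0 nor 1")
    return p

# Conditional probability P(h|e) = P(h ∧ e) / P(e). The division is
# only safe when the prior of the evidence is a credence.
def conditional(p_h_and_e, p_e):
    return p_h_and_e / credence(p_e)

print(conditional(0.2, 0.5))  # 0.4
```

Passing `p_e = 0.0` or `1.0` fails loudly at the type-of-number level instead of silently producing a division error or a vacuous certainty downstream.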

Comment by makoyass on What is abstraction? · 2018-12-17T07:40:12.776Z · score: 1 (1 votes) · LW · GW
I get the impression from hearing other people talk about it that there is a single meaning, and that I'm not understanding what that single meaning is

People are often wrong about that.

A person who understands this effect can use it to exploit people, and when they do it is called "equivocation": using two different senses of the same word in quick enough succession that nobody notices the words aren't really pointing at the same thing, then using the inconsistencies between the word senses to reach impossible conclusions.

I wish I could drop a load of examples but I've never been good at that. This deserves a post. This deserves a paper, there are probably whole philosophical projects that are based on the pursuit of impossible chimeras held up by prolonged, entrenched equivocation...

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-16T01:24:09.824Z · score: 4 (3 votes) · LW · GW

Update on preference graph order recovery

I decided to stop thinking about the Copeland method (the method where you count how many victories each candidate has had and sort everyone according to that). They don't mention it in the analysis (pricks!), but the flaw is so obvious I'm not gonna be humble about this.

Say you have a set of order judgements like this:

< = { (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (p u) (p u) (p u) (p u) }

It's a situation where the candidate "s" is a strawman. No one actually thinks s is good. It isn't relevant and we probably shouldn't be discussing it. (But we must discuss it, because no informed process is setting the agenda, and this system will be responsible for fixing the agenda. Being able to operate in a situation where the attention of the collective is misdirected is mandatory)

p is popular. p is better than the strawman, but that isn't saying much.

u is the ultimate, and is known by some to be better than p in every way. There is no controversy about that, among those who know u.

Under the Copeland method, u still loses to p, because p has fought more times and won more times.

The Copeland method is just another popularity contest. It is not meritocratic. It cannot overturn an incumbency by helping a few trusted seekers to spread word about their finding. It does not spread findings. It cannot help new things rise to prominence. Disregard the Copeland method.
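The failure in the example above, sketched in code (mine; it scores candidates by raw victory counts, per the description of the method at the top of this comment):

```python
from collections import Counter

# Nine (s < p) judgements and four (p < u) judgements, as above.
judgements = [("s", "p")] * 9 + [("p", "u")] * 4

# Score each candidate by how many comparisons it has won.
wins = Counter(winner for loser, winner in judgements)
ranking = sorted(wins, key=wins.get, reverse=True)

print(dict(wins))  # p: 9, u: 4 (s never wins)
print(ranking)     # p outranks u, despite u beating p every time they met
```

p's nine wins are all against the strawman s, yet they outweigh u's four wins, every one of which was against p itself.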


A couple days ago I started thinking about defining a metric by thinking of every edge in the graph (every judgement) as having a "charge" and then defining a way of reducing serial wires and a way of reducing parallel wires, then getting the total charge between each pair of points (it'll have time complexity n^3 at first but I can think of lots of ways to optimise that. I wouldn't expect much better from a formal objective measure), then assembling that into a ranking.

Finding serial and parallel reducers with the right properties didn't seem difficult (I'm currently looking at parallel(a, b)→ a + b and serial(a, b)→ 1/(1/a + 1/b)). That was very exciting to realise. The current problem is, it's not clear that every tangle can be trivially reduced to an expression of parallels and serials, consider the paths between the top left and bottom right nodes in a network shaped like "▥", for instance.

Calculating the conductance between two points in a tangled circuit may be a good analogy here... and I have a little intuition that this would be NP hard in the most general case despite being deceptively tractable in real-world cases. Someone here might be able to dismiss or confirm that. I'm sure it's been studied, but I can't find a general method, nor a proof of hardness.

If true, it would make this not so obviously useful as a formal measure sufficient for use in elections.
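For what it's worth, the circuit analogy does come with a general method for arbitrary tangles, including the "▥" bridge: treat each charge as a conductance and solve the graph Laplacian linear system, which is plain Gaussian elimination, so roughly the n^3 mentioned above rather than anything NP-hard. A self-contained sketch; the function name, node numbering, and example graph are mine:

```python
# Effective conductance between src and dst in an arbitrary network,
# by solving the Laplacian system L v = i (ground dst, inject 1A at src).
def effective_conductance(n, edges, src, dst):
    # Build the conductance-graph Laplacian.
    L = [[0.0] * n for _ in range(n)]
    for i, j, g in edges:
        L[i][i] += g; L[j][j] += g
        L[i][j] -= g; L[j][i] -= g
    # Ground dst (delete its row and column); unit current source at src.
    idx = [k for k in range(n) if k != dst]
    A = [[L[r][c] for c in idx] for r in idx]
    b = [1.0 if k == src else 0.0 for k in idx]
    # Gaussian elimination with partial pivoting.
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    v = [0.0] * m
    for r in range(m - 1, -1, -1):
        v[r] = (b[r] - sum(A[r][c] * v[c] for c in range(r + 1, m))) / A[r][r]
    # Unit current flows across voltage v[src] (dst at 0), so G = I / V.
    return 1.0 / v[idx.index(src)]

# The "▥"-style bridge: unit-conductance square 0-1, 1-3, 3-2, 2-0,
# plus the diagonal 1-2. Balanced, so the diagonal carries no current.
bridge = [(0, 1, 1.0), (1, 3, 1.0), (3, 2, 1.0), (2, 0, 1.0), (1, 2, 1.0)]
print(effective_conductance(4, bridge, 0, 3))  # 1.0 by symmetry
```

The serial and parallel rules above fall out as special cases: a two-edge path from 0 to 2 gives 1/(1/a + 1/b), and a doubled edge gives a + b.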

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-15T23:06:22.848Z · score: 9 (5 votes) · LW · GW
Also, what are LessWrong's views on the idea of a continuous consciousness?

It's kind of against the moderation guidelines of "Make personal statements instead of statements that try to represent a group consensus" for anyone to try to answer that question hahah =P

But, authentically relating just for myself as a product of the local meditations: There is no reason to think continuity of anthropic measure uh.. exists? On a metaphysical level. We can conclude from Clones in Rooms style thought experiments that different clumps of matter have different probabilities of observing their own existence (different quantities of anthropic measure or observer-moments) but we have no reason to think that their observer-moments are linked together in any special way. Our memories are not evidence of that. If your subjectivity-mass was in someone else, a second ago, you wouldn't know.

An agent is allowed to care about the observer-states that have some special physical relationship to their previous observer-states, but nothing in decision theory or epistemology will tell you what those physical relationships have to be. Maybe the agent does not identify with itself after teleportation, or after sleeping, or after blinking. That comes down to the utility function, not the metaphysics.

Comment by makoyass on Open and Welcome Thread December 2018 · 2018-12-09T01:40:30.129Z · score: 13 (10 votes) · LW · GW

Most of the land around you is owned by people who don't know you, who don't support what you're doing, who don't particularly want you to be there, and who don't care about your community. If they can evict the affordable vegan food dispensary and replace it with a cheesecake factory that will pay higher rent, they will do that, repeatedly, until your ability to profit from your surroundings as a resident is as close to zero as they can make it without driving you away to another city, and if you did go to another city, you would watch the same thing happen all over again. You are living in the lands of tasteless lords, who will allow you to ignore the land-war that's always raging, it's just a third of your income, they tell you, it's just what it costs.

That's not what it costs. We can get it a lot cheaper if we coordinate. And whatever we use to coordinate can probably be extended to arranging a much more livable sort of city.

So I've been thinking a lot about what it would take to build a city in the desert where members' proximity desires are measured, clustered and optimised over, where rights to hold land are awarded and revoked on the basis of that. There would be no optimal method, but we don't need an optimal method. All we need is something that works well enough to beat the clusterfuck of exploitation and alienation that is a modern city. The system would gather us all together and we would be able to focus on our work.

I'll need more algorithms before I can even make a concrete proposal. Has anyone got some theory on preference aggregation algorithms? I feel like if I can learn a simple, flexible preference-graph order recovery algorithm, I'll be able to do a lot with that.

It'll probably involve quadratic voting on some level. Glen Weyl has a lot of useful ideas.

Comment by makoyass on Worth keeping · 2018-12-08T01:20:06.708Z · score: 3 (1 votes) · LW · GW

It's an interesting tradeoff, but it doesn't come up much, for me. I think, in most relevant domains, people aren't actually good at hiding their problems. Humans seem too complex, too expressive, too transparent. We were not adapted to effectively wielding privacy. We cannot fake important skills or insights that we do not have: We don't know what we don't know, we don't know the tells.

The only way most people can present a convincing picture of a human being, clear enough for anyone to trust them with anything, is by telling the truth.

Comment by makoyass on Summary: Surreal Decisions · 2018-12-01T19:32:04.508Z · score: 1 (1 votes) · LW · GW

Yeah, there's still difficult stuff to grapple with. Mathematics isn't my specialization and I'm not in any way disagreeing that surreal numbers might be relevant here. I've been thinking about digging into Measure Theory.

Comment by makoyass on Summary: Surreal Decisions · 2018-11-30T21:49:21.239Z · score: 5 (2 votes) · LW · GW

Has infinite ethics been reexamined in light of the logical decision theories?

I then come along and punch 100 people destroying 100 utility

Under a logical decision theory, your decision procedure is reflected an infinite number of times across the universe; you can't just punch 100 people and then stop there. If you decide to punch any people, an infinite number of reflections of you punch an infinite number of people. The assumption "the outcomes of your decisions are usually finite" is thrown out.

Modelling potential actions as isolated counterfactuals is wrong and doesn't work. We've known this for a while.

The end of public transportation. The future of public transportation.

2018-02-09T21:51:16.080Z · score: 7 (7 votes)

Principia Compat. The potential Importance of Multiverse Theory

2016-02-02T04:22:06.876Z · score: 0 (14 votes)