When would an agent do something different as a result of believing the many worlds theory? 2019-12-15T01:02:40.952Z · score: 12 (6 votes)
What do the Charter Cities Institute likely mean when they refer to long term problems with the use of eminent domain? 2019-12-08T00:53:44.933Z · score: 7 (2 votes)
Mako's Notes from Skeptoid's 13 Hour 13th Birthday Stream 2019-10-06T09:43:32.464Z · score: 6 (2 votes)
The Transparent Society: A radical transformation that we should probably undergo 2019-09-03T02:27:21.498Z · score: 8 (6 votes)
Lana Wachowski is doing a new Matrix movie 2019-08-21T00:47:40.521Z · score: 5 (1 votes)
Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours 2019-08-18T04:22:53.879Z · score: 0 (9 votes)
Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? 2019-08-05T00:12:14.630Z · score: 81 (48 votes)
Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? 2019-07-31T00:16:59.415Z · score: 10 (5 votes)
If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge? 2019-07-12T01:40:48.999Z · score: 5 (3 votes)
In physical eschatology, is Aestivation a sound strategy? 2019-06-17T07:27:31.527Z · score: 18 (5 votes)
Scrying for outcomes where the problem of deepfakes has been solved 2019-04-15T04:45:18.558Z · score: 28 (15 votes)
I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it 2019-04-01T03:19:44.080Z · score: 20 (7 votes)
Is there a.. more exact.. way of scoring a predictor's calibration? 2019-01-16T08:19:15.744Z · score: 22 (4 votes)
The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter 2019-01-11T22:26:29.887Z · score: 18 (7 votes)
The end of public transportation. The future of public transportation. 2018-02-09T21:51:16.080Z · score: 7 (7 votes)
Principia Compat. The potential Importance of Multiverse Theory 2016-02-02T04:22:06.876Z · score: 0 (14 votes)


Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-23T03:43:52.776Z · score: 1 (9 votes) · LW · GW

Why aren't there Knowers of Character who Investigate all Incidents Thoroughly Enough for The Rest of The Community to Defer To, already? Isn't that a natural role that many people would like to play?

Is it just that the community hasn't explicitly formed consensus that the people who're already very close to being in that role can be trusted, and forming that consensus takes a little bit of work?

Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-22T08:08:09.608Z · score: 4 (3 votes) · LW · GW

I'd guess there weren't as many nutcases in the average ancestral climate, as there are in modern news/rumor mills. We underestimate how often it's going to turn out that there wasn't really a reason they did those things.

Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-22T08:01:03.233Z · score: 11 (2 votes) · LW · GW

I've heard of Zendo and I've been looking for someone to play Eleusis with for a while heh (maybe I'll be able to get the local EA group to do it one of these days).

> though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun

Fun isn't a generic substance. Fun is subjective. A person's sense of fun is informed by something. If you've internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that training it is important, a game that trains rationality is going to be a lot of fun for you.

This is something I see often during playtesting. The people who're quickest to give up on the game tend to be the people who don't think experimentation and hypothesising have any place in their life.

I am worried about transfer failure. I guess I need to include discussion of the themes of the game and how they apply to real world situations. Stories about wrong theories, right theories, the power of theorising, the importance of looking closely at cases that break our theories.

I need to... make sure that people can find the symmetry between the game and parts of their lives.

Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-21T22:39:55.329Z · score: 2 (2 votes) · LW · GW

If you have an android phone, sure. I'll DM you a link to the apk. I should note, it's pretty brutal right now, and I have not yet found a way to introduce enough primitives to the player to make really strict tests, so it's possible to guess your way all the way to the end. Consider the objective to be figuring out the laws, rather than solving the puzzles.

Comment by makoyass on Should We Still Fly? · 2019-12-20T23:05:08.616Z · score: 2 (2 votes) · LW · GW

The next question is: why aren't people buying the offsetting? I seem to remember hearing that it was once an option in most ticket purchase processes, but it must have been an unpopular choice, because the option has disappeared, and now offsetting is going to be legally mandated; but apparently the legal mandate does not require enough offsetting to be done (past discussion: )

Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-20T00:52:22.170Z · score: 13 (8 votes) · LW · GW

This is probably the least important question (the answer is that some people are nuts) but also the one that I most want to see answered for some reason.

Comment by makoyass on We run the Center for Applied Rationality, AMA · 2019-12-20T00:46:58.565Z · score: 5 (7 votes) · LW · GW

I've been developing a game. Systemically, it's about developing accurate theories: generating theories, probing specimens, firing off experiments, figuring out where the theories go wrong, and refining them into fully general laws of nature that are reliable enough to create perfect solutions to complex problem statements. This might make it sound complicated, but it does all of that with relatively few components. Here's a screenshot of the debug build of the game over a portion of the visual design scratchpad (ignore the bird thing, I was just doodling):

The rule/specimen/problem statement is the thing on the left; the experiments/solutions that the player has tried are on the right. You can sort of see in the scratchpad that I'm planning to change how the rule is laid out, to make it more central and to make the tree structure as clear as possible (there's currently an animation that sort of jiggles the branches in a way that I think makes the structure clear, but it doesn't look as good this way).

It might turn out to be something like a teaching tool. It illuminates a part of cognition that I think we're all very interested in: not just comprehension, but directed creative problem-solving, which it tests/trains (I would love to know which). It seems to reliably teach how frequently and inevitably our right-seeming theories will be wrong.

Playtesting it has been... kind of profound. I'll see a playtester develop a wrong theory, and I'll see directly that there's no other way it could have gone. They could not have simply chosen to reserve judgement and not be wrong. They came up with a theory that made sense given the data they'd seen, and they had to be wrong. It is now impossible for me to fall for assertions like "it's our best theory and it's only wrong 16% of the time". To coin an idiom: you could easily hide the curvature of the earth behind an error rate that high. I know this because I've watched all of my smartest friends try their best to get at the truth and end up with something else instead.

The game will have to teach people to listen closely to anomalous cases and explore their borders until they find the final simple truth. People who aren't familiar with that kind of thinking tend to give up on the game very quickly. People who are familiar with that kind of thinking tend to find it very rewarding. It would be utterly impotent for me to only try to reach the group who already know most of what the game has to show them. It would be easy to do that. I really really hope I have the patience to struggle and figure out how to reach the group who does not yet understand why the game is fun, instead. It could fail to happen. I've burned out before.

My question: what do you think of that, what do you think of The Witness, and would you have any suggestions as to how I could figure out whether the game has the intended effects as a teaching tool?

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-19T23:10:02.721Z · score: 2 (2 votes) · LW · GW

No. Measure decrease is bad enough to more than outweigh the utility of the winning timelines. I can imagine some very specific variants that are essentially a technology for assigning specialist workloads to different timelines, but I don't have enough physics to detail it, myself.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-17T22:17:26.697Z · score: 1 (1 votes) · LW · GW

Sure. The question, there, is whether we should expect there to be any powerful agents with utility functions that care about that.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T23:09:00.825Z · score: 1 (1 votes) · LW · GW

The question isn't really whether it's correct, the question is closer to "is it equivalent to the thing we already believed".

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T06:08:39.657Z · score: 1 (1 votes) · LW · GW

I'm noticing a deeper impediment. Before we can imagine how a morality that is relatable to humans might care about the difference between MW and WC, we need to know how to extend the human morality we bear into the bizarre new territory of quantum physics. We don't even have a theory of how human morality extends into modernity; we definitely don't have an idealisation of how human morality should take to the future, and I'm asking for an idealisation of how it would take to something as unprecedented as... timelines popping in and out of existence, universes separated by uncrossable gulfs (how many times have you or your ancestors ever straddled an uncrossable gulf!).

It's going to be very hard to describe a believable agent that has come to care about this new, hidden, bizarre distinction when we don't know how we come to care about anything.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T05:43:04.427Z · score: -1 (2 votes) · LW · GW
> There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds

Go on then, which decision algorithms? Note, though: They do have to be plausible models of agency. I don't think it's going to be all that informative if a pointedly irrational model acts contingent on foundational theory when CDT and FDT don't.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T01:23:21.947Z · score: 2 (2 votes) · LW · GW

Yeah. I reject it. If you're any good at remapping your utility function after perspective shifts ("rescuing the utility function"), then, after digesting many worlds, you will resolve that being dead in all probable timelines is pretty much what death really is, then, and you have known for a long time that you do not want death, so you don't have much use for quantum suicide gambits.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T01:19:12.541Z · score: 1 (1 votes) · LW · GW

Sorry. That last bit about whether causality is involved at all was a little joke. It was bad. That wasn't really what I was pondering.

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T00:17:33.657Z · score: 1 (1 votes) · LW · GW

I'm not sure, it sounds very familiar, but I think it would have sounded very familiar to me before reading it or knowing of its existence. It sounds like the sorts of things I would already know.

People who think this way tend to converge on the same ideas. It's hard to tell whether thinking superrationally causes the convergence, or whether thinking in convergent ways causes a person to have more interest in superrationality, ~~or whether causality is involved at all~~

Comment by makoyass on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T00:01:01.044Z · score: 1 (1 votes) · LW · GW

No, if 99% of timelines have utility 1, while in 1% of timelines something very improbable happens and you instead cause utility to go to 0, the global utility is still pretty much 1. Some part of the human utility function seems to care about absolute existence or nonexistence, and that component is going to be sort of steamrolled by multiverse theory, but we will mostly just keep on going in pursuit of greater relative measure.
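The arithmetic here is just measure-weighted expected utility across branches. A minimal sketch (my own illustration of the numbers in this comment, not anything from the post):

```python
# Measure-weighted utility across timelines:
# 99% of timelines have utility 1; in 1% something very improbable
# happens and utility instead goes to 0.
branches = [(0.99, 1.0), (0.01, 0.0)]  # (measure, utility) pairs

global_utility = sum(measure * utility for measure, utility in branches)
print(global_utility)  # 0.99 -- still pretty much 1
```

The improbable disaster branch barely moves the total, which is the point: under a measure-weighted view, what matters is relative measure, not absolute existence or nonexistence.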

Comment by makoyass on The Actionable Version of "Keep Your Identity Small" · 2019-12-08T08:06:40.959Z · score: 1 (1 votes) · LW · GW

I don't think that applies to the sense of tribe that I mean. When you find your tribe, the sense of tribe that I mean, you will realise that leaving it is not really an option that you ever could have had. It is simply what you are. It is simply the group of people who want for the world the same thing that you want for the world.

It can take a long time to find that tribe and to recognise it. It isn't LessWrong, it isn't EA. It's funny to think about how much of an ideological split there is between discounting neartermists and alignmentist longtermists, and how we can still be friends. If anyone started talking about why they're different (and why they're still friends), there would be a lot of discomfort, but for now we just act like it isn't there.

Comment by makoyass on What do the Charter Cities Institute likely mean when they refer to long term problems with the use of eminent domain? · 2019-12-08T07:00:22.180Z · score: 2 (2 votes) · LW · GW

The only explanation I can think of, myself, is that they are concerned that using eminent domain would "hurt market confidence" and decrease property prices.

My answer to that would be: Good. Land is reliably overpriced anyway, the land market is not efficient. I'd then refresh my memory of Inadequate Equilibria's argument to that effect (though it may have been specific to houses) and see if it cited anyone.

Comment by makoyass on What do the Charter Cities Institute likely mean when they refer to long term problems with the use of eminent domain? · 2019-12-08T06:55:50.568Z · score: 1 (1 votes) · LW · GW

No, it does not. Though I'll concede "it does not seem possible" is closer to meaning that; I kind of misspoke. My stance is more: I have reasons to think it's probably impossible (see comment).

Though I wouldn't ask if I weren't open to being surprised.

Comment by makoyass on What do the Charter Cities Institute likely mean when they refer to long term problems with the use of eminent domain? · 2019-12-08T06:45:46.547Z · score: 1 (1 votes) · LW · GW

I should note, I explain a little bit about the reasons I'm very concerned by their stance, here

Most economic hardship results from avoidable wars: situations where players must burn resources to signal their strength of desire or power (will). I define negotiations as processes that reach similar or better outcomes than their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.

Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't bear this out alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)
Comment by makoyass on Antimemes · 2019-12-08T04:51:15.216Z · score: 2 (2 votes) · LW · GW

Any sane person could write up a list of antimemes, but no sane person would post it.

It would tend to have the effect of making most people give up on the idea of antimeme, concluding that it's something that only insane people think about.

Comment by makoyass on Ungendered Spanish · 2019-12-08T02:24:35.697Z · score: 3 (2 votes) · LW · GW

It's interesting to hear that, I didn't realise that much change had occurred.

I would guess that the normalisation would have come from people spending a lot of time online/being in more situations where they don't want to and don't have to disclose a person's gender. Hm. I can see how the "they seem queer, don't want to assume their gender" might have promoted adoption by a lot.

Comment by makoyass on Ungendered Spanish · 2019-12-08T00:05:42.386Z · score: 6 (4 votes) · LW · GW

My perception, as a nonbinary person, is that this order of events makes things difficult:

> Many non-binary people adopt it as their pronoun. People get practice referring to specific named individuals with it: Pat said they might be early.
> Usage expands into cases where the person's gender is not relevant: The person who gave me a ride home from the dance last night doesn't take care of their car.

Edit: A more succinct way of saying this is: making the neutral pronoun mean "third gender" will make it harder for it to come to mean "indeterminate gender". Although The Third Gender is often defined as indeterminacy, I'm not sure how true or obvious that is for a lot of nbs.

Having the nonbinary identity enter public consciousness seems to have caused the neutral pronoun to take on a weight and colour that makes it harder to apply to non-nonbinary people. In English, use in situations where gender is irrelevant is already grammatical, so I'd guess this has a negligible effect on usage (though it does seem to have caused a notable amount of brain inflammation in terfs and reactionaries, which I must mention but probably shouldn't go into depth about), but in a different language it seems like this might be more of a thing.

If you make it about identity first, gender-neutral terms become charged, and the second phase of making them common and truly neutral and uncharged will be delayed.

Some other force I'm not aware of could overwhelm these ones. I just find it a little hard to imagine. Oh well. Most cultural shifts, at some point, were hard to imagine.

But, as an alternative: the internet is an environment where reference-without-knowing-gender is likely to occur frequently. Maybe it would be better to start by advocating the use of genderless pronouns on the Spanish internet as a default, and talk about why that's important for everyone (why is it important for everyone?), and then start talking about nonbinary people later.

Comment by makoyass on The Actionable Version of "Keep Your Identity Small" · 2019-12-07T21:06:12.545Z · score: 1 (1 votes) · LW · GW

I was really hoping you were going to provide an actionable version of "keep your tribal identity small"

For me, the most useful parts of the KYIS outlook were: meeting people with a fresh slate without saying "yes, I'm one of those people"; not feeling like you personally are being threatened when people criticise your group; not feeling that impulse to delude yourself and everyone around you into thinking the outgroup are monsters.

The issue is, I notice that we can only stay in this state of neutrality for so long. Eventually we find our tribe, we develop an ideology (a cluster of beliefs about how the world works and how to do good) that is simply too useful to step outside of, and we become publicly associated with controversial projects. That will happen. If we don't learn how to move soundly in that fire, we won't end up moving soundly at all.

Comment by makoyass on The New Age of Social Engineering · 2019-12-07T20:50:11.529Z · score: 1 (1 votes) · LW · GW

I am starting to see a growing movement towards designing net systems humanely, designing things to respect the user's attention, to be healthy and useful instead of just optimising engagement. and seem like two products of this movement. Unfortunately, I don't see a lot of competence here yet. as it exists now is mostly just baffling.

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-12-03T06:07:05.501Z · score: -1 (2 votes) · LW · GW

Hmm. Perhaps if there were a consensus that some people have deep, sincere, sometimes metaphysical reasons for not being environmentalists, they could become a protected class. I'm not sure many people do, myself.

Comment by makoyass on Could someone please start a bright home lighting company? · 2019-12-01T01:11:37.751Z · score: 4 (2 votes) · LW · GW

I am frequently afflicted with the kinds of drowsy depressive states that I would associate with dormancy in a deep winter. I think I heard that brighter lights generally increase alertness and productivity. My current model is... the mechanisms for determining whether the human is indoors and (therefore?) about to sleep are just very, very crude. The model is also trying to account for the CO2 concentration thing, which, last I heard, we didn't have any other plausible evolutionary account for.

Comment by makoyass on What's been written about the nature of "son-of-CDT"? · 2019-11-30T23:20:28.248Z · score: 10 (6 votes) · LW · GW

I think I saw a bit on arbital about it

> Logical decision theorists use "Son-of-CDT[red link, no such article]" to denote the algorithm that CDT self-modifies to; in general we think this algorithm works out to "LDT about correlations formed after 7am, CDT about correlations formed before 7am".

Comment by makoyass on Could someone please start a bright home lighting company? · 2019-11-30T23:14:06.029Z · score: 1 (1 votes) · LW · GW

No and it's summer in my hemisphere anyway (but I spend a lot of time indoors)

Comment by makoyass on Could someone please start a bright home lighting company? · 2019-11-30T22:38:03.126Z · score: 1 (1 votes) · LW · GW

What if we just had brighter screens?

If it just needs to be brightness in the field of vision rather than brightness in the room, well, most of the time there's a (very large) screen dominating my field of vision.

I have now set my screen brightness to an uncomfortable range. Having difficulty adjusting, but feeling very awake. Will report back in a week, I guess.

I was considering projecting bright light onto the wall behind the screen (this would allow the light to be diffused a lot, and it would be very easy to deploy; I wouldn't even need to hang the thing, let alone make a power socket), but it occurred to me that having the backdrop be brighter than your screen tends to cause headaches.

Comment by makoyass on Book Review: Design Principles of Biological Circuits · 2019-11-24T04:58:55.968Z · score: 5 (4 votes) · LW · GW

A large part of the reason this is interesting is that it bears on the alignment problem; if evolved mechanisms of complex systems tend to end up being comprehensible, alignment techniques that rely on inspecting the mind of an AGI become a lot easier to imagine than they currently are.

From a comment I made in response to Rohin Shah on reasons for AI optimism:

> One way of putting it is that in order for an agent to be recursively self-improving in any remotely intelligent way, it needs to be legible to itself. Even if we can't immediately understand its components in the same way that it does, it must necessarily provide us with descriptions of its own ways of understanding them, which we could then potentially co-opt.
Comment by makoyass on Thoughts on Robin Hanson's AI Impacts interview · 2019-11-24T04:00:39.676Z · score: 1 (1 votes) · LW · GW
> I assume Robin would want one of the 20 chapters to be about whole-brain emulation (since he wrote a whole book about that)

Yeah! I would too! I'd guess that he'd anticipate emulation before AGI, and if you anticipate early emulation then you might expect AGI to come as a steady augmentation of human intelligence, or as a new cognitive tool used by large human populations, which is a much less dangerous scenario.

But... I read the beginning of Age of Em, and he was so heroically cautious in premising the book that I'm not sure whether he actually anticipates early emulation (I sure don't). And apparently he didn't bring it up in the interview?

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-24T02:39:31.686Z · score: 3 (2 votes) · LW · GW

Weren't the countermeasures very basic, though? They weren't exactly the type of illegibly sophisticated egregores that trads like to worship. Isn't tall poppy syndrome basically instinctive?

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-24T02:30:45.104Z · score: 3 (2 votes) · LW · GW

Good reduction.

Drethelin here is on twitter. His posts are so good that I can almost ignore the amount of intentionally divisive politics memes. You create these wounds, brother, and you do not heal.

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-17T07:02:56.259Z · score: 2 (2 votes) · LW · GW

People say reading something boring does it, but for me it's about cognitive overhead. Something that'll make some part of the brain go "I'm too tired for this shit actually, if you wanna read this we've got to sleep a bit first, you do want to read this ergo we are going to sleep now"

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-17T07:00:19.713Z · score: 1 (1 votes) · LW · GW


I find I have very little energy for debating religion nowadays. It could just be because I don't know all that much about religion and don't want to bother to learn. But I think it might be that the truth of the claims of religion isn't really why people keep going to church, and arguing against those claims won't really have much of an effect on people. Some relevant stuff was written in Scott's recent article about new atheism.

My impression right now, personally, is that the strongest anchors of peoples' religions are probably

  • Grounding of morality. Some people don't see a way to build a shared morality on a purely secular worldview. It's not obvious that we even can (I believe we can, with a lot of talking and a bit of evolutionary psych, but have we, yet? Has that book been written?)
  • Community. You can't argue someone out of wanting to be a part of a community of people who agree about what is good and bad. The best you can do is invite them to an effective altruist meetup and try to make sure they have a good time. If they do, and if you can make sure they understand that there are alternatives, other communities out there ready to embrace them, then maybe the prospect of leaving their spiritual community can become thinkable for them.
Comment by makoyass on [Link] John Carmack working on AGI · 2019-11-15T00:16:19.997Z · score: 5 (3 votes) · LW · GW

I'd imagine he was reaching for a term for "generalised Pascal-like situation". Calling it a Pascal's wager wouldn't work, because Pascal's wager proper wasn't a valid argument.

Hm I guess it is a bit sad that there isn't a term for this.

Comment by makoyass on The Math Learning Experiment · 2019-11-09T03:33:27.557Z · score: 1 (1 votes) · LW · GW

I'd like to see how "it's conceptual engineering" vs "It's conceptual discovery" mentalities correlate with productivity. Engineering mentality seems obviously more pragmatic and more realistic, but Discovery mentality seems much more likely to attract passion (which, for humans, fuels productivity).

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-08T03:02:04.438Z · score: 2 (2 votes) · LW · GW

Hahah. That's a funny thought. I don't think it does lead inevitably to toxicity, though. I don't think the incentives it imposes are really that favourable to that sort of usage. There's a hedonic attractor for venomous behaviour rather than a strategic attractor.

Right now the char limit isn't really that hostile to dialogue. There's a "threading" UI (hints that it's okay to post many tweets at once) so it's now less like "don't put any effort into your posts" and more like "if you're gonna post a lot try to divide it up into small, digestible pieces"

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-07T04:07:15.693Z · score: 8 (5 votes) · LW · GW

Twitter's usefulness mostly comes from the celebrities being there. The initial reason the celebrities were attracted probably had to do with the char limit and its pretext: that they are not expected to read too much, and not expected to write too much.

You'll see on reddit - at least, back when these things were being determined - a lot of celebrities, when they did AMAs, seemed to feel obligated to respond to every comment with a comment of similar length. Sometimes they wouldn't wait and see which comments were getting the most votes and answer those, they'd just start with the first one that hit their mailbox and work down the list until they ran out of time. My guess is, non internet-native extroverts really needed a platform that would advise them about what's expected and reasonable.

But I think, now that we're all learning that we must moderate our consumption, the celebrities (and most other people) remain on twitter mainly because the celebrities were there in the first place. I don't think we need the char limit any more. I think maybe we're ready for the training wheels to come off.

But there's another reason redditlikes don't really work for a general audience: specifics about how voting tends to work. There is no accommodation of subjectivity. Everyone sees the same vote ranking even though different people have different interests and standards. The problem is partially mitigated by separating people into different subreddits, but eventually, general subreddits like /r/worldnews, /r/technology, /r/science or even /r/futurism will grow large enough and diverse enough that people won't be able to stand being around each other again. Every demographic other than the largest, most vote-happy one will have to leave. I really want everyone to be able to join together in the same conversation, but when the top-ranked comments always turn out to be "[outgroup lies]" or "[childish inanity]", that can't happen. The outgroup wants to see their lies, and the children want to see their inanity, and I think they should be able to, but good adults need to be able to hear each other too, or else they'll just move to GoodAdultSite, and then the outgroup won't be able to find refutations of their lies even when they look for them, and the children will not receive the advice they need even when they call out for it.

(I have some ideas as to how to build a redditlike that might solve this. If anyone's interested, speak up.)

Comment by makoyass on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T23:04:06.828Z · score: 4 (3 votes) · LW · GW

My other comment was mostly critical but I just want to add that I really enjoy this kind of post. Any conversation about economics of future technology is fun imo.

Comment by makoyass on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T22:03:10.862Z · score: 2 (2 votes) · LW · GW

You need to demonstrate that the cost of division {developing the coupling system, the extra materials for building with the coupling system, and having the two parts be unable to share physical mechanisms} will be less than the benefits of having smaller/cheaper tugs for the few people who really have a use for tugs.

And I think most people don't really have enough of a use for tugs to overcome economies of scale in the near term. The majority of trips will take place with non-custom cabins:

  • The ordinary rider does not need custom cabins. Consider the amount of energy people put into meaningfully customising their homes/apartments in practice (not that much), then scale it down by 20x to account for the fact that people spend a lot less time in transit than they do at home. That's how much people will care, most of the time. I should examine some of the use cases though:
    • unmanned delivery services only want tugs
    • people who use a wheelchair want a custom cabin they can just roll into
    • People who want to do morning routine stuff during commute want a cabin that supports that stuff? But wouldn't the road movement interfere too much? I mean, have you ever stood in a bus? Imagine having to stay upright through all the shifting and jolting while showering, putting on pants, or eating a meal. If this were a thing, enough people would want it that they could just build a custom car entirely though.
    • Big visiting service station things?
      • Probably bad example: "mobile libraries". A lot of these thingies seem less practical than just having a fixed building provide the service and moving people or goods between them.
      • Hm visiting remote-operated surgery theatres? That could be pretty badass
  • Storing your very own personal cabin, once arrived at your destination, will be an inconvenience. It would mean either sending it to a parking locker (which, if it's in the urban center, you will have to pay a non-negligible amount to reside in), or sending it all the way home again, and then having to wait for it to come out again when you're ready to commute back. I think most people would stop bothering.

Hmm, I was gonna say the tugs wouldn't be that much cheaper than common single-occupant cars, because they'd need to have enough mass to gain traction on the road. But it occurs to me: if you could have the tug go under the cabin to some extent, then jack it up a bit, it could use the weight of the cabin for traction, so assuming sufficiently dense batteries and motors (can we assume?) it could be pretty small. The heavier the cabin, the more traction it needs, but also the more traction it gets. That's pretty neat.
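A back-of-envelope sketch of that traction claim, using the standard friction model (peak tractive force = friction coefficient × weight on the driven wheels). All masses and the friction coefficient are invented illustrative numbers, not sourced figures:

```python
MU = 0.7  # assumed tyre-on-dry-asphalt friction coefficient
G = 9.8   # m/s^2

def max_acceleration(tug_mass, cabin_mass, cabin_weight_on_tug_frac):
    """Peak acceleration (m/s^2) limited by traction at the tug's driven wheels.

    cabin_weight_on_tug_frac: share of the cabin's weight resting on the tug
    (0 when the cabin is towed on its own wheels, near 1 when fully jacked up).
    """
    total_mass = tug_mass + cabin_mass
    # Normal load on the driven wheels, in kg-equivalent
    normal_load = tug_mass + cabin_mass * cabin_weight_on_tug_frac
    traction_force = MU * normal_load * G  # newtons
    return traction_force / total_mass     # F = ma

# A hypothetical 150 kg tug hauling a 400 kg cabin:
towing = max_acceleration(150, 400, 0.0)  # cabin rolls on its own wheels
jacked = max_acceleration(150, 400, 0.8)  # most of the cabin's weight on the tug
```

With the cabin jacked up, the available acceleration roughly triples, and because the borrowed load scales with the cabin's mass, a heavier cabin does indeed bring most of its own traction with it.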

For completeness, I should link a previous post I did about the economics of autonomous cars (which has comments, and links in turn to another post I did).

Comment by makoyass on The Parable of Predict-O-Matic · 2019-11-03T23:47:28.488Z · score: 1 (1 votes) · LW · GW

The category feels a bit broader than "self-fulfilling prophecy" to me, but not by much. I think we should look for a term that gets us away from any impression of unilaterally decided, prophetic inevitability.

"Stipulation" has the connotation of command, for me.

But that connotation isn't really incorrect! When you make a claim that becomes true iff we believe it, there's a sense in which you're commanding the whole noosphere, and if the noosphere doesn't like it, it should notice you're making a command and reject it.

There is a very common failure mode where purveyors of monstrous self-fulfilling prophecies will behave as if they're just passively describing reality; they aren't. We should react to them as if they're being bossy, intervening, inviting something to happen, asking the epistemic network to behave a certain way.

I think I was initially familiar with the word stipulation mostly from mathematics or law, places where truths are created (usually through acts of definition). I'm not sure how it came to me, but at some point I got the impression it just meant "a claim, but made up, but still true", that genre of claim that we're referring to. The word didn't slot perfectly into place for me either, but its meaning seemed close enough to "truths we create by believing them" that I stopped looking for a better name. We wouldn't have to drag it very far.

But I don't know. It seems like it has a specific meaning in legal contexts that hasn't got much to do with our purposes. Maybe a better name will come along.

Hmm.. should it be.. "construction"? "Some predictions are constructions." "The value of bitcoin was constructed by its stakeholders, and so one day, through them, it shall be constructed away." "We construct Pi as the optimum policy for the model M"

Comment by makoyass on The Parable of Predict-O-Matic · 2019-11-03T05:00:16.610Z · score: 3 (2 votes) · LW · GW

I've noticed that the word "stipulation" is a pretty good word for the category of claims that become true when we decide they are true. It's probably better to try to broaden its connotations to encompass self-fulfilling prophecies than to coin some other word, or to name this category "prophecy" or something.

It's clear that the category does deserve a name.

Comment by makoyass on Rohin Shah on reasons for AI optimism · 2019-11-03T02:19:05.283Z · score: 2 (1 votes) · LW · GW
He thinks that as AI systems get more powerful, they will actually become more interpretable because they will use features that humans also tend to use

I find this fairly persuasive. One way of putting it is that in order for an agent to be recursively self-improving in any remotely intelligent way, it needs to be legible to itself. Even if we can't immediately understand its components in the same way that it does, it must necessarily produce descriptions of its own ways of understanding them, which we could then potentially co-opt.

This may be useful in the early phases, but I'm skeptical as to whether humans can import those new ways of understanding fast enough to be permitted to stand as an air-gap for very long. There is a reason, for instance, we don't have humans looking over and approving every credit card transaction. Taking humans out of the loop is the entire reason those systems are useful. The same dynamic will pop up with AGI.

This xkcd comic seems relevant ("sandboxing cycle")

There is a tension between connectivity and safe isolation, and navigating it is hard.

Comment by MakoYass on [deleted post] 2019-11-03T01:43:20.180Z
1) Do you find this to be helpful as an examination of some crucial element of the vengeful disposition?

No. It's extremely hard to read. I think it might be getting at revenge as a way of ensuring that there is a logic of peace - an attack on unjust social realities rather than on any material cause of some potential future strife - but if I didn't already have that idea in my head, I wouldn't recognise it here. I feel like it's forcing me to guess something that it could have just said outright with very little prose.

Generally, any discussion of the vengeful disposition that does not build from the new decision theories (functional decision theory, best learned through the Arbital pages about LDT) is going to be needlessly circuitous and is likely to repeat certain mistakes - "the meaning is seemingly illogical", for instance. The post doesn't commit to this position, but it doesn't begin to refute it either.

Basically... our new decision theories are an account of rationality under which things like revenge (policies which an agent benefits from holding, but which, when actuated, do not causally bring about future benefits) are not irrational. They are rational. The standard model of rationality (CDT) was wrong. The fact that CDT was regularly doing things that brought about suboptimal outcomes should have been a big clue that it was not describing the true dao.

I should emphasise, because this is quite radical: FDT contends that the rationality, or irrationality, of an action is not purely a function of its future consequences; there has to be much more to it. An action can have negative consequences and still be a crucial part of a rational policy. If you can't justify that claim from the metaphysics of survival, you can't speak with clarity about vengeance policies.
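A toy model of that claim, in the idealised setting FDT-style arguments rely on: the attacker accurately predicts the defender's policy and only strikes someone who won't retaliate. The payoff numbers are made up for illustration.

```python
def defender_payoff(retaliates: bool) -> int:
    """Payoff of holding (not just performing) a revenge policy."""
    # The attacker reads the defender's policy and only strikes a non-retaliator.
    attacked = not retaliates
    if not attacked:
        return 0      # the costly policy deters; the revenge never actually fires
    payoff = -10      # cost of being attacked
    if retaliates:
        payoff -= 2   # actually taking revenge is pure extra cost
                      # (unreachable here, since perfect deterrence prevents the
                      # attack; it shows revenge's causal payoff is negative)
    return payoff
```

A CDT-style evaluation, done after an attack, sees only the extra -2 and refuses to retaliate; but in this model the policy of refusing is exactly what invites the attack, so the committed retaliator ends up strictly better off.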

Comment by makoyass on Turning air into bread · 2019-10-30T04:07:15.081Z · score: 1 (1 votes) · LW · GW


(Well yeah, eventually we're going to draw a black ball out of the urn. Coal and gas weren't shit next to some of the coordination challenges that're coming up, I'm sure. x-risks aside, space is going to be a mess. I can't wait for kessler syndrome to set in)

thankfully we're learning how to coordinate our population growth to support a good life within the limited carrying capacity of our natural resources better and better over time

In some ways, we are (our technology seems to be greening), but in maybe the most important ways, we haven't changed anything. The global population is still growing faster than ever. Growth seems to slow down under certain conditions, but (and I felt really stupid when I realised this) if a person thinks the utterly mysterious effects of those conditions will sustain for more than three generations, they have forgotten something very basic about what biological organisms are and how they came to be. If we let it go that way, the problem is going to come back a lot stronger, and our chances of solving it with that different set of people will be close to zero.

I don't like talking about this.

But I'm starting to get the sense that there might be something important down here that nobody is looking at with clear eyes.

Comment by makoyass on Turning air into bread · 2019-10-30T03:35:15.201Z · score: 1 (1 votes) · LW · GW

No stigma. Many more technological solutions to social problems will be needed. For instance, I'm convinced we should be pouring a lot more money into geoengineering.

I imagine that it won't always go like this, because it seems like the amount of matter and energy we have access to is finite. We answered overexpansion with a technology that enabled further expansion. There are metaphysical guarantees that this will not always work. No matter how many false physical constraints we overturn, the second law of thermodynamics seems to guarantee (this is debatable) that we will eventually hit a wall, and we will look back at the mess behind us, and we will ask whether this was the fate we really wanted, whether things could have been much better for everyone if we'd slowed down and negotiated back when we were small enough and close enough to manage such a thing.

Comment by makoyass on What's your big idea? · 2019-10-27T04:16:58.473Z · score: 1 (1 votes) · LW · GW

What negative externalities are you thinking of? Maybe it's silly for me to ask you to say, if you're saying they're taboo, but I'm looking over all of the elitist taboos and I don't think any of them really raise much of an issue.

Did I mention that my prototype aggregate utility function only regards adjacency desires that are reciprocated? For instance, if a large but obnoxious fan-base all wanted to be next to a single celebrity author who mostly holds them all in contempt, the system basically ignores those connections. Mathematically, the payoff of positioning a and b close together is min(a.desireToBeNear(b), b.desireToBeNear(a)). The default value for desireToBeNear is zero.
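A minimal sketch of that reciprocity rule, assuming each person reports a one-way desireToBeNear score for others (the names and scores here are invented for illustration):

```python
from itertools import combinations

def pair_payoff(desires, a, b):
    """Payoff of placing a and b adjacent: the *minimum* of the two one-way
    desires, so an unreciprocated desire (default 0) contributes nothing."""
    return min(desires.get((a, b), 0), desires.get((b, a), 0))

def total_payoff(desires, group):
    """Aggregate payoff over every pair in a group placed together."""
    return sum(pair_payoff(desires, a, b) for a, b in combinations(group, 2))

desires = {
    ("fan", "author"): 9,  # the obnoxious fan longs to sit next to the author...
    ("author", "fan"): 0,  # ...who holds the fan in contempt
    ("ann", "bo"): 3,      # a mutual friendship, so it counts
    ("bo", "ann"): 5,
}

# min(9, 0) = 0: the one-sided celebrity worship is ignored entirely.
# min(3, 5) = 3: only the reciprocated connection scores.
```

Taking the min rather than the sum is what makes the unreciprocated fan-to-author edge worthless to the optimiser, exactly as described above.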

P.S. Does the fact that each user's desire expression (roughly, their individual utility function) gets evaluated in a complex way that depends on how it relates to the other desire expressions make this not utilitarianism? Does this position - that fitting our desires together will be more complex than mere addition - have a name?

Comment by makoyass on Turning air into bread · 2019-10-27T02:06:34.327Z · score: 7 (5 votes) · LW · GW

It's an important story. Sometimes there are technological solutions to social problems. As reasonable as the prophet Malthus sounded, we didn't heed his warning: we did not repent, we did not learn how to coordinate our population growth to support a good life within the limited carrying capacity of our natural resources. A wizard made a new gizmo and we all got away with it.

There's something very unsatisfying about it.

And I imagine it won't always be like this.