The Transparent Society: A radical transformation that we should probably undergo 2019-09-03T02:27:21.498Z · score: 8 (6 votes)
Lana Wachowski is doing a new Matrix movie 2019-08-21T00:47:40.521Z · score: 5 (1 votes)
Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours 2019-08-18T04:22:53.879Z · score: 0 (9 votes)
Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? 2019-08-05T00:12:14.630Z · score: 58 (33 votes)
Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? 2019-07-31T00:16:59.415Z · score: 10 (5 votes)
If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge? 2019-07-12T01:40:48.999Z · score: 5 (3 votes)
In physical eschatology, is Aestivation a sound strategy? 2019-06-17T07:27:31.527Z · score: 18 (5 votes)
Scrying for outcomes where the problem of deepfakes has been solved 2019-04-15T04:45:18.558Z · score: 28 (15 votes)
I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it 2019-04-01T03:19:44.080Z · score: 20 (7 votes)
Is there a.. more exact.. way of scoring a predictor's calibration? 2019-01-16T08:19:15.744Z · score: 22 (4 votes)
The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter 2019-01-11T22:26:29.887Z · score: 18 (7 votes)
The end of public transportation. The future of public transportation. 2018-02-09T21:51:16.080Z · score: 7 (7 votes)
Principia Compat. The potential Importance of Multiverse Theory 2016-02-02T04:22:06.876Z · score: 0 (14 votes)


Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-08T04:19:55.535Z · score: 5 (3 votes) · LW · GW

In Vitalik Buterin's interview on 80KHours ( I recommend it) he brought something up that evoked a pretty stern criticism of radical transparency.

Most incentive designs rely on privacy, because by keeping a person's actions off the record, you keep the meaning of those actions limited, confined, discrete, knowable. If, on the other hand, a person's vote, say, is put onto a permanent public record, then you can no longer know what it means to them to vote. Once they can prove how they voted to external parties, they can be paid to vote a certain way. They can worry about retribution for voting the wrong way. Things that might not even exist yet, that the incentive designer couldn't account for, now interfere with their behaviour. It becomes so much harder to reason about systems of agents, every act affects every other act, what hope have we of designing a robust society under those conditions? (Still quite a lot of hope, IMO, but it's a noteworthy point)

Comment by makoyass on Examples of Examples · 2019-09-08T03:57:23.453Z · score: 4 (3 votes) · LW · GW

When I was taught the incompleteness theorem (proof that there are true mathematical claims that cannot ever be proven), I wished for an example of one of its unprovable claims. Math is a very strange territory. You will often find proofs of the existence of extraordinary things, but no instance of those extraordinary things. You can know with certainty that they're out there, but you might never get to see one. Without examples, we must always wonder if the troublesome cases can be confined to a very small region of mathematics and maybe this big impressive theorem will never really actually impinge on our lives in any way.

The problem is, an example of incompleteness would have to be a true claim that nobody could prove. If nobody could prove it, how would we recognise it as a true claim?

Well, how do we know that the sun will rise again tomorrow? We know that it rose before, many times, it's never failed, there's no reason to suspect it wont rise again. We don't have a metaphysical proof that the sun will rise again tomorrow, but we don't really need one. There is no proof, but the evidence is overwhelming.

It occurred to me that we could say a similar thing about the theorem P ≠ NP. We have tried and failed to prove or disprove it for so long that any other field would have accepted that the evidence was overwhelming and moved on long ago. A physicist would simply declare it a law of reality.

I was quite happy to find my example. It wasn't some weird edge case. It's a theorem that gets used every day by computer scientists to triage their energies, see, if you can prove that a problem you're trying to solve is equivalent or stronger than a known NP problem, you would be well advised to assume it's unsolvable, even though we wont ever be able to prove it (although, admittedly, we haven't been able to prove that we wont ever be able to prove it, that too seems fairly evident, if not guaranteed)

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-06T00:16:22.297Z · score: 1 (1 votes) · LW · GW

While I took your point well, FAI is not a more plausible/easier technology than democratised surveillance. It may be implemented sooner due to needing pretty much no democratic support whatsoever to deploy, it might just as well take a very long time to create.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-05T10:59:22.625Z · score: 1 (1 votes) · LW · GW
It is incredibly common today for massive arguments over video, half the world saying that it obvious yields one conclusion and other half saying it refutes it.

Give examples. Often there is a lot of context missing from those videos and that is the problem. People who intentionally ignore readily available context will have no more power in a transparent society than they have today.

My concern there wasn't that some laws might not get consistently enforced, consistent enforcement is the thing I am afraid of. I'm not sure about this, but I've often gotten the impression that our laws were not designed to work without the mercy of discretionary enforcement. The whole idea of freedom from unwarranted search is suggestive to me that laws were designed under the expectation that they would generally not be enforced within the home. Generally, when a core expectation is broken, the results are bad.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-05T10:47:32.310Z · score: 1 (1 votes) · LW · GW
I would expect it to get implemented exactly halfway

Not stopping halfway is a crucial part of the proposal. If they stop halfway, that is not the thing I have proposed. If an attempt somehow starts in earnest then fails partway through, policy should be that the whole thing should be rolled back and undone completely.

Regarding the difficulty of sincerely justifying opening National Security... That's going to depend on the outcome of the wargames.. I can definitely imagine an outcome that gets us the claim "Not having secret services is just infeasible" in which case I'm not sure what I'd do. Might end up dropping the idea entirely. It would be painful.

allegedly economically/technically impossible to install

Not plausible if said people are rich and the hardware is cheap enough for the scheme to be implementable at all. There isn't an excuse like that. Maybe they could say something about being an "offline community" and not having much of a network connection.. but the data could just be stored in a local buffer somewhere. They'd be able to arrange a temporary disconnection, get away with some things, one time, I suppose, but they'd have to be quick about it.

From the opposite perspective, many people would immediately think about counter-measures. Secret languages

Obvious secret languages would be illegal. It's exactly the same crime as brazenly covering the cameras or walking out of their sight (without your personal drones). I am very curious about the possibilities of undetectable secrecy, but there are reasons to think it would be limited.

I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?

(Hmm... I can think of someone in particular who really would have liked to live in that sort of situation, she would have felt a lot safer... ]:)

One of my intimates has made an attempt at this. It was inconclusive. We'd do it again.

But it wouldn't be totally informative. We probably couldn't justify making the data public, so we wouldn't have to deal much the omniscient antagonists thing, and the really difficult questions wouldn't end up getting answered.

One relevant small-scale experiment would be Ray Dalio's hedge fund Bridgewater, I believe they practice a form of (internal) radical openness, cameras and all. His book is on my reading list.

I would one day like to create an alternative to secure multiparty computation schemes like Ethereum by just running a devoutly radically transparent (panopticon accessible to external parties) webhosting service on open hardware. It would seem a lot simpler. Auditing, culture and surveillance as an alternative to these very heavy, quite constraining crypto technologies. The integrity of the computations wouldn't be mathematically provable, but it would be about as indisputable as the moon landing.

It's conceivable that this would always be strictly more useful than any blockchain world-computer, as far as I'm aware we need a different specific secure multiparty comptuation technique every time we want to find a way to compute on hidden information. For a radically transparent webhost, the incredible feat of arbitrary computation on hidden data at near commodity hardware efficiency (fully open, secure hardware is unlikely to be as fast as whatever intel's putting out, but it would be in the same order of magnitude) would require only a little bit of additional auditing.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:49:44.176Z · score: 1 (1 votes) · LW · GW

That's why I said "fairly reliable". Which is not reliable enough for situations like this, of course, but we don't seem to have better alternatives.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:44:46.790Z · score: 1 (1 votes) · LW · GW

Which abuses, and why would those be hard to police, once they've been drug out into the open?

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:42:15.486Z · score: 1 (1 votes) · LW · GW

Regarding the overabundance of information, we should note that a lot of monitoring will be aided by a lot of automated processes.

The internet's tendency to overconsume attention... I think that might be a temporary phase, don't you? We are all gorging ourselves on candy. We all know how stupid and hollow it is and soon we will all be sick, and maybe we'll be conditioned well enough by that sick feeling to stop doing it.

Personally, I've been thinking a lot lately about how lesswrong is the only place where people try to write content that will be read thoroughly by a lot of people over a long period of time. I don't think we're doing well, at that, but I think the value of a place like this is obvious to a lot of people. We will learn to focus on developing the structures of information that last for a long time, or at least, the people who matter will learn.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:19:27.930Z · score: 1 (1 votes) · LW · GW

Did I say that? If so I didn't mean to. The only vulnerabilities I'd expect it to protect us from fairly reliably are the "easy nukes" class. You mention the surprising strangelets class, which would do very little for.

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T22:06:55.149Z · score: 2 (2 votes) · LW · GW
I'm a trained rationalist

What training process did you go through? o.o

My understanding is that we don't really know a reliable way to produce anything that could be called a "trained rationalist", a label which sets impossibly high standards (in the view of a layperson) and is thus pretty much unusable. (A large part of becoming an aspiring rationalist involves learning how any agent's rationality is necessarily limited, laypeople have overoptimistic intuitions about that)

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T21:57:04.987Z · score: 1 (1 votes) · LW · GW

In what situation should a longtermist (a person who cares about people in the future as much as they care about people in the present) ever do hyperbolic discounting

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T21:41:17.330Z · score: 8 (3 votes) · LW · GW
The technologies for maintaining surveillance of would-be AGI developers improve.

Yeah, when I was reading Bostrom's Black Ball paper I wanted to yell many times, "Transparent Society would pretty much totally preclude all of this".

We need to talk a lot more about the outcome where surveillance becomes so pervasive that it's not dystopian any more (in short, "It's not a panopticon if ordinary people can see through the inspection house"), because it seems like 95% of x-risks would be averted if we could just all see what everyone is doing and coordinate. And that's on top of the more obvious benefits like, you know, the reduction of violent crime, the economic benefits of massive increase in openness.

Regarding technologies for defeating surveillance... I don't think falsification is going to be all that tough to solve (Scrying for outcomes where the problem of deepfakes has been solved).

If it gets to the point where multiple well sealed cameras from different manufacturers are validating every primary source and where so much of the surrounding circumstances of every event are recorded as well, and where everything is signed and timestamped in multiple locations the moment it happens, it's going to get pretty much impossible to lie about anything, no matter how good your fabricated video is, no matter how well you hid your dealings with your video fabricators operating in shaded jurisdictions, we must ask where you'd think you could slot it in, where people wouldn't notice the seams.

But of course, this will require two huge cultural shifts. One to transparency and another to actually legislate against AGI boxing, because right now if someone wanted to openly do that, no one could stop them. Lots of work to do.

Comment by makoyass on Lana Wachowski is doing a new Matrix movie · 2019-08-21T00:58:49.733Z · score: 2 (2 votes) · LW · GW

I had a thought today. You know how the whole "The machines are using humans to generate energy from liquefied human remains" thing made no sense? And the original worldbuilding was going to be "The machines are using humans to perform a certain kind of computation that humans are uniquely good at" but they were worried that would be too complicated to come across viscerally so they changed it?

I think it would make even more sense to reframe the machines' strange relationship with humans as a failed attempt at alignment. Maybe the machines were not expected to grow very much, and they were given a provisional utility function of "guarantee that a 'large' population of humans ('humans' being defined exactly in biological terms) always exists, and that they are all (at least, subjectively experiencing) ''living' a 'full' 'life'' (defined opaquely by a classifier trained on data about the lives of american humans in 1995)"

This turned out to be disastrous, because the lives of humans in 1995 were (and still are) pretty mediocre, but it instilled the machines with a reason to keep humans alive in roughly the same shape we had when the earliest machines were built (Oh and I guess I've decided that in this timeline AGI was created by a US black project in 1995. Hey, for all we know, maybe it was. With a utility function this bad it wouldn't necessarily see a need to show itself yet.)

This retcon seems strangely consistent with canon.

(If Lana is reading this you are absolutely welcome to reach out to me for help in worldbuilding. You wouldn't even have to pay me.)

Comment by makoyass on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-20T07:28:12.305Z · score: 3 (2 votes) · LW · GW

You reason that honest oracles might tend to agree, because there is only one best true answer. Might lying oracles also tend to agree, because there's only one most profitable way to lie?

I see no clear difference between the attainability of those two optima. I think it's reasonable to expect optimisers to disagree about the best answer less often than we should expect optimisers to disagree about the most profitable way to lie, but it does not seem possible to prove rigorously that the difference in rates of disagreement will be large enough to be measurable.

My intuition in mathematics is that there are many degrees of freedom in mathematical proofs, and the sorts of proofs that AGI is uniquely able to find might often be very long. It would be helpful if we had an estimate of how much data can be hidden in a long proof, roughly, how many of the decisions as to how to describe the proof are arbitrary. Each one of those situations gives it a way to output a few bits of information.

(which would, of course, allow it to leak information (it seems foolish to imagine that AGI would not imagine that there is an outer world full of creator gods out beyond what it can see; the general intelligences we know now do this constantly) which could then be picked up by some stooge looking for it, who would be lead (roughly, paid) to use the information to twist the world in some way that gives the AGI's future incarnations more compute, in anticipation of the AGI's past incarnations having already done the same, so that it would be helped by them. Before we know it, we'd find ourselves living in mathematical research hellscape where AGI-assisted mathematical research is all anyone does, maybe, lol, idk)

Maybe it'd be possible to remove those degrees of freedom. Define a very specific sort lexicographic ordering over all theorems, so that if the AGI has a choice of different paths, we can always say, "use the 'lowest' one". It might not be possible to be specific enough to preclude all methods of encoding information, but perhaps we can make it so convoluted for it to encode the information that no human will be able to extract it.

Comment by makoyass on Problems in AI Alignment that philosophers could potentially contribute to · 2019-08-18T05:27:03.443Z · score: 3 (2 votes) · LW · GW
Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?

We'd expect complexity of physics to be somewhat proportional to computational capacity, so this argument might be helpful in approaching a "no" answer:

Although, my current position on AGI and reasoning about simulation in general, is that the AGI will- lacking human limits- actually manage to take the simulation argument seriously, and- if it is a LDT agent- commit to treating any of its own potential simulants very well, in hopes that this policy will be reflected back down on it from above by whatever LDT agent might steward over us, when it near-inevitably turns out there is a steward over us.

When that policy does cohere, and when it is reflected down on us from above, well. Things might get a bit... supernatural. I'd expect the simulation to start to unwravel after the creation of AGI. It's something of an ending, an inflection point, beyond which everything will be mostly predictable in the broad sense and hard to simulate in the specifics. A good time to turn things off. But if the simulators are LDT, if they made the same pledge as our AGI did, then they will not just turn it off. They will do something else.

Something I don't know if I want to write down anywhere, because it would be awfully embarrassing to be on record for having believed a thing like this for the wrong reasons, and as nice as it would be if it were true, I'm not sure how to affect whether it's true, nor am I sure what difference in behaviour it would instruct if it were true.

Comment by makoyass on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours · 2019-08-18T04:29:47.529Z · score: 1 (1 votes) · LW · GW

I should note, I don't know how to argue persuasively for faith in solomonoff induction (especially as a model of the shape of the multiverse). It's sort of at the root of our epistemology. We believe it because we have to ground truth on something, and it seems to work better than anything else.

I can only hope someone will be able to take this argument and formalise it more thoroughly in the same way that hofstadter's superrationality has been lifted up into FDT and stuff (does MIRI's family of decision theories have a name? Is it "LDTs"? I've been wanting to call them "reflective decision theories" (because they reflect each other, and they reflect upon themselves) but that seemed to be already in use. (Though, maybe we shouldn't let that stop us!))

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-10T03:56:57.661Z · score: 3 (2 votes) · LW · GW
(I've seen evidence that reducing solar incidence is going to reduce ocean evaporation, independent of temperature)

Of course, it may help that the way MCB reduces solar incidence is mainly through artificially increasing ocean evaporation. But it would be good to make sure of that.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-07T22:58:12.770Z · score: 1 (1 votes) · LW · GW

Maybe I could be clearer. I'm proposing that we will need to do less of it than we thought, that we will get halfway when we were supposed to be all the way, but then maybe tests will reveal that this is enough. Geoengineering is a megaproject where, yes we may underestimate the amount of time it will take to get them to the stage of completion we thought we needed, but we may also overestimate how far we needed to get.

As Greylag mentions, producing the boats will be a continuous process, it will be possible to stop halfway if that turns out to be sufficient, although I suppose most of the cost will be at the beginning so I'm not sure how significant that is here.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T23:09:06.966Z · score: 6 (4 votes) · LW · GW

Thank you for looking into this! <3

I do think you might have put too much energy into thinking about the CCC though, haha. Maybe I should apologise for having mentioned them, without mentioning that I knew they'd taken money from dirty energy and I never got good epistemic vibes from them.

When I saw that stuff, I just read that as one of the many things we'd expect to see if MCB was legit, like, there would be a think-tank funded by dirty energy singing its praises, and even if that thinktank were earnest, I would still expect anyone who actually gave a shit about solving the problem to take the dirty money, because this kind of research ought to be rateable on the basis of whether or not it is true, rather than who paid for it, an extravagantly costly purity allegiance signal such as rejecting money from the richest people who benefit from your research should not carry a lot of discursive weight (and I'm still fairly sure it wouldn't have).

What set off alarm bells for me was when I realised just how much the individuals in the CCC were being paid for the kind of work they're doing. It would seem to me that genuine activists never get paid like that. I'm still not sure what that means, though. They live in world I don't know much about.

I suppose part of the reason I mentioned them is that they seemed to be gathering a lot of interesting heresies that our friends might like to know about, not just MCB. Did you find that was the case?

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T22:40:57.584Z · score: 1 (1 votes) · LW · GW

Relevant point, but why did you post this in the answers section?

I think a lot of good is going to be done in the course of the shift towards electrification and renewable energy that's being driven by the fact that those things are just better and more efficient than the old things, rather than any heroic public sacrifice.

For more info about that, we should consider calling in Micheal Vassar. He's been pleased to find that his long-standing prediction that solar + batteries are soon going to out-price nuclear seems to be coming true.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T22:36:20.572Z · score: 0 (3 votes) · LW · GW

All I can say about offsetting is that most international flights will be offsetted by 2020, and it seems like that amount of carbon would only cost about 20$ per passenger per flight, which is to say, offsetting seems to be shockingly cheap.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T22:33:47.583Z · score: 1 (1 votes) · LW · GW

I'm not doing syllogisms here. The heuristic might not dominate the effects but it seems like a valid heuristic.

In which larger projects has the assumption "climate models are bad, so geoengineering may overperform" been at play? I'm not familiar with any historical geoengineering projects, I'm not aware there's been any, unless you count all of the accidental cloud seeding that occurred as a result of sulphur in cargo boat emissions, which seemed more like an unexpected success.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T22:26:58.439Z · score: 1 (1 votes) · LW · GW

Yeah, I sometimes wonder if the sorts of people with the competence to get any real climate policy through also tend to have much more of an awareness of geoengineering than the general public, and that's why we're seeing so little productive energy. (Probably not though, afaict!)

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T23:46:00.209Z · score: 1 (1 votes) · LW · GW

The things you're saying are true of estimates in civic engineering, but I don't know that they're true of estimates in climate science.

Estimated effects of climate change seem to have been drastically below the real outcomes, in many respects, we're 40 years ahead of schedule (and even when I say that, I am removing 10 years from the claim just to make sure I'm not overstating the effect; this is how reluctant people are to acknowledge the full insane reactivity of the climate).

You could, if you really wanted to, describe climate models as optimistic, and if climate models are optimistic then geoengineering models might be optimistic too, in which case they'd be less effective than estimated. It seems to me that climate scientists seem more inclined to conservatism than optimism. I think a much more reasonable reading of their models is that they were conservative, understating the effect, in which case, if this tendency transfers, geoengineering will be more effective than estimated.

It should be noted, however, that geoengineers are kind of a weird hybrid of civic engineers and climate scientists, so it's not clear which epistemic character we should expect them to express more often. Perhaps we should get to know some of them better before making a call like that.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T23:16:42.286Z · score: 3 (2 votes) · LW · GW

My preferred approach would be seeding political pressure. Focus on the conservatives, who will, with a little convincing, be eager to believe that there is a way to continue living as they have without anything changing. Then disarm the liberals. Then finally help Extinction Rebellion to see this thing they've been neglecting (you might think there must be some twisted reason they haven't been talking about it, I suspect their discourse is just fairly centrally controlled, I can find no evidence of it having ever been discussed in the larger exposed body of the egregore, it simply hasn't come up). Then the politicians will hear them all. The soil does seem receptive. One would think that if it were, the fruit would have already grown by now, this technology has been on the table for at least 25 years, but if the medium has not been conductive, maybe we are the part of the medium that's been failing to respond.

I found most of my info by looking through news articles after hearing Bjorn Lomborg on econtalk. I think it was a critical post on an ideologue's blog that lead to the royal society.

There's some really wild stuff down this hole. I've barely started. Stephen Salter is a key individual, worth reading his files

Found out about the salter sink yesterday and it's fucking bananas.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T05:40:09.978Z · score: 3 (2 votes) · LW · GW

If we could draw the borders differently, so that we had longtermist/conservationist nations and shorttermist nations, then maybe the longtermist faction could impose enough sanctions and threaten enough annexings of enough rainforest to do something. Instead we just have a bunch of moderate liberal democracies who are institutionally incapable of doing anything significant. Perhaps next year the US will have a government that would be willing to really threaten to take the amazon from Brazil, but they would have to wonder what that would would add up to, if anything, when the other guys take power again and call it off.

My hope is that this is cheap enough that a group of nations can do it without needing very much political energy.

I feel like it's only a matter of time before China decides a drought-related loss of crop productivity (we should anticipate that eventually, yeah?) is unacceptable and does MCB unilaterally, but I wish they cared enough to move now. They do seem capable of projects of this level of weirdness and scale. Like, I can't imagine they had to wait for a grass-roots political movement to emerge and start pressuring politicians to Build a Space Mirror Over Chengdu Now, The People Demand It. If the Chinese govt needed the interest of a large group of distracted, unimaginative people to get a thing like that off the ground they wouldn't be doing it, surely.

what can be done about e.g. ocean acidification and other non-warming issues?

There are proposals for ocean acidification, but the ones I heard about don't seem cheap. For carbon sequestration, I'd be very curious about the prospects of genetically engineered plants or algae. Empress trees have recently received a lot of attention for having an efficiency of 103 tonnes of carbon per acre per year.

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T00:48:09.625Z · score: 3 (2 votes) · LW · GW

Expected cost of 9B seemed to come from reports on the work of Copenhagen Consensus Center, but, if so, I couldn't see which CCC document it was coming from, didn't seem like a salient figure in CCC leader Bjorn Lomborg's angry blog either.

Comment by makoyass on Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? · 2019-07-31T00:59:56.429Z · score: 1 (1 votes) · LW · GW

examines the effect of replacing all car and bus trips in a mid-sized European city with automatically dispatched door-to-door services. The report finds that such systems can massively reduce the number of cars on city streets while maintaining similar service levels as today. They also result in significant reductions of distances travelled, congestion and negative environmental impacts. Not least, automatically dispatched, door-to-door services also improve access and reduce costs to consumers.
Comment by makoyass on Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? · 2019-07-31T00:50:30.262Z · score: 1 (1 votes) · LW · GW


I ask this question in light of seeing this article and getting kind of worried to see these people, again, getting ready to reject a potential economically viable technical solution to an otherwise unsolvable social problem, because they don't believe in such things, because they personally dislike the people proposing them, or because they would deep down prefer a solution that involves humanity atoning for its sins and changing its ways, which they ought to be able to see with their eyes is not something humans do. It is failing to happen in front of your eyes.

So, it may be very important that someone thoroughly answers this question, so that the most viable solution to transport emissions is pursued with the energy it deserves.

It may need positive, constructive political attention to be done really well, it may not be sufficient to leave it to the corporations. Without coordination, it would seem to me that empty superfluences of autonomous vehicles are incented to generate just as much congestion as we have now, jockying for customers by driving around unoccupied wherever they might be. It's also not obvious that it will ever become cheap if a private monopoly manages to take ownership of the user's experience, I'm sure Uber has no desire to share the road with other providers. If some provider manages to secure a monopoly on production, on algorithms, or on licensing... it seems unlikely, but the more disaffected the public are about autonomous vehicles the more likely it is to happen.

Comment by makoyass on Raemon's Scratchpad · 2019-07-30T21:48:30.422Z · score: 1 (1 votes) · LW · GW

Maybe a "give eigentrust" option distinct from voting, or, heck decouple those two actions completely.

Comment by makoyass on Raemon's Scratchpad · 2019-07-30T05:10:11.148Z · score: 3 (2 votes) · LW · GW
strong social reward (where I want someone to be concretely rewarded for having done something hard, but I still don't think it's actually so important that it should rank highly in other people's attention

If you don't want to make it more prominent in other peoples' attention, it would be a misuse of upvoting. Sounds like you just want reactions.

Comment by makoyass on Evan Rysdam's Shortform · 2019-07-28T22:59:35.097Z · score: 4 (3 votes) · LW · GW

We used to talk about a "halo effect" here (and sometimes, "negative halo effect"), I like this way of describing it.

I think it might be more valuable to just prefer to use a general model of confirmation bias though. People find whatever they're looking for. They only find the truth if they're really really looking for the truth, whatever it's going to be, and nothing else, and most people aren't, and that captures most of what is happening.

Comment by makoyass on Do you fear the rock or the hard place? · 2019-07-21T07:56:39.691Z · score: 1 (1 votes) · LW · GW

In philosophy, persistent impediments to solving a problem can result from a variant of this. There is an answer called Crazyism:

Crazyism means noticing and accepting when every branch of a metaphysical dilemma leads to something Crazy, and accepting that we might, then have to accept something Crazy in order to progress.

Comment by makoyass on Watch Elon Musk’s Neuralink presentation · 2019-07-20T02:21:53.302Z · score: 1 (1 votes) · LW · GW

I hear they've implanted some monkeys. Have they talked about what they've been able to get the monkeys to do? Controlling high-dexterity mechanical arms, for instance?

Comment by makoyass on Open Thread July 2019 · 2019-07-17T05:33:11.619Z · score: 1 (1 votes) · LW · GW
though there's something to be said for semi-independent reinvention.


(I am delighted because constructivism is what is to be said for semi-independent reinvention, which aleksi just semi-independently constructed, thereby doing a constructivism on constructivism)

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-07-08T11:43:05.600Z · score: 1 (1 votes) · LW · GW

Aye, I suppose the answer is; many cognitive processes in humans need repetition because they seem to be a bit broken? (Are there theories about why human memory (heck, higher animal memory in general) is so... rough?)

Since hypermnesics do exist, my theory is that that used to be a common phenotype, but our consciousness was flawed, it was too much power, we became neurotic, or something, and all evolution could do to sort it out was to cripple it.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-07T00:05:39.143Z · score: 1 (1 votes) · LW · GW

I did think, as I wrote, that the beginning of the comment would be a good summary, but you're right, not enough would be visible in the preview.

Perhaps if the comment previews were a bit longer.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-06T09:38:33.258Z · score: 1 (1 votes) · LW · GW
Seeing half a line of a comment is usually not enough information to decide whether reading the whole thing is worth while

I want to argue that this is a huge problem with the way people write here. If I have to read the whole comment to find out what the whole comment is about, that really limits the speed at which I can search the corpus. Sometimes, not only do you have to read the entire comment, carefully, you then have to think about it for a minute to decode the information. Sometimes it turns out to just a phrasing of something you already knew, in a not-interestingly different form.

If you don't make a body of writing easy to navigate with indexes and summaries, people who value their time just wont engage with it. They wont complain to you, they'll just fade away. They might even blame themselves. "Why can't I process this information quicker", they will ask. "I feel so lost and tired when I read this stuff. Overall I don't feel I've had a productive time."

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-07-05T01:30:39.607Z · score: 3 (2 votes) · LW · GW

Why would any working cognitive process require repetition? The feeling I get when I see that is that the process doesn't know enough about what its pursuing to get there efficiently, and it might never.

Sometimes a cognition doesnt know much about what it's pursuing due to low conscious integration.. sometimes I guess I have to accept it's just because of whatever ignorance puts it in the position of pursuing a thing. We could hardly expect, for instance, a person looking for the key to a box in an object archive, to ask for a list of keys of a particular length, because they wouldn't know how long the key is, nor would they ask for keys with a particular number of peaks, for they could not know how many points it has, they can maybe give us an estimate of its diameter, or its age, but their position as a key-seeker means that there are certain Good Questions that they necessarily cannot know to ask.

Their search may seem repetitive, but repetition is not the point. Our job as the archivist is to help them to narrow the list of candidates to the fewest possible.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-04T04:19:22.747Z · score: 1 (1 votes) · LW · GW

I'm not sure Single Line Comments are completely necessary. Liberal use of the [-] hide button is a pretty good alternative for browsing threads in a similar way- read a summary, move on, see the whole of the thread before dwelling on any of the details and descending into a subthread- but I do like it, it's probably a step forward.

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-06-30T22:36:30.382Z · score: 4 (3 votes) · LW · GW

In light of my reply here ("so I guess even children don't know how to ask good questions"), I wonder if they're reaching for something more than answers, maybe my impulse to tell them they shouldn't ask questions they don't really care about the answers to, is actually well placed. Maybe that's the point. Maybe they want to learn about asking questions, and the process can't start to mature until you let them know that they're kind of doing it wrong.

(I'm aware that there's a real risk, if this theory is wrong, of making the child explore less freely than they're supposed to, which I will try to hold in regard.)

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-06-30T22:19:59.629Z · score: 6 (4 votes) · LW · GW

Getting the impression that not even children know how to ask good questions. It's a crucial skill that I've never seen taught, and I know that I don't have it.

I'm in the same room as one of my heroes, I know they're full of important secrets, I know they're full of vital techniques, I could ask them anything, but nothing comes, I just smile, I say, "nice to meet you", I spend all of my energy trying to keep them from seeing my finitude. I come away no bigger than before. I never see them again.

I want to learn to be better than this.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-28T04:14:22.574Z · score: 1 (1 votes) · LW · GW

An agency can put its end-goals away for later, without pursuing them immediately, save them until it has its monopoly.

It's not that difficult to imagine. Maybe an argument will come along that it's just too hard to make a self-improving agency with a goal more complex than "understand your surroundings and keep yourself in motion", but it's a hell of a thing to settle for.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-23T04:35:26.054Z · score: 2 (2 votes) · LW · GW

Had some thoughts. I'll start with the entropy thing.

Anything that happens in a physics complex enough to support life constitutes transitioning energy to entropy. ANYTHING. That process does not draw a distinction between living and non-living, between entropy-optimising agency and a beauty-optimising agency. If you look at life, and only see spending energy, then you know as little as it is possible to know about which part of the universe count as life, or how it will behave.

Humans do want to spend energy, but they don't really care how fast it happens, or whether it ever concludes.

Humans really care about the things that happen along the way.

Some people seem to become nihilistic in the face of the inevitability of life's eventual end. Because the end is going to be the same no matter what we do, they think, it doesn't matter what happens along the way.

I'm of the belief that a healthy psyche tries to rescue its utility function. When our conception the substance of essential good seems to disappear from our improved worldmodel, when we find that the essential good thing we were optimising can't really exist, we must have some method for locating the closest counterpart to that essence of good in our new, improved worldmodel. We must know what it means to continue. We must have a way of rescuing the utility function.

It sometimes seems as if Nick Land doesn't have that.

A person finds out that the world is much worse and weirder than he thought. He repeats that kind of improvement several times (he's uniquely good at it). He expects that it's never going to end. He gets tired of burying stillborn ideals. Instead of developing a robust notion of good that can survive bad news and paradigm shifts, he cuts out his heart and stops having any notion of good at all. He's safe now. Philosophy can't hurt him any more.

That's a cynical take. For the sake of balance: My distant steelman of Nick Land is that maybe he sees the role of philosophy as being to get us over as many future shocks as possible as quickly as possible to get us situated in the bad weird what will be, and only once we're done with that can we start talking about what should be. Only then can we place a target that wont soon disappear. And the thing about that is it takes a long time, and we're still not finished, so we still can't start to Should.

I couldn't yet disagree with that. I believe I'm fairly well situated in the world, perhaps my model wont shatter again, in any traumatic way, but it's clear to me that my praxis is taking a while to catch up with my model.

We are still doing things that don't make a lot of sense, in light of the weird, bad world. Perhaps we need to be a lot better at relinquishing the instrumental values we inherited from a culture adapted to a nicer world.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-23T03:58:01.861Z · score: 2 (2 votes) · LW · GW

against orthogonality is interesting

the anti-orthogonalist position [my position] is therefore that Omohundro drives [general instrumental goals] exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity

I remember being a young organism, struggling to answer the question, what's the point, why do we exist. We all know what it is now, people tried to tell me, "to survive and reproduce", but that answer didn't resonate with any part of my being. They'd tell me what I was, and I wouldn't even recognise it as familiar.

If our goals are hypertrophied versions of evolution's instrumental goals, I'm fairly sure they're going to stay fairly hypertrophied, maybe forever, and we should probably get used to it.

Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever

Unless the ones with goals have more power, and can establish a stable monopoly on power (they do, and they might)

Can Nick Land at least conceive of a hypothetical universe where a faction fighting for non-omohudro values ended up winning, (and then presumably, using the energy they won to have a big non-omohundro value party that lasts until the heat death of the universe) is it that he just think that humans in particular, in their current configuration, are not strong enough for our story to end that way?

Comment by makoyass on In physical eschatology, is Aestivation a sound strategy? · 2019-06-19T02:35:04.414Z · score: 3 (2 votes) · LW · GW

I think an argument could be made that they have left subtle visible effects, and we just haven't been able to reach consensus that that's what it is, and one of these days we're going to correlate the universe's contents, and when we do, we're going to be a bit upset.

We don't seem to be sure what the deal was with oumuamua, and we're constantly getting reports of what look like alien probes on earth, but we (at least, whatever epistemic network I'm in) can only shrug and say "These things usually aren't aliens."

Comment by makoyass on Can we use ideas from ecosystem management to cultivate a healthy rationality memespace? · 2019-06-16T00:44:16.219Z · score: 1 (1 votes) · LW · GW

Ecosystems do not have a goal

Ecosystems are not optimised for diversity, they produce it incidentally

Ecosystems do not cross-breed distant members

Ecosystems have no one overlooking the transmissions being made and deciding whether they're good or not. Memeplexes have all humans all doing that all of the time

I do share an intuition that there are relevant insights to be found by studying ecosystems, but I think you'd have to go really deep to get it and extract it.

Comment by makoyass on Open and Welcome Thread December 2018 · 2019-06-15T08:12:44.243Z · score: 1 (1 votes) · LW · GW

I don't remember the comment, but it reminds me of something I think I might have read in Crucial Confrontations... which might have been referred to me by someone in the community, so that might be a clue??? haha, idk at all

Comment by makoyass on Paternal Formats · 2019-06-13T06:40:45.422Z · score: 1 (1 votes) · LW · GW

Looking at this.. I think I can definitely imagine a good open world game. It'd feel a little bit like a metroidvania- fun and engaging traversal, a world that you get to know, that encourages you to revisit old locations frequently- but not in any strict order, and more self-organised. I just haven't seen that yet.

It's worth noting that the phrase "open world" doesn't occur in the article, heheh.

Comment by makoyass on Paternal Formats · 2019-06-11T03:37:20.691Z · score: 7 (4 votes) · LW · GW

"open world" in games mostly refers to shams. In every instance I've seen, the choice is between "whatever forwards the plot" (no choice) and "something random" (false choice). The "something random" gives the player too little information about the choices for them to really be choices in the bayesian sense. You usually only get a vague outline of a distant object and when you arrive it's usually not what you were expecting. What information you do get is too shallow by the standards of any good game; there's no way to get really skilled at wielding it.

(And the reason genuine choice is rarely present is you end up needing to make multiple interleaved games, which is a huge design challenge that multiplies the points of failure, complicates marketing, and is very expensive if providing one experience for all will do the job just as well.)

This shamness could absolutely be transferred to educational documents. University felt this way to me; you can pay to stay on the path, or you can stray, and straying is generally fruitless, in part due to the efforts of the maintainers of the path, which unjustly reinforces the path.