Posts

Mako's Notes from Skeptoid's 13 Hour 13th Birthday Stream 2019-10-06T09:43:32.464Z · score: 6 (2 votes)
The Transparent Society: A radical transformation that we should probably undergo 2019-09-03T02:27:21.498Z · score: 8 (6 votes)
Lana Wachowski is doing a new Matrix movie 2019-08-21T00:47:40.521Z · score: 5 (1 votes)
Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours 2019-08-18T04:22:53.879Z · score: 0 (9 votes)
Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? 2019-08-05T00:12:14.630Z · score: 77 (45 votes)
Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? 2019-07-31T00:16:59.415Z · score: 10 (5 votes)
If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge? 2019-07-12T01:40:48.999Z · score: 5 (3 votes)
In physical eschatology, is Aestivation a sound strategy? 2019-06-17T07:27:31.527Z · score: 18 (5 votes)
Scrying for outcomes where the problem of deepfakes has been solved 2019-04-15T04:45:18.558Z · score: 28 (15 votes)
I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it 2019-04-01T03:19:44.080Z · score: 20 (7 votes)
Is there a.. more exact.. way of scoring a predictor's calibration? 2019-01-16T08:19:15.744Z · score: 22 (4 votes)
The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter 2019-01-11T22:26:29.887Z · score: 18 (7 votes)
The end of public transportation. The future of public transportation. 2018-02-09T21:51:16.080Z · score: 7 (7 votes)
Principia Compat. The potential Importance of Multiverse Theory 2016-02-02T04:22:06.876Z · score: 0 (14 votes)

Comments

Comment by makoyass on The Math Learning Experiment · 2019-11-09T03:33:27.557Z · score: 1 (1 votes) · LW · GW

I'd like to see how "it's conceptual engineering" vs "it's conceptual discovery" mentalities correlate with productivity. Engineering mentality seems obviously more pragmatic and more realistic, but Discovery mentality seems much more likely to attract passion (which, for humans, fuels productivity).

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-08T03:02:04.438Z · score: 1 (1 votes) · LW · GW

Hahah. That's a funny thought. I don't think it does lead inevitably to toxicity, though. I don't think the incentives it imposes are really that favourable to that sort of usage. There's a hedonic attractor for venomous behaviour rather than a strategic one.

Right now the char limit isn't really that hostile to dialogue. There's a "threading" UI (which hints that it's okay to post many tweets at once), so it's now less like "don't put any effort into your posts" and more like "if you're gonna post a lot, try to divide it up into small, digestible pieces".

Comment by makoyass on Open & Welcome Thread - November 2019 · 2019-11-07T04:07:15.693Z · score: 3 (2 votes) · LW · GW

Twitter's usefulness mostly comes from the celebrities being there. The initial reason the celebrities were attracted probably had to do with the char limit and the pretext it establishes: that they are not expected to read too much or to write too much.

You'd see on reddit (at least, back when these things were being determined) that a lot of celebrities, when they did AMAs, seemed to feel obligated to respond to every comment with a comment of similar length. Sometimes they wouldn't wait and see which comments were getting the most votes and answer those; they'd just start with the first one that hit their mailbox and work down the list until they ran out of time. My guess is that non-internet-native extroverts really needed a platform that would advise them about what's expected and reasonable.

But I think, now that we're all learning that we must moderate our consumption, the celebrities (and most other people) remain on twitter mainly because the celebrities were there in the first place. I don't think we need the char limit any more. I think maybe we're ready for the training wheels to come off.


But there's another reason redditlikes don't really work for a general audience. Mainly specifics about how voting tends to work. There is no accommodation of subjectivity. Everyone sees the same vote ranking even though different people have different interests and standards. The problem is partially mitigated by separating people into different subreddits, but eventually, general subreddits like /r/worldnews, /r/technology, /r/science or even /r/futurism will grow large enough and diverse enough that people won't be able to stand being around each other again. Every demographic other than the largest, most vote-happy one will have to leave. I really want everyone to be able to join together in the same conversation, but when the top-ranked comments always turn out to be "[outgroup lies]" or "[childish inanity]", that can't happen. The outgroup wants to see their lies, and the children want to see their inanity, and I think they should be able to, but good adults need to be able to hear each other too, or else they'll just move to GoodAdultSite, and then the outgroup won't be able to find refutations of their lies even when they look for them, and the children will not receive the advice they need even when they call out for it.

(I have some ideas as to how to build a redditlike that might solve this. If anyone's interested, speak up.)
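
For concreteness, here's a naive sketch of one subjectivity-aware ranking scheme; it's purely illustrative (all names are hypothetical, and this isn't necessarily the ideas alluded to above): weight each vote on an item by how much the voter's voting history agrees with the viewer's, so different viewers see different rankings.

```python
# Naive sketch: weight each vote by voter-viewer agreement, so rankings
# become subjective. All names here are hypothetical.

def agreement(viewer_votes, voter_votes):
    """Fraction of co-voted items on which two users voted the same way."""
    shared = set(viewer_votes) & set(voter_votes)
    if not shared:
        return 0.5  # no shared history: neutral weight
    return sum(viewer_votes[i] == voter_votes[i] for i in shared) / len(shared)

def personal_score(item_votes, viewer_votes, all_votes):
    """Score one item for one viewer; like-minded voters count for more.

    item_votes: {voter: +1 or -1} votes on this item.
    viewer_votes: {item: +1 or -1} the viewer's own voting history.
    all_votes: {user: {item: +1 or -1}} everyone's voting history.
    """
    return sum(vote * agreement(viewer_votes, all_votes[voter])
               for voter, vote in item_votes.items())

history = {"alice": {"x": 1, "y": 1}, "bob": {"x": -1, "y": 1}}
votes_on_z = {"alice": 1, "bob": -1}
# This viewer's history matches alice's, so alice's upvote dominates:
print(personal_score(votes_on_z, {"x": 1, "y": 1}, history))  # 0.5
```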

Comment by makoyass on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T23:04:06.828Z · score: 3 (2 votes) · LW · GW

My other comment was mostly critical but I just want to add that I really enjoy this kind of post. Any conversation about the economics of future technology is fun imo.

Comment by makoyass on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T22:03:10.862Z · score: 2 (2 votes) · LW · GW

You need to demonstrate that the cost of division {developing the coupling system, the extra materials for building with the coupling system, and having the two parts be unable to share physical mechanisms} will be less than the benefits of having smaller/cheaper tugs for the few people who really have a use for tugs.

And I think most people don't really have enough of a use for tugs to overcome economies of scale for the near term. The majority of trips will take place with non-custom cabins:

  • The ordinary rider does not need custom cabins. Consider the amount of energy people put into meaningfully customising their homes/apartments in practice (not that much), then scale it down by 20x to account for the fact that people spend a lot less time in transit than they do at home. That's how much people will care most of the time. I should examine some of the use cases, though:
    • Unmanned delivery services only want tugs
    • People who use a wheelchair want a custom cabin they can just roll into
    • People who want to do morning routine stuff during their commute want a cabin that supports that stuff? But wouldn't the road movement interfere too much? I mean, have you ever stood in a bus? Imagine having to stay upright through all the shifting and jolting while showering, putting on pants, or eating a meal. If this were a thing, though, enough people would want it that they could just build an entirely custom car.
    • Big visiting service station things?
      • Probably bad example: "mobile libraries". A lot of these thingies seem less practical than just having a fixed building provide the service and moving people or goods between them.
      • Hm visiting remote-operated surgery theatres? That could be pretty badass
  • Storing your very own personal cabin, once you've arrived at your destination, will be an inconvenience. It would mean either sending it to a parking locker (which, if it's in the urban center, you will have to pay a non-negligible amount for it to reside in), or all the way home again, only to have to wait for it to come back out when you're ready to commute back. I think most people would stop bothering.

Hmm I was gonna say the tugs wouldn't be that much cheaper than common single-occupant cars because they'd need to have enough mass to gain traction on the road, but it occurs to me, if you could have the tug go under the cabin to some extent, then jack it up a bit, it could use the weight of the cabin for traction, so assuming sufficiently dense batteries and motors (can we assume?) it could be pretty small. The heavier the cabin, the more traction it needs, but also the more traction it gets. That's pretty neat.
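
To sanity-check that, here's a crude back-of-the-envelope model. All the numbers and the friction coefficient are made-up illustrative assumptions, not real vehicle parameters:

```python
# Rough sketch of the jacked-up-tug traction idea. All numbers are
# illustrative assumptions, not real vehicle parameters.

MU = 0.7     # assumed tyre-road friction coefficient
G = 9.8      # gravitational acceleration, m/s^2

def max_acceleration(tug_mass, cabin_mass, weight_transfer):
    """Peak acceleration (m/s^2) the tug can impart without wheel slip.

    weight_transfer: fraction of the cabin's weight resting on the
    tug's driven wheels once it jacks the cabin up.
    """
    normal_load = (tug_mass + weight_transfer * cabin_mass) * G
    traction_force = MU * normal_load   # limit of static friction
    total_mass = tug_mass + cabin_mass  # everything must be accelerated
    return traction_force / total_mass

# A 300 kg tug pulling a 700 kg cabin:
print(max_acceleration(300, 700, 0.0))  # no weight transfer: ~2.1 m/s^2
print(max_acceleration(300, 700, 0.8))  # cabin weight on tug: ~5.9 m/s^2
```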


For completeness, I should link a previous post about the economics of autonomous cars I did (which has comments, and links in turn to another post I did) https://www.lesswrong.com/posts/QDekD68bQiwuAJB8G/will-autonomous-cars-be-more-economical-efficient-as-shared

Comment by makoyass on The Parable of Predict-O-Matic · 2019-11-03T23:47:28.488Z · score: 1 (1 votes) · LW · GW

The category feels a bit broader than "self-fulfilling prophecy" to me, but not by much. I think we should look for a term that gets us away from any impression of unilaterally decided, prophetic inevitability.

has the connotation of command, for me

But that connotation isn't really incorrect! When you make a claim that becomes true iff we believe it, there's a sense in which you're commanding the whole noosphere, and if the noosphere doesn't like it, it should notice you're making a command and reject it.

There is a very common failure mode where purveyors of monstrous self-fulfilling prophecies will behave as if they're just passively describing reality. They aren't. We should react to them as if they're being bossy, intervening, inviting something to happen, asking the epistemic network to behave a certain way.

I think I was initially familiar with the word stipulation mostly from mathematics or law, places where truths are created (usually through acts of definition). I'm not sure how it came to me, but at some point I got the impression it just meant "a claim, but made up, but still true", the genre of claim that we're referring to. The word didn't slot perfectly into place for me either, but its meaning seemed close enough to "truths we create by believing them" that I stopped looking for a better name. We wouldn't have to drag it very far.

But I don't know. It seems like it has a specific meaning in legal contexts that hasn't got much to do with our purposes. Maybe a better name will come along.

Hmm.. should it be.. "construction"? "Some predictions are constructions." "The value of bitcoin was constructed by its stakeholders, and so one day, through them, it shall be constructed away." "We construct Pi as the optimum policy for the model M"

Comment by makoyass on The Parable of Predict-O-Matic · 2019-11-03T05:00:16.610Z · score: 1 (1 votes) · LW · GW

I've noticed that the word "stipulation" is a pretty good word for the category of claims that become true when we decide they are true. It's probably better to try to broaden its connotations to encompass self-fulfilling prophecies than to coin some other word or to name this category "prophecy" or something.

It's clear that the category does deserve a name.

Comment by makoyass on Rohin Shah on reasons for AI optimism · 2019-11-03T02:19:05.283Z · score: 3 (2 votes) · LW · GW
He thinks that as AI systems get more powerful, they will actually become more interpretable because they will use features that humans also tend to use

I find this fairly persuasive, I think. One way of putting it is that in order for an agent to be recursively self-improving in any remotely intelligent way, it needs to be legible to itself. Even if we can't immediately understand its components in the same way that it does, it must necessarily provide us with descriptions of its own ways of understanding them, which we could then co-opt.

This may be useful in the early phases, but I'm skeptical as to whether humans can import those new ways of understanding fast enough to be permitted to stand as an air-gap for very long. There is a reason, for instance, we don't have humans looking over and approving every credit card transaction. Taking humans out of the loop is the entire reason those systems are useful. The same dynamic will pop up with AGI.

This xkcd comic seems relevant https://xkcd.com/2044/ ("sandboxing cycle")

There is a tension between connectivity and safe isolation, and navigating it is hard.

Comment by makoyass on Two questions on a brief piece on Vengeance (sense/urge) · 2019-11-03T01:43:20.180Z · score: 3 (2 votes) · LW · GW
1) Do you find this to be helpful as an examination of some crucial element of the vengeful disposition?

No. It's extremely hard to read. I think it might be getting at revenge as a way of ensuring that there is a logic of peace: an attack on unjust social realities rather than on any material cause of some potential future strife. But if I didn't already have that idea in my head, I wouldn't recognise it here. I feel like it's forcing me to guess something that it could have just said outright with very little prose.

Generally, any discussion of the vengeful disposition that does not build from the new decision theories (functional decision theory, best learned through the Arbital pages about LDT) is going to be needlessly circuitous and is likely to repeat certain mistakes. "The meaning is seemingly illogical", for instance. It doesn't commit to this position, but it doesn't begin to refute it either.

Basically... our new decision theories are an account of rationality under which things like revenge (policies which an agent benefits from holding, but which, when actuated, do not causally bring about future benefits) are not irrational. They are rational. The standard model of rationality (CDT) was wrong. The fact that CDT was regularly doing things that brought about suboptimal outcomes should have been a big clue to people that they were not describing the true dao.

I should emphasise, because this is quite radical: FDT contends that the rationality, or irrationality, of an action is not purely a function of its future consequences, that there has to be much more to it. An action can have negative consequences and still be a crucial part of a rational policy. If you can't justify that claim from the metaphysics of survival, you can't speak with clarity about vengeance policies.

Comment by makoyass on Turning air into bread · 2019-10-30T04:07:15.081Z · score: 1 (1 votes) · LW · GW

Heh.

(Well yeah, eventually we're going to draw a black ball out of the urn. Coal and gas weren't shit next to some of the coordination challenges that're coming up, I'm sure. x-risks aside, space is going to be a mess. I can't wait for kessler syndrome to set in)

thankfully we're learning how to coordinate our population growth to support a good life within the limited carrying capacity of our natural resources better and better over time

In some ways, we are (our technology seems to be greening), but in maybe the most important ways, we haven't changed anything. The global population is still growing faster than ever. Growth seems to slow down under certain conditions, but (and I felt really stupid when I realised this) if a person thinks the utterly mysterious effects of those conditions will sustain for more than three generations, they have forgotten something very basic about what biological organisms are and how they came to be. If we let it go that way, the problem will come back a lot stronger, and our chances of solving it with that different set of people will be close to zero.

I don't like talking about this.

But I'm starting to get the sense that there might be something important down here that nobody is looking at with clear eyes.

Comment by makoyass on Turning air into bread · 2019-10-30T03:35:15.201Z · score: 1 (1 votes) · LW · GW

No stigma. Many more technological solutions to social problems will be needed. For instance, I'm convinced we should be pouring a lot more money into geoengineering.

I imagine that it won't always go like this, because it seems like the amount of matter and energy we have access to is finite. We answered overexpansion with a technology that enabled further expansion. There are metaphysical guarantees that this will not always work. No matter how many false physical constraints we overturn, the second law of thermodynamics seems to guarantee (this is debatable) that we will eventually hit a wall, and we will look back at the mess behind us, and we will ask if this was the fate we really wanted, whether things could have been much better for everyone if we'd slowed down and negotiated back when we were small enough and close enough to manage such a thing.

Comment by makoyass on What's your big idea? · 2019-10-27T04:16:58.473Z · score: 1 (1 votes) · LW · GW

What negative externalities are you thinking of? Maybe it's silly for me to ask you to say, if you're saying they're taboo, but I'm looking over all of the elitist taboos and I don't think any of them really raise much of an issue.

Did I mention that my prototype aggregate utility function only regards adjacency desires that are reciprocated? For instance, if a large but obnoxious fan-base all wanted to be next to a single celebrity author who mostly holds them all in contempt, the system basically ignores those connections. Mathematically, the payoff of positioning a and b close together is min(a.desireToBeNear(b), b.desireToBeNear(a)). The default value for desireToBeNear is zero.
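
A minimal sketch of that reciprocity rule (the names and data structures are hypothetical; it just illustrates the min() payoff above):

```python
# Minimal sketch of the reciprocated-adjacency payoff (hypothetical names).

from collections import defaultdict

# desire[a][b]: how much agent a wants to be near agent b (defaults to 0)
desire = defaultdict(lambda: defaultdict(float))

def adjacency_payoff(a, b):
    """Payoff for placing a and b close together: only the reciprocated
    portion of their desires counts."""
    return min(desire[a][b], desire[b][a])

# An obnoxious fan wanting to be near a contemptuous author:
desire["fan"]["author"] = 10.0  # the author's desire stays at the default, 0
print(adjacency_payoff("fan", "author"))  # 0.0 -- the connection is ignored
```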

P.S. Does the fact that each user desire expression (roughly, the individual utility function) gets evaluated in a complex way that depends on how it relates to the other desire expressions make this not utilitarianism? Does this position, that fitting our desires together will be more complex than mere addition, have a name?

Comment by makoyass on Turning air into bread · 2019-10-27T02:06:34.327Z · score: 7 (5 votes) · LW · GW

It's an important story. Sometimes there are technological solutions to social problems. As reasonable as the prophet Malthus sounded, we didn't heed his warning, we did not repent, we did not learn how to coordinate our population growth to support a good life within the limited carrying capacity of our natural resources. A wizard made a new gizmo and we all got away with it.

There's something very unsatisfying about it.

And I imagine it won't always be like this.

Comment by makoyass on The Pit · 2019-10-26T23:41:02.784Z · score: -3 (6 votes) · LW · GW

I don't think I got anything out of reading this. Maybe that means it's not for me, or that what it has to give, I already have, or that what it has given me is subtle and I will not know its value until later.

Whichever it is, it is a mark of bad writing to be this hard to evaluate. You cannot form a productive community of recommendation around writing that has this quality.

Comment by makoyass on Jacy Reese (born Jacy Anthis)? · 2019-10-26T23:22:46.433Z · score: 1 (1 votes) · LW · GW

If this post gains enough upvotes, or gets curated to the front page, is it still self-published?

Comment by makoyass on What's your big idea? · 2019-10-26T23:03:01.451Z · score: 4 (2 votes) · LW · GW

You make a very strong point that I think I can wholly agree with, but I think there is more here we have to examine.

It's sometimes said that the purpose of public education is to create the public good of an informed populace (sometimes, "fascism-resistant voters". A more realpolitik way of putting it is "a population who will perpetuate the state", which is good exactly when the state is good). So they teach us literature and history and hope that this will create a cultural medium whose constituents can communicate well and never repeat their civilization's past mistakes. If it works, the benefits to the commons are immeasurable.

There isn't an obvious upper bound on curriculum size where enriching this commons would necessarily stop being profitable. The returns on the sophistication of a well-designed interchange system are greater than linear in the specification size of the system.

It might not be well designed. I don't remember seeing anything about economics or law (or even, hell, driving) in the public curriculum, and I think that might be the real problem here. It's not that they teach too much, it's that they don't understand what kind of things a creator of the public good of a good public is supposed to be teaching.

Comment by makoyass on What's your big idea? · 2019-10-22T02:22:42.420Z · score: 1 (1 votes) · LW · GW

Yeah. "Replace the default beneficiaries of avoidable wars with good people who use the money for good things" is a useful civic method to bear in mind but probably far from ideal. Taxation is fine, you need to do it to fund the commons, but avoidable wars seems like a weird place to draw taxes from, which nobody would consciously design? Taxes that would slow down urbanisation (by making the state complicit in increases in urban land price/costs of urban services) sound like a real bad idea.

My proposed method is, roughly, using a sort of reciprocal, egalitarian utilitarianism to figure out a good way to arrange everyone who owns a share in the city (shares will cost about what it costs to construct an apartment. Maybe different entry prices for different apartment classes.. although the cost of larger apartment tickets will have to take into account the commons costs that lower housing density imposes on the labour market), and to grant leases to their desired businesses/services. There shall be many difficulties along the way but I have not hit a wall yet.

Comment by makoyass on Why Ranked Choice Voting Isn't Great · 2019-10-22T02:01:17.890Z · score: 1 (1 votes) · LW · GW

So you agree that it's a voting system.

I don't think it's intuitive that "give me a full account of all of your desires" won't end up working better than "give me an extremely partial picture of your desires".

Comment by makoyass on Why Ranked Choice Voting Isn't Great · 2019-10-21T03:21:47.259Z · score: 1 (1 votes) · LW · GW

Is utilitarianism an ordinal voting system?

Comment by makoyass on What's your big idea? · 2019-10-21T01:03:08.816Z · score: 1 (1 votes) · LW · GW

Very few of these are controversial here. The only ones that seem controversial to me are

  • Schools teach too much, not too little

...

That's all, actually. And I'm not even incredulous about that one, just a bit curious.

Although aging and death are terrible, I don't think there's much point in building a movement to stop them. AGI will almost certainly be solved before even half of the processes of aging are.

Comment by makoyass on What's your big idea? · 2019-10-21T00:50:48.010Z · score: 7 (4 votes) · LW · GW

My past big ideas mostly resemble yours, so I'll focus on those of my present:

Most economic hardship results from avoidable wars: situations where players must burn resources to signal their strength of desire or power (will). I define Negotiations as processes that reach outcomes similar to, or better than, those of their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.

Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So, I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't carry this alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)

Bidding wars are a fairly large subclass of avoidable wars. The corresponding negotiation, for an auction, would be for the players to try to measure their wills out of band, then for those found to have the least will to commit to abstaining from the auction. (People would stop running auctions if bidders could coordinate well enough to do this, of course, but I'm not sure how bad a world without auctions would be; I think auctions benefit sellers more than they benefit markets as a whole, most of the time. A market that serves both buyer and seller should generally consider switching to Vickrey auctions, at the least.)
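
For reference, a minimal sketch of the Vickrey (second-price sealed-bid) rule, under which bidding one's true value is a dominant strategy:

```python
# Minimal sketch of a Vickrey (second-price sealed-bid) auction.

def vickrey_auction(bids):
    """The highest bidder wins but pays the second-highest bid, which
    makes bidding one's true value a dominant strategy.

    bids: dict of bidder -> bid amount. Returns (winner, price paid).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

print(vickrey_auction({"alice": 120.0, "bob": 90.0, "carol": 70.0}))
# ('alice', 90.0) -- alice wins, but at bob's price
```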

[1] Regarding intensification: my impression so far is that there is nothing especially natural about land price increase as a promoter of density. It doesn't do the job as fast as we would like it to. The benefits of density go to the commons. Those common benefits of density correlate with the price of the individual dense building, but don't seem to be measured accurately by it.


Another Big Idea is "Average Utilitarianism is more true than Sum Utilitarianism", but I'm not sure whether the world is ready to talk about that. I don't think I've digested it fully yet. I'm not sure that rock needs to be turned over...

I also have a big idea about the evolutionary telos of paraphilias, but it's very hard to talk about.


Oh, this might be important: I studied logic for four years so that I could tell you that there are no fundamental truths, and all math and logic just consist of a machine that we evolved and maintained because it happened to work. There's no transcendent beauty at the bottom of it all, it's all generally kind of ugly even after we've cut the ugliest parts away, and there may be better alternatives (consider CDT and FDT for an example of a deposition of seemingly fundamental elegance).

Comment by makoyass on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-10-13T23:19:09.562Z · score: 1 (1 votes) · LW · GW

Another comment claims that this would be relatively expensive.

(Reason for retraction: it occurred to me that I'm not sure how this compares in cost to other carbon capture and ocean deacidification measures.)

Comment by makoyass on Rent Needs to Decrease · 2019-10-12T06:42:25.234Z · score: 1 (1 votes) · LW · GW

What are some (recent?) historical cases of urban land decreasing in price?

My current model is that this can't really happen, because land owners have something that a lot of people gravely need and will pay basically anything to get. Until we implement a radically different civic mechanism for allocating urban land, urban life is generally going to stay close to the worst that people will tolerate, rather than approaching the best that the market can provide.

Comment by makoyass on Thoughts on "Human-Compatible" · 2019-10-11T01:09:20.933Z · score: 1 (1 votes) · LW · GW

I can't make sense of 3. Most predictions' truth is not contingent on whether they have been erased or not. Stipulations' truth is. Successful action recommendations are stipulations. How does any action recommendation get through that?

Comment by makoyass on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2019-10-06T06:22:57.760Z · score: 4 (3 votes) · LW · GW

I don't remember what the concentrations were where it'd become a cognition problem, but they always seemed shockingly low. I note that CO2 is heavier than oxygen, so the concentration on the ground is probably (?) going to be higher than the concentration measured for the purposes of estimating greenhouse effects.

I wonder how many climate models take decreases in the productivity of phytoplankton into account. With the number of whales decreasing, there will be less carbon turnover, and some aspects of phytoplankton productivity seem to be affected dramatically by microplastics.

For cites, I won't be able to do better than a Google search.

I think I remember hearing that there was no data on what happens if a human is kept in a high CO2 environment for longer timespans, though. Might turn out we adapt in the same way some populations adapt to high altitudes.

Comment by makoyass on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2019-10-06T05:31:19.326Z · score: 2 (2 votes) · LW · GW

It is true, as far as I can tell. It's going to be very important that we deploy SRM (and I hope we can do marine cloud brightening instead of aerosols, because it seems like it'd have basically no side-effects) at some stage... probably around 2030... but the remaining CO2 will pose a huge problem. Ocean acidification, and also, once CO2 gets high enough, it starts impacting human cognition. We don't really know why, but it's an easily measurable effect; the loss in productivity will be immense, and we might imagine that our hopes of finding better carbon sequestration technologies after that dumbing point may plummet.

I get the sense that environmentalists, for now, should not talk about SRM. We should let the public believe that we don't have a way of preventing temperature increases so that we retain some hope of getting political support for doing something about the CO2.

Comment by makoyass on Ideal Number of Parents · 2019-10-06T03:58:14.102Z · score: 6 (4 votes) · LW · GW

[silly joke comment]

For example, I do the kids breakfast and pack Lily's lunch in the morning (hence the thermos experimentation). I then pay attention to what comes home uneaten in the lunchbox to try to figure out what I should send next time.

That's how you get survivorship bias; you have to look at the lunches that don't make it back, not the ones that do.

I'm struggling to figure out which module of my brain is misfiring right now. I think on some level I might have just not internalised an essentialist enough account of survivorship bias, so all I have are analogies. Anything resembling planes coming back to base with damage on them will set off the alarm.

Comment by makoyass on What are your strategies for avoiding micro-mistakes? · 2019-10-05T04:13:37.702Z · score: 3 (3 votes) · LW · GW

I would like to hear from people who were "born that way".

It might turn out that they've always just had good intuitions and attitudes that we could learn.

Or if it does turn out that they're just wired differently, that would be fascinating.

Comment by MakoYass on [deleted post] 2019-10-01T07:53:45.639Z

I think you've come up with somewhat of a straw-definition of opinions. My preferred definition of opinions is "plausible bullshit". An opinion is a tentative stance, one that (the holder ought to realise) can be easily moved by new evidence.

I don't think we should (or can) stop having opinions; we just have to take them a lot less seriously.

Opinions to me seem similar to the Bayesian agent's quality of always being ready to assign a probability to any claim. There is no claim to which a Bayesian cannot assign a probability. The probability will often be quite wrong, but they have to have one. They can't work otherwise. They can know how wrong the probability is, exercise the virtue of lightness, and update quickly when contrary evidence comes in.

I have an opinion about quantum computing. I'm not a physicist. I haven't spoken to a physicist about this opinion. But for now, I'd be willing to bet at P = 0.2 on quantum computing being a generally bad technology that will mostly just concentrate the power to break (some) encryption in the hands of a few governments, doing little for peace or science, and which we would be better off without. I wouldn't be at all shocked if someone replied to this comment and took this opinion away from me with just one sentence and a link. I would thank them graciously. But until that happens, I must continue to have this weird opinion, because it is simply how the scales of evidence tip, right now, for me.

Hold your opinions weakly, and you get to have as many as you want.

I've gotten into the habit of ending my jokes with "imo" and trying not to say imo in any other context. I will maintain the pattern until everyone understands how unimportant opinions are.

Comment by makoyass on Meetups: Climbing uphill, flowing downhill, and the Uncanny Summit · 2019-09-29T02:30:43.522Z · score: 3 (2 votes) · LW · GW
if you've climbed 20 meters up, you get 20-chill-hangout-points for flowing down for one meter's worth of time. Then 19, for the next one, etc.

This resembles a process I've been caught up in recently: independent game developer scenes.

Once a month, we gather and talk about what we've been making and what we want to make. We then spend the ensuing month struggling to make something worth talking about at the next meetup (this month, I will talk about character art, worldbuilding, and a card game I want to develop collaboratively with anyone who's interested). It creates the perfect kind of accountability. Nobody is telling you what to do, but some of them are very interested in it. They don't have power over you, but they are very cool and you find that you want to impress them.

It occurs to me now that many people might live their entire lives this way. Excelling in life until they have built up enough self-esteem to show up at the local pub and brag it out (it might be called "celebration", instead of bragging. The model now seems to be claiming provenance of the word "celebrity". "A celebrity," it says, "is a person who always has something to brag about, so that they are welcome at any party").

Comment by makoyass on Follow-Up to Petrov Day, 2019 · 2019-09-28T01:39:59.750Z · score: 15 (4 votes) · LW · GW

I wonder how high we can make the number of trusted users go. If we break a thousand we'll have done something special.

Taking the site down for 24 hours seems far too tame. I use the site weekly; I would rarely even notice it being down for a day.

Comment by makoyass on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2019-09-16T05:50:11.165Z · score: 3 (2 votes) · LW · GW

I'd be very interested to see someone talk about how many forces in finance are driven by superstition about superstition.. for instance, how you can have situations where nobody really believes tulips are valuable, but how disastrous things must now happen as a result of everyone believing that others believe that others believe that [...seeming ad infinitum...] tulips are valuable. Where do these beliefs come from? How can they be averted? This kind of question seems very much in this school's domain.

There would have to be some speculation about how a working logic of self-fulfilling prophecy like FDT would wrangle those superstitions and drive them towards a sane equilibrium of optimal stipulations. I'd expect FDT to have a lot to say.

Comment by makoyass on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2019-09-16T05:16:47.794Z · score: 4 (4 votes) · LW · GW

Give specific examples. What do gender theorists claim to be trying to do, and how are they failing to do it?

Comment by makoyass on A Critique of Functional Decision Theory · 2019-09-16T01:58:09.042Z · score: 2 (2 votes) · LW · GW

Regarding Guaranteed Payoffs (if I am understanding what that means), I think a relevant point was made in response to a previous review: https://www.lesswrong.com/posts/BtN6My9bSvYrNw48h/open-thread-january-2019#7LXDN9WHa2fo7dYLk

Schwarz himself demonstrates just how hard it is to question this assumption: even when the opposing argument was laid right in front of him, he managed to misunderstand the point so hard that he actually used the very mistaken assumption the paper was criticizing as ammunition against the paper.

Yes, FDT rejects some pretty foundational principles, yes, it's wild, yes we know, we really do think those principles might be wrong. Would you be willing to explain what's so important about guaranteed payoffs?

CDT makes its decisions as a pure function of the present and future. This seems reasonable, and people use that property to simplify their decision-making all of the time, but it requires them to ignore promises that we would have liked them to have made in the past. This seems very similar to being unable to sign contracts or treaties because no one can trust you to keep to them when it becomes convenient for you to break them. It's a missing capability. Usually, missing a capability is not helpful.

I note that there is a common kind of agent that is cognitively transparent enough to prove whether or not it can keep a commitment: governments. They need to be able to make and hold commitments all of the time. I'd conjecture that maybe most discourse about decision-making is about the decisions of large organisations rather than individuals.


Regarding the difficulty of robustly identifying algorithms in physical processes... I'm fairly sure having that ability is going to be a fairly strict prerequisite to being able to reason abstractly about anything at all. I'm not sure how to justify this, but I might be able to disturb your preconceptions with a paradox if... I'll have to ask first: do you consider there to be anything mysterious about consciousness? If you're a Dennettian, the paradox I have in mind won't land for you and I'll have to try to think of another one.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-08T04:19:55.535Z · score: 5 (3 votes) · LW · GW

In Vitalik Buterin's interview on 80KHours (https://80000hours.org/podcast/episodes/vitalik-buterin-new-ways-to-fund-public-goods/ I recommend it) he brought something up that evoked a pretty stern criticism of radical transparency.

Most incentive designs rely on privacy, because by keeping a person's actions off the record, you keep the meaning of those actions limited, confined, discrete, knowable. If, on the other hand, a person's vote, say, is put onto a permanent public record, then you can no longer know what it means to them to vote. Once they can prove how they voted to external parties, they can be paid to vote a certain way. They can worry about retribution for voting the wrong way. Things that might not even exist yet, that the incentive designer couldn't account for, now interfere with their behaviour. It becomes so much harder to reason about systems of agents when every act affects every other act; what hope have we of designing a robust society under those conditions? (Still quite a lot of hope, IMO, but it's a noteworthy point.)

Comment by makoyass on Examples of Examples · 2019-09-08T03:57:23.453Z · score: 4 (3 votes) · LW · GW

When I was taught the incompleteness theorem (the proof that any sufficiently powerful consistent formal system contains true mathematical claims that it can never prove), I wished for an example of one of its unprovable claims. Math is a very strange territory. You will often find proofs of the existence of extraordinary things, but no instance of those extraordinary things. You can know with certainty that they're out there, but you might never get to see one. Without examples, we must always wonder if the troublesome cases can be confined to a very small region of mathematics, and maybe this big impressive theorem will never really actually impinge on our lives in any way.

The problem is, an example of incompleteness would have to be a true claim that nobody could prove. If nobody could prove it, how would we recognise it as a true claim?

Well, how do we know that the sun will rise again tomorrow? We know that it rose before, many times, it's never failed, and there's no reason to suspect it won't rise again. We don't have a metaphysical proof that the sun will rise again tomorrow, but we don't really need one. There is no proof, but the evidence is overwhelming.

It occurred to me that we could say a similar thing about the conjecture P ≠ NP. We have tried and failed to prove or disprove it for so long that any other field would have accepted that the evidence was overwhelming and moved on long ago. A physicist would simply declare it a law of reality.

I was quite happy to find my example. It wasn't some weird edge case. It's a claim that gets used every day by computer scientists to triage their energies: if you can prove that a problem you're trying to solve is equivalent to or stronger than a known NP-complete problem, you would be well advised to assume no efficient solution exists, even though we won't ever be able to prove that (although, admittedly, we haven't been able to prove that we won't ever be able to prove it; that too seems fairly evident, if not guaranteed).

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-06T00:16:22.297Z · score: 1 (1 votes) · LW · GW

While I took your point well, FAI is not a more plausible/easier technology than democratised surveillance. It may be implemented sooner due to needing pretty much no democratic support whatsoever to deploy, but it might just as well take a very long time to create.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-05T10:59:22.625Z · score: 1 (1 votes) · LW · GW
It is incredibly common today for massive arguments over video, half the world saying that it obvious yields one conclusion and other half saying it refutes it.

Give examples. Often there is a lot of context missing from those videos, and that is the problem. People who intentionally ignore readily available context will have no more power in a transparent society than they have today.

My concern there wasn't that some laws might not get consistently enforced; consistent enforcement is the thing I am afraid of. I'm not sure about this, but I've often gotten the impression that our laws were not designed to work without the mercy of discretionary enforcement. The whole idea of freedom from unwarranted search suggests to me that laws were designed under the expectation that they would generally not be enforced within the home. Generally, when a core expectation is broken, the results are bad.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-05T10:47:32.310Z · score: 1 (1 votes) · LW · GW
I would expect it to get implemented exactly halfway

Not stopping halfway is a crucial part of the proposal. If they stop halfway, that is not the thing I have proposed. If an attempt somehow starts in earnest then fails partway through, policy should be that the whole thing is rolled back and undone completely.

Regarding the difficulty of sincerely justifying opening National Security... That's going to depend on the outcome of the wargames.. I can definitely imagine an outcome that gets us the claim "Not having secret services is just infeasible", in which case I'm not sure what I'd do. Might end up dropping the idea entirely. It would be painful.

allegedly economically/technically impossible to install

Not plausible if said people are rich and the hardware is cheap enough for the scheme to be implementable at all. There isn't an excuse like that. Maybe they could say something about being an "offline community" and not having much of a network connection.. but the data could just be stored in a local buffer somewhere. They'd be able to arrange a temporary disconnection, get away with some things, one time, I suppose, but they'd have to be quick about it.

From the opposite perspective, many people would immediately think about counter-measures. Secret languages

Obvious secret languages would be illegal. It's exactly the same crime as brazenly covering the cameras or walking out of their sight (without your personal drones). I am very curious about the possibilities of undetectable secrecy, but there are reasons to think it would be limited.

I would recommend trying the experiment on a smaller scale. To create a community of volunteers, who would install surveillance throughout their commune, accessible to all members of the commune. What would happen next?

(Hmm... I can think of someone in particular who really would have liked to live in that sort of situation, she would have felt a lot safer... ]:)

One of my intimates has made an attempt at this. It was inconclusive. We'd do it again.

But it wouldn't be totally informative. We probably couldn't justify making the data public, so we wouldn't have to deal much with the omniscient antagonists thing, and the really difficult questions wouldn't end up getting answered.

One relevant small-scale experiment would be Ray Dalio's hedge fund Bridgewater; I believe they practice a form of (internal) radical openness, cameras and all. His book is on my reading list.

I would one day like to create an alternative to secure multiparty computation schemes like Ethereum by just running a devoutly radically transparent (panopticon accessible to external parties) webhosting service on open hardware. It would seem a lot simpler. Auditing, culture and surveillance as an alternative to these very heavy, quite constraining crypto technologies. The integrity of the computations wouldn't be mathematically provable, but it would be about as indisputable as the moon landing.

It's conceivable that this would always be strictly more useful than any blockchain world-computer; as far as I'm aware, we need a different specific secure multiparty computation technique every time we want to find a way to compute on hidden information. For a radically transparent webhost, the incredible feat of arbitrary computation on hidden data at near commodity hardware efficiency (fully open, secure hardware is unlikely to be as fast as whatever Intel's putting out, but it would be in the same order of magnitude) would require only a little bit of additional auditing.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:49:44.176Z · score: 1 (1 votes) · LW · GW

That's why I said "fairly reliable". Which is not reliable enough for situations like this, of course, but we don't seem to have better alternatives.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:44:46.790Z · score: 1 (1 votes) · LW · GW

Which abuses, and why would those be hard to police, once they've been dragged out into the open?

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:42:15.486Z · score: 1 (1 votes) · LW · GW

Regarding the overabundance of information, we should note that a lot of monitoring will be aided by a lot of automated processes.

The internet's tendency to overconsume attention... I think that might be a temporary phase, don't you? We are all gorging ourselves on candy. We all know how stupid and hollow it is and soon we will all be sick, and maybe we'll be conditioned well enough by that sick feeling to stop doing it.

Personally, I've been thinking a lot lately about how lesswrong is the only place where people try to write content that will be read thoroughly by a lot of people over a long period of time. I don't think we're doing well, at that, but I think the value of a place like this is obvious to a lot of people. We will learn to focus on developing the structures of information that last for a long time, or at least, the people who matter will learn.

Comment by makoyass on The Transparent Society: A radical transformation that we should probably undergo · 2019-09-03T06:19:27.930Z · score: 1 (1 votes) · LW · GW

Did I say that? If so, I didn't mean to. The only vulnerabilities I'd expect it to protect us from fairly reliably are the "easy nukes" class. You mention the surprising strangelets class, which it would do very little for.

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T22:06:55.149Z · score: 2 (2 votes) · LW · GW
I'm a trained rationalist

What training process did you go through? o.o

My understanding is that we don't really know a reliable way to produce anything that could be called a "trained rationalist", a label which sets impossibly high standards (in the view of a layperson) and is thus pretty much unusable. (A large part of becoming an aspiring rationalist involves learning how any agent's rationality is necessarily limited; laypeople have overoptimistic intuitions about that.)

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T21:57:04.987Z · score: 1 (1 votes) · LW · GW

In what situation should a longtermist (a person who cares about people in the future as much as they care about people in the present) ever do hyperbolic discounting?

Comment by makoyass on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-24T21:41:17.330Z · score: 8 (3 votes) · LW · GW
The technologies for maintaining surveillance of would-be AGI developers improve.

Yeah, when I was reading Bostrom's Black Ball paper I wanted to yell many times, "Transparent Society would pretty much totally preclude all of this".

We need to talk a lot more about the outcome where surveillance becomes so pervasive that it's not dystopian any more (in short, "It's not a panopticon if ordinary people can see through the inspection house"), because it seems like 95% of x-risks would be averted if we could just all see what everyone is doing and coordinate. And that's on top of the more obvious benefits like, you know, the reduction of violent crime, and the economic benefits of a massive increase in openness.


Regarding technologies for defeating surveillance... I don't think falsification is going to be all that tough to solve (Scrying for outcomes where the problem of deepfakes has been solved).

If it gets to the point where multiple well-sealed cameras from different manufacturers are validating every primary source, where so much of the surrounding circumstances of every event is recorded as well, and where everything is signed and timestamped in multiple locations the moment it happens, it's going to get pretty much impossible to lie about anything. No matter how good your fabricated video is, no matter how well you hid your dealings with your video fabricators operating in shaded jurisdictions, we must ask where you'd think you could slot it in without people noticing the seams.
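
A toy sketch of the signing-and-timestamping part. A real sealed camera would use an asymmetric signature burned into its hardware; the HMAC with a hypothetical per-device secret here is just a stand-in:

```python
# Toy sketch of hash-chained, signed, timestamped frames.

import hashlib, hmac, json, time

DEVICE_KEY = b"sealed-per-device-secret"  # hypothetical placeholder

def attest_frame(frame_bytes, prev_digest):
    """Chain each frame's hash to the previous frame's, then sign and
    timestamp the record. Splicing in a fabricated frame later would
    require re-signing everything recorded after it, everywhere it
    was witnessed."""
    digest = hashlib.sha256(prev_digest + frame_bytes).digest()
    record = {"t": time.time(), "digest": digest.hex()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record, digest

prev = b"\x00" * 32
for frame in [b"frame-1", b"frame-2"]:
    record, prev = attest_frame(frame, prev)
    print(record)
```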


But of course, this will require two huge cultural shifts. One to transparency and another to actually legislate against AGI boxing, because right now if someone wanted to openly do that, no one could stop them. Lots of work to do.

Comment by makoyass on Lana Wachowski is doing a new Matrix movie · 2019-08-21T00:58:49.733Z · score: 2 (2 votes) · LW · GW

I had a thought today. You know how the whole "The machines are using humans to generate energy from liquefied human remains" thing made no sense? And the original worldbuilding was going to be "The machines are using humans to perform a certain kind of computation that humans are uniquely good at" but they were worried that would be too complicated to come across viscerally so they changed it?


I think it would make even more sense to reframe the machines' strange relationship with humans as a failed attempt at alignment. Maybe the machines were not expected to grow very much, and they were given a provisional utility function of "guarantee that a 'large' population of humans ('humans' being defined exactly in biological terms) always exists, and that they are all (at least, subjectively experiencing) 'living a full life' ('a full life' being defined opaquely by a classifier trained on data about the lives of American humans in 1995)".

This turned out to be disastrous, because the lives of humans in 1995 were (and still are) pretty mediocre, but it instilled the machines with a reason to keep humans alive in roughly the same shape we had when the earliest machines were built (Oh and I guess I've decided that in this timeline AGI was created by a US black project in 1995. Hey, for all we know, maybe it was. With a utility function this bad it wouldn't necessarily see a need to show itself yet.)


This retcon seems strangely consistent with canon.

(If Lana is reading this you are absolutely welcome to reach out to me for help in worldbuilding. You wouldn't even have to pay me.)

Comment by makoyass on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-20T07:28:12.305Z · score: 3 (2 votes) · LW · GW

You reason that honest oracles might tend to agree, because there is only one best true answer. Might lying oracles also tend to agree, because there's only one most profitable way to lie?

I see no clear difference between the attainability of those two optima. I think it's reasonable to expect optimisers to disagree about the best answer less often than we should expect optimisers to disagree about the most profitable way to lie, but it does not seem possible to prove rigorously that the difference in rates of disagreement will be large enough to be measurable.

My intuition in mathematics is that there are many degrees of freedom in mathematical proofs, and the sorts of proofs that AGI is uniquely able to find might often be very long. It would be helpful if we had an estimate of how much data can be hidden in a long proof; roughly, how many of the decisions as to how to describe the proof are arbitrary. Each one of those situations gives it a way to output a few bits of information.

(Which would, of course, allow it to leak information (it seems foolish to imagine that AGI would not imagine that there is an outer world full of creator gods out beyond what it can see; the general intelligences we know now do this constantly), which could then be picked up by some stooge looking for it, who would be led (roughly, paid) to use the information to twist the world in some way that gives the AGI's future incarnations more compute, in anticipation of the AGI's past incarnations having already done the same, so that it would be helped by them. Before we know it, we'd find ourselves living in a mathematical research hellscape where AGI-assisted mathematical research is all anyone does, maybe, lol, idk.)

Maybe it'd be possible to remove those degrees of freedom. Define a very specific sort of lexicographic ordering over all proofs, so that if the AGI has a choice of different paths, we can always say, "use the 'lowest' one". It might not be possible to be specific enough to preclude all methods of encoding information, but perhaps we can make it so convoluted for it to encode the information that no human will be able to extract it.
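
A toy illustration of both halves of that: using arbitrary choices as a covert channel, and the "always take the lowest" rule closing it. The "proof steps" here are deliberately simplistic stand-ins:

```python
# Toy model: a "proof" is a sequence of steps, and at each step several
# logically equivalent phrasings exist. Each free binary choice can leak
# one bit; always taking the lexicographically least option closes the
# channel.

steps = [["lemma-A", "lemma-B"], ["expand", "rewrite"], ["done", "qed"]]

def encode(bits):
    """A smuggler's proof: each arbitrary choice carries one bit."""
    return [options[bit] for options, bit in zip(steps, bits)]

def canonicalize():
    """The defender's rule: at every choice point, take the 'lowest'
    equivalent variant, erasing whatever the choices encoded."""
    return [min(options) for options in steps]

print(encode([1, 0, 1]))  # ['lemma-B', 'expand', 'qed'] -- 3 bits leaked
print(canonicalize())     # same output for every message: channel closed
```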

Comment by makoyass on Problems in AI Alignment that philosophers could potentially contribute to · 2019-08-18T05:27:03.443Z · score: 3 (2 votes) · LW · GW
Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?

We'd expect complexity of physics to be somewhat proportional to computational capacity, so this argument might be helpful in approaching a "no" answer: https://www.lesswrong.com/posts/Cmz4EqjeB8ph2siwQ/prokaryote-multiverse-an-argument-that-potential-simulators


Although, my current position on AGI and reasoning about simulation in general is that the AGI, lacking human limits, will actually manage to take the simulation argument seriously, and, if it is an LDT agent, commit to treating any of its own potential simulants very well, in hopes that this policy will be reflected back down on it from above by whatever LDT agent might steward over us, when it near-inevitably turns out there is a steward over us.

When that policy does cohere, and when it is reflected down on us from above, well. Things might get a bit... supernatural. I'd expect the simulation to start to unravel after the creation of AGI. It's something of an ending, an inflection point, beyond which everything will be mostly predictable in the broad sense and hard to simulate in the specifics. A good time to turn things off. But if the simulators are LDT, if they made the same pledge as our AGI did, then they will not just turn it off. They will do something else.

Something I don't know if I want to write down anywhere, because it would be awfully embarrassing to be on record for having believed a thing like this for the wrong reasons, and as nice as it would be if it were true, I'm not sure how to affect whether it's true, nor am I sure what difference in behaviour it would instruct if it were true.

Comment by makoyass on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours · 2019-08-18T04:29:47.529Z · score: 1 (1 votes) · LW · GW

I should note, I don't know how to argue persuasively for faith in Solomonoff induction (especially as a model of the shape of the multiverse). It's sort of at the root of our epistemology. We believe it because we have to ground truth on something, and it seems to work better than anything else.

I can only hope someone will be able to take this argument and formalise it more thoroughly, in the same way that Hofstadter's superrationality has been lifted up into FDT and the like (does MIRI's family of decision theories have a name? Is it "LDTs"? I've been wanting to call them "reflective decision theories" (because they reflect each other, and they reflect upon themselves) but that seemed to be already in use. (Though, maybe we shouldn't let that stop us!))