Comment by makoyass on Kevin Simler's "Going Critical" · 2019-05-19T05:56:59.288Z · score: 3 (2 votes) · LW · GW

Regarding the analogy for city and rural people, I think something has been left out: it should be noted that the city nodes here don't just have more connections, they also produce more transmissions. 4 connections that each infect at p = 0.2 transmit 0.8 Expected Culture; 8 connections at 0.2 transmit 1.6. To maintain the same amount of expected culture transmission, doubling connectedness like that would have to come with halving the transmission probability per edge, to 0.1.
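
The arithmetic can be sketched in a couple of lines (the function name and framing are mine, not Simler's):

```python
# Expected transmissions per step: connections * per-edge transmission
# probability. "Expected Culture" is this comment's term for that product.
def expected_culture(connections, p_transmit):
    return connections * p_transmit

rural = expected_culture(4, 0.2)     # 0.8
city = expected_culture(8, 0.2)      # 1.6, double the rural rate
balanced = expected_culture(8, 0.1)  # halving p restores 0.8
```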

The model as it exists applies well to {seeing fashions in a crowded street}, but it doesn't apply to every instance of cultural transmission, for instance, when a long conversation is required for the transmission to take place. When some degree of social consensus is required (for instance, if a person needs to hear a recommendation from more than one of their friends before they'll try a piece of media then start recommending it to their friends as well, and if they have finite time for listening to media recommendations), cities would actually be much less hospitable for those memes, because they're less cliquish.
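
The consensus point can be made concrete with a toy threshold model (entirely my own sketch, not from the post): if a node adopts a meme only after two of its neighbors have, the meme saturates a small clique but stalls on a hub-and-spoke network with the same seeds.

```python
# Threshold-2 ("complex") contagion: a node adopts only once at least
# `threshold` of its neighbors have already adopted.
def spread(adj, seeds, threshold=2):
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbors in adj.items():
            if node not in adopted and sum(n in adopted for n in neighbors) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

clique = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # hub-and-spoke, no cliques

# With seeds {0, 1}: the clique saturates; the star never spreads past the seeds.
```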

Comment by makoyass on Boo votes, Yay NPS · 2019-05-15T03:52:21.961Z · score: 1 (1 votes) · LW · GW
it lets you express yourself in two ways (unlike on Twitter where the only option is to vote up something, and a "downvote" requires writing your own tweet expressing dislike)

It's a weird oversight not to observe that Twitter's retweets not only pose the question "would you recommend this to a friend", but are also guaranteed to yield a truthful answer, because retweeting is itself an act of recommendation to friends, for which the user is then held accountable.

One of the other things I like about resharing is that the resultant salience is completely subjective. Users can share space even if they're looking for different things. There can be a curator for any sense of quality.

Comment by makoyass on Type-safeness in Shell · 2019-05-13T23:47:32.605Z · score: 0 (2 votes) · LW · GW

Make a powerful enough system shell with an expressive enough programming language, and you shall be able to unify the concepts of GUI and API, heralding a new age of user empowerment, imo.

This unification is one of the projects I'd really like to devote myself to, but I'm sort of waiting for Rust GUI frameworks to develop a bit more (since the shell language will require a new VM and the shell and the language server will need to be intimately connected, I think Rust is the right language). (It may be possible to start, at this point, but my focus is on other things right now.)

Comment by makoyass on Has "politics is the mind-killer" been a mind-killer? · 2019-05-13T23:09:46.868Z · score: 1 (1 votes) · LW · GW

What is the chance that the article will be updated in light of these observations, or that a new, superior version will be written that will out-proliferate the old one?

Why is that number so low, and what can we do to change it?

Comment by makoyass on "UDT2" and "against UD+ASSA" · 2019-05-12T21:32:49.025Z · score: 1 (1 votes) · LW · GW
Two UDT1 (or UDT1.1) agents play one-shot PD. It's common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks

What does it mean when it's said that a decision theory is running in bounded time?

Comment by makoyass on Habryka's Shortform Feed · 2019-05-12T08:41:07.187Z · score: 1 (1 votes) · LW · GW

I think you might be conflating two things under "integrity". Having more confidence in your own beliefs than in the shared/imposed beliefs of your community isn't really a virtue; it's more just a condition that a person can be in, and whether it's virtuous is completely contextual. Sometimes it is, sometimes it isn't. I can think of lots of people who should have more confidence in other people's beliefs than they have in their own. In many domains, that's me. I should listen more. I should act less boldly. An opposite of that sense of integrity is the virtue of respect (recognising other people's qualities), which is a skill. If you don't have it, you can't make use of other people's expertise very well. A superabundance of respect looks like a person who is easily moved by others' feedback; usually, a person who is patient with their surroundings.

On the other hand, I can completely understand the value of {having a known track record of staying true to self-expression, to claims made about the self}. Humility is actually a part of that. The usefulness of delineating that as a virtue separate from the more general Honesty is clear to me.

Comment by makoyass on siderea: What New Atheism says · 2019-05-12T07:27:13.854Z · score: 2 (2 votes) · LW · GW

I feel like Nerst's concept of "decoupling" is relevant to this

To the decoupler, the claim is not read in light of its context, it stands alone in the root context along with everything else.

Comment by makoyass on [Meta] Hiding negative karma notifications by default · 2019-05-05T21:48:29.093Z · score: 8 (6 votes) · LW · GW
Maybe you should shower them in spiders

I'd propose a "free downvotes" thread (we could even make a game of it by inviting people to earn their downvotes by writing humorously bad comments) but presumably that would screw up the eigenkarma graph.

Comment by makoyass on Habryka's Shortform Feed · 2019-05-05T09:07:21.283Z · score: 1 (1 votes) · LW · GW
So the claim you are making is that the norm should be for people to explain

I'm not really making that claim. A person doesn't have to do anything condemnable to be in a state of not deserving something. If I don't pay the baker, I don't deserve a bun. I am fine with not deserving a bun, as I have already eaten.

The baker shouldn't feel like I am owed a bun.

Another metaphor is that the person who is beaten on the street by silent, masked assailants should not feel like they owe their oppressors an apology.

Comment by makoyass on [Meta] Hiding negative karma notifications by default · 2019-05-05T09:02:11.839Z · score: 23 (14 votes) · LW · GW

I'm really not sure about this. The first time I saw negative karma notifications my response was just... I was impressed by the honesty and integrity of it. Any other site would hide negative info like that because they know the main purpose of a notification feed is to condition the user to keep coming back, and LW's wasn't about that, and that set it apart. And now it basically is about that. That wasn't your intention, but you've ended up making a reward machine for the behaviour of checking lesswrong regularly.

I don't think you can justify this. I don't think you can ever justify having something that filters out a certain kind of information because you don't respect your users enough to trust them to emotionally process it in a balanced way. That is one of the most basic components of rationality: to know that the downvotes are probably there, that they're happening, and to feel worse about not seeing them than about seeing them. You're assuming that this earnest curiosity, this resilience, has not developed in a lot of LW's members. Maybe it hasn't, but it's a real ugly shame to make one of the site's most prominent features a monument to that.

(I dunno. I understand that karma notifications don't really matter and they're mostly there to mitigate a deeper pathos, and that there's something to be said for the virtue of instrumental evils..)

Comment by makoyass on Habryka's Shortform Feed · 2019-05-04T23:19:40.869Z · score: 4 (3 votes) · LW · GW
The frontpage should show you not only recent content, but also show you much older historical content

When I was a starry-eyed undergrad, I liked to imagine that reddit might resurrect old posts if they gained renewed interest: if someone rediscovered something and gave it a hard upvote, that would put it in front of more judges, which might lead to a cascade of re-approval that hoists the post back into the spotlight. There would be no need for reposts, evergreen content would get due recognition, and a post wouldn't be done until the interest of the subreddit (or, generally, user cohort) was really gone.
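
A toy version of that resurrection rule might decay a post's score from its most recent strong upvote rather than from its creation time (this is my own sketch, not reddit's actual hotness formula):

```python
import math

# Score decays with the age of the latest strong upvote, so a rediscovered
# old post resurfaces as if it were fresh.
def hotness(upvotes, hours_since_last_upvote, half_life_hours=24.0):
    decay = 0.5 ** (hours_since_last_upvote / half_life_hours)
    return math.log1p(max(upvotes, 0)) * decay
```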

Of course, reddit doesn't do that at all. Along with the fact that threads are locked after a year, this is one of many reasons it's hard to justify putting a lot of time into writing for reddit.

Comment by makoyass on Counterspells · 2019-05-03T02:52:40.060Z · score: 1 (1 votes) · LW · GW

While this is true (most fallacies are actually legitimate heuristics), if I heard the same person say all of these things I would have to step back and get them to ask themselves if they're really in the mood for discourse right now, heh.

It's easy to get sucked into discourse when you don't have a lot of time for it and half-ass everything.

Comment by makoyass on Counterspells · 2019-05-01T22:03:32.360Z · score: 1 (1 votes) · LW · GW

Counter counterspells for argument from authority:

  • It's not that I believe that anyone who disagrees with [Expert] is wrong, it's just that the proper procedure for determining whether you are right should involve engaging with [Expert] instead of engaging with me
  • It's not that I believe that anyone who disagrees with [Expert] is wrong, it's just that from my perspective, anyone who disagrees with [Expert] is probably wrong, and I have to be careful about where I put my time
Comment by makoyass on Habryka's Shortform Feed · 2019-05-01T07:40:16.282Z · score: 16 (5 votes) · LW · GW

Having a reaction for "changed my view" would be very nice.

Features like custom reactions give me this feeling that... a language will emerge from allowing people to create reactions, one that will be hard to anticipate but, in retrospect, crucial. Playing a role similar to the one body language plays during conversation, but designed, defined, explicit.

If someone did want to introduce the delta through this system, it might be necessary to give the coiner of a reaction some way of linking an extended description. In casual exchanges.. I've found myself reaching for an expression that means "shifted my views in some significant lasting way" that's kind of hard to explain in precise terms, and probably impossible to reduce to one or two words, but it feels like a crucial thing to measure. In my description, I would explain that a lot of dialogue has no lasting impact on its participants, it is just two people trying to better understand where they already are. When something really impactful is said, I think we need to establish a habit of noticing and recognising that.

But I don't know. Maybe that's not the reaction type that will justify the feature. Maybe it will be something we can't think of now.

Generally, it seems useful to be able to take reduced measurements of the mental states of the readers.

Comment by makoyass on Change A View: An interesting online community · 2019-05-01T06:57:58.688Z · score: 16 (7 votes) · LW · GW

I found the Rationally Speaking interview quite interesting

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T21:47:06.269Z · score: 6 (4 votes) · LW · GW

We don't disagree.

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T05:44:58.326Z · score: 2 (2 votes) · LW · GW

Sometimes a person won't want to reply and say outright that they thought the comment was bad, because it's just not pleasant, and perhaps not necessary. Instead, they might just reply with information that they think you might be missing, which you could use to improve, if you chose to. With them, an engaged interlocutor will be able to figure out what isn't being said. With them, it can be productive to try to read between the lines.

Are you suggesting that understanding why people upvoted or downvoted your comment is a favor that you are doing for them?

Isn't everything relating to writing good comments a favor that you are doing for others? But I don't really think in terms of favors. All I mean to say is that we should write our comments for the sorts of people who give feedback. Those are the good people. Those are the people who're a part of a good-faith, self-improving discourse. Their outgroup are maybe not so good, and we probably shouldn't try to write for their sake.

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T03:21:55.996Z · score: 1 (3 votes) · LW · GW

Reminder: If a person is not willing to explain their voting decisions, you are under no obligation to waste cognition trying to figure them out. They don't deserve that. They probably don't even want that.

Comment by makoyass on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T23:50:22.218Z · score: 1 (1 votes) · LW · GW

Strong upvote, very good to know

Agent A might put her endowment towards goal X, while agent B will use her own resources to pursue some goal Y

I internalised the meaning of these variables only to find you didn't refer to them again. What was the point of this sentence?

Comment by makoyass on On Media Synthesis: An Essay on The Next 15 Years of Creative Automation · 2019-04-26T23:44:12.585Z · score: 1 (1 votes) · LW · GW

They're related fields. For various reasons (some ridiculous) I've spent a lot of time thinking about the potential upsides of the thing that Richard Stallman called Treacherous Computing. There are many. We're essentially talking about the difference between having devices that can make promises and devices that can't. Devices that have the option of pledging to tell the truth in certain situations, and devices that can tell any lie that is possible to tell.

I think we have reason to believe Trusted Computing will be easier to achieve with better (cheaper) technology. I also think we have reasons to hope that it will be easier to achieve. Really, Trusted Computing and Treachery are separate qualities. An unsealed device can have secret backdoors. A sealed device can have an open design and an extensively audited manufacturing process.

I'm not sure what you're getting at with the universality concern. If a work could only be viewed in theatres and on TC graphics hardware with sealed screens (do those exist yet?), it would still be very profitable. They would not strictly need universal adoption of sealed hardware.

Comment by makoyass on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T05:24:39.709Z · score: 1 (1 votes) · LW · GW

I'd expect a designed thing to have much cleaner, much more comprehensible internals. If you gave a human a compromise utility function and told them that it was a perfect average of their desires (or their tribe's desires) and their opponents' desires, they would not be able to verify this; they wouldn't recognise their utility function, and they might not even individually possess it (again, human values seem to be a bit distributed). They would be inclined to reject a fair deal; humans tend to see their others only in extreme shades, as more foreign than they really are.

Do you not believe that an AGI is likely to be self-comprehending? I wonder, sir, do you still not anticipate foom? Is it connected to that disagreement?

Comment by makoyass on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T22:50:05.928Z · score: 5 (4 votes) · LW · GW

I'm pretty sure nobility frequently arranged marriages to do exactly this, for this purpose, to avoid costly conflicts.

Comment by makoyass on Femtotechnology · 2019-04-25T05:30:37.103Z · score: 1 (1 votes) · LW · GW

If you click your username at the top right there's a "ask a question [beta]" option

Comment by makoyass on Femtotechnology · 2019-04-25T04:03:03.456Z · score: 1 (1 votes) · LW · GW

It may be worth asking this question with the Q&A system. I suppose we must be able to guess a few things about it from back here in 2019, even if we don't know enough to imagine, like, what it'll be like in general.

Comment by makoyass on Degrees of Freedom · 2019-04-21T05:30:12.554Z · score: 7 (3 votes) · LW · GW
They are also some of humanity's favorite things

Then rationality pursues/preserves them, and people's intuition about what rationality does is wrong.

A utility function can value anything.

Comment by makoyass on Degrees of Freedom · 2019-04-21T05:25:06.939Z · score: 1 (1 votes) · LW · GW

Yeah. I think the only connection here (though it's very tenuous) is that under a COST car market, every car is up for sale at all times (although I've never seen Glen talk about applying COST to markets like that; usually it's for markets with a lot of scarcity and interdependence). Other people are always threatening to buy your car if you don't value it highly enough, and you can buy a new one yourself with very low transaction costs (because of the size of the market), so you're a bit less likely to want to own one at any given time.

Comment by makoyass on The Hard Work of Translation (Buddhism) · 2019-04-15T21:38:14.710Z · score: 1 (1 votes) · LW · GW

There are a lot of assumptions you're making about the purpose/subtext of that comment. The comment is like, three exchanges into a conversation. It was not written for you. Its purpose was to name some ideas for snarles that they're probably already largely familiar with. It isn't supposed to teach or to expound enough detail that someone who didn't know a lot of what I was talking about would be able to refute any of it. That is not what we're doing in this thread. There is a time and place for that. Seriously, I'm probably going to have to write about this stuff properly at some point, and I hope you'll find it precise and coherent enough to engage with without frustration, when the time comes.

We are still a long way from arriving at the "interesting" thing that I alluded to, if we're ever going to (I'm not even totally sure I'll be able to recover that thought).

In this description of LDT

I wasn't really trying to give an accurate description/definition of LDT, it's an entailment.

That is very far from any notion of karma

The easier we can make it for people to step from a superstition or a metaphor to a real formalised understanding, the better. If you say it's a long walk, a lot of them won't set out.

This in absolutely no way follows from logical decision theory or anything related to it.

That paragraph was about anthropic measure continuity, not LDT

Comment by makoyass on The Hard Work of Translation (Buddhism) · 2019-04-15T05:30:41.774Z · score: 4 (4 votes) · LW · GW

I'm not sure if these articles try to convey the personal, spiritual dimension of LDT's claims about agency, but they describe what it is

Basically: LDT is the realisation that we should act as if our decisions will be reflected by every similarly rational agent that exists, it is one way of saying "all is one, you are not separate from others". It could even be framed as a paraphrasing of a constrained notion of karma, in a way ("your policy will be reflected back at you by others"). What's extraordinary about it is it says these things in the most precise, pragmatic terms.

Metaphysical continuity of measure... you're probably familiar with the concept even if you wouldn't have a name for it. Like, you know how people worry that being teleported would be a kind of death? Because there's an interruption, a discontinuity between selves? And then one may answer, "but there is a similar, perhaps greater discontinuity every night, during sleep, and you don't seem to fear that." I don't know how many of us have noticed this, I've met a few, but we're starting to realise that with anthropic measure, the substance of experience or subjectivity, there isn't some special relationship between observer-moments that are close in time and space; there's just a magnitude, and the magnitude can change over time. If we want to draw a line connecting observer-moments, it's artificial.

So what I'm getting at is, that substance of experience can't really be divided into a bunch of completely separate lines of experience. If we care about one being's experience, we should generally care about every being's experience. We don't have to, of course, because of the orthogonality thesis, but I think most people will once they get it.

Comment by makoyass on The Hard Work of Translation (Buddhism) · 2019-04-15T05:20:37.676Z · score: 1 (1 votes) · LW · GW

I added a link

Comment by makoyass on On Media Synthesis: An Essay on The Next 15 Years of Creative Automation · 2019-04-15T04:47:13.180Z · score: 5 (3 votes) · LW · GW

An attempted reply to your concern about deepfakes grew into its own post.

If you wanted to create events in history, complete with all the "evidence" necessary, there is nothing stopping you.

For past footage, some of my proposed solutions wouldn't apply... but this will not attenuate our connection to history by very much. Most important historical documents are not videos. We are reliant on the accounts of honest people, and we always will be; if not for verifying direct evidence, then for understanding it.

Scrying for outcomes where the problem of deepfakes has been solved

2019-04-15T04:45:18.558Z · score: 28 (15 votes)
Comment by makoyass on The Hard Work of Translation (Buddhism) · 2019-04-13T21:22:16.132Z · score: 0 (3 votes) · LW · GW
Likewise, approximations to the Truth abound in the various spiritual traditions. "God exists, and is the only entity that exists. I am God, you are God, we are all one being" is one such approximation [1]. It is an approximation because the words "you" and "God" are not well-defined. My own definition of these terms has been continually evolving as I progress in spirituality.

Hm, I think there might be something really interesting here.

If I were to try to phrase this claim about God in terms of LDT's synchronicity, and the incoherence of the notion of any metaphysical continuity between observer-moments (or, vessels of anthropic measure), would you agree that we're talking about the same thing? (Are you familiar with these terms?)

Comment by makoyass on The Hard Work of Translation (Buddhism) · 2019-04-13T21:15:26.592Z · score: 5 (4 votes) · LW · GW

It seems obvious that your change in relationship with suffering constitutes a kind of value shift, doesn't it?

What's your relationship with value drift? Are you unafraid of it? That gradual death by mutation? The infidelity of your future self? Do you see it as a kind of natural erosion, a more vital aspect of the human telos than the motive aspects it erodes?

Comment by makoyass on 0 And 1 Are Not Probabilities · 2019-04-08T02:56:05.011Z · score: 1 (1 votes) · LW · GW

Hmm. Reading.

Okay. Summary: All of Eliezer's writing on this assumed the context of AGI/applied epistemology. That wasn't obvious from the materials, and it did not occur to this group of pure mathematicians to assume that same focus, because they're pure mathematicians and because of the activity they had decided to engage in on that day.

Comment by makoyass on Experimental Open Thread April 2019: Socratic method · 2019-04-07T01:49:12.145Z · score: 1 (1 votes) · LW · GW

A wheaten substance that seals some other substance inside it. The inner substance must not be rigid.

Dumplings and samosas are also types of ravioli.

A wad of dough with a mixture of tar and ball-bearings injected into it would also be a ravioli.

I'm a fan of reductive definitions.

Comment by makoyass on Experimental Open Thread April 2019: Socratic method · 2019-04-07T01:42:24.266Z · score: 3 (2 votes) · LW · GW

No ordinary goal requires anything outside of predictive accuracy. To achieve a goal, all you need to do is predict what sequence of actions will bring it about (though I note, not all predictive apparatuses are useful; a machine that did something very specific and abnormal, like looking at a photo of a tree and predicting whether there is a human tooth inside it, would not find many applications).

What claim about truth can't be described as a prediction or tool for prediction?

Comment by makoyass on LW Update 2019-04-02 – Frontpage Rework · 2019-04-04T00:48:26.458Z · score: 8 (3 votes) · LW · GW

Moving the library out of the way is probably a good idea. A friend had mentioned in the past that greaterwrong just immediately showing a bunch of article titles did a better job of catching their eye with something they could read quickly.

The titles seem to have very little room before getting cut short. If you want to encourage conscious usage patterns, you want to enable descriptive titles. You don't want a user's rapport with the site to resemble a person's rapport with an advent calendar. I think more text should be shown, maybe in a smaller font.

Comment by makoyass on I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it · 2019-04-02T22:25:24.308Z · score: 1 (1 votes) · LW · GW

Maybe we should look for analogies in Single Rotation Rule Reversible CAs instead then (see thread)

Comment by makoyass on Announcing the Center for Applied Postrationality · 2019-04-02T07:18:02.365Z · score: 1 (4 votes) · LW · GW

One day, even Eliezer will identify as a postrationalist

I think I might be serious. I think it might be the equivalent of raising a version number. We are all growing together; we as a group are not what we were five years ago. The world does not pay attention to version numbers, they only look at the name, so if we want them to understand that there is a difference between who we are and who we were, we must change the name. It's simply good communication to never call two crucially distinct but easily confusable things by the same name.

I'm only a little bit serious. I still find the label "postrationalist" incredibly arrogant from the position of a rationalist, and the label "rationalist" unfortunately arrogant from the position of a layperson. It's arrogance squared. We should just identify as bayesians or bostromians or something.

Comment by makoyass on What would you need to be motivated to answer "hard" LW questions? · 2019-04-01T22:53:27.845Z · score: 1 (1 votes) · LW · GW
New features such as the ability to spawn related questions that break down a confusing question into easier ones.

Why would that be any better than just mentioning related questions in a comment, or compiling links to the subquestions in your answer?

Comment by makoyass on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T22:49:26.076Z · score: 5 (5 votes) · LW · GW

Thank you. When I saw this in my message center, I was immediately mindkilled by the implications of GPT2 uttering the phrase "epistemology that's too harsh to be understood from a rationalist perspective" as any respectful person would understand that there's no such epistemology as that

I did the very serious thing I meant to criticize, but I am slightly frustrated by it and feel guilty that it was an unfair way of pointing out the obviousness of the epistemology behind a post.

Reading this, they seem like they might be open to apologising, but again, I'm very mindkilled rn so I'm reading that through a fog of rage and I can't really understand what they're saying. Before I'm able to engage with them civilly, I'm going to need GPT2 to condemn themself, develop an anxiety disorder, and amputate one of their fingers

Comment by makoyass on Experimental Open Thread April 2019: Socratic method · 2019-04-01T22:25:24.714Z · score: 4 (3 votes) · LW · GW

April first starts early for New Zealanders (and ends late)

Comment by makoyass on Experimental Open Thread April 2019: Socratic method · 2019-04-01T19:31:33.053Z · score: 6 (5 votes) · LW · GW

Isn't meaning in the eye of the beholder, or did you mean something else? Have you ever had the experience of going to a modern art gallery and knowing that authorial intent is mostly absent from all of the works, but pretending that it's all there for a connoisseur to find, playing the connoisseur, then finding profound meaning and having a really good time?

Have you noticed when GPT2 started commenting?

Comment by makoyass on Experimental Open Thread April 2019: Socratic method · 2019-04-01T19:27:11.569Z · score: 2 (2 votes) · LW · GW

A pop tart is a type of ravioli.

Comment by makoyass on The Case for The EA Hotel · 2019-04-01T06:16:23.647Z · score: 6 (3 votes) · LW · GW

I've been wondering... has the EA hotel played host to any projects about general-purpose information aggregators, things like reddit, or social networks? I think I could make a pretty good case for building better tools for mass discussion, pooling of information, and community formation (preview: how about mitigating the spread of misinformation by enforcing article deduplication, to keep misinformation from out-replicating its refutations?), and I might be interested in coming over and driving one, one day, but I know I'm not the only one thinking this thought.

Comment by makoyass on Will superintelligent AI be immortal? · 2019-04-01T05:47:22.023Z · score: 6 (4 votes) · LW · GW

For a moment there, I really truly thought that a qualified person was sternly disagreeing with me about the fundamentalness of thermodynamics, I became irate as they tried to substantiate this by claiming that entropy is a chimeric concept. No, you fool, you loon, you must understand that the kind of metaphysics we're doing is all about general principles about large things, that loose empirical claims are sufficient, and I must admit that I have been fooled.

I think these should be shorter though.

Comment by makoyass on Will superintelligent AI be immortal? · 2019-04-01T04:39:55.686Z · score: 1 (1 votes) · LW · GW

And then the other universe eventually succumbs to its own heat death because that's a basic law of physical systems (afaik).

I don't feel well equipped to think about that properly, though. I wonder... could it be that the real basic law is that regions of physics with the crucial balance of order and chaos needed for life to emerge tend to be afflicted by entropy, but not everything that exists, or that's accessible from the cradle universe, needs to have that affliction? Is it possible that as soon as we penetrate the lining of the universe we'll find an orderly space where information can be destroyed, reduced, reset?

I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it

2019-04-01T03:19:44.080Z · score: 20 (7 votes)
Comment by makoyass on Review of Q&A [LW2.0 internal document] · 2019-03-31T01:00:57.296Z · score: 3 (2 votes) · LW · GW
Uncertainty around payment.

Possible solution: let answerers avoid duplicated effort, and avoid being beaten to the mark by short, hasty answers, by reserving the right to answer first, contributing some of their own money (20% of the current bounty?) to the bounty. If your answer isn't Accepted, the stake is lost, so you have to be confident.

You get a finite amount of time to answer in; maybe the donation percentage should be an increasing function of the amount of time you reserved. It should be set by the asking party.
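
As a sketch (all numbers and names here are my guesses, not a worked-out proposal), the stake might look like:

```python
# Stake required to reserve exclusive answering rights for `hours`:
# a base fraction of the bounty, growing with the reservation length,
# capped so a long reservation can't consume the whole bounty.
def reserve_stake(bounty, hours, base_fraction=0.20, per_hour=0.01, cap=0.50):
    fraction = min(base_fraction + per_hour * hours, cap)
    return bounty * fraction
```

The stake would be forfeited to the bounty pool if the reserved answer isn't Accepted.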

How much control do you want to give to the asking party? Smart people ask lots of questions, but so do stupid people. You can't guarantee that they're going to have good judgement about what qualifies as a valid answer. I see so many conflicts of interest: the asker might choose to decline the confident answerer's answer, copy the text, and answer the question themselves; not only would they get the answer for free, they'd make a profit from the reserve contribution.

I suppose it really cannot be left to a single judge. Maybe we should ask why the question asker has any right to judge answer validity at all; maybe that should be left to the epistemic community.

Comment by makoyass on Will superintelligent AI be immortal? · 2019-03-30T20:59:39.961Z · score: 6 (4 votes) · LW · GW

Probably no, regardless of how our relationship with physics broadens and deepens, because of thermodynamics, which applies multiversally, on the metaphysical level.

We would have to build a perfect frictionless reversible computer at absolute zero, where we could live forever in an eternal beneficent cycle (I'm not a physicist, but as far as I'm aware, such a device isn't conceivable under our current laws of physics), while somehow permanently sealing away the entropy that came into existence before us, the entropy that we've left in our wake, and the entropy that we generated in the course of building the computer. I'm fairly sure there can be no certain way to do that. It's conceivable to me that there might be, for many laws of physics, once we have precise enough instruments, some sealing method that will work for most initial configurations. But, probably not.

Comment by makoyass on Do you like bullet points? · 2019-03-28T02:19:03.498Z · score: 3 (2 votes) · LW · GW
Maybe if it's actually optional, it should be a strong signal that I should just trim it out completely?

Here, "Optional" means "some readers don't have to read this, while others do". For any given reader there should be a fact of the matter as to whether they should read nested details, and they should know what it is.

Would it help to label nested sections with their type? Stuff like "supporting argument", "clarification", "related parenthetical information"

Comment by makoyass on Open Thread March 2019 · 2019-03-28T00:23:22.808Z · score: 1 (1 votes) · LW · GW

It doesn't, like, break when a non-literal translation is used. When the translation doesn't map directly, this is communicated to the viewer quite clearly: certain words in the VO produce no pulses, and certain words in the subtitle fail to pulse at all.

So you don't have to do a literal translation at all. It sort of imposes a mild pressure towards doing more literal translations; the demographic for fine mapping kinda want them. You don't have to give it to them all of the time. The most important thing is making sure that they understand what's being communicated.

Is there a.. more exact.. way of scoring a predictor's calibration?

2019-01-16T08:19:15.744Z · score: 22 (4 votes)

The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter

2019-01-11T22:26:29.887Z · score: 18 (7 votes)

The end of public transportation. The future of public transportation.

2018-02-09T21:51:16.080Z · score: 7 (7 votes)

Principia Compat. The potential Importance of Multiverse Theory

2016-02-02T04:22:06.876Z · score: 0 (14 votes)