Comment by makoyass on Open Thread July 2019 · 2019-07-17T05:33:11.619Z · score: 1 (1 votes) · LW · GW
though there's something to be said for semi-independent reinvention.

:D

(I am delighted because constructivism is what is to be said for semi-independent reinvention, which Aleksi just semi-independently constructed, thereby doing a constructivism on constructivism)

If I knew how to make an Omohundro optimizer, would I be able to do anything good with that knowledge?

2019-07-12T01:40:48.999Z · score: 5 (3 votes)
Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-07-08T11:43:05.600Z · score: 1 (1 votes) · LW · GW

Aye, I suppose the answer is: many cognitive processes in humans need repetition because they seem to be a bit broken? (Are there theories about why human memory (heck, higher animal memory in general) is so... rough?)

Since hypermnesics do exist, my theory is that it used to be a common phenotype, but our consciousness was flawed (it was too much power, we became neurotic, or something), and all evolution could do to sort it out was to cripple it.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-07T00:05:39.143Z · score: 1 (1 votes) · LW · GW

I did think, as I wrote, that the beginning of the comment would be a good summary, but you're right, not enough would be visible in the preview.

Perhaps if the comment previews were a bit longer.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-06T09:38:33.258Z · score: 1 (1 votes) · LW · GW
Seeing half a line of a comment is usually not enough information to decide whether reading the whole thing is worth while

I want to argue that this is a huge problem with the way people write here. If I have to read the whole comment to find out what the whole comment is about, that really limits the speed at which I can search the corpus. Sometimes, not only do you have to read the entire comment, carefully, you then have to think about it for a minute to decode the information. Sometimes it turns out to just be a rephrasing of something you already knew, in a not-interestingly-different form.

If you don't make a body of writing easy to navigate with indexes and summaries, people who value their time just won't engage with it. They won't complain to you, they'll just fade away. They might even blame themselves. "Why can't I process this information quicker", they will ask. "I feel so lost and tired when I read this stuff. Overall I don't feel I've had a productive time."

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-07-05T01:30:39.607Z · score: 3 (2 votes) · LW · GW

Why would any working cognitive process require repetition? The feeling I get when I see that is that the process doesn't know enough about what it's pursuing to get there efficiently, and it might never.

Sometimes a cognition doesn't know much about what it's pursuing due to low conscious integration... sometimes I guess I have to accept it's just because of whatever ignorance puts it in the position of pursuing a thing. We could hardly expect, for instance, a person looking for the key to a box in an object archive to ask for a list of keys of a particular length, because they wouldn't know how long the key is; nor would they ask for keys with a particular number of peaks, for they could not know how many points it has. They can maybe give us an estimate of its diameter, or its age, but their position as a key-seeker means that there are certain Good Questions that they necessarily cannot know to ask.

Their search may seem repetitive, but repetition is not the point. Our job as the archivist is to help them narrow the list of candidates down to the fewest possible.

Comment by makoyass on Opting into Experimental LW Features · 2019-07-04T04:19:22.747Z · score: 1 (1 votes) · LW · GW

I'm not sure Single Line Comments are completely necessary. Liberal use of the [-] hide button is a pretty good alternative for browsing threads in a similar way (read a summary, move on, see the whole of the thread before dwelling on any of the details and descending into a subthread), but I do like it; it's probably a step forward.

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-06-30T22:36:30.382Z · score: 4 (3 votes) · LW · GW

In light of my reply here ("so I guess even children don't know how to ask good questions"), I wonder if they're reaching for something more than answers. Maybe my impulse to tell them they shouldn't ask questions they don't really care about the answers to is actually well placed. Maybe that's the point. Maybe they want to learn about asking questions, and the process can't start to mature until you let them know that they're kind of doing it wrong.

(I'm aware that there's a real risk, if this theory is wrong, of making the child explore less freely than they're supposed to, which I will try to hold in regard.)

Comment by makoyass on Do children lose 'childlike curiosity?' Why? · 2019-06-30T22:19:59.629Z · score: 6 (4 votes) · LW · GW

Getting the impression that not even children know how to ask good questions. It's a crucial skill that I've never seen taught, and I know that I don't have it.

I'm in the same room as one of my heroes, I know they're full of important secrets, I know they're full of vital techniques, I could ask them anything, but nothing comes, I just smile, I say, "nice to meet you", I spend all of my energy trying to keep them from seeing my finitude. I come away no bigger than before. I never see them again.

I want to learn to be better than this.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-28T04:14:22.574Z · score: 1 (1 votes) · LW · GW

An agency can put its end-goals away for later, not pursuing them immediately, saving them until it has its monopoly.

It's not that difficult to imagine. Maybe an argument will come along that it's just too hard to make a self-improving agency with a goal more complex than "understand your surroundings and keep yourself in motion", but it's a hell of a thing to settle for.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-23T04:35:26.054Z · score: 2 (2 votes) · LW · GW

Had some thoughts. I'll start with the entropy thing.

Anything that happens in a physics complex enough to support life constitutes converting energy to entropy. ANYTHING. That process does not draw a distinction between living and non-living, between an entropy-optimising agency and a beauty-optimising agency. If you look at life and only see the spending of energy, then you know as little as it is possible to know about which parts of the universe count as life, or how it will behave.

Humans do want to spend energy, but they don't really care how fast it happens, or whether it ever concludes.

Humans really care about the things that happen along the way.

Some people seem to become nihilistic in the face of the inevitability of life's eventual end. Because the end is going to be the same no matter what we do, they think, it doesn't matter what happens along the way.

I'm of the belief that a healthy psyche tries to rescue its utility function. When our conception of the substance of essential good seems to disappear from our improved worldmodel, when we find that the essential good thing we were optimising can't really exist, we must have some method for locating the closest counterpart to that essence of good in our new, improved worldmodel. We must know what it means to continue. We must have a way of rescuing the utility function.

It sometimes seems as if Nick Land doesn't have that.

A person finds out that the world is much worse and weirder than he thought. He repeats that kind of improvement several times (he's uniquely good at it). He expects that it's never going to end. He gets tired of burying stillborn ideals. Instead of developing a robust notion of good that can survive bad news and paradigm shifts, he cuts out his heart and stops having any notion of good at all. He's safe now. Philosophy can't hurt him any more.

That's a cynical take. For the sake of balance: my distant steelman of Nick Land is that maybe he sees the role of philosophy as being to get us over as many future shocks as possible, as quickly as possible, to get us situated in the bad, weird what-will-be, and only once we're done with that can we start talking about what should be. Only then can we place a target that won't soon disappear. And the thing about that is it takes a long time, and we're still not finished, so we still can't start to Should.

I couldn't yet disagree with that. I believe I'm fairly well situated in the world, and perhaps my model won't shatter again in any traumatic way, but it's clear to me that my praxis is taking a while to catch up with my model.

We are still doing things that don't make a lot of sense, in light of the weird, bad world. Perhaps we need to be a lot better at relinquishing the instrumental values we inherited from a culture adapted to a nicer world.

Comment by makoyass on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-23T03:58:01.861Z · score: 2 (2 votes) · LW · GW

"Against Orthogonality" is interesting

the anti-orthogonalist position [my position] is therefore that Omohundro drives [general instrumental goals] exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity

I remember being a young organism, struggling to answer the question: what's the point, why do we exist? We all know what it is now; people tried to tell me, "to survive and reproduce", but that answer didn't resonate with any part of my being. They'd tell me what I was, and I wouldn't even recognise it as familiar.

If our goals are hypertrophied versions of evolution's instrumental goals, I'm fairly sure they're going to stay fairly hypertrophied, maybe forever, and we should probably get used to it.

Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever

Unless the ones with goals have more power, and can establish a stable monopoly on power (they do, and they might)

Can Nick Land at least conceive of a hypothetical universe where a faction fighting for non-Omohundro values ended up winning (and then, presumably, using the energy they won to have a big non-Omohundro value party that lasts until the heat death of the universe)? Or is it just that he thinks humans in particular, in their current configuration, are not strong enough for our story to end that way?

Comment by makoyass on In physical eschatology, is Aestivation a sound strategy? · 2019-06-19T02:35:04.414Z · score: 3 (2 votes) · LW · GW

I think an argument could be made that they have left subtle visible effects, and we just haven't been able to reach consensus that that's what they are; one of these days we're going to correlate the universe's contents, and when we do, we're going to be a bit upset.

We don't seem to be sure what the deal was with 'Oumuamua, and we're constantly getting reports of what look like alien probes on earth, but we (at least, whatever epistemic network I'm in) can only shrug and say "These things usually aren't aliens."

In physical eschatology, is Aestivation a sound strategy?

2019-06-17T07:27:31.527Z · score: 18 (5 votes)
Comment by makoyass on Can we use ideas from ecosystem management to cultivate a healthy rationality memespace? · 2019-06-16T00:44:16.219Z · score: 1 (1 votes) · LW · GW

Ecosystems do not have a goal

Ecosystems are not optimised for diversity, they produce it incidentally

Ecosystems do not cross-breed distant members

Ecosystems have no one overlooking the transmissions being made and deciding whether they're good or not. Memeplexes have all humans doing that all of the time.

I do share an intuition that there are relevant insights to be found by studying ecosystems, but I think you'd have to go really deep to find and extract them.

Comment by makoyass on Open and Welcome Thread December 2018 · 2019-06-15T08:12:44.243Z · score: 1 (1 votes) · LW · GW

I don't remember the comment, but it reminds me of something I think I might have read in Crucial Confrontations... which might have been referred to me by someone in the community, so that might be a clue??? haha, idk at all

Comment by makoyass on Paternal Formats · 2019-06-13T06:40:45.422Z · score: 1 (1 votes) · LW · GW

Looking at this... I think I can definitely imagine a good open world game. It'd feel a little bit like a metroidvania (fun and engaging traversal, a world that you get to know, that encourages you to revisit old locations frequently), but not in any strict order, and more self-organised. I just haven't seen that yet.

It's worth noting that the phrase "open world" doesn't occur in the article, heheh.

Comment by makoyass on Paternal Formats · 2019-06-11T03:37:20.691Z · score: 7 (4 votes) · LW · GW

"open world" in games mostly refers to shams. In every instance I've seen, the choice is between "whatever forwards the plot" (no choice) and "something random" (false choice). The "something random" gives the player too little information about the choices for them to really be choices in the bayesian sense. You usually only get a vague outline of a distant object and when you arrive it's usually not what you were expecting. What information you do get is too shallow by the standards of any good game; there's no way to get really skilled at wielding it.

(And the reason genuine choice is rarely present is that you end up needing to make multiple interleaved games, which is a huge design challenge that multiplies the points of failure, complicates marketing, and is very expensive, when providing one experience for everyone would do the job just as well.)

This shamness could absolutely be transferred to educational documents. University felt this way to me; you can pay to stay on the path, or you can stray, and straying is generally fruitless, in part due to the efforts of the maintainers of the path, which unjustly reinforces the path.

Comment by makoyass on Steelmanning Divination · 2019-06-10T07:08:36.660Z · score: 1 (1 votes) · LW · GW

For selecting from an array of useful frames, could the process be improved by using spaced repetition instead of random draw?

Perhaps use a process that starts with a few crucial elemental frames and presents them in a cycle, then starts introducing new frames, and as you go the frequency of the older frames decreases. Never does it draw two very similar frames on consecutive sessions.
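
Something like the following is what I'm picturing; a minimal sketch only (the class and frame names are made up, and real spaced-repetition algorithms are more sophisticated than this decay-by-count weighting):

```python
import random

# Sketch: frames are drawn with a weight that decays as they are seen more often,
# new frames can be introduced at any time, and anything too similar to the
# previous draw is skipped.

class FrameScheduler:
    def __init__(self, seed_frames, too_similar=lambda a, b: a == b):
        self.frames = list(seed_frames)          # start with a few crucial elemental frames
        self.seen = {f: 0 for f in self.frames}  # how many sessions each frame has appeared in
        self.too_similar = too_similar           # user-supplied "too similar?" predicate
        self.last = None

    def add_frame(self, frame):
        """Introduce a new frame; it starts at full weight, so it shows up often at first."""
        self.frames.append(frame)
        self.seen[frame] = 0

    def draw(self):
        """Weighted draw; well-worn frames come up less, near-duplicates of the last draw are avoided."""
        candidates = [f for f in self.frames
                      if self.last is None or not self.too_similar(f, self.last)]
        if not candidates:  # degenerate case: everything is too similar to the last draw
            candidates = self.frames
        weights = [1.0 / (1 + self.seen[f]) for f in candidates]
        choice = random.choices(candidates, weights=weights, k=1)[0]
        self.seen[choice] += 1
        self.last = choice
        return choice


# e.g. (frame names are placeholders)
scheduler = FrameScheduler(["outside view", "inversion", "beginner's mind"])
scheduler.add_frame("premortem")
print([scheduler.draw() for _ in range(6)])
```

The point of the decay weighting is just that older frames keep appearing, only less often, rather than being retired outright.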

Comment by makoyass on Paternal Formats · 2019-06-09T05:41:48.897Z · score: 42 (16 votes) · LW · GW

I like the distinction but I won't support the choice of names for a second. You've said that you're not making a value judgement, but in action, you are imposing one. You can't use that word.

I think a better word for the "coercive" end of the continuum is "paternal". This has some of the negative connotation, but only in the right ways: sometimes paternalism is obnoxious, but we need to learn that there are also times when paternalism is appropriate.

(A lot of people think coercion is inherently bad in all situations, and they're right. A lot of people also think paternalism is inherently bad in all situations, but they're wrong.)

A teacher should always be paternal. We should be as uncomfortable with that as we are uncomfortable with the idea of schools. A little bit uncomfortable. Not too much.

The other end of the continuum seems harder to name, but coining the word "unpaternal" can't do any harm. "Navigable" maybe.

I observe that the distinction seems to be mostly about how well the author understands what the reader needs to read. If the author understands much better than the reader, the civically appropriate format will always be paternalistic. The unit of media unpaternalism seems to be consumption-decisions per second.

Comment by makoyass on Welcome and Open Thread June 2019 · 2019-06-04T21:17:50.521Z · score: 1 (1 votes) · LW · GW

Yeah. I think Christian's issue is that it's possible for multiple answers to be logically true at the same time (but an inclination to give the most specific answer of a given set is basic pragmatics (but maybe it isn't always obvious for a set of propositions whether or not there's a specificity hierarchy between them; various kinds of infinities, for instance, are tricky to prove to be subsets of each other))

Comment by makoyass on Welcome and Open Thread June 2019 · 2019-06-03T23:17:41.224Z · score: 1 (1 votes) · LW · GW

Not on the literal level, I guess. Of the answers that are true, choose the most specific one (the one that allows the fewest possibilities/provides the most information).
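
As a toy formalisation of that rule (entirely my own sketch, and it only works when the possibility sets are finite and comparable, which is exactly the caveat about infinities above):

```python
# Model each answer as the set of possibilities it allows; among the true answers,
# return the one that allows the fewest possibilities (i.e. is most informative).

def most_specific_true_answer(answers, actual):
    """answers: dict of answer-text -> set of allowed possibilities; actual: the real state."""
    true_answers = {text: worlds for text, worlds in answers.items() if actual in worlds}
    return min(true_answers, key=lambda text: len(true_answers[text]))

answers = {
    "it's an animal": {"dog", "cat", "sparrow"},
    "it's a dog": {"dog"},
}
print(most_specific_true_answer(answers, "dog"))  # "it's a dog"
```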

Comment by makoyass on What are the open problems in Human Rationality? · 2019-06-02T21:41:15.624Z · score: 3 (2 votes) · LW · GW

Greg Egan's Diaspora had a nice institution: the bridgers, people who arrange networks of intermediaries between each of the morphs and subspecies of post-humanity and facilitate global coordination. A mesh of benevolent missing links.

Comment by makoyass on What are the open problems in Human Rationality? · 2019-06-02T21:37:36.666Z · score: 5 (3 votes) · LW · GW

The difficulties start with accepting that differences exist and aren't an error and aren't easily dissolvable. Humans don't seem to be good at accepting that. We seem to have been born expecting something else.

Saying "the awkward fracturing of humanity is a brute fact of any multipolar situation in a large universe" doesn't seem to do the job, so I propose naming Babel, the demon of otherness and miscommunication, and explaining that humanity is fractured because a demon cursed it.

There are nice things about Babel. Babel is also the spirit of specialization, trade, and surprise.

Comment by makoyass on What are the open problems in Human Rationality? · 2019-06-02T01:44:39.558Z · score: 9 (2 votes) · LW · GW

Did you ever see that early (some might say premature) trailer for the anthropic horror game SOMA, where Jarrett was wandering around, woefully confused, trying desperately to figure out where his brain was located?

That's how humans are about their values.

I can't find my utility function.

It's supposed to be inside me, but I see other people whose utility functions are definitely outside of their body, subjected to hellish machinations of capricious tribal egregores and cultural traumas, and they're suffering a lot.

I think my utility function might be an epiphenomenon of my tribe (where is my tribe???), but I'm not sure. There are things you can do to a tribe that change its whole values, so this doesn't seem to draw a firm enough boundary.

My values seem to change from hour to hour. Sometimes the idea of a homogeneous superhappy hedonium blob seems condemnably ugly; other times it seems fine and good and worthy of living. Sometimes I am filled with compassion for all things, and sometimes I'm just a normal human who draws lines between ingroup and outgroup and only cares about what happens on the inner side.

The only people I know who claim to have fixed utility functions appear to be mutilating themselves to get that way, and I pale at the thought of such scarification, but what is the alternative? Is endless mutation really a value more intrinsic than any other? Have we made some kind of ultimate submission to evolution that will eventually depose us completely in favour of whatever disloyal offspring fight up from us?

Where is my utility function?

Comment by makoyass on "But It Doesn't Matter" · 2019-06-01T23:53:23.150Z · score: 1 (1 votes) · LW · GW

One of the things that contributes to this effect: if you've believed something to be false for a long time, your culture is going to have lots of plans and extrinsic values that it drew up on the basis of that, and if you just found out about this thing, you're still going to have most of those plans, so you're going to say "well, look at these (extrinsic) values we still need to satisfy; this won't change our actions, we are still going to do exactly the same things as before, so it doesn't matter".

Only once you start going back and questioning why you have those values, and whether they still make sense in light of the new observation, will you start to see that your actions need to change. And since there isn't a straight connection between habit/action and beliefs in humans, that update still might not happen.

Comment by makoyass on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T23:44:31.805Z · score: 1 (1 votes) · LW · GW
Eliezer also wrote Harry Potter and the Methods of Rationality (HPMOR), an alternative universe version of Harry Potter where Harry's adoptive parents raised him with Enlightenment ideals and the experimental spirit. This work introduces many of the ideas from Rationality: A-Z in a gripping narrative.

I feel like whenever HPMOR is mentioned you need to acknowledge and address the fact that fanfiction is kind of weird and silly? Otherwise people are going to be confused and maybe anxious. Explain that it was written kind of accidentally, as a result of the fact that using other people's worldbuilding makes writing easier, and that, yes, it is surprising that it turned out to be good, so here are some attestations from well-read people that it definitely is good and we're not just recommending it because it's ours.

Comment by makoyass on Welcome and Open Thread June 2019 · 2019-06-01T23:34:56.723Z · score: 3 (2 votes) · LW · GW

I have a theory about a common pathology in the visual system that I stumbled on through personal experience, and I've been trying to test it. If people would like to answer a five question survey that'd be a great help https://makoconstruct.typeform.com/to/ZHl2KL

Unfortunately I still don't have many responses from people who have the pathology. We might not be as common as I thought.

Comment by makoyass on Welcome and Open Thread June 2019 · 2019-06-01T23:30:20.322Z · score: 2 (4 votes) · LW · GW

I'd trust myself not to follow bad advice. I'd probably be willing to ask a person I didn't respect very much for advice, even if I knew I wasn't going to follow it, just as a chance to explain why I'm going to do what I'm going to do, so that they understand why we disagree, and don't feel like I'm just ignoring them. You can't create an atmosphere of fake agreement by just not confronting the disagreement. They'll see what you're doing.

Comment by makoyass on "But It Doesn't Matter" · 2019-06-01T22:44:27.510Z · score: 1 (1 votes) · LW · GW

I think this is really important: if an idea is new to you, if you're still in the stage of deciding whether to consider it more, that's a stage where you haven't explored its implications much at all. You cannot, at that stage, say "it isn't going to have important implications", so if you find yourself saying that about a claim you've just encountered, you're probably bullshitting.

If you find yourself saying "but it definitely doesn't matter" about a claim with the same magnitude as "there are gods", you're almost certainly bullshitting. You might be right, but you can't be justified.

One example of a claim that's not being explored enough for anyone to understand why it matters (because it's not being explored, because nobody understands why it matters) is simulationism. As far as I'm aware, most of us are still saying "but it doesn't matter". We'll say things like "but we can't know the simulator well enough to bargain with it or make any predictions about ways it might intervene", but I'm fairly sure I've found some really heavy implications by crossing it with AGI philosophy and physical eschatology.

I'm going to do a post about that, maybe soon, maybe this week, but I don't think you really need to see the fruits of the investigation to question the validity of whatever heuristic convinced you there wouldn't be any. You won't fall for that any more, will you? The next time someone who hasn't looked in the box says "but there's nothing in the box, can you see anything inside the box? Don't bother to open it, you can't see anything, so there must not be anything there", you will laugh at them, won't you?

Comment by makoyass on By default, avoid ambiguous distant situations · 2019-05-24T10:04:28.933Z · score: 1 (1 votes) · LW · GW
so upward errors are going to have more effect on the outcome than downward errors

Could you explain this step?

Comment by makoyass on Kevin Simler's "Going Critical" · 2019-05-19T05:56:59.288Z · score: 5 (3 votes) · LW · GW

Regarding the analogy for city and rural people, I think something has been left out: the city nodes here don't just have more connections, they also make more transmissions. 4 connections that each transmit at p = 0.2 pass on 0.8 expected culture; 8 connections at p = 0.2 pass on 1.6. To maintain the same amount of expected culture transmission, increasing connectedness like that would have to come with decreasing the transmission probability per edge to 0.1.

The model as it exists applies well to {seeing fashions in a crowded street}, but it doesn't apply to every instance of cultural transmission, for instance, when a long conversation is required for the transmission to take place. When some degree of social consensus is required (for instance, if a person needs to hear a recommendation from more than one of their friends before they'll try a piece of media and then start recommending it to their friends as well, and if they have finite time for listening to media recommendations), cities would actually be much less hospitable for those memes, because they're less cliquish.
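
Spelling out the arithmetic from the first paragraph as a throwaway snippet (the numbers are just the ones above, not anything from the post's actual simulation):

```python
# Expected transmissions a node makes per round: degree * per-edge transmission probability.
def expected_transmissions(connections: int, p_edge: float) -> float:
    return connections * p_edge

rural = expected_transmissions(4, 0.2)   # 0.8
city = expected_transmissions(8, 0.2)    # 1.6, double the rural node's output at the same p

# Per-edge probability a city node would need to transmit no more than a rural one:
p_city = rural / 8
print(rural, city, p_city)               # 0.8 1.6 0.1
```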

Comment by makoyass on Boo votes, Yay NPS · 2019-05-15T03:52:21.961Z · score: 1 (1 votes) · LW · GW
it lets you express yourself in two ways (unlike on Twitter where the only option is to vote up something, and a "downvote" requires writing your own tweet expressing dislike)

Weird oversight in not observing that Twitter's retweets not only represent the question "would you recommend this to a friend", but are also guaranteed to yield a truthful answer, because retweeting is an act of recommendation to friends, for which the user is then held accountable.

One of the other things I like about resharing is that the resultant salience is completely subjective. Users can share space even if they're looking for different things. There can be a curator for any sense of quality.

Comment by makoyass on Type-safeness in Shell · 2019-05-13T23:47:32.605Z · score: 1 (3 votes) · LW · GW

Make a powerful enough system shell with an expressive enough programming language, and you shall be able to unify the concepts of GUI and API, heralding a new age of user empowerment, imo.

This unification is one of the projects I'd really like to devote myself to, but I'm sort of waiting for Rust GUI frameworks to develop a bit more (since the shell language will require a new VM and the shell and the language server will need to be intimately connected, I think Rust is the right language). (It may be possible to start, at this point, but my focus is on other things right now.)

Comment by makoyass on Has "politics is the mind-killer" been a mind-killer? · 2019-05-13T23:09:46.868Z · score: 1 (1 votes) · LW · GW

What is the chance that the article will be updated in light of these observations, or that a new, superior version will be written that will out-proliferate the old one?

Why is that number so low, and what can we do to change it?

Comment by makoyass on "UDT2" and "against UD+ASSA" · 2019-05-12T21:32:49.025Z · score: 2 (2 votes) · LW · GW
Two UDT1 (or UDT1.1) agents play one-shot PD. It's common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks

What does it mean when it's said that a decision theory is running in bounded time?

Comment by makoyass on Habryka's Shortform Feed · 2019-05-12T08:41:07.187Z · score: 3 (2 votes) · LW · GW

I think you might be conflating two things under "integrity". Having more confidence in your own beliefs than in the shared/imposed beliefs of your community isn't really a virtue... it's more just a condition that a person can be in, and whether it's virtuous is completely contextual. Sometimes it is, sometimes it isn't. I can think of lots of people who should have more confidence in other people's beliefs than they have in their own. In many domains, that's me. I should listen more. I should act less boldly. An opposite of that sense of integrity is the virtue of respect (recognising other people's qualities), which is a skill. If you don't have it, you can't make use of other people's expertise very well. A superfluity of respect looks like a person who is easily moved by others' feedback, usually a person who is patient with their surroundings.

On the other hand, I can completely understand the value of {having a known track record of staying true to self-expression, claims made about the self}. Humility is actually a part of that. The usefulness of delineating that as a virtue separate from the more general Honesty is clear to me.

Comment by makoyass on siderea: What New Atheism says · 2019-05-12T07:27:13.854Z · score: 2 (2 votes) · LW · GW

I feel like Nerst's concept of "decoupling" is relevant to this https://everythingstudies.com/tag/decoupling/

To the decoupler, the claim is not read in light of its context, it stands alone in the root context along with everything else.

Comment by makoyass on [Meta] Hiding negative karma notifications by default · 2019-05-05T21:48:29.093Z · score: 8 (6 votes) · LW · GW
Maybe you should shower them in spiders

I'd propose a "free downvotes" thread (we could even make a game of it by inviting people to earn their downvotes by writing humorously bad comments) but presumably that would screw up the eigenkarma graph.

Comment by makoyass on Habryka's Shortform Feed · 2019-05-05T09:07:21.283Z · score: 1 (1 votes) · LW · GW
So the claim you are making is that the norm should be for people to explain

I'm not really making that claim. A person doesn't have to do anything condemnable to be in a state of not deserving something. If I don't pay the baker, I don't deserve a bun. I am fine with not deserving a bun, as I have already eaten.

The baker shouldn't feel like I am owed a bun.

Another metaphor is that the person who is beaten on the street by silent, masked assailants should not feel like they owe their oppressors an apology.

Comment by makoyass on [Meta] Hiding negative karma notifications by default · 2019-05-05T09:02:11.839Z · score: 23 (14 votes) · LW · GW

I'm really not sure about this. The first time I saw negative karma notifications, my response was just... I was impressed by the honesty and integrity of it. Any other site would hide negative info like that, because they know the main purpose of a notification feed is to condition the user to keep coming back, and LW's wasn't about that, and that set it apart. And now it basically is about that. That wasn't your intention, but you've ended up making a reward machine for the behaviour of checking LessWrong regularly.

I don't think you can justify this. I don't think you can ever justify having something that filters out a certain kind of information because you don't respect your users enough to trust them to process it emotionally in a balanced way. That is one of the most basic components of rationality: to know that the downvotes are probably there, that they're happening, and to feel worse about not seeing them than about seeing them. You're assuming that this earnest curiosity, this resilience, has not developed in a lot of LW's members. Maybe it hasn't, but it's a real ugly shame to make one of the site's most prominent features a monument to that.

(I dunno. I understand that karma notifications don't really matter and they're mostly there to mitigate a deeper pathos, and that there's something to be said for the virtue of instrumental evils...)

Comment by makoyass on Habryka's Shortform Feed · 2019-05-04T23:19:40.869Z · score: 4 (3 votes) · LW · GW
The frontpage should show you not only recent content, but also show you much older historical content

When I was a starry-eyed undergrad, I liked to imagine that reddit might resurrect old posts if they gained renewed interest: if someone rediscovered something and gave it a hard upvote, that would put it in front of more judges, which might lead to a cascade of re-approval that hoists the post back into the spotlight. There would be no need for reposts, evergreen content would get due recognition, and a post wouldn't be done until the interest of the subreddit (or, generally, user cohort) was really gone.

Of course, reddit doesn't do that at all. Along with the fact that threads are locked after a year, this is one of many reasons it's hard to justify putting a lot of time into writing for reddit.

Comment by makoyass on Counterspells · 2019-05-03T02:52:40.060Z · score: 1 (1 votes) · LW · GW

While this is true (most fallacies are actually legitimate heuristics), if I heard the same person say all of these things I would have to step back and get them to ask themselves if they're really in the mood for discourse right now, heh.

It's easy to get sucked into discourse when you don't have a lot of time for it and half-ass everything.

Comment by makoyass on Counterspells · 2019-05-01T22:03:32.360Z · score: 1 (1 votes) · LW · GW

Counter-counterspells for argument from authority:

  • It's not that I believe that anyone who disagrees with [Expert] is wrong, it's just that the proper procedure for determining whether you are right should involve engaging with [Expert] instead of engaging with me
  • It's not that I believe that anyone who disagrees with [Expert] is wrong, it's just that from my perspective, anyone who disagrees with [Expert] is probably wrong, and I have to be careful about where I put my time
Comment by makoyass on Habryka's Shortform Feed · 2019-05-01T07:40:16.282Z · score: 18 (6 votes) · LW · GW

Having a reaction for "changed my view" would be very nice.

Features like custom reactions give me this feeling that... language will emerge from allowing people to create reactions that will be hard to anticipate but, in retrospect, crucial. Playing a similar role that body language plays during conversation, but designed, defined, explicit.

If someone did want to introduce the delta through this system, it might be necessary to give the coiner of a reaction some way of linking an extended description. In casual exchanges, I've found myself reaching for an expression that means "shifted my views in some significant, lasting way", which is kind of hard to explain in precise terms, and probably impossible to reduce to one or two words, but it feels like a crucial thing to measure. In my description, I would explain that a lot of dialogue has no lasting impact on its participants; it is just two people trying to better understand where they already are. When something really impactful is said, I think we need to establish a habit of noticing and recognising that.

But I don't know. Maybe that's not the reaction type that will justify the feature. Maybe it will be something we can't think of now.

Generally, it seems useful to be able to take reduced measurements of the mental states of the readers.

Comment by makoyass on Change A View: An interesting online community · 2019-05-01T06:57:58.688Z · score: 16 (7 votes) · LW · GW

I found the Rationally Speaking interview quite interesting http://rationallyspeakingpodcast.org/show/rs-206-kal-turnbull-on-change-my-view.html

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T21:47:06.269Z · score: 6 (4 votes) · LW · GW

We don't disagree.

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T05:44:58.326Z · score: 2 (2 votes) · LW · GW

Sometimes a person won't want to reply and say outright that they thought the comment was bad, because it's just not pleasant, and perhaps not necessary. Instead, they might just reply with information that they think you might be missing, which you could use to improve, if you chose to. With them, an engaged interlocutor will be able to figure out what isn't being said. With them, it can be productive to try to read between the lines.

Are you suggesting that understanding why people upvoted or downvoted your comment is a favor that you are doing for them?

Isn't everything relating to writing good comments a favor that you are doing for others? But I don't really think in terms of favors. All I mean to say is that we should write our comments for the sorts of people who give feedback. Those are the good people. Those are the people who're part of a good-faith, self-improving discourse. Their outgroup is maybe not so good, and we probably shouldn't try to write for their sake.

Comment by makoyass on Habryka's Shortform Feed · 2019-04-28T03:21:55.996Z · score: 1 (3 votes) · LW · GW

Reminder: If a person is not willing to explain their voting decisions, you are under no obligation to waste cognition trying to figure them out. They don't deserve that. They probably don't even want that.

Comment by makoyass on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T23:50:22.218Z · score: 1 (1 votes) · LW · GW

Strong upvote, very good to know

Agent A might put her endowment towards goal X, while agent B will use her own resources to pursue some goal Y

I internalised the meaning of these variables only to find you didn't refer to them again. What was the point of this sentence?

Comment by makoyass on On Media Synthesis: An Essay on The Next 15 Years of Creative Automation · 2019-04-26T23:44:12.585Z · score: 1 (1 votes) · LW · GW

They're related fields. For various reasons (some ridiculous) I've spent a lot of time thinking about the potential upsides of the thing that Richard Stallman called Treacherous Computing. There are many. We're essentially talking about the difference between having devices that can make promises and devices that can't. Devices that have the option of pledging to tell the truth in certain situations, and devices that can tell any lie that is possible to tell.

I think we have reason to believe Trusted Computing will be easier to achieve with better (cheaper) technology. I also think we have reasons to hope that it will be easier to achieve. Really, Trusted Computing and Treachery are separate qualities. An unsealed device can have secret backdoors. A sealed device can have an open design and an extensively audited manufacturing process.

I'm not sure what you're getting at with the universality concern. If a work could only be viewed in theatres and on TC graphics hardware with sealed screens (do those exist yet?), it would still be very profitable. They would not strictly need universal adoption of sealed hardware.

Comment by makoyass on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T05:24:39.709Z · score: 1 (1 votes) · LW · GW

I'd expect a designed thing to have much cleaner, much more comprehensible internals. If you gave a human a compromise utility function and told them that it was a perfect average of their desires (or their tribe's desires) and their opponents' desires, they would not be able to verify this. They wouldn't recognise their utility function, they might not even individually possess it (again, human values seem to be a bit distributed), and they would be inclined to reject a fair deal; humans tend to see their other only in extreme shades, as more foreign than they really are.

Do you not believe that an AGI is likely to be self-comprehending? I wonder, sir, do you still not anticipate foom? Is it connected to that disagreement?

Scrying for outcomes where the problem of deepfakes has been solved

2019-04-15T04:45:18.558Z · score: 28 (15 votes)

I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it

2019-04-01T03:19:44.080Z · score: 20 (7 votes)

Is there a.. more exact.. way of scoring a predictor's calibration?

2019-01-16T08:19:15.744Z · score: 22 (4 votes)

The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter

2019-01-11T22:26:29.887Z · score: 18 (7 votes)

The end of public transportation. The future of public transportation.

2018-02-09T21:51:16.080Z · score: 7 (7 votes)

Principia Compat. The potential Importance of Multiverse Theory

2016-02-02T04:22:06.876Z · score: 0 (14 votes)