Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-20T17:09:33.611Z · score: 2 (1 votes) · LW · GW

That is a bit more specific than what I meant. In this case, though, the second, broader meaning - "someone who's trying to gum up the works of social decisionmaking" - still works in the context of the comment.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-20T16:36:04.881Z · score: 4 (2 votes) · LW · GW

By werewolf I meant something like "someone who is pretending to be working for the community as a member, but is actually working for their own selfish ends". I thought Jessica was using it in the same way.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-20T10:54:48.939Z · score: 6 (3 votes) · LW · GW

Sorry, I was trying to be really careful, as I was writing, not to accuse you specifically of bad intentions, but obviously that's hard in a conversation like this where you're jumping between the meta and the object level.

It's important to distinguish a couple of things.

1. Jessica and I were talking about people with negative intentions in the last two posts. I'm not claiming that you're one of those people who deliberately use this type of argument to cause harm.

2. I'm not claiming that the writing of those two posts was harmful in the way we were talking about. I was claiming that the long post you wrote at the top of the thread, where you made several analogies about your response, was exactly the sort of gray-area situation where, depending on context, the community might decide to sacrifice its sacred value. At the same time, you were banking on its being a sacred value when you said "even in this case, we would uphold the sacred value." This has the same structure as the werewolf move mentioned above, and it was important for me to speak up, even if you're not a werewolf.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-20T02:38:16.197Z · score: 5 (3 votes) · LW · GW

Another common werewolf move is to take advantage of strong norms like epistemic honesty, using them to drive wedges into a community or push an agenda, while knowing they can't be called out, because doing so would be akin to attacking the community's norms.

I've seen the meme elsewhere in the rationality community that strong and rigid epistemic norms are a good sociopath repellent, and it's ALMOST right. The truth is that competent sociopaths (in the Venkat Rao sense) are actually great at using rigid norms - and the truth itself - for their own ends. The reason the meme might hold in the rationality community (besides the obvious fact that sociopaths are even better at using lies than the truth) is that strong epistemics are very close to what we're actually fighting for - and remembering and always orienting towards the mission is ACTUALLY an effective first line of defense against sociopaths (necessary but not sufficient, IMO).

99 times out of 100, the correct way to remember what we're fighting for is to push for stronger epistemics above other considerations. I knew that when I made the original post, and I made it knowing I would get pushback for attacking a core value of the community.

However, 1 time out of 100, the correct way to remember what you're fighting for is to realize that you have to sacrifice a sacred value for the greater good. And when you see someone explicitly pushing the gray area, trying to get you to accept harmful situations by appealing to that sacred value, it's important to make clear (mostly to other people in the community) that sacrificing that value is an option.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-20T00:53:38.020Z · score: 2 (1 votes) · LW · GW
It's important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren't maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).

I agree it's important to realize that these things are fundamentally different.

It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It's still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.

A better norm for mathematics might be to NOT publish proofs that have obvious negative consequences, like enabling weapons of mass destruction, and to actively disincentivize people who publish that sort of research.

In other words, the norm might be: be epistemically pure, UNLESS the local instrumental considerations outweigh the cost to the epistemic climate. This can be rounded down to "have norms about epistemics and break them sometimes," but only if, when someone points at edge cases where the norms are actively harmful, they can be told plainly that sometimes breaking those norms is perfectly OK.

IE, if someone is using the norms of the community as a weapon, it's important to point out that the norms are a means to an end, and that the community won't blindly allow itself to be taken advantage of.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-19T21:20:51.459Z · score: 2 (1 votes) · LW · GW

It might just make more sense to give this one up to word inflation and come up with new words. I'll happily use the denotative vs. enactive language to point to this thing in the future, but I'll probably have to add a footnote that says something like "(what most people in the community refer to as decoupling vs. contextualizing)".

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-19T21:18:40.418Z · score: 6 (3 votes) · LW · GW
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?

It might be that we just have different definitions of absurd and you're not missing anything, or it could be that you're taking an extreme version of what I'm saying.

To wit, my stance is that ignoring the consequences of what you say is just obviously wrong. Even if you hold truth as a very high value, you would have to value it insanely more than every other value to never encounter a situation where ignoring the difference you could make - by not saying something, lying, or being careful about how you phrase things - compromises other things you value.

Now obviously, you also have to consider the effect this type of thinking/communication has on discourse and on the public ability to seek the truth - and once you've done that, you're ALREADY thinking about the consequences of what you say and what you allow others to say, and the task at that point is simply to weigh them against each other.

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-19T18:32:35.522Z · score: 4 (2 votes) · LW · GW
Your points 1, 2, 3 have nothing to do with the epistemic problem of decoupling vs contextualizing,

This is probably because I don't know what the epistemic problem is. I only know about the linked post, which defines things like this:

Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation - free of any context or potential implications. Attempts to raise these issues are often seen as sloppy thinking or attempts to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or even an intentional evasion.
... To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser's insistence that this isn’t possible looks like naked bias and an inability to think straight.

I sometimes round this off in my head to something like "pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits".

There might be another use of decoupling and contextualizing that applies to an epistemic problem, but if so it's not defined in the canonical article on the site.

My basic read of Zack's entire post was him saying over and over "Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good." And my immediate reaction to that was "No I don't, and that's a bad norm."

Comment by mr-hire on Comment section from 05/19/2019 · 2019-05-19T08:58:40.960Z · score: 9 (8 votes) · LW · GW

I'm not sure what your hobby horse is, but I do take exception to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things. The problem with this is that if you actually care about the world, you can't take arguments in isolation; you have to consider the context in which they are made.

1. It can be perfectly OK for someone to bring up a topic once, but it can make people less likely to want to visit the forum if they bring it up all the time and try to twist other people's posts towards a discussion of their thing. It would be perfectly alright for moderators who didn't want to drive away their visitors to ask this person to stop.

2. It can be perfectly OK to kick out someone who has a bad reputation that makes important posters unable to post on your website because they don't want to associate with that person, even IF that person has good behavior.

3. It can be perfectly OK to downvote posts that are well-reasoned, on topic, and not misleading, because you're worried about the incentives of those posts being highly upvoted.

All of these things are obviously tradeoffs against decoupled conversation, which has its own benefits. The website has to decide what values it stands for and will fight for, vs. what it will be flexible on depending on context. What I don't think is OK is to just ignore context and assume that decoupling is always unambiguously the right call.

Comment by mr-hire on The Relationship Between the Village and the Mission · 2019-05-18T13:58:41.241Z · score: 4 (2 votes) · LW · GW

That makes sense. I think I was tripped up by your use of the words "is" and "bad", both of which are ambiguous. Things that might have helped me get your meaning are swapping "is" for "feels", swapping "bad" for "aversive" or "unpleasant", and adding the qualifier "for me" or "for many people".

Of course, if you were under the impression that this is a near-universal aversion, it makes less sense to make any of those changes. I suspect that assumption also underlies the miscommunication over why people didn't address the "change is aversive" objection in the original post - they typical-mind fallacied that change was neutral or good, and you did the reverse.

Comment by mr-hire on The 3 Books Technique for Learning a New Skill · 2019-05-18T07:57:44.861Z · score: 2 (1 votes) · LW · GW

The old-fashioned way, I suppose: going by the reputation of the creator and the description they provide. I do think readily available reviews often make it worth going with a medium that has them, though.

Comment by mr-hire on The Relationship Between the Village and the Mission · 2019-05-17T17:40:47.700Z · score: 6 (3 votes) · LW · GW

I don't know how a personal value judgement fits in with your talk about a "burden of justification." Why should someone feel the need to justify against your personal value judgement that change is bad? They simply have a different value judgement than you.

Comment by mr-hire on The Relationship Between the Village and the Mission · 2019-05-17T09:14:07.621Z · score: 10 (5 votes) · LW · GW
Indeed not, as it is not an assumption at all!

What is it then? You beg the question again by assuming it while trying to show how it's not an assumption.

not only anything that got worse, but also the inherent badness of changing something! You “start with a negative score”,

This isn't an argument, it's just restating the premise. To see this, change all instances of "change is bad" to "change is good" in your argument, and notice how the entire thing is still coherent: you start with a positive score for the change, because of the inherent goodness of change, and so on...

What makes a scientific fact 'ripe for discovery'?

2019-05-17T09:01:32.578Z · score: 8 (2 votes)
Comment by mr-hire on Why exactly is the song 'Baby Shark' so catchy? · 2019-05-17T08:46:45.195Z · score: 4 (2 votes) · LW · GW

I don't have a specific answer, but I do have two keywords that will get you most of the research:

  • Earworm
  • Involuntary Musical Imagery

In addition, I think that the Baby Shark video underwent viral/marketing dynamics apart from its inherent catchiness, so any explanation will have to take that into account as well.

Comment by mr-hire on Boo votes, Yay NPS · 2019-05-17T08:42:42.564Z · score: 4 (2 votes) · LW · GW
..."would you recommend this to a friend", but are also guaranteed to yield a truthful answer, because retweeting is an act of recommendation to friends, for which the user is then held accountable.

Note that even in this relatively straightforward case, the meaning of a retweet can drift, from "I would recommend this to a friend" to "I agree with this". Because of this confusion I sometimes have to be careful about what I retweet, and I hesitate to retweet things I would otherwise recommend people read, because I don't want people thinking I agree with them.

Comment by mr-hire on The Relationship Between the Village and the Mission · 2019-05-17T08:11:28.904Z · score: 8 (2 votes) · LW · GW
Change is bad.

I can certainly see a few reasons why one could have this assumption, but assuming it without arguing it in this case seems to be begging the question.

Comment by mr-hire on Integrating disagreeing subagents · 2019-05-16T17:28:06.333Z · score: 4 (2 votes) · LW · GW

Connection Theory, from Leverage Research.

Comment by mr-hire on Integrating disagreeing subagents · 2019-05-15T20:43:12.677Z · score: 2 (1 votes) · LW · GW

If Eliezer's goals or beliefs are learned, then it applies. Anything that is learned can be unlearned with memory reconsolidation, although it seems to be particularly effective with emotional learning. An interesting open question is "are humans born with internal conflicts, or do they only result from subsequent learning?" After playing around with CT Charting and Core Transformation with many clients, I tend to think the latter, but if the former is true then memory reconsolidation won't help for those innate conflicts.

Comment by mr-hire on Integrating disagreeing subagents · 2019-05-15T07:43:58.066Z · score: 7 (3 votes) · LW · GW

In Unlocking the Emotional Brain, Bruce Ecker argues that the same psychological process is involved in all of the processes you mentioned above - memory reconsolidation (which is also the same process that electroconvulsive therapy is accidentally triggering).

According to Ecker, there are 3 steps needed to trigger memory reconsolidation:

1. Reactivate. Re-trigger/re-evoke the target knowledge by presenting salient cues or contexts from the original learning.

2. Mismatch/unlock. Concurrent with reactivation, create an experience that is significantly at variance with the target learning’s model and expectations of how the world functions. This step unlocks synapses and renders memory circuits labile, i.e., susceptible to being updated by new learning.

3. Erase or revise via new learning. During a window of about five hours before synapses have relocked, create a new learning experience that contradicts (for erasing) or supplements (for revising) the labile target knowledge.

There are some problems with the theory that memory reconsolidation is what's going on in experiential therapies like Focusing, IFS, and exposure therapy - chief among them, IMO, that in animal studies reconsolidation needs to happen within hours of the original learning (whereas in these therapies it can happen decades later).

However, I've found the framework incredibly useful for figuring out the essential and non-essential parts of the therapies mentioned above, for troubleshooting when a shift isn't happening with coaching clients, and for creating novel techniques and therapies that apply the above 3 steps in the most straightforward way possible.

Comment by mr-hire on Narcissism vs. social signalling · 2019-05-12T08:30:42.651Z · score: 10 (2 votes) · LW · GW
There's no such thing as "convincing yourself" if you're an agent, due to conservation of expected evidence.

Is your claim that the actual way the brain works is close enough to Bayesian updating that this is true?

Comment by mr-hire on Why books don't work · 2019-05-12T00:15:46.401Z · score: 8 (5 votes) · LW · GW
There is no royal road to knowledge. One has to engage with a book in order to retain not just the conclusions of the book, but also the reasoning that led to the conclusions.

But what if there were? Certainly with hard work and deep reading you can learn a good amount from books. However, the central point of the piece is that this is not the optimal way to learn. What if other mediums could do a lot of this work for you, letting you learn more material in less time?

Comment by mr-hire on Disincentives for participating on LW/AF · 2019-05-11T01:04:44.949Z · score: 29 (12 votes) · LW · GW

Yes, one of the frustrating things is getting criticism that just feels like "this is just not the conversation I want to be having." I'm trying to discuss how this particular shade of green affects the aesthetics of this particular object, but you're trying to talk to me about how green doesn't actually exist, and blue is the only real color. It's understandable, but it's just frustrating.

Comment by mr-hire on Counterspells · 2019-05-04T16:17:49.758Z · score: 1 (1 votes) · LW · GW
Ultimately they have no bearing on whether or not the topic of discussion is true or false. Certainly I could tell someone, "Your belief in a flat earth makes me not interested in trusting your thoughts on homeopathy", and I would be right to do so. But homeopathy is still true or false regardless of this person's other unconventional beliefs.

But the purpose of our discussion is to change my mind about homeopathy. There's something of a frequentist epistemology behind saying that homeopathy is true or false regardless of your other beliefs - it's certainly true, but that doesn't help me make up my mind about:

1. Whether or not it's true.

2. Whether or not it's useful to discuss this specific aspect.

Counter-counter-spells are a way of pointing out when a bias is actually a heuristic. Your Martin Luther King example isn't such a case, but there are certainly many cases where it is a good heuristic.

Comment by mr-hire on Counterspells · 2019-05-02T20:23:33.753Z · score: 4 (3 votes) · LW · GW

We can probably do counter-counter-spells for all of these:

Ad Hominem:

Based on my previous experience, the fact that this person is (x) provides evidence against their argument, and I have a limited amount of time to analyze every argument that comes my way.

Response to tone:

People who speak with (tone) usually aren't taking other arguments seriously or making good arguments, and I have a limited amount of time to analyze every argument that comes my way.

Non-Central Fallacy:

This thing is of class (x), things of class (x) are usually bad, and I have a limited amount of time to analyze every argument that comes my way.

Comment by mr-hire on Change A View: An interesting online community · 2019-05-02T12:18:48.477Z · score: 12 (4 votes) · LW · GW

I've participated a bit in the Change My View subreddit, both asking and answering questions, and found it very rewarding to change someone's mind and have my mind changed. I've found that the type of thinking I've learned in the rationality, post-rationality, and EA communities has allowed me to engage there with a clarity of thought that's rare.

Comment by mr-hire on S-Curves for Trend Forecasting · 2019-04-26T12:28:04.506Z · score: 1 (1 votes) · LW · GW

Agree, this misconception (and seeing it everywhere) is one of the things that made me write the article (particularly the part about "exponential growth vs. s-curves").

The other side of it is when people think that trends are made of a single s-curve, and conclude that when growth is slowing down the trend is done forever, rather than recognizing it as simply the start of another s-curve once the constraint is defeated.
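To make the stacked-s-curve picture concrete, here's a minimal sketch (my own illustration, with invented parameters, not from the original post): a trend modeled as the sum of two logistic curves, where growth slows as the first curve saturates and then picks up again when a second curve takes off.

```python
import numpy as np

def logistic(t, ceiling, rate, midpoint):
    """A single s-curve: growth accelerates, then saturates at `ceiling`."""
    return ceiling / (1 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 20, 400)

# First s-curve saturates around t=5; a second one, unlocked once the
# constraint is defeated, takes off around t=12. (Invented numbers.)
trend = logistic(t, ceiling=100, rate=1.5, midpoint=5) + \
        logistic(t, ceiling=300, rate=1.0, midpoint=12)

growth = np.gradient(trend, t)
print(f"growth during the lull (t=8):      {growth[np.searchsorted(t, 8)]:.1f}")
print(f"growth as curve two starts (t=12): {growth[np.searchsorted(t, 12)]:.1f}")
```

The point of the toy model is that the dip in the growth rate between the two midpoints looks like "the trend is over" if you assume a single s-curve.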

Comment by mr-hire on S-Curves for Trend Forecasting · 2019-04-25T19:03:50.040Z · score: 1 (1 votes) · LW · GW

Agree, recognizing constraints before they limit you is key if you're looking to create growth (or alternatively, adding multiple constraints if you're looking to slow growth).

Comment by mr-hire on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T18:07:16.124Z · score: 3 (3 votes) · LW · GW
Paradoxically, if a person doesn't sign up for cryonics and expresses the desire not to be resurrected by other means, say, resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn't care about her decisions. Many of these worlds are s-risks worlds.

This seems to depend on how much weight you put on resurrection being possible without being frozen. Many people consider the probability of resurrection even with freezing to be negligible, and without freezing to be zero. If that's how your probabilities fall, then the chance of s-risk has less to do with whether the AI cares about your decisions, and more to do with whether the AI is physically able to resurrect you.

Comment by mr-hire on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T18:01:52.288Z · score: 2 (2 votes) · LW · GW
Whatever answer you give it should be the same as to the question "How do S-Risk scenarios impact the decision to wear a seat belt when in a car" since both actions increase your expected lifespan and so, if you believe that S-Risks are a threat, increase your exposure to them.

This only seems to apply if you have a constant probability for S-risk scenarios. If you think they're more likely in the far future, then the calculation should be quite different.
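A toy calculation of why the shape of the hazard matters (entirely made-up numbers, just to show the structure of the argument): integrate a yearly s-risk hazard over the extra years each intervention buys.

```python
# Hypothetical intervals: a seat belt buys you near-term years,
# cryonics buys you years in the far future. All numbers invented.
seatbelt_years = range(2030, 2070)
cryonics_years = range(2200, 2400)

def exposure(years, hazard):
    """Total s-risk exposure accumulated over the years an intervention adds."""
    return sum(hazard(y) for y in years)

constant_hazard = lambda year: 1e-6                      # same risk every year
rising_hazard = lambda year: 1e-9 * max(0, year - 2030)  # risk grows over time

for name, hazard in [("constant", constant_hazard), ("rising", rising_hazard)]:
    print(f"{name:8s} hazard - seatbelt: {exposure(seatbelt_years, hazard):.1e}, "
          f"cryonics: {exposure(cryonics_years, hazard):.1e}")
```

Under the constant hazard the two interventions differ only in years gained; under the rising hazard the far-future years are disproportionately exposed, which is the asymmetry the comment is pointing at.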

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-17T13:05:06.384Z · score: 2 (2 votes) · LW · GW

I've never seen Socratic questioning work in person, because it's always clear there's a trap coming, and people don't want to be trapped into taking views as the questions slowly destroy the nuance of their positions. It's even worse here.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-17T13:02:27.254Z · score: 1 (1 votes) · LW · GW

Downvoted for not being Socratic.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-16T10:59:46.544Z · score: 1 (1 votes) · LW · GW

Through comparing it to other similar problems, understanding the number of factors involved, asking people who have worked on similar problems, or many other methods.

Comment by mr-hire on The Hard Work of Translation (Buddhism) · 2019-04-15T21:01:27.246Z · score: 6 (3 votes) · LW · GW

I don't mind speculation, just the wild kind. Putting forward a hypothesis based on your own experience is incredibly useful, especially if it makes novel predictions that people can then go test themselves.

I don't know how, for instance, Romeo would have gotten the "electrical resistance" thing from his own experience, but many of the other tools I can see him arriving at by noticing similarities between what happens in meditation and various other psychotherapies he has tried or read about. This is actually one place where I expect phenomenological experience to provide valuable hypotheses, which is why I think this sort of post can be credible enough to make testing/further exploring the hypotheses worthwhile.

Comment by mr-hire on The Hard Work of Translation (Buddhism) · 2019-04-15T16:14:08.923Z · score: 14 (4 votes) · LW · GW
On what other basis, then, are we to believe any of this stuff about rewiring neurons, “electrical resistance = emotional resistance”, etc. etc.? We’ve been told that there’s no evidence whatsoever for any of it and that Romeo got the idea for the latter claim, in particular, from literally nowhere at all. So we can’t believe any of this on the basis of evidence, because we’ve been given none, and told that none is forthcoming. And you say we’re not to believe it on Romeo’s word. What’s left?

You both latched onto the least interesting part of the post - the part that is literally just Romeo throwing out some wild speculations. The post would probably have been a bit cleaner without those few wild speculations, but getting caught up on the tiny details seems to miss the forest for the trees.

The more interesting part is the general framework, where he matches up some of the processes mentioned in Buddhism with insights from behavioral psychology, psychotherapy, and pop psychology. It gives a framework for starting to understand why, anecdotally, people claim such big effects from extended meditation practice, and gives an insight into how one might begin to test the hypothesis that this is what's happening, both personally and on a broader scale.

Comment by mr-hire on Subagents, akrasia, and coherence in humans · 2019-04-10T22:05:43.880Z · score: 13 (3 votes) · LW · GW

I originally learned about the theory from the book I linked to, which is a good place to start but also clearly biased because they're trying to make the case that their therapy uses memory reconsolidation. Wikipedia seems to have a useful summary.

Comment by mr-hire on Subagents, akrasia, and coherence in humans · 2019-04-10T21:43:30.930Z · score: 18 (5 votes) · LW · GW

Note that memory reconsolidation was originally discovered in rats, so there at least appears to be preliminary evidence that goes against this perspective. Although "memory" here refers to something different than what we normally think about, the process is basically the same.

There's also been some interesting speculation that what's actually going on in modalities like IFS and Focusing is the exact same process. The speculation comes from the fact that the requirements seem to be the same for both animal memory reconsolidation and therapies that produce fast/instant changes, such as Coherence Therapy, IFS, and EMDR. I've used some of these insights to create novel therapeutic modalities that anecdotally seem to have strong effects, by applying the same requirements in their most distilled form.

Comment by mr-hire on The Simple Solow Model of Software Engineering · 2019-04-09T13:03:40.705Z · score: 6 (4 votes) · LW · GW
The biggest factor here (at least in my experience) is external APIs. A company whose code does not call out to any external APIs has relatively light depreciation load - once their code is written, it’s mostly going to keep running, other than long-term changes in the language or OS. APIs usually change much more frequently than languages or operating systems, and are less stringent about backwards compatibility. (For apps built by large companies, this also includes calling APIs maintained by other teams.)

Of course, the trade-off is that then YOUR engineers have to maintain the code - handling security updates and the refactors needed when they realize it's unmaintainable or built wrong - vs. having the API maintainers do it.

One way to look at this is through the lens of Wardley evolution. When a new function is not well understood, it needs to be changed very frequently, as people are still trying to figure out what the API needs to be and how to correctly abstract what you're doing. In this case, it makes sense to build the code yourself rather than use an API that knows less about your use case than you do. An example might be the first few blockchains writing their own consensus code instead of relying on Bitcoin's.

On the other extreme, when a certain paradigm is so well understood that it's commoditized, it makes sense to use an existing API whose maintainers will keep up with the infrequent security vulnerabilities, instead of having your engineers do that. An example would be webapps using existing SQL databases and the existing SQL API instead of writing their own database format and query language.

In the middle is where it gets murky. If you adopt an API too early, you run the risk of spending more time keeping up with the API's changes than you would spend writing your own thing. Adopt it too late, however, and you spend valuable time coming up with the correct abstractions and refactoring your code for maintainability, when it would have been cheaper to outsource that work to the API developers.
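One common way to hedge in that murky middle (my own sketch; the class and method names are invented for illustration) is an adapter layer: your code depends on an interface shaped by your use case, so whether the implementation is homegrown or an external API, churn is absorbed in one place.

```python
from abc import ABC, abstractmethod

class ConsensusBackend(ABC):
    """The interface YOUR code depends on - shaped by your use case,
    not by whichever external API you happen to use this year."""

    @abstractmethod
    def propose(self, value: bytes) -> bool:
        ...

class HomegrownConsensus(ConsensusBackend):
    """Genesis phase: the problem is poorly understood, so you own the
    code and rewrite it as often as your understanding changes."""

    def propose(self, value: bytes) -> bool:
        raise NotImplementedError("your own frequently-rewritten logic")

class VendorConsensus(ConsensusBackend):
    """Commodity phase: delegate to a hypothetical third-party client;
    its API churn is absorbed inside this one adapter class."""

    def __init__(self, client):
        self._client = client  # hypothetical vendor SDK object

    def propose(self, value: bytes) -> bool:
        return self._client.submit(payload=value)  # hypothetical vendor call
```

The build-vs-buy decision from the comment then becomes a one-class swap - replace HomegrownConsensus with VendorConsensus once the capability has commoditized - without touching any callers.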

Comment by mr-hire on How do people become ambitious? · 2019-04-09T12:31:40.884Z · score: 13 (5 votes) · LW · GW

I haven't looked at this in a serious way, but I have thought about it.

I think basically all of the ways people become more ambitious work by increasing either the person's self-efficacy - their expectation of being able to do big things - or the reward the person would get from doing big things. Here are some common examples I've seen in biographies and acquaintances:

1. Success spirals. They have small successes, which make them think they can do a little more, which leads to bigger successes, and so on. This is basically a path to increasing self-efficacy.

2. Change in support structures/expectations. They meet a mentor or become part of a group that has big goals and is very agenty, and they begin to believe that they can be too. This increases both expectancy and reward.

3. Change in ontology. People often become much more agenty and ambitious as they transition from Kegan 3 to Kegan 4; they begin to realize that the world is a large system that they can affect, rather than attending only to their immediate emotions.

4. Removing large emotional blocks. Sometimes people have lots of internal conflict that is holding them back. If one of the major blocks to action is removed, from the outside they suddenly become much more ambitious and agenty.

5. Adding emotional scarring. Sometimes the opposite of the above happens. People are relatively content with the status quo and don't feel the need to prove themselves. Then something bad happens that makes them feel the need to prove themselves, raising their ambition.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-06T06:38:20.030Z · score: 1 (1 votes) · LW · GW

Yes. But I'm not sure how that's related.

Comment by mr-hire on What are CAIS' boldest near/medium-term predictions? · 2019-04-04T11:29:49.339Z · score: 1 (1 votes) · LW · GW
I don't think CAIS takes much of a position on the AI-foom debate. CAIS seems entirely compatible with very fast progress in AI.

Isn't the "foom scenario" referring to an individual AI that quickly gains ASI status by self-improving?

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-04T11:23:10.493Z · score: 3 (2 votes) · LW · GW

Meta: Downvoted because this is not a question.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-04T01:45:53.641Z · score: 2 (2 votes) · LW · GW

Why can't you just believe in the territory without trying to confuse it with maps?

Comment by mr-hire on Degrees of Freedom · 2019-04-03T16:18:15.791Z · score: 15 (8 votes) · LW · GW

Reminded me of this tweet from Julia Galef:

https://twitter.com/juliagalef/status/878788641400553472

"There's no way to tell which choice is higher value!!" feels paralyzing "The expected value across choices is similar" feels liberating"

Which makes me think that people can find freedom in both the arbitrary and the optimal. If there's no obvious choice, we can find freedom in choosing between the options. If there is an obvious choice, we can find freedom in choosing to take it.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-03T15:49:11.169Z · score: 4 (3 votes) · LW · GW

In what way is your meta-observation of consistency different than the belief in a territory?

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-03T08:44:28.342Z · score: 1 (1 votes) · LW · GW

How can you confirm the model of "past predictions predict future predictions" with the data that "in the past past predictions have predicted future predictions?" Isn't that circular?

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-02T16:32:59.259Z · score: 5 (3 votes) · LW · GW

Why do you assume that future predictions would follow from past predictions? It seems like there has to be an implicit underlying model there to make that assumption.

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-02T09:59:04.599Z · score: 4 (3 votes) · LW · GW

Meta: are the answers to questions all supposed to be given by the OP?

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-01T22:04:00.438Z · score: 6 (2 votes) · LW · GW

Question: Have you always been a monster, or did you just become one recently?

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-01T20:52:57.877Z · score: 1 (1 votes) · LW · GW

Claim: One way that instrumental and epistimic rationality diverge is that you often get better results using less accurate models that are simpler rather than more accurate models that are more complicated.

(example: thinking of people as 'logical' or 'emotional', and 'selfish' or 'altruistic', is often more helpful in many situations than trying to work up a full list of their motivations as you know them and their world model as you know it, and making a guess as to how they'll react)
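A toy illustration of the claim above (my own sketch, not from the thread): fit a "less accurate" straight line and a flexible high-degree polynomial to the same noisy observations, and compare how they predict held-out points.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def world(x):
    """The true relationship: nearly linear, with a small wiggle."""
    return 0.5 * x + 0.3 * np.sin(x)

x_train = rng.uniform(0, 10, 12)
y_train = world(x_train) + rng.normal(0, 1.0, x_train.size)  # noisy observations
x_test = rng.uniform(0, 10, 200)
y_test = world(x_test)

for degree in (1, 9):  # crude straight line vs flexible "more accurate" model
    fit = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((fit(x_train) - y_train) ** 2)
    test_mse = np.mean((fit(x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")
```

The degree-9 model fits the data it has already seen better, while the straight line typically predicts new points better - the instrumental win goes to the simpler, "wronger" model.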

Comment by mr-hire on Experimental Open Thread April 2019: Socratic method · 2019-04-01T20:48:22.705Z · score: 1 (1 votes) · LW · GW

Claim: One way in which instrumental and epistemic rationality diverge is with self-fulfilling prophecies:

(example: all your data says that you will be turned down when asking for a date. You rationally believe that you will be turned down, and every time you ask for a date you are turned down. However, if you were to switch to the belief that you would be enthusiastically accepted when asking for a date, this would create a situation where you were in fact enthusiastically accepted.)
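A toy simulation of the divergence (my own illustration; the mechanism and numbers are invented): if projected confidence changes the true odds, both the pessimist and the optimist end up with calibrated beliefs, but very different outcomes.

```python
import random

random.seed(1)

def acceptance_prob(belief):
    """Invented mechanism: visible confidence changes the real odds."""
    return 0.9 if belief > 0.5 else 0.05

def run(initial_belief, n_asks=1000):
    belief, accepted = initial_belief, 0
    for _ in range(n_asks):
        success = random.random() < acceptance_prob(belief)
        accepted += success
        belief += 0.05 * (success - belief)  # update belief toward outcomes
    return belief, accepted / n_asks

for b0 in (0.0, 1.0):  # data-driven pessimist vs "irrational" optimist
    final_belief, rate = run(b0)
    print(f"start belief {b0:.1f} -> final belief {final_belief:.2f}, "
          f"acceptance rate {rate:.2f}")
```

Both agents' final beliefs match their own observed rates - epistemically, neither is making an error anymore - yet the agent who adopted the initially unsupported belief does vastly better, which is exactly the instrumental/epistemic split the example describes.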

The Case for The EA Hotel

2019-03-31T12:31:30.969Z · score: 66 (23 votes)

How to Understand and Mitigate Risk

2019-03-12T10:14:19.873Z · score: 48 (14 votes)

What Vibing Feels Like

2019-03-11T20:10:30.017Z · score: 9 (20 votes)

S-Curves for Trend Forecasting

2019-01-23T18:17:56.436Z · score: 97 (35 votes)

A Framework for Internal Debugging

2019-01-16T16:04:16.478Z · score: 29 (14 votes)

The 3 Books Technique for Learning a New Skill

2019-01-09T12:45:19.294Z · score: 126 (67 votes)

Symbiosis - An Intentional Community For Radical Self-Improvement

2018-04-22T23:15:06.832Z · score: 29 (7 votes)

How Going Meta Can Level Up Your Career

2018-04-14T02:13:02.380Z · score: 40 (19 votes)

Video: The Phenomenology of Intentions

2018-01-09T03:40:45.427Z · score: 34 (9 votes)

Video - Subject - Object Shifts and How to Have Them

2018-01-04T02:11:22.142Z · score: 11 (4 votes)