Posts

What is meant by Simulacra Levels? 2020-06-17T02:20:19.620Z · score: 21 (7 votes)
Does equanimity prevent negative utility? 2020-06-11T07:00:42.130Z · score: 14 (5 votes)
What is Ra? 2020-06-06T04:29:23.413Z · score: 16 (6 votes)
What are Michael Vassar's beliefs? 2020-05-15T06:15:39.008Z · score: 19 (7 votes)
Constructive Definitions 2020-05-04T23:50:17.251Z · score: 15 (5 votes)
What makes counterfactuals comparable? 2020-04-24T22:47:38.365Z · score: 11 (3 votes)
The World According to Dominic Cummings 2020-04-14T05:05:44.159Z · score: 48 (18 votes)
The Sandwich Argument 2020-04-09T03:58:46.682Z · score: 27 (13 votes)
Open thread: Language 2020-04-08T14:57:36.798Z · score: 8 (2 votes)
How strong is the evidence for hydroxychloroquine? 2020-04-05T09:32:00.058Z · score: 35 (12 votes)
Referencing the Unreferencable 2020-04-04T10:42:08.164Z · score: 17 (3 votes)
The Hammer and the Dance 2020-03-20T16:09:26.740Z · score: 49 (14 votes)
A Sketch of Answers for Physicalists 2020-03-14T02:27:13.196Z · score: 24 (6 votes)
Vulnerabilities in CDT and TI-unaware agents 2020-03-10T14:14:54.530Z · score: 5 (5 votes)
Analyticity Depends On Definitions 2020-03-08T14:00:34.492Z · score: 10 (3 votes)
Embedded vs. External Decision Problems 2020-03-05T00:23:07.970Z · score: 9 (2 votes)
Abstract Plans Lead to Failure 2020-02-27T21:20:11.554Z · score: 22 (11 votes)
Stuck Exploration 2020-02-19T12:31:55.276Z · score: 16 (6 votes)
A Memetic Mediator Manifesto 2020-02-17T02:14:56.683Z · score: 12 (3 votes)
Reference Post: Trivial Decision Problem 2020-02-15T17:13:26.029Z · score: 17 (7 votes)
Is backwards causation necessarily absurd? 2020-01-14T19:25:44.419Z · score: 16 (7 votes)
The Universe Doesn't Have to Play Nice 2020-01-06T02:08:54.406Z · score: 17 (7 votes)
Theories That Can Explain Everything 2020-01-02T02:12:28.772Z · score: 9 (3 votes)
The Counterfactual Prisoner's Dilemma 2019-12-21T01:44:23.257Z · score: 20 (8 votes)
Counterfactual Mugging: Why should you pay? 2019-12-17T22:16:37.859Z · score: 5 (3 votes)
Counterfactuals: Smoking Lesion vs. Newcomb's 2019-12-08T21:02:05.972Z · score: 9 (4 votes)
What is an Evidential Decision Theory agent? 2019-12-05T13:48:57.981Z · score: 10 (3 votes)
Counterfactuals as a matter of Social Convention 2019-11-30T10:35:39.784Z · score: 11 (3 votes)
Transparent Newcomb's Problem and the limitations of the Erasure framing 2019-11-28T11:32:11.870Z · score: 6 (3 votes)
Acting without a clear direction 2019-11-23T19:19:11.324Z · score: 9 (4 votes)
Book Review: Man's Search for Meaning by Viktor Frankl 2019-11-04T11:21:05.791Z · score: 18 (8 votes)
Economics and Evolutionary Psychology 2019-11-02T16:36:34.026Z · score: 12 (4 votes)
What are human values? - Thoughts and challenges 2019-11-02T10:52:51.585Z · score: 13 (4 votes)
When we substantially modify an old post should we edit directly or post a version 2? 2019-10-11T10:40:04.935Z · score: 13 (4 votes)
Relabelings vs. External References 2019-09-20T02:20:34.529Z · score: 13 (4 votes)
Counterfactuals are an Answer, Not a Question 2019-09-03T15:36:39.622Z · score: 16 (12 votes)
Chris_Leong's Shortform 2019-08-21T10:02:01.907Z · score: 11 (2 votes)
Emotions are not beliefs 2019-08-07T06:27:49.812Z · score: 26 (9 votes)
Arguments for the existence of qualia 2019-07-28T10:52:42.997Z · score: -2 (19 votes)
Against Excessive Apologising 2019-07-19T15:00:34.272Z · score: 7 (5 votes)
How does one get invited to the alignment forum? 2019-06-23T09:39:20.042Z · score: 17 (7 votes)
Should rationality be a movement? 2019-06-20T23:09:10.555Z · score: 53 (22 votes)
What kind of thing is logic in an ontological sense? 2019-06-12T22:28:47.443Z · score: 13 (4 votes)
Dissolving the zombie argument 2019-06-10T04:54:54.716Z · score: 1 (5 votes)
Visiting the Bay Area from 17-30 June 2019-06-07T02:40:03.668Z · score: 18 (4 votes)
Narcissism vs. social signalling 2019-05-12T03:26:31.552Z · score: 15 (7 votes)
Natural Structures and Definitions 2019-05-01T00:05:35.698Z · score: 21 (8 votes)
Liar Paradox Revisited 2019-04-17T23:02:45.875Z · score: 11 (3 votes)
Agent Foundation Foundations and the Rocket Alignment Problem 2019-04-09T11:33:46.925Z · score: 13 (5 votes)
Would solving logical counterfactuals solve anthropics? 2019-04-05T11:08:19.834Z · score: 23 (-2 votes)

Comments

Comment by chris_leong on Situating LessWrong in contemporary philosophy: An interview with Jon Livengood · 2020-07-02T12:03:38.531Z · score: 4 (2 votes) · LW · GW

"And the difference in graduate training in the two programs is, HPS you come in, write some papers, get out in 6-8 years, get a job, everybody does that. The Pitt Philosophy program you come, think some things, try to think the deep thoughts; the very best people go on to an awesome career, the rest of you, well, we're happy to burn through a hundred grad students to find a diamond." - I found this passage surprising. I'd expect that the ease of finding a job in an area such as philosophy or HPS would be based on the availability of funding, not differences in approach.

Comment by chris_leong on Chris_Leong's Shortform · 2020-06-30T08:35:12.355Z · score: 8 (4 votes) · LW · GW

I really dislike the fiction that we're all rational beings. We really need to accept that sometimes people can't share things with us. Stronger: not just accept, but appreciate the wisdom and tact of people who make this choice. ALL of us have ideas that will strongly trigger us, and if we're honest and open-minded, we'll be able to recall situations when we unfairly judged someone because of a view that they held. I certainly can, way too many times to list.

I say this as someone who has a really strong sense of curiosity, knowing that I'll feel slightly miffed when someone doesn't feel comfortable being open with me. But it's my job to deal with that, not the other person.

Don't get me wrong. Openness and vulnerability are important. Just not *all* the time. Just not *everything*.

Comment by chris_leong on What is meant by Simulacra Levels? · 2020-06-24T23:12:14.712Z · score: 2 (1 votes) · LW · GW

Thanks for writing this comment. I agree with you that simulacra levels and the unnamed object level vs social reality grid should ideally be separated as concepts. Also thanks for saving me the effort of adding my own theory here (I was planning to eventually, but I have a tendency to procrastinate). Anyway, I'll just add that the main purpose of my characterisation was to try to explore some of the religious language that Baudrillard was using.

Comment by chris_leong on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T06:46:43.782Z · score: 4 (2 votes) · LW · GW

I like the general idea, but I'd be wary of venturing so far in terms of privacy that the usability becomes terrible and no-one wants to use it.

Comment by chris_leong on What is meant by Simulacra Levels? · 2020-06-19T01:03:34.463Z · score: 2 (1 votes) · LW · GW

Interesting. I like the grid model, and in some ways it is more natural than the four separate levels.

Comment by chris_leong on Does equanimity prevent negative utility? · 2020-06-17T03:43:16.334Z · score: 2 (1 votes) · LW · GW

""Bad" requires defining. Define the utility function, and the answer falls out" - Exactly. How should it be defined?

Comment by chris_leong on Creating better infrastructure for controversial discourse · 2020-06-17T01:04:26.620Z · score: 10 (7 votes) · LW · GW

I guess there is The Motte on Reddit, but I could see benefits of someone creating a separate community. One problem is that far more meta discussion needs to occur on how to have these conversations.

Comment by chris_leong on Pragmatism and Completeness · 2020-06-13T06:09:37.595Z · score: 5 (3 votes) · LW · GW

One thing this leaves out is how pragmatism contains the risk that you are completely misunderstanding what is going on. Sometimes the risk is worth it, other times it isn't, although it is hard to tell in advance.

Comment by chris_leong on Does equanimity prevent negative utility? · 2020-06-11T22:30:29.196Z · score: 2 (1 votes) · LW · GW

The latter.

Comment by chris_leong on What is Ra? · 2020-06-08T08:38:06.572Z · score: 4 (2 votes) · LW · GW

Maybe I should have said that there are two sides to Ra: the institutional incentive, and the reason why people fall for this or (stronger) want this.

Comment by chris_leong on Legibility: Notes on "Man as a Rationalist Animal" · 2020-06-08T08:36:19.266Z · score: 4 (2 votes) · LW · GW

I'm really keen to see the later posts in this series, since Lou's posts are often somewhat tricky to decipher.

Comment by chris_leong on What is Ra? · 2020-06-06T22:15:56.350Z · score: 8 (4 votes) · LW · GW

I formed my own opinion at the start, but I didn't post it right away since I didn't want to bias other people into agreeing with me. I guess the way I'll answer this will be slightly different from the other answers, since I think the dynamics of the situation are more complex than an idealisation of vagueness. Pjeby seems closer to the mark in saying it's a preference for mysterious, prestigious authority, but again I think we have to dive deeper.

I see Ra as a dynamic which tends to occur once an organisation has obtained a certain amount of status. At that point there is an incentive and a temptation to use that status to defend itself against criticism. One way of doing that is providing vague but extremely positive-sounding non-justifications for the things that it does, and using that status to prevent people from digging too deep. This works since there are often social reasons not to ask too many questions. If someone gives a talk, to keep asking follow-ups is to crowd out other people. People will often assume that someone who keeps hammering a point is an ideologue, or will simply lose interest. In any case, such questions can usually be answered with additional layers of vagueness.

This also reminds me of the concept of the hyperreal, or realer than real. Organisations that utilise Ra become a simulation of a great organisation instead of the great organisation that they might once have been. By projecting this image of perfection they feel realer than any real great organisation, which will inevitably have its faults and hence inspire doubt.

Comment by chris_leong on What is Ra? · 2020-06-06T21:55:05.662Z · score: 2 (1 votes) · LW · GW

Great to hear that this article helped you

Comment by chris_leong on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T08:36:58.748Z · score: 2 (1 votes) · LW · GW

Oh, one more thing I forgot to mention. This idea of Conceptual Engineering seems highly related to what I was discussing in Constructive Definitions. I'm sure this kind of idea has a name in epistemology as well, although unfortunately, I haven't had the time to investigate.

Comment by chris_leong on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T05:08:37.558Z · score: 3 (2 votes) · LW · GW

Thanks for writing this post. Better connecting the discussion on Less Wrong with the discussions in philosophy is important work.

Also, how is the idea of conceptual engineering different from Wittgenstein's idea of language as use?

Comment by chris_leong on "It's Okay", Instructions, Focusing, Experiencing and Frames · 2020-05-24T12:22:49.234Z · score: 2 (1 votes) · LW · GW

Why do you say it isn't an emotional state?

Comment by chris_leong on Chris_Leong's Shortform · 2020-05-20T03:18:28.189Z · score: 2 (1 votes) · LW · GW

I've always found the concept of belief in belief slightly hard to parse cognitively. Here's what finally satisfied my brain: whether you will be rewarded or punished in heaven is tied to whether or not God exists, while whether or not you feel a push to go to church is tied to whether or not you believe in God. If you do go to church and want to go, your brain will say, "See, I really do believe", and it'll do the reverse if you don't go. However, it'll only affect your belief in God indirectly, through your "I believe in God" node. Putting it another way, going to church is evidence that you believe in God, not evidence that God exists. Anyway, the result of all this is that your "I believe in God" node can become much stronger than your "God exists" node.
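
To make that concrete, here's a toy numerical sketch (my own construction, with made-up priors and likelihoods): observing church attendance updates the "I believe in God" node directly, while the "God exists" node is only reachable indirectly, if at all.

```python
# Toy sketch of the "belief in belief" node structure (made-up numbers).
p_believe = 0.5               # prior on the "I believe in God" node
p_attend_if_believe = 0.9     # assumed P(attend church | believe)
p_attend_if_not = 0.2         # assumed P(attend church | don't believe)

# Observing "I went to church" updates the belief node via Bayes' rule.
posterior_believe = (p_attend_if_believe * p_believe) / (
    p_attend_if_believe * p_believe + p_attend_if_not * (1 - p_believe)
)
print(f"P(I believe | attended church) = {posterior_believe:.2f}")  # ~0.82

# The "God exists" node is untouched by this observation: attendance is
# evidence about belief, not about God, so the two nodes can come apart.
```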

Comment by chris_leong on What are Michael Vassar's beliefs? · 2020-05-19T01:38:09.013Z · score: 3 (2 votes) · LW · GW

Are you able to expand any more on his thoughts about cybernetics/control theory? Plus, can you tell me any more about what kind of Chesterton's fences are being removed? Are these internal beliefs, or are people being convinced to break social norms?

Comment by chris_leong on What are Michael Vassar's beliefs? · 2020-05-15T23:18:32.040Z · score: 3 (2 votes) · LW · GW

Thanks, but Twitter is an extremely inefficient way of figuring out someone's beliefs.

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2020-05-13T05:18:52.248Z · score: 2 (1 votes) · LW · GW

"For the variants, I'm not proposing they ever get run" - that makes sense

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2020-05-12T21:45:22.551Z · score: 2 (1 votes) · LW · GW

I don't have strong opinions on an A vs. B debate or a B vs. C debate. That was a detail I wasn't paying much attention to. I was just proposing using two AIs with equivalent strength to A. One worry I have about making D create variants with known flaws would be if any of these exploited security holes, although maybe a normal AGI, being fully general, would be able to exploit security holes anyway.

Comment by chris_leong on Arguments about fast takeoff · 2020-05-12T07:27:07.811Z · score: 2 (1 votes) · LW · GW

A few thoughts:

  • Even if we could theoretically double output for a product, it doesn't mean that there will be sufficient demand for it to be doubled. This potential depends on how much of the population already has thing X.
  • Even if we could effectively double our workforce, if we are mostly replacing low-value jobs, then our economy wouldn't double.
  • Even if we could, say, halve the cost of producing robot workers, that might simply result in extra profits for a company instead of increasing the size of the economy.
  • Even if we have a technology that could double global output, it doesn't mean that we could or would deploy it in that time, especially given that companies are likely to be somewhat risk averse and not scale up as fast as possible, as they might be worried about demand. This is the weakest of the four arguments in my opinion, which is why it is last.

So economic progress may not accurately represent technological progress, meaning that if we use this framing we may get caught up in a bunch of economic debates instead of debates about capacity.

Comment by chris_leong on Eli's shortform feed · 2020-05-11T11:21:31.200Z · score: 2 (1 votes) · LW · GW

Thanks for mentioning conjunctive cruxes. That was always my biggest objection to this technique. At least when I went through CFAR, the training completely ignored this possibility. It was clear that the technique often worked anyway, but the impression I got was that the general frame mattered more than the precise methodology, which at that time still seemed in need of refinement.

Comment by chris_leong on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-10T23:53:09.011Z · score: 2 (1 votes) · LW · GW

Hmm, the quote that demonstrates this issue the most is: "But there is a hidden problem with the observer technique, which becomes obvious once you think about it. Who is the observer? Who is this person who is behind the binoculars, watching your experience from the outside?", but that is of course a quote rather than a piece of text you wrote yourself.

I also feel it applies somewhat to the discussion of the sense of looking out at the world from behind your eyes. I think you're implying that the fact that we can observe this system implies that it is a separate sub-agent from the system observing this sense, but reflective programs seem to demonstrate that this isn't necessarily the case.
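
To illustrate what I mean by reflection, here's a minimal sketch (my own construction, not anything from the post): a single program observing facts about itself, with no separate observer component.

```python
import inspect

def observe_myself():
    # This function inspects its own source and its own stack frame:
    # the observer and the observed are one and the same component.
    source = inspect.getsource(observe_myself)
    frame = inspect.currentframe()
    print(f"I am '{frame.f_code.co_name}' and my source is "
          f"{len(source.splitlines())} lines long.")

observe_myself()
```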

Comment by chris_leong on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-10T12:59:16.989Z · score: 5 (3 votes) · LW · GW

Thanks for writing! This is far clearer than most explanations and has some helpful analogies. I think it is possible to be even clearer though, which is important for topics like this which are inherently ambiguous. For example, one place where you could have been more precise is the discussion around self-reference. There is such a thing as reflection in programming languages, so we have to be careful when saying what a process can or can't observe about itself. Additionally, multi-agent systems don't necessarily imply no-self; it may be that we only identify with one of the agents.

Comment by chris_leong on Chris_Leong's Shortform · 2020-05-06T07:10:07.852Z · score: 3 (2 votes) · LW · GW

Pet theory about meditation: Lots of people say that if you do enough meditation you will eventually realise that there isn't a self. Having not experienced this myself, I am intensely curious about what people observe that persuades them to conclude this. I guess I get a sense that many people are being insufficiently skeptical. There's a difference between there not appearing to be such a thing as a self and a self not existing. Indeed, how do we know meditation doesn't just temporarily silence whatever part of our mind is responsible for self-hood?

Recently, I saw a quote from Sam Harris that makes me think I might (emphasis on might) finally know what people are experiencing. In a podcast with Eric Weinstein he explains that he believes there isn't a self because "consciousness is an open space where everything is appearing - that doesn't really answer to I or me". The first part seems to mirror Global Workspace Theory, the idea (super roughly) that there is a part of the brain for synthesising thoughts from various parts of the brain, and which can only pay attention to one thought at a time.

The second part of Sam Harris' sentence seems to say that this Global Workspace "doesn't answer to I or me". This is still vague, but it sounds like there is a part of the brain that identifies as "I or me" that is separate from this Global Workspace or that there are multiple parts that are separate from the Global Workspace and don't identify as "I or me". In the first of these sub-interpretations, "no-self" would merely mean that our "self" is just another sub-agent and not the whole of us. In the second of these sub-interpretations, it would additionally be true that we don't have a unitary self, but multiple fragments of self-hood.

Anyway, as I said, I haven't experienced no-self, but curious to see if this resonates with people who have.

Comment by chris_leong on Constructive Definitions · 2020-05-05T04:16:27.206Z · score: 2 (1 votes) · LW · GW

Thanks, glad you appreciate it!

Comment by chris_leong on Negative Feedback and Simulacra · 2020-05-04T04:23:05.355Z · score: 4 (2 votes) · LW · GW

"In particular, it’s a recent development that I would have noticed my friend’s unilateral demand for fairness as in fact tilted towards MAPLE" - To recast that perspective slightly more sympathetically, if applied consistently, it isn't just titled towards MAPLE but tilted towards "the defendant". But beyond that it has the advantage of reducing conflict. It has downsides too as you've described.

Comment by chris_leong on What makes counterfactuals comparable? · 2020-05-04T00:56:45.114Z · score: 2 (1 votes) · LW · GW

Yeah, sorry, that's a typo, fixed now.

Comment by chris_leong on What makes counterfactuals comparable? · 2020-05-04T00:56:15.262Z · score: 2 (1 votes) · LW · GW

Hey Vojta, thanks so much for your thoughts.

"I feel slightly worried about going too deep into discussions along the lines of 'Vojta reacts to Chris' claims about what other LW people argue against hypothetical 1-boxing CDT researchers from classical academia that they haven't met' :D."

Fair enough. Especially since this post isn't so much about the way people currently frame their arguments as an attempt to persuade people to reframe the discussion around comparability.

"My take on how to do counterfactuals correctly is that this is not a property of the world, but of your mental models"

I feel similarly. I've explained my reasons for believing this in the Co-operation Game, Counterfactuals are an Answer, not a Question and Counterfactuals as a matter of Social Convention.

"According to this view, counterfactuals only make sense if your model contains uncertainty..."

I would frame this slightly differently and say that this is the paradigmatic case which forms the basis of our initial definition. I think the example of numbers can be instructive here. The first numbers to be defined are the counting numbers: 1, 2, 3, 4... It is then convenient to add fractions, then zero, then negative numbers, and eventually we extend to the complex numbers. In each case we've slightly shifted the definition of what a number is, and this choice is solely determined by convention. Of course, convention isn't arbitrary, but determined by what is natural.

Similarly, the cases where there is actual uncertainty provide the initial domain over which we define counterfactuals. And we can then try to extend this as you are doing above. I see this as a very promising approach.

A lot of what you are saying there aligns with my most recent research direction (Counterfactuals as a matter of Social Convention), although it's unfortunately stalled with coronavirus and my focus being mostly on attempting to write up my ideas from the AI safety program. There seem to be a bunch of properties that make a situation more or less likely to be accepted by humans as a valid counterfactual. I think it would be viable to identify the main factors, with the actual weighting being decided by each human. This would acknowledge both the subjective, constructed nature of counterfactuals and the objective elements with real implications, so that this isn't a completely arbitrary choice. I would be keen to discuss further/bounce ideas off each other if you'd be up for it.

"Finally, when some counterfactual would be inconsistent with our model, we might take it for granted that we are supposed to relax M in some manner"

This sounds very similar to the erasure approach I was previously promoting, but have shifted away from. Basically, when I started thinking about it, I realised that only allowing counterfactuals to be constructed by erasing information didn't match how humans actually use counterfactuals.

"Second, when doing counterfactuals, we might take it for granted that you are to replace the actual observation history o by some alternative o′"

This is much more relevant to how I think now.

I think that "a typical AF reader" uses a model in which "a typical CDT adherent" can deliberate, come to the one-boxing conclusion, and find 1M in the box, making the options comparable for "typical AF readers". I think that "a typical CDT adherent" uses a model in which "CDT adherents" find the box empty while one-boxers find it full, thus making the options incomparable

I think that's an accurate framing of where they are coming from.

"The third question I didn't understand."

What was unclear? I made one typo where I said an EDT agent would smoke when I meant they wouldn't smoke. Is it clearer now?

Comment by chris_leong on Chris_Leong's Shortform · 2020-05-02T14:02:06.769Z · score: 2 (1 votes) · LW · GW

I honestly have no idea how he'd answer, but here's one guess. Maybe we could tie prime numbers to one of a number of processes for determining primeness. We could observe that those processes always return true for 5, so in a sense primeness is a property of five.
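
As a toy illustration of tying primeness to processes (my own sketch, not anything Wittgenstein wrote), here are two different procedures that both "return true for 5":

```python
from math import factorial

def trial_division(n: int) -> bool:
    # Primality by checking divisors up to sqrt(n).
    return n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

def wilson(n: int) -> bool:
    # Wilson's theorem: n is prime iff (n-1)! is congruent to -1 mod n.
    return n > 1 and factorial(n - 1) % n == n - 1

# Both processes return true for 5, which on this view is what it
# would mean for primeness to be a property of five.
print(trial_division(5), wilson(5))  # True True
```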

Comment by chris_leong on Chris_Leong's Shortform · 2020-05-02T02:03:19.421Z · score: 2 (1 votes) · LW · GW

Wittgenstein didn't think that everything was a command or request; his point was that making factual claims about the world is just one particular use of language that some philosophers (including early Wittgenstein) had hyper-focused on.

Anyway, his claim wasn't that "five" was nonsense, just that once we understood how five was used there was nothing further for us to learn. I don't know if he'd even say that the abstract concept five was nonsense; he might just say that any talk about the abstract concept would inevitably be nonsense or unjustified metaphysical speculation.

Comment by chris_leong on Motivating Abstraction-First Decision Theory · 2020-05-01T00:10:33.453Z · score: 4 (2 votes) · LW · GW

Ah, I think I now get where you are coming from

Comment by chris_leong on Motivating Abstraction-First Decision Theory · 2020-04-30T22:54:34.134Z · score: 2 (1 votes) · LW · GW

I guess what is confusing me is that you seem to have provided a reason why we shouldn't just care about high-level functional behaviour (because this might miss correlations between the low-level components), then in the next sentence you're acting as though this is irrelevant?

Comment by chris_leong on Chris_Leong's Shortform · 2020-04-30T22:36:55.560Z · score: 4 (2 votes) · LW · GW

I won't pretend that I have a strong understanding here, but as far as I can tell, (Later) Wittgenstein and the Ordinary Language Philosophers considered our conception of the number "five" as an abstract object to be mistaken, and would instead explain how the word is used and consider that a complete explanation. This isn't an unreasonable position; I honestly don't know what numbers are, and if we say they are an abstract entity it's hard to say what kind of entity.

Regarding the word "apple", Wittgenstein would likely say attempts to give it a precise definition are doomed to failure because there is an almost infinite number of contexts or ways in which it can be used. We can strongly state "Apple!" as a kind of command to give us one, or shout it to indicate "Get out of the way, there's an apple coming towards you" or "Please, I need an apple to avoid starving". But this is only saying that attempts to spec out a precise definition are confused, not the underlying thing itself.

(Actually, apparently Wittgenstein considered attempts to talk about concepts like God or morality as necessarily confused, but thought that they could still be highly meaningful, possibly the most meaningful things.)

Comment by chris_leong on Motivating Abstraction-First Decision Theory · 2020-04-30T11:15:50.817Z · score: 4 (2 votes) · LW · GW

"First and foremost: why do we care about validity of queries on correlations between the low-level internal structures of the two agent-instances? Isn’t the functional behavior all that’s relevant to the outcome? Why care about anything irrelevant to the outcome?" - I don't follow what you are saying here

Comment by chris_leong on Measly Meditation Measurements · 2020-04-30T10:23:45.096Z · score: 2 (1 votes) · LW · GW

Meditation increases working memory? Do you have a reference on that?

Comment by chris_leong on Chris_Leong's Shortform · 2020-04-30T06:50:49.037Z · score: 8 (4 votes) · LW · GW

I've recently been reading about ordinary language philosophy and I noticed that some of their views align quite significantly with LW. They believed that many traditional philosophical questions only seemed troubling because of the philosophical tendency to assume that words like "time" or "free will" necessarily referred to some kind of abstract entity, when this wasn't necessary at all. Instead they argued that by paying attention to how we use these words in ordinary, everyday situations we could see that the way people used them didn't need to assume these abstract entities, and that we could dissolve the question.

I found it interesting that the comment thread on dissolving the question makes no reference to this movement. It doesn't reference Wittgenstein either, who also tried to dissolve questions.

(https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question)

Comment by chris_leong on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-24T22:58:16.328Z · score: 4 (2 votes) · LW · GW

It'd be useful if the search function allowed searching for tags, as that'd likely be quicker than clicking through the tags page. A synonym feature would probably also be useful, so that someone could try tagging something with X and have it replaced by the canonical tag Y instead.

I'd suggest meta-rationality as its own core tag, but I imagine that'd be controversial.

Comment by chris_leong on Jimrandomh's Shortform · 2020-04-15T03:41:56.595Z · score: 4 (2 votes) · LW · GW

Maybe they don't know whether it escaped or not. Maybe they just think there is a chance that the evidence will implicate them, and they figure it's not worth the risk as there'll only be consequences if there is definite proof that it escaped from one of their labs and not mere speculation.

Or maybe they want to argue that it didn't come from China? I think they've already been pushing this angle.

Comment by chris_leong on The Unilateralist’s “Curse” Is Mostly Good · 2020-04-15T01:17:48.526Z · score: 3 (2 votes) · LW · GW

Thanks for asking; it's a good question. Let's take, for example, machine guns. That was an age which, like our age, gave inventors more power to affect the world than ever before. However, if you didn't invent the machine gun, someone else would have, just slightly later. But we're now in a situation where inventions can either end the whole game or remove us from this time of perils. It's not just the timing of inventions, but our ultimate destiny that is at stake.

Comment by chris_leong on The World According to Dominic Cummings · 2020-04-14T23:50:15.317Z · score: 2 (1 votes) · LW · GW

My confusion was that he seems to think the permanent nature of the civil service provides them an advantage over ministers, but if they are always shifting around, wouldn't this prevent them from gaining a local knowledge advantage?

Comment by chris_leong on The Unilateralist’s “Curse” Is Mostly Good · 2020-04-14T06:08:32.132Z · score: 13 (8 votes) · LW · GW

I think this is a case of the unilateralist's curse being good until it suddenly isn't. We are entering a phase of technology which is fundamentally different from what came before. If we embrace unilateralism, we need to do so based on an understanding of our current situation, not just past history.

Comment by chris_leong on In Defense of Politics · 2020-04-10T21:52:32.552Z · score: -2 (5 votes) · LW · GW

"That's where Assange comes into play. He wants to empower that single individual that thinks the group is unjust. Assange also made the observation that if a group spends a large amount of resources on keeping certain information secret, that corresponds to the harm that the group will suffer should the information become public."

The problem with this is the unilateralist's curse, and the idealism involved in believing that everything should be public.

"We however have seen that Wikipedia managed to outcompete the Encyclopedia Britannica. There is in principle nothing that stops a smart programmer from building a platform that provides for model bills that change society for the better."

Hmm... I'd be surprised if this worked. In most cases there would be way too much disagreement.

Comment by chris_leong on The One Mistake Rule · 2020-04-10T21:34:11.637Z · score: 4 (2 votes) · LW · GW

I think you're stating this argument a bit too strongly. Now, I've written a number of posts arguing that most people are too dismissive of flaws in models that only occur in hypothetical or unrealistic situations, but I don't think perfection is realistic. It seems that a model with no flaws would have to approach infinite complexity in most cases. The only reason this rule might work is that eventually your model will become complex enough that you can't find the mistake. Additionally, you will be limited by the data you have: it's no good knowing that prediction X is wrong because it ignores factor F if you don't have data related to factor F.

Comment by chris_leong on How to evaluate (50%) predictions · 2020-04-10T21:23:36.239Z · score: 9 (5 votes) · LW · GW

So I've thought about this a bit more. It doesn't matter how someone states their probabilities; in order to use your evaluation technique, we just need to transform the probabilities so that all of them are above the baseline.
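
To make that transformation concrete, here's a minimal sketch (my own construction, with a hypothetical prediction list): any prediction stated below the baseline gets restated as its negation, so its confidence lands above the baseline before calibration is scored.

```python
# Minimal sketch of normalising predictions above a baseline (hypothetical data).
BASELINE = 0.5

predictions = [
    ("Team A wins", 0.3, False),       # (statement, stated confidence, outcome)
    ("It rains tomorrow", 0.7, True),
]

normalised = []
for statement, confidence, outcome in predictions:
    if confidence < BASELINE:
        # Restate as the negation so the confidence sits above the baseline.
        statement = f"NOT ({statement})"
        confidence = 1 - confidence
        outcome = not outcome
    normalised.append((statement, confidence, outcome))

for statement, confidence, outcome in normalised:
    print(f"{confidence:.0%} confident: {statement} -> "
          f"{'correct' if outcome else 'wrong'}")
```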

In any case, it's good to see this post. I've worried for a long time that being calibrated on 50% estimates mightn't be very meaningful, as you might be massively overconfident on some guesses and massively underconfident on others.

Comment by chris_leong on How to evaluate (50%) predictions · 2020-04-10T20:44:20.115Z · score: 8 (5 votes) · LW · GW

"Always phrase predictions such that the confidence is above the baseline probability" - This really seems like it should not matter. I don't have a cohesive argument against it at this stage, but reversing should fundamentally be the same prediction.

(Plus, in any case, it's not clear that we can always agree on a baseline probability.)

Comment by chris_leong on Open thread: Language · 2020-04-10T19:27:56.608Z · score: 2 (1 votes) · LW · GW

Yeah, "we" is often a word that is highly ambiguous.

Because some things are easier to express than others and humans don't have unlimited energy.

Comment by chris_leong on Would 2014-2016 Ebola ring the alarm bell? · 2020-04-08T18:02:43.781Z · score: 2 (1 votes) · LW · GW

I briefly scanned through this, but I couldn't see a figure for how many alarm bells it rang.

Comment by chris_leong on Open thread: Language · 2020-04-08T15:06:02.235Z · score: 2 (1 votes) · LW · GW

One example I saw recently is the concept of cutting corners. Generally, if someone asks, "So you want us to cut corners?", we'd expect them to have a negative evaluation of the time-saving procedure. However, this article was different in that it used this term and actually argued in favour of it, given the extreme situation. But in a more normal case, it's very hard to say, "Yes, we want to cut corners".