Posts

Defensible positions 2020-10-26T10:37:09.840Z
Chris Leong's Shortform 2019-09-23T10:01:38.200Z
Should there be a header feature? 2019-06-20T06:45:38.654Z
All knowledge is circularly justified 2019-06-04T22:39:20.766Z
The Curious Prisoner Puzzle 2018-06-16T00:40:15.034Z

Comments

Comment by chris-leong on Effective Epidemiology · 2020-10-24T12:24:59.407Z · LW · GW

"A common estimate is that the loss of a full year of education leads to a loss of ~$100,000 in lifetime earnings" - I find this very hard to believe
 

Comment by chris-leong on A tale from Communist China · 2020-10-19T05:04:02.396Z · LW · GW

When did you start to doubt?

Comment by chris-leong on What should experienced rationalists know? · 2020-10-14T23:50:12.576Z · LW · GW

This is an excellent question. Here are some of the things I personally consider important.

Regarding probability, I recently asked the question: Why is Bayesianism Important? I found this Slatestarcodex post to provide an excellent overview of thinking probabilistically, which seems way more important than almost any of the specific theorems.

I would include basic game theory - prisoner's dilemma, tragedy of the commons, multi-polar traps (see Meditations on Moloch for this latter idea).

In terms of decision theory, there's the basic concept of expected utility, decreasing marginal utility, and then the Inside/Outside views.

I think it's also important to understand the limits of rationality. I've written a post on this (pseudo-rationality), there's Barbarians vs. Bayesians and there are these two posts by Scott Alexander - Seeing Like a State and The Secret of Our Success. Thinking, Fast and Slow has already been mentioned.

The Map is Not the Territory revolutionised my understanding of philosophy and prevented me from ending up in stupid linguistic arguments. I'd suggest supplementing this by understanding how Conceptual Engineering avoids the plague of counterexample philosophy prevalent in conceptual analysis (Wittgenstein's conception of meanings as Family Resemblances is useful too - Eliezer talks about the cluster structure of thingspace).

Most normal people are far too ready to dismiss hypothetical situations. While if taken too far Making Beliefs Pay Rent can lead to a naïve kind of logical positivism, it is in general a good heuristic. Where Recursive Justification Hits Bottom argues for a kind of circular epistemology.

In terms of morality Torture vs. Dust Specks is a classic.

Pragmatically, there's the Pareto Principle (or 80/20 rule) and I'll also throw in my posts on Making Exceptions to General Rules and Emotions are not Beliefs.

In terms of understanding people better there's Inferential Distance, Mistake Theory vs. Conflict Theory, Contextualising vs. Decoupling Norms, The Least Convenient Possible World, Intellectual Turing Tests and Steelmanning/Principle of Charity.

There seems to be increasingly broad agreement that meditation is really important and complements rationality beautifully, insofar as irrationality is more often a result of a lack of control over our emotions than of a lack of knowledge. But beyond this, it can provide extra introspective capacities, and meditative practices like circling can allow us to relate better to other humans.

One of my main philosophical disagreements with people here is that they often lean towards verificationism, while I don't believe that the universe has to play nice, so there will often be things that are true which we can't actually verify.

Comment by chris-leong on Postmortem to Petrov Day, 2020 · 2020-10-03T23:32:19.121Z · LW · GW

I appreciate how Ben handled this: it was nice for him to let me comment before he posted and for him to also add some words of appreciation at the end.

Regarding point 2, since I was viewing this in game mode I had no real reason to worry about being tricked. Avoiding being tricked by not posting about it would have been like avoiding losing in chess by never making the next move.

I guess other than that, I'd suggest that even a counterfactual donation of $100 to charity not occurring would feel more significant than the frontpage going down for a day. Like the current penalty feels like it was specifically chosen to be insignificant.

Also, I definitely would have taken it more seriously if I had realised it was serious to people. This wasn't even in my zone of possibility.

Comment by chris-leong on On Destroying the World · 2020-09-28T07:59:08.736Z · LW · GW

Why would there be? I'm sure they saw it as just a game too and it would be extremely hypocritical for me to be annoyed at anyone for that.

Comment by chris-leong on What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? · 2020-09-26T23:28:31.110Z · LW · GW

Hey, I've become interested in this field too recently. I've been listening to the Jim Rutt show which is pretty interesting, but I haven't dived into it in any real depth. I agree that it is something that we should be looking more into.

I won't pretend to be an expert on this topic, but my understanding of the differences is as follows:

  • Systems theory tends to involve attempts to understand the overall system, while complex systems are much more likely to have emergent novel behaviour, so any models used need to be held more lightly; it's more likely that there are macro-level trends that are there for reasons we just don't know
  • Cybernetics is mostly about control systems (the classic example is the thermostat; see the sketch after this list). Feedback loops are an important part of systems theory, but they are just one particular tool.
  • Regarding the breadth of applications, we can model dynamics in formal mathematical situations, then try to claim that similar dynamics occur in actual physical systems.
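
As a concrete illustration of the control-system framing mentioned above, here is a minimal toy sketch of a thermostat feedback loop (names, thresholds and dynamics are all made up for illustration, not taken from any particular text or library):

```python
# Minimal thermostat feedback loop: measure, compare to a setpoint, act.
# Purely illustrative; the "room" is a toy model, not a real sensor.

def run_thermostat(setpoint=21.0, hysteresis=0.5, steps=20):
    temperature = 18.0   # initial room temperature (toy value)
    heater_on = False
    for _ in range(steps):
        # Negative feedback: the action depends on the gap between
        # the measured state and the target state.
        if temperature < setpoint - hysteresis:
            heater_on = True
        elif temperature > setpoint + hysteresis:
            heater_on = False
        # Toy room dynamics: heat gain while the heater is on, slow loss otherwise.
        temperature += 0.8 if heater_on else -0.3
        print(f"temp={temperature:.1f}C heater={'on' if heater_on else 'off'}")

run_thermostat()
```

The point of the sketch is just that the whole system is organised around one feedback loop with a known target, whereas a complex system typically has many interacting loops and no single controller.
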
Comment by chris-leong on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T22:49:10.882Z · LW · GW

I received both messages

Comment by chris-leong on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T21:08:52.938Z · LW · GW

I hadn't decided whether or not to nuke it, but if I did nuke it, I would have done it several hours later, after people had a chance to wake up.

Comment by chris-leong on Without models · 2020-03-30T00:03:13.162Z · LW · GW

Could you explain the answer to 4?

Comment by chris-leong on Section 7: Foundations of Rational Agency · 2019-12-23T23:37:31.702Z · LW · GW

For the evidential game, it doesn't just matter whether you co-operate or not, but why. Different why's will be more or less likely to be adopted by the other agents.

Comment by chris-leong on A simple sketch of how realism became unpopular · 2019-10-14T04:17:17.078Z · LW · GW

It's something people say, but don't necessarily fully believe

Comment by chris-leong on A simple sketch of how realism became unpopular · 2019-10-12T13:22:58.795Z · LW · GW

I appreciated this post for explaining Berkeley's beliefs really clearly to me. I never knew what he was going on about before.

Comment by chris-leong on Looking for remote writing partners (for AI alignment research) · 2019-10-03T04:32:52.281Z · LW · GW

Would be happy to try this

Comment by chris-leong on Chris Leong's Shortform · 2019-09-23T10:01:38.389Z · LW · GW

POSTED ON WRONG ACCOUNT

Comment by chris-leong on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-06T22:59:50.108Z · LW · GW

In a booming market, buying can be valuable as a hedge against rising house prices

Comment by chris-leong on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-29T05:49:26.164Z · LW · GW

Yeah, I meant part 7. What did he say about feminism and neoreaction?

Comment by chris-leong on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-25T05:09:12.518Z · LW · GW

I'd like to know more about the dark sides part of the book

Comment by chris-leong on Should there be a header feature? · 2019-06-20T07:26:41.018Z · LW · GW

I'd still like the ability to make the explicit abstract just read off the text after a certain point, but I suppose it would require a lot of work to support that functionality.

Comment by chris-leong on Mistakes with Conservation of Expected Evidence · 2019-06-09T23:09:16.102Z · LW · GW

"I agree fairly strongly, but this seems far from the final word on the subject, to me."

Hmm, actually I think you're right and that it may be more complex than this.

"Ah. I take you to be saying that the quality of the clever arguer's argument can be high variance, since there is a good deal of chance in the quality of evidence cherry-picking is able to find. A good point."

Exactly. There may only be a weak correlation between evidence and truth. And maybe you can do something with it or maybe it's better to focus on stronger signals instead.

Comment by chris-leong on Mistakes with Conservation of Expected Evidence · 2019-06-09T12:59:47.812Z · LW · GW

I view the issue of intellectual modesty much like the issue of anthropics. The only people who matter are those whose decisions are subjunctively linked to yours (it only starts getting complicated when you start asking whether you should be intellectually modest about your reasoning about intellectual modesty)

One issue with the clever arguer is that the persuasiveness of their arguments might have very little to do with how persuasive they should be, so attempting to work off expectations might fail.

Comment by chris-leong on All knowledge is circularly justified · 2019-06-06T11:08:11.540Z · LW · GW

Where would you start with his work?

Comment by chris-leong on All knowledge is circularly justified · 2019-06-05T22:29:36.558Z · LW · GW

I've heard of it, but I haven't read into it, so I avoided using the term

Comment by chris-leong on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-04T02:24:04.666Z · LW · GW

Maybe there is a possible project in this direction. I'll assume that this is general advice you'd give to many people who want to work in this space. If it is important for people to build a model of what is required for AI to go well, then people may as well work on this together. And sure, there are websites like Less Wrong, but people can exchange information much faster by chatting either in person or over Skype. (Of course there are worries that this might lead to overly correlated answers.)

Comment by chris-leong on The Relationship Between the Village and the Mission · 2019-05-13T08:01:37.089Z · LW · GW

On a related, but somewhat different issue: I feel that there has been something of an under-investment in rationality community building overall. EA has CEA, but rationality doesn't have an equivalent (CFAR doesn't play the same community building role). There isn't any organisation responsible for growing the community, organising conferences and addressing challenges that arise.

That said, I'm not sure that there is necessarily agreement that there is a single mission. Some people are in rationality for AI, some for insight porn, some for personal development and some simply for social reasons. Even though EA has a massively broad goal, doing the most good seems to suffice to spur action in a way that rationality hasn't.

Comment by chris-leong on Hierarchy and wings · 2019-05-06T22:26:48.940Z · LW · GW

This doesn't seem to accurately describe contemporary politics, at least in the Western world. The left wing isn't just made up of non-central groups; it also includes the cultural/intellectual elites.

Comment by chris-leong on Liar Paradox Revisited · 2019-04-18T14:36:00.995Z · LW · GW

Probably wrong editor

Comment by chris-leong on The Simple Solow Model of Software Engineering · 2019-04-09T02:33:36.686Z · LW · GW

There are other considerations that slow down large code bases:

  • The more features you have, the more potential interactions
  • The bigger a codebase is, the harder it is to understand it
  • Having more features means more work is involved in testing
  • Customer bases shift over time from early adopters to those who want more stability and reliability
  • When a code base is more mature, there's more chance that a change could make the product worse, so you have to spend more time on evaluation
  • A larger customer base forces you to care more about rare issues

Comment by chris-leong on On AI and Compute · 2019-04-04T03:19:34.777Z · LW · GW

The figures should be less than this. Only a fraction of human time is spent learning.

Comment by chris-leong on LW Update 2019-04-02 – Frontpage Rework · 2019-04-03T04:53:12.011Z · LW · GW

What's happening with regards to meta? Are they just going to be personal blogposts now?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T08:41:17.859Z · LW · GW

How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?

If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:46:19.784Z · LW · GW

Is predictive power an instrumental or terminal goal?

Is your view a denial of the territory or agnosticism about it?

Is the therapy example a true model of the world or a useful fiction?

Comment by chris-leong on [April Fools] User GPT2 is Banned · 2019-04-02T03:38:48.421Z · LW · GW

I thought that GPT2 was funny at first, but after a while it got irritating. If there's a next time, it should be more limited in how many comments it makes. 1) You could train it on how many votes its comments got to try to figure out which comments to reply to. 2) It might also automatically reply to every reply on its comments.

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:29:23.864Z · LW · GW

1) That's a good point, but I was thinking about how to improve the high school maths syllabus, not so much about high school in general. I don't have a strong opinion on whether to remove literature instead, if it had to be one or the other. However, I do have other ideas for literature. I'd replace literature with a subject that is half writing/giving speeches about what students are passionate about and half reading books mostly just for participation marks. I'd make the kinds of things students currently do in literature part of an elective only.

2) p-testing is a rather mechanised process. It's exactly the kind of thing high school is good at teaching. Basic Bayesian statistics only has one key formula (although it has another form). Even if there is a need for prerequisite units in order to prepare students, it still seems worthwhile.
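
For concreteness, here is a minimal statement of what I take to be the key formula and its other form (Bayes' theorem in probability and odds form; my reading, not a quote from any syllabus):

```latex
% Bayes' theorem (probability form)
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

% Odds form: posterior odds = prior odds times the likelihood ratio
\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}
```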

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:18:46.950Z · LW · GW

That's a good point. There's all kinds of things that might be worth considering adding such as programming, psychology or political philosophy. I guess my point was only that if we were going to replace it with something within maths, then stats seems to be the best candidate (at least better than any of the other content that I covered in university)

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:15:48.033Z · LW · GW

Good point, I should have clarified this more. I'm not saying that people shouldn't know how to calculate the area and circumference of a circle, as people may actually use that. It's more about all the things to do with tangents, chords and shapes inscribed in circles.

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:27:00.525Z · LW · GW

Circle geometry should be removed from the high school maths syllabus and replaced with statistics because stats is used in science, business and machine learning, while barely anyone needs circle geometry.

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:15:34.869Z · LW · GW

Do maps ultimately need to be grounded in something that is not a map, and if not, why are these maps meaningful?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:04:15.985Z · LW · GW

Question: How representative do you think posts on Less Wrong are in terms of how rationalists make decisions in practice? If there is a difference, do you think spending time on LW may affect your perspective on how rationalists make decisions?

Comment by chris-leong on Review of Q&A [LW2.0 internal document] · 2019-03-30T02:00:18.164Z · LW · GW

I actually suspect that the biggest market for this would be EA, not LW. There are already a large number of people who want to work in EA, and many organisations which would like to see certain questions answered.

Comment by chris-leong on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T22:51:14.133Z · LW · GW

If the utility function is the square root of the number of apples, you could multiply the number of apples by four. The question is mainly about whether you can do that kind of adaptation rather than about anything else.
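
To spell out the arithmetic in that apples example (my reading, taking the square-root utility function literally):

```latex
U(n) = \sqrt{n}
\quad\Rightarrow\quad
U(4n) = \sqrt{4n} = 2\sqrt{n} = 2\,U(n)
```

That is, multiplying the number of apples by four doubles utility rather than quadrupling it, which is the kind of rescaling of outcomes that could stand in for a change in the utility function.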

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-18T17:39:46.555Z · LW · GW

Do we need the ability to reason about logically inconsistent situations? Perhaps we could attempt to transform the question of logical counterfactuals into a question about consistent situations instead as I describe in this post? Or to put it another way, is the idea of logical counterfactuals an analogy or something that is supposed to be taken literally?

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-18T00:42:58.138Z · LW · GW

Hmm, I'm still confused. I can't figure out why we would need logical uncertainty in the typical case to figure out the consequences of "source code X outputs action/policy Y". Is there a simple problem where this is necessary or is this just a result of trying to solve for the general case?

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-17T23:41:54.225Z · LW · GW

I'm actually still quite confused by the necessity of logical uncertainty for UDT. Most of the common problems like Newcomb's or Parfit's Hitchhiker don't seem to require it. Where does it come in?

(The only reference to it that I could find was on the LW wiki)

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-17T18:53:27.030Z · LW · GW

You may find this comment that Rob Bensinger left on one of my questions interesting:

"The main datapoint that Rob left out: one reason we don't call it UDT (or cite Wei Dai much) is that Wei Dai doesn't endorse FDT's focus on causal-graph-style counterpossible reasoning; IIRC he's holding out for an approach to counterpossible reasoning that falls out of evidential-style conditioning on a logically uncertain distribution. (FWIW I tried to make the formalization we chose in the paper general enough to technically include that possibility, though Wei and I disagree here and that's definitely not where the paper put its emphasis. I don't want to put words in Wei Dai's mouth, but IIRC, this is also a reason Wei Dai declined to be listed as a co-author.)"

Rob also left another comment explaining the renaming from UDT to FDT

Comment by chris-leong on LessWrong.com URL transfer complete, data import will run for the next few hours · 2018-03-23T09:28:40.386Z · LW · GW

Two issues:

  • There isn't a link to the Wiki in the sidebar, but I suppose you might be planning to restyle it and better integrate it into the rest of the site before it appears there?
  • I can't log in with Google on LessWrong.com. It still works on LesserWrong.com, but it freezes after I select the email address I want to use, with the pop-up window just being blank.