Comment by chris-leong on Should there be a header feature? · 2019-06-20T07:26:41.018Z · score: 3 (2 votes) · LW · GW

I'd still like the ability to have the explicit abstract just be read off from the text after a certain point, but I suppose it would require a lot of work to support that functionality.

Should there be a header feature?

2019-06-20T06:45:38.654Z · score: 3 (1 votes)
Comment by chris-leong on Mistakes with Conservation of Expected Evidence · 2019-06-09T23:09:16.102Z · score: 1 (1 votes) · LW · GW

"I agree fairly strongly, but this seems far from the final word on the subject, to me."

Hmm, actually I think you're right and that it may be more complex than this.

"Ah. I take you to be saying that the quality of the clever arguer's argument can be high variance, since there is a good deal of chance in the quality of evidence cherry-picking is able to find. A good point."

Exactly. There may only be a weak correlation between the evidence presented and the truth. Maybe you can still extract something from that weak signal, or maybe it's better to focus on stronger signals instead.
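
A minimal toy simulation of this point (all distributions and parameters below are invented for illustration, not taken from the discussion): when a clever arguer samples many noisy signals and reports only the strongest one, the reported evidence for false claims overlaps heavily with the reported evidence for true ones.

```python
import random

def best_evidence(claim_is_true, n_draws=10):
    """A clever arguer samples n_draws noisy signals and reports only the
    strongest. Signals are only slightly stronger on average when the claim
    is true."""
    mean = 1.0 if claim_is_true else 0.0
    return max(random.gauss(mean, 3.0) for _ in range(n_draws))

random.seed(0)
true_scores = [best_evidence(True) for _ in range(10_000)]
false_scores = [best_evidence(False) for _ in range(10_000)]

print(f"avg best evidence, true claims:  {sum(true_scores) / len(true_scores):.2f}")
print(f"avg best evidence, false claims: {sum(false_scores) / len(false_scores):.2f}")

# The gap between the two averages is small relative to the noise, so the
# persuasiveness of the cherry-picked argument correlates only weakly with
# the truth of the claim.
```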

Comment by chris-leong on Mistakes with Conservation of Expected Evidence · 2019-06-09T12:59:47.812Z · score: 1 (1 votes) · LW · GW

I view the issue of intellectual modesty much like the issue of anthropics: the only people who matter are those whose decisions are subjunctively linked to yours. (It only starts getting complicated when you ask whether you should be intellectually modest about your reasoning about intellectual modesty.)

One issue with the clever arguer is that the persuasiveness of their arguments might have very little to do with how persuasive they should be, so attempting to work off expectations might fail.

Comment by chris-leong on All knowledge is circularly justified · 2019-06-06T11:08:11.540Z · score: 2 (2 votes) · LW · GW

Where would you start with his work?

Comment by chris-leong on All knowledge is circularly justified · 2019-06-05T22:29:36.558Z · score: 1 (1 votes) · LW · GW

I've heard of it, but I haven't read into it, so I avoided using the term.

All knowledge is circularly justified

2019-06-04T22:39:20.766Z · score: 10 (8 votes)
Comment by chris-leong on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-04T02:24:04.666Z · score: 1 (1 votes) · LW · GW

Maybe there is a possible project in this direction. I'll assume that this is general advice you'd give to many people who want to work in this space. If it is important for people to build a model of what is required for AI to go well, then people may as well work on this together. And sure, there are websites like Less Wrong, but people can exchange information much faster by chatting, either in person or over Skype. (Of course, there are worries that this might lead to overly correlated answers.)

Comment by chris-leong on The Relationship Between the Village and the Mission · 2019-05-13T08:01:37.089Z · score: 7 (4 votes) · LW · GW

On a related but somewhat different issue: I feel that there has been something of an under-investment in rationality community building overall. EA has CEA, but rationality doesn't have an equivalent (CFAR doesn't play the same community-building role). There isn't any organisation responsible for growing the community, organising conferences and addressing challenges that arise.

That said, I'm not sure that there is necessarily agreement that there is a single mission. Some people are in rationality for AI, some for the insight porn, some for the personal development and some simply for social reasons. Even though EA has a massively broad goal, "doing the most good" seems to suffice to spur action in a way that rationality's mission hasn't.

Comment by chris-leong on Hierarchy and wings · 2019-05-06T22:26:48.940Z · score: 8 (4 votes) · LW · GW

This doesn't seem to accurately describe contemporary politics, at least in the Western world. The left wing isn't made up of just non-central groups; it also includes the cultural/intellectual elites.

Comment by chris-leong on Liar Paradox Revisited · 2019-04-18T14:36:00.995Z · score: 1 (1 votes) · LW · GW

Probably the wrong editor.

Comment by chris-leong on The Simple Solow Model of Software Engineering · 2019-04-09T02:33:36.686Z · score: 7 (4 votes) · LW · GW

There are other considerations that slow down large code bases:

  • The more features you have, the more potential interactions
  • The bigger a codebase is, the harder it is to understand
  • Having more features means more work is involved in testing
  • Customer bases shift over time from early adopters to those who want more stability and reliability
  • When a codebase is more mature, there's more chance that a change could make the product worse, so you have to spend more time on evaluation
  • A larger customer base forces you to care more about rare issues

Comment by chris-leong on On AI and Compute · 2019-04-04T03:19:34.777Z · score: 1 (1 votes) · LW · GW

The figures should be lower than this, since only a fraction of human time is spent learning.
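
As a purely illustrative back-of-the-envelope calculation (every number here is an assumption of mine, not a figure from the post): if an estimate counts all the waking hours of an 18-year childhood, and only about a fifth of that time is genuine learning, the figure shrinks by the same factor.

```latex
% All numbers illustrative:
18 \times 365 \times 16 \approx 1.05 \times 10^{5} \text{ waking hours}
% If only ~20% of waking time is genuine learning:
0.2 \times 1.05 \times 10^{5} \approx 2.1 \times 10^{4} \text{ learning hours}
```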

Comment by chris-leong on LW Update 2019-04-02 – Frontpage Rework · 2019-04-03T04:53:12.011Z · score: 8 (4 votes) · LW · GW

What's happening with regard to meta posts? Are they just going to be personal blogposts now?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T08:41:17.859Z · score: 1 (1 votes) · LW · GW

How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?

If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:46:19.784Z · score: 1 (1 votes) · LW · GW

Is predictive power an instrumental or terminal goal?

Is your view a denial of the territory or agnosticism about it?

Is the therapy example a true model of the world or a useful fiction?

Comment by chris-leong on User GPT2 is Banned · 2019-04-02T03:38:48.421Z · score: 16 (5 votes) · LW · GW

I thought that GPT2 was funny at first, but after a while it got irritating. If there's a next time, it should be more limited in how many comments it makes: 1) you could train it on how many votes its comments got, to try to figure out which comments to reply to; 2) it might also automatically reply to every reply on its own comments.
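
As a hypothetical sketch of that rate-limiting idea (the function, thresholds and the notion of a karma-prediction model are all invented here for illustration, not anything the LW team proposed): the bot would only reply where a karma model predicts a good reception, subject to a daily cap.

```python
def should_reply(predicted_karma: float, replies_today: int,
                 karma_threshold: float = 2.0, daily_cap: int = 10) -> bool:
    """Reply only if the comment is predicted to be well received and the
    bot hasn't exhausted its daily budget. Thresholds are made up."""
    return predicted_karma >= karma_threshold and replies_today < daily_cap

print(should_reply(predicted_karma=3.0, replies_today=4))  # True: promising spot
print(should_reply(predicted_karma=1.0, replies_today=4))  # False: likely poorly received
```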

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:29:23.864Z · score: 2 (2 votes) · LW · GW

1) That's a good point, but I was thinking about how to improve the high school maths syllabus, not so much high school in general. I don't have any strong opinions on whether literature should be removed instead, if it had to be one or the other. However, I do have other ideas for literature: I'd replace it with a subject that is half writing and giving speeches about what students are passionate about, and half reading books mostly just for participation marks. The kinds of things students currently do in literature would be part of an elective only.

2) p-value testing is a rather mechanised process; it's exactly the kind of thing high school is good at teaching. Basic Bayesian statistics only has one key formula (although it has another form). Even if there were a need for prerequisite units to prepare students, it still seems worthwhile.
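
Presumably the one key formula is Bayes' theorem, and the "other form" is its odds form:

```latex
% Bayes' theorem:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
% Odds form: posterior odds = prior odds × likelihood ratio
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
```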

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:18:46.950Z · score: 3 (2 votes) · LW · GW

That's a good point. There are all kinds of things that might be worth considering adding, such as programming, psychology or political philosophy. I guess my point was only that if we were going to replace it with something within maths, then stats seems to be the best candidate (at least better than any of the other content that I covered at university).

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-02T03:15:48.033Z · score: 1 (1 votes) · LW · GW

Good point, I should have clarified this more. I'm not saying that people shouldn't know how to calculate the area and circumference of a circle, as people may actually use that. It's more about all the results on tangents, chords and shapes inscribed in circles.

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:27:00.525Z · score: 8 (4 votes) · LW · GW

Circle geometry should be removed from the high school maths syllabus and replaced with statistics because stats is used in science, business and machine learning, while barely anyone needs circle geometry.

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:15:34.869Z · score: 3 (2 votes) · LW · GW

Do maps need to ultimately be grounded in something that is not a map, and if not, why are these maps meaningful?

Comment by chris-leong on Experimental Open Thread April 2019: Socratic method · 2019-04-01T06:04:15.985Z · score: 5 (4 votes) · LW · GW

Question: How representative do you think posts on Less Wrong are of how rationalists make decisions in practice? If there is a difference, do you think spending time on LW may affect your perspective on how rationalists make decisions?

Comment by chris-leong on Review of Q&A [LW2.0 internal document] · 2019-03-30T02:00:18.164Z · score: 1 (1 votes) · LW · GW

I actually suspect that the biggest market for this would be EA, not LW. There are already a large number of people who want to work in EA, and many organisations that would like to see certain questions answered.

Comment by chris-leong on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T22:51:14.133Z · score: 4 (4 votes) · LW · GW

If the utility function is the square root of the number of apples, you could multiply the number of apples by four to double the utility. The question is mainly about whether you can always perform that kind of adaptation, more than about anything else.
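
Spelling out the arithmetic behind the apples example:

```latex
u(n) = \sqrt{n} \quad \Rightarrow \quad u(4n) = \sqrt{4n} = 2\sqrt{n} = 2\,u(n)
```

So quadrupling the number of apples (a transformation of the outcome) has exactly the same effect as doubling the utility (a transformation of the utility function), which is the kind of adaptation in question.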

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-18T17:39:46.555Z · score: 1 (1 votes) · LW · GW

Do we need the ability to reason about logically inconsistent situations? Perhaps we could attempt to transform the question of logical counterfactuals into a question about consistent situations instead, as I describe in this post. Or to put it another way, is the idea of logical counterfactuals an analogy, or something that is supposed to be taken literally?

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-18T00:42:58.138Z · score: 3 (2 votes) · LW · GW

Hmm, I'm still confused. I can't figure out why we would need logical uncertainty in the typical case to figure out the consequences of "source code X outputs action/policy Y". Is there a simple problem where this is necessary or is this just a result of trying to solve for the general case?

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-17T23:41:54.225Z · score: 3 (2 votes) · LW · GW

I'm actually still quite confused by the necessity of logical uncertainty for UDT. Most of the common problems like Newcomb's or Parfit's Hitchhiker don't seem to require it. Where does it come in?

(The only reference to it that I could find was on the LW wiki.)

Comment by chris-leong on Comparison of decision theories (with a focus on logical-counterfactual decision theories) · 2019-03-17T18:53:27.030Z · score: 13 (4 votes) · LW · GW

You may find this comment that Rob Bensinger left on one of my questions interesting:

"The main datapoint that Rob left out: one reason we don't call it UDT (or cite Wei Dai much) is that Wei Dai doesn't endorse FDT's focus on causal-graph-style counterpossible reasoning; IIRC he's holding out for an approach to counterpossible reasoning that falls out of evidential-style conditioning on a logically uncertain distribution. (FWIW I tried to make the formalization we chose in the paper general enough to technically include that possibility, though Wei and I disagree here and that's definitely not where the paper put its emphasis. I don't want to put words in Wei Dai's mouth, but IIRC, this is also a reason Wei Dai declined to be listed as a co-author.)"

Rob also left another comment explaining the renaming from UDT to FDT.

The Curious Prisoner Puzzle

2018-06-16T00:40:15.034Z · score: 3 (8 votes)
Comment by chris-leong on LessWrong.com URL transfer complete, data import will run for the next few hours · 2018-03-23T09:28:40.386Z · score: 8 (4 votes) · LW · GW

Two issues:

  • There isn't a link to the Wiki in the sidebar, but I suppose you might be planning to restyle it and better integrate it into the rest of the site before it appears there?
  • I can't log in with Google on LessWrong.com. It still works on LesserWrong.com, but it freezes after I select the email address I want to use, with the pop-up window just being blank.