Posts

Robin Hanson on whether governments can squash COVID-19 2020-03-19T18:23:57.574Z · score: 11 (4 votes)
Should we all be more hygienic in normal times? 2020-03-17T06:14:23.093Z · score: 9 (2 votes)
Did any US politician react appropriately to COVID-19 early on? 2020-03-17T06:12:31.523Z · score: 22 (11 votes)
An Analytic Perspective on AI Alignment 2020-03-01T04:10:02.546Z · score: 53 (15 votes)
How has the cost of clothing insulation changed since 1970 in the USA? 2020-01-12T23:31:56.430Z · score: 14 (3 votes)
Do you get value out of contentless comments? 2019-11-21T21:57:36.359Z · score: 28 (12 votes)
What empirical work has been done that bears on the 'freebit picture' of free will? 2019-10-04T23:11:27.328Z · score: 9 (4 votes)
A Personal Rationality Wishlist 2019-08-27T03:40:00.669Z · score: 43 (26 votes)
Verification and Transparency 2019-08-08T01:50:00.935Z · score: 37 (17 votes)
DanielFilan's Shortform Feed 2019-03-25T23:32:38.314Z · score: 19 (5 votes)
Robin Hanson on Lumpiness of AI Services 2019-02-17T23:08:36.165Z · score: 16 (6 votes)
Test Cases for Impact Regularisation Methods 2019-02-06T21:50:00.760Z · score: 65 (19 votes)
Does freeze-dried mussel powder have good stuff that vegan diets don't? 2019-01-12T03:39:19.047Z · score: 18 (5 votes)
In what ways are holidays good? 2018-12-28T00:42:06.849Z · score: 22 (6 votes)
Kelly bettors 2018-11-13T00:40:01.074Z · score: 23 (7 votes)
Bottle Caps Aren't Optimisers 2018-08-31T18:30:01.108Z · score: 66 (26 votes)
Mechanistic Transparency for Machine Learning 2018-07-11T00:34:46.846Z · score: 55 (21 votes)
Research internship position at CHAI 2018-01-16T06:25:49.922Z · score: 25 (8 votes)
Insights from 'The Strategy of Conflict' 2018-01-04T05:05:43.091Z · score: 73 (27 votes)
Meetup : Canberra: Guilt 2015-07-27T09:39:18.923Z · score: 1 (2 votes)
Meetup : Canberra: The Efficient Market Hypothesis 2015-07-13T04:01:59.618Z · score: 1 (2 votes)
Meetup : Canberra: More Zendo! 2015-05-27T13:13:50.539Z · score: 1 (2 votes)
Meetup : Canberra: Deep Learning 2015-05-17T21:34:09.597Z · score: 1 (2 votes)
Meetup : Canberra: Putting Induction Into Practice 2015-04-28T14:40:55.876Z · score: 1 (2 votes)
Meetup : Canberra: Intro to Solomonoff induction 2015-04-19T10:58:17.933Z · score: 1 (2 votes)
Meetup : Canberra: A Sequence Post You Disagreed With + Discussion 2015-04-06T10:38:21.824Z · score: 1 (2 votes)
Meetup : Canberra HPMOR Wrap Party! 2015-03-08T22:56:53.578Z · score: 1 (2 votes)
Meetup : Canberra: Technology to help achieve goals 2015-02-17T09:37:41.334Z · score: 1 (2 votes)
Meetup : Canberra Less Wrong Meet Up - Favourite Sequence Post + Discussion 2015-02-05T05:49:29.620Z · score: 1 (2 votes)
Meetup : Canberra: the Hedonic Treadmill 2015-01-15T04:02:44.807Z · score: 1 (2 votes)
Meetup : Canberra: End of year party 2014-12-03T11:49:07.022Z · score: 1 (2 votes)
Meetup : Canberra: Liar's Dice! 2014-11-13T12:36:06.912Z · score: 1 (2 votes)
Meetup : Canberra: Econ 101 and its Discontents 2014-10-29T12:11:42.638Z · score: 1 (2 votes)
Meetup : Canberra: Would I Lie To You? 2014-10-15T13:44:23.453Z · score: 1 (2 votes)
Meetup : Canberra: Contrarianism 2014-10-02T11:53:37.350Z · score: 1 (2 votes)
Meetup : Canberra: More rationalist fun and games! 2014-09-15T01:47:58.425Z · score: 1 (2 votes)
Meetup : Canberra: Akrasia-busters! 2014-08-27T02:47:14.264Z · score: 1 (2 votes)
Meetup : Canberra: Cooking for LessWrongers 2014-08-13T14:12:54.548Z · score: 1 (2 votes)
Meetup : Canberra: Effective Altruism 2014-08-01T03:39:53.433Z · score: 1 (2 votes)
Meetup : Canberra: Intro to Anthropic Reasoning 2014-07-16T13:10:40.109Z · score: 1 (2 votes)
Meetup : Canberra: Paranoid Debating 2014-07-01T09:52:26.939Z · score: 1 (2 votes)
Meetup : Canberra: Many Worlds + Paranoid Debating 2014-06-17T13:44:22.361Z · score: 1 (2 votes)
Meetup : Canberra: Decision Theory 2014-05-26T14:44:31.621Z · score: 1 (2 votes)
[LINK] Scott Aaronson on Integrated Information Theory 2014-05-22T08:40:40.065Z · score: 22 (23 votes)
Meetup : Canberra: Rationalist Fun and Games! 2014-05-01T12:44:58.481Z · score: 0 (3 votes)
Meetup : Canberra: Life Hacks Part 2 2014-04-14T01:11:27.419Z · score: 0 (1 votes)
Meetup : Canberra Meetup: Life hacks part 1 2014-03-31T07:28:32.358Z · score: 0 (1 votes)
Meetup : Canberra: Meta-meetup + meditation 2014-03-07T01:04:58.151Z · score: 3 (4 votes)
Meetup : Second Canberra Meetup - Paranoid Debating 2014-02-19T04:00:42.751Z · score: 1 (2 votes)

Comments

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-08-07T19:05:13.397Z · score: 6 (3 votes) · LW · GW

Alternate title: negation is a little tricky.

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-08-07T19:00:18.270Z · score: 4 (2 votes) · LW · GW

Avoid false dichotomies when reciting the litany of Tarski.

Suppose I were arguing about whether it's morally permissible to eat vegetables. I might stop in the middle and say:

If it is morally permissible to eat vegetables, I desire to believe that it is morally permissible to eat vegetables. If it is morally impermissible to eat vegetables, I desire to believe that it is morally impermissible to eat vegetables. Let me not become attached to beliefs I may not want.

But this ignores the possibility that it's neither morally permissible nor morally impermissible to eat vegetables, because (for instance) things don't have moral properties, or morality doesn't have permissible vs impermissible categories, or whether or not it's morally permissible or impermissible to eat vegetables depends on whether or not it's Tuesday.

Luckily, when you're saying the litany of Tarski, you have a prompt to actually think about the negation of the belief in question. Which might help you avoid this mistake.

Comment by danielfilan on Tagging Open Call / Discussion Thread · 2020-08-03T01:35:38.107Z · score: 13 (6 votes) · LW · GW

We haven't yet built any ways to recognize or reward the taggers, and I'd really like to. Any suggestions for how to do that?

Publish a book of the best instances of people applying tags to posts in 2020.

Comment by danielfilan on Tagging Open Call / Discussion Thread · 2020-08-03T01:12:47.925Z · score: 4 (2 votes) · LW · GW

This is probably a joke, but in my experience, explaining other people's ideas is also a valued contribution if you explain it well and people are interested in the ideas.

Comment by danielfilan on Tags Discussion/Talk Thread · 2020-07-31T16:19:39.619Z · score: 4 (2 votes) · LW · GW

FWIW I vote for "Bayes' Theorem" over "Bayes Theorem".

Comment by danielfilan on New Paper on Herd Immunity Thresholds · 2020-07-31T02:11:08.725Z · score: 2 (1 votes) · LW · GW

In SIR models you can overshoot the herd immunity threshold, right? As such, I'm not sure I should take ~30% seroprevalence as strong evidence that the herd immunity threshold is greater than ~20%. That being said, it's hard to understand how you could have ~70% seroprevalence if the herd immunity threshold is ~20%.
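
As a rough illustration, here's a standard textbook SIR simulation I sketched (not anything from the paper; parameter values are made up): with R0 = 1.3 the herd immunity threshold is about 23%, but an epidemic that runs unchecked ends up infecting roughly 40% of the population.

```python
# A minimal SIR sketch (standard model, illustrative parameters only): with
# R0 = 1.3, the herd immunity threshold is 1 - 1/R0 ≈ 23%, but if the epidemic
# runs unchecked, the total fraction ever infected overshoots that threshold.

def sir_final_attack_rate(r0, gamma=0.1, i0=1e-4, dt=0.1, steps=20_000):
    """Euler-integrate a basic SIR model and return the final attack rate."""
    beta = r0 * gamma                   # transmission rate implied by R0
    s, i = 1.0 - i0, i0                 # susceptible and infected fractions
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
    return 1.0 - s                      # fraction ever infected

r0 = 1.3
print(f"herd immunity threshold: {1 - 1 / r0:.0%}")                 # ~23%
print(f"final attack rate:       {sir_final_attack_rate(r0):.0%}")  # ~42%, an overshoot
```

So overshoot alone could push seroprevalence well past a ~20% threshold.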

Comment by danielfilan on What if memes are common in highly capable minds? · 2020-07-30T21:54:28.612Z · score: 8 (4 votes) · LW · GW

My understanding of meme theory is that it considers the setting where memes mutate, reproduce, and are under selection pressure. This basically requires you to think that there's some population pool where the memes are spreading. So, one way to think about it might be to ask what memetic environment your AI systems are in.

  • Are human memes a good fit for AI agents? You might think that a physics simulator is not going to be a good fit for most human memes (except perhaps for memes like "representation theory is a good way to think about quantum operators"), because your physics simulator is structured differently from most human minds, and doesn't have the initial memes that our memes are co-adapted with. That being said, GPT-8 might be very receptive to human memes, as memes are pretty relevant to what characters humans type on the internet.
  • How large is the AI population? If there's just one smart AI overlord and then a bunch of MS Excel-level clever computers, the AI overlord is probably not exchanging memes with the spreadsheets. However, if there's a large number of smart AI systems that work in basically the same manner, you might think that that forms the relevant "meme pool", and the resulting memes are going to be different from human memes (if the smart AI systems are cognitively different from humans), and as a result perhaps harder to predict. You could also imagine there being lots of AI system communities where communication is easy within each community but difficult between communities due to architectural differences.
Comment by danielfilan on Forum participation as a research strategy · 2020-07-29T18:00:59.420Z · score: 2 (1 votes) · LW · GW

Hm, I perceived Raemon to be referring more specifically to turning forum discussions into posts, or otherwise tidying them up. I think that's importantly different to transcribing a talk (since a talk isn't a discussion), or a debate (since you only have a short period of time to think about your response to the other person). I guess it's possible that the tagging system helps with this, but it's not obvious to me how it would. That being said, I do agree that more broadly LW has moved towards more synthesis and intertemporal discussions.

Comment by danielfilan on Forum participation as a research strategy · 2020-07-29T15:53:19.930Z · score: 8 (4 votes) · LW · GW

Just got a calendar reminder to check if this happened - my impression is that any such efforts haven't really materialised on the site.

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-07-29T04:35:45.398Z · score: 6 (3 votes) · LW · GW

Just learned that 80,000 hours' career guide includes the claim that becoming a Russia or India specialist might turn out to be a very promising career path.

Comment by danielfilan on Superintelligence 7: Decisive strategic advantage · 2020-07-29T01:52:17.283Z · score: 2 (1 votes) · LW · GW

Some reasons that come to mind:

  • It might take you a while to come to the conclusion that your technology won't overtake theirs.
  • You might have slightly different computational resources, and the code might be specific to that.
Comment by danielfilan on Superintelligence 6: Intelligence explosion kinetics · 2020-07-23T04:58:35.347Z · score: 2 (1 votes) · LW · GW

FWIW I think this 'milestone' is much less clear than Bostrom makes it sound. I'd imagine there's a lot of variation in fidelity of simulation, both measured in terms of brain signals and in terms of behaviour, and I'd be surprised if there were some discrete point at which everybody realised that they'd got it right.

Comment by danielfilan on What should we do about network-effect monopolies? · 2020-07-22T17:51:49.477Z · score: 2 (1 votes) · LW · GW

I basically disagree with the idea that the US FTC gets to decide what the word 'monopoly' means. I also think that having a high market share doesn't mean you don't face competition - indeed, it can mean that you're winning the competition.

Re: Apple, it may have a monopoly on iOS app distribution, but when people are considering what phones to buy, they get to choose between iPhones with iOS apps and Androids with Android apps. Admittedly, there's some friction in changing from one to the other.

Comment by danielfilan on One Way to Think About ML Transparency · 2020-07-20T19:37:48.917Z · score: 2 (1 votes) · LW · GW

Looking back on this, the relevant notion isn't going to be those distributions, but just plain old Kt complexity: the minimum, over programs p that take time t to compute the data, of |p| + log t.
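
For concreteness, the Levin-style definition I have in mind (my gloss, with U a fixed universal machine; none of this notation is from the original post) is

Kt(x) = min over programs p and times t of |p| + log₂(t), such that U(p) outputs x within t steps,

so a string only counts as simple if some short program produces it reasonably quickly.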

Comment by danielfilan on Lessons on AI Takeover from the conquistadors · 2020-07-18T00:00:44.054Z · score: 5 (3 votes) · LW · GW

The post I was referring to

Comment by danielfilan on Superintelligence 5: Forms of Superintelligence · 2020-07-17T21:22:58.916Z · score: 2 (1 votes) · LW · GW

This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I'm not thinking of.)

The way I might understand it is that you can be good at baking a cake yourself, or you can be good at leveraging other people's talents to bake cakes. Similarly, a collective superintelligence is smart by virtue of figuring out how to solve a hard problem using moderately smart things.

Comment by danielfilan on Superintelligence 5: Forms of Superintelligence · 2020-07-17T21:15:54.635Z · score: 2 (1 votes) · LW · GW

To me, this seems like an issue with the definition of 'chunk of information'. Sure, maybe I can only remember a few at a time, but each chunk has a whole bunch of associations and connected information that I can access fairly easily. As such, my guess is that these chunks actually store a very large number of bits, and that's why you can't fit too many of them into short-term memory at once. Of course, you could do even better with better hardware etc., but this seems to just be an instance of the point "humans have finite memory, for every finite number there's a bigger number, therefore we could make machines with more memory".

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-07-17T04:51:28.172Z · score: 2 (1 votes) · LW · GW

Excerpts from a FB comment I made about the principle of charity. Quote blocks are a person that I'm responding to, not me. Some editing for coherence has been made. tl;dr: it's fine to conclude that people are acting selfishly, and even to think that it's likely that they're acting selfishly on priors regarding the type of situation they're in.


The essence of charitable discourse is assuming that even your opponents have internally coherent and non-selfish reasons for what they do.

If this were true, then one shouldn't engage in charitable discourse. People often do things for entirely selfish reasons. I can determine this because I often do things for entirely selfish reasons, and in general things put under selection pressure will behave in accordance with that pressure. I could elaborate or develop this point further, but I'd be surprised to learn that you disagreed. I further claim that you shouldn't assume that something isn't the case if it is often the case.

That being said, the "non-selfish" qualifier doesn't appear in what Wikipedia thinks the principle of charity is, nor does it appear in r/slatestarcodex's sidebar description of what it means to be charitable, and I don't understand why you included it. In fact, in all of these cases, the principle of charity seems like it's meant to apply to arguments or stated beliefs rather than actions in general...

Tech people don't like it when the media assumes tech companies are all in it just for the money, and have no principles. We shouldn't do the same.

You should in fact assume that tech companies are in it for the money and have no principles, at least until seeing contrary evidence, since that's the standard and best model of corporations (although you shouldn't necessarily assume the same of their employees). Regarding "we shouldn't do the same", I wholeheartedly reject the implication that if people don't like having certain inferences drawn about them, one shouldn't draw those inferences. Sometimes the truth is unflattering!

Comment by danielfilan on Superintelligence Reading Group 3: AI and Uploads · 2020-07-17T04:09:31.494Z · score: 2 (1 votes) · LW · GW

Perhaps surrogacy might be such an analogue?

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-17T03:00:40.621Z · score: 7 (4 votes) · LW · GW

What the proofs actually mean in practice is obviously up for debate, but I think that a pretty reasonable interpretation is that they're something like analogies which help us get a handle on how powerful the different proposals are in theory.

I'm curious if you agree with the inference of conclusions 1 and 2 from premises 1, 2, and 3, and/or the underlying claim that it's bad news to learn that your alignment scheme would be able to solve a very large complexity class.

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-17T01:08:40.080Z · score: 6 (3 votes) · LW · GW

I read the post as attempting to be literal; ctrl+F-ing "analog" doesn't get me anything until the comments. Also, the post is the one that I read as assuming for the sake of analysis that humans can solve all problems in P, I myself wouldn't necessarily assume that.

Also, regarding your version of premise 1, I think I buy that AI can only give you a polynomial speedup over humans.

I think this is easily handled by saying "in practice, the models we train will not be literally optimal".

My guess is that the thing you mean is something like "Sure, the conclusion of the post is that optimal models can do more than a polynomial speedup over humans, but we won't train optimal models, and in fact the things we train will just be a polynomial speedup over humans", which is compatible with my argument in the top-level comment. But in general your comments make me think that you're interpreting the post completely differently than I am. [EDIT: actually now that I re-read the second half of your comment it makes sense to me. I still am confused about what you think this post means.]

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-17T01:00:43.782Z · score: 8 (3 votes) · LW · GW

It doesn't talk about using M as an oracle, but otherwise I don't see how the proofs pan out: for instance, how else is

Given , return  where  is the initial state of ,  is the empty tape symbol, and  is string concatenation.

supposed to only take polynomial time?

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T23:10:24.228Z · score: 2 (1 votes) · LW · GW

I don't see where the post says that humans can solve all problems solvable in polynomial time given access to an oracle.

From the post:

First, we'll assume that a human, , is polynomial-time such that  can reliably solve any problem in 

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T23:09:02.424Z · score: 4 (2 votes) · LW · GW

TBC by "can" I mean "can in practice". Also if we're getting picky my laptop is just a finite state machine :)

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T22:47:58.363Z · score: 9 (6 votes) · LW · GW

I'm confused by this comment. The post assumes that humans can solve all problems in P (in fact, all problems solvable in polynomial time given access to an oracle for M), then proves that various protocols can solve tricky problems by getting the human player to solve problems in P that in reality aren't typical human problems to solve. Therefore, I take P to actually mean P, rather than being an analogy for problems humans can solve.

Also, regarding your version of premise 1, I think I buy that AI can only give you a polynomial speedup over humans.

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T19:54:33.388Z · score: 6 (3 votes) · LW · GW

Note that this assumes that P does not contain PSPACE.

Also, the proofs in the post (or at least the proof that iterative amplification with weak HCH accesses PSPACE, which is the only one I've read so far) assume polynomial-time access to , which in reality contradicts premise 2. So I'm not sure what to make of things.

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T19:46:58.219Z · score: 13 (6 votes) · LW · GW

It seems to me that the following argument should hold:

Premise 1: We can't build physical machines that solve problems outside of P.

Premise 2: Recursive alignment proposals (HCH, debate, market-making, recursive reward modelling) at equilibrium would solve problems outside of P.

Conclusion 1, from premises 1 and 2: We can't build physical machines that implement equilibrium policies for recursive alignment proposals.

Premise 3: Existing arguments for the safety of recursive alignment proposals reason about their equilibrium policies.

Conclusion 2, from conclusion 1 and premise 3: If we were to build physical machines that were trained using recursive alignment proposals, existing arguments would not establish the safety of the resulting policies.

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T19:39:25.707Z · score: 6 (3 votes) · LW · GW

A notation question I can already ask though: what is  ?

See my comment that attempted to figure this out.

Comment by danielfilan on Alignment proposals and complexity classes · 2020-07-16T19:37:31.733Z · score: 6 (3 votes) · LW · GW

A notational stumble it took me a while to solve when reading this - what's  supposed to mean? My guess at the answer, so that others can get through this quicker, or so that evhub can tell me I'm wrong:

 is supposed to be 's distribution over answers to questions . If  is a distribution over set  and , then  is the probability mass  assigns to . Finally,  is how the human would answer question  when given access to . So,  is the probability that  assigns to the answer that human  would give if  could use  to answer .

Comment by danielfilan on What should we do about network-effect monopolies? · 2020-07-07T16:03:02.675Z · score: 3 (2 votes) · LW · GW

The examples that you highlighted in your links are Google Chrome, Facebook, Grubhub, Amazon, Apple phones, Google search, and Yelp. Out of these, FB is the only one that seems like it deserves to be called a monopoly: Chrome competes with Firefox (among others), Grubhub competes with Caviar (among others), Amazon competes with google searching "buy [product]" and the gazillion other results that show up, Apple phones compete with Android, Google search competes with Bing and DuckDuckGo, and Yelp competes with Google reviews as well as other review sites.

Comment by danielfilan on Site Redesign Feedback Requested · 2020-07-05T04:56:40.200Z · score: 4 (2 votes) · LW · GW

I prefer white background to grey background.

Comment by danielfilan on A reply to Agnes Callard · 2020-06-30T01:17:13.268Z · score: 2 (1 votes) · LW · GW

To be fair, they also have a feedback page where you can type stuff.

To me, this seems like a strong counterargument - I'd think that petitions are an interface to the NYT the same way that they are to me, that is to say an unwelcome one.

Comment by danielfilan on Radical Probabilism [Transcript] · 2020-06-28T02:59:54.785Z · score: 6 (3 votes) · LW · GW

More formally, we want to update P to get Q where Q(X)=c for some chosen c. We set Q(Y) = cP(Y|X) + (1-c)P(~Y|X).

Huh, I'm really surprised this isn't Q(Y) = cP(Y|X) + (1-c)P(Y|~X). Was that a typo? If not, why choose your equation over mine?
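
To spell out why I'd expect my version (a quick check I'm adding here, not from the transcript): with c = 0 we've learned ~X for certain, so the update should reduce to ordinary conditionalization on ~X; and an update that's only about X shouldn't move the probability of a Y that's independent of X.

```python
# A small sanity check of the two candidate update rules (my own sketch, not
# from the talk). We want the updated distribution Q to satisfy Q(X) = c; the
# question is what Q(Y) should be.

def jeffrey_rule(c, p_y_given_x, p_y_given_not_x):
    # Q(Y) = c*P(Y|X) + (1-c)*P(Y|~X)  -- the version I expected
    return c * p_y_given_x + (1 - c) * p_y_given_not_x

def quoted_rule(c, p_y_given_x):
    # Q(Y) = c*P(Y|X) + (1-c)*P(~Y|X)  -- the version as quoted above
    return c * p_y_given_x + (1 - c) * (1 - p_y_given_x)

p_y_given_x, p_y_given_not_x = 0.6, 0.2

# Check 1: c = 0 means learning ~X for sure, so Q(Y) should equal P(Y|~X).
print(jeffrey_rule(0.0, p_y_given_x, p_y_given_not_x))  # 0.2 = P(Y|~X)
print(quoted_rule(0.0, p_y_given_x))                    # 0.4 = P(~Y|X)

# Check 2: if Y is independent of X (P(Y|X) = P(Y|~X) = 0.6), a pure update
# about X shouldn't change Q(Y).
print(jeffrey_rule(0.9, 0.6, 0.6))  # 0.6, unchanged
print(quoted_rule(0.9, 0.6))        # 0.58
```

My version passes both checks; the version as quoted depends only on P(·|X), so it fails them.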

Comment by danielfilan on [ongoing] Thoughts on Proportional voting methods · 2020-06-20T21:29:25.556Z · score: 2 (1 votes) · LW · GW

FYI this was visible to me on the front page.

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-05-15T17:33:48.773Z · score: 12 (7 votes) · LW · GW

Hot take: the norm of being muted on video calls is bad. It makes it awkward and difficult to speak, clap, laugh, or make "I'm listening" sounds. A better norm set would be:

  • use zoom in gallery mode, so somebody making noise doesn't make them more focussed than they were before
  • call from a quiet room
  • be more tolerant of random background sounds, the way we are IRL
Comment by danielfilan on Against strong bayesianism · 2020-05-02T06:58:24.662Z · score: 2 (1 votes) · LW · GW

Actually, I think the synthesis is that many of the things that Bob is saying are implications of Eliezer's description and ways of getting close to Bayesian reasoning, but seem like they're almost presented as concessions. I could try to get into some responses chosen by you if that would be helpful.

Comment by danielfilan on Against strong bayesianism · 2020-05-02T05:47:15.802Z · score: 2 (1 votes) · LW · GW

A lot of Bob's responses seem like natural consequences of Eliezer's claim, but some of them aren't.

Comment by danielfilan on DanielFilan's Shortform Feed · 2020-05-02T02:56:23.418Z · score: 11 (6 votes) · LW · GW

I think the use of dialogues to illustrate a point of view is overdone on LessWrong. Almost always, the 'Simplicio' character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn't try sufficiently hard to pass the ITT of the view they're arguing against. As a result, not only is the dialogue unconvincing, it runs the risk of misleading readers about the actual content of a worldview. I think this is true to a greater extent than posts that just state a point of view and argue against it, because the dialogue format naively appears to actually represent a named representative of a point of view, and structurally discourages disclaimers of the type "as I understand it, defenders of proposition P might state X, but of course I could be wrong".

Comment by danielfilan on Against strong bayesianism · 2020-05-02T02:56:00.461Z · score: 4 (2 votes) · LW · GW

Also (crossposted to shortform):

I think the use of dialogues to illustrate a point of view is overdone on LessWrong. Almost always, the 'Simplicio' character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn't try sufficiently hard to pass the ITT of the view they're arguing against. As a result, not only is the dialogue unconvincing, it runs the risk of misleading readers about the actual content of a worldview. I think this is true to a greater extent than posts that just state a point of view and argue against it, because the dialogue format naively appears to actually represent a named representative of a point of view, and structurally discourages disclaimers of the type "as I understand it, defenders of proposition P might state X, but of course I could be wrong".

Comment by danielfilan on Against strong bayesianism · 2020-05-02T02:48:56.908Z · score: 5 (3 votes) · LW · GW

I feel like a lot of Bob's responses are natural consequences of Eliezer's position that you describe as "strong bayesianism", except where he talks about what he actually recommends, and as such this post feels very uncompelling to me. Where they aren't, "strong bayesianism" is correct: it seems useful for someone to actually think about what the likelihood ratio of "a random thought popped into my head" is, and similarly about how likely skeptical hypotheses are.

Similarly,

In other words, an ideal bayesian is not thinking in any reasonable sense of the word - instead, it’s simulating every logically possible universe. By default, we should not expect to learn much about thinking based on analysing a different type of operation that just happens to look the same in the infinite limit.

seems like it just isn't an argument against

Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.

(and also I dispute the gatekeeping around the term 'thinking': when I simulate future worlds, that sure feels like thinking to me! but this is less important)

In general, I feel like I must be missing some aspect of your world-view that underlies this, because I'm seeing almost no connection between your arguments and the thesis you're putting forwards.

Comment by danielfilan on Subjective implication decision theory in critical agentialism · 2020-04-28T21:35:49.088Z · score: 4 (2 votes) · LW · GW

I'm kind of tired right now, so I might be missing something obvious, but:

It seems that subjective implication decision theory agrees with timeless decision theory on the problems considered, while diverging from causal decision theory, evidential decision theory, and functional decision theory.

Why do you say that it diverges from evidential decision theory (EDT)? AFAICT on all problems listed it does the same thing as EDT, and the style of reasoning seems pretty similar. Would you mind saying what SIDT would do in XOR mugging? (I'd try to work this out myself but for the aforementioned tiredness and the fear that I don't quite understand SIDT well enough).

Comment by danielfilan on Holiday Pitch: Reflecting on Covid and Connection · 2020-04-23T19:24:51.469Z · score: 3 (2 votes) · LW · GW

As the post says:

I wanted to tell all my friends “hey! Are you feeling lonely and disconnected? Try a Seder!”... but, well, I’m not Jewish, and most people aren’t Jewish, and… the story of Seder really deeply assumes “you are a part of Jewish history, or at least the people hosting the event are.”

Personally, as somebody who isn't Jewish and doesn't have Jewish ancestry, I would feel weird hosting a Seder or making one happen (where the feeling is that it would be the bad kind of cultural appropriation), and would also feel weird about it being a Rationalist holiday rather than a holiday for Rationalist Jewish people, just like I'd feel weird about Rationalists adopting Christmas or Obon as a Rationalist holiday, where the feeling is that religion is Actually Bad and rationalists shouldn't have religious ceremonies be an important part of their communities if they can help it.

Comment by danielfilan on Don't Use Facebook Blocking · 2020-04-22T18:38:12.148Z · score: 2 (1 votes) · LW · GW

Or have discussions that don't include disagreeing pairs.

The type of discussion I'm talking about is open-ended community discussion, so it would be weird to limit it such.

Or by having a public block list.

TBC, you need the vast majority of people to have a public block list for this to work.

underestimates?

Thanks, fixed.

Comment by danielfilan on Don't Use Facebook Blocking · 2020-04-22T03:18:53.760Z · score: 7 (3 votes) · LW · GW

When I block people on FB, I do so because I don't consider their contributions to discussions valuable and don't really care about them. If I'm correct about how much they matter, then presumably it's fine if they can't meaningfully participate in conversations. Furthermore, I don't think this is an unusual blocking pattern for people who block people on FB and participate in rationality community discussions.

Counterpoints:

  • There's a unilateralist curse problem where if just one person underestimates how valuable a person's contributions are, they don't get to fully participate in discussions. Hopefully this can be fixed by holding a high bar for blocking?
  • Even if you don't want people to participate in discussions, you often want them to see the discussion. A common case of this is when a norm is being hashed out that you want everybody to follow. (You could attempt to fix this by only blocking people who are peripheral enough to communities of concern that you don't care about their behaviour, but sadly your friends probably have slightly different communities of concern that you're bad at determining the boundaries of.)
  • If you're reading a big community thread, then even if nobody has blocked you, if you don't know that nobody has blocked you (and you don't) then you have an overhead of not knowing if you can see all the discussion, which probably makes discussions worse. This is a cost that you can only eliminate by having blocking be extremely uncommon.
Comment by danielfilan on April Coronavirus Open Thread · 2020-04-17T05:41:23.456Z · score: 5 (3 votes) · LW · GW

Metaculus is running a competition for accurate, publicly-posted, well-reasoned predictions about how COVID-19 will hit El Paso, Texas, in order to help the city with its disaster response. The top prize is $1,000.

Comment by danielfilan on Where should LessWrong go on COVID? · 2020-04-14T03:43:29.869Z · score: 7 (4 votes) · LW · GW

Note that Metaculus also estimates things that are likely inputs to models e.g. "the" IFR.

Comment by danielfilan on Reason as memetic immune disorder · 2020-04-13T04:31:36.175Z · score: 4 (2 votes) · LW · GW

My guess is that his name is "A. J. Jacobs" and not "A. J. Acobs"

Comment by danielfilan on [Announcement] LessWrong will be down for ~1 hour on the evening of April 10th around 10PM PDT (5:00AM GMT) · 2020-04-10T18:10:17.084Z · score: 7 (4 votes) · LW · GW

PSA: if you just say PT you won't be wrong in summer or winter.

Comment by danielfilan on An Orthodox Case Against Utility Functions · 2020-04-09T22:48:18.655Z · score: 6 (3 votes) · LW · GW

Specifically, discontinuous utility functions have always seemed basically irrational to me, for reasons related to incomputability.
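
The toy case I have in mind (my own illustration, not from the post): take the step utility U(x) = 1 if x ≥ 0 and U(x) = 0 if x < 0. No procedure that only ever inspects finitely many bits of an outcome x can determine U(x) when x is arbitrarily close to 0, so U can't be evaluated to any guaranteed precision from finite-precision observations; since computable real-valued functions are continuous, this failure is specific to the discontinuity.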

Comment by danielfilan on COVID-19 and the US Elections · 2020-04-08T20:48:37.502Z · score: 2 (1 votes) · LW · GW

Relevant Metaculus question: Will the US hold mass-turnout elections for President on schedule in 2020?