LessWrong 2.0 Reader
Good article.
It's an asymmetry worth pointing out.
It seems related to a concept of a "low interest rate phenomenon" in ideas. In a low interest rate environment, people fund all sorts of stuff, because they want any return and credit is cheap. Later, much of it looks like bunk. Likewise, much EA behaviour around the plentiful money and status of the FTX era looks profligate by today's standards. In the same way, I wonder which ideas are held up by some vague consensus rather than by being good ideas.
nathan-young on Motivation gaps: Why so much EA criticism is hostile and lazy
Feels like there is something off about the following graph. I.e., these people could write better critiques. Many care a lot. Émile spends a lot of time on their work, for instance. I don't think effort really captures what's going on.
I think it's a mix of effort, status, and norms. In our community it's high status to bend over backwards to write a fair critique (not that we always succeed). For Émile, as an example, I don't think this is the case. Perhaps they gain status from articles that are widely shared and that link ideas they dislike to a broader worldview.
anand-baburajan on Open Thread Spring 2024
I like his UI. In fact, I shared CQ2 with Andy in February, since his notes site was the only other place where I had seen the sliding-pane design. He said CQ2 is neat!
kaj_sotala on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms
Based on the link, it seems you follow the Theravada tradition.
For what it's worth, I don't really follow any one tradition, though Culadasa does indeed have a Theravada background.
cubefox on Priors and Prejudice
The problem with calling parts of a learning algorithm that are not free variables a "prior" is that then anything (every part of any learning algorithm) would count as a prior, even the Bayesian conditionalization rule itself. But that's not what Bayesians consider part of a prior.
dr_s on Priors and Prejudice
I'd definitely call any assumption about which form preferred explanations should take a "prior". Maybe I have a more flexible concept of what counts as Bayesian than you, in that sense? Priors don't need to be free parameters; the process has to start somewhere. But if you already have some data and then acquire more, the previous data will obviously still affect your conclusions.
cubefox on Priors and Prejudice
All of that can be accounted for in a Bayesian framework though?
I mean that those factors don't presuppose different priors. You could still end up with different "posteriors" even with the same "starting point".
An example of an (informal) alternative to Bayesian updating that doesn't require subjective priors is Inference to the Best Explanation. One could, of course, model the criteria that determine the goodness of explanations as a sort of "prior". But those criteria would be part of the hypothetical IBE algorithm, not a free variable like in Bayesian updating. One could also claim that there are no objective facts about the goodness of explanations and that IBE is invalid. But that's an open question.
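The "free variable" point in this exchange can be made concrete with a toy model: in Bayesian updating, the prior is an input you must supply, and the same data fed through the same likelihoods yields different posteriors for different priors. A minimal sketch (the hypothesis probabilities 0.8 and 0.5 and the function name are illustrative assumptions, not from the thread):

```python
from math import prod

def posterior_biased(prior_biased, flips):
    """Posterior probability that a coin is 'biased' (P(H) = 0.8)
    rather than fair (P(H) = 0.5), given a prior and observed flips.
    The prior is a free variable: same data, different priors,
    different posteriors."""
    def likelihood(p):
        return prod(p if f == "H" else 1 - p for f in flips)
    num = prior_biased * likelihood(0.8)
    den = num + (1 - prior_biased) * likelihood(0.5)
    return num / den

flips = ["H", "H", "T", "H"]
print(posterior_biased(0.5, flips))  # agent with an even prior
print(posterior_biased(0.1, flips))  # agent skeptical of bias: lower posterior
```

By contrast, the criteria of a hypothetical IBE algorithm would be fixed parts of the procedure, more like the likelihood function here than like `prior_biased`.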
dr_s on Priors and Prejudice
I'm not sure how that works. Bayes' theorem, per se, is correct. I'm not talking about a level of abstraction in which I try to define decisions/beliefs as symbols; I'm talking about the bare fact that two different brains with different initial states, subject to the same input, will end up in different final states.
Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.
All of that can be accounted for in a Bayesian framework though? Different experiences produce different posteriors of course, and as for path dependence and random chance, I think you can easily get those by introducing some kind of hidden states, describing things we don't quite know about the inner workings of the brain.
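One reason hidden state (or something like it) is needed to capture path dependence: exact Bayesian conditioning on exchangeable data is order-independent, so the order of updates alone cannot change the posterior. A minimal sketch with a toy Beta-Bernoulli model (the function and variable names are illustrative assumptions):

```python
from functools import reduce

def update(belief, flip):
    """One exact Bayesian update of a Beta(a, b) belief about a coin's
    heads-probability. Only the counts of H and T matter, not their order."""
    a, b = belief
    return (a + 1, b) if flip == "H" else (a, b + 1)

data = ["H", "T", "T", "H", "H"]
posterior_fwd = reduce(update, data, (1, 1))            # flips in given order
posterior_rev = reduce(update, reversed(data), (1, 1))  # same flips, reversed
assert posterior_fwd == posterior_rev  # identical posterior either way
```

Path-dependent beliefs would therefore have to come from somewhere outside the conditioning step itself, e.g. a hidden internal state that the inputs modify non-commutatively.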
morpheus on A couple productivity tips for overthinkers
- If you find that you’re reluctant to delete computer files / emails, don’t empty the trash
In Gmail I like to scan the email headers and then bulk-select and archive them (* a e, thanks to vim shortcuts). After 5 years of doing this I still haven't run out of Gmail's free storage. I let Gmail sort the emails into "Primary", "Promotions", "Updates", etc. Usually the only important things are in "Primary", plus 1 or 2 in "Updates".
Though this is only what Bayesianism predicts. A different theory of induction (e.g. one that explains human intelligence, or one that describes how to build an AGI) may not have an equivalent to Bayesian priors. Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.