Posts

'Chat with impactful research & evaluations' (Unjournal NotebookLMs) 2024-09-28T00:32:16.845Z
Calibration training for 'percentile rankings'? 2024-09-14T21:51:55.705Z
Checking public figures on whether they "answered the question" quick analysis from Harris/Trump debate, and a proposal 2024-09-11T20:25:27.845Z
A possible check against motivated reasoning using elicit.org 2022-05-18T20:52:35.601Z

Comments

Comment by david reinstein (david-reinstein) on An academic journal is just a Twitter feed · 2024-09-29T22:17:52.465Z · LW · GW

(How) does this proposal enable single-blind peer review?

For ratings or metrics of the credibility of the research, I could imagine likes/reposts, etc., but could this enable the following (see the sketch after this list)?

  • Rating along multiple dimensions
  • Rating intensity (e.g., strong positive, weak positive, etc.) 
  • Experts/highly rated people to have more weight in the rating (if people want this)
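
To make those three features concrete, here is a minimal sketch of what such a rating record and weighting could look like (all names and fields are my own hypothetical illustrations, not part of the proposal):

```python
from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    paper_id: str
    dimension: str   # e.g., "methods", "novelty", "policy relevance"
    score: float     # signed intensity: -2 (strong negative) to +2 (strong positive)

def weighted_score(ratings: list[Rating], rater_weight: dict[str, float], dimension: str) -> float:
    """Mean score on one dimension, weighting raters by expertise/track record.
    Uniform weights reduce this to a plain average."""
    relevant = [r for r in ratings if r.dimension == dimension]
    total = sum(rater_weight.get(r.rater_id, 1.0) for r in relevant)
    return sum(rater_weight.get(r.rater_id, 1.0) * r.score for r in relevant) / total

# Example: two raters score the "methods" dimension; the expert counts double.
ratings = [Rating("alice", "p1", "methods", 2.0), Rating("bob", "p1", "methods", -1.0)]
print(weighted_score(ratings, {"alice": 2.0}, "methods"))  # 1.0
```
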
Comment by david reinstein (david-reinstein) on The Best Textbooks on Every Subject · 2024-09-28T00:24:41.860Z · LW · GW

Microeconomics and macroeconomics are different subjects and have different content. Why are they grouped together?

Comment by david reinstein (david-reinstein) on Checking public figures on whether they "answered the question" quick analysis from Harris/Trump debate, and a proposal · 2024-09-15T15:06:10.274Z · LW · GW

I think saying “I am not going to answer that because…” would not necessarily feel like taking a hit for the debater/interviewee. It could also bring scrutiny and pressure on moderators/interviewers to ask fair and relevant questions.

I think people would appreciate the directness. And maybe come to understand the nature of inquiry and truth a tiny bit better.

Comment by david reinstein (david-reinstein) on [Crosspost] ACX 2022 Prediction Contest Results · 2023-01-31T23:24:59.264Z · LW · GW

Does anyone know where I can find some of the analysis and code that was done on this? E.g., a Jupyter notebook, Quarto, Google Colab, Kaggle, or something similar.

Either the modeling someone used to make their predictions (perhaps including aggregating round 1), or the ex-post models of what predicted success.

I want to use these in my own work, which I will share publicly. The big question I have (without having dug much into the research and methods) is:

Suppose I had a model that was good at predicting the accuracy of an individual's forecasts, as a function of their demographics, their other predictions, etc. How would I then use that (along with everyone's predictions) to come up with a good aggregate prediction?
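
For what it's worth, one standard way to use such a model (a sketch of my own, not drawn from the contest analysis; the function and parameter names are hypothetical) is to turn predicted skill into weights and pool the forecasts in log-odds space:

```python
import numpy as np

def weighted_log_odds_pool(probs, predicted_skill, extremize=1.0):
    """Pool probability forecasts for one question, weighting each forecaster
    by predicted skill (e.g., the output of a model trained on past accuracy)."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    weights = np.asarray(predicted_skill, dtype=float)
    weights = weights / weights.sum()               # normalize to sum to 1
    log_odds = np.log(probs / (1 - probs))          # work in log-odds space
    pooled = extremize * np.dot(weights, log_odds)  # weighted mean, optionally extremized
    return 1 / (1 + np.exp(-pooled))                # map back to a probability

# Example: three forecasters; the skill model trusts the first one twice as much.
print(weighted_log_odds_pool([0.7, 0.6, 0.4], predicted_skill=[2.0, 1.0, 1.0]))
```

Pooling in log-odds space rather than averaging raw probabilities lets forecasters the model rates highly pull the aggregate toward the extremes, and the `extremize` parameter reflects the common finding that pooled forecasts benefit from being pushed further from 0.5.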

Comment by david reinstein (david-reinstein) on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-06-25T20:39:59.132Z · LW · GW

I started a [link post to this on the EA Forum](https://forum.effectivealtruism.org/posts/e7rWnAFGjWyPeQvwT/2-factor-voting-karma-agreement-for-ea-forum) to discuss whether it makes sense over there.

One thing I suggested as a variation of this:

> B. Perhaps the 'agreement' axis should be something that the post author can add voluntarily, specifying what claim people can indicate agreement/disagreement with? (This might also work well with the Metaculus prediction link that is in the works, afaik.)