I only realised the latter when I saw the Dutch word for this, “middellandse zee”: the sea in the middle of the lands.
“Terranean” had never scanned as a separate element to me.
Related: when you never realized a compound word had a literal meaning...
Cup board -- board to put cups on -- cupboard
Medi terra -- sea in the middle of the land -- Mediterranean
Etc.
I think the gut thing is usually metaphorical, though.
(How) does this proposal enable single-blind peer review?
For ratings or metrics of the credibility of the research, I could imagine likes/reposts, etc. But could this enable the following? (A rough sketch of how these could combine comes after the list.)
- Rating along multiple dimensions
- Rating intensity (e.g., strong positive, weak positive, etc.)
- Experts/highly rated people to have more weight in the rating (if people want this)
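
A minimal sketch of how those three features could combine: multi-dimensional ratings, graded intensity, and rater weights. All names and the intensity scale here are my own assumptions, not anything in the proposal:

```python
from collections import defaultdict

# Intensity scale: -2 = strong negative ... +2 = strong positive.
# Each rating: (rater_weight, {dimension: intensity}); weights could
# come from expertise or the rater's own rating history.
ratings = [
    (1.0, {"rigor": 2, "novelty": 1}),
    (3.0, {"rigor": -1, "clarity": 2}),  # e.g. a highly rated expert
    (1.0, {"rigor": 1, "novelty": -2}),
]

def aggregate(ratings):
    """Weighted mean of intensity per dimension."""
    totals, weights = defaultdict(float), defaultdict(float)
    for w, scores in ratings:
        for dim, intensity in scores.items():
            totals[dim] += w * intensity
            weights[dim] += w
    return {dim: totals[dim] / weights[dim] for dim in totals}

print(aggregate(ratings))
# {'rigor': 0.0, 'novelty': -0.5, 'clarity': 2.0}
```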
- On economics, michaba03m recommends Mankiw's Macroeconomics over Varian's Intermediate Microeconomics and Katz & Rosen's Macroeconomics.
- On economics, realitygrill recommends McAfee's Introduction to Economic Analysis over Mankiw's Principles of Microeconomics and Case & Fair's Principles of Macroeconomics.
Microeconomics and macroeconomics are different subjects with different content. Why are they grouped together?
I think saying “I am not going to answer that because…” would not necessarily feel like taking a hit for the debater/interviewee. It could also bring scrutiny and pressure on moderators/interviewers to ask fair and relevant questions.
I think people would appreciate the directness. And maybe come to understand the nature of inquiry and truth a tiny bit better.
Anyone know where I can find some of the analysis and code that was done on this? Like a Jupyter notebook or Quarto or Google Colab or Kaggle or something.
Either the modeling someone used to make their predictions (including, perhaps, aggregating round 1), or the ex-post models of what predicted success?
I want to use these in my own work, which I will share publicly. The big question I have (without having dug much into the research and methods) is:
suppose I had a model that was good at predicting the accuracy of an individual's prediction, as a function of their demographics, their other predictions, etc. How would I then use that (along with everyone's predictions) to come up with a good aggregate prediction? (A rough sketch of one standard approach is below.)
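
For concreteness, here is a minimal sketch of one standard answer, assuming a hypothetical per-person skill score (e.g., the output of a model like the one described above): convert skill scores into weights and pool the forecasts in log-odds space.

```python
import math

def logit(p):
    """Log-odds of a probability in (0, 1)."""
    return math.log(p / (1 - p))

def aggregate(predictions, skill_scores):
    """Pool probability forecasts, weighting each forecaster by a
    (hypothetical) skill score, e.g. predicted accuracy from a model
    trained on demographics and past predictions."""
    total = sum(skill_scores)
    weights = [s / total for s in skill_scores]
    pooled = sum(w * logit(p) for w, p in zip(weights, predictions))
    return 1 / (1 + math.exp(-pooled))  # back to a probability

# Three forecasters; the middle one is judged twice as skilled.
print(aggregate([0.6, 0.8, 0.5], [1.0, 2.0, 1.0]))  # ~0.69
```

Log-odds pooling is just one choice; a weighted mean of raw probabilities, or extremizing the pooled logit, are common alternatives.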
I started a [link post to this on the EA Forum](https://forum.effectivealtruism.org/posts/e7rWnAFGjWyPeQvwT/2-factor-voting-karma-agreement-for-ea-forum) to discuss if it makes sense over there.
One thing I suggested as a variation of this (a rough data-model sketch follows the quote):
> B. Perhaps the 'agreement' axis should be something that the post author can add voluntarily, specifying the claim people can indicate agreement/disagreement with. (This might also work well with the Metaculus prediction link that is in the works, afaik.)
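
A hypothetical sketch of that variation (illustrative names only, not the forum's actual schema): the karma axis is always present, while the agreement axis only exists when the author supplies an explicit claim.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Vote:
    karma: int                       # -1/+1: quality of the contribution
    agreement: Optional[int] = None  # -1/+1: only valid if there is a claim

@dataclass
class Post:
    body: str
    claim: Optional[str] = None      # author-specified statement to vote on
    votes: list = field(default_factory=list)

    def cast(self, vote: Vote) -> None:
        if vote.agreement is not None and self.claim is None:
            raise ValueError("agreement voting requires an author-supplied claim")
        self.votes.append(vote)
```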