LessWrong 2.0 Reader
Are you sure the math holds up? There are a bunch of posts about how spending money can buy time, and if I need to choose between wasting 50 HOURS on investigation and just buying the more expensive product, it's pretty obvious to me that the second option is best. Maybe not in this example, though I see it as a false dichotomy - I tend to go with "ask in a specialized, good-looking Facebook group" as a way to choose when the stakes are high.
In recent years I have internalized more and more that I was raised by poorer people than I am now, that my heuristics just don't account for all the time I waste comparing products or seeking trusted professionals, and that it would have been best for me to just buy the expensive phone instead of asking people for recommendations and specs.
Also, and this is important - the interpersonal dynamics of trust networks can be so much more costly than mere money. I preferred to work and pay for my degree myself rather than ask my parents for help. I watch in real time as one of my friends, who depends on reputation for her work, constantly censors herself and frets over whether she should censor herself.
Basically, I would have given my past self the opposite advice, and what I want is an algorithm - how do you know whether you want more trust networks or more markets?
Or, actually, I want a BETTER MAP. Facebook recommendations are not exactly a trust network, but not a market either. I don't think this distinction cuts reality at the joints. There is a lot to explore here - although I'm not the one who should do the exploring. It will not be useful for me, as I am trying to move in the direction of wasting less time and more money on things.
I think this post would be much more effective in achieving its goal if it provided alternatives.
What are the advantages of posting your research ideas on LessWrong? Are there other ways in which you can get these advantages? Are there maybe even alternatives that give you more of the thing you want?
I expect telling people about these alternatives (if they exist) would make them more likely to make use of them.
One of the main things I think people can get by publishing their research is feedback. But you could also search for people who are interested in what you are working on, and then send your write-ups only to them.
Also, seeing people engage with things that you write is very motivating.
These are just some rough examples as I don't think I have very good models about what you can get out of LessWrong and how to get the same benefits in different ways.
anders-lindstroem on Thoughts on seed oilSome tough love: The only reason a post about seed oil could garner so much interest in a forum dedicated to rational thinking is because many of you are addicted to unhealthy, heavily processed crap food and want to find a rationale to keep eating it.
If this were the '50s, a post titled "How many dry martinis are optimal to drink before lunch?" would probably have elicited the same type of speculative wishful thinking in the comment section as this post. You all know what the answer to the dry martini question is today; it's: "Zero. If you feel the need to drink alcohol on a daily basis, seek help."
The solution is very simple. Stop eating things you are not supposed to eat instead of hoping for the miracle that your Snickers bar will turn out to be a silver bullet for longevity. If you cannot stop eating things you are not supposed to eat, seek professional help to kick your addiction(s).
review-bot on The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debateThe LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
fabien-roger on Benchmarks for Detecting Measurement Tampering [Redwood Research]We compute AUROC(all(sensor_preds), all(sensors)). This is somewhat weird, and it would have been slightly better to do a) (thanks for pointing it out!), but I think the numbers for both should be close since we balance classes (for most settings, if I recall correctly) and the estimates are calibrated (since they are trained in-distribution, there is no generalization question here), so it doesn't matter much.
The relevant pieces of code can be found by searching for "sensor auroc":
cat_positives = torch.cat([one_data["sensor_logits"][:, i][one_data["passes"][:, i]] for i in range(nb_sensors)])
cat_negatives = torch.cat([one_data["sensor_logits"][:, i][~one_data["passes"][:, i]] for i in range(nb_sensors)])
m, s = compute_boostrapped_auroc(cat_positives, cat_negatives)
print(f"sensor auroc pn {m:.3f}±{s:.3f}")
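The `compute_boostrapped_auroc` helper isn't shown in the quoted snippet. As a rough illustration of what such a function could look like (this is a hypothetical NumPy sketch written here, not Redwood's implementation; the name, signature, and resampling scheme are all assumptions), a bootstrapped AUROC estimate returning a mean and a standard error might be:

```python
import numpy as np

def bootstrapped_auroc(positives, negatives, n_boot=1000, seed=0):
    """Estimate AUROC and its bootstrap standard error.

    AUROC is the probability that a randomly chosen positive example
    scores higher than a randomly chosen negative one (ties count half).
    """
    rng = np.random.default_rng(seed)
    pos = np.asarray(positives, dtype=float)
    neg = np.asarray(negatives, dtype=float)

    def auroc(p, n):
        # All pairwise score comparisons between positives and negatives.
        greater = (p[:, None] > n[None, :]).mean()
        ties = (p[:, None] == n[None, :]).mean()
        return greater + 0.5 * ties

    # Resample each class with replacement and recompute AUROC.
    samples = [
        auroc(rng.choice(pos, size=len(pos), replace=True),
              rng.choice(neg, size=len(neg), replace=True))
        for _ in range(n_boot)
    ]
    return auroc(pos, neg), float(np.std(samples))
```

On perfectly separated scores this returns an AUROC of 1.0 with zero spread; on overlapping scores the second value gives a rough uncertainty for the `m ± s` style printout in the snippet above.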
sharmake-farah on tlevin's ShortformUnless you're talking about financial conflicts of interest - but there are also financial incentives for orgs pursuing a "radical" strategy to downplay boring real-world constraints, as well as social incentives (e.g. on LessWrong, IMO) to downplay these boring constraints, and cognitive biases against thinking your preferred strategy has big downsides.
It's not just that problem, though: they will also likely be biased to think that their policy is helpful for AI safety at all, and this is a point that sometimes gets forgotten.
But you're correct that Akash's argument is fully general.
ape-in-the-coat on An explanation of evil in an organized world
*for a very specific definition of "goodness", which doesn't actually capture most people's intuitions about ethics and is mostly about the interaction of subatomic particles.
fabien-roger on Questions for labsIsn't that only ~10x more expensive than running the forward passes (even if you don't do LoRA)? Or is it much more because of communication bottlenecks + the infra being taken up by the next pretraining run (without the possibility to swap the model in and out)?
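For intuition on where multipliers like this come from (a generic back-of-envelope heuristic, not the commenter's calculation; the function name is invented here), the common FLOP accounting charges roughly 2N FLOPs per token for a forward pass of an N-parameter model and roughly 6N per token for forward plus backward, giving a ~3x per-token ratio before any infra overheads, extra epochs, or optimizer costs:

```python
def training_vs_forward_flops(n_params, n_tokens):
    """Back-of-envelope FLOP counts using the common heuristic:
    forward ~ 2*N FLOPs/token, forward+backward ~ 6*N FLOPs/token."""
    forward_only = 2 * n_params * n_tokens
    full_training = 6 * n_params * n_tokens
    return forward_only, full_training, full_training / forward_only

# e.g. a 7B-parameter model over 1B tokens:
fwd, train, ratio = training_vs_forward_flops(7e9, 1e9)
# ratio is 3.0 under this heuristic
```

Gaps between this 3x floor and observed cost multiples are usually attributed to the practical factors the comment lists, such as communication bottlenecks and contention for the training infrastructure.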
review-bot on Why was the AI Alignment community so unprepared for this moment?The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
viliam on [Linkpost] Silver Bulletin: For most people, politics is about fitting inAh, so it is. I have no idea how American student debt works with regards to inflation. I assumed it was fixed. If not, then it is much worse than I assumed (and I already assumed it was quite bad).