Comments

Comment by drgunn on How much fraud is there in academia? · 2023-11-16T16:47:54.002Z · LW · GW

Fanelli is a good, if dated, reference for this. Another important point is that there are levels of misconduct in research, ranging from bad authorship practices to outright fabrication of results, with the less severe practices being relatively more common: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4269469/

Aside from all that, there's irreproducibility, which doesn't arise from any kind of deliberate misconduct, but still pollutes the epistemic commons: https://www.cos.io/rpcb

Comment by drgunn on Vaniver's thoughts on Anthropic's RSP · 2023-10-29T01:29:51.446Z · LW · GW

As someone with experience in BSL-3 labs, BSL feels like a good metaphor to me. The big issue with the RSP proposal is that it's still just a set of voluntary commitments that could undermine progress on real risk management by giving policymakers a way to make it look like they've done something without really doing anything. It would be much better with input from risk management professionals.

Comment by drgunn on Berkeley, California, USA – ACX Meetups Everywhere Fall 2023 · 2023-10-21T21:23:46.556Z · LW · GW

I'm confused. The protest is now listed at 4. Have you coordinated with them?

Comment by drgunn on The Great Disembedding · 2023-09-27T16:17:02.062Z · LW · GW

I like it, but it feels like you could have worked snakes in there somehow: https://www.vectorsofmind.com/p/the-snake-cult-of-consciousness

Comment by drgunn on “X distracts from Y” as a thinly-disguised fight over group status / politics · 2023-09-25T23:31:04.635Z · LW · GW

X-risk discussions aren't immune from the "grab the mic" dynamics that affect every other cause advocacy community.

There will continue to be tactics such as "X distracts from Y" and "if you really cared about X you would ..." unless and until people who care about the cause for the cause's sake can identify and exclude those who care about the cause for the sake of the cultural and social capital that can be extracted. Inclusivity has such a positive affect halo around it that it's hard to do this, but it's really the only way.

A longer form of the argument: https://meaningness.com/geeks-mops-sociopaths

Comment by drgunn on Places to meet interesting middle-aged men? · 2023-09-24T00:31:39.598Z · LW · GW

I'm not available (I'll be clear about that up front), but I am in my late 40s, in case that helps anyone update their priors about the demographics. YMMV as to whether I'm intellectually interesting ;-)

Comment by drgunn on Why I Don't Believe The Law of the Excluded Middle · 2023-09-19T05:30:39.278Z · LW · GW

When I hear someone saying,

"Because it is asking to me to believe it completely if I believe it at all, I feel more comfortable choosing to consider it “false” on my terms, which is merely that it gave me no other choice because it defined itself to be false when it is only believed weakly."

what I think is "of course there are strong and weak beliefs!" But true and false are only defined relative to who is asking and why (in some cases), so you need to consider the context in which you're applying LoEM.

In other words, LoEM applies to "Does 2+2=4?" but it does not apply to "Is there water in the fridge?", unless the context is specified more carefully.

It's obviously an error to allow only 100% or 0% as truth values for all propositions, and it's perhaps a less obvious error to assign a proposition the same probability of being true across all possible contexts in which it might be evaluated.
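A minimal sketch of that point (the function and context fields are hypothetical, just to illustrate context-dependence):

```python
# Hypothetical sketch: model a proposition's truth value as a function of the
# context in which it is asked, rather than as a fixed boolean.

def water_in_fridge(context: dict) -> bool:
    # "Is there water in the fridge?" depends on what the asker counts as water.
    if context.get("purpose") == "drinking":
        # Only a drinkable container counts.
        return context.get("bottles_of_water", 0) > 0
    # For a chemist, trace moisture and condensation already count.
    return True

print(water_in_fridge({"purpose": "drinking", "bottles_of_water": 0}))  # False
print(water_in_fridge({"purpose": "chemistry"}))                        # True
```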

More here: https://metarationality.com/refrigerator

Comment by drgunn on Lesswrong's opinion on infinite epistemic regress and Bayesianism · 2023-09-17T03:24:23.235Z · LW · GW

You seem to be talking about "combinatorial explosion". It's a classic problem in AI, and I like John Vervaeke's approach to explaining how humans solve the problem for themselves. See: http://sites.utoronto.ca/jvcourses/jolc.pdf

No one has solved it for AI yet.

Comment by drgunn on What are examples of someone doing a lot of work to find the best of something? · 2023-07-28T02:32:59.581Z · LW · GW

It's a very interesting model of the world that tastes 18 different brands of store-bought sauce and doesn't compare any of them to just making your own. Add "seriouseats" to any recipe-related query and you'll get recipes that both work and taste like they're supposed to. They may eventually fire their writers and replace them with AI trained on blogspam recipes, so exploit this knowledge while you can. Surprisingly few people know about it, given how much utility basic culinary knowledge can add to your life.

Comment by drgunn on The case for Doing Something Else (if Alignment is doomed) · 2023-03-14T16:35:30.795Z · LW · GW

What if AI safety and governance people published their papers on arXiv in addition to NBER or wherever? I know it's not the kind of stuff that arXiv accepts, but if I were looking for a near-term policy win, that might be one.

Comment by drgunn on What an actually pessimistic containment strategy looks like · 2023-03-14T06:27:57.917Z · LW · GW

It strikes me that the 80,000 Hours route puts you in a policy role just about when the prediction markets are predicting AGI to be available, i.e., a bit late. I wonder if EA folks still think government roles are the best way to go?