OP did the work to collect these emails and put them into a post. When people do work for you, you shouldn't punish them by giving them even more work.
I've only read a little bit of Martin Gardner, but he might be the Matt Levine of recreational math.
Many newspapers have a (well-earned) reputation for not technically lying.
Thank you, this information was useful for a project I'm working on.
I don't think I understand what "learn to be visibly weird" means, or how it differs from not following social conventions simply because you misunderstand them.
I was recently looking into donating to CLTR and I'm curious why you are excited about it? My sense was that little of its work was directly relevant to x-risk (for example this report on disinformation is essentially useless for preventing x-risk AFAICT), and the relevant work seemed to be not good or possibly counterproductive. For example their report on "a pro-innovation approach to regulating AI" seemed bad to me on two counts:
- There is a genuine tradeoff between accelerating AI-driven innovation and decreasing x-risk. So to the extent that this report's recommendations support innovation, they increase x-risk, which makes this report net harmful.
- The report's recommendations are kind of vacuous, e.g. they recommend "reducing inefficiencies", like yes, this is a fully generalizable good thing but it's not actionable.
(So basically I think this report would be net negative if it wasn't vacuous, but because it's vacuous, it's net neutral.)
This is the sense I get as someone who doesn't know anything about policy and is just trying to get a sense of orgs' work by reading their websites.
My perspective is that I'm much more optimistic about policy than about technical research, but I don't really feel qualified to evaluate policy work, and LTFF makes almost no grants on policy. I looked around and couldn't find any grantmakers who focus on AI policy. And even if they existed, I don't know that I could trust them (like I don't think Open Phil is trustworthy on AI policy, and I kind of buy Habryka's arguments that their policy grants are net negative).
I'm in the process of looking through a bunch of AI policy orgs myself. I don't think I can do a great job of evaluating them but I can at least tell that most policy orgs aren't focusing on x-risk so I can scratch them off the list.
if you think the polling error in 2024 remains unpredictable / the underlying distribution is unbiased
Is there a good reason to think that, if polls have recently under-reported Republican votes?
I don't know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search and replace: (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs \[ is a literal bracket but ( is a literal parenthesis, for some reason.)
- replace "^\(https://.*? \)\(\[\[.*?\]\] \)*" with "\1"
- replace "\[\[\(.*?\)\]\]" with "\1"
This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
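If you'd rather run a script than do this in an editor, here's a rough Python equivalent of the same two passes (an untested sketch; it assumes your tags look like [[tag]] and the hyperlink starts the line):

```python
import re

def strip_tags(text: str) -> str:
    # Pass 1: delete [[tag]]s that come right after a hyperlink at the start of a line.
    text = re.sub(r"^(https://.*? )(\[\[.*?\]\] )*", r"\1", text, flags=re.MULTILINE)
    # Pass 2: remove the brackets from any remaining [[tag]]s.
    return re.sub(r"\[\[(.*?)\]\]", r"\1", text)

print(strip_tags("https://example.com [[foo]] [[bar]] some text [[baz]]"))
# -> https://example.com some text baz
```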
RE Shapley values, I was persuaded by this comment that they're less useful than counterfactual value in at least some practical situations.
(2) have "truesight", i.e. a literally superhuman ability to suss out the interlocutor's character
Why do you believe this?
If your goal is to influence journalists to write better headlines, then it matters whether the journalist has the ability to take responsibility over headlines.
If your goal is to stop journalists from misrepresenting you, then it doesn't actually matter whether the journalist has the ability to take responsibility, all that matters is whether they do take responsibility.
Often, you write something short that ends up being valuable. That doesn't mean you should despair about your longer and harder work being less valuable. Like if you could spend 40 hours a week writing quick 5-hour posts that are as well-received as the one you wrote, that would be amazing, but I don't think anyone can do that because the circumstances have to line up just right, and you can't count on that happening. So you have to spend most of your time doing harder and predictably-less-impactful work.
(I just left some feedback for the mapping discussion post on the post itself.)
Some feedback:
- IMO this project was a good use of your time ex ante.[1] Unclear if it will end up being actually useful but I think it's good that you made it.
- "A new process for mapping discussions" is kind of a boring title and IMO does not accurately reflect the content. It's mapping beliefs more so than discussions. Titles are hard but my first idea for a title would be "I made a website that shows a graph of what public figures believe about SB 1047"
- I didn't much care about the current content because it's basically saying things I already knew (like, the people pessimistic about SB 1047 are all the usual suspects—Andrew Ng, Yann LeCun, a16z).
- If I cared about AI safety but didn't know anything about SB 1047, this site would have led me to believe that SB 1047 was good because all the AI safety people support it. But I already knew that AI safety people supported SB 1047.
- In general, I don't care that much about what various people believe. It's unlikely that I would change my mind based on seeing a chart like the ones on this site.[2] Perhaps most LW readers are in the same boat. I think this is the sort of thing journalists and maybe public policy people care more about.
- I have changed my mind based on opinion polls before. Specifically, I've changed my mind on scientific issues based on polls of scientists showing that they overwhelmingly support one side (e.g. I used to be anti-nuclear power until I learned that the expert consensus went the other way). The surveys on findingconsensus.ai are much smaller and less representative.
[1] At least that's my gut feeling. I don't know you personally but my impression from seeing you online is that you're very talented and therefore your counterfactual activities would have also been valuable ex ante, so I can't really say that this was the best use of your time. But I don't think it was a bad use.
[2] Especially because almost all the people on the side I disagree with are people I have very little respect for, e.g. a16z.
This is a good and important point. I don't have a strong opinion on whether you're right, but one counterpoint: AI companies are already well-incentivized to figure out how to control AI, because (as Wei Dai said) controllable AI is more economically useful. It makes more sense for nonprofits / independent researchers to do work that AI companies wouldn't do otherwise.
If Open Phil is unwilling to fund some/most of the best orgs, that makes earning to give look more compelling.
(There are some other big funders in AI safety like Jaan Tallinn, but I think all of them combined still have <10% as much money as Open Phil.)
I should add that I don't want to dissuade people from criticizing me if I'm wrong. I don't always handle criticism well, but it's worth the cost to have accurate beliefs about important subjects. I knew I was gonna be anxious about this post but I accepted the cost because I thought there was a ~25% chance that it would be valuable to post.
A few people (i.e. habryka or previously Benquo or Jessicata) make it their thing to bring up concerns frequently.
My impression is that those people are paying a social cost for how willing they are to bring up perceived concerns, and I have a lot of respect for them because of that.
Thanks for the reply. When I wrote "Many people would have more useful things to say about this than I do", you were one of the people I was thinking of.
AI Impacts wants to think about AI sentience and OP cannot fund orgs that do that kind of work
Related to this, I think GW/OP has always been too unwilling to fund weird causes, but it's generally gotten better over time: originally recommending US charities over global poverty b/c global poverty was too weird, taking years to remove their recommendations for US charities that were ~100x less effective than their global poverty recs, then taking years to start funding animal welfare and x-risk, then still not funding weirder stuff like wild animal welfare and AI sentience. I've criticized them for this in the past but I liked that they were moving in the right direction. Now I get the sense that recently they've gotten worse on AI safety (and weird causes in general).
I've been avoiding LW for the last 3 days because I was anxious that people were gonna be mad at me for this post. I thought there was a pretty good chance I was wrong, and I don't like accusing people/orgs of bad behavior. But I thought I should post it anyway because I believed there was some chance lots of people agreed with me but were too afraid of social repercussions to bring it up (like I almost was).
What are the norms here? Can I just copy/paste this exact text and put it into a top-level post? I got the sense that a top-level post should be more well thought out than this but I don't actually have anything else useful to say. I would be happy to co-author a post if someone else thinks they can flesh it out.
Edit: Didn't realize you were replying to Habryka, not me. That makes more sense.
I get the sense that we can't trust Open Philanthropy to do a good job on AI safety, and this is a big problem. Many people would have more useful things to say about this than I do, but I still feel that I should say something.
My sense comes from:
- Open Phil is reluctant to do anything to stop the companies that are doing very bad things to accelerate the likely extinction of humanity, and is reluctant to fund anyone who's trying to do anything about it.
- People at Open Phil have connections with people at Anthropic, a company that's accelerating AGI and has a track record of (plausibly-deniable) dishonesty. Dustin Moskovitz has money invested in Anthropic, and Open Phil employees might also stand to make money from accelerating AGI. And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.
A lot of people (including me as of ~one year ago) consider Open Phil the gold standard for EA-style analysis. I think Open Phil is actually quite untrustworthy on AI safety (but probably still good on other causes).
I don't know what to do with this information.
As a frequent oatmeal-eater, I have a few miscellaneous comments:
- You mentioned adding fruit paste, fruit syrup, and fruit pulp to oatmeal, but I'm surprised you didn't mention what I consider the best option: whole fruit. I usually use blueberries but sometimes I mix it up with blackberries or sliced bananas.
- I buy one-minute oats. You don't actually need to cook them for a minute; you can just pour boiling water onto them and they'll soften up by the time they're cool enough to eat.
- I wouldn't eat oats for the protein, they have more than rice but still not very much. I mix 80g (1 cup) of oatmeal with 25g of soy protein powder, which brings the protein up from 10g to 30g.
- I don't get the appeal of overnight oats. I have to microwave it anyway to get it to a reasonable temperature, and it tends to stick to the jar which greatly increases cleanup time. (I think the stickiness comes more from the protein powder than the oats.)
Relatedly, I see a lot of people use mediocre AI art when they could just as easily use good stock photos. You can get free, watermarkless stock photos at https://pixabay.com/.
The mnemonic I've heard is "red and yellow, poisonous fellow; red and black, friend of Jack"
I was reading some scientific papers and I encountered what looks like fallacious reasoning, but I'm not quite sure what's wrong with it (if anything). It goes like this:
Alice formulates hypothesis H and publishes an experiment that moderately supports H (p < 0.05 but > 0.01).
Bob does a similar experiment that contradicts H.
People look at the differences in Alice's and Bob's studies and formulate a new hypothesis H': "H is true under certain conditions (as in Alice's experiment), and false under other conditions (as in Bob's experiment)". They look at the two studies and conclude that H' is probably true because it's supported by both studies.
This sounds fishy to me (something like post hoc reasoning) but I'm not quite sure how to explain why and I'm not even sure I'm correct.
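To make my worry concrete, here's a toy simulation (the assumptions are all made up: one shared true effect, two-sample t-tests, arbitrary sample size and effect size). Even when Alice's and Bob's conditions make no difference at all, it's common for one study to clear p < 0.05 and the other not to, so the discrepancy by itself seems like weak evidence for H':

```python
import numpy as np
from scipy import stats

# Simulate many pairs of studies that measure the SAME true effect (so the differing
# "conditions" genuinely don't matter) and count how often exactly one of the two
# comes out significant at p < 0.05. Sample size and effect size are arbitrary.
rng = np.random.default_rng(0)
n_sims, n_per_group, true_effect = 10_000, 30, 0.4
discordant = 0

for _ in range(n_sims):
    significant = []
    for _study in ("Alice", "Bob"):
        treatment = rng.normal(true_effect, 1, n_per_group)
        control = rng.normal(0, 1, n_per_group)
        significant.append(stats.ttest_ind(treatment, control).pvalue < 0.05)
    if significant[0] != significant[1]:
        discordant += 1

print(f"exactly one study significant: {discordant / n_sims:.0%} of simulated pairs")
```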
Suppose an ideology says you're not allowed to question idea X.
I think there are two different kinds of "not questioning": there's unquestioningly accepting an idea as true, and there's refusing to question and remaining agnostic. The latter position is reasonable in the sense that if you refuse to investigate an issue, you shouldn't have any strong beliefs about it. And I think the load-bearingness is only a major issue if you refuse to question X while also accepting that X is true.
There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.
(This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)
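A toy version of the prisoner's dilemma argument, with standard payoffs (the numbers are arbitrary) and the assumption that the other agent's reasoning mirrors mine exactly, so we necessarily end up making the same choice:

```python
# Payoffs to me, keyed by (my move, their move); the numbers are arbitrary but have
# the usual prisoner's dilemma ordering (D strictly dominates C against a fixed opponent).
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

for my_move in ("C", "D"):
    their_move = my_move  # mirrored reasoning: they choose whatever I choose
    print(my_move, "->", PAYOFF[(my_move, their_move)])
# C -> 3, D -> 1: once you assume the mirroring, cooperating comes out ahead.
```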
If I want to write to my representative to oppose this amendment, who do I write to? As I understand it, the bill passed the Senate but must still pass the Assembly. Is the Senate responsible for re-approving amendments, or does that happen in the Assembly?
Also, should I write to a representative who's most likely to be on the fence, or am I only allowed to write to the representative of my district?
5 minute super intense cardio, as a replacement for long, low intensity cardio. It is easier to motivate oneself to do 5 minutes of Your-Heart-Might-Explode cardio than two hours of jogging or something. In fact it takes very little motivation, if you trick yourself into doing it right after waking up, when your brain is on autopilot anyway, and unable to resist routine.
Interesting, I had the complete opposite experience. I previously had the idea that exercise should be short and really hard, and I couldn't stick with it. Then I learned that it's better if the majority of your exercise is very easy. Now I go for hour-long walks and I get exercise every day. (Jogging is too hard to qualify as easy exercise.)
What's the deal with mold? Is it ok to eat moldy food if you cut off the moldy bit?
I read some articles that quoted mold researchers who said things like (paraphrasing) "if one of your strawberries gets mold on it, you have to throw away all your strawberries because they might be contaminated."
I don't get the logic of that. If you leave fruit out for long enough, it almost always starts growing visible mold. So any fruit at any given time is pretty likely to already have mold on it, even if it's not visible yet. So by that logic, you should never eat fruit ever.
They also said things like "mold usually isn't bad, but if mold is growing on food, there could also be harmful bacteria like listeria." Ok, but there could be listeria even if there's not visible mold, right? So again, by this logic, you should never eat any fresh food ever.
This question seems hard to resolve without spending a bunch of time researching mold so I'm hoping there's a mold expert on LessWrong. I just want to know if I can eat my strawberries.
I don't understand how not citing a source is considered acceptable practice. It seems antithetical to standard journalistic ethics.
we have found Mr Altman highly forthcoming
He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don't lie to the board?
Taylor's and Summers' comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam's now-publicly-verified bad behavior.
I was just thinking, not 10 minutes ago, about how that one LW user who casually brought up Daniel K's equity (I didn't remember your username) had a massive impact, and how grateful I am to them.
There's a plausible chain of events where simeon_c brings up the equity > it comes to more people's attention > OpenAI goes under scrutiny > OpenAI becomes more transparent > OpenAI can no longer maintain its de facto anti-safety policies > either OpenAI changes policy to become much more safety-conscious, or loses power relative to more safety-conscious companies > we don't all die from OpenAI's unsafe AI.
So you may have saved the world.
The target audience for Soylent is much weirder. Although TBF I originally thought the Soylent branding was a bad idea and I was probably wrong.
This also stood out to me as a truly insane quote. He's almost but not quite saying "we have raised awareness that this bad thing can happen by doing the bad thing"
Some ideas:
- Make Sam Altman look stupid on Twitter, which will marginally persuade more employees to quit and more potential investors not to invest (this is my worst idea but also the easiest, and people seem to pretty much have this one covered already)
- Pay into a fund to hire a good lawyer to figure out a strategy to nullify the non-disparagement agreements. Maybe a class-action lawsuit, maybe a lawsuit on behalf of one individual, maybe try to charge Altman with some sort of crime; I'm not sure of the best way to do this, but that's the lawyer's job to figure out.
- Have everyone call their representative in support of SB 1047, or maybe even say you want SB 1047 to have stronger whistleblower protections or something similar.
"we would also expect general support for OpenAI to be likely beneficial on its own" seems to imply that they did think it was good to make OAI go faster/better, unless that statement was a lie to avoid badmouthing a grantee.
What do you think is the strongest evidence on sunscreen? I've read mixed things on its effectiveness.
Update: I finished my self-experiment, results are here: https://mdickens.me/2024/04/11/caffeine_self_experiment/
Have there been any great discoveries made by someone who wasn't particularly smart?
This seems worth knowing if you're considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?
I find that sort of feedback more palatable when they start with something like "This is not related to your main point but..."
I am more OK with talking about tangents when the commenter understands that it's a tangent.
I wonder if there's a good way to call out this sort of feedback? I might start trying something like
That's a reasonable point, I have some quibbles with it but I think it's not very relevant to my core thesis so I don't plan on responding in detail.
(Perhaps that comes across as rude? I'm not sure.)
I realize I got to this thread a bit late but here are two things you can do:
- Pull-up negatives. Use your legs to jump up to the top of a pull-up position and then lower yourself as slowly as possible.
- Banded pull-ups. This might be tricky to set up in a doorway but if you can, tie a resistance band at a height where you can kneel on it while doing pull-ups and the band will help push you up.
When the NYT article came out, some people discussed the hypothesis that perhaps the article was originally going to be favorable, but the editors at NYT got mad when Scott deleted his blog, so they forced Cade to turn it into a hit piece. This interview pretty much demonstrates that it was always going to be a hit piece (and, as a corollary, that Cade lied when he told people it was going to be positive in order to get them to do interviews).
So yes this changed my view from "probably acted unethically but maybe it wasn't his fault" to "definitely acted unethically".
people have repeatedly told me that a surprisingly high fraction of applicants for programming jobs can't do fizzbuzz
I've heard it argued that this isn't representative of the programming population. Rather, people who suck at programming (and thus can't get jobs) apply to way more positions than people who are good at programming.
I have no idea if it's true, but it sounds plausible.
On the note of wearing helmets, wearing a helmet while walking is plausibly as beneficial as wearing one while cycling[1]. So if you weren't so concerned about not looking silly[2], you'd wear a helmet while walking.
[1] I've heard people claim that this is true. I haven't looked into it myself but I find the claim plausible because there's a clear mechanism—wearing a helmet should reduce head injuries if you get hit by a car, and deaths while walking are approximately as frequent as deaths while cycling.
[2] I'm using the proverbial "you" in the same way as Mark Xu.
Just last week I wrote a post reviewing the evidence on caffeine cycling and caffeine habituation. My conclusion was that the evidence was thin and it's hard to say anything with confidence.[1]
My weakly held beliefs are:
- Taking caffeine daily is better than not taking it at all, but worse than cycling.
- Taking caffeine once every 3 days is a reasonable default. A large % of people can take it more often than that, and a large % will need to take it less.
I take caffeine 3 days a week, and I am currently running a self-experiment (described in my linked post). I'm in the experimental phase now; I already did a 9-day withdrawal period, and my test results over that period (weakly) suggest that I wasn't habituated previously, because my performance didn't improve during the withdrawal period (it actually got worse; p=0.4 on a regression test).
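To illustrate the kind of regression test I mean (the scores below are made-up placeholders, not my actual data):

```python
from scipy import stats

# Regress daily test scores on day number over the withdrawal period and look at the
# slope's sign and p-value. The scores below are hypothetical placeholders, not my data.
days = list(range(1, 10))                      # 9-day withdrawal period
scores = [52, 50, 53, 49, 51, 48, 50, 47, 49]  # hypothetical daily test scores

fit = stats.linregress(days, scores)
print(f"slope = {fit.slope:.2f}, p = {fit.pvalue:.3f}")
# A clearly positive slope would suggest performance recovering as habituation wears off;
# a flat or negative slope (as in my actual results) suggests I wasn't very habituated.
```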
[1] Gavin Leech's post that you linked cited a paper on brain receptors in mice which I was unaware of; I will edit my post to include it. Based on reading the abstract, it looks like that study suggests a weaker habituation effect than the studies I looked at (receptor density in mice increased by 20–25%, which naively suggests a 20–25% reduction in the benefit of caffeine, whereas other studies suggest a 30–100% reduction; but I'm guessing you can't just directly extrapolate from receptor counts to efficacy like that). Gavin also cited Rogers et al. (2013), which I previously skipped over because I thought it wasn't relevant, but on second thought it does look relevant and I will give it a closer look.
The contextualizer/decoupler punch is an outstanding joke.
Based on your explanation in this comment, it seems to me that St. Petersburg-like prospects don't actually invalidate utilitarian ethics as it would have been understood by e.g. Bentham, but it does contradict the existence of a real-valued utility function. It can still be true that welfare is the only thing that matters, and that the value of welfare aggregates linearly. It's not clear how to choose when a decision has multiple options with infinite expected utility (or an option that has infinite positive EV plus infinite negative EV), but I don't think these theorems imply that there cannot be any decision criterion that's consistent with the principles of utilitarianism. (At the same time, I don't know what the decision criterion would actually be.) Perhaps you could have a version of Bentham-esque utilitarianism that uses a real-valued utility function for finite values, and uses some other decision procedure for infinite values.
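For concreteness, the standard St. Petersburg prospect I have in mind pays utility $2^n$ with probability $2^{-n}$ for each $n \geq 1$, so its expected utility diverges:

$$\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty$$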