Posts

A "slow takeoff" might still look fast 2023-02-17T16:51:48.885Z
How much should I update on the fact that my dentist is named Dennis? 2022-12-26T19:11:07.918Z
Why does gradient descent always work on neural networks? 2022-05-20T21:13:28.230Z
MichaelDickens's Shortform 2021-10-18T18:26:53.537Z
How can we increase the frequency of rare insights? 2021-04-19T22:54:03.154Z
Should I prefer to get a tax refund, or not to? 2020-10-22T20:21:05.073Z

Comments

Comment by MichaelDickens on Remap your caps lock key · 2024-12-17T05:38:41.278Z · LW · GW

I primarily use a weird ergonomic keyboard (the Kinesis Advantage 2) with custom key bindings. But my laptop keyboard has normal key bindings, so my "normal keyboard" muscle memory still works.

Comment by MichaelDickens on Remap your caps lock key · 2024-12-17T05:36:28.018Z · LW · GW

On Linux Mint with Cinnamon, you can do this in system settings by going to Keyboard -> Layouts -> Options -> Caps Lock behavior. (You can also put the corresponding command in a shell script and set the script to run at startup.)
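
For reference, here's a minimal sketch of doing the same thing from a startup script, assuming an X11 session with the standard xkeyboard-config options (Python is used here only as a thin wrapper around the command):

```python
import subprocess

# Apply an XKB option at login instead of clicking through the Cinnamon settings.
# "caps:backspace" turns Caps Lock into an extra Backspace; other caps:* options
# are typically listed in /usr/share/X11/xkb/rules/base.lst.
subprocess.run(["setxkbmap", "-option", "caps:backspace"], check=True)
```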

Comment by MichaelDickens on Remap your caps lock key · 2024-12-17T05:32:46.521Z · LW · GW

I use a Kinesis Advantage keyboard with the keys rebound to look like this (apologies for my poor graphic design skills):

https://i.imgur.com/Mv9FI7a.png

  • Caps Lock is rebound to Backspace and Backspace is rebound to Shift.
  • Right Shift is rebound to Ctrl + Alt + Super, which I use as a command prefix for window manager commands.
  • "custom macro" uses the keyboard's built-in macro feature to send a sequence of four keypresses (Alt-G Ctrl-`), which I use as a prefix for some Emacs commands.
  • By default, the keyboard has two backslash (\) keys. I use the OS keyboard software to rebind the second one to "–" (unshifted) and "—" (shifted), which for me are the most useful characters that aren't on a standard US keyboard.

Comment by MichaelDickens on Effective Altruism FAQ · 2024-12-17T05:10:00.860Z · LW · GW

  1. There were two different clauses, one about malaria and the other about chickens. "Helping people is really important" clearly applies to the malaria clause, and there's a modified version of the statement ("helping animals is really important") that applies to the chickens clause. I think writing it that way was an acceptable compromise to simplify the language and it's pretty obvious to me what it was supposed to mean.
  2. "We should help more rather than less, with no bounds/limitations" is not a necessary claim. It's only necessary to claim "we should help more rather than less if we are currently helping at an extremely low level".
Comment by MichaelDickens on What is MIRI currently doing? · 2024-12-14T04:40:49.459Z · LW · GW

MIRI's communications strategy update, published in May, explained what they were planning to work on. I emailed them a month or so ago and they said they are continuing to work on the things in that blog post. Those are the sorts of things that can take longer than a year, so I'm not surprised that they haven't released anything substantial in the way of comms this year.

Comment by MichaelDickens on Analysis of Global AI Governance Strategies · 2024-12-07T01:55:48.873Z · LW · GW

That's only true if a single GPU (or a small number of GPUs) is sufficient to build a superintelligence, right? I expect it to take many years to go from "it's possible to build superintelligence with a huge multi-billion-dollar project" to "it's possible to build superintelligence on a few consumer GPUs". (Unless of course someone builds a superintelligence which then figures out how to make GPUs many orders of magnitude cheaper, but at that point the question is moot.)

Comment by MichaelDickens on Analysis of Global AI Governance Strategies · 2024-12-06T19:27:46.263Z · LW · GW

I don't think controlling compute would be qualitatively harder than controlling, say, pseudoephedrine.

(I think it would be harder, but not qualitatively harder—the same sorts of strategies would work.)

Comment by MichaelDickens on Analysis of Global AI Governance Strategies · 2024-12-05T01:31:11.013Z · LW · GW

Also, I don't feel that this article adequately addressed a major downside of SA: that it accelerates an arms race. SA is only favored when alignment is easy with high probability, you're confident that you will win the arms race, you're confident that it's better for you to win than for the other guy[1], and you're talking about a specific kind of alignment in which an "aligned" AI doesn't necessarily behave ethically; it just does what its creator intends.

[1] How likely is a US-controlled (or, more accurately, Sam Altman/Dario Amodei/Mark Zuckerberg-controlled) AGI to usher in a global utopia? How likely is a China-controlled AGI to do the same? I think people are too quick to take it for granted that the former probability is larger than the latter.

Comment by MichaelDickens on Analysis of Global AI Governance Strategies · 2024-12-05T01:24:53.709Z · LW · GW

Cooperative Development (CD) is favored when alignment is easy and timelines are longer. [...]

Strategic Advantage (SA) is more favored when alignment is easy but timelines are short (under 5 years)

I somewhat disagree with this. CD is favored when alignment is easy with extremely high probability. A moratorium is better given even a modest probability that alignment is hard, because the downside to misalignment is so much larger than the downside to a moratorium.[1] The same goes for SA—it's only favored when you are extremely confident about alignment + timelines.

[1] Unless you believe a moratorium has a reasonable probability of permanently preventing friendly AI from being developed.
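
To make the asymmetry concrete, here is a toy expected-value comparison; all of the numbers are my own illustrative assumptions, not anything from the post or thread:

```python
# Toy numbers only: a modest chance that alignment is hard, a catastrophic
# payoff if a misaligned AI is built, and a moratorium that merely delays
# (rather than permanently prevents) aligned AI, per footnote [1] above.
P_HARD = 0.10                          # modest probability alignment is hard
U_ALIGNED, U_MISALIGNED = 1.0, -100.0  # relative value of the two outcomes
U_MORATORIUM = 0.5                     # aligned AI still arrives, just later

ev_build_now = (1 - P_HARD) * U_ALIGNED + P_HARD * U_MISALIGNED
ev_moratorium = U_MORATORIUM

print(ev_build_now, ev_moratorium)  # -9.1 vs 0.5: the moratorium wins easily
```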

Comment by MichaelDickens on A few questions about recent developments in EA · 2024-11-23T04:35:33.606Z · LW · GW

  1. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.

I intended my answer to be descriptive. EAs generally avoid making weak arguments (or at least I like to think we do).

Comment by MichaelDickens on A few questions about recent developments in EA · 2024-11-23T03:36:54.331Z · LW · GW

I will attempt to answer a few of these.

  1. Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it?

Power within EA is currently highly centralized. It seems very likely that the correct amount of centralization is less than the current amount.

  1. Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?

This sounds like a rhetorical question. The non-rhetorical answer is that women are much more likely than men to join a Community Health team, for approximately the same reason that most companies' HR teams are mostly women; and nobody has bothered to counteract this.

  1. Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence?

I had never considered that but I don't think it's a strong incentive. It doesn't look like the Community Health team is doing this. If anything, I think they're incentivized to give themselves less work, not more.

  1. Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?

That's not correct. Lots of EA orgs fundraise outside of the EA community.

  1. Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?

Because guilt-by-association is a very weak form of argument. (And it's not even obvious to me that there are relevant parallels there.) And FWIW I don't respond to the sorts of people who use the word "TESCREAL" because I don't think they're worth taking seriously.

  1. Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?

University groups do do those other things. But they do those things internally so you don't notice. Recruiting is the only thing they do externally, so that's what you notice.

  1. Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up?

Some orgs did that and it generally didn't go well (e.g., Leverage Research). I think most people believe that totalizing jobs are bad for mental health, create bad epistemics, and aren't worth it.

  1. When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?

Those are not examples of the unilateralist's curse. I don't want to explain it in this short comment but I would suggest re-reading some materials that explain the unilateralist's curse, e.g. the original paper.

  1. Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?

Because doing so would be a lot of work, which would take time away from doing other important things. I think people agree that having a second hub would be good, but not good enough to justify the effort.

Comment by MichaelDickens on Zvi’s Thoughts on His 2nd Round of SFF · 2024-11-21T00:01:16.762Z · LW · GW

Thank you for writing about your experiences! I really like reading these posts.

How big an issue do you think the time constraints were? For example, how much better a job could you have done if all the recommenders got twice as much time? And what would it take to set things up so the recommenders could have twice as much time?

Comment by MichaelDickens on Announcing turntrout.com, my new digital home · 2024-11-18T06:32:59.906Z · LW · GW

Do you think a 3-state dark mode selector is better than a 1-state (where "auto" is the only state)? My website is 1-state, on the assumption that auto will work for almost everyone and it lets me skip the UI clutter of having a lighting toggle that most people won't use.

Also, I don't know if the site has been updated but it looks to me like turntrout.com's two modes aren't dark and light, they're auto and light. When I set Firefox's appearance to dark or auto, turntrout.com's dark mode appears dark, but when I set Firefox to light, turntrout.com appears light. turntrout.com's light mode appears to be light regardless of my Firefox setting.

Comment by MichaelDickens on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-16T16:31:21.966Z · LW · GW

OP did the work to collect these emails and put them into a post. When people do work for you, you shouldn't punish them by giving them even more work.

Comment by MichaelDickens on Gwern: Why So Few Matt Levines? · 2024-11-14T04:45:05.533Z · LW · GW

I've only read a little bit of Martin Gardner, but he might be the Matt Levine of recreational math.

Comment by MichaelDickens on Not Technically Lying · 2024-10-27T02:23:37.562Z · LW · GW

Many newspapers have a (well-earned) reputation for not technically lying.

Comment by MichaelDickens on Brief analysis of OP Technical AI Safety Funding · 2024-10-26T16:32:09.103Z · LW · GW

Thank you, this information was useful for a project I'm working on.

Comment by MichaelDickens on leogao's Shortform · 2024-10-18T21:54:02.911Z · LW · GW

I don't think I understand what "learn to be visibly weird" means, and how it differs from not following social conventions because you fail to understand them correctly.

Comment by MichaelDickens on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-18T16:43:00.406Z · LW · GW

I was recently looking into donating to CLTR and I'm curious why you are excited about it? My sense was that little of its work was directly relevant to x-risk (for example this report on disinformation is essentially useless for preventing x-risk AFAICT), and the relevant work seemed to be not good or possibly counterproductive. For example their report on "a pro-innovation approach to regulating AI" seemed bad to me on two counts:

  1. There is a genuine tradeoff between accelerating AI-driven innovation and decreasing x-risk. So to the extent that this report's recommendations support innovation, they increase x-risk, which makes this report net harmful.
  2. The report's recommendations are kind of vacuous, e.g. they recommend "reducing inefficiencies", like yes, this is a fully generalizable good thing but it's not actionable.

(So basically I think this report would be net negative if it wasn't vacuous, but because it's vacuous, it's net neutral.)

This is the sense I get as someone who doesn't know anything about policy and is just trying to get the sense of orgs' work by reading their websites.

Comment by MichaelDickens on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-18T16:32:17.715Z · LW · GW

My perspective is that I'm much more optimistic about policy than about technical research, and I don't really feel qualified to evaluate policy work, and LTFF makes almost no grants on policy. I looked around and I couldn't find any grantmakers who focus on AI policy. And even if they existed, I don't know that I could trust them (like I don't think Open Phil is trustworthy on AI policy and I kind of buy Habryka's arguments that their policy grants are net negative).

I'm in the process of looking through a bunch of AI policy orgs myself. I don't think I can do a great job of evaluating them but I can at least tell that most policy orgs aren't focusing on x-risk so I can scratch them off the list.

Comment by MichaelDickens on Pablo's Shortform · 2024-10-18T02:06:47.789Z · LW · GW

if you think the polling error in 2024 remains unpredictable / the underlying distribution is unbiased

Is there a good reason to think that, given that polls have recently under-reported Republican votes?

Comment by MichaelDickens on sarahconstantin's Shortform · 2024-10-15T04:18:13.431Z · LW · GW

I don't know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search and replace. (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs [ is a literal bracket but ( is a literal parenthesis, for some reason.)

  1. replace "^(https://.*? )([[.*?]] )*" with "\1"
  2. replace "[[(.*?)]]" with "\1"

This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
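
For anyone who prefers a more portable version, here is a rough equivalent in Python's re syntax (the example line and tag names are made up for illustration):

```python
import re

line = "https://example.com/post [[ai]] [[links]] my notes on [[another tag]]"

# Pass 1: if the line starts with a hyperlink, drop the [[tag]]s right after it.
line = re.sub(r"^(https://.*? )(\[\[.*?\]\] )*", r"\1", line)

# Pass 2: unwrap any remaining [[tag]]s, keeping their text.
line = re.sub(r"\[\[(.*?)\]\]", r"\1", line)

print(line)  # https://example.com/post my notes on another tag
```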

Comment by MichaelDickens on sarahconstantin's Shortform · 2024-10-15T04:08:01.404Z · LW · GW

RE Shapley values, I was persuaded by this comment that they're less useful than counterfactual value in at least some practical situations.

Comment by MichaelDickens on Matt Goldenberg's Short Form Feed · 2024-10-12T16:55:40.805Z · LW · GW

(2) have "truesight", i. e. a literally superhuman ability to suss out the interlocutor's character

Why do you believe this?

Comment by MichaelDickens on Advice for journalists · 2024-10-09T20:51:30.024Z · LW · GW

If your goal is to influence journalists to write better headlines, then it matters whether the journalist has the ability to take responsibility over headlines.

If your goal is to stop journalists from misrepresenting you, then it doesn't actually matter whether the journalist has the ability to take responsibility, all that matters is whether they do take responsibility.

Comment by MichaelDickens on Nathan Young's Shortform · 2024-10-09T20:36:37.062Z · LW · GW

Often, you write something short that ends up being valuable. That doesn't mean you should despair about your longer and harder work being less valuable. Like if you could spend 40 hours a week writing quick 5-hour posts that are as well-received as the one you wrote, that would be amazing, but I don't think anyone can do that because the circumstances have to line up just right, and you can't count on that happening. So you have to spend most of your time doing harder and predictably-less-impactful work.

(I just left some feedback for the mapping discussion post on the post itself.)

Comment by MichaelDickens on A new process for mapping discussions · 2024-10-09T20:36:03.731Z · LW · GW

Some feedback:

  • IMO this project was a good use of your time ex ante.[1] Unclear if it will end up being actually useful but I think it's good that you made it.
  • "A new process for mapping discussions" is kind of a boring title and IMO does not accurately reflect the content. It's mapping beliefs more so than discussions. Titles are hard but my first idea for a title would be "I made a website that shows a graph of what public figures believe about SB 1047"
  • I didn't much care about the current content because it's basically saying things I already knew (like, the people pessimistic about SB 1047 are all the usual suspects—Andrew Ng, Yann LeCun, a16z).
  • If I cared about AI safety but didn't know anything about SB 1047, this site would have led me to believe that SB 1047 was good because all the AI safety people support it. But I already knew that AI safety people supported SB 1047.
  • In general, I don't care that much about what various people believe. It's unlikely that I would change my mind based on seeing a chart like the ones on this site.[2] Perhaps most LW readers are in the same boat. I think this is the sort of thing journalists and maybe public policy people care more about.
  • I have changed my mind based on opinion polls before. Specifically, I've changed my mind on scientific issues based on polls of scientists showing that they overwhelmingly support one side (e.g. I used to be anti-nuclear power until I learned that the expert consensus went the other way). The surveys on findingconsensus.ai are much smaller and less representative.

[1] At least that's my gut feeling. I don't know you personally but my impression from seeing you online is that you're very talented and therefore your counterfactual activities would have also been valuable ex ante, so I can't really say that this was the best use of your time. But I don't think it was a bad use.

[2] Especially because almost all the people on the side I disagree with are people I have very little respect for, eg a16z.

Comment by MichaelDickens on Mark Xu's Shortform · 2024-10-07T23:24:38.340Z · LW · GW

This is a good and important point. I don't have a strong opinion on whether you're right, but one counterpoint: AI companies are already well-incentivized to figure out how to control AI, because (as Wei Dai said) controllable AI is more economically useful. It makes more sense for nonprofits / independent researchers to do work that AI companies wouldn't do otherwise.

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-05T01:56:03.420Z · LW · GW

If Open Phil is unwilling to fund some/most of the best orgs, that makes earning to give look more compelling.

(There are some other big funders in AI safety like Jaan Tallinn, but I think all of them combined still have <10% as much money as Open Phil.)

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-05T00:48:35.057Z · LW · GW

I should add that I don't want to dissuade people from criticizing me if I'm wrong. I don't always handle criticism well, but it's worth the cost to have accurate beliefs about important subjects. I knew I was gonna be anxious about this post but I accepted the cost because I thought there was a ~25% chance that it would be valuable to post.

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-04T23:09:58.521Z · LW · GW

A few people (i.e. habryka or previously Benquo or Jessicata) make it their thing to bring up concerns frequently.

My impression is that those people are paying a social cost for how willing they are to bring up perceived concerns, and I have a lot of respect for them because of that.

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-04T22:53:56.787Z · LW · GW

Thanks for the reply. When I wrote "Many people would have more useful things to say about this than I do", you were one of the people I was thinking of.

AI Impacts wants to think about AI sentience and OP cannot fund orgs that do that kind of work

Related to this, I think GW/OP has always been too unwilling to fund weird causes, but it's generally gotten better over time: originally recommending US charities over global poverty b/c global poverty was too weird, taking years to remove their recommendations for US charities that were ~100x less effective than their global poverty recs, then taking years to start funding animal welfare and x-risk, then still not funding weirder stuff like wild animal welfare and AI sentience. I've criticized them for this in the past but I liked that they were moving in the right direction. Now I get the sense that recently they've gotten worse on AI safety (and weird causes in general).

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-04T22:48:30.719Z · LW · GW

I've been avoiding LW for the last 3 days because I was anxious that people were gonna be mad at me for this post. I thought there was a pretty good chance I was wrong, and I don't like accusing people/orgs of bad behavior. But I thought I should post it anyway because I believed there was some chance lots of people agreed with me but were too afraid of social repercussions to bring it up (like I almost was).

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-04T22:42:40.752Z · LW · GW

What are the norms here? Can I just copy/paste this exact text and put it into a top-level post? I got the sense that a top-level post should be more well thought out than this but I don't actually have anything else useful to say. I would be happy to co-author a post if someone else thinks they can flesh it out.

Edit: Didn't realize you were replying to Habryka, not me. That makes more sense.

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-10-02T04:11:29.085Z · LW · GW

I get the sense that we can't trust Open Philanthropy to do a good job on AI safety, and this is a big problem. Many people would have more useful things to say about this than I do, but I still feel that I should say something.

My sense comes from:

  • Open Phil is reluctant to do anything to stop the companies that are doing very bad things to accelerate the likely extinction of humanity, and is reluctant to fund anyone who's trying to do anything about it.
  • People at Open Phil have connections with people at Anthropic, a company that's accelerating AGI and has a track record of (plausibly-deniable) dishonesty. Dustin Moskovitz has money invested in Anthropic, and Open Phil employees might also stand to make money from accelerating AGI. And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.

A lot of people (including me as of ~one year ago) consider Open Phil the gold standard for EA-style analysis. I think Open Phil is actually quite untrustworthy on AI safety (but probably still good on other causes).

I don't know what to do with this information.

Comment by MichaelDickens on you should probably eat oatmeal sometimes · 2024-09-09T20:34:10.942Z · LW · GW

As a frequent oatmeal-eater, I have a few miscellaneous comments:

  • You mentioned adding fruit paste, fruit syrup, and fruit pulp to oatmeal, but I'm surprised you didn't mention what I consider the best option: whole fruit. I usually use blueberries but sometimes I mix it up with blackberries or sliced bananas.
  • I buy one-minute oats. You don't actually need to cook them for a minute; you can just pour boiling water onto them and they'll soften up by the time they're cool enough to eat.
  • I wouldn't eat oats for the protein; they have more than rice but still not very much. I mix 80g (1 cup) of oatmeal with 25g of soy protein powder, which brings the protein up from 10g to 30g.
  • I don't get the appeal of overnight oats. I have to microwave it anyway to get it to a reasonable temperature, and it tends to stick to the jar which greatly increases cleanup time. (I think the stickiness comes more from the protein powder than the oats.)

Comment by MichaelDickens on Please stop using mediocre AI art in your posts · 2024-08-26T18:32:36.573Z · LW · GW

Relatedly, I see a lot of people use mediocre AI art when they could just as easily use good stock photos. You can get free, watermarkless stock photos at https://pixabay.com/.

Comment by MichaelDickens on Shortform · 2024-08-26T03:53:56.165Z · LW · GW

The mnemonic I've heard is "red and yellow, poisonous fellow; red and black, friend of Jack"

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-08-22T23:16:19.070Z · LW · GW

I was reading some scientific papers and I encountered what looks like fallacious reasoning, but I'm not quite sure what's wrong with it (if anything). It goes like this:

Alice formulates hypothesis H and publishes an experiment that moderately supports H (p < 0.05 but > 0.01).

Bob does a similar experiment that contradicts H.

People look at the differences in Alice's and Bob's studies and formulate a new hypothesis H': "H is true under certain conditions (as in Alice's experiment), and false under other conditions (as in Bob's experiment)". They look at the two studies and conclude that H' is probably true because it's supported by both studies.

This sounds fishy to me (something like post hoc reasoning) but I'm not quite sure how to explain why and I'm not even sure I'm correct.
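
One way to see why it might be fishy: even if H is simply false everywhere (no moderating conditions at all), ordinary sampling noise will fairly often make one of two identical experiments look like it supports H while the other doesn't, which is exactly the pattern that invites H'. A rough simulation of that, entirely my own illustration with made-up experiments:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 50, 10_000
split_results = 0

for _ in range(trials):
    # Two independent replications of the same experiment, with no true effect.
    p_alice = stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
    p_bob = stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
    # The pattern that tempts people toward H': one study "significant", one not.
    if (p_alice < 0.05) != (p_bob < 0.05):
        split_results += 1

print(split_results / trials)  # around 2 * 0.05 * 0.95 ≈ 0.095
```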

Comment by MichaelDickens on Dragon Agnosticism · 2024-08-02T17:16:11.846Z · LW · GW

Suppose an ideology says you're not allowed to question idea X.

I think there are two different kinds of "not questioning": there's unquestioningly accepting an idea as true, and there's refusing to question and remaining agnostic. The latter position is reasonable in the sense that if you refuse to investigate an issue, you shouldn't have any strong beliefs about it. And I think the load-bearingness is only a major issue if you refuse to question X while also accepting that X is true.

Comment by MichaelDickens on ektimo's Shortform · 2024-07-29T22:34:04.391Z · LW · GW

There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.

(This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)
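
As a tiny illustration of the mirrored-reasoning argument (the payoffs are just standard prisoner's-dilemma numbers I picked for the example):

```python
# Payoffs to you, given (your move, their move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Ordinary causal reasoning holds the other move fixed and concludes D dominates.
# But if you predict the other agent will reason exactly as you do, the only
# reachable outcomes are the mirrored ones, and cooperating comes out ahead.
mirrored = {move: PAYOFF[(move, move)] for move in ("C", "D")}
print(mirrored, "->", max(mirrored, key=mirrored.get))  # {'C': 3, 'D': 1} -> C
```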

Comment by MichaelDickens on Linch's Shortform · 2024-07-29T22:29:12.413Z · LW · GW

If I want to write to my representative to oppose this amendment, who do I write to? As I understand, the bill passed the Senate but must still pass Assembly. Is the Senate responsible for re-approving amendments, or does that happen in Assembly?

Also, should I write to a representative who's most likely to be on the fence, or am I only allowed to write to the representative of my district?

Comment by MichaelDickens on If you weren't such an idiot... · 2024-07-17T04:12:11.131Z · LW · GW

5 minute super intense cardio, as a replacement for long, low intensity cardio. It is easier to motivate oneself to do 5 minutes of Your-Heart-Might-Explode cardio than two hours of jogging or something. In fact it takes very little motivation, if you trick yourself into doing it right after waking up, when your brain is on autopilot anyway, and unable to resist routine.

Interesting, I had the complete opposite experience. I previously had the idea that exercise should be short and really hard, and I couldn't stick with it. Then I learned that it's better if the majority of your exercise is very easy. Now I go for hour-long walks and I get exercise every day. (Jogging is too hard to qualify as easy exercise.)

Comment by MichaelDickens on MichaelDickens's Shortform · 2024-06-07T18:26:14.523Z · LW · GW

What's the deal with mold? Is it ok to eat moldy food if you cut off the moldy bit?

I read some articles that quoted mold researchers who said things like (paraphrasing) "if one of your strawberries gets mold on it, you have to throw away all your strawberries because they might be contaminated."

I don't get the logic of that. If you leave fruit out for long enough, it almost always starts growing visible mold. So any fruit at any given time is pretty likely to already have mold on it, even if it's not visible yet. So by that logic, you should never eat fruit ever.

They also said things like "mold usually isn't bad, but if mold is growing on food, there could also be harmful bacteria like listeria." Ok, but there could be listeria even if there's not visible mold, right? So again, by this logic, you should never eat any fresh food ever.

This question seems hard to resolve without spending a bunch of time researching mold so I'm hoping there's a mold expert on LessWrong. I just want to know if I can eat my strawberries.

Comment by MichaelDickens on D0TheMath's Shortform · 2024-06-06T01:02:37.491Z · LW · GW

I don't understand how not citing a source is considered acceptable practice. It seems antithetical to standard journalistic ethics.

Comment by MichaelDickens on OpenAI: Helen Toner Speaks · 2024-05-30T22:58:20.077Z · LW · GW

we have found Mr Altman highly forthcoming

He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don't lie to the board?

Taylor's and Summers' comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam's now-publicly-verified bad behavior.

Comment by MichaelDickens on simeon_c's Shortform · 2024-05-23T21:00:45.094Z · LW · GW

I was just thinking, not 10 minutes ago, about how that one LW user who casually brought up Daniel K's equity (I didn't remember your username) had a massive impact, and I'm really grateful to them.

There's a plausible chain of events where simeon_c brings up the equity > it comes to more people's attention > OpenAI comes under scrutiny > OpenAI becomes more transparent > OpenAI can no longer maintain its de facto anti-safety policies > either OpenAI changes policy to become much more safety-conscious, or loses power relative to more safety-conscious companies > we don't all die from OpenAI's unsafe AI.

So you may have saved the world.

Comment by MichaelDickens on [Linkpost] Statement from Scarlett Johansson on OpenAI's use of the "Sky" voice, that was shockingly similar to her own voice. · 2024-05-21T21:27:03.233Z · LW · GW

The target audience for Soylent is much weirder. Although TBF I originally thought the Soylent branding was a bad idea and I was probably wrong.

Comment by MichaelDickens on OpenAI: Exodus · 2024-05-20T22:19:29.000Z · LW · GW

This also stood out to me as a truly insane quote. He's almost but not quite saying "we have raised awareness that this bad thing can happen by doing the bad thing"

Comment by MichaelDickens on OpenAI: Exodus · 2024-05-20T22:17:49.876Z · LW · GW

Some ideas:

  1. Make Sam Altman look stupid on Twitter, which will marginally persuade more employees to quit and more potential investors not to invest (this is my worst idea but also the easiest, and people seem to pretty much have this one covered already)
  2. Pay a fund to hire a good lawyer to figure out a strategy to nullify the non-disparagement agreements. Maybe a class-action lawsuit, maybe a lawsuit on the behalf of one individual, maybe try to charge Altman with some sort of crime, I'm not sure the best way to do this but that's the lawyer's job to figure out.
  3. Have everyone call their representative in support of SB 1047, or maybe even say you want SB 1047 to have stronger whistleblower protections or something similar.