Posts

EY in the New York Times 2023-06-10T12:21:34.496Z

Comments

Comment by Blueberry on EY in the New York Times · 2023-06-11T11:41:20.705Z · LW · GW

I don't see it as sneering at all.

I'm not sure what you mean by "senpai noticed me" but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.

The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart - a lot of stuff on AI risk does not mention him.

I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.

Would you prefer this?

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

Comment by Blueberry on EY in the New York Times · 2023-06-11T11:36:37.982Z · LW · GW

I like Metz. I'd rather have EY, but that won't happen.

Comment by Blueberry on EY in the New York Times · 2023-06-11T11:35:26.987Z · LW · GW

This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.

Comment by Blueberry on EY in the New York Times · 2023-06-11T11:34:21.011Z · LW · GW

This is completely false, as well as irrelevant.

  • he did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not by any means doxxing; it's news reporting. News is important, and news organizations have the right to reveal the full names of public figures.

  • this didn't happen, because Scott got the NYT to wait until he was ready before doing so.

  • the article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced.

  • none of this is relevant, and it's silly to hold a grudge against a reporter over a years-old article you didn't like when what matters now is this current article about AI risk.

Comment by Blueberry on GPTs are Predictors, not Imitators · 2023-06-10T06:23:04.217Z · LW · GW

Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don't believe this is possible.

Comment by Blueberry on That Alien Message · 2023-06-10T05:25:19.767Z · LW · GW

oh wow, thanks!

She didn't get the 5D thing - it's not that the messengers live in five dimensions; they were just sending two-dimensional pictures of a three-dimensional world.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T03:15:57.932Z · LW · GW

LLMs' answers to factual questions are not trustworthy; they often hallucinate.

Also, I was obviously asking you for your views, since you wrote the comment.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T03:05:15.756Z · LW · GW

Sorry, 2007 was a typo. I'm not sure how to interpret the ironic comment about asking an LLM, though.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:42:18.819Z · LW · GW

OTOH, if you sent back Attention is all you need

What is so great about that 2007 paper?

People didn't necessarily have a use for all the extra compute

Can you please explain the bizarre use of the word "compute" here? Is this a typo? "compute" is a verb. The noun form would be "computing" or "computing power."

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:38:57.745Z · LW · GW

Yudkowsky makes a few major mistakes that are clearly visible now, like being dismissive of dumb, scaled, connectionist architectures

I don't think that's a mistake at all. Sure, they've given us impressive commercial products, but no progress towards AGI, so the dismissiveness is completely justified.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:36:10.470Z · LW · GW

Maybe you're an LLM.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:33:44.880Z · LW · GW

there would be no way to glue these two LLMs together to build an English-to-Japanese translator such that training the "glue" takes <1% of the comput[ing] used to train the independent models?

Correct. They're two entirely different models. There's no way they could interoperate without massive computing and building a new model.

(Aside: was that a typo, or did you intend to say "compute" instead of "computing power"?)

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:29:51.293Z · LW · GW

I don't see why fitting a static and subhuman mind into consumer hardware from 2023 means that Yudkowsky doesn't lose points for saying you can fit a learning (implied) and human-level mind into consumer hardware from 2008.

Because one has nothing to do with the other. LLMs are getting bigger and bigger, but that says nothing about whether a mind designed algorithmically could fit on consumer hardware.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:27:06.810Z · LW · GW

Yeah, one example is the view that AGI won't happen, either because it's just too hard and humanity won't devote sufficient resources to it, or because we recognize it will kill us all.

Comment by Blueberry on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-10T02:15:05.862Z · LW · GW

I really disagree with this article. It's basically just saying that you drank the LLM Kool-Aid. LLMs are massively overhyped. GPT-x is not the way to AGI.

This article could have been written a dozen years ago. A dozen years ago, people were saying the same thing: "we've given up on the Good Old-Fashioned AI / Douglas Hofstadter approach of writing algorithms and trying to find insights! It doesn't give us commercial products, whereas the statistical / neural network stuff does!"

And our response was the same as it is today. GOFAI is hard. No one expected to make much progress on algorithms for intelligence in just a decade or two. We knew in 2005 that if you looked ahead a decade or two, we'd keep seeing impressive-looking commercial products from the statistical approach, and the GOFAI approach would be slow. And we have, but we're no closer to AGI. GPT-x only predicts the next words based on a huge corpus, so it gives you what's already there. An average, basically. An impressive-looking toy, but it can't reason or set goals, which is the whole idea here. GOFAI is the only way to do that. And it's hard, and it's slow, but it's the only path going in the right direction.

Once you understand that, you can see where your review errs.

  • Cyc - it's funny that Hanson takes what you'd expect to be Yudkowsky's view, and vice versa. Cyc is the correct approach. The only reason to doubt this is if you're expecting commercially viable results in a few years, which no one was. Win Hanson.

  • AI before ems - AI does not seem well on its way, so I disagree that there's been any evidence one way or the other. Draw.

  • sharing cognitive content and improvements - clear win Yudkowsky. The neural network architecture is so common for commercial reasons only, not because it "won" or is more effective. And even if you only look at neural networks, you can't share content or improvements between one and another. How do you share content or improvements between GPT and Stable Diffusion, for instance?

  • algorithms:

Yudkowsky seems quite wrong here, and Hanson right, about one of the central trends -- and maybe the central trend -- of the last dozen years of AI.

Well, that wasn't the question, was it? The question was about AI progress, not what the commercial trend would be. The issue is that AI progress and the commercial trend are going in opposite directions. LLMs and throwing more money, data, and training at neural networks aren't getting us closer to actual AGI. Win Yudkowsky.

But -- regardless of Yudkowsky's current position -- it still remains that you'd have been extremely surprised by the last decade's use of comput[ing] if you had believed him

No, no you would not. Once again, the claim is that GOFAI is the slow and less commercializable path but the only true path to AGI, while the statistical approach has given us, and will continue to give us, impressive-looking and commercializable toys and will monopolize research, but will not take us anywhere towards real AGI. The last decade is exactly what you'd expect on this trend. Not a surprise at all.

Comment by Blueberry on AI Fables · 2023-06-10T01:27:11.549Z · LW · GW

I don't agree with that. Neutral-genie stories are important because they demonstrate the importance of getting your wish right. As yet, deep learning hasn't taken us to AGI, and it may never; even if it does, we may still be able to make AGIs want particular things or give them particular orders or preferences.

Here's a great AI fable from the Air Force:

[This is] a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation ... “We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col. Tucker “Cinco” Hamilton, the USAF's Chief of AI Test and Operations, said ... “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”

“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.

He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target”

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

Comment by Blueberry on I attempted the AI Box Experiment again! (And won - Twice!) · 2014-07-03T02:44:02.726Z · LW · GW

Can you give me an example of an NLP "program" that influences someone, or link me to a source that discusses this more specifically? I'm interested but, as I said, skeptical, and looking for more specifics.

Comment by Blueberry on Post ridiculous munchkin ideas! · 2014-06-29T12:29:09.421Z · LW · GW

I'd guess it was more likely to be emotional stuff relating to living with people who once had such control over you. I can't stand living at my parents' for very long either... it's just stressful and emotionally draining.

Comment by Blueberry on I attempted the AI Box Experiment again! (And won - Twice!) · 2014-06-29T12:26:48.397Z · LW · GW

What pragmatist said. Even if you can't break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden, how did your friend understand it?

Comment by Blueberry on I attempted the AI Box Experiment again! (And won - Twice!) · 2014-06-29T03:31:37.185Z · LW · GW

How did the attack happen? I'm skeptical.

Comment by Blueberry on LW Women- Minimizing the Inferential Distance · 2013-04-27T00:56:44.813Z · LW · GW

See this article on Sarah Hrdy.

Comment by Blueberry on A Marriage Ceremony for Aspiring Rationalists · 2012-10-24T01:42:38.127Z · LW · GW

Sorry, I couldn't tell what was a quote and what wasn't.

Polyamory is usually defined as honest nonmonogamy. In other words, any time someone is dating two people openly, that's poly. It's how many humans naturally behave. It doesn't require exposure to US poly communities, or any community in general for that matter.

Comment by Blueberry on A Marriage Ceremony for Aspiring Rationalists · 2012-10-23T23:44:14.189Z · LW · GW

As you discuss in the Dropbox link, this is a pretty massive selection bias. I'd suggest that this invalidates any statement made on the basis of these studies about "poly people," since most poly people seem not to be included. People all over the world are poly, in every country, of every race and class.

It's as if we did a study of "rationalists" and only included people on LW, ignoring any scientists, policy makers, or evidence-based medical researchers, simply because they didn't use the term "rationalist."

You state:

While polyamory communities have blossomed for decades in the USA (cf. Munson and Stelboum 1999a; Anderlini-D’Onofrio 2004c), polyamory is still quite unknown in Europe. The social organisation of polyamorous communities is not very advanced in most European countries.

Clearly polyamory is not unknown in Europe, though the word "polyamory" might be. Let's not confuse polyamory, which exists anytime someone openly dates two people, with socially organized communities using the term "polyamory."

Comment by Blueberry on Causal Diagrams and Causal Models · 2012-10-23T21:01:08.708Z · LW · GW

I know that the antidepressant Wellbutrin, which is a stimulant, has been associated with a small amount of weight loss over a few months, though I'm not sure whether the effect has been shown to last longer. That's an off-label use, though.

I'd guess that any stimulant would show weight loss in the short term. Is there some reason this wouldn't last long-term?

Comment by Blueberry on Glenn Beck discusses the Singularity, cites SI researchers · 2012-10-23T07:56:25.582Z · LW · GW

How is Mormonism attractive? You don't even get multiple wives anymore. And most people think you're crazy.

Comment by Blueberry on Causal Diagrams and Causal Models · 2012-10-23T07:47:07.984Z · LW · GW

What about a small amount of mild stimulant use?

Comment by Blueberry on How To Have Things Correctly · 2012-10-23T07:07:31.637Z · LW · GW

Why would you not want to be someone who wears a cloak often? And whatever those reasons are, why wouldn't they prevent you from wearing a cloak after you buy it?

Comment by Blueberry on The Fabric of Real Things · 2012-10-23T07:00:04.551Z · LW · GW

it's very, very likely there's life outside our solar system, but I don't have any evidence of it

If there's no evidence of it (circumstantial evidence included), what makes you think it's very likely?

Comment by Blueberry on A Marriage Ceremony for Aspiring Rationalists · 2012-10-23T06:36:16.111Z · LW · GW

Poly groups tend to be well-educated well-paid white people

I'm baffled by this. Are you saying most studies tend to be done on this group? Do you mean in the US? Are you referring to groups who call themselves poly, or the general practice of honest nonmonogamy?

Comment by Blueberry on Polyhacking · 2012-10-23T06:25:34.989Z · LW · GW

Are you polysaturated yet? Most people seem to find 2-3 to be the practical limit.

Comment by Blueberry on Firewalling the Optimal from the Rational · 2012-10-23T06:19:09.165Z · LW · GW

I actually never was asked to say the Pledge in any US school I went to, and I've never even seen it said. I'm pretty sure this is limited to some parts of the country and is no longer as universal as it may have been once. If someone did go to one such school, they and their parents would have the option of simply not saying the Pledge, transferring to a different school (I doubt private or religious schools say it), or homeschooling/unschooling.

Comment by Blueberry on Firewalling the Optimal from the Rational · 2012-10-23T06:16:28.812Z · LW · GW

For what it's worth, I've never seen it said in any of the US schools I've attended. It's not universal.

Comment by Blueberry on SotW: Be Specific · 2012-04-04T19:58:45.990Z · LW · GW

I've played that game, using variously shaped blocks that the Customer has to assemble in a specific pattern. It's great.

There's also the variation with an Intermediary, and the Expert and Customer can only communicate with the Intermediary, who moves back and forth between rooms and can't write anything down.

Comment by Blueberry on My Childhood Death Spiral · 2012-04-02T08:46:27.251Z · LW · GW

Precommitting is useful in many situations, one being where you want to make sure you do something in the future when you know something might change your mind. In Cialdini's "Influence," for instance, he discusses how saying in public "I am not going to smoke another cigarette" is helpful in quitting smoking.

If you think you might change your mind, then surely you would want to have the freedom to do so?

The whole point is that I want to remove that freedom. I don't want the option of changing my mind.

Another classic example is the general who burned his ships upon landing so there would be no option to retreat, to make his soldiers fight harder.

Comment by Blueberry on SotW: Check Consequentialism · 2012-04-02T08:42:23.709Z · LW · GW

If you know what you're doing, the Phd example is not more than a 5 minute process - I've walked people through worse things in about that time.

Please elaborate!

Comment by Blueberry on My Childhood Death Spiral · 2012-04-02T08:07:51.965Z · LW · GW

I don't want to eat anything steaklike unless it came from a real, mooing cow. I don't care how it's killed.

I'm worried I'm overestimating my resistance to advertising, so I'm hereby precommitting to this in writing.

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-02T07:48:02.386Z · LW · GW

Would "servant" not otherwise be justified?

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-02T07:46:11.084Z · LW · GW

That's a good reminder but I'm not sure how it applies here.

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-02T07:44:37.720Z · LW · GW

It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.

More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.

Comment by Blueberry on Why Does Power Corrupt? · 2012-04-02T07:37:44.202Z · LW · GW

Yes, "sweet" is a great description. Why, how would you describe it?

Comment by Blueberry on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-04-01T20:08:03.010Z · LW · GW

Awesome! Good to know.

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-01T20:07:04.899Z · LW · GW

Also, judging from his other quotes, I'm pretty sure that's not what he meant...

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-01T20:01:04.288Z · LW · GW

I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.

Comment by Blueberry on What I've learned from Less Wrong · 2012-04-01T19:53:12.163Z · LW · GW

Thanks... I'm still going through the most recent Callahan novels. Jake Stonebender does kinda have a temper.

Comment by Blueberry on Why Does Power Corrupt? · 2012-04-01T19:51:55.998Z · LW · GW

Downvoted and upvoted the counterbalance (which for some reason was at -1 already; someone didn't follow your instructions). You're surprised people like power?

Comment by Blueberry on My Childhood Death Spiral · 2012-04-01T19:46:52.436Z · LW · GW

I suspect you overestimate how much most people like cows...

Comment by Blueberry on My Childhood Death Spiral · 2012-04-01T19:45:52.297Z · LW · GW

There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.

No doubt at all? I'd put money on this being wrong. Why would it be outlawed?

Including the problems like 'how to have a steak without killing a cow'.

I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"

Comment by Blueberry on Rationality Quotes April 2012 · 2012-04-01T19:41:53.259Z · LW · GW

I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?

Comment by Blueberry on What I've learned from Less Wrong · 2012-04-01T19:35:22.758Z · LW · GW

Ok, but... wouldn't the same objection apply to virtually any action/adventure movie or novel? Kick Ass, all the Die Hard movies, anything Tarantino, James Bond, Robert Ludlum's Bourne Identity novels and movies, et cetera. They all have similar violent scenes.

Comment by Blueberry on What I've learned from Less Wrong · 2012-04-01T02:29:35.452Z · LW · GW

Jura gurl'er nobhg gb encr Wraavsre? Ur qrfreirf gung naq vg'f frys-qrsrafr