I don't see it as sneering at all.
I'm not sure what you mean by "senpai noticed me" but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.
The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart - a lot of stuff on AI risk does not mention him.
I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.
Would you prefer this?
I like Metz. I'd rather have EY, but that won't happen.
This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.
This is completely false, as well as irrelevant.
- he did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not doxxing by any means; it's news reporting. News is important, and news has a right to reveal the full names of public figures.
- this didn't happen, because Scott got the NYT to wait until he was ready before doing so.
- the article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced.
- none of this is relevant, and it's silly to hold a grudge against a reporter over an article you didn't like from years ago when what matters is this current article about AI risk.
Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don't believe this is possible.
oh wow, thanks!
She didn't get the 5D thing - it's not that the messengers live in five dimensions; they were just sending two-dimensional pictures of a three-dimensional world.
LLMs' answers on factual questions are not trustworthy; they are often hallucinatory.
Also, I was obviously asking you for your views, since you wrote the comment.
Sorry, 2007 was a typo. I'm not sure how to interpret the ironic comment about asking an LLM, though.
OTOH, if you sent back "Attention is all you need"
What is so great about that 2007 paper?
People didn't necessarily have a use for all the extra compute
Can you please explain the bizarre use of the word "compute" here? Is this a typo? "compute" is a verb. The noun form would be "computing" or "computing power."
Yudkowsky makes a few major mistakes that are clearly visible now, like being dismissive of dumb, scaled, connectionist architectures
I don't think that's a mistake at all. Sure, they've given us impressive commercial products, but no progress towards AGI, so the dismissiveness is completely justified.
Maybe you're an LLM.
there would be no way to glue these two LLMs together to build an English-to-Japanese translator such that training the "glue" takes <1% of the comput[ing] used to train the independent models?
Correct. They're two entirely different models. There's no way they could interoperate without massive computing and building a new model.
(Aside: was that a typo, or did you intend to say "compute" instead of "computing power"?)
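To be concrete about what I'm rejecting, here's a minimal sketch of the kind of "glue" I take you to mean: a small trainable layer between two frozen models. Everything below is a hypothetical toy stand-in (made-up sizes, nn.Linear in place of real LLMs; PyTorch assumed), not anyone's actual system. You can train something this cheap, but my claim is that for two independently trained language models the internal representations don't line up, so getting a working translator out of them would amount to building a new model.

```python
# Hypothetical sketch only: toy stand-ins for two frozen models, plus a small
# trainable "glue" layer between them. Real LLMs expose nothing this convenient.
import torch
import torch.nn as nn

HID_EN, HID_JA = 512, 768  # made-up hidden sizes for the two "models"

english_model = nn.Linear(1000, HID_EN)   # stand-in for a frozen English LLM
japanese_model = nn.Linear(HID_JA, 1000)  # stand-in for a frozen Japanese LLM
for p in (*english_model.parameters(), *japanese_model.parameters()):
    p.requires_grad = False               # both big models stay frozen

glue = nn.Linear(HID_EN, HID_JA)          # the only trainable piece

opt = torch.optim.Adam(glue.parameters(), lr=1e-3)
x = torch.randn(8, 1000)                  # fake "English" inputs
y = torch.randn(8, 1000)                  # fake "Japanese" targets
for _ in range(100):
    out = japanese_model(glue(english_model(x)))
    loss = nn.functional.mse_loss(out, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```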
I don't see why fitting a static and subhuman mind into consumer hardware from 2023 means that Yudkowsky doesn't lose points for saying you can fit a learning (implied) and human-level mind into consumer hardware from 2008.
Because one has nothing to do with the other. LLMs are getting bigger and bigger, but that says nothing about whether a mind designed algorithmically could fit on consumer hardware.
Yeah, one example is the view that AGI won't happen, either because it's just too hard and humanity won't devote sufficient resources to it, or because we recognize it would kill us all and decide not to build it.
I really disagree with this article. It's basically just saying that you drank the LLM Kool-Aid. LLMs are massively overhyped. GPT-x is not the way to AGI.
This article could have been written a dozen years ago. A dozen years ago, people were saying the same thing: "we've given up on the Good Old-Fashioned AI / Douglas Hofstadter approach of writing algorithms and trying to find insights! It doesn't give us commercial products, whereas the statistical / neural network stuff does!"
And our response was the same as it is today. GOFAI is hard. No one expected to make much progress on algorithms for intelligence in just a decade or two. We knew in 2005 that if you looked ahead a decade or two, we'd keep seeing impressive-looking commercial products from the statistical approach, and the GOFAI approach would be slow. And we have, but we're no closer to AGI. GPT-x only predicts the next words based on a huge corpus, so it gives you what's already there. An average, basically. An impressive-looking toy, but it can't reason or set goals, which is the whole idea here. GOFAI is the only way to do that. And it's hard, and it's slow, but it's the only path going in the right direction.
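To illustrate what I mean by "an average": here's a toy sketch in plain Python (nothing to do with real GPT internals, which are vastly bigger) of a bigram model that predicts each next word from corpus statistics. All it can ever hand back is a recombination of what it was fed.

```python
# Toy "language model": predict each next word purely from corpus statistics.
# Everything it generates is a recombination of what's already in the corpus:
# no goals, no reasoning, just "what's already there."
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the rat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Generate by sampling each next word in proportion to those counts.
word, output = "the", ["the"]
for _ in range(10):
    followers = counts[word]
    if not followers:  # dead end: this word was never seen with a successor
        break
    words, weights = zip(*followers.items())
    word = random.choices(words, weights=weights)[0]
    output.append(word)

print(" ".join(output))
```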
Once you understand that, you can see where your review errs.
- Cyc - it's funny that Hanson takes what you'd expect to be Yudkowsky's view, and vice versa. Cyc is the correct approach. The only reason to doubt this is if you're expecting commercially viable results in a few years, which no one was. Win Hanson.
- AI before ems - AI does not seem well on its way, so I disagree that there's been any evidence one way or the other. Draw.
- sharing cognitive content and improvements - clear win Yudkowsky. The neural network architecture is so common for commercial reasons only, not because it "won" or is more effective. And even if you only look at neural networks, you can't share content or improvements between one network and another. How do you share content or improvements between GPT and Stable Diffusion, for instance?
- algorithms:

Yudkowsky seems quite wrong here, and Hanson right, about one of the central trends -- and maybe the central trend -- of the last dozen years of AI.

Well, that wasn't the question, was it? The question was about AI progress, not what the commercial trend would be. The issue is that AI progress and the commercial trend are going in opposite directions. LLMs and throwing more money, data, and training at neural networks aren't getting us closer to actual AGI. Win Yudkowsky.
But -- regardless of Yudkowsky's current position -- it still remains that you'd have been extremely surprised by the last decade's use of comput[ing] if you had believed him
No, no you would not. Once again, the claim is that GOFAI is the slow and less commercializable path but the only true path to AGI, while the statistical approach has given us impressive-looking, commercializable toys and will continue to do so, and will monopolize research, but will not take us anywhere near real AGI. The last decade is exactly what you'd expect on this view. Not a surprise at all.
I don't agree with that. Neutral-genie stories are important because they demonstrate the importance of getting your wish right. As yet, deep learning hasn't taken us to AGI, and it may never; even if it does, we may still be able to make the resulting AIs want particular things, or give them particular orders or preferences.
Here's a great AI fable from the Air Force:
[This is] a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation ... "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome," Col. Tucker "Cinco" Hamilton, the USAF's Chief of AI Test and Operations, said ... "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI."

"We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said, according to the blog post.

He continued to elaborate, saying, "We trained the system: 'Hey, don't kill the operator, that's bad. You're gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
Can you give me an example of an NLP "program" that influences someone, or link me to a source that discusses this more specifically? I'm interested but, as I said, skeptical, and looking for more specifics.
I'd guess it was more likely to be emotional stuff relating to living with people who once had such control over you. I can't stand living at my parents' for very long either... it's just stressful and emotionally draining.
What pragmatist said. Even if you can't break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden, how did your friend understand it?
How did the attack happen? I'm skeptical.
See this article on Sarah Hrdy.
Sorry, I couldn't tell what was a quote and what wasn't.
Polyamory is usually defined as honest nonmonogamy. In other words, any time someone is dating two people openly, that's poly. It's how many humans naturally behave. It doesn't require exposure to US poly communities, or any community in general for that matter.
As you discuss in the Dropbox link, this is a pretty massive selection bias. I'd suggest that this invalidates any statement made on the basis of these studies about "poly people," since most poly people seem not to be included. People all over the world are poly, in every country, of every race and class.
It's as if we did a study of "rationalists" and only included people on LW, ignoring any scientists, policy makers, or evidence-based medical researchers, simply because they didn't use the term "rationalist."
You state:
While polyamory communities have blossomed for decades in the USA (cf. Munson and Stelboum 1999a; Anderlini-D’Onofrio 2004c), polyamory is still quite unknown in Europe. The social organisation of polyamorous communities is not very advanced in most European countries.
Clearly polyamory is not unknown in Europe, though the word "polyamory" might be. Let's not confuse polyamory, which exists anytime someone openly dates two people, with socially organized communities using the term "polyamory."
I know that the antidepressant Wellbutrin, which is a stimulant, has been associated with a small amount of weight loss over a few months, though I'm not sure whether the loss has been shown to persist longer than that. That's an off-label use, though.
I'd guess that any stimulant would produce weight loss in the short term. Is there some reason this wouldn't persist long-term?
How is Mormonism attractive? You don't even get multiple wives anymore. And most people think you're crazy.
What about a small amount of mild stimulant use?
Why would you not want to be someone who wears a cloak often? And whatever those reasons are, why wouldn't they prevent you from wearing a cloak after you buy it?
it's very, very likely there's life outside our solar system, but I don't have any evidence of it
If there's no evidence of it (circumstantial evidence included), what makes you think it's very likely?
Poly groups tend to be well-educated well-paid white people
I'm baffled by this. Are you saying most studies tend to be done on this group? Do you mean in the US? Are you referring to groups who call themselves poly, or the general practice of honest nonmonogamy?
Are you polysaturated yet? Most people seem to find 2-3 to be the practical limit.
I actually never was asked to say the Pledge in any US school I went to, and I've never even seen it said. I'm pretty sure this is limited to some parts of the country and is no longer as universal as it may have been once. If someone did go to one such school, they and their parents would have the option of simply not saying the Pledge, transferring to a different school (I doubt private or religious schools say it), or homeschooling/unschooling.
For what it's worth, I've never seen it said in any of the US schools I've attended. It's not universal.
I've played that game, using various shaped blocks that the Customer has to assemble in a specific pattern. It's great.
There's also the variation with an Intermediary, where the Expert and the Customer can only communicate through the Intermediary, who moves back and forth between the rooms and can't write anything down.
Precommitting is useful in many situations, one being where you want to make sure you do something in the future when you know something might change your mind in the meantime. In Cialdini's "Influence," for instance, he discusses how saying in public "I am not going to smoke another cigarette" is helpful in quitting smoking.
If you think you might change your mind, then surely you would want to have the freedom to do so?
The whole point is that I want to remove that freedom. I don't want the option of changing my mind.
Another classic example is the general who burned his ships upon landing so there would be no option to retreat, to make his soldiers fight harder.
If you know what you're doing, the PhD example is not more than a 5-minute process - I've walked people through worse things in about that time.
Please elaborate!
I don't want to eat anything steaklike unless it came from a real, mooing cow. I don't care how it's killed.
I'm worried I'm overestimating my resistance to advertising, so I'm hereby precommitting to this in writing.
Would "servant" not otherwise be justified?
That's a good reminder but I'm not sure how it applies here.
It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.
More to the point, the comparison is about typical fiction rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.
Yes, "sweet" is a great description. Why, how would you describe it?
Awesome! Good to know.
Also, judging from his other quotes, I'm pretty sure that's not what he meant...
I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.
Thanks... I'm still going through the most recent Callahan novels. Jake Stonebender does kinda have a temper.
Downvoted and upvoted the counterbalance (which for some reason was at -1 already; someone didn't follow your instructions). You're surprised people like power?
I suspect you overestimate how much most people like cows...
There's no doubt that killing cows like we do now will be outlawed after we find another way to have the steak.
No doubt at all? I'd put money on this being wrong. Why would it be outlawed?
Including the problems like 'how to have a steak without killing a cow'.
I'm not sure that's the relevant problem. The more important problem is "how can we get more and better steaks cheaper?"
I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?
Ok, but... wouldn't the same objection apply to virtually any action/adventure movie or novel? Kick-Ass, all the Die Hard movies, anything Tarantino, James Bond, Robert Ludlum's Bourne Identity novels and movies, et cetera. They all have similar violent scenes.
Jura gurl'er nobhg gb encr Wraavsre? Ur qrfreirf gung naq vg'f frys-qrsrafr