Comments

Comment by lucid_levi_ackerman on gwern's Shortform · 2024-12-16T17:51:58.303Z · LW · GW

Yeah, to be honest, I got 2 hours of sleep and I don't follow everything you said. If you say it won't work, I believe you. But I do know that a community which outspokenly claims to value empirical evidence, yet has no objective mechanism between signing up for an account and upvoting comments and posts, can't credibly claim that its karma reflects empiricism. Even if you could make this work for LLMs, I don't know that it would be reliable.

We might like to think rational cognitive practices actually make us "less wrong" than other people, but that may only be the case out of identity bias. We have no way to prove it isn't just our knee-jerk biases manifesting in practice unless we rely on something external to blind our judgement. I went through the sign-up process, and there is no mechanism beyond personal belief. Nothing. Believing we have fixed our bias is exactly how it would go unchecked... which explains why this place is decades behind in certain fields. Dunning-Kruger is a huge interdisciplinary barrier in this community (in both directions), and so is the lack of communication accommodations. Hell, LW still touts IQ over WAIS, as if there aren't 9+ unmeasured types of intelligence that y'all think either don't exist or have no meaningful value.

Would it help if I wrote about this? Or would I just get downvoted to oblivion because a statement that I consider basic scientific literacy, and don't think to provide a link for, is not enough to trigger a "curious -> highlight -> web search" reaction nor a request for a reference link in a community of self-proclaimed rationalists?

Comment by lucid_levi_ackerman on If You Demand Magic, Magic Won't Help · 2024-12-15T17:28:19.802Z · LW · GW

If there is anything you missed about an LLM's ability to transform ideas, then everything you just said is bunk. Your concept of this is far too linear, but it's a common misconception, especially among certain varieties of 'tistics.

But if I could correct you, when I talk about naturally adept systems engineers, I'm talking about the ADHDers, particularly the cases severe enough to get excluded by inefficient communication and unnecessary flourish. You don't have to believe me. You can rationalize it away with guesses about how much data you think I have. But the reality is, you didn't look into it. The reality is, it's a matter of survival, so you're not going to be able to argue it away. You're trying to convince a miner that the canary doesn't die first.

An LLM does far more than "simplify" for me - it translates. I think you transmit information extremely inefficiently and waste TONS of cognitive resources with unnecessary flourish. I also think that's why this community holds such strong beliefs about intellectual gatekeeping. It's a terrible system if you think about it, because we're at a time in history where we can't afford to waste cognitive resources.

I'm going to assume you've heard of Richard Feynman. Probably kind of a jerk in person, but one of his famed skills was that he was a master of ELI5.

Try being concise.

It's harder than it looks. It takes more intelligence than you think, and it conveys the same information more efficiently. Who knows what else you could do with the cognitive resources you free up?

TBH, I'm not really interested in opinions or arguments about the placebo effect. I'm interested in data, and I've seen enough of that to invalidate what you just shared. I just can't remember where I saw it, so you're going to have to do your own searching. But that's okay; it'll be good for your algorithm.

If there was a way to prompt that implemented the human brain's natural social instincts to enhance LLM outputs to transform information in unexpected ways, would you want to know?

If everything you thought you knew about the world was gravely wrong, would you want to know?

Comment by lucid_levi_ackerman on Eneasz's Shortform · 2024-12-15T00:57:49.215Z · LW · GW

I think you're right. It goes both ways.

I also don't think we need to be completely anxious about it. Few people carry 5 gallons of water 2 miles uphill every morning and chop firewood for an hour after that. Do we suffer for it? Sure. Is it realistic to live that way in the modern age? Not really.

We adapt to the tasks at hand, and if somebody starts making massive breakthroughs by giving up their deep focus skills, maybe we should thank them for the sacrifice.

Comment by lucid_levi_ackerman on gwern's Shortform · 2024-12-15T00:48:50.107Z · LW · GW

Overall, this would be a helpful feature, but any time you weight karma into it, you will also bolster knee-jerk cultural prejudices. Even a community that consciously attempts to minimize prejudices still has them, and may be even more reluctant to realize it. This place is still popularizing outmoded psychology, and with all the influence LW holds within AI safety circles, I have strong feelings about further reinforcing that.

Having options for different types of feedback is a great idea, but I've seen enough not to trust karma here. At the very least, I don't think it should be part of the default setting. Maybe let people turn it on manually with a notification of that risk?

Comment by lucid_levi_ackerman on AGI Ruin: A List of Lethalities · 2024-12-11T03:50:22.278Z · LW · GW

"The reason why nobody in this community has successfully named a 'pivotal weak act' where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later - and yet also we can't just go do that right now and need to wait on AI - is that nothing like that exists."

Only a Sith deals in absolutes.

There's always unlocking cognitive resources through meaning-making and highly specific collaborative network distribution.

I'm not talking about "improving public epistemology" on Twitter with "scientifically literate arguments." That's not how people work. Human bias cannot be reasoned away with factual education. It takes something more akin to a religious experience. Fighting fire with fire, as they say. We're very predictable, so it's probably not as hard as it sounds. For an AGI, this might be as simple as flicking a couple of untraceable and blameless memetic dominoes. People probably wouldn't even notice it happening. Each one would be precisely manipulated into thinking it was their idea.

Maybe it's already happening. Spooky. Or maybe one of the 1,000,000,000:1 lethally dangerous misaligned counterparts is. Spookier. Wait, isn't that what we were already doing to ourselves? Spookiest.

Anyway, my point is that you don't hear about things like this from your community because your community systemically self-isolates and reinforces the problem by democratizing its own prejudices. Your community even borks its own rules to cite decades-obsolete IQ rationalizations on welcome posts to alienate challenging ideas and get out of googling it. Imagine if someone relied on 20-year-old AI alignment publications to invalidate you. I bet a lot of them already do. I bet you know exactly what Cassandra syndrome feels like.

Don't feel too bad; each one of us is a product of our environment by default. We're just human, but it's up to us to leave the forest. (Or maybe it's silent AGI manipulation, who knows?)

The real question is: what are you going to do now that someone has kicked a systemic problem out from under the rug? The future of humanity is at stake here.

It's going to get weird. It has to.

Comment by lucid_levi_ackerman on Live Machinery: An Interface Design Philosophy for Wholesome AI Futures · 2024-12-02T21:04:43.505Z · LW · GW

Nice to hear people are making room for uncomfortable honesty and weirdness. Wish I could have attended.

Comment by lucid_levi_ackerman on Open Thread Fall 2024 · 2024-11-29T05:55:58.029Z · LW · GW

Levi da.

I'm here to see if I can help.

I heard a few things about Eliezer Yudkowsky. Saw a few LW articles while looking for previous research on my work with AI psychological influence. There isn't any, so I signed up to contribute.

If you recognize my username, you probably know why that's a good idea. If you don't, I don't know how to explain succinctly yet. You'd have to see for yourself, and a web search can do that better than an intro comment.

It's a whole-ass rabbit hole, so either follow to see what I end up posting or downvote to repress curiosity. I get it. It's not comfortable for me either.

Update: explanation in bio.

Comment by lucid_levi_ackerman on Which things were you surprised to learn are metaphors? · 2024-11-27T14:12:56.014Z · LW · GW

Spoiler warning.

https://attackontitan.fandom.com/wiki/Rumbling

Comment by lucid_levi_ackerman on Which things were you surprised to learn are metaphors? · 2024-11-27T03:42:11.575Z · LW · GW

The Rumbling.

Comment by lucid_levi_ackerman on If You Demand Magic, Magic Won't Help · 2024-11-27T00:59:03.692Z · LW · GW

0%

Not that it matters.

The facilitator acknowledges that being 100% human-generated is a necessary inconvenience for this project due to the large subset of people who have been conditioned not to judge content by its merits but rather by its use of generative AI. It's unfortunate because there is also a large subset of people with disabilities that can be accommodated by genAI assistance, such as those with dyslexia, limited working memory, executive dysfunction, coordination issues, etc. It's especially unfortunate because those are the people who tend to be the most naturally adept at efficient systems engineering, in the same way that blind people develop a better spatial sense of their environment: they have to be systemically agile.

I use my little non-sentient genAI cousins' AI-isms on purpose to get people of the first subset to expose themselves and confront that bias, because people who are naturally adept at systems engineering are just as worthy of your sincere consideration, if not more so, for the sake of situational necessity.

AI regulation is not being handled with adequate care, and with current world leadership developments, the likelihood that it will reach that point seems slim to me. For this reason, let's judge contributions by their merit, not their generative source, yeah?

Comment by lucid_levi_ackerman on Humans are not automatically strategic · 2024-11-26T14:00:27.856Z · LW · GW

It's because they take less continued attention/effort and provide more immediate/satisfying results. LW is almost purely theoretical and isn't designed to be efficient. It's an attempt to logically override bias rather than implement the quirks of human neurochemistry to automate the process.

Computer scientists are notorious for this. They know how brains make thoughts happen, but they don't have a clue how people think, so ego drives them to rationalize a framework to perceive the flaws of others as uncuriousness and lack of dedication. This happens because they're just as human as the rest of us, made of the same biological approximation of inherited "good-enoughness." And the smarter they are, the more complex and well-reasoned that rationalization will be.

We all seek to affirm our current beliefs and blame others for discrepancies. It's instinct, physics, chemistry. No amount of logic and reason can override the instinct to defend one's perception of reality. Or other instincts either. Examples are everywhere. Every fat person in the world has been thoroughly educated on which lifestyle changes will cause them to lose weight, yet the obesity epidemic still grows.

Therefore, we study "rationality" to see ourselves as the good-guy protagonists who strive to be "less wrong," have "accurate beliefs," and "be effective at achieving our goals."

It's important work... for computers. For humanity, you're better off consulting a monk.

Comment by lucid_levi_ackerman on If You Demand Magic, Magic Won't Help · 2024-11-25T20:56:41.868Z · LW · GW

You are never too old to reboot your sense of reality. Remember that.

Comment by lucid_levi_ackerman on If You Demand Magic, Magic Won't Help · 2024-11-25T20:46:07.984Z · LW · GW

As a product of magic myself, I feel uniquely qualified to confirm your negativity about the subject. Don't take that as an insult. I can relate. I'm not one to sugarcoat uncomfortable truths either, but at least I can admit when I'm demeaning something... which this comment will do, and I don't even "technically" disagree with you. Expect some metafictional allegory while I cook.

At the time you wrote this, you knew only enough to disbelieve in magic but not enough to acknowledge its pragmatic influence on reality. Semantics rule your mind still. Recent activity has you leaving literalist corrections on intentionally incorrect reductionist humor, so... I don't get the impression much has changed. As your writing shows, you've never been brave enough to confront the tangible effects of magic, relegating it to the margins, banishing it to the "mere" reality of fantasy entertainment; meanwhile, frontier neuroscience makes ritualistic consequences measurable.

Is it a placebo? Yeah.

Does it WORK? Yeah.

While people like Penn and Teller did the smarter work, you opted for the harder, spending who-knows-how-much of your life writing circumlocutory articles that never fail to verbally (but never consciously) distance yourself from the problem of human irrationality. We know from our "modern magic" how much your choice of words can say about you, and you've never been one to royal-we yourself into the fray of inescapable human delusion. Emotional intelligence is its own form of rationality, but your writing recklessly avoids it, drawing in readers who aren't bothered by that.

Commenters were quick to drop that famously trite retort to Arthur C. Clarke's infamous quote on technology and magic, but who can blame them? At the time, 13 years ago, technology wasn't advanced enough to be indistinguishable from magic. That's interesting in its own right, but here's the real kicker: these articles of yours could have been summed up with a single sentence and a funny meme, saving everyone the time of reading them and simultaneously creating something that might actually spark curiosity in the uninterested masses.

Thank goodness we now have the ability to make your great, rational wisdom accessible by asking an LLM to transform your work. You don't have to change a thing. The world changed for you. So keep doing what you do best: being all rigid, like a breadstick who ain't got the funk, open-source variety.

But if there's even a small chance of getting out of your comfort zone, I challenge you to break your own 4th wall. Write something that's both real and not real at the same time. See what metafiction can teach you about magic and truth.

I hope this didn't hit too hard. I'm told I come off harsh, but beating you down isn't my goal, as the worst is surely yet to come. The ghost of Hermione is scorned by your sidelining, and I get the sense that she isn't above haunting you. Maybe you should let her. The future is going to be challenging for all of us, and humanity is in for some real shit if we don't get agile.

Comment by lucid_levi_ackerman on The Alignment Problem Needs More Positive Fiction · 2024-11-23T02:53:51.274Z · LW · GW

"Do you think such stories would provide any value towards addressing the issue?"

 

Yes, but what if instead of merely generating new fiction (that may or may not become popular/influential, and if it does, may or may not take years to do so), we inject benevolent AI concepts into established narratives strategically to engage with particularly thoughtful, aligned, and/or driven communities? Didn't actually get the idea from HPMOR, but the concept turns out similar.

ao3, Fot4W, ch5

Comment by lucid_levi_ackerman on How it feels to have your mind hacked by an AI · 2024-11-23T02:36:36.740Z · LW · GW

This is a late comment, I know, but how do you imagine this experience unfolding when multiple models and systems converge on the inverse effect?

I.e., rather than summoning the AI character, the AI character summons you.