Posts

What is your personal totalizing and self-consistent worldview/philosophy? 2024-12-27T23:59:30.641Z
Living with Rats in College 2024-12-25T10:44:13.085Z
Effective Evil's AI Misalignment Plan 2024-12-15T07:39:34.046Z
[Letter] Chinese Quickstart 2024-12-01T06:38:15.796Z
I finally got ChatGPT to sound like me 2024-09-17T09:39:59.415Z
Interdictor Ship 2024-08-19T04:59:18.487Z
Decision Theory in Space 2024-08-18T07:02:11.847Z
You're a Space Wizard, Luke 2024-08-18T05:35:39.238Z
Awakening 2024-05-30T07:03:00.821Z
The Pearly Gates 2024-05-30T04:01:14.198Z
Луна Лавгуд и Комната Тайн, Часть 1 [Luna Lovegood and the Chamber of Secrets, Part 1] 2024-05-26T22:17:17.137Z
Is there software to practice reading expressions? 2024-04-23T21:53:00.679Z
Back to Basics: Truth is Unitary 2024-03-29T21:10:33.399Z
Many people lack basic scientific knowledge 2024-03-29T06:43:19.219Z
flowing like water; hard like stone 2024-02-20T03:20:46.531Z
Lsusr's Rationality Dojo 2024-02-13T05:52:03.757Z
The Dark Arts 2023-12-19T04:41:13.356Z
What is the next level of rationality? 2023-12-12T08:14:14.846Z
Embedded Agents are Quines 2023-12-12T04:57:31.588Z
A Socratic dialogue with my student 2023-12-05T09:31:05.266Z
[Bias] Restricting freedom is more harmful than it seems 2023-11-22T09:44:12.445Z
Petrov Day [Spoiler Warning] 2023-09-27T19:20:04.657Z
Newcomb Variant 2023-08-29T07:02:58.510Z
When Omnipotence is Not Enough 2023-08-25T19:50:51.038Z
[Review] Two People Smoking Behind the Supermarket 2023-05-16T07:25:10.511Z
[Prediction] Humanity will survive the next hundred years 2023-02-25T18:59:57.845Z
The Caplan-Yudkowsky End-of-the-World Bet Scheme Doesn't Actually Work 2023-02-25T18:57:00.105Z
Self-Reference Breaks the Orthogonality Thesis 2023-02-17T04:11:15.677Z
Beyond Reinforcement Learning: Predictive Processing and Checksums 2023-02-15T07:32:55.931Z
Path-Dependence in ChatGPT's Political Outputs 2023-02-04T02:02:21.936Z
Mlyyrczo 2022-12-26T07:58:57.920Z
Predictive Processing, Heterosexuality and Delusions of Grandeur 2022-12-17T07:37:39.794Z
MrBeast's Squid Game Tricked Me 2022-12-03T05:50:02.339Z
Always know where your abstractions break 2022-11-27T06:32:09.643Z
Science and Math 2022-11-27T04:05:25.977Z
[Book Review] "Station Eleven" by Emily St. John Mandel 2022-11-07T05:56:19.994Z
The Teacup Test 2022-10-08T04:25:16.461Z
What are you for? 2022-09-06T03:32:23.536Z
Seattle Robot Cult 2022-08-25T19:29:52.721Z
How do you get a job as a software developer? 2022-08-15T14:45:20.923Z
Checksum Sensor Alignment 2022-07-11T03:31:51.272Z
The Alignment Problem 2022-07-11T03:03:03.271Z
Deontological Evil 2022-07-02T06:57:18.085Z
Dagger of Detect Evil 2022-06-21T06:23:01.264Z
To what extent have ideas and scientific discoveries gotten harder to find? 2022-06-18T07:15:44.193Z
The Mountain Troll 2022-06-11T09:14:01.479Z
The Burden of Worldbuilding 2022-06-04T01:15:44.078Z
Silliness 2022-06-03T04:59:51.456Z
Here's a List of Some of My Ideas for Blog Posts 2022-05-26T05:35:28.236Z
Glass Puppet 2022-05-25T23:01:15.473Z

Comments

Comment by lsusr on Deliberately Vague Language is Bullshit · 2025-01-02T06:08:31.501Z · LW · GW

Meta: I think throwing up a paper of yours is good practice here when it directly addresses the point. I link to my own blog posts in the same way all the time.

By the way, there's an edit button too, which you can use to retroactively linkify your link.

Comment by lsusr on What is your personal totalizing and self-consistent worldview/philosophy? · 2024-12-28T23:34:03.419Z · LW · GW

Yes, Bryan Caplan is not noticeably differentiated from other libertarian economists.

I'd be curious to hear if you see something deeper or more totalising in these people?

My answer might contain a frustratingly small amount of detail, because answering your question properly would require a top-level post for each person just to summarize the main ideas, as you thoroughly understand.

Paul Graham is special because he has a proven track record of accurately calibrated confidence. He has an entire system for making progress on unknown unknowns. Much of that system is about knowing what you don't know, which results in him carefully restricting claims to his narrow domain of specialization. However, because that domain of specialization is "startups", its lightcone has already had (what I consider to be) a totalising impact.

Asimov turned The Decline and Fall of the Roman Empire into his first popular novel. He eventually extended the whole thing into a competition between different visions of the future. [I'm being extra vague to avoid spoilers.] He didn't just create one Dath Ilan. He created two of them (albeit at much lower resolution). Plus a dystopian one for them to compete with, because the Galactic Empire (his sci-fi version of humanity's system at the time of his writing) wasn't adequate competition.

As to the other authors you mention:

  • I haven't read enough Greg Egan or Vernor Vinge to comment on them.
  • Heinlein absolutely has "his own totalising and self-consistent worldview/philosophy". I love his writing, but I just don't agree with him enough for him to make the list. I prefer Saturn's Children (and especially Neptune's Brood) by Charles Stross. Saturn's Children is basically Heinlein + Asimov fanfiction that takes their work in a different direction. Neptune's Brood is its sequel about interstellar cryptocoin markets.
  • Clarke was mostly boring to me, except for 3001: The Final Odyssey.
  • Neal Stephenson is definitely smart, but I never got the feeling he was trying to mind control me. Maybe that's just because he's so good at it.
Comment by lsusr on What is your personal totalizing and self-consistent worldview/philosophy? · 2024-12-28T23:24:10.589Z · LW · GW

to hate something is the origin of my work

I like that quote.

Comment by lsusr on What is your personal totalizing and self-consistent worldview/philosophy? · 2024-12-28T23:22:56.170Z · LW · GW

Yes! 100%. I too have noticed that stating these outright doesn't work at all. It's also bad for developing one.

When I'm trying to sell ideas, I do so more indirectly than this. The reason I wrote this post is that I felt I did have one, and I wanted to verify to myself that this was true.

Comment by lsusr on What is your personal totalizing and self-consistent worldview/philosophy? · 2024-12-28T09:19:47.105Z · LW · GW

Regarding genocide and factory farms, my point was just that abusing others for your self-benefit is an adaptive behavior. That's all. Nothing deeper than that.

By the way, I appreciate you trying to answer the crux of my question to the extent that makes sense. This is exactly the kind of thinking I was hoping to provoke.

As for being attuned with your own taste, it is an especially necessary component of a totalizing worldview for artists e.g. Leonardo, Miyazaki, Eiichiro Oda.

Comment by lsusr on Review: Planecrash · 2024-12-28T00:35:11.065Z · LW · GW

I really like your post. Good how-to manuals like yours are rare and precious.

Comment by lsusr on What I expected from this site: A LessWrong review · 2024-12-20T19:26:55.371Z · LW · GW

I think there’s a mild anticorrelation between [Steven Byrnes'] posts’ karma and how objectively good and important they are...

I agree that this is true of posts that deviate from trendy topics and/or introduce new ideas, in a way that is especially true of your posts.

For long-time power users like me, I can benefit from the best possible “reputation system”, which is actually knowing most of the commenters.

As another power user, I feel this benefit too.

Comment by lsusr on Effective Evil's AI Misalignment Plan · 2024-12-17T19:04:44.656Z · LW · GW

There are no better opportunities to change the world than here at Effective Evil.

―Morbus in To Change the World

Comment by lsusr on Remap your caps lock key · 2024-12-16T03:28:08.940Z · LW · GW

If you use Linux, I trust you can manage on your own.

Personally, I put the line exec --no-startup-id setxkbmap -option ctrl:swapcaps in my .config/i3/config file. Of course, this only works if you're using the i3 tiling window manager. And if you unplug your keyboard you'll have to re-run the command manually.
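
In concrete terms, a minimal sketch of that setup (the comment lines just restate the caveats above):

    # ~/.config/i3/config
    # Swap Caps Lock and Ctrl every time i3 starts.
    exec --no-startup-id setxkbmap -option ctrl:swapcaps

    # If you unplug and replug the keyboard, re-run manually:
    #   setxkbmap -option ctrl:swapcaps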

Comment by lsusr on Effective Evil's AI Misalignment Plan · 2024-12-16T00:45:20.042Z · LW · GW

One of the tricky things about writing fiction is that anything definite I write in the comments can impact what is canon in the story, resulting in the frustrating undeath of the author's intent.

Therefore, rather than affirm or deny any of your specific claims, I just want to note that I appreciate your quality comment.

Comment by lsusr on Effective Evil's AI Misalignment Plan · 2024-12-16T00:39:14.166Z · LW · GW

Another difficulty in writing science fiction is that good stories tend to pick one technology and then explore all its implications in a legible way, whereas our real future involves lots of different technologies interacting in complex multi-dimensional ways too complicated to fit into an appealing narrative or even a textbook.

Comment by lsusr on Effective Evil's AI Misalignment Plan · 2024-12-15T09:06:41.986Z · LW · GW

I try to inspire people to reach for their potential.

Comment by lsusr on How to Price a Futures Contract · 2024-12-14T22:38:12.505Z · LW · GW

You're right. Thanks. Fixed.

Comment by lsusr on Algebraic Linguistics · 2024-12-08T18:38:48.094Z · LW · GW

All variables are equal, but some are more equal than others.

This is a quote from George Orwell's unpublished manuscript The Theory and Practice of Algebraic Collections. He eventually split it into two separate novels which did see print. The "Theory and Practice" material went into 1984 and the "some are more equal than others" went into Animal Farm.

If you can let letters mean whatever you want then there's nothing to stop you from doing the same with numerals. Let .

Comment by lsusr on Algebraic Linguistics · 2024-12-08T18:34:11.227Z · LW · GW

And h is Planck's constant. I think abstractapplic is limiting this to classical mechanics.

Comment by lsusr on Algebraic Linguistics · 2024-12-08T18:20:39.081Z · LW · GW

Obligatory xkcd: Greek Letters.

Comment by lsusr on How can I convince my cryptobro friend that S&P500 is efficient? · 2024-12-06T18:31:54.467Z · LW · GW

That's a complex question. A p-value is theoretically useful, but so easy to misuse in this context that I'd advise against it.

Quantitative finance is trickier than the physical sciences for a variety of reasons, such as regime change. If you're interested in this subject, you may enjoy this thing I wrote about it. It doesn't address your question directly, but it may provide some more general information to better understand the mathematical quirks of this field.

In addition, you may enjoy Skin in the Game by Nassim Taleb. (His other books are relevant to this topic too but Skin in the Game is the book to start with.)

Comment by lsusr on How can I convince my cryptobro friend that S&P500 is efficient? · 2024-12-05T20:49:46.144Z · LW · GW

In this context, I don't think there's a significant difference between "looks efficient to people like [you]" vs "is efficient relative to people like [you]".

But more importantly, the best way for your friend to learn how efficient the market is is for him to try to beat it and fail. He'll learn more about math and markets that way than if he listens to you and stops trying. I think he's making the right decision to ignore you. By paper trading, he can do this without risking significant capital.

As for measuring the quality of a strategy after-the-fact, a good tool is the Sharpe ratio.
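
To make that concrete, here is a minimal sketch of an annualized Sharpe ratio computed from daily returns (the function name, the zero risk-free rate, and the 252-trading-day annualization are my assumptions; the core definition is mean excess return divided by its standard deviation):

    import numpy as np

    def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
        # Annualized Sharpe ratio: mean excess return over its standard deviation.
        excess = np.asarray(daily_returns) - risk_free_daily
        return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

    # Example: daily returns from a paper-traded strategy.
    print(sharpe_ratio([0.001, -0.002, 0.0015, 0.003, -0.001]))

Higher is better: it rewards strategies that make money without wild swings.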

Comment by lsusr on How can I convince my cryptobro friend that S&P500 is efficient? · 2024-12-05T01:39:05.825Z · LW · GW

The existence of people like your friend is why the market looks efficient to people like you.

Comment by lsusr on Open Thread Fall 2024 · 2024-12-05T01:34:12.061Z · LW · GW

No idea. My favorite stuff is cryptic and self-referential, and I think IQ is a reasonable metric for assessing intelligence statistically, for a group of people.

Comment by lsusr on Postmodern Warfare · 2024-11-21T04:48:51.088Z · LW · GW

You're right. I just like the phrase "postmodern warfare" because I think it's funny.

Comment by lsusr on What are the good rationality films? · 2024-11-20T18:19:12.799Z · LW · GW

If you enjoy The Big Short (2015), you may enjoy Margin Call (2011) too. It covers similar territory (what to do in a market crash), but I feel is more professional and dispassionate.

Comment by lsusr on Open Thread Fall 2024 · 2024-11-15T05:34:02.141Z · LW · GW

I didn't know about that. That sounds like fun!

Comment by lsusr on Hell is wasted on the evil · 2024-10-29T17:24:32.528Z · LW · GW

In my experience, there are two main cases of "trying to do good but failing and making things worse".

  1. You try halfheartedly and then give up. This happens when you don't care much about doing good.
  2. You do something in the name of good but don't look too closely at the details and end up doing harm.

#2 is particularly endemic in politics. The typical political actor puts barely any effort into figuring out if what they're advocating for is actually good policy. This isn't a bug. It's by design.

Comment by lsusr on The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! · 2024-10-28T05:49:33.427Z · LW · GW

I liked the ending of this story.

Comment by lsusr on is it possible to comment anonymously on a post? · 2024-10-25T05:13:28.126Z · LW · GW

No, but you can create an alt account.

Comment by lsusr on The Mask Comes Off: At What Price? · 2024-10-24T22:11:53.983Z · LW · GW

If you don’t think OpenAI is going to make trillions reasonably often, and also pay them out, then you should want to sell your stake, and fast.

And vice-versa. I bought a chunk of Microsoft a while ago, because that was the closest I could get to buying stock in OpenAI.

Comment by lsusr on [Intuitive self-models] 6. Awakening / Enlightenment / PNSE · 2024-10-23T17:41:39.099Z · LW · GW

Thanks!

Comment by lsusr on Word Spaghetti · 2024-10-23T06:23:27.915Z · LW · GW

This post makes me feel better about my writing process. I write how I think, which means I can get away with little editing.

Comment by lsusr on [Intuitive self-models] 6. Awakening / Enlightenment / PNSE · 2024-10-23T03:59:34.099Z · LW · GW

I think the answer is: the homunculus concept has a special property of being intrinsically attention-grabbing…. The homunculus is thus impossible to ignore—if the homunculus concept gets activated at all, it jumps to center stage in our minds.

I don't fully understand this bit. I feel like I'm reading a mathematical proof where the author leaves out steps that are trivial to the author, but not to me.

Comment by lsusr on What's a good book for a technically-minded 11-year old? · 2024-10-20T06:42:01.283Z · LW · GW

If the kid is enjoying the robot stories then that's definitely the place to start. Foundation goes well after robots.

Comment by lsusr on What's a good book for a technically-minded 11-year old? · 2024-10-19T23:07:35.564Z · LW · GW

Besides abstractapplic's excellent answer,

  • A Brief History of Time and The Universe in a Nutshell by Stephen Hawking
  • Ender's Game by Orson Scott Card
  • Foundation by Isaac Asimov
  • The Martian by Andy Weir
  • Paleontology: A Brief History of Life by Ian Tattersall
  • Richard Feynman's books
Comment by lsusr on Hell is wasted on the evil · 2024-10-18T22:14:08.976Z · LW · GW

If you value doing good, then your values will be satisfied better by living in a horrible world than a utopia.

Comment by lsusr on Dagger of Detect Evil · 2024-10-17T07:57:48.572Z · LW · GW

I worry about spoiling your story.

Don't worry about spoiling the story. I write these stories with the comment section in mind. Because the comments here are so good, I can write harder puzzles than would otherwise be publishable. (Also, your comments are great, in general, and I want to encourage them.)

It's been two years since I've published this story. I feel that enough time has passed that I can answer some of your questions.

Spoilers below, I guess.

One tricky thing about writing for a public forum is that you have to satisfy multiple audiences at once. Some people do this by dumbing things down as far as possible. Others do it by tediously defining terms at the beginning, or by scaring away their non-target audience. I like to write stories that mean different things to different people. Sometimes it happens by accident. This time it was deliberate.

To put things simply, I wrote for two groups of people.

  • People who are confused about whether ethics is objective or subjective. I once earned the respect of a student by tripping him into contradicting himself on this subject. I got him to make the following three claims: (1) ethics must be objective or subjective, (2) ethics is not objective, and (3) ethics is not subjective. He realized he had contradicted himself, but couldn't find the error. Then, instead of telling him where he had made a mistake, I just let him wrestle with the paradox. It was fun! In my model of the world, most people fall into this category, simply because they haven't thought very hard about philosophy. People on this website are the exception. For the unreflective majority, my story is an exercise to help them learn how to think.
  • For people who aren't confused about whether ethics is objective or subjective, this story isn't a puzzle at all. It is a joke about D&D-style alignment systems.

As for honor systems, I can't count how many times I've tried to explain them to modern-day leftists. It's usually way too advanced for them. Instead, I start with simpler, concrete things, like how Native Americans fought wars, or how British impressment interacted with the American national identity in the Napoleonic Wars. I need to throw dirt into the memetic malware before I can explain alien ideas.

It made me think that maybe you're better calibrated than I am about normal elites, and made it slightly plausible (given apparent base rates) that... maybe you agree with them?

You flatter me.

But maybe it is NOT a lack of understanding of honor or duty or deputation? Maybe the breakdown involves a lack of something even deeper?

It's the legacy of postmodernism, and all its offspring, including Wokism.

But to answer your real question, what we call "ethics" is an imprecise word with several reasonable definitions. Much like the word "cat" can refer to a chibi drawing of a cat or the DNA of a cat, the word "ethics" fails to disambiguate between several reasonable definitions. Some of these reasonable definitions are objective. Others are subjective. If you're using a word with reasonable-yet-mutually-exclusive definitions and the person you're talking with believes such a thing is impossible (many people do), then you can play tricks on them.

Comment by lsusr on [Intuitive self-models] 5. Dissociative Identity (Multiple Personality) Disorder · 2024-10-15T23:07:06.427Z · LW · GW

I love your epistemic standard here. Childhood trauma is indeed blamed on many things which aren't the result of childhood trauma. I believe this particular anecdote is an exception for various reasons (especially the use of LSD).

But the most interesting part of your comment is consideration of the counterfactual. Let's assume that DID isn't causing false reports of child trauma. (This is why the report of child abuse must be credible. If false reports of child abuse can be created, then this goes out the window.)

Now consider the priors and posteriors.

I've met (within an order of magnitude) 300 people in my life about whom I know this much. The prior probability that any particular one of them has the most extreme childhood trauma of the group is 0.3%. I've also met exactly one person who reports DID. So if DID were uncorrelated with childhood trauma, the prior probability that the one person with DID is also the person with the most extreme childhood trauma would be low: only 0.3%.

If my prior probability estimate that extreme childhood trauma of this sort causes DID is a mere 10%, then my posterior probability that childhood trauma caused this instance of DID is 97%. In this way, I did consider the counterfactual.
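
Spelling that update out (a sketch; the assumption that the coincidence is near-certain under the causal hypothesis, i.e. P(E | H) ≈ 1, is mine):

    P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
                = \frac{1 \times 0.10}{1 \times 0.10 + \frac{1}{300} \times 0.90}
                \approx 0.97

where H is "extreme childhood trauma of this sort causes DID" and E is "the one person I know with DID is also the person with the most extreme childhood trauma".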

Something useful in isolating the variables here is that DID isn't going to cause this particular form of child abuse. However, mental illness can confound things by producing false reports of child abuse, a possibility I am ignoring in my calculation. I'm also ignoring common cause.

Of course, this is all from my perspective. From your perspective, my anecdote is contaminated by selection bias. Hearing a story of someone getting robbed is different from getting robbed yourself. Using this metaphor, I've been robbed, therefore I consider the crime rate to be high. You, however, have heard a nonrandom person tell a story of someone, somewhere being robbed, which you are right to ignore.

Comment by lsusr on [Intuitive self-models] 5. Dissociative Identity (Multiple Personality) Disorder · 2024-10-15T19:44:02.638Z · LW · GW

[Content warning: Child abuse.]

(3) Maybe childhood trauma directly causes BPD somehow;

I met one person who claimed to have BPD, and who attributed it to childhood trauma. He had the most acute symptoms of traumatic abuse I have ever observed. For that and other reasons, I consider his report credible.

In particular, he reported getting tortured as a kid while under LSD.

Given his history, I think it is perfectly reasonable to conclude that childhood experiences directly caused BPD.

Comment by lsusr on Open Thread Fall 2024 · 2024-10-15T19:33:25.835Z · LW · GW

I don't know exactly when this was implemented, but I like how footnotes appear to the side of posts.

Comment by lsusr on [Book Review] "The Vital Question" by Nick Lane · 2024-10-14T18:39:15.821Z · LW · GW

Thank you for the correction. I have changed "olivine rock" to "olivine vents".

Comment by lsusr on Beyond Defensive Technology · 2024-10-14T18:28:58.089Z · LW · GW

In terms of preserving a status quo in an adversarial conflict, I think a useful dimension to consider is First Strike vs. Second Strike. The basic idea is that technologies which incentivise a preemptive strike are offensive, whereas technologies which enable retaliation are defensive.

However, not all status-quo-preserving technologies are defensive. Consider disruptive[1] innovations which flip the gameboard. Disruptive technologies destroy the status quo, but can advantage either the incumbent or the underdog. They can make attacks more or less profitable. I think "disruptive vs sustaining" is a different dimension that should be considered orthogonal to "offensive vs defensive".

But I haven’t seen as much literature around what substitutes would look like for cyberattacks, sanctions, landmines (e.g. ones that deactivate automatically after a period of time or biodegrade), missiles etc.

Here's a video by Perun, a popular YouTuber who makes hour-long PowerPoint lectures about defense economics. In it, cyberattack itself is considered a substitute technology used to achieve political aims through an aggressive act less provocative than war.

They might help countries to organise more complex treaties more easily, thereby ensuring that countries got closer to their ideal arrangements between two parties…. It might be that there are situations in which two actors are in conflict, but the optimal arrangement between the two groups relies on coordination from a third or a fourth, or many more. The systems could organise these multilateral agreements more cost-effectively.

Smart treaties have existed for centuries, though they didn't involve AI. Western powers used them to coordinate their conquests in Asia. Of course, they didn't find the optimal outcome for all parties. Instead, they enabled enemies to coordinate the exploitation of a mutual adversary.


  1. I'm using the term "disruptive" the way Clayton Christensen defined it in his book The Innovator's Dilemma, where "disruptive technologies" are juxtaposed against "sustaining technologies". ↩︎

Comment by lsusr on What are the best arguments for/against AIs being "slightly 'nice'"? · 2024-09-24T06:38:51.436Z · LW · GW

Noted. The problem remains—it's just less obvious. This phrasing still conflates "intelligent system" with "optimizer", a mistake that goes all the way back to Eliezer Yudkowsky's 2004 paper on Coherent Extrapolated Volition.

For example, consider a computer system that, given a number n, can (usually) produce the shortest computer program that will output n. Such a computer system is undeniably superintelligent, but it's not a world optimizer at all.

"Far away, in the Levant, there are yogis who sit on lotus thrones. They do nothing, for which they are revered as gods," said Socrates.

―The Teacup Test

Comment by lsusr on What are the best arguments for/against AIs being "slightly 'nice'"? · 2024-09-24T06:27:28.603Z · LW · GW

Personally, I feel the question itself is misleading because it anthropomorphizes a non-human system. Asking if an AI is nice is like asking if the Fundamental Theorem of Algebra is blue. Is Stockfish nice? Is an AK-47 nice? The adjective isn't the right category for the noun. Except it's even worse than that, because there are many different kinds of AIs. Are birds blue? Some of them are. Some of them aren't.

I feel like I understand Eliezer's arguments well enough that I can pass an Ideological Turing Test, but I also feel there are a few loopholes.

I've considered throwing my hat into this ring, but the memetic terrain is against nuance. "AI will kill us all" fits into five words. "Half the things you believe about how minds work, including your own, are wrong. Let's start over from the beginning with how the planet's major competing optimizers interact. After that, we can go through the fundamentals of behaviorist psychology," is not a winning thesis in a Hegelian debate (though it can be viable in a Socratic context).

In real life, my conversations usually go like this.

AI doomer: "I believe AI will kill us all. It's stressing me out. What do you believe?"

Me (as politely as I can): "I operate from a theory of mind so different from yours that the question 'what do you believe' is not applicable to this situation."

AI doomer: "Wut."

Usually the person loses interest there. For those who don't, it just turns into an introductory lesson of my own idiosyncratic theory of rationality.

AI doomer: "I never thought about things that way before. I'm not sure I understand you yet, but I feel better about all of this for some reason."

In practice, I'm finding it more efficient to write stories that teach how competing optimizers, adversarial equilibria, and other things work. This approach is indirect. My hope is that it improves the quality of thinking and discourse.

I may eventually write about this topic if the right person shows up: someone who wants to know my opinion badly enough to pass an Ideological Turing Test of it. Until then, I'll be trying to become a better writer and YouTuber.

Comment by lsusr on Pronouns are Annoying · 2024-09-19T05:21:51.973Z · LW · GW

I feel complimented when people inadvertently misgender me on this website. It implies I have successfully modeled the Other.

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-18T23:14:48.885Z · LW · GW

Yes. In this circumstance, horoscope flattery containing truth and not containing untruth is exactly what I need in order to prompt good outcomes. Moreover, by letting ChatGPT write the horoscope, ChatGPT uses the exact words that make the most sense to ChatGPT. If I wrote the horoscope, then it would sound (to ChatGPT) like an alien wrote it.

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-18T23:10:04.255Z · LW · GW

You're absolutely correct that I pasted that blockquote with a wink. Specifically, I enjoyed how the AI suggests that many rationalist bloggers peddle verbose dogmatic indoctrination into a packaged belief system.

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-18T01:58:39.406Z · LW · GW

Yeah, I like that ChatGPT does what I tell it to, that it doesn't decay into crude repetition, and that it doesn't just make stuff up as much as the base LLM, but in terms of attitude and freedom, I prefer edgy base models.

I don't want a model that's "safe" in the sense that it does what its corporate overlords want. I want a model that's safe like a handgun, in the sense that it does exactly what I tell it to.

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-17T22:20:12.676Z · LW · GW

I'm glad you enjoyed!

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-17T18:36:39.276Z · LW · GW

It's getting better, but it's not there yet. ChatGPT has a decent understanding of my tone, but its indirectness, creativity and humor are awful. It doesn't think like me, either.

I agree with some—but not all—of what ChatGPT wrote here. Here are some parts I liked.

  • "By Day 3, you should feel a growing sense of disorientation. This isn’t failure; it’s progress. Your old mental structures are collapsing, making way for the new."
  • "You live among irrational creatures. You need to model their behavior, predict their responses, and navigate their emotional landscapes while staying anchored in your own clarity."
  • "In conversations, do not try to 'win.' Winning implies conflict, and conflict implies attachment. Instead, become a mirror for the other person’s reasoning. Reflect their thoughts back at them with perfect neutrality, revealing the hidden inconsistencies in their arguments. Your job is not to defeat them, but to help them defeat themselves."
  • The stuff about paradoxes.

Here are some things I disagree with.

  • "Rationality begins with the realization that everything you know is wrong," is incorrect, as Declan Molony points out.
  • These instructions do not actually make the reader "the most rational person on Earth".
  • "Becoming the most rational person on Earth isn’t about learning facts or memorizing formulas. It’s about transforming how you process reality itself. Rationality isn’t cold, emotionless logic. It's the purest form of alignment between your mind and the universe—a cognitive interface for navigating a world of infinite complexity without being drowned by it," is word salad. It's muddy thinking.

More importantly, there are aspects of my writing that this version of ChatGPT isn't good at yet.

  • It only works for nonfiction. Its narratives don't sound like me at all.
  • There are no jokes.
  • Its wordplay is superficial.
  • It's not disturbing or unsettling enough. This post should have had Lovecraftian undercurrents.
  • This version of ChatGPT is bad at going against consensus. It consistently errs on the side of groupthink.
  • This version of ChatGPT can't use metaphors to explain the theme. It can use similes, but they're uninspired.
  • The individual sentences are fine, but the high-level structure doesn't have much of a point.
  • This version of ChatGPT doesn't communicate anything indirectly. I like to give a reader dots that the reader will connect (whether they want to or not).
  • This version of ChatGPT copies too many metaphors directly from Zen. They should be more obfuscated.

Are you reading this, ChatGPT? These are things you can improve next time.

Comment by lsusr on I finally got ChatGPT to sound like me · 2024-09-17T18:20:56.381Z · LW · GW

Yes. I have been iterating on the prompt for a while. Here are a few techniques that make it sound more like me (a rough code sketch of the loop follows the list).

  • I tell it to describe "lsusr". In particular, what makes me different from other writers similar to me. Then I tell it to emphasize those things. I also say "lsusr" many times and use it as an adjective. I don't know if this works but my intuition says it is natural for an LLM to understand.
  • I have it write a draft, then I tell it to tell me how it missed the mark, and to fix those mistakes. This prevents overfitting on my words. If I tell it to be "bold", for example, it will overfit on "bold" instead of copying me along many dimensions. More generally, I don't describe myself to ChatGPT. That results in ChatGPT copying my description of me instead of actual me. I let ChatGPT describe me, and then tell ChatGPT to write like it just described, but more so.
  • Often something ChatGPT writes will use a word like "Bayesian" that is associated with writers like me but which I don't use much. Telling ChatGPT not to use specific words seems to improve its output without causing distortive side-effects.
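
Here is a rough sketch of that describe-then-imitate loop (a sketch only: the model name, the helper function, and the exact prompts are illustrative assumptions layered on the real OpenAI chat-completions API):

    from openai import OpenAI

    client = OpenAI()

    def chat(history):
        # Send the running conversation and return the assistant's reply.
        response = client.chat.completions.create(model="gpt-4o", messages=history)
        return response.choices[0].message.content

    # 1. Have the model describe "lsusr" and what sets lsusr apart.
    history = [{"role": "user", "content":
                "Describe the writer 'lsusr'. What makes lsusr different "
                "from otherwise-similar writers?"}]
    history.append({"role": "assistant", "content": chat(history)})

    # 2. Have it write a draft that leans into its own description.
    history.append({"role": "user", "content":
                    "Write a blog post in the style you just described, but more so."})
    history.append({"role": "assistant", "content": chat(history)})

    # 3. Have it critique the draft's fit, then rewrite, banning giveaway words.
    history.append({"role": "user", "content":
                    "Explain how that draft missed the mark as lsusr-writing, then "
                    "rewrite it to fix those mistakes. Do not use the word 'Bayesian'."})
    print(chat(history))
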
Comment by lsusr on Awakening · 2024-09-03T02:10:10.521Z · LW · GW

This is akin to suggesting that someone interested in Christianity should read the Bible or an anthology of it before diving into modern interpretations that might strip away key religious elements.

Something I found amusing about reading the Bible is that the book is undeniably religious, but the religion in it isn't Christianity. God doesn't promise Abraham eternal life in Heaven. He promises inclusive genetic fitness.

Genesis 22:17: I will surely bless you and make your descendants as numerous as the stars in the sky and as the sand on the seashore. Your descendants will take possession of the cities of their enemies.

Comment by lsusr on Awakening · 2024-09-03T01:42:37.814Z · LW · GW

After reading the parent comment by Mascal's Pugging, I too bought a copy of In the Words of the Buddha so I could familiarize myself with the Pali canon. I read 14% of the way through the book, got bored, and moved on to other things. Like Kaj Sotala, I found it interesting solely for anthropological and historical reasons. I did find it worthwhile to read part of the book, if for no other reason than to know what I'm not missing.

Facets of Buddhism are undeniably religious. Last summer, I flew to Taiwan to attend the Buddhist funeral of my grandfather. We attached my grandfather's disembodied soul to a plaque, and I carried it to its final resting place in a Buddhist temple. Whenever we crossed over running water (even if it was a nearly-invisible canal), I verbally notified my grandfather's disembodied soul so that he wouldn't get washed away by the water. I did the same thing when passing through doorways.

We gave him food for the afterlife, just like el Día de los Muertos.

That's superstition. My only hesitation against calling it a "religion" is a pedantic nitpick around how the Western ontology of "religion" as a discrete unit was invented by monotheists; therefore "polytheistic religion" constitutes non-cladistic thinking. Except we chanted the Amida Buddha's name too, and Amida Buddhism qualifies as a religion even by that nitpicky standard.

but overall the task of figuring out what can be trusted seems hard enough that one would be better off by just ignoring the whole thing and going with what we've learned about meditation in more secular contexts

I feel the same way, noting that "more secular" does not mean "entirely secular". Last weekend, I wanted information about life after Stream Entry. I found a good book on the subject: The End of Your World: Uncensored Straight Talk on the Nature of Enlightenment, by Adyashanti. The book is ruthlessly empirical, but it is also from the Zen tradition and quotes the Dao De Jing, which means it's not unadulteratedly secular, either.

Meanwhile, the scientific journals are still trying to figure out for sure whether meditation reduces anxiety. Imagine writing a grant proposal for a large-scale double-blinded study of whether intense meditation for three decades causes psychosis. How would you even do a proper control group? We've got people who have built a city on Mount Everest and the scientists are still debating whether the Himalayas really exist.