Posts

Book Review: Replacing Guilt - On Having Something to Fight For 2024-11-03T19:47:35.093Z
Book Review: Spiritual Enlightenment: The Damnedest Thing 2023-01-21T02:38:14.854Z
Chat GPT's views on Metaphysics and Ethics 2022-12-03T18:12:19.290Z
SBF, Pascal's Mugging, and a Proposed Solution 2022-11-18T18:39:48.823Z
Don't be a Maxi 2022-07-31T23:59:01.506Z
Types of Friends 2021-06-22T06:04:52.249Z

Comments

Comment by Cole Killian (cole-killian) on LessWrong's (first) album: I Have Been A Good Bing · 2024-05-12T09:07:47.115Z · LW · GW

I love this album, big thank you. I liked the ordering of songs in the full album YouTube video - specifically the way it started with the folk album and later went through the dance album. One minor thing I found confusing is the ordering of songs in the albums on Spotify, YouTube Music, etc. They seem to have a different ordering, and jump between the folk songs and the dance songs. Is this intentional? Do you think the ordering of songs on these various platforms could be updated to match the full album YouTube video?

Comment by Cole Killian (cole-killian) on Satisficers want to become maximisers · 2023-04-26T22:27:16.324Z · LW · GW

Unlike a maximiser, that will attempt to squeeze the universe to every drop of utility that it can, a satisficer will be content when it reaches a certain level expected utility (a satisficer that is content with a certain level of utility is simply a maximiser with a bounded utility function).

Does it make sense to claim that a satisficer will be content when it reaches a certain level of expected utility, though? Some satisficers may work that way, but they don't all need to. Expected utility is a somewhat arbitrary choice of target.

Instead, you could have a satisficer which tries to maximize the probability that the utility is above a certain value. This leads to different dynamics than maximizing expected utility. What do you think?
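Here's a rough sketch of the kind of difference I mean (the gambles and the threshold are made up purely for illustration):

```python
# Two gambles where an expected-utility maximizer and a "probability of
# clearing a threshold" satisficer disagree. Numbers are illustrative only.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def prob_at_least(outcomes, threshold):
    """Probability that utility reaches the satisficing threshold."""
    return sum(p for p, u in outcomes if u >= threshold)

safe = [(1.0, 10)]               # guaranteed 10 utils
risky = [(0.5, 0), (0.5, 100)]   # coin flip between 0 and 100 utils
threshold = 10

print(expected_utility(safe), expected_utility(risky))                  # 10.0 vs 50.0
print(prob_at_least(safe, threshold), prob_at_least(risky, threshold))  # 1.0 vs 0.5
# The expected-utility maximizer prefers `risky`; the threshold satisficer prefers `safe`.
```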

Related post on utility functions here: https://colekillian.com/posts/sbf-and-pascals-mugging/

Comment by Cole Killian (cole-killian) on Robin Hanson on "Explaining the Sacred" · 2023-03-09T20:54:30.431Z · LW · GW

Is there a reason for not link-posting all Overcoming Bias posts to LessWrong?

Comment by Cole Killian (cole-killian) on Welcome & FAQ! · 2023-02-27T20:12:29.649Z · LW · GW

Could you elaborate on the reasoning behind the high bar for Alignment Forum membership?

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-24T00:50:25.267Z · LW · GW

I looked briefly into Ziz. My conclusion is that she had some interesting ideas I hadn't heard before, and some completely ridiculous ones. I couldn't find her definition of "good" or "bad", or where she lays out the idea of tiling the future lightcone with copies of herself.

Thanks for reminding me about that scene from the Matrix. Gave it a look on YouTube. Awesome movie.

I'm wondering, how do you look at the question of what we want to tile the future lightcone with?

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-23T16:11:22.250Z · LW · GW

Yeah, I like the way you describe it.

I'll check out his writings on the history of Buddhism and meditation, thanks.

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-23T01:15:38.390Z · LW · GW

I agree it can be seen as a destructive meme. At the same time, I wonder why it has spread so little. Maybe because it doesn't have a very evangelical property. People who become infected with it might not have much of a desire to pass it on to others.

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-23T00:50:45.297Z · LW · GW

Hey, thanks for the link Richard, that was an interesting read. There definitely seem to be some similarities.

I was actually thinking about what we want to tile the future lightcone with the other day. This was the progression I saw:

  • Conventional Morality :: Do what feels right without thinking much about it.
  • Utilitarianism I :: The atomic unit of "goodness" and "badness" is the valence of human experience. The valence of experience across all humans matters equally. The suffering of a child in Africa matters just as much as the suffering of my neighbor.
  • Utilitarianism II :: The valence of experience across all sentient things matters equally. i.e. The suffering of cows matters too.
  • Utilitarianism III :: The valence of experience across all sentient things across time matters equally. The suffering of sentient things in the future matters just as much as the suffering of my neighbor today. i.e. longtermism
  • Utilitarianism IV :: Understanding valence and consciousness takes a lexicographical preference over any attempt to improve the valence of sentient things as we understand it today because only with this better understanding can we efficiently maximize the valence of sentient things. i.e. veganism is only helpful in its ability to speed up our ability to understand consciousness and release a utilitron shockwave. Everything before the utilitron shockwave can be rounded to zero.
  • Utilitarianism V :: Upon understanding consciousness, we can expect to have our preferences significantly shaken in a way that we can't hope to properly anticipate (we can't expect to have properly understood our preferences with such a weak understanding of "reality"). The lexicographical preference then becomes understanding consciousness and making the "right" decision on what to do next upon understanding it. In this case, it would mean that all of our "moral" actions were only good insofar as they contribute to this revelation and to making the "right" decision upon understanding consciousness.
  • Utilitarianism VI :: ?

Utilitarianism V has some similarities to tiling the future lightcone with copies of yourself which can then execute based on their updated preferences in the future.

But "yourself" is really just a collection of memes. It will be the memes that are propagating themselves like a virus. There's no real coherent persistent definition of "yourself".

What do you want to tile the future lightcone with?

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-22T23:48:12.347Z · LW · GW

I took a look at Meaningness a few months ago but couldn't really get into it. It felt a bit too far from rationality and very hand-wavy.

Did you find Meaningness valuable? I may take another look.

Comment by Cole Killian (cole-killian) on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2023-01-21T06:02:40.460Z · LW · GW

Your assessment seems very accurate!

It didn't occur to me that there are probably many more people like him than I realize. I'm not sure I've met any. Have you?

Comment by Cole Killian (cole-killian) on Timeless Identity · 2022-12-10T21:20:30.043Z · LW · GW

My response is to say that sometimes it doesn't all add up to normality. Sometimes you learn something which renders your previous way of living obsolete.

It's similar to the idea of thinking of yourself as having free will even if it isn't the case: It can be comforting to think of yourself as having continuity of consciousness even if it isn't the case.

Wei Dai posts here (https://www.lesswrong.com/posts/uXxoLPKAdunq6Lm3s/beware-selective-nihilism) suggesting that we "keep all of our (potential/apparent) values intact until we have a better handle on how we're supposed to deal with ontological crises in general". So basically, favor the status quo until you develop an alternative and understand its implications.

What do you think?

Comment by Cole Killian (cole-killian) on An Introduction to Current Theories of Consciousness · 2022-12-09T01:13:55.540Z · LW · GW

Thanks for writing this post.

You mention that:

only conscious beings will ask themselves why they are conscious

But at the same time you support epiphenomenalism, whereby consciousness has no effect on reality.

This seems like a contradiction. Why would only conscious things discuss consciousness if consciousness has no effect on reality?

Also, what do you think about Eliezer's Zombies post? https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted

Comment by Cole Killian (cole-killian) on SBF, Pascal's Mugging, and a Proposed Solution · 2022-11-18T19:34:14.536Z · LW · GW

I think we mostly agree.

That's only clear if you define "long enough" in a perverse way. For any finite sequence of bets, this is positive value. Read SBF's response more closely - maybe you have an ENORMOUSLY valuable existence.

I agree that it's positive expected value when calculated as the arithmetic mean. Even so, I think most humans would be reluctant to play the game even a single time.
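As a rough illustration with my own numbers (not SBF's or the post's): suppose a bet doubles your "existence value" with probability 0.51 and wipes it out with probability 0.49, repeated n times. The arithmetic-mean expected value keeps growing while the chance of ending up with anything at all collapses.

```python
# Repeated double-or-nothing at 51/49 odds. Numbers are illustrative only.

def repeated_bet(n, p_win=0.51, start=1.0):
    expected_value = start * (2 * p_win) ** n  # arithmetic mean grows like 1.02^n
    prob_keep_anything = p_win ** n            # you keep something only by winning every round
    return expected_value, prob_keep_anything

for n in (1, 10, 100):
    ev, p = repeated_bet(n)
    print(f"n={n}: expected value {ev:.2f}, chance of keeping anything {p:.2e}")
# n=100: expected value ~7.24, chance of keeping anything ~5.7e-30
```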

tl;dr: it depends on whether utility is linear or sublinear in aggregation. Either way, you have to accept some odd conclusions.

I agree it's mostly a question of "what is utility". This post is more about building a utility function that matches most human behavior, and showing that if you model utility in a linear, unbounded way, you have to accept some weird conclusions.

The main conflict is between measuring utility as some cosmic value that is impartial to you personally, and a desire to prioritize your own life over cosmic utility. Thought experiments like Pascal's mugging force this into the light.

Personally, I bite the bullet and claim that human/sentient lives decline in marginal value. This is contrary to what most utilitarians claim, and I do recognize that it implies I prefer fewer lives over more in many cases. I additionally give some value to variety of lived experience, so a pure duplicate is less utils in my calculations than a variant.

I don't think this fully "protects" you. In the post I constructed a game where maximizing log utility still leaves you with nothing in 99% of cases. This is why I also truncate low probabilities and bound the utility function. What do you think?

But that doesn't seem to be what you're proposing. You're truncating at low probabilities, but without much justification. And you're mixing in risk-aversion as if it were a real thing, rather than a bias/heuristic that humans use when things are hard to calculate or monitor (for instance, any real decision has to account for the likelihood that your payout matrix is wrong, and you won't actually receive the value you're counting on).

My main justification is that you need to do it if you want your function to model common human behavior. I should have made that more clear.
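For concreteness, here's a minimal sketch of that kind of adjustment; the probability floor, the bound, and the log shape are illustrative choices, not the exact construction from the post:

```python
import math

PROB_FLOOR = 1e-6   # outcomes less likely than this are ignored outright
UTILITY_CAP = 1e6   # utility is bounded above and below

def adjusted_utility(raw_utility):
    """Bounded, sub-linear (log-shaped) utility to model diminishing returns."""
    sign = 1 if raw_utility >= 0 else -1
    return sign * min(math.log1p(abs(raw_utility)), UTILITY_CAP)

def evaluate(outcomes):
    """Expected adjusted utility, skipping outcomes below the probability floor."""
    return sum(p * adjusted_utility(u) for p, u in outcomes if p >= PROB_FLOOR)

# A Pascal's-mugging style offer: astronomical payoff at an astronomically low probability.
mugging = [(1e-12, 1e30), (1 - 1e-12, -1)]
status_quo = [(1.0, 0)]

print(evaluate(mugging))     # about -0.69: the huge-payoff branch is truncated away
print(evaluate(status_quo))  # 0.0: declining the mugging comes out ahead
```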

Comment by cole-killian on [deleted post] 2022-11-18T18:42:21.682Z

I posted a V2 of the post here: https://www.lesswrong.com/posts/WYGp9Kwd9FEjq4PKM/sbf-pascal-s-mugging-and-a-proposed-solution. I'm curious what you think.

The new approach also incorporates (with more details in the post):

  • A bounded utility function to account for human indifference to changes in utility above or below a certain point.
  • A log or sub-log utility function to account for human risk aversion.

Comment by cole-killian on [deleted post] 2022-11-16T22:20:08.405Z

Good point, thanks for the comment. I'll think about it some more and get back to you.

Comment by Cole Killian (cole-killian) on Don't be a Maxi · 2022-08-04T04:50:10.278Z · LW · GW

Gotcha, thanks. Yeah, I should have elaborated more.

I think the general consensus is that it's very unlikely Bitcoin inevitably takes a monopoly position in the cryptocurrency scene, which is the claim the Bitcoin maxi position refers to here.

Vitalik goes into reasons here: https://blog.ethereum.org/2014/11/20/bitcoin-maximalism-currency-platform-network-effects/

But I could have been more charitable to the Bitcoin maxi position.

Comment by Cole Killian (cole-killian) on Don't be a Maxi · 2022-08-04T04:35:50.078Z · LW · GW

Yes, good point. I agree that it's bad advice to ask people to dispose of beliefs which actually work.

I'd also say that disposing of the belief that "an environment of multiple competing cryptocurrencies is undesirable, that it is wrong to launch 'yet another coin', and that it is both righteous and inevitable that the Bitcoin currency comes to take a monopoly position in the cryptocurrency scene" does not forbid somebody from investing in Bitcoin based on some other belief.

I think the advice of asking somebody to find better grounding for beliefs is dangerous because it makes it harder for people to change their minds.

Overall, the advice is a rule of thumb and should not be followed religiously. If you can coincidentally find new grounding for a belief whose old grounding no longer makes sense, you should keep it, like you said.

Comment by Cole Killian (cole-killian) on Don't be a Maxi · 2022-08-04T04:23:11.914Z · LW · GW

Gotcha, yeah, I hadn't considered those terms; my thinking was that there isn't an established standard name for this phenomenon. I haven't seen a standard term for it on LessWrong, but you would think that if such a term existed it would be found here pretty easily. Of the ones you list I think "orthodox" fits best, but they are all highly overloaded and generally used with religious connotations.

I agree that "maxi" doesn't ring a bell outside of the crypto space right now, but my thinking was to introduce it as a term to represent this idea of "belief in belief" and have it spread from there.

"Maximalist" does have a general definition not specific to crypto: a person who holds extreme views and is not prepared to compromise (Oxford languages).