On reflection, I endorse the conclusion and arguments in this post. I also like that it's short and direct. Stylistically, it argues for a behavior change among LessWrong readers who sometimes make surveys, rather than being targeted at general LessWrong readers. In particular, the post doesn't spend much time or space building interest about surveys or taking a circumspect view of them. For this reason, I might suggest a change to the original post to add something to the top like "Target audience: LessWrong readers who often or occasionally make formal or informal surveys about the future of tech; Epistemic status: action-oriented; recommends behavior changes." It might be nice to have a longer version of the post that takes a more circumspect view of surveys and coordination surveys, that is more optimized for interestingness to general LessWrong readers, and that is less focused on recommending a change of behavior to a specific subset of readers. I wouldn't want this shorter, more direct version to be fully replaced by the longer, more broadly interesting version, though, because I'm still glad to have a short and sweet statement somewhere that just directly and publicly explains the recommended behavior change.
I've been trying to get MIRI to stop calling this blackmail (extortion for information) and start calling it extortion (because it is the definition of extortion). Can we use this opportunity to just make the switch?
I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it's high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.
"Brier scoring" is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it's what I used when designing the Credence Game for similar reasons). It also sets off a negative reaction in me when I see someone naming their world-changing strategy after it. It makes me think the people naming their strategy don't have enough mathematician friends to advise them otherwise... which, as evidenced by these comments, is not the case for CFAR ;) Possible re-naming options that contrast well with "signal boosting"
- Score boosting
- Signal filtering
- Signal vetting
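(To make the scoring-rule point above concrete, here is a minimal sketch, my own illustration rather than anything from CFAR or the post, comparing the Brier and log penalties on a single binary forecast; the example probabilities are arbitrary.)

```python
import math

def brier_penalty(p: float) -> float:
    # Brier penalty when you assigned probability p to the outcome that occurred.
    return (1.0 - p) ** 2

def log_penalty(p: float) -> float:
    # Log penalty (negative log-likelihood) for the same forecast.
    return -math.log(p)

for p in [0.99, 0.9, 0.5, 0.1, 0.01]:
    print(f"p={p:5.2f}  brier={brier_penalty(p):.4f}  log={log_penalty(p):.4f}")

# The Brier penalty is bounded by 1 no matter how confidently wrong you were,
# while the log penalty grows without bound as p -> 0, which is one reason the
# log score treats overconfidence more seriously.
```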
This is a cryonics-fails story, not a cryonics-works-and-is-bad story.
Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn't like your post-cryonics life.
Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn't like your post-cryonics life.
Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn't like your post-cryonics life.
This is an example where cryonics fails, and so not the kind of example I'm looking for in this thread. Sorry if that wasn't clear from the OP! I'm leaving this comment to hopefully prevent more such examples from distracting potential posters.
Hmm, this seems like it's not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:
OP: There's a separate question of whether the outcome is positive enough to be worth the money, which I'd rather discuss in a different thread.
(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I'm not allowed to die for thousands of subjective years or more.
(1) A well-meaning but slightly-too-obsessed cryonics scientist wakes up some semblance of me in a semi-conscious virtual delirium for something like 1000 very unpleasant subjective years of tinkering to try recovering me. She eventually quits, and I never wake up again.
See Nate's comment above:
http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99
And, FWIW, I would also consider anything that spends less than $100k causing a small number of top-caliber researchers to become full-time AI safety researchers to be extremely "effective".
[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]
Just donated $500 and pledged $6500 more in matching funds (10% of my salary).
I would expect not for a paid workshop! Unlike CFAR's core workshops, which are highly polished and get median 9/10 and 10/10 "are you glad you came" ratings, MSFP
- was free and experimental,
- produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
- produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.
1) Logical depth seems super cool to me, and is perhaps the best way I've seen for quantifying "interestingness" without mistakenly equating it with "unlikeliness" or "incompressibility".
2) Despite this, Manfred's brain-encoding-halting-times example illustrates a way a D(u/h) / D(u) optimized future could be terrible... do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be penalized?
I think it would be easy to rationalize/over-fit our intuitions about this formula to convince ourselves that it matches our intuitions about what is a good future. More realistically, I suspect that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u).
Great question! It was in the winter of 2013, about a year and a half ago.
Thanks, fixed!
you cannot use the category of "quantum random" to actual coin flip, because an object to be truly so it must be in a superposition of at least two different pure states, a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn't quite make sense. If I am entangled with a state that you are not entangled with, it may "be superposed" from your perspective but not from any of my various perspectives.
For example: a projection of the universe can be in state
(you observe NULL)⊗(I observe UP)⊗(photon is spin UP) + (you observe NULL)⊗(I observe DOWN)⊗(photon is spin DOWN)
= (you observe NULL)⊗((I observe UP)⊗(photon is spin UP) + (I observe DOWN)⊗(photon is spin DOWN))
The fact that your state factors out means you are disentangled from the joint state of me and the particle, and so together the particle and I are "in a superposed state" from "your perspective". However, my state does not factor out here; there are (at least) two of me, each observing a different outcome and not a superposed photon.
Anyway, having cleared that up, I'm not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce expected and measurable interference patterns. This is what we have failed to produce at a macroscopic level, and it is this failure that you are talking about when you say
a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).
I do not believe I have been shown a convincing empirical test ruling out the possibility that the coin is, from my brain's perspective, in a superposition of vastly many states with amplitudes whose complex arguments are difficult to predict or control well enough to produce clear interference patterns, and half of which are "heads" states and half of which are "tails" states. But I am very ready to be corrected on this, so if anyone can help me out, please do!
Not justify: instead, explain.
I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.
If you're curious about someone else's emotions or perspective, first, remember that there are two ways to encode knowledge of how someone else feels: by having a description of their feelings, or by empathizing and actually feeling them yourself. It is more costly --- in terms of emotional energy --- to empathize with someone, but if you care enough about them to afford them that cost, I think it's the way to go. You can ask them to help you understand how they feel, or help you to see things the way they do. If you succeed, they'll appreciate having someone who can share their perspective.
My summary of this idea has been that life is a non-convex optimization problem. Hill-climbing will only get you to the top of the hill that you're on; getting to other hills requires periodic re-initializing. Existing non-convex optimization techniques are often heuristic rather than provably optimal, and when they are provable, they're slow.
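(For concreteness, here is a toy sketch of the "periodic re-initializing" idea: random-restart hill climbing on a made-up two-hill objective. The objective and step rule are my own arbitrary choices for illustration, not anything from the original discussion.)

```python
import math
import random

def objective(x: float) -> float:
    # A made-up landscape with a small hill near x=1 and a taller hill near x=6.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-0.3 * (x - 6) ** 2)

def hill_climb(x: float, step: float = 0.05, iters: int = 2000) -> float:
    # Greedy local search: only accept moves that improve the objective.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if objective(candidate) > objective(x):
            x = candidate
    return x

# A single run starting near the small hill gets stuck on top of it...
x1 = hill_climb(0.0)
print(x1, objective(x1))

# ...while restarting from random points and keeping the best result
# usually finds the taller hill.
best = max((hill_climb(random.uniform(-5, 12)) for _ in range(20)), key=objective)
print(best, objective(best))
```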
And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them without regard for what filters that argument has been through, or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And it is certainly not to have people ignore their instincts; e.g. CFAR constantly recommends Thinking, Fast and Slow by Kahneman, and teaches exercises to extract more information from emotional and physical senses.
What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI?
I don't think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I'm okay with some agents being worse off with the FAI, if that's the kind of agents they are.
Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.
Non-VNM agents satisfying only axiom 1 have coherent preferences... they just don't mix well with probabilities.
Dumb solution: an FAI could have a sense of justice which downweights the utility function of people who are killing and/or procreating to game their representation in the AI's utility function, or something like that, to disincentivize it. (It's dumb because I don't know how to operationalize justice; maybe enough people would not cheat, and would want to punish the cheaters, that the FAI would figure that out.)
Also, given what we mostly believe about moral progress, I think defining morality in terms of the CEV of all people who ever lived is probably okay... they'd probably learn to dislike slavery in the AI's simulation of them.
Thanks for writing this up!
I don't see how it could be true even in the sense described in the article without violating Well Foundation somehow
Here's why I think you don't get a violation of the axiom of well-foundation from Joel's answer, starting from way-back-when-things-made-sense. If you want to skim and intuit the context, just read the bold parts.
1) Humans are born and see rocks and other objects. In their minds, a language forms for talking about objects, existence, and truth. When they say "rocks" in their head, sensory neurons associated with the presence of rocks fire. When they say "rocks exist", sensory neurons associated with "true" fire.
2) Eventually the humans get really excited and invent a system of rules for making cave drawings like "∃" and "x" and "∈" which they call ZFC, which asserts the existence of infinite sets. In particular, many of the humans interpret the cave drawing "∃" to mean "there exists". That is, many of the same neurons fire when they read "∃" as when they say "exists" to themselves. Some of the humans are careful not to necessarily believe the ZFC cave drawing, and imagine a guy named ZFC who is saying those things... "ZFC says there exists...".
3) Some humans find ways to write a string of ZFC cave drawings which, when interpreted --- when allowed to make human neurons fire --- in the usual way, mean to the humans that ZFC is consistent. Instead of writing out that string, I'll just write Con(ZFC) in place of it.
4) Some humans apply the ZFC rules to turn the ZFC axiom-cave-drawings and the Con(ZFC) cave drawing into a cave drawing that looks like this:
"∃ a set X and a relation e such that <(X,e) is a model of ZFC>"
where <(X,e) is a model of ZFC> is a string of ZFC cave drawings that means to the humans that (X,e) is a model of ZFC. That is, for each axiom A of ZFC, they produce another ZFC cave drawing A' where "∃y" is always replaced by "∃y∈X", and "∈" is always replaced by "e", and then derive that cave drawing from the cave drawings "ZFC" and "Con(ZFC)" according to the ZFC rules.
Some cautious humans try not to believe that X really exists... only that ZFC and the consistency of ZFC imply that X exists. In fact if X did exist and ZFC meant what it usually does, then X would be infinite.
5) The humans derive another cave drawing from ZFC+Con(ZFC):
"∃Y∈X and f∈X such that <(Y,f) is a model of ZFC>",
6) The humans derive yet another cave drawing,
"∃ZeY and geX such that <(Z,g) is a model of ZFC>".
Some of the humans, like me, think for a moment that Z∈Y∈X, and that if ZFC can prove this pattern continues then ZFC will assert the existence of an infinite regress of sets violating the axiom of well-foundation... but actually, we only have "ZeY∈X" ... ZFC only says that Z is related to Y by the extra-artificial e-relation that ZFC said existed on X.
I think that's why you don't get a contradiction of well-foundation.
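(Compressing the point above into notation, as my own summary rather than anything from Joel's answer:)

```latex
\[
\text{Well-foundation forbids an infinite chain } \cdots \in x_2 \in x_1 \in x_0,
\qquad\text{but the construction only gives } Z \mathrel{e} Y \in X,
\]
\[
\text{where } e \text{ is a set-coded relation on } X \text{ asserted by ZFC, not the real } \in,
\text{ so no infinite } \in\text{-descending chain appears.}
\]
```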
testing this symbol: ∃
That was imprecise, but I was trying to comment on this part of the dialogue using the language that it had established
Ah, I was asking you because I thought using that language meant you'd made sense of it ;) The language of us "living in a (model of) set theory" is something I've heard before (not just from you and Eliezer), which made me think I was missing something. Us living in a dynamical system makes sense, and a dynamical system can contain a model of set theory, so at least we can "live with" models of set theory... we interact with (parts of) models of set theory when we play with collections of physical objects.
Models being static is a matter of interpretation.
Of course, time has been a fourth dimension for ages ;) My point is that set theory doesn't seem to have a reasonable dynamical interpretation that we could live in, and I think I've concluded it's confusing to talk like that. I can only make sense of "living with" or "believing in" models.
Help me out here...
One of the participants in this dialogue ... seems too convinced he knows what model he's in.
I can imagine living in a simulation... I just don't understand yet what you mean by living in a model in the sense of logic and model theory, because a model is a static thing. I heard someone once before talk about "what are we in?", as though the physical universe were a model, in the sense of model theory. He wasn't able to operationalize what he meant by it, though. So, what do you mean when you say this? Are you considering the physical universe a first-order structure somehow? If so, how? And concerning its role as a model, what formal system are you considering it a model of?
Until I'm destroyed, of course!
... but since Qiaochu asked that we take ultrafinitism seriously, I'll give a serious answer: something else will probably replace ultrafinitism as my preferred (maximum a posteriori) view of math and the world within 20 years or so. That is, I expect to determine that the question of whether ultrafinitism is true is not quite the right question to be asking, and have a better question by then, with a different best guess at the answer... just because similar changes of perspective have happened to me several times already in my life.
I also wish both participants in the dialogue would take ultrafinitism more seriously.
For what it's worth, I'm an ultrafinitist. Since 2005, at least as far as I've been able to tell.
If you want to make this post even better (since apparently it's attracting massive viewage from the web-at-large!), here is some feedback:
I didn't find your description of the owl monkey experiment very compelling,
If a monkey was trained to keep a hand on the wheel that moved just the same, but he did not have to pay attention to it… the cortical map remained the same size.
because it wasn't clear that attention was causing the plasticity; the temporal association of subtle discriminations with rewards could plausibly cause plasticity directly, without attentional control being an intermediate causal link. I.e., because attention is a latent variable in the monkeys, either of the following could explain the observations:
(1) {attention} <-- {discrimination associated with reward} --> {plasticity}
(2) {discrimination associated with reward} --> {attention} --> {plasticity}
It's the human studies you cited but didn't describe, e.g. Heron et al (2010), that really pin down the {attention} --> {plasticity} arrow, because we can verbally direct humans to pay attention to something without requiring more discrimination from that group compared to a non-attentive group. In particular, Heron et al didn't just replicate the findings in the monkeys as you said...
This finding has since been replicated in humans, many times (for instance [5, 6]).
... they actually tested a direct causal link from {attention} to {co-opting neurons}, which makes your point much more convincing, I think :)
So if you're reading this, I suggest editing in the human study! And also this helpful comment you wrote.
I'm pretty sure that the idea of the previous two paragraphs has been talked about before, but I can't find where.
On LessWrong: VNM expected utility theory: uses, abuses, and interpretation (shameless self-citation ;)
On Wikipedia: Limitations of the VNM utility theorem
+1 for sharing; you seem the sort of person my post is aimed at: so averse to being constrained by self-image that you turn a blind eye when it affects you. It sounds to me like you are actively trying to suppress having beliefs about yourself:
People around me have a model of what Dan apparently is which is empathetic, nice, generous etc. I'm always the first to point out a bias such as racism or nonfactual emotional opinions etc. I don't have to see myself as any of those things though.
I've been there, and I can think of a number of possible causes of this aversion:
Possibility #1: You see that other people are biased by their self-images in harmful ways, so you try not to have any self image that might resemble one that they would have. What you end up with is something like a "moral calculator" self-image, or a "really objective guy" self-image:
All I have to do is keep asking questions properly and at the right time and then output a response. No i'm not a calculator but the results are good according to everyone I meet and interact with.
This distinguishes you from others in a way that doesn't activate your "don't screw up like them" alarm bells.
Possibility #2: You are mildly disgusted by human biases and limitations, and find using the story-like heuristics of "common people" quaint but distasteful. This gives you a "too good for that silly human-think" self-image, which biases you to ignore methods of thinking that are especially useful for humans if employed correctly (i.e., as moderate-bias-high-accuracy estimators). No one is saying "go think like all your wrong friends now" or "stop having real-time assessments of things", and the fact that you interpreted the post in that way suggests that you are somewhat sensitive to this issue. I'm saying to spend some time understanding the strengths of common emotional heuristics like narrative, not just their weaknesses, so you can make a better decision about when and how to use them.
One final comment:
I do make decisions as they come up and if I ever was to base one off the fact that "that's what Dan would do" then that throws up a red flag to me.
It should. This should also throw up a red flag:
I don't decide what is right in advance because if i do anything to predetermine my answer before a question arises then i'm starting off with a bias.
You are not going to escape having to cache some of your thoughts. Computers do it, AIs are going to do it, people do it, and you do it. When I learned linear algebra, I made myself re-derive every theorem and its dependencies, back to the field axioms, in my head every time I used them... but eventually I had to stop in order to follow seminar talks that would use 5 major results in a span of 10 seconds. It was inevitable. And really, you don't add 12 to itself 12 times every time you compute 12x12, even if you feel like a calculator. You don't re-derive the distributive law from first principles every time you use a multiplication algorithm. And if you do, you're going to be unnecessarily --- dare I say irrationally --- slower than otherwise ;)
The best thing to do is accept this fact, so that you can start caching instructions like keep an eye out for the following exception to this other cached instruction or watch out that I don't think I'm a calculator and assume I'm immune to biases arising from my own self-image.
If you're a "people-hater" who is able to easily self-modify, why do you still "hate" people? Are you sure you're not rationalizing the usefulness of your dislike of others? What do you find yourself saying to yourself and others about what being a "people-hater" achieves for you. Are there other ways for you to achieve those without hating people? What do you find yourself saying to yourself and others about why it's hard to change? What if in a group of 20+ people interested in rationality, someone has a cool trick you haven't tried?
Even if you're 90% sure you should hate people, you're 10% sure you shouldn't. Supposing you were wrong, what would it be worth to you, e.g. in hours of happiness or positive effects on others, integrated over the rest of your life, to find that out? You'd have an interesting chance of finding that out immersed for a few days in a group of people who could produce interesting arguments for both sides of the issue.
I say this as someone who used to dislike people a lot, changed on purpose, and am now happier and doing better for it ;)
ETA: But I think I see what Anna is saying about not attending if you hate being near people...
You're describing costly signaling. Contrary to your opening statement,
The word 'signalling' is often used in Less Wrong, and often used wrongly.
people on LessWrong are usually using the term "signalling" consistently with its standard meaning in economics and evolutionary biology. From Wikipedia,
In economics, more precisely in contract theory, signalling is the idea that one party credibly conveys some information about itself to another party
Within evolutionary biology, signalling theory is a body of theoretical work examining communication between individuals. The central question is when organisms with conflicting interests should be expected to communicate "honestly".
In particular, the ev bio article even includes a section on dishonest signalling, which seems to be what you're complaining about here:
Seriously though, "signalling" is being used to mean "tricking people in to thinking that you are".
This post is still interesting as a highlight reel of different examples of signalling, and shows that the term is, in its standard usage, rather non-specific. It's just not an illustration that people here are using it wrongly.
tl;dr: I was excited by this post, but so far I find reading the cited literature uncompelling :( Can you point us to a study we can read where the authors reported enough of their data and procedure that we can all tell that their conclusion was justified?
I do trust you, Yvain, and I know you know stats, and I even agree with the conclusion of the post --- that people are imperfect introspectors --- but I'm discouraged to continue searching through the literature myself at the moment because the first two articles you cited just weren't clear enough on what they were doing and measuring for me to tell if their conclusions were justified, other than by intuition (which I already share).
For example, none of your summaries says whether the fraction of people noticing the experimenters' effect on their behavior was enough to explain the difference between the two experiment groups, and this seems representative of the 1977 review article you cited as your main source as well.
I looked in more detail at your first example, the electric shocks experiment (Nisbett & Schachter, 1966), on which you report
... people who took the pill tolerated four times as strong a shock as controls ... Only three of twelve subjects made a connection between the pill and their shock tolerance ...
I was wondering, did the experimenters merely observe
(1) a "Statistically Significant" difference between PILL-GROUP and CONTROL-GROUP? And then say "Only 3 of 12 people in the pill group managed to detect the effect of the placebo on themselves?"
Because that's not a surprise, given the null hypothesis that people are good introspectors... maybe just those three people were affected, and that caused the significant difference between the groups! And jumping to conclusions from (1) is a kind of mistake I've seen before from authors assuming (if not in their minds, at least in their statistical formulae) that an effect is uniform across people, when it clearly probably isn't.
Or, did the experimenters observe that
(2) believing that only those three subjects were actually affected by (their knowledge of) the pill was not enough to explain the difference between the groups?
To see what the study really found, after many server issues with the journal website I tracked down the original 1966 article, which I've made available here. The paper doesn't mention anything about people's assessments of whether being (told they were) given a pill may have affected their pain tolerance.
Wondering why you wrote that, I went to the 1977 survey article you read, which I've made available as a searchable pdf here. There they say, at the bottom left of page 237, that their conclusion about the electric shocks vs pills was based on "additional unpublished data, collected from ... experiments by Nisbett and Schachter (1966)". But their description of that was almost as terse as your summary, and in particular, included no statistical reasoning.
Like I said, I do intuitively agree with the conclusion that people are imperfect introspectors, but I worry that the authors and reviewers of this article may have been sloppy in finding clear, quantitative evidence for this perspective, perhaps by being already too convinced of it...
Sometimes I have days of low morale where I don't get much done, and don't try to force myself to do things because I know my morale is low and I'll likely fail. I'm experimenting with a few different strategies for cutting down on low-morale days... I'd like to have ... better motivation (which might allow me to work on things with less willpower/energy expenditure),
Morale, and reducing the need for willpower / conscious effort, are things I've had success with using self-image changes, e.g. inspired by Naruto :) So...
those things seem to me to be more about trying out a wide variety of techniques and empirically determining what works.
... I'd say paying close attention to how you see yourself and your place in the world during times of low morale is definitely worth experimenting with. I'd actually be quite surprised if there aren't variables at play there which, if changed, would cause changes in your morale.
I avoid listening to Britney Spears, for instance, because I don't want to be the sort of person who likes Britney Spears.
Hah! It's funny you should mention that! Liking Britney Spears was one of the first intentional changes I made to myself at the time I mentioned in the post when I was 16 and trying to be self-image-free. It worked; I realized I naturally quite liked most of her hits, and I always perk up when her music comes on the radio. It's nice not to have to hate it :)
I'm not saying rationalists should avoid engaging in ritual like the plague; but I do a lot of promoting of CFAR and rationality to non-LW-readers, and I happen to know from experience that a post like this in Main sends bad vibes to a lot of people. Again, I think it's sad to have to worry so much about image, but I think it's a reality.
Thanks for sharing this, Quiet; I'm sad to say I agree with you. I think rationality as a movement can't afford to be associated with ritual. It's just too hard to believe that it's not a failure mode. I personally find Raemon's perspective inspiring and convincing. Raemon, it seems to me that you have a very sane perspective on the role of ritual in people's lives. And I'm all about trying to acknowledge and work with our own emotional needs, e.g. in this post. But I personally think openly associating with Ritual with a Capital R is just too sketchy looking for the community. It saddens me to have to worry about such alarm bells going off, but I think it's the reality.
Of course there are other easier-to-worry-about negative effects of ritual than simply appearances; what I'm saying is that, Raemon, even if you are able to avoid those failure modes --- and I have to say, to me, you seem very trustworthy in this regard --- I think strong ritual associations are worth avoiding on signaling grounds alone.
I would still tend to say that 1/3 apiece is the fair division
I'm curious why you personally just chose to use the norm-connoting term "fair" in place of the less loaded term "equal division" ... what properties does equal division have that make you want to give it special normative consideration? I could think of some, but I'm particularly interested in what your thoughts are here!
Kind of, though "intrinsic uncertainty" also suggests the possibility that the subsystems might be generating moral intuitions which simply cannot be reconciled and that the conflict might be unresolvable unless one is willing to completely cut away or rewrite parts of their own mind.
Don't you think that things being perfectly balanced in a way such that there is no resolution is sort of a measure zero set of outcomes? In drift-diffusion models of how neural groups in human and animal brains arrive at decisions/actions (explained pretty well here), even if the drift term (tendency to eventually favor one outcome) is zero, the diffusion term (tendency to randomly select some outcome) would eventually result in a decision being made, with probability 1, where more subtle conflicts tend to take more time to resolve.
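(Here is a minimal toy simulation of that point, my own sketch rather than anything from the linked explainer: even with the drift term set to zero, the noise term alone drives the decision variable to one of the two thresholds, so a choice is always made eventually.)

```python
import random

def decide(drift: float = 0.0, noise: float = 0.1, threshold: float = 1.0):
    # Accumulate noisy evidence until it crosses one of the two decision thresholds.
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += drift + random.gauss(0.0, noise)
        steps += 1
    return ("option A" if x > 0 else "option B"), steps

# With drift = 0 the options are perfectly balanced, yet every run still
# terminates in a choice; the balance only shows up as longer deliberation times.
for _ in range(5):
    print(decide(drift=0.0))
```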
This is why I prefer to think of those situations as "not finished computing" rather than "intrinsically unresolvable".
Do you maybe have a different notion of resolving a conflict, that makes unresolvedness a sustainable situation?
Nice post. Do I understand you correctly that what you call "Intrinsic Moral Uncertainty" is the feeling of unresolved conflict between subsystems of our moral-intuition-generators? If so, I'd suggest calling it "Mere internal conflict" or "Not finished computing" or something more descriptive than "Intrinsic".
The Linguistic Consistency Fallacy: claiming, implicitly or otherwise, that a word must be used in the same way in all instances.
I'm definitely talking about the concept of purpose here, not the word.
in my experience they tend to say something like "Not for anyone in particular, just sort of "ultimate" purpose"... That said, the fact that everywhere else we use the word "purpose" it is three-place is certainly a useful observation. It might make us think that perhaps the three-place usage is the original, well-supported version, and the other one is a degenerate one that we are only using because we're confused. But the nature of that mistake is quite different. ... If you think I'm splitting hairs here,
I don't think you're splitting hairs; this is not a word game, and perhaps I should say in the post that I don't think just saying "Purpose to whom?" is the way to address this problem in someone else. In my experience, saying something like this works better:
"The purpose of life is a big question, and I think it helps to look at easier examples of purpose to understand why you might be looking for a purpose of life. First of all, you may be lacking satisfaction in your life for some reason, and framing this to yourself in philosophical terms like "Life has no purpose, because ." If that's true, it's quite likely that you'd feel differently if your emotional needs as a social primate were being met, and in that sense the solution is not an "answer" but rather some actions that will result in these needs being met.
Still, that does not address the [reason]. So because "What's the purpose of life?" may be a hard question, let's look at easier examples of purpose and see how they work. Notice how they all have someone the purpose is to? And how that's missing in your "purpose of life" question? That means you could end up feeling one of two ways:
(1) Satisfied, because now you can just ask "What could be the purpose of my life to [person X]?", etc., and come up with answers, or
(2) unsatisfied because there is no agent to ask about such that the answer would satisfy you.
And I claim that whether you end up at (1) or (2) is more a function of whether your social primate emotional needs are being met than of any particular philosophical argument."
Done :)
I actually think people who say "Life has no meaning and everything I do is pointless" are making a deeper mistake than confusing connotations with denotations... I think they're actually making a denotational error in missing that e.g. "purpose" or "pointfulness" typically denotes a ternary relationship of the form "The purpose of X to Y is Z." In other words, one must ask or tacitly understand "purpose to whom?" and "meaning to whom?" before the statement makes any sense.
My favorite connotationally-heavy follow-up to this is that "My life has as many purposes as there are agents it has a purpose to."
I've been wanting to post on this under the name "Relation Projection Fallacy" for a while now, so I just did :)
It would help you and other commenters to have an example in mind of something you want to change about yourself, and what methods you've already tried. Do you already do everything that you think you should? Do you ever procrastinate? Do you ever over-weight short-term pains against long-term gains? Is there anything you don't enjoy such that people who enjoy that thing have better lives than you, in your estimation?
If you answer one of these questions positively, and you have not been paying attention to conscious and unconscious aspects of self-image, I'd expect low hanging fruit there to get yourself to change. If you're comfortable posting what you want to change and what you've already tried, especially to one of the commenters who seems to take benefit from using narrative to motivate themselves, maybe they'll offer you some ideas.
(I'm not saying that self image is the most important factor here, only that it might be an important marginal factor if you have been ignoring it.)