"Local rationalist learns to beat akrasia using this one weird trick!"
Jonathan Livingston Seagull doggedly pursues his passion, improving continuously, and as he does so, the needs and limits of his body fall away. By flying, he no longer needs to eat. By flying, he no longer needs to age. By flying, he no longer needs to die. Richard Bach allows his character to surpass realistic diminishing returns, distractions, and danger.
That, uh... that sounds pretty good.
Strength is determined by biology and behavior; the stereotype reflects both biological reality and cultural expectations. Note that boys are/were expected to be stronger than girls even before puberty actually creates a meaningful biological gap...
Can we safely tack "processed sugar bad" onto that list?
Being cuckolded (in this context) means unknowingly raising not-your-genetic-offspring while believing it is your own. A male partner's infidelity can't cuckold a woman.
But I imagine most mothers would be horrified to discover, ten years after the fact, that their newborn was stolen and replaced with someone else's, and the child they've devoted so much to is not actually their genetic offspring. A brief bout of Googling indicates that hospital baby swaps can spark multimillion-dollar lawsuits, sometimes successful...
Me, 60%.
Are AI players actually indistinguishable from humans in Chess? Could an interrogator not pick out consistent stylistic differences between equally-ranked human and AI players?
...hmm.
If we perfectly understood the decision-making process and all its inputs, there'd be no black box left to label 'free will.' If instead we could perfectly predict the outcomes (but not the internals) of a person's cognitive algorithms... so we know, but don't know how we know... I'm not sure. That would seem to invite mysterious reasoning to explain how we know, for which 'free will' seems unfitting as a mysterious answer.
That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.
"If you're so good at logical reasoning why didn't you create the next killer app"
'Designing the next killer app' seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.
Seems like idea-fights between humans result in vastly more effort put into the fight than into the idea.
Twenty lines of close paren.
If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy another ticket. And then rationally compelled to buy another.
Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
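A minimal sketch of that iteration problem (the bankroll, the $1 ticket price, and the "inconsequential" threshold are all numbers I made up for illustration):

```python
# Toy model: a rule of "buy a ticket whenever $1 is inconsequential"
# iterates until the money is nearly gone, absent an injunction
# against iteration.

def dollar_is_inconsequential(bankroll, threshold=20):
    # Hypothetical cutoff: $1 only starts mattering once you're near broke.
    return bankroll - 1 >= threshold

bankroll, tickets = 100, 0
while dollar_is_inconsequential(bankroll):
    bankroll -= 1  # each purchase looks exactly as harmless as the first
    tickets += 1

print(tickets, bankroll)  # 80 20 -- the rule only halts at the cutoff
```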
Couldn't you start with easier cognitive dissonances, and work your way up?
For "I did A but could have done otherwise" I see two coherent meanings:
1) My mind produced A from the local conditions, but a conceivable different mind with otherwise identical local conditions would've produced not A. My mind is therefore a crucial causal factor in the reality of A.
OR
2) From my limited knowledge, I cannot trace the causal steps to A that precede my decision well enough to determine, from those steps alone, the decision I make which leads to A.
...actually, probably both.
So, the causal steps to A include my decision (and A is inconsistent with certain decisions that differ from my real one), but I cannot trace the causal steps of my decision precisely enough to have precluded those differing decisions (without already knowing the reality of my decision).
Alternatively: if we work from full knowledge of the causal path to A, except that we treat my cognition as a black box whose outcome we don't know, we could not conclude A even with unlimited processing power.
'Free will' is the halting point in the recursion of mental self-modeling.
Our minds model minds, and may model those minds' models of minds, but cannot model an unlimited sequence of models of minds. At some point it must end on a model that does not attempt to model itself; a model that just acts without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition assumes the black box to be a fundamental feature of our minds, and not merely our failure to model them perfectly.
This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others' minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.
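A toy sketch of that halting point (the depth budget and the names are invented, purely illustrative):

```python
# Toy model: a mind modeling a mind modeling a mind, down to a fixed budget.
# The recursion cannot continue forever; it bottoms out in an opaque base
# case -- the "black box" that intuition then labels free will.

def model_mind(depth_budget):
    if depth_budget == 0:
        # No resources left to model this mind's own modeling:
        # it "just acts," with no further explanation available.
        return "black box"
    # Otherwise, explain this mind via its model of another mind.
    return {"models": model_mind(depth_budget - 1)}

print(model_mind(3))
# {'models': {'models': {'models': 'black box'}}}
# However large the budget, the innermost layer is always a black box.
```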
...d-do I get the prize?
By examining our cognitive pieces (techniques, beliefs, etc.) one at a time in light of the others, we check not for adherence of our map to the territory but rather for the map's self-consistency.
This would appear to be the best an algorithm can do from the inside. Self-consistent may not mean true, but it does mean the algorithm can't find anything wrong with itself. (Of course, if your algorithm relies on observational inputs, there should be a theoretical set of observations which would break its self-consistency and thus force further reflection.)
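As a rough sketch (the propositions and the single inference rule are invented for illustration), the check is the map cross-examining itself, never touching the territory:

```python
# Toy model: examine cognitive pieces in light of one another.
# A mutually consistent set passes even if every member is false;
# a new observation can break consistency and force further reflection.

beliefs = {
    "ravens_are_black": True,
    "this_bird_is_a_raven": True,
    "this_bird_is_black": True,
}

def consistent(bs):
    # The toy's only inference rule: the first two beliefs entail the third.
    if bs["ravens_are_black"] and bs["this_bird_is_a_raven"]:
        return bs["this_bird_is_black"]
    return True

print(consistent(beliefs))             # True: the map agrees with itself
beliefs["this_bird_is_black"] = False  # an observation arrives
print(consistent(beliefs))             # False: time to reflect
```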
That's not a proper mugging.
"If you don't donate $5 to SIAI, the entire multiverse will be paperclip'd because we couldn't build FAI before uFAI took over."
...whereas with vampires, you're stuck pointing to a collection of fictional representations. This restricts certain information-gathering techniques (you can't put a vampire under a microscope; at best, you can use a fictional account of a vampire under a microscope) but shouldn't make the exercise impossible. I'm pretty sure we could convey 'stop sign' without ever letting you observe a real-life stop sign.
Literature in English class generally serves as reading practice, and as an odd excuse to practice composing thoughts for other people to read. Literature is the vehicle rather than the purpose, unless you're looking at a literature degree.
I'm curious how to test an understanding of literature, and what purpose one serves. Intuitively, a person well-versed in literature should be better equipped to write or recommend fiction than a person who is not well-versed in literature. Is there another benefit one might test?
Hmm.
So, it's not sufficient to define a set of steps that determine a number... it must be possible to execute them? That's a rather pragmatic approach. Albeit one you'd have to keep updating if our power to compute and comprehend lengthier series of steps grows faster than you predict.