Comments
I reckon I can make it, and it'll be my first in Sydney. Looking forward to it!
I reckon I can make that!
Ah. So if I understand correctly, your intuition about what will satisfice sometimes returns zero information, which certainly happens to me sometimes and, I would guess, to most people. In that situation, I switch from optimising over the decision as presented to optimising over the decision plus the cost of making it.
In most cases, the variance in utility over the spread of outcomes of the decision is outweighed by the reduced cognitive effort and anxiety in the simplified decision procedure. Plus there's the chance of exposure to an unexpected benefit.
In other words, there may be a choice that is better than the current best candidate (however that was derived), and rules may exist that dominate "random", but it's not worth your time and effort to figure them out.
I argued earlier that the only circumstances under which it should be morally acceptable to impose a particular way of thinking on children, is when the result will be that later in life they come to hold beliefs that they would have chosen anyway, no matter what alternative beliefs they were exposed to. And what I am now saying is that science is the one way of thinking — maybe the only one — that passes this test. There is a fundamental asymmetry between science and everything else.
- Nicholas Humphrey, What Shall We Tell the Children?
I try to avoid over-optimising on considered principles. I am willing to accept less-than-optimal outcomes on the criteria I actually consider, because those deficits are more often than not compensated for by reduced thinking time, reduced anxiety, and unexpected results (e.g. the movie turning out to be much better or worse than expected).
'Simple Heuristics That Make Us Smart' indicates that most decisions are actually made by considering a single course of action and taking it unless there is some unacceptable problem with it. What really surprised the researchers was that this often does better than multiple linear regression and stacks up respectably against Bayesian reasoning.
So my answer is, "make random selections from the menu until you hit something you're willing to eat." :)
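That random-menu rule is itself a satisficing heuristic, and can be sketched in a few lines of Python (the function and its names are my own illustration, not from the book):

```python
import random

def satisfice(options, acceptable, rng=random):
    """Pick options in random order; take the first acceptable one.

    A sketch of the rule above: no ranking of the whole menu,
    just a stopping rule on the first candidate that passes.
    Returns None if nothing is acceptable.
    """
    shuffled = list(options)
    rng.shuffle(shuffled)
    for choice in shuffled:
        if acceptable(choice):
            return choice
    return None
```

For example, `satisfice(menu, lambda dish: "anchovy" not in dish)` stops at the first randomly drawn dish you're willing to eat, without ever comparing the rest.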
Unfortunately, there's nothing that says the tea will force your lunch-mate to drink at the first thing you think of that would cause him to choke. You could run through a dozen true and shocking guesses in your head without him feeling any urge to drink. Once you get bored and give up thinking of new hypotheses, if your lunch-mate hasn't drunk from the can, the vendor's guarantee is still intact, because none of the tea has been drunk. Why does this remind me of the halting problem?
On the other hand, if you wait until after your lunch-mate has taken his first sip (taking the risk that something unexpected and shocking will happen when he does so), and you have resolved to take away the drink immediately after his second sip, you might be in a better position.
You also need to hope that the shocking event that causes your companion to choke is not an epiphany on his part where he suddenly deduces one of your secrets.
Another possibility is that they've gone through their own singularity, and no longer have a significant visible (to us) presence in physical space (or at least don't use radio waves any more), i.e. they've transcended.
Naturally we can only speculate about what new laws of physics become identifiable post-singularity, but string theory suggests there's a lot of extra dimensions around to which we don't have direct access (yet?). What if civilisations that transcend tend to find 4D spacetime too limiting and stretch out into the wider universe, like a small-town kid making it big?
It looks like, if it happens for us, it will happen within the next hundred years or so. Considering that we've only been using radio for a bit over a hundred years, that gives a roughly two-hundred-year window for any two pre-singularity civilisations to spot each other before one or both of them transcend. About 0.4 seconds in the year of the cosmos.
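That last figure checks out: mapping an assumed universe age of roughly 13.8 billion years onto a single calendar year, a 200-year window scales to just under half a second.

```python
YEAR_SECONDS = 365.25 * 24 * 3600   # seconds in one calendar year
UNIVERSE_AGE_YEARS = 13.8e9         # assumed age of the universe
window_years = 200                  # radio use until transcendence

# That fraction of cosmic history, as seconds of a one-year calendar.
scaled_seconds = window_years / UNIVERSE_AGE_YEARS * YEAR_SECONDS
print(round(scaled_seconds, 2))  # 0.46
```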
I don't think it's fair to demand a full explanation of a topic that's been around for over two decades (though a link to an online treatment would have been nice). Warrigal didn't 'come up with' fractional values for truth. It's a concept that's been around (central?) in Eastern philosophy for centuries if not millennia, but was more-or-less exiled from Western philosophy by Aristotle's Law of the Excluded Middle.
Fuzzy logic has proven itself very useful in control systems and in AI, because it matches the way people think about the world. Take Hemingway's Challenge to "write one true [factual] sentence" (for which you would then need to show 100% exact correspondence of words to molecules in all relevant situations) and one's perspective can change to see all facts as only partially true, i.e. as having a truth value in [0, 1].
The statement "snow is white" is true if and only if snow is white, but you still have to define "snow" and "white". How far from 100% even reflection of the entire visible spectrum can you go before "white" becomes "off-white"? How much can snow melt before it becomes "slush"? How much dissolved salt can it contain before it's no longer "snow"? Is it still "snow" if it contains purple food colouring?
The same analysis of most concepts reveals we inherently think in fuzzy terms. (This is why court cases take so damn long to pick between the binary values of "guilty" and "not guilty", when the answer is almost always "partially guilty".) In fuzzy systems, concepts like "adult" (age of consent), "alive" (cryonics), "person" (abortion), all become scalar variables defined over n dimensions (usually n=1) when they are fed into the equations, and the results are translated back into a single value post-computation. The more usual control system variables are things like "hot", "closed", "wet", "bright", "fast", etc., which make the system easier to understand and program than continuous measurements.
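As a sketch of how such a scalar variable works, here is a membership function for "white" along the reflectance dimension from the snow example; the cutoff values are invented purely for illustration:

```python
def fuzzy_white(reflectance):
    """Degree to which a surface counts as "white".

    reflectance: mean even reflectance over the visible spectrum, in [0, 1].
    Fully white at 0.9 and above, not white at all at 0.6 and below,
    with a linear ramp between. The thresholds are illustrative only.
    """
    if reflectance >= 0.9:
        return 1.0
    if reflectance <= 0.6:
        return 0.0
    return (reflectance - 0.6) / 0.3

print(round(fuzzy_white(0.97), 3))  # 1.0 -- fresh snow
print(round(fuzzy_white(0.75), 3))  # 0.5 -- "off-white"
```

A fuzzy controller evaluates many such membership degrees at once ("hot", "wet", "fast"), combines them with fuzzy rules, and then defuzzifies back into a single crisp output.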
Bart Kosko's book on the topic is Fuzzy Thinking. He makes some big claims about probability, but he says it boils down to fuzzy logic being just a different way of thinking about the same underlying math. (I don't know whether this gels with the discussion of 'truth functionalism' above.) However, it prompts patterns of thought that would not otherwise make sense, which can lead to novel and useful results.
One reason people might pick the $500 is that they'll come out better off than 85% of the people who take the more rational course. It is little comfort to be able to claim you made the right decision when everyone who made the less rational decision is waving a stack of money in your face and laughing at the silly rationalist. People don't want to be rich; they just want to be richer than their next-door neighbour.
Some paranormal phenomena such as ghost sightings and communication with the dead are actually real, though only able to be perceived by people with a particular sensitivity.
My life has been a protracted hallucination.
One or more gods exist and play an active part in our day-to-day lives.
A previous civilisation developed advanced enough technology to leave the planet and remove all traces of their existence from it.
I would not believe that rationality has no inherent value, i.e. that belief without evidence is a virtue.
A simple variant with interesting results would be to deal everyone one card from a full deck. Anyone who is dealt a diamond is a deceiver. The dealer can be the spokesman, so that role will rotate each turn. This way there is a 1/4 chance that any given person is a deceiver, and a small (roughly 1/4^n) chance that all n players (including the dealer) are trying to deceive each other.
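The "roughly" is because cards are dealt without replacement, so the exact chance that all n players are deceivers is a hypergeometric count rather than (1/4)^n. A quick sketch (function name is mine):

```python
from math import comb

def all_deceivers(n, deck=52, diamonds=13):
    """Exact probability that all n cards dealt from the deck are diamonds."""
    return comb(diamonds, n) / comb(deck, n)

# Compare the exact value with the (1/4)^n approximation.
for n in (2, 3, 4):
    print(n, round(all_deceivers(n), 4), round(0.25 ** n, 4))
```

Without replacement, each diamond dealt makes the next one slightly less likely, so the exact probability sits a little below (1/4)^n.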
Trying to reach the best outcome for everyone with an unknown number of deceivers in the mix? Sounds like life.
Sydney, Australia
But I could make it to Canberra ;)