Posts

[link] Pedro Domingos: "The Master Algorithm" 2015-11-30T22:28:48.575Z

Comments

Comment by GMHowe on Bad intent is a disposition, not a feeling · 2017-05-01T06:32:05.590Z · LW · GW

That may well be true, but I should clarify that neither of my hypotheticals requires or suggests that bad faith communication was more common in the past. They do suggest that assumptions of bad faith may have been significantly more common than actual bad faith, and that this hypersensitivity may have been adaptive in the ancestral environment but maladaptive now.

Comment by GMHowe on Bad intent is a disposition, not a feeling · 2017-05-01T04:39:54.940Z · LW · GW

It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive?

You may not be wrong, but I don't think it would necessarily be surprising. We adapted under social conditions radically different from those that exist today. It may no longer be adaptive.

Hypothesis: In small tribes and family groups, assumptions of bad faith may have served to help negotiate away from unreasonable positions, while strong familial ties and respected third parties mostly mitigated the harms. Conflicts between tribes without familial connections, however, may have tended to escalate (although there are ways to mitigate this too).

Hypothesis: Perhaps assumptions of good and bad faith were reasonably accurate in small tribal and familial groups, but in intertribal disagreements there was a tendency to assume bad faith, because the cost of assuming good faith and being wrong was so much higher than the cost of assuming bad faith and being wrong.

Comment by GMHowe on Crony Beliefs · 2016-11-04T01:08:14.995Z · LW · GW

I really liked this post. I thought it was well written and thought provoking.

I do want to push back a bit on one thing though. You write:

What makes for a crony belief is how we're rewarded for it. And the problem with beliefs about climate change is that we have no way to act on them — by which I mean there are no actions we can take whose payoffs (for us as individuals) depend on whether our beliefs are true or false.

It is true that most of us probably won't take actions whose payoffs depend on beliefs about global warming, but it is not true that there are no such actions. One could simply make bets about the future global average temperature.

So the problem is not that there are no actions we can take whose payoffs depend on whether our beliefs are true or false. Rather, beliefs about global warming are likely to be crony beliefs because the subject has become highly political. And as you correctly point out, in politics social rewards completely dominate pragmatic rewards.

To illustrate, it is even harder to find actions we can take whose payoffs depend on the accuracy of the belief that the Great Red Spot is a persistent anticyclonic storm on the planet Jupiter. Does this mean that a belief in the Great Red Spot is even more likely to be cronyistic than a belief regarding global warming?

Comment by GMHowe on Scott Aaronson: Common knowledge and Aumann's agreement theorem · 2015-12-22T00:20:23.636Z · LW · GW

Thanks, I did end up figuring out my error.

Comment by GMHowe on Scott Aaronson: Common knowledge and Aumann's agreement theorem · 2015-08-23T21:16:41.033Z · LW · GW

Maybe I'm confused, but in the 'muddy children puzzle' it seems it would be common knowledge from the start that at least 98 children have muddy foreheads. Each child sees 99 muddy foreheads. Each child could reason that every other child must see at least 98 muddy foreheads: 100, minus their own forehead (which they cannot see), minus the other child's forehead (which that child cannot see), equals 98.

What am I missing?
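One way to see what the repeated questioning adds is a small possible-worlds simulation of the puzzle (a rough sketch of my own, not taken from the linked post; the function names are just illustrative). With k muddy children, everyone steps forward only on round k, even though each child can see nearly every muddy forehead from the start; the extra information arrives through the successive public announcements that nobody has stepped forward yet.

```python
from itertools import product

def muddy_children(actual):
    """Simulate the muddy children puzzle by brute-force possible-worlds reasoning.

    actual: tuple of 0/1 flags, 1 = muddy forehead (at least one must be 1).
    Returns (round on which someone first steps forward, list of who steps forward).
    """
    n = len(actual)
    # Worlds consistent with the public announcement "at least one of you is muddy".
    worlds = {w for w in product((0, 1), repeat=n) if any(w)}

    def knows(child, world, worlds):
        # In `world`, a child knows their own status iff every world they cannot
        # distinguish from it (identical except possibly their own forehead)
        # agrees on their own forehead.
        indist = [w for w in worlds
                  if all(w[j] == world[j] for j in range(n) if j != child)]
        return len({w[child] for w in indist}) == 1

    rounds = 0
    while True:
        rounds += 1
        knowers = [i for i in range(n) if knows(i, actual, worlds)]
        if knowers:
            return rounds, knowers
        # Nobody stepped forward: that public fact eliminates every world
        # in which someone *would* have stepped forward this round.
        worlds = {w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(n))}

# Brute force only scales to a handful of children, but the pattern is the point:
# with k muddy children, everyone steps forward on round k.
print(muddy_children((1, 1, 1, 1)))  # -> (4, [0, 1, 2, 3])
```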

Comment by GMHowe on Rationality Quotes Thread August 2015 · 2015-08-21T20:35:33.312Z · LW · GW

Desire is a contract you make with yourself to be unhappy until you get what you want.

Naval Ravikant

Comment by GMHowe on [Link]: The Unreasonable Effectiveness of Recurrent Neural Networks · 2015-06-04T23:05:49.205Z · LW · GW

You can see more results here: Image Annotation Viewer

Judging generously, but based on only about two dozen image captions, I estimate it gives a passably accurate caption about one-third of the time. This may be impressive given the simplicity of the model, but it doesn't seem unreasonably effective to me, and I don't immediately see the relevance to strong AI.

Comment by GMHowe on Why isn't the following decision theory optimal? · 2015-04-16T03:56:10.101Z · LW · GW

Let's say you precommit to never paying off blackmailers. The advantage of this is that you are no longer an attractive target for blackmailers, since they will never get paid off. However, if someone blackmails you anyway, your precommitment now puts you at a disadvantage, so now (under NDT) you would act as if you had a precommitment to comply with the blackmailers all along, since at this point that would be the advantageous precommitment to have made.
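To make that failure mode concrete, here is a toy payoff sketch (the numbers, the policy names, and the simple predictor-style blackmailer are my own illustration, not from the post):

```python
# Illustrative payoffs (assumed for the sketch): the blackmailer demands DEMAND;
# if refused, they carry out a threat that costs the victim HARM.
DEMAND, HARM = 100, 1000

def blackmailed(policy):
    # A blackmailer who can predict the victim's policy only bothers
    # when they expect to be paid.
    return policy == "pay if blackmailed"

def victim_loss(policy):
    if not blackmailed(policy):
        return 0
    return DEMAND if policy == "pay if blackmailed" else HARM

# A genuine "never pay" precommitment deters blackmail entirely:
print(victim_loss("never pay"))            # 0
# The NDT agent re-derives the "advantageous precommitment" after the blackmail
# has already arrived, so in effect its policy is "pay if blackmailed":
print(victim_loss("pay if blackmailed"))   # 100
```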

Comment by GMHowe on Rationality Quotes Thread April 2015 · 2015-04-04T22:48:57.694Z · LW · GW

Audio here: Mitchell and Webb - The Boy Who Cried Wolf.

Comment by GMHowe on Request for Steelman: Non-correspondence concepts of truth · 2015-03-25T21:13:38.477Z · LW · GW

It's a funny joke but beside the point. Knowing that he is in a balloon about 30 feet above a field is actually very useful. It's just useless to tell him what he clearly already knows.

Comment by GMHowe on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-20T19:28:59.852Z · LW · GW

I recall an SF story that took place on a rotating space station orbiting Earth that had several oddities. The station had greater than Earth gravity. Each section was connected to the next by a confusing set of corridors. The protagonist did some experiments draining water out of a large vat and discovered a Coriolis effect.

So, spoiler alert: it turned out that the space station was a colossal fraud. It was actually on a massive centrifuge on Earth.

Comment by GMHowe on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-07T03:47:33.664Z · LW · GW

Due to the finite speed of sound, the explosion would have had to occur approximately 20 seconds before they heard it. So if Voldemort's death was coincident with the explosion, it would have had to happen about 20 seconds before Harry said it did.

She'd just about decided that this had to all be a prank in unbelievably poor taste, when a distant but sharp CRACK filled the air. [...] "It worked," Harry Potter gasped aloud, "she got him, he's gone." [...] "I think it's in that direction." Harry Potter pointed in the rough direction the CRACK had come from, "I'm not sure how far. The sound from there took twenty seconds to get here, so maybe two minutes on a broomstick -"
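As a rough back-of-the-envelope check (assuming a speed of sound of roughly 340 m/s, which varies somewhat with temperature), the 20-second delay puts the explosion around 7 km away, which is at least consistent with Harry's "two minutes on a broomstick" estimate:

```python
SPEED_OF_SOUND = 340.0   # m/s, approximate value in outdoor air
delay = 20.0             # seconds between the CRACK and hearing it

distance = SPEED_OF_SOUND * delay      # ~6.8 km away
broom_time = 2 * 60                    # "maybe two minutes on a broomstick"
broom_speed = distance / broom_time    # ~57 m/s, roughly 200 km/h

print(f"distance = {distance / 1000:.1f} km, implied broom speed = {broom_speed * 3.6:.0f} km/h")
# distance = 6.8 km, implied broom speed = 204 km/h
```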

Comment by GMHowe on False thermodynamic miracles · 2015-03-05T21:59:23.042Z · LW · GW

Why would it backtrack (or what do you mean by backtrack)? Eventually, it observes that w = false (that "ON" went through unchanged) and that its actions are no longer beneficial, so it just stops doing anything, right? The process terminates or it goes to standby?

I think the presumption is that the case where the "ON" signal goes through normally and the case where the "ON" signal is overwritten by a thermodynamic miracle into exactly the same "ON" signal are equivalent. That is, after the "ON" signal has gone through, the AI would behave identically to an AI that was not indifferent to worlds where the thermodynamic miracle did not occur.

The reason for this is that although the chance that the "ON" signal was overwritten into exactly the same "ON" signal is tiny, it is the only remaining possible world that the AI cares about, so it will act as if that is what it believes.

Comment by GMHowe on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-29T01:29:35.373Z · LW · GW

I was not aware of Tuxedage's ruleset. However, any ruleset that allows the AI to win without being explicitly released by the gatekeeper is problematic.

If asd had won due to the gatekeeper leaving, it would only have demonstrated that being unpleasant can cause people to disengage from conversation, which is different from demonstrating that it is possible to convince a person to release a potentially dangerous AI.

Comment by GMHowe on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-28T07:41:03.106Z · LW · GW

That's not really in the spirit of the experiment. For the AI to win, the gatekeeper must explicitly release the AI. If the gatekeeper fails to abide by the rules, that merely invalidates the experiment.

Comment by GMHowe on A List of Nuances · 2014-11-14T03:25:57.478Z · LW · GW

Everything is actually about signalling.

Counterclaim: Not everything is actually about signalling.

Almost everything can be pressed into use as a signal in some way. You can conspicuously overpay for things to signal affluence or good taste or whatever. Or you can put excessive amounts of effort into something to signal commitment or the right stuff or whatever. That almost everything can be used as a signal does not mean that almost everything is being used primarily as a signal all of the time.

Signalling only makes sense in a social environment, so things that you would do or benefit from even if you were in a nonsocial environment are good candidates for things that are not primarily about signalling: things like eating, wearing clothes, sleeping areas, medical attention, and learning.

Some of the items from the list of X is not about Y:

"Food isn’t about nutrition. Clothes aren’t about comfort. Bedrooms aren’t about sleep. Laughter isn’t about humour. Charity isn’t about helping. Medicine isn’t about health. Consulting isn’t about advice. School isn’t about learning. Research isn’t about progress. Language isn’t about communication."

All of these are primarily about something other than signalling. Yes, they can be "about" signalling some of the time, to varying degrees, but not as their primary purpose. (At least not without becoming dysfunctional.)