Comments

Comment by gelisam on A proof of Löb's theorem in Haskell · 2014-09-21T05:10:08.340Z · LW · GW

A → □ A doesn't mean "all true statements are provable" [...]

I agree! You are right, the formula "A → □ A" does not mean that all true statements are provable. It means that all "true" statements are "provable" :)

Discussing the math of provability is tricky, because there are many notions of truth and many notions of provability. There are true-in-the-real-world and provable-in-the-real-world, the notions we are trying to model. True-in-the-real-world does not imply provable-in-the-real-world, so ideally, we would like a model in which there are no rules to infer provable-in-the-model from true-in-the-model. Provability logic provides such a model: it uses "A" for true-in-the-model, and "□ A" for provable-in-the-model.

One extra level of complexity is that "provable-in-the-model" is not the same as "provable-in-the-real-world by following the rules of the model", which I'll abbreviate as "derivable", or "PL-derivable" when the model is Provability logic. The reason the two are different is that "□ A" only represents provability; the actual rules of the model don't care what meaning we assign to the symbols. If the rules were poorly chosen, it could very well be that a formula "□ A" was PL-derivable but that the formula "A" wasn't.

A → □ A [...] means "all statements we can construct a proof term for are provable"

Yes and no. Yes, at runtime, all the values we will actually receive will have been constructed by our callers. No, Agda implications do not always mean that if the left-hand side is constructible, then so is the right-hand side. Implications of the form (A → ⊥), for example, mean that the left-hand side is not constructible. In that case, we never receive any value from our callers, and we implement (A → ⊥) by writing a constructive proof that all possible inputs are absurd.
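For example, here is a minimal, self-contained sketch of such a constructive proof of absurdity (the names Bool, _≡_ and true≢false are placeholders of my own, not code from the gist):

data ⊥ : Set where

data Bool : Set where
  true  : Bool
  false : Bool

data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

-- We never actually receive a proof of "true ≡ false" from a caller;
-- the absurd pattern () tells Agda that no constructor can produce one.
true≢false : true ≡ false → ⊥
true≢false ()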

But even if we ignore this corner case, it would only be justified to conclude that receiving an A as input implies that an Agda proof for A exists. Löb's theorem is not about Agda, but about a specific model of provability encoded by the rules of provability logic. If your proof uses (A → □ A) as an assumption (as opposed to ⊢ A → ⊢ □ A), then your proof is not a proof of Löb's theorem, because that theorem makes no such assumption.

EDIT 2: Updated the gist, should actually contain a valid proof of Löb's theorem now

Your latest version at the time of writing is revision 6, which uses a postulate

postulate
  ev : ∀ {A} → A → □ A

instead of the data declaration

data □_ (A : Set β) : Set (suc β) where
  ev : A → □ A

That is, you have removed the (□ A → A) direction, but kept the (A → □ A) direction. That's a bit better, but you're still using the forbidden assumption.

the real issue is the construction of the fixpoint

I agree, that part does look hard to implement.

Comment by gelisam on A proof of Löb's theorem in Haskell · 2014-09-20T13:38:20.025Z · LW · GW

When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A), for any formula A. This essentially removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.
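For concreteness, here is a minimal sketch of how such a declaration hands you both directions (universe levels omitted, and the helper names box and unbox are mine):

data □_ (A : Set) : Set where
  ev : A → □ A

-- the (A → □ A) direction comes directly from the constructor
box : ∀ {A} → A → □ A
box a = ev a

-- the (□ A → A) direction comes from pattern matching on the wrapper
unbox : ∀ {A} → □ A → A
unbox (ev a) = a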

It's an easy mistake to make, because the first rule of inference for provability logic is

⊢ A
------
⊢ □ A

So it looks like (A → □ A) should be a theorem, but notice that the left-hand side of the turnstile is empty. This means that the rule is only valid when there are no assumptions in context; or equivalently, in a programming language, when there are no variables in scope. Since Agda doesn't know how to talk about which Agda variables are in scope, we cannot implement this rule as an Agda function, because such a function would be available whether or not the scope is empty.

A few examples would probably help. A valid use of inference rule 1 would look like this:

----- variable rule
A ⊢ A
--------- deduction rule
⊢ (A → A)
----------- inference rule 1
⊢ □ (A → A)

Here, we have used inference rule 1 and a few other common rules to prove that the trivial implication (A → A) is "provable". By comparison, here is what happens when we try to use the same technique to prove that all "true" statements are "provable", i.e. (A → □ A). (I put "true" and "provable" in quotes in order to distinguish "provable according to the map" from provable in the territory. Provability logic is a map we use to understand provability.)

----- variable rule
A ⊢ A
------- inference rule 1 (invalid, the context is not empty)
A ⊢ □ A
--------- deduction rule
⊢ A → □ A

As you can see, the proof is invalid. And that's a good thing, too, because if we could prove that all "true" statements were "provable", then our provability logic would have allowed us to prove something which we know to be false, so the logic would have been a poor model of real-world provability.

Anyway, the conclusion of all this is that we should not represent a formula like (A → □ A) by an Agda function from A to □ A, but by an object of type "Theorem (A ⇾ Provable A)", where "⇾" is a syntactic construct representing implication, not an Agda function.
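To make this concrete, here is a minimal sketch of the representation I have in mind (the names Formula and rule1 are placeholders of my own): formulas are plain syntax, and theoremhood is a separate judgment, so inference rule 1 only ever applies to a complete derivation, never to an open Agda scope.

data Formula : Set where
  _⇾_      : Formula → Formula → Formula  -- syntactic implication, not an Agda function
  Provable : Formula → Formula            -- the □ modality, as syntax

data Theorem : Formula → Set where
  -- inference rule 1 (necessitation): from a complete derivation of a, conclude "□ a"
  rule1 : ∀ {a} → Theorem a → Theorem (Provable a)
  -- ... the remaining axioms and rules of provability logic would be added here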

Comment by gelisam on New censorship: against hypothetical violence against identifiable people · 2012-12-24T16:27:20.781Z · LW · GW

I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.

It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.

Comment by gelisam on New censorship: against hypothetical violence against identifiable people · 2012-12-24T15:40:49.435Z · LW · GW

After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".

The path to becoming a volunteer contributor is now very clear.

Comment by gelisam on New censorship: against hypothetical violence against identifiable people · 2012-12-24T07:24:29.805Z · LW · GW

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

Comment by gelisam on What Would You Do Without Morality? · 2012-10-24T04:07:14.479Z · LW · GW

I'm pretty sure I would come up with a reason to continue behaving as I do today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?

Comment by gelisam on SotW: Be Specific · 2012-04-12T22:34:28.528Z · LW · GW

Thanks for the suggestion. This is not, however, what I was looking for.

Cached thoughts explains that hearing a phrase might be enough for our brain to remember it as true, while genetic fallacy warns that the original cause of a belief is not as important as the sum-total of evidence for or against that belief.

I am not, however, looking for evidence that our past taints our beliefs. I have come to realize that finding the historical cause of a thought is a good first step towards getting rid of an unwanted thought, and I wanted to know whether this strategy was covered yet. If not... I'll accumulate a bit more evidence, and then maybe I'll write a post!

Comment by gelisam on SotW: Be Specific · 2012-04-06T17:14:00.558Z · LW · GW

search for the historical causes of your thoughts, rather than their justifications.

Is there a standard name or LW article on the subject? I first stumbled upon the importance of that skill here, on this site, and I wish I knew more about it than just that one personal anecdote.

Comment by gelisam on What should I have for dinner? (A case study in decision making) · 2010-08-12T21:19:35.384Z · LW · GW

Great case study, in that studying my own reaction to your article has taught me a lot about my own decision making. And my conclusion is that reading a rationalist blog isn't sufficient to become rational!

I am thin, despite having very bad eating habits (according to conventional dietary wisdom). I had not heard of Taubes before. Specifically, I have never considered that conventional dietary wisdom could be incorrect; people say that I eat unhealthily, and I have simply taken their word for it. The fact that I continue to eat unhealthily has more to do with laziness than rationality.

How much have I shifted my beliefs now that I have read this article? I don't think in probabilities yet, but... by a lot. Probably more than I should. Neither Taubes' belief nor bentarm's belief nor Eliezer's belief nor Taubes' lack of biological credentials has affected my own degree of belief, not by much anyway. Just the fact that there exists an alternative theory of food has succeeded in demolishing the confidence I had in the conventional theory.

How could such a weak amount of evidence cause such a large shift? By relying on a bias which happened to be pushing in the same direction. Just ask me: "since you have decreased your belief in the conventional theory after reading an article on Taubes, you must have correspondingly increased your belief in Taubes' theory, right?"

Well, let me google whether the fatty food I currently eat is high-carb or low-carb, and I'll get back to you on that.

Comment by gelisam on Open Thread: May 2010 · 2010-05-02T15:46:21.664Z · LW · GW

Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.

Comment by gelisam on Open Thread: May 2010 · 2010-05-01T13:35:45.155Z · LW · GW

I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place.

Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.

Comment by gelisam on City of Lights · 2010-04-29T15:16:32.431Z · LW · GW

I often get the feeling that Alicorn's posts could use more evidence. However, given her status here, I take the very fact that she recommends something as evidence that she has herself encountered good evidence that the recommendation works; you know, Aumann agreement and all that.

Besides, even though it would be nice to see which evidence she has encountered, I know that I wouldn't bother to read the research if she linked to it. Intellectually, I trust Alicorn's conclusions. Therefore, I wish to believe in her conclusions; you know, Tarski's litany and all that.

Emotionally, however, I can't help but doubt. Fortunately, I know that I'm liable to be emotionally convinced by unreliable arguments like personal experience stories. That's why I can't wait to reach the end of this sequence, with the promised "how Alicorn raised her happiness setpoint" story.

Comment by gelisam on Proposed New Features for Less Wrong · 2010-04-27T22:59:22.099Z · LW · GW

I voted for the karma-coward option because people are different, so having more options is good. But since I am new myself (only 16 comments, 43 karma points), you might be more interested in the fact that I would not use them if they were available.

I find it very gratifying to receive karma points. It motivates me to write more comments. If I had been granted a grace period in which my comments did not receive karma points, I might have posted even less. Even the downvotes are a motivator, not to post more, but to put in more effort the next time I have something to say.

When I post a comment, a part of me does it to earn more karma points. If I expect that a comment will get downvoted, I rephrase it until I expect it to be accepted by the community. Therefore, even if I were the kind of person who needs protection from those scary downvotes, I would never cover my comments with cowardliness, as I would never expect the downvotes in the first place.

Comment by gelisam on The Spotlight · 2010-04-24T19:10:32.640Z · LW · GW

Sure, let's try.

okay, writing down my thoughts. seems easy enough. reminds me of this exercise where we needed to write down our thoughts without thinking about it and i was mad because I kept thinking private thoughts and writing "can't write this" but in a very pleasing way and i liked being liked and that's a pattern which characterizes me well and i stopped- why? to catch my breath. not to think about the next sentence? are you sure? I put a lot of importance on writing down clearly and smartly I want readers to feel how smart I am, I want to be appreciated, I need to be appreciated, I am appreciated but still not enough yet. private thoughts again. girls. I hate how "boy thinks private thoughts about girls" tends to be interpreted as sex thoughts by default.

Wow. "This person" seems to care a hell of a lot more about the opinion of others than I thought I did. Among my friends, I am known as the clown, the person who will readily wear the hat of ridicule if the situation arises. It seems like this might be a front after all. I need to find a way to confirm this.

I will definitely perform this exercise again in the future. It was easy to perform, and the results surprised me, so in terms of information, I had a very high return on investment. Thanks, Alicorn!

Comment by gelisam on Undiscriminating Skepticism · 2010-03-20T23:58:17.889Z · LW · GW

You win. I did not realize that we knew that galaxies have been flying apart for billions and billions of years, as opposed to just right now. If something has been going on for so long, I agree that the simplest explanation is that it has always been going on, and this is precisely the conclusion which I thought popular science books took for granted.

Your other arguments only hammer the nail deeper, of course. But I notice that they have a much smaller impact on my unofficial beliefs, even though they should have a bigger impact. I mean, the fact that the expansion has been going on for at least a billion years is weaker evidence for the Big Bang than the fact that it predicts the cosmic background radiation and the age of the universe.

I take this as an opportunity to improve the art of rationality, by suggesting that in the case where an unofficial belief contradicts an official belief, one should attempt to find what originally caused the unofficial belief to settle in. If this original internal argument can be shown to be bogus, the mind should be less reluctant to give up and align with the official belief.

Of course, I'm forced to generalize from the sole example I've noticed so far, so for the time being, please take this suggestion with a grain of salt.

Comment by gelisam on Undiscriminating Skepticism · 2010-03-20T15:44:24.347Z · LW · GW

I don't think we can reasonably elevate our estimate of our own rationality by observing that we disagree with the consensus of a respected community.

But isn't Eliezer suggesting, in this very post, that we should use uncommon justified beliefs as an indicator that people are actually thinking for themselves as opposed to copying the beliefs of the community? I would assume that the standards we use to judge others should also apply when judging ourselves.

On the other hand, what you're saying sounds reasonable too. After all, crackpots also disagree with the consensus of a respected community.

The point is that there could be many reasons why a person would disagree with a respected community, one of which is that the person is actually being rational and that the community is wrong. Or, as seems to be the case here, that the person is actually being rational but hasn't yet encountered all the evidence which the community has. In any case, given the fact that I'm here, following a website dedicated to the art of rationality, I think that in this case rationality is quite a likely cause for my disagreement.

I should not be able to discredit a theory by the act of collecting all possible evidence and publishing before they have a chance to think things through.

I agree that if a piece of evidence is published before it is predicted, this is not evidence against the theory, but it does weaken the prediction considerably. Therefore, please don't publish this entire collection of all possible evidence, as it will make it much harder afterwards to distinguish between theories!

Comment by gelisam on Undiscriminating Skepticism · 2010-03-20T14:52:33.891Z · LW · GW

Thanks! I had only heard about the accidental discovery by two Bell employees of an excess measurement which they could not explain, but now that you mention that it was in fact predicted, it's totally reasonable that the Bell employees simply did not know about the scientific prediction at the moment of their measurement. I should have read Wikipedia.

The probability of predicting something as strange as the background radiation given that the theory on which the prediction is based is fundamentally flawed seems rather low. Accordingly, I should update my belief in the Big Bang substantially. But actually updating on evidence is hard, so I don't feel convinced yet, even though I know I should. For this reason, I will read the book you recommended, in the hope that its contents will manage to shift my unofficial beliefs too. Thanks again!

Comment by gelisam on Undiscriminating Skepticism · 2010-03-19T20:03:19.808Z · LW · GW

I've been following Alicorn's sequence on luminosity, that is, on getting to know ourselves better. I had lowered my estimate of my own rationality when she mentioned that we tend to think too highly of ourselves, but now I can bump my estimate back up. There is at least one belief which my tribe elevates to the rank of scientific fact, yet which I think is probably wrong: I do not believe in the Big Bang.

Of course, I don't believe the universe was created a few thousand years ago either. I don't have any plausible alternative hypothesis; I just think that the arguments in the many popular science physics books I have read are inconclusive.

First, these books usually justify the Big Bang theory as follows. Right now, it is an observable fact that stars are currently moving away from each other. Therefore, there was a time in the past where they were much closer. Therefore, there was a time where all the stars in the universe occupied the same point. It is this last "therefore" which I don't buy: there is no particular reason to assume that if the stars are moving away from each other right now, then they must always have done so. They could be expanding and contracting in a sort of sine wave, or something more complicated.

Second, the background radiation, which is said to be leftover stray photons from the Big Bang. If the background radiation had been a prediction of Big Bang theory, then I might have been convinced by this experimental evidence, but in fact the background radiation was discovered by accident. Only afterwards did the proponents of Big Bang theory retrofit it as a prediction of their model.

Third, the acceleration. The discovery that the expansion was accelerating was a surprise to the scientific community. In particular, it was not predicted by Big Bang theory, even though it seems like the kind of thing which an explanatory model of the expansion of the universe should have predicted right away.

Fourth, the inflation phase. This part was added later on, once it had been observed that Big Bang theory did not fit with the observed homogeneity of the cosmos. To me, this seems like a desperate and ad hoc attempt to fix a broken theory.

Now, it could be that all these changes are a progression of refinements, just like Newtonian physics was adjusted to take into account the effects of relativity, and just like the spherical Earth was adjusted to make it an elliptical Earth. But the adjustments which Big Bang theory has suffered seem like they should change the predictions completely, rather than, as in the other cases, increasing the precision of the existing theory.

I am, of course, open to being convinced otherwise. If Big Bang theory really is true, then I wish to believe it is true.

Comment by gelisam on You Are Likely To Be Eaten By A Grue · 2010-03-17T02:43:30.497Z · LW · GW

I'm rather happy with my Zs and omegas, but some of my dear ones aren't.

I expect that this sequence will allow me to give them more educated suggestions about how to fix themselves up, but based on past experience, they will most likely nod in agreement, yet go on using the dysfunctional coping method they've been using for years. This expected lack of result will, of course, not prevent me from eagerly awaiting the next instalment of your sequence.

Comment by gelisam on Rationality quotes: March 2010 · 2010-03-04T04:02:36.126Z · LW · GW

This story sports an interesting variation on the mind projection fallacy anti-pattern. Instead of confusing intrinsic properties with those whose observation depends both on one's mind and one's object of study, this variation confuses intrinsically correct conclusions with those whose validity depends both on the configuration of the world and on the correct interpretation of the evidence. In particular, one of the characters would like the inhabitants of the simulation to reconstruct our modern, "correct" scientific theories, even though said theories are in fact not a correct description of the simulated world.

Here is a relevant (and spoiler-free) passage.

[The simulation's] stars were just a planetarium-like backdrop, present only to help [the inhabitants of the simulation] get their notions of heliocentricity and inertia right

The mistake, of course, is that if the simulation's sun is merely projected on a rotating dome, then heliocentricity isn't right at all.

edit: it turns out that Eliezer has already generalized this anti-pattern from minds to worlds a while ago.

Comment by gelisam on Debate tools: an experience report · 2010-02-06T01:02:11.446Z · LW · GW

Reading that Argunet was both open source and flawed, I decided to download the code (signals geekiness) in order to fix the problem you have encountered (signals generosity). I didn't have to change anything: it took me a while (signals perseverance) but I finally figured out how to move a box in Argunet (signals superior problem-solving abilities).

It turns out that the result of dragging a box depends on whether the box was selected prior to dragging it. If the box was selected, it is moved; otherwise, an arrow is created. So, before you move a box, select it by clicking on it.

Comment by gelisam on The Wannabe Rational · 2010-01-18T19:27:07.323Z · LW · GW

You can't maximize truth.

I think it's fairly obvious that "maximizing truth" meant "maximizing the correlation between my beliefs and truth".

Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.

Truth is overrated. My prior was heavily biased toward truth, but then a brief and unpleasant encounter with nihilism caused me to lower my estimate.

And before you complain that this doesn't make any sense either, let me spell out that this is an estimate of the probability that the strategy "pursue truth first, happiness second" yields, on average, more hedons than "pursue happiness using the current set of beliefs".

Comment by gelisam on The Wannabe Rational · 2010-01-18T19:01:16.923Z · LW · GW

Ah, so that's why people downvoted my comment! Thanks for explaining. I thought it was only because I appeared to be confusing utilons with hedons.

Regarding the doublethink post, I agree that I couldn't rationally assign myself false but beneficial beliefs, and I feel silly for writing that I could. On the other hand, sometimes I want to believe in false but beneficial beliefs, and that's why I can't pretend to be an aspiring rationalist.

Comment by gelisam on The Wannabe Rational · 2010-01-17T17:15:15.270Z · LW · GW

Uh-oh.

I... I don't think I do want to believe in things due to evidence. Not deep down inside.

When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth.

I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment.

And I take notes, for those rare cases in my life where truth actually is more important to my happiness than social conventions: when I encounter a real-world problem that I actually want to solve. This happens less often than you might think.

Comment by gelisam on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers · 2010-01-07T21:11:53.966Z · LW · GW

Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.

Comment by gelisam on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2009-12-13T17:56:35.780Z · LW · GW

Everybody seemed to be going backward from assuming AK's guilt at 50-50, whereas komponisto went forward from the background probability.

I think I can see why. komponisto pretended to be a juror following the "innocent unless proven otherwise" mantra and updating on the evidence presented in the court. We, on the other hand, did what komponisto challenged us to do: figure out the answer to his riddle using the two websites he gave us. This being a riddle, not a court, we had no reason to favour one hypothesis over the other, hence the 50-50.

That being said, I did favour one hypothesis over the other (my stated priors were 75/75/25) because at the moment I paused to write down an approximation of my current beliefs, I had already updated on the evidence presented by komponisto himself in his post, namely, that there was a trial against AK and RS.

Maybe the reason many of us gave so much importance to the fact that those particular individuals were on trial for murder was that it was our very first piece of information; and I don't think it's right for rationalists to do that.

Comment by gelisam on You Be the Jury: Survey on a Current Event · 2009-12-10T01:10:46.128Z · LW · GW

priors

Amanda Knox is guilty: 75%, Raffaele Sollecito: 75%, Rudy Guede: 25%. I shall abbreviate future percentage triples as 75/75/25.

No knowledge of the case before reading this post. My prior is based on my assumption that trial people know what they are doing, and on the fact that I imagined that the trial was trying to show that the guilty were K+S instead of G.

acquiring information

Reading about G's DNA, which should be rather good evidence: switching to 50/50/75. I contemplated switching all the way to 25/25/75, but I figured there had to be some reason for the new trial.

Reading about the police's claim that the murder was linked to a group sex game; thinking that this would be a ridiculous motive. This made me think that maybe the trial people didn't know what they were doing after all. Switching to 25/25/80.

Finally realized that the trial was in fact trying to show that the guilty were K+S+G instead of just G, not K+S instead of G. Stopped keeping track of percentages for some reason.

Reading about the police switching from K+S+L to K+S+G, which lowered my esteem of the police even more.

Reading about the DNA of K+S, figured it was natural for a woman and her boyfriend to have DNA all over the woman's own house.

Still trying to understand who G was relative to the others. I think he's a robber now. Definitely not part of the group sex thing. Even worse feelings toward the police.

Overall, the truejustice website seems more emotional than the friends-of-K website, which surprises me. I would have expected the family of the victim to have calmed down after the original G trial, yet truejustice still seems angry, and doesn't even seem to be run by the victim's family at all. They should be a lot less emotional about this than K's friends, who seem to be a lot more clearheaded than truejustice is.

I'm now quite convinced of the innocence of K+S, although I'm too shy to give an actual percentage. 5/5/95, if not more extreme.

Comment by gelisam on Intuitive differences: when to agree to disagree · 2009-09-30T02:03:20.847Z · LW · GW

theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.

I happen to be studying conflicts in a completely different domain, in which I claim the solution is to ensure that the events produce identical results no matter in which order they are applied. I briefly wondered whether my result could be useful in other domains, and I thought of lesswrong: perhaps we should advocate update strategies which don't depend on the order in which the evidence is encountered.
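For instance, when the pieces of evidence are conditionally independent given the hypothesis, Bayesian updating already behaves this way, since

P(H | e₁, e₂) ∝ P(H) · P(e₁ | H) · P(e₂ | H)

and the right-hand side is symmetric in e₁ and e₂, so receiving the two pieces of evidence in either order leads to the same posterior.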

And then your post came up! Nice timing.

Comment by gelisam on Outlawing Anthropics: An Updateless Dilemma · 2009-09-09T04:01:12.087Z · LW · GW

The reason we shouldn't update on the "room color" evidence has nothing to do with the fact that it constitutes anthropic evidence. The reason we shouldn't update is that we're told, albeit indirectly, that we shouldn't update (because if we do then some of our copies will update differently and we will be penalized for our disagreement).

In the real world, there is no incentive for all the copies of ourselves in all universes to agree, so it's all right to update on anthropic evidence.