Comments

Comment by David_Bolin on There's No Fire Alarm for Artificial General Intelligence · 2017-10-15T17:12:21.579Z · LW · GW

"If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. "

I do get that feeling even more so, in exactly that situation. I therefore normally do not respond to fire alarms.

Comment by David_Bolin on Why people want to die · 2015-08-29T13:02:15.767Z · LW · GW

I don't think the belief in life after death necessarily indicates a wish to live longer than we currently do. I think it is a result of the fact that it appears to people to be incoherent to expect your consciousness to cease to be: if you expect that to happen, what experience will fulfill that expectation?

Obviously none. The only expectation that could theoretically be fulfilled by experience is expecting your consciousness to continue to exist. This doesn't actually prove that your consciousness will in fact continue to exist, but it is probably the reason there is such a strong tendency to believe this.

This article discusses how very young children tend to believe that a mouse will still have consciousness after death, even though they certainly do not hear this from adults:

For example, in a study by Bering and Bjorklund (2004), children (as well as an adult comparison group) were presented with a puppet show in which an anthropomorphized mouse was killed and eaten by an alligator, and then asked about the biological and psychological functioning of the now-dead mouse. Kindergartners understood that various biological imperatives (e.g., the capacity to be sick, the need to eat, drink, and relieve oneself) no longer applied to the dead mouse. The majority of these children even said that the brain of the dead mouse no longer worked, which is especially telling given that children at this age also understand that the brain is “for thinking” (Bloom 2004; Gottfried & Jow 2003; Johnson & Wellman 1982; Slaughter & Lyons 2003). Yet when asked whether the dead mouse was hungry or thirsty, or whether it was thinking or had knowledge, most kindergartners said yes. In other words, young children were cognizant of the fact that the body stops working at death but they viewed the mind as still active. Furthermore, both the children and adults were particularly likely to attribute to the dead mouse the capacity for certain psychological states (i.e., emotions, desires, and epistemic states) over others (i.e., psychobiological and perceptual states), a significant trend that will be addressed in the following section. In general, however, kindergartners were more apt to make psychological attributions to the dead mouse than were older children, who were not different from adults in this regard. This is precisely the opposite pattern that one would expect to find if the origins of such beliefs could be traced exclusively to cultural indoctrination. In fact, religious or eschatological-type answers (e.g., Heaven, God, spirits, etc.) among the youngest children were extraordinarily rare. Thus, a general belief in the continuity of mental states in dead agents seems not something that children acquire as a product of their social–religious upbringing, because increasing exposure to cultural norms would increase rather than attenuate afterlife beliefs in young minds. Instead, a natural disposition toward afterlife beliefs is more likely the default cognitive stance and interacts with various learning channels (for an alternative interpretation, see Astuti, forthcoming a). Moreover, in a follow-up study that included Catholic schoolchildren, this incongruous pattern of biological and psychological attributions to the dead mouse appeared even after controlling for differences in religious education (Bering et al. 2005).

Comment by David_Bolin on Words per person year and intellectual rigor · 2015-08-28T16:21:44.778Z · LW · GW

Ok. In that sense I agree that this is likely to be the case, and would be the case more often than not with any educated person's assessment of who does rigorous work.

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-28T12:36:45.701Z · LW · GW

It is not that these statements are "not generally valid", but that they are not included within the axiom system used by H. If we attempt to include them, there will be a new statement of the same kind which is not included.

Obviously such statements will be true if H's axiom system is true, and in that sense they are always valid.

Comment by David_Bolin on Words per person year and intellectual rigor · 2015-08-28T12:25:23.214Z · LW · GW

How does this not come down to saying that people you consider rigorous did, on average, more work on their texts than people you don't consider rigorous, and therefore wrote less overall?

If we take a random (educated) person, and ask him to classify authors into rigorous and non-rigorous, something similar should be true on average, and we should find similar statistics. I can't see how that shows some deep truth about the nature of rigorous thought, except that it means doing more work in your thinking.

I agree that it does mean at least that, so that if e.g. some author has written more than 100 books, that is a pretty good sign that he is not worth reading, even if it is not a conclusive one.

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-27T13:12:11.399Z · LW · GW

I looked at your specified program. The case there is basically the same as the situation I mentioned, where I say "you are going to think this is false." There is no way for you to have a true opinion about that, but there is a way for other people to have a true opinion about it.

In the same way, you haven't proved that no one and nothing can prove that the program will not halt. You have simply proved that there is no proof in the particular language and axioms used by your program. When you proved that the program will not halt, you were using a different language and axioms. Similarly, you can't get that statement right ("you will think this is false") because it behaves as a Filthy Liar relative to you. But it doesn't behave that way relative to other people, so they can get it right.
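To make the quantifier structure explicit, here is a sketch, assuming the program P has the standard form: it searches for a proof, in a fixed system S, of the statement "P does not halt," and halts if it finds one. Then

$$ S \vdash \neg\mathrm{Halt}(P) \;\Longrightarrow\; P \text{ halts}, $$

so if S is sound, S cannot prove that P does not halt. Yet a stronger system, e.g. S plus the consistency of S, does prove $\neg\mathrm{Halt}(P)$. The unprovability is relative to S, not absolute.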

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-27T13:04:02.770Z · LW · GW

I said "so the probability that a thing doesn't exist will be equal to or higher than etc." exactly because the probability would be equal if non-existence and logical impossibility turned out to be equivalent.

If you don't agree that no logically impossible thing exists, then of course you might disagree with this probability assignment.

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-26T13:29:01.669Z · LW · GW

Also, there is definitely some objective fact about which you cannot get the right answer:

"After thinking about it, you will decide that this statement is false, and you will not change your mind."

If you conclude that this is false, then the statement will be true. No paradox, but you are wrong.

If you conclude that this is true, then the statement will be false. No paradox, but you are wrong.

If you make no conclusion, or continuously change your mind, then the statement will be false. No paradox, but the statement is undecidable to you.

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-26T13:25:32.981Z · LW · GW

There is no program such that no Turing machine can determine whether it halts or not. But no Turing machine can take every program and determine whether or not each of them halts.

It isn't actually clear to me that you are a Turing machine in the relevant sense, since there is no context where you would run forever without halting, and there are contexts where you will output inconsistent results.

But even if you are, it simply means that there is something undecidable to you -- the examples you find will be about other Turing machines, not yourself. There is nothing impossible about that, because you don't and can't understand your own source code sufficiently well.
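In symbols, the point is one of quantifier order (a sketch of the standard fact):

$$ \forall p\;\exists M\;\big(M \text{ decides } \mathrm{Halt}(p)\big) \quad\text{but}\quad \neg\,\exists M\;\forall p\;\big(M \text{ decides } \mathrm{Halt}(p)\big). $$

For any fixed program p, one of the two constant machines (the one that always answers "halts," or the one that always answers "doesn't") is correct about it; what cannot exist is a single machine that is right about every program.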

Comment by David_Bolin on Yudkowsky's brain is the pinnacle of evolution · 2015-08-26T13:17:41.028Z · LW · GW

I've seen this kind of thing happen before, and I don't think it's a question of demographics or sockpuppets. Basically I think a bunch of people upvoted it because they thought it was funny, then after there were more comments, other people more thoughtfully downvoted it because they saw (especially after reading more of the comments) that it was a bad idea.

So my theory is that it was a question of difference in timing and in whether or not other people had already commented.

Comment by David_Bolin on Open Thread - Aug 24 - Aug 30 · 2015-08-26T12:57:35.696Z · LW · GW

It is definitely true that this could be someone's subjective probability, if he doesn't understand the statement.

But if you do understand it, a thing which is logically impossible doesn't exist, so the probability that a thing doesn't exist will be equal to or higher than the probability that it is logically impossible.
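Formally (a sketch): since whatever is logically impossible does not exist, the event "X is logically impossible" is contained in the event "X does not exist," so

$$ P(\text{X does not exist}) \;\ge\; P(\text{X is logically impossible}), $$

with equality exactly if non-existence and logical impossibility turn out to be equivalent.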

Comment by David_Bolin on Why people want to die · 2015-08-25T14:19:24.276Z · LW · GW

Maybe. I upvoted it because I thought it was correct, and because it corrects the misconception that desiring to live forever is obviously the right thing to do and that everyone would want it if they weren't confused.

Note that unless the probability that you begin to want to die during a given period keeps decreasing, forever, you will almost surely begin to want to die sooner or later.
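A sketch of why, assuming for simplicity that the periods are independent, with per-period probability $p_n < 1$:

$$ P(\text{never begin to want to die}) \;=\; \prod_{n=1}^{\infty}(1-p_n) \;>\; 0 \quad\iff\quad \sum_{n=1}^{\infty} p_n < \infty, $$

and a convergent sum forces $p_n \to 0$. So unless the per-period probability shrinks toward zero fast enough, the event occurs almost surely.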

Comment by David_Bolin on Yudkowsky's brain is the pinnacle of evolution · 2015-08-24T21:37:00.693Z · LW · GW

The post would have to be toned down quite a bit in order to appear to be possibly sincere.

Comment by David_Bolin on Instrumental Rationality Questions Thread · 2015-08-24T12:59:12.812Z · LW · GW

I use dtSearch for the text searching, which works pretty well. I don't have to use it constantly, but it is reliable when I need it: finding something from a website I viewed a few months ago when I no longer remember which site it was, determining whether I've ever come across a certain person's name before, finding one of my passwords after I've forgotten where I saved it, and so on. Also, sometimes I haven't been sure which keywords to search for, but I was able to determine that something must have happened on a particular day, and then I've been able to use the screenshots directly to search for it, scrolling through them like through a movie.

I don't use a terminal. Currently I've been using two personal computers and have kept the history from both.

Comment by David_Bolin on Instrumental Rationality Questions Thread · 2015-08-23T12:59:16.381Z · LW · GW

I do the screenshot / webcam thing, and OCR the screenshots so that my entire computing history is searchable.
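For concreteness, here is a minimal sketch of such a pipeline in Python, assuming Pillow and pytesseract (with the Tesseract binary) are installed; my actual setup indexes the text with dtSearch, so treat the details as illustrative rather than a description of what I run:

```python
# Minimal sketch: periodically screenshot the screen, OCR it, and store
# both the image and its text so the history becomes full-text searchable.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab   # screen capture (Windows/macOS; Linux needs extra setup)
import pytesseract          # OCR wrapper around the Tesseract binary

ARCHIVE = Path("screen_history")
ARCHIVE.mkdir(exist_ok=True)

def capture_once() -> None:
    """Grab the screen, save the image, and store its OCR'd text."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    image = ImageGrab.grab()                        # full-screen screenshot
    image.save(ARCHIVE / f"{stamp}.png")
    text = pytesseract.image_to_string(image)       # extract searchable text
    (ARCHIVE / f"{stamp}.txt").write_text(text, encoding="utf-8")

if __name__ == "__main__":
    while True:                                     # one frame every 30 seconds
        capture_once()
        time.sleep(30)
```

The .txt files can then be searched with any full-text tool (dtSearch, grep, etc.), and the .png files scrolled through chronologically like a movie.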

Comment by David_Bolin on Robert Aumann on Judaism · 2015-08-22T22:13:55.351Z · LW · GW

Yes, both of these happen. Also, it's harder to be friends even with the people you already know, because you feel dishonest all the time (obviously because you are in fact being dishonest with them).

Comment by David_Bolin on Robert Aumann on Judaism · 2015-08-22T15:13:09.984Z · LW · GW

If you are a Muslim in many Islamic countries today, and you decide that Islam is false, and let people know it, you can be executed. This does not seem to have a high expected value.

Of course, you could decide it is false but lie about it, but people have a hard time doing that. It is easier to convince yourself that it is true, to avoid getting killed.

Comment by David_Bolin on Robert Aumann on Judaism · 2015-08-22T14:38:32.416Z · LW · GW

I don't see why so many people are assuming that Aumann is accepting a literal creation in six days. I read the article and it seems obvious to me that he believes that the world came to be in the ordinary way accepted by science, and he considers the six days to be something like an allegory. There is no need for explanations like a double truth or compartmentalization.

Comment by David_Bolin on 0 And 1 Are Not Probabilities · 2015-08-20T19:08:55.984Z · LW · GW

It should probably be defined by calibration: do some people have a type of belief where they are always right?

Comment by David_Bolin on 0 And 1 Are Not Probabilities · 2015-08-20T18:50:57.786Z · LW · GW

Of course if no one has absolute certainty, this very fact would be one of the things we don't have absolute certainty about. This is entirely consistent.

Comment by David_Bolin on 0 And 1 Are Not Probabilities · 2015-08-20T13:10:15.939Z · LW · GW

Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty. This would mean that mathematical probability theory is an imperfect formalization of people's subjective degrees of belief. It would not necessarily mean that it is impossible in principle to come up with a better formalization.

Comment by David_Bolin on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-19T18:33:15.612Z · LW · GW

Yes, it won't work for people who can't manage even an occasional day without eating, although you can also try slowing down the rate of change.

As I said in another comment, changes in water retention (and scale fluctuations, etc.) don't really matter, because it will come out the same on average.

Comment by David_Bolin on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-19T18:30:43.700Z · LW · GW

It doesn't matter. Fluctuations with scales and with water retention may mean that you end up fasting an extra day here and there for random reasons, but you will also end up eating on extra days for the same reason. It ends up the same on average.

Comment by David_Bolin on Rational approach to finding life partners · 2015-08-19T13:34:39.030Z · LW · GW

Technology frequently improves some things while making other things worse. But sooner or later people find a way to improve both. In this particular case, maybe they haven't found it yet.

Comment by David_Bolin on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-19T13:31:06.869Z · LW · GW

"But actually they weren't aware that they were not eating less."

This is why I advocate using a Beeminder weight goal (or some equivalent): weigh yourself every day, and don't eat for the rest of the day when you are above the center line. When you are below it, you can eat whatever you want for the rest of the day.

This doesn't take very much willpower because there is a very bright line: you don't have to carefully control what or how much you eat; either you eat today or you don't.
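A minimal sketch of the rule, assuming the center line is a straight road from a start weight to a goal weight (Beeminder's actual road can be more elaborate, and all the numbers below are hypothetical placeholders):

```python
# Sketch of the bright-line rule: weigh in, compare against the center line,
# and the day's decision follows mechanically.
from datetime import date

START_DATE, START_WEIGHT = date(2015, 8, 1), 80.0   # kg, hypothetical
GOAL_DATE, GOAL_WEIGHT = date(2016, 2, 1), 72.0     # kg, hypothetical

def center_line(today: date) -> float:
    """Linear interpolation between start and goal weight."""
    total_days = (GOAL_DATE - START_DATE).days
    elapsed = (today - START_DATE).days
    return START_WEIGHT + (GOAL_WEIGHT - START_WEIGHT) * elapsed / total_days

def decide(todays_weight: float, today: date) -> str:
    # The bright line: above the road means no eating today, below means eat.
    return "fast today" if todays_weight > center_line(today) else "eat freely today"

print(decide(78.4, date(2015, 9, 15)))
```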

Comment by David_Bolin on Versions of AIXI can be arbitrarily stupid · 2015-08-10T15:02:07.261Z · LW · GW

Ok, that makes sense.

Comment by David_Bolin on Versions of AIXI can be arbitrarily stupid · 2015-08-10T14:42:30.716Z · LW · GW

I don't think I understand. What is the third possible environment? And why exactly is the behavior stupid? It sounds like it might be actually true that it is too dangerous to test whether you are in Heaven or Hell in that situation.

Comment by David_Bolin on Rationality Quotes Thread August 2015 · 2015-08-09T15:33:55.802Z · LW · GW

"Imagine a world in which no one was living below the average income level."

This is a world where everyone has exactly the same income. I don't see any special reason why it would be desirable, though.

Comment by David_Bolin on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T09:04:19.931Z · LW · GW

Probability that there are two elephants given one on the left and one on the right.

In any case, if your language can't express Fermat's last theorem then of course you don't assign a probability of 1 to it, not because you assign it a different probability, but because you don't assign it a probability at all.

Comment by David_Bolin on Does Probability Theory Require Deductive or Merely Boolean Omniscience? · 2015-08-03T08:48:15.369Z · LW · GW

Basically the problem is that a Bayesian should not be able to change its probabilities without new evidence, and if you assign a probability other than 1 to a mathematical truth, you will run into problems when you deduce that it follows of necessity from other things that have a probability of 1.
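Concretely (a sketch): if premises $A_1, \dots, A_n$ each have probability 1 and jointly entail $B$, then

$$ P(B) \;\ge\; P(A_1 \wedge \cdots \wedge A_n) \;=\; 1, $$

so coherence forces $P(B) = 1$ the moment the deduction is noticed, even though no new evidence has arrived.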

Comment by David_Bolin on How to escape from your sandbox and from your hardware host · 2015-08-01T13:50:27.918Z · LW · GW

Any program that reads this post and these articles wasn't stuck in a sandbox anyway.

Comment by David_Bolin on Wear a Helmet While Driving a Car · 2015-07-31T15:46:59.458Z · LW · GW

I would agree with the social norm of never ever going swimming. In fact, I have a very hard time understanding why people are so willing to basically immerse themselves in an environment so deadly to human beings. I certainly never do it myself.

Comment by David_Bolin on An overall schema for the friendly AI problems: self-referential convergence criteria · 2015-07-28T06:18:38.230Z · LW · GW

You are assuming that human beings are much more altruistic than they actually are. If your wife has the chance of leaving you and having a much better life where you will never hear from her again, you will not be sad if she does not take the chance.

Comment by David_Bolin on Lawful Uncertainty · 2015-07-26T03:30:03.454Z · LW · GW

If the other player is choosing randomly between two numbers, you will have a 50% chance of guessing his choice correctly with any strategy whatsoever. It doesn't matter whether your strategy is random or not; you can choose the first number every time and you will still have exactly a 50% chance of getting it.
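In symbols: whatever mixed strategy $q$ you use over the two numbers, if the other player picks each number with probability $\tfrac{1}{2}$ independently of you, then

$$ P(\text{correct guess}) \;=\; \sum_{s} q_s \cdot \tfrac{1}{2} \;=\; \tfrac{1}{2}, $$

since conditional on any number you name, the opponent matches it with probability one half.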

Comment by David_Bolin on Steelmaning AI risk critiques · 2015-07-25T08:06:05.269Z · LW · GW

That is not a useful rebuttal if in fact it is impossible to guarantee that your AGI will not be a sociopath, no matter how you program it.

Eliezer's position generally is that we should make sure everything is set in advance. Jacob_cannell seems to be basically saying that much of an AGI's behavior is going to be determined by its education, environment, and history, much as is the case with human beings now. If this is the case it is unlikely there is any way to guarantee a good outcome, but there are ways to make that outcome more likely.

Comment by David_Bolin on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-25T07:47:22.599Z · LW · GW

If you are "procrastinate-y" you wouldn't be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult and it is unlikely that you would be able to do it, so you would soon be dead yourself in this state.

Comment by David_Bolin on Steelmaning AI risk critiques · 2015-07-23T10:53:07.024Z · LW · GW

Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/

I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.

I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html

Comment by David_Bolin on Base your self-esteem on your rationality · 2015-07-23T02:29:31.132Z · LW · GW

Human beings are not very willing to be rational, and that includes those of us on Less Wrong.

Comment by David_Bolin on Base your self-esteem on your rationality · 2015-07-22T13:22:55.233Z · LW · GW

If you're really honest about your willingness to be rational, it seems like this could be kind of depressing.

Comment by David_Bolin on I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life · 2015-07-22T04:46:00.446Z · LW · GW

I actually meant it more generally, in the sense of highly unusual situations. So gjm's suggested path would count.

But more straightforwardly apocalyptic situations could also work. So a whole bunch of people die, then those remaining become concerned about existential risk -- given what just happened -- and this leads to people becoming convinced that electing Zoltan would be a good idea. This is more likely than a virus that kills non-Zoltan supporters.

Comment by David_Bolin on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-21T09:41:37.449Z · LW · GW

Caricatures such as describing people who disagree with you as saying "let's bring back slavery" and supporting "burning down the whole Middle East" are not productive in political discussions.

Comment by David_Bolin on Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings) · 2015-07-21T08:04:30.590Z · LW · GW

I tried to register there just now but the email which is supposed to contain the link to verify my email is empty (no link). What can I do about it?

Comment by David_Bolin on Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings) · 2015-07-21T07:46:26.629Z · LW · GW

I think this is probably true, and I have seen cases where e.g. Eliezer is highly upvoted for a certain comment while some other person receives little or no karma for basically the same insight in a different case.

However, it also seems to me that their long comments do tend to be especially insightful in fact.

Comment by David_Bolin on Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings) · 2015-07-21T07:41:54.312Z · LW · GW

I don't think this would be helpful, basically for the reason Lumifer said. In terms of how I vote personally, if I consider a comment unproductive, being longer increases the probability that I will downvote, since it wastes more of my time.

Comment by David_Bolin on List of Fully General Counterarguments · 2015-07-19T19:28:27.056Z · LW · GW

That isn't really fully general because not everything is evidence in favor of your conclusion. Some things are evidence against it.

Comment by David_Bolin on Crazy Ideas Thread · 2015-07-19T19:07:00.995Z · LW · GW

For many people, 32 karma would also be sufficient benefit to justify the investment made in the comment.

Comment by David_Bolin on I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life · 2015-07-19T13:04:26.143Z · LW · GW

You can pretty easily think of "apocalyptic" scenarios in which Zoltan would end up getting elected in a fairly normal way. Picking a president at random from the adult population would require even more improbable events.

Comment by David_Bolin on List of Fully General Counterarguments · 2015-07-19T06:01:11.964Z · LW · GW

A common one that I see works like this: first person holds position A. A second person points out fact B which provides evidence against position A. The first person responds, "I am going to adjust my position to position C: namely that both A and B are true. B is evidence for C, so your argument is now evidence for my position." Continue as needed.

Example:

First person: The world was created.

Second person: Living things evolved, which makes it less likely that things were created than if they had just appeared from nothing.

First person: The world was created through evolution. Facts implying evolution are evidence for this fact, so your argument now supports my position.

Continuing in this way allows the first person not only to maintain his original position, even if modified, but also to say that all possible evidence supports it.

(The actual resolution is that even if the modified position is supported by the evidence in issue, the modified position is more unlikely in itself than the original position, since the conjunction requires two things to be true, so following this process results in holding more and more unlikely positions.)
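The arithmetic behind the resolution:

$$ P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A), $$

so each retreat from $A$ to $A \wedge B$ multiplies the position's prior probability by $P(B \mid A) \le 1$, and iterating the maneuver drives that prior toward zero even while every new fact "supports" the latest version.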

Comment by David_Bolin on I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life · 2015-07-18T14:00:18.242Z · LW · GW

Ok. My link was also for the USA and you are correct that there would be differences in other countries.

Comment by David_Bolin on An overall schema for the friendly AI problems: self-referential convergence criteria · 2015-07-18T13:56:42.110Z · LW · GW

This sounds like Robin Hanson's idea of the future. Eliezer would probably agree that in theory this would happen, except that he expects one superintelligent AI to take over everything and impose its values on the entire future of everything. If Eliezer's future is definitely going to happen, then even if there is no truly ideal set of values, we would still have to make sure that the values that are going to be imposed on everything are at least somewhat acceptable.