Comment by calvin on Open Thread, Apr. 06 - Apr. 12, 2015 · 2015-04-12T01:32:37.952Z · score: 1 (1 votes) · LW · GW

Self Help, CBT and quantified self Android applications

A lot of people on LW seem to hold The Feeling Good Handbook by Dr. Burns in high regard when it comes to effective self-help. I am in the process of browsing a PDF copy, and it does indeed seem like a good resource, as it is not only written in an engaging way but also packed with various exercises, such as writing out your day plan and reviewing it later while assigning Pleasure and Purpose scores to various tasks.

The problem I have with this, and any other exercise-based style of self-help book, is that I am simply too lazy to regularly print, draft, or fill in written exercise sheets. On the other hand, I have noticed that when prompted to do so by a phone notification, I can usually be trusted to regularly fill in the forms of the QS apps I have installed on my mobile, or to do exercises such as Duolingo language lessons.

Since the topics of CBT, depression, and such seem to be quite widely discussed here, I have two rather general questions I would like to ask the community:

1) Do you know of any battle-tested mobile applications that implement the CBT exercises mentioned in Dr. Burns's book? If so, please do name them, as I would love to install one as well.

2) Do you think that creating a new mobile application to collect all the Feeling Good Handbook exercises in one place, and remind the user to do them regularly (i.e. once daily/weekly in most cases), is a good idea? Would you use such an application yourself? I am an MSc Comp Sci student looking for fun and useful projects to polish my Android skills a bit, and I would love to work on something that might be useful to a wider community. [pollid:852]
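The reminder logic described above (collect exercises, prompt the user once daily or weekly) can be sketched independently of any Android specifics. Here is a minimal illustration in Python with hypothetical names; in the actual app, the computed time would be handed to something like AlarmManager or WorkManager to fire the notification:

```python
from datetime import datetime, timedelta

# How often each kind of exercise should be prompted.
PERIODS = {"daily": timedelta(days=1), "weekly": timedelta(weeks=1)}

def next_reminder(last_done: datetime, schedule: str) -> datetime:
    """When the user should next be prompted to repeat the exercise."""
    return last_done + PERIODS[schedule]

def due_exercises(exercises: dict, now: datetime) -> list:
    """Names of exercises whose reminder time has already passed.

    `exercises` maps an exercise name to a (last_done, schedule) pair.
    """
    return [name for name, (last, sched) in exercises.items()
            if next_reminder(last, sched) <= now]
```

The app would then show one notification per entry returned by `due_exercises` and update `last_done` whenever the user completes the corresponding form.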

Comment by calvin on Open Thread for February 3 - 10 · 2014-02-10T06:55:48.103Z · score: 1 (1 votes) · LW · GW

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested.

I thought the accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.

Comment by calvin on Why I haven't signed up for cryonics · 2014-01-17T07:32:15.593Z · score: -1 (1 votes) · LW · GW


I don't know if it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds are going to survive until they are revived in the future.

Comment by calvin on Why I haven't signed up for cryonics · 2014-01-17T04:51:26.535Z · score: 0 (0 votes) · LW · GW

I don't know what conditions are necessary for accurate preservation of the mind, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.

Some people seem to put their faith in structure as an answer, but how do we test this claim in a meaningful way?

Comment by calvin on Community bias in threat evaluation · 2014-01-17T04:16:46.835Z · score: 5 (5 votes) · LW · GW

Yes, it is indeed a common pattern.

People are likely to get agitated about the stuff they are actually working on, especially if it is somehow entangled with their state of knowledge, personal interests, and employment. The belief that we are the ones to save the world really helps one find the motivation to continue one's pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others (Communism will save the world from our greed).

On the other hand, I don't think it is a bad thing. That way, we have many small groups, each working on their small subset of the problem space while also trying to save the world from the disaster they perceive to be the greatest danger. As long as the response is proportional to the actual risk, of course.

But I still agree with you that it is only prudent to treat any such claims with caution, so that we don't fall into the trap of using data taken from a small group of people working at the Asteroid Defense Foundation as our only and true estimates of the likelihood and effect of an asteroid impact, without verifying their claims against an unbiased source. It is certainly good to have someone looking at the sky from time to time, though, just in case their claims prove true.

Comment by calvin on Why I haven't signed up for cryonics · 2014-01-17T03:53:34.264Z · score: 0 (0 votes) · LW · GW

Here is a parable illustrating the relative difficulty of both problems:

Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.

This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box stored in an ice cave and guarded by a couple of hopeful monks:

  • Imagine the manuscript has been preserved using correct means and all letters are still there.

Uploading is easy. There is no data loss, so it is equivalent to uploading the modern manuscript. This means the monks were smart enough to choose the optimal storage procedure (or got there by accident) - very unlikely.

  • Imagine the manuscript has been preserved using decent means and some letters are still there.

Now we have to do a bit of guesswork... is the manuscript we translate the same thing the original author had in mind? EY called this doing intelligent cryptography on a partially preserved brain, as far as I am aware. The monks knew just enough not to screw up the process, but their knowledge of manuscript-preservation techniques was not perfect.

  • Imagine the manuscript has been preserved using poor means and all the letters have vanished without a trace.

Now we are royally screwed, or we can wait a couple of thousand million years so that an oracle computer can deduce the state of the manuscript by reversing entropy. This means the monks knew very little about manuscript preservation.

  • Imagine there is no manuscript. There is a nice wooden box preserved in astonishing detail, but the manuscript crumbled when the monks put it inside.

Well, the monks who wanted to preserve the manuscript didn't know that preserving the box does not help preserve the manuscript, but they tried, right? This means the monks don't understand the connection between box preservation and manuscript preservation.

  • Imagine there is no manuscript. The box has been damaged as well.

This is what happens when the manuscript-preservation business is run by people with little knowledge of what should be done to store belongings for thousands of years without significant damage.

In other words, uploading is something that can be figured out correctly in the far, far future, while the problem of proper cryo-storage has to be solved correctly right now, as an incorrect procedure may lead to the irreversible loss of information for people who want to be preserved now. I don't assign a high prior probability to the proposition that we know enough about the brain to preserve minds correctly, and therefore cryonics in its current shape or form is unlikely to succeed.

Comment by calvin on Dark Arts of Rationality · 2014-01-16T02:35:13.733Z · score: 0 (2 votes) · LW · GW

To summarize: beliefs that are not actually true may have a beneficial impact on your day-to-day life?

You don't really require any level of rationality skill to arrive at that conclusion, but the write-up is quite interesting.

Just don't fall into the trap of thinking I am going to swallow this placebo and feel better, because I know that even though placebo does not work... crap. Let's start from the beginning...

Comment by calvin on Thought Crimes · 2014-01-15T12:35:19.475Z · score: 1 (1 votes) · LW · GW

Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in your given context. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards why the hell should I allow other people to mess with my brain?

I cannot even begin to think of the variety of interesting ways in which thought-blocking technology could be applied.

Comment by calvin on To capture anti-death intuitions, include memory in utilitarianism · 2014-01-15T07:27:46.796Z · score: 1 (1 votes) · LW · GW

Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:

My intuition does not match the current model, so I am making incorrect choices and need to change my intuition, become more moral, and act according to the preferred values.

Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.

Comment by calvin on To capture anti-death intuitions, include memory in utilitarianism · 2014-01-15T06:53:05.223Z · score: 0 (0 votes) · LW · GW

The devil, as always, seems to lie in the details, but as I see it, some people may see this as a feature:

Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.

What is the utility of a person currently being preserved in suspended animation with hope of future revival? Am I penalized as much as for a person who was, say, cremated?

Are we justified in making all current humans unhappy (without sacrificing their lives, of course) so that the means of reviving dead people are created faster, so that we can stop being penalized for their ended lifespans?

Wouldn't it be only prudent to stop the creation of new humans until we can ensure their lifespans will reach the end of the universe, to avoid taking negative points?

Comment by calvin on Thought Crimes · 2014-01-15T06:12:20.936Z · score: 1 (1 votes) · LW · GW

I guess it is kind of a slippery slope, indeed. There are probably ways in which it could work only as intended (a hardwired chip or whatever), but allowing other people to block your thoughts is only a couple of steps from turning you into their puppet.

As for simulation as a thought crime, I am not sure. If they need to peek inside your brain to check that you are not running illegally constructed internal simulations, the government can just simulate a copy of you (with a warrant, I guess), either torture it or explicitly read its mind (either way terrible) to find out what is going on, and then erase it (I mean murder, but the government does it, so it is kind of better, except not really).

Your approval of such measures probably depends on the relative values you assign to freedom and privacy.

Comment by calvin on Thought Crimes · 2014-01-15T05:45:05.695Z · score: 1 (3 votes) · LW · GW

The way I can see it in sci-fi terms:

If a human mind is the first copy of a brain that has been uploaded to a computer, then it deserves the same rights as any human. There is a rule against running more than one instance of the same person at the same time.

A human mind created on my own computer from first principles, so to speak, does not have any rights, but there is also a law in place to prevent such agents from being created, as human minds are dangerous toys.

Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being would allow such crude interference by third parties in their own thought process. I mean, it starts with NO THINKING ABOUT NANOTECHNOLOGY and in time changes to NO THINKING ABOUT RESISTANCE.


Also, assuming there really is a need to extract some information from an individual, I would reluctantly grant the government the right to create a temporary copy of that individual to be interrogated, interrogate (i.e. torture) the copy, and then delete it shortly afterwards. It is squicky, but in my head it is superior to leaving the original target with memories of the interrogation.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-15T05:06:02.133Z · score: 0 (2 votes) · LW · GW

Let's make a distinction between "I have a prejudice against you" and "I know something about you".

Assuming I know that IQ is a valid and objective measure, I can use it to judge your cognitive skills, and your opinion about the result does not matter to anyone, just as your own opinion about your BMI doesn't.

Assuming I am not sure whether IQ is valid, I would rather refrain from reaching any conclusions or acting as if it actually mattered (because I am afraid of the consequences), thus making it useless to me in my practical day-to-day life.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-15T03:56:50.051Z · score: -4 (6 votes) · LW · GW

IQ can be used to give scientific justification to our internalized biases.

I don't want to limit your rights because you are X. I want to limit your rights because I belong to Y, and as Y does better than X on IQ tests, it is only prudent that we know better what is good for you. I am also not interested in listening to counter-arguments coming from people whose IQ is below 99.

Also, in extreme cases, it can be used to push further policies such as eugenics (the bad kind everyone has in mind when they hear the word "eugenics"):

Ah... I forgot to say that X shouldn't have the right to have children. No offense meant, but we want to avoid the dim outbreeding the bright. Also, keep your stupid daughter away from my son, as I really don't want my own children polluting the genetic purity of our kind.

Comment by calvin on Dangers of steelmanning / principle of charity · 2014-01-14T01:18:20.107Z · score: 2 (2 votes) · LW · GW

Yes, I do stand corrected.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T23:33:15.165Z · score: 2 (2 votes) · LW · GW

Most of the explanations found on cryonics sites do indeed seem to base their arguments on the hopeful claim that, given the nanotechnology and science of the future, every problem connected to, as you say, rebooting would become essentially trivial.

Comment by calvin on Why I haven't signed up for cryonics · 2014-01-13T21:45:42.787Z · score: 0 (0 votes) · LW · GW

This is a good argument, capable of convincing me of the pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.

If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T21:39:04.697Z · score: 0 (0 votes) · LW · GW

I suspect our world views might differ a bit, as I don't wish my values were any different than they are. Why should I?

If Azathoth decided to instill the value that having children is somehow desirable deep into my mind, then I am very happy that as a first-world parent I have all the resources I need to turn it into a pleasant endeavor with a very high expected value (a happy new human who hopefully likes me and hopefully shares my values, though I don't have much confidence in the second bet).

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T21:22:46.456Z · score: -1 (1 votes) · LW · GW

In this case, I concur that your argument may be true if you include animals in your utility calculations.

While I do have reservations against causing suffering in humans, I don't explicitly include animals in my utility calculations, and while I don't support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing, or factory farming; so, with regard to pigs, cows, and chickens, I am a utility monster.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T16:40:00.617Z · score: 0 (0 votes) · LW · GW

Ah, I must have misread your representation; English is not my first language, so sorry about that.

I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if and only if the couple gives birth to 3, not 2, happy children.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T16:25:37.687Z · score: 2 (2 votes) · LW · GW

Am I to assume that all the old, sad hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us so that I don't become upset about the atrocities currently being committed in my name?

We are not even close to the utility monster scenario, and personally I know very few people whom I would consider actual utilitarians.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T15:59:49.454Z · score: 0 (0 votes) · LW · GW

Am I going to have a chance to actually interact with them, see them grow, etc.?

I mean, in the hypothetical case where, as soon as a child is born, nefarious agents of the Population Police snatch him away, never to be seen or heard from again, I don't really see the point of having children.

If, on the other hand, I have a chance to actually act as a parent to him, then I guess it is worth it, even if the child disappears as soon as he reaches adulthood and joins the Secret Society of Ineffective Altruism, never to be heard from again. I get no benefit of care, but I am happy that I introduced a new human into the world (uh... I mean, I actually helped to do so, as it is a two-person exercise, so to speak). It is not the ideal case, but I still consider the effort well spent.

In an ideal world, I still have a relationship with my child even as he or she reaches adulthood, so that I can feel safer knowing there is someone who (hopefully) remembers all the generosity I have granted him and holds me dear.

P.S. Why the programming of Azathoth? In my mind it makes it sound as if the desire to have children were something intrinsically bad.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T13:05:47.470Z · score: 0 (0 votes) · LW · GW

Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist or not, as debunked by Luke here. There are better examples, but I can't find them now.

Also, my two cents are that while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy information that even if superior reasoning and decision-making skills really improve your life, the improvements are likely not as impressive as advertised by hopeful proponents of the systematized-winning theory.

Comment by calvin on Dangers of steelmanning / principle of charity · 2014-01-13T12:53:42.111Z · score: 1 (1 votes) · LW · GW

Well, this is certainly something I agree with, and after looking up the context of the quote, I see that it can be interpreted that way.

I agree that my interpretation wasn't very, well... charitable, but without context it really reads like yet another chronicle of a superior debater celebrating victory over someone who dared to be wrong on the Internet.

Comment by calvin on Dangers of steelmanning / principle of charity · 2014-01-13T12:34:16.504Z · score: -3 (7 votes) · LW · GW

He is just so obviously superior to his opponents, isn't he? I am not a fan of such one-sided accounts, as the other side could just as easily write:

I no longer try to steelman BETA-MEALR [Ban Everything That Anyone Might Experience And Later Regret] arguments as utilitarian. When I do, I just end up yelling at my interlocutor, asking how she could possibly get her arguments so wrong, only for her to reasonably protest that she wasn’t making any arguments, but instead juggled numbers pulled directly from her ass, what am I even talking about?

Is it really proof of superior debating skills, or a piece of evidence against steelmanning?

Comment by calvin on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2014-01-13T12:07:19.527Z · score: 0 (4 votes) · LW · GW

I mean, it is either his authoritative summary or yours, and in all honesty, that guy actually takes care to construct an actual argument instead of resorting to appeals to authority and ridicule.

Personally, I would be more interested in someone explaining exactly how the cues of a piece of information are going to be reassembled and the whole brain reconstructed from partial data.

Proving that cryo-preservation plus restoration does indeed work, and also showing exactly how, seems like a more persuasive way to construct an argument than proving that your opponents have failed to show that what you are claiming is currently impossible.

If cryonics providers don't have a proper way of preserving your brain state (even if they can repair partial damage by guessing), then, I am sorry to say, you are indeed dead.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T11:24:04.794Z · score: 0 (0 votes) · LW · GW

It is true that I wasn't specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to emphasize his life condition.

He was, presumably, killed without his consent, which is why the whole affair seems so morally icky from a non-utilitarian perspective.

If your utility function does not penalize doing bad things as long as the net result is positive, you are likely to end up in a world full of utility monsters.

Comment by calvin on AALWA: Ask any LessWronger anything · 2014-01-13T11:08:06.301Z · score: 0 (4 votes) · LW · GW

I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments.

How do you estimate the probability that AGIs won't take over the world (the people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors in the same boring, old-fashioned, and safe way 100% of our current technology is used?

I am not explicitly saying that MIRI or FAI research is pointless, or anything like that. I just want to point out that they posture as if they were saving the world from imminent destruction, while it is nowhere near certain whether said danger is really the case.

Comment by calvin on AALWA: Ask any LessWronger anything · 2014-01-13T09:20:46.413Z · score: -1 (1 votes) · LW · GW

Seeing as we are talking about speculative dangers from a speculative technology that has yet to be developed, that seems pretty understandable.

I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T08:45:27.453Z · score: 1 (3 votes) · LW · GW

We might find out by trying to apply them to the real world and seeing that they don't work.

Well, it is less common now, but I think the slow retreat of the community from the position that instrumental rationality is the applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.

Comment by calvin on [LINK] Why I'm not on the Rationalist Masterlist · 2014-01-13T07:46:52.747Z · score: 0 (0 votes) · LW · GW

Hopefully, if their use of the word differs from expectations, casual observers won't catch on. I mean...

We want to increase average human cognitive abilities by granting all the children with access to better education.

Wouldn't raise many eyebrows, but if you heard...

We want to increase average human cognitive abilities by discouraging lower IQ people from having children.

...then I can't help feeling that the e-word may crop up a lot. I would probably be inclined to use it myself, in all honesty.

Comment by calvin on Dangers of steelmanning / principle of charity · 2014-01-13T07:25:39.339Z · score: 3 (3 votes) · LW · GW

It is also likely not written in the way they understand the world. I mean, if charity is assuming that the other person is saying something interesting and worth considering, such an approach strikes me as the exact opposite:

Here, this is your bad, unoriginal argument, but I changed it into something better.

I mean, if you are better at arguing for the other side than your opposition is, why do you even speak with them?

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T07:12:18.875Z · score: 0 (0 votes) · LW · GW

Still, if it is possible to have happy children (and I assume happy humans are good stuff), where does the heap of disutility come into play?

EDIT: It is hard to form a meaningful relationship with money, and I would reckon that teaching it to uphold values similar to yours isn't an easy task either. As for taking care, I don't mean palliative care so much as simply the relationship you have with your child.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T06:55:51.631Z · score: 0 (2 votes) · LW · GW

I don't consider myself an explicit rationalist, but my desire to have children stems from the desire to have someone to take care of me when I am older.

Do you see your own conception and subsequent life as a cause of a "huge heap of disutility" that can't be surpassed by the good stuff?

Comment by calvin on Dangers of steelmanning / principle of charity · 2014-01-13T05:44:22.732Z · score: 3 (3 votes) · LW · GW

Personally, I think the principle of charity has more to do with having respect for the ideas and arguments of the other person. I mean, let's say someone says he doesn't eat shrimps because God forbids him from eating shrimps. If I am being charitable, I am going to slightly alter his argument by saying that the Bible explicitly forbids shrimps. That way we don't have to get sidetracked discussing other topics.

You said that shrimps are wretched in the eyes of the Lord, and while I agree that the Old Testament explicitly forbids eating them... blah blah...

That way, we can actually have a meaningful and polite conversation. To illustrate a negative example, let's assume he counters by saying that God explicitly told him not to eat shrimps today. There is a certain temptation to rationalize his position to fit my worldview, say:

You say that your moral intuition forbids you from eating shrimps...

The problem is that this second use is the opposite of charity or steelmanning. It is basically an internalized version of saying "this guy is far too stupid to make a good argument, so I am going to help him by bringing him up to speed". The Principle of Charity turns into the Principle of Hubris, and the conversation turns into a one-man show of intellectual masturbation on my side. I mean, look at me, I can argue the straw-fundamentalist Christian position better than he himself can!

To summarize, assuming that your interlocutor is a smart person capable of making good arguments without your help is a good principle to follow, especially as it is often true.

Comment by calvin on Stupid Questions Thread - January 2014 · 2014-01-13T05:01:29.672Z · score: 0 (0 votes) · LW · GW

I am going to assume that opinion of the suffering hermit is irrelevant to this utility calculation.

Comment by calvin on Why I haven't signed up for cryonics · 2014-01-13T03:59:04.899Z · score: 8 (12 votes) · LW · GW

Would you like to live forever?

For just a $50 monthly fee, agents of the Time Patrol Institute promise to travel back in time and extract your body a few milliseconds before death. In order to avoid causing temporal "paradoxes", we pledge to replace your body with an (almost) identical artificially constructed clone. After your body is extracted and moved to the closest non-paradoxical future date, we will reverse the damage caused by aging, extend your lifespan to infinity, and treat you to a cup of coffee.

While we are fully aware that time travel is not yet possible, we believe that recent advances in nanotechnology and quantum physics, matched with your generous donations, will hopefully allow us to construct a working time machine at some point in the future.

Why not Cryonics?

For all the effective altruists in the audience: please consider that the utility of immortalizing all of humankind is preferable to saving only those few of us who underwent cryonic procedures. If you don't sign your parents up for temporal rescue, you are a lousy son. People who tell you otherwise are simply voicing their internalized deathist-presentist prejudices.

For the selfish, practically minded agents living in 2014: please consider that while, in order for you to benefit from cryonics, it is mandatory that correct brain preservation techniques be developed and popularized during your lifespan, time travel can be developed at any point in the future; there is no need to hurry.

Regards, Blaise Pascal, CEO of TPI

Comment by calvin on Open Thread for January 8 - 16 2014 · 2014-01-13T02:22:05.503Z · score: 1 (3 votes) · LW · GW

Does anyone else have a problem with this particular statement taken from the Cryonics Institute FAQ?

One thing we can guarantee is that if you don't sign up for cryonics is that you will have no chance at all of coming back.

I mean, marketing something as a one-shot chance that might hopefully delay (or prevent) death is hard to swallow, but I can cope with that; this statement, however, reads as if cryonics were the one and only possible way to do so.

Comment by calvin on Open Thread for January 8 - 16 2014 · 2014-01-12T04:02:43.572Z · score: 0 (0 votes) · LW · GW

I was using LeechBlock for old-fashioned reddit blocking for some time, but then I switched to RescueTime (free version), which tracks the time you spend on certain internet sites, and found it much more user-friendly. It does not block the sites, but it shows you a percentage estimate of how productive you are today (e.g. today, 1 hour on the internet, out of which 30 min on Less Wrong - so 50% productive).
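The percentage estimate mentioned above is simple arithmetic: label each tracked site as productive or not, then take the productive share of total tracked minutes. A minimal sketch with hypothetical names (not RescueTime's actual method):

```python
def percent_productive(minutes_by_site: dict, productive_sites: set) -> float:
    """Share of tracked time spent on sites labeled productive, as a percentage."""
    total = sum(minutes_by_site.values())
    if total == 0:
        return 0.0  # nothing tracked yet today
    productive = sum(minutes for site, minutes in minutes_by_site.items()
                     if site in productive_sites)
    return 100.0 * productive / total
```

So an hour online split evenly between a site labeled productive and one labeled unproductive yields the 50% figure from the example.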

Comment by calvin on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2014-01-12T02:50:47.882Z · score: 5 (5 votes) · LW · GW

Can you please elaborate on how and why sufficient understanding of the concept of information-theoretic death as mapping many cognitive-identity-distinct initial physical states to the same atomic-level physical end state helps to alleviate concerns raised by the author?

Comment by calvin on Open thread for January 1-7, 2014 · 2014-01-04T01:31:45.032Z · score: 5 (5 votes) · LW · GW

I can't really offer anything more than personal anecdotes, but here is what I usually do when I try to grab the attention of a group of my peers:

  • If you are talking to several people gathered in circle, and it is my turn to say something important, I make a small step forward so that I physically place myself in the center of the group.
  • When I am speaking, I try to mantain eye contact with all people gathered around, If I focus too much only on the person I am speaking to, everyone else turns their attention towards them as well.
  • I rarely do it myself, as I suppose it is a technique more tailored to public speeches, but conservative use of hand gestures to illustrate what you are talking about probably won't hurt.
  • I probably sound like a self-absorbed jerk writing this, but if I want attention to focus on me rather than on my interlocutor, I often use "I" language. Compare and contrast: "What you say about vegans is true, but you may consider..." (now everybody looks at the person who said something about vegans) versus "I think I agree with what was said about vegans, but I also think..." (now everybody looks at me as I explain my position).

But those are all just little tricks; the surest way of attracting an audience's attention is simply to have something important and interesting to say.

Comment by calvin on Handshakes, Hi, and What's New: What's Going On With Small Talk? · 2014-01-02T23:42:14.607Z · score: 3 (3 votes) · LW · GW

This matches my experience. When I don't want to engage in conversation and someone asks "How are you?", I always politely counter with "Fine, thanks" and carry on with whatever I am doing. I assume the same applies to other people.

Comment by calvin on What if Strong AI is just not possible? · 2014-01-01T19:18:04.431Z · score: 12 (12 votes) · LW · GW

One possible explanation, why we as humans might be incapable of creating Strong AI without outside help:

  • Constructing Human Level AI requires sufficiently advanced tools.
  • Constructing sufficiently advanced tools requires sufficiently advanced understanding.
  • The human brain has "hardware limitations" that prevent it from achieving sufficiently advanced understanding.
  • Computers are free of such limitations, but if we want to program them to serve as sufficiently advanced tools, we still need that understanding in the first place.

Comment by calvin on Critiquing Gary Taubes, Part 4: What Causes Obesity? · 2014-01-01T17:30:36.672Z · score: 0 (0 votes) · LW · GW

I think that for many people, getting fit (even if they arrived at fitness via an incorrect justification) is far more important than spending time analyzing the theoretical underpinnings of fitness. The same goes for getting to heaven, choosing the right cryopreservation technique, learning to cook, or any realm of human activity where we don't learn theory FOR THE SAKE OF BEING RIGHT, but FOR THE SAKE OF ACHIEVING OUR GOALS.

I mean, I concur that having a vastly incorrect map can result in problems (injuries during workouts, ineffective training routines, ending up in hell), but after you update the map a bit you hit the point of diminishing returns, and it is probably better to focus on the practical part than to theorize (especially in the realm of physical pursuits).

Comment by calvin on Open thread for December 24-31, 2013 · 2013-12-25T00:03:58.022Z · score: 4 (4 votes) · LW · GW

Assuming your partner is not closely associated with LW or the rationalist-transhumanist movement, you might be better off looking for advice elsewhere. Just saying.

Comment by calvin on 2013 Less Wrong Census/Survey · 2013-11-30T18:56:28.328Z · score: 1 (1 votes) · LW · GW

It can get even better, assuming you put your moral reasoning aside.

What you could do is deliberately defect and then publicly announce to everyone that it was the result of random chance.

If you are concerned about lying to others, then I concur that accidentally choosing to defect is the best of both worlds.

Comment by calvin on Please vote for a title for an upcoming book · 2013-11-30T01:23:39.256Z · score: 1 (1 votes) · LW · GW

I also liked "Smarter Than Us"; it sounds a lot like a popular-science book from an airport store.

I don't like the other titles, as they seem to rely too much on fearmongering.

Comment by calvin on 2013 Less Wrong Census/Survey · 2013-11-30T00:33:56.348Z · score: 0 (0 votes) · LW · GW

I am not sure I follow.

If you predict that the majority of 'rational' people (say, more than 50%) would pre-commit to cooperation, then you have a great opportunity to shaft them by defecting and running off with their money.

Personally, I decided to defect to ensure that other people who also defected wouldn't take advantage of me.

Comment by calvin on Effective Altruism and Cryonics, Contest Results · 2013-11-27T19:03:50.025Z · score: 0 (4 votes) · LW · GW

The problem I see with your reasoning lies in the term "potentially save".

Personally, I think it is better to focus our efforts on actions that have a >1% chance of increasing the quality of life and average lifespan of huge populations (say, fighting disease and famine) rather than on something that has a 0.0005% chance of preserving your mind and body well enough that there is a further 0.0005% chance of achieving immortality or an extended lifespan when future generations decide to "thaw" you (or even give you an awesome new body, if you are lucky).

As for judgements, I hope they wouldn't really mind, just as none of our contemporaries condemns the ancient Egyptians for not embalming more corpses, or medieval philosophers for not seeking the philosopher's stone with enough effort.