Comments

Comment by Soki on PSA: Learn to code · 2012-05-27T13:41:41.937Z · LW · GW

Learning Computer Science and Theoretical Computer Science is useful for the kind of exercises found on Project Euler.

Comment by Soki on How I Lost 100 Pounds Using TDT · 2011-03-15T19:00:33.889Z · LW · GW

Assuming that the effects of a single day of dieting are very small, the utility of not eating the knots today is likely lower than the utility of eating them, for every possible future behavior.
A CDT agent only decides what it does now, so a CDT agent chooses to eat the knots.
But an EDT, TDT, or UDT agent would choose to diet.
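
To make the structure concrete, here is a toy model with entirely made-up numbers (the day count, the pleasure of eating, and the value of sticking to the diet are all assumptions): one day of dieting barely matters, but the policy of dieting every day, which is what a timeless decision controls, can dominate.

```python
# Toy model of the argument above; all numbers are made-up assumptions.
pleasure_per_knot_day = 1.0         # utility of eating the knots on a given day
effect_of_one_diet_day = 0.01       # health utility of a single day of dieting
n_days = 365                        # horizon over which the same choice repeats
value_of_sticking_to_diet = 1000.0  # assumed utility of actually losing the weight

# CDT: holds future behaviour fixed, so it only compares today's terms.
u_eat_today = pleasure_per_knot_day
u_diet_today = effect_of_one_diet_day
print("CDT eats the knots:", u_eat_today > u_diet_today)   # True: one day barely matters

# TDT/UDT: the decision made today is the decision made on every similar day,
# so the comparison is between whole policies.
u_always_eat = n_days * pleasure_per_knot_day
u_always_diet = value_of_sticking_to_diet
print("TDT/UDT diets:", u_always_diet > u_always_eat)      # True with these numbers
```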

Comment by Soki on Suggestions for a presentation on FAI? · 2011-02-11T21:09:25.075Z · LW · GW

Despite the fact that your audience is familiar with the Singularity, I would still emphasize the potential power of an AGI.
You could say something about the AI spreading across the Internet (a 1,000- to 1,000,000-fold increase in processing power), bootstrapping nanotech, and rewriting its own source code, and point out that all of this could happen very quickly.

Ask them what they think such an AI would do, and if they show signs of anthropomorphism, explain to them that they are biased (the mind projection fallacy, for example).
You can also ask them what goal they would give such an AI and show what kind of disaster might follow.
That can lead you to the complexity of wishes (a computer does not have common sense) and the complexity of human values.

I would also choose a nice set of links to lesswrong.org and singinst.com that they could read after the presentation.

It would be great if you could give us some feedback after your presentation: what worked, what they found odd, what their reactions were, and what questions they asked.

Comment by Soki on Counterfactual Calculation and Observational Knowledge · 2011-02-01T14:37:05.496Z · LW · GW

It may not be what wedrifid meant, but does Omega always appear after you see the result on the calculator?
Does Omega always ask:
"Consider the counterfactual where the calculator displayed opposite_of_what_you_saw instead of what_you_saw"?

If that is true, then I guess it means that what Omega replaces your answer with on the test sheet, in the worlds where you see "even", is the answer you write on the counterfactual test sheet in the worlds where you see "odd", and likewise with "even" and "odd" exchanged.

Comment by Soki on A sense of logic · 2010-12-11T15:59:23.735Z · LW · GW

When I hear a bad argument, it feels like listening to music and hearing a wrong note.
In one case it is the logical causality that is broken; in the other, the interval between notes.
Actually it is worse, because a pianist usually gets back on track.

Comment by Soki on Compartmentalization in epistemic and instrumental rationality · 2010-09-17T21:44:50.838Z · LW · GW

Ask yourself which aspects of what you want to prove are thrilling. Look for what you cannot explain but feel is true.

I want to write a proof.

Before writing, you should be satisfied with your understanding of the problem. Try to find holes in it, as if you were a teacher reading a student's work.

You should also ask yourself why you want to write a correct proof, and remember that a proof that is wrong is not a proof.

Comment by Soki on Newcomb's Problem: A problem for Causal Decision Theories · 2010-08-18T20:49:36.923Z · LW · GW

I think that you should finish this sequence on LessWrong.
It is less technical and easier to understand than other posts on decision theory, and it would therefore be valuable for newcomers.

Comment by Soki on Open Thread, August 2010 · 2010-08-11T18:59:33.963Z · LW · GW

I support this idea.

But what about copyright issues? What if posts and comments are owned by their writers?

Comment by Soki on Open Thread, August 2010 · 2010-08-07T05:07:15.396Z · LW · GW

knb, does your nephew know about lesswrong, rationality and the Singularity? I guess I would have enjoyed reading such a website when I was a teenager.

When it comes to a physical book, Engines of Creation by Drexler can be a good way to introduce him to nanotechnology and to what science can make happen. (I know that nanotech is far less important than FAI, but I think it is more "visual": you can imagine nanobots manufacturing stuff or curing diseases, while you cannot imagine a hard takeoff.)
Teenagers need dreams.

Comment by Soki on The Threat of Cryonics · 2010-08-07T04:28:12.703Z · LW · GW

I just made a small calculation:

The number of deaths in the US is about 2.5 million per year.
The cost of cryonics is about $30,000 per "patient" with the Cryonics Institute.
So if everyone wanted to be frozen, it would cost $75 billion a year, about 0.5% of US GDP, or 3% of healthcare spending.
This neglects the economies of scale, which could greatly reduce the price.
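
For reference, a minimal sketch of the arithmetic (the GDP and healthcare-spending figures are my own rough circa-2010 assumptions):

```python
# Rough sanity check of the calculation above. The death count and cryonics
# price come from the comment; the GDP and healthcare figures are rough
# circa-2010 assumptions.
deaths_per_year = 2.5e6        # annual deaths in the US
cost_per_patient = 30_000      # approximate Cryonics Institute price, in dollars
us_gdp = 15e12                 # assumed US GDP, in dollars
healthcare_spending = 2.5e12   # assumed US healthcare spending, in dollars

total = deaths_per_year * cost_per_patient
print(f"total: ${total / 1e9:.0f} billion per year")                        # ~$75 billion
print(f"share of GDP: {total / us_gdp:.1%}")                                # ~0.5%
print(f"share of healthcare spending: {total / healthcare_spending:.1%}")   # ~3.0%
```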

So even with a low probability of success, cryonics seems to be a good choice.

Comment by Soki on Politicians stymie human colonization of space to save make-work jobs · 2010-07-21T03:46:06.414Z · LW · GW

I have no reference, but as far as I understand, deuterium-tritium fusion is easier to achieve than deuterium-helium-3 fusion, while deuterium-helium-3 seems cleaner and its energy is easier to harvest.
So I think that the first energy-producing fusion reactor will be a deuterium-tritium one, and deuterium-helium-3 will come later.

Comment by Soki on Politicians stymie human colonization of space to save make-work jobs · 2010-07-20T18:22:48.256Z · LW · GW

Helium-3 could be mined from the Moon. It would be a good fusion fuel, and since it is rare on Earth, it makes sense to get it from space.

Comment by Soki on (One reason) why capitalism is much maligned · 2010-07-19T17:10:37.289Z · LW · GW

This video addresses this question: Anna Salamon's 2nd talk at Singularity Summit 2009, "How Much it Matters to Know What Matters: A Back of the Envelope Calculation".
It is 15 minutes long, but you can take a look at 11m37s.

Edit: added the name of the video; thanks for the remark, Vladimir.

Comment by Soki on So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning · 2010-07-17T17:31:36.830Z · LW · GW

I would not say that this person replaced "and" by "or".
I guess they considered the statement "Lisa is a bank teller and a feminist" to be "50%" true if Lisa turns out to be a feminist but not a bank teller.

The formula used would be something like P(AB)=1/2*(P(A)+P(B))
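
A minimal illustration of how that heuristic diverges from the true conjunction probability; the two probabilities below are made up for the example:

```python
# Made-up probabilities, purely for illustration.
p_bank_teller = 0.05   # P(A): Lisa is a bank teller
p_feminist = 0.90      # P(B): Lisa is a feminist

# The hypothesised judgment: average the two, treating a half-true statement as 50%.
heuristic = 0.5 * (p_bank_teller + p_feminist)   # 0.475

# The actual conjunction can never exceed min(P(A), P(B)).
upper_bound = min(p_bank_teller, p_feminist)     # 0.05

print(heuristic, upper_bound)   # the averaged judgment far exceeds any possible P(A and B)
```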

Comment by Soki on Book Club Update, Chapter 3 of Probability Theory · 2010-07-17T16:19:40.374Z · LW · GW

What you said is true: you exchange the numbers of Drawn and Counted marbles.

However, on Wikipedia the counted balls are the white ones (they are indeed called "defective", so on Wikipedia people count the number of bad balls).

There was also a mistake in the part about symmetries; I replaced
"Swapping the roles of black and drawn marbles" with
"Swapping the roles of white and drawn marbles",
since m is the number of white marbles.
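
As a quick numeric check of that symmetry (scipy is used here purely for illustration, and the specific counts are made up):

```python
from scipy.stats import hypergeom

# Check that swapping the roles of the white marbles and the drawn marbles
# leaves the hypergeometric distribution unchanged.
N_total = 20   # total marbles in the urn (made-up number)
m_white = 7    # number of white marbles (the "counted" ones)
n_drawn = 5    # number of marbles drawn

for k in range(min(m_white, n_drawn) + 1):
    p1 = hypergeom.pmf(k, N_total, m_white, n_drawn)  # white marbles as "successes"
    p2 = hypergeom.pmf(k, N_total, n_drawn, m_white)  # roles swapped
    assert abs(p1 - p2) < 1e-12
    print(k, p1)
```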

Comment by Soki on Cryonics Wants To Be Big · 2010-07-12T14:42:11.503Z · LW · GW

why should the future want us?

Someone who knew you may want to bring you back.
If it takes centuries, then the more people are frozen the better, since it becomes more likely that someone you knew will be brought back by someone else, who may then bring you back too.
This assumes that the government does not prevent people from doing this.

Comment by Soki on A proposal for a cryogenic grave for cryonics · 2010-07-08T16:18:39.738Z · LW · GW

If you care about cryonics and its sustainability during an economic collapse or worse, chemical fixation might be a good alternative. http://en.wikipedia.org/wiki/Chemical_brain_preservation

The main advantages are that it requires no cooling and is cheap. People might be buried normally after the procedure, so it would seem less weird.
However, a good perfusion of the brain with the fixative is hard to achieve.

Chemical fixation could also be combined with those low-maintenance cryonic graves, just in case the nitrogen boils off.

Comment by Soki on Open Thread: July 2010 · 2010-07-03T21:07:30.126Z · LW · GW

First of all, I think that if Al does not see a sample, it makes the problem a bit simpler. That is, Al just tells Bob that he (Bob) is the first person who saw 25 big fish.

I think that the number N of scientists matters, because the probability that someone will come to see Al depends on that.

Let's call B the event that the lake has 75% big fish, S the opposite, and C the event that someone comes, which means that someone saw 25 big fish.

Once Al sees Bob, he updates:
P(B|C) = P(B) * P(C|B) / (1/2 * P(C|B) + 1/2 * P(C|S)).
When N tends toward infinity, both P(C|B) and P(C|S) tend toward 1, and P(B|C) tends to 1/2.
But for small values of N, P(C|B) can be very small while P(C|S) will be quite close to 1.
In that case, the fact that someone was chosen lowers the probability that the lake has mostly big fish.

If N=infinity, then the probability of being chosen is 0, and I cannot use Bayes' theorem.
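
Here is a small sketch of that update. The two per-scientist probabilities are made up only to mirror the regime described above (P(C|B) small, P(C|S) close to 1); the real values would depend on the details of the original setup.

```python
# Sketch of the update above; p_c1_b and p_c1_s are assumed per-scientist
# probabilities of seeing 25 big fish under each hypothesis (made-up values).
p_c1_b = 0.05   # P(one given scientist sees 25 big fish | B)
p_c1_s = 0.90   # P(one given scientist sees 25 big fish | S)

def posterior_b(n_scientists: int) -> float:
    """P(B | someone comes to Al), with a 1/2 prior on each hypothesis."""
    p_c_b = 1 - (1 - p_c1_b) ** n_scientists   # at least one scientist qualifies
    p_c_s = 1 - (1 - p_c1_s) ** n_scientists
    return p_c_b / (p_c_b + p_c_s)

for n in (1, 10, 100, 10_000):
    print(n, round(posterior_b(n), 3))   # drifts toward 1/2 as N grows
```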

If Charlie keeps inviting scientists until one sees 25 big fish, then it becomes complicated, because the probability that you are invited is greater if the lake has more big fish. It may be a bit like the Sleeping Beauty or the absent-minded driver problem.

Edited for formatting and misspellings

Comment by Soki on Book Club Update, Chapter 2 of Probability Theory · 2010-07-01T21:54:39.007Z · LW · GW

It is not very important, but since you mentioned it :

The interval of convergence of the Taylor series of 1/(1-z) at z=0 is indeed (-1,1).

But "1/(1-z) = 1 + z + O(z^2) for all z" does not make sense to me.

1/(1-z) = 1 + z + O(z^2) means that there is an M such that |1/(1-z) - (1 + z)| is no greater than M*z^2 for every z close enough to 0. It is about the behavior of 1/(1-z) - (1 + z) as z tends toward 0, not about z ranging over (-1,1).
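
For what it's worth, a quick symbolic check with sympy (used purely as an illustration): the error of the degree-1 approximation is z^2/(1-z), which is indeed O(z^2) as z tends toward 0.

```python
import sympy as sp

z = sp.symbols('z')

# Error term of the degree-1 Taylor approximation of 1/(1-z) at z = 0.
error = sp.simplify(1/(1 - z) - (1 + z))
print(error)                            # z**2/(1 - z) (possibly written as -z**2/(z - 1))

# Series expansion at z = 0 for comparison.
print(sp.series(1/(1 - z), z, 0, 3))    # 1 + z + z**2 + O(z**3)
```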

Comment by Soki on Book Club Update, Chapter 2 of Probability Theory · 2010-07-01T15:23:58.058Z · LW · GW

I could not figure out why alpha > 0 either, and it seems wrong to me too. But this does not look like a problem.

We know that J is an increasing function because of 2-49. So in 2-53, alpha and log(x/S(x)) must have the same sign, since the rest of the right-hand side tends toward 0 as q tends toward +infinity.

Then b is positive, and I think that is all that matters.

However, if alpha = 0, b is not defined. But if alpha = 0, then log(x/S(x)) = 0 as a consequence of 2-53, so x/S(x) = 1. There is only one x that gives us this, since S is strictly decreasing. And by continuity we can still get 2-56.