Posts

June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. 2018-02-23T11:25:12.092Z
Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines. 2012-11-26T20:56:00.183Z
Minimum Viable Workout Routine is Dangerously Misinformative 2012-06-24T13:02:42.468Z
Less Wrong used to like Bitcoin before it was cool. Time for a revisit? 2012-06-20T13:40:24.642Z
Transhumanist philosopher David Pearce AMA on Reddit 2012-03-22T18:59:46.732Z
[link] Faster than light neutrinos due to loose fiber optic cable. 2012-02-22T21:52:54.244Z
You Are Not So Smart (Pop-Rationality Book) 2011-11-01T19:42:37.243Z

Comments

Comment by betterthanwell on June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. · 2018-02-23T13:40:20.847Z · LW · GW

Hopefully, AGI is at least "90 years" out.

Comment by betterthanwell on June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. · 2018-02-23T12:57:43.710Z · LW · GW

AlphaGo's victory over World Champion Lee Sedol made a (seemingly) deep impression on me at the time.

I had just NOT expected that. I had expected the game to remain intractable for decades. But the initial excitement and mild sense of doom that followed soon faded. I'm not a computer scientist, just a civilian interested for philosophical reasons.

But many people in attendance at the Alan Turing centenary celebration were world champions of computer science. Either none of them knew any better, or, if any of them did suspect otherwise, it seems that any suspicion that humans would be, uh, defeated at go within the next decade was itself defeated by subtle snickering and mild peer pressure.

Comment by betterthanwell on How has lesswrong changed your life? · 2015-04-05T20:49:13.593Z · LW · GW

I discovered the idea of Bitcoin early, made a life-changing amount of money.

Comment by betterthanwell on [LINK] No Boltzmann Brains in an Empty Expanding Universe · 2014-05-14T23:39:39.642Z · LW · GW

FQXi 2014 — Sean Carroll: "Quantum Fluctuations in de Sitter Space (do not happen)"

Comment by betterthanwell on What is the most anti-altruistic way to spend a million dollars? · 2014-03-30T17:48:48.165Z · LW · GW

Thus, I maintain the attacks were a huge failure at accomplishing the attackers' political agenda.

Osama Bin Laden (2004):

...All that we have mentioned has made it easy for us to provoke and bait this administration. All that we have to do is to send two Mujahedin to the farthest point East to raise a piece of cloth on which is written al-Qa'ida in order to make the generals race there to cause America to suffer human economic and political losses without their achieving for it anything of note other than some benefits to their private companies. This is in addition to our having experience in using guerrilla warfare and the war of attrition to fight tyrannical superpowers as we alongside the Mujahedin bled Russia for 10 years until it went bankrupt and was forced to withdraw in defeat. All Praise is due to Allah.

So we are continuing this policy in bleeding America to the point of bankruptcy. Allah is willing and nothing is too great for Allah. That being said, those who say that al-Qa'ida has won against the administration in the White House or that the administration has lost in this war have not been precise because when one scrutinizes the results, one cannot say that Al-Qa'ida is the sole factor in achieving these spectacular gains. Rather, the policy of the White House that demands the opening of war fronts to keep busy their various corporations -- whether they be working in the field of arms or oil or reconstruction -- has helped al-Qa'ida to achieve those enormous results. And so it has appeared to some analysts and diplomats that the White House and us are playing as one team towards the economic goals of the United States even if the intentions differ. And it was to these sorts of notions and their like that the British diplomat and others were referring in their lectures at the Royal Institute of International Affairs (when they pointed out that) for example, al-Qa'ida spent $500,000 on the event, while America in the incident and its aftermath lost -- according to the lowest estimates -- more than 500 billion dollars, meaning that every dollar of al-Qa'ida defeated a million dollars by the permission of Allah besides the loss of a huge number of jobs. As for the size of the economic deficit, it has reached record, astronomical numbers estimated to total more than a trillion dollars. And even more dangerous and bitter for America is that the Mujahedin recently forced Bush to resort to emergency funds to continue the fight in Afghanistan and Iraq which is evidence of the success of the bleed-until-bankruptcy plan with Allah's permission.

http://www.washingtonpost.com/wp-dyn/articles/A16990-2004Nov1.html

Comment by betterthanwell on December 2013 Media Thread · 2013-12-02T22:28:23.352Z · LW · GW

Gwern Branwen plays a notable part in this recent story about the demise of one ill-fated black market:

http://motherboard.vice.com/blog/did-one-of-the-silk-roads-successors-just-commit-the-perfect-bitcoin-scam

Comment by betterthanwell on Open Thread, June 16-30, 2013 · 2013-06-17T18:10:17.915Z · LW · GW

Aaron Winborn: Monday was my 46th birthday and likely my last. Anything awesome I should try after I die?

Just over two years ago, I was diagnosed with ALS, also known as Lou Gehrig's Disease. In short, that means that my mind will increasingly become trapped in my body as the motor neurons continue to die, and the muscles atrophy and waste away, until my diaphragm dies, bringing me with it.

...

But yes, there is a silver lining to this all, such as it is. Kim Suozzi made a similar plea to the Internet a year ago today, and came up with the brilliant idea of freezing her body in the hopes of a distant advanced technology being able to revive her someday. Her body now rests at liquid nitrogen temperatures.

...

Comment by betterthanwell on Justifiable Erroneous Scientific Pessimism · 2013-05-10T04:33:27.862Z · LW · GW

"I think there should be a law of Nature to prevent a star from behaving in this absurd way!" (Eddington, 1935)

Eddington erroneously dismissed M_(white dwarf) > M_limit ⇒ "a black hole", but didn't he correctly anticipate new physics?
Do event horizons (Finkelstein, 1958) not prevent nature from behaving in "that absurd way", so far as we can ever observe?
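For reference, here is the limit in question in its standard parametric form (my own gloss, correct only up to a dimensionless factor of order unity; not a quotation from Eddington or Chandrasekhar):

```latex
% Chandrasekhar mass, parametric form (mu_e = mean molecular weight per electron)
M_{\mathrm{Ch}} \;\sim\; \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_{\mathrm{H}})^{2}}
\;\approx\; 1.4\, M_\odot \quad (\mu_e \approx 2)
```

A white dwarf above this mass cannot be held up by electron degeneracy pressure, which is exactly the regime Eddington refused to take at face value.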

Comment by betterthanwell on Recent updates to gwern.net (2012-2013) · 2013-03-18T18:04:58.984Z · LW · GW

With some awe and much respect, I would say that you are an inspiration, but that has already been said. I'll upvote that and say something else instead. For whatever reason, some part of my brain tells me: "Yeah, this is pretty much what I would expect the research interests of a "supervillain"-in-training to look like". I don't pretend to know exactly what awesomeness is, but you have grown a lot of it.

Comment by betterthanwell on Farewell Aaron Swartz (1986-2013) · 2013-01-12T19:29:01.123Z · LW · GW

EDIT: OTOH, there's this... What person makes a will at 26?

It seems he published "If i get hit by a truck" in 2002, at age 16. Sad. Also, perhaps, awe-inspiring. Eliminating the problem of one's bus-factor would ordinarily be admirable... if you do it for the contingency where you simply get hit by a bus. I want to believe, but can't quite make myself believe, that he didn't write this, at that time, in anticipation of an end like this. In that case: not awe-inspiring, only sad.

Comment by betterthanwell on Notes on Psychopathy · 2012-12-21T14:11:07.886Z · LW · GW

Almost no chance at all. Keep in mind, the most important thing, when it comes to dealing with this, this... insidious threat, is to be intensely careful in avoiding any false negatives. There must be none whatsoever. And besides, who cares, really — about a few — or a few million false positives. Oh, and this condition, it's really quite heritable, and, as head of the program, you would need to... well, deal with the children, in a like manner as that of the parents. Not that this would be a problem, of course.

Joe Stalin, the NKVD, the Moscow Trials, and the Great Purge sort of came to mind.

Comment by betterthanwell on [Link] 'Something feels wrong with the state of philosophy today.' · 2012-12-21T13:42:44.904Z · LW · GW

Neat. Upvote delivered, as promised.

Comment by betterthanwell on [Link] 'Something feels wrong with the state of philosophy today.' · 2012-12-21T12:35:17.751Z · LW · GW

In the future, please consider adding a paragraph that provides a summary, or at least a snapshot, of the article's contents.

Yes. However, I would suggest not waiting for next time to do it right. Do it right, now.

I will downvote the top post, but I promise to upvote it, if and when benthamite's suggestion is followed.

Sorry for the carrot and stick, but doing so shouldn't take more than a minute.
(Which would be less than was spent on writing this.)

Comment by betterthanwell on Notes on Psychopathy · 2012-12-20T14:16:29.732Z · LW · GW

Maybe they need better treatments. Has anyone asked psychopaths - "How would you convince a psychopath like you to stop doing X?" Has anyone let psychopaths try? Aren't they the master manipulators? They even have a readily available model of a psychopath to test out the theory on. How convenient! Sufficiently motivating a psychopath with rewards for changing the mind of another psychopath might be an effective treatment for the first psychopath. Did they try that treatment?

Something like it was tried in Canada, in the sixties, with LSD, in a four-year experiment where a group of 30 psychopaths were, at least apparently, temporarily reformed through unconventional means.

This strange and unique program was obliquely referenced in the top post:

...operated for over a decade in a maximum security psychiatric hospital and drew worldwide attention for its novelty. The program was described at length by Barker and colleagues…The results of a follow-up conducted an average of 10.5 years after completion of treatment showed that, compared to no program (in most cases, untreated offenders went to prison), treatment was associated with lower violent recidivism for non-psychopaths but higher violent recidivism for psychopaths.

The Insane Criminal as Therapist
E.T. Barker, M. H. Mason, The Canadian Journal of Corrections, Oct. 1968.

Here's an account from a recent pop-psychology book, The Psychopath Test:

In the late 1960s, a young Canadian psychiatrist believed he had the answer. His name was Elliott Barker and he had visited radical therapeutic communities around the world, including nude psychotherapy sessions occurring under the tutelage of an American psychotherapist named Paul Bindrim. Clients, mostly California free-thinkers and movie stars, would sit naked in a circle and dive headlong into a 24-hour emotional and mystical rollercoaster during which participants would scream and yell and sob and confess their innermost fears. Barker worked at a unit for psychopaths inside the Oak Ridge hospital for the criminally insane in Ontario. Although the inmates were undoubtedly insane, they seemed perfectly ordinary. This, Barker deduced, was because they were burying their insanity deep beneath a facade of normality. If the madness could only, somehow, be brought to the surface, maybe it would work itself through and they could be reborn as empathetic human beings.

And so he successfully sought permission from the Canadian government to obtain a large batch of LSD, hand-picked a group of psychopaths, led them into what he named the "total encounter capsule", a small room painted bright green, and asked them to remove their clothes. This was truly to be a radical milestone: the world's first ever marathon nude LSD-fuelled psychotherapy session for criminal psychopaths.

Barker's sessions lasted for epic 11-day stretches. There were no distractions – no television, no clothes, no clocks, no calendars, only a perpetual discussion (at least 100 hours every week) of their feelings. Much like Bindrim's sessions, the psychopaths were encouraged to go to their rawest emotional places by screaming and clawing at the walls and confessing fantasies of forbidden sexual longing for each other, even if they were, in the words of an internal Oak Ridge report of the time, "in a state of arousal while doing so".

...

Barker watched it all from behind a one-way mirror and his early reports were gloomy. The atmosphere inside the capsule was tense. Psychopaths would stare angrily at each other. Days would go by when nobody would exchange a word. But then, as the weeks turned into months, something unexpected began to happen.

The transformation was captured in an incredibly moving film. These tough young prisoners are, before our eyes, changing. They are learning to care for one another inside the capsule.

We see Barker in his office, and the look of delight on his face is quite heartbreaking. His psychopaths have become gentle. Some are even telling their parole boards not to consider them for release until after they've completed their therapy. The authorities are astonished.

Several of the 30 participants of the experiment went on to commit violent homicides some years after release.

An internal memo from the experiment: "LSD in a Coercive Milieu Therapy Program" (E.T. Barker)

Intriguing.

Comment by betterthanwell on Musk, Mars and x-risk · 2012-12-01T13:31:25.525Z · LW · GW

"On the Earth, the IPCC states that "a 'runaway greenhouse effect'—analogous to Venus—appears to have virtually no chance of being induced by anthropogenic activities."
http://www.ipcc.ch/meetings/session31/inf3.pdf

Hmm, the IPCC asserts this statement without providing any argument to support it.

Some quick thoughts: In the beginning, there were no oceans. The earth was molten and without form. Now, assume a Venusian runaway is a possibility for this planet's climate. Why has it not already occurred, much, much earlier in the planet's history?

The planet was very much hotter and more humid in the very distant past. The CO2 in the oceans and the methane in the permafrost were captured from the atmosphere. The O2 in the atmosphere is a biogenic waste product of photosynthesis.

I do think the oceans will boil eventually, not because of global warming, but because of solar warming, after the sun has depleted its hydrogen.

Comment by betterthanwell on Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines. · 2012-11-27T22:46:32.369Z · LW · GW

Welcome, and thanks for the comments.

Even the term getting out there is a positive!

Agreed.

If journalism demands that you stick to Hollywood references when communicating a concept,
it wouldn't be so bad if journalists managed to understand and convey the distinction between:

  • The wholly implausible, worse than useless "Terminator humanoid hunter-killer robot" scenario.
  • The not completely far-fetched "Skynet launches every nuke, humanity dies" scenario.

Comment by betterthanwell on Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines. · 2012-11-27T11:30:42.379Z · LW · GW

Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.

Wow. This particular mistake seems an unlikely and even difficult one to make in good faith,
as opposed to, for example, through outright dishonesty.

Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.

Never mind, it seems they don't even try to be honest.

Comment by betterthanwell on Musk, Mars and x-risk · 2012-11-25T13:38:59.037Z · LW · GW

Which catastrophic risks does a mars colony mitigate? ... Climate change : yes

If a Mars colony mitigates catastrophic risk (extinction risk?) from climate change,
then climate change is not an existential risk to human civilization on Earth.

If humans can thrive on Mars, Earth-based humanity will be able to cope with any climate change less drastic than transforming the climate of Earth to something as hostile as the current climate of Mars.

Comment by betterthanwell on Causal Reference · 2012-10-21T03:21:36.534Z · LW · GW

Mainstream status points to /Eliezer_Yudkowsky-drafts/ (Forbidden: You aren't allowed to do that.)

Comment by betterthanwell on Request for sympathy; frustrated with Dark Side · 2012-10-17T20:52:57.942Z · LW · GW

It is really quite frustrating to discuss the intersection of physics and free will with a man who is capable of posting this (...)

So... Don't?

Comment by betterthanwell on The basic argument for the feasibility of transhumanism · 2012-10-14T15:12:12.478Z · LW · GW

What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.

I'm not sure if this is an objection many people are likely to raise, or a good one, but in any case, here are my initial thoughts:

Transhumanism is just a set of values, exactly like humanism is a set of values. The feasibility of transhumanism can be shown by compiling a list of those values that are said to qualify someone as a transhumanist, and noting the observed existence of people with such values, whom we then slap a label on and say: Here is a transhumanist!

Half an hour on Google should probably suffice to persuade the sceptic that transhumanists do in fact exist, and therefore transhumanism is feasible. And so we're done.


I realize that this is not what you mean when you refer to the feasibility of transhumanism. You want to make an argument for the possibility of "actual transhumans". Something along the lines of: "It is feasible that humans with quantitatively or qualitatively superior abilities, in some domain, relative to some baseline (such as the best, or the average performance of some collection of humans, perhaps all humans) can exist." Which seems trivially true, for the reasons you mention.

Where are the boundaries of human design space? Who do we decide to put in the plain old human category? Who do we put in the transhuman category — and who is just another human with some novel bonus attribute?

If one goes for such a definition of a transhuman as the one I propose above, are world record holding athletes then weakly transhuman, since they go beyond the previously recorded bounds of human capability in strength, or speed, or endurance?

I'd say yes, but justifying that would require a longer reply. One question one would have to answer is: Who is a human? (The answers one would get to this question have likely changed quite a bit since the label "human" was first invented.)


If one allows the category of things that receive a "yes" in reply to the question "is this one a human?" to change at all, to expand or indeed to grow over time, perhaps by an arbitrary amount (which is exactly what seems, to me at least, to have happened, and to continue to be the case), then, perhaps, there will never be a transhuman. Only a growing category of things which one considers to be "human", including some humans that are happier, better, stronger and faster than any current or previously recorded human.

In order to say "this one is a transhuman" one needs to first decide upon some limits to what one will call "human", and then decide, arbitrarily, that whoever goes beyond these limits, we will put into this new category, instead of continuing to relax the boundaries of humanity, so as to include the new cases, as is usual.

Comment by betterthanwell on The noncentral fallacy - the worst argument in the world? · 2012-10-05T22:06:34.748Z · LW · GW

I'm Eliezer Yudkowsky! Do you have any idea how many distinct versions of me there are in Tegmark Levels I through III?

1?

Comment by betterthanwell on The Useful Idea of Truth · 2012-10-05T11:10:23.846Z · LW · GW

(Continued)

Page 20:

According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival.

[a number of indicative, but not decisive quotes omitted]


I had hoped to find some decisive sound bite in part one, which is a brief discussion of the epistemological problems facing any theory of scientific method, and an outline of Popper's framework, but it looks like I shall have to go deeper. Will look into this over the weekend.

I also found another, though much more recent, candidate: David Deutsch in The Beginning of Infinity, Chapter 1 on "The Reach of Explanations". Though I'm beginning to suspect that although they both point out that "you have to look at things to draw accurate maps of them...", and describe "causal processes producing map-territory correspondences" (for example, between some state of affairs and the output of some scientific instrument), both Deutsch and Popper seem to have omitted what one may call the "neuroscience of epistemology." (Where the photon reflects off your shoelace, gets absorbed by your retina, leading to information about the configuration of the world becoming entangled with some corresponding state of your brain, and so on.) This is admittedly quite a crucial step, which Yudkowsky's explanation does cover, and which I cannot recall having seen elsewhere.

Comment by betterthanwell on The Useful Idea of Truth · 2012-10-05T11:09:51.428Z · LW · GW

A (very) quick attempt, perhaps this will suffice? (Let me know if not.)

I begin with the tersest possible defense of my claim that Popper argued that "you actually have to look at things to draw accurate maps of them...", even though this particular example is particularly trivial:

Page 19:

(Thus the statement, ‘It will rain or not rain here tomorrow’ will not be regarded as empirical, simply because it cannot be refuted; whereas the statement, ‘It will rain here tomorrow’ will be regarded as empirical.)

To paraphrase: You actually have to look out the window to discover whether it is raining or not.


Continuing, page 16:

The task of formulating an acceptable definition of the idea of an ‘empirical science’ is not without its difficulties. Some of these arise from the fact that there must be many theoretical systems with a logical structure very similar to the one which at any particular time is the accepted system of empirical science. This situation is sometimes described by saying that there is a great number—presumably an infinite number— of ‘logically possible worlds’. Yet the system called ‘empirical science’ is intended to represent only one world: the ‘real world’ or the ‘world of our experience’.*1

Various objections might be raised against the criterion of demarcation here proposed. In the first place, it may well seem somewhat wrong-headed to suggest that science, which is supposed to give us positive information, should be characterized as satisfying a negative requirement such as refutability. However, I shall show, in sections 31 to 46, that this objection has little weight, since the amount of positive information about the world which is conveyed by a scientific statement is the greater the more likely it is to clash, because of its logical character, with possible singular statements. (Not for nothing do we call the laws of nature ‘laws’: the more they prohibit the more they say.)

My proposal is based upon an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements.4 For these are never derivable from singular statements, but can be contradicted by singular statements. Consequently it is possible by means of purely deductive inferences (with the help of the modus tollens of classical logic) to argue from the truth of singular statements to the falsity of universal statements. Such an argument to the falsity of universal statements is the only strictly deductive kind of inference that proceeds, as it were, in the ‘inductive direction’; that is, from singular to universal statements. 4 This asymmetry is now more fully discussed in section *22 of my Postscript.

(Oops, comment too long.)

Comment by betterthanwell on The Useful Idea of Truth · 2012-10-05T10:38:46.715Z · LW · GW

Could you please quote the part of Popper's book that makes the explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."?

Right, this is the obvious next question. I started looking for the appropriate "sound bites" yesterday, but encountered a bit of difficulty in doing so, as I shall explain. Popper's embrace of (Tarskian) correspondence theory should be at least somewhat clear from the footnote I quoted above.

It seems clear to me, from my recollection of the book, that "you have to look at things to draw accurate maps of them" is one of the chief aims, and one of the central claims, of the book; a claim which is defended by a lengthy but quite convincing and unusually successful argument, the premises of which are presented only one at a time, and quite meticulously, over at least several chapters, so I'm not exactly sure how to go about quoting only the "relevant parts".

My claim that his argument was convincing and successful is based on the historical observation that Popperian falsificationism (the hypothetico-deductive framework) won out over the then quite prevalent logical positivist / verificationist view, to such an extent that it quickly became the default mode of science, a position it has held, mostly uncontested, ever since, and therefore is barely worthy of mention today. Except when it is, that is; when one encounters problems that are metaphysical (according to Popper), such as Susskind's String Landscape of perhaps 10^500 vacua, the small (but significant) observed value of the cosmological constant, the (seemingly fine-tuned) value of the fine structure constant, and other observations that may require anthropic, i.e. metaphysical, explanations, since these problems are seemingly not decidable inside of standard, i.e. Popperian, science.

I feel faced with a claim similar to "I don't believe any mathematician has convincingly proven Fermat's last theorem." To which I reply: Andrew Wiles (1995). The obvious next question is: "Can you please quote the part where he proves the theorem?" This is unfortunately somewhat involved, as the entire 109-page paper tries, and succeeds, at doing so about as concisely as Wiles himself managed to go about it. Unfortunately, in the Popper case, I cannot simply provide the relevant Wikipedia article and leave it at that.

I suppose that, having made the claim, it is only my duty to back it up, or else concede defeat. If you're still interested, I shall give it a thorough look, but will need a bit of time to do so. Hopefully, you'll have my reply before Monday.

Comment by betterthanwell on The Useful Idea of Truth · 2012-10-04T22:53:49.851Z · LW · GW

I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."

Karl Popper did so explicitly, thoroughly and convincingly in The Logic of Scientific Discovery. Pretty influential, and definitely a part of "Mainstream Academia."

Here's an interesting, if lengthy, footnote to Section 84 - Remarks Concerning the Use of the Concepts 'True' and 'Corroborated'.

(1) Not long after this was written, I had the good fortune to meet Alfred Tarski who explained to me the fundamental ideas of his theory of truth. It is a great pity that this theory—one of the two great discoveries in the field of logic made since Principia Mathematica—is still often misunderstood and misrepresented. It cannot be too strongly emphasized that Tarski’s idea of truth (for whose definition with respect to formalized languages Tarski gave a method) is the same idea which Aristotle had in mind and indeed most people (except pragmatists): the idea that truth is correspondence with the facts (or with reality). But what can we possibly mean if we say of a statement that it corresponds with the facts (or with reality)? Once we realize that this correspondence cannot be one of structural similarity, the task of elucidating this correspondence seems hopeless; and as a consequence, we may become suspicious of the concept of truth, and prefer not to use it. Tarski solved (with respect to formalized languages) this apparently hopeless problem by making use of a semantic metalanguage, reducing the idea of correspondence to that of ‘satisfaction’ or ‘fulfilment’. As a result of Tarski’s teaching, I no longer hesitate to speak of ‘truth’ and ‘falsity’. (...)

A (short) footnote of my own: Popper's writings have assumed the status of mere "background knowledge", which is a truly great achievement for any philosopher of science. However, The Logic of Scientific Discovery is a glorious book which deserves to be even more widely read. Part I of the book spans no more than 30 pages. It's nothing short of beautiful. PDF here.

Comment by betterthanwell on Eliezer's Sequences and Mainstream Academia · 2012-09-20T00:09:33.465Z · LW · GW

Yep. Gloriously lucid and quite readable book.
Encapsulates good chunks of the sequences.

Much more accessible than I had anticipated.

Comment by betterthanwell on Neil Armstrong died before we could defeat death · 2012-08-26T16:30:44.289Z · LW · GW

But we can claim every star that now burns.

No, we can't. As I said, distant galaxies that we can see today are receding, such that no probe we send can ever reach them. Barring aliens already nearby, they will burn unclaimed.

Ouch! I had originally written "every star that burns in the night sky". But that sounded cheesy and pompous even in the context of the comment above. Apparently I failed to replace it with something reasonable before hitting the button.

Perhaps only the stars and planets in those galaxies within a sphere centered on Earth, with a radius of at least a couple of billion light years, will be in reach of our technologically mature descendants.

Even as distant civilizations trillions of years hence are lost to each other, forever separated by the expansion of space, their neighbors receding over the cosmological horizon, there can still be rich life in those bubbles. If we survive this eon, life can flourish for the next hundred trillion years.

After that we may be in trouble. After that the cynics may win.

Comment by betterthanwell on Neil Armstrong died before we could defeat death · 2012-08-26T12:37:39.300Z · LW · GW

We'll fill the stars and conquer death. The spark of intelligence and sentience will not extinguish.

No we won't, barring new physics. Even if our civilization avoids catastrophe and invents great improvements in therapies for aging, or brain emulation, that won't let us change the 2nd law of thermodynamics, or prevent the distant galaxies from accelerating out of our reach.

But we can claim every star that now burns. Even if in the vast, long, unimaginably long future of this universe, complexity itself must someday die, we should at least do what we can in the meantime. Perhaps we can't beat physics, but we do have some headroom still!

I found this thread to be "vapid and melodramatic" at first, but I now recognize that humanity did indeed lose something highly valuable with the death of Neil Armstrong, outside and beyond the tragedy that is inherent to the death of any mind.

A spark of intelligence and sentience, a very keen observer, but also, literally, the first member of our species to transcend to another world, even if only for a very brief time. Within a decade or two, humankind will likely no longer have visitors to other worlds among us. Were I a journalist, I would write: "A small death for a man, a giant leap backwards for mankind."

Armstrong and his fellow Apollo astronauts are to us like the astronauts in Carl Sagan's novel Contact: ambassadors from the Blue Dot to the vast dead Cosmos. Humanity no longer has its eyes facing outwards to the other pebbles, to the other stars that burn with unspent opportunity. With their deaths we lose the steady gaze of those who look up, since they have been there, whereas we have not. We lose their voices, and their dreams of someday returning, of someday going beyond 1969.

O you who turn the wheel and look to windward,
Consider Phlebas, who was once handsome and tall as you.

May his footprints someday be lost to the footprints of many.

Someday going beyond 1969 is a crazy ambitious idea today, but seemingly wasn't so crazy before the late seventies/mid-eighties. I'm too young to tell, but it seems this ambition went from bold to crazy sometime around the end of the Cold War, perhaps as the salient threat of thermonuclear doom faded.

Comment by betterthanwell on Is lossless information transfer possible? · 2012-08-09T11:02:27.386Z · LW · GW

Lossless information transfer between humans may be possible, but it's certainly not free with respect to work and time, and it's certainly not the default.

For instance: Whenever I want to communicate a thought or an idea, whether verbally or on paper, I find that I must first apply some work-intensive, lossy compression which outputs bad English. The output invariably looks or sounds much worse in comparison to the uncompressed idea I have in my head. Throughput is abysmal. A few minutes of thinking can sometimes require a few hours of writing in order to be communicated with some lucidity. In order to restore some similarity to the uncompressed idea as it appears to me, I need to apply further work-intensive error correction, and repeatedly compare the revised output to my internal model.

Comment by betterthanwell on A cynical explanation for why rationalists worry about FAI · 2012-08-06T10:04:46.211Z · LW · GW

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

Some of it, yes. At the end of the day, though, some of it does lead to real experiments, which need to pay rent. And some of it does quite well at that. Look, for example, at the recent discovery of the Higgs boson.

These theoretical physicists had to argue for several decades until they managed to argue themselves into enough money to hire the thousands of people needed to design, build and operate a machine capable of refuting, or, as it turned out, supporting their well-motivated hypothesis. Not to mention that the machine necessitated inventing the world wide web, and advancing experimental technologies, data processing, and fields too numerous to mention by orders of magnitude compared to what was available at the time.

Perhaps today's theoretical programmers working on some form of General Artificial Intelligence find themselves faced with comparable challenges.

I don't know how things must have looked at the time; perhaps people were wildly optimistic with respect to the expected mass of the scalar boson(s) of the (now) Standard Model of physics, but in hindsight, it seems pretty safe to say that the Higgs boson must have been quite impossible for humanity to experimentally detect back in 1964. Irrefutable metaphysics. Just like string theory, right?

Well, thousands upon thousands of people, billions of dollars, some directly but mostly indirectly (in semiconductors, superconductors, networking, ultra high vacuum technology, etc.) somehow made the impossible... unimpossible.

And as of last week, we can finally say they succeeded. It's pretty impressive, if nothing else.

Perhaps M-theory will be forever irrefutable metaphysics to mere humans; perhaps GAI as well. As Brian Greene put it: "You can't teach general relativity to a cat." Yet perhaps we shall see further (now) impossible discoveries made in our lifetimes.

Comment by betterthanwell on Link: Glial cells shown to be involved in working memory · 2012-07-22T19:53:32.014Z · LW · GW

I know next to nothing of biology, but I would naïvely expect the structure of ATP, ADP, AMP, etc. to be fixed across all organisms with mitochondria. Shouldn't copying errors or variations that produce something other than ATP in place of ATP kill any eukaryote, let alone a human? Perhaps you mean variations to ATP synthase?

Comment by betterthanwell on Dual N-Back browser-based "game" in public alpha-testing state. · 2012-07-10T18:17:26.787Z · LW · GW

Some context: Several frequently cited studies on working memory training using dual n-back, most famously Jaeggi et al. 2008, strongly indicated that WMT could reliably produce lasting benefits to fluid intelligence. These studies obviously provide great material for marketing cognitive training software. See for instance this page by Lumosity:

In 2008, Dr. Susanne Jaeggi, Dr. Martin Buschkuehl and colleagues at the University of Michigan showed that cognitive training with a task called Dual N-Back enhanced fluid intelligence – the ability to creatively solve new problems, and a critical component of IQ. This study involved healthy young adults, mostly university students. After as little as eight hours of training, young adults who trained saw significant gains in fluid intelligence and working memory. We have worked with the Michigan group to include a version of their training program on Lumosity. In addition, we have created a game-like version of their task called Memory Lane.

However, later studies seem to have found little evidence in support of these claims. For example, the 2012 meta-analysis Is Working Memory Training Effective? concludes:

The absence of transfer to tasks that are unlike the training tasks shows that there is no evidence these programs are suitable as methods of treatment for children with developmental cognitive disorders or as ways of effecting general improvements in adults’ or children’s cognitive skills or scholastic attainments.

To me, it seems that the case for dual n-back exercises actually yielding transferable improvements to intelligence and memory is much weaker after some years of scrutiny. The linked dual n-back game seems like an excellent alternative to existing n-back software. I'm not sure it's worth the time and effort.

Comment by betterthanwell on CFAR website launched · 2012-07-04T12:45:18.826Z · LW · GW

I like it a whole lot. The design is beautiful, the layout is good, the prose is well crafted and concise. I feel a little bad for saying this, but... I like this website almost, but not quite, as much as I dislike the new Singularity Institute website. I don't know what went wrong there, but the Singularity Institute website somehow seems / feels unprofessional and just badly done compared to this one.

Comment by betterthanwell on Open Thread, July 1-15, 2012 · 2012-07-04T08:59:22.768Z · LW · GW

Higgs day! Woohoo! Fist pumping and tap-dancing may be in order. Big day for Big Science.

Comment by betterthanwell on Introduction to Game Theory: Sequence Guide · 2012-06-29T10:16:45.705Z · LW · GW

This entire sequence needs to be promoted into visibility.

Comment by betterthanwell on Minimum Viable Workout Routine is Dangerously Misinformative · 2012-06-24T22:17:10.554Z · LW · GW

This is the main point of contention as I see it. I hold that getting newbies to consistently attain 85-95% of their maximum heart rate just isn't going to happen most of the time.

That's really not a problem, at least not physiologically. One cannot sustain this level of effort for more than a few minutes, which, it turns out, is enough. You'll need a heart rate monitor (chest strap + wristwatch); get on a treadmill and warm up gently. Work out in roughly 4 x 4-minute intervals, with 3-4 minutes of lower intensity walking or jogging in between. Why 4 x 4 minutes, and not something else? Because this has been found to strike a balance between compliance (or self-compliance, as the case may be) and physiological benefits.

For an optimal workout, a pro athlete would want to do something like 30 or so 15-second intervals, but sticking to such a regimen is unrealistic for newbies if you don't have someone coaching you. It takes more willpower than most people actually have, so it doesn't really work. Most people do have just enough willpower to work hard for four times four minutes, and then go home.

Walking, jogging or running at a steady pace for an extended period of time at intensities below the lactate threshold does not confer dramatic health benefits. It's still good for you, but you won't be able to feel your body noticeably improving from week to week.

At four-minute intervals you are working beyond your lactate threshold, so walking, jogging or running is not sustainable at this level of effort for an extended amount of time. Push too hard for too long and you can get nauseous or just feel terrible from burning lactate instead of glucose; if so, you need to hold back. 4 x 4 minutes with 4 minutes of rest in between should give an increase of around 0.5% per workout in VO2max, if memory serves. You should notice obvious increases in endurance within a few weeks. One needs to commit to any exercise regime, including this one, but it's not too hard or painful if you're doing it right.
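To make the numbers above concrete, here is a minimal sketch of one such session. It assumes the crude "220 - age" rule of thumb for maximum heart rate and the 85-95% target zone from the quoted comment; none of this is taken from a specific study, and the helper names are my own.

```python
# Rough sketch of a 4 x 4 interval session as described above.
# Assumes max heart rate ~ 220 - age (a crude population-level estimate).

def target_zone(age: int) -> tuple[float, float]:
    """Heart-rate bounds (bpm) for the hard intervals: 85-95% of estimated max."""
    hr_max = 220 - age
    return 0.85 * hr_max, 0.95 * hr_max

def session_plan(age: int) -> list[str]:
    """One session: gentle warm-up, four hard intervals with easy recovery, cool-down."""
    lo, hi = target_zone(age)
    plan = ["~10 min easy warm-up"]
    for i in range(1, 5):
        plan.append(f"interval {i}: 4 min hard, aiming for {lo:.0f}-{hi:.0f} bpm")
        plan.append("3-4 min easy walking or jogging")
    plan.append("easy cool-down")
    return plan

if __name__ == "__main__":
    for step in session_plan(age=30):
        print(step)
```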

One should also do strength exercises in addition to interval training. Your 3x5 schedule sounds great for this.

Comment by betterthanwell on Minimum Viable Workout Routine is Dangerously Misinformative · 2012-06-24T17:38:36.533Z · LW · GW

Excellent point. I should have thought of that.

I hope I made the case that high-intensity interval training is good for you, even if you're not very fit. Why do I think it is dangerous to advise people against endurance training? Because if you accept it, and update on it, and don't do endurance training because you read on Less Wrong that it is useless, soul-crushing and you shouldn't even try, you've increased your risk of getting sick and dying unnecessarily.

I also think it is dangerously misleading to warn people against certain vaccinations on the grounds that they may cause autism, if this claim is unsupported by evidence. If you tell people not to bother with endurance training, they increase their risk of dying by listening to you. If you tell people not to vaccinate their children, they run the risk of their children getting sick. Both claims are unsupported by evidence, and both are dangerous.

I started out writing "this paragraph is dangerously wrong", and when I expanded my reply into a separate topic on it, I chose an unfortunate title. I believe that the Minimum Viable Workout Routine was made with the best intentions. Calling the whole post dangerously misinformative was harsh and uncalled for on my part.

But still, unsound information that can actually kill you (if you believe it) is dangerous.

Comment by betterthanwell on Group rationality diary, 6/11/12 · 2012-06-22T13:17:49.506Z · LW · GW

Wow. What can that phrase even mean?

Uhm, time should run backwards?

Comment by betterthanwell on Suggest alternate names for the "Singularity Institute" · 2012-06-22T12:26:18.105Z · LW · GW

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

I don't entirely disagree, but I do think Catastrophic Risks In Self-Improving Systems can be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do think it would perhaps raise fewer questions, and be less confusing, than The Singularity Institute for Artificial Intelligence or The Singularity Institute.

In particular, there would be little chance of confusion stemming from familiarity with Kurzweil's singularity from accelerating change.

There are lessons to be learned from Scientists are from Mars, the Public is from Earth, and first impressions are certainly important. That said, this description is less exaggerated than it may seem at first glance. The usage can be qualified in that the technical meanings of these words are established, mutually supportive and applicable.

Looking at the technical meaning of the words, the description is (perhaps surprisingly) accurate:

Catastrophic failure: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system.

Catastrophe theory: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, (...) leading to large and sudden changes of the behaviour of the system.

Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed).
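A minimal worked illustration of that kind of equilibrium appearance and disappearance (the standard fold, or saddle-node, normal form; my own addition, not part of the quoted definitions):

```latex
% Fold (saddle-node) normal form: equilibria of dx/dt = r + x^2
\dot{x} = r + x^{2},
\qquad
x^{*} = \pm\sqrt{-r} \ \ \text{(two equilibria for } r < 0\text{)},
\qquad
\text{no equilibria for } r > 0 .
```

For r < 0 there is an attracting equilibrium at x = -sqrt(-r) and a repelling one at +sqrt(-r); as r passes through zero they collide and vanish, so a small parameter change produces a sudden qualitative change in the system's behaviour.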

Is the CRISIS mnemonic / acronym overly dramatic?

Crisis: From Ancient Greek κρίσις (krisis, “a separating, power of distinguishing, decision, choice, election, judgment, dispute”), from κρίνω (krinō, “pick out, choose, decide, judge”)

A crisis is any event that is, or is expected to lead to, an unstable and dangerous situation affecting an individual, group, community or whole society. Crises are deemed to be negative changes in the security, economic, political, societal or environmental affairs, especially when they occur abruptly, with little or no warning. More loosely, it is a term meaning 'a testing time' or an 'emergency event'.

Usage: crisis (plural crises)

  • A crucial or decisive point or situation; a turning point.
  • An unstable situation, in political, social, economic or military affairs, especially one involving an impending abrupt change.
  • A sudden change in the course of a disease, usually at which the patient is expected to recover or die.
  • (psychology) A traumatic or stressful change in a person's life.
  • (drama) A point in a drama at which a conflict reaches a peak before being resolved.

Perhaps CRISIS is overly dramatic in the common usage. But one would quite easily be able to explain how the use of this term is qualified, and this in itself gives an attractive angle to journalists. In the process they would, inadvertently in a sense, explain what the Singularity Institute does and why their work is important.

Comment by betterthanwell on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-21T20:09:29.073Z · LW · GW

Looks like you may be too late. The US-based Bitcoin exchange Tradehill had to shut down operations earlier this year because of this exact method of fraud. Here is a description of the modus operandi. The fraud was conducted via an intermediary payment processing service.

Comment by betterthanwell on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-21T09:04:22.855Z · LW · GW

Their adjudication of bets is highly dubious and their customer service just screams "sham".

Could you please explicate your complaint? I've had no disputes so far.
I found less than a handful of complaints in the announcement thread.

Bets (are supposed to) get rejected if the outcome is not easily decidable.

Comment by betterthanwell on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-21T08:04:48.578Z · LW · GW

Last year, MBlume suggested:

Someone should really write a prediction market using bitcoins -- it would be simpler for US-based users to participate.

This now exists at BetsOfBitcoin. I signed up exactly one month ago.

So far, I've won 13 out of 13 bets. Mostly small bets and low yields, but winning is fun.

Comment by betterthanwell on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-20T17:37:46.842Z · LW · GW

Damn fine writing in that essay. Some of the only intelligent criticism I've seen of the protocol.

Comment by betterthanwell on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-20T15:07:52.976Z · LW · GW

Clippy noticed that Bitcoin seems to make it easier for software agents to earn money, convince humans to do jobs for them, and optimize the universe in a paper-clip friendly direction.

If Clippy is right, is it a problem?

Singularity Institute visiting fellow Thomas McCabe is running GetBitcoin, a notable money handling service.

Does he have any insights to share?

Gwern wrote an article arguing that Bitcoin is an ugly protocol: Bitcoin is Worse is Better

What does gwern think today?

How should one extend or rework the protocol to make Bitcoin, or a successor more appealing?

Wei Dai is quoted in the original whitepaper. He writes:

If you read the Wikipedia article, you should know that I didn't create Bitcoin but only described a similar idea more than a decade ago. And my understanding is that the creator of Bitcoin, who goes by the name Satoshi Nakamoto, didn't even read my article before reinventing the idea himself. He learned about it afterward and credited me in his paper. So my connection with the project is quite limited.

What does he think of of Bitcoin and cryptocurrency in general? (Still mining?)

Scott Aaronson recently published a technical paper on quantum money, unrelated to Bitcoin. Would implementing this scheme require keeping qubits stable for an indefinite timespan, for the money not to go poof?

Comment by betterthanwell on [deleted post] 2012-06-19T20:56:34.695Z

I have some half-baked questions that probably belong in the Ask a Physicist thread, but I will plop them here for visibility. Please stop me when my reasoning goes belly-up.

Mirror matter is a candidate for dark matter. This stuff is like normal stuff, only it has its "parity bit"* flipped with respect to normal matter, and therefore only interacts with us through the weak interaction. Does mirror matter feel curves in spacetime made by the normal stuff? Otherwise, how does something exist at the same spatio-temporal coordinates as us, yet respond differently to gravity? Can mirror matter coalesce into massive objects? Could we detect mirror planets passing in front of ordinary stars? Would our spacetime be distorted by mirrored black holes in a way we can detect?

*I'm being flippant here. How is parity quantized in physics?

Comment by betterthanwell on [deleted post] 2012-06-19T20:13:58.854Z

Is this any more likely to be true than the FTL-neutrinos?

First: Is what likely to be true? The quantitative result of the experiment, or their interpretation of it?

Second: Sounds like you noticed that one of the two scientists is affiliated with Gran Sasso?
They have recently acquired a reputation of jumping the gun at times. (Okay, cheap shot.)

Third: It would be something of a surprise if two scientists with a relatively simple apparatus were to make a discovery that would overshadow even the anticipated discovery of the Higgs particle at LHC. Is it likely that there is still low-hanging fruit of this size in physics?

Comment by betterthanwell on Suggest alternate names for the "Singularity Institute" · 2012-06-19T19:23:20.356Z · LW · GW

After thinking this over while taking a shower:

The CRISIS Research Institute — Catastrophic Risks In Self-Improving Systems
Or, more akin to the old name: Catastrophic Risk Institute for Self-Improving Systems

Hmm, maybe better suited as a book title than the name of an organization.

Comment by betterthanwell on Suggest alternate names for the "Singularity Institute" · 2012-06-19T17:10:52.767Z · LW · GW

So I read this, and my brain started brainstorming. None of the names I came up with were particularly good. But I did happen to produce a short mnemonic for explaining the agenda and the research focus of the Singularity Institute.

A one word acronym that unfolds into a one sentence elevator pitch:

Crisis: Catastrophic Risks in Self Improving Software

  • "So, what do you do?"
  • "We do CRISIS research, that is, we work on figuring out and trying to manage the catastrophic risks that may be inherent to self improving software systems. Consider, for example..."

Lots of fun ways to play around with this term, to make it memorable in conversations.

It has some urgency to it, it's fairly concrete, it's memorable.
It compactly combines the goals of catastrophic risk reduction and self-improving systems research.

Bonus: You practically own this term already.

An incognito Google search gives me no hits for "Catastrophic Risks In Self Improving Software", when in quotes. Without quotes, top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there. I don't think he would mind too much.

Comment by betterthanwell on Sebastian Thrun AMA on reddit [link] · 2012-06-18T12:55:56.489Z · LW · GW

I was looking forward to this IAMA, but left feeling disappointed, having learned nothing worth remembering.

In summary: Almost every question directly concerned products offered by Udacity. I guess this was my fault, and our fault, for not asking interesting and challenging questions. Some answers did offer shallow advice on succeeding in computer science and education. Almost every answer could have been penned by an intern at Udacity, or any reasonably experienced computer science professional. Udacity's marketing department is the only winner here. This was a failed opportunity.