Comments

Comment by Halfwit on Public Service Announcement Collection · 2013-06-28T15:06:41.548Z · LW · GW

A lot of people got this from shuttle launches, and so reacted negatively to the (in my opinion good) arguments for focusing NASA's budget on robotic space exploration.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-28T01:18:39.239Z · LW · GW

Hmm, one way to get around this might be to start an intrinsically motivating project but to limit oneself to the tools one needs to learn for extrinsic reasons.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T23:16:30.933Z · LW · GW

Then my advice is this: talk to someone who has the entry-level job you want and ask what skills the job requires and what skills whoever hired them was looking for. Then learn those. As for the "oddly unable" thing, I suggest reflecting on how you learned the things you're already good at in the first place. If there's anything different about your current, ineffective approach to learning new techniques, stop doing it. Unless you've recently suffered brain trauma, it's likely just some weird ugh-field-like effect.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T21:41:40.882Z · LW · GW

Yeah, that does sound pretty awful, not something you'd want to induce. For me it was just this: pressure on my chest, inability to move my limbs, and the feeling that some entity was observing me. There was no gnashing of teeth.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T21:21:09.302Z · LW · GW

You're asking me for advice? That was the first time I'd looked at code in my life. I'm sure the textbook recommendation thread has something on programming. From what I understand, though, halfway-decent programmers are very employable at the moment, so either you're overestimating your ability, there's some other factor you haven't shared, or my intuition about the employment prospects of halfway-decent programmers (by which I mean close to, if slightly below, the level of the average pro) is wrong.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T19:26:36.539Z · LW · GW

I was lucky enough to have read about that before the one time it happened to me, so I wasn't scared. I just thought, "So this is sleep paralysis." Since then I've read that lucid dreamers often try to force themselves into sleep paralysis, as it's the first stop on the road to the sandman's brooding realm. The next time it happens to you, you should try for a Feynman-style lucid dream. It could be fun.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T18:34:11.673Z · LW · GW

I edited because the code I looked at seemed to be atypical, judging by what others have posted. No, I don't think I'm M3 at all--though my father probably is, as he picked up programming in his twenties and knows many languages. Since I had expected the code to look like nonsense, I was merely surprised I could get some idea of what was going on. My prior for being able to get a programming job with <300 hours of dedicated practice is low, but it could be something to investigate as a hobby.

Comment by Halfwit on Public Service Announcement Collection · 2013-06-27T18:06:36.567Z · LW · GW

"...quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials."

I just did this, and I was surprised: the code seemed far less inscrutable than I had intuitively expected, having never read any code before. My father is a computer programmer, so I may have it in my DNA. He is more intelligent than I am, though. For example, I once told him the three gods puzzle and he had it solved in ~20 minutes; he didn't even use paper.
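(For anyone curious what such a test looks like in practice, here is a short sketch of the kind of straightforward Python one might pull up; the task and the names are invented for illustration, not drawn from any particular page.)

# A made-up example of readable Python for the "does this make
# natural sense?" test described above.
def count_words(text):
    """Return a dictionary mapping each word to how often it appears."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_words("the cat sat on the mat"))
# prints {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}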

P/S/A: If your work involves writing and you often find yourself procrastinating on the internet, buy an old laptop, rip out the wifi card and use it as your dedicated writing laptop.

P/S/A: When you need to get a large amount of writing done outside of office hours, go to some non-home location (a coffee shop, not a library--books are the ultimate distraction) and commit yourself to not leaving until you reach a specific word count--I find two thousand words is reasonable and achievable, at least for non-creative writing.

Also, if there is some fact you need to research, use the TK method to mark it for later: type "TK" in place of the fact and keep writing; since almost no English words contain that letter pair, the placeholders are trivial to search for when you return.

Comment by Halfwit on How to Write Deep Characters · 2013-06-19T01:50:12.338Z · LW · GW

Some early science fiction isn't so much about conflict as it is the recounting of an unlikely experience. But then, the stories I have in mind weren't exactly great, so that isn't strong evidence against the assumption. Still, I think a sufficiently skilled writer could create an enjoyable story without conflict, but it would be like a painter throwing out a primary color.

One of my favorites among OP's short posts is Building Weirdtopia. (Yudkowsky's no-spoilers approach to scientific pedagogy is such an intriguing one that I'm quite sad he hasn't spun it into a novel yet. I'd seriously love to read a Neal Stephenson-length epic about a child in such a society recapitulating modern science, but maybe I'm just weird that way.) It strikes me that one could write a novel about a Weirdtopia that has no conflict, featuring only the exploration of a counterintuitive yet highly intriguing world. Conversations within, and descriptions of, this strange world (so long as the writer is very, very clever) would keep my interest. But then, this would be more like speculative anthropology than a story.

Comment by Halfwit on After critical event W happens, they still won't believe you · 2013-06-14T06:26:44.629Z · LW · GW

By "the term" do you mean something Ben Ben Goertzel said once on SL4, or is this really a thing?

Comment by Halfwit on After critical event W happens, they still won't believe you · 2013-06-13T22:25:40.058Z · LW · GW

I do tend to think Aubrey de Grey's argument holds some water. That is, it's not so much general society that will be influenced as wealthy elites, who seem more likely to update when they read about a 2x mouse. I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics? But cryonics is a lot harder to believe in than life extension. You need to buy pattern identity theory, nanotechnology, and Hanson's value-of-life calculations. For life extension, all you have to believe is that the techniques that worked on the mouse will likely be useful in treating human senescence. And anyway, Aubrey hopes first to convince the gerontology community and then the public at large. This approach has worked for climate science, and a similar approach may work for AI risk.

Comment by Halfwit on Do Earths with slower economic growth have a better chance at FAI? · 2013-06-13T19:27:41.378Z · LW · GW

I think we're past the point where it matters. If we'd had a few lost decades in the mid-twentieth century, maybe (and just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. But we are just a decade away from home computers with >100 teraflops, not to mention the distressing trend toward neuromorphic hardware. (Here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News.) With all this inertia, it would take an extremely large downturn to slow us now. Engineering a new AI winter seems like a better idea, though I'm confused about how this could be done. Perceptrons discredited connectionist approaches for a surprisingly long time; perhaps a similar book could discredit (and indirectly defund) dangerous branches of AI that aren't useful for FAI research. But this seems unlikely, though less unlikely than OP significantly altering economic growth either way.

Comment by Halfwit on AGI Quotes · 2013-06-10T20:47:46.578Z · LW · GW

The mathematician John von Neumann, born Neumann János in Budapest in 1903, was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake." One night in early 1945, von Neumann woke up and told his wife, Klari, that "what we are creating now is a monster whose influence is going to change history, provided there is any history left. Yet it would be impossible not to see it through." Von Neumann was creating one of the first computers, in order to build nuclear weapons. But, Klari said, it was the computers that scared him the most.

Konstantin Kakaes

Comment by Halfwit on Tiling Agents for Self-Modifying AI (OPFAI #2) · 2013-06-06T05:00:41.174Z · LW · GW

The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.

Comment by Halfwit on A Primer On Risks From AI · 2013-06-04T18:54:26.309Z · LW · GW

I think you're an important guy to have around for reasons of evaporative cooling.

Comment by Halfwit on The Singularity as Religion (yes/no links) · 2013-06-03T03:31:55.061Z · LW · GW

The line I came up with, when asking myself the question, was this: if the singularity is a religion, it is the only religion with a plausible mechanism of action.

Comment by Halfwit on Rationality Quotes June 2013 · 2013-06-02T22:03:14.982Z · LW · GW

"Why do people worry about mad scientists? It's the mad engineers you have to watch out for." - Lochmon

Comment by Halfwit on [LINK] Soylent crowdfunding · 2013-05-22T02:31:39.407Z · LW · GW

I believe you can live off Boost for an indefinite period of time.

Comment by Halfwit on Rationality Quotes May 2013 · 2013-05-21T15:33:07.255Z · LW · GW

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive." - Randall Munroe

Comment by Halfwit on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' · 2013-05-18T23:58:50.102Z · LW · GW

5% is pretty high considering the purported stakes.

Comment by Halfwit on Help us name the Sequences ebook · 2013-04-16T03:18:55.134Z · LW · GW

Untangling the Knot: A User's Guide to the Human Mind

Your Brain, an Owner's Manual

Less than One, Greater than Zero: The Sequences, 2006–2009

Approximating Omega (badly, of course)

Sharpening the Mace

Uncountable Infinite Shades of Grey (my apologies)

Stop Tripping Yourself: A User's Guide to the Human Mind

Marshaling the Mind: An Introduction to the Informed Art of Rationality

Motes and Meaning: The Less Wrong Archives

Of Motes and Meaning

Theory, in Practice

Thinking, in Practice

Thinking in Circles: Avoiding the Known Bugs in Human Reasoning

Comment by Halfwit on Solved Problems Repository · 2013-03-31T17:38:15.952Z · LW · GW

It might be worth going to a sleep doctor; sleep apnea can really fuck up your metabolism, not to mention cause unbelievable akrasia. I would say sleep tests are a GOOD THING, something everyone should do. I had sleep apnea for years. It was like some eldritch monster was sucking away my willpower and I wasn't even aware of it. Within a few months of getting my mouth guard, which keeps my tongue from blocking my airway during REM, I lost thirty pounds and gained an enormous well of mental stamina. A small minority of the "metabolically challenged" may just have undiagnosed sleep problems.

Comment by Halfwit on Suggest alternate names for the "Singularity Institute" · 2013-01-15T01:44:15.108Z · LW · GW

He was an adviser. But I see he no longer is. Retracted.

Comment by Halfwit on Farewell Aaron Swartz (1986-2013) · 2013-01-12T21:48:52.718Z · LW · GW

He killed himself; this is true. He faced 35 years of confinement and the very real prospect of rape. This, too, is true. He was criminalized for his intent to freely distribute scientific knowledge. This makes him a hero. He broke, but only storybook heroes are unbreakable. It's depressing how society seems to persecute those most able to improve it, how the broken machine slays the very engineers who've dedicated their lives to its repair.

Comment by Halfwit on Suggest alternate names for the "Singularity Institute" · 2013-01-11T01:53:10.959Z · LW · GW

And now there are three: http://singularityhub.com/2013/01/10/exclusive-interview-with-ray-kurzweil-on-future-ai-project-at-google/

Comment by Halfwit on 2012 Winter Fundraiser for the Singularity Institute · 2013-01-10T17:45:26.504Z · LW · GW

In terms of minimizing the status cost to academics affiliating with SIAI, a banal, minimally descriptive name may be superior. People often overestimate the value of the piquant: beige may not excite, but it doesn't offend. Any term with the potential to become a buzzword, or to acquire alternative definitions, should be avoided. The more exciting the term, the higher the chance of appropriation.

This was the point I was trying to make; on rereading it after posting, I realized it was remarkably poorly written and wasn't even clearly conveying what I was thinking when I wrote it. I didn't have time to edit it then, so I retracted.

Comment by Halfwit on Evaluating the feasibility of SI's plan · 2013-01-10T17:23:21.790Z · LW · GW

When a heuristic AI is creating a successor that shares its goals, does it insist on formally verified self-improvements? Does it try to understand its mushy, hazy goal system so as to avoid reifying something it would regret given its current goals? It seems to me that some mind will eventually have to confront the FAI issue; why not humans, then?

Comment by Halfwit on 2012 Winter Fundraiser for the Singularity Institute · 2013-01-10T02:08:14.364Z · LW · GW

I highly support changing your name--there's all sorts of bad juju associated with the term "singularity". My advice: keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. "Singularity" has suffered the same fate as "cybernetics".

Comment by Halfwit on Normal Cryonics · 2013-01-06T19:01:23.660Z · LW · GW

I thought this was rather tasteful media coverage: http://www.telegraph.co.uk/science/8691489/Robert-Ettinger-the-father-of-cryonics-is-gone-for-now.html

Comment by Halfwit on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-20T08:48:45.188Z · LW · GW

How much money would you need magicked into existence to let you shed fundraising, infrastructure, etc., and just hire and hole up with a dream team of hyper-competent maths wonks? Restated: at what set amount would SIAI be comfortably able to aggressively pursue its long-term research?

Comment by Halfwit on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-09T07:43:32.128Z · LW · GW

The quantum lottery is my retirement plan, my messy messy retirement plan.

Comment by Halfwit on Constructing fictional eugenics (LW edition) · 2012-10-30T06:24:34.462Z · LW · GW

del

Comment by Halfwit on Constructing fictional eugenics (LW edition) · 2012-10-30T05:37:58.611Z · LW · GW

I just looked it up. It's odd that there was so little interest; there are so many advantages to a high-IQ child. Said child would likely need fewer years of child care, would require less attention academically, and might attend college a few years earlier, likely with a full or partial scholarship. And in terms of maternal pride (i.e., signaling your own competence as a mother by talking about your child's success), high-IQ sperm is a goldmine. Any single (or reproductively duplicitous) mother would be crazy not to select physicist or mathematician sperm, even taking into account regression to the mean.
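To put rough numbers on that last point, here is a back-of-the-envelope sketch; the heritability figure and the IQ values are my own assumptions, chosen only to illustrate how the standard midparent regression calculation works.

# Rough regression-to-the-mean estimate, assuming (my assumption)
# a narrow-sense heritability of about 0.6 for adult IQ.
h2 = 0.6                                   # assumed heritability
donor, mother, mean_iq = 145, 100, 100
midparent = (donor + mother) / 2           # 122.5
expected_child = mean_iq + h2 * (midparent - mean_iq)
print(expected_child)                      # 113.5

Even after regression, the expected child sits most of a standard deviation above the mean, which is the sense in which the donor advantage survives.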

Comment by Halfwit on Constructing fictional eugenics (LW edition) · 2012-10-29T15:43:55.412Z · LW · GW

If sperm banks advertised high-IQ sperm, we would already have the beginnings of a eugenics program. If we found a way to clone eggs very cheaply, an average couple could have two children, each of whom would have half the DNA of a genius and half the DNA of one of their average parents. The advantage of this, in terms of social mobility, could be enough to avoid the need for coercive eugenics.

Regardless, I'm sure such a thing would be outlawed for various stupid reasons.

Comment by Halfwit on Constructing fictional eugenics (LW edition) · 2012-10-29T15:25:47.511Z · LW · GW

And remember, living in a world in which the average person is as smart as an upper-level computer programmer still isn't nearly as humbling as the fact that a well-organized cubic centimeter of carbon could be millions of times smarter than anyone.

I figure this to be a good general rule on these matters: unless you designed your own brain, you should not be proud of your own brain.

Comment by Halfwit on HP:MOR and the Radio Fallacy · 2012-07-26T02:34:43.253Z · LW · GW

SRI's Shakey would be justified in its dualism.