Comments

Comment by Henrik_Jonsson on How To Have Things Correctly · 2012-10-18T01:27:51.592Z · LW · GW

Having worked for and talked to some people who became decamillionaires or wealthier through startups, I'd say a common theme is simply being really competitive. They don't care much about money for its own sake - that's just what our society currently uses to send the signal "your startup is doing something we really like".

Comment by Henrik_Jonsson on The Four-Hour Body by Timothy Ferriss - any LWers tried it? · 2011-06-08T23:04:02.884Z · LW · GW

I tried it (for building muscle), kept to the instructions fairly strictly and saw improvements over my regular workout, but nowhere near the results described in the book. Much of the book makes sense, but it might be overly specific to his own physiology, and/or have non-functional components mixed in by mistake.

Comment by Henrik_Jonsson on How to Save the World · 2010-12-01T19:34:20.223Z · LW · GW

Very good post, Louie! I agree with pretty much all the points.

Number 11 seems especially important - over-optimizing is a common trap for people in our crowd, so for me having an enjoyable life is a very high priority. A way of thinking that works for me personally is to work on the margin rather than trying to reorganize my life top-down - to continually be a bit more awesome, work with more interesting people, earn a bit more money, invest a bit more energy, and so on, than yesterday.

In contrast, if I started out trying to optimally allocate all the resources I have or could gain access to, I suspect I would be paralyzed.

Comment by Henrik_Jonsson on Proposed New Features for Less Wrong · 2010-04-27T01:34:43.725Z · LW · GW

The discussion section sounds like a solid idea. As for making LW less intimidating, I'd rank the options as grace period > doing nothing > "karma coward", though I think users should be able to exit the grace period early by choice, and possibly the scores of comments by users in the grace period should be hidden (not just kept from affecting their total karma).

Seeing your comments plummet in score might be demoralizing, even if it doesn't affect your total score.

Comment by Henrik_Jonsson on Shut Up and Divide? · 2010-02-10T07:49:03.092Z · LW · GW

Nick Tarleton said it well, but to try it another way: depending on how you phrase things, both to yourself and to others, the situation can appear as bleak as you describe it, or alternatively rather good indeed. Phrase it like this: despite being stuck with a brain built for chasing deer across the savanna and caring about the few dozen members of your tribe, you are able to earn money (because it's the most effective means to whatever your ends are) and invest an appreciable fraction of it in the cause with the highest expected future payoff, however abstract or far in the future it may be. That starts to sound fairly impressive, especially given what most people spend their time and money on.

If Starbucks lattes (or, more obviously, living above the subsistence level) make it more likely that I maintain my strategy of earning money to try to protect the things I value, my indulgences are very plausibly worth keeping. Yes, if I had a different psychology I could skip them and help much more, but I don't, so I likely can't. What I can do in the short term is see what happens on the margin. Can I sustain donating 1% more? Can I get by without a fancy car? House? Phone? Conversely, does eating out regularly boost my motivation enough to be worth it? Aim for the best outcome, given the state of the board you're playing on.

Comment by Henrik_Jonsson on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-02T19:50:39.679Z · LW · GW

My impression is that the JVM is worse at concurrency than every other approach that's been tried so far.

Haskell and other functional programming languages have many promising ideas, but they aren't widely used in industry, AFAIK.
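
To give one concrete example of what I mean by promising ideas: Haskell's software transactional memory lets you compose atomic actions without juggling explicit locks. A minimal sketch using GHC's stm library (the bank-account example and its numbers are my own toy illustration, not something from the presentation linked below):

```haskell
import Control.Concurrent.STM

-- Transfer between two transactional variables as one atomic action;
-- no explicit locks to acquire, order, or forget.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- retry rather than overdraw
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)       -- composable with other STM actions
  print =<< readTVarIO a             -- 60
  print =<< readTVarIO b             -- 40
```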

This presentation gives a good short overview of the current state of concurrency approaches.

Comment by Henrik_Jonsson on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-01T02:12:21.068Z · LW · GW

I took part in the 2009 summer program during a vacation from my day job as a software developer in Sweden. This entailed spending five weeks with the smartest and most dedicated people I have ever met, working on a wide array of projects both short- and long-term, some of which were finished by the time I left and some of which are still ongoing.

My biggest worry beforehand was that I would not be anywhere near talented enough to participate and contribute in the company of SIAI employees and supporters. That turned out not to be the case, though I don't claim to have anywhere near the talent of most others involved. Some of the things I was involved with during the summer were work on the Singularity Summit website as well as continuing the Uncertain Future project, which lets you assign probability distributions to events and have the conclusions calculated for you. I also worked on papers with Carl Shulman and Nick Tarleton, read a massive number of papers and books, took trips to San Francisco and elsewhere, played games, discussed weird forms of decision theory and counterfactual everything, etc., etc.

My own comparative advantages seem to be having the focus to keep hacking away at projects, as well as the specialized skills that come from a CS background and some experience (less than a year, though) working in the software industry. I'm currently writing this from the SIAI house, to which I returned about three weeks ago. This time I mainly focused on getting a job as a software developer in the Bay Area (I seem to have succeeded), with the aims of earning money (some of which will go to donations) and making it easier for me to participate in SIAI projects.

I'd say that the most important factor for people considering applying is whether they have strong motivation and a high level of interest in the issues that SIAI involves itself with. Agreeing with specific perceived beliefs of SIAI or of the people involved with it is not necessary, and any disagreements will be brought out and discussed as thoroughly as you could ever wish for. As long as the interest and motivation are there, the specific projects you want to work on should sort themselves out nicely. My own biggest regret is that I kept lurking for so long before getting in touch with the people here.

Comment by Henrik_Jonsson on Celebrate Trivial Impetuses · 2009-07-25T04:01:21.652Z · LW · GW

It looks like this might be the one: Knobe, Joshua. 2003. "Intentional action and side effects in ordinary language", Analysis 63: 190-194. [PDF]

Comment by Henrik_Jonsson on Applied Picoeconomics · 2009-07-20T02:21:21.219Z · LW · GW

I read the book, but found it rambling and poorly supported. The basic point about agents with hyperbolic discounting having dynamic inconsistencies is very important, but I wouldn't recommend the book over Ainslie's article. The only mental note I made of something new (to me) and interesting was a point about issues with a "bright line" being much easier to handle than those without one. For example, it's easier to stop drinking alcohol completely than to drink less than a specific limit on each occasion, and it's even harder to eat a proper diet, where you obviously cannot make use of the only very bright line: no food at all.
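
To illustrate that dynamic inconsistency concretely (a toy example of mine, assuming the usual hyperbolic form V = A / (1 + kD), not Ainslie's numbers): a larger-later reward wins when both rewards are far off, but loses once the smaller-sooner one is imminent.

```haskell
-- Toy hyperbolic discounting: value of a reward of size `amount`
-- seen from `delay` time units away, with discount rate k.
value :: Double -> Double -> Double -> Double
value k amount delay = amount / (1 + k * delay)

main :: IO ()
main = do
  let k       = 1.0
      smaller = value k 50     -- smaller-sooner reward
      larger  = value k 100    -- larger-later reward, due 5 units after the smaller one
  -- Judged from far away, the larger-later reward looks better...
  print (smaller 5, larger 10)   -- (8.33..., 9.09...)
  -- ...but when the smaller reward is imminent, the preference reverses.
  print (smaller 1, larger 6)    -- (25.0, 14.28...)
```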

I have been busy (with the SIAI summer program), but I do think I actually would have found time to write the post if I had found more data that was both interesting and not obvious to the LW crowd. This might be rationalization, but I don't think the me of one month ago would have wanted a post written about the book if he had known the contents of the book.

Comment by Henrik_Jonsson on Where are we? · 2009-07-18T00:13:19.053Z · LW · GW

Umeå, Sweden.

Comment by Henrik_Jonsson on Applied Picoeconomics · 2009-06-17T17:49:19.819Z · LW · GW

Ainslie's answer is that he should set a hard-and-fast rule: "I will never drink alcoholism".

You probably meant to write "alcohol" here.

All data, even anecdotal, on how to beat akrasia is great, and this sounds like a method that might work well in many cases. If you wanted to raise your odds of succeeding even more you could probably make your oath in front of a group of friends or family members, or even include a rule about donating your money or time if you failed, preferably to a cause you hated for bonus motivation.

I'd like to make a public oath myself, but I'm going away shortly and will be busy with various things, so I don't know how much time I will have for self-improvement. In somewhat of a coincidence, I just received "Breakdown of Will" in the mail yesterday. How about this: I proudly and publicly swear to read the entire book "Breakdown of Will" by George Ainslie and write an interesting post on LW based on it before July 17th, 2009, so help me Bayes.

Comment by Henrik_Jonsson on With whom shall I diavlog? · 2009-06-17T06:31:52.794Z · LW · GW

It would be interesting to see Searle debate someone who didn't defer to his high status and common-sense-sounding arguments, and who pressed him to the wall on what exactly would happen if you, say, simulated a human brain in high resolution. His intuition pumps are powerful ("thought is just like digestion; you don't really believe a computer will digest food if you simulate gastric enzymes, do you?"), but he never really presents any argument for his views on consciousness or AI, at least not in what I've seen.

Comment by Henrik_Jonsson on Intelligence enhancement as existential risk mitigation · 2009-06-17T05:51:42.493Z · LW · GW

On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit.

While I agree that it would be cool, anything that doesn't keep your notes exactly as you left them is likely to be more annoying than productive unless it is very cleverly done. (Remember Microsoft Clippy?) You'd probably need to tag at least some things, like people and places.

Comment by Henrik_Jonsson on Let's reimplement EURISKO! · 2009-06-15T16:51:36.807Z · LW · GW

But even if the AI discovered some things about our physics, it does not significantly narrow the range of possible minds. It doesn't know if it's dealing with paperclippers or a pebblesorters. It might know roughly how smart we are.

You're using your (human) mind to predict what a postulated potentially smarter-than-human intelligence could and could not do.

It might not operate on the same timescales as us. It might do things that appear like pure magic. No matter how often you took snapshots and checked how far it had gotten in figuring out details about us, there might be no way of ruling out progress, especially if you gave it motives for hiding that progress (such as pulling the plug every time it came close). Sooner or later you'd conclude that nothing interesting was happening and put it on autopilot. A small self-improvement might cascade into an enormous difference in understanding, with the notorious FOOM following.

I don't usually like quoting myself, but

If you had a program that might or might not be on track to self-improve and initiate an intelligence explosion, you'd better be sure enough that it would remain friendly that you could, at the very least, give it a robot body and a scalpel and stand with your throat exposed before it.

If the scenario makes you nervous, you should be pretty much equally nervous at the idea of giving your maybe-self-improving AI, sitting inside thirty nested sandboxes, even 10 milliseconds (10^41 Planck intervals) of CPU time.

Let me be clear here: I'm not assigning any significant probability to someone recreating EURISKO or something like it in their spare time and having it recursively self-improve any time soon. My confidence intervals are wide enough that I can spend some time worrying about it, though. I'm just pointing out that sandboxing adds approximately zero extra defense in the situations where we would need it.

The parallel to the simulation argument was interesting though, thanks.

Comment by Henrik_Jonsson on Let's reimplement EURISKO! · 2009-06-15T14:49:14.873Z · LW · GW

I think my other reply applies here too, if you read "communications channel" as all the information that might be inferred from the universe the AI finds itself in. Either the AI is not smart enough to be a worry even without any sandboxing at all, or it is worrying enough that you should not be relying on the sandbox to protect you.

Your point about our own simulation (if it is one) lacking a simple communications channel actually works against you: in our universe the simulation hypothesis has been proposed, despite the fact that we have only human intelligence to work with.

Comment by Henrik_Jonsson on Rationality Quotes - June 2009 · 2009-06-15T12:44:06.161Z · LW · GW

We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.

-- T.S. Eliot

Comment by Henrik_Jonsson on Rationality Quotes - June 2009 · 2009-06-15T05:14:57.884Z · LW · GW

Once again, we are saddled with a Stone Age moral psychology that is appropriate to life in small, homogeneous communities in which all members share roughly the same moral outlook. Our minds trick us into thinking that we are absolutely right and that they are absolutely wrong because, once upon a time, this was a useful way to think. It is no more, though it remains natural as ever. We love our respective moral senses. They are as much a part of us as anything. But if we are to live together in the world we have created for ourselves, so unlike the one in which our ancestors evolved, we must know when to trust our moral senses and when to ignore them.

--Joshua Greene

Comment by Henrik_Jonsson on Let's reimplement EURISKO! · 2009-06-13T22:33:25.812Z · LW · GW

If you had a program that might or might not be on track to self-improve and initiate an intelligence explosion, you'd better be sure enough that it would remain friendly that you could, at the very least, give it a robot body and a scalpel and stand with your throat exposed before it.

Surrounding it with a sandboxed environment couldn't be guaranteed to add any meaningful amount of security. Maybe the few bits of information you provide through your communications channel would be enough for this particular agent to reverse-engineer your psychology and find the correct combination to unlock you, maybe not. Maybe the extra layer(s) between the agent and the physical world would be enough to delay it slightly or stall it completely, maybe not. The point is that you shouldn't rely on it.

Comment by Henrik_Jonsson on Let's reimplement EURISKO! · 2009-06-13T21:25:00.886Z · LW · GW

As long as you have a communications channel to the AI, the setup would not be secure, since you are not a secure system and could be compromised by a sufficiently intelligent agent.

See http://yudkowsky.net/singularity/aibox

Comment by Henrik_Jonsson on Righting a Wrong Question · 2008-03-09T13:51:14.000Z · LW · GW

This is one of my all-time favourite posts of yours, Eliezer. I can recognize elements of what you're describing here in my own thinking over the last year or so, but you've made the processes so much more clear.

As I'm writing this, just a few minutes after finishing the post, it's increasingly difficult not to think of this as "obvious all along", and it's getting harder to pin down exactly what in the post caused me to smile in recognition more than once.

Much of it may have been obvious to me before reading this post as well, but now the verbal imagery needed to clearly explain these things to myself (and hopefully to others) is available. Thank you for these new tools.