Posts

More intuitive programming languages 2012-04-15T11:35:49.516Z

Comments

Comment by A4FB53AC on Does random reward evoke stronger habits? · 2015-08-18T10:54:13.462Z · LW · GW

You could in principle very easily ignore the dice and eat the chocolate regardless. You need to take it upon yourself to follow through with the scheme and forfeit the chocolate 3 times out of 4. If you start with the understanding that chocolate would be available 4 times out of 4 under a more permissive scheme, then you are effectively punishing yourself 3/4 of the time, which I expect would work as negative reinforcement for the task or for the reward scheme in general. It would also require enough willpower, which some people won't have.

Comment by A4FB53AC on Against the internal locus of control · 2015-04-04T14:24:58.849Z · LW · GW

The only factor under your control may be to realize that the only factor under your control is to obtain and use better methods and processes to think, gather information, act in the real world, generate feedback and adjust yourself.

Illustratively, no matter how innately intelligent a native English speaker might be, if he never had any experience with Japanese, he won't be able to read and understand kanji. Is that a failure of intelligence, or a failure of knowledge and method? If you've never had any experience in any science, and don't know the specialized vocabulary, then it is likely you won't be able to understand a technical paper. Again, is that a failure of intelligence, or is it just that you'll need some time to grow familiar with the field? A lot of intelligence and rationality is like that. Including understanding your own intelligence and capabilities to better yourself.

You'll need to assess where you stand now, then iteratively improve yourself. You'll need to look outside, for information and help, to get better at it. Depending on your starting point, your incremental improvements may be slow at first, until you learn how to get better at improving yourself. You may have more terrain to cover too.

"I vow to always do my best to make my best become even better."

Your end point may still be determined by your IQ or working memory, but the starting realization that you can improve yourself can be as simple as a few words. It's still an external factor, but one that, depending on your sensitivity to such ideas, you could encounter regularly enough that it will eventually sink in and start changing you. Frequenting places where such ideas are more prevalent (like here) may help bootstrap this process earlier.

Comment by A4FB53AC on An alarming fact about the anti-aging community · 2015-02-24T21:28:25.897Z · LW · GW

This is consistent with my experience with European life-extension movements. Generally speaking, we just don't have a clear idea of where we should be going. Nor do we always agree on which research or projects are even relevant. So we have a collection of people sharing a vaguely defined goal of life extension, all pushing for their pet projects and hypotheses. No one is really willing to abandon what they came up with, because no clear evidence-based project under which they could assemble exists (or is perceived to exist) - which of course includes all such personal pet projects and ideas. Additionally, few if any really seem to believe strongly in life extension (as a way of life, or as something important enough to take precedence over other projects in their life), and turnover among newly interested people is very high, with little retention beyond a few months.

Comment by A4FB53AC on An alarming fact about the anti-aging community · 2015-02-24T21:15:08.060Z · LW · GW

Hm. This was eye-opening enough that I felt like commenting for the first time in a year. I've known for a while about people being in too much despair to desire living on, but this puts it in a new perspective.

Most importantly, it helps explain the huge discrepancy between how instrumentally important staying alive and able is for anyone who has any goal at all (barring some fringe cases), and how little most people do to plan and organize themselves in order to avoid aging and dying, even though both are reasonably expected to be unavoidable with our current means.

What you said suggests another set of strategies, little explored by life-extensionists to my knowledge, for sustaining effective life-extension projects - especially since generalized public acceptance and backing is still nowhere near where it should be.

Comment by A4FB53AC on Against Open Threads · 2014-05-31T03:43:47.919Z · LW · GW

Interesting opinion. I rarely browse open threads, mainly because I find them a mess, and it takes longer to find out whether there's anything in there that interests me. Discussion posts have their own page with neatly ordered titles: you get an idea at a glance, and as a first filter you can sort through around 20 topics in around 2 seconds.

Comment by A4FB53AC on Rationalist fiction: a Slice of Life IN HELL · 2014-03-26T12:01:44.792Z · LW · GW

Please do note the delicious irony here:

I don't see much good in associating rationality with extreme caution.

I don't think that teaching people to expect worse case scenarios increases rational thinking.

Which in essence looks suspiciously like cautiously assuming a bad-case scenario in which this story won't help the rationality cause, or even a worst-case scenario in which it will do more harm than good.

If you want to go forth and create a story about rationality, then do it. Humans are complex creatures; not everyone will react the same way to your story, and anybody who thinks they can accurately predict the reactions of all the different kinds of people who'll read it (especially as the story hasn't even been written yet) is either severely deluded as to their ability, or already secretly running the world from behind the curtain.

When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.

Comment by A4FB53AC on Is IQ what we actually need to know? · 2014-02-25T20:37:25.380Z · LW · GW

I think this misses the point of the OP, which wasn't that IQ or intelligence can accurately be guessed in a casual conversation, but rather that intelligence can be guessed more accurately than other important parameters such as "conscientiousness, benevolence, and loyalty", for which we don't have tools nearly as good as those we have for measuring IQ. The consequence being that, since we can't assess these as methodically, people can fake them more easily, and this has negative social consequences.

Comment by A4FB53AC on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-24T06:03:28.288Z · LW · GW

Especially to mess with one of those people intolerant of our beliefs in the supernatural, who always have to go on about how this or that can easily be dismissed if only you were rational. How ironic would it be, then, to get one of them to believe in a haunted house because it was the rational thing to do given the "evidence"?

Comment by A4FB53AC on Resurrection through simulation: questions of feasibility, desirability and some implications · 2012-05-24T12:07:41.343Z · LW · GW

It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.

Still, I wonder: what could I do to enhance my probability of being resurrected, if worst comes to worst and I can't manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (according to which values, though)?

Comment by A4FB53AC on Resurrection through simulation: questions of feasibility, desirability and some implications · 2012-05-24T08:48:23.402Z · LW · GW

I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm also not even sure the person I'll be 10 or 20 years from now will still be significantly "me". I'm not sure the closest projection of my self onto a system incapable of suffering at all would still be me. Sure, I'd prefer not to suffer, but beyond that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.

Then, on the other side of this question, you could consider creating new sentiences who couldn't suffer at all. But why would these have priority over those who exist already? Also, what if we created people who could suffer, but who'd be happy with it? Would such a life be worthwhile? Is the badness of suffering something universal, or a quirk of terran animal neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings us. Eliminating the second may make sense, so long as you still know that chopping your leg off is most often not a good idea.

Comment by A4FB53AC on [deleted post] 2012-05-24T03:00:33.430Z

I think you're making too many separate points (how to resurrect past people using all the information you can, the simulation argument, some religious undertones), and the text is pretty long; many will not read it to the end. Also, even if someone agrees with some part of it, it's likely they'll disagree with another (which often results in downvoting the whole post, in my experience). I think you'd be better off rewriting this as several different posts.

Comment by A4FB53AC on Being a Realist (even if you believe in God) · 2012-05-17T20:00:51.706Z · LW · GW

First off, I'd like to say that I have met Christians who similarly were very open to rationality and to applying it to the premises of their religion, especially the ethics. In fact, one of them was the only person who directly recognized me as an immortalist a few sentences into our first discussion, when no one else around me even knew what that was. I find that admirable, and fascinating.

I also think it likely that human beings as they are now need some sort of comfort, reassurance, that their universe is not that universe of cold mathematics.

So I'm not sure I should point this out, but, in the end, you're still trying to find a God of the gaps. In the end, you're still basing your view of the universe on a very special premise, that is, God.

Eventually, this can only be resolved in a few ways: either God exists, or He doesn't, or using His existence as a premise doesn't make a difference, and a theist would eventually come to the same understanding of the universe as a down-to-earth, reductionist, atheistic rationalist.

But I also began to feel depressed, and then sort of hollow inside. I had no attachment to young-earth creationism, but I suppose I was trying to keep a sort of "God of the gaps" with regard to the beginning and development of intelligent life on Earth. Having seen why there were considerably fewer gaps than I had thought, I couldn't un-see it. A little part of me had been booted out of Eden.

I don't think God exists, and I'm still puzzled by how anyone could come to believe He does. Here I mean believe in the sense where you don't just "like to pretend something is real for the comfort it brings", which I do too, but rather in the sense where you think "stop kidding yourself now, you need a real, practical, usable answer now".

The two are different; the first is fine and necessary for many people, but if you use God in the latter sense, I'm worried you're in for a few disappointing experiences over the next few decades.

Comment by A4FB53AC on [LINK] International variation in IQ – the role of parasites · 2012-05-14T18:52:37.101Z · LW · GW

comes with nifty bonuses like 'increases the IQ of females more than males'.

Why is that a bonus?

Comment by A4FB53AC on How many people here agree with Holden? [Actually, who agrees with Holden?] · 2012-05-14T18:48:44.391Z · LW · GW

Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome? I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this.

My immediate reaction to this was "as opposed to doing what?" This segment seems to argue that SI's work - raising awareness that not all paths to AI are safe, and that we should strive to find safer paths towards AI - is actually making it more likely that an undesirable AI / Singularity will be spawned in the future. Can someone explain to me how not discussing such issues and not working on them would be safer?

Just having that bottom line unresolved in Holden's post makes me reluctant to accept the rest of the argument.

Comment by A4FB53AC on A wild theist platonist appears, to ask about the path · 2012-05-13T23:04:33.889Z · LW · GW

Also, see How to Convince Me That 2 + 2 = 3.

Comment by A4FB53AC on [deleted post] 2012-05-06T11:53:08.851Z

The mind I've probably gained the most by exploring is Eliezer's, both because so much of his thinking is available online, and because out of many useful habits and qualities I didn't have, he seemed to have those qualities to the greatest extent. I'm not referring to the explicit points he's made in his writing (though I've gained by those as well), but the overall way he thinks and feels about the world.

Well, as Eliezer said:

I have striven for a long time now to convey, pass on, share a piece of the strange thing I touched, which seems to me so precious. And I'm not sure that I ever said the central rhythm into words. Maybe you can find it by listening to the notes. I can say these words but not the rule that generates them, or the rule behind the rule; one can only hope that by using the ideas, perhaps, similar machinery might be born inside you.

Comment by A4FB53AC on How can we get more and better LW contrarians? · 2012-04-21T19:04:34.390Z · LW · GW

Actually, not against. I was thinking that current moderation techniques on LessWrong are inadequate/insufficient. I don't think the reddit karma system has been optimized much; we just imported it. I'm sure we can adapt it and do better.

At least part of my point should have been that moderation should provide richer information: for instance, by allowing graded scores on a scale from -10 to 10 and showing the average score rather than the sum of all votes, or by giving some clue as to how controversial a post is. That wouldn't be a silver bullet, but it'd at least be more informative, I think.
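To illustrate, here is a toy Python sketch of that idea - not the actual LessWrong/reddit code; `GradedComment` and all other names are invented for the example:

```python
from statistics import mean, pstdev

class GradedComment:
    """Toy model of the richer scheme above: each vote is a graded
    score from -10 to 10, and we can display the average plus a
    controversy hint instead of a bare sum of +1/-1 votes."""

    def __init__(self):
        self.votes = []  # keep the individual scores, not a running sum

    def vote(self, score):
        if not -10 <= score <= 10:
            raise ValueError("score must be between -10 and 10")
        self.votes.append(score)

    def average(self):
        return mean(self.votes) if self.votes else 0.0

    def controversy(self):
        # Spread of opinions: 0 means everyone agrees; a large value
        # means the same average hides strongly opposed votes.
        return pstdev(self.votes) if self.votes else 0.0

c = GradedComment()
for s in (9, -8, 10, -10):
    c.vote(s)
print(c.average(), c.controversy())  # 0.25 9.28... : near-zero average, high spread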

And yes, I was also arguing this idea thinking it would fit nicely in this post.

I guess I was wrong, since it seems it wasn't clear at all what I was arguing for, and being tactless wasn't a good idea either, contrarian-intolerance context or not. Regardless, arguing it in detail in the comments, while off-topic in this post, wasn't the way to do it either.

Comment by A4FB53AC on How can we get more and better LW contrarians? · 2012-04-19T08:08:54.745Z · LW · GW

Not more so than "vote up".

In this case I don't think the two are significantly different. Neither conveys a lot of information, both are very noisy, and a lot of people seem to already mean "more like this" when they "vote up" anyway.

Comment by A4FB53AC on How can we get more and better LW contrarians? · 2012-04-19T08:05:08.502Z · LW · GW

True, except you don't know how many people didn't vote; we don't keep track of that. A comment at 0, which is the default state anyway, could as well have been read and left at "0" by 0, 1, 10 or a hundred people. We similarly can't know whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.
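In other words (a hypothetical sketch; `VoteTally` and its fields are made-up names, not how the site actually stores votes), keeping the two counts separate would make these cases distinguishable:

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    """Hypothetical vote record keeping upvotes and downvotes separate,
    so a net 0 from no votes differs from a net 0 of 50 up / 50 down."""
    up: int = 0
    down: int = 0

    @property
    def score(self):
        return self.up - self.down   # all the current system shows

    @property
    def total(self):
        return self.up + self.down   # how many people actually voted

ignored = VoteTally()                    # score 0, total 0: nobody voted
contested = VoteTally(up=50, down=50)    # score 0, total 100: controversial
print(ignored.score == contested.score)  # True, yet very different comments
```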

Comment by A4FB53AC on How can we get more and better LW contrarians? · 2012-04-19T04:12:38.100Z · LW · GW

You should call it black and white, because that's what it is: black and white thinking.

Just think about it: nothing more than one bit of non-normalized information, compressing the opinions of people who use wildly variable judgement criteria, drawn from variable populations (different people care about and vote on different topics).

Then you're going to tell me it "works nonetheless", that it self-corrects because several people (how many do you really need to obtain such a self-correction effect?) are aggregating their opinions, and that people usually mean it to say "more / less of this please". But what's your evidence for it working? The quality of the discussion here? How much of that stems from the quality of the audience, and from the quality of the base material, such as Eliezer's Sequences?

Do you realize that judgements like "more / less of this" may well optimize less than you think for content, insight, or epistemic hygiene, and more than they should for stuff that just amuses and pleases people? Jokes, famous quotes, groupthink, ego-grooming, etc.

People optimizing for "more like this" eventually degrades content into lolcats and porn. It's crude wireheading. I'm not saying this community isn't somewhat above sinking that deep, but we're still human beings, and therefore still susceptible to it.

Comment by A4FB53AC on Cryonics without freezers: resurrection possibilities in a Big World · 2012-04-06T13:52:29.158Z · LW · GW

Is the amount of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the amount of bits necessary to discriminate a version of yourself among all permutations of functional human brains? My intuition is that once you've defined the first, there isn't much left needed, comparatively, to define the latter.
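One rough way to formalize that intuition (the notation is invented here, and configurations are counted uniformly for simplicity):

```latex
% N_matter: distinct configurations of that volume of matter
% N_brains: those configurations that are functional human brains
\[
\log_2 N_{\text{matter}}
  \;=\;
  \underbrace{\log_2 \frac{N_{\text{matter}}}{N_{\text{brains}}}}_{\text{bits to specify ``a functional brain''}}
  \;+\;
  \underbrace{\log_2 N_{\text{brains}}}_{\text{bits to specify ``your brain''}}
\]
% The intuition above is that the first term dwarfs the second:
% nearly all the description length goes into brain-ness, leaving
% comparatively few bits to pin down one particular person.
```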

Corollary: cryonics doesn't need to preserve a lot of information, if any. You can patch things up with, among other things, information about what a generic human brain is - or better, what a human brain derived from your genetic code is - and correlate that with information left behind on the Internet, in your writings, in the memories of other people, etc., about what some of your own psychological specs and memories should be.

The result might be a fairly close approximation of you, at least according to this gradation of identity idea.

Comment by A4FB53AC on Cryonics without freezers: resurrection possibilities in a Big World · 2012-04-06T13:43:26.252Z · LW · GW

suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot)

How is that a Friendly AI?

Comment by A4FB53AC on What is life? · 2012-04-01T21:42:21.973Z · LW · GW

'alive' relative to a specific environment

It's always relative to a certain environment. Human beings and most animals can't survive outside of the current biosphere. In that respect we're no less dependent on certain peculiar conditions than viruses are. We both depend on other living organisms in order to survive.

Maybe redefine life along a continuum of how unlikely and complex the environmental conditions necessary to sustain it are?

Some autotrophic cells might sit at one currently known extreme, while higher animals would be at the other end.

Comment by A4FB53AC on Rationality Quotes April 2012 · 2012-04-01T15:48:12.770Z · LW · GW

A faith which cannot survive collision with the truth is not worth many regrets.

Arthur C. Clarke

Comment by A4FB53AC on Making computer systems with extended Identity · 2012-03-08T16:30:39.462Z · LW · GW

Yeah, being considered a part of an AI. I might hate to be, say, its "hair". Just thinking about its next metaphorical "fashion-induced haircut and coloring" gives me the chills.

Just because something is a part of something else doesn't mean it'll be treated in ways that it finds acceptable, let alone pleasant.

The idea may be interesting for human-like minds and ems derived from humans - and even then still dangerous. I don't see how that could apply in any marginally useful way to minds in general.

Comment by A4FB53AC on [LINK] Why You Should Keep Your Goals Secret · 2012-03-04T00:05:36.461Z · LW · GW

For what it's worth I had already observed this effect. I am less likely to carry on with some plan if I talk about it to other people. Now I tend to just do what I have to, and only talk about it once it's done.

Part of the problem is that I hate feeling pressured into doing something. Social commitment will, if anything, simply make me want to run away from what I just implicitly promised I'd do. Perhaps because I can never be sure whether I can achieve something: if I fail silently and nobody knows, it's OK; less so if I've told people about it. It feels better to run away from something (failing by choice) than to fail for other reasons.

Also, in some cases, just saying you plan to do something already feels like you've done something. Either because you count it as a step towards doing the whole thing (a step after which it feels more acceptable to take a break, which can last indefinitely), or because you've fantasized about it enough that you no longer feel the need to implement it for real.

Comment by A4FB53AC on People who "don't rationalize"? [Help Rationality Group figure it out] · 2012-03-03T14:05:50.673Z · LW · GW

I feel like I can relate to that. It's not like I never rationalize, but I always know when I do it. Sometimes it may be pretty faint, but I'll still be aware of it. Whether I allow myself to proceed with justifying a false belief depends on the context. Sometimes admitting to being wrong just feels too uncomfortable, sometimes it is efficient to mislead people, and so on.