Comments

Comment by SoullessAutomaton on Open Thread: June 2010 · 2010-06-03T03:43:30.520Z · LW · GW

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (i.e., paying interest) is not a good revenue source, any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.

Comment by SoullessAutomaton on Open Thread: June 2010 · 2010-06-03T03:17:04.093Z · LW · GW

So if you're giving examples and you don't know how many to use, use three.

I'm not sure I follow. Could you give a couple more examples of when to use this heuristic?

Comment by SoullessAutomaton on Blue- and Yellow-Tinted Choices · 2010-05-29T22:43:17.490Z · LW · GW

Seems I'm late to the party, but if anyone is still looking at this, here's another color contrast illusion that made the rounds on the internet some time back.

For anyone who hasn't seen it before, knowing that it's a color contrast illusion, can you guess what's going on?

Major hint, in rot-13: Gurer ner bayl guerr pbybef va gur vzntr.

Full answer: Gur "oyhr" naq "terra" nernf ner gur fnzr funqr bs plna. Lrf, frevbhfyl.

The image was created by Professor Akiyoshi Kitaoka, an incredibly prolific source of crazy visual perception illusions.

Comment by SoullessAutomaton on Aspergers Poll Results: LW is nerdier than the Math Olympiad? · 2010-05-14T02:00:18.136Z · LW · GW

Commenting in response to the edit...

I took the Wired quiz earlier but didn't actually fill in the poll at the time. Sorry about that. I've done so now.

Remarks: I scored a 27 on the quiz, but couldn't honestly check any of the four diagnostic criteria. I lack many distinctive autism-spectrum characteristics (possibly to the extent of being on the other side of baseline), but have a distinctly introverted/antisocial disposition.

Comment by SoullessAutomaton on Open Thread: April 2010, Part 2 · 2010-04-28T03:44:08.115Z · LW · GW

A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week's Finds. The most recent installment mentions topics such as using icosahedra to solve quintic equations, an isomorphism (described in terms of category theory) between processes in chemistry, electronics, thermodynamics, and other domains, and some speculation about applications of category-theoretic constructs to physics.

Which is all well and good and worth reading, but largely off-topic. Rather, I'm mentioning this on LW because of the link and quotation Baez put at the end of the column, as it seemed like something people here would appreciate.

Go ahead and take a look, even if you don't follow the rest of the column!

Comment by SoullessAutomaton on Compartmentalization as a passive phenomenon · 2010-03-28T19:10:40.705Z · LW · GW

Ah, true, I didn't think of that, or rather didn't think to generalize the gravitational case.

Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.

Comment by SoullessAutomaton on Compartmentalization as a passive phenomenon · 2010-03-28T15:33:02.756Z · LW · GW

Similarly, my quick calculation, given an escape velocity high enough to walk and an object 10 meters in diameter, was about 7 * 10^9 kg/m^3. That's roughly the density of electron-degenerate matter; I'm pretty sure nothing will hold together at that density without substantial outside pressure, and since we're excluding gravitational compression here I don't think that's likely.
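
For anyone who wants to check the arithmetic, here's a back-of-envelope version as a Haskell sketch. The 10 m/s escape velocity threshold is my own assumption for "high enough to walk"; from v_esc = sqrt(2GM/r) we get M = v^2 r / 2G, and density is just mass over volume.

  -- Required density for a given escape velocity and radius.
  gravG :: Double
  gravG = 6.674e-11          -- gravitational constant, m^3 kg^-1 s^-2

  requiredDensity :: Double -> Double -> Double
  requiredDensity vEsc r = mass / volume
    where
      mass   = vEsc ^ 2 * r / (2 * gravG)   -- from v_esc = sqrt (2 G M / r)
      volume = (4 / 3) * pi * r ^ 3

  main :: IO ()
  main = print (requiredDensity 10 5)       -- 10 m/s, 5 m radius (my numbers): ~7.2e9 kg/m^3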

Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.

Comment by SoullessAutomaton on Compartmentalization as a passive phenomenon · 2010-03-27T23:19:11.031Z · LW · GW

I don't think you'd be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so at the surface will make a negligible difference in gravitational force, so we're talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (and thus about 70% of escape velocity, since escape velocity is orbital velocity times the square root of two). So for most practical purposes, the boots would be in orbit as well, along with most of the moon's surface.

Of course, since the centrifugal force at the equator due to rotation would almost exactly counteract weight due to gravity, the only way the thing could hold itself together would be tensile strength; it wouldn't take much for it to slowly tear itself apart.

Comment by SoullessAutomaton on The mathematical universe: the map that is the territory · 2010-03-26T14:29:13.411Z · LW · GW

It's an interesting idea, with some intuitive appeal. Also reminds me of a science fiction novel I read as a kid, the title of which currently escapes me, so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though--I guess one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any other justification.

Comment by SoullessAutomaton on More thoughts on assertions · 2010-03-25T04:44:31.676Z · LW · GW

The example being race/intelligence correlation? Assuming any genetic basis for intelligence whatsoever, for there to be absolutely no correlation at all with race (or any distinct subpopulation, rather) would be quite unexpected, and I note Yvain discussed the example only in terms as uselessly general as the trivial case.

Arguments involving the magnitude of differences, singling out specific subpopulations, or comparing genetic effects with other factors seem to quickly end up with people grinding various political axes, but Yvain didn't really go there.

Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-23T03:12:18.386Z · LW · GW

The laws of physics are the rules, without which we couldn't play the game. They make it hard for any one player to win.

Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:

  1. You can't win the game.
  2. You can't break even.
  3. You can't stop playing.
Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-23T03:04:51.312Z · LW · GW

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that; then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so..." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can prove only trivial theorems, because every theorem that's proved is trivial."

The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

-- Surely You're Joking, Mr. Feynman!

Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-23T02:56:32.501Z · LW · GW

Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.

There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it's still our sack of crap.

By all means, don't look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn't going to notice.

Comment by SoullessAutomaton on What would you do if blood glucose theory of willpower was true? · 2010-03-23T02:17:20.843Z · LW · GW

Really, does it actually matter that something isn't a magic bullet? Either the cost/benefit balance is good enough to warrant doing something, or it isn't. Perhaps taw is overstating the case, and certainly there are other causes of akrasia, but someone giving disproportionate attention to a plausible hypothesis isn't really evidence against that hypothesis, especially one supported by multiple scientific studies.

From what I can see, there's more than sufficient evidence to warrant serious consideration for something like the following propositions:

  • Application of short-term willpower measurably expends some short-term biological resource
  • Willpower "weakens" as the resource is depleted, recovering over a longer time span
  • Resource expenditure correlates with reduced blood sugar concentration
  • Increasing blood sugar (temporarily?) restores resource availability

So, my questions are: If this is correct, what practical use could we make of the idea? What could we do as individuals or as a group to decide whether it's useful enough to bother thinking about? Particularly in cases where willpower is needed mostly to start a task rather than continue it, if there's a simple way to get a quick, short-term boost that might make the difference between several hours of productivity vs. akratic frustration, that's significant!

As an aside, I recall seeing some studies indicating that there may be more general principles in play here, regarding the mind's executive functions as a whole, but I don't have citations on hand at the moment.

Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-21T21:19:55.095Z · LW · GW

I thought the mathematical terms went something like this:

  • Trivial: Any statement that has been proven
  • Obviously correct: A trivial statement whose proof is too lengthy to include in context
  • Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes
  • Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult
  • Interesting: Unproven, despite many attempts
Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-21T18:47:04.239Z · LW · GW

It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!

I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.

Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-21T18:20:11.839Z · LW · GW

Which is fair enough, I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to hold over that kind of range.

Comment by SoullessAutomaton on The scourge of perverse-mindedness · 2010-03-21T17:34:58.466Z · LW · GW

It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?

Comment by SoullessAutomaton on Open Thread: March 2010, part 3 · 2010-03-20T23:01:29.003Z · LW · GW

A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).

Comment by SoullessAutomaton on Think Before You Speak (And Signal It) · 2010-03-20T22:51:15.769Z · LW · GW

Also, few ways are more effective at discovering flaws in an idea than to begin explaining it to someone else; the greatest error will inevitably spring to mind at precisely the moment when it is most socially embarrassing to admit it.

Comment by SoullessAutomaton on The Price of Life · 2010-03-20T22:37:35.642Z · LW · GW

My interpretation was to read "value" as roughly meaning "subjective utility", which indeed does not, in general, have a meaningful exchange rate with money.

Comment by SoullessAutomaton on The Graviton as Aether · 2010-03-05T04:43:18.881Z · LW · GW

You know, this really calls for a cartoon-y cliche "light bulb turning on" appearing over byrnema's head.

It's interesting how these little connections can be so hard to make, yet seem simple in retrospect. I give it a day or so before you start having trouble remembering what it was like to not see that idea, and a week or so until it seems like the most obvious, natural concept in the world (which you'll be unable to explain clearly to anyone who doesn't get it, of course).

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-05T04:35:42.427Z · LW · GW

SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don't immediately make you yawn just by looking at them.

Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.

Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...

Comment by SoullessAutomaton on Individual vs. Group Epistemic Rationality · 2010-03-05T04:02:55.335Z · LW · GW

Sorry for the late reply; I don't have much time for LW these days, sadly.

Based on some of your comments, perhaps I'm operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner's dilemma are clearly not, in any coherent sense, rationally maximizing individual outcomes. Thus I don't really see such a scenario as presenting a group vs. individual conflict, but rather a practical problem of coordinated action. Certainly, solving such problems applies to any rational agent, not just humans.

The part about giving undue weight to unlikely ideas--which seems to comprise about half the post--by mis-calibrating confidence levels to motivate behavior seems to be strictly human-oriented. Lacking the presence of human cognitive biases, the decision to examine low-confidence ideas is just another coordination issue with no special features; in fact it's an unusually tractable one, as a passable solution exists (random choice, as per CannibalSmith's comment, which was also my immediate thought) even with the presumption that coordination is not only expensive but essentially impossible!

Overall, any largely symmetric, fault-tolerant coordination problem that can be trivially resolved by a quasi-Kantian maxim of "always take the action that would work out best if everyone took that action" is a "problem" only insofar as humans are unreliable and will probably screw up; thus any proposed solution is necessarily non-general.

The situation is much stickier in other cases; for instance, if coordination costs are comparable to the gains from coordination, or if it's not clear that every individual has a reasonable expectation of preferring the group-optimal outcome, or if the optimal actions are asymmetric in ways not locally obvious, or if the optimal group action isn't amenable to a partition/parallelize/recombine algorithm. But none of those are the case in your example! Perhaps that sort of thing is what Eliezer et al. are working on, but (due to aforementioned time constraints) I've not kept up with LW, so you'll have to forgive me if this is all old hat.

At any rate, tl;dr version: wedrifid's "Anything an irrational agent can do due to an epistemic flaw a rational agent can do because it is the best thing for it to do." and the associated comment thread pretty much covers what I had in mind when I left the earlier comment. Hope that clarifies matters.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T06:09:48.091Z · LW · GW

Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)

And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
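
If it helps to experiment, here's the encoding transcribed into Haskell (a sketch; the rank-2 type just keeps the encoding honest, and subtraction is deliberately omitted, since that's the puzzle):

  {-# LANGUAGE RankNTypes #-}

  -- A Church numeral applies a function n times to a base value.
  type Church = forall a. (a -> a) -> a -> a

  zero :: Church
  zero = \_f z -> z

  suc :: Church -> Church
  suc n = \f z -> f (n f z)

  add :: Church -> Church -> Church
  add m n = \f z -> m f (n f z)

  -- Convert back to an ordinary Int to check your work:
  toInt :: Church -> Int
  toInt n = n (+ 1) 0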

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T06:05:26.608Z · LW · GW

The history in the paper linked from this blog post may also be enlightening!

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T05:55:35.861Z · LW · GW

It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation.

It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
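
For reference, the combinator in question is λf.(λx.f (x x)) (λx.f (x x)) in the untyped λ-calculus. A direct transcription won't typecheck in Haskell, but the standard workaround is a newtype to tie the recursive knot explicitly -- a sketch:

  -- Self-application needs a recursive type to get past the type checker.
  newtype Rec a = Rec { unRec :: Rec a -> a }

  y :: (a -> a) -> a
  y f = (\x -> f (unRec x x)) (Rec (\x -> f (unRec x x)))

  -- Example: factorial with no explicit recursion anywhere.
  factorial :: Integer -> Integer
  factorial = y (\fact n -> if n == 0 then 1 else n * fact (n - 1))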

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T05:35:37.361Z · LW · GW

Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.
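
To give a taste of how Lisp primitives fall out of raw lambda terms, Church-encoded pairs yield cons/car/cdr almost for free (a Haskell sketch of the flavor, not the dialect itself):

  cons :: a -> b -> ((a -> b -> c) -> c)
  cons x y = \select -> select x y    -- a pair is just a function awaiting a selector

  car :: ((a -> b -> a) -> a) -> a
  car p = p (\x _y -> x)              -- select the first component

  cdr :: ((a -> b -> b) -> b) -> b
  cdr p = p (\_x y -> y)              -- select the second

  -- car (cons 1 2) == 1, cdr (cons 1 2) == 2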

Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing remotely sensible.

There's plenty of ridiculously opaque models of computation (Post's tag machine, Conway's Life, exponential Diophantine equations...) but I can't begin to imagine one that would be more comprehensible than untyped lambda calculus.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T05:17:52.484Z · LW · GW

I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.

Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.

Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn't suggest for learning purposes, either.

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Well, the problem isn't really multiple inheritance itself, it's the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.

Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn't really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we've all seen trick questions about "okay, which method will this call?"). Something closer to a simple type predicate, like the interfaces in Google's Go language or like Haskell's type classes, is much less painful here. Or of course duck typing, if static type-checking isn't your thing.
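
To make that concrete, here's a minimal sketch of the type-class style (the class and instances are invented for illustration):

  class Greet a where
    greet :: a -> String

  data Dog   = Dog                   -- toy types, purely for the example
  data Robot = Robot

  instance Greet Dog where
    greet _ = "Woof"

  instance Greet Robot where
    greet _ = "BEEP"

  -- Dispatch is by type, but Dog and Robot stand in no subtype
  -- relationship to each other, so there are no trick questions
  -- about which method gets called.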

Compositional code reuse in objects--what I meant by "implementation inheritance"--also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.

The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.

Note that "multiple inheritance" makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it's generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of "parent" types.

Consider the following types:

  • Tree structures containing values of some type A.
  • Lists containing values of some type A.
  • Text strings, stored as immutable lists of characters.
  • Text strings as above, but with a maximum length of 255.

The generic tree and list types are both abstract containers; say they both implement a mapping operation, using a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there's no shared implementation or obvious subtyping relationship.

The text strings can't implement the above interface (because they're not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren't subtypes of the list, though, because the generic list is mutable and the strings are not.

The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.

Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
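
For contrast, here's roughly how those four types fall out when the three concerns are kept separate, as a Haskell sketch (the names are mine; Haskell's pervasive immutability sidesteps the mutability wrinkle, but the structure carries over). The shared mapping interface is just Functor:

  data Tree a = Leaf a | Node (Tree a) (Tree a)
  data List a = Nil | Cons a (List a)

  instance Functor Tree where            -- shared interface, no shared code
    fmap f (Leaf x)   = Leaf (f x)
    fmap f (Node l r) = Node (fmap f l) (fmap f r)

  instance Functor List where
    fmap _ Nil         = Nil
    fmap f (Cons x xs) = Cons (f x) (fmap f xs)

  -- Strings reuse the List implementation without being Functors,
  -- and the length-limited string converts safely to the general one:
  newtype Str    = Str (List Char)
  newtype Str255 = Str255 (List Char)    -- invariant: length <= 255

  widen :: Str255 -> Str
  widen (Str255 cs) = Str cs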

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T02:14:28.336Z · LW · GW

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:

  • It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
  • Templates are a clunky, disappointing imitation of real metaprogramming.
  • Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
  • It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
  • Combining error handling via exceptions with manual memory management is frankly absurd.
  • The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.

I could elaborate further, but it's too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical "real" OO language, but I'd probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).

ETA: Well, that came out awkwardly verbose. Apologies.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T00:57:24.561Z · LW · GW

C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though, I don't think that's really what XiXiDu is looking for.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T00:52:21.295Z · LW · GW

Dijkstra's quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn't actually a bad language at all. On the other hand, it also lacks much of the "easy to pick up and experiment with" aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-03T00:41:37.455Z · LW · GW

Well, they're computer-sciencey, but they are definitely geared toward approaching things from the programming, even "von Neumann machine", side rather than from Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.

Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!

Comment by SoullessAutomaton on Individual vs. Group Epistemic Rationality · 2010-03-03T00:33:05.338Z · LW · GW

I'm inclined to agree with your actual point here, but it might help to be clearer on the distinction between "a group of idealized, albeit bounded, rationalists" as opposed to "a group of painfully biased actual humans who are trying to be rational", i.e., us.

Most of the potential conflicts between your four forms of rationality apply only to the latter case--which is not to say we should ignore them, quite the opposite in fact. So, to avoid distractions about how hypothetical true rationalists should always agree and whatnot, it may be helpful to make explicit that what you're proposing is a kludge to work around systematic human irrationality, not a universal principle of rationality.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T03:12:06.556Z · LW · GW

All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T02:49:07.523Z · LW · GW

Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra.

Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T02:28:11.164Z · LW · GW

Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.

More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T02:07:20.148Z · LW · GW

adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.

Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T01:50:27.617Z · LW · GW

I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about.

I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It helps that both have excellent learning materials available.

Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some "mainstream" programming and wants to stretch their brain.

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T01:37:58.278Z · LW · GW

Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.

The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.

So you end up with newcomers to Haskell trying to simultaneously:

  • Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
  • Unlearn existing habits from other languages
  • Learn about intimidating math-y-sounding things

And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.

But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.

The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
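
To illustrate just how small the concept is, here's the standard Maybe sketch -- precisely one of those problems that looks simple in other languages: chained computations that bail out on the first failure.

  safeDiv :: Int -> Int -> Maybe Int
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)

  -- (100 / a) / b, giving up cleanly if either division fails:
  calc :: Int -> Int -> Maybe Int
  calc a b = safeDiv 100 a >>= \q -> safeDiv q b

  -- calc 5 2 == Just 10
  -- calc 0 2 == Nothing   (no null checks, no exceptions)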

Comment by SoullessAutomaton on Open Thread: March 2010 · 2010-03-02T01:08:52.731Z · LW · GW

Interesting article, but the title is slightly misleading. What he seems to be complaining about are people who mistake picking up a superficial overview of a topic for actually learning the subject, but I rather doubt they'd learn any more in school than by themselves.

Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.

Comment by SoullessAutomaton on Rationality quotes: March 2010 · 2010-03-02T00:55:23.507Z · LW · GW

Due to not being an appropriately-credentialed expert, I expect. The article does mention that he got a very negative reaction from a doctor.

Comment by SoullessAutomaton on The Last Days of the Singularity Challenge · 2010-03-01T03:41:33.941Z · LW · GW

Scraping in just under the deadline courtesy of a helpful reminder, I've donated a modest amount (anonymously, to the general fund). Cheers, folks.

Comment by SoullessAutomaton on Savulescu: "Genetically enhance humanity or face extinction" · 2010-01-11T02:59:37.274Z · LW · GW

I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.

Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g., extreme hyperbolic discounting or cognitive impairment. In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.

If they really do know what they're getting into and are okay with it, then fine, not my problem.

If it helps, I also have no problem with someone valuing self-determination so highly that they'd rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they'd like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.

There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns.

Actually making someone aware of a danger they're approaching is often easier said than done. People have a habit of disregarding things they don't want to listen to. What's that Douglas Adams quote? Something like, "Humans are remarkable among species both for having the ability to learn from others' mistakes, and for their consistent disinclination to do so."

Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.

I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one's choices.

I begin to suspect that may be the root of our actual disagreement here.

In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it).

It's a completely different issue, actually.

...but there's a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.

Tying back to the "helping people against their will" issue, for instance: Consider an otherwise successful individual, who one day has an emotional collapse after a romantic relationship fails, goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn't show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less skilled, job, but is now unable to make mortgage payments and loses his house.

On the surface, his actions have harmed only himself. However, consider what society as a whole has lost:

  1. The economic value of his work for the period where he was unemployed.
  2. The greater economic value of a skilled, better-paid worker.
  3. The wealth represented by the destroyed gifts.
  4. The transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.
  5. The value of any other economic activity he would have participated in, had these events not occurred. [0]

A very serious loss? Not really. Certainly, it would be extremely dubious to say the least for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.

Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone's lives? No--if only because that doesn't generally work out very well. Yet, lacking a solution doesn't make the problem any less real.

So, to return to the original discussion, with a hypothetical medical procedure to make people smarter and more sensible, or whatever; if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.

Not to say I wouldn't also be suspicious of such a proposition, but don't pretend that opposing the idea is free. It's not, so long as we're all sharing this society.

Maybe you're happy to pay the costs of allowing other people to make mistakes, but I'm not. It may very well be that the alternatives are worse, but that doesn't make the situation any more pleasant.

Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.

Complicated? That's clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.

[0] One might be tempted to argue that many of these aren't really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.

Comment by SoullessAutomaton on Savulescu: "Genetically enhance humanity or face extinction" · 2010-01-11T00:51:19.814Z · LW · GW

presumably you refer to the violation of individuals' rights here - forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?

Out of curiosity, what do you have in mind here as "participate in society"?

That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?

The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to me don't actually seem that much better. Hence your point about "the people who do get made smarter can figure it out", I guess.

Comment by SoullessAutomaton on Savulescu: "Genetically enhance humanity or face extinction" · 2010-01-11T00:09:16.567Z · LW · GW

On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.

Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.

Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.

Comment by SoullessAutomaton on Rationality Quotes January 2010 · 2010-01-09T23:03:01.960Z · LW · GW

Linus replies by quoting the Bible, reminding Charlie Brown about the religious significance of the day and thereby guarding against loss of purpose.

Loss of purpose indeed.

Charlie Brown: Isn't there anyone who knows what Christmas is all about?

Linus: Sure, Charlie Brown, I can tell you what Christmas is all about. Lights, please?

Hear ye the word which the LORD speaketh unto you, O house of Israel:

Thus saith the LORD, Learn not the way of the heathen, and be not dismayed at the signs of heaven; for the heathen are dismayed at them. For the customs of the people are vain: for one cutteth a tree out of the forest, the work of the hands of the workman, with the axe.

They deck it with silver and with gold; they fasten it with nails and with hammers, that it move not.

-- Jeremiah 10:1-4

Linus: It's a pagan holiday, Charlie Brown.

Comment by SoullessAutomaton on Case study: Melatonin · 2010-01-09T20:30:08.076Z · LW · GW

If you're actually collecting datapoints, not just using the term semi-metaphorically, it may help to add that I've been diagnosed with (fairly moderate) ADHD; if my experience is representative of anything, it's probably that.

Comment by SoullessAutomaton on Case study: Melatonin · 2010-01-09T16:36:55.636Z · LW · GW

The former category would include not experiencing, or noticing that you're experiencing, 'tiredness', even when your body is acting tired in a way that others would notice (e.g. yawning, stretching, body language).

I'm not sure if this is what you're talking about, but I've long distinguished two aspects of "tiredness". One is the sensation of fatigue, exhaustion, muddled thinking, &c.--physical indicators of "I need sleep now".

The second is the sensation of actually being sleepy, in the sense of reduced energy, body relaxation, and a general feeling that going to bed sounds like a fine plan.

I almost always notice the former, but unless accompanied by the latter (often not the case), acting on it by going to bed requires a conscious decision. Usually, the sleepiness will appear after I'm lying down, but at times I've been unable to clear my mind of activity and will lie in bed for two or more hours, unable to sleep despite being extremely tired.

If I'm deeply involved in something and not feeling "sleepy" I can easily fail to notice the fatigue (along with hunger and various other non-urgent physical sensations).

The second case involves not being able to stop whatever activity you're engaged in and go to bed, even though you recognize (perhaps briefly, before being drawn back into what you're doing) that you are tired and it would be a good idea.

In my case it's more garden-variety procrastination; going to sleep is just one more thing that I know I should do but don't really want to, because it's boring.

I'm curious to find out if those issues are also experienced by people who aren't autistic - perhaps to a lesser degree, or with different explanations than the ones that I mentioned. Do the issues I described sound like what you're experiencing? Are they close, or similar in some interesting way?

My experience mostly reduces to a disconnect between a non-critical physical need and the desire to fulfill it, generally to an extent proportional to how much mental activity is bouncing around my conscious mind (the default state being "too much").

As a final note, besides the melatonin not making me sleepy, neither ethanol nor caffeine seems to have an appreciable effect on whether I can get to sleep (though both will reduce the quality of any sleep).

Comment by SoullessAutomaton on Case study: Melatonin · 2010-01-09T04:53:58.330Z · LW · GW

This is my experience as well, for the most part.

The only times I recall "going to bed" feeling like a good idea is when I've been so far into exhausted sleep deprivation that base instincts took over and I found myself doing so almost involuntarily.

Even in those cases, my conscious mind was usually confabulating wildly about how I wasn't actually going to sleep, just lying down for half a moment, not sleeping at all... right up until I pretty much passed out.

It's rather vexing.