Posts

In praise of gullibility? 2015-06-18T04:52:09.043Z
Philosophical differences 2015-06-13T01:16:21.237Z
Cold fusion: real after all? 2013-04-17T19:27:46.154Z

Comments

Comment by ahbwramc on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-12T02:42:06.076Z · LW · GW

I mean, Laffer Curve-type reasons if nothing else.

Comment by ahbwramc on Using humility to counteract shame · 2016-04-17T02:14:10.135Z · LW · GW

It's funny, I wrote a blog post arguing against humility not too long ago. I had a somewhat different picture of humility than you:

People internalize norms in very different ways and to very different degrees. There are people out there who don’t seem to internalize the norms of humility at all. We usually call these people “arrogant jerks”. And there are people – probably the vast majority of people – who internalize them in reasonable, healthy ways. We usually call these people “normal”.

But then there are also people who internalize the norms of humility in highly unhealthy ways. Humility taken to its most extreme limit is not a pretty thing – you don’t end up with wise, virtuous, Gandalf-style modesty. You end up with self-loathing, pathological guilt, and scrupulosity. There are people out there – and they are usually exceptionally good, kind, and selfless people, although that shouldn’t matter – who are convinced that they are utterly worthless as human beings. For such people, showing even a modicum of kindness or charity towards themselves would be unthinkable. Anti-charity is much more common – whatever interpretation of a situation puts themselves in the worst light, that’s the one they’ll settle on. And why? Because it’s been drilled into their heads, over and over again, that to think highly of yourself – even to the tiniest, most minute degree – is wrong. It’s something that bad, awful, arrogant people do, and if they do it then they’ll be bad, awful, arrogant people too. So they take refuge in the opposite extreme: they refuse to think even the mildest of nice thoughts about themselves, and they never show themselves even the slightest bit of kindness.

Or take insecurity (please). All of us experience insecurity to one degree or another, of course. But again, there’s a pathological, unhealthy form it can take on that’s rooted in how we internalize the norms of humility. When you tell people that external validation is the only means by which they can feel good about themselves…well, surprisingly enough, some people take a liking to external validation. But in the worst cases it goes beyond a mere desire for validation, and becomes a need – an addiction, even. You wind up with extreme people-pleasers, people who center every aspect of their lives around seeking out praise and avoiding criticism.

But I actually don't think we disagree all that much, we're just using the same word to describe different things. I think the thing I called humility - the kind of draconian, overbearing anti-self-charity that scrupulous people experience - that is a bad thing. And I think the thing you called humility - acceptance of your flaws, self-compassion - that is a very good thing. In fact, I ended the essay with a call for more self-charity from (what I called) humble people. And I've been trying to practice self-compassion since writing that essay, and it's been a boon for my mental health.

(By far the most useful technique, for what it's worth, has been "stepping outside of myself", i.e. trying to see myself as just another person. I find when I do something embarrassing it feels like the worst thing to have ever happened, and obviously all my friends are thinking about how stupid I am and have lowered their opinion of me accordingly...whereas when a friend does something embarrassing, it maybe warrants a laugh, but then it seems totally irrelevant and has absolutely no bearing on what I think of them as a person. I now try as much as possible to look at myself with that second mindset.)

Anyway, language quibbles aside, I agree with this post.

Comment by ahbwramc on March 2016 Media Thread · 2016-03-02T04:46:38.611Z · LW · GW

Just wanted to say that I really appreciate your link roundups and look forward to them every month.

Comment by ahbwramc on [Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. · 2016-01-16T07:02:26.340Z · LW · GW

I just posted a comment on Facebook that I'm going to lazily copy here:

At this point I have no idea what's going on and I'm basically just waiting for astrophysicists to weigh in. All I can say is that this is fascinating and I can't wait for more data to come in.

Two specific things I'm confused about:

  1. Apparently other astronomers already looked at this data and didn't notice anything amiss. Schaefer quotes them as saying "the star did not do anything spectacular over the past 100 years." But as far as I can tell the only relevant difference between their work and Schaefer's is that he grouped the data into five-year bins and they didn't. And sure, binning is great and all, and it makes trends easier to spot. But it's not magic. It can't manufacture statistical significance out of thin air. If the binned data has a significant trend then the unbinned data should as well. So I don't get why the first paper didn't find a dimming trend (unless they were just eyeballing the data and didn't even bother to do a linear fit, but why would they do that?). I mean, in the end Schaefer's plot looks pretty convincing, so I don't think this throws his work into doubt. But it still seems weird.
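(To convince myself of the binning point, I threw together a quick sanity check - just a sketch with made-up numbers, not the actual DASCH photometry: simulate a century of noisy magnitudes with a slow linear dimming, then fit the raw points and the five-year bin means.)

```python
# Sketch: binning makes a trend easier to see, but it can't manufacture
# significance - the unbinned fit recovers essentially the same slope.
# All numbers here are hypothetical, chosen only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.linspace(1890, 1990, 1000)        # a century of plate measurements
slope_true = -0.0016                         # hypothetical dimming, mag/year
mags = 12.0 + slope_true * (years - 1890) + rng.normal(0, 0.1, years.size)

unbinned = stats.linregress(years, mags)     # fit the raw, unbinned data

edges = np.arange(1890, 1995, 5)             # five-year bins
idx = np.digitize(years, edges)
bin_years = np.array([years[idx == i].mean() for i in np.unique(idx)])
bin_mags = np.array([mags[idx == i].mean() for i in np.unique(idx)])
binned = stats.linregress(bin_years, bin_mags)

print(f"unbinned: slope = {unbinned.slope:.2e} mag/yr, p = {unbinned.pvalue:.1e}")
print(f"binned:   slope = {binned.slope:.2e} mag/yr, p = {binned.pvalue:.1e}")
```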

  2. Any explanation for this has to walk a kind of tightrope - you need something that blocks out a significant amount of light to account for the data, but thermodynamics is pretty insistent that any light you absorb has to come out as infrared eventually. So if you posit something that blocks out too much light you run up against the problem of there being no infrared excess. The nice thing about the megastructure hypothesis was that it could explain the dips while still being small enough to not produce an infrared excess.

Now, though, we have to explain not just dips but progressive dimming. And yeah, progressive dimming certainly sounds consistent with a Dyson swarm being built. But Dyson swarms large enough to dim an entire star seem like the kind of thing that would definitely produce an infrared excess. And in fact it seems like any explanation for that much dimming would require an infrared excess, which we don't see.

I guess it all depends on the magnitude of the dimming, though. If it's not that much dimming, I guess there could be an intermediate-sized Dyson swarm (or weird astrophysical phenomenon, it doesn't matter, they should all produce infrared) that was big enough to cause the dimming but not big enough to produce noticeable infrared excess.
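(For concreteness, the back-of-envelope energy balance I have in mind, for a roughly planet-like absorber - illustrative numbers only, not a model of this particular star:)

```latex
% An absorber at distance d from a star of luminosity L intercepts starlight
% over \pi r^2 and re-radiates thermally over 4\pi r^2 (albedo a):
\[
  \pi r^2 (1-a)\,\frac{L}{4\pi d^2} \;=\; 4\pi r^2 \sigma T_{\mathrm{eq}}^{4}
  \quad\Longrightarrow\quad
  T_{\mathrm{eq}} \;=\; \left[\frac{(1-a)\,L}{16\pi\sigma d^{2}}\right]^{1/4}.
\]
% For the Sun at 1 AU this gives T_eq ~ 280 K, so by Wien's law the emission
% peaks near 2898 (um K) / 280 K ~ 10 um - squarely in the infrared.
```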

For now I remain confused and fascinated.

Comment by ahbwramc on Open Thread, January 11-17, 2016 · 2016-01-13T19:47:07.936Z · LW · GW

Wait, I'm confused. How does this practice resistance to false positives? If the false signal is designed to mimic what a true detection would look like, then it seems like the team would be correct to identify it as a true detection. I feel like I'm missing something here.

Comment by ahbwramc on Stupid Questions July 2015 · 2015-07-06T02:22:58.642Z · LW · GW

Well, it's both redundant and anti-redundant, which I always liked. But I don't think there's anything more to it than that.

Comment by ahbwramc on There is no such thing as strength: a parody · 2015-07-06T02:07:39.604Z · LW · GW

I've had similar thoughts before:

Now imagine you said this [that some people are funnier than others] to someone and they indignantly responded with the following:

“You can’t say that for sure – there are different types of humour! Everyone has different talents: some people are good at observational comedy, and some people are good at puns or slapstick. Also, most so-called “comedians” are only “stand-up funny” – they can’t make you laugh in real life. Plus, just because you’re funny doesn’t mean you’re fun to be around. I have a friend who’s not funny at all but he’s really nice, and I’d hang out with him over a comedian who’s a jerk any day. Besides, no one’s been able to define funniness anyway, or precisely measure it. Who’s to say it even exists?”

(/shameless blog plug)

Comment by ahbwramc on Selecting vs. grooming · 2015-06-30T14:00:56.410Z · LW · GW

The first thing to come to mind is that selecting is simply much cheaper than grooming. If a company can get employees of roughly the same quality level without having to pay for an expensive grooming process over many years, they're going to do that. There's also less risk with selecting, because a groomed candidate can always decide to up and leave for another company (or die, or join a cult, or have an epiphany and decide to live a simple life in the wilderness of Alaska, or whatever), and then the company is out all that grooming money. I feel as though groomed employees would have to be substantially better than selected ones to make up for these disadvantages.

Comment by ahbwramc on Stupid Questions June 2015 · 2015-06-26T16:09:19.607Z · LW · GW

Thanks for the great suggestions everyone. To follow up, here's what I did as a result of this thread:

-Put batteries back in my smoke detector

-Backed up all of my data (hadn't done this for many months)

-Got a small Swiss Army knife and put it on my keychain (already been useful)

-Looked at a few fire extinguishers to make sure I knew how to use them

-Put some useful things in my messenger bag (kleenex, pencil and paper) - I'll probably try to keep adding things to my bag as I think of them, since I almost always have it with me

All of the car-related suggestions seemed like good ones, but weren't applicable since I don't own a car. Some other suggestions were good but required more time than I was willing to put in right now, or weren't applicable for other reasons.

Comment by ahbwramc on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-23T15:25:09.480Z · LW · GW

ROT13: V thrffrq pbafreingvirf pbeerpgyl, nygubhtu V'z cerggl fher V unq urneq fbzrguvat nobhg gur fghql ryfrjurer.

Comment by ahbwramc on Open Thread, Jun. 15 - Jun. 21, 2015 · 2015-06-19T17:56:28.864Z · LW · GW

I don't know, it feels like I see more people criticizing perceived hero worship of EY than I see actual hero worship. If anything the "in" thing on LW these days seems to be signalling how evolved one is by putting down EY or writing off the sequences as "just a decent popular introduction to cognitive biases, nothing more" or whatever.

Comment by ahbwramc on In praise of gullibility? · 2015-06-18T13:22:06.268Z · LW · GW

I agree with this. "Half-baked" was probably the wrong phrase to use - I didn't mean "idea that's not fully formed or just a work in progress," although in retrospect that's exactly what half-baked would convey. I just meant an idea that's seriously flawed in one way or another.

Comment by ahbwramc on Philosophical differences · 2015-06-13T03:32:59.105Z · LW · GW

Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don't think it's exaggerating to say that the sleeper cell "already exists". I'm willing to own up to the analogy to that extent.

As for Knightian uncertainty: either the AI will be an existential threat, or it won't. I already think that it will be (or could be), so I think I'm already being pretty conservative from a Knightian point of view, given the stakes at hand. Worst case is that we waste some research money on something that turns out to be not that important.

(Of course, I'm against wasting research money, so I pay attention to arguments for why AI won't be a threat. I just haven't been convinced yet)

Comment by ahbwramc on The Fallacy of Gray · 2015-06-12T22:08:30.669Z · LW · GW

When I first read this post back in ~2011 or so, I remember remembering a specific scene in a book I had read that talked about this error and even gave it the same name. I intended to find the quote and post it here, but never bothered. Anyway, seeing this post on the front page again prompted me to finally pull out the book and look up the quote (mostly for the purpose of testing my memory of the scene to see if it actually matched what was written).

So, from Star Wars X-Wing: Isard's Revenge, by Michael A. Stackpole (page 149 of the paperback edition):

Tycho stood. "It's called the gray fallacy. One person says white, another says black, and outside observers assume gray is the truth. The assumption of gray is sloppy, lazy thinking. The fact that one person takes a position that is diametrically opposed to the truth does not then skew reality so the truth is no longer the truth. The truth is still the truth."

So maybe not exactly the same sentiment as this post, but not a bad rationality lesson for a Star Wars book, really.

(for those interested: my memory of the scene was pretty much accurate, although it occurred much later in the book than I had thought)

Comment by ahbwramc on How much do we know about creativity? · 2015-06-09T17:14:50.149Z · LW · GW

I mean, I don't really disagree; it's not a very scientific theory right now. It was just a blog post, after all. But if I was trying to test the theory, I would probably take a bunch of people who varied widely in writing skill and get them to write a short piece, and then get an external panel to grade the writing. Then I would get the same people to take some kind of test that judged ability to recognize rather than generate good writing (maybe get some panel of experts to provide some writing samples that were widely agreed to vary in writing quality, and have the participants rank them). Then I would see how much of the variation in writing skill was explained by the variation in ability to recognize good writing. If it was all or most of the variation, that would probably falsify the theory - the theory would say the most difficult part of "guess and check" is the guessing part, but those results would say it's the checking.
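The analysis step at the end would be simple enough - something like this sketch, with hypothetical scores standing in for the panel grades (the numbers and the 60-person sample are made up purely for illustration):

```python
# Sketch: how much of the variance in (panel-graded) writing skill is
# explained by (panel-ranked) recognition ability? R^2 near 1 would count
# against the guess-and-check theory, which says generation is the hard part.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60
recognition = rng.normal(0, 1, n)                   # recognition-test scores
writing = 0.4 * recognition + rng.normal(0, 1, n)   # writing-sample grades

r, p = stats.pearsonr(recognition, writing)
print(f"R^2 = {r**2:.2f} (p = {p:.3f})")
```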

That's the first thing to come to mind, anyway.

Comment by ahbwramc on How much do we know about creativity? · 2015-06-09T13:36:45.402Z · LW · GW

I wrote a couple posts on my personal blog a while ago about creativity. I was considering cross-posting them here but didn't think they were LessWrong-y enough. Quick summary: I think because of the one-way nature of most problems we face (it's easier to recognize a solution than it is to generate it), pretty much all of the problem solving we do is guess-and-check. That is, the brain kind of throws up solutions to problems blindly, and then we consciously check to see if the solutions are any good. So what we call "creativity" is just "those algorithms in the brain that suggest solutions to problems, but that we lack introspective access to".

The lack of introspective access means it's difficult to pass creative skills on - think of a writer trying to explain how to write well. They can give a few basic rules of thumb, but most of their skill is contained within a black box that suggests possible sentences. The actual writing process is something like "wait for brain to come up with some candidate next sentence", and then "for each sentence, make a function call to 'is-sentence-good?' module of brain" (in other words, guess and check). Good writers/creative people are just those people who have brain algorithms that are unusually good at raising the correct solution to attention out of the vast possible space of solutions we could be considering.

Of course, sometimes one has insights into a rule or process that generates some of the creative suggestions of the brain. When that happens you can verbalize explicitly how the creative skill works, and it stops being "creative" - you can just pass it on to anyone as a simple rule or procedure. This kind of maps nicely onto the art/science divide, as in "more of an art than a science". Skills are "arts" if they are non-proceduralizable because the algorithms that generate the skill are immune to introspection, and skills are "sciences" if the algorithms have been "brought up into consciousness", so to speak, to the point where they can be explicitly described and shared (of course, I think art vs science is a terrible way to describe this dichotomy, because science is probably the most creative, least proceduralizable thing we do, but what are you gonna do?)
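If it helps, here's the shape of the loop I'm describing, as a toy sketch (the generator and the checker are obviously just stand-ins for the real, much richer brain modules):

```python
# Guess-and-check in miniature: a blind generator proposes candidates, and a
# cheap, explicit checker recognizes a good one when it appears.
import random

def generate_candidate():
    """Stand-in for the opaque brain algorithm that blindly suggests solutions."""
    return random.uniform(-10, 10)

def is_good(candidate, target=3.0, tol=0.01):
    """Recognition is the easy direction: checking a candidate is cheap."""
    return abs(candidate - target) < tol

def solve():
    while True:
        guess = generate_candidate()   # the "creative" step, immune to introspection
        if is_good(guess):             # the conscious checking step
            return guess

print(solve())
```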

Anyway, I don't know if all of this is just already obvious to everyone here, but I've found it a very useful way to think about creativity.

Edit: I missed your last sentence somehow. The above is definitely just plausible and/or fun to read.

Comment by ahbwramc on Stupid Questions June 2015 · 2015-05-31T16:35:56.623Z · LW · GW

Fair.

Comment by ahbwramc on Stupid Questions June 2015 · 2015-05-31T04:06:28.228Z · LW · GW

What contingencies should I be planning for in day-to-day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there's also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a Reddit thread that discussed how to survive in case an airplane that you're on breaks up, which...struck me as not the best use of one's planning time). So I'd be just as interested in mundane, everyday tips.

(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)

Comment by ahbwramc on Brainstorming new senses · 2015-05-21T14:43:53.865Z · LW · GW

I feel like there are interesting applications here for programmers, but I'm not exactly sure what. Maybe you could link up a particular programming language's syntax to our sense of grammar, so programs that wouldn't compile would seem as wrong to you as the sentence "I seen her". Experienced programmers probably already have something like this I suppose, but it could make learning a new programming language easier.

Comment by ahbwramc on How my social skills went from horrible to mediocre · 2015-05-21T13:44:37.042Z · LW · GW

I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from being a crackpot.

For what it's worth, these recent comments of yours have been working on me, at least sort of. I used to think you were just naively arrogant, but now it's seeming more plausible that you're actually justifiably arrogant. I don't know if I buy everything you're saying, but I'll be paying more attention to you in the future anyway.

I've tried to convey certain hard-to-explain LessWrong concepts to people before and failed miserably. I'm recognizing the same frustration in you that I felt in those situations. And I really don't want to be on the wrong side of another LW-sized epistemic gap.

Comment by ahbwramc on How my social skills went from horrible to mediocre · 2015-05-21T03:53:13.603Z · LW · GW

Fair.

So, random anecdote time: I remember when I was younger my sister would often say things that would upset my parents; usually this ended up causing some kind of confrontation/fight. And whenever she would say these upsetting things, the second the words left her mouth I would cringe, because it was extremely obvious to me that what she had said was very much the wrong thing to say - I could tell it would only make my parents madder. And I was never quite sure (and am still not sure) whether she also recognized that what she was saying would only worsen the situation (but she still couldn't resist saying it because she was angry or whatever) or whether she was just blind to how her words would make my parents feel. So my question to you would be: can you predict when your LW comments will get a negative reaction? Do you think "yeah, this will probably get negative karma but I'm going to say it anyway"? Or are you surprised when you get downvoted?

(Not to say that it's irrational to post something you expect to be downvoted, of course, whereas it would be sort of irrational for my sister to say something in a fit of anger that she knew would only make things worse. I'm just trying to get a sense of how you're modelling LWers.)

Comment by ahbwramc on How my social skills went from horrible to mediocre · 2015-05-21T03:08:14.338Z · LW · GW

But my focus here is on the meta-level: I perceive a non-contingency about the situation, where even if I did have extremely valuable information to share that I couldn't share without signaling high status, people would still react negatively to me trying to share it. My subjective sense is that to the extent that people doubt the value of what I have to share, this comes primarily from a predetermined bottom line of the type "if what he's saying were true, then he would get really high status: it's so arrogant of him to say things that would make him high status if true, so what he's saying must not be true."

I have no particular suggestions for you, but it's clear that it's at least possible to convey valuable information to LW without giving off a status-grabbing impression, because plenty of people have done it (eg lukeprog, Yvain, etc)

Comment by ahbwramc on Group rationality diary, May 5th - 23rd · 2015-05-05T15:27:29.972Z · LW · GW

I've been trying to be more "agenty" and less NPC-ish lately, and having some reasonable success. In the past month I've:

-Gone to a SlateStarCodex meetup

This involved taking a Greyhound bus, crossing the border into a different country, and navigating my way around an unfamiliar city - all things that would have stopped me from even considering going a few years ago. But I realized that none of those things were actually that big of a deal, that what was really stopping me was that it just wasn't something I would normally do. And since there was no real reason I couldn't go, and because I knew I really wanted to go, I just up and did it.

(had a great time btw, no regrets)

-Purchased a used (piano) keyboard

I used to just kind of vaguely wish that I had a keyboard, because it seemed like it would be a fun thing to learn. I would think this resignedly, as if it were an immutable fact of the universe that I couldn't have a keyboard - for some reason going out and buying one didn't really occur to me. Now that I have one I'm enjoying it, although I'm mostly just messing around and it's clear that I'll need more structure if I'm really going to make progress.

-Signed up for an interview for the MIRI Summer Fellows program

Working at MIRI would be amazing, a dream come true. But I always just sort of assumed I wasn't cut out for it. And that may well be true, but here's a practically zero-cost chance to find out. Why not take it? (Of course, there's always the possibility that I'm just wasting Anna Salamon's time, which I wouldn't want to do. But I don't think I'm so obviously underqualified that that would be the case). Again, I don't think this is something I would have done even a year ago.

I've also been having much more success consistently writing for my blog, which I used to always enjoy but rarely do.

Basically I've gotten a ton of mileage out of just having the concept of agency installed in my brain. Knowing that I can just do the things I want, even if they're weird or I haven't done them before, is pretty freeing and pretty cool. The whole "Roles" arc of HPMOR really drove this idea home for me I think.

Comment by ahbwramc on Is Scott Alexander bad at math? · 2015-05-04T21:37:21.244Z · LW · GW

Sure, I understand the identity now of course (or at least I have more of an understanding of it). All I meant was that if you're introduced to Euler's identity at a time when exponentiation just means "multiply this number by itself some number of times", then it's probably going to seem really odd to you. How exactly does one multiply 2.718 by itself sqrt(-1)*3.14 times?
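(For anyone who hits the same wall: the resolution is that complex exponentiation is defined by the power series, not by repeated multiplication, and the series splits cleanly into cosine and sine:)

```latex
\[
  e^{i\theta} \;=\; \sum_{n=0}^{\infty} \frac{(i\theta)^{n}}{n!}
              \;=\; \cos\theta + i\sin\theta,
  \qquad\text{so}\qquad
  e^{i\pi} \;=\; \cos\pi + i\sin\pi \;=\; -1.
\]
```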

Comment by ahbwramc on Is Scott Alexander bad at math? · 2015-05-04T19:19:48.638Z · LW · GW

I remember my mom, who was a math teacher, telling me for the first time that e^(i*pi) = -1. My immediate reaction was incredulity - I literally said "What??!" and grabbed a piece of paper to try to work out how that could be true. Of course I had none of the required tools to grapple with that kind of thing, so I got precisely nowhere with it. But that's the closest I've come to having a reaction like you describe with Scott and quintics. I consider the quintic thing far more impressive of course - the weirdness of Euler's identity isn't exactly subtle, after all.

So do you think you could predict mathematical ability by simply giving students a list of "deep" mathematical facts and seeing which ones (if any) they're surprised by or curious about?

Comment by ahbwramc on Is Scott Alexander bad at math? · 2015-05-04T17:02:38.581Z · LW · GW

Since much of this sequence has focused on case studies (Grothendieck, Scott Alexander), I'd be curious as to what you think of Douglas Hofstadter. How does he fit into this whole picture? He's obviously a man of incredible talent in something - I don't know whether to call it math or philosophy (or both). Either way it's clear that he has the aesthetic sense you're talking about here in spades. But I distinctly remember him writing something along the lines of how, upon reaching graduate mathematics, he hit a "wall of abstraction" and couldn't progress any further. Does your picture of mathematical ability leave room for something like that to happen? I mean, this is Douglas freakin' Hofstadter we're talking about - it's hard to picture someone being more of a mathematical aesthete than he is. And even he ran into a wall!

Comment by ahbwramc on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-05-03T20:03:08.800Z · LW · GW

You seem to be discussing in good faith here, and I think it's worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it's best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I'm still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn't heard of neurons, wouldn't they also seem like a reductio to you?

What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something "special" at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it's made of neurons. But that doesn't seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a "unique" physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don't think you're doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn't have any neurons or a close equivalent? I think you'd have to concede that they were conscious, wouldn't you? Of course, such aliens may not exist, so I can't really make an argument based on that. But still - really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)

So I'm led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn't seem central to me - if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat's Last Theorem would still get proved).

What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I'm wrong) is that in a neural network such as the brain, any given neuron fires iff the summed inhibitory and excitatory inputs feeding into it exceed some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way.

But that just amounts to defining a specific high-level causal structure - and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn't have fired, etc). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one thing, I think it's a standard-ish way of defining causality in philosophy (it's at least the first section in the Wikipedia article, anyway, and it's the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain's counterfactual dependencies are what make your brain, your brain. If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
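To make the threshold picture concrete, here's a minimal sketch (a big simplification of real neurons, assumed purely for illustration) of how the wiring plus the weights fixes a set of counterfactual dependencies:

```python
# A threshold unit: a "neuron" fires iff its weighted inputs sum past a
# threshold. Positive weights are excitatory, negative weights inhibitory.
def fires(inputs, weights, threshold):
    """inputs are 0/1 firing states of upstream neurons."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Neuron C listens to A (excitatory) and B (inhibitory).
weights_C, threshold_C = [1.0, -1.0], 0.5

print(fires([1, 0], weights_C, threshold_C))  # True:  A fired, B didn't, so C fires
print(fires([1, 1], weights_C, threshold_C))  # False: if B had also fired, C wouldn't have
```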

Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It's hard for me to break the intuition down further than that, beyond saying that it's the if-then pattern that seems like the really important thing here. I just can't see what else it could be. And this view does have some nice features - if you wind up meeting apparently-conscious aliens, you don't have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.

To answer your question about simulations not being the thing that they're simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you "simulating" the causal structure when you implement it on a computer? It's still the same structure, still has the same dependencies. That seems just as real to me as what the brain does - you could just as easily say that neurons are "simulating" consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a "simulation" versus being "real" kind of disappears.

Does that help you understand where I'm coming from? I'd be interested to hear where in that line of arguments/intuitions I lost you.

Comment by ahbwramc on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-05-01T15:37:31.726Z · LW · GW

I think we might be working with different definitions of the term "causal structure"? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency - if neuron A hadn't fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn't call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That's what I think is wrong with your GIF example, btw - there's no counterfactual dependency whatsoever. If I delete a particular pixel from one frame of the animation, the next frame wouldn't change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.

Anyway, beyond that, we're obviously working from very different intuitions, because I don't see the China Brain or Turing machine examples as reductios at all - I'm perfectly willing to accept that those entities would be conscious.

Comment by ahbwramc on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-29T13:46:57.338Z · LW · GW

Well, since I'm on LW the first article to come to mind was Outside the Laboratory, although that's not really arguing for the proposition per se.

As for the stooping thing, I'm not entirely sure what you mean, but the first thing that came to mind was that maybe you have a rule-out rather than rule-in criterion for judging intelligence? As in: someone can say a bunch of smart things, but at best that just earns them provisional smart status. On the other hand, if they say one sufficiently dumb thing, that's enough to rule them out as being truly intelligent.

Comment by ahbwramc on CFAR-run MIRI Summer Fellows program: July 7-26 · 2015-04-29T02:38:53.692Z · LW · GW

Well, I signed up for an interview (probably won't amount to anything, but it's too good of an opportunity to just ignore). After signing up though it occurred to me that this might be a US-only deal. Would my being Canadian be a deal-breaker?

Comment by ahbwramc on LessWrong experience on Alcohol · 2015-04-17T15:47:09.339Z · LW · GW

Oh hey, convenient. Someone already wrote my reply.

Comment by ahbwramc on Open Thread, Apr. 13 - Apr. 19, 2015 · 2015-04-13T16:14:16.601Z · LW · GW

In my experience hostels are a lot more like the fictional bars you describe.

Comment by ahbwramc on Innate Mathematical Ability · 2015-04-02T00:23:31.160Z · LW · GW

Any more of this sequence forthcoming? I was looking forward to it continuing.

Comment by ahbwramc on Innate Mathematical Ability · 2015-02-19T03:56:56.551Z · LW · GW

I tried for maybe thirty seconds to solve it, but couldn't see anything obvious, so I decided to just truncate the fraction to see if it was close to anything I knew. From that it was clear the answer was root 2, but I still couldn't see how to solve it. Once I got into work though I had another look, and then (maybe because I knew what the answer was and could see that it was simple algebraically) I was able to come up with the above solution.
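(For anyone curious, the truncation step looks something like this - assuming the fraction in question is the standard 1 + 1/(2 + 1/(2 + ...)) expansion, which is my assumption here:)

```python
# Evaluate the continued fraction 1 + 1/(2 + 1/(2 + ...)) truncated at a
# given depth; the values converge quickly toward sqrt(2) = 1.41421356...
def truncated(depth):
    x = 2.0                      # innermost term
    for _ in range(depth - 1):
        x = 2.0 + 1.0 / x        # wrap another 2 + 1/(...) layer
    return 1.0 + 1.0 / x

for d in (1, 2, 5, 10):
    print(d, truncated(d))
```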

Comment by ahbwramc on Innate Mathematical Ability · 2015-02-19T03:48:05.190Z · LW · GW

Also how I did it. FWIW I know it took me more than a minute, but definitely less than five.

Comment by ahbwramc on MIRI's technical agenda: an annotated bibliography, and other updates · 2015-02-07T04:28:37.461Z · LW · GW

As a MIRI donor, glad to hear it! Good luck to you guys, you're doing important work to say the least.

Comment by ahbwramc on MIRI's technical agenda: an annotated bibliography, and other updates · 2015-02-06T18:47:41.091Z · LW · GW

I'm curious, has this recent series of papers garnered much interest from the wider (or "mainstream") AI community? It seems like MIRI has made a lot of progress over the past few years in getting a lot of very smart people to take their ideas seriously (and in cultivating a more respectable, "serious" image). I was wondering if similar progress had been made in creating inroads into academia.

Comment by ahbwramc on Superintelligence 20: The value-loading problem · 2015-01-29T01:09:24.037Z · LW · GW

Did anyone else immediately try to come up with ways Davis' plan would fail? One obvious failure mode would be in specifying which dead people count - if you say "the people described in these books," the AI could just grab the books and rewrite them. Hmm, come to think of it: is any attempt to pin down human preferences by physical reference rather than logical reference vulnerable to tampering of this kind, and therefore unworkable? I know EY has written many times before about a "giant logical function that computes morality", but this puts that notion in a bit of a different light for me. Anyway, I'm sure there are other less obvious ways Davis' plan could go wrong too. I also suspect he's sneaking a lot into that little word, "disapprove".

In general though, I'm continually astounded at how many people, upon being introduced to the value loading problem and some of the pitfalls that "common-sense" approaches have, still say "Okay, but why couldn't we just do [idea I came up with in five seconds]?"

Comment by ahbwramc on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T16:43:14.984Z · LW · GW

A number of SSC posts have gone viral on Reddit or elsewhere. I'm sure he's picked up a fair number of readers from the greater internet. Also, for what it's worth, I've turned two of my friends on to SSC who were never much interested in LW.

But I'll second it being among my favourite websites.

Comment by ahbwramc on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-16T15:37:30.651Z · LW · GW

The post was Polyamory is Boring btw, in case anyone else is curious.

Comment by ahbwramc on When the uncertainty about the model is higher than the uncertainty in the model · 2014-11-29T00:49:09.862Z · LW · GW

Confidence levels inside and outside an argument seems related.

Comment by ahbwramc on Is arguing worth it? If so, when and when not? Also, how do I become less arrogant? · 2014-11-28T00:40:08.921Z · LW · GW

See, if anything I have the exact opposite problem (which, ironically, I also attribute to arrogance). I almost never engage in arguments with people because I assume I'll never change their mind. When I do get into a debate with someone, I'm extremely quick to give up and write them off as a lost cause. This probably isn't a super healthy attitude to have (especially since many of these "lost causes" are my friends and family) but at least it keeps me out of unproductive arguments. I do have a few friends who are (in my experience) unusually good at listening to new arguments and changing their mind, so I usually wind up limiting my in-depth discussions to just them.

Comment by ahbwramc on Open thread, Oct. 20 - Oct. 26, 2014 · 2014-10-25T00:31:32.937Z · LW · GW

I can empathize to an extent - my fiancée left me about two months ago (two months ago yesterday actually, now that I check). I still love her, and I'm not even close to getting over her. I don't think I'm even close to wanting to get over her. And when I have talked to her since it happened, I've said things that I wish I hadn't said, upon reflection. I know exactly what you mean about having no control of what you say around her.

But, with that being said...

Well, I certainly can't speak for the common wisdom of the community, but speaking for myself, I think it's important to remember that emotion and rationality aren't necessarily opposed - in fact, I think that's one of the most important things I've learned from LW: emotion is orthogonal to rationality. I think of the love I have for my ex-fiancée, and, well...I approve of it. It can't really be justified in any way (and it's hard to even imagine what it would mean for an emotion to be justified, except by other emotions), but it's there, and I'm happy that it is. As Eliezer put it, there's no truth that destroys my love.

Of course, emotions can be irrational - certainly one has to strive for reflective equilibrium, searching for emotions that conflict with one another and deciding which ones to endorse. And it seems like you don't particularly endorse the emotions that you feel around this person (I'll just add that for myself, being in love has never felt like another person's values were superseding my own - rather it felt like they were being elevated to being on par with my own. Suddenly this other person's happiness was just as important to me as my own - usually not more important, though). But I guess my point is that there's nothing inherently irrational about valuing someone else over yourself, even if it might be irrational for you.

Comment by ahbwramc on 2014 Less Wrong Census/Survey · 2014-10-23T02:46:23.884Z · LW · GW

Survey complete! I'd have answered the digit ratio question, but I don't have a ruler of all things at home. Ooh, now to go check my answers for the calibration questions.

Comment by ahbwramc on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-10T15:53:17.956Z · LW · GW

Scott is a LW member who has posted a few articles here

This seems like a significant understatement given that Scott has the second-highest karma of all time on LW (after only Eliezer). Even if he doesn't post much here directly anymore, he's still probably the biggest thought leader the broader rationalist community has right now.

Comment by ahbwramc on Get genotyped for free ( If your IQ is high enough) · 2014-09-22T16:00:16.783Z · LW · GW

It's been a while; any further updates on this project? All the BGI website says is that my sample has been received.

Comment by ahbwramc on Open thread, September 15-21, 2014 · 2014-09-21T15:35:23.421Z · LW · GW

Okay, fair enough, forget the whole increasing of measure thing for now. There's still the fact that every time I go to the subway, there's a world where I jump in front of it. That for sure happens. I'm obviously not suggesting anything dumb like avoiding subways, that's not my point at all. It's just...that doesn't seem very "normal" to me, somehow. MWI gives this weird new weight to all counterfactuals that seems like it makes an actual difference (not in terms of any actual predictions, but psychologically - and psychology is all we're talking about when assessing "normality"). Probably though this is all still betraying my lack of understanding of measure - worlds where I jump in front of the train are incredibly low measure, and so they get way less magical reality fluid, I should care about them less, etc. I still can't really grok that though - to me and my naive branch-counting brain, the salient fact is that the world exists at all, not that it has low probability.

Comment by ahbwramc on Open thread, September 15-21, 2014 · 2014-09-18T05:15:18.281Z · LW · GW

I've never been entirely sure about the whole "it should all add up to normality" thing in regards to MWI. Like, in particular, I worry about the notion of intrusive thoughts. A good 30% of the time I ride the subway I have some sort of weak intrusive thought about jumping in front of the train (I hope it goes without saying that I am very much not suicidal). And since accepting MWI as being reasonably likely to be true, I've worried that just having these intrusive thoughts might increase the measure of those worlds where the intrusive thoughts become reality. And then I worry that having that thought will even further increase the measure of such worlds. And then I worry...well, then it usually tapers off, because I'm pretty good at controlling runaway thought processes. But my point is...I didn't have these kinds of thoughts before I learned about MWI, and that sort of seems like a real difference. How does it all add up to normality, exactly?

Comment by ahbwramc on What is the difference between rationality and intelligence? · 2014-08-14T21:36:50.043Z · LW · GW

Perhaps (and I'm just thinking off the cuff here) rationality is just the subset of general intelligence that you might call meta-intelligence - i.e., the skill of intelligently using your first-order intelligence to best achieve your ends.

Comment by ahbwramc on Open thread, 11-17 August 2014 · 2014-08-11T23:03:32.205Z · LW · GW

I remember being inordinately relieved/happy/satisfied when I first read about determinism around 14 or 15 (in Sophie's World, fwiw). It was like, thank you, that's what I've been trying to articulate all these years!

(although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)