Comment by feepingcreature on Reasonable Explanations · 2019-06-16T08:23:13.718Z · score: 8 (4 votes) · LW · GW

I had a similar one to that, where I completely overwrote my actual memory of what happened with what habit said should have happened. I went to get my bike from the garage and it was not there, even though I clearly remembered having stored it there the day prior.

Spoiler: I hadn't. I'd gone to the store on the way back, left the bike locked in front of the store, then (since I almost always go to the store on foot) walked home. My brain internally rewrote this as "rode home, [stored bike, went to the store], went home." (The [] part did not happen.)

Memory is weird, especially if your experience is normally highly compressible.

Comment by feepingcreature on Drowning children are rare · 2019-05-28T20:58:41.716Z · score: 3 (2 votes) · LW · GW

Doesn't this only hold if you abdicate all moral judgment to Gates/Good Ventures, such that if Gates Foundation/Good Ventures pass up an opportunity to save lives, it follows necessarily that the offer was fraudulent?

Comment by feepingcreature on Minimize Use of Standard Internet Food Delivery · 2019-02-13T19:33:02.551Z · score: 4 (3 votes) · LW · GW

Non-profits are "profitable" in the limit sense of a profit of zero. Non-profits with negative profit cannot exist and, in fact, generally quickly cease to.

There are problems with the equation "profit=worth", but it holds to a first approximation. The free market is vulnerable to collusion, fraud and outright value hijacking, but those are all manipulations and divergences of the baseline, which is "1 money = 1 unit of caring."

I tend to assume that the markets, being the dominant optimization power in society, are the "authority" on value, because they generally function to model society's revealed preferences. A thought that often comes to mind is "If you didn't want X, why did you allow your markets to fall into an X attractor?" I suspect people tend to model markets as cosmic laws, whereas I think of them more as highly powerful mechanical contraptions that require maintenance. Or maybe as a dev I just model everything as software.

Comment by feepingcreature on Minimize Use of Standard Internet Food Delivery · 2019-02-11T20:49:06.051Z · score: 12 (8 votes) · LW · GW

I bought a service at an excessive price, so I'm defecting? What?

If the delivery services are taking too much, they'll be outcompeted. Given your description, Slice is not some sort of hero, it's a vertically integrated competitor doing exactly what the free market would expect to happen.

In any case, I reject the notion that "giving a company too much money" could in any sense be mapped to the game theoretic notion of defection. That's not how game theory works, and it's not how markets work.

If GrubHub were misrepresenting its costs - say, declaring that 90% of the price went to the restaurant when it didn't - then sure. But that's not my impression.

GrubHub offers a service for a price. In the back, it negotiates with the restaurant to push down prices. This is good and proper. The free market drives margins to zero; the point of it is to drive margins to zero. If you don't want that, then don't sell on GrubHub?

"But all my competitors are on GrubHub and I can't compete!" Yes? Were you under the impression that you were owed a business model? A business is either profitable or it isn't. If it is not profitable, then it morally shouldn't ought to exist; the market is indicating that the business is a waste of society's limited resources. Propping up a business that is unprofitable is more defection to me than anything GrubHub does.

Comment by feepingcreature on Bottle Caps Aren't Optimisers · 2018-12-26T20:41:16.262Z · score: 1 (1 votes) · LW · GW

The model of the bank account compresses the target function of the brain, even when expressed in terms of specific physical effects on the world. Further, the model of health compresses the target function of the liver better than the bank account.

Comment by feepingcreature on Is Clickbait Destroying Our General Intelligence? · 2018-11-17T11:17:18.215Z · score: 3 (2 votes) · LW · GW

Eh, the gig economy will fix it.

I can't think of any economic model that would select more strongly for the ability to take pieces of cognitive architecture and put them together in novel ways. Weren't you the one who said science was going too slow, and that a true Bayesian should be able to solve shallow problems in, let's say, a quarter hour and more complex ones like unified physics in a week? That does not sound like "old style of work" to me; it sounds, amusingly, like "glib memetics" - and "startups". Similarly, the Agile model of development is to accept doing damage but be aware of it as you do so: make the feature usable, then move on, but put a cleanup task in the backlog. At least where I work, modern-style work seems more reactive and demanding of fluidity, and startup/gig work can only increase that.

I think we're in a transition phase where the human mind is being effectively operationalized as a target platform, but large parts of the population haven't yet evolved the software to actually manage being treated as a target, and social systems are taking full advantage. But society is also taking advantage of the increased flexibility on offer here, and in the medium run self-awareness will have to catch up enough to keep pace with and frontrun the rapidly-evolving memetic environment. At least that's my expectation.

Comment by feepingcreature on Rationality Is Not Systematized Winning · 2018-11-12T00:11:03.783Z · score: 20 (10 votes) · LW · GW

“Look, if I go to college and get my degree, and I go start a traditional family with 4 kids, and I make 120k a year and vote for my favorite political party, and the decades pass and I get old but I'm doing pretty damn well by historical human standards; just by doing everything society would like me to, what use do I have for your 'rationality'? Why should I change any of my actions from the societal default?”

You must have an answer for them. Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default; you need to do more than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults. When you give yourself permission to move away from the "systematized winning" definition, without the fear that you'll tie yourself in knots of paradox, it's then that you can really start to think about the subject concretely.

I mean, isn't the answer to that, as laid out in the Sequences, that Rationality really doesn't have anything to offer them? Tsuyoku Naritai, Something to Protect, etc. - Eliezer wrote the Sequences because he needed people to consider the evidence that AI was dangerous and going to kill everyone by default, so that they would, short-term, give money to MIRI and/or, long-term, join up as researchers. "No one truly searches for the Way until their parents have failed them, their Gods are dead and their tools have shattered in their hands." I think it's fair to say that the majority of people don't have problems of that magnitude of impact in their lives; and in any case, anyone who cared that much would already have gone off to join an EA project. I'm not sure that Eliezer-style rationality needs to struggle for some way to justify its existence when the explicit goal of its existence has already largely been fulfilled. Most people don't have one or two questions in their life that they absolutely, pass-or-die need to get right, where the answer is nontrivial. The societal default is a time-tested satisficing path.

When you are struggling to explain why something is true, make sure that it actually is true.

Comment by feepingcreature on Update on Structured Concurrency · 2018-10-22T05:21:29.201Z · score: 5 (3 votes) · LW · GW

I've read the relevant articles on structured concurrency, and I don't see what it buys you.

To be honest, in all my time writing threaded code, I have never once worried about the lifetime of threads. If I wanted to shut threads down in a certain order, I'd just use condition variables to have the latter threads wait on the former threads' shutdown sequences. Is this just a more readable way of implementing that sort of thing? Which, well, great, but with a name like "structured concurrency" I'd expect a larger paradigm shift than that.
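For concreteness, here's a minimal sketch (my own, in Python; the worker names are made up) of the condition-variable approach I mean - the later thread simply blocks until the earlier thread signals that its shutdown sequence has run:

```python
import threading

a_shut_down = False
a_done = threading.Condition()

def worker_a():
    # ... do A's work, then run A's shutdown sequence ...
    global a_shut_down
    with a_done:
        a_shut_down = True
        a_done.notify_all()  # wake anyone waiting on A's shutdown

def worker_b():
    # ... do B's work ...
    with a_done:
        while not a_shut_down:
            a_done.wait()  # block until A has finished shutting down
    # ... now run B's own shutdown sequence ...

threads = [threading.Thread(target=worker_a), threading.Thread(target=worker_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

(As I understand it, structured concurrency mostly gets you this ordering implicitly from nested scopes instead of explicit signalling.)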

Comment by feepingcreature on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T18:18:16.003Z · score: 3 (2 votes) · LW · GW

Say that the second time you wake her on Monday, you just outright ignore everything she says. Suddenly, because you changed your behavior, her objectively correct belief is 0.5 / 0.5?

The real problem is that the question is undefined - Sleeping Beauty has no goal function. If she had a goal function, she could just choose the probability assignment that maximized the payoff under it. All the handwaving at "standard frameworks" is just another way to say "assignment that maximizes payoff under a broad spread of goals".
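To illustrate with a toy example of my own (Python; the Brier-style penalty is just one arbitrary goal function, not anything from the post): score Beauty's reported probability of heads under two different goal functions, and the "correct" assignment comes out differently.

```python
import numpy as np

ps = np.linspace(0, 1, 1001)  # candidate probability assignments for "heads"

# Goal 1: Beauty is penalized (Brier-style) at every awakening.
# Heads (prob 0.5): one awakening; tails (prob 0.5): two awakenings.
per_awakening = 0.5 * (ps - 1) ** 2 + 0.5 * 2 * ps ** 2

# Goal 2: Beauty is penalized once per experiment, however often she wakes.
per_experiment = 0.5 * (ps - 1) ** 2 + 0.5 * ps ** 2

print("best p, scored per awakening: ", ps[per_awakening.argmin()])   # ~0.333 (thirder)
print("best p, scored per experiment:", ps[per_experiment.argmin()])  # ~0.5   (halfer)
```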

Alternate scenario: all three wakings of Sleeping Beauty are actually copies. Monday 2 is deleted after a few minutes. What's the "true" probability then?

Comment by feepingcreature on Are ethical asymmetries from property rights? · 2018-07-02T05:26:51.797Z · score: 7 (4 votes) · LW · GW

At least where I live, two out of three of those property rights are wrong.

Property rights explicitly give way to more basic human rights. For instance, you are allowed to steal a car if it's the only way that you can get an injured person to a hospital. And of course, you're allowed to steal bread if that's the only way you can get to eat.

Comment by feepingcreature on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-16T17:45:57.196Z · score: 31 (9 votes) · LW · GW

I would actually not be able to think in common or public spaces with that ambient level of physical threat.

As I understand the argument, the claim is that you (or rather, the reference class of people who feel like you) only react like that because you are hypersensitized to harmless (i.e. with a short physiological return to baseline) threats due to lack of exposure.

If I am around a spider, I am in a state of mild to severe panic, depending on size. But this is not a fact about the inherent horribleness of spiders, but about my phobic mindset, and attempting to make the world spider-free would be a completely undue cost compared to treating my phobia.

To me, the question with Punch Bug is whether the suffering imposed on people who are naturally ill-suited to mild violence outweighs the suffering caused by lack of physicality and possible neuroticism (?) due to underexposure in people well-suited to mild violence.

Comment by feepingcreature on The Craft & The Community - A Post-Mortem & Resurrection · 2018-04-26T17:36:41.676Z · score: 3 (2 votes) · LW · GW

That's really shitty.

Comment by FeepingCreature on [deleted post] 2018-03-21T18:49:27.819Z

Yeah but you can't derive fault from property, because by your own admission your model makes no claim of fault. At most you can say that Alex is the immediate causal source of the problem.

Comment by feepingcreature on Hero Licensing · 2017-11-25T00:45:51.979Z · score: 5 (3 votes) · LW · GW

One thing I think is probably true: unless you're unusually competent or unusually driven, you should look for areas where you can "charge ahead with reckless abandon", and you should try to cultivate a sense of quality in those areas, possibly by transfer learning; then, having done so, you can exploit your ability to charge ahead in that direction to effectively build skill. In this model, directions to charge in are not something you can reasonably choose, but rather become available to you through chance. If this holds, not everyone can be "Eliezer writing books on rationality", but I think a lot more people can be "Eliezer writing HPMoR". I don't think those things are unknowable from the outside view; I think you just look for "has a good sense of quality" and "is charging recklessly ahead". At least for learnable skills, I believe that consumptive volume plus self-criticism plus productive volume = quality.

(Forcing yourself to charge in an arbitrary direction is, I suspect, a field that's genuinely inexhaustible because if you could repeatably do that, you'd own the self-help sector.)

Comment by FeepingCreature on [deleted post] 2017-05-31T11:41:37.155Z

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like.

Yeah, but exposure therapy doesn't work like that. If people are too sensitive, you can't just rub their faces in the thing they're sensitive about and expect them to change. In fact, what you'd want in order to desensitize people is the exact opposite - really tight conversation norms that still let people push slightly outside their comfort zone.

Comment by feepingcreature on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-16T01:44:21.955Z · score: 0 (0 votes) · LW · GW

Azathoth, check.

Comment by feepingcreature on CFAR’s new focus, and AI Safety · 2016-12-03T13:58:39.397Z · score: 6 (6 votes) · LW · GW

I shall preface by saying that I am neither a rationalist nor an aspiring rationalist. Instead, I would classify myself as a "rationality consumer" - I enjoy debating philosophy and reading good competence/insight porn. My life is good enough that I don't anticipate much subjective value from optimizing my decisionmaking.

I don't know how representative I am. But I think if you want to reach "people who have something to protect" you need different approaches than for "people who like competence porn", and I think that, while a site like LW can serve both groups, we are to some extent running into the issue that our population is largely the latter rather than the former - people admire Gwern, but who wants to be Gwern? Who wants to be like Eliezer or lukeprog? We may not want leaders, but we don't even have heroes.

I think what's possibly missing, and this is especially relevant in the case of CFAR, is a solid, empirical, visceral case for the benefit of putting the techniques into action. At the risk of this being branded as outreach, and at the very real risk of significantly skewing their post-workshop stats gathering, CFAR should possibly put more effort into documenting stories of success from applying the techniques. I think the main focus of research should be full System-1 integration, not just for the techniques themselves but also for CFAR's advertisement. I believe it's possible to do this responsibly if one combines it with transparency and System-2-relevant statistics. Contingent, of course, on CFAR delivering proportionate value.

I realize that there is a chicken-and-egg problem here, where for reasons of honesty you want to use System-1-appealing techniques that only work if the case is solid - and evaluating whether the case is solid is exactly the thing that System-1 is traditionally bad at! I'm not sure how to solve that, but I think it needs to be solved. To my intuition, rationality won't take off until it's value-positive for S1 as well as S2. If you have something to protect you can push against S1 in the short-term, but the default engagement must be one of playful ease if you want to capture people in a state of idle interest.

Comment by feepingcreature on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T12:28:47.062Z · score: 1 (1 votes) · LW · GW

Would you use the LW comments section if it was embeddable, like Disqus is?

Comment by feepingcreature on Identity map · 2016-09-09T09:39:40.120Z · score: 0 (0 votes) · LW · GW

The past doesn't exist either.

Comment by feepingcreature on If MWI is correct, should we expect to experience Quantum Torment? · 2016-07-25T08:04:41.352Z · score: 0 (0 votes) · LW · GW

"Hell of a scary afterlife you got here, missy."

! ! !

Be honest. Are you prescient? And are you using your eldritch powers to troll us?

Comment by feepingcreature on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-27T22:42:07.928Z · score: 1 (1 votes) · LW · GW

Tegmark 4 is not related to quantum physics. Quantum physics does not give an avenue for rescue simulations; in fact, it makes them harder.

As a simulationist, you can somewhat salvage traditional notions of fear if you retreat into a full-on absurdist framework where the point of your existence is to give a good showing to the simulating universes; alternately, risk avoidance is a good Schelling point for a high score. Furthermore, no matter how much utility you will be able to attain in Simulationist Heaven, this is your single shot to attain utility on Earth, and you shouldn't waste it.

It does take the sting off death though, and may well be maladaptive in that sense. That said - it seems plausible a lot of simulating universes would end up with a "don't rescue suicides" policy, purely out of a TDT desire to avoid the infinite-suicidal-regress loop.

I am continuously amused by how Catholic this cosmology ends up being, by sheer logic.

Comment by feepingcreature on Why CFAR? The view from 2015 · 2015-12-25T00:55:13.243Z · score: 0 (2 votes) · LW · GW

book

Basically, systems that can improve from damage.

Comment by feepingcreature on Leaving LessWrong for a more rational life · 2015-05-22T14:49:03.996Z · score: 0 (0 votes) · LW · GW

Not sure. Suspect nobody knows, but seems possible?

I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own "universal" values.

Comment by feepingcreature on Leaving LessWrong for a more rational life · 2015-05-22T11:06:03.373Z · score: 9 (9 votes) · LW · GW

Intelligence to what purpose?

Nobody's saying AI will be human without humor, joy, etc. The point is AI will be dangerous, because it'll have those aspects of intelligence that make us powerful, without those that make us nice. Like, that's basically the point of worrying about UFAI.

Comment by feepingcreature on The Hardcore AI Box Experiment · 2015-04-03T22:37:11.517Z · score: 1 (1 votes) · LW · GW

I suspect that seeing the logs would have made Eliezer seem like a horrible human being. Most people who hear of AI Box imagine a convincing argument, when to me it seems more plausible to exploit issues in people's sense of narrative or emotion.

Comment by feepingcreature on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-17T08:12:37.803Z · score: 0 (0 votes) · LW · GW

Unrelated conclusion: fursonas are HP canon.

Comment by feepingcreature on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-17T08:09:14.288Z · score: 2 (2 votes) · LW · GW

I'm still really curious how the Deathly Hallows are going to tie into this.

Okay. Hm. I think maybe you can't transfigure Hermione into Hermione if you don't have a true image of what Hermione was like. But if you had the Resurrection Stone, maybe you could use it to create a true image to work from?

No idea about the wand/cloak.

Comment by feepingcreature on Rationality Quotes Thread February 2015 · 2015-02-07T20:00:20.120Z · score: 2 (2 votes) · LW · GW

Haha. True!

Comment by feepingcreature on [LINK] Wait But Why - The AI Revolution Part 2 · 2015-02-07T17:18:18.193Z · score: 0 (0 votes) · LW · GW

I think the idea is, you need to solve the wireheading problem for any sort of self-improving AI. You don't have an AI catastrophe without that, because you don't have an AI without that (at least not for long).

Comment by feepingcreature on [LINK] Wait But Why - The AI Revolution Part 2 · 2015-02-06T13:37:24.037Z · score: 0 (0 votes) · LW · GW

It wouldn't.

But I think this is such a basic failure mechanism that I don't believe an AI could get to superintelligence without somehow valuing the accuracy and completeness of its model.

Solving this problem - somehow! - is part of the "normal" development of any self-improving AI.

Though note that a reward maximizing AI could still be an existential risk by virtue of turning the entire universe into a busy-beaver counter for its reward. Though this presumes it can't just set reward to float.infinity.

Comment by feepingcreature on [LINK] Wait But Why - The AI Revolution Part 2 · 2015-02-06T13:30:13.679Z · score: 1 (1 votes) · LW · GW

whatever terminal goal you've given it isn't actually terminal.

This is a contradiction in terms.

If you have given it a terminal goal, that goal is now a terminal goal for the AI.

You may not have intended it to be a terminal goal for the AI, but the AI cares about that less than it does about its terminal goal. Because it's a terminal goal.

If the AI could realize that its terminal goal wasn't actually a terminal goal, all it'd mean would be that you failed to make it a terminal goal for the AI.

And yeah, reinforcement based AIs have flexible goals. That doesn't mean they have flexible terminal goals, but that they have a single terminal goal, that being "maximize reward". A reinforcement AI changing its terminal goal would be like a reinforcement AI learning to seek out the absence of reward.

Comment by feepingcreature on Rationality Quotes Thread February 2015 · 2015-02-06T13:25:13.236Z · score: 3 (5 votes) · LW · GW

Yeah but it's also easy to falsely label a genuine problem as "practically already solved". The proof is in the pudding.

The next day, the novice approached Ougi and related the events, and said, "Master, I am constantly consumed by worry that this is all really a cult, and that your teachings are only dogma." Ougi replied, "If you find a hammer lying in the road and sell it, you may ask a low price or a high one. But if you keep the hammer and use it to drive nails, who can doubt its worth?"

--Two Cult Koans

Conversely, to show the worth of clarity you actually have to go drive some nails with it.

Comment by feepingcreature on Rationality Quotes Thread February 2015 · 2015-02-01T20:22:02.364Z · score: 6 (6 votes) · LW · GW

I need to stop being surprised at how many problems can be solved with clarity alone.

Note to Scott: a problem only counts as solved when it's actually gone.

Comment by feepingcreature on [LINK] The P + epsilon Attack (Precommitment in cryptoeconomics) · 2015-01-29T11:10:02.946Z · score: 1 (1 votes) · LW · GW

Weird question: superrationally speaking, wouldn't the "correct" strategy be to switch to B with 0.49 probability? (Or with however much is needed to ensure that if everybody does this, A probably still wins)

[edit] Hm. If B wins, this strategy halves the expected payoff. So you'd have to account for the possibility of B winning accidentally. Seems to depend on the size of the player base - the larger it is, the closer you can drive your probability to 0.5? (in the limit, 0.5 − ε?) Not sure. I guess it depends on the size of the attacker's epsilon as well.

I'm sure there's some elegant formula here, but I have no idea what it is.
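A quick brute-force sketch (my own toy model in Python, ignoring the attacker's own votes and assuming ties go to A): compute the chance that B accidentally gets a majority when each of n players independently switches with probability q.

```python
from math import lgamma, log, exp

def prob_b_wins(n, q):
    """P(a strict majority of n voters picks B), each independently with prob q.
    Summed in log space so large n doesn't overflow."""
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                   + k * log(q) + (n - k) * log(1 - q))
        total += exp(log_pmf)
    return total

# The bigger the player base, the closer q can sit to 0.5 while keeping
# the chance of B winning by accident small.
for n in (11, 101, 1001, 10001):
    print(n, round(prob_b_wins(n, 0.49), 3))
```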

Comment by feepingcreature on ... And Everyone Loses Their Minds · 2015-01-23T09:07:59.528Z · score: 0 (0 votes) · LW · GW

I think what Nietzsche is saying is that there doesn't seem any point to this society.

Comment by feepingcreature on 2014 Survey Results · 2015-01-05T22:53:19.944Z · score: 0 (0 votes) · LW · GW

I probably used the wrong word; rather, they don't diverge, they end up looking the same. If the initial state is the same, and the physics are the same, then the calculation will likewise end up the same. In that sense, every interaction by simulation Gods with the sim increases the bit count of the description of the world you find yourself in. (Unless the world of our simulation God is so much simpler that it's easier to describe our world by looking at their world. But that seems implausible.)

Comment by feepingcreature on 2014 Survey Results · 2015-01-04T16:34:28.341Z · score: 24 (26 votes) · LW · GW

This definitely belongs on the next survey!

Why do you read LessWrong? [ ] Rationality improvement [ ] Insight Porn [ ] Geek Social Fuzzies [ ] Self-Help Fuzzies [ ] Self-Help Utilons [ ] I enjoy reading the posts

Comment by feepingcreature on 2014 Survey Results · 2015-01-04T16:33:01.608Z · score: 0 (0 votes) · LW · GW

If there's one simulation, there are many simulations. Any given "simulation God" can only interfere with their own simulation. Interfered-with simulations diverge, not-interfered-with simulations converge. Thus, at any given point, I should expect to be in the not-interfered-with simulation. "God", if you can call it that, but not "Supernatural" because this prime mover cannot affect the world.

Comment by feepingcreature on SciAm article about rationality corresponding only weakly with IQ · 2014-12-29T06:13:43.709Z · score: 3 (3 votes) · LW · GW

Haha. The second I read the first sentence of that bit in the article I knew my mistake.

Comment by feepingcreature on 3-day Solstice in Leipzig, Germany: small, nice, very low cost, includes accommodation, 19th-21st Dec · 2014-12-10T13:42:48.400Z · score: 0 (0 votes) · LW · GW

(Count me under "sleeping bag"!)

Comment by feepingcreature on 3-day Solstice in Leipzig, Germany: small, nice, very low cost, includes accommodation, 19th-21st Dec · 2014-12-06T13:29:00.935Z · score: 1 (1 votes) · LW · GW

Hi, I'd like to come as well if you still have places!

By the way, if you still have spots, you should maybe post this again now that we're a bit closer to the actual date. I think it was posted somewhat early, which might mean people saw it, wanted to attend but didn't want to commit yet, and then forgot about it.

Also maybe message a moderator to get it listed as a meetup.

Comment by feepingcreature on Link: Elon Musk wants gov't oversight for AI · 2014-10-29T14:18:23.350Z · score: 3 (3 votes) · LW · GW

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

Note: given really, really large computational resources, an AI can always "break out by breaking in": generate some physical laws ordered by complexity, look at what sort of intelligent life arises in those cosmologies, craft an attack that works against that life on the assumption that it's running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little, depending on how strongly our minds are determined by our physics.

Comment by feepingcreature on LessWrong's attitude towards AI research · 2014-09-22T16:13:52.251Z · score: 1 (1 votes) · LW · GW

The question of "what are the right safety goals" is what FAI research is all about.

Comment by feepingcreature on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-11T01:13:22.545Z · score: 1 (3 votes) · LW · GW

Eliezer's essay looks at humanism, looks at the reasons for it, and then argues that those reasons apply to transhumanism.

Eliezer's essay then makes the case that transhumanism is preferable because it lacks special rules.

By analogy: "Love is good. Isolation is bad. If two people are in love, they can marry. It's that simple. You don't have to look at anybody's gender."

Elegant program designs imply elegant (Occam!) rules.

Comment by feepingcreature on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-09-07T10:42:43.884Z · score: -1 (1 votes) · LW · GW

I feel like, charitably, another explanation would just be that it's simply better phrasing than people come up with on their own.

but we're talking about one-draft daily blog posts here.

So? Fast doesn't imply bad. Quite the opposite: fast work with a short feedback cycle is one of the best ways to get really good.

Comment by feepingcreature on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-06T20:56:37.727Z · score: 2 (2 votes) · LW · GW

I was trying to draw a comparison to Transhumanism as Simplified Humanism - Universal Marriage as simplified Hetero Marriage.

Comment by feepingcreature on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-06T10:59:23.167Z · score: 1 (5 votes) · LW · GW

Gay marriage is a straightforward simplification of marriage.

Comment by feepingcreature on Steelmanning MIRI critics · 2014-09-05T21:17:55.318Z · score: 1 (1 votes) · LW · GW

An existence proof is very different from a constructive proof!

Quite so. However, it does give reason to hope.

The question is, will this take decades or centuries?

If you look at Moore's Law coming to a close in silicon around 2020, while we're still so far away from a human-brain-equivalent computer, it's easy to get disheartened. I think it's important to remember that it's at least possible - and if nature could happen upon it...

Comment by feepingcreature on Steelmanning MIRI critics · 2014-08-31T18:38:51.193Z · score: 1 (1 votes) · LW · GW

(1) Moore's law seems to be slowing - this could be a speedbump before the next paradigm takes over, or it could be the start of stagnation, in which case the singularity is postponed.

The pithy one-liner comeback to this is that the human brain is an existence proof for a computer the size of the human brain with the performance of the human brain, and it seems implausible that nature arrived at the optimal basic design for neurons on (basically) its first try.

Comment by feepingcreature on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-31T12:34:35.840Z · score: 7 (7 votes) · LW · GW

[x] other reasons

What if we're the first in a winner-takes-all scenario? If the first mover prevents (or vastly reduces the likelihood of) the evolution of later intelligent life, intelligent life should not be surprised to find itself the first intelligent species to evolve.

Write down your basic realizations as a matter of habit. Share this file with the world.

2014-01-17T12:57:04.455Z · score: 14 (17 votes)

Forked Russian Roulette and Anticipation of Survival

2012-04-06T03:57:08.683Z · score: 7 (12 votes)