Comments

Comment by Halfwitz on Online Fun LW/SSC Meetup · 2020-03-24T23:08:01.012Z · LW · GW

I would be interested in organizing this if no one else will. I would like the tips, though.

Comment by Halfwitz on Interview on IQ, genes, and genetic engineering with expert (Hsu) · 2017-05-29T18:53:16.512Z · LW · GW

Not really. It may be worth listening to while washing dishes or something, but it's nothing essential.

Comment by Halfwitz on DEAD THREAD · 2017-02-25T02:02:03.835Z · LW · GW

If people agree the test is fair and the randomization is fair, I'm not convinced it would be unstable after a generation or two. Pure sortition does retain that advantage; the IQ filter reduces it, but the filter could be adjusted to increase stability. For example, say it admitted only those above the 50th percentile. At that level, coordination against the system would be difficult, as no one would want to publicly admit they weren't eligible for sortition. Perhaps this would remain true if only the 90th percentile were selected, if not the 99th.

Comment by Halfwitz on Open thread, Dec. 26, 2016 - Jan. 1, 2017 · 2016-12-29T02:56:49.334Z · LW · GW

If anyone is interested in playing an AI box experiment game, I'd be interested in being the gatekeeper.

Comment by Halfwitz on Reframing Average Utilitarianism · 2016-12-10T17:27:08.840Z · LW · GW

1)

Just to be sure I'm understanding you correctly: you're saying average utilitarianism prescribes creating lives that are not worth living, so long as they are less horrible than average. This does seem weird. Creating a life that is not worth living should be proscribed by any sane rule!

2)

I don't find this objection super compelling. Wasn't average utilitarianism proposed precisely because people find mere addition unattractive?

3)

Another fine point. People with lives worth living shouldn't feel compelled to commit suicide when they learn they are dragging down the average. I believe average preference utilitarianism is a patch for this, though.

I can think of various patches, but I should probably read more on the topic first. Do you have any recommendations for a textbook or other book on population ethics?

Comment by Halfwitz on MIRI's 2015 Winter Fundraiser! · 2015-12-09T21:46:22.296Z · LW · GW

200, or 400 if you count matching.

Comment by Halfwitz on August 2015 Media Thread · 2015-08-04T04:05:22.068Z · LW · GW

I watched it based on this recommendation. I'll second it - great fun, great animation (though I don't mind CGI). I thought I detected some Hannu Rajaniemi influences, too.

Gubhtu V guvax vg fubhyq unir raqrq jvgu gur gjb wbvavat gur iblntr. Qvatb'f bowrpgvbaf gb fvzhyngvba jrer cheryl n znggre bs gur cbyvgvpf naq Natryn pyrneyl cersreerq yvsr nf na rz. Gur erny jbeyq ybbxrq cerggl pehzzl.

Comment by Halfwitz on Stupid Questions August 2015 · 2015-08-03T05:02:04.484Z · LW · GW

I imagine a lot of the selection was indirect selection for neoteny. I think it would be much, much harder to select for domestication in octopuses, as they do not raise their young.

Comment by Halfwitz on August 2015 Media Thread · 2015-08-02T21:03:16.029Z · LW · GW

I've been looking for a good anime/manga podcast. The ones I've found have been okay, but not exactly what I'm hoping for. Anyone know of one?

Comment by Halfwitz on August 2015 Media Thread · 2015-08-02T01:19:40.013Z · LW · GW

This made me laugh: http://www.clickhole.com/article/5-absolutely-stunning-examples-mathematics-nature-2081

Comment by Halfwitz on August 2015 Media Thread · 2015-08-01T18:35:29.935Z · LW · GW

I agree, there is some magic to NGE that RahXephon doesn't have - but I'm not sure how much of that is because I saw NGE first and it was the first anime I ever watched. I love Neuromancer, but much of my love for it comes from the fact that it was the first science fiction novel I ever read. I had no antibodies. If I had read Vinge first, it's likely I wouldn't have been too impressed with Neuromancer, which has as many flaws as NGE.

I can't justify giving NGE a higher score for the reasons you described, but I do slightly prefer it - though less so after re-watching RahXephon.

Comment by Halfwitz on August 2015 Media Thread · 2015-08-01T16:22:35.043Z · LW · GW

Read The Martian - not bad, I guess, but a sort of celebration of terrible ethics.

Comment by Halfwitz on August 2015 Media Thread · 2015-08-01T15:55:40.843Z · LW · GW

Watched a lot of robot anime last month.

Rewatched RahXephon. I'd tie it with NGE at 9/10. I especially liked the first two episodes. I thought the romance in it was quite good, too. The animation goes off-model from time to time, but it's serviceable. The music is wonderful, especially the closing theme: https://www.youtube.com/watch?v=8aTUy44JA8w

I also watched Eureka Seven and found it vastly inferior to RahXephon - maybe 5/10 and that's pushing it.

I've been enjoying Knights of Sidonia [slight spoilers] - a half-and-half mix of neat science fiction and annoying fan service. There was an interesting romance in the first season (and one wonderful scene of a couple stranded in space) but it's pretty ridiculous how every female (including the eldritch monstrosity) loves the oblivious protagonist. Also, Izana has the potential to be super interesting but zer potential is mostly wasted.

As for the animation, I know it is controversial, but I think it's quite good. It's also obviously the future of the medium - people will get used to it. I'll give it 7/10.

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-13T02:32:24.104Z · LW · GW

I'm with Yvain on measure; I just can't bring myself to care.

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T04:22:00.363Z · LW · GW

I'm confused. What were you referring to when you said, "on this assumption"?

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T04:04:21.357Z · LW · GW

If you make Egan's assumption, I think it is an extremely strong argument.

Why don't you buy it?

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T03:59:50.515Z · LW · GW

> It isn't me at all anymore.

There will be a "thread" of subjective experience that identifies with the state of you now, no matter what insult or degeneration you experience. I assumed you were pro-teleporter. If you're not, why are you even worried about dust theory?

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T03:44:26.562Z · LW · GW

> Well, it might be that such observers are less 'dense' than ones in a stable universe

In that case, most of your measure is in stable universes and dust theory isn't anything to worry about.

But that can't be right: isn't the whole point of dust theory that basically any set of relations can be construed as a computation implementing your subjective experience, and that this experience is self-justifying? If that's the case, the majority of your measure must be dust.

Dust theory has a weird pulled-up-by-your-own-bootstraps taste to it, and I have a strong aversion to regarding it as true. Egan's argument against it is the best I can find; it's not entirely satisfying, but it should be sufficiently comforting to let you sleep.

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T03:27:53.781Z · LW · GW

That doesn't seem very airtight. There is still a world where a "you" survives or avoids all forms of degradation. It doesn't matter if it's non-binary. There are worlds where you never crossed the street without looking, and very, very, very, very improbable worlds where you heal progressively. It's probably not pleasant, but it is immortality.

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T03:20:38.916Z · LW · GW

Dust theory is beautiful and terrifying, but what do you say to Egan's argument against it: http://gregegan.customer.netspace.net.au/PERMUTATION/FAQ/FAQ.html

Comment by Halfwitz on I need a protocol for dangerous or disconcerting ideas. · 2015-07-12T03:18:58.500Z · LW · GW

Do you have a link to Max Tegmark's rebuttal? What I've read so far seemed like a confused dodge.

Comment by Halfwitz on Open Thread, May 25 - May 31, 2015 · 2015-05-27T02:28:14.615Z · LW · GW

If you're interested in robotics, this video is a must-see: https://youtu.be/EtMyH_--vnU?t=32m34s

I have to say I'm baffled; I was genuinely shocked watching it. Its speed is incredible. I remember writing off general-purpose robots after closely following Willow Garage's work - and that was only three years ago.

Comment by Halfwitz on Open Thread, May 25 - May 31, 2015 · 2015-05-25T01:47:30.268Z · LW · GW

This forum doesn't allow you to comment if you have <2 karma. How does one get their first 2 karma then?

Comment by Halfwitz on Open Thread, May 18 - May 24, 2015 · 2015-05-18T02:10:54.376Z · LW · GW

I doubt there's much to be done. I wouldn't be surprised if MIRI shut down LessWrong soon. It's something of a status drain because of the whole Roko thing, and no one seems to use it anymore. Even the open threads seem to be losing steam.

We still get most of the former value from Slate Star Codex, Gwern.net, and the Tumblr scene. Even for rationality, I'm not sure LessWrong is needed now that we have CFAR.

Comment by Halfwitz on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-12T16:16:12.937Z · LW · GW

If you’re looking for a useful major, computer science is the obvious choice. I also think statistics majors are undersupplied, though I have only anecdotal data there. I know a few stats majors (none overly clever) who have done far more with the degree than I would have guessed as an undergraduate. But this could have changed since, markets being anti-inductive. If your goal is effective egotism, you’re probably not in the best major. The best way to pursue your goal is probably to follow the advice of effective altruists and then donate all the money to your future self, via a Vanguard fund. If this sounds too evil, paying a small tithe, say 1%, would more than make up for it at a manageable cost.

Comment by Halfwitz on January 2015 Media Thread · 2015-01-06T20:50:45.904Z · LW · GW

After reading a biography of Hugh Everett, I checked out his son's music and was pleasantly surprised. I especially liked this one: https://www.youtube.com/watch?v=ZYvj7oeIMCc

Comment by Halfwitz on December 2014 Media Thread · 2014-12-12T03:48:41.578Z · LW · GW

Good rec. And not just for the education - the whole show is very charming, though I agree it's nothing too special.

Comment by Halfwitz on Superintelligence 13: Capability control methods · 2014-12-10T01:10:25.545Z · LW · GW

Fuzzy metrics?

Comment by Halfwitz on December 2014 Media Thread · 2014-12-05T16:40:27.370Z · LW · GW

> If you liked the visual style,

I liked it, but I think the static textures should have been used with a bit more subtlety.

> Mononoke and Ayakashi.

I'll check those out; looks like they're both on Crunchyroll.

Comment by Halfwitz on December 2014 Media Thread · 2014-12-04T17:06:59.820Z · LW · GW

Finished it last night.

Gehr, V gubhtug pnfgvat Rqjneq nf zber ivyynvabhf guna pnaaba jnf vafcverq - gur snpg gung ur jnf cbffrffrq fbeg bs ehvarq gung.

Still one of the better anime I've seen recently, and probably the best adaptation of The Count of Monte Cristo I've ever seen - though I haven't seen many.

Now I need a new anime.

Comment by Halfwitz on December 2014 Media Thread · 2014-12-03T15:54:18.776Z · LW · GW

I've been enjoying Gankutsuou: The Count of Monte Cristo.

Comment by Halfwitz on December 2014 Bragging Thread · 2014-12-03T03:53:11.850Z · LW · GW

After all Eliezer's warnings, you constructed a superintelligence in your own house.

Comment by Halfwitz on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2014-11-30T05:42:34.041Z · LW · GW

> Or is it the apparent resemblance to Pascal's wager?

That, and believing in hell is lower status than believing in heaven. Cryonics pattern-matches to a belief in a better life after death; the basilisk, to hell.

Comment by Halfwitz on [Link] Will Superintelligent Machines Destroy Humanity? · 2014-11-28T15:26:34.650Z · LW · GW

I remain impressed by how much awareness one high-status academic can raise by writing a book.

Comment by Halfwitz on Breaking the vicious cycle · 2014-11-24T17:31:01.505Z · LW · GW

Those two quotes that are dated before 2004 are the least outrageous.

This is the most outrageous one to me:

> I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.

And it's clearly the exact opposite of what present-day Eliezer believes.

Comment by Halfwitz on Breaking the vicious cycle · 2014-11-24T16:57:20.267Z · LW · GW

What bothers me are the Usenet and mailing-list quotes (they are equivalent to passing notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date they were written. And if he has explicitly disclaimed a statement, perhaps that should be mentioned, too.

Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you're criticizing someone who no longer exists.

Also, the page where you try to diagnose him with narcissism just seems mean.

Comment by Halfwitz on Breaking the vicious cycle · 2014-11-24T15:20:27.366Z · LW · GW

As far as I can tell, Yudkowsky basically grew up on the internet. I think it is more as if you went through all the copies of Palin's school newspaper, picked up some notes she passed around in class, and then published the most outrageous things she said in a way that implied they were written recently. I think this goes against some notion of journalistic tact.

Comment by Halfwitz on Breaking the vicious cycle · 2014-11-24T00:17:40.283Z · LW · GW

> For the record, I genuinely object to being thought of as a "highly competent CEO."

But that's exactly what the Dunning-Kruger effect would lead us to expect a highly competent CEO to say! /s

> non-natural CEO working hard and learning fast and picking up lots of low-hanging fruit but also making lots of mistakes along the way because he had no prior executive experience

To be honest, I didn't mean much by it. Just that MIRI has been more impressive lately, and presumably a good portion of this is due to your leadership.

Comment by Halfwitz on Breaking the vicious cycle · 2014-11-23T22:24:07.660Z · LW · GW

To be honest, I had you pegged as being stuck in a partisan spiral. The fact that you are willing to do this is pretty cool. Have some utils on the house. I don’t know if officially responding to your blog is worth MIRI’s time; it would imply some sort of status equivalence.

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors. Mining someone’s juvenilia for outrageous statements is not productive – I mean, he was 16 when he wrote some of the stuff you quote. I would remove those pages. Same with the Usenet stuff – I know it was posted publicly, but all these years later it feels to me like furtively recorded conversations. Stick to arguments against positions MIRI and Yudkowsky currently hold. Personally, I’ve moved from highly skeptical of MIRI to moderately approving. I made this comment a year ago:

> The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.

And MIRI has stayed on course and is becoming a productive think tank with three full-time researchers and, it seems to me, a highly competent CEO. It is a very different organization now than the one you started out criticizing.

Comment by Halfwitz on Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild · 2014-11-19T20:38:11.648Z · LW · GW

I think "P!=NPC" would have been better.

Comment by Halfwitz on Neo-reactionaries, why are you neo-reactionary? · 2014-11-19T18:24:38.350Z · LW · GW

So the claim isn’t so much that traditionalism is great, only that the Enlightenment is worse than traditionalism after controlling for technology? I was thinking of neoreactionaries as deformed utopians, but the tone is more like, “let’s reset social ‘progress’ and then very carefully consider positive proposals.”

Comment by Halfwitz on Neo-reactionaries, why are you neo-reactionary? · 2014-11-19T17:55:00.303Z · LW · GW

That makes sense, but now that I think about it, I don’t find this claim particularly neoreactionary: Enlightenment memes induce a sort of agnosia that prevents the rational design of non-Enlightenment social structures. Treating this agnosia would increase the number of possible social structures we are able to consider, and the chances that we will be able to design something better.

What I see proposed are specific forms of monarchy or corporate-like governmental structures. More exotic proposals like futarchy and liquid democracy are dismissed, at least by Moldbug. So pre-Enlightenment (or maybe anti-Enlightenment) does feel like a better label to my non-expert ears.

Comment by Halfwitz on Neo-reactionaries, why are you neo-reactionary? · 2014-11-19T16:16:00.033Z · LW · GW

> Non-Enlightenment principles

Beware of non-apples

Comment by Halfwitz on Link: Elon Musk wants gov't oversight for AI · 2014-10-30T02:41:20.483Z · LW · GW

You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn’t strike me as particularly out there.

The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs].

These seem like conservative conclusions derived from conservative assumptions. You don’t even have to buy recursive self-improvement at all.

Ironically, I think the blog you posted was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn’t worry about global warming until we’ve tested our models on several identical copies of Earth. He thinks if it’s not physics, then it’s tarot.

I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don’t go throwing out the apocalypse with the bathwater. Isn’t it possible that MIRI is a dishonest cult and AI is extremely likely to kill us all?

Comment by Halfwitz on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T16:21:13.702Z · LW · GW

It takes years of study to write as poorly as he does.

Comment by Halfwitz on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T15:23:38.780Z · LW · GW

> Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars

You’re confusing people’s goals with their expectations.

> The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time to take seriously.

Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net, way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn’t take that as evidence against the claims in that paper. Do you have some convincing counterarguments?

Comment by Halfwitz on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T14:39:08.514Z · LW · GW

Do not spam high-status people. That's a recipe for an ugh field. I'm pretty confident that Elon Musk is capable of navigating this terrain, including finding a competent guide if needed. He's obviously read extensively on the topic, something that’s not possible to do without discovering MIRI and its proponents.

Comment by Halfwitz on Podcasts? · 2014-10-26T00:18:01.453Z · LW · GW

Singularity 1 on 1 is a podcast that has interviewed people associated with this forum, like Lukeprog, Robin Hanson, and James Miller. However, there seems to be a lot of inferential distance between the host and his guests. I think someone like James Miller or Yvain would make a better host for this type of podcast.

Side note: if you find podcasts almost unlistenable at normal speed, you should use Overcast, which has the best speed-up effects of any app I've tried.

Comment by Halfwitz on May 2014 Media Thread · 2014-05-06T17:54:57.031Z · LW · GW

I just watched Tim's Vermeer. It was a very good, fun documentary.

Comment by Halfwitz on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2014-04-26T18:15:27.995Z · LW · GW

Good call here, btw. I've been going through random Reddit comments on posts that link to LessWrong (http://www.reddit.com/domain/lesswrong.com), discarding threads on /r/hpmor, /r/lesswrong, and other affiliated subs. The basilisk is brought up far more than I expected – and widely mocked. The same seems to happen on Hacker News, where LessWrong was once quite popular. I wasn’t around when the incident occurred, but I’m surprised by how effective it’s been at making LessWrong low status – and by its odd persistence years after its creation. Unless high-IQ people are less likely to dismiss LessWrong after learning of the basilisk, it has likely significantly reduced the effectiveness of LessWrong as a farm league for MIRI.

It really is amazingly well-optimized for discrediting MIRI and its goals, especially when amplified by censorship – which is so obviously negatively useful.

I wonder if EY actually thinks the basilisk idea is both correct and unavoidable. That would explain things.