Comments
I would be interested in organizing this if no one else will, and I would like the tips.
Not really. May be worth listening to while washing dishes or something, but nothing essential.
If people agree the test is fair and the randomization is fair, I'm not convinced it would be unstable after a generation or two. Pure sortition does retain that advantage; the IQ filter reduces it, but the filter could be adjusted to increase stability. For example, say it took only the 50th percentile and above. At that level, coordination against the system would be difficult, as no one would want to publicly admit they weren't eligible for sortition. Perhaps this would remain true if only the 90th percentile were selected, if not the 99th.
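A toy sketch of the filter in Python (hypothetical scores and thresholds, just to make the pool sizes concrete):

    import random

    def eligible_pool(scores, percentile):
        # Members at or above the given percentile of test scores are
        # eligible for the sortition draw. Toy model only.
        cutoff = sorted(scores)[int(len(scores) * percentile / 100)]
        return [s for s in scores if s >= cutoff]

    # Hypothetical population: 10,000 test scores, mean 100, sd 15.
    scores = [random.gauss(100, 15) for _ in range(10_000)]

    for p in (50, 90, 99):
        pool = eligible_pool(scores, p)
        drawn = random.choice(pool)  # the actual sortition draw
        print(f"{p}th percentile filter: {len(pool)} eligible, drew {drawn:.1f}")

At the 50th percentile, half the population stays eligible; at the 99th, only the top 1% does, which is where I'd expect the coordination pressure to change.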
If anyone is interested in playing an AI box experiment game, I'd be interested in being the gatekeeper.
1) Just to be sure I'm understanding you correctly: you're saying that average utilitarianism prescribes creating lives that are not worth living, so long as they are less horrible than average. This does seem weird. Creating a life that is not worth living should be proscribed by any sane rule!
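To make the weirdness concrete (made-up numbers): suppose five existing people each have utility -10, so the average is -10. Creating a sixth person at -5, a life still not worth living, moves the average to (5 × (-10) + (-5)) / 6 ≈ -9.2, so average utilitarianism scores the addition as an improvement.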
2) I don't find this objection super compelling. Isn't the reason average utilitarianism was proposed in the first place that people find mere addition unattractive?
3) Another fine point. People with lives worth living shouldn't feel obligated to kill themselves when they learn they're dragging down the average. I believe average preference utilitarianism is a patch for this, though.
I can think of various patches, but I should probably read more on the topic first. Do you have any recommendations for a textbook or a book on population ethics?
200, or 400 if you count matching.
I watched it based on this recommendation. I'll second it - great fun, great animation, but I don't mind CGI. I thought I detected some Hannu Rajaniemi influences, too.
Gubhtu V guvax vg fubhyq unir raqrq jvgu gur gjb wbvavat gur iblntr. Qvatb'f bowrpgvbaf gb fvzhyngvba jrer cheryl n znggre bs gur cbyvgvpf naq Natryn pyrneyl cersreerq yvsr nf na rz. Gur erny jbeyq ybbxrq cerggl pehzzl.
I imagine a lot of the selection was indirect selection for neoteny. I think it would be much, much harder to select for domestication in octopi, as they do not raise their young.
I've been looking for a good anime/manga podcast. The ones I've found have been okay, but not exactly what I'm hoping for. Anyone know of one?
This made me laugh: http://www.clickhole.com/article/5-absolutely-stunning-examples-mathematics-nature-2081
I agree, there is some magic to NGE that RahXephon doesn't have - but I'm not sure how much of that is caused by the fact that I saw NGE first and it was the first Anime I ever watched. I love Neuromancer, but much of my love for it comes from the fact that it was the first science fiction novel I ever read. I had no antibodies. If I had read Vinge first, it's likely I wouldn't have been too impressed with Neuromancer, which has as many flaws as NGE.
I can't justify giving NGE a higher score for the reasons you described, but I do slightly prefer it - though less so after re-watching RahXephon.
Read The Martian - not bad I guess, but a sort of celebration of terrible ethics.
Watched a lot of robot anime last month.
Rewatched RahXephon. I'd tie it with NGE at 9/10. I especially liked the first two episodes. I thought the romance in it was quite good, too. The animation goes off-model from time to time, but it's serviceable. The music is wonderful, especially the closing theme https://www.youtube.com/watch?v=8aTUy44JA8w
I also watched Eureka Seven and found it vastly inferior to RahXephon - maybe 5/10 and that's pushing it.
I've been enjoying Knights of Sidonia [slight spoilers] - a half-and-half mix of neat science fiction and annoying fan service. There was an interesting romance in the first season (and one wonderful scene of a couple stranded in space) but it's pretty ridiculous how every female (including the eldritch monstrosity) loves the oblivious protagonist. Also, Izana has the potential to be super interesting but zer potential is mostly wasted.
As for the animation, I know it is controversial, but I think it's quite good. It's also obviously the future of the medium - people will get used to it. I'll give it 7/10.
I'm with Yvain on measure: I just can't bring myself to care.
I'm confused. What were you referring to when you said, "on this assumption"?
If you make Egan's assumption, I think it is an extremely strong argument.
Why don't you buy it?
It isn't me at all anymore.
There will be a "thread" of subjective experience that identifies with the state of you now, no matter what insult or degeneration you experience. I assumed you were pro-teleporter. If you're not, why are you even worried about dust theory?
Well, it might be that such observers are less 'dense' than ones in a stable universe
In that case, most of your measure is in stable universes, and dust theory isn't anything to worry about.
But that can't be the case: isn't the whole point of dust theory that basically any set of relations can be construed as a computation implementing your subjective experience, and that this experience is self-justifying? If so, the majority of your measure must be dust.
Dust theory has a weird pulled-up-by-your-own-bootstraps taste to it, and I have a strong aversion to regarding it as true. Egan's argument against it is the best I can find; it's not entirely satisfying, but it should be sufficiently comforting to let you sleep.
That doesn't seem very airtight. There is still a world where a "you" survives or avoids all forms of degradation. It doesn't matter if it's non-binary. There are worlds where you never crossed the street without looking, and very, very, very, very improbable worlds where you heal progressively. It's probably not pleasant, but it is immortality.
Dust theory is beautiful and terrifying, but what do you say to Egan's argument against it: http://gregegan.customer.netspace.net.au/PERMUTATION/FAQ/FAQ.html
Do you have a link to Max Tegmark's rebuttal? What I've read so far seemed like a confused dodge.
If you're interested in robotics, this video is a must see: https://youtu.be/EtMyH_--vnU?t=32m34s
I have to say I'm baffled; I was genuinely shocked watching the thing. Its speed is incredible. I remember writing off general-purpose robots after closely following Willow Robotics' work, and that was only three years ago. Again, I'm pretty shocked.
This forum doesn't allow you to comment if you have <2 karma. How does one get their first 2 karma then?
I doubt there's much to be done. I wouldn't be surprised if MIRI shut down LessWrong soon. It's something of a status drain because of the whole Roko thing, and no one seems to use it anymore. Even the open threads seem to be losing steam.
We still get most of the former value from SlateStarCodex, Gwern.net, and the tumblr scene. Even for rationality, I'm not sure LessWrong is needed now that we have CFAR.
If you're looking for a useful major, computer science is the obvious choice. I also think statistics majors are undersupplied, though I have only anecdotal data there: I know a few stats majors (none overly clever) who have done far more with the degree than I would have guessed as an undergraduate. This could have changed since, markets being anti-inductive. If your goal is effective egotism, you're probably not in the best major. Probably the best way to pursue that goal is to follow the advice of effective altruists and then donate all the money to your future self, via a Vanguard fund. If this sounds too evil, paying a small tithe, say 1%, would more than make up for it at a manageable cost.
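To put rough numbers on the donate-to-your-future-self idea (the 5% real return and 40-year horizon are assumptions, not forecasts):

    # Toy compound-growth sketch; contributions invested at the start
    # of each year. All figures are hypothetical.
    def future_value(annual_contribution, real_return=0.05, years=40):
        total = 0.0
        for _ in range(years):
            total = (total + annual_contribution) * (1 + real_return)
        return total

    print(future_value(10_000))  # ~1.27M from 400k of contributions

At those assumed rates, the 1% tithe version only costs your future self a proportional slice.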
After reading a biography of Hugh Everett, I checked out his son's music and was pleasantly surprised. I especially liked this one: https://www.youtube.com/watch?v=ZYvj7oeIMCc
Good rec. And not just for the education - the whole show is very charming, though I agree nothing too special.
Fuzzy metrics?
If you liked the visual style,
I liked it, but I think the static textures should have been used with a bit more subtlety.
Mononoke and Ayakashi.
I'll check those out, looks like they're both on Crunchyroll.
Finished it last night.
Gehr, V gubhtug pnfgvat Rqjneq nf zber ivyynvabhf guna pnaaba jnf vafcverq - gur snpg gung ur jnf cbffrffrq fbeg bs ehvarq gung.
Still one of the better animes I've seen recently, and probably the best adaptation of The Count of Monte Cristo I've ever seen - though I haven't seen many.
Now I need a new anime.
I've been enjoying Gankutsuou: The Count of Monte Cristo
After all Eliezer's warnings, you constructed a superintelligence in your own house.
Or is it the apparent resemblance to Pascal's wager?
That, and believing in hell is lower status than believing in heaven. Cryonics pattern-matches to a belief in a better life after death; the basilisk, to hell.
I remain impressed by how much awareness one high-status academic can raise by writing a book.
The two quotes dated before 2004 are the least outrageous.
This is the most outrageous one to me:
I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.
And it's clearly the exact opposite of what present Eliezer believes.
What bothers me are the Usenet and mailing-list quotes (they are equivalent to passed notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date it was written. And if he has explicitly disclaimed a statement, perhaps that should be mentioned, too.
Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you're criticising someone who no longer exists.
Also, the page where you try to diagnose him with narcissism just seems mean.
As far as I can tell, Yudkowsky basically grew up on the internet. It's more like you went through all the copies of Palin's school newspaper, picked up some notes she passed around in class, and then published the most outrageous things she said in a way that implied they were written recently. I think this goes against some notion of journalistic tact.
For the record, I genuinely object to being thought of as a "highly competent CEO."
But that's exactly what the Dunning-Kruger effect would lead us to expect a highly competent CEO to say! /s
non-natural CEO working hard and learning fast and picking up lots of low-hanging fruit but also making lots of mistakes along the way because he had no prior executive experience
To be honest, I didn't mean much by it. Just that MIRI has been more impressive lately, and presumably a good portion of this is due to your leadership.
To be honest, I had you pegged as being stuck in a partisan spiral. The fact that you are willing to do this is pretty cool. Have some utils on the house. I don’t know if officially responding to your blog is worth MIRI’s time; it would imply some sort of status equivalence.
Also, you published some very embarrassing quotes from Yudkowsky. I'm guessing you caused him quite a bit of distress, so he's probably not inclined to do you any favors. Mining someone's juvenilia for outrageous statements is not productive – I mean, he was 16 when he wrote some of the stuff you quote. I would remove those pages. Same with the Usenet stuff – I know it was posted publicly, but all these years later it feels like furtively recorded conversations to me. Stick to arguments against positions MIRI and Yudkowsky currently hold. Personally, I've moved from highly skeptical of MIRI to moderately approving. I made this comment a year ago:
The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.
And MIRI has stayed on course and is becoming a productive think tank with three full-time researchers and, it seems to me, a highly competent CEO. It is a very different organization now than the one you started out criticizing.
I think "P!=NPC" would have been better.
So the claim isn't so much that traditionalism is great, only that the Enlightenment is worse than traditionalism after controlling for technology? I was thinking of neoreactionaries as deformed utopians, but the tone is more like, "let's reset social 'progress' and then very carefully consider positive proposals."
That makes sense, but now that I think about it, I don't find this claim particularly neoreactionary: Enlightenment memes induce a sort of agnosia that prevents the rational design of non-Enlightenment social structures. Treating this agnosia will increase the number of social structures we are able to consider, and the chances that we will be able to design something better.
What I see proposed are specific forms of monarchy or corporate-like governmental structures. More exotic proposals like futarchy and liquid democracy are dismissed, at least by Moldbug. So pre-enlightenment (or maybe anti-enlightenment) does feel like a better label to my non-expert ears.
Non-Enlightenment principles
You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn't strike me as particularly out there.
The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs].
These seem like conservative conclusions derived from conservative assumptions. You don't even have to buy recursive self-improvement at all.
Ironically, I think the blog you posted was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn't worry about global warming until we've tested our models on several identical copies of Earth. He thinks if it's not physics, then it's tarot.
I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don't go throwing out the apocalypse with the bathwater. Isn't it possible that MIRI is a dishonest cult and AI is still extremely likely to kill us all?
It takes years of study to write as poorly as he does.
Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars.
You’re confusing peoples’ goals with their expectations.
The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time to take seriously.
Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net, way before I had even heard of MIRI. Like Malthus's arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn't take that as evidence against the claims in the paper. Do you have some convincing counterarguments?
Do not spam high-status people. That's a recipe for an ugh field. I'm pretty confident that Elon Musk is capable of navigating this terrain, including finding a competent guide if needed. He's obviously read extensively on the topic, something that’s not possible to do without discovering MIRI and its proponents.
Singularity 1 on 1 is a podcast that has interviewed people associated with this forum, like Lukeprog, Robin Hanson, and James Miller. However, there seems to be a lot of inferential distance between the host and his guests. I think someone like James Miller or Yvain would make a better host for this type of podcast.
Side note, if you find podcasts almost unlistenable at normal speed, you should use Overcast, which has the best speed-up effects of any app I've tried.
I just watched Tim's Vermeer. It was a very good, fun documentary.
Good call here, btw. I've been going through random reddit comments on posts that link to LessWrong (http://www.reddit.com/domain/lesswrong.com), discarding threads on /r/hpmor, /r/lesswrong, and other affiliated subs. The basilisk is brought up far more than I expected – and widely mocked. The same seems to happen on Hacker News, where LessWrong was once quite popular. I wasn't around when the incident occurred, but I'm surprised by how effective it's been at making LessWrong low status – and by its odd persistence years after its creation. Unless high-IQ people are less likely to dismiss LessWrong after learning of the basilisk, it has likely significantly reduced the effectiveness of LessWrong as a farm league for MIRI.
It really is amazingly well-optimized for discrediting MIRI and its goals, especially when amplified by censorship – which is so obviously negatively useful.
I wonder if EY actually thinks the basilisk idea is both correct and unavoidable. That would explain things.