Instead of "dumb" or "narrow" I'd say "having a strong comparative advantage in X (versus humans)". E.g. imagine watching evolution and asking "will the first animals that take over the world be able to solve the Riemann hypothesis", and the answer is no because human intelligence, while general, is still pointed more at civilisation-building-style tasks than mathematics.
Similarly, I don't expect any AI which can do a bunch of groundbreaking science to be "narrow" by our current standards, but I do hope that it has a strong comparative disadvantage at taking-over-world-style tasks, compared with doing-science-style tasks.
And that's related to agency, because what we mean by agency is not far off "having a comparative advantage in taking-over-world style tasks".
Now, I expect that at some point, this line of reasoning stops being useful, because your systems are general enough and agentic enough that, even if their comparative advantage isn't taking over the world, they can pretty easily do that anyway. But the question is whether this line of reasoning is still useful for the first systems which can do pivotal task X. Eliezer thinks no, because he considers intelligence and agency to be very strongly linked. I'm less sure, because humans have been evolved really hard to be agentic, so I'd be surprised if you couldn't beat us at a bunch of intellectual tasks while being much less agentic than us.
Side note: I meant "pattern-matching" as a gesture towards "the bit of general intelligence that doesn't require agency" (although in hindsight I can see how this is confusing; I've just made an edit on the ACX comment).

pattern on Why rationalists should care (more) about free software
If you thought the answers in that thread backed you up:
It's a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments are run to test them. But from what I can tell, that work is also incredibly myopic and specific to the details of present-day algorithms, and whether any of it will generalize to systems further down the road is exceedingly unclear.
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn't make it pseudoscience. Falsifiability is the key to demarcation.
That summarizes a few answers.
I agree, I wouldn't consider AI alignment to be scientific either. How is it a "problem" though?

turntrout on TurnTrout's shortform feed
Argument sketch for why boxing is doomed if the agent is perfectly misaligned:
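A toy version of the zero-sum bookkeeping may help; the names and utility values below are illustrative, not from the original comment:

```python
# Toy model: the agent's utility is exactly -1 times ours, so the game
# is zero-sum and any output that raises our expected utility lowers
# the agent's by the same amount. Utilities here are made up.

OUTCOMES = {
    "stay_silent": 0.0,      # our EU if the agent says nothing
    "helpful_output": 1.0,   # our EU if we act on genuinely useful output
    "harmful_output": -1.0,  # our EU if we act on manipulative output
}

def human_eu(action: str) -> float:
    return OUTCOMES[action]

def agent_eu(action: str) -> float:
    # Perfect misalignment: the agent's utility is the negation of ours.
    return -human_eu(action)

# A rational agent picks whatever maximizes its own EU, so it never
# emits output that would help us more than silence would.
best = max(OUTCOMES, key=agent_eu)
print(best)  # -> harmful_output
```

In this toy setup, silence strictly dominates helpful output from the agent's perspective, which is the core of the argument below.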
Consider a perfectly misaligned agent which has -1 times your utility function, so the game is zero-sum. Then suppose you got useful output from the agent. This means you're able to increase your EU. This means the AI decreased its EU by saying anything, so it should have shut up instead. But since we assume it's smarter than you, it realized this possibility, and so the fact that it's saying something means it expects to gain by hurting your interests via its output. Therefore, the output can't be useful.

pattern on Why rationalists should care (more) about free software
OpenAI's desire for everyone to have AI
I didn't find the full joke/meme again, but, seriously, OpenAI should be renamed to ClosedAI.

jenniferrm on The Liar and the Scold
At one point I assigned myself the homework of watching all of Black Mirror so as to understand "what cultural associations would be applied to what ideas by default"...
...and most of the episodes had me suppressing anger at the writers for just writing characters who violate the same set of very basic rules over and over and over again with no lessons ever learned by anyone (lessons like "never trust something that talks until you know where it keeps its brains" and "own root on computing machines you rely on or personally trust the humans who do own root on such machines").
However, all the Black Mirror episodes that were essentially "a good love story in an alternate world" did not bother me in the same way :-)

tailcalled on Hyperpalatable Food Hypothesis: A LessWrong Study?
You can compare both ingredient lists and serving sizes if you look at cookbooks from the 1950s-1960s and recipe sites today.
In principle yes, but the question is also the distribution of recipes they ate. I'd assume some of their recipes are more palatable than others, and if you disproportionately ate the more palatable ones, presumably the diet wouldn't work. I don't even know how popular recipe books used to be back then. It seems like one should put in some serious historical effort to ensure that it gets properly replicated.
My Betty Crocker cookbook from 1969 (where I get most of my dessert recipes) has a brownie recipe that calls for 2 cups sugar, 4 oz chocolate, 2/3 cup butter; it's meant to bake in a 13x9 pan and yield 32 brownies.
The brownie recipe on Betty Crocker's website (that is, "today's brownie recipe") calls for 1 3/4 cups sugar, 5 oz chocolate, 2/3 cup butter, but is meant to bake in a 9x9 pan and yield 16 brownies.
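Running the arithmetic on the two recipes quoted above (the quantities come from the comment; everything else is plain division):

```python
# Sugar density of the two Betty Crocker brownie recipes quoted above.
# Quantities are taken from the comment; per-unit figures are computed.

recipes = {
    "1969":  {"sugar_cups": 2.0,  "pan_sq_in": 13 * 9, "brownies": 32},
    "today": {"sugar_cups": 1.75, "pan_sq_in": 9 * 9,  "brownies": 16},
}

for name, r in recipes.items():
    per_brownie = r["sugar_cups"] / r["brownies"]
    per_sq_in = r["sugar_cups"] / r["pan_sq_in"]
    print(f"{name}: {per_brownie:.3f} cups sugar per brownie, "
          f"{per_sq_in:.4f} per square inch of pan")
```

By this measure the modern recipe is sweeter both per brownie (about 0.109 vs 0.063 cups of sugar) and per square inch of pan (about 0.0216 vs 0.0171).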
In addition to the distribution question, 1969 is a bit on the late side.

slider on NFTs, Coin Collecting, and Expensive Paintings
The historic-artifact conman is also trying to declare by fiat that the thing in his possession is the cool one. What if different NFTs declare different swords cool? Or, in the case that multiple NFT systems designate the same sword as cool, whose proof of it is the most "legitimate"? In the case of real swords, I would expect the word of a national museum or something like that to carry the heftiest weight on what is and is not cool. But that is broken because museums have an academic interest and are otherwise likely to overdetermine it (i.e., even when nobody is contesting anything, they have multiple professionals doing such assignments). A random certification mechanism just for certification's sake might not have natural grounding.
As an analogy, one might think of the Guinness Book of Records vs. the Olympic Committee. Where the Olympic Committee says anything, it is unlikely anybody would take Guinness's word instead. On the other hand, if you had an upstart competitor, a "general recordkeeper", it would probably get overshadowed by Guinness. But if the difference in establishment-ness is small and the authorities list different champions, it is not clear which would be worth influencing. Just because a system is A recordkeeper is not a very good guarantee, or even clue, that it will be THE recordkeeper in the future.

sts on An Observation of Vavilov Day
I hereby join in, too.

leogao on NFTs, Coin Collecting, and Expensive Paintings
I think Patreon is kind of also the same idea (though there are probably better ways to do public good funding than either of these models). You couldn't own a blog post, but you could have your name etched into it at the top of the patrons list. Patreon is much more explicitly about supporting the creator, though, so the comparison isn't perfect.
I think NFTs, in the platonic ideal form that I think about, are actually the opposite of DRM in some sense. DRM aims to make the actual content scarce; NFTs aim to make the intangible essence of owning the Schelling point scarce. In that sense, NFTs are immune to what made DRM fail. It's fundamentally impossible to make the content scarce, because of the analog hole. With NFTs, even if everyone has the underlying content, the value of the Schelling point isn't diminished. If anything, the more famous and well known the underlying content is, the more valuable the essence of the Schelling point becomes.