Comments
Still, this is not a truth-maximizing website
I mean, I agree with this, but popularity correlates better with truth here than on any other website -- or, more broadly, any other social group -- that I know of. And actually, I think it's probably not possible for a relatively open venue like this to be perfectly truth-seeking. To go further in that direction, I think you ultimately need some sort of institutional design that explicitly rewards accuracy, like prediction markets. But the ways in which LW differs from pure truth-and-importance-seeking don't strike me as entirely bad things either -- posts which are inspiring or funny get upvoted more, for instance. I think it would be difficult to nucleate a community focused on truth-seeking without "emotional energy" of this sort.
Or disadvantage, because it makes it harder to make long-term plans and commitments?
It's rare to see someone with the prerequisites for understanding the arguments (e.g. AIT and metamathematics) trying to push back on this
My view is probably different from Cole's, but it has struck me that the universe seems to have a richer mathematical structure than one might expect given a generic AIT-ish view (e.g. continuous space/time, quantum mechanics, diffeomorphism invariance/gauge invariance), so we should perhaps update that the space of mathematical structures instantiating life/sentience might be narrower than it initially appears (that is, if "generic" mathematical structures support life/agency, we should expect ourselves to be in a generic universe, but instead we seem to be in a richly structured universe, so this is an update that maybe we can only be in a rich/structured universe [or that life/agency is just much more likely to arise in such a universe]). Taken to an extreme, perhaps it's possible to derive a priori that the universe has to look like the standard model. (Of course, you could run the standard model on a Turing machine, so the statement would have to be about how the universe relates/appears to agents inhabiting it, not its ultimate ontology, which is inaccessible since any Turing-complete structure can simulate any other.)
They care if you have a PhD, they don’t care if you have researched something for 5 years in your own free time.
I don't think this is right. If anything, the median LW user would be more likely to trust a random blogger who researched a topic on their own for 5 years vs a PhD, assuming the blogger is good at presenting their ideas in a persuasive manner.
Marketing. It was odd enough for you to post about on LW!
Wearing a suit in an inappropriate context is like wearing a fedora. It says "I am socially clueless enough to do random inappropriate things"
This is far too broadly stated; the actual message people will take away from an unexpected suit is verrrrry context-dependent, depending on (among other things) who the suit-wearer is, who the people observing are, how the suit-wearer carries himself, the particular situation the suit is worn in, etc. etc. etc. Judging from the post, it sounds like those things create an overall favorable impression for lsusr? (It's hard to tell from just a post of course, but still.)
But I still have a problem with the post's tone: if you really internalized that "you" are the player, then your reaction to the informational content should be "I'm a beyond-'human' uncontrollable force, BOOYEAH!!", not "I'm a beyond-human uncontrollable force, ewww tentacles😣".
Goodness maximizing as undefined without an arbitrary choice of values
By "(non-socially-constructed) Goodness" I mean the goodness of a state of affairs as it actually seems to that particular person really-deep-down. Which can have both selfish -- perhaps "arbitrary" from a certain perspective -- and non-selfish components.
I changed my mind about this; I actually think "lovecraftian horror" might be somewhat better than "monkey" as a mental image, but maybe "(non-socially-constructed)-Goodness-Maximizing AGI" or "void from which things spontaneously arise" or "the voice of God" could be even better?
He doesn't only talk about properties but also about what people actually are according to our best physical theories, which is continuous wavefunctions -- of which there are only beth-1.
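For the counting step, here's a minimal sketch, assuming wavefunctions are continuous functions on a separable space such as $\mathbb{R}^{3N}$ (so each one is determined by its values on a countable dense subset like $\mathbb{Q}^{3N}$):

$$|C(\mathbb{R}^{3N},\mathbb{C})| \;\le\; |\mathbb{C}|^{|\mathbb{Q}^{3N}|} \;=\; \left(2^{\aleph_0}\right)^{\aleph_0} \;=\; 2^{\aleph_0} \;=\; \beth_1.$$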
Sadly my perception is that there are some lesswrongers who reflexively downvote anything they perceive as "weird", sometimes without thinking the content through very carefully -- especially if it contradicts site orthodoxy in an unapologetic manner.
VC money. That disclaimer was misleading; they don't have fees on any markets.
Polymarket pays for the gas fees themselves; users don't have to pay any.
Liked the post btw!
The question is how we should extrapolate, and in particular if we should extrapolate faster than experts currently predict. You would need to show that Willow represents unusually fast progress relative to expert predictions. It's not enough to say that it seems very impressive.
I don't see how your first bullet point is much evidence for the second, unless you have reason to believe that the Willow chip has a level of performance much greater than experts predicted at this point in time.
I think the basic reason that it's hard to make an interesting QCA using this definition is that it's hard to make a reversible CA. Reversible cellular automata are typically made using block-partitioning or a second-order method. The (classical) laws of physics also seem to have a flavor more similar to these than to a GoL-style CA, in that they have independent position and velocity coordinates which each determine the time evolution of the other.
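As a concrete illustration of the second-order construction (a toy sketch, not anything from the post: 1D binary cells, periodic boundaries, and an arbitrary placeholder rule), the new state is any function of the current neighborhood XORed with the previous state, which makes the update reversible by construction:

```python
import numpy as np

def step_second_order(prev, curr, rule_fn):
    """One step of a second-order 1D CA: next[i] = rule_fn(neighborhood of curr) XOR prev[i].
    Reversible, since prev[i] = rule_fn(...) XOR next[i]."""
    n = len(curr)
    nxt = np.empty_like(curr)
    for i in range(n):
        neigh = (curr[(i - 1) % n], curr[i], curr[(i + 1) % n])
        nxt[i] = rule_fn(neigh) ^ prev[i]
    return curr, nxt  # the new (prev, curr) pair

# Any rule works; the XOR with the earlier timestep is what buys reversibility.
majority = lambda neigh: int(sum(neigh) >= 2)

rng = np.random.default_rng(0)
prev, curr = rng.integers(0, 2, 16), rng.integers(0, 2, 16)

a, b = prev.copy(), curr.copy()
for _ in range(5):
    a, b = step_second_order(a, b, majority)   # run forward

x, y = b.copy(), a.copy()                      # swap the pair to run backward
for _ in range(5):
    x, y = step_second_order(x, y, majority)
assert np.array_equal(x, curr) and np.array_equal(y, prev)  # history recovered
```

Block-partitioning (Margolus-style) constructions work similarly, by making each local update a permutation of a finite block.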
Yeah, I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there's way more high-quality knowledge in those fields. Although "just dive into AI" seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short, so ¯\_(ツ)_/¯
People asked for a citation so here's one: https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%20and%20scientific%20genius.pdf
Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein's annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newton invented calculus at 24. Hmmm, I guess this makes it seem more like early 20s to 30. Either way 25 is definitely in peak range, and 18 typically too young (although people have made great discoveries by 18, like Galois. But he likely would have been more productive later had he lived past 20).
It sounds pretty implausible to me; intellectual productivity is usually at its peak from the mid-20s to mid-30s (for high fluid-intelligence fields like math and physics).
Confused as to why this is so heavily downvoted.
These emails and others can be found in document 32 here.
but it seems that even on LW people think winning on a noisy N=1 sample is proof of rationality
It's not proof of a high degree of rationality, but it is evidence against being an "idiot" as you said. Especially since the election isn't merely a binary yes/no outcome: we can observe that there was a huge Republican blowout exceeding most forecasts (and in fact Freddi bet a lot on the Republican popular vote too at worse odds, as well as some random states, which gives a larger update). This should increase our credence that predicting a Republican win was rational. There were also some smart observers with IMO good arguments that Trump was favored pre-election, e.g. https://x.com/woke8yearold/status/1851673670713802881
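To spell out the update with purely illustrative numbers (every probability below is a made-up assumption, not an estimate of the actual odds):

```python
# Toy Bayes factor: winning one near-coin-flip bet is weak evidence of skill,
# but also calling the popular vote and several states pushes the update further.
p_win_skill, p_win_luck = 0.65, 0.50        # assumed P(Trump win | edge) vs (| no edge)
lr_binary = p_win_skill / p_win_luck        # 1.3: modest likelihood ratio

p_extras_skill, p_extras_luck = 0.40, 0.15  # assumed P(also right on pop vote + states | ...)
lr_total = lr_binary * (p_extras_skill / p_extras_luck)   # ~3.5

prior_skill = 0.2
posterior = lr_total * prior_skill / (lr_total * prior_skill + (1 - prior_skill))
print(round(lr_total, 2), round(posterior, 2))            # 3.47 0.46
```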
"Guy with somewhat superior election modeling to Nate Silver, a lot of money, and high risk tolerance" is consistent with what we've seen. Not saying that we have strong evidence that Freddi is a genius but we also don't have much reason to think he is an idiot IMO.
Looks likely that tonight is going to be a massive transfer of wealth from "sharps" (among other people) to him. Post hoc and all, but I think if somebody is raking in huge wins while making "stupid" decisions, it's worth considering whether they're actually so stupid after all.
Good post, it's underappreciated that a society of ideally rational people wouldn't have unsubsidized, real-money prediction markets.
unless you've actually got other people being wrong even in light of the new actors' information
Of course in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It's not a perfect such method, but does have the advantage of simplicity. How many of these issues could be solved by subsidizing markets?
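One standard mechanism for the subsidizing part (not something from the post, just a sketch of Hanson's logarithmic market scoring rule) is an automated market maker whose worst-case loss, i.e. the sponsor's subsidy, is bounded by b·ln(number of outcomes):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price (implied probability) of outcome i."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def buy(q, b, i, shares):
    """Buy `shares` of outcome i; returns the new share vector and the amount paid."""
    new_q = list(q)
    new_q[i] += shares
    return new_q, lmsr_cost(new_q, b) - lmsr_cost(q, b)

b = 100.0                            # liquidity / subsidy parameter
q = [0.0, 0.0]                       # outstanding shares for a two-outcome market
print(lmsr_price(q, b, 0))           # 0.5 to start
q, paid = buy(q, b, 0, 50)           # a trader buys 50 "yes" shares
print(lmsr_price(q, b, 0), paid)     # price rises above 0.5; trader paid ~28
print(b * math.log(2))               # ~69.3: the most the sponsor can lose
```

The subsidy rewards whoever moves prices toward the realized outcome, which captures some of the information-aggregation benefits without needing systematically irrational counterparties on the other side of every trade.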
Discord Message
What discord is this? Sounds cool.
That's probably the one I was thinking of.
I know of only two people who anticipated something like what we are seeing far ahead of time; Hans Moravec and Jan Leike
I didn't know about Jan's AI timelines. Shane Legg also had some decently early predictions of AI around 2030 (~2007 was the earliest I knew about).
Some beliefs can be worse or better at predicting what we observe; this is not the same thing as popularity.
Far enough in the future, ancient brain scans would be fascinating antique artifacts, like rare archaeological finds today. I think people would be interested in reviving you on that basis alone (assuming there are people-like things with some power in the future).
I like the decluttering. I think the title should be smaller and have less white space above it. I also think it would be better if the ToC were just heavily faded until mouseover; the sudden appearance/disappearance feels too abrupt.
No I don't think so because people could just airgap the GPUs.
Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example the GPU virus would probably be worked around soon after deployment, via airgapping GPUs, developing software countermeasures, or just resetting infected GPUs.
This discussion is a nice illustration of why x-riskers are definitely more power-seeking than the average activist group. Just like Eskimos proverbially have 50 words for snow, AI-risk-reducers need at least 50 terms for "taking over the world" to demarcate the range of possible scenarios. ;)
Nice overview, I agree, but I think the 2016-2021 plan could still arguably be described as "obtain god-like AI and use it to take over the world" (admittedly with some rhetorical exaggeration, but like, not that much).
I would be happy to take bets here about what people would say.
Sure, I DM'd you.
I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president
Yeah, but it's not just the old MIRI views; it's those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview, e.g. regarding the competence of the rest of the world. I get the pretty strong impression that "a small group of people with overwhelming hard power" was the ideal end state, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.
I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.
But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer's worldview, I don't think it would make much sense for them to give the AI to the US government (considered incompetent) or to AI labs (negligently reckless).
It wasn't specified, but I think they strongly implied it would be that or something equivalently coercive. The "melting GPUs" plan was explicitly not a pivotal act but rather something with the required level of difficulty, and it was implied that the actual pivotal act would be something further outside the political Overton window. When you consider the ways "melting GPUs" would be insufficient, a plan like this is the natural conclusion.
doesn't require replacing existing governments
I don't think you would need to replace existing governments. Just block all AI projects and maintain your ability to continue doing so in the future via maintaining military supremacy. Get existing governments to help you, or at least not interfere, via some mix of coercion and trade. Sort of a feudal arrangement with a minimalist central power.
"Taking over" something does not imply that you are going to use your authority in a tyrannical fashion. People can obtain control over organizations and places and govern with a light or even barely-existent touch, it happens all the time.
Would you accept "they plan to use extremely powerful AI to institute a minimalist, AI-enabled world government focused on preventing the development of other AI systems" as a summary? Like sure, "they want to take over the world" as a gist of that does have a bit of an editorial slant, but not that much of one. I think that my original comment would be perceived as much less misleading by the majority of the world's population than "they just want to do some helpful math uwu" in the event that these plans actually succeeded. I also think it's obvious that these plans indicate a far higher degree of power-seeking (in aim at least) than virtually all other charitable organizations.
(...and to reiterate, I'm not taking a strong stance on the advisability of these plans. In a way, had they succeeded, that would have provided a strong justification for their necessity. I just think it's absurd to say that the organization making them is less power-seeking than the ADL or whatever)
Are you saying that AIS movement is more power-seeking than environmentalist movement that spent 30M$+[...]
I think that AIS lobbying is likely to have more consequential and enduring effects on the world than environmental lobbying regardless of the absolute size in body count or amount of money, so yes.
"MIRI default plan" was "to do math in hope that some of this math will turn out to be useful".
I mean yeah, that is a better description of their publicly-known day-to-day actions, but intention also matters. They settled on math after it became clear that the god-AI plan was not achievable (and recently gave up on the math plan too when it became clear that it was not realistic). An analogy might be an environmental group that planned to end pollution by bio-engineering a microbe to spread throughout the world that made oil production impossible, then reluctantly settled for lobbying once they realized they couldn't actually make the microbe. I think this would be a pretty unusually power-seeking plan for an environmental group!
Are you sure [...] et cetera are less power-seeking than AI Safety community?
Until recently the MIRI default plan was basically "obtain god-like AI and use it to take over the world" ("pivotal act"); it's hard to get more power-seeking than that. Other wings of the community have been more circumspect but also more active in things like founding AI labs, influencing government policy, etc., to the tune of many billions of dollars worth of total influence. Not saying this is necessarily wrong, but it does seem empirically clear that AI-risk-avoiders are more power-seeking than most movements.
let’s ensure that AGI corporations have management that is not completely blind to alignment problem
Seems like this is already the case.
Possibly related: "Stochastic Collapse: How Gradient Noise Attracts SGD towards sparser subnetworks"