LessWrong 2.0 Reader
Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results, so as not to let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots who do not understand what kind of fire they might be playing with, or that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing OpenAI's behavior.
emrik-1 on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
Surely you could work for free as an engineer at an AI alignment org or something and then shift into discussions w/ them about alignment?
To be clear: his motivation isn't "I want to contribute to alignment research!" He's aiming to actually solve the problem. If he works as an engineer at an org, he's not pursuing his project, and he'd be approximately 0% as useful.
emrik-1 on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
I strongly endorse Johannes' research approach. I've had 6 meetings with him, and have read/watched a decent chunk of his posts and YT vids. I think the project is very unlikely to work, but that's true of all projects I know of, and this one seems at least better than almost all of them. (Reality doesn't grade on a curve.)
Still, I really hope funders would consider funding the person instead of the project, since I think Johannes' potential will be severely stifled unless he has the opportunity to go "oops! I guess I ought to be doing something else instead" as soon as he discovers some intractable bottleneck wrt his current project. He's literally the person I have the most confidence in when it comes to swiftly changing path to whatever he thinks is optimal, and it would be a real shame if funding gave him an incentive to not notice reasons to pivot. (For more on this, see e.g. Steve's post [LW · GW].)
I realize my endorsement doesn't carry much weight for people who don't know me, and I don't have much general clout here, but if you're curious here's my EA forum profile [EA · GW] and twitter. Some other things which I hope will nudge you to take my endorsement a bit more seriously:
The word "privilege" has been so tainted by its association with guilt that it's almost an infohazard to think you've got privilege at this point, it makes you lower your head in shame at having more than others, and brings about a self-flagellation sort of attitude. It elicits an instinct to lower yourself rather than bring others up. The proper reactions to all these things you've listed is gratitude to your circumstances and compassion towards those who don't have them. And certainly everyone should be very careful towards any instinct they have at publicly "acknowledging their privilege"... it's probably your status-raising instincts having found a good opportunity to boast about your intelligence, appearance and good looks while appearing like you're being modest.
nathan-helm-burger on Scientific Notation Options
Well, the nice thing about at least agreeing on e as the notation is that it's easy to understand variants which prefer subsets of exponents: 500e8, 50e9, and 5e10 are all reasonably mutually intelligible. I think sticking to a subset of exponents feels intuitive for numbers frequently encountered in everyday life, but seems a little contrived for large numbers. 4e977 doesn't seem to me much easier to understand when written as 40e976 or 400e975.
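To make the mutual intelligibility concrete: any mantissa/exponent pair can be mechanically rewritten into the canonical one-digit-mantissa form. Here's a minimal Python sketch (the normalize helper is hypothetical, just for illustration):

```python
# Minimal sketch: normalize an e-notation string to standard scientific
# notation (mantissa in [1, 10)), showing why 500e8, 50e9, and 5e10 all
# denote the same number.
from decimal import Decimal

def normalize(s: str) -> str:
    """Rewrite an e-notation string so the mantissa has one nonzero digit."""
    mantissa, exponent = s.lower().split("e")
    m, e = Decimal(mantissa), int(exponent)
    while abs(m) >= 10:      # shift digits out of the mantissa...
        m /= 10
        e += 1
    while 0 < abs(m) < 1:    # ...or into it
        m *= 10
        e -= 1
    return f"{m}e{e}"

for variant in ["500e8", "50e9", "5e10", "400e975"]:
    print(variant, "->", normalize(variant))
# 500e8 -> 5e10, 50e9 -> 5e10, 5e10 -> 5e10, 400e975 -> 4e977
```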
justus on Is acausal extortion possible?
Hey, I don't really get this. Could you explain in plain language (or just simpler, I guess) why I should be scared of this? Why would they extort me when I can't know what they want, and so can't do it? I'm probably just stupid, but I'm curious about your answers!
gilch on robo's Shortform
the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".
Not quite. It was to research how to build friendly AIs. We haven't succeeded yet. What research progress we have made points to the problem being harder than initially thought, and capabilities turned out to be easier than most of us expected as well.
Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.
Considered by whom? Rationalists? The public? The public would not have been so supportive before ChatGPT: hardly anyone expected general AI so soon, if they thought about the topic at all. It wasn't an option at the time. Talking about this at all was weird, or at least niche; certainly not something one could reasonably expect politicians to care about. That has changed, but only recently.
I don't particularly disagree with your prescription in the short term, just your history. That said, politics isn't exactly our strong suit.
But even if we get a pause, this only buys us some time. In the long(er) term, I think either the Singularity or some kind of existential catastrophe is inevitable. Those are the attractor states. Our current economic growth isn't sustainable without technological progress to go with it. Without that, we're looking at civilizational collapse. But with that, we're looking at ever-widening blast radii for accidents or misuse of more and more powerful technology. Either we get smarter about managing our collective problems, or they will eventually kill us. Friendly AI looked like the way to do that: if we solve that one problem, even without world cooperation, it solves all the others for us. It's probably not the only way, but it's not clear the alternatives are any easier. What would you suggest?
I can think of three alternatives.
First, the most mundane (but perhaps most difficult), would be an adequate world government. This would be an institution that could easily solve climate change, ban nuclear weapons (and wars in general), etc. Even modern stable democracies are mostly not competent enough. Autocracies are an obstacle, and some of them have nukes. We are not on track to get this any time soon, and much of the world is not on board with it, but I think progress in the area of good governance and institution building is worthwhile. Charter cities are among the things I see discussed here.
Second might be intelligence enhancement through brain-computer interfaces. Neuralink exists, but it's early days. So far, it's relatively low bandwidth: probably enough to restore some sight to the blind and some action to the paralyzed, but not enough to make us any smarter. It might take AI assistance to get to that point any time soon, but current AIs aren't up to the task, and future ones will be even more of a risk. This would certainly be of interest to us.
Third would be intelligence enhancement through biotech/eugenics. I think this looks like encouraging the smartest to reproduce more, rather than the misguided and inhumane attempts of the past to remove the deplorables from the gene pool. Biotech can speed this up with genetic screening and embryo selection. This seems like the approach most likely to work (short of actually solving alignment), but it would still take a generation or two at best. I don't think we can sustain a pause that long: any enforcement regime would have too many holes to work indefinitely, and civilization is still in danger for the other reasons. Biological enhancement is also something I see discussed on LessWrong.
d0themath on D0TheMath's Shortform
I promise I won't just continue to re-post a bunch of papers, but this one seems relevant to many around these parts. In particular @Elizabeth [LW · GW] (also, sorry if you dislike being at-ed like that).
Food preferences significantly influence dietary choices, yet understanding of natural dietary patterns in populations remains limited. Here we identify four dietary subtypes by applying data-driven approaches to food-liking data from 181,990 UK Biobank participants: ‘starch-free or reduced-starch’ (subtype 1), ‘vegetarian’ (subtype 2), ‘high protein and low fiber’ (subtype 3) and ‘balanced’ (subtype 4). These subtypes varied across diverse brain-health domains. Individuals with a balanced diet demonstrated better mental health and superior cognitive functions relative to the other three subtypes. Compared with subtype 4, subtype 3 displayed lower gray matter volumes in regions such as the postcentral gyrus, while subtype 2 showed higher volumes in the thalamus and precuneus. Genome-wide association analyses identified 16 genes differing between subtype 3 and subtype 4, enriched in biological processes related to mental health and cognition. These findings provide new insights into naturally developed dietary patterns, highlighting the importance of a balanced diet for brain health.
h/t Hal Herzog via Tyler Cowen
justus on Is acausal extortion possible?
I don't really get this; can you explain in simpler words? I'm not too smart.
elizabeth-1 on Do you believe in hundred dollar bills lying on the ground? Consider humming
This is consistent with the dose being 130µl of a dilute liquid
Can you clarify this part? The liquid is a reactive solution (and contains other ingredients) so I don't understand how you calculated it.
I agree the integral is a reasonable interpretation and appreciate you pointing it out. My guess is that frequent low doses are better than infrequent high doses, but I don't know what the conversion rate is, and this definitely undermines the hundred-dollar-bill case.
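To illustrate the integral interpretation concretely, here's a toy sketch; every number in it is a made-up placeholder, since (per the comment) the actual conversion rate is unknown:

```python
# Toy illustration of the "integral" reading: total exposure is
# dose-per-application x applications-per-day. All figures below are
# hypothetical placeholders, not measured values.

def total_dose(dose_per_application_ul: float, applications_per_day: int) -> float:
    """Cumulative daily dose: the integral reduces to dose x frequency."""
    return dose_per_application_ul * applications_per_day

spray = total_dose(130, 2)   # e.g. a 130 µl application, twice a day
humming = total_dose(5, 60)  # e.g. many small "doses" spread through the day

print(f"spray: {spray} ul/day, humming: {humming} ul/day")
# Equal integrals need not mean equal effect: frequent low doses and
# infrequent high doses can differ, which is the unknown conversion rate.
```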