It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that "changes to one’s beliefs should generally also be probabilistic, rather than total", but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.
My prediction is that giving such population-level arguments in response to being asked why they are by themselves is much less likely to result in being left alone (presumably, the goal) than saying their parents said it's okay, so it would show lower levels of instrumental rationality rather than demonstrate more agency.
There's nothing unjustified about appealing to your parents' authority. Parents are legally responsible for their children: they have literal (not epistemic) authority over them, although it's not absolute.
I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus' model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes, later Kepler-inspired, corrected versions of Copernicus' model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.
...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...
What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?
I don't have a solution to this, but I have a question that might rule in or out an important class of solutions.
The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that's $150 billion. There are about 2 million people in Gaza.
If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatima in and Fatima receives $37,500 for emigrating, etc... How many such pairings would that facilitate?
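A quick sanity check of the arithmetic (using only the round figures from the comment above; real costs and uptake would obviously differ):

```python
# Figures from the comment: $75B US + $75B EU, ~2M residents of Gaza.
total_funds = 150_000_000_000
gaza_population = 2_000_000

per_person = total_funds / gaza_population  # dollars available per resident
per_party = per_person / 2                  # split between emigrant and host country

print(per_person)  # 75000.0
print(per_party)   # 37500.0

# At $75,000 per emigrant/host pairing, the fund covers:
pairings = total_funds // 75_000
print(pairings)    # 2000000 -- i.e., the entire population
```

So at these (very rough) numbers, the fund would cover a pairing for every resident.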
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?
Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.
So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.
-- Tyler Cowen, Risks and Impact of Artificial Intelligence
is there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion?
-- David Chalmers, Twitter
Is work already being done to reformulate AI-risk arguments for these communities?
IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him and I think there are many others for which that is true.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?
Does that mean the current administration is finally taking AGI risk seriously or does that mean they aren't taking it seriously?
IIRC, he says that in Intuition Pumps and Other Tools for Thinking.
I noticed that Meta (Facebook) isn't mentioned as a participant. Is that because they weren't asked or because they were asked but declined?
...there is hardly any mention about memorization on either LessWrong or EA Forum.
I'm curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago, and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for "Anki" specifically is currently returning ~800+ comments.
FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every AI-related course that was available if they hadn't already done so.
When I worked for a police department a decade ago, we used Zebra, not Zulu, for Z, but our phonetic alphabet started with Adam, Baker, Charles, etc...
Strictly speaking it is a (conditional) "call for violence", but we often reserve that phrase for atypical or extreme cases rather than the normal tools of international relations. It is no more a "call for violence" than treaties banning the use of chemical weapons (which the mainstream is okay with), for example.
If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.
How many LessWrong users are there? What is the base rate for cult formation? Shouldn't we answer these questions before speculating about what "should be done"?
Virtue ethics says to decide on rules ahead of time.
This may be where our understandings of these ethical views diverge. I deny that virtue ethicists are typically in a position to decide on the rules (ahead of time or otherwise). If what counts as a virtue isn't strictly objective, then it is at least intersubjective, and is therefore not something that can be decided on by an individual (at least not unilaterally). It is absurd to think to yourself "maybe good knives are dull" or "maybe good people are dishonest and cowardly", and when you do think such thoughts it is more readily apparent that you are up to no good. On the other hand, the sheer number of parameters the consequentialist can play with to get their utility calculation to come out to the result they are (subconsciously) seeking supplies them with an enormous amount of ammunition for rationalization.
Another interesting case study:
Phineas Gage was an American railroad construction foreman remembered for his improbable survival of an accident in which a large iron rod was driven completely through his head, destroying much of his brain's left frontal lobe, and for that injury's reported effects on his personality and behavior over the remaining 12 years of his life...
Assuming humans can't be "aligned", then it would also make sense to allocate resources in an attempt to prevent one of them from becoming much more powerful than all of the rest of us.
We (and I mostly mean the US, where I'm located) seem to design our culture and our government in an incredibly convoluted, haphazard and error-prone way. No thought is given to the long-run consequences or the stability of our political decisions.
It's interesting to me that it looks that way to you, given that the architects of the American system (James Madison, John Jay, etc...) were explicitly attempting to achieve a kind of "defense in depth" (e.g. separation of powers between the branches, federalism with independent states, decentralized militia system, etc...). Perhaps they failed in their attempt, or perhaps "backup plans for backup plans" just appear convoluted and wasteful when viewed by those living inside such systems.
If "rationalist" is taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.
Did nobody make the claim that 'guy who claims he wants free speech will restrict speech instead'?
I interpreted the following as saying just that:
Free speech good but endangered by this man who wants free speech.
Would you agree with a person that told you that human testimony is not sufficient grounds for the belief in a natural event (say, that your friend was attacked by another, but there were no witnesses and it left no marks) because humans are not perfect, etc...?
If not, might that indicate the rest of your argument only holds in the case where the prior probability of miracles is extremely low (and potentially misses the crux of the disagreement between yourself and miracle-believing people)?
Every industry has downsides. Some industries have much larger downsides for some kinds of people. If you personally think the tradeoffs are such that overall you prefer to stay in finance, then by analogy perhaps others who are like you would as well.
Deontological and virtue-ethical frameworks have lots of resources for explaining why one shouldn't lie, but from a purely (naively) consequentialist perspective, it would be wrong to encourage people to enter your industry despite its problems only if, compared to their next best alternative, it would leave them worse off overall. Does it?
This is the form I expect answers to "why do you believe x"-type questions to take. Thanks.
Note: That interfax.ru link doesn't seem to work from North American or European IP addresses, but you can view a snapshot on the Wayback Machine here.
On March 4th Putin's troops shelled Zaporizhzhia nuclear power plant in Enerhodar city.
Why do you believe that?
Care to specify over what time horizon you expect(ed) it to fold?
Will DM with info.
Will DM you the number.
I've personally known many people who have had serious medical problems that sure looked clearly like vaccine reactions.
I don't consider it a "serious medical problem", but I attempted to report (via the phone number on the paperwork given to me by the person who administered the shot at Walgreens) my 48-hour migraine + ~4 days of high blood pressure (as measured by my Omron home blood pressure monitor) after getting a Pfizer booster. I was told they don't need me to fill anything out because those are already known side effects.
Searching Google for "does covid vaccine cause high blood pressure" just now returned a Nebraska Medicine FAQ page as the first result with the following answer:
So far, no data suggests that COVID-19 vaccines cause an increase in blood pressure.
WTF...
PredictionBook now has a basic tagging functionality. Props to CFAR and Bellroy for supporting me in getting the feature added.
Are we assuming affirming A-theory is indicative of science illiteracy because it is incompatible with special relativity or for some other reason?
For reference, here are the raw data from when LWers took the survey in 2012 and here is the associated post from which it was extracted.
This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
Actually, several of the chapters of this book are very likely completely wrong and the rest are on shakier foundations than I believed 9 years ago (similar to other works of social psychology that accurately reported typical expert views at the time). See here for further elaboration.
I'm on the fence about recommending this book now, but please read skeptically if you do choose to read it.
I agree with your point that there are at least two distinct ways to interpret the non-central fallacy, and also with the OP's point that while ad hominem arguments are technically invalid, they can have high inductive strength in some circumstances. I'm mostly critiquing Scott's choice of examples for introducing the non-central fallacy, since mixing it with other fallacious forms of reasoning makes it harder to see what the non-central part is contributing to the mistake being made. For this reason, I prefer the theft example.
I think the Martin Luther King scenario is a particularly bad example for explaining the non-central fallacy, because it depends on a conjunction of fallacies, rather than isolating the non-central part. The inference from (1) MLK does/doesn't fit some category with negative emotional valence, to (2) his ideas are bad, just is the ad hominem fallacy (which is distinct from the non-central fallacy). The truth (or falsity) of Bloch's theorem is logically independent of whether or not André Bloch was a murderer (which he was).
Does this add you to an email list where discussion is happening, or merely put you on a map so that others in the area can reach out to you on an ad hoc basis?
I asked around about this on the ##hplusroadmap irc channel:
15:59 < Jayson_Virissimo> Yeah, sorry. Was much more interested in the claim about peptide sourcing specifically.
16:00 < Jayson_Virissimo> Is that 4-5 weeks duration normal? How flexible is it, if at all?
16:01 < yashgaroth> some of them might offer expedited service, though I've never had cause to find out when ordering peptides and am not bothered to check...and it'd save you a week or two at most
16:02 < Jayson_Virissimo> What would you guess as to the main cause? Does it really take that long to manufacture or is it slow to ship, or is there some legal check that happens that isn't instantaneous?
16:04 < yashgaroth> the legal check isn't an issue, though I'm sure all the major synthesis houses are aware of the Radvac peptide sequences and may hassle you about them - especially if you're not ordering as a company...shipping's not a problem since overnight is standard, so I'd say manufacturing time combined with the people ahead of you in the queue
16:04 < yashgaroth> and manufacturing includes purification, which is an important step for something you're ingesting, even if you're just snorting a line of it
16:07 < Jayson_Virissimo> yashgaroth: do the labs have any legal risk of their own if you are ordering something like Radvac sequences as a private person, or are they "hassling you for your own good"?
16:09 < yashgaroth> nah they're usually okay legally on their end, though most of them won't risk selling a small quantity to an individual since 'plausible deniability' wears a little thin on their end when you're buying sequences that match the Radvac ones
Are there any English language sources where I could learn more about the legal issues surrounding human experimentation in Russia such as the one you mentioned?
What explains the 4-5 weeks delivery time for special lab peptide synthesis?
Mati_Roy makes the case for Phoenix here.
Full Disclosure: I'm in Phoenix.
A similar "measure function is non-normalizable" argument is made at length in McGrew, T., McGrew, L., & Vestrup, E. (2001). Probabilities and the Fine-Tuning Argument: A Sceptical View. Mind, 110(440), 1027-1037.
I've been working on an interactive flash card app to supplement classical homeschooling called Boethius. It uses a spaced-repetition algorithm to economize on the student's time and currently has exercises for (Latin) grammar, arithmetic, and astronomy.
Let me know what you think!
Do you happen to know where he discusses this idea?