Posts

The Unexpected Philosophical Depths of the Clicker Game Universal Paperclips 2019-03-28T23:39:28.461Z
Explanations of deontological responses to moral dilemmas 2017-04-10T03:43:57.398Z
PredictIt, a prediction market out of New Zealand, now in beta. 2015-03-16T02:02:14.126Z
Learn (and Maybe Get a Credential in) Data Science 2014-02-01T18:39:25.445Z
Meetup : Tempe, AZ: How to Measure Anything II 2013-11-01T16:24:55.602Z
Meetup : Tempe, AZ: How to Measure Anything I 2013-10-24T19:36:50.073Z
Meetup : Tempe, AZ (ASU) 2013-09-27T03:58:05.831Z
[Link] Bets, Portfolios, and Belief Revelation 2013-07-01T16:17:28.542Z
[Link] Caplan asks for help optimizing his will. 2013-04-30T02:12:51.663Z
Rationality Quotes March 2013 2013-03-02T10:45:48.626Z
Open Thread, March 1-15, 2013 2013-03-01T12:00:44.477Z
[Link] How Signaling Ossifies Behavior 2013-01-21T14:06:41.557Z
[Link] Think Bayes: Bayesian Statistics Made Simple 2012-10-10T08:11:30.369Z
[Poll] Less Wrong and Mainstream Philosophy: How Different are We? 2012-09-26T12:25:48.899Z
Rationality Quotes September 2012 2012-09-03T05:18:17.003Z
[Link] Nick Bostrom on the Status Quo Bias 2012-06-17T10:51:56.817Z
Meetup : Phoenix, Arizona 2012-05-01T06:31:41.895Z
PredictionBook: Feature Request and Bug Report 2012-01-20T09:20:59.786Z
Rationality Quotes December 2011 2011-12-02T06:01:27.343Z
PredictionBook: A Short Note 2011-11-10T15:10:28.480Z
Rationality Quotes November 2011 2011-11-01T18:28:38.290Z
[Link] Philip Pettit on Consequentialism 2011-09-17T08:13:59.743Z
[Link] Nick Bostrom on the Simulation Argument 2011-08-14T19:15:35.087Z
Bayesianism versus Critical Rationalism 2011-01-10T04:54:45.706Z

Comments

Comment by Jayson_Virissimo on The Aspiring Rationalist Congregation · 2024-01-11T19:20:34.040Z · LW · GW

It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that "changes to one’s beliefs should generally also be probabilistic, rather than total", but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.

Comment by Jayson_Virissimo on What are the results of more parental supervision and less outdoor play? · 2023-12-08T21:17:59.507Z · LW · GW

My prediction is that a child who responds to questions about why they are out by themselves with population-level arguments is much less likely to be left alone (presumably, the goal) than one who says their parents said it's okay, so doing so would show lower levels of instrumental rationality rather than demonstrate more agency.

Comment by Jayson_Virissimo on What are the results of more parental supervision and less outdoor play? · 2023-12-08T18:06:03.372Z · LW · GW

There's nothing unjustified about appealing to your parents' authority. Parents are legally responsible for their children: they have literal (not epistemic) authority over them, although it's not absolute.

Comment by Jayson_Virissimo on Neither Copernicus, Galileo, nor Kepler had proof · 2023-11-23T21:02:18.218Z · LW · GW

I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus' model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes, later Kepler-inspired corrections to Copernicus' model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.

Comment by Jayson_Virissimo on Architects of Our Own Demise: We Should Stop Developing AI Carelessly · 2023-10-26T16:41:11.052Z · LW · GW

...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...

What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?

Comment by Jayson_Virissimo on How can the world handle the HAMAS situation? · 2023-10-13T22:00:32.237Z · LW · GW

I don't have a solution to this, but I have a question that might rule in or out an important class of solutions.

The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that's $150 billion. There are about 2 million people in Gaza.

If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatima in and Fatima receives $37,500 for emigrating, etc... How many such pairings would that facilitate?
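The back-of-the-envelope arithmetic above works out as follows (a minimal sketch; the $150 billion and 2 million figures are the rough assumptions from the comment, not real policy numbers):

```python
# Back-of-the-envelope sketch of the relocation arithmetic above.
# All figures are rough assumptions from the comment, not real data.
total_funds = 150e9  # ~$75B from the US plus a similar EU contribution
population = 2e6     # approximate population of Gaza

per_person = total_funds / population        # budget per emigrant pairing
emigrant_share = per_person / 2              # split evenly with the host country
host_share = per_person / 2

print(per_person)      # 75000.0
print(emigrant_share)  # 37500.0
```

At these assumed figures the budget covers the entire population, so the binding question is how many host-country pairings could actually be arranged, not how many the money could fund.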

Comment by Jayson_Virissimo on Open Thread: June 2023 (Inline Reacts!) · 2023-06-07T04:29:59.028Z · LW · GW

Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.

Any idea if something similar is being done to cater to economists (or other social scientists)?

Comment by Jayson_Virissimo on Open Thread: June 2023 (Inline Reacts!) · 2023-06-06T23:00:36.894Z · LW · GW

Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:

...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

-- Tyler Cowen, Risks and Impact of Artificial Intelligence

is there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion?

-- David Chalmers, Twitter

Is work already being done to reformulate AI-risk arguments for these communities?

Comment by Jayson_Virissimo on Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures · 2023-05-31T05:05:40.788Z · LW · GW

IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.

Comment by Jayson_Virissimo on Why is violence against AI labs a taboo? · 2023-05-26T17:40:45.228Z · LW · GW

Consider the following rhetorical question:

Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?

Do we expect the answer to this to be any different for vegans than for AI-risk worriers?

Comment by Jayson_Virissimo on Open & Welcome Thread - May 2023 · 2023-05-08T04:10:07.774Z · LW · GW

Does that mean the current administration is finally taking AGI risk seriously or does that mean they aren't taking it seriously?

Comment by Jayson_Virissimo on Statistical models & the irrelevance of rare exceptions · 2023-05-07T22:03:40.995Z · LW · GW

IIRC, he says that in Intuition Pumps and Other Tools for Thinking.

Comment by Jayson_Virissimo on [deleted post] 2023-05-04T16:41:19.215Z

I noticed that Meta (Facebook) isn't mentioned as a participant. Is that because they weren't asked or because they were asked but declined?

Comment by Jayson_Virissimo on Efficient Learning: Memorization · 2023-04-16T18:37:51.308Z · LW · GW

...there is hardly any mention about memorization on either LessWrong or EA Forum.

I'm curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for "Anki" specifically is currently returning ~800+ comments.

Comment by Jayson_Virissimo on A freshman year during the AI midgame: my approach to the next year · 2023-04-16T00:43:48.666Z · LW · GW

FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every AI-related course that was available if they hadn't already done so.

Comment by Jayson_Virissimo on Use the Nato Alphabet · 2023-04-12T18:25:29.626Z · LW · GW

When I worked for a police department a decade ago, we used Zebra, not Zulu, for Z, but our phonetic alphabet started with Adam, Baker, Charles, etc...

Comment by Jayson_Virissimo on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T01:29:50.505Z · LW · GW

Strictly speaking it is a (conditional) "call for violence", but we often reserve that phrase for atypical or extreme cases rather than the normal tools of international relations. It is no more a "call for violence" than treaties banning the use of chemical weapons (which the mainstream is okay with), for example.

Comment by Jayson_Virissimo on What fact that you know is true but most people aren't ready to accept it? · 2023-03-10T01:10:04.540Z · LW · GW

If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.

Comment by Jayson_Virissimo on Some thoughts on the cults LW had · 2023-02-26T19:09:46.626Z · LW · GW

How many LessWrong users are there? What is the base rate for cult formation? Shouldn't we answer these questions before speculating about what "should be done"?

Comment by Jayson_Virissimo on Reflective Consequentialism · 2022-11-19T04:37:27.584Z · LW · GW

Virtue ethics says to decide on rules ahead of time.

This may be where our understandings of these ethical views diverge. I deny that virtue ethicists are typically in a position to decide on the rules (ahead of time or otherwise). If what counts as a virtue isn't strictly objective, then it is at least intersubjective, and is therefore not something that can be decided on by an individual (at least not unilaterally). It is absurd to think to yourself "maybe good knives are dull" or "maybe good people are dishonest and cowardly", and when you do think such thoughts it is more readily apparent that you are up to no good. On the other hand, the sheer number of parameters the consequentialist can play with to get their utility calculation to come to the result they are (subconsciously) seeking supplies them with an enormous amount of ammunition for rationalization.

Comment by Jayson_Virissimo on (Link) I'm Missing a Chunk of My Brain · 2022-09-05T03:01:51.573Z · LW · GW

Another interesting case study:

Phineas Gage was an American railroad construction foreman remembered for his improbable survival of an accident in which a large iron rod was driven completely through his head, destroying much of his brain's left frontal lobe, and for that injury's reported effects on his personality and behavior over the remaining 12 years of his life...

Comment by Jayson_Virissimo on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-07T21:13:09.035Z · LW · GW

Assuming humans can't be "aligned", then it would also make sense to allocate resources in an attempt to prevent one of them from becoming much more powerful than all of the rest of us.

Comment by Jayson_Virissimo on [deleted post] 2022-06-07T20:36:08.716Z

We (and I mostly mean the US, where I'm located) seem to design our culture and our government in an incredibly convoluted, haphazard and error-prone way. No thought is given to the long-run consequences or the stability of our political decisions.

It's interesting to me that it looks that way to you, given that the architects of the American system (James Madison, John Jay, etc...) were explicitly attempting to achieve a kind of "defense in depth" (e.g. separation of powers between the branches, federalism with independent states, decentralized militia system, etc...). Perhaps they failed in their attempt, or perhaps "backup plans for backup plans" just appear convoluted and wasteful when viewed by those living inside such systems.

Comment by Jayson_Virissimo on Salvage Epistemology · 2022-05-01T17:11:27.191Z · LW · GW

If "rationalist" is a taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.

Comment by Jayson_Virissimo on Covid 4/28/22: Take My Paxlovid, Please · 2022-04-28T18:41:19.537Z · LW · GW

Did nobody make the claim that 'guy who claims he wants free speech will restrict speech instead'?

I interpreted the following as saying just that:

Free speech good but endangered by this man who wants free speech.

Comment by Jayson_Virissimo on 20 Modern Heresies · 2022-04-09T22:22:18.737Z · LW · GW
Comment by Jayson_Virissimo on Why Miracles Should Not Be Used as a Reason to Believe in a Religion · 2022-03-23T21:21:33.408Z · LW · GW

Would you agree with a person who told you that human testimony is not sufficient grounds for belief in a natural event (say, that your friend was attacked by another, but there were no witnesses and it left no marks) because humans are not perfect, etc...?

If not, might that indicate the rest of your argument only holds in the case where the prior probability of miracles is extremely low (and potentially misses the crux of the disagreement between yourself and miracle-believing people)?

Comment by Jayson_Virissimo on Ethicality Behind Breaking the Glass Ceiling · 2022-03-15T06:09:24.281Z · LW · GW

Every industry has downsides. Some industries have much larger downsides for some kinds of people. If you personally think the tradeoffs are such that overall you prefer to stay in finance, then by analogy perhaps others who are like you would as well. 

Deontology and virtue ethical frameworks have lots of resources for explaining why one shouldn't lie, but from a purely (naively) consequentialist perspective, encouraging people to enter your industry despite its problems would be wrong only if it would leave them worse off overall compared to their next best alternative. Does it?

Comment by Jayson_Virissimo on Ukraine update 06/03/2022 · 2022-03-14T01:19:44.093Z · LW · GW

This is the form I expect answers to "why do you believe x"-type questions to take. Thanks.

Note: That interfax.ru link doesn't seem to work from North American or European IP addresses, but you can view a snapshot on the Wayback Machine here.

Comment by Jayson_Virissimo on Ukraine update 06/03/2022 · 2022-03-07T03:12:43.814Z · LW · GW

On March 4th Putin's troops shelled Zaporizhzhia nuclear power plant in Enerhodar city.

Why do you believe that?

Comment by Jayson_Virissimo on Russia has Invaded Ukraine · 2022-02-27T23:30:06.482Z · LW · GW

Care to specify over what time horizon you expect(ed) it to fold?

Comment by Jayson_Virissimo on What are sane reasons that Covid data is treated as reliable? · 2022-01-03T04:42:31.258Z · LW · GW

Will DM with info.

Comment by Jayson_Virissimo on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T23:30:31.989Z · LW · GW

Will DM you the number.

Comment by Jayson_Virissimo on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T23:06:21.334Z · LW · GW

I've personally known many people who have had serious medical problems that sure looked clearly like vaccine reactions.

I don't consider it a "serious medical problem", but I attempted to report (via the phone number on the paperwork given to me by the person who administered the shot at Walgreens) my 48-hour-long migraine + ~4-day-long high blood pressure (as measured by my Omron home blood pressure monitor) after getting a Pfizer booster. I was told they don't need me to fill anything out because those are already known side effects.

Searching Google for "does covid vaccine cause high blood pressure" just now returned a Nebraska Medicine FAQ page as the first result with the following answer:

So far, no data suggests that COVID-19 vaccines cause an increase in blood pressure.

WTF...

Comment by Jayson_Virissimo on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T23:01:58.465Z · LW · GW
Comment by Jayson_Virissimo on PredictionBook: A Short Note · 2021-12-27T20:45:45.339Z · LW · GW

PredictionBook now has a basic tagging functionality. Props to CFAR and Bellroy for supporting me in getting the feature added.

Comment by Jayson_Virissimo on 2020 PhilPapers Survey Results · 2021-11-03T20:51:32.576Z · LW · GW

Are we assuming affirming A-theory is indicative of science illiteracy because it is incompatible with special relativity or for some other reason?

Comment by Jayson_Virissimo on 2020 PhilPapers Survey Results · 2021-11-02T05:36:37.373Z · LW · GW

For reference, here are the raw data from when LWers took the survey in 2012 and here is the associated post from which it was extracted.

Comment by Jayson_Virissimo on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T20:50:48.060Z · LW · GW

This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Comment by Jayson_Virissimo on Welcome to Less Wrong! (2012) · 2021-05-06T21:47:58.869Z · LW · GW

Actually, several of the chapters of this book are very likely completely wrong and the rest are on shakier foundations than I believed 9 years ago (similar to other works of social psychology that accurately reported typical expert views at the time). See here for further elaboration.

I'm on the fence about recommending this book now, but please read skeptically if you do choose to read it.

Comment by Jayson_Virissimo on Defending the non-central fallacy · 2021-03-11T16:30:47.463Z · LW · GW

I agree with your point about there being at least two distinct ways to interpret the non-central fallacy, and also the OP's point that while ad hominem arguments are technically invalid, they can be of high inductive strength in some circumstances. I'm mostly critiquing Scott's choice of examples for introducing the non-central fallacy, since mixing it with other fallacious forms of reasoning makes it harder to see what the non-central part is contributing to the mistake being made. For this reason, I prefer the theft example.

Comment by Jayson_Virissimo on Defending the non-central fallacy · 2021-03-10T23:10:59.276Z · LW · GW

I think the Martin Luther King scenario is a particularly bad example for explaining the non-central fallacy, because it depends on a conjunction of fallacies, rather than isolating the non-central part. The inference from (1) MLK does/doesn't fit some category with negative emotional valence, to (2) his ideas are bad, just is the ad hominem fallacy (which is distinct from the non-central fallacy). The truth (or falsity) of Bloch's theorem is logically independent of whether or not André Bloch was a murderer (which he was).

Comment by Jayson_Virissimo on Making Vaccine · 2021-02-05T16:52:40.881Z · LW · GW

Does this add you to an email list where discussion is happening, or merely put you on a map so that others in the area can reach out to you on an ad hoc basis?

Comment by Jayson_Virissimo on How Hard Would It Be To Make A COVID Vaccine For Oneself? · 2020-12-24T21:20:49.910Z · LW · GW

I asked around about this on the ##hplusroadmap IRC channel:

15:59 < Jayson_Virissimo> Yeah, sorry. Was much more interested in the claim about peptide sourcing specifically. 
16:00 < Jayson_Virissimo> Is that 4-5 weeks duration normal? How flexible is it, if at all? 
16:01 < yashgaroth> some of them might offer expedited service, though I've never had cause to find out when ordering peptides and am not bothered to check...and it'd save you a week or two at most 
16:02 < Jayson_Virissimo> What would you guess as to the main cause? Does it really take that long to manufacture or is it slow to ship, or is there some legal check that happens that isn't instantaneous? 
16:04 < yashgaroth> the legal check isn't an issue, though I'm sure all the major synthesis houses are aware of the Radvac peptide sequences and may hassle you about them - especially if you're not ordering as a company...shipping's not a problem since overnight is standard, so I'd say manufacturing time combined with the people ahead of you in the queue 
16:04 < yashgaroth> and manufacturing includes purification, which is an important step for something you're ingesting, even if you're just snorting a line of it 
16:07 < Jayson_Virissimo> yashgaroth: do the labs have any legal risk of their own if you are ordering something like Radvac sequences as a private person, or are they "hassling you for your own good"? 
16:09 < yashgaroth> nah they're usually okay legally on their end, though most of them won't risk selling a small quantity to an individual since 'plausible deniability' wears a little thin on their end when you're buying sequences that match the Radvac ones

Comment by Jayson_Virissimo on How Hard Would It Be To Make A COVID Vaccine For Oneself? · 2020-12-23T21:45:09.349Z · LW · GW

Are there any English language sources where I could learn more about the legal issues surrounding human experimentation in Russia such as the one you mentioned?

Comment by Jayson_Virissimo on How Hard Would It Be To Make A COVID Vaccine For Oneself? · 2020-12-23T21:43:10.664Z · LW · GW

What explains the 4-5 weeks delivery time for special lab peptide synthesis?

Comment by Jayson_Virissimo on The rationalist community's location problem · 2020-09-24T18:54:31.634Z · LW · GW

Mati_Roy makes the case for Phoenix here.

Full Disclosure: I'm in Phoenix.

Comment by Jayson_Virissimo on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-10T06:55:09.868Z · LW · GW

A similar "measure function is non-normalizable" argument is made at length in McGrew, T., McGrew, L., & Vestrup, E. (2001). Probabilities and the Fine-Tuning Argument: A Sceptical View. Mind, 110(440), 1027-1037.

Comment by Jayson_Virissimo on Open & Welcome Thread - July 2020 · 2020-07-19T05:59:45.862Z · LW · GW

I've been working on an interactive flash card app to supplement classical homeschooling called Boethius. It uses a spaced-repetition algorithm to economize on the student's time and currently has exercises for (Latin) grammar, arithmetic, and astronomy.

Let me know what you think!
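The comment doesn't say which scheduling algorithm Boethius uses, but the classic SM-2 update behind many spaced-repetition tools can be sketched as follows (the function name and starting parameters here are illustrative assumptions, not the app's actual implementation):

```python
# Minimal sketch of the classic SM-2 spaced-repetition update rule.
# This is an illustrative assumption; the comment does not specify
# which algorithm the Boethius app actually uses.

def sm2_update(quality, repetitions, interval, easiness):
    """Return (repetitions, interval_days, easiness) after one review.

    quality: 0-5 self-rated recall score for this review.
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence, review again tomorrow.
        return 0, 1, easiness
    # Adjust the easiness factor, clamped to the SM-2 minimum of 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval = 1          # first successful review: 1 day
    elif repetitions == 2:
        interval = 6          # second successful review: 6 days
    else:
        interval = round(interval * easiness)  # intervals grow geometrically
    return repetitions, interval, easiness

# Three successful reviews at the top quality rating, starting fresh:
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_update(5, *state)
print(state)
```

The "economizing" the comment mentions comes from this geometric growth: each successful recall pushes the next review further out, so well-known cards cost almost no review time while weak cards keep reappearing.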

Comment by Jayson_Virissimo on Science eats its young · 2020-07-12T23:28:05.037Z · LW · GW

Do you happen to know where he discusses this idea?