Posts

Impactful Forecasting Prize for forecast writeups on curated Metaculus questions 2022-02-04T20:06:16.408Z
[Link] Intro to causal inference by Michael Nielsen (2012) 2019-07-19T12:19:50.817Z

Comments

Comment by yagudin on Large Language Models will be Great for Censorship · 2023-08-23T14:47:11.983Z · LW · GW

There is already a lot of automatic censoring happening. I am unsure how much LLMs add on top of existing and fairly successful techniques from spam filtering. And running LLMs directly is probably prohibitively expensive at the scale of social media (definitely for tech companies, maybe not for governments), but perhaps you can get an edge for some use cases with them.

Comment by yagudin on Private notes on LW? · 2023-08-04T21:45:22.283Z · LW · GW

But I (and I think others on LW team although for slightly different reasons) have been thinking about building a feature directly into LW to facilitate it. 


Maybe consider making it super easy (one-click easy) to export LW posts to Google Docs?

Comment by yagudin on Lightcone Infrastructure/LessWrong is looking for funding · 2023-06-17T20:01:33.491Z · LW · GW

ACX is probably a better reference class: https://astralcodexten.substack.com/p/2023-subscription-drive-free-unlocked. In January, ACX had 78.2k readers, of which 6.0k were subscribers, for a 7.7% subscription rate.

Comment by yagudin on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-10T00:22:55.570Z · LW · GW

Consider https://thepeteplan.wordpress.com/beginner-training/. 

Comment by yagudin on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-17T09:19:16.589Z · LW · GW

I think it might be good to normalize "just try stuff until something fixes your condition" as one of the treatment strategies. I guess it's a bit ironic that Dr. Spray-n-pray's indifference toward which pill worked and why seems so epistemically careless while actually maybe being the correct way to orient toward success when you optimize for luck and have little reliable information.

Comment by yagudin on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-01T00:14:27.847Z · LW · GW
  1. Russian military doctrine allows the usage of nuclear weapons to defend Russian territory.


This is ~false. See: https://forum.effectivealtruism.org/posts/TkLk2xoeE9Hrx5Ziw/nuclear-attack-risk-implications-for-personal-decision?commentId=ukEznwTnD78wFdZip#ukEznwTnD78wFdZip

Comment by yagudin on The Track Record of Futurists Seems ... Fine · 2022-07-04T16:16:15.254Z · LW · GW

See:

Comment by yagudin on Book Launch: The Engines of Cognition · 2022-03-16T18:43:10.450Z · LW · GW
Trust
  • Rule Thinkers In, Not Out (Scott Alexander)
  • Gears vs Behavior (John S. Wentworth)
  • Book Review: The Secret Of Our Success (Scott Alexander)
  • Reason isn't magic (Ben Hoffman)
  • "Other people are wrong" vs "I am right" (Buck Shlegeris)
  • In My Culture (Duncan Sabien)
  • Chris Olah's views on AGI safety (Evan Hubinger)
  • Understanding "Deep Double Descent" (Evan Hubinger)
  • How to Ignore Your Emotions (while also thinking you're awesome at emotions) (Hazard)
  • Paper-Reading for Gears (John S. Wentworth)
  • Book summary: Unlocking the Emotional Brain (Kaj Sotala)
  • Noticing Frame Differences (Raymond Arnold)
  • Propagating Facts into Aesthetics (Raymond Arnold)
  • Do you fear the rock or the hard place? (Ruben Bloom)
  • Mental Mountains (Scott Alexander)
  • Steelmanning Divination (Vaniver)

Modularity
  • Book Review: Design Principles of Biological Circuits (John S. Wentworth)
  • Reframing Superintelligence: Comprehensive AI Services as General Intelligence (Rohin M. Shah)
  • Building up to an Internal Family Systems model (Kaj Sotala)
  • Being the (Pareto) Best in the World (John S. Wentworth)
  • The Schelling Choice is "Rabbit", not "Stag" (Raymond Arnold)
  • Literature Review: Distributed Teams (Elizabeth Van Nostrand)
  • Gears-Level Models are Capital Investments (John S. Wentworth)
  • Evolution of Modularity (John S. Wentworth)
  • You Have About Five Words (Raymond Arnold)
  • Coherent decisions imply consistent utilities (Eliezer Yudkowsky)
  • Alignment Research Field Guide (Abram Demski)
  • Forum participation as a research strategy (Wei Dai)
  • The Credit Assignment Problem (Abram Demski)
  • Selection vs Control (Abram Demski)

Incentives
  • Asymmetric Justice (Zvi Mowshowitz)
  • The Copenhagen Interpretation of Ethics (Jai Dhyani)
  • Unconscious Economics (Jacob Lagerros)
  • Power Buys You Distance From The Crime (Elizabeth Van Nostrand)
  • Seeking Power is Often Convergently Instrumental in MDPs (Alexander Turner & Logan Smith)
  • Yes Requires the Possibility of No (Scott Garrabrant)
  • Mistakes with Conservation of Expected Evidence (Abram Demski)
  • Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists (Zack M. Davis)
  • Excerpts from a larger discussion about simulacra (Ben Hoffman)
  • Moloch Hasn’t Won (Zvi Mowshowitz)
  • Integrity and accountability are core parts of rationality (Oliver Habryka)
  • The Real Rules Have No Exceptions (Said Achmiz)
  • Simple Rules of Law (Zvi Mowshowitz)
  • The Amish, and Strategic Norms around Technology (Raymond Arnold)
  • Risks from Learned Optimization: Introduction (Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, & Scott Garrabrant)
  • Gradient hacking (Evan Hubinger)

Failure
  • The Parable of Predict-O-Matic (Abram Demski)
  • Blackmail (Zvi Mowshowitz)
  • Bioinfohazards (Megan Crawford, Finan Adamson, & Jeffrey Ladish)
  • What failure looks like (Paul Christiano)
  • Seeking Power is Often Convergently Instrumental in MDPs (Alexander Turner & Logan Smith)
  • AI Safety “Success Stories” (Wei Dai)
  • Reframing Impact (Alexander Turner)
  • The strategy-stealing assumption (Paul Christiano)
  • Is Rationalist Self-Improvement Real? (Jacob Falkovich)
  • The Curse Of The Counterfactual (P.J. Eby)
  • human psycholinguists: a critical appraisal (Nostalgebraist)
  • Why wasn't science invented in China? (Ruben Bloom)
  • Make more land (Jeff Kaufman)
  • Rest Days vs Zombie Days (Lauren Lee)


Here is a google sheet.

Comment by yagudin on Challenges with Breaking into MIRI-Style Research · 2022-01-18T15:27:46.511Z · LW · GW

I want to mention that Tsvi Benson-Tilsen is a mentor at this summer's PIBBSS. So some readers might consider applying (the deadline is Jan 23rd).

I myself was mentored by Abram Demski once through the FHI SRF, which AFAIK was matching fellows with a large pool of researchers based on mutual interests.

Comment by yagudin on The Best Software For Every Need · 2021-12-27T11:37:07.622Z · LW · GW

I am looking for text-to-speech tools for various contexts.  As of now, I am using

Comment by yagudin on Book Launch: The Engines of Cognition · 2021-12-21T11:18:31.550Z · LW · GW

I would appreciate it if the ToC linked to the web versions of the essays.

Comment by yagudin on Epistea Summer Experiment (ESE) · 2021-12-03T07:53:50.877Z · LW · GW

A follow-up (h/t LW review). I got quite a bit out of the workshop, most importantly

  • I found a close friend and collaborator, whom I don't think I would have met otherwise.
  • I found a close friend and co-founder, whom I was likely to meet otherwise, but it's unlikely that we would have formed a strong enough bond by covid times.

There was much more, but it was much less legible and "evaluatable." I think ESE was excellent, and I would have done it even if I had known that I wouldn't get two close friendships out of it.

Comment by yagudin on Petrov Day Retrospective: 2021 · 2021-10-22T07:34:15.612Z · LW · GW

Or, to change tack: the operating budget of the LessWrong website has historically been ~$600k, and this budget is artificially low because the site has paid extremely below-market salaries. Adjusting for the market value of the labor, the cost is more like $1M/year, or $2,700/day. If I assume LessWrong generates more value than the cost required to run it, I estimate that the site provides at least $2,700/day in value, probably a good deal more.


I think this estimate is mistaken because it ignores marginalism: basically, the cost of disabling LW for a year is much larger than 365 times the cost of disabling it for only a day. The same goes for disabling the whole website vs. disabling only the frontpage.

(Sorry for adding salt to hurt feelings; posting because impact evaluation of longtermism projects is important.)

Comment by yagudin on Hoagy's Shortform · 2021-06-23T14:43:42.317Z · LW · GW

Maybe reading Gelman's self-contained comments on SSC's More Confounders would make you more confused in a good way.

Comment by yagudin on alkjash's Shortform · 2020-12-17T09:40:23.511Z · LW · GW

Hey! Could you say more about the causal link between the Sequences and writing these papers, please:

I was able to do from muscle memory certain calculations about conditional probability and expectation that might have taken weeks otherwise (if we figured them out at all). I attribute this ability in large part to reading the Sequences.

I think my confusion comes from (a) already having enough math background (I read some chapters of The Probabilistic Method years ago); and (b) the fact that, while reading the Sequences and even more so AF discussions added to my understanding of formal epistemology, I am surprised by your emphasis on how the Sequences affected your muscle memory and ability to do calculations.

Comment by yagudin on What are Examples of Great Distillers? · 2020-11-13T18:23:52.974Z · LW · GW

As this answer got upvoted, I collected some of Dubna's courses taught in English for which recordings are available (look for "Доступны 4 видеозаписи курса", i.e. "4 video recordings of the course are available").

Comment by yagudin on What are Examples of Great Distillers? · 2020-11-12T15:58:51.138Z · LW · GW
Comment by yagudin on Models predicting significant violence in the US? · 2020-11-03T09:51:25.360Z · LW · GW

Metaculus 2020 U.S. Election Risks Survey doesn't give >1% for >5000 deaths, but I think it is justified to infer something like that from it:

While large-scale violence and military intervention to quell civil unrest seem unlikely, experts still judged these possibilities to be far from remote. Experts predicted a median of 60 deaths occurring due to election-related violence, with an 80% confidence interval of 0 to 912 fatalities that reflects a high degree of uncertainty. Still, the real possibility of violence is a notable departure from the peaceful transitions that have been the hallmark of past U.S. elections. Results indicate an 8% probability of over 1,000 election-related deaths — suggesting that while widespread sustained clashes are unlikely, this possibility warrants real concern. Experts assigned a 10% median prediction that President Trump will invoke the Insurrection Act to mobilize troops during the transition period.

Comment by yagudin on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T12:08:38.608Z · LW · GW

A better example: one might criticize the CDC for a lack of advice aimed at vulnerable demographics. But the absence might result not from a lack of judgment but from political constraints. E.g. jimrandomh writes:

Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House's direction.

Upd: this might be indicative of other negative characteristics of the CDC (which might contribute to unreliability), but I don't know enough about the US gov to assess it.

Comment by yagudin on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T11:51:26.133Z · LW · GW

For me it is, indeed, a reason to put less weight on their analysis and to expect less useful work/analysis to be done by them in the short/medium term.

But I think this consideration also weakens certain types of arguments about the CDC's lack of judgment/untrustworthiness. For example, arguments like "they did this, but should have done better" lose part of their Bayesian weight, as the organization likely made a lot of decisions under time pressure and other constraints. And things are more likely to go wrong if you're under-staffed and hence prioritize more aggressively.

I don't expect to have good judgment here, but it seems to me that "testing kits the CDC sent to local labs were unreliable" might fall into this category. It might have been the right call for them to distribute tests quickly and ~skip ensuring that the tests didn't have a false-positive problem.

Comment by yagudin on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T11:14:14.337Z · LW · GW

Unless there are large enough demographics for which this post looks credible while FB conspiracies do not.

Comment by yagudin on Bucky's Shortform · 2020-03-08T21:19:42.879Z · LW · GW

If the only issue is tone, you could write something like: 'Initially, I was confused/surprised by the core claim you made, but reading this, this, and that [or thinking for 15 minutes / further research] made me believe that your position is basically correct.' This looks quite different from:

[...] "Yes, you are correct about that" comes across as quite arrogant [...]
Comment by yagudin on Epistea Workshop Series: Epistemics Workshop, May 2020, UK · 2020-02-28T17:48:30.468Z · LW · GW

I attended the Epistea Summer Experiment and greatly enjoyed it. (At the same time, I am quite skeptical about the value of any rationality workshop for EA-inspired work.)

Comment by yagudin on We run the Center for Applied Rationality, AMA · 2019-12-20T21:22:22.184Z · LW · GW

I think Nuno's time-capped analysis is good.

Comment by yagudin on Let's Read: Superhuman AI for multiplayer poker · 2019-07-17T14:25:19.357Z · LW · GW

Thanks for the post. I would recommend reading the original blog post by Noam Brown as it has the proper level of exposition and more details/nuances.

Overall, it seems that Pluribus is conceptually very similar to Libratus; sadly, there are no new insights about >2-player games. My impression is that because poker players don't collude/cooperate much, playing something close to an equilibrium against them will make you rich.

Comment by yagudin on What are principled ways for penalising complexity in practice? · 2019-07-02T08:02:15.142Z · LW · GW
If one has 2 possible models to fit a data set, by how much should one penalize the model which has an additional free parameter?

Penalization might not be necessary if your learning procedure is stochastic and favors simple explanations. I encourage you to take a look at the nice poster/paper «Deep learning generalizes because the parameter-function map is biased towards simple functions» (PAC-Bayesian learning theory + empirical intuitions).

Comment by yagudin on Alignment Newsletter #51 · 2019-04-03T18:08:12.057Z · LW · GW

Rohin, thank you for the especially long and informative newsletter.

When there are more samples, we get a lower validation loss [...]

I guess you meant a higher validation loss?

Comment by yagudin on Announcing Rational Newsletter · 2019-03-31T18:51:02.675Z · LW · GW

Alexey, happy birthday to your podcast! I've just subscribed and hope you will post consistently in the future. How many subscribers do you have?

Comment by yagudin on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-03-31T18:06:20.427Z · LW · GW

If you are curious why the Russian chatroom is so big, I encourage you to read about Kocherga. With 174 karma and 54 votes, it is the highest-rated non-curated LW post at the moment.

Comment by yagudin on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-03-31T18:04:37.018Z · LW · GW

I would like to highlight Russian LessWrong Slack, which has 2000+ registered users, ~150 WAU (among which ~50 are posting) and ~80 DAU (~25 are posting).

Comment by yagudin on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-03-31T17:58:17.414Z · LW · GW

Said Achmiz's LessWrong Diaspora Map lists 12 chatrooms.

Comment by yagudin on What self-help has helped you? · 2018-12-22T13:01:51.075Z · LW · GW

I augment Pomodoros with

  • UltraWorking's Cycles, a check-list/spreadsheet for productive and focused work;
  • and Stretchly, a cross-platform break-reminder app.
Comment by yagudin on Open Thread November 2018 · 2018-12-11T07:36:13.486Z · LW · GW

EA Forum: Donating effectively is usually better than impact investing.

Comment by yagudin on Winter Solstice 2018 Roundup · 2018-11-28T09:23:50.295Z · LW · GW

I am quite sure that Moscow's LW community will celebrate a Secular Solstice on 21 or 22 Dec.

Comment by yagudin on Incorrect hypotheses point to correct observations · 2018-11-22T06:45:24.804Z · LW · GW

An example from Feynman's «The Character of Physical Law»:

The next guy who did something great was Maxwell, who obtained the laws of electricity and magnetism. What he did was this. He put together all the laws of electricity, due to Faraday and other people who came before him, and he looked at them and realized that they were mathematically inconsistent. In order to straighten it out he had to add one term to an equation. He did this by inventing for himself a model of idler wheels and gears and so on in space. He found what the new law was – but nobody paid much attention because they did not believe in the idler wheels. We do not believe in the idler wheels today, but the equations that he obtained were correct. So the logic may be wrong but the answer right.
Comment by yagudin on Open Thread November 2018 · 2018-11-09T09:30:42.131Z · LW · GW

Great to hear!

Comment by yagudin on Book review: Why we sleep · 2018-11-02T22:39:23.984Z · LW · GW

Wikipedia page for 'Cognitive behavioral therapy for insomnia' is a great source of useful sleep related habits.

Comment by yagudin on Open Thread November 2018 · 2018-11-02T14:30:53.845Z · LW · GW

Divestment and mission hedging are examples of politically motivated financial activity. Divestment seems to be somewhat popular but inefficient. Mission hedging is not well known but probably quite good.

Comment by yagudin on Open Thread November 2018 · 2018-11-01T08:06:08.521Z · LW · GW

A very successful crowdfunding campaign for printing HPMoR has happened in Russia. 21k books are going to be printed: some of them will go to public/university libraries, some to gifted students. More good HPMoR-related news is coming from Russia, but it's too early to announce it.

Comment by yagudin on The Art of the Overbet · 2018-10-20T13:43:58.782Z · LW · GW

I think this paper, which models winner-takes-all, public-knowledge situations (e.g. the space race between the US and the USSR) via the «Guess Who?» game, is an interesting formal model of the first half of this post.

“Guess Who?” is a popular two player game where players ask “Yes”/“No” questions to search for their opponent’s secret identity from a pool of possible candidates. This is modeled as a simple stochastic game. Using this model, the optimal strategy is explicitly found. Contrary to popular belief, performing a binary search is not always optimal. Instead, the optimal strategy for the player who trails is to make certain bold plays in an attempt catch up. This is discovered by first analyzing a continuous version of the game where players play indefinitely and the winner is never decided after finitely many rounds.
Comment by yagudin on Book review: Why we sleep · 2018-09-28T15:08:21.429Z · LW · GW

You are welcome! A general concern about the pace of scientific progress.

Comment by yagudin on Book review: Why we sleep · 2018-09-24T11:00:06.549Z · LW · GW

The most in-depth, though somewhat outdated (c. 2012), article on sleep was written by Piotr Wozniak, whom you might know as a pioneer of spaced-repetition software. The article is ~300 pages long. It includes summary & myths sections, which are a bit longer than this post.

Comment by yagudin on Paper: "A simple function of one parameter (θ) can fit any collection of ordered pairs {Xi,Yi} to arbitrary precision"--implications for Occam? · 2018-05-31T17:47:17.196Z · LW · GW

Somehow related papers in ML / DL:

  • Keeping NN Simple by Minimizing the Description Length of the Weights (Hinton, 1997);
  • Binarized Neural Networks (Courbariaux, 2016).
Comment by yagudin on Understanding is translation · 2018-05-31T06:27:23.636Z · LW · GW

It seems to me that Dacyn's code executes [stuff] at least once for any n. But if n <= 0, the original while loop does not execute its body at all. Dacyn's code looks like a do-while loop.

Comment by yagudin on Are you the rider or the elephant? · 2018-02-22T12:07:28.041Z · LW · GW

I associate myself with the unconscious-self more and more (note: the unconscious-self is bigger than the elephant-self, because some modules in the brain are deliberate & analytical but not directly available to the verbal/conscious rider; I very much agree with @moridinamael's comment above).

The conscious-self seems more like a press secretary for the more hard-working unconscious-self, which is in charge of most of the decision-making. But, ugh, everyone has experienced the «conscious ruling the unconscious» (≈ will-power). I think the role of the conscious-self in «the use of willpower» is to communicate from long-term modules to short-term modules of the unconscious-self.

«The Inner Game of Tennis» contains some recommendations on how to improve communication between the modules. I also found a TDT-mindset helpful for telling early-evolved modules what later-evolved modules think is worth doing.

Comment by yagudin on Rationalist Lent · 2018-02-14T10:40:50.393Z · LW · GW

Youtube

I permanently blocked the website in all browsers I use. I use the command-line tool youtube-dl to download the videos I want/need to watch. This workflow gives me the option to watch videos (with some friction that makes me reevaluate the decision to watch each one), but prevents me from engaging with YouTube, a risky game I might 'lose' otherwise.

Comment by yagudin on Rationalist Lent · 2018-02-14T10:36:23.457Z · LW · GW

I predict that a lot of people who take rationalist Lent's advice seriously will try to quit the same things, and that others have already hit on a good diet of experience that they could emulate. So it would be helpful to have a list of diets for quitting unwanted behaviour. Feel free to leave your recipes as a reply to this comment.

Comment by yagudin on A List Of Questions & Exercises For Reviewing Your Year · 2018-01-01T12:09:22.665Z · LW · GW

See also: the guide by Alex Vermeer is helpful for reviewing the past year and planning the following one in an analytical and systematic way.

Comment by yagudin on Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm · 2017-12-06T20:31:59.602Z · LW · GW

Two interesting questions arise:

  • could AlphaZero beat the best human-computer team?
  • would a human-AZ team systematically beat AZ alone?

I think the answer to the first question is positive, but unfortunately I couldn't make much sense of the available raw data on Freestyle chess, so my opinion is based on the Marginal Revolution blog post. A negative answer to the second question might make optimists about human-AI cooperation, like Kasparov, less optimistic.