Comment by dr_manhattan on Counterfactuals about Social Media · 2019-04-23T15:50:52.829Z · score: 4 (2 votes) · LW · GW

I get that my usage hurts indirectly; my question was specifically: if everyone used FB occasionally and for similar purposes as I do, would FB still be detrimental? Harming other people because they have unhealthy patterns of usage is still a concern, but a lesser one to me.

Comment by dr_manhattan on Counterfactuals about Social Media · 2019-04-22T18:15:02.660Z · score: 10 (7 votes) · LW · GW

I use FB occasionally to stay in touch with family/friends.

I subscribe to interesting people on Twitter and find it a great source of intellectual information.

I know these are harmful to some people, and I've occasionally noticed addictive behavior in myself, but overall it seems like a good trade. If someone wants to explain to/convince me that this is highly dangerous to me in a non-obvious way, or that **my kind** of usage is endangering the commons, I'm open to hearing it.

Comment by dr_manhattan on Crypto quant trading: Intro · 2019-04-21T11:56:25.406Z · score: 2 (1 votes) · LW · GW

Are you working with Satvik? I know he was into this stuff.

Comment by dr_manhattan on Crypto quant trading: Intro · 2019-04-18T17:31:36.799Z · score: 5 (3 votes) · LW · GW

I have one foot in finance, and I'm still learning a lot. Since it appears you taught yourself this stuff, what were good resources for you?

Comment by dr_manhattan on How do people become ambitious? · 2019-04-09T13:14:12.202Z · score: 5 (3 votes) · LW · GW

What is good literature on learned helplessness?

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:34:30.952Z · score: 9 (3 votes) · LW · GW
The One is a myth.

Yes, 100%. But really good chemistry comes in grades, and in this case I feel really, really good chemistry. There are probably partners "above the acceptable threshold" nearby, and if I dated long enough I might feel the same about someone else, but in this case the LDR is a temporary inconvenience: I just want to make it work in LDR format until I feel good/convinced enough to turn it into an N(ear)DR, which is well within my power to do.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:31:56.155Z · score: 3 (2 votes) · LW · GW

Not retracted (I guess there's no delete feature anymore), just expanding into a longer comment elsewhere.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:21:23.628Z · score: 2 (1 votes) · LW · GW

It's an LDR that I intend to turn into an NDR; the LDR is temporary. Hope that helps.

Comment by dr_manhattan on What would you need to be motivated to answer "hard" LW questions? · 2019-03-29T17:19:27.596Z · score: 6 (3 votes) · LW · GW

Sure. I know something about general CS stuff, ML, applied Bayesian stats, and finance. Generally I would not be answering questions for a bounty (I'm well compensated to do this at work and I don't want *more work*), but I would spend some time if I thought it helped people or contributed to an important body of knowledge. For me it comes from a different "time budget". I realize many people would feel differently, but there's probably a class of people like me.

Comment by dr_manhattan on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T20:20:24.628Z · score: 2 (3 votes) · LW · GW

I think the answer will be highly dependent on the question. The opportunity cost for a yoga expert answering a question on yoga is very different from that of someone answering an investment question.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-26T12:50:59.904Z · score: 8 (5 votes) · LW · GW

Thanks for sharing. Good story, and it happens to give me encouragement, as I seem to be falling into a (for now) LDR relationship with somebody in another country. It feels more "doable" based on this.

Comment by dr_manhattan on [Question] Tracking accuracy of personal forecasts · 2019-03-25T12:59:07.576Z · score: 3 (2 votes) · LW · GW

Agree! Tricky territory. I think it's fair to take an outside view as a first cut (e.g., how many people have survived Everest), then very carefully evaluate whether the reference class is relevant. Yudkowsky writes about this quite a bit, though I cannot recall in which particular place.

Comment by dr_manhattan on The Main Sources of AI Risk? · 2019-03-22T12:42:08.765Z · score: 6 (3 votes) · LW · GW

Great idea. It would be awesome if someone added links to the best reference posts for each one of these (an additional benefit is that this would identify whitespace that needs to be filled).

Comment by dr_manhattan on [Question] Tracking accuracy of personal forecasts · 2019-03-21T16:24:15.295Z · score: 3 (2 votes) · LW · GW
"Will Adam be able to get back to cycling within a month [after a recent accident]?"

(Probably an unnecessary word of caution:) do not forecast your own behavior, due to the risk of reduced agency.

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-16T17:12:19.218Z · score: 2 (1 votes) · LW · GW

The confusion I'm trying to resolve is that, if you have priors, it feels like saving the best for last (assuming no attrition in the pool, which is obviously false). The intuition here is that you're using some percentage of the pool for calibration purposes, and you should not be using your best prospects for calibration.
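(A minimal sketch of that calibration intuition in the classic no-priors setting; the function and variable names are mine, the ~1/e "look" fraction is the standard look-then-leap threshold, and this does not cover the priors variant asked about below.)

```python
import random

def secretary_success_rate(n=100, look_frac=0.37, trials=20000):
    """Monte Carlo estimate of how often the classic look-then-leap rule
    picks the single best of n candidates: observe the first look_frac*n
    candidates purely for calibration, then take the first one who beats
    them all."""
    k = max(1, int(n * look_frac))   # size of the calibration ("look") phase
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))       # higher number = better candidate
        random.shuffle(ranks)
        threshold = max(ranks[:k])   # best candidate seen during calibration
        choice = ranks[-1]           # forced to take the last one if nobody beats the threshold
        for r in ranks[k:]:
            if r > threshold:
                choice = r
                break
        wins += (choice == n - 1)    # did we end up with the overall best?
    return wins / trials

# With look_frac near 1/e this converges to roughly 0.37 as n grows.
print(secretary_success_rate())
```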

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-15T12:24:21.876Z · score: 2 (1 votes) · LW · GW

What if there are prior probabilities over "secretary ability", e.g. from reading their resumes or getting references? Has anyone worked out that variant?

Comment by dr_manhattan on Leaky Concepts · 2019-03-06T14:00:36.452Z · score: 2 (1 votes) · LW · GW
If you want to go mad, commit to drawing two lines the same length and don’t stop until you are dead from trying to line up atoms to be in the right places. There are quicker ways to go insane.

Related humor: https://www.youtube.com/watch?v=w-wbWGwZ7_k

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-05T14:05:03.029Z · score: 2 (1 votes) · LW · GW

Hi Anna, since you've made the specific claim publicly (I assume intended as a warning), would you mind commenting on this?

https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD

Specifically, it's a given that there's some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their worldviews). You seem to imply that with Michael it's more than that (I think Vaniver alludes to it with the "creepy" comment).

In other words, is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael Quirrell+ because of his outlook?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T21:22:34.099Z · score: 2 (1 votes) · LW · GW

Thanks (and thanks to Yoav for the clarification). So, in your opinion, is MV dangerous to a class of people with certain kinds of beliefs, the way Harry was to Draco (where the risk was purely a necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T13:20:00.095Z · score: 2 (1 votes) · LW · GW

Ah, sorry, would you mind elaborating on the Draco point in normie speak, if you have the bandwidth?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-01T17:59:59.053Z · score: 4 (2 votes) · LW · GW

Good points (similar to Raemon's). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I consider the "subtle skill of seeing the world with fresh eyes" potentially extremely valuable, which I suppose is why Anna kept encouraging people.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-08T15:07:01.531Z · score: 2 (1 votes) · LW · GW

Thanks for doing this! I couldn't figure out how.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-26T21:03:32.378Z · score: 6 (3 votes) · LW · GW

My take+decision on the MIRI issue, in ROT13 to continue the pattern

Nabgure (zvabe) "Gbc Qbabe" bcvavba. Ba gur ZVEV vffhr: nterr jvgu lbhe pbapreaf, ohg pbagvahr qbangvat, sbe abj. V nffhzr gurl'er shyyl njner bs gur ceboyrz gurl'er cerfragvat gb gurve qbabef naq jvyy nqqerff vg va fbzr snfuvba. Vs gurl qb abg zvtug nqwhfg arkg lrne. Gur uneq guvat vf gung ZVEV fgvyy frrzf zbfg qvssreragvngrq va nccebnpu naq gnyrag bet gung pna hfr shaqf (if BcraNV naq QrrcZvaq naq jryy-shaqrq npnqrzvp vafgvghgvbaf)

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-21T02:22:02.189Z · score: 6 (3 votes) · LW · GW

Done. That thread is huge.

Comment by dr_manhattan on The Best Textbooks on Every Subject · 2018-11-21T02:21:22.407Z · score: 11 (2 votes) · LW · GW

Recommending Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best all-in-one intro to *applied* Bayesian statistics.

The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert as having the most holistic coverage.

I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

PS: He has a playlist of complementary videos to go along with the book.

Comment by dr_manhattan on New safety research agenda: scalable agent alignment via reward modeling · 2018-11-20T21:20:43.013Z · score: 6 (3 votes) · LW · GW

They mention and link to iterated amplification in the Medium article.

Scaling up
In the long run, we would like to scale reward modeling to domains that are too complex for humans to evaluate directly. To do this, we need to boost the user’s ability to evaluate outcomes. We discuss how reward modeling can be applied recursively: we can use reward modeling to train agents to assist the user in the evaluation process itself. If evaluation is easier than behavior, this could allow us to bootstrap from simpler tasks to increasingly general and more complex tasks. This can be thought of as an instance of iterated amplification.

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-09T16:47:07.963Z · score: 4 (2 votes) · LW · GW

Thanks, added a comment

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-08T21:24:08.844Z · score: 12 (3 votes) · LW · GW

Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best intro to *applied* Bayesian stats. The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert as having the most holistic coverage.

PS: He has a playlist of complementary videos to go along with the book.

ETA: I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

Comment by dr_manhattan on Starting Meditation · 2018-11-08T21:11:05.825Z · score: 5 (2 votes) · LW · GW

Thanks!

Comment by dr_manhattan on Bayes Questions · 2018-11-08T13:50:53.710Z · score: 3 (2 votes) · LW · GW

The technologies I'm suggesting are just implementations of Bayes, which is what you're trying to do. There's some theory as to *how* they do inference (special versions of MCMC, basically), but this is an "implementation detail" to a degree. Here are some references to get you started, though they're mostly Stan-centered: http://mc-stan.org/users/documentation/external.html. If you want a better overall picture of the theory, I really like this book: https://ben-lambert.com/a-students-guide-to-bayesian-statistics/; it takes you from the basics all the way to Stan usage.
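(For concreteness, a minimal sketch of what "Bayes via MCMC" looks like in one of these toolkits, using PyMC; the toy model and variable names are purely illustrative and not tied to your actual problem.)

```python
import numpy as np
import pymc as pm  # `pip install pymc`; older installs use `import pymc3 as pm`

# Toy data: pretend we don't know the true mean and spread
data = np.random.normal(loc=2.0, scale=1.5, size=200)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)    # weakly informative prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)   # prior on the noise scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)

    # NUTS (a Hamiltonian MCMC variant) draws samples from the joint posterior
    idata = pm.sample(1000, tune=1000)

print(float(idata.posterior["mu"].mean()), float(idata.posterior["sigma"].mean()))
```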

Comment by dr_manhattan on Bayes Questions · 2018-11-07T17:38:22.023Z · score: 3 (2 votes) · LW · GW

Can you program? If so, I highly recommend using PyMC or Stan for this kind of work. There's a pretty rich literature and culture around how to iteratively improve these types of models, and some tool support around these specific toolkits.

Comment by dr_manhattan on Starting Meditation · 2018-10-26T00:09:14.983Z · score: 2 (1 votes) · LW · GW

I should reread the book, but the delineation of precise stages, each with a precise description of the experiences, seemed unrealistic to me. Would love to hear your take.

Comment by dr_manhattan on Starting Meditation · 2018-10-25T18:42:59.609Z · score: 2 (1 votes) · LW · GW

Did you do a review of TMI somewhere? While I liked the book, the author seems overconfident (well beyond my comfort level) about "you will have this kind of experience at this stage", largely backed by personal experience.

Comment by dr_manhattan on Fasting Mimicking Diet Looks Pretty Good · 2018-10-04T22:07:37.853Z · score: 4 (2 votes) · LW · GW

Thanks for writing this up! Vassar mentioned you've looked into this, and I was going to ask you about it.

One thing in the book I was a bit suspicious of is his strong no-skipping-breakfast recommendation; apparently it's based on a study, but it seemed a lot less supported than the rest.

Comment by dr_manhattan on Reflections on Being 30 · 2018-10-03T13:58:04.862Z · score: 4 (2 votes) · LW · GW

Great essay, Sarah! I mentally added "the cheapest thing to give up is being a dumbass" to my quote file. And happy birthday!

Comment by dr_manhattan on Direct Primary Care · 2018-09-25T18:28:23.409Z · score: 2 (1 votes) · LW · GW

It seems that if this works on a small scale it should scale up. The one counter-force to keep in mind is the rent-seekers in this space; they can afford to ignore a small, quirky operation, but fighting this at scale is existential for them. You might actually need Amazon scale to fight them.

For reference on political forces around this:

Obama said, continuing on the healthcare theme. “Everybody who supports single-payer healthcare says, ‘Look at all this money we would be saving from insurance and paperwork.’ That represents 1 million, 2 million, 3 million jobs of people who are working at Blue Cross Blue Shield or Kaiser or other places. What are we doing with them? Where are we employing them?”

https://www.thenation.com/article/mr-obama-goes-washington/

Comment by dr_manhattan on Open AI co-founder on AGI · 2018-09-16T19:59:57.394Z · score: 1 (4 votes) · LW · GW
not at the same level of rigour as MIRI

I would agree, with a qualification. Due to the OAI co-founders' personal risk attitudes, I would say that as an organization they care less about it. As far as the actual quality of research goes, they have some pretty respectable people like Paul Christiano and Dario Amodei ("respectable" here is a second-hand judgement; I do not consider myself qualified for a first-hand one).

Comment by dr_manhattan on The Scent of Bad Psychology · 2018-09-13T15:08:58.556Z · score: 10 (2 votes) · LW · GW

Why not post the full article, especially since this is your own blog?

Comment by dr_manhattan on I am the very model of a self-recursive modeler · 2018-09-07T16:55:29.673Z · score: 4 (2 votes) · LW · GW
It doesn't help that with an IQ of 134 I sometimes feel as dumb as a horse when browsing LW.

We should start a club :). Something like a Jerry daycare.

Comment by dr_manhattan on Would you benefit from audio versions of posts? · 2018-07-28T19:03:35.585Z · score: 3 (2 votes) · LW · GW

For me this does compare favorably to the TTS already built into the getpocket.com app, which also has (a) follow-along-with-text and (b) speed-control features.

Comment by dr_manhattan on Would you benefit from audio versions of posts? · 2018-07-26T13:48:16.225Z · score: 4 (2 votes) · LW · GW

I occasionally use the TTS in the Pocket app (https://getpocket.com), syncing the posts via their browser plugin. It's quite decent, and I expect this technology to improve quite a bit in the coming years.

Comment by dr_manhattan on Paper Trauma · 2018-06-05T18:11:27.367Z · score: 11 (2 votes) · LW · GW

There are now tablet apps (e.g. GoodNotes or OneNote on the iPad) that give you huge virtual paper (thanks to the zoom feature) and a wide variety of virtual writing tools. They also allow select and copy+paste (instead of re-writing something repetitive), and they solve the preservation-of-notes problem some people care about. I highly recommend trying them.

Comment by dr_manhattan on The Intelligent Social Web · 2018-05-22T13:09:38.949Z · score: 0 (2 votes) · LW · GW

This really made me think of Gandalf as a superb conductor/chef of the social web, working from very raw ingredients (Bilbo, Frodo, Aragorn).

Comment by dr_manhattan on Announcement: AI alignment prize round 2 winners and next round · 2018-04-16T16:47:25.088Z · score: 9 (2 votes) · LW · GW

I wonder if it's possible to help with this in a tax-advantaged way. Maybe set up a donation to MIRI earmarked for this kind of thing.

Using rationality to debug Machine Learning

2018-04-10T20:03:44.357Z · score: 55 (12 votes)
Comment by dr_manhattan on Algorithms as Case Studies in Rationality · 2018-03-28T16:18:47.691Z · score: 3 (1 votes) · LW · GW

Recommending http://algorithmstoliveby.com/ for the same reasons.

Comment by dr_manhattan on The abruptness of nuclear weapons · 2018-03-06T15:39:10.291Z · score: 7 (3 votes) · LW · GW

I think a highly relevant detail here is that the biggest bottleneck in the development of nuclear weapons is the refinement of fissionable material, which is a tremendously intensive industrial process (and still remains the major deterrent to obtaining nukes). Without it, the development would have been a lot more abrupt (and likely successful on the German side).

Comment by dr_manhattan on A Proper Scoring Rule for Confidence Intervals · 2018-02-14T18:42:56.766Z · score: 3 (1 votes) · LW · GW

Would you mind spelling out the integral part?

Comment by dr_manhattan on The Utility of Human Atoms for the Paperclip Maximizer · 2018-02-02T14:05:49.733Z · score: 9 (3 votes) · LW · GW
Even Friendly AI may deconstruct humans for their atoms in the AI’s early stages, and as such sacrificy will translate in the higher total number of sentient beings in the universe at the end.

“You keep using that word, I do not think it means what you think it means.”

Comment by dr_manhattan on Hammers and Nails · 2018-01-24T21:59:46.745Z · score: 13 (3 votes) · LW · GW

Feynman also explicitly spoke about hammer-mode: “[so] I had got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me”. There are also some excerpts here: https://www.farnamstreetblog.com/2016/07/mental-tools-richard-feynman/

Comment by dr_manhattan on The Right to be Wrong · 2017-11-30T14:46:03.105Z · score: 19 (5 votes) · LW · GW

The "Space Mom" part seems like what exploration/exploitation meta-algorithm feels like from the inside. To do quality exploration you need to shut down the inner voice of exploitation.

Also relevant: https://www.physics.ohio-state.edu/~kilcup/262/feynman.html

Bayesian statistics as epistemic advantage

2017-07-25T17:07:49.660Z · score: 0 (0 votes)

MILA gets a grant for AI safety research

2017-07-21T15:34:55.493Z · score: 9 (9 votes)

Learning from Human Preferences - from OpenAI (including Christiano, Amodei & Legg)

2017-06-13T15:52:00.294Z · score: 9 (9 votes)

NIPS 2015

2015-12-07T20:31:11.779Z · score: 7 (8 votes)

Velocity of behavioral evolution

2014-12-19T17:34:36.217Z · score: 3 (4 votes)

What Peter Thiel thinks about AI risk

2014-12-11T21:22:27.167Z · score: 12 (13 votes)

Cognitive distortions of founders

2014-12-11T03:19:58.802Z · score: 3 (6 votes)

FAI PR tracking well [link]

2014-08-15T21:23:48.622Z · score: 7 (8 votes)

Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

2014-04-21T16:55:56.240Z · score: 18 (21 votes)

Gunshot victims to be suspended between life and death [link]

2014-03-27T16:33:49.413Z · score: 24 (27 votes)

Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link]

2014-01-30T01:20:10.579Z · score: 13 (26 votes)

H+ review of James Miller's Singularity Rising [link]

2014-01-17T02:16:08.126Z · score: -4 (9 votes)

PSA for LW futurists/academics

2013-10-31T15:54:39.861Z · score: 7 (12 votes)

Nudging around the world - [link]

2013-09-05T15:52:05.786Z · score: 8 (10 votes)

Course - Saving Millions of lives at a time [link]

2013-08-12T16:30:29.192Z · score: 4 (7 votes)

Proposal: periodic repost of the Best Learning resources

2013-08-10T16:21:19.138Z · score: 9 (10 votes)

RIP Doug Engelbart

2013-07-03T19:19:59.616Z · score: 11 (16 votes)

NES-game playing AI [video link and AI-boxing-related comment]

2013-04-12T13:11:36.365Z · score: 30 (35 votes)

Eliezer's YU lecture on FAI and MOR [link]

2013-03-07T16:09:54.710Z · score: 2 (9 votes)

Interesting discussion of concentration and productivity [link]

2013-02-06T13:58:35.082Z · score: 8 (13 votes)

CFAR’s Inaugural Fundraising Drive

2012-12-18T01:19:00.272Z · score: 10 (13 votes)

Popular media coverage of Singularity Summit -the Verge [link]

2012-10-23T03:19:03.676Z · score: 2 (7 votes)

Judea Pearl's Turing Award Lecture video now online

2012-09-10T15:18:43.251Z · score: 6 (9 votes)

Luke' AMA gets a plug @ Wired [link]

2012-08-26T04:15:49.745Z · score: 6 (13 votes)

Opinions on Boltzmann brain arguments constraining modern multiverse theories

2012-08-10T18:05:52.417Z · score: 1 (6 votes)

Russian plan for immortality [link]

2012-08-01T20:49:41.319Z · score: 5 (10 votes)

SIAI May report

2012-06-16T14:01:37.488Z · score: 5 (10 votes)

Arrison, Vassar and de Grey speaking at Peter Thiel's startup class [link]

2012-06-16T13:58:35.578Z · score: 10 (13 votes)

Sebastian Thrun AMA on reddit [link]

2012-06-14T02:53:15.258Z · score: 4 (7 votes)

MIT is working on industrial robots that (attempt to) learn what humans want from them [link]

2012-06-13T15:42:58.015Z · score: 3 (8 votes)

Peter Thiel's AGI discussion in his startups class @ Stanford [link]

2012-06-07T12:27:48.075Z · score: 13 (16 votes)

Audio interview with Judea Pearl [link]

2012-05-10T12:47:07.615Z · score: 7 (8 votes)

"Big Surprise" - the famous atheists are actually Bayesians [link]

2012-04-08T16:11:30.166Z · score: -9 (20 votes)

Singularity-oriented Sci-Fi collection to be published [link]

2012-02-29T16:39:40.714Z · score: 3 (6 votes)

Model Thinking class [link]

2012-02-20T18:27:51.909Z · score: 6 (7 votes)

Top 5 regrets of the dying [link]

2012-02-04T17:56:22.156Z · score: 3 (10 votes)

Bill Gates asks HS students "What are most important choices the world faces"?

2012-01-11T20:34:11.272Z · score: 0 (11 votes)

Nick Bostrom TED talk on world's biggest problems

2012-01-06T18:52:09.991Z · score: 17 (20 votes)

Roger Williams (Author of Metamorphosis of Prime Intellect) on Singularity

2012-01-06T05:09:36.511Z · score: 5 (14 votes)

Future of Moral Machines - New York Times [link]

2011-12-26T14:44:01.763Z · score: 0 (5 votes)

SingInst bloomberg coverage [link]

2011-12-19T19:31:41.651Z · score: 5 (8 votes)

For those in the Stanford AI-class

2011-12-10T22:54:38.294Z · score: 8 (11 votes)

Good interview with Kahneman [link]

2011-12-02T15:04:24.480Z · score: 2 (3 votes)

FAI-relevant XKCD

2011-11-22T13:28:44.070Z · score: -4 (13 votes)

Three more classes coming from Stanford of interest here

2011-11-17T17:57:42.468Z · score: 15 (16 votes)

Human augmented with CBT algorithms games Jeopardy [link]

2011-11-17T03:22:25.901Z · score: 10 (11 votes)

Biointelligence Explosion

2011-11-07T14:05:08.985Z · score: 2 (15 votes)

Guardian coverage of the Summit [link]

2011-11-03T03:17:13.585Z · score: 3 (6 votes)

Singularity Summit - some videos

2011-10-26T14:57:57.244Z · score: 1 (2 votes)