Posts

What are we predicting for Neuralink event? 2019-07-12T19:33:57.759Z · score: 28 (11 votes)
LW Dev question: FB-style tagging? 2019-06-20T19:19:45.807Z · score: 11 (3 votes)
Using rationality to debug Machine Learning 2018-04-10T20:03:44.357Z · score: 55 (12 votes)
Bayesian statistics as epistemic advantage 2017-07-25T17:07:49.660Z · score: 0 (0 votes)
MILA gets a grant for AI safety research 2017-07-21T15:34:55.493Z · score: 9 (9 votes)
Learning from Human Preferences - from OpenAI (including Christiano, Amodei & Legg) 2017-06-13T15:52:00.294Z · score: 9 (9 votes)
NIPS 2015 2015-12-07T20:31:11.779Z · score: 7 (8 votes)
Velocity of behavioral evolution 2014-12-19T17:34:36.217Z · score: 3 (4 votes)
What Peter Thiel thinks about AI risk 2014-12-11T21:22:27.167Z · score: 12 (13 votes)
Cognitive distortions of founders 2014-12-11T03:19:58.802Z · score: 3 (6 votes)
FAI PR tracking well [link] 2014-08-15T21:23:48.622Z · score: 7 (8 votes)
Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link] 2014-04-21T16:55:56.240Z · score: 18 (21 votes)
Gunshot victims to be suspended between life and death [link] 2014-03-27T16:33:49.413Z · score: 24 (27 votes)
Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link] 2014-01-30T01:20:10.579Z · score: 13 (26 votes)
H+ review of James Miller's Singularity Rising [link] 2014-01-17T02:16:08.126Z · score: -4 (9 votes)
PSA for LW futurists/academics 2013-10-31T15:54:39.861Z · score: 7 (12 votes)
Nudging around the world - [link] 2013-09-05T15:52:05.786Z · score: 8 (10 votes)
Course - Saving Millions of lives at a time [link] 2013-08-12T16:30:29.192Z · score: 4 (7 votes)
Proposal: periodic repost of the Best Learning resources 2013-08-10T16:21:19.138Z · score: 9 (10 votes)
RIP Doug Engelbart 2013-07-03T19:19:59.616Z · score: 11 (16 votes)
NES-game playing AI [video link and AI-boxing-related comment] 2013-04-12T13:11:36.365Z · score: 30 (35 votes)
Eliezer's YU lecture on FAI and MOR [link] 2013-03-07T16:09:54.710Z · score: 2 (9 votes)
Interesting discussion of concentration and productivity [link] 2013-02-06T13:58:35.082Z · score: 8 (13 votes)
CFAR’s Inaugural Fundraising Drive 2012-12-18T01:19:00.272Z · score: 10 (13 votes)
Popular media coverage of Singularity Summit -the Verge [link] 2012-10-23T03:19:03.676Z · score: 2 (7 votes)
Judea Pearl's Turing Award Lecture video now online 2012-09-10T15:18:43.251Z · score: 6 (9 votes)
Luke's AMA gets a plug @ Wired [link] 2012-08-26T04:15:49.745Z · score: 6 (13 votes)
Opinions on Boltzmann brain arguments constraining modern multiverse theories 2012-08-10T18:05:52.417Z · score: 1 (6 votes)
Russian plan for immortality [link] 2012-08-01T20:49:41.319Z · score: 5 (10 votes)
SIAI May report 2012-06-16T14:01:37.488Z · score: 5 (10 votes)
Arrison, Vassar and de Grey speaking at Peter Thiel's startup class [link] 2012-06-16T13:58:35.578Z · score: 10 (13 votes)
Sebastian Thrun AMA on reddit [link] 2012-06-14T02:53:15.258Z · score: 4 (7 votes)
MIT is working on industrial robots that (attempt to) learn what humans want from them [link] 2012-06-13T15:42:58.015Z · score: 3 (8 votes)
Peter Thiel's AGI discussion in his startups class @ Stanford [link] 2012-06-07T12:27:48.075Z · score: 13 (16 votes)
Audio interview with Judea Pearl [link] 2012-05-10T12:47:07.615Z · score: 7 (8 votes)
"Big Surprise" - the famous atheists are actually Bayesians [link] 2012-04-08T16:11:30.166Z · score: -9 (20 votes)
Singularity-oriented Sci-Fi collection to be published [link] 2012-02-29T16:39:40.714Z · score: 3 (6 votes)
Model Thinking class [link] 2012-02-20T18:27:51.909Z · score: 6 (7 votes)
Top 5 regrets of the dying [link] 2012-02-04T17:56:22.156Z · score: 3 (10 votes)
Bill Gates asks HS students "What are the most important choices the world faces?" 2012-01-11T20:34:11.272Z · score: 0 (11 votes)
Nick Bostrom TED talk on world's biggest problems 2012-01-06T18:52:09.991Z · score: 17 (20 votes)
Roger Williams (Author of Metamorphosis of Prime Intellect) on Singularity 2012-01-06T05:09:36.511Z · score: 5 (14 votes)
Future of Moral Machines - New York Times [link] 2011-12-26T14:44:01.763Z · score: 0 (5 votes)
SingInst bloomberg coverage [link] 2011-12-19T19:31:41.651Z · score: 5 (8 votes)
For those in the Stanford AI-class 2011-12-10T22:54:38.294Z · score: 8 (11 votes)
Good interview with Kahneman [link] 2011-12-02T15:04:24.480Z · score: 2 (3 votes)
FAI-relevant XKCD 2011-11-22T13:28:44.070Z · score: -4 (13 votes)
Three more classes coming from Stanford of interest here 2011-11-17T17:57:42.468Z · score: 15 (16 votes)
Human augmented with CBT algorithms games Jeopardy [link] 2011-11-17T03:22:25.901Z · score: 10 (11 votes)
Biointelligence Explosion 2011-11-07T14:05:08.985Z · score: 4 (16 votes)

Comments

Comment by dr_manhattan on Ways that China is surpassing the US · 2019-11-04T13:28:27.435Z · score: 0 (3 votes) · LW · GW
What does it imply for things like AI governance and global coordination on x-risks?

I read the article a while ago and vaguely concluded there should be some implications here (though I'm largely uncertain about the direction or magnitude, being a non-expert). Interested to hear what people think (especially people who focus on policy).

Comment by dr_manhattan on Unstriving · 2019-08-22T12:14:31.776Z · score: 2 (1 votes) · LW · GW
How about we let go of success, but keep doing challenging stuff anyway, just for the fun of it?

This sort of feels like Feynman's attitude, despite him being extremely successful.

Comment by dr_manhattan on GPT-2: 6-Month Follow-Up · 2019-08-21T12:41:46.516Z · score: 6 (3 votes) · LW · GW

Also notable: NVIDIA trained a half-order-of-magnitude larger model https://nv-adlr.github.io/MegatronLM

Comment by dr_manhattan on What are we predicting for Neuralink event? · 2019-07-17T19:27:57.727Z · score: 7 (3 votes) · LW · GW

Seems way off from the actual release; any post-mortem?

Comment by dr_manhattan on Integrity and accountability are core parts of rationality · 2019-07-16T14:55:07.325Z · score: 4 (2 votes) · LW · GW

A decent solution to the "who should you be accountable to?" question, from the wisdom of the ancients (it shows thought on many of the considerations mentioned):

When in doubt, remember Warren Buffett’s rule of thumb: “… I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter.”
Comment by dr_manhattan on What are we predicting for Neuralink event? · 2019-07-16T14:17:51.917Z · score: 2 (1 votes) · LW · GW

(might contain spoilers/private info) https://www.technologyreview.com/s/613961/elon-musks-brain-interface-company-is-promising-big-news-heres-what-it-could-be/

Comment by dr_manhattan on Open Thread July 2019 · 2019-07-10T15:59:46.855Z · score: 2 (1 votes) · LW · GW

I think the Hypothesis is not about Open Threads specifically

Comment by dr_manhattan on What would be the signs of AI manhattan projects starting? Should a website be made watching for these signs? · 2019-07-03T20:55:52.786Z · score: 4 (2 votes) · LW · GW

Tracking researchers' employment/location and publishing/conference attendance records would probably be a good source of data for this.

Comment by dr_manhattan on What was the official story for many top physicists congregating in Los Alamos during the Manhattan Project? · 2019-07-03T20:53:47.200Z · score: 9 (5 votes) · LW · GW

I think it was easier in that era; AFAIK they used conventional secrecy methods (project names, locations, misdirection, need-to-know, obfuscation) to pull it off. Feynman's "Surely You're Joking" and Rhodes's "The Making of the Atomic Bomb" are good sources for examples (and recommended in general).

Comment by dr_manhattan on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T21:27:06.741Z · score: 4 (2 votes) · LW · GW

Small typo:

Hence it has no motivation to manipulate[d] humans through its answer.
Comment by dr_manhattan on What is the evidence for productivity benefits of weightlifting? · 2019-06-27T16:34:19.929Z · score: 1 (3 votes) · LW · GW

IN MICE

Comment by dr_manhattan on Jordan Peterson on AI-FOOM · 2019-06-27T12:34:12.255Z · score: 6 (3 votes) · LW · GW

I somewhat overlooked this line, and yes, it's a nod in the right direction.

Comment by dr_manhattan on Jordan Peterson on AI-FOOM · 2019-06-26T19:19:06.026Z · score: 22 (10 votes) · LW · GW

Based on the transcript this does not sound like a FOOM discussion (as in rapid self-improvement) other than mentioning "group learning" by autonomous cars, which is maybe somewhat related. Also the pregnancy ad story is much more about pattern recognition with lots of data than any serious AI.

Basically, JP is a complete layman in this area (unlike Gates, Musk, or, from the other side, Pinker), so his opinion counts for little, and he isn't talking about FOOM anyway.

Comment by dr_manhattan on Being the (Pareto) Best in the World · 2019-06-25T12:32:24.217Z · score: 17 (7 votes) · LW · GW

Is this much different from Scott Adams' advice (https://dilbertblog.typepad.com/the_dilbert_blog/2007/07/career-advice.html)?

If you want something extraordinary, you have two paths:
1. Become the best at one specific thing.
2. Become very good (top 25%) at two or more things.


Comment by dr_manhattan on Reason isn't magic · 2019-06-20T19:10:45.161Z · score: 2 (1 votes) · LW · GW

Do you mean Zvi's "Change is bad"?

Comment by dr_manhattan on Reason isn't magic · 2019-06-19T14:34:39.633Z · score: 3 (2 votes) · LW · GW
Second, we're not actually comparing reason to tradition - we're comparing changing things to not changing things. Change, as we know, is bad.

Request for clarification: isn't "reasonable solution" always a "change" when compared to preexisting tradition?

Comment by dr_manhattan on Counterfactuals about Social Media · 2019-04-23T15:50:52.829Z · score: 4 (2 votes) · LW · GW

I get that my usage hurts indirectly; my question was specifically whether, if everyone used FB occasionally and for purposes similar to mine, FB would still be detrimental. Harming other people because they have unhealthy usage patterns is still a concern, but a lesser one to me.

Comment by dr_manhattan on Counterfactuals about Social Media · 2019-04-22T18:15:02.660Z · score: 10 (7 votes) · LW · GW

I use FB occasionally to stay in touch with family/friends.

I subscribe to interesting people on Twitter and find it a great source of intellectual information.

I know these are harmful to some people, and I've occasionally noticed addictive behavior in myself, but overall it seems like a good trade. If someone wants to explain/convince me that this is highly dangerous to me in a non-obvious way, or that **my kind** of usage is endangering the commons, I'm open to hearing it.

Comment by dr_manhattan on Crypto quant trading: Intro · 2019-04-21T11:56:25.406Z · score: 2 (1 votes) · LW · GW

Are you working with Satvik? I know he was into this stuff.

Comment by dr_manhattan on Crypto quant trading: Intro · 2019-04-18T17:31:36.799Z · score: 5 (3 votes) · LW · GW

I have one foot in finance and am still learning a lot. Since it appears you taught yourself this stuff, what resources were good for you?

Comment by dr_manhattan on How do people become ambitious? · 2019-04-09T13:14:12.202Z · score: 5 (3 votes) · LW · GW

What is good literature on learned helplessness?

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:34:30.952Z · score: 9 (3 votes) · LW · GW
The One is a myth.

Yes, 100%. But really good chemistry comes in grades, and in this case I feel really, really good chemistry. There are probably partners "above the acceptable threshold" nearby, and if I dated long enough I might feel the same about someone else. But in this case the LDR is a temporary inconvenience; I just want to make it work in LDR format until I feel good/convinced enough to turn it into an N(ear)DR, which is well within my power to do.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:31:56.155Z · score: 3 (2 votes) · LW · GW

Not retracted (I guess there's no delete feature anymore); just expanding into a longer comment elsewhere.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-29T17:21:23.628Z · score: 2 (1 votes) · LW · GW

It's an LDR that I intend to turn into an NDR; the LDR part is temporary. Hope that helps.

Comment by dr_manhattan on What would you need to be motivated to answer "hard" LW questions? · 2019-03-29T17:19:27.596Z · score: 6 (3 votes) · LW · GW

Sure. I know something about general CS, ML, applied Bayesian stats, and finance. Generally I would not be answering questions for a bounty (I'm well compensated to do this at work, and I don't want *more work*), but I would spend some time if I thought it helped people or contributed to an important body of knowledge. For me it comes from a different "time budget". I realize many people would feel differently, but there's probably a class of people like me.

Comment by dr_manhattan on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T20:20:24.628Z · score: 2 (3 votes) · LW · GW

I think the answer will be highly dependent on the question. The opportunity cost for a yoga expert answering a question on yoga is very different from that of someone answering an investment question.

Comment by dr_manhattan on What I've Learned From My Parents' Arranged Marriage · 2019-03-26T12:50:59.904Z · score: 8 (5 votes) · LW · GW

Thanks for sharing. Good story, and it happens to give me encouragement, as I seem to be falling into a (for now) LDR relationship with somebody in another country. It feels more "doable" based on this.

Comment by dr_manhattan on [Question] Tracking accuracy of personal forecasts · 2019-03-25T12:59:07.576Z · score: 3 (2 votes) · LW · GW

Agree! Tricky territory. I think it's fair to take an outside view as a first cut (e.g. how many people survived Everest), then very carefully evaluate whether the reference class is relevant. Yudkowsky writes about this quite a bit, though I cannot recall in which particular place.

Comment by dr_manhattan on The Main Sources of AI Risk? · 2019-03-22T12:42:08.765Z · score: 6 (3 votes) · LW · GW

Great idea; it would be awesome if someone added links to the best reference posts for each of these (as an additional benefit, this would identify whitespace that needs to be filled).

Comment by dr_manhattan on [Question] Tracking accuracy of personal forecasts · 2019-03-21T16:24:15.295Z · score: 3 (2 votes) · LW · GW
"Will Adam be able to get back to cycling within a month [after a recent accident]?"

(Probably an unnecessary word of caution:) do not forecast your own behavior, due to the risk of reduced agency.

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-16T17:12:19.218Z · score: 2 (1 votes) · LW · GW

The confusion I'm trying to resolve is that, if you have priors, it feels like saving the best for last (assuming no attrition in the pool, which is obviously false). The intuition is that you're using some fraction of the pool for calibration purposes, and you should not be using your best prospects for calibration.
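To make the calibration point concrete, here is a quick simulation sketch of the baseline no-priors rule (toy code with made-up parameters, not the priors variant I'm asking about): skip the first k candidates purely for calibration, then take the first one that beats everything seen so far.

```python
# Toy simulation of the classic secretary rule (no priors, no attrition).
import random

def secretary_trial(n, k):
    ranks = list(range(n))            # 0 = best candidate
    random.shuffle(ranks)
    best_seen = min(ranks[:k]) if k else n   # best rank among the calibration set
    for r in ranks[k:]:
        if r < best_seen:
            return r == 0             # stop at the first improvement; did we get the best?
    return False                      # never stopped: counts as a miss

n, trials = 100, 20000
for k in (10, 37, 60):
    wins = sum(secretary_trial(n, k) for _ in range(trials))
    print(f"k={k:>2}: success rate {wins / trials:.3f}")
```

With n = 100 the success rate peaks near k = n/e ≈ 37, which is the sense in which the first ~37% of the pool is "spent" on calibration rather than on actual prospects.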

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-15T12:24:21.876Z · score: 2 (1 votes) · LW · GW

What if there are prior probabilities over "secretary ability", e.g. from reading their resumes or getting references? Has anyone worked out that variant?

Comment by dr_manhattan on Leaky Concepts · 2019-03-06T14:00:36.452Z · score: 2 (1 votes) · LW · GW
If you want to go mad, commit to drawing two lines the same length and don’t stop until you are dead from trying to line up atoms to be in the right places. There are quicker ways to go insane.

Related humor https://www.youtube.com/watch?v=w-wbWGwZ7_k

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-05T14:05:03.029Z · score: 2 (1 votes) · LW · GW

Hi Anna, since you've made the specific claim publicly (I assume intended as a warning), would you mind commenting on this:

https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD

Specifically, it's a given that there's some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their worldviews). You seem to imply that with Michael it's more than that (I think Vaniver alludes to it with the "creepy" comment).

In other words, is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael Quirrell+ because of his outlook?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T21:22:34.099Z · score: 4 (2 votes) · LW · GW

Thanks (and thanks to Yoav for the clarification). So, in your opinion, is MV dangerous to a class of people with certain kinds of beliefs the way Harry was to Draco (where the risk was purely a necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T13:20:00.095Z · score: 2 (1 votes) · LW · GW

Ah, sorry. Would you mind elaborating on the Draco point in normie-speak, if you have the bandwidth?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-01T17:59:59.053Z · score: 4 (2 votes) · LW · GW

Good points (similar to Raemon's). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I do find the "subtle skill of seeing the world with fresh eyes" potentially extremely valuable, which is, I suppose, why Anna kept encouraging people.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-08T15:07:01.531Z · score: 2 (1 votes) · LW · GW

Thanks for doing this! I couldn't figure out how.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-26T21:03:32.378Z · score: 6 (3 votes) · LW · GW

My take and decision on the MIRI issue, in ROT13 to continue the pattern:

Nabgure (zvabe) "Gbc Qbabe" bcvavba. Ba gur ZVEV vffhr: nterr jvgu lbhe pbapreaf, ohg pbagvahr qbangvat, sbe abj. V nffhzr gurl'er shyyl njner bs gur ceboyrz gurl'er cerfragvat gb gurve qbabef naq jvyy nqqerff vg va fbzr snfuvba. Vs gurl qb abg zvtug nqwhfg arkg lrne. Gur uneq guvat vf gung ZVEV fgvyy frrzf zbfg qvssreragvngrq va nccebnpu naq gnyrag bet gung pna hfr shaqf (if BcraNV naq QrrcZvaq naq jryy-shaqrq npnqrzvp vafgvghgvbaf)

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-21T02:22:02.189Z · score: 6 (3 votes) · LW · GW

Done. That thread is uge.

Comment by dr_manhattan on The Best Textbooks on Every Subject · 2018-11-21T02:21:22.407Z · score: 11 (2 votes) · LW · GW

Recommending Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best all-in-one intro to *applied* Bayesian statistics.

The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert for having the most holistic coverage.

I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

PS: He has a playlist of complementary videos to go along with the book.

Comment by dr_manhattan on New safety research agenda: scalable agent alignment via reward modeling · 2018-11-20T21:20:43.013Z · score: 6 (3 votes) · LW · GW

They mention and link to iterated amplification in the Medium article.

Scaling up
In the long run, we would like to scale reward modeling to domains that are too complex for humans to evaluate directly. To do this, we need to boost the user’s ability to evaluate outcomes. We discuss how reward modeling can be applied recursively: we can use reward modeling to train agents to assist the user in the evaluation process itself. If evaluation is easier than behavior, this could allow us to bootstrap from simpler tasks to increasingly general and more complex tasks. This can be thought of as an instance of iterated amplification.
Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-09T16:47:07.963Z · score: 4 (2 votes) · LW · GW

Thanks, added a comment

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-08T21:24:08.844Z · score: 12 (3 votes) · LW · GW

Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best intro to *applied* Bayesian stats. The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert for having the most holistic coverage.

PS: He has a playlist of complementary videos to go along with the book.

ETA: I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

Comment by dr_manhattan on Starting Meditation · 2018-11-08T21:11:05.825Z · score: 5 (2 votes) · LW · GW

Thanks!

Comment by dr_manhattan on Bayes Questions · 2018-11-08T13:50:53.710Z · score: 3 (2 votes) · LW · GW

The technologies I'm suggesting are just implementations of Bayes, which is what you're trying to do. There's some theory as to *how* they do inference (special versions of MCMC, basically), but that is an "implementation detail" to a degree. Here are some references to get you started, though they're mostly Stan-centered: http://mc-stan.org/users/documentation/external.html . If you want a better overall picture of the theory, I really like this book: https://ben-lambert.com/a-students-guide-to-bayesian-statistics/ ; it takes you from the basics all the way to Stan usage.

Comment by dr_manhattan on Bayes Questions · 2018-11-07T17:38:22.023Z · score: 3 (2 votes) · LW · GW

Can you program? If so, I highly recommend using PyMC or Stan for this kind of work. There's a pretty rich literature and culture around how to iteratively improve these types of models, and some tool support around these specific toolkits.
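To give a sense of the workflow shape, here is a minimal PyMC3 sketch (toy data and a generic normal model with made-up priors, not your actual model):

```python
# Toy example: infer the mean and noise scale of some simulated data.
import numpy as np
import pymc3 as pm

data = np.random.normal(loc=1.0, scale=2.0, size=100)  # toy observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sd=10.0)        # prior over the mean
    sigma = pm.HalfNormal("sigma", sd=5.0)       # prior over the noise scale
    pm.Normal("obs", mu=mu, sd=sigma, observed=data)
    trace = pm.sample(1000, tune=1000)           # NUTS, a variant of MCMC

print(pm.summary(trace))
```

Stan looks different syntactically but follows the same pattern: declare priors and a likelihood, then let the sampler do the inference; iterating on the model is mostly a matter of editing those declarations and checking diagnostics.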

Comment by dr_manhattan on Starting Meditation · 2018-10-26T00:09:14.983Z · score: 2 (1 votes) · LW · GW

I should reread the book, but the creation of precise stages with precise descriptions of experiences seemed unrealistic to me. Would love to hear your take.

Comment by dr_manhattan on Starting Meditation · 2018-10-25T18:42:59.609Z · score: 2 (1 votes) · LW · GW

Did you do a review of TMI somewhere? While I liked the book, the author seems overconfident (>>> my comfort level) about "you will have this kind of experience at this stage", largely backed by personal experience.

Comment by dr_manhattan on Fasting Mimicking Diet Looks Pretty Good · 2018-10-04T22:07:37.853Z · score: 6 (3 votes) · LW · GW

Thanks for writing this up! Vassar mentioned you've looked into this; I was going to ask you about it.

One thing I was a bit suspicious of in the book is his strong no-skipping-breakfast recommendation; apparently it's based on a study, but it seemed a lot less supported than the rest.