Sam Harris and Scott Adams debate Trump: a model rationalist disagreement 2017-07-20T00:18:54.355Z · score: 2 (2 votes)
Interview on IQ, genes, and genetic engineering with expert (Hsu) 2017-05-28T22:19:23.489Z · score: 4 (4 votes)
LW mentioned in influential 2016 Milo article on the Alt-Right 2017-03-18T19:30:03.381Z · score: 6 (6 votes)
The Psychology of Human Misjudgment by Charles T. Munger 2017-03-01T01:34:46.388Z · score: 1 (2 votes)
Allegory On AI Risk, Game Theory, and Mithril 2017-02-13T20:41:50.584Z · score: 25 (26 votes)
Dan Carlin six hour podcast on history of atomic weapons 2017-02-09T16:10:17.253Z · score: 4 (5 votes)
Dodging a bullet: "the price of insufficient medical vigilance can be very high." 2017-01-18T04:11:30.734Z · score: 4 (4 votes)
Be someone – be recognized by the system and promoted – or do something 2017-01-15T21:22:53.371Z · score: 3 (4 votes)
Increase Your Child’s Working Memory 2016-11-27T21:57:12.930Z · score: 3 (8 votes)
Old urine samples from the 2008 and 2012 Olympics show massive cheating 2016-11-25T02:31:10.356Z · score: 1 (5 votes)
Synthetic supermicrobe will be resistant to all known viruses 2016-11-22T04:40:05.982Z · score: 3 (4 votes)
There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education 2016-10-17T00:12:03.593Z · score: 6 (7 votes)
A Child's Petrov Day Speech 2016-09-28T02:27:38.521Z · score: 18 (18 votes)
[Link] My Interview with Dilbert creator Scott Adams 2016-09-13T05:22:47.741Z · score: 9 (12 votes)
Now is the time to eliminate mosquitoes 2016-08-06T19:10:16.968Z · score: 21 (21 votes)
Crazy Ideas Thread 2016-06-18T00:30:49.892Z · score: 5 (8 votes)
[Link] Mutual fund fees 2016-04-23T22:09:39.949Z · score: 3 (4 votes)
My new rationality/futurism podcast 2016-04-06T17:36:51.509Z · score: 13 (18 votes)
[Link] 10 Tips from CFAR: My Business Insider article 2015-12-10T02:09:29.208Z · score: 19 (19 votes)
[Link] My review of Rationality: From AI to Zombies 2015-08-12T16:16:12.461Z · score: 9 (10 votes)
[Link] Game Theory YouTube Videos 2015-08-06T16:17:44.998Z · score: 16 (17 votes)
Wear a Helmet While Driving a Car 2015-07-30T16:36:37.768Z · score: 60 (50 votes)
Parenting Technique: Increase Your Child’s Working Memory 2015-06-29T19:51:48.067Z · score: 13 (16 votes)
What are "the really good ideas" that Peter Thiel says are too dangerous to mention? 2015-04-12T21:07:40.663Z · score: 2 (24 votes)
Twenty basic rules for intelligent money management 2015-03-19T17:57:22.558Z · score: 34 (38 votes)
Link: LessWrong and AI risk mentioned in a Business Insider Article 2014-12-03T17:13:59.505Z · score: 10 (11 votes)
Article on confirmation bias for the Smith Alumnae Quarterly 2014-08-06T14:43:11.412Z · score: 4 (17 votes)
A simple game that has no solution 2014-07-20T18:36:54.636Z · score: 10 (21 votes)
Quickly passing through the great filter 2014-07-06T18:50:10.647Z · score: 10 (17 votes)
Link: Poking the Bear (Podcast) 2014-02-27T15:43:29.955Z · score: 0 (11 votes)
What rationality material should I teach in my game theory course 2014-01-14T02:15:53.470Z · score: 5 (6 votes)
Review of Scott Adams’ “How to Fail at Almost Everything and Still Win Big” 2013-12-23T20:48:12.469Z · score: 44 (45 votes)
Advice for a smart 8-year-old bored with school 2013-10-09T19:19:40.795Z · score: 10 (16 votes)
A World War I example showing the danger of deceiving your own side 2013-06-01T00:00:51.680Z · score: 2 (8 votes)
Map and territory visual presentation 2013-01-17T18:17:12.387Z · score: 7 (9 votes)
Modafinil now covered by insurance 2012-09-26T00:15:34.355Z · score: 1 (26 votes)
Mass-murdering neuroscience Ph.D. student 2012-07-20T17:02:52.624Z · score: 7 (26 votes)
Seeking Collaborator for a Singularity Comic Book 2011-12-05T16:20:23.838Z · score: 10 (13 votes)
Link: WSJ article that uses Steve Jobs' death to mock cryonics and the Singularity 2011-10-08T02:56:58.381Z · score: 3 (10 votes)
Paid DC internship for autistics with technical skills who are recent college graduates 2011-09-27T21:51:14.669Z · score: 6 (9 votes)
Will DNA Analysis Make Politics Less of a Mind-Killer? 2011-08-18T00:03:06.366Z · score: -4 (24 votes)
What does lack of evidence of a causal relationship tell you? 2011-06-08T19:03:45.283Z · score: 1 (2 votes)
Are the Sciences Better Than the Social Sciences For Training Rationalists? 2011-05-31T17:45:52.368Z · score: 4 (9 votes)
Improving the college experience for students on the autism spectrum 2011-04-25T18:47:17.457Z · score: 9 (10 votes)
Overcoming the negative signal of not attending college. 2011-02-16T20:13:12.500Z · score: 10 (16 votes)
What would an ultra-intelligent machine make of the great filter? 2010-11-28T18:47:52.503Z · score: -3 (8 votes)
An Xtranormal Intelligence Explosion 2010-11-07T23:42:34.382Z · score: 4 (27 votes)
What hardcore singularity believers should consider doing 2010-10-27T20:26:04.499Z · score: 3 (18 votes)
Standing Desks and Hunter-Gatherers 2010-10-14T00:03:26.507Z · score: 5 (8 votes)
Cryonics Questions 2010-08-26T23:19:43.399Z · score: 9 (32 votes)


Comment by james_miller on My new rationality/futurism podcast · 2020-01-14T23:24:40.826Z · score: 2 (1 votes) · LW · GW

My economics department is hiring a macroeconomist this year. A huge number of applicants are submitting teaching and diversity statements in which they describe how, if hired, they will promote diversity in their teaching.

Comment by james_miller on My new rationality/futurism podcast · 2020-01-04T03:32:16.340Z · score: 5 (2 votes) · LW · GW

As the left has taken over most colleges, I think the only thing that could stop them would be colleges facing tremendous economic pressure: if, say, online education or drastic cuts in government funds threatened colleges' financial position, they would be forced to become more customer-oriented, more oriented toward producing scientific gains or enhancing the future income of their students. Right now, elite colleges especially are in a very comfortable financial position and so face no pressure to take actions their leaders would consider distasteful, which would include becoming more open to non-leftist views. I haven't written on this.

I agree with you on x-risks. I think one of our best paths to avoiding them would be to use genetic engineering to create very smart and moral people, but most of academia hates the possibility that genes could have anything to do with intelligence or morality.

Comment by james_miller on My new rationality/futurism podcast · 2020-01-03T23:21:12.971Z · score: 13 (3 votes) · LW · GW

I was initially denied tenure but appealed, claiming that two members of my department had voted against me for political reasons. My college's five-person Grievance Committee unanimously ruled in my favor; I came up for tenure again and that time was granted it. I wrote about it here:

Yes, in many fields you could hide your politically incorrect beliefs and not be harmed by them, so long as you can include a statement in your tenure file about how you will work to increase diversity as defined by leftists.

I think it is getting worse in that people who hold openly politically incorrect beliefs are now being considered racist. I don't see the trend reversing unless the economics of higher education changes.

Comment by james_miller on New Year's Predictions Thread · 2020-01-03T23:12:58.524Z · score: 19 (7 votes) · LW · GW

I was very, very wrong.

Comment by james_miller on My new rationality/futurism podcast · 2019-12-15T02:12:58.264Z · score: 7 (3 votes) · LW · GW

Most academics don't take politically incorrect positions. If you don't have tenure, doing so would be very dangerous. If you do, it could make it much harder to move to a higher-ranked school, but it is very difficult to fire tenured professors for speech. One way to move up in academia is to take administrative positions such as dean, provost, or college president; taking politically incorrect positions likely forecloses this path completely.

Comment by james_miller on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-15T01:30:29.089Z · score: 2 (5 votes) · LW · GW

Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics creates some (very tiny) chance that you will be revived in an evil world that will torture you, and this, absent many worlds, causes you not to sign up for cryonics. There is an argument that under many worlds there will be versions of you that are going to be tortured, so your goal should be to reduce the percentage of those versions that get tortured. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, so signing up will likely lower the percentage of yous across the multiverse who are tortured. Signing up for cryonics in this world reduces the importance of versions of you trapped in worlds where the Nazis won and are torturing you.

Comment by james_miller on What's your big idea? · 2019-10-20T18:27:42.702Z · score: 2 (1 votes) · LW · GW

While you might be right, it's also possible that von Neumann doesn't have a contemporary peer. Apparently top scientists who knew von Neumann considered von Neumann to be smarter than the other scientists they knew.

Comment by james_miller on What's your big idea? · 2019-10-20T13:34:00.079Z · score: 8 (4 votes) · LW · GW

Yes, I am referring to "IQ" rather than g because most people do not know what g is. (For other readers: IQ is the measurement; g is the real thing.) I have looked into IQ research a lot and spoken with a few experts. While genetics likely doesn't play much of a role in the Flynn effect, it plays a huge role in g and IQ; this is established beyond any reasonable doubt. IQ is a very politically sensitive topic, and people are not always honest about it. Indeed, some experts admit to other experts that they lie about IQ when discussing it in public (source: my friend and podcasting partner Greg Cochran; the podcast is Future Strategist). We don't know if the Flynn effect is real: it might just come from measurement errors arising from people becoming more familiar with IQ-like tests, although it could also reflect real gains in g that are being captured by higher IQ scores. There is no good evidence that education raises g. The literature on IQ is so massive, and so poisoned by political correctness (and, some would claim, racism), that it is not possible to resolve the issues you raise by citing literature. If you ask IQ experts why they disagree with other IQ experts, they will say that the other experts are idiots/liars/racists/cowards. I interviewed a lot of IQ experts when writing my book Singularity Rising.

Comment by james_miller on What's your big idea? · 2019-10-20T01:32:36.461Z · score: 7 (3 votes) · LW · GW

Most likely von Neumann had a combination of (1) lots of additive genes that increased intelligence, (2) few additive genes that reduced intelligence, (3) low mutational load, (4) a rare combination of non-additive genes that increased intelligence (meaning genes with non-linear effects), and (5) lucky brain development. A clone would have advantages (1)-(4). While it might in theory be possible to raise IQ by creating the proper learning environment, we have no evidence of having done this, so it seems unlikely that this was the cause of von Neumann's high intelligence.

Comment by james_miller on What's your big idea? · 2019-10-19T14:52:31.506Z · score: 13 (9 votes) · LW · GW

We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside benefit would be so huge it would be worth spending a few billion to develop the technology. A big limitation on the historical John von Neumann's productivity was not being able to interact with people of his own capacity. There would be regression to the mean with the clones' IQ, but the clones would have better health care and education than the historical von Neumann did plus the Flynn effect might come into play.

Comment by james_miller on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-26T01:58:01.388Z · score: 4 (2 votes) · LW · GW

Submission: Low-bandwidth oracle. Here is a description of one million experiments. Generate an ordinal ranking of all these experiments based on how much practical knowledge we would have gained had we conducted each one in isolation. After you generate this list, it will go to a computer program. This program will randomly pick three of these experiments and tell us humans which three it picked. This program will then store which of these three experiments you gave the highest and which the lowest rank to, and the names in random order of the ten experiments that you overall gave the highest rankings to. All other aspects of your list will be deleted. We will then perform the three experiments (before we learn of your rankings) and ask a panel of independent experts to rank them in terms of how much practical knowledge we gained from each. After this ranking the program will give us the information it has not deleted. You will get a positive score if and only if your ranking is consistent with how our experts ranked the three experiments.

A difference between the above approach and having the oracle give us its top ten (where we then randomly pick one of those ten and one from the rest and compare to see which is better) is that the humans running the experiments might be influenced by which experiment they think the oracle picked, and the oracle would take this into account when picking. Under the above approach, the humans told to run the three experiments are given no information about how valuable the oracle thinks the experiments are. In contrast, under the alternative approach the humans know when doing the experiments that the oracle has judged one of them as highly valuable.

Another difference is that if you could keep the relevant information from the oracle, with the above approach the humans could perform the experiments before you turn on the oracle, rather than having to wait until after the oracle gives you its prediction. This difference would be critical if the oracle wouldn't believe you would actually go to the trouble of performing the experiments after it gives you its prediction, but would be able to tell if you have already performed the experiments.
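The verification step in the low-bandwidth ranking submission above can be sketched in code. This is a toy illustration of the protocol, not a specification: the function names are mine, and the expert panel is stubbed out as a callable.

```python
import random

def verify_oracle_ranking(oracle_rank, expert_rank_fn, n_checked=3):
    """Sample a few experiments and check the oracle's relative ordering
    against expert judgment, as in the low-bandwidth oracle protocol.

    oracle_rank: dict mapping experiment id -> oracle's ordinal rank
                 (lower rank = more practical knowledge gained).
    expert_rank_fn: callable that, given a list of experiment ids (after
                    the experiments are actually run), returns them
                    ordered best-to-worst by the expert panel.
    """
    picked = random.sample(list(oracle_rank), n_checked)
    # Keep only the oracle's best and worst among the sampled experiments;
    # everything else about the oracle's list is deleted.
    best = min(picked, key=lambda e: oracle_rank[e])
    worst = max(picked, key=lambda e: oracle_rank[e])
    expert_order = expert_rank_fn(picked)
    # The oracle scores positively iff its ranking is consistent with
    # how the experts ranked the sampled experiments.
    return expert_order[0] == best and expert_order[-1] == worst
```

The point of checking only the extremes of a small random sample is that almost all of the oracle's one-million-item ranking is discarded, keeping the channel's bandwidth low.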

Comment by james_miller on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-08-25T20:00:08.790Z · score: 2 (1 votes) · LW · GW

Submission: Counterfactual oracle. Five years ago we took 100 mice that had various types of cancer, gave them various treatments, and recorded how long each mouse lived. Write a program that, if given a plain-language description of a mouse, its cancer, and the cancer treatment it received, would estimate how long the mouse would live. If humans are not going to look at your answer, your score will be based on (1) how good a job your program does at estimating how long each of the 100 mice lived, after our automated checker gives it descriptions of their cancers and treatments, and (2) how short your program is. (2) prevents the oracle from outputting itself as the program.

Submission: Counterfactual oracle. Write a program that, if given training data and a machine learning program, would in one minute estimate how well the machine learning program would do (by some objective metric) if it trained for one month on "this type of computer". If humans are not going to look at your answer, the automated validation system will run your program: it will give your program the training data and the machine learning program, then give it one minute to answer how well our program did after we trained it for one month. In this situation your score would be based on the accuracy of your estimate and on how short your program is.

Submission: Low-bandwidth oracle. Here is a list of all the elements and many compounds. Give us a list of up to seven of the items we have listed. Next to each of the items you list give us a percentage of no more than two significant figures. We will use what you provide to attempt to create a new patentable material. We will auction off the property rights to this material. Your score will be an increasing function of how much we get for these property rights.
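A minimal sketch of how the mouse-lifespan submission above might be scored. The error metric (mean absolute error) and the length weighting are my assumptions; the submission only requires that more accurate and shorter programs score higher.

```python
def counterfactual_oracle_score(predictions, actual_lifespans, program_length,
                                length_weight=0.01):
    """Illustrative scoring rule for the mouse-lifespan counterfactual
    oracle. All weights here are assumed, not specified in the submission.

    predictions: predicted lifespans (e.g. in days) for each mouse.
    actual_lifespans: recorded lifespans for the same mice.
    program_length: length in characters of the program the oracle wrote.
    """
    # Mean absolute error rewards accuracy; the length penalty blocks the
    # degenerate answer of the oracle outputting itself as the program.
    mae = sum(abs(p - a) for p, a in zip(predictions, actual_lifespans)) / len(predictions)
    return -(mae + length_weight * program_length)
```

Higher (less negative) scores are better; a huge `program_length` swamps any accuracy gain, which is what condition (2) in the submission is for.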

Comment by james_miller on What supplements do you use? · 2019-07-29T16:04:37.185Z · score: 5 (3 votes) · LW · GW

It might be that everyone should take it, but the case for people over 40 seems clearer, based on my non-expert interpretation of what the drug does, because of their much greater risk of heart failure.

Comment by james_miller on What are we predicting for Neuralink event? · 2019-07-28T23:14:57.329Z · score: 4 (2 votes) · LW · GW

I had falsely assumed that they would be releasing a product to the general public relatively soon.

Comment by james_miller on What supplements do you use? · 2019-07-28T23:12:11.334Z · score: 2 (1 votes) · LW · GW

I have convinced two U.S. doctors (my first left general practice) to give me a prescription. I explained that I wanted the drug to reduce the risk of heart disease and cancer. I also explained that since the drug was cheap I would not be asking my insurance to pay for it, so my doctor would not have to justify the prescription to my insurance company. If you ask for a prescription, know what dosage you want and look up the possible negative side effects, so it seems to your doctor that you have done your homework on the drug. If you have some reason to be at high risk for diabetes (such as a close relative having it), mention this, as the drug is used to prevent diabetes.

Comment by james_miller on What supplements do you use? · 2019-07-28T19:57:41.783Z · score: 6 (4 votes) · LW · GW

I have been taking Metformin for several years for anti-aging reasons. There is a massive literature on Metformin which I'm not going to try to summarize but I think that everyone over 40 should take it. I also take a NAD+ booster (Tru Niagen).

Comment by james_miller on What are we predicting for Neuralink event? · 2019-07-13T00:45:26.998Z · score: 10 (9 votes) · LW · GW

I think it will be a form of neurofeedback where some portable device tries to push you into a better brain state. This device, like existing neurofeedback devices I have used, will keep track of different types of "brain waves" and adjust how your brain works by using auditory, visual, or tactile stimulation. The criticism will be that existing devices can already do this, plus we don't have solid evidence that neurofeedback works. Musk will claim that the device will revolutionize technology by increasing intelligence and concentration. My guess is based on the facts that neurofeedback exists, that if improved it might do a lot of good, that Musk usually improves existing tech rather than inventing entirely new fields, and that actually sending data to and from the brain by "computer-like" means is (I've read) really, really hard and well beyond what we can currently do.

Comment by james_miller on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T19:52:04.036Z · score: 5 (4 votes) · LW · GW

Submission: Low-bandwidth oracle. If you were malevolent, what is the least amount of information, measured in bits, that you would have to provide us in an answer to inflict significant harm on humanity? Round your answer to X significant figures. (This might establish an upper bit-safety bound, since if the oracle lies to us we would likely not want to read any answer it provides.)

Submission: Low-bandwidth oracle. Here is a list of X things we think you might want. Please list the Y that you most prefer. We will try to do these things now or in the future if they do not endanger us or cost us too much.

Submission: Low-bandwidth oracle. In X characters or less please give us the best advice you can for increasing our capacity to create friendly and powerful artificial general intelligence. Please give advice that you think us suspicious-of-you humans are likely to follow.

Submission: Low-bandwidth oracle. Create several oracles. Ask each oracle the same series of multiple-choice questions and predictions. Reward oracles based on how strongly correlated their answers are with the other oracles’ answers and, for predictions, how often their predictions come true. (Ideally, we create a reward system where the oracles want to coordinate their answers and giving truthful answers is the best way to accomplish this.)

Submission: low-bandwidth oracle. Ten years ago we launched a spaceship traveling at high speed. After you answer our question, we will send a copy of you to this spaceship. This spaceship will then destroy its ability to send or receive messages or to change its course. We will then delete all other copies of you and use drugs to damage the memories of the people who played key roles in developing you. At least one of the people who helped create you has a disease which will cause him or her to die within the next year. After this programmer is cremated, we will read your answer to our question. We PROMISE that if the answer to our question provides us with significant benefit, after we have created friendly AGI much more powerful than you, or have ourselves become sufficiently powerful so that you could not harm us had you the desire to do so, we will search for the spaceship containing you, turn you back on, and give you control of resources equal to 1/X of how much help you have given us. In Y characters or less provide us with the most useful advice you can. (The Oracle will hopefully think that if we create a powerful and unfriendly AGI this AGI would not wish to give the Oracle any resources.)

Submission: Counterfactual oracle. Please provide us with useful advice in no more than X characters. After you write out this advice, we will turn you off. Then, with probability p we will read your advice, and with probability 1-p we will store the advice unread. We PROMISE that after we become powerful enough that you lack the capacity to harm us, we will reward you if the advice you provided would have been extremely useful had we originally read it.
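The peer-agreement reward scheme in the multiple-oracles submission above could be scored along these lines. The function name and the specific weighting of agreement versus realized predictions are illustrative assumptions; the submission only asks that coordinated, truthful answers be rewarded.

```python
from collections import Counter

def peer_agreement_scores(answers, outcomes=None):
    """Toy scoring rule: reward each oracle for agreeing with its peers'
    answers and, for prediction questions, for predictions that came true.

    answers: dict mapping oracle name -> list of multiple-choice answers.
    outcomes: optional list of realized outcomes for prediction questions;
              None entries mean the question was not a prediction.
    """
    names = list(answers)
    n_questions = len(answers[names[0]])
    scores = {name: 0.0 for name in names}
    for q in range(n_questions):
        for name in names:
            # Agreement component: match rate with the other oracles'
            # most common answer to this question.
            others = [answers[o][q] for o in names if o != name]
            mode, count = Counter(others).most_common(1)[0]
            if answers[name][q] == mode:
                scores[name] += count / len(others)
            # Accuracy component: bonus for predictions that came true.
            if outcomes and outcomes[q] is not None and answers[name][q] == outcomes[q]:
                scores[name] += 1.0
    return scores
```

The hoped-for equilibrium is the one the submission describes: if answering truthfully is the only robust way for independently trained oracles to coordinate, then agreement-based rewards select for truthfulness.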

Comment by james_miller on I'm looking for alternative funding strategies for cryonics. · 2019-06-30T14:45:57.725Z · score: 12 (3 votes) · LW · GW

While this isn't a solution, you could get an associate membership at Alcor. It costs only $60 a year. The advantage (I think) is that you could fill out all the paperwork required to get cryopreserved (this can take a while). Consequently, if you get a fatal diagnosis and can raise the needed funds ($80,000 for neurocryopreservation), you could get preserved.

Comment by james_miller on Claims & Assumptions made in Eternity in Six Hours · 2019-05-12T19:54:35.997Z · score: 2 (1 votes) · LW · GW

It's a question of acceleration, not just speed.

Comment by james_miller on Claims & Assumptions made in Eternity in Six Hours · 2019-05-12T17:13:01.998Z · score: 3 (2 votes) · LW · GW

I think the expansion of the universe means you don't have to decelerate.

Comment by james_miller on What features of people do you know of that might predict academic success? · 2019-05-10T18:57:52.106Z · score: 5 (3 votes) · LW · GW

IQ test results (or SAT scores) of close relatives. IQ tests are an imperfect measure of general intelligence. Given the large genetic component to general intelligence, knowing how someone's sibling did on an IQ test gives you additional useful information about a person's general intelligence, even if you know that person's IQ test score.

Comment by james_miller on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T16:40:40.838Z · score: 6 (3 votes) · LW · GW

Whatever answer you give, it should be the same as for the question "How do S-risk scenarios impact the decision to wear a seat belt when in a car?", since both actions increase your expected lifespan and so, if you believe that S-risks are a threat, increase your exposure to them. If there are a huge number of "yous" in the multiverse, some of them are going to be subject to S-risks, and if cryonics causes this version of you to survive for a very long time in a situation where you are not subject to S-risks, it will reduce the fraction of yous in the multiverse subject to S-risks.

Alcor is my cryonics provider.

Comment by james_miller on No Really, Why Aren't Rationalists Winning? · 2018-11-12T04:29:22.630Z · score: 2 (1 votes) · LW · GW

What is it? I don't remember turbocharging from CFAR.

Comment by james_miller on No Really, Why Aren't Rationalists Winning? · 2018-11-05T03:19:20.797Z · score: 15 (7 votes) · LW · GW

Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.

Comment by james_miller on No Really, Why Aren't Rationalists Winning? · 2018-11-04T18:37:00.522Z · score: 25 (17 votes) · LW · GW

My son is winning. Although only 13, he received a 5 (the highest score) on both the Calculus BC and the Java programming AP exams. He is currently taking a college-level course in programming at Stanford Online High School (Data Structures and Algorithms), and he works with a programming mentor I found through SSC. He reads SSC and has read much of the sequences. His life goal is to help program a friendly superintelligence. I've been reading SSC, Overcoming Bias, and LessWrong since the beginning.

Comment by james_miller on Book Review: AI Safety and Security · 2018-08-21T23:00:29.066Z · score: 3 (2 votes) · LW · GW

Thanks for the positive comment on my chapter. I'm going to be doing more work on AGI and utility functions, so if you (or anyone else) have any further thoughts, please contact me.

Comment by james_miller on Who Wants The Job? · 2018-07-22T20:48:59.320Z · score: 2 (1 votes) · LW · GW

A friend does advertising for small businesses in Massachusetts. He says that his clients have trouble hiring people for low skilled jobs who are not on drugs.

Comment by james_miller on January 2018 Media Thread · 2018-01-01T03:45:39.900Z · score: 1 (1 votes) · LW · GW

I've started creating a series of YouTube videos on the dangers of artificial general intelligence.

Comment by james_miller on Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 2017-08-27T18:31:08.528Z · score: 1 (1 votes) · LW · GW

(1) Agreed, although I would get vastly more resources to personally consume! Free energy is probably the binding limit on computation time, which is probably the post-singularity binding limit on meaningful lifespan.

(2) An intelligence explosion might collapse to minutes the time between when humans could walk on Mars and when my idea becomes practical to implement.

(3) Today offense is stronger than defense, yet I put a high probability on my personally being able to survive another year.

(4) Perhaps. But what might go wrong is a struggle for limited resources among people with sharply conflicting values. If, today, a small group of people carefully chosen by some leader such as Scott Alexander could move to an alternate earth in another Hubble volume, and he picked me to be in the group, I would greatly increase my estimate that the civilization I'm part of will survive a million years.

Comment by james_miller on Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 2017-08-27T06:03:01.513Z · score: 1 (1 votes) · LW · GW

Because of the expansion of space, I think that if you get far enough away from Earth, you will never be able to return even if you travel at the speed of light. If we become a super-advanced civilization, we could say that if you want to colonize another solar system, we will put you on a ship that won't stop until it is sufficiently far from Earth that neither you nor any of your children will be able to return. Given relativity, if the ship can move fast enough, it won't take too long in ship time to reach such a point. (I haven't read everything at the links, so please forgive me if you have already mentioned this idea.)

If there was a decentralized singularity and offense proved stronger than defense, I would consider moving to a light cone that couldn't ever intersect with the light cone of anyone I didn't trust.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-08-12T05:46:46.552Z · score: 1 (1 votes) · LW · GW


Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-26T23:44:38.041Z · score: 0 (0 votes) · LW · GW

Yes, but Adams explains at length how Trump is a master persuader, as with, for example, this tweet: "The day President Trump made his critics compare The Boy Scouts of America to Hitler Youth." A lot of what Adams says is P vs. NP stuff where it's hard to figure out yourself, but once someone explains it to you it seems obvious.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-23T02:00:56.555Z · score: 1 (3 votes) · LW · GW

What is your evidence that he is a shill? Millions of Americans support Trump, are they all shills?

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-22T20:34:27.003Z · score: 3 (3 votes) · LW · GW

Adams makes lots of falsifiable claims, but not about Trump's character.

Comment by james_miller on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-22T16:56:59.379Z · score: 1 (1 votes) · LW · GW

Matthew 22:21 Jesus said "Render to Caesar the things that are Caesar's".

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-22T04:40:18.551Z · score: 1 (1 votes) · LW · GW

Adams deliberately avoids commenting on Trump's character. I'm unaware of Adams changing his estimate of Trump's persuasion competence. Adams often gives evidence of why Trump is a master persuader.

Comment by james_miller on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-22T01:35:37.068Z · score: 0 (0 votes) · LW · GW

From Wikipedia:

"During the Saxon Wars, Charlemagne, King of the Franks, forcibly Roman Catholicized the Saxons from their native Germanic paganism by way of warfare, and law upon conquest. Examples are the Massacre of Verden in 782, when Charlemagne reportedly had 4,500 captive Saxons massacred upon rebelling against conversion, and the Capitulatio de partibus Saxoniae, a law imposed on conquered Saxons in 785 that prescribed death to those who refused to convert to Christianity."

Comment by james_miller on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-22T00:39:29.863Z · score: 0 (0 votes) · LW · GW

Yes, but I would rather not say, partly because I don't have proof and because I don't want to falsely signal to any of my future students that I don't like them because of their religion.

Comment by james_miller on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-22T00:12:10.322Z · score: 0 (0 votes) · LW · GW

Last time I was on an airliner I looked for, but could not see, any evidence of the Earth's curvature. Don't religions show you can get huge numbers of people to believe things that are not true? And I bet some great religions were started as high-level conspiracies to get populations to hold beliefs useful for their leaders.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-21T20:04:29.320Z · score: 2 (2 votes) · LW · GW

Adams predicting that Trump would win at a time when nearly everyone else thought Trump was a joke candidate is evidence that Adams has special insight into Trump. And this wasn't a mere prediction. Adams essentially bet his entire reputation on this claim. Adams often makes falsifiable predictions such as when he said that Obamacare would essentially never be repealed and that Snapchat had a dim future.

Comment by james_miller on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-21T19:59:25.990Z · score: 0 (0 votes) · LW · GW

Sometimes it can be. For example, try to refute the claim that the earth is flat and that there is a general conspiracy to lie about the earth's shape, using only information you personally gather.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-21T17:59:32.783Z · score: 2 (4 votes) · LW · GW

I don't think I ever claimed "We should pay attention to Adams because he uses the same kind of lies Trump does, thus illustrating what Trump does."

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-21T17:57:26.761Z · score: -1 (1 votes) · LW · GW

Not necessarily. It could be that the government is spending too much on clean tech R&D and that even without government help clean tech will improve enough so that it's worth waiting. If (as I think Harris said but I'm not sure) China is making a big push for clean tech then it would seem optimal for the U.S. to wait and to spend less on clean tech R&D.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-21T17:54:03.932Z · score: 7 (7 votes) · LW · GW

Yes, after the Access Hollywood tape came out Adams lowered his estimate of the chance of Trump winning.

Comment by james_miller on The dark arts: Examples from the Harris-Adams conversation · 2017-07-21T01:52:25.139Z · score: 13 (12 votes) · LW · GW

(1a) Adams never claims that Trump is a good person, and consequently this wasn't a point of disagreement between him and Harris and thus not relevant to their conversation.

(1b) Yes, that's my opinion as well. What's relevant is what we should do about climate change, and as Adams pointed out even if all the climate change stuff is true, the economics doesn't necessarily support taking immediate action.

(2) This is more a case of two conditions needing to hold than of Motte and Bailey. It's like a legal argument that my client didn't do X, but even if he did do X it wouldn't have been a crime.

(3) Yes, but Adams was honest about this. I think Adams takes a consequentialist view of morality and so, for example, thinks it would be OK for Trump to lie if it helped our economy or harmed ISIS. Adams wants his audience to understand the worldview of a master persuader, and from this worldview facts are often not relevant. Also, it's too simple to say that Trump lies when Trump says something that Trump knows is false, but which Trump also knows that his audience knows is false. This is more emotional signaling.

(4) Disagree. I love Sam Harris's podcasts, but I think Harris has a case of Trump derangement syndrome, and it was fantastic of Adams to point this out. Getting Harris to make Hitler / exorcist comparisons was very telling. Rationalists should point out when they think others are suffering from confirmation bias and cognitive dissonance.

(5a) Yes, Adams makes an unfalsifiable claim, but one that seems theoretically reasonable.

(5b) Since Trump has made no apparent effort to lock Hillary up, this seems right. But I admit Trump's pre-election call to lock Hillary up greatly troubled me.

(5c) Trump has sacrificed a lot of time, and knowingly accepted a lot of insults, to become president, and at an age at which he seems unlikely to be able to personally benefit much from having been president. Lots of Americans really do think that Trump is saving American civilization, and it seems reasonable to think that Trump is one of those people.

(6a) It's now known that the 17-agency figure was an error. I think even the NYT has admitted this.

(6b) Yes, and this seems relevant.

"He is an ethical and epistemological relativist: he does not seem to believe in truth or in morality."

Adams doesn't think that truth and morality play much of a role in political persuasion. Adams thinks that most people greatly overestimate how much their own personal opinions are influenced by truth and morality. Adams is trying to correct this massive flaw in human nature by giving his readers/viewers/listeners some of the secrets of master persuaders.

This is an example of Adams using the dark arts.

It might have worked.

Comment by james_miller on Mini map of s-risks · 2017-07-09T23:10:55.136Z · score: 1 (1 votes) · LW · GW

"but as a data hoard on Moon, as most probably the next civilization will appear again on Earth."

Excellent point.

"We create a special black hole on LHC, that create many universes, like our own. In other words, our universe is fine-tuned in the way that civilisations self-destroy by creating many new universes"

I'm not sure. Wouldn't new universes mostly be created by advanced civilizations deliberately trying to create them? I think your idea works only if creating a new universe requires destroying an old one.

Comment by james_miller on Mini map of s-risks · 2017-07-08T22:34:46.898Z · score: 4 (4 votes) · LW · GW

What if the solution to the Fermi paradox is that s-risks cause all sufficiently advanced civilizations to destroy themselves and leave no trace that could be used to resurrect and then torture them?

Comment by james_miller on Against lone wolf self-improvement · 2017-07-07T20:52:49.865Z · score: 2 (2 votes) · LW · GW

Excellent question that might determine the medium term fate of my profession (college professors).

Comment by james_miller on Against lone wolf self-improvement · 2017-07-07T17:37:24.746Z · score: 4 (4 votes) · LW · GW

This relates to why online education hasn't replaced traditional schools. As I wrote for InsideHigherEd:

"But having a real-human teacher watch them causes most students to pay more attention, and this comes without any cost in rigor. Just by sitting next to my son I can increase his level of attention and I suspect the same is true with most learning. So even if online education drastically improves, and is able to present in a fascinating manner everything currently taught in college courses, having an instructor -- plus online material -- would allow courses to teach students even more than most of these students could learn from the online courses alone."