Comment by madhatter on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-17T23:18:42.167Z · score: 0 (0 votes) · LW · GW

Is there no way to actually delete a comment? :)

Comment by madhatter on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-17T22:52:48.933Z · score: 0 (0 votes) · LW · GW

Never mind, this was stupid.

Comment by madhatter on Open thread, July 10 - July 16, 2017 · 2017-07-12T22:51:37.647Z · score: 0 (0 votes) · LW · GW

Where did the term at the top of page three of this paper, following "a team's chance of winning increases by", come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Comment by madhatter on Open thread, Jul. 03 - Jul. 09, 2017 · 2017-07-08T15:40:26.518Z · score: 0 (0 votes) · LW · GW

Where did the term at the top of page three of this paper, following "a team's chance of winning increases by", come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Comment by madhatter on Mini map of s-risks · 2017-07-08T15:23:10.314Z · score: 2 (2 votes) · LW · GW

Will it be feasible in the next decade or so to do real research into ensuring AI systems don't instantiate anything with a non-negligible level of sentience?

Comment by madhatter on Open thread, Jul. 03 - Jul. 09, 2017 · 2017-07-07T06:13:18.655Z · score: 0 (0 votes) · LW · GW

Two random questions.

1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?

2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?

Comment by madhatter on Open thread, June 26 - July 2, 2017 · 2017-07-01T02:11:32.741Z · score: 0 (0 votes) · LW · GW

Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.

A Call for More Policy Analysis

2017-06-25T14:24:25.100Z · score: 1 (1 votes)
Comment by madhatter on Idea for LessWrong: Video Tutoring · 2017-06-24T00:08:45.929Z · score: 0 (0 votes) · LW · GW

I agree - great idea!

Comment by madhatter on Open thread, June 5 - June 11, 2017 · 2017-06-05T22:31:22.669Z · score: 1 (1 votes) · LW · GW

Thoughts on Timothy Snyder's "On Tyranny"?

Comment by madhatter on Book recommendation requests · 2017-06-02T01:07:59.348Z · score: 0 (0 votes) · LW · GW

Anything not too technical about nanotechnology? (Current state, forecasts, etc.)

Comment by madhatter on Open thread, May 29 - June 4, 2017 · 2017-05-29T20:57:09.748Z · score: 0 (0 votes) · LW · GW

Well, "The set of all primes less than 100" definitely works, so we need to shorten this.

Comment by madhatter on [brainstorm] - What should the AGIrisk community look like? · 2017-05-28T15:43:33.673Z · score: 1 (1 votes) · LW · GW

More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that it should have nothing to do with it, but if an arms race unfortunately occurs, maybe having a government regulatory framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.

Fiction advice

2017-05-26T21:31:30.088Z · score: 1 (2 votes)
Comment by madhatter on Open thread, May 22 - May 28, 2017 · 2017-05-24T23:02:28.465Z · score: 0 (0 votes) · LW · GW

I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It's an interesting look at the art and science of forecasting, and those who repeatedly do it better than others.

AGI and Mainstream Culture

2017-05-21T08:35:12.656Z · score: 4 (5 votes)
Comment by madhatter on Reaching out to people with the problems of friendly AI · 2017-05-17T13:00:10.330Z · score: 0 (0 votes) · LW · GW

Wow, I hadn't thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won't start an arms race until we've figured out how to align them. Maybe we want the issue to remain largely a laughingstock.

Comment by madhatter on AI arms race · 2017-05-04T23:27:43.336Z · score: 0 (0 votes) · LW · GW

Sure. The ideas aren't fleshed out yet, just thrown out there:

http://lesswrong.com/r/discussion/lw/oyi/open_thread_may_1_may_7_2017/

Comment by madhatter on AI arms race · 2017-05-04T12:35:54.703Z · score: 0 (0 votes) · LW · GW

Stuart, since you're an author of the paper, I'd be grateful to know what you think about the ideas for variants that MrMind suggested in the open thread, as well as my idea of a government regulator parameter.

Comment by madhatter on Open thread, May. 1 - May. 7, 2017 · 2017-05-04T02:11:31.138Z · score: 0 (0 votes) · LW · GW

One idea I had was to introduce a parameter representing the actions of a governmental regulatory agency. Does this seem like a good variant?

Comment by madhatter on Open thread, May. 1 - May. 7, 2017 · 2017-05-03T21:41:21.187Z · score: 0 (0 votes) · LW · GW

Hi all,

A friend and I (undergraduate math majors) want to work on either exploring a variant or digging deeper into the model introduced in this paper:

http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Any ideas?

Comment by madhatter on Open thread, May. 1 - May. 7, 2017 · 2017-05-03T02:44:44.402Z · score: 0 (0 votes) · LW · GW

#1 is the smallest.

Comment by madhatter on Stupid Questions May 2017 · 2017-04-26T16:16:33.950Z · score: 1 (1 votes) · LW · GW

It becomes uncomfortable for me to stay in bed more than about half an hour after waking up.

Comment by madhatter on Stupid Questions May 2017 · 2017-04-26T06:07:13.507Z · score: 3 (3 votes) · LW · GW

Suppose it were discovered, with a high degree of confidence, that insects can suffer a significant amount, and that almost all insect lives are worse than never having been lived. What (if anything) would/should the response of the EA community be?

Comment by madhatter on Defining the normal computer control problem · 2017-04-26T02:49:04.047Z · score: 2 (2 votes) · LW · GW

This is a cool idea! My intuition says you probably can't completely solve the normal control problem without training the system to become generally intelligent, but I'm not sure. Also, I was under the impression that there is already a lot of work on this front from antivirus firms (e.g. spam filters).

Also, quick nitpick: We do for the moment "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.

Comment by madhatter on April '17 I Care About Thread · 2017-04-19T03:29:46.947Z · score: 0 (0 votes) · LW · GW

I'd like to see the end of state lotteries, although I know that's not gonna happen.

Comment by madhatter on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-17T11:06:14.879Z · score: 0 (0 votes) · LW · GW

Haha, yeah, I agree there are some practical problems.

I just think that, in the abstract, ad absurdum arguments are a logical fallacy. And of course most people on Earth (including myself) are intuitively appalled by the idea, but we really shouldn't be trusting our intuitions on something like this.

Comment by madhatter on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-16T13:01:20.045Z · score: 0 (0 votes) · LW · GW

Why?

Comment by madhatter on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-15T22:03:14.767Z · score: 0 (0 votes) · LW · GW

I have said before that I think consciousness research is not getting enough attention in EA, and I want to add another argument for this claim:

Suppose we find compelling evidence that consciousness is merely "how information feels from the inside when it is being processed in certain complex ways", as Max Tegmark claims (and Dan Dennett and others agree). Then, I argue, from a utilitarian perspective we should be compelled to create a superintelligent AI that is provably conscious, regardless of whether it is safe, and regardless of whether it kills us humans (or worse), if we know it will try to maximize the subjective happiness of itself and the subagents it creates.

The above isn't my argument (Sam Harris mentioned someone else arguing this) but I am claiming this is one reason why consciousness research is ethically important.

Comment by madhatter on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-12T17:44:12.068Z · score: 0 (0 votes) · LW · GW

No, at least not yet. That's a good point. But Facebook is a private company, so filtering content that goes against their policy wouldn't necessarily violate the constitution, right? I don't know the legal details, though; I could be completely wrong.

Comment by madhatter on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-11T22:47:34.622Z · score: 0 (0 votes) · LW · GW

I agree there is a big danger of slipping down the free-speech slope if we fight too hard against fake news. But I also think we need to consider a (successful) campaign by another nation to undermine the legitimacy of our elections an act of hostile aggression, and in times of war most people agree that some measured limitation of free speech can be justified.

Comment by madhatter on Net Utility and Planetary Biocide · 2017-04-10T17:15:52.069Z · score: 4 (4 votes) · LW · GW

Wow, that had for some reason never crossed my mind. That's probably a very bad sign.

Comment by madhatter on Net Utility and Planetary Biocide · 2017-04-09T23:14:27.434Z · score: 3 (3 votes) · LW · GW

Perhaps I was a bit misleading, but when I said the net utility of the Earth may be negative, I had in mind mostly fish and other animals that can feel pain. That was what Singer was talking about in the opening essays. I am fairly certain the net utility of humans is positive.

Comment by madhatter on Net Utility and Planetary Biocide · 2017-04-09T13:06:20.380Z · score: 1 (1 votes) · LW · GW

Thanks for your reply, username2. I am disheartened to see that "You're crazy" is still being used in the guise of a counterargument.

Why do you think the net utility of the world is either negative or undefined?

Net Utility and Planetary Biocide

2017-04-09T03:43:27.944Z · score: 10 (11 votes)
Comment by madhatter on "On the Impossibility of Supersized Machines" · 2017-04-01T08:50:20.901Z · score: 1 (1 votes) · LW · GW

This is fantastic

Comment by madhatter on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-27T12:22:01.009Z · score: 2 (2 votes) · LW · GW

Let me also add that while a sadist can parallelize torture, it's also possible to parallelize euphoria, so maybe that mitigates things to some extent.

Comment by madhatter on survey about biases in the investment context · 2017-03-25T14:50:02.263Z · score: 0 (0 votes) · LW · GW

Quick note: I put a 1 for the driving question because I don't drive.

Comment by madhatter on [Humor] In honor of our new overlord decision theory acronym FDT... · 2017-03-24T23:47:53.078Z · score: 0 (0 votes) · LW · GW

This is brilliant

Comment by madhatter on Making equilibrium CDT into FDT in one+ easy step · 2017-03-21T22:28:54.512Z · score: 1 (1 votes) · LW · GW

Can someone explain why UDT wasn't good enough? In what case does UDT fail? (Or is it just hard to approximate with algorithms?)

Comment by madhatter on [Error]: Statistical Death in Damascus · 2017-03-21T00:26:49.564Z · score: 0 (0 votes) · LW · GW

So wait, why is FDT better than UDT? Are there situations where UDT fails?

Superintelligence discussed on Startalk

2017-03-19T01:12:49.122Z · score: 1 (2 votes)
Comment by madhatter on Open thread, March 13 - March 19, 2017 · 2017-03-16T11:58:32.847Z · score: 0 (0 votes) · LW · GW

Well, suppose it increases awareness of the threat of AGI if we can prove that consciousness is not some mystical, supernatural phenomenon, because it would then be clearer that intelligence is just a matter of information processing.

Furthermore, I would imagine the ethical debate about creating artificial consciousness in a computer (mindcrime issues, etc.) would very quickly become a mainstream issue.

Comment by madhatter on Open thread, March 13 - March 19, 2017 · 2017-03-16T02:12:55.922Z · score: 0 (0 votes) · LW · GW

Is neuroscience research underfunded? I've been thinking more and more that trying to understand human consciousness has a huge expected value, and that maybe EA should pay it more attention.

Comment by madhatter on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-10T21:47:29.343Z · score: 1 (1 votes) · LW · GW

I read somewhere NK is collapsing, according to a top-level defector. Maybe it's best to wait things out.

Comment by madhatter on Stupid Questions February 2017 · 2017-02-09T01:35:21.215Z · score: 1 (1 votes) · LW · GW

Thanks for this topic! Stupid questions are my specialty, for better or worse.

1) Isn't cryonics extremely selfish? I mean, couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?

2) Would the human race be eradicated in a worst-possible-scenario nuclear incident, or would merely a lot of people be killed?

3) Is the study linking nut consumption to longevity found in the link below convincing?

http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094

And if so, is it worth a lot of effort promoting nut consumption in moderation?

Comment by madhatter on new study finds performance enhancing drugs for chess · 2017-01-27T19:17:51.461Z · score: 2 (2 votes) · LW · GW

Actually, as a tournament player I feel I can help explain the slowness:

The article suggests that this isn't due to increased computational speed or focus, but I think that's wrong. Playing slowly doesn't imply thinking slowly. In a chess game you have a fixed amount of time overall, and when a position is very complicated, players will often spend half an hour delving into variations and sub-variations. If it's hard to concentrate, they may instead rely on alternatives that require less calculation, and play faster.

Comment by madhatter on new study finds performance enhancing drugs for chess · 2017-01-27T00:42:03.757Z · score: 1 (1 votes) · LW · GW

I'm not surprised. But I also don't see much utility in this study; most people already believe that coffee helps them focus.