Comment by alex_altair on Meetup : Maine: Automatic Cognition · 2015-06-19T17:09:08.852Z · score: 0 (0 votes) · LW · GW

Sorry I can't be there for this! The Bay stole me three years ago. But it's cool to see a meetup there.

Comment by alex_altair on The Bay Area Solstice · 2014-12-23T14:14:26.364Z · score: 2 (2 votes) · LW · GW

That is correct!

## The Bay Area Solstice

2014-12-03T22:33:17.760Z · score: 21 (23 votes)
Comment by alex_altair on Why CFAR? · 2013-12-29T04:56:48.509Z · score: 8 (8 votes) · LW · GW

In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.

I'd love to hear more detailed plans or ideas for achieving these.

we’ll be devoting more resources to epistemic curriculum development

This is really exciting! I think people tend to have a lot more epistemic rationality than instrumental rationality, but that they still don't have enough epistemic rationality to care about x-risk or other EA goals.

Comment by alex_altair on Why CFAR? · 2013-12-29T03:31:18.220Z · score: 9 (11 votes) · LW · GW

MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.

Comment by alex_altair on Why CFAR? · 2013-12-28T23:18:25.599Z · score: 7 (7 votes) · LW · GW

Excellent post! I wish my donation didn't have to wait a few months.

Comment by alex_altair on Book Review: Naïve Set Theory (MIRI course list) · 2013-10-03T20:16:54.443Z · score: 1 (1 votes) · LW · GW

The material covered in Causality is more like a subset of that in PGM. PGM is like an encyclopedia, and Causality is a comprehensive introduction to one application of PGMs.

Comment by alex_altair on How to Have Space Correctly · 2013-06-23T21:18:59.545Z · score: 3 (3 votes) · LW · GW

Maybe just add a section with a few more examples or some advice. The post was a quick read for me; I could have handled more.

Comment by alex_altair on How to Have Space Correctly · 2013-06-23T21:17:38.750Z · score: 4 (6 votes) · LW · GW

I love this post.

spatial arrangements that simplify perception

This is why you should make your bed in the morning. Also this is why paragraphs exist. And why math notation isn't linear. And why parentheses look like they're encircling the text. And periods and kerning and oh god I can't stop coming up with examples

I'm an extremely visual thinker, and I think I think about these things all the time. I wonder, though, whether this stuff is as useful to people who aren't visual thinkers. I've had disagreements with people over which heuristics to use that came down to the fact that the other person wasn't a visual thinker.

I also find that this has huge application to my computer usage. For example, I always keep my cursor off the text. I always keep the line I'm reading at the top of the window. And I strategically place my chat windows in the margins of websites so I can see them while reading the site.

Comment by alex_altair on Good luck, Mr. Rationalist · 2013-04-29T21:50:33.427Z · score: 6 (6 votes) · LW · GW

I prefer the variant, "May you choose wisely." Also "May your premises be sound."

Comment by alex_altair on A thought-process testing opportunity · 2013-04-23T03:01:24.875Z · score: 9 (11 votes) · LW · GW

Saw the video before this post, thought to make a prediction, and was correct! :D

Comment by alex_altair on Standard and Nonstandard Numbers · 2013-04-19T17:05:50.180Z · score: 3 (3 votes) · LW · GW
Comment by alex_altair on Cold fusion: real after all? · 2013-04-18T03:25:11.630Z · score: 3 (3 votes) · LW · GW

I once had my friend calculate the probability of a single pair of hydrogen nuclei fusing in the reaction of 2H2 with O2 in a balloon (which produces a cool boom resulting in water vapor). Despite the enormous number of atoms, and the fact that at the high energy tail of the distribution some fraction of atoms should be going really fast, the probability that any were going fast enough to fuse was e^-somethinghuge.
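For a rough sense of the scale involved, here is a back-of-the-envelope sketch (my own illustrative numbers, not the friend's original calculation): compare an order-of-magnitude Coulomb-barrier energy for two hydrogen nuclei to the thermal energy at a hydrogen-flame temperature via the Boltzmann factor.

```python
import math

# Rough, illustrative numbers (not the original calculation):
k_B = 8.617e-5       # Boltzmann constant in eV/K
T_flame = 3000.0     # approximate H2/O2 flame temperature in K
E_barrier = 400e3    # ~order-of-magnitude Coulomb barrier for p-p fusion, in eV

# Boltzmann suppression factor for a nucleus in the thermal tail
# reaching the barrier energy: exp(-E / k_B T)
exponent = E_barrier / (k_B * T_flame)
print(f"suppression ~ exp(-{exponent:.3g})")  # exp(-~1.5e6)

# Even an enormous number of molecules can't compensate:
# ln(10^25) is only ~58, utterly dwarfed by the exponent.
n_molecules = 1e25
print(math.log(n_molecules))
```

So "e^-somethinghuge" is about right: the tail-population boost from having ~10^25 molecules is exponentially negligible next to the Boltzmann suppression.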

Comment by alex_altair on Help us name the Sequences ebook · 2013-04-16T02:27:38.317Z · score: 7 (9 votes) · LW · GW

Slight rework: From AI to Zombies: Thinking Clearly About Truth, Morality, and Winning in This and Other Worlds.

Comment by alex_altair on Why AI may not foom · 2013-03-25T02:15:21.408Z · score: 0 (0 votes) · LW · GW

Technically, any decision problem implemented by a circuit is at least O(n), because that's how the length of the wires scales.

That is a pretty cool idea.

Comment by alex_altair on Boring Advice Repository · 2013-03-07T13:06:52.515Z · score: 6 (6 votes) · LW · GW

What is the actual evidence for this? I've only heard gwern say that Kevin said it was good. Google thinks it's for everything but mental performance.

Comment by alex_altair on Idea: Self-Improving Task Management Software · 2013-02-27T17:33:13.621Z · score: 4 (4 votes) · LW · GW

This sounds like an awesome idea. It also sounds like a computer-assisted human EURISKO fooming device.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-02-15T20:57:35.187Z · score: 3 (3 votes) · LW · GW

Unfortunately not enough of an effect for me to claim anything. I do like it brighter though, so I will continue using the lights.

Comment by alex_altair on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-01T21:07:49.590Z · score: 2 (2 votes) · LW · GW

You can profit, but that's not the goal of normative rationality. We want to maximize utility.

Comment by alex_altair on Naturalism versus unbounded (or unmaximisable) utility options · 2013-02-01T18:49:49.005Z · score: 8 (8 votes) · LW · GW

This is like the supremum-chasing Alex Mennen mentioned. It's possible that normative rationality simply requires that your utility function satisfy the condition he mentioned, just as it requires the VNM axioms.

I'm honestly not sure. It's a pretty disturbing situation in general.

Comment by alex_altair on Save the princess: A tale of AIXI and utility functions · 2013-02-01T18:06:39.050Z · score: 2 (2 votes) · LW · GW

Possible model of semi-dualism:

The agent could have in its map that it is a computer box subject to physical laws, but only above the level where information processing occurs. That is, it could know that its memory was in a RAM stick, which was subject to material breaking and melting, but not have any predictions about inspecting individual bits of that stick. It could know that that box was itself by TDT-type analysis. Unfortunately this model isn't enough for hardware improvements; it would know that adding RAM was theoretically possible, but it wouldn't know the information-theoretic implications of how its memory protocols would react to added RAM.

Now that I think about it, that's kind of what humans do. Except, to get to the point where we can learn that we are a thing that can be damaged, we need parents and pain-mechanisms to keep us safe.

Comment by alex_altair on Singularity Institute is now Machine Intelligence Research Institute · 2013-01-31T17:34:27.287Z · score: 22 (22 votes) · LW · GW

My impression is that anyone who has ever heard of Singularity University doesn't even have it in their hypothesis space that you mean something different when you say Singularity Institute.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-20T16:43:18.719Z · score: 0 (0 votes) · LW · GW

Completely non-rigorously. I'll probably just look at my hours, and reflect on how I feel about its effect.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T19:56:32.828Z · score: 2 (2 votes) · LW · GW

Sample sizes are dismal, but at least they tried. Thanks for looking this up!

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T19:47:00.609Z · score: 12 (12 votes) · LW · GW

We bought three of these.

We've tried to put them around the room evenly. You can see all three in the right picture (one in the foreground). We'll report back in a few weeks about how much it helped.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T19:05:48.533Z · score: 2 (2 votes) · LW · GW

Haha, you beat me to buying, but I beat you to installing.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T17:57:48.478Z · score: 1 (3 votes) · LW · GW

Fiat lux!

You must be a latin scholar.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T17:54:48.313Z · score: 24 (24 votes) · LW · GW

Anja and I just went out and bought three 120W lights (for construction sites; we had trouble getting good bulbs).

Here at the Singularity Institute, we take ideas seriously.

Comment by alex_altair on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T13:52:36.794Z · score: 10 (10 votes) · LW · GW

I immediately like this idea.

Comment by alex_altair on Open Thread, January 16-31, 2013 · 2013-01-16T16:36:41.501Z · score: 1 (3 votes) · LW · GW

Can someone make a Chrome extension that tells me when LW gets a new article (either in discussion or main)? That would help me not check it 100 times a day. It could be as simple as lighting up when this link changes.
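In the meantime, a minimal polling approach would do the job (a sketch; the feed URL is my assumption about the site's setup, not something from the comment): fetch the site's RSS feed periodically and flag whenever the newest item changes.

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://lesswrong.com/feed.xml"  # hypothetical feed URL

def newest_item_id(feed_xml: bytes) -> str:
    """Return the link of the first <item> in an RSS feed, or "" if none."""
    root = ET.fromstring(feed_xml)
    item = root.find("./channel/item")
    return item.findtext("link") if item is not None else ""

def watch(interval_s: int = 300) -> None:
    """Poll the feed and print a notice whenever a new article appears."""
    last = None
    while True:
        with urllib.request.urlopen(FEED_URL) as resp:
            current = newest_item_id(resp.read())
        if last is not None and current != last:
            print("New article:", current)
        last = current
        time.sleep(interval_s)
```

A browser extension would just wrap the same check in a background script with a badge update instead of a print.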

Comment by alex_altair on A fungibility theorem · 2013-01-12T15:48:27.832Z · score: 0 (0 votes) · LW · GW

Thanks for writing this up! It's really too bad that we couldn't do better than Pareto optimal. (I also think this is mathematically the same as Harsanyi's theorem, but this writeup worked better for me.)

Comment by alex_altair on Course recommendations for Friendliness researchers · 2013-01-09T15:30:45.204Z · score: 7 (7 votes) · LW · GW

Thanks for posting this! I love how you put course numbers on the left, to make it extra-actionable!

Comment by alex_altair on A utility-maximizing varient of AIXI · 2013-01-04T09:58:42.207Z · score: 0 (0 votes) · LW · GW

Making the assumption that...

Yeah, I was intentionally vague with "the probabilistic nature of things". I am also thinking about how any AI will have logical uncertainty, uncertainty about the precision of its observations, et cetera, so that as it considers further points in the future, its distribution becomes flatter. And having a non-dualist framework would introduce uncertainty about the agent's self, its utility function, its memory, ...

Comment by alex_altair on A utility-maximizing varient of AIXI · 2013-01-03T21:59:00.644Z · score: 0 (0 votes) · LW · GW

This is great!

I really like your use of $U(q, y_{1:\infty})$. This seems to be an important step along the way to eliminating the horizon problem. I recently read in Orseau and Ring's "Space-Time Embedded Intelligence" that in another paper, "Provably Bounded-Optimal Agents" by Russell and Subramanian, they define $V(\pi, q) = u(h(\pi, q))$, where $h(\pi, q)$ generates the interaction history $ao_{1:\infty}$. (I have yet to read this paper.)

Your counterexample of supremum chasing is really great; it breaks my model of how utility maximization is supposed to go. I'm honestly not sure whether one should chase the path of U = 0 or not. This is clouded by the fact that the probabilistic nature of things will probably push you off that path eventually.

The dilemma reminds me a lot of exploring versus exploiting. Sometimes it seems to me that the rational thing for a utility maximizer to do, almost independent of the utility function, would be to just maximize the resources it controlled, until it found some "end" or limit, and then spend all its resources creating whatever it wanted in the first place. In the framework above we've specified that there is no "end" time, and AIXI is dualist, so there are no worries of it getting destroyed.

There's something else that is strange to me. If we are considering infinite interaction histories, then we're looking at the entire binary tree at once. But this tree has uncountably many paths! Almost all of the (infinitely many) paths are incomputable sequences. This means that any computable AI couldn't even consider traversing them. And it also seems to have interesting implications for the utility function. Does it only need to be defined over computable sequences? What if we have utility over incomputable sequences? These could be defined by second-order logic statements, but remain incomputable. It gives me lots of questions.
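The uncountability claim is just Cantor's diagonal argument; sketching it here for concreteness (standard material, not from the thread):

```latex
\[
\text{Paths through the infinite binary tree} \;=\; \{0,1\}^{\omega}.
\]
Given any purported enumeration $s_1, s_2, s_3, \ldots$ of $\{0,1\}^{\omega}$,
define $d \in \{0,1\}^{\omega}$ by $d_n = 1 - (s_n)_n$. Then $d$ differs from
every $s_n$ at position $n$, so no enumeration exists:
\[
\left|\{0,1\}^{\omega}\right| = 2^{\aleph_0} > \aleph_0 .
\]
Since there are only countably many programs, only countably many of these
paths are computable, so almost all paths are incomputable.
```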

Comment by alex_altair on A definition of wireheading · 2012-11-29T06:27:24.763Z · score: 2 (2 votes) · LW · GW

I'm curious, what other options are you thinking of?

Comment by alex_altair on Causal Universes · 2012-11-28T18:23:53.553Z · score: 0 (0 votes) · LW · GW

by the way you have a typo

Fixed.

Comment by alex_altair on Mathematical Measures of Optimization Power · 2012-11-27T04:48:02.927Z · score: 2 (2 votes) · LW · GW

I can't think of a way of fitting a forest fire into this model either, which suggests it isn't useful to think of forest fires under this paradigm.

Forest fires are definitely OPs under my intuitive concept. They consistently select a subset of possible futures (burnt forests). They're probably something like chemical energy minimizers; if I were to measure their efficacy, it would be something like the number of carbon-based molecules turned into CO2. But the only reason we can come up with semi-formal measures like CO2 molecules or output on wires is that we're smart human-things. I want to figure out how to measure it algorithmically.

Isn't the crux of the decision-making process pretending that you could choose any of your options, even though, as a matter of fact, you will choose one?

Yes. But what does "could" mean? It doesn't mean that they all have equal probability. If literally all you know is that there are n outputs, then giving them 1/n weight is correct. But we usually know more, like the fact that it's an AI, and it's unclear how to update on this.

Are you saying that an "AI" outputting random noise could do worse than an "AI" with optimization power measured at zero (i.e. zero intelligence)?

Absolutely. Like how random outputs of a car cause it to jerk around and hit things, whereas a zero-capability car just sits there. Also, we're averaging over all possible outputs with equal weights. Even if most outputs are neutral or harmless, there are usually more damaging outputs than good ones. It's generally easier to harm than to help. The more powerful the AI's actuators, the more damage random outputs will do.
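A toy illustration of why a flat 1/n prior over outputs makes a questionable baseline (my own hypothetical numbers, not anything from the thread): when harmful outputs outnumber helpful ones, random output scores below doing nothing.

```python
# Toy outcome space (hypothetical numbers): one strongly helpful
# output and many mildly harmful ones.
utilities = [10.0] + [-2.0] * 9          # 10 possible outputs
uniform_eu = sum(utilities) / len(utilities)   # flat 1/n prior

do_nothing_eu = 0.0

# Random output sits below "do nothing", so it is a poor
# zero point for measuring optimization power against.
print(uniform_eu)                    # -0.8
print(uniform_eu < do_nothing_eu)    # True
```

Under this baseline, an agent that merely sits still would be credited with positive optimization power, which seems off.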

Comment by alex_altair on Mathematical Measures of Optimization Power · 2012-11-26T22:22:50.037Z · score: 2 (2 votes) · LW · GW

We considered random output as a baseline. It doesn't seem correct, to me.

1) You'd need a way to even specify the set of outputs of any possible OP. This seems hard to me because many OPs do not have clear boundaries or enumerable output channels, like forest fires or natural selection or car factories.

2) This is equal to a flat prior over your OP's outputs. You'd need some kind of specification of which possibilities are equally likely, and a justification thereof.

3) Even if we consider an AGI with well-defined output channels, it seems to me that random outputs are potentially very very very destructive, and therefore not the "default" or "status quo" against which we should measure.

I think the idea should be explored more, though.

Comment by alex_altair on Mathematical Measures of Optimization Power · 2012-11-24T19:49:58.054Z · score: 0 (0 votes) · LW · GW

which definitely seems like it should be possible.

Yeah, this normal type of negative OP is possible with OP = EU'/|EU| and EU positive but decreasing. I'm worried about the weirdness of decreasing and negative EU.

## Mathematical Measures of Optimization Power

2012-11-24T10:55:17.145Z · score: 5 (9 votes)
Comment by alex_altair on How can I reduce existential risk from AI? · 2012-11-17T21:36:57.056Z · score: 0 (0 votes) · LW · GW

Are you suggesting that we encourage consumers to have safety demands? I'm not sure this will work. It's possible that consumers are too reactionary for this to be helpful. Also, I think AI projects will be dangerous before reaching the consumer level. We want AGI researchers to think about safety before they even develop the theory.

Comment by alex_altair on LessWrong help desk - free paper downloads and more · 2012-11-14T19:25:28.132Z · score: 0 (0 votes) · LW · GW

Yay!

Comment by alex_altair on LessWrong help desk - free paper downloads and more · 2012-11-14T01:31:40.995Z · score: 0 (0 votes) · LW · GW

Computable surreals anyone?

http://www.jstor.org/discover/10.2307/2586835?uid=3739560&uid=2&uid=4&uid=3739256&sid=21101434675947

Comment by alex_altair on How can I reduce existential risk from AI? · 2012-11-14T00:01:38.369Z · score: 1 (1 votes) · LW · GW

Yeah, quite possibly. But I wouldn't want people to run into analysis paralysis; I still think safety promotion is very likely to be a great way to reduce x-risk.

Comment by alex_altair on How can I reduce existential risk from AI? · 2012-11-12T06:54:16.790Z · score: 14 (14 votes) · LW · GW

I expect significant strategic insights to come from the technical work (e.g. FAI math).

Interesting point. I'm worried that, while FAI math will help us understand what is dangerous or outsourceable from our particular path, many other paths to AGI are possible, and we won't learn from FAI math which of those other paths are dangerous or likely.

I feel like one clear winning strategy is safety promotion. It seems that almost no bad can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only regular human skills of networking, persuasion, et cetera.

Comment by alex_altair on How can I reduce existential risk from AI? · 2012-11-11T23:32:28.916Z · score: 15 (15 votes) · LW · GW

Thanks for putting all this stuff in one place!

It makes me kind of sad that we still have more or less no answer to so many big, important questions. Does anyone else share this worry?

Comment by alex_altair on 2012 Less Wrong Census/Survey · 2012-11-11T21:47:31.747Z · score: 13 (13 votes) · LW · GW

Took it.

I wish there were more questions! Non-jokingly, I wish there were more questions about FAI, MWI, and other complex content things. I want more people to pick my brain and tell me if I'm consistent.

Comment by alex_altair on AI risk-related improvements to the LW wiki · 2012-11-07T22:03:06.600Z · score: 2 (2 votes) · LW · GW

I am pretty excited about the AI risk wiki.

Comment by alex_altair on Causal Diagrams and Causal Models · 2012-10-12T19:56:03.949Z · score: 1 (1 votes) · LW · GW

Fixed! Thanks for being persistent.

Comment by alex_altair on Causal Diagrams and Causal Models · 2012-10-12T17:13:35.017Z · score: 2 (2 votes) · LW · GW

Fixed.

Comment by alex_altair on Causal Diagrams and Causal Models · 2012-10-12T17:13:13.024Z · score: 1 (1 votes) · LW · GW

Fixed.

Comment by alex_altair on Causal Diagrams and Causal Models · 2012-10-12T16:56:21.743Z · score: 0 (2 votes) · LW · GW

This makes me think "T Python Operating System".

## Modifying Universal Intelligence Measure

2012-09-18T23:44:08.864Z · score: 2 (7 votes)

## An Intuitive Explanation of Solomonoff Induction

2012-07-11T08:05:20.544Z · score: 66 (62 votes)

## Should LW have a separate AI section?

2012-07-10T01:42:39.259Z · score: 9 (14 votes)

## How Bayes' theorem is consistent with Solomonoff induction

2012-07-09T22:16:02.312Z · score: 9 (14 votes)

## Computation Hazards

2012-06-13T21:49:19.986Z · score: 15 (20 votes)

## How do you notice when you're procrastinating?

2012-03-02T09:25:08.917Z · score: 4 (7 votes)

## [LINK] The NYT on Everyday Habits

2012-02-18T08:23:32.820Z · score: 6 (9 votes)

## [LINK] Learning enhancement using "transcranial direct current stimulation"

2012-01-26T16:18:55.714Z · score: 7 (10 votes)