Posts

Foundations of Inference 2011-10-31T19:48:33.263Z

Comments

Comment by amcknight on [LINK] Updating Drake's Equation with values from modern astronomy · 2016-06-15T21:11:51.590Z · LW · GW

But the lower bound of this is still well below one. We can't use our existence in the light cone to infer that there's at least about one civilization per light cone. There can be arbitrarily many empty light cones.

Comment by amcknight on [LINK] Updating Drake's Equation with values from modern astronomy · 2016-06-14T00:44:10.289Z · LW · GW

They use the number of stars in the observable universe instead of the number of stars in the whole universe. This ruins their calculation. I wrote a little more here.

Comment by amcknight on The Brain Preservation Foundation's Small Mammalian Brain Prize won · 2016-03-09T02:01:42.856Z · LW · GW

Here's an eerie line showing about 200 new Cryonics Institute members every 3 years.

Comment by amcknight on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-01-28T00:53:53.129Z · LW · GW

Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the skeptics community and lots of other folks, and kind of work as a pretty front-end for GiveWell. More here. (Full disclosure: I'm on their Board of Directors.)

Comment by amcknight on Questions of Reasoning under Logical Uncertainty · 2015-01-15T08:23:12.237Z · LW · GW

A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think 'epistemic possibility' is used in philosophy in approximately the way you're using the term.

Comment by amcknight on January Monthly Bragging Thread · 2014-11-08T07:58:08.928Z · LW · GW

Links are dead. Is there anywhere I can find your story now?

Comment by amcknight on 2014 Less Wrong Census/Survey · 2014-10-28T04:49:03.137Z · LW · GW

Done! Ahhh, another year, another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.

Comment by amcknight on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-25T03:05:56.315Z · LW · GW

I don't have an answer but here's a guess: For any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate then my (unreliable stats) intuition tells me that a prior on the kill rate distribution should be log-normal. I think this suggests that most of the killing happens on the left-most outlier but someone better at stats should check my assumptions.
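
A quick simulation sketch of that intuition (every parameter below is made up purely for illustration, and this is only a check of the guess, not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative: 20 independent filters per world, kill rates drawn
# log-normally (heavy tail) and clipped to [0, 1].
n_worlds, n_filters = 10_000, 20
kill = np.clip(rng.lognormal(mean=-4.0, sigma=2.0,
                             size=(n_worlds, n_filters)), 0.0, 1.0)

# Survival requires passing every filter.
survive = np.prod(1.0 - kill, axis=1)

# Does a single outlier filter do most of the killing in each world?
worst = kill.max(axis=1)
total_kill = 1.0 - survive
dominated = worst > 0.9 * total_kill  # worst filter accounts for ~all killing
print(f"median survival probability: {np.median(survive):.3g}")
print(f"worlds where one filter dominates the killing: {dominated.mean():.1%}")
```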

Comment by amcknight on Donating to MIRI vs. FHI vs. CEA vs. CFAR · 2014-01-06T05:59:20.698Z · LW · GW

It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?

Comment by amcknight on Why CFAR? · 2013-12-31T06:43:24.967Z · LW · GW

From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.

More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.

I'm extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?

Comment by amcknight on 2013 Less Wrong Census/Survey · 2013-12-06T05:01:28.184Z · LW · GW

Finally did it. I'd like exactly 7 karma please.

Comment by amcknight on MIRI's 2013 Summer Matching Challenge · 2013-07-28T20:54:13.546Z · LW · GW

For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.

Do you have any plans to tackle the humane values problem? Do MIRI-folk have strong opinions on which direction is most promising? My worry is that if this problem really is as intractable as it seems, then working on problem (2) is not helpful, and our only option might be to prevent AGI from being developed through global regulation and other very difficult means.

Comment by amcknight on Start Under the Streetlight, then Push into the Shadows · 2013-07-04T17:51:18.678Z · LW · GW

Are you thinking of this 80k hours post?

Comment by amcknight on Effective Altruism Through Advertising Vegetarianism? · 2013-06-18T23:59:02.307Z · LW · GW

This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations

I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making dietary restrictions, it becomes much easier to add more, and once a person starts learning where their food comes from, it becomes easier to find reasons for further restrictions (even dumb reasons).

Comment by amcknight on A Proposed Adjustment to the Astronomical Waste Argument · 2013-05-28T19:29:43.647Z · LW · GW

Value drift fits your constraints. Our ability to drift accelerates as enhancement technologies increase in power. If values drift substantially and in undesirable ways because of, e.g., peacock contests, then (a) our values lose what control they currently have, (b) we could lose significant utility because of the fragility of value, (c) it is not an extinction event, and (d) it seems as easy to affect as x-risk reduction.

Comment by amcknight on Pay other people to go vegetarian for you? · 2013-04-13T08:22:01.898Z · LW · GW

I can't figure out what you mean by:

Hiding animal suffering probably makes us "more ethical".

Do you mean that it just makes us appear more ethical?

Comment by amcknight on An attempt to dissolve subjective expectation and personal identity · 2013-02-26T06:24:16.782Z · LW · GW

One major difference is that you are talking about what to care about and Eliezer was talking about what to expect.

Comment by amcknight on Morality is Awesome · 2013-02-13T05:47:05.096Z · LW · GW

What do you mean?

Comment by amcknight on A reply to Mark Linsenmayer about philosophy · 2013-01-05T20:13:41.499Z · LW · GW

According to the PhilPapers survey results, 4.3% believe in idealism (i.e. Berkeley-style reality).

Comment by amcknight on How to Disentangle the Past and the Future · 2013-01-03T22:55:45.368Z · LW · GW

This seems to me like a major spot where the dualistic model of self-and-world gets introduced into reinforcement learning AI design (which leads to the Anvil Problem). It seems possible to model memory as part of the environment by simply adding I/O actions to the list of actions available to the agent. However, if you want to act upon something read, you either need to model this by having atomic read-and-if-X-do-Y actions, or you still need some minimal memory inside the agent to store the previous item(s) read.
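
A minimal sketch of the first option, with every name hypothetical (this is not any particular RL framework's API): memory lives in the environment, and the agent touches it only through write actions and atomic read-and-branch actions, so nothing read is ever stored inside the agent.

```python
class TrivialBase:
    """Stand-in base environment with two actions (purely illustrative)."""
    def actions(self): return ["left", "right"]
    def observe(self): return 0
    def step(self, a): return self.observe(), (1.0 if a == "right" else 0.0)


class ExternalMemoryEnv:
    """Wraps a base environment so memory is part of the world,
    reachable only via explicit I/O actions (illustrative sketch)."""

    def __init__(self, base, n_cells=2):
        self.base = base
        self.cells = [0] * n_cells  # memory modeled inside the environment

    def actions(self):
        base = list(self.base.actions())
        writes = [("write", i, x)
                  for i in range(len(self.cells)) for x in (0, 1)]
        # Atomic read-and-if-X-do-Y actions: ("branch", i, x, a, b) means
        # "do base action a if cell i holds x, else do b". The read never
        # leaves the environment, so the agent needs no internal memory.
        branches = [("branch", i, x, a, b)
                    for i in range(len(self.cells)) for x in (0, 1)
                    for a in base for b in base]
        return base + writes + branches

    def step(self, action):
        if isinstance(action, tuple) and action[0] == "write":
            _, i, x = action
            self.cells[i] = x
            return self.base.observe(), 0.0  # bookkeeping, no reward
        if isinstance(action, tuple) and action[0] == "branch":
            _, i, x, a, b = action
            return self.base.step(a if self.cells[i] == x else b)
        return self.base.step(action)


env = ExternalMemoryEnv(TrivialBase())
env.step(("write", 0, 1))
print(env.step(("branch", 0, 1, "right", "left")))  # -> (0, 1.0)
```

Note the cost: the action set blows up combinatorially as the branch actions multiply, which is part of why keeping some minimal memory in the agent is tempting.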

Comment by amcknight on The Relation Projection Fallacy and the purpose of life · 2013-01-01T02:02:46.950Z · LW · GW

Let's say you think a property, like 'purpose', is a two-parameter function and someone else tells you it's a three-parameter function. An interesting thing to do is to accept that it is a three-parameter function and then ask yourself which of the following holds:
1) The third parameter is useless, and however it varies, the output doesn't change.
2) There is a special input you've been assuming is the 'correct' input, which allowed you to treat the function as if it were a two-parameter function.
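
A tiny code illustration of case 2, with hypothetical names: the familiar "two-parameter" function is often just the three-parameter one with a privileged third argument silently baked in.

```python
from functools import partial

# Hypothetical three-parameter version: purpose is relative to some agent.
def purpose(thing, activity, for_whom):
    return f"The purpose of {thing} is {activity}, for {for_whom}."

# Case 2: the "two-parameter" function is the three-parameter one
# with a special input assumed to be the 'correct' one.
purpose_for_us = partial(purpose, for_whom="humans")
print(purpose_for_us("a hammer", "driving nails"))

# Case 1 would instead mean: varying for_whom never changes the output.
```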

Comment by amcknight on Mixed Reference: The Great Reductionist Project · 2012-12-08T21:24:12.255Z · LW · GW

MrMind is talking about an "oracle" in the sense of a mathematical tool. Oracles in this sense are well-defined things that can do stuff traditional computers can't.

Comment by amcknight on Mixed Reference: The Great Reductionist Project · 2012-12-08T21:16:10.434Z · LW · GW

This crossed my mind, but I thought there might be other deeper reasons.

Comment by amcknight on Mixed Reference: The Great Reductionist Project · 2012-12-07T07:01:14.590Z · LW · GW

where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form.

Can anyone say a bit more about why physical references would need to be described 'effectively'/computably? Is this based on the assumption that the physical universe must be computable?

Comment by amcknight on 2012 Less Wrong Census/Survey · 2012-11-20T08:20:55.313Z · LW · GW

C83

I'm jealous

Comment by amcknight on Checklist of Rationality Habits · 2012-11-08T22:43:19.233Z · LW · GW

For the slightly more advanced procrastinator that also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.

Comment by amcknight on Desired articles on AI risk? · 2012-11-08T22:09:41.580Z · LW · GW

A Survey of Mathematical Ethics which covers work in multiple disciplines. I'd love to know what parts of ethics have been formalized enough to be written mathematically and, for example, any impossibility results that have been shown.

Comment by amcknight on LessWrong help desk - free paper downloads and more · 2012-11-02T00:46:43.250Z · LW · GW

I would be happy to be able to read "Procrastination and the five-factor model: a facet level analysis" (ScienceDirect, IngentaConnect). (I'm not sure if adding these links helps you guys, but here they are anyways.)

Comment by amcknight on Looking for alteration suggestions for the official Sequences ebook · 2012-10-17T01:21:05.770Z · LW · GW

Quantum mechanics and Metaethics are what initially drew me to LessWrong. Without them, the Sequences aren't as amazingly impressive, interesting, and downright bold. As solid as the other content is, I don't think the Sequences would be as good without these somewhat more speculative parts. This content might even be what really gets people talking about the book.

Comment by amcknight on Notes from Online Optimal Philanthropy Meetup: 12-10-09 · 2012-10-14T02:42:30.268Z · LW · GW

Another group I recommend investigating that is working on x-risk reduction is the Global Catastrophic Risk Institute, which was founded in 2011 and has been ramping up substantially over the last few months. As far as I can tell they are attempting to fill a role that is different from SIAI and FHI by connecting with existing think tanks that are already thinking about GCR related subject matter. Check out their research page.

Comment by amcknight on LessWrong help desk - free paper downloads and more · 2012-10-10T22:34:30.297Z · LW · GW

Churchland, Paul M., "State-space Semantics and Meaning Holism," Philosophy and Phenomenological Research (JStor, Philosophy Documentation Center).

Comment by amcknight on [SEQ RERUN] Measuring Optimization Power · 2012-10-08T07:02:25.685Z · LW · GW

Problems with this approach have been discussed here.

Comment by amcknight on The Useful Idea of Truth · 2012-10-04T17:44:41.266Z · LW · GW

Well it doesn't seem to be inconsistent with reality.

Comment by amcknight on The Useful Idea of Truth · 2012-10-04T04:59:51.464Z · LW · GW

I'm definitely having more trouble than I expected. Unicorns have 5 legs... does that count? You're making me doubt myself.

Comment by amcknight on The Useful Idea of Truth · 2012-10-02T22:49:01.922Z · LW · GW

I think this includes too much. It would include meaningless beliefs. "Zork is Pork." True or false? Consistency seems to me to be, at best, a necessary condition, but not a sufficient one.

Comment by amcknight on The Useful Idea of Truth · 2012-10-02T22:31:26.736Z · LW · GW

I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.

Comment by amcknight on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2012-10-01T05:28:33.401Z · LW · GW

It seems to me that we can mean things in both ways once we are aware of the distinction.

Comment by amcknight on Stupid Questions Open Thread Round 4 · 2012-08-27T01:53:23.436Z · LW · GW

What is anthropic information? What is indexical information? Is there a difference?

Comment by amcknight on r/HPMOR on heroic responsibility · 2012-08-21T22:41:20.576Z · LW · GW

Citations, please! I doubt that most dictators think they are benevolent and are consequentialists.

Comment by amcknight on Is Politics the Mindkiller? An Inconclusive Test · 2012-07-29T19:37:35.728Z · LW · GW

In the United States it's kind of neither. When you get an ID card there is a yes/no checkbox you need to check.

Comment by amcknight on Some simple existential risk math · 2012-07-24T21:16:12.228Z · LW · GW

In probabilistic graphical modeling, the win probability you describe is called a noisy-OR.
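
For reference, the standard noisy-OR combination, writing p_i for each independent cause's probability of producing the effect on its own (you win unless every independent path fails):

```latex
P(\mathrm{win}) = 1 - \prod_{i=1}^{n} (1 - p_i)
```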

Comment by amcknight on Magic players: "How do I lose?" · 2012-07-15T16:44:50.155Z · LW · GW

The opponent is attacking you with a big army. You have a choice: you can let the attack through and lose in two turns, or you can send your creature out to die in your defense and lose in three turns. If you were trying to postpone losing, you would send out the creature. But you're more likely to actually win if you keep your forces alive ... [a]nd so you ask "how do I win?" to remind yourself of that.

This specific bit on its own is probably quite fruitfully generalizable. We hold so many heuristics and subgoals that, after a long time, the brain may partially convert them into intrinsic values and top-level goals. When things get hairy, it's probably normal to lose sight of the initial purpose that generated those heuristics and goals, and to follow them when they no longer apply.

Comment by amcknight on Rationality Quotes July 2012 · 2012-07-03T04:54:52.283Z · LW · GW

I wouldn't use Wikipedia to get the gist of a philosophical view. I find it to be way off a lot of the time, this time included. Sorry I don't have a clear definition for you right now, though.

Comment by amcknight on Efficient philanthropy: local vs. global approaches · 2012-07-03T01:53:41.810Z · LW · GW

What is the risk from Human Evolution? Maybe I should just buy the book...

Comment by amcknight on Thoughts on the Singularity Institute (SI) · 2012-07-03T01:37:55.314Z · LW · GW

It is often a useful contribution for someone to assess an argument without necessarily countering its points.

Not really.

Comment by amcknight on Thoughts and problems with Eliezer's measure of optimization power · 2012-06-15T23:06:30.266Z · LW · GW

It seems that optimization power, as it's currently defined, would be a value that doesn't change with time (unless the agent's preferences change with time). This might be fine depending on what you're looking for, but the definition of optimization power that I'm looking for would allow an agent to gain or lose optimization power.

Comment by amcknight on Wanted: "The AIs will need humans" arguments · 2012-06-14T20:52:53.763Z · LW · GW

I've heard some sort of appreciation or respect argument: an AI would recognize that we built it and so respect us enough to keep us alive. One form of reasoning this might take is that an AI would notice that it wouldn't want to die if it created an even more powerful AI, and so wouldn't destroy its creators. I don't have a source though. I may have just heard these in conversations with friends.

Comment by amcknight on Thoughts and problems with Eliezer's measure of optimization power · 2012-06-11T18:51:47.104Z · LW · GW

I'm not an expert either. However, the OP function has nothing to do with ignorance or probabilities until you introduce them in the mixed states. It seems to me that this standard combining rule is not valid unless you're combining probabilities.

Comment by amcknight on Thoughts and problems with Eliezer's measure of optimization power · 2012-06-08T21:09:44.080Z · LW · GW

If OP were an entropy, then we'd simply do a weighted sum (1/2)(OP(X4) + OP(X7)) = (1/2)(1 + 3) = 2, and then add one extra bit of entropy to represent our (binary) uncertainty as to what state we were in, giving a total OP of 3.

I feel like you're doing something wrong here. You're mixing state distribution entropy with probability distribution entropy. If you introduce mixed states, shouldn't each mixed state be accounted for in the phase space that you calculate the entropy over?