Comments
But the lower bound of this is still well below one. We can't use our existence in the light cone to infer there's at least about one per light cone. There can be arbitrarily many empty light cones.
They use the number of stars in the observable universe instead of the number of stars in the whole universe. This ruins their calculation. I wrote a little more here.
Here's an eerie line showing about 200 new Cryonics Institute members every 3 years.
Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the Skeptics community and lots of other folks and kind of work as a pretty front-end for GiveWell. More here. (Full disclosure, I'm on their Board of Directors.)
A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think 'epistemic possibility' is used in philosophy in approximately the way you're using the term.
Links are dead. Is there anywhere I can find your story now?
Done! Ahhh, another year, another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.
I don't have an answer but here's a guess: For any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate, then my (unreliable stats) intuition tells me that a prior on the kill rate distribution should be log-normal. I think this suggests that most of the killing happens at the most extreme outlier, but someone better at stats should check my assumptions.
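Here's a quick simulation sketch of that intuition (the number of filters and the log-normal sigma are arbitrary choices, and "severity" is just a stand-in for roughly -log of the survival probability through a filter):

```python
# Toy check of the heavy-tail intuition above: draw each filter's "severity"
# i.i.d. from a log-normal and see how much of the total the largest one explains.
import numpy as np

rng = np.random.default_rng(0)
n_filters, n_trials = 20, 10_000

severities = rng.lognormal(mean=0.0, sigma=2.0, size=(n_trials, n_filters))
share_of_largest = severities.max(axis=1) / severities.sum(axis=1)

print(f"median share of total severity from the single largest filter: "
      f"{np.median(share_of_largest):.2f}")
# With a wide sigma, the single most extreme filter typically accounts for most
# of the total, matching the "one dominant outlier" intuition.
```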
It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?
From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.
More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.
I'm extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?
Finally did it. I'd like exactly 7 karma please.
For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.
Do you have any plans to tackle the humane values problem? Do MIRI-folk have strong opinions on which direction is most promising? My worry is that if this problem really is as intractable as it seems, then working on problem (2) is not helpful, and our only option might be to prevent AGI from being developed through global regulation and other very difficult means.
Are you thinking of this 80k hours post?
This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations.
I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).
Value drift fits your constraints. Our ability to drift accelerates as enhancement technologies increase in power. If values drift substantially and in undesirable ways because of, e.g., peacock contests, then (a) our values lose what control they currently have, (b) we could lose significant utility because of the fragility of value, (c) it is not an extinction event, and (d) it seems as easy to effect as x-risk reduction.
I can't figure out what you mean by:
Hiding animal suffering probably makes us "more ethical".
Do you mean that it just makes us appear more ethical?
One major difference is that you are talking about what to care about and Eliezer was talking about what to expect.
According to the PhilPapers survey results, 4.3% believe in idealism (i.e. Berkeley-style reality).
This seems to me like a major spot where the dualistic model of self-and-world gets introduced into reinforcement learning AI design (which leads to the Anvil Problem). It seems possible to model memory as part of the environment by simply adding I/O actions to the list of actions available to the agent. However, if you want to act upon something read, you either need to model this by having atomic read-and-if-X-do-Y actions, or you still need some minimal memory to store the previous item(s) read in.
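To make the "memory as part of the environment" idea concrete, here's a minimal sketch (the class and action names are hypothetical, not any particular RL library's API):

```python
# Sketch: the agent's action space is extended with explicit READ/WRITE actions
# on an external memory cell, instead of giving the agent internal state.
from dataclasses import dataclass, field

@dataclass
class MemoryAugmentedEnv:
    inner_env: object                       # any environment exposing step(action) -> observation
    memory: list = field(default_factory=lambda: [None])

    def step(self, action):
        kind, arg = action                  # e.g. ("WRITE", value), ("READ", 0), ("ACT", a)
        if kind == "WRITE":
            self.memory[0] = arg
            return ("ack",)                 # observation confirming the write
        if kind == "READ":
            return ("mem", self.memory[0])  # the stored value comes back as an observation
        return self.inner_env.step(arg)     # ordinary action passed through to the real environment

class EchoEnv:
    # Stand-in inner environment that just echoes ordinary actions back.
    def step(self, action):
        return ("obs", action)

env = MemoryAugmentedEnv(EchoEnv())
print(env.step(("WRITE", 42)))    # ('ack',)
print(env.step(("READ", 0)))      # ('mem', 42)
print(env.step(("ACT", "left")))  # ('obs', 'left')
```

Note the catch from the comment above: to act on the value returned by READ, the agent still needs either compound read-and-if-X-do-Y actions or a sliver of internal state to hold that observation until the next step.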
Let's say you think a property, like 'purpose', is a two-parameter function and someone else tells you it's a three-parameter function. An interesting thing to do is to accept that it is a three-parameter function and then ask yourself which of the following holds (sketched in code below):
1) The third parameter is useless: however it varies, the output doesn't change.
2) There is a special input you've been assuming is the 'correct' input, which allowed you to treat the function as if it were a two parameter function.
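A toy code illustration of the two cases (the property and parameter names are made up purely for illustration):

```python
def purpose_case1(agent, artifact, observer):
    # Case 1: the third parameter is inert; the output ignores it entirely.
    return f"{agent} made {artifact}"

def purpose_case2(agent, artifact, observer="the designer"):
    # Case 2: the function genuinely depends on the third parameter, but
    # silently fixing it to one privileged value made it look two-parameter.
    return f"{observer} says {agent} made {artifact} on purpose"

print(purpose_case1("a beaver", "a dam", "anyone"))
print(purpose_case2("a beaver", "a dam"))                 # the hidden 'correct' input
print(purpose_case2("a beaver", "a dam", "a biologist"))
```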
MrMind is talking about an "oracle" in the sense of a mathematical tool. Oracles in this sense are well-defined things that can do stuff traditional computers can't.
This crossed my mind, but I thought there might be other deeper reasons.
where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form.
Can anyone say a bit more about why physical references would need to be described 'effectively'/computably? Is this based on the assumption that the physical universe must be computable?
C83
I'm jealous
For the slightly more advanced procrastinator who also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.
A Survey of Mathematical Ethics which covers work in multiple disciplines. I'd love to know what parts of ethics have been formalized enough to be written mathematically and, for example, any impossibility results that have been shown.
I would be happy to be able to read "Procrastination and the five-factor model: a facet level analysis" (ScienceDirect, IngentaConnect). (I'm not sure if adding these links helps you guys, but here they are anyway.)
Quantum mechanics and Metaethics are what initially drew me to LessWrong. Without them, the Sequences aren't as amazingly impressive, interesting, and downright bold. As solid as the other content is, I don't think the Sequences would be as good without these somewhat more speculative parts. This content might even be what really gets people talking about the book.
Another group I recommend investigating that is working on x-risk reduction is the Global Catastrophic Risk Institute, which was founded in 2011 and has been ramping up substantially over the last few months. As far as I can tell they are attempting to fill a role that is different from SIAI and FHI by connecting with existing think tanks that are already thinking about GCR-related subject matter. Check out their research page.
Churchland, Paul M., "State-Space Semantics and Meaning Holism", Philosophy and Phenomenological Research (JSTOR, Philosophy Documentation Center).
Problems with this approach have been discussed here.
Well, it doesn't seem to be inconsistent with reality.
I'm definitely having more trouble than I expected. Unicorns have 5 legs... does that count? You're making me doubt myself.
I think this includes too much. It would include meaningless beliefs. "Zork is Pork." True or false? Consistency seems to me to be, at best, a necessary condition, but not a sufficient one.
I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.
It seems to me that we can mean things in both ways once we are aware of the distinction.
What is anthropic information? What is indexical information? Is there a difference?
Citations, please! I doubt that most dictators think they are benevolent and are consequentialists.
In the United States it's kind of neither. When you get an ID card there is a yes/no checkbox you need to check.
In Probabilistic Graphical Modeling, the win probability you describe is called a Noisy OR.
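For anyone who wants the rule itself, here's a minimal sketch with toy numbers: each contributing cause independently "succeeds" with its own probability, and the outcome fires if at least one succeeds.

```python
from math import prod

def noisy_or(probs):
    # P(win) = 1 - product over i of (1 - p_i)
    return 1.0 - prod(1.0 - p for p in probs)

print(noisy_or([0.2, 0.5, 0.1]))  # 1 - 0.8 * 0.5 * 0.9 = 0.64
```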
The opponent is attacking you with a big army. You have a choice: you can let the attack through and lose in two turns, or you can send your creature out to die in your defense and lose in three turns. If you were trying to postpone losing, you would send out the creature. But you're more likely to actually win if you keep your forces alive ... [a]nd so you ask "how do I win?" to remind yourself of that.
This specific bit on its own is probably quite fruitfully generalizable. You have so many heuristics and subgoals that, after holding them for a long time, they may be partially converted by your brain into intrinsic values and top-level goals. When things get hairy, it's probably normal to lose sight of the initial purpose that generated those heuristics and goals and to keep following them when they no longer apply.
I wouldn't use Wikipedia to get the gist of a philosophical view. I find it to be way off a lot of the time, this time included. Sorry I don't have a clear definition for you right now, though.
What is the risk from Human Evolution? Maybe I should just buy the book...
It is often a useful contribution for someone to assess an argument without necessarily countering its points.
Not really.
It seems that optimization power as it's currently defined would be a value that doesn't change with time (unless the agent's preferences change with time). This might be fine depending on what you're looking for, but the definition of optimization power that I'm looking for would allow an agent to gain or lose optimization power.
I've heard some sort of appreciation or respect argument. An AI would recognize that we built it and so respect us enough to keep us alive. One form of reasoning this might take is that an AI would notice that it wouldn't want to die if it created an even more powerful AI and so wouldn't destroy its creators. I don't have a source though. I may have just heard these in conversations with friends.
I'm not an expert either. However, the OP function has nothing to do with ignorance or probabilities until you introduce them in the mixed states. It seems to me that this standard combining rule is not valid unless you're combining probabilities.
If OP were an entropy, then we'd simply do a weighted sum, (1/2)(OP(X4) + OP(X7)) = (1/2)(1 + 3) = 2, and then add one extra bit of entropy to represent our (binary) uncertainty as to what state we were in, giving a total OP of 3.
I feel like you're doing something wrong here. You're mixing state distribution entropy with probability distribution entropy. If you introduce mixed states, shouldn't each mixed state be accounted for in the phase space that you calculate the entropy over?
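For what it's worth, here's a small numeric check with made-up distributions: when the two component distributions occupy disjoint parts of the phase space, computing the entropy over the full phase space gives the same number as the quoted weighted-sum-plus-one-bit rule, which is exactly the sense in which that rule only works once genuine probabilities are in play.

```python
from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# Two "states" with disjoint support: one with 2 equally likely outcomes (1 bit),
# one with 8 equally likely outcomes (3 bits), mixed 50/50.
a = [0.5, 0.5]
b = [1/8] * 8
w = [0.5, 0.5]

mixture = [w[0] * p for p in a] + [w[1] * p for p in b]

full_phase_space = entropy(mixture)                                 # entropy over the whole phase space
weighted_plus_bit = w[0]*entropy(a) + w[1]*entropy(b) + entropy(w)  # weighted sum + one mixing bit
print(full_phase_space, weighted_plus_bit)                          # both 3.0
```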