Mine also shows up undistinguished (I've noticed this in a few other places on the site, and sometimes it is distinguished but the line spacing is cramped). Firefox 54.0, Linux Mint 18.2.
Oh right, I forgot this part. I have taken the survey (like two weeks ago).
Survey Complete!
Namecoin is an attempt to use a blockchain to implement a decentralized DNS. (It also has an associated cryptocurrency, but that's not the important part.) I know someone who is doing some domain squatting on this. I don't think it's particularly likely to take over the current DNS, but names are only a few cents.
Only when being good at a game increases your propensity to play it. In my personal experience I think that's been true for less than half the games I've played.
I actually have a list of about ten of these, which I will happily make available on request (i.e. I'll write another discussion post about them if people are interested), but I don't want the whole discussion of this post to be about this one single issue, which is what happened when I tried the content of the post out on my friend. This is about the cryonics strategy space only, not the living-forever strategy space, which is much bigger.
I would like that; I am far more interested in the general live-forever space.
Any ecosystems which do not involve more suffering than pleasure shouldn't be exterminated, by that line of reasoning.
I believe the question is about things that are currently being done, not potential ways to legally maximize utility loss.
Huh, same here, it was much easier than I expected. Elsewhere in the comments, buybuydandavis noted a distinction between 'hearing' and 'saying', and I think that's what's going on here, for me at least. I say what I'm counting, but mostly hear what I'm reading.
I can't read while listening to someone, so at least somewhat different things are going on between us.
My single datapoint says no. I almost always subvocalize, but get quite vivid pictures while reading.
I live in a region of the US where they are only sort of enforced.
In my experience people mostly ignore the speed limits and drive at whatever speed feels right for the circumstances. Speed limits might have a role in shaping people's intuitions, though.
This is a link to the Google group which you can ask to join.
If I take a minute to locate the right source for an argument, that's completely fine for a discussion on LessWrong, or even IRC.
It's not fine for a live, face-to-face conversation.
I think that depends on local norms. In one of my old social groups finding information online was practically expected. It helped that conversations were generally between four or five people, so there could be related tangential discussion while someone was looking something up.
Another interpretation is that "trans identity" is a symptom of a diseased mind and culture, whereas a normal and healthy understanding of gender would understand that it's simply the correct cultural roles assigned to each sex - either as part of a Schelling point necessitated by our need for roles and divisions of duty, or as part of inherent biological differences.
Until recently, there were a lot of trans people who had this interpretation of gender and the associated world-view, but just thought that their minds had the biological characteristics of their identified gender, so they fit better there. See "Harry Benjamin Syndrome". Though I'll warn you that it mostly fell out of favor before the modern internet, so there isn't much information on it online.
I found the WAIS helpful, but only because it factored the score into multiple components and the structure of my scores was illuminating. (I had a severe discrepancy between two groups of components, and very little variation within them.)
Also, reassignment surgery isn't the same thing as socially and culturally transitioning.
This is a long time after the fact, but I found this.
Never mind, you already covered this, though in a different fashion.
Surveyed. I liked the game.
If there are any naturalistic neopagans reading this, I'm curious how they answered the religion questions.
The expected value of defecting is 4p/(p + 4(1-p)), to within one part in the number of survey takers. Whether or not you defect makes essentially no difference to the proportion of people who defect.
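For concreteness, here is a sketch of where that expression comes from, on my reading of the prize rules (the prize scales with the cooperating fraction p, and defectors hold four raffle tickets to a cooperator's one; see the survey post for the exact setup):

```latex
% N respondents, fraction p cooperating, prize kp, and defectors
% holding 4 raffle tickets to a cooperator's 1:
\[
\mathbb{E}[\text{defect}]
  = \underbrace{kp}_{\text{prize}}
    \cdot \underbrace{\frac{4}{Np + 4N(1-p)}}_{P(\text{win})}
  \propto \frac{4p}{p + 4(1-p)}.
\]
% A single vote moves p by only 1/N, hence "to within one part
% in the number of survey takers."
```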
Unless you're using timeless decision theory, if I understand TDT correctly (which I very well might not). In that case, the calculations by Zack show the amount of causal entanglement for which cooperation is a good choice. That is, P(others cooperate | I cooperate) and P(others defect | I defect) should be more than 0.8 for cooperation to be a good idea.
I do not think my decisions have that level of causal entanglement with other humans, so I defected.
Though, I just realized, I should have been basing my decision on my entanglement with LessWrong survey takers, which is probably substantially higher. Oh well.
I see trading bots as a not-unlikely source of human-indifferent AI, but I don't see how a transaction tax would help. Penalizing high-frequency traders just incentivizes smarter trades over faster trades.
From my experience doing group study for classes, there don't seem to be any major advantages or disadvantages for pairs vs. small groups. The most relevant factor is how many eyeballs are looking at something, but even that isn't a huge effect. Both are more effective than working alone (as the article concludes).
For a lot of things, getting together IRL looks like it would work best, but the logistics there can be difficult. For people who have LessWrong meetups nearby, those are an obvious way to coordinate meatspace study groups.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of that intelligence stacks with the host's, and thus counts toward the tulpa's power over reality.
Ah. I see what you mean. That makes sense.
As someone with personal experience with a tulpa, I agree with most of this.
> I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.
> I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.
I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.
> I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
Or even a non-category theorist?
He didn't actually synthesize a whole living thing. He synthesized a genome and put it into a cell. There's still a lot of chemical machinery we don't understand yet.
It doesn't directly relate. I'm currently learning Korean and don't want to try learning multiple languages at the same time. Also, I want a broader experience with languages before I try to make my own.
The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.
-- Tillaume, The Alloy of Law
In local parlance, "terminal" values are a decision maker's ultimate values, the things they consider ends in themselves.
A decision maker should never want to change their terminal values.
For example, if a being has "wanting to be a music star" as a terminal value, then it should adopt "wanting to make music" as an instrumental value.
For humans, how these values feel psychologically is a different question from whether they are terminal or not.
See here for more information.
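As a toy illustration of the distinction (a sketch only; all the names below are mine):

```racket
#lang racket
;; The terminal value is a fixed utility function; an "instrumental
;; value" is just whichever action currently scores best under it.
(define (terminal-utility world)            ; "wanting to be a music star"
  (hash-ref world 'stardom 0))

(define (predict world action)              ; crude world model
  (case action
    [(make-music) (hash-update world 'stardom add1 0)]
    [(watch-tv)   world]))

;; The agent adopts as its instrumental goal the action whose predicted
;; outcome the terminal value rates highest. Note that terminal-utility
;; itself never changes; only the chosen action does.
(define (instrumental-goal world actions)
  (argmax (lambda (a) (terminal-utility (predict world a))) actions))

(instrumental-goal (hash) '(make-music watch-tv)) ; => 'make-music
```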
We're curious how you've used information theory in RPGs. It sounds like there are some interesting stories there.
It's much more like choosing not to have kids when you're in a situation where those kids' lives will be horrible.
I think the easiest way to steelman the loneliness problem presented by the given scenario is to just have a third person, let's say Jane, who stays around regardless of whether you kill Frank or not.
They could probably get a decent amount from fusing light elements as well.
I would have liked to see a proper DefectBot as well; however, contestant K defected every time, and only one of the bots that cooperated with it would have defected against DefectBot, so it makes a fairly close proxy.
I like this plan. I'd be willing to run it, unless AlexMennen wants to.
Several of the bots using simulation also used multithreading to create timer processes, so they could quit and defect against anyone who took too long to simulate.
I was also thinking of doing something similar: going into an infinite loop if the opposing program's code was small enough, since a program that small probably meant something more complex was simulating mine against it.
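A minimal sketch of the timer trick described above (my own reconstruction in Racket, not any contestant's code): simulate the opponent in a worker thread and defect if no answer arrives in time.

```racket
#lang racket
(define (simulate-with-timeout opponent-source seconds)
  (define result (make-channel))
  (define worker
    (thread
     (lambda ()
       ;; Probe: how does the opponent treat a bare CooperateBot?
       (channel-put result
                    ((eval opponent-source (make-base-namespace))
                     '(lambda (opp) 'C))))))
  (define move (sync/timeout seconds result))   ; #f on timeout
  (kill-thread worker)
  (or move 'timed-out))

;; Defect against anyone who loops, or just takes too long, when simulated.
(define (my-move opponent-source)
  (if (eq? (simulate-with-timeout opponent-source 1) 'C) 'C 'D))
```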
I checked the behavior of all the bots that cooperated with K, and all but two (T and Q) would have always cooperated with a DefectBot. Specifically, this DefectBot:
(lambda (opp) 'D) ; ignores the opponent's source and always defects
Sometimes they cooperated for different reasons. For example, U cooperates with K because K has "quine" in its code, while it cooperates with DefectBot because DefectBot's code contains none of "quine", "eval", or "thread".
Q, of course, acts randomly. T is the only one that doesn't cooperate with DefectBot but was tricked by K into cooperating, though I'm having trouble figuring out why, because I'm not sure what T is doing.
Anyway, it looks like K is a reasonable proxy for how DefectBot would have done here.
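For anyone who wants to reproduce the check, here is roughly how it can be run (a sketch; the helper names are mine, not the tournament's):

```racket
#lang racket
;; Run a bot (a quoted lambda that receives the opponent's source)
;; against the bare DefectBot quoted above, and record its move.
(define defect-bot '(lambda (opp) 'D))

(define (move-of bot opponent-source)
  ((eval bot (make-base-namespace)) opponent-source))

;; Illustrative bot in the spirit of U's test: cooperate only if the
;; opponent's source mentions "quine".
(define quine-checker
  '(lambda (opp)
     (if (regexp-match? #rx"quine" (format "~s" opp)) 'C 'D)))

(move-of quine-checker defect-bot) ; => 'D (no "quine" in DefectBot)
```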
Of course, many works traditionally labeled fantasy also prefer to explore the consequences of worlds with different physics (HPMoR, for example). I've heard this called "Hard fantasy".
I find that the internet is generally better indexed, though I suppose that if you can afford it, a large enough private library could give more easily accessible depth. I also suspect that, like me, most people here with many more books than they have read have libraries that are composed mostly of fiction, which is less useful for research purposes.
My guess would be only as large as necessary to capture your terminal values, in so far as humans have terminal values.
I've wondered about this as well.
We can try to estimate New Harvest's effectiveness using the same methodology attempted for SENS research in the comment by David Barry here. I can't find New Harvest's 990 revenue reports, but its donations are routed through the Network for Good, which has a total annual revenue of 150 million dollars, providing an upper bound. An annual revenue of less than 1000 dollars is very unlikely, so we can use the geometric mean, about $400 000 per year, as an estimated annual revenue. There are about 500 000 minutes in a year, so right now $1 brings development just over a minute closer.*
There are currently about 24 billion chickens, 1 billion cattle, and 1 billion pigs. Assuming the current factory farm suffering rates as an estimate for suffering rates when artificial/substitute meat becomes available, and assuming (as the OP does) that animals suffer roughly equally, then bringing faux meat one minute closer prevents about (25 billion animal-minutes)/(500 000 minutes per year) = 50 000 animal years of suffering.
If we assume that New Harvest has a 10% chance of success, $1 there prevents an expected 5 000 animal years of suffering, or, expressed as in the OP, preventing 1 expected animal year of suffering costs about 0.02 cents.
So these (very rough) estimates suggest a high level of effectiveness.
*Assuming some set amount of money is both necessary and the bottleneck, and that you aren't donating enough to run into diminishing marginal returns.
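Spelling out that arithmetic (same bounds and roundings as above):

```latex
% Revenue estimate: geometric mean of the $10^3$ and $1.5\times10^8$ bounds.
\[
  \sqrt{10^{3} \times 1.5\times10^{8}} \approx \$4\times10^{5}\ \text{per year}
\]
% A year is roughly 5*10^5 minutes, so $1 of annual revenue buys
\[
  \frac{5\times10^{5}\ \text{min/yr}}{4\times10^{5}\ \$\text{/yr}}
  \approx 1.3\ \text{minutes of advance per dollar.}
\]
% Suffering averted by a one-minute advance, with ~2.5*10^10 animals:
\[
  \frac{2.5\times10^{10}\ \text{animal-min}}{5\times10^{5}\ \text{min/yr}}
  = 5\times10^{4}\ \text{animal-years.}
\]
```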
What is DivergeBot?
I'm not suggesting anything, just pointing out downsides to be considered. Everything you stated (and the original post I linked to) I consider to be worth it.
Yes, but on a much larger scale.
Or possibly just a more dramatic scale. Three Mile Island had a significant effect on public opinion even without any obvious death toll.
I agree. Reddit has a "controversial" sorting that favors posts with lots of up and down votes, and I prefer to use it for finding interesting discussions.
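If anyone wants to play with the idea, here is a toy version of a "controversial" score in the spirit described (my paraphrase of Reddit's open-sourced sort; treat the details as approximate): heavily voted posts with a near-even up/down split score highest.

```racket
#lang racket
(define (controversy ups downs)
  (if (or (zero? ups) (zero? downs))
      0
      (expt (+ ups downs)                           ; magnitude
            (/ (min ups downs) (max ups downs)))))  ; balance in (0, 1]

(controversy 500 450) ; large and evenly split: high score
(controversy 500 5)   ; large but lopsided: score near 1
```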
This isn't unprecedented, though that post had a (quite facetious) disclaimer.
Downside: Advocating intentionally breaking the law would bring negative attention to the community, and in severe cases could bring legislative action against important members of the community. This would be less of a problem for the meatspace community (meetups and such), since what they do isn't all posted online.
It seems like a well-publicized, notorious event in which a lethally autonomous robot killed a lot of innocent people would significantly broaden the appeal of friendliness research, and could even lead to disapproval of AI technology, similar to how Chernobyl had a significant impact on the current widespread disapproval of nuclear power.
For people primarily interested in existential UFAI risk, the likelihood of such an event may be a significant factor. Other significant factors are:
National instability leading to a difficult environment in which to do research
National instability leading to reckless AGI research by a group in an attempt to gain an advantage over other groups.
> I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don't need to be expressed externally, without needing to mediate them through language.
In our case at least, you are correct that we don't need to vocalize impulses. Emotions and urges seem to run on a different, concurrent modality.
Do ideas and impulses both use the same modality for you?
> A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.
What are GLUTs? I'm guessing you're not talking about Glucose Transporters.
> Basically: maybe a much larger chunk of my cognition passes through memory machinery for some reason?
This seems like a plausible hypothesis. Alternatively, perhaps your working memory is less differentiated from your long-term memory.
> Hmm, this seems related to another datapoint: reportedly, when I'm asked about my current mood while distracted, I answer "I can't remember".
Hm. I have the same reaction if I'm asked what I'm thinking about, but I don't think it's because my thoughts are running through my long-term memory, so much as my train of thought usually gets flushed out of working memory when other people are talking.