Comments

Comment by hylleddin on Notes From an Apocalypse · 2017-09-25T09:35:08.064Z · LW · GW

Mine also shows up as not distinguished (I've noticed this in a few other places on the site; sometimes it is distinguished, but the line spacing is cramped). Firefox 54.0, Linux Mint 18.2.

Comment by hylleddin on Lesswrong 2016 Survey · 2016-04-14T08:11:38.451Z · LW · GW

Oh right, I forgot this part. I have taken the survey (like two weeks ago).

Comment by hylleddin on 2014 Less Wrong Census/Survey · 2014-11-14T23:06:51.938Z · LW · GW

Survey Complete!

Comment by hylleddin on Look for the Next Tech Gold Rush? · 2014-07-24T06:59:48.057Z · LW · GW

Namecoin is an attempt to use a blockchain to implement a decentralized DNS. (It also has an associated cryptocurrency, but that's not the important part.) I know someone who is doing some domain squatting on this. I don't think it's particularly likely to take over the current DNS, but names are only a few cents.

Comment by hylleddin on Skills and Antiskills · 2014-05-01T04:51:25.013Z · LW · GW

Only when being good at a game increases your propensity to play it. In my personal experience I think that's been true for less than half the games I've played.

Comment by hylleddin on The Cryonics Strategy Space · 2014-05-01T04:27:35.179Z · LW · GW

I actually have a list of about ten of these, which I will happily make available on request (i.e. I’ll write another discussion post about them if people are interested) but I don’t want the whole discussion of this post to be about this one single issue, which it was when I tried the content of the post out on my friend. This is about the cryonics strategy-space only, not the living-forever strategy space, which is much bigger

I would like that; I am far more interested in the general live-forever space.

Comment by hylleddin on What legal ways do people make a profit that produce the largest net loss in utility? · 2014-03-29T04:55:52.240Z · LW · GW

Any ecosystems which do not involve more suffering than pleasure shouldn't be exterminated, by that line of reasoning.

Comment by hylleddin on What legal ways do people make a profit that produce the largest net loss in utility? · 2014-03-29T04:46:15.194Z · LW · GW

I believe the question is about things that are currently being done, not potential ways to legally maximize utility loss.

Comment by hylleddin on Mental Subvocalization --"Saying" Words In Your Mind As You Read · 2014-02-17T09:10:05.703Z · LW · GW

Huh, same here; it was much easier than I expected. Elsewhere in the comments, buybuydandavis noted a distinction between 'hearing' and 'saying', and I think that's what's going on here, for me at least. I say what I'm counting, but mostly hear what I'm reading.

I can't read while listening to someone, so at least somewhat different things are going on between us.

Comment by hylleddin on Mental Subvocalization --"Saying" Words In Your Mind As You Read · 2014-02-17T09:07:16.569Z · LW · GW

My single datapoint says no. I almost always subvocalize, but get quite vivid pictures while reading.

Comment by hylleddin on Humans can drive cars · 2014-02-01T01:52:31.819Z · LW · GW

I live in a region of the US where they are only sort of enforced.

Comment by hylleddin on Humans can drive cars · 2014-02-01T01:16:13.938Z · LW · GW

In my experience people mostly ignore the speed limits and drive at whatever speed feels right for the circumstances. Speed limits might have a role in building people's intuitions, though.

Comment by hylleddin on Meetup : I moved to Portland! I want to meet you! · 2014-01-05T18:50:22.169Z · LW · GW

This is a link to the Google group which you can ask to join.

Comment by hylleddin on Online vs. Personal Conversations · 2014-01-04T02:37:54.629Z · LW · GW

If I take a minute to locate the right source for an argument, that's completely fine for a discussion on Lesswrong and even IRC.

It's not fine for a live face-to-face conversation.

I think that depends on local norms. In one of my old social groups finding information online was practically expected. It helped that conversations were generally between four or five people, so there could be related tangential discussion while someone was looking something up.

Comment by hylleddin on Rationality Quotes December 2013 · 2013-12-22T23:40:12.875Z · LW · GW

Another interpretation is that "trans identity" is a symptom of a diseased mind and culture, whereas a normal and healthy understanding of gender would understand that it's simply the correct cultural roles assigned to each sex - either as part of a Schelling point necessitated by our need for roles and divisions of duty, or as part of inherent biological differences.

Until recently, there were a lot of trans people who had this interpretation of gender and the associated world-view, but just thought their minds had their identified gender's biological characteristics so they fit better there. See "Harry Benjamin Syndrome". Though I'll warn you that it mostly fell out of favor before the modern internet, so there isn't much information on it online.

Comment by hylleddin on 2013 Less Wrong Census/Survey · 2013-12-22T13:23:10.098Z · LW · GW

I found the WAIS helpful, but only because it factors the score into multiple components and the structure of my scores was illuminating. (I had a severe discrepancy between two groups of components, and very little variation within them.)

Comment by hylleddin on Rationality Quotes December 2013 · 2013-12-20T13:28:47.887Z · LW · GW

Also, reassignment surgery isn't the same thing as socially and culturally transitioning.

Comment by hylleddin on The Trouble With "Good" · 2013-12-05T05:51:43.491Z · LW · GW

This is a long time after the fact, but I found this.

Comment by hylleddin on 2013 Less Wrong Census/Survey · 2013-11-22T09:55:19.360Z · LW · GW

Nevermind, you already covered this, though in a different fashion.

Comment by hylleddin on 2013 Less Wrong Census/Survey · 2013-11-22T09:49:47.837Z · LW · GW

Surveyed. I liked the game.

If there are any naturalistic neopagans reading this, I'm curious how they answered the religion questions.

Comment by hylleddin on 2013 Less Wrong Census/Survey · 2013-11-22T09:45:22.307Z · LW · GW

The expected value of defecting is 4p/(p + 4(1-p)), to within one part in the number of survey takers. Whether or not you defect makes no difference as to the proportion of people who defect.

Unless you're using timeless decision theory, if I understand TDT correctly (which I very well might not). In that case, the calculations by Zack show the amount of causal entanglement for which cooperation is a good choice. That is, P(others cooperate | I cooperate) and P(others defect | I defect) should be more than 0.8 for cooperation to be a good idea.

I do not think my decisions have that level of causal entanglement with other humans, so I defected.

Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.
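For concreteness, here is a quick Racket sketch that just evaluates the expression quoted at the top of this comment, 4p/(p + 4(1-p)), for a few values of p (which I'm reading as the fraction of other survey takers who cooperate). It is only the formula as stated, not a model of the full game:

    ; Expected value of defecting as a function of p, per the formula above.
    (define (defect-ev p)
      (/ (* 4 p) (+ p (* 4 (- 1 p)))))

    (map defect-ev '(0.25 0.5 0.75 1.0))  ; => roughly '(0.31 0.8 1.71 4.0)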

Comment by hylleddin on AI Policy? · 2013-11-16T04:07:27.081Z · LW · GW

I see trading bots as a not unlikely source of human-indifferent AI, but I don't see how a transaction tax would help. Penalizing high-frequency traders just incentivizes smarter trades over faster trades.

Comment by hylleddin on MIRI course list study pairs · 2013-11-13T02:24:06.872Z · LW · GW

From my experience doing group study for classes, there don't seem to be any major advantages or disadvantages for pairs vs. small groups. The most relevant factor is how many eyeballs are looking at something, but even that isn't a huge effect. Both are more effective than working alone (as the article concludes).

For a lot of things, getting together IRL looks like it would work best, but the logistics there can be difficult. For people who have Lesswrong meetups nearby, those are an obvious way to potentially coordinate meatspace study groups.

Comment by hylleddin on Open Thread, November 1 - 7, 2013 · 2013-11-08T22:43:03.348Z · LW · GW

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host, and thus counts towards its power over reality.

Ah. I see what you mean. That makes sense.

Comment by hylleddin on Open Thread, November 1 - 7, 2013 · 2013-11-08T01:40:30.113Z · LW · GW

As someone with personal experience with a tulpa, I agree with most of this.

I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment by hylleddin on The best 15 words · 2013-10-22T02:12:07.963Z · LW · GW

Or even a non-category theorist?

Comment by hylleddin on Mysterious Answers to Mysterious Questions · 2013-08-25T03:33:22.529Z · LW · GW

He didn't actually synthesize a whole living thing. He synthesized a genome and put it into a cell. There's still a lot of chemical machinery we don't understand yet.

Comment by hylleddin on Post ridiculous munchkin ideas! · 2013-08-17T00:10:24.013Z · LW · GW

It doesn't directly relate. I'm currently learning Korean and don't want to try learning multiple languages at the same time. Also, I want a broader experience with languages before I try to make my own.

Comment by hylleddin on Rationality Quotes August 2013 · 2013-08-02T21:24:40.468Z · LW · GW

The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.

-- Tillaume, The Alloy of Law

Comment by hylleddin on Open thread, July 16-22, 2013 · 2013-07-25T20:33:19.547Z · LW · GW

In local parlance, "terminal" values are a decision maker's ultimate values, the things they consider ends in themselves.

A decision maker should never want to change their terminal values.

For example, if a being has "wanting to be a music star" as a terminal value, then it should adopt "wanting to make music" as an instrumental value.

For humans, how these values feel psychologically is a different question from whether they are terminal or not.

See here for more information.

Comment by hylleddin on Welcome to Less Wrong! (6th thread, July 2013) · 2013-07-25T19:50:31.684Z · LW · GW

We're curious how you've used information theory in RPGs. It sounds like there are some interesting stories there.

Comment by hylleddin on Why Eat Less Meat? · 2013-07-25T04:54:01.384Z · LW · GW

It's much more like choosing not to have kids when you're in a situation where those kids' lives will be horrible.

Comment by hylleddin on The Empty White Room: Surreal Utilities · 2013-07-23T20:51:32.228Z · LW · GW

I think the easiest way to steelman the loneliness problem presented by the given scenario is to just have a third person, let's say Jane, who stays around regardless of whether you kill Frank or not.

Comment by hylleddin on "Stupid" questions thread · 2013-07-15T08:20:12.531Z · LW · GW

They could probably get a decent amount from fusing light elements as well.

Comment by hylleddin on Prisoner's dilemma tournament results · 2013-07-11T03:02:29.701Z · LW · GW

I would have liked to see a proper DefectBot as well; however, contestant K defected every time, and only one of the bots that cooperated with it would have defected against DefectBot, so it makes a fairly close proxy.

Comment by hylleddin on Prisoner's dilemma tournament results · 2013-07-09T23:11:08.174Z · LW · GW

I like this plan. I'd be willing to run it, unless AlexMennen wants to.

Comment by hylleddin on Prisoner's dilemma tournament results · 2013-07-09T22:47:13.869Z · LW · GW

Several of the bots using simulation also used multithreading to create timer processes, so they could quit and defect against anyone who took too long to simulate.

I was also thinking of doing something similar: going into an infinite loop if the opposing program's code was small enough, since that probably meant something more complex was simulating it against something.
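To illustrate the timer idea from the first paragraph, here is a rough Racket sketch. It is my own illustration, not taken from any actual entry, and simulate-opponent is a hypothetical helper standing in for whatever simulation the bot does:

    ; Run the simulation in its own thread; if it hasn't answered within the
    ; time limit, assume a trap and defect.
    (define (move-with-timeout opp seconds)
      (define answer (make-channel))
      (define worker
        (thread (lambda () (channel-put answer (simulate-opponent opp)))))
      (define result (sync/timeout seconds answer))
      (kill-thread worker)   ; stop the simulation whether or not it finished
      (or result 'D))        ; #f means the timeout fired, so defect

A submission in this style would then just be (lambda (opp) (move-with-timeout opp 5)).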

Comment by hylleddin on Prisoner's dilemma tournament results · 2013-07-09T22:41:54.338Z · LW · GW

I checked the behavior of all the bots that cooperated with K, and all but two (T and Q) would have always cooperated with a DefectBot. Specifically, this DefectBot:

(lambda (opp) 'D)

Sometimes they cooperated for different reasons. For example, U cooperates with K because K has "quine" in its code, while it cooperates with DefectBot because DefectBot doesn't have "quine", "eval", or "thread" in it.

Q, of course, acts randomly. T is the only one that doesn't cooperate with DefectBot but was tricked by K into cooperating, though I'm having trouble figuring out why because I'm not sure what T is doing.

Anyway, it looks like K is a reasonable proxy for how DefectBot would have done here.
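For illustration, here is a minimal Racket sketch of the kind of source check described above (my own reconstruction, not contestant U's actual code), assuming the opponent's source is passed in as a quoted s-expression as in the tournament: cooperate unless the source mentions "eval" or "thread" anywhere.

    (require racket/string)

    ; Recursively walk the opponent's source looking for a substring.
    (define (source-mentions? word sexp)
      (cond [(symbol? sexp) (string-contains? (symbol->string sexp) word)]
            [(string? sexp) (string-contains? sexp word)]
            [(pair? sexp)   (or (source-mentions? word (car sexp))
                                (source-mentions? word (cdr sexp)))]
            [else #f]))

    (lambda (opp)
      (if (or (source-mentions? "eval" opp) (source-mentions? "thread" opp))
          'D
          'C))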

Comment by hylleddin on Rationality Quotes July 2013 · 2013-07-09T04:21:05.455Z · LW · GW

Of course, many works traditionally labeled fantasy also prefer to explore the consequences of worlds with different physics (HPMoR, for example). I've heard this called "Hard fantasy".

Comment by hylleddin on Home Economics · 2013-07-08T23:25:10.376Z · LW · GW

I find that the internet is generally better indexed, though I suppose that if you can afford it, a large enough private library could give more easily accessible depth. I also suspect that, like me, most people here who own many more books than they have read have libraries that are composed mostly of fiction, which is less useful for research purposes.

Comment by hylleddin on Bad Concepts Repository · 2013-06-28T15:04:44.399Z · LW · GW

My guess would be only as large as necessary to capture your terminal values, in so far as humans have terminal values.

Comment by hylleddin on Effective Altruism Through Advertising Vegetarianism? · 2013-06-18T22:01:20.033Z · LW · GW

I've wondered about this as well.

We can try to estimate New Harvest's effectiveness using the same methodology attempted for SENS research in the comment by David Barry here. I can't find New Harvest's Form 990 revenue reports, but its donations are routed through the Network for Good, which has a total annual revenue of $150 million, providing an upper bound. An annual revenue of less than $1,000 is very unlikely, so we can use the geometric mean of these bounds, about $400,000 per year, as an estimated annual revenue. There are about 500,000 minutes in a year, so right now $1 brings development just over a minute closer.*

There are currently about 24 billion chickens, 1 billion cattle, and 1 billion pigs. Assuming the current factory farm suffering rates as an estimate for suffering rates when artificial/substitute meat becomes available, and assuming (as the OP does) that animals suffer roughly equally, then bringing faux meat one minute closer prevents about (25 billion animals)/(500,000 minutes per year) = 50 animal years of suffering.

If we assume that New Harvest has a 10% chance of success, $1 there prevents an expected 5 animal years of suffering; or, expressed as in the OP, preventing 1 expected animal year of suffering costs about 20 cents.

So, these (very rough) estimates show about similar levels of effectiveness.

*Assuming some set amount of money is necessary and is the bottleneck, and that you aren't donating enough to hit diminishing marginal returns.
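As a sanity check, here is the revenue arithmetic above as a quick Racket sketch (using the $1,000 and $150 million bounds assumed in this comment):

    (define estimated-revenue (sqrt (* 1e3 150e6))) ; geometric mean ≈ $387,000 (~$400,000)
    (define minutes-per-year (* 365 24 60))         ; ≈ 525,600 (~500,000)
    (/ minutes-per-year estimated-revenue)          ; ≈ 1.36 minutes closer per dollar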

Comment by hylleddin on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-15T03:23:44.334Z · LW · GW

What is DivergeBot?

Comment by hylleddin on Can we dodge the mindkiller? · 2013-06-15T00:18:56.997Z · LW · GW

I'm not suggesting anything, just pointing out downsides to be considered. Everything you stated (and the original post I linked to) I consider to be worth it.

Comment by hylleddin on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-15T00:00:26.198Z · LW · GW

Yes, but on a much larger scale.

Or possibly just a more dramatic scale. Three Mile Island had a significant effect on public opinion even without any obvious death toll.

Comment by hylleddin on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-14T23:47:55.847Z · LW · GW

I agree. Reddit has a "controversial" sorting that favors posts with lots of up and down votes, and I prefer to use it for finding interesting discussions.

Comment by hylleddin on Can we dodge the mindkiller? · 2013-06-14T19:29:02.551Z · LW · GW

This isn't unprecedented, though that post had a (quite facetious) disclaimer.

Downside: Advocating intentionally breaking the law would bring negative attention to the community, and in severe cases could bring legal action against important members of the community. This would be less of a problem for the meatspace community (meetups and such), since not everything they do is posted online.

Comment by hylleddin on Near-Term Risk: Killer Robots a Threat to Freedom and Democracy · 2013-06-14T19:08:26.142Z · LW · GW

It seems like a well-publicized, notorious event in which a lethally autonomous robot killed a lot of innocent people would significantly broaden the appeal of friendliness research, and could even lead to disapproval of AI technology, similar to how Chernobyl had a significant impact on the current widespread disapproval of nuclear power.

For people primarily interested in existential UFAI risk, the likelihood of such an event may be a significant factor. Other significant factors are:

  • National instability leading to a difficult environment in which to do research.

  • National instability leading to reckless AGI research by a group in an attempt to gain an advantage over other groups.

Comment by hylleddin on Rationality Quotes June 2013 · 2013-06-13T18:14:56.757Z · LW · GW

I'd be kind of surprised if people who have internal monologues need an inner voice telling them "I'm so angry, I feel like throwing something!" in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don't need to be expressed externally, without needing to mediate them through language.

In our case at least, you are correct that we don't need to vocalize impulses. Emotions and urges seem to run on a different, concurrent modality.

Do ideas and impulses both use the same modality for you?

Comment by hylleddin on Rationality Quotes June 2013 · 2013-06-13T18:01:54.772Z · LW · GW

A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.

What are GLUTs? I'm guessing you're not talking about Glucose Transporters.

Basically; maybe a much larger chunk of my cognition passes through memory machinery for some reason?

This seems like a plausible hypothesis. Alternatively, perhaps your working memory is less differentiated from your long-term memory.

Hmm, this seems related to another datapoint: reportedly, when I'm asked about my current mood while distracted, I answer "I can't remember".

Hm. I have the same reaction if I'm asked what I'm thinking about, but I don't think it's because my thoughts are running through my long-term memory, so much as my train of thought usually gets flushed out of working memory when other people are talking.