## Posts

My Take on a Decision Theory 2013-07-09T10:46:25.548Z

## Comments

Comment by ygert on Too good to be true · 2014-07-15T10:31:12.035Z · LW · GW

Stupid mathematical nitpick:

The chances of this happening are only .95 ^ 39 = 0.13, even before taking into account publication and error bias.

Actually, it is more correct to say that .95 ^ 39 = 0.14.

If we calculate it out to a few more decimal places, we see that .95 ^ 39 is ~0.135275954. This is closer to 0.14 than to 0.13, and the mathematical convention is to round accordingly.
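The arithmetic is easy to check in a couple of lines of Python (just a quick sanity check, nothing more):

```python
# 0.95 ** 39, computed and rounded to two decimal places
p = 0.95 ** 39
print(p)            # ≈ 0.135276
print(round(p, 2))  # 0.14, not 0.13
```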

Comment by ygert on Open Thread, May 26 - June 1, 2014 · 2014-05-27T09:41:14.363Z · LW · GW

What you are observing is part of the phenomenon of meta-contrarianism. Like everything Yvain writes, the aforementioned post is well worth a read.

Comment by ygert on Open Thread, May 19 - 25, 2014 · 2014-05-25T10:38:14.549Z · LW · GW

Hmm. To me it seemed intuitively clear that the function would be monotonic.

In retrospect, this monotonicity assumption may have been unjustified. I'll have to think more about what sort of curve this function follows.

Comment by ygert on Open Thread, May 19 - 25, 2014 · 2014-05-23T14:42:25.388Z · LW · GW

or they could even restrict options to typical government spending.

JoshuaFox noted that the government might tack on such restrictions.


That said, it's not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.

For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it's one of the many things that tax money often works towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the government grants we get? After all, I always wanted a bigger budget...

Or what if I'm a government employee? Could I give my money to the part of government spending that is assigned as my salary?

I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs that employee? Or do I have to give it to that department "as is", with no restrictions on what it is spent on?

The more specificity you add, the more abusable it is; the more you take away, the closer it becomes to the current system. In fact, the current system is merely this exact proposal, with the specificity dial turned down to the minimum.

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Comment by ygert on Siren worlds and the perils of over-optimised search · 2014-05-18T23:16:22.691Z · LW · GW

Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.

As soon as it cares about the future, the future is a part of the AI's goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect one would be small-minded and short-sighted.

You call this trait of planning for the future "consciousness", but this isn't anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.

Comment by ygert on Open Thread April 16 - April 22, 2014 · 2014-04-22T09:57:33.816Z · LW · GW

No, no, no: He didn't say that you don't have permission if you don't steal it, only that you do have permission if you do.

What you said is true: If you take it without permission, that's stealing, so you have permission, which means that you didn't steal it.

However, your argument falls apart at the next step, the one you dismissed with a simple "etc." The fact that you didn't steal it in no way invalidates your permission, as stealing => permission, not stealing <=> permission, and thus it is not necessarily the case that ~stealing => ~permission.
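The asymmetry between the one-way and two-way arrows can be checked by brute force over the four truth assignments. A throwaway sketch, with `stealing` and `permission` as plain booleans:

```python
from itertools import product

def implies(a, b):
    # Material implication: a => b is false only when a is true and b is false.
    return (not a) or b

# Find assignments where stealing => permission holds
# but ~stealing => ~permission fails.
counterexamples = [
    (stealing, permission)
    for stealing, permission in product([False, True], repeat=2)
    if implies(stealing, permission)
    and not implies(not stealing, not permission)
]
print(counterexamples)  # [(False, True)]: no theft, yet permission anyway
```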

Comment by ygert on Open Thread April 16 - April 22, 2014 · 2014-04-22T09:44:12.858Z · LW · GW

You could use some sort of cloud service: for example, Dropbox. One of the main ideas behind Dropbox was to have a way for multiple people to easily edit stuff collaboratively. It has a very easy user interface for such things (just keep the deck in a synced folder), and you can do it even without all the technical fiddling you'd need for git.

Comment by ygert on AI risk, new executive summary · 2014-04-20T11:43:13.239Z · LW · GW

By observing the lack of an unusual amount of paperclips in the world which Skynet inhabits.

Comment by ygert on Solutions and Open Problems · 2014-03-16T10:04:49.761Z · LW · GW

I have some rambling thoughts on the subject. I just hope they aren't too stupid or obvious ;-)

Let's take as a framework the aforementioned example of the last digit of the zillionth prime. We'll say that the agent will be rewarded if it gets it right, on, shall we say, a log scoring rule. This means that the agent is incentivised to give the best (most accurate) probabilities it can, given the information it has. The more unreasonably confident it is, the more it loses, and the same with underconfidence.

By the way, for now I will assume the agent fully knows the scoring rule it will be judged by. It is quite possible that this assumption raises problems of its own, but I will ignore them for now.

So, the agent starts with a prior over the possible answers (a uniform prior?), and starts updating itself. But it wants to figure out how long it will spend doing so, before it should give up and hand in for grading its "good enough" answer. This is the main problem we are trying to solve here.

In the degenerate case in which it has nothing else in the universe other than this to give it utility, I actually think the correct answer is to work forever (or as long as it can before physically falling apart) on the answer. But we shall make the opposite assumption. Let's call the amount of utility lost to the agent as an opportunity cost in a given unit of time by the name C. (We shall also make the assumption that the agent knows what C is, at least approximately. This is perhaps a slightly more dangerous assumption, but we shall accept it for now.)

So, the agent wants to work for as many units of time as it can, until the marginal amount of extra utility it would earn from the scoring rule for one more unit of time's work is less than C.

The only problem left is figuring out that margin. But, by the assumption that the agent knows the scoring rule, it knows the derivative of the scoring function as well. At any given point in time, it can figure out the amount of change to the potential utility it would get from the change to the probabilities it assigns. Thus, if the agent knows approximately the range in which it may update in the next step, it can figure out whether or not the next stage is worthwhile.

In other words, once it is close enough to the answer that it predicts that a marginal update would move it closer to the answer by an amount that gives less than C utility, it can quit, and not perform the next step.

This makes sense, right? I do suspect that this is the direction to drive at in the solution to this problem.
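A toy version of that stopping rule is easy to write down. Everything numeric here is invented for illustration: the credence trajectory, the opportunity cost `C`, and the use of the log score's increments as the marginal gain.

```python
import math

def marginal_gain(p_now, p_next):
    # Change in log score if the agent's credence in the true answer
    # moves from p_now to p_next.
    return math.log(p_next) - math.log(p_now)

# Hypothetical trajectory of credences after each unit of work.
credences = [0.10, 0.30, 0.55, 0.70, 0.78, 0.80, 0.81]
C = 0.05  # assumed opportunity cost per unit of time

steps_worked = 0
for p_now, p_next in zip(credences, credences[1:]):
    if marginal_gain(p_now, p_next) <= C:
        break  # the next update is no longer worth its cost: quit and answer
    steps_worked += 1
print(steps_worked)  # stops after 4 updates in this made-up run
```

The early updates are huge compared to C, so the agent keeps working; once the credence curve flattens, one more step buys less than it costs, and the agent hands in its "good enough" answer.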

Comment by ygert on Open Thread February 25 - March 3 · 2014-03-03T12:27:35.680Z · LW · GW

If a comment has 100% upvotes, then obviously the amount of upvotes it got is exactly equal to the karma score of the post in question.

Comment by ygert on March 2014 Media Thread · 2014-03-03T08:12:07.111Z · LW · GW

In this writeup of the 2013 Boston winter solstice celebration, there is a list of songs sung there. I would suggest this as a primary resource for populating your list.

Comment by ygert on Reason as memetic immune disorder · 2014-02-25T08:55:03.924Z · LW · GW

Upvoted for explicitly noticing and noting your confusion. One of the best things about Less Wrong is that noticing the flaws in one's own argument is respected and rewarded. (As it should be, in a community of truth-seekers.)

Good for you!

Comment by ygert on Open Thread for February 3 - 10 · 2014-02-10T12:41:42.684Z · LW · GW

As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.

I also would like to reiterate what I said on PredictionBook: I don't think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the correct place without clogging up PredictionBook with nonpredictions.

Comment by ygert on Open Thread for February 3 - 10 · 2014-02-09T17:43:43.659Z · LW · GW

Right. Many people use the word "utilitarianism" to refer to what is properly named "consequentialism". This annoys me to no end, because I strongly feel that true utilitarianism is an incoherent idea (it doesn't really work mathematically; if anyone wants me to explain further, I'll write a post on it.)

But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.

Comment by ygert on Publication: the "anti-science" trope is culturally polarizing and makes people distrust scientists · 2014-02-09T17:31:23.714Z · LW · GW

In a sense, most certainly yes! In the middle ages, each fiefdom was a small city-state, controlling in its own right not all that much territory. There certainly wasn't the concept of nationalism as we know it today. And even if some duke was technically subservient to a king, that king wasn't issuing laws that directly impacted the duke's land on a day to day basis.

This is unlike what we have today: We have countries that span vast areas of land, with all authority reporting back to a central government. Think of how large the US is, and think of the fact that the government in Washington DC has power over it all. That is a centralized government.

It is true that there are state governments, but they are weak. Too weak, in fact. In the US today, the federal government is the final source of authority. The president of the US has far more power over what happens in a given state than a king in the middle ages had over what happened in any feudal dukedom.

Comment by ygert on How big of an impact would cleaner political debates have on society? · 2014-02-06T10:17:09.543Z · LW · GW

Or, prediction markets.

Same thing really, just cleaner and more elegant.

Comment by ygert on Open thread, January 25- February 1 · 2014-01-28T22:34:22.057Z · LW · GW

Could the article you had in mind be this?

In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See for example in Why Our Kind Can't Cooperate.) It's an important point, regardless.

Comment by ygert on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-28T11:40:32.387Z · LW · GW

Yes. What I wrote was a summary, and not as perfectly detailed as one may wish. One can quibble about details: "the market"/"a market", and those quibbles may be perfectly legitimate. Yes, one who buys S&P 500 indices is only buying shares in the large-cap market, not in all the many other things in the US (or world) economy. It would be silly to try to define an index fund as something that invests in every single thing on the face of the planet, and some indices are more diversified than others.

That said, the archetypal ideal of an index fund is that imaginary one piece of everything in the world. A fund is more "indexy" the more diversified it is. In other words, when one buys index funds, what one is buying is diversity. To a greater or lesser extent, of course, and one should buy not only the broadest index funds available, but of course also many different (non-overlapping?) index funds, if one wants to reap the full benefit of diversification.

Comment by ygert on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-22T22:04:18.070Z · LW · GW

Not an economist or otherwise particularly qualified, but these are easy questions.

I'll answer the second one first: This advice is exactly the same as advice to hold a diversified portfolio. The concept of an index fund is a tiny little piece of each and every thing that's on the market. The reasoning behind buying index funds is exactly the reasoning behind holding a diversified portfolio.

For the first question, remember the idea is to buy a little bit of everything, to diversify. So go meta, and buy little bits of many different index funds. But actually, as this is considered a good idea, people have made such meta-index funds: indices of indices that you can buy in order to get a little bit of each index fund.

But as an index is defined as "a little bit of everything", the question of which one fades a lot in importance. There are indices of different markets, so one might ask which market to invest in, but even there you want to go meta and diversify. (Say, with one of those meta-indices.) And yes, you want to find one with low fees, which invests as widely as possible, etc. All the standard stuff. But while fiddling with the minutiae may matter, it does pale when compared to the difference between buying indices and stupidly trying to pick stocks yourself.

Comment by ygert on 2013 Survey Results · 2014-01-22T07:48:35.927Z · LW · GW

This is a very appropriate quote, and I upvoted. However, I would suggest formatting the quote in markdown as a quote, using ">".

Something like this

In my opinion, this quote format is better: it makes it easier to distinguish it as a quote.

In any case, I'm sorry for nitpicking about formatting, and no offence is intended. Perhaps there is some reason I missed that explains why you put it the way you did?

Comment by ygert on Dark Arts of Rationality · 2014-01-22T07:42:16.311Z · LW · GW

Yeah, I agree, it is weird. And I think that Hofstadter is wrong: With such a vague definition of being "smart", his conjecture fails to hold. (This is what you were saying: It's rather vague and undefined.)

That said, TDT is an attempt to put a similar idea on firmer ground. In that sense, the TDT paper is the exploration in mathematical language of this idea that you are asking for. It isn't Hofstadterian superrationality, but it is inspired by it, and TDT puts these amorphous concepts that Hofstadter never bothered solidifying into a concrete form.

Comment by ygert on Dark Arts of Rationality · 2014-01-21T11:50:16.108Z · LW · GW

Agreed. But here is what I think Hofstadter was saying: The assumption that is used can be weaker than the assumption that the two players have an identical method. Rather, it just needs to be that they are both "smart". And this is almost as strong a result as the true zero knowledge scenario, because most agents will do their best to be smart.

Why is he saying that "smart" agents will cooperate? Because they know that the other agent is the same as them in that respect. (In being smart, and also in knowing what being smart means.)

Now, there are some obvious holes in this, but it does hold a certain grain of truth, and is a fairly powerful result in any case. (TDT is, in a sense, a generalization of exactly this idea.)

Comment by ygert on Group Rationality Diary, January 16-31 · 2014-01-20T21:55:48.147Z · LW · GW

Let's not pat ourselves on the back too much.

That was never my intention. I actually initially meant to stress this more, but I cut it as it didn't really fit.

The most important note is that it is not necessarily a good thing to ignore social cues. They exist for good reasons. Discourse flows a lot better when it is polite and well presented. Those who ignore that do so at their own peril.

Some do, however. Including us, to some extent. You cannot deny that the population of Less Wrongers is weighted heavily towards the type of people that might be known as nerds, who dismiss the social glue, and prefer more bluntness in their discourse than is usual. Again, this is not necessarily good: See Why Our Kind Can't Cooperate.

In some ways it is good, though: It encourages the virtues of truth-seeking and of not responding to tone, and in general is an attitude that is conducive to the types of things discussed around here. (This is why Less Wrong is neuroatypical in specifically this direction.)

Comment by ygert on Group Rationality Diary, January 16-31 · 2014-01-20T20:30:30.013Z · LW · GW

Perhaps people on Less Wrong are less attuned to the nuances of social norms, and rather upvote/downvote based only on the content of the post in question?

The ideal of upvoting/downvoting based only on value is one that has appeal to many of the sort of people who hang around here. We are all still human, but I would not be surprised to be told that many or most Less Wrongers are atypical in this way. (Pay less attention to social contexts, and more to content.)

Comment by ygert on Karma query · 2014-01-13T11:33:11.045Z · LW · GW

Karma points count as "last 30 days karma" if they are votes on a post you made within the past month. If someone upvotes/downvotes/removes a previously made upvote/removes a previously made downvote from an older comment, you get/lose karma, but not 30 day karma. I assume that is what happened here.

Comment by ygert on Open thread for January 1-7, 2014 · 2014-01-08T08:36:51.991Z · LW · GW

I see what you are saying, but the whole point behind anti-fragility is that change is for good, not bad. By default, in fragile things, change is bad. But in antifragile things, that change is harnessed for good.

Hm. The best way to clearly demarcate that would probably be to move the word "bad" from describing the word "change", and put it as part of the first sentence.

Things sometimes break, and that is a bad thing that you do not want happening. It happens when outside forces cause changes to it and to the world it acts in. ...

Comment by ygert on Open thread for January 1-7, 2014 · 2014-01-07T18:48:26.649Z · LW · GW

That's a fun challenge. It was hard to try to summarize the motivation behind the idea of antifragility in such a restricted vocabulary. Here is my attempt:

Things sometimes break. This happens when outside forces cause bad changes to it and to the world it acts in. Things that this can happen to are not things you can put much trust in. It would be a lot better to have something that does not change because of things happening to it, or even better, one that gets better the more those bad things happen to it. It is a good idea to make the things you have be of this sort, but that can not be done all of the time. To find things that match this idea is not easy, but it is true that there are some situations in the real world where this does happen.

Comment by ygert on [LINK] Why I'm not on the Rationalist Masterlist · 2014-01-07T12:01:06.372Z · LW · GW

Very well put. I agree entirely with what you are saying, and I think you said it very well.

I want to add though an emphasis that the line specifically between libertarianism and reactionary-ism is a very narrow one. Both philosophies come from the same background, with similar axioms. It is surprising, so it bears emphasis.

I am in the same boat as Apprentice when it comes to these matters, and whenever I read a reactionary post I feel a certain familiarity, along the lines of: "this may not be fully valid, but the people arguing it are very smart, and what's more actually started from a surprisingly similar position to me. I may not agree with most of the conclusions, or with the tone it is presented in, but I cannot deny that they have certain good points."

Comment by ygert on Open thread for January 1-7, 2014 · 2014-01-07T10:45:05.942Z · LW · GW

I'm glad you like my recommendation. After you have used it for a while, perhaps consider writing up a post about your experiences teaching using an SRS. It's a topic which could be very interesting, and I'm sure that many would wish to read such a report. I certainly would.

Comment by ygert on Open thread for January 1-7, 2014 · 2014-01-06T18:05:21.065Z · LW · GW

Look into memrise.

It has an app, it has a lot of the bells and whistles that Anki lacks (like a scoring/gamification system) that could be helpful with the population you are teaching, and it is all around a solid SRS system. The only thing I think it lacks are those Easy/Good/Hard buttons that Anki has to differentiate between how well you know the answer, but that's something I can live without. I use both it and Anki on a day to day basis.

Comment by ygert on January 2014 Media Thread · 2014-01-05T09:26:33.873Z · LW · GW

Evangelion is... Evangelion. It's the kind of work that is very hard to apply adjectives to. That said, it's very good.

Just be sure that you watch The End of Evangelion after watching all the episodes. I have a friend who watched all the episodes of Evangelion, then went around for quite some time thinking he had finished watching the whole show. Only months later did he find out that there was more, and that he had in fact missed out on the entire climax of the show.

Comment by ygert on A proposed inefficiency in the Bitcoin markets · 2014-01-03T10:11:35.342Z · LW · GW

This is a long and well presented comment: I will chime in with army1987 that you could certainly write this up as a top level discussion post.

My response to it is that I think you are overestimating the value of our current form of government. This could be taken the wrong way, so let me be clear: It is a very good thing that we have a government. Without it, our lives would be nasty, brutish and short. Despite this, government-as-we-know-it (nationalism) is a very recent invention, and while it does some great things (and some not-so-great things), it (in its current form) is far from essential for society.

Democracy is better than monarchy, yes, but that does not mean it is the ideal government. (Recall that famous Churchill quote.) Trying to preserve it when future technology renders it obsolete is a bad idea, in my opinion.

So what should replace it? That is a deep and important issue, and one I can philosophise in depth over. This comment is already long, so I will spare a lengthy discussion here, but it suffices to say that while I am not sure, and this is a topic that needs much deep research and serious thought, I do see several possible directions and solutions that could bear fruit. If you are interested in continuing this conversation I would be happy to expound upon them.

In short, my two related arguments are that: 1) While Bitcoin may or may not bring down nationalism, that in itself does not mean anarchy and a Hobbesian state of nature; and 2) Democracy is nice, in that it's better than most other forms of government, but it's by no means essential.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2014-01-02T14:49:42.473Z · LW · GW

I don't understand: could you explain what specifically you are claiming remains? Social power implies that it impacts other people and their actions, which I don't think is the case in this situation.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2014-01-02T10:29:43.080Z · LW · GW

Sorry, but I disagree. Personally, I rather dislike going through arbitrary pointless motions. The "magic" is already gone, and mindlessly trying to go through the same motions to bring it back is futile. We are better off without it.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-30T11:52:13.777Z · LW · GW

I think the point is that she enjoyed getting free money more than she disliked being chuckled at, so she was willing to suffer being chuckled at in order to receive the free money.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-29T21:26:43.633Z · LW · GW

The answer to that is "But maybe the parents are misinformed about the tooth fairies' abilities?" You can go on and on like this, but at this point I would stop praising the child for pursuing the rational method for solving problems, and start educating the child in the next lesson of rationality: 0 and 1 are not probabilities, all knowledge is probabilistic, and you need to do VoI calculations before rushing off to try to rule out narrow and increasingly unlikely options.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-29T21:19:34.602Z · LW · GW

Yes...

But seriously, there are simpler tests to do, or to do first. Try telling your parents not out loud, but in a written note. That would rule out audio bugging. Try telling an empty room, when no one else is around. That could rule out your parents. Try telling someone you know won't understand you. (Like a younger sibling.) Try miming it to your parents without using words. Try falsely telling your parents that a tooth fell out, when none did. Try telling your parents about your tooth that fell out, but not putting it under your pillow that night. Try giving your fallen-out tooth to a younger sibling and tricking him into pretending that that tooth was his to your parents. (Although that would probably mean giving up the income from that tooth.)

All in all, there are a lot of possible open tests that could be done, to narrow down the search space dramatically.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-29T11:10:59.666Z · LW · GW

Oh, but the money did keep on flowing in! My parents may not have handled the situation perfectly, but they most certainly didn't cut off the money just because I uncovered their lies. To do so would be punishing me for finding out, which was certainly not their intention.

After that point, whenever a tooth fell out, I'd just hand it to my mother and she would dig out the cash for me, without the whole ritual of putting the tooth under the pillow and having it be replaced by an imaginary being who collects teeth for some reason.

Comment by ygert on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-28T20:55:04.987Z · LW · GW

That's a good rationalist success story. You remind me of my own story with the tooth fairy: I will not relate it in detail here, as it is similar to yours, just less dramatic. At a certain point, I doubted the existence of the tooth fairy, so the next time a tooth fell out I put it under my pillow without telling anyone, and it was still there the next day. I confronted my parents, and they readily admitted the non-existence of the tooth fairy.

In fact, it went off as a perfect experiment, which kind of ruins its value as a story, at least when compared with yours. I did an experiment, got a result, and that was that. The one thing I'm still kind of bitter about is my parents' first reaction to my confrontation of them: Rather than praising me on my discovery and correct use of the scientific method, their reaction was along the lines of "If you suspected, why didn't you just tell us? We would have just admitted it. There was no need for that test to find proof to confront us with."

Comment by ygert on Review of Scott Adams’ “How to Fail at Almost Everything and Still Win Big” · 2013-12-26T13:54:00.476Z · LW · GW

And so many of those books seem to have that same piece of advice: "Actually go out and do things; don't just read about them and forget to do them!"

Comment by ygert on Critiquing Gary Taubes, Part 1: Mainstream Nutrition Science on Obesity · 2013-12-26T13:48:57.427Z · LW · GW

All this is true. However, these variables are not always perfectly correlated. It is important to recognize cases where some or all of (1), (2), or (3) are easier to answer than the object level question. That is when trusting the expert consensus is a good idea.

Comment by ygert on The Shadow Question · 2013-12-26T11:56:18.069Z · LW · GW

Huh. Maybe it wasn't a reference to what I thought it was. Let's just say that a while ago I had the rather annoying habit of answering people who asked the time by repeating their question back to them. I assumed that whoever this was drew from the same source, although I now realize I may have been mistaken. (It really is that obscure...)

The thing I was thinking of was this really obscure RPG from more than a decade ago called Continuum (TVTropes page, Wikipedia page, official (semi-abandoned) website) in which time travellers identify one another by one asking the other for the time, and then the other repeating the question right back. Thus, time-travellers can identify one another, while at worst confusing normal people with strange demands for the time or weird non-answers to the question of what the time is.

So, the obvious thing for a fan to do in order to try to identify nearby time-travelers is to go around asking a lot of people what the time is or answering such questions with the time-traveller recognized response.

As I said, very obscure.

Comment by ygert on Open thread for December 24-31, 2013 · 2013-12-26T09:25:21.317Z · LW · GW

I often find myself reading PDF material on my Kindle, and I think I've found three pretty decent workarounds:

• If possible, try to find an epub or mobi version. For the more obscure, technical stuff, this is impossible, but for the more popular stuff, this is doable.

• Try to use calibre to convert the PDF to a mobi. For some PDFs, this comes out with a good quality mobi, but often the PDF is formatted so that it does not.

• But what I often end up doing is a lot simpler: I turn the screen rotation sideways. Rather than the height of the Kindle being the height of the page, if it is the width of the page, you actually get a decent view. The width of the e-book reader becomes the height of the part of the page you can see, but that's what scrolling is for.

The third option works so well that I often don't bother with the first two, but all three are on the table for when I find a PDF I want to read.

Comment by ygert on One Sided Policy Debate - The Science of Literature · 2013-12-25T21:11:54.015Z · LW · GW

I think you are wrong in saying that no one claims benefits from it: claiming benefits is practically all the linked article does. (BTW, your link goes to page two of the article. You may want to fix that. [Edit: Fixed.])

The article gave one viewpoint (and left out the other), and so everyone else is trying to give the counterpoint. (Not that I'm saying it's wrong for the article to only give one side: maybe debates work better for transmitting information than balanced pieces. But it certainly is the correct response to try to steelman the other viewpoint when you see an article in favour of one side.)

Comment by ygert on Building Phenomenological Bridges · 2013-12-24T15:14:42.402Z · LW · GW

Fair enough. I am just pointing out the solution to your confusion: You are talking past one another. Words can be wrong, and it is essential to make sure that such things are sorted out properly for an intelligent discussion.

Comment by ygert on Open thread for December 24-31, 2013 · 2013-12-24T14:21:49.321Z · LW · GW

And it even gives a mostly accurate description of the relevant risk factors!

These researchers are not exactly thinking about a Battlestar Galactica-type situation in which robots resent their enslavement by humans and rise up to destroy their masters out of vengeance—a fear known as the “Frankenstein complex,” which would happen only if we programmed robots to be able to resent such enslavement. That would be, suffice it to say, quite unintelligent of us. Rather, the modern version of concern about long-term risks from AI, summarized in a bit more detail in this TED talk, is that an advanced AI would follow its programming so exactly and with such capability that, because of the fuzzy nature of human values, unanticipated and potentially catastrophic consequences would result unless we planned ahead carefully about how to maintain control of that system and what exactly we wanted it to do.

Comment by ygert on Building Phenomenological Bridges · 2013-12-24T14:19:19.978Z · LW · GW

Well, if subjectivity means "I decide what it is", then this is tautologically true. If you have a broader definition of subjectivity, then yes, they don't seem to have much to do with each other. It seems that he was using the first definition, or something similar to it.

Comment by ygert on Building Phenomenological Bridges · 2013-12-23T20:51:12.332Z · LW · GW

Shminux's point is definitely valid about the different levels, but there is more than that: You have not shown that the contents of the registers etc. are not visible from within the program. In fact, quite the opposite: In a good programming language, it is easy to access those other (non-source code) parts from within the program: Think of, for instance, the "self" that is passed into a Python class's methods. Thus, each method of the object can access all the data of the object, including all the object's methods and variables.

Comment by ygert on Building Phenomenological Bridges · 2013-12-23T16:50:55.110Z · LW · GW

5 In either case, we shouldn't be surprised to see Cai failing to fully represent its own inner workings. An agent cannot explicitly represent itself in its totality, since it would then need to represent itself representing itself representing itself ... ad infinitum. Environmental phenomena, too, must usually be compressed.

This is obviously false. An agent's model can most certainly include an exact description of itself by simple quining. That's not to say that quining is the most efficient way, but this shows that it is certainly possible to have a complete representation of oneself.
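Quining is easy to see in miniature. The two code lines of this Python sketch print themselves back verbatim (the leading comment is not part of the reproduced text), showing a program embedding a full description of itself with no infinite regress:

```python
# A two-line quine: running the lines below prints them back exactly.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The trick is the same one Cai would need: the self-description is stored once, as data, and reused wherever the "representation of myself" is called for, instead of being nested inside itself.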

Comment by ygert on Open thread for December 17-23, 2013 · 2013-12-23T15:26:05.331Z · LW · GW

If I had to guess, I'd say that as Konkvistador is against democracy and voting in general, he wants voting rights to be denied to everyone, and as such, starting with 51% of the population is a good step in that direction.

Am I correct, or is there something more?