Comment by jiro on Integrity and accountability are core parts of rationality · 2019-07-16T20:51:13.109Z · score: 5 (3 votes) · LW · GW

"I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter."

Leaving out "parents" gets rid of some of the obvious objections, but even then, I don't want my children to know about my sexual fetishes. Other objections may include, for instance, letting your friends know that you voted for someone who they think will ruin the country. And I certainly wouldn't want rationalist-but-unpopular opinions I hold to appear on the front page of the local paper for everyone to see. (Go ahead, see what happens when the front page of the newspaper announces that you think you should kill a fat man to stop a trolley.) This aphorism amounts to "never compartmentalize your life", which doesn't seem very justifiable.

Comment by jiro on Everybody Knows · 2019-07-05T15:00:31.472Z · score: 7 (4 votes) · LW · GW

Bob does not know X. That’s why Alice is telling Bob in the first place.

Conversational phrases aren't supposed to be interpreted literally. "Everybody knows" never means "literally every single person knows". This is about equivalent to complaining that people say "you're welcome" when the person really wouldn't be welcome under some circumstances.

Don't be the literal Internet guy who thinks this way.

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T21:26:49.939Z · score: 2 (1 votes) · LW · GW

I think the word “unbiased” there may be a typo; your statement would make a lot more sense if the word you meant to put there was actually “biased”.

I meant "unbiased" in scare quotes: typical newsfeeds that are claimed to be unbiased in the real world (but may not actually be).

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T19:51:17.950Z · score: 2 (1 votes) · LW · GW

Typical unbiased newsfeeds in the real world are created by organizations with bias who have an interest in spreading biased news. It could, of course, be that this was about a rare instance where this was not the case, but the odds are against it.

Comment by jiro on An Increasingly Manipulative Newsfeed · 2019-07-02T18:09:26.798Z · score: 5 (5 votes) · LW · GW

Manipulative newsfeeds aren't an example of an AI becoming manipulative when the human just wanted it to be unbiased. They're an example of an AI becoming manipulative when the human also wanted it to be manipulative, but didn't want to be too obvious about it.

Comment by jiro on Logic, Buddhism, and the Dialetheia · 2019-06-12T22:03:52.195Z · score: 2 (1 votes) · LW · GW

Don't Gödel sentences rebut the ideas of groundedness, or of creating a system where self-referential sentences are blocked? Their existence means that you can create something that behaves as a self-referential sentence, and has the associated paradoxes, while using only normal arithmetic and without any "this sentence".
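
As a rough sketch of why (my paraphrase, not part of the original comment): the diagonal lemma says that for any arithmetic formula $\varphi(x)$ with one free variable there is a sentence $G$ such that

$$PA \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner)$$

Taking $\varphi(x) := \neg\mathrm{Prov}(x)$ gives a Gödel sentence: an ordinary statement about numbers, built without any indexical "this sentence", that nonetheless behaves as if it asserts its own unprovability.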

Comment by jiro on Drowning children are rare · 2019-05-31T22:33:47.352Z · score: 5 (7 votes) · LW · GW

I would not, in fact, save a drowning child.

Or rather, I'd save a central example of a drowning child, but I wouldn't save a drowning child under literally all circumstances, and I think most people wouldn't either. A drowning-child scenario genuinely analogous to the ones Singer uses would be something like an endless series of drowning children in front of me, with an individually small but cumulatively large cost to saving each one. Under those circumstances, I would not save every drowning child, or even try to maximize the number of drowning children I do save.

Comment by jiro on [AN #56] Should ML researchers stop running experiments before making hypotheses? · 2019-05-23T23:07:17.465Z · score: 2 (1 votes) · LW · GW

It seems to me that if you expect that the results of your experiment can be useful in and generalized to other situations, then it has to be possible to replicate it. Or to put it another way, if the principle you discovered is useful for more than running the same program with a different seed, shouldn't it be possible to test it by some means other than running the same program with a different seed?

Comment by jiro on By default, avoid ambiguous distant situations · 2019-05-23T16:25:46.276Z · score: 4 (2 votes) · LW · GW

the pre-brainwashed person had preferences about their future selves

That would qualify as

for instance, you might think that forcibly changing preferences is different from creating a being with unusual preferences

Also, it's possible for people to have preferences about their descendants, or about other sentient beings, just as they have preferences about their future selves. In fact, I would suggest that pretty much all the opposition to the idea comes from people having preferences about their descendants or about other sentient beings. Again, it may be useful to spell out why you think those preferences merit less respect than preferences about one's future self.

(Note that some answers to this require making assumptions about how to aggregate preferences, which are themselves serious points of disagreement. For instance, you might say that if you create a lot of slaves, the preferences of that large number should have a large weight. Such assumptions can also be questioned, and most people would question them.)

Comment by jiro on [AN #56] Should ML researchers stop running experiments before making hypotheses? · 2019-05-22T19:17:59.664Z · score: 11 (2 votes) · LW · GW

Instead of preregistering all experiments, maybe researchers could run experiments and observe results, formulate a theory, and then preregister an experiment that would test the theory—but in this case I would expect that researchers end up “preregistering” experiments that are very similar to the experiments that generated the theory, such that the results are very likely to come out in support of the theory.

Why would you expect this? Assuming you are not suggesting "what if the researchers lie and say they did the experiment again when they didn't", then doing a similar experiment again is called "replication". If the initial result was caused by p-hacking, then the similar experiment won't support the theory. This is why we do replication.
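
As a toy illustration of that point (a minimal sketch of my own, not from the original exchange; the design and sample sizes are arbitrary): if the "significant" original results are just chance hits selected out of null data, one simple stand-in for p-hacking, then re-running the same design replicates them only at roughly the nominal 5% rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 2000

def one_experiment():
    # Two groups drawn from the same distribution: the null hypothesis is true.
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    return stats.ttest_ind(a, b).pvalue

# "Original" findings: keep only the runs that came out significant by chance.
discoveries = sum(one_experiment() < 0.05 for _ in range(trials))

# Replications: one fresh run of the same design per original "discovery".
replicated = sum(one_experiment() < 0.05 for _ in range(discoveries))

print(f"{discoveries} 'significant' originals out of {trials} null experiments")
print(f"{replicated} of them replicate at p < 0.05 (roughly 5% expected)")
```

Replication catches chance findings precisely because the second run is not conditioned on the first one coming out significant.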

Also, I notice the term "p-hacking" appears nowhere in your post.

Comment by jiro on By default, avoid ambiguous distant situations · 2019-05-22T19:12:26.863Z · score: 1 (2 votes) · LW · GW

Consider a similar situation without creating a race: some wizard brainwashes an existing person into becoming a willing slave. Is it moral to thwart the preferences of the brainwashed person by not enslaving him, or by forcibly modifying his brain to desire freedom again? Most people would say yes.

You might argue that there is a difference (for instance, you might think that forcibly changing preferences is different from creating a being with unusual preferences) but it may be useful to spell out those differences and distinguish between objections that are affected by those differences and objections which are not.

Comment by jiro on Tales From the American Medical System · 2019-05-10T19:54:04.561Z · score: -9 (5 votes) · LW · GW

My friend explains again that he does not have the time to see any doctor the next day

He had the time to see a doctor. He didn't have the time to see a doctor without disrupting his life, which isn't the same thing as actually not having the time to see the doctor. And the fact that seeing a doctor disrupts his life is his own fault for delaying the appointment.

The doctor shouldn't be required to alter his procedure just because doing so would alleviate the consequences of the patient's own decisions.

Comment by jiro on Tales From the American Medical System · 2019-05-10T19:41:08.014Z · score: 4 (2 votes) · LW · GW

The conceptual gap between a standard use of a poison from a hardware store and a deadly use is much larger than the gap between a standard use and a deadly use of a medication, so I would expect far more tragedies to come from the medication than from the hardware store poison.

Nobody's going to self-diagnose and inject themselves with poison from a hardware store.

Comment by jiro on Episode 3 of Tsuyoku Naritai! (the 'becoming stronger podcast): Nike Timers · 2019-05-05T16:23:08.807Z · score: 3 (2 votes) · LW · GW

Calling this "Tsuyoku Naritai" is marginally better than calling your image editing program "GIMP". The name signals something really unfavorable to a lot of people (weeabooism in this case). And yes, I know it comes from an essay by Eliezer. He still seems to have gotten the phrase from shonen anime. (Also, if he did, it counts as appealing to fictional evidence.)

Geeks like to ignore the name "GIMP" because, after all, the name of something has no relation to how it functions, so of course you should never shy away from something just because of its name if its functionality is good, right? That's a way of thinking that ignores the real world.

Comment by jiro on Counterspells · 2019-04-30T14:32:43.001Z · score: 4 (2 votes) · LW · GW

That isn't enough, though. First of all, some of what I said applies directly to the quality of the argument--someone could be sincere, but biased, and I may have a reason to avoid arguments based on personal experience or personal expertise from him about certain subjects, without completely avoiding conversation with him. Second, what I said applies when you're arguing with person A (who you can have a discussion with) and they're referencing person B (who you can't), and you want to dismiss the reference to B--in the example above, someone is referring back to the argument made by a senator, but he is not the senator himself.

Comment by jiro on The Forces of Blandness and the Disagreeable Majority · 2019-04-30T05:33:43.591Z · score: 18 (7 votes) · LW · GW

Since the 1970’s, Americans have become more tolerant of allowing people with controversial views to speak in public

The question of whether Americans have become more tolerant of speech is about recent changes, not changes since the 1970s.

There's also the problem of how speech is classified. Those figures show that the tolerance for letting racists speak has gone down recently, which may be concerning--but it's much more concerning if more things get moved into the "racist" category, which seems to be happening. Also, I don't see "sexists" or any other category aside from "racists" that is hated by the left, and it's quite possible that adding more such categories would show more downturns.

Comment by jiro on Counterspells · 2019-04-29T22:09:57.622Z · score: 4 (2 votes) · LW · GW
If there's something wrong with the senator's argument, you should say what it is; and if there isn't, what difference does it make that he's a senator?

Finding things wrong with an argument is not effort-free. The fact that someone may be biased may in some cases be enough to make me not want to spend the effort. Furthermore, most real-life arguments are not purely logical deductions and involve a certain amount of trusting that the other person has presented facts honestly and in a way that is not one-sided or based on motivated reasoning, especially when perceptions and personal experience are involved.

There's also a certain chance that someone will sneak a bad argument by me simply because I am human and imperfect at analyzing arguments. I can minimize the chance of this without causing other problems if I only argue with people who are relatively unbiased.

It matters much more whether [person] is wrong or right than what their tone is.

No, it doesn't. Imagine replacing "abusive tone" with "breaks the windows of my house". Whether someone is right or wrong is unrelated to whether he breaks the windows of my house, but I'd probably call the police and ignore his arguments. Abusive tone is negative utility for me and I'm not interested in getting negative utility when I can avoid it.

Comment by jiro on Wirehead your Chickens · 2018-06-27T15:16:15.605Z · score: 8 (2 votes) · LW · GW

There are two related but separate ideas. One is that if you want to find out if someone is harmed by X, you need to consider whether they would prefer X in a base state, even if X affects their preferences. Another is that if you want to find out if someone is harmed by X, you need to consider what they would prefer if they knew about and understood X, even if they don't.

Modifying an animal to have a smaller brain falls in the second category; pretty much any being who can understand the concept would consider it harmful to be modified to have a smaller brain, so it should also be considered harmful for beings who don't understand the concept. It may also fall in the first category if you try to argue "their reduced brain capacity will prevent them from knowing what they're missing by having reduced brain capacity". Modifying it so that it enjoys pain falls in the second category for the modification, and the first category for considering whether the pain is harmful.

Comment by jiro on Wirehead your Chickens · 2018-06-25T21:14:58.209Z · score: 5 (2 votes) · LW · GW

Most non-rationalists think that whether doing Y on target X is good depends on whether X would prefer Y in a base state where X is unaltered by Y and is aware of the possibility of Y, even if having Y would change his perception or is completely concealed from his perception.

If you're going to create animals who want to be eaten (or who enjoy actions that would otherwise cause suffering), you need to assess whether this is good or bad based on whether a base state animal with unaltered desires would want to be eaten or would want to be subject to those actions. If you're going to amputate animals' body parts, you need to consider whether a base state animal with those parts would want them amputated.

The proposals above all fail this standard.

Comment by jiro on Why Destructive Value Capture? · 2018-06-20T22:31:06.039Z · score: 13 (3 votes) · LW · GW

Generally, people have a heuristic of "if this is straightforwardly and immediately harmful, I'm going to be very skeptical about claims that contradict that." And this is not just because they're stubbornly being irrational--it's because it's a lot easier to make a mistake or be convinced by sophistry when looking at long indirect chains of causation than direct ones.

The straightforward and immediate effect of not trying to sell a seat is that you lose money because you forego the possible income from selling that seat. It is possible that ripping out those seats has secondary effects that cumulatively result in you making more money anyway. But actually doing that calculation is hard (and your original post did not do the calculation--it speculated instead), and you are limited in how well you can assess the correctness of such a calculation. It ends up becoming a form of epistemic learned helplessness where the correct thing to do is to massively discount arguments for doing things that straightforwardly harm you.

Comment by jiro on Resolving the Dr Evil Problem · 2018-06-17T16:57:37.094Z · score: 7 (1 votes) · LW · GW

"I am a stubborn git who would destroy the Earth and ignore the possibility of cloning, even if such an action produces negative utility for me" is just another way of saying "I have precommitted to destroying the Earth".

Comment by jiro on The Curious Prisoner Puzzle · 2018-06-17T16:53:36.756Z · score: 7 (1 votes) · LW · GW

The whole thing is basically the Monty Hall problem.
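
For reference, a minimal simulation of the classic Monty Hall setup (my own sketch, not the prisoner puzzle itself; it just shows the 1/3-vs-2/3 asymmetry that conditional-information puzzles like this turn on):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
switch = sum(monty_hall_trial(True) for _ in range(trials)) / trials
print(f"stay wins ~{stay:.3f}, switch wins ~{switch:.3f}")  # ~0.333 vs ~0.667
```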

Comment by jiro on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-06-06T20:14:35.722Z · score: 7 (3 votes) · LW · GW

Given how people actually act, a norm of "no literal falsehoods, but you can say deceptive but literally true things" will encourage deception in a way that "no deception unless really necessary" will not. "It's literally true, so it isn't lying" will easily slip to "it's literally true, so it isn't very deceptive", which will lead to people being more willing to deceive.

It's also something that only Jedi, certain religious believers, autists, Internet rationalists, and a few other odd groups would think is a good idea. "It isn't lying because what I said was literally true" is a proposition that most people see as sophistry.

Comment by jiro on Expressive Vocabulary · 2018-06-06T19:44:51.288Z · score: 5 (2 votes) · LW · GW

I don't know perfectly well what someone means when they say the dip is full of chemicals. I know roughly what they mean, but I can't figure out exactly what they mean, or even know if they have a consistent or thought out definition at all.

When telling them that the dip contains dihydrogen monoxide, I am not being pedantic; I am saying "the plain meaning of what you are saying doesn't make sense. And any not-plain meanings are beyond my ability to guess, so could you please tell me what you're really trying to say?"

Comment by jiro on Against accusing people of motte and bailey · 2018-06-04T01:46:28.084Z · score: 46 (13 votes) · LW · GW

If different people in the group make sensible and crazy interpretations, and you're arguing with someone who claims to be making only the sensible interpretation, I'd expect that that person would at least be willing to

1) admit that other members of the group are saying things that are crazy. They don't have to preemptively say it ahead of time, but they could at least say it when they are challenged on it.

2) treat known crazy-talking people as crazy-talking people, rather than glossing over their craziness in the interests of group solidarity.

I'm also very suspicious when the person with the reasonable interpretation benefits too much from the existence of (and the failure to challenge) the person with the crazy interpretation. His refusal to condemn the other guy then looks suspicious. The term for this is "good cop, bad cop", and the fact that we already have a term for it should hint that it actually happens.

And finally, sometimes as a practical matter, it's necessary to go against the bad cops. If the motte is some kind of reasonable objection to James Damore, and the bailey is "Damore said (list of things he didn't actually say)" and the bailey is all over the media and Internet and is used to attack engineers, that bailey is the one to be concerned about and the one to focus most of my effort against. It's not just argument, it's argument in service of a goal, in this case, not to be stomped on by people using baileys.

Comment by jiro on A tentative solution to a certain mythological beast of a problem · 2018-05-09T18:42:09.558Z · score: 2 (2 votes) · LW · GW

Similarly, our history is marked by an evolution from a disregard for living creatures that impeded our survival, to a respect of other living creatures and life (abolishment of slavery, vegan, better treatment of creatures, etc). With sentience comes greater respect for life and self actualization.

That seems to imply that as society advances, abortion will be prohibited, at least at stages where the fetus has as much mental capacity as an animal.

Comment by jiro on Predicting Future Morality · 2018-05-09T18:35:49.380Z · score: 9 (2 votes) · LW · GW

Unfortunately many attempts to figure this out end up as "I believe X is good morality, but not a lot of people do it. Well, everyone who disagrees with me about X is obviously biased by the fact that doing X is difficult. If doing X was easy, they would all be enlightened and recognize that I am correct about X". Is there something that you don't personally already think is moral, where changing circumstances would lead more people to think it's moral in the future?

Also, it's worth looking at the past as well and seeing what things did not change even though the theory you are using to predict changes in the future seems like it would predict them.

Comment by jiro on Naming the Nameless · 2018-03-24T16:54:06.392Z · score: 12 (6 votes) · LW · GW

Replying to the intro topic instead of to the actual topic: “light-contrast, minimalist elegance” is exactly what the lesserwrong interface is not.

One reason sites have this problem is that designers want to be Doing Something. Nobody gets a promotion based on a web interface that is good because it's easy to ignore. Nobody gets the same satisfaction from making a boring interface as from making an exciting one, and nobody gets the praise. Nobody wants to go unnoticed, even if the best interface is one that doesn't have to be noticed.

Comment by jiro on Caring less · 2018-03-19T09:51:38.869Z · score: 4 (2 votes) · LW · GW

Some things are central examples of caring (caring about the homeless), and other things are noncentral examples of caring (caring about sleeping late, caring about leisure activities). Whether a speaker describes something as "you should care more about X" or "you should care less about Y" does communicate information--it depends partly on how central an example of caring he considers X and Y to be.

(It also depends on how broad X and Y are. If you want to tell someone "you should care less about the entire range of activities that includes everything except climate change", you would probably describe it as "you should care more about climate change". So it doesn't follow that any "care more" can be reasonably phrased as a "care less".)

Comment by jiro on Cash transfers are not necessarily wealth transfers · 2017-12-09T18:38:42.356Z · score: 3 (1 votes) · LW · GW

If you give poor people money to spend on positional goods, the market will eventually respond, but it doesn't respond instantly. They may actually be able to purchase the positional goods in the time it takes for the market to respond. Furthermore, if you give the money to only a relatively small number of poor people, the effect of your money may not be enough for the market to respond much.

Now, apparently what's actually happening is that some poor people are spending the money they get from cash transfers on school fees but (I guess) most aren't. What then? Staying in Econ-101-land, what this indicates is that different people have different values and the poor people who get most utility from having educated kids will do that.

But you're interested in making the donations effective. If only a small portion of the recipients will spend them in utility-increasing ways, you have to discount the effectiveness accordingly.

Comment by jiro on Cash transfers are not necessarily wealth transfers · 2017-12-09T18:31:18.753Z · score: 1 (2 votes) · LW · GW

By poor third world country standards, all education is uber expensive.

Comment by jiro on Some suggestions (desperate please, even) · 2017-11-13T03:59:24.866Z · score: 2 (1 votes) · LW · GW

Firefox under Linux at home also doesn't show the conversations icon or do anything if I click "login". I know that cookies are not disabled there. I have to use Chromium in order to get it to work.

Comment by jiro on Some suggestions (desperate please, even) · 2017-11-10T17:18:49.158Z · score: 19 (6 votes) · LW · GW

Something I noticed just now:

  • "All Posts" doesn't include Meta. It should be named something which indicates this instead of falsely implying that it shows all posts.
Comment by jiro on The Copernican Revolution from the Inside · 2017-11-09T23:29:10.589Z · score: 1 (1 votes) · LW · GW

The intuition that leads them to accept the tower argument doesn't include an explicit step "I am going to think about the drift component. Okay, I decided to ignore it", but people don't think out all steps that way. At some point they will implicitly assume that the drift component is negligible (and they will be correct).

Comment by jiro on An Equilibrium of No Free Energy · 2017-11-09T23:24:02.279Z · score: 1 (1 votes) · LW · GW

That doesn't help because there's no baseline. How many times did he have public positions that didn't pan out?

But the point is that "Eliezer knew better than the experts with respect to lamps" doesn't imply "Eliezer knows better than the experts on typical LW topics about which Eliezer claims to know better than the experts".

Some suggestions (desperate please, even)

2017-11-09T23:14:14.427Z · score: 15 (6 votes)
Comment by jiro on The Copernican Revolution from the Inside · 2017-11-03T01:57:10.036Z · score: 4 (2 votes) · LW · GW
Now if at the end of thinking you convinced yourself of yadda yadda straight line physics yadda yadda you were unfortunately mistaken. The tower argument is correct.

No, it isn't, not really.

If the motion of the point on the Earth at the tower has two components, one which is a straight line and one which isn't, and the straight line component is orders of magnitude larger than the other one (as it is over the course of a tower experiment), then it's fair to say that "straight line physics" is the answer. It's not literally 100% of the answer, of course, because of that small second component, but it's almost 100% of the answer. It isn't "mistaken" except to the same kind of pedant who insists that "humans have two legs" is mistaken because you really need to say that they average 1.99987 legs.
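
For a sense of scale, a back-of-the-envelope check (my own assumed numbers, not figures from the comment; the 50 m tower height and 45° latitude are illustrative): the eastward deflection of a dropped object is roughly (1/3)·ω·g·t³·cos(latitude), which works out to millimetres against a fall of tens of metres.

```python
import math

g = 9.81          # m/s^2
omega = 7.292e-5  # Earth's rotation rate, rad/s
h = 50.0          # assumed tower height, m (illustrative)
lat = math.radians(45.0)

t = math.sqrt(2 * h / g)                      # fall time, s
drift = omega * g * t**3 * math.cos(lat) / 3  # eastward deflection, m

print(f"fall: {h:.0f} m, eastward drift: {drift * 1000:.1f} mm "
      f"(about {drift / h:.0e} of the fall)")
```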

Comment by jiro on Inadequacy and Modesty · 2017-11-02T18:09:09.241Z · score: 7 (4 votes) · LW · GW

"It is unlikely that there's a 20 dollar bill in the street" doesn't imply "if you see a 20 dollar bill in the street, it's probably fake". Whether it's fake depends on the relative likelihood of fake and real 20 dollar bills. The relative proportion of failed to successful ideas isn't anywhere near as favorable as the proportion of fake to real $20 bills.

Comment by jiro on An Equilibrium of No Free Energy · 2017-11-02T14:29:48.711Z · score: 1 (1 votes) · LW · GW

"Hindsight bias" seems like the wrong term

Quoting Less Wrong wiki: "Hindsight bias is a tendency to overestimate the foreseeability of events that have actually happened. I.e., subjects given information about X, and asked to assign a probability that X will happen, assign much lower probabilities than subjects who are given the same information about X, are told that X actually happened, and asked to estimate the foreseeable probability of X."

Eliezer claims that he knows better than the experts. The event being foreseen is "my claim to know better than the experts pans out". He's pointing to a single instance of that, where it did indeed pan out, and using it to suggest that the event is relatively likely to happen in general. That's a form of hindsight bias.

"I think it would be helpful to explicitly state how you'd expect it to be unrepresentative. "

We know that there are areas where Eliezer claims to know better than the experts. We also know that the most prominent ones of those are not medical at all. There are tons of experts who deny LW-style AI danger, or say that cryonics is pointless, or that you don't have to believe many worlds theory to be a competent physicist. So the answer is "those things are so far from SAD that I'd be surprised if there was any way they could be representative."

Comment by jiro on An Equilibrium of No Free Energy · 2017-11-01T19:47:24.392Z · score: -3 (4 votes) · LW · GW

>Suppose it were the case that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.

This is hindsight bias. Eliezer gives this example because it's an example which happened to work.

But the relevant question is not "would immodesty, in this cherry-picked case, produce the right result", but "would immodesty, when applied to many cases whose truth value you don't know about in advance, produce the right result". The procedure that has the greatest chance of working overall might fail in this particular case.

There are all sorts of things which can help you in a cherry-picked case subject to hindsight bias and availability bias, which are bad overall. There are automobile accidents where people were saved by not having seatbelts, but it would be dumb to point to one of those and use it as justification for a policy of not wearing a seatbelt.

Comment by jiro on Feedback on LW 2.0 · 2017-10-12T20:45:16.898Z · score: 0 (0 votes) · LW · GW

Context here suggests that it's something like "the idea that typographical choices for LW2 should match those for the web as a whole"

The idea that the study of typographical choices for the web is a mature science whose (nontrivial) recommendations can all be taken at face value.

Comment by jiro on Feedback on LW 2.0 · 2017-10-11T19:35:54.583Z · score: 0 (0 votes) · LW · GW

Looks like LW 2.0 is using a 20px font size, and 25px line height, which is in range of what is recommended.

Is "what was recommended" similar to "mistakes were made"? It blames it on someone else, while leaving the "someone else" unnamed.

Existing recommendations about text size (and particularly, about not fitting too much text on a line) do not consider that Lesswrong has a different usage pattern than most sites. There are references dating back to 1971, but I can't figure out if any scientific studies were actually conducted at the time to determine this, and at any rate, printed text is not the web.

Also, beware of using some recommendation just because it's easy to measure.

This is basically breaking the site in order to fit "recommendations". LW 2.0 is bad, and everyone involved should feel bad. It is fundamentally designed around a bad idea.

Comment by jiro on Feedback on LW 2.0 · 2017-10-03T19:12:24.613Z · score: 1 (1 votes) · LW · GW

Not only is that obscure, it shows the comments as abbreviated and doesn't let you reply to them. It's not so much a list of comments as a list of things that you can use as comments if you take a couple of extra steps.

Comment by jiro on Feedback on LW 2.0 · 2017-10-03T19:03:07.208Z · score: 1 (1 votes) · LW · GW

Trying the site right now from work using Chrome, Firefox, and IE 11:

  • Firefox fails to load images for the magnifying glass used for "search" at the top of the page, and the "expand_less" and "expand_more" arrows. Otherwise it mostly works.
  • Clicking the capitalized "LOG IN" on the home page does nothing on IE.
  • On IE (but not Firefox) going to Codex briefly puts up the actual page, then it disappears and switches to a different page "Sorry, we couldn't find what you were looking for." The location bar still shows https://www.lesserwrong.com/codex. This page has a mixed case "Log in" at the top.
  • Going to a featured post doesn't make it actually disappear, but it starts as a properly formatted post (title centered, uppercase LOGIN) then switches to one improperly formatted with the title on the left and the mixed case "Log in". It does not show comments, instead endlessly throbbing the o o o at the bottom of the page.
  • Clicking the mixed case "Log in" produces the normal login box, which lets me type in a username and password and click "SIGN IN", except that it is too far to the right (going off the page if I don't stretch it) and I can't actually click the "SIGN IN". When I hover over it I get a slashed circle and clicking it produces no response.
  • Neither Firefox nor IE produces the "hi, welcome to lesswrong 2.0" at the bottom right of the page, or shows a red number in the conversations icon there.
  • Chrome has no problems.

Only Firefox is restricting any cookies (and I already unrestricted the one I need to log in).

Comment by jiro on Feedback on LW 2.0 · 2017-10-03T18:34:32.754Z · score: 5 (5 votes) · LW · GW

Weighted karma is a system that heavily violates user expectations and is a bad idea for that reason alone.

Comment by jiro on LW 2.0 Open Beta Live · 2017-09-22T20:29:39.306Z · score: 1 (1 votes) · LW · GW

I am unable to use this open beta because of the problem I describe here.

Comment by jiro on LW 2.0 Strategic Overview · 2017-09-21T17:14:08.403Z · score: 0 (0 votes) · LW · GW

It seems to have been a cookie problem so I got it working.

However, I ended up with two logins here. One I never used much, and the other is this one. Naturally, lesserwrong decided that the one that it was going to associate with my email address is the one that I never used much.

I'd like to get "Jiro" on lesserwrong, but I can't, since changing password is a per-email thing and it changes the password of the other login. Could you please fix this?

Comment by jiro on LW 2.0 Strategic Overview · 2017-09-20T19:35:05.618Z · score: 0 (0 votes) · LW · GW

And the expected behavior when using IE or Firefox is that you can't even get to the login screen? I find that unlikely.

Comment by jiro on LW 2.0 Strategic Overview · 2017-09-20T16:02:41.247Z · score: 0 (0 votes) · LW · GW

That can't explain it, unless the private beta is accessed by going somewhere other than lesserwrong.com. The site isn't going to know that someone is a participant in the private beta until they've logged in. And the problems I described happen prior to logging in.

Comment by jiro on LW 2.0 Strategic Overview · 2017-09-19T20:34:04.314Z · score: 0 (0 votes) · LW · GW

People are clearly posting things there that postdate the DB import, so they must be logging in. Also, that doesn't explain it working better on Chrome than on other browsers.

Comment by jiro on LW 2.0 Strategic Overview · 2017-09-19T18:21:53.955Z · score: 0 (0 votes) · LW · GW

I just tried lesserwrong.com. Neither IE nor Firefox would do anything when I clicked "login". I had to use Chrome. Even using Chrome, I tried to sign in and had no feedback when I used a bad user and password, making it unclear whether the values were even submitted to the server.