Comments

Comment by Anon User (anon-user) on Don't Sell Your Soul · 2021-04-06T22:10:29.645Z · LW · GW

Here is yet another reason this trade may be irrational. If souls were real, then I'd expect the value of a soul to be quite high. For the sake of the argument, let's posit the value of a soul (if it existed) at $1M. Now the question is - can you make 100,000 statements that you are about as certain of being true as the statement "souls do not exist" and not make even a single mistake? If the answer is "no" (and it's probably "no" for all but the most careful people), then the habit of selling souls for $10 is a bad habit to have - sooner or later you'd mess up and sell something way too valuable.
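To spell out the arithmetic implied above (using the posited $1M value): selling for $10 is only acceptable if

$$P(\text{soul exists}) \times \$1{,}000{,}000 < \$10 \;\Longleftrightarrow\; P(\text{soul exists}) < \frac{1}{100{,}000},$$

i.e. you need to be calibrated at roughly the 99.999% level - hence the 100,000-statements test.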

Comment by Anon User (anon-user) on What is the Difference Between Cheerful Price and Shadow Price? · 2021-03-28T20:36:19.574Z · LW · GW

I'd say that the cheerful price is primarily a psychological concept, while the shadow price is a more analytical one, and that is the whole point - when what you feel and what you think you ought to feel disagree, the concept of cheerful price explicitly tells you not to worry about the mismatch, and to go with the former.

Comment by Anon User (anon-user) on Against evolution as an analogy for how humans will create AGI · 2021-03-23T16:40:24.387Z · LW · GW

It seems you are actually describing a 3-algorithm stack view for both the human and the AGI. For the human, there is 1) evolution working on the genome level, 2) long-term brain development / learning, and 3) the brain solving a particular task. Relatively speaking, evolution (#1) works on a much smaller number of much more legible parameters than brain development (#2). So if we use some sort of genetic algorithm for optimizing AGI meta-parameters, then we'd get a stack that is very similar in style (a minimal sketch below). And in any case we need to worry about the "base" optimizer used in the AGI version of #1+#2 producing an unaligned mesa-optimizer for the AGI version of the #3 algorithm.
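A toy illustration of what that stack could look like - everything here (the task, the parameters, the names) is invented for the sketch: a genetic algorithm (#1) searches over a couple of legible meta-parameters, each candidate is "developed" (#2) into many opaque weights, and the developed weights solve task instances (#3).

```python
import random

def solve_task(weights, x):
    # Level 3: the "developed" model solving one task instance
    # (weights are treated as polynomial coefficients)
    return sum(w * x ** i for i, w in enumerate(weights))

def develop(meta, data, steps=200):
    # Level 2: "development" - tuning many illegible parameters
    lr, n_weights = meta
    weights = [0.0] * n_weights
    def loss(ws):
        return sum((solve_task(ws, x) - y) ** 2 for x, y in data)
    for _ in range(steps):
        trial = weights[:]
        trial[random.randrange(n_weights)] += random.uniform(-lr, lr)
        if loss(trial) < loss(weights):
            weights = trial
    return weights

def evolve(data, generations=15, pop=10):
    # Level 1: genetic search over a few legible meta-parameters
    def fitness(meta):
        ws = develop(meta, data)
        return -sum((solve_task(ws, x) - y) ** 2 for x, y in data)
    population = [(random.uniform(0.01, 1.0), random.randint(2, 6)) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]
        mutants = [(lr * random.uniform(0.8, 1.2), max(2, n + random.choice([-1, 0, 1])))
                   for lr, n in survivors]
        population = survivors + mutants
    return max(population, key=fitness)

data = [(x, 3 * x + 1) for x in range(5)]  # toy regression task
print(evolve(data))                        # best (learning_rate, n_weights) found
```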

Comment by Anon User (anon-user) on Calculating Kelly · 2021-02-22T19:30:43.068Z · LW · GW

Note that Kelly is valid under the assumption that you know the true probabilities. I do not know whether it is still valid when all you know is a noisy estimate of the true probabilities - is it? It definitely gets more complicated when you are betting against somebody with a similarly noisy estimate of the same probability, as at some level you now need to take their willingness to bet into account when estimating the true probability - and the higher they are willing to go, the stronger the evidence that your estimate may be off. At the very least, that means that the uncertainty of your own estimate also becomes a factor (the less certain you are, the more attention you should pay to the fact that somebody is willing to bet against you). Then the fact that sometimes you need to spend money on things, rather than just investing/betting/etc., and that you may have other sources of income, also complicates the calculus.
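As a minimal illustration of the "noisy estimate" worry (not from the original comment - a Monte-Carlo sketch with arbitrary numbers): if your estimate is consistently overconfident, Kelly sizing based on it overbets, which can flip the expected log-growth negative.

```python
import math
import random

def kelly_fraction(p, b=1.0):
    # Classic Kelly for a bet paying b-to-1 when you win with probability p
    return max(0.0, (b * p - (1 - p)) / b)

def avg_log_growth(p_true, p_est, rounds=100_000):
    # Simulated per-bet log-growth when you size bets using p_est
    # but outcomes are drawn from p_true
    f = kelly_fraction(p_est)
    total = 0.0
    for _ in range(rounds):
        win = random.random() < p_true
        total += math.log(1 + f) if win else math.log(1 - f)
    return total / rounds

p_true = 0.55
print("exact estimate:", avg_log_growth(p_true, 0.55))  # ~ +0.005 per bet
print("overconfident: ", avg_log_growth(p_true, 0.65))  # ~ -0.016 per bet
```

(This only illustrates a biased estimate; whether Kelly stays optimal under unbiased noise is exactly the open question above.)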

Comment by Anon User (anon-user) on Using Betting Markets to Make Decisions · 2021-02-19T21:02:09.650Z · LW · GW

The way you described the chess/marriage/etc. market, it's a bit vulnerable. Imagine there is a move that appears to be a very strong one, but with a small possibility of a devastating countermove that is costly for market participants to analyze. There is an incentive to bet on the move - if the countermove exists, hopefully somebody will discover it, bet heavily against the move, and cause the price to drop enough that the move is not taken and the bets are refunded. If no countermove exists, the bet is a good one, and is profitable. But if nobody bothers to check for the countermove, and it exists, everybody (those who bet on the move, and the decision makers who made the move) is in trouble - and it could still be the case that no bettor has enough incentive to check for the countermove (even if it exists, they derive no benefit from the significant mispricing of the move, since the bets on a rejected move are simply refunded).
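A toy expected-value sketch of that incentive gap (all numbers are invented for illustration): when the chance of a countermove is small, paying the analysis cost is individually irrational, precisely because the refund mechanism means discovery pays nothing.

```python
# Payoffs for a single bettor deciding whether to check for the countermove.
stake = 100.0          # size of the bet on the move
edge = 0.10            # profit rate if the move is good and is taken
analysis_cost = 5.0    # cost of searching for the countermove
p_counter = 0.02       # small chance the devastating countermove exists

# Bet without checking: lose the stake if the countermove exists and goes
# unnoticed; pocket the edge otherwise.
ev_no_check = p_counter * (-stake) + (1 - p_counter) * (stake * edge)

# Check first: if the countermove is found, bet against the move; the move is
# rejected and all bets are refunded, so the discovery itself pays zero.
ev_check = p_counter * 0.0 + (1 - p_counter) * (stake * edge) - analysis_cost

print(f"no check: {ev_no_check:+.2f}, check: {ev_check:+.2f}")
# no check: +7.80, check: +4.80 -> individually, nobody checks
```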

Comment by Anon User (anon-user) on The Lottery Paradox · 2021-02-01T17:59:22.828Z · LW · GW

Right, which is why the claim is immediately more suspect if Xavier is a close friend/relative/etc.

Comment by anon-user on [deleted post] 2021-01-29T03:06:20.368Z

I do not see the connection. The gist of Newcomb's Problem does not change if the player is given a time limit (you have to choose within an hour, or you do not get anything). The time-limited halting problem is, of course, trivially decidable (a sketch below).
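Decidable by brute-force simulation, that is. A toy sketch (everything here is invented for illustration; "programs" are modeled as Python generators that yield once per simulated step):

```python
def halts_within(program, max_steps):
    """Run `program` (a generator function) for at most `max_steps` steps.
    Returns True iff it finishes within the limit - and always terminates."""
    steps = program()
    for _ in range(max_steps):
        try:
            next(steps)
        except StopIteration:
            return True   # program halted within the step budget
    return False          # budget exhausted without halting

def quick():       # halts after 3 steps
    for _ in range(3):
        yield

def forever():     # never halts
    while True:
        yield

print(halts_within(quick, 10))    # True
print(halts_within(forever, 10))  # False - decided without running forever
```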

Comment by Anon User (anon-user) on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-10T23:28:34.579Z · LW · GW

I think your analysis of "you're only X because of Y" is missing the implicit "you are doing it wrong" accusation in the statement. Basically, the implied meaning, I think, is that while there are acceptable reasons to X, you lack any of them; instead, your reason for X is Y, which is not one of the acceptable reasons. This is why your Z is a defense - it claims reasons in the acceptable set. Another defense might be to respond directly to the implied accusation and explain why Y should be an OK reason to X. "You're only enjoying that movie scene because you know what happened before it" - "Yeah, and what's wrong with that?"

Comment by Anon User (anon-user) on 2020 Election: Prediction Markets versus Polling/Modeling Assessment and Postmortem · 2020-11-19T00:25:37.295Z · LW · GW

Random data point - https://ftx.com/trade/TRUMPFEB ("Trump is the President on Feb 1st, 2021") is currently at 0.142 (14.2% probability it will happen)...

Comment by Anon User (anon-user) on My Confusion about Moral Philosophy · 2020-11-15T02:08:26.949Z · LW · GW

In mathematics, axioms are not just chosen based on what feels correct - instead, the implications of those axioms are explored, and only if those seem to match the intuition too do the axioms have some chance of getting accepted. If a reasonable-seeming set of axioms allows you to prove something that clearly should not be provable (such as - in the extreme case - a contradiction), then you know your axioms are no good.

Axiomatically stating a particular ethical framework, then exploring the consequences of the axioms in extreme and tricky cases, can serve a similar purpose - if seemingly sensible ethical "axioms" lead to completely unreasonable conclusions, then you know you have to revise the stated ethical framework in some way.
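The classic mathematical instance of this pattern (my example, not the original comment's) is naive set comprehension: the reasonable-seeming axiom that every property defines a set yields Russell's paradox,

$$R = \{x \mid x \notin x\} \implies (R \in R \leftrightarrow R \notin R),$$

an outright contradiction, which is why that axiom had to be revised.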

Comment by Anon User (anon-user) on Why are deaths not increasing with infections in the US? · 2020-11-02T16:03:34.740Z · LW · GW

Perhaps also higher availability of testing and higher awareness mean that more people with mild symptoms get tested?

Comment by Anon User (anon-user) on AI race considerations in a report by the U.S. House Committee on Armed Services · 2020-10-04T22:55:05.119Z · LW · GW

Well, this is the Committee on Armed Services - obviously, an adversarial view of things is kind of part of their job description... (Not that this isn't a problem, just pointing out that they are probably not the best place to look for a non-adversarial opinion.)

Comment by Anon User (anon-user) on I'm looking for research looking at the influence of fiction on changing elite/public behaviors and opinions · 2020-08-07T19:47:12.457Z · LW · GW

More of an anecdote than research, but I recently became aware of Dr. A. J. Cronin's novel "The Citadel", published in 1937, and the claim that the book prompted new ideas about medicine and ethics, inspiring to some extent the UK NHS and the ideas behind it. I did not look into this much myself, but it is certainly a fascinating story, if true.

Comment by Anon User (anon-user) on How to persuade people/groups out of sunk cost fallacy? · 2020-07-14T20:02:54.386Z · LW · GW

The existence of the "do not throw good money after bad" idiom is indirect evidence that this kind of reframing is helpful in persuading people against the fallacy, at least in some contexts.

Comment by Anon User (anon-user) on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T19:12:59.794Z · LW · GW

First, the poor have a lower savings rate and consume faster, so money velocity is higher. Second, minimum wages are local, and I would imagine that poor people on average spend a bigger fraction of their consumption locally (but I am not as certain about this one).

Comment by Anon User (anon-user) on Covid-19: Analysis of Mortality Data · 2020-07-13T19:05:44.850Z · LW · GW

What are the "unnatural" deaths - are they things like car accidents? For those, I'd expect a pretty significant decrease, because of the much-reduced mobility.

Comment by Anon User (anon-user) on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T03:29:51.383Z · LW · GW

Perhaps one aspect of the minimum wage that you are missing is that it differs from price controls on fungible goods in several important respects. Everything else being equal:

  1. Higher minimum wage means higher demand for the goods consumed by minimum-wage employees.
  2. Higher minimum wage incentivizes employers to invest more in their employees' productivity (training, better work conditions, etc.).
  3. The same employees may be more productive if you pay them higher wages, and you may be able to attract better employees.

In some cases, #2 and #3 might mean that there are several equilibrium points that are roughly equally good for the employers - either hire high-turnover, low-productivity people at lower wages, or hire lower-turnover, higher-productivity people at higher wages - and effect #1 is enough for the higher minimum wage to be a win-win (which is perhaps why some employers actually support minimum wage laws).

Comment by Anon User (anon-user) on Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate · 2020-06-22T02:04:14.516Z · LW · GW

Your world descriptions and your objections seem to assume that HRAD is the only prerequisite to being able to create an aligned AGI, rather than simply one of them (and the one worth focusing on because of a combination of factors, such as which areas of research are the least attended to by other researchers, which areas could provide insights useful for attacking other ones, which ones are the most likely to be on a critical path, etc.). It could very well be an "overwhelming priority", as you stated the position you are trying to understand, without the goal being "to come up with a theory of rationality [...] that [...] allows one to build an agent from the ground up".

I am thinking of the following optimization problem (a rough formalization below). Let R1 be all the research that we anticipate getting completed by the mainstream AI community by the time they create an AGI. Let R2 be the smallest amount of additional research such that R1+R2 allows you to create an aligned AGI. Which research questions that we know how to formulate today, and have a way to start attacking today, are the most likely to be in R2? And among the top choices, which ones are also 1) more likely to produce insights that would help with other parts of R2, and 2) less likely to compress the AGI timeline even further? It seems possible to believe that HRAD is such a good choice (working backwards from R2) without being in one of your worlds (all of which work forward from HRAD).
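One rough way to write down that selection criterion (the notation is mine, not the original comment's):

$$R_2 = \arg\min \{\, |R| \;:\; R_1 \cup R \text{ suffices for an aligned AGI} \,\}, \qquad q^* = \arg\max_{q \in Q_{\text{today}}} P(q \in R_2),$$

where $Q_{\text{today}}$ is the set of questions we can formulate and start attacking now, with criteria 1) and 2) above acting as tie-breakers among the top choices.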

Comment by Anon User (anon-user) on Should I self-variolate to COVID-19 · 2020-06-08T18:34:03.853Z · LW · GW

I saw guidelines along the lines of "You can stop self-quarantining if you have had two negative tests taken more than 24 hours apart, with the first test at least 3 days after the exposure". I do not know where this came from, but I saw it from an org that I would expect to be fairly sane in making evidence-based decisions.

Comment by Anon User (anon-user) on Pasteur's quadrant · 2020-06-05T21:19:37.369Z · LW · GW

I think you might be trying to apply the concept at the wrong granularity. Yes, there is often an iterative combination of the fundamental and the applied, but then you need to classify each iterative step, rather than the whole sequence, and the point is that it's a "Pasteur-Edison" iteration, not a "Bohr-Edison" one. Almost any new fundamental advance has to go through the "Edison" phase as the technology readiness grows, before it becomes practical. This is true whether the advance came from the "Bohr" quadrant or the "Pasteur" one. The distinction is whether you are mindful of the potential applications when embarking on the fundamental part ("Pasteur"), or whether the practical implications are only figured out after the fact ("Bohr"). The distinction becomes particularly pronounced when the research effort is only being proposed, and you are asking for funding.

Comment by Anon User (anon-user) on Should I self-variolate to COVID-19 · 2020-05-25T22:02:56.021Z · LW · GW

Another issue to consider is that the test could have a high false negative rate (I have seen reports as high as 15% - e.g. https://www.npr.org/sections/health-shots/2020/04/21/838794281/study-raises-questions-about-false-negatives-from-quick-covid-19-test), and it appears that false negatives are more likely for asymptomatic people.
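To put the 15% figure in perspective (the numbers below are my own illustration, not from the study): suppose your prior probability of infection after a deliberate exposure is 50%, the false negative rate is 15%, and the specificity is 99%. Then a single negative test still leaves

$$P(\text{infected} \mid \text{negative}) = \frac{0.15 \times 0.5}{0.15 \times 0.5 + 0.99 \times 0.5} \approx 0.13,$$

i.e. roughly a 13% chance of infection despite the negative result.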

Comment by Anon User (anon-user) on How should AIs update a prior over human preferences? · 2020-05-16T04:25:14.644Z · LW · GW

I wonder whether you may be conflating two somewhat distinct (perhaps even orthogonal) challenges not modeled in the CIRL model:

  • Human actions may reflect human values very imperfectly (or worse - may be an imperfect reflection of inconsistent, conflicting values).
  • Some actions by the AI may damage the human, at which point the human's actions may stop being meaningfully correlated with the value function. This problem would still be relevant even if we somehow found an ideal human capable of acting on their values in a perfectly rational manner.

The first challenge "only" requires the AI to be better at deducing the "real" values ("only" is in quotes because it's obviously still a major unsolved problem, and "real" is in quotes because it's not a given what that actually means). The second challenge is about the AI needing to be constrained in its actions even before it knows the value function - but there is at least a whole field of Safe RL on how to do this for much simpler tasks, like learning to move a robotic arm without breaking anything in the process.
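A minimal, hypothetical sketch of that Safe-RL idea (everything here - the arm, the limits, the names - is invented for illustration): a hard "shield" constrains the agent's actions before it has learned anything about the value function.

```python
import random

SAFE_RANGE = (-1.0, 1.0)   # joint positions the arm must never leave

def is_safe(position, action):
    # The constraint check is independent of the (unknown) reward/value function
    return SAFE_RANGE[0] <= position + action <= SAFE_RANGE[1]

def shielded_step(position, proposed_action):
    """Execute the learner's proposed action only if it keeps the arm safe;
    otherwise fall back to doing nothing."""
    if is_safe(position, proposed_action):
        return position + proposed_action
    return position

position = 0.0
for _ in range(1000):
    action = random.uniform(-0.5, 0.5)   # stand-in for a still-untrained policy
    position = shielded_step(position, action)

# Safety held throughout "learning", regardless of what the policy proposed
assert SAFE_RANGE[0] <= position <= SAFE_RANGE[1]
print(position)
```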

Comment by Anon User (anon-user) on Why do you (not) use a pseudonym on LessWrong? · 2020-05-08T21:50:17.137Z · LW · GW

My job is related to AI safety, and I do not have my employer's permission to discuss any details of my work. I do not intend to do so anyway, but being anonymous reduces the chances of something being misinterpreted, taken out of context, etc. and causing trouble for me.

Even unrelated to my employment, my default policy is to be very careful about anything I say publicly under my real name - particularly if it has any chance of being seen as controversial (again, wary of reputational risks). Using an alias reduces the transaction cost of posting (I still have to think twice sometimes, but do not have to police my posts as hard).

Comment by Anon User (anon-user) on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T05:38:54.812Z · LW · GW

But don't you see - those infections are a second wave, so they do not have to be counted. The model is almost tautologically true that way. But it is terribly misleading, and very irresponsibly so.

Comment by Anon User (anon-user) on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T00:19:20.892Z · LW · GW

They are not very explicit about it (which is a huge problem by itself), but they seem to be saying that they are only predicting the "first wave" - so they are not predicting 0 deaths after July, they are just defining those deaths as no longer being part of the "first wave". So the way they present the model predictions is even more unbelievably wrong than the model itself!

Comment by Anon User (anon-user) on Charity to help people get US stimulus payments? · 2020-03-27T16:19:30.092Z · LW · GW

There are already free online filing options for people with incomes up to $69K: https://www.irs.gov/filing/free-file-do-your-federal-taxes-for-free

Comment by Anon User (anon-user) on What do you make of AGI:unaligned::spaceships:not enough food? · 2020-02-22T19:21:31.970Z · LW · GW

One big difference is that "having enough food" admits a value function ("quantity of food") that is both well understood and, for the most part, smooth and continuous over the design space given today's design methodology (if we try to design a ship with a particular amount of food and make a tiny mistake, it's unlikely that the quantity of food will change that much). In contrast, the "how well is it aligned" metric is very poorly understood (at least compared with "amount of food on a spaceship") and a lot more discontinuous (using today's techniques of designing AIs, a tiny error in alignment is almost certain to cause catastrophic failure). Basically, we do not know what exactly it means to get it right, and even if we knew, we do not know what the acceptable error tolerances are, and even if we knew, we do not know how to meet them. None of that applies to the amount of food on a spaceship.

Comment by Anon User (anon-user) on Is there a moral obligation to respect disagreed analysis? · 2020-01-12T06:17:04.538Z · LW · GW

I think you are focusing on the wrong aspect of your proposed action. The question is not whether you owe it to P to accept their arguments; the question is whether you owe it to P to be open about doing the deed anyway. While I think you do not have a moral obligation to heed P's arguments, having had this conversation with P, you are morally obligated to tell them ahead of time that you are going to do it anyway. Going behind their back and hoping they never find out seems like a betrayal of their trust / a lie of omission. Having told you in strong terms that they are against the action, P can reasonably expect you not to do it behind their back.

Comment by Anon User (anon-user) on Why are people so bad at dating? · 2019-10-28T19:46:54.537Z · LW · GW

The strategies for finding a mate must have been highly optimized by evolution, and that likely included making us hesitant to deviate from the evolved strategies. Perhaps in the ancestral environment, dating advice tended to be unhelpful, if not outright sabotage from the competition, and so we evolved a strong resistance to paying attention to certain kinds of dating advice?