## Comments

**dvasya** on Has Moore's Law actually slowed down? · 2019-08-21T05:14:19.091Z · score: 4 (3 votes) · LW · GW

The 5nm in "5nm scale" no longer means "things are literally 5nm in size". Rather, it's become a fancy way of saying something like "200x the linear transistor density of an old 1-micron scale chip". The gates are still larger than 5nm, it's just that things are now getting put on their side to make more room ( https://en.wikipedia.org/wiki/FinFET ). Some chip measures sure are slowing down, but Moore's law (referring to the number of transistors per chip and nothing else) still isn't one of them despite claims of impending doom due to "quantum effects" originally dating back to (IIRC) the eighties.

**dvasya** on I'm looking for alternative funding strategies for cryonics. · 2019-07-01T04:17:08.276Z · score: 2 (2 votes) · LW · GW

I know some people who (at least used to) maintain a group pool of cash to fund the preservation of whoever died first (at which point the pool would need to be refilled). So if you're unlucky enough to be the first to die out of N people, you only pay 1/N of the full price, and if you're lucky (last to die) you eventually pay about N times the price, but at least you get more time to earn the money. Not sure how it was all structured legally. Of course, if you're really pressed for time it may be hard to convince other people to join such an arrangement.

Fundraisers have helped in the past: https://alcor.org/Library/html/casesummary2643.html - although that one fell quite short of the sticker price, and ultimately Alcor had to foot most of the bill anyway.

**dvasya** on [deleted post] 2019-06-13T03:17:56.345Z

There aren't that "many" other companies. Talk to KrioRus, I know they explored setting up a cryonics facility in Switzerland at some point.

**dvasya** on Swarm AI (tool) · 2019-05-03T01:37:58.890Z · score: 6 (4 votes) · LW · GW

I'm pretty sure (epistemic status: Good Judgment Project Superforecaster) the "AI" in the name is pure buzz and the underlying aggregation algorithm is something very simple. If you want to set up some quick group predictions for free, there's https://tinycast.cultivatelabs.com/ which has a transparent and battle-tested aggregation mechanism (LMSR prediction markets) and doesn't use catchy buzzwords to market itself. For other styles of aggregation there's "the original" Good Judgment Inc, a spinoff from GJP which actually ran an aggregation algorithm contest in parallel with the forecaster contest (somehow no "AI" buzz either). They are running a public competition at https://www.gjopen.com/ where anyone can forecast and get scored, but if you want to ask your own questions that's a bit more expensive than Swarm. Unfortunately there doesn't seem to be a good survey-style group forecasting platform out in the open. But that's fine, TinyCast is adequate as long as you read their LMSR algorithm intro.
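For the curious, the LMSR mechanism TinyCast uses really is simple enough to fit in a few lines. Here is a minimal sketch (my own illustration, not TinyCast's actual code; the liquidity parameter `b` is chosen arbitrarily):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices (= probabilities): exp(q_i/b) normalized."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, outcome, shares, b=100.0):
    """What a trader pays to buy `shares` of `outcome` given outstanding q."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Two-outcome market with no shares sold yet: prices start at 50/50,
# and buying 10 shares of outcome 0 costs slightly more than 10 * 0.5.
q = [0.0, 0.0]
print(lmsr_prices(q))
print(trade_cost(q, 0, 10.0))
```

The point of the cost-function formulation is that the market maker always quotes a price, so aggregation happens through trading rather than through any opaque "AI".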

**dvasya** on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2019-01-30T04:54:20.090Z · score: 1 (1 votes) · LW · GW

The books are marketed as "hard" sci-fi but it seems all the "science" (at least in the first book, didn't read the others) is just mountains of mysticism constructed around statements that can sound "deep" on some superficial level but aren't at all mysterious, like "three-body systems interacting via central forces are generally unstable" or "you can encode some information into the quantum state of a particle" (yet of course they do contain nuance that's completely lost on the author, such as "what if two of the particles are heavy and much closer to each other than to the third?", or "which basis do you want to measure the state of your particle in?"). Compare to the Puppeteers' homeworld from the Ringworld series (yes, cheesy, but still...)

**dvasya** on Beliefs at different timescales · 2018-11-05T00:39:54.994Z · score: 15 (6 votes) · LW · GW

(epistemic status: physicist, do simulations for a living)

> Our long-term thermodynamic model P_n is less accurate than a simulation

I think it would be fair to say that the Boltzmann distribution and your instantiation of the system contain not more/less but _different kinds of_ information.

Your simulation (assume infinite precision for simplicity) is just one instantiation of a trajectory of your system. There's nothing stochastic about it; it's merely an internally consistent static set of configurations, connected to each other by deterministic equations of motion.

The Boltzmann distribution is [the mathematical limit of] the distribution you will be sampling from if you evolve your system under a certain set of conditions (conditions that hold, to a very good approximation, for a very wide variety of physical systems). Boltzmann tells you how likely you are to encounter a specific configuration in a run that satisfies those conditions.
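For reference, the distribution in question (E is the configuration's energy, Z the normalizing factor):

```latex
p(x) = \frac{e^{-E(x)/k_B T}}{Z},
\qquad
Z = \sum_{x} e^{-E(x)/k_B T}
```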

I suppose you could say that the Boltzmann distribution is less *precise* in the sense that it doesn't give you a definite Boolean answer as to whether a certain configuration will be visited in a given run. On the other hand, a finite number of runs is necessarily less *accurate* when viewed as a sampling of the system's configurational space.

> we can't run simulations for a long time, so we have to make do with the Boltzmann distribution

...and on the third hand, usually even for a simple system like a few-atom molecule, the dimensionality of the configurational space is so enormous anyway that you have to resort to some form of sampling (propagating the equations of motion is one option) in order to calculate your partition function (the normalizing factor in the Boltzmann distribution). Yes, that's right: the Boltzmann distribution is actually *terribly expensive* to compute for even relatively simple systems!
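To make that concrete, here is a toy Metropolis sampler (my own sketch, not tied to any particular simulation package) that draws from a Boltzmann distribution without ever computing Z:

```python
import math
import random

def metropolis_sample(energy, x0, propose, beta=1.0, n_steps=50000, seed=0):
    """Sample from the Boltzmann distribution p(x) ~ exp(-beta * E(x)).

    Z never appears: only energy *differences* enter the acceptance
    rule, which is exactly why sampling sidesteps the expensive
    partition-function computation.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = propose(x, rng)
        e_new = energy(x_new)
        # Accept with probability min(1, exp(-beta * (E_new - E_old))).
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)
    return samples

# Toy "system": a harmonic well E(x) = x^2 / 2. At beta = 1 the Boltzmann
# distribution is a standard Gaussian (mean 0, variance 1), and the sample
# moments should come out close to that.
samples = metropolis_sample(lambda x: 0.5 * x * x, 0.0,
                            lambda x, rng: x + rng.uniform(-1.0, 1.0))
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```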

Hope these clarifications of your metaphor also help refine the chess part of your dichotomy! :)

**dvasya** on The Second Law of Thermodynamics, and Engines of Cognition · 2018-11-04T23:09:16.387Z · score: 1 (1 votes) · LW · GW

(the paper: https://journals.aps.org/pr/abstract/10.1103/PhysRev.106.620)

**dvasya** on The Second Law of Thermodynamics, and Engines of Cognition · 2018-11-04T23:09:03.622Z · score: 1 (1 votes) · LW · GW

There's nothing magical about reversing particle speeds. For entropy to decrease back to its original value you would have to know, and be able to change, the speeds with perfect precision, which is of course meaningless in physics. If you get it even the tiniest bit off you might expect _some_ entropy decrease for a while, but inevitably the system will go "off track" (in classical chaos the time this takes is only logarithmic in your precision) and onto a different increasing-entropy trajectory.
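A toy illustration of that logarithmic scaling (my own sketch, using the chaotic logistic map as a stand-in for a real many-body system):

```python
def steps_to_diverge(eps, x0=0.3, threshold=0.1, max_steps=100000):
    """Iterate the chaotic logistic map x -> 4x(1-x) from two starting
    points `eps` apart and count steps until they differ by `threshold`."""
    x, y = x0, x0 + eps
    for n in range(max_steps):
        if abs(x - y) > threshold:
            return n
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
    return max_steps

# Each additional order of magnitude of precision buys only about three
# extra iterations before divergence (this map's Lyapunov exponent is
# ln 2 per step, so divergence time ~ ln(threshold/eps) / ln 2).
for k in (4, 8, 12):
    print(f"eps = 1e-{k}: diverged after {steps_to_diverge(10.0 ** -k)} steps")
```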

Jaynes' 1957 paper has a nice formal explanation of entropy vs. velocity reversal.

**dvasya** on Safely and usefully spectating on AIs optimizing over toy worlds · 2018-07-31T19:55:40.226Z · score: 1 (1 votes) · LW · GW

> design the AI in such a way that it can create agents, but only

This sort of argument would be much more valuable if accompanied by a specific recipe for how to do it, or at least a proof that one must exist. Why worry about the AI designing agents? Why not just "design it in such a way" that it's already Friendly!

**dvasya** on Applying Bayes to an incompletely specified sample space · 2018-07-31T04:25:25.705Z · score: 2 (2 votes) · LW · GW

I agree, it did seem like one of the more-unfinished parts. Still, perhaps a better starting point than nothing at all?

**dvasya** on Applying Bayes to an incompletely specified sample space · 2018-07-30T21:30:30.981Z · score: 2 (2 votes) · LW · GW

Check the chapter on the A_p distribution in Jaynes' book.

**dvasya** on The Value of Those in Effective Altruism · 2016-02-18T00:07:23.281Z · score: -2 (2 votes) · LW · GW

> Losing a typical EA ... decreasing ~1000 utilons to ~3.5, so a ~28500% reduction per person lost.

You seem to be exaggerating a bit here: that's a 99.65% reduction. Hope it's the only inaccuracy in your estimates!
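A quick arithmetic check of where the two figures come from (the same absolute drop divided by different baselines):

```python
old, new = 1000.0, 3.5

reduction = (old - new) / old       # fraction of the original value lost
ratio_increase = (old - new) / new  # same gap measured against the new value

print(f"{reduction:.2%}")        # 99.65%
print(f"{ratio_increase:.0%}")   # 28471%, roughly the quoted ~28500%
```

A quantity can only be "reduced" by at most 100%; the ~28500% figure is the ratio you'd quote for the *increase* going the other way.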

**dvasya** on Rationality Quotes Thread September 2015 · 2015-09-02T16:48:41.122Z · score: -9 (13 votes) · LW · GW

> The main problem with quotes found on the Internet is that everyone immediately believes their authenticity.

-- Vladimir I. Lenin

**dvasya** on Behavior: The Control of Perception · 2015-01-21T21:30:50.557Z · score: 2 (2 votes) · LW · GW

Here's another excellent book roughly from the same time: "The Phenomenon of Science" by Valentin F. Turchin (http://pespmc1.vub.ac.be/posbook.html). It starts from largely similar concepts and proceeds through the evolution of the nervous system to language to math to science. I suspect it may be even more AI-relevant than Powers.

**dvasya** on Bayesianism for humans: "probable enough" · 2014-11-12T23:49:36.890Z · score: 0 (0 votes) · LW · GW

Hi shminux. Sorry, just saw your comment. We don't seem to have a date set for November yet, but let me check with the others. Typically we meet on Saturdays, are you still around on the 22nd? Or we could try Sunday the 16th. Let me know.

**dvasya** on Bayesianism for humans: "probable enough" · 2014-09-03T18:25:41.039Z · score: 1 (1 votes) · LW · GW

The Planning Fallacy explanation makes a lot of sense.

**dvasya** on Meetup : Houston, TX · 2014-08-12T15:41:55.170Z · score: 0 (0 votes) · LW · GW

I hope it's not *really* at 2AM.

**dvasya** on Too good to be true · 2014-07-12T08:52:37.587Z · score: 1 (1 votes) · LW · GW

While the situation admittedly is oversimplified, it does have the advantage that anyone can replicate it exactly at a very moderate expense (a two-headed coin will also do, with a minimum amount of caution). In that respect it may actually be more relevant to the real world than any vaccine/autism study.

Indeed, every experiment should yield a pretty extreme p-value (though never exactly 0), but what gets reported is not the actual p but whether the confidence clears .95 (an arbitrary threshold proposed by Fisher, who never intended it to play the gatekeeping role it currently plays in science, but offered it merely as a rule of thumb for whether a hypothesis is worth a follow-up at all). And even exact p-values quantify only one possible type of error; the probability of the other is generally *not* (1 - p), much less (1 - alpha).

**dvasya** on Too good to be true · 2014-07-11T23:36:40.956Z · score: 1 (1 votes) · LW · GW

(1) is obvious, of course--in hindsight. However, changing your confidence level after the observation is generally advised against. But (2) seems to be confusing Type I and Type II error rates.

On another level, I suppose it can be said that *of course* they are all biased! But, by the actual two-tailed coin rather than researchers' prejudice against normal coins.

**dvasya** on Too good to be true · 2014-07-11T23:20:19.405Z · score: 1 (1 votes) · LW · GW

> Treating ">= 95%" as "= 95%" is a reasoning error

Hence my question in another thread: Was that "exactly 95% confidence" or "at least 95% confidence"? However, when researchers say "at a 95% confidence level" they typically mean "*p* < 0.05", and reporting the actual *p*-values is often even explicitly discouraged (let's not digress into whether that is justified).

Yet *the* mistake I had in mind (as opposed to other, less relevant, merely "*a*" mistakes) involves Type I and Type II error rates. Just because you are 95% (or more) confident of not making one type of error doesn't guarantee you an automatic 5% chance of getting the other.
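Under the toy coin example's assumptions, the two error rates can be written down directly (my own illustration):

```python
# Decision rule in the toy example: flip 100 times and declare the coin
# "two-tailed" only if every single flip lands tails.
n = 100

# Type I error rate (alpha): a *fair* coin passing the test by luck.
alpha = 0.5 ** n

# Type II error rate (beta): a genuinely two-tailed coin failing the test.
# It cannot land heads, so it never fails; the test's power is exactly 1.
beta = 0.0

print(alpha)       # ~7.9e-31, astronomically below the 5% design threshold
print(1.0 - beta)  # power = 1: no replication should be expected to dissent
```

So the expected "about 5 dissenting reports out of 100" would follow only if the *power* were 95%, not the confidence; here the two are wildly different.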

**dvasya** on Too good to be true · 2014-07-11T22:56:12.243Z · score: 1 (7 votes) · LW · GW

Well, perhaps a bit too simple. Consider this. You set your confidence level at 95% and start throwing a coin. You observe 100 tails out of 100. You publish a report saying "the coin has tails on both sides at a 95% confidence level" because that's what you chose during design. Then 99 other researchers repeat your experiment with the same coin, arriving at the same 95%-confidence conclusion. But you would expect to see about 5 reports claiming otherwise! The paradox is resolved when somebody comes up with a trick using a mirror to observe both sides of the coin at once, finally concluding that the coin *is* two-tailed with a 100% confidence.

What was the mistake?

**dvasya** on Too good to be true · 2014-07-11T22:42:33.826Z · score: 2 (2 votes) · LW · GW

How does your choice of threshold (made beforehand) affect your actual data and the information about the actual phenomenon contained therein?

**dvasya** on Meetup : Houston, TX · 2014-07-11T20:32:54.411Z · score: 0 (0 votes) · LW · GW

A suggestion posted to the Google Group:

Another idea might be to decide on a few discussion topics ahead of each meetup, to allow some time to prepare, research, and think things over before discussing them with each other.

**dvasya** on Too good to be true · 2014-07-11T20:26:59.281Z · score: 4 (6 votes) · LW · GW

Also, different studies have different statistical power, so it may not be OK to simply add up their evidence with equal weights.

**dvasya** on Too good to be true · 2014-07-11T20:25:29.491Z · score: 10 (12 votes) · LW · GW

Was that "exactly 95% confidence" or "at least 95% confidence"?

**dvasya** on Meetup : Houston, TX · 2014-07-09T19:06:33.199Z · score: 2 (2 votes) · LW · GW

*(I highly recommend that everyone join the Google Group so that we can all communicate in a single place by email)*

Does anyone else feel like trying to get this meeting a little bit more structured?

For example, something as simple as brief but *prepared* self-introductions covering your interests (related or unrelated to LW) and anything else about yourself that you might consider worth a mention. We partially covered it last time but it was pretty chaotic.

Or maybe someone even wants to give a brief talk about something they find exciting. Back in the day Jon used to educate us in computational neuroscience, which was extremely interesting.

Also, on getting there:

The map in the post is not completely accurate; this is the actual location.

Parking on Main St (across from campus, the stretch from TMC to ZaZa).

**dvasya** on Meetup : Houston, TX · 2014-07-08T20:57:00.442Z · score: 1 (1 votes) · LW · GW

Oh yes, and last time somebody discovered that there's free parking on Main St across from campus (the stretch between Med Center and Hotel ZaZa).

**dvasya** on Meetup : Houston, TX · 2014-07-08T20:18:30.939Z · score: 0 (0 votes) · LW · GW

Hopefully, this time Valhalla should be open for, um, follow-up discussions. http://valhalla.rice.edu/

**dvasya** on The Power of Noise · 2014-06-18T02:44:31.948Z · score: 2 (2 votes) · LW · GW

It seems that in the rock-scissors-paper example the opponent is quite literally an adversarial superintelligence. They are more intelligent than you (at this game), and since they are playing against you, they are adversarial. The RCT example also has a lot of actors with different conflicts of interest, especially money- and career-wise, and some can come pretty close to adversarial.

**dvasya** on Meetup : Houston, TX · 2014-06-12T02:18:18.737Z · score: 1 (1 votes) · LW · GW

Free parking is available in the small streets across Rice Boulevard from the campus (north of it). This is also closer.

**dvasya** on Common sense quantum mechanics · 2014-05-20T16:04:28.867Z · score: 0 (0 votes) · LW · GW

Here are some nice arguments about different what-if/why-not scenarios, not fully rigorous but sometimes quite persuasive: http://www.scottaaronson.com/democritus/lec9.html

**dvasya** on Common sense quantum mechanics · 2014-05-19T19:27:03.053Z · score: 0 (0 votes) · LW · GW

I'm not sure if we can say much about a classical universe "in practice" because in practice we do not live in a classical universe. I imagine you could have perfect information if you looked at some simple classical universe from the outside.

For classical universes with complete information you have Newtonian dynamics. For classical universes with incomplete information about the state you can still use Newtonian dynamics but represent the state of the system with a probability distribution. This ultimately leads to (classical) statistical mechanics. For universes with incomplete information about the state *and* about its evolution ("category 3a" in the paper) you get quantum theory.

[Important caveat about classical statistical mechanics: it turns out to be a problem to formulate it without assuming some sort of granularity of phase space, which quantum theory provides. So it's all pretty intertwined.]

**dvasya** on Common sense quantum mechanics · 2014-05-19T16:58:01.084Z · score: 0 (0 votes) · LW · GW

Thanks! The list of assumptions seems longer than in the De Raedt *et al.* paper and you need to first postulate branching and unitarity (let's set aside how reasonable/justified this postulate is) in addition to rational reasoning. But it looks like you can get there eventually.

**dvasya** on Common sense quantum mechanics · 2014-05-19T16:38:56.905Z · score: 0 (0 votes) · LW · GW

Luke, please correct me if I'm misunderstanding something.

The rule follows directly if you require that the wavefunction behaves like a "vector probability". Then you look for a measure that behaves like probability should (basically, nonnegative and adding up to 1). And you find that for this the wavefunction should be complex-valued and the probability should be its squared amplitude. You can also show that anything "larger" than complex numbers (e.g. quaternions) will not work.
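Spelled out, the squared-amplitude measure in question is

```latex
P(k) = |\langle k \mid \psi \rangle|^2 = |c_k|^2,
\qquad
\sum_k |c_k|^2 = \langle \psi \mid \psi \rangle = 1
```

which is nonnegative by construction and sums to 1 whenever the wavefunction is normalized.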

But, as you said, the question is not how to derive the Born rule from "vector probability", but rather why would we make the connection of wavefunction with probability in the first place (and why the former should be vector rather than scalar). And in this respect I find the exposition that starts from probability and gets to the wavefunction very valuable.

**dvasya** on Common sense quantum mechanics · 2014-05-19T16:18:18.931Z · score: 1 (1 votes) · LW · GW

I certainly would not rule out number 5 ;) As for 3, the arguments seem to apply to any universe in which you can carry out a reproducible experiment. However, in a "classical universe" everything is, in principle, exactly knowable, and so you just don't *need* a probabilistic description.

Unless there is limited information, in which case you use statistical mechanics. With perfect information you know which microstate the system is in, the evolution is deterministic, there is no entropy (a macrostate concept), hence no second law, etc. Only when you have imperfect information -- an ensemble of possible microstates, a macrostate -- does mechanics "become" statistical.

Using probabilistic logic in a situation where classical logic applies is either overkill or underconfidence.

**dvasya** on Meetup : Houston, TX · 2014-05-18T18:03:25.340Z · score: 0 (0 votes) · LW · GW

Here is the Houston LW Google Group: https://groups.google.com/forum/#!forum/houston-lesswrong

**dvasya** on Common sense quantum mechanics · 2014-05-17T17:33:49.428Z · score: 0 (0 votes) · LW · GW

Can this argument be summarized in some condensed form? The paper is long.

**dvasya** on Common sense quantum mechanics · 2014-05-17T17:19:31.333Z · score: 0 (0 votes) · LW · GW

I'm not sure I understood you well, could you please elaborate? If the triggering of detectors depends only on the (known) positions of detectors then it seems your experiment should be well describable by classical logic.

**dvasya** on Common sense quantum mechanics · 2014-05-16T16:30:31.893Z · score: 0 (0 votes) · LW · GW

I guess one could argue that "bayesianism" (probability-as-logic) is testable practically and, indeed, well-tested by now. (But I still don't understand how raisin proposes to reject physics in favor of probability theory or vice versa.)

**dvasya** on Common sense quantum mechanics · 2014-05-16T16:21:36.774Z · score: 1 (1 votes) · LW · GW

I'm not sure "not-MWI" is a single coherent interpretation :) Under Copenhagen, for example, the Born rule has to be postulated. The present paper

> does not support the Copenhagen interpretation (in any form)

MWI also postulates it, see V_V's comment.

As for the paper's assumptions, they seem to be no different than the assumptions of normal probabilistic reasoning as laid out by Cox/Polya/Jaynes/etc., with all that ensues in regard to relevance.

(edit: formatting)

**dvasya** on Common sense quantum mechanics · 2014-05-16T16:04:57.441Z · score: 4 (4 votes) · LW · GW

In short, they mostly seem far-fetched to me, probably due to a superficial reading of the paper (as Mitchell_Porter admits). For example:

> I also noticed that the authors were talking about "Fisher information". This was unsurprising, there are other people who want to "derive physics from Fisher information"

The Fisher information in this paper arises automatically at some point and is only noted in passing. There is no more a derivation *from* Fisher information than there is one from the wavefunction.

> they describe something vaguely like an EPR experiment ... a similarly abstracted description of a Stern-Gerlach experiment

The vagueness and abstraction are required to (1) define the terms precisely and (2) do so under the most general conditions possible, i.e., with the minimum information sufficient to define the problem. This is completely in line with Jaynes' logic that the prior should include all the information that we have and no other information (the maximum entropy principle). If you have some more concrete information about the specific instance of the Stern-Gerlach experiment you are running then by all means you should include it in your probability assignment.

> They make many appeals to symmetry, e.g. ... that the experiment will behave the same regardless of orientation. Or ... translational invariance.

Again, a reader who is familiar with Jaynes will immediately recognize here the principle of transformation groups (extension of principle of indifference). If nothing about the problem changes upon translation/rotation then this fact must be reflected in the probability distribution.

> hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, and identify specific leaps of logic, and just what is actually going on in the paper

In fact, this is what I was trying to do here.

**dvasya** on Common sense quantum mechanics · 2014-05-15T23:41:43.254Z · score: 4 (6 votes) · LW · GW

Thank you. The title plays on the idea of deriving quantum mechanics from the rules of "common-sense" probabilistic reasoning. Suggestions for a better title are, of course, welcome.

In my view this is not so much "QM foundations" or "adding to physics" (one could argue it *takes away* from physics) as it is an interesting application of Bayesian inference, providing another example of its power. It is however interesting to discuss it in the context of MWI which is a relatively big thing for some here on Less Wrong.

Regarding testability I'm reminded of the recent discussion at Scott Aaronson's blog: http://www.scottaaronson.com/blog/?p=1653

**dvasya** on [Link] Quantum theory as the most robust description of reproducible experiments · 2014-05-15T20:38:37.853Z · score: 5 (5 votes) · LW · GW

Here's a condensed summary of the paper's main points:

http://lesswrong.com/r/discussion/lw/k88/common_sense_quantum_mechanics/

**dvasya** on Common sense quantum mechanics · 2014-05-15T20:29:39.621Z · score: 1 (1 votes) · LW · GW

Thanks. Took me a while to write the post.

**dvasya** on Meetup : Houston, TX · 2014-05-09T16:22:03.274Z · score: 0 (0 votes) · LW · GW

Hm. So no time travelers here. (I'm pretty sure it used to say the 3rd before though...)

I'll try to make it.

**dvasya** on Meetup : Houston, TX · 2014-05-07T04:05:03.690Z · score: 0 (0 votes) · LW · GW

So how did it go?

**dvasya** on Meetup : Houston, TX · 2014-05-02T16:36:54.317Z · score: 1 (1 votes) · LW · GW

Can't make it this Saturday but will try next time!

**dvasya** on Book Review: Linear Algebra Done Right (MIRI course list) · 2014-02-17T21:41:50.216Z · score: 3 (3 votes) · LW · GW

Alexander or Axler?

**dvasya** on Dark Arts of Rationality · 2014-01-23T18:23:32.744Z · score: 0 (0 votes) · LW · GW

I'd say that all points are too long by themselves, so if you split the post into several they will still be too long.

**dvasya** on A proposed inefficiency in the Bitcoin markets · 2013-12-27T21:48:18.497Z · score: 3 (3 votes) · LW · GW

The point can be formulated even stronger: An additive random walk *will* go negative.
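A quick simulation makes this vivid (a sketch with arbitrary parameters: unit steps and a starting balance of 10):

```python
import random

def frac_negative(n_walks=500, horizon=10000, start=10.0, seed=0):
    """Fraction of additive +/-1 random walks started at `start`
    that dip below zero within `horizon` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        x = start
        for _ in range(horizon):
            x += rng.choice((-1.0, 1.0))
            if x < 0.0:
                hits += 1
                break
    return hits / n_walks

# The hit fraction creeps toward 1 as the horizon grows: a symmetric
# additive walk is recurrent, so it crosses any fixed level -- including
# zero -- with probability 1 given enough time.
for horizon in (100, 1000, 10000):
    print(horizon, frac_negative(horizon=horizon))
```

A multiplicative (geometric) walk, by contrast, stays positive forever, which is why the additive model is the wrong null for a price series.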