Children now aren't necessarily mutually exclusive with children in the future. You're not creating disutility by starting now and then "upping your game" when technology is more accessible!
Of course the right thing to do is to pull the lever. And the right time to do that is once the trolley's front wheels have passed the switch but the rear wheels haven't yet. The trolley gets derailed, saving all 6 lives.
Aside from basic math (calculus, linear algebra, probability, ODEs, all with proofs), take courses in topics that feel interesting to you just by themselves. Don't count on the things you learn being actually useful in real life, and accordingly don't try to prioritize courses by that metric. You'll learn what you need for your job on your own or be taught on the job anyway, so instead spend this time building up an inventory of things to draw upon for useful metaphors. It's easier to learn what's intrinsically interesting, so you'll end up learning more. For real-world skills, do some academic research projects and industry internships.
This is the correct answer to the question. Bell and CHSH and all that are remarkable but more complicated setups. This property (entanglement in whichever basis you end up measuring your particle in, a basis not known at the time of state preparation) is what's salient about the simple two-particle setup.
As I've argued previously, a natural selection process maps cleanly onto RL in the limit.
The URL is broken (points to edit page)
Regarding safer assets: when you put your money into a savings account (i.e. loan it to the bank), what is the bank to do with it? Presumably it has promised you interest. Or if you buy treasuries, someone must have sold them to you; what do they do now with all the cash? Just because you personally didn't put your money into stocks doesn't mean nobody else downstream from you did.
And because most securities aren't up for sale at any given time, a small fraction of market participants can have outsized effects on prices. Consider oil back in April: sure, the "prices" turned "negative" when a few poor suckers realized they had forgotten to roll their futures to the next month back when everybody else did and could get stuck with physical delivery, but how many barrels' worth of contracts actually changed hands at those prices?
Not sure how this would support the OP's point specifically, but just wanted to point out that 1%-level things can sometimes have large manifestations in "prices", just because liquidity is finite.
Here's a (high) schools data point... https://twitter.com/EricTopol/status/1266976828549238785?s=19
Regarding HCQ, the recent large-N studies were observational, and it looks like patients there were given HCQ late and only if they were relatively sicker. Using it early on could still work (but now there won't be an RCT for that, thanks to numerous delendae).
Regarding schools, have the countries that already reopened them fared particularly worse?
A while back TinyCast seemed pretty friendly: https://tinycast.cultivateforecasts.com/questions/new
That's terrible news! It means that on top of the meager coronavirus there's another unidentified disease overcrowding the hospitals, causing respirator shortages all over the world, and threatening to kill millions of people!
> The idea of “flattening the curve” is the worst, as it assumes a large number of infections AND a large number of virus generation AND high selective pressure
Flattening _per se_ doesn't affect the evolution of the virus much. It doesn't evolve on a time grid but on an event grid, where an event is the virus spreading from one person to another. As long as it spreads the same number of times it will have the same number of opportunities to evolve.
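To make the event-grid point concrete, here's a toy sketch (my own illustration; the transmission count and the per-event mutation probability are made-up numbers):

```python
import random

# Mutations arise per transmission event, not per unit of calendar time.
# Stretching the same number of transmissions over a longer period
# ("flattening") leaves the expected number of mutation opportunities unchanged.

def mutations_after(n_transmissions, p_mut_per_event=0.01):
    """Count mutation events over a chain of transmissions."""
    return sum(random.random() < p_mut_per_event for _ in range(n_transmissions))

fast = mutations_after(100_000)  # 100k transmissions packed into a few months
slow = mutations_after(100_000)  # the same 100k transmissions spread over a year
print(fast, slow)                # statistically indistinguishable
```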
"Overreacting to underestimates" - great way of putting it!
Fewer waiting lines?
Congratulations!
If you're trying to be homo economicus and maximize your expected utility, probably it's not worth it. But if you're not, you can still do it! We did (blood and tissue).
- How valuable are the stem cells right now?
  - Not very valuable.
- And how valuable are they expected to be in the future?
  - Very valuable, but that's been the expectation for a long time and yet here we are.
- How hard is it to get stem cells for yourself / your child right now vs in the future?
  - Anything you harvest later on will have had more cellular divisions in its history, so in some (not yet practical) sense this opportunity is unique.
- Will the collected stem cells be only useful for the baby or the mother too?
  - Right now probably neither, but the kind of application you might hope for in the future would be benefiting the baby.
- Can we reasonably expect the cryo companies to last long enough and not go under?
  - Sure, but if they have an accident you're also not losing much expected utility, either.
- Have you had experience donating it?
  - No.
- Have you had experience storing it?
  - Yes.
I don't see how it would explain double descent on training time. This would imply that gradient descent on neural nets first has to memorize noise in one particular way, and then further training "fixes" the weights to memorize noise in a different way that generalizes better.
For example, the (random, meaningless) weights used to memorize noise can get spread across more degrees of freedom, so that on the test set their sum will be closer to 0.
The 5nm in "5nm scale" no longer means "things are literally 5nm in size". Rather, it's become a fancy way of saying something like "200x the linear transistor density of an old 1-micron scale chip". The gates are still larger than 5nm; it's just that things are now getting put on their side to make more room ( https://en.wikipedia.org/wiki/FinFET ). Some chip measures sure are slowing down, but Moore's law (referring to the number of transistors per chip and nothing else) still isn't one of them, despite claims of impending doom due to "quantum effects" dating back (IIRC) to the eighties.
I know some people who (at least used to) maintain a group pool of cash to fund the preservation of whoever died first (at which point the pool would need to be refilled). So if you're unlucky enough to be the first of the N participants to die, you only pay 1/N of the full price, and if you're lucky (last to die) you eventually end up paying a few times the full price, but at least you get more time to earn the money. Not sure how it was all structured legally. Of course if you're really pressed for time it may be hard to convince other people to enter such an arrangement.
Fundraisers have helped in the past: https://alcor.org/Library/html/casesummary2643.html - although that one fell quite short of the sticker price, and ultimately Alcor had to foot most of the bill anyway.
There aren't that "many" other companies. Talk to KrioRus, I know they explored setting up a cryonics facility in Switzerland at some point.
I'm pretty sure (epistemic status: Good Judgment Project Superforecaster) the "AI" in the name is pure buzz and the underlying aggregation algorithm is something very simple. If you want to set up some quick group predictions for free, there's https://tinycast.cultivatelabs.com/ which has a transparent and battle-tested aggregation mechanism (LMSR prediction markets) and doesn't use catchy buzzwords to market itself. For other styles of aggregation there's "the original" Good Judgment Inc, a spinoff from GJP which actually ran an aggregation algorithm contest in parallel with the forecaster contest (somehow no "AI" buzz either). They are running a public competition at https://www.gjopen.com/ where anyone can forecast and get scored, but if you want to ask your own questions that's a bit more expensive than Swarm. Unfortunately there doesn't seem to be a good survey-style group forecasting platform out in the open. But that's fine, TinyCast is adequate as long as you read their LMSR algorithm intro.
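For the curious, here is roughly how little machinery LMSR needs. This is a generic sketch of Hanson's logarithmic market scoring rule, not TinyCast's actual code; the liquidity parameter b and the function names are mine:

```python
import math

def cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def prices(q, b=100.0):
    """Instantaneous prices, i.e. the market's implied probabilities."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def cost_to_buy(q, outcome, shares, b=100.0):
    """What a trader pays to move the market by buying `shares` of `outcome`."""
    q_new = list(q)
    q_new[outcome] += shares
    return cost(q_new, b) - cost(q, b)

q = [0.0, 0.0]                  # a fresh binary question starts at 50/50
print(prices(q))                # [0.5, 0.5]
print(cost_to_buy(q, 0, 50.0))  # buying "yes" shares pushes its probability up
```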
The books are marketed as "hard" sci-fi but it seems all the "science" (at least in the first book, didn't read the others) is just mountains of mysticism constructed around statements that can sound "deep" on some superficial level but aren't at all mysterious, like "three-body systems interacting via central forces are generally unstable" or "you can encode some information into the quantum state of a particle" (yet of course they do contain nuance that's completely lost on the author, such as "what if two of the particles are heavy and much closer to each other than to the third?", or "which basis do you want to measure the state of your particle in?"). Compare to the Puppeteers' homeworld from the Ringworld series (yes, cheesy, but still...)
(epistemic status: physicist, do simulations for a living)
> Our long-term thermodynamic model Pn is less accurate than a simulation
I think it would be fair to say that the Boltzmann distribution and your instantiation of the system contain not more/less but _different kinds of_ information.
Your simulation (assume infinite precision for simplicity) is just one instantiation of a trajectory of your system. There's nothing stochastic about it; it's merely an internally consistent static set of configurations, connected to each other by deterministic equations of motion.
The Boltzmann distribution is [the mathematical limit of] the distribution you will be sampling from if you evolve your system under a certain set of conditions (which hold to a very good approximation for a very wide variety of physical systems). Boltzmann tells you how likely you would be to encounter a specific configuration in a run that satisfies those conditions.
I suppose you could say that the Boltzmann distribution is less *precise* in the sense that it doesn't give you a definite Boolean answer whether a certain configuration will be visited in a given run. On the other hand a finite number of runs is necessarily less *accurate* viewed as a sampling of the system's configurational space.
> we can't run simulations for a long time, so we have to make do with the Boltzmann distribution
...and on the third hand, usually even for a simple system like a few-atom molecule the dimensionality of the configurational space is so enormous anyway that you have to resort to some form of sampling (propagation of equations of motion is one option) in order to calculate your partition function (the normalizing factor in the Boltzmann distribution). Yes that's right, the Boltzmann distribution is actually *terribly expensive* to compute for even relatively simple systems!
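To make the "sampling instead of computing Z" point concrete, here's a toy sketch of my own (not from the discussion above): estimating a Boltzmann-weighted average for a 1D double-well potential with Metropolis Monte Carlo, where only energy differences ever appear and the partition function is never computed. The potential, temperature, and step size are invented for illustration.

```python
import math
import random

def U(x):
    """Toy double-well potential U(x) = (x^2 - 1)^2."""
    return (x * x - 1.0) ** 2

def metropolis(n_steps=100_000, kT=0.3, step=0.5):
    """Sample the Boltzmann distribution exp(-U(x)/kT) without ever computing Z."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_trial = x + random.uniform(-step, step)
        # Accept with probability min(1, exp(-dU/kT)); only energy *differences* matter.
        if random.random() < math.exp(-(U(x_trial) - U(x)) / kT):
            x = x_trial
        samples.append(x)
    return samples

samples = metropolis()
print(sum(s * s for s in samples) / len(samples))  # Boltzmann average of x^2
```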
Hope these clarifications of your metaphor also help refine the chess part of your dichotomy! :)
(the paper: https://journals.aps.org/pr/abstract/10.1103/PhysRev.106.620)
There's nothing magical about reversing particle speeds. For entropy to decrease to the original value you would have to know and be able to change the speeds with perfect precision, which is of course meaningless in physics. If you get it even the tiniest bit off you might expect _some_ entropy decrease for a while but inevitably the system will go "off track" (in classical chaos the time it's going to take is only logarithmic in your precision) and onto a different increasing-entropy trajectory.
Jaynes' 1957 paper has a nice formal explanation of entropy vs. velocity reversal.
> design the AI in such a way that it can create agents, but only
This sort of argument would be much more valuable if accompanied by a specific recipe for how to do it, or at least a proof that one must exist. Why worry about the AI designing agents at all; why not just "design it in such a way" that it's already Friendly!
I agree, it did seem like one of the more-unfinished parts. Still, perhaps a better starting point than nothing at all?
Check the chapter on the A_p distribution in Jaynes' book.
> Losing a typical EA ... decreasing ~1000 utilons to ~3.5, so a ~28500% reduction per person lost.
You seem to be exaggerating a bit here: going from ~1000 to ~3.5 is a ~99.65% reduction (1000/3.5 ≈ 285, which is probably where the 28500% figure came from). Hope it's the only inaccuracy in your estimates!
The main problem with quotes found on the Internet is that everyone immediately believes their authenticity.
-- Vladimir I. Lenin
Here's another excellent book roughly from the same time: "The Phenomenon of Science" by Valentin F. Turchin (http://pespmc1.vub.ac.be/posbook.html). It starts from largely similar concepts and proceeds through the evolution of the nervous system to language to math to science. I suspect it may be even more AI-relevant than Powers.
Hi shminux. Sorry, just saw your comment. We don't seem to have a date set for November yet, but let me check with the others. Typically we meet on Saturdays, are you still around on the 22nd? Or we could try Sunday the 16th. Let me know.
The Planning Fallacy explanation makes a lot of sense.
I hope it's not really at 2AM.
While the situation admittedly is oversimplified, it does have the advantage that anyone can replicate it exactly at a very moderate expense (a two-headed coin will also do, with a minimum amount of caution). In that respect it may actually be more relevant to the real world than any vaccine/autism study.
Indeed, every experiment should yield a pretty high confidence (though never exactly 100%), but what gets reported is not the actual p-value but whether it comes in below .05 (an arbitrary threshold proposed once by Fisher, who never intended it to play the role it currently plays in science, but meant it merely as a rule of thumb for whether a hypothesis is worth a follow-up at all). But even the exact p-values refer to only one possible type of error, and the probability of the other is generally not (1-p), much less (1-alpha).
(1) is obvious, of course, in hindsight. However, changing your confidence level after the observation is generally advised against. But (2) seems to be confusing Type I and Type II error rates.
On another level, I suppose it can be said that of course they are all biased! But by the actual two-tailed coin, rather than by the researchers' prejudice against normal coins.
> Treating ">= 95%" as "= 95%" is a reasoning error
Hence my question in another thread: Was that "exactly 95% confidence" or "at least 95% confidence"? However, when researchers say "at a 95% confidence level" they typically mean "p < 0.05", and reporting the actual p-values is often even explicitly discouraged (let's not digress into whether that is justified).
Yet the mistake I had in mind (as opposed to other, less relevant, merely "a" mistakes) involves Type I and Type II error rates. Just because you are 95% (or more) confident of not making one type of error doesn't guarantee you an automatic 5% chance of getting the other.
Well, perhaps a bit too simple. Consider this. You set your confidence level at 95% and start throwing a coin. You observe 100 tails out of 100. You publish a report saying "the coin has tails on both sides at a 95% confidence level" because that's what you chose during design. Then 99 other researchers repeat your experiment with the same coin, arriving at the same 95%-confidence conclusion. But you would expect to see about 5 reports claiming otherwise! The paradox is resolved when somebody comes up with a trick using a mirror to observe both sides of the coin at once, finally concluding that the coin is two-tailed with a 100% confidence.
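A quick back-of-the-envelope version of why no dissenting reports should be expected (my own numbers, just spelling out the arithmetic):

```python
# Under the null hypothesis of a fair coin, 100 tails out of 100 gives
p_value = 0.5 ** 100
print(p_value)  # ~7.9e-31, astronomically below the 0.05 threshold

# The "about 5 out of 100 reports should disagree" intuition assumes the null
# is true and the test's false-positive rate is exactly 5%. With a genuinely
# two-tailed coin every replication sees 100 tails with probability 1, so the
# expected number of dissenting reports is
print(100 * (1 - 1))  # 0, not 5
```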
What was the mistake?
How does your choice of threshold (made beforehand) affect your actual data and the information about the actual phenomenon contained therein?
A suggestion posted to the Google Group:
Another idea might be to decide ahead of each meetup on a few topics for discussion, to allow some time to prepare, research, and think about things before discussing them with each other.
Also, different studies have different statistical power, so it may not be OK to simply add up their evidence with equal weights.
Was that "exactly 95% confidence" or "at least 95% confidence"?
(I highly recommend that everyone join the Google Group so that we can all communicate in a single place by email)
Does anyone else feel like trying to get this meeting a little bit more structured?
For example, something as simple as brief but prepared self-introductions covering your interests (related or unrelated to LW) and anything else about yourself that you might consider worth a mention. We partially covered it last time but it was pretty chaotic.
Or maybe someone even wants to give a brief talk about something they find exciting. Back in the day Jon used to educate us in computational neuroscience, which was extremely interesting.
Also, on getting there:
The map in the post is not completely accurate; this is the actual location.
> Parking on Main St (across from campus, from TMC to ZaZa)
Oh yes, and last time somebody discovered that there's free parking on Main St across from campus (the stretch between Med Center and Hotel ZaZa).
Hopefully, this time Valhalla should be open for, um, follow-up discussions. http://valhalla.rice.edu/
It seems that in the rock-scissors-paper example the opponent is quite literally an adversarial superintelligence. They are more intelligent than you (at this game), and since they are playing against you, they are adversarial. The RCT example also has a lot of actors with different conflicts of interests, especially money- and career-wise, and some can come pretty close to adversarial.
Free parking is available in the small streets across Rice Boulevard from the campus (north of it). This is also closer.
Here are some nice arguments about different what-if/why-not scenarios, not fully rigorous but sometimes quite persuasive: http://www.scottaaronson.com/democritus/lec9.html
I'm not sure if we can say much about a classical universe "in practice" because in practice we do not live in a classical universe. I imagine you could have perfect information if you looked at some simple classical universe from the outside.
For classical universes with complete information you have Newtonian dynamics. For classical universes with incomplete information about the state you can still use Newtonian dynamics but represent the state of the system with a probability distribution. This ultimately leads to (classical) statistical mechanics. For universes with incomplete information about the state and about its evolution ("category 3a" in the paper) you get quantum theory.
[Important caveat about classical statistical mechanics: it turns out to be a problem to formulate it without assuming some sort of granularity of phase space, which quantum theory provides. So it's all pretty intertwined.]
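As a toy illustration of the middle case (Newtonian dynamics plus incomplete information about the state), here's a sketch of my own, not anything from the De Raedt et al. paper; the oscillator, the initial uncertainty, and the integrator settings are all made up:

```python
import random

def leapfrog_step(x, v, dt=0.01, k=1.0, m=1.0):
    """One step of plain Newtonian dynamics for a harmonic potential U(x) = k x^2 / 2."""
    v += -(k / m) * x * dt / 2
    x += v * dt
    v += -(k / m) * x * dt / 2
    return x, v

# Incomplete knowledge of the initial state: a cloud of possible (x, v) pairs.
ensemble = [(random.gauss(1.0, 0.1), random.gauss(0.0, 0.1)) for _ in range(1000)]

# Every member evolves deterministically; only our description is probabilistic.
for _ in range(1000):
    ensemble = [leapfrog_step(x, v) for x, v in ensemble]

# The statistics of the ensemble, not any single trajectory, are what
# classical statistical mechanics is about.
print(sum(x for x, _ in ensemble) / len(ensemble))  # mean position
```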
Thanks! The list of assumptions seems longer than in the De Raedt et al. paper and you need to first postulate branching and unitarity (let's set aside how reasonable/justified this postulate is) in addition to rational reasoning. But it looks like you can get there eventually.