Posts
Comments
Ah, my "what do you mean" may have been unclear. I think you took it as, like, "what is the thing that Kelly instructs?" But what I meant is "what do you mean when you say that Kelly instructs this?" Like, what is this "Kelly" and why do we care what it says?
That said, I do agree this is a broadly reasonable thing to be doing. I just wouldn't use the word "Kelly", I'd talk about "maximizing expected log money".
But it's not what you're doing in the post. In the post, you say "this is how to mathematically determine if you should buy insurance". But the formula you give assumes bets come one at a time, even though that doesn't describe insurance.
The probability should be given as 0.03 -- that might reduce your confusion!
Aha! Yes, that explains a lot.
I'm now curious if there's any meaning to the result I got. Like, "how much should I pay to insure against an event that happens with 300% probability" is a wrong question. But if we take the Kelly formula and plug in 300% for the probability we get some answer, and I'm wondering if that answer has any meaning.
I disagree. Kelly instructs us to choose the course of action that maximises log-wealth in period t+1 assuming a particular joint distribution of outcomes. This course of action can by all means be a complicated portfolio of simultaneous bets.
But when simultaneous bets are possible, the way to maximize expected log wealth won't generally be "bet the same amounts you would have done if the bets had come one at a time" (that's not even well specified as written), so you won't be using the Kelly formula.
(You can argue that this is still, somehow, Kelly. But then I'd ask "what do you mean when you say this is what Kelly instructs? Is this different from simply maximizing expected log wealth? If not, why are we talking about Kelly at all instead of talking about expected log wealth?")
It's not just that "the insurance calculator does not offer you the interface" to handle simultaneous bets. You claim that there's a specific mathematical relationship we can use to determine if insurance is worth it; and then you write down a mathematical formula and say that insurance is worth it if the result is positive. But this is the wrong formula to use when bets are offered simultaneously, which in the case of insurance they are.
This is where reinsurance and other non-traditional instruments of risk trading enter the picture.
I don't think so? Like, in real world insurance they're obviously important. (As I understand it, another important factor in some jurisdictions is "governments subsidize flood insurance.") But the point I was making, that I stand behind, is
- Correlated risk is important in insurance, both in theory and practice
- If you talk about insurance in a Kelly framework you won't be able to handle correlated risk.
If one donates one's winnings then one's bets no longer compound and the expected profit is a better guide than expected log wealth -- we agree.
(This isn't a point I was trying to make and I tentatively disagree with it, but probably not worth going into.)
Whether or not to get insurance should have nothing to do with what makes one sleep – again, it is a mathematical decision with a correct answer.
I'm not sure how far in your cheek your tongue was, but I claim this is obviously wrong and I can elaborate if you weren't kidding.
I'm confused by the calculator. I enter wealth 10,000; premium 5,000; probability 3; cost 2,500; and deductible 0. I think that means: I should pay $5000 to get insurance. 97% of the time, it doesn't pay out and I'm down $5000. 3% of the time, a bad thing happens, and instead of paying $2500 I instead pay $0, but I'm still down $2500. That's clearly not right. (I should never put more than 3% of my net worth on a bet that pays out 3% of the time, according to Kelly.) Not sure if the calculator is wrong or I misunderstand these numbers.
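For what it's worth, here's a sketch of the comparison I'd expect the calculator to be making, using the numbers above but with the probability as 0.03. The function name and interface are my own guesses, not the calculator's:

```python
import math

def expected_log_wealth(wealth, premium, p, cost, deductible, insured):
    """Expected log wealth with or without a simple insurance contract."""
    if insured:
        # Pay the premium up front; if the bad event happens, pay only the deductible.
        return ((1 - p) * math.log(wealth - premium)
                + p * math.log(wealth - premium - deductible))
    return (1 - p) * math.log(wealth) + p * math.log(wealth - cost)

# Wealth 10,000; premium 5,000; p 0.03; cost 2,500; deductible 0:
with_ins = expected_log_wealth(10_000, 5_000, 0.03, 2_500, 0, insured=True)
without = expected_log_wealth(10_000, 5_000, 0.03, 2_500, 0, insured=False)
print(with_ins < without)  # True: insuring loses expected log wealth here
```

On these numbers the comparison agrees with the intuition in the comment: a $5000 premium against a 3%-likely $2500 loss is clearly not worth it.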
Kelly is derived under a framework that assumes bets are offered one at a time. With insurance, some of my wealth is tied up for a period of time. That changes which bets I should accept. For small fractions of my net worth and small numbers of bets that's probably not a big deal, but I think it's at least worth acknowledging. (This is the only attempt I'm aware of to add simultaneous bets to the Kelly framework, and I haven't read it closely enough to understand it. But there might be others.)
There's a related practical problem that a significant fraction of my wealth is in pensions that I'm not allowed to access for 30+ years. That's going to affect what bets I can take, and what bets I ought to take.
The reason all this works is that the insurance company has way more money than we do. ...
I hadn't thought of it this way before, but it feels like a useful framing.
But I do note that, there are theoretical reasons to expect flood insurance to be harder to get than fire insurance. If you get caught in a flood your whole neighborhood probably does too, but if your house catches fire it's likely just you and maybe a handful of others. I think you need to go outside the Kelly framework to explain this.
I have a hobby horse that I think people misunderstand the justifications for Kelly, and my sense is that you do too (though I haven't read your more detailed article about it), but it's not really relevant to this article.
I think the thesis is not "honesty reduces predictability" but "certain formalities, which preclude honesty, increase predictability".
I kinda like this post, and I think it's pointing at something worth keeping in mind. But I don't think the thesis is very clear or very well argued, and I currently have it at -1 in the 2023 review.
Some concrete things.
- There are lots of forms of social grace, and it's not clear which ones are included. Surely "getting on the train without waiting for others to disembark first" isn't an epistemic virtue. I'd normally think of "distinguishing between map and territory" as an epistemic virtue but not particularly a social grace, but the last two paragraphs make me think that's intended to be covered. Is "when I grew up, weaboo wasn't particularly offensive, and I know it's now considered a slur, but eh, I don't feel like trying to change my vocabulary" an epistemic virtue?
- Perhaps the claim is only meant to be that lack of "concealing or obfuscating information that someone would prefer not to be revealed" is an epistemic virtue? Then the map/territory stuff seems out of place, but the core claim seems much more defensible.
- "Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace." Let's limit this to the social graces that are epistemically harmful. Still, I don't see how this follows.
- Idealized honest Bayesian reasoners wouldn't need to stop and pause to think, but a human trying to imitate one will need to do that. A human getting closer in some respects to an idealized honest Bayesian reasoner might need to spend more time thinking.
- And, where does "bare minimum" come from? Why will these humans do approximately-none-at-all of the thing, rather than merely less-than-maximum of it?
- I do think there's something awkward about humans-imitating-X, in pursuit of goal Y that X is very good at, doing something that X doesn't do because it would be harmful to Y. But it's much weaker than claimed.
- There's a claim that "distinguishing between the map and the territory" is distracting, but as I note here it's not backed up.
- I note that near the end we have: "If the post looks lousy, say it looks lousy. If it looks good, say it looks good." But of course "looks" is in the map. The Feynman in the anecdote seems to have been following a different algorithm: "if the post looks [in Feynman's map, which it's unclear if he realizes is different from the territory] lousy, say it's lousy. If it looks [...] good, say it's good."
- Vaniver and Raemon point out something along the lines of "social grace helps institutions persevere". Zack says he's focusing on individual practice rather than institution-building. But both his anecdotes involve conversations. It seems that Feynman's lack of social grace was good for Bohr's epistemics... but that's no help for Feynman's individual practice. Bohr appreciating Feynman's lack of social grace seems to have been good for Feynman's ability-to-get-close-to-Bohr, which itself seems good for Feynman's epistemics, but that's quite different.
- Oh, elsewhere Zack says "The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes", which doesn't sound like it's focusing on individual practice?
- Hypothesis: when Zack wrote this post, it wasn't very clear to himself what he was trying to focus on.
Man, this review kinda feels like... I can imagine myself looking back at it two years later and being like "oh geez that wasn't a serious attempt to actually engage with the post, it was just point scoring". I don't think that's what's happening, and that's just pattern matching on the structure or something? But I also think that if it was, it wouldn't necessarily feel like it to me now?
It also feels like I could improve it if I spent a few more hours on it and re-read the comments in more detail, and I do expect that's true.
In any case, I'm pretty sure both [the LW review process] and [Zack specifically] prefer me to publish it.
Ooh, I didn't see the read filter. (I think I'd have been more likely to if that were separated from the tabs. Maybe like, [Read] | [AI 200] [World Modeling 83] [Rationality 78] ....) With that off it's up to 392 nominated, though still neither of the ones mentioned. Quick review is now down to 193; my current guess is that's "posts that got through to this phase that haven't been reviewed yet"?
Screenshot with the filter off:
and some that only have one positive review:
Btw, I'm kinda confused by the current review page. A tooltip on advanced voting says
54 have received at least one Nomination Vote
Posts need at least 2 Nomination Votes to proceed to the Review Phase
And indeed there are 54 posts listed and they all have at least one positive vote. But I'm pretty sure this and this both had at least one (probably exactly one) positive vote at the end of the nomination phase and they aren't listed.
Guess: this is actually listing posts which had at least two positive votes at the end of the nomination phase; the posts with only one right now had two at the time?
...but since I started writing this comment, the 54 has gone up to 56, so there must be some way for posts to join it, but I don't have a guess what it would be.
And then the quick review tab lists 194 posts. I'm not sure what the criteria for being included on it are. It seems I can review and vote on each of them, where I can't do that for the two previous posts, so again there must be some criteria, but I don't have a guess what they are.
I think it's good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven't fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that.
I don't think it's a good fit for the 2023 review. Unless Ziz gets back in the news, there's not much reason for someone in 2025 or later to be reading this.
If I was going to recommend it, I think the reason would be some combination of
- This is a good example of investigative journalism, and valuable to read as such.
- It's a good case study of a certain type of person that it's important to remember exists.
But I don't think it stands out as a case study (it's not trying to answer questions like "how did this person become Ziz"), and I weakly guess it doesn't stand out as investigative journalism either. E.g. when I'm thinking on these axes, TracingWoodgrains on David Gerard feels like the kind of thing I'd recommend above this.
Which, to be clear, is not a slight on this post! I think it does what it wanted to do very well, and what it wants to do is valuable; it's just not the kind of thing that I think the 2023 review is looking to reward.
Self review: I really like this post. Combined with the previous one (from 2022), it feels to me like "lots of people are confused about Kelly betting and linear/log utility of money, and this deconfuses the issue using arguments I hadn't seen before (and still haven't seen elsewhere)". It feels like small-but-real intellectual progress. It still feels right to me, and I still point people at this when I want to explain how I think about Kelly.
That's my inside view. I don't know how to square that with the relative lack of attention the post got, and it feels weird to be writing it given that fact, but oh well. There are various stories I could tell: maybe people were less confused than I thought; maybe my explanation is unclear; maybe I'm still wrong on the object level; maybe people just don't care very much; maybe it just happened not to get seen.
If I were writing this today, my guess is:
- It's worth combining the two posts into one.
- The rank optimization stuff is fine to cut, given that I tentatively propose it in one post and then in the next say "probably not very useful". Maybe have a separate post for exploring it. No need to go into depth on "extending Kelly outside its original domain".
- The charity stuff might also be fine to cut. At any rate it's not a focus.
- Someone sent me an example function satisfying the "I'm pretty sure yes" criteria, so that can be included.
- Not sure if this belongs in the same place, but I'd still like to explore more the "what if your utility function is such that maximizing expected utility at time t doesn't maximize expected utility at time t+1?" thing. (I thought I wrote this in the post somewhere, but can't see it: the way I'd explore this is from the perspective of "a utility function is isomorphic to a description of betting preferences that satisfy certain constraints, so when we talk about a utility function like that, what betting preferences are we talking about?" Feels like the kind of thing someone's likely already explored, but I haven't seen it if so.)
(At least in the UK, numbers starting 077009 are never assigned. So I've memorized a fake phone number that looks real, that I sometimes give out with no risk of accidentally giving a real phone number.)
Okay. Make it £5k from me (currently ~$6350), that seems like it'll make it more likely to happen.
If you can get it set up before March, I'll donate at least £2000.
(Though, um. I should say that at least one time I've been told "the way to donate with gift aid is to set up an account with X, tell them to send the money to Y, and Y will pass it on to us", and the first step in the chain there had very high transaction fees and I think might have involved sending an email... historical precedent suggests that if that's the process for me to donate to Lightcone, it might not happen.)
Do you know what rough volume you'd need to make it worthwhile?
I don't know anything about the card. I haven't re-read the post, but I think the point I was making was "you haven't successfully argued that this is good cost-benefit", not "I claim that this is bad cost-benefit". Another possibility is that I was just pointing out that the specific quoted paragraph had an implied bad argument, but I didn't think it said much about the post overall.
My guess: [signalling] is why some people read the Iliad, but it's not the main thing that makes it a classic.
Incidentally, there was one reddit comment that pushed me slightly in the direction of "yep, it's just signalling".
This was obviously not the intended point of that comment. But (ignoring how they misunderstood my own writing), the user
- Quotes multiple high status people talking about the Iliad;
- Tantalizingly hints that they are widely-read enough to be able to talk in detail about the Iliad and the old testament, and compare translations;
- Says approximately nothing about the Iliad;
- And says nothing at all about why they think the Iliad is good, and nor do roughly 3/4 of the people they quote. (Frye explains why it's important, but that's different. The last 6 lines of Keats talk about how Keats reacted to it, but that doesn't say what's good about it. Borges says a particular line is more beautiful than some other line (I think both lines are fine). Only Santayana tells me what he thinks is good about the Iliad.)
So like, you're trying to convince me the Iliad isn't just signalling by quoting Keats, saying essentially "I'd heard the Iliad was so good, but it took me forever to track down a copy[1]. When I did? Blew my mind, man. Blew my mind." Nonspecific praise feels like signalling, appeal to authority feels like signalling, and here's an authority giving nonspecific praise? This just really solidly rings my signalling bells, you know?
[1] I misunderstood Keats when I first replied to the comment. I'd assumed that when he said he "heard Chapman speak out loud and bold", he had, you know, heard someone named Chapman speak, perhaps loudly and boldly. Apparently it was what is called a "metaphor", and he had actually just read Chapman's translation.
Complex Systems (31 Oct 2024): From molecule to medicine, with Ross Rheingans-Yoo
When you first do human studies with a new drug, there's something like a 2/3 chance it'll make it to the second round of studies. Then something like half of those make it to the next round; and there's a point where you talk to the FDA and say "we're planning to do this study" and they say "cool, if you do that and get these results you'll probably be approved" and then in that case there's like an 85% chance you'll be approved; and I guess at least one other filter I'm forgetting. Overall something like 10-15% of drugs that start on this pipeline get approved, typically taking at least 7 years.
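Multiplying the stage rates together as recalled (a rough sketch, not official figures):

```python
# Stage-success rates as quoted in the recap above.
p_second_round = 2 / 3   # first human studies -> second round of studies
p_next_round = 1 / 2     # second round -> next round
p_approval = 0.85        # after the FDA's "do that study and you'll probably be approved" point

overall = p_second_round * p_next_round * p_approval
print(f"{overall:.0%}")  # 28%: the forgotten filter(s) must roughly halve this to reach 10-15%
```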
A drug that gets approved needs to make about $2 billion, to make up for the costs of all those trials plus the trials for the drugs that didn't get approved. And it has about 10 years to do that before patent protections expire, because you filed the patent before doing the first human studies and you only get 20 years from that point.
Typically what happens is someone forms a company for a specific drug, and while it's in fairly early trials the company gets bought by a big pharma company. The trials themselves are done by companies that specialize in running clinical trials.
Ross says Thalidomide was sort of the middle of a story. The story started with Upton Sinclair's The Jungle, which he wrote as a "look at the horrible conditions meat packers have to endure", but what the public took from it was "excuse me, there are human fingers in my sausages?" So after that came the Pure Food and Drug Act, which said that anything had to be just the thing it said it was.
But then a drug came which was exactly what it said it was, and that thing was bad for people. So after that you needed to do studies to show safety, but they were less rigorous than they are now?
And then thalidomide happened, which was fine for most people but caused birth defects when taken by a pregnant person. When it came up for approval, the bureaucrat looking at it happened to have previously studied rabbits and seen that drug uptake and metabolization could be different in pregnant rabbits, making something otherwise non-toxic become toxic. And so she said the company needed data about safety in pregnant people, even though this was a non-standard requirement at the time. The company tried to avoid that, she insisted, and it never got approved in the US. But standards still got stricter.
(It did get approved in Europe. It's relevant that Germany didn't like tracking birth defects due to previous history, so the problems weren't noticed as early as they might have been.)
One of the times when regulations got stricter, part of the story is that at the same time as public outrage, there also happened to be a bill in progress for reasons of punishing pharma for something something, so that's the bill that got through.
At some point you started to get patient advocacy groups, saying "we are dying while you hold this drug up", and the FDA would pay attention to that. And then the pharma companies would get involved in those groups, and now it's at the point where you kinda need one of those or the FDA will be like "why would we prioritize you?"
There are drugs that the FDA wants to encourage but which aren't profitable, e.g. ones helping with diseases common in the third world and rare in the US. One motivation is that if you make one they'll give you a priority review voucher, good for helping another drug of yours get approved faster. These vouchers are transferable, and the market price is... I think around $100 million?
With covid vaccines, the government said "if you produce a thing that satisfies these criteria, we will buy X amount of it for sure". That took some uncertainty out of the process and helped things get made.
Sometimes a drug will succeed in trials but not get pushed forward for various reasons, sometimes just falling through cracks. One drug this happened with was a covid treatment, which seemed to reduce hospitalizations by 70% in vaccinated people. When it was in development the FDA said it was unlikely to get emergency use authorization, and the company dropped it.
(Related: VaccinateCA got a lot of funding for a while, and then after the funders themselves got vaccinated, it got less funding.)
Later the company was going bankrupt, and they sold off their assets, which included "drugs that seemed promising but we never went anywhere with", including this one. Ross was involved in some other company buying up that drug.
You can do trials for covid much cheaper than for cancer drugs. For cancer you'll often have a list of a smallish number of people and try to find the specific individuals who give you the best chance of a statistically significant result based on comorbidities and such, and have someone specifically approach the people you want. For covid the cheap thing to do is: everyone who comes to your clinic with a cough gets the drug and gets a covid test. Later you find out if they had covid and (thanks to a phone call) what happened to their symptoms. And you can do this sort of thing somewhere like Brazil, instead of doing it in the most prestigious hospital (where there are a bunch of other studies going on distracting people). But it's kind of a weird thing to do, and if trials fail your investors might be like "why didn't you do the normal thing?"
Though this particular story for weight exfiltration also seems pretty easy to prevent with standard computer security: there’s no reason for the inference servers to have the permission to create outgoing network connections.
But it might be convenient to have that setting configured through some file stored in Github, which the execution server has access to.
Yeah, if that was the only consideration I think I would have created the market myself.
Launching nukes is one thing, but downvoting posts that don't deserve it? I'm not sure I want to retaliate that strongly.
I looked for a manifold market on whether anyone gets nuked, and considered making one when I didn't find it. But:
- If the implied probability is high, generals might be more likely to push the button. So someone who wants someone to get nuked can buy YES.
- If the implied probability is low, generals can get mana by buying YES and pushing the button. I... don't think any of the generals will be very motivated by that? But not great.
So I decided not to.
No they’re not interchangeable. They are all designed with each other in mind, along the spectrum, to maximize profits under constraints, and the reality of rivalrousness is one reason to not simply try to run at 100% capacity every instant.
I can't tell what this paragraph is responding to. What are "they"?
You explained they popped up from the ground. Those are just about the most excludable toilets in existence!
Okay I do feel a bit silly for missing this... but I also still maintain that "allows everyone or no one to use" is a stretch when it comes to excludability. (Like, if the reason we're talking about it is "can the free market provide this service at a profit", then we care about "can the provider limit access to people who are paying for it". If they can't do that, do we care that they can turn the service off during the day and on at night?)
Overall it still seems like you want to use words in a way that I think is unhelpful.
Idk, I think my reaction here is that you're defining terms far more broadly than is actually going to be helpful in practice. Like, excludability and rivalry are spectrums in multiple dimensions, and if we're going to treat them as binaries then sure, we could say anything with a hint of them counts in the "yes" bin, but... I think for most purposes,
- "occasionally, someone else arrives at the parking lot at the same time as me, and then I have to spend a minute or so waiting for the pay-and-display meter"
is closer to
- "other people using the parking lot doesn't affect me"
than it is to
- "when I get to the parking lot there are often no spaces at all"
I wouldn't even say that: bathrooms are highly rivalrous, and this is why they need to be so overbuilt in terms of capacity. While working at a cinema, did you never notice the lines for the women's bathroom vs the men's bathroom once a big movie let out? And that like 99% of the time the bathrooms were completely empty?
My memory is we didn't often have that problem, but it was over ten years ago so dunno.
I'd say part of why they're (generally in my experience) low-rivalrous is because they're overbuilt. They (generally in my experience) have enough capacity that people typically don't have to wait, and when they do have to wait they don't have to wait long. There are exceptions (during the interval at a theatre), but it still seems to me that most bathrooms (as they actually exist, and not hypothetical other bathrooms that had been built with less capacity) are low-rivalrous.
None of your examples are a counterexample. All of them are excludable, and you explain how and that the operators choose not to.
I'm willing to concede on the ones that could be pay gated but aren't, though I still think "how easy is it to install a pay gate" matters.
But did you miss my example of the pop-up urinals? I did not explain how those are excludable, and I maintain that they're not.
Thing I've been wrong about for a long time: I remembered that the rocket equation "is exponential", but I thought it was exponential in dry mass. It's not, it's linear in dry mass and exponential in Δv.
This explains a lot of times where I've been reading SF and was mildly surprised at how cavalier people seemed to be about payload, like allowing astronauts to have personal items.
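Concretely, the Tsiolkovsky rocket equation is Δv = v_e · ln(m_wet / m_dry); solving it for propellant mass makes the linear/exponential split visible. A sketch with made-up numbers (v_e = 3500 m/s is roughly a kerolox engine):

```python
import math

def propellant_mass(dry_mass, delta_v, exhaust_velocity):
    # delta_v = v_e * ln((m_dry + m_prop) / m_dry)
    # => m_prop = m_dry * (exp(delta_v / v_e) - 1)
    return dry_mass * (math.exp(delta_v / exhaust_velocity) - 1)

v_e = 3_500  # m/s, roughly a kerolox engine

# Linear in dry mass: doubling the payload exactly doubles the propellant.
print(propellant_mass(2_000, 9_400, v_e) / propellant_mass(1_000, 9_400, v_e))  # 2.0

# Exponential in delta-v: doubling delta-v multiplies the propellant ~16x here.
print(propellant_mass(1_000, 9_400, v_e), propellant_mass(1_000, 18_800, v_e))
```

So an extra kilogram of personal items costs a fixed multiple of itself in propellant, while an extra chunk of Δv multiplies the whole bill.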
Sorry, I didn't see this notification until after - did you find us?
I agree that econ 101 models are sometimes incorrect or inapplicable. But
I don’t know how much that additional cost is, but seemingly less than the benefit, because three months later, the whole of Germany wants to introduce this card. The introduction has to be delayed by some legal issues, and then a few counties want to introduce it independently. So popular is this special card!
The argument here seems to be that the card must satisfy a cost-benefit analysis or it wouldn't be so popular, and I don't buy that either.
Ah, I can sometimes make fridays but not tomorrow. Hope it goes well.
they turn a C/G base pair to an A/T, or vice versa.
Can they also turn it into a G/C or a T/A? I wasn't sure if this was an example or a "this is the only edit they do". Or I might just be misunderstanding and this question is wrong.
I think Ben's proposal is: between rounds, it takes a while to split the whole deck into suits, all hearts in one pile and all spades in another and so on. Instead you can just pick out four hearts, and four spades, and so on, and remove 0/2/2/4 cards from those piles, and shuffle the rest back into the deck. But no matter how you shuffle, I don't think you can do that without leaking information.
The Gap Cycle by Stephen R. Donaldson
I think I've read this twice, in my early teens and early twenties, and loved it both times. But I'm now 34 and can't talk about it in depth. I think past-me especially liked the grimness and was impressed at how characters seemed to be doing things for internally motivated reasons. (IIRC Donaldson calls this giving characters "dignity". I feel like since then I've picked up another term for it that's temporarily slipped my mind.)
I still think A Dark and Hungry God Arises and This Day All Gods Die are excellent book titles.
A caveat is that back then I also loved Donaldson's Thomas Covenant books, and I think that by my mid-twenties I enjoyed them but not so much. So plausibly I'd like the Gap Cycle less now than then too? But I want to re-read.
Too Like the Lightning by Ada Palmer
I once saw a conversation that went something like: "I don't find writing quality in sci-fi that important." / "You clearly haven't read Too Like the Lightning".
I wasn't sure if the second person meant TLTL's writing is good or bad. Having read TLTL, both interpretations seemed plausible. (They meant good.)
I found it very difficult to get through this book, except that the last few chapters were kind of gripping. That was enough to get me to read the next one, which was hard to get through again. Ultimately I read the whole series, and I'm not sure how much I enjoyed the process of reading it. But they're some of my favorite books to have read, and I can imagine myself re-reading them.
Crystal trilogy by Max Harms
I enjoyed this but don't have much to say. As an AI safety parable it seemed plausible enough; I hadn't previously seen aliens like that; I occasionally thought some of the writing was amateurish in a way I couldn't put my finger on, but that wasn't a big deal.
just make 4 piles of 4 cards from each suit and remove from those
I don't think you can do this because at least one person will see which cards are in those piles, and then seeing those cards in game will give them more info than they're supposed to have. E.g. if they see 9h in one of the piles and then 9h in game, they know hearts isn't the 8-card suit.
(The rules as written are unclear on this. But I assume that you're meant to remove cards at random from the suits, rather than having e.g. A-8 in one suit, A-Q in one, and A-10 in the other two. If you did that then getting dealt the Q or J would be a dead giveaway.)
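A sketch of the random-removal reading (the rank count, suit names, and removal counts here are placeholders, not necessarily the actual game's):

```python
import random

RANKS = list(range(1, 13))  # pretend 12 ranks per suit

def prepare_deck(removals):
    """Remove the given number of randomly chosen cards from each suit.

    Because removal is random, rather than e.g. always keeping A-8 in the
    short suit, seeing any particular card in play doesn't immediately
    identify which suit was shortened."""
    deck = []
    for suit, k in removals.items():
        kept = set(RANKS) - set(random.sample(RANKS, k))
        deck.extend((rank, suit) for rank in sorted(kept))
    random.shuffle(deck)
    return deck

deck = prepare_deck({"spades": 0, "hearts": 2, "diamonds": 2, "clubs": 4})
print(len(deck))  # 40: 48 cards minus the 8 removed
```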
I think Causality would be good for this. Levels have their full state visible from the start, and there's no randomness. There's a relatively small number of mechanics to learn, though I worry that some of them (particularly around details of movement, like "what will an astronaut do when they can't move forward any more?") might be "there are multiple equally good guesses here" which seems suboptimal.
Actually, there's one detail of state that I'm not sure is visible, in some levels:
When you come out of a portal, which way do you face? I think there's probably a consistent rule for this but I'm not sure, I could believe that in some levels you just have to try it to see.
they are by definition rivalrous ("the consumption of a good or service by one person diminishes the ability of another person to consume the same good or service"), as only one person in a stall at a time, and the timeframe doesn't matter to this point.
Why does timeframe not matter? If there's a pay-and-display parking lot, with enough spaces for everyone, but only one ticket machine, would you say this is rivalrous because only one person can be using the ticket machine at once?
Bathrooms aren't zero-rivalrous, but they seem fairly low-rivalrous to me. (There are some people for whom bathroom use is more urgent, making bathrooms more rivalrous, e.g. pregnant people and those with certain disabilities. My understanding is these people sometimes get access to extra bathrooms that the rest of us don't.)
(As for dirtiness, all I can say is that the public bathrooms I've used tend to be somewhere between "just fine" and "unpleasant but bearable". I did once have to clean shit from the toilet walls in the cinema where I used to work, but I believe it's literally once in my life I've encountered that. Obviously people will have very different experiences here.)
they are extremely excludable: "Excludability refers to the characteristic of a good or service that allows its provider to prevent some people from using it."
Depends on details. London has some street urinals that afaict pop up at night, they have no locks or even walls, they're nonexcludable. Some are "open to everyone the attendant decides to let in", and some are "open to everyone with a credit card", and these seem just straightforwardly excludable. Other bathrooms can be locked but have no attendant and no means of accepting payment, so they're either "open to everyone" or "closed to everyone", and calling that "excludable" feels like a stretch to me. I suppose you could say that you could install a pay gate so it's "excludable but currently choosing not to exclude people", but then it depends how easy it is to install one of them.
So I guess Stuart is named for John Stuart Mill and Milton for Milton Friedman, but what about Carla (is CARLA an acronym?) and Victoria (Tori?)?
Note that to the extent this is true, it suggests verification is even harder than John thinks.
In any case, where is this hedging discussion happening?
I've seen and taken part in discussions about hedging on LW, but the thing that made me write this comment was a conversation on Duncan Sabien's facebook.
What things are over-discussed?
Interesting question, but nothing comes to mind.
A thing that feels under-discussed when it comes to hedging is, hedging doesn't just have to be swapping from "X" to "I believe X". You can say "the sky looks blue" or "wikipedia says the sky is blue" or "rumor has it the sky is blue" or "IIRC the sky is blue" or "if I did the math right, the sky is blue".
I for one welcome our new AI overlord whom I unwittingly helped install. Otherwise I'd need to feel conflicted about my actions this weekend.
I still have questions about one of the puzzles. Will the solutions be made available somewhere (ideally in a format where people can try them unspoiled first), or should I just ask?
Ah, thanks, I see now. You're saying that even if it's written with the small end before the big end according to the way the words flow, the direction of eye scanning and of mentally parsing and of giving a name to the number is still big end before small end? Similarly I might write a single word sdrawkcab in English text but the reader would still read it first-letter-to-last-letter.
Curious, when handwriting, what order do you write in?
Even better, Daniel then gets to keep his equity
I missed this part?
Isn't this showing that Hebrew and Arabic write numbers little-endian? Surely big-versus-little-endian isn't about left-to-right or right-to-left, it's about how numbers flow relative to word reading order.
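For anyone who hasn't met the computing term: endianness is about whether the big end (most significant part) or the little end comes first in the order things are stored or read, independent of any left-or-right layout on the page. A quick Python illustration (1234 is 0x04D2 in hex):

```python
n = 1234  # 0x04D2 in hex

# Big-endian: most significant byte first, like naming "one thousand
# two hundred thirty-four" with the big end before the small end.
print(n.to_bytes(2, "big"))     # b'\x04\xd2'

# Little-endian: least significant byte first.
print(n.to_bytes(2, "little"))  # b'\xd2\x04'
```

The bytes are the same either way; only their order relative to reading order differs, which is the distinction being drawn about written numerals here.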
Ask me about the 2019 NYC Solstice Afterparty sometime if you want a minor ops horror story.
Consider yourself asked.
(I confess I have no idea how to interpret the agree-votes on this.)
Yeah, I was wrong to suggest/assume that the definition is original to you and not the way it's defined in other communities that I just am not familiar with.
It still seems like you're making the core mistake I was trying to point at, which is asserting that a word means something different than what other people mean by it; rather than acknowledging that sometimes words have different meanings in different contexts.
Like, people are talking about what sort of toppings should be on a donut and how large the hole should be, and you're chiming in to say you came around on donuts when you realized that instead of being ring-shaped with toppings they're ball-shaped with fillings. You didn't come around on donuts. You just discovered that even though you don't like ring donuts, you do like filled donuts, a related but different baked good.
I only came around on faith once I realized it was just Latin for trust, and specifically trust in the world to be just as it is.
This really just seems to me like you're asserting that what a word "really means" is some weird new definition that ~no one else means when they say the word.
(I don't know Latin. Nevertheless I am extremely confident that the word "faith" in Latin does not specifically refer to the concept of "trust in the world to be just as it is".)
Also now running as an in-progress YouTube short series. (I haven't read the original.)
"It seems a lot of our pills cause vomiting as a side-effect?"
"Yeah, the company knows about it but it's tricky to fix."
"How so? Our competitors don't have this problem, and we make basically the same products, right?"
"Right, no, it's a corporate structure issue."
"?"
"If a pill does too much or too little of something, we have a group of clever people whose job it is to care about that and to reformulate it slightly to improve it. If it doesn't kill enough pain, the analgesic division will step in. If it causes clotting, the anticoagulant folks have a look. If it makes your bones brittle, it'll be the antiosteoporosises. You see? But if it causes vomiting-"
"Right, yeah. There's no one to take ownership of the problem, because-"
"There is no antiemetics division."
Oh, huh. Searle's original Chinese room paper (first eight pages) doesn't say machines can't think.
"OK, but could a digital computer think?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
"Why not?"
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
I can't say I really understand what he's trying to say, but it's different from what I thought it was.
Yeah. It's still possible to program in a way that leaves injection open, and it's always been possible to program in a way that avoids it. But prepared statements make the safe style easier, by letting the programmer pass the executable code (which is probably embedded directly as a literal in their application language) separately from the parameters (which may be user-supplied).
(I could imagine a SQL implementation forbidding all strings directly embedded in queries, and requiring them to be passed through prepared statements or a similar mechanism. That still wouldn't make these attacks outright impossible, but it would be an added layer of security.)
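A minimal sketch of that code-versus-parameters distinction, using Python's built-in sqlite3 module (other SQL drivers work the same way, though the placeholder syntax varies between `?`, `%s`, and named parameters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "' OR '1'='1"  # classic injection payload

# String concatenation: the payload becomes part of the executable SQL,
# so the OR '1'='1' clause matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'"
).fetchall()
print(len(rows))  # 1 -- matched the whole table

# Parameterized query: the payload travels separately from the SQL text,
# so it is only ever compared as a literal string, and nothing matches.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)
).fetchall()
print(len(rows))  # 0
```

The query text in the second call contains no user data at all, which is the property that makes injection a non-issue there.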
A large majority of empirical evidence reported in leading economics journals is potentially misleading. Results reported to be statistically significant are about as likely to be misleading as not (falsely positive) and statistically nonsignificant results are much more likely to be misleading (falsely negative). We also compare observational to experimental research and find that the quality of experimental economic evidence is notably higher.
I'm confused by this "falsely negative". Like, without that, that part sounds like it's saying something like
when a result is reported as "we observed a small effect here, but it wasn't statistically significant", then more often than not, there's no real effect there
but that's a false positive. If they're saying it's a false negative, it suggests something like
when a result is reported as statistically insignificant, that makes it sound like there's no effect there, but more often than not there actually is an effect
...but that's (a) not a natural reading of that part and (b) surely not true.
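For what it's worth, here's how the arithmetic behind reading (b) would go, with made-up numbers for the prior and the power (neither comes from the paper):

```python
# Quick Bayes check: of results reported as "not statistically significant",
# what fraction are false negatives (a real effect that the test missed)?
# The prior and power values below are illustrative assumptions only.

def frac_false_negatives(prior_real, power, alpha=0.05):
    """P(real effect | nonsignificant result)."""
    miss = prior_real * (1 - power)            # real effect, test missed it
    true_neg = (1 - prior_real) * (1 - alpha)  # no effect, correctly nonsignificant
    return miss / (miss + true_neg)

# With half of studied effects real and well-powered (80%) studies,
# few nonsignificant results are misses:
print(frac_false_negatives(prior_real=0.5, power=0.8))  # about 0.17

# With the same prior but badly underpowered (20%) studies,
# nearly half of nonsignificant results are misses:
print(frac_false_negatives(prior_real=0.5, power=0.2))  # about 0.46
```

So whether reading (b) can hold turns entirely on the assumed power and base rate of real effects; under these particular numbers it gets close to, but doesn't reach, "more often than not".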
Were SQL a better language this wouldn’t be possible, all the command strings would be separated somehow
SQL does support prepared statements, which forbid injection. Maybe you're thinking of something stronger than this? I'm not sure how long they've been around, but Wikipedia's list of SQL injection examples only has two since 2015, which hints that SQL injection is much less common than it used to be.
(Pedantic clarification: dunno if this is in any SQL standard, but it looks like every SQL implementation I can think of supports them.)
Planet Money #902 (28 Mar 2019): The Phoebus Cartel
Listened to this one a few weeks ago and don't remember most of it. But half the episode was about the Phoebus cartel, a case of planned obsolescence where lightbulb manufacturers decided that no light bulb should be allowed to last more than 1000 hours.
Writing this for Gell-Mann amnesia reasons: in the episode someone says there was no benefit to consumers from this, but I'd recently seen a Technology Connections episode on the subject saying that longer-lasting incandescent light bulbs are less energy efficient (i.e. more heat, less light) for physics reasons, to the extent that they could easily be more expensive over their lifetime. Seems like an important caveat that PM missed!
The other half was about psychological obsolescence, where manufacturers make long-lasting goods like cars cosmetically different to convince you you need a new one.