Are either of these relevant?
https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real
https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased
Does the LessWrong editor's documentation on its handling of LaTeX answer your question?
Quick note of caution on changing the incentive rate downwards. If you might be sampling in times/places where people have previously experienced the higher incentive, the drop might trigger a loss framing. I.e., if you move to the busier time that will eventually be your normal collection time and location, and give the people who are around then the perception that $5 is the normal compensation, then when that drops to $1 they may be less inclined to contribute their sample than if they'd never seen the $5 option.
Maybe you could moderate that effect by some creative design like the board saying $1 prominently, then appending "special today $5" somehow in a way that clearly communicates it's a special temporary extra.
Checking about 2 years after my initial post, it looks like $TSLA has fallen by more than 50%: it looks like the split-adjusted price in early April 2022 was around $330 or $340, and today it's around $145.
Eyeballing the chart, it looks like it's always been lower than that in the subsequent period, and was down to around $185 at the 12 month mark that was initially the target of the competition. That last bit is the bit that was least clear to me at the time: it seemed high probability that Tesla stock would have to fall at some point, but I expressed uncertainty about when because I thought there was a fair probability the market could stay irrational for a longer period.
What timezone(s) will this programme be running in, please?
Is the opening paragraph at the top of this article the prompt you gave Claude, or text written for us?
If the latter, could you share the prompt here, please?
There’s this nice paper where a load of different researchers were given the same (I think simulated) data, and it looked at how the researchers' results differed.
Might the research you were thinking of be the work by Raphael Silberzahn, Eric L. Uhlmann and Brian Nosek?
Nature comment: https://www.nature.com/articles/526189a
Full research article: https://journals.sagepub.com/doi/10.1177/2515245917747646
In the UK, I think the most common assumption for cauliflower ear would be playing rugby, rather than a combat sport.
No idea if that's the statistically correct inference from seeing someone with the condition.
I enjoyed filling this out!
The question here is the opposite of its title:
Unknown features: Which of the following features of the LessWrong website did you know how to use before you read this question?
That could result in some respondents answering in reverse if they skim.
As well as the generic suggestions people are making in the answers, it seems like you might be able to get more specific suggestions if the question specified whether you're looking for long distance vs. nearby/in-person dating, and (if the latter) a rough idea of where you are located.
You've got an asterisk in the first sentence, but I couldn't see it referencing anything.
~1.2e16 bases annually
Is this a typo? If I'm reading the chart correctly, it looks like it's of the order 1.2e15.
Blaise Pascal – the I Think Therefore I Am guy
The 'I think therefore I am' guy was René Descartes.
https://en.wikipedia.org/wiki/Cogito,_ergo_sum
I am strongly don't buy
Grammar: delete "am"
If the market are
Grammar: "market is" or "markets are"
You mention here that "of course" you agree that AI is the dominant risk, and that you rate p(doom) somewhere in the 5-10% range.
But that wasn't at all clear to me from reading the opening to the article.
Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. ... I have evaluated this model in detail and found it substantially incorrect...
As written, that opener suggests to me that you think the overall model of doom being likely is substantially incorrect (not just the details I've elided of it being the default).
I feel it would be very helpful to ground the article with the note you've made here somewhere near the start: i.e., that your argument is with the specific doom case from EY, and that you retain a significant p(doom), but based on different reasoning.
Years back I heard that 10 is a bad number for this kind of thing.
The reasoning goes that because it's a round number people will assume that you chose it to be round and that at least some of your entries are filler to get up to that number.
Whereas if you have 11 reasons or 7 or whatever, people will think that number is the actual number you needed to make all your points.
Have you looked at the Guild of the Rose at all?
UK-based researchers interested in this subject, and potentially international collaborators, could apply to the recently-announced UKRI funding opportunity:
https://www.ukri.org/opportunity/ageing-research-development-awards/
The UK effectively charges a very large tax for access to its postcode address file, making it out of reach for many uses and also being effectively a large tax on business, especially small business, that requires the information. This seems like quite an insane place to collect this much government revenue.
The article you link to points out that the PAF is controlled by Royal Mail, and I think Royal Mail gets the revenue from this. That isn't government revenue: Royal Mail was privatised several years ago.
It seems to get pretty hot, so you probably wouldn't want it on anything that might scorch or burn. Silver lining: you'll save money on your heating bill. Though I'm not looking forward to seeing my electricity bill next month.
If it's running at 500W, that's half a kWh per hour. If electricity is a little under 40p per kWh, then the running cost should be a bit under 20p per hour. If you use it for 10 hours per day, every day, then your electricity bill might rise by about £60 per month.
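That arithmetic can be sanity-checked with a few lines; note the 500 W draw, ~40p/kWh tariff, and 10 hours/day are the assumed figures from above, not measured ones:

```python
# Rough floodlight running-cost estimate.
# Assumed figures: 500 W draw, ~40p/kWh tariff, 10 hours/day usage.
power_kw = 0.5           # 500 W in kilowatts
price_per_kwh = 0.40     # £ per kWh (assumed tariff)
hours_per_day = 10
days_per_month = 30

cost_per_hour = power_kw * price_per_kwh                         # £0.20/hour
cost_per_month = cost_per_hour * hours_per_day * days_per_month  # about £60

print(f"~£{cost_per_hour:.2f}/hour, ~£{cost_per_month:.0f}/month")
```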
The one big annoyance is that there's no switch on the floodlight, so you've got to turn it on and off from the mains.
Since you're already fitting a plug, and since that sounds like it might be closer than the mains due to the short wire, you could fit a plug with a switch on it, like these:
I don't think you need to view namedropping as an appeal to authority. The natural way to do it in a scholarly document, including a poster, would be to cite a source. That's giving the reader valuable information - a way to check out the authority behind it.
Of course, if the reader is familiar with the author cited and knows that their work is invariably strong, they might choose to take it on authority as a shortcut, but they have the info at hand to check into it if they wish.
Aha. For each side of the pole, you can write the binary representation of 4 bits vertically, and where there's a 1 you have a line joining it. The middle two bits both go to the middle of the pole, so they have to curve off upward or downward to indicate which they are.
So 2 is 0010, and you have no lines in the top half and one line representing the bottom of the middle, so it curves downward.
Whereas 4 is 0100, so it has the upward-curving middle-connecting line, and none in the bottom half.
The top half of a 2 might kinda look like the curve shape, and the bottom stroke of a 2 looks like a horizontal bar. So if there were partial characters hanging from the central pole, they might look a bit like those...
But... If it's that, the curve probably only works on the bottom right anyway. So if you're willing to mirror it for the bottom left, why not mirror to the top too?
And... That doesn't really explain the 1 parts anyway. They're just using "whichever part of a 2 isn't being used for 2".
So I guess this isn't a complete explanation.
I have now checked through the whole top row of the expansion, and this does seem to be what's going on.
Each glyph represents a 4-digit base-4 number.
The digits are read in the order top left, bottom left, top right, bottom right.
An empty space off the central pole is always read as a zero.
A loop that comes from the centre of the central pole and joins to the top or bottom of the pole is always a three.
The horizontal bar and the curve coming from the centre of the bar (not re-joining, to distinguish it from the loop) vary in meaning depending on whether they are at the top or the bottom. At the top, the horizontal bar is two and the curve is one; at the bottom, the horizontal bar is one and the curve is two.
So, at the top we have a glyph representing 0003_4 = 3 (followed by a point).
Then the first few along the top row can be read off as:
0210_4 = 36_10
0333_4 = 63_10
1222_4 = 106_10
2020_4 = 136_10
And Wolfram Alpha can tell us that pi in base 256 starts:
3.36:63:106:136:133:163:8:211:19:25:138:46:3..._256
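As a quick check that those readings line up (the digit assignments are my reading of the glyphs, not anything official):

```python
# Convert my base-4 readings of the first few top-row glyphs to decimal.
# Digit order assumed: top left, bottom left, top right, bottom right.
glyph_digits = ["0210", "0333", "1222", "2020"]
values = [int(d, 4) for d in glyph_digits]
print(values)  # [36, 63, 106, 136] — matching pi's first base-256 fractional digits
```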
Having the strokes for one and two take inverse meanings at top and bottom adds a little wrinkle to this as a puzzle. But as a design decision, if one were semi-seriously proposing this as a way of encoding information, it seems to add unnecessarily to the effort needed to decode.
Also on design, I'm torn about whether the ordering of the parts is better or worse than top then bottom. The current design keeps the most significant parts on the left, which is mostly what we expect. But the overall presentation of the number is read across the page, left to right, so there could also be a case for having the order top-left, top-right, bottom-left, bottom-right.
Out of time to check more fully, but from the first few I think it works if a curve is 1 at the top but 2 at the bottom. (And straight lines vice versa.)
Hmm. My guess was that the order from most significant digit to least significant digit was top left, bottom left, top right, bottom right; and that an absent stroke is 0, a line is 1, a curve is 2 and a loop is 3.
And it sometimes works! The third character is 0333 = (0*64)+(3*16)+(3*4)+(3*1) = 63. Which Wolfram Alpha tells me is the 3rd term in pi_256.
But it doesn't always work. That would make the 4th character 2212_4 = 166_10. But the 4th character of pi_256 is 106 (= 1222_4), whose base-4 digits are an anagram of my reading. But maybe that's just a coincidence.
Yup, we also have matches at characters 8 and 26, which are both 8 in pi_256.
In which case, given the 1st and 14th characters are the same, and the 14th character of pi in base 256 is 3, that's my leading guess, pending checking a few more glyphs.
That was thinking of noggin-scratcher's comment about the monkly glyphs for 1-9999. I hadn't thought of their helpful observation that there are only 54 distinct symbols used across the 113 characters. I guess that even if this is somewhere on the right lines, the base must be something much lower, like 64.
Putting together the observations from noggin-scratcher and Yair Halberstadt, could this be the expansion of some number like e or pi in base 256?
I think if you insert an image using markdown it'll be displayed. But I don't think you can draw into it directly.
Not a perfect match for your requirements, but with some features that might meet some of the same underlying goals, you might want to take a look at Zettlr.
I'm not even American, so treat this thought accordingly, but might the military recruitment numbers you're seeing be influenced by race / ethnicity at all?
As in, if the bottom quintile has a lower-than-average proportion of white people in it (I guess this is the case in the USA), and if the military disproportionately recruits white people (I've no idea whether this is the case), maybe bottom-quintile white people could be over-represented, even while that quintile is a little under-represented.
An AI capable of programming might be able to reprogram itself smarter and smarter, soon surpassing humans as the dominant species on the planet.
The first moderately smart AI anyone develops might quickly become the last time that people are the smartest things around. We know that people can write computer programs. Once we make an AI computer program that is a bit smarter than people, it should be able to write computer programs too, including re-writing its own software to make itself even smarter. This could happen repeatedly, with the program getting smarter and smarter. If an AI quickly re-programs itself from moderately smart to super-smart, we could soon find that it is as uninterested in the wellbeing of people as people are in the wellbeing of mice.
(Also not medically trained.)
Something missing from this analysis is that the expected probability of these conditions for any given pregnancy is not the same as the incidence in the population at large. The factor that I've most often heard about is increasing age being highly associated with increasing incidence of Down syndrome, though there may be others, and I'm not sure whether there are known correlates with the other conditions you mention.
That might also relate to the last point about incidence of these conditions in the wider population and the incidences that study reported. It could be that older pregnant people are more likely to opt for the test, knowing that they are at elevated probability.
You finish by suggesting people think about prevalence differences shifting by a factor of two, but from a quick Google, it looks like age can shift prevalences by orders of magnitude. The first table that popped up suggested 1 in 2000 at age 20, increasing to 1 in 100 at age 40 and 1 in 10 at age 49.
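To make the scale of that concrete (using the assumed figures from that table, which I haven't verified):

```python
# Prevalence ratios across maternal age, from the (unverified) table figures.
prevalence = {20: 1 / 2000, 40: 1 / 100, 49: 1 / 10}
shift_40 = prevalence[40] / prevalence[20]   # 20x the age-20 prevalence
shift_49 = prevalence[49] / prevalence[20]   # 200x the age-20 prevalence
print(shift_40, shift_49)
```

So the age effect alone can dwarf a factor-of-two shift.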
A prior isn't the termination.
That sounds like you're thinking of priors in terms of beliefs. As Gelman recently quoted:
The prior distribution. In general, the prior distribution represents all previously available information regarding a parameter of interest. . . .
I really like that they express this in terms of “evidence” and “information” rather than “belief.”
(First sentence is something Gelman is quoting from Deke et al, p4, second sentence is Gelman's agreement with that.)
Why is xyz your prior? The termination is the information you've drawn on to come to that prior.
I felt like this classification system is potentially helpful.
But I felt that the title of the article could do with being refined. I read it as meaning that there were types of akrasia that could actually be beneficial for you in some surprising way, rather than it being beneficial to categorise akrasia by this typology.
"sass" becomes "cacc" ...
... "kick" becomes "xix"...
Any reason why sass would need a double c at the end, but the ck in kick just becomes one x?
cepelling
Typo here too, perhaps?
Upvoted because I really appreciated the intro text at the start that let me know that this wasn't the post for me. Thanks!
My suggestion isn't really aligned with your initial hypothesis, about the potential for LW to be more efficient than the efficient market because of comparative advantages at spotting niche things. I don't know much about cars, about car manufacturers, or about investment. So I'm not using some expert niche knowledge that I would realistically expect to be more efficient than the market.
Really, my reasoning is just that it doesn't seem that plausible that Tesla is worth more than car companies that are selling many, many times more vehicles than it does.
From first principles, it seems high probability that most vehicles in the future are going to be electric vehicles. I suspect that incumbent car manufacturers forecast this too, and therefore expect that they are investing heavily in developing electric vehicles. These are large companies with very well-established dealership networks, large cashflows from sales of internal combustion engine vehicles, etc., so should have substantial capacity to pursue that development. I don't know whether Tesla claims a technical advantage in electric vehicles, but if they do, I don't see it as being likely to persist.
For the autonomous-driving parts of the technology, I could envisage there being more secret sauce than the electric vehicle parts. But for similar reasons I would expect the other vehicle companies to be investing heavily in it. And — this is really anecdote time now — I've seen some videos online that are suggestive of Tesla's self-driving abilities seeming outright dangerous.
Suggestion: a short position on Tesla (TSLA)
Reason: Tesla sold about 900k cars in 2021. This is about 1-2% of the global car market. Even in electric vehicle sales, Tesla only represents about 1/7 of global sales. [1] And yet TSLA is valued as highly as the next 5 car companies combined. [2] Musk has been claiming since 2015 that self-driving Teslas are a year or two away, [3] so it's hard to believe that they are particularly close now. It seems that Tesla's current self-driving features are not particularly more advanced than those of competitors. [4]
Personal thoughts: I have low confidence that this would be a good trade over a 12 month timescale. Markets can stay irrational longer than you can stay solvent and all that. And TSLA trades high based on Musk's personal appeal, which might have years left to run (see Matt Levine's Money Stuff newsletters on the Elon Markets Hypothesis). But longer-term, some trade structured to represent the relative values of TSLA vs. the next, say, 5-10 largest car makers seems like it should pay off. Either TSLA is currently over-valued or the other car makers are currently under-valued. Unless Tesla can capture, say, 50%+ of car sales globally, either its value will have to fall substantially or other makers' value will have to rise substantially, once investors realise that the other makers offer the same benefits that Tesla represents.
[3] https://edition.cnn.com/2022/03/14/cars/tesla-cruise-control/index.html
[4] https://www.cars.com/articles/which-cars-have-autopilot-430356/
Are the time limits enforced via parental settings for different apps? If so, I'd be interested in hearing what technical solution you use and how well they work out. Do you have to have them working across different machines / operating systems, for example?
On the question of finance, there was historically a prohibition on usury in Christianity. This was worked around by the triple contract / Contractum trinius.
I have been told that something comparable happens in currently existing Islamic finance: people devise clever schemes that are technically acceptable, though not really in the spirit of what was intended, so they replicate lending-at-interest without technically doing so. (I believe there may also be some regulatory flexibility in choosing which religious scholar you submit your scheme to, being able to select one that is known for looking favourably on such proposals.)
I believe that there are circumstances in which financial services that are in tune with the spirit of some of these rules (rather than just within the letter of them) could be desirable. But people do seem to have a habit of finding ways around them.
Re your question about whether the Zoe or official numbers are likely to be correct in the UK.
It seems likely that it's Zoe, based on other data and on the physical situation.
The other data source is the Office for National Statistics. They've been running a sampling study, going out and testing the population. They are finding very high levels of infection. It's about 1 in 16, or 3.4 million people in England who would have tested positive a week or two back. (Plus some more for Scotland, Wales and Northern Ireland.) That seems very compatible with Zoe saying 340k cases per day across the whole UK.
The relevant information about how things are going is that there has been a massive change in testing. People in the UK used to be able to request, each day, a free pack of 7 lateral flow tests delivered by post. As of today, those are no longer available. Officially, over the last few weeks you could still order them every 3 days, but in practice they have been almost completely out of stock.
The guidance and general tone has also changed, with much less attention to COVID in general and on testing in particular.
Given that tests are so much less available, it's almost surprising that as many cases are being detected as are.
Likely typos: "debiasing" became "debaising" twice, once in the title and once in the body text.