It is perfectly rational to pipe all decisions to a cheaper form of cognition that relies mostly on pattern matching, and to save your limited reserves of concentration and reasoned thought for situations that pass through this initial filter and ping your higher cognition to look into them more.
But I claim that all such priors make assumptions about the distribution of the possible number of buses.
I mean, yes, that's the definition of a prior. How to calculate a prior is an old question in Bayesianism, with different approaches - Kolmogorov complexity being one.
Sorry, I meant to add in an example where for simplicity you saw the bus numbered 1.
Agreed it's a terrible prior, it's just an easy one for a worked example.
Agreed, I just wanted to clarify that the assumption that it's twice as long seems baseless to me. The point is it's usually shortly after.
As a worked example, if I start off assuming that the chance of there being n buses is 1/2^n (nice and simple, adds up to 1), then the posterior is 1/(n·ln(2)·2^n) - multiply the two distributions, then divide by the sum (ln(2)) so that it adds up to 1.
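A quick numerical sketch of that update (nothing here is load-bearing; it just checks that ln(2) is the right normalising constant for the prior and likelihood above):

```python
from math import log

# Prior P(N=n) = 1/2^n; likelihood of seeing bus #1 given n buses is 1/n.
# Unnormalised posterior: 1/(n * 2^n); normalising constant: sum = ln(2).
terms = [1 / (n * 2**n) for n in range(1, 200)]
Z = sum(terms)
print(Z, log(2))          # both ~0.693
posterior = [t / Z for t in terms]
print(sum(posterior))     # ~1.0
```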
I'm not using this as a prior, I'm using it to update my existing prior (whatever that was). I believe the posterior will be well defined, so long as the prior was.
It would also update you towards 1600 over 2000.
Oh I see. I'm not trying to guess a specific number, I'm trying to update my distribution.
I'm sorry, I'm not sure what you mean. Under Bayesianism this is straightforward.
Note the actual doomsday argument, properly applied, predicts that humanity is most likely to end right now, with the probability dropping in proportion to the total number of humans there have ever been.
To give a simple example why: if you go to a city and see a bus with the number 1546, the number of buses that maximises the chance you would have seen that bus is 1546 buses. At 3000 buses the probability that you would have seen that exact bus is roughly halved. And at 3,000,000 it's about 2000 times less likely. This gives you a Bayesian update across your original probability distribution for how many buses there are.
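For concreteness, those likelihood ratios in two lines (the numbers are just the ones from the example):

```python
# Seeing bus #1546 has probability 1/n if the city has n buses (n >= 1546).
for n in (1546, 3000, 3_000_000):
    print(n, 1546 / n)   # likelihood relative to the n = 1546 hypothesis
```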
Why isn't the fact that software developers spend 3 years not learning all that much (far less than they would in 6 months on the job) a problem?
How? There's no law requiring software developers to have a degree, but employers often still only accept people who do.
Note this would be illegal (under the term that requires you to provide external students with all resources relating specifically to the exam, as opposed to the subject in general).
It would also cause students to appeal, and the statistics would be obvious enough that the appeals committee would investigate, ask for the mark scheme, and quickly find in the students' favour, because the paper clearly contains arbitrary details designed to do this.
Note there's a lot of things that are like this in law, where people could in theory cheat, possibly even within the letter of the law, but they don't because the courts throw the book at them when they do.
And how exactly are universities a good signifier of that? Note I took an external degree from the University of London; even if universities in general were a good signifier of that, this one definitely wasn't, and it did not in any way impact my ability to get a job. No one cared.
As stated in the OP, I expect this to be the end result of the regulations I suggest. The advantage of this approach is that for now MIT can carry on doing its thing, instead of forcing a hard switchover where you stop it being able to assess its students without yet having an equivalently respected replacement.
If Mikhail spends 100 days proving the theorem, and fails, that acts as evidence the theorem is false, so the optimal strategy changes.
Indeed this is always the optimal strategy. Attempt to prove it true till the chance of it being true is less than 50%, then switch.
Under this method you should start off by spending 122 days trying to prove it true, then continuously alternate, so testing the oracle doesn't cost you anything at all.
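A toy sketch of the underlying switch-when-below-50% rule. All of the numbers here are made up purely for illustration; they are not the ones from the puzzle above:

```python
# Each failed day of proving is weak evidence the theorem is false,
# so update after every day and switch once P(true) drops below 50%.
p_true = 0.7   # assumed prior that the theorem is true
p_day = 0.01   # assumed chance a day of work finds the proof, if one exists

day = 0
while p_true >= 0.5:
    day += 1
    # Bayes update after a failed day (a false theorem never yields a proof):
    p_true = p_true * (1 - p_day) / (p_true * (1 - p_day) + (1 - p_true))
print(f"switch to disproving after day {day}, P(true) = {p_true:.3f}")
```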
Additionally, for modern tools, you might be able to continuously track machine settings and include that in the training data.
The particular example given (architecture) doesn't really make sense IMO:
If you're building something and want to save money on architecture, the current solution is to take a boilerplate design, tweak it to the current plot and local building code, and build it. Either AI can do that tweaking well, in which case great, or it can't, in which case the architectural costs are low enough compared to the total cost of the building that it really makes no sense to cut corners there.
If you're doing something bespoke, you'll want something that works, and if the architecture firm you use is cheaper than other bespoke firms, but the final product is subpar, you'll sue. Or at the very least not use them again.
Now this is a nitpick, but I think it might largely apply to the wider point: when you dig into the details it doesn't really make sense.
Interesting!
I wonder what results you get for Gemini 2.5 Pro. Its CoT seems much more structured than that of other thinking models, and I wonder if that increases or decreases the chance it'll mention the hint.
Accountants use features of excel that are not available in the cloud (e.g. VBA) all the time.
You are lucky that you don't need these features (and that's great for you), but assuming that therefore nobody has a legitimate reason to use Windows is just a really silly blind spot. Excel is just one of a huge amount of software, used day in and day out by a huge number of people (many of whom are self-employed, so not using an enterprise laptop), for which Windows is the only sensible option.
That's fine for one-offs, but if, like many, your job is essentially "use Excel", then the simplest solution is to just use Windows, not mess around with emulators or VMs.
A lot of people need to use software that's only available on Windows. I don't, and on the rare occasion I need to check Windows behaviour I use a cloud instance, so I use a Chromebook instead.
Although there are at least a few Jewish-born or half-Jewish cardinals.
(it was a joke, yes 😀)
On the other hand, we know that Jews are very prominent whenever you're selecting for competence, yet almost no cardinals are Jewish, suggesting that maybe competence isn't that important to be a cardinal 🤷?
It is perfectly possible that they directly exchange stocks but denominate prices (and wages, contracts, etc.) in a much more stable unit. The bank takes care of working out how much stock to transfer to make a given fiat-denominated payment.
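A minimal sketch of that settlement step, with made-up names and numbers (the real mechanism would presumably net trades, handle fractional shares, and so on):

```python
# Settle a fiat-denominated bill by transferring the equivalent
# amount of an index-fund holding at today's share price.
def shares_to_transfer(fiat_amount: float, share_price_fiat: float) -> float:
    return fiat_amount / share_price_fiat

print(shares_to_transfer(120.0, 48.0))  # 2.5 shares to settle a 120-unit bill
```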
The quantity of Lean code in the world is orders of magnitude smaller than the quantity of Python code. I imagine that most people reporting good results are using very popular languages. My experience using Gemini 2.5 to write Lean was poor.
Kolmogorov complexity? Your solution takes more bits to specify than the one in the solution (at least if you've already defined a standard library with concepts like primes)?
That seems like an example of C (obscure technology)
To the extent they don't have epidemics or handle them better, and don't elect Trump, it's probably more stable.
I still can't imagine what currency would be better than this, though, because I can't think of a better way to say "I'm just as smart as the market" than to put my entire stake in the market.
Why are you assuming that the unit in which you denominate prices, and the way in which you store your savings are the same thing? Even on earth, most wealthy people only keep a small percentage of their net worth in cash.
These have two different purposes, so are done in two different ways.
Well GDP is about production, which should be relatively stable.
Stock prices are related to expectations about the future, which are far more variable. They essentially measure the interest-adjusted value of future profit, and small changes in revenue/costs can lead to huge changes in profit, since profit is a small percentage of revenue.
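To see the leverage with toy numbers (all hypothetical):

```python
# Profit is a thin slice of revenue, so a small revenue change
# is a large profit change.
revenue, costs = 100.0, 90.0
print(revenue - costs)          # profit = 10
print(revenue * 0.95 - costs)   # 5% less revenue -> profit = 5, a 50% drop
```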
The main reason economists like inflation is because it allows companies to lower real wages of underperforming workers without having to actually give them a pay cut.
It also has other advantages - it allows a central bank to set a negative real interest rate, giving it more flexibility in the use of interest rates as a tool.
Deflation meanwhile is considered very bad. Some of the reasons wouldn't be relevant here, but the key one is that meeting agreed on contracts becomes much more expensive than expected. If the wages you pay to your farm workers are tied to the stock market, but the income you get from selling farm produce is not, you have a real problem.
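The farm example with toy numbers (all hypothetical):

```python
# Wages agreed in market-index units, produce income in fiat.
# A rising index makes the wage contract more expensive to meet.
wage_bill_index_units = 80.0
produce_income_fiat = 100.0
for index_level in (1.00, 1.25):   # the market rises 25%
    profit = produce_income_fiat - wage_bill_index_units * index_level
    print(index_level, profit)     # profit falls from 20 to 0
```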
Sorry if I misunderstood, that's how it came across to me. I didn't know that was a self-deprecating quote, because there's no link to its origin.
The relevant index is the FTSE global all cap. It doesn't include privately traded companies, but then again neither would Dath Ilan be able to.
It has far lower growth than the S&P so is worse on that metric. Around 2021 it fell by over a quarter in the space of a year, and over the last week it's gone down by 3%. Depending on how competitive the market is, these aren't numbers that can just be rounded away.
The reason why stability is more important for prices than savings is that they have different purposes. If my net worth goes down by 3%, that has very little impact on my day-to-day choices. On the other hand, if one shop is 3% more expensive than another shop, that will impact which shop I go to, and if one good gets 3% more expensive I will consider substitutes. This is much harder to do if goods fluctuate in price in step with the stock market. You have an intuition for how much a coffee should cost. Eliezer's proposal is to make that price even more stable to help your intuition out; your proposal makes it much less intuitive.
I also imagine that investment advice in Dath Ilan is to split your savings between fiat, bonds, ETFs and other investments depending on appetite for risk, just like on earth. They probably have fewer hedge funds.
My experience is that the biggest factors are how large the codebase is, and whether I can zoom into a specific spot where the change needs to be made and implement it divorced from all the other context.
Since the answers to those in my day job are "large" and "only sometimes", the maximum benefit of an LLM to me is highly limited. I basically use it as a better search engine for things I can't remember off hand how to do.
Also, I care about the quality of the code I commit (this code is going to be continuously worked on), and I write better code than the LLM, so I tend to rewrite it all anyway, which again allows the LLM to save me some time, but severely limits the potential upside.
When I'm writing one off bash scripts, yeah it's vibe coding all the way.
It is perfectly possible that in Dath Ilan all bank accounts are by default denominated in a global ETF, and you only exchange that for fiat currency at the point of use. It still makes sense to denominate prices in fiat currency.
Firstly, a word of advice: this would come off better if it were less needlessly antagonistic. "Here's a cool idea I had" is a lot more pleasant than "isn't this guy stupid for not thinking of the cool idea I had".
Also I don't think this is obviously a good idea. The S&P 500 swings hugely in a small space of time. Prices, meanwhile, are meant to be stable; indeed this is the chief purpose of prices, as a stable means of comparing the costs of different goods. If each unit of currency denotes a fixed percentage of all publicly traded firms, then each day prices will need to be reset according to investor confidence. This makes it much harder to know whether a shop is providing you a good price or not.
Even worse, you can't automatically adjust this, because there's no way to measure investor confidence - denominated in this currency, the value of the ETF is by definition approximately constant.
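A toy illustration of the repricing problem (all numbers made up): a coffee whose fiat price is flat gets a new price in index units every time the index moves.

```python
# A coffee with a stable fiat price has a volatile price in
# "market-share" units whenever the index swings.
coffee_fiat = 3.00
for index_level in (100.0, 97.0, 104.0, 99.0):   # assumed daily index levels
    print(f"index {index_level}: coffee costs {coffee_fiat / index_level:.4f} index units")
```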
You are of course right that most Dath Ilanis would keep most of their savings in ETFs. But they will exchange it for fiat currency when they need to buy things, and with good reason.
Got it. So in a way it's more like a mathematical conjecture than a philosophical theory. We posit a statistical result, we have some toy examples which provide us with some intuition for it, but right now we're not able to prove the general case. We hope to do so in the future, and people are actively working on doing so.
Also isn't many worlds a straightforward interpretation of decoherence? Decoherence says that regions of large complex superpositions stop interfering with each other, and hence such regions will act classically, many worlds just says that the regions you're not in presumably still exist? Or are there some extra hoops there?
Am I correct that decoherence isn't really an interpretation of quantum mechanics, but an empirically verified statistical consequence of the standard model?
Didn't work consistently for me, even when I gave it multiple hints. YMMV.
I think it's convincing that the effect, if it exists, is much smaller than the one for weight. The graph for weight is so obvious you don't even need to do statistics.
If the code uses other code you've written, how do you ensure all dependencies fit within the context window? Or does this only work for essentially dependencyless code?
But then why is it outputting those kinds of outputs, as opposed to anything else?
Strongly agree, made the same case here: https://www.lesswrong.com/posts/rTveDBBavah4GHKxk/the-case-for-corporal-punishment
Out of interest, why not test this by generating one paragraph of the scratchpad, paraphrasing it, and then continuing thinking using the paraphrased scratchpad, doing this after each additional paragraph?
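A hedged sketch of what I mean - `generate` and `paraphrase` are stand-ins for whatever model-call wrappers you actually have:

```python
# Generate one reasoning paragraph at a time, but only ever feed the
# model back a paraphrase of its own scratchpad so far.
def run_with_paraphrased_scratchpad(prompt, generate, paraphrase, max_paragraphs=10):
    scratchpad = []
    for _ in range(max_paragraphs):
        paragraph = generate(prompt, scratchpad)   # next reasoning paragraph
        scratchpad.append(paraphrase(paragraph))   # keep only the paraphrase
    return scratchpad
```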
I agree there's better ways to do this, but:
a) the point is that even the brute-force, stupid ways are doable and would likely work. Obviously try the cleverer ways first.
b) the drop in fertility rate is so bad and so destructive that if we can't get this done the good way, even the dysgenic way is very much worth it.
The goal is writing good software to solve a particular problem. Using Haskell to write an SPA is not going to work well whether you're doing it for someone else or for yourself (assuming you care about the product and it's not just a learning/fun exercise). It is a perfectly valid decision to say that you'll only work on products where Haskell is a good fit, but I would strongly recommend against using Haskell where it's not a good fit in a production setting, and would consider it low-key fraud to do so where somebody else is paying you for your time.
but terrible for job satisfaction, making you depressed or angry every time you have to use that silly language/tool.
My experience is that once you get over yourself, and put in the effort to properly understand the language, best practices, etc., you might not love the language, but you'll find it's actually fine. It's a popular language, people use it, and they've found ways to sand down the rough edges and make the good bits shine. Sure, it's got problems, but it's not as soul-destroying as it looked at first sight, and you'll likely learn a lot anyway.
(I'm not talking about a case where a company forces you to use a deprecated language like COBOL or ColdFusion. I'm talking about a case where you pick the language because it's the best tool for the job).
This is in general good career advice. You'll lose out on a lot of opportunities if you refuse to put yourself in uncomfortable situations.
Claude already has an external memory, as do most AI agents.
In my experience truly powerful developers are able to do this, but even many Google L5s will just look up this code every time.
Indeed I am a Google L5, and I usually do look this stuff up (or ChatGPT it). I think it's more important to remember roughly what libraries do at a high level (what problems they solve, how they differ from other solutions, what can't they do) than trivia about how exactly you use them.
You are right that writing glue code is a large part of software engineering, and that knowing what the libraries do is an important part of that. But once you know (or think you know) what the libraries do, how quickly do you bash out the code that does that? Do you struggle, or does it just come naturally?
And as faul_sname pointed out, often the quickest way to understand what the library does is to look at it. Is that something you're capable of doing, or are you forced to hope the documentation addresses it?
Other times you want to write a quick test that the library does what you expect. Is that going to take you half an hour, or 2 minutes?
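The kind of two-minute check I have in mind (the library and question here are just an example):

```python
# Quick sanity check: does json.dumps(sort_keys=True) order keys as expected?
import json
print(json.dumps({"b": 1, "a": 2}, sort_keys=True))   # -> {"a": 2, "b": 1}
```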