Comments
I've been convinced! I'll let my wife know as soon as I'm back from Jamaica!
Similarly, the point about trash also ignores the larger context. Picking up my own trash has much less relationship to disgust, or germs, than picking up other people's trash.
Agreed, but that's exactly the point I'm making. Once you apply insights from rationality to situations outside spherical trash in a vacuum-filled park, you end up with all sorts of confounding effects that make the insights less applicable. Your point about germs and my point about fixing what you break are complementary, not contradictory.
I think this post is missing the major part of what "metarational" means: acknowledging that the kinds of explicit principles and systems humans can hold in working memory and apply in real time are insufficient for capturing the full complexity of reality, having multiple such principles and systems available anyway, and skillfully switching among them in appropriate contexts.
This sounds to me like a semantic issue? "Metarational" isn't exactly a standard term AFAIAA (I just made it up on the spot), and it looks like you're using it to refer to a different concept than I am.
Sure it is, if you accept a whole bunch of assumptions. Or it could just not do that.
Reading this reminds me of Scott Alexander in his review of "what we owe the future":
But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.
You come up with a brilliant simulation argument as to why the AI shouldn't just do what's clearly in its best interests. And maybe the AI is neurotic enough to care. But in all probability, for whatever reason, it doesn't. And it just goes ahead and turns us into paperclips anyway, ignoring a person running behind it saying "bbbbbbut the simulation argument".
I'm not sure why those shouldn't be included? If someone uses my AI to perform 500 million dollars of fraud, then I should probably have been more careful releasing the product.
The rest of the family is still into Mormonism, and his wife tried to sue him for millions; she lost (the accusations were false).
In case you're interested in following this up, Tracing Woodgrains on the accusations: https://x.com/tracewoodgrains/status/1743775518418198532
His approach to achieving immortality seems to be similar to someone attempting to reach the moon by developing higher altitude planes. He's using interventions that seem likely to improve health and lifespan by a few percentage points, which is great, but can't possibly get us to where he wants to go.
My assumption is that any real solution to mortality will look more like "teach older bodies to self repair the same way younger bodies do" than "eat this diet, and take these supplements".
I very much did not miss that.
I would consider this one of the most central points to clarify, yet the OP doesn't discuss it at all, and your response to it being pointed out was 3 sentences, despite there being ample research on the topic which points strongly in the opposite direction.
Where did I say that?
I never said you said it, I said the book contains such advice:
Lintern suggests that chemotherapy is generally a bad idea.
Now it can be very frustrating to hear "you can't have an opinion on this because you're not an expert", and it sounds very similar to credentialism.
But it's not. If you'd demonstrated a mastery of the material and come up with a convincing description of the current evidence for the DNA theory and why you believe it's incorrect (evidence not pulled straight out of the book you're reviewing), I wouldn't care what your credentials are.
But you seem to have missed really obvious consequences of the fungi theory, like, "wouldn't it be infectious then", and all the stuff in J Bostock's excellent comment. At that point it seems like you've read a book by a probable crank, haven't even thought through the basic counterarguments, and are spreading it around despite it containing some potentially pretty dangerous advice like "don't do chemotherapy". This is not the sort of content I find valuable on LessWrong, so I heavily downvoted.
I want to take a look at the epistemics of this post, or rather, whether this post should have been written at all.
In 95% of cases someone tearing down the orthodoxy of a well established field is a crank. In another 4% of cases they raise some important points, but are largely wrong. In 1% of cases they are right and the orthodoxy has to be rewritten from scratch.
Now these 1% of cases are extremely important! It's understandable why the rationalist community, which has a healthy skepticism of orthodoxy, would be interested in finding them. And this is probably a good thing.
But you have to have the expertise with which to do so. If you do not have an extremely solid grasp of cancer research, and you highlight a book like this, 95% of the time you are highlighting a crank, and that doesn't do anyone any good. From what I can make out from this post (and correct me if I'm wrong) you do not have any such expertise.
Now there are people on LessWrong who do have the necessary expertise, and I would value it if they were to spend 10 seconds looking at the synopsis and say either "total nonsense, not even worth investigating" or "I'll delve into that when I get the time". But for anyone who doesn't have the expertise, your best bet is just to go with the orthodoxy. There's an infinite amount of bullshit to get through before you find the truth, and a book review of probable bullshit doesn't actually help anyone.
And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.
Why ~0 and not exactly 0? Are these not perfectly orthogonal? If not, would it be possible to modify them slightly so they are perfectly orthogonal, then repeat, just to exclude Fabien Roger's hypothesis?
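For concreteness, a minimal sketch of the kind of adjustment I have in mind, assuming the steering vectors are ordinary numpy arrays (the names here are made up for illustration):

```python
import numpy as np

def make_exactly_orthogonal(v, w):
    """Project out of w its component along v, so the result is orthogonal
    to v up to floating point error."""
    v_hat = v / np.linalg.norm(v)
    return w - np.dot(w, v_hat) * v_hat

# Hypothetical steering vectors with small but non-zero cosine similarity.
rng = np.random.default_rng(0)
v1 = rng.normal(size=4096)
v2 = rng.normal(size=4096)

v2_orth = make_exactly_orthogonal(v1, v2)
cos_sim = np.dot(v1, v2_orth) / (np.linalg.norm(v1) * np.linalg.norm(v2_orth))
print(cos_sim)  # ~1e-17, i.e. exactly orthogonal up to float error
```

If the orthogonalized vector still steers in essentially the same way, that would seem to rule out the "same direction plus a bit of noise" explanation.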
It's not that we can't colonise Alaska, it's that it's not economically productive to do so.
I wouldn't expect colonising Mars to be economically productive, but instead to be funded by other sources (essentially charity).
I think the chances that something which doesn't immediately kill humanity, and isn't actively trying to kill humanity, polishes us off for good are, at the very least, pretty low.
Humans have survived as hunter-gatherers for a million years. We've thrived in every possible climate under the sun. We're not just going to roll over and die because civilisation has collapsed.
Not that this is much of a comfort if 99% of humanity dies.
Thanks for the point. I think I'm not really talking to that sort of person? My intended audience is the average American who views the USA as mostly a force for good, even if its foreign policy can be misguided at times.
Historically European Jews were money lenders since Christians were forbidden to charge interest. This was one of the major forces behind the pogroms, since killing the debt holders saved you having to pay up.
On a country-level scale this is significant. If you live in a dangerous area, you want the USA to have invested a lot of money in you, which they will lose if you are ever conquered.
People claiming that debts are inherited by the estate: this only applies to formal, legible debts. If you lend money informally (possibly because there's a criminal element involved), or the debt is an informal obligation to provide services/respect then once the debt holder dies it's often gone.
Even for many types of legible debt, once the debt holder dies it's very difficult for the estate to know the debt exists or its status. If old Joey Richbanks lends me 10 million dollars, witnessed and signed in a contract, who says his children are ever going to find the contract? And if they do, and I claim I already paid it, how certain are they that I'm lying? And how likely are they to win the court case if so?
Putting on my reasonable Israeli nationalist hat:
"Of course, granting a small number of well behaved Palestinians citizenship in Israel is not a problem, and as it happens that occurs at a small scale all the time (e.g. family unification laws, east Jerusalem residents).
But there are a number of issues with that:
- No vetting is perfect; some terrorists/bad cultural fits will always slip through the gaps.
- Even if this person is a great cultural fit, there's no guarantee their children will be, or that they won't pull in other people through family unification laws.
- There's a risk of a democratic transition - the more Arab voters, the more power they have, the more they can open the gates to more Arabs, till Israel ceases to be a Jewish state.
- We don't trust the government to only keep it at a small scale.
Now let's turn it around:
Why should we do this? What do we have to gain for taking on this risk?"
There seems to be a huge jump from "there's no moat around generative AI" (which makes sense, since how to make one is publicly known, and the secret sauce is just about improving performance) to... all the other stuff, which seems completely unrelated?
I agree that making places which will definitely be part of Israel in any future two-state solution denser, whilst not increasing their footprint or access to neighbouring land, is not inherently problematic.
But give people an inch and they will take a mile. From the US perspective it's far easier to just deliver an ultimatum on settlement building full stop. Besides, the fewer settlers, the fewer troublemakers, so that's another advantage.
Also that provides an incentive for those who live in the settlements to come to an agreement on a two state solution since that will free up their land for further building.
I agree that they should turn a blind eye to small-scale refurbishment/rebuilding of existing housing stock, but should object to any greenfield building or major projects.
I think one way of framing it is whether the improvements to itself outweigh the extra difficulty in eking out more performance. Basically, does the performance converge or diverge?
This makes sense, but isn't AlphaFold available for use? Is it possible to verify this one way or another experimentally?
Might be worth posting this as its own question for greater visibility.
Possibly, but some of the missteps just feel too big to ignore. Like what on earth is going on in the second half of the book?
I greatly enjoyed The Metropolitan Man, but feel like web serials, especially fan fiction, are their own genre and deserve their own post.
It's a prequel in the loosest possible sense. In theory they could be set in two different universes and it wouldn't make much of a difference.
Oh (not a spoiler) the second narrator is obviously not being entirely truthful.
That's totally a spoiler :-), but for me it was one of the most brilliant twists in the book. You have this stuff that feels like the author is doing really poor sci-fi, and then it's revealed that the author is perfectly aware of that and is making a point about translation.
Thanks, really appreciate the feedback! Maybe I'll give The Three Body Problem another chance.
What about solar power? If you build a data center in the desert, buy a few square km of adjacent land and tile it with solar panels, presumably that can be done far quicker and with far less regulation than building a power plant, and at night you can use off-peak grid electricity at cheaper rates.
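Rough back-of-envelope numbers (my assumptions, not measured figures) suggest a few square km is indeed the right ballpark:

```python
# Back-of-envelope with assumed round numbers, not measured figures:
peak_insolation_w_per_m2 = 1000   # clear desert sky, midday
panel_efficiency = 0.20           # typical commodity panels
capacity_factor = 0.25            # averaging over night and weather

area_m2 = 2 * 1_000_000           # 2 square km of panels
peak_mw = area_m2 * peak_insolation_w_per_m2 * panel_efficiency / 1e6
average_mw = peak_mw * capacity_factor
print(peak_mw, average_mw)        # ~400 MW peak, ~100 MW average
```

If those numbers are roughly right, ~100 MW average is in the ballpark of a large data center.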
Vg pna'g or gur pnfr gung jnf gur gehyr rssbeg, fvapr gur cerfvqrag pubfr gb tb gb fcnpr vafgrnq bs gur bgure rssbeg.
The problem is that the temperature of the earth rises fairly fast as you dig downwards. How fast depends on location, but it's always significant enough that there's a pretty hard limit on how cold you can go.
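As a rough illustration, assuming a typical geothermal gradient of around 25 °C per km (it varies a lot by location):

```python
# Temperature at depth, using assumed ballpark figures:
surface_temp_c = 15
gradient_c_per_km = 25   # can be considerably higher in some locations
for depth_km in (1, 2, 3):
    print(depth_km, surface_temp_c + gradient_c_per_km * depth_km)
# 1 km -> ~40 C, 2 km -> ~65 C, 3 km -> ~90 C
```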
Reserve soldiers in Israel are paid their full salaries by national insurance. If they are also able to work (which is common, as the IDF isn't great at efficiently using its manpower), they can legally work and will get paid by their company on top of whatever they receive from national insurance.
Given how often sensible policies aren't implemented because of their optics, it's worth appreciating those cases where that doesn't happen. The biggest impact of a war on Israel is to the economy, and anything which encourages people to work rather than waste time during a war is a good policy. But it could so easily have been rejected because it implies soldiers are slacking off from their reserve duties.
Not video transcripts - video. One frame of video contains much more data than one text token, and you can train an AI as a next-frame predictor much as you can a next-token predictor.
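A very crude size comparison, using assumed numbers:

```python
# Assumed, illustrative figures only:
frame_bytes = 1920 * 1080 * 3      # one raw 1080p RGB frame, roughly 6 MB
token_bytes = 4                    # a text token is roughly a few characters
print(frame_bytes / token_bytes)   # ~1.5 million times more raw data per frame
```

Frames are of course highly redundant and compress well, so the useful information per frame is much smaller, but the raw firehose is enormous.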
I'm guessing that the sort of data that's crawled by Google but not by Common Crawl is usually low quality? I imagine that if somebody put any effort into writing something then they'll put effort into making sure it's easily accessible, and the stuff that's harder to get to is usually machine generated?
Of course that's excluding all the data that's private. I imagine that once you add private messages (e.g. WhatsApp, email) + internal documents that ends up being far bigger than the publicly available web.
I'm interested if you have a toy example showing how Simpson's paradox could have an impact here?
I assume that "has a placebo"/"doesn't have a placebo" is a binary variable, and I also assume that the number of people in each arm in each experiment is the same. I can't really see how you would end up with Simpson's paradox with that setup.
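To spell out my intuition, here's a minimal toy sketch (made-up numbers, and assuming equal arm sizes within each experiment as above):

```python
import numpy as np

# (participants per arm, success rate with placebo arm, success rate without)
experiments = [
    (100, 0.30, 0.20),    # small experiment
    (1000, 0.80, 0.75),   # large experiment
]

diffs = [with_p - without_p for _, with_p, without_p in experiments]
weights = [n for n, _, _ in experiments]

# With equal arm sizes, the pooled difference is just the sample-size-weighted
# average of the per-experiment differences, so it can't have the opposite
# sign to every individual experiment.
pooled = np.average(diffs, weights=weights)
print(diffs, pooled)   # [0.10, 0.05] -> ~0.055, same sign
```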
with the best models trained on close to all the high quality data we’ve got.
Is this including images, video and audio? Or just text?
A dictionary defines all words circularly, but of course nobody learns all words from a dictionary - the assumption is you're looking up a small number of words you don't know.
Humans learn their first few words by seeing how they're used in relation to objects, and the rest can be derived from there without needing circularity.
However, the dictionary provides very tight constraints on what words can mean. Whatever the words "wood", "is", "made", "from", and "trees" mean, the sentence "wood is made from trees" must be true. The vast majority of all possible meanings fail this test. Using only circular definitions, is it possible to constrain word meanings so tightly that there's only one possible model which fits those constraints?
LLMs seem to provide a resounding yes to that question. First-generation LLMs only ever saw text and had no hard-coded knowledge, so they could only figure out what words meant based on how they're used in relation to other words, yet they understood the meaning of words sufficiently well to reason about the physical properties of the objects those words represent.
I'm taking this article as being predicated on the assumption that AI drives humans to extinction. I.e. given that an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self defense, because they might turn it off first, but sees no such threat in plants/animals.
Related: https://www.scattered-thoughts.net/writing/small-tech/
I frequently see debates about whether it's better to be a cog at a giant semi-monopoly, or to take investment money in the hopes of one day growing to be head cog at a giant semi-monopoly.
Role models matter. So I made a list of small companies that I admire. Neither giants nor startups - just people making a living writing software on their own terms.
Makes sense thanks!
I imagine a startup of this ilk could be based in Prospera; it wouldn't be a problem for the wealthy few to travel there for personalised treatment.
I also imagine that with a lighter regulatory regime, no need to scale up production, and no need for lengthy trials, developing a monoclonal antibody would be much quicker and cheaper. Consider how quickly COVID vaccines were developed compared to how long it took before they were approved for use.
The other hurdles sound significant though.
When you say it's not yet practical, are we missing some key steps, or could it be done with current technology at a high enough cost, just not at scale?
I imagine a startup which cured rich people's cancers on a case by case basis would have a lot of customers, which would help drive prices down as the technology improved.
My grandmother suffered from dementia. For a period of a couple of years I would call her every Friday, and we would have literally the exact same conversation each time, including her making the same jokes at the same points in the conversation, using the same phrasing. I concluded that people are in fact pretty deterministic, even over the long term.
Your intuition is correct when the jet has already passed by - those are very hard to catch and shoot down. But usually you detect an aircraft when it's heading towards you, and all the missile has to do is intercept it. It doesn't even have to be faster than the jet (unless the jet detects it in time and does a 180).
But I discussed that in the post. All you need are enough cameras + processing power. Both are cheap.
To be honest, this just feels like the Euthyphro Dilemma all over again. "Good" is defined by what God does. God chooses to run the laws of physics. Laws of physics are "Good". Who gives a damn?
Also this is directly contradictory to Christianity, since the core beliefs of Christianity all assume some level of non-natural intervention in the world (e.g. resurrection of Christ). Same for almost all other religions. So who is this even for?
Lens and CCD technology is not trivial at those speeds and insane angular resolution.
But we can easily capture a picture of a fighter jet when it's close. The further away it is, the higher the angular resolution required, but also the lower the angular speed - so do those cancel out to make it not much harder, or does it not work like that?
Note you don't even need high resolution in all directions, just high enough to see whether it's worth zooming in/switching to a better camera.
Why would you need large telescopes?
The naked eye has an angular resolution of about 30m at 100km; you need something slightly better. A small lens should do it. Cameras + zoom lenses are well understood, mass-produced components. And this is a highly parallelizable task.
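Quick check of the numbers I'm relying on, assuming roughly 1 arcminute of resolution for the naked eye:

```python
import math

naked_eye_rad = math.radians(1 / 60)     # ~1 arcminute, ~2.9e-4 rad
distance_m = 100_000
print(naked_eye_rad * distance_m)        # ~29 m resolved at 100 km

# To resolve a ~15 m fighter at 100 km you need roughly half that angle,
# i.e. only modestly better optics than the naked eye.
required_rad = 15 / distance_m
print(required_rad)                      # ~1.5e-4 rad, about half an arcminute
```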
But then no need for stealth at all?
I wasn't referring to the A-10, but to the use of e.g. F-35s in ground support roles - as heavily practised by the IDF, for example.