Chromebooks replace the caps lock key with a search key, which is functionally equivalent to the Windows key on Windows. E.g. search+right goes to the end of the line.
Yep, and when you run out of letters in a section you use the core letter from the section with a subscript.
Also:
m: used for a second whole number when n is already taken.
p: used for primes
q: used for a second prime.
Only if the aim of the AI is to destroy humanity. Which is possible but unlikely. Whereas by instrumental convergence, all AIs, no matter their aims, will likely seek to destroy humanity and thereby reduce risk and competition for resources.
I would have concerns about suitably generic, flexible and sensitive humanoid robots, yes.
One thing to consider is how hard an AI needs to work to break out of human dependence. There's no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive.
If limited nanofactories exist, it's much easier to bootstrap them into whatever you want than it is when those nanofactories don't exist and robotics hasn't developed enough for you to create one without the human touch.
Presumably because there's a hope that having a larger liver could help people lose weight, which is something a lot of people struggle to do?
I imagine that part of the difference is because orcas are hunters, and need much more sophisticated sensors + controls.
A gigantic jellyfish wouldn't have the same number of neurons as a similarly sized whale, so it's not just about size, but how you use that size.
Douglas Adams answered this long ago of course:
For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.
Thanks - I've overhauled that section. Note a Condorcet method is not sufficient here, as the counter-example I give shows.
Why? That's a fact about voting preferences in our toy scenario, not a normative statement about what people should prefer.
Thanks for this!
What are the chances of a variable bypass engine at some point? Any opinions?
Counterpoint: when I was about 12, I was too old to collect candy at my synagogue on Simchat Torah, so I would beg a single candy from someone, then trade it up (Dutch book style) with naive younger kids until I had a decent stash. I was particularly pleased whenever my traded-up stash included the original candy.
The single most useful thing I use LLMs for is telling me how to do things in bash. I use bash all the time for one-off tasks, but not quite enough to build familiarity with it + learn all the quirks of the commands + language.
90% of the time it gives me a working bash script first shot, each time saving me between 5 minutes and half an hour.
Another thing LLMs are good at is taking a picture of, e.g., a screw, and asking what type of screw it is.
They're also great at converting data from one format to another: here's some JSON, convert it into YAML. Now prototext. I forgot to mention, use maps instead of nested structs, and use Pascal case. Also the JSON is hand-written and not actually legal.
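(For the clean happy path you could of course script this deterministically - a minimal sketch in Python, assuming PyYAML is installed and using made-up example data; the LLM earns its keep on the messy variants a script like this would choke on, like illegal hand-written JSON:)

```python
import json

import yaml  # PyYAML


def to_pascal_case(key: str) -> str:
    # "user_id" -> "UserId"
    return "".join(part.capitalize() for part in key.split("_"))


def convert(node):
    # Recursively rewrite dict keys to Pascal case; lists and scalars pass through.
    if isinstance(node, dict):
        return {to_pascal_case(k): convert(v) for k, v in node.items()}
    if isinstance(node, list):
        return [convert(item) for item in node]
    return node


raw = '{"user_id": 7, "settings": {"dark_mode": true}}'
print(yaml.safe_dump(convert(json.loads(raw))))
```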
Similarly they're good at fuzzy data-querying tasks: I received this giant error response including a full stack trace and lots of irrelevant fields; where's the actual error, and what lines of the file should I look at?
Buyers have to pay a lot more, but sellers receive a lot more. It's not clear that buyers at high prices are worse off than sellers, so its egalitarian impact is unclear.
Whereas when you stand in line, that time you wasted is gone. Nobody gets it. Everyone is worse off.
I've been convinced! I'll let my wife know as soon as I'm back from Jamaica!
Similarly, the point about trash also ignores the larger context. Picking up my own trash has much less relationship to disgust, or germs, than picking up other people's trash.
Agreed, but that's exactly the point I'm making. Once you apply insights from rationality to situations outside spherical trash in a vacuum-filled park, you end up with all sorts of confounding effects that make the insights less applicable. Your point about germs and my point about fixing what you break are complementary, not contradictory.
I think this post is missing the major part of what "metarational" means: acknowledging that the kinds of explicit principles and systems humans can hold in working memory and apply in real time are insufficient for capturing the full complexity of reality, having multiple such principles and systems available anyway, and skillfully switching among them in appropriate contexts.
This sounds to me like a semantic issue? "Metarational" isn't exactly a standard term AFAIAA (I just made it up on the spot), and it looks like you're using it to refer to a different concept from me.
Sure it is, if you accept a whole bunch of assumptions. Or it could just not do that.
Reading this reminds me of Scott Alexander in his review of "What We Owe the Future":
But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.
You come up with a brilliant simulation argument as to why the AI shouldn't just do what's clearly in its best interests. And maybe the AI is neurotic enough to care. But in all probability, for whatever reason, it doesn't. And it just goes ahead and turns us into paperclips anyway, ignoring a person running behind it saying "bbbbbbut the simulation argument".
I'm not sure why those shouldn't be included? If someone uses my AI to perform 500 million dollars of fraud, then I should probably have been more careful releasing the product.
The rest of the family is still into Mormonism. His wife tried to sue him for millions, and she lost (the accusations were false).
In case you're interested in following this up, Tracing Woodgrains on the accusations: https://x.com/tracewoodgrains/status/1743775518418198532
His approach to achieving immortality seems to be similar to someone attempting to reach the moon by developing higher altitude planes. He's using interventions that seem likely to improve health and lifespan by a few percentage points, which is great, but can't possibly get us to where he wants to go.
My assumption is that any real solution to mortality will look more like "teach older bodies to self repair the same way younger bodies do" than "eat this diet, and take these supplements".
I very much did not miss that.
I would consider this one of the most central points to clarify, yet the OP doesn't discuss it at all, and your response to it being pointed out was 3 sentences, despite there being ample research on the topic which points strongly in the opposite direction.
Where did I say that?
I never said you said it, I said the book contains such advice:
Lintern suggests that chemotherapy is generally a bad idea.
Now it can be very frustrating to hear "you can't have an opinion on this because you're not an expert", and it sounds very similar to credentialism.
But it's not. If you'd demonstrated a mastery of the material, and come up with a convincing description of the current evidence for the DNA theory and why you believe it's incorrect (evidence not pulled straight out of the book you're reviewing), I wouldn't care what your credentials are.
But you seem to have missed really obvious consequences of the fungi theory, like, "wouldn't it be infectious then", and all the stuff in J Bostock's excellent comment. At that point it seems like you've read a book by a probable crank, haven't even thought through the basic counterarguments, and are spreading it around despite it containing some potentially pretty dangerous advice like "don't do chemotherapy". This is not the sort of content I find valuable on LessWrong, so I heavily downvoted.
I want to take a look at the epistemics of this post, or rather, whether this post should have been written at all.
In 95% of cases, someone tearing down the orthodoxy of a well-established field is a crank. In another 4% of cases they raise some important points, but are largely wrong. In 1% of cases they are right, and the orthodoxy has to be rewritten from scratch.
Now these 1% of cases are extremely important! It's understandable why the rationalist community, which has a healthy skepticism of orthodoxy, would be interested in finding them. And this is probably a good thing.
But you have to have the expertise with which to do so. If you do not have an extremely solid grasp of cancer research, and you highlight a book like this, 95% of the time you are highlighting a crank, and that doesn't do anyone any good. From what I can make out from this post (and correct me if I'm wrong) you do not have any such expertise.
Now there are people on LessWrong who do have the necessary expertise, and I would value it if they were to spend 10 seconds looking at the synopsis and say either "total nonsense, not even worth investigating" or "I'll delve into that when I get the time". But for anyone who doesn't have the expertise, your best bet is just to go with the orthodoxy. There's an infinite amount of bullshit to get through before you find the truth, and a book review of probable bullshit doesn't actually help anyone.
And it turns out that this algorithm works and we can find steering vectors that are orthogonal (and have ~0 cosine similarity) while having very similar effects.
Why ~0 and not exactly 0? Are these not perfectly orthogonal? If not, would it be possible to modify them slightly so they are perfectly orthogonal, then repeat, just to exclude Fabien Roger's hypothesis?
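(To be concrete about the modification I have in mind - a minimal sketch, assuming the steering vectors are numpy arrays and that the previously found vectors are already mutually orthogonal; the names are mine, not from the post:)

```python
import numpy as np


def orthogonalize(v: np.ndarray, previous: list[np.ndarray]) -> np.ndarray:
    # Gram-Schmidt step: subtract v's projection onto each previously found
    # steering vector, leaving a vector exactly orthogonal to all of them.
    for u in previous:
        v = v - (np.dot(v, u) / np.dot(u, u)) * u
    return v / np.linalg.norm(v)
```

One could then re-check whether the exactly-orthogonal vector still has the same steering effect.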
It's not that we can't colonise Alaska, it's that it's not economically productive to do so.
I wouldn't expect colonising Mars to be economically productive, but instead to be funded by other sources (essentially charity).
I think the chances that something that doesn't immediately kill humanity, and isn't actively trying to kill humanity, polishes us off for good are, at the very least, pretty low.
Humans have survived as hunter gatherers for a million years. We've thrived in every possible climate under the sun. We're not just going to roll over and die because civilisation has collapsed.
Not that this is much of a comfort if 99% of humanity dies.
Thanks for the point. I think I'm not really talking to that sort of person? My intended audience is the average American who views the USA as mostly a force for good, even if its foreign policy can be misguided at times.
Historically, European Jews were moneylenders, since Christians were forbidden to charge interest. This was one of the major forces behind the pogroms, since killing the debt holders saved you from having to pay up.
On a country-level scale this is significant. If you live in a dangerous area, you want the USA to have invested a lot of money in you, which they will lose if you are ever conquered.
To the people claiming that debts are inherited by the estate: this only applies to formal, legible debts. If you lend money informally (possibly because there's a criminal element involved), or the debt is an informal obligation to provide services/respect, then once the debt holder dies it's often gone.
Even for many types of legible debt, once the debt holder dies it's very difficult for the estate to know the debt exists or its status. If old Joey Richbanks lends me 10 million dollars, witnessed and signed in a contract, who says his children are ever going to find the contract? And if they do, and I claim I already paid it, how certain are they that I'm lying? And how likely are they to win the court case if so?
Putting on my reasonable Israeli nationalist hat:
"Of course, granting a small number of well behaved Palestinians citizenship in Israel is not a problem, and as it happens that occurs at a small scale all the time (e.g. family unification laws, east Jerusalem residents).
But there's a number of issues with that:
- No vetting is perfect; some terrorists/bad cultural fits will always slip through the gaps.
- Even if this person is a great cultural fit, there's no guarantee their children will be, or that they won't pull in other people through family unification laws.
- There's a risk of a democratic transition - the more Arab voters, the more power they have, the more they can open the gates to more Arabs, till Israel ceases to be a Jewish state.
- We don't trust the government to only keep it at a small scale.
Now let's turn it around:
Why should we do this? What do we have to gain for taking on this risk?"
There seems to be a huge jump from "there's no moat around generative AI" (which makes sense, as how to make one is publicly known, and the secret sauce is just about improving performance) to... all the other stuff, which seems completely unrelated?
I agree that making places which will definitely be part of Israel in any future two-state solution denser, whilst not increasing their footprint or access to neighbouring land, is not inherently problematic.
But give people an inch and they will take a mile. From the US perspective it's far easier to just deliver an ultimatum on settlement building, full stop. Besides, the fewer settlers, the fewer troublemakers, so that's another advantage.
Also, that provides an incentive for those who live in the settlements to come to an agreement on a two-state solution, since that will free up their land for further building.
I agree that they should turn a blind eye to small-scale refurbishment/rebuilding of existing housing stock, but should object to any greenfield building or major projects.
I think one way of framing it is whether the improvements to itself outweigh the extra difficulty of eking out more performance. Basically, does the performance converge or diverge?
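(A toy model of what I mean - the numbers and structure are purely illustrative, not from the post: suppose each round of self-improvement scales the next round's capability gain by a factor k. Gains that shrink (k < 1) sum to a finite limit; gains that grow (k >= 1) compound without bound:)

```python
def total_capability(k: float, steps: int = 1000) -> float:
    # Each round of self-improvement scales the next gain by k.
    capability, gain = 0.0, 1.0
    for _ in range(steps):
        capability += gain
        gain *= k
    return capability


print(total_capability(0.5))  # ~2.0: gains fizzle out, performance converges
print(total_capability(1.1))  # astronomically large: gains compound, performance diverges
```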
This makes sense, but isn't AlphaFold available for use? Is it possible to verify this one way or another experimentally?
Might be worth posting this as its own question for greater visibility.
Possibly, but some of the missteps just feel too big to ignore. Like what on earth is going on in the second half of the book?
I greatly enjoyed The Metropolitan Man, but feel like web serials, especially fan fiction, are their own genre and deserve their own post.
It's a prequel in the loosest possible sense. In theory they could be set in two different universes and it wouldn't make much of a difference.
Oh (not a spoiler) the second narrator is obviously not being entirely truthful.
That's totally a spoiler :-), but for me it was one of the most brilliant twists in the book. You have this stuff that feels like the author is doing really poor sci-fi, and then it's revealed that the author is perfectly aware of that and is making a point about translation.
Thanks, really appreciate the feedback! Maybe I'll give The Three Body Problem another chance.
What about solar power? If you build a data center in the desert, buy a few square km of adjacent land and tile it with solar panels, presumably that can be done far quicker and with far less regulation than building a power plant, and at night you can use off-peak grid electricity at cheaper rates.
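(Back-of-envelope with my own rough numbers, not anything official: ~1 kW/m² peak insolation times ~20% panel efficiency gives ~200 W/m², ignoring the panel spacing that in practice cuts real farms' output several-fold:)

```python
# Rough peak output of a tiled desert plot.
area_km2 = 3
watts_per_m2 = 1000 * 0.20  # ~1 kW/m^2 peak insolation * ~20% panel efficiency
peak_megawatts = area_km2 * 1_000_000 * watts_per_m2 / 1e6
print(f"~{peak_megawatts:.0f} MW peak")  # ~600 MW, comfortably data-center scale
```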
Vg pna'g or gur pnfr gung jnf gur gehyr rssbeg, fvapr gur cerfvqrag pubfr gb tb gb fcnpr vafgrnq bs gur bgure rssbeg.
The problem is that the temperature of the earth rises fairly fast as you dig downwards (typically around 25-30°C per km). How fast depends on location, but it's always significant enough that there's a pretty hard limit on how cold you can go.
Reserve soldiers in Israel are paid their full salaries by National Insurance. If they are also able to work (which is common, as the IDF isn't great at efficiently using its manpower), they can legally work and will get paid by their company on top of whatever they receive from National Insurance.
Given how often sensible policies aren't implemented because of their optics, it's worth appreciating those cases where that doesn't happen. The biggest impact of a war on Israel is to the economy, and anything which encourages people to work rather than waste time during a war is a good policy. But it could so easily have been rejected because it implies soldiers are slacking off from their reserve duties.
Not video transcripts - video. One frame of video contains much more data than one text token, and you can train an AI as a next-frame predictor much as you can a next-token predictor.
I'm guessing that the sort of data that's crawled by Google but not Common Crawl is usually low quality? I imagine that if somebody put any effort into writing something then they'll put effort into making sure it's easily accessible, and the stuff that's harder to get to is usually machine generated?
Of course that's excluding all the data that's private. I imagine that once you add private messages (e.g. WhatsApp, email) + internal documents that ends up being far bigger than the publicly available web.
I'm interested if you have a toy example showing how Simpson's paradox could have an impact here?
I assume that has a placebo/doesn't have a placebo is a binary variable, and I also assume that the number of people in each arm in each experiment is the same. I can't really see how you would end up with Simpson's paradox with that setup.
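(My reasoning, sketched: with equal arm sizes n_i within each experiment, each arm's pooled rate is the same n_i-weighted average of its per-experiment rates, so if one arm wins in every experiment it must win in the pool too - no room for the reversal Simpson's paradox needs. A quick numerical spot-check under those assumptions:)

```python
import random

# With equal arm sizes within each experiment, if treatment beats placebo
# in every experiment, it must also beat placebo when pooled.
random.seed(0)
for _ in range(100_000):
    strata = []
    for _ in range(random.randint(2, 5)):
        n = random.randint(10, 1000)  # same number of people in both arms
        p = random.random()           # placebo success rate
        t = random.uniform(p, 1.0)    # treatment rate >= placebo rate
        strata.append((n, p, t))
    total = sum(n for n, _, _ in strata)
    pooled_placebo = sum(n * p for n, p, _ in strata) / total
    pooled_treatment = sum(n * t for n, _, t in strata) / total
    assert pooled_treatment >= pooled_placebo  # the direction never flips
print("no reversal found")
```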