LessWrong 2.0 Reader
I would like to recommend to Johannes that he try to write and post content in a way that evokes less cringe in people. I know it does evoke that, because I personally feel cringe reading it.
Still, I think there isn't much objectively bad about this post. I'm not saying the post is very good or convincing. Its style is super weird, but that should be considered okay in this community. These thoughts remind me of something Scott Alexander once wrote: that sometimes he hears someone say true but low-status things, and his automatic thought is that the person must be stupid to say something like that, and he has to consciously remind himself that what was said is actually true.
Also, all these thoughts about this social reality sadden me a little - why oh why is AI safety such a status-concerned and “serious business” area nowadays?
it-is on Why Truth?
Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world
Science is our tool to manipulate the world; it is an instrument of truth. You cannot manipulate the meaning of reality simply by redefining words such as 'rational', deciding what it is and isn't. Science stands in as a substitute for truth, and rationality is a substitute for truth as well. Taking this as self-evident, the definition of rationality is substituted for that of science, and this forms our basic definition of rationality moving forward. Take care not to define science in terms of rationality: science is a tool, as is rationality.
In rationality, as in science, 'curiosity, pragmatism, and quasi-moral injunctions' are injected into research questions and colour our understanding of both the world and truth.
To use science as a tool is to pursue the truth; to use rationality is to pursue the truth. We must apply a scientific approach and method to our rationality.
Great post, important concepts. Sharing it everywhere.
There was one piece, though, that I couldn't intuitively grasp, so maybe one of you could help me understand: what is it about video games that is out to get you? ("So do video games.") Elsewhere, Zvi writes about F2P games; is it about those and their addiction-inducing Skinner boxes? If it's about video games in general, I would love to learn how they are more out to get us than, say, novels.
tsvibt on robo's Shortform
Well, I asked this https://www.lesswrong.com/posts/X9Z9vdG7kEFTBkA6h/what-could-a-policy-banning-agi-look-like [LW · GW] but roughly no one was interested--I had to learn about "born secret" https://en.wikipedia.org/wiki/Born_secret from Eric Weinstein in a youtube video.
FYI, while restricting compute manufacture is, I would guess, net helpful, it's far from a solution. People can make plenty of conceptual progress given current levels of compute https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce [LW · GW] . It's not a way out, either. There are possible ways out. But approximately no one is interested in them.
localdeity on On Privilege
I grew up knowing "privilege" to mean a special right that was granted to you based on your job/role (like free food for those who work at some restaurants) or perhaps granted by authorities due to good behavior (and would be taken away for misusing it). Note also that the word itself, "privi"-"lege", means "private law": a law that applies to you in particular.
Rights and laws are social things, defined by how others treat you. To say that your physical health is a privilege therefore seems like either a category error, or a claim that other people treated you better in a way that gave you your better physical health, which then raises questions like "What made you deserve that treatment?" or perhaps "Is it really because of how other people treated you, or other reasons like genetics or having made healthier life choices?". The latter may then lead to "Yeah, but you grew up being taught better and/or in a situation where healthy choices were more affordable, which are probably partly caused by wealth and are privilege", both of which might be counter-argued in the specific person's case or in general, and so on. Social justice arguments ensue.
"Advantage" seems like a more neutral term, one that doesn't inherently imply fairness-laden claims about how you got it. I would recommend it.
wassname on Instruction-following AGI is easier and more likely than value aligned AGI
That's true, they are different. But search still provides the closest historical analogue (maybe employees/suppliers provide another). Historical analogues have the benefit of being empirical and grounded, so I prefer them over (or alongside) pure reasoning or judgement.
jblack on Is There Really a Child Penalty in the Long Run?
Yes, that was my first guess as well. Increased income from employment is most strongly associated with major changes, such as promotion to a new position with changed (and usually increased) responsibilities, or leaving one job and starting work somewhere else that pays more.
It seems plausible that these are not the sorts of changes that women are likely to seek out at the same rate when planning to devote a lot of time in the very near future to being a first-time parent. Some may, but all? That seems unlikely. Men seem more likely to continue pursuing such opportunities at a similar rate, due to gender differences in child-rearing roles.
annasalamon on Stephen Fowler's Shortform
I don't know the answer, but it would be fun to have a Twitter comment with a zillion likes asking Sam Altman this question. Maybe someone should make one?
jblack on LLMs could be as conscious as human emulations, potentially
I don't expect this to "cash out" at all, which is rather the point.
The only really surprising part would be that we had a way to determine for certain whether some other system is conscious or not. That is, I'd have very similar (high) levels of surprisal for either "ems are definitely conscious" or "ems are definitely not conscious", but the ratio between them would be nowhere near "what the fuck" level.
As it stands, I can determine that I am conscious but I do not know how or why I am conscious. I have only a sample size of 1, and no way to access a larger sample. I cannot determine that you are conscious. I can't even determine for certain when or whether I was conscious in the past, and there are some time periods for which I am very uncertain. I have hypotheses regarding all of these uncertainties, but there are no prospects of checking whether they're actually correct.
So given that, why would I be "what the fuck" surprised if some of my currently favoured hypotheses such as "ems will be conscious" were actually false? I don't have anywhere near the degree of evidence required to justify that level of prior confidence. I am quite certain that you don't either. I would be very surprised if other active fleshy humans weren't conscious, but still not "what the fuck" surprised.
o-o on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
Additionally, the AI might think it's in an alignment simulation and just leave the humans as is, or even nominally address their needs. This might be mentioned in the linked post, but I want to highlight it. Since we already run very low fidelity alignment simulations by training deceptive models, there is reason to think this.