Comments
That's only true if the probability is a continuous function - perhaps the probability instantaneously went from below 28% to above 28%.
I’m claiming that we should only ever reason about infinity by induction-type proofs. Due to the structure of the thought experiment, the only thing it is possible to count in this way is galaxies, so (I claim) counting galaxies is the only thing that you’re allowed to use for moral reasoning. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.
(To be clear, I agree that if you rearrange people under the concepts of infinity that mathematicians like to use, you can turn HEAVEN into HELL, but I’m claiming that we’re simply not allowed to use that type of infinity logic for ethics.)
Obviously this is taking a stance about the ways in which infinity can be used in ethics, but I think this is a reasonable way to do so without giving up the concept of infinity entirely.
I don’t think that it does? There are infinitely many arrangements, but the same proof by induction applies to any possible arrangement.
I have an argument for a way in which infinity can be used but which doesn't imply any of the negative conclusions. I'm not convinced of its reasonableness or correctness though.
I propose that infinity ethics should only be reasoned about via proof by induction. When done this way, the only way to reason about HEAVEN and HELL is by matching up galaxies in each universe and doing induction across all of the elements:
Theorem: The universe HEAVEN that contains n galaxies is a better universe than HELL which contains n galaxies. We will formalize this as HEAVEN(n) > HELL(n). We will prove this by induction.
- Base case, HEAVEN(1) > HELL(1):
- The first galaxy in HEAVEN (which contains billions of happy people and one miserable person) is better than the first galaxy in HELL (which contains billions of miserable people and one happy person), by our understanding of morality.
- Induction step HEAVEN(n) > HELL(n) => HEAVEN(n+1) > HELL(n+1):
- HEAVEN(n) > HELL(n) (given)
- HEAVEN(n) + billions of happy people + 1 happy person > HELL(n) + billions of miserable people + 1 miserable person (by our understanding of morality)
- HEAVEN(n) + billions of happy people + 1 miserable person > HELL(n) + billions of miserable people + 1 happy person (moving people around does not improve things if it changes nothing else)
- HEAVEN(n + 1) > HELL(n + 1) □
A downside of this approach is that you lose the ability to reason about uncountably infinite numbers. However, that's a bullet I'm willing to bite: only being able to reason about a countably infinite number of moral entities.
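To make the finite claim concrete, here's a toy numeric check of HEAVEN(n) > HELL(n) for small n; the one-billion galaxy size and the +1/-1 utility weights are assumptions of mine, not anything from the thought experiment:

```python
PEOPLE_PER_GALAXY = 1_000_000_000  # assumed galaxy population

def heaven(n):
    # n galaxies, each with every person happy except one miserable person
    return n * ((PEOPLE_PER_GALAXY - 1) * (+1) + 1 * (-1))

def hell(n):
    # n galaxies, each with every person miserable except one happy person
    return n * ((PEOPLE_PER_GALAXY - 1) * (-1) + 1 * (+1))

for n in range(1, 10):
    assert heaven(n) > hell(n)  # holds for every finite n, matching the induction
```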
One downside to using video games to measure "intelligence" is that they often rely on skills that aren't generally included in "intelligence", like how fast and precisely you can move your fingers. If someone has poor hand-eye coordination, they'll perform worse on many video games than people who have good hand-eye coordination.
A related problem is that video games in general have a large element of "shared language", where someone who plays lots of video games will be able to transfer skills from those when playing a new one. I know people who are certainly more intelligent than I am, but who are less able when playing a new video game, because their parents wouldn't let them play video games growing up (or they're older and didn't grow up with video games at all).
I like the idea of using a different tool to measure "intelligence", if you must measure "intelligence", but I'm not sure that video games are the right one.
There's no direct rationality commentary in the post, but there are plenty of other posts on LW that also aren't direct rationality commentary (for example, a large majority of posts here about COVID-19). I think that this post is a good fit because it provides tools for understanding this conflict and others like it, which I didn't possess before and now somewhat do.
It's not directly relevant to my life, but that's fine. I imagine that for some here it might actually be relevant, because of connections through things like effective altruism (maybe it helps grant makers decide where to send funds to aid the Sudanese people?).
Interesting post, thanks!
A couple of formatting notes:
This post gives a context to the deep dives that should be minimally accessible to a general audience. For an explanation of why the war began, see this other post.
It seems like there should be a link here, but there isn't one.
Also, the footnotes don't link back and forth properly, so currently one has to manually scroll down to the footnotes and then scroll back up. LessWrong has a footnote feature that you could use, which makes the reading experience nicer.
It used to be called Find Friends on iOS, but they rebranded it, presumably because family was a better market fit.
There are others like that too, like Life360, and they’re quite popular. They solve the problem of parents wanting to know where their kids are. It’s perhaps overly zealous on the parents’ part, but it’s a real desire that the apps are solving.
Metaculus isn’t very precise near zero, so it doesn’t make sense to multiply it out.
Also, there’s currently a mild outbreak, while most of the time there’s no outbreak (or less of one), so the risk for the next half year is elevated compared to normal.
I'm not familiar with how Stockfish is trained, but does it have intentional training for how to play with queen odds? If not, then it might start trouncing you if it were trained to play with them, instead of having to "figure out" new strategies on its own.
Are there other types of energy storage besides lithium batteries that are plausibly cheap enough (with near-term technological development) to cover the multiple days of storage case?
(Legitimately curious, I'm not very familiar with the topic.)
If you're on the open-air viewing platform, it might be feasible to use something like a sextant or shadow lengths to figure out the height from the platform to the top, and then use a different tool to figure out the height of the platform.
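For the angle-based version, the trigonometry is just height = distance × tan(angle); here's a minimal sketch (the 50 m distance and 40° angle are made-up placeholder numbers):

```python
import math

def height_above_eye(horizontal_distance_m, elevation_angle_deg):
    # Height of the top above your eye level, given the horizontal distance to a
    # point directly below the top and the sighted angle of elevation.
    return horizontal_distance_m * math.tan(math.radians(elevation_angle_deg))

# Example with placeholder numbers: standing 50 m back, sighting the top at 40 degrees.
print(round(height_above_eye(50, 40), 1))  # ~42.0 m; then add eye height and platform height
```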
I often realize that I've had a headache for a while and had not noticed it. It has real effects - I'm feeling grumpy, I'm not being productive - but it's been filtered out before my conscious brain noticed it. I think it's unreasonable to say that I didn't have a headache, just because my conscious brain didn't notice it, when the unconscious parts of my brain very much did notice it.
After split-brain surgery, patients can experience something on one side of their body without noticing it with the portion of the brain that controls speech (the portion that seems conscious), while the other portion of the brain still experiences the sensation and reacts to it in a way that can seem inexplicable to the conscious portion (though the conscious brain will try to make up some sort of explanation for it).
The brain is not unitary, and it is so un-unitary that it seems like a mistake to even act as if subjective experience is a single reality.
The problem is that prior to ~1990, there were lots of supposed photographs of Bigfoot, and now there are ~none. So Bigfoots would have to have previously been common close to humans but now be uncommon, or all the photos were fake but the other evidence was real. Plus, all of that other evidence has also died out (now that it's less plausible that a witness couldn't have taken a photo). So it's still possible that Bigfoot exists, but you have to start by throwing out all of the evidence that people have that Bigfoot exists, and then why believe in Bigfoot?
I really enjoyed the parts of the post that weren't related to consciousness, and it helped me think more about the assumptions I have about how the universe works. The Feynman quote was new for me, so thank you for sharing that!
However, when you brought consciousness into the post, it brought along additional assumptions that the rest of the post wasn't relying on, weakening the post as a whole. Additionally, LessWrong has a long history of debating whether consciousness is "emergent" or not. Most readers here already hold fixed positions on the debate and would need substantial evidence to change their position. Simply stating that "that idea feels wrong" doesn't suffice, especially when many people feel otherwise (notably, people who have spent time meditating and feel that they have become "one with the universe").
Any position that could be considered safe enough to back a market is only going to appreciate in proportion to inflation, which would just make the market zero-sum after adjusting for inflation. Something like ETH or gold wouldn't be a good solution because the market would be massively distorted on questions that are correlated with the performance of that asset, plus there's always the possibility that the asset just goes down, which would be the opposite of what you want.
I haven't read Fossil Future, but it sounds like he's ignoring the option of combining solar and wind with batteries (and other types of electrical storage, like pumped water). The technology is available today and can be more easily deployed than fossil fuels at this point.
Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences
The citation is to a disreputable journal. Some of their sources might have some basis (though a lot of them also seem disreputable), but I wouldn't take this at face value.
There can also be meaning that the author simply didn't intend. In biblical interpretation, for instance, there have been many different (and conflicting!) interpretations given to texts that were written with a completely different intent. One reader reads the story of Adam and Eve as a text that supports feminism, another reader sees the opposite, and the original writer didn't intend to give either meaning. But both readers still get those meanings from the text.
Interestingly, it apparently used to be Zebra, but is now Zulu. I'm not sure why they switched over, but it seems to be the predominant choice since the early 1950s.
I understand that definition, which is why I’m confused about why you brought up the behavior of bacteria as evidence that bacteria have experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.
I brought up the simulation of a bacteria because an atom-for-atom simulation of a bacteria is completely identical to a bacteria - the thing that has experience is represented in the atoms of the bacteria, so a perfect simulation of a bacteria must also internally experience things.
If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacteria has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?
From a different angle, suppose that we have a computer program that can perfectly simulate a bacteria. Does that bacteria have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer program also be conscious? (If you want to say that a computer can’t possibly perfectly simulate a bacteria, then great, we have a testable crux, albeit one that can’t be tested right now.)
If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause did, but otherwise it seems like qualia has to arise spontaneously from the evolution of something that doesn’t have experience - with possibly some “half conscious” steps along the way.
From that point of view, I don’t see any problem with supposing that a future AI could have experience, even if current ones don’t. I think it’s reasonable to even suppose that current ones do, though their lack of persistent memory means that it’s very alien to our own, probably more like one of those “half conscious” steps.
Nit: "if he does that then Caplan won't get paid back, even if Caplin wins the bet" misspells "Caplan" in the second instance.
Cable companies are forcing you to pay for channels you don’t want. Cable companies are using unbundling to mislead customers and charge extra for basic channels everyone should have.
I think this would be more acceptable if either everything was bundled or nothing was. But generally speaking companies bundle channels that few people want, to give the appearance of a really good deal, and unbundle the really popular channels (like sports channels) to profit. So you sign up for a TV package that has "hundreds of channels", but you get lots of channels that you don't care about and none of the channels you really want. You're screwed both ways.
I think you're totally spot on about ChatGPT and near term LLMs. The technology is still super far away from anything that could actually replace a programmer because of all of the complexities involved.
Where I think you go wrong is looking at the long term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element - the AI just needs to be good at guessing what requirements the product team and customers want and then asking questions and running tests to further divine how the product should work. If specifying a piece of software in English is a nightmare, then your boss's job is already a nightmare, since that's what they do. The key is that they can give a specification, answer questions about the specification, and review implementations of that specification along the way, and those are all things that an AI could do.
I'm already an intelligence that takes in English specifications and produces code, and there's no fundamental reason that my intelligence can't be replaced by an artificial one.
Prediction market on whether the lawsuit will succeed:
https://manifold.markets/Gabrielle/will-the-justice-department-win-its
I’m not a legal expert, but I expect that this sort of lawsuit, involving coordination between multiple states’ attorneys general and the department of justice, would take months of planning and would have to have started before public-facing products like ChatGPT were even released.
The feared outcome looks something like this:
- A paperclip manufacturing company puts an AI in charge of optimizing its paperclip production.
- The AI optimizes the factory and then realizes that it could make more paperclips by turning more factories into paperclips. To do that, it has to be in charge of those factories, and humans won’t let it do that. So it needs to take control of those factories by force, without humans being able to stop it.
- The AI develops a super virus that will cause a pandemic to wipe out humanity.
- The AI contacts a genetics lab and pays for the lab to manufacture the virus (or worse, it hacks into the system and manufactures the virus). This is a thing that already could be done.
- The genetics lab ships the virus, not realizing what it is, to a random human’s house and the human opens it.
- The human is infected, they spread it, humanity dies.
- The AI creates lots and lots of paperclips.
Obviously there are a lot of missing steps there, but the key is that no one intentionally let the AI have control of anything important beyond connecting it to the internet. No human could or would have done all these steps, so it wasn’t seen as a risk, but the AI was able to and wanted to.
Other dangerous potential leverage points for it are things like nanotechnology (luckily this hasn’t developed as quickly as feared), the power grid (a real concern, even with human hackers), and nuclear weapons (luckily not connected to the internet).
Notably, these are all things that people on here are concerned about anyway, so it’s not just an AI-risk concern, but there are lots of ways that an AI could leverage the internet into an existential threat to humanity, and humans aren’t good at caring about security (partially because of the profit motive).
We're worried about AI getting too powerful, but logically that means humans are getting too powerful, right?
One of the big fears with AI alignment is that the latter doesn't logically follow from the former. If you're trying to create an AI that makes paperclips and it then kills all humans because it wasn't aligned (with any human's actual goals), it was powerful in a way that no human was. You do definitely need to worry about which goal the AI is aligned with, but even more important than that is ensuring that you can align an AI to any human's preferences at all, or else worrying about which goal is pointless.
The Flynn effect isn't really meaningful outside of IQ tests. Most medieval and early modern peasants were uneducated and didn't know much about the world far from their home, but they definitely weren't dumb. If you look at the actual techniques they used to run their farms, they're very impressive and require a good deal of knowledge and fairly abstract thinking to do optimally, which they often did.
Also, many of the weaving patterns that they've been doing for thousands of years are very complex, much more complex than a basic knitting stitch.
- At least 90% of internet users could solve it within one minute.
While I understand the reasoning behind this bar, anything short of something like 99.99% of internet users is strongly discriminatory and regressive. Captchas are used to gate parts of the internet that are required for daily life. For instance, almost all free email services require filling out captchas, and many government agencies now require you to have an email address to interact with them. A bar that cuts out a meaningful number of humans means that those humans become unable to interact with society (even 0.01% of the internet's several billion users is hundreds of thousands of people, and a 90% bar could shut out hundreds of millions). Moreover, the people who are most likely to fail at solving this are people who already struggle to use computers and the internet, so the uneducated, the poor, and the elderly. Those groups can ill afford yet another barrier to living in our modern society.
Workers at a business are generally more aligned with each other than they are with the shareholders of the business. For example, if the company is debating a policy that has a 51% chance of doubling profit and a 49% chance of bankrupting the company, I would expect most shareholders to be in favor (since it's positive EV for them). But for worker-owners, that's a 49% chance of losing their job and a 51% chance of increasing salary but not doubling (since it's profit that is doubling, not revenue, and their salaries are part of the expenses), so I would expect them to be against the policy.
The same goes for things like policies around worker treatment - if a proposed policy would increase profit by 10% but make workers have a much more unpleasant environment, shareholders would probably vote in favor while worker-owners would vote against.
Obviously there are some shareholders who would go against their profit motive to improve the lives of stakeholders (see ESG funds), and workers who would choose a chance at more money over better working conditions or a lower chance of losing their job. But I would generally expect the two groups to disagree with each other while being aligned internally.
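As a toy sketch of the 51%/49% example above (all of the dollar figures below are assumptions of mine for illustration, not from the original comment):

```python
p_success = 0.51  # chance the policy doubles profit; 49% chance of bankruptcy

# Shareholder: suppose a share worth $100 roughly doubles on success and goes to
# zero on bankruptcy.
shareholder_ev = p_success * 200 + (1 - p_success) * 0
print(shareholder_ev)  # 102.0 > 100, so the gamble is positive EV for them

# Worker-owner: suppose a $100,000 salary gets (say) a 20% raise on success and
# disappears entirely on bankruptcy.
worker_ev = p_success * 120_000 + (1 - p_success) * 0
print(worker_ev)  # 61,200 << 100,000, so the same gamble looks terrible to them
```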
I think the biggest issue in software development is the winner-takes-all position of many internet businesses. For the business to survive, you have to take the whole market, which means you need to have lots of capital to expand quickly, which means you need venture capital. It's the same problem that self-funded startups have: people generally agree that self-funded startups are better to work at, but they can't grow quite as fast as VC-funded startups, so they lose the race. But that doesn't apply outside of the software sphere (which is why VC primarily focuses on software startups).
Beyond that, they're just not that well known as an option in the US, and all of the narrative is about venture-capital-backed startups, so founders haven't considered co-ops as an option. Despite that, I am aware of a few software co-ops (primarily consultancies, since they don't have the large capital needs).
So Diplomacy is not a computationally complex game; it's a game about out-strategizing your opponents, where roughly all of the strategy is convincing other players to work with you. There are no new tactics to invent, and an AI can't really see deeper into the game than other players; it just has to be more persuasive and make the right decisions about the right people at the right time. You often have to do things like plan your moves so that in a future turn someone else will choose to ally with you. The AI didn't do any specific psychological manipulation, it was just good at being persuasive and strategic in the normal human way. It's also notable for being able to both play the game and talk with people about the game.
This could translate into something like being good at convincing people that the AI should be let out of its box, but I think mostly it's just being better at multiple skills simultaneously than many people expected.
(Disclaimer: I've only played Diplomacy in person before, and not at this high of a level)
What does this picture [pale blue dot] make you think about?
This one in particular seems unhelpful, since the picture is only meaningful if the viewer knows what it's a photo of. Sagan's description does a lot of the work of imbuing it with emotion.
That seems like a really limiting definition of intelligence. Stephen Hawking, even when he was very disabled, was certainly intelligent. However, his ability to be agentic was only possible thanks to the technology he relied on (his wheelchair and his speaking device). If that had been taken away from him, he would no longer have had any ability to alter the future, but he would certainly still have been just as intelligent.
I don’t have any experience with data centers or with deploying machine learning at scale. However, I would expect that for performance reasons it is much more efficient to have a local cache of the current data and then either have a manual redeploy at a fixed schedule or have the software refresh the cache automatically after some amount of time.
I would also imagine that reacting immediately could result in feedback loops where the AI overreacts to recent actions.
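For concreteness, here's a minimal sketch of the kind of time-based cache refresh I'm imagining; the ten-minute TTL and the fetch_current_model() placeholder are my own assumptions, not details of any real system:

```python
import time

CACHE_TTL_SECONDS = 600  # assumed refresh interval

def fetch_current_model():
    # Placeholder for the expensive call to wherever the latest data lives.
    return "model-snapshot"

_cache = {"value": None, "fetched_at": 0.0}

def get_model():
    # Serve the cached value, refreshing it only after the TTL has expired.
    now = time.monotonic()
    if _cache["value"] is None or now - _cache["fetched_at"] > CACHE_TTL_SECONDS:
        _cache["value"] = fetch_current_model()
        _cache["fetched_at"] = now
    return _cache["value"]
```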
A mitigating factor for the criminality is that smarter people usually have less need to commit crimes. Society values conventional intelligence and usually will pay well for it, so someone who is smarter will tend to get better jobs and make more money, so they won't need to resort to crime (especially petty crime).
My understanding of Spanish (also not a Spanish speaker) is that it's a palatal nasal /ɲ/, not a palatalized alveolar nasal /nʲ/. With a palatal nasal, you're making the sound with the body of your tongue against the hard palate (the part at the top of your mouth just behind the alveolar ridge). With a palatalized nasal, the raising of the tongue body toward the hard palate is a "secondary" articulation on top of the normal alveolar closure.
That said, the Spanish ñ is a good example of a palatal or palatalized sound for an English speaker.
Yeah, that's absolutely more correct, but it is at least a little helpful for a monolingual English speaker to understand what palatalization is.
Not sure I can explain it in text to a native English speaker what palatalization is; you would need to hear actual examples.
There are some examples in English. It's not quite the same as how Slavic languages work*, but it's close enough to get the idea: If you compare "cute" and "coot", the "k" sound in "cute" is palatalized while the "k" sound in "coot" is not. Another example would be "feud" and "food".
British vs American English differ sometimes in palatalization. For instance, in British English (RP), "tube" is pronounced with a palatalized "t" sound, while in American English (SAE), "tube" is pronounced with a normal "t" sound.
* In English, the palatalization is more like a separate phoneme, so "cute" is /kjut/ and "coot" is /kut/, but in Slavic languages, the palatalization is directly on the consonant, so it would be /kʲut/. With the Slavic version, the tongue is in a different spot for the entire sound, while in the English version the /k/ is like normal and then the tongue moves toward the hard palate.
The risk is a good point given some of the uncertainties we’re dealing with right now. I’d estimate maybe 1% risk of those per year (more weighted towards the latter half of the time frame, but I’ll assume it’s constant), so discounting for that (0.99^30 ≈ 0.74), the future payment would need to be more like $1400. That’s still much less than the assumption.
Looking at my consumption right now, I objectively would not spend the $1000 on something that lasts for more than 30 years, so I believe that shouldn’t be relevant. To make this more direct, we could phrase it as something like “a $1000 vacation now or a $1400 vacation in 30 years”, though that ignores consumption offsetting.
For the point about smoothing consumption, does that actually work given that retirement savings are usually invested and are expected to give returns higher than inflation? For instance, my current savings plan means that although my income is going to go up, and my amount saved will go up proportionally, the majority of my money when I retire will be from early in my career.
For a more specific example, consider two situations where I'm working until I'm 65 and have returns of 6% per annum (and taking all dollar amounts to be inflation adjusted):
- I start investing immediately when I start working as an adult (at 21)
- I wait to start investing until I'm 35
In the first situation, if I contribute $1000 monthly, I'll retire with about $2.4 million, 79% of which is from interest. In the second situation, to get the same amount at retirement, I have to contribute $2500 monthly, and only 63% of the balance will be from interest. I don't expect to be making 2.5 times as much at 35 as I was at 21, so smoothing consumption is worse for me.
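Here's a quick sketch of that arithmetic; I'm assuming annual compounding at 6% and level end-of-year contributions, which lands close to the figures above (the exact percentages depend on compounding details):

```python
def future_value(annual_contribution, rate, years):
    # Future value of a level annual contribution, compounded once per year.
    return annual_contribution * ((1 + rate) ** years - 1) / rate

# Start at 21, retire at 65: 44 years of $12,000/year at 6%.
early = future_value(12_000, 0.06, 44)
print(round(early), round(1 - (12_000 * 44) / early, 2))   # ~2,397,000 and ~0.78 from interest

# Start at 35: 30 years; roughly $30,000/year is needed to reach the same balance.
late = future_value(30_000, 0.06, 30)
print(round(late), round(1 - (30_000 * 30) / late, 2))     # ~2,372,000 and ~0.62 from interest
```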
This gets even worse when you consider that you should move to lower-risk investments as you get closer to retirement, since you'll have to be contributing even more. As a counterpoint to this, some people even recommend investing on margin when you're young to get even higher returns, though I'm not bold enough to do that.
I guess my biggest disagreements with the paper are that income (for at least me and for people I have compared notes with) is not hump-shaped enough for the effect to dominate, and that their assumption that "the rate of time preference equals the interest rate" seems to me to simply not be true, even though economists like to assume it is. I think the second assumption is the one I disagree with more, since I (and again, many people I have talked to) have very little time preference for the present. If I had two buttons, one to give me $1000 of consumption today and one to give me $1001 of consumption in thirty years (inflation adjusted, of course), I would press the second button. But economists assume that I would only do that if the second button were something like $6000 (a 6% annual rate of return).
I suspect that assumptions like this are why economists disagree with personal finance guidelines, and I suspect that the personal finance guidelines are more often correct about their assumptions than economists are.
if I put things in my cart, don't check out, and come back the next day, I'm going to be frustrated if the site has forgotten my selections!
Ironically, I get frustrated by the inverse of this. If I put something in my shopping cart, I definitely don’t still want it tomorrow. I keep on accidentally having items that I don’t want in my order when I check out, and then I have to go back through all the order steps to remove the item (since there’s hardly ever a removal button from the credit card screen). It’s so frustrating! I don’t want you to remember things about me from previous days, just get rid of it all.
A single human is always going to have a risk of a sudden mental break, or perhaps of simply not having been trustworthy in the first place. So it seems to me like a system where the most knowledgeable person makes the decision alone is always going to be somewhat riskier than one where that most knowledgeable person also has to double-check with literally anyone else. If you make sure that the two people are always together, it doesn’t hurt anything (other than the salary for that person, I suppose, but that’s negligible).
For political reasons, we say that the US President is definitionally that most knowledgeable person, which probably isn’t actually the case, but they are at least the person that the US voting system has said should make the decision.
Which is all to say, even in the most urgent response most critical system, adding a structure that takes sole power away from a single person increases safety. Of course, I don’t think we’ll have a world where that structure involves everyone, but I think that increasing individual inequality is a bad choice.
For another angle with nuclear weapons, if we could somehow teach people so that some people only understood half of building the weapon and other people only understood the other half, it would decrease the odds that a single person would be able to build a nuclear weapon or teach a terrorist organization, even if more people now have some knowledge. Decreasing inequality of nuclear-weapon-knowledge would create a safer society.
The policy could just be “at least one person has to agree with the President to launch the nuclear arsenal”. It probably doesn’t change the game that much, but it at least gets rid of the possible risk that the President has a sudden mental break and decides to launch missiles for no reason. Notably it doesn’t hurt the ability to respond to an attack, since in that situation there would undoubtedly be at least one aide willing to agree, presumably almost all of them.
Actually consulting with the aide isn’t necessary, just an extra button press to ensure that something completely crazy doesn’t happen.
What I’m referring to is the two-man rule: https://en.m.wikipedia.org/wiki/Two-man_rule
US military policy requires that for a nuclear weapon to actually be launched, two people at the silo or on the submarine have to coordinate to launch the missile. The decision still comes from a single person (the President), but the people who carry out the order have to double-check each other, so that a single crazy serviceman doesn’t launch a missile.
It wouldn’t be crazy for the President to require a second person to help make the decision, since the President is going to be surrounded by aides at all times. For political reasons we don’t require it, but it sounds reasonable as a military policy.
In the case of nuclear weapons, they infamously have been made to require two individuals to both press the button (or turn the key) to launch the missile. Even if some installations aren’t currently set up like that, they certainly all could be made to require at least two people.