LessWrong 2.0 Reader
Asya: is the above sufficient to allay the suspicion you described? If not, what kind of evidence are you looking for (that we might realistically expect to get)?
peterh on OpenAI: Fallout
CNBC reports:
The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” stated the memo, which was viewed by CNBC.
The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed.
“As we shared with employees, we are making important updates to our departure process,” an OpenAI spokesperson told CNBC in a statement.
“We have not and never will take away vested equity, even when people didn’t sign the departure documents. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” said the statement, adding that former employees would be informed of this as well.
A handful of former employees have publicly confirmed that they received the email.
thomas-kwa on Drexler's Nanosystems is now available online
I believe Nanosystems is mostly valid physics (though I am still unsure about this), and that in the far future, after GDP has doubled ten or twenty times, we will think of it the way current rocket scientists think of Tsiolkovsky's writing: speculative science that gave a glimpse surprisingly far into the future through an understanding of the timeless basic principles at play, though it misses many implementation details. And just as we gain a sense of perspective from knowing in 1914 that it's theoretically possible to send people to Mars on a ship with airlocks, fueled by hydrogen-oxygen engines and steered by cold gas thrusters, I think we gain an enormously valuable perspective on the universe by knowing that it is (probably) theoretically possible to perform most chemical reactions and many molecular assembly tasks with 99% efficiency using machines that precisely place atoms, self-replicate once an hour, and require only ultrapure gases, various trace metals, and electricity as input.
tamsin-leake on Tamsin Leake's Shortform
Some people who are very concerned about suffering might be considering building an unaligned AI that kills everyone, just to avoid the risk of an AI takeover by an AI aligned to values which want some people to suffer.
Let this be me being on the record saying: I believe the probability of {alignment to values that strongly diswant suffering for all moral patients} is high enough, and the probability of {alignment to values that want some moral patients to suffer} is low enough, that this action is not worth it.
I think this applies to approximately anyone who would read this post, including heads of major labs in case they happen to read this post and in case they're pursuing the strategy of killing everyone to reduce S-risk.
See also: how acausal trade helps in 1 [LW(p) · GW(p)], 2 [LW · GW], but I think I think this even without acausal trade.
tenoke on Tenoke's Shortform
These are not full explanations, but they are as far as I, at least, can get.
>tells you more about what exists
It's still more satisfying, because a state of ~everything existing is more 'stable' than a state of a specific something existing, in exactly the same way that nothing makes more sense to me than something as a default state, which is why I'm even asking the question. Nothing existing, and everything existing, just require less explanation than a specific something existing. That doesn't mean they necessarily require zero explanation.
And, if everything mathematically describable and consistent/computable exists, I can wrap my head around it not requiring an origin more easily, in a similar way to why I don't require an origin for actual mathematical objects, but without it necessarily seeming like a type error (though that's the counterargument I consider most here), as with most explanations.
>because how can you have a "fluctuation" without something already existing, which does the fluctuating
That's at least somewhat more satisfying to me because we already know about virtual particles and fluctuations from Quantum Mechanics, so it's at least a recognized low-level mechanism that does cause something to exist even while the state is zero energy (nothing).
It still leaves us with nothing existing rather than something in at least one way (zero energy), and it is already demonstrable with fields, which are at the lowest level of what we currently know of how the universe works and which can be examined and thought about further.
richard_kennaway on g-w1's Shortform
Do adults actually ask each other “What’s your favorite…” whatever? It sounds to me like the sort of question an adult asks a child in order to elicit a “childish” answer, whereupon the adults in the room can nod and wink at each other to the effect of “isn’t that sweet?” so as to maintain the power differential.
If I am faced with such a question, I ignore the literal meaning and take it to be a conversation hook (of a somewhat unsatisfactory sort, see above) and respond by talking more generally about the various sorts of whatever that I favour, and ignoring the concept of a “favorite”.
lsusr on Less Anti-Dakka
If you feel slightly more free whenever you eliminate some unnecessary clutter, maybe you would benefit from removing all the clutter.
I took this to the extreme and it more than paid for itself. Benefits have been massive. Costs have been trivial.
quetzal_rainbow on MIRI 2024 Communications Strategy
I'll say yet again that your tech tree model doesn't make sense to me. To get immortality/mind uploading, you need really overpowered tech, far above the level at which killing all humans and starting to disassemble the planet becomes negligibly cheap. So I wouldn't expect "existing people would probably die" to change much under your model of "AIs can be misaligned, but killing all humans is too costly".
lsusr on Awakening
I think that's a completely reasonable question to ask. The answer is non-obvious.
To fully answer your question is beyond the scope of this post, but I think there are two systems operating in the brain. One of them is a reinforcing operant conditioning system that can get addicted. Jhanic bliss states require that the operant conditioning system not be active, so it's not getting reinforced.
signer on MIRI 2024 Communications Strategy
>Given a low prior probability of doom as apparent from the empirical track record of technological progress, I think we should generally be skeptical of purely theoretical arguments for doom, especially if they are vague and make no novel, verifiable predictions prior to doom.
And why is such use of the empirical track record valid? Like, what's the actual hypothesis here? What law of nature says "if technological progress hasn't caused doom yet, it won't cause it tomorrow"?
>MIRI’s arguments for doom are often difficult to pin down, given the informal nature of their arguments, and in part due to their heavy reliance on analogies, metaphors, and vague supporting claims instead of concrete empirically verifiable models.
And arguments against are based on concrete empirically verifiable models of metaphors.
>If your model of reality has the power to make these sweeping claims with high confidence, then you should almost certainly be able to use your model of reality to make novel predictions about the state of the world prior to AI doom that would help others determine if your model is correct.
Doesn't MIRI's model predict some degree of the whole Shoggoth/actress thing in current systems? That seems verifiable.