LessWrong 2.0 Reader
It could be an interesting experiment to build up this list iteratively. Like, every question you ask for the third time, the answer gets added at the bottom of the list. How long will the list get, and what will it contain?
jiro on Q&A on Proposed SB 1047
If your model is not projected to be at least 2024 state of the art and it is not over the 10^26 flops limit?
It's not going to be 2024 forever. In the future being 2024 state of the art won't be as hard as it is in actual 2024.
That developers risk going to jail for making a mistake on a form.
- This (almost) never happens.
Because prosecuting someone for making a mistake on a form happens when the government wants to go after an otherwise innocent person for unacceptable reasons, so they prosecute a crime that goes unprosecuted 99% of the time.
The bill says the $500 million must be due to cyberattacks on critical infrastructure, autonomous illegal-for-a-human activity by an AI, or something else of similar severity. This very clearly does not apply to ‘$500 million in diffused harms like medical errors or someone using its writing capabilities for phishing emails.’
"Severity" isn't defined. It's not implausible to read "severity" to mean "has a similar cost to".
johnswentworth on Some Experiments I'd Like Someone To Try With An Amnesic
Another class of applications which we discussed at the retreat: person 1 takes the amnesic, person 2 shares private information with them, and then person 1 gives their reaction to the private information. Can be used e.g. for complex negotiations: maybe it is in our mutual best interest to make some deal, but in order for me to know that I'd need some information which you don't want to share with me, so I take the drug, you share the information, and I record some verified record of myself saying "dear future self, you should in fact take this deal".
... which is cool in theory but I would guess not of high immediate value in practice, which is why the post didn't focus on it.
viliam on If you are assuming Software works well you are dead
Consider the pressures and incentives. Adding new features can help you sell the software to more users. Fixing bugs... unless the application is practically falling apart, it does not make much of a difference. After all, the bugs will only get noticed by people who already use your application, i.e. they already paid for it.
For the artificial intelligence, I assume the "killer app" will be its integration with SharePoint.
algon on Some Experiments I'd Like Someone To Try With An Amnesic
Important notice: benzodiazepines are serious business. Benzo withdrawals are amongst the worst experiences a human can go through, and combinations of benzos with alcohol, barbiturates, opioids or tricyclic antidepressants are very dangerous: benzos played a role in 31% of the estimated 22,767 deaths from prescription drug overdose in the United States.
If you're experimenting with benzos, please be very careful!
You can actually use this to do the sleeping beauty experiment IRL and thereby test SIA vs SSA. Unfortunately you can only get results if you're the one being put under.
metachirality on Shortform
This sort of begs the question of why we don't observe other companies assassinating whistleblowers.
habryka4 on Introducing AI-Powered Audiobooks of Rational Fiction Classics
Added an embedded audio element for you.
johnswentworth on My hour of memoryless lucidity
I would love to hear suggestions for other things I could try. If you have any, let me know in a comment!
My answer. [LW · GW]
niplav on Thomas Kwa's Shortform
Because[1] for a Bayesian reasoner, there is conservation [LW · GW] of [? · GW] expected evidence [LW · GW].
Although I've seen it mentioned that technically the change in a Bayesian's beliefs should follow a martingale, and Brownian motion is a martingale.
I'm not super technically strong on this particular part of the math. Intuitively it could be that in a bounded reasoner which can only evaluate programs in P, any pattern in its beliefs that can be described by an algorithm in P is detected and the predicted future belief from that pattern is incorporated into current beliefs. On the other hand, any pattern described by an algorithm in EXPTIME∖P can't be in the class of hypotheses of the agent, including hypotheses about its own beliefs, so EXPTIME patterns persist. ↩︎
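The conservation-of-expected-evidence claim above is easy to check numerically. Below is a minimal sketch (my own toy setup, not from the comment: a two-hypothesis coin example with made-up numbers) showing that the posterior, averaged over the possible observations weighted by their predicted probabilities, equals the prior — which is exactly the one-step martingale property.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) for a binary hypothesis."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Hypothetical numbers: H = "coin is biased toward heads"
prior = 0.3          # P(H)
p_heads_h = 0.8      # P(heads | H)
p_heads_not_h = 0.5  # P(heads | not H)

# Probability the reasoner assigns to seeing heads before flipping
p_heads = prior * p_heads_h + (1 - prior) * p_heads_not_h

post_heads = posterior(prior, p_heads_h, p_heads_not_h)
post_tails = posterior(prior, 1 - p_heads_h, 1 - p_heads_not_h)

# Expected posterior over the two possible observations
expected_posterior = p_heads * post_heads + (1 - p_heads) * post_tails

# Conservation of expected evidence: expectation of the update is zero
assert abs(expected_posterior - prior) < 1e-12
```

Seeing heads raises P(H) and seeing tails lowers it, but the predicted-probability-weighted average of the two posteriors lands back exactly on the prior: a Bayesian cannot expect, on net, to change their mind in any particular direction.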