Turing machines are finite state machines that have access to a memory tape. This was intended to be sort of analogous to humans being able to take notes on unbounded amounts of paper when thinking.

morpheus on What is the evidence on the Church-Turing Thesis?
Thanks for the answer!
Human brains are finite state machines. A Turing machine has unlimited memory and time.
Oops! You're right, and it's something that I used to know. So IIRC, as long as your tape (and your time) is not infinite, you still have a finite state machine, so Turing machines are kind of finite state machines taken to the limit as n→∞, is that right?

ike on Outlawing Anthropics: Dissolving the Dilemma
You can start with Bostrom's book on anthropic bias. https://www.anthropic-principle.com/q=book/table_of_contents/
The bet is just that each agent is independently offered a 1:3 deal. There's no dependence as in EY's post.

donald-hobson on What is the evidence on the Church-Turing Thesis?
Sometimes in mathematics, you can write 20 slightly different definitions and find you have defined 20 slightly different things. Other times you can write many different formalizations and find they are all equivalent. Turing completeness is the latter. It turns up in Turing machines, register machines, tiling the plane, Conway's Game of Life, and many other places. There are weaker and stronger possibilities, like finite state machines, stack machines, and oracle machines (i.e., a Turing machine with a magic black box that solves the halting problem is stronger than a normal Turing machine).
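The "finite control plus unbounded tape" picture can be made concrete with a short simulator. This is just an illustrative sketch (the transition-table encoding and the example "flip" machine are my own invention, not taken from any of the formalizations named above):

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine: finite control plus an unbounded tape.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The tape is stored as a dict
    from position to symbol, so it can grow without bound either way.
    """
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):            # guard against non-halting machines
        symbol = tape.get(pos, blank)
        if (state, symbol) not in transitions:
            break                         # no applicable rule: halt
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A toy machine that flips every bit of its input, then halts on blank:
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "10110"))  # -> 01001
```

The finite control here is just the transition dict; everything unbounded lives on the tape, which is the distinction the comment above is drawing.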
Human brains are finite state machines. A Turing machine has unlimited memory and time.
Physical laws are generally continuous, but there exists a Turing machine that takes in a number N and computes the laws to accuracy 1/N. This isn't philosophically forced, but it seems to be the way things are. All serious theories are computable.
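As a toy illustration of computing a continuous quantity "to accuracy 1/N" (an invented example, not a claim about real physical law): approximate exponential decay x(t) = e^(−t) by truncating its Taylor series once the alternating-series remainder bound drops below 1/N.

```python
import math

def decay_to_accuracy(t, N):
    """Approximate x(t) = exp(-t) to within 1/N, for t >= 0.

    For t >= 0 the Taylor series of exp(-t) alternates with (eventually)
    decreasing terms, so the first omitted term bounds the error.
    """
    assert t >= 0 and N > 0
    total, term, k = 0.0, 1.0, 0
    while abs(term) >= 1.0 / N:   # stop when the next term is below 1/N
        total += term
        k += 1
        term *= -t / k
    return total

# Error provably shrinks below 1/N as N grows:
approx = decay_to_accuracy(1.0, 10**6)
print(abs(approx - math.exp(-1.0)) < 1e-6)  # True
```

The point is just that a finite machine can take N as input and grind out arbitrarily good rational approximations, which is all "computable to accuracy 1/N" requires.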
We could conceivably be in a universe that wasn't simulatable by a Turing machine. Assuming our brains are simulatable, we could never know this absolutely, as simulators with a huge but finite amount of compute trying to trick us could never be ruled out. 0 and 1 aren't probabilities, and you are never certain. Still, we could conceivably be in a situation where an uncomputable explanation is far simpler than any computable theory.

chris_leong on Coordination Schemes Are Capital Investments
Did anything in particular motivate starting this sequence?

ryan_b on Book review: The Checklist Manifesto
The tools I have used at work in the past were as much reference material as checklist; this had the effect of making them a completely separate, optional action item that people used only if they remembered.
The example checklists from the post are all as basic as humanly possible: FLY AIRPLANE and WASH HANDS. These are all things everyone knows and can coordinate on anyway, but the checklist needs to be so simple that it doesn’t really register as an additional task. This feels like the same sort of bandwidth question as getting dozens or hundreds of people to coordinate on the statement USE THE CHECKLIST.
Put another way, I think that the reasoning in You Have About Five Words is recursive.

gjm on Is LessWrong dead without Cox’s theorem?
If Loosemore's point is only that an AI wouldn't have separate semantics for "interpreting commands" and for "navigating the world and doing things", then he hasn't refuted "one principal argument" for ASI danger; he hasn't refuted any argument for it that doesn't actually assume that an AI must have separate semantics for those things. I don't think any of the arguments actually made for ASI danger make that assumption.
I think the first version of the paperclip-maximizer scenario I encountered had the hapless AI programmer give the AI its instructions ("as many paperclips as possible by tomorrow morning") and then go to bed, or something along those lines.
You seem to be conflating "somewhat oddly designed" with "so stupidly designed that no one could possibly think it was a good idea". I don't think Loosemore has made anything resembling a strong case for the latter; it doesn't look to me as if he's even really tried.
For Yudkowskian concerns about AGI to be worth paying attention to, it isn't necessary that there be a "strong likelihood" of disaster if that means something like "at least a 25% chance". Suppose it turns out that, say, there are lots of ways to make something that could credibly be called an AGI, and if you pick a random one that seems like it might work then 99% of the time you get something that's perfectly safe (maybe for Loosemore-type reasons) but 1% of the time you get disaster. It seems to me that in this situation it would be very reasonable to have Yudkowsky-type concerns. Do you think Loosemore has given good reason to think that things are much better than that?
Here's what seems to me the best argument that he has (but, of course, this is just my attempt at a steelman, and maybe your views are quite different): "Loosemore argues that if you really want to make an AGI then you would have to be very foolish to do it in a way that's vulnerable to Yudkowsky-type problems, even if you weren't thinking about safety at all. So potential AGI-makers fall into two classes: the stupid ones, and the ones who are taking approaches that are fundamentally immune to the failure modes Yudkowsky worries about. Yudkowsky hopes for intricate mathematical analyses that will reveal ways to build AGI safely, but the stupid potential AGI engineers won't be reading those analyses, won't be able to understand them, and won't be able to follow their recommendations, and the not-stupid ones won't need them. So Yudkowsky's wasting his time."
The main trouble with this is that I don't see that Loosemore has made a good argument that if you really want to make an AGI then you'd be stupid to do it in a way that's vulnerable to Yudkowsky-type concerns. Also, I think Yudkowsky hopes to find ways of thinking about AI that both make something like provable safety achievable and clarify what's needed for AI in a way that makes it easier to make an AI at all, in which case, it might not matter what everyone else is doing.
In any case, this is all a bit of a sidetrack. The point is: Loosemore claimed that the sort of thing Yudkowsky worries about is "logically incoherent at a fundamental level", but even being maximally generous to his arguments I think it's obvious that he hasn't shown that. There is a reasonable case to be made that he simply hasn't understood some of what Yudkowsky has been saying; that is what Y meant by calling L a "permanent idiot". Whether or not detailed analysis of Y's and L's arguments ends up favouring one or the other, this is sufficient to suggest that (at worst) what we have here is a good ol' academic feud where Y has a specific beef with L, which is not at all the same thing as a general propensity for messenger-shooting.
And, to repeat the actually key point: what Yudkowsky did on one occasion is not strong evidence for what the Less Wrong community at large should be expected to do on a future occasion, and I am still waiting (with little hope) for you to provide some of the actual examples you claim to have where the Less Wrong community at large responded with messenger-shooting to refutations of their central ideas. As mentioned elsewhere in the thread, my attempts to check your claims have produced results that point in the other direction; the nearest things I found to at-all-credibly-claimed refutations of central LW ideas met with positive responses from LW: upvotes, reasonable discussion, no messenger-shooting.

zach-stein-perlman on Great Power Conflict
Speaking about states wanting things obscures a lot.
So I assume you would frame states as less agenty and frame the source of conflict as decentralized — arising from the complex interactions of many humans, which are less predictable than "what states want" but still predictably affected by factors like bilateral tension/hostility, general chaos, and various technologies in various ways?

rh on Eindhoven, Netherlands – ACX Meetups Everywhere 2021
Heads up: there is some kind of event happening and it's (at least at the moment) really busy. If you can't find us, ping me by mail or through here.
After 15:30 we'll move to: https://maps.app.goo.gl/S7Xz7FDWoFtLjR9g7

kithpendragon on What should one's policy regarding dental xrays be?
From https://seer.cancer.gov/statfacts/html/thyro.html, new thyroid cancer cases occur at a rate of ~15 cases per 100k people per year, and the disease has a 98+% 5-year survival rate.
Compare that with whatever risk results from needing more invasive repair when your dentist can't detect the cavities as soon, and you can see if there's a net benefit. I'm not seeing any numbers on this in my 5 minutes of searching, but that doesn't mean they're not out there. But I suspect the connection between dental infections and heart disease (that any dentist will tell you all about if you ask) easily exceeds the increased risk from regular x-rays.
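A back-of-envelope sketch of that comparison. The incidence and survival figures come from the SEER page cited above; the exposure duration and the x-ray-attributable fraction are made-up placeholders (since, as noted, no real numbers for them turned up in a quick search):

```python
# Baseline figures from the SEER Cancer Stat Facts page cited above:
incidence_per_year = 15 / 100_000      # new thyroid cancer cases per person-year
five_year_survival = 0.98

# Hypothetical assumptions, for illustration only:
years_of_exposure = 60                 # decades of regular dental visits
attributable_fraction = 0.01           # share of cases caused by dental x-rays (made up)

lifetime_cases = incidence_per_year * years_of_exposure
xray_attributable = lifetime_cases * attributable_fraction
xray_deaths = xray_attributable * (1 - five_year_survival)

print(f"lifetime thyroid cancer risk: {lifetime_cases:.3%}")
print(f"risk attributable to x-rays (assumed 1%): {xray_attributable:.5%}")
print(f"fatal cases from that: {xray_deaths:.7%}")
```

Even with these placeholder inputs, the fatal-case figure lands on the order of one in a million, which is the scale any countervailing benefit (earlier cavity detection, avoided infections) would have to beat.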