Posts

A foundation model approach to value inference 2023-02-21T05:09:29.658Z
[Link] Wavefunctions: from Linear Algebra to Spinors 2022-12-07T12:44:33.522Z
Petrov Day is not about unilateral action 2020-09-26T13:41:02.481Z
Canonical forms 2020-07-11T18:53:10.191Z

Comments

Comment by sen on Cognitive Emulation: A Naive AI Safety Proposal · 2023-02-26T23:05:52.067Z · LW · GW

Thank you. You phrased the concerns about "integrating with a bigger picture" better than I could. To temper the negatives, I see at least two workable approaches, plus a framing for identifying more workable approaches.

  • Enable other safety groups to use and reproduce Conjecture's research on CogEms so those groups can address more parts of the "bigger picture" using Conjecture's findings. Under this approach, Conjecture becomes a safety research group, and the integration work of turning that research into actionable safety efforts becomes someone else's task.
  • Understand the societal motivations for taking short-term steps toward creating dangerous AI, and demonstrate that CogEms are better suited for addressing those motivations, not just the motivations of safety enthusiasts, and not just hypothetical motivations that people "should" have. To take an example, OpenAI has taken steps towards building dangerous AI, and Microsoft has taken another dangerous step of attaching a massive search database to it, exposing the product to millions of people, and kicking off an arms race with Google. There were individual decision-makers involved in that process; it wasn't just "Big Company does Bad Thing because that's what big companies do." Why did they make those decisions? What was the decision process for those product managers? Who created the pitch that convinced the executives? Why didn't Microsoft's internal security processes mitigate more of the risks? What would it have taken for Microsoft to have released a CogEm instead of Sydney? The answer is not just research advances. Finding the answers would involve talking to people familiar with these processes, ideally people that were somehow involved. Once safety-oriented people understand these things, it will be much easier for them to replace more dangerous AI systems with CogEms.
  • As a general framework, there needs to be more liquidity between the safety research and the high-end AI capabilities market, and products introduce liquidity between research and markets. Publishing research addresses one part of that by enabling other groups to productize that research. Understanding societal motivations addresses another part of that, and it would typically fall under "user research." Clarity on how others can use your product is another part, one that typically falls under a "go-to-market strategy." There's also market awareness & education, which helps people understand where to use products, then the sales process, which helps people through the "last mile" efforts of actually using the product, then the nebulous process of scaling everything up. As far as I can tell, this is a minimal set of steps required for getting the high-end AI capabilities market to adopt safety features, and it's effectively the industry standard approach.

As an aside, I think CogEms are a perfectly valid strategy for creating aligned AI. It doesn't matter if most humans have bad interpretability, persuadability, robustness, ethics, or whatever else. As long as it's possible for some human (or collection of humans) to be good at those things, we should expect that some subclass of CogEms (or collection of CogEms) can also be good at those things.

Comment by sen on Cognitive Emulation: A Naive AI Safety Proposal · 2023-02-26T00:44:48.677Z · LW · GW

What interfaces are you planning to provide that other AI safety efforts can use? Blog posts? Research papers? Code? Models? APIs? Consulting? Advertisements?

Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-08T02:12:16.810Z · LW · GW

Ah. Thank you, that is perfectly clear. The Wikipedia page for Scalar Field makes sense with that too. A scalar field is a function that takes values in some canonical units, and so it transforms only on the right of f under a perspective shift. A vector field (effectively) takes values both on and in the same space, and so it transforms both on the left and right of v under a perspective shift.

I updated my first reply to point to yours.

Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-08T00:52:52.985Z · LW · GW

Reading the wikipedia page on scalar field, I think I understand the confusion here. Scalar fields are supposed to be invariant under changes in reference frame assuming a canonical coordinate system for space.

Take two reference frames P(x) and G(x). A scalar field S(x) needs to satisfy:

  • S(x) = P'(x)S(x)P(x) = G'(x)S(x)G(x)
  • Where P'(x) is the inverse of P(x) and G'(x) is the inverse of G(x).

Meaning the inference of S(x) should not change with reference frame. A scalar field is a vector field that commutes with perspective transformations. Maybe that's what you meant?

I wouldn't use the phrase "transforms trivially" here since a "trivial transformation" usually refers to the identity transformation. I wouldn't use a head tilt example either since a lot of vector fields are going to commute with spatial rotations, so it's not good for revealing the differences. And I think you got the association backwards in your original explanation: scalar fields appear to represent quantities in the underlying space unaffected by head tilts, and so they would be the ones "transforming in the opposite direction" in the analogy since they would remain fixed in "canonical space".

Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-08T00:25:01.364Z · LW · GW

Interesting. That seems to contradict the explanation for Lie Algebras, and it seems incompatible with commutators in general, since with commutators all operators involved need to be compatible with both composition and precomposition (otherwise AB - BA is undefined). I guess scalar fields are not meant to be operators? That doesn't quite work since they're supposedly used to describe energy, which is often represented as an operator. In any case, I'll have to keep that in mind when reading about these things.

Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-07T23:15:31.230Z · LW · GW

Thanks for the explanation. I found this post that connects your explanation to an explanation of the "double cover." I believe this is how it works:

  • Consider a point on the surface of a 3D sphere. Call it the "origin".
  • From the perspective of this origin point, you can map every point of the sphere to a 2D coordinate. The mapping works like this: Imagine a 2D plane going through the middle of the sphere. Draw a straight line (in the full 3D space) from the selected origin to any other point on the sphere. Where the line crosses the plane, that's your 2D vector representation of the other point. Under this visualization, the origin point should be mapped to a 2D "point at infinity" to make the mapping smooth. This mapping gives you a one-to-one conversion between 2D coordinate systems and points on the sphere.
  • You can create a new 2D coordinate system for sphere surface points using any point on the sphere as the origin. All of the resulting coordinate systems can be smoothly deformed into one another. (Points near the origin always map to large coordinates, points on the opposite side of the sphere always map close to (0,0), and the changes are smooth as you move the origin smoothly. The numpy sketch after this list illustrates the mapping.)
  • Each choice of origin on the surface of the sphere (and therefore each 2D coordinate system) corresponds to two unit-length quaternions. You can see this as follows. Pick any choice of i,j,k values from a unit quaternion. There are now either 1 or 2 choices for what the real component of that quaternion might have been. If i,j,k alone have unit length, then there's only one choice for the real component: zero. If i,j,k alone do not have unit length, then there are two choices for the real component since either a positive or a negative value can be used to make the quaternion unit length again.
  • Take the set of unit quaternions that have a real component close to zero. Consider the set of 2D coordinate systems created from those points. In this region, each coordinate system corresponds to two quaternions EXCEPT at the points where the quaternion's real component is 0. This exceptional case prevents a one-to-one mapping between coordinate transformations and quaternion transformations.
  • As a result, there's no "smooth" way to reduce the two-to-one mapping from quaternions to coordinate systems down to a one-to-one mapping. Any mapping would require either double-counting some quaternions or ignoring some quaternions. Since there's a one-to-one mapping between coordinate systems and candidate origin points on the surface of the sphere, this means there is also no one-to-one mapping between quaternions and points on the sphere.
  • No matter what smooth mapping you choose from SU(2), unit quaternions, to SO(3), unit spheres, the mapping must do the equivalent of collapsing distinctions between quaternions with positive and negative real components. And so the double cover corresponds to the two sets of covers: one of positive-real-component quaternions over the sphere, and one of the negative-real-component quaternions over the sphere. Within each cover, there's a smooth one-to-one conversion between quaternion-coordinates mappings, but across covers there is not.
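Here is a minimal numpy sketch of the mapping described in the second bullet. The choice of (0, 0, 1) as the origin point and of the z = 0 plane through the middle of the sphere are mine, purely for illustration:

```python
import numpy as np

def to_plane(q, origin=np.array([0.0, 0.0, 1.0])):
    """Map a point q on the unit sphere (q != origin) to 2D coordinates:
    intersect the 3D line through `origin` and q with the plane z = 0."""
    t = origin[2] / (origin[2] - q[2])   # solve origin_z + t*(q_z - origin_z) = 0
    p = origin + t * (q - origin)
    return p[:2]

# The point opposite the origin lands at (0, 0); points near the origin map far out.
print(to_plane(np.array([0.0, 0.0, -1.0])))                 # [0. 0.]
print(to_plane(np.array([0.0, np.sin(0.1), np.cos(0.1)])))  # large coordinates
```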
Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-07T21:45:07.487Z · LW · GW

EDIT: This post is incorrect. See the reply chain below. After correcting my misunderstanding, I agree with your explanation.

The difference you're describing between vector fields and scalar fields, mathematically, is the difference between composition and precomposition. Here it is more precisely:

  • Pick a change-of-perspective function P(x). The output of P(x) is a matrix that changes vectors from the old perspective to the new perspective.
  • You can apply the change-of-perspective function either before a vector field V(x) or after a vector field. The result is either V(x)P(x) or P(x)V(x).
  • If you apply P(x) before, the vector field applies a flow in the new perspective, and so its arrows "tilt with your head."
  • If you apply P(x) after, the vector field applies a flow in the old perspective, and so the arrows don't tilt with your head.
  • You can replace the vector field V(x) with a 3-scalar field and see the same thing. (A minimal numpy sketch of the two orderings follows this list.)
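Here is a minimal numpy sketch of the two orderings, reading "before" as transforming the field's input and "after" as transforming the field's output (that reading, the constant rotation, and the toy field are my simplifications; as the edit above notes, the corrected picture transforms on both sides):

```python
import numpy as np

# Change of perspective: a fixed rotation ("head tilt") applied at every point.
theta = np.pi / 6
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def V(x):
    # A toy, deliberately nonlinear vector field so that the ordering matters.
    return np.array([x[1] ** 2, x[0]])

x = np.array([1.0, 2.0])
before = V(P @ x)                    # apply P before the field: transform the input point
after = P @ V(x)                     # apply P after the field: transform the output arrow
both = P @ V(np.linalg.inv(P) @ x)   # the "both sides" rule from the corrected reply chain
print(before, after, both)           # the three generally disagree
```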

Since both composition and precomposition apply to both vector fields and scalar fields in the same way, that can't be something that makes vector fields different from scalar fields.

As far as I can tell, there's actually no mathematical difference between a vector field in 3D and a 3-scalar field that assigns a 3D scalar to each point. It's just a choice of language. Any difference comes from context. Typically, vector fields are treated like flows (though not always), whereas scalar fields have no specific treatment.

Spinors are represented as vectors in very specific spaces, specifically spaces where there's an equivalence between matrices and spatial operations. Since a vector is something like the square root of a matrix, a spinor is something like the square root of a spatial operation. You get Dirac Spinors (one specific kind of spinor) from "taking the square root of Lorentz symmetry operations," along with scaling and addition between them.

As far as spinors go, I think I prefer your Lorentz Group explanation for the "what" though I prefer my Clifford Algebra one for the "how". The Lorentz Group explanation makes it clear how to find important spinors. For me, the Clifford Algebra makes it clear how the rest of the spinors arise from those important spinors, and it makes it clear that they're the "correct" representation when you want to sum spatial operations, as you would with wavefunctions. It's interesting that the intuition doesn't transfer the way I expected; the intuition-transfer problem here seems harder than I thought.

Note: Your generalization only accounts for unit vectors, and spinors are NOT restricted to unit vectors. They can be scaled arbitrarily. If they couldn't, ψ†ψ would be uniform at every point. You probably know this, but I wanted to make it explicit.

Comment by sen on [Link] Wavefunctions: from Linear Algebra to Spinors · 2022-12-07T15:42:18.524Z · LW · GW

In the 2D matrix representation, the basis element corresponding to the real part of a quaternion is the identity matrix. So scaling the real part results in scaling the (real part of the) diagonal of the 2D matrix, which corresponds to a scaling operation on the spinor. It incidentally plays the same role on 3D objects: it scales them. Plus, it plays a direct role in rotations when it's -1 (180 degree rotation) or 1 (0 degree rotation). Same as with i, j, and k, the exact effect of changing the real part of the quaternion isn't obvious from inspection when it's summed with other non-zero components. For example, it's hard to tell by inspection what the 2 or the 3j is doing in the quaternion 2+3j.

In total, quaternions represent scaling, rotation, and any mix of the two. I should have been clearer about that in the post. Spinors for quaternions do include any "state changes" resulting from the real part of the quaternion as well as any changes resulting from i, j, and k components, so the spinor does use all degrees of freedom.

The change in representation between 2-quaternion and 4-complex spinors is purely notational. It doesn't affect any of the math or underlying representations. Since a quaternion operation can be represented by a 2x2 complex matrix, you can represent a 2-quaternion operation as the tensor product of two 2x2 complex matrices, which would give you a 4x4 complex matrix. That's where 4x4 gamma matrices come from: each is a tensor product of two 2x2 Pauli matrices. For all calculations and consequences, you get the exact same answers whether you choose to represent the operations and spinors as quaternions or complex numbers.
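To make the notational equivalence concrete, here is a minimal numpy sketch of one common embedding of the quaternion units as 2x2 complex matrices (the particular matrix convention is my choice; several equivalent ones exist):

```python
import numpy as np

# One common embedding: a + bi + cj + dk  ->  [[a + bi, c + di], [-c + di, a - bi]]
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = np.array([[0, 1j], [1j, 0]])

# The defining quaternion relations hold for the matrices.
assert all(np.allclose(m @ m, -one) for m in (i, j, k))
assert np.allclose(i @ j, k) and np.allclose(j @ k, i) and np.allclose(k @ i, j)

# The quaternion 2 + 3j mentioned above, as a 2x2 complex matrix:
print(2 * one + 3 * j)
```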

Comment by sen on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T23:05:07.542Z · LW · GW

I don't know why other people say it, but I can explain why it's nice to say it.

  • log P(x) behaves nicely in comparison to P(x) when it comes to placing iterated bets. When you maximize P(x), you're susceptible to high-risk, high-reward scenarios, even when they lead to failure with probability arbitrarily close to 1. The same is not true when maximizing log P(x); the small simulation after this list illustrates the difference. I'm cheating here since this only really makes sense when big-P refers to "principal" (i.e., the thing growing or shrinking with each bet) rather than "probability".
  • p(x) doesn't vary linearly with the controls we typically have, so calculus intuition tends to break down when used to optimize p(x). Log p(x) does usually vary linearly with the controls we typically have, so we can apply more calculus intuition to optimizing it. I think this happens because of the way we naturally think of "dimensions of" and "factors contributing to" a probability and the resulting quirks of typical maximum entropy distributions.
  • Log p(x) grows monotonically with p(x) whenever x is possible, so the result is the same whether you argmax log p(x) or p(x).
  • p(x) is usually intractable to calculate, but there's a slick trick to approximate it using the Evidence Lower Bound (ELBO), which requires dealing with log p(x) rather than p(x) directly. Saying log p(x) calls that trick to mind more easily than saying just p(x).
  • All the cool papers do it.
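As a small illustration of the first bullet, here is a hedged simulation sketch with made-up bet parameters (a 60% chance to double the staked amount). Staking everything maximizes expected principal in theory, but ends in ruin on almost every run; staking the log-optimal (Kelly) fraction does not:

```python
import numpy as np

rng = np.random.default_rng(0)
p_win, net_odds, n_bets, n_runs = 0.6, 1.0, 100, 10_000  # hypothetical bet parameters

def simulate(fraction):
    """Final wealth after repeatedly staking `fraction` of current wealth."""
    wealth = np.ones(n_runs)
    for _ in range(n_bets):
        wins = rng.random(n_runs) < p_win
        wealth *= np.where(wins, 1 + fraction * net_odds, 1 - fraction)
    return wealth

all_in = simulate(1.0)                             # maximizes expected wealth, in theory
kelly = simulate(p_win - (1 - p_win) / net_odds)   # f* = 0.2 maximizes expected log-wealth
print("all-in: median", np.median(all_in), "fraction ruined", (all_in == 0).mean())
print("kelly : median", np.median(kelly), "fraction ruined", (kelly == 0).mean())
```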
Comment by sen on Fixed Point Discussion · 2020-10-04T11:19:34.904Z · LW · GW
Comment by sen on Doing discourse better: Stuff I wish I knew · 2020-09-30T03:09:50.656Z · LW · GW

Logic and reason indicate the robustness of a claim, but you can have lots of robust, mutually-contradictory claims. A robust claim is one that contradicts neither itself nor other claims it associates with. The other half is how well it resonates with people. Resonance indicates how attractive a claim is through authority, consensus, scarcity, poetry, or whatever else.

Survive and spread through robustness and resonance. That's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but the fact that it's so common for two such people to hold contradictory claims indicates that their real metric is much weaker than truth. I'll posit that the real metric in such scenarios is robustness.

Not all disagreements will separate cleanly into true/false categorizations. Godel proved that one.

Comment by sen on The rationalist community's location problem · 2020-09-28T14:21:30.960Z · LW · GW

That was a fascinating post about the relationship with Berkeley. I wonder how the situation has changed in the last two years since people became more cognizant of the problem. Note that some of the comments there refute your idea that the community never had enough people for multiple hubs. NYC and Melbourne in particular seemed to have plenty of people, but they dissipated after core members repeatedly got recruited by Berkeley.

It seems like Berkeley was overtly trying to eat other communities, but EA did it just by being better at a thing many Rationalists hoped the Rationality Community would be. The "competition" with EA seems healthy, so perhaps that one should be encouraged more explicitly.

I'll note that for all the criticisms leveled at Berkeley in that post, I get the same impression of LW that Evan_Gaensbauer had of Berkeley. The sensible posts here (per my arrogant perspective) are much more life- and community-oriented. Jan_Kulveit in your link gave a tidy explanation of why that is, and I think it's close to spot-on. Your observations about practical plans for secondary hubs are exactly what I'd expect.

Comment by sen on Surviving Petrov Day · 2020-09-28T01:01:55.146Z · LW · GW

Your understanding is correct. Your Petrov Day strategy is the only thing I believe causes harm in your post.

I'll see if I can figure out what exactly was frustrating about the post, but I can't make promises on my ability to introspect to that level or on my ability to remember the origins of my feelings last night.

These are the things I can say with high certainty:

  • I read this post more like a list of serious suggestions interspersed with playful bits. Minus the opener and the Information Flow section, the contents here are all legit.
  • If you put way more puns into the section contents, it would feel less frustrating.

This is a best-guess as to why the post feels frustrating:

  • It feels like you draw a sharp delineation between playful bits and serious suggestions. The opener is all playful. The section headers are all serious. Minus the Information Flow section, the section contents are all serious. The "Metaphor For" lines are all playful.
  • The sharp delineation makes it feel like the playful bits were tossed in to defend the serious suggestions against critical thinking.

This is a weak best-guess, which I could probably improve on if I spent an hour or so thinking about it:

  • I'd guess that puns would help because they would blur the line between serious suggestions and playful bits. This would force the reader to think more about what you're saying for validity. With that, it wouldn't feel like the post is trying to defend itself against critical thinking.
Comment by sen on Surviving Petrov Day · 2020-09-27T21:00:45.946Z · LW · GW

I did -2. It wasn't punishment, and definitely not for saying social penalty. I think social penalties are perfectly fine approaches for some problems, particularly ones where fuzzy coordination yields value greater than the complexity it entails.

I do feel frustration, but definitely not anger. The frustration is over the tenuous connection, which in my mind leads to a false sense of understanding.

I feel relatively new to LW so I'm still trying to figure out when I give a -1 and when I give a -2. I felt that the tenuous connection in combination with the net-negative advice warranted a -2.

EDIT: I undid my -2 in light of this comment thread.

Comment by sen on Surviving Petrov Day · 2020-09-27T03:48:18.156Z · LW · GW

Do you think it makes more sense for you to punish the perpetrator after you're dead or after they're dead?

Replication is a decent strategy until secrets get involved, and this world runs on a lot of secrets that people will not back up. Even when it comes to publicly accessible things, there's a very thick and very ambiguous line between private data and public data. See, for example, the EU's right to be forgotten. This is a minor issue post-nuke, but it means gathering support for a backup effort will be difficult.

Access control is a decent strategy once you manage to set it up and figure out how to appropriately distribute trust. Trusting "your friends" is not a good strategy for exactly the reason evident today: even if they're benign, they can be compromised.

Punishing attackers just flat out doesn't work. That random person in China doesn't care if the US government says hacking is bad. Hackers don't care if selling credit card data is bad. Not even academic researchers care that reverse-engineering is illegal. You're not going to convince the world that your punishments are good, and everyone unconvinced will let it slide. All you'll do is alienate the people most capable of identifying flaws in your strategy. There are a lot of very intelligent people out there that care more about their freedom to explore and act than about net utility. They will build out the plans and infrastructure necessary for the real baddies to do their work. Please do not alienate them by telling them that their moral sensibilities are bad.

Some lessons from a decade in software security.

I like your backup strategies for LessWrong. The connection to nukes is tenuous. I think your Petrov Day strategy does more harm than good.

Comment by sen on The rationalist community's location problem · 2020-09-26T12:10:18.410Z · LW · GW

In light of some of the comments on the supposed impossibility of relocating a hub, I figured I'd suggest a strategy. This post says nothing about the optimality of creating/relocating a hub, it only suggests a method for doing so. I'm obviously not an experienced hub relocator in real life, but evidently, I'll play one on the internet for the sake of discussion. Please read these as an invitation to brainstorm.

Make the new location a natural choice.

  • Host events in the new location. If people feel a desire to spend their holidays and time off in the new location, that's a great start.
  • Pick a good common hotel. For people that visit regularly, this hotel should feel almost like a second home. Rationalists can bump into each other in the hotel, and they can carpool or get meals together.
  • Identify people that can give an "open invitation" for others to visit any time. These people are basically the ones openly ready to make friends with new rationalists. The hope is that, eventually, rationalists start coming into the area to meet up with friends.

Create opportunities to move.

  • Invite rationalists to interview for jobs in the new location. This would directly target people that choose to move for work reasons.
  • Make the new location more homely for people that have had trouble adjusting to their current location. I've known people (especially married people) to move for social and comfort reasons, especially ones that have had difficulty making friends in a new location. Make it easy to socialize in the new location, and make leisure-time activities more accessible, either with good information or social events.
  • Keep track of cheap/shared housing opportunities near the new location. Sometimes people really do move to save money. If people know where the cheap housing is, that's one less excuse not to move. Such a list might even encourage people to get a second home in the new location.
  • Create guides to help people discuss remote work options with managers & HR.

Reinforce every move.

  • Make sure the work situation is stable: support people career-wise in the area. I don't have ideas on how to do this, but if it's a common reason for people moving, then it should be a common reason for people staying.
  • Make the new location homely. Keep track of good leisure-time activities and locations, help stabilize travel by keeping track of transit options, and make sure people moving have chances to socialize and make friends in the area.
  • Keep track of housing opportunities for people to move to increasingly-stable locations. For people that want cheap, keep track of cheap housing. For people that want social, keep track of group housing. For people that want a family, keep track of good neighborhoods and school districts.

Use every move to encourage further relocation.

  • Keep a counter of the number of people that have moved into the area (but not the number of people that have left). There's something oddly satisfying about making/seeing numbers go up.
  • Help new movers host/support events and get started socializing with incoming visitors. Try to get them to do the same things for others that others did to encourage them.
  • Encourage people in the area to spread out work-wise to create more interview opportunities for rationalists not in the area.
Comment by sen on The rationalist community's location problem · 2020-09-24T09:34:03.631Z · LW · GW

We could pick a second hub instead of a new first hub. We don't need consensus or even a plurality. We just need critical mass in a location other than Berkeley. Preferably that new location would cater to a group that's not well-served by Berkeley so we can get more total people into a hub. If we're being careful, we should worry about Berkeley losing its critical mass as a result of the second hub; however, I don't think that's a likely outcome.

There's some loss from splitting people across two hubs rather than getting everyone into one hub. However, I suspect indecision is causing way more long-term loss than the split would. I would recommend first trying to get more people into some hub, then worry about consolidation later.

Comment by sen on What counts as defection? · 2020-07-16T07:35:25.239Z · LW · GW

Understood. I do think it's significant though (and worth pointing out) that a much simpler definition yields all of the same interesting consequences. I didn't intend to just disagree for the sake of getting clearer terminology. I wanted to point out that there seems to be a simpler path to the same answers, and that simpler path provides a new concept that seems to be quite useful.

Comment by sen on What counts as defection? · 2020-07-15T17:48:52.445Z · LW · GW

This can turn into a very long discussion. I'm okay with that, but let me know if you're not so I can probe only the points that are likely to resolve. I'll raise the contentious points regardless, but I don't want to draw focus on them if there's little motivation to discuss them in depth.

I agree that a split in terminology is warranted, and that "defect" and "cooperate" are poor choices. How about this:

  • Coalition members may form consensus on the coalition strategy. Members of a coalition may follow the consensus coalition strategy or violate the consensus coalition strategy.
  • Members of a coalition may benefit the coalition or hurt the coalition.
  • Benefiting the coalition means raising its payoff regardless of consensus. Hurting the coalition means reducing its payoff regardless of consensus. A coalition may form consensus on the coalition strategy regardless of the optimality of that strategy.

Contentious points:

  • I expect that treating utility so generally will lead to paradoxes, particularly when utility functions are defined in terms of other utility functions. That case is extremely important when strategies take trust into account, so I expect such a general notion of utility to produce paradoxes when used to reason about trust.
  • "Utility is not a resource." I think this is a useful distinction when trying to clarify goals, but not a useful distinction when trying to make decisions given a set of goals. In particular, once the payoff tables are defined for a game, the goals must already have been defined, and so utility can be treated as a resource in that game.
Comment by sen on What counts as defection? · 2020-07-15T08:07:56.973Z · LW · GW
The "expected coalition strategy" is, let's say, "no one gets any". By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?

In my view, yes. If we agreed that no one should get any resources, then it's a violation for you to get resources or for you to deceive me into getting resources.

I think the difference is in how the two of us view a strategy. In my view, it's perfectly acceptable for the coalition strategy to include a clause like "it's okay to do X if it's a pareto improvement for our coalition." If that's part of the coalition strategy we agree to, then pareto improvements are never defections. If our coalition strategy does exclude unilateral actions that are pareto improvements, then it is a defection to take such actions.

Another question: how does this idea differ from the core in cooperative game theory?

I'm not a mathematician or an economist, my knowledge on this hasn't been tested, and I just discovered the concept from your reply. Please read the following with a lot of skepticism because I don't know how correct it is.

Some type differences:

  • A core is a set of allocations. I'm going to call it core allocations so it's less confusing.
  • A defection is a change in strategy (per both of our definitions).

As far as the relationship between the two:

  • A core allocation satisfies a particular robustness property: it's stable under coalition refinements. A "coalition refinement" here is an operation in which a coalition is replaced by a partition of that coalition. Being stable under coalition refinements, the coalition will not partition itself for rational reasons. So if you have coalitions {A, B} and {C}, then every core allocation is robust against {A, B} splitting up into {A}, {B}.
  • Defections (per my definition) don't deal strictly with coalition refinements. If one member leaves a coalition to join another, that's still a defection. In this scenario, {A, B}, {C} is replaced with {A}, {B, C}. Core allocations don't deal with this scenario since {A}, {B, C} is not a refinement of {A, B}, {C}. As a result, core allocations are not necessarily robust to defections.

I could be wrong about core allocations being about only refinements. I think I'm safe in saying though that core allocations are robust against some (maybe all) defections.

Comment by sen on What counts as defection? · 2020-07-13T00:27:04.064Z · LW · GW

I think your focus on payoffs is diluting your point. In all of your scenarios, the thing enabling a defection is the inability to view another player's strategy before committing to a strategy. Perhaps you can simplify your definition to the following:

  • "A defect is when someone (or some sub-coalition) benefits from violating their expected coalition strategy."

You can define a function that assigns a strategy to every possible coalition. Given an expected coalition strategy C, if the payoff for any sub-coalition strategy SC is greater than their payoff in C, then the sub-coalition SC is incentivized to defect. (Whether that means SC joins a different coalition or forms their own is irrelevant.)
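Here is a minimal Python sketch of that check for a two-player prisoner's-dilemma-style table. The payoffs and the "everyone cooperates" expected strategy are my own toy example, not anything from the post:

```python
from itertools import chain, combinations, product

players = ("A", "B")
moves = ("C", "D")
# Payoff table: joint strategy -> (payoff to A, payoff to B)
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
expected = ("C", "C")  # the agreed coalition strategy

def subcoalitions(n):
    return chain.from_iterable(combinations(range(n), r) for r in range(1, n + 1))

def total(joint, members):
    return sum(payoffs[joint][m] for m in members)

# A sub-coalition is incentivized to defect if it can raise its own total payoff
# by changing only its members' moves while everyone else plays the expected strategy.
for members in subcoalitions(len(players)):
    for alt in product(moves, repeat=len(members)):
        joint = list(expected)
        for m, move in zip(members, alt):
            joint[m] = move
        joint = tuple(joint)
        if total(joint, members) > total(expected, members):
            names = ",".join(players[m] for m in members)
            print(f"{{{names}}} gains by playing {joint} instead of {expected}")
```

In this toy table each individual player shows up as incentivized to defect, while the two-player coalition as a whole does not.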

This makes a few things clear that are hidden in your formalization. Specifically:

  • The main difference between this framing and the framing for Nash Equilibrium is the notion of an expected coalition strategy. Where there is an expected coalition strategy, one should aim to follow a "defection-proof" strategy. Where there is no expected coalition strategy, one should aim to follow a Nash Equilibrium strategy.
  • Your Proposition 3 is false. You would need a variant that takes coalitions into account.

I believe all of your other theorems and propositions follow from the definition as well.

This has other benefits as well.

  • It factors the payoff table into two tables that are easier to understand: coalition selection and coalition strategy selection.
  • It's better-aligned with intuition. Defection in the colloquial sense is when someone deserts "their" group (i.e., joins a new coalition in violation of the expectation). Coalition selection encodes that notion cleanly. The payoff tables for coalitions cleanly encode the more generalized notion of "rational action" in scenarios where such defection is possible.
Comment by sen on Lurking More Before Joining Complex Conversations · 2020-07-12T00:40:25.794Z · LW · GW

"That's a good point, but I think you're behind on some of the context on what we were discussing. Can you try to get more of a feel for the conversation before joining it?"

  • It gives the person an understandable reason for their misstep. ("Sorry, I must have misunderstood what you were talking about.")
  • It gives the person a reason to stick around. ("I messed up. If I want to correct this, I need to listen and get more context.")
  • It adjusts the person's behavior in future interactions. ("I should get a feel for the conversation before joining it to avoid messing up in the future.")
  • It's no more aggressive than it needs to be to allow you to disregard what the person said and continue your conversation.
  • What little aggression is there is reasoned and grounded in something easily understood, so it doesn't come off as rude.

The above sounds better in text than it does in an actual conversation, but the same principles should apply in an actual conversation. "Name, hold on. I think you're missing some context. Can you listen for a few minutes to catch up?"

Comment by sen on Epistemic Laws of Motion · 2017-07-08T20:49:24.383Z · LW · GW

I don't see how your comment contradicts the part you quoted. More pressure doesn't lead to more change (in strategy) if resistance increases as well. That's consistent with what /u/SquirrelInHell stated.

Comment by sen on Epistemic Laws of Motion · 2017-07-08T08:29:25.055Z · LW · GW

That mass corresponds to "resistance to change" seems fairly natural, as does the correspondence between "pressure to change" and impulse. The strange part seems to be the correspondence between "strategy" and velocity. Distance would be something like strategy * time.

Does a symmetry in time correspond to a conservation of energy? Is energy supposed to correspond to resistance? Maybe, though that's a little hard to interpret, so it's a little difficult to apply Lagrangian or Hamiltonian mechanics. The interpretation of energy is important. Without that, the interpretation of time is incomplete and possibly incoherent.

Is there an inverse correspondence between optimal certainty in resistance * strategy (momentum) and optimal certainty in strategy * time (distance)? I'd guess so, in which case findings from quantum uncertainty principles and information geometry may apply.

Does strategy impact one's perception of "distances" (strategy * time) and timescales? Maybe, so maybe findings from special relativity would apply. A universally-observable distance isn't defined though, and that precludes a more coherent application of special/general relativity. Some universal observables should be stated. Other than the obvious objectivity benefits, this could help more clearly define relationships between variables of different dimensions. This one isn't that important, but it would enable much more interesting uses of the theory.

Comment by sen on The Unreasonable Effectiveness of Certain Questions · 2017-07-05T04:55:03.617Z · LW · GW

The process you went through is known in other contexts as decategorification. You attempted to reduce the level of abstraction, noticed a potential problem in doing so, and concluded that the more abstract notion was not as well-conceived as you imagined.

If you try to enumerate questions related to a topic (Evil), you will quickly find that you (1) repeatedly tread the same ground, (2) are often unable to combine findings from multiple questions in useful ways, and (3) are often unable to identify questions worth answering, let alone a hierarchy that suggests which questions might be more worth answering than others.

What you are trying to identify are the properties and structure of evil. A property of Evil is a thing that must be preserved in order for Evil to be Evil. The structure of Evil is the relationship between Evil and other (Evil or non-Evil) entities.

You should start by trying to identify the shape of Evil by identifying its border, where things transition from Evil to non-Evil and vice versa. This will give you an indication of which properties are important. From there, you can start looking at how Evil relates to other things, especially in regards to its properties. This will give you some indication of its structure. Properties are important for identifying Evil clearly. Structure is important for identifying things that are equivalent to Evil in all ways that matter. It is often the case that the two are not the same.

If you want to understand this better, I recommend looking into category theory. The general process of identifying ambiguities, characterizing problems in the right way, applying prior knowledge, and gluing together findings into a coherent whole is fairly well-worn. You don't have to start from scratch.

Comment by sen on [deleted post] 2017-07-05T00:08:35.766Z

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, ..."

I don't see how the existence of subagents complicates things in any substantial way. If the existence of competing subagents is a hindrance to optimality, then one should aim to align or eliminate subagents. (Isn't this one of the functions of meditation?) Obviously this isn't always easy, but the goal is at least clear in this case.

It is nonsensical to treat animal welfare as a special case of happiness and suffering. This is because animal happiness and suffering can only be understood through analogical reasoning, not through logical reasoning. A logical framework of welfare can only be derived through subjects capable of conveying results since results are subjective. The vast majority of animals, at least so far, cannot convey results, so we need to infer results on animals based on similarities between animal observables and human observables. Such inference is analogical and necessarily based entirely on human welfare.

If you want a theory of happiness and suffering in the intellectual sense (where physical pleasure and suffering are ignored), I suspect what you want is a theory of the ideals towards which people strive. For such an endeavor, I recommend looking into category theory, in which ideals are easily recognizable, and whose ideals seem to very closely (if not perfectly) align with intuitive notions.

Comment by sen on Idea for LessWrong: Video Tutoring · 2017-07-02T13:05:51.376Z · LW · GW

I meant it as "This seems like a clear starting point." You're correct that I think it's easy to not get lost with those two starting points.

In my experience with other fields, it's easy to get frustrated and give up. Getting lost is quite a bit more rare. You'll have to click through a hundred dense links to understand your first paper in machine learning, as with any other field. If you can trudge through that, you'll be fine. If you can't, you'll at least know what to ask.

Also, are you not curious about how much initiative people have regarding the topics they want to learn?

Comment by sen on Idea for LessWrong: Video Tutoring · 2017-07-02T08:38:56.399Z · LW · GW

A question for people asking for machine learning tutors: have you tried just reading through OpenAI blog posts and running the code examples they embed or link? Or going through the TensorFlow tutorials?

Comment by sen on Open thread, June 26 - July 2, 2017 · 2017-07-02T01:12:58.089Z · LW · GW

Yes. I follow authors, I ask avid readers similar to me for recommendations, I observe best-of-category polls, I scan through collections of categorized stories for topics that interest me, I click through "Also Liked" and "Similar" links for stories I like. My backlog of things to read is effectively infinite.

Comment by sen on What useless things did you understand recently? · 2017-07-02T00:28:44.853Z · LW · GW

I see. Thanks for the explanation.

Comment by sen on What useless things did you understand recently? · 2017-07-01T20:55:23.489Z · LW · GW

How so? I thought removing the border on each negation was the right way.

I gave an example of where removing the border gives the wrong result. Are you asking why "A is a subset of Not Not A" is true in a Heyting algebra? I think the proof goes like this:

  • (1) (a and not(a)) = 0
  • (2) By #1, (a and not(a)) is a subset of 0
  • (3) For all c,x,b, ((c and x) is a subset of b) = (c is a subset of (x implies b))
  • (4) By #2 and #3, a is a subset of (not(a) implies 0)
  • (5) For all c, not(c) = (c implies 0)
  • (6) By #4 and #5, a is a subset of not(not(a))

Maybe your method is workable when you interpret a Heyting subset to be a topological superset? Then 1 is the initial (empty) set and 0 is the terminal set. That doesn't work with intersections though. "A and Not A" must yield 0, but the intersection of two non-terminal sets cannot possibly yield a terminal set. The union can though, so I guess that means you'd have to represent And with a union. That still doesn't work though because "Not A and Not Not A" must yield 0 in a Heyting algebra, but it's missing the border of A in the topological method, so it again isn't terminal.

I don't see how the topological method is workable for this.

Comment by sen on What useless things did you understand recently? · 2017-07-01T07:48:15.518Z · LW · GW

I guess today I'm learning about Heyting algebras too.

I don't think that circle method works. "Not Not A" isn't necessarily the same thing as "A" in a Heyting algebra, though your method suggests that they are the same. You can try to fix this by adding or removing the circle borders through negation operations, but even that yields inconsistent results. For example, if you add the border on each negation, "A or Not A" yields 1 under your method, though it should not in a Heyting algebra. If you remove the border on each negation "A is a subset of Not Not A" is false under your method, though it should yield true.

I think it's easier to think of Heyting algebra in terms of functions and arguments. "A implies B" is a function that takes an argument of type A and produces an argument of type B. 0 is null. "A and B" is the set of arguments a,b where a is of type A and b is of type B. If null is in the argument list, then the whole argument list becomes null. "Not A" is a function that takes an argument of type A and produces 0. "Not Not A" can be thought of in two ways: (1) it takes an argument of type Not A and produces 0, or (2) it takes an argument of type [a function that takes an argument of type A and produces 0] and produces 0.

If "(A and B and C and ...) -> 0" then "A -> (B -> (C -> ... -> 0))". If you've worked with programming languages where lambda functions are common, it's like taking a function of 2 arguments and turning it into a function of 1 argument by fixing one of the arguments.

I don't see it on the Wikipedia page, but I'd guess that "A or B" means "(Not B implies A) and (Not A implies B)".

If you don't already, I highly recommend studying category theory. Most abstract mathematical concepts have simple definitions in category theory. The category theoretic definition of Heyting algebras on Wikipedia consists of 6 lines, and it's enough to understand all of the above except the Or relation.
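For what it's worth, here is a minimal Python sketch of the standard open-set model of a Heyting algebra, with negation taken as the interior of the complement rather than the bare complement. The two-point Sierpinski space is my choice of example. It reproduces the behaviour described above: A is a subset of Not Not A, Not Not A need not equal A, and "A or Not A" need not be 1:

```python
# Open sets of the two-point Sierpinski space {0, 1}; the family is closed under
# union and intersection, so it forms a Heyting algebra (join = union, meet = intersection).
X = frozenset({0, 1})
opens = [frozenset(), frozenset({0}), X]

def interior(s):
    """Largest open set contained in s."""
    return max((o for o in opens if o <= s), key=len)

def neg(a):
    """Heyting negation: the interior of the complement (not the bare complement)."""
    return interior(X - a)

A = frozenset({0})
print(A <= neg(neg(A)))              # True:  A is a subset of Not Not A
print(neg(neg(A)) == A)              # False: Not Not A is not the same as A
print((A | neg(A)) == X)             # False: "A or Not A" is not 1
print((A & neg(A)) == frozenset())   # True:  "A and Not A" is 0
```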

Comment by sen on [deleted post] 2017-06-30T08:12:17.537Z

I am, and thanks for answering. Keep in mind that there are ways to make your intuition more reliable, if that's a thing you want.

Comment by sen on [deleted post] 2017-06-30T07:40:31.802Z

Fair enough. I have a question then. Do you personally agree with Bob?

Comment by sen on [deleted post] 2017-06-30T07:38:31.475Z

Algebraic reasoning is independent of the number system used. If you are reasoning about utility functions in the abstract and if your reasoning does not make use of any properties of numbers, then it doesn't matter what numbers you use. You're not using any properties of finite numbers to define anything, so the fact of whether or not these numbers are finite is irrelevant.

Comment by sen on [deleted post] 2017-06-30T07:25:53.177Z

The original post doesn't require arbitrarily fine distinctions, just 2^trillion distinctions. That's perfectly finite.

Your comment about Bob not assigning a high utility value to anything is equivalent to a comment stating that Bob's utility function is bounded.

Comment by sen on [deleted post] 2017-06-30T07:01:50.635Z

It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, bounded utility functions cannot be decomposed into independent (additive or multiplicative, these are the only two options) subcomponents if the number of subcomponents is unknown. Any utility function that is summed or multiplied over an unknown number of independent (e.g.) societies must be unbounded*. Does that mean you believe that utility functions can't be aggregated over independent societies or that no two societies can contribute independently to the utility function? The latter implies that a utility function cannot be determined without knowing about all societies, which would make the concept useless. Do you believe that utility functions can be aggregated at all beyond the individual level?

  • Keep in mind that "unbounded" here means "arbitrarily additive". In the multiplicative case, even if a utility function is always less than 1, if an individual's utility can be made arbitrarily close to 0, then it's still unbounded. Such an individual still has enough to gain by betting on a trillion coin tosses.

You mentioned that a utility function should be seen as a proxy to decision making. If decisions can be independent, then their contributions to the definition of a utility function must be independent*. If the utility function is bounded, then the number of independent decisions something can decide between must also be bounded. Maybe that makes sense for individuals since you distinguished a utility function as a summary of "current" decision-making, and any individual is presumably limited in their ability to decide between independent outcomes at any given point in time. Again, though, this causes problems for aggregate utility functions.

  • Consider the functor F that takes any set of decisions (with inclusion maps between them) to the least-assuming utility function consistent with them. There exists a functor G that takes any utility function to the maximal set of decisions derivable from it. F,G together form a contravariant adjunction between set of decisions and utility functions. F is then left-adjoint to G. Therefore F preserves finite coproducts as finite products. Therefore for any disjoint union of decisions A,B, the least-assuming utility function defined over them exists and is F(A+B)=F(A)*F(B). The proof is nearly identical for covariant adjunctions.

It seems like nonsense to say that utility functions can't be aggregated. A model of arbitrary decision making shouldn't suddenly become impossible just because you're trying to model, say, three individuals rather than one. The aggregate has preferential decision making just like the individual.

Comment by sen on [deleted post] 2017-06-29T17:34:57.855Z

Also it's unclear to me what the connection is between this part and the second.

My bad, I did a poor job explaining that. The first part is about the problems of using generic words (evolution) with fuzzy decompositions (mates, predators, etc) to come to conclusions, which can often be incorrect. The second part is about decomposing those generic words into their implied structure, and matching that structure to problems in order to get a more reliable fit.

I don't believe that "I don't know" is a good answer, even if it's often the correct one. People have vague intuitions regarding phenomena, and wouldn't it be nice if they could apply those intuitions reliably? That requires a mapping from the intuition (evolution is responsible) to the problem, and the mapping can only be made reliable once the intuition has been properly decomposed into its implied structure, and even then, only if the mapping is based on the decomposition.

I started off by trying to explain all of that, but realized that there is far too much when starting from scratch. Maybe someday I'll be able to write that post...

Comment by sen on [deleted post] 2017-06-29T17:25:32.380Z

The cell example is an example of evolution being used to justify contradictory phenomena. The exact same justification is used for two opposing conclusions. If you thought there was nothing wrong with those two examples being used as they were, then there is something wrong with your model. They literally use the exact same justification to come to opposing conclusions.

The second set of explanations have fewer, more reliably-determinable dependencies, and their reasoning is more generally applicable.

That is correct, they have zero prediction and compression power. I would argue that the same can be said of many cases where people misuse evolution as an explanation.

When people falsely pretend to have knowledge of some underlying structure or correlate, they are (1) lying and (2) increasing noise, which by various definitions is negative information. When people use evolution as an explanation in cases where it does not align with the implications of evolution, they are doing so under a false pretense. My suggested approach (1) is honest and (2) conveys information about the lack of known underlying structure or correlate.

I don't know what you mean by "sensible definition". I have a model for that phrase, and yours doesn't seem to align with mine.

Comment by sen on [deleted post] 2017-06-29T05:17:51.586Z

Would your answer change if I let you flip the coin until you lost? Based on your reasoning, it should not. Despite it being an effectively-guaranteed extinction, the infinitesimal chance is overwhelmed by the gains in the case of infinitely many good coin flips.

I would not call the Kelly strategy risk-averse. I imagine that word to mean "grounded in a fantasy where risk is exaggerated". I would call the second strategy risk-prone. The difference is that the Kelly strategy ends up being the better choice in realistic cases, whereas the second strategy ends up being the better choice in the extraordinarily rare wishful cases. In that sense, I see this question as one that differentiates people that prefer to make decisions grounded in reality from those that prefer to make decisions grounded in wishful thinking. The utilitarian approach then is prone to wishful thinking.

Still, I get your point. There may exist a low-chance scenario for which I would, with near certainty, trade the Kelly-heaven world for a second-hell world. To me, that means there exists a scenario that could lull me into gambling on wildly-improbable wishful thinking. Though such scenarios may exist, and though I may bet on such scenarios when presented with them, I don't believe it's reasonable to bet on them. I can't tell if you literally believe that it's reasonable to bet on such scenarios or if you're imagining something wholly different from me.

Comment by sen on [deleted post] 2017-06-28T16:37:29.974Z

Dagon: You can artificially bound utility to some arbitrarily low "bankruptcy" point. The lack of a natural one isn't relevant to the question of whether a utility function makes sense here. On treating utility as a resource, if you can make decisions to increase or decrease utility, then you can play the game. Your basic assumption seems to be that people can't meaningfully make decisions that change utility, at which point there is no point in measuring it, as there's nothing anyone can do about it.

I believe the point about unintuitively high utilities and upper-bounded utilities deserves another post.

Comment by sen on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-10T13:02:16.688Z · LW · GW

Regarding the Buckingham Pi Theorem (BPT), I think I can double my recommendation that you try to understand the Method of Lagrange Multipliers (MLM) visually. I'll try to explain in the following paragraph knowing that it won't make much sense on first reading.

For the Method of Lagrange Multipliers, suppose you have some number of equations in n variables. Consider the n-dimensional space containing the set of all solutions to those equations. The set of solutions describes a k-dimensional manifold (meaning the surface of the manifold forms a k-dimensional space), where k depends on the number of independent equations you have. The set of all points perpendicular to this manifold (the null space, or the space of points that, projected onto the manifold, give the zero vector) can be described by an (n-k)-dimensional space. Any (n-k)-dimensional space can be generated (by vector scaling and vector addition) from (n-k) independent vectors. For the Buckingham Pi Theorem, replace each vector with a matrix/group, vector scaling with exponentiation, and vector addition with multiplication. Your Buckingham Pi exponents are Lagrange multipliers, and your Pi groups are Lagrange perpendicular vectors (the gradient/normal vectors of your constraints/dimensions).
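To make the null-space picture concrete, here is a minimal sympy sketch for a standard textbook example (the Reynolds-number variable set is my choice of illustration, not something from the thread). Columns are the dimensions of density, velocity, length, and viscosity in terms of mass, length, and time; the null space of the dimensional matrix gives the exponents of the dimensionless Pi group:

```python
from sympy import Matrix

# Rows: M, L, T.  Columns: rho [M L^-3], v [L T^-1], L [L], mu [M L^-1 T^-1].
dim = Matrix([[ 1,  0, 0,  1],
              [-3,  1, 1, -1],
              [ 0, -1, 0, -1]])

for vec in dim.nullspace():
    print(vec.T)
# One basis vector, proportional to [-1, -1, -1, 1]: mu / (rho * v * L) is
# dimensionless, and its reciprocal rho * v * L / mu is the Reynolds number.
```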

I guess in that sense, I can see why people would make the jump to Lie groups. The Pi Groups / basis vectors form the generator of any other vector in that dimensionless space, and they're obviously invertible. Honestly, I haven't spent much time with Lie Groups and Lie Algebra, so I can't tell you why they're useful. If my earlier explanation of dimensionless quantities holds (which, after seeing the Buckingham Pi Theorem, I'm even more convinced that it does), then it has something to do with symmetry with respect to scale. The reason I say "scale" as opposed to any other x * x → x quantity is that the scale kind of dimensionlessness seems to pop up in a lot of dimensionless quantities specific to fluid dynamics, including the Reynolds number.

Sorry, I know that didn't make much sense. I'm pretty sure it will though once you go through the recommendations in my earlier reply.

Regarding the Reynolds number, I suspect you're not going to see the difference between the dimensional and the dimensionless quantities until you try solving that differential equation at the bottom of the page. Try it both with and without converting to dimensionless quantities, and make sure to keep track of the semantics of each term as you go through the process. Here's one that's worked out for the dimensionless case. If you try solving it for the non-dimensionless case, you should see the problem.

It's getting really late. I'll go through your comments on similarity variables in a later reply.

Thanks for the references and your comments. I've learned a lot from this discussion.

Comment by sen on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-09T10:44:07.842Z · LW · GW

See my response below to WhySpace on getting started with group theory through category theory. For any space-oriented field, I also recommend looking at the topological definition of a space. Also, for any calculus-heavy field, I recommend meditating on the Method of Lagrange Multipliers if you don't already have a visual grasp of it.

I don't know of any resource that tackles the problem of developing models via group theory. Developing models is a problem of stating and applying analogies, which is a problem in category theory. If you want to understand that better, you can look through the various classifications of functors since the notion of a functor translates pretty accurately to "analogy".

I have no background in fluid dynamics, so please filter everything I say here through your own understanding, and please correct me if I'm wrong somewhere.

I don't think there's any inherent relationship between dimensionless parameters and group theory. The reason is that dimensionless quantities can refer to too many things (i.e., they're not really dimensionless, and different kinds of dimensionlessness have different properties... or rather, they may be dimensionless, but they're not typeless). Consider that the !∘sqrt∘ln of a dimensionless quantity is also technically a dimensionless quantity while also being almost certainly useless and uninterpretable. I suppose if you can rewrite an equation in terms of dimensionless quantities whose relationships are restricted to have certain properties, then you can treat them like other well-known objects, and you can throw way more math at them.

For example, suppose your "dimensionless" quantity is a scaling parameter such that scale * scale → scale (the product of two scaling operations is equivalent to a single scaling operation). By converting your values to scales, you've gained a new operation to work with due to not having to re-translate your quantities on each successive multiplication: element-wise exponentiation. I'd personally see that as a gateway to applying generating series (because who doesn't love generating series?), but I guess a more mechanics-y application of that would be solving differential equations, which often require exponentiating things.

Any time you have a set of X quantities that can be applied to one another to get another of the X quantities, you have a group of some sort (with some exceptions). That's what's going on with the scaling example (x * x → x), and that's what's not going on with the !∘sqrt∘ln example. The scaling example just happens to be a particularly simple example of a group. You get less trivial examples when you have multiple "dimensionless" quantities that can interact with one another in standard ways. For example, if vector addition, scaling, and dot products are sensible, your vectors can form a Hilbert space, and you can use wonderful things like angles and vector calculus to meaningful effect.
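Here's a crude sketch of that contrast, using the two examples above. Positive scale factors behave like a group under multiplication, so integer powers never leave the "type"; the !∘sqrt∘ln construction is dimensionless too, but it isn't even closed under composition with itself. (The spot checks below are numerical sanity checks, not proofs.)

```python
import math
import random

# Scale factors: positive reals under multiplication. Closure, identity, and
# inverses hold, so repeated application (exponentiation) stays a scale factor.
scales = [random.uniform(0.1, 10.0) for _ in range(50)]
assert all(a * b > 0 for a in scales for b in scales)          # closure
assert all(abs(a * (1.0 / a) - 1.0) < 1e-12 for a in scales)   # inverses, identity = 1.0
assert all(a ** 3 > 0 for a in scales)                         # powers are still scales

# Contrast: factorial(sqrt(ln(x))) is also "dimensionless", but feeding its
# output back into itself quickly leaves the domain where ln and sqrt are defined.
def weird(x: float) -> float:
    return math.gamma(math.sqrt(math.log(x)) + 1.0)  # factorial via the gamma function

value = weird(5.0)          # fine once...
try:
    weird(weird(value))     # ...but not closed under composition
except ValueError:
    print("composition fell out of the domain")
```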

I can probably give a better answer if I know more precisely what you're referring to. Do you have examples of fluid dynamicists simplifying equations and citing group theory as the justification?

Comment by sen on Beware of identifying with schools of thought · 2016-12-06T09:23:08.202Z · LW · GW

Or is it that a true sophisticate would consider where and where not to apply sophistry?

Comment by sen on Making intentions concrete - Trigger-Action Planning · 2016-12-06T09:11:27.090Z · LW · GW

Information on the discussion board is front-facing for some time, then basically dies. Yes, you can use the search to find it again, but that becomes less reliable as discussion of TAPs increases. It's also antithetical to the whole idea behind TAP.

The wiki is better suited for acting as a repository of information.

Comment by sen on Beware of identifying with schools of thought · 2016-12-06T07:12:37.138Z · LW · GW

I don't understand what point you're making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can't have them or their equivalent. It's obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It's obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them, because... well, that's what neurons do, and neurons seem to be doing the bulk of the heavy lifting as far as thinking goes.

Where we disagree is in saying that all concepts that our neurons recognize are equivalent and that they should be reasoned about in the same way. There are clearly some notions that we recognize as being valid only after seeing sufficient evidence. For these notions, I think Bayesian reasoning is perfectly well-suited. There are also clearly notions we recognize as being valid for which no evidence is required, only usefulness, and sometimes not even that. For these, I think we need something else. Bayesian reasoning cannot deal with this second kind because their acceptability has nothing to do with evidence.

You argue that this second kind is irrelevant because these things exist solely in people's minds. The problem is that the same concepts recur again and again in many people's minds. I think I would agree with you if we only ever had to deal with a physical world in which people's minds did not matter all that much, but that's not the world we live in. If you want to reliably convey your ideas to others, if you want to understand how people think at a more fundamental level, if you want your models to be useful to someone other than yourself, if you want to develop ideas that people will recognize as valid, if you want to generalize ideas that other people have, if you want your thoughts to be integrated with those of a community for mutual benefit, then you cannot ignore these abstract patterns, because they constitute such a vast amount of how people think.

These patterns also, incidentally, have a tremendous impact on how your own brain thinks and on the kinds of patterns your brain lets you consciously recognize. If you want to do better at generalizing your own ideas in reliable and useful ways, then you need to understand how they work.

For what it's worth, I do think there are physically-grounded reasons for why this is so.

Comment by sen on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-06T06:19:54.539Z · LW · GW

"Group" is a generalization of "symmetry" in the common sense.

I can explain group theory pretty simply, but I'm going to suggest something else. Start with category theory. It is doable, and it will give you the magical ability of understanding many math pages on Wikipedia, or at least the hope of being able to understand them. I cannot overstate how large an advantage this gives you when trying to understand mathematical concepts. Also, I don't believe starting with group theory will give you any advantage when trying to understand category theory, and you're going to want to understand category theory if you're interested in reasoning.

When I was getting started with category theory, I went back and forth between several pages (Category Theory, Functor, Universal Property, Universal Object, Limits, Adjoint Functors, Monomorphism, Epimorphism). Here are some of the insights that made things click for me:

  • An "object" in category theory corresponds to a set in set theory. If you're a programmer, it's easier to think of a single categorical object as a collection (class) of OOP objects. It's also valid and occasionally useful to think of a single categorical object as a single OOP object (e.g., a collection of fields).
  • A "morphism" in category theory corresponds to a function in set theory. If you think of a categorical object as a collection of OOP objects, then a morphism takes as input a single OOP object at a time.
  • It's perfectly valid for a diagram to contain the same categorical object twice. Diagrams only show relations, and it's perfectly valid for an OOP object to be related to another OOP object of the same class. When looking at commutative diagrams that seem to contain the same categorical object twice, think of them as distinct categorical objects.
  • Diagrams don't only show relationships between OOP objects. They can also show relationships between categorical objects. For example, a diagram might state that there is a bijection between two categorical objects.
  • You're not always going to have a natural transformation between two functors between the same pair of categories.
  • When trying to understand universal properties, the following mapping is useful (look at the diagrams on Wikipedia): A is the Platonic Form of Y, and U is like the fire in Plato's cave: it projects only some subset of the aspects of being like A.
  • The duality between categorical objects and OOP objects is critical to understanding the difference between any diagram and its dual (reversed-morphisms). Recognizing this makes it much easier to understand limits and colimits.

Once you understand these things, you'll have the basic language down to understand group theory without much difficulty.
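If you want the OOP analogy in code form, here's a minimal sketch (my own toy example): two categorical objects as Python classes, morphisms as plain functions between them, and composition as function composition. A commutative diagram then just asserts that two composite paths between the same objects are equal as functions.

```python
from dataclasses import dataclass

# Two "categorical objects", viewed as classes of OOP objects.
@dataclass
class Celsius:
    degrees: float

@dataclass
class Fahrenheit:
    degrees: float

# "Morphisms" take one OOP object at a time and land in another class.
def c_to_f(c: Celsius) -> Fahrenheit:
    return Fahrenheit(c.degrees * 9 / 5 + 32)

def f_to_c(f: Fahrenheit) -> Celsius:
    return Celsius((f.degrees - 32) * 5 / 9)

# Composition of morphisms is function composition; the identity morphism is
# the identity function. These two morphisms are mutually inverse, which is
# the "bijection between two categorical objects" case from the bullets above.
def compose(g, f):
    return lambda x: g(f(x))

round_trip = compose(f_to_c, c_to_f)
assert abs(round_trip(Celsius(25.0)).degrees - 25.0) < 1e-9
```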

Comment by sen on Beware of identifying with schools of thought · 2016-12-06T05:31:20.912Z · LW · GW

The distinction between "ideal" and "definition" is fuzzy the way I'm using it, so you can think of them as the same thing for simplicity.

Symmetry is an example of an ideal. It's not a thing you directly observe. You can observe a symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you've never seen. You can construct a symmetry that you've never seen, and you can do it algorithmically based on your idea of what symmetries are, given a bit of time to think about the problem. You can even construct symmetries that, at first glance, would not look like a symmetry to someone else, and you can convince that someone else that what you've constructed is a symmetry.
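As a small illustration of constructing symmetries algorithmically (my own sketch, not anything you need to verify): start from two symmetries of a square written as corner permutations and close the set under composition. The closure procedure mechanically produces all eight symmetries, including ones you never wrote down.

```python
from itertools import product

# Symmetries of a square as permutations of its corner labels 0..3.
identity = (0, 1, 2, 3)
rotate   = (1, 2, 3, 0)   # 90-degree rotation
reflect  = (1, 0, 3, 2)   # a reflection

def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(4))

# Close {identity, rotate, reflect} under composition.
group = {identity, rotate, reflect}
changed = True
while changed:
    changed = False
    for p, q in list(product(group, repeat=2)):
        r = compose(p, q)
        if r not in group:
            group.add(r)
            changed = True

print(len(group))  # 8: every rotation and reflection of the square
```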

The set of natural numbers is an example of something that's defined, not observed. Each natural number is defined sequentially, starting from 1.

Addition is an example of something that's defined, not observed. The general notion of a bottle is an ideal.
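To put "defined, not observed" in code (again just my own toy sketch, starting the naturals at 1 to match the above): the whole set of naturals and the addition operation are pinned down by a starting point and a successor rule, with no appeal to evidence anywhere.

```python
from dataclasses import dataclass
from typing import Optional

# A Peano-style definition: a natural number is either the starting point (1)
# or the successor of another natural number.
@dataclass(frozen=True)
class Nat:
    pred: Optional["Nat"] = None  # None marks the starting point

ONE = Nat()

def succ(n: Nat) -> Nat:
    return Nat(pred=n)

# Addition is defined by recursion on the second argument, not discovered:
# n + 1 = succ(n), and n + succ(m) = succ(n + m).
def add(n: Nat, m: Nat) -> Nat:
    if m.pred is None:        # m is 1
        return succ(n)
    return succ(add(n, m.pred))

def to_int(n: Nat) -> int:
    return 1 if n.pred is None else 1 + to_int(n.pred)

two, three = succ(ONE), succ(succ(ONE))
assert to_int(add(two, three)) == 5
```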

In terms of philosophy, an ideal is the Platonic Form of a thing. In terms of category theory, an ideal is an initial or terminal object. In terms of category theory, a definition is a commutative diagram.

I didn't say these things weren't influenced by past observations and correlations. I said past observations and correlations were irrelevant for distinguishing them. Meaning, for example, you can distinguish between more natural numbers than your past experiences should allow.

Comment by sen on Making intentions concrete - Trigger-Action Planning · 2016-12-05T06:56:57.645Z · LW · GW

Fair enough, though I disagree with the idea of using the discussion board as a repository of information.

Comment by sen on Beware of identifying with schools of thought · 2016-12-05T06:09:18.239Z · LW · GW

Is there ever a case where priors are irrelevant to a distinction or justification? That's the difference between pure Bayesian reasoning and alternatives.

OP gave the example of the function of organs for a different purpose, but it works well here. To a pure Bayesian reasoner, there is no difference between saying that the heart has a function and saying that the heart is correlated with certain behaviors, because priors alone are not sufficient to distinguish the two. Priors alone are not sufficient to distinguish the two because the distinction has to do with ideals and definitions, not with correlations and experience.

If a person has issues with erratic blood flow leading to some hospital visit, why should we look at the heart for problems? Suppose there were a problem found with the heart. Why should we address the problem at that level as opposed to fixing the blood flow issue in some more direct way? What if there was no reason for believing that the heart problem would lead to anything but the blood flow problem? What's the basis for addressing the underlying cause as opposed to addressing solely the issue that more directly led to a hospital visit?

There is no basis unless you recognize that addressing underlying causes tends to resolve issues more cleanly, more reliably, more thoroughly, and more persistently than addressing symptoms, and that the underlying cause can only be identified by distinguishing erroneous functioning from other abnormalities. Pure Bayesian reasoners can't make the distinction because the distinction has to do with ideals and definitions, not with correlations and experience.

> It's really hard for me to see under what model of the world (correct) Bayesian analysis could be misleading.

If you wanted a model that was never misleading, you might as well use first-order logic to explain everything. Or go straight for the vacuous case and don't try to explain anything. The problem is that that doesn't generalize well, and it's too restrictive. The point is to broaden your notion of reasoning so that you consider alternative justifications and more applications.