Do agents with (mutually known) identical utility functions but irreconcilable knowledge sometimes fight? 2023-08-23T08:13:05.631Z
Apparently, of the $195 million the DoD allocated in University Research Funding Awards in 2022, more than half concerned AI or compute hardware research 2023-07-07T01:20:20.079Z
Using Claude to convert dialog transcripts into great posts? 2023-06-21T20:19:44.403Z
The Gom Jabbar scene from Dune is essentially a short film about what Rationality is for 2023-03-22T08:33:38.321Z
Will chat logs and other records of our lives be maintained indefinitely by the advertising industry? 2022-11-29T00:30:46.415Z
[Video] How having Fast Fourier Transforms sooner could have helped with Nuclear Disarmament - Veritasium 2022-11-03T21:04:35.839Z
The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter 2022-11-03T06:47:56.376Z
I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background 2022-05-18T04:15:11.681Z
Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that they were part of the criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications 2022-04-01T07:09:14.527Z
Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers 2022-03-14T07:27:06.955Z
Grabby Aliens could be Good, could be Bad 2022-03-07T01:24:43.769Z
Would (myopic) general public good producers significantly accelerate the development of AGI? 2022-03-02T23:47:09.322Z
Are our community grouphouses typically rented, or owned? 2022-03-02T03:36:58.251Z
We need a theory of anthropic measure binding 2021-12-30T07:22:34.288Z
Venture Granters, The VCs of public goods, incentivizing good dreams 2021-12-17T08:57:30.858Z
Is progress in ML-assisted theorem-proving beneficial? 2021-09-28T01:54:37.820Z
Auckland, New Zealand – ACX Meetups Everywhere 2021 2021-08-23T08:49:53.187Z
Violent Unraveling: Suicidal Majoritarianism 2021-07-29T09:29:05.182Z
We should probably buy ADA? 2021-05-24T23:58:05.395Z
Deepmind has made a general inductor ("Making sense of sensory input") 2021-02-02T02:54:26.404Z
In software engineering, what are the upper limits of Language-Based Security? 2020-12-27T05:50:46.772Z
The Fermi Paradox has not been dissolved - James Fodor 2020-12-12T23:18:32.081Z
Propinquity Cities So Far 2020-11-16T23:12:52.065Z
Shouldn't there be a Chinese translation of Human Compatible? 2020-10-09T08:47:55.760Z
Should some variant of longtermism identify as a religion? 2020-09-11T05:02:43.740Z
Design thoughts for building a better kind of social space with many webs of trust 2020-09-06T02:08:54.766Z
Investment is a useful societal mechanism for getting new things made. Stock trading shares some functionality with investment, but seems very very inefficient, at that? 2020-08-24T01:18:19.808Z
misc raw responses to a tract of Critical Rationalism 2020-08-14T11:53:10.634Z
A speculative incentive design: self-determined price commitments as a way of averting monopoly 2020-04-28T07:44:52.440Z
MakoYass's Shortform 2020-04-19T00:12:46.448Z
Being right isn't enough. Confidence is very important. 2020-04-07T01:10:52.517Z
Thoughts about Dr Stone and Mythology 2020-02-25T01:51:29.519Z
When would an agent do something different as a result of believing the many worlds theory? 2019-12-15T01:02:40.952Z
What do the Charter Cities Institute likely mean when they refer to long term problems with the use of eminent domain? 2019-12-08T00:53:44.933Z
Mako's Notes from Skeptoid's 13 Hour 13th Birthday Stream 2019-10-06T09:43:32.464Z
The Transparent Society: A radical transformation that we should probably undergo 2019-09-03T02:27:21.498Z
Lana Wachowski is doing a new Matrix movie 2019-08-21T00:47:40.521Z
Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours 2019-08-18T04:22:53.879Z
Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? 2019-08-05T00:12:14.630Z
Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? 2019-07-31T00:16:59.415Z
If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge? 2019-07-12T01:40:48.999Z
In physical eschatology, is Aestivation a sound strategy? 2019-06-17T07:27:31.527Z
Scrying for outcomes where the problem of deepfakes has been solved 2019-04-15T04:45:18.558Z
I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it 2019-04-01T03:19:44.080Z
Is there a.. more exact.. way of scoring a predictor's calibration? 2019-01-16T08:19:15.744Z
The end of public transportation. The future of public transportation. 2018-02-09T21:51:16.080Z
A Strange Argument about the potential Importance of Multiverse Theory 2016-02-02T04:22:06.876Z


Comment by mako yass (MakoYass) on Fund Transit With Development · 2023-09-23T02:16:17.112Z · LW · GW

Yeah, it would be cool if residents of an area could simply come together and make their own decisions about what would raise the value of the entire region, without being punished by an increased LVT owed to a government that did not think to do these things. But I think that process of coming together to do things is what a local government is supposed to be.

I dunno. I'm starting to think that the very American assumption that public and private are supposed to be separate worlds that don't influence each other is actually disgustingly wrong.
All goods have both public and private components, so maybe the land taxes on any new installation should factor in both. When you benefit from the surroundings, you should pay; when you contribute, you should be paid back. Maybe LVT should be further generalized: the labors that we know you are quietly benefiting from, you must pay towards them.

(Imagine a ministry of aesthetics that pays a basic sum to beautiful buildings, and that doesn't strictly block the creation of ugly buildings but does impose an ongoing cost on them. Imagine those incentives applied consistently over an entire city for decades.)

Comment by mako yass (MakoYass) on Fund Transit With Development · 2023-09-22T19:55:33.507Z · LW · GW

A land value tax would solve this ;-;

Yet another reminder that there's no justice in a system where land value is captured by the owner, because the value of the land is effectively never created by the owner, but instead by adjacent projects and public infrastructure.

Comment by mako yass (MakoYass) on Show LW: Get a phone call if prediction markets predict nuclear war · 2023-09-18T05:17:41.429Z · LW · GW

Do you think prediction markets wouldn't respond appropriately to those things?

Comment by mako yass (MakoYass) on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2023-09-17T03:23:46.698Z · LW · GW

There's also a "rationalists and humanists" association in New Zealand. They have a nice old building next to Auckland University with "Rationalist House" written on the front. Aging membership, though. They want younger members, but I guess young people aren't looking for community in that way. I'd guess that's a result of higher rates of atheism among the young, plus an entirely secular media culture, leading to a lack of recognition of any need for a secular humanist community. And the need for broad moral communities doesn't become obvious to people until they're older.
They've just started to notice the LW cluster. They seem to be on board, perhaps due to the shine of these bright young "effective altruists" who have been visiting them. It's a lot for them.

Sadly, they've decided to sell the building. It will happen at some point. They can't afford to do the maintenance.

Comment by mako yass (MakoYass) on deluks917's Shortform · 2023-09-15T22:45:59.672Z · LW · GW

Isn't weight gained/maintained from eating sugary things all downside in terms of health?

I currently don't have really any impulse to eat stuff like that. I get my fats from grass-fed beef, mackerel, or olive oil. I have no noticeable body fat, but there are some mild dysfunctions indicating that I could have less than a body is supposed to have (reduced energy level when hungry, low gains, an annoying eustachian tube thing).

Comment by mako yass (MakoYass) on Daniel Kokotajlo's Shortform · 2023-09-15T22:31:40.017Z · LW · GW

There's no link preview for manifold links, so we should mention that the market is "GPT4 or better model available for download by EOY 2024?" (the model is allowed to be illegal)

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-09-15T22:28:58.421Z · LW · GW

Observation from playing Network Wars: The concept of good or bad luck is actually crucial for assessing one's own performance in games with output randomness (most games irl). You literally can't tell what you're doing well in any individual match without it; it's a sensitivity that lets you see through the noise and learn more informative lessons from each experience.
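
A toy illustration of the point (a hypothetical sketch, not a model of Network Wars: just a generic game whose match score mixes decision quality with a large, observable random term):

```python
import random
import statistics

def simulate_matches(n_matches=1000, skill=0.6, seed=0):
    """Simulate matches whose score mixes decision quality with dice rolls.

    Returns (raw_scores, luck_adjusted_scores). The 'luck' term is
    observable (the game shows you its rolls), so it can be subtracted.
    """
    rng = random.Random(seed)
    raw, adjusted = [], []
    for _ in range(n_matches):
        decision_quality = skill + rng.gauss(0, 0.05)  # small per-match variation
        luck = rng.gauss(0, 0.5)                       # large output randomness
        score = decision_quality + luck
        raw.append(score)
        # Accounting for known luck recovers a low-noise signal of play quality.
        adjusted.append(score - luck)
    return raw, adjusted

raw, adjusted = simulate_matches()
print("raw score spread:", statistics.stdev(raw))
print("luck-adjusted spread:", statistics.stdev(adjusted))
```

The luck-adjusted spread comes out roughly an order of magnitude smaller than the raw one under these assumed noise scales, which is the sense in which tracking luck lets a single match carry a usable lesson.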

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-26T19:25:32.173Z · LW · GW

Yeah, I guess the "just" was in jest; we all know how complicated this gets when you're serious about it.

I considered adding a paragraph about how and why people fail to do this, how this definition characterizes ingroup and outgroup, and could probably write an entire post about it.

Comment by mako yass (MakoYass) on DanielFilan's Shortform Feed · 2023-08-26T06:27:00.436Z · LW · GW

This will always multiply error, every time, until you have a society, at which point the agents aren't really doing naked RL any more because they need to be resilient enough to not get parasitized/dutchbooked.

Comment by mako yass (MakoYass) on Shortform · 2023-08-26T06:22:52.099Z · LW · GW

Where you live.

Comment by mako yass (MakoYass) on laserfiche's Shortform · 2023-08-26T06:18:21.031Z · LW · GW

Possibly?

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-26T06:15:43.454Z · LW · GW

Rationality is basically just comparing alternatives and picking the best one, right?

Comment by mako yass (MakoYass) on China's position on autonomous weapons · 2023-08-25T03:08:16.987Z · LW · GW

Third is impossibility of termination, meaning that once activated there is no way of terminating the device.

So the law won't apply to anything that anybody would've actually tried to make, short of prepotent superintelligence. :/

Comment by mako yass (MakoYass) on Do agents with (mutually known) identical utility functions but irreconcilable knowledge sometimes fight? · 2023-08-23T20:59:03.850Z · LW · GW

You seem to be looking away from the aspect of the question where usefully specialized agencies cannot synchronize domain knowledge (a condition that reasserts itself, since the value of specialization is an incentive to deepen knowledge differences over time and to bring differently specialized agents closer together; though of course they need to be mutually legible in some ways to benefit from it). This is the most interesting and challenging part of the question, so that was kind of galling.

But the Aaronson paper is interesting. It's possible it addresses it. Thanks for that.

Comment by mako yass (MakoYass) on Do agents with (mutually known) identical utility functions but irreconcilable knowledge sometimes fight? · 2023-08-23T09:50:07.220Z · LW · GW

It's not necessarily going to be seen as a problem; it would be seen as an unavoidable inefficiency.

Note, I don't expect the fight to play out. It's a question about what sorts of tensions the conflict resolution processes reflect. This is explained in the question body.

Comment by mako yass (MakoYass) on Do agents with (mutually known) identical utility functions but irreconcilable knowledge sometimes fight? · 2023-08-23T08:20:32.047Z · LW · GW

Conjecture: There is no way to simplify the analysis of the situation, or the negotiation process, by paraphrasing an irreconcilable epistemic conflict as a values conflict (there is no useful equivalence between an error theory and a conflict theory). I expect this to be so because the conflict is a result of irreducible complexity in the knowledge sets (and the parties' inability to hold the same knowledge). So applying another transform to the difference between the knowledge sets won't give you a clearer image of the disputed borders; you just won't be able to apply the transform.

(Note, if true, this would be a useful thing to say to many conflict theorists: by exaggerating differences in material interests, you make your proposals less informed and so less legitimate.)

Comment by mako yass (MakoYass) on Steven Wolfram on AI Alignment · 2023-08-21T21:18:59.136Z · LW · GW

Yeah. I don't think this actually makes ethics harder to study, but I wonder if he's getting at...

Unlike in experimental or applied science, in ethics you can't ever build a simple ethical scenario, because you can't isolate any part of the world from the judgement or interventionist drives of every single person's value systems. Values, inherently, project themselves out onto the world, nothing really keeps them localized in their concerns.
If someone runs a brutal and unnecessary medical experiment on prisoners in an underground lab, it doesn't matter how many layers of concrete or faraday shielding separate me from it, I still care about that, a bunch of other people care in different ways. You can't isolate anything. The EV considers everything.

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-18T04:18:00.438Z · LW · GW

You don't have to be able to simulate something to trust it for this or that. E.g., the specification of AlphaZero is much simpler than the final weights, and knowing its training process, without knowing its weights, you can still trust that it will never, say, take a bribe to throw a match. Even if it comprehended bribery, we know from its specification that it's solely interested in winning whatever match it's currently playing, and no sum would be enough.

To generalize, if we know something's utility function, and if we know it had a robust design, even if we know nothing else about its history, we know what it'll do.

A promise-keeping capacity is a property utility functions can have.

Comment by mako yass (MakoYass) on Shortform · 2023-08-17T19:34:26.612Z · LW · GW

I was talking about actual status contests, economic or mating competition. It's possible to feel acceptance in loss even in the world we have today.

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-17T19:24:00.864Z · LW · GW

So what?

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-17T18:51:31.483Z · LW · GW

Can you expand on what you mean by "demonic"?

Since acausal trade issues are basically spiritual, when the trade is bad I seek a word that means "spiritually bad." You can read it as just "bad" if you want.

So, actual torture is the strongest signal of willingness and ability to torture.  Building a torturizer shows capability, but only hints at willingness.  Having materials that could build a torturizer or an orgasmatron is pretty weak, but not zero

Probable crux: Cognitive transparency is actually easy for advanced agencies. It's difficult for a human to prove to a distant human that they have the means to build and deploy a torturizer without actually doing it. It wouldn't be difficult for brains that were designed to be capable of proving the state of their beliefs, and AGI participating in a community with other AGI would want to be capable of that. (The contemporary analog is trusted computing. The number of coordination problems it could solve for us, today, if it were fully applied, is actually depressing.)

There would still be uncertainties as a result of mutual comprehensibility issues, but they could turn out to be of negligible importance, especially once nobody's lying any more.

Comment by mako yass (MakoYass) on Shortform · 2023-08-17T08:49:25.895Z · LW · GW

Have you never lost a conflict and felt that it was fair and just and that you were honored to have gotten to duel with such a majestic being?

And then learned from it, and become the winner in the next match?

(note, am utopian fiction author)

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-17T08:14:56.535Z · LW · GW

Decision theoretic things that I'm not sure whether are demonic, or just real and inescapable and legitimate, and I genuinely don't fucking know which, yet:

  • extortion/threats/building torturizers to gain bargaining power
    • (or complying with extortionary threats)
  • assigning bargaining power in proportion to an agent's strength or wealth, as opposed to in proportion to its phenomenological measure.
    • (arguably wrong if you extend a rawlsian veil back beyond even your awareness of which observer you are or what your utility function is, which seems mathematically elegant and morally neat.)

I'm pretty sure extortion is fine because under an advanced cooperative bargaining process the threats are never actually carried out (not even to the point of needing to build the torturizer and leave it in a warehouse), and it seems to be neater as a result of not requiring us to make assumptions about what constitutes action or inaction.

Subjectivity-weighting seems much more thorny.

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-08-12T22:22:42.589Z · LW · GW

Things that healthy people don't have innate dispositions towards: Optimism, Pessimism, Agreeability, Disagreeability, Patience, Impatience.

Whether you are those things should completely depend on the situation you're in. If it doesn't, you may be engaging in magical thinking about how the world works. Things are not guaranteed to go well, nor poorly. People are not fully trustworthy, nor are they consistently malignant. Some things are worth nurturing, others aren't. It's all situational.

Comment by mako yass (MakoYass) on marine cloud brightening · 2023-08-10T03:36:55.122Z · LW · GW

A previous discussion of marine cloud brightening.

Comment by mako yass (MakoYass) on Recommending Understand, a Game about Discerning the Rules · 2023-08-07T07:26:07.893Z · LW · GW

Is there a reason you recommend this over The Witness? I wouldn't recommend this over The Witness. It's far too dry. I often complete puzzles without trying to understand why they worked, then just move on, because the game doesn't convey a sense that the insight will be used anywhere else, nor that it's interesting in itself.

Comment by mako yass (MakoYass) on Sinclair Chen's Shortform · 2023-08-06T01:15:34.232Z · LW · GW

Update: I'm not excited because deploying this thing in most applications seems difficult:

  • it's a ceramic, not malleable, so it's not going in powerlines. Apparently this was surmounted before, though those cables ended up being difficult and more expensive than the fucken titanium-niobium alternatives.
  • because printing it onto a die sounds implausible, and because it wouldn't improve heat efficiency as much as people would expect (because most heat comes from state changes afaik? Regardless, this).
  • because 1D
  • because I'm not even sure about energy storage applications: how much energy can you contain before the magnetic field is strong enough to break the coil, or just become very inconvenient to shield from neighboring cars? Is it really a lot?

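For what it's worth, the coil-breaking worry can be bounded with a rough back-of-envelope: the magnetic pressure on the windings equals the field's energy density, B²/2μ₀, so the coil's structural strength caps the storable energy density. Here's a sketch assuming a ~1 GPa tensile limit (a made-up, optimistic figure for a strong engineering material; brittle ceramics would presumably be worse):

```python
import math

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m

# Magnetic pressure (and field energy density) is u = B^2 / (2 * mu_0).
# A coil fails roughly when this pressure reaches its tensile strength.
tensile_limit_pa = 1e9           # assumed ~1 GPa structural limit

b_max_tesla = math.sqrt(2 * MU_0 * tensile_limit_pa)
energy_density_j_per_m3 = b_max_tesla**2 / (2 * MU_0)
energy_density_kwh_per_m3 = energy_density_j_per_m3 / 3.6e6

print(f"max field ~ {b_max_tesla:.0f} T")
print(f"energy density ~ {energy_density_kwh_per_m3:.0f} kWh/m3")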
Comment by mako yass (MakoYass) on Private notes on LW? · 2023-08-04T18:00:52.920Z · LW · GW

Include a "specific people (specify)" read permission too. Then it can be used to write private replies to comments.

Often, private replies are the entire solution to discussions becoming personal. It's a lot easier to take a hit to the ego, or acknowledge personal failings, when there isn't an audience. These side conversations are a major part of the solution to performative narcissism and atomization, even though, of course, you don't see them taking place in public.

Comment by mako yass (MakoYass) on Recommending Understand, a Game about Discerning the Rules · 2023-08-03T20:05:38.597Z · LW · GW

Yeah, almost everything on Steam works on Linux, even games like Titanfall 2. The main reason is that Valve made a console called the Steam Deck, which runs Linux, so they have a real interest in making Windows compatibility layers work well. The other reason is computational universalizability.

Comment by mako yass (MakoYass) on Recommending Understand, a Game about Discerning the Rules · 2023-08-03T20:01:46.295Z · LW · GW

I'm interested in the general practice of integrating rationality training into puzzle games. 

I should mention I'm doing one of these types of games btw. I've also got a concept brewing where a player proposes a configuration of some components (which is scored by "the aggregate" (rules of the game)), then everyone places bets on whether it can be improved. I'm fairly sure this will beat both cynicism and optimism biases out of people and turn them into very clever pragmatists.

If you'd be interested in collaborating on that I'd be up for it.

Comment by mako yass (MakoYass) on Recommending Understand, a Game about Discerning the Rules · 2023-08-03T19:50:30.758Z · LW · GW

Oh, y'all play Zendo with custom rules? In Auckland EA we just played it with random rules generated by the game, and that was interesting enough. It's humbling to be holding one of the simplest possible rules and watch some of the brightest people you know generating epicycles and missing it for 20 minutes.

Comment by mako yass (MakoYass) on Recommending Understand, a Game about Discerning the Rules · 2023-08-03T19:48:02.955Z · LW · GW

I am also making a game in this genre, btw. It's about drawing things on a small square grid that meet the rules of a tree of symbols, whose behaviors you have to figure out by experimenting.

It's a phone game. It's rare that we find a case where touch controls are actually better than keyboard and mouse, but this is one.

It's kind of a side thing; it's been gathering dust for a few years, and may again, given that the other thing I'm supposed to be working on is fixing social network software. But I find that these days, puzzle games like these are some of the only games I enjoy, so I can't forget about my own.
If anyone would be interested in collaborating on something like this, feel free to DM and we can talk about it, it might help get it over the line.

Comment by mako yass (MakoYass) on My current LK99 questions · 2023-08-02T20:37:25.903Z · LW · GW

Just have a "ding" sound when someone's mind is going in the right direction, and a "dong" sound for the wrong directions

I'd definitely like to try that. The right UX would be a number that goes up as you get closer to the target headspace, with milestone numbers along the way, which each give you a reward. It should possibly be coupled with a puzzle game or a set of creative exercises or something. (Games are good because they can provide reward. If a person isn't already productive, it may be because they didn't find practicing engineering deeply rewarding, so this part of it might be important.)

Comment by mako yass (MakoYass) on My current LK99 questions · 2023-08-02T20:26:28.613Z · LW · GW

I see none. Wait, you mean this one?

Comment by mako yass (MakoYass) on My current LK99 questions · 2023-08-02T05:30:20.727Z · LW · GW

I don't believe there is that much you can do with MRI data to develop treatments on relevant timescales? Like, we'll probably have the compute advancement long before we have the cognitive enhancement?

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-07-29T23:32:23.708Z · LW · GW

An argument that the reason most "sasquatch" samples turn out to have human DNA is that the sasquatch/wildman phenotype (real) is actually not very many mutations away from sapiens, because it's mostly just a result of re-enabling a bunch of traits that were disabled under sapiens self-domestication/neotenization.

I'm wondering if the "Zana just had African DNA" finding might have been a result of measurement or interpretation error: we don't know what the sasquatch markers are, so even if they diverged from sapiens like 100,000 years ago (long enough for actual evolution to occur), as long as there hadn't been interbreeding with any other ethnicities, aren't there tests that would totally have just said "sapiens with no Asian or Caucasian background, therefore African"?

If so, that could be investigated.

Comment by mako yass (MakoYass) on Sinclair Chen's Shortform · 2023-07-28T22:41:03.041Z · LW · GW

No because prediction markets are low

Comment by mako yass (MakoYass) on Josh Jacobson's Shortform · 2023-07-28T22:40:08.105Z · LW · GW

So, when driving an SUV for safety reasons, factor in the cost of the other members of your decision-theoretic agent-class also driving SUVs.

Comment by mako yass (MakoYass) on MakoYass's Shortform · 2023-07-28T22:38:24.403Z · LW · GW

Until you learn FDT, you cannot see the difference between faith and idealism, nor the difference between pragmatism and cynicism. The tension between idealism and pragmatism genuinely cannot be managed gracefully without FDT; it defines their narrow synthesis.

More should be written about this, because cynicism and idealism afflict many.

Comment by mako yass (MakoYass) on Neuronpedia - AI Safety Game · 2023-07-27T06:13:38.886Z · LW · GW

Unstructured feedback:

I thought of leaving when I was given my first challenge and thought, "what if these words actually don't have much in common, what if the neurons all just encode completely arbitrary categories due to this being such a low-strength model." Eventually I decided, "No, I know what this is. This is A Thing. (Words that suggest the approach towards some sort of interpersonal resolution.)" Maybe that happens to everyone. I dunno.

It's kind of infuriating that you ask us to do a question, then don't accept the answer until we log in, then just waste our answer by sending us to another question after we've logged in. I guess you plan to solve the last part, in which case it's fine, but wow, like, you're going to smack every single one of your testers in the face with this?

My experience with the second question is that sending in a response does not work; I get an alert about a JSON parse error. Firefox.

Tell me when that's fixed, I guess.

Comment by mako yass (MakoYass) on Slowing down AI progress is an underexplored alignment strategy · 2023-07-26T05:47:24.789Z · LW · GW

It's also totally ordinary for talented researchers to move to whatever country supports their research. If you ban AI for stupid reasons, it's especially unlikely that every other country will follow suit.

Comment by mako yass (MakoYass) on Cryonics and Regret · 2023-07-24T20:04:30.360Z · LW · GW

I think most of us will feel this way about most people, after we've grown up.

It's odd that I cannot bring myself to feel this way about my own parents now.

Sometimes I try to. But I know that their present lack of knowledge or interest in cryonics is a result of their own choices, and their own vices, vices that are going to kill not just them but many other people who trusted them. I know that if I bring it up they will not be grateful. They will say something pigheaded and fearful, "do not give me the hope". They seem not to want it.
I believe, abstractly, that there must be more to them, an inner world, unique, but I cannot see it, it never comes to the fore.

There are a lot of people like this.

How can I want to save these people?

Comment by mako yass (MakoYass) on The UAP Disclosure Act of 2023 and its implications · 2023-07-23T20:57:39.862Z · LW · GW

I also think it is unfair to say that nothing in my post is worth updating over

Yes, I said "for me".

Comment by mako yass (MakoYass) on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T20:33:26.324Z · LW · GW

I think you're misrepresenting the extent to which this has been discussed on LessWrong already. I participated in most of that discussion, and nothing in your post is news at all (that overstates it a little), surprising, or a strategic update, for me. Use the search function.

Comment by mako yass (MakoYass) on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T20:30:35.759Z · LW · GW

That includes Grusch (who's been photographed hanging out with Lue Elizondo et al)

What's the chronology of this? If he hung out with them after seeing his own evidence and assembling his case, that's meaningless and innocuous; why wouldn't he meet with them. If he hung out with them before, okay, but like, so what? What can we conclude from that?

Comment by mako yass (MakoYass) on BCIs and the ecosystem of modular minds · 2023-07-22T10:22:50.091Z · LW · GW

Darkly funny that these augments via BCI might give us globe-sized expanded senses, phantasia strong enough to run climate models, and an overflowing fountain of new ideation processes, without necessarily curing the executive dysfunction that prevents many of us from actioning the suggestions of such things.

Comment by mako yass (MakoYass) on How necessary is intuition, for advanced math? · 2023-07-20T04:20:09.189Z · LW · GW

I think any high level thought or movement is intuitive and approximate and not completely trustworthy, including high level thoughts about mathematics.

You find things by looking across long distances, but constructive proof steps only cross short distances. Nothing new is actually found by applying simple rules. Mathematical proofs don't represent a way of thinking, they're artifacts produced after the thought has been done and the realization has been had, they only exist to validate and to discipline (train) the higher-level heuristics you really use when you're navigating the overarching space of mathematics.

I'm not a mathematician, but if someone had told me this when I started undergrad, it's much more likely I would've been better at it, and I might've ended up being a mathematician in that timeline.

Comment by mako yass (MakoYass) on «Boundaries», Part 1: a key missing concept from utility theory · 2023-07-18T20:20:31.952Z · LW · GW

I'm setting a boundary here *claims the entire territory of Israel*

Comment by mako yass (MakoYass) on Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice? · 2023-07-15T00:10:57.513Z · LW · GW

Just heard about this drug knowledge-synthesis AI company called "Causaly", claiming it "Captures causality as opposed to co-occurence, with 8 different relationship types." Anything interesting going on here?

Comment by mako yass (MakoYass) on Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice? · 2023-07-15T00:09:46.595Z · LW · GW

Just out of curiosity, is there a problem where... causality is genuinely hard to assess without experimentation, so there are always going to be multiple credible hypotheses unless you wire it out to a lab and let it try stuff and gather focused evidence for distinguishing them?