Posts

Non-loss of control AGI-related catastrophes are out of control too 2023-06-12T12:01:26.682Z
How should we think about the decision relevance of models estimating p(doom)? 2023-05-11T04:16:56.211Z

Comments

Comment by Mo Putera (Mo Nastri) on Losing Faith In Contrarianism · 2024-04-26T04:31:07.527Z · LW · GW

You might also be interested in Scott's 2010 post warning of the 'next-level trap' so to speak: Intellectual Hipsters and Meta-Contrarianism 

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

... 

Without meaning to imply anything about whether or not any of these positions are correct or not, the following triads come to mind as connected to an uneducated/contrarian/meta-contrarian divide:

- KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman

What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy - and that people identify with these positions to the point where arguments about them can become personal.

If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.

Comment by Mo Putera (Mo Nastri) on Nick Bostrom’s new book, “Deep Utopia”, is out today · 2024-04-01T05:11:49.421Z · LW · GW

In Bostrom's recent interview with Liv Boeree, he said (I'm paraphrasing; you're probably better off listening to what he actually said)

  • p(doom)-related
    • it's actually gone up for him, not down (contra your guess, unless I misinterpreted you), at least when broadening the scope beyond AI (cf. vulnerable world hypothesis, 34:50 in video)
    • re: AI, his prob. dist. has 'narrowed towards the shorter end of the timeline - not a huge surprise, but a bit faster I think' (30:24 in video)
    • also re: AI, 'slow and medium-speed takeoffs have gained credibility compared to fast takeoffs'
    • he wouldn't overstate any of this
  • contrary to people's impression of him, he's always been writing about 'both sides' (doom and utopia) 
  • in the past it just seemed more pressing to him to call attention to 'various things that could go wrong so we could avoid these pitfalls and then we'd have plenty of time to think about what to do with this big future'
    • this reminded me of this illustration from his old paper introducing the idea of x-risk prevention as global priority: 
Comment by Mo Putera (Mo Nastri) on [Linkpost] Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate · 2024-03-30T18:49:54.776Z · LW · GW

What's your take on Scott's post?

Comment by Mo Putera (Mo Nastri) on "How could I have thought that faster?" · 2024-03-12T04:52:01.298Z · LW · GW

If I take this claimed strategy as a hypothesis (that radical introspective speedup is possible and trainable), how might I falsify it? I ask because I can already feel myself wanting to believe it's true and personally useful, which is an epistemic red flag. Bonus points if the falsification test isn't high cost (e.g. I don't have to try it for years).

Comment by Mo Putera (Mo Nastri) on Using axis lines for good or evil · 2024-03-07T11:48:12.186Z · LW · GW

I was wondering about this too. I thought of Eugene Wei writing about Edward Tufte's classic book The Visual Display of Quantitative Information, which he considers "[one of] the most important books I've read". He illustrates with an example, just like dynomight did above, starting with this chart auto-created in Excel: 

[chart-1.png]

He then systematically applies Tufte's principles to eventually end up with this:

[chart-4.png]

Wei adds further commentary:

No issues for color blind users, but we're stretching the limits of line styles past where I'm comfortable. To me, it's somewhat easier with the colored lines above to trace different countries across time versus each other, though this monochrome version isn't terrible. Still, this chart reminds me, in many ways, of the monochromatic look of my old Amazon Analytics Package, though it is missing data labels (wouldn't fit here) and has horizontal gridlines (mine never did).

We're running into some of these tradeoffs because of the sheer number of data series in play. Eight is not just enough, it is probably too many. Past some number of data series, it's often easier and cleaner to display these as a series of small multiples. It all depends on the goal and what you're trying to communicate.

At some point, no set of principles is one size fits all, and as the communicator you have to make some subjective judgments. For example, at Amazon, I knew that Joy wanted to see the data values marked on the graph, whenever they could be displayed. She was that detail-oriented. Once I included data values, gridlines were repetitive, and y-axis labels could be reduced in number as well.

Tufte advocates reducing non-data-ink, within reason, and gridlines are often just that. In some cases, if data values aren't possible to fit onto a line graph, I sometimes include gridlines to allow for easy calculation of the relative ratio of one value to another (simply count gridlines between the values), but that's an edge case.

For sharp changes, like an anomalous reversal in the slope of a line graph, I often inserted a note directly on the graph, to anticipate and head off any viewer questions. For example, in the graph above, if fewer data series were included, but Greece remained, one might wish to explain the decline in health expenditures starting in 2008 by adding a note in the plot area near that data point, noting the beginning of the Greek financial crisis (I don't know if that's the actual cause, but whatever the reason or theory, I'd place it there).

If we had company targets for a specific metric, I'd note those on the chart(s) in question as a labeled asymptote. You can never remind people of goals often enough.

And I thought, okay, sounds persuasive and all, but also this feels like Wei/Tufte is pushing their personal aesthetic on me, and I can't really tell the difference (or whether it matters).
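
(For anyone who wants to play with the "reduce non-data-ink" idea concretely, here's a minimal matplotlib sketch. To be clear, it's my own toy, not Wei's actual steps, and the numbers are made up: hide the box around the plot, drop gridlines, and let data labels do the work of the y-axis.)

```python
import matplotlib.pyplot as plt

years = [2006, 2007, 2008, 2009, 2010]
greece = [9.0, 9.3, 9.9, 9.6, 9.2]  # hypothetical values, purely for illustration

fig, ax = plt.subplots()
ax.plot(years, greece, color="black", marker="o")

ax.spines["top"].set_visible(False)   # remove the box around the plot area
ax.spines["right"].set_visible(False)
ax.grid(False)                        # gridlines are often pure non-data-ink
for x, y in zip(years, greece):
    # data labels can stand in for gridlines and dense y-axis ticks
    ax.annotate(f"{y:.1f}", (x, y), textcoords="offset points",
                xytext=(0, 6), ha="center")
ax.set_ylabel("Health expenditure (% of GDP)")
plt.show()
```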

Comment by Mo Putera (Mo Nastri) on If you weren't such an idiot... · 2024-03-05T10:12:22.294Z · LW · GW

I'm curious about you not doing these, since I'd unquestioningly accepted them, and would love for you to elaborate:

- save lots of money in a retirement account and buy index funds
- shower daily
- use shampoo
- wear shoes
- walk

Regarding 'diet stuff', I mostly agree and like how Jay Daigle put it:

I’ve decided lately that people regularly get confused, on a number of subjects, by the difference between science and engineering. ... Tl;dr: Science is sensitive and finds facts; engineering is robust and gives praxes. Many problems happen when we confuse science for engineering and completely modify our praxis based on the result of a couple of studies in an unsettled area. ...

This means two things. First is that we need to understand things much better for engineering than for science. In science it’s fine to say “The true effect is between +3 and -7 with 95% probability”. If that’s what we know, then that’s what we know. And an experiment that shrinks the bell curve by half a unit is useful. For engineering, we generally need to have a much better idea of what the true effect is. (Imagine trying to build a device based on the information that acceleration due to gravity is probably between 9 and 13 m/s^2).

Second is that science in general cares about much smaller effects than engineering does. It was a very long time before engineering needed relativistic corrections due to gravity, say. A fact can be true but not (yet) useful or relevant, and then it’s in the domain of science but not engineering. 

Why does this matter?

The distinction is, I think fairly clear when we talk about physics. ... But people get much more confused when we move over to, say, psychology, or sociology, or nutrition. Researchers are doing a lot of science on these subjects, and doing good work. So there’s a ton of papers out there saying that eggs are good, or eggs are bad, or eggs are good for you but only until next Monday or whatever.

And people have, often, one of two reactions to this situation. The first is to read one study and say “See, here’s the scientific study. It says eggs are bad for you. Why are you still eating eggs? Are you denying the science?” And the second reaction is to say that obviously the scientists can’t agree, and so we don’t know anything and maybe the whole scientific approach is flawed.

But the real situation is that we’re struggling to develop a science of nutrition. And that shit is hard. We’ve worked hard, and we know some things. But we don’t really have enough information to do engineering, to say “Okay, to optimize cardiovascular health you need to cut your simple carbs by 7%, eat an extra 10g of monounsaturated fats every day, and eat 200g of protein every Wednesday” or whatever. We just don’t know enough.

And this is where folk traditions come in. Folk traditions are attempts to answer questions that we need decent answers to, that have been developed over time, and that are presumably non-horrible because they haven’t failed obviously and spectacularly yet. A person who ate “Like my grandma” is probably on average at least as healthy as a person who tried to follow every trendy bit of scientistic nutrition advice from the past thirty years.

Comment by Mo Putera (Mo Nastri) on Increasing IQ is trivial · 2024-03-03T07:05:20.200Z · LW · GW

Nitpick that doesn't bear upon the main thrust of the article: 

2021: Here’s a random weightlifter I found coming in at over 400kg, I don’t have his DEXA but let’s say somewhere between 300 and 350kgs of muscle.

More plausibly Josh Silvas weighs 220-ish kg, not 400 kg, and there's no way he has anywhere near 300+ kg of muscle. To contextualize, the heaviest WSM winners ever weighed around 200-210 kg (Hafthor, Brian); Brian in particular had a lean body mass of 156 kg back when he weighed 200 kg peaking for competition ('peaking' implies unsustainability), which is the highest DEXA figure I've ever found in years of following strength-related statistics. 

Comment by Mo Putera (Mo Nastri) on How I build and run behavioral interviews · 2024-02-28T18:44:23.451Z · LW · GW

The two paired procedures with the highest mean validity for predicting job performance are general mental ability (GMA) plus an integrity test, and GMA + a structured interview (Schmidt et al 2016 meta-analysis of "100 years of research in personnel selection", reviewing 31 procedures, via 80,000 Hours – check out Table 2 on page 71). GMA alone beats all other single procedures; integrity tests not only beat all other non-GMA procedures but also correlate nearly zero with GMA, hence the combination's efficacy. 

A bit more on integrity tests, if you (like me) weren't clear on them:

These tests are used in business and industry to hire employees with reduced probability of counterproductive work behaviors on the job, such as fighting, drinking or taking drugs, stealing from the employer, equipment sabotage, or excessive absenteeism. Integrity tests do predict these behaviors, but surprisingly they also predict overall job performance (Ones, Viswesvaran, & Schmidt, 1993).

Behavioral interviews – which Schmidt et al call situational judgment tests – are either middle of the rankings (for knowledge-based tests) or near the bottom (for behavioral tendencies). Given this, I'd be curious what value Ben gets out of investing nontrivial effort into running them, cf. Luke's comment.

Comment by Mo Putera (Mo Nastri) on The Pareto Best and the Curse of Doom · 2024-02-24T05:12:03.346Z · LW · GW

I think curse of dimensionality is apt, since the prerequisite reading directly references it:

One problem with this whole GEM-vs-Pareto concept: if chasing a Pareto frontier makes it easier to circumvent GEM and gain a big windfall, then why doesn’t everyone chase a Pareto frontier? Apply GEM to the entire system: why haven’t people already picked up the opportunities lying on all these Pareto frontiers?

Answer: dimensionality. If there’s 100 different specialties, then there’s only 100 people who are the best within their specialty. But there’s 10k pairs of specialties (e.g. statistics/gerontology), 1M triples (e.g. statistics/gerontology/macroeconomics), and something like 10^30 combinations of specialties. And each of those pareto frontiers has room for more than one person, even allowing for elbow room. Even if only a small fraction of those combinations are useful, there’s still a lot of space to stake out a territory.

That said, the way John talks about it there, I think 'boon of dimensionality' might be more apt still; in Screwtape's context, though, 'curse' is right.
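
The arithmetic in that quote is easy to sanity-check; here's a tiny Python snippet reproducing John's loose counting (ordered tuples rather than strict n-choose-k), plus the total subset count behind the ~10^30 figure:

```python
# Reproducing the rough counts from the quoted passage.
n = 100                 # number of specialties, per the quote

pairs = n ** 2          # 10,000 ("10k pairs", loose counting)
triples = n ** 3        # 1,000,000 ("1M triples")
all_subsets = 2 ** n    # ~1.27e30 ("something like 10^30 combinations")

print(f"{pairs:,} pairs, {triples:,} triples, ~{all_subsets:.2e} subsets")
```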

Comment by Mo Putera (Mo Nastri) on Noticing Panic · 2024-02-06T17:47:14.106Z · LW · GW

Great comment. I also like Nate Soares' Dive in:

In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.

It's important to possess a minimal level of ability to update in the face of evidence, and to actually change your mind. But by far the most important thing is to just dive in.

Comment by Mo Putera (Mo Nastri) on POC || GTFO culture as partial antidote to alignment wordcelism · 2024-01-31T17:46:34.075Z · LW · GW

Would the recent Anthropic sleeper agents paper count as an example of bullet #2 or #3? 

Comment by Mo Putera (Mo Nastri) on How do you feel about LessWrong these days? [Open feedback thread] · 2024-01-31T17:35:30.654Z · LW · GW

I've been considering writing a post about this but I think my writing style tends to be a bit ... messy ... to get upvoted here.

Please do. I've been mulling over related half-digested thoughts -- replacing the symbol / brand with the substance, etc.

Comment by Mo Putera (Mo Nastri) on Searching for outliers · 2024-01-30T11:35:44.554Z · LW · GW

Say more? (e.g. illustrative / motivating examples, related reading etc)

Comment by Mo Putera (Mo Nastri) on How to write better? · 2024-01-30T09:51:48.388Z · LW · GW

You might be interested in Scott Alexander's writing advice. In particular, ever since reading that comment a ~decade ago I find myself repeatedly doing what he said here:

The best way to improve the natural flow of ideas, and your writing in general, is to read really good writers so much that you unconsciously pick up their turns of phrase and don't even realize when you're using them. The best time to do that is when you're eight years old; the second best time is now.

Your role models here should be those vampires who hunt down the talented, suck out their souls, and absorb their powers. Which writers' souls you feast upon depends on your own natural style and your goals. I've gained most from reading Eliezer, Mencius Moldbug, Aleister Crowley, and G.K. Chesterton (links go to writing samples from each I consider particularly good); I'm currently making my way through Chesterton's collected works pretty much with the sole aim of imprinting his writing style into my brain.

Stepping from the sublime to the ridiculous, I took a lot from reading Dave Barry when I was a child. He has a very observational sense of humor, the sort where instead of going out looking for jokes, he just writes about a topic and it ends up funny. It's not hard to copy if you're familiar enough with it. And if you can be funny, people will read you whether you have any other redeeming qualities or not.

Comment by Mo Putera (Mo Nastri) on why I'm anti-YIMBY · 2024-01-29T07:33:24.061Z · LW · GW

Yeah, or adversarial collaboration-style. I'd be especially curious about (1) what would change your mind (same for the YIMBY proponent) (2) empirical data

Comment by Mo Putera (Mo Nastri) on AI #48: The Talk of Davos · 2024-01-26T07:39:57.704Z · LW · GW

I do not understand why very smart people are almost intelligence deniers.

This reminded me of Are smart people's personal experiences biased against general intelligence? It's collider bias: 

I think that people who are high in g will tend to see things in their everyday life that suggest to them that there is a tradeoff between being high g and having other valuable traits.

The post's illustrative example is Nassim "IQ is largely a pseudoscientific swindle" Taleb. 
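
The mechanism is easy to see in a toy simulation (mine, not the linked post's): draw g and some other valuable trait independently, condition on "making it into a high-g person's circle" by selecting on the sum of the two, and a negative correlation appears within the selected group even though there's none in the population.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
g = rng.normal(size=n)        # standardized general intelligence
other = rng.normal(size=n)    # some other valuable trait, independent of g

selected = (g + other) > 2.0  # crude stand-in for whom a high-g person encounters

print(f"correlation overall:  {np.corrcoef(g, other)[0, 1]:+.2f}")  # ~ +0.00
print(f"correlation selected: "
      f"{np.corrcoef(g[selected], other[selected])[0, 1]:+.2f}")    # clearly negative
```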

Comment by Mo Putera (Mo Nastri) on Humans aren't fleeb. · 2024-01-24T17:22:22.115Z · LW · GW

(Some of your subsections link to a Google document instead of the relevant section in the post you intended.)

Comment by Mo Putera (Mo Nastri) on 60+ Possible Futures · 2024-01-22T11:27:32.852Z · LW · GW

This is great, I've bookmarked it for future reference, thank you for doing the work of distilling all this.

I think Anders Sandberg's grand futures might fit in under your last subsection. Long quote incoming (apologies in advance, it's hard to summarize Sandberg):

Rob Wiblin: ... What are some futures that you think could plausibly happen that are amazing from various different points of view?

Anders Sandberg: One amazing future is humanity gets its act together. It solves existential risk, develops molecular nanotechnology and atomically precise manufacturing, masters biotechnology, and turns itself sustainable: turns half of the planet into a wilderness preserve that can evolve on its own, keeping to the other half where you have high material standards in a totally sustainable way that can keep on going essentially as long as the biosphere is going. And long before that, of course, people starting to take steps to maintain the biosphere by putting up a solar shield, et cetera. And others, of course, go off — first settling the solar system, then other solar systems, then other galaxies — building this super-civilisation in the nearby part of the universe that can keep together against the expansion of the universe, while others go off to really far corners so you can be totally safe that intelligence and consciousness remains somewhere, and they might even try different social experiments.

That’s one future. That one keeps on going essentially as long as the stars are burning. And at that point, they need to turn to actually taking matter and putting it into the dark black hole accretion disks and extracting the energy and keep on going essentially up until the point where you get proton decay — which might be curtains, but this is something north of 10^36 years. That’s a lot of future, most of it long after the stars had burned out. And most of the beings there are going to be utterly dissimilar to us.

But you could imagine another future: In the near future, we develop ways of doing brain emulation and we turn ourselves into a software species. Maybe not everybody; there are going to be stragglers who are going to maintain the biosphere on the Earth and going to be frowning at those crazies that in some sense committed suicide by becoming software. The software people are, of course, just going to be smiling at them, but thinking, “We’ve got the good deal. We got on this infinite space we can define endlessly.”

And quite soon they realise they need more compute, so they turn a few other planets of the solar system into computing centres. But much of a cultural development happens in the virtual space, and if that doesn’t need to expand too much, you might actually end up with a very small and portable humanity. I did a calculation some years ago that if you actually covered a part of the Sahara Desert with solar panels and use quantum dot cellular automaton computing, you could keep mankind in an uploaded form running there indefinitely, with a rather minimal impact on the biosphere. So in that case, maybe the future of humanity is instead going to be a little black square on a continent, and not making much fuss in the outside universe.

I hold that slightly unlikely, because sooner or later somebody’s going to say, “But what about space? What about just exploring that material world I heard so much about from Grandfather when he was talking? ‘In my youth, we were actually embodied.'” So I’m not certain this is a stable future.

The thing that interests me is that I like open-ended futures. I think it’s kind of worrisome if you come up with an idea of a future that is so perfected, but it requires that everybody do the same thing. That is pretty unlikely, given how we are organised as people right now, and systems that force us to do the same thing are terrifyingly dangerous. It might be a useful thing to have a singleton system that somehow keeps us from committing existential risk suicide, but if that impairs our autonomy, we might actually have lost quite a lot of value. It might still be worth it, but you need to think carefully about the tradeoff. And if its values are bad, even if it’s just subtly bad, that might mean that we lose most of the future.

I also think that there might be really weird futures that we can’t think well about. Right now we have certain things that we value and evaluate as important and good: we think about the good life, we think about pleasure, we think about justice. We have a whole set of things that are very dependent on our kind of brains. Those brains didn’t exist a few million years ago. You could make an argument that some higher apes actually have a bit of a primitive sense of justice. They get very annoyed when there is unfair treatment. But as you go back in time, you find simpler and simpler organisms and there is less and less of these moral values. There might still be pleasure and pain. So it might very well be that the fishes swimming around the oceans during the Silurian already had values and disvalues. But go back another few hundred million years and there might not even have been that. There was still life, which might have some intrinsic value, but much less of it.

Where I’m getting at with this is that value might have emerged in a stepwise way: We started with plasma near the Big Bang, and then eventually got systems that might have intrinsic value because of complex life, and then maybe systems that get intrinsic value because they have consciousness and qualia, and maybe another step where we get justice and thinking about moral stuff. Why does this process stop with us? It might very well be that there are more kinds of value waiting in the wings, so to say, if we get brains and systems that can handle them.

That would suggest that maybe in 100 million years we find the next level of value, and that’s actually way more important than the previous ones all taken together. And it might not end with that mysterious whatever value it is: there might be other things that are even more important waiting to be discovered. So this raises this disturbing question that we actually have no clue how the universe ought to be organised to maximise value or doing the right thing, whatever it is, because we might be too early on. We might be like a primordial slime thinking that photosynthesis is the biggest value there is, and totally unaware that there could be things like awareness.

Rob Wiblin: OK, so the first one there was a very big future, where humanity and its descendants go and grab a lot of matter and energy across the universe and survive for a very long time. So there’s the potential at least, with all of that energy, for a lot of beings to exist for a very long time and do all kinds of interesting stuff.

Then there’s the very modest future, where maybe we just try to keep our present population and we try to shrink our footprint as much as possible so that we’re interfering with nature or the rest of the universe as little as possible.

And then there’s this wildcard, which is maybe we discover that there are values that are totally beyond human comprehension, where we go and do something very strange that we don’t even have a name for at the moment.

Comment by Mo Putera (Mo Nastri) on Four visions of Transformative AI success · 2024-01-22T11:21:35.041Z · LW · GW

the value generators are about as simple and general as we could have gotten

Would you say it's something like empowerment? Quoting Jacob:

Empowerment provides a succinct unifying explanation for much of the apparent complexity of human values: our drives for power, knowledge, self-actualization, social status/influence, curiosity and even fun[4] can all be derived as instrumental subgoals or manifestations of empowerment. Of course empowerment alone can not be the only value or organisms would never mate: sexual attraction is the principal deviation later in life (after sexual maturity), along with the related cooperative empathy/love/altruism mechanisms to align individuals with family and allies (forming loose hierarchical agents which empowerment also serves).

The key central lesson that modern neuroscience gifted machine learning is that the vast apparent complexity of the adult human brain, with all its myriad task specific circuitry, emerges naturally from simple architectures and optimization via simple universal learning algorithms over massive data. Much of the complexity of human values likewise emerges naturally from the simple universal principle of empowerment.

Empowerment-driven learning (including curiosity as an instrumental subgoal of empowerment) is the clear primary driver of human intelligence in particular, and explains the success of video games as empowerment superstimuli and fun more generally.

This is good news for alignment. Much of our values - although seemingly complex - derive from a few simple universal principles. Better yet, regardless of how our specific terminal values/goals vary, our instrumental goals simply converge to empowerment regardless. Of course instrumental convergence is also independently bad news, for it suggests we won't be able to distinguish altruistic and selfish AGI from their words and deeds alone. But for now, let's focus on that good news:

Safe AI does not need to learn a detailed accurate model of our values. It simply needs to empower us.

Comment by Mo Putera (Mo Nastri) on On "Geeks, MOPs, and Sociopaths" · 2024-01-22T04:41:13.075Z · LW · GW

Curious what you think of Scott Alexander's Peter Turchin-inspired 'cyclic model' alternative to Chapman's model, which he argues better matches his experience, summarizable as precycle → growth (forward + upward + outward) → involution → postcycle: 

Either through good luck or poor observational skills, I’ve never seen a lot of sociopath takeovers. Instead, I’ve seen a gradual process of declining asabiyyah. Good people start out working together, then work together a little less, then turn on each other, all while staying good people and thinking they alone embody the true spirit of the movement.

Comment by Mo Putera (Mo Nastri) on What rationality failure modes are there? · 2024-01-22T04:26:38.692Z · LW · GW

Curious to see you elaborate on the last point, or just pointers to further reading. I think I agree in a betting sense (i.e. is Jan's claim true or false?) but don't really have a gears-level understanding.

Comment by Mo Putera (Mo Nastri) on What rationality failure modes are there? · 2024-01-22T04:23:59.036Z · LW · GW

I'm not sure your last sentence is true, mainly because of selection bias: a fair proportion of the more instrumental folks are too busy actually doing work IRL to post frequently here anymore (e.g. Luke Muehlhauser, whom I still sometimes think of as the author of posts like How to Beat Procrastination rather than in terms of his current role). 

Comment by Mo Putera (Mo Nastri) on On "Geeks, MOPs, and Sociopaths" · 2024-01-22T04:16:52.821Z · LW · GW

you can tell who are the sociopaths by their money & unnaturally high h-index, and you can tell who are the geeks by their quality work

Tangential to your comment's main point, but for non-insiders maybe PaperRank, AuthorRank and Citation-Coins are harder to game than the h-index: 

Since different papers and different fields have largely different average number of co-authors and of references we replace citations with individual citations, shared among co-authors. Next, we improve on citation counting applying the PageRank algorithm to citations among papers. Being time-ordered, this reduces to a weighted counting of citation descendants that we call PaperRank. Similarly, we compute an AuthorRank applying the PageRank algorithm to citations among authors. These metrics quantify the impact of an author or paper taking into account the impact of those authors that cite it. Finally, we show how self- and circular- citations can be eliminated by defining a closed market of citation-coins. 

They still can't be compared between subfields though, only within.
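
If it helps, here's my toy reading of the construction, not the authors' code (and a simplification: their AuthorRank runs PageRank on the author-level citation graph, whereas I just split paper scores among co-authors):

```python
import networkx as nx

# Hypothetical citation DAG: an edge A -> B means paper A cites paper B.
citations = [("P3", "P1"), ("P3", "P2"), ("P4", "P3"), ("P5", "P3"), ("P5", "P1")]
authors = {"P1": ["Alice"], "P2": ["Bob"], "P3": ["Alice", "Carol"],
           "P4": ["Carol"], "P5": ["Bob", "Carol"]}

G = nx.DiGraph(citations)
paper_rank = nx.pagerank(G)               # paper-level scores ("PaperRank")

author_rank: dict[str, float] = {}
for paper, score in paper_rank.items():
    for a in authors[paper]:              # "individual citations, shared among co-authors"
        author_rank[a] = author_rank.get(a, 0.0) + score / len(authors[paper])

print({p: round(s, 3) for p, s in paper_rank.items()})
print({a: round(s, 3) for a, s in author_rank.items()})
```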

Comment by Mo Putera (Mo Nastri) on Being nicer than Clippy · 2024-01-18T08:44:23.609Z · LW · GW

I don't have anything to add other than that I really appreciate how you've articulated a morass of vague intuitions I've begun to have re: boundaries-oriented ethics, and that I hope you end up writing this up as a full standalone post sometime.

Comment by Mo Putera (Mo Nastri) on An Introduction To The Mandelbrot Set That Doesn't Mention Complex Numbers · 2024-01-18T08:36:06.658Z · LW · GW

I'm personally very glad you nevertheless decided to go ahead and publish this (pedagogically beautiful) essay; I'm already mentally drawing up a list of friends to share this with :) 

Comment by Mo Putera (Mo Nastri) on What good is G-factor if you're dumped in the woods? A field report from a camp counselor. · 2024-01-13T06:21:54.096Z · LW · GW

Really? I thought it was unsettling.

Comment by Mo Putera (Mo Nastri) on Notes on notes on virtues · 2024-01-06T05:12:58.784Z · LW · GW

I hope you do too. One of my aims this year is to try 'intentional virtue training', and your sequence has been an impetus, although I've only skimmed certain parts so I intend to read them more thoroughly later. I'm not sure whether I should try Ben Franklin's approach or SotF&E's; the former strikes me as somewhat harsher, but I have a hunch (empirically unsupported aside from my own confounder-laden upbringing) that the harshness is a feature not a bug for a certain sort of person, including me, so I'm leaning towards that. 

Comment by Mo Putera (Mo Nastri) on Notes on notes on virtues · 2024-01-05T07:59:45.059Z · LW · GW

Hi David, is the notes on virtues sequence still ongoing? 

I like the idea of the Society of the Free and Easy, but the fact that the program began to dwindle after a while does give me pause from a 'will it work for me?' perspective.

Comment by Mo Putera (Mo Nastri) on The spiritual benefits of material progress · 2024-01-02T09:13:09.815Z · LW · GW

Is that disagreement enough to change the (predicted) truth value of Jason's claim though? 

I'll admit to being biased here. I live in a rapidly-developing middle-income country; the difference in opportunity between my generation and my parents' is nearly as vast as that between 1910 and 2009 in Gordon's statistics. To me, while I agree wholeheartedly that Gordon's categorization doesn't cleave reality at the same joints Jason's does, it's still ~irrelevant in that it doesn't change my mind on the directionality of Jason's claim.

Comment by Mo Putera (Mo Nastri) on Memory bandwidth constraints imply economies of scale in AI inference · 2023-12-31T18:39:25.418Z · LW · GW

A few years back VCs were fooled by a number of well meaning startups based on the pitch "We can just make a big matmul chip like a GPU but with far more on chip SRAM and thereby avoid the VN bottleneck!"

Including Cerebras?

Comment by Mo Putera (Mo Nastri) on Value systematization: how values become coherent (and misaligned) · 2023-12-28T13:08:14.234Z · LW · GW

Tangentially:

See Friston's predictive-processing framework in neuroscience

Nostalgebraist has argued that Friston's ideas here are either vacuous or a nonstarter, in case you're interested.

Comment by Mo Putera (Mo Nastri) on Critical review of Christiano's disagreements with Yudkowsky · 2023-12-28T11:55:11.054Z · LW · GW

enabling people to tightly couple themselves with specialized electronic devices via high-end non-invasive BCI

Have you written more about why you think this is (to quote you) much more feasible in short-term than people usually assume / can you point me to writeups by others in this regard?  

Comment by Mo Putera (Mo Nastri) on Succession · 2023-12-27T09:23:03.977Z · LW · GW

It's that I don't like the Grand Vision.

I thought it was pretty courageous of you to state this so frankly here, especially given how the disagree-votes turned out. 

Comment by Mo Putera (Mo Nastri) on Contra Yudkowsky on AI Doom · 2023-12-25T17:30:51.625Z · LW · GW

Thanks Jacob. I've been reading the back-and-forth between you and other commenters (not just habryka) in both this post and your brain efficiency writeup, and it's confusing to me why some folks so confidently dismiss energy efficiency considerations with handwavy arguments not backed by BOTECs. 

While I have your attention – do you have a view on how far we are from ops/J physical limits? Your analysis suggests we're only 1-2 OOMs away from the ~10^-15 J/op limit, and if I'm not misapplying Koomey's law (2x every 2.5y back in 2015, I'll assume slowdown to 3y doubling by now) this suggests we're only 10-20 years away, which sounds awfully near, albeit incidentally in the ballpark of most AGI timelines (yours, Metaculus etc). 
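
(For transparency, the back-of-envelope behind that 10-20 years, under my assumed 3-year doubling time: each OOM of efficiency is log2(10) ≈ 3.3 doublings, so roughly a decade per OOM.)

```python
from math import log2

doubling_time_years = 3.0          # assumed post-2015 slowdown of Koomey's law
for ooms_remaining in (1, 2):      # distance to the ~1e-15 J/op limit, per above
    years = ooms_remaining * log2(10) * doubling_time_years
    print(f"{ooms_remaining} OOM(s) remaining -> ~{years:.0f} years")
# 1 OOM -> ~10 years, 2 OOMs -> ~20 years
```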

Comment by Mo Putera (Mo Nastri) on Contra Yudkowsky on AI Doom · 2023-12-24T18:12:20.463Z · LW · GW

Curious, did this bet happen? Since Jacob said he may be up for it depending on various specifics.

Comment by Mo Putera (Mo Nastri) on Assessment of AI safety agendas: think about the downside risk · 2023-12-19T09:49:43.495Z · LW · GW

As an aside, Rethink Priorities' cross-cause cost-effectiveness model (CCM) automatically prompts consideration of downside risk as part of the calculation template, so to speak. Their placeholder values for a generic AI misalignment x-risk mitigation megaproject are

  • 97.3% chance of having no effect (all parameters are changeable by the way)
  • 70% chance of positive effect conditional on the above not occurring, and hence
  • 30% chance of negative effect, which leads to 
  • 30% increase in probability of extinction (relative to the positive counterfactual's effect, not total p(doom))

The exact figures RP's CCM spits out aren't that meaningful; what's more interesting for me are the estimates under alternative weighting schemes for incorporating risk aversion (Table 1), pretty much all of which are negative. My main takeaway from this sort of exercise is the importance of reducing sign uncertainty, which isn't a new point but sometimes appears underemphasized.  
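
To make the structure of those placeholder numbers explicit, here's my own sketch of the expected-value arithmetic (not RP's actual CCM code, and with the good outcome normalized to one unit of x-risk reduced):

```python
p_no_effect = 0.973        # placeholder: the megaproject most likely does nothing
p_positive_given_effect = 0.70
p_negative_given_effect = 0.30
backfire_magnitude = 0.30  # negative case: 30% of the positive effect's size, reversed

positive_effect = 1.0      # normalize the good outcome to 1 unit of x-risk reduced
expected = (1 - p_no_effect) * (
    p_positive_given_effect * positive_effect
    - p_negative_given_effect * backfire_magnitude * positive_effect
)
print(f"expected risk reduction: {expected:.4f}")  # ~0.0165, vs ~0.0189 with no bad branch
```

Even this risk-neutral version shows the backfire branch eating a noticeable chunk of the expected value; risk-averse weightings penalize it much harder, which presumably is part of why the Table 1 estimates come out negative.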

Comment by Mo Putera (Mo Nastri) on 2022 (and All Time) Posts by Pingback Count · 2023-12-17T14:02:18.926Z · LW · GW

Karma of posts linking to the post in question, I think.

Comment by Mo Putera (Mo Nastri) on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T09:47:45.161Z · LW · GW

In other words, slow multipolar failure. Critch might point out that the disanalogy in "AI won't need to kill humans, just as the US doesn't need to kill the Sentinelese" lies in how AIs can have much wider survival thresholds than humans, leading to (quoting him)

Eventually, resources critical to human survival but non-critical to machines (e.g., arable land, drinking water, atmospheric oxygen…) gradually become depleted or destroyed, until humans can no longer survive.

Comment by Mo Putera (Mo Nastri) on Nuclear Energy - Good but not the silver bullet we were hoping for · 2023-12-17T09:37:31.624Z · LW · GW

I just tried your link but got a "This site can’t be reached" error.

Comment by Mo Putera (Mo Nastri) on re: Yudkowsky on biological materials · 2023-12-15T04:30:50.049Z · LW · GW

But when you try to come up with a plan more specific than "try to ban general-purpose computing", it turns out that the exact threat model matters.

I think this is why I'm more partial to Holden's "playbook, not plan" way of thinking about this, even if I'm not sure what to think of his 4 key categories of interventions. 

Comment by Mo Putera (Mo Nastri) on What is the next level of rationality? · 2023-12-14T16:36:52.148Z · LW · GW

Chapman's old work with Phil Agre programming Pengi at the MIT AI Lab seems to suggest otherwise, but I respect your decision not to read his writings, since it mirrors my own decision after I attempted and failed to grok him.

Comment by Mo Putera (Mo Nastri) on What is the next level of rationality? · 2023-12-13T04:19:35.518Z · LW · GW

What do you think of David Chapman's stuff? I'm thinking of his curriculum sketch in particular. 

I don't think most rationalists were very excited by it though, e.g. Scott's brief look at it in 2013 (and David's response downthread) and an old comment thread I can no longer find between David and Kaj Sotala.

Comment by Mo Putera (Mo Nastri) on Wacky, risky, anti-inductive intelligence-enhancement methods? · 2023-12-13T04:08:14.893Z · LW · GW

fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.

My first reaction was to wonder how this is any different from what already happens in pure math, theoretical physics, TCS, etc. Reflecting on this led to my second reaction: jargon brevity correlates with (utility × frequency), which is domain-specific (cf. Terry Tao's remarks on useful notation), and cross-domain work requires a lot of overhead (to manage stuff like avoiding namespace collisions, but the more general version of this); this overhead work plausibly increases superlinearly with the number of domains, which would be reflected in the language as the sort of thing the late Fields medalist Bill Thurston mentioned re: formalizing math:

Mathematics as we practice it is much more formally complete and precise than other sciences, but it is much less formally complete and precise for its content than computer programs. The difference has to do not just with the amount of effort: the kind of effort is qualitatively different. In large computer programs, a tremendous proportion of effort must be spent on myriad compatibility issues: making sure that all definitions are consistent, developing “good” data structures that have useful but not cumbersome generality, deciding on the “right” generality for functions, etc. The proportion of energy spent on the working part of a large program, as distinguished from the bookkeeping part, is surprisingly small. Because of compatibility issues that almost inevitably escalate out of hand because the “right” definitions change as generality and functionality are added, computer programs usually need to be rewritten frequently, often from scratch.

In practice the folks who I'd trust most to have good opinions on how useful such notations-for-thought would be are breadth + detail folks (e.g. Gwern), people who've thought a lot about adjacent topics (e.g. Michael Nielsen and Bret Victor), and generalists who frequently correspond with experts (e.g. Drexler). I'd be curious to know what they think.

Comment by Mo Putera (Mo Nastri) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T03:43:24.937Z · LW · GW

This is obviously not a very realistic model, but it probably produces fairly realistic results. But again, this is an area for future improvement.

Curious from a modeling perspective: what improvements would be top of mind for you? Another way to phrase this: if someone else were to try modeling this, what aspects would you look at to tell if it's an improvement or not? 

Comment by Mo Putera (Mo Nastri) on The 101 Space You Will Always Have With You · 2023-11-30T06:16:16.295Z · LW · GW

Upvoting for the multiple levels of summarization. Feels respectful of readers' attention too.

Comment by Mo Putera (Mo Nastri) on The 6D effect: When companies take risks, one email can be very powerful. · 2023-11-20T13:12:29.371Z · LW · GW

Persol's comment upthread seems to address the missing mood if I'm interpreting them (and you) correctly? 

Comment by Mo Putera (Mo Nastri) on At 87, Pearl is still able to change his mind · 2023-10-19T15:29:11.640Z · LW · GW

I had to read this sentence a few times to grok the author's point...

Comment by Mo Putera (Mo Nastri) on The 99% principle for personal problems · 2023-10-06T04:15:17.103Z · LW · GW

This was my instinctive reaction as well, made clearer by having done a few years of personal data tracking to (among other things) A/B test self-improvement experiments. It's just hard to tell, especially with the low-probability high-severity recurring issues, perhaps complicated by ever-changing life contexts.

Comment by Mo Putera (Mo Nastri) on How have you become more hard-working? · 2023-09-26T08:22:43.641Z · LW · GW

I'm both inspired and curious, as someone who's attempting a mid-career change -- how did you go from being a laborer on commercial construction projects to sysadmin?

Comment by Mo Putera (Mo Nastri) on Who determines whether an alignment proposal is the definitive alignment solution? · 2023-09-26T07:58:48.281Z · LW · GW

Why was this (sincere afaict) question downvoted?