Posts

How much should e-signatures have to cost a country? 2023-11-21T22:45:48.528Z
"AI Wellbeing" and the Ongoing Debate on Phenomenal Consciousness 2023-08-17T15:47:42.128Z
Name of the fallacy of assuming an extreme value (e.g. 0) with the illusion of 'avoiding to have to make an assumption'? 2023-02-18T08:11:07.506Z
SunPJ in Alenia 2022-06-25T19:39:50.393Z
Am I anti-social if I get vaccinated now? 2021-06-11T15:42:27.699Z

Comments

Comment by FlorianH (florian-habermacher) on social lemon markets · 2024-04-25T18:39:27.221Z · LW · GW

Assuming you're the first to explicitly point out this lemon-market feature of 'random social interaction': kudos, I think it's a great way to express some extremely common dynamics.

Anecdote from my country, where people ride trains all the time, fitting your description, although here it takes a weird kind of extra 'excuse' every time: It would often feel weird to randomly talk to your seat neighbor, but ANY slightest excuse (a sudden bump in the ride, a malfunctioning info speaker, a grumpy ticket collector, one weird word from a random person in the wagon, ... any smallest thing) will extremely frequently make the silent start a conversation, and then easily keep it going for hours if the ride lasts that long. And I think some sort of social lemon market dynamics may indeed help explain it.

Comment by FlorianH (florian-habermacher) on Funny Anecdote of Eliezer From His Sister · 2024-04-23T16:50:30.570Z · LW · GW

Funny is not the only adjective this anecdote deserves. Thanks for sharing this great wisdom/reminder!

Comment by FlorianH (florian-habermacher) on Suzie. EXE's Shortform · 2024-04-21T19:23:26.311Z · LW · GW

I would not search for smart ways to detect it. Instead, look at it from the outside - and there I don't see why we should have much hope for it to be detectable:

Imagine you create your simulation. Imagine you are much more powerful than you are, to make the simulation as complex as you want. Imagine in your coolest run, your little simulatees start wondering: how could we trick Suzie so her simulation reveals the reset?!

I think you'll agree their question will be futile; once you reset your simulation, surely they'll not be able to detect it: while setting up the simulation might be complex, reinitializing it at a given state, with no traces left within the simulated system, seems like the simplest task of it all.

And so, I'd argue, we might well expect the same to hold in our (potential) simulation, however smart your reset-detection design might be.

Comment by FlorianH (florian-habermacher) on An ethical framework to supersede Utilitarianism · 2024-04-17T18:31:24.526Z · LW · GW

My impression is that what you propose to supersede utilitarianism with is already rather naturally encompassed by utilitarianism. For example, when you write

If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions.

I disagree that typical concepts of utilitarianism - not strawmen thereof - are in any way "stuck" here at all: "Of course," a classical utilitarian might well tell you, "we'll have to trade off between the candy bar and the fatness it brings; that is exactly what utilitarianism is about." And you can extend that to the other nuances you bring up: whatever, ultimately, we desire or prefer or what-have-you most - as classical utilitarians we'd aim exactly at that, quasi by definition.

Comment by FlorianH (florian-habermacher) on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T21:37:53.226Z · LW · GW

Thanks for the link to the interesting article!

Comment by FlorianH (florian-habermacher) on What does it take to transfer the knowledge to action? · 2024-04-09T00:42:26.812Z · LW · GW

If I understand you correctly, what you describe does indeed seem a bit atypical, or at least not shared by everyone.

Fwiw, pure speculation: Maybe you learned very much from working on/examining advanced types of code. So you learned to understand advanced concepts etc. But you mostly learned to code on the basis of already existing code/solutions.

Often, instead, when we systematically learn to code, we learn bit by bit from the simplest examples, and we don't just learn to understand them; a bit like when starting to learn basic math, we are constantly challenged to put the next element learned directly into practice, on our own. This ensures we master all that knowledge in a highly active way, rather than only passively.

This seems to suggest there's a mechanistically simple yet potentially tedious path for you to learn to more actively create solutions from scratch: Force yourself to start with the simplest things to code actively, from scratch, without looking at the solution first. Just start with a simple problem that 'needs a solution' and implement it. Gradually increase the complexity. I guess it might require a lot of such training. No clue whether there's anything better.

Comment by FlorianH (florian-habermacher) on On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism · 2024-04-04T22:30:40.714Z · LW · GW

The irony in Wenar's piece is: In all he does, he just outs himself as... an EA himself :-). He clearly thinks it's important to think through net impact and to do the things that have great overall impact. Sad that he caricatures the existing EA ecosystem in such an uncompelling and disrespectful way.

Fully agree with your take that he is "absurdly" unconvincing here. I guess nothing is too blatant to be printed in this world, as long as the writer makes bold & enraging enough claims about a popular scapegoat and has a Prof title from a famous uni.

I can only imagine (or hope) that the traction the article got, which you mention (though I have not seen it myself), is mainly limited to the usual suspects for whom EA, quasi by definition, is simply all stupid anyway, if not outright evil.

Comment by FlorianH (florian-habermacher) on Are AIs conscious? It might depend · 2024-03-15T23:44:05.834Z · LW · GW

Unconvinced. Bottom line seems to be an equation of Personal Care with Moral Worth.

But I don't see how the text really supports that: Just because we feel more attached to entities we interact with doesn't inherently elevate their sentience, i.e. their objective moral worth.

Example: Our lesser emotional attachment or physical distance to chickens in factory farms does not diminish their sentience or moral worth, I'd think. Same for (future) AIs too.

At best I could see this equation +- working out in a perfectly illusionist reality, where there is no objective moral relevance. But then I'd rather not invoke the concept of moral relevance at all - instead we'd have to remain with mere subjective care as the only thing there might be.

Comment by FlorianH (florian-habermacher) on Why I like Zulip instead of Slack or Discord · 2024-03-13T15:16:32.462Z · LW · GW

This page also provides a neat summary of Zulip advantages; mostly in the similar direction as here: https://stackshare.io/stackups/slack-vs-zulip

Comment by FlorianH (florian-habermacher) on Why I like Zulip instead of Slack or Discord · 2024-03-13T15:11:36.555Z · LW · GW

Interesting and good to hear, as I was thinking of using it for a class too (also surprised; I don't remember the slightest hint of counter-intuitiveness when I personally used Zulip with its threads).

Comment by FlorianH (florian-habermacher) on How is Chat-GPT4 Not Conscious? · 2024-03-03T12:08:04.267Z · LW · GW

Thanks, corrected

Comment by FlorianH (florian-habermacher) on How is Chat-GPT4 Not Conscious? · 2024-02-28T23:55:38.462Z · LW · GW

This discussion could be made more fruitful by distinguishing between phenomenal consciousness (sentience) and access/reflective consciousness ('independent cognition' in the author's terminology). The article mainly addresses the latter, which narrows its ethical implications for AI.

"[ChatGPT] would therefore be conscious by most definitions" should be caveated; the presence of advanced cognition may (arguably) be convincingly attributed to ChatGPT by the article, but this does not hold for the other, ethically interesting phenomenal consciousness, involving subjective experience.

Comment by FlorianH (florian-habermacher) on Benito's Shortform Feed · 2024-02-26T22:57:23.275Z · LW · GW

Maybe "I'm interested in the hypothesis/possibility..."

Comment by FlorianH (florian-habermacher) on What Software Should Exist? · 2024-01-21T20:22:10.336Z · LW · GW

A file explorer where I don't type all of D:/F1/F2/F3/F4/X to get/open folder or file X, but instead type (part of) F2 and F4, and it immediately (yes, indexing etc.) offers me X as a result (and maybe the few others that fit the pattern).

If I have an insane number of subfolders/files, maybe it prioritizes indexing those with recent or regular access.

Extension: A version on steroids might even index search results within files, indexing on (say, my) most commonly searched words. Fine if that's a bit too extravagant.

Useful in traditional file structures, as we then type/think/remember less. Plus, it might encourage a step towards more tag-based file organization, which I feel might be useful more generally, though that's just a potential side-effect and not the basic aim.
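To make the matching idea concrete, here is a minimal sketch (not any existing tool) of the fragment-matching logic in Python; the one-off directory walk stands in for a real incremental index, and the fragments "f2"/"f4" are hypothetical:

```python
import os
import re

def index_paths(root):
    """Walk `root` once and collect all folder/file paths (stand-in for a real, incremental index)."""
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            paths.append(os.path.join(dirpath, name))
    return paths

def match(paths, *fragments):
    """Return the paths whose components contain all fragments (case-insensitive), in the given order."""
    frags = [f.lower() for f in fragments]
    hits = []
    for p in paths:
        parts = [c.lower() for c in re.split(r"[\\/]+", p)]
        i = 0
        for frag in frags:
            # advance until some remaining path component contains this fragment
            while i < len(parts) and frag not in parts[i]:
                i += 1
            if i == len(parts):
                break  # fragment not matched -> path is rejected
            i += 1
        else:
            hits.append(p)  # every fragment matched, in order
    return hits

# e.g. match(index_paths("D:/"), "f2", "f4") would offer D:/F1/F2/F3/F4/X
# plus the few other paths fitting the pattern
```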

Comment by FlorianH (florian-habermacher) on An even deeper atheism · 2024-01-11T23:41:03.787Z · LW · GW

Love the post. One relevant possibility that I think would be worthy of consideration w.r.t. the discussion about human paperclippers/inter-human compatibility:

Humans may not be as misaligned as our highly idiosyncratic value theories might suggest, if a typical individual's value theory is really mainly what she uses to justify/explain/rationalize her current local intuitions & actions, without really driving those actions as much as we think. A fooming human might then be more likely to simply update her theories, to again become compatible with what her underlying, more basic intuitions dictate. So the basic instincts that today often give us reasonably compatible practical aims might then still keep us more compatible than our individual explicit, more abstract value theories, i.e. rationalizations, would seem to suggest.

I think there are some observations that might suggest something in that direction. Give humans a new technology, and initially some will call it the devil's tool to be abstained from - but ultimately we all converge to using it, updating our theories and beliefs.

Survey people on whether it's okay to actively put one life in acute danger for the sake of saving 10, and they will strongly diverge on the topic, based on their abstract value theories. Put them in the corresponding leadership position, where that moral question becomes a regular real choice that has to be made, and you might observe them act much more homogeneously, according to the more fundamental and pragmatic instincts.

(I think Jonathan Haidt's The Righteous Mind would also support some of this.)

In this case, the Yudkowskian AI alignment challenge may keep a bit more of its specialness in comparison to the human paperclipper challenge.

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T17:20:40.840Z · LW · GW

You had suggested that the issues with free-riding/insufficient contributions to public goods might not be so much of a problem. The linked post suggests otherwise, as it beautifully highlights some of the horrors that come from these issues. Its point is: even if humans are not all bad in themselves, within larger societies there tend to arise strong incentives for the individual to act against society's interest.

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T16:46:50.887Z · LW · GW

Meditations on Moloch

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T16:20:15.881Z · LW · GW

Absolutely! That's why we have free-rider problems/insufficient contributions to public goods all over the world. The thing you can do in society's best interest is typically not in your own (material) best interest, unless you're a perfect altruist.

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-02T19:10:16.342Z · LW · GW

The actions you suggest might represent a laudable contribution to a public good, but they don't directly answer the (self-concerned) question the OP raises. Given the largeness of the world and the public-goods nature of the projects you mention, his own action will only marginally change the probability of a better structure of society in general. That may still be worth it from a fully altruistic standpoint, but it has asymptotically 0 probability of improving his personal material welfare.

(If I may, an analogy: It is as if I lived in Delhi and wondered what I can do to save myself from the effects of climate change that make the summers ever more unbearable, and you told me "consider not flying; plant trees in the city or in the country...". I see people or small countries get this type of advice in reality too, but it is not truly to the point.)

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-02T18:55:22.141Z · LW · GW

Largely agree with you that "EVERYTHING becomes hard to predict."; it is partly what I meant to allude to with the introductory caveat in my comment. I imagine un-graspably transformative superintelligence well within our lifetime, and cannot give much more advice on that scenario, yet I still keep a non-zero probability on the world & socio-economic structures remaining - for whatever reason - more recognizable, in which case #1 and #2 still seem reasonably natural defaults. But yes, they may apply only in a reasonably narrow band of imaginable AI-transformed futures.

Comment by FlorianH (florian-habermacher) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-01T12:09:42.521Z · LW · GW

Some hypotheses - with much simplification and limited applicability in terms of the possible future states of the world within which they would really be valuable:

  1. Invest in natural resources. Once labor as a bottleneck is overcome, the real constraint will be natural resources, which will earn an important scarcity rent. There are reasons why resource markets might not yet fully price that possibility in (somewhat speculative).
  2. Move to a place that combines: (i) resource richness and (ii) a good likelihood of upholding basic social structures, including economic redistribution, even in the face of tempting rewards for an extractive elite class.

Last but not least:

3. Mindfulness. Embrace how much of a cosmic joke our individual lives and self-centered aspirations represent - or something of that sort.

Comment by FlorianH (florian-habermacher) on Arjun Panickssery's Shortform · 2023-12-30T22:15:20.276Z · LW · GW

I too had trouble for a long time believing that Rawls' theory centered around "OP -> maximin" could get the traction it has. For what it's worth:

A. IMHO, the OP remains a great intuition pump for 'what is just'. 'Imagine, instead of optimizing for your own personal good, you optimized for that of everyone.' I don't see anything misguided in that idea; it is an interesting way to say: Let's find rules that reflect the interest of everyone, instead of only that of a ruling elite or so. Arguably, we could just say the latter more directly, but the veil may be making the idea somewhat more tangible, or memorable.

B. Rawls is not the inventor of the OP. Harsanyi introduced the idea earlier, though Rawls seems to have failed to attribute it to him.

C. Harsanyi, in his 1975 paper Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory, uses rather strong words when he explains why claiming the OP leads to the maximin is a rather appalling idea. The short paper is soothing for any Rawls skeptic; I heavily recommend it (happy to send a copy if somebody is stuck at the paywall).

Comment by FlorianH (florian-habermacher) on NYT is suing OpenAI&Microsoft for alleged copyright infringement; some quick thoughts · 2023-12-30T21:39:17.207Z · LW · GW

it'd be great to see regulations making AI companies liable for all sorts of damage from their products, including attributing statements to people who've never made them.

I see a case against punishing here, in general. Consider me asking you "What did Mario say?", and you answering - in private -

"Listen to this sentence - even though it might well be totally wrong: Mario said xxxx.",

or, even more in line with the ChatGPT situation,

"From what I read, I have the somewhat vague impression that Mario said xxxx - though I might mix this up, so you may really want to double-check."

 

Assume Mario has not said xxxx. We still have a strong case for not, in general, punishing you for the above statement. And even if I acted badly in response to your message, so that someone gets hurt, I'd see, a priori, the main blame to fall upon me, not you.

The parallels to the case of ChatGPT[1] suggest extending a similar reservation about punishment to our current LLMs.

Admittedly, pragmatism is in order. If an LLM's hallucinations - despite warnings - end up creating entire groups of people attacking others due to false statements, it may be high time to rein in the AI. But that should not be the default response to false attributions, not as long as the warning is clear and obvious: do not trust it at all as of yet.

  1. ^

    In addition to knowing today's LLMs hallucinate, we currently even get a "ChatGPT can make mistakes. Consider checking important information." right next to its prompt.

Comment by FlorianH (florian-habermacher) on We're all in this together · 2023-12-06T06:55:45.589Z · LW · GW

I expect that the set of people who:

  • Expect to have died of old age within five years
  • Are willing to reduce how long they'll expected-live in order to be richer before they die
  • Are willing to sacrifice all of humanity's future (including the future of their loved-ones who aren't expected to die of old age within five years)
  • Take actions who impact the what superintelligence is built

is extremely small.

It would be extremely small if we were talking about binaries/pure certainty.

If in reality, everything is uncertain, and in particular (as I think), everyone has individually a tiny probability of changing the outcome, everyone ends up free-riding.

This is true for the commoner[1] who uses ChatGPT or whichever cheapest & fastest AI tool he finds to succeed in his work, thereby supporting the AI race and "Take actions who impact the what superintelligence is built".

It may also be true for the CEOs of many AI companies. Yes, their dystopia-probability-impact is larger, but equally their own career, status, power - and future position within the potential new society, see jacob_cannell's comment - hinge more strongly on their actions.

(Imperfect illustrative analogy: Climate change may kill a hundred million people or so, yet the being called human will still tend to fly around the world, heating it up. Would anyone be willing to "sacrifice" a hundred million people for her trip to Bali? I have some hope they wouldn't. But she'll not skip the holiday if her probability of averting disastrous climate change is tiny anyway. And if, instead of her holiday, her entire career, fame, and power depended on her continuing to pollute, then even if she were a global-scale polluter, she'd likely enough not stop emitting for the sake of changing the outcome. I think we must clearly acknowledge this type of public-good/free-rider dynamics in the AI domain.)

***

In my experience, most of the selfishness people claim to have to justify continuing to destroy the world instead of helping alignment is less {because that's their actual core values and they're acting rationally} and more just {finding excuses to not have to think about the problem and change their minds/actions}.

Agree with a lot in this, but w/o changing my interpretation much: Yes, humans are indeed good at rationalizing their bad actions. But they're especially good at it when it's in their egoistic interest to continue the bad thing. So the commoner and the AI CEO alike might well rationalize 'for complicated reasons it's fine for the world if we (one way or another) heat up the AI race a bit' in irrational ways - precisely because they rightly see continuing to do so as in their own material interest, and want to make their own brain & others see them as good persons nevertheless.

 

  1. ^

    Btw, I agree the situation is a bit different for commoners vs. Sam Altman & co. I read your post as being about persons in general, even people who merely use the AI tools and thereby economically influence the domain via market forces. If that was not just my misreading, you might simplify the discussion by editing your post to refer to those with a significant probability of making a difference (I interpret your reply in that way; though I also don't think the result changes much, as I try to explain).

Comment by FlorianH (florian-habermacher) on We're all in this together · 2023-12-05T23:07:11.222Z · LW · GW

Your strong conclusion 

The people currently taking actions that increase the probability that {the former is solved first} are not evil people trying to kill everyone, they're confused people who think that their actions are actually increasing the probability that {the latter is solved first}.

rests on a few over-simplifications associated with

Almost all of what the future contains is determined by which of the two following engineering problems is solved first:

  • How to build a superintelligent AI (if solved first, everyone dies forever)
  • How to build an aligned superintelligent AI (if solved first, everyone gets utopia)

For example:

  • Say I'm too old to expect aligned AI to give me eternal life (or aligned AI might simply not mean eternal life/bliss for me, for whatever reason; maybe because it's still better to start with newborns more efficiently made into bliss-enjoying automatons, or whatever utopia entails). So for me individually, the intermediate years before superintelligence are the relevant ones, and I might rationally want to earn money by working on enriching myself, whatever the (un-)alignment impact of it.
  • Given the public-goods nature of alignment, I might fear it's unlikely for us to cooperate, so we'll all free-ride, working to enrich ourselves on various things that lean rather towards building unaligned AI. With such a prior, it may be rational for any self-interested person - also absent confusion - to indeed free-ride: 'hopefully I make a bit of money for an enjoyable few or many years, before unaligned AGI destroys us with near-certainty anyway'.
  • Even if technical alignment were not too difficult, standard Moloch-type effects (again, all sorts of free-riding/power-seeking) might mean the chances of unaligned users of even otherwise 'aligned' technology are overwhelming, again meaning most value for most people lies in increasing their material welfare for the next few years to come, rather than 'wasting' their resources on a futile alignment project.
  • Assume we're natural humans, instead of perfectly patient beings (I know, weird assumption). So you eat the chocolate and drink the booze, even if you know that in the longer-term future it's a net negative for you. You need not be confused about it; you need not even be strictly irrational from today's point of view (depending on the exact definition of the term): you might genuinely care more about the "you" of the current and coming few days or years than about that future self which simply doesn't yet feel very close to you - in other words, just as it can be rational to care a bit more about yourself than about others, you might care a bit more about your 'current' self than about that "you" a few decades - or an eternity - ahead. So the near-eternal utopia might mean a lot to you, but not infinitely much. It is easy to see that it may then well remain rational for you to use your resources towards the more mundane aim of increasing your material welfare for the few intermediate years to come - given that your own potential marginal contribution to P(Alignment first) is very small (<<1).

Hence, I'm afraid, when you take into account real-world complexities, it may well be perfectly rational for individuals to race on towards unaligned superintelligence; you'd need more altruism, rather than only more enlightenment, to improve the situation. In a messy world of 8 billion in capitalistic, partly zero-sum competition (here even partly negative-sum competition), it simply isn't simple to get cooperation even if individuals were approximately rational and informed - even if, indeed, we are all in this together.
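To make the free-riding logic explicit, a toy expected-value sketch (all symbols and numbers are purely hypothetical, not from the post): suppose devoting my resources to alignment raises P(Alignment first) by ΔP, the aligned-utopia outcome is worth U to me, and racing instead yields a sure personal gain w. Self-interested racing then looks rational whenever

$$\Delta P \cdot U < w, \qquad \text{e.g.}\quad 10^{-7}\cdot 10^{6} = 0.1 < 10^{3},$$

even for an enormous U, simply because my individual ΔP is tiny - the structure of any public-goods/free-rider problem.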

Comment by FlorianH (florian-habermacher) on Neither Copernicus, Galileo, nor Kepler had proof · 2023-11-23T13:17:54.489Z · LW · GW

The many disagreement-karma votes (-7 as of now) suggest my failed joke was quite widely taken as a serious statement, which in itself is quite interesting; maybe worth preserving as stats in this from-now-on retracted comment; we're living in strange times!

Comment by FlorianH (florian-habermacher) on Neither Copernicus, Galileo, nor Kepler had proof · 2023-11-22T22:44:38.396Z · LW · GW

And next year we may learn it is in fact flat ;-)

Thanks for sharing, surprising stuff!!

Comment by FlorianH (florian-habermacher) on How much should e-signatures have to cost a country? · 2023-11-22T14:30:06.124Z · LW · GW

https://e-estonia.com/e-governance-saves-money-and-working-hours/ "Estonian public sector annual costs for IT systems are 100M Euros in upkeep and 81M Euros in investments"

If I read you correctly, the 100+81M in Estonia is for (i) the ENTIRE gvmt IT system (not just e-signatures), (ii) serving the whole population. Though I could not read the report in Estonian to verify. Switzerland's "up to 19 $bn" is specifically for e-signatures, and only for within-gvmt exchanges afaik.

Comment by FlorianH (florian-habermacher) on Why not electric trains and excavators? · 2023-11-21T14:07:19.494Z · LW · GW

2. The pay of rail executives depends on short-term profits, so they're against long-term investments.

I think that is not as obvious an explanation as it may intuitively seem:

a. A company's profit is not equal to its cash flow. Profit includes the value of the assets invested in, so a valuable investment should normally not look bad on the balance sheet, even when evaluated in the short run.

b. If there really were a clear 19% or so ROI, even if the accounting ignored a.: You'd typically expect a train company to debt-finance an overwhelming share of the electrification capex, as is common for large infrastructure projects, attenuating the importance of the cash-flow issue and making the investment even more attractive for equity investors.
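A back-of-the-envelope illustration of point b. (the 80% debt share and 5% interest rate are assumed numbers, not from the post): if a project returning $r_P = 19\%$ on total capital is financed with a debt share $d$ at interest $r_D$, the return on the remaining equity is roughly

$$r_E = \frac{r_P - d\,r_D}{1-d} = \frac{0.19 - 0.8 \times 0.05}{0.2} \approx 75\%,$$

which is why a genuinely clear 19% ROI would be hard for profit-maximizing executives to ignore, short-term pay incentives or not.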

Comment by FlorianH (florian-habermacher) on D0TheMath's Shortform · 2023-11-21T10:22:48.027Z · LW · GW

Thanks!

Taking the 'China good in marginal improvements, less in breakthroughs' story in some of these sources at face value, the critical question becomes whether leadership in AI hinges more on breakthroughs or on marginal innovations & scaling. I guess both could be argued for, with the latter being more relevant especially if breakthroughs generally diffuse quickly.

I take as the two other principal points from these sources (though also haven't read all in full detail): (i) some organizational drawbacks hampering China's innovation sector, esp. what one might call high-quality innovation (ii) that said, innovation strategies have been updated and there seems to be progress observed in China's innovation output over time.

I'm at least slightly skeptical about the journal/citation-based metrics, as I'm wary of stats being distorted by English-language/US citation circles. Though that's more of a side point.

In conclusion, I don't update my estimate much. The picture painted is mixed anyway, with lots of scope for China to become a stronger innovator at any time, even if it does indeed still have significant gaps now. I would remain totally unsurprised if many leading AI innovations also come out of China in the coming years (or decades, assuming we'll witness any), though I admit I remain a layperson on the topic - a layperson skeptical about so-called experts' views in this domain.

Comment by FlorianH (florian-habermacher) on D0TheMath's Shortform · 2023-11-19T18:57:03.543Z · LW · GW

Good counterpoint to the popular, complacent "China is [and will be?] anyway lagging behind in AI" view.

An additional strength

  • Patience/long-term foresight/freedom to develop AI w/o the pressure from the 4-year election cycle and to address any moment's political whims of the electorate with often populist policies

I'm a bit skeptical about the popular "Lack of revolutionary thought" assumption. Reminds me a bit of the "non-democracies cannot really create growth" claim that was taken as a law of nature by far too many 10-20 years ago, before today's China. Keen to read more on the lack of revolutionary thought if somebody shares compelling evidence/resources.

Comment by FlorianH (florian-habermacher) on R&D is a Huge Externality, So Why Do Markets Do So Much of it? · 2023-11-17T22:51:09.001Z · LW · GW

1. Categories

Fundamental Research = State
Applied Research = Companies

.. is a common paradigm, and - while grossly oversimplified - makes some sense: the latter category has more tangible outputs, shorter payback etc. In line with @Dagon's comment, at the very least these two broad categories would have to be split for a serious discussion of whether 'too much' or 'too little' is done by gvmt and/or companies.

2. Incentives!

I've worked in a research startup and saw the same dollar go much further in producing high-quality research outputs than what I've directly experienced in some places (and observed from many more) in the exact same domain in state-sponsored research (academia), where there is often a type of resource-curse dynamic.
My impression is that these observations generalize rather well (I'm talking about a general tendency; needless to say, there are many counterexamples; often those wanting to do serious research are exactly the ones attracted by public research opportunities, where they can do great work). Your explanation leaves out this factor; it might explain a significant part of the state's reluctance to spend more on R&D.

This does not mean the state should not do (or support) more R&D, but I think there are very important complexities the post leaves out, limiting its explanatory power.

Comment by FlorianH (florian-habermacher) on R&D is a Huge Externality, So Why Do Markets Do So Much of it? · 2023-11-17T21:55:17.719Z · LW · GW

This explains the existence of R&D, not a "too much" of it.

Comment by FlorianH (florian-habermacher) on AI as Super-Demagogue · 2023-11-06T12:38:00.599Z · LW · GW

Thanks, yes, sadly seems all very plausible to me too.

Comment by FlorianH (florian-habermacher) on AI as Super-Demagogue · 2023-11-06T08:06:28.729Z · LW · GW

Thanks!

Cruz: I think a model where being a terribly good liar (whether coached, innate, or self-taught) is a prerequisite for becoming a top cheese in US politics fits observations well.

Trumpeteer numbers: I'd now remove that part from my comment. You're right. Superficially, my claim could seem substantiated by things like (Atlantic) "For many of Trump’s voters, the belief that the election was stolen is not a fully formed thought. It’s more of an attitude, or a tribal pose.", but even there, upon closer reading, it comes out: In some form, they do (or at least did) seem to believe it. Pardon my shallow remark before checking the facts more carefully.

AI safety: I guess what could make the post easier to understand, then, is if you made it clearer (i) whether you believe AI safety is in reality no real major issue (vs. merely overemphasized/abused by the big players to gain regulatory advantages), and if you do, i.e. if you dismiss most AI safety concerns, (ii) whether you do that mainly for today's AI or also for what we expect in the future.

Ng: In no way do I doubt his merits as an AI pioneer! That does not at all guarantee he has the right assessment of future dangers w.r.t. the technology. Incidentally, I also found some of his dismissals very lighthearted; I remember this one. On your link Google Brain founder says big tech is lying about AI extinction danger: That article quotes Ng on it being a "bad idea that AI could make us go extinct", but it does not provide any argument supporting it. Again, I do not contest that AI leaders are overemphasizing their concerns and abusing them for regulatory capture. The incentives are obviously huge. Ng might even be right with his "with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting", and that's a hugely relevant question. I just don't think it's a reason to dismiss AI safety concerns more generally (and imho your otherwise valid post loses power by pushing in that direction, e.g. with the Ng example).

Comment by FlorianH (florian-habermacher) on AI as Super-Demagogue · 2023-11-05T22:59:48.323Z · LW · GW

Agree with a lot, but major concerns:

  1. I'd bet this is entirely spurious (emphasis added):

And the bigger the shift, the more it reinforces for the followers that their reality is whatever the leader says.
Eventually people's desire to comply overcomes their sense of reality. [...]
Trump also changes positions regularly.
[...]
That explains why Ted Cruz became Trump's ally. It was not despite Trump's humiliation of Cruz, but because of it. The Corruption of Lindsey Graham is an excellent in depth read on how another opponent became a devoted supporter. [...]

Do you really believe Cruz & co.'s official positions have anything to do with their genuine beliefs? That seems a weird idea. Would you have thought the same thing about Tucker Carlson before his behind-the-scenes hate speech about Trump got disclosed? I think the reality of big-politics business is too obvious to claim this is in any way about true beliefs.

2. Another point of skepticism concerns the supposed "70 million devoted followers who hang on [Trump's] every word".

I have the impression (maybe simply a hope) that a (large) bunch of these may not be as fully brainwashed as we make them out to be. I can too easily imagine (and to some degree empathize with) persons not buying all that much from Trump at all, but simply liking his style, and hating conventional politics - with its equally constant but better-hidden lies - so much that, as a protest, they really 'love' him a bit, compared to the (in their view) no-less-dirty but more uncanny-as-carefully-hidden establishment.

3. Finally, on AI specifically:

  • Andrew Ng arguably rather lightheartedly dismissing various AI concerns may also speak against him rather than the other way round.
  • Granted, nothing is more plausible than that some AI safety ideas playing well into the hands of the big AI guys is an effective incentive for them to push in that direction, more strongly than they otherwise would. One could thus say, if these were the only persons calling out AI dangers and requesting AI safety measures: Mind their incentives! However, this is of course not at all the case. From whichever angle we look at the AI questions in depth, we see severe unsolved risks. Or at least: a ton of people advocate for taking these seriously even if they personally would rather have different incentives, or no straightforward incentives in any direction at all.

Of course, this leaves open the scope for big AI guys pushing AI safety regulation in a direction that specifically serves them instead of (only) making the world safer. That would barely surprise anyone. But it substantiates "AI safety as a PR effort" about as much as the fact that 'climate scientists would lose their jobs if there was no climate change' proves that climate change is a hoax.

Comment by FlorianH (florian-habermacher) on The other side of the tidal wave · 2023-11-04T17:03:30.071Z · LW · GW

That's so correct. But still so wrong - I'd like to argue.

Why?

Because replacing the brain is simply not the same as replacing just our muscles. In all the past we've merely augmented our brain - with stronger muscle, calculation, or writing power etc., using all sorts of dumb tools. But the brain remained the crucial, all-central point for all action.

We will have now tools that are smarter, faster, more reliable than our brains. Probably even more empathic. Maybe more loving.

Statistics cannot be extrapolated across a visible structural break. Yes, it may have been difficult to anticipate, 25 years ago, that computers which calculate so fast etc. would not change society all that fundamentally (though still quite fundamentally) right away, so the 'this time is different' guys of 25 years ago were wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, they could not change the world as fundamentally as we now foresee. This time, however, we seem to be about to get the truly smart ones.

The future is a miracle; we cannot truly fathom how exactly it will look. So nothing is absolutely sure, indeed. But merely looking back at the period in which mainly muscles, not brains, were replaceable is simply not a way to extrapolate into a future where something qualitatively entirely new is about to be born.

So you need something more tangible, something more reliable, to rebut the hypothesis underlying the article. And the article beautifully, concisely explains why we're awaiting something rather unimaginably weird. If you have something to show where specifically it seems wrong, it'd be great to read that.

Comment by FlorianH (florian-habermacher) on The other side of the tidal wave · 2023-11-03T07:15:55.964Z · LW · GW

I think this time is different. The implications simply so much broader, so much more fundamental.

Comment by FlorianH (florian-habermacher) on ChristianKl's Shortform · 2023-11-01T22:18:17.569Z · LW · GW

I guess both countries would lose a nuclear war, if for some weird reason we really had one between the US and China.

Comment by FlorianH (florian-habermacher) on ChristianKl's Shortform · 2023-11-01T22:16:03.405Z · LW · GW

In the grand scheme of things, that would not matter much. If China wants to fully reintegrate Taiwan, it can do so today, or else simply in a few years at the latest. I guess if China is not doing that in the near future, the main reason will be that (i) there is simply not enough value in it and/or (ii) there is significant value for the government in keeping the Taiwan issue as a story for its citizens to focus on - a sort of rally-behind-the-flag effect. US deterrence will be less of a factor.

Comment by FlorianH (florian-habermacher) on The Drowning Child · 2023-10-22T21:55:15.988Z · LW · GW

I wonder whether 1.-5. may often not so much directly dominate in our heads, but instead, mostly:

      6. The drowning child situation simply brings out the really strong warrior/fire-fighter instinct in you, so, as a direct disposition, you’re willing to sacrifice a lot of comfort to save it

Doesn’t alter the ultimate conclusion of your nice re-experiment much, but it means a sort of non-selfish reason for your willingness to help with the drowning child, in contrast to the 5 selfish ones (even if, evolutionarily, 1.-5. are the underlying reasons why we're endowed with the instinct for 6.).

Comment by FlorianH (florian-habermacher) on Is Yann LeCun strawmanning AI x-risks? · 2023-10-19T16:02:05.486Z · LW · GW

Interesting thought. From what I've seen from Yann LeCun, he really does seem to consider AI x-risk fears mainly as pure fringe extremism; I'd be surprised if he holds back elements of the discussion just to avoid convincing people the wrong way round.

For example Youtube: Yann LeCun and Andrew Ng: Why the 6-month AI Pause is a Bad Idea shows rather clearly his straightforward worldview re AI safety, and I'd be surprised if his simple dismissal - however surprising - of everything doomsday-ish was just strategic.

 

(I don't know which possibility is more annoying)

Comment by FlorianH (florian-habermacher) on Richard Ngo's Shortform · 2023-10-18T15:23:31.583Z · LW · GW

World champion in Chess: "It's really weird that I'm world champion. It must be a simulation or I must dream or.."

Joe Biden: "It's really weird I'm president, it must be a simul..."
(Donald Trump: "It really really makes no sense I'm president, it MUST be a s..")

David Chalmers: "It's really weird I'm providing the seminal hard problem formulation. It must be a sim.."

...

Rationalist (before finding lesswrong): "Gosh, all these people around me, really wired differently than I am. I must be in a simulation."

 

Something seems funny to me in the anthropic reasoning in these examples, and in yours too.

Of course we have exactly one world champion in chess (or anything else), so a line of reasoning that makes the world champion quasi by definition question his champion-ness seems odd. Then again, I'd be lying if I claimed I could not intuitively empathize with his wondering about the odds of exactly him being the world champion among 9 billion.

 

This leads me to the following, which eventually +- satisfies me:

Hypothetically, imagine each generation has only 1 person, and there's rebirth: it's just a rebirth of the same person, in a different generation.

With some simplification:

  1. For 10 000 generations you lived in stone-age conditions
  2. For 1 generation - today - you're the hinge-of-history generation
  3. X (X being: you won't live at all anymore as AI killed everything; or you live 1 million generations happily, served by AI; or what have you).

The 10,000 you's didn't have much reason to wonder about the hinge of history, and so didn't happen to think about it. The one you in the hinge-of-history generation, by definition, has much reason to think about the hinge of history, and does think about it.

So it has become a bit like a lottery game that you repeat so many times that you naturally, at some point, draw the winning number. At that lucky draw, there's no reason to think "Unlikely, it's probably a simulation", or anything of the sort.

I have the impression that, in a similar way, the reincarnated guy should not wonder about it, not even when his memory is wiped each time, and in the same vein (hm, am I being sloppy here? that's the hinge of my argument) neither do you have to wonder too much.

Comment by FlorianH (florian-habermacher) on Soulmate Fermi Estimate + My A(ltr)u[t]istic Mating Strategy · 2023-10-12T16:52:04.071Z · LW · GW

A lot of sympathy for the challenge you describe. My possibly bit half-baked views on this:

  1. If you take your criteria seriously, dating sites where you can answer questions and filter according to answers might help
  2. Knowing that for a marriage to work you anyway need something rather close to radical acceptance, you might simply try to put your abstract criteria a bit into perspective, and focus more on finding a partner that accepts your ways very happily rather than sharing so much of them (there is also some danger in this; maybe the truth 'lies in the middle': do that partly, but of course also not too generously; I guess what I mean to say is, reading your text, I get the feeling you might be erring too much on the side of strict requirements, though it's only a spontaneous guess)
  3. 'Red hair as a serious criterion - really?': I think the 1/625 sounds like a reasonable candidate for a spurious correlation: persons have such a large number of characteristics that two of your favorite dates sharing one of them does not say much about the relevance of that individual characteristic, statistically speaking (a quick illustration below). That said, I can believe you simply have a sort of red-hair fetish/preference, but then, if things fit more generally with a person, her having the 'right' hair color as well seems very unlikely to be a major relevant factor for long-term happiness with her.
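To illustrate the spurious-correlation point with made-up numbers (the "100 salient traits" is my assumption, not from your post): if two particular people independently share any single rare trait with probability 1/625, but there are on the order of 100 traits one might notice, the chance that your two favorite dates coincide on at least one such trait is already

$$1 - \left(1 - \tfrac{1}{625}\right)^{100} \approx 15\%,$$

so some striking-looking coincidence is fairly likely to show up even if no individual trait matters much.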

And good luck!!

Comment by FlorianH (florian-habermacher) on Announcing Dialogues · 2023-10-11T17:47:26.559Z · LW · GW

Interested in dialoguing on many topics, especially economic/social policy. Threefold aims: Experiment with a favorite form of communication, and/or Real insight, and/or Fun. Particularly interested in:

  1. Nuclear: pro vs. anti-nuclear in the energy mix
    1. My ideal is to see whether my lukewarm skepticism against nuclear can be defeated with clear & good pro-nuclear arguments
    2. If there’s also a person with stronger-than-my anti-nuclear views, I’d also be keen to have a trialogue (the strong anti, the strong pro, and I'd moderate the dialogue as expert on energy systems & economics, but without deep expertise on nuclear)
  2. Refugee/immigration policy: i. how this could ideally look like, or ii. how one could realistically tweak the current system in the right direction
  3. Terrorism: Under what circumstances, if any, may terrorism be defensible?
  4. Geopolitics and Economics: How to deal with regimes we're wary of?
  5. Dictatorships vs. Democracy/Democrazy: i. Dictatorship as attractor state in today's era, or ii. "If I really were a powerful dictator, I'd have to be insane to voluntarily give up in today's world"
  6. Illusionism of Consciousness

Comment by FlorianH (florian-habermacher) on "AI Wellbeing" and the Ongoing Debate on Phenomenal Consciousness · 2023-08-19T10:07:59.085Z · LW · GW

I understand your concern, about the authors deviating from a consensus without good reasons. However, from the authors' perspective, they probably believe that they have compelling arguments to support their view, and therefore think they're rejecting the consensus for valid reasons. In this case, just pointing to Chesterton's fence isn't going to resolve the disagreement.

Since so much around consciousness is highly debated and complex (or, as some might hold, simple and trivial but difficult for others to see), departing from the consensus isn't automatically a mistake, which I think is the same as or close to what @lc points out.

Comment by FlorianH (florian-habermacher) on "AI Wellbeing" and the Ongoing Debate on Phenomenal Consciousness · 2023-08-18T23:09:38.778Z · LW · GW

Indeed, just as you do, I very much reject that statement, which is only my very blunt wording of what the paper's authors really imply.

Then again, I find your claim slightly too strong. I would not want to claim to know for sure that the authors have not tried to sanity-check their conclusions, and I'm not 100% sure they have not thought quite deeply about the consciousness concept and its origins (despite my puzzlement about their conclusions), so I wouldn't have dared to state it's a classic Chesterton's-fence trespassing. That said, indeed, I find the incompatibility between their claims and how I understand consciousness so fundamental that I guess you're quite spot on (assuming I don't myself fully misinterpret your point).

Comment by FlorianH (florian-habermacher) on Why consumerism is good actually · 2023-03-24T22:04:54.501Z · LW · GW

I'm sympathetic to the idea that "consumerism" might be too often invoked. But - at the risk of overlapping with qjh's detailed answer:

Consumerism = (e.g.) when we consume stuff with very negligible benefit to ourselves, maybe even stuff we could ourselves easily admit is kind of pure nonsense if we thought a second about it, maybe driven by myopic short-term desire we'd ourselves not want to prioritize at all in hindsight. Consumption that nevertheless creates pollution or other harm to people, uses up resources that could otherwise contribute towards more important aims, resources we could have used to help poorer persons so easily. Things along such lines. And I have the impression such types of consumption are not rare - and they remain as sad a part of society after this post as ever before, no?

So I struggle to see what to learn from this post.

Comment by FlorianH (florian-habermacher) on Childhoods of exceptional people · 2023-02-13T08:45:41.007Z · LW · GW

But I doubt highly that most of the things are of a sort that is likely to lead many to be miserable. The two who are the most miserable in the sample are Russell and Woolf who were very constrained by their guardians; Mill also seems to have taken some toll by being pushed too hard. But apart from that?

Mind the potentially strong selection bias specifically here, though. Even if in our sample of 'extra-successful' people there were few (or zero) who were too adversely affected, this does not invalidate a possible suspicion that the base rate of bad outcomes from the treatment is very high - if the badly affected have only a small chance of ever getting to fame.

(This does not mean I disagree with your conclusions in general in any way; nice post!)

Comment by FlorianH (florian-habermacher) on Singapore - Small casual dinner in Chinatown #4 · 2022-08-14T11:24:30.248Z · LW · GW

I'm 10-15min late. Glad to have a sign of where you are. Whatsapp +41786760000