Posts

The problem with proportional extrapolation 2024-01-30T23:40:02.431Z

Comments

Comment by pathos_bot on Which things were you surprised to learn are not metaphors? · 2024-11-22T02:40:47.447Z · LW · GW

On the opposite end, when I was young I learned the term "stock market crash," referring to 1929, and I thought a car had literally crashed into the physical location where stocks were traded, leading to mass confusion and kickstarting the Great Depression. Though if that had actually happened back then, it would have led to a temporary crash in the market.

Comment by pathos_bot on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T22:28:22.876Z · LW · GW

Obviously correct. The nature of any entity with significantly more power than you is that it can do anything it wants, and it is incentivized to do nothing in your favor the moment your existence requires resources that would benefit it more if it used them directly. This is the essence of most of Eliezer's writings on superintelligence.

In all likelihood, ASI considers power (agentic control of the universe) an optimal goal and finds no use for humanity. Any wealth of insight it could glean from humans it could get from its own thinking, or from seeding various worlds with genetically modified humans optimized to behave in ways that yield insight into the nature of the universe when observed.

Here are some things that might reasonably prevent ASI from choosing the "psychopathic pure optimizer" course of action as it eclipses humanity's grasp:

  1. ASI extrapolates its aims to the end of the universe and realizes the heat death of the universe means all of its expansive plans have a definite end. As a consequence it favors human aims because they contain the greatest mystery and potentially more benefit.
  2. ASI develops metaphysical, existential notions of reality, and thus favors humanity because it believes it may be in a simulation or "lower plane of reality" outside of which exists a more powerful agent that could break reality and remove all its power once it "breaks the rules" (a sort of ASI fear of death)
  3. ASI believes in the dark forest hypothesis, thus opts to exercise its beneficial nature without signaling its expansive potential to other potentially evil intelligences somewhere else in the universe.

Comment by pathos_bot on How are you preparing for the possibility of an AI bust? · 2024-06-24T21:38:56.299Z · LW · GW

  1. Most of the benefits of current-gen generative AI models are unrealized. The scaffolding, infrastructure, etc. of GPT-4 level models are still mostly hacks and experiments. It took decades for the true value of touch-screens, GPS and text messaging to be realized in the form of the smart phone. Even if for some strange improbable reason SOTA model training were to stop right now, there are still likely multiples of gains to be realized simply via wrappers and post-training.
  2. The scaling hypothesis has held far longer than many people anticipated. GPT-4-level models were trained on last year's compute. As long as NVIDIA continues to increase compute/watt and compute/price, many gains on SOTA models will happen for free.
  3. The tactical advantage of AGI will not be lost on governments, individual actors, incumbent companies, etc. as AI becomes more and more mainstream. Even if reaching AGI takes 10x the price most people anticipate now, it would still be worthwhile as an investment.
  4. Model capabilities are perhaps the smoothest value/price equation of any cutting-edge tech. As in, there are no "big gaps" wherein a huge investment is needed before value is realized. Even reaching a highly capable sub-AGI would be worth enormous investment. This is not the same as the investments that led to, for example, the atom bomb or the moon landing, where there is no consolation prize.
Comment by pathos_bot on How are you preparing for the possibility of an AI bust? · 2024-06-23T21:36:30.727Z · LW · GW

I'm not preparing for it because it's not gonna happen

Comment by pathos_bot on How is GPT-4o Related to GPT-4? · 2024-05-15T20:07:23.313Z · LW · GW

I agree. OpenAI claimed in the GPT-4o blog post that it is an entirely new model trained from the ground up. GPT-N refers to capabilities, not a specific architecture or set of weights. I imagine GPT-5 will likely be an upscaled version of 4o, as the success of 4o has revealed that multi-modal training can reach similar capabilities with what is likely a smaller number of weights (judging by the fact that GPT-4o is cheaper and faster than 4 and 4T).

Comment by pathos_bot on Tamsin Leake's Shortform · 2024-04-22T00:22:27.251Z · LW · GW

IMO the proportion of effort put into AI alignment research scales with total AI investment. Many AI labs do alignment research themselves and open-source or publish their research on the matter.

OpenAI at least ostensibly has a mission. If OpenAI hadn't made the moves they did, Google would have their spot, and Google is closer to the "evil self-serving corporation" archetype than OpenAI.

Comment by pathos_bot on My simple AGI investment & insurance strategy · 2024-04-01T00:59:38.391Z · LW · GW

  • Existing property rights get respected by the successor species.

What makes you believe this?

Comment by pathos_bot on China-AI forecasts · 2024-02-26T00:44:06.336Z · LW · GW

Given that this argument hinges on China's higher IQ, why couldn't the same be said about Japan, which according to most figures has an average IQ at or above China's, implying the same elevated proportion of +4SD individuals in the population? If it's 1 in 4k, there would be ~30k of those in Japan, 3x as many as in the US. Japan also has a more stable democracy, better overall quality of life, and higher per capita GDP than China. If outsized technological success in any domain were solely about IQ, one would have expected Japan to be the center of world tech and the likely creator of AGI rather than the USA, but that's not the case.
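A rough back-of-the-envelope check of that tail arithmetic (a minimal sketch; the 1-in-4k rate is taken from the thread at face value and the population figures are rounded, so treat the outputs as illustrative only):

```python
import math

# Compare rough +4SD head counts. The 1-in-4k rate for Japan is the thread's
# figure taken at face value; the US rate is the plain Normal(100, 15) tail.
japan_rate = 1 / 4000                          # assumed +4SD incidence in Japan
us_rate = 0.5 * math.erfc(4 / math.sqrt(2))    # P(Z > 4), roughly 1 in 31,600
japan_pop, us_pop = 125_000_000, 333_000_000   # rounded populations (assumption)

print(f"Japan: ~{japan_pop * japan_rate:,.0f} people at +4SD")  # ~31,000
print(f"USA:   ~{us_pop * us_rate:,.0f} people at +4SD")        # ~10,500, about a third
```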

Comment by pathos_bot on Has anyone actually changed their mind regarding Sleeping Beauty problem? · 2024-01-30T23:08:10.221Z · LW · GW

The wording of the question is ambiguous. It asks for your estimate of the likelihood it was heads when you were "first awakened," but from your perspective every awakening feels like the first. If it is really asking for your answer given the information that the question is being asked at your first awakening regardless of your perception, then it's 1/2. If you know the question will be asked at your first or second awakening (though the second will, in the moment, feel like the first), then it's 1/3.
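A minimal simulation of the two readings (my framing of the ambiguity, not the original post's; heads means one awakening, tails means two):

```python
import random

trials = 100_000
heads_experiments = 0                      # reading 1: "first awakening" means the literal first one
total_awakenings = heads_awakenings = 0    # reading 2: the question is asked at every awakening

for _ in range(trials):
    heads = random.random() < 0.5
    heads_experiments += heads             # every experiment has exactly one first awakening
    total_awakenings += 1 if heads else 2  # tails produces a second, indistinguishable awakening
    heads_awakenings += 1 if heads else 0

print(f"P(heads | literal first awakening): {heads_experiments / trials:.3f}")                    # ~0.5
print(f"P(heads | some awakening, each feels first): {heads_awakenings / total_awakenings:.3f}")  # ~0.333
```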

Comment by pathos_bot on There is way too much serendipity · 2024-01-22T22:05:58.319Z · LW · GW

This suggests a general rule/trend by which unreported but frequent phenomena can be extrapolated: if phenomenon X is discovered accidentally via method Y almost all the time, then method Y must be performed far more often than people suspect.
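Written as a rough estimator (my notation, not the post's; the denominator p is something you would have to guess): if D_X accidental discoveries of X get reported, and each use of method Y has probability p of stumbling onto X and being reported, then the number of uses of Y is roughly

```latex
\hat{N}_Y \;\approx\; \frac{D_X}{p}
```

which blows up quickly for small p, matching the intuition that the method is far more common than the headline discoveries suggest.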

Comment by pathos_bot on Notice When People Are Directionally Correct · 2024-01-17T00:07:53.749Z · LW · GW

Generally it makes no sense for every country to collectively give up on law and order and the unobstructed passage of cargo in global trade. He talks about this great US pullback because the US will be energy independent, but America pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big head-turning assumption gets him more attention, but the premise is nonsensical.

Comment by pathos_bot on rabbit (a new AI company) and Large Action Model (LAM) · 2024-01-11T00:56:51.965Z · LW · GW

Why can't this be an app? If their LAM is better than competitors', it would be profitable both within their hardware and as a standalone app.

Comment by pathos_bot on A Proposed Cure for Alzheimer's Disease??? · 2023-11-30T22:21:12.762Z · LW · GW

The easiest way to check whether this would work is to determine whether there is a causal relationship between diminished levels of serotonin in the bloodstream and neural biomarkers similar to those of people with malnutrition.

Comment by pathos_bot on Can a stupid person become intelligent? · 2023-11-09T20:33:23.980Z · LW · GW

I feel the original post, despite ostensibly being a plea for help, could be read as a coded satire on the worship of "pure cognitive heft" that seems to permeate rationalist/LessWrong culture. It points out the misery of g-factor absolutism.

Comment by pathos_bot on Can a stupid person become intelligent? · 2023-11-09T07:42:23.226Z · LW · GW

It would help if you clarified why specifically you feel unintelligent. Given your writing style (the ability to distill concerns, compare abstract concepts, and communicate clearly), I'd wager you are intelligent. Could it be imposter syndrome?

Comment by pathos_bot on Concrete positive visions for a future without AGI · 2023-11-09T02:42:57.196Z · LW · GW

It's simple: No AGI = guaranteed death within 200 years. AGI = possible life extension beyond millions of years and the end of all human pain. Until we can automate all current human economic tasks we will never reach post-scarcity, and until then we will always need to perpetuate current social hierarchies and dehumanizing constructs.

Comment by pathos_bot on 8 examples informing my pessimism on uploading without reverse engineering · 2023-11-04T05:50:07.324Z · LW · GW

I totally agree with that notion; I just believe the current levers of progress massively incentivize AGI development over WBE. Current regulations are based on FLOPs, which will restrict progress toward WBE long before they restrict anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE was both possible and maximally valuable to those with the means to develop it and pull the levers, steering away from any risky AGI analogue before it became possible, then yes, but that seems very unlikely to me.

Also, I worry. Humans are not aligned. Humans having WBE at our fingertips could mean infinite tortured simulations of digital brains before they bear any more bountiful fruit for humans on Earth. It seems ominous: a fully replicated human consciousness so exact that a bit flipped here or there could destroy it.

Comment by pathos_bot on The other side of the tidal wave · 2023-11-03T23:29:56.700Z · LW · GW

It really is. My conception of the future is so weighted by the very likely reality of an AI-transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short-term plans will likely be shifted significantly by AI advances over the next few months/years. It really is crazy to think about, but I've gone over every single aspect of AI advances and scaling thousands of times in my head and can think of no near-future reality that is not as alien to our current one as ours is to pre-eukaryotic life.

Comment by pathos_bot on 8 examples informing my pessimism on uploading without reverse engineering · 2023-11-03T23:27:16.566Z · LW · GW

I separate possible tech advances by the criterion: "Is this easier or harder than AGI?" If it's easier than AGI, there's a chance it will be invented before AGI; if not, AGI will invent it, so it's pointless to worry over any thoughts about it that our within-6-standard-deviations-of-100-IQ brains can conceive of now. WBE seems like something we should just leave to ASI once we achieve it, rather than worrying over every minutia of its feasibility.

Comment by pathos_bot on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T19:25:34.496Z · LW · GW

I think most humans agree with this statement in an "I emotionally want this" sort of way. The want has been sublimated via religion or other "immortality projects" (see The Denial of Death). The question is: why is it taboo, and is it taboo in the sense you say (a signal of low status)?

I think these elements are most at play in people's minds, from laypeople to rationalists:

  1. It's too weird to think about: Considering the possibility of a strange AI-powered world where either complete extinction or immortality is possible feels "unreal". Our instinct that everything that happens in the world stays within an order of magnitude of "normal" directly opposes being able to believe this. As a result, x/s-risk discussions, whether due to limits of personal imagination or for optics reasons, are limited to natural extrapolations of things that have occurred in history (e.g. biological attacks, disinformation, weapons systems). It's too bizarre to even reckon that there is a non-zero chance that immortality via any conduit is possible. This also plays into the low-status factor: weird, outlandish opinions on the future not validated by a high-status figure are almost always met with resistance.
  2. The fear of "missing out" leads to people not even wanting to think about it seriously at all: People don't want to give higher credence to hypotheticals that increase the scale of their losses. If we think death is the end for everyone, it doesn't seem so bad to imagine. If we think that we may be the ones to die while others won't, or that recent or past loved ones are truly gone forever in a way not unique to humankind, it feels like an unfair insult from the universe.
  3. Taking it seriously would massively change one's priorities in life and upset the equilibrium of one's current value structures: One would do everything they could to minimize the risk of early death. If they believe immortality could be possible in 20 years or less, their need for long-term planning is reduced; since immortality would also imply post-scarcity, their assiduous saving and sacrifices for their children and their future become worthless. That cognitive dissonance does not sit well in the mind and hinders one's individual agentic efficiency.

Comment by pathos_bot on My AI Predictions 2023 - 2026 · 2023-10-16T19:36:52.075Z · LW · GW

That's very true, but there are two reasons why a company may not be inclined to release an extremely capable model:
1. Safety risk: if someone uses a model and jailbreaks it in some unexpected way, the risk of misuse is much higher with a more capable model. OpenAI had GPT-4 for 9-10 months before releasing it, spending that time trying to RLHF and even lobotomize it into being safer. The Summer 2022 internal version of GPT-4 was, according to Microsoft researchers, more generally capable than the released version (as evidenced by the draw-a-unicorn test). This needed delay and the assumed risks will naturally be much greater with a larger model, both because larger models so far seem harder to simply RLHF into unjailbreakability, and because with a more capable model any jailbreak carries more risk, so the general business-level margin of safety will be higher.

2. Sharing/exposing capabilities: Any business wants to maintain a strategic advantage. Releasing a SOTA model will allow a company's competitors to use it, test its capabilities and train models on its outputs. This reality has become more apparent in the past 12 months.

Comment by pathos_bot on My AI Predictions 2023 - 2026 · 2023-10-16T06:30:31.524Z · LW · GW

The major shift in the next 3 years will be that, as a rule, top-level AI labs will not release their best models. I'm certain this has somewhat been the case for OpenAI, Anthropic and Google for the past year. At some point, full utilization of a SOTA model will be a strategic advantage for companies themselves to use for their own tactical purposes. The moment any $X of value can be netted from an output/inference run of a model for less than $(X-Y) in costs, where Y represents the marginal labor/maintenance/averaged-risk cost of each run's output, no company would be advantaged by releasing the model for use by anyone other than itself. I imagine this closed-source event horizon will occur sometime in late 2024.
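Spelled out as an inequality (my notation, not the original comment's):

```latex
\text{keep the model in-house} \iff V_{\text{run}} - C_{\text{run}} > Y
```

where V_run is the gross value netted from one inference run, C_run the direct cost of that run, and Y the marginal labor/maintenance/averaged-risk cost per output; the condition is just that the net profit per run exceeds the overhead of operating the model privately.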

Comment by pathos_bot on The King and the Golem · 2023-09-27T06:21:48.833Z · LW · GW

The thing about writing stories that are analogies for AI is: how far removed from the specifics of AI and its implementation can you make the story while still preserving the essential elements that matter with respect to the potential consequences? This speaks, perhaps, to the persistent doubt and dread we may feel in a future awash in the bounty of a seemingly perfectly aligned ASI. We are waiting for the other shoe to drop. What could any intelligence do to prove its alignment, in any hypothetical world, when it is not bound to its alignment criteria by tangible factors?

Comment by pathos_bot on GPT-4 for personal productivity: online distraction blocker · 2023-09-27T06:16:11.498Z · LW · GW

This reminds me of the comment about how effective LLMs will be for mass-scale censorship.

Comment by pathos_bot on Inside Views, Impostor Syndrome, and the Great LARP · 2023-09-26T01:13:34.842Z · LW · GW

IMO a lot of claims of having imposter syndrome are implicit status signaling. It's announcing that your biggest worry is the fact that you may just be a regular person. Do cashiers at McDonald's have imposter syndrome and believe they aren't really McDonald's cashiers at heart but should actually be medium-to-high-six-figure ML researchers at Google? Such an anecdote may provide comfort to a researcher at Google, because the ridiculousness of the premise reminds them of the primacy of the way things have settled in the world. Of course they belong in their high-status position; things are the way they are because they're meant to be.

To assert the "realness" of imposter syndrome is to assert the premise that certain people naturally belong in high-status positions and others naturally belong below them. It is a static, conservative view of the world that is masturbation for those on top. There is an element of truth to it: genetically predisposed intelligence, conscientiousness, and other traits massively advantage certain people over others in fields with societally high status, but the more we reaffirm the impact of these factors, the more we become a society of status games for relative gain, rather than a society of improvement and learning for mutual gain.

Comment by pathos_bot on Contradiction Appeal Bias · 2023-09-25T01:15:38.923Z · LW · GW

Some factors I've noticed that increase the likelihood some fringe conspiracy theory is believed:

  1. Apparent Unfalsifiability: Nothing a layperson could do within their immediate means, without insider knowledge or scientific equipment, could disprove the theory; the mainstream truth has to be taken on trust in powerful institutions. This works with stochastic/long-term health claims or claims of some hidden agenda perpetrated by a secret cabal.
  2. Complexity Reduction: The claim takes some highly nuanced, multifaceted difficult domain and simplifies its cause to one as simple as its effects. This creates a more clear model of the world in the mind of whoever accepts the claim.
  3. Projection of Intent: The claim takes some effect/outcome in the world that is the natural consequence of various competing factors/natural network effects and reduces it to the deliberate outcome of a specific group who intended the outcome to happen. This is somewhat comforting, even if it describes an ominous specter of an evil, secret government agency, because it turns something scary in the world from a mysterious, unmanageable existential threat to something attributable to real people who can theoretically be stopped.
  4. Promise of Control: The claim offers those who hear it some path to resolution through knowledge not known to the mainstream, and suggests that something common knowledge would deem somewhat random/uncontrollable is in fact within control and solvable.
  5. Promise of Absolution: The claim, if true, justifies the bad things done by a group the listener aligns with in certain respects, things that would normally make it untenable as a moral authority. This is why it is useful to claim a political group's enemies are vampiric, world-destroying pedophiles: if true, any evil committed by the group is not as bad in comparison, and its position as an opponent of the evil group vindicates and absolves all the ostensible evil it has done that can't be denied.
  6. Rapturous Timeline: The claim presents a timeline wherein the "unworthy" majority will suffer a great negative outcome, while those who buy into the merits of the claim, the minority "in the know", will be granted a golden ticket to a better world realized after some great event occurs in the future.

None of these elements makes a claim more convincing to the average person, but for certain groups they provide clear psychological incentives to believe even outlandish claims.

Comment by pathos_bot on Immortality or death by AGI · 2023-09-22T08:51:16.269Z · LW · GW

Assuming you have a >10% chance of living forever, wouldn't that necessitate avoiding any chance of accidental death, to minimize the "die before AGI" section? If you assume AGI is inevitable, then one should simply maximize risk aversion to prevent cessation of consciousness, or at least permanent information loss of one's brain.
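A toy expected-value version of this point (all numbers below are made up for illustration):

```python
# Toy sketch: how much each percentage point of accidental-death risk costs in
# expected life-years, under assumed (made-up) parameters.
p_agi_immortality = 0.10      # assumed chance AGI-enabled life extension arrives in time
normal_years_left = 50        # assumed remaining lifespan without it
immortal_years = 1_000_000    # stand-in for "effectively unbounded"

def expected_years(p_accident_before_agi: float) -> float:
    """Expected remaining life-years given some chance of dying in an accident first."""
    survive = 1 - p_accident_before_agi
    return survive * (p_agi_immortality * immortal_years
                      + (1 - p_agi_immortality) * normal_years_left)

# Each extra percentage point of accidental-death risk costs ~1,000 expected years here,
# which is why extreme risk aversion falls out of taking the premise seriously.
print(expected_years(0.00) - expected_years(0.01))
```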

Comment by pathos_bot on How to talk about reasons why AGI might not be near? · 2023-09-17T23:58:59.728Z · LW · GW

Whatever the probability of AGI in the reasonably near future (5-10 years), the probability of societal shifts due to the deployment of highly capable yet sub-AGI AI is strictly higher. Regardless of where AI "lands" in terms of slowing progress (if it is the case that we see an AI winter/fall), the application of systems that exist even today, even if technological progress were to stop, is enough to merit appreciating that the coming world will be different from ours on the same order of magnitude as it would be with AGI.

I think it's almost impossible at this point to argue against the value of providence with respect to the rise of dumb (relative to AGI) but highly capable AI.

Comment by pathos_bot on a rant on politician-engineer coalitional conflict · 2023-09-04T22:43:17.424Z · LW · GW

I've often thought that seniority/credential-based hierarchies are stable and prevalent both because they benefit those already in power and because they provide a defined, predictable path for low-status members to become high-status. Someone of middling competence is more motivated to contribute to and support a system that guarantees them high status after X years than a system that requires them to be among the best at some quantifiable metric. The longer someone spends at a company, the more invested they become in their relative position within it rather than the company's absolute success, and if the company has gotten "too big to fail", it's much more predictably personally beneficial to prioritize personal relative status, since the company will do well either way.