Posts

J's Shortform 2024-08-01T00:53:18.852Z

Comments

Comment by J (j-5) on J's Shortform · 2024-08-11T20:25:04.143Z · LW · GW

The karma system here seems to bully people into conforming to popular positions and philosophies. I don't see it having a positive impact on rationalism, or reducing bias. And it seems to create an echo chamber. This may sound like a cheap shot, but it's not: I've observed more consistent objectivity on certain Reddit forums (including some that have nothing to do with philosophy or science).

Comment by J (j-5) on J's Shortform · 2024-08-10T20:20:42.658Z · LW · GW

You'd have to ask a moral realist, but I think they would say Hitler caused the Holocaust, so Hitler is bad.

Comment by J (j-5) on J's Shortform · 2024-08-10T20:07:24.916Z · LW · GW

On moral realism: physics causes all realities, so if we're saying some phenomenon is objectively bad, then we're saying physics is objectively bad. But saying physics is objectively bad is meaningless.

Comment by J (j-5) on J's Shortform · 2024-08-09T02:37:18.256Z · LW · GW

The use of hidden strategies that afford an advantage could make political candidates less likely to represent the will of the people once elected, if those strategies helped them win while appealing to the desires of the people less than they otherwise would have.

This problem could be mitigated by requiring political campaigns to document all internal communications that transpire as part of the campaign and to disclose them at the conclusion of the election. This would both a) raise awareness among voters about the strategies candidates are using and b) share those strategies with future candidates, thereby eliminating the advantage they afford. The premise here is that candidates should win or lose primarily (or preferably, entirely) based on how well their policy positions represent the voters, how much faith voters have in their commitment to those positions, and how well voters believe they would enact those policies.

(I realize donor money is the other major problem corrupting politics, and there may be different solutions for that.)

Comment by J (j-5) on What Other Lines of Work are Safe from AI Automation? · 2024-08-06T19:14:20.711Z · LW · GW

This was intended as agreement with the post it's replying to.

Comment by J (j-5) on What Other Lines of Work are Safe from AI Automation? · 2024-08-03T23:00:39.307Z · LW · GW

True... I don't know why I used the word 'only' there, actually. Bad habit of using hyperbole, I guess. There are certainly many unknown-unknown threats that inspire the idea of a 'singularity'. Every step humanity takes to develop AI now feels like a huge leap of faith.

Personally, I'm optimistic, or at least unworried, but that's probably partly because I know I'm going to die before things could get to a point where, e.g., humans are in slave camps or some other nightmarish scenario transpires. But I just don't think a superintelligence would choose a path that humans would clearly resist when it could simply incentivize us to voluntarily do what it wants. Humans are far easier to deal with when they're duped into doing something they think they want to do, and it shouldn't be that hard for a superintelligence to figure out how to manipulate us that way. Using force or fear to control humans is probably the least efficient option.

I also have little doubt that corporations and state actors are already exploring how to use GPT-type AI for propaganda and other kinds of social and psychological manipulation. That's what marketing is, after all, and algorithms designed to manipulate our behavior already drive the internet.

Comment by J (j-5) on J's Shortform · 2024-08-03T21:40:46.315Z · LW · GW

I've been thinking about how most (maybe all) thought and intelligence are simulation. Whether we're performing a mathematical calculation, planning our day, or betting on a basketball game, it's all the same mental exercise of simulating reality. This might mean the ultimate potential of AI is the ability to simulate reality at higher and higher resolutions.

As an aside, all scientific knowledge, and maybe every intellectual endeavor, also seems to be simulation: any attempt to understand or explain our reality is simulation. Our brains are reality simulators, and the essence of intelligence is simulating potential realities.

When people muse about reality being someone's dream, that might not be terribly far from the true nature of our universe.

Comment by J (j-5) on J's Shortform · 2024-08-01T00:53:18.992Z · LW · GW

The more I think about AI, the more it seems like the holy grail of capitalism. If AI agents can themselves be both producers and consumers in a society, then they can be arbitrarily multiplied to expand an economy, and they cost far less than humans in labor, space, resources, healthcare, housing, and so on. At the extreme, this seems to solve every conceivable economic problem of modern societies, since it could create arbitrarily large tax revenue without the need to scale government services per AI agent the way they must be scaled per human.
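
A minimal sketch of that fiscal asymmetry, with entirely made-up numbers (the output, tax rate, and services-cost figures are illustrative assumptions, not data): an agent that produces taxable output but consumes almost no per-capita government services contributes far more net revenue than a human producing the same output.

```python
# Toy model: net fiscal contribution of an economic agent, i.e. the tax
# revenue it generates minus the government services it consumes.
# All numbers are illustrative assumptions, not real figures.

def net_contribution(output: float, tax_rate: float, services_cost: float) -> float:
    """Tax revenue an agent generates minus the services it consumes."""
    return output * tax_rate - services_cost

# A human plausibly needs per-capita services (healthcare, housing,
# education, etc.); an AI agent's marginal service cost is near zero.
human = net_contribution(output=60_000, tax_rate=0.25, services_cost=12_000)
agent = net_contribution(output=60_000, tax_rate=0.25, services_cost=500)

print(f"human net fiscal contribution: {human:,.0f}")  # 3,000
print(f"agent net fiscal contribution: {agent:,.0f}")  # 14,500
```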

I guess it's possible that, long-term, AI could obsolete money and capitalism, but presumably there would be a transitional period where we experience something like the aforementioned super-capitalism.

Comment by J (j-5) on What Other Lines of Work are Safe from AI Automation? · 2024-07-13T16:33:38.809Z · LW · GW

Hmmm, I guess I don't really use the terms 'investing' and 'trading' interchangeably.

Comment by J (j-5) on What Other Lines of Work are Safe from AI Automation? · 2024-07-11T20:56:29.804Z · LW · GW

Humans are the most destructive entities on Earth, and my only fear with AI is that it ends up being too human.

Comment by J (j-5) on What Other Lines of Work are Safe from AI Automation? · 2024-07-11T20:20:38.837Z · LW · GW

For arbitrary time horizons nothing is 'safe', but that just means our economy shifts to a new model; it doesn't mean the outcome is bad for humans. I don't know if it makes sense to worry about which part of the ship will become submerged first, because everyone will rush for the other parts and those jobs will become too competitive. It might be better to worry about how to pressure the political system into proactively rearchitecting our economy. UBI and/or a shorter workweek are inevitable, and the sooner we sort out how to implement them the better.

For the sake of understanding the roadmap of the autopocalypse, I think you can consider these factors:

The most obvious: can the work be entirely computer-based? A corollary is whether an unskilled human assisted by an AI can replace the skilled human (e.g., healthcare roles that combine knowledge work and physical work).

Regulatory environment. Even after it becomes possible for software to replace a human worker, licensing requirements and other laws may protect human workers for a while beyond that point.

Eventually, machines will transcend software and perform every physical task a human can now perform. Mostly that won't mean anthropomorphic robots, but rather further automation of machines that humans currently operate, like cars and drones. Anthropomorphic robots will appear on the job furthest into the future.

But again, everyone will be racing away from these jobs (and toward the remaining 'safe' ones) at the same time. Financial investment may be the safest source of income. Owning your own business might even benefit from the autopocalypse, but at the extreme you will just be an investor in a company run by machines.

TLDR:

So the best advice is probably to build an investment portfolio (and the knowledge to do that well). If you own the companies, it doesn't matter who the workers are.

Comment by J (j-5) on Book review: Deep Utopia · 2024-05-31T17:13:15.325Z · LW · GW

Thanks! Just read some summaries of Parfit. Do you know of any literature that addresses this issue in the context of a) impacts on other species, or b) using artificial minds as the additional population? I assume total utilitarianism presupposes arbitrarily growing physical space for populations to expand into, and would not apply to finite spaces or resources (I think I recall Bostrom addressing that).

Reading up on Parfit also made me realize that Deep Utopia really does have prerequisites, and you were right that it's probably more readily understood by those with a philosophy background. I didn't really understand what he was saying about utilitarianism until I read about Parfit.

Comment by J (j-5) on Book review: Deep Utopia · 2024-05-31T01:08:13.569Z · LW · GW

This is a major theme in Star Trek: The Next Generation, where they refer to it as the Prime Directive. It always bothered me when they violated the Prime Directive and intervened, because it seemed like an act of moral imperialism. But I guess that's just my morals (an objection to moral imperialism) conflicting with theirs.

A human monoculture seems bad for many reasons analogous to the ones that make an agricultural monoculture bad, though. Cultural diversity and heterogeneity should make our species more innovative and more robust to potential future threats. A culturally heterogeneous world would also seem harder for a central entity to gain control of. Isn't this largely why the British, Spanish, and Roman empires declined?

Comment by J (j-5) on Book review: Deep Utopia · 2024-05-31T00:52:39.313Z · LW · GW

I've only skimmed it, but so far I'm surprised Bostrom didn't discuss a possible future where AI 'agents' act as both economic producers and consumers. Human population growth would seem to be bad in a world where AI can accommodate human decline (i.e., protecting modern economies from the loss of consumers and producers), since finite resources are a pie divided into smaller or larger slices depending on the number of humans they're allocated to, and larger slices would seem to increase average well-being. Maybe he addressed this and I missed it in my skim.