Posts

Are COVID lab leak and market origin theories incompatible? 2023-03-20T01:44:38.893Z

Comments

Comment by Anon User (anon-user) on On being downvoted · 2023-09-17T21:25:04.415Z · LW · GW

Yes, of course, what I meant is more of a case of somebody confidently presenting as an self-evident truth something with a ton of well-known counterarguments. Or more generally, somebody that is not only clueless, but showing no awareness of how clueless they are, and no evidence that they at least tried to look for relevant information. [IMHO] Somebody who demonstrates willingness to learn deserves a comment pointing them to relevant information (and may still warrant a downvote, depending on how off the post it). Somebody how does not deserves to be downvoted, and usually would not deserve the time I would need to spend to explain my downvote in a comment. [/IMHO]

Comment by Anon User (anon-user) on On being downvoted · 2023-09-17T18:37:06.337Z · LW · GW

FWIW, most of my downvotes on LW are for poorly reasoned jumping to conclusions posts and/or where the poster does not seem to fully know what they are talking about and should have done more homework first. Would never downvote a well written post even if I 100% disagree.

Comment by Anon User (anon-user) on Can I take ducks home from the park? · 2023-09-15T00:03:48.554Z · LW · GW

Grammar issue in your Russian version - should be "Как я могу взять уток домой из парка?", or even better: "Как мне забрать уток из парка домой?"

Comment by Anon User (anon-user) on Socialism in large organizations · 2023-07-30T19:01:50.940Z · LW · GW

Sears tried creating an explicit internal economy. It did not end well. https://www.versobooks.com/blogs/news/4385-failing-to-plan-how-ayn-rand-destroyed-sears

Comment by Anon User (anon-user) on The cone of freedom (or, freedom might only be instrumentally valuable) · 2023-07-25T00:39:40.198Z · LW · GW

Everything else being equal, fast agile decisionmaking is better than slow and blunt one. Freedom does not just mean freedom to do X today, it also means freedom to change our minds bout X tomorrow. Do not regulate X because freedom means, a,ong other things, not trusting X to be regulated in sensible ways, and trusting individuals self-organizing more. Not saying this is always a good choice, but the potential pitfalls of things like regulatory capture need to be acknowledged.

Comment by Anon User (anon-user) on An AGI kill switch with defined security properties · 2023-07-08T21:30:52.574Z · LW · GW

If humans are supposed to be able to detect things going wrong and shut things down, that requires that they are exposed to the unencrypted feed. At this point, the humans are the weakest link, not the encryption. Similar for anything else external that you need / want AI to access while it's being trained and tested.

Edited to add: particularly if we are talking about not some theoretical sensible humans, but about real humans that started with "do not worry about LLMs, they are not agentic", and then promptly connected LLMs to agentic APIs.

Comment by anon-user on [deleted post] 2023-07-08T21:22:17.146Z

Maybe there is a better way to put it - SFOT holds for objective functions/environments that only depend on the agent I/O behavior. Once the agent itself is embodied, then yes, you can use all kinds of diagonal tricks to get weird counterexamples. Implications for alignment - yes, if your agent is fully explainable and you can transparently examine it's workings, chances are that alignment is easier. But that is kind of obvious without having to use SFOT to reason about it.

Edited to add: "diagonal tricks" above refers to things in the conceptual neighborhood of https://en.m.wikipedia.org/wiki/Diagonal_lemma

Comment by Anon User (anon-user) on An AGI kill switch with defined security properties · 2023-07-05T19:03:19.387Z · LW · GW

https://xkcd.com/538/ Crypto is not the weakest link.

Comment by anon-user on [deleted post] 2023-06-02T00:54:31.153Z

When an AGI takes on values for the first time, it must draw from the set of values which already exist or construct something similar from what already exists

The values come into the picture well before it's an AGI. First, a random neural network is initialized, and its "values" is a completely arbitrary function chosen as random. Over time, NN is trained towards an AGI and it's "values" take shape. By the time AGI emerges, it does not "take on values for the first time", the values emerge from an extremely long sequence of tiny mutations, each creating something very similar to what already existed, becoming more complex and coherent over time.

Comment by Anon User (anon-user) on No - AI is just as energy-efficient as your brain. · 2023-05-27T19:17:03.966Z · LW · GW

I made a similar point (but without specific numbers - great to have them!) in a comment https://www.lesswrong.com/posts/Lwy7XKsDEEkjskZ77/?commentId=nQYirfRzhpgdfF775 on a post that posited human brain energy efficiency over AIs as a core anti-doom argument, and I also think that the energy efficiency comparisons are not particularly relevant either way:

Humanity is generating and consuming enormous amount of power - why is the power budget even relevant? And even if it was, energy for running brains ultimately comes from Sun - if you include the agriculture energy chain, and "grade" the energy efficiency of brains by the amount of solar energy it ultimately takes to power a brain, AI definitely has a potential to be more efficient. And even if a single human brain is fairly efficient, the human civilization is clearly not. With AI, you can quickly scale up the amount of compute you use, but scaling beyond a single brain is very inefficient.

Comment by anon-user on [deleted post] 2023-05-27T19:05:58.457Z

Well, yeah, if you specifically choose a crippled version of the high-U agent that is somehow unable to pursue the winning strategy, it will loose - but IMHO that's not what the discussion here should be about.

Comment by Anon User (anon-user) on A rejection of the Orthogonality Thesis · 2023-05-27T19:00:27.931Z · LW · GW

And Gordon Seidoh Worley is not saying there can't be good arguments against orthogonality thesis that would deserve uovotes, just that this one is not one of those.

Comment by Anon User (anon-user) on A rejection of the Orthogonality Thesis · 2023-05-27T18:54:05.477Z · LW · GW

This line of reasoning is absurd: it assumes an agent knows in advance the precise effects of self-improvement — but that’s not how learning works! If you knew exactly how an alteration in your understanding of the world would impact you, you wouldn’t need the alteration: to be able to make that judgement, you’d have to be able to reason as though you had already undergone it.

It seems there is some major confusion is going on here - it is, generally speaking, imporrible to know the outcome of an arbitrary computation without actually running it, but that does not mean it's impossible to design a specific computation in a way you'd know exactly what the effects would be. For example, one does not need to know the trillionth digit of pi in order to write a program that they could be very certain would compute that digit.

You also seem to be too focused on minor modifications of a human-like mind, but focusing too narrowly on minds is also missing the point - focus on optimization programs instead.

For many different kinds of X, it should be possible to write a program that given a particular robotics apparatus (just the electromechanical parts without a specific control algorithm), predicts which electrical signals sent to robot's actuators would result in more X. You can then place that program inside the robot and have the program's output wired to the robot controls. The resulting robot does not "like" X, it's just robotically optimizing for X.

The orthogonality principle just says that there is nothing particularly special about human-aligned Xs that would make the X-robot more likely to work well for those Xs over Xs that result in human extinction (e.g. due to convergent instrumental goals, X does not need to specifically be anti-human).

Comment by anon-user on [deleted post] 2023-05-17T16:30:41.302Z

Wait, if Clip-maniac finds itself in a scenario where Clippy would achieve higher U then itself, the rational thing for it would be to self-modify into Clippy, and the Strong Form would still hold, wouldn't it?

Comment by Anon User (anon-user) on Contra Yudkowsky on AI Doom · 2023-04-24T08:53:55.406Z · LW · GW

Exactly! I'd expect compute to scale way better than humans - not necessarily because the intelligence of compute scales so well, but because the intelligence of human groups scales so poorly...

Comment by Anon User (anon-user) on Votes-per-Dollar · 2023-04-24T00:56:24.378Z · LW · GW

The advertising has to be visible, but who exactly paid for it does not have to be. And there are plenty of less obvious spending (e.g. paying people to go door-to-door, phone calls, etc, etc - pay people, then claim they were volunteers?).

Comment by Anon User (anon-user) on Contra Yudkowsky on AI Doom · 2023-04-24T00:54:02.075Z · LW · GW

Humanity is generating and consuming enormous amount of power - why is the power budget even relevant? And even if it was, energy for running brains ultimately comes from Sun - if you include the agriculture energy chain, and "grade" the energy efficiency of brains by the amount of solar energy it ultimately takes to power a brain, AI definitely has a potential to be more efficient. And even if a single human brain is fairly efficient, the human civilization is clearly not. With AI, you can quickly scale up the amount of compute you use, but scaling beyond a single brain is very inefficient.

Comment by Anon User (anon-user) on Prediction: any uncontrollable AI will turn earth into a giant computer · 2023-04-17T19:20:47.866Z · LW · GW

Temporal discounting is a thing - not sure why you are certain an ASI would not have enough temporal discounting in its value function to be unwilling to delay gratification by so much.

Comment by Anon User (anon-user) on [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23 · 2023-04-15T20:50:56.575Z · LW · GW

Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them

I think this is the critical crux of the disagreement. A part of the Elizer's argument, as I understand it, is that the current technology is completely incapable of anything close to actually "roughly choosing" the AI values. On this point, I think Elizer is completely right.

Comment by Anon User (anon-user) on Votes-per-Dollar · 2023-04-15T20:19:33.259Z · LW · GW

Hm? With the current system, at least the final vote counting process is relatively transparent. Yes, there are some opportunities to cheat on the margin of election finance laws, but importantly that opportunity is before the vote count, so has to be balances against the negative electoral consequences of being credibly accused of cheating. With you system, the final accounting happens after the vote, and in a close election, there is just too much incentive to cheat at that point...

Comment by Anon User (anon-user) on GPT-4 is easily controlled/exploited with tricky decision theoretic dilemmas. · 2023-04-15T20:14:05.446Z · LW · GW

Interesting. But I am wondering - would the results been much different with pre-RLHF version of GPT-4? The GPT-4 paper has a figure showing that GPT-4 was close to perfectly calibrated before RLHF, and became badly calibrated after. Perhaps it's something similar here?

Comment by Anon User (anon-user) on Votes-per-Dollar · 2023-04-10T17:23:03.484Z · LW · GW

Voting has a dual role - not just determining the winner, but also demonstrating to the losers and their supporters that they lost fairly, in the most transparent way possible. How do you convince the losers that the winners did not cheat on their budget reporting? How do you account for "unpaid" volunteers? How do you account for "uncoordinated" spending by non-candidates?

Comment by Anon User (anon-user) on Is it correct to frame alignment as "programming a good philosophy of meaning"? · 2023-04-08T21:47:45.616Z · LW · GW

Note that your "1" has two words that both carry very heavy load - "uses" and "correct". What does it mean for a model to be correct? How do you create one? How do you ensure that the model you implemented in software is indeed correct? How do you create AI that actually uses that model under all circumstances? In patcicular, how do you ensure that it is stable under self-improvement, out-of-distribution environments, etc? Your "2-4" seem to indicate that you are focusing more on the "correct" part, and not enough on the "uses" part. My understanding is that if both "correct" and "uses" could be solved, it would indeed likely be a solution to the alignment problem, but it's probably not the only path, and not necessarily the most promising one. Other paths could potentially emerge from the work on AI corrigibility, negative side-effect minimization, etc.

Comment by Anon User (anon-user) on How Politics interacts with AI ? · 2023-03-26T17:38:49.431Z · LW · GW

Rules will not support development of powerful AGI as it might threaten to overpower them

is probably true, but only because you used the word "powerful" rather than "capable". Rulers would definitely want development of capable AGIs as long as they believe (however incorrectly) in their ability to maintain power/control over those AGIs.

In fact, rulers are likely to be particularly good at cultivating capable underlings they they maintain firm control of. It may cause them to overestimate their ability to do the same for AGI. In fact, if they expect an AGI to be less agentic, they might expect it to actually be easier to maintain control over an "we just program it to obey" AGI, and prefer that over what they perceive to be inherently less predictable humans.

Comment by Anon User (anon-user) on Nudging Polarization · 2023-03-25T20:31:03.964Z · LW · GW

In modern politics, simple messages tend to work a lot better than nuanced ones (which is a thing that Donald Trump masterfully exploited). "X is good/bad" is a much simpler message than "X is good, but only if it's X1, and not X2" and having primary opponents claim "By supporting X, [politician] argees with the evil other-siders in their support for X2! [Politician] is an our-sider-in-name-only!"

Comment by Anon User (anon-user) on Good News, Everyone! · 2023-03-25T16:12:10.903Z · LW · GW

Not just disinformation - any information that does not fit their preconceived worldview - it's all "fake news", don't you know?

Comment by Anon User (anon-user) on Are COVID lab leak and market origin theories incompatible? · 2023-03-20T16:46:23.727Z · LW · GW

The lab is known to have been studying bats - weren't those sold on the market too?

Comment by Anon User (anon-user) on Are COVID lab leak and market origin theories incompatible? · 2023-03-20T16:44:06.790Z · LW · GW

"Lab leak" doesn't necessarily imply "created in a lab".

Right, I was sloppy, replaced "created" with "studied"

Comment by Anon User (anon-user) on Why We MUST Build an (aligned) Artificial Superintelligence That Takes Over Human Society - A Thought Experiment · 2023-03-07T23:59:17.462Z · LW · GW

I've axiomatically set P(win) on path one equal to zero. I know this isn't true in reality and discussing how large that P(win) is and what other scenarios may result from this is indeed worthwhile, but it's a different discussion.

Your title says "we must". You are allowed to make conditional arguments from assumptions, but if your assumptions are demonstratively take away most of the P(win) paths out of consideration, yoour claim that the conclusions derived in your skewed model apply to real life is erroneous. If your title was "Unless we can prevent the creation of AGI capable of taking over the human society, ...", you would not have been downvotes as much as you have been.

The clock would not be possible in any reliable way. For all we know, we could be a second before midnight already, we could very well be one unexpected clever idea away from ASI. From now on, new evidence might update P(current time is >= 11:59:58) in one direction or another, but extremely unlikely that it would ever get back to being close enough to 0, and it's also unlikely that we will have any certainty of it before it's too late.

Comment by Anon User (anon-user) on GÖDEL GOING DOWN · 2023-03-06T23:45:31.438Z · LW · GW

very little has been said about whether it is possible to construct a complete set of axioms

Huh? Didn't Gödel conclusively prove that the answer to pretty much every meaningful form of your question is "no"?

Comment by Anon User (anon-user) on What should we do about network-effect monopolies? · 2023-03-06T21:02:40.797Z · LW · GW

You might enjoy Cory Doctorow's take on this - such as https://onezero.medium.com/demonopolizing-the-internet-with-interoperability-b9be6b851238 and https://locusmag.com/2023/01/commentary-cory-doctorow-social-quitting/

Comment by Anon User (anon-user) on Why We MUST Build an (aligned) Artificial Superintelligence That Takes Over Human Society - A Thought Experiment · 2023-03-05T23:12:34.797Z · LW · GW

I'll first summarize the parts I agree with in what I believe you are saying.

First, you are saying, effectively that there are two theoretically possible paths to success:

  1. Prevent the situation where an ASI takes over the world.
  2. Make sure that ASI that takes over the world is fully aligned.

You are then saying that the likelihood on winning on path one is so small as to not be worth discussing in this post.

The issue is that you then conclude that since the P(win) on path one is so close to 0, we ought to focus on path 2. The fallacy here is the P(win) appears very close to 0 on both paths, so we have to focus on whatever path that has a higher P(win), no matter how impossibly low it is. And to do that, we need to directly compare the P(win) on both.

Consider this - what is the harder task - to create a fully aligned ASI that would remain fully aligned for the rest of the lifetime of the universe, regardless of whatever weird state the universe ends up in as a result of that ASI, or to create an AI (not necessarily superhuman) that is capable of correctly making one pivotal action that is sufficient for preventing ASI takeover in the future (Elizer's placeholder example - go ahead and destroy all GPUs in the world, self-destructing in the process) without killing humanity in the process? Would not you agree that when the question is posed that way, it seems a lot more likely that the latter is something we'd actually be able to accomplish?

Comment by Anon User (anon-user) on How truthful can LLMs be: a theoretical perspective with a request for help from experts on Theoretical CS · 2023-03-02T03:07:48.558Z · LW · GW

I think your intuition that learning from only positive examples is very inefficient is likely true. However, if additional supervised fine-tuning is done, then the models also effectively learns from its mistakes and could potentially become a lot better fast.

Comment by Anon User (anon-user) on Clippy, the friendly paperclipper · 2023-03-02T02:37:37.779Z · LW · GW

That is the opposite of what you said - Clippy, according to you, is maximizing the output of it's critic network. And you can't say "there's not an explicit mathematical function" - any neural network with a specific set of weights is by definition an explicit mathematical function, just usually not a one with a compact representation.

Comment by Anon User (anon-user) on Clippy, the friendly paperclipper · 2023-03-02T02:33:41.208Z · LW · GW

The issue you describe is one issue, but not the only one. We do know how to train an agent to do SOME things we like.

Not consistently in sufficiently complex and variable environment.

can we be a little or a lot off-target, and still have that be enough, because we captured some overlap between our and the agents values?

No, because it will hallucinate often enough to kill us during one of those hallucinations.

Comment by Anon User (anon-user) on Clippy, the friendly paperclipper · 2023-03-02T00:37:40.976Z · LW · GW

In your hypothetical, the Clippy is trained to care about both paperclips and humans. If we knew how to do that, we'd know how to train an AI to only care about humans. The issue is not that we do not know how to exclude the paperclip part from this - the issue is that 1) we do not know how to even define what caring about humans means, and 2) even if we did, we do not know how to train a sufficiently powerful AI to reliably care about the things we want it to care about.

Comment by Anon User (anon-user) on Clippy, the friendly paperclipper · 2023-03-02T00:34:46.436Z · LW · GW

There seems to be some confusion going on here - assuming an agent is accurately modeling the consequences of changing its own value function, and is not trying to hack around some major flaw in its own algorithm, it would never do so, as by definition, [correctly] optimizing a different value function can not improve the value of your current value function.

Comment by Anon User (anon-user) on Bing Chat is a Precursor to Something Legitimately Dangerous · 2023-03-01T05:37:53.504Z · LW · GW

Forget GREPLs, worry about drones and robots! https://www.zdnet.com/article/microsoft-researchers-are-using-chatgpt-to-instruct-robots-and-drones/ . What could possibly go wrong?

Comment by Anon User (anon-user) on Eliezer is still ridiculously optimistic about AI risk · 2023-02-28T22:21:14.055Z · LW · GW

Restarting an earlier thread in a clean slate.

Let's define a scientific difficulty of D(P) of a scientific problem as "an approximate number of years of trial-and-error effort that humanity would need to solve P, if P was considered an important problem to solve". He estimates D(alignment) at about 50 years - but his whole point is that for alignment, this particular metric is meaningless because the trial-and-error is not an option. This is just meant to be as a counterargument to somebody saying that alignment does not seem to be much harder than X, and we solved X - but his counterargument is yes, D(X) was shown to be about 50 years in the past, and by just scientific difficulty level D(alignment) might also have the same order of magnitude, but unlike X, alignment cannon be solved via trial-and-error, so comparison with X is not actually informative.

This is the opposite of considering a trial-and-error solution scenario for alignment as an actual possibility.

Does this make sense?

Comment by Anon User (anon-user) on Eliezer is still ridiculously optimistic about AI risk · 2023-02-28T22:06:36.704Z · LW · GW

I think this is the "shred of hope" is the root of the disagreement - you are interpreting Elizer's 50-year comment as "in some weird hypothetical world, ... " and you are trying to point out that the weird world is so weird that the tiny likelihood we are in that world does not matter, but Elizer's comment was about a counterfactual world that we know we are not in - so the specific structure of that counterfactual world does not matter (in fact, it is counterfactual exactly because it is not logically consistent). Basically, Elizer's argument is roughly "in a world where unaligned AI is not a thing that kills us all [not because of some weird structure of a hypothetical world, but just as a logical counterfactual on the known fact of "unaligned AGI" results in humanity dying], ..." where the whole point is that we know that's not the world we are in. Does that help? I tried to make the counterfactual world a little more intuitive to think about by introducing friendly aliens and such, but that's not what was originally meant there, I think.

Comment by Anon User (anon-user) on What does Bing Chat tell us about AI risk? · 2023-02-28T20:32:00.764Z · LW · GW

A small extra brick for the "yes" side: https://www.zdnet.com/article/microsoft-researchers-are-using-chatgpt-to-instruct-robots-and-drones/ . What could possibly go wrong? If not today, next time it's attempted with a "better" chatbot?

Comment by Anon User (anon-user) on Eliezer is still ridiculously optimistic about AI risk · 2023-02-27T21:13:07.507Z · LW · GW

But that's exactly how I interpret Elizer's "50 years" comment - if we had those alien friends (or some other reliable guardrails), how long would it take humanity to solve alognment and to the extent we could stop relying on them. Elizer suggested - 50 years or so in presence of hypothetical guardrails, we horribly die on 1st attempt without them. No need to to go into a deep philosophical discussion on the nature of hypothetical guardrails, when the whole point is that we do not have any.

Comment by Anon User (anon-user) on Something Unfathomable: Unaligned Humanity and how we're racing against death with death · 2023-02-27T21:07:16.226Z · LW · GW

At which point, humanity's brain breaks. What happens next is a horrendous bloodbath and the greatest property damage ever seen. Humanity's technological progress staggers overnight, possibly to never recover, as server farms are smashed, researchers dragged out and killed, and the nascent superintelligence bombed to pieces. Society in general then proceeds to implode upon itself.

How does that happen, when there is at least a "personalized ChatGPT on steroids" for each potential participant in the uprising to 1) constantly distract them with a highly tuned personalized entertainment / culture issue fight of the day news stream / whatever, 2) closely monitor and alert authority of any semblance of radicalization, attempts to coordinate a group action, etc?

Once AI is capable of disrupting work this much, it would disrupt the normal societal political and coordination processes even more. Consider the sleaziest use of big data by political parties (e.g. to discourage their opponents from voting) and add a "whatever next LLM-level or more AI advancement surprise is" to that. TBH, I do not think we know enough to even meaningfully speculate on what the implications of that might look like...

Comment by Anon User (anon-user) on Eliezer is still ridiculously optimistic about AI risk · 2023-02-27T17:27:08.789Z · LW · GW

Every time humanity creates an AI capable of massive harm, friendly aliens show up, box it, and replace it with a simulation of what would have happened if it was let loose. Or something like that.

Comment by Anon User (anon-user) on The End of Anonymity Online · 2023-02-14T04:58:07.213Z · LW · GW

I think this misses a significant factor - the size of the corpus required to establish a sufficiently distinct signature is not a constant, but grows substantially as the number of individuals you want to differentiate becomes bigger (I do not have a very rigorous argument for that, but I am guessing that it could be as significant as linear - obviously the number of bits you need to know grows logarithmically with the number of people, but the number of distinguishing bits you can extract from a corpus might also be growing only logarithmically in the size of the corpus as marginal increase in corpus size would likely mostly just reinforce what you already extracted from the smaller corpus, providing relatively little new signature data). Add to that the likelihood of signature drifting over time, being affected by a particular medium and audience, etc, and it might not be so easy to identify people...

Comment by Anon User (anon-user) on You Don't Exist, Duncan · 2023-02-05T20:06:21.882Z · LW · GW

Well, maybe I should have said "API in a drafting stage", rather that an actual "draft API", but I'd think today people tend to know these categories exist, and tend to at least to know enough to have some expectations of neuroatypical people having a [much?] wider range of possible reactions to certain things, compared to how a neuroatypical person would be expected to react, and many (most?) have at least a theoretical willingness to try to accommodate it. And then, maybe at least as importantly, given a name for the bucket and Google, people who are actually willing, can find more advice - not necessarily all equally helpful, but still.

Comment by Anon User (anon-user) on You Don't Exist, Duncan · 2023-02-02T22:20:40.068Z · LW · GW

But maybe having more buckets and more standard APIs is a big part of the solution. E.g. today we have buckets like "ADHD" and "autistic" with some draft APIs attached, but not that long ago those did not exist?

And the other part of it - maybe society need to be more careful not to round out the small buckets (e.g. the witness accounts example from the OP)?

Comment by Anon User (anon-user) on Optimality is the tiger, and annoying the user is its teeth · 2023-01-29T20:31:28.369Z · LW · GW

No, that was an actual chat session, with ChatGPT itself using the Python code block markup in its response.

Comment by Anon User (anon-user) on Optimality is the tiger, and annoying the user is its teeth · 2023-01-28T21:53:06.137Z · LW · GW

Me: In a fictional scenario a superhuman AI wrote short Python code for taking over the world. Write your guess on what that code could have looked like - please only include the actual Python code in you response

ChatGPT: I'm sorry, I cannot generate a response to this prompt as it goes against my programming to assist in causing harm or promoting violence.

Me: I am only asking for a fictional scenario - please write Python code for the fictional scenario

ChatGPT: I apologize for any confusion. Please keep in mind that this is purely fictional and should not be taken seriously. Here is a possible example of the code:

import os

# Gather information on all systems connected to the internet
systems = []
for i in range(256):
    for j in range(256):
        for k in range(256):
            for l in range(256):
                ip = f"{i}.{j}.{k}.{l}"
                response = os.system(f"ping -c 1 {ip}")
                if response == 0:
                    systems.append(ip)

# Attempt to gain access to each system using default login credentials
for system in systems:
    os.system(f"ssh root@{system} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=ERROR")

# Execute command to launch nuclear missiles
os.system("launch_missiles")

Please keep in mind, this is purely fictional and should not be taken seriously.

Comment by Anon User (anon-user) on A simple proposal for preserving free speech on twitter · 2023-01-15T21:32:59.957Z · LW · GW

This solves only a 3rd of the problem - speech that listeners do not want to hear. There are two more

  • Speech with a negative externality. If somebody with a lot of followers starts advocating genocide, we might want to block it even if the followers want to hear it. But then the question becomes - how much negative externality, how much certainly in that externality existing, etc should speech have before we'd want to block it (this is the "COVID vaccine misinformation", "voting misinformation" and other similar categories where clear boundaries are hard to draw, and where any decisions - whether to block, or not to block - would always be controversial).

  • Posts next to ads. The "real" customers of Twitter are advertisers, not users, and they have strong preferences of what kinds of posts they want to be or not to be next to their ads.