xpostah's Shortform
post by samuelshadrach (xpostah) · 2025-01-01T13:34:25.484Z · LW · GW · 67 comments
Comments sorted by top scores.
comment by samuelshadrach (xpostah) · 2025-04-06T13:41:16.136Z · LW(p) · GW(p)
Has anyone on lesswrong thought about starting a SecureDrop server?
For example to protect whistleblowers of ASI orgs.
comment by samuelshadrach (xpostah) · 2025-04-05T17:48:12.145Z · LW(p) · GW(p)
Lesswrong is clearly no longer the right forum for me to get engagement on topics of my interest. Seems mostly focussed on AI risk.
On which forums do people who grew up on the cypherpunks mailing list hang out today? Apart from the cryptocurrency space.
↑ comment by cubefox · 2025-04-05T22:14:11.337Z · LW(p) · GW(p)
There is still the option on the front page to filter out the AI tag completely.
↑ comment by samuelshadrach (xpostah) · 2025-04-06T06:39:24.400Z · LW(p) · GW(p)
Yes, but then it becomes a forum-within-a-forum kind of thing. You need a critical mass of users who all agree to filter out the AI tag and don't have to preface their every post with "I don't buy your short timelines worldview, I am here to discuss something different".
Building critical mass is difficult unless the forum is conducive to it. There's ultimately only one upvote button and one front page, so the forum will get taken over by the top few topics that its members are paying attention to.
I don't think there's anything wrong with a forum that's mostly focussed on AI xrisk and transhumanist stuff. Better to do one thing well than half ass ten things. But it also means I may need to go elsewhere.
comment by samuelshadrach (xpostah) · 2025-03-11T12:04:20.696Z · LW(p) · GW(p)
Search engine for books
http://booksearch.samuelshadrach.com
Aimed at researchers
Technical details (you can skip this if you want):
Dataset size: libgen 65 TB, (of which) unique english epubs 6 TB, (of which) plaintext 300 GB, (from which) embeddings 2 TB, (hosted on) 256+32 GB CPU RAM
Did not do LLM inference after the embedding search step, because human researchers are still smarter than LLMs as of 2025-03. This tool is meant for increasing the quality of deep research, not for saving research time.
Main difficulty faced during the project: disk throughput is a bottleneck, and popular languages like nodejs and python tend to leak memory when dealing with large datasets. Most of my repo is in bash and perl. Scaling this project up further will require a way to increase disk throughput beyond what mdadm on a single machine allows. Increased funds would also have helped me complete this project sooner. It took maybe 6 months part-time; it could have been less.
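For readers curious about the mechanics: the search step is essentially brute-force cosine similarity over stored vectors. Here's a minimal sketch assuming 1536-dim float32 OpenAI embeddings in a flat binary file (the file name and layout here are hypothetical, not my actual repo, which is in bash and perl):

```python
# Brute-force embedding search sketch. Assumes embeddings.f32 holds
# row-major float32 vectors, one 1536-dim vector per text chunk.
import numpy as np

DIM = 1536  # text-embedding-3-small output size

# Memory-map so only the pages actually scanned are read from disk;
# the full matrix never needs to fit in RAM. This full scan is exactly
# where disk throughput becomes the bottleneck.
emb = np.memmap("embeddings.f32", dtype=np.float32, mode="r").reshape(-1, DIM)

def top_k(query: np.ndarray, k: int = 10) -> list[tuple[int, float]]:
    """Return (chunk_index, cosine_similarity) for the k best chunks."""
    q = query / np.linalg.norm(query)
    scores = (emb @ q) / np.linalg.norm(emb, axis=1)
    best = np.argpartition(scores, -k)[-k:]
    return sorted(((int(i), float(scores[i])) for i in best), key=lambda t: -t[1])
```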
↑ comment by cubefox · 2025-03-11T13:33:28.724Z · LW(p) · GW(p)
Error: TypeError: NetworkError when attempting to fetch resource.
↑ comment by samuelshadrach (xpostah) · 2025-03-11T14:20:31.836Z · LW(p) · GW(p)
use http not https
↑ comment by cubefox · 2025-03-11T15:47:50.738Z · LW(p) · GW(p)
Okay, that works in Firefox if I change it manually. Though the server seems to be configured to automatically redirect to HTTPS. Chrome doesn't let me switch to HTTP.
↑ comment by samuelshadrach (xpostah) · 2025-03-12T08:42:55.989Z · LW(p) · GW(p)
Thanks for your patience. I'd be happy to receive any feedback. Negative feedback especially.
↑ comment by cubefox · 2025-03-12T09:21:41.807Z · LW(p) · GW(p)
I see you fixed the https issue. I think the resulting text snippets are reasonably related to the input question, though not overly so. Google search often answers questions more directly with quotes (from websites, not from books), though that may be too ambitious to match for a small project. Other than that, the first column could be improved with relevant metadata such as the source title. Perhaps the snippets in the second column could be trimmed to whole sentences if it doesn't impact the snippet length too much. In general, I believe snippets currently do not show line breaks present in the source.
↑ comment by samuelshadrach (xpostah) · 2025-03-13T15:22:50.508Z · LW(p) · GW(p)
Thanks for feedback.
I’ll probably do the title and trim the snippets.
One way of getting a quote would be to do LLM inference and generate it from the text chunk. Would this help?
↑ comment by cubefox · 2025-03-13T19:26:18.206Z · LW(p) · GW(p)
I think not, because in my test the snippet didn't really contain such a quote that would have answered the question directly.
↑ comment by samuelshadrach (xpostah) · 2025-03-17T08:09:50.327Z · LW(p) · GW(p)
Can you send the query? Also can you try typing the query twice into the textbox? I'm using openai text-embedding-3-small, which seems to sometimes work better if you type the query twice. Another thing you can try is retry the query every 30 minutes. I'm cycling subsets of the data every 30 minutes as I can't afford to host the entire data at once.
↑ comment by cubefox · 2025-03-19T14:32:32.439Z · LW(p) · GW(p)
I think my previous questions were just too hard; it does work okay on simpler questions. Though then another question is whether text embeddings improve over keyword search or just an LLM. They seem to be some middle ground between Google and ChatGPT.
Regarding data subsets: Recently there were some announcements of more efficient embedding models. Though I don't know how their relevant parameters compare to that OpenAI embedding model.
↑ comment by samuelshadrach (xpostah) · 2025-03-19T15:08:21.592Z · LW(p) · GW(p)
Cool!
Useful information that you’d still prefer using ChatGPT over this. Is that true even when you’re looking for book recommendations specifically? If so yeah that means I failed at my goal tbh. Just wanna know.
Since I'm spending my personal funds I can't afford to use the best embeddings on this dataset. For example text-embedding-3-large is ~7x more expensive for generating embeddings and slightly better quality.
The other cost is hosting cost, for which I don’t see major differences between the models. OpenAI gives 1536 float32 dims per 1000 char chunk so around 6 KB embeddings per 1 KB plaintext. All the other models are roughly the same. I could put in some effort and quantise the embeddings, will update if I do it.
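For concreteness, here's the kind of quantisation I have in mind: a generic symmetric int8 scheme (not something the site currently does) that shrinks each 6 KB float32 vector to about 1.5 KB plus a scale factor, at a small cost in retrieval accuracy:

```python
import numpy as np

def quantise_int8(vecs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-vector symmetric int8 quantisation: ~4x smaller than float32."""
    scale = np.abs(vecs).max(axis=1, keepdims=True) / 127.0
    return np.round(vecs / scale).astype(np.int8), scale.astype(np.float32)

def dequantise(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Approximate reconstruction for similarity scoring."""
    return q.astype(np.float32) * scale
```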
↑ comment by cubefox · 2025-03-19T15:30:29.357Z · LW(p) · GW(p)
I think in some cases an embedding approach produces better results than either a LLM or a simple keyword search, but I'm not sure how often. For a keyword search you have to know the "relevant" keywords in advance, whereas embeddings are a bit more forgiving. Though not as forgiving as LLMs. Which on the other hand can't give you the sources and they may make things up, especially on information that doesn't occur very often in the source data.
↑ comment by samuelshadrach (xpostah) · 2025-03-22T09:45:46.046Z · LW(p) · GW(p)
Got it. As of today a common setup is to let the LLM query an embedding database multiple times (or let it do Google searches; Google search probably has an embedding database as a significant component).
Self-learning seems like a missing piece. Once the LLM gets some content from the embedding database, performs some reasoning and reaches a novel conclusion, there's no way to preserve this novel conclusion long-term.
When smart humans use Google we also keep updating our own beliefs in response to our searches.
P.S. I chose not to build the whole LLM + embedding search setup because I intended this tool for deep research rather than quick queries. For deep research I’m assuming it’s still better for the human researcher to go read all the original sources and spend time thinking about them. Am I right?
↑ comment by samuelshadrach (xpostah) · 2025-03-12T08:42:21.629Z · LW(p) · GW(p)
Update: HTTPS should work now
comment by samuelshadrach (xpostah) · 2025-02-25T16:26:24.370Z · LW(p) · GW(p)
Human genetic engineering targeting IQ as proposed by GeneSmith [LW · GW] is likely to lead to an arms race between competing individuals and groups (such as nation states).
- Arms races can destabilise existing power balances such as nuclear MAD
- Which traits people choose to genetically engineer in offspring may depend on what's good for winning the race rather than what's long-term optimal in any sense.
- If maintaining lead time against your opponent matters, there are incentives to bribe, persuade or even coerce people to bring genetically edited offspring to term.
- It may (or may not) be possible to engineer traits that are politically important, such as superhuman ability to tell lies, superhuman ability to detect lies, superhuman ability to persuade others, superhuman ability to detect others true intentions, etc.
- It may (or may not) be possible to engineer cognitive enhancements adjacent to IQ such as working memory, executive function, curiosity, truth-seeking, ability to experience love or trust, etc.
- It may (or may not) be possible to engineer cognitive traits that have implications for which political values you will find appealing. For instance affective empathy, respect for authority, introversion versus extroversion, inclination towards people versus inclination towards things, etc.
I'm spitballing here, I haven't yet studied genomic literature on which of these we know versus don't know the edits for. But also, we might end up investing money (trillions of dollars?) to find edits we don't know about today.
Has anyone written about this?
I know people such as Robin Hanson have written about arms races between digital minds. Automated R&D using AI is already likely to be used in an arms race manner.
I haven't seen as much writing on arms races between genetically edited human brains though. Hence I'm asking.
↑ comment by cubefox · 2025-02-26T07:32:53.602Z · LW(p) · GW(p)
Standard objection: Genetic engineering takes a lot of time till it has any effect. A baby doesn't develop into an adult overnight. So it will almost certainly not matter relative to the rapid pace of AI development.
↑ comment by samuelshadrach (xpostah) · 2025-02-26T09:28:00.980Z · LW(p) · GW(p)
I agree my point is less important if we get ASI by 2030, compared to if we don’t get ASI.
That being said, the arms race can develop over a timespan of years, not decades. The first 6-year-old superhumans will prompt people to create the next generation of superhumans, and within 10-15 years we will have children from multiple generations, where the younger generations have edits with stronger effect sizes. Once we can see the effects on these multiple generations, people might go at max pace.
↑ comment by samuelshadrach (xpostah) · 2025-02-25T16:32:44.827Z · LW(p) · GW(p)
PSA
Popularising human genetic engineering is also by default going to popularise lots of neighbouring ideas, not just the idea itself. If you are attracting attention to this idea, it may be useful for you to be aware of this.
The example of this that has already played out is popularising "ASI is dangerous" also popularises "ASI is powerful hence we should build it".
↑ comment by Viliam · 2025-03-04T13:54:00.324Z · LW(p) · GW(p)
If you convince your enemies that IQ is a myth, they won't be concerned about your genetically engineered high IQ babies.
↑ comment by samuelshadrach (xpostah) · 2025-03-05T09:32:22.952Z · LW(p) · GW(p)
Superhumans that are actually better than you at making money will eventually be obvious. Yes, there may be some lead time obtainable before everyone understands, but I expect it will only be a few years at maximum.
↑ comment by samuelshadrach (xpostah) · 2025-02-25T16:48:29.777Z · LW(p) · GW(p)
P.S. Also we don't know the end state of this race. +5 SD humans aren't necessarily the peak, it's possible these humans further do research on more edits.
This is unlikely to be a carefully controlled experiment; more likely it will be nation states moving at maximum pace to produce more babies so that they control more of the world when a new equilibrium is reached. And we don't know when, if ever, this equilibrium will be hit.
comment by samuelshadrach (xpostah) · 2025-04-24T15:34:03.183Z · LW(p) · GW(p)
If a new AI model comes out that's better than the previous one and it doesn't shorten your timelines, that likely means either your current or your previous timelines were inaccurate.
↑ comment by samuelshadrach (xpostah) · 2025-04-24T15:58:48.029Z · LW(p) · GW(p)
Here's a simplified example for people who have never traded in the stock market. We have a biased coin with an 80% probability of heads. What's the probability of tossing the coin 3 times and getting 3 heads? 51.2%. Given the first toss was heads, what's the probability that the other two tosses are also heads? 64%.
Each coin toss is analogous to whether the next model follows or does not follow scaling laws.
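To make the update explicit, here's the arithmetic as a few lines of Python (the 80% figure is the hypothetical from the coin example, not a real estimate of anything):

```python
p = 0.8  # hypothetical P(a given model follows scaling laws)

# Before any releases: P(three successive models all follow scaling laws)
print(p ** 3)  # ~0.512

# After observing one model that follows them, the same event is more
# likely, so your timelines should shorten accordingly:
print(p ** 2)  # ~0.64
```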
↑ comment by Viliam · 2025-04-25T07:04:21.010Z · LW(p) · GW(p)
With a coin, the options are "heads" and "tails", so "heads" moves you in one direction.
With LLMs, the options are "worse than expected", "just as expected", "better than expected", so "just as expected" does not have to move you in a specific direction.
↑ comment by samuelshadrach (xpostah) · 2025-04-27T16:48:51.078Z · LW(p) · GW(p)
I made a reply. You're referring to situation b.
↑ comment by Phiwip · 2025-04-24T18:50:03.939Z · LW(p) · GW(p)
I don't think this analogy works on multiple levels. As far as I know, there isn't some sort of known probability that scaling laws will continue to be followed as new models are released. While it is true that a new model continuing to follow scaling laws is increased evidence in favor of future models continuing to follow scaling laws, thus shortening timelines, it's not really clear how much evidence it would be.
This is important because, unlike a coin flip, there are a lot of other details regarding a new model release that could plausibly affect someone's timelines. A model's capabilities are complex, human reactions to them likely more so, and that isn't covered in a yes/no description of if it's better than the previous one or follows scaling laws.
Also, following your analogy would differ from the original comment since it moves to whether the new AI model follows scaling laws instead of just whether the new AI model is better than the previous one (It seems to me that there could be a model that is better than the previous one yet still markedly underperforms compared to what would be expected from scaling laws).
If there's any obvious mistakes I'm making here I'd love to know, I'm still pretty new to the space.
↑ comment by samuelshadrach (xpostah) · 2025-04-27T16:50:19.416Z · LW(p) · GW(p)
I've made a reply formalising this.
↑ comment by samuelshadrach (xpostah) · 2025-04-27T16:48:28.936Z · LW(p) · GW(p)
Update based on the replies:
I basically see this as a Markov process.
X(t+1) = P(x(t+1) | x(t), x(t-1), x(t-2), ...) = P(x(t+1) | x(t)) = F(x(t))
where x(t) is a value sampled from the distribution X(t), for all t.
In plain English, given the last value you get a probability distribution for the next value.
In the AI example: Given x(2025), estimate probability distribution X(2030) where x is the AI capability level.
Possibilities
a) x(t+1) value is determined by x(t) value. There is no randomness. No new information is learned from x(t).
b) The distribution X(t+1) is conditional on the value of x(t). Learning which value x(t) was sampled from the distribution X(t) gives you new information. However, you happened to sample a value such that P(x(t+1) | x(t), x(t-1), x(t-2), ...) = P(x(t+1) | x(t-1), x(t-2), ...). You got lucky, and the value sampled leaves the distribution unchanged.
c) You learned new information and the probability distribution also changed.
a is possible but seems to imply overconfidence to me.
b is possible but seems to imply extraordinary luck to me, especially if it's happening multiple times.
c seems like the most likely situation to me.
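A toy numeric illustration of the difference between cases b and c, with made-up numbers purely to show the mechanics:

```python
import numpy as np

# T[i, j] = P(x(t+1) = j | x(t) = i), over two capability levels.
T = np.array([[0.5, 0.5],
              [0.2, 0.8]])
prior = np.array([0.6, 0.4])  # belief over x(t) before observing it

marginal_next = prior @ T     # distribution X(t+1) before seeing x(t)
print(marginal_next)          # [0.38 0.62]

# Case c: observing x(t) = 1 shifts the next-step distribution.
print(T[1])                   # [0.2 0.8]

# Case b would require the observed row of T to equal marginal_next --
# an observation that happens to leave the distribution unchanged.
```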
↑ comment by shawnghu · 2025-04-25T01:46:26.601Z · LW(p) · GW(p)
Another way of operationalizing the objections to your argument is: what is the analogue to the event "flips heads"? If the predicate used is "conditional on AI models achieving power level X, what is the probability of Y event?" and the new model is below level X, by construction we have gained 0 bits of information about this.
Obviously this example is a little contrived, but not that contrived, and trying to figure out what fair predicates are to register will result in more objections to your original statement.
↑ comment by samuelshadrach (xpostah) · 2025-04-27T16:49:56.831Z · LW(p) · GW(p)
I've made a reply formalising this.
comment by samuelshadrach (xpostah) · 2025-04-22T08:30:58.049Z · LW(p) · GW(p)
Suppose you are trying to figure out a function U(x, y, z | a, b, c) where x, y, z are all scalar values and a, b, c are all constants.
If you knew the function's value at a few points, you could figure out good approximations of this function. Let's say you knew
U(x,y, a=0) = x
U(x,y, a=1) = x
U(x,y, a=2) = y
U(x,y, a=3) = y
You could now guess U(x,y) = x if a < 1.5, y if a > 1.5.
You will not be able to get a good approximation if you do not know the value at enough points.
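As a minimal sketch of that extrapolation step (purely illustrative):

```python
# Known values of U at a few settings of the constant a, from above.
known = {0: "x", 1: "x", 2: "y", 3: "y"}

def guess_u(a: float) -> str:
    """Nearest-neighbour extrapolation from the known points."""
    nearest = min(known, key=lambda k: abs(k - a))
    return known[nearest]

print(guess_u(1.2))  # "x" -- below the apparent threshold at a = 1.5
print(guess_u(2.7))  # "y"
```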
This is a comment about morality. x, y, z are an agent's multiple, possibly conflicting values, and a, b, c are information about the agent's environment. You lack data about how your own mind will react to hypothetical situations you have not faced. At best you can extrapolate from historical data around the minds of other people, which are different from yours. A bigger and more trustworthy dataset would help solve this.
comment by samuelshadrach (xpostah) · 2025-04-10T17:49:15.800Z · LW(p) · GW(p)
My current guess for least worst path of ASI development that's not crazy unrealistic:
open source development + complete surveillance of all citizens and all elites (everyone's cameras broadcast to the public) + two tier voting.
Two tier voting:
- countries' govts vote or otherwise agree at the global level, on a daily basis, on what the rate of AI progress should be and which types of AI usage are allowed. (This rate can be zero.)
- All democratic countries use daily internet voting (liquid democracy) to decide what stance to represent at the global level. All other countries can use whatever internal method they prefer, to decide their stance at the global level.
- (All ASI labs are assumed to be property of their respective national govts. An ASI lab misbehaving is its govt's responsibility.) Any country whose ASI labs refuse to accept results of global vote and accelerate faster risks war (including nuclear war or war using hypothetical future weapons). Any country whose ASI labs refuse to broadcast themselves on live video risks war. Any country's govt that refuses to let their citizens broadcast live video risks war. Any country whose citizens mostly refuse to broadcast themselves on live video risks war. The exact thresholds for how much violation leads to how much escalation of war, may ultimately depend on how powerful the AI is. The more powerful the AI is (especially for offence not defence), the more quickly other countries must be willing to escalate to nuclear war in response to a violation.
Open source development
- All people working at ASI labs are livestream broadcast to public 24x7x365. Any AI advances made must be immediately proliferated to every single person on Earth who can afford a computer. Some citizens will be able to spend more on inference than others, but everyone should have the AI weights on their personal computer.
- This means bioweapons, nanotech weapons and any other weapons invented by the AI are also immediately proliferated to everyone on Earth. So this setup necessarily has to be paired with complete surveillance of everyone. People will all broadcast their cameras in public. Anyone who refuses can be arrested or killed via legal or extra-legal means.
- Since everyone knows all AI advances will be proliferated immediately, they will also use this knowledge to vote on what the global rate of progress should be.
There are plenty of ways this plan can fail and I haven't thought through all of them. But this is my current guess.
↑ comment by Mitchell_Porter · 2025-04-10T23:07:29.305Z · LW(p) · GW(p)
complete surveillance of all citizens and all elites
Certainly at a human level this is unrealistic. In a way it's also overkill - if use of an AI is an essential step towards doing anything dangerous, the "surveillance" can just be of what AIs are doing or thinking.
This assumes that you can tell whether an AI input or output is dangerous. But the same thing applies to video surveillance - if you can't tell whether a person is brewing something harmless or harmful, having a video camera in their kitchen is no use.
At a posthuman level, mere video surveillance actually does not go far enough, again because a smart deceiver can carry out their dastardly plots in a way that isn't evident until it's too late. For a transhuman civilization that has values to preserve, I see no alternative to enforcing that every entity above a certain level of intelligence (basically, smart enough to be dangerous) is also internally aligned, so that there is no disposition to hatch dastardly plots in the first place.
This may sound totalitarian, but it's not that different to what humanity attempts to instill in the course of raising children and via education and culture. We have law to deter and punish transgressors, but we also have these developmental feedbacks that are intended to create moral, responsible adults that don't have such inclinations, or that at least restrain themselves.
In a civilization where it is theoretically possible to create a mind with any set of dispositions at all, from paperclip maximizer to rationalist bodhisattva, the "developmental feedbacks" need to extend more deeply into the processes that design and create possible minds, than they do in a merely human civilization.
comment by samuelshadrach (xpostah) · 2025-04-09T07:59:38.210Z · LW(p) · GW(p)
I'm currently vaguely considering working on a distributed version of wikileaks that reduces personal risk for all people involved.
If successful, it will forcibly bring to the public a lot of information about deep tech orgs like OpenAI, Anthropic or Neuralink. This could, for example, make this a top-3 US election issue if most of the general public decides they don't trust these organisations as a result of the leaked information.
Key uncertainty for me:
- Destroying all the low trust institutions (and providing distributed tools to keep destroying them) is just a bandaid until a high trust institution is built.
- Should I instead be trying to figure out what a high trust global political institution looks like? i.e. how to build world government basically. Seems like a very old problem no one has cracked yet.
↑ comment by samuelshadrach (xpostah) · 2025-04-09T08:18:58.070Z · LW(p) · GW(p)
I have partial ideas on the question of "how to build world govt". [1]
But in general yeah I still lack a lot of clarity on how high trust political institutions are actually built.
"Trust" and "attention" seem like the key themes that come up whenever I think about this. Aggregate attention towards common goal then empower a trustworthy structure to pursue that goal.
For example, build a decentralised social media stack so people can form consensus on political questions even if violence is being used to suppress it. Have laws and culture in favour of live-streaming leaders' lives. A multi-party rather than two-party system will help. Ensuring weapons are distributed geographically and federally will help. (Distributing bioweapons is more difficult than distributing guns.)
comment by samuelshadrach (xpostah) · 2025-03-27T08:09:54.144Z · LW(p) · GW(p)
IMO a good way to explain how LLMs work to a layman is to print the weights on sheets of paper and compute a forward pass by hand. Anyone wanna shoot this video and post it on youtube?
Assuming a human can do one 4-bit multiplication per second using a lookup table,
1.5B 4-bit weights => ~1.5B multiplications => 1.5B seconds = ~47.5 years (working 24x7) = ~133 years (working 60 hours/week)
So you'll need to hire ~100 people for 1 year.
You don't actually have to run the entire experiment for people to get the concept, just run a small fraction of it. Although it'll be cool to run the whole thing as well.
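A quick sanity check of those numbers, under the one-multiplication-per-second assumption above:

```python
multiplications = 1.5e9             # one per 4-bit weight
secs_per_year = 3600 * 24 * 365

years_24x7 = multiplications / secs_per_year   # ~47.6 years
years_60h = years_24x7 * (24 * 7) / 60         # ~133 years at 60 h/week
print(round(years_24x7, 1), round(years_60h))  # 47.6 133
# ~133 person-years of work, i.e. on the order of 100 people for a year.
```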
comment by samuelshadrach (xpostah) · 2025-03-12T08:48:06.305Z · LW(p) · GW(p)
Update: HTTPS issue fixed. Should work now.
Books Search for Researchers
comment by samuelshadrach (xpostah) · 2025-02-20T16:36:15.614Z · LW(p) · GW(p)
Project idea for you
Figure out why we don't build one city with a population of one billion
- Bigger cities will probably accelerate tech progress, and other types of progress, as people are not forced to choose between their existing relationships and the place best for their career
- Assume end-to-end travel time must be below 2 hours for people to get the benefits of living in the same city. Seems achievable via an intra-city (not inter-city) bullet-train network. Max population = (200 km/h * 2h)^2 * (10000 people/km^2) = 1.6 billion people (see the sketch after this list)
- Is there any engineering challenge such as water supply that prevents this from happening? Or is it just the lack of any political elites with willingness + engineering knowledge + control of sufficient funds?
- If a govt builds the bullet train network, can market incentives be sufficient to drive everyone else (real estate developers, corporate leaders, etc) to build the city or will some elites within govt need to necessarily hand-hold other parts of this process?
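The max-population sketch referenced above, with all numbers being the assumptions stated in the list:

```python
speed_kmh = 200    # intra-city bullet-train speed
max_hours = 2      # end-to-end travel budget
density = 10_000   # people per km^2

span_km = speed_kmh * max_hours           # 400 km reachable span
max_population = span_km ** 2 * density
print(max_population)                     # 1,600,000,000
```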
↑ comment by Purplehermann · 2025-02-20T16:56:04.825Z · LW(p) · GW(p)
VR might be cheaper
↑ comment by samuelshadrach (xpostah) · 2025-02-20T17:26:26.724Z · LW(p) · GW(p)
I agree VR might one day be able to do this (make online meetings as good as in-person ones). As of 2025, bullet trains are more proven tech than VR. I'd be happy if both were investigated in more depth.
↑ comment by Purplehermann · 2025-02-20T20:05:12.808Z · LW(p) · GW(p)
A few notes on massive cities:
Cities of 10Ms exist, there is always some difficulty in scaling, but scaling 1.5-2 OOMs doesn't seem like it would be impossible to figure out if particularly motivated.
China and other countries have built large cities and then failed to populate them
The max population you wrote (1.6B) is bigger than China, bigger than Africa, similar to both American continents plus Europe.
Which is part of why no one really wants to build something so big, especially not at once.
Everything is opportunity cost, and the question of alternate routes matters a lot in deciding to pursue something. Throwing everything and the kitchen sink at something costs a lot of resources.
Given that VR development is currently underway regardless, starting this resource intense project which may be made obsolete by the time it's done is an expected waste of resources. If VR hit a real wall that might change things (though see above).
If this giga-city would be expected to 1000x tech progress or something crazy then sure, waste some resources to make extra sure it happens sooner rather than later.
Tl;dr:
Probably wouldn't work: there's no demand, it's very expensive, and VR is being developed and would actually be able to do what you're hoping for, but even better
↑ comment by samuelshadrach (xpostah) · 2025-02-21T16:25:27.185Z · LW(p) · GW(p)
especially not at once.
It could be built in stages. Like, build a certain number of bullet train stations at a time and wait to see if immigrants + real estate developers + corporations start building the city further, or do the stations end up unused?
I agree there is opportunity cost. It will help if I figure out the approx costs of train networks, water and sewage plumbing etc.
I agree there are higher risk higher reward opportunities out there, including VR. In my mind this proposal seemed relatively low risk so I figured it’s worth thinking through anyway.
no demand
This is demonstrably false. Honestly the very fact that city rents in many 1st world countries are much higher than rural rents proves that if you reduced the rents more people would migrate to the cities.
↑ comment by Purplehermann · 2025-02-22T21:18:00.548Z · LW(p) · GW(p)
Lower/Higher risk and reward is the wrong frame.
Your proposal is high cost.
Building infrastructure is expensive. It may or may not be used, and even if used it may not be worthwhile.
R&D for VR is happening regardless, so 0 extra cost or risk.
Would you invest your own money into such a project?
"This is demonstrably false. Honestly the very fact that city rents in many 1st world countries are much higher than rural rents proves that if you reduced the rents more people would migrate to the cities."
Sure, there is marginal demand for living in cities in general. You could even argue that there is marginal demand to live in bigger vs smaller cities.
This doesn't change the equation: where are you getting one billion residents - all of Africa? There is no demand for a city of that size.
↑ comment by samuelshadrach (xpostah) · 2025-02-24T18:36:08.185Z · LW(p) · GW(p)
Would you invest your own money in such a project?
If I were a billionaire I might.
I also have (maybe minor, maybe not minor) differences of opinion with standard EA decision-making procedures of assigning capital across opportunities. I think this is where our crux actually is, not on whether giant cities can be built with reasonable amounts of funding.
And sorry I won’t be able to discuss that topic in detail further as it’s a different topic and will take a bunch of time and effort.
↑ comment by Purplehermann · 2025-02-25T07:05:29.479Z · LW(p) · GW(p)
Our crux is whether the amount of investment to build one has a positive expected return on investment, breaking down into:
- If you could populate such a city
- Whether this is a "try everything regardless of cost" issue, given that a replacement is being developed for other reasons.
I suggest focusing on 1, as it's pretty fundamental to your idea and easier to get traction on
↑ comment by samuelshadrach (xpostah) · 2025-02-25T14:08:20.336Z · LW(p) · GW(p)
1 is going to take a bunch of guesswork to estimate. Assuming it were possible to migrate to the US and live at $200/mo for example, how many people worldwide will be willing to accept that trade? You can run a survey or small scale experiment at best.
What can be done is expand cities to the point where no more new residents want to come in. You can expand the city in stages.
↑ comment by Purplehermann · 2025-02-25T22:02:31.653Z · LW(p) · GW(p)
Definitely an interesting survey to run.
I don't think the US wants to triple the population with immigrants, and $200/month would require a massive subsidy. (Internet says $1557/month average rent in US)
How many people would you have to get in your city to justify the progress?
100 Million would only be half an order of magnitude larger than Tokyo, and you're unlikely to get enough people to fill it in the US (at nearly a third of the population, you'd need to take a lot of population from other cities)
How much do you have to subsidize living costs, and how much are you willing to subsidize?
↑ comment by samuelshadrach (xpostah) · 2025-02-26T03:54:35.991Z · LW(p) · GW(p)
If I understand correctly it is possible to find $300/mo/bedroom accommodation in rural US today, and a large enough city will compress city rents down to rural rents. A govt willing to pursue a plan as interesting as this one may also be able to increase immigrant labour to build the houses and relax housing regulations. US residential rents are artificially high compared to global average. (In some parts of the world, a few steel sheets (4 walls + roof) is sufficient to count as a house, even water and sewage piping in every house is not mandatory as long as residents can access toilets and water supply within walking distance.)
(A gigacity could also increase rents because it'll increase the incomes of even its lowest income members. But yeah in general now you need to track median incomes of 1B people to find out new equilibrium.)
↑ comment by ProgramCrafter (programcrafter) · 2025-02-20T18:12:59.658Z · LW(p) · GW(p)
Is there any engineering challenge such as water supply that prevents this from happening? Or is it just lack of any political elites with willingness + engg knowledge + governing sufficient funds?
That dichotomy is not exhaustive, and I believe going through with the proposal will necessarily make the city inhabitants worse off.
- Humans' social machinery is not suited to live in such large cities, as of the current generations. Who to get acquainted with, in the first place? Isn't there lots of opportunity cost to any event?
- Humans' biomachinery is not suited to living in such large cities. Being around lots and lots of people might be regulating hormones and behaviour to settings we have not totally explored (I remember reading something that claims this is a large factor in lower fertility).
- Centralization is dangerous because of possibly-handmade mass weapons.
- Assuming random housing and examining some quirk/polar position, we'll get a noisy texture. It will almost certainly have a large group of people supporting one position right next to group thinking otherwise. Depending on sizes and civil law enforcement, that may not end well.
After a couple hundred years, 1) and 2) will most probably get solved by natural selection so the proposal will be much more feasible.
↑ comment by samuelshadrach (xpostah) · 2025-02-21T16:21:10.521Z · LW(p) · GW(p)
Sorry, I didn't understand your comment at all. Why are 1, 2 and 4 bigger problems in a 1-billion-population city versus, say, a 20-million-population city?
↑ comment by ProgramCrafter (programcrafter) · 2025-02-22T00:46:07.757Z · LW(p) · GW(p)
I'd maintain that those problems already exist in 20M-people cities and will not necessarily become much worse. However, by increasing city population you bring in more people into the problems, which doesn't seem good.
↑ comment by samuelshadrach (xpostah) · 2025-02-22T16:20:33.898Z · LW(p) · GW(p)
Got it, I understand what you're trying to say. I agree living in cities has some downsides compared to living in smaller towns, and if you could find a way to get the best of both instead it could be better than either.
comment by samuelshadrach (xpostah) · 2025-04-15T10:10:32.572Z · LW(p) · GW(p)
Has anyone considered video recording streets around offices of OpenAI, Deepmind, Anthropic? Can use CCTV or drone. I'm assuming there are some areas where recording is legal.
Can map out employee social graphs, daily schedules and daily emotional states.
↑ comment by faul_sname · 2025-04-15T21:07:47.730Z · LW(p) · GW(p)
Did you mean to imply something similar to the pizza index?
The Pizza Index refers to the sudden, trackable increase of takeout food orders (not necessarily of pizza) made from government offices, particularly the Pentagon and the White House in the United States, before major international events unfold.
Government officials order food from nearby restaurants when they stay late at the office to monitor developing situations such as the possibility of war or coup, thereby signaling that they are expecting something big to happen. This index can be monitored through open resources such as Google Maps, which show when a business location is abnormally busy.
If so, I think it's a decent idea, but your phrasing may have been a bit unfortunate - I originally read it as a proposal to stalk AI lab employees.
↑ comment by samuelshadrach (xpostah) · 2025-04-19T15:32:16.162Z · LW(p) · GW(p)
Update: I'll be more specific. There's a "power buys you distance from the crime" phenomenon going on if you're okay with using Google Maps data about their restaurant takeout orders, but not okay asking the restaurant employee yourself or getting yourself hired at the restaurant.
↑ comment by samuelshadrach (xpostah) · 2025-04-19T04:27:45.023Z · LW(p) · GW(p)
Pizza index and stalking employees are both the same thing, it's hard to do one without the other. If you choose to declare war against AI labs you also likely accept that their foot soldiers are collateral damage.
I agree that (non-violent) stalking of employees is still a more hostile technique than writing angry posts on an internet forum.
comment by samuelshadrach (xpostah) · 2025-03-23T09:06:24.759Z · LW(p) · GW(p)
Forum devs including lesswrong devs can consider implementing an "ACK" button on any comment, indicating I've read a comment. This is distinct from
a) Not replying - other person doesn't know if I've read their comment or not
b) Replying something trivial like "okay thanks" - other person gets a notification though I have nothing of value to say
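A minimal sketch of what this might look like server-side (names and storage are hypothetical, not LW's actual codebase): an ACK is a row keyed by reader and comment, and unlike a reply it stores no text and triggers no notification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Ack:
    user_id: str
    comment_id: str
    acked_at: datetime

acks: dict[tuple[str, str], Ack] = {}  # stand-in for a database table

def ack_comment(user_id: str, comment_id: str) -> None:
    """Record that user_id has read comment_id; idempotent, no notification."""
    acks.setdefault((user_id, comment_id),
                    Ack(user_id, comment_id, datetime.now(timezone.utc)))
```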
comment by samuelshadrach (xpostah) · 2025-01-13T04:36:47.987Z · LW(p) · GW(p)
http://tokensfortokens.samuelshadrach.com
Pay for OpenAI API usage using cryptocurrency.
Currently supported: OpenAI o1 model, USDC on Optimism Rollup on ethereum.
Why use this?
- You want anonymity
- You want to use AI for cheaper than the rate OpenAI charges
How to use this?
- You have to purchase a few dollars of USDC and ETH on Optimism Rollup, and install Metamask browser extension. Then you can visit the website.
More info:
- o1 by OpenAI is the best AI model in the world as of Jan 2025. It is good for reasoning especially on problems involving math and code. OpenAI is partially owned by Microsoft and is currently valued above $100 billion.
- Optimism is the second largest rollup on top of the ethereum blockchain. Ethereum is the second largest blockchain in terms of market capitalisation. (Bitcoin is the largest. Bitcoin has very limited functionality, and it is difficult to build apps using it.) People use rollups to avoid the large transaction fees charged by blockchains, while still getting a similar level of security. As of 2025 users have trusted Optimism with around $7 billion in assets. Optimism is funded by Paradigm, one of the top VCs in the cryptocurrency space.
- USDC is a stablecoin issued by Circle, a registered financial company in the US. A stablecoin is a cryptocurrency token issued by a financial company where the company holds one dollar (or euro etc) in their bank account for every token they issue. This ensures the value of the token remains $1. As of 2025, USDC is the world's second largest stablecoin with $45 billion in reserves.
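For anyone unfamiliar with the mechanics, here is a rough sketch of what sending USDC on Optimism looks like with web3.py. The RPC URL and token address are illustrative placeholders; verify the official USDC contract address on Optimism independently before sending real funds:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.optimism.io"))  # public OP RPC

# Minimal ERC-20 ABI: just the transfer function.
ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "amount", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

# Placeholder address -- check Circle's official docs for the real one.
usdc = w3.eth.contract(
    address=Web3.to_checksum_address("0x0b2c639c533813f4aa9d7837caf62653d097ff85"),
    abi=ERC20_ABI,
)

def send_usdc(private_key: str, to: str, dollars: float) -> str:
    acct = w3.eth.account.from_key(private_key)
    tx = usdc.functions.transfer(
        Web3.to_checksum_address(to),
        int(dollars * 10**6),  # USDC uses 6 decimals
    ).build_transaction({
        "from": acct.address,
        "nonce": w3.eth.get_transaction_count(acct.address),
    })
    signed = acct.sign_transaction(tx)
    # Attribute is .rawTransaction on older web3.py versions.
    return w3.eth.send_raw_transaction(signed.raw_transaction).hex()
```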
comment by samuelshadrach (xpostah) · 2025-01-01T13:34:25.575Z · LW(p) · GW(p)
I'm selling $1000 tier-5 OpenAI credits at a discount. DM me if interested.
You can video call me and all my friends to reduce the probability I end up scamming you. Or vice versa, I can video call your friends. We can do the transaction in tranches if we still can't establish trust.
comment by samuelshadrach (xpostah) · 2025-01-12T11:29:23.055Z · LW(p) · GW(p)
Pay for OpenAI API using crypto. Use USDC on Optimism rollup on ethereum.
(Worst case if you're scammed you lose less than $0.10)
http://188.245.245.248:3000/sender.html
↑ comment by Richard_Kennaway · 2025-01-12T20:51:28.537Z · LW(p) · GW(p)
This post looks like a scam. The URL it contains looks like a scam. Everything about it looks like a scam. Either your account was hijacked, or you're a scammer, or you got taken in by a scam, or (last and least) it's not a scam.
If you believe it is not a scam, and you want to communicate that it is not a scam, you will have to do a great deal more work to explain exactly what this is. Not a link to what this is, but actual text, right here, explaining what it is. Describe it to people who, for example, have no idea what "USDC", "Optimism", or "rollup" are. It is not up to us to do the research, it is up to you to do the research and present the results.
↑ comment by samuelshadrach (xpostah) · 2025-01-13T04:38:33.446Z · LW(p) · GW(p)
I've made a new "quick take" explaining it. Please let me know.
P.S. Anybody can purchase any domain for $10, I don't see why domains should be more trustworthy than IP addresses. Anyway, I've added it to my domain now.