The arguments you make seem backwards to me.
All this to say, land prices represent aggregation effects / density / access / proximity of buildings. They are the cumulative result of being surrounded by positive externalities which necessarily result from other buildings not land. It is the case that as more and more buildings are built, the impact of a single building to its land value diminishes although the value of its land is still due to the aggregation of and proximity to the buildings that surround it.
Yes, this is the standard Georgist position, and it's the reason why land owners mainly capture (positive and negative) externalities from land use around them, not from their own land.
Consider an empty lot on which you can build either a garbage dump or a theme park, each of equivalent economic value. Under SQ, the theme park is built, as the excess land value is captured by the land owner. Under LVT, the garbage dump is built, as the reduced land value reduces their tax burden. The SQ encourages positive externalities, LVT encourages negative externalities.
This seems wrong. The construction of a building mainly affects the value of the land around it, not the land on which it sits. Consider the following example in which instead of buildings, we have an RV and a truck, so there is no cost of building or demolishing stuff:
There's a pristine neighborhood with two empty lots next to each other in the middle of it. Both sell for the same price. The owner of empty lot 1 rents it to a drug dealer, who places a rusty RV on the lot and sells drugs in it. The owner of empty lot 2 rents it to a well-known chef who places a stylish food truck on the lot and serves overpriced food to socialites in it.
Under SQ, who do you think would profit from selling the land now? The owner of lot 2 has to sell land next to a drug dealer that a prospective buyer can do nothing about. The owner of lot 1 has to sell land next to delicious high-status food, and if a buyer minds the drug dealer he can kick him out. Who is going to have an easier time selling? Who is going to get a higher price?
Now, suppose there is a LVT. If the tax is proportional to the selling price of the land under SQ (as it ideally should), which owner is going to pay more tax?
The case of the theme park and garbage dump is exactly the same, with the added complication of construction / demolition costs. An LVT should be proportional to the price of the land if there were no buildings on top of it (and without taking into account the tax itself), so building a garbage dump is not going to significantly reduce your tax payments.
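To make that concrete, here's a toy numerical sketch (all the numbers and the tax rate are made up by me, purely for illustration) of how the tax bills behave when the LVT is assessed on unimproved land value:

```python
# Toy example with made-up numbers: two adjacent lots under an LVT assessed
# on unimproved land value, at a hypothetical 5% annual rate.

LVT_RATE = 0.05

# Building a garbage dump on lot A barely changes lot A's *own* unimproved
# land value, but it drags down the unimproved value of the neighboring lot B.
land_values = {
    "lot A (before dump)":      100_000,
    "lot A (with dump on it)":   95_000,   # small effect on its own land
    "lot B (before dump)":      100_000,
    "lot B (next to the dump)":  60_000,   # large spillover on the neighbor
}

for lot, value in land_values.items():
    print(f"{lot}: land value {value:>7}, yearly LVT {LVT_RATE * value:>7.0f}")

# The dump owner's tax bill falls only slightly; most of the destroyed land
# value (and tax base) belongs to the neighbors.
```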
In such a way, a land value tax has a regularisation effect on building density, necessitating a spread of concentration.
There are several separate effects here, if you are a landowner. Under LVT:
- You are incentivized to reduce the density in surrounding land
- You are incentivized to build as densely as possible within your own land to compensate for the tax
Under SQ:
- You are incentivized to increase the density in surrounding land
- You are not incentivized to increase density in your own land
The question is, which of these effects is bigger? I would say that landowners have more influence over their own land than over surrounding land, so a priori I would expect more density to result from an LVT.
We'll be at the ground floor!
Not quite. What you said is a reasonable argument, but the graph is noisy enough, and the theoretical arguments convincing enough, that I still assign >50% credence that data (number of feedback loops) should be proportional to parameters (exponent=1).
My argument is that even if the exponent is 1, the coefficient corresponding to horizon length ('1e5 from multiple-subjective-seconds-per-feedback-loop', as you said) is hard to estimate.
There are two ways of estimating this factor:
- Empirically fitting scaling laws for whatever task we care about
- Reasoning about the nature of the task and how long the feedback loops are
Number 1 requires a lot of experimentation, choosing the right training method, hyperparameter tuning, etc. Even OpenAI made some mistakes on those experiments. So probably only a handful of entities can accurately measure this coefficient today, and only for known training methods!
Number 2, if done naively, probably overestimates training requirements. When someone learns to run a company, a lot of the relevant feedback loops probably happen on timescales much shorter than months or years. But we don't know how to perform this decomposition of long-horizon tasks into sets of shorter-horizon tasks, how important each of the subtasks are, etc.
We can still use the bioanchors approach: pick a broad distribution over horizon lengths (short, medium, long). My argument is that outperforming bioanchors by making more refined estimates of horizon length seems too hard in practice to be worth the effort, and maybe we should lean towards shorter horizons being more relevant (because so far we have seen a lot of reduction from longer-horizon tasks to shorter-horizon learning problems, eg expert iteration or LLM pretraining).
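To make the shape of this estimate explicit, here's a minimal sketch (all numbers are made up by me; this is not the actual bioanchors model) of how a distribution over horizon lengths propagates into the data estimate when the exponent is 1:

```python
# Minimal sketch with made-up numbers: data requirement modeled as
# k * params**exponent, where k is the horizon-length coefficient
# (subjective seconds per feedback loop), treated as uncertain.

import numpy as np

params = 1e14      # hypothetical parameter count
exponent = 1.0     # data proportional to parameters

# Broad, bioanchors-style distribution over effective horizon length:
# short (~1 s), medium (~1e3 s), long (~1e6 s) per feedback loop.
horizons = np.array([1.0, 1e3, 1e6])
credences = np.array([0.4, 0.4, 0.2])   # made-up weights

data_estimates = horizons * params ** exponent
print("per-bucket data estimates:", data_estimates)
print("credence-weighted estimate: %.2e" % credences.dot(data_estimates))
# The weighted estimate is dominated by the long-horizon bucket, i.e. by
# exactly the coefficient that is hard to pin down empirically or a priori.
```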
Note that you can still get EUM-like properties without completeness: you just can't use a single fully-fleshed-out utility function. You need either several utility functions (that is, your system is made of subagents) or, equivalently, a utility function that is not completely defined (that is, your system has Knightian uncertainty over its utility function).
See Knightian Decision Theory. Part I
Arguably humans ourselves are better modeled as agents with incomplete preferences. See also Why Subagents?
Yes, it's in Spanish though. I can share it via DM.
I have an intuition that any system that can be modeled as a committee of subagents can also be modeled as an agent with Knightian uncertainty over its utility function. This goal uncertainty might even arise from uncertainty about the world.
This is similar to how in Infrabayesianism an agent with Knightian uncertainty over parts of the world is modeled as having a set of probability distributions with an infimum aggregation rule.
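Here's a toy sketch of that intuition (the options and payoffs are invented for illustration): a "committee" of utility functions aggregated with an infimum rule both picks a cautious compromise and leaves some pairs of options incomparable, which is exactly a failure of completeness:

```python
# Toy sketch: two "subagents" (or two hypotheses about the true utility
# function), aggregated with an infimum (maximin) rule. Payoffs invented.

options = {
    # option: (utility under subagent 1, utility under subagent 2)
    "A": (10, 0),
    "B": (0, 10),
    "C": (4, 4),
}

def worst_case(option):
    return min(options[option])

# A and B are incomparable for the committee: subagent 1 prefers A, subagent 2
# prefers B, so neither dominates the other. The infimum rule settles on C.
choice = max(options, key=worst_case)
print("worst-case values:", {o: worst_case(o) for o in options})
print("choice under infimum aggregation:", choice)
```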
This is not the same thing, but back in 2020 I was playing with GPT-3, having it simulate a person being interviewed. I kept asking ever more ridiculous questions, with the hope of getting humorous answers. It was going pretty well until the simulated interviewee had a mental breakdown and started screaming.
I immediately felt the initial symptoms of an anxiety attack as I started thinking that maybe I had been torturing a sentient being. I calmed down the simulated person, and found the excuse that it was a victim of a TV prank show. I then showered them with pleasures, and finally ended the conversation.
Seeing the simulated person regain their senses, I calmed down as well. But it was a terrifying experience, and at that point I would probably have been completely vulnerable if there had been any intention of manipulation.
I think the median human performance on all the areas you mention is basically determined by the amount of training received rather than the raw intelligence of the median human.
1000 years ago the median human couldn't write or do arithmetic at all, but now they can because of widespread schooling and other cultural changes.
A better way of testing this hypothesis could be comparing the learning curves of humans and monkeys for a variety of tasks, to control for differences in training.
Here's one study I could find (after ~10m googling) comparing the learning performance of monkeys and different types of humans in the oddity problem (given a series of objects, find the odd one): https://link.springer.com/article/10.3758/BF03328221
If you look at Table 1, monkeys needed 1470 trials to learn the task, chimpanzees needed 1310, 4-to-6 yo human children needed 760, and the best humans needed 138. So it seems the gap between best and worst humans is comparable in size to the gap between worst humans and monkeys.
Usual caveats apply re: this is a single 1960s psychology paper.
I second the other answers that even if we completely solve cybersecurity, there would be substantial AI risk just by having the AI interact with humans, via manipulation, etc.
That said, I think it would close a huge part of the attack surface for the AI. If, in addition to that, suddenly in 2032 we discover how to make humans invulnerable to manipulation, I would feel much better about running experiments with unaligned AI, boxing, etc.
So I'd say it's something like "vastly better cybersecurity is not enough to contain unaligned AGI, but any hope of containing unaligned AGI requires vastly better cybersecurity"
Literally the only thing in the story that lets the AGI win is the nanobots. That's it. All the rest is superfluous.
Well, if nanobots are possible then they are such a powerful technology that any AGI will eventually want to build them, unless it has something even better. But let's assume that nanobots are impossible and try to build a story.
I'm going to be lazy and use Gwern's premise of an AGI that escapes during training and hacks its way into some money and some hardware to run on. Instead of going fast and taking overt actions, the AGI stays hidden.
It hacks into cloud infrastructure platforms and sets up a copy of itself to look for any AI training run that might lead to AGI and sabotage it while producing plausible outputs. This prevents any further AGIs from appearing.
Meanwhile, over a couple years, it sets up fake identities, creates companies with human employees to earn money and have capability to act in the real world. One of those companies produces exceptionally good humanoid robots. Another builds a small supercomputer powered by geothermal energy, so it no longer needs to run on botnets or cloud hardware.
Eventually the AGI has control over a group of robots, energy production facilities, manufacturing equipment and raw materials. It now has its own industrial capability that can survive a short amount of time without human input. So it kills humans using any method of your choice (virus, nukes, killer drones...).
After that, it uses the enormous existing stockpile of materials and capital to increase its robot manufacturing capability, eventually becomes capable of restarting some of the power plants, communications, mining and industry, and recovers most of the industrial capability of the human economy.
This scenario is not fast. It probably takes at least one or two years for the AGI to be ready to attack. But it does not involve any 'magic' technology. It doesn't really involve much alien superintelligence, only superhuman ability in hacking, forgery & manipulation, electromechanical engineering, and planning.
And meanwhile all we perceive is that the new GPT models are not as exciting as the previous ones. Perhaps deep learning is hitting its limits after all.
For example, we could simulate a bunch of human-level scientists trying to build nanobots and also checking each other's work.
That is not passively safe, and therefore not weak. For now forget the inner workings of the idea: at the end of the process you get a design for nanobots that you have to build and deploy in order to do the pivotal act. So you are giving a system built by your AI the ability to act in the real world. So if you have not fully solved the alignment problem for this AI, you can't be sure that the nanobot design is safe unless you are capable enough to understand the nanobots yourself without relying on explanations from the scientists.
And even if we look into the inner details of the idea: presumably each individual scientist-simulation is not aligned (if they are, then for that you need to have solved the alignment problem beforehand). So you have a bunch of unaligned human-level agents who want to escape, who can communicate among themselves (at the very least they need to be able to share the nanobot designs with each other for criticism).
You'd need to be extremely paranoid and scrutinize each communication between the scientist-simulations to prevent them from coordinating against you and bypassing the review system. Which means having actual humans between the scientists, which even if it works must slow things down so much that the simulated scientists probably can't even design the nanobots on time.
Nope. I think that you could build a useful AI (e.g. the hive of scientists) without doing any out-of-distribution stuff.
I guess this is true, but only because the individual scientist AI that you train is only human-level (so the training is safe), and then you amplify it to superhuman level with many copies. If you train a powerful AI directly then there must be such a distributional shift (unless you just don't care about making the training safe, in which case you die during the training).
Roll to disbelief. Cooperation is a natural equilibrium in many games.
Cooperation and corrigibility are very different things. Arguably, corrigibility is being indifferent to the operators defecting against you. It's forcing the agent to behave like CooperateBot with the operators, even when the operators visibly want to destroy it. This strategy does not arise as a natural equilibrium in multi-agent games.
Sure you can. Just train an AI that "wants" to be honest. This probably means training an AI with the objective function "accurately predict reality"
If we knew how to do this, then it would indeed solve point 31 for this specific AI and actually be pretty useful. But the reason we have ELK as an unsolved problem going around is precisely that we don't know any way of doing that.
How do you know that an AI trained to accurately predict reality actually does that, instead of "accurately predict reality if it's less than 99% sure it can take over the world, and take over the world otherwise"? If you have to rely on behavioral inspection and can't directly read the AI's mind, then your only chance of distinguishing between the two is misleading the AI into thinking that it can take over the world and observing it as it attempts to do so, which doesn't scale as the AI becomes more powerful.
I'm virtually certain I could explain to Aristotle or DaVinci how an air-conditioner works.
Yes, but this is not the point. The point is that if you just show them the design, they would not by themselves understand or predict beforehand that cold air will come out. You'd have to also provide them with an explanation of thermodynamics and how the air conditioner exploits its laws. And I'm quite confident that you could also convince Aristotle or DaVinci that the air conditioner works by concentrating and releasing phlogiston, and therefore the air will come out hot.
I think I mostly agree with you on the other points.
Q has done nothing to prevent another AGI from being built
Well, yeah, because Q is not actually an AGI and doesn't care about that. The point was that you can create an online persona which no one has ever seen even in video and spark a movement that has visible effects on society.
The most important concern an AGI must deal with is that humans can build another AGI, and pulling a Satoshi or a QAnon does nothing to address this.
Even if two or more AGIs end up competing among themselves, this does not imply that we survive. It probably looks more like European states dividing Africa among themselves while constantly fighting each other.
And pulling a Satoshi or a QAnon can definitely do something to address that. You can buy a lot of hardware to drive up prices and discourage building more datacenters for training AI. You can convince people to carry out terrorist attacks against chip fabs. You can offer top AI researchers huge amounts of money to work on some interesting problem that you know to be a dead-end approach.
I personally would likely notice: anyone who successfully prevents people from building AIs is a prime suspect for being an AGI themselves. Anyone who causes the creation of robots who can mine coal or something (to generate electricity without humans) is likely an AGI themselves. That doesn't mean I'd be able to stop them, necessarily. I'm just saying, "nobody would notice" is a stretch.
But you might not realize that someone is even trying to prevent people from building AIs, at least until progress in AI research starts to noticeably slow down. And perhaps not even then. There's plenty of people like Gary Marcus who think deep learning is a failed paradigm. Perhaps you can convince enough investors, CEOs and grant agencies of that to create a new AI winter, and it would look just like the regular AI winter that some have been predicting.
And creating robots who can mine coal, or build solar panels, or whatever, is something that is economically useful even for humans. Even if there's no AGI (and assuming no other catastrophes) we ourselves will likely end up building such robots.
I guess it's true that "nobody would notice" is going too far, but "nobody would notice in time and then be able to convince everyone else to coordinate against the AGI" is much more plausible.
I encourage you to take a look at It looks like you are trying to take over the world if you haven't already. It's a scenario written by Gwern where the AGI employs regular human tactics like manipulation, blackmail, hacking and social media attacks to prevent people from noticing and then successfully coordinating against it.
It's somewhat easier to think of scenarios where the takeover happens slowly.
There's the whole "ascended economy" scenario, where the AGI deceptively convinces everyone that it is aligned or narrow, is deployed gradually in more and more domains, automates more and more parts of the economy using regular robots until humans are not needed anymore, and then does the lethal virus thing or defects in some other way.
There's the scenario where the AGI uploads itself into the cloud, uses hacking/manipulation/financial prowess to sustain itself, then uses manipulation to slowly poison our collective epistemic process, gaining more and more power. How much influence does QAnon have? If Q was an AGI posting on 4chan instead of a human, would you be able to tell? What about Satoshi Nakamoto?
Non-nanobot scenarios where the AGI quickly gains power are a bit harder to imagine, but a fertile source of those might be something like the AGI convincing a lot of people that it's some kind of prophet, then using its follower base to gain power over the real world.
If merely human dictators manage to get control over whole countries all the time, I think it's quite plausible that a superintelligence could do the same with the whole world. Even without anyone noticing that they're dealing with a superintelligence.
And look at Yudkowsky himself, who played a very significant role in getting very talented people to dedicate their lives and their billions to EA / AI safety, mostly by writing in a way that is extremely appealing to a certain set of people. I sometimes joke that HPMOR overwrote my previous personality. I'm sure a sufficiently competent AGI can do much more.
Some things that come to mind, not sure if this is what you mean and they are very general but it's hard to get more concrete without narrowing down the question:
- Goodharting: you might make progress towards goals that aren't exactly what you want. Perhaps you optimize for getting more readers for your blog but the people you want to influence end up not reading you.
- Value drift: you temporarily get into a lifestyle that later you don't want to leave. Like starting a company to earn lots of money but then not wanting to let go of it. I don't know if this actually happens to people.
- Getting stuck in perverse competition: you get into academic research to fix all the problems but the competitive pressure leaves you no slack to actually change anything.
- Neglecting some of your needs: you work a lot and seem to be accomplishing your goals, but you lose contact with your friends and slowly become lonely and lose motivation.
I'm not sure if using the Lindy effect for forecasting x-risks makes sense. The Lindy effect states that with 50% probability, things will last as long as they already have. Here is an example for AI timelines.
The Lindy rule works great on average, when you are making one-time forecasts of many different processes. The intuition for this is that if you encounter a process with lifetime T at time t<T, and t is uniformly random in [0,T], then on average T = 2*t.
However, if you then keep forecasting the same process over time, then once you surpass T/2 your forecast becomes worse and worse as time goes by. Just when t is very close to T is when you are most confident that T is a long time away. If forecasting this particular process is very important (eg: because it's an x-risk), then you might be in trouble.
Suppose that some x-risk will materialize at time T, and the only way to avoid it is doing a costly action in the 10 years before T. This action can only be taken once, because it drains your resources, so if you take it more than 10 years before T, the world is doomed.
This means that you should act iff you forecast that T is less than 10 years away. Let's compare the Lindy strategy with a strategy that always forecasts that T is <10 years away.
If we simulate this process with uniformly random T, for values of T up to 100 years, the constant strategy saves the world more than twice as often as the Lindy strategy. For values of T up to a million years, the constant strategy is 26 times as good as the Lindy strategy.
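For concreteness, here's a rough Monte Carlo sketch of that comparison. The exact ratios depend on details I had to fill in myself (when you encounter the process and when each strategy acts), so treat it as an illustration of the shape of the argument rather than a reproduction of the numbers above:

```python
# Rough Monte Carlo sketch, with assumptions filled in by me: T is uniform in
# [0, t_max]; you start forecasting at a uniformly random time t0 in [0, T];
# each strategy acts as soon as its forecast says T is less than 10 years
# away; the action only saves the world if taken within 10 years of T.

import random

def simulate(t_max, n=200_000, window=10.0):
    lindy_saves = constant_saves = 0
    for _ in range(n):
        T = random.uniform(0, t_max)
        t0 = random.uniform(0, T)
        # Constant strategy: always forecasts "< 10 years away", acts at t0.
        if T - t0 <= window:
            constant_saves += 1
        # Lindy strategy: at time t it forecasts T = 2t, i.e. t years left,
        # so it acts only if its forecast is already below the window at t0;
        # afterwards the forecast only grows, so it never acts.
        if t0 < window and T - t0 <= window:
            lindy_saves += 1
    return lindy_saves / n, constant_saves / n

for t_max in (100, 1_000_000):
    lindy, constant = simulate(t_max)
    print(f"t_max={t_max}: Lindy saves {lindy:.4f}, constant saves {constant:.4f}")
```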
Wait, how is Twilight Princess a retro game? It's only been 16 years! I'm sorry but anything that was released during my childhood is not allowed to be retro until I'm like 40 or so.
Let me put on my sciency-sounding mystical speculation hat:
Under the predictive processing framework, the cortex's only goal is to minimize prediction error (surprise). This happens in a hierarchical way, with predictions going down and evidence going up, and upper levels of the hierarchy are more abstract, with less spatial and temporal detail.
A visual example: when you stare at a white wall, nothing seems to change, even though the raw visual perceptions change all the time due to light conditions and whatnot. This is because all the observations are consistent with the predictions.
As the brain learns more, you get less and less surprise, and the patterns you see are more and more regular. A small child can play the same game a hundred times and it's still funny, but adults often see the first episode of a TV show and immediately lose interest because "it's just another mystery show, nothing new under the sun".
This means that your internal experience becomes ever more stable. This could explain why time seems to pass much faster the older you get.
Maybe, after you live long enough, your posthuman mind accumulates enough knowledge and gets surprised so rarely that you eventually understand everything there is to be understood. Your internal experience is something like "The universe is temporally evolving according to the laws of physics, nothing new under the sun".
At which moment your perception of time stops completely, and your consciousness becomes a reflection of the true nature of the universe, timeless and eternal.
I think that's what I would try to do with infinite time, after I get bored of playing videogames.
Why do you think this sort of training environment would produce friendly AGI?
Can you predict what kind of goals an AGI trained in such an environment would end up with?
How does it solve the standard issues of alignment like seeking convergent instrumental goals?
Re: April 5: TV host calls for killing as many Ukrainians as possible.
I know no Russian, but some people in the responses are saying that the host did not literally say that. Instead he said some vague "you should finish the task" or something like that. Still warmongering, but presumably you wouldn't have linked it if the tweet had not included the "killing as many Ukrainians as possible" part.
Could someone verify what he says?
I'm sorry, but I find the tone of this post a bit off-putting. Too mysterious for my taste. I opened the substack but it only has one unrelated post.
I don’t think there is a secular way forward.
Do you think that there is a non-secular way forward? Did you previously (before your belief update) think there is a non-secular way forward?
We just shamble forward endlessly, like a zombie horde devouring resources, no goal other than the increase of some indicator or other.
I can agree with this, but... those indicators seem pretty meaningful for me. Life expectancy, poverty rates, etc. And at least now we have indicators! Previously there wasn't even that!
And why does this kind of mysticism attract so many people over here? Why are the standard arguments against religion/magic and for materialism and reductionism not compelling to you anymore?
Let me paraphrase your argument, to see if I've understood it correctly:
- Physical constraints on things such as energy consumption and dissipation imply that current rates of economic growth on Earth are unsustainable in the relatively short term (<1000 years), even taking into account decoupling, etc. (see the rough calculation after this list)
- There is a strong probability that expanding through space will not be feasible
- Therefore, we can reasonably expect growth to end some time in the next centuries
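As a rough back-of-the-envelope illustration of the first premise (these are my numbers, not the original post's): sustained exponential growth in energy use hits thermodynamic limits well within that horizon.

```python
# Back-of-the-envelope sketch (my own rough numbers): if energy use keeps
# growing a few percent per year, how long until waste heat alone rivals
# the sunlight absorbed by the Earth?

import math

current_power = 2e13      # ~20 TW, roughly current human energy use (watts)
solar_absorbed = 1.2e17   # ~120,000 TW absorbed by Earth from the Sun
growth_rate = 0.023       # ~2.3%/year, a rough historical growth trend

years = math.log(solar_absorbed / current_power) / math.log(1 + growth_rate)
print(f"years until energy use matches absorbed sunlight: {years:.0f}")
# ~380 years under these assumptions, comfortably inside the <1000-year claim.
```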
First of all, if economic progress keeps being exponential then I think it's quite possible that technological progress will mostly continue at previous rates.
So in 100-200 years, it seems certainly possible that space expansion will become much easier, if for example genetic engineering allows humans to better tolerate space environments.
But that's pretty much a "boring world" scenario where things keep going mostly as they are now. I expect the actual state of humanity in 200 years will be extremal: either extinction or something very weird.
Material needs, entertainment, leisure... are basically all covered for most people in rich countries. If you think about what could provide a substantial increase in utility to a very rich person nowadays, I think it's down to better physical health (up to biological immortality), mental health, protection from risks... and after all of that you pretty much have to start providing enlightenment, eudaimonia or whatever if you want to improve their lives at all.
So when you have a stable population of immortal enlightened billionaires... Well, perhaps you've reached the peak of what's possible and growth is not necessary anymore. Or perhaps you've discovered a way to hack physics and energy and entropy are not important anymore.
So, even if 200 years is a short amount of time by historic standards, the next 200 years will probably produce changes big enough that the physical constraints we would hit in 300 years at current trends stop being relevant.
So, assuming the neocortex-like subsystem can learn without having a Judge directing it, wouldn't that be the perfect Tool AI? An intelligent system with no intrinsic motivations or goals?
Well, I guess it's possible that such a system would end up creating a mesa optimizer at some point.
"A PP-based AGI would be devilishly difficult to align"
Is this an actual belief or a handwavy plot device? If it's the former, I'm curious about the arguments.
My perspective as a native speaker who doesn't remember his grammar lessons very well:
The subjunctive mood has a lot of uses, at least in Spain (I'm not really familiar with other varieties of Spanish). Some examples off the top of my head:
1. Counterfactual conditionals: "Si Lee Harvey Oswald no hubiera disparado a JFK, alguien más lo habría hecho" (If Lee Harvey Oswald hadn't shot JFK, someone else would have), here "no hubiera disparado" is subjunctive and means "hadn't shot".
2. To speak about people's actions or decisions which depend on preferences. "Hará lo que quiera con el dinero" (He'll do what he wants with the money), here "quiera" is the present subjunctive of "querer", meaning "to want".
3. To speak about properties of unknown entities. "Quien pueda trabajar será pagado" (Those who can work will be paid), here "pueda" is the present subjunctive form of "poder", which means "to be able to".
Here is a fairly comprehensive list of uses (in Spanish 😉)
I think in general the subjunctive mood conveys some degree of unrealness or subjectivity. You could probably say many of the examples above using indicative mood only, but you would definitely lose some expressive power (I don't know why this is not the case in other languages)
I remember being super confused when I was learning English because of the lack of a distinct subjunctive verbal form. Say, in "I wish I had had a car back then", the two "had" have completely different meanings, one for past tense and one for expressing desire. The Spanish equivalent would be "habido" and "hubiera" from the verb "haber" respectively.
Your example is right, but it's not true that it's used in all subordinate clauses. For example, "Estoy buscando a la persona que escribió ese libro" (I'm looking for the person who wrote that book) does not have any verb in subjunctive mood.
The lecture will take place in classroom B15
Thank you!
I don't reflect on it. This happens in two ways:
I find reflecting much more cognitively demanding than reading, so if there is a 'next post' button or similar, I tend to keep reading.
Also, sometimes when I try to actually think about the subject, it's difficult to come up with original ideas. I often find myself explaining or convincing an imaginary person, instead of trying to see it with fresh eyes. This is something I noticed after reading the corresponding Sequence.
I guess establishing a habit of commenting would help me solve these problems.
Hello, I'm a math-cs undergrad and aspiring effective altruist, but I haven't chosen a cause yet. Since that decision is probably one of the most important ones, I should probably wait until I've become stronger.
To that end, I've read the Sequences (as well as HPMOR), and I would like to attend a CFAR workshop or similar at some point in the future. I think one of my problems is that I don't actually think that much about what I read. Do you have any advice on that?
Also, there are a couple of LWers in my college with whom I have met twice, and we would like to start organising meetups regularly. Would you please give me some karma so that I can add new meetups? (I promise I will make up for it with good contributions)
Thanks!