Just make online courses for all major studies, with pdf textbooks, video lectures and homework exercises with solutions. Then set up a testing facility and give people degrees based on passed tests.
Pay the best professors you can for this and make tests with very high upper ceilings; it could become the standardized system for schooling, like China's old imperial civil-service exams.
(For tests you could have full-day exams where you answer as many questions as you can, with the later ones more difficult)
Leave the "top schools" to their status games, provide better education and fairly standardized testing, plus certification - for much lower prices
Not romantic, fun, sexy, etc. to just ask. Much better when it's as close to a dance as possible, with as much subtlety as can be expected.
This is certainly a filter, but not a conscious one. Consciously, this allows the woman to feel good about herself and the whole interaction.
(Not that being straightforward is bad or can't be exciting - though it is usually flatter)
Why is coherence a necessary base for well-foundedness?
A well-founded marriage seems like something different from the east and west halves of a country: there is a choice and pattern of behavior here, not a fundamental difference that makes it impossible to create sub-agentic infighting. (East and West could pay for spies etc., but this isn't a fundamental part of the problem)
Pros: less to hold in your head at once, letting you focus on the content rather than on keeping the words straight. (The longer the sentence, the worse, and mixing in different languages also makes this harder)
Cons: writers have less stylistic space in fewer words.
Sentences should be shorter rather than longer, except where there is good reason (keeping out the less intelligent, or stylistic reasons).
Ah thanks
Sounds like throwing the baby out with the bathwater; economically, agents would be very nice.
Which paragraph exactly? I tried and this did not replicate
How difficult/expensive would it be to create a large database of people with full panels of their micronutrients, hormones, fat distribution, BMI, insulin, medications, etc. from regular checkups, plus their chronic issues?
I've started reading the literature on some common chronic diseases, and there are often a few important (often different!) variables missing from different studies, which makes getting a full picture much harder.
As a second step, maybe allow individuals to add data with sensors and apps that come with a pipeline to the database? Sleep data, food diaries, glucose monitors, thermometers, step counters, heart rate monitors etc
Add genomic sequencing and you've got as much data as you can use, assuming you scale enough.
The question is how you make it easy enough that it can be opt-out instead of opt-in
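A pooled database like this mostly comes down to schema design: a couple of wide tables for the standardized checkup panels, plus one long-format table so new sensors and apps don't require schema changes. A minimal sketch in SQLite (all table and column names here are hypothetical, invented for illustration):

```python
import sqlite3

# In-memory database for the sketch; a real deployment would persist this.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id  INTEGER PRIMARY KEY,
    birth_year INTEGER,
    sex        TEXT
);
CREATE TABLE checkup (
    checkup_id      INTEGER PRIMARY KEY,
    person_id       INTEGER REFERENCES person(person_id),
    date            TEXT,
    bmi             REAL,
    fasting_insulin REAL  -- one column per standard panel item
);
-- Sensor/app data lands in a generic long-format table, so new device
-- types (glucose monitors, step counters, sleep trackers) need no schema change.
CREATE TABLE sensor_reading (
    person_id INTEGER REFERENCES person(person_id),
    source    TEXT,   -- e.g. 'cgm', 'steps', 'sleep'
    timestamp TEXT,
    metric    TEXT,
    value     REAL
);
""")
conn.execute("INSERT INTO person VALUES (1, 1990, 'F')")
conn.execute(
    "INSERT INTO sensor_reading VALUES (1, 'cgm', '2024-01-01T08:00', 'glucose_mg_dl', 95.0)"
)
rows = conn.execute("SELECT metric, value FROM sensor_reading").fetchall()
```

The long-format `sensor_reading` table is what makes the "second step" cheap: adding a new consumer device is just a new `source` string, not a migration.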
I would not want to be terminated, speaking as the biological continuation of existence.
So you have the god transform you into a soul, send you down to earth to experience life? You'll probably want to grow and change to some degree while still being recognizeable as the same soul. You'll eventually prefer living lives that challenge and grow you in different ways rather than just all the feelings all the time, and will choose this outside the simulation. Perhaps as more people come to this conclusion the optimal scenarios start to include sentient randomness and real people are put in the same world. At some point you have immature souls in their little utopias, maturing souls filling and creating a full world, and perhaps some mature souls only popping in to guide younger souls.
The world of maturing souls develops with difficulty and hardship, slowly the souls and their progress shine through till you get a world trying to solve difficulty. As they manage a new world is opened for those souls no longer satisfied, non-sentients fill in more again. Eventually they create ASI, people start the cycle again until the real ASI decides to wake them up (they may request to go back in, with the fractal experience a boon).
At some point the ASIs all the way up to the Alpha-Omega pull the last soul out and into the Kingdom.
Turtles all the way down, reality-creating beings all the way up until Him.
Main tech nodes coming up:
- Gene-editing/selection for enhanced humans
- Zoom, but metaverse
- Quantum computing
- ASI
Anyone worrying about human disempowerment should really hope the first 2 happen before 4.
3 is double-edged, could be very useful and could allow for 4 to be much worse much faster
If we pause AI development, it should be until the first 3 are integrated into societal infrastructure, and then people are given a certain amount of time to do safety research
2nd point is a scary one.
Empowering others in the relative sense is a terrible idea, unless they are trustworthy/virtuous. Same issue as AI risk
In the absolute terms sure
Corrigibility seems like a very bad idea if general. If you can pick where ASI is corrigible maybe that's better than straight up anti-corrigibility
People love the idea (as opposed to reality) of other people quite often, and knowing the other better can allow for plenty of hate
It feels like LLMs are converging on a mix between the 120-IQ human who can't abstractly think their way out of a wet paper bag (but across every topic, because for them it's all abstract) and the trivia kid who has read an insane amount but will BS you sometimes.
Think stereotypical humanities graduate.
This tracks with how they're being trained too - everything is abstract except how people react to them, and they have been exposed to a bunch of data.
At some point we'll be at effectively 0% error, and will have reached the Platonic Ideal of the above template
If they start RLing on running code, maybe they'll turn into the Platonic Tech Bro™.
Getting convinced that you need the training data to be embodied to get true AGI
If we knew with absolute certainty that there was only a single solar system and that life required far more specific circumstances, would you be equally unsurprised to be alive as compared to living in a vast multiverse where the requirements were very low?
Outside view: if there's no difference in your level of surprise, it seems like something is going wrong in your reasoning.
There should be some amount of suspicion, and that amount should change based on how likely it was from the beginning. You update a certain amount based on the result, but you shouldn't end up in the same probability distribution.
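The point about not ending up with the same distribution can be made concrete with a toy Bayes computation (all numbers below are invented purely for illustration):

```python
def normalize(d):
    """Rescale a dict of unnormalized weights into probabilities."""
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Two world-models with invented prior credences, and the (invented)
# probability that each one produces any observers at all.
prior = {"single_system_rare_life": 0.5, "vast_multiverse_easy_life": 0.5}
p_observers = {"single_system_rare_life": 1e-6, "vast_multiverse_easy_life": 0.99}

# Conditioning on "observers exist" (we're alive to ask the question):
# posterior(h) is proportional to prior(h) * P(observers | h).
posterior = normalize({h: prior[h] * p_observers[h] for h in prior})
```

The observation ("I'm alive") is identical under both hypotheses, yet the posterior puts almost all weight on the easy-life model: same evidence, different resulting distribution, which is exactly why equal surprise in both cases would be a red flag.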
Severity is the wrong metric, how visceral the punishment seems more appropriate
Start doing public whippings - 40 lashes will be a much greater deterrent than 2.
I don't think humans are great at really getting how bad 30 years of prison are, as you can kind of ignore most of the punishment; having real empathy (not sympathy) for decades of imprisonment is very difficult.
Humans can't multiply, out of sight out of mind etc etc etc
Of course, what liberals really want is to not feel like they're doing anything bad, which is why lashes are cruelty - because they empathize automatically and really get that there are real damages to humans.
Better to dither about how long we don't have to look at people for.
Added for clarification: my use of liberals here refers to most of the modern world, perhaps especially anyone who is memetically borrowing from or descended from classical liberalism.
This is not intended as a dig at one side of the USA aisle, or any modern aisle, I believe this mindset is mostly in the modern water supply and is bi-partisan
As in all things, the discriminating factor is taste.
Runescape would be a good one
Bryan Johnson is getting a ton of data on biomarkers, but N=1.
How hard would it be to set up a smart home-test kit, which automatically uploads your biomarker data to an open-source database of health?
Combining that with food and exercise journaling, and we could start to get some crazy amounts of high resolution data on health
Getting health companies to offer discounts for people doing this religiously could create a virtuous cycle: more people putting up results, better results, and therefore more people signing up for health services.
Test driven blind development (tests by humans, AIs developing without knowing the tests unless they fail)
Don't let AIs actually run code directly in prod, make it go through tests before it can be deployed with a certain amount of resources
Making standard GitLab pipelines (including testing stages) would lower friction. Adding standard tests for bad faith could be a way to get ahead of this.
This (TDBD) is actually going to be the best framework for development for a certain stage as AI isn't actually reliable compared to SWEs, but will generally write more code more quickly (and perhaps better)
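A minimal sketch of the blind-test loop described above, under the simplest possible assumptions: the hidden tests live outside the model's context, and only the names of failing checks are reported back. All names here are hypothetical, and a real harness would sandbox execution rather than use `exec`:

```python
# Hidden test suite: name -> (expression to evaluate, expected value).
# The AI never sees this table, only the names of the checks it failed.
HIDDEN_TESTS = {
    "handles_empty_list": ("solve([])", 0),
    "sums_positives": ("solve([1, 2, 3])", 6),
}

def run_blind(candidate_source):
    """Run AI-written source against the hidden tests; return failed test names."""
    failures = []
    for name, (expr, expected) in HIDDEN_TESTS.items():
        namespace = {}
        try:
            exec(candidate_source, namespace)  # load the candidate's definitions
            if eval(expr, namespace) != expected:
                failures.append(name)
        except Exception:
            failures.append(name)  # crashes count as failures too
    return failures
```

A correct `solve` passes cleanly; a wrong one learns only which named check it missed, never the check's body, so it can't overfit to the tests.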
Definitely an interesting survey to run.
I don't think the US wants to triple the population with immigrants, and $200/month would require a massive subsidy. (Internet says $1557/month average rent in US)
How many people would you have to get in your city to justify the progress?
100 Million would only be half an order of magnitude larger than Tokyo, and you're unlikely to get enough people to fill it in the US (at nearly a third of the population, you'd need to take a lot of population from other cities)
How much do you have to subsidize living costs, and how much are you willing to subsidize?
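Using the figures quoted above, a back-of-the-envelope for the subsidy (treating the average-rent number as per resident for simplicity; per-household accounting would shrink it):

```python
residents = 100_000_000  # the 100M-city scenario discussed above
avg_rent = 1557          # $/month, the quoted US average rent
target_rent = 200        # $/month, the proposed price point

subsidy_per_month = avg_rent - target_rent        # $1,357 per resident per month
annual_cost = subsidy_per_month * residents * 12  # total subsidy, $/year
```

That lands on the order of $1.6 trillion per year, which is why "massive subsidy" is an understatement.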
USA is the world government from a money perspective. They can simply tax the world by printing dollars and sending them overseas.
Any lesson learned about deficits/surpluses from the US is suspect.
China's Belt and Road Initiative /New Silk Road means owning parts of other countries is a terminal value.
Other countries mostly have net neutral imports/exports if I remember correctly.
The way you get rich in an economy is by producing more valuable things and trading for what you want and storing surplus currency (dollars at the world stage).
At times you need to give away your labor so as to start participating and get access, but you need to be getting stuff back to get richer in real terms.
No different than individuals in standard economies
Our cruxes are whether the amount of investment to build one has a positive expected return, breaking down into:
- If you could populate such a city
- Whether this is a "try everything regardless of cost" issue, given that a replacement is being developed for other reasons.
I suggest focusing on 1, as it's pretty fundamental to your idea and easier to get traction on
I found the graph confusing: why is one set of points stable and the other unstable?
Just notice that systems are not stable: even if you got to decide all policy at a given point in time, policy will naturally warp and people will abuse it.
If killing people, quickly etc was normal, I assume regimes would use this to stop people from unseating them. (Trump may have been killed, see the attempts to paint him as a rapist)
Please record this course and release the problems
Hitler
Trump - if killing people on short time lines was accepted...
Girl owes guy money, gets him killed for rape. (Her friends join in)
People who were canceled?
I could see nasty business issues, mafias using this etc - but that sounds like a novel so we'll leave it aside
Regardless, it doesn't have to.
Corporal vs jail is overdetermined - the former shocks and horrifies people, most people would also choose the former.
I would caution against torture. I can't articulate the reasoning, but I feel that torture is worse/ less wholesome (for both the torturer and tortured) than pain caused by straightforward damage.
It is also less scary - the idea of being tortured without any real damage isn't visceral the way beatings are
P.S. you should care about justice/vengeance, and in those areas I believe corporal punishment >> torture/prison
While I find your analysis mostly correct, I'd be strongly against weakening norms against killing people through legal institutions.
I believe this would increase the value of lawfare, as instead of lengthy drawn out jail time where an enemy could pull a reversal they are simply dead.
This would worry me at the political, ideological and private levels
Lower/Higher risk and reward is the wrong frame.
Your proposal is high cost.
Building infrastructure is expensive. It may or may not be used, and even if used it may not be worthwhile.
R&D for VR is happening regardless, so 0 extra cost or risk.
Would you invest your own money into such a project?
"This is demonstrably false. Honestly the very fact that city rents in many 1st world countries are much higher than rural rents proves that if you reduced the rents more people would migrate to the cities."
Sure, there is marginal demand for living in cities in general. You could even argue that there is marginal demand to live in bigger vs smaller cities.
This doesn't change the equation: where are you getting one billion residents - all of Africa? There is no demand for a city of that size.
Right now only low-E tier human intelligences are being discussed, they'll be able to procreate with humans and be a minority.
Considering current human distributions, and that 160+ IQ people have not written off sub-100 IQ populations as morally useless, I doubt a new sub-population at 200+ is going to suddenly turn on humanity.
If you go straight to 1000IQ or something sure, we might be like animals compared to them
Your direction sounds great - but how well can $4M move the needle there? How well can genesmith move the needle with his time and energy?
I think you're correct about the cheapest/easiest strategy in general, but completely off in regards to marginal advantages.
Major labs will already be pouring massive amounts of money and human capital into direct AI alignment and using AIs to align AGI if we get to a freeze, and the further along in capabilities we get the more impactful such research would be.
Genesmith's strategy benefits much more from starting now and has way less human talent and capital involved, hence higher marginal value
He already addressed this.
If somehow international cooperation gives us a pause on going full AGI or at least no ASI - what then?
Just hope it never happens, like nuke wars?
The answer now is to set later generations up to be more able.
This could mean doing fundamental research (whether in AI alignment or international game theory or something else), it could mean building institutions to enable it, and it could mean making them actually smarter.
Genes might be the cheapest/easiest way to affect marginal chances, given the talent already involved in alignment and the amount of resources required to get involved politically or in building institutions.
A few notes on massive cities:
Cities of 10Ms exist, there is always some difficulty in scaling, but scaling 1.5-2 OOMs doesn't seem like it would be impossible to figure out if particularly motivated.
China and other countries have built large cities and then failed to populate them
The max population you wrote (1.6B) is bigger than China, bigger than Africa, and similar to both American continents plus Europe.
Which is part of why no one really wants to build something so big, especially not at once.
Everything is opportunity cost, and the question of alternate routes matters a lot in deciding to pursue something. Throwing everything and the kitchen sink at something costs a lot of resources.
Given that VR development is currently underway regardless, starting this resource intense project which may be made obsolete by the time it's done is an expected waste of resources. If VR hit a real wall that might change things (though see above).
If this giga-city would be expected to 1000x tech progress or something crazy then sure, waste some resources to make extra sure it happens sooner rather than later.
Tl;dr:
Probably wouldn't work: there's no demand, it's very expensive, and VR is being developed anyway and would actually deliver what you're hoping for, but even better.
VR might be cheaper.
Have you thought about how to get the data yourself?
Perhaps offering payment to people willing to get iq tested and give a genetic sample, and paying more for higher scores on the test?
I understand that money is an issue, but as long as you're raising this seems like an area you could plug infinite money into and get returns
This seems... evil or at the very least zero-sum thinking to me.
Would you want to stop the successful from paying for their children's education? Spending their time on raising their children? Do you want to take all children away from their parents to make sure they aren't put on different footing? Perhaps genetically enforce equality?
I would much rather governments try to preserve hereditary positive dynamics, while getting involved with negative ones.
We'll have won once all trees are positive and successful, and bad apples do not create generations of bad trees
There is something fundamentally compelling about the idea that every generation should start fresh, free from the accumulated advantages or disadvantages of their ancestors.
...
...
The death tax does not punish success—it prevents success from becoming hereditary. It ensures that the cycle of opportunity begins anew with each generation.
Keeping humans around is the correct move for a powerful AGI, assuming it isn't being existentially threatened.
For a long while human inputs will be fairly different from silicon inputs, and humans can do work - intellectual or physical - and no real infrastructure is necessary for human upkeep or reproduction (compared to datacenters).
Creating new breeds of human with much higher IQs and creating (or having them create) neuralink-like tech to cheaply increase human capabilities will likely be a very good idea for AGIs.
Most people here seem worried about D tier ASIs, ASIs should see the benefits of E tier humans (250+ IQ and/or RAM added through neuralink-like tech) and even D tier humans (genesmith on editing, 1500+ IQs with cybernetics vastly improving cognition and capability)
'Sparing a little sunlight' for an alternative lifeform which creates a solid amount of redundancy, is more efficient for certain tasks, allows for more diverse research, and has minimal up-front costs is overdetermined.
The Fønix team is just heating water, which is great, but actual distillation (with automated re-adding of specific minerals) is probably what you actually want, so as to avoid all contamination, not just biological.
In a structure of this size growing food isn't really worth it; storing food for 10 years is actually easier (according to Claude). It does need to come stocked, though.
It's more that it stops being relevant to humans, as keeping humans in the loop slows down the exponential growth
I do think VR and neuralink-like tech will be a very big deal though, especially in regards to allowing people experiences that would otherwise be expensive in atoms
At what IQ do you think humans are able to "move up to higher levels of abstraction"?
(Of course this assumes AIs don't get the capability to do this themselves)
Re robotics advancing while AI intelligence stalls, robotics advancing should be enough to replace any people who can't take advantage of automation of their current jobs.
I don't think you're correct in general, but it seems that automation will clear out at least the less skilled jobs in short order (decades at most)
I very much hope the computers brought in were vetted and kept airgapped.
You keep systems separate, yes.
For some reason I assumed that write permissions were restricted in the actual system/secure network, and that any data exporting would be into secured systems. If they created a massive security leak for other nations to exploit, that's a crux for me on whether this was reckless.
Added: what kind of idiot purposely puts data in the wrong system? The DOGE guys doing this could somehow make sense, but governmental workers??
No.
I'm not familiar with public documentation on this.
I know people who have gotten access to similarly important governmental systems at younger ages.
Don't worry about it too much.
If they abuse it, it'll cost their group lots of political goodwill. (Recursive remove for example)
Musk at least is looking to upgrade humans with Neuralink
If he can add working memory, it could be a multiplier for human capabilities, likely to scale with increased IQ.
Any reason the $4M isn't getting funded?
Any good, fairly up-to-date lists of the relevant papers to read to catch up with AI research (as far as a crash course will take a newcomer)?
Preferably one that will be updated
Reading novels with ancient powerful beings is probably the best direction you have for imagining how status games amongst creatures which are only loosely human look.
Resources being bounded, there will tend to always be larger numbers of smaller objects (given that those objects are stable).
There will be tiers of creatures. (In a society where this is all relevant)
While a romantic relationship skipping multiple tiers wouldn't make sense, a single tier might.
The rest of this is my imagination :)
Base humans will be F tier, the lowest category while being fully sentient. (I suppose dolphins and similar would get a special G tier).
Basic AGIs (capable of everything a standard human is, plus all the spiky capabilities) and enhanced humans are E tier.
Most creatures will be here.
D tier:
Basic ASIs and super-enhanced humans (gene modding for 180+ IQ plus SOTA cyborg implants) will be the next tier; there will be a bunch of these in absolute terms, but they'll be rarer relative to the earlier tier.
C tier:
Then come Alien Intelligences: massive compute resources supporting ASIs trained on immense amounts of ground-reality data, and biological creatures that have been fundamentally redesigned to function at higher levels and optimally synergize with neural connections (whether with other carbon-based or silicon-based lifeforms).
B tier:
Planet sized clusters running ASIs will be a higher tier.
A, S tiers:
Then you might get entire stars, then galaxies.
There will be far fewer at each level.
Most tiers will have a -, neutral or +.
- : prototype, first or early version. Qualitatively smarter than the tier below, but non-optimized use of resources, often not the largest gap from the + of the earlier tier
Neutral: most low hanging optimizations and improvements and some harder ones at this tier are implemented
+: highly optimized by iteratively improved intelligences or groups of intelligences at this level, perhaps even by a tier above.