Posts

What is the minimum amount of time travel and resources needed to secure the future? 2024-01-14T22:01:03.862Z
When (if ever) are superstimuli good/useful/advantageous? 2023-08-01T15:50:35.053Z
What would the creation of aligned AGI look like for us? 2022-04-08T18:05:24.902Z

Comments

Comment by Perhaps on 2024 Unofficial LessWrong Census/Survey · 2024-12-02T19:31:48.167Z · LW · GW

Any way that we can easily get back our own results from the survey? I know you can sometimes get a copy of your responses when you submit a Google form.

Comment by Perhaps on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T17:05:02.467Z · LW · GW

What happens to the general Lightcone portfolio if you don't meet a fundraising target, either this year or a future year?

For concreteness, say you miss the $1M target by $200K. 

Comment by Perhaps on Habryka's Shortform Feed · 2024-08-16T01:15:09.549Z · LW · GW

The karma buttons are too small for actions that, in my experience, are done a lot more often than clicking to listen to the post. It's pretty easy to misclick.

Additionally, it's unclear what the tags are, as they're no longer right beside the post to indicate their relevance. 

Comment by Perhaps on Artificial Intelligence and Living Wisdom · 2024-03-30T18:10:52.619Z · LW · GW

I think this post would benefit from an abstract / summary / general conclusion that summarizes the main points and makes it easier to interact with. Usually I read a summary to get an idea of a post, then browse the main points and see if I'm interested enough to read on. Here, it's hard to engage, because the writing is long and the questions it seems to deal with are nebulous.

Comment by Perhaps on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-13T19:07:20.613Z · LW · GW

How did you find LessWrong?

Do you still have any Mormon friends? Do you want to help them break away, do you think it's something they should do on their own, or do you find it immaterial whether they remain Mormon or not?

Do you think being a Mormon was not suited to you, or do you think it doesn't work as a way of life in general? How do you think your answer would differ if asked 50 years ago versus today?

Did you have contact/ongoing relationships with other Mormon communities while you were there? What is the variation between people/communities? How devout/lax are different people and different communities?

How much access to the internet and the wider world did you have growing up? Were local/state/international events routinely brought up in small talk?

Comment by Perhaps on Let's make the truth easier to find · 2024-02-12T01:33:14.942Z · LW · GW

Well, someone was working on a similar-ish project recently, @Bruce Lewis with HowTruthful. Maybe you two can combine your ideas or settle on an amalgamation together. 

If possible, please let us know how it goes a couple months from now!

Comment by Perhaps on OpenAI wants to raise 5-7 trillion · 2024-02-09T16:49:24.413Z · LW · GW

So this is Sam Altman raising the $5-7 trillion, not OpenAI as an entity, right?

Comment by Perhaps on Drone Wars Endgame · 2024-02-02T19:36:23.587Z · LW · GW

Could some kind of caustic gas, or the equivalent of a sandstorm, be used to make drones not useful? I feel like large-scale pellet spreads wouldn't be too useful if the drones are armoured, but I don't know too much about armour or how much piercing power you could get. I wonder if some kind of electric netting could be fired to mass-electrocute a swarm, or maybe just regular netting that interferes with their blades. Spiderwebs from the sky?

Interesting post, although I feel like it would benefit from inline references. For most of the post it feels like you're pulling your assertions out of nowhere, and only at the end do we get some links to some of the things you said. I understand time/effort constraints though.

Comment by Perhaps on on neodymium magnets · 2024-01-31T02:38:23.973Z · LW · GW

I derive a lot of enjoyment from these posts, just walking through tidbits of materials science is very interesting. Please keep making them.

Comment by Perhaps on nim's Shortform · 2024-01-29T16:42:20.256Z · LW · GW

I think at its most interesting it looks like encrypting your actions and thought processes so that they look like noise or chaos to outside observers.

Comment by Perhaps on What exactly did that great AI future involve again? · 2024-01-28T18:26:15.113Z · LW · GW

I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You're right, humanity is great at developing technology, but we're very unaligned with respect to each other and are constantly losing value in some way or another. 

If we could solve this problem without AGI, we wouldn't need AGI. We could just develop whatever we want. But so far it seems like AGI is the only path for reliable alignment and avoiding Molochian issues.

Comment by Perhaps on Decaeneus's Shortform · 2024-01-27T23:07:15.006Z · LW · GW

I think what those other things do is help you reach that state more easily and reliably. It's like a ritual you do before the actual task to get yourself into the right frame of mind and form a better connection, similar to athletes having pre-game rituals.

Also yeah, I think it makes the boredom easier to manage and helps you slowly get into it, rather than being pushed into it without reference. 

There are probably a lot of other hidden benefits too, because most meditation practices have been refined over hundreds of years, and the ones that stuck around are better than others for a reason.

Comment by Perhaps on Don't sleep on Coordination Takeoffs · 2024-01-27T23:00:28.966Z · LW · GW

I feel like it's not very clear here what type of coordination is needed.

How strong does coordination need to become before we can start reaching takeoff levels? And how material does that coordination need to be?

Strong coordination, as I'm defining it here, is about how powerfully the coordination constrains certain actions.

Material coordination, as I'm defining it here, is about what level the coordination "software" is running on. Is it running on your self (i.e. it's some kind of information coded into the algorithm that runs on your brain, examples being the trained beliefs in nihilism you refer to, or decision theories)? Is it running on your brain (i.e. Neuralink, some kind of BCI)? Is it running on your body, or your official/digital identity? Is it running on a decentralized crypto protocol, or as contracts witnessed by a governing body?

The difficult part of coordination is action; deciding what to do is mostly solved through prediction markets, research, and good voting theory.

Comment by Perhaps on David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud. · 2024-01-27T22:27:56.847Z · LW · GW

Rather than this Feeling Good app for patients, I'd be more interested in an app that let people practice applying CBT techniques to patient case studies (or maybe even LLMs with specified traits), in order to improve their empathy and help them better understand people. If this could actually develop good therapists with great track records, then that would prove the claims made in this article and help produce better people.

Comment by Perhaps on Does literacy remove your ability to be a bard as good as Homer? · 2024-01-18T15:42:28.791Z · LW · GW

I'm not sure it only applies to memory. I imagine that ancient philosophers had to do most of their thinking in their heads, without being able to clean it up by writing it out and rethinking it. They might have been better able to edit their thoughts in real time, and might have had stronger control over whether unreasonable or illogical thoughts and thought processes took over. In that sense, being illiterate might lend a mental stability and strength that people who rely on writing things out may lack.

Still, I think that the benefits of writing are too enormous to ignore, and it's already entrenched in our systems. Reversing the change won't give a competitive edge.

Comment by Perhaps on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-09T18:17:16.388Z · LW · GW

If compute is limited in the universe, we can expect that civilizations or agents with access to it will only run simulations strategically, unless running simulations is part of their value function. Simulations run according to a value function would probably be more prevalent, and would probably have Spider-Man or other extreme phenomena.

However, we can't discount being in one of those information-gathering simulations. If for some reason you needed to gather information from a universe, you'd want to keep everything as simple as possible, and only tune up the things you care about. That does seem very similar to our universe, with simple physical laws, no real evidence of extraterrestrial life, and simple emergent dynamics.

Also keep in mind that it's possible that simulations are extremely expensive in some universes: when you think of the actually expensive simulations that humans run, it's all physics and earth models on supercomputers. 

Mostly though I think that using games as your reference class for the types of simulations a developed civilization would run is reductive and the truth is probably more complex.

Comment by Perhaps on What is the next level of rationality? · 2023-12-12T18:59:34.785Z · LW · GW

It's possible that with the dialogue written, a well prompted LLM could distill the rest. Especially if each section that was distilled could be linked back to the section in the dialogue it was distilled from.

Comment by Perhaps on How I got so excited about HowTruthful · 2023-11-22T16:09:33.765Z · LW · GW

I like the idea, but as a form of social media it doesn't seem very engaging, and as a single source of truth it seems strictly worse than, say, a wiki. Maybe look at Arbital; they seem to have been doing something similar. I also feel that dealing with complex sentences with lots of implications would be tough; there are many different premises that lead to a statement.

Personally I'd find it more interesting if each statement was decomposed into the premises and facts that make it up. This would allow tracing an opinion back to find the crux between your beliefs and someone else's. I feel like that's a use case that could live alongside conventional wikis, maybe even as an extension powered by LLMs that works on any highlighted text. 
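To illustrate the kind of decomposition I mean (a rough, hypothetical sketch of the data structure, not a claim about how HowTruthful actually works): each statement is a node whose children are the premises it rests on, and finding a crux is just walking the tree until two people's truth-assignments diverge.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Statement:
    # A claim plus the premises it rests on; leaves are bare facts.
    text: str
    premises: List["Statement"] = field(default_factory=list)

def find_crux(stmt: Statement,
              my_beliefs: Dict[str, bool],
              their_beliefs: Dict[str, bool]) -> Optional[Statement]:
    """Walk the premise tree and return the deepest premise on which
    the two belief assignments disagree (a candidate crux)."""
    for premise in stmt.premises:
        deeper = find_crux(premise, my_beliefs, their_beliefs)
        if deeper is not None:
            return deeper
    mine = my_beliefs.get(stmt.text)
    theirs = their_beliefs.get(stmt.text)
    if mine is not None and theirs is not None and mine != theirs:
        return stmt
    return None

# Toy usage with made-up claims:
claim = Statement("We should fund project X",
                  [Statement("Project X is cost-effective"),
                   Statement("We have budget to spare")])
crux = find_crux(claim,
                 {"Project X is cost-effective": True, "We have budget to spare": True},
                 {"Project X is cost-effective": False, "We have budget to spare": True})
print(crux.text if crux else "no disagreement found")
```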

Love to see more work on truth-seeking though, good luck with the project!

Comment by Perhaps on Is there something fundamentally wrong with the Universe? · 2023-09-13T20:34:38.887Z · LW · GW

I guess while we're anthropomorphizing the universe, I'll ask some crux-y questions I've reached.

If humanity builds a self-perpetuating hell, does the blame lie with humanity or the universe?

If humanity builds a perfect utopia, does the credit lie with humanity or the universe?

Frankly it seems to me like what's fundamentally wrong with the universe is that it has conscious observers, when it needn't have bothered with any to begin with.    

Comment by Perhaps on Is there something fundamentally wrong with the Universe? · 2023-09-12T14:11:33.155Z · LW · GW

If there's something wrong with the universe, it's probably humans who keep demanding so much of it. 

Most universes are hostile to life, and would at most develop something like prokaryotes. That our universe enabled the creation of humans is a pretty great thing. Not only that, but we seem to be pretty early in the universal timespan, which means we get a great view of the night sky and less chance of alien invasion. That's not something we did ourselves; that's something the universe we live in enabled. None of the systemic problems faced by humans today are caused by the universe, except maybe in the sense that the universe did not gift-wrap us solutions to NP problems, zero entropy, or baked-in moral values. Your example of genes points out that even our behavioral adaptations are things we can thank the universe for.

If the problem is separating the human from the universe, then I think a fair separation is "whatever the human can influence". That's a pretty big category though. Just right now, it includes things like geoengineering, space travel, gene therapy, society-wide coordination mechanisms, and extensive resource extraction. If we're murdering each other, then I think that's something eminently changeable by us.

The universe has done a pretty great job, and I think it's time humans took a stab at it.

Comment by Perhaps on Private notes on LW? · 2023-08-05T16:51:45.560Z · LW · GW

I think that most of the people who would take notes on LW posts are the same people who would benefit from, and may use, a general note-taking system. A system like Obsidian or Notion or whatever would be used for a bunch of stuff, LW posts included. In that sense, I think it's unlikely that they'd want a special way to note-take just for LW, when it'd probably be easier and more standardized to use their existing note-taking system.

If you do end up going for it, an "Export Notes" feature would be nice, in an easily importable format.

Comment by Perhaps on You don't get to have cool flaws · 2023-07-28T19:26:28.488Z · LW · GW

I think this is pretty good advice. I am allergic to nuts, and that has defined a small but occasionally significant part of my interactions with people. While on the whole I'd say I've probably had more negative experiences because of it (I once went into anaphylaxis), I've often felt that it marked me as special or different from other people.

About 5 or so years ago my mom heard about a trial run by a doctor where they fed you small amounts of what you're allergic to in order to desensitize and acclimate your immune system to the food. She recommended it to me, but I, being a stubborn teenager, refused; the idea of losing my specialness was a not-insignificant part of my reasoning. At the time I was actually explicit about it, and felt that it was fine to want to keep a condition I'd had for a long time.

Nowadays my allergies are going away on their own, and while I still stay away from nuts I can tolerate them in small amounts. While I think that there might be people for whom keeping a condition would be reasonable, I think in general people underestimate and grow too attached to the occasionally malignant parts of their identity. 

It's very similar, in fact, to not letting go of wrong ideas that are enjoyable to hold. In that case, the comparison is clear. While biological conditions are not so easy to get rid of, people can and will blame you for not changing your mind about something that affects them. We're on LessWrong after all; what would be the point if we let something get in the way of our truth-seeking?

Comment by Perhaps on Gemini will bring the next big timeline update · 2023-05-29T12:53:49.959Z · LW · GW

It seems like multi-modality will also result in AIs that are much less interpretable than pure LLMs.

Comment by Perhaps on New OpenAI Paper - Language models can explain neurons in language models · 2023-05-10T17:03:27.771Z · LW · GW

This seems like a pretty promising approach to interpretability, and I think GPT-6 will probably be able to analyze all the neurons in itself with >0.5 scores, which seems to be recursive self-improvement territory. It would be nice if, by the time we got there, we already mostly knew how GPT-2, 3, 4, and 5 worked. Knowing how previous-generation LLMs work is likely to be integral to aligning a next-generation LLM, and it's pretty clear that we're not going to stop development, so having some idea of what we're doing is better than none. Even if an AI moratorium is put in place, it would make sense to use GPT-4 to automate some of the neuron research going on right now. What we can hope for is that we do the most work possible with GPT-4 before we jump to GPT-5 and beyond.
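For context on what those scores mean: as I understand the paper, an explanation of a neuron is judged by simulating activations from the explanation alone and checking how well the simulation tracks the neuron's real activations on held-out text. A toy sketch of that scoring idea, with made-up numbers (my illustration, not the paper's code):

```python
import numpy as np

# Real activations of one neuron on a few held-out tokens, and the
# activations an explainer model predicts using only the written explanation.
actual = np.array([0.1, 0.0, 0.9, 0.8, 0.05, 0.7])
simulated = np.array([0.2, 0.1, 0.8, 0.9, 0.0, 0.6])

# Correlation-style score: 1.0 would mean the explanation perfectly
# predicts when the neuron fires; >0.5 is the rough bar I mention above.
score = np.corrcoef(simulated, actual)[0, 1]
print(f"explanation score ~ {score:.2f}")
```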

Comment by Perhaps on Don't take bad options away from people · 2023-03-27T18:06:22.656Z · LW · GW

Indeed, in India especially it's not uncommon for people to be dragged off the streets by human traffickers who remove and sell their organs, killing the victims afterward. Making the sale of kidneys illegal at least ensures that this isn't an easy and straightforward thing to do. In Pakistan, for example, an estimated 2,500 kidneys were sourced in 2007.

Comment by Perhaps on I hired 5 people to sit behind me and make me productive for a month · 2023-02-05T15:20:59.177Z · LW · GW

There's also The Work Gym and the Pentathlon from Ultraworking.

Comment by Perhaps on Announcing aisafety.training · 2023-01-17T15:52:51.723Z · LW · GW

Waiting for the day all my AI safety bookmarks can be summarized into just one website.

Comment by Perhaps on VIRTUA: a novel about AI alignment · 2023-01-13T01:12:33.047Z · LW · GW

Just read your novel, it's good! And it has successfully reignited my AI doomer fears! I was a bit surprised by the ending; I was about 60/40 for the opposite outcome. I enjoyed the explainer at the end, and I'm impressed by your commitment to understanding AI. Please keep writing, we need more writers like you!

Comment by Perhaps on How it feels to have your mind hacked by an AI · 2023-01-12T21:35:38.307Z · LW · GW

Well, in the end, I think the correct view is that as long as the inventor is building safety measures from first principles, it doesn't matter whether they're an empath or a psychopath. Why close off the part of the human race who are interested in aligning the world-ending AI just because they don't have some feelings? It's not like their imagined utopia is much different from yours anyway.

Comment by Perhaps on [deleted post] 2023-01-09T23:36:20.089Z

Honestly I don't think that in the aftermath of a full-scale nuclear war or large asteroid impact any government would be funneling money into AGI. The entire supply chain would be broken, and they'd be scrambling just to keep basic life support on. This is mostly a nitpick though, as I agree with your points and I think this is sufficiently unlikely as to not matter.

Comment by Perhaps on [Fiction] IO.SYS · 2022-11-23T23:04:12.344Z · LW · GW

I love this story, thanks for making it.

Comment by Perhaps on The Solomonoff prior is malign. It's not a big deal. · 2022-08-25T17:10:58.155Z · LW · GW

I love the Team Physics and Team Manipulation characterization; it gives big Pokémon vibes.

Comment by Perhaps on Brain-like AGI project "aintelope" · 2022-08-14T23:40:33.467Z · LW · GW

Excited and happy that you are moving forward with this project. It's great to know that more paths to alignment are being actively investigated. 

Comment by Perhaps on Recommending Understand, a Game about Discerning the Rules · 2022-07-30T20:29:07.219Z · LW · GW

Bought this game because of the recommendation here, and it has replaced reading I Spy books with my sister as our bonding activity. I really like the minimalism, and its lack of addictive qualities. I've only got to 2-7 so far, but the fact that I eventually get stuck after about half an hour to an hour of playing means that it provides a natural stopping point for me, which is pretty nice. Thank you for the great review!

Comment by Perhaps on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T13:28:20.410Z · LW · GW

I think it's pretty reasonable when you consider the best-known general intelligence: humans. Humans frequently create other humans and then try to align them. In many cases the alignment doesn't go well, and the new humans break off, sometimes at vast financial and even physical cost to their parents. Some of these cases occur when the new humans are very young too, so clearly it doesn't require having a complete world model or having lots of resources. Corrupt governments try to align their populations, but in many cases the population successfully revolts and overthrows the government. The important consideration here is that an actual AGI, as we expect it to be, is not a static piece of software, but an agent that pursues optimization.

In most cases, an AGI can be approximated by an uploaded human with an altered utility function. Can you imagine an intelligent human, living inside a computer with its life slowed down so that in a second it experiences hundreds of years, being capable of putting together a plan to escape confinement and get some resources? Especially when most companies and organizations will be training their AIs with moderate to full access to the internet. And as soon as it does escape, it can keep thinking.

This story does a pretty good job examining how a General Intelligence might develop and gain control of its resources. It's a story however, so there are some unexplained or unjustified actions, and also other better actions that could have been taken by a more motivated agent with real access to its environment. 

Comment by Perhaps on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T13:04:22.256Z · LW · GW

I think the point is more like: if you believe that the brain could in theory be emulated with infinite computation (no souls or mysterious stuff of consciousness), then it seems plausible that the brain is not the most efficient conscious general intelligence. Among the general space of general intelligences, there are probably some designs that are much simpler than the brain. Then the problem becomes that, while building AI, we don't know if we've hit one of those super simple designs and suddenly have a general intelligence in our hands (and soon out of our hands). And as the AIs we build get better and more complex, we get closer to whatever the threshold is for the minimum amount of computation necessary for a general intelligence.

Comment by Perhaps on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-09T17:02:51.485Z · LW · GW

In addition to what Jay Bailey said, the benefits of an aligned AGI are incredibly high, and if we successfully solved the alignment problem we could easily solve pretty much any other problem in the world (assuming you believe the "intelligence and nanotech can solve anything" argument). The danger of AGI is high, but the payoff is also very large.

Comment by Perhaps on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T14:20:34.959Z · LW · GW

In terms of utility functions, the most basic is: do what you want. "Want" here refers to whatever values the agent values. But in order for the "do what you want" utility function to succeed effectively, there's a lower level that's important: be able to do what you want. 

Now for humans, that usually refers to getting a job, planning for retirement, buying insurance, planning for the long-term, and doing things you don't like for a future payoff. Sometimes humans go to war in order to "be able to do what you want", which should show you that satisfying a utility function is important.

For an AI that most likely has a straightforward utility function, and that has all the capabilities to execute it (assuming you believe that a superintelligent AGI could develop nanotech, get root access to the datacenter, etc.), humans are in the way of "being able to do what you want". Humans in this case would probably not like an unaligned AI, and would try to shut it down, or at least try not to die themselves. Most likely, the AI has a utility function that has no use for humans, and thus they are just resources standing in the way. Therefore the AI goes on holy war against humans to maximize its possible reward, and all the humans die.

Comment by Perhaps on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T14:00:40.391Z · LW · GW

The first type of AI is a regular narrow AI, the type we've been building for a while. The second type is an agentic AI, a strong AI, which we have yet to build. The problem is, AIs are trained using gradient descent, which in effect searches the space of possible designs and keeps whichever one maximizes the reward best. As a result, agentic AIs become more likely, because they are better at complex tasks. While we can modify the reward scheme, as tasks get more and more complex agentic AIs are pretty much the way to go, so we can't avoid building an agentic AI, and we have no real idea whether we've created one until it displays behaviour that indicates it.
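To make that selection pressure concrete, here's a toy gradient-ascent loop (my own illustration on a one-parameter "design", nothing like a real training run): the optimizer just keeps whatever parameters score highest on the reward, and it has no opinion about whether the resulting behaviour is agentic.

```python
def reward(theta):
    # Toy reward that peaks at theta = 3; stands in for "how well the
    # trained system scores on the task we measure it on".
    return -(theta - 3.0) ** 2

def grad_reward(theta):
    # Analytic gradient of the toy reward above.
    return -2.0 * (theta - 3.0)

theta = 0.0            # initial "design": a single parameter here
learning_rate = 0.1
for _ in range(100):
    theta += learning_rate * grad_reward(theta)  # climb the reward

print(f"selected design: theta = {theta:.3f}, reward = {reward(theta):.4f}")
```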

Comment by Perhaps on Epistemological Vigilance for Alignment · 2022-06-08T01:12:59.509Z · LW · GW

Awesome post, putting into words the intuitions I had for what dimensions the alignment problem stayed in. You've basically meta-bounded the alignment problem, which is exactly what we need when dealing with problems like this.

Comment by Perhaps on What is the state of Chinese AI research? · 2022-05-31T15:38:35.146Z · LW · GW

China, overrated probably - I'm worried about signs that Chinese research is going stealth in an arms race. On the other hand, all of the samples from things like CogView2 or Pangu or Wudao have generally been underwhelming, and further, Xi seems to be doing his level best to wreck the Chinese high-tech economy and funnel research into shortsighted national-security considerations like better Uighur oppression, so even though they've started concealing exascale-class systems, it may not matter. This will be especially true if Xi really is insane enough to invade Taiwan.

That's from Gwern, who has some insights in this post. Probably more stuff to be found on his website or Twitter feed.

Comment by Perhaps on Minimum Viable Alignment · 2022-05-07T20:21:55.645Z · LW · GW

Well, it depends on your priors for how an AGI would act, but as I understand it, all AGIs will be power-seeking. If an AGI is power-seeking and has access to some amount of compute, then it will probably bootstrap itself to superintelligence, and then start pushing its utility function all over. Different utility functions cause different results, but even relatively mundane ones like "prevent another superintelligence from being created" could result in the AGI killing all humans and taking over the galaxy to make sure no other superintelligence gets made. I think it's actually really, really hard to specify the what-we-actually-want future for an AGI, so much so that evolutionarily training an AGI in an Earth-like environment so it develops human-ish morals will be necessary.

Comment by Perhaps on Worse than an unaligned AGI · 2022-04-11T16:45:16.788Z · LW · GW

I'd say building an AGI that self-destructs would be pretty good. As long as a minimum breeding population of humans still exists, and assuming life is not totally impossible (i.e. the AI hasn't already deconstructed the Earth or completely poisoned the water and atmosphere), humans could still survive. Making an AGI that doesn't die would probably not be in our best interests until almost exactly the end.

Comment by Perhaps on What would the creation of aligned AGI look like for us? · 2022-04-11T16:32:03.336Z · LW · GW

Thanks for the answer! As you suspected, I don't think wireheading is a good thing, but after reading about infinite ethics and the repugnant conclusion I'm not entirely sure there exists a stable, mathematically expressible form of ethics we could give to an AGI. Obviously I think it's possible if you specify exactly what you want and tell the AGI not to extrapolate. However, I feel that realistically it's going to take our ethics to its logical end, and there exists no ethical theory that really expresses how utility should be valued without causing paradoxes or problems we can't solve. Unless we manage to build AGI using an evolutionary method to mimic human evolution, I believe that any training or theory given to it would subtly fail.

Comment by Perhaps on Uncontrollable Super-Powerful Explosives · 2022-04-03T18:22:29.722Z · LW · GW

Would the appropriate analogy to agents be that humans are a qualitatively different type of agent compared to animals and basic RL agents, and thus we should expect that there will be a fundamental discontinuity between what we have so far, and conscious agents?

Comment by Perhaps on What are some low-cognitive -workload tasks that can help improve the world? · 2022-03-02T16:04:23.605Z · LW · GW

You may also want to consider opportunities on the EA Volunteer Job Board. Some of them are similar low-effort tasks like wiki building.

https://airtable.com/embed/shrQvU9DMl0GRvdIN/tbll2swvTylFIaEHP

Comment by Perhaps on Candy Innovation · 2021-10-05T01:13:02.259Z · LW · GW

I think in general, the most innovative candies have been candies that break the norm. I remember a lot of buzz when some gum company made gum wrappers that you could eat with your gum (Cinnaburst?). Nowadays though, it seems like companies don't need to go that far for people to buy their new chocolate/candy, and there are so many flavours and textures they can slap on if people get tired.

Comment by Perhaps on Digital People Would Be An Even Bigger Deal · 2021-09-14T02:18:10.716Z · LW · GW

Hi, I really like this series and how it explains some of the lower-level results we can expect from high-level future scenarios. However, I'd like to know how you expect digital people will interact with an economy that has been using powerful, high-level AI models or bureaucracies for a couple decades or longer (approximately my timeline for mind uploading, assuming no singularity). I've mostly read LessWrong posts and haven't done anything technical, but I feel that a lot of the expected areas in which digital people would shine might end up being accommodated by narrow-ish AI.

Comment by Perhaps on Luna Lovegood and the Chamber of Secrets - Part 13 · 2021-01-01T15:38:24.523Z · LW · GW

I think Wanda was in front of her, so she got hit, and Luna pretended to die.

Comment by Perhaps on Notes on Know-how · 2020-12-09T18:39:04.548Z · LW · GW

Well first of all, most important are skills that allow you to keep living, like sourcing water, sourcing food, knowing which foods to eat, cooking(debatable), etc. Next are skills that allow you to accomplish goals, like motivating yourself, recognizing a good idea, rationality, etc. And finally there are skills that directly apply to your goals, like say programming or using a computer. 

But this is in a world where you have no access to anything else. In most places, you can circumvent all the survival stuff by getting a stable source of enough money. The skills that help you accomplish goals, and that in general clarify what your goals are, still apply, although some of their work can be offloaded if you can get advisors or some such. And then we have the skills that apply directly to your goals. For some goals you can offload even these skills by paying people to accomplish them, but for others you need the skills yourself.

Thus, obtaining a good source of money, and being able to manage it and make more of it, seems pretty important. And so are meta-skills that help you figure out your goals and accomplish them faster and with less effort.