Posts

ARIA's Safeguarded AI grant program is accepting applications for Technical Area 1.1 until May 28th 2024-05-22T06:54:55.206Z
Safety-First Agents/Architectures Are a Promising Path to Safe AGI 2023-08-06T08:02:30.072Z
Notes on the importance and implementation of safety-first cognitive architectures for AI 2023-05-11T10:03:38.721Z
Can You Give Support or Feedback for My Program to Alleviate Poverty? 2015-06-25T23:18:11.130Z
Positive Book and Other Media Recommendations for a Teen Audience 2014-10-12T06:10:01.251Z
Effective Rationality Training Online 2013-08-10T01:58:14.667Z

Comments

Comment by Brendon_Wong on Which paths to powerful AI should be boosted? · 2024-09-19T23:15:37.493Z · LW · GW

Unfortunately I see this question didn’t get much engagement when it was originally posted, but I’m going to put a vote in for highly federated systems along the axes of agency, cognitive processes, and thinking, especially those that maximize transparency and determinism. I think that LM agents are just a first step into this area of safety. I write more about this here: https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to

For specific proposals I’d recommend Drexler’s work on federating agency https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model and federating cognitive processes (memory) https://www.lesswrong.com/posts/FKE6cAzQxEK4QH9fC/qnr-prospects-are-important-for-ai-alignment-research

Comment by Brendon_Wong on What does it look like for AI to significantly improve human coordination, before superintelligence? · 2024-01-15T20:34:29.025Z · LW · GW

Not sure the extent to which this falls under “coordination tech” but are you familiar with work in collective intelligence? This article has some examples of existing work and future directions: https://www.wired.com/story/collective-intelligence-democracy/. Notably, it covers enhancements in expressing preferences (quadratic voting), prediction (prediction markets), representation (liquid democracy), consensus in groups (Polis), and aggregating knowledge (Wikipedia).

As you reference above, there’s non-AI collective action tech: https://foresight.org/a-simple-secure-coordination-platform-for-collective-action/

In the area of cognitive architectures, the open agency proposals contain governance tech, like Drexler’s original Open Agency model (https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model), Davidad’s dramatically more complex Open Agency Architecture (https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation), and the recently proposed Gaia Network (https://www.lesswrong.com/posts/AKBkDNeFLZxaMqjQG/gaia-network-a-practical-incremental-pathway-to-open-agency).

The main way I look at this is that software can greatly boost collective intelligence (CI), and one part of collective intelligence is coordination. Collective intelligence seems really underexplored, and I think there are very promising ways to improve it. More on my plan for CI + AGI here if of interest: https://www.web10.ai/p/web-10-in-under-10-minutes

While I think CI can be useful for things like AI governance, I think collective intelligence is actually very related to AI safety in the context of a cognitive architecture (CA). CI can be used to federate responsibilities in a cognitive architecture, including AI systems reviewing other AI systems as you mention. It can be used to enhance human control and participation in a CA, including allowing humans to set the goals of a cognitive architecture–based system, perform the thinking and acting in a CA, and participate in the oversight and evaluation of the granular and high-level operation of a CA. I write more on the safety aspects here if you're interested: https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to

In my view, it is optimal to integrate CI and AI together in the same federated cognitive architecture, but CI systems can themselves be superintelligent, and that could be useful for developing and working with safe artificial superintelligence (including AI to help with primarily human-orchestrated CI, which blurs the line between CI and a combined human-AI cognitive architecture).

I see certain AI developments as boosting the same underlying tech required for next-level collective intelligence (modeling reasoning, for example, which would fall under symbolic AI) and augmenting collective intelligence (e.g. helping to identify areas of consensus in a more automated manner, like: https://ai.objectives.institute/talk-to-the-city).

I think many examples of AI engagement in CI and CA boil down to translating information from humans into various forms of unstructured, semi-structured, and structured data (my preference is for the latter, which I view as pretty crucial in next-gen cognitive architecture and CI systems), which are used to perform many functions, from identifying each person's preferences and existing beliefs to performing planning to conducting evaluations.

Comment by Brendon_Wong on Some for-profit AI alignment org ideas · 2023-12-14T20:08:13.393Z · LW · GW

This is an interesting point. I also feel like the governance model of the org and a culture of mission alignment with increasing safety are important, in addition to the exact nature of the business and business model at the time the startup is founded. Looking at your examples, perhaps by "business model" you are referring both to what brings money in and to the overall governance/decision-making model of the organization?

Comment by Brendon_Wong on Some for-profit AI alignment org ideas · 2023-12-14T16:32:17.343Z · LW · GW

Great article! Just reached out. A couple of ideas I want to mention are working on safer models directly (example: https://www.lesswrong.com/posts/JviYwAk5AfBR7HhEn/how-to-control-an-llm-s-behavior-why-my-p-doom-went-down-1), which for smaller models might not be cost-prohibitive to make progress on. There's also building safety-related cognitive architecture components that have commercial uses. For example, world model work (example: https://www.lesswrong.com/posts/nqFS7h8BE6ucTtpoL/let-s-buy-out-cyc-for-use-in-agi-interpretability-systems) or memory systems (example: https://www.lesswrong.com/posts/FKE6cAzQxEK4QH9fC/qnr-prospects-are-important-for-ai-alignment-research). My work is trying to do a few of these things concurrently (https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to).

Comment by Brendon_Wong on How to Control an LLM's Behavior (why my P(DOOM) went down) · 2023-12-04T09:03:58.963Z · LW · GW

I appreciate your thoughtful response! Apologies; in my sleep-deprived state, I appear to have hallucinated some challenges I thought appeared in the article. Please disregard everything below "I think some of the downsides mentioned here are easily or realistically surpassable..." except for my point on "many-dimensional labeling."

To elaborate, what I was attempting to reference was QNRs which IIRC are just human-interpretable, graph-like embeddings. This could potentially automate the entire labeling flow and solve the "can categories/labels adequately express everything?" problem.

Comment by Brendon_Wong on How to Control an LLM's Behavior (why my P(DOOM) went down) · 2023-12-02T06:18:46.518Z · LW · GW

This approach is alignment by bootstrapping. To use it you need some agent able to tag all the text in the training set, with many different categories.

Pre GPT4, how could you do this?

Well, humans created all of the training data on our own, so it should be possible to add the necessary structured data to that! There are large-scale crowdsourced efforts like Wikipedia. Extending Wikipedia, and a section of the internet, with enhancements like associating structured data with unstructured data, plus a reputation-weighted voting system to judge contributions, seems achievable. You could even use models to prelabel the data but have the labels be human-verified at a large scale (or verified in semi-automated or fully automated, non-AI ways). This is what I'm trying to do with Web 10. Geo is the Web3 version of this, and the only other major similar initiative I'm aware of.
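As a toy sketch of how such a prelabel-then-verify flow could work (the labeler heuristic, reputation values, and acceptance threshold below are all made-up placeholders, not part of any existing system):

```python
# Hypothetical prelabel-then-verify flow: an automated labeler proposes
# labels, humans approve or reject them, and reviewer reputation weights
# the votes.
from dataclasses import dataclass, field


@dataclass
class LabeledText:
    text: str
    proposed_labels: set                        # labels suggested by the prelabeler
    votes: dict = field(default_factory=dict)   # label -> net reputation-weighted score


def prelabel(text: str) -> set:
    # Stand-in for a model or heuristic labeler.
    labels = set()
    if "danger" in text.lower():
        labels.add("violence")
    return labels or {"neutral"}


def record_vote(item: LabeledText, label: str, reputation: float, approve: bool):
    # Reputation-weighted up/down vote on a proposed label.
    item.votes[label] = item.votes.get(label, 0.0) + (reputation if approve else -reputation)


def accepted_labels(item: LabeledText, threshold: float = 1.0) -> set:
    # A label is accepted once its net weighted score clears the threshold.
    return {lbl for lbl, score in item.votes.items() if score >= threshold}


item = LabeledText("A dangerous scene unfolds.", prelabel("A dangerous scene unfolds."))
record_vote(item, "violence", reputation=0.8, approve=True)
record_vote(item, "violence", reputation=0.5, approve=True)
print(accepted_labels(item))  # {'violence'}
```

In a real deployment, `prelabel` would be a model call and the reputations would come from the contribution-judging system described above.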

Comment by Brendon_Wong on How to Control an LLM's Behavior (why my P(DOOM) went down) · 2023-12-02T06:11:43.010Z · LW · GW

This is a fantastic article! It's great to see that there's work going on in this space, and I like that the approach is described in very easy-to-follow and practical terms.

I've been working on a very expansive approach/design for AI safety called safety-first cognitive architectures, which is vaguely like a language model agent designed from the ground up with safety in mind, except extensible to both present-day and future AI designs, and with a very sophisticated (yet achievable, and scalable from easy to hard) safety- and performance-minded architecture. I have intentionally not publicly published implementation details yet, but will send you a DM!

It seems like this concept is related to the "Federating Cognition" section of my article, specifically a point about the safety benefits of externalizing memory: "external memory systems can contain information on human preferences which AI systems can learn from and/or use as a reference or assessment mechanism for evaluating proposed goals and actions." At a high level, this can affect both AI models themselves as well as model evaluations and the cognitive architecture containing models (the latter is mentioned at the end of your post). For various reasons, I haven't written much about the implications of this work to AI models themselves.

I think some of the downsides mentioned here are easily or realistically surpassable. I'll post a couple thoughts.

For example, is it really true that this would require condensing everything into categories? What about numerical scales, for instance? Interestingly, in February, I did a very-small-scale proof-of-concept regarding automated emotional labeling (along with other metadata), currently available at this link for a brief time. As you can see, it uses numerical emotion labeling, although I think that's just the tip of the iceberg. What about many-dimensional labeling? I'd be curious to get your take on related work like Eric Drexler's article on QNRs (which is unfortunately similar to my writing in that it may be high-level and hard to interpret), which is one of the few works I can think of regarding interesting safety and performance applications of externalized memories.
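For concreteness, here is a minimal sketch of what numerical, many-dimensional labeling might look like; the emotion dimensions and scores are invented for illustration:

```python
# Hypothetical many-dimensional emotion labeling: each passage gets a
# vector of scores in [0, 1] rather than a single categorical tag.
EMOTION_DIMENSIONS = ["joy", "fear", "anger", "sadness"]  # assumed label set


def label_vector(scores: dict) -> list:
    """Expand a sparse {dimension: score} dict into a dense, ordered vector,
    clamping each score into [0, 1]."""
    return [min(1.0, max(0.0, scores.get(dim, 0.0))) for dim in EMOTION_DIMENSIONS]


# A labeler (human or model) only needs to supply the nonzero dimensions.
vec = label_vector({"fear": 0.7, "sadness": 0.3})
print(vec)  # [0.0, 0.7, 0.0, 0.3]
```

Compared to discrete categories, vectors like this can express intensity and mixtures, which is part of what I mean by "many-dimensional labeling."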

With regard to jailbreaking, what if approaches like steering GPT with activation vectors and monitoring internal activations for all model inputs are used?
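As a toy numerical illustration of the activation-steering idea (random vectors stand in for real model activations; this is not any particular library's API):

```python
import numpy as np

# Toy activation steering: a "steering vector" (e.g. the difference between
# activations on contrasting prompts) is added to a hidden activation at
# inference time. Real implementations hook a transformer's residual stream;
# this only shows the arithmetic.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)        # stand-in hidden activation for some input
act_positive = rng.normal(size=8)  # stand-in activation on a "safe" prompt
act_negative = rng.normal(size=8)  # stand-in activation on an "unsafe" prompt
steering_vector = act_positive - act_negative

coeff = 2.0                        # steering strength
steered = hidden + coeff * steering_vector

# Monitoring side: flag inputs whose activation points toward the "unsafe"
# direction, via cosine similarity.
unsafe_similarity = float(
    np.dot(hidden, act_negative)
    / (np.linalg.norm(hidden) * np.linalg.norm(act_negative))
)
print(steered.shape, unsafe_similarity)
```

The monitoring check is the part relevant to jailbreaks: it runs on every input regardless of how the prompt is phrased.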

Comment by Brendon_Wong on World-Model Interpretability Is All We Need · 2023-11-19T11:40:40.484Z · LW · GW

One possibility that I find plausible as a path to AGI is if we design something like a Language Model Cognitive Architecture (LMCA) along the lines of AutoGPT, and require that its world model actually be some explicit combination of human natural language, mathematical equations, and executable code that might be fairly interpretable to humans. Then the only portions of its world model that are very hard to inspect are those embedded in the LLM component.

Cool! I am working on something that is fairly similar (with a bunch of additional safety considerations). I don't go too deeply into the architecture in my article, but would be curious what you think!

Comment by Brendon_Wong on Safety-First Agents/Architectures Are a Promising Path to Safe AGI · 2023-08-07T06:39:01.855Z · LW · GW

Yep, I agree that there's a significant chance/risk that alternative AI approaches that aren't as safe as LMAs are developed, and are more effective than LMAs when run in a standalone manner. I think that SCAs can still be useful in those scenarios though; that's definitely true from a safety perspective, and less clear from a performance perspective.

For example, those models could still do itemized, sandboxed, and heavily reviewed bits of cognition inside an architecture, even though that's not necessary for them to achieve what the architecture is working towards. Also, this is when we start getting into more advanced safety features, like building symbolic/neuro-symbolic white-box reasoning systems that are interpretable, for the purpose of either controlling cognition or validating the cognition of black-box models (Davidad's proposal involves the latter).

Comment by Brendon_Wong on Internal independent review for language model agent alignment · 2023-07-20T01:24:14.638Z · LW · GW

I implied the whole spectrum of "LLM alignment", which I think is better to count as a single "avenue of research" because critiques and feedback in "LMA production time" could as well be applied during pre-training and fine-tuning phases of training (constitutional AI style).

If I'm understanding correctly, is your point here that you view LLM alignment and LMA alignment as the same? If so, this might be a matter of semantics, but I disagree; I feel like the distinction is similar to ensuring that the people who comprise the government are good (the LLMs in an LMA) versus trying to design a good governmental system itself (e.g. dictatorship, democracy, futarchy, separation of powers, etc.). The two areas are certainly related, and a failure in one can mean a failure in another, but the two areas can involve some very separate and non-associated considerations.

It's only reasonable for large AGI labs to ban LMAs completely on top of their APIs (as Connor Leahy suggests)

Could you point me to where Connor Leahy suggests this? Is it in his podcast?

or research their safety themselves (as they already started to do, to a degree, with ARC's evals of GPT-4, for instance)

To my understanding, the closest ARC Evals gets to LMA-related research is by equipping LLMs with tools to do tasks (similar to ChatGPT plugins), as specified here. I think one of the defining features of an LMA is self-delegation, which doesn't appear to be happening here. The closest they might've gotten was a basic prompt chain.

I'm mostly pointing these things out because I agree with Ape in the coat and Seth Herd. I don't think there's any actual LMA-specific work going on in this space (beyond some preliminary efforts, including my own), and I think there should be. I am pretty confident that LMA-specific work could be a very large research area, and many areas within it would not otherwise be covered with LLM-specific work.

Comment by Brendon_Wong on Internal independent review for language model agent alignment · 2023-07-18T21:47:04.818Z · LW · GW

Do you have a source for "Large labs (OpenAI and Anthropic, at least) are pouring at least tens of millions of dollars into this avenue of research"? I think a lot of the current work pertains to LMA alignment, like RLHF, but isn't LMA alignment per se (I'd make a distinction between aligning the black box models that compose the LMA versus the LMA itself).

Comment by Brendon_Wong on CAIS-inspired approach towards safer and more interpretable AGIs · 2023-05-03T11:45:08.243Z · LW · GW

Have you seen Seth Herd's work and the work it references (particularly natural language alignment)? Drexler also has an updated proposal called Open Agencies, which seems to be an updated version of his original CAIS research. It seems like Davidad is working on a complex implementation of open agencies. I will likely work on a significantly simpler implementation. I don't think any of these designs explicitly propose capping LLMs though, given that they're non-agentic, transient, etc. by design and thus seem far less risky than agentic models. The proposals mostly focus on avoiding riskier models that are agentic, persistent, etc.

Comment by Brendon_Wong on Capabilities and alignment of LLM cognitive architectures · 2023-04-28T03:42:14.481Z · LW · GW

Have you read Eric Drexler's work on open agencies and applying open agencies to present-day LLMs? Open agencies seem like progress towards a safer design for current and future cognitive architectures. Drexler's design touches on some of the aspects you mention in the post, like:

The system can be coded to both check itself against its goals, and invite human inspection if it judges that it is considering plans or actions that may either violate its ethical goals, change its goals, or remove it from human control.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-07-01T05:02:35.295Z · LW · GW

My experience on Upwork is actually the same as yours! In our tests of the platform, it appears to be very difficult to find jobs due to the intense competition. I was unpleasantly surprised at first when I saw how difficult it was to earn money on Upwork as a new user. However, that was the whole point of the initial tests we did, so we expanded and have still been expanding the program to encompass other forms of virtual work that pay reliably and still have room to grow. Upwork will be a minor or non-existent part of our program.

If my program was just on Upwork, then I would be inclined to side with your analysis. Thankfully, it's not.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T19:31:45.744Z · LW · GW

I think I understand the point: hypothetically, this program would take work away from people more in need, possibly even making the world worse off because of that. But if I magically made half of the virtual workforce disappear, then the half of the people that were removed would be really poor and the other half would be twice as rich. But is that creating more good? No, because the richer half would not need the money as much as the poorer half. If I added more people who were earning less money before being added, then I am creating a net good, and that's what I am trying to do. I don't think the impact of helping several dozen people (just at first!) get out of poverty is insignificant, and since the program could be expanded if our tests indicate it works effectively, I think it could be considered high impact in terms of the number of people it could help and how much it could change their lives.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T19:21:08.764Z · LW · GW

Well, the total pool of work available for everyone is imperceptibly decreased in the short run, not adversely affecting anyone to any significant degree, while giving more of the poor who really need the money work opportunities... Is employing several dozen more people a small net good? I guess it's a matter of opinion.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:39:25.080Z · LW · GW

We are continuing our search for similar projects, thank you for your suggestion. I hope that we have not missed any pitfalls, but like Strangeattractor wrote, we are indeed doing tests of the concept in various stages of development, and this project is kind of a pilot in and of itself, so hopefully we can catch anything we might have missed.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:36:49.727Z · LW · GW

"In a charitable way" meaning good for the people. Just because there are for-profit companies out there doing this doesn't mean they are doing what is best for the people; they are distributing wealth, but also keeping a lot of it for themselves. A charitable venture would give most of the profits to the people involved, and this project also involves providing many things to people, like internet and computer access, training, and opportunities, which a lot of freelancers have to acquire for themselves in developing countries. It is very difficult for a would-be freelancer to find access to all of the technology, one-on-one help, etc., hence the value of this project. While there are virtual employment companies, there are no companies helping freelancers get started, so this project is unique and fills a need.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:33:42.696Z · LW · GW

Thank you, that is one of the markets we are looking to branch out to.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:31:58.353Z · LW · GW

I did not throw every detail into the video and fundraiser/my post on LessWrong, that is correct... I do think I described the gist of it. I explicitly stated that funds will go towards providing computer and internet access, training given by staff, and opportunities that staff have to find. As implied, expenses will go towards computer acquisition, internet, and helping staff implement various facets of the project. I could have explained each and every detail, but it would be too long for the target audience to read. The campaign is not noticeably more vague than other related Indiegogo fundraisers I have seen.

I especially do not think dishonesty should be assumed. It's just common sense, but to try to put it in words: the effort put into the campaign and video, the numerous people involved, the fact that I'm a high school student putting my reputation on the line, the fact that we are a "verified nonprofit" as shown by Indiegogo after confirming our 501(c)(3) status... It would be a very unlikely and elaborate scam, especially for the very low amount of money that this is likely to earn.

For the record, this project is operating under close scrutiny by the faculty sponsor mentioned in the video, by the nonprofit sponsor we have mentioned in the bio of Silicon Rainforest, by our adult volunteers, by our business partners, etc. If I wanted to do this as a scam, I would try to sell miraculously affordable virtual employment services, take the money, and run ;)

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:19:48.797Z · LW · GW

MTurk employs a lot of people in developed countries. I have read they are starting to reject India-based workers because of poor work quality. I can find employment for people who can provide a similarly high standard of work relative to workers in more developed countries, but who need the income more. Member participants would otherwise have had difficulties joining, say, MTurk because of a lack of computers, internet access, proper guidance, and training... I don't think there are any companies helping freelancers find work, because it's not very profitable, and yet there is a great need to reach people who are not working to their potential.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:15:49.788Z · LW · GW

I'm having trouble seeing how a for-profit corporation would create more good and be a more effective structure in this case. A non-profit organization can operate without income tax and attract donations which can be tax-deductible to donors. A for-profit organization could get investment capital, but I think it's highly unlikely I would be able to find any interested investors, and it otherwise performs worse compared to a non-profit with the same business model.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:14:05.392Z · LW · GW

The way I see it, making the project a nonprofit allows it to better compete with for-profit companies because of tax advantages. It can also get donations. A for-profit corporation has the advantage of attracting investments from people hoping to make a profit, but I am quite sure that I would not be able to attract large sums of investment capital. That pretty much makes starting this program as a nonprofit the only logical choice.

Regarding your point about compensation, it's not that I cannot extract the value; it will just be difficult to pay myself an extraordinarily large sum of money all at once, in the hundreds of thousands of dollars. If that ever did become a reality, then hypothetically I could create a for-profit branch of the organization that could partner up with the nonprofit branch in managing core revenue-generating operations, thus allowing me to siphon income out of the nonprofit.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T09:02:26.452Z · LW · GW

Belizeans would probably be competing with wealthier people for work because their high level of English mastery allows them to compete for more advanced positions. The websites I mentioned have many workers from more developed countries. For example, half of MTurk's users are from the United States.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T08:59:50.476Z · LW · GW

Many people in developing countries do not have access to the technology needed to participate in virtual employment, so we will provide computer and internet access. We will be doing marketing in a way, yes, although it is guidance and training as well. In the future, we will move on from guiding people through using third party systems to directly selling virtual employment services, which should be much more profitable.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-30T08:56:29.798Z · LW · GW

Thank you for your suggestions. I have in fact surveyed people and organizations in Belize. The general consensus is that there are a lot of people who are unemployed or working for very low wages, and getting higher-paying employment would improve their standard of living. You mentioned a small-scale pilot; we have actually run many such pilots, which is how we found that it would be possible to help people earn around $3 USD an hour. We are currently working on remote testing of our program before actually sending staff to Belize.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-26T00:38:19.477Z · LW · GW

There is definitely no prominent implementation of this concept and its related variations. Many nonprofits offer job training and give people computer and internet access, but starting what is essentially a virtual employment company to help people is not something I have heard about before, hence this program. It is possible that this idea was not implemented before in a charitable way because people start virtual employment companies for for-profit purposes, and those companies are very successful. As to the idea of connecting the impoverished with virtual employment services, it is possible many people are not aware of virtual employment services and thus have not implemented the idea.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-26T00:20:36.455Z · LW · GW

The venture could be profitable, yes. Would it generate massive amounts of income? That is also possible. I did not consider a for-profit version of the idea because the project itself was supposed to be charitable in nature. I am considering starting a for-profit branch of this idea, and would be open to hearing other people's ideas and motivations. Is your motivation, and others', in getting involved in a for-profit implementation of this idea to earn money?

To elaborate more on profits, the initial implementation of this idea might not be incredibly profitable because we are relying on third party virtual employment services like the aforementioned upwork.com to ensure the initial implementation (this summer!) would be a success and members would be able to find guaranteed work. Directly contracting with people and organizations wanting virtual workers is expected to be a lot more profitable.

Comment by Brendon_Wong on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-25T23:56:38.762Z · LW · GW

Thanks for your question! This particular project is charitable in nature and would probably require funding to get off the ground and expand more rapidly. Since it is not expected to attract for-profit support, especially because it would probably not be a particularly profitable venture, most funding would probably come from people/organizations with non-profit motives. People/organizations with non-profit motives generally only donate to nonprofits, which have a better public image and are more trusted to pursue altruistic goals like donors expect. We can also offer tax-deductible donations to donors, and we do not get taxed on donations or income, which gives us a financial advantage for attracting more donations and for earning more income. The only benefits to starting a for-profit venture I can think of would be greater freedom in compensating supporters and in our operations, but since this project is not expected to be a huge profit maker and has charitable intentions, I did not choose the for-profit path, although of course I could consider it in more detail.

Comment by Brendon_Wong on Positive Book and Other Media Recommendations for a Teen Audience · 2014-10-12T20:59:30.164Z · LW · GW

Thank you for your help! I have edited my post with additional information. My audience is a general youth audience; think of promoting content to an entire high school, with "average teenagers" and people that might be more interested in the content. Of course, some people will be more interested than others, so a wide variety of recommendations for different interest groups is better. I'm primarily looking for books that promote ethical/altruistic behavior; I'm not sure if any of your aforementioned recommendations do so.

Comment by Brendon_Wong on Positive Book and Other Media Recommendations for a Teen Audience · 2014-10-12T20:57:01.032Z · LW · GW

Thanks for your recommendations! I've edited my post to make it more specific; I am looking for content that promotes altruistic behavior, either fiction or nonfiction.

Comment by Brendon_Wong on 2013 Less Wrong Census/Survey · 2013-11-29T08:25:44.744Z · LW · GW

Answered all questions, I hope I helped!

I'm very curious to see how the monetary reward works out.

Comment by Brendon_Wong on [deleted post] 2013-09-16T04:48:01.024Z

Thank you very much. How sure are you that top colleges will trash unschooling applications?

Comment by Brendon_Wong on [deleted post] 2013-09-16T04:44:35.888Z

Since you asked me about the most effective technique I've used since this point, I started using the pomodoro technique with Beeminder. I have experienced a very dramatic increase in productivity. Thank you very much!

Comment by Brendon_Wong on [deleted post] 2013-09-16T04:42:48.907Z

I don't think I've accomplished enough at this point in my life to go to college immediately.

Comment by Brendon_Wong on [deleted post] 2013-09-16T04:36:46.651Z

If I unschooled, I would be engaged in many other activities with social support like internships, classes, and possibly even school extracurricular activities.

Historically, I have had no problem getting things done with no social support.

Comment by Brendon_Wong on [deleted post] 2013-09-16T04:34:55.058Z

It's true, my parents kind of forced me into it...

You are right, I should try to free up more time while taking the conventional and safer route.

Comment by Brendon_Wong on [deleted post] 2013-09-10T07:12:43.066Z

My problem is that the akrasia seems to be partially caused by staying in a highly structured environment. I don't have much trouble doing things I believe are beneficial towards my goals.

I currently believe that if I pursued option 2 I could get into a top college just like I would have done if I stayed in high school, but more useful things would get done.

If this belief is false, then my akrasia would be slightly reduced.

Comment by Brendon_Wong on [deleted post] 2013-09-10T06:51:50.349Z

Yes, but with all those activities I listed above I have minimal free time during the school year. I do have much more time during the summer though.

Comment by Brendon_Wong on [deleted post] 2013-09-10T06:20:00.163Z

I believe the pomodoro technique had me accomplishing many tasks for one day, then it failed. It failed because I never started using the pomodoro method itself; I just procrastinated on starting it. I also got distracted while working. I either stopped working and never got on track again, or I forgot the rules about distraction (record it, apply the 3 steps) and wasted a lot of time. Over time I just forgot about it. Thanks for reminding me, I'll give it another go because it was so close to working, and I can try different motivational techniques to get started.

I believe an "energy pill" (Elebra) also helped me get things done for several days before I succumbed to procrastination. I should try that again as well...

Comment by Brendon_Wong on [deleted post] 2013-09-10T02:42:54.528Z

Unfortunately, akrasia still strikes in the morning and I don't always have the motivation or energy to finish everything. But that's only with minimal sleep due to procrastination the night before.

I have to see if I can get my parents to agree...

Comment by Brendon_Wong on [deleted post] 2013-09-10T02:23:40.388Z

So I leave high school... and then what comes next on the path to world optimization? :)

Comment by Brendon_Wong on [deleted post] 2013-09-10T02:14:50.430Z

Huh. Ask Eliezer about that :)

I think that's the best guaranteed way to improve the world. There is almost no uncertainty. But I'd rather not subject myself to decades of monotonous work, especially since there are so many other organizations and individuals who could create an impact thousands or even millions of times greater than mine.

I was thinking more along the lines of actually working at a nonprofit or starting businesses to raise money, something at least a little higher-impact than earning several hundred thousand dollars and donating it.

With those updated plans, which of my three options (or none of them) is the best?

Comment by Brendon_Wong on [deleted post] 2013-09-10T02:10:14.083Z

Thanks for your helpful replies!

I've actually spent years trying to fix this problem, with little success. I've tried multiple books, read about every productivity system ever invented, read thousands of articles across hundreds of websites... No luck.

Recently, I've been looking into Akrasia on Less Wrong because I thought the suggestions might finally have an impact. I memorized The Motivation Hacker (along with all the techniques shared in the Procrastination Equation). I've also tried PJ Eby's materials. I found that unfortunately it had no impact on my akrasia at all :(

Do you have any recommendations? Am I doing something wrong? It certainly feels like it!

Thanks for your suggestions, I'll give them a try and see how it goes.

Comment by Brendon_Wong on [deleted post] 2013-09-10T02:05:42.810Z

Heh. I wake up at 7:00 AM, attend a full 7 periods with all the Advanced Standing classes I can, and leave at 3:15 PM. From there, I go to cross country, taking my time until 6:00 PM at the earliest. Then I eat dinner, shower... Then it's 8:00 PM already and all my homework is there waiting. Then there are family activities, chores, distractions, and other projects I need to do thrown in. Did I mention I have a serious akrasia problem? Then I sleep at like 12:00 AM... Not that much time if you ask me.

Comment by Brendon_Wong on [deleted post] 2013-09-09T12:02:39.522Z

College does sound pretty useful, so I guess the question is whether I should leave high school, unschool for 3 years, then reapply to college. If that does not significantly reduce my college admissions potential, it would seem like the most strategic thing to do.

Comment by Brendon_Wong on [deleted post] 2013-09-09T11:59:27.431Z

Carefully putting off assignments sounds like a potential solution, but usually assignments are assigned one day and due the very next. I have around 4 hours to get an average of 2 hours of homework done on school nights. But I fail to shift into homework mode, which caused me to write this article at 10:00 PM last night, get 5 hours of sleep, and then wake up early to finish studying because it seems easier in the morning.

Comment by Brendon_Wong on Effective Rationality Training Online · 2013-08-11T19:48:54.365Z · LW · GW

Thanks for the clarification.

I'll focus on resources rather than topics, and collect crowd opinion on resources.

> I'd call it success. Really, I am more afraid of the opposite situation: too few people caring enough to comment; because then I wouldn't know what to do. If there are too many comments, you could for example collect the resources and make a poll. Or just start another discussion a month later, where the first comment would contain the poll about the resources recommended in the previous discussion. Or anything else. The big problem is IMHO if people generally endorse the idea, but the discussion is followed by... silence.

Remember Instrumental rationality/self help resources, and more recently Proposal: periodic repost of the Best Learning resources? I think the success of those discussions means the idea is already a success. I saw that the post asking for resources became hard to navigate because all the different life categories listed generated too many recommendations. To avoid that, should I start discussions with different life categories every time? Other people have already tested the idea and it is popular; making an effective instrumental rationality resource collection program is the hard part.

How come you suggested a poll to overcome too many comments, and then reposting the discussion? I don't think a poll would solve the too-many-comments problem because there are simply too many useful things to recommend improving. Look at all of lukeprog's social skill resources! Ask just for social skill resources, dump that in, then throw in another 20 recommendations and even more low-impact suggestions, and the discussion would be swamped. A poll with so many different resources will just exacerbate the problem.

The only solution I can think of is having many different discussions, each on a separate area of life or even separate categories within one area of life. Whether to space them out or just post ~7 discussions at once is the question.

Comment by Brendon_Wong on Effective Rationality Training Online · 2013-08-11T06:14:55.836Z · LW · GW

Thanks for suggesting concrete actions, I'll go ahead and post it ASAP.

Questions before I start (thanks in advance)!

  1. What's better, recommending a resource to improve something or recommending a specific topic to improve with resource suggestions as replies? Ex. Watch The Blueprint Decoded to learn PUA vs. improve PUA and add resources as replies.

  2. What do you mean by collecting data? Do you mean collecting the self-improvement resource suggestions themselves, or opinions/ratings/votes on the suggestions?

  3. What if there are too many comments on the discussion for people to navigate through it? Should I have separate discussions on separate areas of life? Ex. Health, Mind, Finance...

  4. Just to verify, in the comments area of the first discussion asking for self-improvement recommendations, write a comment polling people where to put the data, right?

Comment by Brendon_Wong on Effective Rationality Training Online · 2013-08-10T18:04:49.635Z · LW · GW

It seems like people find discussions more rewarding than posting to a wiki. There could be weekly discussions on the many aspects of self improvement, and then those ideas could be posted on a wiki for organization and further updates.

Do you think using a separate wiki is a good idea? It seems like the LW wiki is not being used for collecting self-improvement articles, and a new wiki with a separate purpose, community, and article format might be better. After all, the current wiki is organized only for rationality articles, and changing the layout and article format might cause some conflict and confusion.