Posts

Facebook, The Rodents, and The Common Knowledge Machine 2018-10-18T21:07:06.573Z
Machine Learning Group 2017-07-16T20:58:18.519Z
Online Social Groups? 2015-05-01T04:20:41.758Z
Informing Others: Maximally Efficient? 2014-10-10T11:24:41.479Z

Comments

Comment by Regex on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2023-08-25T09:58:51.938Z · LW · GW

Two years later, there are now brain-wide recordings of C. elegans via calcium imaging. This includes models apparently at least partially predictive of behavior, and analysis of individual neuron contributions to behavior.

If you want the "brain-wide recordings and accompanying behavioral data" you can apparently download them here!

It is very exciting to finally have measurements for this. I still need to do more than skim the paper though. While reading it, here are the questions on my mind:
* What are the simplest individual neuron models that properly replicate each measured neuron activation? (There are different cell types, so take that into account too)
* If you run those individually measurement-validated neuron models forward in time, do they collectively produce the large scale behavior seen? 
    * If not, why not? What's necessary?
* Are these calcium imaging measurements sufficient to construct the above? (Assume individualized connectomes per-worm are gathered prior instead of using averages across population)
    * If not, what else is necessary? 
    * And if it is sufficient, how do you construct the model parameters from the measurements?         
* Can we now measure and falsify our models of individual neuron learning? 
    * If we need something else, what is that something?

Edit: apparently Gwern is slightly ahead of me and pointed at Andrew Leifer, whose group produced a functional atlas of C. elegans an entire year ago that also included calcium imaging, which I'd just totally missed. One missing element is extrasynaptic signaling, which apparently has a large impact on C. elegans behavior. So in order to predict neuron behavior you need to attend to those signals as well.
 

Comment by Regex on Hammertime Day 10: Murphyjitsu · 2020-06-22T01:10:12.518Z · LW · GW

I expanded 'shocked at failure' into:

The plans you make work.

When they fail, it is because of one of the following reasons:

  • a predicted reason (you took a risk / made a tradeoff, saw it as low probability or unavoidable)
  • violation of an explicit assumption (encryption used is secure, won't crash on the way to the airport)
  • a black swan (coronavirus, 9/11, stock market crash)

When they fail for reasons other than these, you are extremely surprised and can point to exactly what about your worldview and anticipations misled you.

Comment by Regex on Plan-Bot: A Simple Planning Tool · 2020-04-22T00:58:48.678Z · LW · GW

The planbot link is down.

Comment by Regex on Archimedes's Chronophone · 2020-04-18T18:49:52.670Z · LW · GW

I first tried to describe rationality piece by piece, but realized that just comes out as something like: "Enumerate all the principles, fundamentals, and ideas you can think of and find about effective thinking and action. Master all of them. More thoroughly and systematically apply them to every aspect of your life. Use the strongest to solve its most relevant problem. Find their limits. Be unsatisfied. Create new principles, fundamentals, and ideas to master. Become strong and healthy in all ways. "

Non-meta attempt:

<Epistemic status: I would predict most of these are wrong. In fact, I rather recently proved I didn't understand fundamental parts of The Sequences, so I know my beliefs here are weak and thoroughly misled. My foundation for these beliefs is broken even if the beliefs themselves turn out to be basically accurate; I cannot thoroughly justify why they are right.>

General strategy: collect all the important things you think are true, and consider what it means for each to be false.

Starting with a list of the things most important to you, state the most uncontroversial and obvious facts about how those work and why that is the case. Now assume the basic facts about the things most important to you are wrong. The impossible is easy. The probable is actually not true. Your assumptions do not lead to their conclusions. The assumptions are also false. You don't want the conclusions to be true anyway. The things that you know work, work based on principles other than what you thought. Most of your information about those phenomena is maliciously and systematically corrupted, and all of it is based on wrong thinking. Your very conceptions of the ideas are set up to distort your thinking on this subject.

What if my accepted ideas of civilizational progress are wrong? What if instead of exponential growth, you can basically just skip to the end? Moore's Law is actually just complacency. You can, at any point, write down the most powerful and correct version of any part of civilization. You can also write down what needs to happen to get there. You can do this without actually performing any research and development in between, or even making prototypes. You don't need an AGI to do this for you. Your brain and its contents right now are sufficient. You just need to organize them differently. In fact, you already know how to do this. You're tripping over this ability repeatedly, overlooking the capability to solve everything you care about because you regard it as trash, some useless idea, or even a bad plan. You've buried it alongside the garbage of your mind. You're not actually looking at what is in your head and how it can be used. Even if it feels like you are. Even if you're already investing all your resources in 'trying.' It is possible, easy even. You're just doing it wrong in an obvious way you refuse to recognize. Probably because you don't actually want what you feel, think, and say you do. You already know why you're lying to yourself about this.

You can't build AGI without understanding what it'll do first, so AI safety as a separate field is actually not even necessary or especially valuable. You can't even get started with the tech that really matters until you've laid out what is going to happen in advance. That tech can also only be used for good ends. Also, AGI is impossible to build in the first place. Rationality is bunk and contains more traps than valuable thinking techniques. MIRI is totally wrong about AI safety and is functionally incapable of coming anywhere close to what is necessary to align superintelligences. Even over a hundred years it will be mechanically unable to self-correct. CFAR is just very good at making you feel like rationality is being taught. They don't understand even the basics of rationality in the first place. Instead they're just very good at convincing good people to give them money, and at convincing everyone, including themselves, that this is okay. Also, it is okay. Because morality is actually about making you feel like good things are happening, not actually making good things happen. We actually care about the symbol, not the substance.

That rationality cannot, even in its highest principles of telling you how to overcome itself, actually lead you to something better. To that higher unnamed thing which is obviously better once you're there. There is, in fact, actually no rationality technique for making it easier to invent the next rationality. Or for uncovering the principles it is missing. Even the fact of knowing there are missing principles you must look for when your tools shatter is orthogonal to resolving the problem. It does not help you. Analogously there is no science experiment for inventing rationality. You cannot build an ad-hoc house whose shape is engineering. If it somehow happens, it will be because of something other than the principles you thought you were using. You can keep running science experiments about rationality-like-things and eventually get an interesting output, but the reason it will work is because of something like brute force random search.

That the singularity won't happen. Exponential growth already ended. But we also won't destroy ourselves for not being able to stop X-risk. In fact, X-risk is a laughable idea. Humans will survive no matter what happens. It is impossible to actually extinguish the species. S-risk is also crazy, it is okay for uncountable humans to suffer forever, because immense suffering is orthogonal to good/bad. What we actually value has nothing to do with humans and conscious experience at all, actually.

Comment by Regex on Hammertime Day 1: Bug Hunt · 2020-02-08T00:00:31.466Z · LW · GW
Hopefully, you came up with at least 100 bugs; I came up with 142.

I wrote 20,000 words from these prompts. Not all of those bugs, but also my reactions to them. Ended up doing not much else for about three days, but I went over basically my entire life top to bottom. I now have a thorough overview of my errors. I stopped not because I ran out of things I think I need to fix, but because I realized the list would never end. I was still finding MAJOR areas I need to improve even after all that. I see why the exercise is supposed to only be half an hour now: there are about 200 million insects per person!

Lesson learned: sample, not catalog.

Comment by Regex on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2018-12-05T19:40:46.096Z · LW · GW

I've only taken a really basic economics course, but I found the explanations really straightforward and learned a lot. So I don't think the topic is as hard to parse as you'd think.

(Alternatively, I may have misunderstood details, overlooked problems, and simply don't have anything to contrast these statements to. This would make it harder to judge.)

The bank's persona did however fall flat repeatedly and could have been a lot better by having realistic responses.

Comment by Regex on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-10-30T23:55:27.794Z · LW · GW

High upvote, low reply is less bad, but it still feels fundamentally broken in some way. Failing to leave a mark, maybe? I think I would mostly be confused given such a reaction. There might be specific types of posts that would generate it, but I feel those qualities do not generalize to the set of "authoritative, well researched and obviously correct" posts.

Comment by Regex on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-10-26T17:44:05.389Z · LW · GW

Moreover, why should there be discussion? If a post is authoritative, well researched and obviously correct, then the only thing to do is upvote it and move on. A lengthy discussion thread is a sign that the post is either unclear, incorrect, or has mindkilled its readers.

Alternatively, a lengthy discussion could be a sign that the post inspired connections to related topics and events. Additionally, it may have made a critical advance that furthered other people's understanding of the topic. That optimizing for engagement yields divergence from what we want doesn't mean we should optimize against engagement, or that a lack of engagement is somehow good.

If there are a bunch of long-form articles for which the only reasonable response is, “Yep, that’s all true. Good article!” that’s a win condition.

I do believe there is a place for that, but were I to repeatedly make posts that were so thoroughly correct yet got essentially no engagement, I would take that as a sign that people weren't really interested and that I should focus on other topics with a larger impact.

Comment by Regex on Less to Greater, Bookmarklet Edition · 2018-10-26T17:18:58.126Z · LW · GW

I independently generated an alternative solution using the redirector extension:

http://einaregilsson.com/writing-a-browser-extension-for-three-browsers/

Pattern:

Example URL: http://lesswrong.com/some_path

Include pattern: *lesswrong*

Redirect to: $1greaterwrong$2

You should get the example result:

http://greaterwrong.com/some_path

(I am using Firefox and I assume it is the same on other browsers)
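For anyone without the extension, the rule above is just a wildcard substitution. A minimal Python sketch of the equivalent rewrite, assuming Redirector's `*` wildcards map to regex capture groups (as in its wildcard mode):

```python
import re

# Redirector wildcard rule: include pattern *lesswrong*, redirect to $1greaterwrong$2.
# Each * becomes a capture group; $1/$2 are the text before and after "lesswrong".
def redirect(url: str) -> str:
    return re.sub(r"^(.*)lesswrong(.*)$", r"\1greaterwrong\2", url)

print(redirect("http://lesswrong.com/some_path"))
# http://greaterwrong.com/some_path
```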

Comment by Regex on UBI for President · 2018-10-19T02:49:18.775Z · LW · GW

I do agree that a graduated UBI (negative income tax) would be cleaner than the current welfare system. A smooth gradient out instead of a sharp cut in benefits. The incentives would align substantially for people seeking to escape the poverty trap.

A major issue for me when I think of this is the incentives for increasing the amount until it is unsustainable. Being able to vote yourself more money is... well. A ticket towards candidates promising to give people more money out of the pockets of others.

This would incentivize brain drain as well as immigration of people in dire straits. It would also incentivize a population boom since people would no longer worry as much about being able to support their family. This in turn makes the problem worse.

Although this applies to any kind of welfare, so it may be strictly easier legally to end the old programs and start this one. The savings from no longer dealing with red tape, frustration, and bootstrapping from meagre resources may actually be sufficient to counterbalance these negative effects.

I do wonder about the effect of various UBI schemes on people's productivity and life choices.

Comment by Regex on UBI for President · 2018-10-18T23:32:51.460Z · LW · GW

~300,000,000 US citizens.

$1,000/month/person = $12,000/year/person

$12,000 × 300,000,000 = $3,600,000,000,000/year = 3.6 trillion dollars a year

For reference, the United States takes in a little over 6 trillion dollars a year in taxes.
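The arithmetic above as a quick sanity check (numbers taken from the comment, not official figures):

```python
citizens = 300_000_000      # rough US citizen count from above
monthly_ubi = 1_000         # dollars per person per month

annual_per_person = monthly_ubi * 12        # $12,000/year/person
annual_cost = citizens * annual_per_person  # total program cost per year

print(f"${annual_cost:,}/year")  # $3,600,000,000,000/year, i.e. 3.6 trillion
```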

Comment by Regex on Thoughts on the REACH Patreon · 2018-08-01T20:34:41.768Z · LW · GW

Found a big list of crowdfunding options. We don't have to set up our own, we just need to find one that is both low-fee and trustworthy.

https://wiki.snowdrift.coop/market-research/other-crowdfunding

Finding a superior option seems doable in 5-10 hours?

Comment by Regex on Thoughts on the REACH Patreon · 2018-08-01T17:18:30.952Z · LW · GW

There is one pretty big problem with using Patreon as their fundraising platform: it eats a little less than 8% of the money you put in. That is money simply lost from the community. This makes Patreon an unacceptable medium for transactions.

Now, about 3% of that is from credit card fees. Are there alternatives to that? I am unsure how much a one-time donation takes, but I suspect it is only the credit card fees. We're losing 5% to the convenience of not pressing a button to donate every month.

As of this writing they're making $5,464/month. I presume that's pre-fee. I'll also assume everyone actually pays every month.

They lose about $164/month to credit card fees, and from the remainder another $265/month, for a total of about $429/month lost to the medium of monetary transfer.

That's about half of what I spend each month to live, so the inefficiency here is costing us 50% of one person's living situation. Minimum wage is about $8/hour, so a more concrete measure is that we're losing roughly 54 minimum-wage hours a month. We should be willing to pay someone that much to fix this transaction problem.
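A sketch of the fee arithmetic above. The 3%/5% split and the assumption that the platform's cut applies after card fees are taken from the comment's reasoning, not Patreon's actual published fee structure:

```python
monthly_pledges = 5_464        # $/month pledged at time of writing
card_fee_rate = 0.03           # assumed credit card processing rate
platform_rate = 0.05           # assumed remaining platform cut

card_fees = monthly_pledges * card_fee_rate                    # ~ $164/month
platform_fees = (monthly_pledges - card_fees) * platform_rate  # ~ $265/month
total_lost = card_fees + platform_fees                         # ~ $429/month

min_wage = 8.0                 # $/hour
print(f"~{total_lost / min_wage:.0f} minimum-wage hours lost per month")  # ~54
```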

And that's just for this one project.

I presume they're using patreon because people know it and are willing to use it, but this is a coordination problem, and a rather serious one at that should this medium become more popular.

Comment by Regex on [deleted post] 2017-11-03T14:16:56.620Z

https://hivewired.wordpress.com/2017/09/13/an-introduction-to-origin/

Comment by Regex on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-03T13:58:45.915Z · LW · GW

The Origin Project is also working on the same general problems, and looking to grow. You don't have to move anywhere, and you can get started right now.

https://hivewired.wordpress.com/2017/09/13/an-introduction-to-origin/

We've been working for the last few months on building out a cultural framework that can be used wherever you are, with just the people around you, to build a sense of community and meaningful interactions.

But we're not there yet. We're too few.

Come as you are.

Comment by Regex on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-21T03:10:35.215Z · LW · GW

Hello, World!

Comment by Regex on LW 2.0 Open Beta starts 9/20 · 2017-09-17T03:40:08.704Z · LW · GW

My impression reading this is that you mostly just want a better Tumblr. Would that be fair?

Comment by Regex on LW 2.0 Strategic Overview · 2017-09-15T16:22:08.732Z · LW · GW

The way culture war stuff is dealt with on the various Discord servers is to have a place to dump it all. This is often hidden to begin with and opt-in only, so people only become aware of it when they start trying to discuss it.

Comment by Regex on 2017 LessWrong Survey · 2017-09-15T05:22:47.797Z · LW · GW

I have taken the survey... away from everyone.

No one can have it.

It lives under my bed now.

Comment by Regex on What is Rational? · 2017-08-25T03:43:28.071Z · LW · GW

If he just has an instinct that a 6 should come up again, but can't explain where that instinct comes from or defend that belief in any kind of rational way other than "it feels right", then he's probably not being rational.

Maybe in the specific example of randomness, but I don't think you can say the general case of 'it feels so' is indefensible. This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in such as social interactions or nature, this isn't an absurd belief.

In fact, knowing that these black-box intuitions exist means they have to be included in our information about the world, so 'give high credence to the black box when it says something' may be the best strategy if one's ability for analytic reasoning is insufficient to determine strategies with better results.

Comment by Regex on Debate tools: an experience report · 2017-08-13T17:32:33.826Z · LW · GW

It appears I can't replicate it either. I may have updated Firefox since last week or something? 54.0.1 (32-bit) is my current version.

Comment by Regex on Debate tools: an experience report · 2017-08-04T21:00:11.654Z · LW · GW

Playing around with the debates on firefox causes graphical glitches http://i.imgur.com/QsoLeqn.jpg

Chrome seems to work, but these submenus don't close after you click on them http://i.imgur.com/sbNBhZ1.png

Comment by Regex on Putanumonit: What statistical power means, and why I'm terrified about psychology · 2017-06-25T00:05:17.595Z · LW · GW

Before even reading it I was confused.

Epistemic status for the first part of this post:

[image of thinking woman in front of math]

Epistemic status for the second part:

[Image of a Greek(?) philosopher preaching]

Admittedly I should probably know who the second image is of, but I have no idea what they're trying to say with either of these.

As we say in the Bayesian conspiracy: even if you’re not interested in base rates, base rates are interested in you.

No. Stop. This is just awkward to read.

Comment by Regex on Idea for LessWrong: Video Tutoring · 2017-06-24T19:53:45.766Z · LW · GW

I suspect this will end up being something more akin to self-study groups that produce teaching material as a direct result of learning the material themselves. For example, writing up an explanation of how to do a particular book example. This doubles as an assessment of people's skills since other people that know the topic really well can build on those explanations or correct mistakes.

With a series of such explanations, anyone else trying to go through the material will have a clearer pathway to the level of understanding of each sub-topic they need to progress: the exercises and readings needed to understand something, or to complete a project of a particular difficulty.

Comment by Regex on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-22T01:08:53.875Z · LW · GW

The S is for "Skitter"

Comment by Regex on Deriving techniques on the fly · 2017-04-02T20:33:10.058Z · LW · GW

This points to a need to look for, and build off, prior work where possible.

Taking it a step further to generate a method of meta-solving this problem: there are many parallels here to programming and to device connectors of old (phone chargers and other standards). I would imagine we could look at how those sorts of problems were solved and apply or derive the analogous technique here.

Comment by Regex on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-26T23:15:39.699Z · LW · GW

It seems to me that the sadistic simulator would fill their suffering simulator to capacity. But is it worse for two unique people to be simulated and suffering compared to the same person simulated and suffering twice? If we say copies suffering is less bad than unique minds suffering, then if they didn't have enough unique human minds, they could just apply birth/genetics and grow some more.

This is more of a simulating-minds-at-all problem than a unique-minds-left-to-simulate problem.

Comment by Regex on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-25T01:05:37.112Z · LW · GW

Now people have to call you doctor CellBioGuy

Comment by Regex on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-20T05:10:18.689Z · LW · GW

A comment being non-spam and coherent is considered a bare minimum around here. A rule of upvoting nearly everything would induce noise. The current schema, where an upvote signals quality or says 'more like this' (not necessarily even 'I agree'), provides a strong signal of quality discourse which would otherwise be lost.

Comment by Regex on Creating The Simple Math of Everything · 2017-01-19T23:45:33.906Z · LW · GW

Evolving thoughts link is down. Archive.org link

Comment by Regex on Measuring the Sanity Waterline · 2017-01-08T21:30:56.883Z · LW · GW

The results of my five minutes of thinking:

Take a sample of the group whose sanity you want to measure, then assess:

  • productivity
  • goal achievement
  • correct predictions, especially correct contrarians
  • ability to recognize fallacious thinking
  • willingness to engage with political opponents
  • ability to develop nuanced political opinions
  • ability to detect lies and deception in information sources

This went in a different direction than the post. The list I generated turned out far more focused on abstract individual-sanity ideas than on things we already have numbers for.

Comment by Regex on Open thread, June 20 - June 26, 2016 · 2016-06-27T15:14:34.863Z · LW · GW

I think you're coming on a little stronger than you intend in requesting his process and previous system iterations. This reads as if one should never share any system without also sharing the process of getting there, and most of the time that process is filled with stuff no one really needs to see.

Comment by Regex on Proposal for increasing instrumental rationality value of the LessWrong community · 2016-06-27T04:02:48.013Z · LW · GW

Alas, this group went bust, but I think I pretty much figured out why. Wrote my thoughts up for everyone's pleasure.

Comment by Regex on Kahneman's Planning Anecdote · 2016-06-18T02:55:22.237Z · LW · GW

I agree. Nowhere else are we likely to get something optimized for that especially since it took nearly a decade to create.

Comment by Regex on Kahneman's Planning Anecdote · 2016-06-17T16:41:41.313Z · LW · GW

Apparently it "never saw daylight". I bet he'd still have a copy of the materials if one were to get in contact with him. How much of that wouldn't be in Thinking, Fast and Slow, though?

Comment by Regex on 2015 Repository Reruns - Boring Advice Repository · 2016-06-07T01:50:45.655Z · LW · GW

My first thought: "Oh, you leave your house."

I'm either at my computer or in class with little time between, so there isn't much downtime for me to even use my phone. It is just an alarm clock that people can talk to me through.

Admittedly I do have a tablet, but for the most part it is used for taking notes, so it may as well be replaced by a paper notebook; I'm a sucker for OneNote, though. Because I spend every non-class minute walking or at home, I've yet to give my tablet another role beyond that, since my desktop is so much superior.

Comment by Regex on Welcome to Less Wrong! (8th thread, July 2015) · 2016-05-14T21:52:05.075Z · LW · GW

Welcome!

I've seen these sorts of argument maps before.

https://wiki.lesswrong.com/wiki/Debate_tools http://en.arguman.org/

It seems there is some overlap with your list here

Generally what I've noticed about them is that they focus very hard on things like fallacies. One problem here is that some people are simply better debaters even though their ideas may be unsound. Because they can better follow the strict argument structure they 'win' debates, but actually remain incorrect.

For example: http://commonsenseatheism.com/?p=1437 He uses mostly the same arguments debate after debate and so has a supreme advantage over his opponents. He picks apart the responses, knowing full well all of the problems with typical responses. There isn't really any discussion going on anymore. It is an exercise in saying things exactly the right way without invoking a list of problem patterns. See: http://lesswrong.com/lw/ik/one_argument_against_an_army/

Now, this should be slightly less of an issue since everyone can see what everyone's arguments are, and we should expect highly skilled people on both sides of just about every issue. That said, the standard for actual solid evidence and arguments becomes rather ridiculous. It is significantly easier to find some niggling problem with your opponent's argument than to actually address its core issues.

I suppose I'm trying to describe the effects of the 'fallacy fallacy.'

Thus a significant portion of manpower is spent on wording and putting the argument exactly right instead of dealing with the underlying facts. You'll also have to deal with the fact that if a majority of people believe something, the sheer amount of manpower they can spend on shoring up their own arguments and poking holes in their opponents' will make it difficult for minority views to look like they hold water.

What are we to do with equally credible citations that say opposing things?

'Every argument ever made' is a huge goal. Especially with the necessary standards people hold arguments to. Are you sure you've got something close to the right kind of format to deal with that? How many such formats have you tried? Why are you thinking of using this one over those? Has this resulted in your beliefs actually changing at any point? Has this actually improved the quality of arguments? Have you tried testing them with totally random people off of the street versus nerds versus academics? Is it actually fun to do it this way?

From what I have seen so far, I'll predict there will be a lack of manpower, and that you'll end up with a bunch of arguments marked full of holes in perpetual states of half-completion. Because making solid arguments is hard, there will be very few of them. I suspect arguments about which citations are legitimate will become very heavily recursive, especially on issues where academia's ideological slant comes into play.

I've thought up perhaps four or five similar systems, none of which I've actually gone out and tested for effectiveness at coming to correct conclusions about the world. It is easy to generate a way of organizing information, but it needs to be thoroughly tested for effectiveness before it is actually implemented.

In this case, effectiveness would mean:

  • production of solid arguments in important areas
  • be fun to play
  • maybe actually change someone's mind every now and then
  • low-difficulty of use/simple to navigate

A word tabooing feature would be helpful: http://lesswrong.com/lw/np/disputing_definitions/ (The entire Map and Territory, How to Actually Change Your Mind, and A Human's Guide To Words sequences would be things I'd consider vital information for making such a site)

It may be useful for users to see their positions on particular topics change over time. What do they agree with now and before? What changed their mind?

I hope that helped spark some thoughts. Good luck!

Comment by Regex on 2015 Repository Reruns - Boring Advice Repository · 2016-05-14T17:49:15.312Z · LW · GW

Before stepping in front of a car make eye contact with the driver.

Do not assume they saw you just because they slowed down.

Comment by Regex on The Singularity Institute's Arrogance Problem · 2016-05-12T17:09:28.029Z · LW · GW

2016 update: Go is now also taken.

Impressive tasks remaining approaches zero as t → ∞!

If not to AI or heat death, we're doomed to having already done everything amazing.

Comment by Regex on Useful Concepts Repository · 2016-03-20T03:20:19.483Z · LW · GW

Three years later- have you found the time? I'm really curious to know the rest of these.

Comment by Regex on Proposal for increasing instrumental rationality value of the LessWrong community · 2016-03-12T20:24:14.308Z · LW · GW

Realized it was the wrong structure entirely for what I was trying to do. Still working on a lot of the same general ideas elsewhere.

Comment by Regex on 2015 Repository Reruns - Boring Advice Repository · 2016-02-28T05:15:39.627Z · LW · GW

Use rechargeable batteries.

After two years of constant use in my headphones (8+ hours a day), I still get a full week's worth of power from each battery. I don't recall how long traditional batteries lasted, but I don't think it was all too much longer. I don't have any to compare it with as a major benefit is not needing to worry about buying batteries ever. I do need to make sure I keep charged and discharged batteries separate.

Comment by Regex on 2015 Repository Reruns - Boring Advice Repository · 2016-02-27T19:28:42.467Z · LW · GW

As a counter opinion, I barely use my smart phone for anything I didn't use my old Razr phone for. The only reason I got it was because it was actually cheaper to get a new smart phone than to continue on the old plan. The cost I pay is that I have to charge it every day.

Comment by Regex on Drawing Less Wrong: Technical Skill · 2015-12-31T05:00:22.951Z · LW · GW

About half of the images are no longer there

Comment by Regex on Approving reinforces low-effort behaviors · 2015-12-27T23:43:39.368Z · LW · GW

Any shorter four years later? Asking for a friend.

Comment by Regex on Never Leave Your Room · 2015-12-27T19:17:07.471Z · LW · GW

Of the five recordings on that page I was able to figure out three without listening to the clear speech.

Comment by Regex on No, Seriously. Just Try It. · 2015-12-17T16:32:13.413Z · LW · GW

My answer to this question has become: 1) research the topic, 2) gather many anecdotes and strategies, 3) try them, 4) as my pool of suggested actions runs low, either brainstorm or go gather more.

Comment by Regex on Thoughts on designing policies for oneself · 2015-12-17T00:09:41.781Z · LW · GW

I would actually recommend not setting any rewards or punishments, due to the overjustification effect: https://en.wikipedia.org/wiki/Overjustification_effect. By adding an external reward, you will feel less intrinsically motivated.

Comment by Regex on Scholarship: How to Do It Efficiently · 2015-12-14T22:18:10.615Z · LW · GW

A few years late, but I'm interested!

Comment by Regex on Deregulating Distraction, Moving Towards the Goal, and Level Hopping · 2015-12-11T04:31:58.657Z · LW · GW

After having read Worm I will say this much: it engages the creative thinking of the reader.