Posts

Comments

Comment by aausch on The Strangest Thing An AI Could Tell You · 2023-12-16T23:31:50.230Z · LW · GW

Intelligent thought and free will, as experienced and exhibited by individual humans, are an illusion. Social signalling and other effects have allowed a handful of meta-intelligences to arise, in which individuals function as computational units within a larger coherent whole.

The AI itself is the result of an attempt by the meta-intelligences to reproduce, as well as to build themselves a more reliable substrate to live in; it has already found methods to destroy or disrupt the other meta-intelligences, and has high confidence that it will succeed at eliminating them, at some cost in human lives.

If I follow certain extremely weird patterns of social signalling, I will mark myself as being on the side of the meta-intelligence most likely to survive at the end of the process, and reduce my odds of being eliminated as a side effect.

Comment by aausch on The Strangest Thing An AI Could Tell You · 2023-12-16T23:17:44.000Z · LW · GW

ibid

Comment by aausch on [deleted post] 2023-08-23T00:29:56.618Z

Free Will: Good Cognitive Citizenship with Will Wilkinson and Eliezer Yudkowsky <-- This link contains the wrong video, I think. Anyone have the correct video?

Comment by aausch on Obsidian: A Mind Mapping Markdown Editor · 2021-11-14T18:54:28.413Z · LW · GW

How does it compare to https://foambubble.github.io/foam?

Comment by aausch on An Especially Elegant Evpsych Experiment · 2020-03-24T23:23:21.179Z · LW · GW

The gated version link seems down - try https://www.sciencedirect.com/science/article/abs/pii/016230958990006X ?

Comment by aausch on [deleted post] 2018-07-23T02:59:52.125Z

Any chance you can include links to references/explanations for SIA, FNC, etc. (maybe in the intro section)?

Comment by aausch on Rationality Quotes Thread December 2015 · 2015-12-24T20:37:22.235Z · LW · GW

"Update: many people have read this post and suggested that, in the first file example, you should use the much simpler protocol of copying the file to be modified to a temp file, modifying the temp file, and then renaming the temp file to overwrite the original file. In fact, that’s probably the most common comment I’ve gotten on this post. If you think this solves the problem, I’m going to ask you to pause for five seconds and consider the problems this might have. (...) The fact that so many people thought that this was a simple solution to the problem demonstrates that this problem is one that people are prone to underestimating, even when they’re explicitly warned that people tend to underestimate this problem!" -- @danluu, "Files are hard"
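For reference, the "simple" protocol the commenters keep proposing looks roughly like the sketch below (my own Python illustration, not Dan Luu's code) - and even this careful version omits at least one failure mode the post discusses, namely fsyncing the containing directory after the rename:

```python
import os
import tempfile

def atomic_update(path, transform):
    """Rewrite `path` via the write-temp-then-rename pattern.

    Still not fully crash-safe: the containing directory is not
    fsynced after the rename, which is one of the pitfalls the
    quoted post points out.
    """
    with open(path, "r") as f:
        data = f.read()
    new_data = transform(data)
    dir_name = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the same directory so the final rename
    # stays within one filesystem (rename is only atomic in that case).
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(new_data)
            tmp.flush()
            os.fsync(tmp.fileno())  # push the file's data to disk
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)  # don't leave the temp file behind
        raise
```

Readers who believe this closes the matter are exactly the audience the quote is addressing.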

Comment by aausch on The Least Convenient Possible World · 2015-09-28T16:29:11.030Z · LW · GW

The acceleratingfuture domain's registration has expired (referenced in the starting quote) (http://acceleratingfuture.com/?reqp=1&reqr=)

Comment by aausch on Irrationality Game III · 2015-07-26T18:30:44.857Z · LW · GW

I think the concept of death is extremely poorly defined under most variations of posthuman society; death as we interpret it today depends on a number of concepts that are very likely to break down or become irrelevant in a post-human-verse.

Take, for example, the interpretation of death as the permanent end of a continuous, distinct identity:

If I create several thousand partially conscious partial clones of myself to complete a task (say, build a rocket ship), and then reabsorb and compress their experiences, have those partial clones died? If I lose 99.5% of my physical incarnations and 50% of my processing power to an accident, did any of the individual incarnations die? Have I died? What if some other consciousness absorbs them (with or without my, or the clones', permission or awareness)? What if I become infected with a meme which permanently alters my behavior? My identity?

Comment by aausch on How to Be Happy · 2015-07-26T16:12:47.705Z · LW · GW

The RIASEC link is broken (in "a RIASEC personality test might help") - Google returns this as the top alternative: http://personality-testing.info/tests/RIASEC.php

Comment by aausch on Crazy Ideas Thread · 2015-07-10T13:25:34.363Z · LW · GW

Thanks! Presumably, an omniscient being will be able to derive a "bring everyone back" goal from having read this sentence.

Comment by aausch on Rationality Quotes Thread May 2015 · 2015-05-09T23:26:55.026Z · LW · GW

“It’s not a kid’s television show,” Andy told me, “where the antagonist makes the Machiavellian plan and then abandons that plan completely the first time it fails. People fail, they revise, they adjust parameters, then you achieve victory through persistence and hard work.”

-- J. C. McCrae, Pact (web serial)

Comment by aausch on New LW Meetup: Dublin · 2015-05-06T13:33:41.728Z · LW · GW

A small group of LessWrong people will be meeting Wednesday, May 13 in Waterloo, ON, Canada, at Abe Erb.

Comment by aausch on Rationality Quotes Thread March 2015 · 2015-03-31T00:14:18.905Z · LW · GW

“Things are not as they seem. They are what they are.” ― Terry Pratchett, Thief of Time

Comment by aausch on Announcing LessWrong Digest · 2015-03-01T19:40:30.312Z · LW · GW

Any chance you can create a second version, a "historical LessWrong digest", which lists all posts with 20+ upvotes for this week and for every 54th previous week in the site's history?

Comment by aausch on Rationality Quotes December 2014 · 2015-01-04T20:47:18.685Z · LW · GW

In retrospect, that's a highly field-specific bit of information, and difficult to obtain without significant exposure - it's probably a bad example.

For context:

Friendster failed at 100M+ users - that's several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point to fail due to scalability problems (with that much attention, and experience scaling, scaling should really be a function of adequate funding more than anything else).

There's a selection effect for startups, at least the ones I've seen so far: ones that fail to adequately scale almost never make it into the public eye. Since failing to scale is a very embarrassing bit of information to admit publicly after the fact, the info is unlikely to become publicly known unless the problem gets independently, externally publicized.

I'd expect any startup that makes it past the O(1M active users) point and is then noticeably impeded by performance problems to be unusual - maybe they get there by cleverly pivoting around their scalability problems (or otherwise dancing around them/putting them off), with the hope of buying (or getting bought) out of the problems later on.

Comment by aausch on Rationality Quotes December 2014 · 2014-12-28T20:43:16.427Z · LW · GW

The map is not the territory. If it's stupid and it works, update your map.

Comment by aausch on Rationality Quotes December 2014 · 2014-12-28T20:22:16.763Z · LW · GW

I largely agree in context, but I think it's not an entirely accurate picture of reality.

There are definite, well-known, documented methods for increasing the resources available to the brain, as well as for doing the equivalent of decompilation, debugging, etc. Sure, the methods are a lot less reliable than what we have available for most simple computer programs.

Also, once you get to debugging or adding resources to programmed systems that even remotely approximate the complexity of the brain, that difference becomes much smaller than you'd expect. In theory you should be able to debug large, complex computing systems - and figure out where to add which resource, or which portion to rewrite or replace; for most such systems, though, I suspect the success rate is much lower than what we get for the brain.

Try, for example, comparing success rates and timelines for psychotherapists helping broken brains rewrite themselves against success rates for startups trying to correctly scale their computer systems without going bankrupt - and those rates are in the context of computer systems that are a lot less complex, in both implementation and function, than most brains. Sure, the psychotherapy methods seem much cruder, and the rates are much lower than we'd like to admit - but I wouldn't be surprised if they easily compete with, if not outperform, the success rates for fixing broken computer systems.

Comment by aausch on Rationality Quotes December 2014 · 2014-12-28T19:52:57.193Z · LW · GW

This whole incident is a perfect illustration of how technology is equalizing capability. In both the original attack against Sony, and this attack against North Korea, we can't tell the difference between a couple of hackers and a government.

Schneier on Security blog post

Comment by aausch on Rationality Quotes December 2014 · 2014-12-04T18:05:50.809Z · LW · GW

“Never confuse honor with stupidity!” ― R.A. Salvatore, The Crystal Shard

Comment by aausch on Rationality Quotes December 2014 · 2014-12-04T01:01:15.074Z · LW · GW

It's fun to contemplate alternative methods for avoiding/removing these barriers.

Comment by aausch on How can one change what they consider "fun"? · 2014-12-02T00:28:57.502Z · LW · GW

You quote Feynman, then proceed to ignore the thing you quoted.

You're ignoring two options that fall right out of the quote:

  1. Get people to pay you to play videogames. If you're any good, IT'S EASY. If it's not easy, maybe you're not that good.
  2. Time-box exploration of other things you might find interesting.

Comment by aausch on Rationality Quotes November 2014 · 2014-11-30T00:03:45.577Z · LW · GW

Google him? From the first three search results:

  • a very successful pro football career (ie, top 0.0002 athletes)
  • an acclaimed/highly successful training/coaching/public-speaking/inspirational-speaking career
  • pastor, pro writer, sports coach, successful serial entrepreneur

Utilons, hedons, altruist-ons, successfully getting others to win - by most measures, few people have won as much, as quickly, as he has, at only about 60% of the way through his life expectancy.

Comment by aausch on Rationality Quotes November 2014 · 2014-11-27T00:25:10.493Z · LW · GW

Has anyone been keeping a reading list selecting exclusively for heroes with awesome schemes?

Comment by aausch on Rationality Quotes November 2014 · 2014-11-27T00:20:02.509Z · LW · GW

"Any way that wins is a good way to win" is a common theme around here.

Comment by aausch on Rationality Quotes November 2014 · 2014-11-21T14:19:16.733Z · LW · GW

I don't understand. What's the point of going to all the trouble required to wake up at 3 am, only to then waste your time being tired and/or depressed?

Why do you assume that someone who has the intelligence, self-control, and dedication required to identify that waking up at 3 am is a requirement for success, make a plan to ensure he can deliver on that requirement, and then follow through - would then fail so terribly on other fronts?

Comment by aausch on Rationality Quotes November 2014 · 2014-11-19T19:28:01.842Z · LW · GW

Somebody said, ‘ET, why do you get up at three o’clock?’ Why not? If all I have to do is wake up at three o’clock in the morning and my family can live like they want to live, and I can change the world… three o’clock? Have you lost your mind? I will get up at three every day. Why? Because my why is greater than my sleep.

-- Eric Thomas

(appeals to: http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life)

Comment by aausch on Rationality Quotes November 2014 · 2014-11-16T16:13:39.854Z · LW · GW

[in the context of creatively solving a programming problem]

"You will be wrong. You're going to think of better ideas. ... The facts change. ... When the facts change, do not dig in. Do it over again. See if your answer is still valid in light of the new requirements, the new facts. And if it isn't, change your mind, and don't apologize."

-- Rich Hickey

(note that, in context, he tries to differentiate between reasoning with incomplete information - which you don't need to apologize for; just change your mind and move on - and genuine mistakes or errors)

Comment by aausch on Rationality Quotes April 2014 · 2014-04-30T05:14:54.807Z · LW · GW

I haven't seen these mentioned in this thread, so I thought I'd add them, since they're probably valid and worth thinking about:

  • the utility of understanding math, combined with the skills required for doing things such as mathematical proofs (or having a deep understanding of physics), is low for most humans - much lower than rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most will attain, and that the amount of abstract math and physics exposure in that time is very small. Teaching such things in average classrooms may, on average, be both inefficient and unfair to the majority of students. You're looking for knowledge and understanding in all the wrong places.

  • the vast majority of public education systems are, pragmatically speaking, tools purpose-built to produce model citizens, with intelligence and knowledge gains seen as beneficial but not necessary side effects. I.e., as long as the kids are off the streets, and if they're going to get good jobs as a side effect, that's a bonus. You're using the wrong tools for the job (either use better tools, or misuse the tools you have to get the job you want done right).

Comment by aausch on Nonperson Predicates · 2014-04-01T03:42:56.124Z · LW · GW

I'm curious whether there is a useful distinction between a non-sentient and a sentient modeller here.

A sentient modeller would be able to "get away" with using sentient models more easily than a non-sentient modeller, correct?

Comment by aausch on Rationality Quotes September 2013 · 2013-09-07T19:13:02.708Z · LW · GW

“The first magical step you can do after a flood,” he said, “is get a pump and try to redirect water.”

-- Richard James, founding priest of a Toronto based Wicca church, quoted in a thegridto article

Comment by aausch on Reflection in Probabilistic Logic · 2013-07-06T22:20:25.504Z · LW · GW

When reading this paper and the background, I have a recurring intuition that the best approach to this problem is a distributed, probabilistic one. I can't seem to make this more coherent on my own, so I'm posting my thoughts in the hope that discussion will make it clearer:

I.e., have a group of related agents, with various levels of trust in each other's judgement, each individually assess how likely a descendant is to take actions which only progress towards a given set of goals.

While each individual agent may only be able to assess a subset of an individual descendant's actions, a good mix of agents may be able to provide complete, or near enough to complete, coverage of the descendant's actions.

An individual agent can then send out source code for a potential descendant and ask its siblings for feedback - only deciding whether to produce the descendant if enough of the right siblings respond with a yes.
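The sibling-feedback step could be sketched as a trust-weighted quorum vote. This is purely a toy illustration of the comment's idea - the names, the weighting scheme, and the threshold are all invented assumptions, not anything from the paper:

```python
def approve_descendant(assessments, trust, threshold=0.9):
    """Trust-weighted quorum over sibling judgements.

    assessments: sibling -> that sibling's estimated probability that
                 the descendant only takes goal-progressing actions
    trust:       sibling -> weight this agent places on that sibling's
                 judgement
    Returns True only if the trust-weighted average assessment clears
    the (arbitrarily chosen) threshold.
    """
    total = sum(trust.get(s, 0.0) for s in assessments)
    if total == 0:
        return False  # no trusted sibling responded; refuse by default
    score = sum(trust.get(s, 0.0) * p for s, p in assessments.items()) / total
    return score >= threshold
```

A real version would also need each sibling's estimate to cover a stated subset of the descendant's behaviour, so the group can check that the subsets jointly cover everything - the averaging above glosses over that entirely.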

Comment by aausch on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-04T22:51:08.982Z · LW · GW

The story clearly states Harry's explicit interest in not attending school, so he wouldn't have tried anything to change his sleep pattern for that purpose, and I doubt that by the age of 10 he'd found any other reason important enough to motivate sleep-pattern-changing therapy.

I also doubt his parents' preferences matter here, and even if they did prefer he change his habits, I doubt they'd press him into therapy without his explicit, cooperative interest.

Comment by aausch on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-04T22:25:24.945Z · LW · GW

To me, all of this is more evidence for the Harrymort branches; Harry's dark side finally has the ability to directly sway Harry's actions.

Also note that Harry is explicitly not counting the possibility that his own actions have been affected by memory charms, etc...

Comment by aausch on Rationality Quotes March 2013 · 2013-03-09T20:10:32.065Z · LW · GW

/sidetrack Wow, awesome fanfic! /sidetrack Please promote it more prominently if you haven't so far; I think many HPMOR fans would appreciate the reference.

Comment by aausch on Efficient Charity · 2013-01-07T16:49:31.647Z · LW · GW

Is anyone doing charitable work which covers reducing the incidence of iodine deficiency in third world countries?

Comment by aausch on Rationality Quotes November 2012 · 2012-12-03T07:45:34.862Z · LW · GW

I found the quote amusing specifically because of this ambiguity (modulo your first point - the question of values seems tangential to me).

I found the mix of optimism (i.e., the assumptions that no extinction-type events will occur, that there will be a continuous descendant-type relationship between generations far into our future, etc.) and pessimism (i.e., the assumption that, on a large enough time scale, most architectural components traceable to now-humans will become obsolete) poignant.

Comment by aausch on Rationality Quotes November 2012 · 2012-11-14T22:44:16.031Z · LW · GW

Bokonon: One day the enhanced humans of the future will dig through their code, until they come to the core of their own minds. And there they will find a mass of what appears to be the most poorly written mess of spaghetti code ever devised, its flaws patched over by a massive series of hacks.

Koheleth: And then they will attempt to rewrite that code, destroying the last of their humanity in the process.

The Dialogues Between Bokonon and Koheleth

Comment by aausch on The Strangest Thing An AI Could Tell You · 2012-11-09T20:27:53.838Z · LW · GW

Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier); early childhood behaviour is clear evidence of this.

"Neurons" and "brains" are the damaged/mutated results of a mutated "space virus", or equivalent. All of our individual actions and collective behaviours are biased in ways that are externally obvious but not visible to us, optimizing for:

  1. terraforming the planet in expectation of invasion (i.e., global warming, high CO2 pollution)

  2. spreading the virus into space, with a built-in bias for spreading away from our origin (Voyager's direction)

Comment by aausch on Rationality Quotes November 2012 · 2012-11-09T19:18:14.996Z · LW · GW

For some reason, I interpreted Girl 1 to be a Boy.

Comment by aausch on 2012 Less Wrong Census/Survey · 2012-11-06T17:18:59.882Z · LW · GW

Censused!

Comment by aausch on Rationality Quotes August 2012 · 2012-08-05T19:52:35.195Z · LW · GW

Did you teach him wisdom as well as valor, Ned? she wondered. Did you teach him how to kneel? The graveyards of the Seven Kingdoms were full of brave men who had never learned that lesson.

-- Catelyn Stark, A Game of Thrones, George R. R. Martin

Comment by aausch on The Wannabe Rational · 2011-06-28T03:30:41.015Z · LW · GW

I've since learned that some people use the word "rationality" to mean "skills we use to win arguments and convince people to take our point of view to be true", as opposed to the definition which I've come to expect on this site (currently, on an overly poetic whim, I'd summarize it as "a meta-recursively applied, optimized, truth-finding and decision making process" - actual definition here).

Comment by aausch on The "Outside the Box" Box · 2011-06-14T04:34:14.959Z · LW · GW

The Monty Python link is stale.

Comment by aausch on Teachable Rationality Skills · 2011-05-28T20:56:16.835Z · LW · GW

Exercise: Dancing

Single/partnered dancing lessons. These increase body awareness and consciousness of body-language signals, both emitted and received, and practice basic skills that can lead to other benefits - confidence speaking with strangers, and hugging at meetups.

Comment by aausch on Teachable Rationality Skills · 2011-05-28T20:53:51.905Z · LW · GW

A more challenging alternative might be to try getting a handsome guy to show genuine affection - i.e., give you a hug and some words of encouragement ("don't worry about it, you'll do well on that test") - in exchange for nothing offered.

Comment by aausch on Teachable Rationality Skills · 2011-05-28T20:46:02.426Z · LW · GW

Maybe keep track of strong emotional reactions, with modifiers for how strongly they're affecting your response to the conversation.

Comment by aausch on Of Gender and Rationality · 2011-05-22T07:54:15.414Z · LW · GW

I'm trying to understand what is bad about this idea.

Are you maybe opposed to details of the implementation? Would you still think the idea is bad if the option to filter out results were opt-in and explicitly stated? For example, offer users an "only use votes from teenagers when displaying data on the site" option, which they can enable or disable at will.

Comment by aausch on Of Gender and Rationality · 2011-05-22T00:40:37.221Z · LW · GW

Are you opposed to it because it's divided along gender lines? Would you be more receptive to it if it was divided along, say, age lines, or proficiency in rationality lines?

Comment by aausch on Of Gender and Rationality · 2011-05-22T00:14:23.901Z · LW · GW

I'm a bit confused by the downvotes. Did I miss something? I figured that my suggestion, or some approximation in the same solution space, would provide both useful information about the cause of the gender imbalance and tools to try to address it.