Posts

I need to understand more about... 2010-10-07T09:15:22.910Z
Proposal for a structured agreement tool 2010-09-30T23:31:24.793Z

Comments

Comment by DilGreen on Words as Mental Paintbrush Handles · 2016-07-12T11:50:22.623Z · LW · GW

It's been a few years, but the answer is now - yes. Here's a link to a New Scientist article from earlier this year; I'm afraid it's behind a paywall: https://www.newscientist.com/article/2083706-my-minds-eye-is-blind-so-whats-going-on-in-my-brain/ The article documents recent experiments and thinking about people who are poor at, or incapable of (about 2 to 3% report this), forming mental pictures (as opposed to manipulating concepts). Key quote:

To find out how MX’s brain worked, Zeman put him into an MRI scanner and showed him pictures of people he was likely to recognise, including former UK prime minister Tony Blair. The visual areas towards the back of his brain lit up in distinctive patterns as expected. However, when MX was asked to picture Blair’s face in his mind’s eye, those areas were silent. In other words, the visual circuits worked when they had a signal from the outside world, but MX couldn’t switch them on at will (Neuropsychologia, vol 48, p 145).

Test yourself here: http://socrates.berkeley.edu/~kihlstrm/MarksVVIQ.htm

Comment by DilGreen on Excluding the Supernatural · 2010-10-13T00:49:26.781Z · LW · GW

No sane, rational, and sufficiently-educated person puts forward arguments incompatible with science.

The problem with this statement is that it puts 99.999% of everyone 'beyond the pale'. It disallows meaningful conversations about things which have huge functional impacts on all humans, but about which science has little of use or coherence to say. It cripples conversation about things which our current science deems impossible, without allowing for the certainty that key aspects of what is currently accepted science will be superseded in the future.

In other words, it is an example of a reasonable-sounding thing to say that is almost perfectly useless. You have argued yourself into a box.

I would suggest that no sane, rational and sufficiently-educated person ascribes zero probability to irrational-seeming propositions.

Comment by DilGreen on Superexponential Conceptspace, and Simple Words · 2010-10-11T14:05:50.836Z · LW · GW

Infants do not possess many inborn categories, if they have any at all. They perceive the world as directly as their senses permit. But they do not remain this way for long.

This seems to be objectively untrue. Many ingenious experiments with very young children forcefully suggest a wide range of inborn categories, including faces. There is even evidence that male and female children attend to different categories long before they can talk.

Further, there is strong evidence that children have inborn expectations of relationships between sensory inputs. The physics of the eye ensures that images focussed on the retina are upside-down, and experiment suggests that, for a few days, this is how the world is perceived. But babies learn to invert the image so that it tallies with reality. This happens automatically, and within days - presumably through some hard-wired expectation of the interrelation between senses, e.g. proprioception and sight.

Comment by DilGreen on Planning Fallacy · 2010-10-11T13:09:00.511Z · LW · GW

As an architect and sometime builder, and as an excellent procrastinator, I heartily concur with this comment.

The range of biases and of psychological and 'structural' factors at work is wide. Here are a few:

  • 'tactical optimism': David Bohm's term for the way in which humans overcome the (so far) inescapable assessment that 'in the long run, we're all dead'. Specifically, within the building industry, rife with non-optimal ingrained conditions, you wouldn't come to work if you weren't an optimist. Builders who cease to have an optimistic outlook go and find other things to do.

  • maintaining flexibility has benefits: non-trivial projects have hidden detail. It often happens that spending longer working around the project - at the expense of straight-ahead progress - can lead to higher quality at the end, as delayed completion has allowed a more elegant/efficient response to inherent but unforeseen problems.

  • self-application of pressure: as someone tending to procrastinate, I know that I sometimes use ambitious deadlines in order to attempt to manage myself - especially if I can advertise that deadline - as in the study.

  • deadline/sanction fatigue: if the loss incurred for missing deadlines is small, or alternatively if it is purely psychological, then the 'weight' of time pressure is diminished with each failure.

I'm going to stop now, before I lose the will to live.

Comment by DilGreen on Magical Categories · 2010-10-11T12:23:53.361Z · LW · GW

So many of the comments here seem designed to illustrate how extremely difficult it is, even for intelligent humans interested in rationality and trying hard to participate usefully in a conversation about hard-edged situations of perceived non-trivial import, to avoid fairly simplistic anthropomorphisms of one kind or another.

Saying, of a supposed super-intelligent AI - one that works by being able to parallel, somehow, the 'might as well be magic' bits of intelligence that we currently have at best a crude assembly of speculative guesses for - any version of "of course, it would do X", seems - well - foolish.

Comment by DilGreen on Magical Categories · 2010-10-11T12:16:02.378Z · LW · GW

Whether or not the AI finds the abstraction of human happiness to be pertinent, and whether it considers increasing it to be worth sacrificing other possible benefits for, are unpredictable - unless we have succeeded in achieving EY's goal of pre-destining the AI to be Friendly.

Comment by DilGreen on Magical Categories · 2010-10-11T12:08:04.891Z · LW · GW

Surely the discussion is not about whether an AI will be able to be sophisticated in forming abstractions - if it is of any interest at all, then presumably it will be.

But the concern discussed here is how to determine beforehand that those abstractions will be formed in a context characterised here as Friendly AI. The concern is to pre-ordain that context before the AI achieves superintelligence.

Thus the limitations of communicating desirable concepts apply.

Comment by DilGreen on Magical Categories · 2010-10-11T11:48:29.717Z · LW · GW

A utility function measured in dollars seems fairly unambiguously to lead to decisions that are non-optimal for humans, without a sophisticated understanding of what dollars are.

Dollars mean something for humans because they are tokens in a vast, partly consensual and partly reified game. Economics, which is our approach to developing dollar-maximising strategies, is non-trivial.

Training an AI to understand dollars as something more than data points would be similarly non-trivial to training an AI to faultlessly assess human happiness.

Comment by DilGreen on Surprised by Brains · 2010-10-11T02:42:05.422Z · LW · GW

Surely this is not an example of search-space compression, but an example of local islands of fitness within the space? Evolution does not 'make observations', or proceed on the basis of abstractions.

An even number of legs 'works best' precisely for the creatures who have evolved in the curtailed (as opposed to compressed) practical search space of a local maximum. This is not a proof that an even number of legs works best, period.

Once bilateral symmetry has evolved, the journey from bilateralism to any other viable body plan is simply too difficult to traverse. Nature DOES search the fringes of the space of centipedes with an odd number of legs - all the time.

http://www.wired.com/magazine/2010/04/pl_arts_mutantbugs/

That space just turns out to be inhospitable, time and time again. One day, under different conditions, it might not.

BTW, I am not claiming, either, that it is untrue that an even number of legs works best - simply that the evolution of creatures with even numbers of legs and any experimental study showing that even numbers of legs are optimal are two different things. Mutually reinforcing, but distinct.

Comment by DilGreen on Surprised by Brains · 2010-10-11T02:20:21.745Z · LW · GW

This comment crystallised for me the weirdness of this whole debate (I'm not picking sides, or even imagining that I have the capacity to do so intelligently).

In the spirit of the originating post, imagine two worms are discussing the likely characteristics of intelligent life, some time before it appears (I'm using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum - that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two);

Worm1: I tell you it's really important to consider the possibility that these "intelligent beings" might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment...

Worm2: But why can't you see that, as super-intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants and back again. It is hard to imagine that they won't understand that disrupting this flow will be sub-optimal...

I cannot imagine how, should effective AI come into existence, these debates could fail to seem as quaint as the 'how many angels would fit onto the head of a pin' ones that we fondly ridicule.

The problem is that the same people who were talking about such ridiculous notions were also laying the foundation stones of western philosophical thinking, preserving and transmitting classical texts, and developing methodologies that would eventually underpin the scientific method - and they didn't distinguish between them!

Comment by DilGreen on Surprised by Brains · 2010-10-11T02:02:43.340Z · LW · GW

The space of possible gene combinations is not compressible - under the evolutionary mechanism.

The space of behaviours of Turing machines is not compressible, in the terms in which that compression has been envisaged.

The mechanism that compresses search space that Believer posits is something to do with brains; something to do with intelligence. And it works - we know it does: Kekulé works on the structure of benzene without success; sleeps, dreams of a serpent biting its own tail, and, waking, conceives of the benzene ring.

The mechanism (and everyone here believes that it is a mechanism) is currently mysterious. AI must possess this mechanism, or it will not be AI.

Comment by DilGreen on Surprised by Brains · 2010-10-11T01:52:36.548Z · LW · GW

If nothing else, hugely entertaining, and covers a great deal of ground too.

Comment by DilGreen on The First World Takeover · 2010-10-11T00:35:29.688Z · LW · GW

Even a very small step forward in evolution, taken as a 'short-cut', would result in failure. Life changed the chemistry around it - the headline being the relative abundance and influence of free oxygen relative to CO2.

The point is that the search is ALWAYS for near neighbour variants, and even then, the huge majority of these are failures.

The (seemingly) vastly improbable success of variants that are not near neighbours has, I think, to do with complexity and the concomitant law of unintended (in this context, 'unwelcome' would be a better word, since no intention is involved) consequences. The larger the step, the exponentially greater the probability of catastrophic corollary implications.
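As a toy illustration of that exponential scaling (the independence assumption and the numbers are mine, purely for the example): if each additional simultaneous change is survivable with independent probability $(1 - p)$, then a $k$-step jump is viable with probability

$$P(\text{viable}) \approx (1 - p)^k$$

With $p = 0.9$, a one-step variant survives 10% of the time; a five-step jump, roughly 0.001% of the time.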

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-09T23:34:13.943Z · LW · GW

I imagine this is getting up-voted here in response to the sentiment, and I'm not going to vote it down. But this approach is more often used by deists against rationalists, and the next step is book-burning.

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-09T23:17:00.445Z · LW · GW

The interesting thing about minds is that they are able to produce interesting conjunctions of and inferences from, seemingly unrelated data/experiences. Minds appear to be more than the sum of their experiences. This ability appears to defy the best efforts of coders to parallel.

EDIT: This got voted down, perhaps because of the above; it may be worth me stating that I am not posing a 'mysterious question' - the key words are 'appears to' - in other words, this is an aspect which needs significant further work.

I consider almost all code 'banal', in that almost all code 'performs little computation of interest'. Pavitra clearly distinguishes between 'interest' and 'value'.

Surely one way of looking at AI research is that it is an attempt to produce code that is not banal?

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-09T23:08:48.858Z · LW · GW

The implication is that connections between data are made by minds, and that minds that are not banal can make new and interesting connections between data.

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-09T22:14:44.744Z · LW · GW

From a European perspective, and simultaneously from the perspective of one who sees most state-sanctioned educational approaches as almost comically counter-productive, the idea that appears common in the US, that home-schooled = fundamentalist Christian parents, is confusing. Many home educators in Europe are specifically atheist.

Comment by DilGreen on The Irrationality Game · 2010-10-09T21:47:49.207Z · LW · GW

Um. I've still failed to be clear.

The nature of AI is that it is inherently so complex that, although we may well be able to get better at predicting the kinds of characteristics that might result from implementation, the actuality of implementation will likely not just surprise us, but confound us.

I'm saying that any attempt to develop approaches that lead to Friendly AI, while surely interesting and as worthwhile as any other attempt to push understanding, cannot be relied on by implementers of AI as more than hopeful pointers.

It's the relationship between the inevitable surprise and the attitude of researchers that is at the core of what I was trying to say, but having started out attempting to be contrarian, I've ended up risking appearing mean. I'm going to stop here.

Comment by DilGreen on The Irrationality Game · 2010-10-07T14:40:41.593Z · LW · GW

I accept this analysis of what I wrote. In the attempt to be concise, I haven't really said what I meant very clearly.

I don't mean that "we can't say anything about AI", but what I am saying is that we are a very long way from being able to say anything particularly worth saying about AI.

By which I mean to say that we are in some situation analogous to that of a 19th century weather forecaster, trying to predict the following week's weather. It's worth pushing the quality of the tools and the analysis, but don't expect any useful, real-world applicable information for a few lifetimes. And my confidence goes up the more I think about it.

Which, in the context of the audience of LW, I hoped would be seen as more contrarian than it has been! Perhaps this clarification will help.

Comment by DilGreen on I need to understand more about... · 2010-10-07T14:28:31.038Z · LW · GW

Well, I promised I wouldn't, so I won't, but there are lots of broad statements without justification here that I would love to expand on.

Nevertheless my actual question is being answered, if you and SarahC are at all representative (obviously, I understand that many other opinions will exist). Climate change IS seen as a serious threat, but the idea of changing lifestyles/the direction of industrialisation is seen variously as difficult/impossible/not objectively worthwhile - so we'll deal with the consequences as they arise.

Comment by DilGreen on I need to understand more about... · 2010-10-07T14:11:05.867Z · LW · GW

Thank you

EDIT - I've tried altering my prefs and still can't find my post - even though I CAN find ones with scores below -30...

Comment by DilGreen on Sam Harris' surprisingly modest proposal · 2010-10-07T09:35:18.598Z · LW · GW

That is the awful thing about the interaction of humans and dogma.

What can start out primarily as a means of advertising allegiance can easily, particularly in stressed circumstances, become a trap.

To me, someone who undertakes a suicide bombing mission would appear to be someone who believes in the stuff about heaven et al. very seriously indeed. However, journalists who have taken care to look into the real circumstances of these people have suggested that at least some of them are not particularly fervent believers, and have diverse reasons for participating. [http://www.newsweek.com/2008/07/29/dressed-to-kill.html] Perhaps this is an extreme case of the argument set out in this post: http://lesswrong.com/lw/2r0/dont_judge_a_skill_by_its_specialists/

Comment by DilGreen on Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book · 2010-10-07T09:01:17.751Z · LW · GW

This is interesting.

I'd read all of Egan before finding LW/encountering serious singularity/AI thinkers. (I'm a generalist). I read Zendegi recently but didn't immediately connect it with here - I may go and re-read it now.

For the record, I would have to say, though, that Egan's characterisation of his protagonists is weak - a tendency that is, I find, widespread among hard SF writers. Not surprisingly; they are interested in the interactions of imaginably real science with the history and future of humanity. Significant emphasis on the characteristics of particular individuals (making them seem real by letting us understand their particular identity as distinctive) would be likely to undermine their purpose in examining these interactions. It takes a great artist to unite disparate angles on a topic into a whole (I hesitate to use my usual word for this achievement here - I call such an achievement 'transcendent').

Comment by DilGreen on The Irrationality Game · 2010-10-05T22:51:41.312Z · LW · GW

Whenever I see someone with one of those badges that says; 'Lose weight now, ask me how!", I check that they have all their limbs.

Comment by DilGreen on The Irrationality Game · 2010-10-05T19:30:41.350Z · LW · GW

As:

  • formal complexity [http://en.wikipedia.org/wiki/Complexity#Specific_meanings_of_complexity] is inherent in many real-world systems that are apparently significantly simpler than the human brain,

  • the human brain is perhaps the third most complex phenomenon yet encountered by humans [brain is a subset of ecosystem is a subset of universe],

  • and a characteristic of complexity is that prediction of outcomes requires greater computational resource than is required to simply let the system provide its own answer,

any attempt to predict the outcome of a successful AI implementation is speculative. 80% confident
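A minimal illustration of the prediction-cost premise, using elementary cellular automaton Rule 30 as a stand-in for such a system (my choice of example): its update rule is trivial, yet no general shortcut to its state at step n is known other than computing all n steps.

```python
# Rule 30: each cell's next state is left XOR (centre OR right).
# The rule is trivial, but predicting the pattern at step n is not known
# to be any cheaper than running the system for n steps.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def run(width=79, steps=24):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

run()
```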

Comment by DilGreen on The Irrationality Game · 2010-10-05T19:15:34.248Z · LW · GW

Human activity is responsible for a significant proportion of observable climate change. 90% confidence

Comment by DilGreen on Break your habits: be more empirical · 2010-10-05T19:03:29.140Z · LW · GW

Nicely written and useful post. In the context of LW, the singular take on Alice's behaviour as arising from aversion to non-optimal situations seems less strange than it might in a more general arena.

However (without having data to hand), I would suggest that a significant proportion of people in the world whose activity patterns (and even internal reasoning patterns) match Alice's would in fact be behaving that way as a result of fear, rather than as a result of an even-handed assessment of experience.

In which case the prescription might be naive.

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-05T18:40:15.094Z · LW · GW

I think that this quote misses an important point - and am in agreement with Academician.

Although the particular social etiquette habits of different cultures vary widely, many of them serve similar, underlying purposes.

Kurt Vonnegut makes my case beautifully, and as gently as always, in 'Cat's Cradle'. Without going into the plot, there is a 'holy man' (actually, a rationalist in an impossible situation, IMHO); followers of this holy man, when they meet each other, undertake a ritual called 'the meeting of souls' (or similar): they remove their shoes and socks, and sit down, legs extended, foot to foot.

Abstract: Ritual forms of social etiquette are human and beneficial (if not essential): the form that they take is non-essential.

There is a higher order of information in this than in the assumption that all rituals are simply arbitrary game-playing.

Comment by DilGreen on Rationality quotes: October 2010 · 2010-10-05T18:33:53.894Z · LW · GW

"Everything has been said, yet few have taken notice of it. Since all our knowledge is essentially banal, it can only be of value to minds that are not"

Raoul Vaneigem

Comment by DilGreen on Proposal for a structured agreement tool · 2010-10-04T23:35:17.447Z · LW · GW

While I am certainly not against the idea of a tool that can be used to create formal arguments, the proposal has a subtle but radical difference.

DISCLAIMER: I am not a mathematician, and do not fully understand the concepts I attempt to explain in the following.

In his work published as 'Notes on the Synthesis of Form', Chris. Alexander developed an algorithm for converting a matrix of relationship strengths between analysed sub-elements of a design problem into a 'tree-like' structure - in other words, a hierarchical diagram in which each node can have only one connection, to a higher-level node. The number of nodes in each level decreases as one moves upwards, culminating in a single 'master' or 'root' node.

Following the success the publication of 'Notes...' brought, Alexander was employed to work on the development of the metro rail system in San Francisco (the BART), and put his method to work. As a rationalist, he was concerned to find that the results of his work appeared to be failing to fully address the realities of the design problems involved.

His conclusion was that the cause of the problem lay in a necessary function of his transformative algorithm: in order to derive the tree-like diagram, it selected the least significant relationship linkages and broke them, so that some identified real-world relationships were being ignored. And even though these might be ranked lowly, omitting them altogether was destructive.

The essay which captures this understanding is published as 'A City is not a Tree' - read it here: http://www.rudi.net/pages/8755.

In it, Alexander contrasts the tree-like diagram with another: the semi-lattice diagram, which, although still hierarchical, allows for connections across branches, as it were, so that overlapping sets of relationships are legal. Semi-lattices, I believe, are less susceptible to formal logical analysis, but can nevertheless be better mapping tools for complex, real-world systems.

My proposal would deliberately allow for semi-lattice linkages. This would allow, to come up with a quick example, a proposition that called for more cycling to link both to a proposition for less carbon emissions and to a proposition for congestion-reducing transport initiatives.
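A minimal sketch of the structural difference, using that example (the representation and proposition names are illustrative only, not part of the proposal):

```python
# Each proposition maps to the larger-scale propositions it helps to refine.
# Tree: every node may link upwards to at most one parent.
tree = {
    "more cycling": ["less carbon emissions"],  # forced to discard the congestion link
}

# Semi-lattice: a node may refine several larger-scale propositions at once.
semilattice = {
    "more cycling": ["less carbon emissions", "congestion-reducing transport"],
}

def is_tree_like(links):
    """True only if every proposition has at most one upward link."""
    return all(len(parents) <= 1 for parents in links.values())

print(is_tree_like(tree))         # True
print(is_tree_like(semilattice))  # False - overlapping sets are legal
```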

Tree diagrams are fairly useless in addressing real-world conditions, as these are usually formally complex, with elements occurring in overlapping sets more often than not. As a result, policy documents are not structured like tree diagrams; they do adduce all sorts of linkages, but do so in a totally unstructured manner, and are often functionally weak while appearing to address everything. As EY says (everywhere): "A theory that can explain everything, prohibits nothing, and so gives us no advice about what to expect."

My hope for the proposal is that it could bring structured, coherent agreement on sets of principles without the need for total agreement on every aspect of every point.

Comment by DilGreen on The Correct Contrarian Cluster · 2010-10-04T22:53:48.550Z · LW · GW

Can I humbly suggest that a tool along the lines of the one proposed here:

http://lesswrong.com/lw/2rw/proposal_for_a_structured_agreement_tool/

might be useful for the purpose?

Comment by DilGreen on Taking Ideas Seriously · 2010-10-04T22:26:19.313Z · LW · GW

I share the puzzlement of others here that, after a post where bioterrorism, cryonics and molecular nanotechnology are listed as serious ideas that need serious consideration - by implication, to the degree that they might significantly impact the shape of one's 'web of beliefs' - the topics of climate change and mass extinction are given such short shrift, and in terms that, from my point of view, only barely pass muster in the context of a community ostensibly dedicated to increasing rationality and overcoming bias.

I find little rationality and enormous bias in phrases like; "... why the extinction of various species that nobody cares about is such a bad thing".

The ecosystem of the planet is the most complex sub-system of the universe of which we are aware - containing, as it does, among many only partially explored sub-systems, a little over 6 billion human brains.

Given that one defining characteristic of complex systems is that they cannot be adequately modelled without the use of computational resources which exceed the resources of the system itself [colloquially understood as the 'law of unintended consequences'], it seems manifestly irrational to be dismissive of the possible consequences of massive intervention in a system upon which all humans rely utterly for existence.

Whether or not one chooses to give credence to the Gaia hypothesis, it is indisputable that the chemical composition of the atmosphere and oceans are conditioned by the totality of the ecosystem; and that the climate is in turn conditioned largely by these.

Applying probabilistic thinking to the likely impact of bio-terrorism on the one hand, and climate change on the other, we might consider that, um, five people have died as a result of bioterrorism (the work, as it appears, of a single maverick, and thus not even firmly to be categorised as terrorism) since the second world war, while climate change has arguably killed tens of thousands already in floods, droughts, and the like, and certainly threatens human habitat as low-lying islands are inundated by rising sea levels.

Upon these considerations it would appear bizarre to consider expending any energy whatsoever upon bioterrorism before climate change.

Comment by DilGreen on Correspondence Bias · 2010-10-04T16:06:26.599Z · LW · GW

Anecdote exemplifying the point.

My father used to appear in plays at university.

His mother attended a performance in which he played Lucky in 'Waiting for Godot' (http://en.wikipedia.org/wiki/Waiting_for_Godot - please, do tell me how to do this properly here!). Lucky is Pozzo's slave, and is badly treated. Afterwards, she commented that the actor playing Pozzo seemed a deeply unpleasant character, and insisted she could tell, even when my father protested that he was only playing his part and was in fact a nice chap.

Six months later, she attended another performance, in which the same actor played a very sympathetic character. After the performance, she commented on what a lovely fellow he was. On being confronted with her earlier assessment, she was deeply confused.

Comment by DilGreen on Proposal for a structured agreement tool · 2010-10-04T15:47:18.561Z · LW · GW

This looks very interesting - I will have a more thorough look and report back.

Comment by DilGreen on Guessing the Teacher's Password · 2010-10-01T15:13:25.880Z · LW · GW

This reminds me of my own experience as a student who loved chemistry. We were told a series of useful untruths about what matter is as we went through the system.

Molecules and atoms were like billiard balls.

No, that was an approximation - atoms are made of nuclei and electrons which can be visualised as little planetary systems.

No, that was an approximation - electrons, protons, neutrons are more usefully considered as probability functions.

I didn't do science at university level, so I never got to the next level, but quantum theory was waiting for me there.

I did start an electronic engineering course, and there we learned another useful half-truth - the equations that describe the behaviour of a transistor. Only they don't. They describe a manageable function which is something like the behaviour of a transistor - the real-world behaviour is non-linear and discontinuous (truly horrible - I didn't finish the course...).
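For instance - my example, not necessarily the course's equations - the first-order Ebers-Moll model of a bipolar transistor in its active region:

$$I_C \approx I_S \left( e^{V_{BE}/V_T} - 1 \right)$$

A clean exponential one can design with over a comfortable operating region, and which stays silent about the saturation, breakdown and temperature effects that make real devices awkward.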

All of these useful untruths are like passwords - they allow us to reliably accomplish things in the world, but they do not give us real power over or understanding of the domain they address. Nevertheless, it would be hard to do without them.

Comment by DilGreen on Fake Explanations · 2010-10-01T14:32:50.117Z · LW · GW

I think that EY's problem with this point of view is a typical one that I find here at LW: a conception of the rational thinker as a loner in heroic mode, who is expected to ignore all contexts (social, environmental, whatever) that are not explicitly stated as part of the problem presentation. On the other hand, these students were in a physics class, and the question was obviously not part of normal conversation.

Comment by DilGreen on Fake Explanations · 2010-10-01T14:16:28.555Z · LW · GW

Interestingly enough, my teacher, Chris. Alexander (author of 'A Pattern Language'), recounts his entrance test for a physics degree at Cambridge. The applicants were asked to experimentally determine the magnetic field of the earth. He performed the experiment, and came up with an answer he knew to be wrong - wrong by too large a margin to put down to experimental error. A smart chap, he had time to repeat the key part of the experiment and recalculate - he got the same answer. He used the last part of his time to write down his hypothesis for having achieved such a result. And, alone among the students, he was right: a massive electro-magnet was being used on the floor below as part of another experiment.

I believe the advice offered to me as an 18-year-old physics student encountering similar circumstances was simply to show my workings and the incorrect result, and to add that I knew this was not the 'right' answer.

Comment by DilGreen on Proposal for a structured agreement tool · 2010-10-01T12:15:19.321Z · LW · GW

It's good to have positive responses!

Recommendations: I have a feeling that this will turn out to be relatively trivial - as I see it, it is essentially a parallel problem to the one that Google solved, but with a data set many orders of magnitude smaller. The system surveys the relationships between propositions that have been supplied by users (I don't think it need/should infer connections for itself), and determines a relevance score.
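A rough sketch of what that 'parallel problem' could look like, assuming a plain PageRank-style iteration over user-supplied links between propositions (the proposition names and damping value are illustrative only):

```python
# Relevance scoring over a small graph of propositions.
# links[a] lists the larger-scale propositions that proposition a refines.
links = {
    "more cycling": ["less carbon emissions", "reduced congestion"],
    "safe cycle routes": ["more cycling"],
    "less carbon emissions": [],
    "reduced congestion": [],
}

def relevance(links, damping=0.85, iterations=50):
    nodes = list(links)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            # A proposition inherits relevance from each proposition that links to it.
            inbound = sum(score[m] / len(links[m]) for m in nodes if n in links[m])
            new[n] = (1 - damping) / len(nodes) + damping * inbound
        score = new
    return score

for name, s in sorted(relevance(links).items(), key=lambda kv: -kv[1]):
    print(f"{s:.3f}  {name}")
```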

Moderators: I am sure you are right, and that this could easily be a fraught area. Ideally, the system would be flexible enough to allow for groups to self-identify and self-manage in a variety of ways - unlike Wikipedia, where a single ecosystem is the be-all and end-all, there is space here for a wide range of ecosystems. Groups would certainly rise and fall, fail, feud, coalesce; but no propositions will disappear when a group associated with them collapses - the propositions remain for other users to find and work with. The system should be set up as an evolutionary playground, with mechanisms that are rigged to marginally favour groups that coalesce more and flame less. Some sort of ranking system would allow neglected propositions/networks to fall to the bottom.

Groups: I believe that, as humans are a social species, human communities are essential structures and the necessary (but not sufficient) locus for creative responses to problems (even if the responses are associated with individuals, those individuals will generally be more effective within a supportive community). However, the communities of the past generally used restrictive and non-rational codes as social glue (variously: religion, tribal culture, geographic isolation, economic dependence, political domination, etc.). People who wish to use their consciousness to improve their lives have often had a hard time in relation to these codes, and, at an accelerating pace in the last century, have abandoned traditional communities - leaving these dominated by the happily or cynically non-rational. Secular cities with weak communities; fundamentalist rural areas with strong but misguided communities. I am interested in developing tools and mechanisms which can allow people to form communities on the basis of conscious assent. I am enough of a Darwinist not to want to attempt to say what I think those communities should aspire to.

I would use such a tool to attempt to set out my own web of beliefs and interests, in the hope of connecting with like-minded individuals. My hope is that, with a large subset of the 'ground rules' clearly communicated and the relationships between them identified, highly effective development of new/difficult/ill-defined areas could be worked on. I would also use such a platform to make proposals like the one we are discussing here. I would expect certain groups to turn up quickly - cults, obsessives and the like - but I fondly hope that these will either remain tiny or will disintegrate or, most likely, find that the structural characteristics of the tool tend to expose inconsistencies in their thinking, and leave of their own accord. I would expect that political groups might follow the same trajectory, albeit more slowly and with fewer fireworks. I would hope that purposeful individuals with positive and constructive intentions will find it a more congenial arena than wider forums or narrower blogs.

A concrete example: as a founder of a small school, I attempted to establish the principal policies of the school as a network of 'patterns', ranging from fundamentals (Support the developing child; Learning is part of living; Self-governance; Family involvement is fundamental; Ethical consciousness and action) to intermediates (Relationship with the state) to specifics. In the absence of a tool like the one proposed, collaboration with the other people involved was impractical, and we retreated to the standard solution of sets of prose policies, which no-one ever reads, and which are inaccessible and becoming less relevant to actual practice with every passing week.

As to what groups might accomplish, I can say this very succinctly; they might well succeed in expanding the range of metaphysics available to humanity. By which I mean, expanding the number of ways in which we can usefully communicate about phenomena. As this seems to me to be the only reliable yardstick by which to measure 'progress', I modestly suggest that this might be a Good Thing.

Business Plans: given some sort of 'freemium' model, then yes, users/groups wanting privacy/security would be able to pay for various levels of same, or presumably for implementation of stand-alone installations on their own servers. Open-source software offers successful models for this (Drupal is one example).

"basically, it's an interactive prose design document" - I hope not. The structured and templated nature of the Pattern Language approach has a higher order than prose, and should increase both the information content and the utility of the resulting collection of propositions. The book 'A Pattern Language' , which was published in 1977, and conceived years previously, works exactly like a hypertext document, with directed but essentially free-form navigation the implied mode of use. The requirement that all content is directed towards the resolution of the particular problem/proposition at hand is rather powerful in practice, and the requirement that each proposition should be assigned larger scale propositions which it helps to refine, and needs smaller scale propositions to flesh it out in turn imposes a need for clarity of thought about the outcome of the system as a whole.

Engagement with the project: at this stage, comment and discussion is what is needed.

Comment by DilGreen on Proposal for a structured agreement tool · 2010-10-01T10:46:02.459Z · LW · GW

Thanks for the comments. I will try to address them.

I am less concerned about hosting/maintenance than I am about development costs. If it is not successful, then it won't be large, or need large bandwidth - costs will be low. If it is successful, then some sort of freemium, ad-supported model may be possible. Of course, the territory between these two extremes might be awkward. Joel Spolsky [http://en.wikipedia.org/wiki/Joel_Spolsky] has taken his Stack Overflow engine [http://stackoverflow.com/] and got VC funding for an attempt to roll it out as a social network platform [http://stackexchange.com/].

All sorts of propositions are acceptable, as I tried to make clear - although I do think that it will be important to seed the site with 'good' models - perhaps some of the 'sequences' here at LW, for instance. It doesn't matter to me if the system is used by a cult who think that John Lennon was the second coming of the Messiah.

On the 'nit-picking' point, there is a characteristic of the model offered by 'A Pattern Language' which might help - the Conclusion section of each proposition is stated deliberately in open terms, rather than attempting to be rigorously prescriptive. So, instead of 'Thou shalt not kill', one might conclude that killing people is almost never justifiable, and offer links to other propositions on, say, 'self-defence' and 'assisted suicide'. One need not aim for a structure that would satisfy formal logical analysis (although there is no reason not to attempt such). My own propositions would be a set which describe my own web of beliefs, in a way that is cloudy enough to admit nuanced interpretation, but clear enough to be useful. I would hope that this would make it possible (1) for others to register interest in them, and (2) for me to accept refinements suggested by others. I think that many of EY's posts conclude in the same way - offering tools and strategies, rather than hard-and-fast rules.

I think you are right about forking. However, I would contrast this with the status quo. I have been trying to locate my own 'community of interest' on the web for over a decade - I believed that the internet would allow like-minded people - particularly ones with minority interests that could be adequately communicated in words - to find each other. Sites like LW are the poor best that I have found. Why poor best? Well, while there is much of interest here, I am already concerned that certain ideas I hold dear may prove to be anathema here. EY set the site up, and it has a clear and strong thrust. It would be foolish/rude/ineffectual to push ideas which are antithetical to that thrust here. So, sites like this are too rigid - forking is not allowed. More general forums are too loose for the concept of forking to even have a meaning. You see, I would be happy to work with even 5 people at a very high level of agreement. I imagine that there would be many sets of groups which would share broad foundations of shared propositions, but grow increasingly fragmented as the propositions began to operate at more detailed levels. Perhaps it will be important to allow associations of groups.

Precisely these sort of considerations are what I hoped for. Even if this remains a thought experiment, I will find it immensely valuable.

Comment by DilGreen on On the Care and Feeding of Young Rationalists · 2010-10-01T00:46:17.939Z · LW · GW

I am some sort of a rationalist, although not perhaps as hard-line as some. I WAS brought up by rationalist atheists, and have discussed with my sister our mutual absolute inability to imagine becoming religious. While our parents' views were always explicit, they were not imposed. My wife's experience is parallel, and she is similarly perplexed by the idea of religiosity. I don't think rationalist parents need do more than love their children, and do their rational best to talk rationally to their children about issues that arise. I will say that I consider it irrational to deny the impact of irrational/unexplainable phenomena on our existence - perhaps virulent, foaming-at-the-mouth atheism a la Dawkins might engender a flight into religion in a damaged child.

Comment by DilGreen on Book Recommendations · 2010-10-01T00:21:26.551Z · LW · GW

The powerful characteristic of Chris. Alexander's 'A Pattern Language' is not immediately obvious - the patterns themselves are not rigorously researched (the authors admit this, and use a rating system to make clear their own level of confidence in their proposals), and many do not stand the test of time. There are a few which seem to me to be worth paying deep attention to, but I won't go into that here.

The real invention is the idea of a pattern language itself. The work of an architect involves dealing at a wide variety of scales, and along the whole gamut from subjectivity to objectivity. It involves crossing and re-crossing between a number of only loosely related sets of systems, all with the aim of producing something acceptable in terms of function, aesthetics, economy and constructional/structural viability. In short, it is a complex task in an irreducibly (this side of any putative singularity) complex environment.

Humans are not good at dealing with complexity - there is well-established research on the limits of the human brain in handling more than a few ideas at once. This is why reductionist practices have served us so well. However, reductionist practices are all but useless in complex environments, unless you are happy to ignore aspects of that environment which you can't handle - the error of the logical positivists.

Pattern languages offer a tool for managing our understanding of a complex environment, without self-defeating reduction (the error of 'Notes on the Synthesis of Form': in order to arrive at his tree-like diagrams of problems, Alexander had to develop an algorithm that decided which of the identified relationships between parts of the problem should be ignored. The approach of 'Notes' involves deliberately discarding parts of the analysis of the problem).

Each pattern allows for reductive thinking at an appropriate level of perspective, while the explicit links maintained between patterns at larger and smaller scales in a non tree-like 'semi-lattice' help maintain in consciousness and in the design process all the connections which make the situation complex in the first place.

I will admit to being an architect.

Comment by DilGreen on Proposal for a structured agreement tool · 2010-09-30T23:56:27.001Z · LW · GW

I will make some comments myself. As a newcomer to LessWrong, looking around, I have found some 'prior art' - unsurprising, as it was thinking about this proposition that led me here. So, some LessWrong links:

Web of Beliefs/A Rational Identity: http://lesswrong.com/lw/2g0/a_rational_identity/

Navigating IdentitySpace: http://lesswrong.com/r/discussion/lw/2rm/request_for_rough_draft_review_navigating/

Comment by DilGreen on Passing the Recursive Buck · 2010-09-30T21:22:17.088Z · LW · GW

Excellent post - and excellent advice.

I'm fairly new here, and very definitely not an AI 'insider'. But it seems to me that the approach to rationality promulgated here is strongly influenced by experience in thinking about what and how AI might be.

As someone who aspires to rationality, I find much of interest, and much to chew on, as I look around.

This post has crystallised a feeling I was getting that the approach taken here is perhaps fixated on a certain sort of mechanistic rationalism - of the type assumed in many game/economics theoretical approaches.

The example that launches the post is fatally undermined by the philosophically and experientially obvious point (and, luckily for me, a point increasingly supported by the science that comes from using fMRI) that the decision taken was NOT a rational decision. It was largely taken by the unconscious (I prefer 'pre-conscious', for the same reasons that you dislike the word 'emergent' - 'unconscious' has come to be a mystical term, and that is not what I intend).

Rational behaviour is a mode of behaviour - one of many. The reason that increased rationality among humans is desirable is that it is a mode that is almost never practiced. We are - like it or not - creatures with an only lately evolved capacity for rational thought of any kind. Almost everything that we can be or do can be achieved without rational thought (if I ever get the nerve to write a post in this forbiddingly precise atmosphere, I may be able to make this seem less tendentious).

Thus the impact of rational thinking has been out of all proportion to its prevalence. Rational behaviour is like the one-eyed man who is king in the nation of the blind. But for the one eyed man to declare that anything not encompassed by sight is irrelevant or dangerous would not be optimal for his subjects.

So I end up thinking, with regard to progress in artificial intelligence (without the slightest expectation of originality), that if research is focussed on 'artificial rationality', then any recognisable 'intelligence' is unlikely to result.

Comment by DilGreen on Words as Mental Paintbrush Handles · 2010-09-30T20:43:30.683Z · LW · GW

I am the same - when I read, I hear my own voice speaking the words, and am also a poor visualiser of sense data in its sensory form.

The odd thing is that I am highly discriminating in terms of music, art, food (you might call it fussy, I would say I have an eye/ear for true quality...).

Even stranger is that I am an architect, and for the last twenty years I have been developing and practicing (as best I can) imagining non-existent three dimensional forms and relationships, and being very particular about them.

I do have fuzzy mental pictures, but what I really experience is more of a 'gestalt' of the character of the 'right' solution - which I then sketch and attempt to integrate into the technical/practical aspects of the design, often testing back against the 'feeling' of the imagined result. I happen to think that I am quite good at making things which are considered beautiful using this method.

My dreams (of which I remember very few) are experienced as narrative - again more experientially than as pictures or words.

The interesting thing about this thread is that it would seem that there is a wide range of ways in which individuals experience the mental 'currency' brought into consciousness by any given concept or word - yet there is wide acceptance of the quality of certain types of writing. Thus individuals reading Dickens, say, will happily converse together, but are likely to be experiencing the narrative in radically different ways.

Comment by DilGreen on The Virtue of Narrowness · 2010-09-30T06:18:19.322Z · LW · GW

In point of fact, Isaac Newton did not "explain just gravity" - he also invented the calculus and developed important insights into the nature of light, among numerous other contributions to science.

During the same life (presumably as indivisible to him as mine seems to me - but that's another issue), he apparently wrote more on aspects of religiosity than he did on science (according to a lazy skim through the Wikipedia entry), dabbled extensively in alchemical investigations, ran the Royal Mint (and as such was in fact deeply concerned with the "role of money in society" - to significant practical effect at the time), and became an MP.

Of course, this might not impact upon the point you are trying to make - you might just have selected a poor example.

However, casting about for a better example (immediately recognisable names who have made a singular contribution to science but did nothing else of note/had no significant, tangential side interests) - I find it hard to come up with one. Even if there is one, I think that s/he might well be an exception, rather than a rule.

So what's my point?

I feel that your defence of narrowness is too narrow, and that your denunciation of "everything is connected" is too broad.

Everything is indeed connected - this is trivially true; philosophically, logically and physically. As you say, though, the statement only becomes interesting when we start to examine what the connections are; how they function, what the relationship of different connections is, what networks these connections form which can be recognised as recurring patterns that have real effects/can be affected.

In the context of these investigations, narrowness is just a question of perspective, and any notion that operating only at particular level of perspective is 'correct' seems fatuous. Even the suggestion that one level of perspective is generally to be preferred would need careful justification.

In a current, 'real-world' context, consider the designer of a functional aspect of, say, a transport system. We expect the designer to produce something efficient, safe, economical, and practical. We might say; that's it - you have no other responsibility. But each of those requirements can be viewed more or less narrowly.

For the last three hundred years or so, western culture has been tending to suggest to people that they should view the requirements of their task more and more narrowly. And this has appeared to be highly 'successful' - in terms of valuable and significant parameters such as falling mortality, increasing education, an enlarged franchise, rising standards of living, etc. - so that the trend becomes reinforced.

However, it has become evident that this narrowness has led us to ignore the wider network within which we live - the ecosystem of the planet. Our transport designer should no longer consider environmental impacts as 'externalities' that can only distract from the task at hand.

It is becoming incumbent upon us to develop a range of perspectives, and to understand the usefulness and application of them, and how to change perspective while working on a single task. This is hard for an individual. For it to become a cultural mode is monumental.

Narrowness is an effective mode of operation only when it is appropriate. Opening our eyes wide and jumping into a sea of possible connections without prejudging them is another viable mode in appropriate circumstances.

As an architect, I find I need to employ a range of modes, from extreme breadth to extreme narrowness. One metric of an effective architect might well be to look at how well s/he judges what level of breadth/narrowness is appropriate in a given situation.