Comments

Comment by boni_bo on The Brain as a Universal Learning Machine · 2015-06-24T17:41:59.415Z · LW · GW

Yes. That paper has been cited in Stuart J. Russell's "Rationality and Intelligence: A Brief Update" and in Valiant's second paper on evolvability.

Comment by boni_bo on The Brain as a Universal Learning Machine · 2015-06-21T22:52:29.290Z · LW · GW

One of the best posts I've read here on LW, congratulations. I think that the most important algorithms the brain implements will probably be less complex than anticipated. Epigenesis and early ontogenetic adaptation are heavily dependent on feedback from the environment and probably very general, even if the 'evolution of learning' and genetic complexity provide some of the domain specifications ab initio. Results considering bounded computation (computational resources and limited information) will probably show that the ULM viewpoint cluster is compatible with the existence of cognitive biases and heuristics in our cognition: http://www.pnas.org/content/103/9/3198

Comment by boni_bo on No peace in our time? · 2015-05-27T17:18:51.560Z · LW · GW

Pinker tries to provide several complementary explanations for his thesis, including game-theoretic ones (asymmetric growth, comparative advantages, and overall economic interdependence) which could be considered "not really nice reasons for measuring our (lack of) willingness to destroy each other". Like SA said, Braumoeller seems to conflate 'not very nice reasons to maintain cooperation' with 'our willingness to engage in war hasn't changed'. And this is one of the reasons why Taleb et al. missed the point of Pinker's thesis. To test whether it's business as usual, whether our willingness, whatever that is isomorphic to, is the same, one needs to verify whether state actors are more likely to adopt the risk-dominant equilibrium than the payoff-dominant equilibrium, or whether there is intransitivity. There is a connection between the benefits of cooperation and the players' willingness to coordinate. What if the inability to dominate places our future in a context where we don't see fat tails in deaths from deadly conflicts? What if the benefits of cooperation increase over time along with the willingness to coordinate? What if it's not business as usual?
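To make that distinction concrete, here is a minimal sketch with illustrative stag-hunt payoffs (the numbers are arbitrary, not taken from Pinker or Braumoeller): both outcomes are Nash equilibria, coordinating pays more, but defection is the safer bet against an uncertain opponent, so the payoff-dominant and risk-dominant equilibria come apart.

```python
# Minimal stag-hunt sketch: two Nash equilibria, one payoff-dominant,
# one risk-dominant. Payoff numbers are illustrative only.

# Row player's payoffs in a symmetric 2x2 coordination game.
# Strategies: "cooperate" (stag) vs "defect" (hare).
a, b = 4, 0   # payoff for cooperating when the other cooperates / defects
c, d = 3, 3   # payoff for defecting when the other cooperates / defects

assert a > c and d > b, "need two strict Nash equilibria (a > c and d > b)"

payoff_dominant = "cooperate" if a > d else "defect"
# Harsanyi-Selten risk dominance for a symmetric 2x2 game:
# (C,C) risk-dominates (D,D) iff the deviation loss a - c exceeds d - b.
risk_dominant = "cooperate" if (a - c) > (d - b) else "defect"

print("payoff-dominant equilibrium:", payoff_dominant)  # cooperate
print("risk-dominant equilibrium:  ", risk_dominant)    # defect
```

The question is then whether state actors are drifting toward the cooperative, payoff-dominant cell or staying in the risk-dominant one.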

Comment by boni_bo on How to Avoid the Conflict Between Feminism and Evolutionary Psychology? · 2014-02-06T04:14:13.488Z · LW · GW

Just to update this thread with recent discussions on EP: the list of all commentaries and responses to SWT's 'The Ape That Thought It Was a Peacock: Does Evolutionary Psychology Exaggerate Human Sex Differences?' is here, and most of the papers are freely available online by googling them. Very easy:

http://www.tandfonline.com/toc/hpli20/24/3#.UvLJIzJdXkU

Comment by boni_bo on An attempt to dissolve subjective expectation and personal identity · 2013-02-23T12:33:29.789Z · LW · GW

"The “subjective system” evolved from something like a basic reinforcement learning architecture, and it models subjective > expectation and this organism's immediate rewards, and isn't too strongly swayed by abstract theories and claims."

I think this overestimates the degree to which a) (primitive) subjective systems are reward-seeking and b) "personal identities" are really definable, non-volatile, static entities rather than folk-psychological dualistic concepts in a Cartesian theater (cf. Dennett). For sufficiently complex adaptive systems (organisms), there is no sufficiently good correlation between their reward signal and their actual intended long-term goals. A non-linear relationship between the present reward signal and the actual long-term/terminal goal in our sensorial and declarative memory creates a selective pressure for multiple senses of personal identity over time. This is precisely the reason why high-level abstract models and rich integration of all kinds of timestamped and labeled instances of sensory data start to emerge inside the unitary phenomenal world-simulations of these organisms when they face social dilemmas where the Nash equilibrium is not Pareto-efficient and noncooperative self-interest is disadvantageous: we have forever-changing episodic simulations of possible identities over time, and some of these simulations are deeply hardcoded into our sense of fairness (e.g. the Ultimatum Game) and empathetic understanding (putting yourself in other organisms' shoes). Organisms started to encode abstractions (memory) about their strategies, goals, and the rewards possibly associated with different changing identities when competing against opponent organisms that use the same level of memory to condition their play on the past (be it for punishment, for helping parents, or for indirect reciprocity toward other identities).

So I don't think that "we" don't base our decisions on our abstract world-model. I think "we" (the personal identities that are possibly encoded in my organism) do base decisions on the abstract world-model that the organism that is "us" is capable of maintaining coherently. Or vice versa: the organism that encodes "us" bases its decisions on top of several potential first-person entities that exist over time. Yes, the subjective expectations are/were important, but to whom?
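As a side note, a minimal sketch (with arbitrary prisoner's-dilemma payoffs) of the kind of social dilemma meant above, where the unique Nash equilibrium is not Pareto-efficient:

```python
# Prisoner's-dilemma sketch: mutual defection is the unique Nash equilibrium,
# yet it is Pareto-dominated by mutual cooperation. Payoffs are illustrative.
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """No player can gain by unilaterally switching their own action."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in "CD")
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in "CD")
    return row_ok and col_ok

for row, col in product("CD", repeat=2):
    print(row, col, "Nash" if is_nash(row, col) else "not Nash", payoffs[(row, col)])
# Only (D, D) is Nash, but (C, C) gives both players strictly more.
```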

This conflict between several potential, self-identifiable, volatile identities is what creates most social dilemmas, paradoxes, problems of collective action, and problems of protecting the commons (tragedy of the commons). The point is not that we have this suboptimal but passable evolutionary solution of "apparently one fuzzy personal identity": we have a solution of several personal identities over time that are plagued by inter-temporal, hyperbolically discounted myopia and by unsatisfactory models of decision theory.
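A minimal sketch of the hyperbolically discounted myopia mentioned above, assuming the standard hyperbolic form V = A / (1 + kD) with made-up amounts and discount rate: the same pair of rewards produces a preference reversal depending on how far away the choice is, which exponential discounting would not produce.

```python
# Hyperbolic discounting sketch: V = A / (1 + k * delay).
# Amounts, delays, and k are illustrative, not from the comment.
def hyperbolic_value(amount, delay_days, k=0.05):
    return amount / (1 + k * delay_days)

# Choosing today: the immediate $50 beats $100 in 30 days.
print(hyperbolic_value(50, 0), hyperbolic_value(100, 30))
# 50.0 vs 40.0 -> take the small reward now

# The same choice viewed a year in advance: preferences reverse.
print(hyperbolic_value(50, 365), hyperbolic_value(100, 395))
# ~2.6 vs ~4.8 -> plan to wait for the larger reward
```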

So I agree with you, but it seems that I'm not thinking about learning in terms of rational, utility-maximizing organisms with one personal identity over time. It seems this position is more related to the notion of Empty Individualism: http://goo.gl/0h3I0

Comment by boni_bo on A brief history of ethically concerned scientists · 2013-02-08T09:28:55.078Z · LW · GW

In 1948 Norbert Wiener, in the book Cybernetics: Or Control and Communication in the Animal and the Machine, said: "Prefrontal lobotomy... has recently been having a certain vogue, probably not unconnected with the fact that it makes the custodial care of many patients easier. Let me remark in passing that killing them makes their custodial care still easier."

Comment by boni_bo on How and Why to Granularize · 2011-05-18T17:09:06.150Z · LW · GW

Now I know why most LWers reported being aspies. I feel at home, I think :)

Comment by boni_bo on How not to be a Naïve Computationalist · 2011-04-16T13:51:12.491Z · LW · GW

What we value as good and fun may increase in volume, because we can discover new spaces with increasing intelligence. Will what we want to protect be preserved if we extrapolate human intelligence? Yes, if this new intelligence is not some kind of mind-blind autistic savant 2.0 who clearly can't preserve high levels of empathy and share the same "computational space". If we are going to live as separate individuals, then cooperation demands some fine-tuned empathic algorithms, so we can share our values with others and respect the qualitative space of others. For example, I may not enjoy dancing or having a homosexual relationship (I'm not a homophobe), but I'm able to extrapolate it from my own values and be motivated to respect its preservation as if it were mine. (How? By simulating it. As a highly empathic person, I can say that it hurts to make others miserable. So it works as an intrinsic motivation and goal.)