Comment by Agathodaimon on AI: requirements for pernicious policies · 2015-08-02T00:54:15.271Z · LW · GW

I think tool AIs are very dangerous because they allow the application of undigested knowledge

Comment by Agathodaimon on Privileging the Question · 2015-07-12T22:02:00.023Z · LW · GW

Another form of this is asking questions that are motivated by obtaining material goods for the self. More altruistic questions that seek first the profit of others seem less privileged in general.

Comment by Agathodaimon on Ethical Diets · 2015-01-16T23:50:19.866Z · LW · GW

It is better for one's health, and for the planet in terms of emissions, to eat less meat as well.

Comment by Agathodaimon on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T16:23:09.715Z · LW · GW

A breakdown of the risks

Comment by Agathodaimon on Consistent extrapolated beliefs about math? · 2014-09-05T01:03:33.335Z · LW · GW

It sounds like mathematical platonism, which appeals to some, seemingly including Roger Penrose, but it seems connected to other networks of concepts at least in part, and I do not think it should be taken as a given. Perhaps in the future we can model when such belief systems will arise, given access to other information based on the history of the individual in question, but we are not quite there yet. For further reference, one could examine the examples of computationally modelled social behavior in a topic someone created here regarding the reverse engineering of belief systems.

Comment by Agathodaimon on Truth and the Liar Paradox · 2014-09-03T21:12:40.196Z · LW · GW

Using the non-cognitive approach, you could dismiss statements in symbolic logic that do not refer to constructs or events that could come into being. I am referring to constructs in Deutsch's theory.

Comment by Agathodaimon on Overly convenient clusters, or: Beware sour grapes · 2014-09-03T21:06:14.363Z · LW · GW

Thank you

Comment by Agathodaimon on Reverse engineering of belief structures · 2014-08-30T08:43:15.111Z · LW · GW


Comment by Agathodaimon on [Link] Feynman lectures on physics · 2014-08-30T02:28:37.612Z · LW · GW

I wish the audio was available for free

Comment by Agathodaimon on The metaphor/myth of general intelligence · 2014-08-29T05:24:56.984Z · LW · GW

Brains are like cars. Some are trucks: made for heavy hauling, but slow. Some are sedans: economical, fuel efficient, but not very exciting. Some are Ferraris: sleek, fast, and sexy, but they burn through resources like a mother. I'm sure you can come up with more analogies of your own.

Comment by Agathodaimon on Why we should err in both directions · 2014-08-29T05:14:21.639Z · LW · GW

This sounds like the principle of entropy maximization. I recommend reading Wissner-Gross's paper on causal entropic forces.

Comment by Agathodaimon on Another type of intelligence explosion · 2014-08-29T05:08:41.869Z · LW · GW

How would you measure aptitude gain?

Comment by Agathodaimon on Reverse engineering of belief structures · 2014-08-29T03:51:56.933Z · LW · GW

These are on related but separate subjects, but they may be useful to you in your search for knowledge.

Comment by Agathodaimon on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T18:03:03.297Z · LW · GW

Re: bullet 2

Comment by Agathodaimon on Ethical Choice under Uncertainty · 2014-08-11T01:08:52.409Z · LW · GW

Is there a word analogous to prior for actions decided on the basis of one's matrix of possible foreseen futures?

Comment by Agathodaimon on Maximize Worst Case Bayes Score · 2014-07-24T16:37:19.927Z · LW · GW

Ignore the second part of my initial comment; I hadn't read your blog post explaining your idea at that point. I believe your problem can be formulated differently in order to introduce other information necessary to answer it. I appreciate that your approach generalizes to any case, which is why I find it appealing, but I believe it cannot be answered in this fashion: if you examine an integrated system of information by its constituent parts in order to access that information, you must take a "cruel cut" of the integrated system, which may split it into two or more subsystems of information, neither of which can contain the integrated information of the initial system.
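A minimal illustration of the point about cutting an integrated system, using plain mutual information as a stand-in for integrated information (the real measure is much richer): two perfectly correlated bits jointly carry one bit of shared information, but once the system is cut into its parts, neither marginal retains it.

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, from a list of equally likely (x, y) outcomes."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two perfectly correlated bits: 1 bit of information lives only in the whole.
coupled = [(0, 0), (1, 1)]
# Two independent bits: cutting the system loses nothing.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
```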

I would recommend reading:

To be honest, I am still in the midst of digesting this and several other ideas relevant to the computability of consciousness/intelligence, so my remarks will have to stay brief for now. The first part of my initial comment referred to something similar to the example of a 2D Ising model referred to in section C. The information you obtain from a random model of the consistent but incomplete theory will be bounded by other operators acting on the system from outside, which limit the observable state(s) it produces.

As for the second part, perhaps I can rephrase it now. If viewed in this fashion, your initial question becomes one of maximizing the information you get from a physical system. However, if the operators acting on it evolve in time, then the information you receive will not be of T but rather of a set of theories. For example, take a box of gas sitting in the open. As the sun rises and sets, changing the temperature, the information you get about the gas will be of it operating under different conditions, so uppercase phi will be changing in time and you will not be getting information about the same system. However, I do believe you can generalize these systems to a degree and predict how they will operate in a given context while bounded by particular parameters. I am still clarifying my thoughts on this issue.
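A small Metropolis sketch of a 2D Ising model makes the temperature point concrete: the same system observed under different external conditions (here, the temperature an outside operator imposes) yields very different observable states. This is an illustrative toy, not the model from the linked paper; lattice size, sweep count, and seed are arbitrary choices.

```python
import math
import random

def ising_magnetization(n=10, temperature=1.0, sweeps=200, seed=0):
    """Metropolis simulation of an n x n Ising model (J = 1, periodic boundaries).
    Returns absolute magnetization per spin after `sweeps` full-lattice sweeps."""
    rng = random.Random(seed)
    spins = [[1] * n for _ in range(n)]  # start fully ordered
    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / temperature):
                spins[i][j] = -spins[i][j]
    return abs(sum(sum(row) for row in spins)) / (n * n)
```

Below the critical temperature the lattice stays ordered (magnetization near 1); well above it, the same measurement on the same system returns a disordered state, so what you learn depends on the conditions imposed from outside.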

Comment by Agathodaimon on Maximize Worst Case Bayes Score · 2014-07-22T04:24:21.314Z · LW · GW

I will have to think through my response so it will be useful. It may take a few days.

Comment by Agathodaimon on Maximize Worst Case Bayes Score · 2014-07-19T20:25:43.730Z · LW · GW

This seems useful to me. However, there should be regions of equal probability distribution. Have you considered that the probabilities would shift in real time with the geometry of the events implied by the sensor data? For example, what if you translated these statements into a matrix of likely interactions between events?
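For concreteness, a toy version of the quantity the thread is about: the Bayes (log) score of a forecast distribution, and its worst case over outcomes. The distributions below are made-up examples; over n outcomes, the uniform distribution maximizes the worst-case score, since raising any probability forces another below 1/n.

```python
from math import log

def worst_case_bayes_score(probs):
    """Worst-case log score of a forecast: the log of its smallest
    assigned probability (the score received if that outcome occurs)."""
    return min(log(p) for p in probs)

uniform = [0.25] * 4          # hedged forecast over 4 outcomes
skewed = [0.7, 0.1, 0.1, 0.1]  # confident forecast, badly punished in the worst case
```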

Comment by Agathodaimon on False Friends and Tone Policing · 2014-07-19T07:19:17.083Z · LW · GW

Thank you

Comment by Agathodaimon on Downvote stalkers: Driving members away from the LessWrong community? · 2014-07-18T22:04:08.647Z · LW · GW

Your intuition appears to be good. There was a recent paper published on this very topic.

Comment by Agathodaimon on Good books for incoming college students? · 2014-07-18T21:57:41.517Z · LW · GW

Meditations, M. Aurelius

Comment by Agathodaimon on Communicating forecast uncertainty · 2014-07-18T20:05:52.780Z · LW · GW

Unfortunately, there seem to be unavoidable gaps in the spreading of information

Comment by Agathodaimon on Communicating forecast uncertainty · 2014-07-18T19:49:20.463Z · LW · GW

By being a loving person you can convey sincerity and that you are willing to submit your own interests to that of the greater whole.

Comment by Agathodaimon on How deferential should we be to the forecasts of subject matter experts? · 2014-07-18T19:19:15.383Z · LW · GW

I believe we should use analytics to find the commonalities in the opinions of groups of interacting experts

Comment by Agathodaimon on Wealth from Self-Replicating Robots · 2014-07-18T18:59:03.406Z · LW · GW

I am not sure your idea is feasible, but I do wish you luck. I know MIT has been doing interesting work on modular robots, and of all that I have encountered I would direct your attention there. Good luck.

Comment by Agathodaimon on Confused as to usefulness of 'consciousness' as a concept · 2014-07-18T18:40:28.494Z · LW · GW

Game theory has been applied to some problems related to morality. In a strict sense, we cannot prove such conclusions because universal laws are uncertain.