Posts

Leukemia Has Won 2019-02-20T07:11:13.914Z
Has The Function To Sort Posts By Votes Stopped Working? 2019-02-14T19:14:15.414Z

Comments

Comment by capybasilisk on Building AGI Using Language Models · 2020-11-10T13:01:31.268Z · LW · GW

These teaser excerpt posts are really annoying.

Just post the whole thing here. If it's good, I'll check out the rest of your stuff, but as it is I don't click through out of principle.

Comment by capybasilisk on Prediction = Compression [Transcript] · 2020-09-21T18:21:54.204Z · LW · GW

The Arbital entry on Unforeseen Maximums [0] says:

"Juergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, gave a talk proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time. Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or the construction of art and highly aesthetic objects.

Yudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI."

[0] https://arbital.greaterwrong.com/p/unforeseen_maximum/

Comment by capybasilisk on My (Mis)Adventures With Algorithmic Machine Learning · 2020-09-21T12:21:42.127Z · LW · GW

Thanks for sharing this.

Would there be any advantages to substituting brute force search with metaheuristic algorithms like Ant Colony Optimization?
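For concreteness, here is a minimal sketch of what that substitution might look like: a toy Ant Colony Optimization run on a tiny symmetric TSP instance, compared against brute-force enumeration. The instance, parameters, and function names are all hypothetical choices for illustration, not drawn from the post being discussed.

```python
import random
from itertools import permutations

# Toy 5-city symmetric distance matrix (illustrative values).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(DIST)

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def brute_force():
    # Exhaustive search: guaranteed optimal, but factorial-time in city count.
    return min((list(p) for p in permutations(range(N))), key=tour_length)

def aco(n_ants=20, n_iters=50, evap=0.5, alpha=1.0, beta=2.0, seed=0):
    # Minimal Ant System: ants build tours probabilistically, biased by
    # pheromone (alpha) and inverse distance (beta); pheromone evaporates
    # and is reinforced along the best tour found so far.
    rng = random.Random(seed)
    pher = [[1.0] * N for _ in range(N)]
    best, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [rng.randrange(N)]
            while len(tour) < N:
                cur = tour[-1]
                cands = [c for c in range(N) if c not in tour]
                weights = [
                    (pher[cur][c] ** alpha) * ((1.0 / DIST[cur][c]) ** beta)
                    for c in cands
                ]
                tour.append(rng.choices(cands, weights=weights)[0])
            length = tour_length(tour)
            if length < best_len:
                best, best_len = tour, length
        # Evaporate everywhere, then deposit along the best-so-far tour.
        for i in range(N):
            for j in range(N):
                pher[i][j] *= (1.0 - evap)
        for i in range(N):
            a, b = best[i], best[(i + 1) % N]
            pher[a][b] += 1.0 / best_len
            pher[b][a] += 1.0 / best_len
    return best, best_len

if __name__ == "__main__":
    tour, length = aco()
    print("ACO tour length:", length, "optimal:", tour_length(brute_force()))
```

On an instance this small, brute force is obviously fine; the potential advantage of metaheuristics like ACO only shows up when the search space is too large to enumerate.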

Comment by capybasilisk on The universality of computation and mind design space · 2020-09-14T12:32:15.936Z · LW · GW

Of possible interest, Roman Yampolskiy's paper, "The Universe Of Minds".

https://arxiv.org/pdf/1410.0369.pdf

The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some interesting properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology. A list of open problems for this new field is presented.

Comment by capybasilisk on Charting Is Mostly Superstition · 2020-08-24T10:16:41.441Z · LW · GW

Are random trading strategies more successful than technical ones?

In this paper we explore the specific role of randomness in financial markets, inspired by the beneficial role of noise in many physical systems and in previous applications to complex socioeconomic systems. After a short introduction, we study the performance of some of the most used trading strategies in predicting the dynamics of financial markets for different international stock exchange indexes, with the goal of comparing them with the performance of a completely random strategy. In this respect, historical data for FTSE-UK, FTSE-MIB, DAX, and S&P500 indexes are taken into account for a period of about 15-20 years (since their creation until today).

...

Our main result, which is independent of the market considered, is that standard trading strategies and their algorithms, based on the past history of the time series, although have occasionally the chance to be successful inside small temporal windows, on a large temporal scale perform on average not better than the purely random strategy, which, on the other hand, is also much less volatile. In this respect, for the individual trader, a purely random strategy represents a costless alternative to expensive professional financial consulting, being at the same time also much less risky, if compared to the other trading strategies.
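The intuition behind that result can be sketched in a few lines. This is not the paper's methodology, just a hypothetical toy simulation: on a driftless random-walk price series, a naive momentum rule and a coin-flip strategy both average out to roughly zero profit.

```python
import random
import statistics

def simulate(strategy, n_steps=2000, n_runs=200, seed=0):
    # Run a strategy over many simulated random-walk price paths and
    # return its mean profit-and-loss across runs.
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        pnl, prev_move = 0.0, 1
        for _ in range(n_steps):
            position = strategy(prev_move, rng)  # +1 long, -1 short
            move = rng.choice([-1, 1])           # driftless random walk
            pnl += position * move
            prev_move = move
        totals.append(pnl)
    return statistics.mean(totals)

def momentum(prev_move, rng):
    # "Technical" rule: follow the direction of the last move.
    return prev_move

def coin_flip(prev_move, rng):
    # Purely random strategy.
    return rng.choice([-1, 1])

if __name__ == "__main__":
    print("momentum avg P&L:", simulate(momentum))
    print("random   avg P&L:", simulate(coin_flip, seed=1))
```

Since the next move is independent of the past, no rule conditioned on history has an edge here; real markets are the open empirical question the paper tries to address.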

Comment by capybasilisk on Search versus design · 2020-08-23T14:33:33.658Z · LW · GW

But a lot of that feeling depends on which animal's insides you're looking at.

A closely related mammal's internal structure is a lot more intuitive to us than, say, an oyster or a jellyfish.

Comment by capybasilisk on AI safety as featherless bipeds *with broad flat nails* · 2020-08-20T12:18:51.002Z · LW · GW

For example, take the idea that an AI should maximise “complexity”. This comes, I believe, from the fact that, in our current world, the category of “is complex” and “is valuable to humans” match up a lot.

The Arbital entry on Unforeseen Maximums elaborates on this:

Juergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, gave a talk proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time. Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or the construction of art and highly aesthetic objects.

Yudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI.
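Yudkowsky's objection can be illustrated with a toy demo (my construction, not from the original talk): XOR-encrypt a highly regular stream with a pseudorandom keystream so a generic compressor finds it incompressible, then "reveal the secret" and watch the compressed size collapse, manufacturing a huge compression gain.

```python
import random
import zlib

# A highly regular "sensory stream".
regular = bytes([0, 1] * 50_000)

# The "cryptographic secret": a PRNG seed driving a keystream.
rng = random.Random(42)
keystream = bytes(rng.randrange(256) for _ in range(len(regular)))
ciphertext = bytes(a ^ b for a, b in zip(regular, keystream))

# Before the secret is revealed: the stream looks like noise to zlib.
before = len(zlib.compress(ciphertext))

# After the reveal: decrypting with the keystream makes the stream
# perfectly predictable again, so it compresses to almost nothing.
decrypted = bytes(a ^ b for a, b in zip(ciphertext, keystream))
after = len(zlib.compress(decrypted))

print(f"compressed size before reveal: {before}, after: {after}")
print(f"'compression gain' from the reveal: {before - after} bytes")
```

An agent rewarded for gains in compression could farm this loop indefinitely, which is exactly why "maximize compression progress" is an unforeseen-maximum hazard.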

Comment by capybasilisk on What are some Civilizational Sanity Interventions? · 2020-06-14T10:03:39.722Z · LW · GW

Robin Hanson posits that the reason why there isn’t wider adoption of prediction markets is because they are a threat to the authority of existing executives.

Before we reach for conspiracies, maybe we should investigate just how effective prediction markets actually are. I'm generally skeptical of arguments in the mold of "My pet project x isn't being implemented due to the influence of shadowy interest group y."

As someone unfamiliar with the field, are there any good studies on the effectiveness of prediction markets?

Comment by capybasilisk on What are some Civilizational Sanity Interventions? · 2020-06-14T09:53:09.243Z · LW · GW

This would just greatly increase the amount of credentialism in academia.

I.e., unless you're affiliated with some highly elite institution or renowned scholar, no one's even gonna look at your paper.

Comment by capybasilisk on Seeing the Smoke · 2020-02-29T11:15:28.825Z · LW · GW

Look on the bright side. If it turns out to be a disaster of Black Death proportions, the survivors will be in a much stronger bargaining position in a post-plague labour market.

Comment by capybasilisk on Since figuring out human values is hard, what about, say, monkey values? · 2020-01-03T11:14:34.992Z · LW · GW

Consider the trilobites. If there had been a trilobite-Friendly AI using CEV, invincible articulated shells would comb carpets of wet muck with the highest nutrient density possible within the laws of physics, across worlds orbiting every star in the sky. If there had been a trilobite-engineered AI going by 100% satisfaction of all historical trilobites, then trilobites would live long, healthy lives in a safe environment of adequate size, and the Cambrian explosion (or something like it) would have proceeded without them.

https://www.lesswrong.com/posts/cmrtpfG7hGEL9Zh9f/the-scourge-of-perverse-mindedness?commentId=jo7q3GqYFzhPWhaRA

Comment by capybasilisk on Approval Extraction Advertised as Production · 2019-12-17T12:29:23.445Z · LW · GW

Altman's responsiveness test might be more useful if it measured a founder's reply time to an underling rather than to the guy holding the purse strings/contact book.

Comment by capybasilisk on Approval Extraction Advertised as Production · 2019-12-17T12:17:36.419Z · LW · GW

An alternative hypothesis is that Graham has a bunch of tests/metrics that he uses to evaluate people on, and those tests/metrics work much better when people do not try to optimize for doing well on them

Isn't it a bit odd that PG's secret filters have the exact same output as those of staid, old, non-disruptive industrialists?

I.e., strongly optimizing for passing the tests of <Whitebread Funding Group> doesn't seem to hurt you on YC's metrics.

Comment by capybasilisk on Apocalypse, corrupted · 2019-06-27T12:16:15.342Z · LW · GW

Relatedly, is there any work being done to come up with distance measures for agent states (including any of momentary experiences, life-sums, or values)? We talk a lot about marginal and absolute value of lives and make comparisons of experiences (motes vs torture) - what, if anything, has been done to start quantifying?

It turns out that measuring the distance between possible worlds is...complicated.

Comment by capybasilisk on For the past, in some ways only, we are moral degenerates · 2019-06-08T15:00:33.560Z · LW · GW

What about when the poor men come barging through the rich man’s gate? I take it that too is factored into the divine plan?

Comment by capybasilisk on Evolution "failure mode": chickens · 2019-04-28T08:27:16.676Z · LW · GW

Thank you. Greatly appreciated.

Comment by capybasilisk on Why does category theory exist? · 2019-04-27T11:01:55.895Z · LW · GW

And what interesting insights into reality does it give us?

Risky question to ask when dealing with pure mathematics.

Comment by capybasilisk on Evolution "failure mode": chickens · 2019-04-27T10:50:25.502Z · LW · GW

I don’t think people should use this site to promote their personal blogs. Sure, you can add a link to your blog at the bottom of your post, but this teaser excerpt BS is really irritating. I don’t click through just out of principle.

If you have something to say, post the whole thing here. If I like what you have to say, I might check out your other stuff, but I’m not going to be forced into it.

Comment by capybasilisk on Has The Function To Sort Posts By Votes Stopped Working? · 2019-02-15T06:58:59.360Z · LW · GW

Thanks!