Open thread, Apr. 03 - Apr. 09, 2017
post by Elo · 2017-04-03T06:58:26.213Z · LW · GW · Legacy · 76 comments
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
76 comments
Comments sorted by top scores.
comment by [deleted] · 2017-04-07T19:30:56.482Z · LW(p) · GW(p)
Final version of thesis going out within 4 days. Getting back into a semi-regular schedule after PhD defense, death in the family, and convergence of job-search on a likely candidate in quick succession. Astrobiology writing likely to restart soon. Possible topics include:
- schools of thought in life origins research
- the nature of LUCA
- recent work on the evolution of potentiated smart animal lineages on Earth
- WTF are eukaryotes anyway
- the fallacies of the Fermi paradox/ 'great filter' concepts
- the fallacies of SETI as it is currently performed
comment by MaryCh · 2017-04-07T15:35:29.714Z · LW(p) · GW(p)
I'm thinking of writing a post on doing 'lazy altruism', meaning 'something having a somewhat lasting effect that costs the actor only a small inconvenience, and is not specifically calculated to do the most amount of good - only the most amount per this exact effort'.
Not sure I'm not too lazy to expand on it, though.
Replies from: Elo, Viliam
comment by Lumifer · 2017-04-03T17:04:30.771Z · LW(p) · GW(p)
Tyler Cowen and Ezra Klein discuss things. Notably:
Replies from: 9eB1, Viliam, Lumifer, Elo, Nate_Rausch, 9eB1
Ezra Klein: The rationality community.
Tyler Cowen: Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?
Ezra Klein: Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.
Tyler Cowen: Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.
↑ comment by 9eB1 · 2017-04-04T16:06:36.654Z · LW(p) · GW(p)
I think no one would argue that the rationality community is at all divorced from the culture that surrounds it. People talk about culture constantly, and are looking for ways to change the culture to better address shared goals. It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with the criticism, which I find ironic.
Where Tyler is wrong is that it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions, and it's nihilistic to imply that all cultures are equal no matter from what shared assumptions they issue forth. Cultures are not interchangeable. Tyler would also have to admit (and I'm guessing he likely would admit if pressed directly) that his culture of mainstream academic thought is "just another kind of religion" to exactly the same extent that rationality is, it's just less self-aware about that fact.
As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.
Replies from: Lumifer, Brillyant
↑ comment by Lumifer · 2017-04-04T17:30:46.800Z · LW(p) · GW(p)
It's sort of silly to say that that means it should be called the "irrationality community."
Notice the name of this website. It is not "The Correct Way To Do Everything".
it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions
Don't ALL cultures have their own particular set of shared assumptions? Tyler's point is that the rationalist culture, says Tyler, sets itself above all others as it claims to possess The Truth (or at least know the True Paths leading in that general direction) -- and yet most cultures have similar claims.
Lucifer
Lucifer is the bringer of light (Latin: lux). Latin also has another word for light: lumen (it's the same root but with the -men suffix). Just sayin' :-P
But I will also admit that the idea of an all-singing all-dancing candelabra has merit, too :-)
↑ comment by Brillyant · 2017-04-04T16:47:54.835Z · LW(p) · GW(p)
It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with the criticism, which I find ironic.
It did seem to be a pretty bold and frontal critique. And "irrationality community" is probably silly. But I agree LW et al. has at times a religious and dogmatic feel to it. In this way the RC becomes something like the opposite of the label it carries. That seems to be his point.
As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.
Yes. Yes.
If this wasn't exactly the mental image I had of Lumifer before, then it is now.
↑ comment by Elo · 2017-04-04T21:00:00.838Z · LW(p) · GW(p)
irrationality community
No one is more critical of us than ourselves. "LessWrong" is less wrong for being humble about it. Hopefully that humility sticks around for a very long time.
Replies from: philh, MrMind
↑ comment by philh · 2017-04-05T09:31:22.611Z · LW(p) · GW(p)
No one is more critical of us than ourselves.
This seems untrue. For example, RationalWiki.
In the past I could also have pointed to some individuals (who AFAIK were not associated with RW, but they could have been) who I think would have counted. I can't think of any right now, but I expect they still exist.
Replies from: dxu
↑ comment by dxu · 2017-04-05T21:38:17.837Z · LW(p) · GW(p)
That's fair, but I also think it largely misses the point of Elo's comment. Here, have (an attempt at) a rephrase:
No community is as prone to self-criticism as the rationalist community.
Replies from: philh, Elo
↑ comment by philh · 2017-04-06T09:22:50.838Z · LW(p) · GW(p)
I think I would rather say, less superlatively, that we're unusually good at self-criticism.
(I do note that I'm comparing my inside view of the rationalist community with my outside view of other communities, so I shouldn't put too much confidence in this.)
(But yes, I agree that I was ignoring the thing that Elo was actually trying to point at.)
↑ comment by Nate_Rausch · 2017-04-05T04:35:56.475Z · LW(p) · GW(p)
A bit tongue-in-cheek, but how about taking Tyler's unfair label as a proposal?
We could start the rationality religion, without the metaphysics or ideology of ordinary religion. Our God could be everything we do not know. We worship love. Our savior is the truth. We embrace forgiveness as the game-theoretical optimal modified tit-for-tat solution to a repeated game. And so on.
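The "forgiveness as modified tit-for-tat" line refers to what is usually called generous tit-for-tat, which does well in noisy repeated prisoner's dilemmas. Below is a minimal Python sketch of that idea, not anything the commenter specified; the payoff matrix, noise rate, and forgiveness probability are made-up illustrative parameters.

```
import random

def generous_tit_for_tat(opponent_last_move, forgiveness=0.1):
    """Cooperate if the opponent cooperated last round; after a defection,
    still cooperate ("forgive") with a small probability instead of retaliating."""
    if opponent_last_move is None or opponent_last_move == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(rounds=1000, noise=0.05, forgiveness=0.1):
    """Two generous tit-for-tat players in a noisy iterated prisoner's dilemma.
    With forgiveness=0 a single accidental defection locks the pair into long
    runs of mutual retaliation; a small forgiveness rate breaks the spiral."""
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    last_a = last_b = None
    total_a = total_b = 0
    for _ in range(rounds):
        a = generous_tit_for_tat(last_b, forgiveness)
        b = generous_tit_for_tat(last_a, forgiveness)
        if random.random() < noise:
            a = "D" if a == "C" else "C"  # accidental slip-up
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = payoffs[(a, b)]
        total_a += pa
        total_b += pb
        last_a, last_b = a, b
    return total_a, total_b

# e.g. compare play(forgiveness=0.0) with play(forgiveness=0.1)
```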
We thoroughly investigate and aggregate the best knowledge humanity currently has on how to live. And we create Rationality Temples worldwide. There will be weekly congregations, with talks on a sequence followed by discussions, on topics such as signalling, Bayesian thinking, and cognitive biases. We propose a three-step way to heaven on earth: identifying worthwhile causes, charting effective solutions, and taking action to achieve them. Lifetime goal is writing a sequence. Compassion meditation and visualisation prayer once per day. Haha, okay perhaps I'm overdoing it.
Using the well-established concepts, rituals and memes of religion is easy to mock, but what if it is also an effective way to build our community and reach our goals?
Replies from: MrMind, Lumifer
↑ comment by MrMind · 2017-04-05T08:02:11.305Z · LW(p) · GW(p)
what if it is also an effective way to build our community and reach our goals?
It surely is an effective way, since by this means all kinds of silly causes have been pursued. But creating a religion out of rationality (lowercase) would defeat its purpose: within the span of a year, rationality would become a password to learn by memory, and the nascent structures would solidify into an attire.
Religions are appealing exactly because they exempt their members from thinking on their own and from accepting hard truths. Rationality instead has more in common with martial arts: it is mostly a question of training and of learning to take many hits.
↑ comment by Nate_Rausch · 2017-04-05T20:15:25.099Z · LW(p) · GW(p)
Well, yes, but couldn't one just make a new religion without those attributes? For example, the first of the ten commandments could be: question everything, including these texts. Be a student, not a follower. Finding fault in ourselves is the highest virtue, free speech, etc.? :-)
↑ comment by Lumifer · 2017-04-05T15:04:38.331Z · LW(p) · GW(p)
Our God could be everything we do not know.
Literally God of the Gaps! :-)
Replies from: gjm
We worship love
↑ comment by gjm · 2017-04-05T16:52:05.537Z · LW(p) · GW(p)
Tangential: I think the "four loves" thing is a bit of a cheat, like the "fifty Eskimo words for snow" meme. The Greeks had different words for describing different kinds of positive interpersonal affect -- but so do we! We have "affection" and "friendship" and "devotion" and "lust" and "loyalty" and "benevolence" and so on.
That doesn't mean there's anything wrong with noticing when one of those words (in this case "love") gets used very broadly, and it doesn't change the fact that "see how other people classify things" is a useful heuristic. But I am entirely unconvinced that there's anything very special about the Ancient Greeks, or the notion of love, in this regard -- other than various historical contingencies involving C S Lewis, Christianity, and the history of the New Testament.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-05T17:29:53.870Z · LW(p) · GW(p)
But I am entirely unconvinced that there's anything very special about the Ancient Greeks
Well, they were pretty special, being the cradle of the Western civilization 'n'all, but in this specific case all I intended was to give the OP a possible list of specific meanings of the word "love" to consider.
Replies from: Nate_Rausch
↑ comment by Nate_Rausch · 2017-04-05T20:13:12.644Z · LW(p) · GW(p)
Fair point. Well, I don't think romantic love is worthy of sacred status in the irrationality religion. Though none of those four seemed quite to fit the love I had in mind.
Perhaps something closer to the Buddhist concept of bodhisattva, meaning altruistic love for all sentient beings?
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-06T00:57:47.079Z · LW(p) · GW(p)
altruistic love for all sentient beings?
Sounds like the plain old Christian love, but with a new cool label :-/
Replies from: bogus
↑ comment by bogus · 2017-04-06T01:11:59.766Z · LW(p) · GW(p)
Ah, Christian love... the kind of altruistic love that makes people tell you that you're a fallen and depraved creature due to the sin of Adam and thus will be burning in Hell forever unless you "accept Jesus as your savior" by becoming a Christian ASAP. (If precedent is any guide, I can already guess that Roko's basilisk will be featured prominently in the new "rationality religion"!)
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-06T02:09:30.711Z · LW(p) · GW(p)
Well, our baseline is what, Buddhist love? There is Buddhist hell as well and surprise! it doesn't sound like a pleasant place. You get there through being enslaved by your lusts and desires -- unless, of course, you accept the teachings of Buddha ASAP :-P
↑ comment by 9eB1 · 2017-04-04T22:15:50.777Z · LW(p) · GW(p)
Bryan Caplan responded to this exchange here
Replies from: tristanm
↑ comment by tristanm · 2017-04-05T16:55:13.056Z · LW(p) · GW(p)
I would object to calling these "devastating counter-examples"; they're more like unsolved problems. It seems overly dramatic. I'm not a perfect Bayesian agent and I use my intuitions a lot, but that is not grounds on which to reject Bayesianism, and I think we could say something similar about consequentialism. I may not know how to perfectly measure relative happiness, or perfectly predict the future, but it doesn't seem like that should be grounds to reject consequentialism entirely in favor of alternatives which don't cope with those issues either.
Replies from: dxu
↑ comment by dxu · 2017-04-05T22:01:26.775Z · LW(p) · GW(p)
One very common error people make is to treat "utilitarianism" and "consequentialism" as if they were one and the same thing. Utilitarianism makes claims about what is moral and what is not. Consequentialism makes claims about what sort of properties a moral criterion should have. Criticisms about utilitarianism, therefore, are often taken also as criticisms of consequentialism, when in fact the two are distinct concepts!
Replies from: Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2017-04-05T23:07:28.402Z · LW(p) · GW(p)
On the other hand, utilitarianism is a subset of consequentialism, so whereas a criticism of utilitarianism is not necessarily a criticism of consequentialism, the converse is true.
comment by turchin · 2017-04-05T08:09:08.299Z · LW(p) · GW(p)
Our article about using nuclear submarines as refuges in case of a global catastrophe has been accepted by the journal Futures, and its preprint is available online.
Abstract
Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are surface independent, and could provide energy, oxygen, fresh water and perhaps even food for their inhabitants for years. They are able to withstand close nuclear explosions and radiation. They are able to maintain isolation from biological attacks and most known weapons. They already exist and need only small adaptation to be used as refuges. But building refuges is only “Plan B” of existential risk preparation; it is better to eliminate such risks than try to survive them.
Replies from: morganism
comment by Elo · 2017-04-03T07:04:24.653Z · LW(p) · GW(p)
Curious whether this is worth making into its own weekly thread. Curious as to what's being worked on, in personal life, work life, or just "cool stuff". I would like people to share; after all, we happen to have similar fields of interest and similar fields we are trying to tackle.
Projects sub-thread:
- What are you working on this week (a few words or a serious breakdown)(if you have a list feel free to dump it here)?
- What do you want to be asked about next week? What do you expect to have done by then?
- Have you noticed anything odd or puzzling to share with us?
- Are you looking for someone with experience in a specific field to save you some search time?
- What would you describe are your biggest bottlenecks?
↑ comment by Elo · 2017-04-04T09:58:47.962Z · LW(p) · GW(p)
I am working on:
- learning colemak
- vulnerability, circling (relational therapy stuff) investigating the ideas around it.
- trialling supplements: Creatine, Protein, Citrulline malate. Adding in Vitamins: C, D, Fish oil, Calcium, Magnesium, Iron. Distant future trials: 5HTP, SAMe. Preliminary results with SAMe and 5HTP were that they made me feel like crap.
- promoting the voluntary euthanasia party of NSW (Australia) (political reform of the issue)
- emptying my schedule to afford me more "free" time in which to write up posts.
- contemplating book topic ideas.
- trying to get better routines going, contemplating things like "I only sleep at home", and cancelling all meetings and turning off all connected media for a week.
My biggest bottlenecks are myself (sometimes focus) and being confident about what will have a return vs. what won't (hence many small experiments).
Replies from: WalterL, username2
↑ comment by WalterL · 2017-04-05T15:51:15.909Z · LW(p) · GW(p)
You are working on vulnerability? I'm not sure I understand.
Replies from: Elo
↑ comment by Elo · 2017-04-06T00:38:39.402Z · LW(p) · GW(p)
https://www.youtube.com/watch?v=iCvmsMzlF7o
Yes. This talk seems to imply there is value in being vulnerable in social settings in order to connect with people. I am experimenting to see whether it's worth it.
↑ comment by username2 · 2017-04-05T13:59:30.011Z · LW(p) · GW(p)
Where else do you sleep?
Replies from: Elo
↑ comment by Elo · 2017-04-06T00:40:26.002Z · LW(p) · GW(p)
This may seem obvious, but I sleep at partners' houses, my parents' house, and a property I was doing some work for ~1.5 hrs away from home. The point is that I have less control over the bed, the bedroom setup, my sleep, and my morning routine if I am not in the bed that I engineered the routine around.
↑ comment by MaryCh · 2017-04-03T17:48:08.463Z · LW(p) · GW(p)
Translating a novel (really, a collection of essays) about WWII massacres of Jews in Kyiv & the rise of neo-nazism in post-Soviet republics (and much in between). It will take me a few months, probably, since this is a side job.
Overall impression: the past is easier to deal with, because it is too horrible. Imagine 10^5 deaths. Although I unfortunately know the place where it happened, & he includes personal stories (more like tidbits), so the suspension of disbelief takes some effort to maintain. But the 'present' part - a series of the author's open letters to mass media and various officials about pogroms and suchlike that went unpunished - is hard: he keeps saying the same thing over and over. (Literally. And his style is heavy going for the reader.) After a while the eye glazes over & notices only that the dates and the addresses change, but the content doesn't, except for the growing list of people who had not answered.
Just had not answered.
Now this is - easy to imagine.
Maybe this isn't odd, but I had thought it would be the other way around.
Replies from: Lumifer, MaryCh
↑ comment by Lumifer · 2017-04-03T18:15:37.633Z · LW(p) · GW(p)
the past is easier to deal with, because it is too horrible. Imagine 10^5 deaths.
" A single death is a tragedy; a million deaths is a statistic" -- a meme
Replies from: MaryCh
↑ comment by MaryCh · 2017-04-03T18:18:04.306Z · LW(p) · GW(p)
Ah, but what if you have walked above their bones?
Replies from: Lumifer
↑ comment by MaryCh · 2017-05-14T11:12:22.074Z · LW(p) · GW(p)
From the book, on 'The Doctors' plot' of 1953:
Among the listed people who provided medical help to the party and state leaders, there was an abrupt addition - V. V. Zakusov, professor of pharmacology. He didn't take part directly in the leaders' treatment - he was at first only called in for an opinion, and given to sign the conclusion about the prescriptions that the 'doctors-murderers' had issued to hasten their patients' death. Vasili Vasilyevitch Zakusov took up the pen and, well aware of what lay ahead, wrote this: "The best doctors in the world will sign such prescriptions." In that moment he stopped being an expert and became a suspect. In jail, even after torture, he didn't withdraw his conclusion.
↑ comment by philh · 2017-04-03T09:44:24.367Z · LW(p) · GW(p)
I'm working on
- a graphing library for python (ggplot2's conceptual model with a pythonic API; see the sketch after this list)
- writing cliffs notes for Order Without Law (a book about how people settle disputes without turning to the legal system)
- learning ukulele with Yousician (currently making extremely slow progress on the "double stops" lesson; I find it really hard to consistently strum two strings at once)
- trying to write an essay about how heuristics can sometimes be anti-inductive (they become less accurate the more they're applied), and how we don't really seem to have any cultural awareness of this problem even though it seems important
I might have the last one complete by next week, but the others are fairly long-term projects.
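On the first item above (the ggplot2-style graphing library): here is a minimal sketch of what a grammar-of-graphics API might look like in Python. The class and method names are hypothetical illustrations of the general idea, not philh's actual design, and the drawing is delegated to matplotlib.

```
# Hypothetical sketch of a ggplot2-flavoured plotting API in Python; not the
# actual library being described above, just an illustration of the idea.
import matplotlib.pyplot as plt

class Plot:
    def __init__(self, data):
        self.data = data      # mapping of column name -> list of values
        self.layers = []      # each layer is added separately, as in ggplot2

    def points(self, x, y):
        # Analogous to `+ geom_point()`: returns self so layers can be chained.
        self.layers.append(("points", x, y))
        return self

    def line(self, x, y):
        # Analogous to `+ geom_line()`.
        self.layers.append(("line", x, y))
        return self

    def render(self):
        fig, ax = plt.subplots()
        for kind, x, y in self.layers:
            if kind == "points":
                ax.scatter(self.data[x], self.data[y])
            elif kind == "line":
                ax.plot(self.data[x], self.data[y])
        return fig

# usage: Plot({"x": [1, 2, 3], "y": [2, 4, 9]}).points("x", "y").line("x", "y").render()
```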
↑ comment by Viliam · 2017-04-03T08:50:50.380Z · LW(p) · GW(p)
Reminds me, we didn't have a bragging thread for some time.
What would you describe are your biggest bottlenecks?
I think I'd like to see this as a separate topic (probably monthly, because such things take time).
However, just do as you wish, and then we'll see how it works.
↑ comment by moridinamael · 2017-04-06T22:13:28.008Z · LW(p) · GW(p)
I'm working on a podcast read-through of the web serial Worm. We've just put out the fifth episode of the series. It's becoming pretty popular by our standards.
↑ comment by Darklight · 2017-04-06T10:22:33.793Z · LW(p) · GW(p)
I recently made an attempt to restart my Music-RNN project:
https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB
Basically went and made the dataset five times bigger and got... a mediocre improvement.
The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V
Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the high frequency, low magnitude quakes, rather than the low frequency, high magnitude quakes that would actually be useful... you can see the site where I would be posting daily updates if I weren't so lazy...
http://www.earthquakepredictor.net/
Other than that... I was recently also working on holographic word vectors in the same vein as Jones & Mewhort (2007), but shelved that because I could not figure out how to normalize/standardize the blasted things reliably enough to get consistent results across different random initializations.
Oh, also was working on a Visual Novel game with an artist friend who was previously my girlfriend... but due to um... breaking up, I've had trouble finding the motivation to keep working on it.
So many silly projects... so little time.
comment by morganism · 2017-04-10T23:40:33.439Z · LW(p) · GW(p)
"Modafinil-Induced Changes in Functional Connectivity in the Cortex and Cerebellum of Healthy Elderly Subjects"
http://journal.frontiersin.org/article/10.3389/fnagi.2017.00085/full
"CEDs may also help to maintain optimal brain functioning or compensate for subtle and or subclinical deficits associated with brain aging or early-stage dementia."
"In the modafinil group, in the post-drug period, we found an increase of centrality that occurred bilaterally in the BA17, thereby suggesting an increase of the FC of the visual cortex with other brain regions due to drug action." FC analysis revealed connectivity increase within the cerebellar Crus I, Crus II areas, and VIIIa lobule, the right inferior frontal sulcus (IFS), and the left middle frontal gyrus (MFG)."
"These frontal areas are known to modulate attention levels and some core processes associated with executive functions, and, specifically, inhibitory control and working memory (WM). These functions depend on each other and co-activate frontal areas along with the posterior visual cortex to re-orient attention toward visual stimuli and also enhance cognitive efficiency.
Data on behavioral effects of modafinil administration in our study group are missing and, at this stage, we can only provide theoretical speculations for the functional correlates of the regional activations that we have found to be promoted by the drug."
comment by MrMind · 2017-04-07T10:43:40.329Z · LW(p) · GW(p)
I'm still mulling over the whole "rationalism as a religion" idea. I've come to the conclusion that there are indeed two axioms shared by the rational-sphere that we cannot quite prove, and whose variations produce different cultures.
I call them underlying reality and people are perfect.
"Underlying reality" (U): refers to the existence of a stratum of reality that is independent from our senses and our thoughts, whose configurations gives the notion of truth as correspondence.
"People are perfect" (P): instead refers to the truth of mental ideation that people might have, whether they are (or there's a subset that is) always right.
Here's a rough scheme:
U, P: religion. Our feelings reflect directly the inspiration of a higher source of truth.
U, not P: rationalism. We are imperfect hardware in a vast and mostly unknowable world.
not U, P: the most uncertain category. Perhaps magic? There's no fixed, underlying truth, but our thoughts can influence it.
not U, not P: postmodernism. Nothing is true and everything's debatable.
I might make this a little more precise in a proper post.
comment by tristanm · 2017-04-06T23:51:23.532Z · LW(p) · GW(p)
In "Strong AI Isn't Here Yet", Sarah Constantin writes that she believes AGI will require another major conceptual breakthrough in our understanding before it can be built, and it will not simply be scaled up or improved versions of the deep learning algorithms that already exist.
To argue this, she makes the case that current deep learning algorithms have no way to learn "concepts" and only operate on "percepts." She says:
I suspect that, similarly, we’d have to have understanding of how concepts work on an algorithmic level in order to train conceptual learning.
However, I feel that her argument was lacking in terms of tangible evidence for the claim that deep-learning algorithms do not learn any high-level concepts. It seems to be based on the observation that we currently do not know how to explicitly represent concepts in mathematical or algorithmic terms. But I think if we are to take this as a belief, we should try to predict how the world would look different if deep-learning algorithms could learn concepts entirely on their own, without us understanding how.
So what kind of problems, if solved by neural networks, would surprise us if this belief was held? Well, to name a couple of experiments that surprise me, I would probably point out DCGAN and InfoGAN. In the former, they are able to extract visual "concepts" out of the generator network by taking the latent vectors of all the examples that share one kind of attribute of their choosing (in the paper they take "smiling" / "not smiling" and "glasses" / "no glasses") and averaging them. Then they are able to construct new images by doing vector arithmetic in the latent space using this vector and passing them through the generator, so you can take a picture of someone without glasses and add glasses to them without altering the rest of their face, for example. In the second paper, their network learns a secondary latent variable vector that extracts disentangled features from the data. Most surprisingly, their network seems to learn such concepts as "rotation" (among other things) from a data set of 2D faces, even though there is no way to express the concept of three dimensions in this network or have that encoded as prior knowledge somehow.
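A rough sketch of the latent-space attribute arithmetic described above, in Python/numpy. The `generator` callable and the labelled latent codes are assumed to come from an already-trained DCGAN-style model; nothing here is the paper's actual code.

```
import numpy as np

def attribute_vector(latents_with, latents_without):
    """Average the latent codes of examples that have the attribute (e.g. 'glasses')
    and of examples that don't, then take the difference."""
    return np.mean(latents_with, axis=0) - np.mean(latents_without, axis=0)

def add_attribute(generator, z, attr_vec, strength=1.0):
    """Shift a latent code along the attribute direction and re-generate an image.
    `generator` is a stand-in for a trained generator network (assumed, not shown)."""
    return generator(z + strength * attr_vec)

# Sketch of usage, assuming z_glasses / z_no_glasses are arrays of latent codes
# for labelled examples and z_person is the latent code of a face without glasses:
#   glasses_vec = attribute_vector(z_glasses, z_no_glasses)
#   image_with_glasses = add_attribute(generator, z_person, glasses_vec)
```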
Just this morning, in fact, OpenAI revealed that they had done a very large scale deep-learning experiment using multiplicative LSTMs on Amazon review data. More surprising than the fact that they had beaten the benchmark accuracy on sentiment analysis was that they had done it in an unsupervised manner, by using the LSTMs to predict the next character in a given sequence of characters. They discovered that a single neuron in the hidden layer of this LSTM seemed to extract the overall sentiment of the review, and was somehow using this knowledge to get better at predicting the sequence. I would find this very surprising if I believed it were unlikely or impossible for neural networks to extract high-level "concepts" out of data without explicitly encoding them into the network structure or the data.
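For the sentiment-neuron result, the probing procedure amounts to running the trained character-level model over a review and reading off one hidden unit. A minimal sketch follows, with `model.initial_state()` and `model.step()` as hypothetical stand-ins for the trained mLSTM's interface rather than OpenAI's actual API.

```
import numpy as np

def neuron_activations(model, texts, neuron_index):
    """Feed each text through a character-level language model and record the
    final activation of a single hidden unit (the candidate 'sentiment neuron')."""
    scores = []
    for text in texts:
        state = model.initial_state()
        for ch in text:
            state, hidden = model.step(ch, state)  # advance one character
        scores.append(hidden[neuron_index])
    return np.array(scores)

# The surprising finding is that for one particular neuron_index, a simple
# threshold on these activations already separates positive from negative
# reviews, even though the model was only ever trained to predict characters.
```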
What I'm getting at here is that we should be able to set benchmarks on certain well-defined problems and say "Any AI that solves this problem has done concept learning and does concept-level reasoning", and update based on what types of algorithms solve those problems. And when that list of problems gets smaller and smaller, we really need to watch out to see if we have redefined the meaning of "concept" or drawn the tautological conclusion that the problem really didn't require concept-level reasoning after all. I feel like that has already happened to a certain degree.
Replies from: bogus
↑ comment by bogus · 2017-04-07T11:25:24.128Z · LW(p) · GW(p)
The problem with AGI is not that AIs have no ability to learn "concepts"; it's that the G in 'AGI' is very likely ill-defined. Even humans are not 'general intelligences'; they're just extremely capable aggregates of narrow intelligences that collectively implement the rather complex task we call "being a human". Narrow AIs that implement 'deep learning' can learn 'concepts' that are tailored to their specific task; for instance, the DeepDream AI famously learns a variety of 'concepts' that relate to something looking like a dog. And sometimes these concepts turn out to be usable in a different task, but this is essentially a matter of luck. In the Amazon reviews case, the 'sentiment' of a review turned out to be a good predictor of what the review would say, even after controlling for the sorts of low-order correlations in the text that character-based RNNs can be expected to model most easily. I don't see this as especially surprising, or as having much implication about possible 'AGI'.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-04-07T14:25:22.567Z · LW(p) · GW(p)
Humans are general intelligences, and that is exactly about having completely general concepts. Is there something you cannot think about? Suppose there is. Then let's think about that thing. There is now nothing you cannot think about. No current computer AI can do this; when they can, they will in fact be AGIs.
comment by Thomas · 2017-04-03T16:44:36.055Z · LW(p) · GW(p)
Problem to solve:
https://protokol2020.wordpress.com/2017/04/02/another-intermezzo-intermezzo-problem/
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-03T17:07:53.595Z · LW(p) · GW(p)
Your link is broken.
ETA: Thomas fixed the problem.
Replies from: Thomas
↑ comment by Thomas · 2017-04-03T18:02:41.850Z · LW(p) · GW(p)
I don't know. I can't reproduce the error, it just works when I click it.
It is strange, however, that WordPress has given this number 9056. Usually it is the title of the post.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-03T18:08:36.147Z · LW(p) · GW(p)
Your link leads me to a Wordpress login page. To reproduce, try deleting your cookies (or try another browser).
Specifically, your link resolves to this: https://wordpress.com/wp-login.php?redirect_to=https%3A%2F%2Fwordpress.com%2Fpost%2Fprotokol2020.wordpress.com%2F9056
Is that -- https://protokol2020.wordpress.com/2017/04/02/another-intermezzo-intermezzo-problem/ -- the link you mean?
Replies from: Thomas
comment by Jordan2386_duplicate0.022053955821320415 · 2017-04-06T21:18:28.411Z · LW(p) · GW(p)
I have an idea about how to make rationality great again (bad joke, but I'm serious). The term "theoretical rationality" may have been coined by me - idk, but the new meanings of "theoretical and applied rationality" are mine and include 1) optimisation, 2) fast completion of goals and ideals, 3) updating the list of desirable goals and ideals, 4) repeat. Any comments?
Replies from: Elo, MrMind
↑ comment by Elo · 2017-04-06T22:28:13.919Z · LW(p) · GW(p)
how does this compare to the instrumental and epistemic rationality separation?
Replies from: Jordan2386_duplicate0.022053955821320415
↑ comment by Jordan2386_duplicate0.022053955821320415 · 2017-04-18T09:30:59.355Z · LW(p) · GW(p)
The distinctions are in the application. In my proposed system, theoretic (epistemic) rationality should mostly be a byproduct of applied (instrumental) rationality. My view puts a huge emphasis on knowledge being derived mostly and predominantly through (many fast) interactions with the environment, avoiding the pitfalls of fixing scientific "laws" (which are themselves products of many observations). This is not the Bayesian view with priors on what one could expect from looking into some unexpected phenomenon. If it works, it works. If the theory says it can't work, it would still work.
↑ comment by MrMind · 2017-04-07T08:39:40.405Z · LW(p) · GW(p)
Replies from: Jordan2386_duplicate0.022053955821320415
the new meanings to "theoretical and applied rationality" are mine
↑ comment by Jordan2386_duplicate0.022053955821320415 · 2017-04-15T05:42:31.645Z · LW(p) · GW(p)
The conference looks cool, but it does not follow the main framework of "theoretical and applied rationality" as defined by me. More specifically, I can use the term "whatever works (most) effectively" to avoid confusion with the otherwise defined "theoretical and applied rationality". What I do with my approach is completely avoid theorization, unless the interactions with the environment uncover a pattern that needs recording. The main focus of the approach is on the applied end. If one needs to, say, improve wealth for the poorest, I can't think of a more efficient way to do so than fast experimentation and coming up with new and better ways to do so. The cool bit is that it can be sustainable and empowering, blah, blah, blah (not the focus of this conversation). The idea is that one interacts with the environment to acquire new knowledge. Otherwise one can become detached from the environment, as is most of modern maths theory (I study maths at Oxford btw). Collecting theory is very much at odds with the second pillar of rationality - achieving one's goals and ideals.
comment by whpearson · 2017-04-06T11:13:25.485Z · LW(p) · GW(p)
Is there a place in the existential risk community for a respected body/group to evaluate people's ideas and put them on a danger scale? Or dangerous given assumptions.
If this body could give normal machine learning a stamp of safety then people might not have to worry about death threats etc?
comment by MaryCh · 2017-04-05T18:12:15.724Z · LW(p) · GW(p)
In some situations my thinking becomes much more structured: I throw out the syntax, and the remaining words come in a very clear hierarchy and kind of seem to echo briefly. It lasts, perhaps, less than a second.
Examples: "snake - nonvenomous (snake, snake) - dead, where's the head, somebody struck it with something, probably a stick, curse them, ought to collect it, where's vodka"; "snake - viper (viper, viper) - back off, where's camera, damn, it's gone, ought to GPS the place"; "orchid - Epipactis (...pactis) - why not something rarer, again this weed".
Has it been like that for you?
comment by whpearson · 2017-04-05T10:24:06.635Z · LW(p) · GW(p)
I'm wondering what people would think about adopting the term Future Super Intelligences or FSI, rather than AGI or SAGI.
This would cover more scenarios (e.g. uploads/radical augments) where the motivational systems of super-powerful actors may not be what we are used to. It would also signal that we are less worried about current tech than talking about AIs does; there is always that moment when you have to explain that you are not worried about backprop.
Replies from: g_pepper
↑ comment by g_pepper · 2017-04-05T12:04:01.883Z · LW(p) · GW(p)
I never really thought that AGI implied any specific technology; certainly the "G" in AGI rules out the likelihood of AGI referring to current technology since current technology is not generally intelligent. AGI seems to capture what it is we are talking about quite well, IMO - Artificial (i.e. man-made) General Intelligence.
Do you really find yourself explaining that AGI is not the same as backpropagation very often?
Replies from: whpearson
↑ comment by whpearson · 2017-04-06T09:38:07.475Z · LW(p) · GW(p)
AGI seems to capture what it is we are talking about quite well, IMO - Artificial (i.e. man-made) General Intelligence.
I think AGI gets compressed to AI by the mainstream media and then people working on current ML think that you people are worried about their work (which they find ridiculous and so don't want to engage).
An example of the compression is here.
We don't help ourselves calling it airisk.
Replies from: g_pepper
↑ comment by g_pepper · 2017-04-06T22:35:39.613Z · LW(p) · GW(p)
I think AGI gets compressed to AI by the mainstream media
Actually, the term AI has traditionally encompassed both strong AI (a term which has been mostly replaced by LW and others with "AGI") and applied (or narrow) AI, which includes expert systems, current ML, game playing, natural language processing, etc. It is not clear to me that the mainstream media is compressing AGI into AI; instead I suspect that many mainstream media writers simply have not yet adopted the term AGI, which is a fairly recent addition to the AI jargon. The mainstream media's use of "AI" and the term "AIRISK" are not so much wrong as they are imprecise.
I suspect that the term AGI was coined specifically to differentiate strong AI from narrow AI. If the mainstream media has been slow in adopting the term AGI, I don't see how adding yet another, newer term will help - in fact, doing so will probably just engender confusion (e.g. people will wonder what if anything is the difference between AGI and FSI).
Replies from: whpearson
↑ comment by whpearson · 2017-04-06T23:22:06.544Z · LW(p) · GW(p)
The term AGI goes back over 10 years? Longer than the term AIrisk has been around, as far as I can tell. We had strong vs weak before that.
AGIrisk seems like a good compromise? Who runs comms for the AGIrisk community?
Imprecision matters when you are trying to communicate and build communities.
Replies from: g_pepper
↑ comment by g_pepper · 2017-04-07T01:02:06.559Z · LW(p) · GW(p)
AGIrisk seems like a good compromise?
I certainly prefer it to FSIrisk.
Who runs comms for the AGIrisk community?
I doubt anyone does. Terms catch on or fail to catch on organically (or memetically, to be precise).
Imprecision matters when you are trying to communicate and build communities.
Perhaps. But, I doubt that a significant amount of the reluctance to take the unfriendly AGI argument seriously is due to confusion over terminology. Nor is changing terminology likely to cause a lot of people who previously did not take the argument seriously to begin to take it seriously. For example, there are some regulars here on LW who do not think that unfriendly AGI is a significant risk. But I doubt that any LW regular is confused about the distinction between AGI and narrow AI.
comment by ChristianKl · 2017-04-03T09:48:37.052Z · LW(p) · GW(p)
Are there any studies that determine whether regular caffeine consumption has a net benefit? Or does the body produce enough additional receptors to counteract it?
Replies from: Dagon, PotterChrist
↑ comment by PotterChrist · 2017-04-03T12:52:46.819Z · LW(p) · GW(p)
Google is your friend. Here is an example link:
http://www.sciencedirect.com/science/article/pii/0278691595000933 I have smird'(proof)#regular question, did this just save it but only for me?
comment by PotterChrist · 2017-04-03T09:05:22.388Z · LW(p) · GW(p)
Normal "introducing myself", normal I am of faith.
Replies from: PotterChrist
↑ comment by PotterChrist · 2017-04-03T10:29:57.625Z · LW(p) · GW(p)
Regular, "dont think about the irony"