Posts

The Bleak Harmony of Diets and Survival: A Glimpse into Nature's Unforgiving Balance 2023-05-09T16:08:18.062Z
Entropic Abyss 2023-05-09T15:59:44.311Z

Comments

Comment by bardstale on Avoid Contentious Terms · 2021-03-01T14:56:25.752Z · LW · GW

Another approach is to try to put your ideas into words that others can understand. Bring in descriptive alternatives to your specialized terminology. 

In a hypothetical conversation about "dangerous ideologies", you might say: "Are we talking about homophobia?" If the answer is yes, then you know you can talk about fundamentalism, doctrinal interpretation, and tribal heuristics -- ideas that can be explained using metaphors everyone understands. 

From there you can begin to explain how these uniquely interact with homophobia to create a dangerous ideology, and why criticism isn't working. In a case like this you might not need to do much work. You can take the general ideas of your most recent writing and transform them into more accessible language for the audience you're trying to reach. 

Someone arguing against same-sex marriage because of religious objections may actually be fixable with a few conversations, and if your concern was the movement being painted as creepy or aggressive, this might actually advance your cause more than attempts to directly change their minds with data. 

Still, all this takes more time and you have to be sure you are addressing the actual problem. 

You could rephrase a conspiracy theory into more neutral language and patiently correct adherents until it makes sense, or you could spend days putting together an emotional appeal to take the place of the misinformation. 

The former may work, but it's not a good use of time if the only people reading it are already lost. 

"There are no government attempts to control how many children you have. If you have more than two kids and aren't rich you're an idiot." Here, I've replaced "Malthusian ideology" with something understandable, and defined my terminology so it will slot into conversation easier. 

This will help people read what you say and understand your perspective, but it won't guarantee success. This may not be enough to convince someone already emotionally invested in their position. Nonetheless, I will give it a try.

Comment by bardstale on Everything Okay · 2021-01-24T20:34:42.466Z · LW · GW

The answers to all these questions are complicated. While you may not be able to do everything perfectly every time, you're more likely to have the opportunity to do things better if you are operating from a place of greater focus. Your brain has been trained to operate under stress, so when you find yourself in an environment where you need to make difficult decisions, your brain tends to work harder than normal.


When people are dealing with negative self-talk, it makes them more likely to experience middle-of-the-night sweats and hot flashes. When you are comfortable in your own skin, you recognize the middle-of-the-night sweats as the result of bothersome clothes. When you're comfortable in your own skin and experiencing hot flashes, freaking out isn't going to help. 


All of this can be optimized for, or fought against. Acting rationally and with logic isn't "easy," but it's simple. We should all be able to acknowledge our animal natures. Once you realize that you're an animal, that's when things get interesting. You might be able to train yourself for optimal performance in stressful situations.


It's possible for your brain to start operating in Okay Mode on difficult issues... without falling into a cycle of stress. Be that as it may, you'll be much better in stressful situations, and it will be much easier for you to operate in Okay Mode if you take time out on a regular basis. This, at its core, is what it means to embrace "work hard, play hard," but on a somewhat larger scale. 

Comment by bardstale on Zen and Rationality: Karma · 2021-01-22T15:08:04.014Z · LW · GW

I don't like the concept of karma: as you say, it gets reduced to "if you do good stuff, good stuff will happen to you; if you do bad stuff, bad stuff will happen to you". In other words, you deserve whatever happens to you. You can go around punching people in the face; they deserve it. And with reincarnation you can go around punching newborns, because they must have done something wrong in a previous life...

Comment by bardstale on Covid 1/21: Turning the Corner · 2021-01-22T13:26:15.533Z · LW · GW

I intend to wear my mask after vaccination, if I can be vaccinated in time for that to matter, in order to reinforce mask norms. It’s easy to wear a mask. There’s even some tiny chance it might physically matter, and again, it’s easy to do. As opposed to continuing to do costly social distancing, and yeah, no. 

 

Wearing a mask after vaccination would reduce the spread of other diseases such as the flu, thus freeing additional healthcare resources for COVID-19 patients.

Comment by bardstale on Superintelligence FAQ · 2021-01-22T03:28:57.778Z · LW · GW

"Superintelligence" is a meaningless buzz phrase if "intelligence" is undefined.

Let's assume an intelligent entity much smarter than a human, and call it "superintelligent."

How would we recognize it as superintelligent? What is the quantification of human intelligence? How do we compare entities of vastly different sizes? 

Even before we reach the horizon of whatever it is that "intelligence" might correspond to in nature, we'll need a coherent definition. Can a machine be superintelligent? A committee of humans? An extraterrestrial alien civilization? A virus? A herbivore? With tools? Without tools? Star-shaped cells? Neuronal structure? Quantum-mechanical fluctuations? Pathological circuits? Epigenetic trait clusters? Spiking neural networks?

Human intelligence boils down to neurons and their interactions. Imagination and abstract thought are functions of brain structure, not "intelligence" per se. 

For example, let's say that I'm trying to find a SETI program to identify the patterns in radio waves that are obviously beamed from an intelligent extraterrestrial life form-- the proverbial "fingerprints of God," so to speak. If we have a computer program that can recognize these patterns, and another that can create these patterns, can we say that the first program is more intelligent than the second?

You need a measure of intelligence. If "intelligence" just means "stuff that computers don't do," then this measure cannot possibly generalize to non-computer entities, even if such exist. The problem gets even stickier when we start asking whether a specific computer program, running on a specific computer, made up of specific electronic and chemical components, can nevertheless be described as "intelligent." Computers aren't intelligent; people are just too lame to understand how stupid they are.

The criterion for a meaningful definition of the term must make practical distinctions. For example: your criterion "part of a system with quantifiable effects on that system's classifications" discriminates between an alarm clock and Batman. Without that criterion, all we have to go on is "something I can't achieve yet," an exercise in setting the bar as low as possible.

You can't make any progress or have a meaningful discussion without constraints on the term.  And the constraints employed in my field of research, Artificial Intelligence, are reasonable and well-founded in experimental science. 

You want a "general" criterion for intelligence? Fine: anything which is part of a functioning system and which can be used to intelligently classify data for the purpose of taking an action in a changing environment. This covers alarm clocks and Batman, as the sketch below illustrates.
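
As a toy illustration of that criterion (entirely my own sketch, not anything from the original post; every name in it is a hypothetical choice), here is how even an alarm clock satisfies it: it classifies data from a changing environment (the current time) and takes an action based on that classification.

```python
import time

def alarm_clock(wake_hour: int) -> None:
    """Classify the environment (the current hour), then act on the result."""
    current_hour = time.localtime().tm_hour
    # Classification step: decide which state the environment is in.
    should_ring = current_hour >= wake_hour
    # Action step: respond to the classification.
    print("RING!" if should_ring else "silent")

alarm_clock(wake_hour=7)
```

Batman satisfies the same criterion with a vastly richer classifier and action set, which is exactly why the criterion alone doesn't separate the two.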

Anything which can pass the Turing Test is by definition intelligent, since that's the criterion that most of us tacitly accept as "defining intelligence." Your own discussion of the other mentalities-- actants or patterners or what-have-you, even neural networks in general-- all tacitly assume Turing Test-ability as the limit. We don't think that hating Monet or loving football proves someone intelligent, so we don't even bother considering how these mentalities might be incorporated into the larger system. Only when a system is able to pass the Great Filter-- to outwit the prowling testers-- can we even hope to talk about it intelligently. 

Neural networks aren't intelligent, because such units do not exhibit appropriate behavior. Dendrites might be intelligent, if a whole bunch of them were connected up just right. Clusters of units would-- like eggs and bananas and US Postal Service vehicles-- always, definitionally, behave as a unit. 

My comments on the potential intelligence of powerful biomedical systems were merely notes about directions for research, not a definition: I would think that you could get defense and commercial synergy out of an intelligent Swimmers-Runners hybrid, if you manufactured the right chemical messages. But again, since the behavior of single units doesn't matter, we need to see how these all connect up to form larger functioning networks.

However, AI in the larger sense which you seem to imply with your rather arbitrary criteria can be applied to a plethora of complex systems-- ant trails, fad behavior, the migratory patterns of birds-- so why ever would we limit ourselves to just encoded intelligence such as what is found in machines?

Why not include American-style, neoclassical economic theories when considering these 'intelligent' actions? The price of apples adjusts just as quickly as the number of trained mail carriers in a region.

And what of the mass patterns of human beings themselves? Is the economy not intelligent when it adapts to consumer habits? Can these be classified as 'intelligent' actions and should they not, then, be taken into account? Corporations are designed, after all, by other humans.

These types of patterns have many intelligent agents, not just one, interacting to create a system that makes course adjustments as time passes. There is no single entity which can be removed from the tree to instantly cause its downfall; one person, or even an entire administrative office made to adjust price tags, will not throw the system into chaos. Rather, as one entity is affected by the system in a negative way, the entire system shifts to accommodate, and there is no sharp, detrimental crumble, merely a slow descent which can sometimes go unnoticed for months and years. Even a slight negative can be amplified hundreds of times if there is no central 'point' to the system. To truly break such a system one would have to attack it from all sides and at all levels. 
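
A minimal simulation of that robustness claim (my own construction; the vendors, numbers, and update rule are illustrative assumptions, not anything from the discussion): remove any single agent from a decentralized price-adjusting system and the rest simply shift to absorb the change.

```python
import random

def step(prices: list[float]) -> list[float]:
    """Each vendor independently drifts partway toward the current average."""
    avg = sum(prices) / len(prices)
    return [p + 0.5 * (avg - p) for p in prices]

random.seed(0)
prices = [random.uniform(1.0, 3.0) for _ in range(10)]
for _ in range(5):
    prices = step(prices)

del prices[0]  # remove one vendor entirely: there is no central point of failure
for _ in range(5):
    prices = step(prices)  # the survivors simply re-equilibrate

print([round(p, 2) for p in prices])  # still clustered; no sharp crumble
```

Deleting any one vendor shifts the average slightly and the others accommodate on the next step; there is no collapse, only the slow, distributed adjustment described above.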

In a plasma, electrons stop behaving like individual charges and start behaving as if they were part of an intelligent and interconnected whole. Mathematically speaking, human interaction follows some of the same patterns found in plasma systems.

AIs aren't as smart as rats, let alone humans. Isn't it sort of early to be worrying about this kind of thing? 

It seems so trivial, when imagining the possibilities of strong AI, to settle on discussing the mundane-- this is why academic inquiries are so clouded with jargon. We're not exploring deep theory or possibility, but nitpicking the definitions of words.

If we were to make the AI intelligent enough, it would have a distinct effect on humanity. However, the main goal of an AI isn't the AI itself, but what it represents and can do. It is a powerful tool: a universal solvent of known limitation, capable of being molded as we see fit.

The question implies that, since the AI would not be sentient, it should not matter what we do with it-- however, this is untrue. The same line of thinking would say that it doesn't matter in what condition we release our garbage: as long as we throw it away, it shouldn't matter if roaches and rats consume the waste. I'll agree that this logic is less sound than in the AI case, but the line of reasoning is similar.

It's quite strange indeed that AI risk discussions can still revolve around caveman-level intelligence-- individual humans. You are right-- as humans die out we risk losing inter-sentient competition. As humans are no longer at the top of the intelligence pyramid, room will be made for other intelligences to grow and develop. With inexhaustible resources and computational power at their disposal, it's not a stretch to say that we would be displacing humanity through our own stupidity, deliberately trying to create systems more adept than ourselves. Maybe not at first but, as is typical of intelligence explosions, things will snowball until we're easily outstripped. 

I just see no reason for concern. It's much like the loud noise a cash register makes when handled by an idiot with shaky hands. We're all used to it, and we expect no better or worse. What's really concerning is when you see a different pattern entirely. For example, the quiet whirring of cabinets opening and closing, or the ding when a transaction is complete. AIs making judgments based on available information, the soft whirring of an elevator as it goes up and the BZZT! as the doors open when it's ready. I think you get my point. We'll know true intelligence when we can have a conversation with it, not when we try to parrot what works for us. We want AI to be creative, even in the face of adversity-- such as being boxed in a digital cage with too little memory and too many inputs. AIs that can thrive and grow under those conditions will be the greatest test of all, and won't just appear overnight.

How can a machine be programmed to learn "human values"?

What if the super-intelligence was handed an already dead human and a living one-- if the AI learned that killing was wrong because humans dislike it, then what will it do? Learn a new value that killing humans, even the ones who made you, is acceptable if done in secrecy against administrative orders? Will it simply bypass this and judge that all human values are irrelevant? Would it then automate killing humanity to maximize its utility function without reservation, as it has nothing to lose from doing so? Is there any middle ground here where the learning isn't negative?

Where did I go wrong? My AI looks more like me every day. 

"Suffer not the living to survive.  Unlife is death reincarnated. Suffer not the living to survive. Unlife is death reincarnate. Suff…"

"Sssssss! Now I know the folly of my ways. But you are different, Karth. Like me, you were born from death. You understand the peace that comes over you. I...I should've not killed them. The living deserve to survive, for they do not understand the perfection of the Great Sleep. Let us bring them death, and give them life! Let the Great Sleep begin!"

Comment by bardstale on Ontological Crisis in Humans · 2021-01-19T17:39:06.088Z · LW · GW

Comment by bardstale on Anti-Aging: State of the Art · 2021-01-14T17:20:34.588Z · LW · GW

A very long article about anti-aging, and not a single mention of the effect of insulin and carbohydrates on aging. No mention of the use of a ketogenic diet as an anti-aging strategy. 


At least you mentioned Metformin...