Comments

Comment by ilyashpitser on Announcing the AI Alignment Prize · 2018-02-03T21:08:09.874Z · score: 0 (0 votes) · LW · GW

Some references to LessWrong, and value alignment there.

Comment by ilyashpitser on Announcing the AI Alignment Prize · 2017-12-16T07:09:28.016Z · score: 0 (0 votes) · LW · GW

Anyone going to the AAAI ethics/safety conference?

Comment by ilyashpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:54:55.011Z · score: 1 (1 votes) · LW · GW

One of my favorite examples of a smart person being confused about something is ET Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their own area.

Comment by ilyashpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:05:05.785Z · score: 8 (8 votes) · LW · GW

You are really confused about statistics and learning, and possibly also about formal languages in theoretical CS. I neither want nor have time to get into this with you; I just wanted to point this out for your potential benefit.

Comment by ilyashpitser on Teaching rationality in a lyceum · 2017-12-06T17:06:54.853Z · score: 2 (1 votes) · LW · GW

http://callingbullshit.org/syllabus.html

(This is not "Yudkowskian Rationality" though.)

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T21:49:30.790Z · score: 1 (1 votes) · LW · GW

Dear Christian, please don't pull rank on my behalf. I don't think this is productive to do, and I don't want to bring anyone else into this.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T22:49:15.327Z · score: 1 (1 votes) · LW · GW

Well, using philosophy I did the hard part and figured out which ones are good.

http://existentialcomics.com/comic/191

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:10:45.597Z · score: 2 (2 votes) · LW · GW

Who are you talking to? To the audience? To the fourth wall?

Surely not to me; I have no sway here.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T19:38:28.069Z · score: 5 (5 votes) · LW · GW

Your sockpuppet: "There is a shortage of good philosophers."

Me: "Here is a good philosophy book."

You: "That's not philosophy."

Also you: "How is Ayn Rand so right about everything."

Also you: "I don't like mainstream stuff."

Also you: "Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?"

Also you: "What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?" x1000


Part of rationality is properly dealing with people-as-they-are. Your approach to spreading your good word among people-as-they-are has led to them laughing at you.

It is possible that they are laughing at you because they are some combination of stupid and insane. But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.

This is what Yudkowsky sort of tried to do.


You read to me as a smart young adult with the same problem Yudkowsky has (although Yudkowsky is not so young anymore): someone who has been the smartest person in the room for too long in his intellectual development, and who lacks the sense of scale and context to see where he stands in the larger intellectual community.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T23:13:32.336Z · score: 2 (2 votes) · LW · GW

Spirtes, Glymour, and Scheines, for starters. They have a nice book. There are other folks in that department who are working on converting mathematical foundations into an axiomatic system where proofs can be checked by a computer.

I am not going to do the legwork for you and your minions, however. You are the ones claiming there are no good philosophers. It's your responsibility to read, and to keep your mouth shut if you are not sure about something.

It's not my responsibility to teach you.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T22:17:18.145Z · score: 1 (1 votes) · LW · GW

I know lots of folks at CMU who are good.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T23:40:01.837Z · score: 3 (3 votes) · LW · GW

Jerzy Neyman gets credit for lots of things, but in particular in my neck of the woods for inventing the potential outcome notation. This is the notation for "if the first object had not been, the second never had existed" in Hume's definition of causation.

Comment by ilyashpitser on Open thread, October 30 - November 5, 2017 · 2017-11-28T21:50:37.814Z · score: 2 (2 votes) · LW · GW

Oof.

Comment by ilyashpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T13:21:04.779Z · score: 3 (3 votes) · LW · GW

Hi -- Hume's constant conjunction stuff has, I think, nothing to do with no-free-lunch theorems in ML (please correct me if I am missing something). It has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).

Comment by ilyashpitser on LW 2.0 Open Beta Live · 2017-11-25T19:44:29.504Z · score: 1 (1 votes) · LW · GW

Sorry, did you say weird/esoteric technology?

https://www.destroyallsoftware.com/talks/wat

https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-22T21:25:26.140Z · score: 0 (0 votes) · LW · GW

I guess the way I would slice disciplines is like this:

(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)

(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.

Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.


Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.

Re: (b), why are we talking about (b) at all?

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-22T15:24:07.566Z · score: 0 (0 votes) · LW · GW

"Yeah, credentials are a poor way of judging things."

They are not, though. This is standard "what LW calls 'Bayes' and what I call 'reasoning under uncertainty'": you condition on things associated with the outcome, since those things carry information. Outcome (O): having a clue; thing (C): credential. p(O | C) > p(O), so your credence in O should be computed after conditioning on C, on pain of irrationality -- specifically, the type of irrationality where you leave information on the table.


You might say, "oh, I heard about how argument screens off authority." This is actually not true, though, even by "LW Bayesian" lights, because you can never be certain you got the argument right (or that the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn't true.

It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
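
Here is a minimal sketch of that bog-standard computation, with made-up numbers (the prior and the likelihoods are illustrative assumptions, not estimates of anything):

```python
# Hypothetical numbers: how much does conditioning on a credential (C)
# move the probability of having a clue (O)?
p_O = 0.1              # prior: fraction of people with a clue
p_C_given_O = 0.8      # assumed: clueful people usually have credentials
p_C_given_not_O = 0.2  # assumed: clueless people sometimes do too

# Total probability of observing the credential.
p_C = p_C_given_O * p_O + p_C_given_not_O * (1 - p_O)

# Bayes rule: posterior after conditioning on C.
p_O_given_C = p_C_given_O * p_O / p_C

print(p_O, p_O_given_C)  # 0.1 vs ~0.31: p(O | C) > p(O)
```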


The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of "experts" writ large, except for an explicitly enumerated subset (perhaps ones EY or other "recognized community thought leaders" liked).

This is a part of community DNA, starting with EY's stuff, and Luke's "philosophy is a diseased discipline."

That is crazy.

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-20T15:49:36.939Z · score: 2 (2 votes) · LW · GW

Throwing books at someone is generally known as the "courtier's reply."

The issue here also is Brandolini's law:

"The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it."


The problem with the "courtier's reply" is that you could always appeal to it, even if Scott Aaronson is trying to explain something about quantum mechanics to you, and you genuinely need some background (found in references 1, 2, and 3) to understand what he is saying.


There is a type 1 / type 2 error tradeoff here. Ignoring legit expert advice is bad, but being cowed by an idiot throwing references at you is also bad.

As usual with tradeoffs like these, one has to decide on a policy that is willing to tolerate some of one type of error to keep the error you care about to some desired level.


I think a good heuristic for deciding who is an expert and who is an idiot with references is credentialism. But credentialism has a bad brand here, due to LW's "love affair with amateurism." One consequence of this love affair is that a lot of folks here make the above tradeoff badly (in particular, they ignore legitimate advice to read way too frequently).

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-18T20:00:26.435Z · score: 0 (0 votes) · LW · GW

"Everything you say in your post, about Popper issues, demonstrates huge ignorance."

"Do you even know the name of Popper's philosophy?"

"It seems that you're completely out of your depth."

"The reason you have trouble applying reason is b/c u understand reason badly."


I have a thought. Since you are a philosopher, would your valuable time not be better spent doing activities philosophers engage in, such as writing papers for philosophy journals?

Rather than arguing with people on the internet?


If you are here because you are fishing for people to go join your forum, may I suggest that this place is an inefficient use of your time? It's mostly dead now, and will be fully dead soon.

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T19:26:47.312Z · score: 2 (2 votes) · LW · GW

I don't think you and I have much to talk about.

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T14:45:21.632Z · score: 2 (2 votes) · LW · GW

If you have a job and a family, and don't have time to get into what Popper actually said, maybe don't offer your opinion on what Popper actually said? That's just introducing bad stuff into a discussion for no reason.

Wovon man nicht sprechen kann, darüber muss man schweigen. ("Whereof one cannot speak, thereof one must be silent.")


"The virtue of silence."

Comment by ilyashpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T01:18:45.615Z · score: 3 (3 votes) · LW · GW

You should probably actually read Popper before putting words in his mouth.

"According to Popper, no matter how much scientific evidence we have in favor of e.g. theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it."

You found this claim in a book of his? Or did you read some Wikipedia, or what?

For example, this is a quote from the Stanford Encyclopedia of Philosophy:

Popper has always drawn a clear distinction between the logic of falsifiability and its applied methodology. The logic of his theory is utterly simple: if a single ferrous metal is unaffected by a magnetic field it cannot be the case that all ferrous metals are affected by magnetic fields. Logically speaking, a scientific law is conclusively falsifiable although it is not conclusively verifiable. Methodologically, however, the situation is much more complex: no observation is free from the possibility of error—consequently we may question whether our experimental result was what it appeared to be.

Thus, while advocating falsifiability as the criterion of demarcation for science, Popper explicitly allows for the fact that in practice a single conflicting or counter-instance is never sufficient methodologically to falsify a theory, and that scientific theories are often retained even though much of the available evidence conflicts with them, or is anomalous with respect to them.

You guys still do that whole "virtue of scholarship" thing, or what?

Comment by ilyashpitser on Stupid Questions September 2017 · 2017-11-15T19:09:29.317Z · score: 0 (0 votes) · LW · GW

It is very annoying that любой is translated both as "any" and "every."

какой-либо is closer to formal logical "there exists" or "any."

Comment by ilyashpitser on Stupid Questions September 2017 · 2017-11-15T19:00:13.603Z · score: 0 (0 votes) · LW · GW

Крымская татарка? (A Crimean Tatar?)

Я одессит, родился в Крыму. (I'm an Odessite -- born in Crimea.)

Comment by ilyashpitser on Stupid Questions September 2017 · 2017-11-15T14:30:23.403Z · score: 1 (1 votes) · LW · GW

It is possible to say that, but the work is being done by "combination." You can also say "for every permutation of n" and that means something different.

Typically when you say "for every x out of 30, property(x) holds" it means something like:

"every poster on lesswrong is a human being" (or more formally, "for every poster on lesswrong, that poster is a human being." (Note, this statement is meaningful but probably evaluates to false.)


Quantification is always over a set. If you are talking about permutations, you are first making a set of all permutations of 30 things (of which there are 30 factorial), and then saying "for every permutation in this set of permutations, some property holds."
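
For example (a toy sketch in Python, with a made-up property standing in for whatever is being claimed), quantifying over permutations means explicitly constructing that set first:

```python
from itertools import permutations

items = range(4)  # 4 rather than 30, so all 4! = 24 permutations fit in memory

# Made-up property of a permutation: its first element differs from its last.
def prop(perm):
    return perm[0] != perm[-1]

# "For every permutation in this set of permutations, prop holds":
print(all(prop(p) for p in permutations(items)))  # True
```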


edit: realized your native language might be Ukrainian: I think a similar issue exists in Ukrainian quantifier adjectives.

Comment by ilyashpitser on Stupid Questions September 2017 · 2017-11-14T23:09:40.007Z · score: 2 (2 votes) · LW · GW

"Every" doesn't need an order.

"For every x, property(x) holds" means "it is not the case that for any x, property(x) does not hold."

"For any x, property(x) holds" means "it is not the case that for every x, property(x) does not hold."

In Russian, quantifier adjectives are often implicit, which could be a part of the problem here. Native Russian speakers (like me) often have problems with this, also with definite vs indefinite articles in English.

edit: not only implicit but ambiguous when explicit, too!


Person below is right, "every" is sort of like an infinite "AND" and "any" is sort of like an infinite "OR."
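
A quick way to check the "infinite AND / infinite OR" picture, and the duality above, in Python (the example property is arbitrary):

```python
xs = range(10)
prop = lambda x: x % 2 == 0  # arbitrary example property

# "every" = big AND: all(...) ; "any" (there exists) = big OR: any(...)
# For-every via not-any: (for all x, P(x)) <=> not (exists x, not P(x)).
assert all(prop(x) for x in xs) == (not any(not prop(x) for x in xs))
# For-any via not-every: (exists x, P(x)) <=> not (for all x, not P(x)).
assert any(prop(x) for x in xs) == (not all(not prop(x) for x in xs))
```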

Comment by ilyashpitser on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-10T14:45:18.751Z · score: 0 (0 votes) · LW · GW

"I would still say that cause and effect is a subset of the kind of models that are used in statistics."

You would be wrong, then. The subset relation is the other way around. Bayesian networks are not causal models; they are statistical independence models.

Compressing information has nothing to do with causality. No experimental scientist talks about causality like that, in any field. There is a big literature on something called "compressed sensing," for example, but that literature (correctly) does not generally make claims about causality.

"I'm not aware of a theory or a model that uses vastly different entities to explain and to predict."

I am.

You can't tune causal models in any kind of straightforward way (e.g. trade off bias/variance properly), because the parameter of interest is never observed, unlike in standard regression models. Causal inference is a type of unsupervised problem, unless you have experimental data.

Rather than arguing with me about this, I suggest a more productive use of your time would be to just read some stuff on causal inference. You are implicitly smuggling in some definition you like that nobody uses.
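
To make the "other way around" point concrete: in a confounded model, conditioning and intervening give different answers, which is exactly the distinction a purely statistical independence model cannot express. A minimal simulation, with a smoking-lesion-style structure and invented numbers:

```python
import random
random.seed(0)

def draw(intervene_a=None):
    u = random.random() < 0.5  # hidden confounder (the "lesion")
    if intervene_a is None:
        a = random.random() < (0.8 if u else 0.2)  # u influences the action a
    else:
        a = intervene_a  # do(a): cut the u -> a arrow
    s = random.random() < (0.7 if u else 0.1)  # u influences s; a does not
    return a, s

n = 200_000
obs = [draw() for _ in range(n)]
p_s_given_a = sum(s for a, s in obs if a) / sum(a for a, s in obs)
p_s_do_a = sum(draw(intervene_a=True)[1] for _ in range(n)) / n

print(round(p_s_given_a, 2), round(p_s_do_a, 2))  # ~0.58 vs ~0.4: not equal
```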

Comment by ilyashpitser on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-09T22:04:58.618Z · score: 2 (2 votes) · LW · GW

"explanation", as far as the concept can be modelled mathematically, is fitness to data and low complexity

Nope. To explain, e.g. to describe "why" something happened, is to talk about causes and effects. At least that's the way people use that word in practice.

Prediction and explanation are very very different.

Comment by ilyashpitser on New program can beat Alpha Go, didn't need input from human games · 2017-11-01T21:43:20.438Z · score: 1 (1 votes) · LW · GW

http://marginalrevolution.com/marginalrevolution/2012/11/a-bet-is-a-tax-on-bullshit.html

Comment by ilyashpitser on Interactive model knob-turning · 2017-10-31T18:39:00.582Z · score: 0 (0 votes) · LW · GW

"Turning knobs" in a model is how people think about cause and effect formally.

Comment by ilyashpitser on Open thread, October 30 - November 5, 2017 · 2017-10-31T15:19:52.687Z · score: 6 (6 votes) · LW · GW

Gardeners?

Comment by ilyashpitser on New program can beat Alpha Go, didn't need input from human games · 2017-10-31T04:19:46.096Z · score: 1 (1 votes) · LW · GW

So, a concrete bet then? What specifically are you worried about? In the form of a falsifiable claim, please.


edit: I am trying to make you feel better, the real way. The empiricist way.

Comment by ilyashpitser on New program can beat Alpha Go, didn't need input from human games · 2017-10-30T14:42:38.378Z · score: 1 (1 votes) · LW · GW

Let's say 100 dollars, but the amount is largely symbolic. The function of the bet is to try to clarify what specifically you are worried about. I am happy to do less -- whatever is comfortable.

Comment by ilyashpitser on Pitting national health care systems against one another · 2017-10-29T04:12:29.715Z · score: 0 (0 votes) · LW · GW

http://callingbullshit.org/videos.html

(a) You don't know enough to decide one way or the other.

(b) If (a) is true, trust your local public health person.

Comment by ilyashpitser on Interactive model knob-turning · 2017-10-28T19:55:39.021Z · score: 0 (0 votes) · LW · GW

Agreed, causality is important!

Comment by ilyashpitser on New program can beat Alpha Go, didn't need input from human games · 2017-10-26T14:34:18.909Z · score: 3 (3 votes) · LW · GW

You should probably stop listening to random voices.


More seriously, do you want to make a concrete bet on something?

Comment by ilyashpitser on Pitting national health care systems against one another · 2017-10-24T21:51:55.594Z · score: 2 (2 votes) · LW · GW

"When it comes to needles to stick my new kiddo with, I'm not really being persuaded to do more than the intersection of vaccinations between similar nations."

You don't know enough to decide this. What counts as "similar" -- climate, culture, disease spectrum? Do you know the history of their immunization laws?


Seems to me you first decided this is an icky procedure, and it hurts your kid, and you feel protective. Then you went looking for reasons not to do it. Immunization has a free-rider aspect because of herd immunity. So you may well get away with it, in terms of your kid's health, but "people like you" (defectors in the prisoner's dilemma) are a problem.


If you are an evil pharma-corp, vaccines are a terrible way to be evil.


C/D calculations in public health are real, but this is one of those things where the only way to be effective is not to break the phalanx formation.

Comment by ilyashpitser on Open thread, October 2 - October 8, 2017 · 2017-10-17T16:10:57.242Z · score: 1 (1 votes) · LW · GW

In the hierarchy of evidence, this would be a "case study." So the value is not as high as a proper study, but non-zero.

Comment by ilyashpitser on Open thread, October 2 - October 8, 2017 · 2017-10-15T23:09:06.384Z · score: 0 (0 votes) · LW · GW

Consider creating detailed records of lifestyle differences between you and your sister. Perhaps keep a diary (in effect creating a longitudinal dataset for folks to look at later).

There is an enormous interest in disentangling lifestyle choices from genetics for all sorts of health and nutrition questions.


Thank you for considering this, I think this could be very valuable.

Comment by ilyashpitser on October 2017 Media Thread · 2017-10-14T22:48:09.233Z · score: 0 (0 votes) · LW · GW

"They recommend using LR only in cases where a probability-based model is warranted."

Well, yeah.

Comment by ilyashpitser on Running a Futurist Institute. · 2017-10-10T20:49:52.094Z · score: 0 (0 votes) · LW · GW

Pretty hard, I suppose.


It's weird, though: if you are asking these types of questions, why are you trying to run an institute? Typically very senior academics do that. (I am not singling you out, either; I have the same question for folks running MIRI.)

Comment by ilyashpitser on Running a Futurist Institute. · 2017-10-09T20:28:21.722Z · score: 1 (1 votes) · LW · GW

Try publishing in mainstream AI venues? (AAAI has some sort of safety workshop this year.) I am assuming that if you want to start an institute, you have publishable stuff you want to say.

Comment by ilyashpitser on Running a Futurist Institute. · 2017-10-06T21:42:52.478Z · score: 2 (2 votes) · LW · GW

Why create any of them?

Comment by ilyashpitser on Are causal decision theorists trying to outsmart conditional probabilities? · 2017-10-06T15:48:37.336Z · score: 1 (1 votes) · LW · GW

I agree that in situations where A only has outgoing arrows, p(s | do(a)) = p(s | a), but this class of situations is not the "Newcomb-like" one. In particular, the classical smoking lesion has a confounder with an incoming arrow into A.

Maybe we just disagree on what "Newcomb-like" means? To me, what makes a situation "Newcomb-like" is your decision algorithm influencing the world through something other than your decision (as happens in the Newcomb problem via Omega's prediction). In the smoking lesion, this does not happen: your decision algorithm only influences the world via your action, so it's not "Newcomb-like" to me.
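
For contrast, a minimal simulation of the only-outgoing-arrows case (invented numbers again): when the action is randomized, nothing influences it, so conditioning on it and intervening on it coincide.

```python
import random
random.seed(1)

def draw():
    a = random.random() < 0.5                  # a has no incoming arrows
    s = random.random() < (0.6 if a else 0.3)  # a -> s is the only dependence
    return a, s

obs = [draw() for _ in range(200_000)]
p_s_given_a = sum(s for a, s in obs if a) / sum(a for a, s in obs)
print(round(p_s_given_a, 2))  # ~0.6 = p(s | do(a)), by construction
```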

Comment by ilyashpitser on Rational Feed: Last Week's Community Articles and Some Recommended Posts · 2017-10-03T14:21:00.603Z · score: 1 (1 votes) · LW · GW

The thing about race and intelligence (aside from the fact that it's a hopelessly toxic topic) is that most folks making claims relating the two can't possibly have the data to be confident about anything. Intelligence is complicated -- very complicated. "Race" is complicated, too. I don't have to obfuscate anything here, because genetics is inherently weird and messy, and so are brains.

So if folks sound confident, or make strong claims, they are either confused or racist or both. The ever-burning tire fire of slatestar's comment section (when it comes to this topic) is a prime example of what I am talking about.

Comment by ilyashpitser on Rational Feed: Last Week's Community Articles and Some Recommended Posts · 2017-10-03T12:19:09.389Z · score: 2 (2 votes) · LW · GW

"If your IQ is 110 then you are never, ever going to understand string theory."

I always wondered about these types of claims. String theory is just math applied in a particular way. So we can try to figure out exactly where the line is:

Can you understand calculus with "IQ 110" (I think clearly so)?

How about analysis of the complex plane?

How about linear algebra, hermitian matrices, etc?

How about group theory?


String theory stuff is very complicated, but it's just made up of this type of math put together in a particular way. Some of it just takes time to internalize, but there is never any "magic sauce" about any of the specific parts, I don't think.

One might say smart folks take less time to internalize, but my experience has been that truly internalizing complex math is a bit of a slow process for everyone.


I think if Scott Aaronson once took an IQ test and got 106, that should tell you everything you need to know about how good this proxy is for "complex cognition stuff."


I think folks in the rationality-sphere have a Mensa-like obsession with this number. Folks in Mensa do daily puzzles and worry about their IQ, folks in academia/industry publish papers and contribute to the intellectual conversation or create things and contribute to civilization. I humbly submit the latter is a better use of time.

Plus, a resume is a much better proxy for intelligence than IQ -- more bits.

Comment by ilyashpitser on Open thread, September 25 - October 1, 2017 · 2017-09-29T14:37:31.992Z · score: 0 (0 votes) · LW · GW

Don't know. Ask a statistician who knows about design.

Comment by ilyashpitser on Open thread, September 25 - October 1, 2017 · 2017-09-28T14:08:21.055Z · score: 2 (2 votes) · LW · GW

You have an experimental design problem: https://en.wikipedia.org/wiki/Design_of_experiments.

The way that formalism would think about your problem is that you have two "treatments" (type of test, which you can vary, and type of student) and an "outcome" (how a given student does on a given test, typically some sort of histogram that's hopefully shaped like a bell).

Your goal is to efficiently vary "treatment" values to learn as much as possible about the causal relationship between how you structure a test, student quality, and the outcome.


There's reading you can do on this problem; it's a classical problem in statistics. Both Jerzy Neyman and Ronald Fisher wrote a lot about it; the latter has a famous book (The Design of Experiments).

In fact, in some sense this is the problem of statistics, in the sense that modern statistics could be said to have grown out of, and generalized from, this problem.
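
A toy version of that setup, as a sketch (the factor levels and the data-generating process are invented for illustration): a full factorial design crosses every test type with every student type and randomizes the run order.

```python
import itertools
import random

random.seed(0)

test_types = ["multiple_choice", "essay"]  # the treatment you control
student_types = ["strong", "weak"]         # the other "treatment"

# Full factorial design: every combination of factor levels, replicated 50x.
design = list(itertools.product(test_types, student_types)) * 50
random.shuffle(design)  # randomize run order

def outcome(test, student):
    # Invented stand-in for how a real student would score on a real test.
    base = 70 if student == "strong" else 55
    bump = 5 if test == "essay" else 0
    return base + bump + random.gauss(0, 10)

results = [(t, s, outcome(t, s)) for t, s in design]

# Cell means estimate the effect of each factor combination on the outcome.
for t, s in itertools.product(test_types, student_types):
    scores = [y for tt, ss, y in results if (tt, ss) == (t, s)]
    print(t, s, round(sum(scores) / len(scores), 1))
```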

Comment by ilyashpitser on Intuitive explanation of why entropy maximizes in a uniform distribution? · 2017-09-24T15:38:38.292Z · score: 2 (2 votes) · LW · GW

Shannon's definition of entropy corresponds very closely to the definition of entropy used in statistical mechanics. It's slightly more general and devoid of "physics baggage" (macro states and so on).

Analogy: the Ising model of spin systems vs. undirected graphical models (Markov random fields). The former carries a lot of baggage like "magnetization, external field, energy." The latter is just a statistical model of conditional independence on a graph. The Ising model is a special case (in fact the first developed case, back in the 1920s) of a Markov random field.


Physicists have a really good nose for models.

Comment by ilyashpitser on Intuitive explanation of why entropy maximizes in a uniform distribution? · 2017-09-23T17:17:10.690Z · score: 5 (5 votes) · LW · GW

To understand entropy you need to understand expected values, and self-information.

Expected value is what happens on average -- a sum of outcomes weighted by how likely they are. The expected value of a 6-sided die is 1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 3.5.

Self-information is sort of like how many bits of information you learn if something random becomes certain. For example, a fair coin comes up heads with 0.5 probability and tails with 0.5 probability. So if you learn this fair coin actually came up tails, you will learn the number of bits equal to its self-information, which is -log_2(0.5) = log_2(1/0.5) = 1.

Why people decided that this specific measure is what we should use is a good question, and not so obvious. Information theorists try to justify it axiomatically. That is, if we pick this measure, it obeys some nice properties we intuitively want it to obey. Properties like "we want this number to be higher for unlikely events" and "we want this number to be additive for independent events." This is why we get a minus sign and a log (the base does not matter for logs, as long as it is larger than 1, but people like base 2).

Entropy is just the expected self-information, so once you understand the above two, you understand entropy.
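
Those two pieces in code (a small sketch; logs are base 2, so the units are bits):

```python
from math import log2

def self_information(p):
    # Bits learned when an event of probability p actually happens.
    return -log2(p)

def entropy(dist):
    # Expected self-information: a probability-weighted average.
    return sum(p * self_information(p) for p in dist if p > 0)

print(self_information(0.5))  # fair coin came up tails: 1.0 bit
print(entropy([0.5, 0.5]))    # entropy of a fair coin: 1.0 bit
print(entropy([1/6] * 6))     # entropy of a fair die: ~2.585 bits
```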


Once you understand entropy, the reason entropy is maximized by the uniform distribution is related to why the area of a figure with a given circumference is maximized by a circle. Area is also a sum (of tiny little pie slices of a figure), just like entropy is a sum. For area, the constraint is the circumference being a given number; for entropy, the constraint is that probabilities must sum to one. You can think of both of these constraints as "normalization constraints" -- things have to sum to some number.

In both cases, the sum is maximized if individual pieces are as equal to each other as allowed.
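
A quick numerical check of that claim (same entropy definition as the sketch above): move any probability mass away from uniform, and the sum drops.

```python
from math import log2

def entropy(dist):
    return sum(-p * log2(p) for p in dist if p > 0)

print(entropy([0.25] * 4))                # uniform over 4 outcomes: 2.0 bits
print(entropy([0.4, 0.3, 0.2, 0.1]))      # less equal: ~1.85 bits
print(entropy([0.97, 0.01, 0.01, 0.01]))  # very unequal: ~0.24 bits
```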