Posts

Idea: NV⁻ Centers for Brain Interpretability 2024-02-18T05:28:27.291Z
James Camacho's Shortform 2023-07-22T01:55:40.819Z
The Platonist’s Dilemma: A Remix on the Prisoner's. 2022-04-12T03:49:42.695Z

Comments

Comment by James Camacho (james-camacho) on Evaluating the truth of statements in a world of ambiguous language. · 2024-10-07T19:32:05.356Z · LW · GW

I claim that there's just always a distribution over meanings, and it can be sharp or fuzzy or bimodal or any sort of shape.

The issue is that you cannot prove this. If you're considering any possible meaning, you will run into recursive meanings (e.g. "he wrote this") which are non-terminating. So, the truthfulness of any sentence, including your claim here, is not defined.

You might try limiting the number of steps in your interpretation: only meanings that terminate to a probability within $N$ steps count; however, you still have to define or believe in the machine that runs your programs.

Now, I'm generally of the opinion that this is fine. Your brain is one such machine, and being able to assign probabilities is useful for letting your brain (and its associated genes) proliferate into the future. In fact, science is really just picking more refined machines to help us predict the future better. However, keep in mind that (1) this eventually boils down to "trust, don't verify", and (2) you've committed suicide in a number of worlds that don't operate in the way you've limited yourself. I recently had an argument with a Buddhist whose point was essentially, "that's the vast majority of worlds, so stop limiting yourself to logic and reason!"

Comment by James Camacho (james-camacho) on Why I’m not a Bayesian · 2024-10-07T02:00:00.705Z · LW · GW

Natural languages, by contrast, can refer to vague concepts which don’t have clear, fixed boundaries


I disagree. I think it's merely that the space is so large it's hard to pin down where the boundary is. However, language does define natural boundaries (that are slightly different for each person and language, and shift over time). E.g., see "Efficient compression in color naming and its evolution" by Zaslavsky et al.

Comment by James Camacho (james-camacho) on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review · 2024-10-06T03:30:35.856Z · LW · GW

"The boundary of a boundary is zero"


I think this is mostly arbitrary.

So, in the 20th century Russell's paradox came along and forced mathematicians into creating constructive theories. For example, in ZFC set theory, you begin with the empty set {} and build out all sets with a tower of lower-level sets. Maybe the natural numbers become {}, {{}}, {{{}}}, etc. Using different axioms you might get a type theory; in fact, any programming language is basically a formal logic. The basic building blocks, like the empty set or the built-in types, are called atoms.
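As a quick illustration, here's that tower of sets in Python (a toy sketch of mine, with frozensets standing in for pure sets):

```python
# 0 = {}, 1 = {{}}, 2 = {{{}}}, and so on: each number is the set
# containing its predecessor.
def succ(n):
    return frozenset([n])

zero = frozenset()  # {}
one = succ(zero)    # {{}}
two = succ(one)     # {{{}}}
print(two == frozenset([frozenset([frozenset()])]))  # True
```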

In algebraic topology, the atom is a simplex—lines in one dimension, triangles in two dimensions, tetrahedra in three dimensions, and so on. I think they generally use an axiom of infinity, so each simplex is infinitely small (convenient when you have smooth curves like circles), but they need to be defined at the lowest level. This includes how you define simplices from lower-dimensional simplices! And this is where the boundary comes in.

Say you have a triangle (2-simplex) $[A, B, C]$. Naively, we could define its boundary as the sum of its edges:

$$\partial [A, B, C] = [A, B] + [B, C] + [A, C].$$

However, if we stuck two of them together, the shared edge $[A, C]$ wouldn't disappear from the boundary:

$$\partial\big([A, B, C] + [A, C, D]\big) = [A, B] + [B, C] + 2[A, C] + [C, D] + [A, D].$$

This is why they usually alternate sign, so

$$\partial [A, B, C] = [B, C] - [A, C] + [A, B].$$

Then, since

$$[A, C] = -[C, A],$$

you could also write it like

$$\partial [A, B, C] = [A, B] + [B, C] + [C, A].$$

It's essentially a directed loop around the triangle (the analogy breaks when you try higher dimensions, unfortunately). Now, the famous quote "the boundary of a boundary is zero" is relatively trivial to prove. Let's remove just the two indices $A_i, A_j$ from the simplex $[A_1, A_2, \dots, A_i, \dots, A_j, \dots, A_n]$. If we remove $A_i$ first, we'd get

$$(-1)^{i+1}(-1)^{j}\,[A_1, \dots, A_{i-1}, A_{i+1}, \dots, A_{j-1}, A_{j+1}, \dots, A_n],$$

while removing $A_j$ first gives

$$(-1)^{j+1}(-1)^{i+1}\,[A_1, \dots, A_{i-1}, A_{i+1}, \dots, A_{j-1}, A_{j+1}, \dots, A_n].$$

The first is $-1$ times the second, so everything will zero out. However, it's only zero because we decided edges should cancel out along shared boundaries. We can choose a different system where they add together, which leads to the permanent as a measure of volume instead of the determinant. Or, one that uses a much more complex relationship (see: the immanant).
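To make the cancellation concrete, here's a quick numerical check (my own sketch) using the signed boundary maps of a single triangle:

```python
import numpy as np

# d2 sends the triangle [A, B, C] to [B, C] - [A, C] + [A, B].
# Rows are the edges [B, C], [A, C], [A, B].
d2 = np.array([[1], [-1], [1]])

# d1 sends each edge [X, Y] to Y - X.
# Rows are the vertices A, B, C; columns are the edges above.
d1 = np.array([
    [ 0, -1, -1],  # A
    [-1,  0,  1],  # B
    [ 1,  1,  0],  # C
])

print(d1 @ d2)  # [[0], [0], [0]] -- the boundary of a boundary is zero
```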

I'm certainly not an expert here, but it seems like fermions (e.g. electrons) exchange via the determinant, bosons (e.g. mass/gravity) use the permanent, and more exotic particles (e.g. anyons) use the immanant. So, when people base their speculations on the "boundary of a boundary" being a fundamental piece of reality, it bothers me.

Comment by James Camacho (james-camacho) on A Path out of Insufficient Views · 2024-09-27T22:36:00.472Z · LW · GW

That assumes the law of non-contradiction. I could hold the belief that everything will happen in the future, and my prediction will be right every time. Alternatively, I can adjust my memory of a prediction to be exactly what I experience now.

Also, predicting the future only seems useful insofar as it lets the belief propagate better. The more rational and patient the hosts are, the more useful this skill becomes. But, if you're thrown into a short-run game (say ~80yrs) that's already at an evolutionary equilibrium, combining this skill with the law of non-contradiction (i.e. only holding consistent beliefs) may get you killed.

Comment by James Camacho (james-camacho) on A Path out of Insufficient Views · 2024-09-27T04:33:50.329Z · LW · GW

Within a system with competition, why would the most TRUE thing win? No, the most effective thing wins.


Why do you assume truth even exists? To me, it seems like there are just different beliefs that are more or less effective at proliferation. For example, in 1300s Venice, any belief except Catholicism would destroy its hosts pretty quickly. Today, the same beliefs would get shouted down by scientific communities and legislated out of schools.

Comment by James Camacho (james-camacho) on things that confuse me about the current AI market. · 2024-08-30T23:25:04.960Z · LW · GW
  1. Is this apparent parity due to a mass exodus of employees from OpenAI, Anthropic, and Google to other companies, resulting in the diffusion of "secret sauce" ideas across the industry?

No. There isn't much "secret sauce", and these companies never had a large amount of AI talent to begin with. Their advantage is being in a position with hype/reputation/size to get to market faster. It takes several months to set up the infrastructure (getting money, data, and compute clusters), but that's really the only hurdle.

  2. Does this parity exist because other companies are simply piggybacking on Meta's open-source AI model, which was made possible by Meta's massive compute resources? Now, by fine-tuning this model, can other companies quickly create models comparable to the best?

No. "Everyone" in the AI research community knew how to build Llama, multi-modal models, or video diffusion models a year before they came out. They just didn't have $10M to throw around.

Also, fine-tuning isn't really the way to go. I can imagine people using it as a teacher during the warm-up phase, but the coding infrastructure doesn't really exist to fine-tune or integrate another model as part of a larger one. It's usually easier to just spend the extra time securing money and training.

  3. Is it plausible that once LLMs were validated and the core idea spread, it became surprisingly simple to build, allowing any company to quickly reach the frontier?

Yep. Even five years ago you could open a Colab notebook and train a language translation model in a couple of minutes.

  4. Are AI image generators just really simple to develop but lack substantial economic reward, leading large companies to invest minimal resources into them?

No, images are much harder than language. With language models, you can exactly model the output distribution, while the space of images is continuous and much too large for that. Instead, the best models measure the probability flow (e.g. diffusion/normalizing flows/flow-matching), and follow it towards high-probability images. However, parts of images should be discrete. You know humans have five fingers, or text has words in it, but flows assume your probabilities are continuous.

Imagine you have a distribution that looks like

__|_|_|__

A flow will round out those spikes into something closer to

_/^\/^\/^\__

which is why gibberish text or four-and-a-half fingers appear. In video models, this leads to dogs spawning and disappearing into the pack.
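Here's a toy illustration of that rounding (my own sketch, not any particular model): convolving a spiky distribution with a continuous kernel leaks mass into the gaps.

```python
import numpy as np

x = np.linspace(0, 1, 101)
target = np.zeros_like(x)
target[[25, 50, 75]] = 1.0  # discrete spikes: __|_|_|__
target /= target.sum()

# A Gaussian kernel stands in for the smoothing a continuous flow applies.
kernel = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(target, kernel, mode="same")  # _/^\/^\/^\__

print(smoothed[37])  # nonzero mass between spikes: the half-finger region
```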

  5. Could it be that legal challenges in building AI are so significant that big companies are hesitant to fully invest, making it appear as if smaller companies are outperforming them?

Partly when it comes to image/video models, but this isn't a huge factor.

  6. And finally, why is OpenAI so valuable if it’s apparently so easy for other companies to build comparable tech? Conversely, why are these no name companies making leading LLMs not valued higher?

I think it's because AI is a winner-takes-all competition. It's extremely easy for customers to switch, so they all go to the best model. Since ClosedAI already has funding, compute, and infrastructure, it's risky to compete against them unless you have a new kind of model (e.g. LiquidAI), reputation (e.g. Anthropic), or are a billionaire's pet project (e.g. xAI).

Comment by James Camacho (james-camacho) on James Camacho's Shortform · 2024-08-29T08:09:49.351Z · LW · GW

Religious freedoms are a subsidy to keep the temperature low. There's the myth that societies will slowly but surely get better, kind of like gradient descent. If we increase the temperature too high, an entropic force would push us out of a narrow valley, so society could become much worse (e.g. nobody wants the Spanish Inquisition). It's entirely possible that the stable equilibrium we're being attracted to will still have religion.

Comment by James Camacho (james-camacho) on Isomorphisms don't preserve subjective experience... right? · 2024-07-05T23:46:51.899Z · LW · GW

Can't you choose an arbitrary encoding procedure? Choosing a different one only adds a constant number of bits. Also, my comment on discounted entropy was a little too flippant. What I mean is closer to an entropy rate with a discount factor, like in soft actor-critic. Maximizing your ability to have options in the future requires a lot of "agency".

Maybe consciousness should be more than just agency, e.g. if a chess bot were trained to maximize entropy, games wouldn't be as strategic as if it wants to get a high*-energy payoff. However, I'm not convinced energy even exists? Humans learn strategy because their genes are more likely to survive, thrive, and have choices in the future when they win. You could even say elementary particles are just the ones still around since the Big Bang.

*Note: The physicists should reverse the sign on energy. While they're at it, switch to inverse-temperature.

Comment by James Camacho (james-camacho) on Isomorphisms don't preserve subjective experience... right? · 2024-07-04T00:40:27.768Z · LW · GW

Consider all programs encoding isomorphisms from a rock to something else (e.g. my brain, or your brain). If the program takes $n$ bits to encode, we add $2^{-n}$ times the other entity to the rock (times some partition number so all the weights add up to one). Since some programs are automorphisms, we repeatedly do this until convergence.

The rock will now possess a tiny bit of consciousness, or really any other property. However, where do we get the original "sources" of consciousness? If you're a solipsist, you might say, "I am the source of consciousness." I think a better definition is your discounted entropy.

Comment by James Camacho (james-camacho) on Isomorphisms don't preserve subjective experience... right? · 2024-07-03T23:05:22.931Z · LW · GW

An isomorphism isn't enough. Stealing from Robert (Lastname?), you could make an isomorphism from a rock to your brain, but you likely wouldn't consider it "conscious". You have to factor out the Kolmogorov complexity of the isomorphism.

Comment by James Camacho (james-camacho) on Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted · 2024-07-03T19:03:04.587Z · LW · GW

Would insider trading work out if everyone knew who was asking to trade with them ahead of time?

Comment by James Camacho (james-camacho) on Fertility Rate Roundup #1 · 2024-06-29T03:24:25.574Z · LW · GW

It could be a case of a backward-bending curve. Fewer children make the economy worse, so more people choose to work rather than have children.

Comment by James Camacho (james-camacho) on Nathan Helm-Burger's Shortform · 2024-04-30T18:48:33.175Z · LW · GW

The computer vision researchers just chose the wrong standard. Even the images they train on come in [pixel_position, color_channels] format.

Comment by James Camacho (james-camacho) on Open Thread Spring 2024 · 2024-04-02T22:58:02.521Z · LW · GW

Age limits do exist: you have to be at least 35 to run for President, at least 30 for Senator, and 25 for Representative. This automatically adds a decade or two to your candidates.

Comment by James Camacho (james-camacho) on You Don't Exist, Duncan · 2024-04-01T05:02:24.062Z · LW · GW

In earlier times, I spent an incredible amount of my mental capacity trying to accurately model those around me. I can count on zero hands the number of people that reciprocated. Even just treating me as real as I treated them would fit on one hand. On the other hand, nearly everyone I talk to does not have "me" as even a possibility in their model.

Comment by James Camacho (james-camacho) on Evidential Correlations are Subjective, and it might be a problem · 2024-03-07T21:47:24.357Z · LW · GW

It just takes a very long time in practice; see "Basins of Attraction" by Ellison.

Comment by James Camacho (james-camacho) on Ideological Bayesians · 2024-02-27T18:22:22.130Z · LW · GW

I've been thinking about something similar, and might write a longer post about it. However, the solution to both is to anneal on your beliefs. Rather than looking at the direct probabilities, look at the logits. You can then raise the temperature, let the population kind of randomize their beliefs, and cool it back down.

See "Solving Multiobjective Game in Multiconflict Situation Based on Adaptive Differential Evolution Algorithm with Simulated Annealing" by Li et. al.

Comment by James Camacho (james-camacho) on What's this 3rd secret directive of evolution called? (survive & spread & ___) · 2024-02-07T18:26:49.197Z · LW · GW

Perhaps "fit", from the Latin fio (come about) + English fit (fit). An object must fit, survive, and spread.

Comment by James Camacho (james-camacho) on A short 'derivation' of Watanabe's Free Energy Formula · 2024-01-30T05:07:43.483Z · LW · GW

To see how much the minimal point $w_0$ contributes to the integral we can integrate it in its vicinity


I think you should be looking at the entire stable island, not just integrating from zero to one. I expect you could get a decent approximation with Lie transform perturbation theory, and this looks similar to the idea of macro-states in condensed matter physics, but I'm not knowledgeable in these areas.

Comment by James Camacho (james-camacho) on A short 'derivation' of Watanabe's Free Energy Formula · 2024-01-30T04:37:30.235Z · LW · GW

$-\sum_{i=1}^{N} \log p(y_i \mid x_i, w)$


You have a typo, the equation after Free Energy should start with 

Also, the third line should be $+$, not minus.

Also, usually people use $\theta$ for model parameters (rather than $w$). I don't know the etymology, but game theorists use the same letter (for "types" = models of players).

Comment by James Camacho (james-camacho) on What Software Should Exist? · 2024-01-20T02:10:22.011Z · LW · GW

Also sometimes when I explain what a hyperphone is well enough for the other person to get it, and then we have a complex conversation, they agree that it would be good. But very small N, like 3 to 5.


It's difficult to understand your writing, and based on this quote, I feel like you could improve at communication in general. The concept of a hyperphone isn't that complex---the ability to branch in conversations---so the modifiers "well enough", "complex", and "very small N" make me believe it's only complex because you're unclear.

For example, the blog post you linked to is titled "Hyperphone", yet you never define a hyperphone. I can infer from the section on streaming what you imagine, but that's the second-to-last section!

Comment by James Camacho (james-camacho) on Bayesians Commit the Gambler's Fallacy · 2024-01-07T23:58:00.208Z · LW · GW

There's the automorphism

$$(x_1, x_2, x_3, x_4, \dots) \mapsto (x_1, \bar{x}_2, x_3, \bar{x}_4, \dots),$$

i.e. flipping every other outcome, which turns a switchy distribution into a sticky one, and vice versa. The two have to be symmetric, so your conclusion cannot be correct.

Comment by James Camacho (james-camacho) on Bayesians Commit the Gambler's Fallacy · 2024-01-07T23:27:16.813Z · LW · GW

This means the likelihood distribution over data generated by Steady is closer to the distribution generated by Switchy than to the distribution generated by Sticky.


Their KL divergences are exactly the same. Suppose Baylee's observations are $x_1, \dots, x_n$. Let $p_s$ be the probability if there's an $s$ chance of switching, and similarly $p_t$. By the chain rule,

$$D(p_s \,\|\, p_t) = \sum_{i=1}^{n} \mathbb{E}_{x_{<i} \sim p_s}\Big[D\big(p_s(x_i \mid x_{<i}) \,\|\, p_t(x_i \mid x_{<i})\big)\Big].$$

In particular, when either $s$ or $t$ is equal to one half, this divergence is symmetric for the other variable.
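A quick numeric check of that symmetry (my own, with made-up switch probabilities): the per-flip divergence from Steady ($t = 1/2$) is identical for switch chance $s$ and $1 - s$.

```python
import numpy as np

def kl_bern(s, t):
    # KL divergence between Bernoulli(s) and Bernoulli(t)
    return s * np.log(s / t) + (1 - s) * np.log((1 - s) / (1 - t))

for s in (0.6, 0.7, 0.8):
    print(kl_bern(s, 0.5), kl_bern(1 - s, 0.5))  # equal: Switchy vs. Sticky
```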

Comment by James Camacho (james-camacho) on Memory bandwidth constraints imply economies of scale in AI inference · 2023-12-17T00:45:56.847Z · LW · GW

The problem with etching specific models is scale. It costs around $1M to design a custom chip mask, so it needs to be amortized over tens or hundreds of thousands of chips to become profitable. But no companies need that many.

Assume a model takes 3e9 flops to infer the next token, and these chips run as fast as H100s, i.e. 3e15 flops/s. A single chip can then infer 1e6 tokens/s. If you have 10M active users, then 100 chips can provide each user a token every 100ms, around 600 wpm.

Even OpenAI would only need hundreds, maybe thousands of chips. The solution is smaller-scale chip production. There are startups working on electron beam lithography, but I'm unaware of a retailer Etched could buy from right now.


EDIT: 3 trillion flops/token (similar to GPT-4) is 3e12, so that would be 100,000 chips. The scale is actually there.
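The back-of-the-envelope arithmetic, spelled out (same assumptions as above: chips as fast as H100s, 10M users at ~10 tokens/s each):

```python
chip_flops = 3e15       # ~H100 speed, flop/s
token_rate = 10e6 * 10  # 10M users, ~10 tokens/s each (~600 wpm)

for flops_per_token in (3e9, 3e12):  # small model vs. GPT-4-scale
    tokens_per_chip = chip_flops / flops_per_token
    print(f"{flops_per_token:.0e} flop/token -> "
          f"{token_rate / tokens_per_chip:,.0f} chips")
# 3e+09 flop/token -> 100 chips
# 3e+12 flop/token -> 100,000 chips
```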

Comment by James Camacho (james-camacho) on Self-Referential Probabilistic Logic Admits the Payor's Lemma · 2023-11-28T21:34:41.540Z · LW · GW

 so 

It should be .

Comment by James Camacho (james-camacho) on Please speak unpredictably · 2023-07-24T03:44:17.436Z · LW · GW

If it takes longer to understand, then it's very inefficient.

Comment by James Camacho (james-camacho) on James Camacho's Shortform · 2023-07-22T01:55:40.896Z · LW · GW

Graph Utilitarianism:

People care about others, so their utility function naturally takes into account the utilities of those around them. They may weight others' utilities by familiarity, geographical distance, DNA distance, trust, etc. If every weight is nonnegative, there is a unique global utility function (Perron-Frobenius; see the sketch below).

Some issues it solves:

  • Pascal's mugging.
  • The argument "utilitarianism doesn't work because you should care more about those around you".

Big issue:

  • In a war, people assign negative weights towards their enemies, leading to multiple possible utility functions (which say the best thing to do is exterminate the enemy).
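Here's a minimal sketch of the construction (a made-up three-person weight matrix; power iteration finds the Perron eigenvector):

```python
import numpy as np

# W[i][j]: how much person i weights person j's utility (including their own).
W = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])

v = np.ones(3)
for _ in range(100):  # power iteration: converges to the unique
    v = W.T @ v       # dominant (Perron) eigenvector when weights
    v /= v.sum()      # are nonnegative and the graph is connected

print(v)  # the global utility's weight on each person
```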
Comment by James Camacho (james-camacho) on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2023-07-08T04:37:33.261Z · LW · GW

Did you check if there was a significant age difference between the two groups? I would expect proto-rationalists to be younger, so they would have less money and fewer chances to have signed up for cryonics.

Comment by James Camacho (james-camacho) on GPT4 is capable of writing decent long-form science fiction (with the right prompts) · 2023-05-23T22:53:51.916Z · LW · GW

There's a recent paper doing something similar: https://arxiv.org/pdf/2305.13304.pdf. They tell GPT to create long- and short-term memory, then use semantic search to look up the long-term memory.

Comment by James Camacho (james-camacho) on Proposal: Butt bumps as a default for physical greetings · 2023-04-01T19:04:40.250Z · LW · GW

James Camacho

Comment by James Camacho (james-camacho) on The Social Recession: By the Numbers · 2022-10-31T06:18:02.082Z · LW · GW

Have you considered that signaling could play a large part in this? A European friend of mine once said, "people in the US try to do everything in high school." That's because, to get into a top undergraduate program, Americans have to signal very hard. Worse, a master's degree is quickly becoming the new high school diploma, due to signaling to employers.

When kids are spending their lives trying to signal stronger, it's a lot harder to balance it with friends. It used to be that dating as an undergraduate made sense--people would actually get married during or right out of college! Now, it makes less sense to date for a year or two and then try to maintain a long-distance relationship as you split off into different PhD programs.

Comment by James Camacho (james-camacho) on The Futility of Religion · 2022-10-24T19:48:41.496Z · LW · GW

I think the correct reasoning is: if you didn't get the job, you didn't pray hard enough. You weren't faithful enough to be rewarded. Or maybe you were, and this is just a trial of your faith. It's easy to have faith when faith seems to work; it's only when all the experiments you perform seem to contradict your faith that it is really tested.

Comment by James Camacho (james-camacho) on Untapped Potential at 13-18 · 2022-10-19T02:33:26.322Z · LW · GW

I think this needs to be done for >18 year-olds as well. Most research positions require a PhD as a prerequisite, when there are many talented undergraduates who could drop out of college and perform the research after a few weeks' training.

Comment by James Camacho (james-camacho) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-28T05:14:15.295Z · LW · GW

AIs need immense databases to provide decent results. For example, to recognize if something is a potato, an AI will take 1,000 pictures of a potato and 1,000 pictures of not-a-potato, so that it can tell you if something is a potato with 95% accuracy.

Well, 95% accurate isn't good enough--that's how you get Google labelling images of African Americans as gorillas. So what's the solution? More data! But how do you get more data? Tracking consumers.

Websites track everything you do on the internet, then sell your data to Amazon, Netflix, Facebook, etc. to bolster their AI predictions. Phone companies track your location, and credit card companies track your purchases.

Eventually, true AI will replace these pattern-matching pretenders, but in the meantime data has become a new currency, and it's being stolen from the general public. Many people know and accept their cookies being eaten by every website, but many more have no idea.

Societally, this threatens a disaster for AI research. Already people say to leave your phones at home when you go to a protest--no matter which side of the political spectrum it's on. Soon enough, people will turn on AI altogether if this negative perception isn't fixed.

So, to tech executives: Put more funds into true AI, and less into growing databases. Not only is it fiscally costly, but the social cost is too high.

To policymakers: Get your data from consenting parties. A checkbox at the end of a three-page legal statement is hardly consent. Instead, follow the example of statisticians. Use studies, but instead of a month-long trial, all you ask for is a picture and a favorite movie.

To both: Invest more money in the future of AI. In the past ten years we've gone from 64x64-pixel ghoulish faces to high-definition GANs and chess grandmasters trained in hours on a home computer. Imagine how much better AI will be in another ten years. Fifteen thousand now could save you fifteen million or more in your company's lifetime.

Comment by James Camacho (james-camacho) on The Platonist’s Dilemma: A Remix on the Prisoner's. · 2022-04-21T18:38:26.815Z · LW · GW

The Carthusian Order practiced vows of silence.