Posts

Comments

Comment by hobs on Robert Long On Why Artificial Sentience Might Matter · 2022-08-29T03:59:28.523Z · LW · GW

I like the hypothetical Nigeria question–answer pair. It takes advantage of the latest thinking about how to detect and quantify sentience with black-box tests. I think Artificial You listed several questions in its intelligence and sentience tests that this one QA pair accomplishes in one fell swoop.

Comment by hobs on Kurzgesagt – The Last Human (Youtube) · 2022-08-28T18:37:30.369Z · LW · GW

An inspiring and convincing long-term-thinking perspective.

Really enjoyed the population growth visualization that made concrete the claim that "7% of all humans ever born are alive right now." That means 7% of all our genetic diversity as a species is available to us now. Of course this undercounts the compounding of mutations over multiple generations, so it's a different kind of diversity. But 7% of all the kinds of humans that are viable (can survive) are alive now. The same goes for cultures, organizational structures, and religions or belief frameworks. That's a hopeful thought for me, inspired by this video: the best ideas and systems of cooperating (economic systems) are likely alive and being tested right now.

Comment by hobs on Common misconceptions about OpenAI · 2022-08-28T17:55:59.628Z · LW · GW

I might add the most glaring misconception, at least for me in the early days: I assumed their primary goal was to support open-source AI, and that they would "default to open" on all their projects. Instead, orgs like Hugging Face expend significant resources reverse engineering the AI papers and models that OpenAI releases.

Comment by hobs on Taking the parameters which seem to matter and rotating them until they don't · 2022-08-27T18:07:20.501Z · LW · GW

Wouldn't your explainable rotated representation create a more robust model? Kind of like how Newton's model of gravity was a better model than Copernicus's nested epicycles or Kepler's ellipses. Your model might be more resistant to adversarial examples and might generalize outside of the training set.

Comment by hobs on Taking the parameters which seem to matter and rotating them until they don't · 2022-08-27T17:57:02.905Z · LW · GW

I was going to ask the same thing. It may not be possible to create a simple vector representation of the circuit if the circuit must simulate a complex nonlinear system. It doesn't seem possible if the inputs are embeddings or sensor data from real-world causal dynamic systems like images and text, and the outputs are vector-space representations of the meaning in those inputs (semantic segmentation, NLP, etc.). If it were possible, it would be like saying there's a simple, consistent vector representation of all the common sense reasoning about a particular image or natural language text. And your rotated explainable embedding would be far more accurate and robust than the original spaghetti circuit/program. Some programs are irreducible.

I think the best you can do is something like Capsule Networks (Hinton), which are many of your rotations (just smaller ones, 4-D quaternions, I think) distributed throughout the circuit.
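To make the basis-rotation idea in the post title concrete, here's a minimal toy sketch (my own illustration, not code from the post): in a linear readout, any orthogonal rotation applied to both the activations and the readout weights leaves the circuit's output unchanged, while completely changing which individual dimensions "seem to matter."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "circuit": hidden activations h, readout weights w.
# The output w @ h is what the network computes; the basis of h is arbitrary.
h = rng.normal(size=4)
w = rng.normal(size=4)

# Any orthogonal rotation R leaves the output unchanged if we rotate
# both the activations and the readout: (R w) . (R h) = w . h.
A = rng.normal(size=(4, 4))
R, _ = np.linalg.qr(A)  # QR decomposition gives a random orthogonal matrix

out_before = w @ h
out_after = (R @ w) @ (R @ h)
print(np.allclose(out_before, out_after))  # True: behavior is invariant

# ...but the per-dimension contributions change, so "which parameters
# matter" is a property of the chosen basis, not of the circuit:
print(np.abs(w * h))              # contributions in the original basis
print(np.abs((R @ w) * (R @ h)))  # contributions after rotation
```

So the freedom exploited in the post, rotating until the apparently important parameters stop mattering, is just this change-of-basis invariance.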

Comment by hobs on MikkW's Shortform · 2022-08-27T17:31:14.058Z · LW · GW

Indeed. Good sci-fi does both for me: terror at being a passenger in this train wreck, and ideas for how heroes can derail the AI commerce train or hack the system to switch tracks for the public transit passenger train. Upgrade and Recursion did that for me this summer.