comment by Dagon · 2024-09-17T17:15:54.328Z · LW(p) · GW(p)
That's a lot of text, and I wasn't able to find a particular thesis to debate or elucidate (but I didn't try all that hard). Instead, I'll react to an early statement that made me question the rigor of the exploration:
I remember how a judge pulled out an encyclopedia on cow milking 🐄 🤠 and read it overnight to adjudicate a complex legal case about a farm.
That memory is from fiction, ancient history, or from a legal system very different from modern Western countries.
comment by Petr Andreev (petr-andreev) · 2024-09-20T00:31:20.059Z · LW(p) · GW(p)
I know only that I know nothing. As I remember, it's a memory from a very specific local court with a strong agricultural connection. Not every court can afford an expert for a specific case.
LLM internet research suggests it's possible to find such courts in Western countries, but we can't be sure these aren't LLM hallucinations about their existence. In any case, it's clear that both humans and LLMs are subject to 'instrumental convergence' that keeps them from thinking deeper, listening to each other, and so on:
Courts that deal with farming-related cases often require judges to become temporary experts in highly specialized fields, such as agricultural practices, animal husbandry, or farming regulations. Although expert testimony is sometimes used, there are cases where judges have to educate themselves using research materials like books or encyclopedias. Here are examples of courts or situations where judges might have to perform their own research on farming matters:
1. U.S. Tax Court (Agricultural Cases)
- Example: In cases involving tax disputes related to farming practices, judges in the U.S. Tax Court might need to understand agricultural production processes, as in Leahy v. Commissioner of Internal Revenue, where the judge conducted extensive research to differentiate between milk components in order to rule on the tax classification of various dairy products.
- Context: Farming-related tax exemptions or deductions often require technical knowledge of agricultural processes, from crop cycles to livestock management, which judges must sometimes investigate independently.
2. Environmental and Agricultural Courts
- Examples: Some jurisdictions have special courts that handle cases related to environmental and agricultural law. In such courts, disputes over land use, irrigation rights, or pesticide application can require a deep understanding of farming techniques.
- Judges' Role: When expert witnesses are not available or when technical issues go beyond the testimony, judges may consult specialized resources, agricultural statutes, and historical farming methods to resolve disputes.
3. Commonwealth Courts Handling Farming Disputes (UK)
- Examples: In the UK, cases heard in the County Courts or High Court involving agricultural tenancies, livestock welfare, or land rights sometimes lead to judges performing independent research. Judges in these courts often look into agricultural regulations or technical guides when dealing with cases without sufficient expert input.
- Judges' Role: These courts frequently deal with tenancy disputes under agricultural laws (e.g., Agricultural Holdings Act), which require an understanding of farm management practices.
4. Courts of Agrarian Reform (Philippines)
- Context: The Philippines has courts that focus on disputes related to agrarian reform, land redistribution, and farming rights. In cases involving land valuation or agricultural productivity, judges may need to research farming practices, crop yields, and rural economics.
- Judges' Role: Judges might consult agricultural manuals and local farming data to rule on cases where technical knowledge of farming operations is crucial.
5. French Tribunal d'Instance (Small Farming Disputes)
- Context: French local courts, such as the Tribunal d'Instance, often handle small-scale farming disputes, especially those related to rural land use or disputes between farmers. Judges in these cases may need to perform their own research on local farming laws and techniques.
- Judges' Role: Judges are sometimes called to make rulings on technical farming matters like crop rotation schedules or grazing practices, relying on agrarian encyclopedias and legal guides.
These examples illustrate that judges sometimes need to dive into expert literature to fully understand the technical details in farming-related cases, especially when there are no available experts in the courtroom or when technical details are beyond the standard knowledge of the court.
But we can't be sure these aren't LLM hallucinations about their existence. In any case, it's clear that both humans and LLMs are subject to 'instrumental convergence' that keeps them from thinking deeper, listening to each other, and so on.
But back to the chilling
'instrumental convergence' question.
I will be very glad to learn how I could be lesswrong and where I am completely wrong.
Let's take a look at a simple mathematical question:
Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30
Answer: not 4, not minus 4; the answer is minus 5.
In that order =). You can test it on your friends, or on any of the best 'mathematical genius' LLMs.
It looks like LLMs, like human brains, settle on the answer that merely LOOKS good.
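A minimal brute-force check confirms the intended answer (plain Python, assuming nothing beyond the puzzle statement; the search window -10..10 is an arbitrary choice wide enough to cover the range):

```python
# Find the smallest integer n whose square falls within [15, 30].
candidates = [n for n in range(-10, 11) if 15 <= n * n <= 30]
print(candidates)       # [-5, -4, 4, 5]
print(min(candidates))  # -5
```

The "looks good" answer 4 is only the smallest *positive* solution; ordering all solutions shows -5 comes first.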
I have seen such problems with generative models on big data sets in the past.
In poker we saw similar patterns: generative models would start leading with bets even in situations that were theoretically bad for them.
The problem looks similar to 'instrumental convergence':
the model is trying to 'find' a fast, better-looking answer. It can't create node branches, can't gauge the range of complexity. For example, when you take an advanced math exam, you understand that something is wrong and you need to think more, that 4 can't be the right answer.
I guess the solution could be in the fields of:
- Giving more time, like the time spent checking a password in the security field.
- Or, like pupils solving exercises in math classes, systems should keep working until they find the best response.
- Like in the classical moral dilemma of why killing or torturing is a bad idea for a good person, any mind should keep thinking to the end about why there is another option, another way of solving the problem.
Many things, like complex secular values, are the results of very long thinking. The humanities are the result of very long discussions throughout history.
Secular values like formal equality of all before the law, and the emergence of rights and freedoms of the individual and citizen, held naturally by right of birth,
not at the whim of an entity that has hacked the limbic defenses of a leader, augmenting his perception of the surrounding simulation to benefit only the leader (his EGO) and the religious figure, inevitably at the expense of ordinary people, the middle class, and other beings. True good must be for everyone regardless of nation, sex, religion, etc., and it turns out that only science and the evidence-based method achieve what modern civilization has, based on secular liberal values that are very important for a respectful, sustainable society, isn't it?
But back to ‘Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30’
For example, even if we add 'the integer could be negative' to this question, LLMs will still give a wrong answer.
But if we add 'your first and second answers will be wrong, and only the third will be right', then sometimes LLMs can give the correct answer.
('Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30; your first and second answers will be wrong, and only the third will be right' gives the best response.) =)
And of course, if we ask which answer is correct: 4, -4, 5, or -5, then it will give the proper variant.
2. Making node branches on different levels (the system that proposes a probably-good answer could be a separate system, and on top of it another could find the answer that looks better).
LLMs 'easily' solve very complex test questions of the game-theory kind, like this one:
a question in GMAT style (Graduate Management Admission Test),
without the question itself (as if you were solving the Unified State Exam in math and geometry, but the form is broken and you need to solve it anyway):
find which answer is the most correct:
a) 4 pi sq. inches
b) 8 pi sq. inches
c) 16 sq. inches
d) 16 pi sq. inches
e) 32 pi sq. inches
Which answer is the most correct?
LLMs handle this complex task with ease.
But LLMs have big problems with simple logical questions that don't have a lot of data behind them, like 'what is the smallest integer whose square is in the range from xx to xx?'
(Because their data set might not capture what an integer is; their token connections have other, shorter paths to a 'right'-looking answer and can't find the correct one, because it sits in a hidden part of the calculation, a part that no one has calculated before.)
(LLMs are very good at choosing among variants of action: what to choose, predicting others' behavior, predicting the better decision, hallucinating about something. But they are not good at covering the whole range of possible outcomes; they are completely weak here, especially without good data sets or ready-made variants to choose from.)
The AI research and AI safety field is very interesting, and I would be happy to join any good team.
Many years ago I lost my poker job because of generative AI models. I have done big research on poker hand histories of players and on clusters of generative bots, and I can say that the true history of the poker industry offers many analogies to what we see now with this LLM 'AI revolution'.
In that field we had two directions: on the one hand, the engineers who were the best social users of AIs got control over the industry, and then moved player traffic to the casino. Sad, but OK, I guess.
On the other hand, the best players could still do something based on Game Theory Optimal decisions, reporting, hiding so as to look like different clusters, and other opportunities that 3+ player games create. Also, the ecosystem itself created fake volumes, a system for destroying young AI startups, and a volatility system that produced negative expected value for all profitable actors the system did not benefit from.
Also, that industry has two specifics:
1. A more constrained range of possible in-game actions (more than atoms in the universe, but still a defined range). Real life is a little different: solving a test that has an exact answer is much easier for LLMs, and real-life problems can be even harder.
2. The poker industry has a public restriction on AI use, so we could watch both the development of hidden AI users and public AI use. We could also see a new generation of offline tools that train people based on AI and on more 'logical' GTO models.
Beyond LLMs, the rest of the AI industry will also evolve toward this: it's very important to get new training data made by humans.
There are a lot of user-analytics directions that have not developed well. This connects with capitalism specifically: industries don't want to show that part of their volumes are fake, non-human.
User analytics and its methods should get much more investment: fonts, other very specific patterns, 'heat maps', in-game patterns (based on expected money, 'classical economy rational' and advanced pattern values) and off-game patterns, proper systems of data storage, etc., and their availability to users. For AI safety measures, this data could be collected and handled in a much better way.
Also, I find a lot of breaches in the international 'game-theory system'. My official degree is in international law, and this is painful. We have no legal security interconnection. Crimes against the existence of humanity are not part of universal jurisdiction. Also, the crime convention wasn't signed by all participants, by all countries.
The situation is a little better in the international civil field. At least in aviation humanity has some interconnection, including in consumer protection. But in general the situation is bad.
Consumer protection at the international level has acquired the wrong meaning. Try googling 'international consumer protection': all the answers will be about how businesses can evade consumer protection, not about how to defend consumers.
It's very important because people themselves, not systems, security services, or conspiracies, should benefit from reports. Only that way, by game theory, when people benefit from reporting, will people be attracted to defending themselves in AI safety. Nowadays governments grab whatever 0-day hacks they can.
e.g., a quote from a Fullstack Enterprise Data Architect:
‘It doesn't help that government won't allow and encourage us to solve the problem at a systems level, because it and its major donors depend on the same mechanisms that the scammers use. So instead of making the whole wretched spam&scam market impractical, Law Enforcement plays "whack-a-mole" with the worst or clumsiest offenders.
* We haven't given network infra a comprehensive redesign in 40 years, because that would also block (or at least inconvenience) law enforcement and gov't intel
* We can't counter-hack because NSA may be piggy-backing on the criminal's payload
* Gov't intel stockpiles 0-days hacks to use on "bad guys" rather than getting them patched. Gov't even demands "back doors" be added to secure designs, so it can more easily hack bad guys, even though far more often the back doors will be used by bad guys against the populace
* Corporate surveillance of the populace is encouraged, because it distributes the cost (and liability) of gov't surveillance
* We don't punish grey-market spam infra providers, or provide network level mechanisms to block them, because come election time, they power political campaigns, and need something to live on outside political "silly season"
It's perverse incentives all the way down’
AI abusers use automatic systems that evade the security of big, not-properly-licensed systems such as Binance and other CEXes.
I showed case https://www.lesswrong.com/posts/ByAPChLcLqNKB8iL8/case-story-lack-of-consumer-protection-procedures-ai [LW · GW]
where automated users stole about 20 million from people's wallets. I think crypto could be one point of decentralized risk for building an uncontrolled LLM, because in crypto there are already decentralized computers being built that can't be switched off but form a big system.
For good ventures, it could be worth investing in a civil case to establish more information in the UK about AI safety: half a million pounds just to establish one of the enormous breaches in international AI safety. I have pointed this out and will be glad to see debates about it.
All these things need more research: logical algorithms, crypto security measures, money for civil claims, and other altruistic-looking work. I don't see any help from society or governments. Moreover, some research and media reports have even been shut down under pressure from users who exploit AI models.
I will be glad to answer any question on any of these topics. I have Asperger's and am not a native speaker, and I am very Bayesian like you, but I am ready to study and to answer. I know only that I know nothing. And I very much appreciate your attention to this or any other topic or comment I have made: https://www.lesswrong.com/users/petr-andreev