The CCP once ran a campaign asking for criticism and then purged everyone who engaged.
I'd be super wary of participating in threads such as this one. A year ago I participated in a similar thread and got hit with the rate-limit ban.
If you talk about the very valid criticisms of LessWrong (which you can only find off LessWrong) then expect to be rate limited.
If you talk about some of the nutty things the creator of this site has said that may as well be "AI will use Avada Kedavra" then expect to be rate limited.
I find it really sad, honestly. The groupthink here is restrictive, bound up in verbose arguments that start with claims that someone hasn't read the site, or that there are subjects that are settled and must not be discussed.
Rate limiting works to push away anyone even slightly outside the narrow view.
I think the creator of this site is, quite frankly, like a bad L. Ron Hubbard, except he never succeeded with his sci-fi and so turned to being a doom prophet.
But hey, don't talk about the weird stuff he has said. Don't talk about the magic assumption that AI will suddenly be able to crack all encryption instantly.
I stopped participating because of the rate limit. I don't think a read of my comments shows that I was participating in bad faith or ignorance.
I just don't fully agree...
Forums that do this just die eventually. This place will too, because no new advances can be made so long as there exists a body of so-called knowledge you're required to agree with to even start participating.
Better conversations are happening elsewhere and have been for a while now.
There's no proof that superintelligence is even possible. The idea of the updating AI that will rewrite itself to godlike intelligence isn't supported.
There is just so much hand-wavey magical thinking going on in regard to the supposed superintelligence AI takeover.
The fact is that manufacturing networks are damn fragile. Power networks too. Some bad AI is still limited by these physical things. Oh, it's going to start making its own drones? Cool, so it's running thirty mines, and various shops, plus refining the oil and all the rest of the network required just to make a spark plug?
One tsunami in the RAM manufacturing district and that AI is crippled. Not to mention that so many pieces of information do not exist online. There are many things without patent. Many processes opaque.
We do in fact have multiple tries to get AI "right".
We need to stop giving future AI magical powers. It cannot suddenly crack all cryptography instantly. It's not mathematically possible.
This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there... which, man, if you wanted to promote groupthink and all kinds of in-group hidden rules and out-group forbidden ideas, that's how you'd do it.
You can see it at work - when a post is upvoted, is it because it's well-written/useful or because it's repeating the groupthink? When a post is downvoted, is it because it contains forbidden ideas?
When you talk about making a new faction - that is what this place is. And naming it Rationalists says something very direct to those who don't agree - they're Irrationalists.
Perhaps looking to other communities is the useful path forward. Over on Reddit there's r/science and r/AskHistorians. Both have had "scandals" of a sort that resulted in some of the most iron-fisted moderation that site has to offer. The moderators are all in alignment about what is okay and what is not. Those communities function extremely well because a culture is maintained.
LessWrong has posts where nanites will kill us all. A post where someone is afraid, apparently, of criticizing Bing Chat because it might come kill them later on.
There is moderation here, but I can't help but think of those Reddit communities and ask whether a post claiming someone is scared of criticizing Bing Chat should be here at all.
When I read posts like that, I think this isn't about rationality at all. Some of them are a kind of written cosplay, hyped-up fiction, which, when it remains up, attracts others. Then we end up with someone claiming to be an AI running on a meat substrate... when in fact they're just mentally ill.
I think those posts should have been removed entirely. Same for those Gish-gallop AI-takeover posts where it's nanites or bioweapons and whatever else.
But at the core of it, they won't be removed, and they'll remain in the future, because the bottom level of this website was never about raising the waterline of sanity. It was: AI is coming, it will kill us, and here are all the ways it will kill us.
It's a keystone, a basic building block. It cannot be removed. It's why you see so few posts here saying "hey, AI probably won't kill us and even if something gets out of hand, we'll be able to easily destroy it".
When you have fundamental keystones in a community, sure, there will be posts pointing things out, but really the options become: leave or stay.
Google lesswrong criticism and you'll find them easily enough.
I agree. When you look up criticism of LessWrong you find plenty of very clear, pointed, and largely correct criticisms.
I used time-travel as my example because I didn't want to upset people but really any in-group/out-group forum holding some wild ideas would have sufficed. This isn't at Flat Earther levels yet but it's easy to see the similarities.
There's the unspoken things you must not say otherwise you'll be pummeled, ignored or fought. Blatantly obvious vast holes are routinely ignored. A downvote mechanism works to push comments down.
Talking about these problems just invites the people inside them to try to draw you in with the same flawed arguments.
Saying "hey, take three big steps back from the picture and look again" doesn't get anywhere.
Some of the posts I've seen on here are some sort of weird doom cosplay. A person being too scared to criticize Bing Chat? Seriously? That can't be real. It reminds me of the play-along posts I've seen in antivaxxer communities, in a way.
The idea of "hey, maybe you're just totally wrong" isn't super useful to move anything but it seems obvious that fan-fiction of nanites and other super techs that exist only in stories could probably be banned and this would improve things a lot.
But beyond that, I'm not certain this place can be saved or eventually be useful. Setting up a place proclaiming it's about rationality is interesting and can be good but it also implicitly states that those who don't share your view are irrational, and wrong.
As the groupthink develops, any voice not in line is pushed out in all the ways a voice can be pushed out, and there's never a make-or-break moment where people stand up and state outright that certain topics/claims are no longer permitted (like nanites killing us all).
The OP may be a canary, making a comment, but none of the responses here produced a solution or even a path.
I'd suggest one: you can't write "nanite" until we make nanites. Let's start with that.
You have no atomic level control over that. You can't grow a cell at will or kill one or release a hormone. This is what I'm referring to. No being that exists has this level of control. We all operate far above the physical reality of our bodies.
But we suggest an AI will have atomic control. Or that code control is the same as control.
Total control would be you sitting there directing cells to grow or die or change at will.
No AI will be there modifying the circuitry it runs on down at the atomic level.
I'd suggest there may be an upper bound to intelligence because intelligence is bound by time and any AI lives in time like us. They can't gather information from the environment any faster. They cannot automatically gather all the right information. They cannot know what they do not know.
The system of information, brain propagation, cellular change runs at a certain speed for us. We cannot know if it is even possible to run faster.
One of the magical-thinking criticisms I have of AI is that it's suddenly virtually omniscient. Is that AI observing mold cultures and about to discover penicillin? Is it doing some extremely narrow gut bacteria experiment to reveal the source of some disease? No, it's not, because there are infinite experiments to run. It cannot know what it does not know. Some things require Petri dishes and long periods of time in the physical world, and a level of observation the AI may not possess.
The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn't supported by evidence. We haven't produced a sentient AI to know whether this is true or not.
For all we know, there may be an upper limit to "thinking" based on neural propagation of information. To understand and integrate a concept requires change, and that change may move slowly across the mind and underlying hardware.
Humans, for example, have sleep to help us learn and retain information.
As for self modification - we don't have atomic level control over the meat we run on. A program or model doesn't have atomic level control over its hardware. It can't move an atom at will in its underlying circuitry to speed up processing for example. This level of control does not exist in nature in any way.
We don't know so many things. For example, what if consciousness requires meat? What if it is physically impossible on anything other than meat? We just assume it's possible using metal and silicon.
No being has cellular-level control. You can't direct brain cells to grow or hormones to release, etc. This is what I mean when I say it does not exist in nature. The kind of self-modification being attributed to AI has no example anywhere.
Teleportation doesn't exist so we shouldn't make arguments where teleportation is part of it.
You have no control down on the cellular level over your body. No deliberate conscious control. No being does. This is what I mean by does not exist in nature. Like teleportation.
We do have examples of these things in nature, in degrees. Like flowers turning to the sun because they contain light-sensing cells. Thus, it exists in nature and we eventually replicate it.
Steam engines are just energy transfer and use, and that exists in nature. So does flying fast.
Something not in nature (as far as we can tell) is teleportation. Living inside a star.
I don't mean specific narrow examples in nature. I mean the broader idea.
I can see intelligence evolving over enormous time-frames, and learning exists, so I do concur we can speed up learning and replicate it... but the underlying idea of a being modifying itself? Nowhere in nature. No examples anywhere on any level.
Imagine LessWrong started with an obsessive focus on the dangers of time-travel.
Because the writers are persuasive, there are all kinds of posts, filled with references, that are indeed very persuasive regarding the idea that time-travel is ridiculously dangerous, will wipe out all human life, and must be stopped at all costs.
So we see some new quantum entanglement experiment treated with a kind of horror. People would breathlessly "update their horizon" as if this matters at all. Physicists completing certain problems or working in certain areas would be mentioned by name, and some people would try to reach out to them to convince them how dangerous time-travel, and their work, is.
Meanwhile, to someone not taken in by the very persuasive writing, vast holes are blindingly obvious. And when those vast holes are discussed... well, they're not discussed. They get nil traction, are ignored, aren't treated with any seriousness.
Examples of magical thinking (they're going to find unobtainium and that'll be it, they'll have a working time-machine within five years) are rife but rarely challenged.
I view a lot of LessWrong like this.
I'll provide two examples.
1. AI will improve itself very quickly, becoming the most intelligent being that can exist, and will then have the power to wipe humans out.
2. AI will be able to make massive technological jumps: here come nanites, bye humans.
For 1 - we don't have any examples of this in nature. We have evolution over enormous timelines, which has eventually produced intelligence in humans and varying degrees of it in other species. We don't have any strong examples of computers improving code which in turn improves code which in turn improves code. ChatGPT, for all the amazing things it can do -- okay, here's the source code for WinZip, make the compression better. I do agree "this slow thing but done faster" is possible, but the claim that self-improvement can exist at all rests on extraordinarily weak ground. Just because learning exists does not mean fundamental architecture upgrades can be made self-recursively.
For 2 - AI seems to always be given near godlike magical powers. It will be able to "hack" any computer system. Oh, so it worked out how to break all cryptography? It will be able to take over manufacturing to make things to kill people? How exactly? It'll be able to work up a virus to kill all humans and then hire some lab to make it... are we really sure about this?
I wrote about the "reality of the real world" recently. So many technologies and processes aren't written down. They're stored in meat minds, not in patents, and embodied in plant equipment and vast, intricate supply chains. Just trying to take over Taiwan chip manufacturing would be near impossible because they're so far out on the cutting edge that they jealously guard their processes.
I love sci-fi, but there are more than a few posts here that are closer to sci-fi fan fiction than to actual real problems.
The risk of humans using ChatGPT and the like to distort narratives, destroy opponents, and screw with political processes seems vastly more deadly and serious than "an AI will self-improve and kill us all".
Going back to the idea of LessWrong obsessed with time-travel - what would you think of such a place? It would have all the predictions, and persuasive posts, and people very dedicated to it... and they could all just be wrong.
For what it's worth, I strongly support the premise that anything possible in nature is possible for humans to replicate with technology. X-rays exist, we learn how to make and use them. Fusion exists, we will learn how to make fusion. Intelligence/sentience/sapience exists - we will learn how to do this. But I rarely see anyone touch on the idea of "what if we only make something as smart as us?"
You've touched on a point that many posts don't address - the realities of the real world. So many "AI is going to kill us" posts start with "AI is coming", then "?", and then "we all die".
Look at something like Taiwan chip manufacture - it's incredibly cutting edge and complicated and some of it isn't written down! There are all kinds of processes that we don't patent for various reasons. So much of our knowledge is in process rather than actually written anywhere.
And all of these processes are themselves the pinnacle of hundreds of other interlinked processes.
Not only does the malevolent AI need to take over chip manufacturing, but it needs to control the Australian lithium mine, plus the Saudi oilfields to fuel the ships that get the stuff where it needs to go, plus everything else. Hope it has rubber ring manufacture down for when an engine needs a part.
So many actual real world processes are incredibly complex involving webs of production which simply cannot be taken over by AI in any meaningful sense.
Not even with a horde of dexterous robots.
Even if an AI had the ability to make dexterous robots, millions of them, and then scatter them all over the world, it would be required to take on almost the entirety of human manufacturing -- and we're goddamn lazy and don't write so much of it down anywhere. Even with observation over time it wouldn't pick up so many things.
There are machines that operate specific processes where the knowledge group is incredibly small. Whoops, you killed them AI, sorry, no more ultra-pure materials for those chips you need.
As for nanobots, it always reads a bit like a joke that an AI will have these incredible knowledge leaps that produce magical robots to do all the things.
I think most of the time it's a handwavey attempt at addressing the real world realities of manufacturing and production.
One flood in Thailand years ago nearly wiped out the world supply of hard drives. The truth is that so many processes the AI would need are incredibly concentrated, run by a small number of people, not written down in clear enough detail for anyone to pick up easily, and contain all kinds of secret knowledge no AI can obtain.
I read a study a few years back that found some women still had iron-deficiency symptoms even as high as 60 on the ferritin test. It also pointed out that the "normal" range for iron was devised the way most things were in the past - on healthy, college-age white males.
What is problematic about the ferritin test is that it's treated as a yes/no rather than a continuum. You can get a 14 on a test where 10 is anemia and be told it's not iron deficiency.
The best advice is likely "if you have the symptoms of iron deficiency, treat it".
It's definitely one of the most prevalent health problems affecting women globally.
It really is horrific just how many women suffer needlessly from it, when a pill and a vitamin C can work wonders.
How is that a flaw?
The harms of it are well known and established. You can look them up.
It's beside the point however. Replace it with whatever cause you want - spreading democracy, ending the war on drugs, ending homelessness, making more efficient electrical devices.
The argument is that the path to the end is convoluted, not clear ahead of time. Although we can have guideposts and learn from history, the idea that today you can "optimize" on an unsolved problem can be faintly ridiculous.
James Clear has zero idea of what is good or great and the idea that you can sit there and start crossing off "good" things in favor of "great" is also highly flawed.
Hence the examples of reducing HIV infection rates and reducing the health consequences of infection. Not an OR gate but an AND gate, and the idea of opportunity cost doesn't really apply.
"Now let's factor in two additional facts:"
-- are these facts, though?
I see this a bit on here, a kind of rapid-fire "and then, and then, and this is a fact, therefore..." when perhaps the move is to slow down and stop on some of those points, to break the cognitive cage being assembled.
Take opportunity cost. We can make clear examples of it: investing in stock A means you can't invest in stock B.
But in the world, there are plenty of examples that are not OR gates but AND gates. It's not an opportunity cost to choose between providing clean needles to homeless drug addicts to reduce HIV infections OR giving money/time/effort to better HIV treatment drugs.
That situation is an AND gate, not an OR. Both are needed.
Calling it opportunity cost pits one thing against another when both may be needed, or neither, or a hundred things might be required. It restricts your thinking too.
As for James Clear - he doesn't know what is good or great, and it's arguable that he doesn't really know the answer here. That quote, which I've read before, sounds lovely but can also be trite and somewhat useless.
What is good, what is great? Can you know ahead of time?
The story of the man who has the wild horse come to his farm comes to mind. What good luck, a new horse for free! They put a saddle on it and the son immediately breaks his leg getting thrown off. What bad luck that horse is! The army marches through town the next day taking young men off to war. What good luck that horse was!
You can keep that story going endlessly.
The idea of opportunity cost in social or scientific progress is itself a flawed one because it is based on knowing what is good, great, bad, etc, and many times that cannot be known, certainly not ahead of time and sometimes not even retrospectively.
It may apply for stock A vs stock B but in reality it doesn't to most things. People are fired from jobs and devastated and then because of their response or consequence of the firing, end up somewhere else in a far better position. Ahead of time they may have pondered the opportunity cost of taking that job, how they didn't do X. Later on they may have bitterly regretted it. Much later on their wedding day with the person they met at their new job, they may think that firing was the best thing that ever happened.
Here's another example: you want to end the prison industrial complex in the US. What should you do today? What is optimal? If you frame it in terms of opportunity cost, you needlessly harm yourself, judge yourself, and ultimately do worse work because you're beating yourself up or spending a lot of time and effort trying to "optimize".
You can't know for example this path:
Some place brings in medical marijuana cards.
The regulation of them is a bit lax. Some doctors arguably commit a type of fraud by mass issuing of them but it slips on by for a while.
It becomes a bit of a social joke. It's for my glaucoma.
The "drugs are evil!" message is diluted.
Public attitudes change as a result. Comedians make jokes about it; it appears in movies and TV.
A state ends marijuana prohibition. Enormous sums of money are made for the state and business.
The public attitudes change even more over the years of all this money being made and no problems.
Pressure comes to bear as to why there are non-violent prisoners in jail for years on end for something that is completely legal now.
They are let out, records are expunged.
One of the key evils the private prison industry grew upon is systematically destroyed.
-- eventually you may see the private prison industry die and it all started from lax regulations over marijuana cards in some specific area.
Where I am in Australia, we have medical marijuana but not legal recreational use. The system is a bit lax, though. So if I want to end the pointless war on drugs, where should I optimize my time? Perhaps the best use is advocating for some state in the US to legalise, such that it spreads and eventually is exported here. Or maybe it's spreading the word on how to get a card. Or writing another letter to the Prime Minister.
In this example, does the concept of opportunity cost even exist? If you were working on spreading the word of how easy it is to get a medical marijuana card vs protesting, which was the "optimal" move?
I would very strongly suggest that the idea of opportunity cost is broken in many ways, and that the subsequent ideas built on it are meaningless in many situations.
"For example, every time you take a break, you let people die. You let many bad things happen and many problems pile up. Yet by resting adequately, you become able to tackle far bigger and harder problems. Both these things are true. Both are facts you need to grapple with if you want to become stronger and help save the world."
Such as this - I see it as an incorrect conclusion of highly flawed premises. Taking a break may result in people living because your efforts are pushing in the wrong direction. You cannot know and may never know.
As for far bigger and harder problems - I'd dispute that you can know this in many cases. You can construct arbitrary "great" and "good" for problems but you can be flat-out wrong. Great - we cure HIV. Good - we reduce it and delay it long enough that it's effectively a cure. So are we sure that's not two "greats" right there? Perhaps the only great is the long term delay for the seventy years it takes for technology to eventually provide a cure? Who are any of us to tell ahead of time?
I dispute that either of these things are true. They're not facts, and you certainly don't need to grapple with them.
Building a cognitive cage is easy to do, and it's even easier to hand one to someone else.
What if the model is that 1000 switches need to be flicked? We know some of them but not others. We can't know sometimes how many switches we will flick with our efforts. You cannot know for example that the most effective thing you ever do in your entire life is a drunken reddit rant at 1am next year because of the downstream impact it has on others.
All your optimizing and suffering and grappling... and really the game was to flick switches, or cause others to flick switches, or to talk, or to listen, or to participate in the great swell of water, much the way an individual droplet of water is part of a wave.
There are models for how progress is made - organizing, spreading ideas, debating them, ways of thinking, and so on and of course persistent effort in a direction does appear to give results mostly.
But massively flawed concepts such as opportunity cost, which come from one discipline and then get shoved into another, just turn thinking off, produce a kind of frantic worrying, and delay action.
The other side of opportunity cost is the person who endlessly researches every option possible and cannot make a first step on any of them because they become hopelessly paralyzed.
Sometimes, rather than endlessly researching all the types of flour, it's better to get out the bowl, put the flour, egg and milk in it and start mixing. Opportunity cost doesn't apply there and even later if you retrospectively try to fit it, you can still be wrong because you can't know the long-term future.
The entirety of LessWrong - every post, every comment, everything that has come from it - may end up being a single comment on one post two years from now that the GPT-6 researcher reads, and that makes them realize something. Everything else was meaningless except for that one comment.
How can you define opportunity cost in this?
Animals can suffer → duty to prevent animal suffering → stop that lion hunting that gazelle → lion suffering increases → work out how to feed lions → conclude predators and prey exist → conclude humans are just very smart predators → eating meat is OK.
I'd contend that some positions are taken very seriously, but the next perceived logical step varies from person to person. An animal activist might be pro the world becoming vegetarian. A non-animal activist is pro strong animal-welfare laws to prevent needless suffering.
Trying to resolve "humans are just smart predators so we can eat prey animals" vs "humans have moved beyond the need to behave as predator animals" is unlikely to be resolved by suggesting any parties don't take their ideas seriously.
Otherwise you end up in all sorts of quagmires. "Food security is a big problem." "OK, go start a farm then." "No, I'm going to write letters to politicians." "You're not taking the idea seriously."
Where can that go?
I mean, this forum talks often about existential AI risk. The climate catastrophe is known and real and here now. So... are people not taking it seriously because they're highly concerned about AI?
It does open up the possibility of other people writing any comic that has existed. More Snoopy. More Calvin & Hobbes.
1st panel: Jon cooking lasagna, Garfield watching.
2nd panel: Garfield tangling in Jon's legs, the lasagna going flying.
3rd panel: Garfield eating the lasagna from the floor, happy.
No words, copy style, short comic.
Wow, this is going to explode picture books and book covers.
Hiring an illustrator for a picture book costs a lot, as it should given it's bespoke art.
Now publishers will have an editor type in page descriptions, curate the best, and off they go. I can easily imagine a model improvement to remember the boy it drew, or the steampunk bear, etc.
Book cover designers are in trouble too. "A wizard with lightning in his hands while a mountain explodes behind him" - a prompt like this can generate multiple options.
It's going to get really wild when A/B split testing is involved. As you mention regarding ads you'd give the system the power to make whatever images it wanted and then split test. Letting it write headlines would work too.
Perhaps a full animated movie down the line. There are already programs that fill in gaps for animation poses. Boy running across field chased by robot penguins - animated, eight seconds. And so on. At that point it's like Pixar in a box. We'll see an explosion of directors who work alone, typing descriptions, testing camera angles, altering scenes on the fly. Do that again but more violent. Do that again but with more blood splatter.
Animation in the style of Family Guy seems a natural first step there. Solid colours, less variation, not messing with light rippling etc.
There's a service authors use of illustrated chapter breaks, a black and white dragon snoozing, roses around knives, that sort of thing. No need to hire an illustrator now.
Conversion of all fiction novels to graphic novel format. At first it'll be laborious, typing in scene descriptions, but graphic novel art is really expensive right now. I can see a publisher hiring a freelancer to produce fifty graphic novels from existing titles.
With a bit of memory - so once I choose the image of each character I want, it's kept - this is an amazing game changer for publishing.
Storyboarding requires no drawing skill now. "Couple sprinting down dark alley chased by robots."
Game companies can use it to rapid prototype looks and styles. They can do all that background art by typing descriptions and saving the best.
We're going to end up with famous illustrators who can't draw but have created amazing styles using this and then made books.
Thanks so much for this post. This is wild astonishing stuff. As an author who is about to throw large sums of money at cover design, it's incredible to think a commercial version of this could do it for a fraction of the price.
edit: just going to add some more
App design that requires art. For example, many multiple-choice story apps are costly to make due to art costs.
Split-tested cover designs for pretty much anything - books, music albums, posters. Generate, run the ad campaign, test the clicks. An ad business will be able to throw up 1,000 completely different variations in a day.
All catalogs/brochures that currently use stock art. While choosing stock art to make things works it also sucks and is annoying with the limited range. I'm imagining a stock art company could radically expand their selection to keep people buying from them. All those searches that people have typed in are now prompts.
Illustrating wikipedia. Many articles need images to demonstrate a point and rely on contributors making them. This could open up improvements in the volume of images and quality.
Graphic novels/comic books - writers who don't need artists essentially. To start it will be describing single panels and manually adding speech text but that's still faster and cheaper than hiring an artist. For publishers - why pick and choose what becomes a graphic novel when you can just make every title into a graphic novel.
Youtube/video interstitial art. No more stock photos.
Licensed characters (think Paw Patrol, Disney, Dreamworks) - creation of endless poses, scenes. No more waiting for Dreamworks to produce 64 pieces of black and white line art when it may be able to take the movie frames and create images from that.
Adaptations - the 24-page storybook of Finding Nemo. The 24-page storybook of Pinocchio. The picture book of Fast and The Furious.
Looking further ahead we might even see a drop-down option of existing comics, graphic novels but in a different art style. Reading the same Spiderman story but illustrated by someone else.
Character design - for games, licensing, children's animation. This radically expands the volume of characters that can be designed, selected and then chosen for future scenes.
With some sort of "keep this style", "save that character" method, it really would be possible to generate a 24-page picture book in an incredibly short amount of time.
Quite frankly, knowing how it works, I'd write a picture book of a kid going through different art styles in their adventure. Chasing their puppy through the art museum and the dog runs into a painting. First Van Gogh, then Da Vinci and so on. The kid changes appearance due to the model but that works for the story.
As a commercial product, this system would be incredible. I expect we'll see an explosion in the number of picture books, graphic novels, posters, art designs, Etsy prints, downloadable files and so on. Publishers with huge backlists would be a prime customer.
Allow it to display info on a screen. Set up a simple Polaroid camera that takes a photo every X seconds.
Ask the question, take physical photos of the screen remotely.
View the photos.
Large transmission of information in analog format.
Sell 180 visas per day over 6 hours, between 9am and 3pm, for 361 days of the year: a new auction of one visa every two minutes. On the final day of the year, sell the remainder of the visas, then take four days off until the new year.
Start Jan 1 and say Google bids $10,000 x 10 visas. They win the first ten auctions over the first 20 minutes. The reference price is set at $10,000 for auction #11.
But $10,000 is too high for the next bidder who wants to pay $9000. No sale on auction #11.
Auction #12 starts with 2 visas for sale. You decrease the reference price by 2/180.
So the new reference price is $9,888 and there are two visas for sale. If there's no sale, we move to a 3/180 decrease for the next round.
Down to $9,724 and three visas for sale. Keep repeating this decrease until the price reaches the next-highest bid.
As prices decrease with each unsold round, buyers looking for, say, six visas for the year would see prices perhaps higher than they'd like, but they can buy all six visas in one lot.
If a sale occurs of any number then the next round doesn't decrease in price.
Bidders can bid for any number of visas at any price at any time of the year. There are always 180 new visas up for sale every day. If some big company wants 1000 visas and is willing to pay then the first 1000 auctions go to them at their price point.
By only decreasing the price by the percentage of unsold, it stops large price drops. You can set a minimum figure it never goes below.
Big businesses might want 200, 500, 1000 visas and be willing to play the auction to see if they can get a good price. So if they log in on day 50 of the year and see 140 visas for sale at $5000 each, they might buy all of them just to guarantee they at least get 140.
Then two minutes later there is 1 visa for sale at $5000 and the bidding keeps going.
Carry over any unsold visas and add them to the total. If 30 carry over, then the next day has 210 visas for sale. The first auction of the day is 30 unsold + 1 new. If there's no sale, the new price cut is 31/210 x the reference price.
If the first 10 visas went for $10K each and then there was no sale until auction number twenty, you'd see the price drop to $7363 at ten visas for sale.
If a single business just wants one, they have an opportunity to buy at $7363. Or they can toss their bid in at $5000 and let it simmer to see if the reference price ever drops to $5000.
~ I have zero idea what these visas go for, by the way, so plug in whatever figure is closer to reality. If the gov wants a $20K minimum, then that's the minimum price. Jan 1st, 9am, the auction opens; you'd see the price rise to $32K and sold, then $33K and sold, then no sale, a drop of 2/180 x the reference price, and so on.
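To make the mechanism concrete, here's a minimal sketch of it in Python. Everything in it is my reading of the rules above: standing limit bids only (it ignores the live ascending bidding in the $32K example), sales clear at the current reference price, and the cut fraction uses the pool size of the new round, which is the reading that reproduces the $9,888 and $9,724 figures. The Bid class, run_day function, and all parameter names are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    max_price: float  # most this bidder will pay per visa
    quantity: int     # visas this bidder still wants

def run_day(bids, reference_price, daily_supply=180, carryover=0, min_price=0.0):
    """One day of two-minute auctions: one new visa per round, unsold
    visas pool up, and the price only falls after a round with no sale."""
    total_supply = daily_supply + carryover
    pool = carryover              # unsold visas currently on the block
    sales = []                    # (round, price, quantity)
    failed_last_round = False

    for rnd in range(1, daily_supply + 1):
        pool += 1                 # one new visa enters every round
        if failed_last_round:
            # Cut the price by (visas on the block / total daily supply).
            cut = reference_price * pool / total_supply
            reference_price = max(min_price, reference_price - cut)
        # Highest standing bid that meets the current reference price.
        live = [b for b in bids if b.quantity > 0 and b.max_price >= reference_price]
        if live:
            best = max(live, key=lambda b: b.max_price)
            qty = min(best.quantity, pool)  # a buyer may take the whole pool
            sales.append((rnd, reference_price, qty))
            best.quantity -= qty
            pool -= qty
            failed_last_round = False       # a sale of any size: no cut next round
        else:
            failed_last_round = True

    return sales, pool, reference_price

# The worked example above: Google bids $10,000 x 10, the next bidder
# wants 6 visas at $9,000, and the government floor is $5,000.
bids = [Bid(10_000, 10), Bid(9_000, 6)]
sales, unsold, closing = run_day(bids, reference_price=10_000, min_price=5_000)
for rnd, price, qty in sales:
    print(f"auction #{rnd:3d}: {qty} visa(s) at ${price:,.0f}")
print(f"unsold carryover: {unsold}, closing reference price: ${closing:,.0f}")
```

Run as-is, this reproduces the numbers above: ten single-visa sales at $10,000, no sale at auction #11, then the 2/180 and 3/180 cuts (the ~$9,888 and ~$9,724 prices, give or take rounding) until the price falls under the $9,000 bidder at auction #16, who takes all six pooled visas in one lot. Everything after that decays to the $5,000 floor and carries over to the next day.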