Your algorithm still 'suffers' from the fact that it only evaluates the direct negative consequences of a match being made/removed, even though the negative consequences could manifest themselves some steps later.
...but you can't think of an example of how it happens, even when you're not restricted to realistic or environment-related distributions. Sounds like your comment boils down to "but like... something unforeseen might occur at some point" which is unavoidably true on merit of the definition of "unforeseen".
Assume 4 coaches A, B, C, D, each in TV range of its direct neighbours (in that list), but with A and D being out of TV reach of each other. If your algorithm removes the 'bridge' pair (B,C) because B was the starting coach in your ordered list and C was its best match, and B would obviously not be 'orphaned' otherwise (it still has possibility A), then you would orphan A and D unnecessarily.
Again, you will argue that this situation is very contrived and isn't found in usual distributions.
No, I'll argue that you didn't understand the method, because you're incorrect about how it works. Here's your scenario in usable format:
The results are:
[CoachD] TeamD (1900/1900) vs [CoachC] TeamC (1600/1600)
[CoachA] TeamA (1000/1000) vs [CoachB] TeamB (1300/1300)
Because the mean value of the ratings is 1450, which puts 1900 and 1000 at the top of the list to be matched. It works from the outside in. The relevant scenario is going to be
Where it works from the outside (1900 is farthest from the mean) in until there are 3 or fewer coaches remaining, so all matches are treated as the only match the coach can get... so unless there is a priority flag, it gives the match to the closest rating match left in the pool. Thus, the matches made are:
[CoachE] TeamE (1900/1900) vs [CoachD] TeamD (1600/1600)
[CoachB] TeamB (1300/1300) vs [CoachC] TeamC (1450/1450)
Even though TeamA+TeamB and TeamC+TeamD would have been the matches that minimize the mean distance for the group.
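The outside-in ordering in the first four-coach example can be sketched as follows. This is a hypothetical reconstruction, not the actual BB2Match2 code; the names are illustrative, and ties in distance from the mean are broken toward the higher rating here to match the output shown above.

```python
# Hypothetical reconstruction of the outside-in ordering for the
# four-coach scenario (CoachA 1000, CoachB 1300, CoachC 1600, CoachD 1900).
ratings = {"CoachA": 1000, "CoachB": 1300, "CoachC": 1600, "CoachD": 1900}
mean = sum(ratings.values()) / len(ratings)  # 1450

# Farthest from the mean goes first; ties broken toward the higher rating.
order = sorted(ratings, key=lambda c: (abs(ratings[c] - mean), ratings[c]),
               reverse=True)

pool, matches = dict(ratings), []
for coach in order:
    if coach not in pool:
        continue                      # already matched earlier in the loop
    del pool[coach]
    if not pool:
        break
    # Best remaining match = closest rating still left in the pool.
    partner = min(pool, key=lambda c: abs(pool[c] - ratings[coach]))
    del pool[partner]
    matches.append((coach, partner))

print(matches)  # -> [('CoachD', 'CoachC'), ('CoachA', 'CoachB')]
```

With 1900 and 1000 equally far from the 1450 mean, the tie-break determines which edge picks first, but either way the resulting pairs are the same ones listed above.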
Removing two coaches from the list could affect the desirability order for the ones connected to the removed ones, so they might need to be reordered, but in general, removal only decreases the desirability of the remaining ones.
There's no way that removing a coach can alter the preference order for a given team beyond deleting that coach's teams from the preference list. Teams aren't changing ratings mid-matchmaking. The distance between two teams will not change.
The key to - maybe - solving that problem in your algorithm is, in my opinion, to not use TVPlus differences as your ordering criterion, but to use a criterion that relates to the constraining parts of the algorithm, i.e. TV ranges and priorities.
I think the key is to demonstrate the problem exists rather than allude to its possible existence, if only because unless you have a problem to test a solution against, you don't really know if it's a solution.
The "perfect" solution to matchmaking is the n! solution, where every possible pairing of coaches is tried, and for each of those, the best match between the two coaches' teams is found... and the rating distance for each is added together, as are the number of legitimate matches made in that attempt... and the pairing setup that has the maximum matches and minimum summed difference is the one you use. That's wonderful when we're sitting around with our thumbs up our asses, but it's an impractical solution in terms of processing. That's why we don't just sit around and theorize, we actually do some work and implement methods and try them.
If we had, say, 100 coaches, each with 4 teams queued:
OINK does 39,600 comparisons or less.
"perfect" solution does ~ 3.73 * 10 ^ 158 comparisons. That's a 159-digit number.
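For pool sizes small enough to be tractable, the exhaustive approach can be sketched as below. This is a hypothetical illustration with made-up ratings; the 500 TV-difference cap and the match-count maximization are left out for brevity.

```python
from itertools import product

# Hypothetical tiny pool: coach -> ratings of the teams they queued.
coaches = {"A": [1000], "B": [1300], "C": [1600], "D": [1900]}

def pairings(names):
    """Enumerate every way to split the coaches into pairs."""
    if not names:
        yield []
        return
    first, rest = names[0], names[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def best_team_diff(a, b):
    """Smallest rating gap among all team combinations of two coaches."""
    return min(abs(x - y) for x, y in product(coaches[a], coaches[b]))

# Try every pairing setup and keep the one with the minimum summed difference.
best = min(pairings(list(coaches)),
           key=lambda p: sum(best_team_diff(a, b) for a, b in p))
print(best)  # -> [('A', 'B'), ('C', 'D')]
```

The number of pairing setups grows as a double factorial of the pool size, which is why this only works as a toy.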
Though, as I say, if you can come up with a method that accomplishes all the same things but creates better generalized matches... and does it using processing that falls inside the realm of reality... I'm interested to see it. Make sure you implement your ideas rather than just imagine them. Lots of things work great when they only exist in your imagination.
Outside-In Number Kruncher (OINK)
Create a pairing system that creates decently-optimized pairings within the context of BB2's matchmaking needs. Priorities include maximizing number of coaches that get matches during a pool, allowing for a maximum metric distance that is separate from the matching metric (in our case, TV differences no greater than 500), finding the best match for a coach from among multiple possible teams, and finding matches across multiple leagues even for coaches who queue teams from multiple leagues in one pool... finally, we want to be able to "prioritize" a coach, ensuring (or as close as is possible) that that coach gets a match during a given pooling and is not the odd man out.
The stable roommates problem is a famous pairing problem whose algorithms find the optimum pairings from a pool of potential pairings. The problem is that the standard algorithms for solving it simply fail if they cannot find a match for one or more participants... and even the modified versions that allow for an odd man out will happily exclude people if they are nobody's first choice.
BB2 matchmaking requires we allow for people that are nobody's favourite match as well as all the various problems listed in the Goal section -- it also requires that we make sub-optimal matches if doing so increases the total number of matches created... and that we have a way of prioritizing participants, and allowing multiple teams from a single participant. It is, in short, much more complex!
The method for accomplishing this is much more complicated than the previously laid out WorstBest System which I've since renamed Largest Least Difference or LLD. Given that matchmaking need only be done once every pooling period, the complexity is unlikely to be a major issue even for large numbers of teams, and finding smallest distances does require all teams be compared to all other teams.
First, we run through our list of queued teams, which we'll do in the same format as before. From this we build a list of participating coaches, and for each coach, a list of all valid, possible matches for their teams. We then order that list of possible matches by rating proximity and calculate the mean rating difference... and then we order the list of coaches by largest distance from the mean rating for their first choice and total possible matches.
At this point we begin our matchmaking loop. We take the first coach on our list, which is the coach with highest deviation from the mean on their best match and the least possible matches that can be made with the teams they queued with. We take the first match on their list of possible matches, which is the absolute best rating match for that coach... and we run our orphan check.
Our orphan check tests to see if removing the two coaches involved in that possible match will result in any other coach having no possible matches left. If not, then we make that match and remove both those coaches and all their teams from our lists, and start the next iteration of our loop.
If the possible match WILL orphan a coach, then comes the more complicated logic. We make the pairing if:
- There are only 3 coaches left in the pool, so we treat this as the only possible match; or
- This coach will be orphaned otherwise, and this coach has priority while the other does not; or
- This coach will be orphaned otherwise, both coaches have equal priority, and this rating difference is lower.
If we do make the match we again remove both coaches and all their teams and start the next iteration of the loop. If we do NOT make the match, we remove that possible match from the coach's possible matches, and check the next best one.
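One plausible reading of the loop above, sketched in Python. This is an illustration under simplifying assumptions: one team per coach, a single league, and priority flags omitted; the `options` and `would_orphan` names are mine, not from BB2Match2.

```python
MAX_DIFF = 500  # secondary exclusion metric: larger TV gaps are invalid

# Hypothetical five-coach pool (coach -> team rating).
pool = {"A": 1000, "B": 1300, "C": 1450, "D": 1600, "E": 1900}

def options(pool, coach):
    """Valid opponents for `coach`, closest rating first."""
    return sorted((c for c in pool
                   if c != coach and abs(pool[c] - pool[coach]) <= MAX_DIFF),
                  key=lambda c: abs(pool[c] - pool[coach]))

def would_orphan(pool, a, b):
    """True if matching a with b leaves some remaining coach optionless."""
    rest = {c: r for c, r in pool.items() if c not in (a, b)}
    return any(not options(rest, c) for c in rest)

matches = []
while len(pool) > 3:
    m = sum(pool.values()) / len(pool)
    # Farthest-from-mean coach goes first (ties toward the higher rating).
    coach = max(pool, key=lambda c: (abs(pool[c] - m), pool[c]))
    opts = options(pool, coach)
    if not opts:            # nobody in range: this coach sits the pool out
        del pool[coach]
        continue
    # Best option that orphans no one; if every option orphans somebody,
    # fall back to the best match (this coach is orphaned otherwise).
    partner = next((p for p in opts if not would_orphan(pool, coach, p)),
                   opts[0])
    matches.append((coach, partner))
    del pool[coach], pool[partner]

# Endgame: with three or fewer coaches left, make the closest valid pair.
pairs = [(a, b) for a in pool for b in pool
         if a < b and abs(pool[a] - pool[b]) <= MAX_DIFF]
if pairs:
    matches.append(min(pairs, key=lambda p: abs(pool[p[0]] - pool[p[1]])))

print(matches)  # -> [('E', 'D'), ('B', 'C')] with CoachA as the odd man out
```

On this pool the sketch reproduces the earlier five-coach result: the 1900 team is matched first, and with three coaches left the closest pair (B and C, 150 apart) is made, leaving CoachA as the orphan.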
With this system we do try to find the shortest distances for everybody among the possible matches, but we prioritize matching the coaches with the largest minimum distance from the mean and fewest possible matches. Like the stable roommates algorithm, we're looking at each participant's preferences, but we're not optimizing the combined preferences, we're just taking the first preferences from the farthest deviating coach, and making sure it doesn't orphan anybody in the process.... then repeating that process until we run out of valid matches.
Since we use lists of "all possible matches for a coach's team" we're automatically covering the different leagues aspect as well as the secondary exclusion aspect, as cross-league matches don't get added to the list... nor do matches with > 500 TV difference. The key to making it work is the orphan check step, which lets us make sub-optimal matches if doing so will maximize the number of pairings, and it allows us to apply the prioritization flag.
While we do go through all possible matches for each team (n(n-1)), we don't try all combinations of coaches and the best matches for each (aka "brute force"), which would be an n! set of calculations. We're using the best match left for each coach at the time they're up to be matched, while still giving preference to teams at the outside edges of the distribution, using the distribution of the pool rather than the expected distribution of the population.
Like the LLD system, this system tries to push outliers toward the middle. Unlike the LLD system, this doesn't use maximum rating differences as the priority, but rather maximum rating distance from the mean rating difference in the pool. This means that the outlier will still be given a match even if he is the farthest from the majority of people in the pool, leaving one of the people who was in the tighter grouping to be left out if the pool is such that a coach cannot get a match (odd number, excess secondary exclusion metric distances, etc).
This is a design choice based on the fact that outliers in a pool will typically be outliers in the environment, and an outlier in one pool would probably be an outlier in the next pool too. Also, typically, the outlier will be a coach fielding teams that have been played more than the usual team, which is something we don't want to punish with generally increased pooling times.
As always, not everyone will agree with that decision. An example of it occurring is included below.
We'll use the same examples as before, and add in some of the "theoretical" examples that use inverted distributions. First, the OP situation:
CoachA,Da Stompin Tribez,1400,6
CoachF,Big Foot Little Foot,1000,0
[CoachA] Da Stompin Tribez (1400/1700) vs [CoachC] Hungry Sun (1160/1210)
[CoachB] Da Blacktoofz (960/960) vs [CoachE] Dethbridge Destroyers (980/980)
[CoachF] Big Foot Little Foot (1000/1000) vs [CoachD] Blessed Mongrels (1070/1120)
Mean TVP Diff: 210 Mean TV Diff : 110
Now, we'll do the same set, but remove one coach, ensuring at least one odd-man-out. This is an example where the outlier priority can be seen, making the orphaned coach one closer to the center:
CoachA,Da Stompin Tribez,1400,6
CoachF,Big Foot Little Foot,1000,0
[CoachA] Da Stompin Tribez (1400/1700) vs [CoachC] Hungry Sun (1160/1210)
[CoachB] Da Blacktoofz (960/960) vs [CoachF] Big Foot Little Foot (1000/1000)
Mean TVP Diff: 265 Mean TV Diff : 140
Without the outlier prioritization, the Blessed Mongrels team would be expected to be paired with Hungry Sun, leaving Da Stompin Tribez as the orphan.
Next, one of the theoretical examples that made sub-optimal pairings under LLD:
[CoachA] TeamA (900/900) vs [CoachB] TeamB (1000/1000)
[CoachH] TeamH (2100/2100) vs [CoachG] TeamG (1950/1950)
[CoachF] TeamF (1900/1900) vs [CoachE] TeamE (1550/1550)
[CoachC] TeamC (1200/1200) vs [CoachD] TeamD (1450/1450)
Mean TVP Diff: 212.5 Mean TV Diff : 212.5
Finally, our multi-league "theoretical" example:
[CoachB] TeamB2 (1000/1000) vs [CoachC] TeamC2 (1100/1100)
[CoachA] TeamA2 (1600/1600) vs [CoachD] TeamD2 (1300/1300)
Mean TVP Diff: 200 Mean TV Diff : 200
This system can be tested using BB2Match2.exe, which is an updated AIR program that implements the matching algorithm and takes in team/coach data in the above formats (placed in top box, results in bottom box). In spite of the n(n-1) initial team comparisons, the execution time is trivial.
As with LLD the multi-league matchmaking requires all leagues use the same pool and pooling timer, which they presently do not do (but should).
Your denseness continues to amaze me. Why does anybody need protection? Ever? Because they're at a disadvantage they can't otherwise overcome by themselves. Thus, if there is no evidence of such a disadvantage, the conclusion is that they don't need protection; there is nothing to protect them from.
You're conflating "new teams" with "low TV teams". While new teams are more likely to be at lower TVs, the question of whether or not they need protection is not related to their TV, it is related to the number of games played.
The data shows no difference in the overall challenge faced by teams with low games played versus teams with higher games played while controlling for things like TV differences. For the idea of new team protection to be rational, we'd have to see new teams suffering a disadvantage related to their newness.
What we're talking about is not new teams, it's low TV teams versus high TV teams, and the effect control of maximum TV differences would have on them... which is bad news for you trying to imply that I'm the dumb one in this conversation.
I still fail to see any actual advantage manifesting here.
Then you may simply lack the required intelligence to understand the topic. As I say, though, we're lucky the people in a position to make or promote change do not.
So, cards on the table. How much is the win-rate positively influenced by a TV restriction of 300?
It depends on the team in question's TV, and the current distribution of active teams in the environment being played in - a fact that you should have understood from my picture-heavy explanation. Did you get distracted and start colouring them instead of bothering to read?
Of course, there is. The question is just how efficient a complete solution is and if its application is feasible in real-life situations.
Haha, that's basically a confession that you can't find a usable solution, only ones that are untenable like exhaustive brute force comparisons. Nobody cares about unimplementable theoretical solutions... this isn't a game being played in theory, it's a game being played by real people who face real problems that need real solutions.
The stable roommate problem, in my opinion, isn't even the best way to tackle this problem because it doesn't help in those situations where there is no maximal matching or where no stable matching exists. In those cases where the algorithm says 'no stable matching found', you would have to fall back on some other algorithm to find the best or at least any solution regarding your optimization criteria.
So that means you have the alternate solution, then, right? You have a neverending supply of opinions on what's wrong with everything else. Seems high time you ponied up something constructive for a change.
Those (except for the multi-league-thing, which I find a curious, but not really desirable idea, personally) were the given constraints that the optimal solution has to satisfy. Of course, this is what makes the algorithm complicated.
Heh, this is a sour-grapes leadup to you confessing you couldn't make a solution that handled it. You already said you had difficulty with multiple team queuing. My guess is the best you can come up with is middle-school brute force testing. That's probably why we'll never see a solution from you.
But, if your algorithm already fails in cases where most of these constraints are not even there, it is clear that it will fail in far more cases when they are.
You need to work on your definition of "fail", champ. It doesn't explode, it just doesn't make the matches you think it should, much like the current system doesn't. We, the people who actually DO things, toss around ideas all the time, throw ideas away, start over, try new things, test them, trade data and let other people test them, and so on.
You, on the other hand, have some weird princess-and-the-pea complex where you want people to bring you things on a pillow for you to turn your nose up at, over and over. Get off your useless ass and find solutions instead of moping around whining about problems.
Yes, that statistical argumentation again. You would be right in that regard if pools were large and thus each pool instance were a good sample. However, that isn't the case. Pools are small and the teams in any specific instance can be all over the place, depending on how long the season has been running (first clustering around the lower end when most teams are fresh, later clustering around the middle or even the higher end as no fresh teams join the fray anymore).
Statistics based on the data - that is quite literally a record of how things actually are. Clearly you don't take issue with that since your entire complaint was based on looking at small snippets of data. Go plug teams from actual pools into the system and see what happens. That is, as you said above, seeing if its application is feasible in real-life situations.
Or, y'know, think up a way to solve the problem you believe is important either by starting from the ground up, or suggesting changes to what is in place or has been suggested by someone else. I keep suggesting you do something constructive and you keep cleaving to your negation, negation, negation.
As for pools approximating the distribution... in general they will. They're pulled FROM the distribution, which means they'll model it more often than model its inversion.
We don't know how many people were excluded, how many and what other teams they were spinning with and what their distribution was, so your assumptions of them ALWAYS following the general distribution of teams is pure conjecture.
They won't ALWAYS follow the same distribution, but they absolutely will more often than they do not based solely on the fact that they are pulled FROM that distribution. Every pool can be treated as a sampling of data from the population that is the environment... a sample of presently active teams that gives us a look at what the totality of active teams looks like. Those samples will, in turn, create a distribution which collectively creates a picture of that environment... we know that to be true.
All of your theoretical examples spread the exclusion variable wide and apart and put multiple entities at identical positions on both metrics... yet none of your real examples show that happening, ever. This is why I say the theoretical situations are interesting but not critical. In the situations that actually do happen, we don't see the problem popping up... and given how rarely (if ever) we can find examples of where it might have manifested in actual play conditions, the fallout of its manifestation is one pooling period with exclusions that might not have been necessary.
I get that some situations can't be avoided, but those that can be avoided with a reasonable effort, should be.
So that means you're ready to put in a reasonable effort? Let's see that algorithm that does all the things required AND avoids the situations that are apparently reasonably avoidable.
I've built multiple systems, taking different approaches to make everything work... so far all of them work but run into certain situations where the output is not as good as hand-crafted pairing, which is ultimately what a "perfect" solution would look like. Some sort of hybrid, patchwork mutant system could probably special case all sorts of things, but wouldn't be something that could generalize to all cases and environments. Brute force testing would work, but would spiral out of control with larger pool sizes, especially when the pools are combined to allow for multi-league multi-queuing.
While there's something to look at, it should be looked at and not be dismissed, as you seem to like to do.
I dismiss YOU and YOUR complaints, Ugh, not actual problems. You don't lift a finger to solve problems, you just bitch about them and criticize other people and the work they've done... while doing absolutely fuck-all yourself. The shortcomings of the current pairing system aren't unknown and weren't unknown prior to you noticing them, and it sure as hell isn't your threads that will make them take notice as compared to the behind the scenes discussions and pressures.
You want change? Find a solution to a problem, and convince people it's the right one. Or just whine about how you don't like things and be ignored the way the other 16,000 whines are. Your call.
Players only begin to retire once they reach 120 matches played... so I think you still have quite a way to go
I think 120 is the maximum age - the point at which ALL players will have retired - not the beginning age... at least if the BB2 rulebook (which hasn't been updated since dinosaurs roamed the earth) is to be believed. The first ageing roll was done at 104 games, and by 120 you had a 100% chance of retiring.
Spoken like a true statistician who shouldn't be let even near any programming language, I guess.
Yeah, very few things mix worse than math, data, and computers!
What you're describing is just a heuristic algorithm which is neither complete nor generally effective and which is only efficient because it doesn't solve the problem in general.
Oh it's absolutely not a mathematically complete solution, but no such thing exists for the situation we're discussing. There's a reason the stable roommate problem is such a big deal in algorithmic design, and the matchmaking in BB2 is a far more intricate thing than the scenario in that problem. Regardless of what method gets used, there will be areas where efficiency is traded for satisfaction of certain problem criteria.
There are two main problems to be tackled by the algorithm which are in conflict: finding a maximal matching (to reduce spin-time) and minimizing the overall TVPlus-differences (I guess mean and variance could be used as metric) among those matches found.
No, those are far from the main issues. If those were all we cared about then the standard efficient stable roommate algorithm would be all we need. The main issues with BB's pairing system is that it needs to accommodate four additional points:
- A secondary exclusion metric (TV distance based)
- Multiple potential-yet-exclusive grouping categories (different leagues)
- Multiple entries for a single entity, possibly in different groups (multi-queuing)
- Prioritization of matching for certain entities (for coaches who were excluded)
That's where things get complicated. You don't actually bother trying to solve these problems, so you gloss over them in your mind and say "well, there's a complete answer out there somewhere and yours just isn't it". I suggest you give it a try.
I'm sure you are. A child is also happy when it makes some small progress towards a far away goal. So, good for you! Now go play with your statistics tools and leave the algorithm design to the people who know how that works.
Really? I've yet to see anyone else design anything, unless we're counting Cyanide's current system.
The "issue" as you call it, with the system I proposed is one of multi-modality. The system was designed around promoting inclusion of outliers by pushing the outside edges toward the middle of the general distribution of teams as such:
All of your examples of where it will exclude people are based on a wide, inverted distribution of teams in the pool. This is not an unexpected side-effect: when there is tight, modal clustering on outside edges of the normal distribution, with a breadcrumb or two between, the "pushing inward" effect will lead to potential orphaning of teams in those modal clusters:
Even this is only a problem if those clusters manifest at distances on the exclusionary secondary metric that put them outside of one another, while having a low number of mid-range bread crumbs to lure members of the modal cluster away.
The reason I'm not concerned is that the pooling distribution needed to cause such a situation is not only dissimilar to the general distribution of teams always, it's literally the inversion of it... which makes the probability of it occurring very low... and even if it does, it just means it'll fail to make matches for two people who will find them next go 'round.
Of course, you'd rather I hadn't done it, but I'm sorry, Dave, I can't do that for you.
Not at all - I want people to find problems. As usual you misread what is stated - I'm not opposed to finding those issues, I just find your declaration that ANY issue negates a system's utility to be silly. It fits in well with your initial posting - that you can find problems doesn't mean the sky is falling, it means it's something to take a look at.
If your algorithm can't even reliably find an acceptable maximal matching for the even-team single-spin case, I can only imagine what it will come up with for multi-team spinning or even that weird multi-league spinning which complicates matters even further.
It works just peachy with multi queuing and multi leagues. All of your theoreticals are based on TV matching and placing the teams in a pool in what essentially constitutes the inverse of the metric distributions that we see in the MM environments, with the lowest number of YOUR teams in the range where the most teams actually are, and the highest number of your example teams being in the ranges at the thin edges of the distributions.
The general distribution for CCL is around 1200 with an SD of 200. That means about 68% of all teams fall between 1000 and 1400, and about 95% of them fall between 800 and 1600. This is why I wanted it to make matches for the teams that fall farthest outside the group first, as those are statistically most likely to be the outliers.
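As a quick check, those proportions follow from the normal CDF (a sketch assuming the stated mean of 1200 and SD of 200):

```python
from math import erf, sqrt

def cdf(x, mu=1200, sigma=200):
    """Cumulative proportion of teams at or below rating x, assuming
    the CCL ratings are normally distributed with mean 1200 and SD 200."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

print(round(cdf(1400) - cdf(1000), 3))  # ~0.683 of teams within one SD
print(round(cdf(1600) - cdf(800), 3))   # ~0.954 of teams within two SDs
```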
When there are tight clusters at extreme (meaning outside acceptable distances) ends and fewer teams between those clusters, then yes, it tends to orphan teams at the edges, because it creates matches based on a normal distribution, which matches how teams are generally distributed, rather than a bi-modal (inverse-normal) distribution, which is what all of your examples use.
I find the inverse-normal pools orphaning issue interesting, but not critical since the pools showing a wide, bi-modal distribution is highly unlikely based on what we've seen in the data, especially since contrary to your theoretical examples we're not in a strictly TV matched environment - we matchmake on a separate metric from the exclusion metric, and the number of teams with ZERO zSum after a few games (which it takes to get to the upper end of the TV ranges) are almost non-existent.
This is a prime example of why you shouldn't take 5 minutes to design an algorithm.
And yet it is, to date, the only system that covers all the required aspects for BB's matchmaking, and does so well when the pools reflect the distribution of teams in the environment which they generally do. Additionally, it's easy to implement and easy to adjust, with new exclusion criteria being a single line of code.
As always, your interest seems entirely about griping about problems rather than solving them. Go take 6+ minutes and find something that does everything this system does... but better!
I beg to differ with your rationalization.
Nope. I'm happy with the results. It satisfies all the stated criteria (multiple teams, multiple leagues, applied prioritization, secondary exclusion criteria) and in most cases creates near-optimum rating matches. Who to exclude from a given pool when someone has to be excluded is a judgment call... you not liking the call is irrelevant.
I do note that, as usual, you're doing the picky child routine - you bring nothing to the table but want to piss and moan about what's on it. Now is your opportunity to show that you can think of and demonstrate a system that hits all those buttons while making objectively better matches! Or, y'know, to be you and not do that... instead coming up with an excuse for why you never do any work.
This would only be relevant if the zsum prediction were also non-monotonic, i.e. there is some zsum gap where 'bigger is worse' doesn't apply anymore. What's the size of that gap, pray?
Go figure it out. I already do too much of your math homework. Other than pointing out that the objective number is not relevant, only relative magnitudes (which took you ages to understand during your "ArrrrG! gnash teeth! we MUST limit TVPlus differences or the world will end!" thread) that line of discussion is completely uninteresting to me.
So you're questioning your own data now? Interesting. I thought you established in your argument against fresh-team-protection that there was no advantage for reduced maximum TV difference for new teams.
Once again your inability or unwillingness to read leads to you trying to put words in someone's mouth. I didn't say there would be no benefit to new teams, I said there is no evidence that new teams require protection. Go back and read that thread until you understand it or possibly die of dehydration. Either one would suit me fine, though obviously the latter would be more useful to the community in the long term.
Since it contradicts your other findings, of course, I had to ask.
As usual, it only contradicts it in your misunderstood version of things.
How does that advantage manifest? The choice of teams is reduced, and one's own advantage and disadvantage over the possible candidates is also reduced (in variance). That, in itself, gives no advantage, neither at the beginning nor later.
Dear god you're a moron. Let's do this with pictures... I'll even leave plenty of room so you can colour them with your safety crayons once you fail to understand it once again.
Let's say that on a given day of CCL competition the mean TV for teams is 1200 with a standard deviation of 200. The protagonist of this story is a team that has a TV of 1100. This places the team in the following position on the distribution.
As the area under the curve represents the cumulative proportion of teams, we can see that there are more teams of higher TV than our protagonist than there are of lower TV. Since there is a restriction on maximum TV difference, the area from 1100 down to 600:
Is smaller than the area from 1100 up to 1600:
In raw numeric form, the percentage of teams between 600 and 1100 is 30.72% while the percentage of teams between 1100 and 1600 is 66.87%, making the general probability of getting a lower team 31.47% vs 68.52% chance of getting a higher TV opponent.
Now, what if we let people control their maximum TV distance? Let's look at the same situation, but with a maximum 300 TV difference. Now the percentage from 800 to 1100:
represents 28.58% of teams, while the percentage from 1100 to 1400:
represents 53.28% of teams, giving us final probabilities of 34.91% vs 65.09%, which represents more than a 10% increased chance of playing at a TV advantage as compared to having a maximum TV distance of 500. If we take that even further, and let people control their maximum TV difference down to, say, 100... the distance from 1000 to 1100:
Represents 14.99% of all teams, versus 1100 to 1200:
Which represents 19.15%, giving us the probability split of 43.91% vs 56.09%, which now gives a 39.5% improved probability of being paired with a team of lower TV as compared to people who leave their maximum at 500.
This trend then reverses once a team's TV goes past the mean, becoming advantageous to have larger maximum TV differences the farther above the mean a team's TV becomes.
If we gave coaches the ability to control their maximum TV differences we would be handing a mathematical advantage to anyone who opted to engage in TV difference control over those who did not. It's not "all the same", it is distinctly biased based on the metric distributions that punishes anyone who does not employ selective TV difference control if such a thing is available.... and this doesn't even take into consideration the removal of those closer matches from the pools and their effect on other queued teams.
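The percentages in this walkthrough can be reproduced from the normal CDF. This is a sketch assuming the stated mean of 1200, SD of 200, and a 1100 TV protagonist; the variable names are mine.

```python
from math import erf, sqrt

MU, SIGMA, TV = 1200, 200, 1100   # assumed environment and protagonist team

def cdf(x):
    """Cumulative proportion of teams at or below rating x."""
    return 0.5 * (1 + erf((x - MU) / (SIGMA * sqrt(2))))

split = {}
for max_diff in (500, 300, 100):
    lower = cdf(TV) - cdf(TV - max_diff)   # eligible teams below ours
    upper = cdf(TV + max_diff) - cdf(TV)   # eligible teams above ours
    split[max_diff] = lower / (lower + upper)
    print(f"max diff {max_diff}: {split[max_diff]:.2%} lower "
          f"vs {1 - split[max_diff]:.2%} higher")
# -> 31.48% / 68.52%, then 34.91% / 65.09%, then 43.91% / 56.09%
```

The same code run with TV above the mean shows the reversal described above: past 1200, a tighter cap shifts the split toward higher-rated opponents instead.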
Now, go get your crayons. Colouring in those pictures will be the first constructive thing you've ever done around here, Jacob... and this concludes my obligation to teach remedial thinking to the village idiot.
Who would those people be that 'can deliberately reduce the difficulty of their matches'? The ones using the option to reduce TV or the ones not using it?
Clearly it is the ones deliberately reducing their TV differences during the early games of their team's development. It would be inherently advantageous to those teams to do so until their TV was a ways past the mean TV value for the environment, at which point it would become advantageous to no longer limit their maximum TV differences.
That you had to ask at all suggests you're a long way off from wrapping your head around any of this.
When I talk about 'more fair' I talk about that elusive 50% winchance that the MM system aspires to. How is that easier to win more often than if the variance is higher?
The entire environment is going to be one large normal curve for each of the ratings we're concerned with. Each team represents a line dividing that curve in two, with the area on either side of that line representing the percentage of teams with higher or lower ratings. If you give people the ability to limit rating differences more tightly than others do, then it becomes inherently advantageous to do exactly that until your team's rating passes the midpoint, at which point it becomes increasingly advantageous NOT to do so.
When the environment affects all teams equally, then even if there are aspects that are "unfair", they are universally applied and thus everything is equal. When that treatment is applied unequally you have not improved fairness; you've reduced it further.
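That midpoint claim can be checked numerically. A sketch under the same assumption of a normal rating curve (the mean of 1250 and standard deviation of 200 are made-up illustrative values): bisect for the rating at which a tight window stops beating a wide one.

```python
from math import erf, sqrt

MU, SIGMA = 1250.0, 200.0  # illustrative, made-up curve parameters

def norm_cdf(x):
    """Cumulative area under the rating curve up to x."""
    return 0.5 * (1.0 + erf((x - MU) / (SIGMA * sqrt(2.0))))

def p_lower(tv, max_diff):
    """Chance of drawing a lower-rated opponent inside a +/- max_diff window."""
    below = norm_cdf(tv) - norm_cdf(tv - max_diff)
    above = norm_cdf(tv + max_diff) - norm_cdf(tv)
    return below / (below + above)

def crossover(lo=1000.0, hi=1500.0):
    """Bisect for the rating where a tight (100) window stops beating a
    wide (500) one; below the root tight wins, above it wide wins."""
    advantage = lambda tv: p_lower(tv, 100) - p_lower(tv, 500)
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if advantage(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(crossover())  # sits at the mean of the curve (~1250)
```

The crossover lands at the mean regardless of the placeholder parameters chosen, which is exactly the "midpoint" in the argument above.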
So, your argument here is that the people who deny themselves a high TV-difference or of using their allowed inducements and thereby handicapping themselves more than necessary should not be allowed to do so, even if they WANT it for some misguided reason or other?
Jesus you're dense. The advantage doesn't come from a high TV difference, it comes from having control over TV difference. The lower the mean difference, the better things are for you as you're building your team. When your team is well-developed and at a higher rating, that's when allowing yourself higher TV differences becomes advantageous because the likelihood that the high TV differences come out in your favour goes up as your rating does.
A system that allows people to choose their maximum TV difference and wait for a match with that difference or less creates an environment in which people who do NOT choose to wait for lower TV differences are at a distinct disadvantage during the team development phase. They are at the lower end of the rating curve and thus are more likely to be at a rating DISadvantage, making it inherently advantageous to wait for a lower difference. In doing so, they improve their chances of winning, while those who do not do so have reduced chances of winning.
Additionally, by taking the close matches consistently they are likely to leave only larger rating difference matches for the people who don't opt to restrict their maximum TV differences. This is massively exacerbated if the system gives priority matches to the coaches that have been queued the longest, as it currently does.
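Both effects, restricters hoovering up the close matches and longest-queued priority handing them those matches first, can be illustrated with a toy Monte Carlo queue. Everything in it (pool size, the 100/500 caps, the normal(1250, 200) rating curve) is a made-up model, not the actual matchmaker:

```python
import random
import statistics

random.seed(1)

def simulate(n_rounds=200, pool_size=40, mu=1250.0, sigma=200.0):
    """Toy queue: half the coaches cap the allowed rating gap at 100
    ("restricter"), half accept up to 500 ("open"). The longest-queued
    coach is served first and takes the closest legal opponent.
    Returns the mean absolute rating gap each group ends up playing."""
    gaps = {"restricter": [], "open": []}
    for _ in range(n_rounds):
        pool = [{"tv": random.gauss(mu, sigma),
                 "cap": 100 if i % 2 == 0 else 500,
                 "kind": "restricter" if i % 2 == 0 else "open"}
                for i in range(pool_size)]
        while len(pool) >= 2:
            coach = pool[0]  # longest-queued coach gets priority
            legal = [o for o in pool[1:]
                     if abs(o["tv"] - coach["tv"]) <= min(coach["cap"], o["cap"])]
            if not legal:
                pool.pop(0)  # no legal opponent this round; drop them (simplification)
                continue
            opp = min(legal, key=lambda o: abs(o["tv"] - coach["tv"]))
            gap = abs(opp["tv"] - coach["tv"])
            gaps[coach["kind"]].append(gap)
            gaps[opp["kind"]].append(gap)
            pool.remove(coach)
            pool.remove(opp)
    return {k: statistics.mean(v) for k, v in gaps.items()}

result = simulate()
print(result)  # restricters end up with a noticeably smaller mean gap
```

The large gaps only ever occur between two "open" coaches, because every match involving a restricter is capped at 100; the restricters consume the close pairings and the open coaches inherit what's left.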
Would still be the case.
No, it would not. Different degrees of variance mean the ultimate system is NOT truly random anymore, especially when you control the amount of variance. If I get to roll a d8 when I dodge and you only get to roll a d6, we're not being affected by the same amount of randomness anymore.
They wouldn't be.
Yes, they would be. You just can't understand why that is, and you're reaching the limits of my patience with trying to help you understand this particular topic. Thankfully people like Netheos and Dode, who actually have a say in what might or what absolutely will not be done with CCL, do understand.