lostinbr
| Favorite team: | LSU |
| Location: | Baton Rouge, LA |
| Biography: | |
| Interests: | |
| Occupation: | |
| Number of Posts: | 12703 |
| Registered on: | 10/15/2017 |
| Online Status: | Not Online |
Recent Posts
re: How many of LSU’s transfer portal entrants will not land a new home?
Posted by lostinbr on 1/16/26 at 1:40 pm to TopWaterTiger
quote:
So you bring up a good point. How do scholarships work now in NIL world? NCAA allows for so many scholarship players and then walk ons. But if everyone is getting paid, isn’t that a scholarship?
Scholarship limits were eliminated under the House settlement. Instead, there are strict roster limits. There were already squad size restrictions for games (for example SEC rules dictate how many players may dress and how many players may participate for home/away teams) but now there are firm roster limits that take it a step further.
So in football, previously you had 85 scholarships but might have 120+ players on the roster. Now you only have 105 roster spots, but you can give scholarships to all of them if you choose.
The net result is that schools can offer a lot more scholarships than they could previously, which adds quite a bit of cost. Up to $1.5 million of that new cost counts against the $21 million revenue sharing cap. Since most P4 programs are fully funding the new scholarships, that means they actually only have about $18.5 million for direct revenue sharing payments.
re: How many of LSU’s transfer portal entrants will not land a new home?
Posted by lostinbr on 1/16/26 at 1:29 pm to NotaStarGazer
quote:
I will say there have only been a couple of surprises to me...OL...who signed with good SEC teams. But then again, the pudding proof will be do they start or are they just "depth pieces.' If depth pieces, then we could say
LSU bad OL starters = other teams good depth pieces
So I’m looking at the OL who transferred to SEC schools:
Tyler Miller (Miss State)
Ory Williams (Tenn)
Tyree Adams (aTm)
DJ Chester (Miss State)
Coen Echols (aTm)
Carious Curne (Ole Miss)
Miller and Curne were true freshmen this year. Williams and Echols were RS freshmen. I think it’s tough to say much regardless of what those guys do elsewhere. If Adams and Chester go on to become all-SEC players, that’s gonna say a lot about Brad Davis.
re: LSU baseball is getting screwed in revenue share
Posted by lostinbr on 1/16/26 at 9:26 am to Hermit Crab
quote:
Did you do the math for each sport?
As a percent of total "Team sports" revenue from the chart:
Football 82.9%
Men's Basketball 8.93%
Women's BB 1.8%
Baseball 5.33%
All others 1.05%
So baseball is getting screwed, while basketball is benefiting on both the men's and women's sides. And this is as of June 2024; I would bet men's basketball has fallen since then, WBB might be up, baseball is likely up, and football is down.
It’s even more stark when you look at revenue over expenditures (effectively “profit”) for each sport.
For the FY ending June 2025 (so these numbers do not include revenue sharing):
Football: $64.4M
Men’s Basketball: $2.5M
Baseball: ($0.9M)
Men’s Tennis: ($1.2M)
Women’s Golf: ($1.3M)
Women’s Beach Volleyball: ($1.3M)
Men’s Golf: ($1.4M)
Women’s Tennis: ($1.5M)
Women’s Volleyball: ($2.2M)
Women’s Soccer: ($2.5M)
Women’s Gymnastics: ($2.9M)
Softball: ($3.0M)
Swimming & Diving: ($3.6M)
Track & Field: ($6.8M)
Women’s Basketball: ($8.0M)
It’s a little odd to me that WBB would get a dedicated 5% while baseball gets lumped in with the 5% to “all other sports,” when WBB is bringing in less revenue while losing more money. I get it with MBB, but it seems kind of wild to tack another ~$1M onto your $8M annual loss for WBB, unless they’re making big cuts elsewhere.
quote:
The funding is not public money
Revenue share isn’t public money? Isn’t it paid directly by the athletic department?
Regardless I wouldn’t be surprised if schools try to dodge FOIA requests by considering the payments/agreements to be confidential student records or something.
re: Fox 13 Seattle. D Williams gone Uw wants damages
Posted by lostinbr on 1/7/26 at 9:39 pm to Mickey Goldmill
quote:
Take the rumored liquidated damages portion of this contract that is “solely in the discretion of Washington” as to the amount. No chance that survives if challenged. They can’t just make up a number. They have to back it up.
Did you look at the agreement text linked on the previous page? This is apparently language from a University of Washington contract (from last summer) that a reporter obtained via a FOIA request. The liquidated damages are pretty clear:
quote:
If Athlete transfers or enters the transfer portal prior to the end of a Consideration Period set forth in Annex A, the Athlete will: (a) reimburse, or cause the transferee institution to reimburse, the Institution a prorated portion of the Consideration, equal to the amount paid by the Institution for the remainder of the Consideration Period; and (b) pay or cause the transferee institution to pay, as liquidated damages, the remainder of the Consideration not paid under Section 3(a) above.
If his agreement has the same language, the liquidated damages would be the full value remaining on the contract. It doesn’t say anything about those damages being up to UW’s discretion.
I think the part you’re referencing might be this:
quote:
The Institution in its discretion may, after good faith discussion with the Athlete, adjust the Consideration to reflect an increase or decrease in the Athlete’s NIL value (e.g., a Heisman Trophy win may increase the NIL value and reduced playing time may decrease the NIL value).
I’m not a lawyer, so I’m not sure how this provision would be interpreted in conjunction with the liquidated damages. If the written contract says his Consideration is $4 million, can Washington try to say his NIL value increased to $8 million, and therefore he owes them $8 million as liquidated damages? Surely that wouldn’t stand up, as you said. But then what if Washington doesn’t try to take that stance?
Seems like it’ll be interesting to see how this plays out regardless.
re: As employees, athletes should have to adhere by same rules as other employees
Posted by lostinbr on 1/6/26 at 1:39 pm to Nutriaitch
quote:
the courts clearly stated that the athletes are NOT employees.
The NCAA has been fighting (on the schools’ behalf) to make sure the players aren’t classified as employees for decades. The courts aren’t keeping the schools from treating players as employees; the schools are.
In an era where schools are spending $20+ million of direct athletic department funds on rosters anyway, it might be time for them to re-evaluate that strategy. I suspect the biggest hurdle is that acknowledging them as employees will require them to collectively bargain to maintain a cap. But it seems like we are headed in that direction regardless at this point.
quote:
It was supposed to be the golden test for a while because I can remember having discussions about the implications if a machine were able to pass it. Now we don’t care, how many people would think they were talking to an actual person when using a chat bot.
It’s just a benchmark. It’s significant in the sense that it seemed like an incredibly difficult bar to pass for a long time, where now it seems almost trivial.
It doesn’t really tell you anything about actual “intelligence,” or sentience, or anything of that sort. It’s not particularly relevant to any discussion about an AI “singularity” other than as a demonstration of what AI has already achieved.
re: Elon: ‘We have entered the singularity’
Posted by lostinbr on 1/4/26 at 9:19 pm to mmmmmbeeer
quote:
The Turing test….and no AI has passed it yet and no sign anyone is particularly close.
I’m just going to copy and paste something I posted in another thread on the same subject:
arXiv link
quote:
Moreover, GPT-4.5-PERSONA achieved a win rate that was significantly above chance in both studies. This suggests that interrogators were not only unable to identify the real human witness, but were in fact more likely to believe this model was human than that other human participants were. This result, replicated across two populations, provides the first robust evidence that any system passes the original three-party Turing test.
A 50% win rate would “pass” the three-party Turing test, as it would mean that participants were unable to distinguish between the AI and another human. GPT-4.5’s win rate was 73%.
That means that when asked to identify the human between GPT-4.5 and another actual human, nearly 3/4 of participants said that GPT-4.5 was human and said that the actual human was AI.
And that’s a model that was released a year ago.
That being said, I’m not sure what the Turing test really has to do with the singularity in the first place. :dunno:
Part Two - Methodology
I suspect most people won't care about this, but for those who do: I wanted to explain where the numbers come from.
Strength of record is a way of looking at a team's record and asking "how would other top teams fare against that schedule?" Generally it's reported as a probability. In this case, I'm reporting it as the probability that the team's record is better than the record of an average top-12 team against the same schedule.
In order to build up strength of schedule and strength of record, you need some sort of predictive metric. There are several of them out there, and I'd say ESPN FPI and Bill Connelly's SP+ are the two big ones. I chose to use SP+ because Connelly has been pretty open about how the ratings are built up, which gives me a lot more confidence in them.
You also need some sort of "reference team" to measure SOS/SOR against. Usually you will see published SOS/SOR metrics use either "an average FBS team" or "an average top-25 team." The reference that you use can make a big difference in the calculations. Here's a simplified example to illustrate the issue: suppose team A plays four average FBS opponents, while team B plays three really bad opponents and one elite opponent.
An average FBS team would be expected to win 50% of their games against team A's schedule, because all 4 games are against other average FBS teams. However, they would be expected to win 55% of their games against team B's schedule because 3 of the opponents are really bad. In other words, team A has a stronger SOS for an average FBS team.
However, a better reference team (in this case an average top-12 team) is expected to win almost all of its games against mediocre opponents, so the one elite opponent dominates the comparison. As such, team B's schedule is actually more difficult - and therefore has a stronger SOS - for an average top-12 team.
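To make that concrete, here's a tiny Python sketch with made-up per-game win probabilities (the real numbers come from SP+, not these) that reproduces the pattern described above:

```python
# Hypothetical per-game win probabilities, chosen only to illustrate the point.
# Team A: four average FBS opponents. Team B: three really bad opponents + one elite opponent.
schedules = {
    "avg FBS team":    {"A": [0.50, 0.50, 0.50, 0.50], "B": [0.70, 0.70, 0.70, 0.10]},
    "avg top-12 team": {"A": [0.85, 0.85, 0.85, 0.85], "B": [0.95, 0.95, 0.95, 0.30]},
}

for reference, teams in schedules.items():
    for team, probs in teams.items():
        # Expected wins = sum of the single-game win probabilities
        print(f"{reference} vs team {team}'s schedule: {sum(probs):.2f} expected wins")

# The average FBS team does better against B's schedule (2.2 wins) than A's (2.0),
# but the top-12 team does worse against B's schedule (3.15) than A's (3.40).
```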
Here's a real-world example using Texas' and Oklahoma's 2025 regular-season schedules:
An average FBS team would find Oklahoma's schedule more difficult, but an average top-12 team would find Texas' schedule more difficult.
I actually looked at three different reference points for this analysis: average FBS team, average top-25 team, and average top-12 team. Here is the distribution of 2025 strength of record based on each reference point:
Ultimately I found that there wasn't a ton of difference between using top-12 and top-25 as the reference. The most notable difference happens when you use average FBS instead. I went with top-12 because to me, it makes logical sense when you're trying to compare top-12 teams.
So how do we actually calculate this stuff? Basically it comes down to calculating game-by-game win probabilities using the predictive metric of choice (SP+ in my case). We can convert the SP+ differential between two teams (our reference team and each opponent on the schedule) to a Z-score. To do this, we need the standard deviation. In the past I've used 17 points as the STDev for SP+. However, now I actually have enough data to calculate it since I'm already looking at 11 years' worth of games anyway:
This is also how I went about verifying home field advantage, which remained at 2.5 points as expected. So using our ~14 point standard deviation and 2.5 point home advantage, we can calculate a Z-score for any matchup and then convert that to a win probability. That's actually the easy part.
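In code, that conversion looks roughly like this (a minimal sketch, not my actual tool - just the conversion described above, using the ~14-point standard deviation and 2.5-point home edge):

```python
from statistics import NormalDist

SP_STDEV = 14.0    # ~14-point standard deviation estimated from the game data
HOME_EDGE = 2.5    # home-field advantage, points

def win_probability(team_sp, opp_sp, site="neutral"):
    """P(team beats opponent), given the two teams' SP+ ratings."""
    margin = team_sp - opp_sp                  # expected margin on a neutral field
    if site == "home":
        margin += HOME_EDGE
    elif site == "away":
        margin -= HOME_EDGE
    z = margin / SP_STDEV                      # expected margin as a Z-score
    return NormalDist().cdf(z)                 # P(actual margin > 0)

# Example: a team rated 10 points better than its opponent, playing at home
print(round(win_probability(20.0, 10.0, site="home"), 3))   # ~0.814
```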
The hard part is then crunching the numbers on 11 years of data. In the past when I looked at SEC schedules only (for only 1 year) I used a Monte Carlo simulation. But I really didn't use enough discrete simulations then, and doing enough discrete simulations now takes a long arse time because of the size of the dataset.
As it turns out, it was easier to solve everything analytically. I used a script that actually generates every win/loss permutation of a given team's schedule, at which point I can use the single-game probabilities to determine overall probability of each win/loss record.
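For anyone curious, the core of that idea fits in a few lines of Python (this is a simplified sketch of the approach, not the actual script):

```python
from itertools import product
from collections import defaultdict

def record_distribution(win_probs):
    """Probability of each possible win total, by walking every W/L permutation."""
    dist = defaultdict(float)
    for outcome in product([1, 0], repeat=len(win_probs)):   # 1 = win, 0 = loss
        p = 1.0
        for won, wp in zip(outcome, win_probs):
            p *= wp if won else (1.0 - wp)
        dist[sum(outcome)] += p
    return dict(sorted(dist.items()))

# Made-up per-game probabilities for a 12-game slate
probs = [0.95, 0.9, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5, 0.4]
dist = record_distribution(probs)
print(sum(p for wins, p in dist.items() if wins >= 10))      # P(10+ wins)
```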
I tested my script by running some sample probability distributions:
By the way, this is why I've been ragging on Ole Miss' schedule for 2 years now. An average top-12 team would have just under 50% probability to win 10+ games against LSU's 2025 regular season schedule, but would have over 50% probability to win 11+ games against Ole Miss' 2025 schedule. In other words, the schedule difference between Ole Miss and LSU is basically equivalent to spotting an entire game. Wild stuff.
Anywho, once I know my script works, I can run it over the entire 11 year period and then start comparing data with the CFP rankings. :geauxtigers:
There is one issue that I've noticed - as I mentioned in OP, I used the penultimate CFP rankings (prior to conference championship weekend) to remove the somewhat subjective value of conference championships from the analysis. However, the Big 12 did not play a conference championship game from 2011-2016. Instead, their final regular season game happened during conference championship weekend. So unfortunately, this means the snapshot is looking at Big 12 teams before they actually completed their regular season (at least from 2014-2016). I don't really have an elegant solution for this problem, so at this point it is what it is.
No idea whether anybody actually cares about any of this crap, but it's a side project I've been working on for a while (because I'm a nerd) and I have nowhere else to share it. :lol:
Offseason-ish thread: Grading the CFP Committee rankings after 11 years (deep dive)
Posted by lostinbr on 1/3/26 at 2:11 pm
TRIGGER WARNING - LONG POST AHEAD
TL;DR: Look at the graphs. I don't know how to talk about this in less than 1,000 words. I am who I am. Sorry.
Around this time last year, I posted this topic analyzing the disparity between conference schedules among SEC teams in 2024. At the time, I thought it would be interesting to do an expanded strength of schedule/strength of record analysis looking at not just SEC teams, but all of FBS. One thing I was particularly curious about was how the CFP Committee rankings compare with calculated strength of record over the years.
There are various places to find this information - for example, FPI has strength of record data that you can compare to the CFP rankings - but I really wanted a data source that I could dive into beyond some FPI numbers on a web page. So.. I built my own.
I'll add a separate post detailing the process, but here's the short(ish) version: I pulled historical SP+ ratings, CFP and poll rankings, and game results from collegefootballdata.com. I pulled this data for the entire CFP era to-date - 2014 through 2025. I then built a tool to calculate strength of schedule, using SP+ data, for every FBS team over that 11-year period. My tool also calculates strength of record using the same SP+ data, and there are several levers I can pull to tweak the parameters / evaluate the results.
Methodology
Before I get into some of the results, a couple of quick notes & definitions just to make sure it's clear what we are looking at:
Snapshot in Time - End of Regular Season, Prior to Conference Championships
This is probably the most critical piece of the puzzle. You see, one of the issues with evaluating the CFP Committee rankings is that there's a subjective value placed on conference championships. There's no way for me to tell analytically whether this subjective value makes sense, and it really muddies the waters. To deal with this, all of the analyses that follow are based on the end of the regular season, prior to conference championship weekends. The entire snapshot for each season - records, rankings, schedule strength, etc. - is based on the end of the regular season.
Strength of Record
Strength of record, at its most basic level, is a measure of how a team performed relative to the strength of its schedule. In this case, strength of record is reported as the probability that the team had a better record than an average top-12 team (in the given season) would have against the same schedule.
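For reference, here's roughly how that definition turns into a number (a sketch only - in particular, how ties are handled here is my guess, and the actual tool may differ):

```python
def win_total_distribution(win_probs):
    """P(exactly k wins) for k = 0..N, given independent single-game win probabilities."""
    dist = [1.0]                          # 0 games played: probability 1 of 0 wins
    for p in win_probs:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1.0 - p)    # lose this game
            new[k + 1] += prob * p        # win this game
        dist = new
    return dist

def strength_of_record(team_wins, ref_win_probs, ties_count_half=True):
    """P(an average top-12 team does worse against the same schedule)."""
    dist = win_total_distribution(ref_win_probs)
    sor = sum(dist[:team_wins])           # reference team ends up with fewer wins
    if ties_count_half:
        sor += 0.5 * dist[team_wins]      # reference team matches the record (my assumption)
    return sor

# Made-up example: a 10-2 team whose schedule gives a top-12 team these game-by-game odds
ref_probs = [0.95, 0.9, 0.9, 0.85, 0.8, 0.8, 0.75, 0.7, 0.65, 0.6, 0.5, 0.45]
print(round(strength_of_record(10, ref_probs), 3))
```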
The Data
So with that out of the way, let's look at some data. My biggest question going into this was "is the CFP Committee focusing too much on W/L record and not enough on schedule?" So my first step was to take a look at calculated SOR vs. the CFP Committee rankings. Here is what that dataset looks like for the CFP top 25 over the past 11 years:
Note that these SOR values have been further normalized and re-centered, which allows comparison across multiple years (as long as we focus that comparison near the re-centering point, which is around the #10 team in this case). Here's what the same data looks like without that normalization and re-centering, for reference:
So going back to the normalized chart, I chose the top 10 as my center point for analysis. Originally I was looking at the top 11 - my logic was that most years, the top 11 teams in the CFP rankings should make the 12-team playoff. However, as it turned out, that wasn't the case in either of the first two years of the expanded playoff. So I figured top 10 might make more sense.
The data points in magenta represent teams who were ranked in the top 10 by the committee, but did not have a top 10 strength of record. The data points in green represent teams ranked outside the top 10 by the committee, despite having strength of record in the top 10.
So the next question is.. who were these teams? Let's take a look:
Some of these are interesting. 2022 LSU obviously jumps out, but if you look at the SOR you'll notice that it's very low compared to the rest of the list. LSU had the 9th best SOR at the end of the '22 regular season primarily because there was a pretty weak field in 2022. Also worth noting that considering this is a snapshot before the SECCG, LSU very well may not have made a 12-team CFP in 2022 even if they were "properly" ranked by the committee.
Another that jumps out is 2025 BYU. Their 0.627 SOR means that their record, given the teams they played this year, is 62.7% likely to be better than an average top-12 team playing the same competition. They had a top-4 SOR but the committee had them ranked #11 prior to the conference championships. Ouch.
Here is another way of visualizing the same data:
The magenta dots represent teams that were ranked in the CFP top 10 at the end of the regular season. The x-axis is strength of schedule (schedules get harder as you go to the right) while the y-axis is strength of record (resume gets better as you go up).
I think this plot kind of tells the story I expected to tell, but only if you squint at it just right. The story would be that teams are better off at 10-2 with a weaker schedule than 9-3 with a harder schedule, even if that 9-3 record would actually be better because of the schedule difficulty. But you aren't talking about that many cases, and it's really at the margins (in that 0.2-0.4 SOR range, near the bottom of the expected CFP field).
The last thing I thought about was the reality that the CFP committee probably didn't care that much who was ranked #10 back in 2015. The 12-team playoff puts a higher level of scrutiny on the #8-12 (or so) teams in the rankings. So what if we only look at the two years so far of the 12-team playoff?
I think this looks a bit tighter. Again the biggest outlier is 2025 BYU, who really seems to have been screwed in the penultimate rankings.
Conclusions
All-in-all, I would say this analysis makes the CFP rankings look... better than I expected, actually. There are some clear head-scratchers, but overall it seems fairly reasonable considering we are looking at 11 years of data here. I have to admit, I was a bit surprised.
One thing that this analysis does not tackle, though, is how the rankings change following conference championship weekend. This is much harder to objectively analyze as I mentioned before. How do you put an objective value on a conference championship, beyond simply adding it to the win total/SOS calculation? It's also worth noting that some of the most controversial CFP committee decisions - particularly moving FSU out of the top 4 in 2023 - happened after conference championship weekend.
re: NASA has announced that Artemis II is go for launch...
Posted by lostinbr on 1/2/26 at 10:49 pm to Btrtigerfan
quote:
More government funding.
They have fallen very far behind the US commercial sector and China.
Money won't speed progress, but it supports the grifters.
1. In what world has NASA “fallen very far behind” China? :lol:
2. The US commercial sector exists primarily because of NASA / US government funding. This is especially true when it comes to heavy launch systems like Starship and the Super Heavy booster.
That said, there is grift involved with SLS.. but it’s Congress, not NASA. They’ve managed to keep SLS alive in spite of recommendations from multiple NASA administrators over the years to kill it, because a handful of senators really wanted to keep the jobs in their states.
quote:
Why are they orbiting so high? Apollo 8 was 70 miles above the moon.
Apollo 8 actually inserted into a lunar orbit. Artemis II will just do a flyby on a free return trajectory. So that’s part of the reason.
That said, even for a flyby on a free return trajectory, it’s very high compared to the Apollo missions. I believe the Apollo free return trajectories put them something like 160 miles above the Moon. So there must be some other mission requirement affecting it as well.
re: Among Other Significant Events of 2025
Posted by lostinbr on 12/30/25 at 7:05 pm to Bjorn Cyborg
quote:
What do you think .8 centimeters represents?
Ah OK. Fair enough.
As I said, it’s an average based on 50 years of measurements using lasers pointed at stationary reflectors.
Here’s a NASA link that describes the science.
quote:
By measuring how long it takes laser light to bounce back — about 2.5 seconds on average — researchers can calculate the distance between Earth laser stations and Moon reflectors down to less than a few millimeters. This is about the thickness of an orange peel.
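Back-of-the-envelope, that timing lines up with the Moon's average distance:

```python
# Rough check of the round-trip timing in the NASA quote above
c = 299_792_458            # speed of light, m/s
round_trip = 2.56          # seconds (the quote rounds this to "about 2.5 seconds")

distance_km = c * round_trip / 2 / 1000
print(f"{distance_km:,.0f} km")   # ~384,000 km, about the Moon's mean distance

# A 3.8 cm/yr change shifts the round trip by only ~2.5e-10 s per year,
# which is why it takes decades of measurements to pin down the trend.
```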
re: WSJ Article: America’s Biggest Oil Field Is Turning Into a Pressure Cooker
Posted by lostinbr on 12/30/25 at 7:00 pm to TulsaSooner78
quote:
They need a constant supply of fresh water to keep their Godless computer farms cool.
quote:
Absolutely false.
How often do you refill the radiator in your vehicle?
Again, this is an apples-to-oranges comparison. Your vehicle’s radiator doesn’t use evaporative cooling.
re: Among Other Significant Events of 2025
Posted by lostinbr on 12/30/25 at 6:56 pm to Bjorn Cyborg
quote:
To the millimeter level? No one is observing or even assuming that. bullshite.
Land surveys have a higher margin of error than that.
Not sure where you’re getting “millimeter level” from OP. That being said, the other guy is mistaken. We actually do have direct measurements of the Moon’s rate of recession from Earth.
Astronauts left reflective panels on the surface of the Moon during the Apollo missions. Scientists have been running experiments for the past 50 years, where they fire a laser at the reflective panels from Earth and measure the amount of time it takes for the light to return.
You’re probably correct that this would not be precise enough to accurately measure the recession over a single year. But over 50 years, there’s enough data that they can approximate the Moon’s average recession at ~3.8 cm/year.
The rate of Earth’s recession from the Sun is more of a calculation based on a model, as the other guy said. We can calculate the amount of energy released by the Sun as electromagnetic radiation using direct measurements. We can then determine the mass lost to fusion using E=mc^2. Meanwhile, we can also estimate the mass lost to solar wind using measurements from satellites and probes. Once you have an estimate of the amount of mass the Sun loses each year, you can calculate how that change in mass impacts Earth’s orbit.
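If you want a feel for the numbers, the back-of-the-envelope version looks like this (all constants are rough, and the solar wind term in particular is just an order-of-magnitude estimate):

```python
# Rough sketch of the calculation described above; not a precise published value.
L_SUN  = 3.8e26     # solar luminosity, W
C      = 3.0e8      # speed of light, m/s
M_SUN  = 2.0e30     # solar mass, kg
AU     = 1.5e11     # Earth-Sun distance, m
WIND   = 1.5e9      # mass carried off by the solar wind, kg/s (order-of-magnitude guess)
YEAR   = 3.156e7    # seconds per year

radiation_loss = L_SUN / C**2                     # kg/s lost to radiation, via E = mc^2
mass_loss_per_year = (radiation_loss + WIND) * YEAR

# For slow mass loss, the orbit expands in proportion to the fractional mass change:
# da/a = -dM/M, so da = AU * (mass lost per year / solar mass)
da_cm_per_year = AU * (mass_loss_per_year / M_SUN) * 100
print(f"~{da_cm_per_year:.1f} cm/yr")             # on the order of a centimeter per year
```

The exact figure depends on which mass-loss estimates you plug in, but it comes out to roughly a centimeter or two per year.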
All of that said, OP is a bit misleading in the sense that these aren’t estimates for 2025 specifically; they’re annual averages based on a long history of data collection. We know that some (all?) of these rates will change over time, but the timescale is so long compared to the history of human measurement that there’s no point trying to factor that in.
quote:
OR engineer it to tie into a resource not impacted by evaporative loss.
I don’t disagree with this in principle, and data center water usage is certainly a looming problem.
But when you say shite like…
quote:
Because the so called geniuses are too stupid to scale a simple PC rig upward.
It wouldn't even be hard. Automotive engineers have already spent the last century figuring out which coolant is safe for which metals.
…and…
quote:
Or perhaps a chemical solution specifically designed to stop mineral buildup and offer corrosion protection.
We could call this solution... Coolant. We could offer it in many colors and pH levels.
…it kind of makes it seem like you don’t know what you’re talking about. “Closed loop” cooling systems can still use evaporative cooling. The “closed loop” usually just means that the actual cooling medium being circulated through the equipment is an isolated circuit. It doesn’t address the actual method of final heat rejection to atmosphere. Many closed loop systems just exchange heat from the closed loop to a second circuit that flows through a cooling tower (and evaporates).
So all that stuff about “coolant” is meaningless. They aren’t using so much water because it’s their choice of coolant. They’re using so much water because there isn’t a more cost-effective method of heat rejection from the system.
The stuff about scaling up “a simple PC rig” is silly as well. The reason your PC can be cooled by air (aside from the fact that it puts out orders of magnitude less heat per unit volume than a rack full of B200’s) is that it’s located inside of an air conditioned building. Your PC doesn’t really impact the cost of cooling your home, but the racks are basically the entire cost of cooling a data center.
If you think they should have to go with the more expensive option - specifically vapor-compression refrigeration - in areas where they don’t have access to virtually unlimited surface water like the Mississippi River, that’s a perfectly valid viewpoint. But don’t act like it’s because they don’t know what coolant is. :lol:
quote:
X123F45
I think you’re missing the issue entirely. The cooling medium doesn’t matter. Whether they use a closed loop with chilled water, a closed loop with some sort of glycol coolant, or forced-air convection with chilled air for the actual heat transfer from the server racks doesn’t really affect their water usage.
The issue is how they ultimately eject the heat from the larger system. There are basically three options for this:
1. Air cooling by passing coolant through an exchanger where ambient air is blown over the coils outside.
2. Vapor-compression refrigeration (which still ultimately uses air cooling to remove heat from the refrigerant, but allows for much smaller air cooled exchangers because the refrigerant gives higher approach temperatures).
3. Evaporative cooling.
Evaporative cooling in cooling towers is simply the most economical option most of the time. The downside of evaporative cooling is that you lose coolant (water) to evaporation and have to make it up somehow, hence the high water usage.
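To put a number on it, here's a rough illustration of how much water an evaporatively cooled facility goes through (the 100 MW heat load and the other figures are assumptions for illustration, not numbers from the article):

```python
# Rough illustration of evaporative water consumption; all figures are assumed.
heat_load_w = 100e6          # assumed data center heat load: 100 MW
latent_heat = 2.4e6          # J per kg of water evaporated (approx., at cooling-tower temps)

evap_kg_per_s = heat_load_w / latent_heat
evap_m3_per_day = evap_kg_per_s * 86_400 / 1000       # 1 m^3 of water ~ 1,000 kg
print(f"~{evap_m3_per_day:,.0f} m3/day evaporated")   # ~3,600 m3/day, before blowdown losses
```

That's roughly a million gallons a day evaporated for the assumed load, which is why the source of the makeup water matters so much.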
quote:
Their cost savings method is going to make the cost go up for the rest of us. Right?
The cost of water? I don’t know. Maybe. It’s a complicated question.
Refineries and chemical plants use a shitload of cooling water. Around here, most of that water comes out of the Mississippi River. It’s cleaned up, goes through the cooling water system, and then the blowdown (water with high dissolved solids) is cleaned again and discharged back to the river. Does that drive up the cost of fresh water for everyone else? Not really.
If they’re using municipal water, that’s a different story. If they’re using less abundant surface water (say from smaller lakes or streams) that’s a different story as well. There’s nothing inherently wrong with using water for evaporative cooling, it’s more an issue of where that water comes from.
quote:
Right, but that's not all of the money. The Athletic Department isn't going to voluntarily spend more money than it has to. It’s not profit sharing, its rev sharing. Any shared money adds to the expense column.
The athletic department doesn’t have to spend any money on rev share. So I’d say that they definitely are going to spend more than they have to.
re: WSJ Article: America’s Biggest Oil Field Is Turning Into a Pressure Cooker
Posted by lostinbr on 12/29/25 at 7:46 pm to ragincajun03
quote:
Is this why the water guys I work with are obsessed with bugs and nanobubbles?
I’m not familiar with nanobubbles in an O&G context so can’t comment on that. :lol:
Bugs.. maybe tangentially related? Bacteria are typically a corrosion concern first and foremost. However, sulfate-reducing bacteria (SRB) reduce sulfate (found in produced water to various extents) to sulfide (e.g. H2S). This can be a big deal when you’re dealing with any sort of water injection, because of the risk of reservoir souring. Once the bacteria are downhole with a food source, it’s gonna be way harder to kill them. The fear is that they will then multiply and create a bunch of H2S in the reservoir.
I’ve been out of O&G for a while now so I’m not sure how the thinking has evolved, but it used to be a big concern with waterflood offshore because seawater used for the waterflood has a fair amount of sulfates. I’d imagine the same would apply to disposal wells for brines with meaningful sulfate levels.
re: WSJ Article: America’s Biggest Oil Field Is Turning Into a Pressure Cooker
Posted by lostinbr on 12/29/25 at 6:49 pm to LemmyLives
quote:
It's salty because it has absorbed ions directly from the rock in the area, so essentially all that is happening is that the salts and chemicals present (for the most part) were already in the ground anyway. Right? I don't suppose they're shipping waste water from the Bakken to Texas and vice versa.
Sort of. There are kind of two separate issues at hand.
One issue is the buildup of reservoir pressure where the injection wells are located. This is what’s happening in cases like the saltwater “geyser” described in the OP. That high reservoir pressure causes the water to migrate elsewhere - either into another part of the formation or possibly to the surface, in the case of shallow disposal wells.
This is where the salinity becomes a problem. The salt itself doesn’t create an issue until it migrates somewhere we don’t want large amounts of saltwater. The biggest concerns being the surface (where it can cause ecological issues) and fresh water aquifers (where it can contaminate fresh water supplies).
ETA: There is a separate issue where the saltwater produced by different wells/reservoirs can be incompatible, leading to scale formation. But this is mainly an issue for the injection wells themselves because it causes things to plug up. Incompatible brine isn’t really an issue for the larger population AFAIK.