J-Pav--I don't know who those guys are, but I am confident that what I said is true.
I've been a physicist for quite a long time, and stats are not closed-end "formulas" to me--they are tools derived from calculus and used to describe things, whether it is electron energy distribution functions, quantum states, or, yes, an average of a group of numbers. We didn't arrive at the central limit theorem a priori; we derived it. But some forget the assumptions behind the simplified relations, because those relations are usually close enough. They are not absolute, though.
A "bell curve"--a Gaussian, or normal, distribution--is a benchmark: a perfect, fictional distribution. It is a model we use to make the natural world's problems tractable, but it is not completely accurate.
We get lazy because we view the neatly packaged formulas we memorize as some kind of canon, when in fact they are simply the result of applying many approximations. Yes, even the central limit theorem--hence the "theorem" part--rests on assumptions that take calculus to establish, and once you see that, we can agree that applying it to any finite sample is an approximation.
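A quick sketch of that point (my own toy illustration, not anything from the linked page): the CLT only guarantees normality of sample means in the limit. Draw means of n samples from a skewed distribution--here an exponential, chosen just as an example--and the skewness of those means only shrinks toward zero as n grows; for small n the "bell curve" is a poor description.

```python
import random
import statistics

# Illustration: the CLT is a limit statement. Sample means of a skewed
# distribution (exponential, mean 1) only *approach* a Gaussian as the
# sample size n grows; at small n the normal approximation is poor.
random.seed(42)

def mean_of_sample(n):
    """Mean of n draws from an exponential distribution with mean 1."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

def skewness(xs):
    """Standardized third moment; 0 for a true Gaussian."""
    mu = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return statistics.fmean(((x - mu) / sd) ** 3 for x in xs)

for n in (2, 30, 1000):
    means = [mean_of_sample(n) for _ in range(5000)]
    # Theory says this skewness decays like 2/sqrt(n) for the exponential.
    print(f"n={n:5d}  skewness of sample means ~ {skewness(means):+.3f}")
```

The residual skewness at n = 2 versus n = 1000 is exactly the gap between the memorized "it's normal" shortcut and what the derivation actually licenses.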
https://mathworld.wolfram.com/CentralLimitTheorem.html

Lots of "ifs." And BTW, I still have my hardcopy of Abramowitz and Stegun's Handbook of Mathematical Functions for those pesky Fourier transforms.
The problem with "imagining" the below... The results, theorems, and simplified formulas of descriptive statistics (and other approximations) are, as I said, what you get AFTER you take calculus-based mathematical stats to infinity--in order to create a perfect, closed, conceptually descriptive model. That is not up for debate.
That statement is missing all the assumptions that are baked into what is needed for a normal distribution.
So, in simpler form: can an average of 6 numbers be informative if it is 5 zeroes and a 10? Probably not. And if you look at why, it comes back to the calculus-based derivation of the bell/normal distribution: the only way to satisfy the concept exactly is with a perfect set taken to infinity.
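To make that concrete, here is the arithmetic on that exact toy sample (my own worked example). The mean exists, but the textbook "mean plus or minus two standard errors" summary--which leans on the sampling distribution being roughly normal--produces an interval reaching into negative territory that no value in the data could ever occupy:

```python
import statistics

# The toy sample: five zeroes and a ten.
data = [0, 0, 0, 0, 0, 10]

mean = statistics.fmean(data)     # 10/6, about 1.667
sd = statistics.stdev(data)       # sample standard deviation, about 4.08
sem = sd / len(data) ** 0.5       # standard error of the mean

# A textbook "mean +/- 2 SE" interval assumes the sampling distribution
# of the mean is roughly normal -- an assumption this sample violates badly.
lo, hi = mean - 2 * sem, mean + 2 * sem
print(f"mean = {mean:.3f}, sd = {sd:.3f}")
print(f"naive 'normal' interval: ({lo:.3f}, {hi:.3f})")
```

The lower end of that interval is negative even though every observation is nonnegative: the formula is fine, but the normality assumption behind it was never satisfied.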