Except in the data that I posted, which shows exactly that. Since I am not interested in needless internet arguments, I was curious about differences in data collection methodologies and looked at the underlying data sources used to generate these figures. The New York Times Covid tracker number differs from the numbers you posted, but that is because the NY Times data also includes deaths reported to local governments, which makes the number you posted a subset of it. Again, this is factual and not a matter of opinion.
There is a factual difference in methodology, which has led to slightly different absolute numbers representing similar trends. The whole point of responding to the 100 number was simply to point out that the original person you responded to and tried to correct had, in fact, shared a credible number from a source with good data practices. What ultimately matters here is the overall trend, which, as we should all agree by now, is not good.
The really important point here is that, when you're interested in understanding an unfamiliar dataset, the first steps you take are to understand how it was collected and how reliable it is. This is very different from what you do if you already know the point you want to make before you do the research: if you've concluded in advance that the number must be low, you find a source you agree with and post only that link.
There is a very large body of evidence that undercounting occurs, especially in the day-to-day data. This is because of time lags and (when it comes to cases) inadequate testing. There is a wide variety of scientific literature (not news media articles) that measures the degree of undercounting so that the public health response can be better managed. We know undercounting is a problem, while overcounting is not.
That is incorrect, yes.
"Reported" in this case means reported to government agencies/listed as cause of death/etc. not "reported" as in, "news reporter".