A lot of what I have been doing at work recently has been defending my conclusions. One of the tests we use heavily (it's the only one I CAN use with kids under 3 or with kids who are nonverbal) came out with a new edition last year. We just started using it this year, and none of my coworkers like it. Recently it came to my attention that many people feel the test is poorly normed: children who had language disorders were included in the sample of children used to establish how the "typical" child performs, with the result that the norms are skewed downward. So a child who is delayed - say, a three-year-old who speaks mostly in 1-2 word sentences - looks low average, and the numbers don't show a serious delay.

I have other ways of documenting a delay: describing how a child communicates, and calculating average sentence length and use of grammatical structures compared to expectations for the child's age. And luckily, there are published norms for average sentence length, so I have a "norm-referenced measure" to fall back on. But I seem to be spending a lot of my time describing how kids performed on that test and then explaining why the test is wrong. I can't help but think I couldn't have done that a few years ago - I wouldn't have had the insight that comes from seeing SO many kids in a testing setting. But I have to lay everything out now, for the future, for when some administrator comes after me asking why I placed a kid whose test numbers don't justify it.
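The sentence-length math above is simple enough to sketch. A caveat: clinically, mean length of utterance (MLU) is usually counted in morphemes, not whole words, and the age expectations and cutoff below are made-up placeholder values, not real norms - this is just an illustration of the arithmetic.

```python
def mean_length_of_utterance(utterances):
    """Average number of words per utterance in a language sample.

    Simplified: real MLU counts morphemes, not words.
    """
    if not utterances:
        raise ValueError("need at least one utterance")
    return sum(len(u.split()) for u in utterances) / len(utterances)

# Hypothetical age expectations (words per utterance) -- placeholder
# numbers standing in for a published norms table.
EXPECTED_MLU_BY_AGE = {2: 2.0, 3: 3.0, 4: 4.0}

def flag_delay(utterances, age_years, cutoff=0.75):
    """Flag a possible delay when the child's MLU falls well below the
    age expectation (here, below 75% of it -- an assumed cutoff)."""
    mlu = mean_length_of_utterance(utterances)
    return mlu < cutoff * EXPECTED_MLU_BY_AGE[age_years]

# A three-year-old speaking mostly in 1-2 word utterances:
sample = ["want cookie", "mommy go", "ball", "no juice"]
print(mean_length_of_utterance(sample))   # 1.75
print(flag_delay(sample, age_years=3))    # True -- below the (assumed) cutoff
```

The point of the comparison is exactly the one in the paragraph above: the language sample gives a number I can put next to an age norm, independent of the flawed test.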
And all of that is both math-heavy AND grammar-heavy. God help you if you don't know what a copula is.