The analysis of Assessment Rolls of major Jurisdictions requires advanced technical training and quantitative knowledge. It's laughable when a local staff reporter settles the score annually with a lengthy yet superficial article, the politicians run with it, and the unhappy taxpayers are silenced. And the cycle continues, year in and year out.
A recent local newspaper article cited the percent error rates "of the five largest cities for which studies have been completed in the last two years... including New York at 17.6 percent, Chicago at 25.1 percent, and Philadelphia at 20.2 percent. Houston's error rate was 7 percent in its most recent study, and Phoenix's was 8.1 percent."
Though Automated Valuation Modeling ("AVM") was used to develop all of the above Assessment Rolls ("Roll"), the error rates indicated above (generally defined by the Coefficient of Dispersion, or "COD," of the underlying AVM) are not comparable.
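Since the COD drives everything that follows, here is a minimal sketch of how it is computed under the standard ratio-study definition: the average absolute deviation from the median sales ratio (assessed value divided by sale price), expressed as a percent of that median. The ratios below are invented purely for illustration:

```python
# Coefficient of Dispersion (COD), per the standard ratio-study definition:
# ratio = assessed value / sale price;
# COD = 100 * mean(|ratio - median ratio|) / median ratio.
# The sample ratios are made up -- not from any actual Roll.

from statistics import median

def cod(ratios):
    """Average absolute deviation from the median ratio, as a percent of it."""
    m = median(ratios)
    return 100.0 * sum(abs(r - m) for r in ratios) / (len(ratios) * m)

ratios = [0.82, 0.91, 0.95, 1.00, 1.04, 1.08, 1.21]
print(round(cod(ratios), 1))  # a lower COD means tighter, more uniform ratios
```

A perfectly uniform Roll (every parcel assessed at the same fraction of market value) would have a COD of zero; the dispersion, not the level, is what the COD measures.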
While there are general AVM guidelines, they are not standardized tests like the SAT or GRE. AVM development is highly subjective, depending mainly on the acumen of the in-house modeler(s) or the hired consultant. Since the actual models are not published, any external re-validation of those model CODs is even more subjective and circular.
So, why are the above CODs not comparable? Here are the fundamental reasons:
1. Sales Validation -- All market AVMs are developed from recent, arm's-length sales. Thus, all sales have to be validated, and then a random or stratified random sample of the arm's-length sales serves as the modeling sample. Of course, there is no hard science behind the sales validation process. Therefore, if Jurisdiction X considers all of its borderline cases arm's-length while Jurisdiction Y aggressively removes them from an identical universe, the resulting AVM of the former, ceteris paribus, will produce a higher COD than the latter's. Unfortunately, when local reporters compare the competing CODs, they have no idea how the respective jurisdictions validated the sales.
2. Sales Sampling -- From the universe of validated arm's-length sales, a sample properly representing the overall population is then derived. The sales sample must statistically "represent" the population, failing which the resulting AVM will be invalid, paving the way for a flawed Assessment Roll (statutorily, an Assessment Roll must be fair and equitable). Again, there is no hard-and-fast rule for extracting the sales sample. If Jurisdiction X restricts the representativeness test to the 1st-to-99th percentile range while Jurisdiction Y takes a laxer 5th-to-95th percentile approach, the AVM of X, ceteris paribus, will have a higher COD than Y's. Of course, the local reporters would not even know of this requirement, let alone perform the test.
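One simple way to operationalize a representativeness check is to compare decile cut points of the candidate sample against those of the full parcel population on a key attribute. The attribute (living area), the 10% tolerance, and the data below are all illustrative assumptions, not a prescribed standard:

```python
# Minimal representativeness check: compare decile cut points of a candidate
# sales sample against the full parcel population on one attribute (here,
# hypothetical living areas in square feet). Tolerance and data are invented.

from statistics import quantiles

def deciles(values):
    return quantiles(values, n=10)  # the nine decile cut points

def represents(sample, population, tol=0.10):
    """True if every sample decile is within tol (10%) of the population's."""
    return all(abs(s - p) <= tol * p
               for s, p in zip(deciles(sample), deciles(population)))

population = list(range(800, 4000, 10))   # stand-in for all parcels
sample_ok  = population[::7]              # spread across the whole range
sample_bad = population[:50]              # clustered at the low end

print(represents(sample_ok, population))   # evenly spread: passes
print(represents(sample_bad, population))  # low-end cluster: fails
```

A sample that fails a check like this would bias the model toward the over-represented stratum, which is exactly the flaw the statutory fair-and-equitable requirement is meant to prevent.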
3. Removal of Outliers -- As part of model optimization, a set of outliers is systematically identified and removed. While there are various methods to identify and remove outliers, the (sales) ratio percentile range is typical. Of course, some would use a very conservative range or approach, while others (those obsessed with better stats, i.e., lower CODs) would be more aggressive. Ceteris paribus, the modeler who conservatively removes only the outliers below the 1st percentile and above the 99th will have a much higher model COD than someone who aggressively removes everything below the 5th and above the 95th. Case in point: Chicago's 25.1 vs. Houston's 7. Unfortunately, the local reporters would try to justify both -- perhaps they already have -- without even knowing the underlying modeling criteria, as models are rarely published.
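The effect of the cut-off choice alone can be demonstrated on synthetic data: the identical set of ratios, trimmed at the 1st/99th vs. the 5th/95th percentile, reports two different CODs. Everything here (the normal noise, the seed, the trim rule) is an illustrative assumption:

```python
# How the outlier cut-off alone moves the COD: the same synthetic ratios,
# trimmed conservatively (1st/99th pct) vs. aggressively (5th/95th pct).

from statistics import median
import random

def cod(ratios):
    m = median(ratios)
    return 100.0 * sum(abs(r - m) for r in ratios) / (len(ratios) * m)

def trim(ratios, lo_pct, hi_pct):
    """Keep only ratios between the lo_pct-th and hi_pct-th percentiles."""
    s = sorted(ratios)
    lo = s[int(len(s) * lo_pct / 100)]
    hi = s[min(int(len(s) * hi_pct / 100), len(s) - 1)]
    return [r for r in ratios if lo <= r <= hi]

random.seed(42)
ratios = [random.gauss(1.0, 0.15) for _ in range(2000)]  # invented ratios

conservative = cod(trim(ratios, 1, 99))  # keeps more of the tails
aggressive   = cod(trim(ratios, 5, 95))  # discards them

print(round(conservative, 1), round(aggressive, 1))
```

Same underlying data, two defensible-sounding CODs -- which is why a headline COD comparison across jurisdictions is meaningless without the trimming criteria.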
4. Sub-market Modeling -- Many modelers and consultants build their AVMs bottom-up instead of the customary top-down. Here is an example of what bottom-up modeling means: Let's say the Roll is for the County as a whole, though the County comprises five Towns. If the modeling takes place at the Town level (bottom-up) instead of at the normal County level (top-down), the average of the Town-wise CODs will be lower than the customary top-down COD, even though the objective remains unchanged: to produce a fair and equitable County-wide Roll. The problem with this type of bottom-up modeling is that there will be significant noise along the Town lines, generating a considerable number of inconsistent values. Of course, the rush-to-approve local reporters would never know any of this, as those models are rarely made public; Jurisdictions even disregard FOIL requests by citing third-party software copyrights, etc.
5. Spatial Tests -- Irrespective of #4 above, publication of Town-wise results is not typical. Again, while the County-wide COD could be compliant, the Town-wise CODs could be far apart. If Town-1 is highly urban (requiring complex modeling, hence a higher COD) whereas Town-5 is highly suburban (involving easier modeling, thus a much lower COD), the CODs are expected to be quite different. Of course, the modeling criteria (sales sampling, outliers, etc.) must remain uniform across all Towns. Absent publication of the actual models, taxpayer advocacy groups must, at least, insist on CODs by major sub-markets (e.g., Towns), in addition to the system-wide COD. They must also insist on knowing whether the modeling criteria were uniform across all of the major sub-markets. Of course, the local reporters vouching for the Rolls would confidently do so without even knowing how the modeling had taken place.
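The spatial tabulation itself is trivial once the ratios are in hand: one system-wide COD plus a COD per sub-market. The Town labels and ratios below are hypothetical, chosen only to mimic a noisy urban market next to a tight suburban one:

```python
# Sub-market breakdown of a ratio study: a County-wide COD plus one COD per
# Town. Town names and ratios are hypothetical stand-ins.

from statistics import median

def cod(ratios):
    m = median(ratios)
    return 100.0 * sum(abs(r - m) for r in ratios) / (len(ratios) * m)

by_town = {
    "Town-1 (urban)":    [0.70, 0.85, 0.95, 1.05, 1.15, 1.30],  # noisy market
    "Town-5 (suburban)": [0.95, 0.98, 1.00, 1.02, 1.04, 1.05],  # tight market
}

county = [r for rs in by_town.values() for r in rs]
print("County-wide COD:", round(cod(county), 1))
for town, rs in by_town.items():
    print(town, "COD:", round(cod(rs), 1))
```

Note how the blended County-wide figure sits between the two Town figures and reveals nothing about either: this is precisely why the sub-market CODs must be demanded alongside the headline number.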
6. Equity Analysis -- A system-wide COD is just the beginning; it does not confirm that the Roll is fair and equitable. Let's assume that the reported COD is 15, which is compliant, a priori. Now, let's also assume that the unreported Town-wise average sales ratios range between 85 and 115. Since Rolls tend to be regressive, it's highly likely that the 85 average would pertain to the most affluent Town in the County, while the 115 would represent one of the middle-class Towns. In essence, the poor and middle-class neighborhoods perennially subsidize their wealthy counterparts. While the rich would make a big splash about their Roll values, they would be reticent when selling their homes at twice those same Roll values. Moreover, an average ratio of 85 does not mean that all homes in that Town are assessed at that level: its 1st-to-99th percentile range could run from 70 to 100 (generally wider), while the Town averaging 115 could have a 1st-to-99th range of 100 to 130. Now compare the homes at ratios of 70 to 75 with those at 125 to 130. The local reporters who boldly confirm the Rolls would be clueless about this regressivity.
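One standard equity check beyond the COD is the Price-Related Differential (PRD): the mean sales ratio divided by the sale-price-weighted mean ratio. Values noticeably above 1.00 flag regressivity, i.e., cheaper homes over-assessed relative to expensive ones. The assessments and prices below are invented to show a regressive pattern:

```python
# Price-Related Differential (PRD): mean ratio / weighted mean ratio.
# PRD well above 1.00 signals regressivity. Figures below are invented.

def prd(assessed, prices):
    ratios = [a / p for a, p in zip(assessed, prices)]
    mean_ratio = sum(ratios) / len(ratios)
    weighted_ratio = sum(assessed) / sum(prices)  # price-weighted mean ratio
    return mean_ratio / weighted_ratio

# Low-end homes assessed near 115% of price, high-end near 85%: regressive.
prices   = [200_000, 250_000, 300_000, 900_000, 1_000_000, 1_200_000]
assessed = [230_000, 285_000, 340_000, 780_000,   850_000, 1_010_000]

print(round(prd(assessed, prices), 3))  # above 1.0: the Roll is regressive
```

A taxpayer advocacy group armed with just the parcel-level ratios could run this check in minutes; no access to the proprietary model is needed.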
7. Data Maintenance -- Intra-Jurisdiction comparison: Sales are dressed and staged, so the sale data are inherently cleaner and more up-to-date than the unsold property data, thereby producing lower CODs for the modeling sample. Also, sold parcels with data inconsistencies fall off as model outliers, only to resurface when the model is applied to the population. It's a classic game of hide-and-seek unless those data errors are addressed before the model application. Of course, nobody knows what happens behind the curtain. Generally, the local MLS plays a significant role in (indirectly) forcing the Jurisdiction to keep the sale data up-to-date (obviously, sale data are easy pickings for the media and other interested groups). Inter-Jurisdiction comparison: Two adjoining Jurisdictions may have vastly different outlooks on managing the population data. One may be very proactive, while the other may be reactive, at best. Ceteris paribus, the lot fraction defective of the former's Roll would be significantly lower, generating far fewer tax appeals (an excellent metric to follow) than the latter's. Again, the local reporters confirming those Rolls would be clueless about these competing scenarios.
8. Model Testing -- The modelers and consultants who apply their draft models to mutually exclusive hold-out samples, ceteris paribus, will have sounder and more reliable Rolls than those who tend to skip this critical modeling step. This step helps identify the errors and inconsistencies in draft models -- from sample selection to outliers to optimization to spatial ratios and CODs -- often to the extent that the models get sent back and reworked from square one. The hold-out sample must have the same attributes as the modeling sample (and, in turn, as the population), so this test is one of the most established ways to finalize a model, leading to its successful application. Again, the Jurisdiction that methodically performs this step produces a sounder and more reliable Roll, with potentially far fewer tax appeals than its counterpart that boldly skips it. Of course, the local reporters confirming these Rolls would not know any of these crucial details.
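A hold-out test can be sketched in a few lines: split the validated sales into a modeling sample and a mutually exclusive hold-out, fit on the first, score ratios on the second. The "model" here is a deliberately toy median price-per-square-foot rate, and all data are synthetic; a real AVM would be far richer, but the discipline of the split is the point:

```python
# Minimal hold-out test: fit a toy $/sqft model on 300 sales, then compute
# ratios and the COD on a mutually exclusive 100-sale hold-out. All data and
# the model itself are illustrative stand-ins for a real AVM.

from statistics import median
import random

def cod(ratios):
    m = median(ratios)
    return 100.0 * sum(abs(r - m) for r in ratios) / (len(ratios) * m)

random.seed(7)
# (sqft, sale_price): price around $150/sqft with +/-10% noise
sales = [(s, 150 * s * random.uniform(0.9, 1.1))
         for s in random.sample(range(900, 3500), 400)]

random.shuffle(sales)
model_sample, hold_out = sales[:300], sales[300:]  # mutually exclusive

# "Fit": median price per square foot, using the modeling sample only.
rate = median(p / s for s, p in model_sample)

# Apply to the hold-out the model never saw, and study its ratios.
ratios = [(rate * s) / p for s, p in hold_out]
print("hold-out COD:", round(cod(ratios), 1))
```

If the hold-out COD or ratio pattern diverges materially from the modeling sample's, the draft model goes back for rework before it ever touches the Roll.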
9. Forward Sales Ratio Study -- A forward sales ratio study would be an ideal way to begin the Roll investigation process. For example, if the Roll was developed off the 2018 calendar-year sales, it could be tested against a set of forward sales ratios (comprising validated Q1/Q2-2019 sales, etc.). To bolster the size of the forward sales sample, seasoned listings could also be added. Once time-adjusted back to the valuation date, the forward sales ratio test should produce results that closely parallel the published Roll. Therefore, before rushing to hire expensive consultants, taxpayer advocacy groups should consider hiring local analysts to compile forward sales samples and run the ratio tests. The results must then be studied multi-dimensionally, meaning by major sub-markets, value ranges, non-waterfront vs. waterfront, non-GIS vs. GIS, etc. If the results turn out to be very different, a challenger AVM is in order. At that point, instead of hiring someone from the universe of so-called industry experts (who would not shoot themselves in the foot), an outside economic consulting firm would be preferable, as such a firm would provide real analysis along with a coordinated strategic action plan.
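The mechanics of the time adjustment are straightforward: each forward sale price is deflated back to the valuation date by a market trend factor before the ratio is taken. The flat 0.5%-per-month trend and every figure below are hypothetical assumptions; a real study would estimate the trend from the data:

```python
# Forward sales ratio test: deflate post-valuation-date sale prices back to
# the valuation date with an (assumed) flat monthly trend, then take ratios.
# The 0.5%/month trend and all figures are hypothetical.

from statistics import median

MONTHLY_TREND = 0.005  # assumed market appreciation per month

def time_adjust(price, months_after_valuation, trend=MONTHLY_TREND):
    """Deflate a forward sale price back to the valuation date."""
    return price / ((1 + trend) ** months_after_valuation)

# (assessed value on the Roll, forward sale price, months after valuation)
forward_sales = [
    (190_000, 205_000, 3),
    (240_000, 260_000, 4),
    (310_000, 330_000, 5),
    (450_000, 495_000, 6),
]

ratios = [a / time_adjust(p, m) for a, p, m in forward_sales]
print("median forward ratio:", round(median(ratios), 3))
```

A median forward ratio drifting well away from the Roll's claimed assessment level, especially in only some sub-markets, is the tell that justifies commissioning a challenger AVM.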
So, what is the solution? To minimize the damage done by the low-knowledge local reporters who rush to confirm the Roll (to please the ruling party), taxpayer advocacy groups must present their critical viewpoints via op-eds in competing papers and magazines.
There is no denying that well-respected billionaire businessmen like Warren Buffett and Sam Zell have written the print media off.
-Sid Som, MBA, MIM
President, Homequant, Inc.
homequant@gmail.com