[Image: comparable sales grid]
Comparing independently developed Automated Valuation Model (AVM) values to County Market Values (CMV) will point to the areas of failure, meaning over- and under-valued assessments on the tax roll. Often, the higher-value properties are under-assessed, while the lower-value properties are over-assessed. If the "AVM to CMV" comparison points in that direction, Property Tax Appeals Consultants ("consultants") must work up a small sample, using comps, to further authenticate the discovery. If the comps sample validates the discovery, consultants must pay special attention to that over-valued/over-assessed population segment.
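To make that check concrete, here is a minimal Python sketch of the AVM-to-CMV comparison. The parcel figures, the two-way value split, and the 1.0 benchmark are hypothetical illustrations, not a prescribed methodology.

```python
# A minimal sketch of the AVM-to-CMV check described above. All parcel
# values are invented for illustration.
from statistics import median

parcels = [
    # (avm_value, cmv) -- hypothetical figures
    (950_000, 800_000),
    (880_000, 760_000),
    (540_000, 535_000),
    (310_000, 345_000),
    (295_000, 330_000),
]

# Split the roll into lower- and higher-value halves by AVM value
by_value = sorted(parcels, key=lambda p: p[0])
half = len(by_value) // 2
low_half = [cmv / avm for avm, cmv in by_value[:half]]
high_half = [cmv / avm for avm, cmv in by_value[half:]]

# CMV/AVM well below 1.0 suggests under-assessment; above 1.0, over-assessment
print(f"median CMV/AVM, lower-value half : {median(low_half):.2f}")
print(f"median CMV/AVM, higher-value half: {median(high_half):.2f}")
# If the higher-value half sits well below 1.0 while the lower-value half
# sits above it, work up a small comps sample on that segment to confirm.
```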
If the same subject and sales population are provided to a group of concerned parties – from an Assessor to a Bank Appraiser to a Listing Agent offering a buyback guarantee to a traditional Listing Agent to a Buyer's Agent to an Appeals Consultant – one would be unpleasantly surprised by the outcome: each party could walk away with a materially different value for the same property.
The above table represents the sales recency method, meaning the most recent five comps (by sale date) are treated as the best ones. In this case, the lowest and the highest value comps showed up in the initial line-up and were therefore substituted with the ones waiting in line. Though this method produced the most compact value range (the upper bound was compacted downward), it also produced the lowest subject value, $360,340.
Therefore, if this comp sales analysis were used to cater to the target audience mentioned earlier, this is how the game would play out (a sketch follows the list below):
1. Assessors and Listing Agents (traditional) will be given the "distance" value (highest value).
2. Bank Appraisers and Listing Agents (buyback) will be given the "least adjustment" value (middle-of-the-road value).
3. Appeals Consultants and Buyer's Agents will be given the "sales recency" value (lowest value).
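Here is a minimal Python sketch of those three selection games on one hypothetical comps line-up. All dates, distances, net adjustments, and adjusted prices are invented for illustration.

```python
# Three comp-selection "games" on one hypothetical line-up: distance,
# least adjustment, and sales recency.
from datetime import date
from statistics import median

comps = [
    {"sale_date": date(2019, 7, 30), "dist_mi": 0.90, "net_adj": 0.12, "adj_price": 361_000},
    {"sale_date": date(2019, 7, 12), "dist_mi": 0.70, "net_adj": 0.09, "adj_price": 355_000},
    {"sale_date": date(2019, 6, 28), "dist_mi": 0.15, "net_adj": 0.02, "adj_price": 372_000},
    {"sale_date": date(2019, 6, 3),  "dist_mi": 0.25, "net_adj": 0.05, "adj_price": 368_000},
    {"sale_date": date(2019, 5, 20), "dist_mi": 0.40, "net_adj": 0.03, "adj_price": 381_000},
    {"sale_date": date(2019, 4, 8),  "dist_mi": 0.10, "net_adj": 0.15, "adj_price": 395_000},
    {"sale_date": date(2019, 2, 14), "dist_mi": 0.55, "net_adj": 0.04, "adj_price": 402_000},
]

def indicated_value(selected):
    """Indicated subject value = median adjusted price of the chosen comps."""
    return median(c["adj_price"] for c in selected)

def top5(key, newest_first=False):
    return sorted(comps, key=key, reverse=newest_first)[:5]

print("distance        :", indicated_value(top5(lambda c: c["dist_mi"])))
print("least adjustment:", indicated_value(top5(lambda c: abs(c["net_adj"]))))
print("sales recency   :", indicated_value(top5(lambda c: c["sale_date"], newest_first=True)))
```

On this invented line-up, the nearest five comps indicate the highest value, the least-adjusted five a middle value, and the most recent five the lowest, mirroring the ordering above.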
How to Reduce Subjectivity in Comp Sales
1. Apply meaningful selection, scoring/ranking, and adjustments to the sales population;
2. Build an AVM and insist on two AVM values (4th and 5th) in the comps line-up;
3. Verify all comps spatially, ensuring they all come from the same or, at least, compatible neighborhoods;
4. Apply time adjustments in line with the local market (national figures or adjustments could distort results);
5. Pay attention to valuation dates as 01-01-19 and 08-16-19 are different, often requiring additional adjustments;
6. While using sales recency, contract dates are preferred to closing dates (despite the industry norm);
7. If one is not allowed to use AVM values, one must show the AVM values below the comps grid with detailed value analysis;
8. If the sales population is large, a representative sample might be extracted from the most recent arms-length sales; and
9. If the subject population is large, automate the process with batching technology (batch comps), as sketched below.
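A minimal batch-comps sketch in Python (item 9): every subject is scored against the sales file and valued off its best five comps in one automated pass. The coordinates, the staleness/distance weights, and the prices are hypothetical assumptions, not an industry-standard formula.

```python
# Batch comps: rank the sales file per subject and value each subject
# off its five best-scoring comps. Data and weights are hypothetical.
from datetime import date
from math import hypot
from statistics import median

VAL_DATE = date(2019, 8, 16)

sales = [
    {"x": 0.2, "y": 0.3, "sale_date": date(2019, 7, 1),  "adj_price": 355_000},
    {"x": 0.8, "y": 0.1, "sale_date": date(2019, 6, 1),  "adj_price": 372_000},
    {"x": 0.5, "y": 0.9, "sale_date": date(2019, 4, 1),  "adj_price": 390_000},
    {"x": 0.1, "y": 0.7, "sale_date": date(2019, 2, 1),  "adj_price": 401_000},
    {"x": 0.9, "y": 0.8, "sale_date": date(2018, 12, 1), "adj_price": 344_000},
    {"x": 0.4, "y": 0.2, "sale_date": date(2019, 5, 1),  "adj_price": 366_000},
]

subjects = [
    {"pin": "001-0042", "x": 0.3, "y": 0.4},
    {"pin": "001-0107", "x": 0.7, "y": 0.6},
]

def comp_score(subj, sale):
    """Lower is better: staleness in months plus a weighted distance penalty."""
    months_stale = (VAL_DATE - sale["sale_date"]).days / 30.4
    distance = hypot(subj["x"] - sale["x"], subj["y"] - sale["y"])
    return months_stale + 10 * distance  # hypothetical weighting

for subj in subjects:
    best5 = sorted(sales, key=lambda s: comp_score(subj, s))[:5]
    value = median(c["adj_price"] for c in best5)
    print(f'{subj["pin"]} -> ${value:,.0f}')
```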
Many property re-assessments produce sub-par results or even fail miserably for a straightforward reason: jurisdictions often take the full-scale plunge without conducting meaningful pilots that would properly define the scale and scope of the actual event. A well-thought-out pilot could therefore save a ton of money and agony (public and political embarrassment, etc.) down the road.
1. Ideally, Conduct Residential and Commercial Re-assessments Concurrently. When they are run together, local governments are empowered to shift tax burdens across property groups (depending on the impact study). If concurrent re-assessment is statutorily required, taxpayer watch groups must fight the statute to decouple them, thus making the re-assessment a genuinely transparent, fair, and equitable exercise. If it is run concurrently nonetheless, the watch group must hire an independent consultant to review the impact study, both inter (across property groups) and intra (within the group). Should they find any inconsistency, they must share the results with the local media.
2. Hire an Econometric Consulting Firm to Run a Pilot. Running a meaningful pilot is one area where the private and public sectors tend to part ways. For example, instead of rushing into a full-scale (and expensive) marketing campaign, private companies tend to run a meaningful pilot first (i.e., with proper sampling, etc.), leading to the primary campaign, assuming, of course, that pilot results exceed expectations (just meeting expectations could force the project back into the mix of alternatives). Though the idea of pilot projects is not common in local governments, they must get into the practice of running pilots to avoid spending too much money at the back end on damage control. Since a well-constructed and properly run pilot stands in for the main event, a well-known econometric consulting firm must be engaged to do it justice, paving the way for a meaningful pilot and a reliable impact study.
3. Recollect the Exterior Data for the Pilot Project as if it were the Main Event. Before publishing the data collection manual, the consulting firm must undertake a local market significance study, thus zeroing in on the variables that significantly impact valuations in that particular market. Then, with the assistance of the consulting firm (e.g., in arriving at the actual sample, variable types, and the extent and use of technology), the assessing staff must recollect the exterior data on the pilot (a sampling sketch follows this list). In constructing the sample, it's prudent to ignore all incomplete and ongoing physical changes. Similarly, interior data collection is virtually meaningless for re-assessment, as interiors mostly represent lifestyle fixtures/personal property, not real property. While significant interior renovations and improvements must be captured and reflected via the "Overall Condition" variable, new indoor pools, porches, etc. should be separately coded to ease valuation (only if they show up in the study as significant market variables). The data collection process must be thoroughly documented so that it can be precisely duplicated during the main event.
4. Publish the Pilot Results, Emphasizing the Potential Tax Impact. Considering this is not the actual re-assessment, the results could be published immediately, with a series of outreach seminars to educate taxpayers on the potential impact of the future re-assessment. Even the taxpayers facing tax increases would be less hostile at this point, as they would be allowed a significant voice in reshaping the outcome. If the residential and commercial pilots are run concurrently, watch groups must carefully scrutinize the study, ensuring that tax burdens are not being irrationally shifted from one group to another, especially "inter," meaning from the commercial to the residential group. They must also study the equity within each group. That, of course, is the advantage of a meaningful front-ended pilot: it provides a platform for all the brainstorming before the fact.
5. Jurisdictions with Unfair Statutory Limitations must Work on Removing the Statute before Undertaking any Major Re-assessment. Hypothetically, if the state mandates that the county reimburse its taxing districts (e.g., towns) the amounts refunded to homeowners due to inaccurate property assessments, it would be prudent for the county administration to work with the state to remove this unfair mandate, or at least reduce the burden to a manageable annual limit (graduating to a total phase-out), before embarking on any significant re-assessment. Should this legislative effort fail, the county should seriously consider a decentralized assessment system rather than taking on monumental, unwarranted liability. In this example, under the decentralized system, towns would be responsible for their own assessments, while the county would continue to provide technical assistance, thus relieving the county of any potential refund liability.
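As a minimal sketch of the sampling step in item 3, here is one way to pull a stratified pilot sample from a parcel inventory so the pilot mirrors the main event. The strata, the 5% rate, and the field layout are hypothetical assumptions, not a prescribed methodology.

```python
# Stratified pilot sample: draw the same fraction from every
# property-group x value-band stratum. Inventory is synthetic.
import random

random.seed(42)  # make the pilot draw reproducible for documentation

# Hypothetical inventory: (parcel_id, property_group, value_band)
inventory = [
    (f"P{i:05d}",
     random.choice(["residential", "commercial"]),
     random.choice(["low", "mid", "high"]))
    for i in range(10_000)
]

def stratified_sample(parcels, rate=0.05):
    """Sample the same fraction from every group x value-band stratum."""
    strata = {}
    for p in parcels:
        strata.setdefault((p[1], p[2]), []).append(p)
    sample = []
    for _, members in sorted(strata.items()):
        k = max(1, round(len(members) * rate))
        sample.extend(random.sample(members, k))
    return sample

pilot = stratified_sample(inventory)
print(f"pilot size: {len(pilot)} of {len(inventory)} parcels")
```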
Again, a front-ended pilot would do immense good before the full-scale plunge.
[Image: Miami sales percentile curve]
As we know, not all segments of the market move in tandem. When the market starts to move up, it generally begins at the bottom of the value strata (starter homes) and graduates up the value ladder. Therefore, while analyzing a large sales population, it is prudent to use the entire percentile curve (as shown in the Miami graphic above) rather than just the Median, as the Median may mask the actual picture on both ends of the curve – say, below the 25th and above the 75th percentile, and more precisely below the 10th and above the 90th.
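A minimal Python sketch of profiling a sales file along the whole percentile curve instead of the Median alone; the synthetic log-normal prices are merely a stand-in for an actual (skewed) sales population.

```python
# Percentile-curve profile of a sales file. Prices are synthetic.
import random

random.seed(1)
sales = sorted(random.lognormvariate(12.8, 0.5) for _ in range(5_000))

def pctl(sorted_data, p):
    """Nearest-rank percentile of an already-sorted list."""
    idx = round(p / 100 * (len(sorted_data) - 1))
    return sorted_data[idx]

for p in (1, 5, 10, 25, 50, 75, 90, 95, 99):
    print(f"{p:>2}th percentile: ${pctl(sales, p):>10,.0f}")
```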
How to Analyze a Sales Population
1. Single Parameter – Instead of just one parameter (like the Median), it's better to consider the expanded percentile curve, preferably 1st percentile to 99th percentile, avoiding minimum and maximum as they may skew the picture as well.
2. Sample Selection – When confronted with all sales (meaning both arms-length and non-arms-length) and virtually no time to validate them, the 5th to 95th percentile sample is more meaningful, without having to spend time on manual validations. Conversely, if the sample comprises only arms-length sales, the 1st to 99th percentile range could be more meaningful.
3. Outlier Analysis – Accordingly, while studying outliers of a sample lacking validation, it's better to consider only the cases below the 5th and above the 95th percentile. Likewise, below the 1st and above the 99th could be a better starting point for a validated sample, gradually extending out to the outliers on both ends of the percentile curve (as time permits).
4. Sales Timeframe – When the timeframe is extended (9 to 24 months), sales must be time-adjusted, preferably at the monthly level (deriving monthly time factors). If the sample comprises 3-4 years of sales, quarterly adjustments make more statistical sense. When an extended series (e.g., 10+ years) is analyzed, annual factors would be appropriate. Most extended-series analyses are performed to detect seasonality in the data.
5. Growth Factors – As we all know, the residential market is as local as it gets. Therefore, a good sales analysis must additionally be broken down to the sub-market level as long as those sub-markets are well-established and accepted. Since growth rates vary by market, time adjustment factors must be derived at the sub-market level (e.g., 12% in our example for the City of Miami). Applying national or even regional factors could produce flawed and indefensible results. Time adjustment in AVM is generally different (to be discussed later).
6. Use of Median – Due to time constraints, if one has to choose one parameter to ascertain time's impact, it must be the Median, as it is less prone to outliers (outliers heavily influence the average, often distorting the analysis). In an event like that, the sales Median must be compared with the normalized (by Bldg SF) Median, ensuring they are close to each other.
[Image: Miami sales spatial distribution and stratified sales ratios]
7. Spatial Distribution – As part of the sales sampling, one must also ensure that the sales are spatially distributed in line with the population, so a meaningful spatial chart is in order alongside the data tables. In the above example, one must understand that the Median ASP and the Median Bldg SF are independently derived; they may be connected to get a general idea of the ASP/SF, but not for any serious analysis. To analyze the normalized ASP/SF, one needs to create the organic variable (row-wise ASP/SF) and then run the percentile stats.
8. Creating Sales Ratios – In the above example, in addition to the percentile analysis of sales, the distribution of sales ratios (the ratio of County Market Values to time-adjusted sales, or ASP) is shown, thus connecting the apples-to-apples dots; a sketch follows this list. The spatial chart additionally depicts the stratified sales ratios. Caution: while creating the sales ratios, one must time-adjust the sales to the valuation date (in this case, 01-01-2019), as the tax roll values are as of that date; otherwise it would be an apples-to-oranges comparison.
9. Regression Values – Ideally, ASP should be modeled (using multiple regression or any other industry-accepted methodology). The resulting sales ratios of the regression values (which are smoother and statistically more significant) should be used in all analyses, including defining and removing the model outliers. Regression values could also be used to challenge the tax roll market values. When there is a paucity of comps, such regression values could also proxy actual comps in a comparables grid.
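Here is a minimal sales ratio sketch in Python, tying items 4 and 8 together: each sale is rolled to the valuation date (01-01-2019) with a monthly time factor before the ratio of CMV to time-adjusted ASP is computed. The 12% annual growth rate echoes the Miami example above; the sale dates, ASPs, and CMVs are invented.

```python
# Time-adjusted sales ratio study: roll each sale to the valuation
# date at a monthly factor, then compute CMV / adjusted ASP.
from datetime import date
from statistics import median

VAL_DATE = date(2019, 1, 1)
MONTHLY_FACTOR = 1.12 ** (1 / 12)  # ~12% annual growth, compounded monthly

sales = [
    # (sale_date, asp, cmv) -- hypothetical
    (date(2018, 3, 15),  310_000, 290_000),
    (date(2018, 7, 2),   355_000, 330_000),
    (date(2018, 11, 20), 402_000, 365_000),
    (date(2019, 4, 10),  389_000, 340_000),
]

def time_adjust(sale_date, asp):
    """Roll the sale price to the valuation date at the monthly factor."""
    months = (VAL_DATE.year - sale_date.year) * 12 + VAL_DATE.month - sale_date.month
    return asp * MONTHLY_FACTOR ** months  # negative months discount post-date sales

ratios = sorted(cmv / time_adjust(d, asp) for d, asp, cmv in sales)
print(f"median sales ratio: {median(ratios):.3f}")
print(f"range: {ratios[0]:.3f} to {ratios[-1]:.3f}")
```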
In a nutshell, to get a better picture of the overall market, an expanded percentile distribution analysis of sales is significantly more meaningful than a simplistic median-based sales analysis. Additionally, normalized values and spatial sales ratios could provide better insight into the building blocks.
-Sid Som, MBA, MIM
homequant@gmail.com