Wednesday, October 28, 2020

How to use AVM to Discover Over-assessed Parcels on Tax Roll

Comparing independently developed Automated Valuation Model (AVM) values to County Market Values (CMV) will point to the areas of failure, meaning over- and under-valued assessments on the Tax Roll. Often, higher-value properties are under-assessed while lower-value properties are over-assessed. If the "AVM to CMV" comparison points in that direction, Property Tax Appeals Consultants ("consultants") must work up a small sample, using comps, to further authenticate the discovery. If the comps sample validates the discovery, consultants must pay special attention to that over-valued/over-assessed population segment.
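The screening step above can be sketched in a few lines. This is a minimal, hypothetical example: the parcel records, values, and the 10% screening threshold are illustrative, not drawn from any actual roll.

```python
# Hypothetical sketch: flag potentially over-assessed parcels by comparing
# County Market Values (CMV) to independently developed AVM values.
# Parcel IDs, values, and the 10% threshold are illustrative only.
parcels = [
    {"pid": "001", "cmv": 410_000, "avm": 350_000},  # CMV well above AVM
    {"pid": "002", "cmv": 300_000, "avm": 305_000},  # roughly at market
    {"pid": "003", "cmv": 180_000, "avm": 240_000},  # under-assessed
]

def over_assessed(parcels, threshold=1.10):
    """Return parcels whose CMV exceeds the AVM value by more than `threshold`."""
    flagged = []
    for p in parcels:
        ratio = p["cmv"] / p["avm"]
        if ratio > threshold:
            flagged.append({**p, "ratio": round(ratio, 3)})
    return flagged

candidates = over_assessed(parcels)   # only parcel "001" clears the screen
```

The flagged subset is what the consultant would then work up with a small comps sample, per the process described above.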

In choosing AVM Vendors, consultants must ensure that the AVM values are developed specifically for the Tax Status Date. If the Valuation Date (or the Tax Status Date, as the case may be) is 1-1-2019 but the AVM values were developed in June 2018, those values would produce a flawed picture when compared with the County values. Therefore, it is advisable to work with AVM Vendors that develop custom or specialized models for the Appeals industry.

Many AVM Vendors also sell Comps Reports. However, Appeals Consultants must be careful when working with specialized AVM Vendors who additionally tie their AVMs to their comps production. In other words, specialized AVM Vendors who use model coefficients to adjust their comps via a Comps Adjustment Matrix do not necessarily produce the most optimal comps reports, as AVMs (top-down) and comps reports (bottom-up) are diametrically opposite solutions. If a consultant is looking for a long-term AVM vendor, it is always worth asking whether they tie their comps reports to the model coefficients.

In the course of due diligence, consultants may ask for a sample Adjustment Matrix used in comps production. The sample itself will say a lot about the quality of the vendor's valuation process. For example, if the Comps Adjustment Matrix shows a 'Lot SF' coefficient of 0.10 (10 cents per Lot SF, presumably transferred from the regression model producing the AVM values), such adjustments would be difficult to explain to clients looking for self-explanatory comps. It would be a clear indication that the vendor is working with unqualified, makeshift modelers.
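A hedged back-of-the-envelope comparison shows why a raw model coefficient makes a poor comps adjustment. All figures here are hypothetical; the $2.00/SF "market-derived" rate is an assumption for illustration, not an industry standard.

```python
# Illustrative only: why a raw regression coefficient makes a poor comps
# adjustment. The $0.10/SF model coefficient and the $2.00/SF market-derived
# rate are hypothetical figures for demonstration.
subject_lot_sf = 17_400
comp_lot_sf = 12_400          # comp lot is 5,000 SF smaller

model_coefficient = 0.10      # $/SF lifted straight from an AVM regression
market_rate = 2.00            # $/SF supportable from, e.g., paired-sales analysis

adj_from_model = (subject_lot_sf - comp_lot_sf) * model_coefficient
adj_from_market = (subject_lot_sf - comp_lot_sf) * market_rate

print(adj_from_model)    # a $500 adjustment -- hard to defend to a client
print(adj_from_market)   # a $10,000 adjustment -- explainable on its face
```

A 5,000 SF lot difference producing a $500 adjustment is exactly the kind of line item a client looking for self-explanatory comps would question.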

While no AVM Vendor would be forthcoming to show their AVM models, they might share a sample Comps Adjustment Matrix. It could be telling!

Many consultants use free or low-cost home valuation sites to work up samples. Unlike the free brokerage sites, some of these sites are self-directed, allowing users to arrive at their own value conclusions. A few even offer a host of advanced features like subject simulation, comps selection, quantitative adjustments, distance matrix, time adjustments, flexible valuation dates, multiple ranking methods, interactive spatial interface, comps grid, value analysis, and an all-inclusive report. Some newer sites are mobile-friendly (so no additional apps are needed) and strictly top-down, providing "quick look" subject valuations. Of course, those who subscribe to the local MLS have access to comps solutions as well.

In any case, having access to a custom or specialized AVM will help consultants isolate the meaty cases from the Roll, thus narrowing the field. It makes no economic sense to go after cases that are valued at or below market.

-Sid Som
homequant@gmail.com


Introducing Low-Cost Data Analysis and Modeling Service

 

US Portfolios/Tax Jurisdictions Only

Our Free Home, Condo & Auto Valuations

Tuesday, October 27, 2020

AVM is a Market Solution, Comparable Sales Analysis isn't (Part 2 of 2)

If a subject and a sales population are provided to a group of concerned parties – from an Assessor, to a Bank Appraiser, to a Listing Agent offering a buyback guarantee, to a traditional Listing Agent, to a Buyer's Agent, to an Appeals Consultant – one would be unpleasantly surprised by the outcome.

They will pick different comps based on their professional requirements and objectives, leading to different, often very conflicting valuations. For instance, Assessors may not have the taxpayers' best interest at heart as they have to meet budgetary requirements, paving the way for counterparties like Appeals consultants. A Listing Agent looking to get an "exclusive" may not do well with a set of middle-of-the-road comps that a Buyer's Agent might be interested in. In other words, the selection of comps is a function of the hat the party wears, making the entire process highly subjective. AVM, on the other hand, is a reasonably scientific exercise. All variables interact with one another in an econometric equation and produce the resulting values. Therefore, all other factors remaining constant, two identical homes will have equal values – but not so in the world of the comparable sales analysis (aka, comp sales) as it is very party-specific.
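The point about identical homes can be illustrated with a toy sketch. The linear equation and its coefficients below are hypothetical, not an actual AVM specification; they stand in for the econometric equation described above.

```python
# A minimal sketch: in an AVM, value is a deterministic function of the
# property attributes, so two identical homes must receive identical values.
# The intercept and coefficients are hypothetical.
def avm_value(bldg_sf, lot_sf, bldg_age,
              intercept=40_000, b_bldg=95.0, b_lot=2.0, b_age=-450.0):
    """Toy linear (econometric-style) value equation."""
    return intercept + b_bldg * bldg_sf + b_lot * lot_sf + b_age * bldg_age

home_a = avm_value(3_250, 17_400, 26)
home_b = avm_value(3_250, 17_400, 26)  # identical attributes
assert home_a == home_b                # equal values, by construction
```

No such guarantee exists in comp sales, where two valuers handed the same subject can legitimately walk away with different comp sets and different values.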

Once the sales pool that closely represents the subject is scored correctly and quantitatively adjusted, it becomes comps. Generally, the five best comps are then selected to value a subject. Valuers tend to use one of the three standard methods – distance, least adjustments, and sales recency – to narrow their choices down to the five contributing comps.  

In this analysis, the subject home's attributes are Bldg SF = 3,250, Lot SF = 17,400, and Bldg Age = 26. An optimal pool of 10 comps was algorithmically produced from an extensive sales population to demonstrate how subjectivity plays a vital role in this valuation process. In each approach, the lowest-value ($308,770) and highest-value ($422,175) comps were removed.
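The three selection methods named above can be sketched with mock data. The comp records below (distances, adjustment totals, sale dates, values) are illustrative, not the article's ten comps, and the pool is trimmed to five for brevity.

```python
# A hedged sketch of the three standard comp-selection methods: distance,
# least adjustments, and sales recency. All comp records are mock data.
from datetime import date

comps = [
    {"id": 1, "dist_mi": 0.2, "total_adj": 18_000, "sale_date": date(2019, 9, 1)},
    {"id": 2, "dist_mi": 0.4, "total_adj":  6_000, "sale_date": date(2019, 3, 15)},
    {"id": 3, "dist_mi": 0.9, "total_adj": 12_000, "sale_date": date(2019, 11, 2)},
    {"id": 4, "dist_mi": 0.3, "total_adj": 25_000, "sale_date": date(2018, 12, 20)},
    {"id": 5, "dist_mi": 1.1, "total_adj":  4_000, "sale_date": date(2019, 7, 8)},
]

def pick(comps, key, reverse=False, n=3):
    """Rank the pool by one criterion and keep the n best comps
    (five in practice; three here, given the small mock pool)."""
    return [c["id"] for c in sorted(comps, key=key, reverse=reverse)[:n]]

by_distance  = pick(comps, key=lambda c: c["dist_mi"])                  # closest first
by_least_adj = pick(comps, key=lambda c: c["total_adj"])                # smallest adjustments
by_recency   = pick(comps, key=lambda c: c["sale_date"], reverse=True)  # newest first
```

Each criterion selects a different subset from the same pool, which is precisely how the three methods arrive at different subject values below.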




The above table represents the distance method, meaning the five comps closest to the subject were considered the best comps, producing a value range of $344,820 to $414,940 with a probable subject value of $388,775. Since least adjustments and sales recency were ignored here, several comps needing large adjustments, or with older sale dates, managed to creep in, making the process sub-optimal.
  



The above table represents the least-adjustment method, meaning the comps that required the least adjustments were considered the best comps. The least adjustment is essentially a balancing act: larger lots are compensated in value by smaller building sizes, smaller time adjustments proxy for older homes, and so on. For example, in the second least-adjusted comp (#6), a much smaller lot was offset by a larger, older building. The method also sacrificed one of the closest comps (#8). This method produced a lower subject value of $371,150.




The above table represents the sales recency method, meaning the five most recent comps (by sale date) are the best ones. This is where the lowest- and highest-value comps showed up in the initial line-up and were therefore substituted with the next comps in line. Though this method produced the most compact value range (the upper bound came down), it also produced the lowest subject value of $360,340.


Therefore, if this comp sales analysis were used to cater to the target audience mentioned earlier, this is how the game would play out:


1. Assessors and Listing Agents (traditional) will be given the "distance" value (highest value).


2. Bank Appraisers and Listing Agents (buyback) will be given the "least adjustment" value (middle-of-the-road value).


3. Appeals Consultants and Buyer's Agents will be given the "sales recency" value (lowest value).


How to Reduce Subjectivity in Comp Sales


1. Apply meaningful selection, scoring/ranking, and adjustments to the sales population;

2. Build an AVM and insist on two AVM values (4th and 5th) on comps line-up;

3. Verify all comps spatially, ensuring they all come from the same or, at least, compatible neighborhoods;

4. Apply time adjustments in line with the local market (using national figures or adjustments could distort results);

5. Pay attention to valuation dates as 01-01-19 and 08-16-19 are different, often requiring additional adjustments;

6. While using sales recency, contract dates are preferred to closing dates (despite the industry norm);

7. If one is not allowed to use AVM values, one must show the AVM values below the comps grid with detailed value analysis;

8. If the sales population is large, a representative sample might be extracted from the most recent arms-length sales; and

9. If the subject population is large, automate the process with batching technology (batch comps).
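Items 4 and 5 above can be sketched as a simple compounding time adjustment from the sale date to the valuation date. The 0.4% monthly appreciation rate is a hypothetical local figure, not a recommendation; in practice it must come from the local market.

```python
# A minimal sketch of local time adjustment: trend each comp's sale price
# from its sale date to the valuation date at a local monthly rate.
# The 0.4%/month rate is hypothetical.
from datetime import date

def months_between(d1, d2):
    """Whole-month difference from d1 to d2 (negative if d2 is earlier)."""
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)

def time_adjust(sale_price, sale_date, valuation_date, monthly_rate=0.004):
    """Compound the sale price forward (or backward) to the valuation date."""
    m = months_between(sale_date, valuation_date)
    return sale_price * (1 + monthly_rate) ** m

# A sale closed 08-16-2019 trended back to a 01-01-2019 valuation date:
adjusted = time_adjust(380_000, date(2019, 8, 16), date(2019, 1, 1))
```

Note how the same sale carries a different adjusted price for a 01-01 valuation date than for an 08-16 one, which is exactly the point of item 5.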


-Sid Som
homequant@gmail.com


AVM is a Market Solution, Comparable Sales Analysis isn't (Part 1 of 2)

 


In developing the analysis, the same sales population – derived from a single Zip Code – was used across all three graphs (of course, one may use other fixed locations like Census Tract, School District, etc.). Since all sales originated in the same Zip, the impact of location was minimized (though location can never be made totally irrelevant, as each block has a different appeal).

The above graph shows the noisy relationship between the uncorrected (raw) Sale Price and Bldg Size (Heated Living Area). The reason is straightforward: each sale is directly tied to a buyer's judgment, introducing a high level of subjectivity; for instance, buyers are paying between $100K and $250K for a 1,500 SF home. While investors would target the lower end of the range, informed buyers would be in the middle, and uninformed buyers (someone bent on buying a pink house!) would succumb to the higher end. As a result, the R-squared is extremely low (0.189), explaining very little of the variation in sale prices.
  



The Regression Value-1 graph shows that even a rudimentary regression model (with only three independent variables – Land SF, Bldg SF, and Bldg Age) can produce a decent market solution. The fit is significantly tighter, especially at the long end of the curve. The R-squared jumps from 0.19 to 0.91, accounting for 91% of the variation in sale prices. But this model has a bi-modal issue between 1,000 and 2,200 SF, where the regression values are forked. Such stacked values must be investigated for their underlying causes. One simple way to identify the issue is to scatter the normalized regression values against the other independent variables and look for possible explanations.
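The jump in R-squared can be reproduced in spirit on synthetic data. The pricing process, coefficients, and noise level below are invented for illustration; this is not the article's dataset, and the exact R-squared values will differ.

```python
# A hedged sketch: on synthetic sales where price is driven by Land SF,
# Bldg SF, and Bldg Age, a one-variable fit (price ~ Bldg SF) leaves much of
# the variation unexplained, while the three-variable fit does not.
import numpy as np

rng = np.random.default_rng(42)
n = 400
bldg_sf = rng.uniform(1_000, 4_000, n)
land_sf = rng.uniform(5_000, 20_000, n)
bldg_age = rng.uniform(0, 60, n)
# Invented pricing process (all coefficients hypothetical):
price = (30_000 + 60 * bldg_sf + 8 * land_sf - 1_500 * bldg_age
         + rng.normal(0, 20_000, n))

def fitted(X, y):
    """OLS fitted values, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r2_raw = r_squared(price, fitted(bldg_sf[:, None], price))
r2_full = r_squared(price, fitted(np.column_stack([land_sf, bldg_sf, bldg_age]), price))
```

With Bldg SF alone, a large share of the price variation reads as "noise"; adding the other two significant variables absorbs most of it, mirroring the 0.19-to-0.91 jump described above.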





The above investigation points to the solution. When the normalized regression values from the first model were scattered against the Bldg Age variable (above graph), it became evident that many buyers were paying a premium for younger homes, causing the stack. A sizeable portion of those buyers was willing to pay over $130/SF for younger homes, while very few offered such a premium for the older stock. More precisely, none paid over $160/SF for the older stock.



So the Bldg Age variable had to be transformed from continuous to binary (younger homes vs. the rest). Re-running the regression model with the transformed Bldg Age produced the above (Regression Value-2) graph. The value fork has disappeared, translating to a much tighter fit, with an improved R-squared, a lower intercept, and a steeper slope approaching 45 degrees.
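The transformation itself can be sketched on synthetic data. The 15-year cutoff, the coefficients, and the lump-sum premium are hypothetical assumptions standing in for whatever the market study reveals.

```python
# A hedged sketch: when the market pays a step premium for younger homes
# rather than a smooth per-year discount, a binary Bldg Age variable fits
# better than a continuous one. The 15-year cutoff and all coefficients
# are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 500
bldg_sf = rng.uniform(1_000, 3_500, n)
bldg_age = rng.uniform(0, 60, n)
younger = (bldg_age < 15).astype(float)    # continuous age -> binary dummy
# Invented pricing process: a lump-sum premium for younger homes, which is
# exactly what forks a continuous-age fit.
price = 50_000 + 100 * bldg_sf + 60_000 * younger + rng.normal(0, 15_000, n)

def r_squared(X, y):
    """R-squared of an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_continuous = r_squared(np.column_stack([bldg_sf, bldg_age]), price)  # forked
r2_binary = r_squared(np.column_stack([bldg_sf, younger]), price)       # fork removed
```

A linear age slope can only partially mimic a step premium, so the dummy specification tightens the fit, analogous to the improvement the Regression Value-2 graph shows.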

Stay safe!

-Sid Som
homequant@gmail.com


Benchmark AVM and CAMA Modeling Service

 

US Tax Jurisdictions Only


Monday, October 26, 2020

How to Conduct a Successful Re-assessment – Preemptively

Many property re-assessments produce sub-par results or even fail miserably for a straightforward reason: jurisdictions often take the full-scale plunge into the venture without conducting meaningful pilots that would properly define the scale and scope of the actual event. A well-thought-out pilot could therefore save a ton of money and agony (in terms of public and political embarrassment, etc.) down the road.


1. Ideally, Conduct Residential and Commercial Re-assessments Concurrently. When they are run together, local governments are empowered to shift tax burdens across property groups (depending on the impact study). If concurrent re-assessment is statutorily required, taxpayer watch groups must fight the statute to decouple them, thus making the re-assessment a genuinely transparent, fair, and equitable exercise. If it is run concurrently, the watch group must hire an independent consultant to review the impact study, both inter-group (across property groups) and intra-group (within each group). Should they find any inconsistency, they must share the results with the local media.


2. Hire an Econometric Consulting Firm to Run a Pilot. Running a meaningful pilot is one area where the private and public sectors tend to part ways. For example, instead of rushing into a full-scale (and expensive) marketing campaign, private companies tend to run a meaningful pilot first (i.e., with proper sampling, etc.), leading to the primary campaign, assuming, of course, that pilot results exceed expectations (just meeting expectations could force the project back into the mix of alternatives). Though pilot projects are not common in local governments, they must get into the practice of running them to avoid spending too much money on back-end damage control. Since a well-constructed and properly run pilot represents the main event, a well-known econometric consulting firm should be retained, paving the way for a meaningful pilot and a reliable impact study.
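The "proper sampling" step above can be sketched as a stratified draw, so that every property class is represented in the pilot in proportion to its share of the roll. The class labels and the 10% sampling fraction below are illustrative assumptions.

```python
# A minimal sketch of stratified sampling for a re-assessment pilot.
# Property-class labels and the 10% fraction are hypothetical.
import random

def stratified_sample(parcels, key, fraction, seed=1):
    """Sample `fraction` of parcels within each stratum defined by `key`."""
    rng = random.Random(seed)          # fixed seed keeps the pilot reproducible
    strata = {}
    for p in parcels:
        strata.setdefault(p[key], []).append(p)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))   # at least one per stratum
        sample.extend(rng.sample(group, k))
    return sample

# Mock roll: 75 residential and 25 commercial parcels.
parcels = [{"pid": i, "cls": "res" if i % 4 else "com"} for i in range(1, 101)]
pilot = stratified_sample(parcels, key="cls", fraction=0.10)
```

Because each stratum is sampled separately, a small class (here, commercial) cannot be accidentally left out of the pilot the way it could be under a simple random draw.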


3. Recollect the Exterior Data for the Pilot Project as if It Were the Main Event. Before publishing the data collection manual, the consulting firm must undertake a local market significance study, zeroing in on the variables that significantly impact valuations in that particular market. Then, with the assistance of the consulting firm (e.g., in arriving at the actual sample, variable types, and the extent and use of technology), the assessing staff must recollect the exterior data for the pilot. In constructing the sample, it is prudent to ignore all incomplete and ongoing physical changes. Interior data collection, by contrast, is virtually meaningless for re-assessment, as it mostly represents lifestyle fixtures/personal property, not real property. While significant interior renovations and improvements must be captured and reflected via the "Overall Condition" variable, new indoor pools, porches, etc. should be coded separately to ease valuation (only if the study shows them to be significant market variables). The data collection process must be thoroughly documented so that it can be precisely duplicated during the main event.


4. Publish the Pilot Results, Emphasizing the Potential Tax Impact. Since this is not the actual re-assessment, the results could be published immediately, with a series of outreach seminars to educate taxpayers on the potential impact of the future re-assessment. Even taxpayers facing tax increases would be less hostile at this point, as they would be allowed a significant voice in reshaping the outcome. If the residential and commercial pilots are run concurrently, watch groups must carefully scrutinize the study, ensuring that tax burdens are not being irrationally shifted from one group to another, especially inter-group, meaning from commercial to residential. They must also study the equity within each group. That is the advantage of a meaningful, front-ended pilot: it provides a platform for all the brainstorming before the fact.


5. Jurisdictions with Unfair Statutory Limitations Must Work on Removing the Statute before Undertaking any Major Re-assessment. Hypothetically, if the state mandates that the county reimburse its taxing districts (e.g., towns) the amounts refunded to homeowners due to inaccurate property assessments, it would be prudent for the county administration to work with the state to remove this unfair mandate (or at least reduce the burden to a manageable annual limit, graduating to a total phase-out) before embarking on any significant re-assessment. Should this legislative effort fail, the county should seriously consider a decentralized assessment system rather than take on monumental, unwarranted liability. Under the decentralized system in this example, towns would be responsible for their own assessments, while the county would continue to provide technical assistance, thus relieving the county of any potential refund liability.


Again, a front-ended pilot would do immense good before the full-scale plunge.


-Sid Som
homequant@gmail.com