A subject I can never seem to get away from is the topic of brief, or as I call them, “Drive-Thru” Functional Behavior Assessments. In short, this is an abbreviated functional analysis, as commonly practiced in school districts, that frequently produces unreliable results because it is typically based on indirect measures of behavior (such as anecdotal reports and subjective rating scales). The professionals conducting functional analyses in school settings do not directly observe and record quantifiable dimensions of the behavior of significance. I find this inherently problematic. I have actually witnessed occasions when the assessor never even observed the child’s behavior prior to writing the FBA summary!
A Functional Behavior Assessment (FBA) is intended to be a document that guides educators in making data-based decisions about how to help a youngster become more socially and academically successful in school. Too frequently, however, the intent of an FBA and the reality of an FBA are incompatible. Oftentimes an FBA becomes a formality: in the end, it is paperwork that serves no real purpose, especially when it comes to helping a child.
I guess this comes back to convenience. Indirect measures of behavior provide quick positive reinforcement for the staff conducting the “assessment”: a product is produced fast. Tightening budgets mean fewer staff, and the staff who remain have more responsibilities and less time. As a result, conducting a thorough functional analysis, which would lead to a more reliable diagnosis of function (and in turn a more accurately targeted intervention), gets thrown out the window. It is a smaller-sooner way of operating: the cheaper, faster option wins because it gets a result. Unfortunately, it may not get us the correct result. In the long run we could end up “chasing our own tails,” hypothesizing one function for the behavior, implementing interventions that produce no improvement, and having to start all over again.
This leads to my next point: what is the cost? Let’s assume a table of collaborators has convened and determined that a behavior occurs as a function of seeking attention. Let’s also assume that this collaboration yields interventions and that the team agrees to implement them in an IEP meeting. What happens when the interventions are not working and the child does not make any progress? The IEP committee might reconvene and agree to change the intervention to some other empirically tested, peer-reviewed, yet arbitrarily selected intervention, because this intervention “works for most kids who display this behavior.”
This could repeat several times throughout a school year. That does not sound very cost- or time-effective. Intervention is not a bag of tricks where, if one trick doesn’t please the crowd, you just reach in and grab another one. Applied behavior analysis is more like a filing cabinet you open to systematically search for, and use, only the interventions that research has proven to work for the identified function.
I recently attended a two-day workshop by Dr. Brian Iwata. The research shows very clearly that descriptive functional analyses have very low reliability rates, yet we continue to use them. Only when we move to more experimental methods of functional analysis do we arrive at reliable determinations of behavioral function.
Recent research by Bloom (2011) and Jensen (2011) describes trial-based FBA procedures that reduce the time it takes to conduct direct functional analysis measures in schools. These procedures’ future applications are definitely worth investigating.