Accounting for the Republican Debates (Auditing Standard AU-C 530)

On September 1, 2015, CNN announced that it was amending the criteria used to determine which candidates would participate in the Republican debates to be held on September 16th.

Back in May, when the network first announced the original criteria, it said that it would take the top 10 candidates based on the average of specific polls conducted between July 16th and September 10th.  Under the amended criteria announced on September 1st, however, any candidate polling in the top 10 between August 7th and September 10th would also be included in the next debate.  And as of last Thursday night, when the network announced the final lineup, we can see that the change in criteria added one candidate: Carly Fiorina, the former CEO of HP.  So 11 candidates will participate in the debate instead of the originally anticipated 10.

Why did CNN change the criteria?  It said that the original expectation was that there would be approximately 15 polls during the predetermined time period, but as the window was closing, it turned out there were only five.  The limited number of polls risked making the sample less representative of recent shifts in popular opinion; Fiorina, for example, experienced a strong surge in popularity after the first Republican debate back in August.

So just how reasonable is it to use polls to determine the debate participants?  For the Fox News Network’s August 6th debate, five different polls were used.  And there was some controversy over the fact that the network didn’t publicly announce which five polls it would use until after the decision was made.  Because participation in the August 6th debate was capped at 10 candidates, the cutoff for that 10th slot happened to fall between two candidates, John Kasich and Rick Perry, who were actually within the polling margin of error of one another.

In fact, due to his objections to polls being used this way, Lee Miringoff, director of the Marist Institute for Public Opinion, a polling organization, said that Marist was intentionally limiting its questioning so that its poll would be excluded from Fox’s consideration.  He told McClatchy, the media conglomerate, that by relying on a tenth of a percentage point in the polling results, Fox was assuming a level of precision that didn’t even exist in the polling.  And he further said that candidates were timing public appearances and promotional advertisements to try to affect their standing specifically within Fox’s poll selection window in an attempt to make the cut, meaning that the polling, which is supposed to measure the results, actually ended up affecting the process.

Clearly there’s an issue.  How accurate is your measurement process when it starts affecting the outcome?  Statisticians, scientists, and other practitioners concern themselves with these and other questions whenever they try to develop an accurate representation of a population.  In auditing, the practice of using a sample from a population in order to make judgments about that population as a whole is an accepted approach.  As a matter of fact, auditing standard AU-C 530 specifically addresses this topic and can provide some insight into this polling problem.

Auditing Standard AU-C 530

Auditing Standard AU-C 530 says that auditors use sampling in order to draw conclusions about a population.  The standard defines sampling as the practice of selecting and evaluating less than 100 percent of a population in such a way that the conclusions reached about the selections are the same as would be reached if 100 percent of the population were tested.

But there are risks associated with sampling.  Sampling risk is defined as the risk that the conclusions reached about the sample differ from the ones that would be reached about the whole population.  If your sample leads you to believe that you have to perform more audit procedures than are really required, that reduces the efficiency of the audit; but if the sample leads you to believe that there isn’t a concern where one actually exists, that affects the effectiveness of your audit, a far more serious risk.  If you think there’s no misstatement when one is actually present, you could end up giving the wrong opinion on the financial statements.

Part of effective sampling is reducing sampling risk to a tolerable level, and the best way to do that is through careful planning and design of the sample.  When you’re designing a sample, it’s important to clearly define its purpose; that is, what conclusions are you trying to draw about the population?  Are you determining whether a balance is quantitatively misstated?  Are you testing some other characteristic of the items?

“Variables sampling” is the name used when testing the amount of quantitative misstatement in a population; conversely, “attributes sampling” is used to draw conclusions about some characteristic of a population, such as whether a control is functioning.  So it’s important to clearly define and document the purpose of the test and make sure that the population you’re sampling from is the appropriate population for drawing those conclusions.  And, of course, you have to make sure that the population is complete, that you’re sampling from the whole population.
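To make the distinction concrete, here’s a minimal sketch in Python; the invoice population, the sample size of 60, and the single planted misstatement are all hypothetical, chosen only for illustration:

```python
import random

# Hypothetical population of 500 invoices; one misstatement is planted
# purely for illustration.
invoices = [{"recorded": 100.0, "audited": 100.0} for _ in range(500)]
invoices[42]["audited"] -= 25.0

# Whether the sample happens to catch the planted misstatement is
# itself an instance of sampling risk.
sample = random.sample(invoices, 60)

# Variables sampling: project the dollar misstatement found in the
# sample onto the population as a whole.
sample_misstatement = sum(i["recorded"] - i["audited"] for i in sample)
projected_misstatement = sample_misstatement * (len(invoices) / len(sample))

# Attributes sampling: estimate the rate at which some characteristic
# (here, "the invoice was misstated") occurs in the population.
deviations = sum(1 for i in sample if i["recorded"] != i["audited"])
deviation_rate = deviations / len(sample)

print(f"Projected misstatement: ${projected_misstatement:,.2f}")
print(f"Sample deviation rate:  {deviation_rate:.1%}")
```

The same sample can support both kinds of conclusions; what differs is whether you’re projecting a dollar amount or estimating a rate.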

Properly defining and documenting the completeness of your population ahead of time can prevent headaches down the road.  For example, back in May, when Fox announced the qualifying criteria for its debate, the network said that it would use only those polls “conducted by major, nationally recognized organizations that use standard methodological techniques.”  But the network didn’t specify the techniques, and it didn’t specify which polls qualified.  This ambiguity led to some criticism later.

AU-C 530 further says that you need to determine the appropriate size of the sample so that you can reduce sampling risk to an acceptable level.  The more confidence needed in the sample, the larger the sample should be.  The expected rate of deviation or of quantitative misstatement in the population also factors into the equation.  In this commentary, we’ve noted that both Fox and CNN ended up using five polls to determine participants in the debates, although CNN anticipated that its sample would include 15 polls.  But really these are a sample of a sample, which introduces additional risk: each poll can use different criteria.  Some polls surveyed registered Republicans, whereas others included those who identified as Republican or those who were identified as “leaning” Republican, which is a considerably more subjective determination.
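As a rough illustration of how confidence and tolerable deviation drive sample size, here’s a minimal sketch based on the Poisson approximation that underlies many attributes-sampling tables.  It assumes zero expected deviations, and the confidence levels and tolerable rates shown are just examples, not prescriptions from the standard:

```python
import math

def attributes_sample_size(confidence: float, tolerable_rate: float) -> int:
    """Poisson approximation assuming zero expected deviations:
    reliability factor R = -ln(1 - confidence), and sample size
    n = R / tolerable deviation rate, rounded up.
    """
    reliability_factor = -math.log(1.0 - confidence)
    return math.ceil(reliability_factor / tolerable_rate)

# More confidence, or a lower tolerable deviation rate, demands a bigger sample.
print(attributes_sample_size(0.95, 0.05))  # 60
print(attributes_sample_size(0.99, 0.05))  # 93
print(attributes_sample_size(0.95, 0.02))  # 150
```

By this measure, a five-poll sample buys very little confidence, which echoes CNN’s concern about its shrinking window of polls.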

Finally, AU-C 530 says you have to make the selections in such a way that you address sampling risk as well.  Remember, the conclusions you draw about the sample should be the same as if you had tested the entire population, so you need a selection approach that yields a variety of items representative of the whole population.  Often a random selection process is used to accomplish this goal.
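Such a selection is trivial to implement; the sketch below draws 25 items from a hypothetical population of 1,000 invoice numbers, recording the seed so that the selection can be re-performed:

```python
import random

# Hypothetical population of invoice numbers.
population = [f"INV-{n:04d}" for n in range(1, 1001)]

random.seed(530)  # recording the seed lets a reviewer re-perform the selection
sample = random.sample(population, 25)  # every item has an equal chance

print(sample[:5])
```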

When Fox said that it was using only those polls with “standard methodological techniques,” many people interpreted this to mean a human making a phone call, rather than a robo-dialer or an internet questionnaire, because that traditional method is expected to achieve more accurate results.  However, with fewer and fewer people using home phones these days, and with the restrictions some states place on calling people’s cellphones, even those standard practices are being questioned for their effectiveness.

But all of these are technical or methodological issues raised by the use of sampling.  What about Miringoff’s criticism that polls simply shouldn’t be used for these purposes?  Among other issues, a problem arises when polls’ timing and techniques are changing the way a candidate conducts his or her campaign.  How can we really have confidence in the accuracy of our measurements when the measuring process itself changes the results?

The Heisenberg Uncertainty Principle

In science, when the measuring process changes the measurement, that’s called the “Observer Effect.”  That is, by observing some phenomenon, you change it.  For example, imagine that you’re baking a cake, and you want to test whether or not the cake is ready by sticking a toothpick in the middle and seeing if it comes out dry.  This is actually a really common and effective test.

There’s just one problem.  Each time you open up the oven to stick that toothpick into the middle of the cake, you’re letting a lot of heat out—not to mention the risk of burning your hand.  So if you do this several times, you’re actually slowing the baking process, and you’re causing the cake to be baked at a slightly lower temperature, which could lead to the exterior parts becoming dried out or to under-baking the middle.  Your measuring process is changing the process you’re trying to measure.

Werner Heisenberg famously described this problem in the 1920s.  He wasn’t talking about baking a cake, though.  He was talking about measuring properties of subatomic particles.  Heisenberg said that if you wanted to know where an electron was or how fast it was moving, then you had to obtain information about that electron.  And the least intrusive way of doing so was to fire a single particle of light, a photon, at that electron.  The photon would bounce off of the electron, and you could measure that bounce and know where the electron was or how fast it was moving.  The problem is that the more accurately you measure the location, the less accurately you measure the speed, and vice versa.  That’s because when you fire that photon at the electron, you’re knocking the electron out of its position and changing its speed.

It turns out that what Heisenberg was describing was not only a measurement problem (the Observer Effect) but actually a property of all subatomic particles, one that ended up becoming among the most bizarre and incomprehensible concepts in quantum mechanics.  Today we call it the Heisenberg Uncertainty Principle, and we know that it means that sometimes knowing more about one characteristic means knowing less about another.
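In its modern quantitative form, the principle puts a hard floor under the product of the two uncertainties.  In standard notation, where ℏ is the reduced Planck constant and σ denotes the uncertainty in each measurement, the relation reads:

```latex
% Position-momentum uncertainty relation: the product of the
% uncertainties in position (x) and momentum (p) can never
% fall below hbar/2.
\[
  \sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
\]
```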

Perhaps some of the criticism of using polls to determine which candidates participate in the debates is misplaced, because no matter what method we use to determine the participants, there will always be a risk associated with the methodology.  As accountants and auditors, we’re conscious of this risk, so we develop an expectation of the anticipated rate of deviation or misstatement.  Maybe the problem is that we, as the consumers of journalism, are not properly understanding the risks from the outset and setting an expectation for ourselves.

Perhaps a better criticism is that we’re placing too much confidence in the media and relying too heavily on its ability to make these editorial judgments in the first place.  Maybe we’re investing too much confidence in what’s on television when, in the words of Donald Trump himself, “Sometimes your best investments are the ones you don’t make.”

Accounting for Forrest Gump (Auditing Standard AU-C 520)


Today the topic of our Bridge the GAAP – Accounting Podcast is logical relationships.  We’re going to discuss the topic by building a bridge that connects the ideas of Forrest Gump (Tom Hanks), Auditing Standard AU-C 520, and synchronicity.

We start off by discussing the Davie-Brown Index, which is a survey conducted by a marketing agency in order to measure different attributes of celebrities and popular personalities, including trustworthiness.  Tom Hanks repeatedly shows up at the top of the list, raising the question “What’s so trustworthy about Forrest Gump?”

In an attempt to answer the question, we turn to the famous study performed by social psychologists Hamilton and Gifford, which introduced the term “illusory correlation” while explaining how we form illogical judgments and stereotypes.  As accountants, we need to understand this process because we so often rely on analytical procedures.

This conversation leads into a discussion of Auditing Standard AU-C 520, which addresses the auditor’s requirement to use analytical procedures near the end of an audit and discusses the requirements for using substantive analytical procedures as audit evidence.

The podcast ends by recalling Carl Jung’s work to develop the concept of synchronicity, a meaningful relationship between events that is other than causal.

View the Bridge the GAAP – Accounting Podcast in iTunes.

Accounting for Aliens (and Auditing Standard AU-C 240)


Today the topic of our accounting podcast is a little unconventional.  We’re going to trace the concept of the fraud triangle back into an unusual case study involving a message from another world.  By doing so, we will answer the question, What do an alien prophecy, Auditing Standard AU-C 240, and a billion-dollar Ponzi schemer all have in common?

Along the way we’ll discuss the objectives and requirements of Auditing Standard AU-C 240, and how it relates to the requirements established under Auditing Standard AU-C 315, which discusses the auditor’s requirement to understand the entity and its internal control.  We’ll discuss what fraud risk factors are and cover the documentation required by AU-C 240.

The podcast includes a discussion of Ponzi scheme fraudster Scott Rothstein, which is used to shed some light on the rationalization process.  With all the themes interwoven, a decent amount of psychology found its way into this week’s conversation.  I really enjoyed this podcast, and I hope you do too.

View the text of clarified auditing standard AU-C 240.

Auditing Standard AU-C 320: Materiality


Today the topic of our accounting podcast is Auditing Standard AU-C 320, which discusses Materiality.  We’ll answer such questions as, What is materiality?  How is it determined?  What has to be documented?

To provide some context to our conversation, we’ll relate the discussion to AU-C 200, which discusses the overall objectives and conduct of an audit, and we’ll discuss how materiality relates to audit risk.

View the text of clarified auditing standard AU-C 320.

Auditing Standard AU-C 315: Understanding the Entity and Risks


Today the topic of our podcast is Auditing Standard AU-C 315, which requires that the auditor understand the entity being audited and its internal control, and that the auditor identify risks at the financial statement level and at the relevant assertion level.  We’ll answer such questions as, What should we understand about the entity?  How should we perform our risk assessment?  What has to be documented?

To provide background to our conversation, we’ll relate the discussion to our previous two podcast episodes, which discussed the overall objectives and conduct of an audit as well as the requirements related to planning an audit.

View the text of clarified auditing standard AU-C 315.

Auditing Standard AU-C 300: Planning an Audit

Today the topic of our podcast is Auditing Standard AU-C 300, which discusses Audit Planning.  We’ll answer such questions as, Who is involved in planning an audit?  What are they doing?  What has to be documented?

To provide background to our conversation, we’ll relate the discussion to AU-C 200, which discusses the objectives and conduct of an audit.

View the text of clarified auditing standard AU-C 300.

Auditing Standard AU-C 200: Objectives and Conduct of an Audit


Today the topic of our podcast is Auditing Standard AU-C 200, which answers such questions as, Just what exactly is an audit?  What is meant by independence?  What is meant by “GAAS”?

As background to our conversation, we’ll also discuss what is meant by the phrase “clarified auditing standards.”

View the text of clarified auditing standard AU-C 200.