Audits as Signals
Those charged with enforcing laws, regulations, or rules of any sort (collectively “bureaus”) often require regulated entities or individuals (agents) to submit reports on their activities. Bureaus enforce compliance by auditing the reports and imposing punishments when misreporting is identified. For example, a bureau charged with enforcing environmental laws might require polluters to report whether they are in compliance. A bureau charged with enforcing occupational-safety rules might require companies to report accidents. In both cases, it is common practice to audit some fraction of reports and to impose penalties when underreporting is discovered. Similarly, tax administrators rely heavily on self-reporting of tax liability and audit only a fraction of reports. Prosecutors and police regularly seek self-reports from suspects in the form of confessions, offering a lower criminal sanction in exchange. Higher sanctions for failure to confess, if guilt is ultimately established, are akin to sanctions for underreporting. Contracts, commercial relationships, and personal relationships may rely on similar principles.
The apparent purpose of this type of enforcement system, which we will call a self-report audit (SRA) strategy, is to reduce enforcement costs. If only a fraction of reports have to be audited, costs may be lower than the alternative of directly monitoring a population. For this strategy to work, however, agents must have an incentive to send in informative reports. In some settings only biased reports can be expected. Reports of emissions, accidents, and income may be shaved downwards if agents suspect that there will be little expected cost to doing so. In many settings, however, appropriate incentives can elicit accurate reports. If agents know, for example, that an inaccurate report is likely to be detected and punished, they may send in accurate reports rather than face sanctions.
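The incentive logic above can be sketched numerically. The following is an illustrative model of our own construction, not one taken from the text: a risk-neutral agent reports truthfully when the expected sanction from misreporting (audit probability times fine) exceeds the gain from underreporting. All parameter names and values are hypothetical.

```python
def truthful_report_is_optimal(audit_prob: float, fine: float, gain: float) -> bool:
    """Return True if the expected sanction for misreporting
    outweighs the agent's gain from underreporting.

    Assumes a risk-neutral agent and a fine imposed whenever
    an audit detects the misreport (a simplifying assumption).
    """
    expected_penalty = audit_prob * fine
    return expected_penalty >= gain


# Auditing 20% of reports with a fine of 100 deters any
# underreporting worth less than 20 to the agent.
print(truthful_report_is_optimal(0.2, 100.0, 15.0))  # True: report accurately
print(truthful_report_is_optimal(0.2, 100.0, 25.0))  # False: shaving pays
```

The sketch also shows why beliefs matter in what follows: if the agent is unsure of `audit_prob`, it is the agent's *perceived* audit probability, not the true one, that determines whether the report is shaved.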
Most of the literature on auditing and self-reporting considers the case in which a regulated party has private information and the goal of the reporting system is to induce the individual to reveal that information. The regulated party is assumed to know the capabilities of the auditing bureau. Individuals, for example, are assumed to know the ability of the tax administrator to catch cheats. Criminals deciding whether to confess are assumed to know what information the government has against them and the likelihood of a successful prosecution. The world would be a simpler place, and law and economics of much less interest, if these information conditions were widely satisfied. We suspect, however, that in many cases the agent is unsure about the capabilities of the bureau, because the agent knows neither the auditing technology that the bureau possesses nor the information it already has.
In such cases, there is not just one information asymmetry, but two. Thus, we drop the assumption that the agent knows what the bureau knows and consider the enriched auditing problem. The agent is assumed to have private information about its behavior or type, but to have at best imperfect knowledge of the quality of the bureau’s auditing capability, which is private information to the bureau. For example, an individual seeking to hide assets in a foreign bank account has only a rough estimate of how likely the tax administrator is to find them. A polluter required to report environmental emissions is not certain whether the bureau can detect its emissions. Criminals deciding whether to confess may not know what evidence the government has against them.
How should the bureau set an SRA strategy when its auditing capabilities are private information, which agents can only infer? This additional asymmetry changes the game between the parties. If their auditing capabilities are public information, bureaus have no incentive to engage in costly strategic behavior. But if those capabilities are private information, bureaus will act strategically to convey or avoid conveying information about their capabilities. A bureau with good auditing capabilities might engage in costly signaling to convince agents of this fact. A bureau with weak capabilities might mimic its stronger peers to enhance its deterrent capability. In both cases, signaling and mimicking strategies have to consider how both agents and other bureau types will react. The strategies must operate in an equilibrium in which bureaus are strategic and agents try to infer information about the bureaus’ quality and then send in reports given the inferences the agents draw.
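The effect of pooling versus separating on deterrence can be illustrated with a toy calculation. This is a hypothetical numerical sketch, not a model from the text: a "strong" bureau audits with high probability, a "weak" one with low probability, and the agent misreports whenever the gain exceeds its *perceived* expected sanction. All parameters (the prior, audit probabilities, fine, and gain) are assumptions chosen for illustration.

```python
def misreports(perceived_audit_prob: float, fine: float, gain: float) -> bool:
    """Agent misreports when the gain exceeds the perceived expected sanction."""
    return gain > perceived_audit_prob * fine


prior_strong = 0.5           # agent's prior that the bureau is strong
p_strong, p_weak = 0.6, 0.1  # audit probabilities by bureau type
fine, gain = 100.0, 30.0     # sanction if caught; gain from underreporting

# Pooling: weak bureaus mimic strong ones, so agents cannot tell types
# apart and face every bureau with the prior-weighted average belief.
pooled_p = prior_strong * p_strong + (1 - prior_strong) * p_weak  # 0.35
print(misreports(pooled_p, fine, gain))   # False: even weak bureaus deter

# Separating: strong bureaus signal, so agents learn each bureau's type.
print(misreports(p_strong, fine, gain))   # False: strong bureau still deters
print(misreports(p_weak, fine, gain))     # True: weak bureau is exposed
```

With these illustrative numbers, pooling lets the weak bureau borrow the strong bureau's deterrent reputation, while separation strips it away, which is one way to see the cross-bureau externality that the next paragraph raises for policy makers.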
Policy makers have a strategic option as well. Sometimes they will want to encourage mimicking by weak bureaus to allow them to better enforce laws when auditing is difficult, but at other times they will want to encourage strong bureaus to differentiate themselves to aid their enforcement. Given that agents draw information about one bureau from the behavior of others, however, there will be cross-bureau externalities. Policy makers will have to consider overall strategies and appropriately balance the costs and benefits of mimicking versus signaling, essentially of pooling and separating equilibria.