A “Dirty” Approach to Efficient Revenue Forecasting

A “dirty forecast” refers to any forecast conducted where non-traditional, coincident indicators are included. These coincident indicators tell us about the behavior within an environment in the here and now rather than measure the environment itself. By focusing on behavior, dirty forecasts are able to pick up changes in the environment well before the changes become measurable outcomes. Dirty processes have been used for some time in economics, but their use in local government forecasting is relatively new. This paper explores this use by discussing what dirty forecasts are and how they can be used to obtain better, more efficient estimates of local government revenues and expenditures. This foundation is then demonstrated with a case study from the city of Seattle.

One of the greatest challenges in the public budgeting process is the establishment of the government's revenue or expenditure forecast (Mikesell, 2014; Tsay, 2005). Forecasts are a commonly used tool in the budgeting process due to the uncertainty and ambiguity of available resources. Governmental budgets are often prepared years in advance, at a time when the needs of the citizenry are unknown and the resources are not realized. To assist in the budget process, administrators and politicians rely upon forecast estimates to provide a framework, or baseline, of anticipated financial status upon which they can plan. Unfortunately, the conditions surrounding the future are unknown, and with this uncertainty comes a simple truth: we know forecasts will be wrong (Miller, 2005).
The goal of a successful forecaster is not to develop a perfect estimate, but rather to develop an efficient estimate that contains as little error as possible. This can be accomplished with the use of a "dirty forecast." A dirty forecast refers to any forecast conducted where non-traditional, coincident indicators are included (McDonald, 2013). These coincident indicators tell us about the behavior within an environment in the here and now rather than measure the environment itself. By focusing on behavior, dirty forecasts are able to pick up changes in the environment well before changes become measurable outcomes. This awareness allows public officials to make adjustments to the budget, thereby avoiding problems and transforming financial management practices from responsive to dynamic. Dirty forecasting is not a statistical technique per se, but instead a forecasting strategy that can be used to reduce forecasting error and achieve a higher degree of efficiency. Dirty processes have been used for some time in economics (Ginn, 2011), but their use in local government forecasting is relatively new (McDonald, 2013). This paper explores that use by discussing what dirty forecasts are and how they can be used to obtain better, more efficient estimates of local government revenues and expenditures. This is accomplished with a case study of Seattle, Washington from 1980 to 2009. By exploring Seattle's circumstance, community, and culture, dirty indicators are established, and a forecast of its revenues is estimated for the years 2010, 2011, and 2012. To investigate its efficiency, the dirty forecast is compared to an estimate derived from the city's established forecasting procedure and to the observed data.
The results of the analysis show that the model depicting the city's forecasting procedure does provide a good look at future revenues; however, the model is also full of error. On average, the city's model produced $59.1 million in error for the forecasted years. This error is reduced to $21.5 million, producing a more accurate estimate of revenue, with the dirty model that takes only the established dirty indicators into account. The best outcome for predicting Seattle's revenue came from a dirty-hybrid model, which takes both the city's process and the dirty indicators into account and produced an average error of only $8.5 million. Although dirty forecasting is not able to eliminate the error altogether, it is able to significantly reduce the error with the inclusion of only three dirty indicators. This supports the conclusion that dirty forecasting may be an effective tool in the budgeting process for local governments.

Forecasting in Public Budgeting
The process of forecasting is as much of an art as it is a science (Frank, 1993). The science of forecasting involves complex models to explain the conditions of a government and predict its future environment (Klay & Vonasek, 2008; Morgan, 2012). These models can be mathematically based, requiring complex statistical tools and large quantities of data, or they can be procedural, whereby a predetermined set of steps can be utilized to produce an estimate. Alternatively, the art of forecasting involves the ability to navigate the decision-making process to choose between competing forecasts and to alter organizational behavior based on the estimates (Frank, 1993; Klein, 1984; Schultz, 1984).
To conduct a forecast of a government's anticipated revenues or expenditures, forecasters must develop a model or procedure for their respective government (Kavanagh & Iglehart, 2012). This model will rely on historical data, typically annualized, upon which future levels may be based. The most commonly taught approach to developing a forecast model is an average value approach, whereby revenues and expenditures are expected to be path dependent (Finkler, 2010; Horgren, Harrison, & Oliver, 2011; Mikesell, 2014). In the average value approach, estimates are derived by increasing the previous year's revenues or expenditures by its average growth rate. According to Mikesell (2014), such a simple process provides a relatively efficient estimate due to the slow nature by which government programs change; however, efficiency relies upon the assumption that revenues and expenditures will always increase and the operating environment will remain on a consistent trajectory (see also Miller, 1991). A more complex approach to forecasting can also be used (Kirn, 2007). Through complexity, the forecaster can better account for the operating environment of a government, such as its economic and demographic characteristics (Frank, 1993; Mikesell, 2014; Morgan, 2012). This is typically accomplished through some form of regression analysis, but can also be achieved through the inclusion of the data's autoregressive properties or through the development of a system-wide model (Kirn, 2007). The benefit of complexity is that the more information included in the model, the better the coefficient of determination it will produce. However, complexity is not without cost. Simple approaches to forecasting, such as the average growth rate approach, are easy to teach, allowing the process to diffuse across the public sector. Alternatively, with complexity comes a required skill; and, the more complex the model, the more costly that skill set is to employ.
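As an illustration of the average value approach described above, the sketch below derives next year's estimate from a hypothetical revenue series. The function name and figures are illustrative, not drawn from any government's records:

```python
def average_growth_forecast(history):
    """Forecast the next period by growing the most recent value
    at the average historical growth rate (the 'average value'
    approach to revenue forecasting)."""
    growth_rates = [
        (history[i] / history[i - 1]) - 1.0
        for i in range(1, len(history))
    ]
    avg_growth = sum(growth_rates) / len(growth_rates)
    return history[-1] * (1.0 + avg_growth)

# Hypothetical annual revenues (millions of dollars)
revenues = [100.0, 104.0, 108.0, 113.0]
forecast = average_growth_forecast(revenues)
```

The simplicity is the point: the method assumes the past trajectory continues, which is exactly the path-dependence assumption the surrounding text describes.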
Regardless of the forecasting model selected, the estimates that it produces are, at best, an educated guess. The goal of the successful forecaster is not to develop a perfect estimate but rather to develop an estimate with as little error as possible (Frank, 1993). Although not frequently discussed, the presence of error can create significant financial difficulty for governments. For example, during the Great Recession, states and local governments budgeted according to their expectation of tax revenues. However, when realized revenues were less than anticipated, budget shortfalls ensued and governments were faced with a choice of raising taxes, reducing services, or issuing new debt (Gordon, 2012; Jonas, 2012).
Efforts to reduce error within the forecasting of public organizations have largely been quantitatively based. Statistically speaking, this is accomplished by improving measures of the goodness of fit (Xu, Kayser, & Holland, 2008), where forecasters base their decisions about model structure on the outcome of testing alternative arrangements using the most recent historical data available (see also Kavanagh & Iglehart, 2012). Qualitatively, forecasters have adopted a number of approaches to reducing error. One such approach is from Klay and Vonasek (2008), whose discussion of consensus forecasting highlights a negotiation process in the selection of estimates. In the consensus forecasting process, a panel of experts is assembled and encouraged to share their experience and opinions on forecasts until a dominantly accepted forecast emerges. The intent is to produce a more accurate estimate by taking into account the pooled knowledge and experience of the experts. Examples can be seen in a variety of government arenas, including the State of Florida's demographic estimates (Office of Economic and Demographic Research, 2011) and the Federal Reserve's forecast of the U.S. economy (Abolafia, 2004).
Research into quantitative and qualitative forecasting efforts has assisted in the reduction of error, but there is still room for improvement. Specifically, there is room for improvement in the way in which forecasts are conducted. A key cause of error is the reactionary nature of standard forecasting procedures. Standards from the Government Finance Officers Association recommend five years of monthly data (Kavanagh & Iglehart, 2012); however, some cyclical influences can take much longer for the pattern to become evident (Cooley & Prescott, 1995; Tsay, 2005). As a result, the budget process is reactive, requiring a noticeable change in the environment for its underlying forecast to be adjusted. The challenge in reducing forecast error is to transform the budget process from reactive to proactive, where the forecast can anticipate changes and allow the budget to be adjusted accordingly before the change becomes a problem.

What is Dirty Forecasting?
Within public administration, we typically treat problems in forecasting as a public sector issue. While public sector issues deserve public sector solutions, forecasting error is a problem for all disciplines that rely upon forecasting as a tool. None have gone as far in reducing the occurrence of error as researchers and practitioners within the field of economics.
The field of economics relies heavily upon forecasting techniques for estimates of future measures, such as GDP, investment, and unemployment. The traditional, statistical approach to forecasting has been based upon the identification, specification, and estimation of a single model (Bunn, 1996). Although variation does exist across the field, the tendency is to rely heavily upon established models and economic theory as a guide to the forecasting process (Bunn, 1996; Clements, 2002). Through this trend, forecast estimates gain a degree of legitimacy regardless of their efficiency. Statistically, the techniques developed within economics have reduced the amount of error, but they have been unable to eradicate it. In this regard, economic forecasting is similar to forecasting in public budgeting in that error is always present and it is the challenge of the forecaster to reduce or minimize that error.

Figure 1: Venn Diagram of a Dirty Forecast
One outcome of the work in economics is that the accuracy of a forecast may depend on whether its objective is to obtain long-term or short-term estimates. Long-term, the use of established models and theory to drive a forecast has produced estimates with minimal error, but this is likely the result of a regression to the mean (Fildes & Stekler, 2002). When the objective is to obtain short-term estimates, forecasts have demonstrated considerable error, frequently missing changes in the environment (Clements, 2002). For example, economic forecasts captured the trends of the 1970s, 1980s, and 1990s, but they failed to predict any of the recessions observed in the United States during that time (McNees, 1992a, 1992b). According to Clements (2002), a solution to short-term efficiency may be the introduction of societal coincidence indicators (see also Bunn, 1996).
The inclusion of a societal coincidence indicator in the forecasting process creates a non-traditional, or "dirty," forecast (McDonald, 2013). Such indicators rely upon a forecaster's judgment to capture the behavior of the environment and translate that behavior into usable measures. The expectation is that behavior is indicative of market conditions. When conditions change, or are expected to change, individuals adjust their behavior accordingly. By focusing on behavior, dirty forecasts are able to capture changes in the environment well before the changes become measurable outcomes. This creates a dynamic forecast that is predictive of change rather than responsive and allows forecasters and those who utilize forecasts to more easily make adjustments when circumstances change.
Dirty indicators are not intended to replace traditional measures. Rather, they are intended to complement traditional measures by capturing a share of the dependent variable not previously accounted for. This complementary relationship can be represented with a Venn diagram of the forecast, as demonstrated by Figure 1. Traditionally, a strong theoretical association between the dependent and independent variables of a forecast is desired. The theoretical link between the dirty indicators and their dependent variable is often murky, but they exhibit a high degree of face validity. That is, an indicator can be considered and included because it makes sense that a relationship might exist.
An example of dirty forecasting comes from Alan Greenspan. During his tenure as Chairman of the Federal Reserve, Greenspan was known to consider a variety of dirty measures to understand the behavior of the market and the direction that the economy was heading (Smick, 2008). Included in these measures was the production of cardboard boxes (see Dizard, 2007). Greenspan assumed that since most things utilized by the economy are placed into a cardboard box at some point in time, an increase in production would signal a forthcoming economic boost. Evidence of the indicator's relationship to GDP can be seen in Table 1, which provides the correlations of cardboard box production and gross private domestic investment (GPDI) with GDP for the United States from 1977 to 1997. Although the difference in the correlations may be minimal (about 98.3% for box production and 97.3% for GPDI), an accurate measure for GPDI can only be obtained several months to years after the time period of interest; however, box production can be observed in real time, providing an up-to-date picture of economic performance.
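The comparison underlying Table 1 is a simple Pearson correlation between each candidate indicator and GDP. A minimal sketch of that comparison follows, using made-up series rather than the actual box-production or GPDI data:

```python
def pearson_corr(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical series: a coincident indicator that tracks GDP closely
gdp   = [2.0, 2.1, 2.3, 2.2, 2.5, 2.7]   # trillions, made up
boxes = [1.0, 1.1, 1.2, 1.1, 1.3, 1.4]   # box production index, made up
r = pearson_corr(boxes, gdp)
```

A high correlation alone does not make a measure useful; the advantage the text identifies is timeliness, since box production is observable before GDP revisions settle.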
Other examples of established dirty measures include a lipstick indicator and a skirt-length indicator, both of which portray consumer confidence in the market (The Economist, 2009; van Baardwijk & Franses, 2010). The lipstick indicator suggests that people indulge in less-expensive luxury items when nervous about their future (Hill, Rodeheffer, Griskevicius, Durante, & White, 2012). An increase in lipstick sales suggests a lack of certainty about the economy and employment in the near future. The skirt-length indicator monitors the average length of hemlines in each year's new fashion lines, assuming that the shorter the hemline, the more confident consumers are in their economic position (Docherty & Hann, 1994).

Dirty Forecasting in Public Budgeting
Thus far, the discussion of non-traditional, dirty indicators and their utility has been limited to economics. Just as with economics, the goal of a forecaster in the budget process is to produce an estimate with as little error as possible. It might then be possible that the introduction of dirty forecasting to public budgeting can produce an outcome similar to that of economics: a more efficient budget estimation and a forecast that is proactive in nature rather than reactive.
The use of forecasting in the public budgeting process is frequently scripted: forecasters follow a set procedure established by the government or some other influential body (Department of the Treasury, 2013; Kavanagh & Iglehart, 2012). The intent of the script is to minimize user error and establish legitimacy for the estimates produced, but it does little in the way of minimizing the error of the forecast itself. By utilizing a "catch-all" forecasting procedure, we risk the introduction of more error through the assumption that the conditions of all governments are alike. For example, a catch-all process assumes the conditions influencing the budget of Chicago, IL are identical to the conditions influencing the budget of Woodville, FL.
The minimization of this error occurs by tailoring a forecast model and procedure to the government it estimates. This can be done through the adoption of dirty forecasting.
The behavioral focus of dirty indicators allows a forecast to be tailored to the population that the government represents, capturing what makes the city or county unique from others. For example, a city that is heavily reliant upon a sports team for tourism can often utilize the success rate of the team during the season as a predictor of tourism-related revenue and public safety needs. Not only would a tailored approach provide a better foundation for a forecast, but it provides a forecast that public officials and residents alike can relate to.
More important than tailoring a forecast is what the adoption of a dirty approach means for the transformation of financial management practices. When a budget is prepared using estimates made with standard forecasting procedures, it is difficult to adjust the budget when the underlying circumstances change. Evidence of such situations can be seen from the Great Recession. Prior to the onset of the recession, governments budgeted their expenditures with the assumption that a strong economy would continue (Jonas, 2012). By the start of the collapse, budgets were difficult to adjust around the change in expected revenue, as many programs were already underway or under contract. In some instances, such as the case of Saint Joseph County, Indiana, revenues were not realized until 18 months later, meaning the budget could not be adjusted as it had long since passed. Ultimately, their inability to adjust the budget caused many local governments to overspend after tax revenues came in under the estimates (Maguire, 2011; Martin, Levey, & Cawley, 2012).
The inclusion of a dirty indicator can address these problems by transforming the budget from a static process to a dynamic one. Just as the production of cardboard boxes signals a change in the economy before it can be measured with traditional variables, the inclusion of a dirty indicator in a revenue or expenditure forecast can signal a change in revenue well before revenues are realized. Examples of dirty indicators relevant to the budget process include the types of restaurants visited by the population and the mode by which houses are listed on the market. A change in the type of restaurant frequented by a population, from fine dining or middle-tier restaurants to more budget-minded family dining, could signal a change in household financial priorities that will impact the tax revenues a government receives and the public services desired from that government. A similar impact can be expected when there is an increase in houses listed on the market as for sale by owner instead of through a realtor. This awareness allows public officials to make adjustments to the budget, avoiding problems and transforming financial management practices from responsive to dynamic. Had local governments included dirty indicators such as the number of houses for sale by owner, then their budgets could have been adjusted in anticipation of less revenue.
While dirty forecasting provides utility in the above fashions, its greatest utility comes in the form of increased efficiency. No matter how well they are prepared, the estimates derived from a forecast will be wrong. The goal of a forecaster is to reduce this error. The more information we use in a forecast, the better able the forecast is to provide an estimate with a high degree of accuracy. Although traditional forecasting measures, such as income, population, and previously observed revenues, do provide a degree of explanation, they do not fully explain revenue. The more variables we include, the more we are able to explain. Ultimately, a more efficient financial process allows government officials to either reduce the tax burden placed upon residents or fund more programs and services.

Seattle, Washington
To demonstrate the utility of dirty forecasting, a case study of the city of Seattle, Washington is adopted. Seattle is the largest city in the Pacific Northwest, with a population of 608,660 in the city and 3,439,809 in its metropolitan statistical area (MSA) (U.S. Census Bureau, 2010).
Established in 1851, it is a charter city whose government takes the mayor-council form. Unlike most city councils, whose members are elected on a geographic or district basis, all nine members of Seattle's council are elected at-large. While all city offices are technically nonpartisan, an at-large, city-based focus has allowed a liberal political culture to become established. This political culture has led to a number of progressive policies, such as the legalization of gay marriage and recreational marijuana, as well as a ban on plastic shopping bags.
Seattle's geographic location has helped in establishing it as an economic hub. The Port of Seattle is one of the largest in the United States. Currently, four of 2013's Fortune 500 companies are based in Seattle (Amazon.com, Expeditors International of Washington, Nordstrom, and Starbucks) and another four are based in neighboring communities (Costco, Microsoft, Paccar, and Weyerhaeuser). The diversification of its business community and its position as an integral port to the United States led to Seattle becoming the 12th largest metropolitan economy in 2012, with a gross metropolitan product of $258.8 billion (Bureau of Economic Analysis, 2013b). The size and diversity of its economy has helped the city overcome the effects of the Great Recession with minimal loss.
The political and economic stability of the city makes it an ideal case study. In a stable environment, the white noise of outside shocks will be minimized, adding a degree of certainty and validity to the results of the dirty forecast.

Methodology
To establish dirty indicators for Seattle, a thorough understanding of its government, economy, and culture was needed. This was accomplished with a series of interviews using a snowball sampling process. Interviewees included public officials, professors and teachers, as well as members of the religious and non-profit community. Interviews were then supplemented with archival research from local publications when needed. Using information gathered from the interviews, a broad outline of city behavior was established and suitable measurements were pursued. When measures could be obtained, a dual approach was undertaken to determine the legitimacy of the measure as a dirty indicator. First, a causal path was drawn to clarify what the measure would indicate about the city's behavior and why. Second, a Granger causality test was conducted to verify its validity as a statistical indicator.
A Granger causality test is a test for statistically detecting the direction of causality (the cause-and-effect relationship) when there is a temporal lead-lag relationship between two variables. Developed by Granger (1969), the process of the test is simple: if past values of variable X contain information that helps forecast the current value of variable Y in a linear regression created from past values of X and Y, then the signal of X is said to "Granger cause" Y. Similarly, if the signal presented by variable Y can help forecast the value of X, then Y is said to "Granger cause" X. This relationship takes the standard bivariate form:

Y_t = a_0 + a_1*Y_(t-1) + ... + a_p*Y_(t-p) + b_1*X_(t-1) + ... + b_p*X_(t-p) + e_t

where Y is the revenue of the city of Seattle, X is representative of the measures of interest as previously identified from the interviews, and p is the lag length. To establish the appropriate lag structure of the potential indicator, the Akaike Information Criterion (AIC) is used. Created by Akaike (1973, 1974), the AIC selects among candidate lag lengths by balancing the fit of a model against its complexity: the fit across past values of a variable is maximized while the information lost to additional parameters is minimized.
The concept of causality is useful in establishing a dirty forecast because Granger non-causality is a necessary, but not wholly sufficient, condition for strong exogeneity. Thus, if the results of the test show that X Granger causes Y, then Y cannot be a strongly exogenous variable. The converse, however, is not true. If the evidence shows that X does not Granger cause Y, then that evidence cannot be used to conclude that Y is an exogenous variable. In order to conclude that one of the measures of interest is a dirty indicator, the findings must show unidirectional causality from the measure of interest to the revenue. That is, the hypothesis that the measure does not Granger cause the city's total revenue must be rejected, while simultaneously failing to reject the hypothesis that the revenue does not Granger cause the measure. Should the measure and the revenue Granger cause each other, or should the measure fail to Granger cause total government revenue, the measure is rejected as a dirty indicator.
The lag of a variable, as established by the AIC and utilized in the Granger causality test, establishes the preemptive nature of the dirty forecast. It clarifies the optimal point of predictability of a dirty indicator over time. For instance, if the AIC establishes a two-year lag structure for an indicator, then a change in the indicator's value can alert administrators to a change in revenue two years prior to the observed change in revenue. This preemptive nature provides public managers with the opportunity to adjust their policies in advance, in preparation for the change. Following the process established in this paper, 33 potential indicators were identified. Of these, consistent and reliable data were available for only 16, and only three showed the statistical relationship necessary to be counted as a dirty indicator. As intended with dirty forecasting, the indicators capture unique behaviors within and influencing the community and maintain a strong statistical relationship with the city of Seattle's total government revenue, despite the absence of a strong theoretical relationship. Summary statistics and the results of the Granger causality tests are provided in Tables 2 and 3.
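As a sketch of the testing procedure described above (synthetic data and simplified helper functions, not the study's actual series or code), a bivariate Granger F-test with an AIC-based lag search can be implemented as follows:

```python
import numpy as np

def lagged_design(y, x, p):
    """Align the target with p lags of each series."""
    n = len(y)
    target = y[p:]
    y_lags = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    x_lags = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    return target, y_lags, x_lags

def rss_and_k(target, X):
    """Residual sum of squares of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(target)), X])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid), X.shape[1]

def granger_f(y, x, p):
    """F statistic for H0: lags of x add nothing beyond lags of y."""
    target, y_lags, x_lags = lagged_design(y, x, p)
    rss_r, _ = rss_and_k(target, y_lags)                       # restricted
    rss_u, k = rss_and_k(target, np.hstack([y_lags, x_lags]))  # unrestricted
    n = len(target)
    return ((rss_r - rss_u) / p) / (rss_u / (n - k))

def aic_for_lag(y, x, p):
    """Gaussian AIC of the unrestricted model: fit vs. parameter count."""
    target, y_lags, x_lags = lagged_design(y, x, p)
    rss_u, k = rss_and_k(target, np.hstack([y_lags, x_lags]))
    n = len(target)
    return n * np.log(rss_u / n) + 2 * k

# Synthetic example: x leads y by two periods, as a dirty indicator might
rng = np.random.default_rng(0)
x = rng.normal(size=60)
y = np.zeros(60)
for t in range(2, 60):
    y[t] = 0.4 * y[t - 1] + 0.9 * x[t - 2] + 0.05 * rng.normal()

best_p = min([1, 2, 3], key=lambda p: aic_for_lag(y, x, p))
f_stat = granger_f(y, x, p=best_p)  # large F rejects non-causality
```

The F statistic would then be compared against an F distribution with (p, n - k) degrees of freedom; established routines such as those in statsmodels perform the same comparison with p-values attached.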
One consistent reference that emerged from the interviews was the coffee culture that surrounds the city. While Seattle is the corporate home to five coffee roasting companies, coffee is also a part of daily life, with an estimated 35 coffee shops per 100,000 residents. To capture the coffee culture, a number of potential indicators were considered, including coffee production and imports. Ultimately, only the domestic consumption of coffee showed the relationships necessary to be included as a dirty indicator. Measured in thousands of 60 kg bags and obtained from the U.S. Department of Agriculture, the variable coffee is believed to capture both the economic drive and the disposable income available within the community. When a household's circumstances change, it adjusts its behavior by reducing or eliminating the consumption of luxury goods. By monitoring coffee consumption, we can see if and when such a change occurs.
Based on the Granger causality test, coffee consumption is shown to "Granger cause" Seattle's total revenue with a lag of two years.
The second dirty indicator relates to the farmers market mentality of the city. Interviewees commented that engagement with the local markets is a common practice and that each market has its own character and dynamic that reflects the surrounding neighborhood. A farmers market may provide fresh produce, but the opportunity to attend can be difficult, as markets may be held at inconvenient times or locations. The decision of a household to attend a farmers market reflects several household features, including the time and opportunity to attend and the income to spend on fresh produce. As the conditions of the household change, so may its capacity for food purchases. Supermarkets are more conveniently located and offer hours of operation more conducive to busy schedules. They also offer a variety of products not available in a farmers market, which may reduce the total cost of spending on groceries or improve the ease with which a meal may be prepared. Therefore, large shifts in farming income may reflect a shift in the capacity of the household's time or income. This farmers market mentality, represented as the variable farm, is captured in this study through the non-corporate farming income earned within Seattle's MSA. Using data from the regional economic accounts of the U.S. Department of Commerce's Bureau of Economic Analysis, farming income is shown to Granger cause revenue with a two-year lag.
The final dirty indicator reflects the city's relationship with the music industry. Seattle has long been recognized for its role in the music industry, including the grunge scene of the 1990s and the independent movement of the 2000s. The technology industry located in the area contributed to this role with the development of new formats, such as CDs and digital, when the industry standard had been vinyl and 8-track. A shift in music distribution may signal a change in the economy of Seattle and the demand for its music-related products, but it may also reflect a change in how consumers are engaging with the market. Alternative forms of music delivery exist, such as reliance upon the radio, file sharing, or internet streaming. As consumers change how they engage the music market, they are likely to change their reliance upon other market services as well. This change in reliance may shift the collection of sales and other consumption taxes in Seattle. Seattle's relationship with the music industry is captured in the variable music, which represents the total millions of units of music distributed across all formats, as reported by the Recording Industry Association of America. However, unlike the previous dirty indicators that relied upon two-year lags, music relies upon a three-year lag.

Revenue Forecasts
Utilizing data from 1980 to 2009, three models of the city of Seattle's total revenue were estimated using ordinary least squares regression. The results of these regression analyses are provided in Table 4.
The first set of estimates is for the revenue forecasting model based on the process established by the city. According to interviews conducted with staff from the Department of Finance and Administrative Services, each stream of revenue is estimated independently, with the forecast of total revenue achieved by adding the streams together. The estimate of each stream is reached by its past values, producing a total revenue whose forecast is also based on its past values. The past values included in the estimate are limited to the availability of data. (For example, to forecast 2015's revenue in 2014, the most recent year of observed revenue is 2013.) The results show significance for both the two- and three-year lags, such that every dollar collected in the two-year lag forecasts $0.64 of revenue and the three-year lag forecasts $0.45 of revenue.
Although a simple process, the model is relatively strong, with an R² of 0.9619.
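The structure of this city-style model, regressing total revenue on its own two- and three-year lags, can be sketched as follows. The revenue series and coefficients below are fabricated for illustration, not the city's estimates:

```python
import numpy as np

def fit_lag_model(rev, lags=(2, 3)):
    """OLS fit of revenue on its own two- and three-year lags,
    mirroring the lag structure of the city-style model."""
    m = max(lags)
    y = rev[m:]
    X = np.column_stack(
        [np.ones(len(y))] + [rev[m - L:len(rev) - L] for L in lags]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, coefficient on lag 2, coefficient on lag 3]

def forecast_next(rev, beta):
    # With data observed only through year t-2 (e.g., forecasting 2015
    # in 2014 with revenues through 2013), the two-year lag is the most
    # recent observation and the three-year lag is the one before it.
    return beta[0] + beta[1] * rev[-1] + beta[2] * rev[-2]

# Fabricated revenue series built from a known lag relationship
rev = [100.0, 102.0, 105.0]
for t in range(3, 20):
    rev.append(10.0 + 0.6 * rev[t - 2] + 0.4 * rev[t - 3])
rev = np.array(rev)

beta = fit_lag_model(rev)
forecast_2yr = forecast_next(rev, beta)
```

Because the fabricated series obeys an exact lag relationship, the fit recovers the generating coefficients; on real data the residual error is what the paper reports as forecast error.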
The second set of estimates is for the dirty model. Here, the lags of the variables are again restricted by the availability of data. Domestic coffee consumption has the largest effect, with every 1,000 bags representing a change in the environment of Seattle capable of producing $13,501 in revenue. Income from non-corporate farms within Seattle's MSA also has an effect of $1.01 on total governmental revenue for each dollar earned. The final dirty measure is music distribution, which has both a two- and three-year lag. At the two-year lag, a million units of distributed music signals a change that is associated with a loss of $0.06 in revenue. This effect is reversed with the three-year lag, resulting in an increase of $0.57. The model also demonstrates considerable strength, with an R² of 0.9895.
The third, and final, set of estimates is for the hybrid model, which incorporates the features of the city and dirty models. When the dirty measures are taken into account, the two-year lag of total revenue remains statistically significant, but the three-year lag does not. Based on the results, every dollar of revenue collected at the two-year lag forecasts $0.71 of revenue. The dirty measures provide a much more interesting picture, with all of them maintaining their significance. According to the estimates, every 1,000 bags of coffee consumed is associated with a reduction in total revenue of $855.86, and every dollar of farm income with a reduction of $0.15. The total distribution of music shows a positive effect at the two-year lag, with every million units of distributed music signaling a change that is associated with an increase in total revenue of $0.06. At the three-year lag, the effect is reversed, resulting in a decrease of $0.11. This model maintains the strength of the previous models, explaining almost all variance with an R² of 0.9972.
Each of the three models presents a strong explanation of total revenue for the city of Seattle; however, to better understand the utility of dirty forecasting, comparisons across models can be drawn. This comparison comes in two parts: a look at the ability of the models to explain variation in revenue, and a forecast of revenue for each model across a number of years.
Beginning with the explanation of variation, all three models have strong R² values. The city's model has the lowest explanatory power, explaining 96.2% of all variation in revenue. At 98.9%, the explanatory power is improved with the dirty forecast. The hybrid forecast provides the greatest understanding of total revenue, at 99.7%. In modeling terms, the stronger the explanation, the stronger the model.
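For readers replicating the comparison, each R² figure is one minus the ratio of the residual to the total sum of squares. A minimal sketch, with hypothetical observed and fitted values:

```python
# Sketch: R-squared as 1 - SSR/SST, the statistic used to compare the
# three revenue models. The observed and fitted values are hypothetical.

def r_squared(observed, fitted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((o - mean) ** 2 for o in observed)       # total variation
    ss_res = sum((o - f) ** 2 for o, f in zip(observed, fitted))  # unexplained
    return 1.0 - ss_res / ss_tot

observed = [100.0, 110.0, 125.0, 140.0]   # hypothetical revenue, millions
fitted = [98.0, 112.0, 124.0, 141.0]      # hypothetical model output

print(round(r_squared(observed, fitted), 4))
```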
To better understand the explanatory power and the utility of the models, a forecast was drawn for the years 2010, 2011, and 2012. The results of these forecasts are presented in Table 5. At face value, the city's model appears to be a good resource for forecasting its revenue. On closer inspection, however, the model produces an average error of $59.1 million across the three years of forecasts. Conversely, the dirty forecast produces an average of only $21.5 million in error. The model that produces the best results, with the minimal error, is the hybrid model. Across the forecast period, the hybrid model produces an average error of $8.5 million.
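The comparison in Table 5 amounts to scoring each model by its average absolute error over the three forecast years. In the sketch below, the observed and forecast figures are hypothetical; only the scoring and ranking logic mirrors the comparison:

```python
# Sketch: score each model's 2010-2012 forecasts by average absolute error
# against observed revenue. All dollar figures here are hypothetical.

def mean_abs_error(observed, forecast):
    return sum(abs(o - f) for o, f in zip(observed, forecast)) / len(observed)

observed = [900.0, 940.0, 985.0]          # millions, hypothetical
forecasts = {
    "city": [960.0, 880.0, 1040.0],       # hypothetical yearly forecasts
    "dirty": [921.0, 962.0, 1006.0],
    "hybrid": [908.0, 949.0, 993.0],
}

# Rank models from smallest to largest average error.
ranked = sorted(forecasts, key=lambda m: mean_abs_error(observed, forecasts[m]))
```

With these hypothetical numbers the hybrid model ranks first, matching the ordering reported in the text.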

Summary and Conclusions
One of the greatest challenges in the public budgeting process is the establishment of the government's revenue or expenditure forecast. The forecasts utilized in the public budgeting process rely upon models to reduce their error and achieve greater efficiency, allowing public administrators to continue providing their existing programs and services, as well as to plan new ones. Yet, finding a model that minimizes error is not without difficulty. It has been the argument of this paper that the adoption of dirty forecasting processes and techniques can be of assistance in this area.
Dirty forecasting is a forecasting strategy that incorporates non-traditional, coincident indicators into the forecasting process with the goal of reducing error and improving overall efficiency (McDonald, 2013). These coincident indicators tell us about the behavior within an environment in the here and now rather than measure the environment itself. By focusing on behavior, dirty forecasts are able to indicate a forthcoming change in the environment well before the change becomes a measurable outcome. This awareness allows public officials to make adjustments to the budget, avoiding problems and transforming financial management practices from responsive to dynamic.
The utility of dirty forecasting was shown with a case study of Seattle, Washington. By adopting three dirty indicators, a more efficient forecast for the city's total revenue was established. When these indicators were incorporated into the city's existing process, a forecasting procedure was established that maximizes predictability. Moving forward, the behavior of these indicators in recent years should pose a concern for the city's administrators. Total distribution of music, for instance, has started to decline as consumers can access entertainment through other means (Owsinski, 2014). Similarly, domestic coffee consumption has begun to level off, with industry analysts discussing a shift in the volume and frequency of coffee consumption (National Coffee Association, 2014). When placed into the context of the dirty forecast, these changes indicate a future shift in Seattle's revenue in the next two to three years for which preparations should be made.
The case study of Seattle is not without its problems. A challenge with Seattle, and dirty forecasting in general, is the availability of data. A number of potential indicators were considered, such as the number and types of restaurants in the city (reflective of an individual's expectations on income) or the average transaction cost of gas at one of its pumps (reflective of economic desperation), but no record of these is kept. For others, the number of observations necessary for an appropriate analysis is unavailable. For example, Seattle is a highly educated city with a variety of educational choices, ranging from public elementary schools to elite preparatory academies. In a short-term analysis, school enrollments reduced forecasting error to within an average of $2,200. Unfortunately, the long-term data necessary for a complete analysis was unavailable at the time of publication.
Although this study shows the utility of dirty forecasting, data problems do present an ongoing challenge. The indicators important for one government are likely to differ from the indicators important to another, making a blanket recommendation on variables to collect difficult and costly. The advent of new technologies has improved data collection in recent years, but until the number of observations increases to adequate levels or data availability becomes more widespread, dirty forecasting can only be implemented on a case-by-case basis.
Once a set of dirty indicators is established, three models of Seattle's revenue are estimated. The first model follows the process established by the city for forecasting its revenue. The second relies solely upon the dirty indicators as predictors of revenue. The third model is a dirty hybrid that follows the established model but includes the dirty indicators. Next, the estimated models are utilized to forecast the revenues for 2010, 2011, and 2012. By comparing the forecasts to the observed revenue, the efficiency of the models is assessed. Central to both the Granger causality test and the model estimates are the data utilized. All data for the analysis are for the years 1980 through 2009. Data necessary to estimate Seattle's model, as well as total government revenue, are from the city's Department of Finance and Administrative Services. (Data sources for the dirty measures are discussed below.)
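The Granger causality test mentioned above can be sketched as a nested-regression F-test: an indicator "Granger-causes" revenue if adding its lags to a revenue autoregression significantly reduces the residual sum of squares. All series below are synthetic, so the resulting F statistic is purely illustrative:

```python
# Sketch: Granger-style test of whether an indicator's lag improves a
# revenue autoregression. Pure-Python OLS on synthetic data.

def fit_ssr(X, y):
    """OLS via the normal equations; returns the residual sum of squares."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):                      # Gaussian elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    coef = [0.0] * k
    for c in reversed(range(k)):            # back substitution
        coef[c] = (b[c] - sum(A[c][j] * coef[j] for j in range(c + 1, k))) / A[c][c]
    return sum((yi - sum(ci * xi for ci, xi in zip(coef, r))) ** 2
               for r, yi in zip(X, y))

# Synthetic data: revenue genuinely driven by the indicator's two-year lag.
indicator = [10 + (3 * t) % 7 for t in range(30)]
revenue = [200.0, 210.0]
for t in range(2, 30):
    revenue.append(0.8 * revenue[t - 1] + 2.0 * indicator[t - 2] + t % 3)

ts = range(2, 30)
y = [revenue[t] for t in ts]
restricted = [[1.0, revenue[t - 1], revenue[t - 2]] for t in ts]
unrestricted = [r + [float(indicator[t - 2])] for r, t in zip(restricted, ts)]

ssr_r = fit_ssr(restricted, y)
ssr_u = fit_ssr(unrestricted, y)
n, k, q = len(y), 4, 1                      # obs, unrestricted params, added lags
f_stat = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))
```

A large F statistic (relative to the F(q, n − k) critical value) indicates that the indicator's lag carries predictive information beyond revenue's own history, which is the screening logic behind selecting dirty indicators.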

Table 4: Estimates of Revenue Models
*Represented in thousands of dollars.