It is a reality in the business world that everything is becoming more data-driven. We get more and more data on customer preferences and behaviors, which we compile into DMPs (data management platforms) or into CRM (customer relationship management) databases. We augment our own (first-party) customer data with other companies’ (third-party) data, and we use advanced data analytics to find patterns, spot trends, improve targeting and evaluate outcomes.
In such a digital data-driven world, the decennial counting of noses known as the U.S. Census may seem irrelevant or outdated. But in fact, the data that the Census Bureau collects – both in its decennial count and in its annual American Community Survey (ACS) – have never been more important to business constituencies.
Nearly all of the commercial databases that allow businesses to be so “data driven” require benchmarking to solid population parameters. This is especially true in cases when we are trying to understand (or make claims about) the broad national population, but it is also true when we are merely trying to get a good fix on small or local market segments. Almost invariably, samples and databases need “truth sets” that serve as benchmarks to evaluate the quality of the dataset and provide a basis for statistical adjustments.
In simple cases, these adjustments are used to make the database more accurately reflect the age and gender profile of the target group. Here, the data from the decennial Census usually provide a definitive benchmark. But other database adjustments often get more complicated. For example, there is sometimes a need to balance the data on other characteristics (e.g., race, education, income, household characteristics, internet usage). In such cases, the more detailed descriptive data gathered in the Census Bureau's American Community Survey are more likely to be critical to the accuracy of commercial datasets.
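The kind of adjustment described above is often done by post-stratification weighting: respondents in cells that are over-represented relative to the Census benchmark are weighted down, and under-represented cells are weighted up. Here is a minimal sketch in Python; all the cell shares are hypothetical illustrations, not real ACS figures.

```python
# Post-stratification weighting sketch. The age/gender cells and all
# shares below are hypothetical, purely for illustration.

# Share of each age/gender cell in the commercial sample (hypothetical)
sample_share = {
    ("18-34", "F"): 0.30,
    ("18-34", "M"): 0.20,
    ("35+", "F"): 0.25,
    ("35+", "M"): 0.25,
}

# Share of each cell in the population, as a Census-style benchmark
# would supply it (hypothetical numbers, not real Census data)
census_share = {
    ("18-34", "F"): 0.22,
    ("18-34", "M"): 0.23,
    ("35+", "F"): 0.28,
    ("35+", "M"): 0.27,
}

# Each cell's weight is the ratio of its population share to its sample
# share, so the weighted sample matches the benchmark profile exactly.
weights = {cell: census_share[cell] / sample_share[cell]
           for cell in sample_share}

for cell, w in sorted(weights.items()):
    print(cell, round(w, 3))
```

After weighting, the sample's cell distribution reproduces the benchmark by construction; real adjustments (e.g., raking across many variables) are more elaborate, but rest on the same idea.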
Even gigantic databases often have biases that need to be corrected through statistical adjustments and modeling. Take, for example, the return-path data on TV viewing that flow to cable and satellite companies. These are massive datasets of passively collected behavioral data indicating which TV programs were capturing an audience at any given moment.
But being massive doesn’t mean these datasets give a complete or accurate picture of TV viewing reality. Often they miss key population segments that consume TV content through different platforms (e.g., internet or over-the-air broadcast). While TV audience measurement might once have been feasible through a single national measurement panel, fragmentation has increasingly pushed audience measurement toward the merging of giant data streams that come from different sources — each with its own peculiarities and blind spots. Modeling and statistical adjustments are necessary to get the best reflection of the TV viewing taking place at the national level, but also at much smaller units of analysis. And this modeling depends, to a large degree, on data collected by the Census Bureau.
The business need for reliable population benchmarks extends well beyond the context of media measurement. Businesses often need to forecast demand, test new products and services, and extrapolate from limited tests to broader markets. This frequently involves projecting from a sample to a known universe. However, it is ever more difficult and expensive to estimate the size and nature of those broader universes because of declining response rates and the tendency of some population sub-groups to opt out of cooperation with surveys and tests.
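In its simplest form, projecting from a sample to a universe is just multiplying a sample rate by a universe size — and the universe size is exactly the figure that Census data anchor. A quick sketch, with made-up numbers throughout:

```python
# Projecting a test-market result to a national universe. All figures
# are hypothetical; in practice the universe size would come from
# Census or ACS population estimates.

sample_size = 2_000            # people in the test-market sample
adopters_in_sample = 140       # people in the sample who bought the product
universe_size = 250_000_000    # adults in the target universe (hypothetical)

adoption_rate = adopters_in_sample / sample_size   # observed rate in the test
projected_demand = adoption_rate * universe_size   # scaled to the universe

print(f"Adoption rate in sample: {adoption_rate:.1%}")
print(f"Projected national demand: {projected_demand:,.0f}")
```

The fragility the article describes sits in `universe_size`: if the population benchmark is off, every downstream projection inherits that error.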
The paradox is that at the very moment universe estimates are becoming rarer and more difficult to obtain, businesses’ need to derive value from the layers of data integration happening in their DMPs is growing. The decennial Census and its annual ACS counterpart are the mother of all universe estimates – the most fundamental anchors underpinning so many marketing, media and CRM databases.
It is a bit like building a skyscraper: you need to build on a foundation deep in bedrock, not on sand. So this is not a good time for the U.S. Census to go wobbly. Yet that is the concern.
The Census Bureau usually devotes the first six years of each decade to planning the next decennial Census and then goes into overdrive in years 8 and 9 to test the innovations needed to locate and accurately count our 325-million-plus population. Typically, the annual appropriations jump in years 8 and 9 to fund those tests: for example, the Census Bureau budget went up by 96% from 1997 to 1998, and by 60% from 2007 to 2008.
But in 2014, Congress demanded that the 2020 Census cost no more than the 2010 Census (without any adjustment for inflation) – a constraint that caused the cancellation of some important tests targeted at hard-to-count populations both in rural areas and in central cities.
Upon taking office in 2017, the Trump administration cut the budget request for the Census Bureau by another 10%, and the Director of the Census Bureau, John Thompson, resigned. Since June, therefore, the Census Bureau has been operating without a Director. Because of “resource uncertainty,” the Census Bureau has missed some key IT deadlines, delayed the Economic Census scheduled for January 2018 by six months, and cut its 2018 test markets from three down to one. In the wake of all of this disruption, the Government Accountability Office recently designated the 2020 Census as a government program at “high risk” of failure.
To his credit, Commerce Secretary Wilbur Ross, who oversees the Census Bureau, recently went back to Congress to argue for an immediate supplement of $187 million to shore up the planning and testing required for 2018 – especially for the first-ever use of online forms for basic data collection. Ross also indicated to Congress that the 2014 stipulation of keeping spending flat to 2010 levels is not realistic and proposed spending that would ultimately, across the entire counting operation, amount to a $3.3 billion increase for the 2020 Census. Though critics insist that this still is insufficient, Ross’ budget request suggests that the problem is not a figment of liberal imaginations.
Historically, the U.S. Census has been conducted in a manner that is highly professional, apolitical, and as accurate as demographers know how to make it. At a time when American businesses need rock-solid benchmarks for their increasingly complex data integrations, we should ask for nothing less of the Congress and administration overseeing the 2020 Census.
Source – www.forbes.com/sites/scottmcdonald1/2017/12/06/danger-to-us-business-if-the-2020-census-flops/#18cb35d3249f