<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>predictions &#8211; Zócalo Public Square</title>
	<atom:link href="https://legacy.zocalopublicsquare.org/tag/predictions/feed/" rel="self" type="application/rss+xml" />
	<link>https://legacy.zocalopublicsquare.org</link>
	<description>Ideas Journalism With a Head and a Heart</description>
	<lastBuildDate>Mon, 21 Oct 2024 07:01:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>An Overconfident Public Learns the Limits of Predictive Technology</title>
		<link>https://legacy.zocalopublicsquare.org/2017/05/18/overconfident-public-learns-limits-technology/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/05/18/overconfident-public-learns-limits-technology/ideas/nexus/#comments</comments>
		<pubDate>Thu, 18 May 2017 07:01:50 +0000</pubDate>
		<dc:creator>By Van Savage</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[iPhone]]></category>
		<category><![CDATA[nexus]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=85548</guid>
		<description><![CDATA[<p>It’s dark outside and you’re bleary-eyed. You search for your phone and it reads 3:17 a.m. Your mind starts to wander: Why does my boss want to meet with me tomorrow? Did I forget to change the diaper on my baby and will I soon be awoken by crying and a wet bed? Will that fun, flirty date turn into something real? </p>
<p>You then use your smart phone to look through emails about the meeting, or to cyber-stalk your love interest. You’re searching for information that might help you predict the future. Depending on what answers you find, you might roll back to sleep relieved, or you might feel more anxious and migrate to news about sports or stocks or war, and wonder what the future holds for them. </p>
<p>In the morning, this reliance on smart phones as counselors and conduits of information continues: weather maps, GPS advice, predictions of </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/05/18/overconfident-public-learns-limits-technology/ideas/nexus/">An Overconfident Public Learns the Limits of Predictive Technology</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>It’s dark outside and you’re bleary-eyed. You search for your phone and it reads 3:17 a.m. Your mind starts to wander: Why does my boss want to meet with me tomorrow? Did I forget to change the diaper on my baby and will I soon be awoken by crying and a wet bed? Will that fun, flirty date turn into something real? </p>
<p>You then use your smart phone to look through emails about the meeting, or to cyber-stalk your love interest. You’re searching for information that might help you predict the future. Depending on what answers you find, you might roll back to sleep relieved, or you might feel more anxious and migrate to news about sports or stocks or war, and wonder what the future holds for them. </p>
<p>In the morning, this reliance on smart phones as counselors and conduits of information continues: weather maps, GPS advice, predictions of airplane ticket costs, and on and on. All of these apps are fancy front ends running on complicated predictive models that use a set of rules to process past and present information to give us little glimpses into the future. </p>
<p>But as someone who spends a lot of time working on predictive models of everything—from how climate change could affect the ways in which species interact, to why a whale sleeps much less than a mouse and is less likely to get cancer—I’m always curious about the level of confidence placed in these predictions. This is because the less intuitive flip side of our increased knowledge is that it can reveal how much we don’t know, point to how large our uncertainty is, and increase our anxiety. </p>
<p>Missing from many predictive models is certainty—or, perhaps more accurately, knowledge about their lack of certainty and how to use that in making decisions. The rare exception is the Weather app, which has uncertainty built into it: When the prediction is rain, it’ll tell you if the chance of rain is 10% or 90%, and that difference likely will affect how you dress for the day or whether you grab your umbrella. </p>
<p>But when your navigation system asks whether you want to follow a much more complicated route to save eight minutes on your morning commute, you’re not told if that estimate is being made with 10% certainty or 90%, depending on which lights you catch, how hard it is to turn left onto a busy street with no light, or whether other drivers divert to the same route. If the predicted gain in time is only put forward with 10% certainty, there might be a real possibility you’ll lose time following this new route. In my experience, the predicted eight-minute savings could actually mean I lose 15 minutes. </p>
<p>Of course, I live in the traffic jungle of Los Angeles, so maybe I’m giving the system too difficult a problem to solve. But in this case, and in countless others, I’d love to have the app give me the information I need in order to know how much confidence to put in its prediction. This isn’t how current consumer interfaces are set up, but maybe they should be. Estimates about uncertainty wouldn’t be perfect either, but they would give me a lot more information before I move across six lanes of traffic toward my exit.</p>
<p>One reason we seek predictive models and better technologies is because they promise to give us more control over the world and an increased feeling of security in uncertain futures. This desire for control and security is hardly a modern one. Our brains are encoded with all the fears and anxieties of generations, dating back to when early humans had to reckon with sabertooth cats and woolly mammoths. We have always needed to anticipate the world, in order to avoid being eaten, drowned, or falling off a cliff. </p>
<div class="pullquote"> Even when we have lots of data, we may not have the right data or the right perspective, especially for systems that we don&#8217;t really understand. Big data and machine learning can be incredibly useful, but I still need to be smart about which information and data I choose to include in the first place. </div>
<p>“Predict or perish” could be an apt maxim for eons of human history and Darwinian behavioral modification. Before the advent of modern science that eventually ushered in apps, GPS, and other digital Nostradamuses, we turned to astrologers, the divinations of shamans, and the formulations of fortune tellers who looked into crystal balls or read palms, chicken bones, or entrails. </p>
<p>In this light, Apple’s ad slogan, “practically magic,” illustrates how technology has replaced our previous sources for security, prediction, and belief. We now have better ways than ever to gauge the likelihood of both natural and man-made phenomena—from climate change to the odds of getting a table for four at 8 p.m. at our favorite restaurant. </p>
<p>Many of these technology-enabled models are now realized through machine learning and artificial intelligence that harness immense computing power and sift through big data to find general trends that predict the future. These approaches have yielded impressive results in some cases by using predictive algorithms to outfox human Grandmasters at chess and Go. What most people don’t realize, however, is that many of our current predictive models produce probabilities of outcomes that primarily follow from betting on the continuation of past events. Thus, algorithms work beautifully for board games for which all the rules are known, and the types of data—moves and overall board positions—that need to be collected are very limited. </p>
<p>However, the chances of predicting whether my town will flood are bleak if I only have local records of previous floods and the heights of dams, but don’t, for instance, have any data on snowpack in the distant mountains that melts and feeds into the river, or patterns of rainfall in areas uphill or upstream. A potential problem with machine learning, big data, and other approaches is that even when we have lots of data, we may not have the right data or the right perspective, especially for systems that we don&#8217;t really understand. Big data and machine learning can be incredibly useful, but the point here is that I still need to be smart about which information and data I choose to include in the first place.</p>
<p>Please don’t think I’m suggesting we go back to the Stone Age or even the landline. Modern science gives us much better methods than our primitive forebears had for testing predictions and refining our ideas. But we are still susceptible to wanting more certainty than science and technology can really give us, and we’re very vulnerable to twisting science and technology to fit our own biased, overly confident brains.</p>
<p>In some cases, we are overconfident in our predictive powers, while in other circumstances, we overly denigrate our efforts at prediction. Living in Southern California, I find it easy to predict the weather on most days. Being proud of guessing that it’s going to be sunny with temperatures in the 70s to 80s is like being proud you bet the stock market would continue to go up, up, up in the late ‘90s—because both predictions rely on a history of consistent trends with little variation. Large parts of machine learning and its limitations also owe some debt to this line of thinking, because performance is evaluated based on the ability to reproduce “known knowledge.”</p>
<p>There are other ways to try to predict the future that rely more on changing and increasing our understanding of nature or the economy or social interactions. Such an approach also can rely on big data, but it focuses on discovery of the mechanisms and causal relationships that are needed for certain outcomes. (For this sort of work, think of Einstein changing the understanding of how time relates to space, or Darwin changing how we view our origins in relation to other species, or Wegener seeing how the continents move relative to each other and fit together.)</p>
<p>These contrasting approaches raise a key question: Should we judge the certainty of a prediction simply based on how many consecutive times it proves to be accurate? Or should we base our assessment on evaluating different types of data, and determining what conditions need to exist for a particular outcome to happen at all? If a friend flips a quarter five times in a row and I correctly call five consecutive heads, the question is whether my prediction was based on the history of the previous flips and I simply got lucky (a 1-in-32 chance, with high uncertainty) or, possibly, whether I knew it was a two-headed quarter and had <i>no</i> uncertainty. How would you know if you didn’t first check the coin? </p>
<p>To extend this line of inquiry, let’s recall that prior to last November’s presidential election, the USC/<i>L.A. Times</i> election model correctly predicted a Trump victory, and Nate Silver’s 538 election model revealed high uncertainty in the outcome, suggesting that a Trump victory was not unlikely. These were among the few predictive models that aligned reasonably well with the outcome of the election. The virtue of Silver’s 538 was that it included an appropriate amount of uncertainty. The virtue of the USC/<i>L.A. Times</i> model was that, more than other major polls, it included different and arguably more informative data (though less total data), and a different set of assumptions about how to weight the diversity of the population. </p>
<p>What’s the conclusion? No amount of technology, data, or algorithms can overcome a fundamental lack of information that we either can’t get or never thought to ask for. We must embrace imperfection and understand uncertainty in order to better inform our decisions and to help guide us towards better models that require seeing the world in new ways. If we look closely, our predictive models can tell us as much about what we don’t know—and need to find out—as about what we do know. For our globally-connected, internet-enabled, and big data crunching species, Aristotle’s maxim, “The more you know, the more you know you don’t know,” holds as well now as it did in antiquity. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/05/18/overconfident-public-learns-limits-technology/ideas/nexus/">An Overconfident Public Learns the Limits of Predictive Technology</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/05/18/overconfident-public-learns-limits-technology/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>The Media’s Prediction Addiction Is Anti-Democratic</title>
		<link>https://legacy.zocalopublicsquare.org/2016/05/31/the-medias-prediction-addiction-is-anti-democratic/inquiries/trade-winds/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/05/31/the-medias-prediction-addiction-is-anti-democratic/inquiries/trade-winds/#comments</comments>
		<pubDate>Tue, 31 May 2016 07:01:53 +0000</pubDate>
		<dc:creator>By Andrés Martinez</dc:creator>
				<category><![CDATA[Trade Winds]]></category>
		<category><![CDATA[democracy]]></category>
		<category><![CDATA[journalism]]></category>
		<category><![CDATA[media]]></category>
		<category><![CDATA[political forecasting]]></category>
		<category><![CDATA[politics]]></category>
		<category><![CDATA[polls]]></category>
		<category><![CDATA[predictions]]></category>
		<category><![CDATA[reporting]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=73408</guid>
		<description><![CDATA[<p>You have to give them credit: many journalists are confessing that they really blew it in the first act of this presidential election season.  But most of their <i>mea culpas</i> are off point, apologies for the wrong mistake. </p>
<p>The media is collectively beating itself up for a series of poor predictions—dismissing the Trump candidacy, calling for an early Clinton coronation, anticipating a contested GOP convention—instead of beating itself up for a deeper pathology: its compulsive haste to predict everything in the first place.</p>
<p>Serious people, and even professors of journalism, have long harrumphed about how polls and “the horse race” dominate election coverage, at the expense of eat-your-broccoli-type substantive reporting of candidate policy proposals. Nothing new there. What is new is the extent to which the media’s obsession with predicting electoral outcomes in advance has seeped into, and practically taken over, the candidates’ own discourse on the campaign trail.  </p>
<p>It’s </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/05/31/the-medias-prediction-addiction-is-anti-democratic/inquiries/trade-winds/">The Media’s Prediction Addiction Is Anti-Democratic</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>You have to give them credit: many journalists are confessing that they really blew it in the first act of this presidential election season.  But most of their <i>mea culpas</i> are off point, apologies for the wrong mistake. </p>
<p>The media is collectively beating itself up for a series of poor predictions—dismissing the Trump candidacy, calling for an early Clinton coronation, anticipating a contested GOP convention—instead of beating itself up for a deeper pathology: its compulsive haste to predict everything in the first place.</p>
<p>Serious people, and even professors of journalism, have long harrumphed about how polls and “the horse race” dominate election coverage, at the expense of eat-your-broccoli-type substantive reporting of candidate policy proposals. Nothing new there. What is new is the extent to which the media’s obsession with predicting electoral outcomes in advance has seeped into, and practically taken over, the candidates’ own discourse on the campaign trail.  </p>
<p>It’s as if sideline analysis has become the game itself.  </p>
<p>Trump is the caricature extreme of this trend, giving campaign speeches that consist largely of spinning poll numbers, critiquing the media coverage of the campaign (which the media can then critique, for Trump to then critique back, in a never-ending to-and-fro Wimbledon rally), and throwing the occasional verbal Molotov cocktail a minority group’s way. </p>
<p>But Trump is not alone. To an extent that would have been unimaginable not long ago, all candidates this election cycle have spent a fair amount of time discussing polls (selectively, of course) as the ultimate qualification for the highest office in the land. Even when candidates may not have wanted to engage in such horse race spinning, they were often forced to do so by poll-centric questions from the media—“Why are you running given such low poll numbers?” </p>
<p>This was the tenor of the campaign and its coverage even before the first votes were cast in Iowa. Such is the anti-democratic hubris of media elites: Why wait to let people make their choice on election day when we smart media folks can tell you the results in advance, and tell you why you and your neighbors voted the way you did?</p>
<p>The media, political operatives, and news junkies in this age of perpetual chatter and connectivity via 24/7 cable and social media feel compelled to show they’re in the know not just by explaining what’s transpired—but by being able to forecast with certitude what comes next. And so the media and news consumers who want to seem in the know fetishize certain superstar pollsters and data geeks, and congratulate themselves on the amount of data available to make foolproof predictions. </p>
<div class="pullquote">To an extent that would have been unimaginable not long ago, all candidates this election cycle have spent a fair amount of time discussing polls (selectively, of course) as the ultimate qualification for the highest office in the land.</div>
<p>Again, it’s all anti-democratic, we-know-best hubris. The media’s reliance on ever more complex predictive models advances what psychologists call the illusion of control. If there’s anything we can’t seem to tolerate in the 21st century, it’s uncertainty. And also surprises, which is why we’re seeing the outpouring of earnest if overwrought mea culpas from the media.  </p>
<p>These apologies are more disturbing than the mistaken predictions. The apologizers in the media genuinely seem to think their inability to predict this primary season accurately was a blow to the republic—as opposed to their insistence on allowing their prediction addiction to drive, and distort, most election coverage.</p>
<p>Society’s intolerance of uncertainty (the media is playing to its audience, after all) and our mania for perfect forecasting extend well beyond political reporting and analysis. Meteorology, the science we first think of when we think of “forecasting,” is a pursuit where less uncertainty is a societal good. You want to warn people to take an umbrella along on their commute, or to abandon the coastline if an epic hurricane is heading their way.  Blown weather predictions do deserve mea culpas and post-mortems—and there’s nothing to be gained by waiting to see where the hurricane will land and withholding judgment.  </p>
<p>The financial world, on the other hand, is an arena long ago perverted by data-driven forecasting. When you invest your savings in a publicly traded company these days, you’re not making a bet on how that company will perform objectively in the long run. You are betting on how its performance in a succession of quarterly short-terms will compare with the forecasts drawn up by Wall Street analysts. It’s not enough to await a company’s results; what matters is how those results conform, or don’t, to the earnings estimates (or “expectations”) imposed on it by outside data crunchers. For instance, in April, Wall Street threw a collective hissy fit when Apple “missed” expectations set by outside forecasters by reporting $50.56 billion instead of $51.97 billion in quarterly revenue. The company’s stock was whacked as a result, taken down 8 percent in a day.</p>
<p>Having a lot of brainpower attempting to predict what lies around the corner for a company, an industry, or the economy as a whole is not an inherently bad thing.  It’s desirable, even, up until the point when forecasting becomes so important that its failures need to be treated like a disaster.  For companies trading on the stock market, the tyranny of managing a business to meet the quarterly earnings expectations of outside forecasters ends up stifling innovation and risk-taking. It’s the financial equivalent of campaigning on your poll numbers instead of setting your own agenda.</p>
<p>Perhaps the world of political analysis should look to the world of sports for a healthier model of how to blend forecasting with substantive analysis, without allowing the former to overwhelm the latter. Sports journalists and fans alike love making predictions, and devote a great deal of airtime and print (not to mention fantasy league energy) to picking scores and predicting individual performances. But, maybe because it’s still a game in the end, there is more allowance made for the notion that ironclad certitude is elusive, undesirable even. Studio broadcast analysts keep track of the accuracy of their predictions and good-naturedly compete and tease each other over them.  But failed predictions don’t trigger weighty mea culpas about how the media let society down. </p>
<p>The longest shot ever recorded by oddsmakers happened in the world of soccer last month, when tiny, impoverished, and perennially struggling Leicester City won the English Premier League, despite 5,000-to-1 odds.  The story was a feel-good global phenomenon. The political media-operative complex should take note. Smart analysis can help explain how Leicester pulled off its championship, without having predicted it in advance.  Some uncertainty is inevitable in life, at least until the games are played, and the votes are cast. And that’s OK.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/05/31/the-medias-prediction-addiction-is-anti-democratic/inquiries/trade-winds/">The Media’s Prediction Addiction Is Anti-Democratic</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/05/31/the-medias-prediction-addiction-is-anti-democratic/inquiries/trade-winds/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
