<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Data &#8211; Zócalo Public Square</title>
	<atom:link href="https://legacy.zocalopublicsquare.org/tag/data/feed/" rel="self" type="application/rss+xml" />
	<link>https://legacy.zocalopublicsquare.org</link>
	<description>Ideas Journalism With a Head and a Heart</description>
	<lastBuildDate>Mon, 21 Oct 2024 07:01:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>What Happened to Digital Contact Tracing’s Summer of Potential?</title>
		<link>https://legacy.zocalopublicsquare.org/2021/05/13/digital-contact-tracing-covid-19-pandemic/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2021/05/13/digital-contact-tracing-covid-19-pandemic/ideas/essay/#respond</comments>
		<pubDate>Thu, 13 May 2021 07:01:55 +0000</pubDate>
		<dc:creator>by Maria Carnovale</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[contact tracing]]></category>
		<category><![CDATA[Covid-19]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[ethics]]></category>
		<category><![CDATA[public health]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=119948</guid>
		<description><![CDATA[<p>In the summer of 2020, when most countries were cherishing the quiet before the second peak in COVID-19 cases, the non-profit where I was volunteering was bustling with activity. It had developed an open-source digital contact tracing system—one of those smartphone apps that tracks one’s whereabouts and sends a notification if the user has been exposed to COVID-19.</p>
<p>“Do you want to join one of our meetings?” was the offer of a volunteer I met at a (virtual) university event. He knew I was researching public use of highly contentious technology. I could not turn down the opportunity to peek into the decision-making process of an organization providing one of those technologies to governments.</p>
<p>Four months later, I was still attending those meetings, and it was becoming clear how poorly digital data and public health often intersect. Most of the public health officials we were meeting with did not seem prepared </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2021/05/13/digital-contact-tracing-covid-19-pandemic/ideas/essay/">What Happened to Digital Contact Tracing’s Summer of Potential?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the summer of 2020, when most countries were cherishing the quiet before the second peak in COVID-19 cases, the non-profit where I was volunteering was bustling with activity. It had developed an open-source digital contact tracing system—one of those smartphone apps that tracks one’s whereabouts and sends a notification if the user has been exposed to COVID-19.</p>
<p>“Do you want to join one of our meetings?” was the offer of a volunteer I met at a (virtual) university event. He knew I was researching public use of highly contentious technology. I could not turn down the opportunity to peek into the decision-making process of an organization providing one of those technologies to governments.</p>
<p>Four months later, I was still attending those meetings, and it was becoming clear how poorly digital data and public health often intersect. Most of the public health officials we were meeting with did not seem prepared to include digital contact tracing apps in their operations, but governments wanted them anyway. So their contractors were seeking the support of technology providers like the non-profit to deliver apps that promised to slow the spread of infection while still preserving privacy. Even now, one year later, there is limited evidence that those apps can accomplish both objectives. But in the midst of the pandemic, when the spread of the coronavirus seemed uncontrollable, it was easy to be seduced by a pre-packaged solution.</p>
<p>Meanwhile, my own curiosity was piqued by the wealth of information that the non-profit was able to catalyze. Public health officials and technology developers frequently came together to tackle this new challenge by sharing good and bad examples from across the country and around the world. The non-profit was open to all, but I could only sneak into these conversations as part of its network.</p>
<p>This is how I came to know the story of Hector Hugo, a <a href="https://www.wsj.com/articles/ecuador-city-beat-one-of-worlds-worst-outbreaks-of-covid-19-11593532974" target="_blank" rel="noopener">32-year-old urban planner</a> who used emergency call data to inform the COVID-19 response of Guayaquil, Ecuador. By the end of March 2020, the <a href="https://www.elespectador.com/noticias/el-mundo/a-guayaquil-no-la-salvamos-a-tiempo-pero-evitamos-algo-peor/" target="_blank" rel="noopener">city of Guayaquil had become the epicenter of the COVID-19 pandemic in Latin America</a>. Images of dead bodies lying in the city’s streets, waiting to be collected, circulated on the internet. The political system was slow and unprepared.</p>
<p>Hugo came across the emergency call <a href="https://www.wsj.com/articles/ecuador-city-beat-one-of-worlds-worst-outbreaks-of-covid-19-11593532974" target="_blank" rel="noopener">records of Guayaquil’s residents on the internet</a>. He filtered the records for calls that seemed related to COVID-19 infections—reports of trouble breathing or of corpses waiting to be collected—and then plotted them as points on a map.</p>
<p>Access to data is typically a major roadblock in research, especially when the privacy of health-related information is in the picture. But apparently, these emergency call records were mistakenly uploaded to the cloud, meaning Hugo could just download them. With that data, he created a heatmap of cases—a map showing where the health crisis was most severe. Then, with the help of a Spanish data analyst, Carlos Bort, he cross-referenced those cases with demographic data and was able to project the likely spread of the virus in the city. Long story short, Guayaquil had a roadmap to allocate healthcare workers and resources to the most vulnerable neighborhoods, those with the highest current and projected levels of contagion.</p>
<p>The number of cases and deaths in Guayaquil dropped in the months following the introduction of this new system. Nobody knows if it was thanks to those heatmaps or to the herd immunity that the city had acquired during the worst moments of the pandemic. Nevertheless, this is how a privacy breach will forever be remembered—as the lucky fluke that saved Guayaquil.</p>
<p>But data hacking is no public health strategy. In early August of 2020, data analysts from a development agency of a Mexican state were looking into digital contact tracing and exposure notification apps for their jurisdiction and reached out to our non-profit to explore feasible options. They wanted to add data analytics to their pandemic response, believing that, like in Guayaquil, it would help allocate public health resources and hoping that digital contact tracing could help.</p>
<div class="pullquote">It’s a Catch-22. Tech companies don’t trust governments with personal information, so no data ever leaves the individual smartphone. But people don’t seem to trust tech companies either, so they don’t download the app in the first place.</div>
<p>“Bluetooth-based apps are quickly becoming the standard in this industry because they are highly privacy-preserving,” I explained. “The app collects encrypted identification codes from other app users who happen to be around you. If one of them tests positive for COVID-19 and uploads the test result, the app sends a notification out to those whose codes it has collected. Ideally, those people get tested and self-isolate to stop the further spread of the virus.”</p>
<p>“Does it work?” asked the inquisitive local official.</p>
<p>“Nobody really knows, there is not much evidence yet,” I replied. “A study released in April 2020 suggests that to be truly effective, roughly <a href="https://www.research.ox.ac.uk/article/2020-04-16-digital-contact-tracing-can-slow-or-even-stop-coronavirus-transmission-and-ease-us-out-of-lockdown" target="_blank" rel="noopener">60 percent of smartphone users need to download it</a>.”</p>
<p>But 60 percent is a high adoption level. Most of the contact tracing and exposure notification apps launched so far <a href="https://www.technologyreview.com/2020/05/07/1000961/launching-mittr-covid-tracing-tracker/" target="_blank" rel="noopener">don’t get to double digits</a>. The most popular are now in the <a href="https://www.technologyreview.com/2020/05/07/1000961/launching-mittr-covid-tracing-tracker/" target="_blank" rel="noopener">20 percent adoption levels</a>. And while the app might still be effective in conjunction with other measures, it cannot directly aid those measures: to preserve individual privacy, all the data the app records remains safely stored on the smartphone and never goes to a centralized dataset.</p>
<p>It’s a Catch-22. Tech companies don’t trust governments with personal information, so no data ever leaves the individual smartphone. But people don’t seem to trust tech companies either, so they don’t download the app in the first place. Governments don’t trust themselves to be able to approach the pandemic without technology as a comfortable safety blanket, so they ask tech companies for apps. It’s a cycle that’s difficult to break.</p>
<p>In a burst of optimism, by summer&#8217;s end, most countries had decided that a contact tracing app was going to be in their future despite the uncertainties it carried. While digital contact tracing made sense in theory, there was <a href="https://theconversation.com/contact-tracing-apps-theres-no-evidence-theyre-helping-stop-covid-19-14839" target="_blank" rel="noopener">not enough evidence that it actually slowed the spread of COVID-19</a>. When two users meet, digital contact tracing apps record a contact. But they <a href="https://citrispolicylab.org/wp-content/uploads/2020/10/Technologies-of-Pandemic-Control_2020.pdf" target="_blank" rel="noopener">are ignorant of the context, and therefore cannot accurately predict the risk of infection</a>. If the contact takes place in an open space rather than indoors, or if masks were worn, the chances of transmission are lower. Bluetooth signals travel across physical barriers, while the coronavirus does not. If two phones were close enough, one could receive an exposure notification even if the COVID-19 positive person were on the other side of a wall that would prevent transmission.</p>
<p>The apps might unnecessarily alarm people, but at the same time provide a false sense of security. With high rates of asymptomatic infections, apps can track the spread of the virus only if users carry their phones consistently with their Bluetooth or GPS systems turned on and if large-scale testing is available for those showing no symptoms. Virtually no country met those conditions at the time, and short of that, asymptomatic carriers might never receive an exposure notification and be led to feel confident about their health when, in fact, they were actively spreading COVID-19.</p>
<p>Whether privacy-preserving apps could consistently and accurately track contagion was still a mystery. Digital contact tracing apps were too novel and, being privacy-preserving, scant data was available to researchers. Countries like China had some success with it, but their solution was part of a <a href="https://www.economist.com/china/2020/02/29/to-curb-covid-19-china-is-using-its-high-tech-surveillance-tools" target="_blank" rel="noopener">large digital surveillance system</a>, which was <a href="https://www.sciencedirect.com/science/article/pii/S2666693620300360" target="_blank" rel="noopener">unrealistic in the privacy-sensitive West</a>. The high levels of adoption required to make digital contact tracing work seemed attainable only with mandates requiring people to download the app—an obligation that many governments judged to be <a href="https://www.nature.com/articles/d41586-020-01578-0" target="_blank" rel="noopener">unethical</a> and politically perilous.</p>
<p>The question was then how to nudge people into using the app. Nobody knew, so Belgium decided to ask them directly. In September 2020, it launched an <a href="https://www.esat.kuleuven.be/cosic/sites/corona-app/" target="_blank" rel="noopener">open consultation</a>. Anybody on the internet with enough digital literacy to upload a PDF into an online form could submit a comment on the design of the national exposure notification app and the policies that framed its use—such as at what age minors should be able to decide independently whether to use the app, or what privacy statements should look like. It also inquired about the structure and composition of an independent oversight committee that would monitor the use of the technology to ensure it was not abused, that its uses did not impinge on individual rights, and that it did not outlive the health crisis.</p>
<p>It was an innovative example of an open, transparent, and crowdsourced approach to policymaking that acknowledged the risks of misuse and took steps to mitigate them. Our team had been seeking exactly such arrangements earlier in the pandemic, when a technology contractor in a country with somewhat dictatorial leadership (according to many political commentators) had reached out to receive support in developing and deploying a contact tracing app.</p>
<p>As always, the non-profit enthusiastically agreed to help. But the project proved to be far more ambitious than a simple app. The contact tracing system would feed into a digital ID that acted as a pass. If one had tested positive or had been in contact with a COVID-19 positive person, the digital ID would deny access to public spaces. Ubiquitous digital readers would scan those passes and track individual movements. Cameras with recognition systems would match the identity of the pass owner to that of the smartphone holder.</p>
<p>It scared me. The technology of the non-profit was privacy-preserving, but in this country, the whole ecosystem planned around it was not. After such a large investment, it was reasonable to fear that this infrastructure would outlive the pandemic. I could not shake the image of Big Brother following our every step with its intrusive eye.</p>
<p>None of us inside the non-profit were ready to play a part in that process, to be its enablers. But the decision wasn&#8217;t easy. Digital contact tracing might have helped mitigate the rising number of cases in the country. If not us, then someone else would have done it, maybe some unscrupulous firm that was not concerned about the public’s well-being. There are no industry standards or formal monitoring systems for this technology. Maybe the responsible thing to do was to engage in order to keep a foot in and an eye out.</p>
<p>Ultimately, though, the non-profit did not feel equipped to bear that burden. It was a difficult decision, but one made easier because the financial health of the non-profit was not at stake. None of the volunteers were risking their livelihood on a lost client. That is a privilege that few institutions enjoy.</p>
<div class="signup_embed"><div class="ctct-inline-form" data-form-id="3e5fdcce-d39a-4033-8e5f-6d2afdbbd6d2"></div><p class="optout">You may opt out or <a href="https://www.zocalopublicsquare.org/contact-us/">contact us</a> anytime.</p></div>
<p>For many companies, the COVID-19 healthcare crisis has been an opportunity: technological solutions like digital contact tracing and telemedicine suddenly have a market that is both open and, in many places, underregulated. In this context, the non-profit could have been the unicorn to set standards in an industry where technology providers had no formal obligations other than the judgment of history books. Whether it fulfilled or missed this responsibility is still an open question.</p>
<p>But that window did not stay open long. After a summer of glory, digital contact tracing and exposure notification lost their splendor in the fall, dried up as the days shortened, and fell before the winter was even over. Sustained criticism and lack of adoption had made digital contact tracing irrelevant. But the non-profit is not losing its spirit. It has already set its sights on a sexy new gadget: digital vaccine passports.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2021/05/13/digital-contact-tracing-covid-19-pandemic/ideas/essay/">What Happened to Digital Contact Tracing’s Summer of Potential?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2021/05/13/digital-contact-tracing-covid-19-pandemic/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Does Cass Sunstein Regret Ruining Your Popcorn?</title>
		<link>https://legacy.zocalopublicsquare.org/2020/09/04/cass-sunstein-too-much-information-lauren-goode-interview/events/the-takeaway/</link>
		<comments>https://legacy.zocalopublicsquare.org/2020/09/04/cass-sunstein-too-much-information-lauren-goode-interview/events/the-takeaway/#respond</comments>
		<pubDate>Fri, 04 Sep 2020 21:32:10 +0000</pubDate>
		<dc:creator>by Sarah Rothbard</dc:creator>
				<category><![CDATA[The Takeaway]]></category>
		<category><![CDATA[Cass Sunstein]]></category>
		<category><![CDATA[climate change]]></category>
		<category><![CDATA[Covid-19]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[health]]></category>
		<category><![CDATA[information]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=114245</guid>
		<description><![CDATA[<p>When Cass R. Sunstein was serving as administrator of the White House Office of Information and Regulatory Affairs under President Barack Obama, he oversaw major new legislation requiring chain restaurants to disclose nutrition information. After an extremely long debate, Sunstein and his colleagues decided to include movie theaters. “A lot of people consume a lot of stuff at the movie theater, and it would be good for people to make informed choices,” Sunstein recalled thinking at the time. When he told a friend the good news, she replied with three “deflating but incredibly illuminating” words: “‘Cass ruined popcorn.’”</p>
<p>At a virtual event presented by Zócalo and the Commonwealth Club on YouTube, Facebook, and Twitter, Sunstein—Harvard Law School’s Robert Walmsley University Professor and author of the new book <i>Too Much Information: Understanding What You Don’t Want To Know</i>—strove to answer the question, “How Much Information Is Too Much?”</p>
<p>The inspiration </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/09/04/cass-sunstein-too-much-information-lauren-goode-interview/events/the-takeaway/">Does Cass Sunstein Regret Ruining Your Popcorn?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>When <a href="https://legacy.zocalopublicsquare.org/2020/09/03/legal-scholar-too-much-information-author-cass-sunstein/personalities/in-the-green-room/" target="_blank" rel="noopener noreferrer">Cass R. Sunstein</a> was serving as administrator of the White House Office of Information and Regulatory Affairs under President Barack Obama, he oversaw major new legislation requiring chain restaurants to disclose nutrition information. After an extremely long debate, Sunstein and his colleagues decided to include movie theaters. “A lot of people consume a lot of stuff at the movie theater, and it would be good for people to make informed choices,” Sunstein recalled thinking at the time. When he told a friend the good news, she replied with three “deflating but incredibly illuminating” words: “‘Cass ruined popcorn.’”</p>
<p>At a virtual event presented by Zócalo and the Commonwealth Club on <a href="https://www.youtube.com/watch?v=mAnnPewEHdI" target="_blank" rel="noopener noreferrer">YouTube</a>, Facebook, and Twitter, Sunstein—Harvard Law School’s Robert Walmsley University Professor and author of the new book <a href="https://www.skylightbooks.com/book/9780262044165" target="_blank" rel="noopener noreferrer"><i>Too Much Information: Understanding What You Don’t Want To Know</i></a>—strove to answer the question, “<a href="https://legacy.zocalopublicsquare.org/event/how-much-information-is-too-much/" target="_blank" rel="noopener noreferrer">How Much Information Is Too Much?</a>”</p>
<p>The inspiration for the book and the event, he told moderator <a href="https://legacy.zocalopublicsquare.org/2020/09/03/wired-senior-writer-lauren-goode/personalities/in-the-green-room/" target="_blank" rel="noopener noreferrer">Lauren Goode</a>, senior writer and podcast host at <i>WIRED</i>, was partly his friend’s thoughts regarding popcorn and partly his father being diagnosed with a brain tumor in his 60s. Sunstein’s mother chose not to tell him that he would die within the year. She decided, “telling him he’s going to die is too much information,” said Sunstein, explaining that she wanted her husband to enjoy his last months without staring death in the face. “She didn’t want to ruin his popcorn, basically.”</p>
<p>How do you decide what information is worth sharing, or what information you yourself need? It comes down to two questions, said Sunstein: Is the information useful? And, how does the information make people feel?</p>
<p>There’s a great deal of information that people don’t want to know, Sunstein has found in his research, including calorie counts, genetic tendencies (including whether you are predisposed to cancer and Alzheimer’s), the fuel economy of your car—even whether the person you’re crushing on returns the sentiment.</p>
<p>“For some people, information isn’t going to change their behavior,” said Sunstein. They don’t want to know if hell exists because they’re probably going if so, and they don’t want the calorie count on their cheeseburger because they’re not going to start a diet. However, others consider such information useful—they want to know the side effects of a medication they’re taking in order to decide whether it’s worth it.</p>
<div class="pullquote">How do you decide what information is worth sharing, or what information you yourself need? It comes down to two questions, said Sunstein: Is the information useful? And, how does the information make people feel?</div>
<p>This dilemma “gets at something deep in the human condition,” said Sunstein, that goes back to the choice Adam and Eve face in the Garden of Eden. “Do they want to eat the apple—which is the question, do they want to know?”</p>
<p>But Adam and Eve only had themselves to worry about. In 2020, we have the novel coronavirus and loads of new information coming at us from all fronts. “What do you propose is too much information in the era of COVID-19?” Goode asked Sunstein.</p>
<p>We can go back to the same two questions once again, said Sunstein. The usefulness of information is paramount, especially to people most vulnerable to the virus. “They need to know, even if it scares them, because they might be scared but alive, and that’s a good tradeoff,” he said. The question of how information makes us feel is more complicated because terrifying people or making everything seem hopeless is cruel. “You want to give people the information with a sense of hope and good cheer,” said Sunstein, pointing to New Zealand Prime Minister Jacinda Ardern telling her nation that the Easter bunny and tooth fairy were exempt from lockdown.</p>
<p>And then there is the question of misinformation—which can become dangerous, as in the case of hydroxychloroquine, an arthritis medicine that President Trump called a “game changer,” despite its known side effects and a lack of data backing up any benefits it may have against COVID-19. “When you have something that has become so polarizing,” asked Goode, “how do you propose that people in positions of authority communicate about it?”</p>
<p>“The obvious thing about misinformation that’s bad is that people believe it, and they’ll act in response to it,” said Sunstein. And even if they’re immediately given a correction, they’ll still remember the first statement as true. “That’s a real problem for social media and for newspapers and magazines. It suggests that telling people a falsehood even when it’s corrected may be not a good thing,” he warned.</p>
<p>Social media and politics were hot topics in the audience question-and-answer session, from Facebook’s recent announcement that they are banning new political ads a week before the election to the role of information when it comes to keeping democracy alive.</p>
<p>“Love it,” said Sunstein regarding Facebook’s ban, which he believes could help prevent “a falsehood pandemic that could create a terrible distortion.” When pressed by Goode, he noted that it was possible the ban should have been put in place earlier.</p>
<div class="signup_embed"><div class="ctct-inline-form" data-form-id="3e5fdcce-d39a-4033-8e5f-6d2afdbbd6d2"></div><p class="optout">You may opt out or <a href="https://www.zocalopublicsquare.org/contact-us/">contact us</a> anytime.</p></div>
<p>And when it comes to democracy, Sunstein believes that giving people the choice—to read further or not—is paramount, because some people want and need more information and others will be overwhelmed by more than the basics. “Citizens need to be able to figure out whom they like and the issues,” he said. “But also, they’re entitled to be busy and to want to know something about the issues they care about but not about the rest.”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/09/04/cass-sunstein-too-much-information-lauren-goode-interview/events/the-takeaway/">Does Cass Sunstein Regret Ruining Your Popcorn?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2020/09/04/cass-sunstein-too-much-information-lauren-goode-interview/events/the-takeaway/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>When Numeracy Superseded Literacy—and Created the Modern World</title>
		<link>https://legacy.zocalopublicsquare.org/2018/07/06/numeracy-superseded-literacy-created-modern-world/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2018/07/06/numeracy-superseded-literacy-created-modern-world/ideas/essay/#respond</comments>
		<pubDate>Fri, 06 Jul 2018 07:01:07 +0000</pubDate>
		<dc:creator>by Michael E. Hobart</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[arithmetic]]></category>
		<category><![CDATA[astronomy]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[geometry]]></category>
		<category><![CDATA[information]]></category>
		<category><![CDATA[logic]]></category>
		<category><![CDATA[mathematics]]></category>
		<category><![CDATA[Medieval]]></category>
		<category><![CDATA[music]]></category>
		<category><![CDATA[Numbers]]></category>
		<category><![CDATA[numeracy]]></category>
		<category><![CDATA[Renaissance]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=95517</guid>
		<description><![CDATA[<p>In 1025, two learned monks, Radolph of Liège and Ragimbold of Cologne, exchanged several letters on mathematical topics they had encountered while reading a manuscript of the sixth-century Roman philosopher Boethius, whose writings supplied one of the few mathematics sources in the Middle Ages. These monks were not mathematicians, but they were inquisitive and keen to further their learning. They pondered Boethius’ words. They struggled. In particular, they puzzled over the theorem that the interior angles of a triangle were equal to two right angles. “Interior angles” of a triangle? What could that possibly mean? Neither had a clue.</p>
<p>Even the mathematically averse among us today recognize the basic geometry that Radolph and Ragimbold failed to grasp, for we live in a numerate society, surrounded by countless manifestations of mathematics. Broadly defined as the ability to reason with numbers and other mathematical concepts, numeracy underlies our current information explosion. Its </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2018/07/06/numeracy-superseded-literacy-created-modern-world/ideas/essay/">When Numeracy Superseded Literacy—and Created the Modern World</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In 1025, two learned monks, Radolph of Liège and Ragimbold of Cologne, exchanged several letters on mathematical topics they had encountered while reading a manuscript of the sixth-century Roman philosopher Boethius, whose writings supplied one of the few mathematics sources in the Middle Ages. These monks were not mathematicians, but they were inquisitive and keen to further their learning. They pondered Boethius’ words. They struggled. In particular, they puzzled over the theorem that the interior angles of a triangle were equal to two right angles. “Interior angles” of a triangle? What could that possibly mean? Neither had a clue.</p>
<p>Even the mathematically averse among us today recognize the basic geometry that Radolph and Ragimbold failed to grasp, for we live in a numerate society, surrounded by countless manifestations of mathematics. Broadly defined as the ability to reason with numbers and other mathematical concepts, numeracy underlies our current information explosion. Its clichés dot popular speech: “do the math,” “crunch the numbers,” “figure the odds.” From birth to death, numbers track our lives institutionally and demographically. Some scorn such customs (think of Mark Twain’s “figures” of “lies, damned lies, and statistics”), but we all acknowledge numeracy as a cultural given, and agree that mathematics fuels the science, technology, and industry of our world. </p>
<p>Still, as the story of Radolph and Ragimbold suggests, it wasn’t always so.</p>
<p>Before the modern era, whose origins we often date from the Renaissance (circa 1250 to 1600), folks certainly counted and measured. But, bluntly put, the abstractions of numeracy and mathematics mattered not a whit in any practical sense. </p>
<div class="signup_embed"><div class="ctct-inline-form" data-form-id="3e5fdcce-d39a-4033-8e5f-6d2afdbbd6d2"></div><p class="optout">You may opt out or <a href="https://www.zocalopublicsquare.org/contact-us/">contact us</a> anytime.</p></div>
<p>Instead, literacy—not numeracy—ruled. For our progenitors, it was the invention of writing, probably between 3200 and 3100 B.C., that served the human impulse to order and manage the flux of experience and its “givens” or “data” (from the Latin, <i>datum</i>, “thing given”). Early scripts provided the first information technology, creating the first “information age”—that of literacy. Its apogee occurred with the phonetic alphabet, devised by the Greeks, who correlated the sounds of speech with individual letter symbols so that each symbol stands for a single vowel or consonant. </p>
<p>The alphabet provided the substrate, the symbols for framing nouns and adjectives, and thus the means of creating definitions, which connected thought to the objects and processes of the world. For Aristotle, science would organize and explain data taken in through the senses, abstracted into words, classified into general and specific categories, and bound together with the formal tools of logic. </p>
<p>The medieval world of Radolph and Ragimbold inherited this word-based technology and culture, assimilating the classifying temper into the curriculum of its new universities. Seven liberal arts anchored the course of studies—three linguistic (the trivium of grammar, logic, and rhetoric) and four mathematical (the quadrivium of arithmetic, music, geometry, and astronomy). The latter were taught primarily as stepping stones to contemplation of spiritual realities, for example the divine order that infused numerical proportions, musical harmonies, spatial beauty, and heavenly motion. </p>
<p>Medieval mathematics was everywhere “enchanted,” its numerology animated with allegorical, generally theological meanings. (Witness the bizarre practice of scapulimancy—divination according to the geometry of sheep shoulder blades.) Its categories, nonetheless, remained separate and distinct from one another. Arithmetic, the subject of discrete things, was incommensurable with geometry, which treated continuous things. And so, the topics of “thing” numeracy stayed tucked away in the cubby holes of literacy.</p>
<p>But then came the Renaissance centuries, and two dominant trends that would dramatically challenge the classifying temper and its embedded mathematics. </p>
<p>The first was an information explosion, begun earlier but powered after 1455 by that great engine of learning, the printing press. The mind-boggling proliferation of printed works (roughly 200 million books by the end of the 16th century) lent cheap paper and ink to spreading the humanists’ recovery of ancient texts, the New World discoveries, and the mounting harvest of information gleaned from nature. There was, in the words of Harvard’s Ann Blair, “too much to know,” too much to classify. The surfeit of new information swamped traditional classes, fractured categories, and overwhelmed the classifying temper. The world lay “all in pieces, all coherence gone,” intoned poet John Donne in 1611, gazing rearward at a more intellectually comforting age.</p>
<p>A second trend was intertwined with this overwhelming volume of facts: the advent of a new information technology. Arising largely from practical activities, new ways to encode information brought forth new and different ways of seeing, imagining, and analyzing nature. In each category of the quadrivium—arithmetic, music, geometry, and astronomy—these new means of data collection and processing laid the foundations of modern numeracy.</p>
<div class="pullquote">Broadly defined as the ability to reason with numbers and other mathematical concepts, numeracy underlies our current information explosion.</div>
<p>In arithmetic, from the 13th century forward, the growing presence of Hindu-Arabic numerals habituated Europeans to a new counting system. Employed initially by merchants, bankers, and accountants, it steadily crept into the procedures of mathematicians and natural philosophers, craftsmen and artisans, musicians, and artists. The new system showcased a much simplified, functional notation of nine ciphers, the numerals 1 through 9, in contrast to the cumbersome Greek scheme of 27 alphanumeric letters or to Roman numerals with their vertical strokes and letters. Further, it featured a place-value arrangement of numerals, our familiar decimal system for determining a numeral’s value by its place in the ones, tens, or hundreds column (and continuing). And the symbolic representation of zero as an empty placeholder greatly facilitated arithmetic computations. All these innovations contributed to perceiving numbers as abstract relations, not just collections of things or objects. By Shakespeare’s day, the character of Shylock in <i>The Merchant of Venice</i> was figuring his pound of flesh with the new, more efficient Hindu-Arabic numerals.</p>
<p>In the world of music, a newly invented and abstract notation accompanied the rise of polyphonic singing, which evolved from Gregorian chant. With their longs, breves, semibreves, minims, fusas, and other equivalents of modern musical notes, composers and musicians caged and managed as information the elusive, ephemeral data of rhythm and pitch. For the first time in human history, time itself was measured by means of an independent system of symbols—standardized units of sound (notes) and silence (rests) that corresponded to physical, acoustical reality. Here, too, Hindu-Arabic numerals described the fluid, irrational proportionalities and harmonies comprising the dynamics of musical tone, and gave rise to new tuning systems, including the equalized temperament of modern pianos and other instruments with fixed, musical intervals. </p>
<p>Turning to geometry, the discovery of linear perspective in the visual arts yielded new expression and shape to visual information. Novel geometric techniques offered alternatives to definitions as a means of seeing objects in the world. Perspective grids tied spatial proportionalities to the changing viewpoints of a world in motion. This was a world of the “winged eye” in the memorable phrase of the Renaissance man himself, Leon Battista Alberti. One-to-one mappings between objects and their representations (two-dimensional drawings or three-dimensional sculptures) provided a new context for situating objects in space. Forerunners of modern geometry’s graphs, these techniques led eventually to separating and analyzing the vertical and horizontal axes of motion. </p>
<p>And in astronomy, new technologies brought heavenly motion and eternal time down to earth, enabling the amalgamation of terrestrial and celestial branches of knowledge, physics and astronomy. Before it became money, time became information. Its “inaudible and noiseless foot” (Shakespeare) succumbed to the technological mastery supplied by that “fallen angel,” the oscillating mechanical clock. Pope Gregory’s solar calendar (1582) and subsequent linear chronologies put the capstone on “clock time” writ large. Time’s mystery was converted to time’s problem, at least in part, as time itself increasingly became an abstract variable expressed in mathematical formulas, such as in Galileo’s laws of free fall, pendulums, and projectiles. </p>
<p>Common to all these arenas—commerce and arithmetic, polyphonic sound and music, art and geometry, timekeeping and astronomy—a novel means of creating and managing information made its appearance. The phonetic letters of definitions, which were the foundation of literacy, gave way to numerals, notes and rests, grid lines, and linear chronologies that would undergird a new numeracy. </p>
<p>These were not the enchanted symbols of allegory associated with words and meanings. As new ways of abstracting, encoding, storing, and manipulating bits of information, these empty symbols—curlicue marks and lines on a page—were purely functional. They guided reasoning with their rules of combination and with their algorithms. </p>
<p>Strangely enough, these symbols even captured “nothing” and made it useful. The placeholding zero (null quantity) in arithmetic enabled “borrowing” and “carrying” amounts from column to column, the basis of adding, subtracting, and myriad further computations. The rest (absence of sound) in music made it possible for composers to develop and depict sophisticated, dynamical musical rhythms. The visual vanishing point in the gridlines of artistic perspective provided focus, thereby giving instructions for the spatial arrangements of objects. And in clock time, the instant (point of no time lapse) separated the flow of time past from that of time future, allowing one to plot events, large or small, on a linear continuum. Later, the 18th-century French philosopher Jean le Rond d’Alembert would summarily refer to the empty symbols of numeracy as “phantoms.”</p>
<p>Mapped onto various dimensions of experience, the new techniques of information coding framed new understandings of phenomena and transformed our ways of seeing. And they provided the tools for a new, logical layering of abstraction upon abstraction that became the higher reaches of mathematics.</p>
<p>By Galileo’s lifetime (1564–1642) these practices had mushroomed and coalesced into the information technology of numeracy. Henceforth, as Galileo wrote in a passage often cited, the book of nature would be understood increasingly as “written in the language of mathematics.” With his own experiments and investigations, this “father of modern physics” led a new generation of natural philosophers into the mathematical analysis of matter and motion. Fermat and Descartes followed with their analytical geometries, joining the discreteness of arithmetic with the continuity of geometry. A generation later the calculus of Newton and Leibniz united the forces and motions of matter. On the cornerstones of numeracy, laid in the Renaissance, would be constructed the entire edifice of modern science … and with it our numerate culture, our own phantom world.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2018/07/06/numeracy-superseded-literacy-created-modern-world/ideas/essay/">When Numeracy Superseded Literacy—and Created the Modern World</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2018/07/06/numeracy-superseded-literacy-created-modern-world/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Forget Fake News. Social Media Is Making Democracy Less Democratic.</title>
		<link>https://legacy.zocalopublicsquare.org/2017/11/29/forget-fake-news-social-media-making-democracy-less-democratic/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/11/29/forget-fake-news-social-media-making-democracy-less-democratic/ideas/essay/#respond</comments>
		<pubDate>Wed, 29 Nov 2017 08:01:27 +0000</pubDate>
		<dc:creator>by Rogers Brubaker</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[democracy]]></category>
		<category><![CDATA[Digital Age]]></category>
		<category><![CDATA[Micro-targeting]]></category>
		<category><![CDATA[social media]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=89656</guid>
		<description><![CDATA[<p>Anxieties that new communications technologies and media formats would undermine democratic citizenship go back more than a century. In the late 19th century, critics worried about sensationalistic “yellow journalism”; a cartoon from that era even used the phrase “fake news.” And indeed the newly cheap mass newspapers—in reckless disregard of facts—helped push the United States into war with Spain in 1898.</p>
<p>A generation later, newspaperman and political commentator Walter Lippmann observed that people &#8220;live in the same world, but they think and feel in different ones,” anticipating our current concerns about “media bubbles” by almost a century.    </p>
<p>Yet the revolution in digital communication initially generated more enthusiasm than anxiety. Many believed that the internet would enhance rather than diminish democratic citizenship by empowering ordinary citizens, bypassing institutional gatekeepers, enabling bottom-up mobilization and lateral communication, and making politics more transparent. It would thus foster more responsive government and enable more participatory </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/11/29/forget-fake-news-social-media-making-democracy-less-democratic/ideas/essay/">Forget Fake News. Social Media Is Making Democracy Less Democratic.</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Anxieties that new communications technologies and media formats would undermine democratic citizenship go back more than a century. In the late 19th century, critics worried about sensationalistic “yellow journalism”; a <a href="https://publicdomainreview.org/collections/yellow-journalism-the-fake-news-of-the-19th-century/">cartoon from that era</a> even used the phrase “fake news.” And indeed the newly cheap mass newspapers—in reckless disregard of facts—helped push the United States into war with Spain in 1898.</p>
<p>A generation later, newspaperman and political commentator Walter Lippmann observed that people &#8220;live in the same world, but they think and feel in different ones,” anticipating our current concerns about “media bubbles” by almost a century.    </p>
<p>Yet the revolution in digital communication initially generated more enthusiasm than anxiety. Many believed that the internet would enhance rather than diminish democratic citizenship by empowering ordinary citizens, bypassing institutional gatekeepers, enabling bottom-up mobilization and lateral communication, and making politics more transparent. It would thus foster more responsive government and enable more participatory forms of citizenship. Some forecasted that it would undermine authoritarian regimes, and indeed it was only a few years ago that commentators were celebrating the role of Twitter and Facebook in the Arab Spring.</p>
<p>Today the mood is much darker: The digital dream of renewing democratic citizenship has given way to a digital nightmare of undermining democratic citizenship. And not just because of Donald Trump, who is a symptom as much as a cause. It’s important to look beyond Trump—and beyond the discussions of fake news and Russian manipulation—to broader developments that have created a crisis of public knowledge. </p>
<p>The last decade has seen a transition from connectivity to hyperconnectivity. The share of the United States population over age 14 with a smartphone soared from a mere 11 percent at the end of 2008 to 75 percent at the end of 2014. The same period saw the explosive growth of social media. Regular Facebook users amounted to only 13 percent of the U.S. population at the end of 2008, but just four years later they made up more than half the population (and of course a much higher fraction among younger people). Worldwide, Facebook had 10 times as many users by the end of last year—nearly 2 billion—as it had in 2009. Twitter users increased more than six-fold in the United States from 2010 to 2014, growing from 10 million to 63 million. More Americans under 50 today regularly get news online than from television.</p>
<p>Hyperconnectivity is not just a technological fact; it is shaped by—and shapes in turn—economics, politics, law, and culture as well. Our current regime of connectivity is based on digital surveillance—which has rightly been described as the dominant business model of the Internet economy.  The core of this business model is the extraction of massive amounts of personal data from users in exchange for nominally free services. </p>
<p>This intensifying and ever more sophisticated system of corporate surveillance is more comprehensive and arguably more insidious than even the most powerful systems of government surveillance. It not only enables micro-targeted (and therefore more valuable) commercial advertising. More ominously, this system of surveillance enables micro-targeted and customized <i>political advertising</i>. It’s true that the claims of Cambridge Analytica to have decisively helped elect Donald Trump through such micro-targeting <a href=https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html>have been debunked</a>. But increasingly sophisticated forms of data aggregation and analysis, which allow increasingly accurate <a href=http://journals.uic.edu/ojs/index.php/fm/article/view/4901>inferences about individuals’ traits and dispositions</a>, have undoubtedly made possible forms of customizable micro-targeting that pose new threats to the public sphere and democratic decision-making. </p>
<p>The threat goes well beyond the issue of fake news. Manipulative and non-transparent micro-targeting threatens democratic decision-making regardless of whether the targeted message contains false information. </p>
<p>Democracy depends on public discussion and argument. If political persuasion operates behind the scenes through individualized targeting, it becomes inaccessible to public debate. The individual herself is unaware of being targeted, and since the message is invisible to others, it cannot be engaged or countered. </p>
<p>The threat also goes beyond targeted political advertising. Digital surveillance enables micro-targeted and customized content of all kinds, including news stories that are specifically tailored to the recipient. Such customized news content may be presented as part of a broader, putatively non-political effort to produce and deliver personally relevant information. But even if it is not intended to persuade, customized news challenges the very idea of the publicness of news, and it builds fragmentation—and even privatization—into the basic practices of the digital ecosystem.    </p>
<p>The intensification of digital surveillance is driven by the relentlessly commercialized competition for attention. Obviously, this is not new—getting attention has been central to mass journalism for more than a century. What’s new is the way in which attention is more pervasively and precisely measured, more precisely tracked across time and context, and more precisely monetized than ever before. The ubiquitous measurement, tracking, and monetization of attention have enshrined popularity as the ultimate measure of value (and virality as the highest form of popularity).  </p>
<p>In the media systems of both Europe and North America, the commercial logic of popularity has coexisted in recent decades with a professional logic of appropriateness, newsworthiness, objectivity, and—at its best—critical inquiry. But now the logic of popularity is entirely dominant, and not only in online media. As Leslie Moonves, the head of CBS, memorably commented in early 2016, the Trump campaign “may not be good for America, but it&#8217;s damn good for CBS.”   </p>
<p>Moreover, the metrics of popularity can be gamed and manipulated. Popularity can be manufactured, for example, by <a href=https://www.nytimes.com/2017/05/31/technology/how-twitter-is-being-gamed-to-feed-misinformation.html>using bots to flood Twitter</a> with messages and gain visibility as a “trending topic.” This manipulated visibility can then become self-reinforcing if the topic is picked up—as trending Twitter topics often are—by journalists.</p>
<div class="pullquote">Manipulative and non-transparent micro-targeting threatens democratic decision-making regardless of whether the targeted message contains false information.</div>
<p>There is a deep affinity between the commercial logic of popularity in a hyper-connected digital ecosystem and the cultural and political logic of populism. Populism is an ideology of immediacy or direct access. It challenges gatekeepers and mediating institutions—political parties, professional expertise, and the mainstream media—in the name of “direct access” to knowledge, direct access to culture, and direct access to political decision-making. </p>
<p>Digital hyperconnectivity seems to facilitate precisely such direct access. It seems to be based on disintermediation—on bypassing gatekeepers of all kinds and directly connecting everybody to everyone and everything (including all “the world’s information,” which Google’s famous mission statement claims to make “universally accessible”). Insofar as there is an ideology of hyperconnectivity, it is precisely a populist ideology, an ideology of disintermediation.  </p>
<p>But in fact hyperconnectivity simply replaces one mode of mediation with another. In the domain of news, it tends to replace mediation and filtering based on professional judgment with mediation and filtering based on metrics and algorithms. Who sees what—in Facebook news feeds or Google search results—is not neutral or unfiltered. Rather, who sees what is governed by complex and utterly nontransparent proprietary algorithms.  </p>
<p>The affinity between the commercial logic of popularity and the cultural and political logic of populism has another side. The pursuit of popularity in a hyper-connected digital environment accentuates the populist style of communication that already characterized <a href=http://www.tandfonline.com/doi/abs/10.1080/105846099198613>media-driven forms of political communication</a> well before the internet age—a style characterized by dramatization, confrontation, negativity, emotionalization, personalization, visualization, and hyper-simplification. </p>
<p>The sheer superabundance of content that courses through the digital ecosystem also erodes democratic citizenship. Digital abundance is at once polarizing and paralyzing. There has been much talk of Internet-based filter bubbles and echo chambers that segregate the public into separate cognitive, emotional, and political worlds. But polarization depends on colliding worlds, not on sealed and separate worlds. It depends on mobilization against a despised, feared, or loathed “other.” Digital superabundance facilitates such polarizing mobilization by assuring an inexhaustible and continuously renewed supply of discrediting representations of “the other.” Breitbart News, for example, sustains a continuous stream of stories attacking liberals, leftists, multiculturalists, Muslims, the mainstream press, as well as anyone else who attacks Trump. </p>
<p>Abundance also can be paralyzing. Research suggests that most people are more exposed to contrary views than the theory of filter bubbles would suggest. But this does not mean that they are critically assessing alternative perspectives. The sheer profusion and hyper-availability of radically different views of the world—not just differing opinions or “alternative facts”—can overwhelm people’s limited capacities for critical appraisal and paralyze their faculties of judgment and discernment. Digital superabundance, in other words, can create a “<a href=https://www.journalofdemocracy.org/article/can-democracy-survive-the-internet>blanket of fog</a>.” Inundated in a sea of information, pseudo-information, misinformation, and disinformation, people may feel powerless to cut through the fog and assess competing claims. And declining trust in the media—as well as declining participation in the interpretive communities fostered by churches, unions, parties, and other mediating institutions—may lead many people to retreat into a stance of generalized distrust.  </p>
<p>Digital hyperconnectivity has created a media and information ecosystem that is distinctively vulnerable to the propagation of fake news in the service of profit or propaganda. But fake news is only the tip of a much larger iceberg.</p>
<p>The social mediatization of politics, the intensifying web of surveillance and micro-targeting, the marginalization of institutional gatekeepers, the substitution of algorithms for professional judgment, the relentless pursuit and ubiquitous measurement of popularity, the accentuation of a populist style of communication, and the sheer superabundance of information, misinformation, and disinformation—all these developments have contributed to a crisis of public knowledge.  </p>
<p>The institutions that generate, refine, assess, popularize, and disseminate knowledge—science, universities, and the mainstream and elite media—have suffered a massive loss in public trust and legitimacy. The digital ecosystem that incubates and circulates what purports to be knowledge is increasingly disconnected from these institutions. A mood of “<a href=https://www.cambridge.org/core/journals/canadian-journal-of-political-science-revue-canadienne-de-science-politique/article/ears-wide-shut-epistemological-populism-argutainment-and-canadian-conservative-talk-radio/D650C60C681F28228DF9BAB9D50F0B65>epistemological populism</a>” breeds a pervasive suspicion of expertise. Deep gaps divide the views of scientists from those of the public about subjects such as evolution, the causes of climate change, the safety of vaccines, and the safety of genetically modified foods. Robust conceptions of democratic citizenship are unthinkable without at least minimal assumptions about public knowledge and deliberative reason. But today even the most attenuated assumptions seem wholly untenable.  </p>
<p>What can be done? First, since manifestly false news stories are just a symptom of a deeper and more systemic problem of public knowledge, remedies must address that larger problem and not focus solely on fake news. Second, the problem is not simply technological but economic, political, and cultural. For this reason, we cannot simply look for technological fixes. </p>
<p>Third, Google, Facebook, Twitter, and other social media platforms must be held accountable as public institutions and de facto news publishers. They cannot be allowed to hide behind the claim that they are just neutral platforms, responsible only to their users for optimizing their experience. Just what form this broader public accountability should take is a difficult and complex question. But it is certainly not sufficient for Facebook to step up ex-post fact-checking on stories that have been flagged as problematic. That is too little, too late. </p>
<p>Fourth, the crisis of public knowledge makes it urgent to strengthen public broadcasting and other forms of public journalism. The commitment to public journalism has been weakening in recent decades in Europe and the United States. But now more than ever, that commitment must be renewed. </p>
<p>Lastly, we need to invent and invest in new forms of civic education that would seek to cultivate the new forms of literacy, numeracy, and critical intelligence that are needed for democratic citizenship in an age of digital hyperconnectivity. And we need new efforts to reclaim and rebuild a space of genuinely public discussion and debate to counter the growing fragmentation, privatization, and polarization of the digital ecosystem.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/11/29/forget-fake-news-social-media-making-democracy-less-democratic/ideas/essay/">Forget Fake News. Social Media Is Making Democracy Less Democratic.</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/11/29/forget-fake-news-social-media-making-democracy-less-democratic/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why the Census Must Frame the Right Questions on Race and National Origin</title>
		<link>https://legacy.zocalopublicsquare.org/2017/04/25/census-must-frame-right-questions-race-national-origin/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/04/25/census-must-frame-right-questions-race-national-origin/ideas/nexus/#respond</comments>
		<pubDate>Tue, 25 Apr 2017 07:01:33 +0000</pubDate>
		<dc:creator>By Jennifer Lee</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[census]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[data analysis]]></category>
		<category><![CDATA[national origin]]></category>
		<category><![CDATA[nexus]]></category>
		<category><![CDATA[Population]]></category>
		<category><![CDATA[race]]></category>
		<category><![CDATA[statistics]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=85015</guid>
		<description><![CDATA[<p>Like most Americans, I spent most of my life not appreciating the herculean effort the U.S. Census Bureau undertakes every 10 years.  </p>
<p>Since its inception in 1790, the U.S. Census has aimed to count every living person in the country, and the stakes are high. The results of the census determine the allocation of hundreds of billions of federal dollars, which affect every slice of American life.</p>
<p>In order to do so, the Census must ask Americans the right questions—and give them the right options for their answers. It seems relatively simple, but—as I learned in 2013, when I became a member of the Committee on Population Statistics of the Population Association of America—the undertaking is so enormous that the planning for the 2020 Census began even before the completion of the 2010 Census. In 2010, the Census Bureau launched the Alternative Questionnaire Experiment (AQE) to compare different Census questionnaire </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/04/25/census-must-frame-right-questions-race-national-origin/ideas/nexus/">Why the Census Must Frame the Right Questions on Race and National Origin</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Like most Americans, I spent most of my life not appreciating the herculean effort the U.S. Census Bureau undertakes every 10 years.  </p>
<p>Since its inception in 1790, the U.S. Census has aimed to count every living person in the country, and the stakes are high. The results of the census determine the allocation of hundreds of billions of federal dollars, which affect every slice of American life.</p>
<p>In order to do so, the Census must ask Americans the right questions—and give them the right options for their answers. It seems relatively simple, but—as I learned in 2013, when I became a member of the Committee on Population Statistics of the Population Association of America—the undertaking is so enormous that the planning for the 2020 Census began even before the completion of the 2010 Census. In 2010, the Census Bureau launched the <a href=https://www.census.gov/newsroom/releases/archives/2010_census/cb12-146.html>Alternative Questionnaire Experiment (AQE)</a> to compare different Census questionnaire design strategies. Five years later came the <a href=https://www.census.gov/programs-surveys/decennial-census/2020-census/research-testing/testing-activities/2015-census-tests/national-content-test.html>National Content Test (NCT)</a>, in which different questionnaires were sent to a statistically representative sample of approximately 1.2 million households in the United States and Puerto Rico.</p>
<p>I had the opportunity to review the results of both tests and assess which questionnaire design results in the most accurate count of the U.S. population. That meant taking three interrelated components into consideration. The first is increased reporting: Which questions were people most likely to answer? The second is decreased non-reporting: Which questions were more likely to get groups who are susceptible to non-reporting (including poor families who get evicted, immigrants who do not read or understand English, and undocumented migrants who may fear government officials) to respond? The third is increased, detailed reporting: Which questions yield more information about the respondents?</p>
<p>The design of a question itself affects how people answer it. Take the race and ethnicity question. People who identify as Asian or Hispanic answer it differently depending on how it is presented on the Census form.</p>
<p>In the 2010 Census, Hispanic origin and race were listed as two separate questions. In both the AQE and NCT, the Census Bureau tested the option of combining race and Hispanic origin into one question, which it refers to as the “combined format.” In addition, it tested which combined format would elicit the most detailed reporting on origin.</p>
<p>One option was to list the racial categories only, with a space for respondents to write in their detailed origin. A second option was to list the racial categories and also provide check boxes for examples of detailed origin, along with the option to write in one’s origin.</p>
<p><img decoding="async" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2017/04/Census-Race-Question-Different-Formats-600x616.png" alt="census-race-question-different-formats" width="511" height="525" class="aligncenter size-large wp-image-85022" /></p>
<p>More than 70 percent of self-identified Hispanics reported being Hispanic when it was offered as a race option (the combined option). When they are not presented with this option, as in the 2010 Census, self-identified Hispanics are significantly more likely to check “some other race” or to mark two or more races. Both results indicate that the combined option—in which Hispanic is listed as a race category—lets Hispanics report their identity more accurately.</p>
<p>Similarly, Asians were most likely to mark their race, including their detailed race, when they were provided with a check box for their national origin (for example, Chinese, Filipino, Asian Indian, Vietnamese, Korean, Japanese). When these check boxes were removed and Asians were presented with only a space to write in their national origin, they were less likely to report it. The difference is significant: the check-box format yielded a 97.4 percent response rate among Asian Americans, a rate that plummeted to 92.6 percent when only a write-in option was provided.</p>
<p>Detailed reporting among Asians is critical because it allows researchers to disaggregate data, which is essential to identifying health, educational, and economic disparities among Asian ethnic groups.</p>
<p>Such disaggregation may sound technical and mathematical, but it can have profound human impacts. For example, having data specific to different sub-groups on disease rates, health insurance coverage rates, and birth and death rates can allow policy makers and community organizations to make more informed decisions about how to best serve these populations. </p>
<p>Some Asian ethnic groups are more susceptible to certain health risks: Men and women of Vietnamese origin experience the highest rates of lung cancer among all Asian American subgroups, while men and women of Korean origin have some of the highest colorectal cancer rates. Such data can guide outreach on health insurance coverage; while 13 percent of Asian Americans lack health insurance, the rate is as high as 20 percent among Koreans. </p>
<p>In the state of California, there’s been broad recognition of the importance of breaking out such data. Last fall, Governor Jerry Brown signed legislation directing the Department of Public Health to disaggregate data for the Asian American, Native Hawaiian, and Pacific Islander populations on or after July 1, 2022. Following suit, the University of California and California State University have agreed to begin releasing disaggregated data on admissions, enrollment, and graduation rates—data that will help to unveil the wide disparity in educational attainment among Asian Americans. </p>
<p>Data disaggregation is a powerful weapon to dismantle the dominant narrative of Asian Americans as the model minority, which has resulted in their exclusion from policy debates on poverty, health care, and education. While Asian Americans may be touted as academic high achievers, one-third of Cambodians, Laotians, and Hmong do not graduate from high school. Data disaggregation exposes these gaping differences among Asian ethnic groups, and points to the dire need for the federal resources to help boost the educational outcomes of these groups, which are essential to immigrant and second-generation integration. </p>
<p>If the 2020 Census provides only a write-in option to list one’s origin, we will lose a lot of disaggregated data, and be unable to identify the stark differences among U.S. Asians. We will also miss a great deal of information on the country’s growing and increasingly diverse Hispanic population.</p>
<p>You don’t have to be on a committee like I am to weigh in on the Census. Ahead of potential revisions to the 2020 Census, the White House Office of Management and Budget has invited public comments for a second time. While the ultimate decision about potential changes rests in the hands of Congress, your opinion counts. <a href=https://www.federalregister.gov/documents/2017/03/01/2017-03973/proposals-from-the-federal-interagency-working-group-for-revision-of-the-standards-for-maintaining#open-comment>April 30 is the last day to weigh in</a>. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/04/25/census-must-frame-right-questions-race-national-origin/ideas/nexus/">Why the Census Must Frame the Right Questions on Race and National Origin</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/04/25/census-must-frame-right-questions-race-national-origin/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>What Happens When Personal Information Gets Weaponized</title>
		<link>https://legacy.zocalopublicsquare.org/2017/03/29/when-personal-information-gets-weaponized/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/03/29/when-personal-information-gets-weaponized/ideas/nexus/#respond</comments>
		<pubDate>Wed, 29 Mar 2017 07:01:55 +0000</pubDate>
		<dc:creator>By Michael Greenberger</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[Berggruen Institute]]></category>
		<category><![CDATA[cyberattack]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[privacy]]></category>
		<category><![CDATA[what does war look like in the cyber age?]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=84517</guid>
		<description><![CDATA[<p><i>Michael Greenberger is a professor at the University of Maryland Carey School of Law and the founder and director of the University of Maryland Center for Health and Homeland Security. The following is an edited version of a phone interview with him about data collection in the age of cyberwarfare.</i></p>
<p>When you’re talking about information that can be used, or useful, in conducting cyberwarfare, that type of data is different from the conventional identification data, which when released is an invasion of a person’s privacy, or could be used in a fraudulent manner. The missing cyberwarfare data is the data of companies like utilities, hospitals, ports, and other sorts of critical infrastructure. </p>
<p>The most feared and plausible cyberwarfare scenario is the crippling of the nation’s electric grid, which is the basis for the way we live our lives every day, especially insofar as it is the basis for providing critical </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/03/29/when-personal-information-gets-weaponized/ideas/nexus/">What Happens When Personal Information Gets Weaponized</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><i>Michael Greenberger is a professor at the University of Maryland Carey School of Law and the founder and director of the University of Maryland Center for Health and Homeland Security. The following is an edited version of a phone interview with him about data collection in the age of cyberwarfare.</i></p>
<p>When you’re talking about information that can be used, or useful, in conducting cyberwarfare, that type of data is different from the conventional identification data, which when released is an invasion of a person’s privacy, or could be used in a fraudulent manner. The missing cyberwarfare data is the data of companies like utilities, hospitals, ports, and other sorts of critical infrastructure. </p>
<p>The most feared and plausible cyberwarfare scenario is the crippling of the nation’s electric grid, which is the basis for the way we live our lives every day, especially insofar as it is the basis for providing critical healthcare needs of patients in medical facilities. <a href=http://www.gao.gov/products/GAO-07-39>About 85 percent</a> of our vital national infrastructure—dams, highways, tunnels, bridges, electrical grid, sewers—is owned privately. </p>
<p>So any attempt to mandate the provision of that data, either to the United States government to develop counter-measures to cyberwarfare, or even to states and localities, has been strenuously resisted by the private sector. It has been resisted knowingly, as a deliberate obstacle set up to protect things like trade secrets and intellectual property, and unknowingly, as a knee-jerk adverse reaction to turning over any private commercial data to government institutions. </p>
<p>The government’s inability to access private sector data is probably the most fundamental weakness of our ability to fend off cyberwarfare attacks. The methodology that is in place now is, at best, based on incentive-driven cyber regulation, which tries to make it attractive to private organizations to turn data over to allow that data to be protected. However, volunteerism clearly is not working here, and it is therefore not enough to set up a defense to a full-scale, damaging infrastructure cyberattack. </p>
<p>Protecting the privacy of individual U.S. citizen data, where the government wants to collect mass amounts of private information, raises different kinds of issues. At the University of Maryland Carey Law School, I taught a class on “National Secrets, Foreign Intelligence and Privacy.” That entire course was driven by the Edward Snowden security leaks in June of 2013. Snowden demonstrated that there were various avenues the United States government was using to access private information of United States citizens. </p>
<p>The two central legal authorities that the United States was relying on were Section 215 of the Patriot Act and Section 702 of the Foreign Intelligence Surveillance Act Amendments of 2008. Nobody outside of the federal government—and I would say most of the federal government itself—understood that these kinds of surveillance activities were being undertaken. </p>
<p>Section 215 was the vehicle through which the National Security Agency vacuumed up so-called metadata, which is information about who a citizen calls. The data shows the phone number of the person placing the call, the number of the person being called, and the amount of time that the call lasts.</p>
<p>It is not a content-driven, wiretapping surveillance—in other words, you do not know the substance or content of the call. But an outsider can tell a lot about somebody’s private life by knowing who they call on a regular basis and how long that call lasts. Knowledge of frequent calls to an HIV/AIDS advice line, Planned Parenthood, or a psychiatrist, tells the reviewer of this data important information that the caller would otherwise clearly want to be private.</p>
<p>The intrusiveness of this collection was further aggravated by the fact that when the National Security Agency dipped into the metadata, it could look not only at the telephone traffic between one caller and another, but could also search “three hops” out from that data.  </p>
<p>The first hop is “A calls B,” and the NSA could get that metadata; then the NSA could get the metadata of everybody that B calls. That’s hop #2. Hop #3 is the metadata of everybody those second-hop contacts call. Therefore, with three hops you have a spider web of the metadata of hundreds of thousands of calls. When the program was made public in June of 2013 by the Snowden leaks, President Obama pledged soon thereafter: “We’re only going to collect two hops, not three hops.” </p>
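<p>To make the scale concrete, here is a minimal, purely illustrative sketch in Python of how hop-by-hop expansion over call metadata balloons. The call_graph structure and the function name are hypothetical stand-ins for illustration only, not a description of any actual agency system.</p>
<pre><code>
# Purely illustrative sketch of hop-by-hop expansion over call metadata.
# Assumes a hypothetical call_graph: a dict mapping each phone number to
# the set of numbers it was in contact with. Not any agency's real system.
from collections import deque

def numbers_within_hops(call_graph, seed, max_hops):
    """Return every number reachable from `seed` within `max_hops` contacts."""
    reached = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for contact in call_graph.get(number, set()):
            if contact not in reached:
                reached.add(contact)
                frontier.append((contact, depth + 1))
    return reached

# With roughly 100 contacts per number, three hops can reach on the order of
# 100 + 100**2 + 100**3 distinct numbers -- the "spider web" described above.
</code></pre>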
<div class="pullquote"> Edward Snowden demonstrated that there were various avenues the United States government was using to access private information of United States citizens. Nobody outside of the federal government—and I would say most of the federal government itself—understood that these kind of surveillance activities were being undertaken. </div>
<p>Then the next question becomes: How does the NSA access the details of the metadata it has collected? Originally, experienced intelligence officers supervised requests to access the specifics of the metadata. That was considered quite troublesome legally, because one basic tenet of a constitutional search is that a warrant is obtained from an independent court. By having intelligence officers decide whether the metadata could be searched, that tenet was violated. </p>
<p>One of the first things President Obama did in January 2014, besides reducing the three “hops” to two, was to impose the requirement that if the metadata was to be searched, the NSA, through the Justice Department, had to get a foreign intelligence surveillance warrant from the Foreign Intelligence Surveillance Court showing that there was probable cause that searching the metadata would concern an agent of a foreign power.  </p>
<p>Even with President Obama’s adjustments, Section 215 was criticized broadly, both from the left by civil libertarians and from the right by libertarians.  </p>
<p>The USA Freedom Act of 2015 ended the NSA’s bulk collection of metadata under Section 215. Instead, that statute required phone service providers to hold onto their metadata records for a longer period of time, and if the NSA needed access to that metadata, it could go to the Foreign Intelligence Surveillance Court to obtain a warrant to examine the metadata if it showed that there was reasonable, articulable suspicion (“RAS”) that the metadata would lead to, <i>inter alia</i>, terrorist activity. Of course, showing RAS is, in legal terms, not “probable cause” of criminal activity, the classic threshold for a lawful search and seizure under the Fourth Amendment. At some point, therefore, the constitutionality of this new metadata provision may be challenged.  </p>
<p>Section 215 was the legal basis of the first of the two legs of the surveillance revealed by the Snowden leaks. The other is based on Section 702 of the Foreign Intelligence Surveillance Act Amendments of 2008. Section 702 is driven by the fact that the target of the requested surveillance is reasonably believed to be outside the United States and is not a U.S. citizen, circumstances under which the Fourth Amendment would not apply.   </p>
<p>But, in operation, Section 702 surveillance need only look to whether the communication at any time left the United States. Any email that at any time is routed outside our country—as many emails are—is subject to Section 702 surveillance. So that raises a very deep concern, because domestic emails are therefore subject to an NSA Section 702 search.  </p>
<p>Our government is always quick to say: “We do not surveil United States citizens and only do so with a warrant.” Well, the 702 is not a warrant-driven mechanism as a predicate to the search.  (The government needs to get FISA court clearance on a yearly basis for the <i>methodology</i> of 702 searches, but it is not required to get a warrant on a case-by-case basis.) </p>
<p>The NSA and the Justice Department are also quick to say that if, through Section 702 surveillance, they pick up anything that is entirely domestic, the government “minimizes” the search, or does not allow it into the intelligence inventory. However, the 702 exceptions to minimization are so broad that they swallow up the entire concept of minimization. The 702 statutory authority is set to expire later this year, and there is going to be a major debate over whether it should be extended. Section 702 has many supporters.</p>
<p>The Supreme Court has not ruled definitively on these surveillance issues. Even among the present eight Supreme Court justices, there is a likely majority who have signaled their doubts about surveillance that does not strictly follow Fourth Amendment “probable cause” jurisprudence. Even Justice Antonin Scalia, before his passing, was a strict enforcer of classic Fourth Amendment search and seizure doctrine. Moreover, there is evidence that Judge Gorsuch, if confirmed, will follow Scalia’s lead in this regard. </p>
<p>To date, challenges to these kinds of surveillance have failed because of the inability to demonstrate “standing” (precise injury from the surveillance) in court. The one Section 702 case to reach the Supreme Court in 2013 foundered on the inability of the plaintiffs to show with certitude that their communications had been read or heard. However, standing will doubtless be established in a case where evidence obtained under Section 702 is used to convict a criminal defendant. The defendant will likely have failed to suppress introduction of the evidence on grounds that it was obtained without a showing of probable cause. That criminal defendant will doubtless have standing and, if the case reaches the Supreme Court, that court will likely be able to resolve these issues on the merits. </p>
<p>In the end, one of the biggest cybersecurity problems is that the U.S. military-intelligence complex has far too easy access to private information that can be damaging to the individual, information that we reasonably expect to be kept private and not put into the hands of the government without some showing that it is directly related to a critical national need. The government has too-ready access to far too much of everyone’s private information, and that access can be obtained without demonstrating to an independent court that there is a strong national need.</p>
<p>Another major cyber problem is that too many U.S. commercial interests are not using the best cyber practices and technology to protect sensitive data that, if stolen, enables crippling cyberwarfare against the United States. I do not think that failure has been taken seriously enough. So losing your credit card information, your passport information, and other forms of privacy happens too easily. This is troublesome and worrying. But it does not pose the clear and present danger to our collective security that having our infrastructure data hacked, and a broad-based infrastructure breakdown, would.</p>
<p>The attempt to minimize the government’s access to personal private information is not a partisan issue. Libertarians on the right and civil libertarians on the left feel strongly that the government’s ability to invade privacy must be limited. However, it is hand-to-hand combat in Washington on these issues, and should there be another devastating terror attack, I think the scales will tip to the side of the government being able to collect whatever it wants, whenever it wants it. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/03/29/when-personal-information-gets-weaponized/ideas/nexus/">What Happens When Personal Information Gets Weaponized</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/03/29/when-personal-information-gets-weaponized/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Cyber Age Demands a New Understanding of War—but We’d Better Hurry</title>
		<link>https://legacy.zocalopublicsquare.org/2017/03/29/cyber-age-demands-new-understanding-war-wed-better-hurry/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/03/29/cyber-age-demands-new-understanding-war-wed-better-hurry/ideas/nexus/#respond</comments>
		<pubDate>Wed, 29 Mar 2017 07:01:15 +0000</pubDate>
		<dc:creator>By James Der Derian</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[Berggruen Institute]]></category>
		<category><![CDATA[cyber age]]></category>
		<category><![CDATA[cyber warfare]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[war]]></category>
		<category><![CDATA[what does war look like in the cyber age?]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=84503</guid>
		<description><![CDATA[<p>It seems highly reckless to prod into flight Hegel’s Owl of Minerva—the goddess of wisdom <i>and</i> war—for an assessment of war in a cyber age that is barely 30 years old.</p>
<p>You will not find it in the <i>Oxford English Dictionary</i>, but “cyberwar” made its first inauspicious appearance in 1987 when an anonymous editor from <i>Omni</i>—Bob Guccione’s other magazine—attached the neologism as a title for an article by Owen Davies, an <i>Omni</i> editor. Although he never used the word or developed the idea of cyberwar, Davies pretty much nailed the coming of robotic warfare. </p>
<p>But something was in the air. In 1987 and <i>avant la lettre</i>, cyberwar in the narrow sense of an attack by malicious code on a computer system, communications network, or critical infrastructure, had a more plausible debut as the “Jerusalem virus” aka the “PLO virus,” a logic bomb that would pop up on </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/03/29/cyber-age-demands-new-understanding-war-wed-better-hurry/ideas/nexus/">The Cyber Age Demands a New Understanding of War—but We’d Better Hurry</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>It seems highly reckless to prod into flight Hegel’s Owl of Minerva—the goddess of wisdom <i>and</i> war—for an assessment of war in a cyber age that is barely 30 years old.</p>
<p>You will not find it in the <i>Oxford English Dictionary</i>, but “cyberwar” made its first inauspicious appearance in 1987 when an anonymous editor from <i>Omni</i>—<a href=https://en.wikipedia.org/wiki/Bob_Guccione>Bob Guccione’s other magazine</a>—attached the neologism as a title for an article by Owen Davies, an <i>Omni</i> editor. Although he never used the word or developed the idea of cyberwar, Davies pretty much nailed the coming of robotic warfare. </p>
<p>But something was in the air. In 1987 and <i>avant la lettre</i>, cyberwar in the narrow sense of an attack by malicious code on a computer system, communications network, or critical infrastructure, had a more plausible debut as the “Jerusalem virus” aka the “PLO virus,” a logic bomb that would pop up on any given Friday the 13th.  Top that, Jason.</p>
<p>The next recorded use of “cyberwar” was in 1991. A young academic steeped in too much William Gibson and Bruce Sterling, after watching way too much of the 24/7 coverage of the first Iraq war, delivered a paper at the Second Annual Cyberspace Conference in Santa Cruz, California: “Cyberwar, Videogames and the Gulf War.” Shortly afterward he was asked by the short-lived PBS television show <i>Late City</i> to distill the 100-hour TV war into a two-minute video buzz clip (set to <i>Sweet Bird of Truth</i> by The The).  He gave the concept its first definition: “a new virtual and consensual reality, the first <i>cyberwar</i>, in the sense of a technologically generated, televisually linked and strategically gamed form of violence.”</p>
<p>Both were promptly forgotten. I took some solace in Nietzsche, who said only that which has no history can be defined.  </p>
<p>But then history responded with a vengeance:  Just about every major war since Iraq had a cyber edge to it. To be sure, acts of primal if not always organized violence would continue—all too often in the name of creation myths that would not be out of place in the Stone or Bronze ages—on a daily basis by and against tribes, nations, and superpowers.  </p>
<p>Many of these acts of organized violence continue to fit the classical definitions presented early in the 19th century by the Prussian Carl von Clausewitz, who variously defined war as a duel on a larger scale, a forceful act to compel others to do our will, and a continuation of politics by other means. The contemporary landscape of world politics is littered with <i>casus belli</i> that would not be unfamiliar to Clausewitz, or for that matter, to his eminent precursors like Machiavelli and Hobbes, who identified <i>wars of gain</i> (produced by imperial, economic, and military struggles for dominance), <i>wars of fear</i>  (prompted by perceptions of a rising power or threatening evil), and <i>wars of doctrine</i>  (caused by the clash of monolithic faiths and universalist ideologies). </p>
<p>But Al Qaeda, ISIS, and other non- and wannabe-state actors keep crashing the <a href=https://en.wikipedia.org/wiki/Westphalian_sovereignty>Westphalian system</a>. Today’s new warriors intent on challenging the state’s monopoly on violence—like the insurgent, jihadist, or private militia—are not that far removed from their earlier counterparts, like the pirate, mercenary, and holy crusader. Even the <i>guerre du jour</i> of “hybrid war,” the corrosive mix of private criminality, public bellicosity, and authoritarian politics that scars the residual borders of the Cold War, has more than a hint of the medieval in the interplay of overlapping sovereignties, polymorphous combatants, and clashing cosmologies. </p>
<div class="pullquote"> As everything and everyone becomes connected, it’s very hard to confine cyberwar to a discrete place or bounded time. A few clicks, several thousand shares, and an incident escalates from a local to regional to international crisis. </div>
<p>As long as global violence remains a viable, sometimes the only option in the face of intractable political differences, social injustices, and cultural struggles for recognition, war in one form or another will find a way. States, democratic or not, might be less inclined to initiate violence, but non-, para-, and anti-state actors have proven to be willing as well as able to use networked technology to wage asymmetric warfare—which in turn prompts over-reactions by states and furthers cycles of mimetic violence.</p>
<p>Classical war persists, as does the effort by new actors to offset disadvantages through new technologies. Even Clausewitz, ever the dialectician, acknowledged that “every age had its own peculiar forms of war, its own restrictive conditions, and its own prejudices.”</p>
<p>What is most <i>peculiar</i> about war in a “cyber age”? Depending on whether one goes back to the Greek (<i>kubernētēs</i> or “steersman”), Norbert Wiener (“cybernetics,” 1948), or William Gibson (“cyberspace,” 1984), “cyber” suggests everything from a control system with a feedback capacity, to a technologically induced consensual hallucination, to a 400-pound hacker (<i>pace</i> Trump) subverting the U.S. elections. </p>
<p>Dating the cyber <i>age</i> is no easier. Someday archeologists will sift through the ruins of Bell Labs, find wire etchings in germanium and silicon and declare 1947, give or take a year, as point zero, from which microprocessors, packet switches, and fiber optics as well as digital code, information theory, and networked systems soon followed.  </p>
<p>However, science will not capture the ghost in the machine. For that, we best go back to the originating myths. Cyber is, literally, as old as the Bible and other holy texts in which gods “steer” or “govern” the universe. In the Judeo-Christian version, those “who have no direction (<i>kubernēsis</i>) fall like leaves” (Proverbs 11:14); those who prosper understand that “with strategic planning (<i>meta kubernēseōs</i>) war is conducted&#8221; (Proverbs 24:6). Leaping a millennium or two forward, our techno-deities might not be as omniscient or omnipotent as past gods; but, weaponized and sanctioned by national security, they deter, disrupt, and if necessary destroy our enemies with relative impunity. Obama got religion, ordering 10 times the number of drone attacks executed by Bush; barely two months in office, President Trump increased them by another 432% over Obama.</p>
<p>Perhaps the most peculiar characteristic of war in a cyber age is how well it resists the traditional <i>restrictions</i> of warfare. As everything and everyone becomes connected, it’s very hard to confine cyberwar to a discrete place or bounded time. A few clicks, several thousand shares, and an incident escalates from a local to regional to international crisis. This is the force-multiplier effect of the cyber age, with 9/11 as the most seminal and inspirational example. Access to the internet and flight simulators made it possible for Osama bin Laden and 19 kamikaze fanatics to topple the World Trade Center, hit the Pentagon, kill nearly 3,000 people, and cause billions of dollars in damages (trillions if we include second-order effects, like the Iraq War and the rise of ISIS).</p>
<p>If there is a <i>prejudice</i> to war in the cyber age, it can be found in the conceit that virtualization makes war more virtuous. Rather than resorting to the convention of bombs, the United States and Israel inserted the Stuxnet virus to degrade the Iranian nuclear weapon program; no matter that the virus proved to be less than a precise munition and rapidly spread to non-targeted industrial platforms. Wikileaks hacked thousands of embassy cables to make U.S. diplomacy more transparent and democratic; no matter the collateral damage done to alliances and coalition efforts to restrain anti-democratic regimes. Drones pursue a cleaner kill; no matter the virtual terror induced upon whole populations.</p>
<p>Thirty years on, I think it is safe to say that we have not seen the worst of war in the cyber age. With so many networked actors operating simultaneously across multiple levels of power, prediction, pre-emption, or restriction of cyberwar is exceptionally difficult. Distinguishing intentional from accidental acts is hard. Knock-on effects will grow.</p>
<p>The cyber advantage might now go to the most technologically advanced powers, but the law of uneven development gives latecomers the edge. Which is why we should be asking now, before rather than after the Owl of Minerva takes flight at dusk, what war will look like in the <a href=https://projectqsydney.com/><i>quantum age</i></a>.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/03/29/cyber-age-demands-new-understanding-war-wed-better-hurry/ideas/nexus/">The Cyber Age Demands a New Understanding of War—but We’d Better Hurry</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/03/29/cyber-age-demands-new-understanding-war-wed-better-hurry/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>In California, Big Data Is Getting the Wrong People Arrested</title>
		<link>https://legacy.zocalopublicsquare.org/2016/12/30/california-big-data-getting-wrong-people-arrested/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/12/30/california-big-data-getting-wrong-people-arrested/ideas/nexus/#comments</comments>
		<pubDate>Fri, 30 Dec 2016 08:01:29 +0000</pubDate>
		<dc:creator>By Elizabeth Joh</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[Arizona State University]]></category>
		<category><![CDATA[arrest]]></category>
		<category><![CDATA[ASU]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[California]]></category>
		<category><![CDATA[crime]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[Future Tense]]></category>
		<category><![CDATA[Law]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[wrongful arrest]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=82429</guid>
		<description><![CDATA[<p>Managing information is central to the criminal justice system, and so it’s inevitable that mistakes happen. Names get confused, files lost. When these errors occur, the police can mistakenly arrest or detain people with no legal cause. </p>
<p>But what happens when software is responsible for a wrongful arrest or detention?</p>
<p>On Aug. 1, 2016, Alameda County, California, replaced its ’70s-era case management system with new software, Tyler Technologies’ Odyssey Case Manager. This wasn’t a radical decision: Most counties around the country use some kind of software to process information about the people in their courts. When a judge issues or recalls an arrest warrant, when a defendant posts bail—all of this is data that the courts and the police rely upon to make decisions about whom to detain, arrest, or release.</p>
<p>But since the software was rolled out in this Northern California county, the public defender’s office has learned of </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/12/30/california-big-data-getting-wrong-people-arrested/ideas/nexus/">In California, Big Data Is Getting the Wrong People Arrested</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Managing information is central to the criminal justice system, and so it’s inevitable that mistakes happen. Names get confused, files lost. When these errors occur, the police can mistakenly arrest or detain people with no legal cause. </p>
<p>But what happens when software is responsible for a wrongful arrest or detention?</p>
<p>On Aug. 1, 2016, Alameda County, California, replaced its ’70s-era case management system with <a href=http://arstechnica.com/tech-policy/2016/12/court-software-glitches-result-in-erroneous-arrests-defense-lawyers-say/>new software</a>, Tyler Technologies’ Odyssey Case Manager. This wasn’t a radical decision: Most counties around the country use some kind of software to process information about the people in their courts. When a judge issues or recalls an arrest warrant, when a defendant posts bail—all of this is data that the courts and the police rely upon to make decisions about whom to detain, arrest, or release.</p>
<p>But since the software was rolled out in this Northern California county, the <a href=https://twitter.com/JodiHernandezTV/status/803822733683519488>public defender’s office</a> has learned of dozens of cases in which people have been wrongfully arrested, detained in jail when they should have been released, or erroneously told to register as sex offenders. For example, in September four police officers showed up at the home of a <a href=http://www.sfchronicle.com/bayarea/article/Alameda-County-s-new-software-system-blamed-for-10643452.php>24-year-old man in Fremont to arrest him</a>. An arrest warrant had previously been issued for his failure to appear in court on a drug possession charge but it had been dismissed. Yet the warrant mistakenly remained active in the court’s new Odyssey system, so the man was arrested. There have been so many reported errors—on a “<a href=http://www.eastbaytimes.com/2016/11/29/public-defender-to-appeal-to-higher-court-over-alameda-county-court-software-snafus/>semi-daily basis</a>,” according to the <i>East Bay Times</i>—that the Office of the Alameda County Public Defender has filed <a href=https://www.documentcloud.org/documents/3228162-WWMADMNP05-20161115-113917.html>hundreds of identical motions</a> asking the court to keep accurate records. Similar problems have been reported in some of the other 25 counties in the state with Odyssey contracts, prompting the creation of a “<a href=http://arstechnica.com/tech-policy/2016/12/court-software-glitches-result-in-erroneous-arrests-defense-lawyers-say/>California Tyler User Group</a>” for court staff. Alameda County itself has decided not to use Odyssey for its family, probate, or civil matters.</p>
<p>No one yet seems to understand the source of the errors in Odyssey’s case management software. For the moment, many of the mistakes appear to result from a <a href=http://www.bbc.com/news/technology-38153992>user interface</a> for court employees that is far more complicated than the previous system. The software manufacturer, Tyler Technologies, has had little comment. Yet this 2016 problem reflects concerns the Supreme Court raised more than 20 years ago.</p>
<p>In 1991, a police officer arrested Isaac Evans after an identification check during a traffic stop turned up an outstanding arrest warrant. The arrest allowed the officer to search Evans’ car, which turned up a bag of marijuana and led to a drug possession charge. </p>
<div class="pullquote"> Criminal cases are individual, but in the age of big data, problems and solutions have to be systematic.</div>
<p>But the outstanding arrest warrant wasn’t valid—it had already been rescinded by the judge who originally issued it for several traffic violations. In such cases, the court clerk was supposed to have called the sheriff’s clerk, who would then remove the active warrant from the sheriff’s computer database. Had the procedure been followed in Evans’ case, it’s quite likely the marijuana would not have been found because no warrant would have justified his arrest. </p>
<p>Because his arrest was based on an invalid warrant, Evans’ Fourth Amendment rights had been violated. Normally, this would mean that the marijuana found as a result of the search would have been suppressed, under the <a href=https://www.law.cornell.edu/wex/exclusionary_rule>exclusionary rule</a>, which is intended to deter police misconduct. One exception to that rule, however, occurs when the police act in “good faith” on a legal decision that they believe to be correct, even if it later turns out to be wrong. In 1995, the Supreme Court decided in <a href=https://scholar.google.com/scholar_case?case=1629265977811655369&#038;q=arizona+v.+evans&#038;hl=en&#038;as_sdt=2006><i>Arizona v. Evans</i></a> that this exception applied to Evans’ case: The mistake was the fault of the court clerk, not of the arresting officer, who relied in good faith on the invalid warrant. </p>
<p>Isaac Evans lost because the Supreme Court was convinced that he fell victim to an isolated error. Justice Sandra Day O’Connor, for example, suggested that the court might reach a different conclusion in a case where “the recordkeeping system itself” contained “no mechanism to ensure its accuracy over time” and “routinely” resulted in false arrests. Likewise, Justice David Souter stated that if a computer database had no way of “keeping the number of resulting false arrests within an acceptable minimum limit,” the exclusionary rule might apply. The software mistakes occurring in Alameda County appear to be more systematic than isolated. </p>
<p>So how do individual criminal defendants identify and challenge the “fruits of computerized error,” as Souter called them in <i>Arizona v. Evans</i>?</p>
<p>The answer is that we don’t have a very good answer. At some point in the future, the Supreme Court may decide to apply the exclusionary rule in a case where systemic software errors violate Fourth Amendment rights. The Alameda County Superior Court will hear the public defender office’s request to intervene in the software errors in January. In the meantime, software problems like those experienced in Alameda County have tangible, real-life consequences. Moreover, a defendant who has fallen victim to these problems may never discover that the issue is the result of a systemic software problem rather than an isolated bookkeeping snafu.</p>
<p>These problems will likely worsen as software increasingly becomes embedded in everything we do. Odyssey clearly has its flaws, but at least court employees can identify a problem like a recalled arrest warrant, even if it’s too late to stop a wrongful arrest. With other types of software, however, errors may be difficult to detect. <a href=https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>Algorithms</a> designed to help judges decide bail, or to help the police identify suspicious behavior, may be hard for nonexperts to understand, let alone critique. The private companies that design and sell these products may also be reluctant to share their proprietary information.</p>
<p>Part of the problem is a bad fit. Criminal cases are individual, but in the age of big data, problems and solutions have to be systematic. When there are few incentives to audit databases or check for software errors, mistaken arrests and detentions should be no surprise. <a href=https://scholar.google.com/scholar_case?q=herring+v+us&#038;hl=en&#038;as_sdt=2006&#038;case=3829471951415365195&#038;scilh=0>Justice Ruth Bader Ginsburg</a> once stated that “electronic databases form the nervous system of contemporary criminal justice operations.” Today software, and increasingly sophisticated software, is part of that nervous system. Yet we fail to ensure the system’s health.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/12/30/california-big-data-getting-wrong-people-arrested/ideas/nexus/">In California, Big Data Is Getting the Wrong People Arrested</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/12/30/california-big-data-getting-wrong-people-arrested/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Just Because the RNC Says It Wasn’t Hacked Doesn’t Change Reality</title>
		<link>https://legacy.zocalopublicsquare.org/2016/12/23/just-rnc-says-wasnt-hacked-doesnt-change-reality/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/12/23/just-rnc-says-wasnt-hacked-doesnt-change-reality/ideas/nexus/#respond</comments>
		<pubDate>Fri, 23 Dec 2016 08:01:23 +0000</pubDate>
		<dc:creator>By Josephine Wolff</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[Arizona State University]]></category>
		<category><![CDATA[ASU]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[Future Tense]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[hacking]]></category>
		<category><![CDATA[Republican National Committee]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=82343</guid>
		<description><![CDATA[<p>Cybersecurity professionals are fond of saying that there are two kinds of companies: those that have been hacked and those that don’t yet know they’ve been hacked. Right now, the Republican National Committee appears to fall into a new category: an organization that refuses to acknowledge that it’s even vulnerable.</p>
<p>The CIA, in reporting on Russia’s intervention in the presidential election, determined that the RNC had been breached by Russian hackers during the election, but none of the information stolen from the party had been released, the <i>New York Times</i> reported. Following this report, RNC Chairman Reince Priebus, soon to become White House chief of staff, insisted in two television interviews that “the RNC was not hacked.” He apparently based this analysis on the fact that the FBI had previously reviewed its systems as well as the evidence provided by the “hacking detection systems” that the RNC has in place.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/12/23/just-rnc-says-wasnt-hacked-doesnt-change-reality/ideas/nexus/">Just Because the RNC Says It Wasn’t Hacked Doesn’t Change Reality</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Cybersecurity professionals are fond of saying that there are two kinds of companies: those that have been hacked and those that don’t yet know they’ve been hacked. Right now, the Republican National Committee appears to fall into a new category: an organization that refuses to acknowledge that it’s even vulnerable.</p>
<p>The CIA, in reporting on Russia’s intervention in the presidential election, <a href=http://www.nytimes.com/2016/12/09/us/obama-russia-election-hack.html >determined that the RNC had been breached by Russian hackers</a> during the election, but none of the information stolen from the party had been released, the <i>New York Times</i> reported. Following this report, RNC Chairman Reince Priebus, soon to become White House chief of staff, <a href=http://www.politico.com/story/2016/12/priebus-denies-report-rnc-was-hacked-232483 >insisted in two television interviews</a> that “the RNC was not hacked.” He apparently based this claim on the fact that the FBI had previously reviewed the RNC’s systems, as well as on the evidence provided by the “hacking detection systems” that the RNC has in place.</p>
<p>Anyone who confidently, categorically denies that his organization’s computer systems have been breached is either flat-out lying or dangerously delusional. The best-case scenario is the former. If the RNC is, in fact, aware that there are vulnerabilities in its systems (as there undoubtedly are) and is paying attention to whatever evidence the CIA has provided of breaches, then Priebus’ statements could amount to a (perhaps misguided) PR strategy, intended to reassure the public and deter other would-be attackers. (As a general rule, though, boldly claiming that you have never been hacked and trumpeting your infallible “hacking detection systems” is perhaps not the best way to deter potential intruders.)</p>
<p>But if Priebus is telling the truth—if he really has such blind faith in the technical tools that the RNC uses to detect intrusions, and refuses to believe, despite any evidence to the contrary, that those tools could possibly be evaded or that any deeper investigation could reveal things that previous ones had missed—then that’s much worse news. To proudly announce to the world not only that your security monitoring tactics have failed to prevent intrusions detected by other parties but also that you absolutely will not, under any circumstances, ever second-guess or investigate further beyond those tactics is to be ludicrously ignorant of how fallible such tools are. </p>
<div class="pullquote"> From a cybersecurity standpoint, the best thing to hope for in a person running a powerful organization—whether it’s a political party or the White House—is someone who will be constantly searching for evidence of breaches and intrusions. </div>
<p>From a cybersecurity standpoint, the best thing to hope for in a person running a powerful organization—whether it’s a political party or the White House—is someone who will be constantly searching for evidence of breaches and intrusions, someone who understands that the failure to find that evidence is a sign of a weak defense posture, not an absence of adversaries. Blind faith in the protective powers of technical tools is never a good sign—nor is the philosophy that no breach has occurred unless the stolen information has surfaced somewhere else, conclusively confirming a theft. </p>
<p>Many data breaches—especially those directed at governments for the purposes of espionage—do not result in public revelations of stolen information. The only reasons to reveal that you have successfully stolen data are to sell that data, to publicly humiliate or hurt the victims by influencing public opinion, or to extract a ransom from the victims. Often, incidents of political and economic cyberespionage are not motivated by any of these reasons, and the perpetrators therefore sit on their stolen data, quietly using it for their own purposes or waiting until it becomes useful.</p>
<p>Obviously, it’s easier to deny breaches that have no public component and harder to prove definitively that they’ve occurred. But just because the data stolen from the U.S. Office of Personnel Management has <a href=http://www.reuters.com/article/cybersecurity-usa-opm-idUSL1N12X1GP20151102 >not been sold</a> or published online does not mean that breach did not occur, or that it doesn’t matter, or that we should not be thinking about what we can learn from it and how we can better protect government agencies’ networks. </p>
<p>But to do that, you have to be willing to accept that some breaches are determined based on overwhelming evidence, absent any public announcement or confirmation by the perpetrators. Attackers often bypass technical defenses and protection mechanisms, and a slower, more in-depth investigation performed by more sophisticated analysts can reveal things an initial investigation may have missed; the fact that “evidence” of a hack hasn’t been found by the RNC is something to be concerned about, not something to brag about on national television. It’s the kind of thing you brag about when you want to advertise to adversaries not only how poor your network monitoring tools are but also how much false confidence you have placed in them. A government that refuses to accept or believe forensic evidence of data breaches is likely to be a very appealing—and very easy—target. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/12/23/just-rnc-says-wasnt-hacked-doesnt-change-reality/ideas/nexus/">Just Because the RNC Says It Wasn’t Hacked Doesn’t Change Reality</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/12/23/just-rnc-says-wasnt-hacked-doesnt-change-reality/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why Artificial Intelligence Won’t Replace CEOs</title>
		<link>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/#respond</comments>
		<pubDate>Wed, 02 Nov 2016 07:01:51 +0000</pubDate>
		<dc:creator>By Judy D. Olian</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[data analysis]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[information]]></category>
		<category><![CDATA[information science]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[UCLA]]></category>
		<category><![CDATA[UCLA Anderson]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=80798</guid>
		<description><![CDATA[<p>Peter Drucker was prescient about most things, but the computer wasn’t one of them. &#8220;The computer &#8230; is a moron,” the management guru asserted in a McKinsey Quarterly article in 1967, calling the devices that now power our economy and our daily lives “the dumbest tool we have ever had.” </p>
<p>Drucker was hardly alone in underestimating the unfathomable pace of change in digital technologies and artificial intelligence (AI). AI builds on the computational power of vast neural networks sifting through massive digital data sets or “big data” to achieve outcomes analogous, often superior, to those produced by human learning and decision-making. Careers as varied as advertising, financial services, medicine, journalism, agriculture, national defense, environmental sciences, and the creative arts are being transformed by AI. </p>
<p>Computer algorithms gather and analyze thousands of data points, synthesize the information, identify previously undetected patterns, and create meaningful outputs—whether a disease treatment, a face match </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/">Why Artificial Intelligence Won’t Replace CEOs</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Peter Drucker was prescient about most things, but the computer wasn’t one of them. &#8220;The computer &#8230; is a moron,” the management guru asserted in a <a href=http://www.mckinsey.com/business-functions/organization/our-insights/the-manager-and-the-moron>McKinsey Quarterly article</a> in 1967, calling the devices that now power our economy and our daily lives “the dumbest tool we have ever had.” </p>
<p>Drucker was hardly alone in underestimating the unfathomable pace of change in digital technologies and artificial intelligence (AI). AI builds on the computational power of vast neural networks sifting through massive digital data sets or “big data” to achieve outcomes analogous, often superior, to those produced by human learning and decision-making. Careers as varied as advertising, financial services, medicine, journalism, agriculture, national defense, environmental sciences, and the creative arts are being transformed by AI. </p>
<p>Computer algorithms gather and analyze thousands of data points, synthesize the information, identify previously undetected patterns, and create meaningful outputs—whether a disease treatment, a face match in a city of millions, a marketing campaign, new transportation routes, a crop harvesting program, a machine-generated news story, a poem, painting, or musical stanza—faster than a human can pour a cup of coffee.</p>
<p><a href="http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet">A recent McKinsey study</a> suggests that 45 percent of all on-the-job activities could be automated by deploying AI. That includes file clerks, whose jobs could become 80 percent automated, and CEOs, whose jobs could be 20 percent automated because AI systems radically simplify and focus their reading of reports, risk detection, and pattern recognition. </p>
<p>AI has been one of those long-hyped technologies that hasn’t transformed our whole world yet, but will. Now that AI appears ready for prime time, there is consternation, even among technologists, about the unbridled power that machines may have over human decision-making. Elon Musk has called AI &#8220;our biggest existential threat,” echoing Bill Joy’s 2000 warning in <i>Wired</i> magazine that “the future doesn’t need us.” On the other side, of course, are enthusiasts eager for smart machines to improve our lives and the health of the planet.</p>
<p>I’m on the side of <a href="http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html">Microsoft CEO Satya Nadella, who says</a> we should be preparing for the promise of ever smarter machines as partners to human decision-making, focusing on the proper role, and limitations, of AI tools. For business school educators like me who believe the future will indeed need us, the expanding power of AI or deep learning poses a challenge and an opportunity: How do we prepare students for the coming decades so that they embrace the power of AI, and understand its advantages for management and leadership in the future? </p>
<p>It would be a mistake to force every MBA graduate to become a data scientist. The challenge for business schools is to update our broadly focused curricula while giving our MBAs a greater familiarity and comfort level with data analytics. Tomorrow’s CEOs will need a better sense of what increasingly abundant and complex data sets within organizations can, and cannot, answer. </p>
<p>The sophistication and volume of data may be increasing, but history affords models of a decision maker’s proper relationship to data analytics. </p>
<p>Take D-Day. General Dwight D. Eisenhower sought as much data as possible to inform his decision on when to land hundreds of thousands of Allied forces on the beaches of Normandy in that fateful late spring of 1944. As Antony Beevor’s book on the battle and other accounts make clear, Eisenhower especially craved reliable meteorological data, back when weather forecasting was in its infancy. The general cultivated Dr. James Stagg, his chief meteorologist, and became adept not just at analyzing Stagg’s reports, but also at reading Stagg’s own level of confidence in any report.  </p>
<p>For months before the fateful decision to “embark upon the Great Crusade,” Eisenhower developed a keen appreciation for what meteorological forecasts could and could not deliver. In the end, as history knows, Stagg convinced him to postpone the invasion from June 5, when the predicted storm raged over the English Channel, to June 6, even as many others questioned Stagg’s call that the weather would soon clear.</p>
<div class="pullquote"> How do we prepare students for the coming decades so that they embrace the power of AI, and understand its advantages for management and leadership in the future?</div>
<p>No one would argue that Eisenhower should have become an expert meteorologist himself. His job was to oversee and coordinate all aspects of the campaign by collecting pertinent information, and assessing the quality and utility of that information to increase the invasion’s probability of success. Today, big data and the advent of AI expand the information available to corporate decision-makers. However, the role of a CEO in relation to data echoes the absorptive and judgmental function exercised by General Eisenhower in reading probabilities into his meteorologist’s weather reports.</p>
<p>It’s noteworthy that today, amidst all the talk of technological complexity and specialization across so much of corporate America, a Deloitte report prepared for our school found that employers looking to hire MBA graduates value prospective employees’ “soft skills” more than any others. They want to hire people with cultural competence and strong communication skills, who can work collaboratively in diverse teams and adapt continuously to new opportunities and circumstances in the workplace and market. </p>
<p>This isn’t just about intolerance for jerks in the office. It’s about a leader’s need to be able to synthesize, negotiate, and arbitrate between competing and conflicting environments, experts, and data. If there was once a time when corporate leaders were paid to make “gut check” calls even when essential information was lacking, today’s CEOs will increasingly have to make tough, interpretive judgment calls (a different type of “gut check”) in the face of excessive, often conflicting, information. </p>
<p>Those in the driver’s seat of institutions have access to an expanding universe of empirically derived insights about widely varying phenomena, such as optimal models for unloading ships in the world’s busiest ports in various weather conditions, parameters of loyalty programs that generate the ‘stickiest’ customer response, or talent selection models that yield both the most successful and the most diverse employment pools. </p>
<p>Corporate leaders will need to be discerning in their use of AI tools. They must judge the source of the data streams before them, ascertain their validity and reliability, detect less-than-obvious patterns in the data, probe the remaining “what ifs” they present, and ultimately make inferences and judgment calls that are more informed, nuanced around context, valid, and useful <i>because they are improved</i> by intelligent machines. Flawed judgments built on flawed or misinterpreted data could be even more harmful than uninformed ones, because the aura of data lends them an illusion of quasi-scientific authority.</p>
<p>As a project management tool, AI might prescribe optimal work routines for different types of employees, but it won’t have the sensitivity to translate those prescriptions into nuanced choices of one organizational outcome (e.g., equity in employee assignments) over another (family values). AI might pinpoint the best location for a new restaurant or power plant, but it will be limited in mapping the political and social networks that need to be engaged to bring the new venture to life. </p>
<p>Machines also lack whimsy. Adtech programs have replaced human ad buyers, but the ability to create puns or design campaigns that pull at our heartstrings will remain innately human, at least for the foreseeable future. </p>
<p>A new level of questioning and integrative thinking is required among MBA graduates. As educators we must foster learning approaches that develop these skills—by teaching keen data management and inferential skills, developing advanced data simulations, and giving students practice in probing and questioning the as-yet unknown. </p>
<p>In parallel to the ascendancy of machine power, the importance of emotional intelligence, or EQ, looms larger than ever to preserve the human connectivity of organizations and communities. While machines are expected to advance to the point of reading and interpreting emotions, they won’t have the capacity to inspire followers, the wisdom to make ethical judgments, or the savvy to make connections.</p>
<p>That’s still all on us. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/">Why Artificial Intelligence Won’t Replace CEOs</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
