<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>computers &#8211; Zócalo Public Square</title>
	<atom:link href="https://legacy.zocalopublicsquare.org/tag/computers/feed/" rel="self" type="application/rss+xml" />
	<link>https://legacy.zocalopublicsquare.org</link>
	<description>Ideas Journalism With a Head and a Heart</description>
	<lastBuildDate>Mon, 21 Oct 2024 07:01:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>How ‘Automation’ Made America Work Harder</title>
		<link>https://legacy.zocalopublicsquare.org/2021/09/02/automation-revolution-america-labor-work-history/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2021/09/02/automation-revolution-america-labor-work-history/ideas/essay/#respond</comments>
		<pubDate>Thu, 02 Sep 2021 07:01:26 +0000</pubDate>
		<dc:creator>By Jason Resnikoff</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[America]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[computer age]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[labor]]></category>
		<category><![CDATA[Labor Day]]></category>
		<category><![CDATA[workforce]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=122148</guid>
		<description><![CDATA[<p>The world confronts “an epochal transition.” Or so the consulting firm McKinsey and Company crowed in 2018, in an article accompanying a glossy 141-page report on the automation revolution. Over the past decade, business leaders, tech giants, and the journalists who cover them have been predicting this new era in history with increasing urgency. Just like the megamachines of the Industrial Revolution of the 19th and early 20th centuries—which shifted employment away from agriculture and toward manufacturing—they say that robots and artificial intelligence will make many, if not most, modern workers obsolete. The very fabric of society, these experts argue, is about to unravel, only to be rewoven anew.</p>
<p>So it must have come as a shock to them when they saw the most recent U.S. Department of Labor’s Bureau of Labor Statistics (BLS) report, which debunks this forecast. The agency found that between 2005 and 2018—the precise moment McKinsey </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2021/09/02/automation-revolution-america-labor-work-history/ideas/essay/">How ‘Automation’ Made America Work Harder</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The world confronts “an epochal transition.” Or so the consulting firm McKinsey and Company crowed in 2018, <a href="https://www.mckinsey.com/featured-insights/future-of-work/retraining-and-reskilling-workers-in-the-age-of-automation" target="_blank" rel="noopener">in an article accompanying a glossy 141-page report</a> on the automation revolution. Over the past decade, business leaders, tech giants, and the journalists who <a href="https://www.nytimes.com/2021/07/03/business/economy/automation-workers-robots-pandemic.html?action=click&amp;module=Top%20Stories&amp;pgtype=Homepage" target="_blank" rel="noopener">cover them</a> have been predicting this new era in history with increasing urgency. Just like the megamachines of the Industrial Revolution of the 19th and early 20th centuries—which shifted employment away from agriculture and toward manufacturing—they say that robots and artificial intelligence will make many, if not most, modern workers obsolete. The very fabric of society, these experts argue, is about to unravel, only to be rewoven anew.</p>
<p>So it must have come as a shock to them when they saw the most recent <a href="https://doi.org/10.21916/mlr.2021.4" target="_blank" rel="noopener">U.S. Department of Labor’s Bureau of Labor Statistics (BLS) report</a>, which debunks this forecast. The agency found that between 2005 and 2018—the precise moment <a href="https://www.mckinsey.com/~/media/mckinsey/industries/public and social sector/our insights/what the future of work will mean for jobs skills and wages/mgi-jobs-lost-jobs-gained-report-december-6-2017.pdf" target="_blank" rel="noopener">McKinsey pinpointed</a> as putting us “on the cusp of a new automation age”—the United States suffered a remarkable fall-off in labor productivity, with average growth about 60 percent lower than the mean for the period between 1998 and 2004. Labor productivity measures economic output (goods and services) against the number of labor hours it takes to produce that output. If machines are taking over people’s work, labor productivity should grow, not stagnate.</p>
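<p>To make the measure concrete, here is a minimal sketch of the arithmetic with invented index numbers, not actual BLS figures: productivity is output divided by hours worked, so if hours barely fall while output only inches up, measured productivity barely moves.</p>
<pre>
# Illustrative only: hypothetical index values, not BLS data.
output_start, hours_start = 100.0, 100.0
output_end, hours_end = 118.0, 113.0

productivity_start = output_start / hours_start   # 1.00
productivity_end = output_end / hours_end         # ~1.04

growth = productivity_end / productivity_start - 1
print(f"Productivity growth over the period: {growth:.1%}")
# If machines were truly replacing workers, hours_end would plunge while
# output held up, and this growth figure would be large, not small.
</pre>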
<p>The BLS, in stark contrast to management consultants and their ilk, is typically restrained in its assessments. Yet this time, its researchers called cratering productivity “one of the most consequential economic phenomena of the last two decades…a swift rebuke of the popular idea of that time that we had entered a new era of heightened technological progress.”</p>
<p>Many ordinary people have encountered this paradox: they are underemployed or unemployed but also, counterintuitively, working harder than ever before. Historically in the U.S., this phenomenon has typically accompanied public discussions of “automation,” the ostensible replacement of human labor with machine action. Mid-20th-century automobile and computer industry managers often talked about “automation” in an effort to play off the technological enthusiasm of the era. But rather than a concrete technological development, “automation” was an ideological invention, one that has never benefited workers. I use scare quotes around “automation” as a reminder that the substance of it was always ideological, not technical. Indeed, from its earliest days, “automation” has meant the mechanized squeezing of workers, not their replacement.</p>
<p>Which is why, ever since the end of World War II, what employers have called “automation” has continued to make life harder, and more thankless, for Americans. Employers have used new tools billed as “automation” to degrade, intensify, and speed up human labor. They have used new machines to obscure the continuing presence of valuable human labor (consider “automated” cash registers where consumers scan and bag their own groceries—a job stores used to pay employees to do—or “automated” answering services where callers themselves do the job of switchboard operators). “All Automation has meant to us is unemployment and overwork,” reported one autoworker in the 1950s; another noted that “automation has not reduced the drudgery of labor…to the production worker it means a return to sweatshop conditions, increased speedup and gearing the man to the machine, instead of the machine to the man.”</p>
<div class="pullquote">Many ordinary people have encountered this paradox: they are underemployed or unemployed but also, counterintuitively, working harder than ever before.</div>
<p>There is no better example of the threat and promise of “automation” than the introduction of the beloved electronic digital computer. The first programmable electronic digital computers were invented during the Second World War to break Nazi codes and perform the enormous calculations necessary for the construction of an atomic bomb. Well into the early 1950s, computers remained for the most part associated with high-level research and cutting-edge engineering. So, at first, it was by no means obvious how a company might use an electronic digital computer to make money. There seemed little computers could offer businessmen, who were more interested in padding profits than decrypting enemy ciphers.</p>
<p>It was left to management theorists, hoping to build up and profit from the budding computer industry, to create a market where none yet existed. First among them was John Diebold, whose 1952 book <i>Automation</i> not only made “automation” a household term, but also introduced the notion that the electronic digital computer could “handle” information—a task that until then had been the province of human clerical workers. “Our clerical procedures,” wrote Diebold, “have been designed largely in terms of human limitations.” The computer, he told a new generation of office employers, would allow the office to escape those human limits by processing paperwork faster and more reliably.</p>
<p>Office employers of the early 1950s found this message attractive—but not because of the allure of raw calculating power, or utopian fantasies of machines that automatically wrote finished briefs. They were worried about unionization. With the end of the Second World War and the rise of U.S. global power, American companies had hired an unprecedented number of low-wage clerical workers to staff offices, the vast majority of them women (whom employers could pay less because of sexist norms in the workplace). Between 1947 and 1956, clerical employment doubled, from 4.5 to 9 million people. By 1954, one out of every four wage-earning women in the United States was a clerical worker.</p>
<div id="attachment_122154" style="width: 310px" class="wp-caption alignright"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-122154" class="size-medium wp-image-122154" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-300x239.jpg" alt="How ‘Automation’ Made America Work Harder | Zocalo Public Square • Arizona State University • Smithsonian" width="300" height="239" srcset="https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-300x239.jpg 300w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-600x479.jpg 600w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-768x613.jpg 768w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-250x199.jpg 250w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-440x351.jpg 440w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-305x243.jpg 305w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-634x506.jpg 634w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-963x768.jpg 963w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-260x207.jpg 260w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-820x654.jpg 820w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-376x300.jpg 376w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-682x544.jpg 682w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v-150x120.jpg 150w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2021/09/service-pnp-cph-3c10000-3c18000-3c18400-3c18471v.jpg 1024w" sizes="(max-width: 300px) 100vw, 300px" /><p id="caption-attachment-122154" class="wp-caption-text">“Automation” swept American businesses after the Second World War, but at first people weren’t sure how to use computers—invented for complex computational tasks—in the office. Here, a man prepares a Univac computer to predict a winning horse, in 1959. Courtesy of Library of Congress / Herman Hiller, photographer</p></div>
<p>The boom in low-wage clerical labor in the office suite shook employers. Not only were payrolls growing, but offices appeared ever more proletarian, more factory floor than management’s redoubt. Business consultants wrote reports with titles such as “White-Collar Restiveness—A Growing Challenge,” and office managers started to worry. They installed computers in the hopes they might reduce the number of clerical workers necessary to run a modern office—or, as they put it, they bought computers in the hopes they could “automate” office labor.</p>
<p>Unfortunately for them, the electronic digital computer did not reduce the number of clerical workers it took to keep an office afloat. In fact, the number of clerical workers employed in the United States continued to swell until the 1980s, along with the amount of paperwork. In the late 1950s, one manager complained that with computers in the office, “the magnitude of paperwork now is breaking all records” and that there were “just as many clerks and just as many key-punch operators as before.” While computers could process information quickly, data entry remained the task of the human hand, with clerical workers using keypunch machines to translate information onto machine-readable cards or tape that could then be “batch” processed.</p>
<p>Unable to remove human labor from office work, managers pivoted back to something they had done since the dawn of the Industrial Revolution: they used machines to degrade jobs so that they could save money by squeezing workers. Taking a page from the turn-of-the-20th-century playbook around “scientific management”—where manufacturing employers prescribed and timed every movement of the worker at their machine down to the fraction of a second—employers renamed the practice “automation”; and again, instead of saving human labor, the electronic digital computer sped up and intensified it. “Everything is speed in the work now,” one clerical worker in the insurance industry complained. Mary Roberge, who worked for a large Massachusetts insurance company in the 1950s, described a typical experience. In her office, there were 20 female clerical workers for every male manager. The clerical staff “stamped, punched, and endlessly filed and refiled IBM cards.” Bathroom breaks were strictly limited, and there were no coffee breaks. American employers gradually phased out skilled, well-paid secretarial jobs. Three out of every five people who worked with computers in the 1950s and 1960s were poorly remunerated clerical workers. Roberge made $47.50 a week, which, adjusting for inflation, would be less than $22,000 a year today. “That was extremely low pay,” she later reflected, “even in 1959.”</p>
<p>And yet in public, employers and computer manufacturers claimed that no one was performing this work, that the computer did it all on its own—that office work was becoming ever more “automated.” As one IBM promotional film put it: “IBM machines can do the work, so that people have time to think…Machines should work, people should think.” It sounded nice, but it simply wasn’t the case. “Automation” in the American office meant that more people were being forced to work like machines. Sometimes this allowed employers to hire fewer workers, as in the automobile, coal mining, and meatpacking industries where one employee now did the work of two. And sometimes it actually required hiring more people, as in office work.</p>
<p>This remains the story of “automation” today.</p>
<p>Take Slack, an online “collaboration hub” where employees and their managers share a digital space. On its <a href="https://slack.com/">website</a>, the company depicts the application as a tool that offers employees “the flexibility to work when, where and how you work best.” But the communication platform, of course, is the very tool that allows employers to compel workers to labor at home and on vacation, at the breakfast table in the morning, and riding the commuter train home at night.</p>
<div class="signup_embed"><div class="ctct-inline-form" data-form-id="3e5fdcce-d39a-4033-8e5f-6d2afdbbd6d2"></div><p class="optout">You may opt out or <a href="https://www.zocalopublicsquare.org/contact-us/">contact us</a> anytime.</p></div>
<p>Seventy years ago, bosses made employees use computing technology to get them to work more, for less. It’s a legacy that repackages harder and longer work as great leaps in convenience for the worker, and obscures the continued necessity and value of human labor. Rather than McKinsey’s “epochal transition,” the new world of automation looks all too familiar.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2021/09/02/automation-revolution-america-labor-work-history/ideas/essay/">How ‘Automation’ Made America Work Harder</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2021/09/02/automation-revolution-america-labor-work-history/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Did Moore’s Law Really Inspire the Computer Age?</title>
		<link>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/#respond</comments>
		<pubDate>Sun, 22 Mar 2020 22:00:11 +0000</pubDate>
		<dc:creator>By Rachel Jones</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Gordon Moore]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Moore's Law]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[Silicon Valley]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=110232</guid>
		<description><![CDATA[<p>In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb to a visual change on your screen. This speed has benefits (in 2020, there’s a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds). </p>
<p>What made the smartphone—and the rest of our unfolding digital transformation—possible? Many point to a prediction in April 1965, published in a then-little-read article toward the back end of the trade paper <i>Electronics</i>. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It’s been 55 years since the </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/">Did Moore’s Law Really Inspire the Computer Age?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb to a visual change on your screen. This speed has benefits (in 2020, there’s a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds). </p>
<p>What made the smartphone—and the rest of our unfolding digital transformation—possible? Many point to a prediction in April 1965, published in a then-little-read article toward the back end of the trade paper <i>Electronics</i>. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It’s been 55 years since the article’s publication, and it’s worth revisiting its original prediction—now known as Moore’s Law.</p>
<p>If you ask people today what Moore’s Law is, they’ll often say it predicts that every 18 months, engineers will be able to come up with ways to double the number of transistors they can squeeze onto a tiny computer chip, thus doubling its processing power. It’s a curious aspect of the law that this is not what Moore actually said, but he did predict consistent improvement in processing technology. Moreover, the world he anticipated did take shape, with his own work as founder of the chipmaker Intel creating much of the momentum necessary to turn his “law” into a self-fulfilling prophecy. </p>
<p>Initially, Moore had few notions of changing the world. Early in life, he discovered a love for chemistry—and though he was kept back at school for his inarticulate style, he excelled at practical activities, making bombs and rockets in a home-based laboratory. He went on to study chemistry at UC Berkeley under two Nobel laureates, and earned a Ph.D. at the California Institute of Technology in 1954.</p>
<p>Moore’s career trajectory coincided with the rise of the transistor, a device made of semiconductor material that can regulate electrical current flows and act as a switch or gate for electronic signals. As far back as the 1920s, physicists had proposed making transistors as a way to improve on the unreliable, power-hungry vacuum tubes that helped amplify signals on telephone lines, and that would be used in the thousands in computers such as ENIAC and Colossus. In 1939, William Shockley, a young Bell Labs researcher, revived the idea of the transistor and tried to fabricate a device; despite several failures, he continued on and in 1947 he and two colleagues succeeded in making the world’s first working transistor (for which they shared a Nobel Prize in Physics). In 1953, British scientists used transistors to build a computer, and <i>Fortune</i> declared it “The Year of the Transistor.”</p>
<p>In 1955, Shockley moved to Mountain View, California, to be near his mother. He opened a semiconductor laboratory and picked a handful of young scientists to join him, including Moore and his Intel co-founder, Bob Noyce. The launch of the <i>Sputnik</i> satellite in 1957 and the escalation of the Cold War created a boom within a boom: Moore and seven colleagues, including Noyce, broke away from Shockley in a group quickly branded “The Traitorous Eight,” forming the seminal start-up Fairchild Semiconductor. They planned to make silicon transistors, which promised greater robustness, miniaturization and lower power usage, so essential for computers guiding missiles and satellites.</p>
<div id="attachment_110240" style="width: 310px" class="wp-caption alignright"><img decoding="async" aria-describedby="caption-attachment-110240" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-300x293.png" alt="Did Moore’s Law Really Inspire the Computer Age? | Zocalo Public Square • Arizona State University • Smithsonian" width="300" height="293" class="size-medium wp-image-110240" srcset="https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-300x293.png 300w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-250x244.png 250w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-305x298.png 305w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-260x254.png 260w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-307x300.png 307w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT.png 389w" sizes="(max-width: 300px) 100vw, 300px" /><p id="caption-attachment-110240" class="wp-caption-text">&#8220;Our curiosity was similar, but not our approach. Noyce liked things that flew. I liked things that blew up,&#8221; said Gordon Moore (left) with Robert Noyce.<br /><span>Courtesy of <a href="https://commons.wikimedia.org/wiki/Category:Gordon_Moore#/media/File:Gordon_Moore_and_Robert_Noyce_at_Intel_SC1_in_Santa_Clara_1970.png" target="_blank" rel="noopener noreferrer">Intel Free Press</a>.</span></p></div>
<p>Developing the core manufacturing technology was a seat-of-the-pants adventure in which Moore played a central role. In March 1958, Fairchild received an order from IBM for 100 mesa transistors priced at $150 each. Mesas, made on 1-inch silicon wafers, were so named because their profiles resembled the flat-topped mesa formations of the American Southwest. Moore’s responsibility was figuring out how to fabricate them reliably, which involved a complex chemical ballet and a considerable amount of thrift and improvisation. Unable to buy appropriate furnaces, Moore relied on glass-blowing skills to create gas-handling systems, assembled on cobbled-together aqua blue kitchen cabinets and Formica countertops. (Real lab furniture was “as expensive as heck,” he remarked.) Delivery solutions were similarly no-frills: Fairchild sent mesa transistors to IBM in a Brillo box from a local grocery store.</p>
<p>The mesa transistor was successful, but the company’s new planar transistor (named for its flat topography) was a game-changer, bringing more stability and better performance. Another key development was the step to connect transistors by making all components of a complete circuit within a single piece of silicon, paving the way for the first commercial integrated circuits, or microchips. Everyone wanted miniaturized circuitry—the obstacle to greater computing power was its need for more components and interconnections, which increased the possibilities for failure. Noyce grasped a solution: why not leave transistors together in a wafer and interconnect them there, then detach the set as a single unit? Such “microchips” could be smaller, faster and cheaper than transistors manufactured individually and connected to each other afterward. As early as 1959, Moore proposed that “sets of these components will be able to replace 90 percent of all circuitry” in digital computers. </p>
<div class="pullquote">In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute “a major revolution in the history of mankind, as important as the Industrial Revolution.”</div>
<p>Six years later, in 1965, when he wrote his now-famous article in <i>Electronics</i>—“Cramming More Components onto Integrated Circuits”—personal computers were still a decade away. Moore, who had seen the number of elements on a chip go from one, to eight, to 60, hinted at how integrated functions would “broaden [electronics’] scope beyond [his] imagination” and at the “major impact” the changes would bring, but saw his analysis as distilling merely a trend in technology that would make everything cheaper. Nevertheless, his analysis was rigorous. Doubling the number of components on an integrated circuit each year would steadily increase performance and decrease cost, which would—as Moore put it 10 years later—“extend the utility of digital electronics more broadly in society.” </p>
<p>As chemical printing continued to evolve, the economics of microchips would continue to improve, and these more complex chips would provide the cheapest electronics. Thus, an electronics-based revolution could depend on existing silicon technology, rather than some new invention. By 1970, Moore asserted, the transistor that could be made most cheaply would be on a microchip 30 times more complex than one of 1965. </p>
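<p>A quick check of that figure, assuming the simple one-doubling-per-year rule of the 1965 article (the numbers below are an illustration, not Moore’s own calculation):</p>
<pre>
# Annual doubling of chip complexity from a 1965 baseline (illustrative).
baseline_1965 = 1
years = 1970 - 1965
complexity_1970 = baseline_1965 * 2 ** years
print(complexity_1970)   # 32, roughly the "30 times more complex" Moore cited
</pre>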
<p>In 1968, Moore left Fairchild and joined Noyce to found Intel, with the aim of “putting cleverness back into processing silicon.” In 1975, he reviewed his original extrapolation. Chips introduced until that point had followed the trend he predicted, but engineers were reaching the limits for circuit and device cleverness. Moore now proposed a doubling about every two years.</p>
<p>The analysis in <i>Electronics</i> was becoming known as Moore’s Law. Having correctly observed the potential for exponential growth, Moore overcame his personal dislike of the spotlight by travelling widely to talk about his idea, taking every opportunity to persuade others. After all, the fulfilment of Moore’s Law would be as much social as technical, relying on widespread acceptance: industry needed to invest to develop the technology, manufacturers needed to put microchips into their products, consumers needed to buy and use electronic devices and functions, and researchers and engineers needed to invent advances to extend Moore’s Law.</p>
<p>In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute “a major revolution in the history of mankind, as important as the Industrial Revolution.” He was so confident in his vision that he told a journalist that students who’d made headlines getting kicked off campuses (“kids with the long hair and beards”) were not the ones to watch: instead, he pronounced, “we are really the revolutionaries in the world today.” In front of a crowd, he pointed out that if the auto industry made progress at the same rate as silicon microelectronics, it would be more expensive to park your car downtown for the night than to buy a new Rolls-Royce. “And,” he recalled years later, “one of the members of the audience pointed out, yeah, but it’d only be 2 inches long and a half-inch high; it wouldn’t be much good for your commute.” </p>
<p>The rest is history. “For more than three decades,” the <i>New York Times</i> pointed out in 2003, Moore’s Law “has accurately predicted the accelerating power and plummeting cost of computing. Because of the exponential nature of Moore&#8217;s prediction, each change has arrived faster and more furiously.” Its curve, shallow at first (though spawning the microprocessor, digital calculator, personal computer, and internet along the way), has, since 2005, gone almost straight up in “hockey stick” style.</p>
<p>Despite the changes we’ve all witnessed, Moore’s Law is still widely misunderstood, even in tech circles. “[It’s] only 11 words long &#8230; but most people manage to mangle it,” said one report. Moore’s 1965 article is a sophisticated piece of analysis but many prefer to interpret it more vaguely: “The definition of ‘Moore’s Law’ has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line,” noted its originator, dryly.</p>
<p>Up to April 2002, <a href="https://firstmonday.org/ojs/index.php/fm/article/view/1000/921" target="_blank" rel="noopener noreferrer">Intel&#8217;s website</a> noted that “Moore predicted that the number of transistors per integrated circuit would double every 18 months,” even though Moore had pointed out that he “never said 18 months.”</p>
<p>Why did 18 months stick? Perhaps because a projection by an Intel colleague in 1975 led to a conflation of transistor count and doubling of performance; perhaps because this timescale appeared in an influential technology column in 1992, as the modern configuration of Silicon Valley was forming—perhaps because that speed felt more accurate to the semiconductor industry. </p>
<p>During the technology bust of the early 2000s, some began to speculate about the death of Moore’s Law. Others suggested it would peter out because people would drop their computer fixations to spend less time at work and more with their families, or because Silicon Valley’s obsession with it was “unhealthy” for business strategy. In 2007, the year the iPhone launched, Moore pointed out that “we make more transistors per year than the number of printed characters in all the newspapers, magazines, books, photocopies, and computer printouts.” But he recognized exponential growth could not continue forever; he knew the physical and financial constraints on shrinking the size of chip components.</p>
<p>When people in industry circles <a href="https://www.scientificamerican.com/article/getting-more-from-moores/" target="_blank" rel="noopener noreferrer">describe Moore’s Law</a> as a “dictate—the law by which the industry lives or dies,” it is evidence more of the law’s power within Silicon Valley culture than of its actual predictive accuracy. As the essayist Ilkka Tuomi observed in “The Lives and Death of Moore’s Law,” Moore’s Law became “an increasingly misleading predictor of future developments” that people understood to be something more like a “rule-of-thumb” than a “deterministic natural law.” In fact, Tuomi speculated, the very slipperiness of Moore’s Law might have accounted for its popularity. To an extent, tech people could pick and choose how they interpreted the dictum to suit their business needs. </p>
<div class="signup_embed"><div class="ctct-inline-form" data-form-id="3e5fdcce-d39a-4033-8e5f-6d2afdbbd6d2"></div><p class="optout">You may opt out or <a href="https://www.zocalopublicsquare.org/contact-us/">contact us</a> anytime.</p></div>
<p>Today, Moore’s Law continues to thrive in the smartphone space, having put some 8.5 billion transistors into a single phone that can fit in our pockets. The law may now be, in the words of one commentator, “more a challenge to the industry than an axiom for how chipmaking works,” but for what began as a 10-year forecast, it has had an astonishing run. “Once you’ve made a successful prediction, avoid making another one,” Moore quipped in 2015. </p>
<p>Even as technology continues to pervade our lives—with the advent of more specialized chips and materials, better software, cloud computing, and the promise of quantum computing—his law remains the benchmark and overarching narrative, both forecasting and describing our digital evolution. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/">Did Moore’s Law Really Inspire the Computer Age?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Yes, Classroom Tech Can Tackle Inequality—but Change Takes Politics and Patience</title>
		<link>https://legacy.zocalopublicsquare.org/2017/06/16/yes-classroom-tech-can-tackle-inequality-change-takes-politics-patience/events/the-takeaway/</link>
		<comments>https://legacy.zocalopublicsquare.org/2017/06/16/yes-classroom-tech-can-tackle-inequality-change-takes-politics-patience/events/the-takeaway/#respond</comments>
		<pubDate>Fri, 16 Jun 2017 10:00:08 +0000</pubDate>
		<dc:creator>By Reed Johnson</dc:creator>
				<category><![CDATA[The Takeaway]]></category>
		<category><![CDATA[Arizona State University]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[higher education]]></category>
		<category><![CDATA[internet]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=86053</guid>
		<description><![CDATA[<p>Even as digital technology has grown exponentially more sophisticated, accessible, and integral to our lives, social inequality has cast a deeper shadow across the United States in recent decades. Simultaneously, getting a quality education has become ever more essential for individual success and fulfillment.</p>
<p>The question of whether tech-enhanced education can help break down—or perhaps even erase—growing social divisions confronted a panel of educators brought together at a Zócalo/Arizona State University event titled “Can Digital Learning Dismantle the American Class System?”</p>
<p>The panelists’ collective answer: Digital education can indeed shake up rigid class hierarchies, and it’s already having that effect. But it’s going to take more time and commitment.</p>
<p>“We’re just getting to the point where these tools can start being useful,” said Jaime Casap, the chief education evangelist at Google, who noted that in 1995 only one percent of the world was online. Twenty years later, some 40 percent </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/06/16/yes-classroom-tech-can-tackle-inequality-change-takes-politics-patience/events/the-takeaway/">Yes, Classroom Tech Can Tackle Inequality—but Change Takes Politics and Patience</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Even as digital technology has grown exponentially more sophisticated, accessible, and integral to our lives, social inequality has cast a deeper shadow across the United States in recent decades. Simultaneously, getting a quality education has become ever more essential for individual success and fulfillment.</p>
<p>The question of whether tech-enhanced education can help break down—or perhaps even erase—growing social divisions confronted a panel of educators brought together at a Zócalo/Arizona State University event titled “Can Digital Learning Dismantle the American Class System?”</p>
<p>The panelists’ collective answer: Digital education can indeed shake up rigid class hierarchies, and it’s already having that effect. But it’s going to take more time and commitment.</p>
<p>“We’re just getting to the point where these tools can start being useful,” said Jaime Casap, the chief education evangelist at Google, who noted that in 1995 only one percent of the world was online. Twenty years later, some 40 percent is.</p>
<p>Casap’s opinion was echoed by Arizona State University president Michael Crow, who suggested that giving every student access to digital tools would empower them—not as faceless members of a socio-ethnic group or class hierarchy, but as individuals.</p>
<p>“We’re at the early stages of an extremely complicated process,” Crow said. “We’ve never lived as a species when everyone was actually equal.”</p>
<p>The conversation unfolded before an overflow audience at the National Center for the Preservation of Democracy in Little Tokyo in downtown Los Angeles. Moderator Goldie Blumenstyk, senior writer at <i>The Chronicle of Higher Education</i>, started the dialogue by asking each panelist to reframe the evening’s theme in a sentence or two.</p>
<p>Some gave bluntly personal responses. Darryl Adams, retired superintendent of the Coachella Valley Unified School District, talked about growing up economically disadvantaged in Memphis, Tennessee.</p>
<p>“Now, with digital access and the tools that are available, power is available to everyone,” Adams said. “That is opportunity.”</p>
<p>Casap spoke of his family’s dependence on food stamps and welfare while he was growing up in New York City’s rough Hell’s Kitchen neighborhood. When he was a kid, to look up information he had to trudge over to the Columbus branch of the New York Public Library on 10th Avenue—unless it was a Sunday or holiday, when it was closed.</p>
<p>Today, young students like him can get that information by pressing a key. Withholding technology from needy students guarantees they’ll fall further behind rich ones whose parents can pay for it privately, he suggested.</p>
<p>“I get to do what I do today because of education,” said Casap, who did graduate work at Arizona State University. The educational benefits he received are now being enhanced by digital tools and extended to the next generation, Casap said, mentioning that one of his own children is pursuing a higher education degree.</p>
<p>“She assumed I was going to pay for it, so that’s a good problem to have,” he joked.</p>
<p>Marie Cini, provost at the University of Maryland University College, ventured that digital education alone won’t end the class system. Rather, she suggested, digital education creates connectivity among formerly disadvantaged and marginalized people, which in turn creates opportunities and shifts power relationships. That, eventually, leads to changes within the class system.</p>
<p>But there still are obstacles to obtaining all the benefits that digitally enhanced education can offer, Cini said. Parents worried about social status may hesitate to support a child who chooses a blue-collar career, even a high-skilled one. Brand-name institutions still have big advantages over their less well-endowed rivals. These social distinctions and prejudices can carry forward and be exacerbated well after an individual leaves college.</p>
<p>“Look at who runs the country, look who’s on the Supreme Court,” Cini said. “We segment as people, and that’s what ossifies the class system.”</p>
<p>Access to digital technology is important, “but it’s not enough,” Cini said.</p>
<p>Blumenstyk repeatedly pressed the panelists about whether digital technology really could shake up the class system and deliver on its utopian promises if elected officials and taxpayers aren’t willing to pony up more money—an uncertain possibility, at best, in the current political climate.</p>
<p>Adams replied that his school district actually had banded together and agreed to tax itself to provide digital technology to its students, including those living in isolated trailer homes scattered across the region. To reach those students, buses were outfitted with routers and parked in remote neighborhoods so students could access the internet.</p>
<p>“It’s possible to make this transformation, but you’ve got to have the will,” Adams said.</p>
<p>At one point, Blumenstyk reminded Crow of an interview in which he raised the troubling prospect of a future world in which rich kids get taught by professors and poor kids get taught by computers.</p>
<p>Crow responded that there always will be a need for master teachers and professors giving face-to-face instruction, but technology can be an enhancement and a means for “evening out the outcomes.” Virtually everything in our human-made environment—from food, eyewear, and clothing to dictionaries and cell phones—is technology: “passive objects” that can be “empowered” through “the creativity of individual students or individual teachers,” Crow had said earlier.</p>
<p>And the notion that we must make an either/or choice between old-fashioned learning and digital learning is false, he stressed. As for the cost of education, Crow said, investing in technology reduces it over the long term.</p>
<p>As the evening moved toward its question-and-answer portion, one audience member wondered aloud how digital learning could foster an improved civic culture. Casap said that young students need not only to be given access to technology, but also to be shown how to use it. For example, he said, studies have shown many kids don’t know how to tell a sponsored website from a real news website.</p>
<p>Another questioner asked why, despite the wider availability of digital technology, there aren’t more people of color working in Silicon Valley.</p>
<p>“What I told my kids was, ‘Build your own company. Don’t wait for Apple to call you,’” Adams responded.</p>
<p>Digital learning, the panelists concurred, is not a panacea, either for all of society’s problems or for any individual’s challenge in staying productive and engaged.</p>
<p>“It’s a long life, you’re probably going to have nine different careers, so you have to keep going back and reinventing yourself,” Cini said.</p>
<p>But if digital learning isn’t a cure-all, or a revolutionary action, the panelists seemed to agree that we’re long past the point when moving forward without it is an option.</p>
<p>“We’re going to get the name of the species altered,” Crow said, “so that everyone born before 2000 is a homo sapiens; everyone born after 2000 is a homo sapiens.net.”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2017/06/16/yes-classroom-tech-can-tackle-inequality-change-takes-politics-patience/events/the-takeaway/">Yes, Classroom Tech Can Tackle Inequality—but Change Takes Politics and Patience</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2017/06/16/yes-classroom-tech-can-tackle-inequality-change-takes-politics-patience/events/the-takeaway/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why Artificial Intelligence Won’t Replace CEOs</title>
		<link>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/#respond</comments>
		<pubDate>Wed, 02 Nov 2016 07:01:51 +0000</pubDate>
		<dc:creator>By Judy D. Olian</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Data]]></category>
		<category><![CDATA[data analysis]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[information]]></category>
		<category><![CDATA[information science]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[UCLA]]></category>
		<category><![CDATA[UCLA Anderson]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=80798</guid>
		<description><![CDATA[<p>Peter Drucker was prescient about most things, but the computer wasn’t one of them. &#8220;The computer &#8230; is a moron,” the management guru asserted in a McKinsey Quarterly article in 1967, calling the devices that now power our economy and our daily lives “the dumbest tool we have ever had.” </p>
<p>Drucker was hardly alone in underestimating the unfathomable pace of change in digital technologies and artificial intelligence (AI). AI builds on the computational power of vast neural networks sifting through massive digital data sets or “big data” to achieve outcomes analogous, often superior, to those produced by human learning and decision-making. Careers as varied as advertising, financial services, medicine, journalism, agriculture, national defense, environmental sciences, and the creative arts are being transformed by AI. </p>
<p>Computer algorithms gather and analyze thousands of data points, synthesize the information, identify previously undetected patterns, and create meaningful outputs—whether a disease treatment, a face match </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/">Why Artificial Intelligence Won’t Replace CEOs</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Peter Drucker was prescient about most things, but the computer wasn’t one of them. &#8220;The computer &#8230; is a moron,” the management guru asserted in a <a href=http://www.mckinsey.com/business-functions/organization/our-insights/the-manager-and-the-moron>McKinsey Quarterly article</a> in 1967, calling the devices that now power our economy and our daily lives “the dumbest tool we have ever had.” </p>
<p>Drucker was hardly alone in underestimating the unfathomable pace of change in digital technologies and artificial intelligence (AI). AI builds on the computational power of vast neural networks sifting through massive digital data sets or “big data” to achieve outcomes analogous to, and often superior to, those produced by human learning and decision-making. Careers as varied as advertising, financial services, medicine, journalism, agriculture, national defense, environmental sciences, and the creative arts are being transformed by AI. </p>
<p>Computer algorithms gather and analyze thousands of data points, synthesize the information, identify previously undetected patterns, and create meaningful outputs—whether a disease treatment, a face match in a city of millions, a marketing campaign, new transportation routes, a crop harvesting program, a machine-generated news story, a poem, painting, or musical stanza—faster than a human can pour a cup of coffee.</p>
<p><a href="http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet">A recent McKinsey study</a> suggests that 45 percent of all on-the-job activities can be automated by deploying AI. That includes file clerks whose jobs can become 80 percent automated, or CEOs’ jobs that can be 20 percent automated because AI systems radically simplify and target CEOs’ reading of reports, risk detection, or pattern recognition.</p>
<p>AI has been one of those long-hyped technologies that hasn’t transformed our whole world yet, but will. Now that AI appears ready for prime time, there is consternation, even among technologists, about the unbridled power that machines may have over human decision-making. Elon Musk has called AI &#8220;our biggest existential threat,” echoing Bill Joy’s 2000 warning in <i>Wired</i> magazine that “the future doesn’t need us.” On the other side, of course, are enthusiasts eager for smart machines to improve our lives and the health of the planet.</p>
<p>I’m on the side of <a href="http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html">Microsoft CEO Satya Nadella, who says</a> we should be preparing for the promise of ever smarter machines as partners to human decision-making, focusing on the proper role, and limitations, of AI tools. For business school educators like me who believe the future will indeed need us, the expanding power of AI or deep learning poses a challenge and an opportunity: How do we prepare students for the coming decades so that they embrace the power of AI, and understand its advantages for management and leadership in the future? </p>
<p>It would be a mistake to force every MBA graduate to become a data scientist. The challenge for business schools is to update our broadly focused curricula while giving our MBAs a greater familiarity and comfort level with data analytics. Tomorrow’s CEOs will need a better sense of what increasingly abundant and complex data sets within organizations can, and cannot, answer. </p>
<p>The sophistication and volume of data may be increasing, but history affords models of a decision maker’s proper relationship to data analytics. </p>
<p>Take D-Day. General Dwight D. Eisenhower sought as much data as possible to inform his decision on when to land hundreds of thousands of Allied forces on the beaches of Normandy in that fateful late spring of 1944. As Antony Beevor’s book on the battle and other accounts make clear, Eisenhower especially craved reliable meteorological data, back when weather forecasting was in its infancy. The general cultivated Dr. James Stagg, his chief meteorologist, and became adept not just at analyzing Stagg’s reports, but also at reading Stagg’s own level of confidence in any report.  </p>
<p>For months before the fateful decision to “embark upon the Great Crusade,” Eisenhower developed a keen appreciation for what meteorological forecasts could and could not deliver. In the end, as history knows, Stagg convinced him to postpone the invasion to June 6 from June 5, when the predicted storm raged over the English Channel and when many others questioned Stagg’s call that it would soon clear.</p>
<div class="pullquote"> How do we prepare students for the coming decades so that they embrace the power of AI, and understand its advantages for management and leadership in the future?</div>
<p>No one would argue that Eisenhower should have become an expert meteorologist himself. His job was to oversee and coordinate all aspects of the campaign by collecting pertinent information, and assessing the quality and utility of that information to increase the invasion’s probability of success. Today, big data and the advent of AI expand the information available to corporate decision-makers. However, the role of a CEO in relation to data echoes the absorptive and judgmental function exercised by General Eisenhower in reading probabilities into his meteorologist’s weather reports.</p>
<p>It’s noteworthy that today, amidst all the talk of technological complexity and specialization across so much of corporate America, a Deloitte report prepared for our school found that employers looking to hire MBA graduates value prospective employees’ “soft skills” more than any others. They want to hire people with cultural competence and stronger communication  skills, who can work collaboratively in diverse teams, and be flexible in adapting continuously  to new opportunities and circumstances in the workplace and market.  </p>
<p>This isn’t just about intolerance for jerks in the office. It’s about a leader’s need to be able to synthesize, negotiate, and arbitrate between competing and conflicting environments, experts, and data. If there was once a time when corporate leaders were paid to make “gut check” calls even when essential information was lacking, today’s CEOs will increasingly have to make tough, interpretive judgment calls (a different type of “gut check”) in the face of excessive, often conflicting, information. </p>
<p>Those in the driver’s seat of institutions have access to an expanding universe of empirically derived insights about widely varying phenomena, such as optimal models for unloading ships in the world’s busiest ports in various weather conditions, parameters of loyalty programs that generate the ‘stickiest’ customer response, or talent selection models that yield the most successful, and most diverse, employment pools. </p>
<p>Corporate leaders will need to be discerning in their use of AI tools. They must judge the source of the data streams before them, ascertain their validity and reliability, detect less-than-obvious patterns in the data, probe the remaining “what ifs” they present, and ultimately make inferences and judgment calls that are more informed, nuanced around context, valid, and useful <i>because they are improved</i> by intelligent machines. Flawed judgments built on flawed or misinterpreted data could be even more harmful than uninformed flawed judgments because of the illusion of quasi-scientific authority resulting from the aura of data.</p>
<p>As a project management tool, AI might prescribe optimal work routines for different types of employees, but it won’t have the sensitivity to translate these needs into nuanced choices of one organizational outcome (e.g., equity in employee assignments) over another (family values). AI might pinpoint the best location for a new restaurant or power plant, but it will be limited in mapping the political and social networks that need to be engaged to bring the new venture to life.  </p>
<p>Machines also lack whimsy. Adtech programs have replaced human ad buyers, but the ability to create puns or design campaigns that pull at our heartstrings will remain innately human, at least for the foreseeable future. </p>
<p>A new level of questioning and integrative thinking is required among MBA graduates. As educators we must foster learning approaches that develop these skills—by teaching keen data management and inferential skills, developing advanced data simulations, and practicing how to probe and question the yet unknown. </p>
<p>In parallel to the ascendancy of machine power, the importance of emotional intelligence, or EQ, looms larger than ever to preserve the human connectivity of organizations and communities. While machines are expected to advance to the point of reading and interpreting emotions, they won’t have the capacity to inspire followers, the wisdom to make ethical judgments, or the savvy to make connections.</p>
<p>That’s still all on us. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/">Why Artificial Intelligence Won’t Replace CEOs</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/11/02/artificial-intelligence-wont-replace-ceos/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Computers and Robots Can Copy Your Work, and Get Away With It</title>
		<link>https://legacy.zocalopublicsquare.org/2016/08/30/computers-robots-can-copy-work-get-away/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2016/08/30/computers-robots-can-copy-work-get-away/ideas/nexus/#respond</comments>
		<pubDate>Tue, 30 Aug 2016 07:01:53 +0000</pubDate>
		<dc:creator>By James Grimmelmann</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[Arizona State University]]></category>
		<category><![CDATA[ASU]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[copyright]]></category>
		<category><![CDATA[copyright law]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[Future Tense]]></category>
		<category><![CDATA[Law]]></category>
		<category><![CDATA[nexus]]></category>
		<category><![CDATA[robots]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=77810</guid>
		<description><![CDATA[<p>Copyright has a weird relationship with computers. Sometimes it completely freaks out about them; sometimes it pretends it can’t see them at all. The contrast tells us a lot about copyright—and even more about how we relate to new technologies.</p>
<p>Start with the freak-out. One thing that computers are good for is making copies—lots of copies. Drag your music folder from your hard drive to your backup Dropbox and congratulations, you’ve just duplicated thousands of copyrighted songs. If you look up the section of the Copyright Act that sets out what counts as infringement, the <i>very first</i> Thou Shalt Not is “reproduce the copyrighted work.” In theory, Congress could have added some language saying that putting your music in your Dropbox that no one else can access isn’t infringement. In practice, well, it’s Congress.</p>
<p>Congressional inaction has meant that the problem of explaining why the Internet isn’t just an infringement </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/08/30/computers-robots-can-copy-work-get-away/ideas/nexus/">Computers and Robots Can Copy Your Work, and Get Away With It</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Copyright has a weird relationship with computers. Sometimes it completely freaks out about them; sometimes it pretends it can’t see them at all. The contrast tells us a lot about copyright—and even more about how we relate to new technologies.</p>
<p>Start with the freak-out. One thing that computers are good for is making copies—lots of copies. Drag your music folder from your hard drive to your backup Dropbox and congratulations, you’ve just duplicated thousands of copyrighted songs. If you look up the <a href=https://www.law.cornell.edu/uscode/text/17/106>section of the Copyright Act</a> that sets out what counts as infringement, the <i>very first</i> Thou Shalt Not is “reproduce the copyrighted work.” In theory, Congress could have added some language saying that putting your music in your Dropbox that no one else can access isn’t infringement. In practice, well, it’s Congress.</p>
<p>Congressional inaction has meant that the problem of explaining why the Internet isn’t just an infringement machine in need of a good unplugging has been kicked over to the courts. (Yes, the courts staffed by judges who call Dropbox <a href=https://www.washingtonpost.com/news/the-switch/wp/2014/04/23/the-aereo-case-is-being-decided-by-people-who-call-icloud-the-icloud-yes-really/>“the Dropbox” and “iDrop.”</a>) And in the process of keeping computers legal, the judges who make copyright law have developed some surprisingly broad rules shielding automatically made copies from liability.</p>
<p>Take, for example, the 2009 case A.V. v. iParadigms, in which high schools compelled students to submit their term papers to <a href=http://turnitin.com/>Turnitin</a>, a plagiarism-detection site. First it compares papers to those already in its database, looking for suspicious similarities; then it stores the paper to compare to future submissions. Four students sued, arguing that these stored copies infringed their copyrights in their papers. </p>
<p>The court disagreed, because of course you shouldn’t be able to use copyright to keep your teachers from finding out whether you cheated on your homework. But its reasoning is fascinating. Turnitin, the court held, made a “transformative” use of the papers because its use was “completely unrelated to expressive content.” Turnitin’s computers might have <i>copied</i> the papers, but they didn’t really <i>read</i> them. The court added, “The archived student works are stored as digital code, and employees of [Turnitin] do not read or review the archived works.” </p>
<p>Courts use similar logic in case after case. It’s not infringement if <i>computers</i> “read or review” the new copies, only if <i>people</i> do. Google famously scanned millions of books. Completely legal, <a href=https://scholar.google.com/scholar_case?case=6510192672912362556>four courts have agreed</a>, because it’s not as though Google is turning the complete books over to people. “Google Books &#8230; is not a tool to be used to read books,” wrote one judge. In another strand of the litigation, the parties at one point proposed a settlement that would have allowed “non-consumptive” <a href=http://www.slate.com/articles/technology/future_tense/2014/04/digital_humanities_and_the_future_of_technology_in_higher_ed.html>digital humanities</a> research on the scanned books, defined as “research in which computational analysis is performed on one or more Books, but not research in which a researcher reads or displays substantial portions of a Book to understand the intellectual content presented within the Book.” This was fine, in the view of the author and publisher representatives who negotiated the proposed settlement. Computers can do what they want with books as long as no one actually “understand[s]” its “intellectual content.” </p>
<p>This attitude—computers don’t count—isn’t new, either. A century ago, the cutting edge in artistic robotics was the player piano. The Supreme Court heard a <a href=https://scholar.google.com/scholar_case?case=12949386652546347561>player-piano case</a> in 1908 and held that the paper rolls “read” by the <a href=http://www.slate.com/articles/technology/history_of_innovation/2014/05/white_smith_music_case_a_terrible_1908_supreme_court_decision_on_player.html>player pianos weren’t infringing</a>. The rolls, Justice William Day reasoned, “[c]onvey[] no meaning, then, to the eye of even an expert musician.” Instead, they “form a part of a machine. &#8230; They are a mechanical invention made for the sole purpose of performing tunes mechanically upon a musical instrument.” The anthropocentrism is unmistakable. I’ve cataloged <a href=http://james.grimmelmann.net/files/articles/copyright-for-literate-robots.pdf>many different settings</a> where copyright law finds ways to overlook copying as long as no humans are in the loop. </p>
<p>On the one hand, this makes perfect sense. Copyright is designed to encourage human creativity for human audiences. If a book falls in a forest and no one reads it, does it make an infringement? It seems like the only sensible answer is “No harm, no foul.” On the other hand, there’s something strange about a rule that tells technologists just to turn the robots loose. It encourages uses that don’t have much to do with human aesthetics while discouraging uses that do. </p>
<div class="pullquote">Copyright is designed to encourage human creativity for human audiences. If a book falls in a forest and no one reads it, does it make an infringement?</div>
<p>This hands-off approach to robotic <i>readership</i> stands in sharp contrast to copyright’s surprisingly obsessive fretting about robotic <i>authorship</i>. We’re at the dawn of a golden age of <a href=http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1888622>algorithmic authorship</a>. Twitter bots like <a href=https://twitter.com/oliviataters>Olivia Taters</a> and <a href=https://twitter.com/hotteststartups>Hottest Startups</a>, simple as they are, are capable of amazing poetry. From <a href=http://www.musicainformatica.org/topics/push-button-bertha.php>Push Button Bertha</a> to <a href=http://research.microsoft.com/en-us/um/redmond/projects/songsmith>Microsoft Songsmith</a>, computer-generated music ranges from beautiful to banal. Special-effects artists and video-game programmers use <a href=https://en.wikipedia.org/wiki/Procedural_generation>procedural content generation</a> to make <a href=http://www.slate.com/articles/technology/future_tense/2016/08/no_man_s_sky_offers_18_quintillion_planets_for_players_to_explore.html>vast imaginary worlds</a> far beyond what any one person could hope to draw or design. And of course spambots and <a href=http://io9.gizmodo.com/freakishly-realistic-telemarketing-robots-are-denying-t-1481050295>telemarketing robots</a> (and <a href=http://www.nytimes.com/2016/02/25/fashion/a-robot-that-has-fun-at-telemarketers-expense.html/_r=0>counter-robots</a>) are getting eerily good at mimicking human expression. </p>
<p>If all you knew about copyright was the way it treats computer-generated copies, you might think it would similarly look the other way and ignore computer-generated creativity. But no! No two plays of a video game are the same; the computer produces a new and different sequence of sights and sounds every time through. Copyright doesn’t care; video games <a href="https://scholar.google.com/scholar_case?case=7204019639108685629">are still copyrightable</a>. Now, of course they are; it would be ridiculous if you could just completely rip off games, and <a href="https://scholar.google.com/scholar_case?case=8334646367831709790">case</a> after <a href="http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1150&#038;context=historical">case</a> holds that you can’t.</p>
<p>But even as copyright law goes on recognizing copyright in computer-generated works, it can’t help obsessively worrying about them with the same kind of nervous energy it gives to <a href=https://www.publicknowledge.org/news-blog/blogs/no-more-monkey-business-court-rejects-monkey-selfie-copyright-case>monkey selfies</a> and for the same reason: What if there’s no author? What if a creative work just popped into existence, without being clearly traceable to the artistic vision of a specific human? What <i>then</i>, buddy? </p>
<p>The funny thing is that just as the player piano roll shows that mechanical copying long predates computers, so does algorithmic creativity. You know what’s a device for making art according to rigidly specified algorithmic rules? A <a href=https://en.wikipedia.org/wiki/Spirograph>Spirograph</a>. You know what else is? A <a href=https://en.wikipedia.org/wiki/Musikalisches_W%C3%BCrfelspiel>Musikalisches Würfelspiel</a> (sometimes apocryphally named for Mozart), a game in which you roll dice to select measures of music to string together into a minuet. Computers are faster and fancier, but for the most part <a href=http://james.grimmelmann.net/files/articles/computer-authored-works.pdf>not fundamentally different</a>. There’s no need to futz around with speculating on whether your iPhone is a copyright-owning “author” of a <a href=https://en.wikipedia.org/wiki/Temple_Run>Temple Run</a> maze, any more than a Spirograph is the author of a <a href=https://en.wikipedia.org/wiki/Hypotrochoid>hypotrochoid</a> drawing. Typically either the programmer or the user or both are authors, and that’s good enough. </p>
<p>There will be harder cases of what Bruce Boyden calls <a href=http://jla.journals.cdrs.columbia.edu/wp-content/uploads/sites/14/2016/06/7-39.3-Boyden.pdf>“emergent works”</a> that arise out of unpredictable algorithmic interactions. Where neither the programmer nor the user can reasonably foresee what a computer will do, the case for calling either of them an author is weak; they lack the kind of artistic vision copyright is supposed to promote and reward. But what’s interesting and tricky about these emergent works is not that they come from computers but that they’re unpredictable by anyone involved in their creation. </p>
<p>In an age of <a href=http://www.slate.com/blogs/future_tense/2016/07/08/dallas_police_used_bomb_robot_to_detonate_explosion_that_killed_shooting.html>police killbots</a>, worrying about whether Futurama’s Bender owns a copyright in his <a href=https://www.youtube.com/watch?v=0qBlPa-9v_M>dream about killing all humans</a> may seem a little beside the point. But copyright provides a useful window for thinking about hot-button issues in law and technology, ironically <i>because</i> the stakes are so much lower. There are low-tech precedents for new high-tech puzzles, if we care to see them. </p>
<p>The key is not to treat “computers” or “robots” or “drones” or other new kinds of technologies as unified phenomena we have to figure out all at once, but instead to look at the different kinds of ways they operate and can be used. The <a href=http://www.slate.com/blogs/future_tense/2016/07/08/dallas_police_used_bomb_robot_to_detonate_explosion_that_killed_shooting.html>Dallas bomb robot</a> was under direct police control at all times; it was a tool for safely delivering lethal force from a distance in the same way that a sniper rifle is. The most important issue it raised was the security of its communications channel—because the last thing you want when you strap a pound of C-4 to a robot is for someone else to hijack the controls. That’s a very different kind of problem than worrying about delegating life-or-death decisions to algorithms with a limited human presence in the loop. Lumping them together as “lethal robots” obscures more than it reveals; it makes it harder to identify which robots are dangerous and how, and harder to figure out what to do about them. </p>
<p>The same is true for copyright, for privacy, for civil rights, and for the dozens of other pressing public policy problems surrounding new technologies. You learn more about augmented reality by thinking about <i>Pokémon Go</i> than vice versa. Technology policy is complicated because the world is complicated.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2016/08/30/computers-robots-can-copy-work-get-away/ideas/nexus/">Computers and Robots Can Copy Your Work, and Get Away With It</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2016/08/30/computers-robots-can-copy-work-get-away/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>My Grandma&#8217;s Los Altos Garage Launched a Tech Revolution</title>
		<link>https://legacy.zocalopublicsquare.org/2015/01/26/my-grandmas-los-altos-garage-launched-a-tech-revolution/chronicles/who-we-were/</link>
		<comments>https://legacy.zocalopublicsquare.org/2015/01/26/my-grandmas-los-altos-garage-launched-a-tech-revolution/chronicles/who-we-were/#respond</comments>
		<pubDate>Mon, 26 Jan 2015 08:01:50 +0000</pubDate>
		<dc:creator>by Megan Chovanec</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Who We Were]]></category>
		<category><![CDATA[Apple]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Silicon Valley]]></category>
		<category><![CDATA[Steve Jobs]]></category>
		<category><![CDATA[Thinking L.A.]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=57914</guid>
		<description><![CDATA[<p>My grandma’s house is your typical white, one-story house in the suburbs of the Silicon Valley—it has rustic red brick accents, baby blue trim, and a perfectly manicured front lawn. It also happens to have signs out front that read “No Trespassing. Security Cameras Are Filming. All Pictures Must Be Taken From Street.” To me, my grandma’s house is a second home, but to the rest of the world, it’s the place where Apple Inc. was created.</p>
<p>Steve Jobs grew up in this Los Altos house throughout his childhood. In 1976, according to oft-repeated legend, he hatched the beginnings of Apple here, and put together the first 50 computers in the garage with Apple’s co-founder Steve Wozniak. (Wozniak recently said that they didn’t do any manufacturing in the garage&#8211;they just finalized the computers in there. But the garage did represent them better than anywhere else.) In 1989, my paternal </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2015/01/26/my-grandmas-los-altos-garage-launched-a-tech-revolution/chronicles/who-we-were/">My Grandma&#8217;s Los Altos Garage Launched a Tech Revolution</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>My grandma’s house is your typical white, one-story house in the suburbs of the Silicon Valley—it has rustic red brick accents, baby blue trim, and a perfectly manicured front lawn. It also happens to have signs out front that read “No Trespassing. Security Cameras Are Filming. All Pictures Must Be Taken From Street.” To me, my grandma’s house is a second home, but to the rest of the world, it’s the place where Apple Inc. was created.</p>
<p><a href="https://legacy.zocalopublicsquare.org/tag/thinking-l-a/"><img loading="lazy" decoding="async" class="alignleft size-full wp-image-50852" style="margin: 5px;" alt="Thinking LA-logo-smaller" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2013/09/Thinking-LA-logo-smaller.jpg" width="150" height="150" /></a></p>
<p>Steve Jobs grew up in this Los Altos house throughout his childhood. In 1976, according to oft-repeated legend, he hatched the beginnings of Apple here, and put together the first 50 computers in the garage with Apple’s co-founder Steve Wozniak. (Wozniak <a href="http://www.businessweek.com/articles/2014-12-04/apple-steve-wozniak-on-the-early-years-with-steve-jobs">recently said</a> that they didn’t do any manufacturing in the garage&#8211;they just finalized the computers in there. But the garage did represent them better than anywhere else.) In 1989, my paternal grandmother (Marilyn Jobs) married her second husband (Paul Jobs, Steve’s adoptive father). Soon after, my grandma moved into the house with the (not-yet-quite-so) famous garage.</p>
<p>As a kid, I always looked forward to going to my grandma’s house. It was a 25-minute drive across the South Bay from where my family lived in San Jose. I always knew we were within five minutes of my grandma’s house when we exited the 280 Freeway onto Foothill Expressway. As we turned onto my grandma’s street, we passed a strip mall with a Chevron Gas Station, a Trader Joe’s, and a Peet’s Coffee. When our car pulled into the driveway, my grandma would open the front door, smiling and waving at us from the porch. I always jumped out of the car and greeted her with one of my biggest hugs.  </p>
<div class="pullquote">In 1976, the two-car garage was filled with computer boards, components, wires—and the promise of a great company. In fact, the garage was so packed with Steve Job’s equipment that Paul was forced to build a second garage in the backyard to store the cars.</div>
<p>My grandma’s house is where I met my newborn brother for the first time, because I was staying with her while my parents were in the hospital. It’s the place I went to after preschool to wait for my parents to pick me up and eat spoonfuls of smooth Skippy peanut butter while curled up in a reclining chair. It’s the place I went when I was sick, snuggling in bed to watch “Tom and Jerry.” It’s the place where, to this day, my family still goes to celebrate birthdays and eat my grandma’s delicious cake. </p>
<p>Throughout my childhood, my parents always mentioned that grandma’s house was a special place and to me it was, but in a completely different way. So when I was 10 and my parents told me about the wider significance of my grandma’s house, I shrugged it off with a laugh. How could this quaint place have been ground zero for such a world-famous company that steered the course of today’s technology? </p>
<div id="attachment_57919" style="width: 610px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-57919" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas.jpg" alt="The author’s brother and grandma cutting cake at her grandma’s house about eight years ago." width="600" height="450" class="size-full wp-image-57919" srcset="https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas.jpg 600w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-300x225.jpg 300w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-250x188.jpg 250w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-440x330.jpg 440w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-305x229.jpg 305w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-260x195.jpg 260w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2015/01/cakeatgrandmas-400x300.jpg 400w" sizes="auto, (max-width: 600px) 100vw, 600px" /><p id="caption-attachment-57919" class="wp-caption-text">The author’s brother and grandma cutting cake at her grandma’s house about eight years ago.</p></div>
<p>Despite its celebrity status, this three-bedroom, three-bathroom house built in the early 1950s is a humble place. In the living room, porcelain Lladró figurines, Hummel collectibles, and blue-and-white china fill a curio cabinet by the fireplace. A Japanese bobtail cat named Daisy is always lounging in the small kitchen. A box of Betty Crocker white cake mix and a generic tub of chocolate frosting can always be found in the cupboard, waiting for grandma’s touch of love to make them special.</p>
<p>In 1976, the two-car garage was filled with computer boards, components, wires—and the promise of a great company. In fact, the garage was so packed with Steve Jobs’ equipment that Paul was forced to build a second garage in the backyard to store the cars. Nowadays, the garage is mostly filled with my grandma’s laundry, cat litter, and her Ford sedan. The only remnants of the garage’s famous past are a few of the original wooden shelves and the wood-paneled wall, as well as the same cold concrete floors. It’s funny to think of people traveling hundreds of miles to catch a glimpse of this “treasure trove”!</p>
<p>Paul Jobs passed away in 1993, but my grandma still lives there. I’m in college now, 379 miles away in Southern California, but always visit when I come home on breaks. It’s pretty awesome to imagine that some of the first ideas for a world-renowned company were thought of in a place where I spent so much of my childhood. As a graphic design student, I truly appreciate the innovation that came out of the garage at my grandma’s house. Like so many people, I have an iPhone. When I gaze at it, I’m reminded that technological revolutions have to start somewhere—even if that somewhere is in the garage of a humble home.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2015/01/26/my-grandmas-los-altos-garage-launched-a-tech-revolution/chronicles/who-we-were/">My Grandma&#8217;s Los Altos Garage Launched a Tech Revolution</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2015/01/26/my-grandmas-los-altos-garage-launched-a-tech-revolution/chronicles/who-we-were/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Former Intel CEO Craig R. Barrett</title>
		<link>https://legacy.zocalopublicsquare.org/2013/05/03/former-intel-ceo-craig-r-barrett/personalities/in-the-green-room/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/05/03/former-intel-ceo-craig-r-barrett/personalities/in-the-green-room/#respond</comments>
		<pubDate>Fri, 03 May 2013 07:01:53 +0000</pubDate>
		<dc:creator>Zocalo</dc:creator>
				<category><![CDATA[In the Green Room]]></category>
		<category><![CDATA[Arizona State University]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=47576</guid>
		<description><![CDATA[<p>Craig R. Barrett is the former CEO and president of Intel. Before talking with ASU President Michael Crow about the future of the computer chip, he sat down in the Zócalo green room to talk about what it takes to get him on a dance floor, why he wears a tie design that features the letter R, and why he can’t live without his personal computer or his boron fiber fly fishing rod.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/05/03/former-intel-ceo-craig-r-barrett/personalities/in-the-green-room/">Former Intel CEO Craig R. Barrett</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><strong>Craig R. Barrett</strong> is the former CEO and president of Intel. Before <a href="https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/">talking with ASU President Michael Crow about the future of the computer chip</a>, he sat down in the Zócalo green room to talk about what it takes to get him on a dance floor, why he wears a tie design that features the letter R, and why he can’t live without his personal computer or his boron fiber fly fishing rod.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/05/03/former-intel-ceo-craig-r-barrett/personalities/in-the-green-room/">Former Intel CEO Craig R. Barrett</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/05/03/former-intel-ceo-craig-r-barrett/personalities/in-the-green-room/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Hello, Mr. Chips</title>
		<link>https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/#respond</comments>
		<pubDate>Fri, 22 Mar 2013 07:04:07 +0000</pubDate>
		<dc:creator>by Sarah Rothbard</dc:creator>
				<category><![CDATA[The Takeaway]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Michael Crow]]></category>
		<category><![CDATA[Phoenix]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=46321</guid>
		<description><![CDATA[<p>When Michael M. Crow, the president of Arizona State University, introduced Craig R. Barrett, the former CEO and president of Intel, he called Barrett “a singularly important actor in one of the most profound technologies in human history.”</p>
<p>But when Barrett entered Stanford University in 1957 as a student of metallurgical engineering (not even knowing how to spell the word “metallurgical”), the changes of the next half-century with which he would be intimately involved were impossible to imagine. The world was only on the cusp of technologies like the modern transistor and integrated circuits that made our current digital age possible; advanced computing hadn’t yet been realized.</p>
<p>In a wide-ranging conversation with Crow, in front of a full house—with additional audience members watching in a nearby overflow room—at the Phoenix Art Museum, Barrett discussed the past 50 years of technological change and offered some broad thoughts on his hopes for </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/">Hello, Mr. Chips</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>When Michael M. Crow, the president of Arizona State University, introduced Craig R. Barrett, the former CEO and president of Intel, he called Barrett “a singularly important actor in one of the most profound technologies in human history.”</p>
<p>But when Barrett entered Stanford University in 1957 as a student of metallurgical engineering (not even knowing how to spell the word “metallurgical”), the changes of the next half-century with which he would be intimately involved were impossible to imagine. The world was only on the cusp of technologies like the modern transistor and integrated circuits that made our current digital age possible; advanced computing hadn’t yet been realized.</p>
<p>In a wide-ranging conversation with Crow, in front of a full house—with additional audience members watching in a nearby overflow room—at the Phoenix Art Museum, Barrett discussed the past 50 years of technological change and offered some broad thoughts on his hopes for the future.</p>
<p>In 1965, said Crow, Intel co-founder Gordon Moore (Barrett’s old boss) observed that every 18 months or so we double our computational capabilities, an observation known as Moore’s Law. Does Moore’s Law still hold true today, or are we—as many people have predicted—reaching the end of that doubling?</p>
<p>Even Moore, said Barrett, couldn’t imagine 10 more years of doubling, and yet “it’s really not showing much signs of slowing down.”</p>
<p>“There’s nothing I can think of that has had this level of productivity change or enhancement in something critical to human decision-making and human life,” said Crow. What’s the driver, at a place like Intel, of this type of change—making products that are faster, cheaper, and better so rapidly?</p>
<p>Barrett said that at Intel, Moore’s Law has been a self-fulfilling prophecy and a strategic plan. Even as academics and analysts and the media predicted the end of exponential improvements in computing, Intel kept making them, and as a result ended up continually ahead of everyone else in the industry. Every new crop of engineers at the company has received the same speech, said Barrett: We’ve been following Moore’s Law for this many years, and “it better not end on your watch.”</p>
<p>But while computing isn’t reaching limits imposed by the science, are there social hurdles—at least in America—that are limiting factors? For instance, asked Crow, why is it harder today than it was during the Cold War to get kids interested in science, technology, math, and engineering—STEM—disciplines?</p>
<p>Barrett said that when he entered college, engineering was the obvious pathway to the middle or upper-middle class. Today, there are more options. Still, Barrett added, engineering is a hugely valuable background: a higher percentage of Fortune 500 CEOs studied engineering than any other discipline, and it offers unmatched problem-solving capabilities.</p>
<p>Should a multinational corporation like Intel, asked Crow, care about American success in science and technology—in terms of education and national investment? And what should this country be doing “as the future of microelectronics and integrated circuits and advanced computational devices is moving forward?”</p>
<p>Barrett said that while Intel could continue to be a very successful corporation without hiring another U.S. citizen, as a parent and grandparent of U.S. children, he wants his family to have opportunities. For the U.S. to compete internationally, it has three levers it can pull: improving its education system, investing in research and development, and implementing government policies that “let smart people get together with smart ideas and do something.” Right now, we’re not up to par on any of those fronts, he said.</p>
<p>So where—in the U.S. and around the world—is the excitement in technology going to come from next?</p>
<p>Barrett thinks that in the next decade or two we’ll see more of the same sorts of developments we’ve seen before when it comes to computational power alone. But the exciting thing is going to be marrying computing capabilities with other sciences, like biology and medicine. “The possibilities of what can happen are unlimited,” he said—and combining disciplines is the next frontier.</p>
<p>Breaking into these new frontiers will bring social and cultural disruptions. Should we, asked Crow, put our time, energy, and assets into thinking about the coming complexities?</p>
<p>“I’m a fan, frankly, of ‘let’s let the technology move ahead,’” said Barrett.</p>
<p>The “empowerment of options through technology,” said Crow.</p>
<p>Yes, said Barrett, citing the theory of creative destruction. New technology can bring you wonderful things—if you give it the opportunity to explode existing structures in the process. It’s about letting chaos reign, said Barrett, rather than trying to rein it in.</p>
<p>In the question-and-answer session, audience members continued to press Barrett on what might be next.</p>
<p>Electronics were changed forever when vacuum tubes were replaced by transistors in circuits. What will replace transistors?</p>
<p>Barrett called this “the 64-trillion-dollar question”—and not one it’s possible to answer yet. Until transistor speed stops doubling—making it a flat target—it’s not going to be cost-effective to produce a replacement. Right now, the transistor is “ridiculously cost-effective”: You could count every letter in the Sunday <em>New York Times</em>, divide the price of the paper by that number, and a transistor would still be much cheaper than a single letter.</p>
<p>Moore’s Law, said another audience member, has made technology disposable—we recycle our Apple products, for instance, every two years. Where is technology going from a sustainability perspective?</p>
<p>Companies like Intel do a lot of in-house recycling of materials like chemicals and gases, said Barrett. And Moore’s Law also means that as you double the output of product, you also try to decrease the input and use fewer resources. The guts of integrated circuits—which are made up of materials like aluminum, copper, silicon, and oxygen—are not at all scarce. When it comes to the rest of the hardware that makes up a product, Europe is ahead of the U.S. in recycling and limiting the use of hazardous materials—and Barrett sees more of this sort of legislation on the way.</p>
<p>Computer power, said another audience member, has exceeded brain power, if it’s measured by bits per second. “Are we looking at computers demonstrating emergent properties not unlike the brain?”</p>
<p>Barrett said that computers can beat humans at chess and Jeopardy!—anything in which there’s a finite number of moves, no matter how many. But computers still lack the flexibility of the human brain, and we still don’t understand everything the human brain can do. We’re not there. Yet.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/">Hello, Mr. Chips</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/03/22/hello-mr-chips/events/the-takeaway/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Move Over, Moore’s Law</title>
		<link>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/#respond</comments>
		<pubDate>Wed, 20 Mar 2013 07:01:40 +0000</pubDate>
		<dc:creator>Zocalo</dc:creator>
				<category><![CDATA[Up For Discussion]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Future Tense]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=46233</guid>
		<description><![CDATA[<p>The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s Zócalo event on the future of nanotechnology, we asked engineers and people who think about computing, “What comes after the computer chip?”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/">Move Over, Moore’s Law</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s <a href="https://legacy.zocalopublicsquare.org/event/what-comes-after-the-computer-chip/">Zócalo event on the future of nanotechnology</a>, we asked engineers and people who think about computing, “What comes after the computer chip?”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/">Move Over, Moore’s Law</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>What It’s Like To Be Hacked By China</title>
		<link>https://legacy.zocalopublicsquare.org/2013/02/07/what-its-like-to-be-hacked-by-china/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/02/07/what-its-like-to-be-hacked-by-china/ideas/nexus/#comments</comments>
		<pubDate>Thu, 07 Feb 2013 08:01:05 +0000</pubDate>
		<dc:creator>by William Gerrity</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Internet security]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=44687</guid>
		<description><![CDATA[<p>In 2007 I opened an e-mail from an unknown sender. The message greeted me by a nickname known only to family and close friends. I was in Shanghai, unwinding late at night after a long day, pleased to be contacted by someone familiar from across the Pacific. I figured someone close to me must have gotten a new e-mail address. But the note was signed “Eric.” I did not know an Eric.</p>
<p>The message was friendly and chatty, with several attachments, and it contained a proposal: I could pay one million <em>renminbi</em> (about $150,000 at the time), in exchange for which the sender would not forward the attachments to my business partners or competitors. It took me a second—in that out-of-body, as-if-movie-watching state we go to when totally disoriented—to digest what was happening. This was no friendly e-mail from the home front, no business proposition in any traditional sense. This </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/02/07/what-its-like-to-be-hacked-by-china/ideas/nexus/">What It’s Like To Be Hacked By China</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In 2007 I opened an e-mail from an unknown sender. The message greeted me by a nickname known only to family and close friends. I was in Shanghai, unwinding late at night after a long day, pleased to be contacted by someone familiar from across the Pacific. I figured someone close to me must have gotten a new e-mail address. But the note was signed “Eric.” I did not know an Eric.</p>
<p>The message was friendly and chatty, with several attachments, and it contained a proposal: I could pay one million <em>renminbi</em> (about $150,000 at the time), in exchange for which the sender would not forward the attachments to my business partners or competitors. It took me a second—in that out-of-body, as-if-movie-watching state we go to when totally disoriented—to digest what was happening. This was no friendly e-mail from the home front, no business proposition in any traditional sense. This was blackmail, or extortion, or some other noun that I would never associate with my life.</p>
<p>Last week, I read of the infiltration of <em>The New York Times</em> and other media by Chinese hackers, and I can imagine how <em>Times</em> staffers must be feeling. It brought back all too vividly the violation-induced nausea of my own experience with China’s hacker army.</p>
<p>At the time, I was the chairman of a company that was building shopping centers in China. The company was a partnership of three entities: a major U.S. bank, a Chinese state-owned enterprise, and my firm. We were building centers in third- and fourth-tier cities. The anchor tenant was a multi-national hypermarket. Nearly all the employees were Chinese. It was an exhilarating adventure for me, but it was of little consequence politically. The enterprise was building Chinese shopping centers in Chinese cities for Chinese consumers.</p>
<p>Even so, all of our Internet activity was monitored. There was a small modem-like device attached to the primary server in our computer room. It was not terribly clandestine. We were told that the “government” would be restricting access to international news sites and various Chinese sites.</p>
<p>Our Chinese employees were used to this sort of thing. But for my American colleagues and me, the monitoring was a novelty. Although most international sites were accessible, certain stories on news websites were blacked out. When the power or the Internet would go down, we would promptly get a phone call from China Telecom, our service provider. They were on a friendly, first-name basis with our Shanghainese-speaking IT guy. “What’s up?” they’d ask. “Why are you offline?” They feared we would just disconnect the monitoring device, and they wanted to let us know they were paying attention. But I didn’t have anything to hide, so I didn’t give it much more thought.</p>
<p>I looked at the documents that were attached to the blackmail request. There were operating budgets and business plans. There were confidential memos to the senior management of my financial partner, written at their request, reviewing the progress of their projects. There were memos critical of staff. There were e-mails between my own team and me exchanging casual commentary on people and places, frustrations and triumphs. Perfectly appropriate for private consumption, but not for public consumption. Then there were e-mails from my personal account. Some concerned the troubled life of my recently deceased mother.</p>
<p>It’s one thing to tell yourself you have nothing to hide; it’s another to surrender all privacy to a hostile intruder. And if “Eric” had these documents, what else did he have? What else did he know? What else was there to know? Who was doing this? Why? What did other people already know? Was there anything about me they didn’t know, or couldn’t misconstrue to their advantage? The intrusion was like a digital cancer that could expand ad infinitum, nourishing itself on every link and attachment and contact address, jeopardizing the privacy of others as well as my own.</p>
<p>The <em>Times</em> story of January 30 reported that the newspaper had been hacked from Mainland China in an apparent attempt to stymie a <em>Times</em> investigation into the finances of Premier Wen Jiabao. The article quoted the newspaper’s executive editor, Jill Abramson, who sought to reassure readers and sources. “Computer security experts found no evidence that sensitive e-mails or files from the reporting of our articles about the Wen family were accessed, downloaded or copied,” she said. A few paragraphs later, however, the story went on to note: “Security experts found evidence that the hackers stole the corporate passwords for every <em>Times</em> employee and used those to gain access to the personal computers of 53 employees, most of them outside the <em>Times</em>’ newsroom. Experts found no evidence that the intruders used the passwords to seek information that was not related to the reporting on the Wen family.”</p>
<p>That’s hardly consoling. You have to wonder how confident any future confidential Chinese source will feel about approaching a <em>Times</em> reporter. <em>Every</em> employee of the paper had his or her corporate password stolen, and 53 employees had their personal computers penetrated. Once that happens, the hackers have the ability to observe and record everything. And to keep it forever.</p>
<p>The <em>Times</em> article described how the hackers would normally begin their probing at 8:00 a.m. and knock off after eight hours. On the clock. Mundane. Banal. In my case, experts I consulted told me that the hacking probably came from government monitors who wanted extra cash. During office hours they did their monitoring, and after hours they sought to supplement their income with a little freelancing. I wonder how many <em>Times</em> staffers will be contacted by their own “Eric.” I wonder how many of those individuals are having to revisit, as I did, their belief that they have nothing to hide.</p>
<p>The whole process of being hacked and blackmailed was eerily akin to undergoing a diagnostic colonoscopy without any anesthetic, which, relying on dubious medical advice, I’ve also experienced. During that medical procedure, a seemingly endless stream of water entered my body from a hose in, well, you know where, and a steady flow of water exited. A nurse leaned into me and grabbed my stomach to help the hose make turns and find its way onward. A video monitor broadcast the journey in vivid color just above my head. The doctor was quite excited for me to see it. I found it humiliating. Not unlike having everything one has ever expressed on e-mail exposed and probed.</p>
<p>Within a day of receiving the e-mail from Eric, I contacted the U.S. Consulate, the FBI, and the security office of my financial partner (a publicly traded Wall Street bank). I was soon sitting in my office, reviewing the matter with representatives from each entity. They wanted to know everything. They wanted access to all of my files to see what the hackers could see. They wanted to conduct their own digital colonoscopy. Knowing the hacker was inside probing around was already awful. Having the “good guys” in there probing around didn’t feel much better. All privacy, all dignity, all control was lost.</p>
<p>Blackmail was a familiar story to the security experts. Their strategy was to treat the hacker like a bully. Don’t respond to the demands, and find a way to punch him in the nose. Easier said than done. Finally, a law firm representing the bank sent Eric an e-mail. It said that the authorities had been notified, the partners had been notified, and there was nothing to be gained by trying to expose what had already been disclosed. It was a gamble, as I really didn’t want to have the documents or e-mails widely circulated. But it worked. After a few days, I received a message from Eric. He was friendly and warm. He said it was just business; nothing personal. He still used my nickname. It gave me the chills.</p>
<p>In retrospect, I should have known better. Hundreds of millions of Chinese operate on the Internet without any real sense of privacy, fully aware that a massive eavesdropping apparatus tracks their every communication and move. That is their normal. But relegating my experience to the China file—to the concerns of a faraway place—would also be a mistake. With China’s world and ours intersecting online, I expect we’ll eventually wonder how we could have been so naïve to have assumed that privacy was normal—or that breaches of it were news. And Eric, if he’s reading this, probably agrees.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/02/07/what-its-like-to-be-hacked-by-china/ideas/nexus/">What It’s Like To Be Hacked By China</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/02/07/what-its-like-to-be-hacked-by-china/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
