<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Intel &#8211; Zócalo Public Square</title>
	<atom:link href="https://legacy.zocalopublicsquare.org/tag/intel/feed/" rel="self" type="application/rss+xml" />
	<link>https://legacy.zocalopublicsquare.org</link>
	<description>Ideas Journalism With a Head and a Heart</description>
	<lastBuildDate>Mon, 21 Oct 2024 07:01:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
		<item>
		<title>Did Moore’s Law Really Inspire the Computer Age?</title>
		<link>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/</link>
		<comments>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/#respond</comments>
		<pubDate>Sun, 22 Mar 2020 22:00:11 +0000</pubDate>
		<dc:creator>Rachel Jones</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Gordon Moore]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Moore's Law]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[Silicon Valley]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=110232</guid>
		<description><![CDATA[<p>In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb to a visual change on your screen. This speed has benefits (in 2020, there’s a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds). </p>
<p>What made the smartphone—and the rest of our unfolding digital transformation—possible? Many point to a prediction in April 1965, published in a then-little-read article toward the back end of the trade paper <i>Electronics</i>. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It’s been 55 years since the article’s publication, and it’s worth revisiting its original prediction—now known as Moore’s Law.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/">Did Moore’s Law Really Inspire the Computer Age?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb to a visual change on your screen. This speed has benefits (in 2020, there’s a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds). </p>
<p>What made the smartphone—and the rest of our unfolding digital transformation—possible? Many point to a prediction in April 1965, published in a then-little-read article toward the back end of the trade paper <i>Electronics</i>. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It’s been 55 years since the article’s publication, and it’s worth revisiting its original prediction—now known as Moore’s Law.</p>
<p>If you ask people today what Moore’s Law is, they’ll often say it predicts that every 18 months, engineers will be able to come up with ways to double the number of transistors they can squeeze onto a tiny computer chip, thus doubling its processing power. It’s a curious aspect of the law that this is not what Moore actually said, but he did predict consistent improvement in processing technology. Moreover, the world he anticipated did take shape, with his own work as founder of the chipmaker Intel creating much of the momentum necessary to turn his “law” into a self-fulfilling prophecy. </p>
<p>Initially, Moore had few notions of changing the world. Early in life, he discovered a love for chemistry—and though he was kept back at school for his inarticulate style, he excelled at practical activities, making bombs and rockets in a home-based laboratory. He went on to study chemistry at UC Berkeley under two Nobel laureates, and earned a Ph.D. at the California Institute of Technology in 1954.</p>
<p>Moore’s career trajectory coincided with the rise of the transistor, a device made of semiconductor material that can regulate electrical current flows and act as a switch or gate for electronic signals. As far back as the 1920s, physicists had proposed making transistors as a way to improve on the unreliable, power-hungry vacuum tubes that helped amplify signals on telephone lines, and that would be used in the thousands in computers such as ENIAC and Colossus. In 1939, William Shockley, a young Bell Labs researcher, revived the idea of the transistor and tried to fabricate a device; despite several failures, he continued on and in 1947 he and two colleagues succeeded in making the world’s first working transistor (for which they shared a Nobel Prize in Physics). In 1953, British scientists used transistors to build a computer, and <i>Fortune</i> declared it “The Year of the Transistor.”</p>
<p>In 1955, Shockley moved to Mountain View, California, to be near his mother. He opened a semiconductor laboratory and picked a handful of young scientists to join him, including Moore and his Intel co-founder, Bob Noyce. The launch of the <i>Sputnik</i> satellite in 1957 and the escalation of the Cold War created a boom within a boom: Moore and seven colleagues, including Noyce, broke away from Shockley in a group quickly branded “The Traitorous Eight,” forming the seminal start-up Fairchild Semiconductor. They planned to make silicon transistors, which promised greater robustness, miniaturization and lower power usage, so essential for computers guiding missiles and satellites.</p>
<div id="attachment_110240" style="width: 310px" class="wp-caption alignright"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-110240" src="https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-300x293.png" alt="Did Moore’s Law Really Inspire the Computer Age? | Zocalo Public Square • Arizona State University • Smithsonian" width="300" height="293" class="size-medium wp-image-110240" srcset="https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-300x293.png 300w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-250x244.png 250w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-305x298.png 305w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-260x254.png 260w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT-307x300.png 307w, https://legacy.zocalopublicsquare.org/wp-content/uploads/2020/03/Gordon_Moore_and_Robert_Noyce-INT.png 389w" sizes="(max-width: 300px) 100vw, 300px" /><p id="caption-attachment-110240" class="wp-caption-text">&#8220;Our curiosity was similar, but not our approach. Noyce liked things that flew. I liked things that blew up,&#8221; said Gordon Moore (left) with Robert Noyce.<br /><span>Courtesy of <a href="https://commons.wikimedia.org/wiki/Category:Gordon_Moore#/media/File:Gordon_Moore_and_Robert_Noyce_at_Intel_SC1_in_Santa_Clara_1970.png" target="_blank" rel="noopener noreferrer">Intel Free Press</a>.</span></p></div>
<p>Developing the core manufacturing technology was a seat-of-the-pants adventure in which Moore played a central role. In March 1958, Fairchild received an order from IBM for 100 mesa transistors priced at $150 each. Mesas, made on 1-inch silicon wafers, were so named because their profiles resembled the flat-topped mesa formations of the American Southwest. Moore’s responsibility was figuring out how to fabricate them reliably, which involved a complex chemical ballet and a considerable amount of thrift and improvisation. Unable to buy appropriate furnaces, Moore relied on glass-blowing skills to create gas-handling systems, assembled on cobbled-together aqua blue kitchen cabinets and Formica countertops. (Real lab furniture was “as expensive as heck,” he remarked.) Delivery solutions were similarly no-frills: Fairchild sent mesa transistors to IBM in a Brillo box from a local grocery store.</p>
<p>The mesa transistor was successful, but the company’s new planar transistor (named for its flat topography) was a game-changer, bringing more stability and better performance. Another key development was the step to connect transistors by making all components of a complete circuit within a single piece of silicon, paving the way for the first commercial integrated circuits, or microchips. Everyone wanted miniaturized circuitry—the obstacle to greater computing power was its need for more components and interconnections, which increased the possibilities for failure. Noyce grasped a solution: why not leave transistors together in a wafer and interconnect them there, then detach the set as a single unit? Such “microchips” could be smaller, faster and cheaper than transistors manufactured individually and connected to each other afterward. As early as 1959, Moore proposed that “sets of these components will be able to replace 90 percent of all circuitry” in digital computers. </p>
<div class="pullquote">In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute “a major revolution in the history of mankind, as important as the Industrial Revolution.”</div>
<p>Six years later, in 1965, when he wrote his now-famous article in <i>Electronics</i>—“Cramming More Components onto Integrated Circuits”—personal computers were still a decade away. Moore, who had seen the number of elements on a chip go from one, to eight, to 60, hinted at how integrated functions would “broaden [electronics’] scope beyond [his] imagination” and at the “major impact” the changes would bring, but saw his analysis as distilling merely a trend in technology that would make everything cheaper. Nevertheless, his analysis was rigorous. Doubling the number of components on an integrated circuit each year would steadily increase performance and decrease cost, which would—as Moore put it 10 years later—“extend the utility of digital electronics more broadly in society.” </p>
<p>As chemical printing continued to evolve, the economics of microchips would continue to improve, and these more complex chips would provide the cheapest electronics. Thus, an electronics-based revolution could depend on existing silicon technology, rather than some new invention. By 1970, Moore asserted, the transistor that could be made most cheaply would be on a microchip 30 times more complex than one of 1965. </p>
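<p>The arithmetic behind that 30-times figure can be checked with a short sketch (a hypothetical illustration of the extrapolation, not anything Moore published as code): doubling annually over the five years from 1965 to 1970 gives 2<sup>5</sup> = 32, roughly the 30-fold increase he projected.</p>

```python
# Illustration of Moore's 1965 extrapolation: components per chip
# doubling on a fixed cadence. Values trace the trend, not real products.
def projected_complexity(base_year, target_year, base_count=1, doubling_years=1):
    """Project component count under an exponential doubling trend."""
    periods = (target_year - base_year) / doubling_years
    return base_count * 2 ** periods

# Doubling annually, a 1970 chip is 2^5 = 32x as complex as a 1965 chip,
# matching the roughly "30 times more complex" figure in the essay.
print(projected_complexity(1965, 1970))  # 32.0
```

<p>The same function, with <code>doubling_years=2</code>, expresses Moore’s 1975 revision of the cadence.</p>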
<p>In 1968, Moore left Fairchild and joined Noyce to found Intel, with the aim of “putting cleverness back into processing silicon.” In 1975, he reviewed his original extrapolation. Chips introduced until that point had followed the trend he predicted, but engineers were reaching the limits for circuit and device cleverness. Moore now proposed a doubling about every two years.</p>
<p>The analysis in <i>Electronics</i> was becoming known as Moore’s Law. Having correctly observed the potential for exponential growth, Moore overcame his personal dislike of the spotlight by traveling widely to talk about his idea, taking every opportunity to persuade others. After all, the fulfillment of Moore’s Law would be as much social as technical, relying on widespread acceptance: industry needed to invest to develop the technology, manufacturers needed to put microchips into their products, consumers needed to buy and use electronic devices and functions, and researchers and engineers needed to invent advances to extend Moore’s Law.</p>
<p>In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute “a major revolution in the history of mankind, as important as the Industrial Revolution.” He was so confident in his vision that he told a journalist that students who’d made headlines getting kicked off campuses (“kids with the long hair and beards”) were not the ones to watch: instead, he pronounced, “we are really the revolutionaries in the world today.” In front of a crowd, he pointed out that if the auto industry made progress at the same rate as silicon microelectronics, it would be more expensive to park your car downtown for the night than to buy a new Rolls Royce. “And,” he recalled years later, “one of the members of the audience pointed out, yeah, but it’d only be 2-inches long and a half-inch high; it wouldn’t be much good for your commute.” </p>
<p>The rest is history. “For more than three decades,” the <i>New York Times</i> pointed out in 2003, Moore’s Law “has accurately predicted the accelerating power and plummeting cost of computing. Because of the exponential nature of Moore&#8217;s prediction, each change has arrived faster and more furiously.” Its curve, shallow at first (though spawning the microprocessor, digital calculator, personal computer, and internet along the way), has, since 2005, gone almost straight up in “hockey stick” style.</p>
<p>Despite the changes we’ve all witnessed, Moore’s Law is still widely misunderstood, even in tech circles. “[It’s] only 11 words long &#8230; but most people manage to mangle it,” said one report. Moore’s 1965 article is a sophisticated piece of analysis, but many prefer to interpret it more vaguely: “The definition of ‘Moore’s Law’ has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line,” noted its originator, dryly.</p>
<p>Up to April 2002, <a href="https://firstmonday.org/ojs/index.php/fm/article/view/1000/921" target="_blank" rel="noopener noreferrer">Intel&#8217;s website</a> noted that “Moore predicted that the number of transistors per integrated circuit would double every 18 months,” even though Moore had pointed out that he “never said 18 months.”</p>
<p>Why did 18 months stick? Perhaps because a projection by an Intel colleague in 1975 led to a conflation of transistor count and doubling of performance; perhaps because this timescale appeared in an influential technology column in 1992, as the modern configuration of Silicon Valley was forming; perhaps because that speed felt more accurate to the semiconductor industry.</p>
<p>During the technology bust of the early 2000s, people began to speculate about the death of Moore’s Law. Some suggested it would peter out because people would drop their computer fixations to spend less time at work and more with their families, or because Silicon Valley’s obsession with it was “unhealthy” for business strategy. In 2007, the year the iPhone launched, Moore pointed out that “we make more transistors per year than the number of printed characters in all the newspapers, magazines, books, photocopies, and computer printouts.” But he recognized exponential growth could not continue forever; he knew the physical and financial constraints on shrinking the size of chip components.</p>
<p>When people in industry circles <a href="https://www.scientificamerican.com/article/getting-more-from-moores/" target="_blank" rel="noopener noreferrer">describe Moore’s Law</a> as a “dictate—the law by which the industry lives or dies,” it is more evidence of the law’s power within Silicon Valley culture rather than its actual predictive accuracy. As the essayist Ilkka Tuomi observed in “The Lives and Death of Moore’s Law,” Moore’s Law became “an increasingly misleading predictor of future developments” that people understood to be something more like a “rule-of-thumb” than a “deterministic natural law.” In fact, Tuomi speculated, the very slipperiness of Moore’s Law might have accounted for its popularity. To an extent, tech people could pick and choose how they interpreted the dictum to suit their business needs. </p>
<p>Today, Moore’s Law continues to thrive in the smartphone space, having put some 8.5 billion transistors into a single phone that can fit in our pockets. The law may now be, in the words of one commentator, “more a challenge to the industry than an axiom for how chipmaking works,” but for what began as a 10-year forecast, it has had an astonishing run. “Once you’ve made a successful prediction, avoid making another one,” Moore quipped in 2015. </p>
<p>Even as technology continues to pervade our lives—with the advent of more specialized chips and materials, better software, cloud computing, and the promise of quantum computing—his law remains the benchmark and overarching narrative, both forecasting and describing our digital evolution. </p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/">Did Moore’s Law Really Inspire the Computer Age?</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2020/03/22/what-is-moores-law/ideas/essay/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Your Project Will Hit a Crisis</title>
		<link>https://legacy.zocalopublicsquare.org/2013/03/21/your-project-will-hit-a-crisis/ideas/nexus/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/03/21/your-project-will-hit-a-crisis/ideas/nexus/#respond</comments>
		<pubDate>Thu, 21 Mar 2013 07:01:32 +0000</pubDate>
		<dc:creator>Robert P. Colwell</dc:creator>
				<category><![CDATA[Essay]]></category>
		<category><![CDATA[Nexus]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=46252</guid>
		<description><![CDATA[<p>For every big project I&#8217;ve worked on as an engineer, there came a crisis. It would happen somewhere along the design path, generally about halfway through the schedule, and it had the potential to wreck the project. This was always unpleasant for the design team, not only because it threatened the project but also because the competition was probably that much further along. Managers might start pointing fingers at one another, and “stress fractures” in the team would threaten to bring the whole project down, even if the underlying issue was minor.</p>
<p>I first saw that happen at Bell Labs in 1979, on the Bellmac microprocessors. A chief architect had drawn up an elaborate blueprint for a new chip and then parceled out the work to many groups, each of which had to work concurrently to devise suitable circuitry for their part of the design. A few months later, all of these parcels had to be integrated together for the first time.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/21/your-project-will-hit-a-crisis/ideas/nexus/">Your Project <em>Will</em> Hit a Crisis</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>For every big project I&#8217;ve worked on as an engineer, there came a crisis. It would happen somewhere along the design path, generally about halfway through the schedule, and it had the potential to wreck the project. This was always unpleasant for the design team, not only because it threatened the project but also because the competition was probably that much further along. Managers might start pointing fingers at one another, and “stress fractures” in the team would threaten to bring the whole project down, even if the underlying issue was minor.</p>
<p>I first saw that happen at Bell Labs in 1979, on the Bellmac microprocessors. A chief architect had drawn up an elaborate blueprint for a new chip and then parceled out the work to many groups, each of which had to work concurrently to devise suitable circuitry for their part of the design. A few months later, all of these parcels had to be integrated together for the first time.</p>
<p>Everything worked perfectly. Oops, no, wait, that was just the plan.</p>
<p>In reality, as everyone approached the moment of truth, all of the groups started sending their spies around to see which group was in the biggest danger of missing the deadline. As long as some other group was worse off, then everyone else felt better. This caused a peculiar reverse motivation: instead of helping the groups that were worst off, the groups that were doing best sat on their hands. No one wanted to risk becoming the new laggard.</p>
<p>At Intel, an enlightened manager (no, not me) saw this problem looming on the P6 chip design and took pre-emptive action. Instead of simply punishing the apparent “weak link” among the groups, this manager rewarded ahead-of-schedule groups that identified and helped behind-schedule groups. We still didn’t always make the schedule, but at least everyone was pulling in the same direction. And, while no one wanted to get sent to the back of the line to help the stragglers, this manager made sure everyone knew we stood or fell together, and our mutual fortunes would be made on that basis.</p>
<p>But besides schedule crises, there were also technical crises. Always. It’s curious that technical crises always happen. What’s equally curious is how useful they tend to be. It’s in the technical-crisis mode itself that some of the best ideas tend to emerge. Something about the extreme urgency focuses everyone’s minds jointly, and superb insight bubbles up from the combined brains.</p>
<p>Why must these crises happen? It’s a combination of things, but one factor is that you never have perfect knowledge. Not ever. You don’t know the actual status of the project, because it’s changing all the time. So many variables are in play that you cannot possibly gauge them all at once. Imagine that you’re trying to build a house, and one team is handling the windows, a second team is handling the floors, a third is dealing with wiring, and a fourth is dealing with painting. But if the floors have to be changed to oak from maple, then the shade of paint has to change, or if the preferred style of window is out of stock, then perhaps the wiring is affected. And that’s a simple example, because we already know what a house is. Imagine if you didn’t really know what a house was, or had only a rough idea of what a house might be. That’s what new technology projects are often like.</p>
<p>With the business of computer chips, no plan for troubleshooting and testing the product can ever be truly comprehensive, because there are so many combinations in play. (In the first couple of seconds of operation of a new chip, you will have run more cycles than were simulated over the previous five years of design.) So as you go, you resolve the issues as you can. Still, even if you’re really good and really lucky and fix most of the problems, some small but crucial ones might remain. These are seemingly minor issues that take on weight and become chronic, like a ball and chain wrapped around the project’s ankle. Eventually, these chronic issues may force a crisis, or at least a difficult choice between two unappealing alternatives.</p>
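<p>The claim about those first few seconds of silicon can be checked with a back-of-the-envelope sketch (the clock rate and simulator speed below are assumed figures for illustration, not numbers from this essay):</p>

```python
# Hypothetical comparison: cycles executed by first silicon vs. cycles
# covered by pre-silicon simulation. All rates are illustrative assumptions.
SILICON_HZ = 100e6          # assume a ~100 MHz part (P6-era clock rate)
SIM_CYCLES_PER_SEC = 1.0    # assume a full-chip simulator near 1 cycle/s
SIM_SECONDS = 5 * 365 * 24 * 3600   # five years of round-the-clock simulation

sim_total = SIM_CYCLES_PER_SEC * SIM_SECONDS   # ~1.6e8 simulated cycles
seconds_to_match = sim_total / SILICON_HZ      # ~1.6 s of real silicon
print(round(seconds_to_match, 2))
```

<p>Under these assumed rates, first silicon overtakes five years of simulation in well under two seconds, which is why no test plan can be truly comprehensive.</p>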
<p>In developing Intel’s P6 chip (ultimately known as the Pentium Pro) in the early 1990s, we eventually reached a point in the process when it started to look like our design was off. To be specific—for those who are more technically minded—our estimates of the silicon die size (the die size is the chip’s length-by-width footprint) were showing us that the dimensions for the chip we wanted to produce were too big. Not so big as to make it impossible to produce, but big enough to go against some other priorities and throw other efforts off balance. In the same way that a dull toothache eventually forces you to see your dentist, this problem eventually pained the senior chip architects enough to cause them to grapple with it head-on. We knew that if we procrastinated any longer, the smaller decisions we were making daily could all become moot.</p>
<p>One night, while traveling, we all came clean at a Chinese restaurant in Santa Clara, California. We admitted to each other that the die-size problem had become chronic, with potentially dire implications should we fail to resolve it, and we got serious. Over General Tso’s chicken, we took a napkin and started drawing possible remedies. Finally, one of us came up with the winning idea. He pointed out that the chip, as originally envisioned, had two separate structures, a design feature we’d come up with as a way to disentangle some complex functions. But at this point in the project, we felt confident that we could handle the complexity associated with combining these two structures into one. So we did. By the time we were eating our fortune cookies, we had a workable plan that got the die size under control, didn’t sacrifice too much performance, and would not generate whole new families of design bugs. Intel should have mounted and framed that napkin, because the basic plan it described has been the basis for all of Intel’s x86 chips since 1993.</p>
<p>My takeaway from this and other similar situations is this: Even well-managed projects will encounter project crises that, if not resolved expeditiously and correctly, can scuttle the project or cause it to end in mediocre products. If you hit such a crisis, remember the advice from <em>Hitchhiker&#8217;s Guide to the Galaxy</em>: don’t panic. Don’t waste time looking for scapegoats; remind everyone that they all sink or swim together. Collect your most senior technologists, take them to a Chinese restaurant, and give them as many napkins as it takes to work through the issues.</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/21/your-project-will-hit-a-crisis/ideas/nexus/">Your Project <em>Will</em> Hit a Crisis</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/03/21/your-project-will-hit-a-crisis/ideas/nexus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Move Over, Moore’s Law</title>
		<link>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/</link>
		<comments>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/#respond</comments>
		<pubDate>Wed, 20 Mar 2013 07:01:40 +0000</pubDate>
		<dc:creator>Zocalo</dc:creator>
				<category><![CDATA[Up For Discussion]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Future Tense]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[technology]]></category>

		<guid isPermaLink="false">https://legacy.zocalopublicsquare.org/?p=46233</guid>
		<description><![CDATA[<p>The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s Zócalo event on the future of nanotechnology, we asked engineers and people who think about computing, “What comes after the computer chip?”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/">Move Over, Moore’s Law</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s <a href="https://legacy.zocalopublicsquare.org/event/what-comes-after-the-computer-chip/">Zócalo event on the future of nanotechnology</a>, we asked engineers and people who think about computing, “What comes after the computer chip?”</p>
<p>The post <a rel="nofollow" href="https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/">Move Over, Moore’s Law</a> appeared first on <a rel="nofollow" href="https://legacy.zocalopublicsquare.org">Zócalo Public Square</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://legacy.zocalopublicsquare.org/2013/03/20/move-over-moores-law/ideas/up-for-discussion/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
