The Circuits of the Future
In building smaller, faster, and cheaper computers, Yale’s engineers are battling both cost and physics.

The Second World War had no sooner ended in the fall of 1945 than the U.S. Army unveiled its newest innovation—ENIAC, the Electronic Numerical Integrator and Computer. The device was designed to calculate missile trajectories, and though what was dubbed the “giant brain” worked as advertised, it was hardly the sleek machine that has become such a ubiquitous part of modern society.

The world’s first electronic computer was huge. It filled several offices at the University of Pennsylvania with a tangle of thousands of vacuum tubes and miles of wire, and for all this, the best ENIAC could manage was what historian Scott McCartney calls “sophomoric calculations at Model T-like speeds.”

Today’s computers, of course, vastly outperform that original model, and they do so with integrated circuitry the size of a fingernail. But while putting five million transistors on a single computer chip is certainly a technological tour de force, it is already yesterday’s news.

“The trend for the past half century has been towards smaller, faster, more powerful, more reliable, and cheaper computers and circuits,” says Tso-Ping Ma, a professor of applied physics and electrical engineering. “We’re trying to figure out how to make this trend continue.”

At Yale, the work is headquartered in the applied physics and electrical engineering laboratories of the Becton Engineering and Applied Science Center. Researchers there aren’t actually making computers; rather, they’re helping to develop the next generations of components—the chips, transistors, diodes, switches, and the like—that will go into the ENIACs, and other advanced electronic devices, of the future.

This is potentially high-payoff science. The computer industry is huge, generating hundreds of billions of dollars annually in revenue, and it is hugely competitive. Up to this point, industry, university, and government researchers alike have been hugely successful in coming up with solutions to the technological challenges that have threatened to block progress. Indeed, says Ma, who, like many of his Yale colleagues, is a frequent consultant to a variety of companies, the industry has done such a good job that had automobile manufacturers done as well, “the average car would now cost 16 cents, have a top speed of 25,000 miles per hour, get 1,500 miles per gallon, and seat 400,000 people.”

This is impressive, but then again, the average car doesn’t crash at least once a day—an unfortunate experience well-known to most computer users—so there is clearly more work to do. While cyber-airbags for Apples and IBMs would be a boon, the main challenge for Yale researchers and others lies in developing new techniques of miniaturization and determining how to deal with their often-unforeseen consequences. “Our eventual goal is to be able to control and manipulate single molecules and atoms,” says theorist Douglas Stone, who chairs the applied physics department.

But getting to that point is proving to be a hugely expensive endeavor. As components shrink well out of sight (transistors are now commonly less than a millionth of a meter long), the cost to equip a laboratory capable of investigating what goes on at this almost unimaginable scale can easily top a million dollars.

Yale, in recent years, has equipped a number of such labs. In the Becton Center, “clean rooms” that are about as dust-free as any place on the planet provide the setting for the manufacture of experimental computer chips. Scanning tunneling microscopes take portraits of molecules, an ultra-high-tech “spray painter” creates circuits that are no more than two dozen atoms thick, and, inside a freezer in which temperatures hover just above absolute zero (about 460 degrees below zero on the Fahrenheit scale), scientists are learning how to listen to individual electrons.

That Yale has been willing to pay this kind of price to stay at the frontier of a discipline known as low-energy physics seems proof that the dark days of engineering and applied physics at the University are finally history. In late 1991 and early 1992, the Committee on Restructuring the Faculty of Arts and Sciences responded to ballooning deficits with a controversial proposal to trim, reshape, or even eliminate selected academic departments. Under the restructuring plan, which was championed by then-President Benno C. Schmidt Jr., the engineering disciplines were among those targeted for major cuts in both staff and budget. But a near revolt by faculty members scuttled the overall proposal, and in 1995, President Levin calmed lingering fears of Becton Center scientists when he promoted the “goal of assuring Yale a position at the forefront of engineering education and research.”

A year later, in a document called “Preparing for Yale’s Fourth Century,” Levin used the term “selective excellence” to describe the central principle that would guide the University’s development (Yale Alumni Magazine, Dec. '96). Faced with very real budgetary constraints, Yale would concentrate its intellectual firepower in areas that were likely to yield high returns. “Our strength here is truly by design,” notes Mark Reed, the Harold Hodgkinson Professor of Electrical Engineering and Applied Physics and the chairman of the electrical engineering department. “We add people to our group very carefully.”

Building a team of physicists and engineers capable of addressing the concerns of the computer industry would certainly seem like a good investment. Consumers are always demanding that next year’s computers process more information than the machines they currently own. But more than fulfilling the desire to crunch a bigger spreadsheet or play a more advanced version of “Quake” or “Duke Nukem,” computers have become an indispensable part of the effort to understand and manage the world. And while calculating a missile trajectory is a relatively easy task these days, the ability to create on-screen, for example, a realistic model of the world’s weather systems in order to predict the impact of global warming, or of the human brain to determine whether a new drug will work, takes considerably more computing power than even the fastest machines currently possess.

While the need to improve computers seems like a given to all save the most committed Luddites, precisely how to do the job is getting less and less certain. “We’ve made tremendous progress over the past half century,” says Reed, who came to Yale from Texas Instruments and who works in molecular computing, a futuristic research effort aimed at solving a problem that lurks over the near-term horizon. “However, as we look at our requirements ten or more years down the road, we have to admit that we really don’t know how to make the components we project we’ll need. We’re going to hit a technological wall.”

When the computer was first developed, one of the biggest hurdles researchers faced was dealing with bugs—literally, moths that were attracted to heat from the vacuum tubes and would cause short circuits. In 1947, scientists at Bell Labs invented the transistor, a so-called solid-state device that, like a vacuum tube, could serve as a gate through which electrons could pass. Flying insects had scant interest in transistors, which were also more reliable and smaller (the first ones were about the size of a cold medicine caplet) than tubes, and by the end of the 1950s, researchers had begun to use them in computers.

To these machines, the world is nothing more than strings of 1’s and 0’s—carefully regulated pulses of electrons that turn transistors on and off. The faster this can happen, the more information a computer can process, and over the years, scientists have become adept at finding new ways to shrink transistors and pack more and more of them into the increasingly sophisticated brain—technically known as a microprocessor—of the computer.
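The bookkeeping behind those 1’s and 0’s is easy to sketch. The short Python fragment below is purely illustrative (nothing like it runs inside a microprocessor), but it shows why packing in more on-off switches pays off: n switches can stand for 2^n distinct values, and even a single character is stored as just such a pattern of bits.

    # Each transistor acts as an on/off switch, so n switches can
    # encode 2**n distinct values.
    for n in (1, 8, 32):
        print(f"{n} switches -> {2**n:,} distinct values")

    # A character is stored as one such pattern: 'Y' is the
    # eight bits 01011001.
    print(format(ord('Y'), '08b'))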

“All these incredible advances that we’ve seen over the past quarter century have been primarily due to our ability to make devices smaller and smaller by a process called photolithography,” says Reed. “Basically, you start with a block of material—in computers, it’s silicon—and with a light beam, you whittle away.”

With the most advanced photolithography techniques, it is now possible to create transistors that are ten million times smaller than the original models. But it will be exceedingly difficult to make the shrinkage continue.

The cost of creating computer chips is already formidable; a fabrication facility to make the current generation costs about $1.5 billion. This figure could easily double or triple in the future. Silicon, the very stuff of the information revolution, is close to its limit in terms of the speed with which it can allow electrons to travel. Photolithography carving tools can only carve so small, and the lower limits are almost within sight. And in this ultra-Lilliputian world in which computer makers must now navigate, there are strange and sometimes unpredictable currents that arise from fundamental laws of physics.

Yale researchers have taken up different aspects of the challenge. T.P. Ma, for example, is trying to extend the performance limits of silicon, which begins to have operating problems when the transistors created in the material shrink to a tenth or a hundredth the size of those currently in production. At this tiny size, which the scientific ruler measures in billionths of a meter (nanometers), an odd thing happens. A component of the transistor that normally serves as an insulator begins to fail. As a result, electrons that had been kept in check by the insulator can now tunnel through it. The electrons disappear, the gates—the proxies for the 1’s and 0’s of computer code—no longer open and close the right way, and the flow of information is disrupted.
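The severity of the problem can be sketched with textbook quantum mechanics: in the standard rectangular-barrier approximation, the probability that an electron tunnels through an insulating layer falls off exponentially with the layer’s thickness. The Python fragment below assumes a 3-electron-volt barrier purely for illustration; the numbers are back-of-the-envelope figures, not measurements from Ma’s laboratory.

    import math

    # Rough rectangular-barrier estimate of electron tunneling:
    # probability ~ exp(-2 * kappa * d), where d is the insulator
    # thickness and kappa is set by the barrier height.
    HBAR = 1.0545718e-34   # Planck's constant over 2*pi, J*s
    M_E = 9.10938e-31      # electron mass, kg
    EV = 1.602177e-19      # one electron-volt, J

    barrier_height_eV = 3.0   # assumed barrier height (illustrative)
    kappa = math.sqrt(2 * M_E * barrier_height_eV * EV) / HBAR  # 1/m

    for d_nm in (5.0, 3.0, 2.0, 1.0):
        prob = math.exp(-2 * kappa * d_nm * 1e-9)
        print(f"{d_nm:.0f} nm insulator: tunneling probability ~ {prob:.1e}")

In this toy model, shaving the insulator from five nanometers down to one raises the leakage by more than thirty orders of magnitude, which is why a thin material that makes electrons “think it is thick” would be so valuable.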

“When we identified the electron tunneling problem in 1994, there was no known solution,” says Ma, “and we wondered: Could we find a material that was still thin, but which would make electrons think it was thick, and so prevent the tunneling effect?”

The tricky insulator turned out to be a chemical offshoot of silicon known as silicon nitride. “We’re now in active collaborations with major semiconductor companies, including IBM, Motorola, Texas Instruments, and Intel,” says Ma, adding that transistors fabricated from this material should start appearing in computer circuitry by 2003.

Ma’s colleague, Jerry M. Woodall, the C. Baldwin Sawyer Professor of Electrical Engineering and Applied Physics, works with silicon as well, but Woodall, who came to Yale last year from Purdue and IBM, also deals with more exotic materials that seem to outperform silicon in small-scale circuits. “At least in the relatively near-term—say, the next five to ten years—components might not actually have to get much smaller if we take a materials approach to improving chip speed,” says Woodall.

Showing off a brand-new, $865,000 machine that can “spray paint” exotic substances onto silicon wafers in individual layers one atom thick, the scientist talks about his latest find, a material known as indium arsenide. Transistors and other circuitry built from this substance through a process called molecular beam epitaxy might not be the ultimate answer, says Woodall. “But we think we can get electrons working at least one hundred times faster and still keep components in size ranges we currently know how to handle.”

Eventually, however, the shrinkage will continue, and as it does, the challenges will intensify. Take the effect on wire, for instance. To tie microcircuits together will require wires that are one thousand times skinnier than a human hair. Daniel Prober, a professor of applied physics, developed a technique in the mid-1980s that would eventually enable him to make “nanowires.” At 50 billionths of a meter in diameter, these are among the thinnest in existence for use in real devices, and so far, they’ve proven valuable in a way that has little to do with computers.

Stars emit many different kinds of radiation, and scientists have learned a considerable amount about the universe by developing tools that can detect and analyze these emissions, which are often just faint whispers in space. Certain types of microwave radiation have been difficult to hear, but scientists are keenly interested in listening.

“Microwaves can give us important information about the birth and death of stars,” says Prober. With a grant from NASA and the National Science Foundation, the scientist turned one of the nanowires he’d created into a microwave detector, and the device he invented—the diffusion-cooled superconducting hot-electron bolometer—will soon be flying in a 747 that the space agency plans to use as a high-altitude observatory to pick up these “little squeaks” that provide insights about the workings of the universe.

Robert Schoelkopf, an assistant professor of applied physics, has also turned his work in “nanoscience” toward the detector business. “We’re making single-electron transistors, and among other things, these are proving very useful for picking up and amplifying certain kinds of extremely weak signals from outer space,” says Schoelkopf.

However, the scientist didn’t undertake this line of research with the intention of providing a boon for the radio astronomers with whom he once worked. Rather, the single-electron transistor was simply the logical end point of the miniaturization trend.

Even at their current minuscule size, transistors regulate the flow of thousands of electrons, says Schoelkopf, adding that what goes on in this tiny landscape works in accordance with the principles set down by physicists during the last century. But as computer circuitry shrinks from “micro” to “nano,” classical physics—the physics of everyday experience that describes everything from the way a light bulb works to the operation of the on-off switch on a toaster—no longer applies.

The reason lies in a branch of physics known as quantum mechanics. In the millionth-of-a-meter-and-larger size range, “quantum weirdness,” as scientists from Einstein on down have termed these strange effects, is hidden, but in the billionth-of-a-meter size range that circuit makers are now beginning to explore, “the design rules go haywire,” says Schoelkopf.

Take Ohm’s Law, for instance. This fundamental axiom about electricity likens the flow of electrons to the way water runs through a pipe: If you double the voltage, you double the current. In the quantum world, however, “this no longer applies,” says Schoelkopf. “There’s interference between the electrons, so the current could actually go up or down.”
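In symbols, the classical rule is the familiar proportionality (a standard statement of Ohm’s law, included here for illustration):

    $$ I = \frac{V}{R} \qquad\Longrightarrow\qquad V \to 2V \;\text{ gives }\; I \to 2I \quad (R \text{ fixed}) $$

It is that clean proportionality that interference between electrons destroys at the nanometer scale.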

This kind of variability is not what chip makers want in a circuit. “We’re experimenting with novel electronics like single-electron transistors to understand what happens in the quantum realm,” says Schoelkopf. “We’ll need these answers if we’re ever going to succeed in operating commercially viable quantum-scale devices.”

In a laboratory whose walls are sheathed with four layers of copper to ward off radiation that may come from other parts of Becton as well as from cell phones outside, Schoelkopf cools the tiny transistors he’s made to near absolute zero. At such low temperatures the circuits, which are actually not much smaller than conventional models, “behave according to quantum rules,” says the scientist.

Using this approach, the researcher and his collaborators at Yale and at Chalmers University in Sweden have been able to make single-electron transistors that work one thousand times faster than those currently available. But though these advanced devices have astounded the scientific community and are going to be used by astronomers as very sensitive amplifiers, Schoelkopf cautions, “Don’t look for this technology to show up in your laptop computer anytime soon.”

Or, cynics might add, maybe ever. It is one thing to create a few nanocircuits and investigate the implications of life in the ultrasmall lane; it is quite another to mass-produce such components in computers that are cheap enough for people to afford.

When Mark Reed, himself a pioneer in nanostructure development, confronted this dilemma, he realized that “our fabrication technologies just weren’t going to win. A facility to make the next generation of microchips is estimated to cost a few billion dollars, and you have to sell a lot of chips to recoup your investment. Beyond that generation, it gets worse.”

Much worse. With the nanoscale electronic components of the more distant future appearing to be too expensive, or simply impossible, to create, Reed went looking for a novel fabrication approach. He found it in a beaker. “Our new strategy is this: Don’t chisel away with photolithography; instead, do clever chemistry so that molecules with the right properties will assemble where you want them,” argues Reed.

What the scientist proposed had been suggested before, but the idea had a distinctly checkered past because molecular-scale “self-assembly” proved more hype than reality. In the last few years, however, Reed has given the strategy a good name. “We’ve done the science, and we’re developing self-assembled devices that actually work—in some cases, better than their solid-state counterparts,” he explains.

So far, Reed and his students have created, in a beaker, diodes, which are one-way doors for current, and a molecular switch. “These are not things you’ll be able to buy at your local Radio Shack in the near future,” he says. “But we hope to be building components for specialized applications within the next several years.”
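The “one-way door” that a diode provides is the behavior captured by the standard ideal-diode (Shockley) equation from classical electronics. The Python sketch below uses textbook constants purely for illustration; it says nothing about the specific chemistry of Reed’s molecular devices.

    import math

    # Ideal-diode (Shockley) equation: current flows freely one way
    # and is almost completely blocked the other way.
    I_S = 1e-12      # assumed saturation current, amperes
    V_T = 0.02585    # thermal voltage at room temperature, volts

    def diode_current(v):
        """Current (A) through an ideal diode at applied voltage v (V)."""
        return I_S * (math.exp(v / V_T) - 1.0)

    for v in (+0.6, -0.6):
        print(f"V = {v:+.1f} V -> I = {diode_current(v):+.3e} A")

Forward-biased, this toy diode passes roughly ten milliamps; reverse-biased, about a trillionth of an amp, a difference of some ten orders of magnitude from a single equation.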

The prospect of pouring carefully selected and custom-designed molecules into a glass jar and having them assemble themselves into highly sophisticated computer chips is clearly a long way off. And as to Reed’s belief that such a self-assembled device would be able to be “schooled” in problem-solving skills, well, that remains in the realm of science fiction.

But so, at the advent of the ENIAC era, were today’s personal computers. “On the road to the revolutionary, you first have to do the mundane,” says Reed. “Our results are surprising a lot of people and paving the way to the computers and electronic devices of the future.”