NKisiland WordPress

Inside the Dynomak: A Fusion Technology Cheaper Than Coal

Modifying the most common type of experimental reactor might finally make fusion power feasible (from spectrum.ieee.org )

Fusion power has many compelling arguments in its favor. It doesn’t produce dangerous, long-term toxic waste, as nuclear fission does. It’s far cleaner than coal, with a supply of fuel that’s virtually unlimited. And unlike wind and solar, the output of a fusion power plant would be constant and reliable.

In October, Lockheed Martin Corp. revealed that it’s been working on a type of fusion reactor that could be made small enough to transport by truck. Lawrenceville Plasma Physics raised money through crowdfunding in June to advance its alternative proton-boron fusion. Helion Energy is developing a type of fusion based on magnetic compression, and General Fusion is working toward a power system that involves shock waves inside a vortex of liquid metal.

A particularly promising approach was unveiled recently by a University of Washington research group, led by plasma physicist Tom Jarboe. They’ve been developing a type of fusion reactor called a dynomak. The researchers say the technology is unique because it offers a path to a power plant backed by demonstrated physics, and because such a reactor promises to be even more economical than a coal-fired power plant.

The dynomak is a variation of the most popular type of research fusion machine, the tokamak. Essentially, a tokamak is a doughnut-shaped machine that generates helical magnetic fields by combining toroidal fields (which go around the doughnut’s equator) with poloidal fields (which wrap around the outside of the doughnut). These fields have to be strong enough to keep the plasma stable and contained indefinitely at the tens to hundreds of millions of degrees Celsius necessary to induce fusion.
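
To make that geometry concrete, here is a minimal sketch of how the two field components combine, using the standard large-aspect-ratio expression for the field-line pitch (the "safety factor" q ≈ r·B_tor / (R·B_pol)). The specific numbers are purely illustrative and are not taken from the article.

```python
import math

# Illustrative tokamak-like parameters (not from the article):
R = 3.0        # major radius of the torus, metres
r = 1.0        # minor radius where we evaluate the field line, metres
B_tor = 5.0    # toroidal field strength, tesla
B_pol = 0.8    # poloidal field strength, tesla

# The two components add vectorially into a helical field.
B_helical = math.hypot(B_tor, B_pol)

# Safety factor q: roughly how many times a field line winds the long way
# around the torus for each turn it makes the short way around.
q = (r * B_tor) / (R * B_pol)

print(f"Combined (helical) field strength: {B_helical:.2f} T")
print(f"Field-line pitch / safety factor q ~ {q:.2f}")
```

In a conventional tokamak both components come from external coils plus a driven plasma current; the spheromak approach described below generates the confining fields largely from currents flowing in the plasma itself.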

In practice, tokamaks are hollow, doughnut-shaped vacuum chambers with interior walls made of heat-resistant metals or ceramics. Outside the chamber are massive superconducting coils that generate the toroidal magnetic fields that stabilize the plasma. The European Union, China, India, Japan, Russia, South Korea, and the United States are collaborating to build a giant US $50 billion tokamak in France called ITER (originally International Thermonuclear Experimental Reactor), which may lead to a fusion power plant in the 2030s. But the University of Washington group—and its alternative-fusion competitors—are hoping to beat it to commercialization.

The University of Washington’s dynomak is a refinement of a subtype of tokamak called a spheromak. The most important difference is that the spheromak does away with most of the tokamak’s expensive superconducting magnetic coils. Instead, a spheromak uses the electric currents flowing through the plasma itself to generate the magnetic fields needed to both stabilize and confine the plasma.

This is tricky, as UW graduate student Derek Sutherland explains. For it to work, you need not only a sophisticated understanding of the physics underlying the behavior of the plasma but also a very efficient way of driving the current. If you’re not careful, you’ll end up dumping all the energy that your reactor is producing right back into the plasma just to keep it contained—resulting in a very expensive machine that will power itself and nothing else.
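
A toy power-balance sketch makes the point; all of the efficiency and power numbers below are invented purely for illustration and do not come from the article or the UW design.

```python
# Toy power balance for a steady-state reactor (all numbers illustrative).
fusion_power_mw        = 1000.0  # thermal power produced by fusion
conversion_eff         = 0.40    # fraction of thermal power turned into electricity
current_drive_power_mw = 350.0   # electricity recirculated to drive and confine the plasma

gross_electric = fusion_power_mw * conversion_eff
net_electric   = gross_electric - current_drive_power_mw

print(f"Gross electric output: {gross_electric:.0f} MW")
print(f"Net output after current drive: {net_electric:.0f} MW")
# If the current drive is inefficient enough, the net output drops to zero:
# the reactor "powers itself and nothing else".
```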

According to Sutherland, the big breakthrough was UW’s experimental discovery in 2012 of a physical mechanism called imposed-dynamo current drive (hence “dynomak”). By injecting current directly into the plasma, imposed-dynamo current drive lets the system control the helical fields that keep the plasma confined. The result is that you can reach steady-state fusion in a relatively small and inexpensive reactor. “We are able to drive plasma current more efficiently than previously possible,” says Sutherland. “With that efficiency can come higher current and a more compact, economical design.”

How economical? According to projections by Sutherland’s group, a dynomak has the potential to cost less than a tenth as much to build as a tokamak like ITER, even as it produces five times as much power. This massive boost in efficiency is very compelling: According to UW’s analysis, it makes the total cost of a dynomak fusion power plant with an output of 1 gigawatt slightly cheaper than the total cost of a coal power plant with the same output—$2.7 billion versus $2.8 billion.
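
Taking the article's headline figures at face value, the comparison in capital cost per unit of capacity is simple arithmetic; this sketch uses only the numbers quoted above.

```python
# Figures quoted in the article (projected construction cost and output).
dynomak_cost_usd = 2.7e9   # $2.7 billion
coal_cost_usd    = 2.8e9   # $2.8 billion
plant_output_w   = 1e9     # 1 gigawatt for both plants

def cost_per_kw(total_cost_usd: float, output_w: float) -> float:
    """Capital cost per kilowatt of generating capacity."""
    return total_cost_usd / (output_w / 1e3)

print(f"Dynomak: ${cost_per_kw(dynomak_cost_usd, plant_output_w):,.0f}/kW")
print(f"Coal:    ${cost_per_kw(coal_cost_usd, plant_output_w):,.0f}/kW")
# -> roughly $2,700/kW versus $2,800/kW: the projected dynomak plant
#    edges out coal on construction cost per unit of capacity.
```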

The UW researchers are particularly optimistic about their dynomak because it’s not much of a deviation from established systems. “I think we’ve blended the mainstream and alternates into a pathway that is completely plausible but different enough to really start addressing the economic issues facing fusion power,” says Sutherland.

“The spheromak—and the dynomak is a species of spheromak—in particular has not received the level of attention that it warrants,” says University of Iowa physicist Fred Skiff. “The potential advantages are significant: a lower magnetic field—and therefore lower cost and complexity—and a smaller reactor.” The lower magnetic field requirements are important because “large superconducting coils are not trivial to produce and protect in a reactor environment.”

However, “there are significant unknowns,” says Skiff. “The ability to control the current profile, the plasma position, and the ability to maintain high confinement will have to be demonstrated.”

The next steps for the dynomak are straightforward. The experimental device Jarboe’s group is working with right now, called HIT-SI3, is about one-tenth the size that a commercial dynomak fusion reactor would be. It includes three helicity injectors, which are the coils that control the delivery of twisting magnetic fields into the plasma. “The eventual dynomak reactor will have six injectors according to the current design,” says Sutherland. With $8 million to $10 million in funding, the group hopes to construct HIT-SIX, a six-injector machine that will be twice as large as HIT-SI3.

At that size, things start to get interesting, says Sutherland. HIT-SIX is designed to reach millions of degrees Celsius using a mega-ampere of plasma current. If imposed-dynamo current drive works well in HIT-SIX, he’ll be “much more confident going forward that our development path will be successful,” he says.

That entire path, including an electricity-generating pilot plant, would require about $4 billion, Jarboe’s group projects. Compared with ITER’s $50 billion, that’s a bargain.

A promising light source for optoelectronic chips can be tuned to different frequencies

The MIT researchers deposited triangular layers of molybdenum disulfide on a silicon substrate. At left, regions highlighted in blue indicate where the layers overlap.

Chips that use light, rather than electricity, to move data would consume much less power—and energy efficiency is a growing concern as chips’ transistor counts rise. (from phys.org)

Of the three chief components of optical circuits—light emitters, modulators, and detectors—emitters are the toughest to build. One promising light source for optical chips is molybdenum disulfide (MoS2), which has excellent optical properties when deposited as a single, atom-thick layer. Other experimental on-chip light emitters have more-complex three-dimensional geometries and use rarer materials, which would make them more difficult and costly to manufacture.

In the next issue of the journal Nano Letters, researchers from MIT’s departments of Physics and of Electrical Engineering and Computer Science will describe a new technique for building MoS2 light emitters tuned to different frequencies, an essential requirement for optoelectronic chips. Since thin films of the material can also be patterned onto sheets of plastic, the same work could point toward thin, flexible, bright, color displays.

The researchers also provide a theoretical characterization of the physical phenomena that explain the emitters’ tunability, which could aid in the search for even better candidate materials. Molybdenum is one of several elements, clustered together on the periodic table, known as transition metals. “There’s a whole family of transition metals,” says Institute Professor Emerita Mildred Dresselhaus, the corresponding author on the new paper. “If you find it in one, then it gives you some incentive to look at it in the whole family.”

Joining Dresselhaus on the paper are joint first authors Shengxi Huang, a graduate student in electrical engineering and computer science, and Xi Ling, a postdoc in the Research Laboratory of Electronics; associate professor of electrical engineering and computer science Jing Kong; and Liangbo Liang, Humberto Terrones, and Vincent Meunier of Rensselaer Polytechnic Institute.

Most optical communications systems—such as the fiber-optic networks that provide many people with Internet and TV service—maximize bandwidth by encoding different data at different optical frequencies. So tunability is crucial to realizing the full potential of optoelectronic chips.

The MIT researchers tuned their emitters by depositing two layers of MoS2 on a silicon substrate. The top layers were rotated relative to the lower layers, and the degree of rotation determined the wavelength of the emitted light.

Ordinarily, MoS2 is a good light emitter only in monolayers, or atom-thick sheets. As Huang explains, that’s because the two-dimensional structure of the sheet confines the electrons orbiting the MoS2 molecules to a limited number of energy states.

MoS2, like all light-emitting semiconductors, is what’s called a direct-band-gap material. When energy is added to the material, either by a laser “pump” or as an electrical current, it kicks some of the electrons orbiting the molecules into higher energy states. When the electrons fall back into their initial state, they emit their excess energy as light.
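
The link between that energy drop and the colour of the emitted light is the Planck relation, λ = hc/E. As a rough illustration, here is a short sketch; the band-gap value used is an assumption for illustration (monolayer MoS2 is usually quoted at roughly 1.8 eV), not a figure from the article.

```python
# Convert a band-gap energy to an emission wavelength via lambda = h*c / E.
PLANCK_H = 6.626e-34   # J*s
LIGHT_C  = 2.998e8     # m/s
EV_TO_J  = 1.602e-19   # joules per electronvolt

def emission_wavelength_nm(gap_ev: float) -> float:
    """Wavelength (nm) of a photon carrying the full band-gap energy."""
    return PLANCK_H * LIGHT_C / (gap_ev * EV_TO_J) * 1e9

# Assumed, illustrative value for a monolayer MoS2 direct gap (~1.8 eV).
print(f"{emission_wavelength_nm(1.8):.0f} nm")   # ~690 nm, i.e. red light
```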

In a monolayer of MoS2, the excited electrons can’t escape the plane defined by the material’s crystal lattice: Because of the crystal’s geometry, the only energy states available to them to leap into cross the light-emitting threshold. But in multilayer MoS2, the adjacent layers offer lower-energy states, below the threshold, and an excited electron will always seek the lowest energy it can find.

So while the researchers knew that rotating the layers of MoS2 should alter the wavelength of the emitted light, they were by no means certain that the light would be intense enough for use in optoelectronics. As it turns out, however, the rotation of the layers relative to each other alters the crystal geometry enough to preserve the band gap. The emitted light is not quite as intense as that produced by a monolayer of MoS2, but it’s certainly intense enough for practical use—and significantly more intense than that produced by most rival technologies.

The researchers were able to precisely characterize the relationship between the geometries of the rotated layers and the wavelength and intensity of the light emitted. “For different twisted angles, the actual separation between the two layers is different, so the coupling between the two layers is different,” Huang explains. “This interferes with the electron densities in the bilayer system, which gives you a different photoluminescence.” That theoretical characterization should make it much easier to predict whether other transition-metal compounds will display similar light emission.

“This thing is something really new,” says Fengnian Xia, an assistant professor of electrical engineering at Yale University. “It gives you a new model for tuning.”

“I expected that this kind of angle adjustment would work, but I didn’t expect that the effect would be so huge,” Xia adds. “They get quite significant tuning. That’s a little bit surprising.”

Xia believes that compounds made from other transition metals, such as tungsten disulfide or tungsten diselenide, could ultimately prove more practical than MoS2. But he agrees that the MIT and RPI researchers’ theoretical framework could help guide future work. “They use density-functional theory,” he says. “That’s a kind of general theory that can be applied to other materials also.”

The Coldest Place in the Universe

Physicists in Massachusetts come to grips with the lowest possible temperature: absolute zero

Where’s the coldest spot in the universe? Not on the moon, where the temperature plunges to a mere minus 378 Fahrenheit. Not even in deepest outer space, which has an estimated background temperature of about minus 455°F. As far as scientists can tell, the lowest temperatures ever attained were recently observed right here on earth. (from smithsonianmag.com)

The record-breaking lows were among the latest feats of ultracold physics, the laboratory study of matter at temperatures so mind-bogglingly frigid that atoms and even light itself behave in highly unusual ways. Electrical resistance in some elements disappears below about minus 440°F, a phenomenon called superconductivity. At even lower temperatures, some liquefied gases become “superfluids” capable of oozing through walls solid enough to hold any other sort of liquid; they even seem to defy gravity as they creep up, over and out of their containers.

Physicists acknowledge they can never reach the coldest conceivable temperature, known as absolute zero and long ago calculated to be minus 459.67°F. To physicists, temperature is a measure of how fast atoms are moving, a reflection of their energy—and absolute zero is the point at which there is absolutely no heat energy remaining to be extracted from a substance.

But a few physicists are intent on getting as close as possible to that theoretical limit, and it was to get a better view of that most rarefied of competitions that I visited Wolfgang Ketterle’s lab at the Massachusetts Institute of Technology in Cambridge. It currently holds the record—at least according to Guinness World Records 2008—for lowest temperature: 810 trillionths of a degree F above absolute zero. Ketterle and his colleagues accomplished that feat in 2003 while working with a cloud—about a thousandth of an inch across—of sodium molecules trapped in place by magnets.

I ask Ketterle to show me the spot where they’d set the record. We put on goggles to protect ourselves from being blinded by infrared light from the laser beams that are used to slow down and thereby cool fast-moving atomic particles. We cross the hall from his sunny office into a dark room with an interconnected jumble of wires, small mirrors, vacuum tubes, laser sources and high-powered computer equipment. “Right here,” he says, his voice rising with excitement as he points to a black box that has an aluminum-foil-wrapped tube leading into it. “This is where we made the coldest temperature.”

Ketterle’s achievement came out of his pursuit of an entirely new form of matter called a Bose-Einstein condensate (BEC). The condensates are not standard gases, liquids or even solids. They form when a cloud of atoms—sometimes millions or more—all enter the same quantum state and behave as one. Albert Einstein and the Indian physicist Satyendra Bose predicted in 1925 that scientists could generate such matter by subjecting atoms to temperatures approaching absolute zero. Seventy years later, Ketterle, working at M.I.T., and, almost simultaneously, Carl Wieman, working at the University of Colorado at Boulder, and Eric Cornell of the National Institute of Standards and Technology in Boulder created the first Bose-Einstein condensates. The three promptly won a Nobel Prize.

Ketterle’s team is using BECs to study basic properties of matter, such as compressibility, and to better understand weird low-temperature phenomena such as superfluidity. Ultimately, Ketterle, like many physicists, hopes to discover new forms of matter that could act as superconductors at room temperature, which would revolutionize how humans use energy. For most Nobel Prize winners, the honor caps a long career. But for Ketterle, who was 44 years old when he received the prize, the creation of BECs opened a new field that he and his colleagues will be exploring for decades.

Another contender for the coldest spot is across Cambridge, in Lene Vestergaard Hau’s lab at Harvard. Her personal best is a few millionths of a degree F above absolute zero, close to Ketterle’s, which she, too, reached while creating BECs. “We make BECs every day now,” she says as we go down a stairwell to a lab packed with equipment. A billiards-table-size platform at the center of the room looks like a maze constructed of tiny oval mirrors and pencil-lead-thin laser beams. Harnessing BECs, Hau and her co-workers have done something that might seem impossible: they have slowed light to a virtual standstill.

The speed of light, as we’ve all heard, is a constant: 186,282 miles per second in a vacuum. But it is different in the real world, outside a vacuum; for instance, light not only bends but also slows ever so slightly when it passes through glass or water. Still, that’s nothing compared with what happens when Hau shines a laser beam of light into a BEC: it’s like hurling a baseball into a pillow. “First, we got the speed down to that of a bicycle,” Hau says. “Now it’s at a crawl, and we can actually stop it—keep light bottled up entirely inside the BEC, look at it, play with it and then release it when we’re ready.”

She is able to manipulate light this way because the density and the temperature of the BEC slow pulses of light down. (She recently took the experiments a step further, stopping a pulse in one BEC, converting it into electrical energy, transferring it to another BEC, then releasing it and sending it on its way again.) Hau uses BECs to discover more about the nature of light and how to use “slow light”—that is, light trapped in BECs—to improve the processing speed of computers and provide new ways to store information.

Not all ultracold research is performed using BECs. In Finland, for instance, physicist Juha Tuoriniemi magnetically manipulates the cores of rhodium atoms to reach temperatures of 180 trillionths of a degree F above absolute zero. (The Guinness record notwithstanding, many experts credit Tuoriniemi with achieving even lower temperatures than Ketterle, but that depends on whether you’re measuring a group of atoms, such as a BEC, or only parts of atoms, such as the nuclei.)

It might seem that absolute zero is worth trying to attain, but Ketterle says he knows better. “We’re not trying,” he says. “Where we are is cold enough for our experiments.” It’s simply not worth the trouble—not to mention, according to physicists’ understanding of heat and the laws of thermodynamics, impossible. “To suck out all the energy, every last bit of it, and achieve zero energy and absolute zero—that would take the age of the universe to accomplish.”
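
To put the Fahrenheit figures quoted throughout this piece on an absolute scale, here is a minimal conversion sketch using the standard relation kelvin = (°F + 459.67) × 5/9; the only inputs are the numbers already quoted above.

```python
def fahrenheit_to_kelvin(temp_f: float) -> float:
    """Convert a Fahrenheit temperature to kelvin (absolute scale)."""
    return (temp_f + 459.67) * 5.0 / 9.0

# Figures quoted in the article:
print(fahrenheit_to_kelvin(-378))      # lunar low, roughly 45 K
print(fahrenheit_to_kelvin(-455))      # deep space, roughly 2.6 K
print(fahrenheit_to_kelvin(-459.67))   # absolute zero, exactly 0 K

# Ketterle's record: 810 trillionths of a degree F above absolute zero.
print(810e-12 * 5.0 / 9.0)             # = 4.5e-10 K, i.e. about 450 picokelvin
```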

Have astronomers demonstrated that dead stars can reignite?

Astronomers using ESA’s Integral gamma-ray observatory have demonstrated beyond doubt that dead stars known as white dwarfs can reignite and explode as supernovae. The finding came after the unique signature of gamma rays from the radioactive elements created in one of these explosions was captured for the first time. (from esa.int)

The explosions in question are known as Type Ia supernovae, long suspected to be the result of a white dwarf star blowing up because of a disruptive interaction with a companion star. However, astronomers have lacked definitive evidence that a white dwarf was involved until now. The ‘smoking gun’ in this case was evidence for radioactive nuclei being created by fusion during the thermonuclear explosion of the white dwarf star.

“Integral has all the capabilities to detect the signature of this fusion, but we had to wait for more than ten years for a once-in-a-lifetime opportunity to catch a nearby supernova,” says Eugene Churazov, from the Space Research Institute (IKI) in Moscow, Russia, and the Max Planck Institute for Astrophysics in Garching, Germany.

Although Type Ia supernovae are expected to occur frequently across the Universe, they are rare occurrences in any one galaxy, with typical rates of one every few hundred years.

Integral’s chance came on 21 January 2014, when students at the University College London’s teaching observatory at Mill Hill, UK, detected a Type Ia supernova, later named SN2014J, in the nearby galaxy M82.

According to the theory of such explosions, the carbon and oxygen found in a white dwarf should be fused into radioactive nickel during the explosion. This nickel should then quickly decay into radioactive cobalt, which would itself subsequently decay, on a somewhat longer timescale, into stable iron.
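
As a rough guide to the timescales of that chain (usually discussed as nickel-56 decaying to cobalt-56 and then to stable iron-56), here is a simple exponential-decay sketch. The half-lives used are commonly quoted values, assumed here for illustration rather than taken from the article; the 15-day and 50-day epochs are the two observation times discussed below.

```python
import math

# Commonly quoted half-lives (days) for Ni-56 -> Co-56 -> Fe-56.
# These specific values are assumptions for illustration, not from the article.
HALF_LIFE_NI56 = 6.1
HALF_LIFE_CO56 = 77.2

def fraction_remaining(days: float, half_life: float) -> float:
    """Fraction of an isotope still undecayed after `days`."""
    return math.exp(-math.log(2) * days / half_life)

# 15 days after the explosion, a sizeable fraction of the nickel is still decaying:
print(f"Ni-56 left after 15 days: {fraction_remaining(15, HALF_LIFE_NI56):.1%}")

# By 50 days, almost all of it has become cobalt, which decays far more slowly:
print(f"Ni-56 left after 50 days: {fraction_remaining(50, HALF_LIFE_NI56):.1%}")
print(f"Co-56 half-life vs Ni-56: {HALF_LIFE_CO56 / HALF_LIFE_NI56:.0f}x longer")
```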

Because of its proximity – at a distance of about 11.5 million light-years from Earth, SN2014J is the closest of its type to be detected in decades – Integral stood a good chance of seeing the gamma rays produced by the decay. Within one week of the initial discovery, an observing plan to use Integral had been drawn up and approved.

Using Integral to study the aftermath of the supernova explosion, scientists looked for the signature of cobalt decay – and they found it, in exactly the quantities that the models predicted.

“The consistency of the spectra, obtained by Integral 50 days after the explosion, with that expected from cobalt decay in the expanding debris of the white dwarf was excellent,” says Churazov, who is the lead author of a paper describing this study, published in the journal Nature.

With that confirmation in hand, other astronomers could begin to look into the details of the process – in particular, how the white dwarf is detonated in the first place.

White dwarfs are inert stars that contain up to 1.4 times the mass of the Sun squeezed into a volume about the same size as the Earth. Being inert, they can’t simply blow themselves up. Instead, astronomers believe that they leech matter from a companion star, which builds up on the surface until a critical total mass is reached. At that point, the pressure in the heart of the white dwarf triggers a catastrophic thermonuclear detonation.
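
Those two figures, up to 1.4 solar masses packed into roughly an Earth-sized volume, already imply an extraordinary density. A quick sketch using standard values for the solar mass and Earth's radius (the article itself gives only the mass ratio and the size comparison):

```python
import math

SOLAR_MASS_KG  = 1.989e30   # mass of the Sun
EARTH_RADIUS_M = 6.371e6    # mean radius of the Earth

# A white dwarf near the critical mass, squeezed into an Earth-sized sphere.
mass_kg   = 1.4 * SOLAR_MASS_KG
volume_m3 = (4.0 / 3.0) * math.pi * EARTH_RADIUS_M ** 3

density = mass_kg / volume_m3
print(f"Mean density: {density:.2e} kg/m^3")                  # ~2.6e9 kg/m^3
print(f"Roughly {density * 1e-9:.1f} tonnes per cubic centimetre")
```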

Early Integral observations of SN2014J tell a somewhat different story, and have been the focus of a separate study, reported online in Science Express by Roland Diehl from the Max Planck Institute for Extraterrestrial Physics, Germany, and colleagues.

Diehl and his colleagues detected gamma rays from the decay of radioactive nickel just 15 days after the explosion. This was unexpected, because during the early phase of a Type Ia supernova, the explosion debris is thought to be so dense that the gamma rays from the nickel decay should be trapped inside.

“We were puzzled by this surprising signal, and some from the group even thought it must be wrong,” says Diehl. “We had long and ultimately very fruitful discussions about what might explain these data.”

A careful examination of the theory showed that the signal would have been hidden only if the explosion had begun in the heart of the white dwarf. Instead, Diehl and colleagues think that what they are seeing is evidence for a belt of gas from the companion star that must have built up around the equator of the white dwarf. This outer layer detonated, forming the observed nickel and then triggering the internal explosion that became the supernova.

“Regardless of the fine details of how these supernovae are triggered, Integral has proved beyond doubt that a white dwarf is involved in these stellar cataclysms,” says Erik Kuulkers, ESA’s Integral Project Scientist. “This clearly demonstrates that even after almost twelve years in operation, Integral is still playing a crucial role in unraveling some of the mysteries of the high-energy Universe.”

Mandela laid to rest in Qunu, ending a journey that transformed South Africa

With military pomp and traditional rituals, South Africa buried Nelson Mandela on Sunday, the end of an exceptional journey for the prisoner turned president who transformed the nation. (from cnn.com)

Tata Madiba

Mandela was laid to rest in his childhood village of Qunu.

Tribal leaders clad in animal skins joined dignitaries in dark suits at the grave site overlooking the rolling green hills.

As pallbearers walked toward the site after a funeral ceremony, helicopters whizzed past dangling the national flag. Cannons fired a 21-gun salute, its echoes ringing over the quiet village.

Mandela’s widow, Graca Machel, dabbed her eyes with a handkerchief as she watched the proceedings.

“Yours was truly a long walk to freedom. Now you have achieved the ultimate freedom in the bosom of God, your maker,” an officiator at the grave site said.

Military pallbearers gently removed the South African flag that draped the coffin and handed it to President Jacob Zuma, who gave it to Mandela’s family.

At the request of the family, the lowering of the casket was closed to the media.

Sony PS4 dev kit FCC filing shows off extra ports, 2.75GHz max clock frequency

Sony proudly showed off its PlayStation 4 hardware for the first time at E3, and now we’re getting a peek at what developers are working with this generation thanks to the FCC. (from engadget.com)



The DUH-D1000AA prototype development kit for PS4 is listed in these documents, tested for its Bluetooth and 802.11b/g/n WiFi radios. As one would expect, the diagrams show it eschews the sleek design of the consumer model in favor of extra cooling, a shape made for rack mounts, plus extra indicator lights and ports. Also of note is a “max clock frequency” listing of 2.75GHz, and although we don’t know how fast the game system will run by default, it’s interesting to hear what all that silicon may be capable of (as a commenter points out below, that may relate to the system’s 8GB of GDDR5 RAM) while maintaining a temperature between 5 and 35 degrees Celsius. Hit the link below to check out the documents for yourself; after seeing this and the system’s controller become part of the FCC’s database, all we’re left waiting for is Mark Cerny’s baby.
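
If that 2.75GHz figure really is tied to the GDDR5 memory, as the commenter suggests, a back-of-the-envelope bandwidth estimate is straightforward. Note that treating the clock as the GDDR5 write clock and assuming a 256-bit memory bus are our assumptions for illustration, not details from the FCC filing.

```python
# Hedged sketch: treat 2.75 GHz as the GDDR5 write clock (WCK), which transfers
# data on both clock edges, and assume the widely reported 256-bit memory bus.
# Both assumptions are ours, not statements from the FCC documents.
wck_hz        = 2.75e9        # "max clock frequency" from the filing
transfers_s   = wck_hz * 2    # double data rate on WCK -> 5.5 GT/s per pin
bus_width_bit = 256           # assumed memory-bus width

bandwidth_bytes_s = transfers_s * bus_width_bit / 8
print(f"{bandwidth_bytes_s / 1e9:.0f} GB/s")   # ~176 GB/s, the figure commonly
                                               # cited for the PS4's memory bandwidth
```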

Sony PS4 development kit FCC filing pops up with extra ports