New dark matter experiments prepare to hunt the unknown

This month, three new experiments take significant steps in the hunt for dark matter, the elusive substance that appears to make up more than a quarter of the universe but interacts only rarely with the matter that makes up our world. The experiments — the Axion Dark Matter eXperiment, the LZ Dark Matter Experiment, and the Super Cryogenic Dark Matter Search at the underground science laboratory known as SNOLAB — learned in July that each would receive much-needed funding from the U.S. Department of Energy and the U.S. National Science Foundation. Each of these “second-generation” experiments will be at least 10 times as sensitive as today’s dark-matter detectors, increasing the likelihood that they will catch the small, rare interactions between dark matter and the ordinary matter of our everyday world.

Three astrophysicists — Enectali Figueroa-Feliciano of the MIT Kavli Institute and the MIT Department of Physics; Harry Nelson of the University of California at Santa Barbara; and Gray Rybka of the University of Washington — recently discussed preparations for the newly funded dark-matter experiments, and the likelihood that one of them will strike gold. As the experimental plans start to coalesce and detector equipment starts to arrive for ADMX Gen2, LZ, and SuperCDMS SNOLAB, the scientists presented their views on whether these projects will at long last discover dark matter. The interview, conducted by the Kavli Foundation, can be found on the Kavli website.

In support of the recent funding opportunities, Figueroa-Feliciano, Nelson, and Rybka will also answer questions about the next generation of dark matter experiments in a live Google Hangout on Nov. 20 from 12:00-12:30 p.m. Members of the public may submit questions ahead of and during the webcast by emailing info@kavlifoundation.org or by using the hashtag #KavliLive on Twitter or Google Plus.

By MIT Kavli Institute for Astrophysics and Space Research

Image of the Day: Diving at McMurdo Sound

Full Text:

A diver ascends toward the underside of the sea ice at the Cape Evans Wall, a popular dive site for scientists in McMurdo Sound, Antarctica. McMurdo Sound was discovered by Captain James Clark Ross in February 1841 and named for Lt. Archibald McMurdo of HMS Terror. McMurdo Sound and Ross Island were the gateway for the early Antarctic explorations of Robert Scott and Ernest Shackleton. The nearby McMurdo Station is the largest Antarctic station. Established in December 1955, the station is the logistics hub of the U.S. Antarctic Program, with a harbor, landing strips on sea ice and shelf ice, and a helicopter pad. Its 85 or so buildings range in size from a small radio shack to large, three-story structures.

Image credit: Robert Robbins, National Science Foundation

Hewlett Foundation funds new MIT initiative on cybersecurity policy

MIT has received $15 million in funding from the William and Flora Hewlett Foundation to establish an initiative aimed at laying the foundations for a smart, sustainable cybersecurity policy to deal with the growing cyber threats faced by governments, businesses, and individuals.

The MIT Cybersecurity Policy Initiative (CPI) is one of three new academic initiatives to receive a total of $45 million in support through the Hewlett Foundation’s Cyber Initiative. Simultaneous funding to MIT, Stanford University, and the University of California at Berkeley is intended to jump-start a new field of cyber policy research. The idea is to generate a robust “marketplace of ideas” about how best to enhance the trustworthiness of computer systems while respecting individual privacy and free expression rights, encouraging innovation, and supporting the broader public interest.

With the new awards, the Hewlett Foundation has now allocated $65 million over the next five years to strengthening cybersecurity, the largest-ever private commitment to this nascent field. “Choices we are making today about Internet governance and security have profound implications for the future. To make those choices well, it is imperative that they be made with a sense of what lies ahead and, still more important, of where we want to go,” says Larry Kramer, president of the Hewlett Foundation. “We view these grants as providing seed capital to begin generating thoughtful options.”

“I’ve had the pleasure of working closely with Larry Kramer throughout this process. His dedication and the Hewlett Foundation’s remarkable generosity provide an opportunity for MIT to make a meaningful and lasting impact on cybersecurity policy,” MIT President L. Rafael Reif says. “I am honored by the trust that the Foundation has placed in MIT and excited about the possibilities that lie ahead.”

Each of the three universities will take complementary approaches to addressing this challenge. MIT’s CPI will focus on establishing quantitative metrics and qualitative models to help inform policymakers. Stanford’s Cyber-X Initiative will focus on the core themes of trustworthiness and governance of networks. And UC Berkeley’s Center for Internet Security and Policy will be organized around assessing the possible range of future paths cybersecurity might take.

Interdisciplinary approach

The Institute-wide CPI will bring together scholars from three key disciplinary pillars: engineering, social science, and management. Engineering is vital to understanding the architectural dynamics of the digital systems in which risk occurs. Social science can help explain institutional behavior and frame policy solutions, while management scholars offer insight on practical approaches to institutionalize best practices in operations.

MIT has a strong record of applying interdisciplinary approaches to large-scale problems from energy to cancer. For example, the MIT Energy Initiative has brought together faculty from across campus — including the social sciences — to conduct energy studies designed to inform future energy options and research. These studies include technology policy reports focused on nuclear power, coal, natural gas, and the smart electric grid.

“We’re very good at understanding the system dynamics on the one hand, then translating that understanding into concrete insights and recommendations for policymakers. And we’ll bring that expertise to the understanding of connected digital systems and cybersecurity. That’s our unique contribution to this challenge,” says Daniel Weitzner, the principal investigator for the CPI and a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Developing a more formal understanding of the security behavior of large-scale systems is a crucial foundation for sound public policy. As an analogy, Weitzner says, imagine trying to shape environmental policy without any way of measuring carbon levels in the atmosphere and no science to assess the cost or effectiveness of carbon mitigation tools. “This is the state of cybersecurity policy today: growing urgency, but no metrics and little science,” he says.

CSAIL is home to much of the technology at the core of cybersecurity, from the RSA cryptography algorithm that protects most online financial transactions to the web standards developed through the MIT-based World Wide Web Consortium. “That gives us the ability to have our hands on the evolution of these technologies to learn about how to make them more trustworthy,” says Weitzner, who was the United States deputy chief technology officer for Internet policy in the White House from 2011 to 2012, while on leave from his longtime position at MIT.

First steps

In pioneering a new field of study, CPI’s first challenge is to identify key research questions, select appropriate methodologies to guide the work, and establish patterns of cross-disciplinary collaboration. Research challenges include:

  • How policymakers should address security risks to personal health information;
  • How financial institutions can reduce risk by sharing threat intelligence;
  • How to develop cybersecurity policy frameworks for autonomous vehicles such as drones and self-driving cars; and
  • How to achieve regional and even global agreements on both privacy and security norms in online environments.

To address these issues, CPI will not only bring to bear different disciplines from across MIT — from computer science to management to political science — but also engage with stakeholders outside the Institute, including government, industry, and civil society organizations. “We want to understand their challenges and work with them on formulating solutions,” Weitzner says.

In addition to research, a long-run contribution of the CPI will be to create a pipeline of students to serve as the next generation of leaders working at the intersection of technology and public policy.

The mission of the William and Flora Hewlett Foundation is to “help people build measurably better lives.” The Foundation concentrates its resources on activities in education, the environment, global development and population, performing arts, and philanthropy, as well as grants to support disadvantaged communities in the San Francisco Bay Area.

The Foundation was established by the late William Hewlett with his wife, Flora Lamson Hewlett, and their eldest son, Walter B. Hewlett. William Hewlett, who earned an SM degree in electrical engineering from MIT in 1936, was co-founder, with David Packard, of the Hewlett-Packard Company, a multinational information technology company.

By Resource Development

Image of the Day: The secret of a snake’s slither

Full Text:

The ability of a snake to redistribute its weight can be seen using force visualizations with photoelastic gelatin. When polarized light is shone through this material and then viewed through cross-polarizing film, forces applied by the snake become visible; bright regions indicate high force. While the snake undoubtedly sticks to the gelatin as it tries to move across it, the images make clear that it tends to lift the peaks and troughs of its body. Snake locomotion may seem simple compared to walking or galloping, but in reality it’s no easy task to move without legs. Previous research had assumed that snakes move by pushing off of rocks and debris around them. Overlapping belly scales provide friction with the ground that gives snakes a preferred direction of motion, like the motion of wheels or ice skates. And like wheels and ice skates, sliding forward for snakes takes less work than sliding sideways. In addition, snakes aren’t lying completely flat against the ground as they slither. They redistribute their weight as they move, concentrating it in areas where their bodies can get the most friction with the ground and therefore maximize thrust.

Image credit: ©Grace Pryor, Mike Shelley and David Hu, Applied Mathematics Laboratory, New York University, and Department of Mechanical Engineering, Georgia Institute of Technology

Image of the Day: A piece of the quantum puzzle

Full Text:

The vast computational power of quantum circuits is poised to revolutionize research in fields ranging from artificial intelligence to chemistry. A team of scientists demonstrates the potential of these systems to study topological properties of matter. Topology, despite its abstract mathematical constructs, is at the foundation of a wide range of physical phenomena in modern science. This image is a top-down view of the gmon qubit chip (0.6 cm x 0.6 cm) connected to microwave-frequency control lines (copper) with thin wire bonds.

Image credit: Michael T. Fang, Martinis Group, UC Santa Barbara

Motion-induced quicksand

From a mechanical perspective, granular materials are stuck between a rock and a fluid place, with behavior resembling neither a solid nor a liquid. Think of sand through an hourglass: As grains funnel through, they appear to flow like water, but once deposited, they form a relatively stable mound, much like a solid.

Ken Kamrin, an assistant professor of mechanical engineering at MIT, studies granular materials, using mathematical models to explain their often-peculiar behavior. Now Kamrin has applied a recent granular model, developed by his group, and shown that it predicts a bizarre phenomenon called “motion-induced quicksand” — a scenario in which the movement of sand in one location changes the character of sand at a distance.

“The moment you start moving sand, it acts like fluid far away,” Kamrin says. “So, for example, if you’re walking in the desert and there’s a sand dune landslide far away, you will start to sink, very slowly. It’s very wacky behavior.”

Researchers have observed this effect in a number of configurations in the lab, including in what’s called an “annular Couette cell” — a geometry resembling the bowl of a food processor, with a rotating ring in its base. In experiments, researchers have filled a Couette cell with sand, and attempted to push a rod horizontally through the sand.

In a stationary Couette cell, the rod will not budge without a significant application of force. If, however, the cell’s inner ring is rotating, the rod will move through the sand with even the slightest push — even where the sand doesn’t appear to be moving.

“It looks like the mechanical behavior of the sand itself has changed because something was moving far away,” Kamrin says. “How the sand responds to stress has changed entirely.”

While others have observed this effect in experiments, until now there has been no model that predicts such behavior.

In a paper published in the journal Physical Review Letters, Kamrin and his former postdoc David Henann, now an assistant professor at Brown University, applied their existing model of granular flow to the problem of motion-induced quicksand, replicating the Couette cell geometry.

Spinning the turntable at the bottom of the bucket “liquefies” the entire granular assembly, even the material very far from it, converting a granular solid (a material that has no trouble supporting the weight of the ball) into a granular fluid in which any object denser than the granular pile will sink. The ball acts as a force probe, showing that the response of the grains has switched from solid to fluid. Graphic: Martin van Hecke/Leiden

Modeling spin and creep

Kamrin and Henann originally devised the mathematical model to predict scenarios of primary flow, such as the flow field for sand flowing through a chute, or a circular trough. The researchers weren’t sure if the model would also apply to secondary rheology, where motion at a primary location affects movement at a secondary, removed region.  

Last summer, Kamrin paid a visit to researchers in France who had carried out earlier experiments on secondary rheology. After some casual discussion, he boarded a train back to his hotel, during which he recalls “having a moment where I thought, ‘I think our model could work.’”

He and Henann ran their model on several configurations of secondary rheology, including the Couette cell, and were able to reproduce the results from previous experiments. In particular, the team observed a directly proportional relationship between the speed of the rotating inner ring and the speed, or “creep,” of the rod through the sand: For example, if a constant force is applied to the probe, then spinning the inner ring twice as fast will cause the probe to creep twice as fast — a key observation in laboratory studies.

The model is based on the effects of neighboring grains. Where most models would simulate the flow of granular material on a grain-by-grain basis — a computationally laborious task — Kamrin’s continuum model represents the average behavior of a small cube of grains, and builds into the model effects from neighboring cubes. The result is an accurate, and computationally efficient, prediction of granular motion and stress.
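For readers curious how a coarse-grained description with neighbor coupling can transmit fluidization far from the driven region, here is a minimal one-dimensional sketch in Python. It is only an illustration of the neighboring-cube idea described above, not Kamrin and Henann’s published model; the grid, parameters, and the “fluidity” field it relaxes are all invented for the example.

```python
import numpy as np

# Toy illustration (not the published model): a 1D "fluidity" field g on a row
# of coarse-grained cells. Each step, g relaxes toward a local value g_loc
# (zero where the material is undriven and solid-like) while also feeling its
# neighbors through a diffusion term, so driving one end raises g far away.

n_cells = 100              # hypothetical row of "cubes" of grains
xi = 5.0                   # cooperativity length, in cell widths (made-up value)
dt = 0.01                  # time step, small enough for the explicit update to be stable
g = np.zeros(n_cells)      # fluidity: 0 means fully jammed, solid-like
g_loc = np.zeros(n_cells)  # local fluidity; nonzero only where the material is driven

g_loc[0] = 1.0             # drive the material at the left wall (e.g. a rotating ring)

for _ in range(100_000):   # relax toward steady state
    lap = np.zeros(n_cells)
    lap[1:-1] = g[2:] - 2.0 * g[1:-1] + g[:-2]   # discrete Laplacian: neighbor coupling
    g += dt * (xi**2 * lap - (g - g_loc))        # relax toward g_loc plus neighbor influence

# Far from the driven wall the fluidity decays but stays nonzero, so a gently
# pushed probe there would still creep: the flavor of "motion-induced quicksand."
print(g[0], g[25], g[75])
```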

Taking the stick out of mud

The mathematical model appears to agree with a general mechanism that researchers have proposed for granular flow, termed a “force chain network.” According to this picture, tiny forces between individual grains link up into a network that spans the whole material. Any perturbation, or movement in the material, can ripple through the network, causing forces between particles to “flicker,” as Kamrin puts it. Such flickering may not be strong enough to move particles, but it may weaken bonds between grains, allowing objects to move through the material as if it were liquid.

“Because particles at the wall are connected to particles far away through the force chain network, by jiggling around over here, you’re making the forces fluctuate through the material,” Kamrin says. “That’s the picture. But there wasn’t really a general flow model that would reflect this.”

Such forces might partially explain the behavior of quicksand, Kamrin says: While quicksand — a soupy mix of sand and water — may look like a solid, the water in it essentially lubricates the frictional contacts between grains such that when someone steps in it, they sink. In the case of dry granular media, it’s perturbations through the force chain network, not water, that are in essence lubricating the contacts between grains.

“It’s sort of similar, it’s just a different source for what causes the sand to feel lubricated,” Kamrin says.

Kamrin and Henann are now finding ways to package their model into software “so that anybody can download it and predict granular flow.”

“These phenomena are sort of the sticks in the mud that have made granular media an open problem,” Kamrin says. “They’ve made the flow of grains distinct from almost everything we’re used to, like standard solids or regular liquids, because most of those materials don’t have these weird effects.”

This research was funded by the National Science Foundation.

By Jennifer Chu | MIT News Office

Image of the Day: AzTEC-3

Full Text:

Nestled among a triplet of young galaxies more than 12.5 billion light-years away is a cosmic powerhouse: A galaxy that is producing stars nearly 1,000 times faster than our own Milky Way. This energetic starburst galaxy, known as AzTEC-3, together with its gang of calmer galaxies, may represent the best evidence yet that large galaxies grow from the merger of smaller ones in the early universe, a process known as hierarchical merging. The picture above is an artist’s impression of this protocluster observed by the Atacama Large Millimeter/submillimeter Array (ALMA). It shows the central starburst galaxy AzTEC-3 along with its labeled cohorts of smaller, less active galaxies. New ALMA observations suggest that AzTEC-3 recently merged with another young galaxy and that the whole system represents the first steps toward forming a galaxy cluster.

Image credit: B. Saxton (NRAO/AUI/NSF)

Image of the Day: Philae touches down on the surface of a comet

Full Text:

One of the first images captured by the space probe Philae, this scene presents a dramatic view across the body of the large lobe of Comet 67P/Churyumov–Gerasimenko. Along the horizon, a relatively broad, raised portion of material appears to be abruptly truncated in the top left. Zooming into either side of the inner portion of the wall suggests the presence of slightly brighter material – perhaps more recently exposed than other sections. It is the first time a soft landing has been achieved on a comet.

Image credit: European Space Agency – ESA

Q&A: Christopher Knittel on the EPA’s greenhouse gas plan

With cap-and-trade legislation on greenhouse-gas emissions having stalled in Congress in 2010, the Obama administration has taken a different approach to climate policy: It has used the mandate of the Environmental Protection Agency (EPA) to propose a policy limiting power-plant emissions, since electricity generation produces about 40 percent of U.S. greenhouse gas emissions. (The administration also announced a bilateral agreement with China this week, which sets overall emissions-reductions targets.)

The EPA’s initial proposal is now under public review, before the agency issues a final rule in 2015. Christopher Knittel, the William Barton Rogers Professor of Energy Economics at the MIT Sloan School of Management, is one of 13 economists who co-authored an article about the policy in the journal Science this week. While the plan offers potential benefits, the economists assert, some of its details might limit the policy’s effectiveness. MIT News talked with Knittel about the issue.

Q. How is the EPA’s policy for power plants intended to work?

A. The Clean Power Plan calls for different emissions reductions depending on the state. This state-specific formula has four “buckets”: efficiency increases at the power plant; shifting from coal to natural gas; increases in generation from low-carbon renewables such as wind; and increases in energy efficiency within the state. So they applied these four things and asked what changes were “adequately demonstrated” to generate state-specific required reductions.

Q. The Science piece emphasizes that the EPA’s plan uses a ratio-based means of limiting emissions: the amount of greenhouse gases emitted divided by the amount of electricity generated. So a state could add renewable energy and lower its ratio, but not reduce total emissions. What are the advantages and disadvantages of doing this?

A. The targets are an emissions rate: tons of CO2 [emitted] per megawatt-hour of electricity generation. Then it’s really up to the states to determine how they’re going to achieve the reductions in this rate. So one strategy is to increase total electricity generated. This compliance strategy, unfortunately, is what makes rate-based regulation economically inefficient.

The states also have the option to convert that rate-based ratio target into what the EPA is calling a mass-based target, total tons of greenhouse-gas emissions. This would effectively imply the state is going to adopt a cap-and-trade program to reach its requirements.
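As a purely numerical illustration of the difference (the figures below are invented, not taken from the EPA plan or the Science article), this short Python sketch shows how adding zero-carbon generation can bring a state’s emissions rate under a target without lowering its total emissions, which a mass-based cap would require:

```python
# Hypothetical numbers for illustration only.
emissions_tons = 50_000_000   # total CO2 from fossil generation (tons/year)
fossil_mwh = 60_000_000       # fossil generation (MWh/year)
target_rate = 0.70            # hypothetical target, tons CO2 per MWh

rate_before = emissions_tons / fossil_mwh
print(f"rate before: {rate_before:.3f} tons/MWh")   # ~0.833, above the target

# Rate-based compliance by adding zero-carbon generation: the denominator
# grows and the rate falls, but total emissions are unchanged.
new_renewable_mwh = 12_000_000
rate_after = emissions_tons / (fossil_mwh + new_renewable_mwh)
print(f"rate after:  {rate_after:.3f} tons/MWh")    # ~0.694, now under the target
print(f"total emissions unchanged: {emissions_tons:,} tons")

# A mass-based (cap-and-trade) approach would instead cap emissions_tons
# directly, which is why economists regard it as the more efficient design.
```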

In current work, we — scholars Jim Bushnell, Stephen Holland, Jonathan Hughes, and I — are investigating the incentives states have to convert their rate-based mandate into a mass-based mandate. Unfortunately, we are finding that states rarely want to [use a mass-based target], which is a pity, because mass-based regulation is the most efficient regulation from an economist’s perspective. Holland, Hughes, and I have done work in the transportation sector showing that when you regulate on a rate basis, as opposed to a mass basis, it is at least three times more costly to society — often more than five times more costly.

Q. Why did the EPA approach it this way?

A. I can only speculate as to why the EPA chose to define the regulation as a rate instead of total greenhouse gas emissions. Regulating a rate is often cheaper from the firm’s perspective, even though it is economically inefficient. Why the EPA chose to define things at the state level is clearer: The Clean Air Act … is written in such a way as to leave it up to the states.

But if everyone’s doing their own rate- or mass-based standard, then you don’t take advantage of potentially a large efficiency benefit from trading compliance across states. That is, it might be cheaper for one state to increase its reductions, allowing another state to abate less.

The ideal regulatory model would put everyone under one giant mass-based standard, one big cap-and-trade market. Even if every state runs its own cap-and-trade market, that’s unlikely to lead to the efficient outcome. It might be cheaper for California or Montana or Oregon to reduce their greenhouse-gas emissions, but as soon as they meet their standard, they’re going to stop.

Q. The Science article says that certifying efficiency-based gains is a crucial factor. Could you explain this?

A. Given how the regulation treats efficiency, it puts front and center the importance of understanding the real-world reduction in energy consumption that comes from efficiency investments. Let’s say I reduce electricity consumption by 100 megawatt-hours through increasing efficiency in buildings. Within the [EPA’s] policy, that reduction is treated as if I’m generating 100 megawatt-hours from a zero-carbon technology. So it increases the denominator in the ratio [of greenhouse gases emitted to electricity generated]. One concern, though, is that the actual returns from energy-efficiency investments often aren’t as large as the predicted returns. And that can be because of rebound [the phenomenon by which better energy efficiency leads people to consume more energy], which is a hot topic now, or other behavioral changes.
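To make the measurement concern concrete, here is a second Python sketch (again with invented numbers) that treats a program’s predicted efficiency savings as zero-carbon megawatt-hours in the denominator and compares that with a case where rebound erodes part of the savings:

```python
# Hypothetical illustration of why verifying efficiency savings matters.
emissions_tons = 50_000_000        # total CO2 from fossil generation (tons/year)
fossil_mwh = 60_000_000            # fossil generation (MWh/year)

predicted_savings_mwh = 5_000_000  # engineering estimate of efficiency savings
rebound_fraction = 0.4             # assumed share of the savings lost to rebound

# Under the rate formula, claimed savings count as zero-carbon generation in
# the denominator, whether or not they fully materialize.
rate_claimed = emissions_tons / (fossil_mwh + predicted_savings_mwh)

# If rebound erodes 40 percent of the savings, the real-world reduction is smaller.
actual_savings_mwh = predicted_savings_mwh * (1 - rebound_fraction)
rate_actual = emissions_tons / (fossil_mwh + actual_savings_mwh)

print(f"rate with claimed savings: {rate_claimed:.3f} tons/MWh")  # looks better on paper
print(f"rate with actual savings:  {rate_actual:.3f} tons/MWh")   # what behavior delivers
# The gap between the two is the argument for field experiments and federal oversight.
```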

Behavioral changes can make those efficiency gains larger or smaller, so getting the right number is very important. I’ve heard stories of people who get all-new windows, and the old windows used to let in air, but now they think the house is stuffy, so they keep their windows cracked. We should be doing more field experiments, more randomized controlled trials, to measure the actual returns to energy efficiency.

Another related concern is that it might be left up to the states to tell the EPA what the reduction was from these energy-efficiency investments. And the state might not have any incentive at all to measure them correctly. So there has to be an increase in oversight, and it likely has to be federal oversight.

Q. While you clearly have concerns about the efficacy of the policy, isn’t this one measure among others, intended to lessen the magnitude of the climate crisis?

A. For many of us, the real potential benefit of the clean power rule is that it will change the dynamic in Paris in the [forthcoming international climate] negotiations. For a long time the U.S. could say it was making some improvements in transportation, but it really wasn’t doing anything about electricity for climate change. My view is that there are a lot of countries out there that aren’t going to do anything unless the U.S. does. This might bring some of those countries on board.

By Peter Dizikes | MIT News Office
