Scientists working jointly with Cassini's composite infrared spectrometer and its high-resolution imaging camera have constructed the highest-resolution heat intensity maps yet of the hottest part of a region of long fissures spraying water vapor and icy particles from Enceladus. These fissures have been nicknamed "tiger stripes." Additional high-resolution spectrometer maps of one end of the tiger stripes, Alexandria Sulcus and Cairo Sulcus, reveal never-before-seen warm fractures that branch off like split ends from the main tiger stripe trenches. They also show an intriguing warm spot isolated from other active surface fissures.
"The ends of the tiger stripes may be the places where the activity is just getting started, or is winding down, so the complex patterns of heat we see there may give us clues to the life cycle of tiger stripes," said John Spencer, a Cassini team scientist based at Southwest Research Institute in Boulder, Colo.
The images and maps come from the Aug. 13, 2010, Enceladus flyby, Cassini's last remote sensing flyby of the moon until 2015. The geometry of the many flybys between now and 2015 will not allow Cassini to do thermal scans like these, because the spacecraft will be too close to scan the surface and will not view the south pole. This Enceladus flyby, the 11th of Cassini's tour, also gave Cassini its last look at any part of the active south polar region in sunlight.
The highest-resolution spectrometer scan examined the hottest part of the entire tiger stripe system, part of the fracture called Damascus Sulcus. Scientists used the scan to measure fracture temperatures up to 190 Kelvin (minus 120 degrees Fahrenheit). This temperature appears slightly higher than previously measured temperatures at Damascus, which were around 170 Kelvin (minus 150 degrees Fahrenheit).
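The temperature figures above can be cross-checked with a standard Kelvin-to-Fahrenheit conversion; a minimal Python sketch (the function name is purely illustrative, and the input values are the ones quoted in the text):

```python
def kelvin_to_fahrenheit(temp_k):
    """Convert a temperature in Kelvin to degrees Fahrenheit."""
    return (temp_k - 273.15) * 9.0 / 5.0 + 32.0

# Temperatures quoted in the article, which rounds to the nearest 10 degrees F
print(round(kelvin_to_fahrenheit(190)))  # -118, reported as about minus 120 F
print(round(kelvin_to_fahrenheit(170)))  # -154, reported as about minus 150 F
```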
Spencer said he isn't sure if this tiger stripe is just more active than it was the last time Cassini's spectrometer scanned it, in 2008, or if the hottest part of the tiger stripe is so narrow that previous scans averaged its temperature out over a larger area. In any case, the new scan had such good resolution, showing details as small as 800 meters (2,600 feet), that scientists could see for the first time warm material flanking the central trench of Damascus, cooling off quickly away from the trench. The Damascus thermal scan also shows large variations in heat output within a few kilometers along the length of the fracture. This unprecedented resolution will help scientists understand how the tiger stripes deliver heat to the surface of Enceladus.
Cassini acquired the thermal map of Damascus simultaneously with a visible-light image where the tiger stripe is lit by sunlight reflecting off Saturn. The visible-light and thermal data were merged to help scientists understand the relationships between physical heat processes and surface geology.
"Our high-resolution images show that this section of Damascus Sulcus is among the most structurally complex and tectonically dynamic of the tiger stripes," said imaging science team associate Paul Helfenstein of Cornell University, Ithaca, N.Y. Some details in the appearance of the landforms, such as a peculiar pattern of curving striations along the flanks of Damascus, had not previously been noticed in ordinary sunlit images.
The day after the Enceladus flyby, Cassini swooped by the icy moon Tethys, collecting images that helped fill in gaps in the Tethys global map. Cassini's new views of the heavily cratered moon will help scientists understand how tectonic forces, impact cratering, and perhaps even ancient resurfacing events have shaped the moon's appearance.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. The composite infrared spectrometer team is based at NASA's Goddard Space Flight Center, Greenbelt, Md., where the instrument was built.
-December 2010-
Monday, December 13, 2010
NASA's Spitzer Reveals First Carbon-Rich Planet
Astronomers have discovered that a huge, searing-hot planet orbiting another star is loaded with an unusual amount of carbon. The planet, a gas giant named WASP-12b, is the first carbon-rich world ever observed. The discovery was made using NASA's Spitzer Space Telescope, along with previously published ground-based observations.
"This planet reveals the astounding diversity of worlds out there," said Nikku Madhusudhan of the Massachusetts Institute of Technology, Cambridge, lead author of a report in the Dec. 9 issue of the journal Nature. "Carbon-rich planets would be exotic in every way -- formation, interiors and atmospheres."
It's possible that WASP-12b might harbor graphite, diamond, or even a more exotic form of carbon in its interior, beneath its gaseous layers. Astronomers don't currently have the technology to observe the cores of exoplanets, or planets orbiting stars beyond our sun, but their theories hint at these intriguing possibilities.
The research also supports theories that carbon-rich rocky planets much less massive than WASP-12b could exist around other stars. Our Earth has rocks like quartz and feldspar, which are made of silicon and oxygen plus other elements. A carbon-rich rocky planet could be a very different place.
"A carbon-dominated terrestrial world could have lots of pure carbon rocks, like diamond or graphite, as well as carbon compounds like tar," said Joseph Harrington of the University of Central Florida, in Orlando, who is the principal investigator of the research.
Carbon is a common component of planetary systems and a key ingredient of life on Earth. Astronomers often measure carbon-to-oxygen ratios to get an idea of a star's chemistry. Our sun has a carbon-to-oxygen ratio of about one to two, which means it has about half as much carbon as oxygen. None of the planets in our solar system is known to have more carbon than oxygen, or a ratio of one or greater. However, this ratio is unknown for Jupiter, Saturn, Uranus, and Neptune. Unlike WASP-12b, these planets harbor water -- the main oxygen carrier -- deep inside their atmospheres, making it hard to detect.
WASP-12b is the first planet ever to have its carbon-to-oxygen ratio measured at greater than one (the actual ratio is most likely between one and two). This means the planet has excess carbon, some of which is in the form of atmospheric methane.
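As a simple illustration of the carbon-to-oxygen bookkeeping described above, the sketch below computes the ratio from relative atomic abundances; the helper name is hypothetical and the WASP-12b value is an illustrative number within the range quoted in the article:

```python
def carbon_to_oxygen_ratio(carbon_atoms, oxygen_atoms):
    """Return the carbon-to-oxygen ratio from relative atomic abundances."""
    return carbon_atoms / oxygen_atoms

# The sun: roughly half as much carbon as oxygen, i.e. a ratio near 0.5
print(carbon_to_oxygen_ratio(1.0, 2.0))  # 0.5

# WASP-12b: measured ratio greater than one, most likely between one and two
wasp_12b_ratio = 1.2  # illustrative value within the quoted range
print("carbon-rich" if wasp_12b_ratio > 1.0 else "oxygen-rich")  # carbon-rich
```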
"When the relative amount of carbon gets that high, it's as though you flip a switch, and everything changes," said Marc Kuchner, an astronomer at NASA Goddard Space Flight Center, Greenbelt, Md., who helped develop the theory of carbon-rich rocky planets but is not associated with the study. "If something like this had happened on Earth, your expensive engagement ring would be made of glass, which would be rare, and the mountains would all be made of diamonds."
Madhusudhan, Harrington and colleagues used Spitzer to observe WASP-12b as it slipped behind its star, in a technique known as secondary eclipse, which was pioneered for exoplanets by Spitzer. These data were combined with previously published observations taken from the ground with the Canada-France-Hawaii Telescope at Mauna Kea, Hawaii. Madhusudhan used the data to conduct a detailed atmospheric analysis, revealing chemicals such as methane and carbon monoxide in the planet's atmosphere.
WASP-12b derives its name from the consortium that found it, the Wide Angle Search for Planets. It is 1.4 times as massive as Jupiter and located roughly 1,200 light-years away from Earth. This blistering world whips around its star in a little over a day, with one side always facing the star. It is so close to its star that the star's gravity stretches the planet into an egg-like shape. What's more, the star's gravity is siphoning mass off the planet into a thin disk that orbits around with it.
The Spitzer data also reveal more information about WASP-12b's temperature. The world was already known to be one of the hottest exoplanets found so far; the new observations indicate that the side that faces the star is 2,600 Kelvin, or 4,200 degrees Fahrenheit. That's more than hot enough to melt steel.
Other authors of the paper are Kevin Stevenson, Sarah Nymeyer, Christopher Campo, Jasmina Blecic, Ryan Hardy, Nate Lust, Christopher Britt and William Bowman of University of Central Florida, Orlando; Peter Wheatley of the University of Warwick, United Kingdom; Drake Deming of NASA Goddard Space Flight Center, Greenbelt, Md.; David Anderson, Coel Hellier and Pierre Maxted of Keele University, United Kingdom; Andrew Collier-Cameron of the University of St. Andrews, United Kingdom; Leslie Hebb of Vanderbilt University, Nashville, Tenn.; Don Pollacco of Queen's University, United Kingdom; and Richard West of the University of Leicester, United Kingdom.
The Spitzer observations were made before it ran out of its liquid coolant in May 2009 and began its warm mission. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology, also in Pasadena. Caltech manages JPL for NASA.
-December 2010-
"This planet reveals the astounding diversity of worlds out there," said Nikku Madhusudhan of the Massachusetts Institute of Technology, Cambridge, lead author of a report in the Dec. 9 issue of the journal Nature. "Carbon-rich planets would be exotic in every way -- formation, interiors and atmospheres."
It's possible that WASP-12b might harbor graphite, diamond, or even a more exotic form of carbon in its interior, beneath its gaseous layers. Astronomers don't currently have the technology to observe the cores of exoplanets, or planets orbiting stars beyond our sun, but their theories hint at these intriguing possibilities.
The research also supports theories that carbon-rich rocky planets much less massive than WASP-12b could exist around other stars. Our Earth has rocks like quartz and feldspar, which are made of silicon and oxygen plus other elements. A carbon-rich rocky planet could be a very different place.
"A carbon-dominated terrestrial world could have lots of pure carbon rocks, like diamond or graphite, as well as carbon compounds like tar," said Joseph Harrington of the University of Central Florida, in Orlando, who is the principal investigator of the research.
Carbon is a common component of planetary systems and a key ingredient of life on Earth. Astronomers often measure carbon-to-oxygen ratios to get an idea of a star's chemistry. Our sun has a carbon-to-oxygen ratio of about one to two, which means it has about half as much carbon as oxygen. None of the planets in our solar system is known to have more carbon than oxygen, or a ratio of one or greater. However, this ratio is unknown for Jupiter, Saturn, Uranus, and Neptune. Unlike WASP-12b, these planets harbor water -- the main oxygen carrier -- deep inside their atmospheres, making it hard to detect.
WASP-12b is the first planet ever to have its carbon-to-oxygen ratio measured at greater than one (the actual ratio is most likely between one and two). This means the planet has excess carbon, some of which is in the form of atmospheric methane.
"When the relative amount of carbon gets that high, it's as though you flip a switch, and everything changes," said Marc Kuchner, an astronomer at NASA Goddard Space Flight Center, Greenbelt, Md., who helped develop the theory of carbon-rich rocky planets but is not associated with the study. "If something like this had happened on Earth, your expensive engagement ring would be made of glass, which would be rare, and the mountains would all be made of diamonds."
Madhusudhan, Harrington and colleagues used Spitzer to observe WASP-12b as it slipped behind its star, in a technique known as secondary eclipse, which was pioneered for exoplanets by Spitzer. These data were combined with previously published observations taken from the ground with the Canada-France-Hawaii Telescope at Mauna Kea, Hawaii. Madhusudhan used the data to conduct a detailed atmospheric analysis, revealing chemicals such as methane and carbon monoxide in the planet's atmosphere.
WASP-12b derives its name from the consortium that found it, the Wide Angle Search for Planets. It is 1.4 times as massive as Jupiter and located roughly 1,200 light-years away from Earth. This blistering world whips around its star in a little over a day, with one side always facing the star. It is so close to its star that the star's gravity stretches the planet into an egg-like shape. What's more, the star's gravity is siphoning mass off the planet into a thin disk that orbits around with it.
The Spitzer data also reveal more information about WASP-12b's temperature. The world was already known to be one of the hottest exoplanets found so far; the new observations indicate that the side that faces the star is 2,600 Kelvin, or 4,200 degrees Fahrenheit. That's more than hot enough to melt steel.
Other authors of the paper are Kevin Stevenson, Sarah Nymeyer, Christopher Campo, Jasmina Blecic, Ryan Hardy, Nate Lust, Christopher Britt and William Bowman of University of Central Florida, Orlando; Peter Wheatley of the University of Warwick, United Kingdom; Drake Deming of NASA Goddard Space Flight Center, Greenbelt, Md.; David Anderson, Coel Hellier and Pierre Maxted of Keele University, United Kingdom; Andrew Collier-Cameron of the University of St. Andrews, United Kingdom; Leslie Hebb of Vanderbilt University, Nashville, Tenn.; Don Pollacco of Queen's University, United Kingdom; and Richard West of the University of Leicester, United Kingdom.
The Spitzer observations were made before it ran out of its liquid coolant in May 2009 and began its warm mission. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology, also in Pasadena. Caltech manages JPL for NASA.
-December 2010-
Aerosols: Tiny Particles, Big Impact
Take a deep breath. Even if the air looks clear, it’s nearly certain that you’ll inhale tens of millions of solid particles and liquid droplets. These ubiquitous specks of matter are known as aerosols, and they can be found in the air over oceans, deserts, mountains, forests, ice, and every ecosystem in between. They drift in Earth’s atmosphere from the stratosphere to the surface and range in size from a few nanometers—less than the width of the smallest viruses—to several tens of micrometers—about the diameter of a human hair. Despite their small size, they have major impacts on our climate and our health.
Different specialists describe the particles based on shape, size, and chemical composition. Toxicologists refer to aerosols as ultrafine, fine, or coarse matter. Regulatory agencies, as well as meteorologists, typically call them particulate matter—PM2.5 or PM10, depending on their size. In some fields of engineering, they’re called nanoparticles. The media often uses everyday terms that hint at aerosol sources, such as smoke, ash, and soot.
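The size-based labels above can be made concrete with a short sketch; the cutoff diameters follow the common ultrafine/fine/coarse and PM2.5/PM10 conventions, but exact regulatory definitions vary, so treat the thresholds as illustrative:

```python
def classify_aerosol(diameter_um):
    """Label an aerosol particle by its diameter in micrometers.

    Cutoffs follow the common ultrafine/fine/coarse and PM2.5/PM10
    conventions; exact regulatory definitions vary, so the thresholds
    here are illustrative.
    """
    if diameter_um < 0.1:
        return "ultrafine"
    elif diameter_um <= 2.5:
        return "fine (within PM2.5)"
    elif diameter_um <= 10.0:
        return "coarse (within PM10)"
    return "larger than PM10"

print(classify_aerosol(0.05))  # ultrafine -- comparable to the smallest viruses
print(classify_aerosol(1.0))   # fine (within PM2.5)
print(classify_aerosol(5.0))   # coarse (within PM10)
```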
Climatologists typically use another set of labels that speak to the chemical composition. Key aerosol groups include sulfates, organic carbon, black carbon, nitrates, mineral dust, and sea salt. In practice, many of these terms are imperfect, as aerosols often clump together to form complex mixtures. It’s common, for example, for particles of black carbon from soot or smoke to mix with nitrates and sulfates, or to coat the surfaces of dust, creating hybrid particles.
The bulk of aerosols—about 90 percent by mass—have natural origins. Volcanoes, for example, eject huge columns of ash into the air, as well as sulfur dioxide and other gases, yielding sulfates. Forest fires send partially burned organic carbon aloft. Certain plants produce gases that react with other substances in the air to yield aerosols, such as the “smoke” in the Great Smoky Mountains of the United States. Likewise in the ocean, some types of microalgae produce a sulfurous gas called dimethylsulfide that can be converted into sulfates in the atmosphere.
Sea salt and dust are two of the most abundant aerosols, as sandstorms whip small pieces of mineral dust from deserts into the atmosphere and wind-driven spray from ocean waves flings sea salt aloft. Both tend to be larger particles than their human-made counterparts.
The remaining 10 percent of aerosols are considered anthropogenic, or human-made, and they come from a variety of sources. Though less abundant than natural forms, anthropogenic aerosols can dominate the air downwind of urban and industrial areas.
Fossil fuel combustion produces large amounts of sulfur dioxide, which reacts with water vapor and other gases in the atmosphere to create sulfate aerosols. Biomass burning, a common method of clearing land and consuming farm waste, yields smoke composed mainly of organic carbon and black carbon.
Automobiles, incinerators, smelters, and power plants are prolific producers of sulfates, nitrates, black carbon, and other particles. Deforestation, overgrazing, drought, and excessive irrigation can alter the land surface, increasing the rate at which dust aerosols enter the atmosphere. Even indoors, cigarettes, cooking stoves, fireplaces, and candles are sources of aerosols.
-December 2010-
NASA Spacecraft Sees Cosmic Snow Storm During Comet Encounter
The EPOXI mission's recent encounter with comet Hartley 2 provided the first images clear enough for scientists to link jets of dust and gas with specific surface features. NASA and other scientists have begun to analyze the images.
The EPOXI spacecraft revealed a cometary snow storm created by carbon dioxide jets spewing out tons of golf-ball to basketball-sized fluffy ice particles from the peanut-shaped comet's rocky ends. At the same time, a different process was causing water vapor to escape from the comet's smooth mid-section. This information sheds new light on the nature of comets and even planets.
Scientists compared the new data to data from a comet the spacecraft previously visited that was somewhat different from Hartley 2. In 2005, the spacecraft successfully released an impactor into the path of comet Tempel 1, while observing it during a flyby.
"This is the first time we've ever seen individual chunks of ice in the cloud around a comet or jets definitively powered by carbon dioxide gas," said Michael A'Hearn, principal investigator for the spacecraft at the University of Maryland. "We looked for, but didn't see, such ice particles around comet Tempel 1."
The new findings show Hartley 2 acts differently than Tempel 1 or the three other comets with nuclei imaged by spacecraft. Carbon dioxide appears to be a key to understanding Hartley 2 and explains why the smooth and rough areas scientists saw respond differently to solar heating, and have different mechanisms by which water escapes from the comet's interior.
"When we first saw all the specks surrounding the nucleus, our mouths dropped," said Pete Schultz, EPOXI mission co-investigator at Brown University. "Stereo images reveal there are snowballs in front and behind the nucleus, making it look like a scene in one of those crystal snow globes."
Data show the smooth area of comet Hartley 2 looks and behaves like most of the surface of comet Tempel 1, with water evaporating below the surface and percolating out through the dust. However, the rough areas of Hartley 2, with carbon dioxide jets spraying out ice particles, are very different.
"The carbon dioxide jets blast out water ice from specific locations in the rough areas resulting in a cloud of ice and snow," said Jessica Sunshine, EPOXI deputy principal investigator at the University of Maryland. "Underneath the smooth middle area, water ice turns into water vapor that flows through the porous material, with the result that close to the comet in this area we see a lot of water vapor."
Engineers at NASA's Jet Propulsion Laboratory in Pasadena, Calif., have been looking for signs that ice particles peppered the spacecraft. So far they have found nine instances when particles, estimated to weigh slightly less than a snowflake, might have hit the spacecraft but did not damage it.
"The EPOXI mission spacecraft sailed through the Hartley 2's ice flurries in fine working order and continues to take images as planned of this amazing comet," said Tim Larson, EPOXI project manager at JPL.
Scientists will need more detailed analysis to determine how long this snow storm has been active, and whether the differences in activity between the middle and ends of the comet are the result of how it formed some 4.5 billion years ago or are because of more recent evolutionary effects.
EPOXI is a combination of the names for the mission's two components: the Extrasolar Planet Observations and Characterization (EPOCh), and the flyby of comet Hartley 2, called the Deep Impact Extended Investigation (DIXI).
JPL manages the EPOXI mission for the Science Mission Directorate at NASA Headquarters in Washington. The spacecraft was built for NASA by Ball Aerospace & Technologies Corp., in Boulder.
-December 2010-
NASA Probe Sees Solar Wind Decline En Route To Interstellar Space
The 33-year odyssey of NASA's Voyager 1 spacecraft has reached a distant point at the edge of our solar system where there is no outward motion of solar wind.
Now hurtling toward interstellar space some 10.8 billion miles from the sun, Voyager 1 has crossed into an area where the velocity of the hot ionized gas, or plasma, emanating directly outward from the sun has slowed to zero. Scientists suspect the solar wind has been turned sideways by the pressure from the interstellar wind in the region between stars.
The event is a major milestone in Voyager 1's passage through the heliosheath, the turbulent outer shell of the sun's sphere of influence, and the spacecraft's upcoming departure from our solar system.
"The solar wind has turned the corner," said Ed Stone, Voyager project scientist based at the California Institute of Technology in Pasadena, Calif. "Voyager 1 is getting close to interstellar space."
Our sun gives off a stream of charged particles that form a bubble known as the heliosphere around our solar system. The solar wind travels at supersonic speed until it crosses a shockwave called the termination shock. At this point, the solar wind dramatically slows down and heats up in the heliosheath.
Launched on Sept. 5, 1977, Voyager 1 crossed the termination shock in December 2004 into the heliosheath. Scientists have used data from Voyager 1's Low-Energy Charged Particle Instrument to deduce the solar wind's velocity.
When the speed of the charged particles hitting the outward face of Voyager 1 matched the spacecraft's speed, researchers knew that the net outward speed of the solar wind was zero. This occurred in June, when Voyager 1 was about 10.6 billion miles from the sun.
Because the velocities can fluctuate, scientists watched four more monthly readings before they were convinced the solar wind's outward speed actually had slowed to zero. Analysis of the data shows the velocity of the solar wind has steadily slowed at a rate of about 45,000 mph each year since August 2007, when the solar wind was speeding outward at about 130,000 mph. The outward speed has remained at zero since June.
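The quoted slowdown can be checked with back-of-the-envelope arithmetic: starting at roughly 130,000 mph in August 2007 and declining about 45,000 mph per year reaches zero in a little under three years, consistent with the June reading. A minimal sketch using only the figures given above (variable names are illustrative):

```python
# Figures quoted in the article
initial_speed_mph = 130_000     # outward solar wind speed measured in August 2007
slowdown_per_year_mph = 45_000  # average yearly decline seen by Voyager 1

years_to_zero = initial_speed_mph / slowdown_per_year_mph
print(f"about {years_to_zero:.1f} years")  # ~2.9 years after August 2007, i.e. mid-2010
```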
The results were presented at the American Geophysical Union meeting in San Francisco.
"When I realized that we were getting solid zeroes, I was amazed," said Rob Decker, a Voyager Low-Energy Charged Particle Instrument co-investigator and senior staff scientist at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md. "Here was Voyager, a spacecraft that has been a workhorse for 33 years, showing us something completely new again."
Scientists believe Voyager 1 has not crossed the heliosheath into interstellar space. Crossing into interstellar space would mean a sudden drop in the density of hot particles and an increase in the density of cold particles. Scientists are putting the data into their models of the heliosphere's structure and should be able to better estimate when Voyager 1 will reach interstellar space. Researchers currently estimate Voyager 1 will cross that frontier in about four years.
"In science, there is nothing like a reality check to shake things up, and Voyager 1 provided that with hard facts," said Tom Krimigis, principal investigator on the Low-Energy Charged Particle Instrument, who is based at the Applied Physics Laboratory and the Academy of Athens, Greece. "Once again, we face the predicament of redoing our models."
A sister spacecraft, Voyager 2, was launched on Aug. 20, 1977, and has reached a position 8.8 billion miles from the sun. Both spacecraft have been traveling along different trajectories and at different speeds. Voyager 1 is traveling faster, at a speed of about 38,000 mph, compared to Voyager 2's velocity of 35,000 mph. In the next few years, scientists expect Voyager 2 to encounter the same kind of phenomenon as Voyager 1.
The Voyagers were built by NASA's Jet Propulsion Laboratory in Pasadena, Calif., which continues to operate both spacecraft.
-December 2010-
2012: The Real Truth Q&A
Question (Q): Are there any threats to the Earth in 2012? Many Internet websites say the world will end in December 2012.
Answer (A): Nothing bad will happen to the Earth in 2012. Our planet has been getting along just fine for more than 4 billion years, and credible scientists worldwide know of no threat associated with 2012.
Q: What is the origin of the prediction that the world will end in 2012?
A: The story started with claims that Nibiru, a supposed planet discovered by the Sumerians, is headed toward Earth. This catastrophe was initially predicted for May 2003, but when nothing happened the doomsday date was moved forward to December 2012. Then these two fables were linked to the end of one of the cycles in the ancient Mayan calendar at the winter solstice in 2012 -- hence the predicted doomsday date of December 21, 2012.
Q: Does the Mayan calendar end in December 2012?
A: Just as the calendar you have on your kitchen wall does not cease to exist after December 31, the Mayan calendar does not cease to exist on December 21, 2012. This date is the end of the Mayan long-count period but then -- just as your calendar begins again on January 1 -- another long-count period begins for the Mayan calendar.
Q: Could a phenomenon occur where planets align in a way that impacts Earth?
A: There are no planetary alignments in the next few decades, Earth will not cross the galactic plane in 2012, and even if these alignments were to occur, their effects on the Earth would be negligible. Each December the Earth and sun align with the approximate center of the Milky Way Galaxy but that is an annual event of no consequence.
Q: Is there a planet or brown dwarf called Nibiru or Planet X or Eris that is approaching the Earth and threatening our planet with widespread destruction?
A: Nibiru and other stories about wayward planets are an Internet hoax. There is no factual basis for these claims. If Nibiru or Planet X were real and headed for an encounter with the Earth in 2012, astronomers would have been tracking it for at least the past decade, and it would be visible by now to the naked eye. Obviously, it does not exist. Eris is real, but it is a dwarf planet similar to Pluto that will remain in the outer solar system; the closest it can come to Earth is about 4 billion miles.
Q: What is the polar shift theory? Is it true that the Earth’s crust does a 180-degree rotation around the core in a matter of days if not hours?
A: A reversal in the rotation of Earth is impossible. There are slow movements of the continents (for example Antarctica was near the equator hundreds of millions of years ago), but that is irrelevant to claims of reversal of the rotational poles. However, many of the disaster websites pull a bait-and-switch to fool people. They claim a relationship between the rotation and the magnetic polarity of Earth, which does change irregularly, with a magnetic reversal taking place every 400,000 years on average. As far as we know, such a magnetic reversal doesn’t cause any harm to life on Earth. A magnetic reversal is very unlikely to happen in the next few millennia, anyway.
Q: Is the Earth in danger of being hit by a meteor in 2012?
A: The Earth has always been subject to impacts by comets and asteroids, although big hits are very rare. The last big impact was 65 million years ago, and that led to the extinction of the dinosaurs. Today NASA astronomers are carrying out a survey called the Spaceguard Survey to find any large near-Earth asteroids long before they hit. We have already determined that there are no threatening asteroids as large as the one that killed the dinosaurs. All this work is done openly with the discoveries posted every day on the NASA NEO Program Office website, so you can see for yourself that nothing is predicted to hit in 2012.
Q: How do NASA scientists feel about claims of pending doomsday?
A: For any claims of disaster or dramatic changes in 2012, where is the science? Where is the evidence? There is none, and for all the fictional assertions, whether they are made in books, movies, documentaries or over the Internet, we cannot change that simple fact. There is no credible evidence for any of the assertions made in support of unusual events taking place in December 2012.
Q: Is there a danger from giant solar storms predicted for 2012?
A: Solar activity has a regular cycle, with peaks approximately every 11 years. Near these activity peaks, solar flares can cause some interruption of satellite communications, although engineers are learning how to build electronics that are protected against most solar storms. But there is no special risk associated with 2012. The next solar maximum will occur in the 2012-2014 time frame and is predicted to be an average solar cycle, no different than previous cycles throughout history.
-December 2010-
'Greener' Climate Prediction Shows Plants Slow Warming
A new NASA computer modeling effort has found that additional growth of plants and trees in a world with doubled atmospheric carbon dioxide levels would create a new negative feedback – a cooling effect – in the Earth's climate system that could work to reduce future global warming.
The cooling effect would be -0.3 degrees Celsius (C) (-0.5 Fahrenheit (F)) globally and -0.6 degrees C (-1.1 F) over land, compared to simulations where the feedback was not included, said Lahouari Bounoua, of Goddard Space Flight Center, Greenbelt, Md. Bounoua is lead author on a paper detailing the results that will be published Dec. 7 in the journal Geophysical Research Letters.
Without the negative feedback included, the model found a warming of 1.94 degrees C globally when carbon dioxide was doubled.
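Combining the two figures gives a rough sense of the projected global warming once the vegetation feedback is included; a quick sketch, assuming the numbers combine additively as the comparison in the article implies:

```python
# Global figures quoted in the article
warming_without_feedback_c = 1.94  # doubled CO2, vegetation feedback excluded
cooling_from_feedback_c = 0.3      # cooling attributed to the extra plant growth

net_warming_c = warming_without_feedback_c - cooling_from_feedback_c
print(f"about {net_warming_c:.2f} degrees C of warming with the feedback included")
```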
Bounoua stressed that while the model's results showed a negative feedback, it is not a strong enough response to alter the global warming trend that is expected. In fact, the present work is an example of how, over time, scientists will create more sophisticated models that will chip away at the uncertainty range of climate change and allow more accurate projections of future climate.
"This feedback slows but does not alleviate the projected warming," Bounoua said.
To date, only some models that predict how the planet would respond to a doubling of carbon dioxide have allowed for vegetation to grow as a response to higher carbon dioxide levels and associated increases in temperatures and precipitation.
Of those that have attempted to model this feedback, this new effort differs in that it incorporates a specific response in plants to higher atmospheric carbon dioxide levels. When there is more carbon dioxide available, plants are able to use less water yet maintain previous levels of photosynthesis. The process is called "down-regulation." This more efficient use of water and nutrients has been observed in experimental studies and can ultimately lead to increased leaf growth. The ability to increase leaf growth due to changes in photosynthetic activity was also included in the model. The authors postulate that the greater leaf growth would increase evapotranspiration on a global scale and create an additional cooling effect.
"This is what is completely new," said Bounoua, referring to the incorporation of down-regulation and changed leaf growth into the model. "What we did is improve plants' physiological response in the model by including down-regulation. The end result is a stronger feedback than previously thought."
The modeling approach also investigated how stimulation of plant growth in a world with doubled carbon dioxide levels would be fueled by warmer temperatures, increased precipitation in some regions and plants' more efficient use of water due to carbon dioxide being more readily available in the atmosphere. Previous climate models have included these aspects but not down-regulation. The models without down-regulation projected little to no cooling from vegetative growth.
Scientists agree that in a world where carbon dioxide has doubled – a standard basis for many global warming modeling simulations – temperature would increase from 2 to 4.5 degrees C (3.5 to 8.0 F). (The model used in this study found warming – without incorporating the plant feedback – on the low end of this range.) The uncertainty in that range is mostly due to uncertainty about "feedbacks" – how different aspects of the Earth system will react to a warming world, and then how those changes will either amplify (positive feedback) or dampen (negative feedback) the overall warming.
An example of a positive feedback would be if warming temperatures caused forests to grow in the place of Arctic tundra. The darker surface of a forest canopy would absorb more solar radiation than the snowy tundra, which reflects more solar radiation. The greater absorption would amplify warming. The vegetative feedback modeled in this research, in which increased plant growth would exert a cooling effect, is an example of a negative feedback. The feedback quantified in this study is a result of an interaction between all these aspects: carbon dioxide enrichment, a warming and moistening climate, plants' more efficient use of water, down-regulation and the ability for leaf growth.
This new paper is one of many steps toward gradually improving overall future climate projections, a process that involves better modeling of both warming and cooling feedbacks.
"As we learn more about how these systems react, we can learn more about how the climate will change," said co-author Forrest Hall, of the University of Maryland-Baltimore County and Goddard Space Flight Center. "Each year we get better and better. It's important to get these things right just as it's important to get the track of a hurricane right. We've got to get these models right, and improve our projections, so we'll know where to most effectively concentrate mitigation efforts."
The results presented here indicate that changes in the state of vegetation may already be playing a role in the continental water, energy and carbon budgets as atmospheric carbon dioxide increases, said Piers Sellers, a co-author from NASA's Johnson Space Center, Houston, Texas.
"We're learning more and more about how our planet really works," Sellers said. "We have suspected for some time that the connection between vegetation photosynthesis and the surface energy balance could be a significant player in future climate. This study gives us an indication of the strength and sign of one of these biosphere-atmosphere feedbacks."
-December 2010-
NASA-Funded Research Discovers Life Built With Toxic Chemical
NASA-funded astrobiology research has changed the fundamental knowledge about what comprises all known life on Earth.
Researchers conducting tests in the harsh environment of Mono Lake in California have discovered the first known microorganism on Earth able to thrive and reproduce using the toxic chemical arsenic. The microorganism substitutes arsenic for phosphorus in its cell components.
"The definition of life has just expanded," said Ed Weiler, NASA's associate administrator for the Science Mission Directorate at the agency's Headquarters in Washington. "As we pursue our efforts to seek signs of life in the solar system, we have to think more broadly, more diversely and consider life as we do not know it."
This finding of an alternative biochemistry makeup will alter biology textbooks and expand the scope of the search for life beyond Earth. The research is published in this week's edition of Science Express.
Carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur are the six basic building blocks of all known forms of life on Earth. Phosphorus is part of the chemical backbone of DNA and RNA, the structures that carry genetic instructions for life, and is considered an essential element for all living cells.
Phosphorus is a central component of the energy-carrying molecule in all cells (adenosine triphosphate) and also the phospholipids that form all cell membranes. Arsenic, which is chemically similar to phosphorus, is poisonous for most life on Earth. Arsenic disrupts metabolic pathways because chemically it behaves similarly to phosphate.
"We know that some microbes can breathe arsenic, but what we've found is a microbe doing something new -- building parts of itself out of arsenic," said Felisa Wolfe-Simon, a NASA Astrobiology Research Fellow in residence at the U.S. Geological Survey in Menlo Park, Calif., and the research team's lead scientist. "If something here on Earth can do something so unexpected, what else can life do that we haven't seen yet?"
The newly discovered microbe, strain GFAJ-1, is a member of a common group of bacteria, the Gammaproteobacteria. In the laboratory, the researchers successfully grew microbes from the lake on a diet that was very lean on phosphorus, but included generous helpings of arsenic. When researchers removed the phosphorus and replaced it with arsenic the microbes continued to grow. Subsequent analyses indicated that the arsenic was being used to produce the building blocks of new GFAJ-1 cells.
The key question the researchers investigated was whether, when the microbe was grown on arsenic, the arsenic actually became incorporated into the organism's vital biochemical machinery, such as its DNA, proteins and cell membranes. A variety of sophisticated laboratory techniques was used to determine where the arsenic was incorporated.
The team chose to explore Mono Lake because of its unusual chemistry, especially its high salinity, high alkalinity, and high levels of arsenic. This chemistry is in part a result of Mono Lake's isolation from its sources of fresh water for 50 years.
The results of this study will inform ongoing research in many areas, including the study of Earth's evolution, organic chemistry, biogeochemical cycles, disease mitigation and Earth system research. These findings also will open up new frontiers in microbiology and other areas of research.
"The idea of alternative biochemistries for life is common in science fiction," said Carl Pilcher, director of the NASA Astrobiology Institute at the agency's Ames Research Center in Moffett Field, Calif. "Until now a life form using arsenic as a building block was only theoretical, but now we know such life exists in Mono Lake."
The research team included scientists from the U.S. Geological Survey, Arizona State University in Tempe, Ariz., Lawrence Livermore National Laboratory in Livermore, Calif., Duquesne University in Pittsburgh, Penn., and the Stanford Synchrotron Radiation Lightsource in Menlo Park, Calif.
NASA's Astrobiology Program in Washington contributed funding for the research through its Exobiology and Evolutionary Biology program and the NASA Astrobiology Institute. NASA's Astrobiology Program supports research into the origin, evolution, distribution, and future of life on Earth.
-December 2010-
Global Biosphere
Life is an integral part of the Earth system. Living things influence the composition of the atmosphere by “inhaling” and “exhaling” carbon dioxide and oxygen. They play a part in the water cycle by pulling water from the soil and the air, and they help put it back again by exhaling water vapor and aerating the soil so rain can soak into the ground. They regulate ocean chemistry by taking carbon out of the atmosphere. Earth would not be the planet that it is without its biosphere, the sum of its life.
But life is not a constant thing, as illustrated in this series of images. The images show the distribution of chlorophyll over the Earth’s ocean surface averaged over a year. On land, the images represent the density of plant growth. The darkest green areas, where plant growth is greatest, tend to be concentrated in tropical regions around the equator, where rainfall and sunlight are abundant. Since the images are an annual average, seasonal patterns have been erased. The intense summer growing season at higher latitudes is blended with the winter season when little grows, resulting in a mid-range average vegetation index value.
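Why an annual average erases the seasonal signal at high latitudes can be seen with a minimal sketch using invented monthly vegetation-index values: the summer peak and winter low simply blend into a middling number.

```python
# Minimal sketch with invented numbers: averaging a strongly seasonal
# vegetation-index record over a full year yields a mid-range value.
monthly_ndvi = [0.05, 0.05, 0.10, 0.25, 0.45, 0.70,   # winter through early summer
                0.75, 0.70, 0.50, 0.25, 0.10, 0.05]   # late summer back to winter

annual_average = sum(monthly_ndvi) / len(monthly_ndvi)
print(round(annual_average, 2))  # ~0.33: neither the summer peak nor the winter low
```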
In the ocean, high chlorophyll concentrations are concentrated in areas where tiny, plant-like organisms (phytoplankton) thrive. The highest chlorophyll concentrations are in cold polar waters or in places where ocean currents bring cold water to the surface, such as around the equator and along the shores of continents. It is not the cold water itself that stimulates the phytoplankton. Instead, the cool temperatures are often a sign that the water has welled up to the surface from deeper in the ocean, carrying nutrients that have built up over time. In polar waters, nutrients accumulate in surface waters during the dark winter months when plants can’t grow. When sunlight returns in the spring and summer, the plants flourish in high concentrations.
The average productivity of the biosphere changes little from year to year, but there are small variations. Plants require four things to grow: nutrients, light, water, and moderate temperatures. When any one of these things changes, plant growth will change too. On land, the interannual changes in plant growth are most evident in marginal areas, where small changes in water or temperature have the greatest impact on plant growth. The Australian Outback, the African Sahel, and the Steppes of Central Asia all show annual variations in plant growth in this series. The variations are probably related to changes in temperature or rainfall.
Interannual changes in the ocean are more difficult to interpret. Ocean productivity (as illustrated by the distribution of chlorophyll) depends on ocean temperatures and the availability of nutrients, usually brought to the surface by ocean currents and mixing. Ocean chlorophyll concentrations can change when ocean currents shift or the temperature changes. A drop in chlorophyll could occur if more zooplankton (small animals) are grazing in an area. Finally, the satellite measurement itself may be influenced by changes in weather patterns. Winds could churn the ocean, pushing phytoplankton below the surface where the satellite cannot measure them, or clouds may block the satellite’s view during short bloom events, when chlorophyll spikes.
Measurements of global chlorophyll and vegetation are valuable to scientists because they provide insight into the carbon cycle. Scientists use ocean chlorophyll and vegetation measurements to determine the planet’s net primary productivity: how much carbon is being used by the plants to grow. Carbon cycles through the oceans, soil and rocks, plants on land and in the ocean, and atmosphere. The build up of carbon dioxide released into the atmosphere by burning fossil fuel is the primary cause of global warming. The global biosphere has been helping to offset some of the excess carbon dioxide people have been pumping into the atmosphere.
The maps are made from ocean chlorophyll data collected by the SeaWiFS satellite and vegetation data collected by NOAA satellites and analyzed by the Global Inventory Modeling and Mapping Studies (GIMMS) project at NASA’s Goddard Space Flight Center.
-December 2010-
Burn Recovery in Yellowstone
In the summer of 1988, lightning- and human-ignited fires consumed vast stretches of Yellowstone National Park. More than 25,000 firefighters cycled through the park combating 50 wildfires, seven of which became major wildfires. By the time the first snowfall extinguished the last flames in September, 793,000 of the park’s 2,221,800 acres had burned.
One of the largest fires raced across western Yellowstone, threatening historic structures around the Old Faithful geyser basin. This series of images shows the scars left in the wake of the western Yellowstone fires and the slow recovery in the twenty years that followed. Taken by Landsat-5, the images were made with a combination of visible and infrared light (green, short-wave infrared, and near infrared) to highlight the burned area and changes in vegetation.
In 1987, dense stands of tall, straight pine trees covered the high, volcanic plateau. The old forest is dark green in the image. The summertime forest is broken up by bright green, grassy river meadows and the occasional brush and grass plain, lighter green and tan. Mineral-rich geyser fields are pale blue and white, and lakes are dark blue. Beneath the green vegetation, a pinkish tinge emerges in places, which may indicate old burn scars or simply summer dormant grasses and plants.
Just one year later, charred trees and grasses replaced the forests and meadows. The freshly burned land is dark red. The fire was still burning when the 1988 image was acquired on August 23. The fires glow bright pink, and a faint pall of smoke hangs over the scene. If the image had been made with visible light, to resemble what a person would see from the air, the smoke would be denser than it is in this infrared image. The smoke is blue in the infrared light.
In the 1989 image, the full extent of the fire is revealed. The burned land surrounds Old Faithful and extends throughout western Yellowstone. This satellite view reveals that the fire did not burn everything in a destructive swath. The burned area is patchy, and the severity of the burn varies from place to place. Darker red areas are severe burns, and lighter red areas are less severe.
In the years that follow, the burn scar fades progressively. On the ground, grasses and wildflowers sprang up from the ashes. Over time, tiny pine trees took root and began to grow. The new trees grow more quickly (photosynthesize at a faster rate) than older, mature forests. Because they use and reflect sunlight differently than older forests, the new trees are lighter in color in these images. Fingers of new forest are most evident intruding into the narrow strip of burned land on the right edge of the image. Forests are likely returning elsewhere as well, but the small trees do not produce a dense enough canopy to show up well in the image. Some of the other changes seen from year to year are seasonal differences; the images range from July to October.
Though changes did occur between 1988 and 2008, recovery has been slow. In 2008, the burned area is still clearly discernible. The high-elevation plateau has a short growing season, with hot, dry summers and cold, harsh winters. In such an environment, it will take decades for the forest to reach its former state.
Scientists and land managers use satellite images such as these to assess burn severity, extent, and recovery. There are no realistic alternatives to see how the entire area is recovering. Walking through some of the burned areas could give a false impression of the conditions, since the burn severity and recovery were not uniform. Even aerial surveys are limited in the amount of land that can realistically be observed at any one time. Only satellites provide the big picture required to track recovery throughout the ecosystem.
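One widely used way to quantify burn severity from the near-infrared and shortwave-infrared bands mentioned earlier is the Normalized Burn Ratio (NBR) and its pre-/post-fire difference (dNBR). The sketch below shows the arithmetic on hypothetical per-pixel values; it illustrates the general technique, not the exact processing behind these particular Landsat images.

```python
# Normalized Burn Ratio (NBR) sketch on hypothetical per-pixel reflectances.
# Healthy vegetation reflects strongly in the near infrared (NIR) and weakly in
# the shortwave infrared (SWIR); freshly burned surfaces reverse that pattern.

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

# Hypothetical reflectances for one pixel before and after a fire.
pre_fire  = nbr(nir=0.45, swir=0.15)   # dense forest: high NBR
post_fire = nbr(nir=0.20, swir=0.35)   # fresh burn: low (negative) NBR

dnbr = pre_fire - post_fire            # larger dNBR indicates a more severe burn
print(round(pre_fire, 2), round(post_fire, 2), round(dnbr, 2))  # 0.5 -0.27 0.77
```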
-December 2010-
El Niño, La Niña, and Rainfall
Perhaps nowhere is the intricate relationship between the ocean and the atmosphere more evident than in the eastern Pacific. The ocean’s surface cools and warms cyclically in response to the strength of the trade winds. In turn, the changing ocean alters rainfall patterns. This series of images shows the dance between ocean and atmosphere. Changes in rainfall, right, echo changes in sea surface temperature, left. Many people recognize the extreme ends of the spectrum, El Niño and La Niña, by the severe droughts and intense rains each brings to different parts of the world.
El Niño occurs when warm water builds up along the equator in the eastern Pacific. The warm ocean surface warms the atmosphere, which allows moisture-rich air to rise and develop into rainstorms. The clearest example of El Niño in this series of images is 1997. The unusually warm waters are dark purple in the sea surface temperature anomaly image, indicating that waters were as much as 6 degrees Celsius warmer than average.
The corresponding streak of dark blue in the rainfall anomaly image reveals that as much as 12 millimeters more rain than average fell over the warmed eastern Pacific. The unusual rainfall extended into northwestern South America (Ecuador and Peru). The disruption in the atmosphere impacts rainfall throughout the world. In the United States, the strongest change in rainfall is in the southeast, the region closest to the pool of warm Pacific water. During El Niño years, such as 1997, the southeast receives more rain than average.
La Niña is the build up of cool waters in the equatorial eastern Pacific, such as occurred in 1988 and, to a slightly lesser degree, 1998. La Niña’s impacts are opposite those of El Niño. The atmosphere cools in response to the cold ocean surface, and less water evaporates. The cooler, dry air is dense. It doesn’t rise or form storms. As a result, less rain falls over the eastern Pacific. Ecuador, Peru, and the southeastern United States are correspondingly dry.
El Niño and La Niña reflect the two end points of an oscillation in the Pacific Ocean. The cycle is not fully understood, but the time series illustrates that the cycle swings back and forth every 3-7 years. Often, El Niño is followed immediately by La Niña, as if the warm water is sloshing back and forth across the Pacific. The development of El Niño events is linked to the trade winds. El Niño occurs when the trade winds are weaker than normal, and La Niña occurs when they are stronger than normal. Both cycles typically peak in December.
El Niño and La Niña aren’t the only cycles evident in this image series. The Pacific Ocean is moody: It turns slightly hot or slightly cold every couple of years. This roughly two-year pattern isn’t the distinctive, well-defined stripe of warm ocean waters near the equator typical of El Niño, but rather a general warming of the ocean.
On top of the two-year warm/cold cycle and the El Niño/La Niña pattern is a broader decadal cycle in which the Pacific has a warm and a cool phase. In the 1990s, the Pacific was in a warm phase. The strong El Niño of 1997 marked the end of the warm phase. Since 1997, the Pacific has been in a generally cool phase, during which time strong El Niño events have not been able to form.
The sea surface temperature anomaly images were made from data collected by the Advanced Very High Resolution Radiometer between 1985 and 2008. They show the average sea surface temperature for the month of December compared to a long-term average of surface temperatures observed between 1985 and 2008. The rainfall anomaly images are from the Global Precipitation Climatology Project, which blends rainfall data from a number of satellites. The images compare December rainfall with the average December rainfall observed between 1979 and 2008.
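Both panels show anomalies, that is, each December compared against a long-term December mean. A minimal sketch of that calculation, using invented values, follows; the real AVHRR and GPCP processing involves far more (calibration, gridding, gap-filling).

```python
import numpy as np

# Minimal anomaly sketch with invented data: one value per December, 1985-2008.
years = np.arange(1985, 2009)
december_sst = 26.0 + 0.8 * np.sin(years * 1.3)   # fake sea surface temperatures, deg C

climatology = december_sst.mean()                 # long-term December mean
anomalies = december_sst - climatology            # positive = warmer than average

for year, anomaly in zip(years, anomalies):
    print(int(year), round(float(anomaly), 2))
```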
-December 2010-
Antarctic Ozone Hole
The stratospheric ozone layer protects life on Earth by absorbing ultraviolet light, which damages DNA in plants and animals (including humans) and leads to skin cancer. Prior to 1979, scientists had not observed concentrations below 220 Dobson Units. But in the early 1980s, through a combination of ground-based and satellite measurements, scientists began to realize that Earth’s natural sunscreen was thinning dramatically over the South Pole each spring. This large, thin spot in the ozone layer came to be known as the ozone hole.
This series of images shows the size and shape of the ozone hole each year from 1979 through 2008 (no data are available for 1995). The measurements were made by NASA’s Total Ozone Mapping Spectrometer (TOMS) instruments from 1979–2003 and by the Royal Netherlands Meteorological Institute (KNMI) Ozone Monitoring Instrument (OMI) from 2004–present. Purple and dark blue areas are part of the ozone hole.
As the images show, the word hole isn’t literal; no place is empty of ozone. Scientists use the word hole as a metaphor for the area in which ozone concentrations drop below the historical threshold of 220 Dobson Units. Using this metaphor, they can describe the hole’s size and depth. These maps show the state of the ozone hole each year on the day of maximum depth—the day the lowest ozone concentrations were measured.
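In these terms, the hole's "size" is simply the combined area of all grid cells whose column ozone falls below 220 DU, and its "depth" is the lowest value anywhere on the map. A rough sketch of both calculations on a hypothetical latitude-longitude grid is below; the grid, random values, and simple cosine area weighting are assumptions for illustration, not the actual TOMS/OMI processing.

```python
import numpy as np

# Rough sketch: ozone hole "size" and "depth" from a hypothetical gridded map.
EARTH_RADIUS_KM = 6371.0
THRESHOLD_DU = 220.0

# Hypothetical 1-degree grid covering 50S-90S; values in Dobson Units.
lats = np.arange(-89.5, -49.5, 1.0)                 # cell-center latitudes
lons = np.arange(-179.5, 180.5, 1.0)                # cell-center longitudes
rng = np.random.default_rng(0)
ozone = rng.uniform(120.0, 320.0, size=(lats.size, lons.size))

# Area of each 1x1-degree cell shrinks with the cosine of latitude.
dlat = dlon = np.deg2rad(1.0)
cell_area = (EARTH_RADIUS_KM ** 2) * dlat * dlon * np.cos(np.deg2rad(lats))[:, None]

hole_mask = ozone < THRESHOLD_DU
hole_area_km2 = float((cell_area * hole_mask).sum())   # "size" of the hole
hole_depth_du = float(ozone.min())                     # "depth": lowest ozone value

print(round(hole_area_km2 / 1e6, 1), "million km^2, minimum", round(hole_depth_du, 1), "DU")
```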
The series begins in 1979. The maximum depth of the hole that year was 194 Dobson Units (DU)—not far below the historical low. For several years, the minimum concentrations stayed in the 190s, but in the early 1980s the minimums rapidly got deeper: 173 DU in 1982, 154 DU in 1983, 124 DU in 1985. In 1991, a new threshold was passed; ozone concentration fell below 100 DU for the first time. Since then, concentrations below 100 DU have been common. The deepest ozone hole occurred in 1994, when concentrations fell to just 73 DU on September 30.
Records in depth and size haven’t occurred during the same years (the largest ozone hole occurred in 2006), but the long-term trend in both characteristics is consistent: from 1980 through the early 1990s, the hole rapidly grew in size and depth. Since the mid-1990s, area and depth have roughly stabilized (see Ozone Hole Watch website for annual averages). Year-to-year variations in area and depth are caused by variations in stratospheric temperature and circulation. Colder conditions result in a larger area and lower ozone values in the center of the hole.
An uneven seam in the contours of the data marks the location of the international date line. Ozone data are measured by polar-orbiting satellites that collect observations in a series of swaths over the course of the day; the passes are generally separated by about 90 minutes. Stratospheric circulation slowly shifts the contours of the ozone hole over the course of the day (like winds shift the location of clouds). The contours move little from any one swath to the next, but by the end of the day, the cumulative movement is apparent at the date line.
The ozone hole opened the world’s eyes to the global effects of human activity on the atmosphere. It turned out that chlorofluorocarbons (CFCs)—long-lived chemicals that had been used in refrigerators and aerosols sprays since the 1930s—had a dark side. In the layer of the atmosphere closest to Earth (the troposphere), CFCs circulated for decades without degrading or reacting with other chemicals. When they reached the stratosphere, however, their behavior changed. In the upper stratosphere (beyond the protection of the ozone layer), ultraviolet light caused CFCs to break apart, releasing chlorine, a very reactive atom that repeatedly catalyzes ozone destruction.
The global recognition of CFCs’ destructive potential led to the Montreal Protocol, signed in 1987 and in force since 1989, which phases out the production of ozone-depleting chemicals. Scientists estimate that about 80 percent of the chlorine (and bromine, which has a similar ozone-depleting effect) in the stratosphere over Antarctica today is from human, not natural, sources. Models suggest that the concentration of chlorine and other ozone-depleting substances in the stratosphere will not return to pre-1980 levels until the middle decades of this century. These same models predict that the Antarctic ozone layer will recover around 2040. On the other hand, because of the impact of greenhouse gas warming, the ozone layer over the tropics and mid-southern latitudes may not recover for more than a century, and perhaps not ever.
-December 2010-
Drought Cycles in Australia
Drought is a frequent visitor in Australia. The Australian Bureau of Meteorology describes the typical rainfall over much of the continent as “not only low, but highly erratic.” As 2009 drew to a close, the southeastern states of Victoria and New South Wales were enduring their third year of drought, and forecasters could offer little hope that the sustained, above-average rains needed to break the pattern would materialize in early 2010.
The multi-year drought has been devastating for farmers. These satellite-based vegetation images document what farmers and ranchers have had to contend with over the past decade. The images are centered on the agricultural areas near the Murray River—Australia’s largest river—between Hume Reservoir and Lake Tyrrell. The series shows vegetation growing conditions for a 16-day period in the middle of September each year from 2000 through 2009 compared to the average mid-September conditions over the decade. Places where the amount and/or health of vegetation was above the decadal average are green, average areas are off-white, and places where vegetation growth was below average are brown.
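The green/off-white/brown coloring amounts to classifying each pixel's mid-September vegetation index against its own 2000-2009 average. The sketch below shows that classification step with invented values chosen to mirror the green and brown years described in this article; the ±0.05 tolerance is hypothetical and purely for illustration.

```python
# Invented example: classify one pixel's mid-September vegetation index against
# its 2000-2009 average. The +/-0.05 tolerance is hypothetical, for illustration.
decade = {2000: 0.62, 2001: 0.47, 2002: 0.31, 2003: 0.60, 2004: 0.48,
          2005: 0.63, 2006: 0.34, 2007: 0.30, 2008: 0.33, 2009: 0.35}

mean_ndvi = sum(decade.values()) / len(decade)

def color(value, mean, tolerance=0.05):
    if value > mean + tolerance:
        return "green (above average)"
    if value < mean - tolerance:
        return "brown (below average)"
    return "off-white (near average)"

for year, value in decade.items():
    print(year, color(value, mean_ndvi))
```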
The area shown in the images is part of Australia’s “Wheatbelt,” and mid-September is the height of the growing season for cereal grains, including wheat, barley, and oats. Fields have a fairly distinct appearance, creating a speckled patchwork of geometric shapes. Areas where the vegetation appears smoothly continuous (such as in the top center, east of the Great Cambung Swamp) are more likely to be pasture or natural vegetation than cropland. The lower right corner of the image is occupied by the mountain ranges whose streams deliver much of the Murray River’s water volume.
The decade’s drought history is obvious in the cycling between years where the images are dominated by shades of green (2000, 2003, 2005) and years dominated by shades of brown (2002, 2006, 2007, 2008, 2009). While the overall pattern in any given year is generally unmistakable, there are localized differences in how crops responded to the climate each year. These differences could have numerous causes, from localized rainfall to variability in the drought-tolerance of an area’s predominant crop type. At the individual field level, a brown or green patch in a single year could indicate a crop that was struggling or flourishing, but it could also reflect a management decision to plant or harvest at a different time or to leave a field fallow.
Another pattern that emerges from the time series is the dramatic difference in the ability of crops versus natural vegetation to withstand dry years. Scattered across the “Wheatbelt” are small pockets of native landscapes protected in parks and preserves. Among these protected areas is Wyperfeld National Park; two arms of the park stretch into the image west of Lake Tyrrell. In 2002, as the countryside plunged widely into drought, the natural vegetation in the park remained near average (neutral colors). The forested mountains south and east of Hume Reservoir turned just slightly brown, a far cry from the deep browns of the croplands to the north and west. Later in the series (2006-2009), even natural areas begin to appear more brown than neutral, although the conditions were not as far below average as they were in cropland.
The images in this series are based on vegetation observations collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite, which passes a major mission milestone—ten years in orbit—this December. Observations from Terra and its sister satellite Aqua, launched in 2002, are helping ecologists and climate scientists monitor the impact of short- and long-term climate change on Earth’s natural and agricultural ecosystems.
-December 2010-
Yellow River Delta
China’s Huang He (Yellow River) is the most sediment-filled river on Earth. Flowing northeast to the Bo Hai Sea from the Bayan Har Mountains, the Yellow River crosses a plateau blanketed with up to 300 meters (980 feet) of fine, wind-blown soil. The soil is easily eroded, and millions of tons of it are carried away by the river every year. Some of it reaches the river’s mouth, where it builds and rebuilds the delta.
The Yellow River Delta has wandered up and down several hundred kilometers of coastline over the past two thousand years. Since the mid-nineteenth century, however, the lower reaches of the river and the delta have been extensively engineered to control flooding and to protect coastal development. This sequence of natural-color images from NASA's Landsat satellites shows the delta near the present river mouth at five-year intervals from 1989 to 2009.
Between 1989 and 1995, the delta became longer and narrower along a southeast-bending arc. In 1996, however, Chinese engineers blocked the main channel and forced the river to veer northeast. By 1999, erosion and settling along the old channel caused the tip of the delta to retreat, while a new peninsula had formed to the north.
The new peninsula thickened in the next five-year interval, and what appears to be aquaculture (dark-colored rectangles) expanded significantly in areas south of the river as of 2004. By 2009, the shoreline northwest of the new river mouth had filled in considerably. This may be the outcome that the engineers were anxious to achieve: the land northwest of the newly fortified shoreline is home to an extensive field of oil and gas wells. Their protection is a primary concern.
Although levees, jetties, and seawalls allow officials to slow erosion and direct the flow of the river, other challenges to protecting the delta’s natural wetlands and its agricultural and industrial development remain. Water and sediment flows to the delta have declined dramatically since the 1970s, due to both reduced rainfall and explosive urban and agricultural demand for water upstream. In the 1990s, the river frequently ran dry well before reaching the delta.
These low- and no-flow periods are a huge problem in the lower reaches of the river and the delta. They severely damage wetlands and aquaculture and worsen the river’s already severe water pollution problem. Ironically, they also increase the flood risk because when flows are low, sediment fills in the riverbed. The river becomes shallower and higher in elevation. In places, the river is already perched above the surrounding floodplain by as much as 10 meters (30 feet). A levee breach during a high water event could be devastating.
-December 2010-
Mountaintop Mining, West Virginia
Below the densely forested slopes of southern West Virginia’s Appalachian Mountains is a layer cake of thin coal seams. To uncover this coal profitably, mining companies engineer large—sometimes very large—surface mines. This time-series of images of a surface mine in Boone County, West Virginia, illustrates why this controversial mining method is also called “mountaintop removal.”
Based on data from NASA’s Landsat 5 satellite, these natural-color (photo-like) images document the growth of the Hobet mine as it moves from ridge to ridge between 1984 and 2009. The natural landscape of the area is dark green forested mountains, creased by streams and indented by hollows. The active mining areas appear off-white, while areas being reclaimed with vegetation appear light green. A pipeline roughly bisects the images from north to south. The town of Madison, lower right, lies along the banks of the Coal River.
In 1984, the mining operation is limited to a relatively small area west of the Coal River. The mine first expands along mountaintops to the southwest, tracing an oak-leaf-shaped outline around the hollows of Big Horse Creek and continuing in an unbroken line across the ridges to the southwest. Between 1991 and 1992, the mine moves north, and the impact of one of the most controversial aspects of mountaintop mining—rock and earth dams called valley fills—becomes evident.
The law requires coal operators to try to restore the land to its approximate original shape, but the rock debris generally can’t be securely piled as high or graded as steeply as the original mountaintop. There is always too much rock left over, and coal companies dispose of it by building valley fills in hollows, gullies, and streams. Between 1991 and 1992, this leveling and filling in of the topography becomes noticeable as the mine expands northward across a stream valley called Stanley Fork (image center).
The most dramatic valley fill that appears in the series, however, is what appears to be the near-complete filling of Connelly Branch from its source to its mouth at the Mud River between 1996 and 2000. Since 2004, the mine has expanded from the Connelly Branch area to the mountaintops north of the Mud River. Significant changes are apparent to the ridges and valleys feeding into Berry Branch by 2009. Over the 25-year period, the disturbed area grew to more than 10,000 acres (15.6 square miles).
According to a report from the U.S. Fish and Wildlife Service, nearly 40 percent of the year-round and seasonal streams in the Mud River watershed upstream of and including Connelly Branch had been filled or approved for filling through 1998. In 2009, the EPA intervened in the approval of a permit to further expand the Hobet mine into the Berry Branch area and worked with mine operators to minimize the disturbance and to reduce the number and size of valley fills.
Still, some scientists argue that current regulations and mitigation strategies are inadequate. In February, a team of scientists published a review of research on mountaintop mining and valley fills in the magazine Science. The scientists concluded that the impacts on stream and groundwater quality, biodiversity, and forest productivity were "pervasive and irreversible" and that current strategies for mitigation and restoration were not compensating for the degradation.
-December 2010-
Urbanization of Dubai
To expand the possibilities for beachfront tourist development, Dubai, part of the United Arab Emirates, undertook a massive engineering project to create hundreds of artificial islands along its Persian Gulf coastline. Built from sand dredged from the sea floor and protected from erosion by rock breakwaters, the islands were shaped into recognizable forms, including two large palm trees. The first Palm Island constructed was Palm Jumeirah, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite observed its progress from 2000 to 2010.
In these false-color images, bare ground appears brown, vegetation appears red, water appears dark blue, and buildings and paved surfaces appear light blue or gray. The first image, acquired in November 2000, shows the area prior to the island’s construction. The image from February 2002 shows the barest beginnings of the artificial archipelago. By October 2002, substantial progress had been made on Palm Jumeirah, with many sandy “palm fronds” inside a circular breakwater.
By November 2003, the palm shape was complete, and buildings and vegetation increasingly populate Palm Jumeirah in the images from November 2004, October 2005, September 2006, March 2007, and November 2008. The final image, acquired in February 2010, shows vegetation on most of the palm fronds and numerous buildings on the tree trunk.
Inland, the changes between November 2000 and February 2010 are just as dramatic. In the earliest image, empty desert fills the lower right quadrant, while the cityscape hugs the coast. As the years pass, urbanization spreads, and the final image shows the area almost entirely filled by roads, buildings, and irrigated land.
-December 2010-
Severe Storms
This collection of images featuring the strongest hurricane, cyclone, or typhoon from any ocean during each year of the past decade includes storms both famous—or infamous—and obscure. The judging is based on the storm with the highest wind speed, using lowest minimum pressure as a tie-breaker when needed. The images were all captured by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra or Aqua satellites, and they are all shown at the same scale.
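For readers who want to check the ranking themselves, here is a minimal sketch of that selection rule in Python. The storm records are placeholders (the "Example" entry is hypothetical), with Wilma's values taken from the table later in this section.

```python
from dataclasses import dataclass

@dataclass
class Storm:
    name: str
    year: int
    max_wind_kmh: float     # maximum sustained wind speed (km/h)
    min_pressure_mb: float  # minimum central pressure (millibars)

def strongest_storm(storms):
    """Highest wind speed wins; lowest minimum pressure breaks any tie."""
    return max(storms, key=lambda s: (s.max_wind_kmh, -s.min_pressure_mb))

# Illustrative records for one year; "Example" is a hypothetical storm that
# ties Wilma on wind speed but has a higher (weaker) minimum pressure.
storms_2005 = [
    Storm("Wilma", 2005, 295, 882),
    Storm("Example", 2005, 295, 902),
]
print(strongest_storm(storms_2005).name)  # prints "Wilma"
```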
Of the decade’s most powerful storms, two were in the Atlantic/Caribbean basin, six were in the Pacific north of the equator, and two were in the South Pacific. Even without looking at the table below, you can identify which storms were in the Northern Hemisphere and which were in the Southern: because of the Coriolis force, northern cyclones rotate counterclockwise, while southern storms rotate clockwise. All storm categories are based on the Saffir-Simpson Hurricane Scale.
Many North Americans will recognize Hurricanes Dean (2007) and Wilma (2005). Wilma is the most intense Atlantic Basin storm on record (based on minimum air pressure), and it made landfall on the island of Cozumel, the Yucatan Peninsula, and Florida. After cutting a devastating path through the Caribbean, Hurricane Dean made a rare Category 5 landfall in Mexico.
Even residents around the Western Pacific Basin might not remember the names Damrey (2000) or Faxai (2001); both of these Category 5 super typhoons came and went through the remote Pacific without ever approaching land. Other storms inflicted great damage. Super Typhoon Maemi (2003) was the costliest typhoon ever to hit Korea, killing more than a hundred people. Super Typhoon Chaba (2004) pummeled both the Northern Mariana Islands and Honshu, Japan. Super Typhoon Jangmi (2008), which made landfall across northern Taiwan as a Category 3 storm, was not only the strongest storm of 2008 but also the only storm worldwide to reach Category 5 strength that year.
The two South Pacific cyclones in this collection, Monica (2006) and Zoe (2002), were nearly equal in strength. Monica crossed the Cape York Peninsula as a Category 2 cyclone, but emerged over the warm waters of the Gulf of Carpentaria and intensified into a Category 5 storm before its second landfall on the Northern Territory’s Top End. Cyclone Zoe traced an erratic path through the Solomon Islands, avoiding major land masses, but its eye passed over the tiny, sparsely populated island of Tikopia at the height of the storm. The strongest storm of 2009 didn’t arrive until October, when Category 5 Hurricane Rick formed in the eastern Pacific. Rick weakened significantly before coming ashore near Mazatlán, Mexico.
Data on maximum wind speed and minimum pressure come from the National Hurricane Center for Atlantic Basin and eastern Pacific storms, from the Japan Meteorological Agency for western Pacific storms, and from the Australian Bureau of Meteorology for South Pacific storms.
Storm | Date of image | Maximum Wind Speed km/h (mph) | Minimum Pressure millibars | Basin |
---|---|---|---|---|
Damrey | May 9, 2000 | 290 (180) | 878 | Western Pacific |
Faxai | December 22, 2001 | 290 (180) | 915 | Western Pacific |
Zoe | December 28, 2002 | 285 (177) | 890 | South Pacific |
Maemi | September 10, 2003 | 280 (174) | 910 | Western Pacific |
Chaba | August 23, 2004 | 290 (180) | 879 | Western Pacific |
Wilma | October 18, 2005 | 295 (183) | 882 | Atlantic/Caribbean |
Monica | April 24, 2006 | 285 (177) | 905 | South Pacific |
Dean | August 18, 2007 | 280 (174) | 907 | Atlantic/Caribbean |
Jangmi | September 27, 2008 | 260 (162) | 905 | Western Pacific |
Rick | October 18, 2009 | 285 (177) | 906 | Eastern Pacific |
-December 2010-
Mesopotamia Marshes
At the start of the twenty-first century, the once-lush, richly diverse wetlands of Mesopotamia had been decimated. In the decades leading up to the new century, hydro-engineering—dams for flood control and hydroelectricity, canals and reservoirs for agricultural irrigation—had greatly reduced the volume of the annual marsh-renewing floods. Then, in the 1990s, the marshes became a political pawn: former Iraqi leader Saddam Hussein drained large areas at least in part to punish the tribes living there, the Marsh Arabs, for participating in anti-government rebellions.
This series of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite begins on February 28, 2000. Historically, spring runoff would have flooded Mesopotamia by this time of year; the wedge of land between the Tigris (flowing south from top center) and the Euphrates (flowing in from near left center) would have been dotted with ponds and verdant with reeds and other wetland plants. Instead, the area was reduced to a few small green patches and bare soil, varying in shades from purplish brown to pale beige. (The bright green vegetation is likely irrigated cropland, not marsh vegetation.) The largest remnant marsh, Al Hawizeh, straddles the Iran-Iraq border just east of the Tigris River (top right). Its dark color may result from the combination of standing water (dark blue), vegetation (dark green), and bare ground (brown). Between 2000 and 2003, this remnant shrinks further.
Following the Second Gulf War and the end of Saddam Hussein’s regime in 2003, Iraqis began demolishing the dikes and canals that had drained the marshes. By February 9, 2004, a dramatic transformation was underway in Mesopotamia. Several large marsh areas north and south of the Euphrates had been re-flooded, and the dry land south of Al Hawizeh Marsh was being systematically filled. These areas appear almost purely dark blue or nearly black, which indicates that standing water was present, but that vegetation was absent or extremely sparse. By 2005, additional areas were flooded, especially north of the Euphrates (near image center). In some places, the water appeared more greenish than it did in 2004; this could be because plants or algae were growing or because the water was shallower than it was the previous year.
In 2007 and 2008, the marshes stood out starkly from the surrounding bare ground. Interestingly, however, they appeared reddish brown, rather than the lush green color that vegetation commonly has in photo-like images such as these. However, the color is deceiving: an infrared-enhanced false-color image, in which even sparse vegetation appears neon green, shows that plants were growing in the marshes. The unusual color may be a unique characteristic of the type of plants (or algae) growing in the marshes, or it may be due to the mixed reflections from standing water (perhaps shallow or muddy), soggy ground, and different kinds of plants.
As the decade drew to a close, the recovering marshes faced new threats, including new dam construction upstream and drought. The amount of flooding visible in the 2009 image was considerably less than in 2008; not only the marshes, but also the adjacent irrigated crop areas appeared far less lush than they did the previous year. The 2009 drought had a severe impact on winter and spring crops in Iraq. The image from 2010 seems to tell a different story, however. While the marshes appeared to have shrunk still further, the irrigated agricultural areas in the center of the image appeared more extensive and greener than they were the previous year.
A United Nations Environment Program assessment of the Iraq marsh restoration in 2006 concluded that roughly 58 percent of the marsh area present in the mid-1970s had been restored in the sense that standing water was seasonally present and vegetation was reasonably dense. Two years of field research by Iraqi and American scientists concluded that there had been a “remarkable rate of reestablishment of native macroinvertebrates, macrophytes, fish, and birds in re-flooded marshes.” However, the lack of connectedness among the various re-flooded marshes remained a concern for species diversity and local extinctions. In addition, the volume of water that flowed into the marshes in the first years of restoration may not be sustainable as the country stabilizes and economic and agricultural activity resume. As a result, the ultimate fate of the Mesopotamian marshes is still uncertain.
-December 2010-
Amazon Deforestation
The state of Rondônia in western Brazil is one of the most deforested parts of the Amazon. In the past three decades, clearing and degradation of the state’s original 208,000 square kilometers of forest (about 51.4 million acres, an area slightly smaller than the state of Kansas) has been rapid: 4,200 square kilometers cleared by 1978; 30,000 by 1988; and 53,300 by 1998. By 2003, an estimated 67,764 square kilometers of rainforest—an area larger than the state of West Virginia—had been cleared.
By the beginning of this decade, the frontier had reached the remote northwest corner of Rondônia, pictured in this series of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. Intact forest is deep green, while cleared areas are tan (bare ground) or light green (crops, pasture, or occasionally, second-growth forest). Over the span of eight years, roads and clearings pushed west-northwest from Buritis toward the Jaciparaná River. The deforested area along the road into Nova Mamoré expanded north-northeast all the way to the BR-346 highway.
Deforestation follows a fairly predictable pattern in these images. The first clearings that appear in the forest are in a fishbone pattern, arrayed along the edges of roads. Over time, the fishbones collapse into a mixture of forest remnants, cleared areas, and settlements. This pattern follows one of the most common deforestation trajectories in the Amazon. Legal and illegal roads penetrate a remote part of the forest, and small farmers migrate to the area. They claim land along the road and clear some of it for crops. Within a few years, heavy rains and erosion deplete the soil, and crop yields fall. Farmers then convert the degraded land to cattle pasture and clear more forest for crops. Eventually the smallholders, having cleared much of their land, sell it or abandon it to large cattle ranchers, who consolidate the plots into large areas of pasture.
The estimated change in forested area between 2000 and 2008 is shown in this map (above) based on vegetation index data from MODIS. Places that are red lost vegetation, while places that are peach showed little or no change. The most intensely red areas indicate the biggest vegetation losses—usually the complete clearing of original rainforest. Less intense reds indicate less dramatic change, such as the complete clearing of an already logged forest, or a transition from leafy crops to sparser pasture grasses.
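How is such a change map made? As a rough illustration (a generic NDVI-differencing sketch, not the exact processing behind the MODIS vegetation index product), red and near-infrared reflectances from two dates can be combined into a vegetation index and then differenced; the arrays below are synthetic stand-ins for real imagery.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Synthetic surface reflectances (0-1) for two dates; a real analysis would use
# calibrated MODIS red and near-infrared bands for 2000 and 2008.
rng = np.random.default_rng(0)
red_2000 = rng.uniform(0.02, 0.10, (100, 100))   # dense forest: low red reflectance
nir_2000 = rng.uniform(0.35, 0.55, (100, 100))   # dense forest: high NIR reflectance
red_2008 = rng.uniform(0.05, 0.25, (100, 100))
nir_2008 = rng.uniform(0.15, 0.45, (100, 100))

change = ndvi(nir_2008, red_2008) - ndvi(nir_2000, red_2000)
cleared = change < -0.2   # strongly negative change suggests vegetation loss
print(f"fraction of pixels flagged as probable clearing: {cleared.mean():.2f}")
```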
All major tropical forests—including those in the Americas, Africa, Southeast Asia, and Indonesia—are disappearing, mostly to make way for human food production, including livestock and crops. Although tropical deforestation meets some human needs, it also has profound, sometimes devastating, consequences, including social conflict and human rights abuses, extinction of plants and animals, and climate change—challenges that affect the whole world.
-December 2010-
Larsen-B Ice Shelf
In the Southern Hemisphere summer of 2002, scientists monitoring daily satellite images of the Antarctic Peninsula watched in amazement as almost the entire Larsen B Ice Shelf splintered and collapsed in just over one month. They had never witnessed such a large area—3,250 square kilometers, or 1,250 square miles—disintegrate so rapidly.
The collapse of the Larsen B Ice Shelf was captured in this series of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite between January 31 and April 13, 2002. At the start of the series, the ice shelf (left) is tattooed with parallel lines of blue dots. The dots are pools of meltwater, and they are arranged in lines because the water drained into existing crevasses. Beneath a thin layer of clouds, a smattering of icebergs appears in the dark, open waters of the bay (right).
By February 17, the leading edge of the C-shaped shelf had retreated about 10 kilometers (6 miles) as the shelf began to splinter. The next good view of the area came on February 23; several more long, narrow icebergs had fractured from the shelf in the south. By March 7, the shelf had disintegrated into a blue-tinged mixture (mélange) of slush and icebergs. Many of the bergs were too tall and narrow to float upright. They toppled over and spread out across the bay like a neat row of books that had been knocked off a shelf.
When the bergs tipped over, the very pure ice on the bottom side of the ice shelf was exposed. The pale blue color is largely due to the reflection from this ice. Pure, thick ice absorbs a small amount of red light. Photo-like satellite images are made by combining the satellite’s observations of red, green, and blue wavelengths of light reflected from the Earth’s surface. When all these visible wavelengths are strongly reflected, the surface looks white; when the reddest light is absorbed, the reflection takes on a cyan tinge.
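A toy sketch of that compositing logic is shown below; the reflectance values are assumed for illustration and are not measured MODIS data.

```python
import numpy as np

# Assumed reflectances (0-1) in the red, green, and blue bands for three surfaces.
surfaces = {
    "snow-covered shelf":  (0.95, 0.95, 0.95),  # all bands strongly reflected -> white
    "overturned berg ice": (0.70, 0.92, 0.95),  # some red absorbed -> cyan tinge
    "open water":          (0.05, 0.07, 0.10),  # little reflection -> dark
}

for name, bands in surfaces.items():
    rgb = np.clip(bands, 0.0, 1.0)
    r, g, b = (int(round(v * 255)) for v in rgb)
    print(f"{name:20s} composite RGB = ({r}, {g}, {b})")
```

In the second row, the reduced red value pulls the composite away from pure white toward cyan, which is the tinge visible in the images of the overturned bergs.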
The last images in the series were captured as austral autumn arrived. The bright blue color of the ice debris field faded as the remnants of the shelf were covered by the first snows of the season. Seasonal sea ice began to form and locked most of the ice debris in place for the winter, although the April 13 image shows that a few of the largest icebergs from the southern portion of the shelf had already drifted out of the area.
The collapse of the Larsen B appears to have been due to a series of warm summers on the Antarctic Peninsula, which culminated in an exceptionally warm summer in 2002. Significant surface melting due to warm air temperatures created melt ponds that acted like wedges; they deepened the crevasses and eventually caused the shelf to splinter.
Other factors might have contributed to the unusually rapid and near-total disintegration of the shelf. Warm ocean temperatures in the Weddell Sea that occurred during the same period might have caused thinning and melting on the underside of the ice shelf. As the surface melt ponds began to fracture the shelf, strong winds or waves might have flexed the shelf, helping to trigger a runaway break up.
The ice debris field did not become a permanent fixture in Larsen Bay. As seasonal sea ice melted the following summer, the mélange began to drift away with the currents, and in many summers since the collapse, the bay has been completely ice free. Although sea ice occasionally occupies the bay during cold winters, it is no substitute for the ice shelf in terms of its influence on the glaciers that once fed the Larsen B. The grounded portion of the shelf used to push back against the glaciers, slowing them down. Without this pushback, the glaciers that fed the ice shelf have accelerated and thinned.
While the collapse of the Larsen B was unprecedented in terms of scale, it was not the first ice shelf on the Antarctic Peninsula to experience an abrupt break up. The northernmost section of the Larsen Ice Shelf Complex, called Larsen A, lost about 1,500 square kilometers of ice in an abrupt event in January 1995. Following the even more spectacular collapse in 2002, the Larsen A and B glaciers experienced an abrupt acceleration, about 300% on average, and their mass loss went from 2–4 gigatonnes per year in 1996 and 2000 (a gigatonne is one billion metric tonnes), to between 22 and 40 gigatonnes per year in 2006.
Nor was the Larsen B the last Antarctic ice shelf to disappear. Farther down the peninsula to the southwest, the Wilkins Ice Shelf disintegrated in a series of break up events that began in February 2008 (late summer) and continued throughout Southern Hemisphere winter. The last remnant of the northern part of the Wilkins Ice Shelf collapsed in early April 2009. It was the tenth major ice shelf to collapse in recent times.
-December 2010-
Arctic Sea Ice
Layers of frozen seawater, known simply as sea ice, cap the Arctic Ocean. Ice grows dramatically each winter, usually reaching its maximum in March. The ice melts just as dramatically each summer, generally reaching its minimum in September. These image pairs show Arctic sea ice concentration for the month of September (left) and the following March (right) for a time series beginning in September 1999 and ending in March 2010.
The yellow outline on each image shows the median sea ice extent observed by satellite sensors in September and March from 1979 through 2000. Extent is the total area in which ice concentration is at least 15 percent. The median is the middle value. Half of the extents over the time period were larger than the line, and half were smaller.
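A minimal sketch of those two calculations, assuming a simplified equal-area grid, is shown below; both the concentration grids and the 625 square kilometer cell size are hypothetical placeholders, whereas real estimates use the passive-microwave concentration fields described later in this article.

```python
import numpy as np

def ice_extent_km2(concentration, cell_area_km2, threshold=0.15):
    """Extent = total area of cells where ice concentration is at least 15 percent."""
    concentration = np.asarray(concentration, dtype=float)
    return float(np.sum(cell_area_km2 * (concentration >= threshold)))

# Synthetic September concentration grids (fractions 0-1) for 22 "years" (1979-2000);
# the values are random placeholders, not real satellite data.
rng = np.random.default_rng(42)
september_grids = [rng.uniform(0.0, 1.0, (50, 50)) for _ in range(22)]

extents = [ice_extent_km2(grid, 625.0) for grid in september_grids]
median_extent = np.median(extents)  # half the yearly extents are larger, half smaller
print(f"median extent of the synthetic series: {median_extent / 1e6:.2f} million km^2")
```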
Since 1978, satellites have monitored sea ice growth and retreat, and they have detected an overall decline in Arctic sea ice. The rate of decline steepened after the turn of the twenty-first century. In September 2002, the summer minimum ice extent was the lowest it had been since 1979. Although the September 2002 low was only slightly below previous lows (from the 1990s), it was the beginning of a series of record or near-record lows in the Arctic. This series of record lows, combined with poor wintertime recoveries starting in the winter of 2004-2005, marked a sharpening in the rate of decline in Arctic sea ice. Ice extent at summer minimum did not return to anything approaching the long-term average (1979-2000) after 2002.
September/March (minimum/maximum) | September Average Extent (millions of square kilometers) | March Average Extent (millions of square kilometers) |
---|---|---|
1979–2000 mean | 7.0 | 15.7 |
1999/2000 | 6.2 | 15.3 |
2000/2001 | 6.3 | 15.6 |
2001/2002 | 6.8 | 15.4 |
2002/2003 | 6.0 | 15.5 |
2003/2004 | 6.1 | 15.1 |
2004/2005 | 6.0 | 14.7 |
2005/2006 | 5.6 | 14.4 |
2006/2007 | 5.9 | 14.6 |
2007/2008 | 4.3 | 15.2 |
2008/2009 | 4.7 | 15.2 |
2009/2010 | 5.4 | 15.1 |
After the September 2002 record low, the following two seasons were below average, but not record lows. In September 2005, ice extent dropped again, reaching a new record low. And then, in the summer of 2007, Arctic sea ice extent set a new record low in early August—more than a month before the end of the melt season. That September, the “preferred” northern navigation route through the Northwest Passage opened. The record-low conditions of 2007 were driven by a variety of factors: an early start to the melt season, unusually sunny weather over the East Siberian Sea, and wind and current patterns that drove ice out of the Arctic. Arches of ice that usually span Nares Strait, between Greenland and Ellesmere Island, failed to form in 2007, giving sea ice an additional exit route from the Arctic.
Cycles of natural variability such as the Arctic Oscillation are known to play a role in Arctic sea ice extent, but the sharp decline seen in this decade cannot be explained by natural variability alone. Natural variability and greenhouse gas emissions (and the resulting rise in global temperatures) likely worked together to melt greater amounts of Arctic sea ice. Some models forecast an ice-free Arctic for at least part of the year before the end of the twenty-first century.
This time series is made from a combination of observations from the Special Sensor Microwave/Imagers (SSM/Is) flown on a series of Defense Meteorological Satellite Program missions and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), a Japanese-built sensor that flies on NASA’s Aqua satellite. These sensors measure microwave energy radiated from the Earth’s surface (sea ice and open water emit microwaves differently). Scientists use the observations to map sea ice concentrations.
Some areas in the images, such as places along the Greenland coast or in Hudson Bay, may appear partially ice-covered when they actually were not. Over the years, satellite sensor capabilities have steadily improved, but some limitations remain, often due to weather and the mixing of land (coast) and water in the satellite sensor’s field of view. The gray circle at the center of each image is the “pole hole,” north of which satellite sensors have historically been unable to collect data. The sea ice estimates from the National Snow and Ice Data Center, NASA’s archive for sea ice data, assume that this hole is ice-filled.
-December 2010-
Antarctic Sea Ice
Unlike the Arctic—an ocean basin surrounded by land, where sea ice extends all the way to the pole—the Antarctic is a large continent surrounded by ocean. Because of this geography, sea ice has more room to expand in the winter, but it is also closer to the equator. The result is that Antarctica’s sea ice extent is larger than the Arctic’s in winter, but smaller in the summer. Total Antarctic sea ice peaks in September (the end of Southern Hemisphere winter) and retreats to a minimum in February.
These image pairs show Antarctic sea ice during the September maximum (left) and the following February minimum (right) for a time series beginning in September 1999 and ending in February 2010. Land is dark gray, and ice shelves—thick slabs of glacial ice grounded along the coast—are light gray. The yellow outline shows the median sea ice extent in September and February from 1979 (when routine satellite observations began) to 2000. Extent is the total area in which ice concentration is at least 15 percent. The median is the middle value. Half of the extents over the time period were larger than the line, and half were smaller.
Since the start of the satellite record, total Antarctic sea ice has increased by about 1 percent per decade. Whether the small overall increase in sea ice extent is a sign of meaningful change in the Antarctic is uncertain because ice extents in the Southern Hemisphere vary considerably from year to year and from place to place around the continent. Considered individually, only the Ross Sea sector had a significant positive trend, while sea ice extent has actually decreased in the Bellingshausen and Amundsen Seas.
The year-to-year and place-to-place variability is evident in these images from the past decade. The winter maximum in the Weddell Sea, for example, is above the median in some years and below it in others. In any given year, sea ice concentration may be below the median in one sector but above the median in another; in September 2000, for example, ice concentrations in the Ross Sea were above the median extent, while those in the Pacific sector were below it.
At summer minimums, sea ice concentrations appear even more variable. In the Ross Sea, sea ice virtually disappears in some summers (2000, 2005, 2006, and 2009), but not all. The long-term decline in the sea ice in the Bellingshausen and Amundsen Seas is detectable in the past decade’s summer minimums: concentrations were below the median in all years.
This time series is made from a combination of observations from the Special Sensor Microwave/Imagers (SSM/Is) flown on a series of Defense Meteorological Satellite Program missions and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), a Japanese-built sensor that flies on NASA’s Aqua satellite. These sensors measure microwave energy radiated from the Earth’s surface (sea ice and open water emit microwaves differently). Scientists use the observations to map sea ice concentrations.
September/February(maximum/minimum) | September Average Extent (millions of square kilometers) | February Average Extent (millions of square kilometers) |
---|---|---|
1979–2000 mean | 18.7 | 2.9 |
1999/2000 | 19.0 | 2.8 |
2000/2001 | 19.1 | 3.7 |
2001/2002 | 18.4 | 2.9 |
2002/2003 | 18.2 | 3.8 |
2003/2004 | 18.6 | 3.6 |
2004/2005 | 19.1 | 2.9 |
2005/2006 | 19.1 | 2.6 |
2006/2007 | 19.4 | 2.9 |
2007/2008 | 19.2 | 3.7 |
2008/2009 | 18.5 | 2.9 |
2009/2010 | 19.2 | 3.2 |
-December 2010-
Water Level in Lake Powell
The Colorado River flows from the Rocky Mountains in Colorado through the southwestern United States. Along its route, the river passes through an elaborate water-management system designed to tame the yearly floods from spring snowmelt and to provide a reliable supply of water for residents as far away as California. The system is appreciated for the water it supplies, but criticized for the environmental problems and cultural losses that have resulted from its creation.
Among the dams on the Colorado is Arizona’s Glen Canyon Dam, which creates Lake Powell. The deep, narrow, meandering reservoir extends upstream into southern Utah. In the early twenty-first century, this modern marvel of engineering faced an ancient enemy: severe, prolonged drought in the American Southwest. Combined with water withdrawals that many believe are not sustainable, the drought has caused a dramatic drop in Lake Powell’s water level over the past decade. The changes are documented in this series of natural-color images from the Landsat 5 satellite between 1999 and 2010.
The images show the northeastern reaches of Lake Powell. The Colorado River flows in from the east around Mille Crag Bend and is swallowed by the lake. At the west end of Narrow Canyon, the Dirty Devil River joins the lake from the north. (At normal water levels, both rivers are essentially part of the reservoir.) Sunlight brightens plateaus and southeast-facing slopes, casting shadows on the northern and western faces of the rugged landscape.
At the beginning of the series, in 1999, water levels in Lake Powell were relatively high, and the water was a clear, dark blue. The sediment-filled Colorado River appeared green-brown. Throughout the first years of the next decade, water levels began to drop. The declines were first apparent in the side canyons feeding the reservoir, which thinned and then shortened. By 2002, the lake level had dropped far enough that the exposed canyon walls created a pale outline around the lake.
Dry conditions and falling water levels were unmistakable in the image from April 13, 2003. Lake Powell’s side branches had all retreated compared to the previous year’s extents. Water levels in Narrow Canyon had dropped enough to show canyon floor features not visible in earlier images. In the image acquired on May 1, 2004, the reservoir’s northwestern branch is isolated from the main reservoir; the shallow water upstream could not crest raised areas in the lake bed.
Lake Powell’s water levels plummeted in early 2005, according to the U.S. Department of the Interior Bureau of Reclamation, and the lowest water levels seen in this time series appear in the image from April 2, 2005. The northwestern side branch of Lake Powell remained cut off from the rest of the reservoir. In the main body of Lake Powell, water pooled along its eastern edge, while large expanses of dry canyon floor were visible in the west.
In the latter half of the decade, the drought eased somewhat. Precipitation was near, though still slightly below, average in the Upper Colorado River Basin. The lake level began to rebound; only the 2008 image appeared to deviate from the trend toward rising water levels. While the lake was significantly higher in May 2010 than in 2005, a careful comparison of the side canyons reveals that the level was still not back to 1999 levels, when the lake was near full capacity.
The peak inflow to Lake Powell occurs in mid- to late spring as the winter snow in the Rockies melts. According to the Bureau of Reclamation, the April 2010 inflow to Lake Powell was 94.5 percent of average, much greater than the 66 percent of average forecasted at the start of the month. According to the Bureau’s May 2010 summary, the larger-than-expected April inflow was likely due to an earlier-than-expected spring thaw, and so it probably was not a sign that total spring runoff volume would be larger than forecast. The forecast for maximum elevation at summer pool, which usually occurs in late July or early August, was 3,634 feet above sea level, about 66 feet below full pool.
A century of river flow records combined with an additional four to five centuries of tree-ring data show that the drought of the late 1990s and early 2000s was not unusual; longer and more severe droughts are a regular part of the climate variability in that part of the continent. Global warming is expected to make droughts more severe in the future. Even in “low emission” climate scenarios (forecasts that are based on the assumption that future carbon dioxide emissions will increase relatively slowly), models predict precipitation may decline by 20-25 percent over most of California, southern Nevada, and Arizona by the end of this century. Precipitation declines combined with booming urban populations will present a significant challenge to Western water managers in the near future.
-December 2010-
Solar Activity
Over the span of 11 years, the Sun’s activity waxes and wanes as magnetic field lines that are wound and tangled inside the Sun periodically break through to the surface. These breakthroughs produce a pair of sunspots of opposite magnetic polarity, one positive and the other negative, that travel together across the face of the Sun. The heightened magnetic activity associated with sunspots can lead to solar flares, coronal mass ejections, and other far-reaching electromagnetic phenomena that endanger astronauts and damage or disrupt satellites.
This series of images from the Solar and Heliospheric Observatory (SOHO) spacecraft shows ultraviolet light (left) and sunspots (right) each spring from 1999 through 2010. Sunspots darken the visible surface of the Sun, while observations of ultraviolet light reveal how the magnetic activity that produced the sunspots excited the overlying solar atmosphere, producing intensely bright areas. As Solar Cycle 23 reached its peak between 2000 and 2002, numerous sunspots speckled both hemispheres, and then their numbers dropped off dramatically as the cycle headed toward its minimum.
If you map the location of the spots on the Sun’s surface over the course of a solar cycle, the pattern they make is shaped like a butterfly. The reason for the butterfly pattern is that the first sunspots of each new solar cycle occur mostly at the Sun’s mid-latitudes, but as the solar cycle progresses, the area of maximum sunspot production shifts toward the (solar) equator.
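The sketch below shows roughly what such a latitude-versus-time plot looks like; the sunspot positions are synthetic values generated to mimic the equatorward drift, not actual observations.

```python
import numpy as np
import matplotlib.pyplot as plt

# Generate synthetic sunspot records (year, latitude) for two 11-year cycles.
# The drift from mid-latitudes toward the equator over each cycle is what gives
# the scatter plot its butterfly shape.
rng = np.random.default_rng(7)
years, latitudes = [], []
for cycle_start in (1996, 2008):           # illustrative cycle start years
    t = rng.uniform(0, 11, 2000)           # years since the start of the cycle
    mean_lat = 30 - 25 * (t / 11)          # ~30 degrees early, near the equator late
    lat = mean_lat + rng.normal(0, 3, t.size)
    hemisphere = rng.choice([-1, 1], t.size)   # spots occur in both hemispheres
    years.extend(cycle_start + t)
    latitudes.extend(hemisphere * lat)

plt.scatter(years, latitudes, s=1)
plt.xlabel("Year")
plt.ylabel("Sunspot latitude (degrees)")
plt.title("Schematic butterfly diagram (synthetic data)")
plt.savefig("butterfly_schematic.png", dpi=150)
```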
Solar Cycle 24 began in early 2008, but there was minimal activity through early 2009. The image from April 29, 2009, does show a pair of small sunspots (far right, most visible in the large image), but the location of the spots near the equator means that they belong to Solar Cycle 23. The sunspots visible in the images from May 2, 2010, however, are from the new cycle. The most recent forecast from the Space Weather Prediction Center is that Solar Cycle 24 will be of below-average intensity, and that it will peak in May 2013.
The small changes in solar irradiance that occur during the solar cycle exert a small influence on Earth’s climate: periods of intense magnetic activity (solar maximum) produce slightly higher temperatures, while solar minimum periods, such as that seen in 2008 and early 2009, likely have the opposite effect.
-December 2010-