NOAA's Response and Restoration Blog

An inside look at the science of cleaning up and fixing the mess of marine pollution



Five Years After Deepwater Horizon, How Is NOAA Preparing for Future Oil Spills?

The Deepwater Horizon Oil Spill: Five Years Later

This is the ninth and final story in a month-long series looking at topics related to the response, the Natural Resource Damage Assessment science, restoration efforts, and the future of the Gulf of Mexico.


Keeping up with emerging technologies and changing energy trends helps us become better prepared for the oil spills of tomorrow, no matter where that may take us. (NOAA)

When the Exxon Valdez tanker ran aground in Alaska and spilled nearly 11 million gallons of crude oil in 1989, the world was a very different place. New laws, regulations, and technologies followed that spill, meaning future oil spills—though they undoubtedly would still occur—would do so in a fundamentally different context.

This was certainly the case by 2010 when the Deepwater Horizon oil rig suffered an explosion caused by a well blowout in the Gulf of Mexico. Tankers transporting oil have become generally safer since 1989 (thanks in part to now-required double hulls), and in 2010, the new frontier in oil production—along with new risks—was located at a wellhead nearly a mile under the ocean surface.

Since that fateful April day in 2010, NOAA has responded to another 400 oil and chemical incidents. Keeping up with emerging technologies and changing energy trends helps us become better prepared for the oil spills of tomorrow, whether they stem from a derailed train carrying particularly flammable oil, a transcontinental pipeline of diluted oil sands, or a cargo ship passing through the Arctic’s icy but increasingly accessible waters.

So how is NOAA’s Office of Response and Restoration preparing for future oil spills?

The Bakken Boom

Crude oil production from North Dakota’s Bakken region has more than quadrupled [PDF] since 2010, and responders must be prepared for spills involving this lighter oil (note: not all oils are the same).

Bakken crude oil is highly flammable and evaporates quickly in the open air. Knowing this oil's chemistry can help guide decisions about how to respond when it spills. As a result, we've added Bakken as one of the oil types in ADIOS, our software program that models what happens to spilled oil over time. Now, responders can predict how much oil naturally disperses, evaporates, or remains on the water's surface using information customized for Bakken's unique chemistry.
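To give a sense of the kind of mass-balance question ADIOS answers, here is a minimal sketch of a first-order evaporation estimate. It is not ADIOS's actual algorithm, and the volatile fraction and evaporation rate shown are made-up illustrative values, but it shows how oil-specific chemistry drives predictions of how much oil evaporates versus how much remains on the water.

```python
# Minimal sketch of an oil mass-balance estimate over time.
# Not the ADIOS algorithm: the volatile fraction and evaporation rate below
# are made-up illustrative values standing in for oil-specific chemistry.

import math

def mass_balance(spilled_gallons, volatile_fraction, evap_rate_per_hr, hours):
    """Crude first-order evaporation estimate for the volatile part of a spill."""
    evaporated = spilled_gallons * volatile_fraction * (1 - math.exp(-evap_rate_per_hr * hours))
    remaining = spilled_gallons - evaporated
    return {"evaporated_gal": round(evaporated), "remaining_gal": round(remaining)}

# Hypothetical light crude (e.g., Bakken-like): large volatile fraction, fast evaporation.
print(mass_balance(spilled_gallons=10_000, volatile_fraction=0.6,
                   evap_rate_per_hr=0.3, hours=24))
```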

We’ve also been collaborating across the spill response community to boost preparedness for these types of oil spills. Earlier this year, NOAA worked with the National Response Team to teach responders about how to deal with Bakken crude oil spills, with a special emphasis on health and safety.

The increase in Bakken crude production poses another challenge to the nation: spills from oil-hauling trains. There are few ways to move Bakken crude from wells in North Dakota to refiners and consumers across the country. To keep up with demand, producers have turned to rail transport as a quick alternative. In 2010, rail moved less than five million tons of crude petroleum. By 2013, that number had jumped to nearly 40 million tons.

NOAA typically responds to marine spills, but our scientific experience also proves useful when oil spills into a navigable river, as can happen when a train derails. To help answer response questions for waterways at risk, we’re adding even more data to our tools for spill responders. Ongoing updates to the Environmental Response Management Application (ERMA), our online mapping tool for environmental response data, illustrate the intersection of railroads and sensitive habitats and species, which might be affected by a spill from a train carrying oil.
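The kind of overlay an ERMA layer like this represents can be sketched with a simple spatial query. The example below uses the shapely library with made-up geometries and a hypothetical spill-spread distance; it only illustrates buffering a rail corridor and checking which sensitive habitats fall inside, not ERMA's implementation.

```python
# Sketch of the overlay behind a map layer like this one: buffer a rail corridor
# and flag sensitive habitats that fall inside the buffer. Uses the shapely
# library; the geometries and the 1 km spread distance are made up.

from shapely.geometry import LineString, Polygon

rail_line = LineString([(0, 0), (5, 0), (10, 2)])            # hypothetical rail corridor (km)
marsh = Polygon([(4, 0.5), (6, 0.5), (6, 2), (4, 2)])        # hypothetical sensitive marsh
mudflat = Polygon([(20, 20), (22, 20), (22, 22), (20, 22)])  # habitat far from the tracks

spill_zone = rail_line.buffer(1.0)  # assume spilled oil could spread ~1 km from the tracks

for name, habitat in [("marsh", marsh), ("mudflat", mudflat)]:
    status = "at risk" if spill_zone.intersects(habitat) else "outside the modeled spill zone"
    print(f"{name}: {status}")
```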

Our Neighbor to the North

Oil imports from Canada, where oil sands (also known as tar sands) account for almost all of the country’s oil, have surged. Since 2010 Canadian oil imports have increased more than 40 percent.

Oil sands present another set of unique challenges. This variety is a thick, heavy crude oil (bitumen), which has to be diluted with a thinner type of oil to allow it to flow through a pipeline for transport. The resulting product is known as diluted bitumen, or dilbit.

Because oil sands products are a mixture, it's not completely clear how they react in the environment. When this product is released into water, it can separate quickly into lighter and heavier components. As a result, responders might have to worry both about lighter components vaporizing into toxic fumes in the air and about heavier components sinking into the water column or bottom sediments, where they become more difficult to clean up. This also means that bottom-dwelling organisms may be more vulnerable to spills of oil sands than to other types of oil.

As our experts work to assess the impacts from oil sands spills (including the 2010 Enbridge pipeline spill in Michigan), their studies both inform restoration for past spills and help guide response for the next spill. We’ve been working with the response and restoration community around the country to incorporate these lessons into spill response, including at recent meetings of the West Coast Joint Assessment Team and the International Spill Control Organization.

Even Further North

As shrinking summer sea ice opens shipping routes and opportunities for oil and gas production in the Arctic, the risk of an oil spill in that region increases. By 2020, up to 40 million tons per year of oil and gas are expected to travel the Northern Sea Route through the Arctic Ocean.

Responding to oil spills in the Arctic will not be easy. Weather can be harsh, even in August. Logistical support is limited, and so is baseline science. Yet in the last five years, NOAA's Office of Response and Restoration has made leaps in Arctic preparedness. For example, since 2010 we have launched Arctic ERMA, a version of our interactive response data mapping tool customized for the region, and released the Arctic Ephemeral Data Guidelines, which describe how to collect high-priority, time-sensitive data in the Arctic after an oil spill. But we still have plenty of work ahead of us.


The U.S. Coast Guard Cutter Healy breaks ice in Arctic waters. A ship like this would be the likely center of operations for an oil spill in this remote and harsh region. (NOAA)

During a spill, we predict where oil is going, but Arctic conditions change the way oil behaves compared with warmer waters. Cold temperatures make oil more viscous (thick and slow-flowing), and in a spill, oil may be trapped in, on, and under floating sea ice, further complicating predictions of its movement.

We’ve been working to overcome this challenge by improving our models of oil movement and weathering in icy waters and researching response techniques and oil behavior to close gaps in the science. This May, we also find ourselves in a new role as the United States takes chairmanship of the Arctic Council. Amy Merten of NOAA’s Office of Response and Restoration will chair the Arctic Council’s Emergency Prevention, Preparedness and Response Working Group, where we hope to continue international efforts to boost Arctic spill preparedness.

Expecting the Unexpected

After decades of dealing with oil spills, we know one thing for certain—we have to be ready for anything.

In the last five years, we’ve responded to spills from the mangroves of Bangladesh to the banks of the Ohio River. These spills have involved Bakken crude, oil sands, and hazardous chemicals. They have resulted from well blowouts, leaking pipelines, derailed trains, grounded ships, storms, and more. In fact, one of the largest spills we’ve responded to since Deepwater Horizon involved 224,000 gallons of molasses released into a Hawaiian harbor.

Whatever the situation, it’s our job to provide the best available science for decisions. NOAA has more than 25 years of experience responding to oil spills. Over that time, we have continued to fine-tune our scientific understanding to better protect our coasts from this kind of pollution, a commitment that extends to whatever the next challenge may bring.



Attempting to Answer One Question Over and Over Again: Where Will the Oil Go?

The Deepwater Horizon Oil Spill: Five Years Later

This is the first in a series of stories over the coming weeks looking at various topics related to the response, the Natural Resource Damage Assessment science, restoration efforts, and the future of the Gulf of Mexico.

Oil spills raise all sorts of scientific questions, and NOAA’s job is to help answer them.

We have a saying that each oil spill is unique, but there is one question we get after almost every spill: Where will the oil go? One of our primary scientific products during a spill is a trajectory forecast, which often takes the form of a map showing where the oil is likely to travel and which shorelines and other environmentally or culturally sensitive areas might be at risk.

Oil spill responders need this information to decide which shorelines to protect with containment boom, where to stage cleanup equipment, and which areas should be closed to fishing or boating during a spill.

To help predict the movement of oil, we developed the computer model GNOME to forecast the complex interactions among currents, winds, and other physical processes affecting oil’s movement in the ocean. We update this model daily with information gathered from field observations, such as those from trained observers tasked with flying over a spill to verify its often-changing location, and new forecasts for ocean currents and winds.
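A trajectory model like GNOME is, at its core, a particle-tracking calculation. The sketch below is a bare-bones Lagrangian example, with made-up current, wind, windage, and diffusion values rather than GNOME's code, showing how currents, a fraction of the wind, and a random-walk term combine to move a cloud of simulated oil particles forward in time.

```python
# Bare-bones Lagrangian particle sketch of surface-oil movement.
# Not GNOME: real trajectory models use gridded, time-varying current and wind
# forecasts and calibrated diffusion. All values here are invented.

import random

def step_particles(particles, current, wind, dt_hours, windage=0.03, diffusion=0.05):
    """Move (x, y) positions by current + a fraction of the wind + a random walk (km)."""
    moved = []
    for x, y in particles:
        u = current[0] + windage * wind[0] + random.uniform(-diffusion, diffusion)
        v = current[1] + windage * wind[1] + random.uniform(-diffusion, diffusion)
        moved.append((x + u * dt_hours, y + v * dt_hours))
    return moved

# Hypothetical scenario: 1,000 particles released at the source, stepped hourly for a day.
slick = [(0.0, 0.0)] * 1000
for _ in range(24):
    slick = step_particles(slick, current=(0.4, 0.1), wind=(10.0, -5.0), dt_hours=1.0)
print("example particle position after 24 hours (km):", slick[0])
```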

Modeling a Moving Target

One of the biggest challenges we've faced in trying to answer this question was, not surprisingly, the 2010 Deepwater Horizon oil spill. Because of the continual release of oil—tens of thousands of barrels each day—over nearly three months, we had to prepare hundreds of forecasts as more oil entered the Gulf of Mexico, was moved by ocean currents and winds, and was weathered, or physically, biologically, or chemically changed, by the environment and response efforts. A typical forecast models the oil's expected spread over the next 24, 48, and 72 hours. This task began with the first trajectory our oceanographers issued early in the morning of April 21, 2010, after being notified of the accident, and continued for the next 107 days in a row. (You can access all of the forecasts from this spill online.)

Once spilled into the marine environment, oil begins to move and spread surprisingly quickly but not necessarily in a straight line. In the open ocean, winds and currents can easily move oil 20 miles or more per day, and in the presence of strong ocean currents such as the Gulf Stream, oil and other drifting materials can travel more than 100 miles per day. Closer to the coast, tidal currents also can move and spread oil across coastal waters.

While the Deepwater Horizon drilling rig and wellhead were located only 50 miles offshore of Louisiana, it took several weeks for the slick to reach shore as shifting winds and meandering currents slowly moved the oil.

A Spill Playing on Loop

Over the duration of a typical spill, we’ll revise and reissue our forecast maps on a daily basis. These maps include our best prediction of where the oil might go and the regions of highest oil coverage, as well as what is known as a “confidence boundary.” This is a line encircling not just our best predictions for oil coverage but also a broader area on the map reflecting the full possible range in our forecasts [PDF].

Our oceanographers include this confidence boundary on the forecast maps to indicate that there is a chance that oil could be located anywhere inside its borders, depending on actual conditions for wind, weather, and currents. Why is there a range of possible locations in the oil forecasts? Well, the movement of oil is very sensitive to ocean currents and wind, and predictions of oil movement rely on accurate predictions of the currents and wind at the spill site.

In addition, sometimes the information we put into the model is based on an incomplete picture of a spill. Much of the time, the immense size of the Deepwater Horizon spill on the ocean surface meant that observations from specialists flying over the spill and even satellites couldn’t capture the full picture of where all the oil was each day.

Our inevitably inexact knowledge of the many factors informing the trajectory model introduces a certain level of expected variation in its predictions, which is the situation with many models. Forecasters attempt to assess all the possible outcomes for a given scenario, estimate the likelihood of the different possibilities, and ultimately communicate risks to the decision makers.
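One common way to express that expected variation is to run the same trajectory many times with perturbed winds and currents and draw a boundary around everywhere the ensemble puts oil. The sketch below illustrates that idea with invented numbers and a convex hull as the boundary; it is an assumed method for illustration, not the procedure NOAA's forecasters use.

```python
# Sketch of a confidence boundary built from an ensemble of perturbed forecasts.
# The forcing values, perturbation sizes, and the convex-hull boundary are all
# assumptions for illustration, not NOAA's operational procedure.

import random
from scipy.spatial import ConvexHull

def endpoint(current, wind, hours=48, windage=0.03):
    """Straight-line drift of one parcel under a single current/wind scenario (km)."""
    u = current[0] + windage * wind[0]
    v = current[1] + windage * wind[1]
    return (u * hours, v * hours)

# Best-guess forcing plus random perturbations standing in for forecast uncertainty.
ensemble = []
for _ in range(200):
    current = (0.4 + random.gauss(0, 0.1), 0.1 + random.gauss(0, 0.1))
    wind = (10.0 + random.gauss(0, 3.0), -5.0 + random.gauss(0, 3.0))
    ensemble.append(endpoint(current, wind))

hull = ConvexHull(ensemble)  # outline enclosing every place the ensemble puts oil
print("confidence-boundary vertices:", [ensemble[i] for i in hull.vertices])
```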

In the case of the Deepwater Horizon oil spill, we had the added complexity of a spill that spanned many different regions—from the deep Gulf of Mexico, where ocean circulation is dominated by the swift Loop Current, to the continental shelf and nearshore area where ocean circulation is influenced by freshwater flowing from the Mississippi River. And let’s not forget that several tropical storms and hurricanes crossed the Gulf that summer [PDF].

A big concern was that if oil got into the Loop Current, it could be transported to the Florida Keys, Cuba, the Bahamas, or up the eastern coast of the United States. Fortunately for the Florida Keys, a giant eddy formed in the Gulf of Mexico in June 2010, nicknamed Eddy Franklin after Benjamin Franklin, who did some of the early research on the Gulf Stream. Eddy Franklin created a giant circular current that kept the oil largely contained in the Gulf of Mexico.

Some of the NOAA forecast team likened our efforts that spring and summer to the movie Groundhog Day, in which the main character is forced to relive the same day over and over again. For our team, every day involved modeling the same oil spill again and again, but with constantly changing results.  Thinking back on that intense forecasting effort brings back memories packed with emotion—and exhaustion. But mostly, we recall with pride the important role our forecast team in Seattle played in answering the question “where will the oil go?”



University of Washington Helps NOAA Examine Potential for Citizen Science During Oil Spills


One area where volunteers could contribute to NOAA’s scientific efforts related to oil spills is in collecting baseline data before an oil spill happens. (Credit: Heal the Bay/Ana Luisa Ahern, CC BY-NC-SA 2.0)

This is a guest post by University of Washington graduate students Sam Haapaniemi, Myong Hwan Kim, and Roberto Treviño.

During an oil spill, how can NOAA maximize the benefits of citizen science while maintaining a high level of scientific integrity?

This was the central question that our team of University of Washington graduate students has been trying to answer for the past six months. Citizen science is characterized by volunteers participating in scientific research, usually by gathering or analyzing amounts of data that scientists would be unable to handle on their own.

Dramatic improvements in technology—particularly the spread of smartphones—have made answering this question more pressing. They have also fueled huge growth in public interest in oil spill response, along with an increased desire and ability to help, as demonstrated during the 2007 M/V Cosco Busan and 2010 Deepwater Horizon oil spill responses.

As the scientific experts in oil spills, NOAA’s Office of Response and Restoration has a unique opportunity to engage citizens during spills and enable them to contribute to the scientific process.

What’s in it for me?

Our research team found that the potential benefits of citizen science during oil spills extend to three groups of people outside of responders.

  • First, professional researchers can benefit from having many more people involved in research. Having more citizen scientists available to gather data can strengthen the accuracy of observations by drawing from a potentially greater geographic area and by bringing in finer-grained data. In some cases, citizen scientists also can provide local knowledge of a related topic that professional researchers may not possess.
  • The second group that benefits is composed of the citizen scientists themselves. Citizen science programs provide a constructive way for the average person to help solve problems they care about, and, as part of a collective effort, their contributions become more likely to make a real impact. Through this process, the public also gets to learn about their world and connect with others who share this interest.
  • The final group that derives value from citizen science programs is society at large. When thoughtfully designed and managed, citizen science can be an important stakeholder engagement tool for advancing scientific literacy and addressing perceptions of risk. Citizen science programs can provide opportunities to correct risk misconceptions, address stakeholder concerns, share technical information, and establish constructive relationships and dialogue about the science that informs oil spills and response options.

How Should This Work?


A volunteer samples mussels off of Everett, Washington, as part of the citizen science-fueled NOAA Mussel Watch Program. (Credit: Lincoln Loehr, Snohomish County Marine Resources Committee)

Recognizing these benefits, we identified three core requirements that NOAA’s Office of Response and Restoration should consider when designing a citizen science program for oil spills.

  1. Develop a program that provides meaningful work for the public and beneficial scientific information for NOAA.
  2. Create a strong communication loop or network that can be maintained between participating citizens and NOAA.
  3. Develop the program in a collaborative way.

Building on these core requirements, we identified a list of activities NOAA could consider for citizen science efforts both before and during oil spill responses.

Before a response, NOAA could establish data collection protocols for citizen scientists, partner with volunteer organizations that could help coordinate them, and manage baseline studies with the affiliated volunteers. For example, NOAA would benefit from knowing the actual numbers of shorebirds found at different times of year in areas at high risk of oil spills. This information would help NOAA better distinguish impacts to those populations in the event of an oil spill in those areas.
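A data collection protocol ultimately boils down to a well-defined record with basic quality checks. The sketch below shows one hypothetical way a baseline shorebird count could be structured; the field names and checks are illustrative only, not an existing NOAA standard.

```python
# Hypothetical structure for a citizen-science baseline observation record.
# Field names and checks are illustrative only, not an existing NOAA protocol.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ShorebirdCount:
    observer_id: str
    site_name: str
    latitude: float
    longitude: float
    observed_at: datetime
    species: str
    count: int

    def validate(self):
        """Basic quality checks before a record enters the baseline dataset."""
        assert -90 <= self.latitude <= 90 and -180 <= self.longitude <= 180, "bad coordinates"
        assert self.count >= 0, "count cannot be negative"

obs = ShorebirdCount("vol-0042", "example high-risk site", 46.94, -124.10,
                     datetime(2015, 4, 18, 7, 30), "western sandpiper", 120)
obs.validate()
print(obs)
```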

During a response, NOAA could benefit from citizen science volunteers' observations and field surveys (whether open-ended or structured questionnaires), and volunteers could help process data collected during the response. In addition, NOAA could manage volunteer registration and coordination during a spill response.

How Could This Work?

Evaluating different options for implementing these activities, we found clear trade-offs depending on NOAA’s priorities, such as resource intensity, data value, liability, and participation value. As a result, we created a decision framework, or “decision tool,” for NOAA’s Office of Response and Restoration to use when thinking about how to create a citizen science program. From there, we came up with the following recommendations:

  1. Acknowledge the potential benefits of citizen science. The first step is to recognize that citizen science has benefits for both NOAA and the public.
  2. Define goals clearly and recognize trade-offs. Having clear goals and intended uses for citizen scientist contributions will help NOAA prioritize and frame the program.
  3. Use the decision tool to move from concept to operation. The decision tool we designed will help identify potential paths best suited to various situations.
  4. Build a program that meets the baseline requirements. For any type of citizen science program, NOAA should ensure it is mutually beneficial, maintains two-way communication, and takes a collaborative approach.
  5. Start now: early action pays off. Before the next big spill happens, NOAA can prepare for potentially working with citizen scientists by building relationships with volunteer organizations, designing and refining data collection methods, and integrating citizen science into response plans.

While there is not one path to incorporating citizen science into oil spill responses, we found that there is great potential via many different avenues. Citizen science is a growing trend and, if done well, could greatly benefit NOAA during future oil spills.

You can read our final report in full at https://citizensciencemanagement.wordpress.com.

Sam Haapaniemi, Myong Hwan Kim, and Roberto Treviño are graduate students at the University of Washington in Seattle, Washington. The Citizen Science Management Project is being facilitated through the University of Washington's Program on the Environment and is the most recent project in an ongoing relationship between that program and NOAA's Office of Response and Restoration.



After an Oil Spill, How—and Why—Do We Survey Affected Shorelines?


A team of responders surveying the shoreline of Raccoon Island, Louisiana, on May 12, 2010. They use a systematic method for surveying and describing shorelines affected by oil spills, which was developed during the Exxon Valdez spill in 1989. (U.S. Navy)

This is part of the National Ocean Service’s efforts to celebrate our role in the surveys that inform our lives and protect our coasts.

In March of 1989, oil spill responders in Valdez, Alaska, had a problem. They had a very large oil spill on their hands after the tanker Exxon Valdez had run aground on Bligh Reef in Prince William Sound.

At the time, many aspects of the situation were unprecedented—including the amount of oil spilled and the level of response and cleanup required. Further complicating their efforts were the miles and miles of remote shoreline along Prince William Sound. How could responders know which shorelines were hardest hit by the oil and where they should focus their cleanup efforts? Plus, with so many people involved in the response, what one person might consider “light oiling” on a particular beach, another might consider “heavy oiling.” They needed a systematic way to document the oil spill’s impacts on the extensive shorelines of the sound.

Out of these needs ultimately came the Shoreline Cleanup and Assessment Technique, or SCAT. NOAA was a key player in developing this formal process for surveying coastal shorelines affected by oil spills. Today, we maintain the only SCAT program in the federal government, although we have been working with the U.S. Environmental Protection Agency (EPA) to help develop similar methods for oil spills on inland lakes and rivers.

Survey Says …

SCAT aims to describe both the oil and the environment along discrete stretches of shoreline potentially affected by an oil spill. Based on that information, responders then can determine the appropriate cleanup methods that will do the most good and the least harm for each section of shoreline.

The teams of trained responders performing SCAT surveys normally are composed of representatives from the state and federal government and the organization responsible for the spill. They head out into the field, armed with SCAT’s clear methodology for categorizing the level and kind of oiling on the shoreline. This includes standardized definitions for describing how thick the oil is, its level of weathering (physical or chemical change), and the type of shoreline impacted, which may be as different as a rocky shoreline, a saltwater marsh, or flooded low-lying tundra.

After carefully documenting these data along all possibly affected portions of shoreline, the teams make their recommendations for cleanup methods. In the process, they have to take a number of other factors into account, such as whether threatened or endangered species are present or if the shoreline is in a high public access area.

It is actually very easy to do more damage than good when cleaning up oiled shorelines. The cleanup itself—with lots of people, heavy equipment, and activity—can be just as harmful to the environment as the spilled oil, or even more so. For sensitive areas, such as a marsh, taking no cleanup action is often the best option for protecting the stability of the fragile shoreline, even if some oil remains.

Data, Data Everywhere

Having a common language for describing shoreline oiling is a critical piece of the conversation during a spill response. Without this standard protocol, spill responders would be reinventing the wheel for each spill. In the same vein, responders at NOAA are working with the U.S. EPA and the State of California to establish a common data standard for the mounds of data collected during these shoreline surveys.

Managing all of that data and turning it into useful products for the response is a lot of work. During bigger spills, multiple data specialists work around the clock to process the data collected during SCAT surveys, perform quality assurance and control, and create informational products, such as maps showing where oil is located and its level of coverage on various types of shorelines.

Data management tools such as GPS trackers and georeferenced photographs help speed up that process, but the next step is moving from paper forms used by SCAT field teams to electronic tools that enable these teams to directly enter their data into the central database for that spill.

Our goal is to create a data framework that can be translated into any tool for any handheld electronic device. These guidelines would provide consistency across digital platforms, specifying exactly what data are being collected and in which structure and format. Furthermore, they would standardize which data are shared into a spill's central database, whether they come from a state government agency or the company that caused the spill. This effort feeds into the larger picture for managing data during oil spills and allows everyone working on that spill to understand, access, and work with the data collected, long after the spill is over.
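To make the idea concrete, here is a hypothetical example of what a platform-neutral SCAT segment record could look like once serialized. The field names and categories are illustrative only; they are not the draft standard being developed with EPA and California.

```python
# Hypothetical platform-neutral SCAT segment record, serialized as JSON.
# The field names and category values are illustrative, not the draft
# NOAA/EPA/California data standard.

import json

scat_record = {
    "segment_id": "EXAMPLE-014",
    "survey_date": "2015-05-12",
    "team": ["state", "federal", "responsible_party"],
    "shoreline_type": "sand beach",
    "oiling": {
        "distribution_percent": 30,        # portion of the segment showing oil
        "thickness_category": "cover",     # standardized thickness term
        "oil_character": "mousse",         # weathering/character term
        "overall_category": "moderate"     # overall oiling category
    },
    "constraints": ["threatened_species_present", "high_public_access"],
    "cleanup_recommendation": "manual removal, no heavy equipment"
}

print(json.dumps(scat_record, indent=2))  # ready to submit to the spill's central database
```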

Currently, we are drafting these data standards for SCAT surveys and incorporating feedback from NOAA, EPA, and California. In the next year or two, we hope to offer these standards as official NOAA guidelines for gathering digital data during oiled shoreline surveys.

To learn more about how teams perform SCAT surveys, check out NOAA’s Shoreline Assessment Manual and Job Aid.



NOAA Assists with Response to Bakken Oil Train Derailment and Fire in West Virginia


On Feb. 18, 2015, response crews for the West Virginia train derailment were continuing to monitor the burning of the derailed rail cars near Mount Carbon next to the Kanawha River. The West Virginia Train Derailment Unified Command continues to work with federal, state and local agencies on the response efforts for the train derailment that occurred near Mount Carbon on February 15, 2015. (U.S. Coast Guard)

On February 16, 2015, a CSX oil train derailed and caught fire in West Virginia near the confluence of Armstrong Creek and the Kanawha River. The train was hauling 3.1 million gallons of Bakken crude oil from North Dakota to a facility in Virginia. Oil coming from the Bakken Shale oil fields in North Dakota and Montana is highly volatile, and according to an industry report [PDF] prepared for the U.S. Department of Transportation, it contains “higher amounts of dissolved flammable gases compared to some heavy crude oils.”

Of the train's 109 cars, 27 derailed on the banks of the Kanawha River, but none entered the river. Much of the oil they were carrying was consumed in the fire, which affected 19 cars, and an unknown amount of oil reached the icy creek and river. Initially, the derailed cars caused a huge fire, which burned down a nearby house and prompted the evacuation of several nearby towns. The evacuation order, which affected at least 100 residents, has now been lifted for all but five homes immediately next to the accident site.

The fires have been contained, and now the focus is on cleaning up the accident site, removing any remaining oil from the damaged train cars, and protecting drinking water intakes downstream. So far, responders have collected approximately 6,800 gallons of oily water from containment trenches dug along the river embankment.


Some oil from the derailed train cars has been observed frozen into the river ice, but no signs of oil appear downstream. (NOAA)

The area, near Mount Carbon, West Virginia, has been experiencing heavy snow and extremely cold temperatures, and the river is largely frozen. Some oil has been observed frozen into the river ice, but testing downstream water intakes for the presence of oil has so far shown negative results. NOAA has been assisting the response by providing custom weather and river forecasting, which includes modeling the potential fate of any oil that has reached the river.

The rapid growth of oil shipments by rail in the past few years has led to a number of high-profile train accidents. A similar incident in Lynchburg, Virginia, last year involved a train also headed to Yorktown, Virginia. In July 2013, 47 people were killed in the Canadian town of Lac-Mégantic, Quebec, after a train carrying Bakken crude oil derailed and exploded. NOAA continues to prepare for the emerging risks associated with this shift in oil transport in the United States.

Look for more updates on this incident from the U.S. Coast Guard News Room and the West Virginia Department of Environmental Protection.



How NOAA Oil Spill Experts Got Involved With Chemical Spill Software


The aftermath of a March 2006 explosion of hazardous cargo on the container ship M/V Hyundai Fortune. The risks of transporting hazardous chemicals on ships at sea sparked the inspiration for NOAA oil spill responders to start designing chemical spill software. (Credit: Royal Netherlands Navy)

It was late February of 1979, and the Italian container ship Maria Costa [PDF] had sprung a leak. Rough seas had damaged its hull, and the ship was now heading to Chesapeake Bay for repairs. Water was flooding the Maria Costa's cargo holds.

This was a particular problem not because of the vessel's loads of carpets and tobacco, but because it was also carrying 65 tons of pesticide. Stored in thick brown paper bags, this unregulated insecticide was leaching from the clay it was transported with into the waters now flooding the cargo holds.

Ethoprop, the major ingredient of this organophosphate insecticide, was poisonous not only to humans but also to marine life at very low concentrations (50 parts per billion in water). Waters around Norfolk, Virginia, had recently suffered another pesticide spill affecting crabs and shrimp, and the leaking Maria Costa was denied entry to Chesapeake Bay because of the risk of polluting its waters again.

During the Maria Costa incident, two NOAA spill responders boarded the ship to take samples of the contaminated water and assess the environmental threat. Even though this event predated the current organization of NOAA's Office of Response and Restoration, NOAA had been providing direct scientific support for oil spills and marine accidents since showing up as hazardous materials (hazmat) researchers during the Argo Merchant oil spill in 1976.

Blood and Water

The NOAA scientists had blood samples taken before and after spending an hour and a half aboard the damaged vessel taking samples of their own. The results indicated that water in the ship’s tanks had 130 parts per million of ethoprop and the two men’s blood showed tell-tale signs of organophosphate poisoning.

After the resolution of that incident and an ensuing hospital visit by the two NOAA scientists, the head of the NOAA Hazardous Materials Response Program, John Robinson, realized that releases of chemicals other than oil would demand a very different kind of response—and a different set of tools than currently existed.

From Book Stacks to Computer Code


John Robinson led the NOAA Hazardous Materials Response Program in its early years and helped guide the team’s pioneering development of chemical spill software tools for emergency responders. (NOAA)

Following the Maria Costa incident, Robinson began working with the Seattle Fire Department's newly formed hazmat team, which allowed NOAA to observe how local chemical incidents were managed. He then initiated four large-scale exercises around the nation to test how the scientific coordination of a federal response would integrate with local first responder activities during larger-scale chemical incidents.

It didn’t take long to understand how important it was for first responders to have the right tools for applying science in a chemical response. During the first exercise, responders laid out several reference books on the hoods of cars in an attempt to assess the threat from the chemicals involved.

Researching and synthesizing complex information from multiple sources during a stressful situation proved to be the main challenge. Because the threat from a chemical spill can evolve much more rapidly than the threat from an oil spill—a toxic cloud of chemical vapor can move and disappear within minutes—it was very clear that local efforts would always be front and center during these responses.

Meanwhile, NOAA scientists created a computer program employing a simple set of equations to predict how a toxic chemical gas would move and disperse, and they started examining how to synthesize chemical information from multiple sources into a resource first responders could trust and use quickly.
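As an illustration of what "a simple set of equations" can look like, the sketch below computes a downwind concentration with a textbook Gaussian plume formula. It is a generic example with made-up release and dispersion values, not the program NOAA wrote at the time or the later ALOHA model.

```python
# Generic Gaussian plume estimate of a ground-level gas concentration downwind.
# An illustration of "a simple set of equations," not NOAA's original program
# or the ALOHA model; the release rate, wind, and dispersion values are made up.

import math

def plume_concentration(q_g_per_s, wind_m_per_s, y_m, sigma_y_m, sigma_z_m):
    """Ground-level concentration (g/m^3) from a continuous ground-level release.

    In practice sigma_y and sigma_z grow with downwind distance and depend on
    atmospheric stability; here they are simply given.
    """
    return (q_g_per_s / (math.pi * wind_m_per_s * sigma_y_m * sigma_z_m)) * \
        math.exp(-y_m ** 2 / (2 * sigma_y_m ** 2))

# Hypothetical release: 500 g/s in a 3 m/s wind, receptor on the plume centerline
# roughly 1 km downwind (dispersion coefficients chosen for a neutral atmosphere).
c = plume_concentration(q_g_per_s=500, wind_m_per_s=3, y_m=0, sigma_y_m=70, sigma_z_m=35)
print(f"estimated concentration ~{c:.4f} g/m^3")
```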

Learning from Tragedy

Then, in December of 1984, tragedy struck Bhopal, India, when a deadly chemical cloud released from a Union Carbide plant killed more than 2,000 people. This accidental release of methyl isocyanate, a toxic chemical used to produce pesticides, and its impact on the unprepared surrounding community led the U.S. government to examine how prepared communities in the United States would be for such an accident.

By 1986, Congress, motivated by the Bhopal accident, passed the Emergency Planning and Community Right-to-Know Act (EPCRA). As a result, certain facilities dealing with hazardous chemicals must report these chemicals and any spills each year to the U.S. Environmental Protection Agency (EPA).


In the late 1970s and early 1980s, NOAA’s hazmat team wrote the first version of the ALOHA chemical plume modeling program, now part of the CAMEO software suite for hazardous material response, for this Apple II+ computer. (NOAA)

Because NOAA had already started working with first responders to address the science of chemical spill response, EPA turned to NOAA as a partner in developing tools for first responders and community awareness. From those efforts, CAMEO was born. CAMEO, which stands for Computer-Aided Management of Emergency Operations, is a suite of software products for hazardous materials response and planning.

Getting the Right Information, Right Now

The goal was to consolidate chemical information customized for each community and be able to model potential scenarios. In addition, that information needed to be readily available to the public and to first responders.

In 1986, attempting to do this on a computer was a big deal. At that time, the Internet was in its infancy and not readily accessible. Computers were large desktop affairs, but Apple had just come out with a “portable” computer. NOAA’s Robinson was convinced that with a computer on board first response vehicles, science-based decisions would become the norm for chemical preparedness and response. Today, responders can access that information from their smartphone.

NOAA and EPA still partner on the CAMEO program, which is used by tens of thousands of planners and responders around the world. Almost 30 years later, the program and technology have evolved—and continue to do so—but the vision and goal are the same: providing timely and critical science-based information and tools to people dealing with chemical accidents. Learn more about the CAMEO suite of chemical planning and response products.



What Does It Take to Clean up the Cleanup From an Oil Spill?


Bags and bags of oiled waste on the beach of Prince William Sound, Alaska, following the Exxon Valdez oil spill in March 1989. (NOAA)

Imagine spilling a can of paint on your basement floor (note: I have done this more than once). Luckily, you have some paper towels nearby, and maybe some rags or an old towel you can use to mop up the mess. When you're finished, all of those items probably will end up in the garbage, maybe along with some of the old clothes you had on.

You might not think much about the amount of waste you generated, but it was probably a lot more than the volume of paint you spilled—maybe even 10 times as much. That number is actually a rule of thumb for oil spill cleanup. The amount of waste generated is typically about 10 times the volume of oil spilled.

Our colleagues at the International Tanker Owners Pollution Federation (ITOPF) did a study on this very topic, looking at the oil-to-waste ratio for nearly 20 spills [PDF]. (A messy job, for sure.) ITOPF found that the general rule for estimating waste at oil spills still held true at about 10 times the amount spilled.
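Applied as a planning number, the rule of thumb is simple arithmetic, as in the sketch below; the multiplier is approximate and varies widely from spill to spill.

```python
# Back-of-the-envelope use of the ~10x waste rule of thumb for planning.
# The multiplier is approximate and varies widely from spill to spill.

def estimated_waste_gallons(oil_spilled_gallons, multiplier=10):
    return oil_spilled_gallons * multiplier

# A 42,000-gallon (1,000-barrel) spill -> roughly 420,000 gallons of oily waste.
print(estimated_waste_gallons(42_000))
```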

The Mess of a Cleanup


Responders collect oily debris during the M/V Westchester oil spill in the Mississippi River near Empire, Louisiana, in November 2000. (NOAA)

What kinds of wastes are we talking about? Well, there is the recovered oil itself; in many cases, this can be recycled. Then there are oily liquids, the result of skimming oil off the water surface, which tends to recover a lot of water too; this mixture has to be processed before it can be properly disposed of. Shoreline cleanup is even messier, due to the large amounts of oily sand and gravel, along with seaweed, driftwood, and other debris that can end up getting oiled and need to be removed from beaches.

Some response equipment such as hard containment booms can be cleaned and reused, but that cleaning generates oily wastes too. Then there are the many sorbent materials used to mop up oil; these sorbent pads and soft booms may not be reusable and would be sent to a landfill. Finally, don’t forget about the oil-contaminated protective clothing, plastic bags, and all of the domestic garbage generated by an army of cleanup workers at the site of a spill response.

Aiming for Less Mess

A large U.S. oil spill response will have an entire section of personnel devoted to waste management. Their job is to provide the necessary storage and waste processing facilities, figure out what can be recycled and what needs to go to a proper landfill or incineration facility, and work out how to get it all there. That includes ensuring everything complies with the necessary shipping, tracking, and disposal paperwork.

The amount of waste generated is a serious matter, particularly because oil spills often can occur in remote areas. In far-off locales, proper handling and transport of wastes is often as big a challenge as cleaning up the oil. Dealing with oily wastes is even more difficult in the Arctic and remote Pacific Islands such as Samoa because of the lack of adequate landfill space. One of the common goals of a spill response is to minimize wastes and segregate materials as much as possible to reduce disposal costs.

In a 2008 article [PDF], the U.S. Coast Guard explores in more detail the various sources of waste during an oil spill response and includes suggestions for incentivizing waste reduction during a response.
