NOAA's Response and Restoration Blog

An inside look at the science of cleaning up and fixing the mess of marine pollution



Attempting to Answer One Question Over and Over Again: Where Will the Oil Go?

The Deepwater Horizon Oil Spill: Five Years Later

This is the first in a series of stories over the coming weeks looking at various topics related to the response, the Natural Resource Damage Assessment science, restoration efforts, and the future of the Gulf of Mexico.

Oil spills raise all sorts of scientific questions, and NOAA’s job is to help answer them.

We have a saying that each oil spill is unique, but there is one question we get after almost every spill: Where will the oil go? One of our primary scientific products during a spill is a trajectory forecast, which often takes the form of a map showing where the oil is likely to travel and which shorelines and other environmentally or culturally sensitive areas might be at risk.

Oil spill responders need this information to decide which shorelines to protect with containment boom, where to stage cleanup equipment, and which areas to close to fishing or boating during a spill.

To help predict the movement of oil, we developed the computer model GNOME, which forecasts how the complex interplay of currents, winds, and other physical processes will move oil through the ocean. We update this model daily with information gathered from field observations—such as those from trained observers tasked with flying over a spill to verify its often-changing location—and with new forecasts for ocean currents and winds.
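GNOME, like most trajectory models, takes a Lagrangian approach: it represents the slick as thousands of independent particles and moves each one according to the forecast currents and winds. The sketch below illustrates the idea in highly simplified form; it is not NOAA's actual code, and the 3 percent wind-drift factor and diffusion scale used here are common rules of thumb, included only for illustration.

```python
import random

# Minimal sketch (not GNOME's actual code) of a Lagrangian trajectory
# step: each oil "particle" is moved by the local current plus a small
# fraction of the wind speed, plus a random walk standing in for
# turbulent spreading.

WIND_DRIFT_FACTOR = 0.03  # fraction of wind speed imparted to surface oil
DIFFUSION_KM = 0.05       # random-walk scale per step (turbulent spreading)

def advect(particles, current, wind, dt_hours=1.0):
    """Move each (x, y) particle one time step; velocities in km/hour."""
    (cu, cv), (wu, wv) = current, wind
    u = cu + WIND_DRIFT_FACTOR * wu  # net eastward drift
    v = cv + WIND_DRIFT_FACTOR * wv  # net northward drift
    return [(x + u * dt_hours + random.gauss(0, DIFFUSION_KM),
             y + v * dt_hours + random.gauss(0, DIFFUSION_KM))
            for x, y in particles]

# Seed 1,000 particles at the spill site and step forward 24 hours,
# feeding in each hour's forecast current and wind.
particles = [(0.0, 0.0)] * 1000
for _ in range(24):
    particles = advect(particles, current=(0.8, 0.2), wind=(10.0, -5.0))
```

In an operational model the currents and winds vary in space and time, which is why each day's new field observations and forecasts matter so much.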

Modeling a Moving Target

One of the biggest challenges we’ve faced in trying to answer this question was, not surprisingly, the 2010 Deepwater Horizon oil spill. Because of the continual release of oil—tens of thousands of barrels each day—over nearly three months, we had to prepare hundreds of forecasts as more oil entered the Gulf of Mexico each day, was moved by ocean currents and winds, and was weathered, or physically, biologically, or chemically changed, by the environment and response efforts. A typical forecast models the oil’s expected spread over the next 24, 48, and 72 hours. This task began with the first trajectory our oceanographers issued early in the morning of April 21, 2010, after being notified of the accident, and continued for the next 107 days in a row. (You can access all of the forecasts from this spill online.)

Once spilled into the marine environment, oil begins to move and spread surprisingly quickly but not necessarily in a straight line. In the open ocean, winds and currents can easily move oil 20 miles or more per day, and in the presence of strong ocean currents such as the Gulf Stream, oil and other drifting materials can travel more than 100 miles per day. Closer to the coast, tidal currents also can move and spread oil across coastal waters.

While the Deepwater Horizon drilling rig and wellhead were located only 50 miles offshore of Louisiana, it took several weeks for the slick to reach shore as shifting winds and meandering currents slowly moved the oil.

A Spill Playing on Loop

Over the duration of a typical spill, we’ll revise and reissue our forecast maps on a daily basis. These maps include our best prediction of where the oil might go and the regions of highest oil coverage, as well as what is known as a “confidence boundary.” This is a line encircling not just our best predictions for oil coverage but also a broader area on the map reflecting the full possible range in our forecasts [PDF].

Our oceanographers include this confidence boundary on the forecast maps to indicate that there is a chance that oil could be located anywhere inside its borders, depending on actual conditions for wind, weather, and currents. Why is there a range of possible locations in the oil forecasts? Well, the movement of oil is very sensitive to ocean currents and wind, and predictions of oil movement rely on accurate predictions of the currents and wind at the spill site.

In addition, sometimes the information we put into the model is based on an incomplete picture of a spill. Much of the time, the immense size of the Deepwater Horizon spill on the ocean surface meant that observations from specialists flying over the spill and even satellites couldn’t capture the full picture of where all the oil was each day.

Our inevitably inexact knowledge of the many factors feeding the trajectory model introduces an expected level of variation in its predictions, as is the case with many models. Forecasters attempt to assess all the possible outcomes for a given scenario, estimate the likelihood of each, and ultimately communicate the risks to decision makers.
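One way to picture where the confidence boundary comes from: rerun the same simplified trajectory many times, each time perturbing the winds and currents to reflect forecast uncertainty, and then draw a boundary around the full spread of outcomes. The sketch below does exactly that; it is illustrative only, not NOAA's operational method, and all the numbers are made up.

```python
import random

# Ensemble sketch of a "confidence boundary": run the trajectory many
# times with perturbed forcing, then report the envelope of outcomes.

WIND_DRIFT_FACTOR = 0.03

def run_once():
    x = y = 0.0
    # Each run draws its own current and wind "forecast errors" (km/hour).
    cu, cv = random.gauss(0.8, 0.3), random.gauss(0.2, 0.3)
    wu, wv = random.gauss(10.0, 4.0), random.gauss(-5.0, 4.0)
    for _ in range(48):  # 48 one-hour steps
        x += (cu + WIND_DRIFT_FACTOR * wu) + random.gauss(0, 0.05)
        y += (cv + WIND_DRIFT_FACTOR * wv) + random.gauss(0, 0.05)
    return x, y

endpoints = [run_once() for _ in range(500)]
xs, ys = zip(*endpoints)
# The box around every plausible outcome is a crude stand-in for the
# boundary drawn around the best-estimate trajectory on forecast maps.
print(f"48-hour envelope: x in [{min(xs):.0f}, {max(xs):.0f}] km, "
      f"y in [{min(ys):.0f}, {max(ys):.0f}] km")
```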

In the case of the Deepwater Horizon oil spill, we had the added complexity of a spill that spanned many different regions—from the deep Gulf of Mexico, where ocean circulation is dominated by the swift Loop Current, to the continental shelf and nearshore area where ocean circulation is influenced by freshwater flowing from the Mississippi River. And let’s not forget that several tropical storms and hurricanes crossed the Gulf that summer [PDF].

A big concern was that if oil got into the Loop Current, it could be transported to the Florida Keys, Cuba, the Bahamas, or up the eastern coast of the United States. Fortunately (for the Florida Keys), a giant eddy formed in the Gulf of Mexico in June 2010, nicknamed Eddy Franklin after Benjamin Franklin, who did some of the early research on the Gulf Stream. This eddy created a giant circular current that largely kept the oil contained in the Gulf of Mexico.

Some of the NOAA forecast team likened our efforts that spring and summer to the movie Groundhog Day, in which the main character is forced to relive the same day over and over again. For our team, every day involved modeling the same oil spill again and again, but with constantly changing results. Thinking back on that intense forecasting effort brings back memories packed with emotion—and exhaustion. But mostly, we recall with pride the important role our forecast team in Seattle played in answering the question “where will the oil go?”



University of Washington Helps NOAA Examine Potential for Citizen Science During Oil Spills


One area where volunteers could contribute to NOAA’s scientific efforts related to oil spills is in collecting baseline data before an oil spill happens. (Credit: Heal the Bay/Ana Luisa Ahern, CC BY-NC-SA 2.0)

This is a guest post by University of Washington graduate students Sam Haapaniemi, Myong Hwan Kim, and Roberto Treviño.

During an oil spill, how can NOAA maximize the benefits of citizen science while maintaining a high level of scientific integrity?

This was the central question that our team of University of Washington graduate students has been trying to answer for the past six months. Citizen science involves volunteers participating in scientific research, usually by gathering or analyzing quantities of data that scientists would be unable to handle on their own.

Dramatic improvements in technology—particularly the spread of smartphones—have made answering this question both more feasible and more urgent. This, in turn, has led to huge growth in public interest in oil spill response, along with increased desire and ability to help, as demonstrated during the 2007 M/V Cosco Busan and 2010 Deepwater Horizon oil spill responses.

As the scientific experts in oil spills, NOAA’s Office of Response and Restoration has a unique opportunity to engage citizens during spills and enable them to contribute to the scientific process.

What’s in it for me?

Our research team found that the potential benefits of citizen science during oil spills extend to three groups of people outside of responders.

  • First, professional researchers can benefit from having many more people involved in research. More citizen scientists available to gather data can strengthen the accuracy of observations by covering a potentially greater geographic area and by bringing in finer-grained data. In some cases, citizen scientists also can provide local knowledge of a related topic that professional researchers may not possess.
  • The second group that benefits is composed of the citizen scientists themselves. Citizen science programs provide a constructive way for the average person to help solve problems they care about, and, as part of a collective effort, their contributions become more likely to make a real impact. Through this process, the public also gets to learn about their world and connect with others who share this interest.
  • The final group that derives value from citizen science programs is society at large. When thoughtfully designed and managed, citizen science can be an important stakeholder engagement tool for advancing scientific literacy and improving public understanding of risk. Citizen science programs can provide opportunities to correct risk misconceptions, address stakeholder concerns, share technical information, and establish constructive relationships and dialogue about the science that informs oil spills and response options.

How Should This Work?


A volunteer samples mussels off of Everett, Washington, as part of the citizen science-fueled NOAA Mussel Watch Program. (Credit: Lincoln Loehr, Snohomish County Marine Resources Committee)

Recognizing these benefits, we identified three core requirements that NOAA’s Office of Response and Restoration should consider when designing a citizen science program for oil spills.

  1. Develop a program that provides meaningful work for the public and beneficial scientific information for NOAA.
  2. Create a strong communication loop or network that can be maintained between participating citizens and NOAA.
  3. Develop the program in a collaborative way.

Building on these core requirements, we identified a list of activities NOAA could consider for citizen science efforts both before and during oil spill responses.

Before a response, NOAA could establish data collection protocols for citizen scientists, partner with volunteer organizations that could help coordinate them, and manage baseline studies with the affiliated volunteers. For example, NOAA would benefit from knowing the actual numbers of shorebirds found at different times of year in areas at high risk of oil spills. This information would help NOAA better distinguish impacts to those populations in the event of an oil spill in those areas.

During a response, NOAA could benefit from citizen science volunteers’ observations and field surveys (whether open-ended or structured questionnaires), and volunteers could help process data collected during the response. In addition, NOAA could manage volunteer registration and coordination during a spill response. One simple form such contributions could take is sketched below.
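For instance, volunteers' field reports could feed a central database as structured records. The sketch below is hypothetical; the field names are illustrative placeholders, not an official NOAA protocol.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical structure for a citizen-science shoreline observation;
# field names are illustrative, not an official NOAA protocol.
@dataclass
class ShorelineObservation:
    observer_id: str
    timestamp_utc: str        # ISO 8601, e.g. "2015-04-20T14:30:00Z"
    latitude: float
    longitude: float
    oil_observed: bool
    description: str          # free text for open-ended surveys
    photo_filename: str = ""  # optional supporting photo

obs = ShorelineObservation(
    observer_id="volunteer-042",
    timestamp_utc="2015-04-20T14:30:00Z",
    latitude=29.26, longitude=-89.96,
    oil_observed=True,
    description="Scattered tar balls along ~50 m of beach",
)
print(json.dumps(asdict(obs), indent=2))  # ready to upload and aggregate
```

A fixed structure like this is what makes volunteer-gathered data easy to verify, aggregate, and trust.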

How Could This Work?

Evaluating different options for implementing these activities, we found clear trade-offs depending on NOAA’s priorities, such as resource intensity, data value, liability, and participation value. As a result, we created a decision framework, or “decision tool,” for NOAA’s Office of Response and Restoration to use when thinking about how to create a citizen science program. From there, we came up with the following recommendations:

  1. Acknowledge the potential benefits of citizen science. The first step is to recognize that citizen science has benefits for both NOAA and the public.
  2. Define goals clearly and recognize trade-offs. Having clear goals and intended uses for citizen scientist contributions will help NOAA prioritize and frame the program.
  3. Use the decision tool to move from concept to operation. The decision tool we designed will help identify potential paths best suited to various situations.
  4. Build a program that meets the baseline requirements. For any type of citizen science program, NOAA should ensure it is mutually beneficial, maintains two-way communication, and takes a collaborative approach.
  5. Start now: Early action pays off. Before the next big spill happens, NOAA can prepare for potentially working with citizen scientists by building relationships with volunteer organizations, designing and refining data collection methods, and integrating citizen science into response plans.

While there is not one path to incorporating citizen science into oil spill responses, we found that there is great potential via many different avenues. Citizen science is a growing trend and, if done well, could greatly benefit NOAA during future oil spills.

You can read our final report in full at https://citizensciencemanagement.wordpress.com.

Sam Haapaniemi, Myong Hwan Kim, and Roberto Treviño are graduate students at the University of Washington in Seattle, Washington. The Citizen Science Management Project is being facilitated through the University of Washington’s Program on the Environment. It is the most recent project in an ongoing relationship between NOAA’s Office of Response and Restoration and the University of Washington’s Program on the Environment.



After an Oil Spill, How—and Why—Do We Survey Affected Shorelines?


A team of responders surveying the shoreline of Raccoon Island, Louisiana, on May 12, 2010. They use a systematic method for surveying and describing shorelines affected by oil spills, which was developed during the Exxon Valdez spill in 1989. (U.S. Navy)

This is part of the National Ocean Service’s efforts to celebrate our role in the surveys that inform our lives and protect our coasts.

In March of 1989, oil spill responders in Valdez, Alaska, had a problem. They had a very large oil spill on their hands after the tanker Exxon Valdez had run aground on Bligh Reef in Prince William Sound.

At the time, many aspects of the situation were unprecedented—including the amount of oil spilled and the level of response and cleanup required. Further complicating their efforts were the miles and miles of remote shoreline along Prince William Sound. How could responders know which shorelines were hardest hit by the oil and where they should focus their cleanup efforts? Plus, with so many people involved in the response, what one person might consider “light oiling” on a particular beach, another might consider “heavy oiling.” They needed a systematic way to document the oil spill’s impacts on the extensive shorelines of the sound.

Out of these needs ultimately came the Shoreline Cleanup and Assessment Technique, or SCAT. NOAA was a key player in developing this formal process for surveying coastal shorelines affected by oil spills. Today, we maintain the only SCAT program in the federal government, although we have been working with the U.S. Environmental Protection Agency (EPA) to help develop similar methods for oil spills on inland lakes and rivers.

Survey Says …

SCAT aims to describe both the oil and the environment along discrete stretches of shoreline potentially affected by an oil spill. Based on that information, responders then can determine the appropriate cleanup methods that will do the most good and the least harm for each section of shoreline.

The teams of trained responders performing SCAT surveys normally are composed of representatives from the state and federal government and the organization responsible for the spill. They head out into the field, armed with SCAT’s clear methodology for categorizing the level and kind of oiling on the shoreline. This includes standardized definitions for describing how thick the oil is, its level of weathering (physical or chemical change), and the type of shoreline impacted, which may be as different as a rocky shoreline, a saltwater marsh, or flooded low-lying tundra.
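As a rough illustration of how standardized definitions remove the subjectivity described above, the sketch below maps two field measurements to the four standard SCAT oiling categories (Heavy, Moderate, Light, Very Light). The thresholds are simplified inventions for demonstration; the published SCAT job aids combine band width, percent cover, and oil thickness in a more detailed matrix.

```python
# Simplified illustration of standardized oiling categories -- the
# thresholds here are made up for demonstration; real SCAT job aids
# use a matrix of band width, percent cover, and oil thickness.
def oiling_category(percent_cover: float, thick_oil: bool) -> str:
    """Classify a shoreline segment's oiling level."""
    if percent_cover > 50 and thick_oil:
        return "Heavy"
    if percent_cover > 10:
        return "Moderate" if thick_oil else "Light"
    return "Light" if thick_oil else "Very Light"

# Two surveyors measuring the same segment now report the same answer:
print(oiling_category(percent_cover=70, thick_oil=True))   # -> Heavy
print(oiling_category(percent_cover=5, thick_oil=False))   # -> Very Light
```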

After carefully documenting these data along all possibly affected portions of shoreline, the teams make their recommendations for cleanup methods. In the process, they have to take a number of other factors into account, such as whether threatened or endangered species are present or if the shoreline is in a high public access area.

It is actually very easy to do more harm than good when cleaning up oiled shorelines. The cleanup itself—with lots of people, heavy equipment, and activity—can be just as harmful to the environment as the spilled oil, or even more so. For sensitive areas, such as a marsh, taking no cleanup action is often the best option for protecting the stability of the fragile shoreline, even if some oil remains.

Data, Data Everywhere

Having a common language for describing shoreline oiling is a critical piece of the conversation during a spill response. Without this standard protocol, spill responders would be reinventing the wheel for each spill. In the same vein, responders at NOAA are working with the U.S. EPA and the State of California to establish a common data standard for the mounds of data collected during these shoreline surveys.

Managing all of that data and turning it into useful products for the response is a lot of work. During bigger spills, multiple data specialists work around the clock to process the data collected during SCAT surveys, perform quality assurance and control, and create informational products, such as maps showing where oil is located and its level of coverage on various types of shorelines.

Data management tools such as GPS trackers and georeferenced photographs help speed up that process, but the next step is moving from paper forms used by SCAT field teams to electronic tools that enable these teams to directly enter their data into the central database for that spill.

Our goal is to create a data framework that can be translated into any tool for any handheld electronic device. These guidelines would provide consistency across digital platforms, specifying exactly what data are being collected and in which structure and format. Furthermore, they would standardize which data are being shared into a spill’s central database, whether they come from a state government agency or the company that caused the spill. This effort feeds into the larger picture for managing data during oil spills and allows everyone working on that spill to understand, access, and work with the data collected, for a long time after the spill.
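In practice, a data standard of this kind boils down to an agreed set of field names, units, and codes, so a record validates the same way whether it comes from a state agency or the responsible party. Below is a hypothetical example of what one standardized digital SCAT record might look like; the field names and codes are illustrative, not the draft standard itself.

```python
import json

# Hypothetical digital SCAT record -- field names and codes are
# illustrative only, not NOAA's draft data standard.
scat_segment = {
    "spill_id": "EX-2015-001",
    "segment_id": "LA-RACCOON-07",
    "survey_datetime_utc": "2010-05-12T15:00:00Z",
    "team": "SCAT-3",
    "shoreline_type": "sand_beach",
    "oiling": {"category": "Moderate", "percent_cover": 25},
    "cleanup_recommendation": "manual_removal",
    "reported_by": "state_agency",  # same schema regardless of data source
}
print(json.dumps(scat_segment, indent=2))  # ready for the spill's central database
```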

Currently, we are drafting these data standards for SCAT surveys and incorporating feedback from NOAA, EPA, and California. In the next year or two, we hope to offer these standards as official NOAA guidelines for gathering digital data during oiled shoreline surveys.

To learn more about how teams perform SCAT surveys, check out NOAA’s Shoreline Assessment Manual and Job Aid.



NOAA Assists with Response to Bakken Oil Train Derailment and Fire in West Virginia


On February 18, 2015, response crews continued to monitor the burning of the derailed rail cars near Mount Carbon, next to the Kanawha River. The West Virginia Train Derailment Unified Command worked with federal, state, and local agencies on the response to the train derailment that occurred near Mount Carbon on February 16, 2015. (U.S. Coast Guard)

On February 16, 2015, a CSX oil train derailed and caught fire in West Virginia near the confluence of Armstrong Creek and the Kanawha River. The train was hauling 3.1 million gallons of Bakken crude oil from North Dakota to a facility in Virginia. Oil coming from the Bakken Shale oil fields in North Dakota and Montana is highly volatile, and according to an industry report [PDF] prepared for the U.S. Department of Transportation, it contains “higher amounts of dissolved flammable gases compared to some heavy crude oils.”

Of the train’s 109 cars, 27 derailed on the banks of the Kanawha River, but none entered the river. Much of the oil they were carrying was consumed in the fire, which affected 19 train cars, but an unknown amount of oil reached the icy creek and river. Initially, the derailed train cars caused a huge fire, which burned down a nearby house and resulted in the evacuation of several nearby towns. The evacuation order, which affected at least 100 residents, has now been lifted for all but five homes immediately next to the accident site.

The fires have been contained, and now the focus is on cleaning up the accident site, removing any remaining oil from the damaged train cars, and protecting drinking water intakes downstream. So far, responders have collected approximately 6,800 gallons of oily water from containment trenches dug along the river embankment.


Some oil from the derailed train cars has been observed frozen into the river ice, but no signs of oil have appeared downstream. (NOAA)

The area, near Mount Carbon, West Virginia, has been experiencing heavy snow and extremely cold temperatures, and the river is largely frozen. Some oil has been observed frozen into the river ice, but testing downstream water intakes for the presence of oil has so far shown negative results. NOAA has been assisting the response by providing custom weather and river forecasting, which includes modeling the potential fate of any oil that has reached the river.

The rapid growth of oil shipments by rail in the past few years has led to a number of high-profile train accidents. A similar incident in Lynchburg, Virginia, last year involved a train also headed to Yorktown, Virginia. In July 2013, 47 people were killed in the Canadian town of Lac-Mégantic, Quebec, after a train carrying Bakken crude oil derailed and exploded. NOAA continues to prepare for the emerging risks associated with this shift in oil transport in the United States.

Look for more updates on this incident from the U.S. Coast Guard News Room and the West Virginia Department of Environmental Protection.



How NOAA Oil Spill Experts Got Involved With Chemical Spill Software


The aftermath of a March 2006 explosion of hazardous cargo on the container ship M/V Hyundai Fortune. The risks of transporting hazardous chemicals on ships at sea sparked the inspiration for NOAA oil spill responders to start designing chemical spill software. (Credit: Royal Netherlands Navy)

It was late February of 1979, and the Italian container ship Maria Costa [PDF] had sprung a leak. Rough seas had damaged its hull, and the ship was now heading to Chesapeake Bay for repairs. Water was flooding the Maria Costa’s cargo holds.

This was a particular problem not because of its loads of carpets and tobacco, but because the vessel was also carrying 65 tons of pesticide. Stored in thick brown paper bags, the unregulated insecticide was leaching out of the clay carrier it was mixed with and into the waters now flooding the cargo holds.

Ethoprop, the major ingredient of this organophosphate insecticide, was not only poisonous to humans but also to marine life at very low concentrations (50 parts per billion in water). Waters around Norfolk, Virginia, had recently suffered another pesticide spill affecting crabs and shrimp, and the leaking Maria Costa was denied entry to Chesapeake Bay because of the risk of polluting its waters again.

During the Maria Costa incident, two NOAA spill responders boarded the ship to take samples of the contaminated water and assess the environmental threat. Even though this event predated the current organization of NOAA’s Office of Response and Restoration, NOAA had been providing direct support to oil spills and marine accidents since showing up as hazardous materials (hazmat) researchers during the Argo Merchant oil spill in 1976.

Blood and Water

The NOAA scientists had blood samples taken before and after spending an hour and a half aboard the damaged vessel taking samples of their own. The results indicated that water in the ship’s tanks had 130 parts per million of ethoprop and the two men’s blood showed tell-tale signs of organophosphate poisoning.

After the resolution of that incident and an ensuing hospital visit by the two NOAA scientists, the head of the NOAA Hazardous Materials Response Program, John Robinson, realized that releases of chemicals other than oil would demand a very different kind of response—and a different set of tools than existed at the time.

From Book Stacks to Computer Code


John Robinson led the NOAA Hazardous Materials Response Program in its early years and helped guide the team’s pioneering development of chemical spill software tools for emergency responders. (NOAA)

Following the Maria Costa incident, Robinson began working with the Seattle Fire Department’s newly formed hazmat team, allowing NOAA to observe how local chemical incidents were managed. He then initiated four large-scale exercises around the nation to test how the scientific coordination of a federal response would integrate with local first responder activities during larger-scale chemical incidents.

It didn’t take long to understand how important it was for first responders to have the right tools for applying science in a chemical response. During the first exercise, responders laid out several reference books on the hoods of cars in an attempt to assess the threat from the chemicals involved.

Researching and synthesizing complex information from multiple sources during a stressful situation proved to be the main challenge. Because the threat from chemical spills can evolve so much more rapidly than oil spills—a toxic cloud of chemical vapor can move and disappear within minutes—it was very clear that local efforts would always be front and center during these responses.

Meanwhile, NOAA scientists created a computer program employing a simple set of equations to predict how a toxic chemical gas would move and disperse and started examining how to synthesize chemical information from multiple sources into a resource first responders could trust and use quickly.
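That "simple set of equations" belongs to a family every air-dispersion textbook covers: for a neutrally buoyant gas, a Gaussian plume model estimates downwind concentration from the release rate, wind speed, and empirical spreading coefficients. The sketch below shows the textbook formula for a continuous ground-level release under neutral atmospheric stability; it is illustrative only, not NOAA's code, and modern tools go well beyond it (for example, handling heavier-than-air gases).

```python
import math

# Textbook Gaussian plume sketch for a continuous ground-level release
# of a neutrally buoyant gas. Illustrative only.

def sigma_y(x):  # crosswind spread (m); Briggs rural coefficients, class D
    return 0.08 * x / math.sqrt(1 + 0.0001 * x)

def sigma_z(x):  # vertical spread (m)
    return 0.06 * x / math.sqrt(1 + 0.0015 * x)

def concentration(q_g_per_s, wind_m_per_s, x_m, y_m):
    """Ground-level concentration (g/m^3) at x m downwind, y m crosswind."""
    sy, sz = sigma_y(x_m), sigma_z(x_m)
    return (q_g_per_s / (math.pi * wind_m_per_s * sy * sz)
            * math.exp(-y_m**2 / (2 * sy**2)))

# e.g., a 500 g/s release in a 3 m/s wind, 1 km directly downwind:
print(f"{concentration(500, 3.0, 1000, 0):.2e} g/m^3")
```

Fast, few-input calculations like this are what made it plausible to put useful science in front of first responders within minutes.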

Learning from Tragedy

Then, in December of 1984, tragedy struck Bhopal, India, when a deadly chemical cloud released from a Union Carbide plant killed more than 2,000 people. This accidental release of methyl isocyanate, a toxic chemical used to produce pesticides, and its impact on the unprepared surrounding community led the U.S. government to examine how communities in the United States would have been prepared for such an accident.

In 1986, Congress, motivated by the Bhopal accident, passed the Emergency Planning and Community Right-to-Know Act (EPCRA). As a result, certain facilities dealing with hazardous chemicals must report those chemicals and any spills each year to the U.S. Environmental Protection Agency (EPA).


In the late 1970s and early 1980s, NOAA’s hazmat team wrote the first version of the ALOHA chemical plume modeling program, now part of the CAMEO software suite for hazardous material response, for this Apple II+ computer. (NOAA)

Because NOAA had already started working with first responders to address the science of chemical spill response, EPA turned to NOAA as a partner in developing tools for first responders and community awareness. From those efforts, CAMEO was born. CAMEO, which stands for Computer-Aided Management of Emergency Operations, is a suite of software products for hazardous materials response and planning.

Getting the Right Information, Right Now

The goal was to consolidate chemical information customized for each community and be able to model potential scenarios. In addition, that information needed to be readily available to the public and to first responders.

In 1986, attempting to do this on a computer was a big deal. At that time, the Internet was in its infancy and not readily accessible. Computers were large desktop affairs, but Apple had just come out with a “portable” computer. NOAA’s Robinson was convinced that with a computer on board first response vehicles, science-based decisions would become the norm for chemical preparedness and response. Today, responders can access that information from their smartphone.

NOAA and EPA still partner on the CAMEO program, which is used by tens of thousands of planners and responders around the world. Almost 30 years later, the program and technology have evolved—and continue to do so—but the vision and goal are the same: providing timely and critical science-based information and tools to people dealing with chemical accidents. Learn more about the CAMEO suite of chemical planning and response products.



What Does It Take to Clean up the Cleanup From an Oil Spill?


Bags and bags of oiled waste on the beach of Prince William Sound, Alaska, following the Exxon Valdez oil spill in March 1989. (NOAA)

Imagine spilling a can of paint on your basement floor (note: I have done this more than once). Luckily, you have some paper towels nearby, and maybe some rags or an old towel you can use to mop up the mess. When you’re finished, all of those items probably will end up in the garbage—maybe along with some of the old clothes you had on.

You might not think much about the amount of waste you generated, but it was probably a lot more than the volume of paint you spilled—maybe even 10 times as much. That number is actually a rule of thumb for oil spill cleanup. The amount of waste generated is typically about 10 times the volume of oil spilled.

Our colleagues at the International Tanker Owners Pollution Federation (ITOPF) did a study on this very topic, looking at the oil-to-waste ratio for nearly 20 spills [PDF]. (A messy job, for sure.) ITOPF found that the general rule for estimating waste at oil spills still held true at about 10 times the amount spilled.
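To put the rule of thumb into numbers, here is a trivial back-of-the-envelope sketch; the 10x factor is the ITOPF estimate cited above, and the example spill volume is arbitrary.

```python
def estimated_waste_gallons(oil_spilled_gallons: float, factor: float = 10.0) -> float:
    """Rule of thumb: cleanup waste is roughly 10x the volume of oil spilled."""
    return factor * oil_spilled_gallons

# A 42,000-gallon (1,000-barrel) spill implies on the order of
# 420,000 gallons of oily waste to store, transport, and dispose of.
print(f"{estimated_waste_gallons(42_000):,.0f} gallons of waste")
```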

The Mess of a Cleanup

Cleanup workers collect oily debris in bags on the banks of the Mississippi River.

Responders collect oily debris during the M/V Westchester oil spill in the Mississippi River near Empire, Louisiana, in November 2000. (NOAA)

What kinds of wastes are we talking about? Well, there is the recovered oil itself. In many cases, this can be recycled. Then there are oily liquids. These are the result of skimming oil off of the water surface, which tends to recover a lot of water too, and this mixture has to be processed before it can be disposed of properly. Shoreline cleanup is even messier, due to the large amounts of oily sand and gravel, along with seaweed, driftwood, and other debris that can end up getting oiled and need to be removed from beaches.

Some response equipment such as hard containment booms can be cleaned and reused, but that cleaning generates oily wastes too. Then there are the many sorbent materials used to mop up oil; these sorbent pads and soft booms may not be reusable and would be sent to a landfill. Finally, don’t forget about the oil-contaminated protective clothing, plastic bags, and all of the domestic garbage generated by an army of cleanup workers at the site of a spill response.

Aiming for Less Mess

A large U.S. oil spill response will have an entire section of personnel devoted to waste management. Their job is to provide the necessary storage and waste processing facilities, figure out what can be recycled, what will need to be taken to a proper landfill or incineration facility, and how to get it all there. That includes ensuring everything is in compliance with the necessary shipping, tracking, and disposal paperwork.

The amount of waste generated is a serious matter, particularly because oil spills often can occur in remote areas. In far-off locales, proper handling and transport of wastes is often as big a challenge as cleaning up the oil. Dealing with oily wastes is even more difficult in the Arctic and remote Pacific Islands such as Samoa because of the lack of adequate landfill space. One of the common goals of a spill response is to minimize wastes and segregate materials as much as possible to reduce disposal costs.

In a 2008 article [PDF], the U.S. Coast Guard explores in more detail the various sources of waste during an oil spill response and includes suggestions for incentivizing waste reduction during a response.



Latest NOAA Mapping Software Opens up New Possibilities for Emergency Responders

This is a guest post by emergency planner Tom Bergman.


NOAA and EPA’s MARPLOT mapping software was designed for emergency responders and planners dealing with chemical spills. However, its features lend it to a host of other uses, from search and rescue after a tornado to dealing with wildfires. (NOAA National Weather Service)

For 20 years, thousands of emergency planners and responders have used the MARPLOT mapping software to respond to hazardous chemical spills. But creative MARPLOT users have also employed the program for a wide range of other purposes, including dispatching air ambulances and helping identify a serial arsonist.

MARPLOT is the mapping component of a suite of software programs called CAMEO, jointly developed by NOAA’s Office of Response and Restoration and the U.S. Environmental Protection Agency to help emergency planners and responders deal with chemical spills.

These agencies have just released a new version of MARPLOT (version 5.0). MARPLOT 5 offers a host of new and improved capabilities, which translate to more mapping options, greater flexibility, and even more powerful data searching capabilities.

On the Grid

To illustrate a few of the new capabilities of MARPLOT 5, let’s imagine that an EF2/EF3 tornado is blowing through McClain County, Oklahoma. McClain County is a mostly rural area, with only three small towns. For this scenario, we will assume the tornado passes through the small town of Blanchard, Oklahoma.

Immediately following the tornado, first responders will conduct initial damage surveys of the affected area. Generally, the Incident Command, which is the multi-agency team responsible for managing the emergency response, will want to divide the area the tornado impacted into a “grid” and assign teams to survey specific areas of it. MARPLOT 5 has a new “gridding” tool, which allows those in an Incident Command to determine and display the various survey zones.
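Conceptually, a gridding tool divides the bounding box of the affected area into labeled cells that can be assigned to survey teams. Here is a minimal sketch of that operation; it is illustrative only, not MARPLOT's internal implementation, and the coordinates are made-up values near Blanchard.

```python
# Sketch of what a "gridding" tool does: divide a bounding box into
# labeled cells that can be assigned to survey teams.
def make_grid(lon_min, lat_min, lon_max, lat_max, n_cols, n_rows):
    dlon = (lon_max - lon_min) / n_cols
    dlat = (lat_max - lat_min) / n_rows
    cells = {}
    for col in range(n_cols):
        for row in range(n_rows):
            cells[(col + 1, row + 1)] = (  # labeled like "Grid Box 2, 4"
                lon_min + col * dlon, lat_min + row * dlat,
                lon_min + (col + 1) * dlon, lat_min + (row + 1) * dlat,
            )
    return cells

grid = make_grid(-97.70, 35.10, -97.60, 35.17, n_cols=5, n_rows=4)
west, south, east, north = grid[(2, 4)]
print(f"Grid Box 2, 4 spans lon [{west:.3f}, {east:.3f}], "
      f"lat [{south:.4f}, {north:.4f}]")
```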

In the Ready Files

Fortunately, McClain County is well-prepared to deal with this emergency. The county already has a complete list of addresses for the affected area in the proper file format for working in maps (E911 address point shape files) and has imported them into MARPLOT 5 before the tornado hit. In addition, McClain Emergency Management has compiled information such as locations with chemicals stored on site, homes or businesses with fortified safe rooms, and any special populations such as those with impaired mobility and made that data available in MARPLOT 5. Having this information at their fingertips helps the Incident Command prioritize resources and search areas in the affected zones, as well as keep survey and search-and-rescue teams safe.

The latest version of the software allows users to upload any .png image file to serve as a map symbol. This feature provides critical information to responders in a customizable and easily interpreted way. Notice in the screen shot of the MARPLOT map below that the locations of safe rooms, E911 address points, and residences of oxygen-dependent and mobility-impaired persons are clearly identified by specific symbols. The user can select any map symbol and see an associated information box displayed for that symbol.


Close-up of grid zones for a hypothetical tornado. The map shows safe rooms, 911 address points, and special populations displayed in MARPLOT 5. (NOAA)

In MARPLOT, any square of the grid can be selected and “searched” for information associated with that area of the map, which is then displayed in the latest version of MARPLOT as a “spreadsheet.” This spreadsheet can be printed and given to the teams surveying impacted areas. Below is an example of an information spreadsheet for E911 address points in a selected one-square-mile grid zone (Grid Box 2, 4).


Address points in the selected Grid Box 2, 4, displayed as a spreadsheet in MARPLOT 5, which responders can print out and take on surveys of damaged areas. (NOAA)

With this feature, emergency responders have the information they need contained in both a map and a spreadsheet as they conduct their initial damage survey. In this example, responders assigned to survey Grid Box 2, 4 already know they must clear 142 address points in the area, six of which have safe rooms, two of which have mobility-impaired residents, and one with an oxygen-dependent person.
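Under the hood, that search is a simple spatial query: keep every mapped point whose coordinates fall inside the selected cell, then tally the attributes survey teams care about. A minimal sketch, with hypothetical field names and coordinates:

```python
# Sketch of the "search a grid square" operation: select every mapped
# point inside a chosen cell and tally key attributes. Field names and
# coordinates are illustrative only.
cell = (-97.68, 35.1525, -97.66, 35.17)  # (west, south, east, north) of a cell

points = [
    {"lon": -97.676, "lat": 35.154, "safe_room": True},
    {"lon": -97.672, "lat": 35.156, "oxygen_dependent": True},
    # ... one record per imported E911 address point
]

west, south, east, north = cell
hits = [p for p in points
        if west <= p["lon"] < east and south <= p["lat"] < north]
print(f"{len(hits)} address points to clear; "
      f"{sum(1 for p in hits if p.get('safe_room'))} with safe rooms; "
      f"{sum(1 for p in hits if p.get('oxygen_dependent'))} oxygen-dependent")
```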

Furthermore, the emergency responders in this scenario were able to accomplish all of these operations in MARPLOT without any access to the Internet or cloud servers. And the software is 100 percent free.

This is a very simple example of new ways MARPLOT 5 may be implemented by emergency planners and responders across the country. There are a host of other new operations in version 5—including real-time weather via web mapping service (WMS) access—that could be used for dealing with wildfires, search and rescue operations, floods, hazardous material releases, resource management, manhunts … In fact, MARPLOT could be used in just about any type of situation where customizable and user-operated mapping might be helpful.

Learn more about and download the latest version of MARPLOT.

Tom Bergman is the author of the CAMEO Companion and host of the www.cameotraining.org website. Tom is the EPCRA (Emergency Planning and Community Right-to-Know Act) Tier 2 Program Manager for the State of Oklahoma and has been a CAMEO trainer for many years. He has conducted CAMEO training courses in Lithuania, Poland, England, Morocco, and 45 U.S. states.
