The Hazardous Weather Testbed Experimental Warning Program was such a rewarding experience, and I feel so grateful that I got to participate in the first year that they ever invited broadcast meteorologists.
The final day of the Experimental Warning Program was our presentation day. On Thursday evening, the three forecasters from NWS WFOs (weather forecast offices) and I reviewed all the storm situations we had encountered during the week. We talked about all the different forecast products that were available to us, how we used them, what worked well and why. Finally, we put together a PowerPoint presentation and hosted a webinar that was available for all NOAA employees to access.
As I've mentioned before, all of the products available to us were intended to make it easier for forecasters to spot severe thunderstorms and/or tornadoes, which would help NWS forecasters issue severe thunderstorm warnings and tornado warnings earlier and with more accuracy. That result can only come about if the products work the way they're intended to, though! Keep reading for a brief description of each of the products I used during the experiments. I referenced most of these products in posts on my blog, Facebook page, and/or Twitter account earlier in the week.
1. First, the NearCast model, which was developed by NOAA in collaboration with CIMSS (the Cooperative Institute for Meteorological Satellite Studies) at the University of Wisconsin. NearCast produces 9-hour forecasts, and each forecast is updated every hour. That means this model is fast enough to take in new information (initial conditions) every hour and still come up with a 9-hour forecast. The parameters that go into NearCast won't be familiar to anyone who hasn't taken an atmospheric thermodynamics course. The forecasts are for precipitable water, which is a measure of how much rain a cloud could potentially produce, and equivalent potential temperature (Theta-e), which can be a proxy for atmospheric instability because its value is related to the amount of heat and moisture in the air. If NearCast becomes operational (i.e., used in real-world forecasting), its data will come from the new GOES-R satellite, which NASA plans to launch late next year.
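For anyone curious what those two quantities actually are, here is a rough sketch in Python of how they can be computed from a sounding. This is my own illustration using textbook approximations (a trapezoid integration for precipitable water, and a simplified equivalent potential temperature formula), not NearCast's actual code, and the example sounding numbers are made up.

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO_W = 1000.0  # density of liquid water, kg/m^3
LV = 2.5e6      # latent heat of vaporization, J/kg
CP = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)

def precipitable_water(pressures_pa, mixing_ratios):
    """Precipitable water (mm): integrate the water-vapor mixing ratio
    over pressure with the trapezoid rule. Levels are ordered from the
    surface upward, so pressure decreases along the list."""
    total = 0.0
    for i in range(len(pressures_pa) - 1):
        dp = pressures_pa[i] - pressures_pa[i + 1]
        r_mean = 0.5 * (mixing_ratios[i] + mixing_ratios[i + 1])
        total += r_mean * dp
    return total / (RHO_W * G) * 1000.0  # depth in meters -> millimeters

def theta_e(temp_k, mixing_ratio, pressure_hpa):
    """Simplified equivalent potential temperature (K): warm the parcel
    by the latent heat its moisture would release, then bring it
    dry-adiabatically to 1000 hPa."""
    return (temp_k + LV * mixing_ratio / CP) * (1000.0 / pressure_hpa) ** 0.286

# Made-up sounding: four levels from 1000 hPa up to 500 hPa
pw = precipitable_water([100000.0, 85000.0, 70000.0, 50000.0],
                        [0.014, 0.010, 0.006, 0.002])
print(round(pw, 1))                            # a moist column, roughly 39 mm
print(round(theta_e(300.0, 0.014, 850.0), 1))  # roughly 351 K
```

The takeaway is that higher Theta-e air carries more heat and moisture, which is why watching where plumes of it are headed hints at where storms can fire.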
NearCast did a great job of helping us see where new convective cells were going to develop, but it was not helpful in areas where cloud cover was already in place. The model developers knew about this limitation, but they were happy to hear that the product was useful in forecasting the placement of thunderstorms before they actually developed. The NWS forecasters said they would use NearCast when creating POPs (probability of precipitation) for forecast zones in their county warning area.
2. The vLAPS (Variational Local Analysis & Prediction System) model came to us from the NOAA Earth System Research Laboratory. vLAPS uses a relatively small grid (800 km x 800 km) to produce a forecast. One of the unique things about this model is that it can be moved, almost on the fly, to a specific area with the potential for severe weather. The other forecasters and I really liked this model because its high-resolution output showed detailed results for important quantities like CAPE (Convective Available Potential Energy, for storms), helicity (for tornadoes and occasionally straight-line winds), and maximum column reflectivity (for rain). The problem with vLAPS, however, is that it takes a couple of hours to adjust whenever it is moved to a new location. This means the first 2-3 hours of data from the model are not usable, and by then an NWS forecaster might have missed an important feature or a developing storm cell.
3. ProbSevere is another CIMSS product, developed in collaboration with NOAA. This program was the most intuitive of the forecast products we used this week. The model draws an outline around areas of convection that meet certain criteria for becoming severe. It was outstanding at providing advance warning for storm cells that dumped severe-sized hail, often giving us a lead of 10 minutes or more before we saw a corresponding Severe Thunderstorm Warning issued by the local NWS WFO. However, we discovered over the course of the week that a more accurate name for this model might be "ProbSevereHail". While it was great at finding hail cores, the model was notorious all week for missing severe thunderstorms with damaging wind gusts.
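ProbSevere blends several data sources into a single probability for each storm cell. I can't show its actual statistical formulation here, but a naive-Bayes-style fusion gives the general flavor of how separate predictors can be combined into one number. Everything below, including the predictor values and the 5% base rate, is my own illustrative assumption, not the real model:

```python
def combine_probs(probs, prior=0.05):
    """Naive-Bayes-style fusion: each entry in `probs` is an estimate of
    P(severe | one predictor), and `prior` is the climatological base rate.
    Each predictor contributes a likelihood ratio to the combined odds."""
    prior_odds = prior / (1.0 - prior)
    combined_odds = prior_odds
    for p in probs:
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # keep the odds finite
        combined_odds *= (p / (1.0 - p)) / prior_odds
    return combined_odds / (1.0 + combined_odds)

# Hypothetical predictors: satellite growth rate, radar hail signal, shear.
# Three individually weak signals still stack into a notable probability.
print(round(combine_probs([0.2, 0.3, 0.1]), 2))  # ~0.81
```

The stacking behavior is the point: several modest pieces of evidence about the same cell can add up to a confident severe-weather probability, which matches how quickly ProbSevere's outlines escalated on hail cores.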
4. The Convective Initiation (CI) tool was developed by the University of Alabama in Huntsville in collaboration with NASA. It uses a color-coded system to identify clouds that are likely to become convective; that is, clouds that are likely to grow into tall storm clouds. We all found this system very easy to read, and it was very accurate in pinpointing clouds in a rapid growth stage. However, the display was a bit too busy, since the probability scale was broken into 10% increments. For instance, yellow corresponded to a CI probability of 60-70%, while orange corresponded to values of 70-80%. The forecaster from the Buffalo, NY WFO suggested that this level of detail isn't necessary, and none of us disagreed with him. The forecaster visiting from the Louisville, KY WFO also mentioned that this product was practically useless once the sun went down. I noticed that when the CI product was used in areas with big elevation changes, the results were very erratic. Both times I noticed this pattern, the storm cell ended up producing flash flooding within an hour or two. One of those instances was on Tuesday in the Shenandoah Valley, so it was of even greater interest to me.
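The color-scale debate is easy to picture in code. The bin boundaries and color names below are purely illustrative (not UAH's actual palette), contrasting the 10% bins we saw with a coarser three-level scale along the lines the Buffalo forecaster had in mind:

```python
# Illustrative fine-grained scale: one color per 10% probability bin
FINE_BINS = [
    (0, "gray"), (10, "blue"), (20, "cyan"), (30, "green"),
    (40, "lime"), (50, "yellow-green"), (60, "yellow"),
    (70, "orange"), (80, "red"), (90, "magenta"),
]

# A coarser, easier-to-read alternative
COARSE_BINS = [(0, "low"), (40, "medium"), (70, "high")]

def color_for(prob_pct, bins):
    """Return the label of the highest bin whose lower bound the
    probability meets or exceeds."""
    label = bins[0][1]
    for lower, name in bins:
        if prob_pct >= lower:
            label = name
    return label

print(color_for(65, FINE_BINS))    # yellow (the 60-70% bin)
print(color_for(65, COARSE_BINS))  # medium
```

Ten distinct colors on a fast-moving satellite loop is a lot to decode at a glance, which is exactly the "too busy" complaint: the coarse scale communicates the same go/no-go information with far less squinting.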
There were other forecast products available to us throughout the week, but I mostly stuck with the four mentioned above. In part, this was because some of the programs weren't available in the zones I was experimenting with. The OUN WRF is one example. That model was developed and implemented by the WFO in Norman, OK (also the home of the Hazardous Weather Testbed), but we didn't have any severe weather in Oklahoma on Tuesday, Wednesday, or Thursday (what are the odds??), so it wouldn't have done me any good to fiddle around with it. I would also have liked to take a crack at the Pseudogeostationary Model (pGSM) lightning detection and tracking tools. However, they rely on the presence of a lightning network, and those networks cover just eight regions with a radius of about 100 miles each. So you kind of have to get lucky with lightning to use these tools. I did get to use the Flash Extent Density and Lightning Jump Detection programs a bit on Thursday when severe storms were firing up in the DC Metro area, since one of the lightning networks is in DC, but my forecasting partner worked with the program more extensively (see image below).
As a broadcaster "guinea pig", it was a bit challenging at first to figure out where I fit into this week-long program. But in the end, I got a lot out of it by asking a lot of questions and by keeping my eyes and ears open.