Term: what does error growth tell you about predictability?
Definition: It tells you at what lead time predictability is lost.
        
Term: when has an ensemble forecast (EF) done well in representing deterministic error growth?
Definition: When the average error across the ensemble's spectrum of solutions matches the average error of the control forecast.
        
Term: ensemble spread
Definition: The square root of the ensemble variance (i.e., the standard deviation).
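For reference, the standard sample form (notation mine, not from the card; some centers divide by M rather than M-1), for M members x_i with ensemble mean x̄:

```latex
\text{spread} = \sqrt{\frac{1}{M-1}\sum_{i=1}^{M}\left(x_i-\bar{x}\right)^{2}},
\qquad \bar{x} = \frac{1}{M}\sum_{i=1}^{M}x_i
```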
        
Term:
Definition: The period when the average error is low and a single-valued forecast is an appropriate and efficient predictor for decision making.
        
Term:
Definition: The period that begins once the average error becomes significant and can no longer be safely ignored, since it may impact decisions.
        
Term: in terms of analysis, how can predictability improve?
Definition: Improve the analysis (better observation networks, better assimilation schemes) and improve the model (time resolution, model resolution). The average error at all lead times (tau) will then decrease, improving predictability.
        
Term: is error growth flow dependent?
Definition: Yes; error growth depends on the flow of the day, so predictability varies from case to case.
        
Term: what are the three criteria for an EF to be successful?
Definition: 1) Representation of analysis uncertainty: ensemble ICs must be formulated such that the differences among the ICs represent analysis error, and the true analysis is a random sample from the ensemble analysis PDF. 2) Representation of model uncertainty: the variance of model error must be accounted for. 3) Sufficient ensemble size: enough members to produce a thorough statistical sampling of the forecast PDF.
        
Term: how do you maximize utility?
Definition: Create the sharpest possible reliable PDF that enables the user to take advantage of its value: reduce forecast uncertainty and accurately estimate whatever uncertainty remains.
        
Term: pyramid: major components of an EPS
Definition: Foundation (deterministic modeling): the source of forecast uncertainty; observations, DA, boundary conditions, model resolution, deterministic verification. Core (forecast generation): calculating ICs and running all members; verification of skill and postprocessing calibration of the raw forecast. Interface (input/output): optimizes user decisions; products, marketing, verification of value.
        
Term: 2 ways of communicating forecast info
Definition: a) Provide info about the forecast PDF: 1) use the ensemble mean to improve deterministic skill and maximize predictability; 2) use the spread to predict deterministic skill (forecast confidence, range of possibilities); 3) use the full forecast PDF to predict the probability of an event occurring. b) Provide decision recommendations using principles of risk analysis, in which forecast probability is compared to risk tolerance; this requires input from the user.
        
Term: probabilistic verification
Definition: Evaluates an ensemble with respect to the quality of its forecast probabilities.
        
Term: skill score
Definition: Provides a standardized measure of performance relative to a reference forecast; the best possible value is 1.0.
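The generic skill-score form this card describes, written out for reference (symbols mine):

```latex
SS = \frac{S - S_{\text{ref}}}{S_{\text{perf}} - S_{\text{ref}}}
```

where S is the forecast system's score, S_ref the reference forecast's score, and S_perf the perfect score; SS = 1.0 is perfect, and SS = 0 means no improvement over the reference.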
        
Term: how is uncertainty about a result calculated?
Definition: It is estimated with a confidence interval (CI, or error bar) and compared. For two results to be considered different (to a chosen degree of confidence set by the CI), their error bars, i.e., the ranges of possible true values of the metric, must not overlap.
        
Term: what is bootstrapping and when is it used?
Definition: Used to estimate error bars for metrics involving binomial sums. It is designed to estimate uncertainty in sampling statistics, such as the mean or variance. The basic process is to reproduce a data set (of verification results) having M values by randomly drawing values with replacement from the original data set. The process is repeated many times to yield a distribution of possibilities for the metric, from which a 95% CI is formed.
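A minimal Python sketch of this resampling loop (the metric and the verification data below are placeholders, not from the card):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(values, metric=np.mean, n_resamples=10_000, alpha=0.05):
    """Estimate a (1 - alpha) confidence interval for a verification
    metric by redrawing the M values with replacement many times."""
    values = np.asarray(values)
    m = len(values)
    stats = np.array([metric(rng.choice(values, size=m, replace=True))
                      for _ in range(n_resamples)])
    # The spread of the resampled metric gives the error bar.
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical per-case verification scores.
sample = rng.normal(loc=0.15, scale=0.05, size=200)
print("95% CI for the mean:", bootstrap_ci(sample))
```

Two results whose intervals from this procedure do not overlap can be declared different at the chosen confidence level.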
        
Term: dataset dilution
Definition: Dilution of the verification dataset with trivial forecast cases that have little or no uncertainty.
        
Term: how do you mitigate dataset dilution?
Definition: Remove the trivial cases, i.e., those where: 1) the event either never or always occurred, or 2) the event was forecast to never or always occur.
        
Term: ensemble resolution (not model resolution)
Definition: The ability to distinguish events from non-events.
        
Term: value of information (VOI)
Definition: Determined by comparing the consequences realized by following the forecasts (believing the info and taking appropriate action) vs. ignoring the forecasts. A negative VOI (i.e., reduced costs or losses from following the forecasts) is good.
        
Term: data assimilation (DA)
Definition: Combines a short-term forecast with observations. It decides whom to trust, the forecast or the obs, based on the perceived potential error, defined by the likely variability of the error: larger error variance = less trust. It also considers covariance info between the same variable at nearby locations as well as between variables.
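A textbook scalar illustration of this variance-based weighting (a simplification, not the card's text): combining a forecast x_f with an observation y of the same quantity,

```latex
x_a = x_f + K\,(y - x_f), \qquad K = \frac{\sigma_f^{2}}{\sigma_f^{2} + \sigma_o^{2}}
```

so a large observation-error variance σ_o² shrinks the gain K and shifts trust toward the forecast, while a large forecast-error variance σ_f² does the opposite.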
        
Term: when does filter divergence occur?
Definition: When EnKF ensemble members are "dishonest" about how much they should be trusted: the members are too close together in phase space and underrepresent the true forecast error variance, giving inappropriately high weight to the forecast info and producing a poor analysis. The resulting analysis and set of ensemble ICs are fed back to the ensemble, producing an even worse set of forecasts for the next cycle, which is again trusted too highly.
        
Term: analysis PDF
Definition: The distribution of possible current states of the atmosphere; it exists purely because of our limited capability to observe and analyze the atmosphere. It appears as a cloud of states within a small region of phase space, dense in the middle and thinning outward. It is wider for a less accurate or less precise analysis, which can come from the use of fewer observations or other DA deficiencies.
        
Term: forecast PDF
Definition: Similar to the analysis PDF, but it reflects forecast uncertainty; any random sample from the forecast PDF is a possible future state.
        
Term: true forecast PDF
Definition: The theoretical distribution that an ensemble attempts to reproduce. It includes all possible states for a given analysis/forecast system and a particular analysis and lead time. Its size and shape depend on the specific limitations of both the analysis and the NWP model.
        
Term: random perturbations (Monte Carlo method)
Definition: The method of generating ICs by adding scaled, random noise to the best-guess analysis.
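A minimal Python sketch of this method (the field, member count, and noise amplitude are illustrative; real systems use spatially correlated noise rather than the white noise here):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_ics(analysis, n_members, noise_scale):
    """Build ensemble ICs by adding scaled random noise to the
    best-guess analysis; noise_scale mimics the analysis-error size."""
    noise = rng.standard_normal((n_members,) + analysis.shape)
    return analysis + noise_scale * noise

analysis = np.linspace(280.0, 300.0, 50)          # toy 1-D temperature field (K)
members = perturbed_ics(analysis, n_members=20, noise_scale=0.5)
print(members.shape)                               # (20, 50): one IC per member
```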
        
Term: perturbations
Definition: Any differences between ensemble members designed to represent likely errors and thus simulate forecast uncertainty. Perturbation methods and techniques are very diverse and complex. 1) IC perturbations: small changes to the control analysis that represent possible errors from obs and DA. 2) Model perturbations: changes in the workings of the NWP model that simulate likely errors due to the model, such as numerical truncation, parameterization, and subgrid-scale processes.
        
Term: chaos
Definition: Sensitivity to ICs due to nonlinear effects: small perturbations lead to large changes. The system is confined to a strange attractor within phase space, with an infinite number of possibilities.
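The Lorenz (1963) system is the classic demonstration; a short Python sketch (forward Euler for brevity; a real integration would use a higher-order scheme):

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    tend = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * tend

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])        # tiny IC perturbation
for step in range(1, 3001):
    a, b = lorenz63_step(a), lorenz63_step(b)
    if step % 1000 == 0:
        # Separation grows by orders of magnitude, then saturates
        # at the size of the attractor.
        print(step, np.linalg.norm(a - b))
```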
        
Term:
Definition: The system never repeats itself, but familiar states often reappear.
        
Term: capture
Definition: A verifying observation of a forecast is bounded by the ensemble members.
        
Term: statistical consistency
Definition: The estimated PDF from a well-designed ensemble mirrors the true forecast PDF, and does so by accounting for all sources of uncertainty in the analysis/modeling system.
        
Term: post-processing calibration
Definition: Makes up for ensemble deficiencies and improves statistical consistency, which is critical to skill and utility.
        
Term: spread-skill (dispersion) diagram
Definition: Designed to investigate statistical consistency by comparing the mean squared error (MSE) of the ensemble mean with the average ensemble variance over all forecast lead times.
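A toy Python sketch of the comparison at a single lead time (the data are synthetic and well calibrated by construction; the (M+1)/M correction factor is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def consistency_check(ens, obs):
    """Compare the MSE of the ensemble mean against the average ensemble
    variance; for a consistent ensemble the two are comparable."""
    mse = np.mean((ens.mean(axis=1) - obs) ** 2)
    avg_var = np.mean(ens.var(axis=1, ddof=1))
    return mse, avg_var

# Truth and members drawn from the same PDF around a common signal.
signal = rng.standard_normal(500)
obs = signal + rng.standard_normal(500)
ens = signal[:, None] + rng.standard_normal((500, 20))
print(consistency_check(ens, obs))     # two comparable numbers
```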
        
Term: dispersion
Definition: The rate of change of ensemble variance with increasing tau; it represents ensemble error growth.
        
Term: VRH (verification rank histogram) diagrams (3 types of curves)
Definition: U-shaped indicates underspread; dome-shaped (an inverted U) indicates overspread; uniform (flat) is a necessary, but not sufficient, condition for statistical consistency.
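A minimal Python construction of the histogram itself (synthetic data; with obs and members drawn from the same PDF the result is roughly flat):

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_histogram(ens, obs):
    """Count, for each case, how many members fall below the
    observation, then tally the ranks 0..M over all cases."""
    ranks = np.sum(ens < obs[:, None], axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

obs = rng.standard_normal(2000)
ens = rng.standard_normal((2000, 9))   # 9 members, same PDF as the obs
print(rank_histogram(ens, obs))        # ~200 per bin across 10 bins
```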
        
Term: democratic voting
Definition: Each member gets an equal vote on what will occur, and the votes are tallied to get the percentage of members in favor of the event occurring.
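In code, the tally is one line (the member values and threshold are illustrative):

```python
import numpy as np

def democratic_probability(members, threshold):
    """Event probability = fraction of members 'voting yes',
    i.e., exceeding the event threshold."""
    return np.mean(members > threshold)

members = np.array([3.0, 7.5, 12.1, 0.4, 9.9, 15.2, 6.8, 2.2])  # e.g., mm of rain
print(democratic_probability(members, threshold=5.0))            # 5/8 = 0.625
```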
        
Term: uniform ranks
Definition: Gaps among the ordered members are considered to represent evenly divided quantiles of a continuous probability distribution.
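One plausible Python reading of this method (my interpretation: each of the M+1 gaps between ordered members carries probability 1/(M+1), with linear interpolation inside a gap; the open-ended tails are collapsed for simplicity, and member values are assumed distinct):

```python
import numpy as np

def uniform_ranks_cdf(members, threshold):
    """P(value <= threshold) under the uniform-ranks assumption."""
    x = np.sort(members)
    m = len(x)
    if threshold <= x[0]:
        return 0.0                       # open lower tail collapsed
    if threshold >= x[-1]:
        return 1.0                       # open upper tail collapsed
    i = np.searchsorted(x, threshold)    # gap index: x[i-1] < threshold <= x[i]
    frac = (threshold - x[i - 1]) / (x[i] - x[i - 1])
    return (i + frac) / (m + 1)

members = np.array([1.0, 2.0, 4.0, 7.0])
print(uniform_ranks_cdf(members, 3.0))   # (2 + 0.5) / 5 = 0.5
```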
        
Term: kernel dressing
Definition: Constructs a complex PDF with no constraint on its shape by dressing each member with a PDF and then combining the info. Performs well in multi-model situations.
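A Gaussian-kernel version in Python, as one common choice (the member values and kernel width are illustrative):

```python
import numpy as np

def dressed_pdf(grid, members, width):
    """Dress each member with a Gaussian kernel and average the
    kernels; the resulting PDF can take any shape (e.g., bimodal)."""
    z = (grid[:, None] - members) / width
    kernels = np.exp(-0.5 * z**2) / (width * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

members = np.array([2.1, 2.4, 3.0, 5.2, 5.5])   # toy bimodal ensemble
grid = np.linspace(0.0, 8.0, 161)
pdf = dressed_pdf(grid, members, width=0.4)
print(pdf.sum() * (grid[1] - grid[0]))          # ~1.0: integrates to unity
```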
        
Term: relative operating characteristic (ROC)
Definition: A verification tool that employs signal detection theory to evaluate the quality of binary (yes/no) forecasts of an event. The area under the ROC curve is a measure of utility; ROC curves closer to the upper left indicate better forecasts.
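A bare-bones Python sketch of building the curve and its area from probability forecasts and binary outcomes (synthetic data; a real evaluation would use verification samples):

```python
import numpy as np

rng = np.random.default_rng(3)

def roc_points(probs, outcomes, thresholds=np.linspace(0.0, 1.0, 11)):
    """Hit rate (POD) vs. false-alarm rate (POFD) as the probability
    threshold for issuing a yes/no forecast is varied."""
    hit, far = [], []
    for t in thresholds:
        yes = probs >= t
        hit.append(np.mean(yes[outcomes == 1]))
        far.append(np.mean(yes[outcomes == 0]))
    return np.array(far), np.array(hit)

outcomes = rng.integers(0, 2, 1000)
probs = np.clip(0.4 * outcomes + 0.6 * rng.random(1000), 0.0, 1.0)  # toy skill
far, hit = roc_points(probs, outcomes)
order = np.argsort(far)                   # integrate via the trapezoid rule
area = np.sum(np.diff(far[order]) * (hit[order][1:] + hit[order][:-1]) / 2.0)
print(f"ROC area: {area:.2f}")            # > 0.5 indicates useful skill
```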