New implementation of Knutson scaling incl. fix to negative freqs and double counting by aleeciu · Pull Request #734 · CLIMADA-project/climada_python
Then go through criteria 2, 3, and 4, and each time look at how the frequency has already changed and how to adjust the remaining categories so as to satisfy Knutson's aggregated change. That is, for point 2, I would look at cat 3 storms and ask: by how much do these need to change in frequency, after we have already changed cat 4 and 5 storms, so as to satisfy Knutson's frequency changes (1) and (2) above? Once this is done, do the same for cat 1 & 2 (Knutson's change 3 above), etc.
Exactly this method was implemented in the past, but it led to strong inconsistencies in the frequency distributions. A new implementation should be aware of this.
This was the proposal at the time. Maybe changing only the frequencies would resolve the problem.
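The iterative adjustment described above can be sketched in a few lines. The numbers below are made up for illustration (they are not Knutson's values or CLIMADA data): criteria with fewer categories (the most intense storms) are processed first, and each subsequent, broader criterion corrects its scaling for the frequency already assigned to the higher categories.

```python
# Minimal standalone sketch of the cumulative frequency adjustment.
# All frequencies and change factors are hypothetical.

# Base annual frequencies per Saffir-Simpson category (made up)
base_freq = {1: 6.0, 2: 3.0, 3: 2.0, 4: 1.0, 5: 0.3}

# Hypothetical aggregated changes: each entry scales the summed
# frequency of the listed categories by 'change'.
criteria = [
    {"category": [4, 5], "change": 1.20},           # cat 4-5 up 20%
    {"category": [3, 4, 5], "change": 1.05},        # cat 3-5 up 5%
    {"category": [1, 2, 3, 4, 5], "change": 0.90},  # all storms down 10%
]

new_freq = dict(base_freq)
done = set()
# Narrowest criterion (highest categories) first
for crit in sorted(criteria, key=lambda c: len(c["category"])):
    remaining = [c for c in crit["category"] if c not in done]
    # Target cumulative frequency for this criterion
    target = sum(base_freq[c] for c in crit["category"]) * crit["change"]
    # Frequency already fixed by the previous, narrower criteria
    already = sum(new_freq[c] for c in crit["category"] if c in done)
    # Corrected factor for the categories not yet adjusted
    factor = (target - already) / sum(base_freq[c] for c in remaining)
    for c in remaining:
        new_freq[c] = base_freq[c] * factor
    done.update(remaining)

# Each cumulative target is met exactly, e.g. cat 4-5:
# new_freq[4] + new_freq[5] == (1.0 + 0.3) * 1.20
```

With these toy numbers the cat 4-5 sum becomes 1.56 and the total becomes 11.07, so all three aggregated constraints hold simultaneously, but cat 1-2 must shrink more than 10% to compensate for the growth in the higher categories.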
def _apply_knutson_criterion(self, chg_int_freq, scaling_rcp_year):
    """
    Apply changes to intensities and cumulative frequencies.

    Parameters
    ----------
    chg_int_freq : list(dict)
        list of criteria from climada.hazard.tc_clim_change
    scaling_rcp_year : float
        scaling parameter for the chosen year and RCP

    Returns
    -------
    tc_cc : climada.hazard.TropCyclone
        Tropical cyclone with frequency and intensity scaled according
        to the Knutson criterion. Returns a new instance of TropCyclone.
    """
    tc_cc = copy.deepcopy(self)

    # Criteria are applied per basin
    for basin in np.unique(tc_cc.basin):
        bas_sel = (np.array(tc_cc.basin) == basin)

        # Apply intensity change
        inten_chg = [chg
                     for chg in chg_int_freq
                     if (chg['variable'] == 'intensity' and
                         chg['basin'] == basin)
                     ]
        for chg in inten_chg:
            sel_cat_chg = np.isin(tc_cc.category, chg['category']) & bas_sel
            inten_scaling = 1 + (chg['change'] - 1) * scaling_rcp_year
            # Scale only the rows (events) of the selected category and basin
            tc_cc.intensity = sparse.diags(
                np.where(sel_cat_chg, inten_scaling, 1)
            ).dot(tc_cc.intensity)

        # Apply frequency change
        freq_chg = [chg
                    for chg in chg_int_freq
                    if (chg['variable'] == 'frequency' and
                        chg['basin'] == basin)
                    ]
        # Process criteria with the fewest categories (i.e. the most
        # intense storms) first
        freq_chg.sort(key=lambda x: len(x['category']))

        # Iteratively scale frequencies for each category such that
        # cumulative frequencies are scaled according to the Knutson
        # criterion.
        cat_larger_list = []
        for chg in freq_chg:
            cat_chg_list = [cat
                            for cat in chg['category']
                            if cat not in cat_larger_list
                            ]
            sel_cat_chg = np.isin(tc_cc.category, cat_chg_list) & bas_sel
            if sel_cat_chg.any():
                freq_scaling = 1 + (chg['change'] - 1) * scaling_rcp_year
                sel_cat_all = (np.isin(tc_cc.category, chg['category'])
                               & bas_sel)
                sel_cat_larger = (np.isin(tc_cc.category, cat_larger_list)
                                  & bas_sel)
                # Correct the scaling for the frequency already assigned
                # to the higher categories
                freq_scaling_cor = (
                    (np.sum(self.frequency[sel_cat_all]) * freq_scaling
                     - np.sum(tc_cc.frequency[sel_cat_larger]))
                    / np.sum(self.frequency[sel_cat_chg])
                )
                tc_cc.frequency[sel_cat_chg] *= freq_scaling_cor
            cat_larger_list += cat_chg_list

    if (tc_cc.frequency < 0).any():
        raise ValueError(
            "The application of the given climate scenario resulted in "
            "at least one negative frequency. This is likely due to the "
            "use of a non-representative event set (too small, incorrect "
            "reference period, ...)."
        )
    return tc_cc
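The `sparse.diags(...).dot(...)` trick used for the intensity change scales whole rows of the event-by-centroid intensity matrix without densifying it. Here is a minimal sketch with a toy matrix and a made-up scaling factor (not CLIMADA data); the boolean mask plays the role of `sel_cat_chg`:

```python
import numpy as np
from scipy import sparse

# Toy intensity matrix: 4 events x 3 centroids (values are made up)
intensity = sparse.csr_matrix(np.array([
    [40.0,  0.0, 35.0],
    [ 0.0, 55.0, 60.0],
    [70.0, 65.0,  0.0],
    [ 0.0,  0.0, 80.0],
]))

# Suppose events 1 and 3 fall in the category/basin being scaled
sel = np.array([False, True, False, True])
inten_scaling = 1.1  # hypothetical intensity scaling

# Left-multiplying by a diagonal matrix scales the selected rows
# (events) by 1.1 and leaves the others untouched
scaled = sparse.diags(np.where(sel, inten_scaling, 1.0)).dot(intensity)
```

Because the diagonal entries for unselected events are 1, the product stays sparse and only the masked events are modified, which is why the method can apply several per-basin, per-category criteria in sequence on the same matrix.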