When:
June 18, 2016 @ 10:30 am – 12:30 pm
Where:
Auditorio Piso 2. Casa Museo Arte y Cultura la Presentación.

10:30am–11:00am
Policy Characteristics and Stakeholder Returns in Participating Life Insurance: Which Contracts Can Lead to a Win-Win?
Charbel Mirza*, University of Lausanne; Joël Wagner, University of Lausanne
Participating life insurance contracts and pension plans often include a return guarantee and participation in the surplus of the institution’s result. The final account value in such contracts depends on the investment policy driven by solvency requirements, as well as on the level of market returns, the guarantee and the participation rates. Using a contingent claim model for such contracts, we assume a competitive market with minimum solvency requirements similar to Solvency II. We consider solvency requirements on the maturity and one-year time horizons, as well as contracts with single and periodic premium payments. Through numerical analyses we link the expected returns for equity holders and policyholders in various situations. Using the return on equity and the policyholder internal rate of return along with utility measures, we assess which contract settings offer the best return compromise for both stakeholders in a low interest rate environment. Our results extend the academic literature by building on the work by Schmeiser and Wagner (2015, The Journal of Risk and Insurance, 82(3):659–686) and are relevant for practitioners given the current financial market environment and the difficulties facing insurance-linked savings plans with guarantees.
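The payoff structure described above can be illustrated with a minimal simulation sketch. All parameters (premium, guarantee, participation rate, asset dynamics) are hypothetical placeholders for illustration only, not values from the paper, and the cliquet-style crediting rule is one common stylization of such contracts:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical contract parameters (illustrative, not from the paper)
premium = 100.0        # single premium paid at inception
g = 0.01               # guaranteed annual return
alpha = 0.9            # participation rate in surplus above the guarantee
T = 10                 # contract maturity in years

# Simulate i.i.d. lognormal asset returns (a deliberate simplification)
mu, sigma, n_paths = 0.03, 0.08, 100_000
log_returns = rng.normal(mu - 0.5 * sigma**2, sigma, size=(n_paths, T))

# Cliquet-style crediting: each year the policyholder receives the
# guarantee plus a share alpha of any asset return above it
credited = g + alpha * np.maximum(np.exp(log_returns) - 1.0 - g, 0.0)
account_value = premium * np.prod(1.0 + credited, axis=1)

# Policyholder internal rate of return over the contract term
irr = (account_value.mean() / premium) ** (1.0 / T) - 1.0
print(f"expected account value at maturity: {account_value.mean():.2f}")
print(f"policyholder IRR: {irr:.4%}")
```

Because the credited rate can never fall below the guarantee, every path ends at least at the guaranteed account value, which is exactly the asymmetry that shifts risk onto the equity holders in a low interest rate environment.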

11:00am–11:30am
Identifiability in Mortality Models
Andrew Hunt*, N/A
Lack of identifiability, where different sets of parameters give identical fits to the data, is a common feature of many standard mortality models. Traditionally, these issues have been “solved” by imposing arbitrary constraints on the parameters, so as to select a unique set when fitting the model. However, this solution can cause further problems and may mean that our projections from such models depend on the arbitrary choice of constraints we adopt. We investigate this matter in a number of common age/period/cohort mortality models, discussing how identifiability issues arise, how to find them in more complicated models, their potential impact on our projected mortality rates, and how this issue can be resolved fully.
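The identifiability problem described above can be seen in a toy Lee-Carter structure, log m(x,t) = a_x + b_x k_t, where rescaling and shifting the parameters leaves the fitted rates unchanged. The numbers below are synthetic, chosen only to demonstrate the invariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lee-Carter structure: log m(x,t) = a_x + b_x * k_t
n_ages, n_years = 5, 8
a = rng.normal(-4.0, 0.5, n_ages)   # age-specific level
b = rng.uniform(0.1, 0.3, n_ages)   # age-specific sensitivity
k = rng.normal(0.0, 1.0, n_years)   # period index

log_m = a[:, None] + np.outer(b, k)

# A second, equally valid parameter set obtained via the invariant
# transformation (a + c*b, b/d, d*(k - c)) for arbitrary c and d != 0
c, d = 1.7, 2.5
a2, b2, k2 = a + c * b, b / d, d * (k - c)
log_m2 = a2[:, None] + np.outer(b2, k2)

# Both parameterisations produce identical fitted log mortality rates
print(np.allclose(log_m, log_m2))
```

The usual identifiability constraints (for example, requiring the b_x to sum to one and the k_t to sum to zero) simply pin down one member of this family; the fitted rates are the same, but projections that extrapolate k_t need not be.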

11:30am–12:00pm
Predicting human mortality: quantitative evaluation of four stochastic models
Anastasia Novokreshchenova*, University of Turin; Luca Regis, IMT Institute for Advanced Studies Lucca.
In this paper we compare different mortality models. We consider one discrete-time model proposed by Lee and Carter (1992) and three continuous-time models: the Wills and Sherris (2011) model, the Feller process and the Ornstein-Uhlenbeck (OU) process. The first two models estimate the whole surface of mortality simultaneously, while in the last two cases each generation is modelled and calibrated separately. We calibrate the models to UK female population data. To evaluate the goodness of fit we use two measures: the relative error between the forecasted and the actual mortality rates, and the percentage of actual mortality rates that fall within a confidence interval. We find that all the models show relatively similar absolute total error. In terms of the confidence intervals, the results are more divergent, since each model implies a certain structure of the variance. According to our experiments, models that estimate the whole surface of mortality produce better results in terms of the confidence interval. However, in terms of the mean absolute error, the OU and Feller processes perform best.
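The two evaluation measures used in the abstract are simple to compute once forecasts and intervals are in hand. The sketch below uses synthetic "actual" and "forecast" rates and a crude lognormal forecast interval purely to show the mechanics; none of the numbers relate to the paper's results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic actual and forecasted mortality rates for 20 ages (illustrative)
actual = np.exp(rng.normal(np.log(0.01), 0.05, 20))
forecast = actual * np.exp(rng.normal(0.0, 0.03, 20))

# Measure 1: relative error between forecasted and actual rates
rel_err = np.abs(forecast - actual) / actual
print(f"mean relative error: {rel_err.mean():.3%}")

# Measure 2: share of actual rates falling inside a 95% forecast
# interval (here a crude lognormal band around the point forecast)
sigma = 0.03
lower = forecast * np.exp(-1.96 * sigma)
upper = forecast * np.exp(+1.96 * sigma)
coverage = np.mean((actual >= lower) & (actual <= upper))
print(f"interval coverage: {coverage:.1%}")
```

The second measure is the one the authors find discriminates between models: two models with similar point errors can imply very different interval widths, because each model imposes its own variance structure.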

12:00pm–12:30pm
Mortality Improvement Rates: Modelling and Parameter Uncertainty
Andres Villegas*, University of New South Wales; Andrew Hunt, Pacific Life Re, London
Rather than looking at mortality rates directly, a number of recent academic studies have modelled rates of improvement in mortality when making projections. Although relatively new in the academic literature, the use of mortality improvement rates has a long-standing tradition in actuarial practice when allowing for improvements in mortality from standard mortality tables. However, mortality improvement rates are difficult to estimate robustly, and models of them are subject to high levels of parameter uncertainty, since they are derived by dividing one uncertain quantity by another. Despite this, the studies of mortality improvement rates to date have not investigated parameter uncertainty, due to the ad hoc methods used to fit the models to historical data. In this study, we adapt the Poisson model for the numbers of deaths at each age and year, proposed in Brouhns et al. [Insurance: Mathematics and Economics 31 (2002) 373–393], to model mortality improvement rates. This enables models of improvement rates to be fitted using standard maximum likelihood techniques and allows parameter uncertainty to be investigated using a standard bootstrapping approach. We illustrate the proposed modelling approach using data for the USA and England and Wales populations taken from the Human Mortality Database.
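The core quantities here, improvement rates and their bootstrap uncertainty, can be sketched on synthetic data. The improvement rate is taken as the relative year-on-year decline in the death rate, 1 − m(x,t)/m(x,t−1), and the Poisson bootstrap resamples death counts around the observed ones; the data and the specific bootstrap recipe below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic deaths and exposures over n_ages x n_years (illustrative)
n_ages, n_years = 4, 6
exposure = np.full((n_ages, n_years), 50_000.0)
true_m = 0.01 * np.exp(-0.02 * np.arange(n_years))  # mortality improving 2%/yr
deaths = rng.poisson(exposure * true_m, size=(n_ages, n_years))

m_hat = deaths / exposure
# Improvement rate: relative year-on-year decline in the death rate
improvement = 1.0 - m_hat[:, 1:] / m_hat[:, :-1]

# Poisson bootstrap: resample death counts to quantify the parameter
# uncertainty inherited by the ratio of two noisy estimates
n_boot = 500
boot = np.empty((n_boot, n_ages, n_years - 1))
for i in range(n_boot):
    d_star = rng.poisson(deaths)          # resampled death counts
    m_star = d_star / exposure
    boot[i] = 1.0 - m_star[:, 1:] / m_star[:, :-1]

se = boot.std(axis=0)
print(f"mean estimated improvement rate: {improvement.mean():.3%}")
print(f"typical bootstrap standard error: {se.mean():.3%}")
```

Even in this clean setting the bootstrap standard errors are of the same order as the improvement rates themselves, which is exactly the robustness problem the abstract highlights with ratio-based quantities.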
