June 16, 2016 @ 3:00 pm – 4:00 pm
Auditorio Paraninfo. Claustro San Agustín. Universidad de Cartagena.

Plenary Talk: Stochastic control for insurance: new problems and methods. Christian Hipp (Karlsruhe Institute of Technology, Karlsruhe, Germany)

Stochastic control for insurance is concerned with problems in insurance models (jump processes) and with insurance applications (constraints from supervision and the market). This leads to questions of the following types:
1. How to numerically compute a viscosity solution to an integro-differential equation;
2. How to establish uniqueness of viscosity solutions when the boundary conditions are given by values of derivatives; and
3. How to solve control problems with hidden variables.
We shall present simple Euler schemes (similar to those in Fleming-Soner (2006), Ch. IX) which converge when the value function has a continuous first derivative. This Euler discretisation also works in many univariate control problems in which the value function lacks a continuous second (and even first!) derivative. Cases with non-smooth value functions arise when claim size distributions are atomic or when constraints are restrictive.
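As a minimal illustration of an Euler scheme of this flavour (for the uncontrolled case, not the control problems of the talk), the sketch below integrates the integro-differential equation for the survival probability in the classical Lundberg model with exponential claims, where a closed form is available for comparison. All parameter values are assumed for the example.

```python
import math

# Forward Euler scheme for the survival probability delta(x) in the classical
# Lundberg model (uncontrolled case), whose integro-differential equation is
#     c * delta'(x) = lam * ( delta(x) - int_0^x delta(x - y) f(y) dy ),
# here with Exp(mu) claim sizes so that a closed form is available to check
# against.  All parameter values below are illustrative, not from the talk.
lam, c, mu = 1.0, 1.5, 1.0      # claim rate, premium rate, claim-size rate
h, N = 0.01, 500                # Euler step size; grid covers x in [0, 5]

rho = lam / (c * mu)            # must satisfy rho < 1 (positive safety loading)
delta = [1.0 - rho]             # boundary value delta(0) = 1 - rho
for i in range(N):
    # left-rectangle rule for the convolution int_0^{x_i} delta(x_i - y) f(y) dy
    integral = sum(delta[i - j] * mu * math.exp(-mu * j * h) * h
                   for j in range(i))
    delta.append(delta[i] + h * (lam / c) * (delta[i] - integral))

# closed form for exponential claims: delta(x) = 1 - rho * exp(-(mu - lam/c) x)
exact = 1.0 - rho * math.exp(-(mu - lam / c) * (N * h))
```

Replacing the boundary value by a condition on a derivative, and adding a supremum over controls in the drift term, leads to schemes of the kind discussed in the talk.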
Examples of control problems with hidden variables are a) Bayesian or mixture models in which the mixing variable is not observable; such cases are usually solved with a filtering approach; and b) multi-objective problems with an objective function of dimension 2 or larger. The most studied problem of type b) is mean-variance optimisation with finite horizon in portfolio management. We shall consider an infinite horizon problem: maximise dividend payments and minimise the ruin probability. This problem will be described and partly solved in three simple models: the (time- and space-discrete) de Finetti model, the classical Lundberg model with exponential claims, and a simple diffusion model. The dynamic equations for these problems are (almost) hopeless. Instead, heuristic methods are proposed which lead to suboptimal solutions not too far from the optimal ones, and which speed up policy improvement algorithms.
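For the time- and space-discrete de Finetti model, the single-objective dividend problem (without the ruin-probability objective) can be solved by plain value iteration, the starting point for the policy improvement algorithms mentioned above. The sketch below is a hypothetical illustration; the win probability `p`, discount factor `beta`, and truncation level `X` are all assumed.

```python
import numpy as np

# Value iteration for the (time- and space-discrete) de Finetti dividend
# problem: the surplus moves +1 with probability p and -1 with probability
# 1 - p each period; ruin occurs when the surplus becomes negative.  Before
# each move the insurer may pay an integer dividend a <= current surplus x;
# the goal is to maximise the expected discounted sum of dividends.
p, beta = 0.6, 0.95        # win probability and discount factor (assumed)
X = 60                     # truncation level for the surplus grid (assumed)

V = np.zeros(X + 1)
for _ in range(2000):      # fixed-point iteration on the Bellman operator
    Vnew = np.empty_like(V)
    for x in range(X + 1):
        best = -np.inf
        for a in range(x + 1):                   # dividend paid now
            y = x - a                            # surplus after the payment
            up = V[min(y + 1, X)]                # capped at the truncation level
            down = V[y - 1] if y >= 1 else 0.0   # down-move from 0 means ruin
            best = max(best, a + beta * (p * up + (1 - p) * down))
        Vnew[x] = best
    if np.max(np.abs(Vnew - V)) < 1e-10:
        V = Vnew
        break
    V = Vnew
```

In this single-objective version the optimal strategy is of barrier type; coupling it with a ruin-probability objective is exactly what makes the dynamic equations of the talk (almost) hopeless and motivates the heuristic methods.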
