Book of abstracts


Statistical identification of critical infrastructure accident consequences process



Part 1

Process of initiating events
Magda Bogalecka1, Krzysztof Kołowrocki2

1Department of Industrial Commodity Science and Chemistry, Gdynia Maritime University, Poland, 2Department of Mathematics, Gdynia Maritime University, Poland
During the operation of a critical infrastructure, accidents associated with a decrease of its safety level may occur. Such accidents may have dangerous consequences for the environment and a disastrous influence on human health and activity. Each critical infrastructure accident can generate an initiating event causing dangerous situations in the surroundings in which the critical infrastructure operates. The process of these initiating events can result in environment threats and lead to dangerous degradation of the environment. To capture the interactions between the initiating events, the environment threats and the environment degradation effects, a semi-Markov general model of critical infrastructure accident consequences was built.

In this part of the paper, statistical methods such as the method of moments, the maximum likelihood method and the chi-square goodness-of-fit test are applied to the identification of the process of initiating events generated either by a critical infrastructure accident or by the loss of the required critical safety level, on the basis of statistical data coming from realizations of this process.
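To make the identification procedure concrete, the sketch below estimates the three kinds of unknown parameters of a semi-Markov process from observed realizations. It is only a minimal illustration, assuming the data are stored as sequences of (state, sojourn time) pairs; all function names are hypothetical rather than the authors'.

```python
# Minimal sketch (illustrative only): identification of a semi-Markov process from data.
import numpy as np
from scipy import stats

def estimate_initial_probs(realizations, n_states):
    """Fraction of realizations starting in each state (initial-state probabilities)."""
    counts = np.zeros(n_states)
    for seq in realizations:
        counts[seq[0][0]] += 1
    return counts / len(realizations)

def estimate_transition_probs(realizations, n_states):
    """Empirical transition matrix of the embedded Markov chain."""
    counts = np.zeros((n_states, n_states))
    for seq in realizations:
        for (s, _), (t, _) in zip(seq[:-1], seq[1:]):
            counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def fit_sojourn_exponential(realizations, s, t):
    """ML fit of an exponential law to the conditional sojourn times in state s before a
    jump to state t, followed by a chi-square goodness-of-fit test."""
    times = np.array([d for seq in realizations
                      for (a, d), (b, _) in zip(seq[:-1], seq[1:]) if a == s and b == t])
    rate = 1.0 / times.mean()                                   # MLE of the exponential rate
    k = max(3, int(np.sqrt(len(times))))                        # equiprobable bins
    edges = stats.expon.ppf(np.linspace(0.0, 1.0, k + 1), scale=1.0 / rate)
    observed, _ = np.histogram(times, bins=edges)
    expected = np.full(k, len(times) / k)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    p_value = stats.chi2.sf(chi2, df=k - 2)                     # k - 1 minus one fitted parameter
    return rate, chi2, p_value
```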



Keywords: critical infrastructure, sea accident, potential consequences, initiating events.


Statistical identification of critical infrastructure accident consequences process

Part 2

Process of environment threats
Magda Bogalecka1, Krzysztof Kołowrocki2

1Department of Industrial Commodity Science and Chemistry, Gdynia Maritime University, Poland, 2Department of Mathematics, Gdynia Maritime University, Poland
The risk analysis of chemical spills at sea and their consequences is based on the general model of mutual interactions between three processes: the process of the sea accident initiating events, the process of the sea environment threats and the process of the sea environment degradation.

This paper is concerned with the identification of the second of those three processes. The statistical identification of the unknown parameters of the process of environment threats, i.e. estimating the probabilities of this process staying at particular states at the initial moment, the probabilities of its transitions between states, and the parameters and forms of the distributions adopted to describe its conditional sojourn times at those states, is performed in a way similar to that presented in Part 1 of the paper.



Keywords: critical infrastructure, sea accident, potential consequences, environment threats.


Statistical identification of critical infrastructure accident consequences process

Part 3

Process of environment degradation
Magda Bogalecka1, Krzysztof Kołowrocki2

1Department of Industrial Commodity Science and Chemistry, Gdynia Maritime University, Poland, 2Department of Mathematics, Gdynia Maritime University, Poland
The probabilistic general model of critical infrastructure accident consequences comprises the models of three processes: the process of initiating events, the process of environment threats and the process of environment degradation.

This paper is concerned with the identification of the third of those three processes. The statistical identification of the unknown parameters of the process of environment degradation is performed in a way similar to that presented in Parts 1 and 2 of the paper.



The results of Parts 1-3 of the paper will be used in the prediction of the process of initiating events, the process of environment threats and the process of environment degradation, as well as in the prediction of the entire process of critical infrastructure accident consequences.

Keywords: critical infrastructure, sea accident, potential consequences, environment degradation.

Highly dimensional classification using Tukey depth and bagdistance
Milica Bogicevic, Milan Merkle

University of Belgrade, Faculty of Electrical Engineering, Serbia
In a recent paper, “ABCDepth: efficient algorithm for Tukey depth” (arXiv, 2016), we presented a new approximate algorithm for evaluating the Tukey median and the Tukey depth of a given point. The algorithm has an advantage over others due to its linear complexity in high dimensions. In this work we combine this algorithm with the notion of bagdistance, introduced by Hubert, Rousseeuw and Segaert (Adv. Data Anal. Classif., 2016), and apply it to very high-dimensional data sets. Several examples in classification and outlier detection will be presented and discussed. We will examine the performance of our algorithm in very high dimensions that are still not attainable with existing exact algorithms.
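To give a feel for depth-based classification, the sketch below assigns a point to the group in which it is deepest. It does not use the ABCDepth algorithm of the abstract; the depth is approximated by a generic random-projection scheme, and all names and data are illustrative.

```python
# Sketch of a maximal-depth classifier with a crude random-projection approximation of
# Tukey (halfspace) depth; this is NOT the ABCDepth algorithm of the abstract.
import numpy as np

def approx_tukey_depth(x, data, n_dirs=1000, rng=None):
    """Approximate Tukey depth of x w.r.t. data: minimum, over random directions, of the
    empirical mass of a halfspace with x on its boundary."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal((n_dirs, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)            # random unit directions
    proj_data, proj_x = data @ u.T, x @ u.T                  # projections onto each direction
    frac = (proj_data <= proj_x).mean(axis=0)
    ties = (proj_data == proj_x).mean(axis=0)
    return float(np.minimum(frac, 1.0 - frac + ties).min())

def max_depth_classify(x, groups, **kw):
    """Assign x to the group in which its (approximate) depth is largest."""
    depths = {label: approx_tukey_depth(x, pts, **kw) for label, pts in groups.items()}
    return max(depths, key=depths.get)

# Usage: two Gaussian clouds in 20 dimensions; a point at the second centre is deepest there
rng = np.random.default_rng(0)
groups = {0: rng.normal(0.0, 1.0, (500, 20)), 1: rng.normal(2.0, 1.0, (500, 20))}
print(max_depth_classify(np.full(20, 2.0), groups, n_dirs=500, rng=1))   # expected: 1
```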

Keywords: Big data, Tukey depth, Classification, ABCDepth

Formulation of the Mean Squared Error for Logistic Regression. An Application with Credit Risk Data
Eva Boj del Val, Teresa Costa Cor

Universitat de Barcelona, Spain
We present an expression for the mean squared error of prediction for new observations when using logistic regression. The mean squared error can be expressed as the sum of the process variance and the estimation variance. The estimation variance can be estimated by applying the delta method and/or by using bootstrap methodology. When using the bootstrap, e.g. bootstrapping residuals, one can obtain an estimate of the distribution of each predicted value. To assess the randomness of the new predicted values, confidence intervals can be calculated from the bootstrapped distributions of the predicted values. The calculation and usefulness of the mean squared error are illustrated on the problem of credit scoring. Two sets of real credit risk data, for which the probabilities of default are estimated, are analyzed. Other measures, such as error rates based on counting mispredictions, sensitivity, specificity, ROC curves and the Brier score, are calculated for comparison with the proposed mean squared error measures.
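As a rough illustration of how the bootstrap part of this programme works, the sketch below estimates the estimation variance and percentile confidence intervals of predicted default probabilities. It uses a simple case-resampling bootstrap rather than the residual bootstrap or the delta method discussed in the paper, and the data and names are synthetic and illustrative.

```python
# Sketch: bootstrap distribution of predicted probabilities from a logistic regression,
# giving an estimation variance and confidence intervals for new observations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n, p = 500, 3
X = rng.normal(size=(n, p))                                   # synthetic credit features
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ np.array([1.0, -0.5, 0.8]) - 1.0))))
X_new = rng.normal(size=(5, p))                               # new applicants to score

model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)    # large C: essentially no penalty
point_pred = model.predict_proba(X_new)[:, 1]

B = 500
boot_preds = np.empty((B, len(X_new)))
for b in range(B):
    idx = rng.integers(0, n, n)                               # resample cases with replacement
    mb = LogisticRegression(C=1e6, max_iter=1000).fit(X[idx], y[idx])
    boot_preds[b] = mb.predict_proba(X_new)[:, 1]

estimation_var = boot_preds.var(axis=0)                       # estimation variance per new case
process_var = point_pred * (1.0 - point_pred)                 # Bernoulli process variance
mse_new = process_var + estimation_var                        # MSE of prediction, as in the abstract
ci = np.percentile(boot_preds, [2.5, 97.5], axis=0)           # 95% percentile intervals
```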

Keywords: Logistic regression, mean squared error, estimation variance, delta method, bootstrapping residuals, credit risk.

Prediction for regularized clusterwise multiblock regression
Stéphanie Bougeard1, Ndeye Niang2, Gilbert Saporta2

1Department of Epidemiology, Anses, France, 2Department of mathematics and statistics, CEDRIC-CNAM, France
Multiblock methods integrate the information in several blocks of explanatory variables to explain a set of dependent variables. However, in many applications multiblock techniques are used as if the observations came from a homogeneous population, while it often happens that they originate from different ones. A standard approach to obtaining clusters within a regression framework is clusterwise, a.k.a. typological, regression. These methods assume that there is an underlying unknown group structure of the observations and that each cluster can be revealed by the fit of a specific regression model. More formally, clusterwise regression simultaneously looks for a partition of the observations into clusters and minimizes the sum of squared errors computed over all the clusters.

We propose to combine regularized multiblock regression with a clusterwise approach. We focus on prediction, a matter of utmost importance that has, however, not been addressed in the clusterwise framework. In practice, clusterwise prediction can be used for two major goals: (i) the prediction of new observations and (ii) the selection of the unknown parameters of the clusterwise multiblock regression, i.e. the number of clusters, the number of components to be included in the model and the regularization parameter value, while minimizing the cross-validated prediction error. For this purpose, several original multiblock supervised classifications are examined in the field of clusterwise analysis. A simulation study is carried out to assess the performance of the prediction method and an empirical application is provided to illustrate the usefulness of the method.
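The basic clusterwise mechanism, stripped of the multiblock structure and the regularization that are the paper's contribution, can be sketched as follows; everything here (plain OLS within clusters, the data, the names) is an illustrative assumption.

```python
# Sketch of plain clusterwise (typological) regression: alternate between fitting one OLS
# model per cluster and reassigning each observation to the cluster whose model fits it best.
import numpy as np

def clusterwise_regression(X, y, n_clusters=2, n_iter=50, rng=None):
    rng = np.random.default_rng(rng)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])                     # add an intercept column
    labels = rng.integers(0, n_clusters, n)                   # random initial partition
    for _ in range(n_iter):
        coefs = []
        for k in range(n_clusters):
            mask = labels == k
            if mask.sum() < Xd.shape[1]:                      # guard against empty/tiny clusters
                mask = rng.random(n) < 0.5
            b, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
            coefs.append(b)
        errors = np.column_stack([(y - Xd @ b) ** 2 for b in coefs])
        new_labels = errors.argmin(axis=1)                    # reassign to best-fitting cluster
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, coefs

# Usage on data generated from two different regression regimes
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
regime = rng.integers(0, 2, 300)
y = np.where(regime == 0, X @ np.array([1.0, 2.0, 0.0, 0.0]),
             X @ np.array([-2.0, 0.0, 3.0, 1.0])) + 0.1 * rng.normal(size=300)
labels, coefs = clusterwise_regression(X, y, n_clusters=2, rng=1)
```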



Keywords: multiblock analysis, clusterwise regression, multiblock discriminant analysis, supervised multiblock classification.

Optimal Sustainable Constant Effort Fishing Policies in Random Environments
Carlos A. Braumann1,2, Nuno M. Brites1

1Centro de Investigação em Matemática e Aplicações, Instituto de Investigação e Formação Avançada, Universidade de Évora, Portugal, 2Departamento de Matemática, Escola de Ciências e Tecnologia, Universidade de Évora
We use a general stochastic differential equation model for the growth of a fish population in a randomly varying environment, from which we subtract a harvesting yield term based on a constant or variable fishing effort. We consider a quite general profit structure with linear prices per unit yield and linear costs per unit effort. Previous work on the optimal design of the fishing policy with the purpose of maximizing the expected accumulated profit (discounted by a depreciation rate) over a finite time horizon led to fishing efforts that vary with the randomly varying population size (sometimes even in a bang-bang way). These policies are not applicable in practice, since they require constant evaluation of the population size and very frequent, randomly determined changes in the fishing effort. Our approach uses instead a very easily applicable constant-effort fishing policy, which leads to sustainability of the population and to a stationary distribution of the population size. We determine the constant fishing effort that optimizes the expected sustainable profit per unit time. Then, for the logistic and the Gompertz models, we use Monte Carlo simulations to check what we lose profitwise by using this policy instead of the inapplicable optimal policy with variable effort; in common situations, our approach is almost as profitable as the optimal one.
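A crude feel for the constant-effort policy can be obtained by simulation. The sketch below uses a stochastic logistic model dX = rX(1 - X/K)dt - qEX dt + sigma X dW, approximates the sustainable profit per unit time by the long-run average of p·qEX minus cE, and searches over constant efforts E; the dynamics, the profit form and all parameter values are illustrative assumptions, not those of the paper.

```python
# Monte Carlo sketch: sustainable profit per unit time under a constant fishing effort E
# for a stochastic logistic population (Euler scheme); parameter values are illustrative.
import numpy as np

def mean_sustainable_profit(E, r=1.0, K=100.0, q=0.01, sigma=0.2,
                            p=10.0, c=0.5, T=200.0, dt=0.01, x0=50.0, seed=0):
    rng = np.random.default_rng(seed)
    x, yield_sum = x0, 0.0
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        x += r * x * (1.0 - x / K) * dt - q * E * x * dt + sigma * x * dW
        x = max(x, 0.0)                            # population size cannot be negative
        yield_sum += q * E * x * dt                # biomass harvested on this step
    return p * yield_sum / T - c * E               # average profit per unit time

# crude grid search for the best constant effort (common random numbers via the fixed seed)
efforts = np.linspace(0.0, 80.0, 41)
profits = [mean_sustainable_profit(E) for E in efforts]
best_effort = efforts[int(np.argmax(profits))]
```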

Keywords: Fishing Policies, Stochastic Differential Equations, Random Environments, Profit Optimization.

Acknowledgements: The authors belong to the Centro de Investigação em Matemática e Aplicações (CIMA, ref. UID/MAT/04674/2013), a research centre supported by FCT (Fundação para a Ciência e a Tecnologia, Portugal). Nuno M. Brites holds a PhD grant from FCT (ref. SFRH/BD/85096/2012).

Taylor's Law via Ratios, for Some Distributions with Infinite Mean



Mark Brown
Department of Statistics, Columbia University

Taylor’s law (TL) originated as an empirical pattern in ecology. In many sets of samples of population density, the variance of each sample was approximately proportional to a power of the mean of that sample. In a family of nonnegative random variables, TL asserts that the population variance is proportional to a power of the population mean. TL, sometimes called fluctuation scaling, holds widely in physics, ecology, finance, demography, epidemiology, and other sciences, and characterizes many classical probability distributions and stochastic processes such as branching processes and birth-and-death processes. We demonstrate analytically for the first time that a version of TL holds for a class of distributions with infinite mean. These distributions and the associated TL differ qualitatively from those of light-tailed distributions. Our results employ and contribute to methodology of Albrecher and Teugels (2006) and Albrecher, Ladoucette and Teugels (2010). This work opens a new domain of investigation for generalizations of TL.
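For readers unfamiliar with the empirical pattern, the classical finite-mean form of Taylor's law can be checked in a few lines: regress the log of the sample variance on the log of the sample mean across many samples. The snippet below does this for Poisson data (where the exponent is 1); it only illustrates the familiar pattern, not the infinite-mean ratio version established in this work.

```python
# Sketch of the classical empirical check of Taylor's law, variance ~ a * mean^b.
import numpy as np

rng = np.random.default_rng(0)
means, variances = [], []
for lam in np.linspace(1.0, 50.0, 40):          # 40 "populations" of different density
    sample = rng.poisson(lam, size=200)         # Poisson samples satisfy TL with b = 1
    means.append(sample.mean())
    variances.append(sample.var(ddof=1))

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted TL exponent b = {b:.2f}, a = {np.exp(log_a):.2f}")   # b should be near 1
```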


This work is joint with Professors Joel Cohen and Victor de la Pena.

Sharp Bounds for Exponential Approximations of NWUE Distributions
Mark Brown1, Shuangning Li2

1Department of Statistics, Columbia University, USA, 2Department of Statistics and Actuarial Science, University of Hong Kong, Hong Kong
Let F be an NWUE distribution with mean 1 and G be the stationary renewal distribution of F. We would expect G to converge in distribution to the unit exponential distribution as its mean goes to 1. In this paper, we derive sharp bounds for the Kolmogorov distance between G and the unit exponential distribution, as well as between G and an exponential distribution with the same mean as G. We apply the bounds to geometric convolutions and to first passage times.
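The quantities involved are easy to compute numerically for a concrete F, which is a useful sanity check on such bounds. The sketch below takes F to be a Weibull law with shape parameter below one (decreasing failure rate, hence NWUE) rescaled to mean 1, builds the stationary renewal distribution G(x) as the integral of the survival function of F, and evaluates the two Kolmogorov distances discussed in the abstract; the choice of F and the grid are illustrative.

```python
# Numerical sketch: Kolmogorov distance between the stationary renewal distribution G of an
# NWUE law F (here a Weibull with shape < 1, rescaled to mean 1) and exponential laws.
import numpy as np
from scipy.special import gamma

shape = 0.7
scale = 1.0 / gamma(1.0 + 1.0 / shape)                 # rescale F to have mean 1
x = np.linspace(0.0, 30.0, 200001)
surv_F = np.exp(-(x / scale) ** shape)                 # survival function of F
# G(x) = integral_0^x (1 - F(t)) dt  (mean of F is 1), by the trapezoidal rule
G = np.concatenate([[0.0], np.cumsum((surv_F[1:] + surv_F[:-1]) / 2.0 * np.diff(x))])
dist_unit_exp = np.max(np.abs(G - (1.0 - np.exp(-x))))
mean_G = np.sum(((1.0 - G)[1:] + (1.0 - G)[:-1]) / 2.0 * np.diff(x))
dist_same_mean = np.max(np.abs(G - (1.0 - np.exp(-x / mean_G))))
print(dist_unit_exp, dist_same_mean)
```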

Keywords: Exponential approximations, reliability theory, NWUE distributions, Kolmogorov distance.

Asymptotic Analysis and Optimization of Insurance Company Performance
Ekaterina Bulinskaya

Department of Probability Theory, Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Russia
It is well known that different criteria (risk measures or objective functions) can be used to evaluate an insurance company's performance. The most popular one in the last century was the company ruin probability for the Cramér-Lundberg model. In practice, however, it turned out that a negative surplus level does not always lead to bankruptcy, since the company may use, e.g., a bank loan to avoid insolvency. Thus, “absolute ruin”, “Parisian ruin” and “omega models” were defined and studied within the framework of the reliability approach. Being a corporation, an insurance company is also interested in dividend payments to its shareholders. The so-called cost approach therefore arose in the middle of the last century with the pioneering paper of Bruno de Finetti, who proposed to maximize the expected discounted dividends paid out until ruin. The modern period in actuarial science is characterized by the interplay of financial and actuarial methods, leading to a unification of the reliability and cost approaches. To optimize a company's functioning one can use various tools such as investment, bank loans and reinsurance. We investigate the asymptotic behavior of several insurance models and apply the results to establish optimal investment and reinsurance strategies. Discrete-time models, which are more realistic in some situations, are considered along with continuous-time ones.

Keywords: Asymptotic Behavior, Optimization Criteria, Ruin Probability, Bankruptcy, Dividends, Investment, Reinsurance.

On Quantum Information Characterization of Markov and non-Markov Dynamics of Open Quantum Systems
Andrey Bulinski

Department of Higher Mathematics, Moscow Institute of Physics and Technology, Russia
The theory of open quantum systems has continued to attract much attention during the last decade due to a vast area of applications in various branches of physics and also in biology and chemistry. Treating the evolution of an open system interacting with its environment as Markovian is a useful idealization, valid only when memory effects can be neglected. When the initial states of the system and the environment are uncorrelated, the reduced dynamics of the system is implemented by completely positive linear maps which obey the semigroup law. Nowadays the studies are mostly devoted to non-Markovian evolution of open systems and are often related to modern quantum technologies for information processing and storage. We refer to the recent review papers by Rivas-Huelga-Plenio and by Breuer-Laine-Piilo for characterizations of quantum non-Markovianity and diverse measures of its manifestation in dynamics. The two main approaches to the Markovianity property focus, respectively, on CP (completely positive) divisibility, by analogy with the positive divisibility of classical inhomogeneous Markov processes, and on the lack of backward flow of information into the system. Initially the presence of backflow was formulated in terms of increasing distinguishability of a pair of evolving states. However, the CP-divisibility requirement is stronger, and many other nonequivalent measures of the information backflow have emerged. A stronger form proposed by Buscemi-Datta, which is equivalent to CP-divisibility, is not easy to verify. The analysis has mainly been confined to quantum systems with a finite-dimensional underlying Hilbert space. We will be concerned with generalizations of the results on open quantum system evolutions to the infinite-dimensional case, including the possibility of considering states of von Neumann algebras more general than that of all bounded operators. To this aim we benefit from Shirokov's extensions of various entropic measures of quantum correlations.

Keywords: Quantum Information, Open Quantum Systems, Non-Markovian Dynamics, von Neumann Algebras.

Diversification Analysis in Value at Risk Models under Heavy-Tailedness and Dependence
Suzanne Burns

Imperial College Business School, United Kingdom
This paper analyses the effect of diversification across N assets on the Value at Risk (VaR) of a portfolio whose dependence is captured by a copula. We investigate risks with power-law marginals, i.e. risks satisfying P(X > x) ≈ C x^(-ζ) for large x, where ζ is known as the tail index and X is the potential loss, including risks with Student-t distributions. The copula we show particular interest in is the Student-t, since, according to a number of works in the literature, it outperforms other copulas when modelling VaR. Results involving the Gaussian copula are also included for comparison. The research can be seen as a continuation of [Mo, 2013], which included a diversification analysis for the bivariate Student-t copula with power-law marginals among its results. We extend the analysis from bivariate data to N assets and adjust accordingly the ratio that measures diversification optimality, comparing the VaR of the portfolio of the risks X_i, i = 1, 2, …, N, with the VaR of an undiversified position (it is natural to refer to these quantities as diversification ratios). The results extend previous reports that diversification becomes suboptimal for risks with power-law marginals with tail indices ζ < 1. We also note how the effect of different input parameters on the ratio changes according to the value of the tail index ζ. The second part of our paper describes the optimizer created to determine numerically the portfolio weights that minimize the VaR for each distribution. The output weights are consistent with our results on diversification. Finally, we take the perspective of a portfolio manager who must make assumptions on ζ and other distribution parameters, and compare the VaR calculated under those assumptions to the VaR obtained knowing the correct distribution.
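A quick Monte Carlo experiment conveys the qualitative message about heavy tails. The sketch below computes a diversification ratio, the VaR of an equally weighted portfolio of N risks divided by the VaR of a single risk; for simplicity the risks are independent Student-t losses with tail index equal to the degrees of freedom (the paper instead couples the marginals through a Student-t copula), and all parameter values are illustrative.

```python
# Monte Carlo sketch of a VaR diversification ratio for heavy-tailed risks.
import numpy as np

def var_diversification_ratio(nu, N=10, q=0.995, n_sim=500_000, seed=0):
    rng = np.random.default_rng(seed)
    losses = rng.standard_t(nu, size=(n_sim, N))             # independent t_nu losses
    var_single = np.quantile(losses[:, 0], q)                # undiversified position
    var_portfolio = np.quantile(losses.mean(axis=1), q)      # equally weighted portfolio
    return var_portfolio / var_single                        # below 1: diversification helps

print(var_diversification_ratio(nu=3.0))    # moderately heavy tails: ratio well below 1
print(var_diversification_ratio(nu=0.5))    # tail index below 1: ratio typically above 1
```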



Empirical Power Study of the Jackson Exponentiality Test
Frederico Caeiro, Ayana Mateus

Faculdade de Ciências e Tecnologia & Centro de Matematica e Aplicações (CMA), Universidade Nova de Lisboa, Portugal
Since many statistical methods rely on the exponentiality assumption, testing exponentiality plays an important role in statistics. Possible alternative models, which extend the exponential distribution, are the gamma distribution, the Weibull distribution and the generalized Pareto distribution. Many tests have been proposed in the literature; here we consider the Jackson exponentiality test. In this paper we use Monte Carlo simulation to study the empirical power of the Jackson test.
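An empirical power study of this kind is straightforward to organize by simulation: calibrate critical values under exponential samples and count rejections under an alternative. The sketch below does this with the Jackson statistic written as the ratio of the order statistics weighted by expected exponential order statistics; the exact form of the statistic should be checked against Jackson (1967), and the sample size, alternative and number of replications are illustrative.

```python
# Monte Carlo sketch of an empirical power study for a Jackson-type exponentiality test.
import numpy as np

def jackson_statistic(x):
    # T = sum_i t_i X_(i) / sum_i X_i with t_i = sum_{j=1}^{i} 1/(n - j + 1);
    # verify this form against Jackson (1967).
    n = len(x)
    t = np.cumsum(1.0 / np.arange(n, 0, -1))
    return np.sum(t * np.sort(x)) / np.sum(x)

def empirical_power(alt_sampler, n=50, alpha=0.05, n_null=20_000, n_alt=5_000, seed=0):
    rng = np.random.default_rng(seed)
    null = np.array([jackson_statistic(rng.exponential(size=n)) for _ in range(n_null)])
    lo, hi = np.quantile(null, [alpha / 2.0, 1.0 - alpha / 2.0])   # simulated critical values
    alt = np.array([jackson_statistic(alt_sampler(rng, n)) for _ in range(n_alt)])
    return np.mean((alt < lo) | (alt > hi))

# Power against a Weibull(0.8) alternative (the statistic is scale invariant)
print(empirical_power(lambda rng, n: rng.weibull(0.8, size=n)))
```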

Keywords: Exponential distribution; exponentiality test; Monte Carlo simulation; power of a statistical test.

References:

1. O. A. Y. Jackson. An analysis of departures from the exponential distribution. J. Roy. Statist. Soc. B, 29, 540-549, 1967.



Probability Weighted Moments Method for Pareto distribution
Frederico Caeiro, Ayana Mateus

Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Portugal and Centro de Matematica e Aplicações (CMA)
The Pareto distribution has been extensively used for modelling events in fields such as bibliometrics, demography, insurance and finance, among others.

There are several methods to estimate the parameters of Pareto distributions (see Arnold [1], Johnson et al. [3], Quandt [4] and references therein). In this work, we consider the probability weighted moments (PWM) method, a generalization of the classical method of moments. In this method we work with the theoretical moments M_{r,s} = E[X (F(X))^r (1 - F(X))^s], with r and s any real numbers, and with their corresponding sample moments. The PWM estimators are obtained by equating the theoretical moments with their corresponding sample moments and then solving those equations for the parameters. The PWM estimators of the parameters of the Pareto distribution were presented in [2]. We now study the performance of these estimators for finite sample sizes and compare them with the moment and maximum likelihood methods through a Monte Carlo simulation.
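As an illustration of the mechanics, one concrete PWM scheme for the Pareto law F(x) = 1 - (x/σ)^(-α), x ≥ σ, uses a_s = E[X (1 - F(X))^s], for which a_0 = ασ/(α - 1) and a_1 = ασ/(2α - 1) when α > 1, so that α = (a_0 - a_1)/(a_0 - 2a_1) and σ = a_0(α - 1)/α. The sketch below implements these estimators and compares them with maximum likelihood on simulated data; this particular choice of moments and all parameter values are assumptions for illustration and may differ from the scheme of [2].

```python
# Sketch of PWM estimation for the Pareto(sigma, alpha) law, compared with maximum likelihood.
import numpy as np

def pwm_pareto(x):
    x = np.sort(x)
    n = len(x)
    a0 = x.mean()                                              # a_0 = E[X]
    a1 = np.sum(x * (n - np.arange(1, n + 1)) / (n - 1)) / n   # unbiased sample a_1
    alpha = (a0 - a1) / (a0 - 2.0 * a1)
    sigma = a0 * (alpha - 1.0) / alpha
    return alpha, sigma

rng = np.random.default_rng(0)
alpha_true, sigma_true, n = 3.0, 2.0, 200
est_pwm, est_ml = [], []
for _ in range(1000):
    x = sigma_true * rng.random(n) ** (-1.0 / alpha_true)      # Pareto(sigma, alpha) sample
    est_pwm.append(pwm_pareto(x))
    est_ml.append((n / np.sum(np.log(x / x.min())), x.min()))  # classical ML estimators
print("PWM mean estimates:", np.mean(est_pwm, axis=0))
print("ML  mean estimates:", np.mean(est_ml, axis=0))
```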

Keywords: Pareto distributions, Probability weighted moments, Monte Carlo simulation.

References

1. B. C. Arnold. Pareto and Generalized Pareto Distributions. In D. Chotikapanich (ed.), Modeling Income Distributions and Lorenz Curves, Springer, New York, 119-145, 2008.

2. F. Caeiro and M. I. Gomes. Semi-parametric tail inference through probability weighted moments. Journal of Statistical Planning and Inference, 141, 937-950, 2011.

3. N. L. Johnson, S. Kotz and N. Balakrishnan. Continuous Univariate Distributions, Wiley, New York, Vol. 1, 2nd Ed., 1994.

4. R. E. Quandt. Old and new methods of estimation and the Pareto distribution. Metrika, 10, 55-82, 1966.

5. J. Hosking, J. Wallis and E. Wood. Estimation of the generalized extreme value distribution by the method of probability-weighted moments. Technometrics, 27(3), 251-261, 1985.



Option pricing and model calibration under multifactor stochastic volatility and stochastic interest rate - an asymptotic expansion approach
Canhanga B.1, Malyarenko A.2, Ni Y.2, Rancic M.2, Silvestrov S.2

1Faculty of Sciences, Department of Mathematics and Computer Sciences, Eduardo Mondlane University, Mozambique, 2Division of Applied Mathematics, School of Education, Culture and Communication, Mälardalen University, Sweden
Among other limitations, the celebrated Black-Scholes option pricing model assumes constant volatility and constant interest rates, which is not supported by empirical studies of, for example, implied volatility surfaces. Studies by many researchers, such as Heston in 1993, Christoffersen in 2009, Fouque in 2012 and Chiarella-Ziveyi in 2013, as well as the authors' previous work, removed the constant volatility assumption from the Black-Scholes model by introducing one or two stochastic volatility factors with a constant interest rate. In the present paper we follow this line but generalize the model by also considering a stochastic interest rate. More specifically, the underlying asset process is governed by a mean-reverting interest rate process in addition to two mean-reverting stochastic volatility processes with fast and slow mean-reversion rates, respectively. The focus is to derive an approximating formula for pricing European options using a double asymptotic expansion method, and to present a calibration procedure, which is then applied to Swedish option market data.
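An asymptotic expansion formula of this kind is typically validated against a brute-force benchmark. The sketch below prices a European call by an Euler-scheme Monte Carlo under a model of the type described, two CIR-type variance factors with fast and slow mean reversion plus a mean-reverting (Vasicek) short rate; it is not the expansion formula of the paper, the Brownian drivers are taken independent for brevity, and all dynamics and parameter values are illustrative assumptions.

```python
# Euler-scheme Monte Carlo benchmark for a European call under two mean-reverting variance
# factors and a mean-reverting short rate (illustrative dynamics and parameters only).
import numpy as np

def mc_call_price(s0=100.0, strike=100.0, T=1.0, n_steps=200, n_paths=100_000, seed=0,
                  v1_0=0.04, kappa1=5.0, theta1=0.04, xi1=0.3,   # fast variance factor
                  v2_0=0.02, kappa2=0.5, theta2=0.02, xi2=0.1,   # slow variance factor
                  r0=0.02, a=1.0, b=0.03, sr=0.01):              # Vasicek short rate
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    v1, v2, r = np.full(n_paths, v1_0), np.full(n_paths, v2_0), np.full(n_paths, r0)
    int_r = np.zeros(n_paths)                                    # integrated short rate
    for _ in range(n_steps):
        z = rng.standard_normal((5, n_paths))                    # independent drivers
        int_r += r * dt
        s *= np.exp((r - 0.5 * (v1 + v2)) * dt
                    + np.sqrt(v1 * dt) * z[0] + np.sqrt(v2 * dt) * z[1])
        # np.abs is a crude reflection fix keeping the variance factors nonnegative
        v1 = np.abs(v1 + kappa1 * (theta1 - v1) * dt + xi1 * np.sqrt(v1 * dt) * z[2])
        v2 = np.abs(v2 + kappa2 * (theta2 - v2) * dt + xi2 * np.sqrt(v2 * dt) * z[3])
        r += a * (b - r) * dt + sr * np.sqrt(dt) * z[4]
    payoff = np.maximum(s - strike, 0.0) * np.exp(-int_r)
    return payoff.mean(), payoff.std() / np.sqrt(n_paths)        # price and standard error

price, stderr = mc_call_price()
```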
