The Pickands–Balkema–de Haan theorem for intuitionistic fuzzy events

In this paper the space of observables with respect to a family of intuitionistic fuzzy events is considered. In [3] we proved a modification of the Fisher–Tippett–Gnedenko theorem for a sequence of independent intuitionistic fuzzy observables. Here we prove a modification of the Pickands–Balkema–de Haan theorem. Both are theorems of the part of statistics called extreme value theory.


Introduction
Extreme value theory is a branch of statistics that deals with the probability of extreme and rare events with a large impact; it studies the behaviour of distributions near their endpoints. The Fisher–Tippett–Gnedenko theorem describes the convergence in distribution of maxima of independent, identically distributed random variables. An alternative to the method of maxima is the method that models all observations exceeding a predefined boundary (i.e., a threshold). This method underlies the Pickands–Balkema–de Haan theorem. In [3] a modification of the Fisher–Tippett–Gnedenko theorem was proved for a sequence of independent intuitionistic fuzzy observables. Here we prove the corresponding modification of the Pickands–Balkema–de Haan theorem.
One of the advantages of the Kolmogorov concept of probability is the replacement of the notion of an event by the notion of a set. It therefore seems natural, also in intuitionistic fuzzy probability theory, to work with the notion of an intuitionistic fuzzy event as an intuitionistic fuzzy set. In intuitionistic fuzzy probability theory, instead of a probability P : S → [0, 1] an intuitionistic fuzzy state m : F → [0, 1] is considered, where F is a family of intuitionistic fuzzy subsets of Ω, and instead of a random variable ξ : Ω → R an intuitionistic fuzzy observable x : B(R) → F is considered.
Our main idea is the representation of a given sequence (y_n)_n of intuitionistic fuzzy observables y_n : B(R) → F by a probability space (Ω, S, P) and a sequence (η_n)_n of random variables η_n : Ω → R. Then the convergence of (y_n)_n in distribution follows from the convergence of (η_n)_n in distribution. Of course, different sequences (y_n)_n may lead to different probability spaces. Nevertheless, the transformation can be used to obtain new results about intuitionistic fuzzy states on F.
Note that the Atanassov concept of intuitionistic fuzzy sets [1,2] used here is more general than the Zadeh notion of fuzzy sets [15,16]. In Section 2 some basic facts about intuitionistic fuzzy states and intuitionistic fuzzy observables on families of intuitionistic fuzzy sets are presented [13]. In Section 3 the independence of intuitionistic fuzzy observables is studied. In Section 4 the basic notions of extreme value theory are recalled. Finally, in Section 5 the intuitionistic fuzzy excess distribution F_u is studied and the Pickands–Balkema–de Haan theorem for the intuitionistic fuzzy case is proved.
Throughout the text we use the abbreviation "IF" for the phrase "intuitionistic fuzzy".

IF-events, IF-states and IF-observables
The main notion of the paper is that of an IF-event, which is a pair of fuzzy events.
If A = (µ_A, ν_A) ∈ F and B = (µ_B, ν_B) ∈ F, then we define the Lukasiewicz binary operations ⊕, ⊙ on F by

A ⊕ B = ((µ_A + µ_B) ∧ 1, (ν_A + ν_B − 1) ∨ 0),
A ⊙ B = ((µ_A + µ_B − 1) ∨ 0, (ν_A + ν_B) ∧ 1),

and the partial ordering is given by

A ≤ B ⟺ µ_A ≤ µ_B, ν_A ≥ ν_B.

If f = χ_A, then the corresponding IF-event has the form (χ_A, 1 − χ_A). In this case A ⊕ B corresponds to the union of sets, A ⊙ B to the intersection of sets and ≤ to the set inclusion.
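The Lukasiewicz operations can be rendered numerically. The following sketch (our own illustration, not from the paper; names are ours) represents an IF-event at a fixed point ω by its pair of membership and non-membership values:

```python
# A minimal sketch of the Lukasiewicz operations on IF-events, where an
# IF-event A = (mu_A, nu_A) is evaluated at one fixed point omega.

def lukasiewicz_sum(a, b):
    """A (+) B = ((mu_A + mu_B) /\\ 1, (nu_A + nu_B - 1) \\/ 0)."""
    (mu_a, nu_a), (mu_b, nu_b) = a, b
    return (min(mu_a + mu_b, 1.0), max(nu_a + nu_b - 1.0, 0.0))

def lukasiewicz_product(a, b):
    """A (.) B = ((mu_A + mu_B - 1) \\/ 0, (nu_A + nu_B) /\\ 1)."""
    (mu_a, nu_a), (mu_b, nu_b) = a, b
    return (max(mu_a + mu_b - 1.0, 0.0), min(nu_a + nu_b, 1.0))

def leq(a, b):
    """A <= B iff mu_A <= mu_B and nu_A >= nu_B."""
    return a[0] <= b[0] and a[1] >= b[1]

# Crisp sets: f = chi_A yields the IF-event (chi_A, 1 - chi_A).
A = (1.0, 0.0)   # omega lies in A
B = (0.0, 1.0)   # omega does not lie in B
print(lukasiewicz_sum(A, B))      # union-like:        (1.0, 0.0)
print(lukasiewicz_product(A, B))  # intersection-like: (0.0, 1.0)
```

For crisp arguments the operations reduce to set union and intersection, as claimed above.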
In IF-probability theory ([13]), instead of the notion of probability we use the notion of a state.
Probably the most useful result in IF-state theory is the representation theorem (see [11]), which represents each IF-state by means of a probability measure. The third basic notion of probability theory is that of an observable. Let J be the family of all intervals in R of the form (−∞, t), t ∈ R. Then the σ-algebra σ(J) is denoted by B(R) and called the σ-algebra of Borel sets; its elements are called Borel sets.

Definition 2.6. By an IF-observable on F we understand each mapping x : B(R) → F satisfying the following conditions:
(i) x(R) = (1, 0);
(ii) if A ∩ B = ∅, then x(A) ⊙ x(B) = (0, 1) and x(A ∪ B) = x(A) ⊕ x(B);
(iii) if A_n ↗ A, then x(A_n) ↗ x(A).
Similarly as in the classical case the following two theorems can be proved ([13]). If F is the IF-distribution function of x, we define the IF-mean value E(x) = ∫_R t dF(t), whenever the integral exists. Moreover, if there exists ∫_R t² dF(t), then we define the IF-dispersion D²(x) by the formula

D²(x) = ∫_R t² dF(t) − (E(x))².

Independence
In this paper we shall work only with independent IF-observables. Of course, we first need the existence of the joint IF-observable. For this reason we define the product of IF-events ([9]).

Definition 3.1. If A = (µ_A, ν_A), B = (µ_B, ν_B) ∈ F, then their product is the IF-event

A · B = (µ_A · µ_B, ν_A + ν_B − ν_A · ν_B).
The next important notion is the notion of a joint IF-observable and its existence (see [12]).
Definition 3.2. Let x, y : B(R) → F be two IF-observables. The joint IF-observable of the IF-observables x, y is a mapping h : B(R²) → F satisfying the following conditions:
(i) h(R²) = (1, 0);
(ii) if A ∩ B = ∅, then h(A) ⊙ h(B) = (0, 1) and h(A ∪ B) = h(A) ⊕ h(B);
(iii) if A_n ↗ A, then h(A_n) ↗ h(A);
(iv) h(C × D) = x(C) · y(D) for each C, D ∈ B(R).

Theorem 3.3. For each pair of IF-observables x, y : B(R) → F there exists their joint IF-observable.

Proof. See [12].

Definition 3.4. Let m be an IF-state. IF-observables x_1, ..., x_n are independent if, for their joint IF-observable h_n : B(R^n) → F,

m(h_n(A_1 × ... × A_n)) = m(x_1(A_1)) · m(x_2(A_2)) · ... · m(x_n(A_n))

for each A_1, ..., A_n ∈ B(R).

Theorem 3.5. Let R^N be the set of all sequences (t_i)_i of real numbers. Let (x_n)_n be a sequence of independent IF-observables in (F, m) with the same IF-distribution function. Then there exists a probability space (R^N, σ(C), P) with the following property: if for each n ∈ N the mapping ξ_n : R^N → R is defined by ξ_n((t_i)_i) = t_n, then (ξ_n)_n is a sequence of independent random variables in the space (R^N, σ(C), P).
Proof. Notation: a set C ⊂ R^N is called a cylinder if there exist n ∈ N and D ∈ B(R^n) such that C = {(t_i)_i : (t_1, ..., t_n) ∈ D}.
By C we denote the family of all cylinders in R^N, and by σ(C) the σ-algebra generated by C.

Construction: Consider the measurable space (R^N, σ(C)), a sequence (x_n)_n of independent IF-observables x_n : B(R) → F (i.e. x_1, ..., x_n are independent for each n ∈ N), and the states m_n : B(R^n) → [0, 1] defined by m_n(B) = m(h_n(B)), where h_n is the joint IF-observable of x_1, ..., x_n. The states m_n are consistent, i.e.

m_{n+1}(B × R) = m_n(B)

for each B ∈ B(R^n). Therefore by the Kolmogorov consistency theorem (see [14]) there exists a probability measure P : σ(C) → [0, 1] such that

P(π_n^{-1}(B)) = m_n(B)

for each B ∈ B(R^n), where π_n : R^N → R^n is the projection defined by π_n((t_i)_{i=1}^∞) = (t_1, ..., t_n). Let n ∈ N and A_1, ..., A_n ∈ B(R). Then

P(ξ_1 ∈ A_1, ..., ξ_n ∈ A_n) = m_n(A_1 × ... × A_n) = m(x_1(A_1)) · ... · m(x_n(A_n)) = P(ξ_1 ∈ A_1) · ... · P(ξ_n ∈ A_n),

hence ξ_1, ..., ξ_n are independent. If there exists the IF-mean value E(x_n), then E(ξ_n) = E(x_n). Similarly the equality D²(ξ_n) = D²(x_n) can be proved.
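The coordinate construction above can be illustrated on a toy finite model (our own assumption for exposition, not the paper's construction): when P is a product measure over two coordinates, the coordinate maps ξ_n((t_i)_i) = t_n are independent random variables.

```python
# A finite toy version of the construction: P is a product measure on pairs,
# xi_n reads the n-th coordinate, and the coordinates come out independent.
from itertools import product

coords = [0.0, 1.0]                       # each coordinate takes two values
p = {0.0: 0.3, 1.0: 0.7}                  # one-dimensional law m_1
P = {seq: p[seq[0]] * p[seq[1]] for seq in product(coords, repeat=2)}

def xi(n):
    """Coordinate map xi_n((t_i)_i) = t_n (1-indexed as in the text)."""
    return lambda seq: seq[n - 1]

def prob(event):
    """P(event) for an event given as a predicate on sequences."""
    return sum(P[s] for s in P if event(s))

lhs = prob(lambda s: xi(1)(s) == 1.0 and xi(2)(s) == 0.0)
rhs = prob(lambda s: xi(1)(s) == 1.0) * prob(lambda s: xi(2)(s) == 0.0)
print(lhs, rhs)  # both equal 0.21 up to float rounding
```

The factorization lhs = rhs is exactly the independence step of the proof, specialized to two coordinates.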
We also need the notion of convergence of IF-observables (see [8]).

Definition 3.6. Let x_1, ..., x_n : B(R) → F be independent IF-observables and g_n : R^n → R be a Borel measurable function. Then the IF-observable y_n = g_n(x_1, ..., x_n) : B(R) → F is defined by the equality y_n = h_n ∘ g_n^{-1}, where h_n : B(R^n) → F is the n-dimensional IF-observable (the joint IF-observable of x_1, ..., x_n). In particular, the IF-observable y_n = (1/a_n)(max(x_1, ..., x_n) − b_n) is defined by the equality y_n = h_n ∘ g_n^{-1}, where g_n(u_1, ..., u_n) = (1/a_n)(max(u_1, ..., u_n) − b_n).
Definition 3.8. Let (y_n)_n be a sequence of IF-observables in the IF-space (F, m). We say that (y_n)_n converges in distribution to a function Ψ : R → [0, 1] if for each t ∈ R

lim_{n→∞} m(y_n((−∞, t))) = Ψ(t).

Basic notions from extreme value theory

4.1 The Fisher-Tippett-Gnedenko theorem
The next notions of the extreme value theory on real numbers can be found in [4][5][6] and [7].
Let X_1, X_2, ... be independent, identically distributed real-valued random variables with the distribution function F : R → R defined by F(x) = P(X_i < x), x ∈ R. For n ≥ 2 denote by M_n = max(X_1, ..., X_n) the maximum of the first n random variables.
Theorem 4.1. (Fisher-Tippett-Gnedenko) Let X_1, X_2, ... be a sequence of independent, identically distributed random variables. If there exist sequences of real constants a_n > 0, b_n and a nondegenerate distribution function H such that

lim_{n→∞} P((M_n − b_n)/a_n < x) = H(x)

at each point of continuity x of H, then H is the distribution function of one of the following three types of distributions:

Gumbel: Λ(x) = exp(−exp(−(x − µ)/σ)), x ∈ R;

Frechet: Φ_α(x) = 0 for x ≤ µ and Φ_α(x) = exp(−((x − µ)/σ)^{−α}) for x > µ, where α > 0;

Weibull: Ψ_α(x) = exp(−(−(x − µ)/σ)^α) for x < µ and Ψ_α(x) = 1 for x ≥ µ, where α > 0.

The parameter µ ∈ R is the location parameter and the parameter σ > 0 is the scale parameter. The Gumbel, Frechet and Weibull distributions from Theorem 4.1 can be described using the generalized extreme value distribution (GEV):

H_ε(x) = exp(−(1 + ε(x − µ)/σ)^{−1/ε}) for 1 + ε(x − µ)/σ > 0, ε ≠ 0,
H_0(x) = exp(−exp(−(x − µ)/σ)), x ∈ R.

The parameter ε is called the shape parameter.
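The convergence of normalized maxima can be observed numerically. The following simulation is our own illustration (the choice of Exp(1) inputs and the constants a_n = 1, b_n = log n are assumptions, not from the paper): for i.i.d. Exp(1) variables, M_n − log n converges in distribution to the Gumbel law, i.e. the GEV with ε = 0.

```python
# Simulated normalized maxima of Exp(1) samples versus the Gumbel CDF
# Lambda(x) = exp(-exp(-x)); here a_n = 1 and b_n = log n.
import math
import random

random.seed(0)
n, trials = 500, 2000
norm_max = [max(random.expovariate(1.0) for _ in range(n)) - math.log(n)
            for _ in range(trials)]

gumbel = lambda x: math.exp(-math.exp(-x))
emp = {x: sum(s < x for s in norm_max) / trials for x in (-1.0, 0.0, 1.0, 2.0)}
for x in sorted(emp):
    print(f"x={x:+.1f}  empirical={emp[x]:.3f}  Gumbel={gumbel(x):.3f}")
```

With n = 500 and 2000 repetitions the empirical distribution function is already close to the Gumbel limit at every printed point.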

The Pickands-Balkema-de Haan theorem
The Fisher-Tippett-Gnedenko theorem of Section 4.1 describes the convergence in distribution of maxima of independent, identically distributed random variables. An alternative to the method of maxima is the method that models all observations exceeding a predefined boundary (i.e., a threshold). Such extremes occur "near" the upper end of the support of the distribution, hence intuitively the asymptotic behaviour of M_n must be related to the distribution function F in its right tail near the right endpoint. We denote by

x_F = sup{x ∈ R : F(x) < 1}

the right endpoint of F (see [4][5][6] and [7]).

Definition 4.2. (Maximum domain of attraction -MDA)
We say that the distribution function F of X_i belongs to the maximum domain of attraction of the extreme value distribution H if there exist constants a_n > 0, b_n ∈ R such that

lim_{n→∞} P((M_n − b_n)/a_n < x) = H(x)

holds at each point of continuity x of H. We write F ∈ MDA(H).

Definition 4.3. (Excess distribution function)
Let X be a random variable with distribution function F and right endpoint x_F. For a fixed u < x_F, u > 0, the function

F_u(x) = P(X − u ≤ x | X > u), 0 ≤ x < x_F − u,

is the excess distribution function of the random variable X (of the distribution function F) over the threshold u.
Remark 4.4. The excess distribution function F_u can be expressed in the following form:

F_u(x) = (F(x + u) − F(u))/(1 − F(u)), 0 ≤ x < x_F − u.

Definition 4.5. (Generalized Pareto distribution - GPD) Define the distribution function G_{ε,β} by

G_{ε,β}(x) = 1 − (1 + εx/β)^{−1/ε} for ε ≠ 0,
G_{0,β}(x) = 1 − exp(−x/β),

where x ≥ 0 for ε ≥ 0, 0 ≤ x ≤ −β/ε for ε < 0, and β > 0 is the scale parameter. G_{ε,β} is called the generalised Pareto distribution. We can extend the family by adding a location parameter ν ∈ R; then we get the function G_{ε,ν,β} by replacing the argument x above by x − ν in G_{ε,β}. The support has to be adjusted accordingly.
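Remark 4.4 and Definition 4.5 can be checked numerically. In this sketch (our own; function names are illustrative) we use the Exp(1) distribution, whose memoryless property makes the excess distribution over any threshold u equal to the GPD G_{0,1}:

```python
# Excess distribution F_u(x) = (F(x+u) - F(u)) / (1 - F(u)) and the GPD CDF;
# for Exp(1) the two coincide exactly at every threshold u.
import math

def excess_cdf(F, u, x):
    """F_u(x) for 0 <= x below x_F - u, following Remark 4.4."""
    return (F(x + u) - F(u)) / (1.0 - F(u))

def gpd_cdf(x, eps, beta):
    """G_{eps,beta}(x); the eps = 0 case is the exponential CDF 1 - exp(-x/beta)."""
    if eps == 0.0:
        return 1.0 - math.exp(-x / beta)
    return 1.0 - (1.0 + eps * x / beta) ** (-1.0 / eps)

F = lambda t: 1.0 - math.exp(-t) if t > 0 else 0.0   # Exp(1)
print(round(excess_cdf(F, 2.0, 1.0), 6))  # 0.632121
print(round(gpd_cdf(1.0, 0.0, 1.0), 6))   # 0.632121
```

Both values equal 1 − e^{−1}, confirming that for the exponential law the excesses over u = 2 are again Exp(1), i.e. G_{0,1}.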
Remark 4.6. The GPD transforms into a number of other distributions depending on the value of ε. When ε > 0, it takes the form of the ordinary Pareto distribution. This case would be most relevant for financial time series data as it has a heavy tail. If ε = 0, the GPD corresponds to exponential distribution, and it is called a short-tailed, Pareto II type distribution for ε < 0.
Theorem 4.7. (Pickands-Balkema-de Haan) Let F be a distribution function with right endpoint x_F. For every ε ∈ R, F ∈ MDA(H_ε) if and only if

lim_{u→x_F} sup_{0≤x<x_F−u} |F_u(x) − G_{ε,β(u)}(x)| = 0

for some positive function β.
Remark 4.8. Theorem 4.7 says that, for some function β to be estimated from the data, the excess distribution F_u converges to the generalised Pareto distribution G_{ε,β} for large u.
Remark 4.9. The GEV H ε , ε ∈ R, describes the limit distribution of normalised maxima. The GPD G ε,β , ε ∈ R, β > 0, appears as the limit distribution of scaled excesses over high thresholds.
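Remark 4.9 can be made concrete with our own illustrative computation (the Pareto tail and the parameter values are assumptions, not from the paper): for an exact Pareto distribution the excess distribution is a GPD for every threshold, with no limit needed.

```python
# For F(t) = 1 - t^(-alpha), t >= 1 (Pareto), the excess distribution over
# any threshold u >= 1 is exactly the GPD with eps = 1/alpha, beta = u/alpha.
alpha, u = 3.0, 10.0
F = lambda t: 1.0 - t ** (-alpha) if t >= 1.0 else 0.0
F_u = lambda x: (F(u + x) - F(u)) / (1.0 - F(u))   # excess distribution
eps, beta = 1.0 / alpha, u / alpha                 # matching GPD parameters
G = lambda x: 1.0 - (1.0 + eps * x / beta) ** (-1.0 / eps)

for x in (0.5, 2.0, 10.0):
    print(f"x={x:>4}: F_u={F_u(x):.6f}  GPD={G(x):.6f}")
```

Both columns agree because F_u(x) = 1 − ((u + x)/u)^{−α} = G_{1/α, u/α}(x); the heavy-tailed case ε > 0 of Remark 4.6 is exactly this Pareto situation.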

The Pickands-Balkema-de Haan theorem for the IF-case
Now we return to the IF-case. First we recall the Fisher-Tippett-Gnedenko theorem for a sequence of independent, equally distributed IF-observables, see [3].
Theorem 5.1. (Fisher-Tippett-Gnedenko) Let x_1, x_2, ... be a sequence of independent, identically distributed IF-observables such that D²(x_n) = σ², E(x_n) = µ (n = 1, 2, ...). If there exist sequences of real constants a_n > 0, b_n and a non-degenerate distribution function H such that the sequence y_n = (1/a_n)(max(x_1, ..., x_n) − b_n) converges in distribution to H, then H is the distribution function of one of the following three types of distributions:

Gumbel: Λ(x) = exp(−exp(−(x − µ)/σ)), x ∈ R;

Frechet: Φ_α(x) = 0 for x ≤ µ and Φ_α(x) = exp(−((x − µ)/σ)^{−α}) for x > µ, where α > 0;

Weibull: Ψ_α(x) = exp(−(−(x − µ)/σ)^α) for x < µ and Ψ_α(x) = 1 for x ≥ µ, where α > 0.

There the parameter µ ∈ R is the location parameter and the parameter σ > 0 is the scale parameter.
Let x be an IF-observable on F and let F be the IF-distribution function of x. We denote by

t_F = sup{t ∈ R : F(t) < 1}

the right endpoint of the IF-distribution function F.

Definition 5.2. (Maximum domain of attraction for the IF-case)
We say that the IF-distribution function F belongs to the maximum domain of attraction of the extreme value distribution H if there exist constants a_n > 0, b_n ∈ R such that the sequence y_n = (1/a_n)(max(x_1, ..., x_n) − b_n) converges in distribution to H, i.e.

lim_{n→∞} m(y_n((−∞, t))) = H(t)

holds for each t ∈ R. We write F ∈ MDA(H).

Definition 5.3. (Excess IF-distribution function)
Let F be an IF-distribution function with right endpoint t_F. For a fixed u < t_F, u > 0, the function

F_u(t) = (F(t + u) − F(u))/(1 − F(u)), 0 ≤ t < t_F − u,

is the excess IF-distribution function of F over the threshold u.

Theorem 5.4. (Pickands-Balkema-de Haan for the IF-case) For every ε ∈ R, F ∈ MDA(H_ε) if and only if

lim_{u→t_F} sup_{0≤t<t_F−u} |F_u(t) − G_{ε,β(u)}(t)| = 0

for some positive function β.
Proof. Let (x_n)_n be a sequence of independent IF-observables in (F, m) with the same IF-distribution function F. Consider the probability space (R^N, σ(C), P) and the random variables ξ_n((t_i)_i) = t_n (n = 1, 2, ...).

Then by Theorem 3.5 the random variables ξ_n are independent. Denote by F* the distribution function of the random variable ξ_n. We can see that F* = F and x_{F*} = t_F, because

F*(t) = P(ξ_n < t) = m(x_n((−∞, t))) = F(t)

for each t ∈ R. Hence (F*)_u = F_u for each threshold u. Therefore, from the classical Pickands-Balkema-de Haan theorem (see Theorem 4.7) we obtain, for every ε ∈ R, that F ∈ MDA(H_ε) if and only if

lim_{u→t_F} sup_{0≤t<t_F−u} |F_u(t) − G_{ε,β(u)}(t)| = 0

for some positive function β.

Remark 5.5. Theorem 5.4 says that, for some function β to be estimated from the data, the excess IF-distribution F_u converges to the generalised Pareto distribution G_{ε,β} for large u.

Conclusion
We have proved a very important assertion of mathematical statistics for IF-observables in IF-theory. Evidently the results can also be applied to fuzzy set theory. On the other hand, families of IF-events may be embedded into suitable MV-algebras. Therefore it would be useful to try to extend the Pickands-Balkema-de Haan theorem to probability MV-algebras.