CDA-OPTIMUM DESIGN FOR PARAMETER ESTIMATION, MINIMIZING THE AVERAGE VARIANCE AND ESTIMATING THE AREA UNDER THE CURVE

The aim of this paper is to introduce a new compound optimum design, named CDA, obtained by combining C-optimality, D-optimality, and A-optimality. The significance of the proposed compound criterion stems from the fact that it can be used for parameter estimation, minimizing the average variance, and model estimation simultaneously.


INTRODUCTION
Cook and Wong [2] considered a compound optimality criterion that is a convex combination of two concave criteria, so that the optimal design can be found directly as if it were a single-objective optimal design problem. D-optimality focuses on the variances of the estimates of the coefficients in the model: it minimizes the determinant of (X^T X)^{-1}, which is equivalent to maximizing the determinant of X^T X. An exact design is called D-optimal if it minimizes the determinant of the covariance matrix. C-optimality is concerned with estimating a linear combination c^T θ of the parameters with minimum variance, where c is a known vector of constants. In A-optimality, tr(M^{-1}), the total variance of the parameter estimates, is minimized, which is equivalent to minimizing the average variance. This paper is organized as follows: the C-, D-, and A-optimum designs are introduced in Section 2; the CDA-optimality criterion is derived in Section 3 and some of its properties are discussed; the generalized CDA-optimum design is introduced in Section 4.
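The three criteria described above can be computed directly from a design matrix. The sketch below uses a hypothetical 4-run design for a simple linear model y = b0 + b1·x (the design points and the vector c are illustrative choices, not taken from the paper):

```python
import numpy as np

# Hypothetical 4-run design for y = b0 + b1*x with runs at x = -1, -1, 1, 1
X = np.array([[1.0, -1.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0,  1.0]])

M = X.T @ X                  # information matrix X^T X
M_inv = np.linalg.inv(M)     # dispersion matrix (covariance up to sigma^2)

c = np.array([0.0, 1.0])     # interest in the slope b1 alone (illustrative)

d_crit = np.linalg.det(M)    # D-optimality: maximize det(X^T X)
a_crit = np.trace(M_inv)     # A-optimality: minimize tr((X^T X)^{-1})
c_crit = c @ M_inv @ c       # C-optimality: minimize c^T (X^T X)^{-1} c

print(d_crit, a_crit, c_crit)
```

For this balanced design M is diagonal, so the three quantities can be checked by hand: det(M) = 16, tr(M^{-1}) = 0.5, and the slope variance c^T M^{-1} c = 0.25.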

C -, D -, A -OPTIMUM DESIGNS
C-Optimum Design
C-optimality was introduced by Elfving [2], who provided a geometrical interpretation for finding c-optimal designs; it was developed further by Silvey and Titterington [10] and Titterington [11]. Fellman [4] justified that at most m linearly independent support points are needed for a c-optimal design. Pukelsheim and Torsney [9] introduced a method for computing c-optimal weights given the support points. C-optimality minimizes the variance of the best linear unbiased estimate of a given linear combination c^T θ of the model parameters, where c is a vector of known constants. The c-optimality criterion to be minimized is thus Φ_C(ξ) = c^T M^{-1}(ξ) c, so the aim of c-optimality is to obtain the best design for estimating the linear combination c^T θ. The efficiency of any design ξ relative to the C-optimum design ξ* is defined as Eff_C(ξ) = c^T M^{-1}(ξ*) c / c^T M^{-1}(ξ) c. A disadvantage of c-optimum designs is that they are often singular.
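The c-efficiency ratio above can be illustrated with two small designs; the design matrices and the vector c below are hypothetical examples, not designs from the paper:

```python
import numpy as np

def c_criterion(X, c):
    # c^T (X^T X)^{-1} c : variance (up to sigma^2) of the BLUE of c^T theta
    return c @ np.linalg.inv(X.T @ X) @ c

c = np.array([0.0, 1.0])                    # estimate the slope only
X_star = np.array([[1., -1.], [1., 1.]])    # reference design (illustrative)
X_alt  = np.array([[1., -1.], [1., 0.]])    # competing design (illustrative)

# Eff_C(xi) = c^T M^{-1}(xi*) c / c^T M^{-1}(xi) c
eff = c_criterion(X_star, c) / c_criterion(X_alt, c)
```

Here the reference design gives slope variance 0.5 while the competitor gives 2.0, so the competitor's c-efficiency is 0.25.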

D-Optimum Design
The D-optimum design is one of the most commonly used design criteria for the linear regression model and is also known as the determinant criterion. It was introduced by Wald [12] and later called D-optimality by Kiefer and Wolfowitz [5]. D-optimality is the most common criterion owing to the numerous applications found in the literature; see, for example, Latif and Zafar Yab [6] and Poursina and Talebi [8]. The D-optimality criterion maximizes the determinant of the Fisher information matrix M = X^T X; that is, the optimal design matrix X* contains the n experiments that maximize the determinant of X^T X. Maximizing the determinant of the information matrix is equivalent to minimizing the determinant of the dispersion matrix (X^T X)^{-1}. Using this idea, the D-efficiency of an arbitrary design X is naturally defined as Eff_D(X) = ( |X^T X| / |X*^T X*| )^{1/p}, where p is the number of model parameters.
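A minimal sketch of the D-efficiency computation, assuming two hypothetical two-point designs on [-1, 1] (the matrices are illustrative, not from the paper):

```python
import numpy as np

def d_efficiency(X, X_star):
    # Eff_D = ( |X^T X| / |X*^T X*| )^(1/p), with p = number of parameters
    p = X.shape[1]
    return (np.linalg.det(X.T @ X) / np.linalg.det(X_star.T @ X_star)) ** (1.0 / p)

X_star = np.array([[1., -1.], [1., 1.]])   # endpoints design, p = 2
X_alt  = np.array([[1., -0.5], [1., 1.]])  # sub-optimal competitor

eff = d_efficiency(X_alt, X_star)
```

Here |X*^T X*| = 4 and |X_alt^T X_alt| = 2.25, so the competitor's D-efficiency is (2.25/4)^{1/2} = 0.75. The 1/p exponent makes the efficiency behave like a per-parameter scale factor.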

CDA-OPTIMUM DESIGN
To achieve parameter estimation, minimization of the average variance, and model estimation of the area under the curve simultaneously, a new compound criterion called CDA is introduced. CDA is constructed by combining C-, D-, and A-optimality: a weighted product of the efficiencies, Eff_C(ξ)^{κ_C} · Eff_D(ξ)^{κ_D} · Eff_A(ξ)^{κ_A}, is maximized. Taking the logarithm gives κ_C log Eff_C(ξ) + κ_D log Eff_D(ξ) + κ_A log Eff_A(ξ), where κ_C, κ_D, and κ_A are nonnegative constants, and a maximum is found over ξ. Hence the CDA-criterion to be maximized is this weighted sum of log-efficiencies.
Theorem 1. For any non-optimum design ξ_1, that is, a design for which the CDA-criterion does not attain its maximum, a measure of efficiency of the design relative to a CDA-optimum design is given by the corresponding efficiency ratio. The proof can be made directly: since the criterion is a convex combination of three optimum design criteria, the CDA-criterion is also convex and satisfies the convexity conditions.
December 22, 2015

Properties of CDA-Optimality
A good design should give a small variance matrix; the function Φ is therefore related to the variance matrix and should have the following properties: i. Non-negativity: Φ(M) ≥ 0; ii. Isotonicity: if M* − M is a positive semi-definite matrix, then Φ(M*) ≥ Φ(M).
These properties are important for defining, with a proper scaling, the relative efficiency of an experiment (or a design with information matrix M) with respect to another reference experiment with matrix M*. Pazman [7] discussed some other optimality properties for small samples.
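The two properties can be checked numerically for a concrete choice of Φ. The sketch below takes Φ = trace (one function satisfying both properties) and two hypothetical information matrices; the matrices are illustrative assumptions:

```python
import numpy as np

def is_psd(A, tol=1e-10):
    # A matrix is positive semi-definite iff all eigenvalues of its
    # symmetric part are nonnegative (up to numerical tolerance)
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) >= -tol))

M_star = np.array([[3.0, 0.0], [0.0, 2.0]])  # "larger" information matrix
M      = np.array([[2.0, 0.0], [0.0, 1.0]])

phi = np.trace                      # trace as one concrete Phi
assert is_psd(M_star - M)           # hypothesis of isotonicity holds
assert phi(M_star) >= phi(M) >= 0   # isotonicity and non-negativity hold
```

For the trace, isotonicity follows because tr(M* − M) ≥ 0 whenever M* − M is positive semi-definite.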