Search results

You searched for "analysis" and 1,073,035 records were found.

Discriminant analysis for two data sets in ℝ^d with probability densities f and g can be based on the estimation of the set G = {x : f(x) ≥ g(x)}. We consider applications where it is appropriate to assume that the region G has a smooth boundary. In particular, this assumption makes sense if discriminant analysis is used as a data analytic tool. We discuss optimal rates for estimation of G.
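The set G = {x : f(x) ≥ g(x)} admits a natural plug-in estimate: replace f and g with density estimates and compare them pointwise. A minimal sketch, using Gaussian kernel density estimates on synthetic two-dimensional samples (the data, bandwidths, and sample sizes here are illustrative assumptions, not the abstract's setting):

```python
# Plug-in sketch of G = {x : f(x) >= g(x)} via kernel density estimates.
# The samples below are hypothetical; the abstract studies optimal rates,
# not this particular estimator.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Two synthetic samples in R^2 with shifted means.
sample_f = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
sample_g = rng.normal(loc=[2.0, 0.0], scale=1.0, size=(500, 2))

# gaussian_kde expects data of shape (d, n).
f_hat = gaussian_kde(sample_f.T)
g_hat = gaussian_kde(sample_g.T)

def in_G(point):
    """Plug-in estimate: True where f_hat(x) >= g_hat(x)."""
    pts = np.atleast_2d(point).T
    return f_hat(pts) >= g_hat(pts)

print(in_G([0.0, 0.0]))   # near the mean of f -> [ True]
print(in_G([2.0, 0.0]))   # near the mean of g -> [False]
```

Classifying a new observation then reduces to checking membership in the estimated set.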
When it comes to health care, everybody – medical professionals, policymakers and patients – wants to know what works and what does not. Every day clinicians debate, implicitly or explicitly, whether new research findings are convincing enough to change the way they practice. The quality of research varies, and so much information is being produced that it is impossible for anyone to know and evaluate it all. Traditionally, randomized controlled trials are considered the gold-standard study design. However, if they report discordant results and generate controversies, then what should we look for? The answer to this imbroglio is meta-analysis. Steps in designing and conducting a meta-analysis involve describing the purpose of the meta-analysis, designing a research question, searching for studies, specifying study selection (inclusion and excl...
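Once the studies are selected, the core pooling step is usually inverse-variance weighting. A minimal sketch of a fixed-effect pooled estimate with a 95% confidence interval (the effect sizes and standard errors are hypothetical, and a real meta-analysis would also assess heterogeneity before choosing a fixed- or random-effects model):

```python
# Fixed-effect inverse-variance pooling; study-level effects and standard
# errors below are made up for illustration.
import math

effects = [0.30, 0.45, 0.25, 0.40]   # per-study effect estimates
ses = [0.10, 0.15, 0.12, 0.08]       # per-study standard errors

weights = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_lo, ci_hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI ({ci_lo:.3f}, {ci_hi:.3f})")
```

Studies with smaller standard errors dominate the pooled estimate, which is exactly how discordant trials of differing precision get reconciled.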
Comment: 11 pages
Comment: 176 pages
This dissertation consists of four essays which investigate efficiency analysis, especially when non-discretionary inputs exist. A new approach of the multi-stage Data Envelopment Analysis (DEA) for non-discretionary inputs, statistical inference discussions, and applications are provided. In the first essay, I propose a multi-stage DEA model to address the non-discretionary input issue, and provide a simulation analysis that illustrates the implementation and potential advantages of the new approach relative to the leading existing multi-stage models of non-discretionary inputs, such as Ruggiero's 1998 model and Fried, Lovell, Schmidt, and Yaisawarng's 2002 model. Furthermore, the simulation results also suggest that the constant returns to scale assumption seems to be preferred when observations have similar sizes, but variable retur...
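The envelopment form of a basic DEA model is a linear program. A minimal sketch of the textbook input-oriented CCR model under constant returns to scale (the input/output data are hypothetical, and this is the standard single-stage model, not the dissertation's multi-stage approach for non-discretionary inputs):

```python
# Input-oriented CCR DEA efficiency via a linear program.
# Data are hypothetical: 2 inputs, 1 output, 3 DMUs (columns).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0],     # inputs: rows = inputs, cols = DMUs
              [3.0, 2.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0]])    # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(k):
    """min theta s.t. X @ lam <= theta * X[:, k], Y @ lam >= Y[:, k],
    lam >= 0 (constant returns to scale)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [k]], X])           # X@lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```

DMUs 0 and 1 lie on the frontier (efficiency 1), while DMU 2 is dominated by a convex combination of the other two; non-discretionary inputs are precisely the inputs a DMU cannot scale down by theta, which is what motivates the multi-stage variants discussed in the essays.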
Meditation training has been shown to improve attention and emotion regulation. However, the mechanisms responsible for these effects are largely unknown. In order to make further progress, a rigorous interdisciplinary approach that combines both empirical and theoretical experiments is required. This dissertation uses such an approach to analyze electroencephalogram (EEG) data collected during two three-month long intensive meditation retreats in four steps. First, novel tools were developed for preprocessing the EEG data. These tools helped remove ocular artifacts, muscular artifacts, and interference from power lines in a semi-automatic fashion. Second, in order to identify the cortical correlates of meditation, longitudinal changes in the cortical activity were measured using spectral analysis. Three main longitudinal changes wer...
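The spectral-analysis step can be illustrated in a few lines: estimate a power spectral density with Welch's method and integrate it over frequency bands. The signal below is synthetic (a 10 Hz "alpha" oscillation plus noise), standing in for a cleaned EEG channel; it is not retreat data:

```python
# Band power from a single (synthetic) EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256.0                                   # sampling rate in Hz
t = np.arange(0, 10, 1.0 / fs)               # 10 s of signal
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(low, high):
    """Integrate the PSD over [low, high] Hz."""
    mask = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[mask], freqs[mask])

alpha = band_power(8, 12)    # dominated by the injected 10 Hz component
beta = band_power(13, 30)
print(f"alpha power = {alpha:.4f}, beta power = {beta:.4f}")
```

Tracking such band powers across sessions is one simple way to quantify the longitudinal changes the dissertation measures.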
In this chapter we present a technique for the analysis of customer satisfaction based on a dimensionality reduction approach. This technique, usually referred to as Nonlinear Principal Component Analysis (NPCA), assumes that the observed ordinal variables can be mapped into a one-dimensional quantitative variable, but unlike Linear Principal Component Analysis, does not require the adoption of an a priori difference between classification categories and does not presuppose a linear relation among the observed variables. So, neither the weights of the variables nor the differences between their categories are assumed, and both are suitably determined through the data as the solution of an optimization problem. The main features of Nonlinear Principal Component Analysis are illustrated, the problem of missing data is dealt with, and sev...
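The quantification-by-optimization idea can be sketched with a simple alternating procedure: fit a one-dimensional component to the current quantifications, then re-quantify each ordinal item monotonically against the component scores. This is a simplified stand-in for NPCA, not the chapter's algorithm; the survey-style items, the use of isotonic regression for the monotone step, and the single-component fit are all assumptions for illustration:

```python
# Alternating sketch of nonlinear (ordinal) principal component analysis.
# Hypothetical data: three ordinal items (categories 0..4) driven by one
# latent score.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
latent = rng.standard_normal(300)
X = np.column_stack([
    np.digitize(latent + 0.4 * rng.standard_normal(300),
                [-1.0, -0.3, 0.3, 1.0])
    for _ in range(3)
])

Q = X.astype(float)                        # start from the raw codes
for _ in range(20):
    Z = (Q - Q.mean(0)) / Q.std(0)         # standardised quantifications
    score = Z.mean(axis=1)                 # one-dimensional component
    for j in range(Q.shape[1]):            # monotone re-quantification
        Q[:, j] = IsotonicRegression().fit_transform(X[:, j], score)

corrs = [np.corrcoef(Q[:, j], score)[0, 1] for j in range(3)]
print(np.round(corrs, 3))
```

The key point matches the abstract: category spacings are not fixed a priori but emerge from the optimization, constrained only to respect the ordinal ordering.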
Our aim is to construct a factor analysis method that can resist the effect of outliers. For this we start with a highly robust initial covariance estimator, after which the factors can be obtained from maximum likelihood or from principal factor analysis (PFA). We find that PFA based on the minimum covariance determinant scatter matrix works well. We also derive the influence function of the PFA method based on either the classical scatter matrix or a robust matrix. These results are applied to the construction of a new type of empirical influence function (EIF), which is very effective for detecting influential data. To facilitate the interpretation, we compute a cutoff value for this EIF. Our findings are illustrated with several real data examples. (C) 2003 Elsevier Science (USA). All rights reserved.
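The core recipe can be sketched directly: compute a minimum covariance determinant (MCD) scatter matrix, then run principal factor analysis on it instead of on the classical covariance. The data, outlier pattern, and factor count below are hypothetical, and the squared-multiple-correlation communalities are one common PFA initialisation, not necessarily the paper's:

```python
# Principal factor analysis on a robust MCD scatter matrix.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
# One latent factor drives three observed variables, plus gross outliers.
factor = rng.standard_normal(200)
X = np.column_stack([factor + 0.3 * rng.standard_normal(200)
                     for _ in range(3)])
X[:5] = 10.0                               # contaminated rows

def principal_factor_loadings(cov, n_factors=1):
    """PFA: eigendecompose the correlation matrix with squared multiple
    correlations on the diagonal as initial communalities."""
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(corr))
    reduced = corr.copy()
    np.fill_diagonal(reduced, smc)
    vals, vecs = np.linalg.eigh(reduced)
    order = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, order] * np.sqrt(vals[order])

robust_cov = MinCovDet(random_state=0).fit(X).covariance_
loadings = principal_factor_loadings(robust_cov)
print(np.round(np.abs(loadings), 2))       # all variables load strongly
```

Because the MCD estimate ignores the contaminated rows, the loadings recover the single-factor structure that the outliers would otherwise distort.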