How to report the percentage of explained common variance in exploratory factor analysis
UNIVERSITAT ROVIRA I VIRGILI

How to report the percentage of explained common variance in exploratory factor analysis

Tarragona, 2013

Please reference this document as: Lorenzo-Seva, U. (2013). How to report the percentage of explained common variance in exploratory factor analysis. Technical Report. Department of Psychology, Universitat Rovira i Virgili, Tarragona. Document available at:
Contents

1. Percentage of explained variance as an index of goodness of fit
2. Percentage of explained variance in principal component analysis
3. Percentage of explained common variance in exploratory factor analysis
   3.1. Principal axis factor analysis
   3.2. Minimum rank factor analysis
4. Usefulness of assessing the percentage of explained common variance in exploratory factor analysis
1. Percentage of explained variance as an index of goodness of fit

A popular and intuitive index of goodness of fit in multivariate data analysis is the percentage of explained variance: the higher the percentage of variance a proposed model manages to explain, the more valid the model seems to be. In this document we study how this index can be reported in the context of exploratory factor analysis.

To make the text more comprehensible, we base our explanations on the analysis of a particular dataset of eight observed variables; the means, standard deviations, and variances are shown in Table 1. We suspect that an unknown number of latent variables may explain the relationships between the eight observed variables.

Table 1. Univariate descriptive statistics (columns: Variable, Mean, Standard deviation, Variance)

In multivariate data analysis the relationship between observed variables is typically described using the standardized variance/covariance matrix (i.e., the correlation matrix shown in Table 2). As can be observed, the value of the variance in the correlation matrix is 1 for all the variables (and not the variance values shown in Table 1): the reason for this is that the variables have been standardized. As this is quite a simple dataset, mere visual inspection of the correlation matrix provides important information:

1. Variables 1, 2, 3, and 4 seem to be mainly related among themselves, as are variables 5, 6, 7, and 8. These two independent clusters of variables could be explained by the existence of two independent latent variables, each of which is responsible for the variability of one cluster of observed variables.
2. Even if two clusters of observed variables seem to exist in the data, the correlation values among variables are systematically low. This result indicates that the observed variables in each cluster do not share a large amount of variance (i.e., the amount of common variance, also known as communality, is low).

Table 2. Correlation matrix among the eight variables (V1–V8). Correlation values larger than .20 are printed in bold

If the aim is to make an analytical study of the information in the correlation matrix in terms of the underlying latent variables, the most appropriate technique available in multivariate data analysis is Exploratory Factor Analysis (EFA). The aim of EFA is to determine the latent structure of a particular dataset by discovering common factors (i.e., the latent variables). In this regard, EFA accounts for the common variance (i.e., the variance shared among the observed variables). In the analysis, the common variance is partitioned from unique variance and error variance, so that only the common variance is present in the factor structure: this means that the percentage of explained variance should be reported in terms of common variance (i.e., the percentage of explained common variance should be reported).

Researchers often compute Principal Component Analysis (PCA) as an approximation of EFA. The aim of PCA is to explain as much of the variance of the observed variables as possible using a few composite variables (usually referred to as components). PCA does not discriminate between common variance and unique variance. Whether PCA is a proper approximation of EFA is a controversy to which Multivariate Behavioral Research devoted a special issue edited by Mulaik (1992). Thompson (1992) argued that the practical difference between the two approaches (PCA vs. EFA) is often negligible in terms of interpretation. On the other hand, Gorsuch (1983)
concluded that the differences in results decrease as (a) the score reliability of the measured variables increases, or (b) the number of variables measured increases. Snook and Gorsuch (1989) added that, when only a few variables are being analyzed or the communality is low, PCA and EFA produce divergent results.

In the problem that concerns us (reporting the percentage of explained variance), computing PCA is appealing because: (a) the percentage of explained variance is an immediate index of goodness of fit in PCA; and (b) it is not obvious how to compute the percentage of explained common variance in EFA. Unfortunately, our dataset presents precisely the situations (few observed variables and low communality) in which PCA is not an appropriate approximation of EFA. Given our pedagogical aim, the rest of this document focuses on: (a) how to obtain the percentage of explained variance in PCA; (b) why it is not possible to compute the percentage of explained common variance with most factor analysis methods; (c) how to compute the percentage of explained common variance in an EFA; and (d) the advantages of being able to report the percentage of explained common variance in an EFA.

2. Percentage of explained variance in principal component analysis

PCA aims to summarize the information in a correlation matrix. The total amount of variance in the correlation matrix can be calculated by adding the values on the diagonal: as each element on the diagonal has a value of 1, the total amount of variance equals the number of observed variables. In our dataset, the total amount of variance is 8. This total amount of variance can be partitioned into parts, each of which represents the variance of one component. The eigenvalues printed in Table 3 represent the amount of variance associated with each component. If the eigenvalues are added, the resulting total equals the total variance in the correlation matrix (i.e., the sum is 8). The percentage of explained variance of each component can be easily computed as the corresponding eigenvalue divided by the total variance: for example, the percentage of variance explained by the first component is its eigenvalue divided by 8, which equals .28; in other words, the first component accounts for 28% of the variance. When the percentage of explained variance is reported for a particular dataset, the value actually reported is the sum of the percentages of explained variance of the retained components (i.e., the accumulated percentage of explained variance). Table 3 shows that if the aim were to explain 100% of the variance in the correlation matrix, then we would need to retain as many components as observed variables (which would make no sense at all).
However, the idea is to select an optimal number of components, which can be defined as the minimum number of components that accounts for the maximum possible variance. In our visual inspection of the correlation matrix in Table 2, we already intuited that retaining two components should be enough for our dataset. However, if only two components are retained, the accumulated percentage of explained variance (46.3%) would suggest a poor fit of the component solution. To achieve an acceptable fit, it seems that we should retain at least five components. It seems clear that the percentage of explained variance does not suggest an optimal number of components to retain.

Table 3. Eigenvalues and percentages of variance associated with each component (columns: Component, Eigenvalue, Percentage of explained variance, Accumulated percentage of explained variance)
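As a concrete illustration of the computation behind Table 3, the following Python sketch derives per-component and accumulated percentages of explained variance from the eigenvalues of a correlation matrix. The matrix `R` built here is a hypothetical stand-in with a two-cluster structure similar to the one described above, not the actual dataset of Table 2.

```python
import numpy as np

# Hypothetical stand-in for the correlation matrix in Table 2:
# eight variables forming two independent clusters.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
X[:, 1:4] += X[:, [0]]   # cluster 1: variables 1-4
X[:, 5:8] += X[:, [4]]   # cluster 2: variables 5-8
R = np.corrcoef(X, rowvar=False)

# Total variance = trace of the correlation matrix = number of variables.
total_variance = np.trace(R)   # 8.0

# The eigenvalues partition the total variance among the components.
eigenvalues = np.linalg.eigvalsh(R)[::-1]   # sorted largest first
assert np.isclose(eigenvalues.sum(), total_variance)

explained = eigenvalues / total_variance
accumulated = np.cumsum(explained)

for i, (ev, pct, acc) in enumerate(zip(eigenvalues, explained, accumulated), 1):
    print(f"Component {i}: eigenvalue {ev:.3f}, "
          f"explained {pct:.1%}, accumulated {acc:.1%}")
```

Applied to the document's actual correlation matrix, the same steps would reproduce the columns of Table 3.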
The reason why the percentage of explained variance does not properly describe the goodness of fit here is that PCA is not a proper approximation of EFA for this dataset. If we continue the analysis with our initial idea of retaining two components and rotate the loading matrix with Varimax (Kaiser, 1958), the simplicity of the component solution that we obtain seems to reinforce our initial intuition (i.e., to retain only two components). Table 4 shows this rotated two-component solution.

Table 4. Loading matrix of the component solution after Varimax rotation (variables v1–v8; columns: Component 1, Component 2). Salient loading values are printed in bold

In a typical real situation, probably involving many more latent variables, few observed variables per latent variable, and low communality, visual inspection of the correlation matrix would be useless. In addition, as we have seen in our example, PCA would not help us to make the proper decisions either. In such a situation, the wisest decision is to compute the most appropriate technique available in multivariate data analysis when the aim is to study the information in the correlation matrix in terms of the underlying latent variables: that is, to compute an EFA.

3. Percentage of explained common variance in exploratory factor analysis

As mentioned above, in EFA only the common variance is present in the factor structure, and the percentage of explained variance should be reported in terms of common variance (i.e., the percentage of explained common variance). However, the percentage of explained common variance cannot be computed with most factor analysis methods. To show this, we analyze our dataset using Principal Axis Factor (PAF) analysis in the section below. We use PAF because it is quite a straightforward method, but the conclusion we draw can be generalized to most factor analysis methods (such as Unweighted Least Squares or Maximum Likelihood factor analysis). The only method that enables the percentage of explained common variance to be computed is Minimum Rank Factor Analysis (MRFA).
3.1. Principal axis factor analysis

As mentioned above, PCA analyzes the variance contained in a correlation matrix. In EFA, the matrix that is analyzed is known as the reduced correlation matrix: the diagonal elements of the correlation matrix are replaced by estimates of the communality of each variable. In PAF, the squared multiple correlation of each variable with the remaining variables is typically used as the estimate of its communality. Table 5 shows the reduced correlation matrix with these values already placed on the diagonal of the matrix.

Table 5. Reduced correlation matrix analyzed in principal axis factor analysis (V1–V8). Squared multiple correlation values are printed in bold

In a correlation matrix, the total amount of variance is obtained by adding the values on the diagonal of the matrix, and the same is true of a reduced correlation matrix: its total amount of variance is the sum of the squared multiple correlations on the diagonal. It should be noted that this is the total amount of common variance. As the total amount of common variance can be readily obtained, the strategy used in PCA to obtain percentages of explained variance could seemingly be replicated: (a) compute the eigenvalues, and (b) use them as partitions of the common variance. The eigenvalues of the reduced correlation matrix are shown in Table 6, and by adding them we can confirm that they do indeed add up to the total amount of common variance. However, there is an important limitation: some of the eigenvalues are negative (i.e., the reduced correlation matrix is said to be non-positive definite). This means that the eigenvalues cannot be safely interpreted as partitions of the common variance, and the percentages of explained common variance cannot be computed.
Table 6. Eigenvalues associated with the reduced correlation matrix (columns: Factor, Eigenvalue)

The reduced correlation matrix computed in most factor analysis methods is systematically non-positive definite. The typical conclusion is that the percentage of explained common variance cannot be computed in EFA. This is why computer software packages refuse to compute the explained common variance as an index of goodness of fit, and do not even include the eigenvalues associated with the reduced correlation matrix in their output.
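A minimal sketch of this limitation, using squared multiple correlations (obtained from the inverse of the correlation matrix) as the PAF communality estimates; `R` is the same hypothetical stand-in matrix as in the earlier sketch, not the matrix of Table 5.

```python
import numpy as np

# Hypothetical correlation matrix (same construction as in the PCA sketch).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
X[:, 1:4] += X[:, [0]]
X[:, 5:8] += X[:, [4]]
R = np.corrcoef(X, rowvar=False)

# Squared multiple correlations: the usual initial communality
# estimates in principal axis factoring.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))

R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)   # the reduced correlation matrix

total_common_variance = np.trace(R_reduced)   # sum of the SMCs
eigenvalues = np.linalg.eigvalsh(R_reduced)[::-1]

print("Total common variance:", round(total_common_variance, 3))
print("Eigenvalues:", np.round(eigenvalues, 3))
# The smallest eigenvalues are typically negative here, so they cannot
# be interpreted as partitions of the common variance.
print("Negative eigenvalues present:", bool((eigenvalues < 0).any()))
```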
3.2. Minimum rank factor analysis

Minimum Rank Factor Analysis (MRFA; Ten Berge & Kiers, 1991) also analyzes a reduced correlation matrix. However, in MRFA the estimates of communality placed on the diagonal of the reduced correlation matrix are carefully chosen so that the reduced correlation matrix is positive semi-definite (i.e., none of the related eigenvalues is negative): the greatest lower bound to reliability (Woodhouse & Jackson, 1977; Ten Berge, Snijders, & Zegers, 1981) is used as an estimate. Table 7 shows the reduced correlation matrix with these estimates of communality already on the diagonal; the total amount of common variance is again the sum of the values on the diagonal.

Table 7. Reduced correlation matrix analyzed in minimum rank factor analysis (V1–V8). Estimated communality values are printed in bold

The eigenvalues associated with this reduced correlation matrix are shown in Table 8. As can be observed, none of the eigenvalues is negative, and their sum equals the total amount of common variance. In conclusion, the eigenvalues can be considered partitions of the total common variance, and the percentage of explained common variance can be easily computed as the corresponding eigenvalue divided by the total common variance: for example, the eigenvalue of the first factor divided by the total common variance equals .505, so the first factor explains 50.5% of the common variance.
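A full MRFA implementation solves a constrained optimization problem (constrained minimum trace factor analysis) and is beyond a short sketch, but the final bookkeeping step is simple. Assuming a vector `h2` of communality estimates that keeps the reduced matrix positive semi-definite, as MRFA guarantees, the percentages in Table 8 can be obtained as follows; the function name and tolerance are illustrative choices.

```python
import numpy as np

def explained_common_variance(R, h2, tol=1e-8):
    """Per-factor and accumulated percentages of explained common
    variance, given a correlation matrix R and communality estimates h2
    that make the reduced matrix positive semi-definite."""
    R_reduced = np.array(R, dtype=float, copy=True)
    np.fill_diagonal(R_reduced, h2)
    eigenvalues = np.linalg.eigvalsh(R_reduced)[::-1]
    if eigenvalues[-1] < -tol:
        raise ValueError("reduced correlation matrix is not positive "
                         "semi-definite; eigenvalues cannot be read as "
                         "partitions of the common variance")
    total_common = float(np.sum(h2))
    explained = eigenvalues / total_common
    return explained, np.cumsum(explained)
```

With the communality estimates used in the document, the first value of `explained` would be .505 and the second value of the accumulated vector .769, the figures discussed around Table 8.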
Table 8. Eigenvalues and percentages of explained common variance associated with each factor (columns: Factor, Eigenvalue, Percentage of explained common variance, Accumulated percentage of explained common variance)

Table 8 shows that the extraction of two factors accounts for 76.9% of the common variance: this means that a two-factor model is associated with a percentage of explained common variance of 76.9%. As can be seen, using the percentage of explained common variance as a goodness-of-fit index helps to correctly assess the most suitable factor model for our dataset. Let us continue the analysis: we extract two factors and rotate the loading matrix with Varimax (Kaiser, 1958). Table 9 shows the rotated two-factor solution. Again, the simplicity of the factor solution seems to reinforce our initial intuition (i.e., to retain only two factors).

Table 9. Loading matrix of the factor solution after Varimax rotation (variables v1–v8; columns: Factor 1, Factor 2). Salient loading values are printed in bold

If we compare the two-component solution in Table 4 and the two-factor solution in Table 9, we realize that the salient loading values associated with the component solution are larger than those associated with the factor solution. In addition, the Factor Simplicity Index (Lorenzo-Seva, 2003) shows that the two-component solution has the simpler structure (a value of .6342 for the component solution versus .5951 for the factor solution). However, this apparently better component solution is artificial: as PCA confounds common variance and unique variance, the loading values appear larger than they actually should be. For example, Widaman (1993) showed that when the data fit the assumptions of the common factor model, PCA loadings tend to be too high, whereas common factor loadings are very accurate.
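For completeness, the Varimax rotation used for both solutions can be sketched with the standard SVD-based algorithm; this generic raw (non-Kaiser-normalized) version is an illustration under those assumptions, not necessarily the exact routine behind Tables 4 and 9.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a p x k loading matrix
    (raw criterion, standard SVD-based algorithm)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    T = np.eye(k)          # accumulated rotation matrix
    criterion_old = 0.0
    for _ in range(max_iter):
        Lr = L @ T
        # Gradient of the raw varimax criterion with respect to T
        G = L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        T = U @ Vt
        criterion = s.sum()
        if criterion < criterion_old * (1.0 + tol):
            break
        criterion_old = criterion
    return L @ T

# Because the rotation is orthogonal, the communalities (row sums of
# squared loadings) are unchanged; only the distribution of explained
# variance across the factors is redistributed.
```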
4. Usefulness of assessing the percentage of explained common variance in exploratory factor analysis

The percentage of explained common variance can be used as a goodness-of-fit index, and this in itself is an important reason for computing it. However, some situations require this percentage to be computed: this is the case when Parallel Analysis (PA; Horn, 1965) is used to assess the dimensionality of a reduced correlation matrix.

PA was proposed by Horn to assess the dimensionality of a correlation matrix. The idea is to compare the eigenvalues of the empirical correlation matrix to the eigenvalues of random correlation matrices: only the components whose empirical eigenvalues are larger than the corresponding random eigenvalues should be retained. The size of the eigenvalues of a correlation matrix depends, among other things (such as the size of the sample and the shape of the distribution of the observed variables), on the number of variables in the correlation matrix: this means that, in order to be able to compare the empirical and the random eigenvalues, the random correlation matrices must be generated on the basis of the same number of observed variables. As the number of observed variables is the same, the empirical and the random correlation matrices have an identical amount of total variance: this is why eigenvalues from different correlation matrices can be compared. Although PA is usually based on the comparison of eigenvalues, this comparison could also be made in terms of the percentage of explained variance of each component (i.e., the eigenvalue divided by the total amount of variance).

When PA is used to assess the dimensionality of a reduced correlation matrix, the scenario is much more complex. The most important difficulty is related to the total amount of variance of the reduced correlation matrix (i.e., the amount of common variance). In a reduced correlation matrix the total amount of variance is not related to the number of observed variables: if random reduced correlation matrices are generated, each reduced correlation matrix will have a different total amount of common variance (which will probably be low, because random variables are expected to be uncorrelated with one another), even if the same number of observed variables is always used. The consequence is that eigenvalues from different reduced correlation matrices cannot be compared with one another, because they are partitions of different amounts of common variance. The solution is to compare the percentages of explained common variance (instead of the eigenvalues). This is the key idea that we propose for assessing the dimensionality of a reduced correlation matrix with PA (see Timmerman & Lorenzo-Seva, 2011).
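A minimal sketch of classical PA on an unreduced correlation matrix, using the mean of the random eigenvalues as the retention threshold, as in Horn's original proposal; `X` is an n-by-p data matrix and the function name is illustrative. The extension argued for above would replace the raw eigenvalues of reduced matrices with percentages of explained common variance before comparing.

```python
import numpy as np

def parallel_analysis(X, n_random=500, seed=0):
    """Classical Horn (1965) parallel analysis: retain the leading
    components whose empirical eigenvalues exceed the mean eigenvalues
    of random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    empirical = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    random_eigs = np.empty((n_random, p))
    for i in range(n_random):
        Z = rng.standard_normal((n, p))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    threshold = random_eigs.mean(axis=0)
    keep = empirical > threshold
    # Retain only the leading run of components that beat chance.
    n_retain = int(np.argmax(~keep)) if not keep.all() else p
    return n_retain, empirical, threshold
```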
References

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.

Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30.

Kaiser, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23.

Lorenzo-Seva, U. (2003). A factor simplicity index. Psychometrika, 68.

Mulaik, S. A. (Ed.). (1992). Theme issue on principal components analysis. Multivariate Behavioral Research, 27(3).

Snook, S. C., & Gorsuch, R. L. (1989). Component analysis versus common factor analysis: A Monte Carlo study. Psychological Bulletin, 106.

Ten Berge, J. M. F., & Kiers, H. A. L. (1991). A numerical approach to the exact and the approximate minimum rank of a covariance matrix. Psychometrika, 56.

Ten Berge, J. M. F., Snijders, T. A. B., & Zegers, F. E. (1981). Computational aspects of the greatest lower bound to reliability and constrained minimum trace factor analysis. Psychometrika, 46.

Thompson, B. (1992). A partial test distribution for cosines among factors across samples. In B. Thompson (Ed.), Advances in social science methodology (Vol. 2). Greenwich, CT: JAI Press.

Timmerman, M. E., & Lorenzo-Seva, U. (2011). Dimensionality assessment of ordered polytomous items with parallel analysis. Psychological Methods, 16.

Widaman, K. F. (1993). Common factor analysis versus principal component analysis: Differential bias in representing model parameters? Multivariate Behavioral Research, 28.

Woodhouse, B., & Jackson, P. H. (1977). Lower bounds to the reliability of the total score on a test composed of nonhomogeneous items: II. A search procedure to locate the greatest lower bound. Psychometrika, 42.