Methods Matter: Improving Causal Inference in Educational and Social Science Research

Hardcover | September 29, 2010

by Richard J. Murnane, John B. Willett

Educational policy-makers around the world constantly make decisions about how to use scarce resources to improve the education of children. Unfortunately, their decisions are rarely informed by evidence on the consequences of these initiatives in other settings. Nor are decisions typically accompanied by well-formulated plans to evaluate their causal impacts. As a result, knowledge about what works in different situations has been very slow to accumulate. Over the last several decades, advances in research methodology, administrative record keeping, and statistical software have dramatically increased the potential for researchers to conduct compelling evaluations of the causal impacts of educational interventions, and the number of well-designed studies is growing.

Written in clear, concise prose, Methods Matter: Improving Causal Inference in Educational and Social Science Research offers essential guidance for those who evaluate educational policies. Using numerous examples of high-quality studies that have evaluated the causal impacts of important educational interventions, the authors go beyond the simple presentation of new analytical methods to discuss the controversies surrounding each study, and provide heuristic explanations that are also broadly accessible. Murnane and Willett offer strong methodological insights on causal inference, while also examining the consequences of a wide variety of educational policies implemented in the U.S. and abroad. Representing a unique contribution to the literature surrounding educational research, this landmark text will be invaluable for students and researchers in education and public policy, as well as those interested in social science.

Pricing and Purchase Info

$82.50

Ships within 1-2 weeks

About the Authors

Richard J. Murnane, Juliana W. and William Foss Thompson Professor of Education and Society at Harvard University, is an economist who focuses his research on the relationships between education and the economy, teacher labor markets, the determinants of children's achievement, and strategies for making schools more effective. John B...

other books by Richard J. Murnane

The New Division of Labor: How Computers Are Creating the Next Job Market

Kobo ebook|Nov 26 2012

$32.29 online | $41.93 list price (save 22%)
Teaching the New Basic Skills

Hardcover|Sep 4 1996

$50.00

Format: Hardcover
Dimensions: 432 pages, 9.25 × 6.12 × 0.98 in
Published: September 29, 2010
Publisher: Oxford University Press
Language: English

The following ISBNs are associated with this title:

ISBN-10: 0199753865

ISBN-13: 9780199753864

Table of Contents

1. The Challenge for Educational Research
1.1 The Long Quest
1.2 The Quest is World-Wide
1.3 What this Book is About
1.4 What to Read Next

2. The Importance of Theory
2.1 What is Theory?
2.2 Theory in Education
2.3 Voucher Theory
2.4 What Kind of Theories?
2.5 What to Read Next

3. Designing Research to Address Causal Questions
3.1 Conditions to Strive for in All Research
3.2 Making Causal Inferences
3.3 Past Approaches To Answering Causal Questions in Education
3.4 The Key Challenge of Causal Research
3.5 What to Read Next

4. Investigator-Designed Randomized Experiments
4.1 Conducting Randomized Experiments
4.1.1 An Example of a "Two-Group" Experiment
4.2 Analyzing Data from Randomized Experiments
4.2.1 The Better Your Research Design, the Simpler Your Data-Analysis
4.2.2 Bias and Precision in the Estimation of Experimental Effects
4.3 What to Read Next

5. Challenges in Designing, Implementing, and Learning from Randomized Experiments
5.1 Critical Decisions in the Design of Experiments
5.1.1 Defining the Treatment Being Evaluated
5.1.2 Defining the Population from Which Participants Will Be Sampled
5.1.3 Deciding Which Outcomes to Measure
5.1.4 Deciding How Long To Track Participants
5.2 Threats to Validity of Randomized Experiments
5.2.1 Contamination of the Treatment-Control Contrast
5.2.2 Cross-Overs
5.2.3 Attrition from the Sample
5.2.4 Participation in an Experiment Itself Affects Participants' Behavior
5.3 Gaining Support for Conducting Randomized Experiments: Examples from India
5.3.1 Evaluating an Innovative Input Approach
5.3.2 Evaluating an Innovative Incentive Policy
5.4 What to Read Next

6. Statistical Power and Sample Size
6.1 Statistical Power
6.1.1 Reviewing the Process of Statistical Inference
6.1.2 Defining Statistical Power
6.2 Factors Affecting Statistical Power
6.2.1 The Strengths and Limitations of Parametric Tests
6.2.2 The Benefits of Covariates
6.2.3 The Reliability of the Outcome Measure Matters
6.2.4 The Choice between One-Tailed and Two-Tailed Tests
6.3 What to Read Next

7. Experimental Research When Participants Are Clustered within Intact Groups
7.1 Using the Random-Intercepts Multilevel Model to Estimate Effect Size When Intact Groups of Participants Were Randomized To Experimental Conditions
7.2 Statistical Power When Intact Groups of Participants Were Randomized To Experimental Conditions
7.2.1 Statistical Power of the Cluster-Randomized Design and Intraclass Correlation
7.3 Using Fixed-Effects Multilevel Models to Estimate Effect Size When Intact Groups of Participants Are Randomized To Experimental Conditions
7.3.1 Specifying a "Fixed-Effects" Multilevel Model
7.3.2 Choosing Between Random- and Fixed-Effects Specifications
7.4 What to Read Next

8. Using Natural Experiments To Provide "Arguably Exogenous" Treatment Variability
8.1 Natural- and Investigator-Designed Experiments: Similarities and Differences
8.2 Two Examples of Natural Experiments
8.2.1 The Vietnam Era Draft Lottery
8.2.2 The Impact of an Offer of Financial Aid for College
8.3 Sources of Natural Experiments
8.4 Choosing the Width of the Analytic Window
8.5 Threats to Validity in Natural Experiments with a Discontinuity Design
8.5.1 Accounting for the Relationship between the Forcing Variable and the Outcome in a Discontinuity Design
8.5.2 Actions by Participants Can Undermine Exogenous Assignment to Experimental Conditions in a Natural Experiment with a Discontinuity Design
8.6 What to Read Next

9. Estimating Causal Effects Using a Regression-Discontinuity Approach
9.1 Maimonides' Rule and the Impact of Class Size on Student Achievement
9.1.1 A Simple "First Difference" Analysis
9.1.2 A "Difference-in-Differences" Analysis
9.1.3 A Basic "Regression-Discontinuity" Analysis
9.1.4 Choosing an Appropriate "Window" or "Bandwidth"
9.2 Generalizing the Relationship between Outcome and Forcing Variable
9.2.1 Specification Checks Using Pseudo-Outcomes and Pseudo-Cutoffs
9.2.2 RD Designs and Statistical Power
9.3 Additional Threats to Validity in an RD Design
9.4 What to Read Next

10. Introducing Instrumental Variables Estimation
10.1 Introducing Instrumental Variables Estimation
10.1.1 Investigating the Relationship Between an Outcome and a Potentially-Endogenous Question Predictor Using OLS Regression Analysis
10.1.2 Instrumental Variables Estimation
10.2 Two Critical Assumptions That Underpin Instrumental Variables Estimation
10.3 Alternative Ways of Obtaining the IV Estimate
10.3.2 Obtaining an IVE by Simultaneous Equations Estimation
10.4 Extensions of the Basic IVE Approach
10.4.1 Incorporating Exogenous Covariates into IV Estimation
10.4.2 Incorporating Multiple Instruments into the First-Stage Model
10.4.3 Examining the Impact of Interactions between the Endogenous Question Predictor and Exogenous Covariates in the Second Stage
10.4.4 Choosing Appropriate Functional Forms for the Outcome/Predictor Relationships in the First- and Second-Stage Models
10.5 Finding and Defending Instruments
10.5.1 Proximity of Educational Institutions
10.5.2 Institutional Rules and Personal Characteristics
10.5.3 Deviations from Cohort Trends
10.5.4 The Search Continues
10.6 What To Read Next

11. Using IVE to Recover the Treatment Effect in a Quasi-Experiment
11.1 The Notion of a "Quasi-Experiment"
11.2 Using IVE to Estimate the Causal Impact of a Treatment in a Quasi-Experiment
11.3 Further Insight into the IVE (LATE) Estimate, in the Context of Quasi-Experimental Data
11.4 Using IVE to Resolve "Fuzziness" in a Regression-Discontinuity Design
11.5 What To Read Next

12. Dealing with Bias in Treatment Effects Estimated from Non-Experimental Data
12.1 Reducing Observed Bias by the Method of Stratification
12.1.1 Stratifying on a Single Covariate
12.1.2 Stratifying on Covariates
12.2 Reducing Observed Bias by Direct Control for Covariates Using Regression Analysis
12.3 Reducing Observed Bias Using a "Propensity Score" Approach
12.3.1 Estimation of the Treatment Effect by Stratifying on Propensity Scores
12.3.2 Estimation of the Treatment Effect by Matching on Propensity Scores
12.3.3 Estimation of the Treatment Effect by Weighting by the Inverse of the Propensity Scores
12.4 A Return to the Substantive Question
12.5 What to Read Next

13. Substantive Lessons from High-Quality Evaluations of Educational Interventions
13.1 Increasing School Enrollments
13.1.1 Reduce Commuting Time
13.1.2 Reduce Out-of-Pocket Educational Costs
13.1.3 Reduce Opportunity Costs
13.1.4 Does Increasing School Enrollment Necessarily Lead To Improved Long-Term Outcomes?
13.2 Improving School Quality
13.2.1 Provide More or Better Educational Inputs
13.2.1.1 Provide More Books
13.2.1.2 Teach Children in Smaller Classes
13.2.1.3 Recruit Skilled Teachers or Provide Training to Enhance Teachers' Effectiveness
13.2.2 Improve Incentives for Teachers
13.2.3 Improve Incentives for Students
13.2.4 Increase Families' Schooling Choices
13.3 Summing Up

14. Methodological Lessons from the Long Quest
14.1 Be Clear About Your Theory of Action
14.2 Learn about Culture, Rules, and Institutions in the Research Setting
14.3 Understand the Counterfactual
14.4 Worry about Selection Bias
14.5 Measure All Possible Important Outcomes
14.6 Be On the Lookout for Longer-Term Effects
14.7 Develop a Plan for Examining Impacts on Subgroups
14.8 Interpret Your Research Results Correctly
14.9 Pay Attention to Anomalous Results
14.10 Recognize That Good Research Always Raises New Questions
14.11 Final Words
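
As a taste of the methods the book teaches, here is a minimal, illustrative sketch of the sharp regression-discontinuity estimate that Chapter 9 builds up (Sections 9.1.3 and 9.1.4). It is written in Python against simulated data; the cutoff, bandwidth, effect size, variable names, and the assumed statsmodels dependency are all choices made for this sketch and are not drawn from the book.

# Minimal sketch of a sharp regression-discontinuity (RD) estimate, in the
# spirit of Chapter 9's Maimonides'-rule example. All data are simulated and
# every name and number here is illustrative, not taken from the book.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n = 2000
cutoff = 40.0                                # hypothetical enrollment cutoff
forcing = rng.uniform(20.0, 60.0, n)         # the forcing (assignment) variable
treated = (forcing >= cutoff).astype(float)  # sharp RD: treatment flips at cutoff

# Simulated outcome: smooth in the forcing variable, plus a true jump of
# 5 points at the cutoff -- the causal effect the analysis should recover.
outcome = 30.0 + 0.5 * forcing + 5.0 * treated + rng.normal(0.0, 4.0, n)

# Keep only observations inside a bandwidth around the cutoff (cf. 9.1.4),
# then regress the outcome on treatment status and the centered forcing
# variable (cf. 9.1.3).
bandwidth = 10.0
window = np.abs(forcing - cutoff) <= bandwidth
X = sm.add_constant(np.column_stack([treated[window], forcing[window] - cutoff]))
fit = sm.OLS(outcome[window], X).fit()

# The coefficient on the treatment indicator estimates the jump at the cutoff.
print("estimated treatment effect at the cutoff:", fit.params[1])

Re-running the sketch with different bandwidths shows why Section 9.1.4 treats the choice of window as a substantive decision: narrower windows reduce bias from misspecifying the outcome/forcing-variable relationship, but at a cost in precision.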