
Learning communities for university students at risk of school failure: Can they make a difference? (Tharp 2009)


Citation

Tharp, T. (2009). Learning communities for university students at risk of school failure: Can they make a difference? (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3389640)

Highlights

    • The study’s objective was to examine the impact of learning community developmental math courses on college retention and credit hours earned at Middle Tennessee State University.
    • The study used school records for the fall 2004 and fall 2005 cohorts to compare students who enrolled in learning community developmental math courses with students who enrolled in standard developmental math courses and with a group of students in college-level courses who had similar placement exam scores.
    • The study found that, for the fall 2004 cohort, the retention rate for the learning community group was significantly lower than the rate for the college-level comparison group between the fourth and fifth years after enrollment. In addition, for the fall 2005 cohort, the learning community group earned significantly fewer credit hours than the college-level comparison group in the first, second, and fourth years after enrollment, and fewer credit hours than developmental students outside of learning communities in the fourth year.
    • The quality of causal evidence presented in this report is low because the author did not control for relevant student characteristics in the analysis. This means we are not confident that the estimated effects are attributable to learning communities. Other factors are likely to have contributed.

Intervention Examined

Learning Communities

Features of the Intervention

The learning community intervention consisted of two linked courses: developmental algebra and a university seminar. For the 2004 cohort, the intervention included enhanced components: instructors provided additional tutoring and support as well as social activities. For the 2005 cohort, the intervention had no enhanced components.

Features of the Study

The study analyzed outcomes for three groups of students at Middle Tennessee State University with American College Test (ACT) math subscores of 17, 18, or 19, in two cohorts. In both cohorts, all students were 23 years of age or younger, and the author used an exact matching procedure so that the groups were equivalent in terms of gender and race/ethnicity. The treatment group consisted of learning community students enrolled in developmental courses. One comparison group consisted of students in developmental courses who did not take part in learning communities. The other consisted of students who were enrolled in college-level courses but had similar ACT math subscores. In the 2004 cohort, each group contained 42 students; in the 2005 cohort, each group contained 45 students. Using school record data, the study compared outcomes across the three groups with joint tests, conducted separately for each cohort and year. When a joint test was significant, the author performed follow-up tests to determine which pairwise contrasts were significant.
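
The review does not report the specific statistical procedures behind these comparisons. Purely as an illustrative sketch of the "joint test, then pairwise follow-up" logic described above, the hypothetical Python snippet below runs a one-way ANOVA across three simulated groups of credit-hour outcomes and, only when the joint test is significant, Bonferroni-adjusted pairwise t-tests; all data, group labels, and test choices are assumptions, not the dissertation's methods.

```python
# Hypothetical sketch only: the data, group labels, and choice of
# ANOVA plus pairwise t-tests are assumptions, not the study's methods.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated cumulative credit hours for three groups of 45 students
credit_hours = {
    "learning_community": rng.normal(24, 6, 45),
    "developmental_non_lc": rng.normal(26, 6, 45),
    "college_level": rng.normal(28, 6, 45),
}

# Joint test across all three groups (one-way ANOVA)
f_stat, p_joint = stats.f_oneway(*credit_hours.values())
print(f"Joint test: F = {f_stat:.2f}, p = {p_joint:.3f}")

# Follow-up pairwise contrasts only if the joint test is significant,
# with a Bonferroni adjustment for the three comparisons
if p_joint < 0.05:
    pairs = list(combinations(credit_hours, 2))
    for a, b in pairs:
        t_stat, p_pair = stats.ttest_ind(credit_hours[a], credit_hours[b])
        p_adj = min(p_pair * len(pairs), 1.0)
        print(f"{a} vs {b}: t = {t_stat:.2f}, adjusted p = {p_adj:.3f}")
```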

Findings

    • The study found that, for the fall 2004 cohort, the retention rate for the learning community group was significantly lower than the rate for the college-level comparison group between the fourth and fifth years after enrollment.
    • In addition, for the fall 2005 cohort, there were significant differences in cumulative credit hours earned. In the first, second, and fourth years after enrollment, the learning community group earned significantly fewer credit hours than the college-level comparison group. In the fourth year, the learning community group also earned significantly fewer credit hours than developmental students outside of learning communities.

Considerations for Interpreting the Findings

The study accounted for differences in pre-intervention academic achievement, gender, ethnicity, and age in the analysis of progress toward degree completion outcomes. However, it did not account for potential differences in pre-intervention financial disadvantage. Estimated impacts could thus reflect pre-existing differences between the groups on this characteristic and not program impacts. For example, treatment students could have had lower socioeconomic status than comparison students, which could account for differences in progress toward degree completion outcomes at the end of the study. Therefore, the study is not eligible for a moderate causal evidence rating, the highest rating available for nonexperimental designs.

Causal Evidence Rating

The quality of causal evidence presented in this report is low because the author did not control for relevant student characteristics in the analysis. This means we are not confident that the estimated effects are attributable to learning communities. Other factors are likely to have contributed.

Reviewed by CLEAR

October 2015
