Search for Studies
CLEAR's database includes all research identified across all topic areas.
Synthesis Report: Research Synthesis: Opportunities for Youth
Topic Area: Opportunities for Youth
Successful programs often involved a substantial time commitment from participating youth.
Several of the successful programs required an intense, often full-time, commitment from participants. ChalleNGe comprised 17 months of programming in total. For the first 22 weeks of the program, ChalleNGe participants lived in barracks-style housing (sometimes on a military base) in a disciplined environment. Similarly, participants in Job Corps received eight months of services, on average, and were required to live on-site. The Center for Employment Training (CET) provided occupational and basic skills training in a full-time, work-like setting, to accustom participants to a work schedule. Year Up also was an intensive, full-time program, with six months of training followed by a six-month internship in the information technology or financial operations fields. Participants in Youth Corps committed to completing at least 300 hours of community service. (In the earlier Conservation and Youth Corps program, participants completed an average of 435 hours of community service.)
Many successful programs involved a job placement component or job search assistance.
ChalleNGe and Year Up directly placed youth in jobs or internships when possible. Job Corps offered individualized counseling and job placement assistance to youth. Through WIA Adult and Dislocated Worker programs, all out-of-school youth could receive job search assistance services and access local labor market information.
Positive impacts tended to be realized in the short term and fade over time.
Youth Corps impacts persisted only during the period of active program involvement. Year Up had significant impacts on earnings for three years, but the impacts did not extend to the fourth year. Similarly, there were no significant impacts on employment or earnings of Job Corps participants after more than four years.
More information is needed on the replicability of some programs.
Many of the evaluations examined, such as the evaluations of Job Corps and WIA, were large, including thousands of individuals in multiple states. However, evaluations of other programs that showed promise, such as Year Up, were smaller. More information is needed to determine whether the successes of these programs can translate to their implementation in more and different contexts. For example, CET demonstrated impacts at its original location in San Jose, California. However, when the program was studied at 12 replication sites, no impacts were found.
Synthesis Report: Evidence on the Effects of OSHA Activities
Topic Area: OSHA Enforcement
According to the research, there is some evidence that OSHA inspections reduce injury rates, on average.
Levine et al. (2012) provides moderate causal evidence of OSHA’s impact on injuries and is strongly relevant. The study demonstrated that random OSHA inspections led to a 9 percent decrease in injuries and a 26 percent decrease in injury-related costs among inspected firms. It also showed that OSHA inspections did not adversely affect firms’ financial performance. Further, the study used administrative injury data from Workers’ Compensation records, which can capture actual injury rates better than the firm-reported injury data used in other analyses (though these records still might not completely capture on-the-job injuries).
Four other studies, using two different research methods, provided moderate causal evidence that OSHA inspections reduced injury rates, but these studies were published before 1995. Because OSHA operations have changed in important ways since then, these findings might have low current relevance.
Some recent research has strong current relevance and provides valuable descriptive information, but low causal evidence on the impact of inspections.
ERG (2004) found that firms that received notice that they might be inspected but were not subsequently inspected experienced a 5 percent decline in injuries in the three years following the notice. Firms that received notice and a subsequent inspection experienced a 14 percent decline in injuries in the same period.
Gray and Mendeloff (2005) found that OSHA inspections that resulted in penalties were associated with a 19 percent decline in lost-workday injuries in 1979–1985, an 11 percent decline in 1987–1991, but no large or significant decline in injuries in 1992–1998. Inspections with penalties and inspections to smaller or non-unionized plants were associated with larger changes in injuries than other inspections.
Haviland et al. (2012) found that inspections with penalties were associated with a 19 to 24 percent decline in injuries during the two years after the inspection. They did not find this association for inspections without penalties or inspections at very small or very large plants. This study is particularly valuable because it uses administrative data on injury rates (as in Levine et al. 2012).
OSHA conducts inspections for various reasons, prioritizing targeted inspections when there is evidence of relatively dangerous conditions, when a catastrophe or fatal accident has occurred, or when there has been a complaint or referral (OSHA 2002). The above studies compared firms that had received an inspection, including those that received targeted inspections, to firms that were not inspected at all. But firms in the latter group do not provide a good comparison with the former, because targeted inspections are not random events. That is, there is no reason to believe that firms receiving targeted inspections are comparable to firms that were not inspected. Indeed, we might suspect that these firms were less safe because some inspections are triggered by adverse events. Thus, although these studies provide valuable and relevant information, we cannot be confident that the estimated changes in injuries are caused by OSHA activities per se.
Alternate methods could provide stronger causal evidence on the impacts of OSHA inspections. For example, the above studies could have examined only those firms that received programmed inspections, which are aimed at high-hazard industries, plants, or occupations. Based on observable characteristics, some firms receive programmed inspections with certainty but others are selected at random for these visits (OSHA 2002). Thus, firms that received programmed inspections could credibly be compared to firms with similar characteristics that did not receive an inspection.
There is little information on the characteristics of OSHA inspections and other OSHA activities.
In particular, our systematic review found few studies that explored the following:
- The relationship between size of OSHA penalty and change in injury rates
- What types of firms are most likely to respond to the threat of OSHA inspections and fines
- Impacts of changes in OSHA policies, practices, and procedures
- Impacts of OSHA consultations
- Whether impacts vary by characteristics of the inspector or of the inspection itself
- How the use of administrative or self-reported data affects the interpretation of the estimated impact of OSHA inspections
See Mendeloff (2012) for further discussion of areas for future research.
Topic Area: Opportunities for Youth
Research provides strong evidence that NGYCP improves the educational outcomes of at-risk youth.
A well-conducted randomized controlled trial demonstrated that the NGYCP resulted in statistically significant improvements in educational outcomes measured 9 months, 21 months, and 3 years after random assignment. For instance, 72 percent of NGYCP youth earned a high school diploma or GED by 3 years after random assignment, compared with 56 percent of control group youth (Exhibit 1).
Exhibit 1: Educational attainment of NGYCP and control group youth 3 years after random assignment
Note: All differences are statistically significant at the 5-percent level.
There is also strong evidence that NGYCP improves the labor market outcomes of at-risk youth.
Three years after random assignment, NGYCP youth were more likely to be employed (58 versus 51 percent) and had worked one more month in the past year than control group members. They also had higher average annual earnings (Exhibit 2).
Exhibit 2: Earnings of NGYCP and control group youth 3 years after random assignment
A cost-benefit analysis found NGYCP produced large positive benefits.
In a well-conducted cost-benefit analysis, Perez-Arce et al. (2012) determined that, from the perspective of society as a whole, the NGYCP produced net benefits of $25,549 per admittee, a return on investment of 166 percent. The government incurred negative net benefits, largely due to covering the bulk of the operating costs, and NGYCP participants had large, positive net benefits.
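As a rough arithmetic check on these figures — assuming return on investment is defined as net benefits divided by program costs — the implied cost per admittee can be backed out. The cost and total-benefit figures below are derived for illustration, not reported numbers:

```python
# Figures from Perez-Arce et al. (2012), societal perspective
net_benefit_per_admittee = 25_549   # dollars
roi = 1.66                          # 166 percent return on investment

# If ROI = net benefits / costs, the implied cost per admittee is:
implied_cost = net_benefit_per_admittee / roi   # roughly $15,400

# Total benefits per admittee would then be costs plus net benefits.
implied_total_benefit = implied_cost + net_benefit_per_admittee
```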
NGYCP is a multi-component intervention, with little evidence on the effectiveness of specific components.
Research has not examined whether particular components of the NGYCP—such as the Residential phase, the military-style discipline, or the Youth Initiated Mentoring (YIM)—are responsible for the program’s impacts. Descriptive research (Schwartz et al. 2013) suggests that youth who had longer mentoring relationships were more likely to have positive long-term outcomes, and that mentors provided participants with valuable social-emotional support, guidance, and practical assistance that contributed to their successful program completion (Spencer et al. 2013). However, the research has not established the causal impact of YIM.
Topic Area: Career Academies
Career Academies produced strong and sustained increases in students’ post-high school earnings. These impacts were concentrated among young men.
A well-conducted randomized controlled trial demonstrated that students who participated in Career Academies earned $174 more per month, on average, than students who participated in Non-Academy high school offerings, over the eight years following students’ scheduled high school graduation (Exhibit 1). These impacts were even higher for young men than for the full sample—an average of $311 more per month (Exhibit 2).
Exhibit 1: Average Monthly Earnings Eight Years After Scheduled Graduation
Source: Kemple & Willner 2008.
Note: The impact on average monthly earnings is statistically significant at the 5 percent level.
Exhibit 2: Average Monthly Earnings Eight Years After Scheduled Graduation, Young Men
Source: Kemple & Willner 2008.
Note: The impact on average monthly earnings is statistically significant at the 5 percent level.
Career Academies did not increase educational attainment.
There were no statistically significant differences between Career Academy and Non-Academy students in high school diploma, general equivalency degree (GED), or post-secondary credential attainment.
Implementing all three program components proved somewhat challenging.
According to the evaluation report, integrating academic and technical curricula proved to be a challenge. In addition, the relationships between the academies and local employers were more often leveraged to offer students career exposure than to offer them specific jobs. According to the implementation study associated with the evaluation, factors that affected implementation of Career Academies included availability of resources, the leadership ability of the Academy director, support from the school and school district, allocation of staff time to organize employers’ engagement and work placements, the extent of employers’ participation, and the articulation of a vision that connected program design with local employment needs.
Synthesis Report: Research Synthesis: Employment Programs and Demonstrations for SSI and SSDI Beneficiaries
Topic Area: Disability Employment Policy
Evidence echoes previous literature reviews on the challenges of generating substantive impacts, though customized supports to well-targeted populations show some potential.
The conclusions from CLEAR’s systematic literature search and review process largely echo the key findings from Wittenburg et al. (2013), which summarized the existing literature on employment-focused interventions for people with disabilities. Overall, interventions that provided intensive employment support services and/or employment incentives had moderate success improving employment and earnings outcomes but did not decrease disability income support payments.
The most effective interventions provided intensive, customized supports and services focused on job training, placement, and retention to narrowly defined target populations.
In the Youth Transition Demonstration (YTD), which targeted transition-age youth, the projects that focused their efforts on direct employment services (including outreach to employers, job shadowing, and direct placement) had positive effects on youths’ earnings and employment outcomes, whereas those that focused on case management (including identifying goals, managing time, and connecting to social and health services) had none (Fraker et al. 2014). Similarly, when Kornfeld and Rupp (2000) examined impacts for Project NetWork participants who received employment-focused case-management services, they found that the impacts were smallest for the least service-intensive model. Frey et al. (2011) found that the Mental Health Treatment Study, which targeted SSI and SSDI beneficiaries with psychiatric impairments, improved several employment, earnings, and health outcomes for treatment group members.
More generally, target populations experiencing the largest effects included people with psychiatric disabilities (Cook et al. 2008; Frey et al. 2011), people with developmental disabilities (Kerachsky & Thornton 1987; Decker & Thornton 1995), and youth (Fraker et al. 2014). In contrast, interventions that did not target people with specific impairments, such as the Ticket to Work (TTW) program, which mails a voucher for employment services to all SSI and SSDI beneficiaries that they can use voluntarily, had relatively smaller impacts (Stapleton et al. 2013a).
Interventions that provided support services or incentives to help beneficiaries keep more of their benefits when working had small or no impacts on employment, even if spending on services was high.
Examples of projects with limited impacts on employment and earnings include the Accelerated Benefits Demonstration (ABD), Benefit Offset Pilot Demonstration (BOPD), and TTW program (Weathers & Bailey 2014; Weathers & Hemmeter 2011; Stapleton et al. 2013a). In each case, the interventions had limited success improving employment and earnings outcomes despite substantial costs associated with them.
There is no evidence of SSI or SSDI caseload reductions, even among interventions that improved employment and/or earnings.
The programs and demonstrations reviewed did not achieve a key objective—increasing the participants’ earnings enough to decrease their benefit receipt. For example, in four YTD projects, treatment group members’ SSI receipt increased two years after random assignment (Hemmeter 2014). The increases were due to SSI program waivers at those four projects that protected the participants’ benefit receipt status and benefit amounts. However, if the YTD is to ever achieve SSI program savings, then YTD participants’ receipt of SSI benefits will eventually have to decrease.
The Benefit Offset National Demonstration (BOND), which is testing the provision of work incentives and other supports, provides another example. BOND’s benefit offset replaces the complete loss of all benefits for working SSDI beneficiaries with a gradual reduction: the SSDI benefit decreases by $1 for every $2 earned above the substantial gainful activity amount. For the offset to decrease total SSDI benefits paid to BOND participants, enough participants must respond by raising their earnings above that threshold. However, BOND did not generate impacts on employment or earnings in its first year of operations (Stapleton et al. 2013b), though several factors suggest that positive impacts on earnings might yet emerge.
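The $1-for-$2 offset rule can be sketched as a simple function. This is a simplified illustration only: it ignores BOND's other program rules (such as the Trial Work Period), and the dollar amounts in the example are hypothetical:

```python
def bond_benefit(full_benefit, monthly_earnings, sga_amount):
    """Simplified BOND offset: the SSDI benefit falls by $1 for every
    $2 of earnings above the substantial gainful activity (SGA)
    amount, and cannot fall below zero."""
    excess_earnings = max(0.0, monthly_earnings - sga_amount)
    return max(0.0, full_benefit - excess_earnings / 2)

# Hypothetical example: a $1,200 benefit, $1,800 in monthly earnings,
# and a $1,000 SGA amount leave a benefit of $800 rather than the
# complete loss of benefits the offset replaces.
offset_benefit = bond_benefit(1200, 1800, 1000)
```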
Little is known about interventions for improving earnings of people with TBI and PTSD.
Our review found only four studies examining the effectiveness of interventions for people with TBI or PTSD on their return to work, and only one of these examined both employment and earnings outcomes. All four of these studies focused on military veterans.
Davis et al. (2012) randomly assigned 85 volunteer veterans with PTSD to receive services from either Individual Placement and Support (IPS)—a supported employment model—or the Department of Veterans Affairs’ standard Vocational Rehabilitation Program, which provided work therapy through set-aside temporary jobs. Veterans who received IPS were significantly more likely to gain competitive employment, worked in a competitive job more weeks, and earned more during the 12-month follow-up period. However, the IPS recipients’ total income, on average, was still below self-sufficiency levels.
Twamley et al. (2014) examined the impact of supplementing supported employment services with cognitive training, finding a doubling in employment rates for those who received such training, but earnings impacts were not examined. Salazar et al. (2000) and Vanderploeg et al. (2008) compared different types of rehabilitation programs for veterans with TBI and found no differences in their return to work or military duty. These two studies did not report earnings impacts.
Recruiting beneficiaries to participate in demonstrations was difficult, which limited the generalizability of study findings.
Most SSA employment demonstrations have struggled to recruit volunteer participants. With a few exceptions, the interventions tested targeted people for services after they had met SSA’s disability criteria and started receiving benefits. To become eligible for SSA disability benefits, applicants must prove that their impairments make it impossible to work at substantial levels. It is therefore unlikely that beneficiaries who have gone through the application process will volunteer for programs that promote work, for fear of losing benefits. Although some interventions used program waivers, such as allowing beneficiaries to keep more of their benefits while working, participants still could lose some benefits by increasing their earnings.
For studies that rely on volunteers, the generalizability of study findings to the entire study recruitment pool depends in part on what percentage of the recruitment pool volunteered for the study. The smaller the volunteer group, the greater the concern that the volunteers were not representative of the larger group. Most of the demonstration projects SSA funded enrolled about 5 percent of the population targeted for recruitment (Rangarajan et al. 2008).
More recent demonstrations with narrow target populations of youth and those with psychiatric impairments had higher participation rates. For example, the YTD projects used all available tools and resources and worked very hard to achieve evaluation enrollment rates ranging from 16 to 30 percent of eligible youth (Fraker et al. 2014). The ABD had a participation rate of 99 percent (Michalopoulos et al. 2011).
The ABD provided health insurance coverage as its primary intervention—only a subset of ABD treatment group members received employment supports. ABD’s high participation rate was due to the strong demand for free health insurance coverage among the target population—SSDI beneficiaries without health insurance who were in the 24-month Medicare waiting period. The ABD evaluation revealed that those who volunteered for the demonstration often had unmet medical needs and that the intervention helped address those needs.
Fidelity to the demonstration model is important.
Several different studies provide evidence that favorable impacts are more likely to emerge when the demonstration model is closely followed. Programs that strictly adhere to the IPS model have shown significant impacts on employment and earnings of people with psychiatric impairments (Cook et al. 2005). Specifically, models that integrated vocational services and clinical mental health services, such as medication management and individual therapy, were more effective than models with low levels of service integration, such as those in which vocational rehabilitation and clinical counseling were provided by separate agencies or in separate locations.
Additionally, Fraker et al. (2014) found that YTD projects that were implemented with fidelity to the YTD program model were more effective than programs that were not. These evaluations included detailed documentation of the services delivered to ensure the findings could be replicated in other settings.
Work incentives and supports can be difficult to implement in the context of SSA’s existing work incentives, creating potential confusion for beneficiaries and program staff.
SSA’s complex eligibility determination processes can make it challenging to implement new interventions or approaches that administrators and staff can readily understand. For example, the TTW program has a complex system of incentives that has failed to produce positive outcomes. The TTW program provides SSI and SSDI beneficiaries with more choices of employment services vendors and offers employment-support service providers financial incentives to serve beneficiaries who reach earnings milestones. However, many consider the payment system complex and cumbersome and find it difficult to determine when beneficiaries reach the milestones that generate provider payments; as a result, it has been difficult to recruit providers (Stapleton et al. 2013a).
Similarly, BOND, which had very few participants during its first year, was implemented with other complex, existing work incentives (Stapleton et al. 2013b). For example, the benefit offset is provided only after the Trial Work Period ends and SSA staff have completed a Work Continuing Disability Review to evaluate the beneficiary’s work effort and continued eligibility for benefits.
A strong technical assistance component, with incentives for service providers to accept the assistance, is important to successful implementation.
From the outset of the YTD, the technical assistance that YTD projects received was geared toward achieving desirable employment outcomes for project participants. However, the process analysis of the three projects implemented early in YTD (Phase 1) revealed a need to focus the technical assistance on services directly linked to paid employment and to closely monitor both the delivery of those services and participants’ outcomes. Technical assistance for the three projects implemented later (Phase 2) was adjusted accordingly and helped the Phase 2 projects focus more closely on connecting youth with competitive paid jobs.
For several projects, technical assistance provided under the evaluation contract greatly facilitated the delivery of employment services. For example, at one project site, quantitative data were used during the intervention period to identify program staff whose caseloads were not meeting program targets, and those staff then received opportunities for professional development.
Funders and operators of future interventions with objectives and target populations similar to those of YTD should consider offering service providers high quality technical assistance on the design and delivery of employment services (Fraker et al. 2014).
Demonstrations should be pilot tested before being implemented on a national scale.
In reviewing the implementation of the TTW program, the U.S. Government Accountability Office (GAO 2004) argued that the rush to implement the program created inefficiencies that could have been addressed in a smaller pilot. GAO claimed that if SSA had tested various components of the TTW program before launching it nationwide, it might have identified problems and developed solutions before implementation. In 2008, SSA revised the TTW program regulations to address some of these initial shortcomings.
The benefits of developing a pilot program before launching a major demonstration were illustrated by SSA’s BOPD, which was the precursor to the larger, ongoing BOND. The pilot demonstration was implemented in four states to test the administrative processes needed for BOND. The original plans for implementing BOND were modified based on experiences gleaned from the pilot demonstrations (Bell et al. 2011).
Synthesis Report: Behavioral Finance Synthesis: Findings
Topic Area: Behavioral Finance: Retirement
People have relatively limited knowledge about saving for retirement and can be induced to save more when provided with additional information.
McKenzie and Liersch (2011) demonstrated that people often severely miscalculate hypothetical future savings account balances and the monthly contribution amount required to reach a specified savings goal. Evidence from lab and field experiments indicates that providing people with explicit information on the implications of their own savings behavior, the retirement account options available to them, or how savings account balances can grow leads them to report greater motivation and desire to save and increases actual savings rates (Duflo and Saez 2003; Goda et al. 2014; McKenzie and Liersch 2011). Results from one study, however, suggested that providing more nuanced information may not change behavior (Choi et al. 2011).
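The kind of calculation participants tended to get wrong is the compound growth of regular contributions. A minimal sketch of the standard future-value formula follows; the contribution amount, rate, and horizon are illustrative assumptions, not values from the study:

```python
def future_value(monthly_contribution, annual_rate, years):
    """Future value of a fixed monthly contribution with monthly
    compounding: c * ((1 + r)^n - 1) / r for monthly rate r over
    n months."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return monthly_contribution * n
    return monthly_contribution * ((1 + r) ** n - 1) / r

# $100 per month for 30 years at a hypothetical 5 percent annual
# return grows to roughly $83,000 -- far more than the $36,000
# contributed, which is what linear intuition would suggest.
balance = future_value(100, 0.05, 30)
```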
Making retirement more salient, by having people think of themselves in retirement or providing a target retirement date, can increase intentions to save and alter investment choices.
Hershfield et al. (2011) conducted several lab experiments in which people were presented with age-progressed or current pictures of themselves and asked to make hypothetical savings decisions. When individuals saw a picture of themselves at an older age, they allocated more of their hypothetical pay to retirement (although the differences were not statistically significant in all experiments). Benartzi et al. (2007) found that the labels attached to investment funds can sometimes change the proportion of investments that people allocate to stocks rather than to less risky investments, such as bonds. People with access to investment funds labeled with a target date for retirement (for example, 2030) invested more in stocks when younger and less when older (holding total investment constant), consistent with optimal savings behavior. However, people with access to investment funds labeled as income or growth funds to indicate how risky they were did not adjust the proportion of income held in stocks over time.
People can become overwhelmed by the number of investment options they face; when this occurs, they tend to use simple rules to make decisions.
An experiment conducted by Iyengar and Kamenica (2010) found that, as people were presented with more and more gambling options, they were more likely to choose the simplest option. Another experiment found that, when people were asked to hypothetically allocate money to different investment funds, having more funds to choose from increased the probability that a person would simply allocate the same amount of money to each fund (Morrin et al. 2012). These studies suggest that because these simple rules can lead to less careful decision making, giving people more options can lead to worse outcomes overall.
Synthesis Report: Behavioral Finance Synthesis: Gaps
Topic Area: Behavioral Finance: Retirement
Many studies have demonstrated a relationship between default options and behavior. Taken together, these studies suggest that default options can affect investment behavior.
CLEAR reviewed 11 studies which used interrupted time series designs to examine the relationship between changes in default options and retirement savings behavior. The most common intervention studied was the introduction of automatic 401(k) enrollment. Typically, individuals must actively decide to enroll in their company’s 401(k) plan, specifying the amount of money they will invest each pay period and to which investment funds this money should be allocated. Under automatic enrollment, individuals are enrolled into a retirement plan with pre-specified parameters without having to take any action or make any decisions, unless they opt out or request to change the details of their investments.
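An interrupted time series design of this kind compares outcomes before and after the policy change while accounting for any pre-existing trend. A minimal sketch with synthetic data follows; every number below is made up for illustration:

```python
import numpy as np

# Synthetic monthly 401(k) participation rates: 24 months before and
# 24 months after a hypothetical switch to automatic enrollment.
rng = np.random.default_rng(0)
t = np.arange(48)
post = (t >= 24).astype(float)          # 1 after the policy change
y = 0.30 + 0.002 * t + 0.45 * post + rng.normal(0, 0.01, size=48)

# Segmented regression: intercept, pre-existing trend, and a level
# shift at the interruption. The shift coefficient is the design's
# estimate of the default's effect on participation.
X = np.column_stack([np.ones(48), t, post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift = coef[2]                   # recovers roughly the 0.45 jump
```

The design's weakness, discussed in the finding below, is that the level shift captures anything else that changed at the interruption, not just the default.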
Most of the studies CLEAR reviewed found that defaults are associated with large changes in investment behavior. For example, Choi et al. (2004) examined changes in 401(k) participation rates at three companies. Prior to automatic enrollment, 401(k) participation rates after six months of employment ranged from 26 to 43 percent at the three firms. After the implementation of automatic enrollment, participation rates exceeded 85 percent. In another example, Thaler and Benartzi (2004) found that Save More Tomorrow™, a savings plan that automatically increased employee 401(k) contribution rates whenever an enrolled employee’s salary increased, more than doubled the average savings rate at one company.
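The escalation mechanic behind Save More Tomorrow can be sketched as follows; the step size and cap below are illustrative assumptions, not parameters from Thaler and Benartzi's study:

```python
def smart_contribution_rate(initial_rate, raises_received,
                            step=0.03, cap=0.15):
    """Save More Tomorrow-style escalator: the 401(k) contribution
    rate steps up with each pay raise until it reaches a cap, so
    saving more never reduces take-home pay. The step and cap
    values here are hypothetical."""
    return min(cap, initial_rate + step * raises_received)

# Starting at 3.5 percent, each raise adds 3 percentage points
# until the rate hits the 15 percent cap.
rates = [smart_contribution_rate(0.035, k) for k in range(5)]
```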
But no single study on its own produces strong causal evidence on the impacts of defaults.
Although the body of evidence suggests that defaults can have causal effects on behavior, none of the individual studies using interrupted time series designs that CLEAR reviewed demonstrated high or moderate causal evidence. A concern in many of these studies is that market trends or employee demand may have influenced the companies’ decisions to implement the defaults, and may also have affected the outcomes. Thus, although some part of the observed changes in savings behavior documented by these studies may be due to the defaults, one cannot conclude that the changes completely reflect the impact of the defaults; other forces are likely at play. Stronger evidence is therefore needed to determine whether any specific default option (for example, automatic enrollment at three percent of pay investing into a money market fund) has a causal effect on behavior.
There is little evidence available on how the impacts of behavioral interventions designed to influence retirement savings vary by employee age, gender, income, or race.
Only one study reviewed by CLEAR that received a high or moderate causal evidence rating examined impacts of an intervention for these important subgroups. Duflo and Saez (2003) explored the impact of receiving information about tax-deferred retirement accounts on account enrollment using a randomized controlled trial. The study found that the informational intervention it studied led to increases in tax-deferred retirement account enrollment and that these impacts were the same for men and for women.
Some studies reviewed by CLEAR that received low causal evidence ratings explored variation in impacts of behavioral interventions across demographic or socioeconomic groups. They tended to find larger impacts on individuals who typically had lower pre-intervention savings rates, such as lower-income individuals and minorities, but results were often mixed (Madrian and Shea 2001; Choi et al. 2006; Thrift Savings Plan 2012). However, the low causal evidence ratings these studies received suggest these results should be interpreted with caution.
There is little evidence available on how behavioral interventions designed to influence retirement savings affect total savings.
Individuals may respond to the interventions CLEAR reviewed by changing how they save money in the plan or account examined but not the total amount of money they save across all plans and accounts (including 401(k)s, other pensions, brokerage accounts, individual retirement accounts, and bank accounts). For example, a change in firm policy could lead an individual to invest more in a 401(k) but less in an individual retirement account. Policies to increase retirement savings may also influence the amount of debt individuals take on. For example, individuals may pay off their mortgages more slowly if they invest more money in their 401(k).
No study reviewed by CLEAR that received a high or moderate causal evidence rating examined total savings, debt, or net savings (the difference between savings and debt); rather, studies typically focused solely on 401(k) contributions or investments in a specific brokerage fund. This provides a valuable, but incomplete, picture of employees’ savings behavior.
Below is a key for icons used to indicate a study's type and/or rating.
High Causal Evidence
Strong evidence the effects are caused by the examined intervention.
Moderate Causal Evidence
Evidence that the effects are caused to some degree by the examined intervention.
Low Causal Evidence
Little evidence that the effects are caused by the examined intervention.
Causal Impact Analysis
Uses quantitative methods to assess the effectiveness of a program, policy, or intervention.
Descriptive Analysis
Describes a program, policy, or intervention using qualitative or quantitative methods.
Implementation Analysis
Examines the implementation of a program, policy, or intervention.
Favorable
The study found at least one favorable impact in the outcome domain, and no unfavorable impacts.
Mixed
The study found some favorable and some unfavorable impacts in the outcome domain.
None
The study found no statistically significant impacts in the outcome domain.
Unfavorable
The study found at least one unfavorable impact in the outcome domain, and no favorable impacts.
Not applicable
Not applicable because no outcomes were examined in the outcome domain.
Favorable - low evidence
The study found at least one favorable impact in the outcome domain, and no unfavorable impacts. The study received a low causal evidence rating, so these findings should be interpreted with caution.
Mixed - low evidence
The study found some favorable and some unfavorable impacts in the outcome domain. The study received a low causal evidence rating, so these findings should be interpreted with caution.
None - low evidence
The study found no statistically significant impacts in the outcome domain. The study received a low causal evidence rating, so these findings should be interpreted with caution.
Unfavorable - low evidence
The study found at least one unfavorable impact in the outcome domain, and no favorable impacts. The study received a low causal evidence rating, so these findings should be interpreted with caution.