Quasi-experimental research designs, like experimental designs, test causal hypotheses. In both experimental designs (i.e., randomized controlled trials, or RCTs) and quasi-experimental designs, the outcome of a group treated by a policy intervention (e.g., an active labor market policy) is compared against that of a control group. The key difference between an RCT and a quasi-experimental design is the absence of randomization in the latter: assignment may occur through self-selection (participants apply for a program themselves), through administrator selection (e.g., a teacher selecting the best pupils for competitions outside the school), or both.
Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of baseline characteristics. The comparison group captures what the outcomes would have been had the program or policy not been implemented (the so-called counterfactual). Hence, any difference in outcomes between the treatment and comparison groups can be attributed to the program or policy.
The main concern is that those who self-select into the program may be systematically different from those who choose to stay aside, a problem known as selection bias. This risk of bias can be mitigated, and a comparison group that is as valid as possible constructed, through various techniques, most notably the regression discontinuity design (RDD) method and propensity score matching (PSM).
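As a minimal sketch of the matching idea behind PSM, the example below pairs each treated unit with the comparison unit whose propensity score is closest (nearest-neighbor matching with replacement) and averages the outcome differences. All scores and outcomes here are invented for illustration; in practice the propensity scores would be estimated, for example with a logistic regression of treatment status on baseline characteristics.

```python
# Hypothetical data: (propensity score, outcome) pairs for each unit.
treated = [(0.80, 10.0), (0.60, 8.0), (0.30, 6.0)]
comparison = [(0.78, 9.0), (0.55, 7.5), (0.35, 6.5), (0.10, 5.0)]

def psm_effect(treated, comparison):
    """Average treated-minus-matched-comparison outcome difference."""
    diffs = []
    for score, outcome in treated:
        # Match to the comparison unit with the closest propensity score
        # (nearest-neighbor matching, with replacement).
        _, match_outcome = min(comparison, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - match_outcome)
    return sum(diffs) / len(diffs)

print(psm_effect(treated, comparison))
```

The intuition is that two units with similar propensity scores were similarly likely to be treated given their baseline characteristics, so their outcome difference approximates the treatment effect for the treated unit.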
Methods of data analysis used in quasi-experimental designs include ex-post single difference and double difference (also known as difference-in-differences, or DID).
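The two estimators can be sketched with group means. The numbers below are invented for illustration: a single difference compares the groups only after the program, while the double difference (DID) nets out any pre-existing gap by differencing each group's change over time.

```python
# Hypothetical mean outcomes (e.g., employment rate) before and after
# the intervention; values are invented for the example.
treat_before, treat_after = 0.50, 0.65   # treatment group
comp_before, comp_after = 0.52, 0.58     # comparison group

# Ex-post single difference: post-program gap between the groups.
# It absorbs any pre-existing difference between them.
single_diff = treat_after - comp_after

# Double difference (DID): change in the treatment group minus
# change in the comparison group.
did = (treat_after - treat_before) - (comp_after - comp_before)

print(round(single_diff, 2))  # 0.07
print(round(did, 2))          # 0.09
```

Here the single difference understates the effect because the treatment group started slightly below the comparison group; DID corrects for that baseline gap, assuming the two groups would otherwise have followed parallel trends.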
A brief with further details on the quasi-experimental methods can be found here.