The randomized controlled trial was approved by the lead author’s institutional review board and was registered with clinicaltrials.gov. No major deviations from the registered protocol occurred, and data collection for the intervention ended before COVID-19 resulted in school closures. The study was conducted in fall 2019 with four schools that collectively make up a unified school district in Southern California. The four schools range in size from 902 to 1,971 students each, have a combined enrollment of 5,706, and represent diverse racial and ethnic (e.g., 60% identify as Hispanic or Latinx and 16% as Black or African American) and income (e.g., 26% of families earn between $15,000 and $49,000 per year and 11% earn less than $15,000 per year) groups. All four schools had an active gender and sexuality alliance (GSA) but identified no other programming. The schools were randomly assigned to either an intervention or control condition. Youth in the intervention condition participated in the 10-week P&E program during one class period per week, whereas youth in the control condition attended school as usual during the same time period. Youth in both conditions completed measures at pretest and posttest, and measures were administered during the same week to both study conditions (pretest was one week before the first scheduled intervention session, and posttest was one week after the last intervention session). We intended to complete a subsequent follow-up assessment, but restrictions due to the COVID-19 pandemic prevented this. Measures included demographic characteristics, our hypothesized mechanism of change (i.e., minority stress), and mental health symptoms (i.e., depression, anxiety, PTSD, and suicidality). Participants received a $20 gift card for completing the measures at pretest and a $25 gift card at posttest.
Fidelity monitoring was conducted using an approach developed in our prior pilot work and focused on adherence to the curriculum, dosage, quality of service delivery, participant responsiveness, and program differentiation. After each session, the facilitator and the liaison (co-facilitator) rated the objectives, content, and activities on fidelity, appropriateness, and participant receptiveness using an adherence checklist. Based on the core dimensions of the cultural adaptation model, liaisons were asked to identify cognitive (comprehension), affective (cultural conflicts or motivation issues), developmental, and any other problems (e.g., environmental) with activities and to rate fidelity to each session element (concepts, objectives, activities, instructions).
Youth could participate in the P&E study if they (a) were students at the high school; (b) spoke English; (c) self-identified as LGBT or with another non-heterosexual or non-cisgender identity; and (d) were willing and able to provide verbal assent. To identify potential participants, the study coordinator and school counselors made verbal presentations, distributed fliers, and held direct and confidential meetings to recruit youth to the study. Because LGBT youth represent a sensitive population and human subjects protection is complex (i.e., parents may not know their child’s sexual or gender identity), we requested and received a waiver of parental consent from the institutional review board at the study team’s institution. Thus, potential youth participants received a detailed information sheet outlining the study’s goals, objectives, benefits, and possible risks and provided verbal assent or consent to participate (if they were 18 years old or turned 18 during the study).
Both school facilitators and student participants were informed that their school would be randomly selected to serve as either an intervention or control site. The district was recruited at the superintendent level, staff members at all schools were trained, students were administered pretests, and then schools were randomly assigned. We followed this process to help reduce the potential for bias in procedures.
Although the intervention was led by a study team member, selected school staff members were trained to cofacilitate the curriculum to ensure the program could be readily administered by counselors, teachers, and other school personnel (e.g., social workers). Each school had up to two full-time staff members who received a 1-day training on the P&E curriculum overseen by the principal investigator and facilitated by the project coordinator. Most trained staff members were school counselors (n = 5), but they also included teachers (n = 2) and other staff members. The training covered topics including (a) minority stress among adolescents, (b) adolescent development and gender and sexual identity formation, (c) the NIH prevention principles, (d) P&E curriculum implementation, (e) fidelity monitoring processes, and (f) program outcomes and the research plan. Using previously established best practices, facilitators learned their roles in the project, identifying the skills and concepts targeted by each activity throughout the program. Facilitators received a $1,000 honorarium for their time in the training and their participation throughout the school year.
Participants were organized to meet weekly during their administration period (i.e., homeroom), as part of their regular school day, to participate in the intervention. Each group was led by the project coordinator and a school staff member who had been trained as a facilitator. Each session covered a different domain of minority stress identified and refined in our prior work (as previously described). A facilitator manual guided the curriculum, was used to monitor fidelity, and included goals, learning objectives, activities, and a list of materials used for each session.
Although we considered whether an attention control would help establish intervention effects and reduce bias, we ultimately determined that attending classes as usual would be the most appropriate alternative activity, given that it most closely matched what youth would do if their school did not offer a focused intervention. Therefore, participants in the control condition did not take part in any unique activities; rather, they completed the surveys on the same timeline, and with the same incentive procedures, as the intervention group.
Demographic information was collected at both pre- and posttest and included gender, sex assigned at birth, age, grade, sexual orientation, and race. Gender categories included cisgender female, cisgender male, trans female or trans woman, trans male or trans man, and genderqueer or gender nonconforming. Sexual orientation had response options of gay, lesbian, bisexual, pansexual, asexual, and another sexual orientation (with the option to provide text). Participants could select all response options that applied on the race and ethnicity question, with options including non-Hispanic White, Black or African American, Latino or Hispanic, American Indian or Alaska Native, Native Hawaiian or other Pacific Islander, and another race and ethnicity with an open-ended text field. Other information collected included religion (both family and personal), language spoken (at home and with friends), whether the participant was currently working, and with whom the participant lives.
The SMASI features 54 items across 10 domains of minority stress for adolescents and has demonstrated good reliability and validity in this population [30, 31]. Each statement reflects past-30-day thoughts, feelings, and situations a person may have experienced, with response options of 1 = yes and 0 = no. A decline-to-answer option was also provided. A total SMASI score was calculated by summing the 54 items. Mean imputation at the participant level was used for respondents missing fewer than seven items; those missing seven or more items were removed from analysis. Scores for the 10 subscales were calculated as percentages: the number of endorsed (yes) responses was divided by the number of items a person actively responded to and multiplied by 100. The SMASI was collected at both time points.
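The scoring rules above (sum score, participant-level mean imputation below the seven-item missingness cutoff, and percentage-based subscales) can be sketched as follows. This is an illustrative implementation of the described rules, not the study team’s code; function names and the example data are hypothetical.

```python
# Sketch of the SMASI scoring rules described in the text.
# Responses: 1 = yes, 0 = no, None = declined to answer.

def smasi_total(responses):
    """Total score over the 54 items: sum of endorsed items, with
    participant-level mean imputation when fewer than 7 items are
    missing; participants missing 7 or more items are excluded."""
    missing = sum(1 for r in responses if r is None)
    if missing >= 7:
        return None  # removed from analysis
    answered = [r for r in responses if r is not None]
    mean = sum(answered) / len(answered)
    return sum(r if r is not None else mean for r in responses)

def smasi_subscale_pct(responses):
    """Subscale score: endorsed (yes) responses divided by items
    actively answered, multiplied by 100."""
    answered = [r for r in responses if r is not None]
    return 100 * sum(answered) / len(answered)

# Hypothetical participant: 5 endorsed, 47 not endorsed, 2 declined
items = [1] * 5 + [0] * 47 + [None] * 2
total = smasi_total(items)                 # 5 plus imputed values for the 2 missing
subscale = smasi_subscale_pct([1, 1, 1, 1, 1, 0])  # a hypothetical 6-item subscale
```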
Anxiety was measured using the 21-item Beck Anxiety Inventory . Questions asked how much the person had been bothered by past-month symptoms, with response options of 0 (not at all), 1 (mildly but it didn’t bother me much), 2 (moderately—it wasn’t pleasant at times), and 3 (severely—it bothered me a lot). A sum score was created with a theoretical range between 0 and 63, with higher scores indicating higher levels of anxiety.
The Beck Depression Inventory II  is a list of 21 statements that describe how a person may have been feeling during the past 2 weeks, including sadness, loss of pleasure, and crying. Response options range from 0 to 3 and are unique to each question by catering to the specific feeling or behavior. Scores are calculated by summing all items and have a range between 0 and 63. Higher scores on the inventory demonstrate higher levels of depression.
The PTSD Checklist for DSM-5  was used to measure PTSD and assessed the extent to which the participant was bothered by 20 past-month experiences. Response options range from 0 (not at all) to 4 (extremely), with a possible sum score range between 0 and 80.
Suicidality was measured using the adapted Columbia-Suicide Severity Rating Scale through five yes-or-no questions pertaining to the past 30 days. Worst-point severity is used to score the measure: a person’s last endorsed (yes) question is their score, with a range from 0 (no endorsed items) to 5 (the person endorsed the most severe question: “Have you thought about a specific plan [for example, having a time or place] to kill yourself?”). As a note, our human subjects plan included informing participants that if they endorsed suicidality, the facilitator would be notified confidentially (through Qualtrics skip logic) and that, after class, they would be provided access to resources for dealing with suicidality, including an opportunity to speak privately with their school counselor for additional support. Eight participants endorsed suicidality at the pretest and received these resources.
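The worst-point scoring rule described above (the score is the position of the last question endorsed “yes,” or 0 if none) can be sketched as follows; the function name and example responses are illustrative, not from the study.

```python
# Sketch of worst-point scoring for the five-item adapted C-SSRS:
# the score is the index (1-5) of the last item endorsed "yes",
# with items ordered from least to most severe; 0 if none endorsed.

def cssrs_worst_point(responses):
    """responses: list of five 0/1 values. Returns a score of 0-5."""
    score = 0
    for i, r in enumerate(responses, start=1):
        if r == 1:
            score = i
    return score

cssrs_worst_point([1, 1, 0, 0, 0])  # score 2
cssrs_worst_point([0, 0, 0, 0, 0])  # score 0 (no endorsed items)
```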
Each school was randomly assigned to the intervention or control condition. Thus, all students in a school received the same condition. Those in control schools were coded as 0, and those in intervention schools were coded as 1.
Descriptive statistics for demographics and outcome measures were reported first. Preliminary analyses used chi-square tests and one-way analyses of variance (ANOVA) to ensure that demographic characteristics and SMASI and mental health outcomes did not differ by group assignment (intervention vs. control) at Time 1. A two-by-two (group-by-time) repeated-measures ANOVA was used to examine the relationship between pre- and posttest scores across the intervention conditions for each outcome: minority stress and its subscales, and the mental health symptoms of anxiety, depression, PTSD, and suicidality. These analyses were designed to determine whether the intervention group differed from the control group in outcomes over time.
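With only two time points, the group-by-time interaction in a two-by-two mixed-design ANOVA is statistically equivalent to comparing pretest-to-posttest change scores between groups (the interaction F equals the squared t from an independent-samples t-test on change scores). A minimal sketch of that equivalent test, using simulated data rather than the study’s data:

```python
# Sketch of the group-by-time interaction test using simulated data.
# For a 2 (group) x 2 (time) design, testing the interaction is
# equivalent to an independent-samples t-test on change scores
# (posttest minus pretest); the interaction F equals t squared.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated pre/post scores (hypothetical values, not study data)
pre_int = rng.normal(20, 5, 22)   # intervention group, pretest
post_int = rng.normal(16, 5, 22)  # intervention group, posttest
pre_ctl = rng.normal(20, 5, 22)   # control group, pretest
post_ctl = rng.normal(20, 5, 22)  # control group, posttest

change_int = post_int - pre_int
change_ctl = post_ctl - pre_ctl

t, p = stats.ttest_ind(change_int, change_ctl)
f_interaction = t ** 2  # equals the mixed-ANOVA group-by-time F
```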
Subsequent analyses examined intervention condition as a moderator of the relationship between minority stress and mental health symptoms at Time 2, while controlling for mental health symptoms at Time 1. In each analysis, mental health symptoms at Time 2 (dependent variable) were regressed onto mental health symptoms at Time 1 (covariate), minority stress (independent variable), intervention condition (moderator), and the minority stress–intervention interaction term. Power analyses were conducted in G*Power for both the repeated-measures ANOVA and the linear regressions with moderation. With 44 participants and α = 0.05, we were sufficiently powered (0.80) to detect an effect size of 0.12 for the repeated-measures ANOVA and an effect size of 0.18 for the moderation analysis. All analyses were conducted using SPSS version 25; moderation analyses used the PROCESS macro v3.4.
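For a single continuous predictor and a binary moderator, the PROCESS model described above reduces to an ordinary least-squares regression with an interaction term. A minimal sketch of that model, using simulated data and hypothetical variable names (not the study’s data or code):

```python
# Sketch of the moderation model: Time 2 symptoms regressed on Time 1
# symptoms, minority stress, intervention condition, and their
# interaction. Data are simulated; variable names are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 44  # matches the study's analyzed sample size
df = pd.DataFrame({
    "mh_t1": rng.normal(15, 5, n),        # Time 1 symptoms (covariate)
    "smasi": rng.normal(20, 8, n),        # minority stress (independent variable)
    "condition": rng.integers(0, 2, n),   # 0 = control, 1 = intervention
})
# Simulated Time 2 outcome with a built-in interaction effect
df["mh_t2"] = (0.6 * df["mh_t1"] + 0.3 * df["smasi"]
               - 0.2 * df["smasi"] * df["condition"]
               + rng.normal(0, 3, n))

# smasi * condition expands to smasi + condition + smasi:condition
fit = smf.ols("mh_t2 ~ mh_t1 + smasi * condition", data=df).fit()
print(fit.params["smasi:condition"])  # the moderation coefficient
```

The interaction coefficient tests whether the minority stress–symptom relationship differs between conditions, which is the moderation question the analysis addresses.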