key: cord-0728361-srnuuwlq
authors: Baek, Younghwa; Jeong, Kyoungsik; Lee, Siwoo; Kim, Hoseok; Seo, Bok-Nam; Jin, Hee-Jeong
title: Feasibility and Effectiveness of Assessing Subhealth Using a Mobile Health Management App (MibyeongBogam) in Early Middle-Aged Koreans: Randomized Controlled Trial
date: 2021-08-19
journal: JMIR Mhealth Uhealth
DOI: 10.2196/27455
sha: 2f93e249b1266126658513471cf17f453a3d465f
doc_id: 728361
cord_uid: srnuuwlq

BACKGROUND: Mobile health (mHealth) is a major vehicle for health management systems, and worldwide demand for mHealth, reshaped by the COVID-19 pandemic, is increasing. Accordingly, interest in everyday health care and the importance of mHealth are growing.

OBJECTIVE: We developed the MibyeongBogam (MBBG) app, which evaluates the user's subhealth status via a smartphone and provides a health management method based on that status for use in everyday life. Subhealth is defined as a state in which the capacity to recover to a healthy state is diminished, but without the presence of clinical disease. The objective of this study was to compare the awareness and status of subhealth between intervention and control groups after use of the MBBG app, and to evaluate the app's practicality.

METHODS: This study was a prospective, open-label, parallel-group, randomized controlled trial. It was conducted at two hospitals in Korea with 150 healthy people in their 30s and 40s, allocated at a 1:1 ratio. Participants visited the hospital three times: before the intervention, at an intermediate visit 6 weeks after the intervention, and at a final visit 12 weeks after the intervention. Key endpoints were measured at the first visit before the intervention and at 12 weeks after the intervention. The primary outcome was awareness of subhealth; the secondary outcomes were subhealth status, health-promoting behaviors, and motivation to engage in healthy behaviors.
RESULTS: The primary outcome, subhealth awareness, tended to increase slightly in both groups after the intervention, but the scores did not differ significantly between the two groups (intervention group: mean 23.69, SD 0.25 vs control group: mean 23.1, SD 0.25; P=.09). Among the secondary outcomes, only some variables of subhealth status differed significantly between the two groups after the intervention: the intervention group showed improvements over the control group in the total subhealth score (P=.03), sleep disturbance (P=.02), depression (P=.003), anger (P=.01), and anxiety symptoms (P=.009).

CONCLUSIONS: In this study, the MBBG app showed potential for improving health, especially with regard to sleep disturbance and depression, in individuals without particular health problems. However, the effects of the app on subhealth awareness and health-promoting behaviors could not be clearly evaluated. Further studies are therefore needed to assess improvements in health after the use of the personalized health management programs provided by the MBBG app. The MBBG app may be useful for members of the general public who have not been diagnosed with a disease but are unable to lead an optimal daily life due to discomfort, helping them seek strategies to improve their health.

TRIAL REGISTRATION: Clinical Research Information Service KCT0003488; https://cris.nih.go.kr/cris/search/search_result_st01.jsp?seq=14379

CONSORT-EHEALTH checklist items
[...] comparator, care providers, centers, and blinding status. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
ii) Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention the number and expertise of providers involved, if any). (Note: Only report in the abstract what the main paper is reporting.
If this information is missing from the main body of text, consider adding it.)
iii) Open vs. closed, web-based (self-assessment) vs. face-to-face assessments in abstract: Mention how participants were recruited (online vs. offline), e.g., from an open-access website, from a clinic, or from a closed online user group (closed user-group trial), and clarify whether this was a purely web-based trial or whether there were face-to-face components (as part of the intervention or for assessment). Clearly state whether outcomes were self-assessed through questionnaires (as is common in web-based trials). Note: In traditional offline trials, an open trial (open-label trial) is a type of clinical trial in which both the researchers and participants know which treatment is being administered. To avoid confusion, use "blinded" or "unblinded" to indicate the level of blinding instead of "open", as "open" in web-based trials usually refers to "open access" (i.e., participants can self-enrol). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.) [Highly Recommended]
iv) Results in abstract must contain use data: Report the number of participants enrolled/assessed in each group and the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins, etc.), in addition to primary/secondary outcomes. (Reported in: Figure 2; Not applicable; P5. Intervention)
viii) Describe the mode of delivery, features/functionalities/components of the intervention and comparator, and the theoretical framework [6] used to design them (instructional strategy [1], behaviour change techniques, persuasive features, etc.; see, e.g., [7,8] for terminology). This includes an in-depth description of the content (including where it comes from and who developed it) [1], "whether [and how] it is tailored to individual circumstances and allows users to track their progress and receive feedback" [6].
This also includes a description of communication delivery channels and, if computer-mediated communication is a component, whether communication was synchronous or asynchronous [6]. It also includes information on presentation strategies [1], including page design principles, average amount of text on pages, presence of hyperlinks to other resources, etc. [1]. [Essential]
ix) Describe use parameters (e.g., intended "doses" and optimal timing for use) [1]. Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, or heaviness of use [1], if any, or whether the intervention was used ad libitum. [Highly Recommended]
x) Clarify the level of human involvement (care providers or health professionals, including technical assistance) in the e-intervention or as a co-intervention. Detail the number and expertise of professionals involved, if any, as well as the "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered" [6]. It may be necessary to distinguish between the level of human involvement required for the trial and the level required for routine application outside of an RCT setting (discuss under item 21, generalizability).
xii) Describe any co-interventions (incl. training/support): Clearly state any "interventions that are provided in addition to the targeted eHealth intervention" [1], as an eHealth intervention may not be designed as a standalone intervention. This includes training sessions and support [1]. It may be necessary to distinguish between the level of training required for the trial and the level of training required for routine application outside of an RCT setting (discuss under item 21, generalizability).
Outcomes 6a Completely defined, pre-specified primary and secondary outcome measures, including how and when they were assessed.
i) If outcomes were obtained through online questionnaires, describe whether they were validated for online use [6] and apply CHERRIES items to describe how the questionnaires were designed/deployed [9].
Allocation concealment mechanism 9 Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned. No EHEALTH-specific additions here.
Implementation 10 Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions.
Blinding 11a If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how. NPT: Whether or not those administering co-interventions were blinded to group assignment.
i) Specify who was blinded and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1,3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis, or those administering co-interventions (if any). [Highly Recommended]
[...] See [6] for some items to be included in informed consent documents.
iii) Safety and security procedures, incl. privacy considerations, and "any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline)" [1].
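The allocation items above (generation and concealment of the random allocation sequence) are often implemented as permuted-block randomization. As a purely illustrative sketch, not this trial's actual procedure (the function name, block size, and arm labels are assumptions), a 1:1 sequence for a two-arm trial can be generated like this:

```python
import random

def block_randomization(n_participants, block_size=4,
                        arms=("intervention", "control"), seed=42):
    """Generate a 1:1 allocation sequence using randomly permuted blocks.

    Within each block, every arm appears equally often, so group sizes
    stay balanced throughout enrollment while the order of assignments
    remains unpredictable to enrolling staff.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # in practice, the seed is concealed from enrolling staff
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)       # permute each block independently
        sequence.extend(block)
    return sequence[:n_participants]

# Hypothetical use for a trial of 150 participants in two arms.
seq = block_randomization(150)
```

Because the final block may be truncated, group sizes can differ by at most half a block; stratifying the sequence by center is a common refinement when, as here, two hospitals enroll participants.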
Participant flow (a diagram is strongly recommended)
13a For each group, the numbers of participants who were randomly assigned, received the intended treatment, and were analysed for the primary outcome. NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center.
13b For each group, losses and exclusions after randomisation, together with reasons.
i) Strongly recommended: an attrition diagram (e.g., the proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) [5], or other figures or tables demonstrating usage/dose/engagement.
14a Dates defining the periods of recruitment and follow-up.
i) Indicate whether critical "secular events" [1] fell into the study period, e.g., significant changes in the Internet resources available or "changes in computer hardware or Internet delivery resources" [1].
14b Why the trial ended or was stopped [early]. No EHEALTH-specific additions here.
Numbers analysed 16 For each group, the number of participants (denominator) included in each analysis and whether the analysis was by originally assigned groups.
i) Report multiple "denominators" and provide definitions: report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N who used more than x times, N who used more than y weeks, N participants who "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention. [Essential]
ii) The primary analysis should be intent-to-treat; secondary analyses could include comparing only "users", with the appropriate caveat that this is no longer a randomized sample (see 18-i).
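The attrition diagram recommended in 13b-i can be computed directly from usage logs. A minimal sketch, assuming each participant is summarized by the last week in which any use was recorded (the function name and the 12-week grid mirroring this trial's follow-up are assumptions):

```python
def retention_curve(last_active_week, total_weeks=12):
    """Proportion of a group still active at each week of follow-up.

    last_active_week: one entry per participant, the last week (0-based)
    with any recorded use of the app.

    Returns a list indexed by week: retention[w] is the share of the group
    whose last activity falls at week w or later. The result is a
    non-increasing step curve, analogous to a survival curve, which can be
    plotted per group as the attrition diagram.
    """
    n = len(last_active_week)
    return [sum(1 for w in last_active_week if w >= week) / n
            for week in range(total_weeks + 1)]

# Hypothetical log: eight participants' last active week over 12 weeks.
curve = retention_curve([12, 12, 10, 8, 12, 3, 12, 6])
```

Plotting one such curve per randomized group, over the same time axis, gives exactly the "proportion still using over time" figure the checklist asks for.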
Outcomes and estimation 17a For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as a 95% confidence interval).
i) In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This refers not only to metrics of attrition (13b), often a binary variable, but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a). [Highly Recommended] (Reported in: P4. Study design; Figure 1; P8. Study population; Figure 1; P8. Result Table 1; P8. Result Table 1; P7. Statistical analysis; P9. Table 2)
17b For binary outcomes, presentation of both absolute and relative effect sizes is recommended.
Ancillary analyses 18 Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory.
i) A subgroup analysis comparing only users is not uncommon in eHealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).
Harms 19 All important harms or unintended effects in each group (for specific guidance see CONSORT for harms).
i) Include privacy breaches and technical problems. This includes not only physical "harm" to participants but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].
ii) Include qualitative feedback from participants or observations from staff/researchers, if available, on strengths and shortcomings of the application, especially if they point to unintended/unexpected effects or uses.
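Item 17a-i asks for a technical definition of exposure metrics such as a "session". A common convention, sketched here under assumed names (the 30-minute idle timeout is an arbitrary example, not a value used by this trial), is to split a participant's event timestamps into sessions wherever the gap between consecutive events exceeds the timeout:

```python
def session_lengths(timestamps, idle_timeout=30 * 60):
    """Split a participant's event timestamps (in seconds) into sessions.

    A new session starts whenever the gap between consecutive events
    exceeds idle_timeout. Returns the duration of each session in seconds;
    a single-event session has duration 0.
    """
    if not timestamps:
        return []
    ts = sorted(timestamps)
    lengths, start, prev = [], ts[0], ts[0]
    for t in ts[1:]:
        if t - prev > idle_timeout:
            lengths.append(prev - start)  # close the current session
            start = t                     # open a new one
        prev = t
    lengths.append(prev - start)          # close the final session
    return lengths

# Hypothetical log: three events within five minutes, then one an hour later.
lengths = session_lengths([0, 120, 300, 4000])
```

Averaging these per-participant durations yields the "average session length" metric; the checklist's point is that the timeout and the event types counted must be stated explicitly, since both choices change the metric.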
This includes (if available) reasons why people did or did not use the application as intended by the developers.
** NPT = nonpharmacological treatment (CONSORT extension) [11]
DISCUSSION
Limitations 20 Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses.
i) Typical limitations in eHealth trials: participants in eHealth trials are rarely blinded, and eHealth trials often examine a multiplicity of outcomes, increasing the risk of a Type I error. Discuss biases due to non-use of the intervention or usability issues, biases introduced through informed consent procedures, and unexpected events.
Generalisability 21 Generalisability (external validity, applicability) of the trial findings. NPT: External validity of the trial findings according to the intervention, comparators, patients, and care providers or centers involved in the trial.
i) Generalizability to other populations: in particular, discuss generalizability to a general Internet population outside of an RCT setting and to a general patient population, including the applicability of the study results for other organizations [2].
ii) Discuss whether there were elements in the RCT that would differ in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions, or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of an RCT setting.
References
[1] Relevance of CONSORT reporting criteria for research on eHealth interventions
[2] STARE-HI: Statement on Reporting of Evaluation Studies in Health Informatics
[3] Issues in evaluating health websites in an Internet-based randomized controlled trial
[4] Missing data approaches in eHealth research: simulation study and a tutorial for nonmathematically inclined researchers
[5] The law of attrition
[6] Establishing guidelines for executing and reporting Internet intervention research
[7] Using the Internet to promote health behavior change: a systematic review and meta-analysis of the impact of theoretical basis, use of behavior change techniques, and mode of delivery on efficacy
[8] Online interventions for social marketing health behavior change campaigns: a meta-analysis of psychological architectures and adherence factors
[9] Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
[10] CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials
[11] Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration