Search Articles

Search Results (1 to 10 of 2861 Results)

An Ensemble Learning Strategy for Eligibility Criteria Text Classification for Clinical Trial Recruitment: Algorithm Development and Validation

After that, we used BERT, RoBERTa, XLNet, and ERNIE to train the vectors and calculated the softmax value for each model's results. (A minimal sketch of this softmax ensembling follows the citation below.)

Kun Zeng, Zhiwei Pan, Yibin Xu, Yingying Qu

JMIR Med Inform 2020;8(7):e17832
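
The excerpt does not say how the per-model softmax values are combined; averaging them is a common choice. Below is a minimal NumPy sketch under that assumption; the function names and the averaging rule are illustrative, not the authors' confirmed method.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(per_model_logits):
    # per_model_logits: list of (n_samples, n_classes) arrays, one per
    # base model (e.g., BERT, RoBERTa, XLNet, ERNIE). Averaging the
    # softmax values is an assumption; the excerpt only states that a
    # softmax value was calculated for each model's results.
    probs = [softmax(l) for l in per_model_logits]
    return np.mean(probs, axis=0).argmax(axis=-1)
```

Majority voting over each model's argmax predictions would be an equally plausible reading of the excerpt.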

Depression Risk Prediction for Chinese Microblogs via Deep-Learning Methods: Content Analysis

Here, we further investigated three deep-learning methods with pretrained language representation models: BERT, the robustly optimized BERT pretraining approach (RoBERTa) [18], and generalized autoregressive pretraining for language understanding (XLNet) [19]. (A loading sketch for these three models follows the citation below.)

Xiaofeng Wang, Shuai Chen, Tao Li, Wanting Li, Yejie Zhou, Jie Zheng, Qingcai Chen, Jun Yan, Buzhou Tang

JMIR Med Inform 2020;8(7):e17958
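
For readers who want to reproduce this kind of model comparison, the sketch below loads three pretrained checkpoints through the Hugging Face transformers library. The checkpoint names are English-language stand-ins and the two-class head is an assumption; the study classified Chinese microblogs, so the authors' actual checkpoints and label set likely differ.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint names are illustrative stand-ins, not the paper's choices.
CHECKPOINTS = ["bert-base-uncased", "roberta-base", "xlnet-base-cased"]

models = {}
for name in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    # num_labels=2 assumes a binary depression-risk label.
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=2)
    models[name] = (tokenizer, model)
```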

Measurement of Semantic Textual Similarity in Clinical Texts: Comparison of Transformer-Based Models

RoBERTa has the same architecture as BERT but is pretrained with a robustly optimized strategy. The RoBERTa pretraining procedure used dynamic masked language modeling (MLM) but removed the next-sentence prediction task. (A minimal dynamic-masking sketch follows the citation below.)

Xi Yang, Xing He, Hansi Zhang, Yinghan Ma, Jiang Bian, Yonghui Wu

JMIR Med Inform 2020;8(11):e19735
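
Dynamic MLM, as described in the excerpt, re-samples which tokens are masked every time a sequence is fed to the model, rather than fixing the masks once at preprocessing as original BERT did. A minimal sketch follows; the 15% rate matches the usual BERT/RoBERTa convention, and the 80/10/10 mask-replacement rule is omitted for brevity.

```python
import random

def dynamic_mask(token_ids, mask_id, mask_prob=0.15):
    # Called fresh for each epoch/batch, so the masked positions differ
    # every time (dynamic), unlike static masking fixed at preprocessing.
    # Simplified: always substitutes mask_id, omitting the 80/10/10 rule.
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(inputs):
        if random.random() < mask_prob:
            labels.append(tok)    # loss is computed on the original token
            inputs[i] = mask_id   # hide the token from the model
        else:
            labels.append(-100)   # -100 is ignored by the MLM loss
    return inputs, labels
```

Because there is no next-sentence prediction task, each training example is simply one contiguous span of text with these masks applied.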