Optimal Training Sets for Bayesian Prediction of MeSH® Assignment

Sunghwan Sohn, Won Kim, Donald C. Comeau, W. John Wilbur

Research output: Contribution to journal › Article

29 Citations (Scopus)

Abstract

Objectives: The aim of this study was to improve naïve Bayes prediction of Medical Subject Headings (MeSH) assignment to documents using optimal training sets found by an active learning inspired method. Design: The authors selected 20 MeSH terms whose occurrences cover a range of frequencies. For each MeSH term, they found an optimal training set, a subset of the whole training set. An optimal training set consists of all documents including a given MeSH term (C1 class) and those documents not including a given MeSH term (C-1 class) that are closest to the C1 class. These small sets were used to predict MeSH assignments in the MEDLINE® database. Measurements: Average precision was used to compare MeSH assignment using the naïve Bayes learner trained on the whole training set, optimal sets, and random sets. The authors compared 95% lower confidence limits of average precisions of naïve Bayes with upper bounds for average precisions of a K-nearest neighbor (KNN) classifier. Results: For all 20 MeSH assignments, the optimal training sets produced nearly 200% improvement over use of the whole training sets. In 17 of those MeSH assignments, naïve Bayes using optimal training sets was statistically better than a KNN. In 15 of those, optimal training sets performed better than optimized feature selection. Overall naïve Bayes averaged 14% better than a KNN for all 20 MeSH assignments. Using these optimal sets with another classifier, C-modified least squares (CMLS), produced an additional 6% improvement over naïve Bayes. Conclusion: Using a smaller optimal training set greatly improved learning with naïve Bayes. The performance is superior to a KNN. The small training set can be used with other sophisticated learning methods, such as CMLS, where using the whole training set would not be feasible.
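The core idea in the abstract, keeping every document that carries the MeSH term (the C1 class) plus only those negative documents (C-1) closest to the C1 class, then training naive Bayes on that reduced set, can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the cosine-to-centroid similarity, the cut-off `k`, the Laplace smoothing, and the toy corpus are all assumptions made for the sketch.

```python
# Sketch of the "optimal training set" idea: all positives (C1) plus only the
# k negatives (C-1) most similar to the positive class, then a multinomial
# naive Bayes trained on that subset. Similarity metric and k are assumptions.
import math
from collections import Counter

def centroid_similarity(doc, positives):
    # cosine similarity between a document's term counts and the
    # centroid (summed term counts) of the positive class
    cen = Counter()
    for d in positives:
        cen.update(d)
    dot = sum(doc[t] * cen[t] for t in doc)
    na = math.sqrt(sum(v * v for v in doc.values()))
    nb = math.sqrt(sum(v * v for v in cen.values()))
    return dot / (na * nb) if na and nb else 0.0

def optimal_training_set(pos_docs, neg_docs, k):
    # keep every positive document and the k negatives closest to C1
    ranked = sorted(neg_docs,
                    key=lambda d: centroid_similarity(d, pos_docs),
                    reverse=True)
    return pos_docs, ranked[:k]

def train_nb(pos_docs, neg_docs, alpha=1.0):
    # multinomial naive Bayes with Laplace smoothing; returns a scorer
    # giving the log-odds of the positive class for a new document
    vocab = set()
    for d in pos_docs + neg_docs:
        vocab.update(d)
    def cls_stats(docs):
        cnt = Counter()
        for d in docs:
            cnt.update(d)
        return cnt, sum(cnt.values())
    pc, pt = cls_stats(pos_docs)
    nc, nt = cls_stats(neg_docs)
    V = len(vocab)
    def score(doc):
        s = math.log(len(pos_docs) / len(neg_docs))  # class prior log-odds
        for t, f in doc.items():
            s += f * (math.log((pc[t] + alpha) / (pt + alpha * V))
                      - math.log((nc[t] + alpha) / (nt + alpha * V)))
        return s
    return score

# toy corpus: documents as term-count bags
pos = [Counter("gene protein gene".split()), Counter("protein pathway".split())]
neg = [Counter("protein cell".split()), Counter("car engine".split()),
       Counter("road car".split())]
p, n = optimal_training_set(pos, neg, k=1)
score = train_nb(p, n)
print(score(Counter("gene pathway".split())) > score(Counter("engine".split())))
```

Ranking negatives by closeness to the positive centroid keeps exactly the hard negatives near the decision boundary, which is the active-learning intuition the abstract credits for the improvement.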

Original language: English (US)
Pages (from-to): 546-553
Number of pages: 8
Journal: Journal of the American Medical Informatics Association
Volume: 15
Issue number: 4
DOI: 10.1197/jamia.M2431
State: Published - Jul 1 2008
Externally published: Yes

Fingerprint

Medical Subject Headings
Bayes Theorem
Least-Squares Analysis
Learning
Problem-Based Learning
MEDLINE

ASJC Scopus subject areas

  • Health Informatics

Cite this

Optimal Training Sets for Bayesian Prediction of MeSH® Assignment. / Sohn, Sunghwan; Kim, Won; Comeau, Donald C.; Wilbur, W. John.

In: Journal of the American Medical Informatics Association, Vol. 15, No. 4, 01.07.2008, p. 546-553.


Sohn, Sunghwan; Kim, Won; Comeau, Donald C.; Wilbur, W. John. / Optimal Training Sets for Bayesian Prediction of MeSH® Assignment. In: Journal of the American Medical Informatics Association. 2008; Vol. 15, No. 4. pp. 546-553.
@article{b31f974ff87045bfbe4615f91bf02ebc,
title = "Optimal Training Sets for Bayesian Prediction of MeSH{\circledR} Assignment",
abstract = "Objectives: The aim of this study was to improve na{\"i}ve Bayes prediction of Medical Subject Headings (MeSH) assignment to documents using optimal training sets found by an active learning inspired method. Design: The authors selected 20 MeSH terms whose occurrences cover a range of frequencies. For each MeSH term, they found an optimal training set, a subset of the whole training set. An optimal training set consists of all documents including a given MeSH term (C1 class) and those documents not including a given MeSH term (C-1 class) that are closest to the C1 class. These small sets were used to predict MeSH assignments in the MEDLINE{\circledR} database. Measurements: Average precision was used to compare MeSH assignment using the na{\"i}ve Bayes learner trained on the whole training set, optimal sets, and random sets. The authors compared 95{\%} lower confidence limits of average precisions of na{\"i}ve Bayes with upper bounds for average precisions of a K-nearest neighbor (KNN) classifier. Results: For all 20 MeSH assignments, the optimal training sets produced nearly 200{\%} improvement over use of the whole training sets. In 17 of those MeSH assignments, na{\"i}ve Bayes using optimal training sets was statistically better than a KNN. In 15 of those, optimal training sets performed better than optimized feature selection. Overall na{\"i}ve Bayes averaged 14{\%} better than a KNN for all 20 MeSH assignments. Using these optimal sets with another classifier, C-modified least squares (CMLS), produced an additional 6{\%} improvement over na{\"i}ve Bayes. Conclusion: Using a smaller optimal training set greatly improved learning with na{\"i}ve Bayes. The performance is superior to a KNN. The small training set can be used with other sophisticated learning methods, such as CMLS, where using the whole training set would not be feasible.",
author = "Sohn, Sunghwan and Kim, Won and Comeau, {Donald C.} and Wilbur, {W. John}",
year = "2008",
month = "7",
day = "1",
doi = "10.1197/jamia.M2431",
language = "English (US)",
volume = "15",
pages = "546--553",
journal = "Journal of the American Medical Informatics Association : JAMIA",
issn = "1067-5027",
publisher = "Oxford University Press",
number = "4",
}

TY - JOUR

T1 - Optimal Training Sets for Bayesian Prediction of MeSH® Assignment

AU - Sohn, Sunghwan

AU - Kim, Won

AU - Comeau, Donald C.

AU - Wilbur, W. John

PY - 2008/7/1

Y1 - 2008/7/1

N2 - Objectives: The aim of this study was to improve naïve Bayes prediction of Medical Subject Headings (MeSH) assignment to documents using optimal training sets found by an active learning inspired method. Design: The authors selected 20 MeSH terms whose occurrences cover a range of frequencies. For each MeSH term, they found an optimal training set, a subset of the whole training set. An optimal training set consists of all documents including a given MeSH term (C1 class) and those documents not including a given MeSH term (C-1 class) that are closest to the C1 class. These small sets were used to predict MeSH assignments in the MEDLINE® database. Measurements: Average precision was used to compare MeSH assignment using the naïve Bayes learner trained on the whole training set, optimal sets, and random sets. The authors compared 95% lower confidence limits of average precisions of naïve Bayes with upper bounds for average precisions of a K-nearest neighbor (KNN) classifier. Results: For all 20 MeSH assignments, the optimal training sets produced nearly 200% improvement over use of the whole training sets. In 17 of those MeSH assignments, naïve Bayes using optimal training sets was statistically better than a KNN. In 15 of those, optimal training sets performed better than optimized feature selection. Overall naïve Bayes averaged 14% better than a KNN for all 20 MeSH assignments. Using these optimal sets with another classifier, C-modified least squares (CMLS), produced an additional 6% improvement over naïve Bayes. Conclusion: Using a smaller optimal training set greatly improved learning with naïve Bayes. The performance is superior to a KNN. The small training set can be used with other sophisticated learning methods, such as CMLS, where using the whole training set would not be feasible.

AB - Objectives: The aim of this study was to improve naïve Bayes prediction of Medical Subject Headings (MeSH) assignment to documents using optimal training sets found by an active learning inspired method. Design: The authors selected 20 MeSH terms whose occurrences cover a range of frequencies. For each MeSH term, they found an optimal training set, a subset of the whole training set. An optimal training set consists of all documents including a given MeSH term (C1 class) and those documents not including a given MeSH term (C-1 class) that are closest to the C1 class. These small sets were used to predict MeSH assignments in the MEDLINE® database. Measurements: Average precision was used to compare MeSH assignment using the naïve Bayes learner trained on the whole training set, optimal sets, and random sets. The authors compared 95% lower confidence limits of average precisions of naïve Bayes with upper bounds for average precisions of a K-nearest neighbor (KNN) classifier. Results: For all 20 MeSH assignments, the optimal training sets produced nearly 200% improvement over use of the whole training sets. In 17 of those MeSH assignments, naïve Bayes using optimal training sets was statistically better than a KNN. In 15 of those, optimal training sets performed better than optimized feature selection. Overall naïve Bayes averaged 14% better than a KNN for all 20 MeSH assignments. Using these optimal sets with another classifier, C-modified least squares (CMLS), produced an additional 6% improvement over naïve Bayes. Conclusion: Using a smaller optimal training set greatly improved learning with naïve Bayes. The performance is superior to a KNN. The small training set can be used with other sophisticated learning methods, such as CMLS, where using the whole training set would not be feasible.

UR - http://www.scopus.com/inward/record.url?scp=45849122150&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=45849122150&partnerID=8YFLogxK

U2 - 10.1197/jamia.M2431

DO - 10.1197/jamia.M2431

M3 - Article

VL - 15

SP - 546

EP - 553

JO - Journal of the American Medical Informatics Association : JAMIA

JF - Journal of the American Medical Informatics Association : JAMIA

SN - 1067-5027

IS - 4

ER -