The accuracy of part-of-speech (POS) tagging reported in the medical natural language processing (NLP) literature is typically very high when the training and testing data sets come from the same domain and have similar characteristics, but drops when they differ. This presents a problem for clinical NLP, where large corpora of training data suited to localized tasks are difficult to obtain. We implemented the TnT POS tagger and trained it on a small, manually tagged corpus of publicly available synthetic clinical reports, supplemented with widely used public corpora (GENIA and the Penn Treebank). We describe this implementation and report evaluation results on MiPACQ, a large corpus of manually tagged clinical text. Our tagger achieves accuracy (91-93%) comparable to POS taggers trained on large amounts of real clinical data. This demonstrates that medical NLP developers need not rely on large, access-restricted resources for POS tagging.
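For intuition, the statistical machinery behind a TnT-style tagger can be sketched as a Markov-model tagger decoded with Viterbi search. The sketch below is a drastic simplification, not the authors' implementation: it uses bigram (not trigram) tag transitions, crude constant smoothing in place of TnT's suffix-based unknown-word handling, and a tiny invented clinical-flavored training corpus purely for illustration.

```python
from collections import defaultdict

def train_hmm(tagged_sents):
    """MLE transition P(tag | prev_tag) and emission P(word | tag) estimates."""
    trans_counts = defaultdict(lambda: defaultdict(int))
    emit_counts = defaultdict(lambda: defaultdict(int))
    for sent in tagged_sents:
        prev = "<s>"  # sentence-start pseudo-tag
        for word, tag in sent:
            trans_counts[prev][tag] += 1
            emit_counts[tag][word.lower()] += 1
            prev = tag
    normalize = lambda d: {k: v / sum(d.values()) for k, v in d.items()}
    trans = {p: normalize(c) for p, c in trans_counts.items()}
    emit = {t: normalize(c) for t, c in emit_counts.items()}
    return trans, emit

def viterbi(words, trans, emit, floor=1e-4):
    """Most likely tag sequence under the bigram model (constant smoothing)."""
    tags = list(emit)
    chart = [{} for _ in words]  # chart[i][tag] = (score, backpointer)
    for i, w in enumerate(words):
        w = w.lower()
        for t in tags:
            e = emit[t].get(w, floor)  # crude unknown-word smoothing
            if i == 0:
                chart[0][t] = (trans.get("<s>", {}).get(t, floor) * e, None)
            else:
                chart[i][t] = max(
                    ((chart[i - 1][p][0] * trans.get(p, {}).get(t, floor) * e, p)
                     for p in tags),
                    key=lambda x: x[0])
    # Backtrace from the best final tag.
    cur = max(chart[-1], key=lambda t: chart[-1][t][0])
    seq = [cur]
    for i in range(len(words) - 1, 0, -1):
        cur = chart[i][cur][1]
        seq.append(cur)
    seq.reverse()
    return list(zip(words, seq))

# Toy hand-tagged sentences (invented; stand-ins for the clinical/GENIA/PTB data).
train_sents = [
    [("The", "DT"), ("patient", "NN"), ("denies", "VBZ"), ("pain", "NN"), (".", ".")],
    [("The", "DT"), ("exam", "NN"), ("shows", "VBZ"), ("edema", "NN"), (".", ".")],
]
trans, emit = train_hmm(train_sents)
result = viterbi(["The", "patient", "shows", "edema", "."], trans, emit)
print(result)
# -> [('The', 'DT'), ('patient', 'NN'), ('shows', 'VBZ'), ('edema', 'NN'), ('.', '.')]
```

The full TnT model additionally interpolates unigram, bigram, and trigram tag contexts and infers tags for unknown words from word suffixes, which is where most of its cross-domain robustness comes from.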