Perioperative bleeding (PB) is associated with increased patient morbidity and mortality and results in substantial health care resource utilization. To assess bleeding risk, routine practice in most centers is to rely on indicators such as an elevated International Normalized Ratio (INR). For patients with elevated INR, the routine therapeutic option is plasma transfusion. However, the predictive accuracy of INR and the value of plasma transfusion remain unclear. Accurate methods are therefore needed to identify, early on, patients at increased risk of bleeding. The goal of this work is to apply advanced machine learning methods to study the relationship between preoperative plasma transfusion (PPT) and PB in patients with elevated INR undergoing noncardiac surgery. The problem is cast in the framework of causal inference, where robust, meaningful measures quantifying the effect of PPT on PB are estimated. Results show that machine learning and standard statistical methods generally agree that PPT negatively impacts PB and other important patient outcomes. However, the machine learning methods yield statistically significant results, and machine learning boosting methods make fewer errors in predicting PB.
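To make the causal-inference framing concrete, the sketch below shows one generic way an average treatment effect might be estimated from observational data: inverse-probability weighting with a fitted propensity-score model. This is an illustrative example on synthetic data, not the authors' estimator; the variable names (`T` for a binary treatment such as PPT, `Y` for a binary outcome such as PB, `X` for preoperative covariates) and the data-generating coefficients are assumptions made here for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (NOT the authors' method): inverse-probability-weighted
# (IPW) estimate of the average treatment effect (ATE) of a binary treatment T
# (e.g., preoperative plasma transfusion) on a binary outcome Y (e.g.,
# perioperative bleeding), adjusting for covariates X.

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                      # hypothetical preoperative covariates
p_treat = 1 / (1 + np.exp(-0.5 * X[:, 0]))      # treatment depends on covariates (confounding)
T = rng.binomial(1, p_treat)
p_out = 1 / (1 + np.exp(-(0.8 * T + 0.5 * X[:, 0] - 0.3 * X[:, 1])))
Y = rng.binomial(1, p_out)

# Step 1: fit a propensity-score model P(T = 1 | X).
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Step 2: IPW estimate of the ATE, E[Y(1)] - E[Y(0)].
# Weighting by the inverse propensity balances covariates across groups.
ate = np.mean(T * Y / ps) - np.mean((1 - T) * Y / (1 - ps))
print(f"Estimated ATE: {ate:.3f}")
```

Because the synthetic treatment effect is positive on the logit scale, the IPW estimate recovers a positive ATE, whereas the naive difference in group means would be biased by the confounded treatment assignment.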