We address the problem of segmentation when no gold-standard labels are available for new image acquisition protocols. We developed a dual generative adversarial network (GAN), called Synth-GAN, which incorporates a differential operator loss (to favor retaining edges) as well as a cyclic loss (to guarantee reconstruction of inputs). We show how the developed approach allows an automated deep learning segmentation method trained on one type of image (T2-weighted fat-saturated MR) to be applied successfully to images well outside the training distribution (T1-weighted MR). A total of 100 images of each sequence, drawn from different patients, were used (80% for training), and performance was assessed by comparing how the previously developed automated segmentation approach performed before and after application of Synth-GAN. The developed approach improved the Dice coefficient from 0.39 (applying the automated segmentation method to the original T1 images) to 0.74 (applying it to the synthesized T2 images). This approach will be useful for generalizing automated methods across modalities and institutions when differences in hardware and software significantly alter image representations.
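To make the evaluation metric and the edge-preserving term concrete, the following is a minimal NumPy sketch, not the paper's implementation: a Dice overlap function (the metric reported above), and a hypothetical `edge_loss` that penalizes differences in finite-difference gradient magnitude between a source image and its synthesized counterpart, one simple instantiation of a differential operator loss. Function names and the specific operator are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0

def edge_loss(source, synthesized):
    """Illustrative differential-operator loss: mean absolute difference of
    gradient magnitudes, so synthesized images that blur or drop edges
    present in the source incur a higher penalty."""
    gy_s, gx_s = np.gradient(source.astype(float))
    gy_t, gx_t = np.gradient(synthesized.astype(float))
    return float(np.mean(np.abs(np.hypot(gx_s, gy_s) - np.hypot(gx_t, gy_t))))
```

In a full cycle-consistent setup, a term like `edge_loss` would be added to the adversarial and reconstruction losses of each generator, weighted by a tunable coefficient.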