Recent advances in medical imaging have opened many possibilities for exploiting both digital microscopic images and whole slide images (WSIs) for tasks such as classification, prediction, and retrieval, largely owing to annotated datasets made available by various research organizations. The magnification level is an important factor, as pathologists view biopsy samples at multiple magnifications to reach a diagnosis. Whereas WSIs generally contain the magnification information, microscopic snapshots are often captured without it; as a result, the large number of snapshots acquired with camera-mounted microscopes is of little use for automatic processing. In this paper, we introduce a new dataset, Kimia-5MAG, consisting of 33,345 patches at five magnification classes, created from WSIs made publicly available by The Cancer Genome Atlas (TCGA). One way to make snapshot collections usable is to learn the magnification level from high-resolution WSIs and transfer that knowledge to microscopic snapshots. We investigate combinations of several deep networks and classifiers to predict the magnification level; the proposed framework achieves 93% classification accuracy. We also analyze the effect of rotation on magnification prediction.
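As a minimal sketch of how patches at several magnification classes could be extracted from a TCGA WSI, the snippet below uses the OpenSlide library; the file name, patch size, and the list of five magnifications are illustrative assumptions, not the dataset's actual construction parameters.

```python
# Sketch: extract one patch per magnification class from a WSI (OpenSlide).
# File name, patch size, and magnification list are hypothetical.
import openslide

PATCH_SIZE = 1000                         # assumed patch width/height in pixels
MAGNIFICATIONS = [20, 10, 5, 2.5, 1.25]   # assumed five magnification classes

slide = openslide.OpenSlide("TCGA_sample.svs")  # hypothetical TCGA slide file
# Native objective power (e.g. 40x or 20x) is stored in the WSI metadata.
base_mag = float(slide.properties[openslide.PROPERTY_NAME_OBJECTIVE_POWER])

for mag in MAGNIFICATIONS:
    # Downsample factor relative to the slide's native magnification.
    factor = base_mag / mag
    level = slide.get_best_level_for_downsample(factor)
    # Read a region at the chosen pyramid level; (0, 0) is the top-left
    # corner in level-0 coordinates.
    region = slide.read_region((0, 0), level, (PATCH_SIZE, PATCH_SIZE))
    region.convert("RGB").save(f"patch_{mag}x.png")
```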
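The abstract does not specify which deep networks and classifiers were combined; the sketch below assumes one plausible pairing, a pretrained CNN used as a fixed feature extractor feeding a conventional classifier. DenseNet-121 and a linear SVM are illustrative choices, and the file lists stand in for the actual Kimia-5MAG training/test split.

```python
# Sketch: pretrained CNN features + SVM for magnification prediction.
# Backbone, classifier, and file lists are assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Pretrained CNN with its classification head removed -> feature extractor.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Return one deep-feature vector per image path."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return backbone(batch).numpy()

# Hypothetical file lists with magnification labels as classes.
train_paths, train_mags = ["patch_20x.png", "patch_10x.png"], [20, 10]
test_paths, test_mags = ["patch_5x.png"], [5]

clf = SVC(kernel="linear").fit(embed(train_paths), train_mags)
print("accuracy:", accuracy_score(test_mags, clf.predict(embed(test_paths))))
```

The reported rotation analysis could be reproduced in the same setup by applying a fixed rotation (e.g. `T.functional.rotate`) to the test patches before embedding and comparing the resulting accuracy.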