Medical imaging plays a critical role in many clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis can therefore be of great benefit, estimating a desired imaging modality without acquiring an actual scan. In this paper, we propose a generative adversarial approach to this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear source-to-target mapping and to produce more realistic target images, we train the FCN with an adversarial learning strategy inspired by the generative adversarial network (GAN). Moreover, the FCN incorporates an image-gradient-difference loss term to avoid generating blurry target images. A long-term residual unit is also explored to ease the training of the generative network for certain medical image synthesis tasks. To compensate for the deficiencies of patch-based training, we further apply the Auto-Context Model (ACM) to build a context-aware deep convolutional adversarial network. Experimental results demonstrate the robustness and accuracy of our method in synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, covering two tasks: generating CT from MRI and generating 7T MRI from 3T MRI. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks, and the proposed adversarial learning strategy is shown to help generate more realistic images. Our code is publicly available online.
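To make the image-gradient-difference idea concrete, the sketch below shows a minimal NumPy formulation of such a loss: it compares the magnitudes of horizontal and vertical finite differences of the real and synthesized images, so that a prediction that smooths away edges is penalized even if its per-pixel intensities are close. The function name, the `alpha` exponent, and this exact formulation are our own illustrative assumptions, not the paper's released code.

```python
import numpy as np

def gradient_difference_loss(target, pred, alpha=1.0):
    """Illustrative image-gradient-difference loss (assumed formulation).

    Penalizes differences between the absolute spatial gradients of the
    ground-truth image and the synthesized image, discouraging blurry output.
    """
    # Absolute horizontal (axis=1) and vertical (axis=0) finite differences.
    gx_t = np.abs(np.diff(target, axis=1))
    gx_p = np.abs(np.diff(pred, axis=1))
    gy_t = np.abs(np.diff(target, axis=0))
    gy_p = np.abs(np.diff(pred, axis=0))
    # Sum of gradient-magnitude mismatches, raised to the power alpha.
    return (np.sum(np.abs(gx_t - gx_p) ** alpha)
            + np.sum(np.abs(gy_t - gy_p) ** alpha))

# A sharp vertical edge: a constant (blurred) prediction matches the mean
# intensity but loses the edge, so the gradient-difference loss is positive.
target = np.array([[0.0, 1.0], [0.0, 1.0]])
blurry = np.full_like(target, 0.5)
print(gradient_difference_loss(target, target))  # → 0.0
print(gradient_difference_loss(target, blurry))  # → 2.0
```

In practice such a term would be added to the generator's reconstruction and adversarial losses with a weighting coefficient; the example here only illustrates why it discourages over-smoothed synthesis.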