The goal of this thesis is to automate the conversion of 2D plots into a tactile format. We formulate the problem as image-to-image translation, where the source domain consists of 2D plots and the target domain contains their tactile equivalents. The proposed method is based on the pix2pix architecture, with UNet++ as the generator. We further propose using a gradient penalty and a perceptual loss to enhance the results. To achieve editable outputs, we propose two approaches: one generates RGB outputs, while the other generates multichannel outputs in which each channel corresponds to a component of the 2D plot. We evaluate the proposed models both quantitatively and qualitatively. For RGB outputs we use foreground MSE, background MSE, precision, and recall; for the channelwise model we use pixel accuracy, the Dice coefficient, and the Jaccard index.
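The channelwise metrics named above can be sketched for binary masks as follows. This is a minimal illustration assuming NumPy boolean arrays; the function names and the per-channel evaluation setup are illustrative, not taken from the thesis.

```python
import numpy as np

def pixel_accuracy(pred, target):
    # Fraction of pixels whose predicted label matches the target.
    return float(np.mean(pred == target))

def dice_coefficient(pred, target):
    # Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks.
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard_index(pred, target):
    # Jaccard (IoU) = |A ∩ B| / |A ∪ B| for binary masks.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy 2x2 example: the prediction has one false-positive pixel.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [0, 0]], dtype=bool)
print(pixel_accuracy(pred, target))    # 0.75
print(dice_coefficient(pred, target))  # 2/3
print(jaccard_index(pred, target))     # 0.5
```

In a channelwise evaluation, these metrics would typically be computed per channel (one channel per plot component) and then averaged.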