Given a sufficiently large and varied set of photographs and videos, AI algorithms can generate synthetic, manipulable video representations of human faces with a significant degree of perceptual realism. These visual media, commonly referred to as 'deepfakes,' are widely regarded as serious threats to our understanding of truth. Situating 'deepfakes' within longstanding epistemological debates, I argue that while such negative consequences are indeed possible, the medium is not inherently deceptive or malicious. It is not the technology itself that poses these threats, but rather how humans use it. 'Deepfakes' do not mark a radical departure in our conceptions of truth in representation, but they do expose an overconfidence in the truth of photographic images. Rather than outright bans or other technology-specific regulation, I propose that the careful study of 'deepfakes' offers an opportunity to address lapses in critical accounts of imagery and visual representation more generally.