Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool for quantifying molecular compositions and studying molecular states in the complex cellular environment, as lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods for generating FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep-learning-based method, termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation), that rapidly generates accurate, high-quality FLIM images even under photon-starved conditions. We demonstrate that our model is not only 2,800 times faster than the gold-standard time-domain maximum likelihood estimation (TD_MLE) method but also provides more accurate analysis in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis in live cells. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications where ultrafast analysis is critical.
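To illustrate why per-pixel lifetime estimation degrades in photon-starved conditions, consider a toy sketch of maximum likelihood estimation for a mono-exponential decay. This is not the TD_MLE pipeline or flimGANE itself; it assumes an idealized case (single-exponential decay, no instrument response function, no background, unbounded observation window), in which the ML lifetime estimate reduces to the sample mean of photon arrival times. The lifetime value and photon counts below are hypothetical.

```python
import random
import statistics

random.seed(0)
TRUE_TAU = 2.5  # hypothetical fluorophore lifetime, in ns

def mle_lifetime(arrival_times):
    """ML lifetime estimate for an ideal mono-exponential decay.

    With no IRF, background, or window truncation, the likelihood
    is maximized by the sample mean of the photon arrival times.
    """
    return statistics.mean(arrival_times)

# Compare a photon-rich pixel to a photon-starved one: the estimate
# from few photons scatters far more around the true lifetime.
for n_photons in (10_000, 50):
    times = [random.expovariate(1.0 / TRUE_TAU) for _ in range(n_photons)]
    print(f"{n_photons:>6} photons -> tau_hat = {mle_lifetime(times):.2f} ns")
```

The standard error of this estimator scales as tau divided by the square root of the photon count, which is why low-photon pixels yield noisy lifetime maps and why faster, more robust estimators are valuable.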