Title: A study on diffusion probabilistic models for image generation
Authors: Tang, Choon Yik; Munoz, Roman
Date accessioned/available: 2024-01-03
Date issued: 2023-12-15
URI: https://hdl.handle.net/11244/340084
License: Attribution-ShareAlike 4.0 International
Keywords: Image Generation; Diffusion Probabilistic Models; Artificial Intelligence; Deep Learning

Abstract: Diffusion probabilistic models have emerged as powerful tools for image generation and synthesis tasks. This research delves into the intricate relationship between the hyperparameters of these models and the underlying hardware, aiming to provide insight into optimizing performance. The study specifically investigates the effects of key hyperparameters: the number of feature parameters, the number of timesteps, the image size, the number of images in the dataset, the learning rate, and the number of epoch iterations. Additionally, the influence of hardware, particularly GPU memory, on overall performance is examined. Through a systematic experimentation framework, we consider different hyperparameter and hardware configurations to quantify their impact on model convergence, image generation quality, and computational efficiency. The research aims to identify optimal hyperparameter settings for diverse tasks while considering the constraints imposed by available hardware resources. Moreover, the study explores potential trade-offs and synergies between hyperparameter tuning and hardware specifications, shedding light on the interplay between algorithmic choices and computational capabilities. The findings from this research contribute to a nuanced understanding of how diffusion probabilistic models can be fine-tuned for image generation, considering the practical implications of hardware limitations. By bridging the gap between algorithmic design and hardware constraints, this work provides valuable guidance for practitioners seeking to leverage diffusion models effectively in real-world scenarios that require image generation.
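
To make the hyperparameters named in the abstract concrete, the sketch below shows the closed-form forward (noising) step of a standard denoising diffusion probabilistic model, with the number of timesteps and the image size exposed as tunable settings. The variable names and values here are illustrative assumptions, not the configuration actually used in the thesis:

```python
# Hypothetical sketch of the DDPM forward process q(x_t | x_0);
# hyperparameter values below are illustrative, not the thesis's setup.
import numpy as np

num_timesteps = 1000   # "number of timesteps" hyperparameter
image_size = 32        # "image size" hyperparameter (32x32 RGB)

# Linear beta noise schedule, as in the original DDPM formulation.
betas = np.linspace(1e-4, 0.02, num_timesteps)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative signal-retention factor

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t from q(x_t | x_0) in closed form."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x0 = np.ones((image_size, image_size, 3))      # toy "image"
x_noisy = q_sample(x0, t=num_timesteps - 1)    # nearly pure noise at the last step
```

Raising `num_timesteps` or `image_size` directly increases the per-step compute and memory footprint, which is the kind of hyperparameter-hardware interaction the study quantifies.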