Despite considerable research on pool-based deep active learning (DAL), deploying DAL in real-world applications still poses several challenges. A frequently neglected aspect is the choice of training hyperparameters (HPs), such as the learning rate. Since these HPs determine how the deep neural network learns in each cycle, they must be chosen carefully. In this article, we analyze the role of HPs in DAL. We find that optimizing HPs narrows the performance gap between DAL strategies. Conversely, we highlight the challenges of finding optimal HPs when training on datasets selected via DAL strategies.
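To make the setting concrete, the following is a minimal, purely illustrative sketch of a pool-based DAL loop in which the learning rate is re-selected in every cycle before the model is retrained. All names (`train`, `uncertainty`, `active_learning_loop`) and the toy one-parameter "model" are hypothetical stand-ins, not the method or code of this article; a real deployment would train a deep network and use an actual query strategy.

```python
import random

def uncertainty(model_weight, x):
    # Toy uncertainty score: points closest to the current model
    # parameter are treated as most informative.
    return -abs(x - model_weight)

def train(labeled, lr):
    # Toy "training": sequentially nudge a single scalar weight toward
    # each labeled point, with lr standing in for the SGD learning rate.
    w = 0.0
    for x in labeled:
        w += lr * (x - w)
    return w

def active_learning_loop(pool, cycles=3, budget=5, lrs=(0.01, 0.1, 1.0)):
    """Pool-based DAL sketch: each cycle queries `budget` points from the
    unlabeled pool, re-tunes the learning rate on the labeled set, and
    retrains the model from scratch."""
    labeled, unlabeled = [], list(pool)
    model = 0.0
    for _ in range(cycles):
        if not labeled:
            # First cycle: no model yet, so seed with random samples.
            random.shuffle(unlabeled)
        else:
            # Query strategy: rank pool by uncertainty under current model.
            unlabeled.sort(key=lambda x: uncertainty(model, x), reverse=True)
        labeled += unlabeled[:budget]
        unlabeled = unlabeled[budget:]
        # Per-cycle HP search: pick the lr with the lowest squared error
        # on the labeled set (stand-in for a proper validation protocol).
        best_lr = min(
            lrs,
            key=lambda lr: sum((x - train(labeled, lr)) ** 2 for x in labeled),
        )
        model = train(labeled, best_lr)
    return model, labeled

random.seed(0)
pool = [random.gauss(0.0, 1.0) for _ in range(100)]
model, labeled = active_learning_loop(pool)
```

The point of the sketch is the structure, not the model: because the labeled set changes every cycle, the HP search is repeated on a shifting, strategy-dependent dataset, which is exactly where the abstract locates the difficulty.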