Deep learning models are used ever more widely: they are the standard modeling choice in domains such as autonomous navigation, conversational systems, and medical diagnosis. With this increased use has come increased scrutiny. Researchers and engineers want to understand the reasons behind a model's predictions and the uncertainties in its outputs; in particular, we want to know where the model is not confident in its predictions. Quantifying this uncertainty is critical as these models become ubiquitous. We briefly discuss the theory of Bayesian learning and several algorithms that have been proposed to tackle the problem. We also present experiments that connect general phenomena observed when training deep networks with the uncertainty estimates produced by Bayesian approaches.
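To make the idea of uncertainty estimates concrete, the following is a minimal sketch of one well-known approximate Bayesian technique, Monte Carlo dropout (keeping dropout active at test time and treating the spread of repeated stochastic forward passes as a predictive uncertainty). The toy network, its weights, and the dropout rate here are all hypothetical choices for illustration, not the specific algorithms or models discussed in this work.

```python
import math
import random

# Hypothetical fixed weights for a toy 2-input network with 3 hidden units.
W1 = [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.4]]  # input -> hidden
W2 = [0.7, -0.2, 0.5]                         # hidden -> scalar output

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout left ON at test time."""
    hidden = []
    for w in W1:
        h = max(0.0, w[0] * x[0] + w[1] * x[1])  # ReLU activation
        if random.random() < p_drop:
            h = 0.0                               # unit dropped
        else:
            h /= (1.0 - p_drop)                   # inverted-dropout scaling
        hidden.append(h)
    return sum(wo * h for wo, h in zip(W2, hidden))

def mc_dropout_predict(x, n_samples=1000):
    """Mean and std over many stochastic passes: the mean approximates the
    predictive output, the std serves as a rough uncertainty estimate."""
    samples = [forward(x) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, math.sqrt(var)

random.seed(0)
mean, std = mc_dropout_predict([1.0, 2.0])
print(f"prediction: {mean:.3f} +/- {std:.3f}")
```

A nonzero standard deviation across passes signals model (epistemic) uncertainty at that input; inputs far from the training distribution typically show a larger spread.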