Hello NPTEL Learners! In this article, you will find the NPTEL Deep Learning Assignment 9 Answers for 2023. The answers below are provided only as a reference: don't jump straight to the solutions; first try to solve the questions yourself, and look at the solutions only if you get stuck.
Note: We are doing our best here, so please share this page with your friends as well.

NPTEL Deep Learning Assignment 9 Answers 2023:
Q.1. Suppose Elina has trained a neural network 3 times (Experiments A, B, and C) with some unknown optimizer. Each time she kept all other hyperparameters the same and changed only one hyperparameter. From the three given loss curves, can you identify which hyperparameter it is?
Q.2. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.
Statement 1: Mini-batch gradient descent will always overshoot the optimum point even with a lower learning rate value.
Statement 2: Mini-batch gradient descent might oscillate on its path towards convergence, and this oscillation can be reduced by a momentum optimizer.
- a. Statement 1 is True and Statement 2 is False
- b. Statement 1 is False and Statement 2 is True
- c. Statement 1 is True and Statement 2 is True
- d. Statement 1 is False and Statement 2 is False
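To build intuition for Statement 2, here is a minimal sketch (the heavy-ball form v ← βv + g and β = 0.9 are illustrative assumptions, not from the course) of how a momentum buffer treats an oscillating gradient component versus a consistent one:

```python
beta = 0.9                        # momentum coefficient (assumed typical value)
v_osc, v_con = 0.0, 0.0

for t in range(100):
    g_osc = (-1.0) ** t           # gradient component that flips sign each step
    g_con = 1.0                   # gradient component that always points one way
    v_osc = beta * v_osc + g_osc  # heavy-ball velocity update: v <- beta*v + g
    v_con = beta * v_con + g_con

print(abs(v_osc))  # ~1/(1+beta) ~= 0.53: sign-flipping gradients mostly cancel
print(v_con)       # ~1/(1-beta) = 10.0: consistent gradients accumulate
```

Because opposing gradients cancel inside the velocity while consistent ones add up, momentum damps the zig-zag without slowing progress toward the optimum; the same intuition applies to option d of Q.5 below.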
Q.3. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.
Statement 1: Apart from the learning rate, the Momentum optimizer has two hyperparameters whereas Adam has just one hyperparameter in its weight update equation.
Statement 2: The Adam optimizer and stochastic gradient descent have the same weight update rule.
- a. Statement 1 is True and Statement 2 is False
- b. Statement 1 is False and Statement 2 is True
- c. Statement 1 is True and Statement 2 is True
- d. Statement 1 is False and Statement 2 is False
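To judge both statements it helps to see the update rules side by side, written here in one common notation (conventions vary; some texts fold a (1−β) factor into the momentum term, and hats denote Adam's bias-corrected moments):

```latex
\begin{aligned}
\text{SGD:} \quad & w_{t+1} = w_t - \eta\, g_t \\
\text{Momentum:} \quad & v_t = \beta v_{t-1} + g_t, \qquad
                         w_{t+1} = w_t - \eta\, v_t \\
\text{Adam:} \quad & m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
                     v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \\
                   & w_{t+1} = w_t - \eta\, \hat{m}_t \,/\, \big(\sqrt{\hat{v}_t} + \epsilon\big)
\end{aligned}
```

In this standard form, Momentum carries one extra hyperparameter (β) beyond the learning rate η, Adam carries three (β₁, β₂, ε), and Adam's update is clearly not the same rule as SGD's.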
Q.4. Which of the following options is true?
- a. Stochastic Gradient Descent has noisier updates
- b. In Stochastic Gradient Descent, a small batch of samples is selected randomly instead of the whole dataset for each iteration; too-large weight updates lead to faster convergence
- c. In big data applications Stochastic Gradient Descent increases the computational burden
- d. Stochastic Gradient Descent is a non-iterative process
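To see why option a holds, the sketch below (a toy linear-regression setup; all sizes, seeds, and names are illustrative) compares the exact full-batch gradient with a single-sample stochastic estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data: 1000 samples, 5 features (illustrative sizes).
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, -1.0]) + 0.1 * rng.normal(size=1000)
w = np.zeros(5)

def gradient(Xb, yb, w):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb) ** 2).
    return Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = gradient(X, y, w)                  # batch gradient: exact but costly
i = rng.integers(len(y))                       # pick one sample at random
single_grad = gradient(X[i:i+1], y[i:i+1], w)  # stochastic gradient: one sample

# The single-sample gradient is an unbiased but noisy estimate of full_grad,
# which is exactly why SGD's updates are noisier yet far cheaper per step.
print(np.linalg.norm(single_grad - full_grad))
```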
Q.5. Which of the following is a possible edge of the momentum optimizer over mini-batch gradient descent?
- a. Mini-batch gradient descent performs better than momentum optimizer when the surface of the loss function has a much more elongated curvature along the X-axis than along the Y-axis
- b. Mini-batch gradient descent always performs better than momentum optimizer
- c. Mini-batch gradient descent will always overshoot the optimum point even with a lower learning rate value
- d. Mini-batch gradient descent might oscillate on its path towards convergence, which can be reduced by the momentum optimizer
Q.6. Which of the following is true?
- a. Adam is a replacement optimization algorithm for stochastic gradient descent for training deep learning models in local minima
- b. Apart from the learning rate, Momentum optimizer has two hyperparameters whereas Adam has just one hyperparameter in its weight update equation
- c. Adam optimizer and stochastic gradient descent have the same weight update rule
- d. None of the above
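The contrast in options b and c is easiest to see in code. Below is a minimal sketch of one SGD step next to one Adam step (the default β values follow the usual published settings; everything else is illustrative):

```python
import numpy as np

def sgd_step(w, g, lr=0.01):
    # Plain SGD: step along the raw gradient.
    return w - lr * g

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam keeps running estimates of the gradient's first and second moments.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w, g = np.array([1.0, 1.0]), np.array([0.5, -0.2])
m, v = np.zeros(2), np.zeros(2)
print(sgd_step(w, g))                      # moves proportionally to g
w_adam, m, v = adam_step(w, g, m, v, t=1)
print(w_adam)                              # a visibly different update rule
```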
Q.7. Which of the following is the correct property of RMSProp optimizer?
- a. RMSProp divides the learning rate by an exponentially decaying average of squared gradients
- b. RMSProp has a constant learning rate
- c. RMSProp divides the learning rate by an exponentially increasing average of squared gradients
- d. RMSProp decays the learning rate by a constant value
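For Q.7, a minimal sketch of the RMSProp rule (standard textbook form; the parameter defaults are assumed, not taken from the course) makes option a concrete:

```python
import numpy as np

def rmsprop_step(w, g, s, lr=0.01, rho=0.9, eps=1e-8):
    # s is an exponentially DECAYING average of squared gradients:
    # it shrinks again once recent gradients shrink, unlike a running sum.
    s = rho * s + (1 - rho) * g ** 2
    # The learning rate is divided by sqrt(s), so the effective step size
    # adapts to recent gradient magnitudes instead of staying constant.
    return w - lr * g / (np.sqrt(s) + eps), s

w, s = np.array([1.0, 1.0]), np.zeros(2)
w, s = rmsprop_step(w, np.array([0.5, -0.1]), s)
print(w, s)
```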
Q.8. Why is it necessary at all to choose different learning rates for different weights?
- a. To avoid the problem of diminishing learning rate
- b. To avoid overshooting the optimum point
- c. To reduce vertical oscillations while navigating the optimum point
- d. This would help reach the optimum point faster
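A quick illustration of why per-weight learning rates help: with an RMSProp-style accumulator (the 100x gradient-scale gap below is an invented example), the weight with tiny gradients automatically receives a much larger effective step:

```python
import numpy as np

# Two weights whose gradient magnitudes differ by 100x (illustrative numbers).
g = np.array([1.0, 0.01])
s, lr, rho, eps = np.zeros(2), 0.01, 0.9, 1e-8

for _ in range(10):                  # accumulate squared gradients for 10 steps
    s = rho * s + (1 - rho) * g ** 2

effective_lr = lr / (np.sqrt(s) + eps)
print(effective_lr)  # ~[0.012, 1.24]: the small-gradient weight gets a far
                     # larger effective step, so both coordinates progress
                     # at comparable rates instead of one crawling behind
```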
Q.9. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.
Statement 1: The stochastic gradient computes the gradient using a single sample.
Statement 2: It converges much faster than batch gradient descent.
- a. Statement 1 is True and Statement 2 is False
- b. Statement 1 is False and Statement 2 is True
- c. Statement 1 is True and Statement 2 is True
- d. Statement 1 is False and Statement 2 is False
Q.10. What is the main purpose of the auxiliary classifier in GoogLeNet?
- a. To increase the number of parameters
- b. To avoid the vanishing gradient problem
- c. To increase the inference speed
- d. None of the above
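Context for Q.10: during training, GoogLeNet attaches extra softmax heads to intermediate layers and adds their (discounted) losses to the main loss, so useful gradients reach the early layers directly instead of vanishing on the way back. Here is a minimal sketch of that loss combination (the 0.3 discount follows the original paper; the function itself is an illustrative placeholder, not the actual implementation):

```python
def total_loss(main_loss, aux_losses, aux_weight=0.3):
    # Main classifier loss plus discounted auxiliary-classifier losses;
    # at inference time the auxiliary heads are simply discarded.
    return main_loss + aux_weight * sum(aux_losses)

print(total_loss(1.2, [1.5, 1.8]))  # 1.2 + 0.3 * 3.3 = 2.19
```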
Disclaimer: These answers are provided for discussion purposes only; if any answer turns out to be wrong, please don't blame us. If you have any doubts or suggestions regarding any question, kindly comment below. The solutions are provided by Chase2learn. This tutorial is intended for discussion and learning purposes only.
About NPTEL Deep Learning Course:
The availability of huge volumes of image and video data over the internet has made data analysis and interpretation a really challenging task. Deep Learning has proved itself to be a possible solution to such Computer Vision tasks. Not only in Computer Vision: Deep Learning techniques are also widely applied in Natural Language Processing tasks. In this course we will start with traditional Machine Learning approaches, e.g. Bayesian Classification, Multilayer Perceptron etc., and then move to modern Deep Learning architectures like Convolutional Neural Networks, Autoencoders etc. On completion of the course, students will have acquired the knowledge to apply Deep Learning techniques to solve various real-life problems.
Course Layout:
- Week 1: Introduction to Deep Learning, Bayesian Learning, Decision Surfaces
- Week 2: Linear Classifiers, Linear Machines with Hinge Loss
- Week 3: Optimization Techniques, Gradient Descent, Batch Optimization
- Week 4: Introduction to Neural Network, Multilayer Perceptron, Back Propagation Learning
- Week 5: Unsupervised Learning with Deep Network, Autoencoders
- Week 6: Convolutional Neural Network, Building blocks of CNN, Transfer Learning
- Week 7: Revisiting Gradient Descent, Momentum Optimizer, RMSProp, Adam
- Week 8: Effective training in Deep Net: early stopping, Dropout, Batch Normalization, Instance Normalization, Group Normalization
- Week 9: Recent Trends in Deep Learning Architectures, Residual Network, Skip Connection Network, Fully Connected CNN etc.
- Week 10: Classical Supervised Tasks with Deep Learning, Image Denoising, Semantic Segmentation, Object Detection etc.
- Week 11: LSTM Networks
- Week 12: Generative Modeling with DL, Variational Autoencoder, Generative Adversarial Network
CRITERIA TO GET A CERTIFICATE:
Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100
Final score = Average assignment score + Exam score
YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
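A hypothetical worked example of the formula above (the scores are invented):

```python
best8_avg = 80  # average of the best 8 assignment scores, out of 100 (invented)
exam = 50       # proctored exam score, out of 100 (invented)

assignment_component = 0.25 * best8_avg  # 20 out of 25
exam_component = 0.75 * exam             # 37.5 out of 75
final = assignment_component + exam_component

# Certificate needs assignment_component >= 10 AND exam_component >= 30.
eligible = assignment_component >= 10 and exam_component >= 30
print(final, eligible)  # 57.5 True
```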
If you have not registered for the exam, kindly register through https://examform.nptel.ac.in/