# NPTEL Deep Learning Assignment 10 Answers 2023

Hello NPTEL learners! In this article, you will find NPTEL Deep Learning Assignment 10 Answers 2023. All the answers are provided below to help students as a reference. Don't look straight away for the solutions; first try to solve the questions yourself. If you face any difficulty, then look at the solutions.


Note: We are trying our best, so please share this with your friends as well.

## NPTEL Deep Learning Assignment 10 Answers 2023:

#### Q.1. In case of Group Normalization, if group number=1, Group Normalization behaves like

• a. Batch Normalization
• b. Layer Normalization
• c. Instance Normalization
• d. None of the above

#### Q.2. In case of Group Normalization, if group number=number of channels, Group Normalization behaves like

• a. Batch Normalization
• b. Layer Normalization
• c. Instance Normalization
• d. None of the above

#### Q.3. When should you apply early stopping?

• a. Minimum training loss point
• b. Minimum validation loss point
• c. Minimum test loss point
• d. None of these
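Early stopping halts training at the minimum-validation-loss point (option b), since validation loss rising while training loss still falls signals overfitting. A tiny sketch of patience-based early stopping; the loss curve and the patience value are made-up illustrations:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch whose model we keep: the minimum-validation-loss
    epoch. Training stops once the loss has failed to improve for
    `patience` consecutive epochs."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # validation loss is rising: stop training
    return best_epoch

# hypothetical validation-loss curve: falls, then rises (overfitting)
val = [0.9, 0.7, 0.5, 0.45, 0.5, 0.6, 0.7]
```

Running `early_stop_epoch(val)` picks epoch 3, where validation loss bottoms out at 0.45.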

#### Q.4. What is the use of the learnable parameters in a batch-normalization layer?

• a. Calculate mean and variances
• b. Perform normalization
• c. Renormalize the activations
• d. No learnable parameter is present
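The learnable parameters (often written gamma and beta) renormalize the standardized activations (option c), letting the network restore any scale and mean it needs. A minimal one-feature sketch, assuming a plain 1-D batch; the function name is our own:

```python
import math

def batch_norm_1d(batch, gamma, beta, eps=1e-5):
    """Standardize a batch of scalars to zero mean / unit variance,
    then renormalize with the learnable scale (gamma) and shift (beta)."""
    m = sum(batch) / len(batch)
    var = sum((v - m) ** 2 for v in batch) / len(batch)
    return [gamma * (v - m) / math.sqrt(var + eps) + beta for v in batch]

# with gamma=1, beta=0 the output is just the standardized batch;
# learned gamma/beta let the layer undo the normalization if that helps
out = batch_norm_1d([1.0, 2.0, 3.0, 4.0], gamma=2.0, beta=5.0)
```

The output has mean beta and standard deviation (approximately) gamma, which is also why the answer to Q.7 below is "can't say".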

#### Q.5. Which one of the following is not a procedure to prevent overfitting?

• a. Reduce feature size
• b. Use dropout
• c. Use Early stopping
• d. Increase training iterations

#### Q.6.

• a. 2,501
• b. 5,002
• c. 10,004
• d. 20,008

#### Q.7. Suppose you have used a batch-normalization layer after a convolution block, and you then train the model on a standard dataset. Will the extracted feature distribution after the batch-normalization layer have zero mean and unit variance for any input image?

• a. Yes. Because batch-normalization normalizes the features into zero mean and unit variance
• b. No. It is not possible to normalize the features into zero mean and unit variance
• c. Can't say. Because batch normalization renormalizes the features using trainable parameters, after training the output may or may not have zero mean and unit variance.
• d. None of the above

#### Q.8. Which one of the following regularization methods induces sparsity among the trained weights?

• a. L₁ regularizer
• b. L₂ regularizer
• c. Both L₁ & L₂
• d. None of the above
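The L₁ regularizer induces sparsity because its proximal (soft-thresholding) update snaps small weights to exactly zero, whereas L₂ only shrinks them. A small sketch of one soft-thresholding step; the weight vector and step sizes are made-up illustrations:

```python
def l1_prox_step(w, lr, lam):
    """One proximal gradient step for the L1 penalty lam * |w|:
    any weight with magnitude at most lr*lam becomes exactly zero,
    and larger weights shrink toward zero by lr*lam."""
    t = lr * lam  # the soft-threshold
    return [0.0 if abs(v) <= t else (v - t if v > 0 else v + t) for v in w]

w = [0.03, -0.5, 0.002, 1.2, -0.04]
# with lr=0.1 and lam=0.5 the threshold is 0.05, so the three small
# weights are zeroed out in a single step
out = l1_prox_step(w, lr=0.1, lam=0.5)
```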

#### Q.9. Which one of the following is not an advantage of dropout?

• a. Regularization
• b. Prevent Overfitting
• c. Improve Accuracy
• d. Reduce computational cost during testing
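Dropout regularizes and combats overfitting during training, but it does not reduce test-time computation: with the standard "inverted" formulation, the layer is simply an identity at test time. A minimal sketch, with made-up inputs:

```python
import random

def dropout_train(x, p=0.5, rng=random):
    """Inverted dropout at training time: zero each unit with
    probability p and scale survivors by 1/(1-p), so the expected
    activation is unchanged."""
    return [0.0 if rng.random() < p else v / (1 - p) for v in x]

def dropout_test(x):
    # at test time dropout is a no-op: same computation as without it
    return x
```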

#### Q.10. A Batch-Normalization layer takes an input $x \in \mathbb{R}^{N \times C \times W \times H}$. The batch mean is computed as $\mu_c = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} x_{icjk}$ and the batch variance as $\sigma_c^2 = \frac{1}{NWH} \sum_{i=1}^{N} \sum_{j=1}^{W} \sum_{k=1}^{H} (x_{icjk} - \mu_c)^2$. After normalization, $\hat{x} = \frac{x - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}}$. What is the purpose of $\epsilon$ in this expression?

• a. There is no such purpose
• b. It helps to converge faster
• c. It is decay rate in normalization
• d. It prevents division by zero for inputs with zero variance
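Option d is easy to check numerically: a constant batch has zero variance, so without $\epsilon$ the normalization would divide by zero. A minimal sketch with a made-up constant batch:

```python
import math

def normalize(batch, eps=1e-5):
    """Standardize a batch; eps keeps the denominator positive even
    when the batch variance is exactly zero."""
    m = sum(batch) / len(batch)
    var = sum((v - m) ** 2 for v in batch) / len(batch)
    return [(v - m) / math.sqrt(var + eps) for v in batch]

# a constant batch has zero variance; with eps the result is simply zeros
constant = [3.0, 3.0, 3.0]
```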

Disclaimer: These answers are provided by us only for discussion purposes; if any answer turns out to be wrong, don't blame us. If you have any doubt or suggestion regarding any question, kindly comment. The solutions are provided by Chase2learn. This tutorial is only for discussion and learning purposes.

#### About NPTEL Deep Learning Course:

The availability of huge volumes of image and video data over the internet has made data analysis and interpretation a really challenging task. Deep Learning has proved itself to be a possible solution to such Computer Vision tasks. Deep Learning techniques are also widely applied in Natural Language Processing tasks. In this course we will start with traditional Machine Learning approaches, e.g. Bayesian Classification, Multilayer Perceptron etc., and then move to modern Deep Learning architectures like Convolutional Neural Networks, Autoencoders etc. On completing the course, students will acquire the knowledge to apply Deep Learning techniques to solve various real-life problems.

#### Course Layout:

• Week 1: Introduction to Deep Learning, Bayesian Learning, Decision Surfaces
• Week 2: Linear Classifiers, Linear Machines with Hinge Loss
• Week 3: Optimization Techniques, Gradient Descent, Batch Optimization
• Week 4: Introduction to Neural Network, Multilayer Perceptron, Back Propagation Learning
• Week 5: Unsupervised Learning with Deep Network, Autoencoders
• Week 6: Convolutional Neural Network, Building blocks of CNN, Transfer Learning
• Week 8: Effective training in Deep Nets: early stopping, Dropout, Batch Normalization, Instance Normalization, Group Normalization
• Week 9: Recent Trends in Deep Learning Architectures, Residual Network, Skip Connection Network, Fully Connected CNN etc.
• Week 10: Classical Supervised Tasks with Deep Learning, Image Denoising, Semantic Segmentation, Object Detection etc.
• Week 11: LSTM Networks
• Week 12: Generative Modeling with DL, Variational Autoencoder, Generative Adversarial Network, Revisiting Gradient Descent, Momentum Optimizer, RMSProp, Adam
###### CRITERIA TO GET A CERTIFICATE:

Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.

If you have not registered for the exam, kindly register through https://examform.nptel.ac.in/