NPTEL Deep Learning Assignment 11 Answers 2023

Hello NPTEL Learners, in this article you will find NPTEL Deep Learning Assignment 11 Answers 2023. All the answers are provided below as a reference to help students. Don't look straight away at the solutions; first try to solve the questions yourself, and if you find any difficulty, then look at the solutions.


Note: We are trying to give our best, so please share this with your friends as well.


NPTEL Deep Learning Assignment 11 Answers 2023:

We are updating the answers soon. Join the group for updates: CLICK HERE

Q.1. This question has Statement 1 and Statement 2. Of the choices given after the statements, choose the one that best describes the two statements.

Statement 1: In a max-unpooling layer, the features are placed at the positions from which the max-pooled features were originally taken.

Statement 2: In a max-unpooling layer, features are placed at the top-left corner at the time of upsampling.

  • a. Statement 1 is True and Statement 2 is False
  • b. Statement 1 is False and Statement 2 is True
  • c. Statement 1 is True and Statement 2 is True
  • d. Statement 1 is False and Statement 2 is False
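
For intuition, here is a minimal sketch (PyTorch is assumed; the assignment does not prescribe a framework) showing that max-unpooling places each value back at the position recorded during pooling, rather than at the top-left corner:

```python
# Minimal sketch: MaxUnpool2d restores values to the positions recorded by
# MaxPool2d (return_indices=True), zero-filling everywhere else.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])        # hypothetical 1x1x2x2 feature map
pooled, indices = pool(x)                # pooled = [[4.]], plus the index of the max
restored = unpool(pooled, indices)       # 4 goes back to its original position
print(restored)
# tensor([[[[0., 0.],
#           [0., 4.]]]])  -> placed where the max came from, not at the top-left
```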

Q.2. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.

Statement 1: In the context of triplet loss for face recognition, a hard negative example is one whose embedding is far from the anchor and that belongs to the same person as the anchor.

Statement 2: In the context of triplet loss for face recognition, a hard negative example is one whose embedding is near the anchor but that belongs to a different person than the anchor.

  • a. Statement 1 is True and Statement 2 is False
  • b. Statement 1 is False and Statement 2 is True
  • c. Statement 1 is True and Statement 2 is True
  • d. Statement 1 is False and Statement 2 is False
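
To make the terminology concrete, below is a minimal triplet-loss sketch (PyTorch assumed, with hypothetical random embeddings). A negative is "hard" when its embedding lies near the anchor even though it belongs to a different person:

```python
# Minimal sketch of triplet loss: loss = max(d(a,p) - d(a,n) + margin, 0).
import torch
import torch.nn.functional as F

anchor   = torch.randn(1, 128)   # embedding of a face of person A (hypothetical)
positive = torch.randn(1, 128)   # another face of person A
negative = torch.randn(1, 128)   # a face of a different person B

margin = 0.2
d_ap = F.pairwise_distance(anchor, positive)
d_an = F.pairwise_distance(anchor, negative)
loss = torch.clamp(d_ap - d_an + margin, min=0).mean()

# The negative is "hard" when d_an is small (it sits close to the anchor)
# despite belonging to a different identity.
print(loss)
```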

Q.3. How do we create the target for Semantic Segmentation?

  • a. By one-hot encoding the class labels, essentially creating an output channel for each of the possible classes
  • b. By labelling the pixels of each class with a different colour
  • c. By creating a list of all possible classes, each holding the indices of the pixels belonging to that particular class
  • d. None of the above
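
As a concrete illustration of option (a), here is a minimal sketch (PyTorch assumed, with a hypothetical 2×2 label map) that one-hot encodes class labels into one output channel per class:

```python
# Minimal sketch: build a one-hot segmentation target with one channel per class.
import torch
import torch.nn.functional as F

num_classes = 3
label_map = torch.tensor([[0, 1],
                          [2, 1]])                  # H x W integer class ids (hypothetical)
one_hot = F.one_hot(label_map, num_classes)         # H x W x C
one_hot = one_hot.permute(2, 0, 1).float()          # C x H x W, one channel per class
print(one_hot.shape)   # torch.Size([3, 2, 2])
```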

Q.4. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.

Statement 1: Pixel-wise cross-entropy loss can be used as a cost function for Semantic Segmentation.

Statement 2: KL Divergence loss can be used as a cost function for Instance Segmentation.

  • a. Statement 1 is True and Statement 2 is False
  • b. Statement 1 is False and Statement 2 is True
  • c. Statement 1 is True and Statement 2 is True
  • d. Statement 1 is False and Statement 2 is False

Q.5. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.

Statement 1: Instance segmentation treats multiple objects of the same class as a single entity.

Statement 2: Semantic segmentation treats multiple objects of the same class as distinct individual objects.

  • a. Statement 1 is True and Statement 2 is False
  • b. Statement 1 is False and Statement 2 is True
  • c. Statement 1 is True and Statement 2 is True
  • d. Statement 1 is False and Statement 2 is False

Q.6. This question has Statement 1 and Statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.

Statement 1: Semantic Segmentation can be considered a pixel-wise classification problem.

Statement 2: The semantic segmentation output has the same spatial dimensions as the input image.

  • a. Statement 1 is True and Statement 2 is False
  • b. Statement 1 is False and Statement 2 is True
  • c. Statement 1 is True and Statement 2 is True
  • d. Statement 1 is False and Statement 2 is False

Q.7. Which of the following can be used as cost function for Semantic Segmentation?

  • a. Pixel wise cross entropy Loss
  • b. Mean Square Loss
  • c. KL Divergence Loss
  • d. None of the above
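
For reference, pixel-wise cross-entropy treats every pixel as an independent classification. A minimal sketch (PyTorch assumed, with hypothetical shapes and random logits):

```python
# Minimal sketch: cross-entropy applied per pixel over an N x C x H x W prediction.
import torch
import torch.nn as nn

batch, num_classes, H, W = 2, 5, 16, 16
logits = torch.randn(batch, num_classes, H, W)          # raw per-pixel class scores
target = torch.randint(0, num_classes, (batch, H, W))   # per-pixel ground-truth labels

criterion = nn.CrossEntropyLoss()                       # averages the loss over all pixels
loss = criterion(logits, target)
print(loss.item())
```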

Q.8. Which of the following operations does not increase the spatial resolution of features?

  • a. Max-Unpooling
  • b. Deconvolution
  • c. Pixel-shuffle
  • d. Average-Pooling
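
A quick way to check the options is to run each operation on a small tensor and compare the output sizes. A minimal sketch (PyTorch assumed, hypothetical 8×8 input; max-unpooling was already shown above):

```python
# Minimal sketch: deconvolution and pixel-shuffle enlarge the spatial size,
# while average pooling shrinks it.
import torch
import torch.nn as nn

x = torch.randn(1, 4, 8, 8)                                 # N x C x H x W

deconv  = nn.ConvTranspose2d(4, 4, kernel_size=2, stride=2)
shuffle = nn.PixelShuffle(upscale_factor=2)                 # 4 channels -> 1 channel, 2x size
avgpool = nn.AvgPool2d(kernel_size=2)

print(deconv(x).shape)    # torch.Size([1, 4, 16, 16])  -- upsampled
print(shuffle(x).shape)   # torch.Size([1, 1, 16, 16])  -- upsampled
print(avgpool(x).shape)   # torch.Size([1, 4, 4, 4])    -- downsampled
```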

Q.9. In a Deep CNN architecture, the feature map before applying a max-pooling layer with a (2×2) kernel is given below.

After a few successive convolution layers, the feature map is again up-sampled using Max-Unpooling. If the following feature map is present before the Max-Unpooling layer, what will be the output of the Max-Unpooling layer?

  • Answer:

Q.10. Which of the following operations does not reduce the spatial dimension of features?

  • a. Max-Pooling
  • b. Convolution with 3 x 3 Kernel, Stride=2, Padding all sides = 1
  • c. Convolution with 3 x 3 Kernel, Stride=1, Padding all sides = 1
  • d. Average-Pooling
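
The convolution options can be checked with the output-size formula out = floor((H + 2P - K) / S) + 1. A minimal sketch with a hypothetical 8×8 input:

```python
# Quick check of the output-size formula for the two convolution options.
def conv_out(h, k, s, p):
    return (h + 2 * p - k) // s + 1

H = 8
print(conv_out(H, k=3, s=2, p=1))   # 4 -> stride 2 halves the size (reduces dimension)
print(conv_out(H, k=3, s=1, p=1))   # 8 -> stride 1 with padding 1 preserves the size
```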

Disclaimer: These answers are provided only for discussion purposes; if any answer turns out to be wrong, please don't blame us. If you have any doubts or suggestions regarding any question, kindly comment. The solutions are provided by Chase2learn. This tutorial is only for discussion and learning purposes.

About NPTEL Deep Learning Course:

The availability of huge volumes of image and video data over the internet has made data analysis and interpretation a really challenging task. Deep Learning has proved itself to be a possible solution to such Computer Vision tasks. Not only in Computer Vision, Deep Learning techniques are also widely applied in Natural Language Processing tasks. In this course we will start with traditional Machine Learning approaches, e.g. Bayesian Classification, Multilayer Perceptron etc., and then move to modern Deep Learning architectures like Convolutional Neural Networks, Autoencoders etc. On completion of the course, students will acquire the knowledge to apply Deep Learning techniques to solve various real-life problems.

Course Layout:

  • Week 1:  Introduction to Deep Learning, Bayesian Learning, Decision Surfaces
  • Week 2:  Linear Classifiers, Linear Machines with Hinge Loss
  • Week 3:  Optimization Techniques, Gradient Descent, Batch Optimization
  • Week 4:  Introduction to Neural Network, Multilayer Perceptron, Back Propagation Learning
  • Week 5:  Unsupervised Learning with Deep Network, Autoencoders
  • Week 6:  Convolutional Neural Network, Building blocks of CNN, Transfer Learning
  • Week 7:  Revisiting Gradient Descent, Momentum Optimizer, RMSProp, Adam
  • Week 8:  Effective training in Deep Net- early stopping, Dropout, Batch Normalization, Instance Normalization, Group Normalization
  • Week 9:  Recent Trends in Deep Learning Architectures, Residual Network, Skip Connection Network, Fully Connected CNN etc.
  • Week 10: Classical Supervised Tasks with Deep Learning, Image Denoising, Semantic Segmentation, Object Detection etc.
  • Week 11: LSTM Networks
  • Week 12: Generative Modeling with DL, Variational Autoencoder, Generative Adversarial Network
CRITERIA TO GET A CERTIFICATE:

Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
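
A minimal sketch of how these criteria combine into the final score; the assignment and exam marks below are hypothetical:

```python
# Minimal sketch: best 8 of 12 assignments scaled to 25, exam scaled to 75.
assignment_scores = [90, 85, 80, 95, 70, 88, 92, 75, 60, 81, 77, 0]   # out of 100 each (hypothetical)
exam_score = 60                                                        # out of 100 (hypothetical)

best8 = sorted(assignment_scores, reverse=True)[:8]
avg_assignment = 0.25 * (sum(best8) / 8)    # scaled to 25
weighted_exam = 0.75 * exam_score           # scaled to 75
final = avg_assignment + weighted_exam

eligible = avg_assignment >= 10 and weighted_exam >= 30
print(round(final, 2), eligible)
```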

If you have not registered for the exam, kindly register through https://examform.nptel.ac.in/

Join Our Telegram Group:- CLICK HERE

Sharing Is Caring