Hello NPTEL learners! In this article, you will find the NPTEL Deep Learning Assignment 4 (Week 4) Answers 2023. The answers below are provided as a reference for students. Don't jump straight to the solutions — first try to solve the questions yourself, and look at the solutions only if you find any difficulty.
NPTEL Deep Learning Assignment 4 Answers 2023 Join Group👇
Note: We are trying to give our best so please share with your friends also.

NPTEL Deep Learning Assignment 4 Answers 2023:
We are updating answers soon Join Group for update: CLICK HERE
Q.1. A given cost function is of the form J(θ) = 2θ² − 4θ + 2. What is the weight update rule for gradient descent optimization at step t+1? Consider α = 0.01 to be the learning rate.
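For this cost, dJ/dθ = 4θ − 4, so the gradient descent update is θ(t+1) = θ(t) − α(4θ(t) − 4). A minimal sketch of iterating this rule (the starting point θ = 0 is an arbitrary choice for illustration):

```python
def grad(theta):
    # dJ/dtheta for J(theta) = 2*theta^2 - 4*theta + 2
    return 4 * theta - 4

alpha = 0.01   # learning rate given in the question
theta = 0.0    # arbitrary starting point
for _ in range(2000):
    # theta_{t+1} = theta_t - alpha * (4*theta_t - 4)
    theta = theta - alpha * grad(theta)

print(round(theta, 4))  # converges to the minimum at theta = 1
```

Because J is convex, the iterates approach the unique minimizer θ = 1 regardless of the starting point.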
Q.2. Which of the following activation function leads to sparse activation maps?
- a. Sigmoid
- b. Tanh
- c. Linear
- d. ReLU
Q.3. If yᵢ is the ground truth label for the ith training sample and pᵢ is the predicted probability, which of the following is a feasible loss function for training a neural net for binary classification? C is the number of training samples, and each sum runs over i = 1 to C.
- a. −Σ yᵢ log(pᵢ) − Σ (1 − yᵢ) log(1 − pᵢ)
- b. −Σ yᵢ log(pᵢ) − Σ yᵢ log(1 − pᵢ)
- c. Σ yᵢ log(1 − pᵢ) − Σ (1 − yᵢ) log(pᵢ)
- d. Σ (1 − yᵢ) log(pᵢ) − Σ yᵢ log(1 − pᵢ)
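Option (a) matches the standard binary cross-entropy loss. A minimal sketch of computing it, using illustrative labels and predicted probabilities:

```python
import math

def binary_cross_entropy(y, p):
    # -sum over i of [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p))

y = [1, 0, 1]          # ground-truth labels (illustrative)
p = [0.9, 0.2, 0.8]    # predicted probabilities (illustrative)
print(binary_cross_entropy(y, p))
```

The loss is small when confident predictions agree with the labels and grows without bound as a confident prediction contradicts its label.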
Q.4. Which logic function cannot be performed using a single-layer Neural Network?
- a. AND
- b. OR
- c. XOR
- d. All
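AND and OR are linearly separable, so a single threshold unit can compute them; XOR is not, so no single-layer network can. A sketch with hand-picked weights (the particular values of w₁, w₂, b are illustrative choices):

```python
def perceptron(x1, x2, w1, w2, b):
    # single-layer unit with a step activation
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND and OR are linearly separable, so one unit suffices:
AND = [perceptron(x1, x2, 1, 1, -1.5) for x1, x2 in inputs]
OR  = [perceptron(x1, x2, 1, 1, -0.5) for x1, x2 in inputs]
print(AND)  # [0, 0, 0, 1]
print(OR)   # [0, 1, 1, 1]
# No choice of (w1, w2, b) yields XOR = [0, 1, 1, 0]: its positive and
# negative examples cannot be separated by a single straight line.
```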
Q.5. Which of the following options closely relates to the following graph? Green crosses are the samples of Class-A, mustard rings are the samples of Class-B, and the red line is the separating line between the two classes.
Q.6. Which of the following statements is true?
- a. L2 regularization leads to sparse activation maps
- b. L1 regularization leads to sparse activation maps
- c. Some of the weights are squashed to zero in L2 regularization
- d. L2 regularization is also known as Lasso
Q.7. Which among the following options gives the range of the tanh function?
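tanh maps every real input into the interval (−1, 1), approaching the bounds only asymptotically for large |x|. A quick check:

```python
import math

# tanh stays within (-1, 1) for any real input
values = [math.tanh(x) for x in (-100, -2, 0, 2, 100)]
print([round(v, 4) for v in values])
```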
Q.8. Consider the neural network shown in the figure, with inputs x₁, x₂ and output Y. The inputs take the values x₁, x₂ ∈ {0, 1}. The logical operation performed by the network is
- a. AND
- b. OR
- c. XOR
- d. NOR
Q.9. When is the gradient descent algorithm certain to find the global minimum?
- a. For convex cost plot
- b. For concave cost plot
- c. For union of 2 convex cost plot
- d. For union of 2 concave cost plot
Q.10. Let X = [-1, 0, 3, 5] be the input of the ith layer of a neural network, to which we want to apply the softmax function. What should be the output?
- a. [0.368, 1, 20.09, 148.41]
- b. [0.002, 0.006, 0.118,0.874]
- c. [0.3, 0.05,0.6,0.05]
- d. [0.04,0,0.06,0.9]
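Softmax exponentiates each input and normalizes by the sum, so the outputs are all positive and sum to 1. Applying it to X = [-1, 0, 3, 5]:

```python
import math

def softmax(xs):
    # e^x_i / sum_j e^x_j
    exps = [math.exp(x) for x in xs]   # e^-1, e^0, e^3, e^5
    total = sum(exps)
    return [e / total for e in exps]

out = softmax([-1, 0, 3, 5])
print([round(v, 3) for v in out])  # [0.002, 0.006, 0.118, 0.874]
```

Note that the largest input dominates the output, and the probabilities sum to 1 — which immediately rules out any option whose entries do not.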
Disclaimer: These answers are provided by us only for discussion purposes; if any answer turns out to be wrong, please don't blame us. If you have any doubts or suggestions regarding any question, kindly comment. The solutions are provided by Chase2learn. This tutorial is only for discussion and learning purposes.
About NPTEL Deep Learning Course:
The availability of a huge volume of image and video data over the internet has made data analysis and interpretation a really challenging task. Deep Learning has proved itself to be a possible solution to such Computer Vision tasks. Beyond Computer Vision, Deep Learning techniques are also widely applied to Natural Language Processing tasks. In this course we will start with traditional Machine Learning approaches, e.g. Bayesian Classification and the Multilayer Perceptron, and then move to modern Deep Learning architectures such as Convolutional Neural Networks and Autoencoders. On completion of the course, students will be able to apply Deep Learning techniques to solve various real-life problems.
Course Layout:
- Week 1: Introduction to Deep Learning, Bayesian Learning, Decision Surfaces
- Week 2: Linear Classifiers, Linear Machines with Hinge Loss
- Week 3: Optimization Techniques, Gradient Descent, Batch Optimization
- Week 4: Introduction to Neural Network, Multilayer Perceptron, Back Propagation Learning
- Week 5: Unsupervised Learning with Deep Network, Autoencoders
- Week 6: Convolutional Neural Network, Building blocks of CNN, Transfer Learning
- Week 7: Revisiting Gradient Descent, Momentum Optimizer, RMSProp, Adam
- Week 8: Effective training in Deep Net- early stopping, Dropout, Batch Normalization, Instance Normalization, Group Normalization
- Week 9: Recent Trends in Deep Learning Architectures, Residual Network, Skip Connection Network, Fully Connected CNN etc.
- Week 10: Classical Supervised Tasks with Deep Learning, Image Denoising, Semantic Segmentation, Object Detection etc.
- Week 11: LSTM Networks
- Week 12: Generative Modeling with DL, Variational Autoencoder, Generative Adversarial Network
CRITERIA TO GET A CERTIFICATE:
Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100
Final score = Average assignment score + Exam score
YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
If you have not registered for the exam, kindly register through https://examform.nptel.ac.in/
Join Our Telegram Group:- CLICK HERE