NPTEL Introduction To Machine Learning – IITKGP Assignment 6 Answers

NPTEL Introduction To Machine Learning – IITKGP Week 6 Assignment 6 Answer: In this article, you will find the NPTEL Introduction To Machine Learning – IITKGP Week 6 Assignment 6 answers. Use “Ctrl+F” to find any question’s answer. Mobile users: tap the three dots in your browser, choose the “Find” option, and search for any question.

For Week 7 Answers Join this Group👇

Note: We are trying to give our best, so please share this with your friends as well.

NPTEL Introduction To Machine Learning – IITKGP Week 6 Assignment 6 Answers 2022:

Q1. In training a neural network, we notice that the loss does not decrease in the first few epochs. What is the reason for this?

  • The learning Rate is low.
  • Regularization Parameter is High.
  • Stuck at the Local Minima.
  • All of these could be the reason.
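The first option is worth seeing concretely. The sketch below is our own illustration (not from the course): plain gradient descent on f(w) = w², where a tiny learning rate leaves the loss nearly flat for the first few steps while a reasonable one makes it drop quickly.

```python
def train(lr, steps, w=10.0):
    """Run plain gradient descent on f(w) = w**2 and return the loss history."""
    losses = []
    for _ in range(steps):
        losses.append(w * w)   # current loss
        w -= lr * 2 * w        # gradient of w**2 is 2w
    return losses

low = train(lr=1e-6, steps=5)  # loss barely changes from epoch to epoch
ok = train(lr=0.1, steps=5)    # loss drops quickly
print(low[0] - low[-1])        # tiny improvement
print(ok[0] - ok[-1])          # large improvement
```

A high regularization parameter or a bad local minimum can produce the same flat-loss symptom, which is why "All of these" is the safest reading of the question.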

Q.2. What is the sequence of the following tasks in a perceptron?

I) Initialize the weights of the perceptron randomly.
II) Go to the next batch of the data set.
III) If the prediction does not match the output, change the weights.
IV) For a sample input, compute an output.

  • I, II, III, IV
  • IV, III, II, I
  • III, I, II, IV
  • I, IV, III, II
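The order I → IV → III → II can be sketched as a tiny training loop. This is our own illustration (the function name and the AND data set are assumptions, not from the course):

```python
import random

def perceptron_train(data, epochs=25, lr=1.0):
    """Perceptron training in the order from Q2:
    I) initialize weights randomly, then per sample:
    IV) compute an output, III) update weights on a mismatch,
    II) move on to the next sample."""
    random.seed(0)
    n = len(data[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # step I
    b = 0.0
    for _ in range(epochs):
        for x, target in data:                          # step II: next sample
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0  # step IV
            if out != target:                           # step III
                delta = lr * (target - out)
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

# Learn the linearly separable AND function.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data)
```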


Q.3. Suppose you have inputs x, y, and z with values -2, 5, and -4 respectively. You have a neuron ‘q’ and a neuron ‘f’ with the functions:

q = x + y
f = q * z

(The graphical representation of the functions accompanies the original question.) What is the gradient of f with respect to x, y, and z?
  • (-3, 4, 4)
  • (4, 4, 3)
  • (-4, -4, 3)
  • (3,-4,4)
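Working through the chain rule for q = x + y and f = q * z (a quick sketch of our own, not course code): ∂f/∂x = ∂f/∂y = z and ∂f/∂z = q, which at the given values is (-4, -4, 3).

```python
def grads(x, y, z):
    """Backprop through q = x + y, f = q * z by the chain rule."""
    q = x + y         # forward pass
    # backward pass: df/dq = z, dq/dx = dq/dy = 1, df/dz = q
    return (z, z, q)  # (df/dx, df/dy, df/dz)

print(grads(-2, 5, -4))  # (-4, -4, 3)
```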

Q.4. A neural network can be considered as multiple simple equations stacked together. Suppose we want to replicate the function for the decision boundary shown below (the figure accompanies the original question).

What will be the final equation?

  • (h1 AND NOT h2) OR (NOT h1 AND h2)
  • (h1 OR NOT h2) AND (NOT h1 OR h2)
  • (h1 AND h2) OR (h1 OR h2)
  • None of these.
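The figure is not reproduced here, but the first option, (h1 AND NOT h2) OR (NOT h1 AND h2), is simply XOR of the two hidden units — the classic construction for a two-region, non-linearly-separable boundary. A minimal sketch (our own) to verify the truth table:

```python
def h_combo(h1, h2):
    """(h1 AND NOT h2) OR (NOT h1 AND h2) -- the first option in Q4."""
    return (h1 and not h2) or (not h1 and h2)

# The truth table matches XOR of the two hidden units.
for h1 in (False, True):
    for h2 in (False, True):
        print(h1, h2, h_combo(h1, h2))
```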

Q.5. Which of the following is true about model capacity (where model capacity means the ability of neural network to approximate complex functions)?

  • As number of hidden layers increase, model capacity increases
  • As dropout ratio increases, model capacity increases
  • As learning rate increases, model capacity increases
  • None of these.

Q.6. First-order gradient descent would not work correctly (i.e., may get stuck) in which of the following graphs? (The graphs accompany the original question.)

  • Answer: B

Q.7. Which of the following is true? Single layer associative neural networks do not have the ability to

I) Perform pattern recognition
II) Find the parity of a picture
III) Determine whether two or more shapes in a picture are connected or not
  • II and III are true
  • II is true
  • All of the above
  • None of the above.


Q.8. The network that involves backward links from outputs to the inputs and hidden layers is called as

  • Self-organizing Maps
  • Perceptron
  • Recurrent Neural Networks
  • Multi-Layered Perceptron

Q.9. Intersection of linear hyperplanes in a three-layer network can produce both convex and non-convex surfaces. Is the statement true?

  • Yes
  • No

Q.10. What is meant by the statement “Backpropagation is a generalized delta rule”?

  • Because backpropagation can be extended to hidden layer units
  • Because delta is applied only to the input and output layers, thus making it more generalized.
  • It has no significance
  • None of the above.
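To see why backpropagation is called a generalized delta rule, here is a minimal sketch (our own illustration, not course code) in which the output-layer delta is pushed back through the chain rule to a hidden weight, then checked against a numerical gradient:

```python
import math

def forward(w1, w2, x):
    """One input -> one sigmoid hidden unit -> one linear output unit."""
    h = 1 / (1 + math.exp(-w1 * x))  # hidden activation
    return h, w2 * h                 # (hidden value, network output)

def backprop(w1, w2, x, t):
    """Generalized delta rule: the output delta (y - t) is propagated
    back through w2 and the sigmoid derivative to the hidden weight w1."""
    h, y = forward(w1, w2, x)
    delta_out = y - t                            # dE/dy for E = 0.5*(y - t)**2
    delta_hidden = delta_out * w2 * h * (1 - h)  # delta at the hidden unit
    return delta_hidden * x, delta_out * h       # (dE/dw1, dE/dw2)

# Sanity check: backprop gradients match numerical gradients.
w1, w2, x, t = 0.5, -0.3, 1.2, 1.0
g1, g2 = backprop(w1, w2, x, t)

def loss(a, b):
    return 0.5 * (forward(a, b, x)[1] - t) ** 2

eps = 1e-6
num1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
num2 = (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps)
```

The key step is `delta_hidden`: the plain delta rule only computes `delta_out` at the output layer, while backpropagation extends the same idea to hidden units — which is why the first option is the right reading.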


These answers are provided for discussion purposes only; if any answer turns out to be wrong, please don’t blame us. If you have any doubts or suggestions regarding any question, kindly comment. The solution is provided by Chase2learn. This tutorial is only for discussion and learning purposes.

About NPTEL Introduction To Machine Learning – IITKGP Course:

This course provides a concise introduction to the fundamental concepts in machine learning and popular machine learning algorithms. We will cover the standard and most popular supervised learning algorithms including linear regression, logistic regression, decision trees, k-nearest neighbour, an introduction to Bayesian learning and the naïve Bayes algorithm, support vector machines and kernels, and neural networks with an introduction to Deep Learning. If you have not registered for the exam, kindly register through the official NPTEL portal.

COURSE LAYOUT

The course structure and content covers, over a period of 8 weeks:
  • Week 1: Introduction: Basic definitions, types of learning, hypothesis space and inductive bias, evaluation, cross-validation
  • Week 2: Linear regression, Decision trees, overfitting
  • Week 3: Instance based learning, Feature reduction, Collaborative filtering based recommendation
  • Week 4: Probability and Bayes learning
  • Week 5: Logistic Regression, Support Vector Machine, Kernel function and Kernel SVM
  • Week 6: Neural network: Perceptron, multilayer network, backpropagation, introduction to deep neural network
  • Week 7: Computational learning theory, PAC learning model, Sample complexity, VC Dimension, Ensemble learning
  • Week 8: Clustering: k-means, adaptive hierarchical clustering, Gaussian mixture model
Sharing Is Caring
