Hello NPTEL learners! In this article, you will find NPTEL Introduction to Machine Learning Assignment 2 (Week 2) Answers 2023. All the answers are provided below as a reference for students. Don't look at the solutions straight away; first try to solve the questions yourself, and consult the solutions only if you run into difficulty.
NPTEL Introduction to Machine Learning Assignment 2 Answers 2023 Join Group👇
Note: We are trying to give our best, so please share with your friends as well.
| Assignment | Link |
| --- | --- |
| NPTEL Introduction to Machine Learning Assignment 1 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 2 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 3 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 4 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 5 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 6 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 7 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 8 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 9 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 10 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 11 Answers | Click Here |
| NPTEL Introduction to Machine Learning Assignment 12 Answers | Click Here |
NPTEL Introduction to Machine Learning Assignment 2 Answers 2023:
We are updating answers soon. Join the group for updates: CLICK HERE
Q.1. Given a training data set of 10,000 instances, with each input instance having 17 dimensions and each output instance having 2 dimensions, the dimensions of the design matrix used in applying linear regression to this data is
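As a quick sanity check, the design matrix stacks one row per training instance, with a leading column of ones for the intercept (assuming the bias term is folded into the matrix, as is conventional in linear regression). A minimal NumPy sketch with synthetic data matching the question's dimensions:

```python
import numpy as np

# Synthetic stand-in for the question's data: 10,000 instances,
# 17 input dimensions, 2 output dimensions.
n, p, k = 10_000, 17, 2
X_raw = np.random.randn(n, p)
Y = np.random.randn(n, k)

# The design matrix prepends a column of ones for the intercept,
# giving shape (10000, 18).
X = np.hstack([np.ones((n, 1)), X_raw])
print(X.shape)  # (10000, 18)
```

Whether the answer is 10000 × 17 or 10000 × 18 depends on whether the intercept column is included; with the bias absorbed into the matrix, it is 10000 × 18.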
Q.2. Suppose we want to add a regularizer to the linear regression loss function, to control the magnitudes of the weights β. We have a choice between Ω1(β) = Σ_{i=1}^{p} |β_i| and Ω2(β) = Σ_{i=1}^{p} β_i². Which one is more likely to result in sparse weights?
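The intuition behind why the L1 penalty produces sparsity can be seen from the univariate closed-form solutions of the two penalized problems (for an orthonormal input): the L2 penalty shrinks a coefficient proportionally and never reaches zero, while the L1 penalty subtracts a constant and clips at zero. A small sketch (the coefficient 0.3 and penalty strength 1.0 are arbitrary illustrative values):

```python
import numpy as np

# Univariate closed forms, lam = regularization strength:
#   L2 (ridge) shrinks the OLS coefficient proportionally: beta / (1 + lam)
#   L1 (lasso) soft-thresholds it: sign(beta) * max(|beta| - lam, 0)
def ridge_shrink(beta_ols, lam):
    return beta_ols / (1.0 + lam)

def lasso_shrink(beta_ols, lam):
    return np.sign(beta_ols) * max(abs(beta_ols) - lam, 0.0)

print(ridge_shrink(0.3, 1.0))  # 0.15 -- small, but never exactly zero
print(lasso_shrink(0.3, 1.0))  # 0.0  -- driven exactly to zero (sparse)
```

Small coefficients are zeroed out exactly by the L1 penalty, which is why it is the one more likely to yield sparse weights.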
Q.3. The model obtained by applying linear regression on the identified subset of features may differ from the model obtained at the end of the process of identifying the subset during
Q.4. Consider forward selection, backward selection and best subset selection with respect to the same data set. Which of the following is true?
- Best subset selection can be computationally more expensive than forward selection
- Forward selection and backward selection always lead to the same result
- Best subset selection can be computationally less expensive than backward selection
- Best subset selection and forward selection are computationally equally expensive
- Both (b) and (d)
Q.5. In the lecture on Multivariate Regression, you learned about using orthogonalization iteratively to obtain regression coefficients. This method is generally referred to as Multiple Regression using Successive Orthogonalization. In the formulation of the method, we observe that in iteration k, we regress the entire dataset on z_0, z_1, …, z_{k−1}. It seems like a waste of computation to recompute the coefficients for z_0 a total of p times, z_1 a total of p−1 times, and so on. Can we re-use the coefficients computed in iteration j for iteration j+1 for z_{j−1}?
- No. Doing so will result in the wrong γ matrix, and hence the wrong β_i's.
- Yes. Since z_{j−1} is orthogonal to z_{j−l} for all l ≤ j−1, the multiple regression in each iteration is essentially a univariate regression on each of the previous residuals. Since the regression coefficients for the previous residuals don't change over iterations, we can re-use the coefficients in further iterations.
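For readers who want to experiment, here is a minimal NumPy sketch of successive orthogonalization, in the spirit of the algorithm from the lectures (it also appears as Algorithm 3.1 in Hastie et al.'s Elements of Statistical Learning). The function name and test data are our own; the key property it exploits is that the final residual z_p yields exactly the multiple-regression coefficient of x_p:

```python
import numpy as np

def successive_orthogonalization(X, y):
    """Regress y on the columns of X by successively orthogonalizing them
    (Gram-Schmidt on the predictors). Returns the coefficient of the LAST
    predictor, which equals its multiple-regression coefficient."""
    n, p = X.shape
    Z = np.empty((n, p))
    Z[:, 0] = X[:, 0]
    for k in range(1, p):
        z = X[:, k].astype(float).copy()
        for j in range(k):
            # gamma_{jk}: univariate regression of x_k on residual z_j
            gamma = (Z[:, j] @ X[:, k]) / (Z[:, j] @ Z[:, j])
            z -= gamma * Z[:, j]
        Z[:, k] = z
    # beta_p = <z_p, y> / <z_p, z_p>
    return (Z[:, -1] @ y) / (Z[:, -1] @ Z[:, -1])

# Check against an ordinary least-squares solve.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((50, 1)), rng.standard_normal((50, 3))])
y = rng.standard_normal(50)
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.isclose(successive_orthogonalization(X, y), beta_full[-1]))  # True
```

Because each γ coefficient is a univariate regression on an already-computed residual, and the residuals do not change across iterations, the γ's computed in iteration j can indeed be cached and re-used, as the "Yes" option states.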
Q.6. Principal Component Regression (PCR) is an approach to find an orthogonal set of basis vectors which can then be used to reduce the dimension of the input. Which of the following matrices contains the principal component directions as its columns (follow the notation from the lecture video)?
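As a reminder of the notation: writing the centered input matrix via its singular value decomposition X = U D Vᵀ, the columns of V are the principal component directions, and projecting onto the first m of them gives the reduced inputs used by PCR. A short NumPy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Xc = X - X.mean(axis=0)                 # center the inputs first

# SVD: Xc = U D V^T; columns of V are the principal component directions.
U, D, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt.T

# Reduce dimension by projecting onto the first m directions,
# then run ordinary regression on Z in place of X.
m = 2
Z = Xc @ V[:, :m]
print(Z.shape)  # (100, 2)
```

Note that V is orthonormal (Vᵀ V = I), which is what makes the derived inputs in Z uncorrelated.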
Q.7. Consider the following five training examples
We want to learn a function f(x) of the form f(x) = ax + b, parameterised by (a, b). Using squared error as the loss function, which of the following parameters would you use to model this function so as to get a solution with the minimum loss?
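The minimum-squared-error (a, b) can be found by solving the normal equations, which is what `numpy.linalg.lstsq` does. The five (x, y) pairs below are hypothetical stand-ins, since the quiz's actual table is not reproduced here; substitute the real values to check each candidate answer:

```python
import numpy as np

# Hypothetical five training examples (replace with the quiz's table).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])

# Minimize sum((a*x + b - y)^2): solve the least-squares system A @ [a, b] = y.
A = np.vstack([x, np.ones_like(x)]).T
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(a, 2), round(b, 2))  # 1.97 0.11 for this made-up data
```

To decide between candidate (a, b) pairs, you can also just evaluate the squared-error loss `((a*x + b - y)**2).sum()` for each pair and pick the smallest.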
Q.8. Here is a data set of words in two languages.
Let us build a nearest neighbours classifier that will predict which language a word belongs to. Say we represent each word using the following features.
• Length of the word
• Number of consonants in the word
• Whether it ends with the letter ’o’ (1 if it does, 0 if it doesn’t)
For example, the representation of the word ‘waffle’ would be [6, 2, 0]. For a distance function, use the Manhattan distance.
d(a, b) = Σ_{i=1}^{n} |a_i − b_i|, where a, b ∈ R^n
Take the input word ‘keto’. With k = 1, the predicted language for the word is?
- None of the above
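A 1-NN classifier over these features is a few lines of plain Python. The training words below are hypothetical stand-ins for the quiz's data set (which is not reproduced here), so the prediction printed is illustrative only. One caveat: the quiz's own example ([6, 2, 0] for 'waffle') does not match a literal consonant count (w, f, f, l gives 4), so the second feature may actually count vowels; the sketch counts consonants as the text states. For 'keto' this makes no difference, since it has 2 consonants and 2 vowels:

```python
VOWELS = set("aeiou")

def featurize(word):
    # [length, number of consonants, ends with 'o' (1/0)]
    return [len(word),
            sum(c not in VOWELS for c in word),
            1 if word.endswith("o") else 0]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Hypothetical training set: (word, language) pairs.
train = [("taco", "Spanish"), ("perro", "Spanish"),
         ("waffle", "English"), ("street", "English")]

def predict_1nn(word):
    feats = featurize(word)
    return min(train, key=lambda t: manhattan(featurize(t[0]), feats))[1]

print(featurize("keto"))     # [4, 2, 1]
print(predict_1nn("keto"))
```

With k = 1 the prediction is simply the language of the single closest training word, so the answer depends entirely on which training word minimizes the Manhattan distance to [4, 2, 1].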
Disclaimer: These answers are provided for discussion purposes only; if any answer turns out to be wrong, please don't blame us. If you have any doubts or suggestions regarding any question, kindly comment. The solutions are provided by Chase2learn. This tutorial is for discussion and learning purposes only.
About NPTEL Introduction to Machine Learning Course:
With the increased availability of data from varied sources, there has been increasing attention paid to data-driven disciplines such as analytics and machine learning. In this course, we intend to introduce some of the basic concepts of machine learning from a mathematically well-motivated perspective. We will cover the different learning paradigms and some of the more popular algorithms and architectures used in each of them.
- Week 0: Probability Theory, Linear Algebra, Convex Optimization – (Recap)
- Week 1: Introduction: Statistical Decision Theory – Regression, Classification, Bias Variance
- Week 2: Linear Regression, Multivariate Regression, Subset Selection, Shrinkage Methods, Principal Component Regression, Partial Least squares
- Week 3: Linear Classification, Logistic Regression, Linear Discriminant Analysis
- Week 4: Perceptron, Support Vector Machines
- Week 5: Neural Networks – Introduction, Early Models, Perceptron Learning, Backpropagation, Initialization, Training & Validation, Parameter Estimation – MLE, MAP, Bayesian Estimation
- Week 6: Decision Trees, Regression Trees, Stopping Criterion & Pruning loss functions, Categorical Attributes, Multiway Splits, Missing Values, Decision Trees – Instability Evaluation Measures
- Week 7: Bootstrapping & Cross Validation, Class Evaluation Measures, ROC curve, MDL, Ensemble Methods – Bagging, Committee Machines and Stacking, Boosting
- Week 8: Gradient Boosting, Random Forests, Multi-class Classification, Naive Bayes, Bayesian Networks
- Week 9: Undirected Graphical Models, HMM, Variable Elimination, Belief Propagation
- Week 10: Partitional Clustering, Hierarchical Clustering, Birch Algorithm, CURE Algorithm, Density-based Clustering
- Week 11: Gaussian Mixture Models, Expectation Maximization
- Week 12: Learning Theory, Introduction to Reinforcement Learning, Optional videos (RL framework, TD learning, Solution Methods, Applications)
CRITERIA TO GET A CERTIFICATE:
Average assignment score = 25% of the average of the best 8 assignments out of the 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100
Final score = Average assignment score + Exam score
YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
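The scoring rules above are easy to check for yourself. The sketch below assumes each assignment and the exam are scored out of 100, so the assignment component is out of 25 and the exam component out of 75:

```python
def final_score(assignment_scores, exam_score):
    """Final score = 25% of the average of the best 8 of 12 assignment
    scores (each out of 100) + 75% of the exam score (out of 100)."""
    best8 = sorted(assignment_scores, reverse=True)[:8]
    assignment_component = 0.25 * (sum(best8) / 8)   # out of 25
    exam_component = 0.75 * exam_score               # out of 75
    return assignment_component + exam_component

def eligible(assignment_scores, exam_score):
    """Certificate requires assignment component >= 10/25 AND
    exam component >= 30/75, regardless of the final total."""
    best8 = sorted(assignment_scores, reverse=True)[:8]
    return (0.25 * sum(best8) / 8 >= 10) and (0.75 * exam_score >= 30)

# Example: 90 on eight assignments, zero on four, 60 on the exam.
print(final_score([90] * 8 + [0] * 4, 60))  # 67.5
print(eligible([90] * 8 + [0] * 4, 60))     # True
```

Note how the `eligible` check is separate from the total: a student scoring 20 on every assignment fails the 10/25 threshold even if a strong exam pushes the final score past 40.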
If you have not registered for the exam, kindly register through https://examform.nptel.ac.in/
Join Our Telegram Group:- CLICK HERE