I recently wrote a Python script that performs linear, cubic, and higher-order polynomial regression on a set of data to return a best-fit line. A professor of mine told me that I should be using a gradient descent algorithm instead of pure matrix operations, but he offered no explanation. Why would he suggest this?
Edit: To be more specific, given an M×N design matrix X, I find the least squares fit using the normal equations, b = (X'X)⁻¹X'y, where b is the vector of the line's coefficients.
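Concretely, here's a minimal sketch of that computation with NumPy (the data points are just made-up placeholders):

import numpy as np

# Toy data: fit a line y = b0 + b1*x (placeholder values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Design matrix X: a column of ones for the intercept, plus the x values.
X = np.column_stack([np.ones_like(x), x])

# Normal equations: b = (X'X)^-1 X'y
b = np.linalg.inv(X.T @ X) @ X.T @ y
print(b)  # [intercept, slope]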
Is it perhaps that this method requires routines for the transpose, inverse, and determinant of the matrix, making it too taxing on the system running the program?
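For comparison, here's my understanding of what the gradient descent version would look like on the same toy data (the learning rate and iteration count are arbitrary choices on my part, not anything the professor specified):

import numpy as np

# Same toy data as above.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
X = np.column_stack([np.ones_like(x), x])

# Gradient descent on the least squares cost J(b) = ||Xb - y||^2 / (2m).
m = len(y)
b = np.zeros(X.shape[1])
alpha = 0.05           # learning rate (arbitrary assumption)
for _ in range(5000):  # iteration count (arbitrary assumption)
    grad = X.T @ (X @ b - y) / m  # gradient of J with respect to b
    b -= alpha * grad
print(b)  # should approach the normal-equations solution

No matrix inversion is needed here, only matrix-vector products, which is presumably part of what my professor was getting at.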