Implementing gradient descent in MATLAB: why does the error keep increasing? The source code is below.
Source: Student Homework Help Net. Editor: Zuoyebang. Date: 2024/06/24 16:33:43
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Runs gradient descent for num_iters iterations,
%   updating theta with learning rate alpha and recording the cost
%   of each iteration in J_history.
m = size(X, 1);
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    p = theta(1) - alpha * sum((X * theta - y) .* X(:,1));
    q = theta(2) - alpha * sum((X * theta - y) .* X(:,2));
    theta(1) = p;
    theta(2) = q;
    J_history(iter) = computeCost(X, y, theta);
end
end
Why does m never appear in your for loop? The updates should divide the summed gradient by m:
p = theta(1) - (alpha / m) * sum((X * theta - y) .* X(:,1));
q = theta(2) - (alpha / m) * sum((X * theta - y) .* X(:,2));
Without the 1/m factor the gradient is m times too large, which is the same as multiplying the learning rate alpha by m. With that effective step size each update overshoots the minimum, so the cost in J_history grows instead of shrinking.
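The divergence is easy to reproduce numerically. Below is a minimal sketch (in Python/NumPy rather than MATLAB, using made-up data and hypothetical helper names) that runs the same update with and without the 1/m factor at the same learning rate; only the scaled version converges.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
# Design matrix with a bias column, labels y ~ 2 + 3x plus noise (made-up data).
X = np.column_stack([np.ones(m), rng.uniform(0, 10, m)])
y = X @ np.array([2.0, 3.0]) + rng.normal(0.0, 0.5, m)

def compute_cost(X, y, theta):
    # Mean squared error cost, same convention as the MATLAB computeCost.
    return np.sum((X @ theta - y) ** 2) / (2 * m)

def gradient_descent(X, y, theta, alpha, num_iters, scale_by_m):
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        grad = X.T @ (X @ theta - y)  # summed gradient over all m samples
        if scale_by_m:
            grad = grad / m           # the 1/m factor the original code omits
        theta = theta - alpha * grad
        J_history[it] = compute_cost(X, y, theta)
    return theta, J_history

theta0 = np.zeros(2)
_, J_good = gradient_descent(X, y, theta0, alpha=0.01, num_iters=50, scale_by_m=True)
_, J_bad = gradient_descent(X, y, theta0, alpha=0.01, num_iters=50, scale_by_m=False)
print("with 1/m:   ", J_good[0], "->", J_good[-1])
print("without 1/m:", J_bad[0], "->", J_bad[-1])
```

With the 1/m factor the cost decreases steadily; without it the same alpha makes the iterates overshoot and the cost explodes, which is exactly the symptom in the question. (Shrinking alpha by roughly a factor of m would also stop the blow-up, but dividing the gradient by m is the standard fix.)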