CS231n: How to compute the gradient of the Softmax loss function?


I'm watching some of the Stanford CS231n (Convolutional Neural Networks for Visual Recognition) lectures, but I don't quite understand how to compute the analytical gradient of the softmax loss function using numpy.

According to this stackexchange answer, the softmax gradient is computed as:

dL_i/dw_j = (p_j - 1{y_i = j}) * x_i,   where   p_j = e^{f_j} / \sum_k e^{f_k}

The Python implementation of this is:

num_classes = W.shape[0]
num_train = X.shape[1]
for i in range(num_train):
  for j in range(num_classes):
    p = np.exp(f_i[j])/sum_i
    dW[j, :] += (p-(j == y[i])) * X[:, i]

Can anyone explain how the snippet above works? The full softmax implementation is also included below for reference.

import numpy as np

def softmax_loss_naive(W, X, y, reg):
  """
  Softmax loss function, naive implementation (with loops)
  Inputs:
  - W: C x D array of weights
  - X: D x N array of data. Data are D-dimensional columns
  - y: 1-dimensional array of length N with labels 0...K-1, for K classes
  - reg: (float) regularization strength
  Returns:
  a tuple of:
  - loss as single float
  - gradient with respect to weights W, an array of same size as W
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  #############################################################################
  # Compute the softmax loss and its gradient using explicit loops.           #
  # Store the loss in loss and the gradient in dW. If you are not careful     #
  # here, it is easy to run into numeric instability. Don't forget the        #
  # regularization!                                                           #
  #############################################################################

  # Get shapes
  num_classes = W.shape[0]
  num_train = X.shape[1]

  for i in range(num_train):
    # Compute vector of scores
    f_i = W.dot(X[:, i]) # in R^{num_classes}

    # Normalization trick to avoid numerical instability, per http://cs231n.github.io/linear-classify/#softmax
    log_c = np.max(f_i)
    f_i -= log_c

    # Compute loss (and add to it, divided later)
    # L_i = - f(x_i)_{y_i} + log \sum_j e^{f(x_i)_j}
    sum_i = 0.0
    for f_i_j in f_i:
      sum_i += np.exp(f_i_j)
    loss += -f_i[y[i]] + np.log(sum_i)

    # Compute gradient
    # dw_j = 1/num_train * \sum_i[x_i * (p(y_i = j)-Ind{y_i = j} )]
    # Here we are computing the contribution to the inner sum for a given i.
    for j in range(num_classes):
      p = np.exp(f_i[j])/sum_i
      dW[j, :] += (p-(j == y[i])) * X[:, i]

  # Compute average
  loss /= num_train
  dW /= num_train

  # Regularization
  loss += 0.5 * reg * np.sum(W * W)
  dW += reg*W

  return loss, dW

Originally posted by Nghia Tran; translated under the CC BY-SA 4.0 license.

2 Answers

Not sure if this helps, but:

The 1{y_i = j} term is really the indicator function, as described here. It shows up in the code as the expression (j == y[i]).

Also, the gradient of the loss with respect to the weights is:

dL_i/dw_j = (p_j - 1{y_i = j}) * x_i

where

x_i = X[:, i]  (the i-th training example, i.e. the i-th column of X)

which is where the X[:, i] in the code comes from.
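
To see how the indicator plays out numerically, here is a minimal numpy sketch with made-up sizes (C = 3 classes, D = 4 features, not from the question itself). It computes a single example's contribution to dW both with the inner loop from the question and with an equivalent outer product:

import numpy as np

# Minimal sketch with made-up sizes: C = 3 classes, D = 4 features.
np.random.seed(0)
C, D = 3, 4
W = np.random.randn(C, D)   # weights, C x D, same layout as in the question
x_i = np.random.randn(D)    # one training example (a column X[:, i])
y_i = 2                     # its correct class label

# Scores and numerically stabilized softmax probabilities
f_i = W.dot(x_i)
f_i -= np.max(f_i)
p = np.exp(f_i) / np.sum(np.exp(f_i))   # p_j for every class j

# Per-example gradient contribution, row by row:
# dW[j, :] += (p_j - 1{y_i = j}) * x_i, where (j == y_i) is the indicator (0 or 1)
dW_i = np.zeros_like(W)
for j in range(C):
    dW_i[j, :] = (p[j] - (j == y_i)) * x_i

# Equivalent without the loop: subtract 1 at the correct class, then outer product with x_i
p_shift = p.copy()
p_shift[y_i] -= 1
assert np.allclose(dW_i, np.outer(p_shift, x_i))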

Originally posted by Ben Barsdell; translated under the CC BY-SA 3.0 license.

I know this is late, but here is my answer:

I assume you are familiar with the cs231n softmax loss function. We know that:

L_i = -log( e^{f_{y_i}} / \sum_j e^{f_j} ) = -f_{y_i} + log \sum_j e^{f_j}

So, just as we did for the SVM loss function, the gradient is:

dL_i/dw_j = (p_j - 1{y_i = j}) * x_i,   where   p_j = e^{f_j} / \sum_k e^{f_k}
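
Once the formula is clear, the two explicit loops can also be collapsed. Below is a rough vectorized sketch of my own (not part of the original answer), assuming the same layout as in the question (W is C x D, X is D x N, y holds the correct labels); it should produce the same loss and gradient as softmax_loss_naive:

import numpy as np

def softmax_loss_vectorized_sketch(W, X, y, reg):
    """Illustrative vectorized version, assuming W is C x D and X is D x N."""
    num_train = X.shape[1]

    # Scores for all examples at once (C x N), with a per-column stability shift
    scores = W.dot(X)
    scores -= np.max(scores, axis=0, keepdims=True)

    # Softmax probabilities, C x N
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=0, keepdims=True)

    # Loss: average negative log-probability of the correct class, plus regularization
    correct_logprobs = -np.log(probs[y, np.arange(num_train)])
    loss = np.sum(correct_logprobs) / num_train + 0.5 * reg * np.sum(W * W)

    # Gradient: (p_j - 1{y_i = j}) accumulated over all examples, i.e. (probs - indicator) X^T
    dscores = probs.copy()
    dscores[y, np.arange(num_train)] -= 1
    dW = dscores.dot(X.T) / num_train + reg * W
    return loss, dW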

Hope this helps.

Originally posted by Jawher.B; translated under the CC BY-SA 4.0 license.
