In this post we will walk through an analysis of the Function class (torch.autograd.Function). The article goes through the official documentation from a practical angle, and we hope you take something useful away from it.
The original documentation, with commentary:
Function
torch.autograd.Function

Records operation history and defines formulas for differentiating ops.

Every operation performed on Tensors creates a new function object that performs the computation and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (input <- output). When backward is called, the graph is processed in topological order by calling the backward() method of each Function object and passing the returned gradients on to the next Functions.

Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is the recommended way of extending torch.autograd.

Each function object is meant to be used only once (in the forward pass).
Example:
>>> class Exp(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, i):
>>>         result = i.exp()
>>>         ctx.save_for_backward(result)
>>>         return result
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         result, = ctx.saved_tensors
>>>         return grad_output * result
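For reference, here is a short sketch of how the Exp class above might be exercised. Custom Functions are invoked through their apply method rather than by instantiating them; the tensor names (x, y) here are just illustrative:

>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> y = Exp.apply(x)          # apply() runs forward() and records the op in the graph
>>> y.sum().backward()        # traverses the graph and calls Exp.backward()
>>> torch.allclose(x.grad, x.exp())   # d/dx exp(x) = exp(x)
True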
static backward(ctx, *grad_outputs)

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs a gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.

static forward(ctx, *args, **kwargs)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store tensors that can then be retrieved during the backward pass.
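To make the ctx.save_for_backward and ctx.needs_input_grad machinery concrete, here is a minimal sketch (not from the original documentation) of a two-input multiply Function; returning None for an input that does not require a gradient is the usual convention:

>>> class Mul(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, a, b):
>>>         ctx.save_for_backward(a, b)   # stash the inputs for the backward pass
>>>         return a * b
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         a, b = ctx.saved_tensors
>>>         # one returned gradient per forward() input; skip the work when not needed
>>>         grad_a = grad_output * b if ctx.needs_input_grad[0] else None
>>>         grad_b = grad_output * a if ctx.needs_input_grad[1] else None
>>>         return grad_a, grad_b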
That concludes our analysis of the Function class. If you have run into similar questions, the discussion above should help you work through them. For more on related topics, follow the 億速云 industry news channel.