pytorch  1.8.2
About: PyTorch provides Tensor computation (like NumPy) with strong GPU acceleration and deep neural networks in Python built on a tape-based autograd system. This is an LTS (Long Term Support) release.

accumulate_op.cc
namespace caffe2 {

OPERATOR_SCHEMA(Accumulate)
    .NumInputs(1)
    .NumOutputs(1)
    .IdenticalTypeAndShape()
    .SetDoc(R"DOC(
Accumulate operator accumulates the input tensor to the output tensor. If the
output tensor already has the right size, we add to it; otherwise, we first
initialize the output tensor to all zeros, and then do accumulation. Any
further calls to the operator, given that no one else fiddles with the output
in the interim, will do simple accumulations.
Accumulation is done using Axpby operation as shown:
    Y = 1*X + gamma*Y
where X is the input tensor, Y is the output tensor and gamma is the multiplier
argument.
)DOC")
    .Arg("gamma", "(float, default 1.0) Accumulation multiplier")
    .Input(
        0,
        "input",
        "The input tensor that has to be accumulated to the "
        "output tensor. If the output size is not the same as input size, the "
        "output tensor is first reshaped and initialized to zero, and only "
        "then, accumulation is done.")
    .Output(0, "output", "Accumulated output tensor");

} // namespace caffe2
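
For illustration only, below is a minimal standalone sketch of the accumulation semantics described in the schema's DOC string, using plain std::vector<float> in place of caffe2 tensors. The accumulate function name and the vector-based resize/zero-init are assumptions made for this example; the actual operator applies the equivalent Axpby update on Tensor data.

#include <cstddef>
#include <cstdio>
#include <vector>

// Sketch of the documented behavior: if Y does not yet match X's size,
// reshape Y and zero-initialize it, then apply Y = 1*X + gamma*Y
// (Axpby with a = 1, b = gamma).
void accumulate(const std::vector<float>& X, std::vector<float>& Y,
                float gamma = 1.0f) {
  if (Y.size() != X.size()) {
    Y.assign(X.size(), 0.0f);  // "reshaped and initialized to zero"
  }
  for (std::size_t i = 0; i < X.size(); ++i) {
    Y[i] = X[i] + gamma * Y[i];  // Y = 1*X + gamma*Y
  }
}

int main() {
  std::vector<float> X = {1.0f, 2.0f, 3.0f};
  std::vector<float> Y;  // empty, so the first call zero-initializes it

  accumulate(X, Y, 0.5f);  // Y == {1, 2, 3}
  accumulate(X, Y, 0.5f);  // Y == {1.5, 3, 4.5}

  for (float v : Y) {
    std::printf("%g ", v);
  }
  std::printf("\n");
  return 0;
}

The first call behaves like a copy of X (accumulation into zeros); every subsequent call scales the previous output by gamma before adding X, matching the "simple accumulations" the schema describes.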