Generate Python docs from pytorch/pytorch@69ed7c7
pytorchbot committed Oct 1, 2024
1 parent c2c4519 commit 0080f08
Showing 30 changed files with 201 additions and 135 deletions.
Binary file modified 2.5/_images/RReLU.png
20 changes: 20 additions & 0 deletions 2.5/_modules/torch/backends/cuda.html
@@ -538,6 +538,8 @@ Source code for torch.backends.cuda

    "mem_efficient_sdp_enabled",
    "math_sdp_enabled",
    "enable_math_sdp",
    "allow_fp16_bf16_reduction_math_sdp",
    "fp16_bf16_reduction_math_sdp_allowed",
    "is_flash_attention_available",
    "can_use_flash_attention",
    "can_use_efficient_attention",

@@ -835,6 +837,24 @@ Source code for torch.backends.cuda

    torch._C._set_sdp_use_math(enabled)


def allow_fp16_bf16_reduction_math_sdp(enabled: bool):
    r"""
    .. warning:: This flag is beta and subject to change.

    Enables or disables fp16/bf16 reduction in math scaled dot product attention.
    """
    torch._C._set_math_sdp_allow_fp16_bf16_reduction(enabled)


def fp16_bf16_reduction_math_sdp_allowed():
    r"""
    .. warning:: This flag is beta and subject to change.

    Returns whether fp16/bf16 reduction in math scaled dot product attention is enabled or not.
    """
    return torch._C._get_math_sdp_allow_fp16_bf16_reduction()


def is_flash_attention_available() -> bool:
    r"""Check if PyTorch was built with FlashAttention for scaled_dot_product_attention.
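A minimal usage sketch for the two new toggles, assuming a CUDA-enabled PyTorch build that includes these functions (the default of False is inferred from the numerical-accuracy note below):

    import torch

    # By default, reduced-precision reductions are disallowed, so the math
    # SDPA backend upcasts FP16/BF16 inputs to FP32.
    print(torch.backends.cuda.fp16_bf16_reduction_math_sdp_allowed())  # False

    # Opt in to FP16/BF16 reductions in the math SDPA backend.
    torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)
    print(torch.backends.cuda.fp16_bf16_reduction_math_sdp_allowed())  # True
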
4 changes: 4 additions & 0 deletions 2.5/_sources/backends.rst.txt
@@ -85,6 +85,10 @@ torch.backends.cuda

.. autofunction:: torch.backends.cuda.enable_math_sdp

.. autofunction:: torch.backends.cuda.fp16_bf16_reduction_math_sdp_allowed

.. autofunction:: torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp

.. autofunction:: torch.backends.cuda.cudnn_sdp_enabled

.. autofunction:: torch.backends.cuda.enable_cudnn_sdp
14 changes: 7 additions & 7 deletions 2.5/_sources/generated/exportdb/index.rst.txt
@@ -458,7 +458,7 @@ cond_closed_over_variable

.. note::

Tags: :doc:`torch.cond <torch.cond>`, :doc:`python.closure <python.closure>`
Tags: :doc:`python.closure <python.closure>`, :doc:`torch.cond <torch.cond>`

Support Level: SUPPORTED

@@ -1005,7 +1005,7 @@ dynamic_shape_if_guard

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -1056,7 +1056,7 @@ dynamic_shape_map

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.map <torch.map>`
Tags: :doc:`torch.map <torch.map>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -1284,7 +1284,7 @@ list_contains

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`
Tags: :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -1333,7 +1333,7 @@ list_unpack

.. note::

Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

Support Level: SUPPORTED

@@ -1933,7 +1933,7 @@ dynamic_shape_round

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.builtin <python.builtin>`
Tags: :doc:`python.builtin <python.builtin>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: NOT_SUPPORTED_YET

@@ -2106,6 +2106,6 @@ Result:

.. code-block::
Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7fbb56f51040>
Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7fd56b657040>
2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/python.assert.rst.txt
@@ -54,7 +54,7 @@ list_contains

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`
Tags: :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/python.builtin.rst.txt
@@ -5,7 +5,7 @@ dynamic_shape_round

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.builtin <python.builtin>`
Tags: :doc:`python.builtin <python.builtin>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: NOT_SUPPORTED_YET

2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/python.closure.rst.txt
@@ -5,7 +5,7 @@ cond_closed_over_variable

.. note::

Tags: :doc:`torch.cond <torch.cond>`, :doc:`python.closure <python.closure>`
Tags: :doc:`python.closure <python.closure>`, :doc:`torch.cond <torch.cond>`

Support Level: SUPPORTED

4 changes: 2 additions & 2 deletions 2.5/_sources/generated/exportdb/python.control-flow.rst.txt
@@ -5,7 +5,7 @@ dynamic_shape_if_guard

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -56,7 +56,7 @@ list_unpack

.. note::

Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

Support Level: SUPPORTED

4 changes: 2 additions & 2 deletions 2.5/_sources/generated/exportdb/python.data-structure.rst.txt
@@ -127,7 +127,7 @@ list_contains

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`
Tags: :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -176,7 +176,7 @@ list_unpack

.. note::

Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

Support Level: SUPPORTED

2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/torch.cond.rst.txt
@@ -251,7 +251,7 @@ cond_closed_over_variable

.. note::

Tags: :doc:`torch.cond <torch.cond>`, :doc:`python.closure <python.closure>`
Tags: :doc:`python.closure <python.closure>`, :doc:`torch.cond <torch.cond>`

Support Level: SUPPORTED

8 changes: 4 additions & 4 deletions 2.5/_sources/generated/exportdb/torch.dynamic-shape.rst.txt
@@ -441,7 +441,7 @@ dynamic_shape_if_guard

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.control-flow <python.control-flow>`
Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -492,7 +492,7 @@ dynamic_shape_map

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.map <torch.map>`
Tags: :doc:`torch.map <torch.map>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

@@ -550,7 +550,7 @@ dynamic_shape_round

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.builtin <python.builtin>`
Tags: :doc:`python.builtin <python.builtin>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: NOT_SUPPORTED_YET

@@ -694,7 +694,7 @@ list_contains

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`
Tags: :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/torch.map.rst.txt
@@ -5,7 +5,7 @@ dynamic_shape_map

.. note::

Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.map <torch.map>`
Tags: :doc:`torch.map <torch.map>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`

Support Level: SUPPORTED

2 changes: 1 addition & 1 deletion 2.5/_sources/generated/exportdb/torch.operator.rst.txt
@@ -39,4 +39,4 @@ Result:

.. code-block::
Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7fbb56f51040>
Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7fd56b657040>
7 changes: 7 additions & 0 deletions 2.5/_sources/notes/numerical_accuracy.rst.txt
@@ -110,6 +110,13 @@ reduced-precision reductions are problematic, they can be turned off with

For more information see :ref:`allow_fp16_reduced_precision_reduction<fp16reducedprecision>` and :ref:`allow_bf16_reduced_precision_reduction<bf16reducedprecision>`

Reduced Precision Reduction for FP16 and BF16 in Scaled Dot Product Attention (SDPA)
------------------------------------------------------------------------------------
A naive SDPA math backend, when using FP16/BF16 inputs, can accumulate significant numerical error because it uses low-precision intermediate buffers. To mitigate this, the default behavior is now to upcast FP16/BF16 inputs to FP32: computation is performed in FP32/TF32, and the final FP32 result is downcast back to FP16/BF16. This improves the numerical accuracy of the final output for the math backend with FP16/BF16 inputs, but it increases memory usage and may cause performance regressions in the math backend, since computation shifts from FP16/BF16 BMM to FP32/TF32 BMM/matmul.

For scenarios where reduced-precision reductions are preferred for speed, they can be enabled with the following setting:
``torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)``
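A brief sketch of how this flag might be combined with the math SDPA backend, assuming a CUDA-enabled PyTorch 2.5 build; the tensor shapes here are illustrative only:

    import torch
    import torch.nn.functional as F
    from torch.nn.attention import SDPBackend, sdpa_kernel

    q, k, v = (torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
               for _ in range(3))

    # Trade accuracy for speed: keep reductions in FP16/BF16 inside the math backend.
    torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)

    # Restrict SDPA to the math backend so the flag takes effect.
    with sdpa_kernel(SDPBackend.MATH):
        out = F.scaled_dot_product_attention(q, k, v)
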

.. _fp16_on_mi200:

Reduced Precision FP16 and BF16 GEMMs and Convolutions on AMD Instinct MI200 devices
24 changes: 24 additions & 0 deletions 2.5/backends.html
@@ -756,6 +756,28 @@

torch.backends.cuda.fp16_bf16_reduction_math_sdp_allowed()[source]

    Warning: This flag is beta and subject to change.

    Returns whether fp16/bf16 reduction in math scaled dot product attention is enabled or not.

torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(enabled)[source]

    Warning: This flag is beta and subject to change.

    Enables or disables fp16/bf16 reduction in math scaled dot product attention.

torch.backends.cuda.cudnn_sdp_enabled()[source]

@@ -1262,6 +1284,8 @@

- enable_flash_sdp()
- math_sdp_enabled()
- enable_math_sdp()
- fp16_bf16_reduction_math_sdp_allowed()
- allow_fp16_bf16_reduction_math_sdp()
- cudnn_sdp_enabled()
- enable_cudnn_sdp()
- is_flash_attention_available()