
Commit

Deploying to gh-pages from @ 19a91b9 🚀
facebook-github-bot committed May 18, 2024
1 parent f892cad commit 61f3cdd
Showing 7 changed files with 64 additions and 25 deletions.
10 changes: 8 additions & 2 deletions fbgemm_gpu-cpp-api/split_table_batched_embeddings.html
@@ -418,8 +418,14 @@

Table Batched Embedding Operators

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>> get_unique_indices_cuda(at::Tensor linear_indices, int64_t max_indices, bool compute_count, const bool compute_inverse_indices)

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>> get_unique_indices_cuda(const at::Tensor &linear_indices, const int64_t max_indices, const bool compute_count)

    Deduplicate indices.

<dl class="cpp function">
<dt class="sig sig-object cpp" id="_CPPv436get_unique_indices_with_inverse_cudaRKN2at6TensorEK7int64_tKbKb">
<span id="_CPPv336get_unique_indices_with_inverse_cudaRKN2at6TensorEK7int64_tKbKb"></span><span id="_CPPv236get_unique_indices_with_inverse_cudaRKN2at6TensorEK7int64_tKbKb"></span><span id="get_unique_indices_with_inverse_cuda__at::TensorCR.int64_tC.bC.bC"></span><span class="target" id="group__table-batched-embed-cuda_1ga03a569dc174346515844d06e1becd0dd"></span><span class="n"><span class="pre">std</span></span><span class="p"><span class="pre">::</span></span><span class="n"><span class="pre">tuple</span></span><span class="p"><span class="pre">&lt;</span></span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="p"><span class="pre">,</span></span><span class="w"> </span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="p"><span class="pre">,</span></span><span class="w"> </span><span class="n"><span class="pre">c10</span></span><span class="p"><span class="pre">::</span></span><span class="n"><span class="pre">optional</span></span><span class="p"><span class="pre">&lt;</span></span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="p"><span class="pre">&gt;</span></span><span class="p"><span class="pre">,</span></span><span class="w"> </span><span class="n"><span class="pre">c10</span></span><span class="p"><span class="pre">::</span></span><span class="n"><span class="pre">optional</span></span><span class="p"><span class="pre">&lt;</span></span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="p"><span class="pre">&gt;</span></span><span class="p"><span class="pre">&gt;</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">get_unique_indices_with_inverse_cuda</span></span></span><span class="sig-paren">(</span><span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">at</span></span><span class="p"><span class="pre">::</span></span><a class="reference external" href="https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4N2at6TensorE" title="(in PyTorch vmain)"><span class="n"><span class="pre">Tensor</span></span></a><span class="w"> </span><span class="p"><span class="pre">&amp;</span></span><span class="n sig-param"><span class="pre">linear_indices</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="n"><span class="pre">int64_t</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">max_indices</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> 
</span><span class="kt"><span class="pre">bool</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">compute_count</span></span>, <span class="k"><span class="pre">const</span></span><span class="w"> </span><span class="kt"><span class="pre">bool</span></span><span class="w"> </span><span class="n sig-param"><span class="pre">compute_inverse_indices</span></span><span class="sig-paren">)</span><a class="headerlink" href="#_CPPv436get_unique_indices_with_inverse_cudaRKN2at6TensorEK7int64_tKbKb" title="Permalink to this definition"></a><br /></dt>
<dd><p>Deduplicate indices. </p>
</dd></dl>

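Both overloads above deduplicate a tensor of linearized embedding indices. The following is a minimal, hypothetical usage sketch, not part of this commit or of the generated documentation: the two declarations are copied from the signatures shown in this diff, but the example values, the dtype/device choices, and the interpretation of the returned tensors (unique indices, their length, optional counts, optional inverse indices) are assumptions.

// Hypothetical usage sketch (not part of this commit): calling the index
// deduplication operators documented above. Assumes libtorch plus a built
// fbgemm_gpu library at link time and a visible CUDA device.
#include <torch/torch.h>
#include <tuple>

// Declarations copied from the signatures shown in this diff.
std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>>
get_unique_indices_cuda(
    const at::Tensor& linear_indices,
    const int64_t max_indices,
    const bool compute_count);

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>>
get_unique_indices_with_inverse_cuda(
    const at::Tensor& linear_indices,
    const int64_t max_indices,
    const bool compute_count,
    const bool compute_inverse_indices);

int main() {
  // Made-up input with duplicates; values are assumed to lie in [0, max_indices).
  const auto linear_indices = torch::tensor(
      {3, 1, 3, 7, 1, 9},
      torch::dtype(torch::kLong).device(torch::kCUDA));

  // Deduplicate and also request per-index occurrence counts.
  auto [unique, unique_length, counts] = get_unique_indices_cuda(
      linear_indices, /*max_indices=*/10, /*compute_count=*/true);

  // Variant added to the docs by this commit: additionally request the
  // optional inverse-index tensor (its exact meaning is assumed, not stated
  // on this page).
  auto [unique2, unique_length2, counts2, inverse] =
      get_unique_indices_with_inverse_cuda(
          linear_indices, /*max_indices=*/10,
          /*compute_count=*/true, /*compute_inverse_indices=*/true);
  return 0;
}

Under those assumptions, the second call exercises get_unique_indices_with_inverse_cuda, the entry point this commit adds to the documentation.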
4 changes: 3 additions & 1 deletion genindex.html
@@ -595,10 +595,12 @@ G

generic_histogram_binning_calibration_by_feature_cpu (C++ function), in fbgemm_gpu-cpp-api/sparse_ops.html
get_unique_indices_cuda (C++ function, const-reference overload), in fbgemm_gpu-cpp-api/split_table_batched_embeddings.html
get_unique_indices_cuda (C++ function, by-value overload), in fbgemm_gpu-cpp-api/split_table_batched_embeddings.html
get_unique_indices_with_inverse_cuda (C++ function), in fbgemm_gpu-cpp-api/split_table_batched_embeddings.html
gqa_attn_splitk (C++ function), in fbgemm_gpu-cpp-api/experimental_ops.html
45 changes: 37 additions & 8 deletions group__table-batched-embed-cuda.html
@@ -79,8 +79,10 @@

Functions

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>> get_unique_indices_cuda (at::Tensor linear_indices, int64_t max_indices, bool compute_count, const bool compute_inverse_indices)

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>> get_unique_indices_cuda (const at::Tensor &linear_indices, const int64_t max_indices, const bool compute_count)

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>> get_unique_indices_with_inverse_cuda (const at::Tensor &linear_indices, const int64_t max_indices, const bool compute_count, const bool compute_inverse_indices)

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>> lru_cache_find_uncached_cuda (at::Tensor unique_indices, at::Tensor unique_indices_length, int64_t max_indices, at::Tensor lxu_cache_state, int64_t time_stamp, at::Tensor lru_state, bool gather_cache_stats, at::Tensor uvm_cache_stats, bool lock_cache_line, at::Tensor lxu_cache_locking_counter, const bool compute_inverse_indices)

int64_t host_lxu_cache_slot (int64_t h_in, int64_t C)
@@ -242,26 +244,53 @@

◆ get_unique_indices_cuda()

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>> get_unique_indices_cuda (at::Tensor linear_indices, int64_t max_indices, bool compute_count, const bool compute_inverse_indices)

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>> get_unique_indices_cuda (const at::Tensor &linear_indices, const int64_t max_indices, const bool compute_count)

Deduplicate indices.

◆ get_unique_indices_with_inverse_cuda()

std::tuple<at::Tensor, at::Tensor, c10::optional<at::Tensor>, c10::optional<at::Tensor>> get_unique_indices_with_inverse_cuda (const at::Tensor &linear_indices, const int64_t max_indices, const bool compute_count, ...)
Binary file modified objects.inv
Binary file not shown.

0 comments on commit 61f3cdd
