diff --git a/dev/404.html b/dev/404.html deleted file mode 100644 index b1f69d84d599e6900d92897a12e537441334dee5..0000000000000000000000000000000000000000 --- a/dev/404.html +++ /dev/null @@ -1,217 +0,0 @@ - - - - - - - - -Page not found (404) • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -Content not found. Please use links in the navbar. - -
- - - -
- - - - -
- - - - - - - - diff --git a/dev/CONTRIBUTING.html b/dev/CONTRIBUTING.html deleted file mode 100644 index ad250c57666b5c4fcbce45eda3fbae372841892c..0000000000000000000000000000000000000000 --- a/dev/CONTRIBUTING.html +++ /dev/null @@ -1,278 +0,0 @@ - - - - - - - - -Contributing to torch • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
- -

This outlines how to propose a change to torch. For more detailed info about contributing to this, and other tidyverse packages, please see the development contributing guide.

-
-

-Fixing typos

-

You can fix typos, spelling mistakes, or grammatical errors in the documentation directly using the GitHub web interface, as long as the changes are made in the source file. This generally means you’ll need to edit roxygen2 comments in an .R, not a .Rd file. You can find the .R file that generates the .Rd by reading the comment in the first line.

-

See also the Documentation section below.

-
-
-

-Filing bugs

-

If you find a bug in torch, please open an issue here. Please provide detailed information on how to reproduce the bug; it would be great to also include a reprex.

-
-
-

-Feature requests

-

Feel free to open issues here and add the feature-request tag. First search whether there's already an open issue for your feature request; if so, it's better to comment on or upvote it instead of opening a new one.

-
-
-

-Examples

-

We welcome contributed examples. Feel free to open a PR with new examples. The examples should be placed in the vignettes/examples folder.

-

Each example should consist of an .R file and an .Rmd file with the same name; the .Rmd simply renders the code.

-

See mnist-mlp.R and mnist-mlp.Rmd for an example.

-

One must be able to run the example without manually downloading any dataset/file. You should also add an entry to the _pkgdown.yaml file.

-
-
-

-Code contributions

-

We have many open issues in the GitHub repo. If there's an item you want to work on, you can comment on it and ask for directions.

-
-

-Requirements

-
    -
  • R installation
  • -
  • R Tools for compilation (only on Windows)
  • -
  • The devtools package
  • -
  • CMake to compile lantern binaries
  • -
-
-
-

-Workflow

-

We use devtools as the toolchain for development, but a few steps must be completed before setting up.

-

The first time you clone the repository, you must run:

-
-source("tools/buildlantern.R")
-

This will compile the Lantern binaries, download LibTorch, and copy the binaries to the deps folder in the working directory.

-

This command must be run every time you modify Lantern code, i.e. code that lives in lantern/src.

-

You can then run

-
-devtools::load_all()
-

to load torch and test interactively, or

-
-devtools::test()
-

to run the test suite.
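Taken together, a typical iteration looks like the sketch below (run from the package root; paths as described above):

```r
# After cloning (and again whenever code under lantern/src changes):
# compiles the Lantern binaries and downloads LibTorch into the deps folder.
source("tools/buildlantern.R")

# Load the package for interactive experimentation.
devtools::load_all()

# Run the full test suite, or only matching test files while iterating.
devtools::test()
devtools::test(filter = "tensor")  # 'filter' narrows by test-file name
```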

-
-
-
-

-Documentation

-

We use roxygen2 to generate the documentation. In order to update the docs, edit the corresponding file in the R directory. To regenerate and preview the docs, use the custom tools/document.R script, as we need to patch roxygen2 to avoid running the examples on CRAN.

-
-
- -
- - - -
- - - - -
- - - - - - - - diff --git a/dev/LICENSE-text.html b/dev/LICENSE-text.html index 8c2a4b165c748529a77d8e0124d11c7068faeb08..6d56b37af2c0e34256a871984580177f4618aef2 100644 --- a/dev/LICENSE-text.html +++ b/dev/LICENSE-text.html @@ -1,78 +1,18 @@ - - - - - - - -License • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -License • torch - - - - - - - - + + -
-
- -
- -
+
+
-
- +
- - + + diff --git a/dev/LICENSE.html b/dev/LICENSE.html index 98eba5eb02cd330f831255b32365f948a79bfcb4..f837eff11b84460fdb70578dd1f32ef17d7f76b4 100644 --- a/dev/LICENSE.html +++ b/dev/LICENSE.html @@ -1,78 +1,18 @@ - - - - - - - -MIT License • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MIT License • torch - - - - - - - - + + -
-
- -
- -
+
+
-
- +
- - + + diff --git a/dev/articles/distributions.html b/dev/articles/distributions.html index 6b08fa7985c3e4133627fecbf08229801944a69f..f58a1dfa0aaff69206063f1b251118f41568b154 100644 --- a/dev/articles/distributions.html +++ b/dev/articles/distributions.html @@ -27,6 +27,8 @@ + +
-
- +
- - + + diff --git a/dev/articles/indexing.html b/dev/articles/indexing.html index 49320be84422d864210eae6390307ab4fb262b24..29423bfcf69f8f904d34f67f0b84f3feaaead25b 100644 --- a/dev/articles/indexing.html +++ b/dev/articles/indexing.html @@ -27,6 +27,8 @@ + +
@@ -193,108 +115,95 @@ in Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be None).

-
autograd_grad(
-  outputs,
-  inputs,
-  grad_outputs = NULL,
-  retain_graph = create_graph,
-  create_graph = FALSE,
-  allow_unused = FALSE
-)
+
+
autograd_grad(
+  outputs,
+  inputs,
+  grad_outputs = NULL,
+  retain_graph = create_graph,
+  create_graph = FALSE,
+  allow_unused = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
outputs

(sequence of Tensor) – outputs of the differentiated function.

inputs

(sequence of Tensor) – Inputs w.r.t. which the gradient will be -returned (and not accumulated into .grad).

grad_outputs

(sequence of Tensor) – The “vector” in the Jacobian-vector +

+

Arguments

+
outputs
+

(sequence of Tensor) – outputs of the differentiated function.

+
inputs
+

(sequence of Tensor) – Inputs w.r.t. which the gradient will be +returned (and not accumulated into .grad).

+
grad_outputs
+

(sequence of Tensor) – The “vector” in the Jacobian-vector product. Usually gradients w.r.t. each output. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable -for all grad_tensors, then this argument is optional. Default: None.

retain_graph

(bool, optional) – If FALSE, the graph used to compute the +for all grad_tensors, then this argument is optional. Default: None.

+
retain_graph
+

(bool, optional) – If FALSE, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to TRUE is not needed and often can be worked around in a much more efficient way. -Defaults to the value of create_graph.

create_graph

(bool, optional) – If TRUE, graph of the derivative will be constructed, allowing to compute higher order derivative products. Default: FALSE`.

allow_unused

(bool, optional) – If FALSE, specifying inputs that were +Defaults to the value of create_graph.

+
create_graph
+

(bool, optional) – If TRUE, graph of the derivative will be constructed, allowing to compute higher order derivative products. Default: FALSE`.

+
allow_unused
+

(bool, optional) – If FALSE, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an -error. Defaults to FALSE

- -

Details

- +error. Defaults to FALSE

+
+
+

Details

If only_inputs is TRUE, the function will only return a list of gradients w.r.t the specified inputs. If it’s FALSE, then gradient w.r.t. all remaining leaves will still be computed, and will be accumulated into their .grad attribute.
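For instance, setting create_graph = TRUE lets the returned gradients themselves be differentiated again. A hedged sketch (assuming torch is installed):

```r
library(torch)

x <- torch_tensor(2, requires_grad = TRUE)
y <- x^3

# First derivative, keeping the graph so it can itself be differentiated:
# dy/dx = 3 * x^2
g <- autograd_grad(y, list(x), create_graph = TRUE)[[1]]

# Second derivative: d2y/dx2 = 6 * x
g2 <- autograd_grad(g, list(x))[[1]]
```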

+
-

Examples

-
if (torch_is_installed()) {
-w <- torch_tensor(0.5, requires_grad = TRUE)
-b <- torch_tensor(0.9, requires_grad = TRUE)
-x <- torch_tensor(runif(100))
-y <- 2 * x + 1
-loss <- (y - (w*x + b))^2
-loss <- loss$mean()
-
-o <- autograd_grad(loss, list(w, b))
-o
- 
-}
-#> [[1]]
-#> torch_tensor
-#> -0.9935
-#> [ CPUFloatType{1} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> -1.6206
-#> [ CPUFloatType{1} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+w <- torch_tensor(0.5, requires_grad = TRUE)
+b <- torch_tensor(0.9, requires_grad = TRUE)
+x <- torch_tensor(runif(100))
+y <- 2 * x + 1
+loss <- (y - (w*x + b))^2
+loss <- loss$mean()
+
+o <- autograd_grad(loss, list(w, b))
+o
+ 
+}
+#> [[1]]
+#> torch_tensor
+#> -1.0326
+#> [ CPUFloatType{1} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> -1.6274
+#> [ CPUFloatType{1} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/autograd_set_grad_mode.html b/dev/reference/autograd_set_grad_mode.html index a519fa7f66b19f210836244bfc347efd3f5d82cf..ad91d313870a383d7bfa3efb0610d2a9196a774a 100644 --- a/dev/reference/autograd_set_grad_mode.html +++ b/dev/reference/autograd_set_grad_mode.html @@ -1,79 +1,18 @@ - - - - - - - -Set grad mode — autograd_set_grad_mode • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Set grad mode — autograd_set_grad_mode • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Sets or disables gradient history.

-
autograd_set_grad_mode(enabled)
- -

Arguments

- - - - - - -
enabled

bool wether to enable or disable the gradient recording.

+
+
autograd_set_grad_mode(enabled)
+
+
+

Arguments

+
enabled
+

bool; whether to enable or disable gradient recording.

+
+
-
- +
- - + + diff --git a/dev/reference/backends_mkl_is_available.html b/dev/reference/backends_mkl_is_available.html index e552003647691e5716dc92d75bb2195ac8afc88c..24f14a641d315c2962aefaa05dd24de0bc8bd320 100644 --- a/dev/reference/backends_mkl_is_available.html +++ b/dev/reference/backends_mkl_is_available.html @@ -1,79 +1,18 @@ - - - - - - - -MKL is available — backends_mkl_is_available • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MKL is available — backends_mkl_is_available • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,38 +111,36 @@

MKL is available

-
backends_mkl_is_available()
- - -

Value

+
+
backends_mkl_is_available()
+
+
+

Value

Returns whether LibTorch is built with MKL support.

+
+
-
- +
- - + + diff --git a/dev/reference/backends_mkldnn_is_available.html b/dev/reference/backends_mkldnn_is_available.html index fd0a3246df2a82b5368f9a51635edda9cf0459dd..dcecd16cf3c62c2138ca2fb6c2529e7299da90b0 100644 --- a/dev/reference/backends_mkldnn_is_available.html +++ b/dev/reference/backends_mkldnn_is_available.html @@ -1,79 +1,18 @@ - - - - - - - -MKLDNN is available — backends_mkldnn_is_available • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MKLDNN is available — backends_mkldnn_is_available • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,38 +111,36 @@

MKLDNN is available

-
backends_mkldnn_is_available()
- - -

Value

+
+
backends_mkldnn_is_available()
+
+
+

Value

Returns whether LibTorch is built with MKL-DNN support.

+
+
-
- +
- - + + diff --git a/dev/reference/backends_openmp_is_available.html b/dev/reference/backends_openmp_is_available.html index 105ebfd47091c525ff8a79a743eff935ad4118b0..c66b667a80058dfeb5ec5a47803c2588fde67303 100644 --- a/dev/reference/backends_openmp_is_available.html +++ b/dev/reference/backends_openmp_is_available.html @@ -1,79 +1,18 @@ - - - - - - - -OpenMP is available — backends_openmp_is_available • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -OpenMP is available — backends_openmp_is_available • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,38 +111,36 @@

OpenMP is available

-
backends_openmp_is_available()
- - -

Value

+
+
backends_openmp_is_available()
+
+
+

Value

Returns whether LibTorch is built with OpenMP support.

+
+
-
- +
- - + + diff --git a/dev/reference/broadcast_all.html b/dev/reference/broadcast_all.html index f5c2ae08780b81a5a67c96cb945be351e64d94a9..80f2b867527ba6f6f0f4e86f8718b93d3e323468 100644 --- a/dev/reference/broadcast_all.html +++ b/dev/reference/broadcast_all.html @@ -1,84 +1,23 @@ - - - - - - - -Given a list of values (possibly containing numbers), returns a list where each -value is broadcasted based on the following rules: — broadcast_all • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Given a list of values (possibly containing numbers), returns a list where each +value is broadcasted based on the following rules: — broadcast_all • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -198,49 +120,42 @@ TODO: add has_torch_function((v,)) See: https://github.com/pytorch/pytorch/blob/master/torch/distributions/utils.py

-
broadcast_all(values)
+
+
broadcast_all(values)
+
-

Arguments

- - - - - - -
values

List of:

    -
  • torch.*Tensor instances are broadcasted as per _broadcasting-semantics.

  • +
    +

    Arguments

    +
    values
    +

    List of:

    • torch.*Tensor instances are broadcasted as per _broadcasting-semantics.

numeric instances (scalars) are upcast to tensors having the same size and type as the first tensor passed to values. If all the values are scalars, then they are upcast to scalar Tensors. values (list of numeric, torch.*Tensor or objects implementing torch_function)

    • -
- + +
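A hedged sketch of the broadcasting behaviour described above (assuming torch is installed; broadcast_all is the function documented here):

```r
library(torch)

x <- torch_ones(2, 2)

# A scalar is upcast to a tensor with the same size and dtype as the
# first tensor in the list, per the rules documented above.
res <- broadcast_all(list(x, 3))

res[[1]]$shape  # the original 2x2 tensor
res[[2]]$shape  # the scalar, broadcast to match x
```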
+
-
- +
- - + + diff --git a/dev/reference/call_torch_function.html b/dev/reference/call_torch_function.html new file mode 100644 index 0000000000000000000000000000000000000000..c1691b51af5050dd11c2c95623431bca21942f54 --- /dev/null +++ b/dev/reference/call_torch_function.html @@ -0,0 +1,193 @@ + +Call a (Potentially Unexported) Torch Function — call_torch_function • torch + + +
+
+ + + +
+
+ + +
+

This function allows calling a function prefixed with torch_, including unexported +functions which could have potentially valuable uses but which do not yet have +a user-friendly R wrapper function. Therefore, this function should be used with +extreme caution. Make sure you understand what the function expects as input. It +may be helpful to read the torch source code for help with this, as well as +the documentation for the corresponding function in the PyTorch C++ API. Generally +for development and advanced use only.

+
+ +
+
call_torch_function(name, ..., quiet = FALSE)
+
+ +
+

Arguments

+
name
+

Name of the function to call as a string. Should start with "torch_"

+
...
+

A list of arguments to pass to the function. Argument splicing with +!!! is supported.

+
quiet
+

If TRUE, suppress warnings with valuable information about the dangers of +this function.

+
+
+

Value

+

The return value from calling the function name with arguments ...

+
+ +
+

Examples

+
if (torch_is_installed()) {
+## many unexported functions do 'backward' calculations (e.g. derivatives)
+## These could be used as a part of custom autograd functions for example.
+x <- torch_randn(10, requires_grad = TRUE)
+y <- torch_tanh(x)
+## calculate backwards gradient using standard torch method
+y$backward(torch_ones_like(x))
+x$grad
+## we can get the same result by calling the unexported `torch_tanh_backward()`
+## function. The first argument is 1 to setup the Jacobian-vector product.
+## see https://pytorch.org/blog/overview-of-pytorch-autograd-engine/ for details.
+call_torch_function("torch_tanh_backward", 1, y) 
+all.equal(call_torch_function("torch_tanh_backward", 1, y, quiet = TRUE), x$grad)
+}
+#> Warning: Because this function allows access to unexported functions, please use with caution, and
+#>             only if you are sure know what you are doing. Unexported functions will expect inputs that
+#>             are more C++-like than R-like. For example, they will expect all indexes to be 0-based instead
+#>             of 1-based. In addition unexported functions may be subject to removal from the API without
+#>             warning. Set quiet = TRUE to silence this warning.
+#> [1] TRUE
+
+
+
+ +
+ + +
+ +
+

Site built with pkgdown 2.0.1.

+
+ +
+ + + + + + + + diff --git a/dev/reference/contrib_sort_vertices.html b/dev/reference/contrib_sort_vertices.html index 806a752fe21a465c4518742245d69fb01ffa4d65..bb358c3528361b14c72dbe7f9dc65d8fcf37fb7f 100644 --- a/dev/reference/contrib_sort_vertices.html +++ b/dev/reference/contrib_sort_vertices.html @@ -1,79 +1,18 @@ - - - - - - - -Contrib sort vertices — contrib_sort_vertices • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Contrib sort vertices — contrib_sort_vertices • torch - - - - - + + - - - -
-
- -
- -
+
-

Based on the implementation from Rotated_IoU

+

Based on the implementation from Rotated_IoU

-
contrib_sort_vertices(vertices, mask, num_valid)
- -

Arguments

- - - - - - - - - - - - - - -
vertices

A Tensor with the vertices.

mask

A tensors containing the masks.

num_valid

A integer tensors.

- -

Details

+
+
contrib_sort_vertices(vertices, mask, num_valid)
+
+
+

Arguments

+
vertices
+

A Tensor with the vertices.

+
mask
+

A tensor containing the masks.

+
num_valid
+

An integer tensor.

+
+
+

Details

All tensors should be on a CUDA device so this function can be used.

-

Note

- +
+
+

Note

This function does not make part of the official torch API.

+
-

Examples

-
if (torch_is_installed()) {
-if (cuda_is_available()) {
-v <- torch_randn(8, 1024, 24, 2)$cuda()
-mean <- torch_mean(v, dim=2, keepdim=TRUE)
-v <- v - mean
-m <- (torch_rand(8, 1024, 24) > 0.8)$cuda()
-nv <- torch_sum(m$to(dtype = torch_int()), dim=-1)$to(dtype = torch_int())$cuda()
-result <- contrib_sort_vertices(v, m, nv)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (cuda_is_available()) {
+v <- torch_randn(8, 1024, 24, 2)$cuda()
+mean <- torch_mean(v, dim=2, keepdim=TRUE)
+v <- v - mean
+m <- (torch_rand(8, 1024, 24) > 0.8)$cuda()
+nv <- torch_sum(m$to(dtype = torch_int()), dim=-1)$to(dtype = torch_int())$cuda()
+result <- contrib_sort_vertices(v, m, nv)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/cuda_current_device.html b/dev/reference/cuda_current_device.html index 7d7f4423d05c9e75b63bec166c7c9bff034aef59..3a6e38d2e2f70437b9e608b86fd551725e34b138 100644 --- a/dev/reference/cuda_current_device.html +++ b/dev/reference/cuda_current_device.html @@ -1,79 +1,18 @@ - - - - - - - -Returns the index of a currently selected device. — cuda_current_device • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Returns the index of a currently selected device. — cuda_current_device • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

Returns the index of a currently selected device.

-
cuda_current_device()
- +
+
cuda_current_device()
+
+
-
- +
- - + + diff --git a/dev/reference/cuda_device_count.html b/dev/reference/cuda_device_count.html index 4eb1cc99227bd53f8c2385a30eac358eaee9431c..4960479cfd3f6fca2802becf02cb861b612642be 100644 --- a/dev/reference/cuda_device_count.html +++ b/dev/reference/cuda_device_count.html @@ -1,79 +1,18 @@ - - - - - - - -Returns the number of GPUs available. — cuda_device_count • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Returns the number of GPUs available. — cuda_device_count • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

Returns the number of GPUs available.

-
cuda_device_count()
- +
+
cuda_device_count()
+
+
-
- +
- - + + diff --git a/dev/reference/cuda_get_device_capability.html b/dev/reference/cuda_get_device_capability.html index 51d328b7be1e345530af0a13442e9e6f79654c0f..9a406be595130675d87fc50f554405751cf9f9e8 100644 --- a/dev/reference/cuda_get_device_capability.html +++ b/dev/reference/cuda_get_device_capability.html @@ -1,79 +1,18 @@ - - - - - - - -Returns the major and minor CUDA capability of device — cuda_get_device_capability • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Returns the major and minor CUDA capability of device — cuda_get_device_capability • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Returns the major and minor CUDA capability of device

-
cuda_get_device_capability(device)
- -

Arguments

- - - - - - -
device

Integer value of the CUDA device to return capabilities of.

+
+
cuda_get_device_capability(device)
+
+
+

Arguments

+
device
+

Integer value of the CUDA device to return capabilities of.

+
+
-
- +
- - + + diff --git a/dev/reference/cuda_is_available.html b/dev/reference/cuda_is_available.html index 8e8354016f49ff6c6a19afd6aaea425cffac7ee4..70d930163fd6aacedd425b5a7c536bc696d768fd 100644 --- a/dev/reference/cuda_is_available.html +++ b/dev/reference/cuda_is_available.html @@ -1,79 +1,18 @@ - - - - - - - -Returns a bool indicating if CUDA is currently available. — cuda_is_available • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Returns a bool indicating if CUDA is currently available. — cuda_is_available • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

Returns a bool indicating if CUDA is currently available.

-
cuda_is_available()
- +
+
cuda_is_available()
+
+
-
- +
- - + + diff --git a/dev/reference/dataloader.html b/dev/reference/dataloader.html index 9ca25b802b2039a6b7ce4b17b0b8a0d77aead3d5..acef31f738fa64ef52abe79fc6e6faf9244dfe11 100644 --- a/dev/reference/dataloader.html +++ b/dev/reference/dataloader.html @@ -1,82 +1,21 @@ - - - - - - - -Data loader. Combines a dataset and a sampler, and provides -single- or multi-process iterators over the dataset. — dataloader • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Data loader. Combines a dataset and a sampler, and provides +single- or multi-process iterators over the dataset. — dataloader • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,148 +116,113 @@ single- or multi-process iterators over the dataset. single- or multi-process iterators over the dataset.

-
dataloader(
-  dataset,
-  batch_size = 1,
-  shuffle = FALSE,
-  sampler = NULL,
-  batch_sampler = NULL,
-  num_workers = 0,
-  collate_fn = NULL,
-  pin_memory = FALSE,
-  drop_last = FALSE,
-  timeout = -1,
-  worker_init_fn = NULL,
-  worker_globals = NULL,
-  worker_packages = NULL
-)
+
+
dataloader(
+  dataset,
+  batch_size = 1,
+  shuffle = FALSE,
+  sampler = NULL,
+  batch_sampler = NULL,
+  num_workers = 0,
+  collate_fn = NULL,
+  pin_memory = FALSE,
+  drop_last = FALSE,
+  timeout = -1,
+  worker_init_fn = NULL,
+  worker_globals = NULL,
+  worker_packages = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
dataset

(Dataset): dataset from which to load the data.

batch_size

(int, optional): how many samples per batch to load -(default: 1).

shuffle

(bool, optional): set to TRUE to have the data reshuffled -at every epoch (default: FALSE).

sampler

(Sampler, optional): defines the strategy to draw samples from -the dataset. If specified, shuffle must be False.

batch_sampler

(Sampler, optional): like sampler, but returns a batch of +

+

Arguments

+
dataset
+

(Dataset): dataset from which to load the data.

+
batch_size
+

(int, optional): how many samples per batch to load +(default: 1).

+
shuffle
+

(bool, optional): set to TRUE to have the data reshuffled +at every epoch (default: FALSE).

+
sampler
+

(Sampler, optional): defines the strategy to draw samples from +the dataset. If specified, shuffle must be False.

+
batch_sampler
+

(Sampler, optional): like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, -shuffle, sampler, and drop_last.

num_workers

(int, optional): how many subprocesses to use for data +shuffle, sampler, and drop_last.

+
num_workers
+

(int, optional): how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. -(default: 0)

collate_fn

(callable, optional): merges a list of samples to form a mini-batch.

pin_memory

(bool, optional): If TRUE, the data loader will copy tensors +(default: 0)

+
collate_fn
+

(callable, optional): merges a list of samples to form a mini-batch.

+
pin_memory
+

(bool, optional): If TRUE, the data loader will copy tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type -see the example below.

drop_last

(bool, optional): set to TRUE to drop the last incomplete batch, +see the example below.

+
drop_last
+

(bool, optional): set to TRUE to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If FALSE and the size of dataset is not divisible by the batch size, then the last batch -will be smaller. (default: FALSE)

timeout

(numeric, optional): if positive, the timeout value for collecting a batch -from workers. -1 means no timeout. (default: -1)

worker_init_fn

(callable, optional): If not NULL, this will be called on each +will be smaller. (default: FALSE)

+
timeout
+

(numeric, optional): if positive, the timeout value for collecting a batch +from workers. -1 means no timeout. (default: -1)

+
worker_init_fn
+

(callable, optional): If not NULL, this will be called on each worker subprocess with the worker id (an int in [1, num_workers]) as -input, after seeding and before data loading. (default: NULL)

worker_globals

(list or character vector, optional) only used when +input, after seeding and before data loading. (default: NULL)

+
worker_globals
+

(list or character vector, optional) only used when num_workers > 0. If a character vector, then objects with those names are copied from the global environment to the workers. If a named list, then this list is copied and attached to the worker global environment. Notice -that the objects are copied only once at the worker initialization.

worker_packages

(character vector, optional) Only used if num_workers > 0 +that the objects are copied only once at the worker initialization.

+
worker_packages
+

(character vector, optional) Only used if num_workers > 0 optional character vector naming packages that should be loaded in -each worker.

- -

Parallel data loading

- +each worker.

+
+
+

Parallel data loading

When using num_workers > 0 data loading will happen in parallel for each worker. Note that batches are taken in parallel and not observations.
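As a minimal sketch (assuming torch is installed; the toy dataset below is hypothetical), a dataloader with worker processes can be driven with the dataloader_make_iter() and dataloader_next() helpers documented elsewhere on this site:

```r
library(torch)

# A toy map-style dataset: 100 rows of 10 random features.
toy <- dataset(
  name = "toy_dataset",
  initialize = function() self$x <- torch_randn(100, 10),
  .getitem = function(i) self$x[i, ],
  .length = function() 100
)()

# With num_workers > 0, batches (not individual observations)
# are fetched in parallel.
dl <- dataloader(toy, batch_size = 32, num_workers = 2)

it <- dataloader_make_iter(dl)
batch <- dataloader_next(it)  # one batch of observations
```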

-

The worker initialization process happens in the following order:

    -
  • num_workers R sessions are initialized.

  • -
- -

Then in each worker we perform the following actions:

    -
  • the torch library is loaded.

  • -
  • a random seed is set both using set.seed() and using torch_manual_seed.

  • +

    The worker initialization process happens in the following order:

    • num_workers R sessions are initialized.

    • +

    Then in each worker we perform the following actions:

    • the torch library is loaded.

    • +
    • a random seed is set both using set.seed() and using torch_manual_seed.

    • packages passed to the worker_packages argument are loaded.

  • objects passed through the worker_globals parameter are copied into the global environment.

  • the worker_init function is run with an id argument.

    • the dataset fetcher is copied to the worker.

    • -
    - +
+
-
- +
- - + + diff --git a/dev/reference/dataloader_make_iter.html b/dev/reference/dataloader_make_iter.html index 89a9efbd000a45dad2919205908036a77959515a..ee2e22ad3ff27bc137a19df9d756cccb5eb8fad6 100644 --- a/dev/reference/dataloader_make_iter.html +++ b/dev/reference/dataloader_make_iter.html @@ -1,79 +1,18 @@ - - - - - - - -Creates an iterator from a DataLoader — dataloader_make_iter • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates an iterator from a DataLoader — dataloader_make_iter • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Creates an iterator from a DataLoader

-
dataloader_make_iter(dataloader)
- -

Arguments

- - - - - - -
dataloader

a dataloader object.

+
+
dataloader_make_iter(dataloader)
+
+
+

Arguments

+
dataloader
+

a dataloader object.

+
+
-
- +
- - + + diff --git a/dev/reference/dataloader_next.html b/dev/reference/dataloader_next.html index 80f6c8574e19d1ac5be37140910b2736f3e7b527..64dd7e08e7608be8d6b8cddbd8f7d77962d20941 100644 --- a/dev/reference/dataloader_next.html +++ b/dev/reference/dataloader_next.html @@ -1,79 +1,18 @@ - - - - - - - -Get the next element of a dataloader iterator — dataloader_next • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Get the next element of a dataloader iterator — dataloader_next • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

Get the next element of a dataloader iterator

-
dataloader_next(iter, completed = NULL)
- -

Arguments

- - - - - - - - - - -
iter

a DataLoader iter created with dataloader_make_iter.

completed

the returned value when the iterator is exhausted.

+
+
dataloader_next(iter, completed = NULL)
+
+
+

Arguments

+
iter
+

a DataLoader iter created with dataloader_make_iter.

+
completed
+

the returned value when the iterator is exhausted.

+
+
-
- +
- - + + diff --git a/dev/reference/dataset.html b/dev/reference/dataset.html index f181d1bbe700db05b30ff029b8fec67f4fb48cd0..d6d449251ea2b9fb6eb946497b5d9a555f5b9fe8 100644 --- a/dev/reference/dataset.html +++ b/dev/reference/dataset.html @@ -1,84 +1,23 @@ - - - - - - - -Helper function to create an R6 class that inherits from the abstract Dataset class — dataset • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Helper function to create an R6 class that inherits from the abstract Dataset class — dataset • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -196,57 +118,46 @@ class. All subclasses should overwrite the .getitem() method, which supports fetching a data sample for a given key. Subclasses could also optionally overwrite .length(), which is expected to return the size of the dataset (e.g. number of samples) used by many sampler implementations -and the default options of dataloader().

+and the default options of dataloader().

-
dataset(
-  name = NULL,
-  inherit = Dataset,
-  ...,
-  private = NULL,
-  active = NULL,
-  parent_env = parent.frame()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
name

a name for the dataset. It it's also used as the class -for it.

inherit

you can optionally inherit from a dataset when creating a -new dataset.

...

public methods for the dataset class

private

passed to R6::R6Class().

active

passed to R6::R6Class().

parent_env

An environment to use as the parent of newly-created -objects.

- -

Note

+
+
dataset(
+  name = NULL,
+  inherit = Dataset,
+  ...,
+  private = NULL,
+  active = NULL,
+  parent_env = parent.frame()
+)
+
-

dataloader() by default constructs a index +

+

Arguments

+
name
+

a name for the dataset. It's also used as the class +for it.

+
inherit
+

you can optionally inherit from a dataset when creating a +new dataset.

+
...
+

public methods for the dataset class

+
private
+

passed to R6::R6Class().

+
active
+

passed to R6::R6Class().

+
parent_env
+

An environment to use as the parent of newly-created +objects.

+
+
+

Note

+

dataloader() by default constructs a index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.

-

Get a batch of observations

- +
+
+

Get a batch of observations

@@ -256,32 +167,29 @@ of observations (eg, subsetting a tensor by multiple indexes at once is faster t subsetting once for each index), in this case you can implement a .getbatch method that will be used instead of .getitem when getting a batch of observations within the dataloader.
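A hedged sketch of a dataset implementing .getbatch, so a tensor is subset once per batch rather than once per observation (assuming torch is installed; the dataset name is illustrative):

```r
library(torch)

tensor_ds <- dataset(
  name = "tensor_dataset",
  initialize = function(x) self$x <- x,
  # .getbatch receives a vector of indices: one subsetting call per batch,
  # used by the dataloader instead of repeated .getitem calls.
  .getbatch = function(idx) self$x[idx, , drop = FALSE],
  .length = function() self$x$shape[1]
)

ds <- tensor_ds(torch_randn(1000, 5))
```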

+
+
-
- +
- - + + diff --git a/dev/reference/dataset_subset.html b/dev/reference/dataset_subset.html index 0a12fbb296a02c0d3a2892fe32626feaf674f0da..7002c7ce5cf74977f4562969e7b2cc543bb4d4f7 100644 --- a/dev/reference/dataset_subset.html +++ b/dev/reference/dataset_subset.html @@ -1,79 +1,18 @@ - - - - - - - -Dataset Subset — dataset_subset • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dataset Subset — dataset_subset • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

Subset of a dataset at specified indices.

-
dataset_subset(dataset, indices)
- -

Arguments

- - - - - - - - - - -
dataset

(Dataset): The whole Dataset

indices

(sequence): Indices in the whole set selected for subset

+
+
dataset_subset(dataset, indices)
+
+
+

Arguments

+
dataset
+

(Dataset): The whole Dataset

+
indices
+

(sequence): Indices in the whole set selected for subset

+
+
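A short usage sketch for `dataset_subset()` (assuming the torch R package is installed; the example data is made up):

```r
library(torch)

# wrap a tensor in a dataset, then keep only the first five observations
ds  <- tensor_dataset(torch_randn(10, 3))
sub <- dataset_subset(ds, indices = 1:5)
length(sub)
```

`length(sub)` reflects the number of selected indices rather than the size of the original dataset.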
-
- +
- - + + diff --git a/dev/reference/default_dtype.html b/dev/reference/default_dtype.html index 499dbef72220595bb0323848a2cc78626857af8e..66fe2b6ed71282a0d549d099374edbf2448f1338 100644 --- a/dev/reference/default_dtype.html +++ b/dev/reference/default_dtype.html @@ -1,79 +1,18 @@ - - - - - - - -Gets and sets the default floating point dtype. — torch_set_default_dtype • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gets and sets the default floating point dtype. — torch_set_default_dtype • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,46 +111,40 @@

Gets and sets the default floating point dtype.

-
torch_set_default_dtype(d)
+    
+
torch_set_default_dtype(d)
 
-torch_get_default_dtype()
- -

Arguments

- - - - - - -
d

The default floating point dtype to set. Initially set to -torch_float().

+torch_get_default_dtype()
+
+
+

Arguments

+
d
+

The default floating point dtype to set. Initially set to +torch_float().

+
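A minimal sketch of getting and setting the default dtype (assuming the torch R package is installed):

```r
library(torch)

torch_get_default_dtype()                # initially torch_float()
torch_set_default_dtype(torch_float64())
torch_randn(2)$dtype                     # new tensors now default to float64
torch_set_default_dtype(torch_float())   # restore the initial default
```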
+ -
- +
- - + + diff --git a/dev/reference/dependent.html b/dev/reference/dependent.html deleted file mode 100644 index 65114239494ffe0209e4134c01bdb37fbe35fab9..0000000000000000000000000000000000000000 --- a/dev/reference/dependent.html +++ /dev/null @@ -1,277 +0,0 @@ - - - - - - - - -Public interface -TODO: check .GreaterThan and other classes, -which are not instanced — dependent • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Public interface -TODO: check .GreaterThan and other classes, -which are not instantiated

-
- -
dependent
- - -

Format

- -

An object of class torch_Dependent (inherits from torch_Constraint, R6) of length 4.

- -
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/distr_bernoulli.html b/dev/reference/distr_bernoulli.html index 59b0e4b2469538c8a1135a88ea6b7b327a2ed5d7..bcc26dbf78c42fadaef2c86b3f7c51709412b83d 100644 --- a/dev/reference/distr_bernoulli.html +++ b/dev/reference/distr_bernoulli.html @@ -1,88 +1,27 @@ - - - - - - - -Creates a Bernoulli distribution parameterized by probs +<!-- Generated by pkgdown: do not edit by hand --><html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width, initial-scale=1.0"><title>Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p -and 0 with probability 1 - p. — distr_bernoulli • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -204,69 +126,62 @@ Samples are binary (0 or 1). They take the value 1 with probability and 0 with probability 1 - p.

-
distr_bernoulli(probs = NULL, logits = NULL, validate_args = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
probs

(numeric or torch_tensor): the probability of sampling 1

logits

(numeric or torch_tensor): the log-odds of sampling 1

validate_args

whether to validate arguments or not.

- -

See also

+
+
distr_bernoulli(probs = NULL, logits = NULL, validate_args = NULL)
+
-

Distribution for details on the available methods.

+
+

Arguments

+
probs
+

(numeric or torch_tensor): the probability of sampling 1

+
logits
+

(numeric or torch_tensor): the log-odds of sampling 1

+
validate_args
+

whether to validate arguments or not.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_chi2(), -distr_gamma(), -distr_multivariate_normal(), -distr_normal(), -distr_poisson()

+distr_chi2(), +distr_gamma(), +distr_multivariate_normal(), +distr_normal(), +distr_poisson()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_bernoulli(0.3)
-m$sample()  # 30% chance 1; 70% chance 0
-}
-#> torch_tensor
-#>  0
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_bernoulli(0.3)
+m$sample()  # 30% chance 1; 70% chance 0
+}
+#> torch_tensor
+#>  0
+#> [ CPUFloatType{1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/distr_categorical.html b/dev/reference/distr_categorical.html index efdb8a565a475ec70e2a8493a549b19076bcf37e..a945f39fb57dba0c75ccf93b428fe151f33e5bdd 100644 --- a/dev/reference/distr_categorical.html +++ b/dev/reference/distr_categorical.html @@ -1,82 +1,21 @@ - - - - - - - -Creates a categorical distribution parameterized by either probs or -logits (but not both). — distr_categorical • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a categorical distribution parameterized by either probs or +logits (but not both). — distr_categorical • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,28 +116,22 @@ logits (but not both)." /> logits (but not both).

-
distr_categorical(probs = NULL, logits = NULL, validate_args = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
probs

(Tensor): event probabilities

logits

(Tensor): event log probabilities (unnormalized)

validate_args

Additional arguments

- -

Note

+
+
distr_categorical(probs = NULL, logits = NULL, validate_args = NULL)
+
-

It is equivalent to the distribution that torch_multinomial() +

+

Arguments

+
probs
+

(Tensor): event probabilities

+
logits
+

(Tensor): event log probabilities (unnormalized)

+
validate_args
+

Additional arguments

+
+
+

Note

+

It is equivalent to the distribution that torch_multinomial() samples from.

Samples are integers from \(\{0, \ldots, K-1\}\) where K is probs$size(-1).

If probs is 1-dimensional with length-K, each element is the relative probability @@ -229,43 +145,42 @@ The logits argument will be interpreted as unnormalized log probabi and can therefore be any real number. It will likewise be normalized so that the resulting probabilities sum to 1 along the last dimension. attr:logits will return this normalized value.

-

See also: torch_multinomial()

+

See also: torch_multinomial()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_categorical(torch_tensor(c(0.25, 0.25, 0.25, 0.25)))
-m$sample()  # equal probability of 1,2,3,4
-
-}
-#> torch_tensor
-#> 3
-#> [ CPULongType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_categorical(torch_tensor(c(0.25, 0.25, 0.25, 0.25)))
+m$sample()  # equal probability of 1,2,3,4
+
+}
+#> torch_tensor
+#> 4
+#> [ CPULongType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/distr_chi2.html b/dev/reference/distr_chi2.html index 358357c31841e14864c00e106166e11890a24215..b1b23fc2fde722c371f7671071fffc4b75ecd504 100644 --- a/dev/reference/distr_chi2.html +++ b/dev/reference/distr_chi2.html @@ -1,82 +1,21 @@ - - - - - - - -Creates a Chi2 distribution parameterized by shape parameter df. -This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5) — distr_chi2 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a Chi2 distribution parameterized by shape parameter df. +This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5) — distr_chi2 • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,67 +116,62 @@ This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5)distr_gamma(alpha=0.5*df, beta=0.5)

-
distr_chi2(df, validate_args = NULL)
- -

Arguments

- - - - - - - - - - -
df

(float or torch_tensor): shape parameter of the distribution

validate_args

whether to validate arguments or not.

- -

See also

+
+
distr_chi2(df, validate_args = NULL)
+
-

Distribution for details on the available methods.

+
+

Arguments

+
df
+

(float or torch_tensor): shape parameter of the distribution

+
validate_args
+

whether to validate arguments or not.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_bernoulli(), -distr_gamma(), -distr_multivariate_normal(), -distr_normal(), -distr_poisson()

+distr_bernoulli(), +distr_gamma(), +distr_multivariate_normal(), +distr_normal(), +distr_poisson()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_chi2(torch_tensor(1.0))
-m$sample()  # Chi2 distributed with shape df=1
-torch_tensor(0.1046)
-
-}
-#> torch_tensor
-#>  0.1046
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_chi2(torch_tensor(1.0))
+m$sample()  # Chi2 distributed with shape df=1
+torch_tensor(0.1046)
+
+}
+#> torch_tensor
+#>  0.1046
+#> [ CPUFloatType{1} ]
+
+
+
- - - + + diff --git a/dev/reference/distr_gamma.html b/dev/reference/distr_gamma.html index eedc3afb76a6f4170064479f29f791e6f3beab9a..ab9bd31e153e7cf335a47a23512c204d43f8ffc0 100644 --- a/dev/reference/distr_gamma.html +++ b/dev/reference/distr_gamma.html @@ -1,79 +1,18 @@ - - - - - - - -Creates a Gamma distribution parameterized by shape concentration and rate. — distr_gamma • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a Gamma distribution parameterized by shape concentration and rate. — distr_gamma • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,71 +111,64 @@

Creates a Gamma distribution parameterized by shape concentration and rate.

-
distr_gamma(concentration, rate, validate_args = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
concentration

(float or Tensor): shape parameter of the distribution -(often referred to as alpha)

rate

(float or Tensor): rate = 1 / scale of the distribution -(often referred to as beta)

validate_args

whether to validate arguments or not.

- -

See also

+
+
distr_gamma(concentration, rate, validate_args = NULL)
+
-

Distribution for details on the available methods.

+
+

Arguments

+
concentration
+

(float or Tensor): shape parameter of the distribution +(often referred to as alpha)

+
rate
+

(float or Tensor): rate = 1 / scale of the distribution +(often referred to as beta)

+
validate_args
+

whether to validate arguments or not.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_bernoulli(), -distr_chi2(), -distr_multivariate_normal(), -distr_normal(), -distr_poisson()

+distr_bernoulli(), +distr_chi2(), +distr_multivariate_normal(), +distr_normal(), +distr_poisson()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_gamma(torch_tensor(1.0), torch_tensor(1.0))
-m$sample()  # Gamma distributed with concentration=1 and rate=1
-}
-#> torch_tensor
-#>  3.5967
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_gamma(torch_tensor(1.0), torch_tensor(1.0))
+m$sample()  # Gamma distributed with concentration=1 and rate=1
+}
+#> torch_tensor
+#>  0.5747
+#> [ CPUFloatType{1} ]
+
+
+
- - - + + diff --git a/dev/reference/distr_mixture_same_family.html b/dev/reference/distr_mixture_same_family.html index 9097570d6d8ce75d2b43566b51073e314119e3bf..ee34d4ec8db4ba24a29d945a7d265e0a7520f3cc 100644 --- a/dev/reference/distr_mixture_same_family.html +++ b/dev/reference/distr_mixture_same_family.html @@ -1,84 +1,23 @@ - - - - - - - -Mixture of components in the same family — distr_mixture_same_family • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mixture of components in the same family — distr_mixture_same_family • torch - - - - - - - - + + -
-
- -
- -
+
@@ -199,71 +121,63 @@ distribution, i.e., a Distribution with a rightmost batch shape (equal to [k]) which indexes each (batch of) component.

-
distr_mixture_same_family(
-  mixture_distribution,
-  component_distribution,
-  validate_args = NULL
-)
+
+
distr_mixture_same_family(
+  mixture_distribution,
+  component_distribution,
+  validate_args = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - -
mixture_distribution

torch_distributions.Categorical-like +

+

Arguments

+
mixture_distribution
+

torch_distributions.Categorical-like instance. Manages the probability of selecting component. The number of categories must match the rightmost batch dimension of the component_distribution. Must have either scalar batch_shape or batch_shape matching -component_distribution.batch_shape[:-1]

component_distribution

torch_distributions.Distribution-like -instance. Right-most batch dimension indexes component.

validate_args

Additional arguments

- - -

Examples

-
if (torch_is_installed()) {
-# Construct Gaussian Mixture Model in 1D consisting of 5 equally
-# weighted normal distributions
-mix <- distr_categorical(torch_ones(5))
-comp <- distr_normal(torch_randn(5), torch_rand(5))
-gmm <- distr_mixture_same_family(mix, comp)
-
-}
-
+component_distribution.batch_shape[:-1]

+
component_distribution
+

torch_distributions.Distribution-like +instance. Right-most batch dimension indexes component.

+
validate_args
+

Additional arguments

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# Construct Gaussian Mixture Model in 1D consisting of 5 equally
+# weighted normal distributions
+mix <- distr_categorical(torch_ones(5))
+comp <- distr_normal(torch_randn(5), torch_rand(5))
+gmm <- distr_mixture_same_family(mix, comp)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/distr_multivariate_normal.html b/dev/reference/distr_multivariate_normal.html index e0755d785c55e3a23c71bb7053458e0d5d5e191c..5a208212027906ca2c2c8e24bd8cdf76ec51e8b0 100644 --- a/dev/reference/distr_multivariate_normal.html +++ b/dev/reference/distr_multivariate_normal.html @@ -1,80 +1,19 @@ - - - - - - - -Gaussian distribution — distr_multivariate_normal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gaussian distribution — distr_multivariate_normal • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,41 +113,31 @@ parameterized by a mean vector and a covariance matrix." /> parameterized by a mean vector and a covariance matrix.

-
distr_multivariate_normal(
-  loc,
-  covariance_matrix = NULL,
-  precision_matrix = NULL,
-  scale_tril = NULL,
-  validate_args = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
loc

(Tensor): mean of the distribution

covariance_matrix

(Tensor): positive-definite covariance matrix

precision_matrix

(Tensor): positive-definite precision matrix

scale_tril

(Tensor): lower-triangular factor of covariance, with positive-valued diagonal

validate_args

Bool, whether to validate the arguments or not.

- -

Details

+
+
distr_multivariate_normal(
+  loc,
+  covariance_matrix = NULL,
+  precision_matrix = NULL,
+  scale_tril = NULL,
+  validate_args = NULL
+)
+
+
+

Arguments

+
loc
+

(Tensor): mean of the distribution

+
covariance_matrix
+

(Tensor): positive-definite covariance matrix

+
precision_matrix
+

(Tensor): positive-definite precision matrix

+
scale_tril
+

(Tensor): lower-triangular factor of covariance, with positive-valued diagonal

+
validate_args
+

Bool, whether to validate the arguments or not.

+
+
+

Details

The multivariate normal distribution can be parameterized either in terms of a positive definite covariance matrix \(\mathbf{\Sigma}\) or a positive definite precision matrix \(\mathbf{\Sigma}^{-1}\) @@ -233,62 +145,63 @@ or a lower-triangular matrix \(\mathbf{L}\) with positive-valued diagonal entries, such that \(\mathbf{\Sigma} = \mathbf{L}\mathbf{L}^\top\). This triangular matrix can be obtained via e.g. Cholesky decomposition of the covariance.

-

Note

- +
+
+

Note

Only one of covariance_matrix or precision_matrix or scale_tril can be specified. Using scale_tril will be more efficient: all computations internally are based on scale_tril. If covariance_matrix or precision_matrix is passed instead, it is only used to compute the corresponding lower triangular matrices using a Cholesky decomposition.

-

See also

- -

Distribution for details on the available methods.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_bernoulli(), -distr_chi2(), -distr_gamma(), -distr_normal(), -distr_poisson()

+distr_bernoulli(), +distr_chi2(), +distr_gamma(), +distr_normal(), +distr_poisson()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_multivariate_normal(torch_zeros(2), torch_eye(2))
-m$sample()  # normally distributed with mean=`[0,0]` and covariance_matrix=`I`
-
-
-
-}
-#> torch_tensor
-#> -0.5606
-#> -1.9732
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_multivariate_normal(torch_zeros(2), torch_eye(2))
+m$sample()  # normally distributed with mean=`[0,0]` and covariance_matrix=`I`
+
+
+
+}
+#> torch_tensor
+#>  0.7704
+#> -0.2949
+#> [ CPUFloatType{2} ]
+
+
+
- - - + + diff --git a/dev/reference/distr_normal.html b/dev/reference/distr_normal.html index 99a44e639e1496c0f3d2b5ec17a96bea0719c4c8..a0f8549e4594db369bcc95f4a1ef7b42e687a5df 100644 --- a/dev/reference/distr_normal.html +++ b/dev/reference/distr_normal.html @@ -1,82 +1,21 @@ - - - - - - - -Creates a normal (also called Gaussian) distribution parameterized by -loc and scale. — distr_normal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a normal (also called Gaussian) distribution parameterized by +loc and scale. — distr_normal • torch - - - - - + + - - - -
-
- -
- -
+
@@ -194,75 +116,68 @@ loc and scale." /> loc and scale.

-
distr_normal(loc, scale, validate_args = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
loc

(float or Tensor): mean of the distribution (often referred to as mu)

scale

(float or Tensor): standard deviation of the distribution (often referred to as sigma)

validate_args

Additional arguments

- -

Value

+
+
distr_normal(loc, scale, validate_args = NULL)
+
+
+

Arguments

+
loc
+

(float or Tensor): mean of the distribution (often referred to as mu)

+
scale
+

(float or Tensor): standard deviation of the distribution (often referred to as sigma)

+
validate_args
+

Additional arguments

+
+
+

Value

Object of torch_Normal class

-

See also

- -

Distribution for details on the available methods.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_bernoulli(), -distr_chi2(), -distr_gamma(), -distr_multivariate_normal(), -distr_poisson()

+distr_bernoulli(), +distr_chi2(), +distr_gamma(), +distr_multivariate_normal(), +distr_poisson()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_normal(loc = 0, scale = 1)
-m$sample()  # normally distributed with loc=0 and scale=1
-
-
-}
-#> torch_tensor
-#> 0.01 *
-#>  6.6750
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_normal(loc = 0, scale = 1)
+m$sample()  # normally distributed with loc=0 and scale=1
+
+
+}
+#> torch_tensor
+#> -0.7069
+#> [ CPUFloatType{1} ]
+
+
+
- - - + + diff --git a/dev/reference/distr_poisson.html b/dev/reference/distr_poisson.html index ff3377d08c426a22cceec32aa62258f1303e2b4d..ba21e38aeb7f291529fe989e50360657e9a983f3 100644 --- a/dev/reference/distr_poisson.html +++ b/dev/reference/distr_poisson.html @@ -1,82 +1,21 @@ - - - - - - - -Creates a Poisson distribution parameterized by rate, the rate parameter. — distr_poisson • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a Poisson distribution parameterized by rate, the rate parameter. — distr_poisson • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,65 +117,60 @@ $$ $$

-
distr_poisson(rate, validate_args = NULL)
- -

Arguments

- - - - - - - - - - -
rate

(numeric, torch_tensor): the rate parameter

validate_args

whether to validate arguments or not.

- -

See also

+
+
distr_poisson(rate, validate_args = NULL)
+
-

Distribution for details on the available methods.

+
+

Arguments

+
rate
+

(numeric, torch_tensor): the rate parameter

+
validate_args
+

whether to validate arguments or not.

+
+
+

See also

+

Distribution for details on the available methods.

Other distributions: -distr_bernoulli(), -distr_chi2(), -distr_gamma(), -distr_multivariate_normal(), -distr_normal()

+distr_bernoulli(), +distr_chi2(), +distr_gamma(), +distr_multivariate_normal(), +distr_normal()

+
-

Examples

-
if (torch_is_installed()) {
-m <- distr_poisson(torch_tensor(4))
-m$sample()
-}
-#> torch_tensor
-#>  5
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- distr_poisson(torch_tensor(4))
+m$sample()
+}
+#> torch_tensor
+#>  3
+#> [ CPUFloatType{1} ]
+
+
+
- - - + + diff --git a/dev/reference/enumerate.dataloader.html b/dev/reference/enumerate.dataloader.html index f45e028a8522093b3a49cd51988041cada56c2fd..33b201a48caf69df47fdc9051089e9180d284470 100644 --- a/dev/reference/enumerate.dataloader.html +++ b/dev/reference/enumerate.dataloader.html @@ -1,79 +1,18 @@ - - - - - - - -Enumerate an iterator — enumerate.dataloader • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Enumerate an iterator — enumerate.dataloader • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,42 @@

Enumerate an iterator

-
# S3 method for dataloader
-enumerate(x, max_len = 1e+06, ...)
- -

Arguments

- - - - - - - - - - - - - - -
x

the generator to enumerate.

max_len

maximum number of iterations.

...

passed to specific methods.

+
+
# S3 method for dataloader
+enumerate(x, max_len = 1e+06, ...)
+
+
+

Arguments

+
x
+

the generator to enumerate.

+
max_len
+

maximum number of iterations.

+
...
+

passed to specific methods.

+
+
- - - + + diff --git a/dev/reference/enumerate.html b/dev/reference/enumerate.html index 3ed1815481b4a1c7687125a6fc281b56763d193d..604b0ce3daf96f2e1033fc277e01959912c50fe6 100644 --- a/dev/reference/enumerate.html +++ b/dev/reference/enumerate.html @@ -1,79 +1,18 @@ - - - - - - - -Enumerate an iterator — enumerate • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Enumerate an iterator — enumerate • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

Enumerate an iterator

-
enumerate(x, ...)
- -

Arguments

- - - - - - - - - - -
x

the generator to enumerate.

...

passed to specific methods.

+
+
enumerate(x, ...)
+
+
+

Arguments

+
x
+

the generator to enumerate.

+
...
+

passed to specific methods.

+
+
- - - + + diff --git a/dev/reference/figures/torch-full.png b/dev/reference/figures/torch-full.png deleted file mode 100644 index 61d24b86074b110f4cf3298f417c4148938c8f05..0000000000000000000000000000000000000000 Binary files a/dev/reference/figures/torch-full.png and /dev/null differ diff --git a/dev/reference/get_install_libs_url.html b/dev/reference/get_install_libs_url.html index a1d8ddffeab45611d65acc17be30715b72bcb9c0..1c906cce1ea9079721f781858b9433392cc0ae2d 100644 --- a/dev/reference/get_install_libs_url.html +++ b/dev/reference/get_install_libs_url.html @@ -1,79 +1,18 @@ - - - - - - - -List of files to download — get_install_libs_url • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -List of files to download — get_install_libs_url • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

List the Torch and Lantern files to download as local files in order to proceed with install_torch_from_file().

-
get_install_libs_url(version = "1.9.1", type = install_type(version = version))
- -

Arguments

- - - - - - - - - - -
version

The Torch version to install.

type

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

+
+
get_install_libs_url(version = "1.9.1", type = install_type(version = version))
+
+
+

Arguments

+
version
+

The Torch version to install.

+
type
+

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

+
+
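A usage sketch based on the signature shown above (assuming the torch R package is installed; the returned URLs depend on the requested version and type):

```r
library(torch)

# list the Torch and Lantern download URLs for a CPU-only build,
# for later use with install_torch_from_file()
get_install_libs_url(version = "1.9.1", type = "cpu")
```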
- - - + + diff --git a/dev/reference/index.html b/dev/reference/index.html index 8b0c7052c27a4b73489b48ea3cb2a7411b8ba251..a664efd79292c8bf5241f3dd20f6194a1fe7139f 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,78 +1,18 @@ - - - - - - - -Function reference • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Function reference • torch - - - - - - + + - - -
-
- -
- -
+
- - - - - - - - - - -
-

Tensor creation utilities

+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - + + + + + + + + + + + + + + - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+

Tensor creation utilities

+

torch_empty()

Empty

+

torch_arange()

Arange

+

torch_eye()

Eye

+

torch_full()

Full

+

torch_linspace()

Linspace

+

torch_logspace()

Logspace

+

torch_ones()

Ones

+

torch_rand()

Rand

+

torch_randint()

Randint

+

torch_randn()

Randn

+

torch_randperm()

Randperm

+

torch_zeros()

Zeros

+

torch_empty_like()

Empty_like

+

torch_full_like()

Full_like

+

torch_ones_like()

Ones_like

+

torch_rand_like()

Rand_like

+

torch_randint_like()

Randint_like

+

torch_randn_like()

Randn_like

+

torch_zeros_like()

Zeros_like

+

as_array()

Converts to array

-

Tensor attributes

+
+

Tensor attributes

+

torch_set_default_dtype() torch_get_default_dtype()

Gets and sets the default floating point dtype.

+

is_torch_device()

Checks if object is a device

+

is_torch_dtype()

Check if object is a torch data type

+

torch_float32() torch_float() torch_float64() torch_double() torch_float16() torch_half() torch_uint8() torch_int8() torch_int16() torch_short() torch_int32() torch_int() torch_int64() torch_long() torch_bool() torch_quint8() torch_qint8() torch_qint32()

Torch data types

+

torch_finfo()

Floating point type info

+

torch_iinfo()

Integer type info

+

torch_per_channel_affine() torch_per_tensor_affine() torch_per_channel_symmetric() torch_per_tensor_symmetric()

Creates the corresponding Scheme object

+

torch_reduction_sum() torch_reduction_mean() torch_reduction_none()

Creates the reduction object

+

is_torch_layout()

Check if an object is a torch layout.

+

is_torch_memory_format()

Check if an object is a memory format

+

is_torch_qscheme()

Checks if an object is a QScheme

+

is_undefined_tensor()

Checks if a tensor is undefined

-

Serialization

+
+

Serialization

+

load_state_dict()

Load a state dict file

+

torch_load()

Loads a saved object

+

torch_save()

Saves an object to a disk file.
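These functions cover the serialization round trip: `torch_save()` writes a tensor or module to disk and `torch_load()` reads it back, while `load_state_dict()` handles state-dict files. A minimal sketch, assuming the torch package is installed:

```r
library(torch)

x <- torch_randn(3)
path <- tempfile(fileext = ".pt")

torch_save(x, path)    # serialize the tensor to disk
y <- torch_load(path)  # read it back; values are preserved

torch_allclose(x, y)
```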

-

Mathematical operations on tensors

+
+

Mathematical operations on tensors

+
+

torch_set_default_dtype() torch_get_default_dtype()

+

Gets and sets the default floating point dtype.

torch_set_num_threads() torch_set_num_interop_threads() torch_get_num_interop_threads() torch_get_num_threads()

Number of threads

+

torch_abs()

Abs

+

torch_absolute()

Absolute

+

torch_acos()

Acos

+

torch_acosh()

Acosh

+

torch_adaptive_avg_pool1d()

Adaptive_avg_pool1d

+

torch_add()

Add

+

torch_addbmm()

Addbmm

+

torch_addcdiv()

Addcdiv

+

torch_addcmul()

Addcmul

+

torch_addmm()

Addmm

+

torch_addmv()

Addmv

+

torch_addr()

Addr

+

torch_allclose()

Allclose

+

torch_amax()

Amax

+

torch_amin()

Amin

+

torch_angle()

Angle

+
+

torch_arange()

+

Arange

torch_arccos()

Arccos

+

torch_arccosh()

Arccosh

+

torch_arcsin()

Arcsin

+

torch_arcsinh()

Arcsinh

+

torch_arctan()

Arctan

+

torch_arctanh()

Arctanh

+

torch_argmax

Argmax

+

torch_argmin

Argmin

+

torch_argsort()

Argsort

+

torch_as_strided()

As_strided

+

torch_asin()

Asin

+

torch_asinh()

Asinh

+

torch_atan()

Atan

+

torch_atan2()

Atan2

+

torch_atanh()

Atanh

+

torch_atleast_1d()

Atleast_1d

+

torch_atleast_2d()

Atleast_2d

+

torch_atleast_3d()

Atleast_3d

+

torch_avg_pool1d()

Avg_pool1d

+

torch_baddbmm()

Baddbmm

+

torch_bartlett_window()

Bartlett_window

+

torch_bernoulli()

Bernoulli

+

torch_bincount()

Bincount

+

torch_bitwise_and()

Bitwise_and

+

torch_bitwise_not()

Bitwise_not

+

torch_bitwise_or()

Bitwise_or

+

torch_bitwise_xor()

Bitwise_xor

+

torch_blackman_window()

Blackman_window

+

torch_block_diag()

Block_diag

+

torch_bmm()

Bmm

+

torch_broadcast_tensors()

Broadcast_tensors

+

torch_bucketize()

Bucketize

+

torch_can_cast()

Can_cast

+

torch_cartesian_prod()

Cartesian_prod

+

torch_cat()

Cat

+

torch_cdist()

Cdist

+

torch_ceil()

Ceil

+

torch_celu()

Celu

+

torch_celu_()

Celu_

+

torch_chain_matmul()

Chain_matmul

+

torch_channel_shuffle()

Channel_shuffle

+

torch_cholesky()

Cholesky

+

torch_cholesky_inverse()

Cholesky_inverse

+

torch_cholesky_solve()

Cholesky_solve

+

torch_chunk()

Chunk

+

torch_clamp()

Clamp

+

torch_clip()

Clip

+

torch_clone()

Clone

+

torch_combinations()

Combinations

+

torch_complex()

Complex

+

torch_conj()

Conj

+

torch_conv1d()

Conv1d

+

torch_conv2d()

Conv2d

+

torch_conv3d()

Conv3d

+

torch_conv_tbc()

Conv_tbc

+

torch_conv_transpose1d()

Conv_transpose1d

+

torch_conv_transpose2d()

Conv_transpose2d

+

torch_conv_transpose3d()

Conv_transpose3d

+

torch_cos()

Cos

+

torch_cosh()

Cosh

+

torch_cosine_similarity()

Cosine_similarity

+

torch_count_nonzero()

Count_nonzero

+

torch_cross()

Cross

+

torch_cummax()

Cummax

+

torch_cummin()

Cummin

+

torch_cumprod()

Cumprod

+

torch_cumsum()

Cumsum

+

torch_deg2rad()

Deg2rad

+

torch_dequantize()

Dequantize

+

torch_det()

Det

+

torch_device()

Create a Device object

+

torch_diag()

Diag

+

torch_diag_embed()

Diag_embed

+

torch_diagflat()

Diagflat

+

torch_diagonal()

Diagonal

+

torch_diff()

Computes the n-th forward difference along the given dimension.

+

torch_digamma()

Digamma

+

torch_dist()

Dist

+

torch_div()

Div

+

torch_divide()

Divide

+

torch_dot()

Dot

+

torch_dstack()

Dstack

+
+

torch_float32() torch_float() torch_float64() torch_double() torch_float16() torch_half() torch_uint8() torch_int8() torch_int16() torch_short() torch_int32() torch_int() torch_int64() torch_long() torch_bool() torch_quint8() torch_qint8() torch_qint32()

+

Torch data types

torch_eig()

Eig

+

torch_einsum()

Einsum

+
+

torch_empty()

+

Empty

+

torch_empty_like()

+

Empty_like

torch_empty_strided()

Empty_strided

+

torch_eq()

Eq

+

torch_equal()

Equal

+

torch_erf()

Erf

+

torch_erfc()

Erfc

+

torch_erfinv()

Erfinv

+

torch_exp()

Exp

+

torch_exp2()

Exp2

+

torch_expm1()

Expm1

+
+

torch_eye()

+

Eye

torch_fft_fft()

Fft

+

torch_fft_ifft()

Ifft

+

torch_fft_irfft()

Irfft

+

torch_fft_rfft()

Rfft

+
+

torch_finfo()

+

Floating point type info

torch_fix()

Fix

+

torch_flatten()

Flatten

+

torch_flip()

Flip

+

torch_fliplr()

Fliplr

+

torch_flipud()

Flipud

+

torch_floor()

Floor

+

torch_floor_divide()

Floor_divide

+

torch_fmod()

Fmod

+

torch_frac()

Frac

+
+

torch_full()

+

Full

+

torch_full_like()

+

Full_like

torch_gather()

Gather

+

torch_gcd()

Gcd

+

torch_ge()

Ge

+

torch_generator()

Create a Generator object

+

torch_geqrf()

Geqrf

+

torch_ger()

Ger

+

torch_greater()

Greater

+

torch_greater_equal()

Greater_equal

+

torch_gt()

Gt

+

torch_hamming_window()

Hamming_window

+

torch_hann_window()

Hann_window

+

torch_heaviside()

Heaviside

+

torch_histc()

Histc

+

torch_hstack()

Hstack

+

torch_hypot()

Hypot

+

torch_i0()

I0

+
+

torch_iinfo()

+

Integer type info

torch_imag()

Imag

+

torch_index()

Index torch tensors

+

torch_index_put()

Modify values selected by indices.

+

torch_index_put_()

In-place version of torch_index_put.

+

torch_index_select()

Index_select

+

torch_inverse()

Inverse

+

torch_is_complex()

Is_complex

+

torch_is_floating_point()

Is_floating_point

+

torch_is_installed()

Verifies if torch is installed

+

torch_is_nonzero()

Is_nonzero

+

torch_isclose()

Isclose

+

torch_isfinite()

Isfinite

+

torch_isinf()

Isinf

+

torch_isnan()

Isnan

+

torch_isneginf()

Isneginf

+

torch_isposinf()

Isposinf

+

torch_isreal()

Isreal

+

torch_istft()

Istft

+

torch_kaiser_window()

Kaiser_window

+

torch_kthvalue()

Kthvalue

+

torch_strided() torch_sparse_coo()

Creates the corresponding layout

+

torch_lcm()

Lcm

+

torch_le()

Le

+

torch_lerp()

Lerp

+

torch_less()

Less

+

torch_less_equal()

Less_equal

+

torch_lgamma()

Lgamma

+
+

torch_linspace()

+

Linspace

+

torch_load()

+

Loads a saved object

torch_log()

Log

+

torch_log10()

Log10

+

torch_log1p()

Log1p

+

torch_log2()

Log2

+

torch_logaddexp()

Logaddexp

+

torch_logaddexp2()

Logaddexp2

+

torch_logcumsumexp()

Logcumsumexp

+

torch_logdet()

Logdet

+

torch_logical_and()

Logical_and

+

torch_logical_not

Logical_not

+

torch_logical_or()

Logical_or

+

torch_logical_xor()

Logical_xor

+

torch_logit()

Logit

+
+

torch_logspace()

+

Logspace

torch_logsumexp()

Logsumexp

+

torch_lstsq()

Lstsq

+

torch_lt()

Lt

+

torch_lu()

LU

+

torch_lu_solve()

Lu_solve

+

torch_manual_seed()

Sets the seed for generating random numbers.

+

torch_masked_select()

Masked_select

+

torch_matmul()

Matmul

+

torch_matrix_exp()

Matrix_exp

+

torch_matrix_power()

Matrix_power

+

torch_matrix_rank()

Matrix_rank

+

torch_max

Max

+

torch_maximum()

Maximum

+

torch_mean()

Mean

+

torch_median()

Median

+

torch_contiguous_format() torch_preserve_format() torch_channels_last_format()

Memory format

+

torch_meshgrid()

Meshgrid

+

torch_min

Min

+

torch_minimum()

Minimum

+

torch_mm()

Mm

+

torch_mode()

Mode

+

torch_movedim()

Movedim

+

torch_mul()

Mul

+

torch_multinomial()

Multinomial

+

torch_multiply()

Multiply

+

torch_mv()

Mv

+

torch_mvlgamma()

Mvlgamma

+

torch_nanquantile()

Nanquantile

+

torch_nansum()

Nansum

+

torch_narrow()

Narrow

+

torch_ne()

Ne

+

torch_neg()

Neg

+

torch_negative()

Negative

+

torch_nextafter()

Nextafter

+

torch_nonzero()

Nonzero

+

torch_norm()

Norm

+

torch_normal()

Normal

+

torch_not_equal()

Not_equal

+
+

torch_ones()

+

Ones

+

torch_ones_like()

+

Ones_like

torch_orgqr()

Orgqr

+

torch_ormqr()

Ormqr

+

torch_outer()

Outer

+

torch_pdist()

Pdist

+

torch_pinverse()

Pinverse

+

torch_pixel_shuffle()

Pixel_shuffle

+

torch_poisson()

Poisson

+

torch_polar()

Polar

+

torch_polygamma()

Polygamma

+

torch_pow()

Pow

+

torch_prod()

Prod

+

torch_promote_types()

Promote_types

+

torch_qr()

Qr

+
+

torch_per_channel_affine() torch_per_tensor_affine() torch_per_channel_symmetric() torch_per_tensor_symmetric()

+

Creates the corresponding Scheme object

torch_quantile()

Quantile

+

torch_quantize_per_channel()

Quantize_per_channel

+

torch_quantize_per_tensor()

Quantize_per_tensor

+

torch_rad2deg()

Rad2deg

+
+

torch_rand()

+

Rand

+

torch_rand_like()

+

Rand_like

+

torch_randint()

+

Randint

+

torch_randint_like()

+

Randint_like

+

torch_randn()

+

Randn

+

torch_randn_like()

+

Randn_like

+

torch_randperm()

+

Randperm

torch_range()

Range

+

torch_real()

Real

+

torch_reciprocal()

Reciprocal

+
+

torch_reduction_sum() torch_reduction_mean() torch_reduction_none()

+

Creates the reduction object

torch_relu()

Relu

+

torch_relu_()

Relu_

+

torch_remainder()

Remainder

+

torch_renorm()

Renorm

+

torch_repeat_interleave()

Repeat_interleave

+

torch_reshape()

Reshape

+

torch_result_type()

Result_type

+

torch_roll()

Roll

+

torch_rot90()

Rot90

+

torch_round()

Round

+

torch_rrelu_()

Rrelu_

+

torch_rsqrt()

Rsqrt

+
+

torch_save()

+

Saves an object to a disk file.

torch_scalar_tensor()

Scalar tensor

+

torch_searchsorted()

Searchsorted

+

torch_selu()

Selu

+

torch_selu_()

Selu_

+

torch_sgn()

Sgn

+

torch_sigmoid()

Sigmoid

+

torch_sign()

Sign

+

torch_signbit()

Signbit

+

torch_sin()

Sin

+

torch_sinh()

Sinh

+

torch_slogdet()

Slogdet

+

torch_solve()

Solve

+

torch_sort()

Sort

+

torch_sparse_coo_tensor()

Sparse_coo_tensor

+

torch_split()

Split

+

torch_sqrt()

Sqrt

+

torch_square()

Square

+

torch_squeeze()

Squeeze

+

torch_stack()

Stack

+

torch_std()

Std

+

torch_std_mean()

Std_mean

+

torch_stft()

Stft

+

torch_sub()

Sub

+

torch_subtract()

Subtract

+

torch_sum()

Sum

+

torch_svd()

Svd

+

torch_symeig()

Symeig

+

torch_t()

T

+

torch_take()

Take

+

torch_tan()

Tan

+

torch_tanh()

Tanh

+

torch_tensor()

Converts R objects to a torch tensor

+

torch_tensordot()

Tensordot

+

torch_threshold_()

Threshold_

+

torch_topk()

Topk

+

torch_trace()

Trace

+

torch_transpose()

Transpose

+

torch_trapz()

Trapz

+

torch_triangular_solve()

Triangular_solve

+

torch_tril()

Tril

+

torch_tril_indices()

Tril_indices

+

torch_triu()

Triu

+

torch_triu_indices()

Triu_indices

+

torch_true_divide()

TRUE_divide

+

torch_trunc()

Trunc

+

torch_unbind()

Unbind

+

torch_unique_consecutive()

Unique_consecutive

+

torch_unsafe_chunk()

Unsafe_chunk

+

torch_unsafe_split()

Unsafe_split

+

torch_unsqueeze()

Unsqueeze

+

torch_vander()

Vander

+

torch_var()

Var

+

torch_var_mean()

Var_mean

+

torch_vdot()

Vdot

+

torch_view_as_complex()

View_as_complex

+

torch_view_as_real()

View_as_real

+

torch_vstack()

Vstack

+

torch_where()

Where

-
broadcast_all()
Given a list of values (possibly containing numbers), returns a list where each value is broadcasted based on the following rules:

-
Neural network modules

-
nn_adaptive_avg_pool1d()
Applies a 1D adaptive average pooling over an input signal composed of several input planes.

-
nn_adaptive_avg_pool2d()
Applies a 2D adaptive average pooling over an input signal composed of several input planes.

-
nn_adaptive_avg_pool3d()
Applies a 3D adaptive average pooling over an input signal composed of several input planes.

-
nn_adaptive_log_softmax_with_loss()
AdaptiveLogSoftmaxWithLoss module

-
nn_adaptive_max_pool1d()
Applies a 1D adaptive max pooling over an input signal composed of several input planes.

-
nn_adaptive_max_pool2d()
Applies a 2D adaptive max pooling over an input signal composed of several input planes.

-
nn_adaptive_max_pool3d()
Applies a 3D adaptive max pooling over an input signal composed of several input planes.

-
nn_avg_pool1d()
Applies a 1D average pooling over an input signal composed of several input planes.

-
nn_avg_pool2d()
Applies a 2D average pooling over an input signal composed of several input planes.

-
nn_avg_pool3d()
Applies a 3D average pooling over an input signal composed of several input planes.

-
nn_batch_norm1d()
BatchNorm1D module

-
nn_batch_norm2d()
BatchNorm2D

-
nn_batch_norm3d()
BatchNorm3D

-
nn_bce_loss()
Binary cross entropy loss

-
nn_bce_with_logits_loss()
BCE with logits loss

-
nn_bilinear()
Bilinear module

-
nn_buffer()
Creates a nn_buffer

-
nn_celu()
CELU module

-
nn_contrib_sparsemax()
Sparsemax activation

-
nn_conv1d()
Conv1D module

-
nn_conv2d()
Conv2D module

-
nn_conv3d()
Conv3D module

-
nn_conv_transpose1d()
ConvTranspose1D

-
nn_conv_transpose2d()
ConvTranspose2D module

-
nn_conv_transpose3d()
ConvTranspose3D module

-
nn_cosine_embedding_loss()
Cosine embedding loss

-
nn_cross_entropy_loss()
CrossEntropyLoss module

-
nn_ctc_loss()
The Connectionist Temporal Classification loss.

-
nn_dropout()
Dropout module

+
torch_zeros()
Zeros

+
torch_zeros_like()
Zeros_like

+
AutogradContext
Class representing the context.

+
Constraint
Abstract base class for constraints.

+
Distribution
Generic R6 class representing distributions

+
as_array()
Converts to array

+
autograd_backward()
Computes the sum of gradients of given tensors w.r.t. graph leaves.

+
autograd_function()
Records operation history and defines formulas for differentiating ops.

+
autograd_grad()
Computes and returns the sum of gradients of outputs w.r.t. the inputs.

+
autograd_set_grad_mode()
Set grad mode

+
backends_mkl_is_available()
MKL is available

+
backends_mkldnn_is_available()
MKLDNN is available

+
backends_openmp_is_available()
OpenMP is available

+
broadcast_all()
Given a list of values (possibly containing numbers), returns a list where each value is broadcasted based on the following rules:

+
call_torch_function()
Call a (Potentially Unexported) Torch Function

+
contrib_sort_vertices()
Contrib sort vertices

+
cuda_current_device()
Returns the index of a currently selected device.

+
cuda_device_count()
Returns the number of GPUs available.

+
cuda_get_device_capability()
Returns the major and minor CUDA capability of device

+
cuda_is_available()
Returns a bool indicating if CUDA is currently available.

+
dataloader()
Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.

+
dataloader_make_iter()
Creates an iterator from a DataLoader

+
dataloader_next()
Get the next element of a dataloader iterator

+
dataset()
Helper function to create an R6 class that inherits from the abstract Dataset class

+
dataset_subset()
Dataset Subset

+
distr_bernoulli()
Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p and 0 with probability 1 - p.

+
distr_categorical()
Creates a categorical distribution parameterized by either probs or logits (but not both).

+
distr_chi2()
Creates a Chi2 distribution parameterized by shape parameter df. This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5)

+
distr_gamma()
Creates a Gamma distribution parameterized by shape concentration and rate.

+

distr_mixture_same_family()

+

Mixture of components in the same family

+

distr_multivariate_normal()

+

Gaussian distribution

+

distr_normal()

+

Creates a normal (also called Gaussian) distribution parameterized by +loc and scale.

+

distr_poisson()

+

Creates a Poisson distribution parameterized by rate, the rate parameter.

+

enumerate()

+

Enumerate an iterator

+

enumerate(<dataloader>)

+

Enumerate an iterator

+

get_install_libs_url()

+

List of files to download

+

install_torch()

+

Install Torch

+

install_torch_from_file()

+

Install Torch from files

+

is_dataloader()

+

Checks if the object is a dataloader

+

is_nn_buffer()

+

Checks if the object is a nn_buffer

+

is_nn_module()

+

Checks if the object is an nn_module

+

is_nn_parameter()

+

Checks if an object is a nn_parameter

+

is_optimizer()

+

Checks if the object is a torch optimizer

+

is_torch_device()

+

Checks if object is a device

+

is_torch_dtype()

+

Check if object is a torch data type

+

is_torch_layout()

+

Check if an object is a torch layout.

+

is_torch_memory_format()

+

Check if an object is a memory format

+

is_torch_qscheme()

+

Checks if an object is a QScheme

+

is_undefined_tensor()

+

Checks if a tensor is undefined

+

jit_compile()

+

Compile TorchScript code into a graph

+

jit_load()

+

Loads a script_function or script_module previously saved with jit_save

+

jit_save()

+

Saves a script_function to a path

+

jit_save_for_mobile()

+

Saves a script_function or script_module in bytecode form, to be loaded on a mobile device

+

jit_scalar()

+

Adds the 'jit_scalar' class to the input

+

jit_trace()

+

Trace a function and return an executable script_function.

+

jit_trace_module()

+

Trace a module

+

jit_tuple()

+

Adds the 'jit_tuple' class to the input

+

linalg_cholesky()

+

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.

+

linalg_cholesky_ex()

+

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.

+

linalg_cond()

+

Computes the condition number of a matrix with respect to a matrix norm.

+

linalg_det()

+

Computes the determinant of a square matrix.

+

linalg_eig()

+

Computes the eigenvalue decomposition of a square matrix if it exists.

+

linalg_eigh()

+

Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix.

+

linalg_eigvals()

+

Computes the eigenvalues of a square matrix.

+

linalg_eigvalsh()

+

Computes the eigenvalues of a complex Hermitian or real symmetric matrix.

+

linalg_householder_product()

+

Computes the first n columns of a product of Householder matrices.

+

linalg_inv()

+

Computes the inverse of a square matrix if it exists.

+

linalg_inv_ex()

+

Computes the inverse of a square matrix if it is invertible.

+

linalg_lstsq()

+

Computes a solution to the least squares problem of a system of linear equations.

+

linalg_matrix_norm()

+

Computes a matrix norm.

+

linalg_matrix_power()

+

Computes the n-th power of a square matrix for an integer n.

+

linalg_matrix_rank()

+

Computes the numerical rank of a matrix.

+

linalg_multi_dot()

+

Efficiently multiplies two or more matrices

+

linalg_norm()

+

Computes a vector or matrix norm.

+

linalg_pinv()

+

Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.

+

linalg_qr()

+

Computes the QR decomposition of a matrix.

+

linalg_slogdet()

+

Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix.

+

linalg_solve()

+

Computes the solution of a square system of linear equations with a unique solution.

+

linalg_svd()

+

Computes the singular value decomposition (SVD) of a matrix.

+

linalg_svdvals()

+

Computes the singular values of a matrix.

+

linalg_tensorinv()

+

Computes the multiplicative inverse of torch_tensordot()

+

linalg_tensorsolve()

+

Computes the solution X to the system torch_tensordot(A, X) = B.

+

linalg_vector_norm()

+

Computes a vector norm.

+

load_state_dict()

+

Load a state dict file

+

lr_lambda()

+

Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.

+

lr_multiplicative()

+

Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr.

+

lr_one_cycle()

+

One-cycle learning rate

+

lr_scheduler()

+

Creates learning rate schedulers

+

lr_step()

+

Step learning rate decay

+

nn_adaptive_avg_pool1d()

+

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_avg_pool2d()

+

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_avg_pool3d()

+

Applies a 3D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_log_softmax_with_loss()

+

AdaptiveLogSoftmaxWithLoss module

+

nn_adaptive_max_pool1d()

+

Applies a 1D adaptive max pooling over an input signal composed of several input planes.

+

nn_adaptive_max_pool2d()

+

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

+

nn_adaptive_max_pool3d()

+

Applies a 3D adaptive max pooling over an input signal composed of several input planes.

+

nn_avg_pool1d()

+

Applies a 1D average pooling over an input signal composed of several input planes.

+

nn_avg_pool2d()

+

Applies a 2D average pooling over an input signal composed of several input planes.

+

nn_avg_pool3d()

+

Applies a 3D average pooling over an input signal composed of several input planes.

+

nn_batch_norm1d()

+

BatchNorm1D module

+

nn_batch_norm2d()

+

BatchNorm2D

+

nn_batch_norm3d()

+

BatchNorm3D

+

nn_bce_loss()

+

Binary cross entropy loss

+

nn_bce_with_logits_loss()

+

BCE with logits loss

+

nn_bilinear()

+

Bilinear module

+

nn_buffer()

+

Creates a nn_buffer

+

nn_celu()

+

CELU module

+

nn_contrib_sparsemax()

+

Sparsemax activation

+

nn_conv1d()

+

Conv1D module

+

nn_conv2d()

+

Conv2D module

+

nn_conv3d()

+

Conv3D module

+

nn_conv_transpose1d()

+

ConvTranspose1D

+

nn_conv_transpose2d()

+

ConvTranspose2D module

+

nn_conv_transpose3d()

+

ConvTranspose3D module

+

nn_cosine_embedding_loss()

+

Cosine embedding loss

+

nn_cross_entropy_loss()

+

CrossEntropyLoss module

+

nn_ctc_loss()

+

The Connectionist Temporal Classification loss.

+

nn_dropout()

+

Dropout module

+

nn_dropout2d()

+

Dropout2D module

+

nn_dropout3d()

+

Dropout3D module

+

nn_elu()

+

ELU module

+

nn_embedding()

+

Embedding module

+

nn_fractional_max_pool2d()

+

Applies a 2D fractional max pooling over an input signal composed of several input planes.

+

nn_fractional_max_pool3d()

+

Applies a 3D fractional max pooling over an input signal composed of several input planes.

+

nn_gelu()

+

GELU module

+

nn_glu()

+

GLU module

+

nn_group_norm()

+

Group normalization

+

nn_gru()

+

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

+

nn_hardshrink()

+

Hardshrink module

+

nn_hardsigmoid()

+

Hardsigmoid module

+

nn_hardswish()

+

Hardswish module

+

nn_hardtanh()

+

Hardtanh module

+

nn_hinge_embedding_loss()

+

Hinge embedding loss

+

nn_identity()

+

Identity module

+

nn_init_calculate_gain()

+

Calculate gain

+

nn_init_constant_()

+

Constant initialization

+

nn_init_dirac_()

+

Dirac initialization

+

nn_init_eye_()

+

Eye initialization

+

nn_init_kaiming_normal_()

+

Kaiming normal initialization

+

nn_init_kaiming_uniform_()

+

Kaiming uniform initialization

+

nn_init_normal_()

+

Normal initialization

+

nn_init_ones_()

+

Ones initialization

+

nn_init_orthogonal_()

+

Orthogonal initialization

+

nn_init_sparse_()

+

Sparse initialization

+

nn_init_trunc_normal_()

+

Truncated normal initialization

+

nn_init_uniform_()

+

Uniform initialization

+

nn_init_xavier_normal_()

+

Xavier normal initialization

+

nn_init_xavier_uniform_()

+

Xavier uniform initialization

+

nn_init_zeros_()

+

Zeros initialization

+

nn_kl_div_loss()

+

Kullback-Leibler divergence loss

+

nn_l1_loss()

+

L1 loss

+

nn_layer_norm()

+

Layer normalization

+

nn_leaky_relu()

+

LeakyReLU module

+

nn_linear()

+

Linear module

+

nn_log_sigmoid()

+

LogSigmoid module

+

nn_log_softmax()

+

LogSoftmax module

+

nn_lp_pool1d()

+

Applies a 1D power-average pooling over an input signal composed of several input planes.

+

nn_lp_pool2d()

+

Applies a 2D power-average pooling over an input signal composed of several input planes.

+

nn_lstm()

+

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

+

nn_margin_ranking_loss()

+

Margin ranking loss

+

nn_max_pool1d()

+

MaxPool1D module

+

nn_max_pool2d()

+

MaxPool2D module

+

nn_max_pool3d()

+

Applies a 3D max pooling over an input signal composed of several input planes.

+

nn_max_unpool1d()

+

Computes a partial inverse of MaxPool1d.

+

nn_max_unpool2d()

+

Computes a partial inverse of MaxPool2d.

+

nn_max_unpool3d()

+

Computes a partial inverse of MaxPool3d.

+

nn_module()

+

Base class for all neural network modules.

+

nn_module_list()

+

Holds submodules in a list.

+

nn_mse_loss()

+

MSE loss

+

nn_multi_margin_loss()

+

Multi margin loss

+

nn_multihead_attention()

+

MultiHead attention

+

nn_multilabel_margin_loss()

+

Multilabel margin loss

+

nn_multilabel_soft_margin_loss()

+

Multi label soft margin loss

+

nn_nll_loss()

+

Nll loss

+

nn_pairwise_distance()

+

Pairwise distance

+

nn_parameter()

+

Creates an nn_parameter

+

nn_poisson_nll_loss()

+

Poisson NLL loss

+

nn_prelu()

+

PReLU module

+

nn_relu()

+

ReLU module

+

nn_relu6()

+

ReLu6 module

+

nn_rnn()

+

RNN module

+

nn_rrelu()

+

RReLU module

+

nn_selu()

+

SELU module

+

nn_sequential()

+

A sequential container

+

nn_sigmoid()

+

Sigmoid module

+

nn_smooth_l1_loss()

+

Smooth L1 loss

+

nn_soft_margin_loss()

+

Soft margin loss

+

nn_softmax()

+

Softmax module

+

nn_softmax2d()

+

Softmax2d module

+

nn_softmin()

+

Softmin

+

nn_softplus()

+

Softplus module

+

nn_softshrink()

+

Softshrink module

+

nn_softsign()

+

Softsign module

+

nn_tanh()

+

Tanh module

+

nn_tanhshrink()

+

Tanhshrink module

+

nn_threshold()

+

Threshold module

+

nn_triplet_margin_loss()

+

Triplet margin loss

+

nn_triplet_margin_with_distance_loss()

+

Triplet margin with distance loss

+

nn_utils_clip_grad_norm_()

+

Clips gradient norm of an iterable of parameters.

+

nn_utils_clip_grad_value_()

+

Clips gradient of an iterable of parameters at specified value.

+

nn_utils_rnn_pack_padded_sequence()

+

Packs a Tensor containing padded sequences of variable length.

+

nn_utils_rnn_pack_sequence()

+

Packs a list of variable length Tensors

+

nn_utils_rnn_pad_packed_sequence()

+

Pads a packed batch of variable length sequences.

+

nn_utils_rnn_pad_sequence()

+

Pad a list of variable length Tensors with padding_value

+

nnf_adaptive_avg_pool1d()

+

Adaptive_avg_pool1d

+

nnf_adaptive_avg_pool2d()

+

Adaptive_avg_pool2d

+

nnf_adaptive_avg_pool3d()

+

Adaptive_avg_pool3d

+

nnf_adaptive_max_pool1d()

+

Adaptive_max_pool1d

+

nnf_adaptive_max_pool2d()

+

Adaptive_max_pool2d

+

nnf_adaptive_max_pool3d()

+

Adaptive_max_pool3d

+

nnf_affine_grid()

+

Affine_grid

+

nnf_alpha_dropout()

+

Alpha_dropout

+

nnf_avg_pool1d()

+

Avg_pool1d

+

nnf_avg_pool2d()

+

Avg_pool2d

+

nnf_avg_pool3d()

+

Avg_pool3d

+

nnf_batch_norm()

+

Batch_norm

+

nnf_bilinear()

+

Bilinear

+

nnf_binary_cross_entropy()

+

Binary_cross_entropy

+

nnf_binary_cross_entropy_with_logits()

+

Binary_cross_entropy_with_logits

+

nnf_celu() nnf_celu_()

+

Celu

+

nnf_contrib_sparsemax()

+

Sparsemax

+

nnf_conv1d()

+

Conv1d

+

nnf_conv2d()

+

Conv2d

+

nnf_conv3d()

+

Conv3d

+

nnf_conv_tbc()

+

Conv_tbc

+

nnf_conv_transpose1d()

+

Conv_transpose1d

+

nnf_conv_transpose2d()

+

Conv_transpose2d

+

nnf_conv_transpose3d()

+

Conv_transpose3d

+

nnf_cosine_embedding_loss()

+

Cosine_embedding_loss

+

nnf_cosine_similarity()

+

Cosine_similarity

+

nnf_cross_entropy()

+

Cross_entropy

+

nnf_ctc_loss()

+

Ctc_loss

+

nnf_dropout()

+

Dropout

+

nnf_dropout2d()

+

Dropout2d

+

nnf_dropout3d()

+

Dropout3d

+

nnf_elu() nnf_elu_()

+

Elu

+

nnf_embedding()

+

Embedding

+

nnf_embedding_bag()

+

Embedding_bag

+

nnf_fold()

+

Fold

+

nnf_fractional_max_pool2d()

+

Fractional_max_pool2d

+

nnf_fractional_max_pool3d()

+

Fractional_max_pool3d

+

nnf_gelu()

+

Gelu

+

nnf_glu()

+

Glu

+

nnf_grid_sample()

+

Grid_sample

+

nnf_group_norm()

+

Group_norm

+

nnf_gumbel_softmax()

+

Gumbel_softmax

+

nnf_hardshrink()

+

Hardshrink

+

nnf_hardsigmoid()

+

Hardsigmoid

+

nnf_hardswish()

+

Hardswish

+

nnf_hardtanh() nnf_hardtanh_()

+

Hardtanh

+

nnf_hinge_embedding_loss()

+

Hinge_embedding_loss

+

nnf_instance_norm()

+

Instance_norm

+

nnf_interpolate()

+

Interpolate

+

nnf_kl_div()

+

Kl_div

+

nnf_l1_loss()

+

L1_loss

+

nnf_layer_norm()

+

Layer_norm

+

nnf_leaky_relu()

+

Leaky_relu

+

nnf_linear()

+

Linear

+

nnf_local_response_norm()

+

Local_response_norm

+

nnf_log_softmax()

+

Log_softmax

+

nnf_logsigmoid()

+

Logsigmoid

+

nnf_lp_pool1d()

+

Lp_pool1d

+

nnf_lp_pool2d()

+

Lp_pool2d

+

nnf_margin_ranking_loss()

+

Margin_ranking_loss

+

nnf_max_pool1d()

+

Max_pool1d

+

nnf_max_pool2d()

+

Max_pool2d

+

nnf_max_pool3d()

+

Max_pool3d

+

nnf_max_unpool1d()

+

Max_unpool1d

+

nnf_max_unpool2d()

+

Max_unpool2d

+

nnf_max_unpool3d()

+

Max_unpool3d

+

nnf_mse_loss()

+

Mse_loss

+

nnf_multi_head_attention_forward()

+

Multi head attention forward

+

nnf_multi_margin_loss()

+

Multi_margin_loss

+

nnf_multilabel_margin_loss()

+

Multilabel_margin_loss

+

nnf_multilabel_soft_margin_loss()

+

Multilabel_soft_margin_loss

+

nnf_nll_loss()

+

Nll_loss

+

nnf_normalize()

+

Normalize

+

nnf_one_hot()

+

One_hot

+

nnf_pad()

+

Pad

+

nnf_pairwise_distance()

+

Pairwise_distance

+

nnf_pdist()

+

Pdist

+

nnf_pixel_shuffle()

+

Pixel_shuffle

+

nnf_poisson_nll_loss()

+

Poisson_nll_loss

+

nnf_prelu()

+

Prelu

+

nnf_relu() nnf_relu_()

+

Relu

+

nnf_relu6()

+

Relu6

+

nnf_rrelu() nnf_rrelu_()

+

Rrelu

+

nnf_selu() nnf_selu_()

+

Selu

+

nnf_sigmoid()

+

Sigmoid

+

nnf_smooth_l1_loss()

+

Smooth_l1_loss

+

nnf_soft_margin_loss()

+

Soft_margin_loss

+

nnf_softmax()

+

Softmax

+

nnf_softmin()

+

Softmin

+

nnf_softplus()

+

Softplus

+

nnf_softshrink()

+

Softshrink

+

nnf_softsign()

+

Softsign

+

nnf_tanhshrink()

+

Tanhshrink

+

nnf_threshold() nnf_threshold_()

+

Threshold

+

nnf_triplet_margin_loss()

+

Triplet_margin_loss

+

nnf_triplet_margin_with_distance_loss()

+

Triplet margin with distance loss

+

nnf_unfold()

+

Unfold

+

optim_adadelta()

+

Adadelta optimizer

+

optim_adagrad()

+

Adagrad optimizer

+

optim_adam()

+

Implements Adam algorithm.

+

optim_asgd()

+

Averaged Stochastic Gradient Descent optimizer

+

optim_lbfgs()

+

LBFGS optimizer

+

optim_required()

+

Dummy value indicating a required value.

+

optim_rmsprop()

+

RMSprop optimizer

+

optim_rprop()

+

Implements the resilient backpropagation algorithm.

+

optim_sgd()

+

SGD optimizer

+

optimizer()

+

Creates a custom optimizer

+

slc()

+

Creates a slice

+

tensor_dataset()

+

Dataset wrapping tensors.

+

with_detect_anomaly()

+

Context-manager that enables anomaly detection for the autograd engine.

+

with_enable_grad()

+

Enable grad

+

with_no_grad()

+

Temporarily modify gradient recording.

+

Neural network modules

+

+
+

nn_adaptive_avg_pool1d()

+

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_avg_pool2d()

+

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_avg_pool3d()

+

Applies a 3D adaptive average pooling over an input signal composed of several input planes.

+

nn_adaptive_log_softmax_with_loss()

+

AdaptiveLogSoftmaxWithLoss module

+

nn_adaptive_max_pool1d()

+

Applies a 1D adaptive max pooling over an input signal composed of several input planes.

+

nn_adaptive_max_pool2d()

+

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

+

nn_adaptive_max_pool3d()

+

Applies a 3D adaptive max pooling over an input signal composed of several input planes.

+

nn_avg_pool1d()

+

Applies a 1D average pooling over an input signal composed of several input planes.

+

nn_avg_pool2d()

+

Applies a 2D average pooling over an input signal composed of several input planes.

+

nn_avg_pool3d()

+

Applies a 3D average pooling over an input signal composed of several input planes.

+

nn_batch_norm1d()

+

BatchNorm1D module

+

nn_batch_norm2d()

+

BatchNorm2D

+

nn_batch_norm3d()

+

BatchNorm3D

+

nn_bce_loss()

+

Binary cross entropy loss

+

nn_bce_with_logits_loss()

+

BCE with logits loss

+

nn_bilinear()

+

Bilinear module

+

nn_buffer()

+

Creates a nn_buffer

+

nn_celu()

+

CELU module

+

nn_contrib_sparsemax()

+

Sparsemax activation

+

nn_conv1d()

+

Conv1D module

+

nn_conv2d()

+

Conv2D module

+

nn_conv3d()

+

Conv3D module

+

nn_conv_transpose1d()

+

ConvTranspose1D

+

nn_conv_transpose2d()

+

ConvTranspose2D module

+

nn_conv_transpose3d()

+

ConvTranspose3D module

+

nn_cosine_embedding_loss()

+

Cosine embedding loss

+

nn_cross_entropy_loss()

+

CrossEntropyLoss module

+

nn_ctc_loss()

+

The Connectionist Temporal Classification loss.

+

nn_dropout()

Dropout module

+

nn_dropout2d()

Dropout2D module

+

nn_dropout3d()

Dropout3D module

+

nn_elu()

ELU module

+

nn_embedding()

Embedding module

+

nn_fractional_max_pool2d()

Applies a 2D fractional max pooling over an input signal composed of several input planes.

+

nn_fractional_max_pool3d()

Applies a 3D fractional max pooling over an input signal composed of several input planes.

+

nn_gelu()

GELU module

+

nn_glu()

GLU module

+

nn_group_norm()

Group normalization

+

nn_gru()

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

+

nn_hardshrink()

Hardshrink module

+

nn_hardsigmoid()

Hardsigmoid module

+

nn_hardswish()

Hardswish module

+

nn_hardtanh()

Hardtanh module

+

nn_hinge_embedding_loss()

Hinge embedding loss

+

nn_identity()

Identity module

+

nn_init_calculate_gain()

Calculate gain

+

nn_init_constant_()

Constant initialization

+

nn_init_dirac_()

Dirac initialization

+

nn_init_eye_()

Eye initialization

+

nn_init_kaiming_normal_()

Kaiming normal initialization

+

nn_init_kaiming_uniform_()

Kaiming uniform initialization

+

nn_init_normal_()

Normal initialization

+

nn_init_ones_()

Ones initialization

+

nn_init_orthogonal_()

Orthogonal initialization

+

nn_init_sparse_()

Sparse initialization

+

nn_init_trunc_normal_()

Truncated normal initialization

+

nn_init_uniform_()

Uniform initialization

+

nn_init_xavier_normal_()

Xavier normal initialization

+

nn_init_xavier_uniform_()

Xavier uniform initialization

+

nn_init_zeros_()

Zeros initialization

+

nn_kl_div_loss()

Kullback-Leibler divergence loss

+

nn_l1_loss()

L1 loss

+

nn_layer_norm()

Layer normalization

+

nn_leaky_relu()

LeakyReLU module

+

nn_linear()

Linear module

+

nn_log_sigmoid()

LogSigmoid module

+

nn_log_softmax()

LogSoftmax module

+

nn_lp_pool1d()

Applies a 1D power-average pooling over an input signal composed of several input planes.

+

nn_lp_pool2d()

Applies a 2D power-average pooling over an input signal composed of several input planes.

+

nn_lstm()

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

+

nn_margin_ranking_loss()

Margin ranking loss

+

nn_max_pool1d()

MaxPool1D module

+

nn_max_pool2d()

MaxPool2D module

+

nn_max_pool3d()

Applies a 3D max pooling over an input signal composed of several input planes.

+

nn_max_unpool1d()

Computes a partial inverse of MaxPool1d.

+

nn_max_unpool2d()

Computes a partial inverse of MaxPool2d.

+

nn_max_unpool3d()

Computes a partial inverse of MaxPool3d.

+

nn_module()

Base class for all neural network modules.

+

nn_module_list()

Holds submodules in a list.

+

nn_mse_loss()

MSE loss

+

nn_multi_margin_loss()

Multi margin loss

+

nn_multihead_attention()

MultiHead attention

+

nn_multilabel_margin_loss()

Multilabel margin loss

+

nn_multilabel_soft_margin_loss()

Multi label soft margin loss

+

nn_nll_loss()

NLL loss

+

nn_pairwise_distance()

Pairwise distance

+

nn_parameter()

Creates an nn_parameter

+

nn_poisson_nll_loss()

Poisson NLL loss

+

nn_prelu()

PReLU module

+

nn_relu()

ReLU module

+

nn_relu6()

ReLU6 module

+

nn_rnn()

RNN module

+

nn_rrelu()

RReLU module

+

nn_selu()

SELU module

+

nn_sequential()

A sequential container

+

nn_sigmoid()

Sigmoid module

+

nn_smooth_l1_loss()

Smooth L1 loss

+

nn_soft_margin_loss()

Soft margin loss

+

nn_softmax()

Softmax module

+

nn_softmax2d()

Softmax2d module

+

nn_softmin()

Softmin

+

nn_softplus()

Softplus module

+

nn_softshrink()

Softshrink module

+

nn_softsign()

Softsign module

+

nn_tanh()

Tanh module

+

nn_tanhshrink()

Tanhshrink module

+

nn_threshold()

Threshold module

+

nn_triplet_margin_loss()

Triplet margin loss

+

nn_triplet_margin_with_distance_loss()

Triplet margin with distance loss

+

nn_utils_clip_grad_norm_()

Clips gradient norm of an iterable of parameters.

+

nn_utils_clip_grad_value_()

Clips gradients of an iterable of parameters at a specified value.

+

nn_utils_rnn_pack_padded_sequence()

Packs a Tensor containing padded sequences of variable length.

+

nn_utils_rnn_pack_sequence()

Packs a list of variable length Tensors

+

nn_utils_rnn_pad_packed_sequence()

Pads a packed batch of variable length sequences.

+

nn_utils_rnn_pad_sequence()

Pad a list of variable length Tensors with padding_value

+

is_nn_module()

Checks if the object is an nn_module

+

is_nn_parameter()

Checks if an object is a nn_parameter

+

is_nn_buffer()

Checks if the object is a nn_buffer

-

Neural networks functional module

+
+

Neural networks functional module

+

nnf_adaptive_avg_pool1d()

Adaptive_avg_pool1d

+

nnf_adaptive_avg_pool2d()

Adaptive_avg_pool2d

+

nnf_adaptive_avg_pool3d()

Adaptive_avg_pool3d

+

nnf_adaptive_max_pool1d()

Adaptive_max_pool1d

+

nnf_adaptive_max_pool2d()

Adaptive_max_pool2d

+

nnf_adaptive_max_pool3d()

Adaptive_max_pool3d

+

nnf_affine_grid()

Affine_grid

+

nnf_alpha_dropout()

Alpha_dropout

+

nnf_avg_pool1d()

Avg_pool1d

+

nnf_avg_pool2d()

Avg_pool2d

+

nnf_avg_pool3d()

Avg_pool3d

+

nnf_batch_norm()

Batch_norm

+

nnf_bilinear()

Bilinear

+

nnf_binary_cross_entropy()

Binary_cross_entropy

+

nnf_binary_cross_entropy_with_logits()

Binary_cross_entropy_with_logits

+

nnf_celu() nnf_celu_()

Celu

+

nnf_contrib_sparsemax()

Sparsemax

+

nnf_conv1d()

Conv1d

+

nnf_conv2d()

Conv2d

+

nnf_conv3d()

Conv3d

+

nnf_conv_tbc()

Conv_tbc

+

nnf_conv_transpose1d()

Conv_transpose1d

+

nnf_conv_transpose2d()

Conv_transpose2d

+

nnf_conv_transpose3d()

Conv_transpose3d

+

nnf_cosine_embedding_loss()

Cosine_embedding_loss

+

nnf_cosine_similarity()

Cosine_similarity

+

nnf_cross_entropy()

Cross_entropy

+

nnf_ctc_loss()

Ctc_loss

+

nnf_dropout()

Dropout

+

nnf_dropout2d()

Dropout2d

+

nnf_dropout3d()

Dropout3d

+

nnf_elu() nnf_elu_()

Elu

+

nnf_embedding()

Embedding

+

nnf_embedding_bag()

Embedding_bag

+

nnf_fold()

Fold

+

nnf_fractional_max_pool2d()

Fractional_max_pool2d

+

nnf_fractional_max_pool3d()

Fractional_max_pool3d

+

nnf_gelu()

Gelu

+

nnf_glu()

Glu

+

nnf_grid_sample()

Grid_sample

+

nnf_group_norm()

Group_norm

+

nnf_gumbel_softmax()

Gumbel_softmax

+

nnf_hardshrink()

Hardshrink

+

nnf_hardsigmoid()

Hardsigmoid

+

nnf_hardswish()

Hardswish

+

nnf_hardtanh() nnf_hardtanh_()

Hardtanh

+

nnf_hinge_embedding_loss()

Hinge_embedding_loss

+

nnf_instance_norm()

Instance_norm

+

nnf_interpolate()

Interpolate

+

nnf_kl_div()

Kl_div

+

nnf_l1_loss()

L1_loss

+

nnf_layer_norm()

Layer_norm

+

nnf_leaky_relu()

Leaky_relu

+

nnf_linear()

Linear

+

nnf_local_response_norm()

Local_response_norm

+

nnf_log_softmax()

Log_softmax

+

nnf_logsigmoid()

Logsigmoid

+

nnf_lp_pool1d()

Lp_pool1d

+

nnf_lp_pool2d()

Lp_pool2d

+

nnf_margin_ranking_loss()

Margin_ranking_loss

+

nnf_max_pool1d()

Max_pool1d

+

nnf_max_pool2d()

Max_pool2d

+

nnf_max_pool3d()

Max_pool3d

+

nnf_max_unpool1d()

Max_unpool1d

+

nnf_max_unpool2d()

Max_unpool2d

+

nnf_max_unpool3d()

Max_unpool3d

+

nnf_mse_loss()

Mse_loss

+

nnf_multi_head_attention_forward()

Multi head attention forward

+

nnf_multi_margin_loss()

Multi_margin_loss

+

nnf_multilabel_margin_loss()

Multilabel_margin_loss

+

nnf_multilabel_soft_margin_loss()

Multilabel_soft_margin_loss

+

nnf_nll_loss()

Nll_loss

+

nnf_normalize()

Normalize

+

nnf_one_hot()

One_hot

+

nnf_pad()

Pad

+

nnf_pairwise_distance()

Pairwise_distance

+

nnf_pdist()

Pdist

+

nnf_pixel_shuffle()

Pixel_shuffle

+

nnf_poisson_nll_loss()

Poisson_nll_loss

+

nnf_prelu()

Prelu

+

nnf_relu() nnf_relu_()

Relu

+

nnf_relu6()

Relu6

+

nnf_rrelu() nnf_rrelu_()

Rrelu

+

nnf_selu() nnf_selu_()

Selu

+

nnf_sigmoid()

Sigmoid

+

nnf_smooth_l1_loss()

Smooth_l1_loss

+

nnf_soft_margin_loss()

Soft_margin_loss

+

nnf_softmax()

Softmax

+

nnf_softmin()

Softmin

+

nnf_softplus()

Softplus

+

nnf_softshrink()

Softshrink

+

nnf_softsign()

Softsign

+

nnf_tanhshrink()

Tanhshrink

+

nnf_threshold() nnf_threshold_()

Threshold

+

nnf_triplet_margin_loss()

Triplet_margin_loss

+

nnf_triplet_margin_with_distance_loss()

Triplet margin with distance loss

+

nnf_unfold()

Unfold

-

Optimizers

+
+

Optimizers

+

optimizer()

Creates a custom optimizer

+

optim_adadelta()

Adadelta optimizer

+

optim_adagrad()

Adagrad optimizer

+

optim_adam()

Implements the Adam algorithm.

+

optim_asgd()

Averaged Stochastic Gradient Descent optimizer

+

optim_lbfgs()

LBFGS optimizer

+

optim_required()

Dummy value indicating a required value.

+

optim_rmsprop()

RMSprop optimizer

+

optim_rprop()

Implements the resilient backpropagation algorithm.

+

optim_sgd()

SGD optimizer

+

is_optimizer()

Checks if the object is a torch optimizer

-

Learning rate schedulers

+
+

Learning rate schedulers

+

lr_lambda()

Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.

+

lr_multiplicative()

Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr.

+

lr_one_cycle()

One-cycle learning rate

+

lr_scheduler()

Creates learning rate schedulers

+

lr_step()

Step learning rate decay

-

Datasets

+
+

Datasets

+

dataset()

Helper function to create an R6 class that inherits from the abstract Dataset class

+

dataset_subset()

Dataset Subset

+

dataloader()

Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.

+

dataloader_make_iter()

Creates an iterator from a DataLoader

+

dataloader_next()

Get the next element of a dataloader iterator

+

enumerate()

Enumerate an iterator

+

enumerate(<dataloader>)

Enumerate an iterator

+

tensor_dataset()

Dataset wrapping tensors.

+

is_dataloader()

Checks if the object is a dataloader

-

Distributions

+
+

Distributions

+

Distribution

Generic R6 class representing distributions

+

distr_bernoulli()

Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p and 0 with probability 1 - p.

+

distr_categorical()

Creates a categorical distribution parameterized by either probs or logits (but not both).

+

distr_chi2()

Creates a Chi2 distribution parameterized by shape parameter df. This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5)

+

distr_gamma()

Creates a Gamma distribution parameterized by shape concentration and rate.

+

distr_mixture_same_family()

Mixture of components in the same family

+

distr_multivariate_normal()

Gaussian distribution

+

distr_normal()

Creates a normal (also called Gaussian) distribution parameterized by loc and scale.

+

distr_poisson()

Creates a Poisson distribution parameterized by rate, the rate parameter.

+

Constraint

Abstract base class for constraints.

-

Autograd

+
+

Autograd

+

autograd_backward()

Computes the sum of gradients of given tensors w.r.t. graph leaves.

+

autograd_function()

Records operation history and defines formulas for differentiating ops.

+

autograd_grad()

Computes and returns the sum of gradients of outputs w.r.t. the inputs.

+

autograd_set_grad_mode()

Set grad mode

+

with_no_grad()

Temporarily modify gradient recording.

+

with_enable_grad()

Enable grad

+

AutogradContext

Class representing the context.

-

Linear Algebra

+
+

Linear Algebra

+

linalg_cholesky()

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.

+

linalg_cholesky_ex()

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.

+

linalg_cond()

Computes the condition number of a matrix with respect to a matrix norm.

+

linalg_det()

Computes the determinant of a square matrix.

+

linalg_eig()

Computes the eigenvalue decomposition of a square matrix if it exists.

+

linalg_eigh()

Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix.

+

linalg_eigvals()

Computes the eigenvalues of a square matrix.

+

linalg_eigvalsh()

Computes the eigenvalues of a complex Hermitian or real symmetric matrix.

+

linalg_householder_product()

Computes the first n columns of a product of Householder matrices.

+

linalg_inv()

Computes the inverse of a square matrix if it exists.

+

linalg_inv_ex()

Computes the inverse of a square matrix if it is invertible.

+

linalg_lstsq()

Computes a solution to the least squares problem of a system of linear equations.

+

linalg_matrix_norm()

Computes a matrix norm.

+

linalg_matrix_power()

Computes the n-th power of a square matrix for an integer n.

+

linalg_matrix_rank()

Computes the numerical rank of a matrix.

+

linalg_multi_dot()

Efficiently multiplies two or more matrices

+

linalg_norm()

Computes a vector or matrix norm.

+

linalg_pinv()

Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.

+

linalg_qr()

Computes the QR decomposition of a matrix.

+

linalg_slogdet()

Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix.

+

linalg_solve()

Computes the solution of a square system of linear equations with a unique solution.

+

linalg_svd()

Computes the singular value decomposition (SVD) of a matrix.

+

linalg_svdvals()

Computes the singular values of a matrix.

+

linalg_tensorinv()

Computes the multiplicative inverse of torch_tensordot()

+


linalg_tensorsolve()

Computes the solution X to the system torch_tensordot(A, X) = B.

+

linalg_vector_norm()

Computes a vector norm.

-

Cuda utilities

+
+

Cuda utilities

+

cuda_current_device()

Returns the index of a currently selected device.

+

cuda_device_count()

Returns the number of GPUs available.

+

cuda_get_device_capability()

Returns the major and minor CUDA capability of a device

+

cuda_is_available()

Returns a bool indicating if CUDA is currently available.

-

JIT

+
+

JIT

+

jit_compile()

Compile TorchScript code into a graph

+

jit_load()

Loads a script_function or script_module previously saved with jit_save

+

jit_save()

Saves a script_function to a path

+

jit_save_for_mobile()

Saves a script_function or script_module in bytecode form, to be loaded on a mobile device

+

jit_scalar()

Adds the 'jit_scalar' class to the input

+

jit_trace()

Trace a function and return an executable script_function.

+

jit_trace_module()

Trace a module

+

jit_tuple()

Adds the 'jit_tuple' class to the input

-

Backends

+
+

Backends

+

backends_mkl_is_available()

MKL is available

+

backends_mkldnn_is_available()

MKLDNN is available

+

backends_openmp_is_available()

OpenMP is available

-

Installation

+
+

Installation

+

install_torch()

Install Torch

+

install_torch_from_file()

Install Torch from files

+

get_install_libs_url()

List of files to download

- +
+
-
- +
- - + + diff --git a/dev/reference/install_torch.html b/dev/reference/install_torch.html index ca9f1a06f3041169e9a841c4fa06bda72953dfb7..5e0699fce470f1fcc46b1c4faf9818381b605394 100644 --- a/dev/reference/install_torch.html +++ b/dev/reference/install_torch.html @@ -1,79 +1,18 @@ - - - - - - - -Install Torch — install_torch • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Install Torch — install_torch • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,46 +111,34 @@

Installs Torch and its dependencies.

-
install_torch(
-  version = "1.9.1",
-  type = install_type(version = version),
-  reinstall = FALSE,
-  path = install_path(),
-  timeout = 360,
-  ...
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
version

The Torch version to install.

type

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

reinstall

Re-install Torch even if it's already installed?

path

Optional path to install or check for an already existing installation.

timeout

Optional timeout in seconds for large file download.

...

other optional arguments (like `load` for manual installation).

- -

Details

+
+
install_torch(
+  version = "1.9.1",
+  type = install_type(version = version),
+  reinstall = FALSE,
+  path = install_path(),
+  timeout = 360,
+  ...
+)
+
+
+

Arguments

+
version
+

The Torch version to install.

+
type
+

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

+
reinstall
+

Re-install Torch even if it's already installed?

+
path
+

Optional path to install or check for an already existing installation.

+
timeout
+

Optional timeout in seconds for large file download.

+
...
+

other optional arguments (like `load` for manual installation).

+
+
+

Details

When using path to install in a specific location, make sure the TORCH_HOME environment variable is set to this same path to reuse this installation. The TORCH_INSTALL environment variable can be set to 0 to prevent auto-installing torch and TORCH_LOAD set to 0 @@ -236,32 +146,29 @@ to avoid loading dependencies automatically. These environment variables are mea cases and troubleshooting only. When timeout error occurs during library archive download, or length of downloaded files differ from reported length, an increase of the timeout value should help.

+
+
- - - + + diff --git a/dev/reference/install_torch_from_file.html b/dev/reference/install_torch_from_file.html index 32e65a41dbce69336f0f46cc48675085ff2f258f..ba8b213f1483ef34dc85c726ffc71ffbf8855d8a 100644 --- a/dev/reference/install_torch_from_file.html +++ b/dev/reference/install_torch_from_file.html @@ -1,79 +1,18 @@ - - - - - - - -Install Torch from files — install_torch_from_file • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Install Torch from files — install_torch_from_file • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,71 +111,58 @@

Installs Torch and its dependencies from files.

-
install_torch_from_file(
-  version = "1.9.1",
-  type = install_type(version = version),
-  libtorch,
-  liblantern,
-  ...
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
version

The Torch version to install.

type

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

libtorch

The installation archive file to use for Torch. Shall be a "file://" URL scheme.

liblantern

The installation archive file to use for Lantern. Shall be a "file://" URL scheme.

...

other parameters to be passed to "install_torch()"

- -

Details

+
+
install_torch_from_file(
+  version = "1.9.1",
+  type = install_type(version = version),
+  libtorch,
+  liblantern,
+  ...
+)
+
+
+

Arguments

+
version
+

The Torch version to install.

+
type
+

The installation type for Torch. Valid values are "cpu" or the 'CUDA' version.

+
libtorch
+

The installation archive file to use for Torch. Shall be a "file://" URL scheme.

+
liblantern
+

The installation archive file to use for Lantern. Shall be a "file://" URL scheme.

+
...
+

other parameters to be passed to "install_torch()"

+
+
+

Details

When "install_torch()" initiated download is not possible, but installation archive files are present on local filesystem, "install_torch_from_file()" can be used as a workaround to installation issue. "libtorch" is the archive containing all torch modules, and "liblantern" is the C interface to libtorch that is used for the R package. Both are highly dependent, and should be checked through "get_install_libs_url()"

+
+
- - - + + diff --git a/dev/reference/is_dataloader.html b/dev/reference/is_dataloader.html index f2016ee165d92921081c3b7a55f28369d3ad247c..ae860c22695f0a9273248ee0fcae47f7caa2f8c4 100644 --- a/dev/reference/is_dataloader.html +++ b/dev/reference/is_dataloader.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if the object is a dataloader — is_dataloader • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if the object is a dataloader — is_dataloader • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if the object is a dataloader

-
is_dataloader(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_dataloader(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_nn_buffer.html b/dev/reference/is_nn_buffer.html index 769b796b4938ea8a10b094bcb4ab07529cba6e69..ec97ac20ede4655cf1d50c81e4080fa39cf4299c 100644 --- a/dev/reference/is_nn_buffer.html +++ b/dev/reference/is_nn_buffer.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if the object is a nn_buffer — is_nn_buffer • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if the object is a nn_buffer — is_nn_buffer • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if the object is a nn_buffer

-
is_nn_buffer(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_nn_buffer(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_nn_module.html b/dev/reference/is_nn_module.html index f3b0d76eae817b109a9b7392492b2c3799556deb..a71ab523f74b5f26ad933b7017f49ae4f9a25ba4 100644 --- a/dev/reference/is_nn_module.html +++ b/dev/reference/is_nn_module.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if the object is an nn_module — is_nn_module • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if the object is an nn_module — is_nn_module • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if the object is an nn_module

-
is_nn_module(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_nn_module(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_nn_parameter.html b/dev/reference/is_nn_parameter.html index d2a57a0dd09a3ba23d68a3a305da0de0bfc957c8..6267d6d71a577ea60a0f81a22439bedc99863541 100644 --- a/dev/reference/is_nn_parameter.html +++ b/dev/reference/is_nn_parameter.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if an object is a nn_parameter — is_nn_parameter • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if an object is a nn_parameter — is_nn_parameter • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if an object is a nn_parameter

-
is_nn_parameter(x)
- -

Arguments

- - - - - - -
x

the object to check

+
+
is_nn_parameter(x)
+
+
+

Arguments

+
x
+

the object to check

+
+
- - - + + diff --git a/dev/reference/is_optimizer.html b/dev/reference/is_optimizer.html index 896f0f54e814bcb20c87ae79fafac7dd090c33d1..44e1b686586e05142bff19be08889ced91dd1e44 100644 --- a/dev/reference/is_optimizer.html +++ b/dev/reference/is_optimizer.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if the object is a torch optimizer — is_optimizer • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if the object is a torch optimizer — is_optimizer • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if the object is a torch optimizer

-
is_optimizer(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_optimizer(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_torch_device.html b/dev/reference/is_torch_device.html index 7dcfa13e5a206c493fded2bb7b6b8ce9d2b568a6..2b24deb54c5b59faa683d5291a36224e6a3fbb75 100644 --- a/dev/reference/is_torch_device.html +++ b/dev/reference/is_torch_device.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if object is a device — is_torch_device • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if object is a device — is_torch_device • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if object is a device

-
is_torch_device(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_torch_device(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_torch_dtype.html b/dev/reference/is_torch_dtype.html index 151d1282b302d219055b1b89ab6a6f3bfce25c15..8abbb09f8ab0465a0c2eefc04893f58cf77f063f 100644 --- a/dev/reference/is_torch_dtype.html +++ b/dev/reference/is_torch_dtype.html @@ -1,79 +1,18 @@ - - - - - - - -Check if object is a torch data type — is_torch_dtype • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Check if object is a torch data type — is_torch_dtype • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Check if object is a torch data type

-
is_torch_dtype(x)
- -

Arguments

- - - - - - -
x

object to check.

+
+
is_torch_dtype(x)
+
+
+

Arguments

+
x
+

object to check.

+
+
- - - + + diff --git a/dev/reference/is_torch_layout.html b/dev/reference/is_torch_layout.html index c3e96571d42e568798c063069c5a9feb78022d60..b3bfac21cfdb40e3d92ba5ede79d411877de9e7a 100644 --- a/dev/reference/is_torch_layout.html +++ b/dev/reference/is_torch_layout.html @@ -1,79 +1,18 @@ - - - - - - - -Check if an object is a torch layout. — is_torch_layout • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Check if an object is a torch layout. — is_torch_layout • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Check if an object is a torch layout.

-
is_torch_layout(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_torch_layout(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_torch_memory_format.html b/dev/reference/is_torch_memory_format.html index 7a1aa849aa751548bcfb5a86bdde8f62e692501d..c4b2953697ba4e1e4a7a62110206d5f544bdf1c1 100644 --- a/dev/reference/is_torch_memory_format.html +++ b/dev/reference/is_torch_memory_format.html @@ -1,79 +1,18 @@ - - - - - - - -Check if an object is a memory format — is_torch_memory_format • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Check if an object is a memory format — is_torch_memory_format • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Check if an object is a memory format

-
is_torch_memory_format(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_torch_memory_format(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_torch_qscheme.html b/dev/reference/is_torch_qscheme.html index c3be73740554fc0edde1eb6ae08d92190c12ad1b..5f5a083cd672a61ed30a7f26e5d380cb5a6eecb2 100644 --- a/dev/reference/is_torch_qscheme.html +++ b/dev/reference/is_torch_qscheme.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if an object is a QScheme — is_torch_qscheme • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if an object is a QScheme — is_torch_qscheme • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if an object is a QScheme

-
is_torch_qscheme(x)
- -

Arguments

- - - - - - -
x

object to check

+
+
is_torch_qscheme(x)
+
+
+

Arguments

+
x
+

object to check

+
+
- - - + + diff --git a/dev/reference/is_undefined_tensor.html b/dev/reference/is_undefined_tensor.html index f39a24c9338a95bbca5f09d15a9d0c0169632b37..19a709f02f716bda1e517f70ec15ded00b94fb88 100644 --- a/dev/reference/is_undefined_tensor.html +++ b/dev/reference/is_undefined_tensor.html @@ -1,79 +1,18 @@ - - - - - - - -Checks if a tensor is undefined — is_undefined_tensor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Checks if a tensor is undefined — is_undefined_tensor • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Checks if a tensor is undefined

-
is_undefined_tensor(x)
- -

Arguments

- - - - - - -
x

tensor to check

+
+
is_undefined_tensor(x)
+
+
+

Arguments

+
x
+

tensor to check

+
+
- - - + + diff --git a/dev/reference/jit_compile.html b/dev/reference/jit_compile.html index d97125d4453e15459a14ba54883b8263e6a0024a..831b2c61022ed0f5da4f50d2fe7862fdef77fc7d 100644 --- a/dev/reference/jit_compile.html +++ b/dev/reference/jit_compile.html @@ -1,80 +1,19 @@ - - - - - - - -Compile TorchScript code into a graph — jit_compile • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Compile TorchScript code into a graph — jit_compile • torch - - - - - - + + - - -
-
- -
- -
+
-

See the TorchScript language reference for +

See the TorchScript language reference for documentation on how to write TorchScript code.

-
jit_compile(source)
- -

Arguments

- - - - - - -
source

valid TorchScript source code.

- - -

Examples

-
if (torch_is_installed()) {
-comp <- jit_compile("
-def fn (x):
-  return torch.abs(x)
-  
-def foo (x):
-  return torch.sum(x)
-
-")
-
-comp$fn(torch_tensor(-1))
-comp$foo(torch_randn(10))
-
-}
-#> torch_tensor
-#> -4.03201
-#> [ CPUFloatType{} ]
-
+
+
jit_compile(source)
+
+ +
+

Arguments

+
source
+

valid TorchScript source code.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+comp <- jit_compile("
+def fn (x):
+  return torch.abs(x)
+  
+def foo (x):
+  return torch.sum(x)
+
+")
+
+comp$fn(torch_tensor(-1))
+comp$foo(torch_randn(10))
+
+}
+#> torch_tensor
+#> 2.75868
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/jit_load.html b/dev/reference/jit_load.html index c0c766645006952e766ac9f5be1f07d7eb7868fd..db4735224d65eb5f2a29a20c9d4695029399a444 100644 --- a/dev/reference/jit_load.html +++ b/dev/reference/jit_load.html @@ -1,79 +1,18 @@ - - - - - - - -Loads a script_function or script_module previously saved with jit_save — jit_load • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Loads a script_function or script_module previously saved with jit_save — jit_load • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Loads a script_function or script_module previously saved with jit_save

-
jit_load(path, ...)
- -

Arguments

- - - - - - - - - - -
path

a path to a script_function or script_module serialized with -jit_save().

...

currently unused.

+
+
jit_load(path, ...)
+
+
+

Arguments

+
path
+

a path to a script_function or script_module serialized with +jit_save().

+
...
+

currently unused.

+
+
- - - + + diff --git a/dev/reference/jit_save.html b/dev/reference/jit_save.html index cf02e34f26165e4078fdf92c2048bf854841fc5b..e55af24516ef3eb07e960b9690bebd6afce95c54 100644 --- a/dev/reference/jit_save.html +++ b/dev/reference/jit_save.html @@ -1,79 +1,18 @@ - - - - - - - -Saves a script_function to a path — jit_save • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Saves a script_function to a path — jit_save • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,65 +111,57 @@

Saves a script_function to a path

-
jit_save(obj, path, ...)
- -

Arguments

- - - - - - - - - - - - - - -
obj

An script_function to save

path

The path to save the serialized function.

...

currently unused

- - -

Examples

-
if (torch_is_installed()) {
-fn <- function(x) {
-  torch_relu(x)
-}
-
-input <- torch_tensor(c(-1, 0, 1))
-tr_fn <- jit_trace(fn, input)
-
-tmp <- tempfile("tst", fileext = "pt")
-jit_save(tr_fn, tmp)
-
-}
-
+
+
jit_save(obj, path, ...)
+
+ +
+

Arguments

+
obj
+

A script_function to save

+
path
+

The path to save the serialized function.

+
...
+

currently unused

+
+ +
+

Examples

+
if (torch_is_installed()) {
+fn <- function(x) {
+  torch_relu(x)
+}
+
+input <- torch_tensor(c(-1, 0, 1))
+tr_fn <- jit_trace(fn, input)
+
+tmp <- tempfile("tst", fileext = ".pt")
+jit_save(tr_fn, tmp)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/jit_save_for_mobile.html b/dev/reference/jit_save_for_mobile.html index 461a9bfff74186b5b2bc7127aec624de56f085f1..619836c738a08381fdacb76e7f62c73d2fd1842d 100644 --- a/dev/reference/jit_save_for_mobile.html +++ b/dev/reference/jit_save_for_mobile.html @@ -1,82 +1,21 @@ - - - - - - - -Saves a script_function or script_module in bytecode form, -to be loaded on a mobile device — jit_save_for_mobile • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Saves a script_function or script_module in bytecode form, +to be loaded on a mobile device — jit_save_for_mobile • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,65 +116,57 @@ to be loaded on a mobile device to be loaded on a mobile device

-
jit_save_for_mobile(obj, path, ...)
- -

Arguments

- - - - - - - - - - - - - - -
obj

An script_function or script_module to save

path

The path to save the serialized function.

...

currently unused

- +
+
jit_save_for_mobile(obj, path, ...)
+
-

Examples

-
if (torch_is_installed()) {
-fn <- function(x) {
-  torch_relu(x)
-}
-
-input <- torch_tensor(c(-1, 0, 1))
-tr_fn <- jit_trace(fn, input)
-
-tmp <- tempfile("tst", fileext = "pt")
-jit_save_for_mobile(tr_fn, tmp)
-
-}
-
+
+

Arguments

+
obj
+

A script_function or script_module to save

+
path
+

The path to save the serialized function.

+
...
+

currently unused

+
+ +
+

Examples

+
if (torch_is_installed()) {
+fn <- function(x) {
+  torch_relu(x)
+}
+
+input <- torch_tensor(c(-1, 0, 1))
+tr_fn <- jit_trace(fn, input)
+
+tmp <- tempfile("tst", fileext = ".pt")
+jit_save_for_mobile(tr_fn, tmp)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/jit_scalar.html b/dev/reference/jit_scalar.html index cd011ee5c7bfff954e558f085408251794f95444..ed8e2071de25e0025b220bda42a5b5160872305b 100644 --- a/dev/reference/jit_scalar.html +++ b/dev/reference/jit_scalar.html @@ -1,80 +1,19 @@ - - - - - - - -Adds the 'jit_scalar' class to the input — jit_scalar • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adds the 'jit_scalar' class to the input — jit_scalar • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,43 +113,37 @@ them to the jit." /> them to the jit.

-
jit_scalar(x)
- -

Arguments

- - - - - - -
x

a length 1 R vector.

+
+
jit_scalar(x)
+
+
+

Arguments

+
x
+

a length 1 R vector.

+
+
- - - + + diff --git a/dev/reference/jit_trace.html b/dev/reference/jit_trace.html index 3629984b394a1ec0bb08501fa36b2ae2491d1c7a..d378fec99daf79656724ae7a5cede35786ec0896 100644 --- a/dev/reference/jit_trace.html +++ b/dev/reference/jit_trace.html @@ -1,81 +1,20 @@ - - - - - - - -Trace a function and return an executable script_function. — jit_trace • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Trace a function and return an executable script_function. — jit_trace • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,50 +115,47 @@ recording the operations performed on all the tensors." /> recording the operations performed on all the tensors.

-
jit_trace(func, ..., strict = TRUE)
+
+
jit_trace(func, ..., strict = TRUE)
+
-

Arguments

- - - - - - - - - - - - - - -
func

An R function that will be run with example_inputs. func arguments +

+

Arguments

+
func
+

An R function that will be run with example_inputs. func arguments and return values must be tensors or (possibly nested) lists that contain tensors. -Can also be a nn_module(), in which case jit_trace_module() is used to trace -that module.

...

example inputs that will be passed to the function while +Can also be a nn_module(), in which case jit_trace_module() is used to trace +that module.

+
...
+

example inputs that will be passed to the function while tracing. The resulting trace can be run with inputs of different types and shapes assuming the traced operations support those types and shapes. example_inputs may also be a single Tensor in which case it is automatically wrapped in a list. Note that ... can not be named, and the order is -respected.

strict

run the tracer in a strict mode or not (default: TRUE). Only +respected.

+
strict
+

run the tracer in a strict mode or not (default: TRUE). Only turn this off when you want the tracer to record your mutable container types (currently list/dict) and you are sure that the container you are using in your problem is a constant structure and does not get used as control flow -(if, for) conditions.

- -

Value

- +(if, for) conditions.

+
+
+

Value

A script_function if func is a function and a script_module if -func is a nn_module().

-

Details

- +func is a nn_module().

+
+
+

Details

The resulting recording of a standalone function produces a script_function. In the future we will also support tracing nn_modules.

-

Note

- +
+
+

Note

Scripting is not yet supported in R.

-

Warning

- +
+
+

Warning

@@ -247,8 +166,7 @@ Tracing only records operations done when the given function is run on the given tensors. Therefore, the returned script_function will always run the same traced graph on any input. This has some important implications when your module is expected to run different sets of operations, depending on the input and/or the -module state. For example,

    -
  • Tracing will not record any control-flow like if-statements or loops. When +module state. For example,

    • Tracing will not record any control-flow like if-statements or loops. When this control-flow is constant across your module, this is fine and it often inlines the control-flow decisions. But sometimes the control-flow is actually part of the model itself. For instance, a recurrent network is a loop over @@ -256,54 +174,51 @@ the (possibly dynamic) length of an input sequence.

    • In the returned script_function, operations that have different behaviors in training and eval modes will always behave as if it is in the mode it was in during tracing, no matter which mode the script_function is in.

    • -
    - -

    In cases like these, tracing would not be appropriate and scripting is a better +

In cases like these, tracing would not be appropriate and scripting is a better choice. If you trace such models, you may silently get incorrect results on subsequent invocations of the model. The tracer will try to emit warnings when doing something that may cause an incorrect trace to be produced.

+
-

Examples

-
if (torch_is_installed()) {
-fn <- function(x) {
- torch_relu(x)
-}
-input <- torch_tensor(c(-1, 0, 1))
-tr_fn <- jit_trace(fn, input)
-tr_fn(input)
-
-}
-#> torch_tensor
-#>  0
-#>  0
-#>  1
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+fn <- function(x) {
+ torch_relu(x)
+}
+input <- torch_tensor(c(-1, 0, 1))
+tr_fn <- jit_trace(fn, input)
+tr_fn(input)
+
+}
+#> torch_tensor
+#>  0
+#>  0
+#>  1
+#> [ CPUFloatType{3} ]
+
+
+
- - - + + diff --git a/dev/reference/jit_trace_module.html b/dev/reference/jit_trace_module.html index f91e90e40a7a0503fde824505feae300ebc034e5..5fcb5bf4293593c5fbc1f7fc55b8b9a5b7d48a19 100644 --- a/dev/reference/jit_trace_module.html +++ b/dev/reference/jit_trace_module.html @@ -1,83 +1,22 @@ - - - - - - - -Trace a module — jit_trace_module • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Trace a module — jit_trace_module • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+

Trace a module and return an executable ScriptModule that will be optimized -using just-in-time compilation. When a module is passed to jit_trace(), only +using just-in-time compilation. When a module is passed to jit_trace(), only the forward method is run and traced. With jit_trace_module(), you can specify a named list of method names to example inputs to trace (see the inputs) argument below.

-
jit_trace_module(mod, ..., strict = TRUE)
+
+
jit_trace_module(mod, ..., strict = TRUE)
+
-

Arguments

- - - - - - - - - - - - - - -
mod

A torch nn_module() containing methods whose names are specified -in inputs. The given methods will be compiled as a part of a single ScriptModule.

...

A named list containing sample inputs indexed by method names +

+

Arguments

+
mod
+

A torch nn_module() containing methods whose names are specified +in inputs. The given methods will be compiled as a part of a single ScriptModule.

+
...
+

A named list containing sample inputs indexed by method names in mod. The inputs will be passed to methods whose names correspond to inputs -keys while tracing. list('forward'=example_forward_input, 'method2'=example_method2_input).

strict

run the tracer in a strict mode or not (default: TRUE). Only +keys while tracing. list('forward'=example_forward_input, 'method2'=example_method2_input).

+
strict
+

run the tracer in a strict mode or not (default: TRUE). Only turn this off when you want the tracer to record your mutable container types (currently list/dict) and you are sure that the container you are using in your problem is a constant structure and does not get used as control flow -(if, for) conditions.

- -

Details

- -

See jit_trace for more information on tracing.

+(if, for) conditions.

+
+
+

Details

+

See jit_trace for more information on tracing.

+
-

Examples

-
if (torch_is_installed()) {
-linear <- nn_linear(10, 1)
-tr_linear <- jit_trace_module(linear, forward = list(torch_randn(10, 10)))
-
-x <- torch_randn(10, 10)
-torch_allclose(linear(x), tr_linear(x))
-
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+linear <- nn_linear(10, 1)
+tr_linear <- jit_trace_module(linear, forward = list(torch_randn(10, 10)))
+
+x <- torch_randn(10, 10)
+torch_allclose(linear(x), tr_linear(x))
+
+}
+#> [1] TRUE
+
+
+
- - - + + diff --git a/dev/reference/jit_tuple.html b/dev/reference/jit_tuple.html index 9bc1e440e8fa76026d022f913be3e3a4ac6eff8c..770236a0977097df55146183c40f0b9255fe1094 100644 --- a/dev/reference/jit_tuple.html +++ b/dev/reference/jit_tuple.html @@ -1,80 +1,19 @@ - - - - - - - -Adds the 'jit_tuple' class to the input — jit_tuple • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adds the 'jit_tuple' class to the input — jit_tuple • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,43 +113,37 @@ tuple and instead of a list or dictionary when tracing." /> tuple and instead of a list or dictionary when tracing.

-
jit_tuple(x)
- -

Arguments

- - - - - - -
x

the list object that will be converted to a tuple.

+
+
jit_tuple(x)
+
+
+

Arguments

+
x
+

the list object that will be converted to a tuple.

+
+
- - - + + diff --git a/dev/reference/linalg_cholesky.html b/dev/reference/linalg_cholesky.html index 904bc6478078aa23019a664cb623f64de0b5c401..aa0bc12469a1082eb5ca436ab48078a7b1af2228 100644 --- a/dev/reference/linalg_cholesky.html +++ b/dev/reference/linalg_cholesky.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. — linalg_cholesky • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix. — linalg_cholesky • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,22 +115,19 @@ the Cholesky decomposition of a complex Hermitian or real symme is defined as

-
linalg_cholesky(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of symmetric or Hermitian positive-definite matrices.

- -

Details

+
+
linalg_cholesky(A)
+
-

-A=LLHLKn×n +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of symmetric or Hermitian positive-definite matrices.

+
+
+

Details

+

A=LLHLKn×n A = LL^{H}\mathrlap{\qquad L \in \mathbb{K}^{n \times n}}

where is a lower triangular matrix and @@ -217,89 +136,86 @@ transpose when is real-valued.

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

-

See also

- -
-
+
+

See also

+
+
  • linalg_cholesky_ex() for a version of this operation that skips the (slow) error checking by default and instead returns the debug information. This makes it a faster way to check if a matrix is positive-definite. -linalg_eigh() for a different decomposition of a Hermitian matrix. +linalg_eigh() for a different decomposition of a Hermitian matrix. The eigenvalue decomposition gives more information about the matrix but it is slower to compute than the Cholesky decomposition.

  • -
- -

Other linalg: -linalg_cholesky_ex(), -linalg_det(), -linalg_eigh(), -linalg_eigvalsh(), -linalg_eigvals(), -linalg_eig(), -linalg_householder_product(), -linalg_inv_ex(), -linalg_inv(), -linalg_lstsq(), -linalg_matrix_norm(), -linalg_matrix_power(), -linalg_matrix_rank(), -linalg_multi_dot(), -linalg_norm(), -linalg_pinv(), -linalg_qr(), -linalg_slogdet(), -linalg_solve(), -linalg_svdvals(), -linalg_svd(), -linalg_tensorinv(), -linalg_tensorsolve(), -linalg_vector_norm()

+

Other linalg: +linalg_cholesky_ex(), +linalg_det(), +linalg_eigh(), +linalg_eigvalsh(), +linalg_eigvals(), +linalg_eig(), +linalg_householder_product(), +linalg_inv_ex(), +linalg_inv(), +linalg_lstsq(), +linalg_matrix_norm(), +linalg_matrix_power(), +linalg_matrix_rank(), +linalg_multi_dot(), +linalg_norm(), +linalg_pinv(), +linalg_qr(), +linalg_slogdet(), +linalg_solve(), +linalg_svdvals(), +linalg_svd(), +linalg_tensorinv(), +linalg_tensorsolve(), +linalg_vector_norm()

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_eye(10)
-linalg_cholesky(a)
-
-}
-#> torch_tensor
-#>  1  0  0  0  0  0  0  0  0  0
-#>  0  1  0  0  0  0  0  0  0  0
-#>  0  0  1  0  0  0  0  0  0  0
-#>  0  0  0  1  0  0  0  0  0  0
-#>  0  0  0  0  1  0  0  0  0  0
-#>  0  0  0  0  0  1  0  0  0  0
-#>  0  0  0  0  0  0  1  0  0  0
-#>  0  0  0  0  0  0  0  1  0  0
-#>  0  0  0  0  0  0  0  0  1  0
-#>  0  0  0  0  0  0  0  0  0  1
-#> [ CPUFloatType{10,10} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_eye(10)
+linalg_cholesky(a)
+
+}
+#> torch_tensor
+#>  1  0  0  0  0  0  0  0  0  0
+#>  0  1  0  0  0  0  0  0  0  0
+#>  0  0  1  0  0  0  0  0  0  0
+#>  0  0  0  1  0  0  0  0  0  0
+#>  0  0  0  0  1  0  0  0  0  0
+#>  0  0  0  0  0  1  0  0  0  0
+#>  0  0  0  0  0  0  1  0  0  0
+#>  0  0  0  0  0  0  0  1  0  0
+#>  0  0  0  0  0  0  0  0  1  0
+#>  0  0  0  0  0  0  0  0  0  1
+#> [ CPUFloatType{10,10} ]
+
+
+
- - - + + diff --git a/dev/reference/linalg_cholesky_ex.html b/dev/reference/linalg_cholesky_ex.html index 3199832adcc8cec19fca92cc1563a51d328a7f02..4d39336d036609591e7ff13adb1c36c65a5748d5 100644 --- a/dev/reference/linalg_cholesky_ex.html +++ b/dev/reference/linalg_cholesky_ex.html @@ -1,50 +1,7 @@ - - - - - - - -Computes the Cholesky decomposition of a complex Hermitian or real -symmetric positive-definite matrix. — linalg_cholesky_ex • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the Cholesky decomposition of a complex Hermitian or real +symmetric positive-definite matrix. — linalg_cholesky_ex • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+

This function skips the (slow) error checking and error message construction -of linalg_cholesky(), instead directly returning the LAPACK +of linalg_cholesky(), instead directly returning the LAPACK error codes as part of a named tuple (L, info). This makes this function a faster way to check if a matrix is positive-definite, and it provides an opportunity to handle decomposition errors more gracefully or performantly -than linalg_cholesky() does. +than linalg_cholesky() does. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions. @@ -222,99 +144,95 @@ and the decomposition could not be completed. If check_errors=TRUE and info contains positive integers, then a RuntimeError is thrown.

-
linalg_cholesky_ex(A, check_errors = FALSE)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): the Hermitian n \times n matrix or the batch of such matrices of size -(*, n, n) where * is one or more batch dimensions.

check_errors

(bool, optional): controls whether to check the content of infos. Default: FALSE.

- -

Note

+
+
linalg_cholesky_ex(A, check_errors = FALSE)
+
+
+

Arguments

+
A
+

(Tensor): the Hermitian n \times n matrix or the batch of such matrices of size +(*, n, n) where * is one or more batch dimensions.

+
check_errors
+

(bool, optional): controls whether to check the content of infos. Default: FALSE.

+
+
+

Note

If A is on a CUDA device, this function may synchronize that device with the CPU.

This function is "experimental" and it may change in a future PyTorch release.

-

See also

- -

linalg_cholesky() is a NumPy compatible variant that always checks for errors.

+
+ +
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(2, 2)
-out = linalg_cholesky_ex(A)
-out
-
-}
-#> $L
-#> torch_tensor
-#> -0.5589  0.0000
-#>  1.0619  0.0758
-#> [ CPUFloatType{2,2} ]
-#> 
-#> $info
-#> torch_tensor
-#> 1
-#> [ CPUIntType{} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(2, 2)
+out <- linalg_cholesky_ex(A)
+out
+
+}
+#> $L
+#> torch_tensor
+#> -0.2524  0.0000
+#> -0.6080  1.0792
+#> [ CPUFloatType{2,2} ]
+#> 
+#> $info
+#> torch_tensor
+#> 1
+#> [ CPUIntType{} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/linalg_cond.html b/dev/reference/linalg_cond.html index e3d0453ccbd91b3c539b5b525e866efa19bcbf52..ae910920a76a43bc65161f78f10bfb1a4c840a45 100644 --- a/dev/reference/linalg_cond.html +++ b/dev/reference/linalg_cond.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the condition number of a matrix with respect to a matrix norm. — linalg_cond • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the condition number of a matrix with respect to a matrix norm. — linalg_cond • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,31 +115,27 @@ the condition number of a matrix is defined as

-
linalg_cond(A, p = NULL)
+
+
linalg_cond(A, p = NULL)
+
-

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions for p in (2, -2), and of shape (*, n, n) where every matrix -is invertible for p in ('fro', 'nuc', inf, -inf, 1, -1).

p

(int, inf, -inf, 'fro', 'nuc', optional): -the type of the matrix norm to use in the computations (see above). Default: NULL

- -

Value

- +is invertible for p in ('fro', 'nuc', inf, -inf, 1, -1).

+
p
+

(int, inf, -inf, 'fro', 'nuc', optional): +the type of the matrix norm to use in the computations (see above). Default: NULL

+
+
+

Value

A real-valued tensor, even when A is complex.

-

Details

- -

-κ(A)=ApA1p\kappa(A) = \|A\|_p\|A^{-1}\|_p

+
+
+

Details

+

κ(A)=ApA1p\kappa(A) = \|A\|_p\|A^{-1}\|_p

The condition number of A measures the numerical stability of the linear system AX = B with respect to a matrix norm.

Supports input of float, double, cfloat and cdouble dtypes. @@ -226,69 +144,54 @@ the output has the same batch dimensions.

p defines the matrix norm that is computed. See the table in 'Details' to find the supported norms.

When p is one of ('fro', 'nuc', inf, -inf, 1, -1), this function uses -linalg_norm() and linalg_inv().

+linalg_norm() and linalg_inv().

As such, in this case, the matrix (or every matrix in the batch) A has to be square and invertible.

For p in (2, -2), this function can be computed in terms of the singular values

-

-κ2(A)=σ1σnκ2(A)=σnσ1\kappa_2(A) = \frac{\sigma_1}{\sigma_n}\qquad \kappa_{-2}(A) = \frac{\sigma_n}{\sigma_1}

-

In these cases, it is computed using linalg_svd(). For these norms, the matrix +

κ2(A)=σ1σnκ2(A)=σnσ1\kappa_2(A) = \frac{\sigma_1}{\sigma_n}\qquad \kappa_{-2}(A) = \frac{\sigma_n}{\sigma_1}

+

In these cases, it is computed using linalg_svd(). For these norms, the matrix (or every matrix in the batch) A may have any shape.

- - - - - - - - - - - -
pmatrix norm
NULL2-norm (largest singular value)
'fro'Frobenius norm
'nuc'nuclear norm
Infmax(sum(abs(x), dim=2))
-Infmin(sum(abs(x), dim=2))
1max(sum(abs(x), dim=1))
-1min(sum(abs(x), dim=1))
2largest singular value
-2smallest singular value
- - -

Note

- +
pmatrix norm
NULL2-norm (largest singular value)
'fro'Frobenius norm
'nuc'nuclear norm
Infmax(sum(abs(x), dim=2))
-Infmin(sum(abs(x), dim=2))
1max(sum(abs(x), dim=1))
-1min(sum(abs(x), dim=1))
2largest singular value
-2smallest singular value
+
+

Note

When inputs are on a CUDA device, this function synchronizes that device with the CPU if p is one of ('fro', 'nuc', inf, -inf, 1, -1).

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_tensor(rbind(c(1., 0, -1), c(0, 1, 0), c(1, 0, 1)))
-linalg_cond(a)
-linalg_cond(a, "fro")
- 
-}
-#> torch_tensor
-#> 3.16228
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_tensor(rbind(c(1., 0, -1), c(0, 1, 0), c(1, 0, 1)))
+linalg_cond(a)
+linalg_cond(a, "fro")
+ 
+}
+#> torch_tensor
+#> 3.16228
+#> [ CPUFloatType{} ]
+
+
+
- - - + + diff --git a/dev/reference/linalg_det.html b/dev/reference/linalg_det.html index abec1dc03f9714d7d1d53ff83febd2b0a10801a3..3a281e421bef55b17460ead10504c112ceef8672 100644 --- a/dev/reference/linalg_det.html +++ b/dev/reference/linalg_det.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the determinant of a square matrix. — linalg_det • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the determinant of a square matrix. — linalg_det • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,86 +115,82 @@ Also supports batches of matrices, and if A is a batch of matrices the output has the same batch dimensions.

-
linalg_det(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

- -

See also

+
+
linalg_det(A)
+
- +
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

+
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(3,3)
-linalg_det(a)
-
-a <- torch_randn(3,3,3)
-linalg_det(a)
-
-}
-#> torch_tensor
-#> 0.01 *
-#>  7.6662
-#> -597.1869
-#> -145.2378
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(3,3)
+linalg_det(a)
+
+a <- torch_randn(3,3,3)
+linalg_det(a)
+
+}
+#> torch_tensor
+#>  0.1305
+#> -0.1993
+#> -1.2173
+#> [ CPUFloatType{3} ]
+
+
+
- - - + + diff --git a/dev/reference/linalg_eig.html b/dev/reference/linalg_eig.html index 5703938013d8da14e156b31b128604cc24edcf04..6bbb95d8993ac70007b09081a80a4ac0cfb9e8a5 100644 --- a/dev/reference/linalg_eig.html +++ b/dev/reference/linalg_eig.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the eigenvalue decomposition of a square matrix if it exists. — linalg_eig • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the eigenvalue decomposition of a square matrix if it exists. — linalg_eig • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,27 +115,25 @@ the eigenvalue decomposition of a square matrix (if it exists) is defined as

-
linalg_eig(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of diagonalizable matrices.

- -

Value

+
+
linalg_eig(A)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of diagonalizable matrices.

+
+
+

Value

A list (eigenvalues, eigenvectors) which corresponds to and above. eigenvalues and eigenvectors will always be complex-valued, even when A is real. The eigenvectors will be given by the columns of eigenvectors.

-

Details

- -

-A=Vdiag(Λ)V1VCn×n,ΛCn +

+
+

Details

+

A=Vdiag(Λ)V1VCn×n,ΛCn A = V \operatorname{diag}(\Lambda) V^{-1}\mathrlap{\qquad V \in \mathbb{C}^{n \times n}, \Lambda \in \mathbb{C}^n}

This decomposition exists if and only if A is diagonalizable. @@ -221,15 +141,24 @@ This is the case when all its eigenvalues are different. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

-

Note

- +
+
+

Note

The eigenvalues and eigenvectors of a real matrix may be complex.

-

Warning

- +
+
+

Warning

-

Other linalg: +linalg_cholesky_ex(), +linalg_cholesky(), +linalg_det(), +linalg_eigh(), +linalg_eigvalsh(), +linalg_eigvals(), +linalg_householder_product(), +linalg_inv_ex(), +linalg_inv(), +linalg_lstsq(), +linalg_matrix_norm(), +linalg_matrix_power(), +linalg_matrix_rank(), +linalg_multi_dot(), +linalg_norm(), +linalg_pinv(), +linalg_qr(), +linalg_slogdet(), +linalg_solve(), +linalg_svdvals(), +linalg_svd(), +linalg_tensorinv(), +linalg_tensorsolve(), +linalg_vector_norm()

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(2, 2)
-wv = linalg_eig(a)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(2, 2)
+wv <- linalg_eig(a)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/linalg_eigh.html b/dev/reference/linalg_eigh.html index dc5d5d49d71cbd70868ed21b89f8508a1b7030c6..efb70e19df40d6011b1c76ee533462c4d06c92bf 100644 --- a/dev/reference/linalg_eigh.html +++ b/dev/reference/linalg_eigh.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix. — linalg_eigh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix. — linalg_eigh • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,33 +115,29 @@ the eigenvalue decomposition of a complex Hermitian or real sym is defined as

-
linalg_eigh(A, UPLO = "L")
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of symmetric or Hermitian matrices.

UPLO

('L', 'U', optional): controls whether to use the upper or lower triangular part -of A in the computations. Default: 'L'.

- -

Value

+
+
linalg_eigh(A, UPLO = "L")
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of symmetric or Hermitian matrices.

+
UPLO
+

('L', 'U', optional): controls whether to use the upper or lower triangular part +of A in the computations. Default: 'L'.

+
+
+

Value

A list (eigenvalues, eigenvectors) which corresponds to and above. -eigenvalues will always be real-valued, even when A is complex.

-

It will also be ordered in ascending order. +eigenvalues will always be real-valued, even when A is complex. +It will also be ordered in ascending order. eigenvectors will have the same dtype as A and will contain the eigenvectors as its columns.

-

Details

- -

-A=Qdiag(Λ)QHQKn×n,ΛRn +

+
+

Details

+

A=Qdiag(Λ)QHQKn×n,ΛRn A = Q \operatorname{diag}(\Lambda) Q^{H}\mathrlap{\qquad Q \in \mathbb{K}^{n \times n}, \Lambda \in \mathbb{R}^n}

where is the conjugate transpose when is complex, and the transpose when is real-valued. @@ -227,21 +145,19 @@ A = Q \operatorname{diag}(\Lambda) Q^{H}\mathrlap{\qquad Q \in \mathbb{K}^{n \ti

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

-

A is assumed to be Hermitian (resp. symmetric), but this is not checked internally, instead:

    -
  • If UPLO\ = 'L' (default), only the lower triangular part of the matrix is used in the computation.

  • +

    A is assumed to be Hermitian (resp. symmetric), but this is not checked internally, instead:

    • If UPLO\ = 'L' (default), only the lower triangular part of the matrix is used in the computation.

    • If UPLO\ = 'U', only the upper triangular part of the matrix is used. The eigenvalues are returned in ascending order.

    • -
    - -

    Note

    - +
+
+

Note

The eigenvalues of real symmetric or complex Hermitian matrices are always real.

-

Warning

- +
+
+

Warning

-

Other linalg: +linalg_cholesky_ex(), +linalg_cholesky(), +linalg_det(), +linalg_eigvalsh(), +linalg_eigvals(), +linalg_eig(), +linalg_householder_product(), +linalg_inv_ex(), +linalg_inv(), +linalg_lstsq(), +linalg_matrix_norm(), +linalg_matrix_power(), +linalg_matrix_rank(), +linalg_multi_dot(), +linalg_norm(), +linalg_pinv(), +linalg_qr(), +linalg_slogdet(), +linalg_solve(), +linalg_svdvals(), +linalg_svd(), +linalg_tensorinv(), +linalg_tensorsolve(), +linalg_vector_norm()

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(2, 2)
-linalg_eigh(a)
-
-}
-#> [[1]]
-#> torch_tensor
-#> -1.6506
-#> -0.2649
-#> [ CPUFloatType{2} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> -0.6397 -0.7686
-#>  0.7686 -0.6397
-#> [ CPUFloatType{2,2} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(2, 2)
+linalg_eigh(a)
+
+}
+#> [[1]]
+#> torch_tensor
+#> -0.3496
+#>  0.3220
+#> [ CPUFloatType{2} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> -0.1557 -0.9878
+#>  0.9878 -0.1557
+#> [ CPUFloatType{2,2} ]
+#> 
+
+
+
- - - + + diff --git a/dev/reference/linalg_eigvals.html b/dev/reference/linalg_eigvals.html index eedf2a4563bf7f3375c9290fc69d55dbafe39505..8716bda2af5a53c04e6d0ddde09f3dc1810da0ac 100644 --- a/dev/reference/linalg_eigvals.html +++ b/dev/reference/linalg_eigvals.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the eigenvalues of a square matrix. — linalg_eigvals • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the eigenvalues of a square matrix. — linalg_eigvals • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,92 +115,90 @@ the eigenvalues of a square matrix are defined as the roots (counted with multiplicity) of the polynomial p of degree n given by

-
linalg_eigvals(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

- -

Details

+
+
linalg_eigvals(A)
+
-

-p(λ)=det(AλIn)λC +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

+
+
+

Details

+

p(λ)=det(AλIn)λC p(\lambda) = \operatorname{det}(A - \lambda \mathrm{I}_n)\mathrlap{\qquad \lambda \in \mathbb{C}}

where is the n-dimensional identity matrix. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

-

Note

- +
+
+

Note

The eigenvalues of a real matrix may be complex, as the roots of a real polynomial may be complex. The eigenvalues of a matrix are always well-defined, even when the matrix is not diagonalizable.

-

See also

- -

linalg_eig() computes the full eigenvalue decomposition.

+
+ +
-

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(2, 2)
-w <- linalg_eigvals(a)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(2, 2)
+w <- linalg_eigvals(a)
+
+}
+
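A short hedged sketch (an addition, not generated output; assumes torch is installed): linalg_eigvals() should agree with the eigenvalues returned as the first element of the full linalg_eig() decomposition.

```r
if (torch_is_installed()) {
  a <- torch_randn(2, 2)
  w <- linalg_eigvals(a)   # eigenvalues only (complex dtype in general)
  full <- linalg_eig(a)    # list(eigenvalues, eigenvectors)
  torch_allclose(w, full[[1]])
}
```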
+
+
- - - + + diff --git a/dev/reference/linalg_eigvalsh.html b/dev/reference/linalg_eigvalsh.html index 67f33396b54b005629b13601c752bf9ab873b9d6..2d0fac01b1263b29abd96e7e8471832d919d49a2 100644 --- a/dev/reference/linalg_eigvalsh.html +++ b/dev/reference/linalg_eigvalsh.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the eigenvalues of a complex Hermitian or real symmetric matrix. — linalg_eigvalsh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the eigenvalues of a complex Hermitian or real symmetric matrix. — linalg_eigvalsh • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,31 +115,27 @@ the eigenvalues of a complex Hermitian or real symmetric matri are defined as the roots (counted with multiplicity) of the polynomial p of degree n given by

-
linalg_eigvalsh(A, UPLO = "L")
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of symmetric or Hermitian matrices.

UPLO

('L', 'U', optional): controls whether to use the upper or lower triangular part -of A in the computations. Default: 'L'.

- -

Value

+
+
linalg_eigvalsh(A, UPLO = "L")
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of symmetric or Hermitian matrices.

+
UPLO
+

('L', 'U', optional): controls whether to use the upper or lower triangular part +of A in the computations. Default: 'L'.

+
+
+

Value

A real-valued tensor containing the eigenvalues even when A is complex. The eigenvalues are returned in ascending order.

-

Details

- -

-p(λ)=det(AλIn)λR +

+
+

Details

+

p(λ)=det(AλIn)λR p(\lambda) = \operatorname{det}(A - \lambda \mathrm{I}_n)\mathrlap{\qquad \lambda \in \mathbb{R}}

where I_n is the n-dimensional identity matrix.

@@ -226,80 +144,74 @@ Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions. The eigenvalues are returned in ascending order.

-

A is assumed to be Hermitian (resp. symmetric), but this is not checked internally, instead:

+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(2, 2)
-linalg_eigvalsh(a)
-
-}
-#> torch_tensor
-#> -1.1490
-#>  0.2240
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(2, 2)
+linalg_eigvalsh(a)
+
+}
+#> torch_tensor
+#> -2.1100
+#> -1.5659
+#> [ CPUFloatType{2} ]
+
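As a hedged addition to the example above (a sketch, assuming only that torch is installed): because A is assumed symmetric but not checked, only the triangle selected by UPLO is read, which can be verified directly.

```r
if (torch_is_installed()) {
  a <- torch_randn(3, 3)
  sym <- (a + a$t()) / 2             # build a genuinely symmetric matrix
  w1 <- linalg_eigvalsh(sym)
  w2 <- linalg_eigvalsh(sym$tril())  # only the lower triangle is read when UPLO = "L"
  torch_allclose(w1, w2)
}
```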
+
+
- - - + + diff --git a/dev/reference/linalg_householder_product.html b/dev/reference/linalg_householder_product.html index d4f332c64aed3526df21ec9d76644e777138f8e5..b7ced757147bc58998e89d2e4f27f861c758741d 100644 --- a/dev/reference/linalg_householder_product.html +++ b/dev/reference/linalg_householder_product.html @@ -1,82 +1,21 @@ - - - - - - - -Computes the first n columns of a product of Householder matrices. — linalg_householder_product • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the first n columns of a product of Householder matrices. — linalg_householder_product • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,110 +117,103 @@ with and a vector with , this function computes the first columns of the matrix

-
linalg_householder_product(A, tau)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

tau

(Tensor): tensor of shape (*, k) where * is zero or more batch dimensions.

- -

Details

+
+
linalg_householder_product(A, tau)
+
-

-H1H2...HkwithHi=ImτiviviH +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
tau
+

(Tensor): tensor of shape (*, k) where * is zero or more batch dimensions.

+
+
+

Details

+

H1H2...HkwithHi=ImτiviviH H_1H_2 ... H_k \qquad with \qquad H_i = \mathrm{I}_m - \tau_i v_i v_i^{H}

where I_m is the m-dimensional identity matrix and v_i^H is the conjugate transpose when v_i is complex, and the transpose when v_i is real-valued. -See Representation of Orthogonal or Unitary Matrices for +See Representation of Orthogonal or Unitary Matrices for further details.

Supports inputs of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if the inputs are batches of matrices then the output has the same batch dimensions.

-

Note

- +
+
+

Note

This function only uses the values strictly below the main diagonal of A. The other values are ignored.

-

See also

- -
-
    -
  • torch_geqrf() can be used together with this function to form the Q from the -linalg_qr() decomposition.

  • -
  • torch_ormqr() is a related function that computes the matrix multiplication +

+
+

See also

+
+
  • torch_geqrf() can be used together with this function to form the Q from the +linalg_qr() decomposition.

  • +
  • torch_ormqr() is a related function that computes the matrix multiplication of a product of Householder matrices with another matrix. However, that function is not supported by autograd.

  • -
- -

Other linalg: -linalg_cholesky_ex(), -linalg_cholesky(), -linalg_det(), -linalg_eigh(), -linalg_eigvalsh(), -linalg_eigvals(), -linalg_eig(), -linalg_inv_ex(), -linalg_inv(), -linalg_lstsq(), -linalg_matrix_norm(), -linalg_matrix_power(), -linalg_matrix_rank(), -linalg_multi_dot(), -linalg_norm(), -linalg_pinv(), -linalg_qr(), -linalg_slogdet(), -linalg_solve(), -linalg_svdvals(), -linalg_svd(), -linalg_tensorinv(), -linalg_tensorsolve(), -linalg_vector_norm()

+

Other linalg: +linalg_cholesky_ex(), +linalg_cholesky(), +linalg_det(), +linalg_eigh(), +linalg_eigvalsh(), +linalg_eigvals(), +linalg_eig(), +linalg_inv_ex(), +linalg_inv(), +linalg_lstsq(), +linalg_matrix_norm(), +linalg_matrix_power(), +linalg_matrix_rank(), +linalg_multi_dot(), +linalg_norm(), +linalg_pinv(), +linalg_qr(), +linalg_slogdet(), +linalg_solve(), +linalg_svdvals(), +linalg_svd(), +linalg_tensorinv(), +linalg_tensorsolve(), +linalg_vector_norm()

+
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(2, 2)
-h_tau <- torch_geqrf(A)
-Q <- linalg_householder_product(h_tau[[1]], h_tau[[2]])
-torch_allclose(Q, linalg_qr(A)[[1]])
-
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(2, 2)
+h_tau <- torch_geqrf(A)
+Q <- linalg_householder_product(h_tau[[1]], h_tau[[2]])
+torch_allclose(Q, linalg_qr(A)[[1]])
+
+}
+#> [1] TRUE
+
+
+
- - - + + diff --git a/dev/reference/linalg_inv.html b/dev/reference/linalg_inv.html index 63fac7437e7257099787ddfdeeca7fc63a1e62dc..b6e7cf57a498ea9fcba25a8c589088efbdfc8a58 100644 --- a/dev/reference/linalg_inv.html +++ b/dev/reference/linalg_inv.html @@ -1,79 +1,18 @@ - - - - - - - -Computes the inverse of a square matrix if it exists. — linalg_inv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the inverse of a square matrix if it exists. — linalg_inv • torch - - - - - + + - - - -
-
- -
- -
+
@@ -189,25 +111,22 @@

Throws a runtime_error if the matrix is not invertible.

-
linalg_inv(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of invertible matrices.

- -

Details

+
+
linalg_inv(A)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of invertible matrices.

+
+
+

Details

Letting 𝕂 be ℝ or ℂ, for a matrix A ∈ 𝕂^(n×n), its inverse matrix A^{-1} (if it exists) is defined as

-

-A1A=AA1=In +

A1A=AA1=In A^{-1}A = AA^{-1} = \mathrm{I}_n where I_n is the n-dimensional identity matrix.

@@ -216,80 +135,80 @@ the inverse is unique. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

-

Consider using linalg_solve() if possible for multiplying a matrix on the left by +

Consider using linalg_solve() if possible for multiplying a matrix on the left by the inverse, as linalg_solve(A, B) == A$inv() %*% B -It is always preferred to use linalg_solve() when possible, as it is faster and more +It is always preferred to use linalg_solve() when possible, as it is faster and more numerically stable than computing the inverse explicitly.

-

See also

- -

linalg_pinv() computes the pseudoinverse (Moore-Penrose inverse) of matrices +

+ +
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(4, 4)
-linalg_inv(A)
-
-}
-#> torch_tensor
-#>  1.4303  0.2242 -1.1203  0.6147
-#> -2.7630 -0.0928  2.2771  0.6130
-#>  0.7085 -0.3920 -0.9817 -0.4471
-#>  1.3918  0.0016 -0.3745 -0.1163
-#> [ CPUFloatType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(4, 4)
+linalg_inv(A)
+
+}
+#> torch_tensor
+#>  0.4200 -0.9976  0.4567 -1.1060
+#>  0.5622 -0.3490  0.8849 -1.3613
+#> -0.8692  0.6176 -0.2201  0.2716
+#> -0.6411 -0.1898  0.3037  0.0321
+#> [ CPUFloatType{4,4} ]
+
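A small sketch expanding on the example (not generated output; assumes torch is installed): the inverse can be checked against the identity, and linalg_solve() gives the preferred route for A^{-1} %*% B.

```r
if (torch_is_installed()) {
  A <- torch_randn(4, 4)
  Ainv <- linalg_inv(A)
  torch_allclose(torch_matmul(A, Ainv), torch_eye(4), atol = 1e-4)  # A %*% A^{-1} ≈ I

  # Preferred, more stable alternative for A^{-1} %*% B:
  B <- torch_randn(4, 2)
  torch_allclose(linalg_solve(A, B), torch_matmul(Ainv, B), atol = 1e-4)
}
```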
+
+
- - - + + diff --git a/dev/reference/linalg_inv_ex.html b/dev/reference/linalg_inv_ex.html index 03082a55886cba0b9341f8686ad82d8c5846b3c2..d4a07425e5c340fb021e8a4d95117cc7f251e217 100644 --- a/dev/reference/linalg_inv_ex.html +++ b/dev/reference/linalg_inv_ex.html @@ -1,48 +1,5 @@ - - - - - - - -Computes the inverse of a square matrix if it is invertible. — linalg_inv_ex • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the inverse of a square matrix if it is invertible. — linalg_inv_ex • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -211,88 +133,84 @@ Also supports batches of matrices, and if A is a batch of matrices the output has the same batch dimensions.

-
linalg_inv_ex(A, check_errors = FALSE)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions -consisting of square matrices.

check_errors

(bool, optional): controls whether to check the content of info. Default: FALSE.

- -

Note

+
+
linalg_inv_ex(A, check_errors = FALSE)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions +consisting of square matrices.

+
check_errors
+

(bool, optional): controls whether to check the content of info. Default: FALSE.

+
+
+

Note

If A is on a CUDA device then this function may synchronize that device with the CPU.

This function is "experimental" and it may change in a future PyTorch release.

-

See also

- -

linalg_inv() is a NumPy compatible variant that always checks for errors.

+
+ +
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(3, 3)
-out <- linalg_inv_ex(A)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(3, 3)
+out <- linalg_inv_ex(A)
+
+}
+
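A hedged sketch building on the example (the result field names `inverse` and `info` are assumed here, not confirmed by this page): with check_errors = FALSE the info code must be inspected manually.

```r
if (torch_is_installed()) {
  A <- torch_randn(3, 3)
  out <- linalg_inv_ex(A)        # no error check by default (check_errors = FALSE)
  out$info                       # assumed field name; 0 signals a successful inversion
  out2 <- linalg_inv_ex(A, check_errors = TRUE)  # raises instead if A is singular
}
```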
+
+
- - - + + diff --git a/dev/reference/linalg_lstsq.html b/dev/reference/linalg_lstsq.html index 08e769d18dfbe7ec488e793f7f0e105140b7b3e7..693e4be19ba31c908364e05011c1cede0d391b00 100644 --- a/dev/reference/linalg_lstsq.html +++ b/dev/reference/linalg_lstsq.html @@ -1,81 +1,20 @@ - - - - - - - -Computes a solution to the least squares problem of a system of linear equations. — linalg_lstsq • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a solution to the least squares problem of a system of linear equations. — linalg_lstsq • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,44 +115,34 @@ the least squares problem for a linear system with is defined as

-
linalg_lstsq(A, B, rcond = NULL, ..., driver = NULL)
+
+
linalg_lstsq(A, B, rcond = NULL, ..., driver = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
A

(Tensor): lhs tensor of shape (*, m, n) where * is zero or more batch dimensions.

B

(Tensor): rhs tensor of shape (*, m, k) where * is zero or more batch dimensions.

rcond

(float, optional): used to determine the effective rank of A. +

+

Arguments

+
A
+

(Tensor): lhs tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
B
+

(Tensor): rhs tensor of shape (*, m, k) where * is zero or more batch dimensions.

+
rcond
+

(float, optional): used to determine the effective rank of A. If rcond = NULL, rcond is set to the machine -precision of the dtype of A times max(m, n). Default: NULL.

...

currently unused.

driver

(str, optional): name of the LAPACK/MAGMA method to be used. +precision of the dtype of A times max(m, n). Default: NULL.

+
...
+

currently unused.

+
driver
+

(str, optional): name of the LAPACK/MAGMA method to be used. If NULL, 'gelsy' is used for CPU inputs and 'gels' for CUDA inputs. -Default: NULL.

- -

Value

- +Default: NULL.

+
+
+

Value

A list (solution, residuals, rank, singular_values).

-

Details

- -

-minXKn×kAXBF +

+
+

Details

+

minXKn×kAXBF \min_{X \in \mathbb{K}^{n \times k}} \|AX - B\|_F

where ‖·‖F denotes the Frobenius norm. @@ -240,16 +152,13 @@ the output has the same batch dimensions. driver chooses the LAPACK/MAGMA function that will be used.

For CPU inputs the valid values are 'gels', 'gelsy', 'gelsd, 'gelss'. For CUDA input, the only valid driver is 'gels', which assumes that A is full-rank.

-

To choose the best driver on CPU consider:

    -
  • If A is well-conditioned (its condition number is not too large), or you do not mind some precision loss.

  • +

    To choose the best driver on CPU consider:

    • If A is well-conditioned (its condition number is not too large), or you do not mind some precision loss.

    • For a general matrix: 'gelsy' (QR with pivoting) (default)

    • If A is full-rank: 'gels' (QR)

    • If A is not well-conditioned.

    • 'gelsd' (tridiagonal reduction and SVD)

    • But if you run into memory issues: 'gelss' (full SVD).

    • -
    - -

    See also the full description of these drivers

    +

See also the full description of these drivers

rcond is used to determine the effective rank of the matrices in A when driver is one of ('gelsy', 'gelsd', 'gelss'). In this case, if are the singular values of A in decreasing order, @@ -257,8 +166,7 @@ In this case, if are the singular values of A in decreasing order, If rcond = NULL (default), rcond is set to the machine precision of the dtype of A.

This function returns the solution to the problem and some extra information in a list of four tensors (solution, residuals, rank, singular_values). For inputs A, B -of shape (*, m, n), (*, m, k) respectively, it contains

    -
  • solution: the least squares solution. It has shape (*, n, k).

  • +of shape (*, m, n), (*, m, k) respectively, it contains

    • solution: the least squares solution. It has shape (*, n, k).

    • residuals: the squared residuals of the solutions, that is, . It has shape equal to the batch dimensions of A. It is computed when m > n and every matrix in A is full-rank, @@ -273,81 +181,81 @@ otherwise it is an empty tensor.

    • It has shape (*, min(m, n)). It is computed when driver is one of ('gelsd', 'gelss'), otherwise it is an empty tensor.

      -
    - -

    Note

    - +
+
+

Note

This function computes X = A$pinverse() %*% B in a faster and more numerically stable way than performing the computations separately.

-

Warning

- +
+
+

Warning

The default value of rcond may change in a future PyTorch release. It is therefore recommended to use a fixed value to avoid potential breaking changes.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-A <- torch_tensor(rbind(c(10, 2, 3), c(3, 10, 5), c(5, 6, 12)))$unsqueeze(1) # shape (1, 3, 3)
-B <- torch_stack(list(rbind(c(2, 5, 1), c(3, 2, 1), c(5, 1, 9)),
-                      rbind(c(4, 2, 9), c(2, 0, 3), c(2, 5, 3))), dim = 1) # shape (2, 3, 3)
-X <- linalg_lstsq(A, B)$solution # A is broadcasted to shape (2, 3, 3)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_tensor(rbind(c(10, 2, 3), c(3, 10, 5), c(5, 6, 12)))$unsqueeze(1) # shape (1, 3, 3)
+B <- torch_stack(list(rbind(c(2, 5, 1), c(3, 2, 1), c(5, 1, 9)),
+                      rbind(c(4, 2, 9), c(2, 0, 3), c(2, 5, 3))), dim = 1) # shape (2, 3, 3)
+X <- linalg_lstsq(A, B)$solution # A is broadcasted to shape (2, 3, 3)
+
+}
+
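A complementary sketch for the overdetermined case (an addition, not generated output; assumes torch is installed), showing the four returned fields described above.

```r
if (torch_is_installed()) {
  A <- torch_randn(10, 3)   # m > n: overdetermined system
  B <- torch_randn(10, 2)
  res <- linalg_lstsq(A, B)
  res$solution              # least squares solution, shape (3, 2)
  res$residuals             # squared residuals (may be empty depending on driver/rank)
  res$rank                  # effective rank of A
}
```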
+
+
- - - + + diff --git a/dev/reference/linalg_matrix_norm.html b/dev/reference/linalg_matrix_norm.html index b313d99333f5cd0f78ba59783b58d0f704c7f03d..ac2cd315bf138efd736039b8c6cbb640f901dca8 100644 --- a/dev/reference/linalg_matrix_norm.html +++ b/dev/reference/linalg_matrix_norm.html @@ -1,83 +1,22 @@ - - - - - - - -Computes a matrix norm. — linalg_matrix_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a matrix norm. — linalg_matrix_norm • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -197,134 +119,108 @@ dimensions specified by the 2-tuple dim and the other dimensions wi be treated as batch dimensions. The output will have the same batch dimensions.

-
linalg_matrix_norm(
-  A,
-  ord = "fro",
-  dim = c(-2, -1),
-  keepdim = FALSE,
-  dtype = NULL
-)
+
+
linalg_matrix_norm(
+  A,
+  ord = "fro",
+  dim = c(-2, -1),
+  keepdim = FALSE,
+  dtype = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
A

(Tensor): tensor with two or more dimensions. By default its +

+

Arguments

+
A
+

(Tensor): tensor with two or more dimensions. By default its shape is interpreted as (*, m, n) where * is zero or more -batch dimensions, but this behavior can be controlled using dim.

ord

(int, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: 'fro'

dim

(int, Tupleint, optional): dimensions over which to compute +batch dimensions, but this behavior can be controlled using dim.

+
ord
+

(int, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: 'fro'

+
dim
+

(int, Tuple[int], optional): dimensions over which to compute the vector or matrix norm. See above for the behavior when dim=NULL. -Default: NULL

keepdim

(bool, optional): If set to TRUE, the reduced dimensions are retained -in the result as dimensions with size one. Default: FALSE

dtype

dtype (torch_dtype, optional): If specified, the input tensor is cast to +Default: NULL

+
keepdim
+

(bool, optional): If set to TRUE, the reduced dimensions are retained +in the result as dimensions with size one. Default: FALSE

+
dtype
+

dtype (torch_dtype, optional): If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor's type -will be dtype. Default: NULL

- -

Details

- +will be dtype. Default: NULL

+
+
+

Details

ord defines the norm that is computed. The following norms are -supported:

- - - - - - - - - - - - -
ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
- - -

See also

- - +supported:

ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_arange(0, 8, dtype=torch_float())$reshape(c(3,3))
-linalg_matrix_norm(a)
-linalg_matrix_norm(a, ord = -1)
-b <- a$expand(c(2, -1, -1))
-linalg_matrix_norm(b)
-linalg_matrix_norm(b, dim = c(1, 3))
-
-}
-#> torch_tensor
-#>   3.1623
-#>  10.0000
-#>  17.2627
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_arange(0, 8, dtype=torch_float())$reshape(c(3,3))
+linalg_matrix_norm(a)
+linalg_matrix_norm(a, ord = -1)
+b <- a$expand(c(2, -1, -1))
+linalg_matrix_norm(b)
+linalg_matrix_norm(b, dim = c(1, 3))
+
+}
+#> torch_tensor
+#>   3.1623
+#>  10.0000
+#>  17.2627
+#> [ CPUFloatType{3} ]
+
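A short sketch illustrating the ord table above (an addition, not generated output; assumes torch is installed).

```r
if (torch_is_installed()) {
  a <- torch_arange(0, 8, dtype = torch_float())$reshape(c(3, 3))
  linalg_matrix_norm(a, ord = Inf)  # max row sum: max(sum(abs(x), dim = 2))
  linalg_matrix_norm(a, ord = 1)    # max column sum: max(sum(abs(x), dim = 1))
  linalg_matrix_norm(a, ord = 2)    # largest singular value
}
```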
+
+
- - - + + diff --git a/dev/reference/linalg_matrix_power.html b/dev/reference/linalg_matrix_power.html index 3698b910653b5bfce3cff98458007897fb6885e3..fbcc88df5e013ad6a5be87735ae7b1af1bcef074 100644 --- a/dev/reference/linalg_matrix_power.html +++ b/dev/reference/linalg_matrix_power.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the n-th power of a square matrix for an integer n. — linalg_matrix_power • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the n-th power of a square matrix for an integer n. — linalg_matrix_power • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -193,93 +115,89 @@ Also supports batches of matrices, and if A is a batch of matrices the output has the same batch dimensions.

-
linalg_matrix_power(A, n)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, m) where * is zero or more batch dimensions.

n

(int): the exponent.

- -

Details

+
+
linalg_matrix_power(A, n)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, m) where * is zero or more batch dimensions.

+
n
+

(int): the exponent.

+
+
+

Details

If n=0, it returns the identity matrix (or batch) of the same shape as A. If n is negative, it returns the inverse of each matrix (if invertible) raised to the power of abs(n).

-

See also

- -

linalg_solve() computes A$inverse() %*% B with a +

+ +
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(3, 3)
-linalg_matrix_power(A, 0)
-
-}
-#> torch_tensor
-#>  1  0  0
-#>  0  1  0
-#>  0  0  1
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(3, 3)
+linalg_matrix_power(A, 0)
+
+}
+#> torch_tensor
+#>  1  0  0
+#>  0  1  0
+#>  0  0  1
+#> [ CPUFloatType{3,3} ]
+
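A sketch of the negative-exponent behavior described in Details (an addition; assumes torch is installed): for n < 0 the inverse is raised to abs(n).

```r
if (torch_is_installed()) {
  A <- torch_randn(3, 3)
  p <- linalg_matrix_power(A, -2)        # inverse raised to the power 2
  q <- linalg_inv(torch_matmul(A, A))    # equivalently, the inverse of A %*% A
  torch_allclose(p, q, atol = 1e-4)      # equal up to numerical error
}
```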
+
+
- - - + + diff --git a/dev/reference/linalg_matrix_rank.html b/dev/reference/linalg_matrix_rank.html index c14c6e5f65d6dc68bf3932cbe5ba385e92cb395b..dcf3c842bc795d35135e83d38c3a4a72798e7561 100644 --- a/dev/reference/linalg_matrix_rank.html +++ b/dev/reference/linalg_matrix_rank.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the numerical rank of a matrix. — linalg_matrix_rank • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the numerical rank of a matrix. — linalg_matrix_rank • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -193,30 +115,24 @@ that are greater than the specified tol threshold." /> that are greater than the specified tol threshold.

-
linalg_matrix_rank(A, tol = NULL, hermitian = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more -batch dimensions.

tol

(float, Tensor, optional): the tolerance value. See above for -the value it takes when NULL. Default: NULL.

hermitian

(bool, optional): indicates whether A is Hermitian if complex -or symmetric if real. Default: FALSE.

- -

Details

+
+
linalg_matrix_rank(A, tol = NULL, hermitian = FALSE)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more +batch dimensions.

+
tol
+

(float, Tensor, optional): the tolerance value. See above for +the value it takes when NULL. Default: NULL.

+
hermitian
+

(bool, optional): indicates whether A is Hermitian if complex +or symmetric if real. Default: FALSE.

+
+
+

Details

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.

@@ -225,78 +141,77 @@ symmetric if real, but this is not checked internally. Instead, just the lower triangular part of the matrix is used in the computations.

If tol is not specified and A is a matrix of dimensions (m, n), the tolerance is set to be

-

-tol=σ1max(m,n)ε +

tol=σ1max(m,n)ε tol = \sigma_1 \max(m, n) \varepsilon

where σ1 is the largest singular value (or eigenvalue in absolute value when hermitian = TRUE), and - ε is the epsilon value for the dtype of A (see torch_finfo()).

+ is the epsilon value for the dtype of A (see torch_finfo()).

If A is a batch of matrices, tol is computed this way for every element of the batch.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_eye(10)
-linalg_matrix_rank(a)
-
-}
-#> torch_tensor
-#> 10
-#> [ CPULongType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_eye(10)
+linalg_matrix_rank(a)
+
+}
+#> torch_tensor
+#> 10
+#> [ CPULongType{} ]
+
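A small sketch of a rank-deficient case (an addition, not generated output; assumes torch is installed).

```r
if (torch_is_installed()) {
  a <- torch_eye(10)
  a[10, 10] <- 0                            # zero one diagonal entry: rank drops to 9
  linalg_matrix_rank(a)
  linalg_matrix_rank(a, hermitian = TRUE)   # a is symmetric, so this path also applies
}
```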
+
+
- - - + + diff --git a/dev/reference/linalg_multi_dot.html b/dev/reference/linalg_multi_dot.html index 8269daba59e4904305d23e58626087e6d31a4f54..2b2c206d88d87b7a55e355049a3f9b7d2f91407b 100644 --- a/dev/reference/linalg_multi_dot.html +++ b/dev/reference/linalg_multi_dot.html @@ -1,80 +1,19 @@ - - - - - - - -Efficiently multiplies two or more matrices — linalg_multi_dot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Efficiently multiplies two or more matrices — linalg_multi_dot • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,20 +113,18 @@ the fewest arithmetic operations are performed." /> the fewest arithmetic operations are performed.

-
linalg_multi_dot(tensors)
- -

Arguments

- - - - - - -
tensors

(Sequence[Tensor]): two or more tensors to multiply. The first and last -tensors may be 1D or 2D. Every other tensor must be 2D.

- -

Details

+
+
linalg_multi_dot(tensors)
+
+
+

Arguments

+
tensors
+

(Sequence[Tensor]): two or more tensors to multiply. The first and last +tensors may be 1D or 2D. Every other tensor must be 2D.

+
+
+

Details

Supports inputs of float, double, cfloat and cdouble dtypes. This function does not support batched inputs.

Every tensor in tensors must be 2D, except for the first and last which @@ -213,85 +133,85 @@ of shape (1, n), similarly if the last tensor is a 1D vector of sha as a column vector of shape (n, 1).

If the first and last tensors are matrices, the output will be a matrix. However, if either is a 1D vector, then the output will be a 1D vector.

-

Note

- -

This function is implemented by chaining torch_mm() calls after +

+
+

Note

+

This function is implemented by chaining torch_mm() calls after computing the optimal matrix multiplication order.

The cost of multiplying two matrices with shapes (a, b) and (b, c) is a * b * c. Given matrices A, B, C with shapes (10, 100), (100, 5), (5, 50) respectively, we can calculate the cost of different multiplication orders as follows:

-

-cost((AB)C)=10×100×5+10×5×50=7500 cost(A(BC))=10×100×50+100×5×50=75000 +

cost((AB)C)=10×100×5+10×5×50=7500 cost(A(BC))=10×100×50+100×5×50=75000 \begin{align*} \operatorname{cost}((AB)C) &= 10 \times 100 \times 5 + 10 \times 5 \times 50 = 7500 \ \operatorname{cost}(A(BC)) &= 10 \times 100 \times 50 + 100 \times 5 \times 50 = 75000 \end{align*}

In this case, multiplying A and B first followed by C is 10 times faster.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-
-linalg_multi_dot(list(torch_tensor(c(1,2)), torch_tensor(c(2,3))))
-
-}
-#> torch_tensor
-#> 8
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+linalg_multi_dot(list(torch_tensor(c(1,2)), torch_tensor(c(2,3))))
+
+}
+#> torch_tensor
+#> 8
+#> [ CPUFloatType{} ]
+
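A sketch of the cost example from the Note above (an addition; assumes torch is installed): with shapes (10, 100), (100, 5), (5, 50), multi_dot picks the cheaper (AB)C ordering while returning the same product.

```r
if (torch_is_installed()) {
  A <- torch_randn(10, 100)
  B <- torch_randn(100, 5)
  C <- torch_randn(5, 50)
  out <- linalg_multi_dot(list(A, B, C))  # evaluates in the cheaper (AB)C order
  torch_allclose(out, torch_matmul(torch_matmul(A, B), C), atol = 1e-3)
}
```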
+
+
- - - + + diff --git a/dev/reference/linalg_norm.html b/dev/reference/linalg_norm.html index cd7141b41beffa0d8847d144a39d3d48bdaf4d1b..4432994952760fa5855284b846f228158ed7830a 100644 --- a/dev/reference/linalg_norm.html +++ b/dev/reference/linalg_norm.html @@ -1,48 +1,5 @@ - - - - - - - -Computes a vector or matrix norm. — linalg_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a vector or matrix norm. — linalg_norm • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+

If A is complex-valued, it computes the norm of A$abs(). Supports input of float, double, cfloat and cdouble dtypes. -Whether this function computes a vector or matrix norm is determined as follows:

    -
  • If dim is an int, the vector norm will be computed.

  • +Whether this function computes a vector or matrix norm is determined as follows:

    • If dim is an int, the vector norm will be computed.

    • If dim is a 2-tuple, the matrix norm will be computed.

    • If dim=NULL and ord=NULL, A will be flattened to 1D and the 2-norm of the resulting vector will be computed.

    • If dim=NULL and ord!=NULL, A must be 1D or 2D.

    • -
    +
+
+
linalg_norm(A, ord = NULL, dim = NULL, keepdim = FALSE, dtype = NULL)
-
linalg_norm(A, ord = NULL, dim = NULL, keepdim = FALSE, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
A

(Tensor): tensor of shape (*, n) or (*, m, n) where * is zero or more batch dimensions

ord

(int, float, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: NULL

dim

(int, Tupleint, optional): dimensions over which to compute +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, n) or (*, m, n) where * is zero or more batch dimensions

+
ord
+

(int, float, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: NULL

+
dim
+

(int, Tuple[int], optional): dimensions over which to compute the vector or matrix norm. See above for the behavior when dim=NULL. -Default: NULL

keepdim

(bool, optional): If set to TRUE, the reduced dimensions are retained -in the result as dimensions with size one. Default: FALSE

dtype

dtype (torch_dtype, optional): If specified, the input tensor is cast to +Default: NULL

+
keepdim
+

(bool, optional): If set to TRUE, the reduced dimensions are retained +in the result as dimensions with size one. Default: FALSE

+
dtype
+

dtype (torch_dtype, optional): If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor's type -will be dtype. Default: NULL

- -

Details

- +will be dtype. Default: NULL

+
+
+

Details

ord defines the norm that is computed. The following norms are -supported:

- - - - - - - - - - - - -
ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
- - -

See also

- - +supported:

ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_arange(0, 8, dtype=torch_float()) - 4
-a
-b <- a$reshape(c(3, 3))
-b
-
-linalg_norm(a)
-linalg_norm(b)
-
-}
-#> torch_tensor
-#> 7.74597
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_arange(0, 8, dtype=torch_float()) - 4
+a
+b <- a$reshape(c(3, 3))
+b
+
+linalg_norm(a)
+linalg_norm(b)
+
+}
+#> torch_tensor
+#> 7.74597
+#> [ CPUFloatType{} ]
+
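A sketch of the vector/matrix dispatch rules listed above (an addition, not generated output; assumes torch is installed).

```r
if (torch_is_installed()) {
  a <- torch_randn(3, 3)
  linalg_norm(a)                              # ord = NULL, dim = NULL: flatten, then 2-norm
  linalg_norm(a, dim = 1)                     # int dim: vector 2-norm down each column
  linalg_norm(a, ord = "fro", dim = c(1, 2))  # 2-tuple dim: matrix (Frobenius) norm
}
```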
+
+
- - - + + diff --git a/dev/reference/linalg_pinv.html b/dev/reference/linalg_pinv.html index f39efe2a9610c478f154cc754c096ad85e5fd4f0..b3daf760f23191a9a5f96a91ff02dd76462f9d1e 100644 --- a/dev/reference/linalg_pinv.html +++ b/dev/reference/linalg_pinv.html @@ -1,83 +1,22 @@ - - - - - - - -Computes the pseudoinverse (Moore-Penrose inverse) of a matrix. — linalg_pinv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the pseudoinverse (Moore-Penrose inverse) of a matrix. — linalg_pinv • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,121 +119,113 @@ Also supports batches of matrices, and if A is a batch of matrices the output has the same batch dimensions.

-
linalg_pinv(A, rcond = 1e-15, hermitian = FALSE)
+
+
linalg_pinv(A, rcond = 1e-15, hermitian = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

rcond

(float or Tensor, optional): the tolerance value used to determine when a singular value is zero +

+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
rcond
+

(float or Tensor, optional): the tolerance value used to determine when a singular value is zero. If it is a torch_Tensor, its shape must be broadcastable to that of the singular values of -A as returned by linalg_svd(). -Default: 1e-15.

hermitian

(bool, optional): indicates whether A is Hermitian if complex -or symmetric if real. Default: FALSE.

- -

Details

- +A as returned by linalg_svd(). +Default: 1e-15.

+
hermitian
+

(bool, optional): indicates whether A is Hermitian if complex +or symmetric if real. Default: FALSE.

+
+
+

Details

If hermitian= TRUE, A is assumed to be Hermitian if complex or symmetric if real, but this is not checked internally. Instead, just the lower triangular part of the matrix is used in the computations. The singular values (or the norm of the eigenvalues when hermitian= TRUE) that are below the specified rcond threshold are treated as zero and discarded in the computation.

-

Note

- -

This function uses linalg_svd() if hermitian= FALSE and -linalg_eigh() if hermitian= TRUE. +

+
+

Note

+

This function uses linalg_svd() if hermitian= FALSE and +linalg_eigh() if hermitian= TRUE. For CUDA inputs, this function synchronizes that device with the CPU.

-

Consider using linalg_lstsq() if possible for multiplying a matrix on the left by +

Consider using linalg_lstsq() if possible for multiplying a matrix on the left by the pseudoinverse, as linalg_lstsq(A, B)$solution == A$pinv() %*% B

-

It is always preferred to use linalg_lstsq() when possible, as it is faster and more +

It is always preferred to use linalg_lstsq() when possible, as it is faster and more
numerically stable than computing the pseudoinverse explicitly.

-

See also

- -
-
+ +
-

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(3, 5)
-linalg_pinv(A)
-
-}
-#> torch_tensor
-#>  0.0540 -0.5891 -0.7179
-#>  0.1619  0.2299  0.6291
-#> -0.2104  0.0714 -0.0272
-#>  0.0113  0.0798  0.7690
-#>  0.3353  0.2787  0.2296
-#> [ CPUFloatType{5,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(3, 5)
+linalg_pinv(A)
+
+}
+#> torch_tensor
+#> -0.1550  0.1730 -0.1443
+#>  0.8198 -1.0037  0.4749
+#>  0.2307  0.3540 -0.2147
+#> -0.3399  0.1085 -0.4138
+#> -0.3653 -0.0718  0.1589
+#> [ CPUFloatType{5,3} ]
+
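As a complement to the example above, the pseudoinverse can be sanity-checked against the Moore-Penrose identities. This is a minimal sketch, assuming the torch package is attached; the atol value is an arbitrary floating point tolerance, not part of the original example:

```r
library(torch)

A <- torch_randn(3, 5)
P <- linalg_pinv(A)

# Two of the Moore-Penrose conditions defining the pseudoinverse:
# A P A == A  and  P A P == P, up to floating point tolerance.
torch_allclose(torch_matmul(torch_matmul(A, P), A), A, atol = 1e-5)
torch_allclose(torch_matmul(torch_matmul(P, A), P), P, atol = 1e-5)
```

Both checks should hold for any input, since they are the defining properties of the Moore-Penrose inverse rather than a property of this particular random matrix.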
+
+
- - - + + diff --git a/dev/reference/linalg_qr.html b/dev/reference/linalg_qr.html index 77d8e0a05f3b2064501f0ecde3a2e9ad78536c76..96b486a2e9d473a8ab86c232f882b0f1d0c72500 100644 --- a/dev/reference/linalg_qr.html +++ b/dev/reference/linalg_qr.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the QR decomposition of a matrix. — linalg_qr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the QR decomposition of a matrix. — linalg_qr • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,37 +115,32 @@ the full QR decomposition of a matrix is defined as

-
linalg_qr(A, mode = "reduced")
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

mode

(str, optional): one of 'reduced', 'complete', 'r'. -Controls the shape of the returned tensors. Default: 'reduced'.

- -

Value

+
+
linalg_qr(A, mode = "reduced")
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
mode
+

(str, optional): one of 'reduced', 'complete', 'r'. +Controls the shape of the returned tensors. Default: 'reduced'.

+
+
+

Value

A list (Q, R).

-

Details

- -

-A=QRQKm×m,RKm×n +

+
+

Details

+

A=QRQKm×m,RKm×n A = QR\mathrlap{\qquad Q \in \mathbb{K}^{m \times m}, R \in \mathbb{K}^{m \times n}}

where Q is orthogonal in the real case and unitary in the complex case, and R is upper triangular. When m > n (tall matrix), as R is upper triangular, its last m - n rows are zero. In this case, we can drop the last m - n columns of Q to form the reduced QR decomposition:

-

-A=QRQKm×n,RKn×n +

A=QRQKm×n,RKn×n A = QR\mathrlap{\qquad Q \in \mathbb{K}^{m \times n}, R \in \mathbb{K}^{n \times n}}

The reduced QR decomposition agrees with the full QR decomposition when n >= m (wide matrix). @@ -231,80 +148,77 @@ Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions. The parameter mode chooses between the full and reduced QR decomposition.

-

If A has shape (*, m, n), denoting k = min(m, n)

+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_tensor(rbind(c(12., -51, 4), c(6, 167, -68), c(-4, 24, -41)))
-qr <- linalg_qr(a)
-
-torch_mm(qr[[1]], qr[[2]])$round()
-torch_mm(qr[[1]]$t(), qr[[1]])$round()
-
-}
-#> torch_tensor
-#>  1  0  0
-#>  0  1  0
-#>  0  0  1
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_tensor(rbind(c(12., -51, 4), c(6, 167, -68), c(-4, 24, -41)))
+qr <- linalg_qr(a)
+
+torch_mm(qr[[1]], qr[[2]])$round()
+torch_mm(qr[[1]]$t(), qr[[1]])$round()
+
+}
+#> torch_tensor
+#>  1  0  0
+#>  0  1  0
+#>  0  0  1
+#> [ CPUFloatType{3,3} ]
+
+
+
- - - + + diff --git a/dev/reference/linalg_slogdet.html b/dev/reference/linalg_slogdet.html index f5fb627f7b13bd6375b96f80493b5c4d107cd106..6f41d76b998e13570f34dcd4732e55e14caa28d2 100644 --- a/dev/reference/linalg_slogdet.html +++ b/dev/reference/linalg_slogdet.html @@ -1,83 +1,22 @@ - - - - - - - -Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix. — linalg_slogdet • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix. — linalg_slogdet • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,101 +119,97 @@ Also supports batches of matrices, and if A is a batch of matrices the output has the same batch dimensions.

-
linalg_slogdet(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

- -

Value

+
+
linalg_slogdet(A)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

+
+
+

Value

A list (sign, logabsdet). logabsdet will always be real-valued, even when A is complex. sign will have the same dtype as A.

-

Notes

- +
+
+

Notes

-
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_randn(3,3)
-linalg_slogdet(a)
-
-}
-#> [[1]]
-#> torch_tensor
-#> 1
-#> [ CPUFloatType{} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> 2.03651
-#> [ CPUFloatType{} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_randn(3,3)
+linalg_slogdet(a)
+
+}
+#> [[1]]
+#> torch_tensor
+#> 1
+#> [ CPUFloatType{} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> -1.57985
+#> [ CPUFloatType{} ]
+#> 
+
+
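The sign and log-absolute-determinant returned above can be recombined into the plain determinant. A small sketch, assuming torch is attached; it simply compares sign * exp(logabsdet) against linalg_det():

```r
library(torch)

a <- torch_randn(3, 3)
res <- linalg_slogdet(a)

# det(a) == sign * exp(logabsdet), up to floating point tolerance
det_from_slogdet <- res[[1]] * torch_exp(res[[2]])
torch_allclose(det_from_slogdet, linalg_det(a), atol = 1e-5)
```

Working with logabsdet directly is preferable when the determinant may overflow or underflow; the recombination here is only a consistency check.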
+
- - - + + diff --git a/dev/reference/linalg_solve.html b/dev/reference/linalg_solve.html index ace806325e224f2176c9b3fb9a55fed9b4347a7f..23199ed04d1fdc22fe79449e03d0a766ed96856a 100644 --- a/dev/reference/linalg_solve.html +++ b/dev/reference/linalg_solve.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the solution of a square system of linear equations with a unique solution. — linalg_solve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the solution of a square system of linear equations with a unique solution. — linalg_solve • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,24 +115,20 @@ this function computes the solution of the associated linear system, which is defined as

-
linalg_solve(A, B)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

B

(Tensor): right-hand side tensor of shape (*, n) or (*, n, k) or (n,) or (n, k) -according to the rules described above

- -

Details

+
+
linalg_solve(A, B)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, n, n) where * is zero or more batch dimensions.

+
B
+

(Tensor): right-hand side tensor of shape (*, n) or (*, n, k) or (n,) or (n, k) +according to the rules described above

+
+
+

Details

$$ AX = B $$

@@ -219,82 +137,80 @@ This function assumes that A is invertible. Supports inputs of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if the inputs are batches of matrices then the output has the same batch dimensions.

-

Letting * be zero or more batch dimensions,

    -
  • If A has shape (*, n, n) and B has shape (*, n) (a batch of vectors) or shape +

    Letting * be zero or more batch dimensions,

    • If A has shape (*, n, n) and B has shape (*, n) (a batch of vectors) or shape (*, n, k) (a batch of matrices or "multiple right-hand sides"), this function returns X of shape (*, n) or (*, n, k) respectively.

    • Otherwise, if A has shape (*, n, n) and B has shape (n,) or (n, k), B is broadcasted to have shape (*, n) or (*, n, k) respectively.

    • -
    - -

    This function then returns the solution of the resulting batch of systems of linear equations.

    -

    Note

    - +

This function then returns the solution of the resulting batch of systems of linear equations.

+
+
+

Note

This function computes X = A$inverse() @ B in a faster and more numerically stable way than performing the computations separately.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(3, 3)
-b <- torch_randn(3)
-x <- linalg_solve(A, b)
-torch_allclose(torch_matmul(A, x), b)
-
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(3, 3)
+b <- torch_randn(3)
+x <- linalg_solve(A, b)
+torch_allclose(torch_matmul(A, x), b)
+
+}
+#> [1] TRUE
+
+
+
- - - + + diff --git a/dev/reference/linalg_svd.html b/dev/reference/linalg_svd.html index 7f99caabf133b2a32014968e6fdf07e14da09778..b8bc4152305035d8b7c62433f287db5948b05848 100644 --- a/dev/reference/linalg_svd.html +++ b/dev/reference/linalg_svd.html @@ -1,81 +1,20 @@ - - - - - - - -Computes the singular value decomposition (SVD) of a matrix. — linalg_svd • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the singular value decomposition (SVD) of a matrix. — linalg_svd • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,41 +115,36 @@ the full SVD of a matrix , if k = min(m,n), is defined as

-
linalg_svd(A, full_matrices = TRUE)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

full_matrices

(bool, optional): controls whether to compute the full or reduced -SVD, and consequently, the shape of the returned tensors U and V. Default: TRUE.

- -

Value

+
+
linalg_svd(A, full_matrices = TRUE)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
full_matrices
+

(bool, optional): controls whether to compute the full or reduced +SVD, and consequently, the shape of the returned tensors U and V. Default: TRUE.

+
+
+

Value

A list (U, S, V) which corresponds to U, diag(S), V^H above. S will always be real-valued, even when A is complex. It will also be ordered in descending order. U and V will have the same dtype as A. The left / right singular vectors will be given by the columns of U and the rows of V respectively.

-

Details

- -

-A=Udiag(S)VHUKm×m,SRk,VKn×n +

+
+

Details

+

A=Udiag(S)VHUKm×m,SRk,VKn×n A = U \operatorname{diag}(S) V^{H} \mathrlap{\qquad U \in \mathbb{K}^{m \times m}, S \in \mathbb{R}^k, V \in \mathbb{K}^{n \times n}}

where , is the conjugate transpose when is complex, and the transpose when is real-valued.

The matrices U, V (and thus V^H) are orthogonal in the real case, and unitary in the complex case. When m > n (resp. m < n) we can drop the last m - n (resp. n - m) columns of U (resp. V) to form the reduced SVD:

-

-A=Udiag(S)VHUKm×k,SRk,VKk×n +

A=Udiag(S)VHUKm×k,SRk,VKk×n A = U \operatorname{diag}(S) V^{H} \mathrlap{\qquad U \in \mathbb{K}^{m \times k}, S \in \mathbb{R}^k, V \in \mathbb{K}^{k \times n}}

where .

@@ -239,13 +156,15 @@ the output has the same batch dimensions.

The returned list (U, S, V) corresponds to U, diag(S), V^H above.

The singular values are returned in descending order. The parameter full_matrices chooses between the full (default) and reduced SVD.

-

Note

- +
+
+

Note

When full_matrices=TRUE, the gradients with respect to U[..., :, min(m, n):] and Vh[..., min(m, n):, :] will be ignored, as those vectors can be arbitrary bases of the corresponding subspaces.

-

Warnings

- +
+
+

Warnings

The returned tensors U and V are not unique, nor are they continuous with @@ -267,103 +186,100 @@ the gradient will be numerically unstable, as it depends on the singular values . The gradient will also be numerically unstable when A has small singular values, as it also depends on the computation of .

-

See also

- -
-
+
+

See also

+
+
  • linalg_svdvals() computes only the singular values. +Unlike linalg_svd(), the gradients of linalg_svdvals() are always numerically stable.

  • -
  • linalg_eig() for a function that computes another type of spectral +

  • linalg_eig() for a function that computes another type of spectral decomposition of a matrix. The eigendecomposition works just on on square matrices.

  • -
  • linalg_eigh() for a (faster) function that computes the eigenvalue decomposition +

  • linalg_eigh() for a (faster) function that computes the eigenvalue decomposition for Hermitian and symmetric matrices.

  • -
  • linalg_qr() for another (much faster) decomposition that works on general +

  • linalg_qr() for another (much faster) decomposition that works on general matrices.

  • -
- -

Other linalg: -linalg_cholesky_ex(), -linalg_cholesky(), -linalg_det(), -linalg_eigh(), -linalg_eigvalsh(), -linalg_eigvals(), -linalg_eig(), -linalg_householder_product(), -linalg_inv_ex(), -linalg_inv(), -linalg_lstsq(), -linalg_matrix_norm(), -linalg_matrix_power(), -linalg_matrix_rank(), -linalg_multi_dot(), -linalg_norm(), -linalg_pinv(), -linalg_qr(), -linalg_slogdet(), -linalg_solve(), -linalg_svdvals(), -linalg_tensorinv(), -linalg_tensorsolve(), -linalg_vector_norm()

+

Other linalg: +linalg_cholesky_ex(), +linalg_cholesky(), +linalg_det(), +linalg_eigh(), +linalg_eigvalsh(), +linalg_eigvals(), +linalg_eig(), +linalg_householder_product(), +linalg_inv_ex(), +linalg_inv(), +linalg_lstsq(), +linalg_matrix_norm(), +linalg_matrix_power(), +linalg_matrix_rank(), +linalg_multi_dot(), +linalg_norm(), +linalg_pinv(), +linalg_qr(), +linalg_slogdet(), +linalg_solve(), +linalg_svdvals(), +linalg_tensorinv(), +linalg_tensorsolve(), +linalg_vector_norm()

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(5, 3)
-linalg_svd(a, full_matrices=FALSE)
-
-}
-#> [[1]]
-#> torch_tensor
-#> -0.2344 -0.5241  0.1385
-#> -0.2371 -0.6090  0.4807
-#> -0.3938  0.5668  0.7073
-#>  0.8399 -0.0160  0.4752
-#>  0.1683 -0.1816  0.1537
-#> [ CPUFloatType{5,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  3.6441
-#>  1.9351
-#>  0.8854
-#> [ CPUFloatType{3} ]
-#> 
-#> [[3]]
-#> torch_tensor
-#>  0.6504  0.7497  0.1219
-#> -0.3620  0.1649  0.9175
-#> -0.6677  0.6409 -0.3787
-#> [ CPUFloatType{3,3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(5, 3)
+linalg_svd(a, full_matrices=FALSE)
+
+}
+#> [[1]]
+#> torch_tensor
+#>  0.4204  0.2025 -0.1607
+#> -0.0947 -0.0915  0.8299
+#>  0.0823 -0.9359 -0.2467
+#> -0.8371 -0.0819 -0.0139
+#> -0.3269  0.2608 -0.4736
+#> [ CPUFloatType{5,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  2.5337
+#>  2.3223
+#>  1.9506
+#> [ CPUFloatType{3} ]
+#> 
+#> [[3]]
+#> torch_tensor
+#> -0.2494  0.5621 -0.7885
+#>  0.1513  0.8269  0.5416
+#>  0.9565  0.0158 -0.2913
+#> [ CPUFloatType{3,3} ]
+#> 
+
+
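The reduced factors returned above can be multiplied back together to recover the input, which is a useful check of the U diag(S) V^H convention described in the Details. A sketch, assuming torch is attached:

```r
library(torch)

a <- torch_randn(5, 3)
usv <- linalg_svd(a, full_matrices = FALSE)

# Scale the columns of U by the singular values, then multiply by the
# third element (whose rows are the right singular vectors, i.e. V^H).
recon <- torch_matmul(usv[[1]] * usv[[2]]$unsqueeze(1), usv[[3]])
torch_allclose(recon, a, atol = 1e-5)
```

The broadcasted multiplication `usv[[1]] * usv[[2]]$unsqueeze(1)` is equivalent to forming diag(S) explicitly but avoids building the diagonal matrix.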
+
- - - + + diff --git a/dev/reference/linalg_svdvals.html b/dev/reference/linalg_svdvals.html index 6491c7019a16bd73cdd8d675c0c2afb46d152135..8619fae7324321a6b5bf4cbe3518f49543e0e8ba 100644 --- a/dev/reference/linalg_svdvals.html +++ b/dev/reference/linalg_svdvals.html @@ -1,82 +1,21 @@ - - - - - - - -Computes the singular values of a matrix. — linalg_svdvals • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the singular values of a matrix. — linalg_svdvals • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -195,87 +117,85 @@ the output has the same batch dimensions. The singular values are returned in descending order.

-
linalg_svdvals(A)
- -

Arguments

- - - - - - -
A

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

- -

Value

+
+
linalg_svdvals(A)
+
+
+

Arguments

+
A
+

(Tensor): tensor of shape (*, m, n) where * is zero or more batch dimensions.

+
+ -

Examples

-
if (torch_is_installed()) {
-A <- torch_randn(5, 3)
-S <- linalg_svdvals(A)
-S
-
-}
-#> torch_tensor
-#>  3.7771
-#>  1.8606
-#>  1.3879
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_randn(5, 3)
+S <- linalg_svdvals(A)
+S
+
+}
+#> torch_tensor
+#>  2.6794
+#>  1.9196
+#>  0.8711
+#> [ CPUFloatType{3} ]
+
+
+
- - - + + diff --git a/dev/reference/linalg_tensorinv.html b/dev/reference/linalg_tensorinv.html index a7e5221ebc4b47fffd72cef7dba04878a20f6c34..69d2b33377751c1b62116568d6c0103c4c980cfe 100644 --- a/dev/reference/linalg_tensorinv.html +++ b/dev/reference/linalg_tensorinv.html @@ -1,82 +1,21 @@ - - - - - - - -Computes the multiplicative inverse of torch_tensordot() — linalg_tensorinv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the multiplicative inverse of torch_tensordot() — linalg_tensorinv • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,104 +117,98 @@ If this is the case, it computes a tensor X such that tensordot(A, X, ind) is the identity matrix in dimension m.

-
linalg_tensorinv(A, ind = 3L)
- -

Arguments

- - - - - - - - - - -
A

(Tensor): tensor to invert.

ind

(int): index at which to compute the inverse of torch_tensordot(). Default: 3.

- -

Details

+
+
linalg_tensorinv(A, ind = 3L)
+
+
+

Arguments

+
A
+

(Tensor): tensor to invert.

+
ind
+

(int): index at which to compute the inverse of torch_tensordot(). Default: 3.

+
+
+

Details

Supports input of float, double, cfloat and cdouble dtypes.

-

Note

- -

Consider using linalg_tensorsolve() if possible for multiplying a tensor on the left +

+
+

Note

+

Consider using linalg_tensorsolve() if possible for multiplying a tensor on the left by the tensor inverse as linalg_tensorsolve(A, B) == torch_tensordot(linalg_tensorinv(A), B))

-

It is always preferred to use linalg_tensorsolve() when possible, as it is faster and more +

It is always preferred to use linalg_tensorsolve() when possible, as it is faster and more
numerically stable than computing the pseudoinverse explicitly.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-A <- torch_eye(4 * 6)$reshape(c(4, 6, 8, 3))
-Ainv <- linalg_tensorinv(A, ind=3)
-Ainv$shape
-B <- torch_randn(4, 6)
-torch_allclose(torch_tensordot(Ainv, B), linalg_tensorsolve(A, B))
-
-A <- torch_randn(4, 4)
-Atensorinv<- linalg_tensorinv(A, 2)
-Ainv <- linalg_inv(A)
-torch_allclose(Atensorinv, Ainv)
-
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_eye(4 * 6)$reshape(c(4, 6, 8, 3))
+Ainv <- linalg_tensorinv(A, ind=3)
+Ainv$shape
+B <- torch_randn(4, 6)
+torch_allclose(torch_tensordot(Ainv, B), linalg_tensorsolve(A, B))
+
+A <- torch_randn(4, 4)
+Atensorinv<- linalg_tensorinv(A, 2)
+Ainv <- linalg_inv(A)
+torch_allclose(Atensorinv, Ainv)
+
+}
+#> [1] TRUE
+
+
+
- - - + + diff --git a/dev/reference/linalg_tensorsolve.html b/dev/reference/linalg_tensorsolve.html index b313a71adf040eda0b5aae6800af411142faec7f..cf7d5e017568d0e91583776119af214477e03650 100644 --- a/dev/reference/linalg_tensorsolve.html +++ b/dev/reference/linalg_tensorsolve.html @@ -1,82 +1,21 @@ - - - - - - - -Computes the solution X to the system torch_tensordot(A, X) = B. — linalg_tensorsolve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the solution X to the system torch_tensordot(A, X) = B. — linalg_tensorsolve • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,107 +117,98 @@ The returned tensor x satisfies tensordot(A, x, dims=x$ndim) == B.

-
linalg_tensorsolve(A, B, dims = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
A

(Tensor): tensor to solve for.

B

(Tensor): the solution

dims

(Tuple of int, optional): dimensions of A to be moved. -If NULL, no dimensions are moved. Default: NULL.

- -

Details

+
+
linalg_tensorsolve(A, B, dims = NULL)
+
+
+

Arguments

+
A
+

(Tensor): tensor to solve for.

+
B
+

(Tensor): the solution

+
dims
+

(Tuple of int, optional): dimensions of A to be moved. +If NULL, no dimensions are moved. Default: NULL.

+
+
+

Details

If dims is specified, A will be reshaped as A = movedim(A, dims, seq(len(dims) - A$ndim + 1, 0))

Supports inputs of float, double, cfloat and cdouble dtypes.

-

See also

- - +
+ -

Examples

-
if (torch_is_installed()) {
-A <- torch_eye(2 * 3 * 4)$reshape(c(2 * 3, 4, 2, 3, 4))
-B <- torch_randn(2 * 3, 4)
-X <- linalg_tensorsolve(A, B)
-X$shape
-torch_allclose(torch_tensordot(A, X, dims=X$ndim), B)
-
-A <- torch_randn(6, 4, 4, 3, 2)
-B <- torch_randn(4, 3, 2)
-X <- linalg_tensorsolve(A, B, dims=c(1, 3))
-A <- A$permute(c(2, 4, 5, 1, 3))
-torch_allclose(torch_tensordot(A, X, dims=X$ndim), B, atol=1e-6)
-
-}
-#> [1] FALSE
-
+
+

Examples

+
if (torch_is_installed()) {
+A <- torch_eye(2 * 3 * 4)$reshape(c(2 * 3, 4, 2, 3, 4))
+B <- torch_randn(2 * 3, 4)
+X <- linalg_tensorsolve(A, B)
+X$shape
+torch_allclose(torch_tensordot(A, X, dims=X$ndim), B)
+
+A <- torch_randn(6, 4, 4, 3, 2)
+B <- torch_randn(4, 3, 2)
+X <- linalg_tensorsolve(A, B, dims=c(1, 3))
+A <- A$permute(c(2, 4, 5, 1, 3))
+torch_allclose(torch_tensordot(A, X, dims=X$ndim), B, atol=1e-6)
+
+}
+#> [1] FALSE
+
+
+
- - - + + diff --git a/dev/reference/linalg_vector_norm.html b/dev/reference/linalg_vector_norm.html index 5fa8e642d43fe35d427bef5c63ef2bdd08a00173..b6b3cc243315d35a01f3e726db7439dc51b4f988 100644 --- a/dev/reference/linalg_vector_norm.html +++ b/dev/reference/linalg_vector_norm.html @@ -1,82 +1,21 @@ - - - - - - - -Computes a vector norm. — linalg_vector_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a vector norm. — linalg_vector_norm • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,134 +117,105 @@ This function does not necessarily treat multidimensional A as a batch of vectors, instead:

-
linalg_vector_norm(A, ord = 2, dim = NULL, keepdim = FALSE, dtype = NULL)
+
+
linalg_vector_norm(A, ord = 2, dim = NULL, keepdim = FALSE, dtype = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
A

(Tensor): tensor, flattened by default, but this behavior can be -controlled using dim.

ord

(int, float, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: 2

dim

(int, Tupleint, optional): dimensions over which to compute +

+

Arguments

+
A
+

(Tensor): tensor, flattened by default, but this behavior can be +controlled using dim.

+
ord
+

(int, float, inf, -inf, 'fro', 'nuc', optional): order of norm. Default: 2

+
dim
+

(int, Tupleint, optional): dimensions over which to compute the vector or matrix norm. See above for the behavior when dim=NULL. -Default: NULL

keepdim

(bool, optional): If set to TRUE, the reduced dimensions are retained -in the result as dimensions with size one. Default: FALSE

dtype

dtype (torch_dtype, optional): If specified, the input tensor is cast to +Default: NULL

+
keepdim
+

(bool, optional): If set to TRUE, the reduced dimensions are retained +in the result as dimensions with size one. Default: FALSE

+
dtype
+

dtype (torch_dtype, optional): If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor's type -will be dtype. Default: NULL

- -

Details

- +will be dtype. Default: NULL

+
+
+

Details

-
    -
  • If dim=NULL, A will be flattened before the norm is computed.

  • +
    • If dim=NULL, A will be flattened before the norm is computed.

    • If dim is an int or a tuple, the norm will be computed over these dimensions and the other dimensions will be treated as batch dimensions.

    • -
    - -

    This behavior is for consistency with linalg_norm().

    +

This behavior is for consistency with linalg_norm().

ord defines the norm that is computed. The following norms are -supported:

- - - - - - - - - - - - -
ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
- - -

See also

- - +supported:

ordnorm for matricesnorm for vectors
NULL (default)Frobenius norm2-norm (see below)
"fro"Frobenius norm– not supported –
"nuc"nuclear norm– not supported –
Infmax(sum(abs(x), dim=2))max(abs(x))
-Infmin(sum(abs(x), dim=2))min(abs(x))
0– not supported –sum(x != 0)
1max(sum(abs(x), dim=1))as below
-1min(sum(abs(x), dim=1))as below
2largest singular valueas below
-2smallest singular valueas below
other int or float– not supported –sum(abs(x)^{ord})^{(1 / ord)}
+ -

Examples

-
if (torch_is_installed()) {
-a <- torch_arange(0, 8, dtype=torch_float()) - 4
-a
-b <- a$reshape(c(3, 3))
-b
-
-linalg_vector_norm(a, ord = 3.5)
-linalg_vector_norm(b, ord = 3.5)
-
-}
-#> torch_tensor
-#> 5.43449
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_arange(0, 8, dtype=torch_float()) - 4
+a
+b <- a$reshape(c(3, 3))
+b
+
+linalg_vector_norm(a, ord = 3.5)
+linalg_vector_norm(b, ord = 3.5)
+
+}
+#> torch_tensor
+#> 5.43449
+#> [ CPUFloatType{} ]
+
+
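For a finite ord, the table above reduces to sum(abs(x)^ord)^(1/ord), which can be verified directly against linalg_vector_norm(). A sketch, assuming torch is attached:

```r
library(torch)

a <- torch_arange(0, 8, dtype = torch_float()) - 4
ord <- 3.5

# Manual p-norm: sum(|x|^ord)^(1/ord) should agree with the built-in
manual <- torch_sum(torch_abs(a)^ord)^(1 / ord)
torch_allclose(linalg_vector_norm(a, ord = ord), manual)
```

The same identity underlies the "other int or float" row of the table; the special cases (0, Inf, -Inf) are handled separately by the function.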
+
- - - + + diff --git a/dev/reference/load_state_dict.html b/dev/reference/load_state_dict.html index 41e7f8ac0970a98602816d56c8ca198fd9338bc2..0a0f9796775a9995886b4b3c5c6c7678b2bdeb44 100644 --- a/dev/reference/load_state_dict.html +++ b/dev/reference/load_state_dict.html @@ -1,82 +1,21 @@ - - - - - - - -Load a state dict file — load_state_dict • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Load a state dict file — load_state_dict • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -195,50 +117,46 @@ For it to work correctly you need to use torch.save with the flag: classes from the tensors in the dict.

-
load_state_dict(path)
- -

Arguments

- - - - - - -
path

to the state dict file

- -

Value

+
+
load_state_dict(path)
+
+
+

Arguments

+
path
+

to the state dict file

+
+
+

Value

a named list of tensors.

-

Details

- -

The above might change with development of this +

+
+

Details

+

The above might change with development of this in pytorch's C++ api.

+
+
- - - + + diff --git a/dev/reference/logits_to_probs.html b/dev/reference/logits_to_probs.html deleted file mode 100644 index 9da0f79b80e94cfb4e71068db65c93cad215adf4..0000000000000000000000000000000000000000 --- a/dev/reference/logits_to_probs.html +++ /dev/null @@ -1,238 +0,0 @@ - - - - - - - - -Converts a tensor of logits into probabilities. Note that for the -binary case, each value denotes log odds, whereas for the -multi-dimensional case, the values along the last dimension denote -the log probabilities (possibly unnormalized) of the events. — logits_to_probs • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Converts a tensor of logits into probabilities. Note that for the -binary case, each value denotes log odds, whereas for the -multi-dimensional case, the values along the last dimension denote -the log probabilities (possibly unnormalized) of the events.

-
- -
logits_to_probs(logits, is_binary = FALSE)
- - - -
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/lr_lambda.html b/dev/reference/lr_lambda.html index 3b2ddc5624b565975705ef817742e5904a86a0d7..4ebe46ca1a9bb727cd4ddc18cc420219d0fdbbb2 100644 --- a/dev/reference/lr_lambda.html +++ b/dev/reference/lr_lambda.html @@ -1,82 +1,21 @@ - - - - - - - -Sets the learning rate of each parameter group to the initial lr -times a given function. When last_epoch=-1, sets initial lr as lr. — lr_lambda • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sets the learning rate of each parameter group to the initial lr +times a given function. When last_epoch=-1, sets initial lr as lr. — lr_lambda • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,74 +116,64 @@ times a given function. When last_epoch=-1, sets initial lr as lr. times a given function. When last_epoch=-1, sets initial lr as lr.

-
lr_lambda(optimizer, lr_lambda, last_epoch = -1, verbose = FALSE)
+
+
lr_lambda(optimizer, lr_lambda, last_epoch = -1, verbose = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
optimizer

(Optimizer): Wrapped optimizer.

lr_lambda

(function or list): A function which computes a multiplicative +

+

Arguments

+
optimizer
+

(Optimizer): Wrapped optimizer.

+
lr_lambda
+

(function or list): A function which computes a multiplicative factor given an integer parameter epoch, or a list of such -functions, one for each group in optimizer.param_groups.

last_epoch

(int): The index of last epoch. Default: -1.

verbose

(bool): If TRUE, prints a message to stdout for -each update. Default: FALSE.

- - -

Examples

-
if (torch_is_installed()) {
-# Assuming optimizer has two groups.
-lambda1 <- function(epoch) epoch %/% 30
-lambda2 <- function(epoch) 0.95^epoch
-if (FALSE) {
-scheduler <- lr_lambda(optimizer, lr_lambda = list(lambda1, lambda2))
-for (epoch in 1:100) {
-  train(...)
-  validate(...)
-  scheduler$step()
-}
-}
-
-}
-
+functions, one for each group in optimizer.param_groups.

+
last_epoch
+

(int): The index of last epoch. Default: -1.

+
verbose
+

(bool): If TRUE, prints a message to stdout for +each update. Default: FALSE.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# Assuming optimizer has two groups.
+lambda1 <- function(epoch) epoch %/% 30
+lambda2 <- function(epoch) 0.95^epoch
+if (FALSE) {
+scheduler <- lr_lambda(optimizer, lr_lambda = list(lambda1, lambda2))
+for (epoch in 1:100) {
+  train(...)
+  validate(...)
+  scheduler$step()
+}
+}
+
+}
+
+
+
- - - + + diff --git a/dev/reference/lr_multiplicative.html b/dev/reference/lr_multiplicative.html index ca23a11982980e74beb523de37b5084ef374b8a8..7ed449acb293f5934344104390d0f4d54a3c7d8f 100644 --- a/dev/reference/lr_multiplicative.html +++ b/dev/reference/lr_multiplicative.html @@ -1,82 +1,21 @@ - - - - - - - -Multiply the learning rate of each parameter group by the factor given -in the specified function. When last_epoch=-1, sets initial lr as lr. — lr_multiplicative • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multiply the learning rate of each parameter group by the factor given +in the specified function. When last_epoch=-1, sets initial lr as lr. — lr_multiplicative • torch - - - - - - - - + + -
-
- -
- -
+
@@ -194,72 +116,62 @@ in the specified function. When last_epoch=-1, sets initial lr as lr. in the specified function. When last_epoch=-1, sets initial lr as lr.

-
lr_multiplicative(optimizer, lr_lambda, last_epoch = -1, verbose = FALSE)
+
+
lr_multiplicative(optimizer, lr_lambda, last_epoch = -1, verbose = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
optimizer

(Optimizer): Wrapped optimizer.

lr_lambda

(function or list): A function which computes a multiplicative +

+

Arguments

+
optimizer
+

(Optimizer): Wrapped optimizer.

+
lr_lambda
+

(function or list): A function which computes a multiplicative factor given an integer parameter epoch, or a list of such -functions, one for each group in optimizer.param_groups.

last_epoch

(int): The index of last epoch. Default: -1.

verbose

(bool): If TRUE, prints a message to stdout for -each update. Default: FALSE.

- - -

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-lmbda <- function(epoch) 0.95
-scheduler <- lr_multiplicative(optimizer, lr_lambda=lmbda)
-for (epoch in 1:100) {
-  train(...)
-  validate(...)
-  scheduler$step()
-}
-}
-
-}
-
+functions, one for each group in optimizer.param_groups.

+
last_epoch
+

(int): The index of last epoch. Default: -1.

+
verbose
+

(bool): If TRUE, prints a message to stdout for +each update. Default: FALSE.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+lmbda <- function(epoch) 0.95
+scheduler <- lr_multiplicative(optimizer, lr_lambda=lmbda)
+for (epoch in 1:100) {
+  train(...)
+  validate(...)
+  scheduler$step()
+}
+}
+
+}
+
+
+
- - - + + diff --git a/dev/reference/lr_one_cycle.html b/dev/reference/lr_one_cycle.html index 9728dd68e17fd7c61f173f3e0291be50a00b2436..8bff16191d9e090fb1065286aa9ee86b613b7d2c 100644 --- a/dev/reference/lr_one_cycle.html +++ b/dev/reference/lr_one_cycle.html @@ -1,83 +1,22 @@ - - - - - - - -Once cycle learning rate — lr_one_cycle • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Once cycle learning rate — lr_one_cycle • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -197,183 +119,151 @@ from that maximum learning rate to some minimum learning rate much lower than the initial learning rate.

-
lr_one_cycle(
-  optimizer,
-  max_lr,
-  total_steps = NULL,
-  epochs = NULL,
-  steps_per_epoch = NULL,
-  pct_start = 0.3,
-  anneal_strategy = "cos",
-  cycle_momentum = TRUE,
-  base_momentum = 0.85,
-  max_momentum = 0.95,
-  div_factor = 25,
-  final_div_factor = 10000,
-  last_epoch = -1,
-  verbose = FALSE
-)
+
+
lr_one_cycle(
+  optimizer,
+  max_lr,
+  total_steps = NULL,
+  epochs = NULL,
+  steps_per_epoch = NULL,
+  pct_start = 0.3,
+  anneal_strategy = "cos",
+  cycle_momentum = TRUE,
+  base_momentum = 0.85,
+  max_momentum = 0.95,
+  div_factor = 25,
+  final_div_factor = 10000,
+  last_epoch = -1,
+  verbose = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
optimizer

(Optimizer): Wrapped optimizer.

max_lr

(float or list): Upper learning rate boundaries in the cycle -for each parameter group.

total_steps

(int): The total number of steps in the cycle. Note that +

+

Arguments

+
optimizer
+

(Optimizer): Wrapped optimizer.

+
max_lr
+

(float or list): Upper learning rate boundaries in the cycle +for each parameter group.

+
total_steps
+

(int): The total number of steps in the cycle. Note that if a value is not provided here, then it must be inferred by providing a value for epochs and steps_per_epoch. -Default: NULL

epochs

(int): The number of epochs to train for. This is used along +Default: NULL

+
epochs
+

(int): The number of epochs to train for. This is used along with steps_per_epoch in order to infer the total number of steps in the cycle if a value for total_steps is not provided. -Default: NULL

steps_per_epoch

(int): The number of steps per epoch to train for. This is +Default: NULL

+
steps_per_epoch
+

(int): The number of steps per epoch to train for. This is used along with epochs in order to infer the total number of steps in the cycle if a value for total_steps is not provided. -Default: NULL

pct_start

(float): The percentage of the cycle (in number of steps) spent +Default: NULL

+
pct_start
+

(float): The percentage of the cycle (in number of steps) spent increasing the learning rate. -Default: 0.3

anneal_strategy

(str): 'cos', 'linear' +Default: 0.3

+
anneal_strategy
+

(str): 'cos', 'linear' Specifies the annealing strategy: "cos" for cosine annealing, "linear" for linear annealing. -Default: 'cos'

cycle_momentum

(bool): If TRUE, momentum is cycled inversely +Default: 'cos'

+
cycle_momentum
+

(bool): If TRUE, momentum is cycled inversely to learning rate between 'base_momentum' and 'max_momentum'. -Default: TRUE

base_momentum

(float or list): Lower momentum boundaries in the cycle +Default: TRUE

+
base_momentum
+

(float or list): Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is 'base_momentum' and learning rate is 'max_lr'. -Default: 0.85

max_momentum

(float or list): Upper momentum boundaries in the cycle +Default: 0.85

+
max_momentum
+

(float or list): Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is 'max_momentum' and learning rate is 'base_lr' -Default: 0.95

div_factor

(float): Determines the initial learning rate via +Default: 0.95

+
div_factor
+

(float): Determines the initial learning rate via initial_lr = max_lr/div_factor -Default: 25

final_div_factor

(float): Determines the minimum learning rate via +Default: 25

+
final_div_factor
+

(float): Determines the minimum learning rate via min_lr = initial_lr/final_div_factor -Default: 1e4

last_epoch

(int): The index of the last batch. This parameter is used when -resuming a training job. Since step() should be invoked after each +Default: 1e4

+
last_epoch
+

(int): The index of the last batch. This parameter is used when +resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. -Default: -1

verbose

(bool): If TRUE, prints a message to stdout for -each update. Default: FALSE.

- -

Details

- +Default: -1

+
verbose
+

(bool): If TRUE, prints a message to stdout for +each update. Default: FALSE.

+
+
+

Details

This policy was initially described in the paper -Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.

+Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.

The 1cycle learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This scheduler is not chainable.

Note also that the total number of steps in the cycle can be determined in one -of two ways (listed in order of precedence):

    -
  • A value for total_steps is explicitly provided.

  • +of two ways (listed in order of precedence):

    • A value for total_steps is explicitly provided.

    • A number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided.

    • -
    - -

    In this case, the number of total steps is inferred by +

In this case, the number of total steps is inferred by total_steps = epochs * steps_per_epoch

You must either provide a value for total_steps or provide a value for both epochs and steps_per_epoch.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-data_loader <- dataloader(...)
-optimizer <- optim_sgd(model$parameters, lr=0.1, momentum=0.9)
-scheduler <- lr_one_cycle(optimizer, max_lr=0.01, steps_per_epoch=length(data_loader), 
-                          epochs=10)
-
-for (i in 1:epochs) {
-  coro::loop(for (batch in data_loader) {
-     train_batch(...)
-     scheduler$step()
-  })
-}
-}
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+data_loader <- dataloader(...)
+optimizer <- optim_sgd(model$parameters, lr=0.1, momentum=0.9)
+scheduler <- lr_one_cycle(optimizer, max_lr=0.01, steps_per_epoch=length(data_loader), 
+                          epochs=10)
+
+for (i in 1:epochs) {
+  coro::loop(for (batch in data_loader) {
+     train_batch(...)
+     scheduler$step()
+  })
+}
+}
+
+}
+
+
+
- - - + + diff --git a/dev/reference/lr_scheduler.html b/dev/reference/lr_scheduler.html index df3c7b1bad91fbe91c29249e71807ec46b28b1ff..0dc9f0a19cff56d38ea4f1377bd1ac90703c0f4d 100644 --- a/dev/reference/lr_scheduler.html +++ b/dev/reference/lr_scheduler.html @@ -1,79 +1,18 @@ - - - - - - - -Creates learning rate schedulers — lr_scheduler • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates learning rate schedulers — lr_scheduler • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,62 +111,50 @@

Creates learning rate schedulers

-
lr_scheduler(
-  classname = NULL,
-  inherit = LRScheduler,
-  ...,
-  parent_env = parent.frame()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
classname

optional name for the learning rate scheduler

inherit

an optional learning rate scheduler to inherit from

...

named list of methods. You must implement the get_lr() -method that doesn't take any argument and returns learning rates -for each param_group in the optimizer.

parent_env

passed to R6::R6Class().

+
+
lr_scheduler(
+  classname = NULL,
+  inherit = LRScheduler,
+  ...,
+  parent_env = parent.frame()
+)
+
+
+

Arguments

+
classname
+

optional name for the learning rate scheduler

+
inherit
+

an optional learning rate scheduler to inherit from

+
...
+

named list of methods. You must implement the get_lr() +method that doesn't take any argument and returns learning rates +for each param_group in the optimizer.

+
parent_env
+

passed to R6::R6Class().

+
+
- - - + + diff --git a/dev/reference/lr_step.html b/dev/reference/lr_step.html index 4da6d5437d9b016f52b7c53a987bce280b9c047c..2f98cf4c8ed4e6f663076f4e24f3a026acff0c0a 100644 --- a/dev/reference/lr_step.html +++ b/dev/reference/lr_step.html @@ -1,82 +1,21 @@ - - - - - - - -Step learning rate decay — lr_step • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Step learning rate decay — lr_step • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,74 +117,64 @@ other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr.

-
lr_step(optimizer, step_size, gamma = 0.1, last_epoch = -1)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
optimizer

(Optimizer): Wrapped optimizer.

step_size

(int): Period of learning rate decay.

gamma

(float): Multiplicative factor of learning rate decay. -Default: 0.1.

last_epoch

(int): The index of last epoch. Default: -1.

- +
+
lr_step(optimizer, step_size, gamma = 0.1, last_epoch = -1)
+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-# Assuming optimizer uses lr = 0.05 for all groups
-# lr = 0.05     if epoch < 30
-# lr = 0.005    if 30 <= epoch < 60
-# lr = 0.0005   if 60 <= epoch < 90
-# ...
-scheduler <- lr_step(optimizer, step_size=30, gamma=0.1)
-for (epoch in 1:100) {
-  train(...)
-  validate(...)
-  scheduler$step()
-}
-}
-
-}
-
+
+

Arguments

+
optimizer
+

(Optimizer): Wrapped optimizer.

+
step_size
+

(int): Period of learning rate decay.

+
gamma
+

(float): Multiplicative factor of learning rate decay. +Default: 0.1.

+
last_epoch
+

(int): The index of last epoch. Default: -1.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+# Assuming optimizer uses lr = 0.05 for all groups
+# lr = 0.05     if epoch < 30
+# lr = 0.005    if 30 <= epoch < 60
+# lr = 0.0005   if 60 <= epoch < 90
+# ...
+scheduler <- lr_step(optimizer, step_size=30, gamma=0.1)
+for (epoch in 1:100) {
+  train(...)
+  validate(...)
+  scheduler$step()
+}
+}
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_avg_pool1d.html b/dev/reference/nn_adaptive_avg_pool1d.html index 620972ac0625e437e46124a7bca008e7d537db21..83b8d106f23282fe29c36a3025c5d79dc541d338 100644 --- a/dev/reference/nn_adaptive_avg_pool1d.html +++ b/dev/reference/nn_adaptive_avg_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 1D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 1D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,52 +113,48 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_avg_pool1d(output_size)
- -

Arguments

- - - - - - -
output_size

the target output size H

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5
-m = nn_adaptive_avg_pool1d(5)
-input <- torch_randn(1, 64, 8)
-output <- m(input)
-
-}
-
+
+
nn_adaptive_avg_pool1d(output_size)
+
+ +
+

Arguments

+
output_size
+

the target output size H

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5
+m <- nn_adaptive_avg_pool1d(5)
+input <- torch_randn(1, 64, 8)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_avg_pool2d.html b/dev/reference/nn_adaptive_avg_pool2d.html index 24d91d87215fcf40d77659e320b86b1499f86be3..20bbce1a0097f9fdd645b6e6dbc3ceac6fc727f1 100644 --- a/dev/reference/nn_adaptive_avg_pool2d.html +++ b/dev/reference/nn_adaptive_avg_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 2D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 2D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,59 +113,55 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_avg_pool2d(output_size)
+
+
nn_adaptive_avg_pool2d(output_size)
+
-

Arguments

- - - - - - -
output_size

the target output size of the image of the form H x W. +

+

Arguments

+
output_size
+

the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or NULL which means the size will -be the same as that of the input.

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5x7
-m <- nn_adaptive_avg_pool2d(c(5,7))
-input <- torch_randn(1, 64, 8, 9)
-output <- m(input)
-# target output size of 7x7 (square)
-m <- nn_adaptive_avg_pool2d(7)
-input <- torch_randn(1, 64, 10, 9)
-output <- m(input)
-
-}
-
+be the same as that of the input.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5x7
+m <- nn_adaptive_avg_pool2d(c(5,7))
+input <- torch_randn(1, 64, 8, 9)
+output <- m(input)
+# target output size of 7x7 (square)
+m <- nn_adaptive_avg_pool2d(7)
+input <- torch_randn(1, 64, 10, 9)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_avg_pool3d.html b/dev/reference/nn_adaptive_avg_pool3d.html index 6a19c247adb6406893636efd27b913c7cf2679e9..47eee6d16e5bdcf78f73b519657cffda313ee574 100644 --- a/dev/reference/nn_adaptive_avg_pool3d.html +++ b/dev/reference/nn_adaptive_avg_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 3D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 3D adaptive average pooling over an input signal composed of several input planes. — nn_adaptive_avg_pool3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,59 +113,55 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_avg_pool3d(output_size)
+
+
nn_adaptive_avg_pool3d(output_size)
+
-

Arguments

- - - - - - -
output_size

the target output size of the form D x H x W. +

+

Arguments

+
output_size
+

the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D. D, H and W can be either an int, or NULL which means the size will -be the same as that of the input.

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5x7x9
-m <- nn_adaptive_avg_pool3d(c(5,7,9))
-input <- torch_randn(1, 64, 8, 9, 10)
-output <- m(input)
-# target output size of 7x7x7 (cube)
-m <- nn_adaptive_avg_pool3d(7)
-input <- torch_randn(1, 64, 10, 9, 8)
-output <- m(input)
-
-}
-
+be the same as that of the input.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5x7x9
+m <- nn_adaptive_avg_pool3d(c(5,7,9))
+input <- torch_randn(1, 64, 8, 9, 10)
+output <- m(input)
+# target output size of 7x7x7 (cube)
+m <- nn_adaptive_avg_pool3d(7)
+input <- torch_randn(1, 64, 10, 9, 8)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_log_softmax_with_loss.html b/dev/reference/nn_adaptive_log_softmax_with_loss.html index facfd95fa35a784503a5a9b8b84e48681cddd46f..256fbfcc7d5cbfe040b338c64bd3041d3fa055a1 100644 --- a/dev/reference/nn_adaptive_log_softmax_with_loss.html +++ b/dev/reference/nn_adaptive_log_softmax_with_loss.html @@ -1,80 +1,19 @@ - - - - - - - -AdaptiveLogSoftmaxWithLoss module — nn_adaptive_log_softmax_with_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -AdaptiveLogSoftmaxWithLoss module — nn_adaptive_log_softmax_with_loss • torch - - - - - - - - + + -
-
- -
- -
+
-
nn_adaptive_log_softmax_with_loss(
-  in_features,
-  n_classes,
-  cutoffs,
-  div_value = 4,
-  head_bias = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
in_features

(int): Number of features in the input tensor

n_classes

(int): Number of classes in the dataset

cutoffs

(Sequence): Cutoffs used to assign targets to their buckets

div_value

(float, optional): value used as an exponent to compute sizes -of the clusters. Default: 4.0

head_bias

(bool, optional): If True, adds a bias term to the 'head' of the -adaptive softmax. Default: False

- -

Value

+
+
nn_adaptive_log_softmax_with_loss(
+  in_features,
+  n_classes,
+  cutoffs,
+  div_value = 4,
+  head_bias = FALSE
+)
+
-

NamedTuple with output and loss fields:

    -
  • output is a Tensor of size N containing computed target +

    +

    Arguments

    +
    in_features
    +

    (int): Number of features in the input tensor

    +
    n_classes
    +

    (int): Number of classes in the dataset

    +
    cutoffs
    +

    (Sequence): Cutoffs used to assign targets to their buckets

    +
    div_value
    +

    (float, optional): value used as an exponent to compute sizes +of the clusters. Default: 4.0

    +
    head_bias
    +

    (bool, optional): If True, adds a bias term to the 'head' of the +adaptive softmax. Default: False

    +
    +
    +

    Value

    +

    NamedTuple with output and loss fields:

    • output is a Tensor of size N containing computed target log probabilities for each example

    • loss is a Scalar representing the computed negative log likelihood loss

    • -
    - -

    Details

    - +
+
+

Details

Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word @@ -251,8 +161,7 @@ present are evaluated.

The idea is that the clusters which are accessed frequently (like the first one, containing most frequent labels), should also be cheap to compute -- that is, contain a small number of assigned labels. -We highly recommend taking a look at the original paper for more details.

    -
  • cutoffs should be an ordered Sequence of integers sorted +We highly recommend taking a look at the original paper for more details.

    • cutoffs should be an ordered Sequence of integers sorted in the increasing order. It controls number of clusters and the partitioning of targets into clusters. For example setting cutoffs = c(10, 100, 1000) @@ -271,59 +180,54 @@ and indices starting from \(1\)).

    • head_bias if set to True, adds a bias term to the 'head' of the adaptive softmax. See paper for details. Set to False in the official implementation.

    • -
    - -

    Note

    - +
+
+

Note

This module returns a NamedTuple with output and loss fields. See further documentation for details.

To compute log-probabilities for all classes, the log_prob method can be used.

-

Warning

- +
+
+

Warning

Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1.

-

Shape

- +
+
+

Shape

-
    -
  • input: \((N, \mbox{in\_features})\)

  • +
    • input: \((N, \mbox{in\_features})\)

    • target: \((N)\) where each value satisfies \(0 <= \mbox{target[i]} <= \mbox{n\_classes}\)

    • output1: \((N)\)

    • output2: Scalar

    • -
    - +
+
- - - + + diff --git a/dev/reference/nn_adaptive_max_pool1d.html b/dev/reference/nn_adaptive_max_pool1d.html index 14f961e5c5d777d01e3af84b41194f0ccb0ab95e..e23ecaa6f55607d2a405a61a3077f8e4f3e34f20 100644 --- a/dev/reference/nn_adaptive_max_pool1d.html +++ b/dev/reference/nn_adaptive_max_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 1D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 1D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,57 +113,51 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_max_pool1d(output_size, return_indices = FALSE)
- -

Arguments

- - - - - - - - - - -
output_size

the target output size H

return_indices

if TRUE, will return the indices along with the outputs. -Useful to pass to nn_max_unpool1d(). Default: FALSE

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5
-m <- nn_adaptive_max_pool1d(5)
-input <- torch_randn(1, 64, 8)
-output <- m(input)
-
-}
-
+
+
nn_adaptive_max_pool1d(output_size, return_indices = FALSE)
+
+ +
+

Arguments

+
output_size
+

the target output size H

+
return_indices
+

if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool1d(). Default: FALSE

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5
+m <- nn_adaptive_max_pool1d(5)
+input <- torch_randn(1, 64, 8)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_max_pool2d.html b/dev/reference/nn_adaptive_max_pool2d.html index 604bc9725b85a2ae8e6d97634bb90d21c567e4b6..f66f213b2ed0cd3571eb15914515482cdb082dc0 100644 --- a/dev/reference/nn_adaptive_max_pool2d.html +++ b/dev/reference/nn_adaptive_max_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 2D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 2D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,64 +113,58 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_max_pool2d(output_size, return_indices = FALSE)
+
+
nn_adaptive_max_pool2d(output_size, return_indices = FALSE)
+
-

Arguments

- - - - - - - - - - -
output_size

the target output size of the image of the form H x W. +

+

Arguments

+
output_size
+

the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or NULL which means the size will -be the same as that of the input.

return_indices

if TRUE, will return the indices along with the outputs. -Useful to pass to nn_max_unpool2d(). Default: FALSE

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5x7
-m <- nn_adaptive_max_pool2d(c(5,7))
-input <- torch_randn(1, 64, 8, 9)
-output <- m(input)
-# target output size of 7x7 (square)
-m <- nn_adaptive_max_pool2d(7)
-input <- torch_randn(1, 64, 10, 9)
-output <- m(input)
-
-}
-
+be the same as that of the input.

+
return_indices
+

if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5x7
+m <- nn_adaptive_max_pool2d(c(5,7))
+input <- torch_randn(1, 64, 8, 9)
+output <- m(input)
+# target output size of 7x7 (square)
+m <- nn_adaptive_max_pool2d(7)
+input <- torch_randn(1, 64, 10, 9)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_adaptive_max_pool3d.html b/dev/reference/nn_adaptive_max_pool3d.html index ac502701e4606b786d167504eea3fd2f4417943a..cd455e52e3e0a0f44fc49eff0b729351a4e2a394 100644 --- a/dev/reference/nn_adaptive_max_pool3d.html +++ b/dev/reference/nn_adaptive_max_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 3D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 3D adaptive max pooling over an input signal composed of several input planes. — nn_adaptive_max_pool3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,64 +113,58 @@ The number of output features is equal to the number of input planes." /> The number of output features is equal to the number of input planes.

-
nn_adaptive_max_pool3d(output_size, return_indices = FALSE)
+
+
nn_adaptive_max_pool3d(output_size, return_indices = FALSE)
+
-

Arguments

- - - - - - - - - - -
output_size

the target output size of the image of the form D x H x W. +

+

Arguments

+
output_size
+

the target output size of the image of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either an int, or NULL which means the size will -be the same as that of the input.

return_indices

if TRUE, will return the indices along with the outputs. -Useful to pass to nn_max_unpool3d(). Default: FALSE

- - -

Examples

-
if (torch_is_installed()) {
-# target output size of 5x7x9
-m <- nn_adaptive_max_pool3d(c(5,7,9))
-input <- torch_randn(1, 64, 8, 9, 10)
-output <- m(input)
-# target output size of 7x7x7 (cube)
-m <- nn_adaptive_max_pool3d(7)
-input <- torch_randn(1, 64, 10, 9, 8)
-output <- m(input)
-
-}
-
+be the same as that of the input.

+
return_indices
+

if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

+
+ +
+

Examples

+
if (torch_is_installed()) {
+# target output size of 5x7x9
+m <- nn_adaptive_max_pool3d(c(5,7,9))
+input <- torch_randn(1, 64, 8, 9, 10)
+output <- m(input)
+# target output size of 7x7x7 (cube)
+m <- nn_adaptive_max_pool3d(7)
+input <- torch_randn(1, 64, 10, 9, 8)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_avg_pool1d.html b/dev/reference/nn_avg_pool1d.html index af74671e701a71ed33184481f643ccba39ad5718..03a907907c82d0f332d87df0695a2e0c6a150432 100644 --- a/dev/reference/nn_avg_pool1d.html +++ b/dev/reference/nn_avg_pool1d.html @@ -1,87 +1,26 @@ - - - - - - - -Applies a 1D average pooling over an input signal composed of several -input planes. — nn_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 1D average pooling over an input signal composed of several +input planes. — nn_avg_pool1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -204,97 +126,84 @@ can be precisely described as:

$$

-
nn_avg_pool1d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on both sides

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

count_include_pad

when TRUE, will include the zero-padding in the averaging calculation

- -

Details

+
+
nn_avg_pool1d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on both sides

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
count_include_pad
+

when TRUE, will include the zero-padding in the averaging calculation

+
+
+

Details

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.

The parameters kernel_size, stride, padding can each be an int or a one-element tuple.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, L_{in})\)

  • +
    • Input: \((N, C, L_{in})\)

    • Output: \((N, C, L_{out})\), where

    • -
    - -

    $$ +

$$ L_{out} = \left\lfloor \frac{L_{in} + 2 \times \mbox{padding} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-  
-# pool with window of size=3, stride=2
-m <- nn_avg_pool1d(3, stride=2)
-m(torch_randn(1, 1, 8))
-
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   0.3143 -0.1988  0.5027
-#> [ CPUFloatType{1,1,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+  
+# pool with window of size=3, stride=2
+m <- nn_avg_pool1d(3, stride=2)
+m(torch_randn(1, 1, 8))
+
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   0.1674 -0.7449 -0.4110
+#> [ CPUFloatType{1,1,3} ]
+
+
+
- - - + + diff --git a/dev/reference/nn_avg_pool2d.html b/dev/reference/nn_avg_pool2d.html index 7f5d21eb28af0626ab5dba8053d35a47e48386ed..a2a01c61eec77e87086d4d6173ba0073e9dc6a76 100644 --- a/dev/reference/nn_avg_pool2d.html +++ b/dev/reference/nn_avg_pool2d.html @@ -1,87 +1,26 @@ - - - - - - - -Applies a 2D average pooling over an input signal composed of several input -planes. — nn_avg_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 2D average pooling over an input signal composed of several input +planes. — nn_avg_pool2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -204,64 +126,47 @@ input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n) $$

-
nn_avg_pool2d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE,
-  divisor_override = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on both sides

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

count_include_pad

when TRUE, will include the zero-padding in the averaging calculation

divisor_override

if specified, it will be used as divisor, otherwise kernel_size will be used

- -

Details

+
+
nn_avg_pool2d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE,
+  divisor_override = NULL
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on both sides

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
count_include_pad
+

when TRUE, will include the zero-padding in the averaging calculation

+
divisor_override
+

if specified, it will be used as divisor, otherwise kernel_size will be used

+
+
+

Details

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.

-

The parameters kernel_size, stride, padding can either be:

    -
  • a single int -- in which case the same value is used for the height and width dimension

  • +

    The parameters kernel_size, stride, padding can either be:

    • a single int -- in which case the same value is used for the height and width dimension

    • a tuple of two ints -- in which case, the first int is used for the height dimension, and the second int for the width dimension

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, H_{in}, W_{in})\)

    • Output: \((N, C, H_{out}, W_{out})\), where

    • -
    - -

    $$ +

$$ H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor $$ @@ -269,44 +174,43 @@ $$ W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-  
-# pool of square window of size=3, stride=2
-m <- nn_avg_pool2d(3, stride=2)
-# pool of non-square window
-m <- nn_avg_pool2d(c(3, 2), stride=c(2, 1))
-input <- torch_randn(20, 16, 50, 32)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+  
+# pool of square window of size=3, stride=2
+m <- nn_avg_pool2d(3, stride=2)
+# pool of non-square window
+m <- nn_avg_pool2d(c(3, 2), stride=c(2, 1))
+input <- torch_randn(20, 16, 50, 32)
+output <- m(input)
+
+}
+
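As a hedged sanity check on the \(H_{out}\)/\(W_{out}\) formulas above, the non-square pool in the example (`kernel_size = c(3, 2)`, `stride = c(2, 1)`, no padding) applied to a \(50 \times 32\) input yields \(H_{out} = \lfloor (50 - 3)/2 + 1 \rfloor = 24\) and \(W_{out} = (32 - 2)/1 + 1 = 31\):

```r
# Sketch: apply the output-shape formula per spatial dimension.
out_dim <- function(x, k, s, p = 0) floor((x + 2 * p - k) / s + 1)
c(out_dim(50, 3, 2), out_dim(32, 2, 1))  # 24 31 -> output shape (20, 16, 24, 31)
```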
+
+
- - - + + diff --git a/dev/reference/nn_avg_pool3d.html b/dev/reference/nn_avg_pool3d.html index a11494911b818932b9306d03522ed4c8ea137618..7db179703e554de8368320ca2ce1d022d854d782 100644 --- a/dev/reference/nn_avg_pool3d.html +++ b/dev/reference/nn_avg_pool3d.html @@ -1,50 +1,7 @@ - - - - - - - -Applies a 3D average pooling over an input signal composed of several input -planes. — nn_avg_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 3D average pooling over an input signal composed of several input +planes. — nn_avg_pool3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -208,64 +130,47 @@ can be precisely described as:

$$

-
nn_avg_pool3d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE,
-  divisor_override = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on all three sides

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

count_include_pad

when TRUE, will include the zero-padding in the averaging calculation

divisor_override

if specified, it will be used as divisor, otherwise kernel_size will be used

- -

Details

+
+
nn_avg_pool3d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE,
+  divisor_override = NULL
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on all three sides

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
count_include_pad
+

when TRUE, will include the zero-padding in the averaging calculation

+
divisor_override
+

if specified, it will be used as divisor, otherwise kernel_size will be used

+
+
+

Details

If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points.

-

The parameters kernel_size, stride can either be:

    -
  • a single int -- in which case the same value is used for the depth, height and width dimension

  • +

    The parameters kernel_size, stride can either be:

    • a single int -- in which case the same value is used for the depth, height and width dimension

    • a tuple of three ints -- in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • -
    - -

    $$ +

$$ D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor $$ @@ -277,44 +182,43 @@ $$ W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{kernel\_size}[2]}{\mbox{stride}[2]} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-  
-# pool of square window of size=3, stride=2
-m = nn_avg_pool3d(3, stride=2)
-# pool of non-square window
-m = nn_avg_pool3d(c(3, 2, 2), stride=c(2, 1, 2))
-input = torch_randn(20, 16, 50,44, 31)
-output = m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+  
+# pool of square window of size=3, stride=2
+m <- nn_avg_pool3d(3, stride=2)
+# pool of non-square window
+m <- nn_avg_pool3d(c(3, 2, 2), stride=c(2, 1, 2))
+input <- torch_randn(20, 16, 50, 44, 31)
+output <- m(input)
+
+}
+
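As a hedged check against the \(D_{out}\)/\(H_{out}\)/\(W_{out}\) formulas above, the non-square pool in the example (`kernel_size = c(3, 2, 2)`, `stride = c(2, 1, 2)`, no padding) applied to a \(50 \times 44 \times 31\) input gives \(D_{out} = 24\), \(H_{out} = 43\), \(W_{out} = 15\):

```r
# Sketch: apply the output-shape formula to each of the three dimensions.
out_dim <- function(x, k, s, p = 0) floor((x + 2 * p - k) / s + 1)
c(out_dim(50, 3, 2), out_dim(44, 2, 1), out_dim(31, 2, 2))
# 24 43 15 -> output shape (20, 16, 24, 43, 15)
```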
+
+
- - - + + diff --git a/dev/reference/nn_batch_norm1d.html b/dev/reference/nn_batch_norm1d.html index eed4ae4a7a302219d5c4c67ce739187694490e8a..9ec7bbf4abf6a5dbf1725dc37caf6c7cb21d465e 100644 --- a/dev/reference/nn_batch_norm1d.html +++ b/dev/reference/nn_batch_norm1d.html @@ -1,81 +1,20 @@ - - - - - - - -BatchNorm1D module — nn_batch_norm1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -BatchNorm1D module — nn_batch_norm1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+

Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper -Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

+Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

-
nn_batch_norm1d(
-  num_features,
-  eps = 1e-05,
-  momentum = 0.1,
-  affine = TRUE,
-  track_running_stats = TRUE
-)
+
+
nn_batch_norm1d(
+  num_features,
+  eps = 1e-05,
+  momentum = 0.1,
+  affine = TRUE,
+  track_running_stats = TRUE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
num_features

\(C\) from an expected input of size -\((N, C, L)\) or \(L\) from input of size \((N, L)\)

eps

a value added to the denominator for numerical stability. -Default: 1e-5

momentum

the value used for the running_mean and running_var +

+

Arguments

+
num_features
+

\(C\) from an expected input of size +\((N, C, L)\) or \(L\) from input of size \((N, L)\)

+
eps
+

a value added to the denominator for numerical stability. +Default: 1e-5

+
momentum
+

the value used for the running_mean and running_var computation. Can be set to NULL for cumulative moving average -(i.e. simple average). Default: 0.1

affine

a boolean value that when set to TRUE, this module has -learnable affine parameters. Default: TRUE

track_running_stats

a boolean value that when set to TRUE, this +(i.e. simple average). Default: 0.1

+
affine
+

a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

+
track_running_stats
+

a boolean value that when set to TRUE, this module tracks the running mean and variance, and when set to FALSE, this module does not track such statistics and always uses batch -statistics in both training and eval modes. Default: TRUE

- -

Details

- +statistics in both training and eval modes. Default: TRUE

+
+
+

Details

$$ y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta $$

@@ -250,8 +162,9 @@ of 0.1. If track_running_stats is set to FALSE, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

-

Note

- +
+
+

Note

@@ -263,52 +176,49 @@ where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.

Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it's common terminology to call this Temporal Batch Normalization.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C)\) or \((N, C, L)\)

  • +
    • Input: \((N, C)\) or \((N, C, L)\)

    • Output: \((N, C)\) or \((N, C, L)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# With Learnable Parameters
    -m <- nn_batch_norm1d(100)
    -# Without Learnable Parameters
    -m <- nn_batch_norm1d(100, affine = FALSE)
    -input <- torch_randn(20, 100)
    -output <- m(input) 
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# With Learnable Parameters
+m <- nn_batch_norm1d(100)
+# Without Learnable Parameters
+m <- nn_batch_norm1d(100, affine = FALSE)
+input <- torch_randn(20, 100)
+output <- m(input) 
+
+}
+
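The running-statistics update rule described in the Note above can be sketched in plain R (a hedged illustration; `x_hat` and `x_t` are names chosen here for the running estimate and the new observed batch statistic, not identifiers from the package):

```r
# x_hat_new = (1 - momentum) * x_hat + momentum * x_t
update_running <- function(x_hat, x_t, momentum = 0.1) {
  (1 - momentum) * x_hat + momentum * x_t
}
update_running(x_hat = 0, x_t = 1)  # 0.1: the estimate moves 10% toward x_t
```

With `momentum = NULL`, the module instead keeps a cumulative (simple) average of the batch statistics.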
+
+
- - - + + diff --git a/dev/reference/nn_batch_norm2d.html b/dev/reference/nn_batch_norm2d.html index 829f889c1269074d6a8a6d41c2eca1724f34e074..2f89e9c69a58ac43223222bd5d387e50f6673489 100644 --- a/dev/reference/nn_batch_norm2d.html +++ b/dev/reference/nn_batch_norm2d.html @@ -1,81 +1,20 @@ - - - - - - - -BatchNorm2D — nn_batch_norm2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -BatchNorm2D — nn_batch_norm2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs additional channel dimension) as described in the paper -Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

+Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

-
nn_batch_norm2d(
-  num_features,
-  eps = 1e-05,
-  momentum = 0.1,
-  affine = TRUE,
-  track_running_stats = TRUE
-)
+
+
nn_batch_norm2d(
+  num_features,
+  eps = 1e-05,
+  momentum = 0.1,
+  affine = TRUE,
+  track_running_stats = TRUE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
num_features

\(C\) from an expected input of size -\((N, C, H, W)\)

eps

a value added to the denominator for numerical stability. -Default: 1e-5

momentum

the value used for the running_mean and running_var +

+

Arguments

+
num_features
+

\(C\) from an expected input of size +\((N, C, H, W)\)

+
eps
+

a value added to the denominator for numerical stability. +Default: 1e-5

+
momentum
+

the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average -(i.e. simple average). Default: 0.1

affine

a boolean value that when set to TRUE, this module has -learnable affine parameters. Default: TRUE

track_running_stats

a boolean value that when set to TRUE, this +(i.e. simple average). Default: 0.1

+
affine
+

a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

+
track_running_stats
+

a boolean value that when set to TRUE, this module tracks the running mean and variance, and when set to FALSE, this module does not track such statistics and uses batch statistics instead in both training and eval modes if the running mean and variance are None. -Default: TRUE

- -

Details

- +Default: TRUE

+
+
+

Details

$$ y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta $$

@@ -252,8 +164,9 @@ of 0.1.

If track_running_stats is set to FALSE, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

-

Note

- +
+
+

Note

This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is @@ -262,52 +175,49 @@ where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value. Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it's common terminology to call this Spatial Batch Normalization.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, H, W)\)

  • +
    • Input: \((N, C, H, W)\)

    • Output: \((N, C, H, W)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# With Learnable Parameters
    -m <- nn_batch_norm2d(100)
    -# Without Learnable Parameters
    -m <- nn_batch_norm2d(100, affine=FALSE)
    -input <- torch_randn(20, 100, 35, 45)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# With Learnable Parameters
+m <- nn_batch_norm2d(100)
+# Without Learnable Parameters
+m <- nn_batch_norm2d(100, affine=FALSE)
+input <- torch_randn(20, 100, 35, 45)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_batch_norm3d.html b/dev/reference/nn_batch_norm3d.html index f54ca20a27fcd7e8bef550b943771196c9ce879b..8c69cd5804f1b3b81fff5cdaa180bd3e65c7a942 100644 --- a/dev/reference/nn_batch_norm3d.html +++ b/dev/reference/nn_batch_norm3d.html @@ -1,81 +1,20 @@ - - - - - - - -BatchNorm3D — nn_batch_norm3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -BatchNorm3D — nn_batch_norm3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+

Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper -Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

+Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

-
nn_batch_norm3d(
-  num_features,
-  eps = 1e-05,
-  momentum = 0.1,
-  affine = TRUE,
-  track_running_stats = TRUE
-)
+
+
nn_batch_norm3d(
+  num_features,
+  eps = 1e-05,
+  momentum = 0.1,
+  affine = TRUE,
+  track_running_stats = TRUE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
num_features

\(C\) from an expected input of size -\((N, C, D, H, W)\)

eps

a value added to the denominator for numerical stability. -Default: 1e-5

momentum

the value used for the running_mean and running_var +

+

Arguments

+
num_features
+

\(C\) from an expected input of size +\((N, C, D, H, W)\)

+
eps
+

a value added to the denominator for numerical stability. +Default: 1e-5

+
momentum
+

the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average -(i.e. simple average). Default: 0.1

affine

a boolean value that when set to TRUE, this module has -learnable affine parameters. Default: TRUE

track_running_stats

a boolean value that when set to TRUE, this +(i.e. simple average). Default: 0.1

+
affine
+

a boolean value that when set to TRUE, this module has +learnable affine parameters. Default: TRUE

+
track_running_stats
+

a boolean value that when set to TRUE, this module tracks the running mean and variance, and when set to FALSE, this module does not track such statistics and uses batch statistics instead in both training and eval modes if the running mean and variance are None. -Default: TRUE

- -

Details

- +Default: TRUE

+
+
+

Details

$$ y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta $$

@@ -253,8 +165,9 @@ of 0.1.

If track_running_stats is set to FALSE, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

-

Note

- +
+
+

Note

This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is: @@ -264,52 +177,49 @@ new observed value.

Because the Batch Normalization is done over the C dimension, computing statistics on (N, D, H, W) slices, it's common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, D, H, W)\)

  • +
    • Input: \((N, C, D, H, W)\)

    • Output: \((N, C, D, H, W)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# With Learnable Parameters
    -m <- nn_batch_norm3d(100)
    -# Without Learnable Parameters
    -m <- nn_batch_norm3d(100, affine=FALSE)
    -input <- torch_randn(20, 100, 35, 45, 55)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# With Learnable Parameters
+m <- nn_batch_norm3d(100)
+# Without Learnable Parameters
+m <- nn_batch_norm3d(100, affine=FALSE)
+input <- torch_randn(20, 100, 35, 45, 55)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_bce_loss.html b/dev/reference/nn_bce_loss.html index a0a65569b44c4dd6436008047fcaada77a40f4fc..2f4686233e2e55c8f06f539194a92bf547ebfc83 100644 --- a/dev/reference/nn_bce_loss.html +++ b/dev/reference/nn_bce_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Binary cross entropy loss — nn_bce_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Binary cross entropy loss — nn_bce_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,29 +113,25 @@ between the target and the output:" /> between the target and the output:

-
nn_bce_loss(weight = NULL, reduction = "mean")
+
+
nn_bce_loss(weight = NULL, reduction = "mean")
+
-

Arguments

- - - - - - - - - - -
weight

(Tensor, optional): a manual rescaling weight given to the loss -of each batch element. If given, has to be a Tensor of size nbatch.

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
weight
+

(Tensor, optional): a manual rescaling weight given to the loss +of each batch element. If given, has to be a Tensor of size nbatch.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

The unreduced (i.e. with reduction set to 'none') loss can be described as: $$ \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad @@ -244,55 +162,52 @@ and using it for things like linear regression would not be straight-forward. Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar. If reduction is 'none', then \((N, *)\), same shape as input.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_sigmoid()
    -loss <- nn_bce_loss()
    -input <- torch_randn(3, requires_grad=TRUE)
    -target <- torch_rand(3)
    -output <- loss(m(input), target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_sigmoid()
+loss <- nn_bce_loss()
+input <- torch_randn(3, requires_grad=TRUE)
+target <- torch_rand(3)
+output <- loss(m(input), target)
+output$backward()
+
+}
+
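As a hedged aside, the unreduced loss formula above, \(l_n = -[y_n \log x_n + (1 - y_n)\log(1 - x_n)]\), can be reproduced by hand and compared with the module (values here are illustrative):

```r
# Sketch: reproduce nn_bce_loss(reduction = "mean") in base R.
x <- c(0.2, 0.7, 0.9)  # predicted probabilities (post-sigmoid)
y <- c(0, 1, 1)        # binary targets
manual <- mean(-(y * log(x) + (1 - y) * log(1 - x)))
manual  # ~ 0.2284

loss <- nn_bce_loss()
loss(torch_tensor(x), torch_tensor(y))  # same value as `manual`
```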
+
+
- - - + + diff --git a/dev/reference/nn_bce_with_logits_loss.html b/dev/reference/nn_bce_with_logits_loss.html index d155cdac9e60e71dfef16571da44182fb45ce60d..2f584788db8ef9f6d8b2c72c020705ea24ef5ce8 100644 --- a/dev/reference/nn_bce_with_logits_loss.html +++ b/dev/reference/nn_bce_with_logits_loss.html @@ -1,82 +1,21 @@ - - - - - - - -BCE with logits loss — nn_bce_with_logits_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -BCE with logits loss — nn_bce_with_logits_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,34 +117,28 @@ followed by a BCELoss as, by combining the operations into one laye we take advantage of the log-sum-exp trick for numerical stability.

-
nn_bce_with_logits_loss(weight = NULL, reduction = "mean", pos_weight = NULL)
+
+
nn_bce_with_logits_loss(weight = NULL, reduction = "mean", pos_weight = NULL)
+
-

Arguments

- - - - - - - - - - - - - - -
weight

(Tensor, optional): a manual rescaling weight given to the loss -of each batch element. If given, has to be a Tensor of size nbatch.

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
weight
+

(Tensor, optional): a manual rescaling weight given to the loss +of each batch element. If given, has to be a Tensor of size nbatch.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

pos_weight

(Tensor, optional): a weight of positive examples. -Must be a vector with length equal to the number of classes.

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
pos_weight
+

(Tensor, optional): a weight of positive examples. +Must be a vector with length equal to the number of classes.

+
+
+

Details

The unreduced (i.e. with reduction set to 'none') loss can be described as:

$$ \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad @@ -256,62 +172,59 @@ classification,

For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \(\frac{300}{100}=3\). The loss would act as if the dataset contains \(3\times 100=300\) positive examples.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

  • +
    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar. If reduction is 'none', then \((N, *)\), same shape as input.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_bce_with_logits_loss()
    -input <- torch_randn(3, requires_grad=TRUE)
    -target <- torch_empty(3)$random_(1, 2)
    -output <- loss(input, target)
    -output$backward()
    -
    -target <- torch_ones(10, 64, dtype=torch_float32())  # 64 classes, batch size = 10
    -output <- torch_full(c(10, 64), 1.5)  # A prediction (logit)
    -pos_weight <- torch_ones(64)  # All weights are equal to 1
    -criterion <- nn_bce_with_logits_loss(pos_weight=pos_weight)
    -criterion(output, target)  # -log(sigmoid(1.5))
    -
    -}
    -#> torch_tensor
    -#> 0.201413
    -#> [ CPUFloatType{} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_bce_with_logits_loss()
+input <- torch_randn(3, requires_grad=TRUE)
+target <- torch_empty(3)$random_(1, 2)
+output <- loss(input, target)
+output$backward()
+
+target <- torch_ones(10, 64, dtype=torch_float32())  # 64 classes, batch size = 10
+output <- torch_full(c(10, 64), 1.5)  # A prediction (logit)
+pos_weight <- torch_ones(64)  # All weights are equal to 1
+criterion <- nn_bce_with_logits_loss(pos_weight=pos_weight)
+criterion(output, target)  # -log(sigmoid(1.5))
+
+}
+#> torch_tensor
+#> 0.201413
+#> [ CPUFloatType{} ]
+
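As a hedged check, the criterion value printed above equals \(-\log(\mathrm{sigmoid}(1.5))\), which can be computed stably in base R:

```r
# -log(sigmoid(1.5)) = log(1 + exp(-1.5)), computed via log1p for stability
log1p(exp(-1.5))  # ~ 0.2014133, matching the tensor value above
```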
+
+
- - - + + diff --git a/dev/reference/nn_bilinear.html b/dev/reference/nn_bilinear.html index 669c6cbbf060bf527e3b8f32dd82dc3045bca9fa..d37251da96f1505f11354a95507580c00dc9041b 100644 --- a/dev/reference/nn_bilinear.html +++ b/dev/reference/nn_bilinear.html @@ -1,80 +1,19 @@ - - - - - - - -Bilinear module — nn_bilinear • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bilinear module — nn_bilinear • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,49 +113,38 @@ \(y = x_1^T A x_2 + b\)

-
nn_bilinear(in1_features, in2_features, out_features, bias = TRUE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
in1_features

size of each first input sample

in2_features

size of each second input sample

out_features

size of each output sample

bias

If set to FALSE, the layer will not learn an additive bias. -Default: TRUE

- -

Shape

+
+
nn_bilinear(in1_features, in2_features, out_features, bias = TRUE)
+
+
+

Arguments

+
in1_features
+

size of each first input sample

+
in2_features
+

size of each second input sample

+
out_features
+

size of each output sample

+
bias
+

If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

+
+
+

Shape

-
    -
  • Input1: \((N, *, H_{in1})\) \(H_{in1}=\mbox{in1\_features}\) and +

    • Input1: \((N, *, H_{in1})\) \(H_{in1}=\mbox{in1\_features}\) and \(*\) means any number of additional dimensions. All but the last dimension of the inputs should be the same.

    • Input2: \((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\).

    • Output: \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) and all but the last dimension are the same shape as the input.

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight: the learnable weights of the module of shape +

    • weight: the learnable weights of the module of shape \((\mbox{out\_features}, \mbox{in1\_features}, \mbox{in2\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\mbox{in1\_features}}\)

    • @@ -241,45 +152,42 @@ The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where If bias is TRUE, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\mbox{in1\_features}}\)

      -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_bilinear(20, 30, 50)
    -input1 <- torch_randn(128, 20)
    -input2 <- torch_randn(128, 30)
    -output = m(input1, input2)
    -print(output$size()) 
    -
    -}
    -#> [1] 128  50
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_bilinear(20, 30, 50)
+input1 <- torch_randn(128, 20)
+input2 <- torch_randn(128, 30)
+output <- m(input1, input2)
+print(output$size()) 
+
+}
+#> [1] 128  50
+
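As a hedged aside, one output unit of the example can be checked against the formula \(y = x_1^T A x_2 + b\) by pulling the parameters into plain R arrays (this assumes `m`, `input1`, `input2`, and `output` from the example above; `as_array()` converts a tensor to an R array):

```r
# Sketch: verify output unit 1 for the first batch element by hand.
W <- as_array(m$weight)        # shape (50, 20, 30): one 20x30 matrix per unit
b <- as_array(m$bias)
x1 <- as_array(input1)[1, ]    # length 20
x2 <- as_array(input2)[1, ]    # length 30
manual <- drop(t(x1) %*% W[1, , ] %*% x2) + b[1]
# manual should match as_array(output)[1, 1] up to floating-point error
```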
+
+
- - - + + diff --git a/dev/reference/nn_buffer.html b/dev/reference/nn_buffer.html index 6dd953f4ec9fe015287fb028cdf5aaa7d07794d2..496a81ad964f8064e1bac7dbd088caaef4f0b607 100644 --- a/dev/reference/nn_buffer.html +++ b/dev/reference/nn_buffer.html @@ -1,79 +1,18 @@ - - - - - - - -Creates a nn_buffer — nn_buffer • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a nn_buffer — nn_buffer • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

Indicates that a tensor is a buffer in a nn_module

-
nn_buffer(x, persistent = TRUE)
- -

Arguments

- - - - - - - - - - -
x

the tensor that will be converted to nn_buffer

persistent

whether the buffer should be persistent or not.

+
+
nn_buffer(x, persistent = TRUE)
+
+
+

Arguments

+
x
+

the tensor that will be converted to nn_buffer

+
persistent
+

whether the buffer should be persistent or not.

+
+
- - - + + diff --git a/dev/reference/nn_celu.html b/dev/reference/nn_celu.html index e5812b9cd6613ddf944e1dcd72fcdc0a61b056f7..dea2a9ab3f044e9416c365d3cdc5f5f2e57f2c22 100644 --- a/dev/reference/nn_celu.html +++ b/dev/reference/nn_celu.html @@ -1,79 +1,18 @@ - - - - - - - -CELU module — nn_celu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -CELU module — nn_celu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,72 +111,65 @@

Applies the element-wise function:

-
nn_celu(alpha = 1, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
alpha

the \(\alpha\) value for the CELU formulation. Default: 1.0

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_celu(alpha = 1, inplace = FALSE)
+
+
+

Arguments

+
alpha
+

the \(\alpha\) value for the CELU formulation. Default: 1.0

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) $$

More details can be found in the paper -Continuously Differentiable Exponential Linear Units.

-

Shape

- +Continuously Differentiable Exponential Linear Units.

+
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_celu()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_celu()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
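As a hedged aside, the element-wise formula above is simple enough to mirror in base R and compare against the module (an illustrative check, not the module's internal implementation):

```r
# Sketch: CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
celu_ref <- function(x, alpha = 1) {
  pmax(0, x) + pmin(0, alpha * (exp(x / alpha) - 1))
}
x <- c(-2, -0.5, 0, 1.5)
celu_ref(x)
m <- nn_celu()
as_array(m(torch_tensor(x)))  # matches celu_ref(x) up to float precision
```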
+
+
- - - + + diff --git a/dev/reference/nn_contrib_sparsemax.html b/dev/reference/nn_contrib_sparsemax.html index 00ab249fba2b8f40d4c1468f5f0a8428308f5421..d40d67ec0a9e1fa58fe21e5fddc09be0fe80d068 100644 --- a/dev/reference/nn_contrib_sparsemax.html +++ b/dev/reference/nn_contrib_sparsemax.html @@ -1,79 +1,18 @@ - - - - - - - -Sparsemax activation — nn_contrib_sparsemax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sparsemax activation — nn_contrib_sparsemax • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,48 +111,43 @@

Sparsemax activation module.

-
nn_contrib_sparsemax(dim = -1)
- -

Arguments

- - - - - - -
dim

The dimension over which to apply the sparsemax function. (-1)

- -

Details

+
+
nn_contrib_sparsemax(dim = -1)
+
+
+

Arguments

+
dim
+

The dimension over which to apply the sparsemax function. (-1)

+
+
+
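Sparsemax projects a score vector onto the probability simplex and, unlike softmax, can assign exactly zero probability to low-scoring entries. A minimal base-R sketch for a single vector, following Martins &amp; Astudillo (2016), is shown below as an illustrative reference, not the module's actual code:

```r
# Sparsemax of a numeric vector: sort the scores, find the support size,
# compute the threshold tau, then clip the shifted scores at zero.
sparsemax_ref <- function(z) {
  zs <- sort(z, decreasing = TRUE)
  k <- seq_along(zs)
  css <- cumsum(zs)
  supp <- max(which(1 + k * zs > css))  # support size k(z)
  tau <- (css[supp] - 1) / supp         # threshold
  pmax(z - tau, 0)
}
sparsemax_ref(c(2, 1, 0.1))  # c(1, 0, 0): mass concentrates on the top score
sparsemax_ref(c(0.5, 0.5))   # c(0.5, 0.5): ties are preserved
```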
- - - + + diff --git a/dev/reference/nn_conv1d.html b/dev/reference/nn_conv1d.html index 59ff0be6010903460f5ff8944f20dfa4834dfc4d..c5a74fff9c7b34f72033f81944a2e72b1ec00e0a 100644 --- a/dev/reference/nn_conv1d.html +++ b/dev/reference/nn_conv1d.html @@ -1,83 +1,22 @@ - - - - - - - -Conv1D module — nn_conv1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv1D module — nn_conv1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,87 +119,67 @@ In the simplest case, the output value of the layer with input size precisely described as:

-
nn_conv1d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1,
-  bias = TRUE,
-  padding_mode = "zeros"
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int, tuple or str, optional) – Padding added to both sides of -the input. Default: 0

dilation

(int or tuple, optional): Spacing between kernel -elements. Default: 1

groups

(int, optional): Number of blocked connections from input -channels to output channels. Default: 1

bias

(bool, optional): If TRUE, adds a learnable bias to the -output. Default: TRUE

padding_mode

(string, optional): 'zeros', 'reflect', -'replicate' or 'circular'. Default: 'zeros'

- -

Details

+
+
nn_conv1d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1,
+  bias = TRUE,
+  padding_mode = "zeros"
+)
+
+
+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int, tuple or str, optional) – Padding added to both sides of +the input. Default: 0

+
dilation
+

(int or tuple, optional): Spacing between kernel +elements. Default: 1

+
groups
+

(int, optional): Number of blocked connections from input +channels to output channels. Default: 1

+
bias
+

(bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

+
padding_mode
+

(string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

$$ \mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) \star \mbox{input}(N_i, k) $$

where \(\star\) is the valid -cross-correlation operator, +cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, -\(L\) is a length of signal sequence.

    -
  • stride controls the stride for the cross-correlation, a single +\(L\) is a length of signal sequence.

    • stride controls the stride for the cross-correlation, a single number or a one-element tuple.

    • padding controls the amount of implicit zero-paddings on both sides for padding number of points.

    • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this -link +link has a nice visualization of what dilation does.

    • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by -groups. For example,

        -
      • At groups=1, all inputs are convolved to all outputs.

      • +groups. For example,

        • At groups=1, all inputs are convolved to all outputs.

        • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently @@ -286,10 +188,9 @@ concatenated.

        • its own set of filters, of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

        -
      - -

      Note

      - +
+
+

Note

@@ -303,25 +204,23 @@ literature as depthwise convolution. In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K, can be constructed by arguments \((C_{\mbox{in}}=C_{in}, C_{\mbox{out}}=C_{in} \times K, ..., \mbox{groups}=C_{in})\).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, L_{in})\)

  • +
    • Input: \((N, C_{in}, L_{in})\)

    • Output: \((N, C_{out}, L_{out})\) where

    • -
    - -

    $$ +

$$ L_{out} = \left\lfloor\frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor $$

-

Attributes

- +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}, \mbox{kernel\_size})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where @@ -330,42 +229,39 @@ The values of these weights are sampled from (out_channels). If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{in}} * \mbox{kernel\_size}}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_conv1d(16, 33, 3, stride=2)
    -input <- torch_randn(20, 16, 50)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_conv1d(16, 33, 3, stride=2)
+input <- torch_randn(20, 16, 50)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_conv2d.html b/dev/reference/nn_conv2d.html index 1ca89d71c52fcf3a396980ce76f11d35787daa16..60b80b363885b58e63af2b764c0bea9e54a4c06d 100644 --- a/dev/reference/nn_conv2d.html +++ b/dev/reference/nn_conv2d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv2D module — nn_conv2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv2D module — nn_conv2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,67 +113,49 @@ planes." /> planes.

-
nn_conv2d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1,
-  bias = TRUE,
-  padding_mode = "zeros"
-)
+
+
nn_conv2d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1,
+  bias = TRUE,
+  padding_mode = "zeros"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple or string, optional): Zero-padding added to both sides of +

+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int or tuple or string, optional): Zero-padding added to both sides of the input. controls the amount of padding applied to the input. It can be either a string 'valid', 'same' or a tuple of ints giving the -amount of implicit padding applied on both sides. Default: 0

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

groups

(int, optional): Number of blocked connections from input -channels to output channels. Default: 1

bias

(bool, optional): If TRUE, adds a learnable bias to the -output. Default: TRUE

padding_mode

(string, optional): 'zeros', 'reflect', -'replicate' or 'circular'. Default: 'zeros'

- -

Details

- +amount of implicit padding applied on both sides. Default: 0

+
dilation
+

(int or tuple, optional): Spacing between kernel elements. Default: 1

+
groups
+

(int, optional): Number of blocked connections from input +channels to output channels. Default: 1

+
bias
+

(bool, optional): If TRUE, adds a learnable bias to the +output. Default: TRUE

+
padding_mode
+

(string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

In the simplest case, the output value of the layer with input size \((N, C_{\mbox{in}}, H, W)\) and output \((N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}})\) can be precisely described as:

@@ -262,8 +166,7 @@ $$

where \(\star\) is the valid 2D cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, \(H\) is a height of input planes in pixels, and \(W\) is -width in pixels.

    -
  • stride controls the stride for the cross-correlation, a single +width in pixels.

    • stride controls the stride for the cross-correlation, a single number or a tuple.

    • padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension.

    • @@ -272,8 +175,7 @@ known as the à trous algorithm. It is harder to describe, but this linkdilation does.

    • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by -groups. For example,

        -
      • At groups=1, all inputs are convolved to all outputs.

      • +groups. For example,

        • At groups=1, all inputs are convolved to all outputs.

        • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently @@ -282,17 +184,13 @@ concatenated.

        • its own set of filters, of size: \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

        -
      - -

      The parameters kernel_size, stride, padding, dilation can either be:

        -
      • a single int -- in which case the same value is used for the height and +

      The parameters kernel_size, stride, padding, dilation can either be:

      • a single int -- in which case the same value is used for the height and width dimension

      • a tuple of two ints -- in which case, the first int is used for the height dimension, and the second int for the width dimension

      • -
      - -

      Note

      - +
+
+

Note

@@ -310,12 +208,12 @@ a depthwise convolution with a depthwise multiplier K, can be const may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting backends_cudnn_deterministic = TRUE.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • Output: \((N, C_{out}, H_{out}, W_{out})\) where $$ H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] @@ -325,14 +223,12 @@ $$ W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor $$

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}\), \(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). The values of these weights are sampled from @@ -343,48 +239,45 @@ The values of these weights are sampled from then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -# With square kernels and equal stride
    -m <- nn_conv2d(16, 33, 3, stride = 2)
    -# non-square kernels and unequal stride and with padding
    -m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2))
    -# non-square kernels and unequal stride and with padding and dilation
    -m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2), dilation=c(3, 1))
    -input <- torch_randn(20, 16, 50, 100)
    -output <- m(input)  
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+# With square kernels and equal stride
+m <- nn_conv2d(16, 33, 3, stride = 2)
+# non-square kernels and unequal stride and with padding
+m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2))
+# non-square kernels and unequal stride and with padding and dilation
+m <- nn_conv2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2), dilation=c(3, 1))
+input <- torch_randn(20, 16, 50, 100)
+output <- m(input)  
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_conv3d.html b/dev/reference/nn_conv3d.html index 6d1b8f903466bafc62de333acfd21c20e6fda256..387af6bd1daaae0647334c6d1da4b7f35a3c1427 100644 --- a/dev/reference/nn_conv3d.html +++ b/dev/reference/nn_conv3d.html @@ -1,82 +1,21 @@ - - - - - - - -Conv3D module — nn_conv3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv3D module — nn_conv3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,67 +117,48 @@ In the simplest case, the output value of the layer with input size \((N, C_{in} and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:

-
nn_conv3d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1,
-  bias = TRUE,
-  padding_mode = "zeros"
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int, tuple or str, optional): padding added to all six sides of the input. Default: 0

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

padding_mode

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

- -

Details

+
+
nn_conv3d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1,
+  bias = TRUE,
+  padding_mode = "zeros"
+)
+
+
+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int, tuple or str, optional): padding added to all six sides of the input. Default: 0

+
dilation
+

(int or tuple, optional): Spacing between kernel elements. Default: 1

+
groups
+

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

+
bias
+

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

+
padding_mode
+

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

$$ out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) $$

-

where \(\star\) is the valid 3D cross-correlation operator

    -
  • stride controls the stride for the cross-correlation.

  • +

    where \(\star\) is the valid 3D cross-correlation operator

    • stride controls the stride for the cross-correlation.

    • padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension.

    • dilation controls the spacing between the kernel points; also known as the à trous algorithm. @@ -271,16 +174,12 @@ concatenated.

    • At groups= in_channels, each input channel is convolved with its own set of filters, of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\).

    • -
    - -

    The parameters kernel_size, stride, padding, dilation can either be:

      -
    • a single int -- in which case the same value is used for the depth, height and width dimension

    • +

    The parameters kernel_size, stride, padding, dilation can either be:

    • a single int -- in which case the same value is used for the depth, height and width dimension

    • a tuple of three ints -- in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

    • -
    - -

    Note

    - +
+
+

Note

Depending of the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. @@ -296,12 +195,12 @@ may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE. Please see the notes on :doc:/notes/randomness for background.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where $$ D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] @@ -315,14 +214,12 @@ $$ W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] \times (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor $$

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}},\) \(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). The values of these weights are sampled from @@ -332,45 +229,42 @@ The values of these weights are sampled from then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{in}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# With square kernels and equal stride
    -m <- nn_conv3d(16, 33, 3, stride=2)
    -# non-square kernels and unequal stride and with padding
    -m <- nn_conv3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(4, 2, 0))
    -input <- torch_randn(20, 16, 10, 50, 100)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# With square kernels and equal stride
+m <- nn_conv3d(16, 33, 3, stride=2)
+# non-square kernels and unequal stride and with padding
+m <- nn_conv3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(4, 2, 0))
+input <- torch_randn(20, 16, 10, 50, 100)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_conv_transpose1d.html b/dev/reference/nn_conv_transpose1d.html index 0d3350b80a0d6cac7b1551ed55d75d0dd1d78506..48da6fbad2bd2350cf97378b01204e26ce86bb84 100644 --- a/dev/reference/nn_conv_transpose1d.html +++ b/dev/reference/nn_conv_transpose1d.html @@ -1,80 +1,19 @@ - - - - - - - -ConvTranspose1D — nn_conv_transpose1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ConvTranspose1D — nn_conv_transpose1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,85 +113,63 @@ composed of several input planes." /> composed of several input planes.

-
nn_conv_transpose1d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  bias = TRUE,
-  dilation = 1,
-  padding_mode = "zeros"
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding -will be added to both sides of the input. Default: 0

output_padding

(int or tuple, optional): Additional size added to one side -of the output shape. Default: 0

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If True, adds a learnable bias to the output. Default: TRUE

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

padding_mode

(string, optional): 'zeros', 'reflect', -'replicate' or 'circular'. Default: 'zeros'

- -

Details

+
+
nn_conv_transpose1d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  bias = TRUE,
+  dilation = 1,
+  padding_mode = "zeros"
+)
+
+
+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of the input. Default: 0

+
output_padding
+

(int or tuple, optional): Additional size added to one side +of the output shape. Default: 0

+
groups
+

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

+
bias
+

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

+
dilation
+

(int or tuple, optional): Spacing between kernel elements. Default: 1

+
padding_mode
+

(string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or -a deconvolution (although it is not an actual deconvolution operation).

    -
  • stride controls the stride for the cross-correlation.

  • +a deconvolution (although it is not an actual deconvolution operation).

    • stride controls the stride for the cross-correlation.

    • padding controls the amount of implicit zero-paddings on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

    • output_padding controls the additional size added to one side of the output shape. See note below for details.

    • dilation controls the spacing between the kernel points; also known as the -à trous algorithm. It is harder to describe, but this link +à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

    • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by -groups. For example,

        -
      • At groups=1, all inputs are convolved to all outputs.

      • +groups. For example,

        • At groups=1, all inputs are convolved to all outputs.

        • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently @@ -278,10 +178,9 @@ concatenated.

        • its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

        -
      - -

      Note

      - +
+
+

Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding. The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input. This is set so that when a nn_conv1d and a nn_conv_transpose1d are initialized with same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, nn_conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does @@ -300,25 +199,23 @@ not actually add zero-padding to output.

may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, L_{in})\)

  • +
    • Input: \((N, C_{in}, L_{in})\)

    • Output: \((N, C_{out}, L_{out})\) where $$ L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (\mbox{kernel\_size} - 1) + \mbox{output\_padding} + 1 $$

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) \(\mbox{kernel\_size})\). The values of these weights are sampled from @@ -328,42 +225,39 @@ The values of these weights are sampled from If bias is TRUE, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \mbox{kernel\_size}}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_conv_transpose1d(32, 16, 2)
    -input <- torch_randn(10, 32, 2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_conv_transpose1d(32, 16, 2)
+input <- torch_randn(10, 32, 2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_conv_transpose2d.html b/dev/reference/nn_conv_transpose2d.html index 355cb3c89b8f9b3d8d47ed72d2d7ff46542e8047..37e0adc41497b5570d30446216c8457a5520deca 100644 --- a/dev/reference/nn_conv_transpose2d.html +++ b/dev/reference/nn_conv_transpose2d.html @@ -1,80 +1,19 @@ - - - - - - - -ConvTranpose2D module — nn_conv_transpose2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ConvTranpose2D module — nn_conv_transpose2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,73 +113,52 @@ composed of several input planes." /> composed of several input planes.

-
nn_conv_transpose2d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  bias = TRUE,
-  dilation = 1,
-  padding_mode = "zeros"
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding -will be added to both sides of each dimension in the input. Default: 0

output_padding

(int or tuple, optional): Additional size added to one side -of each dimension in the output shape. Default: 0

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If True, adds a learnable bias to the output. Default: True

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

padding_mode

(string, optional): 'zeros', 'reflect', -'replicate' or 'circular'. Default: 'zeros'

- -

Details

+
+
nn_conv_transpose2d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  bias = TRUE,
+  dilation = 1,
+  padding_mode = "zeros"
+)
+
+
+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +will be added to both sides of each dimension in the input. Default: 0

+
output_padding
+

(int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

+
groups
+

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

+
bias
+

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

+
dilation
+

(int or tuple, optional): Spacing between kernel elements. Default: 1

+
padding_mode
+

(string, optional): 'zeros', 'reflect', +'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or -a deconvolution (although it is not an actual deconvolution operation).

    -
  • stride controls the stride for the cross-correlation.

  • +a deconvolution (although it is not an actual deconvolution operation).

    • stride controls the stride for the cross-correlation.

    • padding controls the amount of implicit zero-paddings on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

    • @@ -267,8 +168,7 @@ of the output shape. See note below for details.

      It is harder to describe, but this link has a nice visualization of what dilation does.

    • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by -groups. For example,

        -
      • At groups=1, all inputs are convolved to all outputs.

      • +groups. For example,

        • At groups=1, all inputs are convolved to all outputs.

        • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently @@ -277,26 +177,22 @@ concatenated.

        • its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

        -
      - -

      The parameters kernel_size, stride, padding, output_padding -can either be:

        -
      • a single int -- in which case the same value is used for the height and width dimensions

      • +

      The parameters kernel_size, stride, padding, output_padding +can either be:

      • a single int -- in which case the same value is used for the height and width dimensions

      • a tuple of two ints -- in which case, the first int is used for the height dimension, and the second int for the width dimension

      • -
      - -

      Note

      - +
+
+

Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input. This is set so that -when a nn_conv2d and a nn_conv_transpose2d are initialized with same +when a nn_conv2d and a nn_conv_transpose2d are initialized with same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, -nn_conv2d maps multiple input shapes to the same output +nn_conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does @@ -305,12 +201,12 @@ not actually add zero-padding to output.

may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C_{in}, H_{in}, W_{in})\)

    • Output: \((N, C_{out}, H_{out}, W_{out})\) where $$ H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] @@ -320,14 +216,12 @@ $$ W_{out} = (W_{in} - 1) \times \mbox{stride}[1] - 2 \times \mbox{padding}[1] + \mbox{dilation}[1] \times (\mbox{kernel\_size}[1] - 1) + \mbox{output\_padding}[1] + 1 $$

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) \(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]})\). The values of these weights are sampled from @@ -337,54 +231,51 @@ The values of these weights are sampled from If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{1}\mbox{kernel\_size}[i]}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# With square kernels and equal stride
    -m <- nn_conv_transpose2d(16, 33, 3, stride=2)
    -# non-square kernels and unequal stride and with padding
    -m <- nn_conv_transpose2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2))
    -input <- torch_randn(20, 16, 50, 100)
    -output <- m(input)
    -# exact output size can be also specified as an argument
    -input <- torch_randn(1, 16, 12, 12)
    -downsample <- nn_conv2d(16, 16, 3, stride=2, padding=1)
    -upsample <- nn_conv_transpose2d(16, 16, 3, stride=2, padding=1)
    -h <- downsample(input)
    -h$size()
    -output <- upsample(h, output_size=input$size())
    -output$size()
    -
    -}
    -#> [1]  1 16 12 12
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# With square kernels and equal stride
+m <- nn_conv_transpose2d(16, 33, 3, stride=2)
+# non-square kernels and unequal stride and with padding
+m <- nn_conv_transpose2d(16, 33, c(3, 5), stride=c(2, 1), padding=c(4, 2))
+input <- torch_randn(20, 16, 50, 100)
+output <- m(input)
+# exact output size can be also specified as an argument
+input <- torch_randn(1, 16, 12, 12)
+downsample <- nn_conv2d(16, 16, 3, stride=2, padding=1)
+upsample <- nn_conv_transpose2d(16, 16, 3, stride=2, padding=1)
+h <- downsample(input)
+h$size()
+output <- upsample(h, output_size=input$size())
+output$size()
+
+}
+#> [1]  1 16 12 12
+
+
+
- - - + + diff --git a/dev/reference/nn_conv_transpose3d.html b/dev/reference/nn_conv_transpose3d.html index 11bd01f8599241e82a0ad4266133a14c760aa755..eb489d7793c553df6e24c5a2a1ef1a8e34c7f616 100644 --- a/dev/reference/nn_conv_transpose3d.html +++ b/dev/reference/nn_conv_transpose3d.html @@ -1,80 +1,19 @@ - - - - - - - -ConvTranpose3D module — nn_conv_transpose3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ConvTranpose3D module — nn_conv_transpose3d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,76 +113,55 @@ planes." /> planes.

-
nn_conv_transpose3d(
-  in_channels,
-  out_channels,
-  kernel_size,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  bias = TRUE,
-  dilation = 1,
-  padding_mode = "zeros"
-)
+
+
nn_conv_transpose3d(
+  in_channels,
+  out_channels,
+  kernel_size,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  bias = TRUE,
+  dilation = 1,
+  padding_mode = "zeros"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding +

+

Arguments

+
in_channels
+

(int): Number of channels in the input image

+
out_channels
+

(int): Number of channels produced by the convolution

+
kernel_size
+

(int or tuple): Size of the convolving kernel

+
stride
+

(int or tuple, optional): Stride of the convolution. Default: 1

+
padding
+

(int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0

output_padding

(int or tuple, optional): Additional size added to one side -of each dimension in the output shape. Default: 0

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If True, adds a learnable bias to the output. Default: True

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

padding_mode

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

- -

Details

- +of each dimension in the output shape. Default: 0

+
output_padding
+

(int or tuple, optional): Additional size added to one side +of each dimension in the output shape. Default: 0

+
groups
+

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

+
bias
+

(bool, optional): If True, adds a learnable bias to the output. Default: True

+
dilation
+

(int or tuple, optional): Spacing between kernel elements. Default: 1

+
padding_mode
+

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

+
+
+

Details

The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes.

This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or -a deconvolution (although it is not an actual deconvolution operation).

    -
  • stride controls the stride for the cross-correlation.

  • +a deconvolution (although it is not an actual deconvolution operation).

    • stride controls the stride for the cross-correlation.

    • padding controls the amount of implicit zero-paddings on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

    • @@ -270,8 +171,7 @@ of the output shape. See note below for details.

      It is harder to describe, but this link_ has a nice visualization of what dilation does.

    • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by -groups. For example,

        -
      • At groups=1, all inputs are convolved to all outputs.

      • +groups. For example,

        • At groups=1, all inputs are convolved to all outputs.

        • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently @@ -280,17 +180,13 @@ concatenated.

        • its own set of filters (of size \(\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor\)).

        -
      - -

      The parameters kernel_size, stride, padding, output_padding -can either be:

        -
      • a single int -- in which case the same value is used for the depth, height and width dimensions

      • +

      The parameters kernel_size, stride, padding, output_padding +can either be:

      • a single int -- in which case the same value is used for the depth, height and width dimensions

      • a tuple of three ints -- in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

      • -
      - -

      Note

      - +
+
+

Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. @@ -309,12 +205,12 @@ not actually add zero-padding to output.

may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    • Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where $$ D_{out} = (D_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{dilation}[0] @@ -328,14 +224,12 @@ $$ W_{out} = (W_{in} - 1) \times \mbox{stride}[2] - 2 \times \mbox{padding}[2] + \mbox{dilation}[2] \times (\mbox{kernel\_size}[2] - 1) + \mbox{output\_padding}[2] + 1 $$

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape +

    • weight (Tensor): the learnable weights of the module of shape \((\mbox{in\_channels}, \frac{\mbox{out\_channels}}{\mbox{groups}},\) \(\mbox{kernel\_size[0]}, \mbox{kernel\_size[1]}, \mbox{kernel\_size[2]})\). The values of these weights are sampled from @@ -345,46 +239,43 @@ The values of these weights are sampled from If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_{\mbox{out}} * \prod_{i=0}^{2}\mbox{kernel\_size}[i]}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -if (FALSE) {
    -# With square kernels and equal stride
    -m <- nn_conv_transpose3d(16, 33, 3, stride=2)
    -# non-square kernels and unequal stride and with padding
    -m <- nn_conv_transpose3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(0, 4, 2))
    -input <- torch_randn(20, 16, 10, 50, 100)
    -output <- m(input)
    -}
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+# With square kernels and equal stride
+m <- nn_conv_transpose3d(16, 33, 3, stride=2)
+# non-square kernels and unequal stride and with padding
+m <- nn_conv_transpose3d(16, 33, c(3, 5, 2), stride=c(2, 1, 1), padding=c(0, 4, 2))
+input <- torch_randn(20, 16, 10, 50, 100)
+output <- m(input)
+}
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_cosine_embedding_loss.html b/dev/reference/nn_cosine_embedding_loss.html index e8f992b6d7655ad22c4608e85aa4c854e09ecac9..3e48bb674bc707c3b00d593162e7ba3f1018ffc8 100644 --- a/dev/reference/nn_cosine_embedding_loss.html +++ b/dev/reference/nn_cosine_embedding_loss.html @@ -1,84 +1,23 @@ - - - - - - - -Cosine embedding loss — nn_cosine_embedding_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cosine embedding loss — nn_cosine_embedding_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -199,30 +121,26 @@ embeddings or semi-supervised learning. The loss function for each sample is:

-
nn_cosine_embedding_loss(margin = 0, reduction = "mean")
+
+
nn_cosine_embedding_loss(margin = 0, reduction = "mean")
+
-

Arguments

- - - - - - - - - - -
margin

(float, optional): Should be a number from \(-1\) to \(1\), +

+

Arguments

+
margin
+

(float, optional): Should be a number from \(-1\) to \(1\), \(0\) to \(0.5\) is suggested. If margin is missing, the -default value is \(0\).

reduction

(string, optional): Specifies the reduction to apply to the output: +default value is \(0\).

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$ \mbox{loss}(x, y) = \begin{array}{ll} @@ -230,32 +148,29 @@ specifying either of those two args will override reduction. Defaul \max(0, \cos(x_1, x_2) - \mbox{margin}), & \mbox{if } y = -1 \end{array} $$

+
+
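For a single pair of vectors, the loss defined above reduces to a few lines. The following plain-Python sketch (hypothetical helper name, not the package's R API) mirrors the formula:

```python
import math

# Per-sample cosine embedding loss as defined above; `cosine_embedding_loss`
# is a hypothetical helper, not the torch API.
def cosine_embedding_loss(x1, x2, y, margin=0.0):
    cos = sum(a * b for a, b in zip(x1, x2)) / (
        math.sqrt(sum(a * a for a in x1)) * math.sqrt(sum(b * b for b in x2))
    )
    if y == 1:
        return 1.0 - cos           # similar pair: minimize 1 - cos(x1, x2)
    return max(0.0, cos - margin)  # dissimilar pair: hinge at the margin

assert cosine_embedding_loss([1.0, 0.0], [2.0, 0.0], 1) == 0.0
assert cosine_embedding_loss([1.0, 0.0], [1.0, 0.0], -1, margin=0.5) == 0.5
```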
- - - + + diff --git a/dev/reference/nn_cross_entropy_loss.html b/dev/reference/nn_cross_entropy_loss.html index 682edc8ee5e230c0c7b30746fb496c76d9fb6561..c53aa1259e4c4a0f2f7974c8c749ade9deba4c1e 100644 --- a/dev/reference/nn_cross_entropy_loss.html +++ b/dev/reference/nn_cross_entropy_loss.html @@ -1,80 +1,19 @@ - - - - - - - -CrossEntropyLoss module — nn_cross_entropy_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -CrossEntropyLoss module — nn_cross_entropy_loss • torch - - - - - - - - + + -
-
- -
- -
+
-

This criterion combines nn_log_softmax() and nn_nll_loss() in one single class. +

This criterion combines nn_log_softmax() and nn_nll_loss() in one single class. It is useful when training a classification problem with C classes.

-
nn_cross_entropy_loss(weight = NULL, ignore_index = -100, reduction = "mean")
+
+
nn_cross_entropy_loss(weight = NULL, ignore_index = -100, reduction = "mean")
+
-

Arguments

- - - - - - - - - - - - - - -
weight

(Tensor, optional): a manual rescaling weight given to each class. -If given, has to be a Tensor of size C

ignore_index

(int, optional): Specifies a target value that is ignored +

+

Arguments

+
weight
+

(Tensor, optional): a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

+
ignore_index
+

(int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is -TRUE, the loss is averaged over non-ignored targets.

reduction

(string, optional): Specifies the reduction to apply to the output: +TRUE, the loss is averaged over non-ignored targets.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes.

This is particularly useful when you have an unbalanced training set. @@ -245,12 +161,12 @@ Can also be used for higher dimension inputs, such as 2D images, by providing an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C)\) where C = number of classes, or +

    • Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

    • Target: \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), or @@ -261,44 +177,41 @@ If reduction is 'none', then the same size as the targ \((N)\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_cross_entropy_loss()
    -input <- torch_randn(3, 5, requires_grad=TRUE)
    -target <- torch_randint(low = 1, high = 5, size = 3, dtype = torch_long())
    -output <- loss(input, target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_cross_entropy_loss()
+input <- torch_randn(3, 5, requires_grad=TRUE)
+target <- torch_randint(low = 1, high = 5, size = 3, dtype = torch_long())
+output <- loss(input, target)
+output$backward()
+
+}
+
+
+
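Concretely, the combination of log-softmax and negative log-likelihood described above amounts to the following for one sample (plain-Python sketch; `cross_entropy` is a hypothetical helper, not the torch API):

```python
import math

# log-softmax of the logits followed by the negative log-likelihood of the
# target class, as this criterion computes for a single sample.
def cross_entropy(logits, target):
    log_z = math.log(sum(math.exp(v) for v in logits))  # log of the softmax normalizer
    return log_z - logits[target]

# with uniform logits over 5 classes, the loss is log(5)
assert abs(cross_entropy([0.0] * 5, 2) - math.log(5)) < 1e-12
```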
- - - + + diff --git a/dev/reference/nn_ctc_loss.html b/dev/reference/nn_ctc_loss.html index 0a2590b46229ff97309a13022248cfa82d4c8dea..6eb8c07bf972fed99e905abdc27dd3d38aeea4cf 100644 --- a/dev/reference/nn_ctc_loss.html +++ b/dev/reference/nn_ctc_loss.html @@ -1,82 +1,21 @@ - - - - - - - -The Connectionist Temporal Classification loss. — nn_ctc_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -The Connectionist Temporal Classification loss. — nn_ctc_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -195,34 +117,28 @@ with respect to each input node. The alignment of input to target is assumed to limits the length of the target sequence such that it must be \(\leq\) the input length.

-
nn_ctc_loss(blank = 0, reduction = "mean", zero_infinity = FALSE)
+
+
nn_ctc_loss(blank = 0, reduction = "mean", zero_infinity = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - -
blank

(int, optional): blank label. Default \(0\).

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
blank
+

(int, optional): blank label. Default \(0\).

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and -then the mean over the batch is taken. Default: 'mean'

zero_infinity

(bool, optional): +then the mean over the batch is taken. Default: 'mean'

+
zero_infinity
+

(bool, optional): Whether to zero infinite losses and the associated gradients. Default: FALSE Infinite losses mainly occur when the inputs are too short -to be aligned to the targets.

- -

Note

- +to be aligned to the targets.

+
+
+

Note

In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, \(blank=0\), target_lengths \(\leq 256\), and the integer arguments must be of @@ -232,12 +148,12 @@ dtype torch_int32.

may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = TRUE.

-

Shape

- +
+
+

Shape

-
    -
  • Log_probs: Tensor of size \((T, N, C)\), +

    • Log_probs: Tensor of size \((T, N, C)\), where \(T = \mbox{input length}\), \(N = \mbox{batch size}\), and \(C = \mbox{number of classes (including blank)}\). @@ -270,81 +186,79 @@ If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.

    • Output: scalar. If reduction is 'none', then \((N)\), where \(N = \mbox{batch size}\).

    • -
    - -

    [nnf)log_softmax()]: R:nnf)log_softmax() +

[nnf_log_softmax()]: R:nnf_log_softmax() [n,0:s_n]: R:n,0:s_n

-

References

- +
+
+

References

A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf

+
-

Examples

-
if (torch_is_installed()) {
-# Target are to be padded
-T <- 50      # Input sequence length
-C <- 20      # Number of classes (including blank)
-N <- 16      # Batch size
-S <- 30      # Target sequence length of longest target in batch (padding length)
-S_min <- 10  # Minimum target length, for demonstration purposes
-
-# Initialize random batch of input vectors, for *size = (T,N,C)
-input <- torch_randn(T, N, C)$log_softmax(2)$detach()$requires_grad_()
-
-# Initialize random batch of targets (0 = blank, 1:C = classes)
-target <- torch_randint(low=1, high=C, size=c(N, S), dtype=torch_long())
-
-input_lengths <- torch_full(size=c(N), fill_value=TRUE, dtype=torch_long())
-target_lengths <- torch_randint(low=S_min, high=S, size=c(N), dtype=torch_long())
-ctc_loss <- nn_ctc_loss()
-loss <- ctc_loss(input, target, input_lengths, target_lengths)
-loss$backward()
-
-
-# Target are to be un-padded
-T <- 50      # Input sequence length
-C <- 20      # Number of classes (including blank)
-N <- 16      # Batch size
-
-# Initialize random batch of input vectors, for *size = (T,N,C)
-input <- torch_randn(T, N, C)$log_softmax(2)$detach()$requires_grad_()
-input_lengths <- torch_full(size=c(N), fill_value=TRUE, dtype=torch_long())
-
-# Initialize random batch of targets (0 = blank, 1:C = classes)
-target_lengths <- torch_randint(low=1, high=T, size=c(N), dtype=torch_long())
-target <- torch_randint(low=1, high=C, size=as.integer(sum(target_lengths)), dtype=torch_long())
-ctc_loss <- nn_ctc_loss()
-loss <- ctc_loss(input, target, input_lengths, target_lengths)
-loss$backward()
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# Targets are to be padded
+T <- 50      # Input sequence length
+C <- 20      # Number of classes (including blank)
+N <- 16      # Batch size
+S <- 30      # Target sequence length of longest target in batch (padding length)
+S_min <- 10  # Minimum target length, for demonstration purposes
+
+# Initialize random batch of input vectors, for *size = (T,N,C)
+input <- torch_randn(T, N, C)$log_softmax(2)$detach()$requires_grad_()
+
+# Initialize random batch of targets (0 = blank, 1:C = classes)
+target <- torch_randint(low=1, high=C, size=c(N, S), dtype=torch_long())
+
+input_lengths <- torch_full(size=c(N), fill_value=T, dtype=torch_long())  # every input has full length T
+target_lengths <- torch_randint(low=S_min, high=S, size=c(N), dtype=torch_long())
+ctc_loss <- nn_ctc_loss()
+loss <- ctc_loss(input, target, input_lengths, target_lengths)
+loss$backward()
+
+
+# Targets are to be un-padded
+T <- 50      # Input sequence length
+C <- 20      # Number of classes (including blank)
+N <- 16      # Batch size
+
+# Initialize random batch of input vectors, for *size = (T,N,C)
+input <- torch_randn(T, N, C)$log_softmax(2)$detach()$requires_grad_()
+input_lengths <- torch_full(size=c(N), fill_value=T, dtype=torch_long())  # every input has full length T
+
+# Initialize random batch of targets (0 = blank, 1:C = classes)
+target_lengths <- torch_randint(low=1, high=T, size=c(N), dtype=torch_long())
+target <- torch_randint(low=1, high=C, size=as.integer(sum(target_lengths)), dtype=torch_long())
+ctc_loss <- nn_ctc_loss()
+loss <- ctc_loss(input, target, input_lengths, target_lengths)
+loss$backward()
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_dropout.html b/dev/reference/nn_dropout.html index d23d042552dbb84c70e1bb8e1fee9597e0d26242..ece938cbd6b1b1396a1711643ff736ebb7c4510a 100644 --- a/dev/reference/nn_dropout.html +++ b/dev/reference/nn_dropout.html @@ -1,82 +1,21 @@ - - - - - - - -Dropout module — nn_dropout • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout module — nn_dropout • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,72 +117,65 @@ distribution. Each channel will be zeroed out independently on every forward call.

-
nn_dropout(p = 0.5, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
p

probability of an element to be zeroed. Default: 0.5

inplace

If set to TRUE, will do this operation in-place. Default: FALSE.

- -

Details

+
+
nn_dropout(p = 0.5, inplace = FALSE)
+
+
+

Arguments

+
p
+

probability of an element to be zeroed. Default: 0.5

+
inplace
+

If set to TRUE, will do this operation in-place. Default: FALSE.

+
+
+

Details

This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper -Improving neural networks by preventing co-adaptation of feature detectors.

+Improving neural networks by preventing co-adaptation of feature detectors.

Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\). Input can be of any shape

  • +
    • Input: \((*)\). Input can be of any shape

    • Output: \((*)\). Output is of the same shape as input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_dropout(p = 0.2)
    -input <- torch_randn(20, 16)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_dropout(p = 0.2)
+input <- torch_randn(20, 16)
+output <- m(input)
+
+}
+
+
+
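The two behaviours described above, zeroing with probability p plus \(\frac{1}{1-p}\) rescaling during training, and the identity at evaluation time, can be sketched in a few lines of plain Python (`dropout` is a hypothetical helper, not the torch API):

```python
import random

# Each element is zeroed with probability p; survivors are scaled by 1/(1-p),
# so the expected value of the output matches the input. At evaluation time
# the module is the identity.
def dropout(xs, p=0.5, training=True):
    if not training:
        return list(xs)
    return [0.0 if random.random() < p else x / (1 - p) for x in xs]

random.seed(0)
out = dropout([1.0] * 1000, p=0.2)
assert set(out) <= {0.0, 1.25}                       # survivors scaled to 1/(1 - 0.2)
assert dropout([3.0, -1.0], training=False) == [3.0, -1.0]
```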
- - - + + diff --git a/dev/reference/nn_dropout2d.html b/dev/reference/nn_dropout2d.html index a1804f5b229d110e397e401b3fc0bbd3fc86e714..a4c1ba6d53491df8645663ed8cb117b0858d8763 100644 --- a/dev/reference/nn_dropout2d.html +++ b/dev/reference/nn_dropout2d.html @@ -1,81 +1,20 @@ - - - - - - - -Dropout2D module — nn_dropout2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout2D module — nn_dropout2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,78 +115,71 @@ e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\mbox{input}[i, j]\)).

-
nn_dropout2d(p = 0.5, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
p

(float, optional): probability of an element to be zero-ed.

inplace

(bool, optional): If set to TRUE, will do this operation -in-place

- -

Details

+
+
nn_dropout2d(p = 0.5, inplace = FALSE)
+
+
+

Arguments

+
p
+

(float, optional): probability of an element to be zero-ed.

+
inplace
+

(bool, optional): If set to TRUE, will do this operation +in-place

+
+
+

Details

Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. -Usually the input comes from nn_conv2d modules.

+Usually the input comes from nn_conv2d modules.

As described in the paper -Efficient Object Localization Using Convolutional Networks , +Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, nn_dropout2d will help promote independence between feature maps and should be used instead.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, H, W)\)

  • +
    • Input: \((N, C, H, W)\)

    • Output: \((N, C, H, W)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_dropout2d(p = 0.2)
    -input <- torch_randn(20, 16, 32, 32)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_dropout2d(p = 0.2)
+input <- torch_randn(20, 16, 32, 32)
+output <- m(input)
+
+}
+
+
+
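The difference from element-wise dropout is that the Bernoulli draw happens once per channel, not once per element. A plain-Python sketch of that behaviour (`dropout2d` is a hypothetical helper, not the torch API):

```python
import random

# An entire (H, W) feature map is either zeroed or kept (and scaled by
# 1/(1-p)); individual elements within a channel are never treated separately.
def dropout2d(channels, p=0.5):
    out = []
    for ch in channels:  # each ch is an H x W list of lists
        if random.random() < p:
            out.append([[0.0] * len(row) for row in ch])
        else:
            out.append([[v / (1 - p) for v in row] for row in ch])
    return out

random.seed(1)
maps = [[[1.0, 1.0], [1.0, 1.0]] for _ in range(4)]
for ch in dropout2d(maps, p=0.5):
    values = {v for row in ch for v in row}
    assert values == {0.0} or values == {2.0}  # whole channel zeroed or scaled
```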
- - - + + diff --git a/dev/reference/nn_dropout3d.html b/dev/reference/nn_dropout3d.html index 71af823373f9e109b2aeb26cee9818cced6626fa..73f3f7f0465b1bd60a542fa4f51fe460cb66553b 100644 --- a/dev/reference/nn_dropout3d.html +++ b/dev/reference/nn_dropout3d.html @@ -1,81 +1,20 @@ - - - - - - - -Dropout3D module — nn_dropout3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout3D module — nn_dropout3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,78 +115,71 @@ e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\mbox{input}[i, j]\)).

-
nn_dropout3d(p = 0.5, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
p

(float, optional): probability of an element to be zeroed.

inplace

(bool, optional): If set to TRUE, will do this operation -in-place

- -

Details

+
+
nn_dropout3d(p = 0.5, inplace = FALSE)
+
+
+

Arguments

+
p
+

(float, optional): probability of an element to be zeroed.

+
inplace
+

(bool, optional): If set to TRUE, will do this operation +in-place

+
+
+

Details

Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. -Usually the input comes from nn_conv2d modules.

+Usually the input comes from nn_conv2d modules.

As described in the paper -Efficient Object Localization Using Convolutional Networks , +Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.

In this case, nn_dropout3d will help promote independence between feature maps and should be used instead.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, D, H, W)\)

  • +
    • Input: \((N, C, D, H, W)\)

    • Output: \((N, C, D, H, W)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_dropout3d(p = 0.2)
    -input <- torch_randn(20, 16, 4, 32, 32)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_dropout3d(p = 0.2)
+input <- torch_randn(20, 16, 4, 32, 32)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_elu.html b/dev/reference/nn_elu.html index 8e2f98b3a527aa7d78b2f02dd235901e68c92031..ec533b212c1eac06fc84eb73da474bb805619698 100644 --- a/dev/reference/nn_elu.html +++ b/dev/reference/nn_elu.html @@ -1,79 +1,18 @@ - - - - - - - -ELU module — nn_elu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ELU module — nn_elu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,70 +111,63 @@

Applies the element-wise function:

-
nn_elu(alpha = 1, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
alpha

the \(\alpha\) value for the ELU formulation. Default: 1.0

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_elu(alpha = 1, inplace = FALSE)
+
+
+

Arguments

+
alpha
+

the \(\alpha\) value for the ELU formulation. Default: 1.0

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)) $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_elu()
    -input <- torch_randn(2)
    -output <-  m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_elu()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_embedding.html b/dev/reference/nn_embedding.html index 1b224cc4f5b2c78b095729fdee73034460ced33a..076126cf784a7b25156e614425f7f21ef790118b 100644 --- a/dev/reference/nn_embedding.html +++ b/dev/reference/nn_embedding.html @@ -1,82 +1,21 @@ - - - - - - - -Embedding module — nn_embedding • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Embedding module — nn_embedding • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,60 +117,44 @@ The input to the module is a list of indices, and the output is the correspondin word embeddings.

-
nn_embedding(
-  num_embeddings,
-  embedding_dim,
-  padding_idx = NULL,
-  max_norm = NULL,
-  norm_type = 2,
-  scale_grad_by_freq = FALSE,
-  sparse = FALSE,
-  .weight = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
num_embeddings

(int): size of the dictionary of embeddings

embedding_dim

(int): the size of each embedding vector

padding_idx

(int, optional): If given, pads the output with the embedding vector at padding_idx -(initialized to zeros) whenever it encounters the index.

max_norm

(float, optional): If given, each embedding vector with norm larger than max_norm -is renormalized to have norm max_norm.

norm_type

(float, optional): The p of the p-norm to compute for the max_norm option. Default 2.

scale_grad_by_freq

(boolean, optional): If given, this will scale gradients by the inverse of frequency of -the words in the mini-batch. Default False.

sparse

(bool, optional): If True, gradient w.r.t. weight matrix will be a sparse tensor.

.weight

(Tensor) embeddings weights (in case you want to set it manually)

-

See Notes for more details regarding sparse gradients.

- -

Note

+
+
nn_embedding(
+  num_embeddings,
+  embedding_dim,
+  padding_idx = NULL,
+  max_norm = NULL,
+  norm_type = 2,
+  scale_grad_by_freq = FALSE,
+  sparse = FALSE,
+  .weight = NULL
+)
+
+
+

Arguments

+
num_embeddings
+

(int): size of the dictionary of embeddings

+
embedding_dim
+

(int): the size of each embedding vector

+
padding_idx
+

(int, optional): If given, pads the output with the embedding vector at padding_idx +(initialized to zeros) whenever it encounters the index.

+
max_norm
+

(float, optional): If given, each embedding vector with norm larger than max_norm +is renormalized to have norm max_norm.

+
norm_type
+

(float, optional): The p of the p-norm to compute for the max_norm option. Default 2.

+
scale_grad_by_freq
+

(boolean, optional): If given, this will scale gradients by the inverse of frequency of +the words in the mini-batch. Default False.

+
sparse
+

(bool, optional): If True, gradient w.r.t. weight matrix will be a sparse tensor.

+
.weight
+

(Tensor) embeddings weights (in case you want to set it manually)

+

See Notes for more details regarding sparse gradients.

+
+
+

Note

Keep in mind that only a limited number of optimizers support sparse gradients: currently it's optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU)

@@ -258,71 +164,66 @@ vector can be modified afterwards, e.g., using a customized initialization method, and thus changing the vector used to pad the output. The gradient for this vector from nn_embedding is always zero.

-

Attributes

- +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) +

    • weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from \(\mathcal{N}(0, 1)\)

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((*)\), LongTensor of arbitrary shape containing the indices to extract

  • +
    • Input: \((*)\), LongTensor of arbitrary shape containing the indices to extract

    • Output: \((*, H)\), where * is the input shape and \(H=\mbox{embedding\_dim}\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# an Embedding module containing 10 tensors of size 3
    -embedding <- nn_embedding(10, 3)
    -# a batch of 2 samples of 4 indices each
    -input <- torch_tensor(rbind(c(1,2,4,5),c(4,3,2,9)), dtype = torch_long())
    -embedding(input)
    -# example with padding_idx
    -embedding <- nn_embedding(10, 3, padding_idx=1)
    -input <- torch_tensor(matrix(c(1,3,1,6), nrow = 1), dtype = torch_long())
    -embedding(input)
    -
    -}
    -#> torch_tensor
    -#> (1,.,.) = 
    -#>   0.0000  0.0000  0.0000
    -#>   2.6061 -0.9449 -0.8783
    -#>   0.0000  0.0000  0.0000
    -#>   0.7411  0.0675  0.2930
    -#> [ CPUFloatType{1,4,3} ][ grad_fn = <EmbeddingBackward> ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# an Embedding module containing 10 tensors of size 3
+embedding <- nn_embedding(10, 3)
+# a batch of 2 samples of 4 indices each
+input <- torch_tensor(rbind(c(1,2,4,5),c(4,3,2,9)), dtype = torch_long())
+embedding(input)
+# example with padding_idx
+embedding <- nn_embedding(10, 3, padding_idx=1)
+input <- torch_tensor(matrix(c(1,3,1,6), nrow = 1), dtype = torch_long())
+embedding(input)
+
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   0.0000  0.0000  0.0000
+#>  -2.7940 -0.4395  2.2699
+#>   0.0000  0.0000  0.0000
+#>   1.2420 -1.8120  0.7758
+#> [ CPUFloatType{1,4,3} ][ grad_fn = <EmbeddingBackward> ]
+
+
+
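The zero gradient at `padding_idx` described in the Note can be checked directly; a minimal sketch (assuming a working torch installation, and recalling that indices are 1-based in R):

```r
library(torch)
# the row at padding_idx never receives gradient updates
embedding <- nn_embedding(10, 3, padding_idx = 1)
input <- torch_tensor(matrix(c(1, 3, 1, 6), nrow = 1), dtype = torch_long())
out <- embedding(input)
out$sum()$backward()
embedding$weight$grad[1, ]  # gradient of the pad row: all zeros
```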
- - - + + diff --git a/dev/reference/nn_fractional_max_pool2d.html b/dev/reference/nn_fractional_max_pool2d.html index 0e7d996fb8fbecf7918c72beb72d413dac453fa2..12e532c358952c9085a342cfcff6c93c2c6aae80 100644 --- a/dev/reference/nn_fractional_max_pool2d.html +++ b/dev/reference/nn_fractional_max_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 2D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 2D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool2d • torch - - - - - - - - + + -
-
- -
- -
+

Fractional MaxPooling is described in detail in the paper -Fractional MaxPooling by Ben Graham

+Fractional MaxPooling by Ben Graham

-
nn_fractional_max_pool2d(
-  kernel_size,
-  output_size = NULL,
-  output_ratio = NULL,
-  return_indices = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window to take a max over. -Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)

output_size

the target output size of the image of the form oH x oW. -Can be a tuple (oH, oW) or a single number oH for a square image oH x oH

output_ratio

If one wants to have an output size as a ratio of the input size, this option can be given. -This has to be a number or tuple in the range (0, 1)

return_indices

if TRUE, will return the indices along with the outputs. -Useful to pass to nn_max_unpool2d(). Default: FALSE

- -

Details

+
+
nn_fractional_max_pool2d(
+  kernel_size,
+  output_size = NULL,
+  output_ratio = NULL,
+  return_indices = FALSE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window to take a max over. +Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)

+
output_size
+

the target output size of the image of the form oH x oW. +Can be a tuple (oH, oW) or a single number oH for a square image oH x oH

+
output_ratio
+

If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

+
return_indices
+

if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool2d(). Default: FALSE

+
+
+

Details

The max-pooling operation is applied in \(kH \times kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.

+
-

Examples

-
if (torch_is_installed()) {
-
-}
-#> NULL
-
+
+

Examples

+
if (torch_is_installed()) {
+
+}
+#> NULL
+
+
+
-
- +
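A usage sketch analogous to the 3D variant below (argument values assumed for illustration, not taken from the package):

```r
library(torch)
# pool with a 3x3 window and a fixed target output size of 13x12
m1 <- nn_fractional_max_pool2d(3, output_size = c(13, 12))
# pool with a 3x3 window and output at half the input resolution
m2 <- nn_fractional_max_pool2d(3, output_ratio = c(0.5, 0.5))
input <- torch_randn(20, 16, 50, 32)
output <- m1(input)  # shape (20, 16, 13, 12)
```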
- - + + diff --git a/dev/reference/nn_fractional_max_pool3d.html b/dev/reference/nn_fractional_max_pool3d.html index a7aee2d877d5bdbeb486439382ddf6217e5581f3..4c0b3b7096d004646e430a70cd17a1898bbbe397 100644 --- a/dev/reference/nn_fractional_max_pool3d.html +++ b/dev/reference/nn_fractional_max_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a 3D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 3D fractional max pooling over an input signal composed of several input planes. — nn_fractional_max_pool3d • torch - - - - - - - - + + -
-
- -
- -
+

Fractional MaxPooling is described in detail in the paper -Fractional MaxPooling by Ben Graham

+Fractional MaxPooling by Ben Graham

-
nn_fractional_max_pool3d(
-  kernel_size,
-  output_size = NULL,
-  output_ratio = NULL,
-  return_indices = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window to take a max over. -Can be a single number k (for a square kernel of k x k x k) or a tuple (kt x kh x kw)

output_size

the target output size of the image of the form oT x oH x oW. -Can be a tuple (oT, oH, oW) or a single number oH for a square image oH x oH x oH

output_ratio

If one wants to have an output size as a ratio of the input size, this option can be given. -This has to be a number or tuple in the range (0, 1)

return_indices

if TRUE, will return the indices along with the outputs. -Useful to pass to nn_max_unpool3d(). Default: FALSE

- -

Details

+
+
nn_fractional_max_pool3d(
+  kernel_size,
+  output_size = NULL,
+  output_ratio = NULL,
+  return_indices = FALSE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window to take a max over. +Can be a single number k (for a cubic kernel of k x k x k) or a tuple (kt, kh, kw)

+
output_size
+

the target output size of the image of the form oT x oH x oW. +Can be a tuple (oT, oH, oW) or a single number oH for a cubic image oH x oH x oH

+
output_ratio
+

If one wants to have an output size as a ratio of the input size, this option can be given. +This has to be a number or tuple in the range (0, 1)

+
return_indices
+

if TRUE, will return the indices along with the outputs. +Useful to pass to nn_max_unpool3d(). Default: FALSE

+
+
+

Details

The max-pooling operation is applied in \(kT \times kH \times kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.

+
-

Examples

-
if (torch_is_installed()) {
-# pool of cubic window of size=3, and target output size 13x12x11
-m = nn_fractional_max_pool3d(3, output_size=c(13, 12, 11))
-# pool of cubic window and target output size being half of input size
-m = nn_fractional_max_pool3d(3, output_ratio=c(0.5, 0.5, 0.5))
-input = torch_randn(20, 16, 50, 32, 16)
-output = m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# pool of cubic window of size=3, and target output size 13x12x11
+m = nn_fractional_max_pool3d(3, output_size=c(13, 12, 11))
+# pool of cubic window and target output size being half of input size
+m = nn_fractional_max_pool3d(3, output_ratio=c(0.5, 0.5, 0.5))
+input = torch_randn(20, 16, 50, 32, 16)
+output = m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_gelu.html b/dev/reference/nn_gelu.html index f079da206f4418741a59ea74f0217de263c5554d..33fcccf93fae7a7761fc1a7e9d11f2563257d1c5 100644 --- a/dev/reference/nn_gelu.html +++ b/dev/reference/nn_gelu.html @@ -1,80 +1,19 @@ - - - - - - - -GELU module — nn_gelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -GELU module — nn_gelu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,56 +113,54 @@ $$\mbox{GELU}(x) = x * \Phi(x)$$" /> $$\mbox{GELU}(x) = x * \Phi(x)$$

-
nn_gelu()
- - -

Details

+
+
nn_gelu()
+
+
+

Details

where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m = nn_gelu()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m = nn_gelu()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_glu.html b/dev/reference/nn_glu.html index 021dfda1efce47791086a535af38e423896d7bd6..c43637c70e099cf037bc09e619a784a810ccc369 100644 --- a/dev/reference/nn_glu.html +++ b/dev/reference/nn_glu.html @@ -1,81 +1,20 @@ - - - - - - - -GLU module — nn_glu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -GLU module — nn_glu • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,61 +115,55 @@ of the input matrices and \(b\) is the second half." /> of the input matrices and \(b\) is the second half.

-
nn_glu(dim = -1)
- -

Arguments

- - - - - - -
dim

(int): the dimension on which to split the input. Default: -1

- -

Shape

+
+
nn_glu(dim = -1)
+
+
+

Arguments

+
dim
+

(int): the dimension on which to split the input. Default: -1

+
+
+

Shape

-
    -
  • Input: \((\ast_1, N, \ast_2)\) where * means, any number of additional +

    • Input: \((\ast_1, N, \ast_2)\) where * means, any number of additional dimensions

    • Output: \((\ast_1, M, \ast_2)\) where \(M=N/2\)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_glu()
    -input <- torch_randn(4, 2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_glu()
+input <- torch_randn(4, 2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_group_norm.html b/dev/reference/nn_group_norm.html index 42a6ba44028d523c1a16f3dc9c0093b4f1dc96b7..bc8f93b9645df6e4c4d76218812dca1642399586 100644 --- a/dev/reference/nn_group_norm.html +++ b/dev/reference/nn_group_norm.html @@ -1,80 +1,19 @@ - - - - - - - -Group normalization — nn_group_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Group normalization — nn_group_norm • torch - - - - - - - - + + -
-
- -
- -
+

Applies Group Normalization over a mini-batch of inputs as described in -the paper Group Normalization.

+the paper Group Normalization.

-
nn_group_norm(num_groups, num_channels, eps = 1e-05, affine = TRUE)
+
+
nn_group_norm(num_groups, num_channels, eps = 1e-05, affine = TRUE)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
num_groups

(int): number of groups to separate the channels into

num_channels

(int): number of channels expected in input

eps

a value added to the denominator for numerical stability. Default: 1e-5

affine

a boolean value that when set to TRUE, this module +

+

Arguments

+
num_groups
+

(int): number of groups to separate the channels into

+
num_channels
+

(int): number of channels expected in input

+
eps
+

a value added to the denominator for numerical stability. Default: 1e-5

+
affine
+

a boolean value that when set to TRUE, this module has learnable per-channel affine parameters initialized to ones (for weights) -and zeros (for biases). Default: TRUE.

- -

Details

- +and zeros (for biases). Default: TRUE.

+
+
+

Details

$$ y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta $$

@@ -228,60 +142,58 @@ per-channel affine transform parameter vectors of size num_channels if affine is TRUE. The standard-deviation is calculated via the biased estimator, equivalent to torch_var(input, unbiased=FALSE).

-

Note

- +
+
+

Note

This layer uses statistics computed from input data in both training and evaluation modes.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, *)\) where \(C=\mbox{num\_channels}\)

  • +
    • Input: \((N, C, *)\) where \(C=\mbox{num\_channels}\)

    • Output: \((N, C, *)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -input <- torch_randn(20, 6, 10, 10)
    -# Separate 6 channels into 3 groups
    -m <- nn_group_norm(3, 6)
    -# Separate 6 channels into 6 groups (equivalent with [nn_instance_morm])
    -m <- nn_group_norm(6, 6)
    -# Put all 6 channels into a single group (equivalent with [nn_layer_norm])
    -m <- nn_group_norm(1, 6)
    -# Activating the module
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+input <- torch_randn(20, 6, 10, 10)
+# Separate 6 channels into 3 groups
+m <- nn_group_norm(3, 6)
+# Separate 6 channels into 6 groups (equivalent with [nn_instance_norm])
+m <- nn_group_norm(6, 6)
+# Put all 6 channels into a single group (equivalent with [nn_layer_norm])
+m <- nn_group_norm(1, 6)
+# Activating the module
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_gru.html b/dev/reference/nn_gru.html index c29cd877cef2f22ea5d3b5d35888d5cb80656c5b..02cb901d0ce49a734dc19bd9445926c89936c114 100644 --- a/dev/reference/nn_gru.html +++ b/dev/reference/nn_gru.html @@ -1,80 +1,19 @@ - - - - - - - -Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. — nn_gru • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. — nn_gru • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,63 +113,47 @@ function:" /> function:

-
nn_gru(
-  input_size,
-  hidden_size,
-  num_layers = 1,
-  bias = TRUE,
-  batch_first = FALSE,
-  dropout = 0,
-  bidirectional = FALSE,
-  ...
-)
+
+
nn_gru(
+  input_size,
+  hidden_size,
+  num_layers = 1,
+  bias = TRUE,
+  batch_first = FALSE,
+  dropout = 0,
+  bidirectional = FALSE,
+  ...
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input_size

The number of expected features in the input x

hidden_size

The number of features in the hidden state h

num_layers

Number of recurrent layers. E.g., setting num_layers=2 +

+

Arguments

+
input_size
+

The number of expected features in the input x

+
hidden_size
+

The number of features in the hidden state h

+
num_layers
+

Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and -computing the final results. Default: 1

bias

If FALSE, then the layer does not use bias weights b_ih and b_hh. -Default: TRUE

batch_first

If TRUE, then the input and output tensors are provided -as (batch, seq, feature). Default: FALSE

dropout

If non-zero, introduces a Dropout layer on the outputs of each +computing the final results. Default: 1

+
bias
+

If FALSE, then the layer does not use bias weights b_ih and b_hh. +Default: TRUE

+
batch_first
+

If TRUE, then the input and output tensors are provided +as (batch, seq, feature). Default: FALSE

+
dropout
+

If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to -dropout. Default: 0

bidirectional

If TRUE, becomes a bidirectional GRU. Default: FALSE

...

currently unused.

- -

Details

- +dropout. Default: 0

+
bidirectional
+

If TRUE, becomes a bidirectional GRU. Default: FALSE

+
...
+

currently unused.

+
+
+

Details

$$ \begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ @@ -261,32 +167,31 @@ at time t, \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0, and \(r_t\), \(z_t\), \(n_t\) are the reset, update, and new gates, respectively. \(\sigma\) is the sigmoid function.

-

Note

- +
+
+

Note

All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\mbox{hidden\_size}}\)

-

Inputs

- +
+
+

Inputs

-

Inputs: input, h_0

    -
  • input of shape (seq_len, batch, input_size): tensor containing the features +

    Inputs: input, h_0

    • input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length -sequence. See nn_utils_rnn_pack_padded_sequence() +sequence. See nn_utils_rnn_pack_padded_sequence() for details.

    • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.

    • -
    - -

    Outputs

    - +
+
+

Outputs

-

Outputs: output, h_n

    -
  • output of shape (seq_len, batch, num_directions * hidden_size): tensor +

    Outputs: output, h_n

    • output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a PackedSequence has been given as the input, the output will also be a packed sequence. @@ -298,14 +203,12 @@ Similarly, the directions can be separated in the packed case.

    • containing the hidden state for t = seq_len Like output, the layers can be separated using h_n$view(num_layers, num_directions, batch, hidden_size).

      -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight_ih_l[k] : the learnable input-hidden weights of the \(\mbox{k}^{th}\) layer +

    • weight_ih_l[k] : the learnable input-hidden weights of the \(\mbox{k}^{th}\) layer (W_ir|W_iz|W_in), of shape (3*hidden_size x input_size)

    • weight_hh_l[k] : the learnable hidden-hidden weights of the \(\mbox{k}^{th}\) layer (W_hr|W_hz|W_hn), of shape (3*hidden_size x hidden_size)

    • @@ -313,44 +216,41 @@ Like output, the layers can be separated using (b_ir|b_iz|b_in), of shape (3*hidden_size)

    • bias_hh_l[k] : the learnable hidden-hidden bias of the \(\mbox{k}^{th}\) layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -rnn <- nn_gru(10, 20, 2)
    -input <- torch_randn(5, 3, 10)
    -h0 <- torch_randn(2, 3, 20)
    -output <- rnn(input, h0)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+rnn <- nn_gru(10, 20, 2)
+input <- torch_randn(5, 3, 10)
+h0 <- torch_randn(2, 3, 20)
+output <- rnn(input, h0)
+
+}
+
+
+
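The shape rules listed under Inputs and Outputs can be checked with a short sketch (bidirectional case assumed for illustration):

```r
library(torch)
rnn <- nn_gru(10, 20, num_layers = 2, bidirectional = TRUE)
input <- torch_randn(5, 3, 10)   # (seq_len, batch, input_size)
h0 <- torch_randn(2 * 2, 3, 20)  # (num_layers * num_directions, batch, hidden_size)
res <- rnn(input, h0)
res[[1]]$shape  # output: (5, 3, 40) = (seq_len, batch, num_directions * hidden_size)
res[[2]]$shape  # h_n:    (4, 3, 20)
```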
- - - + + diff --git a/dev/reference/nn_hardshrink.html b/dev/reference/nn_hardshrink.html index 277f964e4f8b0debdd2218559e441367fc3fc10e..570af5c22c6afd47bfd63a49621a67cdd1b3e9bb 100644 --- a/dev/reference/nn_hardshrink.html +++ b/dev/reference/nn_hardshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Hardshwink module — nn_hardshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardshwink module — nn_hardshrink • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Applies the hard shrinkage function element-wise:

-
nn_hardshrink(lambd = 0.5)
- -

Arguments

- - - - - - -
lambd

the \(\lambda\) value for the Hardshrink formulation. Default: 0.5

- -

Details

+
+
nn_hardshrink(lambd = 0.5)
+
+
+

Arguments

+
lambd
+

the \(\lambda\) value for the Hardshrink formulation. Default: 0.5

+
+
+

Details

$$ \mbox{HardShrink}(x) = \left\{ \begin{array}{ll} @@ -211,50 +131,47 @@ x, & \mbox{ if } x < -\lambda \\ \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_hardshrink()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_hardshrink()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_hardsigmoid.html b/dev/reference/nn_hardsigmoid.html index 75b57910bfd95eb24e2b8667ef7414e5e250f8f3..63690cd94d49310ad49016963f6c1749361d73e3 100644 --- a/dev/reference/nn_hardsigmoid.html +++ b/dev/reference/nn_hardsigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Hardsigmoid module — nn_hardsigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardsigmoid module — nn_hardsigmoid • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,11 +111,12 @@

Applies the element-wise function:

-
nn_hardsigmoid()
- - -

Details

+
+
nn_hardsigmoid()
+
+
+

Details

$$ \mbox{Hardsigmoid}(x) = \left\{ \begin{array}{ll} 0 & \mbox{if~} x \le -3, \\ @@ -202,50 +125,47 @@ \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_hardsigmoid()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_hardsigmoid()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_hardswish.html b/dev/reference/nn_hardswish.html index 871dabc9c073863bb2049fb65a49edd7c29b0a7a..9df03e2289f96044a3bec36d473b4aebc386900f 100644 --- a/dev/reference/nn_hardswish.html +++ b/dev/reference/nn_hardswish.html @@ -1,80 +1,19 @@ - - - - - - - -Hardswish module — nn_hardswish • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardswish module — nn_hardswish • torch - - - - - - - - + + -
-
- -
- -
+

Applies the hardswish function, element-wise, as described in the paper: -Searching for MobileNetV3

+Searching for MobileNetV3

-
nn_hardswish()
- - -

Details

+
+
nn_hardswish()
+
+
+

Details

$$ \mbox{Hardswish}(x) = \left\{ \begin{array}{ll} 0 & \mbox{if } x \le -3, \\ @@ -203,52 +126,49 @@ Searching for MobileNetV3" /> x \cdot (x + 3)/6 & \mbox{otherwise} \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -if (FALSE) {
    -m <- nn_hardswish()
    -input <- torch_randn(2)
    -output <- m(input)
    -}
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+m <- nn_hardswish()
+input <- torch_randn(2)
+output <- m(input)
+}
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_hardtanh.html b/dev/reference/nn_hardtanh.html index 4c2a66de446fa2c89674827882bd1b5758775d0c..bddb01a9059d4595041f94573c08154c5370fe08 100644 --- a/dev/reference/nn_hardtanh.html +++ b/dev/reference/nn_hardtanh.html @@ -1,80 +1,19 @@ - - - - - - - -Hardtanh module — nn_hardtanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardtanh module — nn_hardtanh • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,27 +113,21 @@ HardTanh is defined as:" /> HardTanh is defined as:

-
nn_hardtanh(min_val = -1, max_val = 1, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
min_val

minimum value of the linear region range. Default: -1

max_val

maximum value of the linear region range. Default: 1

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_hardtanh(min_val = -1, max_val = 1, inplace = FALSE)
+
+
+

Arguments

+
min_val
+

minimum value of the linear region range. Default: -1

+
max_val
+

maximum value of the linear region range. Default: 1

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{HardTanh}(x) = \left\{ \begin{array}{ll} 1 & \mbox{ if } x > 1 \\ @@ -222,50 +138,47 @@ HardTanh is defined as:

$$

The range of the linear region \([-1, 1]\) can be adjusted using min_val and max_val.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_hardtanh(-2, 2)
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_hardtanh(-2, 2)
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_hinge_embedding_loss.html b/dev/reference/nn_hinge_embedding_loss.html index fbc2ac497b50678c87c2abfc04af3bc445f7b7b4..ab9619a0863426c5a78d49564dd13a7e6ddec0bc 100644 --- a/dev/reference/nn_hinge_embedding_loss.html +++ b/dev/reference/nn_hinge_embedding_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Hinge embedding loss — nn_hinge_embedding_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hinge embedding loss — nn_hinge_embedding_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,28 +113,24 @@ (containing 1 or -1).

-
nn_hinge_embedding_loss(margin = 1, reduction = "mean")
+
+
nn_hinge_embedding_loss(margin = 1, reduction = "mean")
+
-

Arguments

- - - - - - - - - - -
margin

(float, optional): Has a default value of 1.

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
margin
+

(float, optional): Has a default value of 1.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as \(x\), and is typically used for learning nonlinear embeddings or semi-supervised learning. @@ -231,43 +149,38 @@ $$

\end{array} $$

where \(L = \{l_1,\dots,l_N\}^\top\).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\) where \(*\) means, any number of dimensions. The sum operation +

    • Input: \((*)\) where \(*\) means, any number of dimensions. The sum operation operates over all the elements.

    • Target: \((*)\), same shape as the input

    • Output: scalar. If reduction is 'none', then same shape as the input

    • -
    - +
+
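A minimal usage sketch (input values illustrative only, following the L1-distance use case described above):

```r
library(torch)
loss_fn <- nn_hinge_embedding_loss(margin = 1, reduction = "mean")
x <- torch_randn(4)$abs()           # e.g. pairwise L1 distances
y <- torch_tensor(c(1, -1, 1, -1))  # 1 = similar, -1 = dissimilar
loss <- loss_fn(x, y)
# reduction = "none" returns the per-element terms instead of their mean
per_elem <- nn_hinge_embedding_loss(reduction = "none")(x, y)
```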
- - - + + diff --git a/dev/reference/nn_identity.html b/dev/reference/nn_identity.html index 8b6e8bb40f2a019cf4e640ced3aef0bd58cd6e80..3f23fe8ce92c4ff976a888ce5389556dff8e782a 100644 --- a/dev/reference/nn_identity.html +++ b/dev/reference/nn_identity.html @@ -1,79 +1,18 @@ - - - - - - - -Identity module — nn_identity • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Identity module — nn_identity • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,53 +111,49 @@

A placeholder identity operator that is argument-insensitive.

-
nn_identity(...)
- -

Arguments

- - - - - - -
...

any arguments (unused)

- - -

Examples

-
if (torch_is_installed()) {
-m <- nn_identity(54, unused_argument1 = 0.1, unused_argument2 = FALSE)
-input <- torch_randn(128, 20)
-output <- m(input)
-print(output$size())
-
-}
-#> [1] 128  20
-
+
+
nn_identity(...)
+
+ +
+

Arguments

+
...
+

any arguments (unused)

+
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_identity(54, unused_argument1 = 0.1, unused_argument2 = FALSE)
+input <- torch_randn(128, 20)
+output <- m(input)
+print(output$size())
+
+}
+#> [1] 128  20
+
+
+
- - - + + diff --git a/dev/reference/nn_init_calculate_gain.html b/dev/reference/nn_init_calculate_gain.html index 712139daad36ca7ce350a97f35bb364b4099f013..9a220fe82e804248cfb9b03cdd36b966bb7b4957 100644 --- a/dev/reference/nn_init_calculate_gain.html +++ b/dev/reference/nn_init_calculate_gain.html @@ -1,79 +1,18 @@ - - - - - - - -Calculate gain — nn_init_calculate_gain • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Calculate gain — nn_init_calculate_gain • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,47 +111,39 @@

Return the recommended gain value for the given nonlinearity function.

-
nn_init_calculate_gain(nonlinearity, param = NULL)
- -

Arguments

- - - - - - - - - - -
nonlinearity

the non-linear function

param

optional parameter for the non-linear function

+
+
nn_init_calculate_gain(nonlinearity, param = NULL)
+
+
+

Arguments

+
nonlinearity
+

the non-linear function

+
param
+

optional parameter for the non-linear function

+
+
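For instance, the recommended gains for common nonlinearities can be queried directly (values follow the usual Kaiming conventions):

```r
library(torch)
nn_init_calculate_gain("linear")                   # 1
nn_init_calculate_gain("relu")                     # sqrt(2)
nn_init_calculate_gain("leaky_relu", param = 0.2)  # sqrt(2 / (1 + 0.2^2))
```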
- - - + + diff --git a/dev/reference/nn_init_constant_.html b/dev/reference/nn_init_constant_.html index f7f9f911aecb56abfb282221e574b2df5d7b835b..14e2cf9d908bf3ce8ac2b0f7d58165e30970a859 100644 --- a/dev/reference/nn_init_constant_.html +++ b/dev/reference/nn_init_constant_.html @@ -1,79 +1,18 @@ - - - - - - - -Constant initialization — nn_init_constant_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Constant initialization — nn_init_constant_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,59 +111,53 @@

Fills the input Tensor with the value val.

-
nn_init_constant_(tensor, val)
- -

Arguments

- - - - - - - - - - -
tensor

an n-dimensional Tensor

val

the value to fill the tensor with

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_constant_(w, 0.3)
-
-}
-#> torch_tensor
-#>  0.3000  0.3000  0.3000  0.3000  0.3000
-#>  0.3000  0.3000  0.3000  0.3000  0.3000
-#>  0.3000  0.3000  0.3000  0.3000  0.3000
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_constant_(tensor, val)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
val
+

the value to fill the tensor with

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_constant_(w, 0.3)
+
+}
+#> torch_tensor
+#>  0.3000  0.3000  0.3000  0.3000  0.3000
+#>  0.3000  0.3000  0.3000  0.3000  0.3000
+#>  0.3000  0.3000  0.3000  0.3000  0.3000
+#> [ CPUFloatType{3,5} ]
+
+
+
- - - + + diff --git a/dev/reference/nn_init_dirac_.html b/dev/reference/nn_init_dirac_.html index 19cfcdbb6ceadf52c79f8a4bfd8efdac5c863a39..7102c852272a875f39109776bffc36ed85ea3b4a 100644 --- a/dev/reference/nn_init_dirac_.html +++ b/dev/reference/nn_init_dirac_.html @@ -1,82 +1,21 @@ - - - - - - - -Dirac initialization — nn_init_dirac_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dirac initialization — nn_init_dirac_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,56 +117,50 @@ layers, where as many input channels are preserved as possible. In case of groups>1, each group of channels preserves identity.

-
nn_init_dirac_(tensor, groups = 1)
- -

Arguments

- - - - - - - - - - -
tensor

a 3, 4, 5-dimensional torch.Tensor

groups

(optional) number of groups in the conv layer (default: 1)

- - -

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-w <- torch_empty(3, 16, 5, 5)
-nn_init_dirac_(w)
-}
-
-}
-
+
+
nn_init_dirac_(tensor, groups = 1)
+
+ +
+

Arguments

+
tensor
+

a 3, 4, 5-dimensional torch.Tensor

+
groups
+

(optional) number of groups in the conv layer (default: 1)

+
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+w <- torch_empty(3, 16, 5, 5)
+nn_init_dirac_(w)
+}
+
+}
+
+
+
- - - + + diff --git a/dev/reference/nn_init_eye_.html b/dev/reference/nn_init_eye_.html index 586c51bc4c2e34ad850f0871b0c8ea2c7bf3c9ee..a131a03b5d587254289f6e84f399f95856a84501 100644 --- a/dev/reference/nn_init_eye_.html +++ b/dev/reference/nn_init_eye_.html @@ -1,81 +1,20 @@ - - - - - - - -Eye initialization — nn_init_eye_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Eye initialization — nn_init_eye_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,55 +115,51 @@ Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible.

-
nn_init_eye_(tensor)
- -

Arguments

- - - - - - -
tensor

a 2-dimensional torch tensor.

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_eye_(w)
-
-}
-#> torch_tensor
-#>  1  0  0  0  0
-#>  0  1  0  0  0
-#>  0  0  1  0  0
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_eye_(tensor)
+
+ +
+

Arguments

+
tensor
+

a 2-dimensional torch tensor.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_eye_(w)
+
+}
+#> torch_tensor
+#>  1  0  0  0  0
+#>  0  1  0  0  0
+#>  0  0  1  0  0
+#> [ CPUFloatType{3,5} ]
+
+
+
- - - + + diff --git a/dev/reference/nn_init_kaiming_normal_.html b/dev/reference/nn_init_kaiming_normal_.html index c6e86714f02981d11f026d62c09b98ebcd4ef247..e73c6a3d8dc5506253a5b3d29d76f9987b58e4fa 100644 --- a/dev/reference/nn_init_kaiming_normal_.html +++ b/dev/reference/nn_init_kaiming_normal_.html @@ -1,81 +1,20 @@ - - - - - - - -Kaiming normal initialization — nn_init_kaiming_normal_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kaiming normal initialization — nn_init_kaiming_normal_ • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,76 +115,66 @@ described in Delving deep into rectifiers: Surpassing human-level performa normal distribution.

-
nn_init_kaiming_normal_(
-  tensor,
-  a = 0,
-  mode = "fan_in",
-  nonlinearity = "leaky_relu"
-)
+
+
nn_init_kaiming_normal_(
+  tensor,
+  a = 0,
+  mode = "fan_in",
+  nonlinearity = "leaky_relu"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
tensor

an n-dimensional torch.Tensor

a

the negative slope of the rectifier used after this layer (only used -with 'leaky_relu')

mode

either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +

+

Arguments

+
tensor
+

an n-dimensional torch.Tensor

+
a
+

the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

+
mode
+

either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing -'fan_out' preserves the magnitudes in the backwards pass.

nonlinearity

the non-linear function. recommended to use only with 'relu' -or 'leaky_relu' (default).

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_kaiming_normal_(w, mode = "fan_in", nonlinearity = "leaky_relu")
-
-}
-#> torch_tensor
-#> -0.6792  1.1186 -0.7690  0.7750 -0.1943
-#> -0.1540  0.0082  0.3742  0.0420  0.8249
-#>  0.1577  0.2320 -0.5648 -0.6196 -0.8011
-#> [ CPUFloatType{3,5} ]
-
+'fan_out' preserves the magnitudes in the backwards pass.

+
nonlinearity
+

the non-linear function. recommended to use only with 'relu' +or 'leaky_relu' (default).

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_kaiming_normal_(w, mode = "fan_in", nonlinearity = "leaky_relu")
+
+}
+#> torch_tensor
+#>  0.5597  1.6307 -1.1737 -0.0284  0.6240
+#> -0.4227 -0.2330 -0.9105  0.7609  0.1408
+#>  0.1924 -0.7354  0.1678 -0.8725  0.7955
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_kaiming_uniform_.html b/dev/reference/nn_init_kaiming_uniform_.html index b57eaceb870243cbf736c02126296f0ebd80c28d..ab08ba728c7f5b00afbdd62ac8992e91243ef0fb 100644 --- a/dev/reference/nn_init_kaiming_uniform_.html +++ b/dev/reference/nn_init_kaiming_uniform_.html @@ -1,81 +1,20 @@ - - - - - - - -Kaiming uniform initialization — nn_init_kaiming_uniform_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kaiming uniform initialization — nn_init_kaiming_uniform_ • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,76 +115,66 @@ described in Delving deep into rectifiers: Surpassing human-level performa uniform distribution.

-
nn_init_kaiming_uniform_(
-  tensor,
-  a = 0,
-  mode = "fan_in",
-  nonlinearity = "leaky_relu"
-)
+
+
nn_init_kaiming_uniform_(
+  tensor,
+  a = 0,
+  mode = "fan_in",
+  nonlinearity = "leaky_relu"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
tensor

an n-dimensional torch.Tensor

a

the negative slope of the rectifier used after this layer (only used -with 'leaky_relu')

mode

either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves +

+

Arguments

+
tensor
+

an n-dimensional torch.Tensor

+
a
+

the negative slope of the rectifier used after this layer (only used +with 'leaky_relu')

+
mode
+

either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing -'fan_out' preserves the magnitudes in the backwards pass.

nonlinearity

the non-linear function. recommended to use only with 'relu' -or 'leaky_relu' (default).

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_kaiming_uniform_(w, mode = "fan_in", nonlinearity = "leaky_relu")
-
-}
-#> torch_tensor
-#> -0.3597 -0.2455  1.0234  1.0863  0.4554
-#>  0.6568  0.5657  0.0153  0.1182 -0.3720
-#>  0.1252 -0.4104  0.5346  0.6952  0.0063
-#> [ CPUFloatType{3,5} ]
-
+'fan_out' preserves the magnitudes in the backwards pass.

+
nonlinearity
+

the non-linear function. recommended to use only with 'relu' +or 'leaky_relu' (default).

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_kaiming_uniform_(w, mode = "fan_in", nonlinearity = "leaky_relu")
+
+}
+#> torch_tensor
+#>  1.0333  0.2205 -0.9490 -0.9102 -0.4403
+#> -0.4488  0.7044 -0.8638 -0.9148 -0.4350
+#> -0.1012  0.9291  0.5385  0.5255  1.0466
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_normal_.html b/dev/reference/nn_init_normal_.html index e161f361593a1a4387d2439d7fa90d84b65a232c..0711537b1414b78d84e00fae6a9f913ae80d2621 100644 --- a/dev/reference/nn_init_normal_.html +++ b/dev/reference/nn_init_normal_.html @@ -1,79 +1,18 @@ - - - - - - - -Normal initialization — nn_init_normal_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Normal initialization — nn_init_normal_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,63 +111,55 @@

Fills the input Tensor with values drawn from the normal distribution

-
nn_init_normal_(tensor, mean = 0, std = 1)
- -

Arguments

- - - - - - - - - - - - - - -
tensor

an n-dimensional Tensor

mean

the mean of the normal distribution

std

the standard deviation of the normal distribution

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_normal_(w)
-
-}
-#> torch_tensor
-#>  1.5091 -0.8198 -0.7872 -0.1892  0.1993
-#>  0.9473  0.5092 -0.9673 -1.7986 -0.9718
-#> -0.5666  0.1663  0.7699 -0.2990 -0.2959
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_normal_(tensor, mean = 0, std = 1)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
mean
+

the mean of the normal distribution

+
std
+

the standard deviation of the normal distribution

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_normal_(w)
+
+}
+#> torch_tensor
+#> -1.1571  0.7198 -0.0445 -1.1248  1.1080
+#> -0.3596  0.5308 -0.6259  0.1228  0.2068
+#> -0.4467  0.6220 -0.1635 -0.7117  1.1448
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_ones_.html b/dev/reference/nn_init_ones_.html index 6be0eb93a7d6f2bccdd596e76ec5a40726e1ee25..da8da2220815ef645d175c64957c5590e890b6e7 100644 --- a/dev/reference/nn_init_ones_.html +++ b/dev/reference/nn_init_ones_.html @@ -1,79 +1,18 @@ - - - - - - - -Ones initialization — nn_init_ones_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ones initialization — nn_init_ones_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,55 +111,51 @@

Fills the input Tensor with the scalar value 1

-
nn_init_ones_(tensor)
- -

Arguments

- - - - - - -
tensor

an n-dimensional Tensor

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_ones_(w)
-
-}
-#> torch_tensor
-#>  1  1  1  1  1
-#>  1  1  1  1  1
-#>  1  1  1  1  1
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_ones_(tensor)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_ones_(w)
+
+}
+#> torch_tensor
+#>  1  1  1  1  1
+#>  1  1  1  1  1
+#>  1  1  1  1  1
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_orthogonal_.html b/dev/reference/nn_init_orthogonal_.html index c0ae2d60690e6435800c02c4f679c8914657cc44..89cb20293a27a883a14b44a942387ccbb3f61cf7 100644 --- a/dev/reference/nn_init_orthogonal_.html +++ b/dev/reference/nn_init_orthogonal_.html @@ -1,82 +1,21 @@ - - - - - - - -Orthogonal initialization — nn_init_orthogonal_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Orthogonal initialization — nn_init_orthogonal_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,59 +117,53 @@ at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.

-
nn_init_orthogonal_(tensor, gain = 1)
- -

Arguments

- - - - - - - - - - -
tensor

an n-dimensional Tensor

gain

optional scaling factor

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3,5)
-nn_init_orthogonal_(w)
-
-}
-#> torch_tensor
-#> -0.0802  0.3374  0.2647  0.8924  0.1153
-#>  0.0655  0.5401  0.7129 -0.4255  0.1213
-#>  0.7062  0.5276 -0.4176  0.0162 -0.2195
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_orthogonal_(tensor, gain = 1)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
gain
+

optional scaling factor

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3,5)
+nn_init_orthogonal_(w)
+
+}
+#> torch_tensor
+#> -0.3827 -0.4775 -0.0170  0.7565  0.2300
+#> -0.7800  0.3363 -0.4935 -0.1851 -0.0275
+#>  0.0643 -0.6951 -0.4478 -0.2671 -0.4907
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_sparse_.html b/dev/reference/nn_init_sparse_.html index 9b03048c9a4906385add28597403ca11bede0500..37065d7aad0928d2f994d44ad4127aede3ae5ac2 100644 --- a/dev/reference/nn_init_sparse_.html +++ b/dev/reference/nn_init_sparse_.html @@ -1,81 +1,20 @@ - - - - - - - -Sparse initialization — nn_init_sparse_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sparse initialization — nn_init_sparse_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,60 +115,52 @@ non-zero elements will be drawn from the normal distribution as described in Deep learning via Hessian-free optimization - Martens, J. (2010).

-
nn_init_sparse_(tensor, sparsity, std = 0.01)
- -

Arguments

- - - - - - - - - - - - - - -
tensor

an n-dimensional Tensor

sparsity

The fraction of elements in each column to be set to zero

std

the standard deviation of the normal distribution used to generate -the non-zero values

- - -

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-w <- torch_empty(3, 5)
-nn_init_sparse_(w, sparsity = 0.1)
-}
-}
-
+
+
nn_init_sparse_(tensor, sparsity, std = 0.01)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
sparsity
+

The fraction of elements in each column to be set to zero

+
std
+

the standard deviation of the normal distribution used to generate +the non-zero values

+
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+w <- torch_empty(3, 5)
+nn_init_sparse_(w, sparsity = 0.1)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_trunc_normal_.html b/dev/reference/nn_init_trunc_normal_.html index 10fc6960e7189948f2620fd5e53ddfdb53c9c325..3eec5a0f2b66f5ed036b5f51262d44beb4982deb 100644 --- a/dev/reference/nn_init_trunc_normal_.html +++ b/dev/reference/nn_init_trunc_normal_.html @@ -1,80 +1,19 @@ - - - - - - - -Truncated normal initialization — nn_init_trunc_normal_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Truncated normal initialization — nn_init_trunc_normal_ • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,71 +113,59 @@ normal distribution." /> normal distribution.

-
nn_init_trunc_normal_(tensor, mean = 0, std = 1, a = -2, b = 2)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
tensor

an n-dimensional Tensor

mean

the mean of the normal distribution

std

the standard deviation of the normal distribution

a

the minimum cutoff value

b

the maximum cutoff value

- +
+
nn_init_trunc_normal_(tensor, mean = 0, std = 1, a = -2, b = 2)
+
-

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_trunc_normal_(w)
-
-}
-#> torch_tensor
-#>  0.3472 -0.1520  1.8191 -0.3626 -0.0984
-#> -0.4118 -0.1650  0.6247 -0.4865 -0.4083
-#> -1.6809  1.4343 -1.8766  0.1507  0.3511
-#> [ CPUFloatType{3,5} ]
-
+
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
mean
+

the mean of the normal distribution

+
std
+

the standard deviation of the normal distribution

+
a
+

the minimum cutoff value

+
b
+

the maximum cutoff value

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_trunc_normal_(w)
+
+}
+#> torch_tensor
+#>  0.5455 -0.1162  0.5018  0.6756 -0.4415
+#> -0.4394  1.0757  1.3277  0.1873  0.4977
+#> -1.1916 -0.2625  0.1538  1.1972  1.8154
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_uniform_.html b/dev/reference/nn_init_uniform_.html index 165d565d82cbf79fc447856732b8efc884f454b2..3dd89073bf4971bc013881dc5f00975ee9569982 100644 --- a/dev/reference/nn_init_uniform_.html +++ b/dev/reference/nn_init_uniform_.html @@ -1,79 +1,18 @@ - - - - - - - -Uniform initialization — nn_init_uniform_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Uniform initialization — nn_init_uniform_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,63 +111,55 @@

Fills the input Tensor with values drawn from the uniform distribution

-
nn_init_uniform_(tensor, a = 0, b = 1)
- -

Arguments

- - - - - - - - - - - - - - -
tensor

an n-dimensional Tensor

a

the lower bound of the uniform distribution

b

the upper bound of the uniform distribution

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_uniform_(w)
-
-}
-#> torch_tensor
-#>  0.3068  0.9416  0.9948  0.5596  0.7974
-#>  0.4951  0.9223  0.1013  0.2746  0.8222
-#>  0.4814  0.3279  0.4280  0.3882  0.9761
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_uniform_(tensor, a = 0, b = 1)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
a
+

the lower bound of the uniform distribution

+
b
+

the upper bound of the uniform distribution

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_uniform_(w)
+
+}
+#> torch_tensor
+#>  0.6769  0.6400  0.3228  0.9061  0.4819
+#>  0.0917  0.8477  0.8595  0.6387  0.1965
+#>  0.0443  0.7941  0.2130  0.8781  0.4385
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_xavier_normal_.html b/dev/reference/nn_init_xavier_normal_.html index 82ed4fc8c7156a3d0cab43ffa49ec6cbd045f0c7..b7ecf6fbd37a1bdd2906a199a1ca602541108aab 100644 --- a/dev/reference/nn_init_xavier_normal_.html +++ b/dev/reference/nn_init_xavier_normal_.html @@ -1,81 +1,20 @@ - - - - - - - -Xavier normal initialization — nn_init_xavier_normal_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Xavier normal initialization — nn_init_xavier_normal_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,59 +115,53 @@ described in Understanding the difficulty of training deep feedforward neu distribution.

-
nn_init_xavier_normal_(tensor, gain = 1)
- -

Arguments

- - - - - - - - - - -
tensor

an n-dimensional Tensor

gain

an optional scaling factor

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_xavier_normal_(w)
-
-}
-#> torch_tensor
-#> -0.1307  0.2649 -0.7583  0.1676  0.5796
-#>  1.4691 -0.6556  1.0769 -0.1324 -0.3864
-#> -0.7963  0.6202  0.2521  0.2650 -0.7918
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_xavier_normal_(tensor, gain = 1)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
gain
+

an optional scaling factor

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_xavier_normal_(w)
+
+}
+#> torch_tensor
+#> -0.3113 -0.3936 -0.6188 -0.6611 -0.0949
+#>  0.3428 -0.6016 -0.0857  0.1810 -0.4266
+#> -0.5410 -0.7038 -0.6446  0.0237 -0.0038
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_xavier_uniform_.html b/dev/reference/nn_init_xavier_uniform_.html index 785f99b0214f765b416a280bcab0230aec41def6..1eddc2436d3e27a0abe50d0f8a60614fd1f5c8b0 100644 --- a/dev/reference/nn_init_xavier_uniform_.html +++ b/dev/reference/nn_init_xavier_uniform_.html @@ -1,81 +1,20 @@ - - - - - - - -Xavier uniform initialization — nn_init_xavier_uniform_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Xavier uniform initialization — nn_init_xavier_uniform_ • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,59 +115,53 @@ described in Understanding the difficulty of training deep feedforward neu distribution.

-
nn_init_xavier_uniform_(tensor, gain = 1)
- -

Arguments

- - - - - - - - - - -
tensor

an n-dimensional Tensor

gain

an optional scaling factor

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_xavier_uniform_(w)
-
-}
-#> torch_tensor
-#>  0.5033  0.3168  0.3422 -0.2498  0.3055
-#>  0.5362 -0.7325  0.5433  0.3791 -0.0240
-#> -0.2203 -0.6872  0.7151  0.3301  0.2871
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_xavier_uniform_(tensor, gain = 1)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional Tensor

+
gain
+

an optional scaling factor

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_xavier_uniform_(w)
+
+}
+#> torch_tensor
+#>  0.7878 -0.7953 -0.4437 -0.4975 -0.8106
+#> -0.7286 -0.8021  0.4725 -0.4185 -0.0301
+#> -0.5941  0.1946  0.5911  0.4893 -0.1208
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_init_zeros_.html b/dev/reference/nn_init_zeros_.html index 32b0d29ecb977aa7e12c11f818ed16c194a58625..9cae20dbadbdd05d7f34869ffb6710bb71db0311 100644 --- a/dev/reference/nn_init_zeros_.html +++ b/dev/reference/nn_init_zeros_.html @@ -1,79 +1,18 @@ - - - - - - - -Zeros initialization — nn_init_zeros_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Zeros initialization — nn_init_zeros_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,55 +111,51 @@

Fills the input Tensor with the scalar value 0

-
nn_init_zeros_(tensor)
- -

Arguments

- - - - - - -
tensor

an n-dimensional tensor

- - -

Examples

-
if (torch_is_installed()) {
-w <- torch_empty(3, 5)
-nn_init_zeros_(w)
-
-}
-#> torch_tensor
-#>  0  0  0  0  0
-#>  0  0  0  0  0
-#>  0  0  0  0  0
-#> [ CPUFloatType{3,5} ]
-
+
+
nn_init_zeros_(tensor)
+
+ +
+

Arguments

+
tensor
+

an n-dimensional tensor

+
+ +
+

Examples

+
if (torch_is_installed()) {
+w <- torch_empty(3, 5)
+nn_init_zeros_(w)
+
+}
+#> torch_tensor
+#>  0  0  0  0  0
+#>  0  0  0  0  0
+#>  0  0  0  0  0
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_kl_div_loss.html b/dev/reference/nn_kl_div_loss.html index a3491f40b792a6deb92e17f6a27bbea39fb7c2f4..3addcd42e71b4d5263e10474e9665a3fbd4cf8a8 100644 --- a/dev/reference/nn_kl_div_loss.html +++ b/dev/reference/nn_kl_div_loss.html @@ -1,83 +1,22 @@ - - - - - - - -Kullback-Leibler divergence loss — nn_kl_div_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kullback-Leibler divergence loss — nn_kl_div_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+

The Kullback-Leibler divergence loss measure -Kullback-Leibler divergence +Kullback-Leibler divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.

-
nn_kl_div_loss(reduction = "mean")
+
+
nn_kl_div_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by batchsize. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. -Default: 'mean'

- -

Details

- -

As with nn_nll_loss(), the input given is expected to contain +Default: 'mean'

+
+
+

Details

+

As with nn_nll_loss(), the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor.

The targets are interpreted as probabilities by default, but could be considered as log-probabilities with log_target set to TRUE.

@@ -241,51 +161,47 @@ over observations as well as over dimensions. 'batchmean' correct KL divergence where losses are averaged over batch dimension only. 'mean' mode's behavior will be changed to the same as 'batchmean' in the next major release.

-

Note

- +
+
+

Note

reduction = 'mean' doesn't return the true kl divergence value, please use reduction = 'batchmean' which aligns with KL math definition. In the next major release, 'mean' will be changed to be the same as 'batchmean'.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar by default. If reduction is 'none', then \((N, *)\), the same shape as the input

    • -
    - +
+
-
- +
- - + + diff --git a/dev/reference/nn_l1_loss.html b/dev/reference/nn_l1_loss.html index 3cdbcd940815863719c8140eb6fa42d821d7025b..2dbecc87ab9c751f15b4c909b5bb7c29e9e26a70 100644 --- a/dev/reference/nn_l1_loss.html +++ b/dev/reference/nn_l1_loss.html @@ -1,80 +1,19 @@ - - - - - - - -L1 loss — nn_l1_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -L1 loss — nn_l1_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,24 +113,22 @@ element in the input \(x\) and target \(y\)." /> element in the input \(x\) and target \(y\).

-
nn_l1_loss(reduction = "mean")
+
+
nn_l1_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

The unreduced (i.e. with reduction set to 'none') loss can be described as:

$$ @@ -228,54 +148,51 @@ $$

of \(n\) elements each.

The sum operation still operates over all the elements, and divides by \(n\). The division by \(n\) can be avoided if one sets reduction = 'sum'.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar. If reduction is 'none', then \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_l1_loss()
    -input <- torch_randn(3, 5, requires_grad=TRUE)
    -target <- torch_randn(3, 5)
    -output <- loss(input, target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_l1_loss()
+input <- torch_randn(3, 5, requires_grad=TRUE)
+target <- torch_randn(3, 5)
+output <- loss(input, target)
+output$backward()
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_layer_norm.html b/dev/reference/nn_layer_norm.html index 9744dec4f8e030197924030074988c7c3a569e6b..3cba6ef175279f58fb66ca84b6b18232e7979c04 100644 --- a/dev/reference/nn_layer_norm.html +++ b/dev/reference/nn_layer_norm.html @@ -1,80 +1,19 @@ - - - - - - - -Layer normalization — nn_layer_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Layer normalization — nn_layer_norm • torch - - - - - - - - + + -
-
- -
- -
+

Applies Layer Normalization over a mini-batch of inputs as described in -the paper Layer Normalization

+the paper Layer Normalization

-
nn_layer_norm(normalized_shape, eps = 1e-05, elementwise_affine = TRUE)
+
+
nn_layer_norm(normalized_shape, eps = 1e-05, elementwise_affine = TRUE)
+
-

Arguments

- - - - - - - - - - - - - - -
normalized_shape

(int or list): input shape from an expected input +

+

Arguments

+
normalized_shape
+

(int or list): input shape from an expected input of size \([* \times \mbox{normalized\_shape}[0] \times \mbox{normalized\_shape}[1] \times \ldots \times \mbox{normalized\_shape}[-1]]\) If a single integer is used, it is treated as a singleton list, and this module will -normalize over the last dimension which is expected to be of that specific size.

eps

a value added to the denominator for numerical stability. Default: 1e-5

elementwise_affine

a boolean value that when set to TRUE, this module +normalize over the last dimension which is expected to be of that specific size.

+
eps
+

a value added to the denominator for numerical stability. Default: 1e-5

+
elementwise_affine
+

a boolean value that when set to TRUE, this module has learnable per-element affine parameters initialized to ones (for weights) -and zeros (for biases). Default: TRUE.

- -

Details

- +and zeros (for biases). Default: TRUE.

+
+
+

Details

$$ y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta $$

@@ -228,66 +144,64 @@ certain number dimensions which have to be of the shape specified by normalized_shape if elementwise_affine is TRUE.

The standard-deviation is calculated via the biased estimator, equivalent to torch_var(input, unbiased=FALSE).

-

Note

- +
+
+

Note

Unlike Batch Normalization and Instance Normalization, which applies scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.

This layer uses statistics computed from input data in both training and evaluation modes.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\)

  • +
    • Input: \((N, *)\)

    • Output: \((N, *)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -  
    -input <- torch_randn(20, 5, 10, 10)
    -# With Learnable Parameters
    -m <- nn_layer_norm(input$size()[-1])
    -# Without Learnable Parameters
    -m <- nn_layer_norm(input$size()[-1], elementwise_affine=FALSE)
    -# Normalize over last two dimensions
    -m <- nn_layer_norm(c(10, 10))
    -# Normalize over last dimension of size 10
    -m <- nn_layer_norm(10)
    -# Activating the module
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+  
+input <- torch_randn(20, 5, 10, 10)
+# With Learnable Parameters
+m <- nn_layer_norm(input$size()[-1])
+# Without Learnable Parameters
+m <- nn_layer_norm(input$size()[-1], elementwise_affine=FALSE)
+# Normalize over last two dimensions
+m <- nn_layer_norm(c(10, 10))
+# Normalize over last dimension of size 10
+m <- nn_layer_norm(10)
+# Activating the module
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_leaky_relu.html b/dev/reference/nn_leaky_relu.html index 81bd55cf60e7e679c0d0eddab7a59917a2c67eb1..556940f17428d7508c9c160ae58bb59c3e9fd04e 100644 --- a/dev/reference/nn_leaky_relu.html +++ b/dev/reference/nn_leaky_relu.html @@ -1,79 +1,18 @@ - - - - - - - -LeakyReLU module — nn_leaky_relu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -LeakyReLU module — nn_leaky_relu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Applies the element-wise function:

-
nn_leaky_relu(negative_slope = 0.01, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
negative_slope

Controls the angle of the negative slope. Default: 1e-2

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_leaky_relu(negative_slope = 0.01, inplace = FALSE)
+
+
+

Arguments

+
negative_slope
+

Controls the angle of the negative slope. Default: 1e-2

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{LeakyReLU}(x) = \max(0, x) + \mbox{negative\_slope} * \min(0, x) $$ @@ -218,50 +136,47 @@ x, & \mbox{ if } x \geq 0 \\ \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_leaky_relu(0.1)
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_leaky_relu(0.1)
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_linear.html b/dev/reference/nn_linear.html index 4000f5e32c65de7a6795387b4051347c49c2b73f..440fa49443e5bee701f299bcc9d2d4ef6591c3f9 100644 --- a/dev/reference/nn_linear.html +++ b/dev/reference/nn_linear.html @@ -1,79 +1,18 @@ - - - - - - - -Linear module — nn_linear • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Linear module — nn_linear • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,43 +111,34 @@

Applies a linear transformation to the incoming data: y = xA^T + b

-
nn_linear(in_features, out_features, bias = TRUE)
- -

Arguments

- - - - - - - - - - - - - - -
in_features

size of each input sample

out_features

size of each output sample

bias

If set to FALSE, the layer will not learn an additive bias. -Default: TRUE

- -

Shape

+
+
nn_linear(in_features, out_features, bias = TRUE)
+
+
+

Arguments

+
in_features
+

size of each input sample

+
out_features
+

size of each output sample

+
bias
+

If set to FALSE, the layer will not learn an additive bias. +Default: TRUE

+
+
+

Shape

-
    -
  • Input: (N, *, H_in) where * means any number of +

    • Input: (N, *, H_in) where * means any number of additional dimensions and H_in = in_features.

    • Output: (N, *, H_out) where all but the last dimension are the same shape as the input and H_out = out_features.

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight: the learnable weights of the module of shape +

    • weight: the learnable weights of the module of shape (out_features, in_features). The values are initialized from \(U(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\mbox{in\_features}}\)

    • bias: the learnable bias of the module of shape (out_features). If bias is TRUE, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\mbox{in\_features}}\)

      -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_linear(20, 30)
    -input <- torch_randn(128, 20)
    -output <- m(input)
    -print(output$size())
    -
    -}
    -#> [1] 128  30
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_linear(20, 30)
+input <- torch_randn(128, 20)
+output <- m(input)
+print(output$size())
+
+}
+#> [1] 128  30
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_log_sigmoid.html b/dev/reference/nn_log_sigmoid.html index 8af1d5074e9b4bff8b9423c037ba2a28b273e215..93608c0049090955ac44909287fc816c584fbe8a 100644 --- a/dev/reference/nn_log_sigmoid.html +++ b/dev/reference/nn_log_sigmoid.html @@ -1,82 +1,21 @@ - - - - - - - -LogSigmoid module — nn_log_sigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -LogSigmoid module — nn_log_sigmoid • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
$$ \mbox{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right) $$

-
nn_log_sigmoid()
- - -

Shape

+
+
nn_log_sigmoid()
+
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_log_sigmoid()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_log_sigmoid()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_log_softmax.html b/dev/reference/nn_log_softmax.html index 086abd34f3d5cb43b3ca0ea17b83fe9781d1eefb..1aa4f563b7f7c4295296c188f56eebc7fbebe677 100644 --- a/dev/reference/nn_log_softmax.html +++ b/dev/reference/nn_log_softmax.html @@ -1,80 +1,19 @@ - - - - - - - -LogSoftmax module — nn_log_softmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -LogSoftmax module — nn_log_softmax • torch - - - - - - - - + + -
-
- -
- -
+
input Tensor. The LogSoftmax formulation can be simplified as:

-
nn_log_softmax(dim)
- -

Arguments

- - - - - - -
dim

(int): A dimension along which LogSoftmax will be computed.

- -

Value

+
+
nn_log_softmax(dim)
+
+
+

Arguments

+
dim
+

(int): A dimension along which LogSoftmax will be computed.

+
+
+

Value

a Tensor of the same dimension and shape as the input with values in the range [-inf, 0)

-

Details

- +
+
+

Details

$$ \mbox{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right) $$
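Since the output holds log-probabilities along dim, exponentiating it should recover values that sum to one along that dimension. A sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  m <- nn_log_softmax(dim = 2)
  input <- torch_randn(2, 3)
  output <- m(input)
  # each row of exp(output) should sum to ~1 along the chosen dim
  print(output$exp()$sum(dim = 2))
}
```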

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\) where * means, any number of additional +

    • Input: \((*)\) where * means any number of additional dimensions

    • Output: \((*)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_log_softmax(1)
    -input <- torch_randn(2, 3)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_log_softmax(1)
+input <- torch_randn(2, 3)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_lp_pool1d.html b/dev/reference/nn_lp_pool1d.html index 5ce8d1a6706f585a21be62d186c231f0340a4dea..60bf773453aeac3d7d144761de2cc5af61e9da01 100644 --- a/dev/reference/nn_lp_pool1d.html +++ b/dev/reference/nn_lp_pool1d.html @@ -1,84 +1,23 @@ - - - - - - - -Applies a 1D power-average pooling over an input signal composed of several input -planes. — nn_lp_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 1D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
planes. On each window, the function computed is: $$ f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} $$

-
nn_lp_pool1d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
norm_type

if inf than one gets max pooling if 0 you get sum pooling ( -proportional to the avg pooling)

kernel_size

a single int, the size of the window

stride

a single int, the stride of the window. Default value is kernel_size

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

- -

Details

+
+
nn_lp_pool1d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
+
+
+

Arguments

+
norm_type
+

if inf, one gets max pooling; if 1, one gets sum pooling (proportional to the avg pooling)

+
kernel_size
+

a single int, the size of the window

+
stride
+

a single int, the stride of the window. Default value is kernel_size

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
+
+

Details

-
    -
  • At p = \(\infty\), one gets Max Pooling

  • +
    • At p = \(\infty\), one gets Max Pooling

    • At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

    • -
    - -

    Note

    - +
+
+

Note

If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, L_{in})\)

  • +
    • Input: \((N, C, L_{in})\)

    • Output: \((N, C, L_{out})\), where

    • -
    - -

    $$ +

$$ L_{out} = \left\lfloor\frac{L_{in} - \mbox{kernel\_size}}{\mbox{stride}} + 1\right\rfloor $$

+
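The output-length formula can be checked against the example settings: with kernel_size = 3, stride = 2 and L_in = 50 it gives floor((50 - 3)/2) + 1 = 24. A sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  m <- nn_lp_pool1d(2, 3, stride = 2)
  input <- torch_randn(20, 16, 50)
  output <- m(input)
  # L_out = floor((L_in - kernel_size) / stride) + 1 = floor((50 - 3) / 2) + 1 = 24
  print(output$size())
}
```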
-

Examples

-
if (torch_is_installed()) {
-# power-2 pool of window of length 3, with stride 2.
-m <- nn_lp_pool1d(2, 3, stride=2)
-input <- torch_randn(20, 16, 50)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# power-2 pool of window of length 3, with stride 2.
+m <- nn_lp_pool1d(2, 3, stride=2)
+input <- torch_randn(20, 16, 50)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_lp_pool2d.html b/dev/reference/nn_lp_pool2d.html index 97b6664dbc4360cbee3bac821ea697f538775793..cad41f744bf78b5c87daca326e90e836c3201abb 100644 --- a/dev/reference/nn_lp_pool2d.html +++ b/dev/reference/nn_lp_pool2d.html @@ -1,84 +1,23 @@ - - - - - - - -Applies a 2D power-average pooling over an input signal composed of several input -planes. — nn_lp_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 2D power-average pooling over an input signal composed of several input +planes. — nn_lp_pool2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
planes. On each window, the function computed is: $$ f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} $$

-
nn_lp_pool2d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
norm_type

if inf than one gets max pooling if 0 you get sum pooling ( -proportional to the avg pooling)

kernel_size

the size of the window

stride

the stride of the window. Default value is kernel_size

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

- -

Details

+
+
nn_lp_pool2d(norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
+
+
+

Arguments

+
norm_type
+

if inf, one gets max pooling; if 1, one gets sum pooling (proportional to the avg pooling)

+
kernel_size
+

the size of the window

+
stride
+

the stride of the window. Default value is kernel_size

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
+
+

Details

-
    -
  • At p = \(\infty\), one gets Max Pooling

  • +
    • At p = \(\infty\), one gets Max Pooling

    • At p = 1, one gets Sum Pooling (which is proportional to average pooling)

    • -
    - -

    The parameters kernel_size, stride can either be:

      -
    • a single int -- in which case the same value is used for the height and width dimension

    • +

    The parameters kernel_size, stride can either be:

    • a single int -- in which case the same value is used for the height and width dimension

    • a tuple of two ints -- in which case, the first int is used for the height dimension, and the second int for the width dimension

    • -
    - -

    Note

    - +
+
+

Note

If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, H_{in}, W_{in})\)

    • Output: \((N, C, H_{out}, W_{out})\), where

    • -
    - -

    $$ +

$$ H_{out} = \left\lfloor\frac{H_{in} - \mbox{kernel\_size}[0]}{\mbox{stride}[0]} + 1\right\rfloor $$ $$ W_{out} = \left\lfloor\frac{W_{in} - \mbox{kernel\_size}[1]}{\mbox{stride}[1]} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-  
-# power-2 pool of square window of size=3, stride=2
-m <- nn_lp_pool2d(2, 3, stride=2)
-# pool of non-square window of power 1.2
-m <- nn_lp_pool2d(1.2, c(3, 2), stride=c(2, 1))
-input <- torch_randn(20, 16, 50, 32)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+  
+# power-2 pool of square window of size=3, stride=2
+m <- nn_lp_pool2d(2, 3, stride=2)
+# pool of non-square window of power 1.2
+m <- nn_lp_pool2d(1.2, c(3, 2), stride=c(2, 1))
+input <- torch_randn(20, 16, 50, 32)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_lstm.html b/dev/reference/nn_lstm.html index 9cb12e36d1ba48ccd345d832c05652242f5620f4..bd203acb7ec983aaa8efcdf7710b4bbda8176b5c 100644 --- a/dev/reference/nn_lstm.html +++ b/dev/reference/nn_lstm.html @@ -1,82 +1,21 @@ - - - - - - - -Applies a multi-layer long short-term memory (LSTM) RNN to an input -sequence. — nn_lstm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a multi-layer long short-term memory (LSTM) RNN to an input +sequence. — nn_lstm • torch - - - - - - - - + + -
-
- -
- -
+
For each element in the input sequence, each layer computes the following function:

-
nn_lstm(
-  input_size,
-  hidden_size,
-  num_layers = 1,
-  bias = TRUE,
-  batch_first = FALSE,
-  dropout = 0,
-  bidirectional = FALSE,
-  ...
-)
+
+
nn_lstm(
+  input_size,
+  hidden_size,
+  num_layers = 1,
+  bias = TRUE,
+  batch_first = FALSE,
+  dropout = 0,
+  bidirectional = FALSE,
+  ...
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input_size

The number of expected features in the input x

hidden_size

The number of features in the hidden state h

num_layers

Number of recurrent layers. E.g., setting num_layers=2 +

+

Arguments

+
input_size
+

The number of expected features in the input x

+
hidden_size
+

The number of features in the hidden state h

+
num_layers
+

Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and -computing the final results. Default: 1

bias

If FALSE, then the layer does not use bias weights b_ih and b_hh. -Default: TRUE

batch_first

If TRUE, then the input and output tensors are provided -as (batch, seq, feature). Default: FALSE

dropout

If non-zero, introduces a Dropout layer on the outputs of each +computing the final results. Default: 1

+
bias
+

If FALSE, then the layer does not use bias weights b_ih and b_hh. +Default: TRUE

+
batch_first
+

If TRUE, then the input and output tensors are provided +as (batch, seq, feature). Default: FALSE

+
dropout
+

If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to -dropout. Default: 0

bidirectional

If TRUE, becomes a bidirectional LSTM. Default: FALSE

...

currently unused.

- -

Details

- +dropout. Default: 0

+
bidirectional
+

If TRUE, becomes a bidirectional LSTM. Default: FALSE

+
...
+

currently unused.

+
+
+

Details

$$ \begin{array}{ll} \\ i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\ c_t = f_t \odot c_{(t-1)} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \\ \end{array} $$ where \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the input at time t, \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial state at time 0, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and output gates, respectively. \(\sigma\) is the sigmoid function, and \(\odot\) is the Hadamard product.

-

Note

- +
+
+

Note

All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\mbox{hidden\_size}}\)

-

Inputs

- +
+
+

Inputs

-

Inputs: input, (h_0, c_0)

    -
  • input of shape (seq_len, batch, input_size): tensor containing the features +

Inputs: input, (h_0, c_0)

    • input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence.
    • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
    • c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch.

    If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.

    -

    Outputs

    - +

If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.

+
+
+

Outputs

-

Outputs: output, (h_n, c_n)

    -
  • output of shape (seq_len, batch, num_directions * hidden_size): tensor +

    Outputs: output, (h_n, c_n)

    • output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch_nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
    • h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n$view(c(num_layers, num_directions, batch, hidden_size)) and similarly for c_n.

    • c_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight_ih_l[k] : the learnable input-hidden weights of the \(\mbox{k}^{th}\) layer +

    • weight_ih_l[k] : the learnable input-hidden weights of the \(\mbox{k}^{th}\) layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size x input_size)

    • weight_hh_l[k] : the learnable hidden-hidden weights of the \(\mbox{k}^{th}\) layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size x hidden_size)

    • @@ -324,44 +227,41 @@ containing the cell state for t = seq_len

      (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)

    • bias_hh_l[k] : the learnable hidden-hidden bias of the \(\mbox{k}^{th}\) layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -rnn <- nn_lstm(10, 20, 2)
    -input <- torch_randn(5, 3, 10)
    -h0 <- torch_randn(2, 3, 20)
    -c0 <- torch_randn(2, 3, 20)
    -output <- rnn(input, list(h0, c0))
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+rnn <- nn_lstm(10, 20, 2)
+input <- torch_randn(5, 3, 10)
+h0 <- torch_randn(2, 3, 20)
+c0 <- torch_randn(2, 3, 20)
+output <- rnn(input, list(h0, c0))
+
+}
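The example's return value unpacks as described in the Outputs section. This sketch assumes torch is installed and that the forward call returns list(output, list(h_n, c_n)), matching the example above; (h_0, c_0) may be omitted, in which case they default to zero:

```r
library(torch)

if (torch_is_installed()) {
  rnn <- nn_lstm(10, 20, 2)
  input <- torch_randn(5, 3, 10)  # (seq_len, batch, input_size)
  out <- rnn(input)               # (h_0, c_0) default to zero when omitted
  # output features: (seq_len, batch, num_directions * hidden_size)
  print(out[[1]]$size())
  # h_n and c_n: (num_layers * num_directions, batch, hidden_size)
  print(out[[2]][[1]]$size())
  print(out[[2]][[2]]$size())
}
```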
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_margin_ranking_loss.html b/dev/reference/nn_margin_ranking_loss.html index 3d53e11307e4578d0eb0017c3a742bd7609f77d9..c7f2247e6e769c6fd46cabee12da6b5fe2de76a9 100644 --- a/dev/reference/nn_margin_ranking_loss.html +++ b/dev/reference/nn_margin_ranking_loss.html @@ -1,83 +1,22 @@ - - - - - - - -Margin ranking loss — nn_margin_ranking_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Margin ranking loss — nn_margin_ranking_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
If \(y = 1\) then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for \(y = -1\).

-
nn_margin_ranking_loss(margin = 0, reduction = "mean")
+
+
nn_margin_ranking_loss(margin = 0, reduction = "mean")
+
-

Arguments

- - - - - - - - - - -
margin

(float, optional): Has a default value of \(0\).

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
margin
+

(float, optional): Has a default value of \(0\).

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

The loss function for each pair of samples in the mini-batch is:

$$ \mbox{loss}(x1, x2, y) = \max(0, -y * (x1 - x2) + \mbox{margin}) $$
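The reduction over a mini-batch can be written out directly. A sketch (assuming torch is installed) comparing the module under the default reduction = 'mean' with the formula above:

```r
library(torch)

if (torch_is_installed()) {
  loss <- nn_margin_ranking_loss(margin = 0)
  x1 <- torch_randn(3)
  x2 <- torch_randn(3)
  y  <- torch_randn(3)$sign()
  # mean over the batch of max(0, -y * (x1 - x2) + margin)
  manual <- torch_clamp(-y * (x1 - x2), min = 0)$mean()
  print(torch_allclose(loss(x1, x2, y), manual))
}
```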

-

Shape

- +
+
+

Shape

-
    -
  • Input1: \((N)\) where N is the batch size.

  • +
    • Input1: \((N)\) where N is the batch size.

    • Input2: \((N)\), same shape as the Input1.

    • Target: \((N)\), same shape as the inputs.

    • Output: scalar. If reduction is 'none', then \((N)\).

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_margin_ranking_loss()
    -input1 <- torch_randn(3, requires_grad=TRUE)
    -input2 <- torch_randn(3, requires_grad=TRUE)
    -target <- torch_randn(3)$sign()
    -output <- loss(input1, input2, target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_margin_ranking_loss()
+input1 <- torch_randn(3, requires_grad=TRUE)
+input2 <- torch_randn(3, requires_grad=TRUE)
+target <- torch_randn(3)$sign()
+output <- loss(input1, input2, target)
+output$backward()
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_pool1d.html b/dev/reference/nn_max_pool1d.html index 3dd43f0c860a5123cbc1d94970ee55abf6d1d7fc..9378fa70cb2b437a94ed70f8de67924ecb2dba2b 100644 --- a/dev/reference/nn_max_pool1d.html +++ b/dev/reference/nn_max_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -MaxPool1D module — nn_max_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MaxPool1D module — nn_max_pool1d • torch - - - - - - - - + + -
-
- -
- -
+
Applies a 1D max pooling over an input signal composed of several input planes.

-
nn_max_pool1d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  dilation = 1,
-  return_indices = FALSE,
-  ceil_mode = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window to take a max over

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on both sides

dilation

a parameter that controls the stride of elements in the window

return_indices

if TRUE, will return the max indices along with the outputs. -Useful for nn_max_unpool1d() later.

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

- -

Details

+
+
nn_max_pool1d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  dilation = 1,
+  return_indices = FALSE,
+  ceil_mode = FALSE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window to take a max over

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on both sides

+
dilation
+

a parameter that controls the stride of elements in the window

+
return_indices
+

if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool1d() later.

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
+
+

Details

In the simplest case, the output value of the layer with input size \((N, C, L)\) and output \((N, C, L_{out})\) can be precisely described as:

$$ @@ -240,56 +150,53 @@ input(N_i, C_j, stride \times k + m) $$

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, L_{in})\)

  • +
    • Input: \((N, C, L_{in})\)

    • Output: \((N, C, L_{out})\), where

    • -
    - -

    $$ +

$$ L_{out} = \left\lfloor \frac{L_{in} + 2 \times \mbox{padding} - \mbox{dilation} \times (\mbox{kernel\_size} - 1) - 1}{\mbox{stride}} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-# pool of size=3, stride=2
-m <- nn_max_pool1d(3, stride=2)
-input <- torch_randn(20, 16, 50)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# pool of size=3, stride=2
+m <- nn_max_pool1d(3, stride=2)
+input <- torch_randn(20, 16, 50)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_pool2d.html b/dev/reference/nn_max_pool2d.html index fc7462e07b19b71caa154a11b79549a71572c1ff..76d274176a947f6780518e401e89dc9d44bfefa9 100644 --- a/dev/reference/nn_max_pool2d.html +++ b/dev/reference/nn_max_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -MaxPool2D module — nn_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MaxPool2D module — nn_max_pool2d • torch - - - - - - - - + + -
-
- -
- -
+
Applies a 2D max pooling over an input signal composed of several input planes.

-
nn_max_pool2d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  dilation = 1,
-  return_indices = FALSE,
-  ceil_mode = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window to take a max over

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on both sides

dilation

a parameter that controls the stride of elements in the window

return_indices

if TRUE, will return the max indices along with the outputs. -Useful for nn_max_unpool2d() later.

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

- -

Details

+
+
nn_max_pool2d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  dilation = 1,
+  return_indices = FALSE,
+  ceil_mode = FALSE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window to take a max over

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on both sides

+
dilation
+

a parameter that controls the stride of elements in the window

+
return_indices
+

if TRUE, will return the max indices along with the outputs. +Useful for nn_max_unpool2d() later.

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
+
+

Details

In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:

@@ -245,22 +155,17 @@ $$

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

-

The parameters kernel_size, stride, padding, dilation can either be:

    -
  • a single int -- in which case the same value is used for the height and width dimension

  • +

    The parameters kernel_size, stride, padding, dilation can either be:

    • a single int -- in which case the same value is used for the height and width dimension

    • a tuple of two ints -- in which case, the first int is used for the height dimension, and the second int for the width dimension

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, H_{in}, W_{in})\)

    • Output: \((N, C, H_{out}, W_{out})\), where

    • -
    - -

    $$ +

$$ H_{out} = \left\lfloor\frac{H_{in} + 2 * \mbox{padding[0]} - \mbox{dilation[0]} \times (\mbox{kernel\_size[0]} - 1) - 1}{\mbox{stride[0]}} + 1\right\rfloor $$

@@ -268,43 +173,42 @@ $$

W_{out} = \left\lfloor\frac{W_{in} + 2 * \mbox{padding[1]} - \mbox{dilation[1]} \times (\mbox{kernel\_size[1]} - 1) - 1}{\mbox{stride[1]}} + 1\right\rfloor $$
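Plugging the example settings into the formulas above (kernel_size = 3, stride = 2, padding = 0, dilation = 1) gives H_out = floor((50 - 3)/2) + 1 = 24 and W_out = floor((32 - 3)/2) + 1 = 15. A sketch, assuming torch is installed:

```r
library(torch)

if (torch_is_installed()) {
  m <- nn_max_pool2d(3, stride = 2)
  input <- torch_randn(20, 16, 50, 32)
  output <- m(input)
  # the spatial dims should shrink from (50, 32) to (24, 15)
  print(output$size())
}
```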

+
-

Examples

-
if (torch_is_installed()) {
-# pool of square window of size=3, stride=2
-m <- nn_max_pool2d(3, stride=2)
-# pool of non-square window
-m <- nn_max_pool2d(c(3, 2), stride=c(2, 1))
-input <- torch_randn(20, 16, 50, 32)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# pool of square window of size=3, stride=2
+m <- nn_max_pool2d(3, stride=2)
+# pool of non-square window
+m <- nn_max_pool2d(c(3, 2), stride=c(2, 1))
+input <- torch_randn(20, 16, 50, 32)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_pool3d.html b/dev/reference/nn_max_pool3d.html index 370da4b546a3d9106ab16416c2f56e9b67f2718f..1cb8c61aa0291e5506839c87c50a0ff1cae7481e 100644 --- a/dev/reference/nn_max_pool3d.html +++ b/dev/reference/nn_max_pool3d.html @@ -1,83 +1,22 @@ - - - - - - - -Applies a 3D max pooling over an input signal composed of several input -planes. — nn_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Applies a 3D max pooling over an input signal composed of several input +planes. — nn_max_pool3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:

-
nn_max_pool3d(
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  dilation = 1,
-  return_indices = FALSE,
-  ceil_mode = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
kernel_size

the size of the window to take a max over

stride

the stride of the window. Default value is kernel_size

padding

implicit zero padding to be added on all three sides

dilation

a parameter that controls the stride of elements in the window

return_indices

if TRUE, will return the max indices along with the outputs. -Useful for torch_nn.MaxUnpool3d later

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape

- -

Details

+
+
nn_max_pool3d(
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  dilation = 1,
+  return_indices = FALSE,
+  ceil_mode = FALSE
+)
+
+
+

Arguments

+
kernel_size
+

the size of the window to take a max over

+
stride
+

the stride of the window. Default value is kernel_size

+
padding
+

implicit zero padding to be added on all three sides

+
dilation
+

a parameter that controls the stride of elements in the window

+
return_indices
+

if TRUE, will return the max indices along with the outputs. +Useful for torch_nn.MaxUnpool3d later

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape

+
+
+

Details

$$ \begin{array}{ll} \mbox{out}(N_i, C_j, d, h, w) = & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ @@ -246,26 +156,21 @@ $$

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

    -
  • a single int -- in which case the same value is used for the depth, height and width dimension

  • +The parameters kernel_size, stride, padding, dilation can either be:

    • a single int -- in which case the same value is used for the depth, height and width dimension

    • a tuple of three ints -- in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where $$ D_{out} = \left\lfloor\frac{D_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor $$

    • -
    - -

    $$ +

$$ H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor $$

@@ -273,43 +178,42 @@ $$

W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[2] - \mbox{dilation}[2] \times (\mbox{kernel\_size}[2] - 1) - 1}{\mbox{stride}[2]} + 1\right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-# pool of square window of size=3, stride=2
-m <- nn_max_pool3d(3, stride=2)
-# pool of non-square window
-m <- nn_max_pool3d(c(3, 2, 2), stride=c(2, 1, 2))
-input <- torch_randn(20, 16, 50,44, 31)
-output <- m(input)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+# pool of square window of size=3, stride=2
+m <- nn_max_pool3d(3, stride=2)
+# pool of non-square window
+m <- nn_max_pool3d(c(3, 2, 2), stride=c(2, 1, 2))
+input <- torch_randn(20, 16, 50,44, 31)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_unpool1d.html b/dev/reference/nn_max_unpool1d.html index 7d93d2d4edd9a46f2acce4a0d0554d6e510c6967..8980c229718c50b6e56b3f1e06eb53bda912714f 100644 --- a/dev/reference/nn_max_unpool1d.html +++ b/dev/reference/nn_max_unpool1d.html @@ -1,82 +1,21 @@ - - - - - - - -Computes a partial inverse of MaxPool1d. — nn_max_unpool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a partial inverse of MaxPool1d. — nn_max_unpool1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

-
nn_max_unpool1d(kernel_size, stride = NULL, padding = 0)
- -

Arguments

- - - - - - - - - - - - - - -
kernel_size

(int or tuple): Size of the max pooling window.

stride

(int or tuple): Stride of the max pooling window. -It is set to kernel_size by default.

padding

(int or tuple): Padding that was added to the input

- -

Note

+
+
nn_max_unpool1d(kernel_size, stride = NULL, padding = 0)
+
+
+

Arguments

+
kernel_size
+

(int or tuple): Size of the max pooling window.

+
stride
+

(int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

+
padding
+

(int or tuple): Padding that was added to the input

+
+
+

Note

MaxPool1d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.

-

Inputs

- +
+
+

Inputs

-
    -
  • input: the input Tensor to invert

  • -
  • indices: the indices given out by nn_max_pool1d()

  • +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool1d()

    • output_size (optional): the targeted output size

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, H_{in})\)

  • +
    • Input: \((N, C, H_{in})\)

    • Output: \((N, C, H_{out})\), where $$ H_{out} = (H_{in} - 1) \times \mbox{stride}[0] - 2 \times \mbox{padding}[0] + \mbox{kernel\_size}[0] $$ or as given by output_size in the call operator

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -pool <- nn_max_pool1d(2, stride=2, return_indices=TRUE)
    -unpool <- nn_max_unpool1d(2, stride=2)
    -
    -input <- torch_tensor(array(1:8/1, dim = c(1,1,8)))
    -out <- pool(input)
    -unpool(out[[1]], out[[2]])
    -
    -# Example showcasing the use of output_size
    -input <- torch_tensor(array(1:8/1, dim = c(1,1,8)))
    -out <- pool(input)
    -unpool(out[[1]], out[[2]], output_size=input$size())
    -unpool(out[[1]], out[[2]])
    -
    -}
    -#> torch_tensor
    -#> (1,1,.,.) = 
    -#>   0
    -#>   2
    -#>   0
    -#>   4
    -#>   0
    -#>   6
    -#>   0
    -#>   8
    -#> [ CPUFloatType{1,1,8,1} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+pool <- nn_max_pool1d(2, stride=2, return_indices=TRUE)
+unpool <- nn_max_unpool1d(2, stride=2)
+
+input <- torch_tensor(array(1:8/1, dim = c(1,1,8)))
+out <- pool(input)
+unpool(out[[1]], out[[2]])
+
+# Example showcasing the use of output_size
+input <- torch_tensor(array(1:8/1, dim = c(1,1,8)))
+out <- pool(input)
+unpool(out[[1]], out[[2]], output_size=input$size())
+unpool(out[[1]], out[[2]])
+
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>   0
+#>   2
+#>   0
+#>   4
+#>   0
+#>   6
+#>   0
+#>   8
+#> [ CPUFloatType{1,1,8,1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_unpool2d.html b/dev/reference/nn_max_unpool2d.html index ff877c89f922ac80e5eda18fc499510a37c5a02c..e12dd0fa77512180527e228827431d5d7f568544 100644 --- a/dev/reference/nn_max_unpool2d.html +++ b/dev/reference/nn_max_unpool2d.html @@ -1,82 +1,21 @@ - - - - - - - -Computes a partial inverse of MaxPool2d. — nn_max_unpool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a partial inverse of MaxPool2d. — nn_max_unpool2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,49 +117,41 @@ including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

-
nn_max_unpool2d(kernel_size, stride = NULL, padding = 0)
- -

Arguments

- - - - - - - - - - - - - - -
kernel_size

(int or tuple): Size of the max pooling window.

stride

(int or tuple): Stride of the max pooling window. -It is set to kernel_size by default.

padding

(int or tuple): Padding that was added to the input

- -

Note

+
+
nn_max_unpool2d(kernel_size, stride = NULL, padding = 0)
+
+
+

Arguments

+
kernel_size
+

(int or tuple): Size of the max pooling window.

+
stride
+

(int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

+
padding
+

(int or tuple): Padding that was added to the input

+
+
+

Note

MaxPool2d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.

-

Inputs

- +
+
+

Inputs

-
    -
  • input: the input Tensor to invert

  • -
  • indices: the indices given out by nn_max_pool2d()

  • +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool2d()

    • output_size (optional): the targeted output size

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, H_{in}, W_{in})\)

    • Output: \((N, C, H_{out}, W_{out})\), where $$ H_{out} = (H_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} @@ -246,56 +160,53 @@ $$ W_{out} = (W_{in} - 1) \times \mbox{stride[1]} - 2 \times \mbox{padding[1]} + \mbox{kernel\_size[1]} $$ or as given by output_size in the call operator

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -pool <- nn_max_pool2d(2, stride=2, return_indices=TRUE)
    -unpool <- nn_max_unpool2d(2, stride=2)
    -input <- torch_randn(1,1,4,4)
    -out <- pool(input)
    -unpool(out[[1]], out[[2]])
    -
    -# specify a different output size than input size
    -unpool(out[[1]], out[[2]], output_size=c(1, 1, 5, 5))
    -
    -}
    -#> torch_tensor
    -#> (1,1,.,.) = 
    -#>   0.0000  1.2945  0.0000  0.0000  0.0000
    -#>   0.0000  0.0000  1.3472  1.0731  0.0000
    -#>   0.0000  2.0541  0.0000  0.0000  0.0000
    -#>   0.0000  0.0000  0.0000  0.0000  0.0000
    -#>   0.0000  0.0000  0.0000  0.0000  0.0000
    -#> [ CPUFloatType{1,1,5,5} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+pool <- nn_max_pool2d(2, stride=2, return_indices=TRUE)
+unpool <- nn_max_unpool2d(2, stride=2)
+input <- torch_randn(1,1,4,4)
+out <- pool(input)
+unpool(out[[1]], out[[2]])
+
+# specify a different output size than input size
+unpool(out[[1]], out[[2]], output_size=c(1, 1, 5, 5))
+
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>   1.4170  0.0000  0.0000  0.0000  0.0000
+#>   0.0000  0.5730  0.0000  0.6062  0.0000
+#>   0.6256  0.0000  0.0000  0.0000  0.0000
+#>   0.0000  0.0000  0.0000  0.0000  0.0000
+#>   0.0000  0.0000  0.0000  0.0000  0.0000
+#> [ CPUFloatType{1,1,5,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_max_unpool3d.html b/dev/reference/nn_max_unpool3d.html index 26226f68288b845dcddf74e3b42fe15a0783960b..bc67a94abfa58a710e7bb129e51d65b589b75582 100644 --- a/dev/reference/nn_max_unpool3d.html +++ b/dev/reference/nn_max_unpool3d.html @@ -1,82 +1,21 @@ - - - - - - - -Computes a partial inverse of MaxPool3d. — nn_max_unpool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes a partial inverse of MaxPool3d. — nn_max_unpool3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,53 +117,43 @@ including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

-
nn_max_unpool3d(kernel_size, stride = NULL, padding = 0)
- -

Arguments

- - - - - - - - - - - - - - -
kernel_size

(int or tuple): Size of the max pooling window.

stride

(int or tuple): Stride of the max pooling window. -It is set to kernel_size by default.

padding

(int or tuple): Padding that was added to the input

- -

Note

+
+
nn_max_unpool3d(kernel_size, stride = NULL, padding = 0)
+
+
+

Arguments

+
kernel_size
+

(int or tuple): Size of the max pooling window.

+
stride
+

(int or tuple): Stride of the max pooling window. +It is set to kernel_size by default.

+
padding
+

(int or tuple): Padding that was added to the input

+
+
+

Note

MaxPool3d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs section below.

-

Inputs

- +
+
+

Inputs

-
    -
  • input: the input Tensor to invert

  • -
  • indices: the indices given out by nn_max_pool3d()

  • +
    • input: the input Tensor to invert

    • +
    • indices: the indices given out by nn_max_pool3d()

    • output_size (optional): the targeted output size

    • -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

  • +
    • Input: \((N, C, D_{in}, H_{in}, W_{in})\)

    • Output: \((N, C, D_{out}, H_{out}, W_{out})\), where

    • -
    - -

    $$ +

$$ D_{out} = (D_{in} - 1) \times \mbox{stride[0]} - 2 \times \mbox{padding[0]} + \mbox{kernel\_size[0]} $$ $$ @@ -251,45 +163,44 @@ $$ W_{out} = (W_{in} - 1) \times \mbox{stride[2]} - 2 \times \mbox{padding[2]} + \mbox{kernel\_size[2]} $$

or as given by output_size in the call operator

+
-

Examples

-
if (torch_is_installed()) {
-  
-# pool of square window of size=3, stride=2
-pool <- nn_max_pool3d(3, stride=2, return_indices=TRUE)
-unpool <- nn_max_unpool3d(3, stride=2)
-out <- pool(torch_randn(20, 16, 51, 33, 15))
-unpooled_output <- unpool(out[[1]], out[[2]])
-unpooled_output$size()
-
-}
-#> [1] 20 16 51 33 15
-
+
+

Examples

+
if (torch_is_installed()) {
+  
+# pool of square window of size=3, stride=2
+pool <- nn_max_pool3d(3, stride=2, return_indices=TRUE)
+unpool <- nn_max_unpool3d(3, stride=2)
+out <- pool(torch_randn(20, 16, 51, 33, 15))
+unpooled_output <- unpool(out[[1]], out[[2]])
+unpooled_output$size()
+
+}
+#> [1] 20 16 51 33 15
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_module.html b/dev/reference/nn_module.html index 5d40ce7196fc8f125f7736364f4196778f0cb8f5..46317d4d2c8fdd5e660e9656491a19702b94d5be 100644 --- a/dev/reference/nn_module.html +++ b/dev/reference/nn_module.html @@ -1,79 +1,18 @@ - - - - - - - -Base class for all neural network modules. — nn_module • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Base class for all neural network modules. — nn_module • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,133 +111,120 @@

Your models should also subclass this class.

-
nn_module(
-  classname = NULL,
-  inherit = nn_Module,
-  ...,
-  private = NULL,
-  active = NULL,
-  parent_env = parent.frame()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
classname

an optional name for the module

inherit

an optional module to inherit from

...

methods implementation

private

passed to R6::R6Class().

active

passed to R6::R6Class().

parent_env

passed to R6::R6Class().

- -

Details

+
+
nn_module(
+  classname = NULL,
+  inherit = nn_Module,
+  ...,
+  private = NULL,
+  active = NULL,
+  parent_env = parent.frame()
+)
+
+
+

Arguments

+
classname
+

an optional name for the module

+
inherit
+

an optional module to inherit from

+
...
+

methods implementation

+
private
+

passed to R6::R6Class().

+
active
+

passed to R6::R6Class().

+
parent_env
+

passed to R6::R6Class().

+
+
+

Details

Modules can also contain other Modules, allowing you to nest them in a tree structure. You can assign the submodules as regular attributes.

You are expected to implement the initialize and the forward to create a new nn_module.

-

Initialize

- +
+
+

Initialize

The initialize function will be called whenever a new instance of the nn_module is created. We use the initialize function to define submodules and parameters -of the module. For example:

initialize = function(input_size, output_size) {
-   self$conv1 <- nn_conv2d(input_size, output_size, 5)
-   self$conv2 <- nn_conv2d(output_size, output_size, 5)
-}
-
+of the module. For example:

initialize = function(input_size, output_size) {
+   self$conv1 <- nn_conv2d(input_size, output_size, 5)
+   self$conv2 <- nn_conv2d(output_size, output_size, 5)
+}

The initialize function can have any number of parameters. All objects assigned to self$ will be available for other methods that you implement. -Tensors wrapped with nn_parameter() or nn_buffer() and submodules are +Tensors wrapped with nn_parameter() or nn_buffer() and submodules are automatically tracked when assigned to self$.

The initialize function is optional if the module you are defining doesn't have weights, submodules or buffers.

-

Forward

- +
+
+

Forward

The forward method is called whenever an instance of nn_module is called. This is usually used to implement the computation that the module does with the weights and submodules defined in the initialize function.

-

For example:

forward = function(input) {
-   input <- self$conv1(input)
-   input <- nnf_relu(input)
-   input <- self$conv2(input)
-   input <- nnf_relu(input)
-   input
- }
-
+

For example:

forward = function(input) {
+   input <- self$conv1(input)
+   input <- nnf_relu(input)
+   input <- self$conv2(input)
+   input <- nnf_relu(input)
+   input
+ }

The forward function can use the self$training attribute to make different computations depending on whether the model is training or not, for example if you were implementing the dropout module.

+
-

Examples

-
if (torch_is_installed()) {
-model <- nn_module(
- initialize = function() {
-   self$conv1 <- nn_conv2d(1, 20, 5)
-   self$conv2 <- nn_conv2d(20, 20, 5)
- },
- forward = function(input) {
-   input <- self$conv1(input)
-   input <- nnf_relu(input)
-   input <- self$conv2(input)
-   input <- nnf_relu(input)
-   input
- }
-)
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+model <- nn_module(
+ initialize = function() {
+   self$conv1 <- nn_conv2d(1, 20, 5)
+   self$conv2 <- nn_conv2d(20, 20, 5)
+ },
+ forward = function(input) {
+   input <- self$conv1(input)
+   input <- nnf_relu(input)
+   input <- self$conv2(input)
+   input <- nnf_relu(input)
+   input
+ }
+)
+
+}
+
+
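The example above defines a module generator but never instantiates or calls it. As a minimal, self-contained sketch of the full cycle (the module name `linear_model` and the dimensions are illustrative, not part of the original example):

```r
library(torch)

linear_model <- nn_module(
  initialize = function(in_features, out_features) {
    # parameters of nn_linear are tracked automatically via self$
    self$fc <- nn_linear(in_features, out_features)
  },
  forward = function(x) {
    self$fc(x)
  }
)

net <- linear_model(10, 1)     # calling the generator invokes initialize()
out <- net(torch_randn(2, 10)) # calling the instance invokes forward()
out$size()
```

Arguments passed when instantiating the generator are forwarded to `initialize`, which is how the same module definition can be reused at different sizes.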
+
-
- +
- - + + diff --git a/dev/reference/nn_module_list.html b/dev/reference/nn_module_list.html index 67c3f41d1cb09a193bc7d6dedb1dd32a077903ef..5e0962d82c03663bb8a344c952d6e11e15c01e3b 100644 --- a/dev/reference/nn_module_list.html +++ b/dev/reference/nn_module_list.html @@ -1,81 +1,20 @@ - - - - - - - -Holds submodules in a list. — nn_module_list • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Holds submodules in a list. — nn_module_list • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,59 +115,55 @@ modules it contains are properly registered, and will be visible to all nn_module methods.

-
nn_module_list(modules = list())
- -

Arguments

- - - - - - -
modules

a list of modules to add

- - -

Examples

-
if (torch_is_installed()) {
-
-my_module <- nn_module(
- initialize = function() {
-   self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10)))
- },
- forward = function(x) {
-  for (i in 1:length(self$linears))
-    x <- self$linears[[i]](x)
-  x
- }
-)
-
-}
-
+
+
nn_module_list(modules = list())
+
+ +
+

Arguments

+
modules
+

a list of modules to add

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+my_module <- nn_module(
+ initialize = function() {
+   self$linears <- nn_module_list(lapply(1:10, function(x) nn_linear(10, 10)))
+ },
+ forward = function(x) {
+  for (i in 1:length(self$linears))
+    x <- self$linears[[i]](x)
+  x
+ }
+)
+
+}
+
+
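The example defines a module holding the list but does not run it. A hedged, self-contained sketch of constructing and calling such a module (the name `stack` and the layer sizes are illustrative assumptions):

```r
library(torch)

stack <- nn_module(
  initialize = function() {
    # two linear layers registered through nn_module_list
    self$layers <- nn_module_list(list(nn_linear(8, 8), nn_linear(8, 2)))
  },
  forward = function(x) {
    for (i in 1:length(self$layers)) {
      x <- self$layers[[i]](x)
    }
    x
  }
)

m <- stack()
m(torch_randn(4, 8))$size()  # batch of 4 mapped to 2 output features
```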
+
-
- +
- - + + diff --git a/dev/reference/nn_mse_loss.html b/dev/reference/nn_mse_loss.html index a6ecab20869c9ef3c57fd28268d195a1183944b4..4fdf34513e485beb49bc3a328ad89e51d401e228 100644 --- a/dev/reference/nn_mse_loss.html +++ b/dev/reference/nn_mse_loss.html @@ -1,82 +1,21 @@ - - - - - - - -MSE loss — nn_mse_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MSE loss — nn_mse_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,24 +117,22 @@ The unreduced (i.e. with reduction set to 'none') loss can be described as:

-
nn_mse_loss(reduction = "mean")
+
+
nn_mse_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$ \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2, @@ -230,52 +150,49 @@ $$

of \(n\) elements each.

The mean operation still operates over all the elements, and divides by \(n\). The division by \(n\) can be avoided if one sets reduction = 'sum'.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_mse_loss()
    -input <- torch_randn(3, 5, requires_grad=TRUE)
    -target <- torch_randn(3, 5)
    -output <- loss(input, target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_mse_loss()
+input <- torch_randn(3, 5, requires_grad=TRUE)
+target <- torch_randn(3, 5)
+output <- loss(input, target)
+output$backward()
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_multi_margin_loss.html b/dev/reference/nn_multi_margin_loss.html index c3cc0a1ef8b7026955afafed8bf00237ba0d8611..2b63decb41ee17ee9c81e9405d7320bcf0e6c860 100644 --- a/dev/reference/nn_multi_margin_loss.html +++ b/dev/reference/nn_multi_margin_loss.html @@ -1,82 +1,21 @@ - - - - - - - -Multi margin loss — nn_multi_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multi margin loss — nn_multi_margin_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,39 +117,31 @@ output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \mbox{x.size}(1)-1\)):

-
nn_multi_margin_loss(p = 1, margin = 1, weight = NULL, reduction = "mean")
+
+
nn_multi_margin_loss(p = 1, margin = 1, weight = NULL, reduction = "mean")
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
p

(int, optional): Has a default value of \(1\). \(1\) and \(2\) -are the only supported values.

margin

(float, optional): Has a default value of \(1\).

weight

(Tensor, optional): a manual rescaling weight given to each +

+

Arguments

+
p
+

(int, optional): Has a default value of \(1\). \(1\) and \(2\) +are the only supported values.

+
margin
+

(float, optional): Has a default value of \(1\).

+
weight
+

(Tensor, optional): a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is -treated as if having all ones.

reduction

(string, optional): Specifies the reduction to apply to the output: +treated as if having all ones.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is: $$ @@ -241,32 +155,29 @@ The loss function then becomes:

$$ \mbox{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\mbox{margin} - x[y] + x[i]))^p)}{\mbox{x.size}(0)} $$
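This page has no Examples section. As a hedged sketch of typical usage (assuming torch is installed, and assuming target class indices are 1-based as elsewhere in the R torch API):

```r
library(torch)

loss <- nn_multi_margin_loss()
# scores for one sample over 4 classes
x <- torch_tensor(c(0.1, 0.2, 0.4, 0.8))$view(c(1, 4))
# target class index for the sample (1-based)
y <- torch_tensor(3, dtype = torch_long())
loss(x, y)
```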

+
+
-
- +
- - + + diff --git a/dev/reference/nn_multihead_attention.html b/dev/reference/nn_multihead_attention.html index 44121be0330bb44089714a2b562ec96910ed42ed..b9f85e245fe422479143e13656dc4ba0a6bbddf6 100644 --- a/dev/reference/nn_multihead_attention.html +++ b/dev/reference/nn_multihead_attention.html @@ -1,81 +1,20 @@ - - - - - - - -MultiHead attention — nn_multihead_attention • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -MultiHead attention — nn_multihead_attention • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,70 +115,54 @@ from different representation subspaces. See reference: Attention Is All You Need

-
nn_multihead_attention(
-  embed_dim,
-  num_heads,
-  dropout = 0,
-  bias = TRUE,
-  add_bias_kv = FALSE,
-  add_zero_attn = FALSE,
-  kdim = NULL,
-  vdim = NULL
-)
+
+
nn_multihead_attention(
+  embed_dim,
+  num_heads,
+  dropout = 0,
+  bias = TRUE,
+  add_bias_kv = FALSE,
+  add_zero_attn = FALSE,
+  kdim = NULL,
+  vdim = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
embed_dim

total dimension of the model.

num_heads

parallel attention heads.

dropout

a Dropout layer on attn_output_weights. Default: 0.0.

bias

add bias as module parameter. Default: True.

add_bias_kv

add bias to the key and value sequences at dim=0.

add_zero_attn

add a new batch of zeros to the key and -value sequences at dim=1.

kdim

total number of features in key. Default: NULL

vdim

total number of features in value. Default: NULL. +

+

Arguments

+
embed_dim
+

total dimension of the model.

+
num_heads
+

parallel attention heads.

+
dropout
+

a Dropout layer on attn_output_weights. Default: 0.0.

+
bias
+

add bias as module parameter. Default: True.

+
add_bias_kv
+

add bias to the key and value sequences at dim=0.

+
add_zero_attn
+

add a new batch of zeros to the key and +value sequences at dim=1.

+
kdim
+

total number of features in key. Default: NULL

+
vdim
+

total number of features in value. Default: NULL. Note: if kdim and vdim are NULL, they will be set to embed_dim such that -query, key, and value have the same number of features.

- -

Details

- +query, key, and value have the same number of features.

+
+
+

Details

$$ \mbox{MultiHead}(Q, K, V) = \mbox{Concat}(head_1,\dots,head_h)W^O \mbox{where} head_i = \mbox{Attention}(QW_i^Q, KW_i^K, VW_i^V) $$

-

Shape

- +
+
+

Shape

-

Inputs:

    -
  • query: \((L, N, E)\) where L is the target sequence length, N is the batch size, E is +

    Inputs:

    • query: \((L, N, E)\) where L is the target sequence length, N is the batch size, E is the embedding dimension.

    • key: \((S, N, E)\), where S is the source sequence length, N is the batch size, E is the embedding dimension.

    • @@ -273,13 +179,9 @@ positions. If a ByteTensor is provided, the non-zero positions are not allowed t while the zero positions will be unchanged. If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged. If a FloatTensor is provided, it will be added to the attention weight.

      -
    - -

    Outputs:

      -
    • attn_output: \((L, N, E)\) where L is the target sequence length, N is +

    Outputs:

    • attn_output: \((L, N, E)\) where L is the target sequence length, N is the batch size, E is the embedding dimension.

    • -
    • attn_output_weights:

        -
      • if avg_weights is TRUE (the default), the output attention +

      • attn_output_weights:

        • if avg_weights is TRUE (the default), the output attention weights are averaged over the attention heads, giving a tensor of shape \((N, L, S)\) where N is the batch size, L is the target sequence length, S is the source sequence length.

        • @@ -287,45 +189,42 @@ length, S is the source sequence length.

          as-is, with shape \((N, H, L, S)\), where H is the number of attention heads.

      • -
      - - -

      Examples

      -
      if (torch_is_installed()) {
      -if (FALSE) {
      -multihead_attn = nn_multihead_attention(embed_dim, num_heads)
      -out <- multihead_attn(query, key, value)
      -attn_output <- out[[1]]
      -attn_output_weights <- out[[2]]
      -}
      -
      -}
      -
      +
+ +
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+multihead_attn = nn_multihead_attention(embed_dim, num_heads)
+out <- multihead_attn(query, key, value)
+attn_output <- out[[1]]
+attn_output_weights <- out[[2]]
+}
+
+}
+
+
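The example above is wrapped in `if (FALSE)` and leaves `embed_dim`, `query`, `key`, and `value` undefined. A concrete, self-contained sketch with assumed small dimensions (all sizes here are illustrative):

```r
library(torch)

embed_dim <- 16
num_heads <- 4
multihead_attn <- nn_multihead_attention(embed_dim, num_heads)

# query: (L, N, E); key and value: (S, N, E)
query <- torch_randn(5, 2, embed_dim)
key   <- torch_randn(7, 2, embed_dim)
value <- torch_randn(7, 2, embed_dim)

out <- multihead_attn(query, key, value)
out[[1]]$size()  # attn_output, per the Shape section: (L, N, E)
out[[2]]$size()  # averaged attention weights: (N, L, S)
```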
+
-
- +
- - + + diff --git a/dev/reference/nn_multilabel_margin_loss.html b/dev/reference/nn_multilabel_margin_loss.html index a807a71c594b23439a44d91fb8197e36defe5765..afb6e3ebd68a921117adf47284904d0803328000 100644 --- a/dev/reference/nn_multilabel_margin_loss.html +++ b/dev/reference/nn_multilabel_margin_loss.html @@ -1,82 +1,21 @@ - - - - - - - -Multilabel margin loss — nn_multilabel_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multilabel margin loss — nn_multilabel_margin_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,24 +117,22 @@ and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:

-
nn_multilabel_margin_loss(reduction = "mean")
+
+
nn_multilabel_margin_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$ \mbox{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\mbox{x.size}(0)} $$

@@ -224,56 +144,53 @@ and \(i \neq y[j]\) for all \(i\) and \(j\).

The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((C)\) or \((N, C)\) where N is the batch size and C +

    • Input: \((C)\) or \((N, C)\) where N is the batch size and C is the number of classes.

    • Target: \((C)\) or \((N, C)\), label targets padded by -1 ensuring same shape as the input.

    • Output: scalar. If reduction is 'none', then \((N)\).

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_multilabel_margin_loss()
    -x <- torch_tensor(c(0.1, 0.2, 0.4, 0.8))$view(c(1,4))
    -# for target y, only consider labels 4 and 1, not after label -1
    -y <- torch_tensor(c(4, 1, -1, 2), dtype = torch_long())$view(c(1,4))
    -loss(x, y)
    -
    -}
    -#> torch_tensor
    -#> 0.85
    -#> [ CPUFloatType{} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_multilabel_margin_loss()
+x <- torch_tensor(c(0.1, 0.2, 0.4, 0.8))$view(c(1,4))
+# for target y, only consider labels 4 and 1, not after label -1
+y <- torch_tensor(c(4, 1, -1, 2), dtype = torch_long())$view(c(1,4))
+loss(x, y)
+
+}
+#> torch_tensor
+#> 0.85
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_multilabel_soft_margin_loss.html b/dev/reference/nn_multilabel_soft_margin_loss.html index a37ccba6ab29038a49455eafcbf9c4d9b845c848..cac0d48d7a911c66fb30b611fd795c58920da7e7 100644 --- a/dev/reference/nn_multilabel_soft_margin_loss.html +++ b/dev/reference/nn_multilabel_soft_margin_loss.html @@ -1,81 +1,20 @@ - - - - - - - -Multi label soft margin loss — nn_multilabel_soft_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multi label soft margin loss — nn_multilabel_soft_margin_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,30 +115,26 @@ loss based on max-entropy, between input \(x\) and target \(y\) of size \((N, C)\).

-
nn_multilabel_soft_margin_loss(weight = NULL, reduction = "mean")
+
+
nn_multilabel_soft_margin_loss(weight = NULL, reduction = "mean")
+
-

Arguments

- - - - - - - - - - -
weight

(Tensor, optional): a manual rescaling weight given to each +

+

Arguments

+
weight
+

(Tensor, optional): a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is -treated as if having all ones.

reduction

(string, optional): Specifies the reduction to apply to the output: +treated as if having all ones.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

For each sample in the minibatch:

$$ loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) @@ -224,42 +142,37 @@ specifying either of those two args will override reduction. Defaul $$

where \(i \in \left\{0, \; \cdots , \; \mbox{x.nElement}() - 1\right\}\), \(y[i] \in \left\{0, \; 1\right\}\).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C)\) where N is the batch size and C is the number of classes.

  • +
    • Input: \((N, C)\) where N is the batch size and C is the number of classes.

    • Target: \((N, C)\), label targets padded by -1 ensuring same shape as the input.

    • Output: scalar. If reduction is 'none', then \((N)\).

    • -
    - +
+
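This page has no Examples section. A hedged sketch of typical usage with multi-hot targets (the sizes N = 3, C = 5 are illustrative assumptions):

```r
library(torch)

loss <- nn_multilabel_soft_margin_loss()
x <- torch_randn(3, 5)          # N = 3 samples, C = 5 classes
y <- torch_rand(3, 5)$round()   # multi-hot targets in {0, 1}
loss(x, y)
```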
-
- +
- - + + diff --git a/dev/reference/nn_nll_loss.html b/dev/reference/nn_nll_loss.html index 2430b615a460843ab89b29b4d1b228944266e497..5a054ad462dcff9bb1658d9fd2b7c2ac64f01529 100644 --- a/dev/reference/nn_nll_loss.html +++ b/dev/reference/nn_nll_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Nll loss — nn_nll_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nll loss — nn_nll_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,36 +113,30 @@ problem with C classes." /> problem with C classes.

-
nn_nll_loss(weight = NULL, ignore_index = -100, reduction = "mean")
+
+
nn_nll_loss(weight = NULL, ignore_index = -100, reduction = "mean")
+
-

Arguments

- - - - - - - - - - - - - - -
weight

(Tensor, optional): a manual rescaling weight given to each +

+

Arguments

+
weight
+

(Tensor, optional): a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is -treated as if having all ones.

ignore_index

(int, optional): Specifies a target value that is ignored -and does not contribute to the input gradient.

reduction

(string, optional): Specifies the reduction to apply to the output: +treated as if having all ones.

+
ignore_index
+

(int, optional): Specifies a target value that is ignored +and does not contribute to the input gradient.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override -reduction. Default: 'mean'

- -

Details

- +reduction. Default: 'mean'

+
+
+

Details

If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.

@@ -256,75 +172,72 @@ $$

an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C)\) where C = number of classes, or +

    • Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

    • Target: \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

    • Output: scalar.

    • -
    - -

    If reduction is 'none', then the same size as the target: \((N)\), or +

If reduction is 'none', then the same size as the target: \((N)\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

+
-

Examples

-
if (torch_is_installed()) {
-m <- nn_log_softmax(dim=2)
-loss <- nn_nll_loss()
-# input is of size N x C = 3 x 5
-input <- torch_randn(3, 5, requires_grad=TRUE)
-# each element in target has to have 0 <= value < C
-target <- torch_tensor(c(2, 1, 5), dtype = torch_long())
-output <- loss(m(input), target)
-output$backward()
-
-# 2D loss example (used, for example, with image inputs)
-N <- 5
-C <- 4
-loss <- nn_nll_loss()
-# input is of size N x C x height x width
-data <- torch_randn(N, 16, 10, 10)
-conv <- nn_conv2d(16, C, c(3, 3))
-m <- nn_log_softmax(dim=1)
-# each element in target has to have 0 <= value < C
-target <- torch_empty(N, 8, 8, dtype=torch_long())$random_(1, C)
-output <- loss(m(conv(data)), target)
-output$backward()
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+m <- nn_log_softmax(dim=2)
+loss <- nn_nll_loss()
+# input is of size N x C = 3 x 5
+input <- torch_randn(3, 5, requires_grad=TRUE)
+# each element in target has to have 0 <= value < C
+target <- torch_tensor(c(2, 1, 5), dtype = torch_long())
+output <- loss(m(input), target)
+output$backward()
+
+# 2D loss example (used, for example, with image inputs)
+N <- 5
+C <- 4
+loss <- nn_nll_loss()
+# input is of size N x C x height x width
+data <- torch_randn(N, 16, 10, 10)
+conv <- nn_conv2d(16, C, c(3, 3))
+m <- nn_log_softmax(dim=1)
+# each element in target must be a class index in 1:C (class indices are 1-based in R torch)
+target <- torch_empty(N, 8, 8, dtype=torch_long())$random_(1, C)
+output <- loss(m(conv(data)), target)
+output$backward()
+
+}
+
+
+
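The NLL loss above is simply the negative log-probability assigned to the true class, averaged over the batch. A minimal sketch of that equivalence, assuming the torch R package is installed (the manual indexing below is illustrative, not part of the module API):

```r
if (torch_is_installed()) {
  input <- torch_randn(3, 5)
  logp <- nn_log_softmax(dim = 2)(input)
  target <- torch_tensor(c(2, 1, 5), dtype = torch_long())
  loss <- nn_nll_loss()(logp, target)
  # manual check: average the negative log-probabilities of the target classes
  manual <- -mean(sapply(1:3, function(i) {
    as.numeric(logp[i, as.numeric(target[i])])
  }))
  # `loss` and `manual` should agree up to floating-point error
}
```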
-
- +
- - + + diff --git a/dev/reference/nn_pairwise_distance.html b/dev/reference/nn_pairwise_distance.html index 01b94a6110a3f72ab1f03ae0a825b6773a94ba4a..b3b7e6f98c475ba705f8604f8d825fb20c5a07c9 100644 --- a/dev/reference/nn_pairwise_distance.html +++ b/dev/reference/nn_pairwise_distance.html @@ -1,80 +1,19 @@ - - - - - - - -Pairwise distance — nn_pairwise_distance • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pairwise distance — nn_pairwise_distance • torch - - - - - - - - + + -
-
- -
- -
+
Computes the batchwise pairwise distance between vectors \(v_1\), \(v_2\) using the p-norm:

-
nn_pairwise_distance(p = 2, eps = 1e-06, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
p

(real): the norm degree. Default: 2

eps

(float, optional): Small value to avoid division by zero. -Default: 1e-6

keepdim

(bool, optional): Determines whether or not to keep the vector dimension. -Default: FALSE

- -

Details

+
+
nn_pairwise_distance(p = 2, eps = 1e-06, keepdim = FALSE)
+
+
+

Arguments

+
p
+

(real): the norm degree. Default: 2

+
eps
+

(float, optional): Small value to avoid division by zero. +Default: 1e-6

+
keepdim
+

(bool, optional): Determines whether or not to keep the vector dimension. +Default: FALSE

+
+
+

Details

$$ \Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input1: \((N, D)\) where D = vector dimension

  • +
    • Input1: \((N, D)\) where D = vector dimension

    • Input2: \((N, D)\), same shape as the Input1

    • Output: \((N)\). If keepdim is TRUE, then \((N, 1)\).

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -pdist <- nn_pairwise_distance(p=2)
    -input1 <- torch_randn(100, 128)
    -input2 <- torch_randn(100, 128)
    -output <- pdist(input1, input2)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+pdist <- nn_pairwise_distance(p=2)
+input1 <- torch_randn(100, 128)
+input2 <- torch_randn(100, 128)
+output <- pdist(input1, input2)
+
+}
+
+
+
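The module's output can be checked against the p-norm formula using plain tensor methods; this sketch ignores the small eps the module adds to the difference for numerical stability:

```r
if (torch_is_installed()) {
  pdist <- nn_pairwise_distance(p = 2)
  x1 <- torch_randn(4, 8)
  x2 <- torch_randn(4, 8)
  out <- pdist(x1, x2)  # shape (4): one distance per row
  # manual p-norm over the vector dimension (dim 2)
  manual <- (x1 - x2)$abs()$pow(2)$sum(dim = 2)$pow(0.5)
}
```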
-
- +
- - + + diff --git a/dev/reference/nn_parameter.html b/dev/reference/nn_parameter.html index 5f4020f5bf3a355d94d6787dff60274aaee936ec..0a7f6aa114321d3dc95146bcaa8252d2d1bfbc5f 100644 --- a/dev/reference/nn_parameter.html +++ b/dev/reference/nn_parameter.html @@ -1,79 +1,18 @@ - - - - - - - -Creates an nn_parameter — nn_parameter • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates an nn_parameter — nn_parameter • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Indicates to nn_module that x is a parameter

-
nn_parameter(x, requires_grad = TRUE)
- -

Arguments

- - - - - - - - - - -
x

the tensor that you want to indicate as parameter

requires_grad

whether this parameter should have -requires_grad = TRUE

+
+
nn_parameter(x, requires_grad = TRUE)
+
+
+

Arguments

+
x
+

the tensor that you want to indicate as parameter

+
requires_grad
+

whether this parameter should have +requires_grad = TRUE

+
+
-
- +
- - + + diff --git a/dev/reference/nn_poisson_nll_loss.html b/dev/reference/nn_poisson_nll_loss.html index 09e2bdea2b1e34cd812f501417e19ff57396c2f3..1762423e842254caf977a71af3a2bbbe944fcd97 100644 --- a/dev/reference/nn_poisson_nll_loss.html +++ b/dev/reference/nn_poisson_nll_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Poisson NLL loss — nn_poisson_nll_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Poisson NLL loss — nn_poisson_nll_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,46 +113,38 @@ The loss can be described as:" /> The loss can be described as:

-
nn_poisson_nll_loss(
-  log_input = TRUE,
-  full = FALSE,
-  eps = 1e-08,
-  reduction = "mean"
-)
+
+
nn_poisson_nll_loss(
+  log_input = TRUE,
+  full = FALSE,
+  eps = 1e-08,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
log_input

(bool, optional): if TRUE the loss is computed as +

+

Arguments

+
log_input
+

(bool, optional): if TRUE the loss is computed as \(\exp(\mbox{input}) - \mbox{target}*\mbox{input}\), if FALSE the loss is -\(\mbox{input} - \mbox{target}*\log(\mbox{input}+\mbox{eps})\).

full

(bool, optional): whether to compute full loss, i. e. to add the +\(\mbox{input} - \mbox{target}*\log(\mbox{input}+\mbox{eps})\).

+
full
+

(bool, optional): whether to compute full loss, i. e. to add the Stirling approximation term -\(\mbox{target}*\log(\mbox{target}) - \mbox{target} + 0.5 * \log(2\pi\mbox{target})\).

eps

(float, optional): Small value to avoid evaluation of \(\log(0)\) when -log_input = FALSE. Default: 1e-8

reduction

(string, optional): Specifies the reduction to apply to the output: +\(\mbox{target}*\log(\mbox{target}) - \mbox{target} + 0.5 * \log(2\pi\mbox{target})\).

+
eps
+

(float, optional): Small value to avoid evaluation of \(\log(0)\) when +log_input = FALSE. Default: 1e-8

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$
\mbox{target} \sim \mathrm{Poisson}(\mbox{input}) \\
\mbox{loss}(\mbox{input}, \mbox{target}) = \mbox{input} - \mbox{target} * \log(\mbox{input}) + \log(\mbox{target!})
$$

The last term can be omitted or approximated with Stirling's formula. The approximation is used for target values greater than 1. For targets less than or equal to 1, zeros are added to the loss.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar by default. If reduction is 'none', then \((N, *)\), the same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -loss <- nn_poisson_nll_loss()
    -log_input <- torch_randn(5, 2, requires_grad=TRUE)
    -target <- torch_randn(5, 2)
    -output <- loss(log_input, target)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+loss <- nn_poisson_nll_loss()
+log_input <- torch_randn(5, 2, requires_grad=TRUE)
+target <- torch_randn(5, 2)
+output <- loss(log_input, target)
+output$backward()
+
+}
+
+
+
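With the default log_input = TRUE and reduction = "mean", the reduced loss is just the mean of exp(input) - target * input. A sketch of that check using only tensor methods (the variable names are illustrative):

```r
if (torch_is_installed()) {
  loss_fn <- nn_poisson_nll_loss()  # log_input = TRUE, reduction = "mean"
  log_input <- torch_randn(5, 2)
  target <- torch_randn(5, 2)$exp()  # any non-negative rates
  out <- loss_fn(log_input, target)
  # manual: exp(input) - target * input, averaged over all elements
  manual <- (log_input$exp() - target * log_input)$mean()
}
```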
-
- +
- - + + diff --git a/dev/reference/nn_prelu.html b/dev/reference/nn_prelu.html index 05c1abd78fda434bff5171729f4c08fb874529e6..f4251003d202e9eb12b860f2f9418f4a73c7598d 100644 --- a/dev/reference/nn_prelu.html +++ b/dev/reference/nn_prelu.html @@ -1,48 +1,5 @@ - - - - - - - -PReLU module — nn_prelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -PReLU module — nn_prelu • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
$$
\mbox{PReLU}(x) = \begin{cases}
x, & \mbox{ if } x \geq 0 \\
ax, & \mbox{ otherwise }
\end{cases}
$$

-
nn_prelu(num_parameters = 1, init = 0.25)
+
+
nn_prelu(num_parameters = 1, init = 0.25)
+
-

Arguments

- - - - - - - - - - -
num_parameters

(int): number of \(a\) to learn. +

+

Arguments

+
num_parameters
+

(int): number of \(a\) to learn. Although it takes an int as input, there is only two values are legitimate: -1, or the number of channels at input. Default: 1

init

(float): the initial value of \(a\). Default: 0.25

- -

Details

- +1, or the number of channels at input. Default: 1

+
init
+

(float): the initial value of \(a\). Default: 0.25

+
+
+

Details

Here \(a\) is a learnable parameter. When called without arguments, nn_prelu() uses a single parameter \(a\) across all input channels. If called with nn_prelu(nChannels), a separate \(a\) is used for each input channel.

-

Note

- +
+
+

Note

weight decay should not be used when learning \(a\) for good performance.

Channel dim is the 2nd dim of input. When input has dims < 2, then there is no channel dim and the number of channels = 1.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight (Tensor): the learnable weights of shape (num_parameters).

  • -
- - -

Examples

-
if (torch_is_installed()) {
-m <- nn_prelu()
-input <- torch_randn(2)
-output <- m(input)
-
-}
-
+
  • weight (Tensor): the learnable weights of shape (num_parameters).

  • +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_prelu()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
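When num_parameters matches the number of input channels, each channel gets its own learnable slope. A short sketch (the 3-channel sizes are illustrative):

```r
if (torch_is_installed()) {
  # one learnable slope per channel; the channel dim is dim 2 of the input
  m <- nn_prelu(num_parameters = 3)
  input <- torch_randn(4, 3, 8)
  output <- m(input)
  m$weight  # 3 slopes, each initialised to init = 0.25
}
```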
-
- +
- - + + diff --git a/dev/reference/nn_relu.html b/dev/reference/nn_relu.html index 93595beb2680a528c8f6220fda18e8357df18bb5..f89cb98391458e2a0ea34ff13363337707607f60 100644 --- a/dev/reference/nn_relu.html +++ b/dev/reference/nn_relu.html @@ -1,80 +1,19 @@ - - - - - - - -ReLU module — nn_relu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ReLU module — nn_relu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,65 +113,59 @@ $$\mbox{ReLU}(x) = (x)^+ = \max(0, x)$$" /> $$\mbox{ReLU}(x) = (x)^+ = \max(0, x)$$

-
nn_relu(inplace = FALSE)
- -

Arguments

- - - - - - -
inplace

can optionally do the operation in-place. Default: FALSE

- -

Shape

+
+
nn_relu(inplace = FALSE)
+
+
+

Arguments

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_relu()
    -input <- torch_randn(2)
    -m(input)
    -
    -}
    -#> torch_tensor
    -#>  1.0428
    -#>  0.0000
    -#> [ CPUFloatType{2} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_relu()
+input <- torch_randn(2)
+m(input)
+
+}
+#> torch_tensor
+#>  0
+#>  0
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_relu6.html b/dev/reference/nn_relu6.html index be5c0ed345ad77aa9640af2e03333b1facb94b49..6ea3b19270ea83007739f41bcb5117cce64ac57a 100644 --- a/dev/reference/nn_relu6.html +++ b/dev/reference/nn_relu6.html @@ -1,79 +1,18 @@ - - - - - - - -ReLu6 module — nn_relu6 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -ReLu6 module — nn_relu6 • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,66 +111,61 @@

Applies the element-wise function:

-
nn_relu6(inplace = FALSE)
- -

Arguments

- - - - - - -
inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_relu6(inplace = FALSE)
+
+
+

Arguments

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{ReLU6}(x) = \min(\max(0,x), 6) $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_relu6()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_relu6()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_rnn.html b/dev/reference/nn_rnn.html index 20bbee410a1a7409fbd3156f96e7e12adb196d30..6051104bebdfce5618cd7d57b2fd0b60a5fe5511 100644 --- a/dev/reference/nn_rnn.html +++ b/dev/reference/nn_rnn.html @@ -1,80 +1,19 @@ - - - - - - - -RNN module — nn_rnn • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -RNN module — nn_rnn • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,69 +113,51 @@ to an input sequence." /> to an input sequence.

-
nn_rnn(
-  input_size,
-  hidden_size,
-  num_layers = 1,
-  nonlinearity = NULL,
-  bias = TRUE,
-  batch_first = FALSE,
-  dropout = 0,
-  bidirectional = FALSE,
-  ...
-)
+
+
nn_rnn(
+  input_size,
+  hidden_size,
+  num_layers = 1,
+  nonlinearity = NULL,
+  bias = TRUE,
+  batch_first = FALSE,
+  dropout = 0,
+  bidirectional = FALSE,
+  ...
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input_size

The number of expected features in the input x

hidden_size

The number of features in the hidden state h

num_layers

Number of recurrent layers. E.g., setting num_layers=2 +

+

Arguments

+
input_size
+

The number of expected features in the input x

+
hidden_size
+

The number of features in the hidden state h

+
num_layers
+

Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and -computing the final results. Default: 1

nonlinearity

The non-linearity to use. Can be either 'tanh' or -'relu'. Default: 'tanh'

bias

If FALSE, then the layer does not use bias weights b_ih and -b_hh. Default: TRUE

batch_first

If TRUE, then the input and output tensors are provided -as (batch, seq, feature). Default: FALSE

dropout

If non-zero, introduces a Dropout layer on the outputs of each +computing the final results. Default: 1

+
nonlinearity
+

The non-linearity to use. Can be either 'tanh' or +'relu'. Default: 'tanh'

+
bias
+

If FALSE, then the layer does not use bias weights b_ih and +b_hh. Default: TRUE

+
batch_first
+

If TRUE, then the input and output tensors are provided +as (batch, seq, feature). Default: FALSE

+
dropout
+

If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to -dropout. Default: 0

bidirectional

If TRUE, becomes a bidirectional RNN. Default: FALSE

...

other arguments that can be passed to the super class.

- -

Details

- +dropout. Default: 0

+
bidirectional
+

If TRUE, becomes a bidirectional RNN. Default: FALSE

+
...
+

other arguments that can be passed to the super class.

+
+
+

Details

For each element in the input sequence, each layer computes the following function:

$$
h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})
$$ where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then \(\mbox{ReLU}\) is used instead of \(\tanh\).

-

Inputs

- +
+
+

Inputs

-
    -
  • input of shape (seq_len, batch, input_size): tensor containing the features +

    • input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence.

    • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.

    • -
    - -

    Outputs

    - +
+
+

Outputs

-
    -
  • output of shape (seq_len, batch, num_directions * hidden_size): tensor +

    • output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a :class:nn_packed_sequence has been given as the input, the output will also be a packed sequence. @@ -295,14 +197,12 @@ Similarly, the directions can be separated in the packed case.

    • containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n$view(num_layers, num_directions, batch, hidden_size).

      -
    - -

    Shape

    - +
+
+

Shape

-
    -
  • Input1: \((L, N, H_{in})\) tensor containing input features where +

    • Input1: \((L, N, H_{in})\) tensor containing input features where \(H_{in}=\mbox{input\_size}\) and L represents a sequence length.

    • Input2: \((S, N, H_{out})\) tensor containing the initial hidden state for each element in the batch. @@ -312,14 +212,12 @@ If the RNN is bidirectional, num_directions should be 2, else it should be 1.

      Output1: \((L, N, H_{all})\) where \(H_{all}=\mbox{num\_directions} * \mbox{hidden\_size}\)

    • Output2: \((S, N, H_{out})\) tensor containing the next hidden state for each element in the batch

    • -
    - -

    Attributes

    - +
+
+

Attributes

-
    -
  • weight_ih_l[k]: the learnable input-hidden weights of the k-th layer, +

    • weight_ih_l[k]: the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)

    • weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer, @@ -328,114 +226,112 @@ of shape (hidden_size, hidden_size)

    • of shape (hidden_size)

    • bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)

    • -
    - -

    Note

    - +
+
+

Note

All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\mbox{hidden\_size}}\)

+
-

Examples

-
if (torch_is_installed()) {
-rnn <- nn_rnn(10, 20, 2)
-input <- torch_randn(5, 3, 10)
-h0 <- torch_randn(2, 3, 20)
-rnn(input, h0)
-
-}
-#> [[1]]
-#> torch_tensor
-#> (1,.,.) = 
-#>  Columns 1 to 9  0.4745 -0.1859  0.1777 -0.1858  0.5894  0.2637 -0.6385 -0.7097 -0.7912
-#>   0.4220  0.6160 -0.5404  0.7711  0.4409  0.9132 -0.4366  0.4381 -0.0477
-#>   0.4044 -0.0396 -0.1762 -0.0959  0.3090  0.5827 -0.2244  0.5444 -0.2442
-#> 
-#> Columns 10 to 18  0.6514  0.0520 -0.7184 -0.0654  0.4755  0.1592 -0.3316 -0.0359 -0.0747
-#>  -0.5912 -0.6414  0.5730 -0.5611  0.3446  0.3079  0.4660 -0.5513 -0.7317
-#>  -0.4048 -0.4203  0.0843  0.0599  0.5302 -0.1320  0.7580  0.5875  0.6088
-#> 
-#> Columns 19 to 20 -0.0469  0.7087
-#>   0.5845 -0.0100
-#>  -0.6411 -0.8461
-#> 
-#> (2,.,.) = 
-#>  Columns 1 to 9  0.4009  0.2581 -0.0894 -0.1635  0.4117 -0.2862 -0.3065  0.4298 -0.1099
-#>  -0.1908  0.7495  0.2457  0.2087  0.5420 -0.3270 -0.6689  0.2622  0.7032
-#>   0.3058  0.4054 -0.2991 -0.2056  0.0978  0.0599 -0.5669  0.2577  0.3610
-#> 
-#> Columns 10 to 18 -0.2357  0.0171  0.1912 -0.0659 -0.2354  0.0344 -0.4921 -0.7601 -0.3682
-#>  -0.3424  0.4032 -0.0949 -0.3796  0.7319 -0.5674 -0.1728  0.2599 -0.0181
-#>  -0.4019  0.2927  0.2896 -0.6814  0.6901  0.4075  0.0290  0.4181  0.2707
-#> 
-#> Columns 19 to 20  0.7225  0.2856
-#>  -0.2969 -0.0214
-#>  -0.6084 -0.0990
-#> 
-#> (3,.,.) = 
-#>  Columns 1 to 9  0.1321 -0.0842  0.0630  0.0963  0.6189  0.4216 -0.1391  0.1832  0.0561
-#>   0.2111  0.4532 -0.3053  0.1982  0.5539  0.3073 -0.7156  0.4295 -0.1437
-#>   0.1695  0.1270 -0.0628 -0.4363  0.6908  0.2328 -0.4889  0.4444 -0.0808
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{5,3,20} ][ grad_fn = <StackBackward> ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> (1,.,.) = 
-#>  Columns 1 to 9 -0.1138 -0.2225  0.0216 -0.0199  0.0298  0.0759 -0.4418  0.1248  0.3882
-#>  -0.3342  0.0456  0.1421  0.2609  0.3095 -0.1813  0.0032  0.2931  0.0233
-#>  -0.2491 -0.0110  0.1530  0.1752 -0.5961  0.4376  0.7177 -0.6412 -0.4838
-#> 
-#> Columns 10 to 18  0.1736 -0.0666 -0.2277 -0.7187 -0.5891  0.4111 -0.1654 -0.1506 -0.1801
-#>  -0.6499 -0.7127  0.6249  0.0295  0.4223  0.6072  0.0140 -0.2525 -0.6280
-#>  -0.1540  0.4431 -0.8291 -0.8185 -0.5821 -0.3455 -0.0026 -0.2175 -0.2324
-#> 
-#> Columns 19 to 20  0.6187 -0.0353
-#>  -0.5784  0.3641
-#>   0.0805 -0.6822
-#> 
-#> (2,.,.) = 
-#>  Columns 1 to 9  0.2262  0.1554 -0.0319 -0.2820  0.7412  0.6688 -0.3427  0.3909 -0.1387
-#>   0.2347  0.1486  0.2258 -0.1337  0.6556 -0.3578 -0.5898  0.3473 -0.4774
-#>   0.1207  0.4732 -0.1367  0.1095  0.5994  0.3849 -0.5638 -0.0245  0.3990
-#> 
-#> Columns 10 to 18  0.1378 -0.0556 -0.1344 -0.0664  0.3107 -0.1199  0.0441 -0.1585 -0.3549
-#>  -0.4922  0.1313 -0.1017 -0.1081  0.5549 -0.3796 -0.1600  0.1735 -0.7019
-#>  -0.0577  0.1594 -0.1253 -0.2464  0.5878  0.1730 -0.1709 -0.2612  0.0503
-#> 
-#> Columns 19 to 20  0.4534 -0.3872
-#>   0.2702 -0.3147
-#>   0.1245 -0.1123
-#> [ CPUFloatType{2,3,20} ][ grad_fn = <StackBackward> ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+rnn <- nn_rnn(10, 20, 2)
+input <- torch_randn(5, 3, 10)
+h0 <- torch_randn(2, 3, 20)
+rnn(input, h0)
+
+}
+#> [[1]]
+#> torch_tensor
+#> (1,.,.) = 
+#>  Columns 1 to 9  0.8151 -0.9126  0.7925  0.4162  0.8779  0.4135  0.6306  0.2790 -0.0684
+#>  -0.2975 -0.6975  0.0221  0.6586  0.1209  0.1307  0.3097 -0.0856  0.6346
+#>  -0.4389  0.4784 -0.5604 -0.5076 -0.1134 -0.7180 -0.3771  0.0440 -0.1945
+#> 
+#> Columns 10 to 18 -0.3559  0.7532  0.1535 -0.1897 -0.0318  0.0660 -0.4554 -0.7305 -0.4830
+#>   0.3291  0.5938 -0.1580 -0.0256  0.7757  0.5597  0.2496 -0.8533 -0.6728
+#>   0.3867  0.3826  0.6037  0.2194 -0.8727 -0.2273 -0.0487  0.4780  0.0478
+#> 
+#> Columns 19 to 20  0.1453  0.3385
+#>   0.4557  0.2498
+#>  -0.8115 -0.7927
+#> 
+#> (2,.,.) = 
+#>  Columns 1 to 9 -0.3547  0.3358 -0.7325  0.0840 -0.1946  0.0746 -0.0018 -0.1379  0.6122
+#>  -0.3111  0.0079 -0.3550  0.3159  0.0701 -0.3570 -0.0675 -0.3477 -0.1488
+#>  -0.1756  0.0801  0.4807  0.4962  0.1773 -0.4569 -0.3086  0.8569  0.1023
+#> 
+#> Columns 10 to 18  0.5539  0.4786  0.6005 -0.0331 -0.3642  0.7405  0.2720 -0.4793 -0.1393
+#>   0.5237  0.6290  0.3903  0.4339 -0.0550 -0.1318  0.3963 -0.2887 -0.2035
+#>  -0.0897  0.6944 -0.4864 -0.5382 -0.2428  0.3736 -0.3641 -0.3070 -0.3879
+#> 
+#> Columns 19 to 20 -0.0526 -0.5596
+#>   0.0423 -0.6651
+#>  -0.3033  0.5023
+#> 
+#> (3,.,.) = 
+#>  Columns 1 to 9 -0.5135 -0.5440  0.1840 -0.3275  0.5917  0.1246 -0.2307  0.2042  0.2278
+#>  -0.2464 -0.0492  0.1699  0.4248  0.0244 -0.3513 -0.3097  0.4564  0.6540
+#>  -0.6782  0.1460 -0.5084  0.1322  0.2505 -0.2866 -0.1240 -0.1537  0.0573
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{5,3,20} ][ grad_fn = <StackBackward> ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> (1,.,.) = 
+#>  Columns 1 to 9  0.1670 -0.4890  0.5699 -0.3973 -0.4142  0.3482  0.0137 -0.0430  0.1137
+#>  -0.6782 -0.7114 -0.5058  0.0504 -0.2827  0.3313 -0.4248  0.6271 -0.4972
+#>  -0.2978  0.0077  0.7071  0.2982 -0.6884  0.2393 -0.1173 -0.0212 -0.4838
+#> 
+#> Columns 10 to 18  0.1074 -0.2342  0.3958 -0.0304 -0.4835 -0.3942 -0.4408  0.5227  0.3777
+#>   0.3242 -0.0638  0.6458  0.4217 -0.0239  0.5016  0.1291 -0.0518 -0.2399
+#>   0.2497 -0.3841  0.1687  0.0261  0.0378  0.2633  0.6329 -0.0934 -0.3909
+#> 
+#> Columns 19 to 20  0.2282  0.1931
+#>  -0.2832  0.0805
+#>  -0.4288 -0.3152
+#> 
+#> (2,.,.) = 
+#>  Columns 1 to 9 -0.1940 -0.0990  0.3328 -0.1539  0.3886 -0.3395 -0.1728  0.4307  0.2911
+#>  -0.4875 -0.2433 -0.0134 -0.0179  0.3017 -0.1317 -0.3105 -0.0153  0.0239
+#>  -0.1129  0.1023 -0.4527  0.4459  0.0685 -0.1138 -0.0374 -0.0025 -0.1651
+#> 
+#> Columns 10 to 18 -0.1252  0.3388  0.0032  0.2867 -0.0074 -0.0060 -0.4365 -0.1213 -0.4371
+#>   0.0588  0.3093  0.3018 -0.2821 -0.3577  0.1485 -0.0840 -0.3334 -0.3366
+#>   0.3933  0.4992  0.2778 -0.5162 -0.0348  0.5516  0.1173 -0.1657 -0.3380
+#> 
+#> Columns 19 to 20 -0.1512  0.1389
+#>   0.1232  0.1467
+#>  -0.1274 -0.1682
+#> [ CPUFloatType{2,3,20} ][ grad_fn = <StackBackward> ]
+#> 
+
+
+
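With batch_first = TRUE the input is passed as (batch, seq, feature) instead. A sketch of the resulting shapes (h0 is omitted here, so the initial hidden state defaults to zeros as documented):

```r
if (torch_is_installed()) {
  rnn <- nn_rnn(10, 20, num_layers = 2, batch_first = TRUE)
  input <- torch_randn(3, 5, 10)  # (batch = 3, seq_len = 5, input_size = 10)
  out <- rnn(input)
  out[[1]]  # per-step outputs, shape (3, 5, 20)
  out[[2]]  # final hidden state per layer, shape (2, 3, 20)
}
```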
-
- +
- - + + diff --git a/dev/reference/nn_rrelu.html b/dev/reference/nn_rrelu.html index 0df4d777705be3c0d0d92ad3279884e5ace20070..79caf42e095da66098bd5d1fc54eb10de1fec01e 100644 --- a/dev/reference/nn_rrelu.html +++ b/dev/reference/nn_rrelu.html @@ -1,80 +1,19 @@ - - - - - - - -RReLU module — nn_rrelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -RReLU module — nn_rrelu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,27 +113,21 @@ as described in the paper:" /> as described in the paper:

-
nn_rrelu(lower = 1/8, upper = 1/3, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
lower

lower bound of the uniform distribution. Default: \(\frac{1}{8}\)

upper

upper bound of the uniform distribution. Default: \(\frac{1}{3}\)

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_rrelu(lower = 1/8, upper = 1/3, inplace = FALSE)
+
+
+

Arguments

+
lower
+

lower bound of the uniform distribution. Default: \(\frac{1}{8}\)

+
upper
+

upper bound of the uniform distribution. Default: \(\frac{1}{3}\)

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

Empirical Evaluation of Rectified Activations in Convolutional Network.

The function is defined as:

$$
\mbox{RReLU}(x) = \begin{cases}
x, & \mbox{ if } x \geq 0 \\
ax, & \mbox{ otherwise }
\end{cases}
$$

where \(a\) is randomly sampled from uniform distribution \(\mathcal{U}(\mbox{lower}, \mbox{upper})\). See: https://arxiv.org/pdf/1505.00853.pdf

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_rrelu(0.1, 0.3)
    -input <- torch_randn(2)
    -m(input)
    -
    -}
    -#> torch_tensor
    -#> -0.2211
    -#> -0.1064
    -#> [ CPUFloatType{2} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_rrelu(0.1, 0.3)
+input <- torch_randn(2)
+m(input)
+
+}
+#> torch_tensor
+#> 0.01 *
+#> -6.3835
+#> -0.7508
+#> [ CPUFloatType{2} ]
+
+
+
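As in PyTorch, the random slope is only sampled during training; in eval mode a fixed slope of (lower + upper) / 2 is used instead. This training/eval distinction is assumed to carry over to the R module:

```r
if (torch_is_installed()) {
  m <- nn_rrelu(0.1, 0.3)
  x <- torch_tensor(c(-1, 1))
  m$train()
  m(x)  # negative inputs scaled by a slope drawn from U(0.1, 0.3) each call
  m$eval()
  m(x)  # assumed deterministic: slope fixed at (0.1 + 0.3) / 2 = 0.2
}
```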
-
- +
- - + + diff --git a/dev/reference/nn_selu.html b/dev/reference/nn_selu.html index b533fdfce9755b6a53404b9839c6209455632d6d..ead42c6b23c3d3c3772d7ce2391345c2fd650257 100644 --- a/dev/reference/nn_selu.html +++ b/dev/reference/nn_selu.html @@ -1,79 +1,18 @@ - - - - - - - -SELU module — nn_selu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -SELU module — nn_selu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,70 +111,65 @@

Applied element-wise, as:

-
nn_selu(inplace = FALSE)
- -

Arguments

- - - - - - -
inplace

(bool, optional): can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_selu(inplace = FALSE)
+
+
+

Arguments

+
inplace
+

(bool, optional): can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{SELU}(x) = \mbox{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) $$

with \(\alpha = 1.6732632423543772848170429916717\) and \(\mbox{scale} = 1.0507009873554804934193349852946\).

More details can be found in the paper -Self-Normalizing Neural Networks.

-

Shape

- +Self-Normalizing Neural Networks.

+
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_selu()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_selu()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_sequential.html b/dev/reference/nn_sequential.html index 894bafeee814ca77a537d13038f49d14c1bb09bd..9b9f639950a20585eac9ddcc37e9787e8187415d 100644 --- a/dev/reference/nn_sequential.html +++ b/dev/reference/nn_sequential.html @@ -1,81 +1,20 @@ - - - - - - - -A sequential container — nn_sequential • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -A sequential container — nn_sequential • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,57 +115,53 @@ Modules will be added to it in the order they are passed in the constructor. See examples.

-
nn_sequential(...)
- -

Arguments

- - - - - - -
...

sequence of modules to be added

- - -

Examples

-
if (torch_is_installed()) {
-
-model <- nn_sequential(
-  nn_conv2d(1, 20, 5),
-  nn_relu(),
-  nn_conv2d(20, 64, 5),
-  nn_relu()
-)
-input <- torch_randn(32, 1, 28, 28)
-output <- model(input)
-
-}
-
+
+
nn_sequential(...)
+
+ +
+

Arguments

+
...
+

sequence of modules to be added

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+model <- nn_sequential(
+  nn_conv2d(1, 20, 5),
+  nn_relu(),
+  nn_conv2d(20, 64, 5),
+  nn_relu()
+)
+input <- torch_randn(32, 1, 28, 28)
+output <- model(input)
+
+}
+
+
+
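The same container works for any chain of modules, not just convolutions. A small fully-connected sketch (the layer sizes are illustrative):

```r
if (torch_is_installed()) {
  mlp <- nn_sequential(
    nn_linear(10, 32),
    nn_relu(),
    nn_linear(32, 2)
  )
  output <- mlp(torch_randn(4, 10))  # shape (4, 2)
}
```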
-
- +
- - + + diff --git a/dev/reference/nn_sigmoid.html b/dev/reference/nn_sigmoid.html index 925ca1d28756013f472c8ed3a11b1df249380d60..558486d4cf56797a1700245d67215b2dfc154561 100644 --- a/dev/reference/nn_sigmoid.html +++ b/dev/reference/nn_sigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Sigmoid module — nn_sigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sigmoid module — nn_sigmoid • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,58 +111,56 @@

Applies the element-wise function:

-
nn_sigmoid()
- - -

Details

+
+
nn_sigmoid()
+
+
+

Details

$$ \mbox{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)} $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_sigmoid()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_sigmoid()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_smooth_l1_loss.html b/dev/reference/nn_smooth_l1_loss.html index 212c105203a1d45e0c1adf5d0605aefc5cf1dccd..be4d06d68c866de8a0fecfeac045f6ccff4a0ded 100644 --- a/dev/reference/nn_smooth_l1_loss.html +++ b/dev/reference/nn_smooth_l1_loss.html @@ -1,83 +1,22 @@ - - - - - - - -Smooth L1 loss — nn_smooth_l1_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Smooth L1 loss — nn_smooth_l1_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
It is less sensitive to outliers than the MSE loss and in some cases prevents exploding gradients (e.g. see the Fast R-CNN paper by Ross Girshick). Also known as the Huber loss:

-
nn_smooth_l1_loss(reduction = "mean")
+
+
nn_smooth_l1_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$ \mbox{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i} $$

where \(z_{i}\) is given by: $$
z_{i} = \begin{cases}
0.5 (x_{i} - y_{i})^2, & \mbox{if } |x_{i} - y_{i}| < 1 \\
|x_{i} - y_{i}| - 0.5, & \mbox{otherwise }
\end{cases}
$$

\(x\) and \(y\) have arbitrary shapes with a total of \(n\) elements each; the sum operation still operates over all the elements, and divides by \(n\). The division by \(n\) can be avoided if one sets reduction = 'sum'.
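A quick worked instance of the Huber piecewise rule (quadratic for errors below 1, linear above), with reduction = "none" so the per-element values are visible:

```r
if (torch_is_installed()) {
  loss_fn <- nn_smooth_l1_loss(reduction = "none")
  x <- torch_tensor(c(0.3, 2.0))
  y <- torch_tensor(c(0.0, 0.0))
  loss_fn(x, y)
  # |0.3| < 1, so 0.5 * 0.3^2 = 0.045; |2.0| >= 1, so 2.0 - 0.5 = 1.5
}
```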

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) means, any number of additional +

    • Input: \((N, *)\) where \(*\) means, any number of additional dimensions

    • Target: \((N, *)\), same shape as the input

    • Output: scalar. If reduction is 'none', then \((N, *)\), same shape as the input

    • -
    - +
+
-
- +
- - + + diff --git a/dev/reference/nn_soft_margin_loss.html b/dev/reference/nn_soft_margin_loss.html index c968d85ed201f9530a22c622e9fa4aea51942c31..892eaadd3f8b453259f07fd91b4a40294c2578c6 100644 --- a/dev/reference/nn_soft_margin_loss.html +++ b/dev/reference/nn_soft_margin_loss.html @@ -1,81 +1,20 @@ - - - - - - - -Soft margin loss — nn_soft_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Soft margin loss — nn_soft_margin_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,64 +115,57 @@ logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).

-
nn_soft_margin_loss(reduction = "mean")
+
+
nn_soft_margin_loss(reduction = "mean")
+
-

Arguments

- - - - - - -
reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

$$ \mbox{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\mbox{x.nelement}()} $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\) where \(*\) means, any number of additional +

    • Input: \((*)\) where \(*\) means, any number of additional dimensions

    • Target: \((*)\), same shape as the input

    • Output: scalar. If reduction is 'none', then same shape as the input

    • -
    - +
+
-
- +
- - + + diff --git a/dev/reference/nn_softmax.html b/dev/reference/nn_softmax.html index 03e6904351b3144fbc9742fd51e5baa24fc9e587..8a4b8b8a8f84e41f3bcdf9687ddc7bc2190ddf29 100644 --- a/dev/reference/nn_softmax.html +++ b/dev/reference/nn_softmax.html @@ -1,82 +1,21 @@ - - - - - - - -Softmax module — nn_softmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softmax module — nn_softmax • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,79 +117,76 @@ lie in the range [0,1] and sum to 1. Softmax is defined as:

-
nn_softmax(dim)
- -

Arguments

- - - - - - -
dim

(int): A dimension along which Softmax will be computed (so every slice -along dim will sum to 1).

- -

Value

+
+
nn_softmax(dim)
+
+
+

Arguments

+
dim
+

(int): A dimension along which Softmax will be computed (so every slice +along dim will sum to 1).

+
+
+

Value

: a Tensor of the same dimension and shape as the input with values in the range [0, 1]

-

Details

- +
+
+

Details

$$ \mbox{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$

When the input Tensor is a sparse tensor, the unspecified values are treated as -Inf.

-

Note

- +
+
+

Note

This module doesn't work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use LogSoftmax instead (it's faster and has better numerical properties).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\) where * means, any number of additional +

    • Input: \((*)\) where * means, any number of additional dimensions

    • Output: \((*)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softmax(1)
    -input <- torch_randn(2, 3)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softmax(1)
+input <- torch_randn(2, 3)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_softmax2d.html b/dev/reference/nn_softmax2d.html index 340f2f4fd70e8299a78439dc68b3a77f29d8e86c..615481e30297f59ba2d578e9dab374ef359f2b4a 100644 --- a/dev/reference/nn_softmax2d.html +++ b/dev/reference/nn_softmax2d.html @@ -1,81 +1,20 @@ - - - - - - - -Softmax2d module — nn_softmax2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softmax2d module — nn_softmax2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,56 +115,54 @@ When given an image of Channels x Height x Width, it will apply Softmax to each location \((Channels, h_i, w_j)\)

-
nn_softmax2d()
- - -

Value

+
+
nn_softmax2d()
+
+
+

Value

a Tensor of the same dimension and shape as the input with values in the range [0, 1]

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, C, H, W)\)

  • +
    • Input: \((N, C, H, W)\)

    • Output: \((N, C, H, W)\) (same shape as input)

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softmax2d()
    -input <- torch_randn(2, 3, 12, 13)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softmax2d()
+input <- torch_randn(2, 3, 12, 13)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_softmin.html b/dev/reference/nn_softmin.html index ce9c241d9f530559d431c64a7ced3b56db15d84a..6c0f1cf84c1268f111e9598ee15bbb7ce86a53b7 100644 --- a/dev/reference/nn_softmin.html +++ b/dev/reference/nn_softmin.html @@ -1,82 +1,21 @@ - - - - - - - -Softmin — nn_softmin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softmin — nn_softmin • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,71 +117,67 @@ lie in the range [0, 1] and sum to 1. Softmin is defined as:

-
nn_softmin(dim)
- -

Arguments

- - - - - - -
dim

(int): A dimension along which Softmin will be computed (so every slice -along dim will sum to 1).

- -

Value

+
+
nn_softmin(dim)
+
+
+

Arguments

+
dim
+

(int): A dimension along which Softmin will be computed (so every slice +along dim will sum to 1).

+
+
+

Value

a Tensor of the same dimension and shape as the input, with values in the range [0, 1].

-

Details

- +
+
+

Details

$$ \mbox{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)} $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((*)\) where * means, any number of additional +

    • Input: \((*)\) where * means, any number of additional dimensions

    • Output: \((*)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softmin(dim = 1)
    -input <- torch_randn(2, 2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softmin(dim = 1)
+input <- torch_randn(2, 2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_softplus.html b/dev/reference/nn_softplus.html index a1c99c9446d2c72231bd4eaf83a866395841f0fa..f63ce14242b6ebbc0997d8abd1debed20ad6a9b5 100644 --- a/dev/reference/nn_softplus.html +++ b/dev/reference/nn_softplus.html @@ -1,82 +1,21 @@ - - - - - - - -Softplus module — nn_softplus • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softplus module — nn_softplus • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,71 +117,64 @@ $$ $$

-
nn_softplus(beta = 1, threshold = 20)
- -

Arguments

- - - - - - - - - - -
beta

the \(\beta\) value for the Softplus formulation. Default: 1

threshold

values above this revert to a linear function. Default: 20

- -

Details

+
+
nn_softplus(beta = 1, threshold = 20)
+
+
+

Arguments

+
beta
+

the \(\beta\) value for the Softplus formulation. Default: 1

+
threshold
+

values above this revert to a linear function. Default: 20

+
+
+

Details

SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive. For numerical stability the implementation reverts to the linear function when \(input \times \beta > threshold\).

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softplus()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softplus()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_softshrink.html b/dev/reference/nn_softshrink.html index 7f781b92489088dbfd8e3386f8bbfac74aeb25f1..42c47bf2647f8f2aa51b96f33df089b653c62df4 100644 --- a/dev/reference/nn_softshrink.html +++ b/dev/reference/nn_softshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Softshrink module — nn_softshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softshrink module — nn_softshrink • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Applies the soft shrinkage function elementwise:

-
nn_softshrink(lambd = 0.5)
- -

Arguments

- - - - - - -
lambd

the \(\lambda\) (must be no less than zero) value for the Softshrink formulation. Default: 0.5

- -

Details

+
+
nn_softshrink(lambd = 0.5)
+
+
+

Arguments

+
lambd
+

the \(\lambda\) (must be no less than zero) value for the Softshrink formulation. Default: 0.5

+
+
+

Details

$$ \mbox{SoftShrinkage}(x) = \left\{ \begin{array}{ll} @@ -211,50 +131,47 @@ x + \lambda, & \mbox{ if } x < -\lambda \\ \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softshrink()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softshrink()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_softsign.html b/dev/reference/nn_softsign.html index a7a9eed9872845d9f9ef97a494adbff9e5552878..a064535496d35e086d4fedc02088fb31effb26f4 100644 --- a/dev/reference/nn_softsign.html +++ b/dev/reference/nn_softsign.html @@ -1,82 +1,21 @@ - - - - - - - -Softsign module — nn_softsign • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softsign module — nn_softsign • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,53 +117,50 @@ $$ $$

-
nn_softsign()
- - -

Shape

+
+
nn_softsign()
+
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_softsign()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_softsign()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_tanh.html b/dev/reference/nn_tanh.html index 34a0377798d9293ce1621eb6efb4bb911edfb3da..0901fe27d7e53c876ac616369500d15f02e2e719 100644 --- a/dev/reference/nn_tanh.html +++ b/dev/reference/nn_tanh.html @@ -1,79 +1,18 @@ - - - - - - - -Tanh module — nn_tanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tanh module — nn_tanh • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,58 +111,56 @@

Applies the element-wise function:

-
nn_tanh()
- - -

Details

+
+
nn_tanh()
+
+
+

Details

$$ \mbox{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)} $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_tanh()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_tanh()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_tanhshrink.html b/dev/reference/nn_tanhshrink.html index 8fc690cd47c092fc393f7b116faafe0fb5509514..55322ad69ae28e87da9f6f0500d85ca608a5072d 100644 --- a/dev/reference/nn_tanhshrink.html +++ b/dev/reference/nn_tanhshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Tanhshrink module — nn_tanhshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tanhshrink module — nn_tanhshrink • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,58 +111,56 @@

Applies the element-wise function:

-
nn_tanhshrink()
- - -

Details

+
+
nn_tanhshrink()
+
+
+

Details

$$ \mbox{Tanhshrink}(x) = x - \tanh(x) $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_tanhshrink()
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_tanhshrink()
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_threshold.html b/dev/reference/nn_threshold.html index 1331bc8af542d8e9370505ed90ff43eae3249625..7c7c4977e482a1aac3e75c1f5205194566ad7e6c 100644 --- a/dev/reference/nn_threshold.html +++ b/dev/reference/nn_threshold.html @@ -1,79 +1,18 @@ - - - - - - - -Threshoold module — nn_threshold • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Threshoold module — nn_threshold • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Thresholds each element of the input Tensor.

-
nn_threshold(threshold, value, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
threshold

The value to threshold at

value

The value to replace with

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nn_threshold(threshold, value, inplace = FALSE)
+
+
+

Arguments

+
threshold
+

The value to threshold at

+
value
+

The value to replace with

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

Threshold is defined as: $$ y = @@ -219,50 +135,47 @@ $$ \end{array} \right. $$

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where * means, any number of additional +

    • Input: \((N, *)\) where * means, any number of additional dimensions

    • Output: \((N, *)\), same shape as the input

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -m <- nn_threshold(0.1, 20)
    -input <- torch_randn(2)
    -output <- m(input)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+m <- nn_threshold(0.1, 20)
+input <- torch_randn(2)
+output <- m(input)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_triplet_margin_loss.html b/dev/reference/nn_triplet_margin_loss.html index 827a757b28daf4a42ea8718cad7118a207f22f13..3e8261b9ce1f1d1da6e0cefd08722b695751ade6 100644 --- a/dev/reference/nn_triplet_margin_loss.html +++ b/dev/reference/nn_triplet_margin_loss.html @@ -1,83 +1,22 @@ - - - - - - - -Triplet margin loss — nn_triplet_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triplet margin loss — nn_triplet_margin_loss • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,50 +119,40 @@ is composed by a, p and n (i.e., an \((N, D)\).

-
nn_triplet_margin_loss(
-  margin = 1,
-  p = 2,
-  eps = 1e-06,
-  swap = FALSE,
-  reduction = "mean"
-)
+
+
nn_triplet_margin_loss(
+  margin = 1,
+  p = 2,
+  eps = 1e-06,
+  swap = FALSE,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
margin

(float, optional): Default: \(1\).

p

(int, optional): The norm degree for pairwise distance. Default: \(2\).

eps

constant to avoid NaN's

swap

(bool, optional): The distance swap is described in detail in the paper -Learning shallow convolutional feature descriptors with triplet losses by -V. Balntas, E. Riba et al. Default: FALSE.

reduction

(string, optional): Specifies the reduction to apply to the output: +

+

Arguments

+
margin
+

(float, optional): Default: \(1\).

+
p
+

(int, optional): The norm degree for pairwise distance. Default: \(2\).

+
eps
+

constant to avoid NaN's

+
swap
+

(bool, optional): The distance swap is described in detail in the paper +Learning shallow convolutional feature descriptors with triplet losses by +V. Balntas, E. Riba et al. Default: FALSE.

+
reduction
+

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, -specifying either of those two args will override reduction. Default: 'mean'

- -

Details

- +specifying either of those two args will override reduction. Default: 'mean'

+
+
+

Details

The distance swap is described in detail in the paper -Learning shallow convolutional feature descriptors with triplet losses by +Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.

The loss function for each sample in the mini-batch is:

$$ @@ -250,55 +162,52 @@ $$

$$ d(x_i, y_i) = | {\bf x}_i - {\bf y}_i |_p $$

-

See also nn_triplet_margin_with_distance_loss(), which computes the +

See also nn_triplet_margin_with_distance_loss(), which computes the triplet margin loss for input tensors using a custom distance function.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, D)\) where \(D\) is the vector dimension.

  • +
    • Input: \((N, D)\) where \(D\) is the vector dimension.

    • Output: A Tensor of shape \((N)\) if reduction is 'none', or a scalar otherwise.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -triplet_loss <- nn_triplet_margin_loss(margin = 1, p = 2)
    -anchor <- torch_randn(100, 128, requires_grad=TRUE)
    -positive <- torch_randn(100, 128, requires_grad=TRUE)
    -negative <- torch_randn(100, 128, requires_grad=TRUE)
    -output <- triplet_loss(anchor, positive, negative)
    -output$backward()
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+triplet_loss <- nn_triplet_margin_loss(margin = 1, p = 2)
+anchor <- torch_randn(100, 128, requires_grad=TRUE)
+positive <- torch_randn(100, 128, requires_grad=TRUE)
+negative <- torch_randn(100, 128, requires_grad=TRUE)
+output <- triplet_loss(anchor, positive, negative)
+output$backward()
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_triplet_margin_with_distance_loss.html b/dev/reference/nn_triplet_margin_with_distance_loss.html index 9be53514fa592b43f4e5c6d1db73c4b8f80ce892..2aa02526126051c0e418e796e4655fd76469eb96 100644 --- a/dev/reference/nn_triplet_margin_with_distance_loss.html +++ b/dev/reference/nn_triplet_margin_with_distance_loss.html @@ -1,84 +1,23 @@ - - - - - - - -Triplet margin with distance loss — nn_triplet_margin_with_distance_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triplet margin with distance loss — nn_triplet_margin_with_distance_loss • torch - - - - - - - - + + -
-
- -
- -
+
@@ -199,48 +121,40 @@ between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance").

-
nn_triplet_margin_with_distance_loss(
-  distance_function = NULL,
-  margin = 1,
-  swap = FALSE,
-  reduction = "mean"
-)
+
+
nn_triplet_margin_with_distance_loss(
+  distance_function = NULL,
+  margin = 1,
+  swap = FALSE,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
distance_function

(callable, optional): A nonnegative, real-valued function that +

+

Arguments

+
distance_function
+

(callable, optional): A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, -nn_pairwise_distance() will be used. Default: None

margin

(float, optional): A non-negative margin representing the minimum difference +nn_pairwise_distance() will be used. Default: None

+
margin
+

(float, optional): A non-negative margin representing the minimum difference between the positive and negative distances required for the loss to be 0. Larger margins penalize cases where the negative examples are not distant enough from the -anchors, relative to the positives. Default: \(1\).

swap

(bool, optional): Whether to use the distance swap described in the paper -Learning shallow convolutional feature descriptors with triplet losses by +anchors, relative to the positives. Default: \(1\).

+
swap
+

(bool, optional): Whether to use the distance swap described in the paper +Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. If TRUE, and if the positive example is closer to the negative example than the anchor is, swaps the positive example and the anchor in -the loss computation. Default: FALSE.

reduction

(string, optional): Specifies the (optional) reduction to apply to the output: +the loss computation. Default: FALSE.

+
reduction
+

(string, optional): Specifies the (optional) reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of -elements in the output, 'sum': the output will be summed. Default: 'mean'

- -

Details

- +elements in the output, 'sum': the output will be summed. Default: 'mean'

+
+
+

Details

The unreduced loss (i.e., with reduction set to 'none') can be described as:

$$ @@ -262,83 +176,80 @@ If reduction is not 'none' \mbox{sum}(L), & \mbox{if reduction} = \mbox{`sum'.} \end{array} $$

-

See also nn_triplet_margin_loss(), which computes the triplet +

See also nn_triplet_margin_loss(), which computes the triplet loss for input tensors using the \(l_p\) distance as the distance function.

-

Shape

- +
+
+

Shape

-
    -
  • Input: \((N, *)\) where \(*\) represents any number of additional dimensions +

    • Input: \((N, *)\) where \(*\) represents any number of additional dimensions as supported by the distance function.

    • Output: A Tensor of shape \((N)\) if reduction is 'none', or a scalar otherwise.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -# Initialize embeddings
    -embedding <- nn_embedding(1000, 128)
    -anchor_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
    -positive_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
    -negative_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
    -anchor <- embedding(anchor_ids)
    -positive <- embedding(positive_ids)
    -negative <- embedding(negative_ids)
    -
    -# Built-in Distance Function
    -triplet_loss <- nn_triplet_margin_with_distance_loss(
    -  distance_function=nn_pairwise_distance()
    -)
    -output <- triplet_loss(anchor, positive, negative)
    -
    -# Custom Distance Function
    -l_infinity <- function(x1, x2) {
    -  torch_max(torch_abs(x1 - x2), dim = 1)[[1]]
    -}
    -
    -triplet_loss <- nn_triplet_margin_with_distance_loss(
    -  distance_function=l_infinity, margin=1.5
    -)
    -output <- triplet_loss(anchor, positive, negative)
    -
    -# Custom Distance Function (Lambda)
    -triplet_loss <- nn_triplet_margin_with_distance_loss(
    -  distance_function = function(x, y) {
    -    1 - nnf_cosine_similarity(x, y)
    -  }
    -)
    -
    -output <- triplet_loss(anchor, positive, negative)
    -
    -}
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+# Initialize embeddings
+embedding <- nn_embedding(1000, 128)
+anchor_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
+positive_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
+negative_ids <- torch_randint(1, 1000, 1, dtype = torch_long())
+anchor <- embedding(anchor_ids)
+positive <- embedding(positive_ids)
+negative <- embedding(negative_ids)
+
+# Built-in Distance Function
+triplet_loss <- nn_triplet_margin_with_distance_loss(
+  distance_function=nn_pairwise_distance()
+)
+output <- triplet_loss(anchor, positive, negative)
+
+# Custom Distance Function
+l_infinity <- function(x1, x2) {
+  torch_max(torch_abs(x1 - x2), dim = 1)[[1]]
+}
+
+triplet_loss <- nn_triplet_margin_with_distance_loss(
+  distance_function=l_infinity, margin=1.5
+)
+output <- triplet_loss(anchor, positive, negative)
+
+# Custom Distance Function (Lambda)
+triplet_loss <- nn_triplet_margin_with_distance_loss(
+  distance_function = function(x, y) {
+    1 - nnf_cosine_similarity(x, y)
+  }
+)
+
+output <- triplet_loss(anchor, positive, negative)
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_utils_clip_grad_norm_.html b/dev/reference/nn_utils_clip_grad_norm_.html index 71ae310d7f5cd9399027d9587f873f853c7032f5..ed751d88143339b775951a6700cae56063963c04 100644 --- a/dev/reference/nn_utils_clip_grad_norm_.html +++ b/dev/reference/nn_utils_clip_grad_norm_.html @@ -1,80 +1,19 @@ - - - - - - - -Clips gradient norm of an iterable of parameters. — nn_utils_clip_grad_norm_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Clips gradient norm of an iterable of parameters. — nn_utils_clip_grad_norm_ • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,56 +113,47 @@ concatenated into a single vector. Gradients are modified in-place." /> concatenated into a single vector. Gradients are modified in-place.

-
nn_utils_clip_grad_norm_(parameters, max_norm, norm_type = 2)
- -

Arguments

- - - - - - - - - - - - - - -
parameters

(IterableTensor or Tensor): an iterable of Tensors or a -single Tensor that will have gradients normalized

max_norm

(float or int): max norm of the gradients

norm_type

(float or int): type of the used p-norm. Can be Inf for -infinity norm.

- -

Value

+
+
nn_utils_clip_grad_norm_(parameters, max_norm, norm_type = 2)
+
+
+

Arguments

+
parameters
+

(IterableTensor or Tensor): an iterable of Tensors or a +single Tensor that will have gradients normalized

+
max_norm
+

(float or int): max norm of the gradients

+
norm_type
+

(float or int): type of the used p-norm. Can be Inf for +infinity norm.

+
+
+

Value

Total norm of the parameters (viewed as a single vector).

+
+
-
- +
- - + + diff --git a/dev/reference/nn_utils_clip_grad_value_.html b/dev/reference/nn_utils_clip_grad_value_.html index e06c766388f27a6c50d49d602e5e0b282eacf7c8..01de7d2203cf79e1c9407d7dcf8ffba3e65c777d 100644 --- a/dev/reference/nn_utils_clip_grad_value_.html +++ b/dev/reference/nn_utils_clip_grad_value_.html @@ -1,79 +1,18 @@ - - - - - - - -Clips gradient of an iterable of parameters at specified value. — nn_utils_clip_grad_value_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Clips gradient of an iterable of parameters at specified value. — nn_utils_clip_grad_value_ • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,45 @@

Gradients are modified in-place.

-
nn_utils_clip_grad_value_(parameters, clip_value)
- -

Arguments

- - - - - - - - - - -
parameters

(Iterable(Tensor) or Tensor): an iterable of Tensors or a -single Tensor that will have gradients normalized

clip_value

(float or int): maximum allowed value of the gradients.

- -

Details

+
+
nn_utils_clip_grad_value_(parameters, clip_value)
+
+
+

Arguments

+
parameters
+

(Iterable(Tensor) or Tensor): an iterable of Tensors or a +single Tensor that will have gradients normalized

+
clip_value
+

(float or int): maximum allowed value of the gradients.

+
+
+

Details

The gradients are clipped in the range \(\left[\mbox{-clip\_value}, \mbox{clip\_value}\right]\)

+
+
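A short illustrative sketch (assumed shapes, not from the generated page): value clipping caps each gradient element independently at `clip_value`, in contrast to `nn_utils_clip_grad_norm_()`, which rescales all gradients by a common factor.

```r
library(torch)

p <- torch_randn(3, requires_grad = TRUE)
(p * 10)$sum()$backward()   # every element of p$grad is 10

# Clamp each gradient element into [-0.5, 0.5], in-place
nn_utils_clip_grad_value_(p, clip_value = 0.5)
# p$grad elements now all lie within [-0.5, 0.5]
```

Because each element is clamped separately, value clipping changes the gradient's direction, whereas norm clipping preserves it.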
-
- +
- - + + diff --git a/dev/reference/nn_utils_rnn_pack_padded_sequence.html b/dev/reference/nn_utils_rnn_pack_padded_sequence.html index 8d8d04c8786e0ba5994eece6e7a5c6b3cae7051e..01fc1dcb9f1b25b6e046c765a5098b84476a39df 100644 --- a/dev/reference/nn_utils_rnn_pack_padded_sequence.html +++ b/dev/reference/nn_utils_rnn_pack_padded_sequence.html @@ -1,82 +1,21 @@ - - - - - - - -Packs a Tensor containing padded sequences of variable length. — nn_utils_rnn_pack_padded_sequence • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Packs a Tensor containing padded sequences of variable length. — nn_utils_rnn_pack_padded_sequence • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,78 +117,69 @@ longest sequence (equal to lengths[1]), B is the batch TRUE, B x T x * input is expected.

-
nn_utils_rnn_pack_padded_sequence(
-  input,
-  lengths,
-  batch_first = FALSE,
-  enforce_sorted = TRUE
-)
+
+
nn_utils_rnn_pack_padded_sequence(
+  input,
+  lengths,
+  batch_first = FALSE,
+  enforce_sorted = TRUE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
input

(Tensor): padded batch of variable length sequences.

lengths

(Tensor): list of sequences lengths of each batch element.

batch_first

(bool, optional): if TRUE, the input is expected in B x T x * -format.

enforce_sorted

(bool, optional): if TRUE, the input is expected to +

+

Arguments

+
input
+

(Tensor): padded batch of variable length sequences.

+
lengths
+

(Tensor): list of sequences lengths of each batch element.

+
batch_first
+

(bool, optional): if TRUE, the input is expected in B x T x * +format.

+
enforce_sorted
+

(bool, optional): if TRUE, the input is expected to contain sequences sorted by length in a decreasing order. If -FALSE, the input will get sorted unconditionally. Default: TRUE.

- -

Value

- +FALSE, the input will get sorted unconditionally. Default: TRUE.

+
+
+

Value

a PackedSequence object

-

Details

- +
+
+

Details

For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted is TRUE, the sequences should be sorted by length in a decreasing order, i.e. input[,1] should be the longest sequence, and input[,B] the shortest one. enforce_sorted = TRUE is only necessary for ONNX export.

-

Note

- +
+
+

Note

This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a PackedSequence object by accessing its .data attribute.

+
+
-
- +
- - + + diff --git a/dev/reference/nn_utils_rnn_pack_sequence.html b/dev/reference/nn_utils_rnn_pack_sequence.html index b24841f80b964d834e5e173695cea5cb84eacc1e..ab5f40b00f74039ba9dece53dc0167f63a696e92 100644 --- a/dev/reference/nn_utils_rnn_pack_sequence.html +++ b/dev/reference/nn_utils_rnn_pack_sequence.html @@ -1,81 +1,20 @@ - - - - - - - -Packs a list of variable length Tensors — nn_utils_rnn_pack_sequence • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Packs a list of variable length Tensors — nn_utils_rnn_pack_sequence • torch - - - - - - - - - - - - - - + + - - - -
-
- -
- -
+
@@ -193,67 +115,63 @@ the length of a sequence and * is any number of trailing dimensions including zero.

-
nn_utils_rnn_pack_sequence(sequences, enforce_sorted = TRUE)
+
+
nn_utils_rnn_pack_sequence(sequences, enforce_sorted = TRUE)
+
-

Arguments

- - - - - - - - - - -
sequences

(list[Tensor]): A list of sequences of decreasing length.

enforce_sorted

(bool, optional): if TRUE, checks that the input +

+

Arguments

+
sequences
+

(list[Tensor]): A list of sequences of decreasing length.

+
enforce_sorted
+

(bool, optional): if TRUE, checks that the input contains sequences sorted by length in a decreasing order. If -FALSE, this condition is not checked. Default: TRUE.

- -

Value

- +FALSE, this condition is not checked. Default: TRUE.

+
+
+

Value

a PackedSequence object

-

Details

- +
+
+

Details

For unsorted sequences, use enforce_sorted = FALSE. If enforce_sorted is TRUE, the sequences should be sorted in the order of decreasing length. enforce_sorted = TRUE is only necessary for ONNX export.

+
-

Examples

-
if (torch_is_installed()) {
-x <- torch_tensor(c(1,2,3), dtype = torch_long())
-y <- torch_tensor(c(4, 5), dtype = torch_long())
-z <- torch_tensor(c(6), dtype = torch_long())
-
-p <- nn_utils_rnn_pack_sequence(list(x, y, z))
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+x <- torch_tensor(c(1,2,3), dtype = torch_long())
+y <- torch_tensor(c(4, 5), dtype = torch_long())
+z <- torch_tensor(c(6), dtype = torch_long())
+
+p <- nn_utils_rnn_pack_sequence(list(x, y, z))
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_utils_rnn_pad_packed_sequence.html b/dev/reference/nn_utils_rnn_pad_packed_sequence.html index f4f17c3c005b259a5b73602499b2814c1a3e70e3..1b916cef40b9636789702f2d3a84496c51b1049f 100644 --- a/dev/reference/nn_utils_rnn_pad_packed_sequence.html +++ b/dev/reference/nn_utils_rnn_pad_packed_sequence.html @@ -1,79 +1,18 @@ - - - - - - - -Pads a packed batch of variable length sequences. — nn_utils_rnn_pad_packed_sequence • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pads a packed batch of variable length sequences. — nn_utils_rnn_pad_packed_sequence • torch - - - - - - + + - - -
-
- -
- -
+
-

It is an inverse operation to nn_utils_rnn_pack_padded_sequence().

+

It is an inverse operation to nn_utils_rnn_pack_padded_sequence().

-
nn_utils_rnn_pad_packed_sequence(
-  sequence,
-  batch_first = FALSE,
-  padding_value = 0,
-  total_length = NULL
-)
+
+
nn_utils_rnn_pad_packed_sequence(
+  sequence,
+  batch_first = FALSE,
+  padding_value = 0,
+  total_length = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
sequence

(PackedSequence): batch to pad

batch_first

(bool, optional): if True, the output will be in ``B x T x *` -format.

padding_value

(float, optional): values for padded elements.

total_length

(int, optional): if not NULL, the output will be padded to +

+

Arguments

+
sequence
+

(PackedSequence): batch to pad

+
batch_first
+

(bool, optional): if TRUE, the output will be in `B x T x *` +format.

+
padding_value
+

(float, optional): values for padded elements.

+
total_length
+

(int, optional): if not NULL, the output will be padded to have length total_length. This method will throw ValueError if total_length is less than the max sequence length in -sequence.

- -

Value

- +sequence.

+
+
+

Value

Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch. Batch elements will be re-ordered as they were ordered originally when -the batch was passed to nn_utils_rnn_pack_padded_sequence() or -nn_utils_rnn_pack_sequence().

-

Details

- +the batch was passed to nn_utils_rnn_pack_padded_sequence() or +nn_utils_rnn_pack_sequence().

+
+
+

Details

The returned Tensor's data will be of size T x B x *, where T is the length of the longest sequence and B is the batch size. If batch_first is TRUE, the data will be transposed into B x T x * format.

-

Note

- +
+
+

Note

total_length is useful to implement the pack sequence -> recurrent network -> unpack sequence pattern in a nn_module wrapped in torch.nn.DataParallel.

+
-

Examples

-
if (torch_is_installed()) {
-seq <- torch_tensor(rbind(c(1,2,0), c(3,0,0), c(4,5,6)))        
-lens <- c(2,1,3)
-packed <- nn_utils_rnn_pack_padded_sequence(seq, lens, batch_first = TRUE,
-                                            enforce_sorted = FALSE)
-packed
-nn_utils_rnn_pad_packed_sequence(packed, batch_first=TRUE)
-
-}
-#> [[1]]
-#> torch_tensor
-#>  1  2  0
-#>  3  0  0
-#>  4  5  6
-#> [ CPUFloatType{3,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  2
-#>  1
-#>  3
-#> [ CPULongType{3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+seq <- torch_tensor(rbind(c(1,2,0), c(3,0,0), c(4,5,6)))        
+lens <- c(2,1,3)
+packed <- nn_utils_rnn_pack_padded_sequence(seq, lens, batch_first = TRUE,
+                                            enforce_sorted = FALSE)
+packed
+nn_utils_rnn_pad_packed_sequence(packed, batch_first=TRUE)
+
+}
+#> [[1]]
+#> torch_tensor
+#>  1  2  0
+#>  3  0  0
+#>  4  5  6
+#> [ CPUFloatType{3,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  2
+#>  1
+#>  3
+#> [ CPULongType{3} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/nn_utils_rnn_pad_sequence.html b/dev/reference/nn_utils_rnn_pad_sequence.html index dfa5acb3dc9c7bbb06b65d7c34ceac610ebfc802..aaf66a8c5fa00411427fbbb2d44b47b2c8ac348f 100644 --- a/dev/reference/nn_utils_rnn_pad_sequence.html +++ b/dev/reference/nn_utils_rnn_pad_sequence.html @@ -1,82 +1,21 @@ - - - - - - - -Pad a list of variable length Tensors with padding_value — nn_utils_rnn_pad_sequence • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pad a list of variable length Tensors with padding_value — nn_utils_rnn_pad_sequence • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,77 +117,72 @@ sequences with size L x * and if batch_first is False, and T otherwise.

-
nn_utils_rnn_pad_sequence(sequences, batch_first = FALSE, padding_value = 0)
- -

Arguments

- - - - - - - - - - - - - - -
sequences

(list[Tensor]): list of variable length sequences.

batch_first

(bool, optional): output will be in B x T x * if TRUE, -or in T x B x * otherwise

padding_value

(float, optional): value for padded elements. Default: 0.

- -

Value

+
+
nn_utils_rnn_pad_sequence(sequences, batch_first = FALSE, padding_value = 0)
+
+
+

Arguments

+
sequences
+

(list[Tensor]): list of variable length sequences.

+
batch_first
+

(bool, optional): output will be in B x T x * if TRUE, +or in T x B x * otherwise.

+
padding_value
+

(float, optional): value for padded elements. Default: 0.

+
+
+

Value

Tensor of size T x B x * if batch_first is FALSE. Tensor of size B x T x * otherwise

-

Details

- +
+
+

Details

B is the batch size, equal to the number of elements in sequences. T is the length of the longest sequence. L is the length of each sequence. * is any number of trailing dimensions, including none.

-

Note

- +
+
+

Note

This function returns a Tensor of size T x B x * or B x T x * where T is the length of the longest sequence. This function assumes the trailing dimensions and type of all the Tensors in sequences are the same.

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_ones(25, 300)
-b <- torch_ones(22, 300)
-c <- torch_ones(15, 300)
-nn_utils_rnn_pad_sequence(list(a, b, c))$size()
-
-}
-#> [1]  25   3 300
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_ones(25, 300)
+b <- torch_ones(22, 300)
+c <- torch_ones(15, 300)
+nn_utils_rnn_pad_sequence(list(a, b, c))$size()
+
+}
+#> [1]  25   3 300
+
+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_avg_pool1d.html b/dev/reference/nnf_adaptive_avg_pool1d.html index 2a36dd0b3e7561e74e5cc5dc351018766751864c..d471af9929cc72c79fd7ada24ef014c45028f08f 100644 --- a/dev/reference/nnf_adaptive_avg_pool1d.html +++ b/dev/reference/nnf_adaptive_avg_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_avg_pool1d — nnf_adaptive_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_avg_pool1d — nnf_adaptive_avg_pool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,47 +113,39 @@ several input planes." /> several input planes.

-
nnf_adaptive_avg_pool1d(input, output_size)
- -

Arguments

- - - - - - - - - - -
input

input tensor of shape (minibatch , in_channels , iW)

output_size

the target output size (single integer)

+
+
nnf_adaptive_avg_pool1d(input, output_size)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch , in_channels , iW)

+
output_size
+

the target output size (single integer)

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_avg_pool2d.html b/dev/reference/nnf_adaptive_avg_pool2d.html index 0beedd6e9ad63780e215d8facc1558a6c40b1e9b..531605c229d05abd0074cde53b22e68c294c3739 100644 --- a/dev/reference/nnf_adaptive_avg_pool2d.html +++ b/dev/reference/nnf_adaptive_avg_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_avg_pool2d — nnf_adaptive_avg_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_avg_pool2d — nnf_adaptive_avg_pool2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,47 +113,39 @@ several input planes." /> several input planes.

-
nnf_adaptive_avg_pool2d(input, output_size)
- -

Arguments

- - - - - - - - - - -
input

input tensor (minibatch, in_channels , iH , iW)

output_size

the target output size (single integer or double-integer tuple)

+
+
nnf_adaptive_avg_pool2d(input, output_size)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iH , iW)

+
output_size
+

the target output size (single integer or double-integer tuple)

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_avg_pool3d.html b/dev/reference/nnf_adaptive_avg_pool3d.html index bab2be48c62d54c06d431f49ee88f366b8cbfe37..2d3ca0ef130dbcbfe373f331f4e932ca9a44f7ef 100644 --- a/dev/reference/nnf_adaptive_avg_pool3d.html +++ b/dev/reference/nnf_adaptive_avg_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_avg_pool3d — nnf_adaptive_avg_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_avg_pool3d — nnf_adaptive_avg_pool3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,47 +113,39 @@ several input planes." /> several input planes.

-
nnf_adaptive_avg_pool3d(input, output_size)
- -

Arguments

- - - - - - - - - - -
input

input tensor (minibatch, in_channels , iT * iH , iW)

output_size

the target output size (single integer or triple-integer tuple)

+
+
nnf_adaptive_avg_pool3d(input, output_size)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iT , iH , iW)

+
output_size
+

the target output size (single integer or triple-integer tuple)

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_max_pool1d.html b/dev/reference/nnf_adaptive_max_pool1d.html index ecc18c936074c83654a053fe7e30cd8c72559785..5407caed40eea3b67de0056bcb6f7c6de00521a9 100644 --- a/dev/reference/nnf_adaptive_max_pool1d.html +++ b/dev/reference/nnf_adaptive_max_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_max_pool1d — nnf_adaptive_max_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_max_pool1d — nnf_adaptive_max_pool1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,51 +113,41 @@ several input planes." /> several input planes.

-
nnf_adaptive_max_pool1d(input, output_size, return_indices = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
input

input tensor of shape (minibatch , in_channels , iW)

output_size

the target output size (single integer)

return_indices

whether to return pooling indices. Default: FALSE

+
+
nnf_adaptive_max_pool1d(input, output_size, return_indices = FALSE)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch , in_channels , iW)

+
output_size
+

the target output size (single integer)

+
return_indices
+

whether to return pooling indices. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_max_pool2d.html b/dev/reference/nnf_adaptive_max_pool2d.html index 97b2a233949ac7a2db34e8cc9c03a25cb6767de8..510085564e0bded131ff7217a86b32abc790f19a 100644 --- a/dev/reference/nnf_adaptive_max_pool2d.html +++ b/dev/reference/nnf_adaptive_max_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_max_pool2d — nnf_adaptive_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_max_pool2d — nnf_adaptive_max_pool2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,51 +113,41 @@ several input planes." /> several input planes.

-
nnf_adaptive_max_pool2d(input, output_size, return_indices = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iH , iW)

output_size

the target output size (single integer or double-integer tuple)

return_indices

whether to return pooling indices. Default: FALSE

+
+
nnf_adaptive_max_pool2d(input, output_size, return_indices = FALSE)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iH , iW)

+
output_size
+

the target output size (single integer or double-integer tuple)

+
return_indices
+

whether to return pooling indices. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_adaptive_max_pool3d.html b/dev/reference/nnf_adaptive_max_pool3d.html index 9e3d42bacf323fa74c388c7dcb0ec7b6c423e334..8d383e87773c72386b896a4a8172a79e9336842e 100644 --- a/dev/reference/nnf_adaptive_max_pool3d.html +++ b/dev/reference/nnf_adaptive_max_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Adaptive_max_pool3d — nnf_adaptive_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_max_pool3d — nnf_adaptive_max_pool3d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,51 +113,41 @@ several input planes." /> several input planes.

-
nnf_adaptive_max_pool3d(input, output_size, return_indices = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iT * iH , iW)

output_size

the target output size (single integer or triple-integer tuple)

return_indices

whether to return pooling indices. Default:FALSE

+
+
nnf_adaptive_max_pool3d(input, output_size, return_indices = FALSE)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iT , iH , iW)

+
output_size
+

the target output size (single integer or triple-integer tuple)

+
return_indices
+

whether to return pooling indices. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_affine_grid.html b/dev/reference/nnf_affine_grid.html index 15fb9ed31095bcf19b45d50292018e77bb833d09..14e06a663e8436f0ace468493a0bf70e0ee72c29 100644 --- a/dev/reference/nnf_affine_grid.html +++ b/dev/reference/nnf_affine_grid.html @@ -1,80 +1,19 @@ - - - - - - - -Affine_grid — nnf_affine_grid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Affine_grid — nnf_affine_grid • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,65 +113,56 @@ affine matrices theta." /> affine matrices theta.

-
nnf_affine_grid(theta, size, align_corners = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
theta

(Tensor) input batch of affine matrices with shape -(\(N \times 2 \times 3\)) for 2D or (\(N \times 3 \times 4\)) for 3D

size

(torch.Size) the target output image size. (\(N \times C \times H \times W\) +

+
nnf_affine_grid(theta, size, align_corners = FALSE)
+
+ +
+

Arguments

+
theta
+

(Tensor) input batch of affine matrices with shape +(\(N \times 2 \times 3\)) for 2D or (\(N \times 3 \times 4\)) for 3D

+
size
+

(torch.Size) the target output image size. (\(N \times C \times H \times W\) for 2D or \(N \times C \times D \times H \times W\) for 3D) -Example: torch.Size((32, 3, 24, 24))

align_corners

(bool, optional) if True, consider -1 and 1 +Example: torch.Size((32, 3, 24, 24))

+
align_corners
+

(bool, optional) if True, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. -Refer to nnf_grid_sample() for a more complete description. A grid generated by -nnf_affine_grid() should be passed to nnf_grid_sample() with the same setting for -this option. Default: False

- -

Note

- +Refer to nnf_grid_sample() for a more complete description. A grid generated by +nnf_affine_grid() should be passed to nnf_grid_sample() with the same setting for +this option. Default: False

+
+
+

Note

-

This function is often used in conjunction with nnf_grid_sample() +

This function is often used in conjunction with nnf_grid_sample() to build Spatial Transformer Networks_ .

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_alpha_dropout.html b/dev/reference/nnf_alpha_dropout.html index 185eeb3efa9c83eaac045bd3cae3a55370aaf9af..f3c9c85d40c46367bc5c5783a2176f026bead428 100644 --- a/dev/reference/nnf_alpha_dropout.html +++ b/dev/reference/nnf_alpha_dropout.html @@ -1,79 +1,18 @@ - - - - - - - -Alpha_dropout — nnf_alpha_dropout • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Alpha_dropout — nnf_alpha_dropout • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,56 +111,44 @@

Applies alpha dropout to the input.

-
nnf_alpha_dropout(input, p = 0.5, training = FALSE, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

the input tensor

p

probability of an element to be zeroed. Default: 0.5

training

apply dropout if is TRUE. Default: TRUE

inplace

If set to TRUE, will do this operation in-place. -Default: FALSE

+
+
nnf_alpha_dropout(input, p = 0.5, training = FALSE, inplace = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
p
+

probability of an element to be zeroed. Default: 0.5

+
training
+

apply dropout if TRUE. Default: TRUE

+
inplace
+

If set to TRUE, will do this operation in-place. +Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_avg_pool1d.html b/dev/reference/nnf_avg_pool1d.html index 39d3a8f45aafb1bac66a76bed386e84c4262fa57..5ed0da83e5ea70832662f9ab9970a4069192dbf0 100644 --- a/dev/reference/nnf_avg_pool1d.html +++ b/dev/reference/nnf_avg_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Avg_pool1d — nnf_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Avg_pool1d — nnf_avg_pool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,75 +113,59 @@ input planes." /> input planes.

-
nnf_avg_pool1d(
-  input,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch , in_channels , iW)

kernel_size

the size of the window. Can be a single number or a -tuple (kW,).

stride

the stride of the window. Can be a single number or a tuple -(sW,). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padW,). Default: 0

ceil_mode

when True, will use ceil instead of floor to compute the -output shape. Default: FALSE

count_include_pad

when True, will include the zero-padding in the -averaging calculation. Default: TRUE

+
+
nnf_avg_pool1d(
+  input,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch , in_channels , iW)

+
kernel_size
+

the size of the window. Can be a single number or a +tuple (kW,).

+
stride
+

the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

+
ceil_mode
+

when True, will use ceil instead of floor to compute the +output shape. Default: FALSE

+
count_include_pad
+

when True, will include the zero-padding in the +averaging calculation. Default: TRUE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_avg_pool2d.html b/dev/reference/nnf_avg_pool2d.html index 6d438ba4edf3286dd91ca978cb923fc442c147c7..0ad5fd93e6e034b802a6d3276206d14b6aa085b3 100644 --- a/dev/reference/nnf_avg_pool2d.html +++ b/dev/reference/nnf_avg_pool2d.html @@ -1,81 +1,20 @@ - - - - - - - -Avg_pool2d — nnf_avg_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Avg_pool2d — nnf_avg_pool2d • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,81 +115,63 @@ input planes." /> input planes.

-
nnf_avg_pool2d(
-  input,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE,
-  divisor_override = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iH , iW)

kernel_size

size of the pooling region. Can be a single number or a -tuple (kH, kW)

stride

stride of the pooling operation. Can be a single number or a -tuple (sH, sW). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padH, padW). Default: 0

ceil_mode

when True, will use ceil instead of floor in the formula -to compute the output shape. Default: FALSE

count_include_pad

when True, will include the zero-padding in the -averaging calculation. Default: TRUE

divisor_override

if specified, it will be used as divisor, otherwise -size of the pooling region will be used. Default: NULL

+
+
nnf_avg_pool2d(
+  input,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE,
+  divisor_override = NULL
+)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iH , iW)

+
kernel_size
+

size of the pooling region. Can be a single number or a +tuple (kH, kW)

+
stride
+

stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

+
ceil_mode
+

when True, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

+
count_include_pad
+

when True, will include the zero-padding in the +averaging calculation. Default: TRUE

+
divisor_override
+

if specified, it will be used as divisor, otherwise +size of the pooling region will be used. Default: NULL

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_avg_pool3d.html b/dev/reference/nnf_avg_pool3d.html index 71e3cd443b25ac10e7d671b0737bb3f5ddaced6a..07288505c58c5c1fbf3dcc56c7d458b4f02cf190 100644 --- a/dev/reference/nnf_avg_pool3d.html +++ b/dev/reference/nnf_avg_pool3d.html @@ -1,81 +1,20 @@ - - - - - - - -Avg_pool3d — nnf_avg_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Avg_pool3d — nnf_avg_pool3d • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,81 +115,63 @@ size \(sT * sH * sW\) steps. The number of output features is equal to \(\lfloor \frac{ \mbox{input planes} }{sT} \rfloor\).

-
nnf_avg_pool3d(
-  input,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE,
-  divisor_override = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iT * iH , iW)

kernel_size

size of the pooling region. Can be a single number or a -tuple (kT, kH, kW)

stride

stride of the pooling operation. Can be a single number or a -tuple (sT, sH, sW). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padT, padH, padW), Default: 0

ceil_mode

when True, will use ceil instead of floor in the formula -to compute the output shape

count_include_pad

when True, will include the zero-padding in the -averaging calculation

divisor_override

NA if specified, it will be used as divisor, otherwise -size of the pooling region will be used. Default: NULL

+
+
nnf_avg_pool3d(
+  input,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE,
+  divisor_override = NULL
+)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iT , iH , iW)

+
kernel_size
+

size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

+
stride
+

stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

+
ceil_mode
+

when True, will use ceil instead of floor in the formula +to compute the output shape

+
count_include_pad
+

when True, will include the zero-padding in the +averaging calculation

+
divisor_override
+

if specified, it will be used as the divisor, otherwise +size of the pooling region will be used. Default: NULL

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_batch_norm.html b/dev/reference/nnf_batch_norm.html index 841651fb30f81ecade907a2bb3383de0dc435465..cd125ace1b2b1b1b93c40d39777e4243f856aa89 100644 --- a/dev/reference/nnf_batch_norm.html +++ b/dev/reference/nnf_batch_norm.html @@ -1,79 +1,18 @@ - - - - - - - -Batch_norm — nnf_batch_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Batch_norm — nnf_batch_norm • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,81 +111,61 @@

Applies Batch Normalization for each channel across a batch of data.

-
nnf_batch_norm(
-  input,
-  running_mean,
-  running_var,
-  weight = NULL,
-  bias = NULL,
-  training = FALSE,
-  momentum = 0.1,
-  eps = 1e-05
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor

running_mean

the running_mean tensor

running_var

the running_var tensor

weight

the weight tensor

bias

the bias tensor

training

bool wether it's training. Default: FALSE

momentum

the value used for the running_mean and running_var computation. -Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1

eps

a value added to the denominator for numerical stability. Default: 1e-5

+
+
nnf_batch_norm(
+  input,
+  running_mean,
+  running_var,
+  weight = NULL,
+  bias = NULL,
+  training = FALSE,
+  momentum = 0.1,
+  eps = 1e-05
+)
+
+
+

Arguments

+
input
+

input tensor

+
running_mean
+

the running_mean tensor

+
running_var
+

the running_var tensor

+
weight
+

the weight tensor

+
bias
+

the bias tensor

+
training
+

bool, whether it's training. Default: FALSE

+
momentum
+

the value used for the running_mean and running_var computation. +Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1

+
eps
+

a value added to the denominator for numerical stability. Default: 1e-5

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_bilinear.html b/dev/reference/nnf_bilinear.html index 876b6094905200f19a390788254e4c255918ecde..d3e9610ad701fb21d6cc23f972779ea67833c251 100644 --- a/dev/reference/nnf_bilinear.html +++ b/dev/reference/nnf_bilinear.html @@ -1,80 +1,19 @@ - - - - - - - -Bilinear — nnf_bilinear • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bilinear — nnf_bilinear • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,62 +113,50 @@ \(y = x_1 A x_2 + b\)

-
nnf_bilinear(input1, input2, weight, bias = NULL)
+
+
nnf_bilinear(input1, input2, weight, bias = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
input1

\((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) +

+

Arguments

+
input1
+

\((N, *, H_{in1})\) where \(H_{in1}=\mbox{in1\_features}\) and \(*\) means any number of additional dimensions. -All but the last dimension of the inputs should be the same.

input2

\((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\)

weight

\((\mbox{out\_features}, \mbox{in1\_features}, -\mbox{in2\_features})\)

bias

\((\mbox{out\_features})\)

- -

Value

- -

output \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) -and all but the last dimension are the same shape as the input.

+All but the last dimension of the inputs should be the same.

+
input2
+

\((N, *, H_{in2})\) where \(H_{in2}=\mbox{in2\_features}\)

+
weight
+

\((\mbox{out\_features}, \mbox{in1\_features}, +\mbox{in2\_features})\)

+
bias
+

\((\mbox{out\_features})\)

+
+
+

Value

+

output \((N, *, H_{out})\) where \(H_{out}=\mbox{out\_features}\) and all but the last dimension are the same shape as the input.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_binary_cross_entropy.html b/dev/reference/nnf_binary_cross_entropy.html index a6e1be698d98f8e65d3f338aadfb0d738949f4d4..6577917f91a612b199782e2d34f01adf460868a6 100644 --- a/dev/reference/nnf_binary_cross_entropy.html +++ b/dev/reference/nnf_binary_cross_entropy.html @@ -1,80 +1,19 @@ - - - - - - - -Binary_cross_entropy — nnf_binary_cross_entropy • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Binary_cross_entropy — nnf_binary_cross_entropy • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,63 +113,51 @@ between the target and the output." /> between the target and the output.

-
nnf_binary_cross_entropy(
-  input,
-  target,
-  weight = NULL,
-  reduction = c("mean", "sum", "none")
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

weight

(tensor) weight for each value.

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_binary_cross_entropy(
+  input,
+  target,
+  weight = NULL,
+  reduction = c("mean", "sum", "none")
+)
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*) , same shape as the input

+
weight
+

(tensor) weight for each value.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_binary_cross_entropy_with_logits.html b/dev/reference/nnf_binary_cross_entropy_with_logits.html index 24dc29d6419a850547d37a569ca4c5ea98407c41..be84f6f587ac5bcfbe028ecabc78864b131bd2b5 100644 --- a/dev/reference/nnf_binary_cross_entropy_with_logits.html +++ b/dev/reference/nnf_binary_cross_entropy_with_logits.html @@ -1,80 +1,19 @@ - - - - - - - -Binary_cross_entropy_with_logits — nnf_binary_cross_entropy_with_logits • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Binary_cross_entropy_with_logits — nnf_binary_cross_entropy_with_logits • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,70 +113,56 @@ logits." /> logits.

-
nnf_binary_cross_entropy_with_logits(
-  input,
-  target,
-  weight = NULL,
-  reduction = c("mean", "sum", "none"),
-  pos_weight = NULL
-)
+
+
nnf_binary_cross_entropy_with_logits(
+  input,
+  target,
+  weight = NULL,
+  reduction = c("mean", "sum", "none"),
+  pos_weight = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

Tensor of arbitrary shape

target

Tensor of the same shape as input

weight

(Tensor, optional) a manual rescaling weight if provided it's -repeated to match input tensor shape.

reduction

(string, optional) – Specifies the reduction to apply to the +

+

Arguments

+
input
+

Tensor of arbitrary shape

+
target
+

Tensor of the same shape as input

+
weight
+

(Tensor, optional) a manual rescaling weight if provided it's +repeated to match input tensor shape.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

pos_weight

(Tensor, optional) a weight of positive examples. -Must be a vector with length equal to the number of classes.

- +'sum': the output will be summed. Default: 'mean'

+
pos_weight
+

(Tensor, optional) a weight of positive examples. +Must be a vector with length equal to the number of classes.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_celu.html b/dev/reference/nnf_celu.html index ac91a2d408bb79977a81411db02bb9224ecc370d..7afbaef31176be37b3ab71d5f8ba9045b75862d1 100644 --- a/dev/reference/nnf_celu.html +++ b/dev/reference/nnf_celu.html @@ -1,79 +1,18 @@ - - - - - - - -Celu — nnf_celu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Celu — nnf_celu • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,54 +111,44 @@

Applies element-wise, \(CELU(x) = max(0,x) + min(0, \alpha * (exp(x \alpha) - 1))\).

-
nnf_celu(input, alpha = 1, inplace = FALSE)
-
-nnf_celu_(input, alpha = 1)
- -

Arguments

- - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

alpha

the alpha value for the CELU formulation. Default: 1.0

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_celu(input, alpha = 1, inplace = FALSE)
 
+nnf_celu_(input, alpha = 1)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means, any number of additional +dimensions

+
alpha
+

the alpha value for the CELU formulation. Default: 1.0

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_contrib_sparsemax.html b/dev/reference/nnf_contrib_sparsemax.html index a67bd0b8e23637abc3404542b5b58fe87880ff96..99ec67432c9bd558b3f7c6b4e944ff3add6fce9c 100644 --- a/dev/reference/nnf_contrib_sparsemax.html +++ b/dev/reference/nnf_contrib_sparsemax.html @@ -1,79 +1,18 @@ - - - - - - - -Sparsemax — nnf_contrib_sparsemax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sparsemax — nnf_contrib_sparsemax • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,45 @@

Applies the SparseMax activation.

-
nnf_contrib_sparsemax(input, dim = -1)
- -

Arguments

- - - - - - - - - - -
input

the input tensor

dim

The dimension over which to apply the sparsemax function. (-1)

- -

Details

+
+
nnf_contrib_sparsemax(input, dim = -1)
+
+
+

Arguments

+
input
+

the input tensor

+
dim
+

The dimension over which to apply the sparsemax function. (-1)

+
+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv1d.html b/dev/reference/nnf_conv1d.html index 965290bf5dc59b01001a5cdd714ae1af2b79e462..2eae698e1bbebe1820c77bdc12e62bcc51637374 100644 --- a/dev/reference/nnf_conv1d.html +++ b/dev/reference/nnf_conv1d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv1d — nnf_conv1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv1d — nnf_conv1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,79 +113,61 @@ planes." /> planes.

-
nnf_conv1d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels , iW)

weight

filters of shape (out_channels, in_channels/groups , kW)

bias

optional bias of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or -a one-element tuple (sW,). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a one-element tuple (padW,). Default: 0

dilation

the spacing between kernel elements. Can be a single number or -a one-element tuple (dW,). Default: 1

groups

split input into groups, in_channels should be divisible by -the number of groups. Default: 1

+
+
nnf_conv1d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels , iW)

+
weight
+

filters of shape (out_channels, in_channels/groups , kW)

+
bias
+

optional bias of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or +a one-element tuple (sW,). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a one-element tuple (padW,). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or +a one-element tuple (dW,). Default: 1

+
groups
+

split input into groups, in_channels should be divisible by +the number of groups. Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv2d.html b/dev/reference/nnf_conv2d.html index 214a90231c17dd90416558f487a88b1c8e17f7d1..a4b2249d824824c886e42f0a4a9a1c4665b882e2 100644 --- a/dev/reference/nnf_conv2d.html +++ b/dev/reference/nnf_conv2d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv2d — nnf_conv2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv2d — nnf_conv2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,79 +113,61 @@ planes." /> planes.

-
nnf_conv2d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels, iH , iW)

weight

filters of shape (out_channels , in_channels/groups, kH , kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a -tuple (sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a tuple (padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or -a tuple (dH, dW). Default: 1

groups

split input into groups, in_channels should be divisible by the -number of groups. Default: 1

+
+
nnf_conv2d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels, iH , iW)

+
weight
+

filters of shape (out_channels , in_channels/groups, kH , kW)

+
bias
+

optional bias tensor of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a +tuple (sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or +a tuple (dH, dW). Default: 1

+
groups
+

split input into groups, in_channels should be divisible by the +number of groups. Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv3d.html b/dev/reference/nnf_conv3d.html index cc3bae9e802fd37414519b221b91a11198265e0c..3d338005c30fb8c8936ad3f41784651f5f55d229 100644 --- a/dev/reference/nnf_conv3d.html +++ b/dev/reference/nnf_conv3d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv3d — nnf_conv3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv3d — nnf_conv3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,79 +113,61 @@ planes." /> planes.

-
nnf_conv3d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  dilation = 1,
-  groups = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels , iT , iH , iW)

weight

filters of shape (out_channels , in_channels/groups, kT , kH , kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a -tuple (sT, sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a tuple (padT, padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or -a tuple (dT, dH, dW). Default: 1

groups

split input into groups, in_channels should be divisible by -the number of groups. Default: 1

+
+
nnf_conv3d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  dilation = 1,
+  groups = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels , iT , iH , iW)

+
weight
+

filters of shape (out_channels , in_channels/groups, kT , kH , kW)

+
bias
+

optional bias tensor of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a +tuple (sT, sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or +a tuple (dT, dH, dW). Default: 1

+
groups
+

split input into groups, in_channels should be divisible by +the number of groups. Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv_tbc.html b/dev/reference/nnf_conv_tbc.html index 3ff5f53b3e389bc1e1cd63812c587e3ccd622493..f90388052426c698c016b0c2925741550510101a 100644 --- a/dev/reference/nnf_conv_tbc.html +++ b/dev/reference/nnf_conv_tbc.html @@ -1,80 +1,19 @@ - - - - - - - -Conv_tbc — nnf_conv_tbc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_tbc — nnf_conv_tbc • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,57 +113,45 @@ Input and output dimensions are (Time, Batch, Channels) - hence TBC." /> Input and output dimensions are (Time, Batch, Channels) - hence TBC.

-
nnf_conv_tbc(input, weight, bias, pad = 0)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{sequence length} \times -batch \times \mbox{in\_channels})\)

weight

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} -\times \mbox{out\_channels}\))

bias

bias of shape (\(\mbox{out\_channels}\))

pad

number of timesteps to pad. Default: 0

+
+
nnf_conv_tbc(input, weight, bias, pad = 0)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{sequence length} \times +batch \times \mbox{in\_channels})\)

+
weight
+

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} +\times \mbox{out\_channels}\))

+
bias
+

bias of shape (\(\mbox{out\_channels}\))

+
pad
+

number of timesteps to pad. Default: 0

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv_transpose1d.html b/dev/reference/nnf_conv_transpose1d.html index e7647400bd03af6b7dcbb0876aada092b410e335..94f1b533bc653ed453d221e7a1f4406d041090c3 100644 --- a/dev/reference/nnf_conv_transpose1d.html +++ b/dev/reference/nnf_conv_transpose1d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv_transpose1d — nnf_conv_transpose1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose1d — nnf_conv_transpose1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,84 +113,64 @@ composed of several input planes, sometimes also called "deconvolution" composed of several input planes, sometimes also called "deconvolution".

-
nnf_conv_transpose1d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  dilation = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels , iW)

weight

filters of shape (out_channels, in_channels/groups , kW)

bias

optional bias of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or -a one-element tuple (sW,). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a one-element tuple (padW,). Default: 0

output_padding

padding applied to the output

groups

split input into groups, in_channels should be divisible by -the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or -a one-element tuple (dW,). Default: 1

+
+
nnf_conv_transpose1d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  dilation = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels , iW)

+
weight
+

filters of shape (out_channels, in_channels/groups , kW)

+
bias
+

optional bias of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or +a one-element tuple (sW,). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a one-element tuple (padW,). Default: 0

+
output_padding
+

padding applied to the output

+
groups
+

split input into groups, in_channels should be divisible by +the number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or +a one-element tuple (dW,). Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv_transpose2d.html b/dev/reference/nnf_conv_transpose2d.html index df5f9d17913f8af8582f01b265108335101f515b..a11a198f24f774d3fc90a741080e6537cbecfce8 100644 --- a/dev/reference/nnf_conv_transpose2d.html +++ b/dev/reference/nnf_conv_transpose2d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv_transpose2d — nnf_conv_transpose2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose2d — nnf_conv_transpose2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,84 +113,64 @@ composed of several input planes, sometimes also called "deconvolution" composed of several input planes, sometimes also called "deconvolution".

-
nnf_conv_transpose2d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  dilation = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels, iH , iW)

weight

filters of shape (out_channels , in_channels/groups, kH , kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a -tuple (sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a tuple (padH, padW). Default: 0

output_padding

padding applied to the output

groups

split input into groups, in_channels should be divisible by the -number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or -a tuple (dH, dW). Default: 1

+
+
nnf_conv_transpose2d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  dilation = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels, iH , iW)

+
weight
+

filters of shape (out_channels , in_channels/groups, kH , kW)

+
bias
+

optional bias tensor of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a +tuple (sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

+
output_padding
+

padding applied to the output

+
groups
+

split input into groups, in_channels should be divisible by the +number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or +a tuple (dH, dW). Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_conv_transpose3d.html b/dev/reference/nnf_conv_transpose3d.html index 332ec2d5252e745a99ea0a9477a0b2acf61ca262..e8d1568a38ac57abd2a33e79fa96c2bedf922de2 100644 --- a/dev/reference/nnf_conv_transpose3d.html +++ b/dev/reference/nnf_conv_transpose3d.html @@ -1,80 +1,19 @@ - - - - - - - -Conv_transpose3d — nnf_conv_transpose3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose3d — nnf_conv_transpose3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,84 +113,64 @@ composed of several input planes, sometimes also called "deconvolution" composed of several input planes, sometimes also called "deconvolution"

-
nnf_conv_transpose3d(
-  input,
-  weight,
-  bias = NULL,
-  stride = 1,
-  padding = 0,
-  output_padding = 0,
-  groups = 1,
-  dilation = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch, in_channels , iT , iH , iW)

weight

filters of shape (out_channels , in_channels/groups, kT , kH , kW)

bias

optional bias tensor of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a -tuple (sT, sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a -single number or a tuple (padT, padH, padW). Default: 0

output_padding

padding applied to the output

groups

split input into groups, in_channels should be divisible by -the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or -a tuple (dT, dH, dW). Default: 1

+
+
nnf_conv_transpose3d(
+  input,
+  weight,
+  bias = NULL,
+  stride = 1,
+  padding = 0,
+  output_padding = 0,
+  groups = 1,
+  dilation = 1
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch, in_channels , iT , iH , iW)

+
weight
+

filters of shape (out_channels , in_channels/groups, kT , kH , kW)

+
bias
+

optional bias tensor of shape (out_channels). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a +tuple (sT, sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW). Default: 0

+
output_padding
+

padding applied to the output

+
groups
+

split input into groups, in_channels should be divisible by +the number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or +a tuple (dT, dH, dW). Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_cosine_embedding_loss.html b/dev/reference/nnf_cosine_embedding_loss.html index 141a6d252c9c31271522bdaf6d00f085e179f33d..d604184b2c9139206b4ca3d25f86d9001436386b 100644 --- a/dev/reference/nnf_cosine_embedding_loss.html +++ b/dev/reference/nnf_cosine_embedding_loss.html @@ -1,82 +1,21 @@ - - - - - - - -Cosine_embedding_loss — nnf_cosine_embedding_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cosine_embedding_loss — nnf_cosine_embedding_loss • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,69 +117,55 @@ are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.

-
nnf_cosine_embedding_loss(
-  input1,
-  input2,
-  target,
-  margin = 0,
-  reduction = c("mean", "sum", "none")
-)
+
+
nnf_cosine_embedding_loss(
+  input1,
+  input2,
+  target,
+  margin = 0,
+  reduction = c("mean", "sum", "none")
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input1

the input x_1 tensor

input2

the input x_2 tensor

target

the target tensor

margin

Should be a number from -1 to 1 , 0 to 0.5 is suggested. If margin -is missing, the default value is 0.

reduction

(string, optional) – Specifies the reduction to apply to the +

+

Arguments

+
input1
+

the input x_1 tensor

+
input2
+

the input x_2 tensor

+
target
+

the target tensor

+
margin
+

Should be a number from -1 to 1 , 0 to 0.5 is suggested. If margin +is missing, the default value is 0.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_cosine_similarity.html b/dev/reference/nnf_cosine_similarity.html index 9ce93dcd1242e217b2be55759acc71d7f85c8bde..b0d1c39b657e598e02cc1096b9de74cb31f44fce 100644 --- a/dev/reference/nnf_cosine_similarity.html +++ b/dev/reference/nnf_cosine_similarity.html @@ -1,79 +1,18 @@ - - - - - - - -Cosine_similarity — nnf_cosine_similarity • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cosine_similarity — nnf_cosine_similarity • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,61 +111,50 @@

Returns cosine similarity between x1 and x2, computed along dim.

-
nnf_cosine_similarity(x1, x2, dim = 1, eps = 1e-08)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
x1

(Tensor) First input.

x2

(Tensor) Second input (of size matching x1).

dim

(int, optional) Dimension of vectors. Default: 1

eps

(float, optional) Small value to avoid division by zero. -Default: 1e-8

- -

Details

+
+
nnf_cosine_similarity(x1, x2, dim = 1, eps = 1e-08)
+
+
+

Arguments

+
x1
+

(Tensor) First input.

+
x2
+

(Tensor) Second input (of size matching x1).

+
dim
+

(int, optional) Dimension of vectors. Default: 1

+
eps
+

(float, optional) Small value to avoid division by zero. +Default: 1e-8

+
+
+

Details

$$ \mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} $$

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_cross_entropy.html b/dev/reference/nnf_cross_entropy.html index b4315f1a7e236c6ca97ebca6afb0903ad10ef2b5..5efdc83f88adc82672c754a48f8ca2d7c5ad7822 100644 --- a/dev/reference/nnf_cross_entropy.html +++ b/dev/reference/nnf_cross_entropy.html @@ -1,80 +1,19 @@ - - - - - - - -Cross_entropy — nnf_cross_entropy • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cross_entropy — nnf_cross_entropy • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,73 +113,59 @@ function." /> function.

-
nnf_cross_entropy(
-  input,
-  target,
-  weight = NULL,
-  ignore_index = -100,
-  reduction = c("mean", "sum", "none")
-)
+
+
nnf_cross_entropy(
+  input,
+  target,
+  weight = NULL,
+  ignore_index = -100,
+  reduction = c("mean", "sum", "none")
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) \((N, C)\) where C = number of classes or \((N, C, H, W)\) +

+

Arguments

+
input
+

(Tensor) \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) -in the case of K-dimensional loss.

target

(Tensor) \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), -or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

weight

(Tensor, optional) a manual rescaling weight given to each class. If -given, has to be a Tensor of size C

ignore_index

(int, optional) Specifies a target value that is ignored -and does not contribute to the input gradient.

reduction

(string, optional) – Specifies the reduction to apply to the +in the case of K-dimensional loss.

+
target
+

(Tensor) \((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

+
weight
+

(Tensor, optional) a manual rescaling weight given to each class. If +given, has to be a Tensor of size C

+
ignore_index
+

(int, optional) Specifies a target value that is ignored +and does not contribute to the input gradient.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_ctc_loss.html b/dev/reference/nnf_ctc_loss.html index 05e64014222552cac9df306726c7cffaee0f0b53..8741cc784b0755badb7f3dbe6a33e989d9ec1fc8 100644 --- a/dev/reference/nnf_ctc_loss.html +++ b/dev/reference/nnf_ctc_loss.html @@ -1,79 +1,18 @@ - - - - - - - -Ctc_loss — nnf_ctc_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ctc_loss — nnf_ctc_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,83 +111,65 @@

The Connectionist Temporal Classification loss.

-
nnf_ctc_loss(
-  log_probs,
-  targets,
-  input_lengths,
-  target_lengths,
-  blank = 0,
-  reduction = c("mean", "sum", "none"),
-  zero_infinity = FALSE
-)
+
+
nnf_ctc_loss(
+  log_probs,
+  targets,
+  input_lengths,
+  target_lengths,
+  blank = 0,
+  reduction = c("mean", "sum", "none"),
+  zero_infinity = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
log_probs

\((T, N, C)\) where C = number of characters in alphabet including blank, +

+

Arguments

+
log_probs
+

\((T, N, C)\) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of -the outputs (e.g. obtained with nnf_log_softmax).

targets

\((N, S)\) or (sum(target_lengths)). Targets cannot be blank. -In the second form, the targets are assumed to be concatenated.

input_lengths

\((N)\). Lengths of the inputs (must each be \(\leq T\))

target_lengths

\((N)\). Lengths of the targets

blank

(int, optional) Blank label. Default \(0\).

reduction

(string, optional) – Specifies the reduction to apply to the +the outputs (e.g. obtained with nnf_log_softmax).

+
targets
+

\((N, S)\) or (sum(target_lengths)). Targets cannot be blank. +In the second form, the targets are assumed to be concatenated.

+
input_lengths
+

\((N)\). Lengths of the inputs (must each be \(\leq T\))

+
target_lengths
+

\((N)\). Lengths of the targets

+
blank
+

(int, optional) Blank label. Default \(0\).

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

zero_infinity

(bool, optional) Whether to zero infinite losses and the +'sum': the output will be summed. Default: 'mean'

+
zero_infinity
+

(bool, optional) Whether to zero infinite losses and the associated gradients. Default: FALSE Infinite losses mainly occur when the -inputs are too short to be aligned to the targets.

- +inputs are too short to be aligned to the targets.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_dropout.html b/dev/reference/nnf_dropout.html index 3fc05f2103ed7edc2e040de4de062aee984c97e4..ecd90dc1b19d05844c374b8f0ae85bbc883b9c3f 100644 --- a/dev/reference/nnf_dropout.html +++ b/dev/reference/nnf_dropout.html @@ -1,81 +1,20 @@ - - - - - - - -Dropout — nnf_dropout • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout — nnf_dropout • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,56 +115,44 @@ tensor with probability p using samples from a Bernoulli distribution.

-
nnf_dropout(input, p = 0.5, training = TRUE, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

the input tensor

p

probability of an element to be zeroed. Default: 0.5

training

apply dropout if is TRUE. Default: TRUE

inplace

If set to TRUE, will do this operation in-place. -Default: FALSE

+
+
nnf_dropout(input, p = 0.5, training = TRUE, inplace = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
p
+

probability of an element to be zeroed. Default: 0.5

+
training
+

apply dropout if is TRUE. Default: TRUE

+
inplace
+

If set to TRUE, will do this operation in-place. +Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_dropout2d.html b/dev/reference/nnf_dropout2d.html index 850d977724efc71ef72c931e9b7c4a3185ade9f3..95c589970511a22f3a5f7cedacf65fed003d5374 100644 --- a/dev/reference/nnf_dropout2d.html +++ b/dev/reference/nnf_dropout2d.html @@ -1,83 +1,22 @@ - - - - - - - -Dropout2d — nnf_dropout2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout2d — nnf_dropout2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,56 +119,44 @@ Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

-
nnf_dropout2d(input, p = 0.5, training = TRUE, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

the input tensor

p

probability of a channel to be zeroed. Default: 0.5

training

apply dropout if is TRUE. Default: TRUE.

inplace

If set to TRUE, will do this operation in-place. -Default: FALSE

+
+
nnf_dropout2d(input, p = 0.5, training = TRUE, inplace = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
p
+

probability of a channel to be zeroed. Default: 0.5

+
training
+

apply dropout if is TRUE. Default: TRUE.

+
inplace
+

If set to TRUE, will do this operation in-place. +Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_dropout3d.html b/dev/reference/nnf_dropout3d.html index 1489b3bb5b627b76385c1ada16c1ec6920fea525..ec29f00c4efa1e89180eded6601e244d5876c855 100644 --- a/dev/reference/nnf_dropout3d.html +++ b/dev/reference/nnf_dropout3d.html @@ -1,83 +1,22 @@ - - - - - - - -Dropout3d — nnf_dropout3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dropout3d — nnf_dropout3d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -197,56 +119,44 @@ Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

-
nnf_dropout3d(input, p = 0.5, training = TRUE, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

the input tensor

p

probability of a channel to be zeroed. Default: 0.5

training

apply dropout if is TRUE. Default: TRUE.

inplace

If set to TRUE, will do this operation in-place. -Default: FALSE

+
+
nnf_dropout3d(input, p = 0.5, training = TRUE, inplace = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
p
+

probability of a channel to be zeroed. Default: 0.5

+
training
+

apply dropout if is TRUE. Default: TRUE.

+
inplace
+

If set to TRUE, will do this operation in-place. +Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_elu.html b/dev/reference/nnf_elu.html index 37042aefb9758238a008fcb29e4ffc6ce4a4c04b..2b7d84b509175d2fbb887d12b0d5ac81b384988b 100644 --- a/dev/reference/nnf_elu.html +++ b/dev/reference/nnf_elu.html @@ -1,80 +1,19 @@ - - - - - - - -Elu — nnf_elu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Elu — nnf_elu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,64 +113,56 @@ $$ELU(x) = max(0,x) + min(0, \alpha * (exp(x) - 1))$$." /> $$ELU(x) = max(0,x) + min(0, \alpha * (exp(x) - 1))$$.

-
nnf_elu(input, alpha = 1, inplace = FALSE)
-
-nnf_elu_(input, alpha = 1)
- -

Arguments

- - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

alpha

the alpha value for the ELU formulation. Default: 1.0

inplace

can optionally do the operation in-place. Default: FALSE

- - -

Examples

-
if (torch_is_installed()) {
-x <- torch_randn(2, 2)
-y <- nnf_elu(x, alpha = 1)
-nnf_elu_(x, alpha = 1)
-torch_equal(x, y)
-
-}
-#> [1] TRUE
-
+
+
nnf_elu(input, alpha = 1, inplace = FALSE)
+
+nnf_elu_(input, alpha = 1)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means, any number of additional +dimensions

+
alpha
+

the alpha value for the ELU formulation. Default: 1.0

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+ +
+

Examples

+
if (torch_is_installed()) {
+x <- torch_randn(2, 2)
+y <- nnf_elu(x, alpha = 1)
+nnf_elu_(x, alpha = 1)
+torch_equal(x, y)
+
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/nnf_embedding.html b/dev/reference/nnf_embedding.html index 6d5075148fba1f2a2e67b5fdf3b144ed058156ce..19cee9d265f61e50a7c4017a6d436aa9e54fe7d9 100644 --- a/dev/reference/nnf_embedding.html +++ b/dev/reference/nnf_embedding.html @@ -1,79 +1,18 @@ - - - - - - - -Embedding — nnf_embedding • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Embedding — nnf_embedding • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,88 +111,71 @@

A simple lookup table that looks up embeddings in a fixed dictionary and size.

-
nnf_embedding(
-  input,
-  weight,
-  padding_idx = NULL,
-  max_norm = NULL,
-  norm_type = 2,
-  scale_grad_by_freq = FALSE,
-  sparse = FALSE
-)
+
+
nnf_embedding(
+  input,
+  weight,
+  padding_idx = NULL,
+  max_norm = NULL,
+  norm_type = 2,
+  scale_grad_by_freq = FALSE,
+  sparse = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(LongTensor) Tensor containing indices into the embedding matrix

weight

(Tensor) The embedding matrix with number of rows equal to the -maximum possible index + 1, and number of columns equal to the embedding size

padding_idx

(int, optional) If given, pads the output with the embedding -vector at padding_idx (initialized to zeros) whenever it encounters the index.

max_norm

(float, optional) If given, each embedding vector with norm larger +

+

Arguments

+
input
+

(LongTensor) Tensor containing indices into the embedding matrix

+
weight
+

(Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

+
padding_idx
+

(int, optional) If given, pads the output with the embedding +vector at padding_idx (initialized to zeros) whenever it encounters the index.

+
max_norm
+

(float, optional) If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify -weight in-place.

norm_type

(float, optional) The p of the p-norm to compute for the max_norm -option. Default 2.

scale_grad_by_freq

(boolean, optional) If given, this will scale gradients -by the inverse of frequency of the words in the mini-batch. Default FALSE.

sparse

(bool, optional) If TRUE, gradient w.r.t. weight will be a +weight in-place.

+
norm_type
+

(float, optional) The p of the p-norm to compute for the max_norm +option. Default 2.

+
scale_grad_by_freq
+

(boolean, optional) If given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE.

+
sparse
+

(bool, optional) If TRUE, gradient w.r.t. weight will be a sparse tensor. See Notes under nn_embedding for more details regarding -sparse gradients.

- -

Details

- +sparse gradients.

+
+
+

Details

This module is often used to retrieve word embeddings using indices. The input to the module is a list of indices, and the embedding matrix, and the output is the corresponding word embeddings.

+
+
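A minimal usage sketch for the lookup described above (assuming the torch R package is installed; the 1-based index convention is an assumption about the R bindings and should be checked against the installed version):

```r
library(torch)
if (torch_is_installed()) {
  weight <- torch_randn(10, 3)  # 10 embeddings, each of size 3
  input <- torch_tensor(rbind(c(1, 2, 4, 5), c(4, 3, 2, 9)), dtype = torch_long())
  out <- nnf_embedding(input, weight)
  out$shape  # 2 x 4 x 3: every index is replaced by its embedding row
}
```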
-
- +
- - + + diff --git a/dev/reference/nnf_embedding_bag.html b/dev/reference/nnf_embedding_bag.html index 9a3341aa66a4ae4d5fb1fa4f60db0d58288aed92..82cccdcee83c6d7afa9d88bd69a81584e95b4d85 100644 --- a/dev/reference/nnf_embedding_bag.html +++ b/dev/reference/nnf_embedding_bag.html @@ -1,80 +1,19 @@ - - - - - - - -Embedding_bag — nnf_embedding_bag • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Embedding_bag — nnf_embedding_bag • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,103 +113,79 @@ intermediate embeddings." /> intermediate embeddings.

-
nnf_embedding_bag(
-  input,
-  weight,
-  offsets = NULL,
-  max_norm = NULL,
-  norm_type = 2,
-  scale_grad_by_freq = FALSE,
-  mode = "mean",
-  sparse = FALSE,
-  per_sample_weights = NULL,
-  include_last_offset = FALSE
-)
+
+
nnf_embedding_bag(
+  input,
+  weight,
+  offsets = NULL,
+  max_norm = NULL,
+  norm_type = 2,
+  scale_grad_by_freq = FALSE,
+  mode = "mean",
+  sparse = FALSE,
+  per_sample_weights = NULL,
+  include_last_offset = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(LongTensor) Tensor containing bags of indices into the embedding matrix

weight

(Tensor) The embedding matrix with number of rows equal to the -maximum possible index + 1, and number of columns equal to the embedding size

offsets

(LongTensor, optional) Only used when input is 1D. offsets -determines the starting index position of each bag (sequence) in input.

max_norm

(float, optional) If given, each embedding vector with norm +

+

Arguments

+
input
+

(LongTensor) Tensor containing bags of indices into the embedding matrix

+
weight
+

(Tensor) The embedding matrix with number of rows equal to the +maximum possible index + 1, and number of columns equal to the embedding size

+
offsets
+

(LongTensor, optional) Only used when input is 1D. offsets +determines the starting index position of each bag (sequence) in input.

+
max_norm
+

(float, optional) If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. -Note: this will modify weight in-place.

norm_type

(float, optional) The p in the p-norm to compute for the -max_norm option. Default 2.

scale_grad_by_freq

(boolean, optional) if given, this will scale gradients -by the inverse of frequency of the words in the mini-batch. Default FALSE. Note: this option is not supported when mode="max".

mode

(string, optional) "sum", "mean" or "max". Specifies -the way to reduce the bag. Default: 'mean'

sparse

(bool, optional) if TRUE, gradient w.r.t. weight will be a +Note: this will modify weight in-place.

+
norm_type
+

(float, optional) The p in the p-norm to compute for the +max_norm option. Default 2.

+
scale_grad_by_freq
+

(boolean, optional) if given, this will scale gradients +by the inverse of frequency of the words in the mini-batch. Default FALSE. Note: this option is not supported when mode="max".

+
mode
+

(string, optional) "sum", "mean" or "max". Specifies +the way to reduce the bag. Default: 'mean'

+
sparse
+

(bool, optional) if TRUE, gradient w.r.t. weight will be a sparse tensor. See Notes under nn_embedding for more details regarding -sparse gradients. Note: this option is not supported when mode="max".

per_sample_weights

(Tensor, optional) a tensor of float / double weights, +sparse gradients. Note: this option is not supported when mode="max".

+
per_sample_weights
+

(Tensor, optional) a tensor of float / double weights, or NULL to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated -as having the same offsets, if those are not NULL.

include_last_offset

(bool, optional) if TRUE, the size of offsets is -equal to the number of bags + 1.

- +as having the same offsets, if those are not NULL.

+
include_last_offset
+

(bool, optional) if TRUE, the size of offsets is +equal to the number of bags + 1.

+
+
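A sketch of the 1-D input plus offsets form described above (assuming torch is installed; whether offsets are 0- or 1-based in the R bindings is an assumption to verify locally):

```r
library(torch)
if (torch_is_installed()) {
  weight <- torch_randn(10, 3)
  input <- torch_tensor(c(1, 2, 4, 5, 4, 3, 2, 9), dtype = torch_long())
  # two bags of four indices each; offsets mark where each bag starts
  offsets <- torch_tensor(c(0, 4), dtype = torch_long())
  nnf_embedding_bag(input, weight, offsets = offsets, mode = "mean")  # one 3-vector per bag
}
```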
-
- +
- - + + diff --git a/dev/reference/nnf_fold.html b/dev/reference/nnf_fold.html index 53bc70f0fa8d275de561a86c3cd2baeeae82d535..214e40e0794074a475b55e08031b4d8e56c6100b 100644 --- a/dev/reference/nnf_fold.html +++ b/dev/reference/nnf_fold.html @@ -1,80 +1,19 @@ - - - - - - - -Fold — nnf_fold • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fold — nnf_fold • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,81 +113,66 @@ tensor." /> tensor.

-
nnf_fold(
-  input,
-  output_size,
-  kernel_size,
-  dilation = 1,
-  padding = 0,
-  stride = 1
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

output_size

the shape of the spatial dimensions of the output (i.e., -output$sizes()[-c(1,2)])

kernel_size

the size of the sliding blocks

dilation

a parameter that controls the stride of elements within the -neighborhood. Default: 1

padding

implicit zero padding to be added on both sides of input. -Default: 0

stride

the stride of the sliding blocks in the input spatial dimensions. -Default: 1

- -

Warning

+
+
nnf_fold(
+  input,
+  output_size,
+  kernel_size,
+  dilation = 1,
+  padding = 0,
+  stride = 1
+)
+
+
+

Arguments

+
input
+

the input tensor

+
output_size
+

the shape of the spatial dimensions of the output (i.e., +output$sizes()[-c(1,2)])

+
kernel_size
+

the size of the sliding blocks

+
dilation
+

a parameter that controls the stride of elements within the +neighborhood. Default: 1

+
padding
+

implicit zero padding to be added on both sides of input. +Default: 0

+
stride
+

the stride of the sliding blocks in the input spatial dimensions. +Default: 1

+
+
+

Warning

Currently, only 4-D output tensors (batched image-like tensors) are supported.

+
+
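Fold is the inverse of unfold: it reassembles sliding blocks into a tensor, summing values where blocks overlap. A round-trip sketch (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(1, 3, 4, 5)
  # unfold extracts 2x2 sliding blocks; fold assembles them back
  blocks <- nnf_unfold(x, kernel_size = c(2, 2))                   # 1 x 12 x 12
  y <- nnf_fold(blocks, output_size = c(4, 5), kernel_size = c(2, 2))
  y$shape  # 1 x 3 x 4 x 5
}
```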
-
- +
- - + + diff --git a/dev/reference/nnf_fractional_max_pool2d.html b/dev/reference/nnf_fractional_max_pool2d.html index 08d444f4ba93627949a7916e928226fb67c3ab23..d666b4c02db7837bf17ffd20f8731fef02abd565 100644 --- a/dev/reference/nnf_fractional_max_pool2d.html +++ b/dev/reference/nnf_fractional_max_pool2d.html @@ -1,79 +1,18 @@ - - - - - - - -Fractional_max_pool2d — nnf_fractional_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fractional_max_pool2d — nnf_fractional_max_pool2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,80 +111,65 @@

Applies 2D fractional max pooling over an input signal composed of several input planes.

-
nnf_fractional_max_pool2d(
-  input,
-  kernel_size,
-  output_size = NULL,
-  output_ratio = NULL,
-  return_indices = FALSE,
-  random_samples = NULL
-)
+
+
nnf_fractional_max_pool2d(
+  input,
+  kernel_size,
+  output_size = NULL,
+  output_ratio = NULL,
+  return_indices = FALSE,
+  random_samples = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

kernel_size

the size of the window to take a max over. Can be a +

+

Arguments

+
input
+

the input tensor

+
kernel_size
+

the size of the window to take a max over. Can be a single number \(k\) (for a square kernel of \(k * k\)) or -a tuple (kH, kW)

output_size

the target output size of the image of the form \(oH * oW\). -Can be a tuple (oH, oW) or a single number \(oH\) for a square image \(oH * oH\)

output_ratio

If one wants to have an output size as a ratio of the input size, -this option can be given. This has to be a number or tuple in the range (0, 1)

return_indices

if True, will return the indices along with the outputs.

random_samples

optional random samples.

- -

Details

- +a tuple (kH, kW)

+
output_size
+

the target output size of the image of the form \(oH * oW\). +Can be a tuple (oH, oW) or a single number \(oH\) for a square image \(oH * oH\)

+
output_ratio
+

If one wants to have an output size as a ratio of the input size, +this option can be given. This has to be a number or tuple in the range (0, 1)

+
return_indices
+

if TRUE, will return the indices along with the outputs.</dd>

+
random_samples
+

optional random samples.

+
+
+

Details

Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham</p>

The max-pooling operation is applied in \(kH * kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.

+
+
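A short sketch of the stochastic pooling step described above (assuming torch is installed; the step sizes are drawn randomly, so only the output shape is deterministic):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 8, 8)
  out <- nnf_fractional_max_pool2d(x, kernel_size = 2, output_size = c(5, 5))
  out$shape  # 1 x 1 x 5 x 5: the requested spatial output size
}
```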
-
- +
- - + + diff --git a/dev/reference/nnf_fractional_max_pool3d.html b/dev/reference/nnf_fractional_max_pool3d.html index c8e6bea42ea7262e930ba9c33e01fac9e7ce294f..17926b2e7b39c8bcff8f68bbb9101b7d781c90d8 100644 --- a/dev/reference/nnf_fractional_max_pool3d.html +++ b/dev/reference/nnf_fractional_max_pool3d.html @@ -1,79 +1,18 @@ - - - - - - - -Fractional_max_pool3d — nnf_fractional_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fractional_max_pool3d — nnf_fractional_max_pool3d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,81 +111,66 @@

Applies 3D fractional max pooling over an input signal composed of several input planes.

-
nnf_fractional_max_pool3d(
-  input,
-  kernel_size,
-  output_size = NULL,
-  output_ratio = NULL,
-  return_indices = FALSE,
-  random_samples = NULL
-)
+
+
nnf_fractional_max_pool3d(
+  input,
+  kernel_size,
+  output_size = NULL,
+  output_ratio = NULL,
+  return_indices = FALSE,
+  random_samples = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

kernel_size

the size of the window to take a max over. Can be a single number \(k\) -(for a square kernel of \(k * k * k\)) or a tuple (kT, kH, kW)

output_size

the target output size of the form \(oT * oH * oW\). +

+

Arguments

+
input
+

the input tensor

+
kernel_size
+

the size of the window to take a max over. Can be a single number \(k\) +(for a square kernel of \(k * k * k\)) or a tuple (kT, kH, kW)

+
output_size
+

the target output size of the form \(oT * oH * oW\). Can be a tuple (oT, oH, oW) or a single number \(oH\) for a cubic output -\(oH * oH * oH\)

output_ratio

If one wants to have an output size as a ratio of the +\(oH * oH * oH\)

+
output_ratio
+

If one wants to have an output size as a ratio of the input size, this option can be given. This has to be a number or tuple in the -range (0, 1)

return_indices

if True, will return the indices along with the outputs.

random_samples

undocumented argument.

- -

Details

- +range (0, 1)

+
return_indices
+

if TRUE, will return the indices along with the outputs.</dd>

+
random_samples
+

undocumented argument.

+
+
+

Details

Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham</p>

The max-pooling operation is applied in \(kT * kH * kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.

+
+
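The volumetric case works the same way on 5-D input. A sketch (assuming torch is installed; outputs are stochastic, only the shape is fixed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 8, 8, 8)
  out <- nnf_fractional_max_pool3d(x, kernel_size = 2, output_size = c(4, 4, 4))
  out$shape  # 1 x 1 x 4 x 4 x 4
}
```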
-
- +
- - + + diff --git a/dev/reference/nnf_gelu.html b/dev/reference/nnf_gelu.html index 3f93d48d699e92f4d323000db515d9809656ea69..10546ec13ee6bd2e767677c6e7d6b3864131d9ce 100644 --- a/dev/reference/nnf_gelu.html +++ b/dev/reference/nnf_gelu.html @@ -1,79 +1,18 @@ - - - - - - - -Gelu — nnf_gelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gelu — nnf_gelu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,20 +111,18 @@

Gelu

-
nnf_gelu(input)
- -

Arguments

- - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

- -

gelu(input) -> Tensor

+
+
nnf_gelu(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
+
+

gelu(input) -> Tensor

@@ -210,33 +130,30 @@ dimensions

\(GELU(x) = x * \Phi(x)\)

where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.

-

See Gaussian Error Linear Units (GELUs).

+

See Gaussian Error Linear Units (GELUs).

+
+
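A small illustration of the \(x \cdot \Phi(x)\) gating above (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_tensor(c(-2, 0, 2))
  nnf_gelu(x)  # negative inputs are shrunk toward 0; gelu(0) = 0
}
```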
-
- +
- - + + diff --git a/dev/reference/nnf_glu.html b/dev/reference/nnf_glu.html index 519236086798dd23d7429c55c6c5b43173e8f762..d3e8dbd34d9dd37d6fca2088407f6c65ed202a56 100644 --- a/dev/reference/nnf_glu.html +++ b/dev/reference/nnf_glu.html @@ -1,79 +1,18 @@ - - - - - - - -Glu — nnf_glu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Glu — nnf_glu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,54 +111,47 @@

The gated linear unit. Computes:

-
nnf_glu(input, dim = -1)
- -

Arguments

- - - - - - - - - - -
input

(Tensor) input tensor

dim

(int) dimension on which to split the input. Default: -1

- -

Details

+
+
nnf_glu(input, dim = -1)
+
+
+

Arguments

+
input
+

(Tensor) input tensor

+
dim
+

(int) dimension on which to split the input. Default: -1

+
+
+

Details

$$GLU(a, b) = a \otimes \sigma(b)$$

where input is split in half along dim to form a and b, \(\sigma\) is the sigmoid function and \(\otimes\) is the element-wise product between matrices.

-

See Language Modeling with Gated Convolutional Networks.

+

See Language Modeling with Gated Convolutional Networks.

+
+
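The split-and-gate formula above halves the chosen dimension. A sketch (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(2, 6)
  out <- nnf_glu(x)  # splits the 6 columns into a and b, returns a * sigmoid(b)
  out$shape          # 2 x 3
}
```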
-
- +
- - + + diff --git a/dev/reference/nnf_grid_sample.html b/dev/reference/nnf_grid_sample.html index 1eb12f63c27a766b308c6559cde14b9dba4e0325..afe364f1714e8302cfd613d93ad4382e6da1d22f 100644 --- a/dev/reference/nnf_grid_sample.html +++ b/dev/reference/nnf_grid_sample.html @@ -1,80 +1,19 @@ - - - - - - - -Grid_sample — nnf_grid_sample • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Grid_sample — nnf_grid_sample • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,50 +113,40 @@ output using input values and pixel locations from grid." /> output using input values and pixel locations from grid.

-
nnf_grid_sample(
-  input,
-  grid,
-  mode = c("bilinear", "nearest"),
-  padding_mode = c("zeros", "border", "reflection"),
-  align_corners = FALSE
-)
+
+
nnf_grid_sample(
+  input,
+  grid,
+  mode = c("bilinear", "nearest"),
+  padding_mode = c("zeros", "border", "reflection"),
+  align_corners = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) input of shape \((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) (4-D case) or \((N, C, D_{\mbox{in}}, H_{\mbox{in}}, W_{\mbox{in}})\) (5-D case)

grid

(Tensor) flow-field of shape \((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\) (4-D case) or \((N, D_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}, 3)\) (5-D case)

mode

(str) interpolation mode to calculate output values 'bilinear' | 'nearest'. -Default: 'bilinear'

padding_mode

(str) padding mode for outside grid values 'zeros' | 'border' -| 'reflection'. Default: 'zeros'

align_corners

(bool, optional) Geometrically, we consider the pixels of the +

+

Arguments

+
input
+

(Tensor) input of shape \((N, C, H_{\mbox{in}}, W_{\mbox{in}})\) (4-D case) or \((N, C, D_{\mbox{in}}, H_{\mbox{in}}, W_{\mbox{in}})\) (5-D case)

+
grid
+

(Tensor) flow-field of shape \((N, H_{\mbox{out}}, W_{\mbox{out}}, 2)\) (4-D case) or \((N, D_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}}, 3)\) (5-D case)

+
mode
+

(str) interpolation mode to calculate output values 'bilinear' | 'nearest'. +Default: 'bilinear'

+
padding_mode
+

(str) padding mode for outside grid values 'zeros' | 'border' +| 'reflection'. Default: 'zeros'

+
align_corners
+

(bool, optional) Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution -agnostic. This option parallels the align_corners option in nnf_interpolate(), and +agnostic. This option parallels the align_corners option in nnf_interpolate(), and so whichever option is used here should also be used there to resize the input -image before grid sampling. Default: False

- -

Details

- +image before grid sampling. Default: False

+
+
+

Details

Currently, only spatial (4-D) and volumetric (5-D) input are supported.

In the spatial (4-D) case, for input with shape @@ -254,8 +166,7 @@ the range of [-1, 1]. For example, values x = -1, y = -1input, and values x = 1, y = 1 is the right-bottom pixel of input.

If grid has values outside the range of [-1, 1], the corresponding -outputs are handled as defined by padding_mode. Options are

    -
  • padding_mode="zeros": use 0 for out-of-bound grid locations,

  • +outputs are handled as defined by padding_mode. Options are

    • padding_mode="zeros": use 0 for out-of-bound grid locations,

    • padding_mode="border": use border values for out-of-bound grid locations,

    • padding_mode="reflection": use values at locations reflected by the border for out-of-bound grid locations. For location far away @@ -263,41 +174,37 @@ from the border, it will keep being reflected until becoming in bound, e.g., (normalized) pixel location x = -3.5 reflects by border -1 and becomes x' = 1.5, then reflects by border 1 and becomes x'' = -0.5.

    • -
    - -

    Note

    - +
+
+

Note

-

This function is often used in conjunction with nnf_affine_grid() +

This function is often used in conjunction with nnf_affine_grid() +to build Spatial Transformer Networks.</p>

+
+
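An identity-warp sketch of the affine-grid pairing mentioned above (assuming torch is installed and that `nnf_affine_grid()` in the installed version accepts an `align_corners` argument, as its PyTorch counterpart does):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(1, 1, 4, 4)
  # identity affine transform: theta has shape (N, 2, 3)
  theta <- torch_tensor(matrix(c(1, 0, 0, 0, 1, 0), nrow = 2, byrow = TRUE))$unsqueeze(1)
  grid <- nnf_affine_grid(theta, size = c(1, 1, 4, 4), align_corners = FALSE)
  y <- nnf_grid_sample(x, grid, align_corners = FALSE)  # y approximately equals x
}
```

Note that the same `align_corners` value is used for both calls, as the argument description recommends.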
-
- +
- - + + diff --git a/dev/reference/nnf_group_norm.html b/dev/reference/nnf_group_norm.html index 4cb040338d45c735089beed35cbd41acbf7f5917..b2e66f054f23d76e30f82f7b6e4c2ba0012f5f59 100644 --- a/dev/reference/nnf_group_norm.html +++ b/dev/reference/nnf_group_norm.html @@ -1,79 +1,18 @@ - - - - - - - -Group_norm — nnf_group_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Group_norm — nnf_group_norm • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,59 +111,45 @@

Applies Group Normalization for last certain number of dimensions.

-
nnf_group_norm(input, num_groups, weight = NULL, bias = NULL, eps = 1e-05)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

num_groups

number of groups to separate the channels into

weight

the weight tensor

bias

the bias tensor

eps

a value added to the denominator for numerical stability. Default: 1e-5

+
+
nnf_group_norm(input, num_groups, weight = NULL, bias = NULL, eps = 1e-05)
+
+
+

Arguments

+
input
+

the input tensor

+
num_groups
+

number of groups to separate the channels into

+
weight
+

the weight tensor

+
bias
+

the bias tensor

+
eps
+

a value added to the denominator for numerical stability. Default: 1e-5

+
+
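A sketch of group normalization over channel groups (assuming torch is installed; `num_groups` must divide the channel count):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_randn(2, 6, 4, 4)
  # 6 channels split into 3 groups of 2; each group is normalized separately
  out <- nnf_group_norm(x, num_groups = 3)
  out$mean()  # close to 0 after normalization
}
```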
-
- +
- - + + diff --git a/dev/reference/nnf_gumbel_softmax.html b/dev/reference/nnf_gumbel_softmax.html index 801547ffa9b7a7e9b1b4ba966fcc50235a4d24c1..fb732860fdb22924f4b00f0d1d0d4ab1b487af64 100644 --- a/dev/reference/nnf_gumbel_softmax.html +++ b/dev/reference/nnf_gumbel_softmax.html @@ -1,80 +1,19 @@ - - - - - - - -Gumbel_softmax — nnf_gumbel_softmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gumbel_softmax — nnf_gumbel_softmax • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,55 +113,43 @@ optionally discretizes." /> optionally discretizes.

-
nnf_gumbel_softmax(logits, tau = 1, hard = FALSE, dim = -1)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
logits

[..., num_features] unnormalized log probabilities

tau

non-negative scalar temperature

hard

if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd

dim

(int) A dimension along which softmax will be computed. Default: -1.

+
+
nnf_gumbel_softmax(logits, tau = 1, hard = FALSE, dim = -1)
+
+
+

Arguments

+
logits
+

[..., num_features] unnormalized log probabilities

+
tau
+

non-negative scalar temperature

+
hard
+

if TRUE, the returned samples will be discretized as one-hot vectors, but will be differentiated as if they were the soft samples in autograd</dd>

+
dim
+

(int) A dimension along which softmax will be computed. Default: -1.

+
+
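A sketch contrasting the soft and hard (straight-through) variants (assuming torch is installed; samples are random by construction):

```r
library(torch)
if (torch_is_installed()) {
  logits <- torch_randn(3, 5)
  soft <- nnf_gumbel_softmax(logits, tau = 1)               # rows sum to 1
  hard <- nnf_gumbel_softmax(logits, tau = 1, hard = TRUE)  # rows are one-hot
  hard$sum(dim = 2)                                         # each row sums to 1
}
```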
-
- +
- - + + diff --git a/dev/reference/nnf_hardshrink.html b/dev/reference/nnf_hardshrink.html index cb3218ac201bec9c2cb885fe76eef1e30c1ef4b9..9c1e90aec74b41cffc0260b7bfd7092a81370c71 100644 --- a/dev/reference/nnf_hardshrink.html +++ b/dev/reference/nnf_hardshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Hardshrink — nnf_hardshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardshrink — nnf_hardshrink • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Applies the hard shrinkage function element-wise

-
nnf_hardshrink(input, lambd = 0.5)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

lambd

the lambda value for the Hardshrink formulation. Default: 0.5

+
+
nnf_hardshrink(input, lambd = 0.5)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
lambd
+

the lambda value for the Hardshrink formulation. Default: 0.5

+
+
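A small illustration (assuming torch is installed): values with magnitude at most `lambd` are zeroed, the rest pass through unchanged.

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_tensor(c(-1, -0.25, 0.25, 1))
  nnf_hardshrink(x, lambd = 0.5)  # -1, 0, 0, 1
}
```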
-
- +
- - + + diff --git a/dev/reference/nnf_hardsigmoid.html b/dev/reference/nnf_hardsigmoid.html index 758378945c893796725d3c76683640c99259aab6..ac8d05a197816e1a9068747a1aca94d2fd6c17b8 100644 --- a/dev/reference/nnf_hardsigmoid.html +++ b/dev/reference/nnf_hardsigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Hardsigmoid — nnf_hardsigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardsigmoid — nnf_hardsigmoid • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Applies the element-wise function \(\mbox{Hardsigmoid}(x) = \frac{ReLU6(x + 3)}{6}\)

-
nnf_hardsigmoid(input, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

inplace

NA If set to True, will do this operation in-place. Default: False

+
+
nnf_hardsigmoid(input, inplace = FALSE)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
inplace
+

If set to TRUE, will do this operation in-place. Default: FALSE</dd>

+
+
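A small illustration of the \(ReLU6(x + 3)/6\) formula above (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_tensor(c(-4, 0, 4))
  nnf_hardsigmoid(x)  # 0, 0.5, 1: a piecewise-linear approximation of sigmoid
}
```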
-
- +
- - + + diff --git a/dev/reference/nnf_hardswish.html b/dev/reference/nnf_hardswish.html index cdffac5bc0f7a055ff826a21857bf7b6c1b014e0..4c5e5ddfea35e54d1d33ffdb5926c7442d9fab34 100644 --- a/dev/reference/nnf_hardswish.html +++ b/dev/reference/nnf_hardswish.html @@ -1,80 +1,19 @@ - - - - - - - -Hardswish — nnf_hardswish • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardswish — nnf_hardswish • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,24 +113,20 @@ Searching for MobileNetV3." /> Searching for MobileNetV3.

-
nnf_hardswish(input, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

inplace

can optionally do the operation in-place. Default: FALSE

- -

Details

+
+
nnf_hardswish(input, inplace = FALSE)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
+

Details

$$ \mbox{Hardswish}(x) = \left\{ \begin{array}{ll} 0 & \mbox{if } x \le -3, \\ @@ -216,32 +134,29 @@ dimensions

x \cdot (x + 3)/6 & \mbox{otherwise} \end{array} \right. $$

+
+
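The piecewise definition above can be seen directly on a few values (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_tensor(c(-4, -1, 0, 4))
  nnf_hardswish(x)  # 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 in between
}
```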
-
- +
- - + + diff --git a/dev/reference/nnf_hardtanh.html b/dev/reference/nnf_hardtanh.html index f4292c8f5e84472b6b07c292b5dc1b6181a59768..c3163721d764cd87af053481abb56d77574b32e8 100644 --- a/dev/reference/nnf_hardtanh.html +++ b/dev/reference/nnf_hardtanh.html @@ -1,79 +1,18 @@ - - - - - - - -Hardtanh — nnf_hardtanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hardtanh — nnf_hardtanh • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,58 +111,46 @@

Applies the HardTanh function element-wise.

-
nnf_hardtanh(input, min_val = -1, max_val = 1, inplace = FALSE)
-
-nnf_hardtanh_(input, min_val = -1, max_val = 1)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

min_val

minimum value of the linear region range. Default: -1

max_val

maximum value of the linear region range. Default: 1

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_hardtanh(input, min_val = -1, max_val = 1, inplace = FALSE)
 
+nnf_hardtanh_(input, min_val = -1, max_val = 1)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
min_val
+

minimum value of the linear region range. Default: -1

+
max_val
+

maximum value of the linear region range. Default: 1

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
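HardTanh clamps its input to the linear region `[min_val, max_val]`. A small illustration (assuming torch is installed):

```r
library(torch)
if (torch_is_installed()) {
  x <- torch_tensor(c(-2, -0.5, 0.5, 2))
  nnf_hardtanh(x)  # clamps to [-1, 1]: -1, -0.5, 0.5, 1
}
```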
-
- +
- - + + diff --git a/dev/reference/nnf_hinge_embedding_loss.html b/dev/reference/nnf_hinge_embedding_loss.html index 0695bd5635445dcec96ff5123eb6041d02feb843..98ed37d70ecc500c05f96cf0f397cbb390a610ee 100644 --- a/dev/reference/nnf_hinge_embedding_loss.html +++ b/dev/reference/nnf_hinge_embedding_loss.html @@ -1,82 +1,21 @@ - - - - - - - -Hinge_embedding_loss — nnf_hinge_embedding_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hinge_embedding_loss — nnf_hinge_embedding_loss • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -195,58 +117,46 @@ using the L1 pairwise distance as x, and is typically used for learning nonlinear
 embeddings or semi-supervised learning.</p>

-
nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

margin

Has a default value of 1.

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where ** means, any number of additional dimensions

+
target
+

tensor (N,*) , same shape as the input

+
margin
+

Has a default value of 1.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
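A sketch of the call (assuming torch is installed; targets must contain only 1 and -1, which is a convention of this loss):

```r
library(torch)
if (torch_is_installed()) {
  input <- torch_randn(4)                  # e.g. pairwise distances
  target <- torch_tensor(c(1, -1, 1, -1))  # labels must be 1 or -1
  nnf_hinge_embedding_loss(input, target, margin = 1, reduction = "mean")
}
```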
-
- +
- - + + diff --git a/dev/reference/nnf_instance_norm.html b/dev/reference/nnf_instance_norm.html index cb6e74c3ad99d78adf18c0542d6700cbb77e6cbc..2e9a0d7daeedca4cb8eedf231d52769bdc9c8961 100644 --- a/dev/reference/nnf_instance_norm.html +++ b/dev/reference/nnf_instance_norm.html @@ -1,80 +1,19 @@ - - - - - - - -Instance_norm — nnf_instance_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Instance_norm — nnf_instance_norm • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,80 +113,60 @@ batch." /> batch.

-
nnf_instance_norm(
-  input,
-  running_mean = NULL,
-  running_var = NULL,
-  weight = NULL,
-  bias = NULL,
-  use_input_stats = TRUE,
-  momentum = 0.1,
-  eps = 1e-05
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

running_mean

the running_mean tensor

running_var

the running var tensor

weight

the weight tensor

bias

the bias tensor

use_input_stats

whether to use input stats

momentum

a double for the momentum

eps

an eps double for numerical stability

+
+
nnf_instance_norm(
+  input,
+  running_mean = NULL,
+  running_var = NULL,
+  weight = NULL,
+  bias = NULL,
+  use_input_stats = TRUE,
+  momentum = 0.1,
+  eps = 1e-05
+)
+
+
+

Arguments

+
input
+

the input tensor

+
running_mean
+

the running_mean tensor

+
running_var
+

the running var tensor

+
weight
+

the weight tensor

+
bias
+

the bias tensor

+
use_input_stats
+

whether to use input stats

+
momentum
+

a double for the momentum

+
eps
+

an eps double for numerical stability

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_interpolate.html b/dev/reference/nnf_interpolate.html index fb8d98c033fa011caab311232872e9ec0d83f298..5439eecdaa7367eaf01da0bbdbdbbae191f02a81 100644 --- a/dev/reference/nnf_interpolate.html +++ b/dev/reference/nnf_interpolate.html @@ -1,80 +1,19 @@ - - - - - - - -Interpolate — nnf_interpolate • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Interpolate — nnf_interpolate • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,40 +113,32 @@ scale_factor" /> scale_factor

-
nnf_interpolate(
-  input,
-  size = NULL,
-  scale_factor = NULL,
-  mode = "nearest",
-  align_corners = FALSE,
-  recompute_scale_factor = NULL
-)
+
+
nnf_interpolate(
+  input,
+  size = NULL,
+  scale_factor = NULL,
+  mode = "nearest",
+  align_corners = FALSE,
+  recompute_scale_factor = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the input tensor

size

(int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) -output spatial size.

scale_factor

(float or Tuple[float]) multiplier for spatial size. -Has to match input size if it is a tuple.

mode

(str) algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' -| 'bicubic' | 'trilinear' | 'area' Default: 'nearest'

align_corners

(bool, optional) Geometrically, we consider the pixels +

+

Arguments

+
input
+

(Tensor) the input tensor

+
size
+

(int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) +output spatial size.

+
scale_factor
+

(float or Tuple[float]) multiplier for spatial size. +Has to match input size if it is a tuple.

+
mode
+

(str) algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' +| 'bicubic' | 'trilinear' | 'area' Default: 'nearest'

+
align_corners
+

(bool, optional) Geometrically, we consider the pixels of the input and output as squares rather than points. If set to TRUE, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the @@ -232,11 +146,9 @@ input and output tensors are aligned by the corner points of their corner pixels and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', -'bicubic' or 'trilinear'. Default: False

recompute_scale_factor

(bool, optional) recompute the scale_factor +'bicubic' or 'trilinear'. Default: False

+
recompute_scale_factor
+

(bool, optional) recompute the scale_factor
for use in the interpolation calculation. When scale_factor is passed as a
parameter, it is used to compute the output_size. If recompute_scale_factor
is `TRUE` or not specified, a new scale_factor will be computed based on
@@ -245,12 +157,10 @@
computation will be identical to if the computed `output_size` were passed-in
explicitly). Otherwise, the passed-in `scale_factor` will be used in the
interpolation computation. Note that when `scale_factor` is floating-point,
the recomputed scale_factor may differ from the one passed in due to rounding
-and precision issues.

- -

Details

- +and precision issues.

+
+
+

Details

The algorithm used for interpolation is determined by mode.

Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.

@@ -258,32 +168,29 @@ expected inputs are 3-D, 4-D or 5-D in shape.

mini-batch x channels x [optional depth] x [optional height] x width.

The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area
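As a quick usage sketch for the R API (assuming the torch package is loaded; the tensor shapes here are illustrative):

```r
library(torch)

x <- torch_randn(1, 1, 4, 4)  # (batch, channels, H, W)

# double the spatial size with nearest-neighbour upsampling
y_up <- nnf_interpolate(x, scale_factor = 2, mode = "nearest")

# resize to an explicit 3 x 3 output with bilinear interpolation
y_3x3 <- nnf_interpolate(x, size = c(3, 3), mode = "bilinear",
                         align_corners = FALSE)
```

Here `y_up` has spatial size 8 x 8 and `y_3x3` has spatial size 3 x 3.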

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_kl_div.html b/dev/reference/nnf_kl_div.html index 7affdfb4719a42cc79d4d9742d12baad71d8f81b..8d58abfa7f425fa78a731dc21627831135ee8a71 100644 --- a/dev/reference/nnf_kl_div.html +++ b/dev/reference/nnf_kl_div.html @@ -1,79 +1,18 @@ - - - - - - - -Kl_div — nnf_kl_div • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kl_div — nnf_kl_div • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,54 +111,44 @@

The Kullback-Leibler divergence Loss.
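A minimal sketch of calling this from R. An assumption carried over from the underlying torch operator: `input` is expected to hold log-probabilities, while `target` holds probabilities.

```r
library(torch)

# input: log-probabilities; target: probabilities
input  <- nnf_log_softmax(torch_randn(3, 5), dim = 2)
target <- nnf_softmax(torch_randn(3, 5), dim = 2)

# with reduction = "mean" the result is a scalar tensor
loss <- nnf_kl_div(input, target, reduction = "mean")
```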

-
nnf_kl_div(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_kl_div(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*) , same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_l1_loss.html b/dev/reference/nnf_l1_loss.html index b826a02ccad1409c946a8204f5e325301bc25da0..dd9ce93a341b2c54e413af288d681ea2165fe8b2 100644 --- a/dev/reference/nnf_l1_loss.html +++ b/dev/reference/nnf_l1_loss.html @@ -1,79 +1,18 @@ - - - - - - - -L1_loss — nnf_l1_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -L1_loss — nnf_l1_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,54 +111,44 @@

Function that takes the mean element-wise absolute value difference.

-
nnf_l1_loss(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_l1_loss(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*) , same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_layer_norm.html b/dev/reference/nnf_layer_norm.html index add5c019acb1c4ec2569045a0e8f11857536e54b..b9b0866812e854dcafec5c7b0cac425acbe1d211 100644 --- a/dev/reference/nnf_layer_norm.html +++ b/dev/reference/nnf_layer_norm.html @@ -1,79 +1,18 @@ - - - - - - - -Layer_norm — nnf_layer_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Layer_norm — nnf_layer_norm • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,67 +111,53 @@

Applies Layer Normalization over the last certain number of dimensions.

-
nnf_layer_norm(
-  input,
-  normalized_shape,
-  weight = NULL,
-  bias = NULL,
-  eps = 1e-05
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

normalized_shape

input shape from an expected input of size. If a single -integer is used, it is treated as a singleton list, and this module will normalize -over the last dimension which is expected to be of that specific size.

weight

the weight tensor

bias

the bias tensor

eps

a value added to the denominator for numerical stability. Default: 1e-5

+
+
nnf_layer_norm(
+  input,
+  normalized_shape,
+  weight = NULL,
+  bias = NULL,
+  eps = 1e-05
+)
+
+
+

Arguments

+
input
+

the input tensor

+
normalized_shape
+

input shape from an expected input of size. If a single +integer is used, it is treated as a singleton list, and this module will normalize +over the last dimension which is expected to be of that specific size.

+
weight
+

the weight tensor

+
bias
+

the bias tensor

+
eps
+

a value added to the denominator for numerical stability. Default: 1e-5
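A sketch of the single-integer form of `normalized_shape` (assuming torch is loaded): normalization is applied over the last dimension of that size, and `weight`/`bias` may be omitted.

```r
library(torch)

x <- torch_randn(2, 3, 4)

# normalize over the last dimension, which has size 4; no affine
# weight/bias supplied, so only the normalization itself is applied
y <- nnf_layer_norm(x, normalized_shape = 4)
```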

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_leaky_relu.html b/dev/reference/nnf_leaky_relu.html index 8e5b7a81485667ff48bb9507ff7f567783939b06..c82e619d6092934b5fbc4c3fb356d054316fa7d5 100644 --- a/dev/reference/nnf_leaky_relu.html +++ b/dev/reference/nnf_leaky_relu.html @@ -1,80 +1,19 @@ - - - - - - - -Leaky_relu — nnf_leaky_relu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Leaky_relu — nnf_leaky_relu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,52 +113,42 @@ \(LeakyReLU(x) = max(0, x) + negative_slope * min(0, x)\)

-
nnf_leaky_relu(input, negative_slope = 0.01, inplace = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

negative_slope

Controls the angle of the negative slope. Default: 1e-2

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_leaky_relu(input, negative_slope = 0.01, inplace = FALSE)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional
+dimensions

+
negative_slope
+

Controls the angle of the negative slope. Default: 1e-2

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_linear.html b/dev/reference/nnf_linear.html index 496ce6fb8249f4d4d83c55ecf5fd3d9e8b833bec..c9be07b5147ebf42862b3eed3091360c64859312 100644 --- a/dev/reference/nnf_linear.html +++ b/dev/reference/nnf_linear.html @@ -1,79 +1,18 @@ - - - - - - - -Linear — nnf_linear • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Linear — nnf_linear • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,42 @@

Applies a linear transformation to the incoming data: \(y = xA^T + b\).
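A minimal sketch (torch loaded; sizes illustrative). Note the weight is stored as (out_features, in_features), so the op computes the equivalent of `x %*% t(w) + b`.

```r
library(torch)

x <- torch_randn(8, 3)   # (N, in_features)
w <- torch_randn(5, 3)   # (out_features, in_features)
b <- torch_zeros(5)

y <- nnf_linear(x, w, b) # shape (8, 5)
```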

-
nnf_linear(input, weight, bias = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
input

\((N, *, in\_features)\) where * means any number of -additional dimensions

weight

\((out\_features, in\_features)\) the weights tensor.

bias

optional tensor \((out\_features)\)

+
+
nnf_linear(input, weight, bias = NULL)
+
+
+

Arguments

+
input
+

\((N, *, in\_features)\) where * means any number of +additional dimensions

+
weight
+

\((out\_features, in\_features)\) the weights tensor.

+
bias
+

optional tensor \((out\_features)\)

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_local_response_norm.html b/dev/reference/nnf_local_response_norm.html index ce6cd0714d8a2e4903086df2153129b5f41d5ace..5c965ed2378d368022d6dc21e96d2190eff97e24 100644 --- a/dev/reference/nnf_local_response_norm.html +++ b/dev/reference/nnf_local_response_norm.html @@ -1,81 +1,20 @@ - - - - - - - -Local_response_norm — nnf_local_response_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Local_response_norm — nnf_local_response_norm • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,59 +115,45 @@ several input planes, where channels occupy the second dimension. Applies normalization across channels.

-
nnf_local_response_norm(input, size, alpha = 1e-04, beta = 0.75, k = 1)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

size

amount of neighbouring channels used for normalization

alpha

multiplicative factor. Default: 0.0001

beta

exponent. Default: 0.75

k

additive factor. Default: 1

+
+
nnf_local_response_norm(input, size, alpha = 1e-04, beta = 0.75, k = 1)
+
+
+

Arguments

+
input
+

the input tensor

+
size
+

amount of neighbouring channels used for normalization

+
alpha
+

multiplicative factor. Default: 0.0001

+
beta
+

exponent. Default: 0.75

+
k
+

additive factor. Default: 1

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_log_softmax.html b/dev/reference/nnf_log_softmax.html index 25ded6ecf3c80150841fc8ebc3080f48723f4b29..7e97bc85bdf1d54322644d583b5143cdde78b466 100644 --- a/dev/reference/nnf_log_softmax.html +++ b/dev/reference/nnf_log_softmax.html @@ -1,79 +1,18 @@ - - - - - - - -Log_softmax — nnf_log_softmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Log_softmax — nnf_log_softmax • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,59 +111,50 @@

Applies a softmax followed by a logarithm.

-
nnf_log_softmax(input, dim = NULL, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
input

(Tensor) input

dim

(int) A dimension along which log_softmax will be computed.

dtype

(torch.dtype, optional) the desired data type of returned tensor. +

+
nnf_log_softmax(input, dim = NULL, dtype = NULL)
+
+ +
+

Arguments

+
input
+

(Tensor) input

+
dim
+

(int) A dimension along which log_softmax will be computed.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor.
If specified, the input tensor is cast to dtype before the operation is
performed. This is useful for preventing data type overflows.
-Default: NULL.

- -

Details

- +Default: NULL.

+
+
+

Details

While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
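To see the claimed equivalence numerically (a sketch; `dim = 2` is the last dimension of this 2-D tensor in R's 1-based indexing):

```r
library(torch)

x  <- torch_randn(2, 5)
lp <- nnf_log_softmax(x, dim = 2)

# exponentiating the result recovers the softmax: each row sums to 1
row_sums <- lp$exp()$sum(dim = 2)
```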

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_logsigmoid.html b/dev/reference/nnf_logsigmoid.html index bef09e463c108505ac29723d66a6b5a8706ae408..643dd77099c6c37897fc6eed901db724bef57bd9 100644 --- a/dev/reference/nnf_logsigmoid.html +++ b/dev/reference/nnf_logsigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Logsigmoid — nnf_logsigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logsigmoid — nnf_logsigmoid • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,44 +111,38 @@

Applies element-wise \(LogSigmoid(x_i) = log(\frac{1}{1 + exp(-x_i)})\)

-
nnf_logsigmoid(input)
- -

Arguments

- - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

+
+
nnf_logsigmoid(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional
+dimensions

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_lp_pool1d.html b/dev/reference/nnf_lp_pool1d.html index 806f3809cf1cbd73f77fd6a32364477468f29187..8bbe26f125450d31bbedc71a0f2f0936375beceb 100644 --- a/dev/reference/nnf_lp_pool1d.html +++ b/dev/reference/nnf_lp_pool1d.html @@ -1,81 +1,20 @@ - - - - - - - -Lp_pool1d — nnf_lp_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lp_pool1d — nnf_lp_pool1d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,60 +115,46 @@ several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well.

-
nnf_lp_pool1d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

norm_type

if inf than one gets max pooling if 0 you get sum pooling ( -proportional to the avg pooling)

kernel_size

a single int, the size of the window

stride

a single int, the stride of the window. Default value is kernel_size

ceil_mode

when True, will use ceil instead of floor to compute the output shape

+
+
nnf_lp_pool1d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
norm_type
+

if inf, one gets max pooling; if 0, one gets sum pooling (which is
+proportional to the avg pooling)

+
kernel_size
+

a single int, the size of the window

+
stride
+

a single int, the stride of the window. Default value is kernel_size

+
ceil_mode
+

when True, will use ceil instead of floor to compute the output shape

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_lp_pool2d.html b/dev/reference/nnf_lp_pool2d.html index afad2d11b42019d1cc2817118937bed13976b35f..a33bd52a170ea5612b5711f0e1e47727226b2332 100644 --- a/dev/reference/nnf_lp_pool2d.html +++ b/dev/reference/nnf_lp_pool2d.html @@ -1,81 +1,20 @@ - - - - - - - -Lp_pool2d — nnf_lp_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lp_pool2d — nnf_lp_pool2d • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,60 +115,46 @@ several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well.

-
nnf_lp_pool2d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

norm_type

if inf than one gets max pooling if 0 you get sum pooling ( -proportional to the avg pooling)

kernel_size

a single int, the size of the window

stride

a single int, the stride of the window. Default value is kernel_size

ceil_mode

when True, will use ceil instead of floor to compute the output shape

+
+
nnf_lp_pool2d(input, norm_type, kernel_size, stride = NULL, ceil_mode = FALSE)
+
+
+

Arguments

+
input
+

the input tensor

+
norm_type
+

if inf, one gets max pooling; if 0, one gets sum pooling (which is
+proportional to the avg pooling)

+
kernel_size
+

a single int, the size of the window

+
stride
+

a single int, the stride of the window. Default value is kernel_size

+
ceil_mode
+

when True, will use ceil instead of floor to compute the output shape

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_margin_ranking_loss.html b/dev/reference/nnf_margin_ranking_loss.html index 1b86a0220688ff629cc8acade297bf64177c8ba0..383b4bf8c28154548f06745e3e26f0add8d39d6e 100644 --- a/dev/reference/nnf_margin_ranking_loss.html +++ b/dev/reference/nnf_margin_ranking_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Margin_ranking_loss — nnf_margin_ranking_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Margin_ranking_loss — nnf_margin_ranking_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,62 +113,48 @@ mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1)." /> mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).

-
nnf_margin_ranking_loss(input1, input2, target, margin = 0, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input1

the first tensor

input2

the second input tensor

target

the target tensor

margin

Has a default value of 00 .

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_margin_ranking_loss(input1, input2, target, margin = 0, reduction = "mean")
+
+ +
+

Arguments

+
input1
+

the first tensor

+
input2
+

the second input tensor

+
target
+

the target tensor

+
margin
+

Has a default value of 0.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_pool1d.html b/dev/reference/nnf_max_pool1d.html index e97ddf1b9fa0714a94ffc49f48a7cf0ee37461f8..59927fb2543e0c012a9ab5d681d83703d20f9fa0 100644 --- a/dev/reference/nnf_max_pool1d.html +++ b/dev/reference/nnf_max_pool1d.html @@ -1,80 +1,19 @@ - - - - - - - -Max_pool1d — nnf_max_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_pool1d — nnf_max_pool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,80 +113,62 @@ planes." /> planes.

-
nnf_max_pool1d(
-  input,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  dilation = 1,
-  ceil_mode = FALSE,
-  return_indices = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape (minibatch , in_channels , iW)

kernel_size

the size of the window. Can be a single number or a -tuple (kW,).

stride

the stride of the window. Can be a single number or a tuple -(sW,). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padW,). Default: 0

dilation

controls the spacing between the kernel points; also known as -the à trous algorithm.

ceil_mode

when True, will use ceil instead of floor to compute the -output shape. Default: FALSE

return_indices

whether to return the indices where the max occurs.

+
+
nnf_max_pool1d(
+  input,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  dilation = 1,
+  ceil_mode = FALSE,
+  return_indices = FALSE
+)
+
+
+

Arguments

+
input
+

input tensor of shape (minibatch , in_channels , iW)

+
kernel_size
+

the size of the window. Can be a single number or a +tuple (kW,).

+
stride
+

the stride of the window. Can be a single number or a tuple +(sW,). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padW,). Default: 0

+
dilation
+

controls the spacing between the kernel points; also known as +the à trous algorithm.

+
ceil_mode
+

when True, will use ceil instead of floor to compute the +output shape. Default: FALSE

+
return_indices
+

whether to return the indices where the max occurs.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_pool2d.html b/dev/reference/nnf_max_pool2d.html index b2c9026e6441ee1539b781fea6b378e4641470e5..638b8e05cd66c56a594757f1a8ef7849c48284e0 100644 --- a/dev/reference/nnf_max_pool2d.html +++ b/dev/reference/nnf_max_pool2d.html @@ -1,80 +1,19 @@ - - - - - - - -Max_pool2d — nnf_max_pool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_pool2d — nnf_max_pool2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,80 +113,62 @@ planes." /> planes.

-
nnf_max_pool2d(
-  input,
-  kernel_size,
-  stride = kernel_size,
-  padding = 0,
-  dilation = 1,
-  ceil_mode = FALSE,
-  return_indices = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iH , iW)

kernel_size

size of the pooling region. Can be a single number or a -tuple (kH, kW)

stride

stride of the pooling operation. Can be a single number or a -tuple (sH, sW). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padH, padW). Default: 0

dilation

controls the spacing between the kernel points; also known as -the à trous algorithm.

ceil_mode

when True, will use ceil instead of floor in the formula -to compute the output shape. Default: FALSE

return_indices

whether to return the indices where the max occurs.

+
+
nnf_max_pool2d(
+  input,
+  kernel_size,
+  stride = kernel_size,
+  padding = 0,
+  dilation = 1,
+  ceil_mode = FALSE,
+  return_indices = FALSE
+)
+
+
+

Arguments

+
input
+

input tensor (minibatch, in_channels , iH , iW)

+
kernel_size
+

size of the pooling region. Can be a single number or a +tuple (kH, kW)

+
stride
+

stride of the pooling operation. Can be a single number or a +tuple (sH, sW). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padH, padW). Default: 0

+
dilation
+

controls the spacing between the kernel points; also known as +the à trous algorithm.

+
ceil_mode
+

when True, will use ceil instead of floor in the formula +to compute the output shape. Default: FALSE

+
return_indices
+

whether to return the indices where the max occurs.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_pool3d.html b/dev/reference/nnf_max_pool3d.html index df79f8a52f1f779e7315fc3dd7de207bd268dd40..781fde0a5e9283e96190e0ef7ae6d06416d4ea3c 100644 --- a/dev/reference/nnf_max_pool3d.html +++ b/dev/reference/nnf_max_pool3d.html @@ -1,80 +1,19 @@ - - - - - - - -Max_pool3d — nnf_max_pool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_pool3d — nnf_max_pool3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,80 +113,62 @@ planes." /> planes.

-
nnf_max_pool3d(
-  input,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  dilation = 1,
-  ceil_mode = FALSE,
-  return_indices = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor (minibatch, in_channels , iT * iH , iW)

kernel_size

size of the pooling region. Can be a single number or a -tuple (kT, kH, kW)

stride

stride of the pooling operation. Can be a single number or a -tuple (sT, sH, sW). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a -single number or a tuple (padT, padH, padW), Default: 0

dilation

controls the spacing between the kernel points; also known as -the à trous algorithm.

ceil_mode

when True, will use ceil instead of floor in the formula -to compute the output shape

return_indices

whether to return the indices where the max occurs.

+
+
nnf_max_pool3d(
+  input,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  dilation = 1,
+  ceil_mode = FALSE,
+  return_indices = FALSE
+)
+
+
+

Arguments

+
input
+

input tensor (minibatch , in_channels , iT , iH , iW)

+
kernel_size
+

size of the pooling region. Can be a single number or a +tuple (kT, kH, kW)

+
stride
+

stride of the pooling operation. Can be a single number or a +tuple (sT, sH, sW). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a +single number or a tuple (padT, padH, padW), Default: 0

+
dilation
+

controls the spacing between the kernel points; also known as +the à trous algorithm.

+
ceil_mode
+

when True, will use ceil instead of floor in the formula +to compute the output shape

+
return_indices
+

whether to return the indices where the max occurs.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_unpool1d.html b/dev/reference/nnf_max_unpool1d.html index cca2af42fbde09b3f5b65b9f8aecd57a48c1f0bd..16391e58c101e3f5259f3f4ece4d58ed7710f4c7 100644 --- a/dev/reference/nnf_max_unpool1d.html +++ b/dev/reference/nnf_max_unpool1d.html @@ -1,79 +1,18 @@ - - - - - - - -Max_unpool1d — nnf_max_unpool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_unpool1d — nnf_max_unpool1d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,70 +111,54 @@

Computes a partial inverse of MaxPool1d.

-
nnf_max_unpool1d(
-  input,
-  indices,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  output_size = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input Tensor to invert

indices

the indices given out by max pool

kernel_size

Size of the max pooling window.

stride

Stride of the max pooling window. It is set to kernel_size by default.

padding

Padding that was added to the input

output_size

the targeted output size

+
+
nnf_max_unpool1d(
+  input,
+  indices,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  output_size = NULL
+)
+
+
+

Arguments

+
input
+

the input Tensor to invert

+
indices
+

the indices given out by max pool

+
kernel_size
+

Size of the max pooling window.

+
stride
+

Stride of the max pooling window. It is set to kernel_size by default.

+
padding
+

Padding that was added to the input

+
output_size
+

the targeted output size

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_unpool2d.html b/dev/reference/nnf_max_unpool2d.html index a61d95e416b1fe0143fa6d9b26ba44875becaa5a..59cbefc6d797f4de472a4713b2d2e9efef2821b3 100644 --- a/dev/reference/nnf_max_unpool2d.html +++ b/dev/reference/nnf_max_unpool2d.html @@ -1,79 +1,18 @@ - - - - - - - -Max_unpool2d — nnf_max_unpool2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_unpool2d — nnf_max_unpool2d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,70 +111,54 @@

Computes a partial inverse of MaxPool2d.
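A round-trip sketch (assuming that, in the R API, `nnf_max_pool2d()` with `return_indices = TRUE` returns the pooled values and the indices as a two-element list):

```r
library(torch)

x   <- torch_randn(1, 1, 4, 4)
out <- nnf_max_pool2d(x, kernel_size = 2, return_indices = TRUE)

pooled  <- out[[1]]  # (1, 1, 2, 2) max values
indices <- out[[2]]  # positions of the maxima

# scatter the maxima back into a tensor of the original spatial size;
# all non-maximal positions are filled with zeros
recovered <- nnf_max_unpool2d(pooled, indices, kernel_size = 2)
```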

-
nnf_max_unpool2d(
-  input,
-  indices,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  output_size = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input Tensor to invert

indices

the indices given out by max pool

kernel_size

Size of the max pooling window.

stride

Stride of the max pooling window. It is set to kernel_size by default.

padding

Padding that was added to the input

output_size

the targeted output size

+
+
nnf_max_unpool2d(
+  input,
+  indices,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  output_size = NULL
+)
+
+
+

Arguments

+
input
+

the input Tensor to invert

+
indices
+

the indices given out by max pool

+
kernel_size
+

Size of the max pooling window.

+
stride
+

Stride of the max pooling window. It is set to kernel_size by default.

+
padding
+

Padding that was added to the input

+
output_size
+

the targeted output size

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_max_unpool3d.html b/dev/reference/nnf_max_unpool3d.html index 0d390ee90c7bb54c8ee600df14f1691c11415b23..2108b5cd9872b33dc12f6c8e98c1e660fc5ed3de 100644 --- a/dev/reference/nnf_max_unpool3d.html +++ b/dev/reference/nnf_max_unpool3d.html @@ -1,79 +1,18 @@ - - - - - - - -Max_unpool3d — nnf_max_unpool3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max_unpool3d — nnf_max_unpool3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,70 +111,54 @@

Computes a partial inverse of MaxPool3d.

-
nnf_max_unpool3d(
-  input,
-  indices,
-  kernel_size,
-  stride = NULL,
-  padding = 0,
-  output_size = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

the input Tensor to invert

indices

the indices given out by max pool

kernel_size

Size of the max pooling window.

stride

Stride of the max pooling window. It is set to kernel_size by default.

padding

Padding that was added to the input

output_size

the targeted output size

+
+
nnf_max_unpool3d(
+  input,
+  indices,
+  kernel_size,
+  stride = NULL,
+  padding = 0,
+  output_size = NULL
+)
+
+
+

Arguments

+
input
+

the input Tensor to invert

+
indices
+

the indices given out by max pool

+
kernel_size
+

Size of the max pooling window.

+
stride
+

Stride of the max pooling window. It is set to kernel_size by default.

+
padding
+

Padding that was added to the input

+
output_size
+

the targeted output size

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_mse_loss.html b/dev/reference/nnf_mse_loss.html index cd43552e4b478925b64b27b8c56133cae13cb2aa..0cceac70f745a2c5ee07275f794d707b40c0c05f 100644 --- a/dev/reference/nnf_mse_loss.html +++ b/dev/reference/nnf_mse_loss.html @@ -1,79 +1,18 @@ - - - - - - - -Mse_loss — nnf_mse_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mse_loss — nnf_mse_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,54 +111,44 @@

Measures the element-wise mean squared error.
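The `reduction` argument behaves the same way as in the other `nnf_*` losses; a sketch:

```r
library(torch)

input  <- torch_randn(4, 3)
target <- torch_randn(4, 3)

loss_mean <- nnf_mse_loss(input, target)                     # scalar
loss_sum  <- nnf_mse_loss(input, target, reduction = "sum")  # scalar
loss_none <- nnf_mse_loss(input, target, reduction = "none") # (4, 3)
```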

-
nnf_mse_loss(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_mse_loss(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*) , same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_multi_head_attention_forward.html b/dev/reference/nnf_multi_head_attention_forward.html index 04df8d8e65f41663e3141d02d1c3ddb42303cf77..7644bb3839deb2a9ba465f1eeb859968bb2adf46 100644 --- a/dev/reference/nnf_multi_head_attention_forward.html +++ b/dev/reference/nnf_multi_head_attention_forward.html @@ -1,80 +1,19 @@ - - - - - - - -Multi head attention forward — nnf_multi_head_attention_forward • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multi head attention forward — nnf_multi_head_attention_forward • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,177 +113,125 @@ subspaces. See reference: Attention Is All You Need" /> subspaces. See reference: Attention Is All You Need

-
nnf_multi_head_attention_forward(
-  query,
-  key,
-  value,
-  embed_dim_to_check,
-  num_heads,
-  in_proj_weight,
-  in_proj_bias,
-  bias_k,
-  bias_v,
-  add_zero_attn,
-  dropout_p,
-  out_proj_weight,
-  out_proj_bias,
-  training = TRUE,
-  key_padding_mask = NULL,
-  need_weights = TRUE,
-  attn_mask = NULL,
-  avg_weights = TRUE,
-  use_separate_proj_weight = FALSE,
-  q_proj_weight = NULL,
-  k_proj_weight = NULL,
-  v_proj_weight = NULL,
-  static_k = NULL,
-  static_v = NULL
-)
+
+
nnf_multi_head_attention_forward(
+  query,
+  key,
+  value,
+  embed_dim_to_check,
+  num_heads,
+  in_proj_weight,
+  in_proj_bias,
+  bias_k,
+  bias_v,
+  add_zero_attn,
+  dropout_p,
+  out_proj_weight,
+  out_proj_bias,
+  training = TRUE,
+  key_padding_mask = NULL,
+  need_weights = TRUE,
+  attn_mask = NULL,
+  avg_weights = TRUE,
+  use_separate_proj_weight = FALSE,
+  q_proj_weight = NULL,
+  k_proj_weight = NULL,
+  v_proj_weight = NULL,
+  static_k = NULL,
+  static_v = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
query

\((L, N, E)\) where L is the target sequence length, N is the batch size, E is -the embedding dimension.

key

\((S, N, E)\), where S is the source sequence length, N is the batch size, E is -the embedding dimension.

value

\((S, N, E)\) where S is the source sequence length, N is the batch size, E is -the embedding dimension.

embed_dim_to_check

total dimension of the model.

num_heads

parallel attention heads.

in_proj_weight

input projection weight and bias.

in_proj_bias

currently undocumented.

bias_k

bias of the key and value sequences to be added at dim=0.

bias_v

currently undocumented.

add_zero_attn

add a new batch of zeros to the key and -value sequences at dim=1.

dropout_p

probability of an element to be zeroed.

out_proj_weight

the output projection weight and bias.

out_proj_bias

currently undocumented.

training

apply dropout if is TRUE.

key_padding_mask

\((N, S)\) where N is the batch size, S is the source sequence length. +

+

Arguments

+
query
+

\((L, N, E)\) where L is the target sequence length, N is the batch size, E is +the embedding dimension.

+
key
+

\((S, N, E)\), where S is the source sequence length, N is the batch size, E is +the embedding dimension.

+
value
+

\((S, N, E)\) where S is the source sequence length, N is the batch size, E is +the embedding dimension.

+
embed_dim_to_check
+

total dimension of the model.

+
num_heads
+

parallel attention heads.

+
in_proj_weight
+

input projection weight and bias.

+
in_proj_bias
+

currently undocumented.

+
bias_k
+

bias of the key and value sequences to be added at dim=0.

+
bias_v
+

currently undocumented.

+
add_zero_attn
+

add a new batch of zeros to the key and +value sequences at dim=1.

+
dropout_p
+

probability of an element to be zeroed.

+
out_proj_weight
+

the output projection weight and bias.

+
out_proj_bias
+

currently undocumented.

+
training
+

apply dropout if TRUE.

+
key_padding_mask
+

\((N, S)\) where N is the batch size, S is the source sequence length. If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions will be unchanged. If a BoolTensor is provided, the positions with the -value of True will be ignored while the positions with the value of False will be unchanged.

need_weights

output attn_output_weights.

attn_mask

2D mask \((L, S)\) where L is the target sequence length, S is the source sequence length. +value of True will be ignored while the position with the value of False will be unchanged.

+
need_weights
+

output attn_output_weights.

+
attn_mask
+

2D mask \((L, S)\) where L is the target sequence length, S is the source sequence length. 3D mask \((N*num_heads, L, S)\) where N is the batch size, L is the target sequence length, S is the source sequence length. attn_mask ensures that position i is allowed to attend to the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged. If a BoolTensor is provided, positions with True are not allowed to attend while False values will be unchanged. If a FloatTensor -is provided, it will be added to the attention weight.

avg_weights

Logical; whether to average attn_output_weights over the +is provided, it will be added to the attention weight.

+
avg_weights
+

Logical; whether to average attn_output_weights over the attention heads before outputting them. This doesn't change the returned -value of attn_output; it only affects the returned attention weight matrix.

use_separate_proj_weight

the function accept the proj. weights for +value of attn_output; it only affects the returned attention weight matrix.

+
use_separate_proj_weight
+

the function accepts the proj. weights for query, key, and value in different forms. If FALSE, in_proj_weight will be used, -which is a combination of q_proj_weight, k_proj_weight, v_proj_weight.

q_proj_weight

input projection weight and bias.

k_proj_weight

currently undocumented.

v_proj_weight

currently undocumented.

static_k

static key and value used for attention operators.

static_v

currently undocumented.

- +which is a combination of q_proj_weight, k_proj_weight, v_proj_weight.

+
q_proj_weight
+

input projection weight and bias.

+
k_proj_weight
+

currently undocumented.

+
v_proj_weight
+

currently undocumented.

+
static_k
+

static key and value used for attention operators.

+
static_v
+

currently undocumented.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_multi_margin_loss.html b/dev/reference/nnf_multi_margin_loss.html index ef6d1eced81bab8bd5b965cd5d374fcd4e1b901a..48dc26e5484137eebe67d990d87000291be7f029 100644 --- a/dev/reference/nnf_multi_margin_loss.html +++ b/dev/reference/nnf_multi_margin_loss.html @@ -1,81 +1,20 @@ - - - - - - - -Multi_margin_loss — nnf_multi_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multi_margin_loss — nnf_multi_margin_loss • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,74 +115,58 @@ (which is a 1D tensor of target class indices, 0 <= y <= x$size(2) - 1 ).

-
nnf_multi_margin_loss(
-  input,
-  target,
-  p = 1,
-  margin = 1,
-  weight = NULL,
-  reduction = "mean"
-)
+
+
nnf_multi_margin_loss(
+  input,
+  target,
+  p = 1,
+  margin = 1,
+  weight = NULL,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

p

Has a default value of 1. 1 and 2 are the only supported values.

margin

Has a default value of 1.

weight

a manual rescaling weight given to each class. If given, it has to -be a Tensor of size C. Otherwise, it is treated as if having all ones.

reduction

(string, optional) – Specifies the reduction to apply to the +

+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
p
+

Has a default value of 1. 1 and 2 are the only supported values.

+
margin
+

Has a default value of 1.

+
weight
+

a manual rescaling weight given to each class. If given, it has to +be a Tensor of size C. Otherwise, it is treated as if having all ones.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'
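As a rough illustration of the per-sample computation described above, here is a pure-Python sketch (illustrative only; it is not part of the torch R package, plain lists stand in for tensors, the function name is hypothetical, and the class index is 0-based here while R is 1-based):

```python
def multi_margin_loss_sample(x, y, p=1, margin=1.0):
    # mean over classes i != y of max(0, margin - x[y] + x[i])^p
    return sum(
        max(0.0, margin - x[y] + x[i]) ** p
        for i in range(len(x)) if i != y
    ) / len(x)

# a score that beats the others by at least `margin` incurs no loss
multi_margin_loss_sample([2.5, 0.0, 0.0], 0)   # 0.0
multi_margin_loss_sample([0.0, 0.0], 0)        # 0.5
```

With `reduction = "mean"` the library averages these per-sample values over the batch.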

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_multilabel_margin_loss.html b/dev/reference/nnf_multilabel_margin_loss.html index c2e9a5c0c2b3f090e6326fae644b6ace84ccc5f8..ea89b6cbb9f85a74c1954bae5e4784caf4d2f969 100644 --- a/dev/reference/nnf_multilabel_margin_loss.html +++ b/dev/reference/nnf_multilabel_margin_loss.html @@ -1,81 +1,20 @@ - - - - - - - -Multilabel_margin_loss — nnf_multilabel_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multilabel_margin_loss — nnf_multilabel_margin_loss • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,54 +115,44 @@ is a 2D Tensor of target class indices)." /> is a 2D Tensor of target class indices).

-
nnf_multilabel_margin_loss(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_multilabel_margin_loss(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_multilabel_soft_margin_loss.html b/dev/reference/nnf_multilabel_soft_margin_loss.html index 0664cc2f2eb46697e132e66dd56d9f0af72abf99..03086d892d0dfb03d90278276932073401700d85 100644 --- a/dev/reference/nnf_multilabel_soft_margin_loss.html +++ b/dev/reference/nnf_multilabel_soft_margin_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Multilabel_soft_margin_loss — nnf_multilabel_soft_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multilabel_soft_margin_loss — nnf_multilabel_soft_margin_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,63 +113,51 @@ max-entropy, between input x and target y of size (N, C)." /> max-entropy, between input x and target y of size (N, C).

-
nnf_multilabel_soft_margin_loss(
-  input,
-  target,
-  weight = NULL,
-  reduction = "mean"
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

weight

weight tensor to apply on the loss.

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_multilabel_soft_margin_loss(
+  input,
+  target,
+  weight = NULL,
+  reduction = "mean"
+)
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
weight
+

weight tensor to apply on the loss.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_nll_loss.html b/dev/reference/nnf_nll_loss.html index 6523573e75a116b859a7ddf0858764933e8b2ae3..991ccd754c4a9c2dd7108851ea6315605f9de1c7 100644 --- a/dev/reference/nnf_nll_loss.html +++ b/dev/reference/nnf_nll_loss.html @@ -1,79 +1,18 @@ - - - - - - - -Nll_loss — nnf_nll_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nll_loss — nnf_nll_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,73 +111,59 @@

The negative log likelihood loss.

-
nnf_nll_loss(
-  input,
-  target,
-  weight = NULL,
-  ignore_index = -100,
-  reduction = "mean"
-)
+
+
nnf_nll_loss(
+  input,
+  target,
+  weight = NULL,
+  ignore_index = -100,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

\((N, C)\) where C = number of classes or \((N, C, H, W)\) in +

+

Arguments

+
input
+

\((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in -the case of K-dimensional loss.

target

\((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), -or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

weight

(Tensor, optional) a manual rescaling weight given to each class. -If given, has to be a Tensor of size C

ignore_index

(int, optional) Specifies a target value that is ignored and -does not contribute to the input gradient.

reduction

(string, optional) – Specifies the reduction to apply to the +the case of K-dimensional loss.

+
target
+

\((N)\) where each value is \(0 \leq \mbox{targets}[i] \leq C-1\), +or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.

+
weight
+

(Tensor, optional) a manual rescaling weight given to each class. +If given, has to be a Tensor of size C

+
ignore_index
+

(int, optional) Specifies a target value that is ignored and +does not contribute to the input gradient.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'
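The reduction behaviour described above can be sketched in a few lines of Python (illustrative only; not the package's implementation, plain lists stand in for tensors, and targets are 0-based here while R is 1-based):

```python
def nll_loss(log_probs, targets, reduction="mean"):
    # NLL over already-log-transformed class scores:
    # pick the negated log-probability of each sample's target class
    losses = [-row[t] for row, t in zip(log_probs, targets)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total

nll_loss([[-0.5, -1.2], [-2.0, -0.1]], [0, 1])   # (0.5 + 0.1) / 2, i.e. about 0.3
```

Note that the input is expected to already contain log-probabilities (e.g. the output of a log-softmax), which this sketch does not check.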

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_normalize.html b/dev/reference/nnf_normalize.html index b84357e4954ed78ae964dde4ad9d27c245ebbd3e..07cdccd4e6ebe7e4f8cbf56625be959a444b66a5 100644 --- a/dev/reference/nnf_normalize.html +++ b/dev/reference/nnf_normalize.html @@ -1,79 +1,18 @@ - - - - - - - -Normalize — nnf_normalize • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Normalize — nnf_normalize • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,35 +111,25 @@

Performs \(L_p\) normalization of inputs over specified dimension.

-
nnf_normalize(input, p = 2, dim = 2, eps = 1e-12, out = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of any shape

p

(float) the exponent value in the norm formulation. Default: 2

dim

(int) the dimension to reduce. Default: 1

eps

(float) small value to avoid division by zero. Default: 1e-12

out

(Tensor, optional) the output tensor. If out is used, this operation won't be differentiable.

- -

Details

+
+
nnf_normalize(input, p = 2, dim = 2, eps = 1e-12, out = NULL)
+
+
+

Arguments

+
input
+

input tensor of any shape

+
p
+

(float) the exponent value in the norm formulation. Default: 2

+
dim
+

(int) the dimension to reduce. Default: 2

+
eps
+

(float) small value to avoid division by zero. Default: 1e-12

+
out
+

(Tensor, optional) the output tensor. If out is used, this operation won't be differentiable.

+
+
+

Details

For a tensor input of sizes \((n_0, ..., n_{dim}, ..., n_k)\), each \(n_{dim}\) -element vector \(v\) along dimension dim is transformed as

$$ @@ -225,32 +137,29 @@ $$

With the default arguments it uses the Euclidean norm over vectors along dimension \(1\) for normalization.
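The transformation above, applied to a single vector, can be sketched in plain Python (illustrative only; not part of the package, and a plain list stands in for one slice of the tensor along `dim`):

```python
def lp_normalize(vec, p=2.0, eps=1e-12):
    # v / max(||v||_p, eps); eps keeps an all-zero vector finite
    norm = sum(abs(x) ** p for x in vec) ** (1.0 / p)
    return [x / max(norm, eps) for x in vec]

lp_normalize([3.0, 4.0])   # [0.6, 0.8], since the Euclidean norm is 5
```

The library applies this per-vector computation to every slice along the chosen dimension.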

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_one_hot.html b/dev/reference/nnf_one_hot.html index e73ebccb669b75585f6db2de1b929f1be83fc931..4a7ad8ebef58a4f688ba395913134bc3ecd046b8 100644 --- a/dev/reference/nnf_one_hot.html +++ b/dev/reference/nnf_one_hot.html @@ -1,82 +1,21 @@ - - - - - - - -One_hot — nnf_one_hot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -One_hot — nnf_one_hot • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,52 +117,45 @@ index of last dimension matches the corresponding value of the input tensor, in which case it will be 1.

-
nnf_one_hot(tensor, num_classes = -1)
- -

Arguments

- - - - - - - - - - -
tensor

(LongTensor) class values of any shape.

num_classes

(int) Total number of classes. If set to -1, the number -of classes will be inferred as one greater than the largest class value in -the input tensor.

- -

Details

+
+
nnf_one_hot(tensor, num_classes = -1)
+
+
+

Arguments

+
tensor
+

(LongTensor) class values of any shape.

+
num_classes
+

(int) Total number of classes. If set to -1, the number +of classes will be inferred as one greater than the largest class value in +the input tensor.

+
+
+

Details

One-hot on Wikipedia: https://en.wikipedia.org/wiki/One-hot
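The encoding and the `num_classes = -1` inference rule can be sketched in plain Python (illustrative only; lists stand in for tensors, and class values are 0-based here while R is 1-based):

```python
def one_hot(values, num_classes=-1):
    # with num_classes == -1, infer the count as one more than the largest value
    if num_classes == -1:
        num_classes = max(values) + 1
    return [[1 if c == v else 0 for c in range(num_classes)] for v in values]

one_hot([0, 2, 1])               # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
one_hot([0, 1], num_classes=4)   # [[1, 0, 0, 0], [0, 1, 0, 0]]
```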

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_pad.html b/dev/reference/nnf_pad.html index ed87fcc0c1cc8c373cfba43ac737963c7d6e63ae..fec417500db7c7932300ca4168afd879babb02a3 100644 --- a/dev/reference/nnf_pad.html +++ b/dev/reference/nnf_pad.html @@ -1,79 +1,18 @@ - - - - - - - -Pad — nnf_pad • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pad — nnf_pad • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,32 +111,24 @@

Pads tensor.

-
nnf_pad(input, pad, mode = "constant", value = 0)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

(Tensor) N-dimensional tensor

pad

(tuple) m-elements tuple, where \(\frac{m}{2} \leq\) input dimensions -and \(m\) is even.

mode

'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'

value

fill value for 'constant' padding. Default: 0.

- -

Padding size

+
+
nnf_pad(input, pad, mode = "constant", value = 0)
+
+
+

Arguments

+
input
+

(Tensor) N-dimensional tensor

+
pad
+

(tuple) m-element tuple, where \(\frac{m}{2} \leq\) input dimensions +and \(m\) is even.

+
mode
+

'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'

+
value
+

fill value for 'constant' padding. Default: 0.

+
+
+

Padding size

@@ -232,8 +146,9 @@ to pad the last 3 dimensions, use \((\mbox{padding\_left}, \mbox{padding\_right},\) \(\mbox{padding\_top}, \mbox{padding\_bottom}\) \(\mbox{padding\_front}, \mbox{padding\_back})\).

-

Padding mode

- +
+
+

Padding mode

@@ -243,32 +158,29 @@ padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of 5D input tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor.

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_pairwise_distance.html b/dev/reference/nnf_pairwise_distance.html index 8ed95eb5a840cd857ae5a6eb1d6037de1f966fdd..57f8e386e5aebcd6faebe26aa893f30a821a190f 100644 --- a/dev/reference/nnf_pairwise_distance.html +++ b/dev/reference/nnf_pairwise_distance.html @@ -1,79 +1,18 @@ - - - - - - - -Pairwise_distance — nnf_pairwise_distance • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pairwise_distance — nnf_pairwise_distance • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,60 +111,46 @@

Computes the batchwise pairwise distance between vectors using the p-norm.

-
nnf_pairwise_distance(x1, x2, p = 2, eps = 1e-06, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
x1

(Tensor) First input.

x2

(Tensor) Second input (of size matching x1).

p

the norm degree. Default: 2

eps

(float, optional) Small value to avoid division by zero. -Default: 1e-8

keepdim

Determines whether or not to keep the vector dimension. Default: False

+
+
nnf_pairwise_distance(x1, x2, p = 2, eps = 1e-06, keepdim = FALSE)
+
+
+

Arguments

+
x1
+

(Tensor) First input.

+
x2
+

(Tensor) Second input (of size matching x1).

+
p
+

the norm degree. Default: 2

+
eps
+

(float, optional) Small value to avoid division by zero. +Default: 1e-6

+
keepdim
+

Determines whether or not to keep the vector dimension. Default: FALSE
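A hedged Python sketch of the batchwise computation (illustrative only; lists of rows stand in for 2D tensors, and the exact placement of `eps` inside the library's norm may differ slightly from this approximation):

```python
def pairwise_distance(x1, x2, p=2.0, eps=1e-6):
    # one distance per paired row of x1 and x2, i.e. batchwise, not all-pairs
    return [
        sum((abs(a - b) + eps) ** p for a, b in zip(r1, r2)) ** (1.0 / p)
        for r1, r2 in zip(x1, x2)
    ]

pairwise_distance([[0.0, 0.0]], [[3.0, 4.0]])   # roughly [5.0]
```

Note this pairs row i of `x1` with row i of `x2`; it does not compute all cross-row distances (see `nnf_pdist` for the all-pairs case).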

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_pdist.html b/dev/reference/nnf_pdist.html index f42bb9ffe6b73f1a419edec28d512e066750fde1..07ce44bc031f9c1c3497219565efe5e3f1329108 100644 --- a/dev/reference/nnf_pdist.html +++ b/dev/reference/nnf_pdist.html @@ -1,82 +1,21 @@ - - - - - - - -Pdist — nnf_pdist • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pdist — nnf_pdist • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,52 +117,45 @@ This is identical to the upper triangular portion, excluding the diagonal, of if the rows are contiguous.

-
nnf_pdist(input, p = 2)
- -

Arguments

- - - - - - - - - - -
input

input tensor of shape \(N \times M\).

p

p value for the p-norm distance to calculate between each vector pair -\(\in [0, \infty]\).

- -

Details

+
+
nnf_pdist(input, p = 2)
+
+
+

Arguments

+
input
+

input tensor of shape \(N \times M\).

+
p
+

p value for the p-norm distance to calculate between each vector pair +\(\in [0, \infty]\).

+
+
+

Details

If input has shape \(N \times M\) then the output will have shape \(\frac{1}{2} N (N - 1)\).
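The upper-triangular ordering and the \(\frac{1}{2} N (N - 1)\) output length can be sketched in plain Python (illustrative only; a list of rows stands in for the \(N \times M\) tensor):

```python
def pdist(rows, p=2.0):
    # row-major upper triangle (i < j) of the pairwise p-norm distance matrix
    out = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            out.append(sum(abs(a - b) ** p
                           for a, b in zip(rows[i], rows[j])) ** (1.0 / p))
    return out

d = pdist([[0.0, 0.0], [3.0, 4.0], [0.0, 3.0]])
len(d)   # 3, i.e. N * (N - 1) / 2 for N = 3
```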

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_pixel_shuffle.html b/dev/reference/nnf_pixel_shuffle.html index f52ee374d9e61d52a848c11353b0399c33396606..614e459afbdc0f0c1fb47d304631ed84e14b7449 100644 --- a/dev/reference/nnf_pixel_shuffle.html +++ b/dev/reference/nnf_pixel_shuffle.html @@ -1,80 +1,19 @@ - - - - - - - -Pixel_shuffle — nnf_pixel_shuffle • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pixel_shuffle — nnf_pixel_shuffle • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,47 +113,39 @@ tensor of shape \((*, C, H \times r, W \times r)\)." /> tensor of shape \((*, C, H \times r, W \times r)\).

-
nnf_pixel_shuffle(input, upscale_factor)
- -

Arguments

- - - - - - - - - - -
input

(Tensor) the input tensor

upscale_factor

(int) factor to increase spatial resolution by

+
+
nnf_pixel_shuffle(input, upscale_factor)
+
+
+

Arguments

+
input
+

(Tensor) the input tensor

+
upscale_factor
+

(int) factor to increase spatial resolution by

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_poisson_nll_loss.html b/dev/reference/nnf_poisson_nll_loss.html index 0ac6d58eadf9b538a418babc9e2ca170fcb767ea..3cfea463d3e785c7ce945dccd180aeb85a0d011c 100644 --- a/dev/reference/nnf_poisson_nll_loss.html +++ b/dev/reference/nnf_poisson_nll_loss.html @@ -1,79 +1,18 @@ - - - - - - - -Poisson_nll_loss — nnf_poisson_nll_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Poisson_nll_loss — nnf_poisson_nll_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,77 +111,61 @@

Poisson negative log likelihood loss.

-
nnf_poisson_nll_loss(
-  input,
-  target,
-  log_input = TRUE,
-  full = FALSE,
-  eps = 1e-08,
-  reduction = "mean"
-)
+
+
nnf_poisson_nll_loss(
+  input,
+  target,
+  log_input = TRUE,
+  full = FALSE,
+  eps = 1e-08,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

log_input

if TRUE the loss is computed as \(\exp(\mbox{input}) - \mbox{target} * \mbox{input}\), +

+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
log_input
+

if TRUE the loss is computed as \(\exp(\mbox{input}) - \mbox{target} * \mbox{input}\), if FALSE then loss is \(\mbox{input} - \mbox{target} * \log(\mbox{input}+\mbox{eps})\). -Default: TRUE.

full

whether to compute full loss, i. e. to add the Stirling approximation -term. Default: FALSE.

eps

(float, optional) Small value to avoid evaluation of \(\log(0)\) when -log_input=FALSE. Default: 1e-8

reduction

(string, optional) – Specifies the reduction to apply to the +Default: TRUE.

+
full
+

whether to compute the full loss, i.e. to add the Stirling approximation +term. Default: FALSE.

+
eps
+

(float, optional) Small value to avoid evaluation of \(\log(0)\) when +log_input=FALSE. Default: 1e-8

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'
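The two `log_input` branches described above can be sketched in Python (illustrative only; lists stand in for tensors, and the `full = TRUE` Stirling term is deliberately omitted here):

```python
import math

def poisson_nll_loss(input, target, log_input=True, eps=1e-8, reduction="mean"):
    if log_input:
        # exp(input) - target * input
        losses = [math.exp(i) - t * i for i, t in zip(input, target)]
    else:
        # input - target * log(input + eps); eps guards against log(0)
        losses = [i - t * math.log(i + eps) for i, t in zip(input, target)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total

poisson_nll_loss([0.0], [0.0])   # exp(0) - 0 = 1.0
```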

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_prelu.html b/dev/reference/nnf_prelu.html index adcaef0d3bbca9e0f297cf5832c9dddaecaee789..6ddc62b893c2d3755c5fe11c53c324cdd13b50c8 100644 --- a/dev/reference/nnf_prelu.html +++ b/dev/reference/nnf_prelu.html @@ -1,81 +1,20 @@ - - - - - - - -Prelu — nnf_prelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Prelu — nnf_prelu • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,48 +115,40 @@ where weight is a learnable parameter." /> where weight is a learnable parameter.

-
nnf_prelu(input, weight)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

weight

(Tensor) the learnable weights

+
+
nnf_prelu(input, weight)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
weight
+

(Tensor) the learnable weights

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_relu.html b/dev/reference/nnf_relu.html index 065895e568b0d3decb837b6926ba6f9f3a4a3c06..9591a3f6e4d50ef0218c60a4b8ec06359368a2bb 100644 --- a/dev/reference/nnf_relu.html +++ b/dev/reference/nnf_relu.html @@ -1,79 +1,18 @@ - - - - - - - -Relu — nnf_relu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Relu — nnf_relu • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,50 +111,42 @@

Applies the rectified linear unit function element-wise.

-
nnf_relu(input, inplace = FALSE)
+    
+
nnf_relu(input, inplace = FALSE)
 
-nnf_relu_(input)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

inplace

can optionally do the operation in-place. Default: FALSE

+nnf_relu_(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+ -
- +
- - + + diff --git a/dev/reference/nnf_relu6.html b/dev/reference/nnf_relu6.html index 361ff1b646b51ec252df6652c515a7b7f57a1f27..ab285bc9bae24c3e21f64236e966a22394b2fc42 100644 --- a/dev/reference/nnf_relu6.html +++ b/dev/reference/nnf_relu6.html @@ -1,79 +1,18 @@ - - - - - - - -Relu6 — nnf_relu6 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Relu6 — nnf_relu6 • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Applies the element-wise function \(ReLU6(x) = min(max(0,x), 6)\).

-
nnf_relu6(input, inplace = FALSE)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_relu6(input, inplace = FALSE)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_rrelu.html b/dev/reference/nnf_rrelu.html index 52bba455cb90762764a9a0b4a88dbb491d8084bc..4db7e858e03abf064b5a68d6f30d39803310286d 100644 --- a/dev/reference/nnf_rrelu.html +++ b/dev/reference/nnf_rrelu.html @@ -1,79 +1,18 @@ - - - - - - - -Rrelu — nnf_rrelu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rrelu — nnf_rrelu • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,62 +111,48 @@

Randomized leaky ReLU.

-
nnf_rrelu(input, lower = 1/8, upper = 1/3, training = FALSE, inplace = FALSE)
-
-nnf_rrelu_(input, lower = 1/8, upper = 1/3, training = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

lower

lower bound of the uniform distribution. Default: 1/8

upper

upper bound of the uniform distribution. Default: 1/3

training

bool wether it's a training pass. DEfault: FALSE

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_rrelu(input, lower = 1/8, upper = 1/3, training = FALSE, inplace = FALSE)
 
+nnf_rrelu_(input, lower = 1/8, upper = 1/3, training = FALSE)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
lower
+

lower bound of the uniform distribution. Default: 1/8

+
upper
+

upper bound of the uniform distribution. Default: 1/3

+
training
+

bool, whether it's a training pass. Default: FALSE

+
inplace
+

can optionally do the operation in-place. Default: FALSE
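A scalar Python sketch of the randomized leaky ReLU semantics (illustrative only; my understanding of the training/evaluation split, namely a random slope per element in training and the fixed mean of the bounds in evaluation, follows the PyTorch counterpart and should be checked against the package if it matters):

```python
import random

def rrelu(x, lower=1/8, upper=1/3, training=False):
    # negatives are scaled by a slope drawn from U(lower, upper) in training,
    # and by the fixed mean (lower + upper) / 2 in evaluation
    if x >= 0:
        return x
    slope = random.uniform(lower, upper) if training else (lower + upper) / 2
    return x * slope

rrelu(3.0)    # 3.0, positive inputs pass through unchanged
rrelu(-2.0)   # -2 * (1/8 + 1/3) / 2
```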

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_selu.html b/dev/reference/nnf_selu.html index 17ef302b0e2acfda78a0f3094111388ab445c95b..356af2bf515c31e651cebea977593fa294ae4e07 100644 --- a/dev/reference/nnf_selu.html +++ b/dev/reference/nnf_selu.html @@ -1,82 +1,21 @@ - - - - - - - -Selu — nnf_selu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Selu — nnf_selu • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -195,60 +117,54 @@ with \(\alpha=1.6732632423543772848170429916717\) and \(scale=1.0507009873554804934193349852946\).

-
nnf_selu(input, inplace = FALSE)
-
-nnf_selu_(input)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

inplace

can optionally do the operation in-place. Default: FALSE

- - -

Examples

-
if (torch_is_installed()) {
-x <- torch_randn(2, 2)
-y <- nnf_selu(x)
-nnf_selu_(x)
-torch_equal(x, y)
-
-}
-#> [1] TRUE
-
+
+
nnf_selu(input, inplace = FALSE)
+
+nnf_selu_(input)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+ +
+

Examples

+
if (torch_is_installed()) {
+x <- torch_randn(2, 2)
+y <- nnf_selu(x)
+nnf_selu_(x)
+torch_equal(x, y)
+
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/nnf_sigmoid.html b/dev/reference/nnf_sigmoid.html index a328ed1490155cf08125bd58d55ec00259dbf8c9..a191edb54a1e9ed2480838adc32a0415a278d027 100644 --- a/dev/reference/nnf_sigmoid.html +++ b/dev/reference/nnf_sigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Sigmoid — nnf_sigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sigmoid — nnf_sigmoid • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,44 +111,38 @@

Applies element-wise \(Sigmoid(x_i) = \frac{1}{1 + exp(-x_i)}\)

-
nnf_sigmoid(input)
- -

Arguments

- - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

+
+
nnf_sigmoid(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_smooth_l1_loss.html b/dev/reference/nnf_smooth_l1_loss.html index 9390ba445d96305538d7633f77c766ac749c3b7d..f59bad5b16f8886d3aca704a848a380d9402152f 100644 --- a/dev/reference/nnf_smooth_l1_loss.html +++ b/dev/reference/nnf_smooth_l1_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Smooth_l1_loss — nnf_smooth_l1_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Smooth_l1_loss — nnf_smooth_l1_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,54 +113,44 @@ element-wise error falls below 1 and an L1 term otherwise." /> element-wise error falls below 1 and an L1 term otherwise.

-
nnf_smooth_l1_loss(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_smooth_l1_loss(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'
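The squared-below-1 / linear-above-1 rule described above can be sketched in plain Python (illustrative only; lists stand in for tensors, and this is the classic fixed-threshold form of the loss):

```python
def smooth_l1_loss(input, target, reduction="mean"):
    losses = []
    for i, t in zip(input, target):
        d = abs(i - t)
        # squared (L2-like) branch below 1, linear (L1) branch at or above 1
        losses.append(0.5 * d * d if d < 1 else d - 0.5)
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total

smooth_l1_loss([0.0, 2.0], [0.0, 0.0])   # (0 + 1.5) / 2 = 0.75
```

The two branches meet with matching value and slope at \(|d| = 1\), which is what makes the loss less sensitive to outliers than plain MSE.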

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_soft_margin_loss.html b/dev/reference/nnf_soft_margin_loss.html index eaaa3dc57d400993efdd8a020834b5c8ba8de03b..32a5a33e367a47abfb04af04f6730c416f393f5c 100644 --- a/dev/reference/nnf_soft_margin_loss.html +++ b/dev/reference/nnf_soft_margin_loss.html @@ -1,80 +1,19 @@ - - - - - - - -Soft_margin_loss — nnf_soft_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Soft_margin_loss — nnf_soft_margin_loss • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,54 +113,44 @@ between input tensor x and target tensor y (containing 1 or -1)." /> between input tensor x and target tensor y (containing 1 or -1).

-
nnf_soft_margin_loss(input, target, reduction = "mean")
- -

Arguments

- - - - - - - - - - - - - - -
input

tensor (N,*) where ** means, any number of additional dimensions

target

tensor (N,*) , same shape as the input

reduction

(string, optional) – Specifies the reduction to apply to the +

+
nnf_soft_margin_loss(input, target, reduction = "mean")
+
+ +
+

Arguments

+
input
+

tensor (N,*) where * means any number of additional dimensions

+
target
+

tensor (N,*), same shape as the input

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'
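The logistic form of this loss, \(\log(1 + e^{-y x})\) per element with targets in \(\{-1, 1\}\), can be sketched in Python (illustrative only; lists stand in for tensors, and this direct formula can overflow for very negative margins, which the library handles more carefully):

```python
import math

def soft_margin_loss(input, target, reduction="mean"):
    # per-element log(1 + exp(-y * x)) with y in {-1, 1}
    losses = [math.log(1.0 + math.exp(-y * x)) for x, y in zip(input, target)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total

soft_margin_loss([0.0], [1.0])   # log(2), about 0.693
```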

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_softmax.html b/dev/reference/nnf_softmax.html index d39e3ad4ca862055935237abf79d98faa1730350..b70e7229f02f1a9c5e497653733f116d087bdd64 100644 --- a/dev/reference/nnf_softmax.html +++ b/dev/reference/nnf_softmax.html @@ -1,79 +1,18 @@ - - - - - - - -Softmax — nnf_softmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softmax — nnf_softmax • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,58 +111,49 @@

Applies a softmax function.

-
nnf_softmax(input, dim, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
input

(Tensor) input

dim

(int) A dimension along which softmax will be computed.

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. -Default: NULL.

- -

Details

+
+
nnf_softmax(input, dim, dtype = NULL)
+
+
+

Arguments

+
input
+

(Tensor) input

+
dim
+

(int) A dimension along which softmax will be computed.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. +Default: NULL.

+
+
+

Details

Softmax is defined as:

$$Softmax(x_{i}) = exp(x_i)/\sum_j exp(x_j)$$

It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1.

+
+
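The definition and normalization property above can be sketched in plain Python (an illustration of the formula only, not the torch implementation; the `softmax` helper is hypothetical):

```python
import math

def softmax(xs):
    # Subtracting the max is the usual numerical-stability trick;
    # it leaves the result unchanged.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# Every element lies in [0, 1] and the slice sums to 1.
```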
-
- +
- - + + diff --git a/dev/reference/nnf_softmin.html b/dev/reference/nnf_softmin.html index f73920d163f7609154b0e8f66fb358c2ef092ff6..ed3ec9668ac18ac05705c79eed2faa1065ffeea7 100644 --- a/dev/reference/nnf_softmin.html +++ b/dev/reference/nnf_softmin.html @@ -1,79 +1,18 @@ - - - - - - - -Softmin — nnf_softmin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softmin — nnf_softmin • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,58 +111,49 @@

Applies a softmin function.

-
nnf_softmin(input, dim, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
input

(Tensor) input

dim

(int) A dimension along which softmin will be computed -(so every slice along dim will sum to 1).

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. -This is useful for preventing data type overflows. Default: NULL.

- -

Details

+
+
nnf_softmin(input, dim, dtype = NULL)
+
+
+

Arguments

+
input
+

(Tensor) input

+
dim
+

(int) A dimension along which softmin will be computed +(so every slice along dim will sum to 1).

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. +This is useful for preventing data type overflows. Default: NULL.

+
+
+

Details

Note that

$$Softmin(x) = Softmax(-x)$$.

-

See nnf_softmax definition for mathematical formula.

+

See nnf_softmax definition for mathematical formula.

+
+
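The identity above can be checked numerically with a small Python sketch (illustrative only; `softmax` and `softmin` here are hand-rolled helpers, not the torch functions):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmin(xs):
    # Softmin(x) = Softmax(-x)
    return softmax([-x for x in xs])

xs = [1.0, 2.0, 3.0]
```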
-
- +
- - + + diff --git a/dev/reference/nnf_softplus.html b/dev/reference/nnf_softplus.html index e827697369491b824c1b771ad054899a4d3897d2..0299a9005829e0d3a5c3d0cfbd9379c7b930468b 100644 --- a/dev/reference/nnf_softplus.html +++ b/dev/reference/nnf_softplus.html @@ -1,79 +1,18 @@ - - - - - - - -Softplus — nnf_softplus • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softplus — nnf_softplus • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,56 +111,47 @@

Applies element-wise, the function \(Softplus(x) = 1/\beta * log(1 + exp(\beta * x))\).

-
nnf_softplus(input, beta = 1, threshold = 20)
- -

Arguments

- - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

beta

the beta value for the Softplus formulation. Default: 1

threshold

values above this revert to a linear function. Default: 20

- -

Details

+
+
nnf_softplus(input, beta = 1, threshold = 20)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
beta
+

the beta value for the Softplus formulation. Default: 1

+
threshold
+

values above this revert to a linear function. Default: 20

+
+
+

Details

For numerical stability the implementation reverts to the linear function when \(input * \beta > threshold\).

+
+
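The linear fallback described above can be sketched as follows (plain Python; an illustration of the stated rule, not the torch source):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # Revert to the identity when beta * x is large: exp(beta * x)
    # would overflow, and log(1 + exp(z)) ~ z for large z anyway.
    if beta * x > threshold:
        return x
    return (1.0 / beta) * math.log1p(math.exp(beta * x))
```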
-
- +
- - + + diff --git a/dev/reference/nnf_softshrink.html b/dev/reference/nnf_softshrink.html index 5d1ddbf8b2a112c971d60efa4bcfc66f6f5aee70..aed100b44f9a12f7518c96a187c6732758b2fdeb 100644 --- a/dev/reference/nnf_softshrink.html +++ b/dev/reference/nnf_softshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Softshrink — nnf_softshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softshrink — nnf_softshrink • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,49 +111,41 @@

Applies the soft shrinkage function elementwise

-
nnf_softshrink(input, lambd = 0.5)
- -

Arguments

- - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

lambd

the lambda (must be no less than zero) value for the Softshrink -formulation. Default: 0.5

+
+
nnf_softshrink(input, lambd = 0.5)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
lambd
+

the lambda (must be no less than zero) value for the Softshrink +formulation. Default: 0.5

+
+
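For reference, the standard soft-shrinkage rule (values shrunk toward zero by `lambd`, with the band `[-lambd, lambd]` zeroed out) can be sketched in Python — an illustration of the definition, not the torch implementation:

```python
def softshrink(x, lambd=0.5):
    # Shrink toward zero by lambd; zero out |x| <= lambd.
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0
```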
-
- +
- - + + diff --git a/dev/reference/nnf_softsign.html b/dev/reference/nnf_softsign.html index 2d7213e2aa658689e6c4a67679e24d7d21c9f7d7..5608418eb9aac6572f147064c4dbd398aa225b00 100644 --- a/dev/reference/nnf_softsign.html +++ b/dev/reference/nnf_softsign.html @@ -1,79 +1,18 @@ - - - - - - - -Softsign — nnf_softsign • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Softsign — nnf_softsign • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,44 +111,38 @@

Applies element-wise, the function \(SoftSign(x) = x/(1 + |x|)\)</p>

-
nnf_softsign(input)
- -

Arguments

- - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

+
+
nnf_softsign(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
+
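The formula can be sketched in plain Python (an illustration only, not the torch implementation):

```python
def softsign(x):
    # SoftSign(x) = x / (1 + |x|): squashes any real input into (-1, 1).
    return x / (1.0 + abs(x))
```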
-
- +
- - + + diff --git a/dev/reference/nnf_tanhshrink.html b/dev/reference/nnf_tanhshrink.html index a76b6a65d0bdef8076791765a9ba2bf65f12ce27..379f14756a56f4048e9c731cf6b15c98b43e45cb 100644 --- a/dev/reference/nnf_tanhshrink.html +++ b/dev/reference/nnf_tanhshrink.html @@ -1,79 +1,18 @@ - - - - - - - -Tanhshrink — nnf_tanhshrink • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tanhshrink — nnf_tanhshrink • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,44 +111,38 @@

Applies element-wise, \(Tanhshrink(x) = x - Tanh(x)\)

-
nnf_tanhshrink(input)
- -

Arguments

- - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

+
+
nnf_tanhshrink(input)
+
+
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
+
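The definition above can be sketched in plain Python (illustrative only, not the torch implementation):

```python
import math

def tanhshrink(x):
    # Tanhshrink(x) = x - tanh(x)
    return x - math.tanh(x)
```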
-
- +
- - + + diff --git a/dev/reference/nnf_threshold.html b/dev/reference/nnf_threshold.html index d946ba79017e2a1047c90f3174e2c26950ba1828..f61bb64835b87908ac634372b94c4626e40fcbe8 100644 --- a/dev/reference/nnf_threshold.html +++ b/dev/reference/nnf_threshold.html @@ -1,79 +1,18 @@ - - - - - - - -Threshold — nnf_threshold • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Threshold — nnf_threshold • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,58 +111,46 @@

Thresholds each element of the input Tensor.

-
nnf_threshold(input, threshold, value, inplace = FALSE)
-
-nnf_threshold_(input, threshold, value)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
input

(N,*) tensor, where * means, any number of additional -dimensions

threshold

The value to threshold at

value

The value to replace with

inplace

can optionally do the operation in-place. Default: FALSE

+
+
nnf_threshold(input, threshold, value, inplace = FALSE)
 
+nnf_threshold_(input, threshold, value)
+
+ +
+

Arguments

+
input
+

(N,*) tensor, where * means any number of additional +dimensions</dd>

+
threshold
+

The value to threshold at

+
value
+

The value to replace with

+
inplace
+

can optionally do the operation in-place. Default: FALSE

+
+
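The per-element rule (keep `x` where it exceeds the threshold, otherwise substitute `value`) can be sketched as — illustrative Python, not the torch source:

```python
def threshold(x, thresh, value):
    # y = x if x > thresh, else value
    return x if x > thresh else value
```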
-
- +
- - + + diff --git a/dev/reference/nnf_triplet_margin_loss.html b/dev/reference/nnf_triplet_margin_loss.html index 5ab5cf8f830caeee0afc62c88b54d40aad2ed09c..7473d27f32301f892d9063c12f90677452a87570 100644 --- a/dev/reference/nnf_triplet_margin_loss.html +++ b/dev/reference/nnf_triplet_margin_loss.html @@ -1,83 +1,22 @@ - - - - - - - -Triplet_margin_loss — nnf_triplet_margin_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triplet_margin_loss — nnf_triplet_margin_loss • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -197,85 +119,65 @@ anchor, positive examples and negative examples respectively). The shapes of all input tensors should be (N, D).

-
nnf_triplet_margin_loss(
-  anchor,
-  positive,
-  negative,
-  margin = 1,
-  p = 2,
-  eps = 1e-06,
-  swap = FALSE,
-  reduction = "mean"
-)
+
+
nnf_triplet_margin_loss(
+  anchor,
+  positive,
+  negative,
+  margin = 1,
+  p = 2,
+  eps = 1e-06,
+  swap = FALSE,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
anchor

the anchor input tensor

positive

the positive input tensor

negative

the negative input tensor

margin

Default: 1.

p

The norm degree for pairwise distance. Default: 2.

eps

(float, optional) Small value to avoid division by zero.

swap

The distance swap is described in detail in the paper Learning shallow +

+

Arguments

+
anchor
+

the anchor input tensor

+
positive
+

the positive input tensor

+
negative
+

the negative input tensor

+
margin
+

Default: 1.

+
p
+

The norm degree for pairwise distance. Default: 2.

+
eps
+

(float, optional) Small value to avoid division by zero.

+
swap
+

The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. -Default: FALSE.

reduction

(string, optional) – Specifies the reduction to apply to the +Default: FALSE.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
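Putting the arguments together, the loss for a single (anchor, positive, negative) triple can be sketched in plain Python (illustrative only; placing `eps` inside the p-norm is an assumption mirroring the pairwise-distance convention):

```python
def p_dist(u, v, p=2, eps=1e-6):
    # p-norm of (u - v + eps), computed elementwise over two equal-length vectors.
    return sum(abs(a - b + eps) ** p for a, b in zip(u, v)) ** (1.0 / p)

def triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-6):
    # Hinge on the gap between anchor-positive and anchor-negative distances.
    d_ap = p_dist(anchor, positive, p, eps)
    d_an = p_dist(anchor, negative, p, eps)
    return max(d_ap - d_an + margin, 0.0)
```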
-
- +
- - + + diff --git a/dev/reference/nnf_triplet_margin_with_distance_loss.html b/dev/reference/nnf_triplet_margin_with_distance_loss.html index e2d8f0ebced7ff7bf198a1ca6f39deac046f52f6..e45ccdc6a0c51b39698ab09bd642623b2e862781 100644 --- a/dev/reference/nnf_triplet_margin_with_distance_loss.html +++ b/dev/reference/nnf_triplet_margin_with_distance_loss.html @@ -1,79 +1,18 @@ - - - - - - - -Triplet margin with distance loss — nnf_triplet_margin_with_distance_loss • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triplet margin with distance loss — nnf_triplet_margin_with_distance_loss • torch - - - - - - + + - - -
-
- -
- -
+
-
nnf_triplet_margin_with_distance_loss(
-  anchor,
-  positive,
-  negative,
-  distance_function = NULL,
-  margin = 1,
-  swap = FALSE,
-  reduction = "mean"
-)
+
+
nnf_triplet_margin_with_distance_loss(
+  anchor,
+  positive,
+  negative,
+  distance_function = NULL,
+  margin = 1,
+  swap = FALSE,
+  reduction = "mean"
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
anchor

the anchor input tensor

positive

the positive input tensor

negative

the negative input tensor

distance_function

(callable, optional): A nonnegative, real-valued function that +

+

Arguments

+
anchor
+

the anchor input tensor

+
positive
+

the positive input tensor

+
negative
+

the negative input tensor

+
distance_function
+

(callable, optional): A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, -nn_pairwise_distance() will be used. Default: None

margin

Default: 1.

swap

The distance swap is described in detail in the paper Learning shallow +nn_pairwise_distance() will be used. Default: None

+
margin
+

Default: 1.

+
swap
+

The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. -Default: FALSE.

reduction

(string, optional) – Specifies the reduction to apply to the +Default: FALSE.

+
reduction
+

(string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, -'sum': the output will be summed. Default: 'mean'

- +'sum': the output will be summed. Default: 'mean'

+
+
-
- +
- - + + diff --git a/dev/reference/nnf_unfold.html b/dev/reference/nnf_unfold.html index a8285b6f32400954d6ea89053e38afb165b930d0..c2ed3b40ecf761e57538af3cd48dfab1b70fdba7 100644 --- a/dev/reference/nnf_unfold.html +++ b/dev/reference/nnf_unfold.html @@ -1,79 +1,18 @@ - - - - - - - -Unfold — nnf_unfold • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unfold — nnf_unfold • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,38 +111,28 @@

Extracts sliding local blocks from a batched input tensor.</p>

-
nnf_unfold(input, kernel_size, dilation = 1, padding = 0, stride = 1)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
input

the input tensor

kernel_size

the size of the sliding blocks

dilation

a parameter that controls the stride of elements within the -neighborhood. Default: 1

padding

implicit zero padding to be added on both sides of input. -Default: 0

stride

the stride of the sliding blocks in the input spatial dimensions. -Default: 1

- -

Warning

+
+
nnf_unfold(input, kernel_size, dilation = 1, padding = 0, stride = 1)
+
+
+

Arguments

+
input
+

the input tensor

+
kernel_size
+

the size of the sliding blocks

+
dilation
+

a parameter that controls the stride of elements within the +neighborhood. Default: 1

+
padding
+

implicit zero padding to be added on both sides of input. +Default: 0

+
stride
+

the stride of the sliding blocks in the input spatial dimensions. +Default: 1

+
+
+

Warning

@@ -232,32 +144,29 @@ supported.

memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first.

+
+
-
- +
- - + + diff --git a/dev/reference/optim_adadelta.html b/dev/reference/optim_adadelta.html index 1d0d97ef5e977ead9406629c8b9a86b0a9337726..3e8a7e26e525730c9d109ab31b812b1a6967d28e 100644 --- a/dev/reference/optim_adadelta.html +++ b/dev/reference/optim_adadelta.html @@ -1,79 +1,18 @@ - - - - - - - -Adadelta optimizer — optim_adadelta • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adadelta optimizer — optim_adadelta • torch - - - - - + + - - - -
-
- -
- -
+
-
optim_adadelta(params, lr = 1, rho = 0.9, eps = 1e-06, weight_decay = 0)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): list of parameters to optimize or list defining -parameter groups

lr

(float, optional): learning rate (default: 1e-3)

rho

(float, optional): coefficient used for computing a running average -of squared gradients (default: 0.9)

eps

(float, optional): term added to the denominator to improve -numerical stability (default: 1e-6)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

- -

Note

+
+
optim_adadelta(params, lr = 1, rho = 0.9, eps = 1e-06, weight_decay = 0)
+
+
+

Arguments

+
params
+

(iterable): list of parameters to optimize or list defining +parameter groups

+
lr
+

(float, optional): learning rate (default: 1e-3)

+
rho
+

(float, optional): coefficient used for computing a running average +of squared gradients (default: 0.9)

+
eps
+

(float, optional): term added to the denominator to improve +numerical stability (default: 1e-6)

+
weight_decay
+

(float, optional): weight decay (L2 penalty) (default: 0)

+
+
+

Note

According to the original paper, decaying average of the squared gradients is computed as follows: $$ @@ -237,50 +149,50 @@ $$ \theta_{t+1} = \theta_{t} + \Delta \theta_{t} \end{array} $$

-

Warning

- +
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-optimizer <- optim_adadelta(model$parameters, lr = 0.1)
-optimizer$zero_grad()
-loss_fn(model(input), target)$backward()
-optimizer$step()
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+optimizer <- optim_adadelta(model$parameters, lr = 0.1)
+optimizer$zero_grad()
+loss_fn(model(input), target)$backward()
+optimizer$step()
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/optim_adagrad.html b/dev/reference/optim_adagrad.html index b29577e1cb74efec5edf1babdea7db364a503fd1..b84ae8175d356606cd7b6255229aabf2979a91ab 100644 --- a/dev/reference/optim_adagrad.html +++ b/dev/reference/optim_adagrad.html @@ -1,79 +1,18 @@ - - - - - - - -Adagrad optimizer — optim_adagrad • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adagrad optimizer — optim_adagrad • torch - - - - - + + - - - -
-
- -
- -
+
-
optim_adagrad(
-  params,
-  lr = 0.01,
-  lr_decay = 0,
-  weight_decay = 0,
-  initial_accumulator_value = 0,
-  eps = 1e-10
-)
+
+
optim_adagrad(
+  params,
+  lr = 0.01,
+  lr_decay = 0,
+  weight_decay = 0,
+  initial_accumulator_value = 0,
+  eps = 1e-10
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): list of parameters to optimize or list parameter groups

lr

(float, optional): learning rate (default: 1e-2)

lr_decay

(float, optional): learning rate decay (default: 0)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

initial_accumulator_value

the initial value for the accumulator. (default: 0)

+
+

Arguments

+
params
+

(iterable): list of parameters to optimize or list parameter groups

+
lr
+

(float, optional): learning rate (default: 1e-2)

+
lr_decay
+

(float, optional): learning rate decay (default: 0)

+
weight_decay
+

(float, optional): weight decay (L2 penalty) (default: 0)

+
initial_accumulator_value
+

the initial value for the accumulator. (default: 0)

Adagrad is an especially good optimizer for sparse data. It individually modifies the learning rate for every single parameter, dividing the original learning rate value by the sum of the squares of the gradients. This causes rarely occurring features to get greater learning rates. The main downside of this method is that the learning rate may be -getting small too fast, so that at some point a model cannot learn anymore.</dd>

eps

(float, optional): term added to the denominator to improve -numerical stability (default: 1e-10)

- -

Note

- +getting small too fast, so that at some point a model cannot learn anymore.

+
eps
+

(float, optional): term added to the denominator to improve +numerical stability (default: 1e-10)

+
+
+

Note

Update rule: $$ \theta_{t+1} = \theta_{t} - \frac{\eta }{\sqrt{G_{t} + \epsilon}} \odot g_{t} $$ The equation above and some remarks quoted -after An overview of gradient descent optimization algorithms +after An overview of gradient descent optimization algorithms by Sebastian Ruder.

-

Warning

- +
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
+
-
- +
- - + + diff --git a/dev/reference/optim_adam.html b/dev/reference/optim_adam.html index b3e7110c0e3136d6d60be0da26102cf84c5d9f50..19e6d843a87ec95b61fceab3c5b16402f52b00a9 100644 --- a/dev/reference/optim_adam.html +++ b/dev/reference/optim_adam.html @@ -1,79 +1,18 @@ - - - - - - - -Implements Adam algorithm. — optim_adam • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Implements Adam algorithm. — optim_adam • torch - - - - - - - - + + -
-
- -
- -
+
-

It has been proposed in Adam: A Method for Stochastic Optimization.

+

It has been proposed in Adam: A Method for Stochastic Optimization.

-
optim_adam(
-  params,
-  lr = 0.001,
-  betas = c(0.9, 0.999),
-  eps = 1e-08,
-  weight_decay = 0,
-  amsgrad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or dicts defining -parameter groups

lr

(float, optional): learning rate (default: 1e-3)

betas

(Tuple[float, float], optional): coefficients used for computing -running averages of gradient and its square (default: (0.9, 0.999))

eps

(float, optional): term added to the denominator to improve -numerical stability (default: 1e-8)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

amsgrad

(boolean, optional): whether to use the AMSGrad variant of this -algorithm from the paper On the Convergence of Adam and Beyond -(default: FALSE)

- -

Warning

+
+
optim_adam(
+  params,
+  lr = 0.001,
+  betas = c(0.9, 0.999),
+  eps = 1e-08,
+  weight_decay = 0,
+  amsgrad = FALSE
+)
+
+
+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or dicts defining +parameter groups

+
lr
+

(float, optional): learning rate (default: 1e-3)

+
betas
+

(Tuple[float, float], optional): coefficients used for computing +running averages of gradient and its square (default: (0.9, 0.999))

+
eps
+

(float, optional): term added to the denominator to improve +numerical stability (default: 1e-8)

+
weight_decay
+

(float, optional): weight decay (L2 penalty) (default: 0)

+
amsgrad
+

(boolean, optional): whether to use the AMSGrad variant of this +algorithm from the paper On the Convergence of Adam and Beyond +(default: FALSE)

+
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-optimizer <- optim_adam(model$parameters(), lr=0.1)
-optimizer$zero_grad()
-loss_fn(model(input), target)$backward()
-optimizer$step()
-}
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+optimizer <- optim_adam(model$parameters(), lr=0.1)
+optimizer$zero_grad()
+loss_fn(model(input), target)$backward()
+optimizer$step()
+}
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/optim_asgd.html b/dev/reference/optim_asgd.html index ee04a83024ee6f52a242811ba6d865a749ef7430..aa33e6b2bcd9d195ba2137d90302526e22753bd0 100644 --- a/dev/reference/optim_asgd.html +++ b/dev/reference/optim_asgd.html @@ -1,79 +1,18 @@ - - - - - - - -Averaged Stochastic Gradient Descent optimizer — optim_asgd • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Averaged Stochastic Gradient Descent optimizer — optim_asgd • torch - - - - - - - - + + -
-
- -
- -
+
-
optim_asgd(
-  params,
-  lr = 0.01,
-  lambda = 1e-04,
-  alpha = 0.75,
-  t0 = 1e+06,
-  weight_decay = 0
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or lists defining -parameter groups

lr

(float): learning rate

lambda

(float, optional): decay term (default: 1e-4)

alpha

(float, optional): power for eta update (default: 0.75)

t0

(float, optional): point at which to start averaging (default: 1e6)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

- -

Warning

+
+
optim_asgd(
+  params,
+  lr = 0.01,
+  lambda = 1e-04,
+  alpha = 0.75,
+  t0 = 1e+06,
+  weight_decay = 0
+)
+
+
+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or lists defining +parameter groups

+
lr
+

(float): learning rate

+
lambda
+

(float, optional): decay term (default: 1e-4)

+
alpha
+

(float, optional): power for eta update (default: 0.75)

+
t0
+

(float, optional): point at which to start averaging (default: 1e6)

+
weight_decay
+

(float, optional): weight decay (L2 penalty) (default: 0)

+
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-optimizer <- optim_asgd(model$parameters(), lr=0.1)
-optimizer$zero_grad()
-loss_fn(model(input), target)$backward()
-optimizer$step()
-}
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+optimizer <- optim_asgd(model$parameters(), lr=0.1)
+optimizer$zero_grad()
+loss_fn(model(input), target)$backward()
+optimizer$step()
+}
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/optim_lbfgs.html b/dev/reference/optim_lbfgs.html index 6ad01fbcc7ea3c55273c338a4addbb28f3a96ecd..4e32b5c247e71fe10a849344982cbeebb0d60faa 100644 --- a/dev/reference/optim_lbfgs.html +++ b/dev/reference/optim_lbfgs.html @@ -1,80 +1,19 @@ - - - - - - - -LBFGS optimizer — optim_lbfgs • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -LBFGS optimizer — optim_lbfgs • torch - - - - - - - - + + -
-
- -
- -
+

Implements L-BFGS algorithm, heavily inspired by -minFunc

+minFunc

-
optim_lbfgs(
-  params,
-  lr = 1,
-  max_iter = 20,
-  max_eval = NULL,
-  tolerance_grad = 1e-07,
-  tolerance_change = 1e-09,
-  history_size = 100,
-  line_search_fn = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or dicts defining -parameter groups

lr

(float): learning rate (default: 1)

max_iter

(int): maximal number of iterations per optimization step -(default: 20)

max_eval

(int): maximal number of function evaluations per optimization -step (default: max_iter * 1.25).

tolerance_grad

(float): termination tolerance on first order optimality -(default: 1e-5).

tolerance_change

(float): termination tolerance on function -value/parameter changes (default: 1e-9).

history_size

(int): update history size (default: 100).

line_search_fn

(str): either 'strong_wolfe' or None (default: None).

- -

Note

+
+
optim_lbfgs(
+  params,
+  lr = 1,
+  max_iter = 20,
+  max_eval = NULL,
+  tolerance_grad = 1e-07,
+  tolerance_change = 1e-09,
+  history_size = 100,
+  line_search_fn = NULL
+)
+
+
+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or dicts defining +parameter groups

+
lr
+

(float): learning rate (default: 1)

+
max_iter
+

(int): maximal number of iterations per optimization step +(default: 20)

+
max_eval
+

(int): maximal number of function evaluations per optimization +step (default: max_iter * 1.25).

+
tolerance_grad
+

(float): termination tolerance on first order optimality +(default: 1e-5).

+
tolerance_change
+

(float): termination tolerance on function +value/parameter changes (default: 1e-9).

+
history_size
+

(int): update history size (default: 100).

+
line_search_fn
+

(str): either 'strong_wolfe' or None (default: None).

+
+
+

Note

This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn't fit in memory try reducing the history size, or use a different algorithm.

-

Warning

- +
+
+

Warning

@@ -264,32 +171,29 @@ will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
+
-
- +
- - + + diff --git a/dev/reference/optim_required.html b/dev/reference/optim_required.html index d6c74c79319399bee3d34db22344273e49d14575..fec60a8d385857944f3a16b50420396263c9833a 100644 --- a/dev/reference/optim_required.html +++ b/dev/reference/optim_required.html @@ -1,79 +1,18 @@ - - - - - - - -Dummy value indicating a required value. — optim_required • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dummy value indicating a required value. — optim_required • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

export

-
optim_required()
- +
+
optim_required()
+
+
-
- +
- - + + diff --git a/dev/reference/optim_rmsprop.html b/dev/reference/optim_rmsprop.html index dea7b3e44011b8880062f4200a18e178c8fb9809..91f1346a92719487503494673b24d59412146fb7 100644 --- a/dev/reference/optim_rmsprop.html +++ b/dev/reference/optim_rmsprop.html @@ -1,79 +1,18 @@ - - - - - - - -RMSprop optimizer — optim_rmsprop • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -RMSprop optimizer — optim_rmsprop • torch - - - - - + + - - - -
-
- -
- -
+
@@ -189,56 +111,42 @@

Proposed by G. Hinton in his course.

-
optim_rmsprop(
-  params,
-  lr = 0.01,
-  alpha = 0.99,
-  eps = 1e-08,
-  weight_decay = 0,
-  momentum = 0,
-  centered = FALSE
-)
+
+
optim_rmsprop(
+  params,
+  lr = 0.01,
+  alpha = 0.99,
+  eps = 1e-08,
+  weight_decay = 0,
+  momentum = 0,
+  centered = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or list defining parameter groups

lr

(float, optional): learning rate (default: 1e-2)

alpha

(float, optional): smoothing constant (default: 0.99)

eps

(float, optional): term added to the denominator to improve -numerical stability (default: 1e-8)

weight_decay

optional weight decay penalty. (default: 0)

momentum

(float, optional): momentum factor (default: 0)

centered

(bool, optional) : if TRUE, compute the centered RMSProp, +

+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or list defining parameter groups

+
lr
+

(float, optional): learning rate (default: 1e-2)

+
alpha
+

(float, optional): smoothing constant (default: 0.99)

+
eps
+

(float, optional): term added to the denominator to improve +numerical stability (default: 1e-8)

+
weight_decay
+

optional weight decay penalty. (default: 0)

+
momentum
+

(float, optional): momentum factor (default: 0)

+
centered
+

(bool, optional) : if TRUE, compute the centered RMSProp, the gradient is normalized by an estimation of its variance -weight_decay (float, optional): weight decay (L2 penalty) (default: 0)

- -

Note

- +weight_decay (float, optional): weight decay (L2 penalty) (default: 0)

+
+
+

Note

The centered version first appears in -Generating Sequences With Recurrent Neural Networks. +Generating Sequences With Recurrent Neural Networks. The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus \(\alpha/(\sqrt{v} + \epsilon)\) where \(\alpha\) @@ -248,40 +156,38 @@ of the squared gradient.

$$ \theta_{t+1} = \theta_{t} - \frac{\eta }{\sqrt{{E[g^2]}_{t} + \epsilon}} * g_{t} $$

-

Warning

- +
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
+
-
- +
- - + + diff --git a/dev/reference/optim_rprop.html b/dev/reference/optim_rprop.html index be824a5f64784e95789969e93b0dd4019c14dec5..aba751ca3793489a793543e6302d1cfa7573fc57 100644 --- a/dev/reference/optim_rprop.html +++ b/dev/reference/optim_rprop.html @@ -1,79 +1,18 @@ - - - - - - - -Implements the resilient backpropagation algorithm. — optim_rprop • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Implements the resilient backpropagation algorithm. — optim_rprop • torch - - - - - - - - + + -
-
- -
- -
+
-
optim_rprop(params, lr = 0.01, etas = c(0.5, 1.2), step_sizes = c(1e-06, 50))
+
+
optim_rprop(params, lr = 0.01, etas = c(0.5, 1.2), step_sizes = c(1e-06, 50))
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or lists defining -parameter groups

lr

(float, optional): learning rate (default: 1e-2)

etas

(Tuple(float, float), optional): pair of (etaminus, etaplis), that +

+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or lists defining +parameter groups

+
lr
+

(float, optional): learning rate (default: 1e-2)

+
etas
+

(Tuple(float, float), optional): pair of (etaminus, etaplus), that are multiplicative increase and decrease factors -(default: (0.5, 1.2))

step_sizes

(vector(float, float), optional): a pair of minimal and -maximal allowed step sizes (default: (1e-6, 50))

- -

Warning

- +(default: (0.5, 1.2))

+
step_sizes
+

(vector(float, float), optional): a pair of minimal and +maximal allowed step sizes (default: (1e-6, 50))

+
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-optimizer <- optim_rprop(model$parameters(), lr=0.1)
-optimizer$zero_grad()
-loss_fn(model(input), target)$backward()
-optimizer$step()
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+optimizer <- optim_rprop(model$parameters(), lr=0.1)
+optimizer$zero_grad()
+loss_fn(model(input), target)$backward()
+optimizer$step()
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/optim_sgd.html b/dev/reference/optim_sgd.html index 2c90c1e85e65b66f93e35275de034a3a17544306..be2df9626117b08687516fcafda87e1104f6a4a3 100644 --- a/dev/reference/optim_sgd.html +++ b/dev/reference/optim_sgd.html @@ -1,81 +1,20 @@ - - - - - - - -SGD optimizer — optim_sgd • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -SGD optimizer — optim_sgd • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,47 +115,35 @@ Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.

-
optim_sgd(
-  params,
-  lr = optim_required(),
-  momentum = 0,
-  dampening = 0,
-  weight_decay = 0,
-  nesterov = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
params

(iterable): iterable of parameters to optimize or dicts defining -parameter groups

lr

(float): learning rate

momentum

(float, optional): momentum factor (default: 0)

dampening

(float, optional): dampening for momentum (default: 0)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

nesterov

(bool, optional): enables Nesterov momentum (default: FALSE)

- -

Note

+
+
optim_sgd(
+  params,
+  lr = optim_required(),
+  momentum = 0,
+  dampening = 0,
+  weight_decay = 0,
+  nesterov = FALSE
+)
+
+
+

Arguments

+
params
+

(iterable): iterable of parameters to optimize or dicts defining +parameter groups

+
lr
+

(float): learning rate

+
momentum
+

(float, optional): momentum factor (default: 0)

+
dampening
+

(float, optional): dampening for momentum (default: 0)

+
weight_decay
+

(float, optional): weight decay (L2 penalty) (default: 0)

+
nesterov
+

(bool, optional): enables Nesterov momentum (default: FALSE)

+
+
+

Note

@@ -257,51 +167,51 @@ p_{t+1} & = p_{t} - v_{t+1}. \end{array} $$ The Nesterov version is analogously modified.

-

Warning

- +
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-optimizer <- optim_sgd(model$parameters(), lr=0.1, momentum=0.9)
-optimizer$zero_grad()
-loss_fn(model(input), target)$backward()
-optimizer$step()
-}
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+optimizer <- optim_sgd(model$parameters(), lr=0.1, momentum=0.9)
+optimizer$zero_grad()
+loss_fn(model(input), target)$backward()
+optimizer$step()
+}
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/optimizer.html b/dev/reference/optimizer.html index 85080eff07592b7beea8e8958db10e0e2bf27e8a..a39c8cef022cdd780a7faa6d4cba3c82cb74736d 100644 --- a/dev/reference/optimizer.html +++ b/dev/reference/optimizer.html @@ -1,81 +1,20 @@ - - - - - - - -Creates a custom optimizer — optimizer • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a custom optimizer — optimizer • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,124 +115,111 @@ the initialize and step methods. See the example secti for a full example.

-
optimizer(
-  name = NULL,
-  inherit = Optimizer,
-  ...,
-  private = NULL,
-  active = NULL,
-  parent_env = parent.frame()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
name

(optional) name of the optimizer

inherit

(optional) you can inherit from other optimizers to re-use -some methods.

...

Pass any number of fields or methods. You should at least define -the initialize and step methods. See the examples section.

private

(optional) a list of private methods for the optimizer.

active

(optional) a list of active methods for the optimizer.

parent_env

used to capture the right environment to define the class. -The default is fine for most situations.

- -

Warning

+
+
optimizer(
+  name = NULL,
+  inherit = Optimizer,
+  ...,
+  private = NULL,
+  active = NULL,
+  parent_env = parent.frame()
+)
+
+
+

Arguments

+
name
+

(optional) name of the optimizer

+
inherit
+

(optional) you can inherit from other optimizers to re-use +some methods.

+
...
+

Pass any number of fields or methods. You should at least define +the initialize and step methods. See the examples section.

+
private
+

(optional) a list of private methods for the optimizer.

+
active
+

(optional) a list of active methods for the optimizer.

+
parent_env
+

used to capture the right environment to define the class. +The default is fine for most situations.

+
+
+

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.

+
-

Examples

-
if (torch_is_installed()) {
-
-# In this example we will create a custom optimizer
-# that's just a simplified version of the `optim_sgd` function.
-
-optim_sgd2 <- optimizer(
-  initialize = function(params, learning_rate) {
-    defaults <- list(
-      learning_rate = learning_rate
-    )
-    super$initialize(params, defaults)
-  },
-  step = function() {
-    with_no_grad({
-      for (g in seq_along(self$param_groups)) {
-        group <- self$param_groups[[g]]
-        for (p in seq_along(group$params)) {
-          param <- group$params[[p]]
-          
-          if (is.null(param$grad) || is_undefined_tensor(param$grad))
-            next
-          
-          param$add_(param$grad, alpha = -group$learning_rate)
-        }
-      }
-    })
-  }
-)
-
-x <- torch_randn(1, requires_grad = TRUE)
-opt <- optim_sgd2(x, learning_rate = 0.1)
-for (i in 1:100) {
-  opt$zero_grad()
-  y <- x^2
-  y$backward()
-  opt$step()
-}
-all.equal(x$item(), 0, tolerance = 1e-9)
-
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+
+# In this example we will create a custom optimizer
+# that's just a simplified version of the `optim_sgd` function.
+
+optim_sgd2 <- optimizer(
+  initialize = function(params, learning_rate) {
+    defaults <- list(
+      learning_rate = learning_rate
+    )
+    super$initialize(params, defaults)
+  },
+  step = function() {
+    with_no_grad({
+      for (g in seq_along(self$param_groups)) {
+        group <- self$param_groups[[g]]
+        for (p in seq_along(group$params)) {
+          param <- group$params[[p]]
+          
+          if (is.null(param$grad) || is_undefined_tensor(param$grad))
+            next
+          
+          param$add_(param$grad, alpha = -group$learning_rate)
+        }
+      }
+    })
+  }
+)
+
+x <- torch_randn(1, requires_grad = TRUE)
+opt <- optim_sgd2(x, learning_rate = 0.1)
+for (i in 1:100) {
+  opt$zero_grad()
+  y <- x^2
+  y$backward()
+  opt$step()
+}
+all.equal(x$item(), 0, tolerance = 1e-9)
+
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/pipe.html b/dev/reference/pipe.html index 118f13cea716d08ea8bc192f645e98af825d039a..c22be5afbd96b9f31a768354d64a05a59a1b0137 100644 --- a/dev/reference/pipe.html +++ b/dev/reference/pipe.html @@ -1,79 +1,18 @@ - - - - - - - -Pipe operator — %>% • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pipe operator — %>% • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

See magrittr::%>% for details.

-
lhs %>% rhs
- +
+
lhs %>% rhs
+
+
-
- +
- - + + diff --git a/dev/reference/probs_to_logits.html b/dev/reference/probs_to_logits.html deleted file mode 100644 index 24690fa0d344ed9785072121ed21646c869fbc42..0000000000000000000000000000000000000000 --- a/dev/reference/probs_to_logits.html +++ /dev/null @@ -1,238 +0,0 @@ - - - - - - - - -Converts a tensor of probabilities into logits. For the binary case, -this denotes the probability of occurrence of the event indexed by 1. -For the multi-dimensional case, the values along the last dimension -denote the probabilities of occurrence of each of the events. — probs_to_logits • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Converts a tensor of probabilities into logits. For the binary case, -this denotes the probability of occurrence of the event indexed by 1. -For the multi-dimensional case, the values along the last dimension -denote the probabilities of occurrence of each of the events.

-
- -
probs_to_logits(probs, is_binary = FALSE)
- - - -
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/reexports.html b/dev/reference/reexports.html index d76afbec3b00b45dbee621d0cdf3ba0573e3f35b..7d635e48b5dbce4fea349b4fed193fa1d7f96520 100644 --- a/dev/reference/reexports.html +++ b/dev/reference/reexports.html @@ -1,84 +1,25 @@ - - - - - - - -Re-exporting the as_iterator function. — reexports • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Re-exporting the as_iterator function. — reexports • torch - - - + coro +as_iterator, loop, yield - - - - - - - - - - + + - - -
-
- -
- -
+

These objects are imported from other packages. Follow the links below to see their documentation.

-
-
coro

as_iterator, loop, yield

+
coro
+

as_iterator, loop, yield

-
-
+
+
-
- +
- - + + diff --git a/dev/reference/slc.html b/dev/reference/slc.html index e58d88276f9e78227eaaca47cea2c6015b4d8525..1d51eee475bbf12b41d112d85e9295afe3bcaf61 100644 --- a/dev/reference/slc.html +++ b/dev/reference/slc.html @@ -1,79 +1,18 @@ - - - - - - - -Creates a slice — slc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates a slice — slc • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,63 +111,55 @@

Creates a slice object that can be used when indexing torch tensors.

-
slc(start, end, step = 1)
- -

Arguments

- - - - - - - - - - - - - - -
start

(integer) starting index.

end

(integer) the last selected index.

step

(integer) the step between indexes.

- - -

Examples

-
if (torch_is_installed()) {
-x <- torch_randn(10)
-x[slc(start = 1, end = 5, step = 2)]
-
-}
-#> torch_tensor
-#> -0.7003
-#>  1.6018
-#>  0.3824
-#> [ CPUFloatType{3} ]
-
+
+
slc(start, end, step = 1)
+
+ +
+

Arguments

+
start
+

(integer) starting index.

+
end
+

(integer) the last selected index.

+
step
+

(integer) the step between indexes.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+x <- torch_randn(10)
+x[slc(start = 1, end = 5, step = 2)]
+
+}
+#> torch_tensor
+#> -0.4307
+#>  0.2810
+#> -0.0825
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/tensor_dataset.html b/dev/reference/tensor_dataset.html index cc1f2015740b385a33907ec0b6ad27ff9abd5e1e..904648eccab5c026224f0636a3f4a5d8facda300 100644 --- a/dev/reference/tensor_dataset.html +++ b/dev/reference/tensor_dataset.html @@ -1,79 +1,18 @@ - - - - - - - -Dataset wrapping tensors. — tensor_dataset • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dataset wrapping tensors. — tensor_dataset • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Each sample will be retrieved by indexing tensors along the first dimension.

-
tensor_dataset(...)
- -

Arguments

- - - - - - -
...

tensors that have the same size of the first dimension.

+
+
tensor_dataset(...)
+
+
+

Arguments

+
...
+

tensors that have the same size in the first dimension.

+
+
-
- +
- - + + diff --git a/dev/reference/threads.html b/dev/reference/threads.html index 5346dbb2e7dcf9419d40e34fbb6e31f15b672a74..63cec71f0267ceb815f26718b2b3261af4c7d755 100644 --- a/dev/reference/threads.html +++ b/dev/reference/threads.html @@ -1,79 +1,18 @@ - - - - - - - -Number of threads — threads • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Number of threads — threads • torch - - - - - + + - - - -
-
- -
- -
+
@@ -189,56 +111,52 @@

Get and set the number of threads used by torch computations.

-
torch_set_num_threads(num_threads)
-
-torch_set_num_interop_threads(num_threads)
+    
+
torch_set_num_threads(num_threads)
 
-torch_get_num_interop_threads()
+torch_set_num_interop_threads(num_threads)
 
-torch_get_num_threads()
+torch_get_num_interop_threads() -

Arguments

- - - - - - -
num_threads

number of threads to set.

- -

Details

+torch_get_num_threads()
+
-

For details see the CPU threading article +

+

Arguments

+
num_threads
+

number of threads to set.

+
+
+

Details

+

For details see the CPU threading article in the PyTorch documentation.

-

Note

- +
+
+

Note

torch_set_threads does not work on macOS systems, as the number of threads must be 1.

+
+ -
- +
- - + + diff --git a/dev/reference/torch_abs.html b/dev/reference/torch_abs.html index fff57040db7c82d431bbc9165223d61f6e9cfd34..faa57de0d6ae220465837f651ae86a7f0d4e24bc 100644 --- a/dev/reference/torch_abs.html +++ b/dev/reference/torch_abs.html @@ -1,79 +1,18 @@ - - - - - - - -Abs — torch_abs • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Abs — torch_abs • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_abs(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

abs(input) -> Tensor

+
+
torch_abs(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

abs(input) -> Tensor

@@ -209,43 +129,42 @@

$$ \mbox{out}_{i} = |\mbox{input}_{i}| $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_abs(torch_tensor(c(-1, -2, 3)))
-}
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_abs(torch_tensor(c(-1, -2, 3)))
+}
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_absolute.html b/dev/reference/torch_absolute.html index c83338711b416963b97f5b1eb1d7b7797d213b92..ad1f3d022dd518e874031f195e569afefb97a845 100644 --- a/dev/reference/torch_absolute.html +++ b/dev/reference/torch_absolute.html @@ -1,79 +1,18 @@ - - - - - - - -Absolute — torch_absolute • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Absolute — torch_absolute • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,49 +111,44 @@

Absolute

-
torch_absolute(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

absolute(input, *, out=None) -> Tensor

+
+
torch_absolute(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

absolute(input, *, out=None) -> Tensor

-

Alias for torch_abs()

+

Alias for torch_abs()

+
+
-
- +
- - + + diff --git a/dev/reference/torch_acos.html b/dev/reference/torch_acos.html index c7f8b094f16879ccaa0171edac34b14d179a3f1f..4598f96f8d49c19fe877df6d5fc9b5e440f1f1da 100644 --- a/dev/reference/torch_acos.html +++ b/dev/reference/torch_acos.html @@ -1,79 +1,18 @@ - - - - - - - -Acos — torch_acos • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Acos — torch_acos • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_acos(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

acos(input) -> Tensor

+
+
torch_acos(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

acos(input) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \cos^{-1}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_acos(a)
-}
-#> torch_tensor
-#>  1.9154
-#>     nan
-#>  1.7598
-#>  2.8927
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_acos(a)
+}
+#> torch_tensor
+#> nan
+#> nan
+#> nan
+#> nan
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_acosh.html b/dev/reference/torch_acosh.html index 7b97c2c7f0448898e86cd3a25ffa1e93e65fa700..9c0290861ec72120f418b604f985852b41587e17 100644 --- a/dev/reference/torch_acosh.html +++ b/dev/reference/torch_acosh.html @@ -1,79 +1,18 @@ - - - - - - - -Acosh — torch_acosh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Acosh — torch_acosh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_acosh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

Note

+
+
torch_acosh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

Note

The domain of the inverse hyperbolic cosine is [1, inf) and values outside this range will be mapped to NaN, except for + INF for which the output is mapped to + INF.

$$ \mbox{out}_{i} = \cosh^{-1}(\mbox{input}_{i}) $$

-

acosh(input, *, out=None) -> Tensor

- +
+
+

acosh(input, *, out=None) -> Tensor

Returns a new tensor with the inverse hyperbolic cosine of the elements of input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(4))$uniform_(1, 2)
-a
-torch_acosh(a)
-}
-#> torch_tensor
-#>  0.7407
-#>  0.8963
-#>  1.2158
-#>  0.9407
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(4))$uniform_(1, 2)
+a
+torch_acosh(a)
+}
+#> torch_tensor
+#>  0.6152
+#>  1.1088
+#>  1.0838
+#>  1.2457
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_adaptive_avg_pool1d.html b/dev/reference/torch_adaptive_avg_pool1d.html index 95c7ec91e716e48090e69666e080e3b4b2807995..16bc0ad1f2226262c6576dc05b654b704b4902aa 100644 --- a/dev/reference/torch_adaptive_avg_pool1d.html +++ b/dev/reference/torch_adaptive_avg_pool1d.html @@ -1,79 +1,18 @@ - - - - - - - -Adaptive_avg_pool1d — torch_adaptive_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Adaptive_avg_pool1d — torch_adaptive_avg_pool1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,55 +111,48 @@

Adaptive_avg_pool1d

-
torch_adaptive_avg_pool1d(self, output_size)
- -

Arguments

- - - - - - - - - - -
self

the input tensor

output_size

the target output size (single integer)

- -

adaptive_avg_pool1d(input, output_size) -> Tensor

+
+
torch_adaptive_avg_pool1d(self, output_size)
+
+
+

Arguments

+
self
+

the input tensor

+
output_size
+

the target output size (single integer)

+
+
+

adaptive_avg_pool1d(input, output_size) -> Tensor

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

-

See nn_adaptive_avg_pool1d() for details and output shape.

+

See nn_adaptive_avg_pool1d() for details and output shape.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_add.html b/dev/reference/torch_add.html index ec6363a7eb4b31f84b1e565882857df119a730c2..68831626c76dbf57700c0ba9502ca9180d6fb91e 100644 --- a/dev/reference/torch_add.html +++ b/dev/reference/torch_add.html @@ -1,79 +1,18 @@ - - - - - - - -Add — torch_add • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Add — torch_add • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_add(self, other, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor/Number) the second input tensor/number.

alpha

(Number) the scalar multiplier for other

- -

add(input, other, out=NULL)

+
+
torch_add(self, other, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor/Number) the second input tensor/number.

+
alpha
+

(Number) the scalar multiplier for other

+
+
+

add(input, other, out=NULL)

@@ -220,8 +136,9 @@ and returns a new resulting tensor.

$$ If input is of type FloatTensor or DoubleTensor, other must be a real number, otherwise it should be an integer.

-

add(input, other, *, alpha=1, out=NULL)

- +
+
+

add(input, other, *, alpha=1, out=NULL)

@@ -235,53 +152,52 @@ broadcastable .

$$ If other is of type FloatTensor or DoubleTensor, alpha must be a real number, otherwise it should be an integer.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_add(a, 20)
-
-
-a = torch_randn(c(4))
-a
-b = torch_randn(c(4, 1))
-b
-torch_add(a, b)
-}
-#> torch_tensor
-#> -0.1948 -1.0123 -1.7895  0.3602
-#> -0.0349 -0.8524 -1.6297  0.5200
-#> -0.6679 -1.4855 -2.2627 -0.1130
-#> -0.9886 -1.8062 -2.5834 -0.4337
-#> [ CPUFloatType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_add(a, 20)
+
+
+a = torch_randn(c(4))
+a
+b = torch_randn(c(4, 1))
+b
+torch_add(a, b)
+}
+#> torch_tensor
+#>  0.3543  2.7229  1.5358  0.2578
+#> -0.6951  1.6735  0.4864 -0.7916
+#> -2.1592  0.2094 -0.9777 -2.2557
+#> -1.9392  0.4294 -0.7577 -2.0357
+#> [ CPUFloatType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addbmm.html b/dev/reference/torch_addbmm.html index 4ca57811ba98e7feb668dbdeed4e91cc70b1fad5..d22e3f219ea9e5cd11abf499707198a11b337bdf 100644 --- a/dev/reference/torch_addbmm.html +++ b/dev/reference/torch_addbmm.html @@ -1,79 +1,18 @@ - - - - - - - -Addbmm — torch_addbmm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addbmm — torch_addbmm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) matrix to be added

batch1

(Tensor) the first batch of matrices to be multiplied

batch2

(Tensor) the second batch of matrices to be multiplied

beta

(Number, optional) multiplier for input (\(\beta\))

alpha

(Number, optional) multiplier for batch1 @ batch2 (\(\alpha\))

- -

addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

+
+
torch_addbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) matrix to be added

+
batch1
+

(Tensor) the first batch of matrices to be multiplied

+
batch2
+

(Tensor) the second batch of matrices to be multiplied

+
beta
+

(Number, optional) multiplier for input (\(\beta\))

+
alpha
+

(Number, optional) multiplier for batch1 @ batch2 (\(\alpha\))

+
+
+

addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

@@ -237,46 +149,45 @@ and out will be a \((n \times p)\) tensor.

$$ For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers, otherwise they should be integers.

+
-

Examples

-
if (torch_is_installed()) {
-
-M = torch_randn(c(3, 5))
-batch1 = torch_randn(c(10, 3, 4))
-batch2 = torch_randn(c(10, 4, 5))
-torch_addbmm(M, batch1, batch2)
-}
-#> torch_tensor
-#>  -0.2711  -1.1760  -2.5380  10.8767  -2.0978
-#>  -2.8442   3.0542  -2.4135   5.0735   0.5867
-#>  -6.5183  -7.9325  -4.9244  -0.1476 -10.0104
-#> [ CPUFloatType{3,5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+M = torch_randn(c(3, 5))
+batch1 = torch_randn(c(10, 3, 4))
+batch2 = torch_randn(c(10, 4, 5))
+torch_addbmm(M, batch1, batch2)
+}
+#> torch_tensor
+#>   5.5040   3.1313   6.0254  -1.2240  14.6943
+#>  -1.8030   4.7817  -8.3587  -8.0430   7.2059
+#>  -8.5861  15.5201  -4.4082  -7.4407   2.6947
+#> [ CPUFloatType{3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addcdiv.html b/dev/reference/torch_addcdiv.html index d8691b30a643df6c3140fceb852e872cfd116d9c..18edc253c21cf2629dd5fd79c0e4881e06009d04 100644 --- a/dev/reference/torch_addcdiv.html +++ b/dev/reference/torch_addcdiv.html @@ -1,79 +1,18 @@ - - - - - - - -Addcdiv — torch_addcdiv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addcdiv — torch_addcdiv • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addcdiv(self, tensor1, tensor2, value = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the tensor to be added

tensor1

(Tensor) the numerator tensor

tensor2

(Tensor) the denominator tensor

value

(Number, optional) multiplier for \(\mbox{tensor1} / \mbox{tensor2}\)

- -

addcdiv(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

+
+
torch_addcdiv(self, tensor1, tensor2, value = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to be added

+
tensor1
+

(Tensor) the numerator tensor

+
tensor2
+

(Tensor) the denominator tensor

+
value
+

(Number, optional) multiplier for \(\mbox{tensor1} / \mbox{tensor2}\)

+
+
+

addcdiv(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

Performs the element-wise division of tensor1 by tensor2, multiply the result by the scalar value and add it to input.

-

Warning

- +
+
+

Warning

Integer division with addcdiv is deprecated, and in a future release addcdiv will perform a true division of tensor1 and tensor2. -The current addcdiv behavior can be replicated using torch_floor_divide() +The current addcdiv behavior can be replicated using torch_floor_divide() for integral inputs (input + value * tensor1 // tensor2) -and torch_div() for float inputs +and torch_div() for float inputs (input + value * tensor1 / tensor2). -The new addcdiv behavior can be implemented with torch_true_divide() +The new addcdiv behavior can be implemented with torch_true_divide() (input + value * torch.true_divide(tensor1, tensor2).

$$ @@ -240,46 +155,45 @@ $$

broadcastable .

For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.

+
-

Examples

-
if (torch_is_installed()) {
-
-t = torch_randn(c(1, 3))
-t1 = torch_randn(c(3, 1))
-t2 = torch_randn(c(1, 3))
-torch_addcdiv(t, t1, t2, 0.1)
-}
-#> torch_tensor
-#>  1.8376 -0.4355 -1.2186
-#>  1.8460 -0.4288 -1.2220
-#>  1.9036 -0.3834 -1.2445
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t = torch_randn(c(1, 3))
+t1 = torch_randn(c(3, 1))
+t2 = torch_randn(c(1, 3))
+torch_addcdiv(t, t1, t2, 0.1)
+}
+#> torch_tensor
+#> -0.2266 -0.5436 -0.4552
+#> -0.3244 -0.4842 -0.3736
+#> -0.3762 -0.4527 -0.3304
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addcmul.html b/dev/reference/torch_addcmul.html index 48c6a19c7e1cb8ce90fb96bfef06f15612559a25..8c660b11efe11581b3b48c00f61ae31b45ca6dbf 100644 --- a/dev/reference/torch_addcmul.html +++ b/dev/reference/torch_addcmul.html @@ -1,79 +1,18 @@ - - - - - - - -Addcmul — torch_addcmul • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addcmul — torch_addcmul • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addcmul(self, tensor1, tensor2, value = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the tensor to be added

tensor1

(Tensor) the tensor to be multiplied

tensor2

(Tensor) the tensor to be multiplied

value

(Number, optional) multiplier for \(tensor1 .* tensor2\)

- -

addcmul(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

+
+
torch_addcmul(self, tensor1, tensor2, value = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to be added

+
tensor1
+

(Tensor) the tensor to be multiplied

+
tensor2
+

(Tensor) the tensor to be multiplied

+
value
+

(Number, optional) multiplier for \(tensor1 .* tensor2\)

+
+
+

addcmul(input, tensor1, tensor2, *, value=1, out=NULL) -> Tensor

@@ -227,47 +141,45 @@ The shapes of tensor, tensor1, and tensor2

For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.

+
-

Examples

-
if (torch_is_installed()) {
-
-t = torch_randn(c(1, 3))
-t1 = torch_randn(c(3, 1))
-t2 = torch_randn(c(1, 3))
-torch_addcmul(t, t1, t2, 0.1)
-}
-#> torch_tensor
-#> 0.01 *
-#> -7.0631 -243.5519  0.5258
-#>  -10.9131 -245.7551  0.0293
-#>  -15.9715 -248.6499 -0.6231
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t = torch_randn(c(1, 3))
+t1 = torch_randn(c(3, 1))
+t2 = torch_randn(c(1, 3))
+torch_addcmul(t, t1, t2, 0.1)
+}
+#> torch_tensor
+#>  0.4796 -1.0442 -0.2288
+#>  0.4374 -1.0310 -0.2786
+#>  0.3577 -1.0059 -0.3727
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addmm.html b/dev/reference/torch_addmm.html index c442063159bf3c7ee91c01d1c0864da984e31e3d..a8a24523afe086b816235fd445ad58d6d7893d62 100644 --- a/dev/reference/torch_addmm.html +++ b/dev/reference/torch_addmm.html @@ -1,79 +1,18 @@ - - - - - - - -Addmm — torch_addmm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addmm — torch_addmm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addmm(self, mat1, mat2, beta = 1L, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) matrix to be added

mat1

(Tensor) the first matrix to be multiplied

mat2

(Tensor) the second matrix to be multiplied

beta

(Number, optional) multiplier for input (\(\beta\))

alpha

(Number, optional) multiplier for \(mat1 @ mat2\) (\(\alpha\))

- -

addmm(input, mat1, mat2, *, beta=1, alpha=1, out=NULL) -> Tensor

+
+
torch_addmm(self, mat1, mat2, beta = 1L, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) matrix to be added

+
mat1
+

(Tensor) the first matrix to be multiplied

+
mat2
+

(Tensor) the second matrix to be multiplied

+
beta
+

(Number, optional) multiplier for input (\(\beta\))

+
alpha
+

(Number, optional) multiplier for \(mat1 @ mat2\) (\(\alpha\))

+
+
+

addmm(input, mat1, mat2, *, beta=1, alpha=1, out=NULL) -> Tensor

@@ -234,45 +146,44 @@ and out will be a \((n \times p)\) tensor.

$$ For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers, otherwise they should be integers.

+
-

Examples

-
if (torch_is_installed()) {
-
-M = torch_randn(c(2, 3))
-mat1 = torch_randn(c(2, 3))
-mat2 = torch_randn(c(3, 3))
-torch_addmm(M, mat1, mat2)
-}
-#> torch_tensor
-#>  8.2535  2.8206  0.7891
-#> -4.0131 -1.6696  0.6269
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+M = torch_randn(c(2, 3))
+mat1 = torch_randn(c(2, 3))
+mat2 = torch_randn(c(3, 3))
+torch_addmm(M, mat1, mat2)
+}
+#> torch_tensor
+#> -0.9713  3.1402 -2.9559
+#>  2.6256 -2.4291  5.2249
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addmv.html b/dev/reference/torch_addmv.html index 491fcf20ee56aa0280123bdf9bd54df2f3b43c31..d327633290aa5b4e3e17db9b3b6c9a1e95f9a0f7 100644 --- a/dev/reference/torch_addmv.html +++ b/dev/reference/torch_addmv.html @@ -1,79 +1,18 @@ - - - - - - - -Addmv — torch_addmv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addmv — torch_addmv • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addmv(self, mat, vec, beta = 1L, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) vector to be added

mat

(Tensor) matrix to be multiplied

vec

(Tensor) vector to be multiplied

beta

(Number, optional) multiplier for input (\(\beta\))

alpha

(Number, optional) multiplier for \(mat @ vec\) (\(\alpha\))

- -

addmv(input, mat, vec, *, beta=1, alpha=1, out=NULL) -> Tensor

+
+
torch_addmv(self, mat, vec, beta = 1L, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) vector to be added

+
mat
+

(Tensor) matrix to be multiplied

+
vec
+

(Tensor) vector to be multiplied

+
beta
+

(Number, optional) multiplier for input (\(\beta\))

+
alpha
+

(Number, optional) multiplier for \(mat @ vec\) (\(\alpha\))

+
+
+

addmv(input, mat, vec, *, beta=1, alpha=1, out=NULL) -> Tensor

@@ -235,45 +147,44 @@ broadcastable with a 1-D tensor of size n and $$ For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers, otherwise they should be integers.

+
-

Examples

-
if (torch_is_installed()) {
-
-M = torch_randn(c(2))
-mat = torch_randn(c(2, 3))
-vec = torch_randn(c(3))
-torch_addmv(M, mat, vec)
-}
-#> torch_tensor
-#>  2.8809
-#>  1.3260
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+M = torch_randn(c(2))
+mat = torch_randn(c(2, 3))
+vec = torch_randn(c(3))
+torch_addmv(M, mat, vec)
+}
+#> torch_tensor
+#> -1.6625
+#>  1.2665
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_addr.html b/dev/reference/torch_addr.html index c60dd43fecd273a1a4fb3448208e3654c028134b..63deef8597b2c74a1a98900b01ed59b2cf644f84 100644 --- a/dev/reference/torch_addr.html +++ b/dev/reference/torch_addr.html @@ -1,79 +1,18 @@ - - - - - - - -Addr — torch_addr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Addr — torch_addr • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_addr(self, vec1, vec2, beta = 1L, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) matrix to be added

vec1

(Tensor) the first vector of the outer product

vec2

(Tensor) the second vector of the outer product

beta

(Number, optional) multiplier for input (\(\beta\))

alpha

(Number, optional) multiplier for \(\mbox{vec1} \otimes \mbox{vec2}\) (\(\alpha\))

- -

addr(input, vec1, vec2, *, beta=1, alpha=1, out=NULL) -> Tensor

+
+
torch_addr(self, vec1, vec2, beta = 1L, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) matrix to be added

+
vec1
+

(Tensor) the first vector of the outer product

+
vec2
+

(Tensor) the second vector of the outer product

+
beta
+

(Number, optional) multiplier for input (\(\beta\))

+
alpha
+

(Number, optional) multiplier for \(\mbox{vec1} \otimes \mbox{vec2}\) (\(\alpha\))

+
+
+

addr(input, vec1, vec2, *, beta=1, alpha=1, out=NULL) -> Tensor

@@ -236,46 +148,45 @@ broadcastable with a matrix of size \((n \times m)\).

For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers, otherwise they should be integers.

+
-

Examples

-
if (torch_is_installed()) {
-
-vec1 = torch_arange(1, 3)
-vec2 = torch_arange(1, 2)
-M = torch_zeros(c(3, 2))
-torch_addr(M, vec1, vec2)
-}
-#> torch_tensor
-#>  1  2
-#>  2  4
-#>  3  6
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+vec1 = torch_arange(1, 3)
+vec2 = torch_arange(1, 2)
+M = torch_zeros(c(3, 2))
+torch_addr(M, vec1, vec2)
+}
+#> torch_tensor
+#>  1  2
+#>  2  4
+#>  3  6
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_allclose.html b/dev/reference/torch_allclose.html index 184df2920894749bc08b7e59b9c9d4e666a6a89f..c2923a4ec03bd56251a19734ebbab7cc3b653146 100644 --- a/dev/reference/torch_allclose.html +++ b/dev/reference/torch_allclose.html @@ -1,79 +1,18 @@ - - - - - - - -Allclose — torch_allclose • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Allclose — torch_allclose • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,35 +111,25 @@

Allclose

-
torch_allclose(self, other, rtol = 1e-05, atol = 0, equal_nan = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) first tensor to compare

other

(Tensor) second tensor to compare

rtol

(float, optional) relative tolerance. Default: 1e-05

atol

(float, optional) absolute tolerance. Default: 1e-08

equal_nan

(bool, optional) if TRUE, then two NaN s will be compared as equal. Default: FALSE

- -

allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool

+
+
torch_allclose(self, other, rtol = 1e-05, atol = 0, equal_nan = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) first tensor to compare

+
other
+

(Tensor) second tensor to compare

+
rtol
+

(float, optional) relative tolerance. Default: 1e-05

+
atol
+

(float, optional) absolute tolerance. Default: 1e-08

+
equal_nan
+

(bool, optional) if TRUE, then two NaN s will be compared as equal. Default: FALSE

+
+
+

allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool

@@ -227,42 +139,41 @@ $$ elementwise, for all elements of input and other. The behaviour of this function is analogous to numpy.allclose (https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html)

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_allclose(torch_tensor(c(10000., 1e-07)), torch_tensor(c(10000.1, 1e-08)))
-torch_allclose(torch_tensor(c(10000., 1e-08)), torch_tensor(c(10000.1, 1e-09)))
-torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)))
-torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)), equal_nan=TRUE)
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_allclose(torch_tensor(c(10000., 1e-07)), torch_tensor(c(10000.1, 1e-08)))
+torch_allclose(torch_tensor(c(10000., 1e-08)), torch_tensor(c(10000.1, 1e-09)))
+torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)))
+torch_allclose(torch_tensor(c(1.0, NaN)), torch_tensor(c(1.0, NaN)), equal_nan=TRUE)
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_amax.html b/dev/reference/torch_amax.html index 8d5aceb6092fe12bae4253486600a726d1d3d4a6..83708a5dd2cfa48daba5765543a4776b2f4091f5 100644 --- a/dev/reference/torch_amax.html +++ b/dev/reference/torch_amax.html @@ -1,79 +1,18 @@ - - - - - - - -Amax — torch_amax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Amax — torch_amax • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_amax(self, dim = list(), keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

- -

Note

+
+
torch_amax(self, dim = list(), keepdim = FALSE)
+
-

The difference between max/min and amax/amin is:

    -
  • amax/amin supports reducing on multiple dimensions,

  • +
    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor.

    +
    dim
    +

    (int or tuple of ints) the dimension or dimensions to reduce.

    +
    keepdim
    +

    (bool) whether the output tensor has dim retained or not.

    +
    +
    +

    Note

    +

    The difference between max/min and amax/amin is:

    • amax/amin supports reducing on multiple dimensions,

    • amax/amin does not return indices,

    • amax/amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.

    • -
    - -

    If keepdim is TRUE, the output tensors are of the same size as input except in the dimension(s) dim where they are of size 1. Otherwise, dims are squeezed (see torch_squeeze()), resulting in the output tensors having fewer dimensions than input.

    -

    amax(input, dim, keepdim=FALSE, *, out=None) -> Tensor

    - +

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension(s) dim where they are of size 1. Otherwise, dims are squeezed (see torch_squeeze()), resulting in the output tensors having fewer dimensions than input.

+
+
+

amax(input, dim, keepdim=FALSE, *, out=None) -> Tensor

Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(4, 4))
-a
-torch_amax(a, 1)
-}
-#> torch_tensor
-#>  0.8230
-#>  1.4208
-#>  0.4793
-#>  1.5478
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(4, 4))
+a
+torch_amax(a, 1)
+}
+#> torch_tensor
+#>  1.8245
+#>  1.3432
+#>  1.2888
+#> -0.4561
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_amin.html b/dev/reference/torch_amin.html index b33cabf63885f87a7b7790dbf4dedadf91d51437..e26cf55415a0ea3e05f73d44e3e803de7de5e66e 100644 --- a/dev/reference/torch_amin.html +++ b/dev/reference/torch_amin.html @@ -1,79 +1,18 @@ - - - - - - - -Amin — torch_amin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Amin — torch_amin • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_amin(self, dim = list(), keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

- -

Note

+
+
torch_amin(self, dim = list(), keepdim = FALSE)
+
-

The difference between max/min and amax/amin is:

    -
  • amax/amin supports reducing on multiple dimensions,

  • +
    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor.

    +
    dim
    +

    (int or tuple of ints) the dimension or dimensions to reduce.

    +
    keepdim
    +

    (bool) whether the output tensor has dim retained or not.

    +
    +
    +

    Note

    +

    The difference between max/min and amax/amin is:

    • amax/amin supports reducing on multiple dimensions,

    • amax/amin does not return indices,

    • amax/amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.

    • -
    - -

    If keepdim is TRUE, the output tensors are of the same size as +

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension(s) dim where they are of size 1. Otherwise, dims are squeezed (see torch_squeeze()), resulting in the output tensors having fewer dimensions than input.

-

amin(input, dim, keepdim=FALSE, *, out=None) -> Tensor

- +
+
+

amin(input, dim, keepdim=FALSE, *, out=None) -> Tensor

Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(4, 4))
-a
-torch_amin(a, 1)
-}
-#> torch_tensor
-#> -0.3340
-#> -0.8503
-#> -1.2803
-#> -0.3193
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(4, 4))
+a
+torch_amin(a, 1)
+}
+#> torch_tensor
+#> -1.5179
+#> -1.2768
+#> -0.7408
+#> -1.0835
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_angle.html b/dev/reference/torch_angle.html index b9fefc527874ed520451489aa98314785358186e..a6486b5ceff39a3ef5b74ac96d60716c2f006f1b 100644 --- a/dev/reference/torch_angle.html +++ b/dev/reference/torch_angle.html @@ -1,79 +1,18 @@ - - - - - - - -Angle — torch_angle • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Angle — torch_angle • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_angle(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

angle(input) -> Tensor

+
+
torch_angle(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

angle(input) -> Tensor

@@ -209,40 +129,39 @@

$$ \mbox{out}_{i} = angle(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-torch_angle(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))*180/3.14159
-}
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+torch_angle(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))*180/3.14159
+}
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_arange.html b/dev/reference/torch_arange.html index 2a58c9b6a2526a51ae357e77937987b0b7c1f9a1..e6a46e467bc1a26d3761438271fe68e19f7a04a6 100644 --- a/dev/reference/torch_arange.html +++ b/dev/reference/torch_arange.html @@ -1,79 +1,18 @@ - - - - - - - -Arange — torch_arange • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arange — torch_arange • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arange(
-  start,
-  end,
-  step = 1,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
start

(Number) the starting value for the set of points. Default: 0.

end

(Number) the ending value for the set of points

step

(Number) the gap between each pair of adjacent points. Default: 1.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see torch.get_default_dtype. Otherwise, the dtype is inferred to be torch.int64.

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

arange(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_arange(
+  start,
+  end,
+  step = 1,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
start
+

(Number) the starting value for the set of points. Default: 0.

+
end
+

(Number) the ending value for the set of points

+
step
+

(Number) the gap between each pair of adjacent points. Default: 1.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see torch.get_default_dtype. Otherwise, the dtype is inferred to be torch.int64.

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

arange(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -246,46 +154,45 @@ in such cases.

$$ \mbox{out}_{{i+1}} = \mbox{out}_{i} + \mbox{step} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_arange(start = 0, end = 5)
-torch_arange(1, 4)
-torch_arange(1, 2.5, 0.5)
-}
-#> torch_tensor
-#>  1.0000
-#>  1.5000
-#>  2.0000
-#>  2.5000
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_arange(start = 0, end = 5)
+torch_arange(1, 4)
+torch_arange(1, 2.5, 0.5)
+}
+#> torch_tensor
+#>  1.0000
+#>  1.5000
+#>  2.0000
+#>  2.5000
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_arccos.html b/dev/reference/torch_arccos.html index fbc36aaaa8181b65556029144ad0f3d41a90462d..f14a7f327920e660ccbbcd569b838b609f615d6b 100644 --- a/dev/reference/torch_arccos.html +++ b/dev/reference/torch_arccos.html @@ -1,79 +1,18 @@ - - - - - - - -Arccos — torch_arccos • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arccos — torch_arccos • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arccos(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arccos(input, *, out=None) -> Tensor

+
+
torch_arccos(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arccos(input, *, out=None) -> Tensor

-

Alias for torch_acos().

+

Alias for torch_acos().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_arccosh.html b/dev/reference/torch_arccosh.html index 7aad2bc28978bffa4e0147429705f61e681f25d4..4eaa0e34d48dcbe9bacda02ed755ad09f83fac21 100644 --- a/dev/reference/torch_arccosh.html +++ b/dev/reference/torch_arccosh.html @@ -1,79 +1,18 @@ - - - - - - - -Arccosh — torch_arccosh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arccosh — torch_arccosh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arccosh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arccosh(input, *, out=None) -> Tensor

+
+
torch_arccosh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arccosh(input, *, out=None) -> Tensor

-

Alias for torch_acosh().

+

Alias for torch_acosh().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_arcsin.html b/dev/reference/torch_arcsin.html index 56bbd507eb4ebfca17860c856266fa1100cf734c..3e8d761a76bb39bb3be417ee54ce3607a3570d91 100644 --- a/dev/reference/torch_arcsin.html +++ b/dev/reference/torch_arcsin.html @@ -1,79 +1,18 @@ - - - - - - - -Arcsin — torch_arcsin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arcsin — torch_arcsin • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arcsin(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arcsin(input, *, out=None) -> Tensor

+
+
torch_arcsin(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arcsin(input, *, out=None) -> Tensor

-

Alias for torch_asin().

+

Alias for torch_asin().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_arcsinh.html b/dev/reference/torch_arcsinh.html index c711eb731648b864311e9829a3719ffc264dcda1..fe1d8b374b14020e2224642e3b9da6a741522611 100644 --- a/dev/reference/torch_arcsinh.html +++ b/dev/reference/torch_arcsinh.html @@ -1,79 +1,18 @@ - - - - - - - -Arcsinh — torch_arcsinh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arcsinh — torch_arcsinh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arcsinh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arcsinh(input, *, out=None) -> Tensor

+
+
torch_arcsinh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arcsinh(input, *, out=None) -> Tensor

-

Alias for torch_asinh().

+

Alias for torch_asinh().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_arctan.html b/dev/reference/torch_arctan.html index ae88ecb5adc3dc735dfbd495d2ddda217577703f..f5941a9d844d85f531d415b5911df0c576bed16c 100644 --- a/dev/reference/torch_arctan.html +++ b/dev/reference/torch_arctan.html @@ -1,79 +1,18 @@ - - - - - - - -Arctan — torch_arctan • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arctan — torch_arctan • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arctan(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arctan(input, *, out=None) -> Tensor

+
+
torch_arctan(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arctan(input, *, out=None) -> Tensor

-

Alias for torch_atan().

+

Alias for torch_atan().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_arctanh.html b/dev/reference/torch_arctanh.html index 476aa94413a05249fb7ccae665924cb63e23f1b0..b0fae24eedb19c4996af82e3a8d46051c6a32b6b 100644 --- a/dev/reference/torch_arctanh.html +++ b/dev/reference/torch_arctanh.html @@ -1,79 +1,18 @@ - - - - - - - -Arctanh — torch_arctanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Arctanh — torch_arctanh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_arctanh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

arctanh(input, *, out=None) -> Tensor

+
+
torch_arctanh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

arctanh(input, *, out=None) -> Tensor

-

Alias for torch_atanh().

+

Alias for torch_atanh().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_argmax.html b/dev/reference/torch_argmax.html index 522f19f64622a84a4454b91ab31ab645121e420d..09b4a127769e1bd09759e497f48690569e6c2f8e 100644 --- a/dev/reference/torch_argmax.html +++ b/dev/reference/torch_argmax.html @@ -1,79 +1,18 @@ - - - - - - - -Argmax — torch_argmax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Argmax — torch_argmax • torch - - - - - - - - + + -
-
- -
- -
+
@@ -190,86 +112,78 @@
-

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce. If NULL, the argmax of the flattened input is returned.

keepdim

(bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

- -

argmax(input) -> LongTensor

- +
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce. If NULL, the argmax of the flattened input is returned.

+
keepdim
+

(bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

+
+
+

argmax(input) -> LongTensor

Returns the indices of the maximum value of all elements in the input tensor.

This is the second value returned by torch_max. See its documentation for the exact semantics of this method.

-

argmax(input, dim, keepdim=False) -> LongTensor

- +
+
+

argmax(input, dim, keepdim=False) -> LongTensor

Returns the indices of the maximum values of a tensor across a dimension.

This is the second value returned by torch_max. See its documentation for the exact semantics of this method.

+
-

Examples

-
if (torch_is_installed()) {
-
-if (FALSE) {
-a = torch_randn(c(4, 4))
-a
-torch_argmax(a)
-}
-
-
-a = torch_randn(c(4, 4))
-a
-torch_argmax(a, dim=1)
-}
-#> torch_tensor
-#>  1
-#>  4
-#>  3
-#>  4
-#> [ CPULongType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (FALSE) {
+a = torch_randn(c(4, 4))
+a
+torch_argmax(a)
+}
+
+
+a = torch_randn(c(4, 4))
+a
+torch_argmax(a, dim=1)
+}
+#> torch_tensor
+#>  1
+#>  1
+#>  4
+#>  1
+#> [ CPULongType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_argmin.html b/dev/reference/torch_argmin.html index 132849983cba330c2f542f8f66d17026ee3f0102..0df775e3e580a526ccdbf5e6e80b4dd186a8f0ae 100644 --- a/dev/reference/torch_argmin.html +++ b/dev/reference/torch_argmin.html @@ -1,79 +1,18 @@ - - - - - - - -Argmin — torch_argmin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Argmin — torch_argmin • torch - - - - - - - - + + -
-
- -
- -
+
@@ -190,84 +112,76 @@
-

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce. If NULL, the argmin of the flattened input is returned.

keepdim

(bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

- -

argmin(input) -> LongTensor

- +
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce. If NULL, the argmin of the flattened input is returned.

+
keepdim
+

(bool) whether the output tensor has dim retained or not. Ignored if dim=NULL.

+
+
+

argmin(input) -> LongTensor

Returns the indices of the minimum value of all elements in the input tensor.

This is the second value returned by torch_min. See its documentation for the exact semantics of this method.

-

argmin(input, dim, keepdim=False, out=NULL) -> LongTensor

- +
+
+

argmin(input, dim, keepdim=False, out=NULL) -> LongTensor

Returns the indices of the minimum values of a tensor across a dimension.

This is the second value returned by torch_min. See its documentation for the exact semantics of this method.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4, 4))
-a
-torch_argmin(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_argmin(a, dim=1)
-}
-#> torch_tensor
-#>  2
-#>  4
-#>  4
-#>  1
-#> [ CPULongType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4, 4))
+a
+torch_argmin(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_argmin(a, dim=1)
+}
+#> torch_tensor
+#>  2
+#>  3
+#>  2
+#>  3
+#> [ CPULongType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_argsort.html b/dev/reference/torch_argsort.html index a596ca7151978dc964fc70cea67e8318ff6e7ea8..f0fbd18c0bcd90247ac0f0909a95500b86bed6ca 100644 --- a/dev/reference/torch_argsort.html +++ b/dev/reference/torch_argsort.html @@ -1,79 +1,18 @@ - - - - - - - -Argsort — torch_argsort • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Argsort — torch_argsort • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Argsort

-
torch_argsort(self, dim = -1L, descending = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int, optional) the dimension to sort along

descending

(bool, optional) controls the sorting order (ascending or descending)

- -

argsort(input, dim=-1, descending=False) -> LongTensor

+
+
torch_argsort(self, dim = -1L, descending = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int, optional) the dimension to sort along

+
descending
+

(bool, optional) controls the sorting order (ascending or descending)

+
+
+

argsort(input, dim=-1, descending=False) -> LongTensor

@@ -217,46 +133,45 @@ order by value.

This is the second value returned by torch_sort. See its documentation for the exact semantics of this method.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4, 4))
-a
-torch_argsort(a, dim=1)
-}
-#> torch_tensor
-#>  1  1  3  3
-#>  2  2  1  1
-#>  4  3  2  2
-#>  3  4  4  4
-#> [ CPULongType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4, 4))
+a
+torch_argsort(a, dim=1)
+}
+#> torch_tensor
+#>  3  2  1  4
+#>  4  1  2  3
+#>  1  3  4  2
+#>  2  4  3  1
+#> [ CPULongType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_as_strided.html b/dev/reference/torch_as_strided.html index d980bbf76248649a7666ec8df994698220047199..396edb6ac91a0c91f4259539fa02d32c94263cdd 100644 --- a/dev/reference/torch_as_strided.html +++ b/dev/reference/torch_as_strided.html @@ -1,79 +1,18 @@ - - - - - - - -As_strided — torch_as_strided • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -As_strided — torch_as_strided • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,90 +111,82 @@

As_strided

-
torch_as_strided(self, size, stride, storage_offset = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

size

(tuple or ints) the shape of the output tensor

stride

(tuple or ints) the stride of the output tensor

storage_offset

(int, optional) the offset in the underlying storage of the output tensor

- -

as_strided(input, size, stride, storage_offset=0) -> Tensor

+
+
torch_as_strided(self, size, stride, storage_offset = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
size
+

(tuple or ints) the shape of the output tensor

+
stride
+

(tuple or ints) the stride of the output tensor

+
storage_offset
+

(int, optional) the offset in the underlying storage of the output tensor

+
+
+

as_strided(input, size, stride, storage_offset=0) -> Tensor

Create a view of an existing torch_Tensor input with specified size, stride and storage_offset.

-

Warning

- +
+
+

Warning

More than one element of a created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.

Many PyTorch functions, which return a view of a tensor, are internally
+the tensors, please clone them first.

Many PyTorch functions, which return a view of a tensor, are internally
 implemented with this function. Those functions, like
 `torch_Tensor.expand`, are easier to read and are therefore more
 advisable to use.
-
+
+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(3, 3))
-x
-t = torch_as_strided(x, list(2, 2), list(1, 2))
-t
-t = torch_as_strided(x, list(2, 2), list(1, 2), 1)
-t
-}
-#> torch_tensor
-#>  0.4258  0.2849
-#> -0.7091 -0.0600
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(3, 3))
+x
+t = torch_as_strided(x, list(2, 2), list(1, 2))
+t
+t = torch_as_strided(x, list(2, 2), list(1, 2), 1)
+t
+}
+#> torch_tensor
+#>  0.2001  0.7893
+#>  0.9597 -0.8947
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_asin.html b/dev/reference/torch_asin.html index 73b16fafbcb9411474c88447bd872747d6c68b60..41de336fb7fb5ecf32fc9dcccdd8f8b2beb7b230 100644 --- a/dev/reference/torch_asin.html +++ b/dev/reference/torch_asin.html @@ -1,79 +1,18 @@ - - - - - - - -Asin — torch_asin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Asin — torch_asin • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_asin(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

asin(input, out=NULL) -> Tensor

+
+
torch_asin(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

asin(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \sin^{-1}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_asin(a)
-}
-#> torch_tensor
-#>     nan
-#>  0.2758
-#> -0.0260
-#>     nan
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_asin(a)
+}
+#> torch_tensor
+#>  0.2008
+#>     nan
+#>  0.0981
+#>  0.2174
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_asinh.html b/dev/reference/torch_asinh.html index 0fc3269ce9ad07e558966a94ede0a70b65be7a75..4900c910be625289eb55fe0d161da6fae77db688 100644 --- a/dev/reference/torch_asinh.html +++ b/dev/reference/torch_asinh.html @@ -1,79 +1,18 @@ - - - - - - - -Asinh — torch_asinh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Asinh — torch_asinh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_asinh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

asinh(input, *, out=None) -> Tensor

+
+
torch_asinh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

asinh(input, *, out=None) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \sinh^{-1}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(4))
-a
-torch_asinh(a)
-}
-#> torch_tensor
-#>  0.7173
-#>  1.2948
-#> -0.5477
-#> -0.3671
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(4))
+a
+torch_asinh(a)
+}
+#> torch_tensor
+#> -0.8178
+#> -0.2430
+#>  0.6351
+#> -0.5750
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atan.html b/dev/reference/torch_atan.html index 74da035af91e784479634ca94f4b5fd47b56ab9b..ab7ac544405e10de87b5cae4245d10cba474c774 100644 --- a/dev/reference/torch_atan.html +++ b/dev/reference/torch_atan.html @@ -1,79 +1,18 @@ - - - - - - - -Atan — torch_atan • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atan — torch_atan • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_atan(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

atan(input, out=NULL) -> Tensor

+
+
torch_atan(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

atan(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \tan^{-1}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_atan(a)
-}
-#> torch_tensor
-#> -0.5706
-#> -0.5375
-#>  0.9102
-#>  0.8680
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_atan(a)
+}
+#> torch_tensor
+#> -0.1993
+#> -0.4418
+#>  0.1197
+#>  0.8482
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atan2.html b/dev/reference/torch_atan2.html index 856e56a16a9fa31d66d0bf4e63e33fe1d988c23e..be0665963cd0f60a1ebed09f40c7b3dc5a6d49f9 100644 --- a/dev/reference/torch_atan2.html +++ b/dev/reference/torch_atan2.html @@ -1,79 +1,18 @@ - - - - - - - -Atan2 — torch_atan2 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atan2 — torch_atan2 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_atan2(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first input tensor

other

(Tensor) the second input tensor

- -

atan2(input, other, out=NULL) -> Tensor

+
+
torch_atan2(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first input tensor

+
other
+

(Tensor) the second input tensor

+
+
+

atan2(input, other, out=NULL) -> Tensor

@@ -217,46 +135,45 @@ parameter, is the x-coordinate, while \(\mbox{input}_{i}\), the first parameter, is the y-coordinate.)

The shapes of input and other must be broadcastable .

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_atan2(a, torch_randn(c(4)))
-}
-#> torch_tensor
-#>  2.2546
-#>  2.0275
-#>  1.8975
-#> -3.0418
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_atan2(a, torch_randn(c(4)))
+}
+#> torch_tensor
+#>  0.4798
+#>  0.4301
+#> -2.5647
+#> -0.8750
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atanh.html b/dev/reference/torch_atanh.html index 6f77804df57ba69986b1bd5e3bc262be1b659c54..7ea3a1345783be49b60ade6b4f42a1392af4df5e 100644 --- a/dev/reference/torch_atanh.html +++ b/dev/reference/torch_atanh.html @@ -1,79 +1,18 @@ - - - - - - - -Atanh — torch_atanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atanh — torch_atanh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_atanh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

Note

+
+
torch_atanh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

Note

The domain of the inverse hyperbolic tangent is (-1, 1) and values outside this range will be mapped to NaN, except for the values 1 and -1 for which the output is mapped to +/-INF respectively.

$$ \mbox{out}_{i} = \tanh^{-1}(\mbox{input}_{i}) $$

-

atanh(input, *, out=None) -> Tensor

- +
+
+

atanh(input, *, out=None) -> Tensor

Returns a new tensor with the inverse hyperbolic tangent of the elements of input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))$uniform_(-1, 1)
-a
-torch_atanh(a)
-}
-#> torch_tensor
-#>  0.6934
-#> -1.4547
-#> -0.2354
-#>  0.0969
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))$uniform_(-1, 1)
+a
+torch_atanh(a)
+}
+#> torch_tensor
+#>  0.1630
+#> -0.3109
+#> -2.2191
+#>  0.4318
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atleast_1d.html b/dev/reference/torch_atleast_1d.html index a3d874a78fd62aec10fa1b424cfc0a085716db2e..9984fe448edc8f2e631338926c20406751be25dc 100644 --- a/dev/reference/torch_atleast_1d.html +++ b/dev/reference/torch_atleast_1d.html @@ -1,80 +1,19 @@ - - - - - - - -Atleast_1d — torch_atleast_1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atleast_1d — torch_atleast_1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,67 +113,63 @@ Input tensors with one or more dimensions are returned as-is." /> Input tensors with one or more dimensions are returned as-is.

-
torch_atleast_1d(self)
- -

Arguments

- - - - - - -
self

(Tensor or list of Tensors)

- +
+
torch_atleast_1d(self)
+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_randn(c(2))
-x
-torch_atleast_1d(x)
-x <- torch_tensor(1.)
-x
-torch_atleast_1d(x)
-x <- torch_tensor(0.5)
-y <- torch_tensor(1.)
-torch_atleast_1d(list(x,y))
-}
-#> [[1]]
-#> torch_tensor
-#>  0.5000
-#> [ CPUFloatType{1} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1
-#> [ CPUFloatType{1} ]
-#> 
-
+
+

Arguments

+
self
+

(Tensor or list of Tensors)

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_randn(c(2))
+x
+torch_atleast_1d(x)
+x <- torch_tensor(1.)
+x
+torch_atleast_1d(x)
+x <- torch_tensor(0.5)
+y <- torch_tensor(1.)
+torch_atleast_1d(list(x,y))
+}
+#> [[1]]
+#> torch_tensor
+#>  0.5000
+#> [ CPUFloatType{1} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  1
+#> [ CPUFloatType{1} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atleast_2d.html b/dev/reference/torch_atleast_2d.html index ba2614cc6692e480b647aaf93e2be6693051ad74..0ccc12328101e4a7735e0aa07e94c6dfb8f4c59e 100644 --- a/dev/reference/torch_atleast_2d.html +++ b/dev/reference/torch_atleast_2d.html @@ -1,80 +1,19 @@ - - - - - - - -Atleast_2d — torch_atleast_2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atleast_2d — torch_atleast_2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,67 +113,63 @@ Input tensors with two or more dimensions are returned as-is." /> Input tensors with two or more dimensions are returned as-is.

-
torch_atleast_2d(self)
- -

Arguments

- - - - - - -
self

(Tensor or list of Tensors)

- +
+
torch_atleast_2d(self)
+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_tensor(1.)
-x
-torch_atleast_2d(x)
-x <- torch_randn(c(2,2))
-x
-torch_atleast_2d(x)
-x <- torch_tensor(0.5)
-y <- torch_tensor(1.)
-torch_atleast_2d(list(x,y))
-}
-#> [[1]]
-#> torch_tensor
-#>  0.5000
-#> [ CPUFloatType{1,1} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1
-#> [ CPUFloatType{1,1} ]
-#> 
-
+
+

Arguments

+
self
+

(Tensor or list of Tensors)

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_tensor(1.)
+x
+torch_atleast_2d(x)
+x <- torch_randn(c(2,2))
+x
+torch_atleast_2d(x)
+x <- torch_tensor(0.5)
+y <- torch_tensor(1.)
+torch_atleast_2d(list(x,y))
+}
+#> [[1]]
+#> torch_tensor
+#>  0.5000
+#> [ CPUFloatType{1,1} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  1
+#> [ CPUFloatType{1,1} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_atleast_3d.html b/dev/reference/torch_atleast_3d.html index 49880af521815191869385882cefef4b373c06e4..ceac69c09f08c37b6f5e4acde76999c3dfaeadf9 100644 --- a/dev/reference/torch_atleast_3d.html +++ b/dev/reference/torch_atleast_3d.html @@ -1,80 +1,19 @@ - - - - - - - -Atleast_3d — torch_atleast_3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Atleast_3d — torch_atleast_3d • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,43 +113,37 @@ Input tensors with three or more dimensions are returned as-is." /> Input tensors with three or more dimensions are returned as-is.

-
torch_atleast_3d(self)
- -

Arguments

- - - - - - -
self

(Tensor or list of Tensors)

+
+
torch_atleast_3d(self)
+
+
+

Arguments

+
self
+

(Tensor or list of Tensors)

+
+
-
- +
- - + + diff --git a/dev/reference/torch_avg_pool1d.html b/dev/reference/torch_avg_pool1d.html index 29f2c30d219b722447550f56cb46414c0f518df0..a8696710f9b4dc04f2ae61b4439e4ab7bada1f8b 100644 --- a/dev/reference/torch_avg_pool1d.html +++ b/dev/reference/torch_avg_pool1d.html @@ -1,79 +1,18 @@ - - - - - - - -Avg_pool1d — torch_avg_pool1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Avg_pool1d — torch_avg_pool1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,78 +111,63 @@

Avg_pool1d

-
torch_avg_pool1d(
-  self,
-  kernel_size,
-  stride = list(),
-  padding = 0L,
-  ceil_mode = FALSE,
-  count_include_pad = TRUE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
self

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

kernel_size

the size of the window. Can be a single number or a tuple (kW,)

stride

the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size

padding

implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0

ceil_mode

when TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE

count_include_pad

when TRUE, will include the zero-padding in the averaging calculation. Default: TRUE

- -

avg_pool1d(input, kernel_size, stride=NULL, padding=0, ceil_mode=FALSE, count_include_pad=TRUE) -> Tensor

+
+
torch_avg_pool1d(
+  self,
+  kernel_size,
+  stride = list(),
+  padding = 0L,
+  ceil_mode = FALSE,
+  count_include_pad = TRUE
+)
+
+
+

Arguments

+
self
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

+
kernel_size
+

the size of the window. Can be a single number or a tuple (kW,)

+
stride
+

the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size

+
padding
+

implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0

+
ceil_mode
+

when TRUE, will use ceil instead of floor to compute the output shape. Default: FALSE

+
count_include_pad
+

when TRUE, will include the zero-padding in the averaging calculation. Default: TRUE

+
+
+

avg_pool1d(input, kernel_size, stride=NULL, padding=0, ceil_mode=FALSE, count_include_pad=TRUE) -> Tensor

Applies a 1D average pooling over an input signal composed of several input planes.

-

See nn_avg_pool1d() for details and output shape.

+

See nn_avg_pool1d() for details and output shape.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_baddbmm.html b/dev/reference/torch_baddbmm.html index eaecd9c15214221ae97389add82e31670f568fc7..fae63b416396d057e2291d593b15762375adabc7 100644 --- a/dev/reference/torch_baddbmm.html +++ b/dev/reference/torch_baddbmm.html @@ -1,79 +1,18 @@ - - - - - - - -Baddbmm — torch_baddbmm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Baddbmm — torch_baddbmm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_baddbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the tensor to be added

batch1

(Tensor) the first batch of matrices to be multiplied

batch2

(Tensor) the second batch of matrices to be multiplied

beta

(Number, optional) multiplier for input (\(\beta\))

alpha

(Number, optional) multiplier for \(\mbox{batch1} \mathbin{@} \mbox{batch2}\) (\(\alpha\))

- -

baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

+
+
torch_baddbmm(self, batch1, batch2, beta = 1L, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to be added

+
batch1
+

(Tensor) the first batch of matrices to be multiplied

+
batch2
+

(Tensor) the second batch of matrices to be multiplied

+
beta
+

(Number, optional) multiplier for input (\(\beta\))

+
alpha
+

(Number, optional) multiplier for \(\mbox{batch1} \mathbin{@} \mbox{batch2}\) (\(\alpha\))

+
+
+

baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=NULL) -> Tensor

@@ -237,74 +149,73 @@ same as the scaling factors used in torch_addbmm.

For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers; otherwise they should be integers.

+
-

Examples

-
if (torch_is_installed()) {
-
-M = torch_randn(c(10, 3, 5))
-batch1 = torch_randn(c(10, 3, 4))
-batch2 = torch_randn(c(10, 4, 5))
-torch_baddbmm(M, batch1, batch2)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   4.3464 -0.4051  0.4696  0.2110  4.1729
-#>   2.1066 -0.9985 -0.9629  0.0291  1.8801
-#>  -0.6028  1.8406 -1.3645  0.1841 -0.5609
-#> 
-#> (2,.,.) = 
-#>  -0.1065  0.8978  0.9137 -1.0981 -0.0932
-#>  -1.0557  2.0460  5.1924 -1.9568  2.8678
-#>  -0.4578 -2.8497 -3.1706  4.5577 -0.7404
-#> 
-#> (3,.,.) = 
-#>   0.3946  1.2566  0.6208 -2.8887  0.4048
-#>   2.1199 -0.7715  3.0887 -6.8372  2.4818
-#>  -0.4372 -2.3157  0.9876  0.0002  1.2094
-#> 
-#> (4,.,.) = 
-#>  -3.5533 -0.4875  0.8294  1.2279  3.3800
-#>   1.2462 -1.4685  0.6320 -0.2946 -0.6690
-#>   1.0329  0.3516 -1.8146  0.8297 -0.7119
-#> 
-#> (5,.,.) = 
-#>   3.4987  2.0561 -0.2480  2.0030  0.6470
-#>  -0.9070  0.4882  0.3976 -1.1844  1.4144
-#>  -0.4725 -1.9061  3.4186 -1.2536  0.6809
-#> 
-#> (6,.,.) = 
-#>  -4.2958  0.3544  2.3612 -1.9604  0.6662
-#>   1.2806  0.9611 -3.1324  1.1374  2.4481
-#>  -1.3846  0.2244  0.3634 -0.1219  0.0915
-#> 
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{10,3,5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+M = torch_randn(c(10, 3, 5))
+batch1 = torch_randn(c(10, 3, 4))
+batch2 = torch_randn(c(10, 4, 5))
+torch_baddbmm(M, batch1, batch2)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  -2.6456 -1.5109 -4.0038  0.5978  5.2351
+#>   0.4636 -1.8890  0.2969 -4.1503 -2.7782
+#>  -0.7897 -1.4635 -4.0233 -3.2587  0.0264
+#> 
+#> (2,.,.) = 
+#>  -1.5407 -1.4278 -2.1220  3.7224 -3.4227
+#>   0.2850  1.4041  0.9306 -2.1612 -1.9094
+#>  -1.4091 -2.5417  1.2896  0.4374 -1.6743
+#> 
+#> (3,.,.) = 
+#>   1.7047 -2.8495 -0.2730  0.1837  0.8958
+#>   3.2721  1.4411 -4.6668  1.2510 -4.6568
+#>  -4.2487 -0.6997  0.4481 -1.8493  1.1994
+#> 
+#> (4,.,.) = 
+#>   2.5333 -1.7727  0.5164 -2.3464 -2.1124
+#>  -0.2756 -0.0301  2.4311 -1.0183  5.7782
+#>  -1.3856  6.1734 -0.3122  3.7483  1.2592
+#> 
+#> (5,.,.) = 
+#>   0.0220  0.2616  0.0148 -1.1699 -3.8855
+#>  -2.0172  1.7288  0.2708 -0.3954  3.8760
+#>  -0.1094  0.1998  0.3465 -0.5783 -1.6827
+#> 
+#> (6,.,.) = 
+#>   0.3194  1.3878 -2.0560  1.8844  3.1713
+#>  -2.0322  3.2651  0.5243  1.8287 -1.3876
+#>   0.5429  1.0187 -0.9298  1.9311  0.0935
+#> 
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{10,3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_bartlett_window.html b/dev/reference/torch_bartlett_window.html index a8327b436f85f49334689d89a6845571e538fec7..ee14bc3c197b5d9e72c202daec2fbf46ab732cc2 100644 --- a/dev/reference/torch_bartlett_window.html +++ b/dev/reference/torch_bartlett_window.html @@ -1,79 +1,18 @@ - - - - - - - -Bartlett_window — torch_bartlett_window • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bartlett_window — torch_bartlett_window • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,41 @@

Bartlett_window

-
torch_bartlett_window(
-  window_length,
-  periodic = TRUE,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
window_length

(int) the size of returned window

periodic

(bool, optional) If TRUE, returns a window to be used as periodic function. If False, return a symmetric window.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

layout

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

Note

+
+
torch_bartlett_window(
+  window_length,
+  periodic = TRUE,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
window_length
+

(int) the size of returned window

+
periodic
+

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

+
layout
+

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

Note

-
If `window_length` \eqn{=1}, the returned window contains a single value 1.
-
- -

bartlett_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
If `window_length` \eqn{=1}, the returned window contains a single value 1.
+
+
+
+

bartlett_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -255,32 +166,29 @@ ready to be used as a periodic window with functions like above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have torch_bartlett_window(L, periodic=TRUE) equal to torch_bartlett_window(L + 1, periodic=False)[:-1]).

+
+
-
- +
- - + + diff --git a/dev/reference/torch_bernoulli.html b/dev/reference/torch_bernoulli.html index 728e4699c8474e22902c3b909900afcc967d3f1b..0efa9fa6c8d80922a65cbeded0c1bba0148acc8c 100644 --- a/dev/reference/torch_bernoulli.html +++ b/dev/reference/torch_bernoulli.html @@ -1,79 +1,18 @@ - - - - - - - -Bernoulli — torch_bernoulli • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bernoulli — torch_bernoulli • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,29 +111,23 @@

Bernoulli

-
torch_bernoulli(self, p, generator = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor of probability values for the Bernoulli -distribution

p

(Number) a probability value. If p is passed than it's used instead of -the values in self tensor.

generator

(torch.Generator, optional) a pseudorandom number generator for sampling

- -

bernoulli(input, *, generator=NULL, out=NULL) -> Tensor

+
+
torch_bernoulli(self, p, generator = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of probability values for the Bernoulli +distribution

+
p
+

(Number) a probability value. If p is passed then it is used instead of +the values in the self tensor.

+
generator
+

(torch.Generator, optional) a pseudorandom number generator for sampling

+
+
+

bernoulli(input, *, generator=NULL, out=NULL) -> Tensor

@@ -230,49 +146,48 @@ The returned out tensor only has values 0 or 1 and is of the same shape as input.

out can have integral dtype, but input must have floating point dtype.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_empty(c(3, 3))$uniform_(0, 1)  # generate a uniform random matrix with range c(0, 1)
-a
-torch_bernoulli(a)
-a = torch_ones(c(3, 3)) # probability of drawing "1" is 1
-torch_bernoulli(a)
-a = torch_zeros(c(3, 3)) # probability of drawing "1" is 0
-torch_bernoulli(a)
-}
-#> torch_tensor
-#>  0  0  0
-#>  0  0  0
-#>  0  0  0
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_empty(c(3, 3))$uniform_(0, 1)  # generate a uniform random matrix with range c(0, 1)
+a
+torch_bernoulli(a)
+a = torch_ones(c(3, 3)) # probability of drawing "1" is 1
+torch_bernoulli(a)
+a = torch_zeros(c(3, 3)) # probability of drawing "1" is 0
+torch_bernoulli(a)
+}
+#> torch_tensor
+#>  0  0  0
+#>  0  0  0
+#>  0  0  0
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_bincount.html b/dev/reference/torch_bincount.html index 84cde5638600469f7f1c850dd082c2633b3ac913..eb682130913f60d0f0369552c12129bc7352104b 100644 --- a/dev/reference/torch_bincount.html +++ b/dev/reference/torch_bincount.html @@ -1,79 +1,18 @@ - - - - - - - -Bincount — torch_bincount • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bincount — torch_bincount • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Bincount

-
torch_bincount(self, weights = list(), minlength = 0L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) 1-d int tensor

weights

(Tensor) optional, weight for each value in the input tensor. Should be of same size as input tensor.

minlength

(int) optional, minimum number of bins. Should be non-negative.

- -

bincount(input, weights=NULL, minlength=0) -> Tensor

+
+
torch_bincount(self, weights = list(), minlength = 0L)
+
+
+

Arguments

+
self
+

(Tensor) 1-d int tensor

+
weights
+

(Tensor) optional, weight for each value in the input tensor. Should be of the same size as the input tensor.

+
minlength
+

(int) optional, minimum number of bins. Should be non-negative.

+
+
+

bincount(input, weights=NULL, minlength=0) -> Tensor

@@ -222,50 +138,50 @@ tensor of size 0. If minlength is specified, the number of bins is out[n] += weights[i] if weights is specified else out[n] += 1.


+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_randint(0, 8, list(5), dtype=torch_int64())
-weights = torch_linspace(0, 1, steps=5)
-input
-weights
-torch_bincount(input, weights)
-input$bincount(weights)
-}
-#> torch_tensor
-#>  0.0000
-#>  0.2500
-#>  0.0000
-#>  1.2500
-#>  1.0000
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input = torch_randint(0, 8, list(5), dtype=torch_int64())
+weights = torch_linspace(0, 1, steps=5)
+input
+weights
+torch_bincount(input, weights)
+input$bincount(weights)
+}
+#> torch_tensor
+#>  0.0000
+#>  0.5000
+#>  0.0000
+#>  0.0000
+#>  0.2500
+#>  1.7500
+#> [ CPUFloatType{6} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_bitwise_and.html b/dev/reference/torch_bitwise_and.html index 04c52c36352f9e4d5e8a6653b3a21eb8305d98f4..b20114a38609b8bbff13e8a8885499a2842d52c1 100644 --- a/dev/reference/torch_bitwise_and.html +++ b/dev/reference/torch_bitwise_and.html @@ -1,79 +1,18 @@ - - - - - - - -Bitwise_and — torch_bitwise_and • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bitwise_and — torch_bitwise_and • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,54 +111,47 @@

Bitwise_and

-
torch_bitwise_and(self, other)
- -

Arguments

- - - - - - - - - - -
self

NA the first input tensor

other

NA the second input tensor

- -

bitwise_and(input, other, out=NULL) -> Tensor

+
+
torch_bitwise_and(self, other)
+
+
+

Arguments

+
self
+

NA the first input tensor

+
other
+

NA the second input tensor

+
+
+

bitwise_and(input, other, out=NULL) -> Tensor

Computes the bitwise AND of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical AND.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_bitwise_not.html b/dev/reference/torch_bitwise_not.html index 7185ae028f21ec3cc45c2d035951530480dc5fb6..85cea1672a944bb53b8fb600def94671071949cf 100644 --- a/dev/reference/torch_bitwise_not.html +++ b/dev/reference/torch_bitwise_not.html @@ -1,79 +1,18 @@ - - - - - - - -Bitwise_not — torch_bitwise_not • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bitwise_not — torch_bitwise_not • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,50 +111,45 @@

Bitwise_not

-
torch_bitwise_not(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

bitwise_not(input, out=NULL) -> Tensor

+
+
torch_bitwise_not(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

bitwise_not(input, out=NULL) -> Tensor

Computes the bitwise NOT of the given input tensor. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical NOT.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_bitwise_or.html b/dev/reference/torch_bitwise_or.html index e6573669042af5251aa08331c00a40740b046699..3383585c6d83d1c008bf658c24dd7d7b197347a1 100644 --- a/dev/reference/torch_bitwise_or.html +++ b/dev/reference/torch_bitwise_or.html @@ -1,79 +1,18 @@ - - - - - - - -Bitwise_or — torch_bitwise_or • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bitwise_or — torch_bitwise_or • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,54 +111,47 @@

Bitwise_or

-
torch_bitwise_or(self, other)
- -

Arguments

- - - - - - - - - - -
self

NA the first input tensor

other

NA the second input tensor

- -

bitwise_or(input, other, out=NULL) -> Tensor

+
+
torch_bitwise_or(self, other)
+
+
+

Arguments

+
self
+

NA the first input tensor

+
other
+

NA the second input tensor

+
+
+

bitwise_or(input, other, out=NULL) -> Tensor

Computes the bitwise OR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical OR.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_bitwise_xor.html b/dev/reference/torch_bitwise_xor.html index 86a986d801f70df15534a10ba1334ef2776fc2eb..f94fc7315a28e0f1db954edc7b515571b12dd44e 100644 --- a/dev/reference/torch_bitwise_xor.html +++ b/dev/reference/torch_bitwise_xor.html @@ -1,79 +1,18 @@ - - - - - - - -Bitwise_xor — torch_bitwise_xor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bitwise_xor — torch_bitwise_xor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,54 +111,47 @@

Bitwise_xor

-
torch_bitwise_xor(self, other)
- -

Arguments

- - - - - - - - - - -
self

NA the first input tensor

other

NA the second input tensor

- -

bitwise_xor(input, other, out=NULL) -> Tensor

+
+
torch_bitwise_xor(self, other)
+
+
+

Arguments

+
self
+

NA the first input tensor

+
other
+

NA the second input tensor

+
+
+

bitwise_xor(input, other, out=NULL) -> Tensor

Computes the bitwise XOR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical XOR.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_blackman_window.html b/dev/reference/torch_blackman_window.html index f6bdbada2d8dbabdebd657ef0697b752cb4cd650..0996104537d0f6e0c9f719329b3e3d78d46d4465 100644 --- a/dev/reference/torch_blackman_window.html +++ b/dev/reference/torch_blackman_window.html @@ -1,79 +1,18 @@ - - - - - - - -Blackman_window — torch_blackman_window • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Blackman_window — torch_blackman_window • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,41 @@

Blackman_window

-
torch_blackman_window(
-  window_length,
-  periodic = TRUE,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
window_length

(int) the size of returned window

periodic

(bool, optional) If TRUE, returns a window to be used as periodic function. If False, return a symmetric window.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

layout

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

Note

+
+
torch_blackman_window(
+  window_length,
+  periodic = TRUE,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
window_length
+

(int) the size of returned window

+
periodic
+

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

+
layout
+

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

Note

-
If `window_length` \eqn{=1}, the returned window contains a single value 1.
-
- -

blackman_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
If `window_length` \eqn{=1}, the returned window contains a single value 1.
+
+
+
+

blackman_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -251,32 +162,29 @@ ready to be used as a periodic window with functions like above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have torch_blackman_window(L, periodic=TRUE) equal to torch_blackman_window(L + 1, periodic=False)[:-1]).

+
+
-
- +
- - + + diff --git a/dev/reference/torch_block_diag.html b/dev/reference/torch_block_diag.html index ec0345d006c89b4dd02b35f073e17e2f938bbb09..f671744ee70a271f4a038e77e3e4a7c08843cd3e 100644 --- a/dev/reference/torch_block_diag.html +++ b/dev/reference/torch_block_diag.html @@ -1,79 +1,18 @@ - - - - - - - -Block_diag — torch_block_diag • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Block_diag — torch_block_diag • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,66 +111,62 @@

Create a block diagonal matrix from provided tensors.

-
torch_block_diag(tensors)
- -

Arguments

- - - - - - -
tensors

(list of tensors) One or more tensors with 0, 1, or 2 -dimensions.

- - -

Examples

-
if (torch_is_installed()) {
-
-A <- torch_tensor(rbind(c(0, 1), c(1, 0)))
-B <- torch_tensor(rbind(c(3, 4, 5), c(6, 7, 8)))
-C <- torch_tensor(7)
-D <- torch_tensor(c(1, 2, 3))
-E <- torch_tensor(rbind(4, 5, 6))
-torch_block_diag(list(A, B, C, D, E))
-}
-#> torch_tensor
-#>  0  1  0  0  0  0  0  0  0  0
-#>  1  0  0  0  0  0  0  0  0  0
-#>  0  0  3  4  5  0  0  0  0  0
-#>  0  0  6  7  8  0  0  0  0  0
-#>  0  0  0  0  0  7  0  0  0  0
-#>  0  0  0  0  0  0  1  2  3  0
-#>  0  0  0  0  0  0  0  0  0  4
-#>  0  0  0  0  0  0  0  0  0  5
-#>  0  0  0  0  0  0  0  0  0  6
-#> [ CPUFloatType{9,10} ]
-
+
+
torch_block_diag(tensors)
+
+ +
+

Arguments

+
tensors
+

(list of tensors) One or more tensors with 0, 1, or 2 +dimensions.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+A <- torch_tensor(rbind(c(0, 1), c(1, 0)))
+B <- torch_tensor(rbind(c(3, 4, 5), c(6, 7, 8)))
+C <- torch_tensor(7)
+D <- torch_tensor(c(1, 2, 3))
+E <- torch_tensor(rbind(4, 5, 6))
+torch_block_diag(list(A, B, C, D, E))
+}
+#> torch_tensor
+#>  0  1  0  0  0  0  0  0  0  0
+#>  1  0  0  0  0  0  0  0  0  0
+#>  0  0  3  4  5  0  0  0  0  0
+#>  0  0  6  7  8  0  0  0  0  0
+#>  0  0  0  0  0  7  0  0  0  0
+#>  0  0  0  0  0  0  1  2  3  0
+#>  0  0  0  0  0  0  0  0  0  4
+#>  0  0  0  0  0  0  0  0  0  5
+#>  0  0  0  0  0  0  0  0  0  6
+#> [ CPUFloatType{9,10} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_bmm.html b/dev/reference/torch_bmm.html index c3ef5606f8befc0d6eb7d68891c01aaa91bbd94d..d5170653c21a552c8151e833b1887a8b03915a20 100644 --- a/dev/reference/torch_bmm.html +++ b/dev/reference/torch_bmm.html @@ -1,79 +1,18 @@ - - - - - - - -Bmm — torch_bmm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bmm — torch_bmm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_bmm(self, mat2)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first batch of matrices to be multiplied

mat2

(Tensor) the second batch of matrices to be multiplied

- -

Note

+
+
torch_bmm(self, mat2)
+
+
+

Arguments

+
self
+

(Tensor) the first batch of matrices to be multiplied

+
mat2
+

(Tensor) the second batch of matrices to be multiplied

+
+
+

Note

This function does not broadcast. For broadcasting matrix products, see torch_matmul.

-

bmm(input, mat2, out=NULL) -> Tensor

- +For broadcasting matrix products, see torch_matmul.

+
+
+

bmm(input, mat2, out=NULL) -> Tensor

@@ -223,74 +142,73 @@ the same number of matrices.

$$ \mbox{out}_i = \mbox{input}_i \mathbin{@} \mbox{mat2}_i $$

+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_randn(c(10, 3, 4))
-mat2 = torch_randn(c(10, 4, 5))
-res = torch_bmm(input, mat2)
-res
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   4.7101  1.2719  1.2553 -1.8853 -1.0864
-#>   2.4834  0.9094 -1.2402  1.1361 -1.1803
-#>   3.1523  1.0687 -0.0883 -1.2969 -2.8504
-#> 
-#> (2,.,.) = 
-#>  -0.8574  2.7700 -2.4213 -0.7642 -0.5478
-#>   0.5348 -1.7877 -0.2510  0.6889  1.0793
-#>  -0.8720 -2.0425  0.3504  0.2370 -1.0926
-#> 
-#> (3,.,.) = 
-#>  -1.0773 -2.7929 -1.8812 -1.1311 -0.3457
-#>  -4.0041 -1.8878  0.1238 -0.1811  2.0995
-#>   3.1337  0.3895 -0.4637  1.8006 -4.2420
-#> 
-#> (4,.,.) = 
-#>   2.8617  2.8683  4.6492  3.4233  2.7582
-#>   2.5538  1.2732  4.5067 -0.0615 -0.6925
-#>   1.3528  0.8462  0.9483 -2.7502 -2.6945
-#> 
-#> (5,.,.) = 
-#>  -0.5374  0.9753 -1.5163  1.1165  4.5170
-#>   0.3424  0.0067  1.0019  1.2782 -1.2151
-#>   0.6167 -1.8804  0.9421 -2.2355 -5.0822
-#> 
-#> (6,.,.) = 
-#>   1.6531  0.7906  0.3333  1.3987  0.2642
-#>   3.2508  2.9889  2.0874 -1.7649  1.2378
-#>  -4.3740 -2.3497 -1.6982 -1.0726 -2.4815
-#> 
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{10,3,5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input = torch_randn(c(10, 3, 4))
+mat2 = torch_randn(c(10, 4, 5))
+res = torch_bmm(input, mat2)
+res
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  -1.1804  2.4008  1.2972 -1.2609  0.2916
+#>  -1.2435  1.7476 -0.0375  0.5632  0.8524
+#>  -0.2906  2.7009  1.5216  2.0198  0.8757
+#> 
+#> (2,.,.) = 
+#>   0.6727  0.5568 -3.1894 -4.0757 -1.5135
+#>   0.1410 -0.6714 -1.0290 -0.5648  0.8848
+#>  -0.1977  2.5337  1.3466 -2.0314 -3.1292
+#> 
+#> (3,.,.) = 
+#>  -1.8371 -0.0734  1.1069  1.8448  0.6464
+#>   3.1013  4.1337  0.6627 -0.7050 -1.3699
+#>   3.6031 -0.6864 -1.5408 -0.8107 -2.9430
+#> 
+#> (4,.,.) = 
+#>   1.1189 -2.2202  0.7229 -0.8875  2.6842
+#>  -1.2592  2.6973 -1.1588  0.7857 -3.7787
+#>   3.8589  5.3162  3.1421 -1.5760 -2.9945
+#> 
+#> (5,.,.) = 
+#>   0.7197  2.7396 -4.3375 -0.6431  6.7907
+#>  -1.1321 -1.2566  1.9878 -0.9290 -1.4026
+#>   2.0956  0.8502 -3.1667  1.3219  4.3534
+#> 
+#> (6,.,.) = 
+#>   0.9372 -1.8218  1.2632 -1.2736 -0.8857
+#>  -0.7833  0.2500  1.1682 -1.2441  0.1772
+#>   0.5542  1.9554 -1.3445  2.8310  3.1771
+#> 
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{10,3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_broadcast_tensors.html b/dev/reference/torch_broadcast_tensors.html index 8b63cd73e6e91ba4e42883f1e4a2b12f1588c1df..66329a362447bd92cf578c686e94c8c4f56644bf 100644 --- a/dev/reference/torch_broadcast_tensors.html +++ b/dev/reference/torch_broadcast_tensors.html @@ -1,79 +1,18 @@ - - - - - - - -Broadcast_tensors — torch_broadcast_tensors • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Broadcast_tensors — torch_broadcast_tensors • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,63 +111,60 @@

Broadcast_tensors

-
torch_broadcast_tensors(tensors)
- -

Arguments

- - - - - - -
tensors

a list containing any number of tensors of the same type

- -

broadcast_tensors(tensors) -> List of Tensors

+
+
torch_broadcast_tensors(tensors)
+
+
+

Arguments

+
tensors
+

a list containing any number of tensors of the same type

+
+
+

broadcast_tensors(tensors) -> List of Tensors

Broadcasts the given tensors according to broadcasting-semantics.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_arange(0, 3)$view(c(1, 4))
-y = torch_arange(0, 2)$view(c(3, 1))
-out = torch_broadcast_tensors(list(x, y))
-out[[1]]
-}
-#> torch_tensor
-#>  0  1  2  3
-#>  0  1  2  3
-#>  0  1  2  3
-#> [ CPUFloatType{3,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_arange(0, 3)$view(c(1, 4))
+y = torch_arange(0, 2)$view(c(3, 1))
+out = torch_broadcast_tensors(list(x, y))
+out[[1]]
+}
+#> torch_tensor
+#>  0  1  2  3
+#>  0  1  2  3
+#>  0  1  2  3
+#> [ CPUFloatType{3,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_bucketize.html b/dev/reference/torch_bucketize.html index 54a83452e8c27d641498e80e42e497513267cd17..653e41e8e4209c2695459d507413a38b41040f68 100644 --- a/dev/reference/torch_bucketize.html +++ b/dev/reference/torch_bucketize.html @@ -1,79 +1,18 @@ - - - - - - - -Bucketize — torch_bucketize • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Bucketize — torch_bucketize • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,85 +111,76 @@

Bucketize

-
torch_bucketize(self, boundaries, out_int32 = FALSE, right = FALSE)
+
+
torch_bucketize(self, boundaries, out_int32 = FALSE, right = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor or Scalar) N-D tensor or a Scalar containing the search value(s).

boundaries

(Tensor) 1-D tensor, must contain a monotonically increasing sequence.

out_int32

(bool, optional) – indicate the output data type. torch_int32() -if True, torch_int64() otherwise. Default value is FALSE, i.e. default output -data type is torch_int64().

right

(bool, optional) – if False, return the first suitable location +

+

Arguments

+
self
+

(Tensor or Scalar) N-D tensor or a Scalar containing the search value(s).

+
boundaries
+

(Tensor) 1-D tensor, must contain a monotonically increasing sequence.

+
out_int32
+

(bool, optional) – indicates the output data type: torch_int32() if TRUE, torch_int64() otherwise. Default value is FALSE, i.e. the default output data type is torch_int64().

+
right
+

(bool, optional) – if FALSE, return the first suitable location that is found. If TRUE, return the last such index. If no suitable index is found, return 0 for non-numerical values (e.g. nan, inf) or the size of boundaries (one past the last index). In other words, if FALSE, gets the lower bound index for each value in input from boundaries. If TRUE, gets the upper bound index instead. Default value is FALSE.

- -

bucketize(input, boundaries, *, out_int32=FALSE, right=FALSE, out=None) -> Tensor

- +instead. Default value is False.

+
+
+

bucketize(input, boundaries, *, out_int32=FALSE, right=FALSE, out=None) -> Tensor

Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries. Return a new tensor with the same size as input. If right is FALSE (default), then the left boundary is closed.

+
-

Examples

-
if (torch_is_installed()) {
-
-boundaries <- torch_tensor(c(1, 3, 5, 7, 9))
-boundaries
-v <- torch_tensor(rbind(c(3, 6, 9), c(3, 6, 9)))
-v
-torch_bucketize(v, boundaries)
-torch_bucketize(v, boundaries, right=TRUE)
-}
-#> torch_tensor
-#>  2  3  5
-#>  2  3  5
-#> [ CPULongType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+boundaries <- torch_tensor(c(1, 3, 5, 7, 9))
+boundaries
+v <- torch_tensor(rbind(c(3, 6, 9), c(3, 6, 9)))
+v
+torch_bucketize(v, boundaries)
+torch_bucketize(v, boundaries, right=TRUE)
+}
+#> torch_tensor
+#>  2  3  5
+#>  2  3  5
+#> [ CPULongType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_can_cast.html b/dev/reference/torch_can_cast.html index 10cd9721d588ad7baae0557b3a23be643c46385c..7a76b20b202069db33ac3aeabb6c4c0098cf2291 100644 --- a/dev/reference/torch_can_cast.html +++ b/dev/reference/torch_can_cast.html @@ -1,79 +1,18 @@ - - - - - - - -Can_cast — torch_can_cast • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Can_cast — torch_can_cast • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,62 +111,57 @@

Can_cast

-
torch_can_cast(from, to)
- -

Arguments

- - - - - - - - - - -
from

(dtype) The original torch_dtype.

to

(dtype) The target torch_dtype.

- -

can_cast(from, to) -> bool

+
+
torch_can_cast(from, to)
+
+
+

Arguments

+
from
+

(dtype) The original torch_dtype.

+
to
+

(dtype) The target torch_dtype.

+
+
+

can_cast(from, to) -> bool

Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation .

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_can_cast(torch_double(), torch_float())
-torch_can_cast(torch_float(), torch_int())
-}
-#> [1] FALSE
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_can_cast(torch_double(), torch_float())
+torch_can_cast(torch_float(), torch_int())
+}
+#> [1] FALSE
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cartesian_prod.html b/dev/reference/torch_cartesian_prod.html index 52283da588d5cff83f0c62a697a5392ef6ab546f..3ac605005b59aebcabd0b5131b6c40593983bb0f 100644 --- a/dev/reference/torch_cartesian_prod.html +++ b/dev/reference/torch_cartesian_prod.html @@ -1,79 +1,18 @@ - - - - - - - -Cartesian_prod — torch_cartesian_prod • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cartesian_prod — torch_cartesian_prod • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,61 +111,57 @@

Compute the Cartesian product of the given sequence of tensors.

-
torch_cartesian_prod(tensors)
- -

Arguments

- - - - - - -
tensors

a list containing any number of 1 dimensional tensors.

- - -

Examples

-
if (torch_is_installed()) {
-
-a = c(1, 2, 3)
-b = c(4, 5)
-tensor_a = torch_tensor(a)
-tensor_b = torch_tensor(b)
-torch_cartesian_prod(list(tensor_a, tensor_b))
-}
-#> torch_tensor
-#>  1  4
-#>  1  5
-#>  2  4
-#>  2  5
-#>  3  4
-#>  3  5
-#> [ CPUFloatType{6,2} ]
-
+
+
torch_cartesian_prod(tensors)
+
+ +
+

Arguments

+
tensors
+

a list containing any number of 1-dimensional tensors.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+a = c(1, 2, 3)
+b = c(4, 5)
+tensor_a = torch_tensor(a)
+tensor_b = torch_tensor(b)
+torch_cartesian_prod(list(tensor_a, tensor_b))
+}
+#> torch_tensor
+#>  1  4
+#>  1  5
+#>  2  4
+#>  2  5
+#>  3  4
+#>  3  5
+#> [ CPUFloatType{6,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cat.html b/dev/reference/torch_cat.html index 91dabdbea13baf833b78f1af79e9af8653ede0d2..e9d03a70130690cd01598969c9886cb2655af0bc 100644 --- a/dev/reference/torch_cat.html +++ b/dev/reference/torch_cat.html @@ -1,79 +1,18 @@ - - - - - - - -Cat — torch_cat • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cat — torch_cat • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cat(tensors, dim = 1L)
- -

Arguments

- - - - - - - - - - -
tensors

(sequence of Tensors) any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

dim

(int, optional) the dimension over which the tensors are concatenated

- -

cat(tensors, dim=0, out=NULL) -> Tensor

+
+
torch_cat(tensors, dim = 1L)
+
+
+

Arguments

+
tensors
+

(sequence of Tensors) a list of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

+
dim
+

(int, optional) the dimension over which the tensors are concatenated

+
+
+

cat(tensors, dim=0, out=NULL) -> Tensor

Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

-

torch_cat can be seen as an inverse operation for torch_split() -and torch_chunk.

+

torch_cat can be seen as an inverse operation for torch_split() +and torch_chunk.

torch_cat can be best understood via examples.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(2, 3))
-x
-torch_cat(list(x, x, x), 1)
-torch_cat(list(x, x, x), 2)
-}
-#> torch_tensor
-#>  0.8514 -1.2676 -0.3305  0.8514 -1.2676 -0.3305  0.8514 -1.2676 -0.3305
-#> -1.1066  3.2175 -1.0893 -1.1066  3.2175 -1.0893 -1.1066  3.2175 -1.0893
-#> [ CPUFloatType{2,9} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(2, 3))
+x
+torch_cat(list(x, x, x), 1)
+torch_cat(list(x, x, x), 2)
+}
+#> torch_tensor
+#> -0.7550  1.0633  0.2062 -0.7550  1.0633  0.2062 -0.7550  1.0633  0.2062
+#>  0.6587 -1.1408  1.6991  0.6587 -1.1408  1.6991  0.6587 -1.1408  1.6991
+#> [ CPUFloatType{2,9} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cdist.html b/dev/reference/torch_cdist.html index 301b1595888c408c2bd903647d44d7c4530de1f4..78fa271cb899c2016323c10e300f62abeb754485 100644 --- a/dev/reference/torch_cdist.html +++ b/dev/reference/torch_cdist.html @@ -1,79 +1,18 @@ - - - - - - - -Cdist — torch_cdist • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cdist — torch_cdist • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cdist(x1, x2, p = 2L, compute_mode = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
x1

(Tensor) input tensor of shape \(B \times P \times M\).

x2

(Tensor) input tensor of shape \(B \times R \times M\).

p

NA p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

compute_mode

NA 'use_mm_for_euclid_dist_if_necessary' - will use matrix multiplication approach to calculate euclidean distance (p = 2) if P > 25 or R > 25 'use_mm_for_euclid_dist' - will always use matrix multiplication approach to calculate euclidean distance (p = 2) 'donot_use_mm_for_euclid_dist' - will never use matrix multiplication approach to calculate euclidean distance (p = 2) Default: use_mm_for_euclid_dist_if_necessary.

- -

TEST

+
+
torch_cdist(x1, x2, p = 2L, compute_mode = NULL)
+
+
+

Arguments

+
x1
+

(Tensor) input tensor of shape \(B \times P \times M\).

+
x2
+

(Tensor) input tensor of shape \(B \times R \times M\).

+
p
+

p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

+
compute_mode
+

'use_mm_for_euclid_dist_if_necessary' - will use the matrix multiplication approach to calculate Euclidean distance (p = 2) if P > 25 or R > 25; 'use_mm_for_euclid_dist' - will always use the matrix multiplication approach; 'donot_use_mm_for_euclid_dist' - will never use the matrix multiplication approach. Default: 'use_mm_for_euclid_dist_if_necessary'.

+
+
+

TEST

Computes, in batched fashion, the p-norm distance between each pair of row vectors from the two collections.

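For illustration, an editor-added sketch (not from the original page, assuming torch is installed) of Euclidean distances between two small collections of row vectors:

```r
# Sketch: pairwise Euclidean (p = 2) distances between row vectors.
if (torch_is_installed()) {
  x1 <- torch_tensor(rbind(c(0, 0), c(1, 1)))  # two points in R^2
  x2 <- torch_tensor(rbind(c(0, 1), c(3, 4)))  # two reference points
  torch_cdist(x1, x2, p = 2)
  # entry (i, j) holds the distance from x1[i, ] to x2[j, ], e.g.
  # the distance from c(1, 1) to c(3, 4) is sqrt(4 + 9) = sqrt(13)
}
```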
+
+
-
- +
- - + + diff --git a/dev/reference/torch_ceil.html b/dev/reference/torch_ceil.html index 13675e44c80246be843bc3ed08c281529e4b845f..0fe239bd28f70367747cacb270b8c3910b20a36e 100644 --- a/dev/reference/torch_ceil.html +++ b/dev/reference/torch_ceil.html @@ -1,79 +1,18 @@ - - - - - - - -Ceil — torch_ceil • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ceil — torch_ceil • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ceil(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

ceil(input, out=NULL) -> Tensor

+
+
torch_ceil(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

ceil(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@ the smallest integer greater than or equal to each element.

$$ \mbox{out}_{i} = \left\lceil \mbox{input}_{i} \right\rceil = \left\lfloor \mbox{input}_{i} \right\rfloor + 1 $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_ceil(a)
-}
-#> torch_tensor
-#> -0
-#>  1
-#> -0
-#>  1
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_ceil(a)
+}
+#> torch_tensor
+#>  2
+#>  1
+#> -1
+#>  1
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_celu.html b/dev/reference/torch_celu.html index f2084c5ba39f5923bba5dda3e92079aba3af47f4..58f1deb5591a0a0bd9c359f08292ad1ada61177e 100644 --- a/dev/reference/torch_celu.html +++ b/dev/reference/torch_celu.html @@ -1,79 +1,18 @@ - - - - - - - -Celu — torch_celu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Celu — torch_celu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Celu

-
torch_celu(self, alpha = 1)
- -

Arguments

- - - - - - - - - - -
self

the input tensor

alpha

the alpha value for the CELU formulation. Default: 1.0

- -

celu(input, alpha=1.) -> Tensor

+
+
torch_celu(self, alpha = 1)
+
+
+

Arguments

+
self
+

the input tensor

+
alpha
+

the alpha value for the CELU formulation. Default: 1.0

+
+
+

celu(input, alpha=1.) -> Tensor

-

See nnf_celu() for more info.

+

See nnf_celu() for more info.

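As a small editor-added illustration (assuming torch is available): CELU leaves non-negative inputs unchanged and maps negative inputs to \(\alpha (\exp(x / \alpha) - 1)\), saturating smoothly toward \(-\alpha\).

```r
# Sketch: CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
if (torch_is_installed()) {
  x <- torch_tensor(c(-2, -1, 0, 1))
  torch_celu(x, alpha = 1)
  # negative entries saturate smoothly toward -alpha:
  # celu(-2) = exp(-2) - 1, about -0.865
}
```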
+
+
-
- +
- - + + diff --git a/dev/reference/torch_celu_.html b/dev/reference/torch_celu_.html index 68c23430ecf9b39113d7f8ef47969fc8fd6486f5..a970c14bbfc1986b1e4794279e42b3fa6639c5a6 100644 --- a/dev/reference/torch_celu_.html +++ b/dev/reference/torch_celu_.html @@ -1,79 +1,18 @@ - - - - - - - -Celu_ — torch_celu_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Celu_ — torch_celu_ • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_celu_(self, alpha = 1)
- -

Arguments

- - - - - - - - - - -
self

the input tensor

alpha

the alpha value for the CELU formulation. Default: 1.0

- -

celu_(input, alpha=1.) -> Tensor

+
+
torch_celu_(self, alpha = 1)
+
+
+

Arguments

+
self
+

the input tensor

+
alpha
+

the alpha value for the CELU formulation. Default: 1.0

+
+
+

celu_(input, alpha=1.) -> Tensor

-

In-place version of torch_celu().

+

In-place version of torch_celu().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_chain_matmul.html b/dev/reference/torch_chain_matmul.html index 67b0acfd19866522278da4611baebfbf253746c8..4101420f43814128e1ea6921e3324e9009b32894 100644 --- a/dev/reference/torch_chain_matmul.html +++ b/dev/reference/torch_chain_matmul.html @@ -1,79 +1,18 @@ - - - - - - - -Chain_matmul — torch_chain_matmul • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Chain_matmul — torch_chain_matmul • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Chain_matmul

-
torch_chain_matmul(matrices)
- -

Arguments

- - - - - - -
matrices

(Tensors...) a sequence of 2 or more 2-D tensors whose product is to be determined.

- -

TEST

+
+
torch_chain_matmul(matrices)
+
+
+

Arguments

+
matrices
+

(Tensors...) a sequence of 2 or more 2-D tensors whose product is to be determined.

+
+
+

TEST

@@ -210,47 +130,46 @@ using the matrix chain order algorithm which selects the order that incurs the lowest cost in terms of arithmetic operations ([CLRS]). Note that since this is a function to compute the product, \(N\) needs to be greater than or equal to 2; if equal to 2 then a trivial matrix-matrix product is returned. If \(N\) is 1, then this is a no-op - the original matrix is returned as is.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 4))
-b = torch_randn(c(4, 5))
-c = torch_randn(c(5, 6))
-d = torch_randn(c(6, 7))
-torch_chain_matmul(list(a, b, c, d))
-}
-#> torch_tensor
-#> -0.3777 -1.4092  1.5154  2.9518  7.8162 -2.8613 -5.8090
-#>  0.1198  1.7039  0.5408 -2.5631 -0.6127  7.7041  0.5558
-#>  0.2350  1.0389 -0.9496 -1.8136 -1.8948  1.8337  1.1657
-#> [ CPUFloatType{3,7} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 4))
+b = torch_randn(c(4, 5))
+c = torch_randn(c(5, 6))
+d = torch_randn(c(6, 7))
+torch_chain_matmul(list(a, b, c, d))
+}
+#> torch_tensor
+#>   3.8934  10.6934  -7.2619   2.5499  -5.0013  -2.2418  -7.6920
+#>   0.3776  21.6973  16.8113   0.2529  -2.8479  22.2058 -20.9102
+#>  -3.9659   7.1272  16.7742  -6.1935  14.3516   4.0720   1.1775
+#> [ CPUFloatType{3,7} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_channel_shuffle.html b/dev/reference/torch_channel_shuffle.html index 441c7ba3ef7e8b2493aad04d7f31ae731e60cbc7..2a75ba9e7abd0df97a0d159fd8902d0b26420b4d 100644 --- a/dev/reference/torch_channel_shuffle.html +++ b/dev/reference/torch_channel_shuffle.html @@ -1,79 +1,18 @@ - - - - - - - -Channel_shuffle — torch_channel_shuffle • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Channel_shuffle — torch_channel_shuffle • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,96 +111,91 @@

Channel_shuffle

-
torch_channel_shuffle(self, groups)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor

groups

(int) number of groups to divide channels in and rearrange.

- -

Divide the channels in a tensor of shape

+
+
torch_channel_shuffle(self, groups)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor

+
groups
+

(int) number of groups to divide channels in and rearrange.

+
+
+

Divide the channels in a tensor of shape

\((*, C, H, W)\):

Divide the channels in a tensor of shape \((*, C, H, W)\) into \(g\) groups and rearrange them as \((*, \frac{C}{g}, g, H, W)\), while keeping the original tensor shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-input <- torch_randn(c(1, 4, 2, 2))
-print(input)
-output <- torch_channel_shuffle(input, 2)
-print(output)
-}
-#> torch_tensor
-#> (1,1,.,.) = 
-#>   0.0759  0.7033
-#>   1.7052 -0.9880
-#> 
-#> (1,2,.,.) = 
-#>  -1.1317  0.2770
-#>   0.0039  0.0861
-#> 
-#> (1,3,.,.) = 
-#>  -0.1417 -0.8457
-#>  -1.4147 -1.0284
-#> 
-#> (1,4,.,.) = 
-#>  -0.3633 -0.0816
-#>  -2.5159 -1.0359
-#> [ CPUFloatType{1,4,2,2} ]
-#> torch_tensor
-#> (1,1,.,.) = 
-#>   0.0759  0.7033
-#>   1.7052 -0.9880
-#> 
-#> (1,2,.,.) = 
-#>  -0.1417 -0.8457
-#>  -1.4147 -1.0284
-#> 
-#> (1,3,.,.) = 
-#>  -1.1317  0.2770
-#>   0.0039  0.0861
-#> 
-#> (1,4,.,.) = 
-#>  -0.3633 -0.0816
-#>  -2.5159 -1.0359
-#> [ CPUFloatType{1,4,2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input <- torch_randn(c(1, 4, 2, 2))
+print(input)
+output <- torch_channel_shuffle(input, 2)
+print(output)
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>  -0.6117  2.5418
+#>  -0.0918  1.6715
+#> 
+#> (1,2,.,.) = 
+#>  -0.9030  0.4923
+#>  -0.1221  0.5425
+#> 
+#> (1,3,.,.) = 
+#>  -1.0168  1.2916
+#>   1.4654 -0.1773
+#> 
+#> (1,4,.,.) = 
+#>  -1.0663 -0.2279
+#>   0.1211 -0.3612
+#> [ CPUFloatType{1,4,2,2} ]
+#> torch_tensor
+#> (1,1,.,.) = 
+#>  -0.6117  2.5418
+#>  -0.0918  1.6715
+#> 
+#> (1,2,.,.) = 
+#>  -1.0168  1.2916
+#>   1.4654 -0.1773
+#> 
+#> (1,3,.,.) = 
+#>  -0.9030  0.4923
+#>  -0.1221  0.5425
+#> 
+#> (1,4,.,.) = 
+#>  -1.0663 -0.2279
+#>   0.1211 -0.3612
+#> [ CPUFloatType{1,4,2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cholesky.html b/dev/reference/torch_cholesky.html index 46a2f1e53408448604b93f3df4f52d96cdb2a4b9..bfad62cdd924b818bcdc3f6f9b303241133418d4 100644 --- a/dev/reference/torch_cholesky.html +++ b/dev/reference/torch_cholesky.html @@ -1,79 +1,18 @@ - - - - - - - -Cholesky — torch_cholesky • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cholesky — torch_cholesky • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,25 +111,21 @@

Cholesky

-
torch_cholesky(self, upper = FALSE)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor \(A\) of size \((*, n, n)\) where * is zero or more -batch dimensions consisting of symmetric positive-definite matrices.

upper

(bool, optional) flag that indicates whether to return a -upper or lower triangular matrix. Default: FALSE

- -

cholesky(input, upper=False, out=NULL) -> Tensor

+
+
torch_cholesky(self, upper = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor \(A\) of size \((*, n, n)\) where * is zero or more +batch dimensions consisting of symmetric positive-definite matrices.

+
upper
+

(bool, optional) flag that indicates whether to return an upper or lower triangular matrix. Default: FALSE

+
+
+

cholesky(input, upper=False, out=NULL) -> Tensor

@@ -228,50 +146,49 @@ matrices, then the returned tensor will be composed of upper-triangular Cholesky of each of the individual matrices. Similarly, when upper is FALSE, the returned tensor will be composed of lower-triangular Cholesky factors of each of the individual matrices.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-a = torch_mm(a, a$t()) # make symmetric positive-definite
-l = torch_cholesky(a)
-a
-l
-torch_mm(l, l$t())
-a = torch_randn(c(3, 2, 2))
-if (FALSE) {
-a = torch_matmul(a, a$transpose(-1, -2)) + 1e-03 # make symmetric positive-definite
-l = torch_cholesky(a)
-z = torch_matmul(l, l$transpose(-1, -2))
-torch_max(torch_abs(z - a)) # Max non-zero
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+a = torch_mm(a, a$t()) # make symmetric positive-definite
+l = torch_cholesky(a)
+a
+l
+torch_mm(l, l$t())
+a = torch_randn(c(3, 2, 2))
+if (FALSE) {
+a = torch_matmul(a, a$transpose(-1, -2)) + 1e-03 # make symmetric positive-definite
+l = torch_cholesky(a)
+z = torch_matmul(l, l$transpose(-1, -2))
+torch_max(torch_abs(z - a)) # Max non-zero
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cholesky_inverse.html b/dev/reference/torch_cholesky_inverse.html index acc869d58ac9e7b47068ff78e43c75e1456cbdf3..b41741a735ef59cbce43430ba7a28b658299011f 100644 --- a/dev/reference/torch_cholesky_inverse.html +++ b/dev/reference/torch_cholesky_inverse.html @@ -1,79 +1,18 @@ - - - - - - - -Cholesky_inverse — torch_cholesky_inverse • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cholesky_inverse — torch_cholesky_inverse • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Cholesky_inverse

-
torch_cholesky_inverse(self, upper = FALSE)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input 2-D tensor \(u\), a upper or lower triangular Cholesky factor

upper

(bool, optional) whether to return a lower (default) or upper triangular matrix

- -

cholesky_inverse(input, upper=False, out=NULL) -> Tensor

+
+
torch_cholesky_inverse(self, upper = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input 2-D tensor \(u\), an upper or lower triangular Cholesky factor

+
upper
+

(bool, optional) whether to return a lower (default) or upper triangular matrix

+
+
+

cholesky_inverse(input, upper=False, out=NULL) -> Tensor

@@ -222,45 +140,44 @@ triangular such that the returned tensor is

$$ inv = (u^T u)^{-1} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-if (FALSE) {
-a = torch_randn(c(3, 3))
-a = torch_mm(a, a$t()) + 1e-05 * torch_eye(3) # make symmetric positive definite
-u = torch_cholesky(a)
-a
-torch_cholesky_inverse(u)
-a$inverse()
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (FALSE) {
+a = torch_randn(c(3, 3))
+a = torch_mm(a, a$t()) + 1e-05 * torch_eye(3) # make symmetric positive definite
+u = torch_cholesky(a)
+a
+torch_cholesky_inverse(u)
+a$inverse()
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cholesky_solve.html b/dev/reference/torch_cholesky_solve.html index b30e60f5a715aad52065b53f232e4d5e4a91b673..122ca7afe3fd535029a93672cf3f64df3e75a151 100644 --- a/dev/reference/torch_cholesky_solve.html +++ b/dev/reference/torch_cholesky_solve.html @@ -1,79 +1,18 @@ - - - - - - - -Cholesky_solve — torch_cholesky_solve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cholesky_solve — torch_cholesky_solve • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Cholesky_solve

-
torch_cholesky_solve(self, input2, upper = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) input matrix \(b\) of size \((*, m, k)\), where \(*\) is zero or more batch dimensions

input2

(Tensor) input matrix \(u\) of size \((*, m, m)\), where \(*\) is zero of more batch dimensions composed of upper or lower triangular Cholesky factor

upper

(bool, optional) whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: FALSE.

- -

cholesky_solve(input, input2, upper=False, out=NULL) -> Tensor

+
+
torch_cholesky_solve(self, input2, upper = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) input matrix \(b\) of size \((*, m, k)\), where \(*\) is zero or more batch dimensions

+
input2
+

(Tensor) input matrix \(u\) of size \((*, m, m)\), where \(*\) is zero or more batch dimensions composed of upper or lower triangular Cholesky factors

+
upper
+

(bool, optional) whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: FALSE.

+
+
+

cholesky_solve(input, input2, upper=False, out=NULL) -> Tensor

@@ -228,50 +144,49 @@ $$ torch_cholesky_solve(b, u) can take in 2D inputs b, u or inputs that are batches of 2D matrices. If the inputs are batches, then returns batched outputs c

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-a = torch_mm(a, a$t()) # make symmetric positive definite
-u = torch_cholesky(a)
-a
-b = torch_randn(c(3, 2))
-b
-torch_cholesky_solve(b, u)
-torch_mm(a$inverse(), b)
-}
-#> torch_tensor
-#>  0.6713 -0.0665
-#> -36.2960  3.6688
-#>  6.5144 -1.1723
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+a = torch_mm(a, a$t()) # make symmetric positive definite
+u = torch_cholesky(a)
+a
+b = torch_randn(c(3, 2))
+b
+torch_cholesky_solve(b, u)
+torch_mm(a$inverse(), b)
+}
+#> torch_tensor
+#> -27.1156 -21.7349
+#> -30.0626 -23.2445
+#>  35.0691  26.9374
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_chunk.html b/dev/reference/torch_chunk.html index d7e80d2dff4981c4b18ec4334eb1d254238c2e00..7434ba2c75c65e1bc8c1438090d0b02cf21d5089 100644 --- a/dev/reference/torch_chunk.html +++ b/dev/reference/torch_chunk.html @@ -1,79 +1,18 @@ - - - - - - - -Chunk — torch_chunk • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Chunk — torch_chunk • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_chunk(self, chunks, dim = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the tensor to split

chunks

(int) number of chunks to return

dim

(int) dimension along which to split the tensor

- -

chunk(input, chunks, dim=0) -> List of Tensors

+
+
torch_chunk(self, chunks, dim = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to split

+
chunks
+

(int) number of chunks to return

+
dim
+

(int) dimension along which to split the tensor

+
+
+

chunk(input, chunks, dim=0) -> List of Tensors

@@ -217,32 +133,29 @@ the input tensor.

Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks.
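As a hedged illustration of the uneven-split rule above (a sketch, assuming the torch package is installed; all names come from the torch API):

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_arange(1, 5)                      # 1D tensor of length 5
  parts <- torch_chunk(x, chunks = 2, dim = 1) # 5 is not divisible by 2

  # The first chunk takes ceiling(5 / 2) = 3 elements,
  # so the last chunk is smaller (2 elements).
  parts[[1]]
  parts[[2]]
}
```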

+
+
-
- +
- - + + diff --git a/dev/reference/torch_clamp.html b/dev/reference/torch_clamp.html index 1948b018e9ed6a7c656b6732b657c50ca078325f..a63d33af4a0703a51d123c42dc5d296846fce912 100644 --- a/dev/reference/torch_clamp.html +++ b/dev/reference/torch_clamp.html @@ -1,79 +1,18 @@ - - - - - - - -Clamp — torch_clamp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Clamp — torch_clamp • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_clamp(self, min = NULL, max = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

min

(Number) lower-bound of the range to be clamped to

max

(Number) upper-bound of the range to be clamped to

- -

clamp(input, min, max, out=NULL) -> Tensor

+
+
torch_clamp(self, min = NULL, max = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
min
+

(Number) lower-bound of the range to be clamped to

+
max
+

(Number) upper-bound of the range to be clamped to

+
+
+

clamp(input, min, max, out=NULL) -> Tensor

@@ -225,72 +141,73 @@ a resulting tensor:

If input is of type FloatTensor or DoubleTensor, args min and max must be real numbers, otherwise they should be integers.

-

clamp(input, *, min, out=NULL) -> Tensor

- +
+
+

clamp(input, *, min, out=NULL) -> Tensor

Clamps all elements in input to be greater than or equal to min.

If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.

-

clamp(input, *, max, out=NULL) -> Tensor

- +
+
+

clamp(input, *, max, out=NULL) -> Tensor

Clamps all elements in input to be less than or equal to max.

If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_clamp(a, min=-0.5, max=0.5)
-
-
-a = torch_randn(c(4))
-a
-torch_clamp(a, min=0.5)
-
-
-a = torch_randn(c(4))
-a
-torch_clamp(a, max=0.5)
-}
-#> torch_tensor
-#>  0.2336
-#> -1.8506
-#> -0.6078
-#>  0.5000
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_clamp(a, min=-0.5, max=0.5)
+
+
+a = torch_randn(c(4))
+a
+torch_clamp(a, min=0.5)
+
+
+a = torch_randn(c(4))
+a
+torch_clamp(a, max=0.5)
+}
+#> torch_tensor
+#>  0.5000
+#>  0.5000
+#> -0.0240
+#> -0.5295
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_clip.html b/dev/reference/torch_clip.html index db227a0f94a27e044860f34577f7a4c0015281b9..72f11e8a5e56ed53a9c2ae9c905e00d6169d2753 100644 --- a/dev/reference/torch_clip.html +++ b/dev/reference/torch_clip.html @@ -1,79 +1,18 @@ - - - - - - - -Clip — torch_clip • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Clip — torch_clip • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_clip(self, min = NULL, max = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

min

(Number) lower-bound of the range to be clamped to

max

(Number) upper-bound of the range to be clamped to

- -

clip(input, min, max, *, out=None) -> Tensor

+
+
torch_clip(self, min = NULL, max = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
min
+

(Number) lower-bound of the range to be clamped to

+
max
+

(Number) upper-bound of the range to be clamped to

+
+
+

clip(input, min, max, *, out=None) -> Tensor

-

Alias for torch_clamp().

+

Alias for torch_clamp().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_clone.html b/dev/reference/torch_clone.html index 5675496ade8171d6f8de58e2eb777f7f6901efa0..5e05493e17347282ad6000344c03f4f6016e97b3 100644 --- a/dev/reference/torch_clone.html +++ b/dev/reference/torch_clone.html @@ -1,79 +1,18 @@ - - - - - - - -Clone — torch_clone • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Clone — torch_clone • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_clone(self, memory_format = NULL)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

memory_format

a torch memory format. See torch_preserve_format().

- -

Note

+
+
torch_clone(self, memory_format = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
memory_format
+

a torch memory format. See torch_preserve_format().

+
+
+

Note

This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see Tensor$detach.

-

clone(input, *, memory_format=torch.preserve_format) -> Tensor

- +
+
+

clone(input, *, memory_format=torch.preserve_format) -> Tensor

Returns a copy of input.
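A minimal sketch (assuming the torch package is installed) contrasting the differentiable copy made by clone with the graph-free copy made by detach, as described in the note above:

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)

  y <- x$clone()   # copy that stays connected to the autograd graph
  z <- x$detach()  # copy with no autograd relationship to x

  y$sum()$backward()
  x$grad           # gradients flowed back through the clone
}
```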

+
+
-
- +
- - + + diff --git a/dev/reference/torch_combinations.html b/dev/reference/torch_combinations.html index ddaa47daafa46bf6632293c08c7584d6ef314fa3..2127f4bac01784baa0adee2a8c09e2857fd6de37 100644 --- a/dev/reference/torch_combinations.html +++ b/dev/reference/torch_combinations.html @@ -1,79 +1,18 @@ - - - - - - - -Combinations — torch_combinations • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Combinations — torch_combinations • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,77 +111,70 @@

Combinations

-
torch_combinations(self, r = 2L, with_replacement = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) 1D vector.

r

(int, optional) number of elements to combine

with_replacement

(boolean, optional) whether to allow duplication in combination

- -

combinations(input, r=2, with_replacement=False) -> seq

+
+
torch_combinations(self, r = 2L, with_replacement = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) 1D vector.

+
r
+

(int, optional) number of elements to combine

+
with_replacement
+

(boolean, optional) whether to allow duplication in combination

+
+
+

combinations(input, r=2, with_replacement=False) -> seq

Compute combinations of length \(r\) of the given tensor. The behavior is similar to Python's itertools.combinations when with_replacement is set to FALSE, and itertools.combinations_with_replacement when with_replacement is set to TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = c(1, 2, 3)
-tensor_a = torch_tensor(a)
-torch_combinations(tensor_a)
-torch_combinations(tensor_a, r=3)
-torch_combinations(tensor_a, with_replacement=TRUE)
-}
-#> torch_tensor
-#>  1  1
-#>  1  2
-#>  1  3
-#>  2  2
-#>  2  3
-#>  3  3
-#> [ CPUFloatType{6,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = c(1, 2, 3)
+tensor_a = torch_tensor(a)
+torch_combinations(tensor_a)
+torch_combinations(tensor_a, r=3)
+torch_combinations(tensor_a, with_replacement=TRUE)
+}
+#> torch_tensor
+#>  1  1
+#>  1  2
+#>  1  3
+#>  2  2
+#>  2  3
+#>  3  3
+#> [ CPUFloatType{6,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_complex.html b/dev/reference/torch_complex.html index e21c829a2cf36d5b6a54c284880bd441633612db..e21e2d91713ba3e639aeb5ef2420d7effbb31f03 100644 --- a/dev/reference/torch_complex.html +++ b/dev/reference/torch_complex.html @@ -1,79 +1,18 @@ - - - - - - - -Complex — torch_complex • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Complex — torch_complex • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_complex(real, imag)
- -

Arguments

- - - - - - - - - - -
real

(Tensor) The real part of the complex tensor. Must be float or double.

imag

(Tensor) The imaginary part of the complex tensor. Must be the same dtype as real.

- -

complex(real, imag, *, out=None) -> Tensor

+
+
torch_complex(real, imag)
+
+
+

Arguments

+
real
+

(Tensor) The real part of the complex tensor. Must be float or double.

+
imag
+

(Tensor) The imaginary part of the complex tensor. Must be the same dtype as real.

+
+
+

complex(real, imag, *, out=None) -> Tensor

Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag.

+
-

Examples

-
if (torch_is_installed()) {
-
-real <- torch_tensor(c(1, 2), dtype=torch_float32())
-imag <- torch_tensor(c(3, 4), dtype=torch_float32())
-z <- torch_complex(real, imag)
-z
-z$dtype
-}
-#> torch_ComplexFloat
-
+
+

Examples

+
if (torch_is_installed()) {
+
+real <- torch_tensor(c(1, 2), dtype=torch_float32())
+imag <- torch_tensor(c(3, 4), dtype=torch_float32())
+z <- torch_complex(real, imag)
+z
+z$dtype
+}
+#> torch_ComplexFloat
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conj.html b/dev/reference/torch_conj.html index 3db54ddfaf5186fd193417b4b834fcf14ffb2c43..05bf4d567e00732eefe2ee7c015c599573bc9b1c 100644 --- a/dev/reference/torch_conj.html +++ b/dev/reference/torch_conj.html @@ -1,79 +1,18 @@ - - - - - - - -Conj — torch_conj • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conj — torch_conj • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_conj(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

conj(input) -> Tensor

+
+
torch_conj(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

conj(input) -> Tensor

@@ -209,39 +129,38 @@

$$ \mbox{out}_{i} = conj(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-torch_conj(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+torch_conj(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv1d.html b/dev/reference/torch_conv1d.html index 0f5710c6a2de75ad93fa5d17fa5e75aa176f61ae..00afb5a7fb84262576ec19f4ffa4c35efa822235 100644 --- a/dev/reference/torch_conv1d.html +++ b/dev/reference/torch_conv1d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv1d — torch_conv1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv1d — torch_conv1d • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_conv1d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  dilation = 1L,
-  groups = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

- -

conv1d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

+
+
torch_conv1d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  dilation = 1L,
+  groups = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

+
weight
+

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kW)\)

+
bias
+

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
+
+

conv1d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

Applies a 1D convolution over an input signal composed of several input planes.

-

See nn_conv1d() for details and output shape.

+

See nn_conv1d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-filters = torch_randn(c(33, 16, 3))
-inputs = torch_randn(c(20, 16, 50))
-nnf_conv1d(inputs, filters)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>  Columns 1 to 8  -7.8525  -2.0407   0.8150  -1.5617  -8.7560  12.2625  -2.5097   0.5251
-#>    9.4414   6.2884  -0.8566  -4.0809  -2.7999   3.8187  -0.7887   4.1019
-#>   -6.0896  -9.1828  -0.1717   6.7823   3.0234  -7.0734  -1.8927  -7.2024
-#>    5.4572   3.0925  -2.4005   4.2250   7.8562   2.9267  -5.2216  -0.4159
-#>   -3.2907  -0.2557   4.5709  -4.7395   7.1890  -2.5823  13.6924   0.9032
-#>    9.1754   4.1761   2.0614  -4.4622   4.5278  11.9110   4.7233 -22.2930
-#>    1.5346   7.9970  -7.1129  -2.6370   2.6200   5.3420   5.4175  -8.3752
-#>   -9.9433  -1.0131   0.1250   3.7050  14.4213  -1.2026  -5.3813  -6.1677
-#>   -2.0734  -7.3434   3.0369  10.7258  -2.2571   7.8338   6.5770   4.6638
-#>   14.5355  -9.2446 -11.8788  -1.4874   0.6724   0.4364 -13.0620  10.5158
-#>    3.1183  -6.3421   3.5872   1.2476  -8.8728  -0.6727  -2.1018   3.0694
-#>   -6.4806   5.2587   0.6850   0.4648   0.4876   7.5190  -4.9007  -6.9071
-#>   -4.4151   3.7822  -4.3640  10.6634  -9.7924   7.1546  -5.6291   0.2431
-#>    1.6135   5.3794 -12.0290   7.0067  -2.5272   7.5007  -5.4069  -0.6051
-#>  -10.7322 -13.6759   3.1178   7.4365  -8.3466   5.7762   6.1908 -13.9317
-#>    2.8588   4.8001   1.4602  -1.0226   5.0607  -0.0700  -3.6341  -0.8736
-#>    6.6313  -2.5016  -4.7764   3.9594  10.7264 -11.9547   1.5272  -5.7071
-#>    3.2704  -0.1894  -4.0513  -7.1200   1.3507  -2.7097  -6.7959   1.3229
-#>   -7.3841  -5.7800   5.2865  -4.4110  -4.0300  -5.9256   2.9355   9.6800
-#>   -3.4307  -0.9973   4.4428   1.2845   9.8502  -1.7181  -4.8242   0.7905
-#>   -4.1857  10.8171  -1.2946   0.1674  -2.8792  -2.9051 -12.2320  -3.5684
-#>    2.7573   4.8187  -4.4594   1.6466   7.3383  12.7499  -0.4632  -0.7312
-#>    4.9108   2.0215  -5.3181   5.9072 -11.0859   0.5918   2.1680  -1.0367
-#>    1.4069   2.5627   9.8556  -5.6374   4.6778   2.7842   0.3822  -0.6612
-#>    3.1161  -1.6331   9.6244 -10.1978   2.0901   3.0103   5.0626  -3.6221
-#>   -5.0599   1.8432  -8.5181  10.2846  -4.9048  -6.4187  -6.4874  11.4115
-#>    7.6255  -0.8659  -2.0644  11.1742  -5.3905  -0.8722  -0.9200  -3.7735
-#>   -0.8229  -0.7254  -7.2426   0.4983   8.5504  -2.4634  -2.9146  -9.7880
-#>  -11.7479  -9.0805   7.2011  -5.6097  -5.1247   4.5020   0.6660   8.3494
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{20,33,48} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+filters = torch_randn(c(33, 16, 3))
+inputs = torch_randn(c(20, 16, 50))
+nnf_conv1d(inputs, filters)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  Columns 1 to 8   1.9853  -0.4145   3.4898  -3.1410   8.6812  -2.3423  -5.6584  10.3646
+#>    2.0027  -8.2080  -1.9526   4.0067   5.0721   5.4718   4.0759  -6.2809
+#>    8.2727  -4.4903   1.7663   1.4390  -1.3945   0.2365   8.5239  -2.9072
+#>  -12.1987   0.6573   5.6493  -4.4717  -9.6901  10.5695   6.0054  -4.4900
+#>   10.3204  -4.7072  -0.4309   5.2172   6.6748  -2.6728  -4.5023  -0.1650
+#>    8.9178  -1.1371  -7.2524   1.8454   7.2972 -10.1614   8.5542  10.4275
+#>   -3.2563  -1.1356  -7.5541 -16.1150   8.2792   9.6813   1.2226   3.9844
+#>    5.9068   0.7154  -0.8661  -4.8683   1.0732  -2.3573   1.5017 -17.0607
+#>    0.2960   9.6000  -6.2068  -0.5079  -6.3821   3.7258   6.0201  -4.2248
+#>    0.7526  -2.7993  -9.0437  -2.6109  -0.6761  13.2288 -21.4468   5.1298
+#>    3.3459  -1.3167   5.6159 -10.6618   6.5603  -7.3441   6.6849  -5.8503
+#>    3.8326  -4.3182   0.5829   3.3527  21.9380  -0.1036  -7.3275   5.0299
+#>   -0.2216 -11.9784  -2.7788  -3.2930   9.9663   2.8894   5.2879   0.6955
+#>   -1.9241   1.2050  -0.9701  -1.5404   9.7774  -2.7900  -1.3815  -7.2038
+#>   -4.1820   5.1908   1.8816  -5.8548  -3.6811   6.3057   1.4454  -4.0578
+#>    5.3453 -11.6965  -1.4017   5.0661   8.4545  -2.0280  -1.4592   4.9007
+#>   -7.8462   3.8918 -10.1120   9.6228 -10.6059  -4.1354  -3.1337   3.3622
+#>   -5.4985   3.1384  -1.7546   2.8298  -7.7870  13.2166 -14.4154   2.5063
+#>   -6.5086   2.2085  10.6032  -2.8578 -15.0458   1.5431  -8.5347   2.1013
+#>   -9.5277  -7.4451  11.0819   6.7900   1.6646   2.5842  -6.5905  -5.2784
+#>   -1.6813  -0.8671   1.1241   0.1566   1.2475  -1.8196   5.6795  -0.7130
+#>  -14.7110  -0.0383   7.7003  10.3171  -3.7681   9.6258   9.4546  -6.7951
+#>    7.1557   4.3944   7.4061  -1.7069  -5.3293  -6.4612  -8.9116   8.8942
+#>   -4.0033  -8.2283   7.4060   4.1749  -5.1196  -0.3640   4.2965   1.3732
+#>   10.4250   9.0730   4.2918   1.7370 -12.0283  -6.9838   9.9492   5.5577
+#>    1.4511  -2.6171  -7.8845  -5.2033  -2.4371  -2.7697  10.7783  -2.9823
+#>   -8.0807   0.6709  12.0511   0.6286   0.1480  -9.3920  -8.4591   9.6192
+#>    5.9110  -3.6756  -3.9411   7.1610  14.2643   4.2974  -9.6810  -5.7104
+#>   -2.9611 -10.7208   3.5580  12.1450  12.2906  -2.8964  -0.1299  11.9842
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{20,33,48} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv2d.html b/dev/reference/torch_conv2d.html index a9cc6aeb652ebdd82492cf8d5831cd3b393bf941..30e5542805441a24a49c0d0f0ce3764938a6d1d7 100644 --- a/dev/reference/torch_conv2d.html +++ b/dev/reference/torch_conv2d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv2d — torch_conv2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv2d — torch_conv2d • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_conv2d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  dilation = 1L,
-  groups = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

weight

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kH , kW)\)

bias

optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

- -

conv2d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

+
+
torch_conv2d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  dilation = 1L,
+  groups = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

+
weight
+

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kH , kW)\)

+
bias
+

optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
+
+

conv2d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

Applies a 2D convolution over an input image composed of several input planes.

-

See nn_conv2d() for details and output shape.

+

See nn_conv2d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-# With square kernels and equal stride
-filters = torch_randn(c(8,4,3,3))
-inputs = torch_randn(c(1,4,5,5))
-nnf_conv2d(inputs, filters, padding=1)
-}
-#> torch_tensor
-#> (1,1,.,.) = 
-#>   1.3010 -3.8266 -2.0754  2.3768 -1.5136
-#>  -5.1038  5.7941  0.6686 -3.2564 -0.9188
-#>   1.3512 -0.0968  5.4276  4.1043 -3.0454
-#>  -6.6700  0.0899  3.5636 -1.7755  0.6650
-#>   6.1587  6.5902  4.1504 -1.8508 -1.2869
-#> 
-#> (1,2,.,.) = 
-#>  -5.6143 -0.8724 -3.7113  0.2869  0.1920
-#>   6.7031  3.3272 -3.5086 -2.5507 -0.5642
-#>  -1.6727 -5.5105  2.1701 -1.1776 -10.1545
-#>   0.3751  1.1698 -5.5290  1.1856  0.5100
-#>  -0.5208 -4.1720 -2.4027  0.2828  1.7904
-#> 
-#> (1,3,.,.) = 
-#>   -1.2270   4.3284   1.8797  -6.1424  -0.0578
-#>    4.4318   3.0126  -0.1128  -7.7084   0.4294
-#>   -3.7026   8.1070  -5.1812   2.8599  13.3832
-#>    4.4232  -4.4565  10.4726  10.5606   9.0606
-#>    3.5237   4.5724  -6.0063 -10.0640   0.8633
-#> 
-#> (1,4,.,.) = 
-#>   5.8401 -2.3004  0.4396  3.0189  0.7347
-#>   3.2647 -3.4107  1.1328  5.5715 -3.8395
-#>  -3.2507  3.7227 -4.9675  3.0068 -5.1521
-#>   2.9304  1.8668 -5.0428 -2.4748  0.4221
-#>  -2.7381 -3.2545  3.4447 -2.3784  0.3404
-#> 
-#> (1,5,.,.) = 
-#>   5.0448  1.2105 -0.2328  2.6849 -2.4929
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{1,8,5,5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+# With square kernels and equal stride
+filters = torch_randn(c(8,4,3,3))
+inputs = torch_randn(c(1,4,5,5))
+nnf_conv2d(inputs, filters, padding=1)
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>   3.0084 -5.3392  0.8916 -5.9357  4.9299
+#>  -4.6841 -1.0547 -1.9839  0.4329  0.5662
+#>   5.7762 -5.7486  1.0964 -3.6455  0.4517
+#>   1.0485  4.4317  1.0435  3.1080 -1.0954
+#>   1.3437  5.9563  0.4232  4.4031 -1.6114
+#> 
+#> (1,2,.,.) = 
+#>    3.4789   2.0053  -2.5920  10.4240  -5.6458
+#>    2.3907   4.4177   9.9248  -2.8303  -0.8741
+#>    6.8289  -4.8249  -4.0972   6.0992  -5.1429
+#>    0.5024   0.8175 -14.3830   2.9596 -12.0636
+#>  -10.1116  -5.4844  -8.6464 -12.7355  -0.8968
+#> 
+#> (1,3,.,.) = 
+#>  -3.5986 -0.5763 -2.8229 -0.1336  2.9402
+#>   5.3503 -7.3509  6.7232 -2.6656 -0.4504
+#>  -2.1034  4.9948 -8.7319 -2.7165 -4.7009
+#>  -1.0407  3.2368 -6.8384 -7.8150 -0.4372
+#>  -0.1875 -8.2452 -6.6283 -6.9655  0.3963
+#> 
+#> (1,4,.,.) = 
+#>  -0.7238 -5.2456  4.1168  8.7662 -5.4334
+#>  -7.9181 -1.0623  3.3308 -5.0665  3.3577
+#>   5.2597  1.6785  9.1345 -8.4690 -1.6616
+#>  -0.3970 -5.0369 -1.2243 -9.3130  5.8999
+#>  -3.6919 -7.8341  3.2629 -2.5463  4.2951
+#> 
+#> (1,5,.,.) = 
+#>   1.4886 -3.1286  6.6257  1.4238  3.0997
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{1,8,5,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv3d.html b/dev/reference/torch_conv3d.html index f2b0117c0e628f757c725b5b069beac490d1813b..b4092958093fe9fe4a8fb21add34ee8493db03f8 100644 --- a/dev/reference/torch_conv3d.html +++ b/dev/reference/torch_conv3d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv3d — torch_conv3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv3d — torch_conv3d • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_conv3d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  dilation = 1L,
-  groups = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

weight

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kT , kH , kW)\)

bias

optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

padding

implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

dilation

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

- -

conv3d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

+
+
torch_conv3d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  dilation = 1L,
+  groups = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

+
weight
+

filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kT , kH , kW)\)

+
bias
+

optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

+
padding
+

implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

+
dilation
+

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
+
+

conv3d(input, weight, bias=NULL, stride=1, padding=0, dilation=1, groups=1) -> Tensor

Applies a 3D convolution over an input image composed of several input planes.

-

See nn_conv3d() for details and output shape.

+

See nn_conv3d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-# filters = torch_randn(c(33, 16, 3, 3, 3))
-# inputs = torch_randn(c(20, 16, 50, 10, 20))
-# nnf_conv3d(inputs, filters)
-}
-#> NULL
-
+
+

Examples

+
if (torch_is_installed()) {
+
+# filters = torch_randn(c(33, 16, 3, 3, 3))
+# inputs = torch_randn(c(20, 16, 50, 10, 20))
+# nnf_conv3d(inputs, filters)
+}
+#> NULL
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv_tbc.html b/dev/reference/torch_conv_tbc.html index 0e2a7918dbebff60335cca766f42fc829e5b25ce..1595679701b766ca72e28df5af73e2280e359891 100644 --- a/dev/reference/torch_conv_tbc.html +++ b/dev/reference/torch_conv_tbc.html @@ -1,79 +1,18 @@ - - - - - - - -Conv_tbc — torch_conv_tbc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_tbc — torch_conv_tbc • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,62 +111,51 @@

Conv_tbc

-
torch_conv_tbc(self, weight, bias, pad = 0L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

input tensor of shape \((\mbox{sequence length} \times batch \times \mbox{in\_channels})\)

weight

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} \times \mbox{out\_channels}\))

bias

bias of shape (\(\mbox{out\_channels}\))

pad

number of timesteps to pad. Default: 0

- -

conv_tbc(self, weight, bias, pad=0) -> Tensor

+
+
torch_conv_tbc(self, weight, bias, pad = 0L)
+
+
+

Arguments

+
self
+

input tensor of shape \((\mbox{sequence length} \times batch \times \mbox{in\_channels})\)

+
weight
+

filter of shape (\(\mbox{kernel width} \times \mbox{in\_channels} \times \mbox{out\_channels}\))

+
bias
+

bias of shape (\(\mbox{out\_channels}\))

+
pad
+

number of timesteps to pad. Default: 0

+
+
+

conv_tbc(self, weight, bias, pad=0) -> Tensor

Applies a 1-dimensional sequence convolution over an input sequence. Input and output dimensions are (Time, Batch, Channels) - hence TBC.
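A small usage sketch (assuming the torch package is installed) making the Time x Batch x Channels layout explicit; the shapes follow the argument descriptions above:

```r
library(torch)

if (torch_is_installed()) {
  x      <- torch_randn(c(10, 4, 16))  # 10 timesteps, batch of 4, 16 in_channels
  weight <- torch_randn(c(3, 16, 32))  # kernel width 3, 16 in, 32 out
  bias   <- torch_zeros(32)

  out <- torch_conv_tbc(x, weight, bias, pad = 1)
  out$shape  # time dimension: 10 - 3 + 1 + 2 * pad = 10
}
```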

+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv_transpose1d.html b/dev/reference/torch_conv_transpose1d.html index d3dba5e0ce7e3c09339685cc35670d8fc6563512..658d35216bcd81d99b95080d82fbe242f915e79f 100644 --- a/dev/reference/torch_conv_transpose1d.html +++ b/dev/reference/torch_conv_transpose1d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv_transpose1d — torch_conv_transpose1d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose1d — torch_conv_transpose1d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,129 +111,112 @@

Conv_transpose1d

-
torch_conv_transpose1d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  output_padding = 0L,
-  groups = 1L,
-  dilation = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

- -

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

+
+
torch_conv_transpose1d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  output_padding = 0L,
+  groups = 1L,
+  dilation = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

+
weight
+

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

+
bias
+

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

+
padding
+

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

+
output_padding
+

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

+
+
+

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

-

See nn_conv_transpose1d() for details and output shape.

+

See nn_conv_transpose1d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-inputs = torch_randn(c(20, 16, 50))
-weights = torch_randn(c(16, 33, 5))
-nnf_conv_transpose1d(inputs, weights)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>  Columns 1 to 8  -1.0826  -3.2328  -3.1780   0.0298  -8.2496   5.5252   3.8265 -16.0103
-#>   -8.5469   0.1609   8.9542  -5.2216  19.1733   6.7107   6.8135 -15.0976
-#>   -2.9840  -7.8190  -8.3076   2.8882 -16.1129   1.2617   0.8731   2.6579
-#>   -1.0351   6.5417   0.7302   9.6141 -14.6469  -8.6066   4.2328  -5.5201
-#>    4.9492   9.2436  -1.0317  -2.8284   6.6090   5.7863 -24.2305   0.0088
-#>   -1.9106   3.9358  -6.7325   6.2673 -10.3832   1.3956   4.0447  15.1818
-#>   -2.1144   2.8540  18.4233   8.6695   1.7338   9.5815   2.0380   1.4900
-#>    5.2017   0.4362  -4.8418  -1.2964   0.6606  -3.5772 -12.4188   0.3288
-#>    3.1737   0.4119   2.6829   2.9172  22.7073 -10.6735   6.8735  -5.9445
-#>    2.1190   2.3197  -3.2908 -15.7091  11.0776   7.3606   2.0950  -3.0806
-#>   -0.3880 -15.1609  -8.9384  -3.4452   6.9085  15.2111  -3.5011   9.9428
-#>   -1.3822  -6.1490  -6.0171  -0.3526  -1.9978  -0.7748   5.2712   9.7836
-#>    0.5850  -2.6186   1.2230  -7.9513  -7.2304   4.6257  -3.8087   5.1504
-#>    5.0742   3.4272   0.1988   5.0003  10.3331   0.3624 -17.0828   4.9309
-#>   -2.1510  -6.3021  -3.3675   0.2285  -4.2040   6.4637  -7.7324   6.1154
-#>    0.4576 -10.4537  -9.5633  13.5898  16.5553   8.1340   9.4123   5.8942
-#>   -1.3384 -11.5494  -3.2358  -7.8198   1.6578   4.1971   5.2327 -12.0638
-#>   -0.7149  -1.4571  20.1630  -5.7633  -4.3603   3.1665 -10.3730  14.2092
-#>    8.4559   5.7431 -13.6568 -15.2266   6.5101   3.5963  -7.4127  -5.1881
-#>   -3.6863   3.6167  -6.5865   3.9422  11.6411   5.9919  -2.3535   6.7854
-#>    2.2983   1.1177  -3.0682 -11.6921   0.8689  11.6850  -5.7185  -8.9297
-#>    8.5941  -5.6829 -20.0004   1.0551  -3.7755 -18.6705   0.8061   5.8681
-#>   -8.0366   3.6885  -2.1077   7.3701  -8.4471  -3.3633 -10.6362  10.2637
-#>    0.3454 -15.4397  -4.5829 -10.9853  -5.3428  12.4051  -5.9169  15.0745
-#>    3.9859   4.8788  14.9213 -15.5092 -20.9979  18.4543  -0.2538 -15.4775
-#>   -3.2215   4.1086   6.8447   8.8186 -14.7425   9.6034   0.9043   6.4579
-#>   -1.8873   6.2358  -0.2238  15.5462  -7.2763 -10.3539   4.6643  -5.6230
-#>    0.9980   8.0700   8.9527  -9.4624  -3.3798   5.6563   1.8500  -5.8979
-#>   -3.0767  -3.6875  11.8195   2.3209  12.3317 -10.0017   1.0081   6.3692
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{20,33,54} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+inputs = torch_randn(c(20, 16, 50))
+weights = torch_randn(c(16, 33, 5))
+nnf_conv_transpose1d(inputs, weights)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  Columns 1 to 6  1.4234e+00 -1.5031e+00 -6.3616e-01 -1.6005e+01 -1.7653e+00 -1.1944e+01
+#>  -2.6130e+00  2.1109e+00  5.0137e+00 -8.3263e+00  3.1316e+00 -6.7171e+00
+#>  -7.7618e-02 -3.6638e+00  3.7729e+00 -6.2228e+00  1.4163e+01 -1.2355e+01
+#>   8.0153e+00  1.8125e+00 -5.2152e+00 -6.1568e+00 -5.3453e+00 -3.1439e-01
+#>  -4.8352e+00  7.3619e-01  4.5109e+00  1.3390e+00 -1.2117e+00  6.1506e+00
+#>  -2.3411e+00  4.9159e+00 -4.8826e+00 -2.8050e+00  1.4312e+01  3.6273e+00
+#>  -4.3436e+00 -2.9203e+00  3.0042e+00 -1.0826e+01 -3.2666e+00  3.9990e+00
+#>  -1.7722e+00 -2.3305e+00  5.1786e+00 -9.0884e+00 -5.4102e+00  2.9818e+00
+#>   1.2349e+00  4.8561e+00 -1.6917e+01  1.1370e+01 -1.1253e+01 -1.0506e+00
+#>  -5.0320e-01 -5.4563e+00  2.2738e+00 -6.8237e+00  5.2816e+00 -2.3220e+01
+#>  -5.2823e+00 -1.6666e+00 -5.6597e+00 -2.6170e+00 -6.4845e+00 -4.1117e+00
+#>   1.0925e+00 -1.3915e+00  3.9011e+00  1.1433e+01  1.4513e+00  1.2619e+01
+#>  -1.6974e+00  6.7357e+00 -6.2011e+00 -9.4999e-01 -3.8088e+00 -3.8713e+00
+#>   3.2832e+00 -7.8925e+00  3.5728e+00 -3.6817e+00  3.9862e-01  1.4450e+00
+#>   4.0368e+00  1.3909e+01 -4.8242e-01  7.8844e+00 -5.5844e+00  2.1601e+01
+#>   3.8193e+00 -1.6119e+00  7.8622e+00 -4.2775e+00 -4.7867e+00  3.9014e+00
+#>  -1.3596e+00  3.1102e-02  7.2094e+00 -4.2222e+00  9.9116e+00  4.0948e+00
+#>   5.1430e+00 -7.9472e-01 -1.7387e+00 -5.0144e-01  5.8753e+00  3.1581e+00
+#>  -4.0289e+00  1.2223e+01 -1.2749e+01  3.5464e+00 -1.4313e+00 -3.6972e+00
+#>   7.3912e+00  6.3914e-01  5.8189e+00 -5.0191e+00  1.4044e+01 -1.5735e+00
+#>  -4.0717e+00 -6.0249e+00 -4.7371e+00  2.7147e-01 -1.1632e+01 -8.3076e+00
+#>   8.9647e-01 -1.8814e+00  5.1120e+00  1.4819e+00  9.9320e-01 -8.1805e-01
+#>   3.3988e-01 -4.4315e+00  8.0483e+00 -8.9666e+00  3.7973e+00  5.9623e+00
+#>  -1.1679e+00 -3.1561e+00 -2.7813e+00  1.5826e+00 -2.9887e+00  1.7741e+00
+#>   2.6976e+00 -3.1211e-01 -1.0025e+01  1.8118e+00 -5.4081e+00  6.9539e+00
+#>  -4.7193e+00 -2.1988e+00 -1.4907e+01  1.5257e+00 -4.3648e+00  1.0104e+01
+#>   8.9952e+00  6.7141e+00  7.4326e+00  8.3896e+00  1.1992e+01  1.3093e+00
+#>  -4.8924e+00 -1.9185e+00 -4.1971e+00 -7.3296e+00  4.8186e+00 -1.1817e+01
+#>  -1.8650e+00 -5.4170e-01  3.7811e+00 -3.6127e+00  9.1621e+00  6.1826e-02
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{20,33,54} ]
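The 54 in the example's output shape `{20,33,54}` follows from the standard transposed-convolution output-length formula. A quick base-R sanity check (the helper name is ours, not torch API; defaults assumed for stride, padding, output_padding and dilation):

```r
# Illustrative only: standard transposed-conv output length,
# L_out = (L_in - 1)*stride - 2*padding + dilation*(kernel - 1) + output_padding + 1
conv_transpose_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                                   output_padding = 0, dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

conv_transpose_out_len(50, 5)  # 54 -- the last dimension of {20,33,54}
```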
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv_transpose2d.html b/dev/reference/torch_conv_transpose2d.html index 461b64feba7c1da1e2e0689eb39523c7ee343729..1da43682f32f6aae9d2eb11d657eab9d33a4d30b 100644 --- a/dev/reference/torch_conv_transpose2d.html +++ b/dev/reference/torch_conv_transpose2d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv_transpose2d — torch_conv_transpose2d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose2d — torch_conv_transpose2d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,130 +111,113 @@

Conv_transpose2d

-
torch_conv_transpose2d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  output_padding = 0L,
-  groups = 1L,
-  dilation = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kH , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

padding

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

- -

conv_transpose2d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

+
+
torch_conv_transpose2d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  output_padding = 0L,
+  groups = 1L,
+  dilation = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)

+
weight
+

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kH , kW)\)

+
bias
+

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1

+
padding
+

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0

+
output_padding
+

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1

+
+
+

conv_transpose2d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

-

See nn_conv_transpose2d() for details and output shape.

+

See nn_conv_transpose2d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-# With square kernels and equal stride
-inputs = torch_randn(c(1, 4, 5, 5))
-weights = torch_randn(c(4, 8, 3, 3))
-nnf_conv_transpose2d(inputs, weights, padding=1)
-}
-#> torch_tensor
-#> (1,1,.,.) = 
-#>    2.2788  -1.9283   7.3256   5.1428  -2.9257
-#>   -0.6896   1.4029  -5.3229  10.2390  -1.2402
-#>   -1.2060  10.2222  -4.7248  -1.5488   3.8320
-#>   -1.4776   1.1281   1.8286  -4.8006   8.1204
-#>    5.9955   0.2092   0.9957  -2.9968  -1.0158
-#> 
-#> (1,2,.,.) = 
-#>  -4.0071 -2.0428 -1.4256 -1.4038 -0.1210
-#>  -5.2884 -1.0475  7.1233  1.1916  5.0784
-#>   3.7178 -1.1443  2.2506 -3.7226  0.3739
-#>  -4.2545  2.0034  2.7678 -2.2835 -0.9902
-#>  -3.2392  3.3695  1.1751 -1.6468 -2.7825
-#> 
-#> (1,3,.,.) = 
-#>   0.5532 -7.5668  2.3785  3.4235  2.2801
-#>  -0.4576  0.1908  3.4579  6.5030  2.8629
-#>   5.5145  5.7911 -5.0308 -4.4961  0.1457
-#>   0.1480 -0.5277 -3.9319  6.4784 -1.5596
-#>  -0.1217  0.2231 -2.0216  2.9053 -0.2022
-#> 
-#> (1,4,.,.) = 
-#>   0.3142 -8.9630 -1.8311 -2.6733 -1.9425
-#>  -6.8940  1.7532 -5.0590 -4.0202 -4.1672
-#>   5.6343 -3.8787 -7.2058 -10.6780  1.3036
-#>  -1.8006 -1.8730 -5.1896  5.3562 -0.2618
-#>  -5.8934  0.0295  0.0948  1.0247 -2.2523
-#> 
-#> (1,5,.,.) = 
-#>   5.1338 -0.8517  5.0228  3.4769  3.7733
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{1,8,5,5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+# With square kernels and equal stride
+inputs = torch_randn(c(1, 4, 5, 5))
+weights = torch_randn(c(4, 8, 3, 3))
+nnf_conv_transpose2d(inputs, weights, padding=1)
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>   -1.3726   4.1691  -0.6799  -8.8039   9.7267
+#>    1.2701  11.1259   1.8226   1.1498   5.5541
+#>    4.8290   8.9775   3.8598  -3.7915   6.4184
+#>    0.7798   0.1942  -8.4466   8.5685  13.7212
+#>   -6.0247  -1.9721   9.1484   1.8498  -1.4937
+#> 
+#> (1,2,.,.) = 
+#>    4.2714  -3.3043   3.0402  -2.7384  -1.8807
+#>   10.8012   0.9600  -0.7262  -0.3902   1.1458
+#>   -0.3774   1.3983  -5.0742  16.1730  -9.9699
+#>   -1.2521   2.6366  -1.6142  -0.5783   1.0759
+#>   -1.4430   6.3296  -2.1135  -9.1693   6.9743
+#> 
+#> (1,3,.,.) = 
+#>   -1.6246  -0.3577   0.6327   1.5703  -5.3536
+#>    2.6579  -2.6281  -3.6799   3.9506  -9.8939
+#>    0.3707  -3.6971  -5.7174  -5.9126  -2.3601
+#>   -4.1118  -6.3271  11.9254  -8.0915  -9.3881
+#>    1.6068  -2.7051  -2.1547  -1.2867  -4.6118
+#> 
+#> (1,4,.,.) = 
+#>  -2.7534 -0.3291  0.1000 -3.3781  0.2460
+#>  -5.2972 -0.0770  1.5451  7.3312  0.5779
+#>   2.8656  7.4773  4.1043 -3.4999  1.5514
+#>  -7.0985  2.1793 -3.9563  9.2614 -2.9280
+#>  -3.6937 -2.7629 -0.0644 -0.4670  1.9274
+#> 
+#> (1,5,.,.) = 
+#>  -0.8408 -1.6140 -6.8559  3.9026  3.2767
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{1,8,5,5} ]
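In the example above (5x5 input, 3x3 kernel, `padding = 1`), the same output-size formula applies per spatial dimension, which is why the 5x5 size is preserved. A base-R sketch (helper name is ours, not torch API):

```r
# Illustrative per-dimension output size for a transposed convolution:
out_dim <- function(i, k, stride = 1, padding = 0,
                    output_padding = 0, dilation = 1) {
  (i - 1) * stride - 2 * padding + dilation * (k - 1) + output_padding + 1
}

out_dim(5, 3, padding = 1)  # 5 -- spatial size preserved, as in {1,8,5,5}
```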
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_conv_transpose3d.html b/dev/reference/torch_conv_transpose3d.html index 512efe938006407f7b24e5c45f19a167884f8c2f..5353d8d9987c34f3c7e5b1bae57281d775ad3d95 100644 --- a/dev/reference/torch_conv_transpose3d.html +++ b/dev/reference/torch_conv_transpose3d.html @@ -1,79 +1,18 @@ - - - - - - - -Conv_transpose3d — torch_conv_transpose3d • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Conv_transpose3d — torch_conv_transpose3d • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,97 +111,80 @@

Conv_transpose3d

-
torch_conv_transpose3d(
-  input,
-  weight,
-  bias = list(),
-  stride = 1L,
-  padding = 0L,
-  output_padding = 0L,
-  groups = 1L,
-  dilation = 1L
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kT , kH , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

padding

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

- -

conv_transpose3d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

+
+
torch_conv_transpose3d(
+  input,
+  weight,
+  bias = list(),
+  stride = 1L,
+  padding = 0L,
+  output_padding = 0L,
+  groups = 1L,
+  dilation = 1L
+)
+
+
+

Arguments

+
input
+

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iT , iH , iW)\)

+
weight
+

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kT , kH , kW)\)

+
bias
+

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

+
stride
+

the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1

+
padding
+

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0

+
output_padding
+

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0

+
groups
+

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

+
dilation
+

the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1

+
+
+

conv_transpose3d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called "deconvolution".

-

See nn_conv_transpose3d() for details and output shape.

+

See nn_conv_transpose3d() for details and output shape.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-inputs = torch_randn(c(20, 16, 50, 10, 20))
-weights = torch_randn(c(16, 33, 3, 3, 3))
-nnf_conv_transpose3d(inputs, weights)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+inputs = torch_randn(c(20, 16, 50, 10, 20))
+weights = torch_randn(c(16, 33, 3, 3, 3))
+nnf_conv_transpose3d(inputs, weights)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cos.html b/dev/reference/torch_cos.html index 22abb0cb7bc7e0d226b8719186d9c2ae52936d9f..434754e1fdae8ac4cc85137ca3d7629f42e9e9f8 100644 --- a/dev/reference/torch_cos.html +++ b/dev/reference/torch_cos.html @@ -1,79 +1,18 @@ - - - - - - - -Cos — torch_cos • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cos — torch_cos • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cos(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

cos(input, out=NULL) -> Tensor

+
+
torch_cos(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

cos(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \cos(\mbox{input}_{i}) $$
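The elementwise formula above can be checked with base R's own `cos()` on a few reference angles (illustrative only):

```r
# cos() is applied elementwise, matching out_i = cos(input_i)
x <- c(0, pi / 2, pi)
cos(x)  # 1, ~0, -1
```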

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_cos(a)
-}
-#> torch_tensor
-#>  0.8024
-#>  0.8990
-#>  0.2236
-#>  0.8580
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_cos(a)
+}
+#> torch_tensor
+#>  0.9920
+#>  0.9991
+#>  0.7859
+#>  0.9709
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cosh.html b/dev/reference/torch_cosh.html index b3315590a855bb0566b75074fee30ede243d2bf4..d3c4ebc0c38c0ac5e190fa1a14eb81f0bc4fe03a 100644 --- a/dev/reference/torch_cosh.html +++ b/dev/reference/torch_cosh.html @@ -1,79 +1,18 @@ - - - - - - - -Cosh — torch_cosh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cosh — torch_cosh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cosh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

cosh(input, out=NULL) -> Tensor

+
+
torch_cosh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

cosh(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@

$$ \mbox{out}_{i} = \cosh(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_cosh(a)
-}
-#> torch_tensor
-#>  1.5223
-#>  1.0311
-#>  1.0992
-#>  2.1630
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_cosh(a)
+}
+#> torch_tensor
+#>  1.2383
+#>  1.2462
+#>  1.3107
+#>  1.0561
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cosine_similarity.html b/dev/reference/torch_cosine_similarity.html index 884b13498fc0a4b4cb3e2ba08e28b7de2d937827..24019e9676787939353d689c702bcac7942976d2 100644 --- a/dev/reference/torch_cosine_similarity.html +++ b/dev/reference/torch_cosine_similarity.html @@ -1,79 +1,18 @@ - - - - - - - -Cosine_similarity — torch_cosine_similarity • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cosine_similarity — torch_cosine_similarity • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,31 +111,23 @@

Cosine_similarity

-
torch_cosine_similarity(x1, x2, dim = 2L, eps = 0)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
x1

(Tensor) First input.

x2

(Tensor) Second input (of size matching x1).

dim

(int, optional) Dimension of vectors. Default: 1

eps

(float, optional) Small value to avoid division by zero. Default: 1e-8

- -

cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor

+
+
torch_cosine_similarity(x1, x2, dim = 2L, eps = 0)
+
+
+

Arguments

+
x1
+

(Tensor) First input.

+
x2
+

(Tensor) Second input (of size matching x1).

+
dim
+

(int, optional) Dimension of vectors. Default: 1

+
eps
+

(float, optional) Small value to avoid division by zero. Default: 1e-8

+
+
+

cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor

@@ -221,74 +135,73 @@

$$ \mbox{similarity} = \frac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} $$
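For a single pair of vectors, the formula above can be reproduced directly in base R (a sketch; `cos_sim` is our name, not part of the torch API):

```r
# Base-R sketch of cosine similarity with the eps guard against division by zero
cos_sim <- function(x1, x2, eps = 1e-8) {
  sum(x1 * x2) / max(sqrt(sum(x1^2)) * sqrt(sum(x2^2)), eps)
}

cos_sim(c(1, 0), c(1, 0))  # 1  (parallel vectors)
cos_sim(c(1, 0), c(0, 1))  # 0  (orthogonal vectors)
```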

+
-

Examples

-
if (torch_is_installed()) {
-
-input1 = torch_randn(c(100, 128))
-input2 = torch_randn(c(100, 128))
-output = torch_cosine_similarity(input1, input2)
-output
-}
-#> torch_tensor
-#> -0.1184
-#>  0.1200
-#> -0.0379
-#> -0.0425
-#>  0.0001
-#> -0.0877
-#>  0.0792
-#> -0.0593
-#>  0.0396
-#>  0.1051
-#>  0.1250
-#> -0.0644
-#>  0.0106
-#> -0.1027
-#> -0.0870
-#>  0.0140
-#>  0.0807
-#>  0.0402
-#> -0.0976
-#>  0.0128
-#> -0.0453
-#> -0.0566
-#> -0.0092
-#> -0.0925
-#> -0.0542
-#> -0.0686
-#> -0.0115
-#>  0.0797
-#>  0.1791
-#>  0.0080
-#> ... [the output was truncated (use n=-1 to disable)]
-#> [ CPUFloatType{100} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input1 = torch_randn(c(100, 128))
+input2 = torch_randn(c(100, 128))
+output = torch_cosine_similarity(input1, input2)
+output
+}
+#> torch_tensor
+#>  0.1401
+#> -0.0774
+#> -0.0501
+#> -0.0187
+#>  0.0501
+#> -0.1047
+#> -0.0906
+#> -0.0484
+#> -0.0588
+#>  0.0217
+#>  0.1372
+#> -0.0457
+#> -0.0843
+#> -0.1109
+#> -0.1297
+#>  0.0795
+#> -0.0365
+#> -0.0491
+#>  0.0171
+#>  0.0443
+#> -0.0111
+#> -0.0364
+#> -0.0006
+#> -0.0901
+#>  0.0134
+#>  0.0503
+#>  0.1285
+#>  0.0019
+#> -0.1816
+#> -0.0819
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{100} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_count_nonzero.html b/dev/reference/torch_count_nonzero.html index d7fff42b3cb0c6f1228ea79ade3e5f57d1f09e4c..3c6404e47fb634378db0344ba3edc5b65b8a9bb9 100644 --- a/dev/reference/torch_count_nonzero.html +++ b/dev/reference/torch_count_nonzero.html @@ -1,79 +1,18 @@ - - - - - - - -Count_nonzero — torch_count_nonzero • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Count_nonzero — torch_count_nonzero • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,70 +111,65 @@

Count_nonzero

-
torch_count_nonzero(self, dim = NULL)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints, optional) Dim or tuple of dims along which -to count non-zeros.

- -

count_nonzero(input, dim=None) -> Tensor

+
+
torch_count_nonzero(self, dim = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints, optional) Dim or tuple of dims along which +to count non-zeros.

+
+
+

count_nonzero(input, dim=None) -> Tensor

Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted.
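The counting behaviour has a direct base-R analogue, shown here for intuition (illustrative only; collapsing the first dimension of a matrix yields per-column counts):

```r
# Total and per-dimension non-zero counts in base R
x <- matrix(c(0, 1, 0, 2, 3, 0), nrow = 2)
sum(x != 0)      # 3 non-zeros overall
colSums(x != 0)  # counts after collapsing the row dimension: 1 1 1
```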

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_zeros(3,3)
-x[torch_randn(3,3) > 0.5] = 1
-x
-torch_count_nonzero(x)
-torch_count_nonzero(x, dim=1)
-}
-#> torch_tensor
-#>  1
-#>  1
-#>  0
-#> [ CPULongType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_zeros(3,3)
+x[torch_randn(3,3) > 0.5] = 1
+x
+torch_count_nonzero(x)
+torch_count_nonzero(x, dim=1)
+}
+#> torch_tensor
+#>  1
+#>  3
+#>  1
+#> [ CPULongType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cross.html b/dev/reference/torch_cross.html index 0db345fcbc88d4c495deaddb16135be36d27c280..575d2cd3d0b3b5585307df2b3de27401a6c497c1 100644 --- a/dev/reference/torch_cross.html +++ b/dev/reference/torch_cross.html @@ -1,79 +1,18 @@ - - - - - - - -Cross — torch_cross • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cross — torch_cross • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cross(self, other, dim = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

dim

(int, optional) the dimension to take the cross-product in.

- -

cross(input, other, dim=-1, out=NULL) -> Tensor

+
+
torch_cross(self, other, dim = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
dim
+

(int, optional) the dimension to take the cross-product in.

+
+
+

cross(input, other, dim=-1, out=NULL) -> Tensor

@@ -219,49 +135,48 @@ and other.

dim dimension should be 3.

If dim is not given, it defaults to the first dimension found with the size 3.
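The operation along the size-3 dimension is the ordinary vector cross product; a base-R sketch for a single pair of 3-vectors (the helper name is ours):

```r
# Standard 3-vector cross product
cross3 <- function(a, b) {
  c(a[2] * b[3] - a[3] * b[2],
    a[3] * b[1] - a[1] * b[3],
    a[1] * b[2] - a[2] * b[1])
}

cross3(c(1, 0, 0), c(0, 1, 0))  # 0 0 1 (right-hand rule)
```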

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4, 3))
-a
-b = torch_randn(c(4, 3))
-b
-torch_cross(a, b, dim=2)
-torch_cross(a, b)
-}
-#> torch_tensor
-#>  1.6620  4.7874 -4.1398
-#> -2.3831  0.6922  0.5992
-#>  0.8354  0.3776 -1.1564
-#> -0.7426  0.8224  1.7921
-#> [ CPUFloatType{4,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4, 3))
+a
+b = torch_randn(c(4, 3))
+b
+torch_cross(a, b, dim=2)
+torch_cross(a, b)
+}
+#> torch_tensor
+#>  0.9962  0.8935  0.5735
+#>  3.0543  0.1183  2.5553
+#>  1.5996  1.6032  0.5181
+#>  0.6109 -0.2408  0.0755
+#> [ CPUFloatType{4,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cummax.html b/dev/reference/torch_cummax.html index 5d80da25948c001e5af4581ff92e9d3b69eff8ed..aedd46cf65da62185b7811a9e50d9c9990993c7e 100644 --- a/dev/reference/torch_cummax.html +++ b/dev/reference/torch_cummax.html @@ -1,79 +1,18 @@ - - - - - - - -Cummax — torch_cummax • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cummax — torch_cummax • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cummax(self, dim)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to do the operation over

- -

cummax(input, dim) -> (Tensor, LongTensor)

+
+
torch_cummax(self, dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to do the operation over

+
+
+

cummax(input, dim) -> (Tensor, LongTensor)

@@ -215,68 +133,67 @@ location of each maximum value found in the dimension dim.

$$ y_i = max(x_1, x_2, x_3, \dots, x_i) $$
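Base R's `cummax()` computes the same running maximum; the index tensor that `torch_cummax()` also returns can be sketched with `which.max()` (illustrative; indices here are 1-based R positions):

```r
# Running maximum and the position where each running maximum occurred
a <- c(-0.3, 1.2, 0.5, 2.0, 1.9)
cummax(a)                                            # -0.3 1.2 1.2 2.0 2.0
sapply(seq_along(a), function(i) which.max(a[1:i]))  # 1 2 2 4 4
```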

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(10))
-a
-torch_cummax(a, dim=1)
-}
-#> [[1]]
-#> torch_tensor
-#> -0.3404
-#> -0.3404
-#> -0.3404
-#>  1.2517
-#>  1.2517
-#>  1.2517
-#>  1.2517
-#>  1.2517
-#>  1.2517
-#>  1.2517
-#> [ CPUFloatType{10} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  0
-#>  0
-#>  0
-#>  3
-#>  3
-#>  3
-#>  3
-#>  3
-#>  3
-#>  3
-#> [ CPULongType{10} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(10))
+a
+torch_cummax(a, dim=1)
+}
+#> [[1]]
+#> torch_tensor
+#>  0.9719
+#>  0.9719
+#>  0.9719
+#>  0.9719
+#>  0.9719
+#>  2.4032
+#>  2.4032
+#>  2.4032
+#>  2.4032
+#>  2.4032
+#> [ CPUFloatType{10} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  0
+#>  0
+#>  0
+#>  0
+#>  0
+#>  5
+#>  5
+#>  5
+#>  5
+#>  5
+#> [ CPULongType{10} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cummin.html b/dev/reference/torch_cummin.html index dbe981b58caf499558307aefd9dd127a86471364..38db4661239231473719b0761a249f5462296a65 100644 --- a/dev/reference/torch_cummin.html +++ b/dev/reference/torch_cummin.html @@ -1,79 +1,18 @@ - - - - - - - -Cummin — torch_cummin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cummin — torch_cummin • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cummin(self, dim)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to do the operation over

- -

cummin(input, dim) -> (Tensor, LongTensor)

+
+
torch_cummin(self, dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to do the operation over

+
+
+

cummin(input, dim) -> (Tensor, LongTensor)

@@ -215,68 +133,67 @@ location of each minimum value found in the dimension dim.

$$ y_i = min(x_1, x_2, x_3, \dots, x_i) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(10))
-a
-torch_cummin(a, dim=1)
-}
-#> [[1]]
-#> torch_tensor
-#> -0.6391
-#> -1.2281
-#> -1.2281
-#> -1.2281
-#> -1.3487
-#> -1.3487
-#> -1.3487
-#> -1.3487
-#> -1.3487
-#> -1.3487
-#> [ CPUFloatType{10} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  0
-#>  1
-#>  1
-#>  1
-#>  4
-#>  4
-#>  4
-#>  4
-#>  4
-#>  4
-#> [ CPULongType{10} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(10))
+a
+torch_cummin(a, dim=1)
+}
+#> [[1]]
+#> torch_tensor
+#>  1.5906
+#> -0.5795
+#> -0.7169
+#> -0.7169
+#> -0.7169
+#> -0.7169
+#> -0.7169
+#> -1.2209
+#> -1.2209
+#> -1.2209
+#> [ CPUFloatType{10} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  0
+#>  1
+#>  2
+#>  2
+#>  2
+#>  2
+#>  2
+#>  7
+#>  7
+#>  7
+#> [ CPULongType{10} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cumprod.html b/dev/reference/torch_cumprod.html index 099e1e7d3b7491a389ba8c93c920dd9020126ee2..176a22f78dc4a4bb31b4cb18dc55acc907c3136f 100644 --- a/dev/reference/torch_cumprod.html +++ b/dev/reference/torch_cumprod.html @@ -1,79 +1,18 @@ - - - - - - - -Cumprod — torch_cumprod • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cumprod — torch_cumprod • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cumprod(self, dim, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to do the operation over

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

- -

cumprod(input, dim, out=NULL, dtype=NULL) -> Tensor

+
+
torch_cumprod(self, dim, dtype = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to do the operation over

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

+
+
+

cumprod(input, dim, out=NULL, dtype=NULL) -> Tensor

@@ -220,52 +136,51 @@ a vector of size N, with elements.

$$ y_i = x_1 \times x_2\times x_3\times \dots \times x_i $$
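Base R's `cumprod()` implements the same running product, which makes the formula easy to verify on small inputs (illustrative):

```r
# y_i is the product of the first i elements
cumprod(c(2, 3, 0.5, -1))  # 2 6 3 -3
```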

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(10))
-a
-torch_cumprod(a, dim=1)
-}
-#> torch_tensor
-#>  7.4728e-01
-#> -1.6046e+00
-#> -3.8913e-01
-#>  3.3774e-01
-#> -5.0653e-03
-#> -5.6490e-03
-#>  1.8383e-04
-#> -2.7820e-05
-#> -8.6142e-06
-#> -1.6616e-05
-#> [ CPUFloatType{10} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(10))
+a
+torch_cumprod(a, dim=1)
+}
+#> torch_tensor
+#> -0.0460
+#>  0.1204
+#>  0.0138
+#>  0.0022
+#>  0.0026
+#> -0.0002
+#>  0.0002
+#> -0.0001
+#> -0.0000
+#>  0.0000
+#> [ CPUFloatType{10} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_cumsum.html b/dev/reference/torch_cumsum.html index 0f6cf653a4f959e0f42df85b383e267940d272f2..0ad4b5f0423b88cfe44058ab0ecf3cd54868b0da 100644 --- a/dev/reference/torch_cumsum.html +++ b/dev/reference/torch_cumsum.html @@ -1,79 +1,18 @@ - - - - - - - -Cumsum — torch_cumsum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Cumsum — torch_cumsum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_cumsum(self, dim, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to do the operation over

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

- -

cumsum(input, dim, out=NULL, dtype=NULL) -> Tensor

+
+
torch_cumsum(self, dim, dtype = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to do the operation over

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

+
+
+

cumsum(input, dim, out=NULL, dtype=NULL) -> Tensor

@@ -220,52 +136,51 @@ a vector of size N, with elements.

$$ y_i = x_1 + x_2 + x_3 + \dots + x_i $$
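Base R's `cumsum()` implements the same running sum (illustrative):

```r
# y_i is the sum of the first i elements
cumsum(c(1, -2, 3, 4))  # 1 -1 2 6
```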

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(10))
-a
-torch_cumsum(a, dim=1)
-}
-#> torch_tensor
-#> -0.7437
-#> -1.7625
-#> -2.3490
-#> -1.0311
-#> -1.6541
-#> -0.5724
-#>  0.2588
-#>  0.8501
-#>  0.8480
-#>  0.8757
-#> [ CPUFloatType{10} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(10))
+a
+torch_cumsum(a, dim=1)
+}
+#> torch_tensor
+#>  1.0243
+#>  2.7605
+#>  1.7986
+#>  0.7469
+#>  0.4758
+#>  1.7117
+#>  0.0738
+#> -0.2902
+#> -0.4753
+#> -0.1214
+#> [ CPUFloatType{10} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_deg2rad.html b/dev/reference/torch_deg2rad.html index ef9d4ef027305cac2358dcc36e10b1f475269e39..e8ae8ded0ec13900f681716443b205bee8f69b5a 100644 --- a/dev/reference/torch_deg2rad.html +++ b/dev/reference/torch_deg2rad.html @@ -1,79 +1,18 @@ - - - - - - - -Deg2rad — torch_deg2rad • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Deg2rad — torch_deg2rad • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_deg2rad(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

deg2rad(input, *, out=None) -> Tensor

+
+
torch_deg2rad(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

deg2rad(input, *, out=None) -> Tensor

Returns a new tensor with each of the elements of input converted from angles in degrees to radians.
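The conversion is simply `degrees * pi / 180`; the example's values can be checked in base R (illustrative only):

```r
# Degrees-to-radians conversion, matching the torch_deg2rad() example values
deg <- c(180, -180, 360, 90)
deg * pi / 180  # 3.1416 -3.1416 6.2832 1.5708
```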

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(rbind(c(180.0, -180.0), c(360.0, -360.0), c(90.0, -90.0)))
-torch_deg2rad(a)
-}
-#> torch_tensor
-#>  3.1416 -3.1416
-#>  6.2832 -6.2832
-#>  1.5708 -1.5708
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(rbind(c(180.0, -180.0), c(360.0, -360.0), c(90.0, -90.0)))
+torch_deg2rad(a)
+}
+#> torch_tensor
+#>  3.1416 -3.1416
+#>  6.2832 -6.2832
+#>  1.5708 -1.5708
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_dequantize.html b/dev/reference/torch_dequantize.html index 4c40c078daccddd040a23ac05b33fa610c72f28c..c54b400714ce8346a0e0009ed711dec7dfd8c745 100644 --- a/dev/reference/torch_dequantize.html +++ b/dev/reference/torch_dequantize.html @@ -1,79 +1,18 @@ - - - - - - - -Dequantize — torch_dequantize • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dequantize — torch_dequantize • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,55 +111,51 @@

Dequantize

-
torch_dequantize(tensor)
- -

Arguments

- - - - - - -
tensor

(Tensor) A quantized Tensor or a list oof quantized tensors

- -

dequantize(tensor) -> Tensor

+
+
torch_dequantize(tensor)
+
+
+

Arguments

+
tensor
+

(Tensor) A quantized Tensor or a list of quantized tensors

+
+
+

dequantize(tensor) -> Tensor

Returns an fp32 Tensor by dequantizing a quantized Tensor

-

dequantize(tensors) -> sequence of Tensors

- +
+
+

dequantize(tensors) -> sequence of Tensors

Given a list of quantized Tensors, dequantize them and return a list of fp32 Tensors

+
+
-
- +
- - + + diff --git a/dev/reference/torch_det.html b/dev/reference/torch_det.html index 2458f2799a2448e6f75af0be0d917d711dba7879..8f5dce3d087c29bc2bb1f27377566722856583b4 100644 --- a/dev/reference/torch_det.html +++ b/dev/reference/torch_det.html @@ -1,79 +1,18 @@ - - - - - - - -Det — torch_det • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Det — torch_det • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_det(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

- -

Note

+
+
torch_det(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

+
+
+

Note

-
Backward through `det` internally uses SVD results when `input` is
+
Backward through `det` internally uses SVD results when `input` is
 not invertible. In this case, double backward through `det` will be
 unstable when `input` doesn't have distinct singular values. See
 `~torch.svd` for details.
-
- -

det(input) -> Tensor

+
+
+
+

det(input) -> Tensor

Calculates determinant of a square matrix or batches of square matrices.

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_randn(c(3, 3))
-torch_det(A)
-A = torch_randn(c(3, 2, 2))
-A
-A$det()
-}
-#> torch_tensor
-#> -0.7524
-#>  3.0036
-#> -0.7466
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_randn(c(3, 3))
+torch_det(A)
+A = torch_randn(c(3, 2, 2))
+A
+A$det()
+}
+#> torch_tensor
+#>  2.1413
+#> -1.0651
+#>  2.5635
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_device.html b/dev/reference/torch_device.html index e26504dc6f6cedbd8fa9897153aaf47d7c73e509..9849772563779d764fa0dc467a21058f98c79939 100644 --- a/dev/reference/torch_device.html +++ b/dev/reference/torch_device.html @@ -1,80 +1,19 @@ - - - - - - - -Create a Device object — torch_device • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Create a Device object — torch_device • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,67 +113,61 @@ is or will be allocated." /> is or will be allocated.

-
torch_device(type, index = NULL)
+
+
torch_device(type, index = NULL)
+
-

Arguments

- - - - - - - - - - -
type

(character) a device type "cuda" or "cpu"

index

(integer) optional device ordinal for the device type. If the device ordinal +

+

Arguments

+
type
+

(character) a device type "cuda" or "cpu"

+
index
+

(integer) optional device ordinal for the device type. If the device ordinal is not present, this object will always represent the current device for the device type, even after torch_cuda_set_device() is called; e.g., a torch_tensor constructed with device 'cuda' is equivalent to 'cuda:X' where X is the result of torch_cuda_current_device().

-

A torch_device can be constructed via a string or via a string and device ordinal

- - -

Examples

-
if (torch_is_installed()) {
-
-# Via string
-torch_device("cuda:1")
-torch_device("cpu")
-torch_device("cuda") # current cuda device
-
-# Via string and device ordinal
-torch_device("cuda", 0)
-torch_device("cpu", 0)
-
-}
-#> torch_device(type='cpu', index=0)
-
+

A torch_device can be constructed via a string or via a string and device ordinal

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+# Via string
+torch_device("cuda:1")
+torch_device("cpu")
+torch_device("cuda") # current cuda device
+
+# Via string and device ordinal
+torch_device("cuda", 0)
+torch_device("cpu", 0)
+
+}
+#> torch_device(type='cpu', index=0)
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_diag.html b/dev/reference/torch_diag.html index 565462946879931ba22dec31428ea876a2776011..fa26b59b9614ce0a92adc2ce003aa5e8428a6cd0 100644 --- a/dev/reference/torch_diag.html +++ b/dev/reference/torch_diag.html @@ -1,79 +1,18 @@ - - - - - - - -Diag — torch_diag • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Diag — torch_diag • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_diag(self, diagonal = 0L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

diagonal

(int, optional) the diagonal to consider

- -

diag(input, diagonal=0, out=NULL) -> Tensor

+
+
torch_diag(self, diagonal = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
diagonal
+

(int, optional) the diagonal to consider

+
+
+

diag(input, diagonal=0, out=NULL) -> Tensor

-
    -
  • If input is a vector (1-D tensor), then returns a 2-D square tensor +

    • If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal.

    • If input is a matrix (2-D tensor), then returns a 1-D tensor with the diagonal elements of input.

    • -
    - -

    The argument diagonal controls which diagonal to consider:

      -
    • If diagonal = 0, it is the main diagonal.

    • +

    The argument diagonal controls which diagonal to consider:

    • If diagonal = 0, it is the main diagonal.

    • If diagonal > 0, it is above the main diagonal.

    • If diagonal < 0, it is below the main diagonal.

    • -
    - +
+
-
- +
- - + + diff --git a/dev/reference/torch_diag_embed.html b/dev/reference/torch_diag_embed.html index f7e5f316075d18465cbc397633e33cb9eaf15e60..fb735e238c00aca8f9a2e8e12e11546211e5a3d1 100644 --- a/dev/reference/torch_diag_embed.html +++ b/dev/reference/torch_diag_embed.html @@ -1,79 +1,18 @@ - - - - - - - -Diag_embed — torch_diag_embed • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Diag_embed — torch_diag_embed • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,31 +111,23 @@

Diag_embed

-
torch_diag_embed(self, offset = 0L, dim1 = -2L, dim2 = -1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor. Must be at least 1-dimensional.

offset

(int, optional) which diagonal to consider. Default: 0 (main diagonal).

dim1

(int, optional) first dimension with respect to which to take diagonal. Default: -2.

dim2

(int, optional) second dimension with respect to which to take diagonal. Default: -1.

- -

diag_embed(input, offset=0, dim1=-2, dim2=-1) -> Tensor

+
+
torch_diag_embed(self, offset = 0L, dim1 = -2L, dim2 = -1L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor. Must be at least 1-dimensional.

+
offset
+

(int, optional) which diagonal to consider. Default: 0 (main diagonal).

+
dim1
+

(int, optional) first dimension with respect to which to take diagonal. Default: -2.

+
dim2
+

(int, optional) second dimension with respect to which to take diagonal. Default: -1.

+
+
+

diag_embed(input, offset=0, dim1=-2, dim2=-1) -> Tensor

@@ -221,13 +135,10 @@ dim1 and dim2) are filled by input. To facilitate creating batched diagonal matrices, the 2D planes formed by the last two dimensions of the returned tensor are chosen by default.

-

The argument offset controls which diagonal to consider:

    -
  • If offset = 0, it is the main diagonal.

  • +

    The argument offset controls which diagonal to consider:

    • If offset = 0, it is the main diagonal.

    • If offset > 0, it is above the main diagonal.

    • If offset < 0, it is below the main diagonal.

    • -
    - -

    The size of the new matrix will be calculated to make the specified diagonal +

The size of the new matrix will be calculated to make the specified diagonal of the size of the last input dimension. Note that for offset other than \(0\), the order of dim1 and dim2 matters. Exchanging them is equivalent to changing the @@ -236,57 +147,56 @@ sign of offset.

the same arguments yields a matrix identical to input. However, torch_diagonal has different default dimensions, so those need to be explicitly specified.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(2, 3))
-torch_diag_embed(a)
-torch_diag_embed(a, offset=1, dim1=1, dim2=3)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   0.0000  0.1475  0.0000  0.0000
-#>   0.0000  0.6404  0.0000  0.0000
-#> 
-#> (2,.,.) = 
-#>   0.0000  0.0000  1.2464  0.0000
-#>   0.0000  0.0000 -1.0443  0.0000
-#> 
-#> (3,.,.) = 
-#>   0.0000  0.0000  0.0000  0.2586
-#>   0.0000  0.0000  0.0000  1.2482
-#> 
-#> (4,.,.) = 
-#>   0  0  0  0
-#>   0  0  0  0
-#> [ CPUFloatType{4,2,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(2, 3))
+torch_diag_embed(a)
+torch_diag_embed(a, offset=1, dim1=1, dim2=3)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   0.0000 -0.9834  0.0000  0.0000
+#>   0.0000  1.4305  0.0000  0.0000
+#> 
+#> (2,.,.) = 
+#>   0.0000  0.0000  0.8077  0.0000
+#>   0.0000  0.0000 -0.6973  0.0000
+#> 
+#> (3,.,.) = 
+#>   0.0000  0.0000  0.0000 -0.9459
+#>   0.0000  0.0000  0.0000  0.9271
+#> 
+#> (4,.,.) = 
+#>   0  0  0  0
+#>   0  0  0  0
+#> [ CPUFloatType{4,2,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_diagflat.html b/dev/reference/torch_diagflat.html index 28761e3551e50d5cc7e54162a0eb50e2b42781e7..cb2a1304f8a38cefcb8e0f862cbae3a00d436111 100644 --- a/dev/reference/torch_diagflat.html +++ b/dev/reference/torch_diagflat.html @@ -1,79 +1,18 @@ - - - - - - - -Diagflat — torch_diagflat • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Diagflat — torch_diagflat • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,82 +111,71 @@

Diagflat

-
torch_diagflat(self, offset = 0L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

offset

(int, optional) the diagonal to consider. Default: 0 (main diagonal).

- -

diagflat(input, offset=0) -> Tensor

+
+
torch_diagflat(self, offset = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
offset
+

(int, optional) the diagonal to consider. Default: 0 (main diagonal).

+
+
+

diagflat(input, offset=0) -> Tensor

-
    -
  • If input is a vector (1-D tensor), then returns a 2-D square tensor +

    • If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal.

    • If input is a tensor with more than one dimension, then returns a 2-D tensor with diagonal elements equal to a flattened input.

    • -
    - -

    The argument offset controls which diagonal to consider:

      -
    • If offset = 0, it is the main diagonal.

    • +

    The argument offset controls which diagonal to consider:

    • If offset = 0, it is the main diagonal.

    • If offset > 0, it is above the main diagonal.

    • If offset < 0, it is below the main diagonal.

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -a = torch_randn(c(3))
    -a
    -torch_diagflat(a)
    -torch_diagflat(a, 1)
    -a = torch_randn(c(2, 2))
    -a
    -torch_diagflat(a)
    -}
    -#> torch_tensor
    -#>  0.2683  0.0000  0.0000  0.0000
    -#>  0.0000 -0.6468  0.0000  0.0000
    -#>  0.0000  0.0000  0.1648  0.0000
    -#>  0.0000  0.0000  0.0000  1.3355
    -#> [ CPUFloatType{4,4} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3))
+a
+torch_diagflat(a)
+torch_diagflat(a, 1)
+a = torch_randn(c(2, 2))
+a
+torch_diagflat(a)
+}
+#> torch_tensor
+#>  0.5539  0.0000  0.0000  0.0000
+#>  0.0000  1.0772  0.0000  0.0000
+#>  0.0000  0.0000 -0.2999  0.0000
+#>  0.0000  0.0000  0.0000 -2.5768
+#> [ CPUFloatType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_diagonal.html b/dev/reference/torch_diagonal.html index 50e2f0ee6a706d2bac4e5bda157ad016e672644c..14608a2735a86827a2f366f87714e57fbafcf073 100644 --- a/dev/reference/torch_diagonal.html +++ b/dev/reference/torch_diagonal.html @@ -1,79 +1,18 @@ - - - - - - - -Diagonal — torch_diagonal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Diagonal — torch_diagonal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,105 +111,92 @@

Diagonal

-
torch_diagonal(self, outdim, dim1 = 1L, dim2 = 2L, offset = 0L)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor. Must be at least 2-dimensional.

outdim

dimension name if self is a named tensor.

dim1

(int, optional) first dimension with respect to which to take diagonal. Default: 0.

dim2

(int, optional) second dimension with respect to which to take diagonal. Default: 1.

offset

(int, optional) which diagonal to consider. Default: 0 (main diagonal).

- -

diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor

+
+
torch_diagonal(self, outdim, dim1 = 1L, dim2 = 2L, offset = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor. Must be at least 2-dimensional.

+
outdim
+

dimension name if self is a named tensor.

+
dim1
+

(int, optional) first dimension with respect to which to take diagonal. Default: 0.

+
dim2
+

(int, optional) second dimension with respect to which to take diagonal. Default: 1.

+
offset
+

(int, optional) which diagonal to consider. Default: 0 (main diagonal).

+
+
+

diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor

Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape.

-

The argument offset controls which diagonal to consider:

    -
  • If offset = 0, it is the main diagonal.

  • +

    The argument offset controls which diagonal to consider:

    • If offset = 0, it is the main diagonal.

    • If offset > 0, it is above the main diagonal.

    • If offset < 0, it is below the main diagonal.

    • -
    - -

    Applying torch_diag_embed to the output of this function with +

Applying torch_diag_embed to the output of this function with the same arguments yields a diagonal matrix with the diagonal entries of the input. However, torch_diag_embed has different default dimensions, so those need to be explicitly specified.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-a
-torch_diagonal(a, offset = 0)
-torch_diagonal(a, offset = 1)
-x = torch_randn(c(2, 5, 4, 2))
-torch_diagonal(x, offset=-1, dim1=1, dim2=2)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   0.3008
-#>  -0.1631
-#> 
-#> (2,.,.) = 
-#>   0.6683
-#>  -1.5185
-#> 
-#> (3,.,.) = 
-#>   1.3402
-#>  -0.8849
-#> 
-#> (4,.,.) = 
-#>  -0.6020
-#>   0.3549
-#> [ CPUFloatType{4,2,1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+a
+torch_diagonal(a, offset = 0)
+torch_diagonal(a, offset = 1)
+x = torch_randn(c(2, 5, 4, 2))
+torch_diagonal(x, offset=-1, dim1=1, dim2=2)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  0.01 *
+#>  -5.6102
+#>   -170.4098
+#> 
+#> (2,.,.) = 
+#>   1.7879
+#>   0.4863
+#> 
+#> (3,.,.) = 
+#>  -0.6161
+#>  -0.1745
+#> 
+#> (4,.,.) = 
+#>   0.0258
+#>   0.5843
+#> [ CPUFloatType{4,2,1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_diff.html b/dev/reference/torch_diff.html index 7696f04c2b90adbe3c0723d1a3972851f64e34ca..065487e890ef641506cf6ad923a39bc58f50ba14 100644 --- a/dev/reference/torch_diff.html +++ b/dev/reference/torch_diff.html @@ -1,80 +1,19 @@ - - - - - - - -Computes the n-th forward difference along the given dimension. — torch_diff • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Computes the n-th forward difference along the given dimension. — torch_diff • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,84 +113,73 @@ Higher-order differences are calculated by using torch_diff() recursively." /> Higher-order differences are calculated by using torch_diff() recursively.

-
torch_diff(self, n = 1L, dim = -1L, prepend = list(), append = list())
+
+
torch_diff(self, n = 1L, dim = -1L, prepend = list(), append = list())
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

the tensor to compute the differences on

n

the number of times to recursively compute the difference

dim

the dimension to compute the difference along. Default is the last dimension.

prepend

values to prepend to input along dim before computing the +

+

Arguments

+
self
+

the tensor to compute the differences on

+
n
+

the number of times to recursively compute the difference

+
dim
+

the dimension to compute the difference along. Default is the last dimension.

+
prepend
+

values to prepend to input along dim before computing the difference. Their dimensions must be equivalent to that of input, and their -shapes must match input’s shape except on dim.

append

values to append to input along dim before computing the +shapes must match input’s shape except on dim.

+
append
+

values to append to input along dim before computing the difference. Their dimensions must be equivalent to that of input, and their -shapes must match input’s shape except on dim.

- -

Note

- +shapes must match input’s shape except on dim.

+
+
+

Note

Only n = 1 is currently supported

+
-

Examples

-
if (torch_is_installed()) {
-a <- torch_tensor(c(1,2,3))
-torch_diff(a)
-
-b <- torch_tensor(c(4, 5))
-torch_diff(a, append = b)
-
-c <- torch_tensor(rbind(c(1,2,3), c(3,4,5)))
-torch_diff(c, dim = 1)
-torch_diff(c, dim = 2) 
-
-}
-#> torch_tensor
-#>  1  1
-#>  1  1
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+a <- torch_tensor(c(1,2,3))
+torch_diff(a)
+
+b <- torch_tensor(c(4, 5))
+torch_diff(a, append = b)
+
+c <- torch_tensor(rbind(c(1,2,3), c(3,4,5)))
+torch_diff(c, dim = 1)
+torch_diff(c, dim = 2) 
+
+}
+#> torch_tensor
+#>  1  1
+#>  1  1
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_digamma.html b/dev/reference/torch_digamma.html index a8934f49b5e6fb8a3f53b973163aeebd608c5654..d9989c2e22d80907fb11a1e9d09d272f2f5f0366 100644 --- a/dev/reference/torch_digamma.html +++ b/dev/reference/torch_digamma.html @@ -1,79 +1,18 @@ - - - - - - - -Digamma — torch_digamma • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Digamma — torch_digamma • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_digamma(self)
- -

Arguments

- - - - - - -
self

(Tensor) the tensor to compute the digamma function on

- -

digamma(input, out=NULL) -> Tensor

+
+
torch_digamma(self)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compute the digamma function on

+
+
+

digamma(input, out=NULL) -> Tensor

@@ -209,43 +129,42 @@

$$ \psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_tensor(c(1, 0.5))
-torch_digamma(a)
-}
-#> torch_tensor
-#> -0.5772
-#> -1.9635
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_tensor(c(1, 0.5))
+torch_digamma(a)
+}
+#> torch_tensor
+#> -0.5772
+#> -1.9635
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_dist.html b/dev/reference/torch_dist.html index 2fe75f1ef219a3d8aee82e8832982a5f158d0743..bfd7186b32f28bb49185d30f9630de7b9977e749 100644 --- a/dev/reference/torch_dist.html +++ b/dev/reference/torch_dist.html @@ -1,79 +1,18 @@ - - - - - - - -Dist — torch_dist • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dist — torch_dist • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_dist(self, other, p = 2L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the Right-hand-side input tensor

p

(float, optional) the norm to be computed

- -

dist(input, other, p=2) -> Tensor

+
+
torch_dist(self, other, p = 2L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the right-hand side input tensor

+
p
+

(float, optional) the norm to be computed

+
+
+

dist(input, other, p=2) -> Tensor

Returns the p-norm of (input - other)

The shapes of input and other must be broadcastable .

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(4))
-x
-y = torch_randn(c(4))
-y
-torch_dist(x, y, 3.5)
-torch_dist(x, y, 3)
-torch_dist(x, y, 0)
-torch_dist(x, y, 1)
-}
-#> torch_tensor
-#> 1.87642
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(4))
+x
+y = torch_randn(c(4))
+y
+torch_dist(x, y, 3.5)
+torch_dist(x, y, 3)
+torch_dist(x, y, 0)
+torch_dist(x, y, 1)
+}
+#> torch_tensor
+#> 7.78286
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_div.html b/dev/reference/torch_div.html index 8dd32a13bb3969a7c875aa51815441ebd028d715..e9f78f5a7694a65f332cf1073a86895b8ad7e587 100644 --- a/dev/reference/torch_div.html +++ b/dev/reference/torch_div.html @@ -1,79 +1,18 @@ - - - - - - - -Div — torch_div • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Div — torch_div • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_div(self, other, rounding_mode)
+
+
torch_div(self, other, rounding_mode)
+
-

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Number) the number to be divided to each element of input

rounding_mode

(str, optional) – Type of rounding applied to the result:

    -
  • NULL - default behavior. Performs no rounding and, if both input and +

    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor.

    +
    other
    +

(Number) the value to divide each element of input by

    +
    rounding_mode
    +

    (str, optional) – Type of rounding applied to the result:

    • NULL - default behavior. Performs no rounding and, if both input and other are integer types, promotes the inputs to the default scalar type. Equivalent to true division in Python (the / operator) and NumPy’s np.true_divide.

    • @@ -213,12 +130,10 @@ Equivalent to true division in Python (the / operator) and NumPy’s C-style integer division.

    • "floor" - rounds the results of the division down. Equivalent to floor division in Python (the // operator) and NumPy’s np.floor_divide.

    • -
- -

div(input, other, out=NULL) -> Tensor

- + +
+
+

div(input, other, out=NULL) -> Tensor

@@ -238,13 +153,14 @@ following rules described in the type promotion documentation . If out is specified, the result must be castable to the torch_dtype of the specified output tensor. Integral division by zero leads to undefined behavior.

-

Warning

- +
+
+

Warning

Integer division using div is deprecated, and in a future release div will -perform true division like torch_true_divide(). -Use torch_floor_divide() to perform integer division, +perform true division like torch_true_divide(). +Use torch_floor_divide() to perform integer division, instead.

$$ \mbox{out}_i = \frac{\mbox{input}_i}{\mbox{other}} @@ -255,53 +171,52 @@ described in the type promotion documentation . If out is specified, the result must be castable to the torch_dtype of the specified output tensor. Integral division by zero leads to undefined behavior.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5))
-a
-torch_div(a, 0.5)
-
-
-a = torch_randn(c(4, 4))
-a
-b = torch_randn(c(4))
-b
-torch_div(a, b)
-}
-#> torch_tensor
-#>   1.3734   0.7856   0.3981 -11.5037
-#>  -1.4715  -1.4632   0.6354  16.9294
-#>  -4.8421   0.2883  -1.0024  -7.8869
-#>   2.0493  -0.4843   0.4296 -19.8034
-#> [ CPUFloatType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5))
+a
+torch_div(a, 0.5)
+
+
+a = torch_randn(c(4, 4))
+a
+b = torch_randn(c(4))
+b
+torch_div(a, b)
+}
+#> torch_tensor
+#> -21.0201  -1.1992  -0.5805  -1.8830
+#> -49.4112  -1.0154  -2.6516   1.1126
+#>  22.5517  -1.3136  -1.4411  -3.0914
+#>  -2.2162   1.9874   0.3000   0.4237
+#> [ CPUFloatType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_divide.html b/dev/reference/torch_divide.html index 2e148177a2e465b7087ce35364fdec6ee3f17cb9..d0ef796ac99221403274415a5395b9f9ed90835a 100644 --- a/dev/reference/torch_divide.html +++ b/dev/reference/torch_divide.html @@ -1,79 +1,18 @@ - - - - - - - -Divide — torch_divide • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Divide — torch_divide • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_divide(self, other, rounding_mode)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Number) the number to be divided to each element of input

rounding_mode

(str, optional) – Type of rounding applied to the result:

    -
  • NULL - default behavior. Performs no rounding and, if both input and +

    +
    torch_divide(self, other, rounding_mode)
    +
    + +
    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor.

    +
    other
    +

(Number) the value to divide each element of input by

    +
    rounding_mode
    +

    (str, optional) – Type of rounding applied to the result:

    • NULL - default behavior. Performs no rounding and, if both input and other are integer types, promotes the inputs to the default scalar type. Equivalent to true division in Python (the / operator) and NumPy’s np.true_divide.

    • @@ -213,42 +130,37 @@ Equivalent to true division in Python (the / operator) and NumPy’s C-style integer division.

    • "floor" - rounds the results of the division down. Equivalent to floor division in Python (the // operator) and NumPy’s np.floor_divide.

    • -
- -

divide(input, other, *, out=None) -> Tensor

- + +
+
+

divide(input, other, *, out=None) -> Tensor

-

Alias for torch_div().

+

Alias for torch_div().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_dot.html b/dev/reference/torch_dot.html index b4a06f9ca72dc0e79fe737e72299609c4f77ce0c..710bf6dd10c20b5b36ea02efbeb052a1e1456a8e 100644 --- a/dev/reference/torch_dot.html +++ b/dev/reference/torch_dot.html @@ -1,79 +1,18 @@ - - - - - - - -Dot — torch_dot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dot — torch_dot • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_dot(self, tensor)
- -

Arguments

- - - - - - - - - - -
self

the input tensor

tensor

the other input tensor

- -

Note

+
+
torch_dot(self, tensor)
+
+
+

Arguments

+
self
+

the input tensor

+
tensor
+

the other input tensor

+
+
+

Note

This function does not broadcast .

-

dot(input, tensor) -> Tensor

- +
+
+

dot(input, tensor) -> Tensor

Computes the dot product (inner product) of two tensors.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_dot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1)))
-}
-#> torch_tensor
-#> 7
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_dot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1)))
+}
+#> torch_tensor
+#> 7
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_dstack.html b/dev/reference/torch_dstack.html index 9089a6d56d5144e0a51b6e429163b4ee7194471b..d3e6f5b39462583655230bbaeb6c99d7312b4c4f 100644 --- a/dev/reference/torch_dstack.html +++ b/dev/reference/torch_dstack.html @@ -1,79 +1,18 @@ - - - - - - - -Dstack — torch_dstack • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dstack — torch_dstack • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_dstack(tensors)
- -

Arguments

- - - - - - -
tensors

(sequence of Tensors) sequence of tensors to concatenate

- -

dstack(tensors, *, out=None) -> Tensor

+
+
torch_dstack(tensors)
+
+
+

Arguments

+
tensors
+

(sequence of Tensors) sequence of tensors to concatenate

+
+
+

dstack(tensors, *, out=None) -> Tensor

Stack tensors in sequence depthwise (along third axis).

This is equivalent to concatenation along the third axis after 1-D and 2-D -tensors have been reshaped by torch_atleast_3d().

+tensors have been reshaped by torch_atleast_3d().

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2, 3))
-b <- torch_tensor(c(4, 5, 6))
-torch_dstack(list(a,b))
-a <- torch_tensor(rbind(1,2,3))
-b <- torch_tensor(rbind(4,5,6))
-torch_dstack(list(a,b))
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   1  4
-#> 
-#> (2,.,.) = 
-#>   2  5
-#> 
-#> (3,.,.) = 
-#>   3  6
-#> [ CPUFloatType{3,1,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2, 3))
+b <- torch_tensor(c(4, 5, 6))
+torch_dstack(list(a,b))
+a <- torch_tensor(rbind(1,2,3))
+b <- torch_tensor(rbind(4,5,6))
+torch_dstack(list(a,b))
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   1  4
+#> 
+#> (2,.,.) = 
+#>   2  5
+#> 
+#> (3,.,.) = 
+#>   3  6
+#> [ CPUFloatType{3,1,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_dtype.html b/dev/reference/torch_dtype.html index aaa7035dd2b740d49cd7cca0fff56c647333d6ef..aa1b7f4739fe69e8f2bf1991abf9f1e6de9735d1 100644 --- a/dev/reference/torch_dtype.html +++ b/dev/reference/torch_dtype.html @@ -1,79 +1,18 @@ - - - - - - - -Torch data types — torch_dtype • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Torch data types — torch_dtype • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,69 +111,66 @@

Returns the correspondent data type.

-
torch_float32()
+    
+
torch_float32()
 
-torch_float()
+torch_float()
 
-torch_float64()
+torch_float64()
 
-torch_double()
+torch_double()
 
-torch_float16()
+torch_float16()
 
-torch_half()
+torch_half()
 
-torch_uint8()
+torch_uint8()
 
-torch_int8()
+torch_int8()
 
-torch_int16()
+torch_int16()
 
-torch_short()
+torch_short()
 
-torch_int32()
+torch_int32()
 
-torch_int()
+torch_int()
 
-torch_int64()
+torch_int64()
 
-torch_long()
+torch_long()
 
-torch_bool()
+torch_bool()
 
-torch_quint8()
+torch_quint8()
 
-torch_qint8()
-
-torch_qint32()
+torch_qint8() +torch_qint32()
+
+ -
- +
- - + + diff --git a/dev/reference/torch_eig.html b/dev/reference/torch_eig.html index e6dd83566fcd0dc9067227cf5b5cb78c721b1a70..926670a615bc634d7ce3f0c76773a19cba32183e 100644 --- a/dev/reference/torch_eig.html +++ b/dev/reference/torch_eig.html @@ -1,79 +1,18 @@ - - - - - - - -Eig — torch_eig • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Eig — torch_eig • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_eig(self, eigenvectors = FALSE)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the square matrix of shape \((n \times n)\) for which the eigenvalues and eigenvectors will be computed

eigenvectors

(bool) TRUE to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed

- -

Note

+
+
torch_eig(self, eigenvectors = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the square matrix of shape \((n \times n)\) for which the eigenvalues and eigenvectors will be computed

+
eigenvectors
+

(bool) TRUE to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed

+
+
+

Note

-
Since eigenvalues and eigenvectors might be complex, backward pass is supported only
+
Since eigenvalues and eigenvectors might be complex, the backward pass is supported only
 for [`torch_symeig`]
-
- -

eig(input, eigenvectors=False, out=NULL) -> (Tensor, Tensor)

+
+
+
+

eig(input, eigenvectors=False, out=NULL) -> (Tensor, Tensor)

Computes the eigenvalues and eigenvectors of a real square matrix.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_einsum.html b/dev/reference/torch_einsum.html index 969cf7c45104d152bb6c170a4317abe5a9f3e2c2..d7fd7aa867288af3f6a5b21640682144b57f66a7 100644 --- a/dev/reference/torch_einsum.html +++ b/dev/reference/torch_einsum.html @@ -1,79 +1,18 @@ - - - - - - - -Einsum — torch_einsum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Einsum — torch_einsum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_einsum(equation, tensors)
- -

Arguments

- - - - - - - - - - -
equation

(string) The equation is given in terms of lower case letters (indices) to be associated with each dimension of the operands and result. The left hand side lists the operands dimensions, separated by commas. There should be one index letter per tensor dimension. The right hand side follows after -> and gives the indices for the output. If the -> and right hand side are omitted, it implicitly defined as the alphabetically sorted list of all indices appearing exactly once in the left hand side. The indices not apprearing in the output are summed over after multiplying the operands entries. If an index appears several times for the same operand, a diagonal is taken. Ellipses ... represent a fixed number of dimensions. If the right hand side is inferred, the ellipsis dimensions are at the beginning of the output.

tensors

(Tensor) The operands to compute the Einstein sum of.

- -

einsum(equation, *operands) -> Tensor

+
+
torch_einsum(equation, tensors)
+
+
+

Arguments

+
equation
+

(string) The equation is given in terms of lower case letters (indices) to be associated with each dimension of the operands and result. The left hand side lists the operands' dimensions, separated by commas. There should be one index letter per tensor dimension. The right hand side follows after -> and gives the indices for the output. If the -> and right hand side are omitted, it is implicitly defined as the alphabetically sorted list of all indices appearing exactly once in the left hand side. The indices not appearing in the output are summed over after multiplying the operands' entries. If an index appears several times for the same operand, a diagonal is taken. Ellipses ... represent a fixed number of dimensions. If the right hand side is inferred, the ellipsis dimensions are at the beginning of the output.

+
tensors
+

(Tensor) The operands to compute the Einstein sum of.

+
+
+

einsum(equation, *operands) -> Tensor

This function provides a way of computing multilinear expressions (i.e. sums of products) using the Einstein summation convention.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(5))
-y = torch_randn(c(4))
-torch_einsum('i,j->ij', list(x, y))  # outer product
-A = torch_randn(c(3,5,4))
-l = torch_randn(c(2,5))
-r = torch_randn(c(2,4))
-torch_einsum('bn,anm,bm->ba', list(l, A, r)) # compare torch_nn$functional$bilinear
-As = torch_randn(c(3,2,5))
-Bs = torch_randn(c(3,5,4))
-torch_einsum('bij,bjk->bik', list(As, Bs)) # batch matrix multiplication
-A = torch_randn(c(3, 3))
-torch_einsum('ii->i', list(A)) # diagonal
-A = torch_randn(c(4, 3, 3))
-torch_einsum('...ii->...i', list(A)) # batch diagonal
-A = torch_randn(c(2, 3, 4, 5))
-torch_einsum('...ij->...ji', list(A))$shape # batch permute
-
-}
-#> [1] 2 3 5 4
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(5))
+y = torch_randn(c(4))
+torch_einsum('i,j->ij', list(x, y))  # outer product
+A = torch_randn(c(3,5,4))
+l = torch_randn(c(2,5))
+r = torch_randn(c(2,4))
+torch_einsum('bn,anm,bm->ba', list(l, A, r)) # compare torch_nn$functional$bilinear
+As = torch_randn(c(3,2,5))
+Bs = torch_randn(c(3,5,4))
+torch_einsum('bij,bjk->bik', list(As, Bs)) # batch matrix multiplication
+A = torch_randn(c(3, 3))
+torch_einsum('ii->i', list(A)) # diagonal
+A = torch_randn(c(4, 3, 3))
+torch_einsum('...ii->...i', list(A)) # batch diagonal
+A = torch_randn(c(2, 3, 4, 5))
+torch_einsum('...ij->...ji', list(A))$shape # batch permute
+
+}
+#> [1] 2 3 5 4
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_empty.html b/dev/reference/torch_empty.html index d073162bac9569b38fb54c52a0b1ab8a5dbd692b..14b498e36a8d1924d01ba22bca4ce71c70615641 100644 --- a/dev/reference/torch_empty.html +++ b/dev/reference/torch_empty.html @@ -1,79 +1,18 @@ - - - - - - - -Empty — torch_empty • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Empty — torch_empty • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_empty(
-  ...,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
...

a sequence of integers defining the shape of the output tensor.

names

optional character vector naming each dimension.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

empty(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

+
+
torch_empty(
+  ...,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
...
+

a sequence of integers defining the shape of the output tensor.

+
names
+

optional character vector naming each dimension.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

empty(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the variable argument size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_empty(c(2, 3))
-}
-#> torch_tensor
-#>  0  0  0
-#>  0  0  0
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_empty(c(2, 3))
+}
+#> torch_tensor
+#> -1.6713e+34  4.5879e-41  3.2762e-35
+#>  4.5880e-41  3.2762e-35  4.5880e-41
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_empty_like.html b/dev/reference/torch_empty_like.html index c316d1b94cceb5bbfbd0edbacfc9b7be3e3b983c..16c2a441b5c7ffa2165ced8e855c0a3d16724e16 100644 --- a/dev/reference/torch_empty_like.html +++ b/dev/reference/torch_empty_like.html @@ -1,79 +1,18 @@ - - - - - - - -Empty_like — torch_empty_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Empty_like — torch_empty_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,88 +111,75 @@

Empty_like

-
torch_empty_like(
-  input,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine size of the output tensor.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

empty_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

+
+
torch_empty_like(
+  input,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine size of the output tensor.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

empty_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

Returns an uninitialized tensor with the same size as input. torch_empty_like(input) is equivalent to torch_empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_empty(list(2,3), dtype = torch_int64())
-}
-#> torch_tensor
-#> -8.0705e+18 -6.9175e+18  1.4056e+14
-#>  1.4056e+14  1.4056e+14  0.0000e+00
-#> [ CPULongType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_empty(list(2,3), dtype = torch_int64())
+}
+#> torch_tensor
+#>  8.5899e+09  1.4062e+14  1.4062e+14
+#>  1.0000e+00  3.4360e+10  1.0000e+00
+#> [ CPULongType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_empty_strided.html b/dev/reference/torch_empty_strided.html index 380efc2b8b29deb5df0342fbab2ef0b8e4106f4c..d709c7ecb022cb152ce0338dccae80d61227455f 100644 --- a/dev/reference/torch_empty_strided.html +++ b/dev/reference/torch_empty_strided.html @@ -1,79 +1,18 @@ - - - - - - - -Empty_strided — torch_empty_strided • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Empty_strided — torch_empty_strided • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,51 +111,37 @@

Empty_strided

-
torch_empty_strided(
-  size,
-  stride,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  pin_memory = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
size

(tuple of ints) the shape of the output tensor

stride

(tuple of ints) the strides of the output tensor

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

pin_memory

(bool, optional) If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: FALSE.

- -

empty_strided(size, stride, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

+
+
torch_empty_strided(
+  size,
+  stride,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  pin_memory = FALSE
+)
+
+
+

Arguments

+
size
+

(tuple of ints) the shape of the output tensor

+
stride
+

(tuple of ints) the strides of the output tensor

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
pin_memory
+

(bool, optional) If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: FALSE.

+
+
+

empty_strided(size, stride, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, pin_memory=False) -> Tensor

@@ -241,50 +149,50 @@ defined by the variable argument size and stride respectively. torch_empty_strided(size, stride) is equivalent to torch_empty(size).as_strided(size, stride).

-

Warning

- +
+
+

Warning

More than one element of the created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
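To see the aliasing this warning describes, a stride of zero makes every row of the created tensor share the same storage (a hedged sketch added here for illustration, not from the original page; it assumes zero strides are accepted as in the underlying libtorch):

```r
library(torch)

# each of the 3 "rows" points at the same 4 elements of storage
a <- torch_empty_strided(list(3, 4), list(0, 1))
a[1, 1] <- 42
a[2, 1]  # the rows alias the same memory, so this reads the value just written
```

This is why in-place writes on such tensors can behave unexpectedly and the documentation recommends cloning before writing.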

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_empty_strided(list(2, 3), list(1, 2))
-a
-a$stride(1)
-a$size(1)
-}
-#> [1] 2
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_empty_strided(list(2, 3), list(1, 2))
+a
+a$stride(1)
+a$size(1)
+}
+#> [1] 2
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_eq.html b/dev/reference/torch_eq.html index 200c256d958582f09db66dcac8ba347973536357..8cd959be4eaad23a929849b04f235f7cf8ba6263 100644 --- a/dev/reference/torch_eq.html +++ b/dev/reference/torch_eq.html @@ -1,79 +1,18 @@ - - - - - - - -Eq — torch_eq • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Eq — torch_eq • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_eq(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare -Must be a ByteTensor

- -

eq(input, other, out=NULL) -> Tensor

+
+
torch_eq(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare +Must be a ByteTensor

+
+
+

eq(input, other, out=NULL) -> Tensor

Computes element-wise equality

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_eq(torch_tensor(c(1,2,3,4)), torch_tensor(c(1, 3, 2, 4)))
-}
-#> torch_tensor
-#>  1
-#>  0
-#>  0
-#>  1
-#> [ CPUBoolType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_eq(torch_tensor(c(1,2,3,4)), torch_tensor(c(1, 3, 2, 4)))
+}
+#> torch_tensor
+#>  1
+#>  0
+#>  0
+#>  1
+#> [ CPUBoolType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_equal.html b/dev/reference/torch_equal.html index 5ca70ed068c2585d3d94ee46a7135e682e8c8bbb..ff98182f010d1789bd66d399315eb43c7574ed37 100644 --- a/dev/reference/torch_equal.html +++ b/dev/reference/torch_equal.html @@ -1,79 +1,18 @@ - - - - - - - -Equal — torch_equal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Equal — torch_equal • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_equal(self, other)
- -

Arguments

- - - - - - - - - - -
self

the input tensor

other

the other input tensor

- -

equal(input, other) -> bool

+
+
torch_equal(self, other)
+
+
+

Arguments

+
self
+

the input tensor

+
other
+

the other input tensor

+
+
+

equal(input, other) -> bool

TRUE if two tensors have the same size and elements, FALSE otherwise.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_equal(torch_tensor(c(1, 2)), torch_tensor(c(1, 2)))
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_equal(torch_tensor(c(1, 2)), torch_tensor(c(1, 2)))
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_erf.html b/dev/reference/torch_erf.html index 819abbead4b6f579c3ffa2a40d9b73cf73848a5f..bb0040f57789b2fc87806920cca4e01a0c5321c3 100644 --- a/dev/reference/torch_erf.html +++ b/dev/reference/torch_erf.html @@ -1,79 +1,18 @@ - - - - - - - -Erf — torch_erf • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Erf — torch_erf • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_erf(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

erf(input, out=NULL) -> Tensor

+
+
torch_erf(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

erf(input, out=NULL) -> Tensor

@@ -209,43 +129,42 @@

$$ \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_erf(torch_tensor(c(0, -1., 10.)))
-}
-#> torch_tensor
-#>  0.0000
-#> -0.8427
-#>  1.0000
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_erf(torch_tensor(c(0, -1., 10.)))
+}
+#> torch_tensor
+#>  0.0000
+#> -0.8427
+#>  1.0000
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_erfc.html b/dev/reference/torch_erfc.html index 2aa55847fabc77760d0b88368c2dd32754a50120..1b97711493e06bcd7ada621fab3bc2d7ac4a0446 100644 --- a/dev/reference/torch_erfc.html +++ b/dev/reference/torch_erfc.html @@ -1,79 +1,18 @@ - - - - - - - -Erfc — torch_erfc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Erfc — torch_erfc • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_erfc(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

erfc(input, out=NULL) -> Tensor

+
+
torch_erfc(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

erfc(input, out=NULL) -> Tensor

@@ -210,43 +130,42 @@ The complementary error function is defined as follows:

$$ \mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_erfc(torch_tensor(c(0, -1., 10.)))
-}
-#> torch_tensor
-#>  1.0000e+00
-#>  1.8427e+00
-#>  1.4013e-45
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_erfc(torch_tensor(c(0, -1., 10.)))
+}
+#> torch_tensor
+#>  1.0000e+00
+#>  1.8427e+00
+#>  1.4013e-45
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_erfinv.html b/dev/reference/torch_erfinv.html index 319213a23fda2513b50163a1b908c697e06f390d..a62f045a666b495c5654dd09bf4ffe5ed69bd631 100644 --- a/dev/reference/torch_erfinv.html +++ b/dev/reference/torch_erfinv.html @@ -1,79 +1,18 @@ - - - - - - - -Erfinv — torch_erfinv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Erfinv — torch_erfinv • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_erfinv(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

erfinv(input, out=NULL) -> Tensor

+
+
torch_erfinv(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

erfinv(input, out=NULL) -> Tensor

@@ -210,43 +130,42 @@ The inverse error function is defined in the range \((-1, 1)\) as:

$$ \mathrm{erfinv}(\mathrm{erf}(x)) = x $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_erfinv(torch_tensor(c(0, 0.5, -1.)))
-}
-#> torch_tensor
-#>  0.0000
-#>  0.4769
-#>    -inf
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_erfinv(torch_tensor(c(0, 0.5, -1.)))
+}
+#> torch_tensor
+#>  0.0000
+#>  0.4769
+#>    -inf
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_exp.html b/dev/reference/torch_exp.html index de97c936788b164e1bac307eb98d384b11270be1..9d3f5f12c9f90079f120adc7886d1e5cdc04c73f 100644 --- a/dev/reference/torch_exp.html +++ b/dev/reference/torch_exp.html @@ -1,79 +1,18 @@ - - - - - - - -Exp — torch_exp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Exp — torch_exp • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_exp(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

exp(input, out=NULL) -> Tensor

+
+
torch_exp(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

exp(input, out=NULL) -> Tensor

@@ -210,42 +130,41 @@ of the input tensor input.

$$ y_{i} = e^{x_{i}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_exp(torch_tensor(c(0, log(2))))
-}
-#> torch_tensor
-#>  1
-#>  2
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_exp(torch_tensor(c(0, log(2))))
+}
+#> torch_tensor
+#>  1
+#>  2
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_exp2.html b/dev/reference/torch_exp2.html index afb76ab80dcccb478c01e7b24cb85c22af924823..b16055a51956e851bd9e37b38dcce2c1674d53a9 100644 --- a/dev/reference/torch_exp2.html +++ b/dev/reference/torch_exp2.html @@ -1,79 +1,18 @@ - - - - - - - -Exp2 — torch_exp2 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Exp2 — torch_exp2 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_exp2(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

exp2(input, *, out=None) -> Tensor

+
+
torch_exp2(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

exp2(input, *, out=None) -> Tensor

@@ -209,44 +129,43 @@

$$ y_{i} = 2^{x_{i}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_exp2(torch_tensor(c(0, log2(2.), 3, 4)))
-}
-#> torch_tensor
-#>   1
-#>   2
-#>   8
-#>  16
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_exp2(torch_tensor(c(0, log2(2.), 3, 4)))
+}
+#> torch_tensor
+#>   1
+#>   2
+#>   8
+#>  16
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_expm1.html b/dev/reference/torch_expm1.html index 44f3d2723e3208ef350004696425dcd3bec20da2..b25d4a92275a1005259137adec1e2290440932cb 100644 --- a/dev/reference/torch_expm1.html +++ b/dev/reference/torch_expm1.html @@ -1,79 +1,18 @@ - - - - - - - -Expm1 — torch_expm1 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Expm1 — torch_expm1 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_expm1(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

expm1(input, out=NULL) -> Tensor

+
+
torch_expm1(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

expm1(input, out=NULL) -> Tensor

@@ -210,42 +130,41 @@ of input.

$$ y_{i} = e^{x_{i}} - 1 $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_expm1(torch_tensor(c(0, log(2))))
-}
-#> torch_tensor
-#>  0
-#>  1
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_expm1(torch_tensor(c(0, log(2))))
+}
+#> torch_tensor
+#>  0
+#>  1
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_eye.html b/dev/reference/torch_eye.html index ef63173a32a347f341347cb9d897beb46500ee2e..31e1f04c915a12e2d382b8bc1a3668c379d11533 100644 --- a/dev/reference/torch_eye.html +++ b/dev/reference/torch_eye.html @@ -1,79 +1,18 @@ - - - - - - - -Eye — torch_eye • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Eye — torch_eye • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_eye(
-  n,
-  m = n,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
n

(int) the number of rows

m

(int, optional) the number of columns with default being n

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

eye(n, m=NULL, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_eye(
+  n,
+  m = n,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
n
+

(int) the number of rows

+
m
+

(int, optional) the number of columns with default being n

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

eye(n, m=NULL, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_eye(3)
-}
-#> torch_tensor
-#>  1  0  0
-#>  0  1  0
-#>  0  0  1
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_eye(3)
+}
+#> torch_tensor
+#>  1  0  0
+#>  0  1  0
+#>  0  0  1
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fft.html b/dev/reference/torch_fft.html deleted file mode 100644 index f56e51f26854765a80990203c8d0392d7cc958f4..0000000000000000000000000000000000000000 --- a/dev/reference/torch_fft.html +++ /dev/null @@ -1,322 +0,0 @@ - - - - - - - - -Fft — torch_fft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Fft

-
- -
torch_fft(self, signal_ndim, normalized = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor of at least signal_ndim + 1 dimensions

signal_ndim

(int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

normalized

(bool, optional) controls whether to return normalized results. Default: FALSE

- -

Note

- - -
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
-repeatedly running FFT methods on tensors of same geometry with same
-configuration. See cufft-plan-cache for more details on how to
-monitor and control the cache.
-
- -

fft(input, signal_ndim, normalized=False) -> Tensor

- - - - -

Complex-to-complex Discrete Fourier Transform

-

This method computes the complex-to-complex discrete Fourier transform. -Ignoring the batch dimensions, it computes the following expression:

-

$$ - X[\omega_1, \dots, \omega_d] = - \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] - e^{-j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, -$$ -where \(d\) = signal_ndim is number of dimensions for the -signal, and \(N_i\) is the size of signal dimension \(i\).

-

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated -by signal_ndim. input must be a tensor with last dimension -of size 2, representing the real and imaginary components of complex -numbers, and should have at least signal_ndim + 1 dimensions with optionally -arbitrary number of leading batch dimensions. If normalized is set to -TRUE, this normalizes the result by dividing it with -\(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is unitary.

-

Returns the real and the imaginary parts together as one tensor of the same -shape of input.

-

The inverse of this function is torch_ifft.

-

Warning

- - - -

For CPU tensors, this method is currently only available with MKL. Use -torch_backends.mkl.is_available to check if MKL is installed.

- -

Examples

-
if (torch_is_installed()) { - -# unbatched 2D FFT -x = torch_randn(c(4, 3, 2)) -torch_fft(x, 2) -# batched 1D FFT -torch_fft(x, 1) -# arbitrary number of batch dimensions, 2D FFT -x = torch_randn(c(3, 3, 5, 5, 2)) -torch_fft(x, 2) - -} -
#> torch_tensor -#> (1,1,1,.,.) = -#> 5.1798 -0.9818 -#> -1.6249 10.5638 -#> 0.0123 3.9055 -#> 5.1700 -8.4138 -#> -4.8670 5.2947 -#> -#> (2,1,1,.,.) = -#> -11.6410 2.5917 -#> -3.6874 3.6093 -#> -1.0265 -0.2814 -#> 8.0958 -1.7365 -#> 1.2423 2.8675 -#> -#> (3,1,1,.,.) = -#> -5.5626 4.0090 -#> 2.2682 -11.9331 -#> -2.7152 8.0993 -#> 5.8430 1.8109 -#> 7.8418 5.9927 -#> -#> (1,2,1,.,.) = -#> 4.1790 -2.7687 -#> 0.7335 -1.0919 -#> -5.1091 12.8679 -#> 0.4303 5.3978 -#> -0.1444 -1.7745 -#> -#> (2,2,1,.,.) = -#> 0.8005 -4.2031 -#> ... [the output was truncated (use n=-1 to disable)] -#> [ CPUFloatType{3,3,5,5,2} ]
-
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/torch_fft_fft.html b/dev/reference/torch_fft_fft.html index 821e569afce61fff92f64b1e88fa3c08fc28b1c4..15e73c4de7f766bceaf94c02cf5fe61f3e57cdaa 100644 --- a/dev/reference/torch_fft_fft.html +++ b/dev/reference/torch_fft_fft.html @@ -1,79 +1,18 @@ - - - - - - - -Fft — torch_fft_fft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fft — torch_fft_fft • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,86 +111,76 @@

Computes the one dimensional discrete Fourier transform of input.

-
torch_fft_fft(self, n = NULL, dim = -1L, norm = NULL)
+
+
torch_fft_fft(self, n = NULL, dim = -1L, norm = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor

n

(int) Signal length. If given, the input will either be zero-padded -or trimmed to this length before computing the FFT.

dim

(int, optional) The dimension along which to take the one dimensional FFT.

norm

(str, optional) Normalization mode. For the forward transform, these -correspond to:

    -
  • "forward" - normalize by 1/n

  • +
    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor

    +
    n
    +

    (int) Signal length. If given, the input will either be zero-padded +or trimmed to this length before computing the FFT.

    +
    dim
    +

    (int, optional) The dimension along which to take the one dimensional FFT.

    +
    norm
    +

    (str, optional) Normalization mode. For the forward transform, these +correspond to:

    • "forward" - normalize by 1/n

    • "backward" - no normalization

    • "ortho" - normalize by 1/sqrt(n) (making the FFT orthonormal) Calling the backward transform (ifft()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make IFFT the exact inverse. Default is "backward" (no normalization).

    • -
- -

Note

- + +
+
+

Note

The Fourier domain representation of any real signal satisfies the Hermitian property: X[i] = conj(X[-i]). This function always returns both the positive and negative frequency terms even though, for real inputs, the negative frequencies are redundant. rfft() returns the more compact one-sided representation where only the positive frequencies are returned.

+
-

Examples

-
if (torch_is_installed()) {
-t <- torch_arange(start = 0, end = 3)
-t
-torch_fft_fft(t, norm = "backward")
-
-}
-#> torch_tensor
-#>  6
-#> -2
-#> -2
-#> -2
-#> [ CPUComplexFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+t <- torch_arange(start = 0, end = 3)
+t
+torch_fft_fft(t, norm = "backward")
+
+}
+#> torch_tensor
+#>  6
+#> -2
+#> -2
+#> -2
+#> [ CPUComplexFloatType{4} ]
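The normalization modes listed above can be checked with a round trip: with norm = "ortho" on both transforms, the two 1/sqrt(n) factors compose to the overall 1/n needed for exact inversion (a hedged sketch, not from the original page; `torch_real()` is assumed available for extracting the real part):

```r
library(torch)

t <- torch_arange(start = 0, end = 3)
f <- torch_fft_fft(t, norm = "ortho")    # scales by 1/sqrt(n)
r <- torch_fft_ifft(f, norm = "ortho")   # scales by 1/sqrt(n) again
torch_real(r)                            # recovers t up to rounding
```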
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fft_ifft.html b/dev/reference/torch_fft_ifft.html index 60e8f3a492b5298526c7341ca2034cd9ce0220b8..8dc5a3dd3aa1eea30669dabab76ceb33428e70dc 100644 --- a/dev/reference/torch_fft_ifft.html +++ b/dev/reference/torch_fft_ifft.html @@ -1,79 +1,18 @@ - - - - - - - -Ifft — torch_fft_ifft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ifft — torch_fft_ifft • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,82 +111,71 @@

Computes the one dimensional inverse discrete Fourier transform of input.

-
torch_fft_ifft(self, n = NULL, dim = -1L, norm = NULL)
+
+
torch_fft_ifft(self, n = NULL, dim = -1L, norm = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor

n

(int, optional) – Signal length. If given, the input will either be -zero-padded or trimmed to this length before computing the IFFT.

dim

(int, optional) – The dimension along which to take the one -dimensional IFFT.

norm

(str, optional) – Normalization mode. For the backward transform, -these correspond to:

    -
  • "forward" - no normalization

  • +
    +

    Arguments

    +
    self
    +

    (Tensor) the input tensor

    +
    n
    +

    (int, optional) – Signal length. If given, the input will either be +zero-padded or trimmed to this length before computing the IFFT.

    +
    dim
    +

    (int, optional) – The dimension along which to take the one +dimensional IFFT.

    +
    norm
    +

    (str, optional) – Normalization mode. For the backward transform, +these correspond to:

    • "forward" - no normalization

    • "backward" - normalize by 1/n

    • "ortho" - normalize by 1/sqrt(n) (making the IFFT orthonormal). Calling the forward transform with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make ifft() the exact inverse. Default is "backward" (normalize by 1/n).

    • -
- - -

Examples

-
if (torch_is_installed()) {
-t <- torch_arange(start = 0, end = 3)
-t
-x <- torch_fft_fft(t, norm = "backward")
-torch_fft_ifft(x)
-
-
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  2
-#>  3
-#> [ CPUComplexFloatType{4} ]
-
+ +
+ +
+

Examples

+
if (torch_is_installed()) {
+t <- torch_arange(start = 0, end = 3)
+t
+x <- torch_fft_fft(t, norm = "backward")
+torch_fft_ifft(x)
+
+
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  2
+#>  3
+#> [ CPUComplexFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fft_irfft.html b/dev/reference/torch_fft_irfft.html index 2e6e0816794382b798f7f028bffe29b34daaa694..1214094db6dc8c37d72e001a5a2c5ed5cdc73b34 100644 --- a/dev/reference/torch_fft_irfft.html +++ b/dev/reference/torch_fft_irfft.html @@ -1,82 +1,21 @@ - - - - - - - -Irfft — torch_fft_irfft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Irfft — torch_fft_irfft • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
-

Computes the inverse of torch_fft_rfft(). +

Computes the inverse of torch_fft_rfft(). Input is interpreted as a one-sided Hermitian signal in the Fourier domain, -as produced by torch_fft_rfft(). By the Hermitian property, the output will +as produced by torch_fft_rfft(). By the Hermitian property, the output will be real-valued.

-
torch_fft_irfft(self, n = NULL, dim = -1L, norm = NULL)
+
+
torch_fft_irfft(self, n = NULL, dim = -1L, norm = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor representing a half-Hermitian signal

n

(int) Output signal length. This determines the length of the output +

+

Arguments

+
self
+

(Tensor) the input tensor representing a half-Hermitian signal

+
n
+

(int) Output signal length. This determines the length of the output signal. If given, the input will either be zero-padded or trimmed to this -length before computing the real IFFT. Defaults to even output: n=2*(input.size(dim) - 1).

dim

(int, optional) – The dimension along which to take the one -dimensional real IFFT.

norm

(str, optional) – Normalization mode. For the backward transform, -these correspond to:

    -
  • "forward" - no normalization

  • +length before computing the real IFFT. Defaults to even output: n=2*(input.size(dim) - 1).

    +
    dim
    +

    (int, optional) – The dimension along which to take the one +dimensional real IFFT.

    +
    norm
    +

    (str, optional) – Normalization mode. For the backward transform, +these correspond to:

    • "forward" - no normalization

    • "backward" - normalize by 1/n

    • "ortho" - normalize by 1/sqrt(n) (making the real IFFT orthonormal) -Calling the forward transform (torch_fft_rfft()) with the same normalization +Calling the forward transform (torch_fft_rfft()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make irfft() the exact inverse. Default is "backward" (normalize by 1/n).

    • -
- -

Note

- + +
+
+

Note

Some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in the zero-frequency term cannot be represented in a real @@ -241,48 +154,47 @@ original data, as given by n. This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. So, it is recommended to always pass the signal length n.

+
-

Examples

-
if (torch_is_installed()) {
-t <- torch_arange(start = 0, end = 4)
-x <- torch_fft_rfft(t)
-torch_fft_irfft(x)
-torch_fft_irfft(x, n = t$numel())
-
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  2
-#>  3
-#>  4
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+t <- torch_arange(start = 0, end = 4)
+x <- torch_fft_rfft(t)
+torch_fft_irfft(x)
+torch_fft_irfft(x, n = t$numel())
+
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  2
+#>  3
+#>  4
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fft_rfft.html b/dev/reference/torch_fft_rfft.html index aed081b7133bd4e1e5e9708d97d56011e1bbc9b1..7ece690e47f4b9b57626457fb758abfe5ebd1de2 100644 --- a/dev/reference/torch_fft_rfft.html +++ b/dev/reference/torch_fft_rfft.html @@ -1,79 +1,18 @@ - - - - - - - -Rfft — torch_fft_rfft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rfft — torch_fft_rfft • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,83 +111,73 @@

Computes the one dimensional Fourier transform of real-valued input.

-
torch_fft_rfft(self, n = NULL, dim = -1L, norm = NULL)
+
+
torch_fft_rfft(self, n = NULL, dim = -1L, norm = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the real input tensor

n

(int) Signal length. If given, the input will either be zero-padded -or trimmed to this length before computing the real FFT.

dim

(int, optional) – The dimension along which to take the one -dimensional real FFT.

norm

norm (str, optional) – Normalization mode. For the forward -transform, these correspond to:

    -
  • "forward" - normalize by 1/n

  • +
    +

    Arguments

    +
    self
    +

    (Tensor) the real input tensor

    +
    n
    +

    (int, optional) Signal length. If given, the input will either be zero-padded +or trimmed to this length before computing the real FFT.

    +
    dim
    +

    (int, optional) – The dimension along which to take the one +dimensional real FFT.

    +
    norm
    +

    (str, optional) – Normalization mode. For the forward +transform, these correspond to:

    • "forward" - normalize by 1/n

    • "backward" - no normalization

    • "ortho" - normalize by 1/sqrt(n) (making the FFT orthonormal) -Calling the backward transform (torch_fft_irfft()) with the same +Calling the backward transform (torch_fft_irfft()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make irfft() the exact inverse. Default is "backward" (no normalization).

    • -
- -

Details

- + +
+
+

Details

The FFT of a real signal is Hermitian-symmetric, X[i] = conj(X[-i]) so the output contains only the positive frequencies below the Nyquist frequency. -To compute the full output, use torch_fft_fft().

+To compute the full output, use torch_fft_fft().

+
-

Examples

-
if (torch_is_installed()) {
-t <- torch_arange(start = 0, end = 3)
-torch_fft_rfft(t)
-
-}
-#> torch_tensor
-#>  6
-#> -2
-#> -2
-#> [ CPUComplexFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+t <- torch_arange(start = 0, end = 3)
+torch_fft_rfft(t)
+
+}
+#> torch_tensor
+#>  6
+#> -2
+#> -2
+#> [ CPUComplexFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_finfo.html b/dev/reference/torch_finfo.html index fc96713c17454ef17b605218316014c7723a797b..c2612fe306039f1d30593e58a6e352c209932e85 100644 --- a/dev/reference/torch_finfo.html +++ b/dev/reference/torch_finfo.html @@ -1,80 +1,19 @@ - - - - - - - -Floating point type info — torch_finfo • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Floating point type info — torch_finfo • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,43 +113,37 @@ floating point torch.dtype" /> floating point torch.dtype

-
torch_finfo(dtype)
- -

Arguments

- - - - - - -
dtype

dtype to check information

+
+
torch_finfo(dtype)
+
+
+

Arguments

+
dtype
+

the floating point dtype to query information about

+
+
-
- +
- - + + diff --git a/dev/reference/torch_fix.html b/dev/reference/torch_fix.html index e9e4849a997d9fa51c99a87f76426a362b9fb725..6449bce82a7965f5dff8e31ddd5a136f1ca1a6da 100644 --- a/dev/reference/torch_fix.html +++ b/dev/reference/torch_fix.html @@ -1,79 +1,18 @@ - - - - - - - -Fix — torch_fix • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fix — torch_fix • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_fix(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

fix(input, *, out=None) -> Tensor

+
+
torch_fix(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

fix(input, *, out=None) -> Tensor

-

Alias for torch_trunc()

+

Alias for torch_trunc()

+
+
-
- +
- - + + diff --git a/dev/reference/torch_flatten.html b/dev/reference/torch_flatten.html index 1df981bb575feada8dd967019401b115cdeafb6e..230d40db725df31f7d63424cd96f32ff65b95a6a 100644 --- a/dev/reference/torch_flatten.html +++ b/dev/reference/torch_flatten.html @@ -1,79 +1,18 @@ - - - - - - - -Flatten — torch_flatten • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Flatten — torch_flatten • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_flatten(self, dims, start_dim = 1L, end_dim = -1L, out_dim)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dims

if tensor is named you can pass the name of the dimensions to -flatten

start_dim

(int) the first dim to flatten

end_dim

(int) the last dim to flatten

out_dim

the name of the resulting dimension if a named tensor.

- -

flatten(input, start_dim=0, end_dim=-1) -> Tensor

+
+
torch_flatten(self, dims, start_dim = 1L, end_dim = -1L, out_dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dims
+

if the tensor is named, you can pass the names of the dimensions to +flatten

+
start_dim
+

(int) the first dim to flatten

+
end_dim
+

(int) the last dim to flatten

+
out_dim
+

the name of the resulting dimension if a named tensor.

+
+
+

flatten(input, start_dim=0, end_dim=-1) -> Tensor

Flattens a contiguous range of dims in a tensor.

+
-

Examples

-
if (torch_is_installed()) {
-
-t = torch_tensor(matrix(c(1, 2), ncol = 2))
-torch_flatten(t)
-torch_flatten(t, start_dim=2)
-}
-#> torch_tensor
-#>  1  2
-#> [ CPUFloatType{1,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t = torch_tensor(matrix(c(1, 2), ncol = 2))
+torch_flatten(t)
+torch_flatten(t, start_dim=2)
+}
+#> torch_tensor
+#>  1  2
+#> [ CPUFloatType{1,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_flip.html b/dev/reference/torch_flip.html index cd4e947c3464fb1edf55b9ddc16926299087a3cd..47fe4b23f7861aba75b9b3bea1e28c2587f5ee70 100644 --- a/dev/reference/torch_flip.html +++ b/dev/reference/torch_flip.html @@ -1,79 +1,18 @@ - - - - - - - -Flip — torch_flip • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Flip — torch_flip • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_flip(self, dims)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dims

(a list or tuple) axis to flip on

- -

flip(input, dims) -> Tensor

+
+
torch_flip(self, dims)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dims
+

(a list or tuple) the axes to flip along

+
+
+

flip(input, dims) -> Tensor

Reverse the order of an n-D tensor along the given axes in dims.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(1, 8)$view(c(2, 2, 2))
-x
-torch_flip(x, c(1, 2))
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   7  8
-#>   5  6
-#> 
-#> (2,.,.) = 
-#>   3  4
-#>   1  2
-#> [ CPUFloatType{2,2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(1, 8)$view(c(2, 2, 2))
+x
+torch_flip(x, c(1, 2))
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   7  8
+#>   5  6
+#> 
+#> (2,.,.) = 
+#>   3  4
+#>   1  2
+#> [ CPUFloatType{2,2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fliplr.html b/dev/reference/torch_fliplr.html index 71d3138e35df6f6290fbacc7b76e486545c5a810..91411f422d96ed86e4861c5e38dbfac5cc4c74c6 100644 --- a/dev/reference/torch_fliplr.html +++ b/dev/reference/torch_fliplr.html @@ -1,79 +1,18 @@ - - - - - - - -Fliplr — torch_fliplr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fliplr — torch_fliplr • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_fliplr(self)
- -

Arguments

- - - - - - -
self

(Tensor) Must be at least 2-dimensional.

- -

Note

+
+
torch_fliplr(self)
+
+
+

Arguments

+
self
+

(Tensor) Must be at least 2-dimensional.

+
+
+

Note

Equivalent to reversing input along its second dimension. Requires the tensor to be at least 2-D.

-

fliplr(input) -> Tensor

- +
+
+

fliplr(input) -> Tensor

Flip array in the left/right direction, returning a new tensor.

Flip the entries in each row in the left/right direction. Columns are preserved, but appear in a different order than before.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(start = 1, end = 4)$view(c(2, 2))
-x
-torch_fliplr(x)
-}
-#> torch_tensor
-#>  2  1
-#>  4  3
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(start = 1, end = 4)$view(c(2, 2))
+x
+torch_fliplr(x)
+}
+#> torch_tensor
+#>  2  1
+#>  4  3
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_flipud.html b/dev/reference/torch_flipud.html index 8ae2a727a8dd6503581a2aed18993303434bde34..584eab9899d02c4845706636cf4582d9bfbf552c 100644 --- a/dev/reference/torch_flipud.html +++ b/dev/reference/torch_flipud.html @@ -1,79 +1,18 @@ - - - - - - - -Flipud — torch_flipud • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Flipud — torch_flipud • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_flipud(self)
- -

Arguments

- - - - - - -
self

(Tensor) Must be at least 1-dimensional.

- -

Note

+
+
torch_flipud(self)
+
+
+

Arguments

+
self
+

(Tensor) Must be at least 1-dimensional.

+
+
+

Note

Equivalent to reversing input along its first dimension. Requires the tensor to be at least 1-D.

-

flipud(input) -> Tensor

- +
+
+

flipud(input) -> Tensor

Flip array in the up/down direction, returning a new tensor.

Flip the entries in each column in the up/down direction. Rows are preserved, but appear in a different order than before.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(start = 1, end = 4)$view(c(2, 2))
-x
-torch_flipud(x)
-}
-#> torch_tensor
-#>  3  4
-#>  1  2
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(start = 1, end = 4)$view(c(2, 2))
+x
+torch_flipud(x)
+}
+#> torch_tensor
+#>  3  4
+#>  1  2
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_floor.html b/dev/reference/torch_floor.html index c6028dc0c775db7b8ed9d6765aaf5d51c7044932..524775e92231940e6848d28c3ba69d782049cda1 100644 --- a/dev/reference/torch_floor.html +++ b/dev/reference/torch_floor.html @@ -1,79 +1,18 @@ - - - - - - - -Floor — torch_floor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Floor — torch_floor • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_floor(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

floor(input, out=NULL) -> Tensor

+
+
torch_floor(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

floor(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@ the largest integer less than or equal to each element.

$$ \mbox{out}_{i} = \left\lfloor \mbox{input}_{i} \right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_floor(a)
-}
-#> torch_tensor
-#> -1
-#> -2
-#>  0
-#> -1
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_floor(a)
+}
+#> torch_tensor
+#>  1
+#>  1
+#>  0
+#> -1
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_floor_divide.html b/dev/reference/torch_floor_divide.html index b312cbd495acf2512e106d10c206f201e3794e91..ec1fa162c8f158117162856ef8e9492335a07130 100644 --- a/dev/reference/torch_floor_divide.html +++ b/dev/reference/torch_floor_divide.html @@ -1,79 +1,18 @@ - - - - - - - -Floor_divide — torch_floor_divide • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Floor_divide — torch_floor_divide • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,70 +111,65 @@

Floor_divide

-
torch_floor_divide(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the numerator tensor

other

(Tensor or Scalar) the denominator

- -

floor_divide(input, other, out=NULL) -> Tensor

+
+
torch_floor_divide(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the numerator tensor

+
other
+

(Tensor or Scalar) the denominator

+
+
+

floor_divide(input, other, out=NULL) -> Tensor

-

Return the division of the inputs rounded down to the nearest integer. See torch_div +

Return the division of the inputs rounded down to the nearest integer. See torch_div for type promotion and broadcasting rules.

$$ \mbox{out}_i = \left\lfloor \frac{\mbox{input}_i}{\mbox{other}_i} \right\rfloor $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_tensor(c(4.0, 3.0))
-b = torch_tensor(c(2.0, 2.0))
-torch_floor_divide(a, b)
-torch_floor_divide(a, 1.4)
-}
-#> torch_tensor
-#>  2
-#>  2
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_tensor(c(4.0, 3.0))
+b = torch_tensor(c(2.0, 2.0))
+torch_floor_divide(a, b)
+torch_floor_divide(a, 1.4)
+}
+#> torch_tensor
+#>  2
+#>  2
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_fmod.html b/dev/reference/torch_fmod.html index 27e45d6765f2ea048218d2e1a0b5bf5653f14cd3..c0c710a1a8ccdb446d15668aaffe505ff9748ebc 100644 --- a/dev/reference/torch_fmod.html +++ b/dev/reference/torch_fmod.html @@ -1,79 +1,18 @@ - - - - - - - -Fmod — torch_fmod • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Fmod — torch_fmod • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_fmod(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the dividend

other

(Tensor or float) the divisor, which may be either a number or a tensor of the same shape as the dividend

- -

fmod(input, other, out=NULL) -> Tensor

+
+
torch_fmod(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the dividend

+
other
+

(Tensor or float) the divisor, which may be either a number or a tensor of the same shape as the dividend

+
+
+

fmod(input, other, out=NULL) -> Tensor

@@ -214,46 +132,45 @@ numbers. The remainder has the same sign as the dividend input.

When other is a tensor, the shapes of input and other must be broadcastable.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_fmod(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2)
-torch_fmod(torch_tensor(c(1., 2, 3, 4, 5)), 1.5)
-}
-#> torch_tensor
-#>  1.0000
-#>  0.5000
-#>  0.0000
-#>  1.0000
-#>  0.5000
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_fmod(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2)
+torch_fmod(torch_tensor(c(1., 2, 3, 4, 5)), 1.5)
+}
+#> torch_tensor
+#>  1.0000
+#>  0.5000
+#>  0.0000
+#>  1.0000
+#>  0.5000
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_frac.html b/dev/reference/torch_frac.html index d6a1fa147a746327cfb53e5a4b55814bf9ddd9be..7860217f86acef4306d781ef14b8b4ac26b3539a 100644 --- a/dev/reference/torch_frac.html +++ b/dev/reference/torch_frac.html @@ -1,79 +1,18 @@ - - - - - - - -Frac — torch_frac • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Frac — torch_frac • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_frac(self)
- -

Arguments

- - - - - - -
self

the input tensor.

- -

frac(input, out=NULL) -> Tensor

+
+
torch_frac(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

frac(input, out=NULL) -> Tensor

@@ -209,43 +129,42 @@

$$ \mbox{out}_{i} = \mbox{input}_{i} - \left\lfloor |\mbox{input}_{i}| \right\rfloor * \mbox{sgn}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_frac(torch_tensor(c(1, 2.5, -3.2)))
-}
-#> torch_tensor
-#>  0.0000
-#>  0.5000
-#> -0.2000
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_frac(torch_tensor(c(1, 2.5, -3.2)))
+}
+#> torch_tensor
+#>  0.0000
+#>  0.5000
+#> -0.2000
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_full.html b/dev/reference/torch_full.html index bc49acd19dbb50a7c9b4ff7d1ee83f455a57e981..cc69205b58e94f6a517b03a79f7c6fdec7a1c1b6 100644 --- a/dev/reference/torch_full.html +++ b/dev/reference/torch_full.html @@ -1,79 +1,18 @@ - - - - - - - -Full — torch_full • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Full — torch_full • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_full(
-  size,
-  fill_value,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
size

(int...) a list, tuple, or torch_Size of integers defining the shape of the output tensor.

fill_value

NA the number to fill the output tensor with.

names

optional names of the dimensions

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

full(size, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_full(
+  size,
+  fill_value,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
size
+

(int...) a list, tuple, or torch_Size of integers defining the shape of the output tensor.

+
fill_value
+

the number to fill the output tensor with.

+
names
+

optional names of the dimensions

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

full(size, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a tensor of size size filled with fill_value.

-

Warning

- +
+
+

Warning

In PyTorch 1.5 a bool or integral fill_value will produce a warning if @@ -247,42 +156,41 @@ In a future PyTorch release, when dtype and out are not set a bool fill_value will return a tensor of torch.bool dtype, and an integral fill_value will return a tensor of torch.long dtype.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_full(list(2, 3), 3.141592)
-}
-#> torch_tensor
-#>  3.1416  3.1416  3.1416
-#>  3.1416  3.1416  3.1416
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_full(list(2, 3), 3.141592)
+}
+#> torch_tensor
+#>  3.1416  3.1416  3.1416
+#>  3.1416  3.1416  3.1416
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_full_like.html b/dev/reference/torch_full_like.html index ac90d5fff140c83dee28b06a8d5219049e407dd9..b23f8c4b460efc0da41594cf3d6b2b30ea246cb7 100644 --- a/dev/reference/torch_full_like.html +++ b/dev/reference/torch_full_like.html @@ -1,79 +1,18 @@ - - - - - - - -Full_like — torch_full_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Full_like — torch_full_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,51 +111,37 @@

Full_like

-
torch_full_like(
-  input,
-  fill_value,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine size of the output tensor.

fill_value

the number to fill the output tensor with.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

full_like(input, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

+
+
torch_full_like(
+  input,
+  fill_value,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine the size of the output tensor.

+
fill_value
+

the number to fill the output tensor with.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

full_like(input, fill_value, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

@@ -241,32 +149,29 @@

Returns a tensor with the same size as input filled with fill_value. torch_full_like(input, fill_value) is equivalent to torch_full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device).

+
+
-
- +
- - + + diff --git a/dev/reference/torch_gather.html b/dev/reference/torch_gather.html index f4039f44f59ec9ee37feb0b45d13eb117acbef26..62a10a65a8086be2c6965bf6200f0aceadc09117 100644 --- a/dev/reference/torch_gather.html +++ b/dev/reference/torch_gather.html @@ -1,79 +1,18 @@ - - - - - - - -Gather — torch_gather • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gather — torch_gather • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_gather(self, dim, index, sparse_grad = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the source tensor

dim

(int) the axis along which to index

index

(LongTensor) the indices of elements to gather

sparse_grad

(bool,optional) If TRUE, gradient w.r.t. input will be a sparse tensor.

- -

gather(input, dim, index, sparse_grad=FALSE) -> Tensor

+
+
torch_gather(self, dim, index, sparse_grad = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the source tensor

+
dim
+

(int) the axis along which to index

+
index
+

(LongTensor) the indices of elements to gather

+
sparse_grad
+

(bool,optional) If TRUE, gradient w.r.t. input will be a sparse tensor.

+
+
+

gather(input, dim, index, sparse_grad=FALSE) -> Tensor

Gathers values along an axis specified by dim.

-

For a 3-D tensor the output is specified by::

out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
-out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
-out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2
-
+

For a 3-D tensor the output is specified by::

out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
+out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
+out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

If input is an n-dimensional tensor with size \((x_0, x_1..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})\) and dim = i, then index must be an \(n\)-dimensional tensor with size \((x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})\) where \(y \geq 1\) and out will have the same size as index.

+
-

Examples

-
if (torch_is_installed()) {
-
-t = torch_tensor(matrix(c(1,2,3,4), ncol = 2, byrow = TRUE))
-torch_gather(t, 2, torch_tensor(matrix(c(1,1,2,1), ncol = 2, byrow=TRUE), dtype = torch_int64()))
-}
-#> torch_tensor
-#>  1  1
-#>  4  3
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t = torch_tensor(matrix(c(1,2,3,4), ncol = 2, byrow = TRUE))
+torch_gather(t, 2, torch_tensor(matrix(c(1,1,2,1), ncol = 2, byrow=TRUE), dtype = torch_int64()))
+}
+#> torch_tensor
+#>  1  1
+#>  4  3
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_gcd.html b/dev/reference/torch_gcd.html index 2b190cf31e73d7f80665d7e76d11fe28c12f8bf0..ea3b0c8d520bb5675b85096a579491bb9c26ac9c 100644 --- a/dev/reference/torch_gcd.html +++ b/dev/reference/torch_gcd.html @@ -1,79 +1,18 @@ - - - - - - - -Gcd — torch_gcd • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gcd — torch_gcd • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_gcd(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

Note

+
+
torch_gcd(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

Note

This defines \(gcd(0, 0) = 0\).

-

gcd(input, other, *, out=None) -> Tensor

- +
+
+

gcd(input, other, *, out=None) -> Tensor

Computes the element-wise greatest common divisor (GCD) of input and other.

Both input and other must have integer types.

+
-

Examples

-
if (torch_is_installed()) {
-
-if (torch::cuda_is_available()) {
-a <- torch_tensor(c(5, 10, 15), dtype = torch_long(), device = "cuda")
-b <- torch_tensor(c(3, 4, 5), dtype = torch_long(), device = "cuda")
-torch_gcd(a, b)
-c <- torch_tensor(c(3L), device = "cuda")
-torch_gcd(a, c)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (torch::cuda_is_available()) {
+a <- torch_tensor(c(5, 10, 15), dtype = torch_long(), device = "cuda")
+b <- torch_tensor(c(3, 4, 5), dtype = torch_long(), device = "cuda")
+torch_gcd(a, b)
+c <- torch_tensor(c(3L), device = "cuda")
+torch_gcd(a, c)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_ge.html b/dev/reference/torch_ge.html index fde8d9050bb5908b04a67bb70f1daf137e41cf9a..e367e6dee9e8cc666d71cf5703810c4c9184fa6e 100644 --- a/dev/reference/torch_ge.html +++ b/dev/reference/torch_ge.html @@ -1,79 +1,18 @@ - - - - - - - -Ge — torch_ge • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ge — torch_ge • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ge(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

ge(input, other, out=NULL) -> Tensor

+
+
torch_ge(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

ge(input, other, out=NULL) -> Tensor

Computes \(\mbox{input} \geq \mbox{other}\) element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_ge(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
-         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
-}
-#> torch_tensor
-#>  1  1
-#>  0  1
-#> [ CPUBoolType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_ge(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
+         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
+}
+#> torch_tensor
+#>  1  1
+#>  0  1
+#> [ CPUBoolType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_generator.html b/dev/reference/torch_generator.html index d33c60d7b262a4560fef5ea82d0cf391d7bb4205..db382a4d9b15cd0947757cd54d190d058e799abb 100644 --- a/dev/reference/torch_generator.html +++ b/dev/reference/torch_generator.html @@ -1,81 +1,20 @@ - - - - - - - -Create a Generator object — torch_generator • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Create a Generator object — torch_generator • torch - - - - - - - - - - - - - - - + + - - -
-
- -
- -
+
@@ -193,49 +115,48 @@ that produces pseudo random numbers. Used as a keyword argument in many In-place random sampling functions.

-
torch_generator()
- +
+
torch_generator()
+
-

Examples

-
if (torch_is_installed()) {
-
-# Via string
-generator <- torch_generator()
-generator$current_seed()
-generator$set_current_seed(1234567L)
-generator$current_seed()
-
-
-}
-#> integer64
-#> [1] 1234567
-
+
+

Examples

+
if (torch_is_installed()) {
+
+# create a generator and get/set its seed
+generator <- torch_generator()
+generator$current_seed()
+generator$set_current_seed(1234567L)
+generator$current_seed()
+
+
+}
+#> integer64
+#> [1] 1234567
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_geqrf.html b/dev/reference/torch_geqrf.html index 4650500f564e5d7458287fae5c24362b86f65b93..4cdc6e6fa348cda32d63b8bee9bc69e14777d262 100644 --- a/dev/reference/torch_geqrf.html +++ b/dev/reference/torch_geqrf.html @@ -1,79 +1,18 @@ - - - - - - - -Geqrf — torch_geqrf • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Geqrf — torch_geqrf • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_geqrf(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input matrix

- -

geqrf(input, out=NULL) -> (Tensor, Tensor)

+
+
torch_geqrf(self)
+
+
+

Arguments

+
self
+

(Tensor) the input matrix

+
+
+

geqrf(input, out=NULL) -> (Tensor, Tensor)

This is a low-level function for calling LAPACK directly. This function returns a namedtuple (a, tau) as defined in LAPACK documentation for geqrf_ .

-

You'll generally want to use torch_qr instead.

+

You'll generally want to use torch_qr instead.

Computes a QR decomposition of input, but without constructing \(Q\) and \(R\) as explicit separate matrices.

Rather, this directly calls the underlying LAPACK function ?geqrf which produces a sequence of 'elementary reflectors'.

See LAPACK documentation for geqrf_ for further details.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_ger.html b/dev/reference/torch_ger.html index 13eb3eaac2b085cb0395c90e3ecf30a8205a3e3f..5bb9c9a73614204415717c93ca642a79dcea1710 100644 --- a/dev/reference/torch_ger.html +++ b/dev/reference/torch_ger.html @@ -1,79 +1,18 @@ - - - - - - - -Ger — torch_ger • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ger — torch_ger • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ger(self, vec2)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) 1-D input vector

vec2

(Tensor) 1-D input vector

- -

Note

+
+
torch_ger(self, vec2)
+
+
+

Arguments

+
self
+

(Tensor) 1-D input vector

+
vec2
+

(Tensor) 1-D input vector

+
+
+

Note

This function does not broadcast .

-

ger(input, vec2, out=NULL) -> Tensor

- +
+
+

ger(input, vec2, out=NULL) -> Tensor

Outer product of input and vec2. If input is a vector of size \(n\) and vec2 is a vector of size \(m\), then out must be a matrix of size \((n \times m)\).

+
-

Examples

-
if (torch_is_installed()) {
-
-v1 = torch_arange(1., 5.)
-v2 = torch_arange(1., 4.)
-torch_ger(v1, v2)
-}
-#> torch_tensor
-#>   1   2   3   4
-#>   2   4   6   8
-#>   3   6   9  12
-#>   4   8  12  16
-#>   5  10  15  20
-#> [ CPUFloatType{5,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+v1 = torch_arange(1., 5.)
+v2 = torch_arange(1., 4.)
+torch_ger(v1, v2)
+}
+#> torch_tensor
+#>   1   2   3   4
+#>   2   4   6   8
+#>   3   6   9  12
+#>   4   8  12  16
+#>   5  10  15  20
+#> [ CPUFloatType{5,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_greater.html b/dev/reference/torch_greater.html index 12280230156562ec871c0bf4db881f52025dc553..2ab197d552b8766e156fb26f4037b1b9f1b5445f 100644 --- a/dev/reference/torch_greater.html +++ b/dev/reference/torch_greater.html @@ -1,79 +1,18 @@ - - - - - - - -Greater — torch_greater • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Greater — torch_greater • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_greater(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

greater(input, other, *, out=None) -> Tensor

+
+
torch_greater(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

greater(input, other, *, out=None) -> Tensor

-

Alias for torch_gt().

+

Alias for torch_gt().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_greater_equal.html b/dev/reference/torch_greater_equal.html index edc88fe9a161c3bb84c7a38e72cdc179e2c31115..06ba61155a88752b42e70b806ba8cc56041d2b1c 100644 --- a/dev/reference/torch_greater_equal.html +++ b/dev/reference/torch_greater_equal.html @@ -1,79 +1,18 @@ - - - - - - - -Greater_equal — torch_greater_equal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Greater_equal — torch_greater_equal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Greater_equal

-
torch_greater_equal(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

greater_equal(input, other, *, out=None) -> Tensor

+
+
torch_greater_equal(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

greater_equal(input, other, *, out=None) -> Tensor

-

Alias for torch_ge().

+

Alias for torch_ge().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_gt.html b/dev/reference/torch_gt.html index 3923efe2aa85e683c5f73148795607d1d7d901fc..13cd64f6227fce93663d5970fa5d57c06495e038 100644 --- a/dev/reference/torch_gt.html +++ b/dev/reference/torch_gt.html @@ -1,79 +1,18 @@ - - - - - - - -Gt — torch_gt • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Gt — torch_gt • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_gt(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

gt(input, other, out=NULL) -> Tensor

+
+
torch_gt(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

gt(input, other, out=NULL) -> Tensor

Computes \(\mbox{input} > \mbox{other}\) element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_gt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
-         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
-}
-#> torch_tensor
-#>  0  1
-#>  0  0
-#> [ CPUBoolType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_gt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
+         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
+}
+#> torch_tensor
+#>  0  1
+#>  0  0
+#> [ CPUBoolType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_hamming_window.html b/dev/reference/torch_hamming_window.html index 193bb77f207802dab36272d04411803f8819e8e0..027afb37029108488c6f136049a1f4017a46fd02 100644 --- a/dev/reference/torch_hamming_window.html +++ b/dev/reference/torch_hamming_window.html @@ -1,79 +1,18 @@ - - - - - - - -Hamming_window — torch_hamming_window • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hamming_window — torch_hamming_window • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,65 +111,50 @@

Hamming_window

-
torch_hamming_window(
-  window_length,
-  periodic = TRUE,
-  alpha = 0.54,
-  beta = 0.46,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
window_length

(int) the size of returned window

periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

alpha

(float, optional) The coefficient \(\alpha\) in the equation above

beta

(float, optional) The coefficient \(\beta\) in the equation above

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

layout

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

Note

+
+
torch_hamming_window(
+  window_length,
+  periodic = TRUE,
+  alpha = 0.54,
+  beta = 0.46,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
window_length
+

(int) the size of returned window

+
periodic
+

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

+
alpha
+

(float, optional) The coefficient \(\alpha\) in the equation above

+
beta
+

(float, optional) The coefficient \(\beta\) in the equation above

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

+
layout
+

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

Note

-
If `window_length` \eqn{=1}, the returned window contains a single value 1.
-
+
If `window_length` \eqn{=1}, the returned window contains a single value 1.
+
-
This is a generalized version of `torch_hann_window`.
-
- -

hamming_window(window_length, periodic=TRUE, alpha=0.54, beta=0.46, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
This is a generalized version of `torch_hann_window`.
+
+
+
+

hamming_window(window_length, periodic=TRUE, alpha=0.54, beta=0.46, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -264,32 +171,29 @@ ready to be used as a periodic window with functions like above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have torch_hamming_window(L, periodic=TRUE) equal to torch_hamming_window(L + 1, periodic=False)[:-1]).

+
+
-
- +
- - + + diff --git a/dev/reference/torch_hann_window.html b/dev/reference/torch_hann_window.html index 14700f96b72532df975b03a8a3150e75cec73b9c..33eaafba5e9cb269677898f27dafb14618cc4d7d 100644 --- a/dev/reference/torch_hann_window.html +++ b/dev/reference/torch_hann_window.html @@ -1,79 +1,18 @@ - - - - - - - -Hann_window — torch_hann_window • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hann_window — torch_hann_window • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,52 +111,41 @@

Hann_window

-
torch_hann_window(
-  window_length,
-  periodic = TRUE,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
window_length

(int) the size of returned window

periodic

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

layout

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

Note

+
+
torch_hann_window(
+  window_length,
+  periodic = TRUE,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
window_length
+

(int) the size of returned window

+
periodic
+

(bool, optional) If TRUE, returns a window to be used as a periodic function. If FALSE, returns a symmetric window.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). Only floating point types are supported.

+
layout
+

(torch.layout, optional) the desired layout of returned window tensor. Only torch_strided (dense layout) is supported.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

Note

-
If `window_length` \eqn{=1}, the returned window contains a single value 1.
-
- -

hann_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
If `window_length` \eqn{=1}, the returned window contains a single value 1.
+
+
+
+

hann_window(window_length, periodic=TRUE, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -252,32 +163,29 @@ ready to be used as a periodic window with functions like above formula is in fact \(\mbox{window\_length} + 1\). Also, we always have torch_hann_window(L, periodic=TRUE) equal to torch_hann_window(L + 1, periodic=False)[:-1]).

+
+
-
- +
- - + + diff --git a/dev/reference/torch_heaviside.html b/dev/reference/torch_heaviside.html index 478673f85bc6f2ec6824a2c2b2283daa9286987c..a1b65ed7df683c73ddde9b4332cd59fbdb043a65 100644 --- a/dev/reference/torch_heaviside.html +++ b/dev/reference/torch_heaviside.html @@ -1,79 +1,18 @@ - - - - - - - -Heaviside — torch_heaviside • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Heaviside — torch_heaviside • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Heaviside

-
torch_heaviside(self, values)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

values

(Tensor) The values to use where input is zero.

- -

heaviside(input, values, *, out=None) -> Tensor

+
+
torch_heaviside(self, values)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
values
+

(Tensor) The values to use where input is zero.

+
+
+

heaviside(input, values, *, out=None) -> Tensor

@@ -218,47 +136,46 @@ The Heaviside step function is defined as:

1, & \mbox{if input > 0} \end{array} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-input <- torch_tensor(c(-1.5, 0, 2.0))
-values <- torch_tensor(c(0.5))
-torch_heaviside(input, values)
-values <- torch_tensor(c(1.2, -2.0, 3.5))
-torch_heaviside(input, values)
-}
-#> torch_tensor
-#>  0
-#> -2
-#>  1
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input <- torch_tensor(c(-1.5, 0, 2.0))
+values <- torch_tensor(c(0.5))
+torch_heaviside(input, values)
+values <- torch_tensor(c(1.2, -2.0, 3.5))
+torch_heaviside(input, values)
+}
+#> torch_tensor
+#>  0
+#> -2
+#>  1
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_histc.html b/dev/reference/torch_histc.html index 2c077ddd30ae3391d96242b881eaae30929fbbb0..8ae8fed5f83ca11753783f5091b84073bf65dc2b 100644 --- a/dev/reference/torch_histc.html +++ b/dev/reference/torch_histc.html @@ -1,79 +1,18 @@ - - - - - - - -Histc — torch_histc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Histc — torch_histc • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_histc(self, bins = 100L, min = 0L, max = 0L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

bins

(int) number of histogram bins

min

(int) lower end of the range (inclusive)

max

(int) upper end of the range (inclusive)

- -

histc(input, bins=100, min=0, max=0, out=NULL) -> Tensor

+
+
torch_histc(self, bins = 100L, min = 0L, max = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
bins
+

(int) number of histogram bins

+
min
+

(int) lower end of the range (inclusive)

+
max
+

(int) upper end of the range (inclusive)

+
+
+

histc(input, bins=100, min=0, max=0, out=NULL) -> Tensor

@@ -221,44 +135,43 @@

The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_histc(torch_tensor(c(1., 2, 1)), bins=4, min=0, max=3)
-}
-#> torch_tensor
-#>  0
-#>  2
-#>  1
-#>  0
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_histc(torch_tensor(c(1., 2, 1)), bins=4, min=0, max=3)
+}
+#> torch_tensor
+#>  0
+#>  2
+#>  1
+#>  0
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_hstack.html b/dev/reference/torch_hstack.html index 2c7f27acef9e36e8556a5694fb3577f12b9741d9..40b650e9e628eb9c66fe7a8e9b09d739af281c28 100644 --- a/dev/reference/torch_hstack.html +++ b/dev/reference/torch_hstack.html @@ -1,79 +1,18 @@ - - - - - - - -Hstack — torch_hstack • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hstack — torch_hstack • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_hstack(tensors)
- -

Arguments

- - - - - - -
tensors

(sequence of Tensors) sequence of tensors to concatenate

- -

hstack(tensors, *, out=None) -> Tensor

+
+
torch_hstack(tensors)
+
+
+

Arguments

+
tensors
+

(sequence of Tensors) sequence of tensors to concatenate

+
+
+

hstack(tensors, *, out=None) -> Tensor

Stack tensors in sequence horizontally (column wise).

This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2, 3))
-b <- torch_tensor(c(4, 5, 6))
-torch_hstack(list(a,b))
-a <- torch_tensor(rbind(1,2,3))
-b <- torch_tensor(rbind(4,5,6))
-torch_hstack(list(a,b))
-}
-#> torch_tensor
-#>  1  4
-#>  2  5
-#>  3  6
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2, 3))
+b <- torch_tensor(c(4, 5, 6))
+torch_hstack(list(a,b))
+a <- torch_tensor(rbind(1,2,3))
+b <- torch_tensor(rbind(4,5,6))
+torch_hstack(list(a,b))
+}
+#> torch_tensor
+#>  1  4
+#>  2  5
+#>  3  6
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_hypot.html b/dev/reference/torch_hypot.html index 40ee1e38ecfc98673b2865b7495571d04beb4515..4597e542a5afd17333974d87f2c859da89fee61c 100644 --- a/dev/reference/torch_hypot.html +++ b/dev/reference/torch_hypot.html @@ -1,79 +1,18 @@ - - - - - - - -Hypot — torch_hypot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Hypot — torch_hypot • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_hypot(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first input tensor

other

(Tensor) the second input tensor

- -

hypot(input, other, *, out=None) -> Tensor

+
+
torch_hypot(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first input tensor

+
other
+

(Tensor) the second input tensor

+
+
+

hypot(input, other, *, out=None) -> Tensor

@@ -215,43 +133,42 @@ $$

The shapes of input and other must be broadcastable .

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_hypot(torch_tensor(c(4.0)), torch_tensor(c(3.0, 4.0, 5.0)))
-}
-#> torch_tensor
-#>  5.0000
-#>  5.6569
-#>  6.4031
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_hypot(torch_tensor(c(4.0)), torch_tensor(c(3.0, 4.0, 5.0)))
+}
+#> torch_tensor
+#>  5.0000
+#>  5.6569
+#>  6.4031
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_i0.html b/dev/reference/torch_i0.html index a4e70f0e5557f0f017ba168de3d30adf235bac1d..494ad9c9af554d166d6dc1905db2dd8270745ca0 100644 --- a/dev/reference/torch_i0.html +++ b/dev/reference/torch_i0.html @@ -1,79 +1,18 @@ - - - - - - - -I0 — torch_i0 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -I0 — torch_i0 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_i0(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor

- -

i0(input, *, out=None) -> Tensor

+
+
torch_i0(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor

+
+
+

i0(input, *, out=None) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = I_0(\mbox{input}_{i}) = \sum_{k=0}^{\infty} \frac{(\mbox{input}_{i}^2/4)^k}{(k!)^2} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_i0(torch_arange(start = 0, end = 5, dtype=torch_float32()))
-}
-#> torch_tensor
-#>   1.0000
-#>   1.2661
-#>   2.2796
-#>   4.8808
-#>  11.3019
-#>  27.2399
-#> [ CPUFloatType{6} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_i0(torch_arange(start = 0, end = 5, dtype=torch_float32()))
+}
+#> torch_tensor
+#>   1.0000
+#>   1.2661
+#>   2.2796
+#>   4.8808
+#>  11.3019
+#>  27.2399
+#> [ CPUFloatType{6} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_ifft.html b/dev/reference/torch_ifft.html deleted file mode 100644 index b79900da6b48cfe3c622605592c924004196b84d..0000000000000000000000000000000000000000 --- a/dev/reference/torch_ifft.html +++ /dev/null @@ -1,299 +0,0 @@ - - - - - - - - -Ifft — torch_ifft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Ifft

-
- -
torch_ifft(self, signal_ndim, normalized = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor of at least signal_ndim + 1 dimensions

signal_ndim

(int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

normalized

(bool, optional) controls whether to return normalized results. Default: FALSE

- -

Note

- - -
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
-repeatedly running FFT methods on tensors of same geometry with same
-configuration. See cufft-plan-cache for more details on how to
-monitor and control the cache.
-
- -

ifft(input, signal_ndim, normalized=False) -> Tensor

- - - - -

Complex-to-complex Inverse Discrete Fourier Transform

-

This method computes the complex-to-complex inverse discrete Fourier -transform. Ignoring the batch dimensions, it computes the following -expression:

-

$$ - X[\omega_1, \dots, \omega_d] = - \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] - e^{\ j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}}, -$$ -where \(d\) = signal_ndim is number of dimensions for the -signal, and \(N_i\) is the size of signal dimension \(i\).

-

The argument specifications are almost identical with torch_fft. -However, if normalized is set to TRUE, this instead returns the -results multiplied by \(\sqrt{\prod_{i=1}^d N_i}\), to become a unitary -operator. Therefore, to invert a torch_fft, the normalized -argument should be set identically for torch_fft.

-

Returns the real and the imaginary parts together as one tensor of the same -shape of input.

-

The inverse of this function is torch_fft.

-

Warning

- - - -

For CPU tensors, this method is currently only available with MKL. Use -torch_backends.mkl.is_available to check if MKL is installed.

- -

Examples

-
if (torch_is_installed()) { - -x = torch_randn(c(3, 3, 2)) -x -y = torch_fft(x, 2) -torch_ifft(y, 2) # recover x -} -
#> torch_tensor -#> (1,.,.) = -#> -0.9437 0.6436 -#> -1.2636 0.7639 -#> 0.9802 -1.7995 -#> -#> (2,.,.) = -#> -1.5355 -0.9216 -#> 2.7922 -2.0162 -#> 1.0606 0.5383 -#> -#> (3,.,.) = -#> 0.3394 -0.3011 -#> -0.6207 0.1157 -#> -0.1890 0.0129 -#> [ CPUFloatType{3,3,2} ]
-
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/torch_iinfo.html b/dev/reference/torch_iinfo.html index b83552d77f238e98267e39367dfadcc952c68298..c808eece671e946e141f7daa9e820183a5c3c2f2 100644 --- a/dev/reference/torch_iinfo.html +++ b/dev/reference/torch_iinfo.html @@ -1,80 +1,19 @@ - - - - - - - -Integer type info — torch_iinfo • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Integer type info — torch_iinfo • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,43 +113,37 @@ type." /> type.

-
torch_iinfo(dtype)
- -

Arguments

- - - - - - -
dtype

dtype to get information from.

+
+
torch_iinfo(dtype)
+
+
+

Arguments

+
dtype
+

dtype to get information from.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_imag.html b/dev/reference/torch_imag.html index b06869105ea7aac3c07445a1a4b7121c777080ca..e06cfc2744b40165cdbc7780aa7669d031d81bcd 100644 --- a/dev/reference/torch_imag.html +++ b/dev/reference/torch_imag.html @@ -1,79 +1,18 @@ - - - - - - - -Imag — torch_imag • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Imag — torch_imag • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_imag(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

imag(input) -> Tensor

+
+
torch_imag(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

imag(input) -> Tensor

Returns the imaginary part of the input tensor.

-

Warning

- +
+
+

Warning

Not yet implemented.

$$ \mbox{out}_{i} = imag(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-torch_imag(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+torch_imag(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_index.html b/dev/reference/torch_index.html index 5d1c7402e3250ecb4c41f97f655ca974cf27ce74..8308dfb85054515e834b264b6d864722459d59b6 100644 --- a/dev/reference/torch_index.html +++ b/dev/reference/torch_index.html @@ -1,79 +1,18 @@ - - - - - - - -Index torch tensors — torch_index • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Index torch tensors — torch_index • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,48 +111,40 @@

Helper functions to index tensors.

-
torch_index(self, indices)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) Tensor that will be indexed.

indices

(List[Tensor]) List of indices. Indices are torch tensors with -torch_long() dtype.

+
+
torch_index(self, indices)
+
+
+

Arguments

+
self
+

(Tensor) Tensor that will be indexed.

+
indices
+

(List[Tensor]) List of indices. Indices are torch tensors with +torch_long() dtype.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_index_put.html b/dev/reference/torch_index_put.html index db1fbfaadf629f1717c215fccfac3ad7d0439388..1bbf71c3fe89da23389e7408e127571514d6160f 100644 --- a/dev/reference/torch_index_put.html +++ b/dev/reference/torch_index_put.html @@ -1,79 +1,18 @@ - - - - - - - -Modify values selected by indices. — torch_index_put • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Modify values selected by indices. — torch_index_put • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,58 +111,46 @@

Modify values selected by indices.

-
torch_index_put(self, indices, values, accumulate = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) Tensor that will be indexed.

indices

(List[Tensor]) List of indices. Indices are torch tensors with -torch_long() dtype.

values

(Tensor) values that will replace the indexed location. Used
-for torch_index_put and torch_index_put_.

accumulate

(bool) Whether, instead of replacing the current values with values,
-you want to add them.

+
+
torch_index_put(self, indices, values, accumulate = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) Tensor that will be indexed.

+
indices
+

(List[Tensor]) List of indices. Indices are torch tensors with +torch_long() dtype.

+
values
+

(Tensor) values that will replace the indexed location. Used
+for torch_index_put and torch_index_put_.

+
accumulate
+

(bool) Whether, instead of replacing the current values with values,
+you want to add them.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_index_put_.html b/dev/reference/torch_index_put_.html index dba03096b7b2224aeea6229c32be2030882a57a5..24bb884ea7c9ad9d1f47084dfba8f986b096e9ae 100644 --- a/dev/reference/torch_index_put_.html +++ b/dev/reference/torch_index_put_.html @@ -1,79 +1,18 @@ - - - - - - - -In-place version of torch_index_put. — torch_index_put_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -In-place version of torch_index_put. — torch_index_put_ • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,58 +111,46 @@

In-place version of torch_index_put.

-
torch_index_put_(self, indices, values, accumulate = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) Tensor that will be indexed.

indices

(List[Tensor]) List of indices. Indices are torch tensors with -torch_long() dtype.

values

(Tensor) values that will be replaced the indexed location. Used -for torch_index_put and torch_index_put_.

accumulate

(bool) Wether instead of replacing the current values with values, -you want to add them.

+
+
torch_index_put_(self, indices, values, accumulate = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) Tensor that will be indexed.

+
indices
+

(List[Tensor]) List of indices. Indices are torch tensors with +torch_long() dtype.

+
values
+

(Tensor) values that will replace the values at the indexed location. Used
+for torch_index_put and torch_index_put_.

+
accumulate
+

(bool) Whether, instead of replacing the current values with values,
+you want to add them.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_index_select.html b/dev/reference/torch_index_select.html index cb496e22683fe50057fcc4903a042ba9b9e25e89..a7c0595fe21e20c4afa56cc81c05dc8ed3026229 100644 --- a/dev/reference/torch_index_select.html +++ b/dev/reference/torch_index_select.html @@ -1,79 +1,18 @@ - - - - - - - -Index_select — torch_index_select • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Index_select — torch_index_select • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,33 +111,28 @@

Index_select

-
torch_index_select(self, dim, index)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension in which we index

index

(LongTensor) the 1-D tensor containing the indices to index

- -

Note

+
+
torch_index_select(self, dim, index)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension in which we index

+
index
+

(LongTensor) the 1-D tensor containing the indices to index

+
+
+

Note

The returned tensor does not use the same storage as the original tensor. If out has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary.

-

index_select(input, dim, index, out=NULL) -> Tensor

- +
+
+

index_select(input, dim, index, out=NULL) -> Tensor

@@ -224,47 +141,46 @@ storage if necessary.

The returned tensor has the same number of dimensions as the original tensor (input). The dim\ th dimension has the same size as the length of index; other dimensions have the same size as in the original tensor.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(3, 4))
-x
-indices = torch_tensor(c(1, 3), dtype = torch_int64())
-torch_index_select(x, 1, indices)
-torch_index_select(x, 2, indices)
-}
-#> torch_tensor
-#>  0.2325  0.8443
-#> -0.5426 -0.2814
-#> -1.4507 -0.8767
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(3, 4))
+x
+indices = torch_tensor(c(1, 3), dtype = torch_int64())
+torch_index_select(x, 1, indices)
+torch_index_select(x, 2, indices)
+}
+#> torch_tensor
+#>  0.5332  0.7924
+#>  1.8411 -0.1383
+#> -0.0094  0.9842
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_inverse.html b/dev/reference/torch_inverse.html index 7778298c96dbb6b2afd13240ac7f98ecabd7755e..9d4844dc49ff96e1793cc32f1c6a42f8e568c557 100644 --- a/dev/reference/torch_inverse.html +++ b/dev/reference/torch_inverse.html @@ -1,79 +1,18 @@ - - - - - - - -Inverse — torch_inverse • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Inverse — torch_inverse • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_inverse(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions

- -

Note

+
+
torch_inverse(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions

+
+
+

Note

-
Irrespective of the original strides, the returned tensors will be
+
Irrespective of the original strides, the returned tensors will be
 transposed, i.e. with strides like `input.contiguous().transpose(-2, -1).stride()`
-
- -

inverse(input, out=NULL) -> Tensor

+
+
+
+

inverse(input, out=NULL) -> Tensor

Takes the inverse of the square matrix input. input can be batches of 2D square tensors, in which case this function would return a tensor composed of individual inverses.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-x = torch_rand(c(4, 4))
-y = torch_inverse(x)
-z = torch_mm(x, y)
-z
-torch_max(torch_abs(z - torch_eye(4))) # Max non-zero
-# Batched inverse example
-x = torch_randn(c(2, 3, 4, 4))
-y = torch_inverse(x)
-z = torch_matmul(x, y)
-torch_max(torch_abs(z - torch_eye(4)$expand_as(x))) # Max non-zero
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+x = torch_rand(c(4, 4))
+y = torch_inverse(x)
+z = torch_mm(x, y)
+z
+torch_max(torch_abs(z - torch_eye(4))) # Max non-zero
+# Batched inverse example
+x = torch_randn(c(2, 3, 4, 4))
+y = torch_inverse(x)
+z = torch_matmul(x, y)
+torch_max(torch_abs(z - torch_eye(4)$expand_as(x))) # Max non-zero
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_irfft.html b/dev/reference/torch_irfft.html deleted file mode 100644 index a336c302949b9501e839855f2c7a2973d522ae3a..0000000000000000000000000000000000000000 --- a/dev/reference/torch_irfft.html +++ /dev/null @@ -1,321 +0,0 @@ - - - - - - - - -Irfft — torch_irfft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Irfft

-
- -
torch_irfft(
-  self,
-  signal_ndim,
-  normalized = FALSE,
-  onesided = TRUE,
-  signal_sizes = list()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor of at least signal_ndim + 1 dimensions

signal_ndim

(int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

normalized

(bool, optional) controls whether to return normalized results. Default: FALSE

onesided

(bool, optional) controls whether input was halfed to avoid redundancy, e.g., by torch_rfft(). Default: TRUE

signal_sizes

(list or torch.Size, optional) the size of the original signal (without batch dimension). Default: NULL

- -

Note

- - -
Due to the conjugate symmetry, `input` do not need to contain the full
-complex frequency values. Roughly half of the values will be sufficient, as
-is the case when `input` is given by [`~torch.rfft`] with
-`rfft(signal, onesided=TRUE)`. In such case, set the `onesided`
-argument of this method to `TRUE`. Moreover, the original signal shape
-information can sometimes be lost, optionally set `signal_sizes` to be
-the size of the original signal (without the batch dimensions if in batched
-mode) to recover it with correct shape.
-
-Therefore, to invert an [torch_rfft()], the `normalized` and
-`onesided` arguments should be set identically for [torch_irfft()],
-and preferably a `signal_sizes` is given to avoid size mismatch. See the
-example below for a case of size mismatch.
-
-See [torch_rfft()] for details on conjugate symmetry.
-
- -

The inverse of this function is torch_rfft().

-
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
-repeatedly running FFT methods on tensors of same geometry with same
-configuration. See cufft-plan-cache for more details on how to
-monitor and control the cache.
-
- -

irfft(input, signal_ndim, normalized=False, onesided=TRUE, signal_sizes=NULL) -> Tensor

- - - - -

Complex-to-real Inverse Discrete Fourier Transform

-

This method computes the complex-to-real inverse discrete Fourier transform. -It is mathematically equivalent with torch_ifft with differences only in -formats of the input and output.

-

The argument specifications are almost identical with torch_ifft. -Similar to torch_ifft, if normalized is set to TRUE, -this normalizes the result by multiplying it with -\(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is unitary, where -\(N_i\) is the size of signal dimension \(i\).

-

Warning

- - - -

Generally speaking, input to this function should contain values -following conjugate symmetry. Note that even if onesided is -TRUE, often symmetry on some part is still needed. When this -requirement is not satisfied, the behavior of torch_irfft is -undefined. Since torch_autograd.gradcheck estimates numerical -Jacobian with point perturbations, torch_irfft will almost -certainly fail the check.

- -

For CPU tensors, this method is currently only available with MKL. Use -torch_backends.mkl.is_available to check if MKL is installed.

- -

Examples

-
if (torch_is_installed()) { - -x = torch_randn(c(4, 4)) -torch_rfft(x, 2, onesided=TRUE) -x = torch_randn(c(4, 5)) -torch_rfft(x, 2, onesided=TRUE) -y = torch_rfft(x, 2, onesided=TRUE) -torch_irfft(y, 2, onesided=TRUE, signal_sizes=c(4,5)) # recover x -} -
#> torch_tensor -#> 1.1598 0.9531 1.3629 1.8066 0.7223 -#> -0.3411 -0.7091 0.7843 -1.0341 0.3639 -#> 1.7390 -0.0613 0.0746 1.5620 1.1026 -#> 0.1532 0.1681 0.3220 0.0096 0.8583 -#> [ CPUFloatType{4,5} ]
-
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/torch_is_complex.html b/dev/reference/torch_is_complex.html index 6ba4d29e13cfdc39edbb02c4dad61944df41a60f..b7602005ebacc6a85053396d51bc028f71f43bc7 100644 --- a/dev/reference/torch_is_complex.html +++ b/dev/reference/torch_is_complex.html @@ -1,79 +1,18 @@ - - - - - - - -Is_complex — torch_is_complex • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Is_complex — torch_is_complex • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,50 +111,45 @@

Is_complex

-
torch_is_complex(self)
- -

Arguments

- - - - - - -
self

(Tensor) the PyTorch tensor to test

- -

is_complex(input) -> (bool)

+
+
torch_is_complex(self)
+
+
+

Arguments

+
self
+

(Tensor) the PyTorch tensor to test

+
+
+

is_complex(input) -> (bool)

Returns TRUE if the data type of input is a complex data type i.e., one of torch_complex64, and torch.complex128.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_is_floating_point.html b/dev/reference/torch_is_floating_point.html index 715de2e9b13c08d454c1f3c876732189462337be..ce48712915921dde6659a974faa8fcf5a2ad4be9 100644 --- a/dev/reference/torch_is_floating_point.html +++ b/dev/reference/torch_is_floating_point.html @@ -1,79 +1,18 @@ - - - - - - - -Is_floating_point — torch_is_floating_point • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Is_floating_point — torch_is_floating_point • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,50 +111,45 @@

Is_floating_point

-
torch_is_floating_point(self)
- -

Arguments

- - - - - - -
self

(Tensor) the PyTorch tensor to test

- -

is_floating_point(input) -> (bool)

+
+
torch_is_floating_point(self)
+
+
+

Arguments

+
self
+

(Tensor) the PyTorch tensor to test

+
+
+

is_floating_point(input) -> (bool)

Returns TRUE if the data type of input is a floating point data type i.e., one of torch_float64, torch.float32 and torch.float16.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_is_installed.html b/dev/reference/torch_is_installed.html index 1eb6ddf2a924df02532b1a3970ffb783d4f02401..31d80e120e973d4c5a2ae5b60e8afb24e0e91ba1 100644 --- a/dev/reference/torch_is_installed.html +++ b/dev/reference/torch_is_installed.html @@ -1,79 +1,18 @@ - - - - - - - -Verifies if torch is installed — torch_is_installed • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Verifies if torch is installed — torch_is_installed • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,35 +111,32 @@

Verifies if torch is installed

-
torch_is_installed()
- +
+
torch_is_installed()
+
+
-
- +
- - + + diff --git a/dev/reference/torch_is_nonzero.html b/dev/reference/torch_is_nonzero.html index 058363de0d9aefa51d61d6b21d98a3c3ca3b360b..e058b63a6a1ea422551e0177ecfeace9ff36ffb1 100644 --- a/dev/reference/torch_is_nonzero.html +++ b/dev/reference/torch_is_nonzero.html @@ -1,79 +1,18 @@ - - - - - - - -Is_nonzero — torch_is_nonzero • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Is_nonzero — torch_is_nonzero • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Is_nonzero

-
torch_is_nonzero(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

is_nonzero(input) -> (bool)

+
+
torch_is_nonzero(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

is_nonzero(input) -> (bool)

@@ -211,45 +131,44 @@ i.e. not equal to torch_tensor(c(0)) or torch_tensor(c(0))torch_tensor(c(FALSE)). Throws a RuntimeError if torch_numel() != 1 (even in case of sparse tensors).

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_is_nonzero(torch_tensor(c(0.)))
-torch_is_nonzero(torch_tensor(c(1.5)))
-torch_is_nonzero(torch_tensor(c(FALSE)))
-torch_is_nonzero(torch_tensor(c(3)))
-if (FALSE) {
-torch_is_nonzero(torch_tensor(c(1, 3, 5)))
-torch_is_nonzero(torch_tensor(c()))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_is_nonzero(torch_tensor(c(0.)))
+torch_is_nonzero(torch_tensor(c(1.5)))
+torch_is_nonzero(torch_tensor(c(FALSE)))
+torch_is_nonzero(torch_tensor(c(3)))
+if (FALSE) {
+torch_is_nonzero(torch_tensor(c(1, 3, 5)))
+torch_is_nonzero(torch_tensor(c()))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isclose.html b/dev/reference/torch_isclose.html index 6ae8da9630b31831b78778c3d58c284c66672abc..cb9481ca5c1e1f049dc6dc3df413faa4e7b8634c 100644 --- a/dev/reference/torch_isclose.html +++ b/dev/reference/torch_isclose.html @@ -1,79 +1,18 @@ - - - - - - - -Isclose — torch_isclose • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isclose — torch_isclose • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_isclose(self, other, rtol = 1e-05, atol = 0, equal_nan = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) first tensor to compare

other

(Tensor) second tensor to compare

rtol

(float, optional) relative tolerance. Default: 1e-05

atol

(float, optional) absolute tolerance. Default: 1e-08

equal_nan

(bool, optional) if TRUE, then two NaN s will be -considered equal. Default: FALSE

- -

isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> Tensor

+
+
torch_isclose(self, other, rtol = 1e-05, atol = 0, equal_nan = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) first tensor to compare

+
other
+

(Tensor) second tensor to compare

+
rtol
+

(float, optional) relative tolerance. Default: 1e-05

+
atol
+

(float, optional) absolute tolerance. Default: 1e-08

+
equal_nan
+

(bool, optional) if TRUE, then two NaN s will be +considered equal. Default: FALSE

+
+
+

isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=FALSE) -> Tensor

@@ -232,43 +144,42 @@ $$

and/or other are nonfinite they are close if and only if they are equal, with NaNs being considered equal to each other when equal_nan is TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_isclose(torch_tensor(c(1., 2, 3)), torch_tensor(c(1 + 1e-10, 3, 4)))
-torch_isclose(torch_tensor(c(Inf, 4)), torch_tensor(c(Inf, 6)), rtol=.5)
-}
-#> torch_tensor
-#>  1
-#>  1
-#> [ CPUBoolType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_isclose(torch_tensor(c(1., 2, 3)), torch_tensor(c(1 + 1e-10, 3, 4)))
+torch_isclose(torch_tensor(c(Inf, 4)), torch_tensor(c(Inf, 6)), rtol=.5)
+}
+#> torch_tensor
+#>  1
+#>  1
+#> [ CPUBoolType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isfinite.html b/dev/reference/torch_isfinite.html index 653de5cacc2e3690a4b8900cfd12fc107194a630..2e0fbef22b88cbb469b4330e1c6e5d71d0c672d0 100644 --- a/dev/reference/torch_isfinite.html +++ b/dev/reference/torch_isfinite.html @@ -1,79 +1,18 @@ - - - - - - - -Isfinite — torch_isfinite • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isfinite — torch_isfinite • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,62 +111,59 @@

Isfinite

-
torch_isfinite(self)
- -

Arguments

- - - - - - -
self

(Tensor) A tensor to check

- -

TEST

+
+
torch_isfinite(self)
+
+
+

Arguments

+
self
+

(Tensor) A tensor to check

+
+
+

TEST

Returns a new tensor with boolean elements representing if each element is Finite or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_isfinite(torch_tensor(c(1, Inf, 2, -Inf, NaN)))
-}
-#> torch_tensor
-#>  1
-#>  0
-#>  1
-#>  0
-#>  0
-#> [ CPUBoolType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_isfinite(torch_tensor(c(1, Inf, 2, -Inf, NaN)))
+}
+#> torch_tensor
+#>  1
+#>  0
+#>  1
+#>  0
+#>  0
+#> [ CPUBoolType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isinf.html b/dev/reference/torch_isinf.html index a55e67b59a628cb0de4458520ac5942557cd2251..7da92463c5290009e1a43824f058967b20fe75b3 100644 --- a/dev/reference/torch_isinf.html +++ b/dev/reference/torch_isinf.html @@ -1,79 +1,18 @@ - - - - - - - -Isinf — torch_isinf • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isinf — torch_isinf • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_isinf(self)
- -

Arguments

- - - - - - -
self

(Tensor) A tensor to check

- -

TEST

+
+
torch_isinf(self)
+
+
+

Arguments

+
self
+

(Tensor) A tensor to check

+
+
+

TEST

Returns a new tensor with boolean elements representing if each element is +/-INF or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_isinf(torch_tensor(c(1, Inf, 2, -Inf, NaN)))
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  0
-#>  1
-#>  0
-#> [ CPUBoolType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_isinf(torch_tensor(c(1, Inf, 2, -Inf, NaN)))
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  0
+#>  1
+#>  0
+#> [ CPUBoolType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isnan.html b/dev/reference/torch_isnan.html index 9fcf067f3184a8d7e06396ea1274590dd1a7b09f..aba057d2de030e23418601efe12191a5268ffb3a 100644 --- a/dev/reference/torch_isnan.html +++ b/dev/reference/torch_isnan.html @@ -1,79 +1,18 @@ - - - - - - - -Isnan — torch_isnan • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isnan — torch_isnan • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_isnan(self)
- -

Arguments

- - - - - - -
self

(Tensor) A tensor to check

- -

TEST

+
+
torch_isnan(self)
+
+
+

Arguments

+
self
+

(Tensor) A tensor to check

+
+
+

TEST

Returns a new tensor with boolean elements representing if each element is NaN or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_isnan(torch_tensor(c(1, NaN, 2)))
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  0
-#> [ CPUBoolType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_isnan(torch_tensor(c(1, NaN, 2)))
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  0
+#> [ CPUBoolType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isneginf.html b/dev/reference/torch_isneginf.html index 4d326e35c845826dac2cce0cf7b0689735e49697..f9de577e0b354792df4bd5769581bea449e5429c 100644 --- a/dev/reference/torch_isneginf.html +++ b/dev/reference/torch_isneginf.html @@ -1,79 +1,18 @@ - - - - - - - -Isneginf — torch_isneginf • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isneginf — torch_isneginf • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,61 +111,58 @@

Isneginf

-
torch_isneginf(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

isneginf(input, *, out=None) -> Tensor

+
+
torch_isneginf(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

isneginf(input, *, out=None) -> Tensor

Tests if each element of input is negative infinity or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(-Inf, Inf, 1.2))
-torch_isneginf(a)
-}
-#> torch_tensor
-#>  1
-#>  0
-#>  0
-#> [ CPUBoolType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(-Inf, Inf, 1.2))
+torch_isneginf(a)
+}
+#> torch_tensor
+#>  1
+#>  0
+#>  0
+#> [ CPUBoolType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isposinf.html b/dev/reference/torch_isposinf.html index 243be0ad0d0019915776689feb11a4e7adfe42d5..4a5a4219849b3c02685f0d36b12c0ac6329da2fb 100644 --- a/dev/reference/torch_isposinf.html +++ b/dev/reference/torch_isposinf.html @@ -1,79 +1,18 @@ - - - - - - - -Isposinf — torch_isposinf • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isposinf — torch_isposinf • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,61 +111,58 @@

Isposinf

-
torch_isposinf(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

isposinf(input, *, out=None) -> Tensor

+
+
torch_isposinf(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

isposinf(input, *, out=None) -> Tensor

Tests if each element of input is positive infinity or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(-Inf, Inf, 1.2))
-torch_isposinf(a)
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  0
-#> [ CPUBoolType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(-Inf, Inf, 1.2))
+torch_isposinf(a)
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  0
+#> [ CPUBoolType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_isreal.html b/dev/reference/torch_isreal.html index fcabb8dd36e435299ae4ba9e6904a02082b6db6a..0adc9ca817287b34ce8615a6a3c63331287d0c75 100644 --- a/dev/reference/torch_isreal.html +++ b/dev/reference/torch_isreal.html @@ -1,79 +1,18 @@ - - - - - - - -Isreal — torch_isreal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Isreal — torch_isreal • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_isreal(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

isreal(input) -> Tensor

+
+
torch_isreal(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

isreal(input) -> Tensor

Returns a new tensor with boolean elements representing if each element of input is real-valued or not. All real-valued types are considered real. Complex values are considered real when their imaginary part is 0.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-torch_isreal(torch_tensor(c(1, 1+1i, 2+0i)))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+torch_isreal(torch_tensor(c(1, 1+1i, 2+0i)))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_istft.html b/dev/reference/torch_istft.html index c2e905f7655a3f360dfa5b40cc708b59f97e771b..0001c008822c9012d3f729c1863d22652a7200ef 100644 --- a/dev/reference/torch_istft.html +++ b/dev/reference/torch_istft.html @@ -1,79 +1,18 @@ - - - - - - - -Istft — torch_istft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Istft — torch_istft • torch - - - - - - - - + + -
-
- -
- -
+
-

Inverse short time Fourier Transform. This is expected to be the inverse of torch_stft().

+

Inverse short time Fourier Transform. This is expected to be the inverse of torch_stft().

-
torch_istft(
-  self,
-  n_fft,
-  hop_length = NULL,
-  win_length = NULL,
-  window = list(),
-  center = TRUE,
-  normalized = FALSE,
-  onesided = NULL,
-  length = NULL,
-  return_complex = FALSE
-)
+
+
torch_istft(
+  self,
+  n_fft,
+  hop_length = NULL,
+  win_length = NULL,
+  window = list(),
+  center = TRUE,
+  normalized = FALSE,
+  onesided = NULL,
+  length = NULL,
+  return_complex = FALSE
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) The input tensor. Expected to be output of torch_stft(), +

+

Arguments

+
self
+

(Tensor) The input tensor. Expected to be output of torch_stft(), can either be complex (channel, fft_size, n_frame), or real (channel, fft_size, n_frame, 2) where the channel dimension is -optional.

n_fft

(int) Size of Fourier transform

hop_length

(Optional[int]) The distance between neighboring sliding window frames. -(Default: n_fft %% 4)

win_length

(Optional[int]) The size of window frame and STFT filter. -(Default: n_fft)

window

(Optional(torch.Tensor)) The optional window function. -(Default: torch_ones(win_length))

center

(bool) Whether input was padded on both sides so that the +optional.

+
n_fft
+

(int) Size of Fourier transform

+
hop_length
+

(Optional[int]) The distance between neighboring sliding window frames. +(Default: n_fft %% 4)

+
win_length
+

(Optional[int]) The size of window frame and STFT filter. +(Default: n_fft)

+
window
+

(Optional(torch.Tensor)) The optional window function. +(Default: torch_ones(win_length))

+
center
+

(bool) Whether input was padded on both sides so that the \(t\)-th frame is centered at time \(t \times \mbox{hop\_length}\). -(Default: TRUE)

normalized

(bool) Whether the STFT was normalized. (Default: FALSE)

onesided

(Optional(bool)) Whether the STFT was onesided. -(Default: TRUE if n_fft != fft_size in the input size)

length

(Optional(int)]) The amount to trim the signal by (i.e. the -original signal length). (Default: whole signal)

return_complex

(Optional(bool)) Whether the output should be complex, +(Default: TRUE)

+
normalized
+

(bool) Whether the STFT was normalized. (Default: FALSE)

+
onesided
+

(Optional(bool)) Whether the STFT was onesided. +(Default: TRUE if n_fft != fft_size in the input size)

+
length
+

(Optional(int)) The amount to trim the signal by (i.e. the
+original signal length). (Default: whole signal)

+
return_complex
+

(Optional(bool)) Whether the output should be complex, or if the input should be assumed to derive from a real signal and window. -Note that this is incompatible with onesided=TRUE. (Default: FALSE)

- -

Details

- +Note that this is incompatible with onesided=TRUE. (Default: FALSE)

+
+
+

Details

It has the same parameters (+ additional optional parameter of length) and it should return the least squares estimation of the original signal. The algorithm will check using the NOLA condition (nonzero overlap).

Important consideration in the parameters window and center so that the envelop created by the summation of all the windows is never zero at certain point in time. Specifically, \(\sum_{t=-\infty}^{\infty} |w|^2(n-t\times hop_length) \neq 0\).

-

Since torch_stft() discards elements at the end of the signal if they do not fit in a frame, +

Since torch_stft() discards elements at the end of the signal if they do not fit in a frame, istft may return a shorter signal than the original signal (can occur if center is FALSE since the signal isn't padded).

If center is TRUE, then there will be padding e.g. 'constant', 'reflect', etc. @@ -281,32 +183,29 @@ of right padding. These additional values could be zeros or a reflection of the (some loss of signal).

D. W. Griffin and J. S. Lim, "Signal estimation from modified short-time Fourier transform," IEEE Trans. ASSP, vol.32, no.2, pp.236-243, Apr. 1984.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_kaiser_window.html b/dev/reference/torch_kaiser_window.html index a79b0eeb4e041a4f9df706bd58781d78f6c24be9..e4104007a54a42c9b7beb869ac8c6c715f4a3040 100644 --- a/dev/reference/torch_kaiser_window.html +++ b/dev/reference/torch_kaiser_window.html @@ -1,79 +1,18 @@ - - - - - - - -Kaiser_window — torch_kaiser_window • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kaiser_window — torch_kaiser_window • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,60 +111,47 @@

Kaiser_window

-
torch_kaiser_window(
-  window_length,
-  periodic,
-  beta,
-  dtype = torch_float(),
-  layout = NULL,
-  device = NULL,
-  requires_grad = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
window_length

(int) length of the window.

periodic

(bool, optional) If TRUE, returns a periodic window suitable for use in spectral analysis. If FALSE, returns a symmetric window suitable for use in filter design.

beta

(float, optional) shape parameter for the window.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or stop are floating-point, the dtype is inferred to be the default dtype, see ~torch.get_default_dtype. Otherwise, the dtype is inferred to be torch.int64.

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

Note

+
+
torch_kaiser_window(
+  window_length,
+  periodic,
+  beta,
+  dtype = torch_float(),
+  layout = NULL,
+  device = NULL,
+  requires_grad = NULL
+)
+
+
+

Arguments

+
window_length
+

(int) length of the window.

+
periodic
+

(bool, optional) If TRUE, returns a periodic window suitable for use in spectral analysis. If FALSE, returns a symmetric window suitable for use in filter design.

+
beta
+

(float, optional) shape parameter for the window.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or stop are floating-point, the dtype is inferred to be the default dtype, see ~torch.get_default_dtype. Otherwise, the dtype is inferred to be torch.int64.

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

Note

If window_length is one, then the returned window is a single element tensor containing a one.

-

kaiser_window(window_length, periodic=TRUE, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=FALSE) -> Tensor

- +
+
+

kaiser_window(window_length, periodic=TRUE, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=FALSE) -> Tensor

Computes the Kaiser window with window length window_length and shape parameter beta.

-

Let I_0 be the zeroth order modified Bessel function of the first kind (see torch_i0()) and +

Let I_0 be the zeroth order modified Bessel function of the first kind (see torch_i0()) and N = L - 1 if periodic is FALSE and L if periodic is TRUE, where L is the window_length. This function computes:

$$ @@ -251,33 +160,30 @@ $$

Calling torch_kaiser_window(L, B, periodic=TRUE) is equivalent to calling torch_kaiser_window(L + 1, B, periodic=FALSE)[:-1]). The periodic argument is intended as a helpful shorthand -to produce a periodic window as input to functions like torch_stft().

+to produce a periodic window as input to functions like torch_stft().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_kthvalue.html b/dev/reference/torch_kthvalue.html index 8473d16f0ccd695f18e55534aa3c88187e7027a4..49df71c8be4ffe446773fa1738ad834edf2ed8b8 100644 --- a/dev/reference/torch_kthvalue.html +++ b/dev/reference/torch_kthvalue.html @@ -1,79 +1,18 @@ - - - - - - - -Kthvalue — torch_kthvalue • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Kthvalue — torch_kthvalue • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,31 +111,23 @@

Kthvalue

-
torch_kthvalue(self, k, dim = -1L, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

k

(int) k for the k-th smallest element

dim

(int, optional) the dimension to find the kth value along

keepdim

(bool) whether the output tensor has dim retained or not.

- -

kthvalue(input, k, dim=NULL, keepdim=False, out=NULL) -> (Tensor, LongTensor)

+
+
torch_kthvalue(self, k, dim = -1L, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
k
+

(int) k for the k-th smallest element

+
dim
+

(int, optional) the dimension to find the kth value along

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

kthvalue(input, k, dim=NULL, keepdim=False, out=NULL) -> (Tensor, LongTensor)

@@ -224,55 +138,54 @@ smallest element of each row of the input tensor in the given dimen

If keepdim is TRUE, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed -(see torch_squeeze), resulting in both the values and +(see torch_squeeze), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor.
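The k-th-smallest semantics can be illustrated in plain Python (a sketch of the math, not the torch API; note this helper returns a 0-based index, whereas R torch indices are 1-based):

```python
def kthvalue(row, k):
    # (value, 0-based index) of the k-th smallest element; k is 1-based,
    # matching the torch convention.
    order = sorted(range(len(row)), key=lambda i: row[i])
    idx = order[k - 1]
    return row[idx], idx
```

Mirroring the example below, the 4th smallest element of 1..6 is 4.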

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(1, 6)
-x
-torch_kthvalue(x, 4)
-x <- torch_arange(1,6)$resize_(c(2,3))
-x
-torch_kthvalue(x, 2, 1, TRUE)
-}
-#> [[1]]
-#> torch_tensor
-#>  4  5  6
-#> [ CPUFloatType{1,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1  1  1
-#> [ CPULongType{1,3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(1, 6)
+x
+torch_kthvalue(x, 4)
+x <- torch_arange(1,6)$resize_(c(2,3))
+x
+torch_kthvalue(x, 2, 1, TRUE)
+}
+#> [[1]]
+#> torch_tensor
+#>  4  5  6
+#> [ CPUFloatType{1,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  1  1  1
+#> [ CPULongType{1,3} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_layout.html b/dev/reference/torch_layout.html index 7dac5b0ad5cd34f7ecc62804b0dc78ce892f69ff..eccca974350dfeba009554c0759760a8ab58c5e7 100644 --- a/dev/reference/torch_layout.html +++ b/dev/reference/torch_layout.html @@ -1,79 +1,18 @@ - - - - - - - -Creates the corresponding layout — torch_layout • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates the corresponding layout — torch_layout • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,37 +111,34 @@

Creates the corresponding layout

-
torch_strided()
-
-torch_sparse_coo()
+
+
torch_strided()
 
+torch_sparse_coo()
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lcm.html b/dev/reference/torch_lcm.html index 7c39a192fc45de4c8fdf08fde83cb0b504d359a8..e8b3bd59a3385187af269e0f460225f7e9c8aa80 100644 --- a/dev/reference/torch_lcm.html +++ b/dev/reference/torch_lcm.html @@ -1,79 +1,18 @@ - - - - - - - -Lcm — torch_lcm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lcm — torch_lcm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_lcm(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

Note

+
+
torch_lcm(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

Note

This defines \(lcm(0, 0) = 0\) and \(lcm(0, a) = 0\).

-

lcm(input, other, *, out=None) -> Tensor

- +
+
+

lcm(input, other, *, out=None) -> Tensor

Computes the element-wise least common multiple (LCM) of input and other.

Both input and other must have integer types.

+
-

Examples

-
if (torch_is_installed()) {
-
-if (torch::cuda_is_available()) {
-a <- torch_tensor(c(5, 10, 15), dtype = torch_long(), device = "cuda")
-b <- torch_tensor(c(3, 4, 5), dtype = torch_long(), device = "cuda")
-torch_lcm(a, b)
-c <- torch_tensor(c(3L), device = "cuda")
-torch_lcm(a, c)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (torch::cuda_is_available()) {
+a <- torch_tensor(c(5, 10, 15), dtype = torch_long(), device = "cuda")
+b <- torch_tensor(c(3, 4, 5), dtype = torch_long(), device = "cuda")
+torch_lcm(a, b)
+c <- torch_tensor(c(3L), device = "cuda")
+torch_lcm(a, c)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_le.html b/dev/reference/torch_le.html index 88c98945b1e45150b32dde2a8713d8095b4cceec..cb0bef4bbdf038af58607cf29d679c96a4973330 100644 --- a/dev/reference/torch_le.html +++ b/dev/reference/torch_le.html @@ -1,79 +1,18 @@ - - - - - - - -Le — torch_le • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Le — torch_le • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_le(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

le(input, other, out=NULL) -> Tensor

+
+
torch_le(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

le(input, other, out=NULL) -> Tensor

Computes \(\mbox{input} \leq \mbox{other}\) element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
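A plain-Python sketch of the element-wise comparison with scalar broadcasting (an illustration of the semantics, not the torch API; `le` is a hypothetical helper):

```python
def le(xs, other):
    # Element-wise x <= other; `other` may be a scalar (broadcast over all
    # elements) or a nested list of the same shape.
    if isinstance(other, (int, float)):
        return [[x <= other for x in row] for row in xs]
    return [[x <= o for x, o in zip(row, orow)] for row, orow in zip(xs, other)]
```

On the matrices from the example below, [[1, 2], [3, 4]] <= [[1, 1], [4, 4]] gives [[TRUE, FALSE], [TRUE, TRUE]].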

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_le(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
-         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
-}
-#> torch_tensor
-#>  1  0
-#>  1  1
-#> [ CPUBoolType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_le(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
+         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
+}
+#> torch_tensor
+#>  1  0
+#>  1  1
+#> [ CPUBoolType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lerp.html b/dev/reference/torch_lerp.html index 6050251703a477b582d7965ce6cf8c148075494a..97b68fc2c37fbb9fa0eeead6457e31d1bf3cfadf 100644 --- a/dev/reference/torch_lerp.html +++ b/dev/reference/torch_lerp.html @@ -1,79 +1,18 @@ - - - - - - - -Lerp — torch_lerp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lerp — torch_lerp • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_lerp(self, end, weight)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the tensor with the starting points

end

(Tensor) the tensor with the ending points

weight

(float or tensor) the weight for the interpolation formula

- -

lerp(input, end, weight, out=NULL)

+
+
torch_lerp(self, end, weight)
+
+
+

Arguments

+
self
+

(Tensor) the tensor with the starting points

+
end
+

(Tensor) the tensor with the ending points

+
weight
+

(float or tensor) the weight for the interpolation formula

+
+
+

lerp(input, end, weight, out=NULL)

@@ -221,49 +137,48 @@ $$ The shapes of start and end must be broadcastable . If weight is a tensor, then the shapes of weight, start, and end must be broadcastable .
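The interpolation formula out = start + weight * (end - start) can be sketched in plain Python (an illustration, not the torch API; a scalar weight is broadcast):

```python
def lerp(start, end, weight):
    # out_i = start_i + weight_i * (end_i - start_i); a scalar weight is
    # broadcast across all elements.
    if isinstance(weight, (int, float)):
        weight = [weight] * len(start)
    return [s + w * (e - s) for s, e, w in zip(start, end, weight)]
```

With start = 1..4, end = 10 and weight = 0.5, this reproduces the documented output 5.5, 6.0, 6.5, 7.0.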

+
-

Examples

-
if (torch_is_installed()) {
-
-start = torch_arange(1, 4)
-end = torch_empty(4)$fill_(10)
-start
-end
-torch_lerp(start, end, 0.5)
-torch_lerp(start, end, torch_full_like(start, 0.5))
-}
-#> torch_tensor
-#>  5.5000
-#>  6.0000
-#>  6.5000
-#>  7.0000
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+start = torch_arange(1, 4)
+end = torch_empty(4)$fill_(10)
+start
+end
+torch_lerp(start, end, 0.5)
+torch_lerp(start, end, torch_full_like(start, 0.5))
+}
+#> torch_tensor
+#>  5.5000
+#>  6.0000
+#>  6.5000
+#>  7.0000
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_less.html b/dev/reference/torch_less.html index 3320d6f741cda9f2286a2843a96de643ca8449bf..d2fdcc656502fde60ec27b7872f09e2ff88385c9 100644 --- a/dev/reference/torch_less.html +++ b/dev/reference/torch_less.html @@ -1,79 +1,18 @@ - - - - - - - -Less — torch_less • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Less — torch_less • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_less(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

less(input, other, *, out=None) -> Tensor

+
+
torch_less(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

less(input, other, *, out=None) -> Tensor

-

Alias for torch_lt().

+

Alias for torch_lt().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_less_equal.html b/dev/reference/torch_less_equal.html index 9ea0aa0ddcd788dfe33393d415d174fe86247ef7..d9d28a5999851425e28f5216c71bd2849bfcdea8 100644 --- a/dev/reference/torch_less_equal.html +++ b/dev/reference/torch_less_equal.html @@ -1,79 +1,18 @@ - - - - - - - -Less_equal — torch_less_equal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Less_equal — torch_less_equal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Less_equal

-
torch_less_equal(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

less_equal(input, other, *, out=None) -> Tensor

+
+
torch_less_equal(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

less_equal(input, other, *, out=None) -> Tensor

-

Alias for torch_le().

+

Alias for torch_le().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_lgamma.html b/dev/reference/torch_lgamma.html index e78b534354750046775a4479c65e50bccf6d8d6b..8b81d496821aee1209e1e11fc1a476cb2880a7ba 100644 --- a/dev/reference/torch_lgamma.html +++ b/dev/reference/torch_lgamma.html @@ -1,79 +1,18 @@ - - - - - - - -Lgamma — torch_lgamma • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lgamma — torch_lgamma • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_lgamma(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

lgamma(input, out=NULL) -> Tensor

+
+
torch_lgamma(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

lgamma(input, out=NULL) -> Tensor

@@ -209,45 +129,44 @@

$$ \mbox{out}_{i} = \log \Gamma(\mbox{input}_{i}) $$
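The same values can be checked with Python's standard-library `math.lgamma` (an illustration of the formula, not the torch API):

```python
import math

# log-gamma at the points used in the example below: 0.5, 1.0, 1.5, 2.0
values = [math.lgamma(x) for x in (0.5, 1.0, 1.5, 2.0)]
# lgamma(0.5) = log(sqrt(pi)) ~ 0.5724; lgamma(1) = lgamma(2) = 0
```

These match the documented example output 0.5724, 0.0000, -0.1208, 0.0000.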

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_arange(0.5, 2, 0.5)
-torch_lgamma(a)
-}
-#> torch_tensor
-#>  0.5724
-#>  0.0000
-#> -0.1208
-#>  0.0000
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_arange(0.5, 2, 0.5)
+torch_lgamma(a)
+}
+#> torch_tensor
+#>  0.5724
+#>  0.0000
+#> -0.1208
+#>  0.0000
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_linspace.html b/dev/reference/torch_linspace.html index e98b3b046b43f27bb90366345580d8053b595a64..2fdfa78adb2cb77480ffd99af2e2f4d33d5d58c0 100644 --- a/dev/reference/torch_linspace.html +++ b/dev/reference/torch_linspace.html @@ -1,79 +1,18 @@ - - - - - - - -Linspace — torch_linspace • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Linspace — torch_linspace • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,95 +111,80 @@

Linspace

-
torch_linspace(
-  start,
-  end,
-  steps = 100,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
start

(float) the starting value for the set of points

end

(float) the ending value for the set of points

steps

(int) number of points to sample between start and end. Default: 100.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

linspace(start, end, steps=100, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_linspace(
+  start,
+  end,
+  steps = 100,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
start
+

(float) the starting value for the set of points

+
end
+

(float) the ending value for the set of points

+
steps
+

(int) number of points to sample between start and end. Default: 100.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

linspace(start, end, steps=100, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a one-dimensional tensor of steps equally spaced points between start and end.

The output tensor is 1-D of size steps.
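The equal spacing can be sketched in plain Python (an illustration of the semantics, not the torch API; `linspace` here is a hypothetical helper):

```python
def linspace(start, end, steps=100):
    # `steps` evenly spaced points from start to end, endpoints included
    if steps == 1:
        return [float(start)]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]
```

linspace(3, 10, steps = 5) gives 3.0, 4.75, 6.5, 8.25, 10.0, and with steps = 1 only the start point is returned, as in the example below.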

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_linspace(3, 10, steps=5)
-torch_linspace(-10, 10, steps=5)
-torch_linspace(start=-10, end=10, steps=5)
-torch_linspace(start=-10, end=10, steps=1)
-}
-#> torch_tensor
-#> -10
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_linspace(3, 10, steps=5)
+torch_linspace(-10, 10, steps=5)
+torch_linspace(start=-10, end=10, steps=5)
+torch_linspace(start=-10, end=10, steps=1)
+}
+#> torch_tensor
+#> -10
+#> [ CPUFloatType{1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_load.html b/dev/reference/torch_load.html index a08b8b98d67ac067d159741b02e24ff9ea00bd66..ac6703313b0bbce0a0863fe3051952fde5be1655 100644 --- a/dev/reference/torch_load.html +++ b/dev/reference/torch_load.html @@ -1,79 +1,18 @@ - - - - - - - -Loads a saved object — torch_load • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Loads a saved object — torch_load • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Loads a saved object

-
torch_load(path, device = "cpu")
- -

Arguments

- - - - - - - - - - -
path

a path to the saved object

device

a device to load tensors to. By default we load to the CPU, but you can also -load them to any CUDA device. If NULL, the device where the tensor was saved will -be reused.

- -

See also

+
+
torch_load(path, device = "cpu")
+
-

Other torch_save: -torch_save()

+
+

Arguments

+
path
+

a path to the saved object

+
device
+

a device to load tensors to. By default we load to the CPU, but you can also +load them to any CUDA device. If NULL, the device where the tensor was saved will +be reused.

+
+
+

See also

+

Other torch_save: +torch_save()

+
+
-
- +
- - + + diff --git a/dev/reference/torch_log.html b/dev/reference/torch_log.html index e26daea061f6c26e199b20ba36dd75f957149524..97c36e1dacbf5c0e5450b3e26d58f7c34a8ffb08 100644 --- a/dev/reference/torch_log.html +++ b/dev/reference/torch_log.html @@ -1,79 +1,18 @@ - - - - - - - -Log — torch_log • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Log — torch_log • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_log(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

log(input, out=NULL) -> Tensor

+
+
torch_log(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

log(input, out=NULL) -> Tensor

@@ -210,47 +130,46 @@ of input.

$$ y_{i} = \log_{e} (x_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5))
-a
-torch_log(a)
-}
-#> torch_tensor
-#>     nan
-#>     nan
-#> -3.6425
-#> -2.8690
-#>     nan
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5))
+a
+torch_log(a)
+}
+#> torch_tensor
+#> -0.4426
+#>     nan
+#>     nan
+#>  0.3628
+#>     nan
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_log10.html b/dev/reference/torch_log10.html index 416c500dbdec53a47784c94d9783aa10fa0e4c1f..6c0023dca0044673902c4e7e7a23d39ac0b7d7dc 100644 --- a/dev/reference/torch_log10.html +++ b/dev/reference/torch_log10.html @@ -1,79 +1,18 @@ - - - - - - - -Log10 — torch_log10 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Log10 — torch_log10 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_log10(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

log10(input, out=NULL) -> Tensor

+
+
torch_log10(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

log10(input, out=NULL) -> Tensor

@@ -210,48 +130,46 @@ of input.

$$ y_{i} = \log_{10} (x_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_rand(5)
-a
-torch_log10(a)
-}
-#> torch_tensor
-#> 0.01 *
-#> -3.1319
-#> -89.8706
-#> -90.8191
-#> -3.1578
-#> -23.1152
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_rand(5)
+a
+torch_log10(a)
+}
+#> torch_tensor
+#> -0.2518
+#> -0.8868
+#> -0.2750
+#> -1.1953
+#> -0.0861
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_log1p.html b/dev/reference/torch_log1p.html index b00bda321801c253c43b10543d06535bd5b66ba5..d25bbd1580609e8456a6d4377e01ce6848dd58a6 100644 --- a/dev/reference/torch_log1p.html +++ b/dev/reference/torch_log1p.html @@ -1,79 +1,18 @@ - - - - - - - -Log1p — torch_log1p • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Log1p — torch_log1p • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_log1p(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

Note

+
+
torch_log1p(self)
+
-

This function is more accurate than torch_log for small +

+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

Note

+

This function is more accurate than torch_log for small values of input

-

log1p(input, out=NULL) -> Tensor

- +
+
+

log1p(input, out=NULL) -> Tensor

@@ -213,47 +134,47 @@ values of input

$$ y_i = \log_{e} (x_i + 1) $$
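The accuracy advantage over a plain log for small inputs can be seen with Python's standard-library `math.log1p` (an illustration of the numerical point, not the torch API):

```python
import math

x = 1e-10
# Naive log(1 + x) loses precision because 1 + x rounds in double precision;
# log1p evaluates log(1 + x) accurately for small x (log1p(x) ~ x here).
naive = math.log(1 + x)
accurate = math.log1p(x)
```

For x = 1e-10, `accurate` agrees with x to roughly machine precision, while `naive` carries a much larger rounding error.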

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5))
-a
-torch_log1p(a)
-}
-#> torch_tensor
-#>     nan
-#>  1.0895
-#>  1.2127
-#>  0.8769
-#> -0.1600
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5))
+a
+torch_log1p(a)
+}
+#> torch_tensor
+#> 0.01 *
+#>     nan
+#>     nan
+#>     nan
+#>  6.4439
+#> -212.4724
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_log2.html b/dev/reference/torch_log2.html index 9be04e34069653bc51d89d0d6fd16a88ccfb125d..a3a4f3232fdff79e60c25e6c2582b71389636d0f 100644 --- a/dev/reference/torch_log2.html +++ b/dev/reference/torch_log2.html @@ -1,79 +1,18 @@ - - - - - - - -Log2 — torch_log2 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Log2 — torch_log2 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_log2(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

log2(input, out=NULL) -> Tensor

+
+
torch_log2(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

log2(input, out=NULL) -> Tensor

@@ -210,47 +130,46 @@ of input.

$$ y_{i} = \log_{2} (x_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_rand(5)
-a
-torch_log2(a)
-}
-#> torch_tensor
-#> -1.2392
-#> -0.1789
-#> -1.0968
-#> -0.3876
-#> -1.6382
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_rand(5)
+a
+torch_log2(a)
+}
+#> torch_tensor
+#> -5.7196
+#> -0.4841
+#> -0.3836
+#> -0.1539
+#> -2.7668
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logaddexp.html b/dev/reference/torch_logaddexp.html index fff4473cef9303a331d8b3fd24d58cb7629958c8..76d5cdf07deb4dc02d9ecc685e10acb49f8ec3b8 100644 --- a/dev/reference/torch_logaddexp.html +++ b/dev/reference/torch_logaddexp.html @@ -1,79 +1,18 @@ - - - - - - - -Logaddexp — torch_logaddexp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logaddexp — torch_logaddexp • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Logaddexp

-
torch_logaddexp(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

logaddexp(input, other, *, out=None) -> Tensor

+
+
torch_logaddexp(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

logaddexp(input, other, *, out=None) -> Tensor

@@ -215,47 +133,46 @@ in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the logarithm of the calculated probability is stored. This function allows adding probabilities stored in such a fashion.

-

This op should be disambiguated with torch_logsumexp() which performs a +

This op should be disambiguated with torch_logsumexp() which performs a reduction on a single tensor.
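The stable evaluation trick behind this op — factor out the larger exponent so the remaining exponential cannot overflow or underflow to zero prematurely — can be sketched in plain Python (an illustration, not the torch API):

```python
import math

def logaddexp(a, b):
    # Numerically stable log(exp(a) + exp(b)): factor out the larger term
    # so the remaining exponential stays in [0, 1].
    hi, lo = (a, b) if a >= b else (b, a)
    if lo == float("-inf"):
        return hi
    return hi + math.log1p(math.exp(lo - hi))
```

This reproduces the documented behavior: logaddexp(-1, -2) ~ -0.6867, and for widely separated inputs like (30000, -3) the result is just the larger input.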

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logaddexp(torch_tensor(c(-1.0)), torch_tensor(c(-1.0, -2, -3)))
-torch_logaddexp(torch_tensor(c(-100.0, -200, -300)), torch_tensor(c(-1.0, -2, -3)))
-torch_logaddexp(torch_tensor(c(1.0, 2000, 30000)), torch_tensor(c(-1.0, -2, -3)))
-}
-#> torch_tensor
-#>      1.1269
-#>   2000.0000
-#>  30000.0000
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logaddexp(torch_tensor(c(-1.0)), torch_tensor(c(-1.0, -2, -3)))
+torch_logaddexp(torch_tensor(c(-100.0, -200, -300)), torch_tensor(c(-1.0, -2, -3)))
+torch_logaddexp(torch_tensor(c(1.0, 2000, 30000)), torch_tensor(c(-1.0, -2, -3)))
+}
+#> torch_tensor
+#>      1.1269
+#>   2000.0000
+#>  30000.0000
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logaddexp2.html b/dev/reference/torch_logaddexp2.html index 64afff8c0413db937115fd3acbfb7484fea75231..c71c65ca756a2e03ac974ec05a71774202534bff 100644 --- a/dev/reference/torch_logaddexp2.html +++ b/dev/reference/torch_logaddexp2.html @@ -1,79 +1,18 @@ - - - - - - - -Logaddexp2 — torch_logaddexp2 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logaddexp2 — torch_logaddexp2 • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,55 +111,48 @@

Logaddexp2

-
torch_logaddexp2(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

logaddexp2(input, other, *, out=None) -> Tensor

+
+
torch_logaddexp2(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

logaddexp2(input, other, *, out=None) -> Tensor

Logarithm of the sum of exponentiations of the inputs in base-2.

Calculates pointwise \(\log_2\left(2^x + 2^y\right)\). See -torch_logaddexp() for more details.

+torch_logaddexp() for more details.
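The base-2 variant \(\log_2(2^x + 2^y)\) follows the same stable pattern (a plain-Python illustration, not the torch API):

```python
import math

def logaddexp2(a, b):
    # log2(2^a + 2^b), computed stably by factoring out the larger term
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log2(1 + 2 ** (lo - hi))
```

For example, logaddexp2(1, 1) = log2(2 + 2) = 2.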

+
+
-
- +
- - + + diff --git a/dev/reference/torch_logcumsumexp.html b/dev/reference/torch_logcumsumexp.html index 3a487bd5e0287c4ef032dc2e097d9c48fcd74a5e..efc94a126b471adaf32e8be3a9791d427f468b50 100644 --- a/dev/reference/torch_logcumsumexp.html +++ b/dev/reference/torch_logcumsumexp.html @@ -1,79 +1,18 @@ - - - - - - - -Logcumsumexp — torch_logcumsumexp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logcumsumexp — torch_logcumsumexp • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Logcumsumexp

-
torch_logcumsumexp(self, dim)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to do the operation over

- -

logcumsumexp(input, dim, *, out=None) -> Tensor

+
+
torch_logcumsumexp(self, dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to do the operation over

+
+
+

logcumsumexp(input, dim, *, out=None) -> Tensor

@@ -215,51 +133,50 @@ elements of input in the dimension dim.

$$ \mbox{logcumsumexp}(x)_{ij} = \log \sum\limits_{j=0}^{i} \exp(x_{ij}) $$
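The cumulative sum in the formula can be accumulated stably with a running pairwise log-add-exp (a plain-Python illustration, not the torch API):

```python
import math

def logcumsumexp(xs):
    # Running log-sum-exp: out[i] = log(sum_{j<=i} exp(x[j])), accumulated
    # stably by factoring out the larger term at each step.
    out, acc = [], float("-inf")
    for x in xs:
        hi, lo = (acc, x) if acc >= x else (x, acc)
        acc = hi if lo == float("-inf") else hi + math.log1p(math.exp(lo - hi))
        out.append(acc)
    return out
```

For an input of three zeros the running values are log(1), log(2), log(3), since each step adds one more exp(0) = 1 to the sum.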

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(10))
-torch_logcumsumexp(a, dim=1)
-}
-#> torch_tensor
-#> -1.5370
-#>  1.2680
-#>  1.5851
-#>  1.6548
-#>  1.7451
-#>  2.1327
-#>  2.1618
-#>  2.2632
-#>  2.3241
-#>  2.3405
-#> [ CPUFloatType{10} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(10))
+torch_logcumsumexp(a, dim=1)
+}
+#> torch_tensor
+#> -1.4950
+#>  1.7300
+#>  1.7635
+#>  1.8110
+#>  2.0972
+#>  2.3175
+#>  2.3406
+#>  2.4778
+#>  2.5082
+#>  2.5843
+#> [ CPUFloatType{10} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logdet.html b/dev/reference/torch_logdet.html index 1ca2beb5aee244dcfeb32d8f63a943aea14e392a..fe93c3920c1aa7a22da61082fdaf5f8607a4a1ef 100644 --- a/dev/reference/torch_logdet.html +++ b/dev/reference/torch_logdet.html @@ -1,79 +1,18 @@ - - - - - - - -Logdet — torch_logdet • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logdet — torch_logdet • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_logdet(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

- -

Note

+
+
torch_logdet(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

+
+
+

Note

-
Result is `-inf` if `input` has zero determinant, and is `NaN` if
+
Result is `-inf` if `input` has zero determinant, and is `NaN` if
 `input` has negative determinant.
-
+
-
Backward through `logdet` internally uses SVD results when `input`
+
Backward through `logdet` internally uses SVD results when `input`
 is not invertible. In this case, double backward through `logdet` will
 be unstable when `input` doesn't have distinct singular values. See
 `~torch.svd` for details.
-
- -

logdet(input) -> Tensor

+
+
+
+

logdet(input) -> Tensor

Calculates log determinant of a square matrix or batches of square matrices.
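For the 2x2 case the log determinant reduces to simple arithmetic, which also makes the special cases from the note above concrete (a plain-Python illustration, not the torch API; `logdet_2x2` is a hypothetical helper):

```python
import math

def logdet_2x2(m):
    # log-determinant of a 2x2 matrix; -inf for a zero determinant and NaN
    # for a negative one, as noted above.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    if det == 0:
        return float("-inf")
    if det < 0:
        return float("nan")
    return math.log(det)
```

For example, a diagonal matrix with entries 2 and 3 has log determinant log(6).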

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_randn(c(3, 3))
-torch_det(A)
-torch_logdet(A)
-A
-A$det()
-A$det()$log()
-}
-#> torch_tensor
-#> nan
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_randn(c(3, 3))
+torch_det(A)
+torch_logdet(A)
+A
+A$det()
+A$det()$log()
+}
+#> torch_tensor
+#> 0.547764
+#> [ CPUFloatType{} ]
+
+
+ -
- +
- - + + diff --git a/dev/reference/torch_logical_and.html b/dev/reference/torch_logical_and.html index 7f556decc3edfa4184d19bcda3951bcded618078..9d4bb17f9442e9b52fbc8f7ba96b4839ff73fdc6 100644 --- a/dev/reference/torch_logical_and.html +++ b/dev/reference/torch_logical_and.html @@ -1,79 +1,18 @@ - - - - - - - -Logical_and — torch_logical_and • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logical_and — torch_logical_and • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,66 +111,61 @@

Logical_and

-
torch_logical_and(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the tensor to compute AND with

- -

logical_and(input, other, out=NULL) -> Tensor

+
+
torch_logical_and(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the tensor to compute AND with

+
+
+

logical_and(input, other, out=NULL) -> Tensor

Computes the element-wise logical AND of the given input tensors. Zeros are treated as FALSE and nonzeros are treated as TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logical_and(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
-a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
-b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
-torch_logical_and(a, b)
-if (FALSE) {
-torch_logical_and(a, b, out=torch_empty(4, dtype=torch_bool()))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logical_and(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
+a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
+b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
+torch_logical_and(a, b)
+if (FALSE) {
+torch_logical_and(a, b, out=torch_empty(4, dtype=torch_bool()))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logical_not.html b/dev/reference/torch_logical_not.html index 6ee928c99c9fe2dcb7736489cd095589871ab02c..617d9b262a992be110532782f2d7a09a3e8d88a2 100644 --- a/dev/reference/torch_logical_not.html +++ b/dev/reference/torch_logical_not.html @@ -1,79 +1,18 @@ - - - - - - - -Logical_not — torch_logical_not • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logical_not — torch_logical_not • torch - - - - - - - - + + -
-
- -
- -
+
@@ -190,61 +112,56 @@
-

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

logical_not(input, out=NULL) -> Tensor

- +
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

logical_not(input, out=NULL) -> Tensor

Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as FALSE and non-zeros are treated as TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logical_not(torch_tensor(c(TRUE, FALSE)))
-torch_logical_not(torch_tensor(c(0, 1, -10), dtype=torch_int8()))
-torch_logical_not(torch_tensor(c(0., 1.5, -10.), dtype=torch_double()))
-}
-#> torch_tensor
-#>  1
-#>  0
-#>  0
-#> [ CPUBoolType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logical_not(torch_tensor(c(TRUE, FALSE)))
+torch_logical_not(torch_tensor(c(0, 1, -10), dtype=torch_int8()))
+torch_logical_not(torch_tensor(c(0., 1.5, -10.), dtype=torch_double()))
+}
+#> torch_tensor
+#>  1
+#>  0
+#>  0
+#> [ CPUBoolType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logical_or.html b/dev/reference/torch_logical_or.html index ce3577a1a4bf8dd39363f46733dde6390e8ebe2f..f8f95d43e78052220d28a5da409915c5996be748 100644 --- a/dev/reference/torch_logical_or.html +++ b/dev/reference/torch_logical_or.html @@ -1,79 +1,18 @@ - - - - - - - -Logical_or — torch_logical_or • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logical_or — torch_logical_or • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,68 +111,63 @@

Logical_or

-
torch_logical_or(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the tensor to compute OR with

- -

logical_or(input, other, out=NULL) -> Tensor

+
+
torch_logical_or(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the tensor to compute OR with

+
+
+

logical_or(input, other, out=NULL) -> Tensor

Computes the element-wise logical OR of the given input tensors. Zeros are treated as FALSE and nonzeros are treated as TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logical_or(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
-a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
-b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
-torch_logical_or(a, b)
-if (FALSE) {
-torch_logical_or(a$double(), b$double())
-torch_logical_or(a$double(), b)
-torch_logical_or(a, b, out=torch_empty(4, dtype=torch_bool()))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logical_or(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
+a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
+b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
+torch_logical_or(a, b)
+if (FALSE) {
+torch_logical_or(a$double(), b$double())
+torch_logical_or(a$double(), b)
+torch_logical_or(a, b, out=torch_empty(4, dtype=torch_bool()))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logical_xor.html b/dev/reference/torch_logical_xor.html index 0f1e083a78882fe974e362397a36e274d73c0581..5f4ac1d541e70a3d8951d6861928b083cfc209e2 100644 --- a/dev/reference/torch_logical_xor.html +++ b/dev/reference/torch_logical_xor.html @@ -1,79 +1,18 @@ - - - - - - - -Logical_xor — torch_logical_xor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logical_xor — torch_logical_xor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,71 +111,66 @@

Logical_xor

-
torch_logical_xor(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the tensor to compute XOR with

- -

logical_xor(input, other, out=NULL) -> Tensor

+
+
torch_logical_xor(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the tensor to compute XOR with

+
+
+

logical_xor(input, other, out=NULL) -> Tensor

Computes the element-wise logical XOR of the given input tensors. Zeros are treated as FALSE and nonzeros are treated as TRUE.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logical_xor(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
-a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
-b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
-torch_logical_xor(a, b)
-torch_logical_xor(a$to(dtype=torch_double()), b$to(dtype=torch_double()))
-torch_logical_xor(a$to(dtype=torch_double()), b)
-}
-#> torch_tensor
-#>  1
-#>  1
-#>  0
-#>  0
-#> [ CPUBoolType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logical_xor(torch_tensor(c(TRUE, FALSE, TRUE)), torch_tensor(c(TRUE, FALSE, FALSE)))
+a = torch_tensor(c(0, 1, 10, 0), dtype=torch_int8())
+b = torch_tensor(c(4, 0, 1, 0), dtype=torch_int8())
+torch_logical_xor(a, b)
+torch_logical_xor(a$to(dtype=torch_double()), b$to(dtype=torch_double()))
+torch_logical_xor(a$to(dtype=torch_double()), b)
+}
+#> torch_tensor
+#>  1
+#>  1
+#>  0
+#>  0
+#> [ CPUBoolType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logit.html b/dev/reference/torch_logit.html index 7ee2d8e3ba8ed93478db7a16f714cbd4ad1b292b..ed67113b0e3a975a0f27e076a5eba7001b082cd9 100644 --- a/dev/reference/torch_logit.html +++ b/dev/reference/torch_logit.html @@ -1,79 +1,18 @@ - - - - - - - -Logit — torch_logit • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logit — torch_logit • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_logit(self, eps = NULL)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

eps

(float, optional) the epsilon for input clamp bound. Default: None

- -

logit(input, eps=None, *, out=None) -> Tensor

+
+
torch_logit(self, eps = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
eps
+

(float, optional) the epsilon for input clamp bound. Default: None

+
+
+

logit(input, eps=None, *, out=None) -> Tensor

@@ -221,47 +139,46 @@ When eps is None and input < 0 or input > 1, the 1 - \mbox{eps} & \mbox{if } x_{i} > 1 - \mbox{eps} \end{array} $$
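The clamping behaviour can be sketched in plain Python: when `eps` is given, the input is clamped to `[eps, 1 - eps]` before applying `log(x / (1 - x))`. This is an illustrative sketch of the piecewise definition above, not the torch code path:

```python
import math

def logit(x, eps=None):
    # Clamp to [eps, 1 - eps] when eps is given, mirroring the piecewise
    # clamp in the definition above (a sketch, not the torch implementation).
    if eps is not None:
        x = min(max(x, eps), 1 - eps)
    return math.log(x / (1 - x))

val = logit(0.5)                 # logit of 0.5 is exactly 0
clamped = logit(1.5, eps=1e-6)   # out-of-range input is clamped to 1 - eps
```

Without `eps`, an input outside (0, 1) would produce `NaN` or a domain error; the clamp keeps the result finite.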

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_rand(5)
-a
-torch_logit(a, eps=1e-6)
-}
-#> torch_tensor
-#>  0.1445
-#>  0.1620
-#>  0.0578
-#>  2.8378
-#>  1.9980
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_rand(5)
+a
+torch_logit(a, eps=1e-6)
+}
+#> torch_tensor
+#>  2.2091
+#>  1.5547
+#> -1.8264
+#> -0.1012
+#> -1.9238
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logspace.html b/dev/reference/torch_logspace.html index b6b3b90f666083bc5d532c31c6e7378d78127e57..607dabdc9c5bbcf718c810b7d2c8d7fe7ad8adc5 100644 --- a/dev/reference/torch_logspace.html +++ b/dev/reference/torch_logspace.html @@ -1,79 +1,18 @@ - - - - - - - -Logspace — torch_logspace • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logspace — torch_logspace • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,56 +111,40 @@

Logspace

-
torch_logspace(
-  start,
-  end,
-  steps = 100,
-  base = 10,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
start

(float) the starting value for the set of points

end

(float) the ending value for the set of points

steps

(int) number of points to sample between start and end. Default: 100.

base

(float) base of the logarithm function. Default: 10.0.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

logspace(start, end, steps=100, base=10.0, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_logspace(
+  start,
+  end,
+  steps = 100,
+  base = 10,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
start
+

(float) the starting value for the set of points

+
end
+

(float) the ending value for the set of points

+
steps
+

(int) number of points to sample between start and end. Default: 100.

+
base
+

(float) base of the logarithm function. Default: 10.0.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

logspace(start, end, steps=100, base=10.0, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -246,44 +152,43 @@ logarithmically spaced with base base between \({\mbox{base}}^{\mbox{start}}\) and \({\mbox{base}}^{\mbox{end}}\).

The output tensor is 1-D of size steps.
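The construction can be sketched in plain Python: raise `base` to `steps` evenly spaced exponents between `start` and `end`. This is an assumed sketch of the documented behaviour, not the torch code path:

```python
def logspace(start, end, steps=100, base=10.0):
    # base ** (exponents evenly spaced between start and end); a single
    # step degenerates to base ** start.
    if steps == 1:
        return [base ** start]
    step = (end - start) / (steps - 1)
    return [base ** (start + i * step) for i in range(steps)]

vals = logspace(2, 2, steps=1, base=2)  # matches the example: 2 ** 2
```

The last example below, `torch_logspace(start=2, end=2, steps=1, base=2)`, prints `4` for the same reason.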

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_logspace(start=-10, end=10, steps=5)
-torch_logspace(start=0.1, end=1.0, steps=5)
-torch_logspace(start=0.1, end=1.0, steps=1)
-torch_logspace(start=2, end=2, steps=1, base=2)
-}
-#> torch_tensor
-#>  4
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_logspace(start=-10, end=10, steps=5)
+torch_logspace(start=0.1, end=1.0, steps=5)
+torch_logspace(start=0.1, end=1.0, steps=1)
+torch_logspace(start=2, end=2, steps=1, base=2)
+}
+#> torch_tensor
+#>  4
+#> [ CPUFloatType{1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_logsumexp.html b/dev/reference/torch_logsumexp.html index f9491c3a05f8331162b2869b5d49a137973fd0de..f6fdbb4dfb702ee9bdd4f8cd295c2b17bbca4c13 100644 --- a/dev/reference/torch_logsumexp.html +++ b/dev/reference/torch_logsumexp.html @@ -1,79 +1,18 @@ - - - - - - - -Logsumexp — torch_logsumexp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Logsumexp — torch_logsumexp • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Logsumexp

-
torch_logsumexp(self, dim, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

- -

logsumexp(input, dim, keepdim=False, out=NULL)

+
+
torch_logsumexp(self, dim, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

logsumexp(input, dim, keepdim=False, out=NULL)

@@ -222,46 +138,45 @@ stabilized.

$$

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
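The numerical stabilisation mentioned above is the standard max-subtraction trick; a plain-Python sketch (illustrative, not the torch implementation):

```python
import math

def logsumexp(xs):
    # Subtract the maximum before exponentiating so the intermediate sum
    # cannot overflow; the subtracted max is added back at the end.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

stable = logsumexp([1000.0, 1000.0])  # naive exp(1000) would overflow
small = logsumexp([0.0, 0.0])         # log(e^0 + e^0) = log 2
```

The naive form `log(sum(exp(x)))` overflows for inputs around 1000; the stabilised form returns `1000 + log(2)` exactly as expected.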

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-torch_logsumexp(a, 1)
-}
-#> torch_tensor
-#>  1.2870
-#>  1.3744
-#>  0.5878
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+torch_logsumexp(a, 1)
+}
+#> torch_tensor
+#>  2.5873
+#>  1.6834
+#>  1.4515
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lstsq.html b/dev/reference/torch_lstsq.html index 64ba35e69e71ed05361d7515243057049586e3a6..660eae4f859aa37124e97b68fe65bfaeca2aeaf9 100644 --- a/dev/reference/torch_lstsq.html +++ b/dev/reference/torch_lstsq.html @@ -1,79 +1,18 @@ - - - - - - - -Lstsq — torch_lstsq • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lstsq — torch_lstsq • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_lstsq(self, A)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the matrix \(B\)

A

(Tensor) the \(m\) by \(n\) matrix \(A\)

- -

Note

+
+
torch_lstsq(self, A)
+
+
+

Arguments

+
self
+

(Tensor) the matrix \(B\)

+
A
+

(Tensor) the \(m\) by \(n\) matrix \(A\)

+
+
+

Note

-
The case when \(m < n\) is not supported on the GPU.
-
- -

lstsq(input, A, out=NULL) -> Tensor

+
The case when \(m < n\) is not supported on the GPU.
+
+
+
+

lstsq(input, A, out=NULL) -> Tensor

@@ -234,60 +153,59 @@ Returned tensor \(X\) has shape \((\mbox{max}(m, n) \times k)\). The first \(n\) rows of \(X\) contains the solution. If \(m \geq n\), the residual sum of squares for the solution in each column is given by the sum of squares of elements in the remaining \(m - n\) rows of that column.
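The relationship between the solution rows and the residual rows can be illustrated with a minimal single-column least-squares fit in plain Python (normal equations; a sketch only, not the LAPACK routine torch uses):

```python
def lstsq_1col(a_col, b):
    # Ordinary least squares for a one-column A via the normal equation
    # x = (A^T A)^{-1} A^T b, plus the residual sum of squares that the
    # extra rows of torch_lstsq's result encode when m >= n.
    ata = sum(a * a for a in a_col)
    atb = sum(a * v for a, v in zip(a_col, b))
    x = atb / ata
    rss = sum((v - a * x) ** 2 for a, v in zip(a_col, b))
    return x, rss

# Fitting a constant: the solution is the mean of b, here 7/3.
x, rss = lstsq_1col([1.0, 1.0, 1.0], [1.0, 2.0, 4.0])
```

Here `m = 3`, `n = 1`, so one row carries the solution and the residual sum of squares summarises the remaining rows.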

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_tensor(rbind(
- c(1,1,1),
- c(2,3,4),
- c(3,5,2),
- c(4,2,5),
- c(5,4,3)
-))
-B = torch_tensor(rbind(
- c(-10, -3),
- c(12, 14),
- c(14, 12),
- c(16, 16),
- c(18, 16)
-))
-out = torch_lstsq(B, A)
-out[[1]]
-}
-#> torch_tensor
-#>   2.0000   1.0000
-#>   1.0000   1.0000
-#>   1.0000   2.0000
-#>  10.9635   4.8501
-#>   8.9332   5.2418
-#> [ CPUFloatType{5,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_tensor(rbind(
+ c(1,1,1),
+ c(2,3,4),
+ c(3,5,2),
+ c(4,2,5),
+ c(5,4,3)
+))
+B = torch_tensor(rbind(
+ c(-10, -3),
+ c(12, 14),
+ c(14, 12),
+ c(16, 16),
+ c(18, 16)
+))
+out = torch_lstsq(B, A)
+out[[1]]
+}
+#> torch_tensor
+#>   2.0000   1.0000
+#>   1.0000   1.0000
+#>   1.0000   2.0000
+#>  10.9635   4.8501
+#>   8.9332   5.2418
+#> [ CPUFloatType{5,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lt.html b/dev/reference/torch_lt.html index 6f2ffd7bbfed14b1f4ba3bf4f39498f7397355bb..71d6f7b3366137e05ef509080cf31fc8b0097a67 100644 --- a/dev/reference/torch_lt.html +++ b/dev/reference/torch_lt.html @@ -1,79 +1,18 @@ - - - - - - - -Lt — torch_lt • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lt — torch_lt • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_lt(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

lt(input, other, out=NULL) -> Tensor

+
+
torch_lt(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

lt(input, other, out=NULL) -> Tensor

Computes \(\mbox{input} < \mbox{other}\) element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_lt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
-         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
-}
-#> torch_tensor
-#>  0  0
-#>  1  0
-#> [ CPUBoolType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_lt(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
+         torch_tensor(matrix(c(1,1,4,4), ncol = 2, byrow=TRUE)))
+}
+#> torch_tensor
+#>  0  0
+#>  1  0
+#> [ CPUBoolType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lu.html b/dev/reference/torch_lu.html index 8361e3d4cf034397ecefa8880baa84a32debe800..27489ca745628c25f90432a807a67476042afdcc 100644 --- a/dev/reference/torch_lu.html +++ b/dev/reference/torch_lu.html @@ -1,81 +1,20 @@ - - - - - - - -LU — torch_lu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -LU — torch_lu • torch - - - - - - - - - - - - - - - - - + + -
-
- -
- -
+
@@ -193,84 +115,74 @@ tuple containing the LU factorization and pivots of A. Pivoting is done if pivot is set to True.

-
torch_lu(A, pivot = TRUE, get_infos = FALSE, out = NULL)
+
+
torch_lu(A, pivot = TRUE, get_infos = FALSE, out = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
A

(Tensor) the tensor to factor of size \((*, m, n)\)

pivot

(bool, optional) – controls whether pivoting is done. Default: TRUE

get_infos

(bool, optional) – if set to True, returns an info IntTensor. Default: FALSE

out

(tuple, optional) – optional output tuple. If get_infos is True, then the elements +

+

Arguments

+
A
+

(Tensor) the tensor to factor of size \((*, m, n)\)

+
pivot
+

(bool, optional) – controls whether pivoting is done. Default: TRUE

+
get_infos
+

(bool, optional) – if set to True, returns an info IntTensor. Default: FALSE

+
out
+

(tuple, optional) – optional output tuple. If get_infos is True, then the elements in the tuple are Tensor, IntTensor, and IntTensor. If get_infos is False, then the -elements in the tuple are Tensor, IntTensor. Default: NULL

- - -

Examples

-
if (torch_is_installed()) {
-
-A = torch_randn(c(2, 3, 3))
-torch_lu(A)
-
-}
-#> [[1]]
-#> torch_tensor
-#> (1,.,.) = 
-#>  -1.1690 -0.3246 -0.0451
-#>   0.0292  0.4267  1.1366
-#>  -0.3251 -0.1691  0.9301
-#> 
-#> (2,.,.) = 
-#>   1.8523 -1.9457  0.8976
-#>  -0.5732 -1.6425  0.3803
-#>  -0.0552 -0.2128 -0.7883
-#> [ CPUFloatType{2,3,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1  3  3
-#>  2  2  3
-#> [ CPUIntType{2,3} ]
-#> 
-
+elements in the tuple are Tensor, IntTensor. Default: NULL

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_randn(c(2, 3, 3))
+torch_lu(A)
+
+}
+#> [[1]]
+#> torch_tensor
+#> (1,.,.) = 
+#>  -1.8450  0.5889  1.0984
+#>   0.2584  1.2155 -1.6606
+#>  -0.2986  0.6794  0.8909
+#> 
+#> (2,.,.) = 
+#>   0.9005  0.1914  0.1164
+#>   0.7049  1.8139 -0.0791
+#>   0.2704  0.3101 -1.3346
+#> [ CPUFloatType{2,3,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  3  2  3
+#>  3  2  3
+#> [ CPUIntType{2,3} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_lu_solve.html b/dev/reference/torch_lu_solve.html index 76da53d9a31f0ebe0c30c0078fa71d264397455b..e1300f7487b128d59a5cbdd35a8232d16c2ab3df 100644 --- a/dev/reference/torch_lu_solve.html +++ b/dev/reference/torch_lu_solve.html @@ -1,79 +1,18 @@ - - - - - - - -Lu_solve — torch_lu_solve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Lu_solve — torch_lu_solve • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,70 +111,63 @@

Lu_solve

-
torch_lu_solve(self, LU_data, LU_pivots)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the RHS tensor of size \((*, m, k)\), where \(*\) is zero or more batch dimensions.

LU_data

(Tensor) the pivoted LU factorization of A from torch_lu of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

LU_pivots

(IntTensor) the pivots of the LU factorization from torch_lu of size \((*, m)\), where \(*\) is zero or more batch dimensions. The batch dimensions of LU_pivots must be equal to the batch dimensions of LU_data.

- -

lu_solve(input, LU_data, LU_pivots, out=NULL) -> Tensor

+
+
torch_lu_solve(self, LU_data, LU_pivots)
+
+
+

Arguments

+
self
+

(Tensor) the RHS tensor of size \((*, m, k)\), where \(*\) is zero or more batch dimensions.

+
LU_data
+

(Tensor) the pivoted LU factorization of A from torch_lu of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

+
LU_pivots
+

(IntTensor) the pivots of the LU factorization from torch_lu of size \((*, m)\), where \(*\) is zero or more batch dimensions. The batch dimensions of LU_pivots must be equal to the batch dimensions of LU_data.

+
+
+

lu_solve(input, LU_data, LU_pivots, out=NULL) -> Tensor

Returns the LU solve of the linear system \(Ax = b\) using the partially pivoted LU factorization of A from torch_lu.
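The factor-then-solve split can be sketched for a 2x2 system in plain Python: Doolittle LU followed by forward and back substitution. This is an illustrative sketch (torch additionally pivots, which this toy version omits):

```python
def lu_solve_2x2(A, b):
    # Doolittle LU without pivoting: A = L U with unit-diagonal L,
    # then solve L y = b (forward) and U x = y (backward).
    l21 = A[1][0] / A[0][0]
    u11, u12 = A[0][0], A[0][1]
    u22 = A[1][1] - l21 * A[0][1]
    y1 = b[0]                  # forward substitution: L y = b
    y2 = b[1] - l21 * y1
    x2 = y2 / u22              # back substitution: U x = y
    x1 = (y1 - u12 * x2) / u11
    return [x1, x2]

# 4x + 3y = 10, 6x + 3y = 12  =>  x = 1, y = 2
x = lu_solve_2x2([[4.0, 3.0], [6.0, 3.0]], [10.0, 12.0])
```

The point of the split is reuse: the factorization is computed once and can solve many right-hand sides, which is what `torch_lu` plus `torch_lu_solve` expose.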

+
-

Examples

-
if (torch_is_installed()) {
-A = torch_randn(c(2, 3, 3))
-b = torch_randn(c(2, 3, 1))
-out = torch_lu(A)
-x = torch_lu_solve(b, out[[1]], out[[2]])
-torch_norm(torch_bmm(A, x) - b)
-}
-#> torch_tensor
-#> 3.27571e-07
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+A = torch_randn(c(2, 3, 3))
+b = torch_randn(c(2, 3, 1))
+out = torch_lu(A)
+x = torch_lu_solve(b, out[[1]], out[[2]])
+torch_norm(torch_bmm(A, x) - b)
+}
+#> torch_tensor
+#> 2.30848e-07
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_manual_seed.html b/dev/reference/torch_manual_seed.html index f3898bd4344b8d26bd227cbc141676f5f3e7f6ec..2e3c4dfdd1b565cb7d59003cd43d3c00e4fc0e32 100644 --- a/dev/reference/torch_manual_seed.html +++ b/dev/reference/torch_manual_seed.html @@ -1,79 +1,18 @@ - - - - - - - -Sets the seed for generating random numbers. — torch_manual_seed • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sets the seed for generating random numbers. — torch_manual_seed • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,43 +111,37 @@

Sets the seed for generating random numbers.

-
torch_manual_seed(seed)
- -

Arguments

- - - - - - -
seed

integer seed.

+
+
torch_manual_seed(seed)
+
+
+

Arguments

+
seed
+

integer seed.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_masked_select.html b/dev/reference/torch_masked_select.html index a3d138f444a9c27c4660e56017818b1666f5d261..e0c409d55d6e191b52858862d2ae61744eaceeea 100644 --- a/dev/reference/torch_masked_select.html +++ b/dev/reference/torch_masked_select.html @@ -1,79 +1,18 @@ - - - - - - - -Masked_select — torch_masked_select • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Masked_select — torch_masked_select • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,24 @@

Masked_select

-
torch_masked_select(self, mask)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

mask

(BoolTensor) the tensor containing the binary mask to index with

- -

Note

+
+
torch_masked_select(self, mask)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
mask
+

(BoolTensor) the tensor containing the binary mask to index with

+
+
+

Note

The returned tensor does not use the same storage as the original tensor

-

masked_select(input, mask, out=NULL) -> Tensor

- +
+
+

masked_select(input, mask, out=NULL) -> Tensor

@@ -217,50 +136,48 @@ as the original tensor

the boolean mask mask which is a BoolTensor.

The shapes of the mask tensor and the input tensor don't need to match, but they must be broadcastable .
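For equal-shaped inputs the selection reduces to filtering the flattened values by the mask; a plain-Python sketch (broadcasting between mask and input, which torch supports, is omitted here):

```python
def masked_select(values, mask):
    # Keep values where the boolean mask is true; the result is always
    # a flat (1-D) sequence, as with torch_masked_select.
    return [v for v, m in zip(values, mask) if m]

x = [0.1, 0.9, -0.3, 1.2]
picked = masked_select(x, [v >= 0.5 for v in x])
```

This mirrors the example below, where `x$ge(0.5)` builds the mask and the output is a 1-D tensor of the selected entries.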

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(3, 4))
-x
-mask = x$ge(0.5)
-mask
-torch_masked_select(x, mask)
-}
-#> torch_tensor
-#>  0.6649
-#>  1.7534
-#>  0.7085
-#>  0.6806
-#>  1.4154
-#>  0.8181
-#> [ CPUFloatType{6} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(3, 4))
+x
+mask = x$ge(0.5)
+mask
+torch_masked_select(x, mask)
+}
+#> torch_tensor
+#>  0.5900
+#>  0.9864
+#>  1.1042
+#>  1.2145
+#>  1.7813
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_matmul.html b/dev/reference/torch_matmul.html index bfb662d2b54b78e7ec87129629328ebc88b4889b..7b8049ef9a6d36b650b3661edbf256e5fb36b928 100644 --- a/dev/reference/torch_matmul.html +++ b/dev/reference/torch_matmul.html @@ -1,79 +1,18 @@ - - - - - - - -Matmul — torch_matmul • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Matmul — torch_matmul • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_matmul(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first tensor to be multiplied

other

(Tensor) the second tensor to be multiplied

- -

Note

+
+
torch_matmul(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first tensor to be multiplied

+
other
+

(Tensor) the second tensor to be multiplied

+
+
+

Note

-
The 1-dimensional dot product version of this function does not support an `out` parameter.
-
- -

matmul(input, other, out=NULL) -> Tensor

+
The 1-dimensional dot product version of this function does not support an `out` parameter.
+
+
+
+

matmul(input, other, out=NULL) -> Tensor

Matrix product of two tensors.

-

The behavior depends on the dimensionality of the tensors as follows:

    -
  • If both tensors are 1-dimensional, the dot product (scalar) is returned.

  • +

    The behavior depends on the dimensionality of the tensors as follows:

    • If both tensors are 1-dimensional, the dot product (scalar) is returned.

    • If both arguments are 2-dimensional, the matrix-matrix product is returned.

    • If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. @@ -233,92 +151,89 @@ The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, out will be an \((j \times k \times n \times p)\) tensor.
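The two base cases that all the higher-rank rules reduce to can be sketched in plain Python: the 1-D dot product and the 2-D matrix product (an illustration only, not the torch kernels):

```python
def dot(u, v):
    # 1-D x 1-D: the scalar dot product.
    return sum(a * b for a, b in zip(u, v))

def matmul2d(A, B):
    # 2-D x 2-D: ordinary matrix-matrix product; batched and broadcast
    # cases apply this after the prepend/append steps described above.
    return [[dot(row, [B[k][j] for k in range(len(B))])
             for j in range(len(B[0]))]
            for row in A]

out = matmul2d([[1, 2], [3, 4]], [[5, 6], [7, 8]])
d = dot([1, 2, 3], [4, 5, 6])
```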

    • -
    - - -

    Examples

    -
    if (torch_is_installed()) {
    -
    -# vector x vector
    -tensor1 = torch_randn(c(3))
    -tensor2 = torch_randn(c(3))
    -torch_matmul(tensor1, tensor2)
    -# matrix x vector
    -tensor1 = torch_randn(c(3, 4))
    -tensor2 = torch_randn(c(4))
    -torch_matmul(tensor1, tensor2)
    -# batched matrix x broadcasted vector
    -tensor1 = torch_randn(c(10, 3, 4))
    -tensor2 = torch_randn(c(4))
    -torch_matmul(tensor1, tensor2)
    -# batched matrix x batched matrix
    -tensor1 = torch_randn(c(10, 3, 4))
    -tensor2 = torch_randn(c(10, 4, 5))
    -torch_matmul(tensor1, tensor2)
    -# batched matrix x broadcasted matrix
    -tensor1 = torch_randn(c(10, 3, 4))
    -tensor2 = torch_randn(c(4, 5))
    -torch_matmul(tensor1, tensor2)
    -}
    -#> torch_tensor
    -#> (1,.,.) = 
    -#>  -1.2063 -6.8766  1.1109 -6.0989 -1.1909
    -#>   1.4220 -3.4092 -1.6674  0.9007  0.8536
    -#>   0.1917  2.4697 -1.1957  2.9119  0.9873
    -#> 
    -#> (2,.,.) = 
    -#>  -0.0787 -1.9481  0.5705 -1.8254 -1.2973
    -#>  -0.2669 -5.6817  0.5308 -4.0862 -0.6659
    -#>  -1.6253  3.3177  1.2590 -0.6411 -0.3724
    -#> 
    -#> (3,.,.) = 
    -#>  -1.3716 -0.9883  0.1034 -1.6704  1.5420
    -#>  -1.5986  1.4622 -0.0779 -0.2004  0.2051
    -#>  -0.6369  3.0030  1.8239 -0.7490 -2.6883
    -#> 
    -#> (4,.,.) = 
    -#>   2.4502  1.6105 -0.1097  2.8107 -3.0820
    -#>  -0.4510 -4.5248 -1.5240 -1.2485  3.4928
    -#>  -0.7771  3.2090  0.2471  1.0349 -0.0540
    -#> 
    -#> (5,.,.) = 
    -#>  -2.3422 -2.4093  1.5523 -4.8100  0.0554
    -#>  -0.3010  1.7311 -1.2540  2.1921  2.9976
    -#>  -2.9541 -2.8641  0.5600 -4.3982  1.2514
    -#> 
    -#> (6,.,.) = 
    -#>   1.5921  1.8522 -0.1120  2.3396 -1.7727
    -#>  -0.0341 -1.0648  0.9502 -1.7022 -2.9162
    -#>  -0.3313 -1.5967 -0.5851 -0.5068  0.1574
    -#> 
    -#> ... [the output was truncated (use n=-1 to disable)]
    -#> [ CPUFloatType{10,3,5} ]
    -
    +
+ +
+

Examples

+
if (torch_is_installed()) {
+
+# vector x vector
+tensor1 = torch_randn(c(3))
+tensor2 = torch_randn(c(3))
+torch_matmul(tensor1, tensor2)
+# matrix x vector
+tensor1 = torch_randn(c(3, 4))
+tensor2 = torch_randn(c(4))
+torch_matmul(tensor1, tensor2)
+# batched matrix x broadcasted vector
+tensor1 = torch_randn(c(10, 3, 4))
+tensor2 = torch_randn(c(4))
+torch_matmul(tensor1, tensor2)
+# batched matrix x batched matrix
+tensor1 = torch_randn(c(10, 3, 4))
+tensor2 = torch_randn(c(10, 4, 5))
+torch_matmul(tensor1, tensor2)
+# batched matrix x broadcasted matrix
+tensor1 = torch_randn(c(10, 3, 4))
+tensor2 = torch_randn(c(4, 5))
+torch_matmul(tensor1, tensor2)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  -0.1864  0.1016 -1.3007 -0.9707 -0.3342
+#>   1.0613 -1.5288  0.7496 -1.9112 -0.8753
+#>   0.5912 -0.3715  0.5712 -2.0003 -0.6216
+#> 
+#> (2,.,.) = 
+#>   1.1324 -4.2583  0.4448  1.9275  1.2737
+#>  -0.1222  1.5433  1.3553 -1.1332  0.3043
+#>  -0.1790  0.3083 -0.2293  0.5862 -0.2805
+#> 
+#> (3,.,.) = 
+#>   0.8300 -2.3243 -0.1491 -1.1395  0.3158
+#>   0.9951 -1.4299  0.7763 -1.6064 -0.8442
+#>  -0.6136  0.5805  0.6603  3.1572  1.3482
+#> 
+#> (4,.,.) = 
+#>   1.0351 -2.2652 -1.3221 -2.5933 -1.7704
+#>  -0.1478  1.0040  2.0664  2.6581  0.0679
+#>   0.2997  1.4344  1.7470  0.4908 -2.5126
+#> 
+#> (5,.,.) = 
+#>  -0.1510  0.9140  1.0146 -0.3200  0.7257
+#>   1.4374 -3.0519  0.5056 -2.5645  0.0626
+#>  -1.3307  0.8666 -1.6609  3.7003  1.3258
+#> 
+#> (6,.,.) = 
+#>   0.5835 -1.6227 -0.4386 -0.1676 -0.7077
+#>   0.1231  0.4861 -0.9410 -2.6570 -1.3280
+#>  -0.4508  2.3568  1.3555  0.4188 -0.4210
+#> 
+#> ... [the output was truncated (use n=-1 to disable)]
+#> [ CPUFloatType{10,3,5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_matrix_exp.html b/dev/reference/torch_matrix_exp.html index 3186dd2d8001ae3f64a44b03f9a4a0b6f95e6619..e7cf43dca6fcd77d1db7bfd0600aad4ada869719 100644 --- a/dev/reference/torch_matrix_exp.html +++ b/dev/reference/torch_matrix_exp.html @@ -1,79 +1,18 @@ - - - - - - - -Matrix_exp — torch_matrix_exp • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Matrix_exp — torch_matrix_exp • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Matrix_exp

-
torch_matrix_exp(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

matrix_power(input) -> Tensor

+
+
torch_matrix_exp(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

matrix_power(input) -> Tensor

@@ -214,49 +134,48 @@ $$

Bader, P.; Blanes, S.; Casas, F. Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation. Mathematics 2019, 7, 1174.
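A naive truncated Taylor series already reproduces the rotation-matrix example below; this sketch is for intuition only (torch uses the optimised Taylor approximation of Bader, Blanes & Casas, not this loop):

```python
import math

def matexp_2x2(A, terms=30):
    # exp(A) = sum_k A^k / k!, accumulated term by term for a 2x2 matrix.
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        # term <- (term @ A) / k, so term holds A^k / k!
        term = [[sum(term[i][m] * A[m][j] for m in range(2)) / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# exp of the skew matrix [[0, t], [-t, 0]] is the rotation
# [[cos t, sin t], [-sin t, cos t]]; here t = pi/3.
R = matexp_2x2([[0.0, math.pi / 3], [-math.pi / 3, 0.0]])
```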

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(2, 2, 2))
-a[1, , ] <- torch_eye(2, 2)
-a[2, , ] <- 2 * torch_eye(2, 2)
-a
-torch_matrix_exp(a)
-
-x <- torch_tensor(rbind(c(0, pi/3), c(-pi/3, 0)))
-x$matrix_exp() # should be [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]
-}
-#> torch_tensor
-#>  0.5000  0.8660
-#> -0.8660  0.5000
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(2, 2, 2))
+a[1, , ] <- torch_eye(2, 2)
+a[2, , ] <- 2 * torch_eye(2, 2)
+a
+torch_matrix_exp(a)
+
+x <- torch_tensor(rbind(c(0, pi/3), c(-pi/3, 0)))
+x$matrix_exp() # should be [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]
+}
+#> torch_tensor
+#>  0.5000  0.8660
+#> -0.8660  0.5000
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_matrix_power.html b/dev/reference/torch_matrix_power.html index d4547287d89c16d399669961d188dd7911efcdf4..09c4e5aee9f628f003f36d58c67d1a90c43ce826 100644 --- a/dev/reference/torch_matrix_power.html +++ b/dev/reference/torch_matrix_power.html @@ -1,79 +1,18 @@ - - - - - - - -Matrix_power — torch_matrix_power • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Matrix_power — torch_matrix_power • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Matrix_power

-
torch_matrix_power(self, n)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

n

(int) the power to raise the matrix to

- -

matrix_power(input, n) -> Tensor

+
+
torch_matrix_power(self, n)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
n
+

(int) the power to raise the matrix to

+
+
+

matrix_power(input, n) -> Tensor

@@ -215,49 +133,49 @@ For a batch of matrices, each individual matrix is raised to the power n. If n is negative, then the inverse of the matrix (if invertible) is raised to the power n. If n is 0, then an identity matrix is returned.
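For non-negative n the operation is ordinary repeated multiplication; a plain-Python sketch using exponentiation by squaring (negative n and the batched/inverse cases are omitted from this illustration):

```python
def matpow_2x2(A, n):
    # Exponentiation by squaring for a 2x2 matrix; n == 0 returns the
    # identity, matching the documented edge case.
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    result = [[1, 0], [0, 1]]
    while n > 0:
        if n % 2:
            result = mul(result, A)
        A = mul(A, A)
        n //= 2
    return result

cube = matpow_2x2([[1, 1], [0, 1]], 3)  # shear matrix: powers add shears
```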

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(2, 2, 2))
-a
-torch_matrix_power(a, 3)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   6.5659  8.4898
-#>   1.7631  1.3786
-#> 
-#> (2,.,.) = 
-#>   0.5305  0.9388
-#>   2.1003  3.1272
-#> [ CPUFloatType{2,2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(2, 2, 2))
+a
+torch_matrix_power(a, 3)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   1.0986  0.8319
+#>  -0.8203 -0.5705
+#> 
+#> (2,.,.) = 
+#>  0.01 *
+#>  -7.7616 -4.4825
+#>    2.1736 -6.2858
+#> [ CPUFloatType{2,2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_matrix_rank.html b/dev/reference/torch_matrix_rank.html index 55ef166104246baf77d1e76d51f0d6d66ae265d2..06971e1a979460d9bab38c4c79426772c691295b 100644 --- a/dev/reference/torch_matrix_rank.html +++ b/dev/reference/torch_matrix_rank.html @@ -1,79 +1,18 @@ - - - - - - - -Matrix_rank — torch_matrix_rank • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Matrix_rank — torch_matrix_rank • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Matrix_rank

-
torch_matrix_rank(self, tol, symmetric = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input 2-D tensor

tol

(float, optional) the tolerance value. Default: NULL

symmetric

(bool, optional) indicates whether input is symmetric. Default: FALSE

- -

matrix_rank(input, tol=NULL, symmetric=False) -> Tensor

+
+
torch_matrix_rank(self, tol, symmetric = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input 2-D tensor

+
tol
+

(float, optional) the tolerance value. Default: NULL

+
symmetric
+

(bool, optional) indicates whether input is symmetric. Default: FALSE

+
+
+

matrix_rank(input, tol=NULL, symmetric=False) -> Tensor

@@ -222,42 +138,41 @@ when symmetric is TRUE) are considered to be 0. If tol
is not specified, it is set to S.max() * max(S.size()) * eps where S is the singular values (or the eigenvalues when symmetric is TRUE), and eps is the epsilon value for the datatype of input.
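The rank counts how many "significant" directions survive a tolerance cutoff; a plain-Python sketch via Gaussian elimination with partial pivoting (illustrative only — torch uses the SVD, or the eigenvalues when symmetric is TRUE, with the eps-based default tolerance):

```python
def matrix_rank(M, tol=1e-9):
    # Count pivots larger than tol during row elimination.
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    rank, r = 0, 0
    for c in range(cols):
        # Partial pivoting: bring the largest remaining entry in column c up.
        pivot = max(range(r, rows), key=lambda i: abs(A[i][c]), default=None)
        if pivot is None or abs(A[pivot][c]) <= tol:
            continue        # column is numerically zero below row r
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        rank += 1
        r += 1
        if r == rows:
            break
    return rank

full = matrix_rank([[1.0, 0.0], [0.0, 1.0]])       # identity: rank 2
deficient = matrix_rank([[1.0, 2.0], [2.0, 4.0]])  # row 2 = 2 * row 1
```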

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_eye(10)
-torch_matrix_rank(a)
-}
-#> torch_tensor
-#> 10
-#> [ CPULongType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_eye(10)
+torch_matrix_rank(a)
+}
+#> torch_tensor
+#> 10
+#> [ CPULongType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_max.html b/dev/reference/torch_max.html index eaecb23ce6777c593d0cb969251e277b24aebde6..8ba1d57fb45e71148358b74a1453c557e019ca46 100644 --- a/dev/reference/torch_max.html +++ b/dev/reference/torch_max.html @@ -1,79 +1,18 @@ - - - - - - - -Max — torch_max • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Max — torch_max • torch - - - - - - - - + + -
-
- -
- -
+
@@ -190,43 +112,33 @@
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not. Default: FALSE.

out

(tuple, optional) the result tuple of two output tensors (max, max_indices)

other

(Tensor) the second input tensor

- -

Note

- +
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not. Default: FALSE.

+
out
+

(tuple, optional) the result tuple of two output tensors (max, max_indices)

+
other
+

(Tensor) the second input tensor

+
+
+

Note

When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules .

-

max(input) -> Tensor

- +
+
+

max(input) -> Tensor

Returns the maximum value of all elements in the input tensor.

-

max(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

- +
+
+

max(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

@@ -234,8 +146,9 @@ follows the broadcasting rules .

value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).

-

Warning

- +
+
+

Warning

indices does not necessarily contain the first occurrence of each @@ -244,10 +157,11 @@ The exact implementation details are device-specific. Do not expect the same result when run on CPU and GPU in general.

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension dim where they are of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensors having 1 fewer dimension than input.

-

max(input, other, out=NULL) -> Tensor

- +
+
+

max(input, other, out=NULL) -> Tensor

@@ -258,58 +172,57 @@ but they must be broadcastable .

$$ \mbox{out}_i = \max(\mbox{tensor}_i, \mbox{other}_i) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_max(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_max(a, dim = 1)
-
-
-a = torch_randn(c(4))
-a
-b = torch_randn(c(4))
-b
-torch_max(a, other = b)
-}
-#> torch_tensor
-#>  1.6337
-#>  0.3268
-#> -0.3574
-#>  0.9872
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_max(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_max(a, dim = 1)
+
+
+a = torch_randn(c(4))
+a
+b = torch_randn(c(4))
+b
+torch_max(a, other = b)
+}
+#> torch_tensor
+#>  0.3832
+#> -0.0201
+#> -0.1780
+#>  1.6716
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_maximum.html b/dev/reference/torch_maximum.html index 3b734e8c0d6eb5f2696ffe78b67b1d3b790b2d9c..8ed7e68877b2f62731be39fad37e5e1571423711 100644 --- a/dev/reference/torch_maximum.html +++ b/dev/reference/torch_maximum.html @@ -1,79 +1,18 @@ - - - - - - - -Maximum — torch_maximum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Maximum — torch_maximum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_maximum(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

Note

+
+
torch_maximum(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

Note

If one of the elements being compared is a NaN, then that element is returned. torch_maximum() is not supported for tensors with complex dtypes.

-

maximum(input, other, *, out=None) -> Tensor

- +
+
+

maximum(input, other, *, out=None) -> Tensor

Computes the element-wise maximum of input and other.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2, -1))
-b <- torch_tensor(c(3, 0, 4))
-torch_maximum(a, b)
-}
-#> torch_tensor
-#>  3
-#>  2
-#>  4
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2, -1))
+b <- torch_tensor(c(3, 0, 4))
+torch_maximum(a, b)
+}
+#> torch_tensor
+#>  3
+#>  2
+#>  4
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_mean.html b/dev/reference/torch_mean.html index f1cab7e0e9977c68eb112906c403c704448f91e0..6c0f184eb7d360b33c70b10122df8eb734bd35d9 100644 --- a/dev/reference/torch_mean.html +++ b/dev/reference/torch_mean.html @@ -1,79 +1,18 @@ - - - - - - - -Mean — torch_mean • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mean — torch_mean • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_mean(self, dim, keepdim = FALSE, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

dtype

the resulting data type.

- -

mean(input) -> Tensor

+
+
torch_mean(self, dim, keepdim = FALSE, dtype = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
dtype
+

the resulting data type.

+
+
+

mean(input) -> Tensor

Returns the mean value of all elements in the input tensor.

-

mean(input, dim, keepdim=False, out=NULL) -> Tensor

- +
+
+

mean(input, dim, keepdim=False, out=NULL) -> Tensor

@@ -228,51 +143,50 @@ dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_mean(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_mean(a, 1)
-torch_mean(a, 1, TRUE)
-}
-#> torch_tensor
-#> -0.1064 -1.1264 -0.4517 -0.3350
-#> [ CPUFloatType{1,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_mean(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_mean(a, 1)
+torch_mean(a, 1, TRUE)
+}
+#> torch_tensor
+#> -0.8314 -0.6149  0.2161  0.1284
+#> [ CPUFloatType{1,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_median.html b/dev/reference/torch_median.html index b4b7dff4f869e7a1ce21bc2de687459bb4b7c237..a5414632d6314f0ea741858a2ddfbb84fa4b4173 100644 --- a/dev/reference/torch_median.html +++ b/dev/reference/torch_median.html @@ -1,79 +1,18 @@ - - - - - - - -Median — torch_median • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Median — torch_median • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_median(self, dim, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

- -

median(input) -> Tensor

+
+
torch_median(self, dim, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

median(input) -> Tensor

Returns the median value of all elements in the input tensor.

-

median(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

- +
+
+

median(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

@@ -225,65 +142,64 @@ value of each row of the input tensor in the given dimension dim.

By default, dim is the last dimension of the input tensor.

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension dim where they are of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in +Otherwise, dim is squeezed (see torch_squeeze), resulting in the outputs tensor having 1 fewer dimension than input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_median(a)
-
-
-a = torch_randn(c(4, 5))
-a
-torch_median(a, 1)
-}
-#> [[1]]
-#> torch_tensor
-#>  0.2975
-#>  1.0890
-#> -0.3717
-#>  0.2556
-#> -0.2185
-#> [ CPUFloatType{5} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1
-#>  0
-#>  3
-#>  1
-#>  3
-#> [ CPULongType{5} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_median(a)
+
+
+a = torch_randn(c(4, 5))
+a
+torch_median(a, 1)
+}
+#> [[1]]
+#> torch_tensor
+#> -0.9108
+#> -0.5864
+#> -1.2530
+#> -0.1658
+#>  0.0646
+#> [ CPUFloatType{5} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  3
+#>  2
+#>  2
+#>  3
+#>  2
+#> [ CPULongType{5} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_memory_format.html b/dev/reference/torch_memory_format.html index 957f2579e5cba44944dbe7da4e69fdedb2878798..5f6f6e0ccc9430e7a0c3d1be0bd68f420b1b3af3 100644 --- a/dev/reference/torch_memory_format.html +++ b/dev/reference/torch_memory_format.html @@ -1,79 +1,18 @@ - - - - - - - -Memory format — torch_memory_format • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Memory format — torch_memory_format • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,39 +111,36 @@

Returns the correspondent memory format.

-
torch_contiguous_format()
+    
+
torch_contiguous_format()
 
-torch_preserve_format()
-
-torch_channels_last_format()
+torch_preserve_format() +torch_channels_last_format()
+
+ -
- +
- - + + diff --git a/dev/reference/torch_meshgrid.html b/dev/reference/torch_meshgrid.html index 0f33c0f70f58746c294bdc058bab5abd518138ef..b2a3b9e99ac3f839b8e348748c5e1f4acf9267a2 100644 --- a/dev/reference/torch_meshgrid.html +++ b/dev/reference/torch_meshgrid.html @@ -1,79 +1,18 @@ - - - - - - - -Meshgrid — torch_meshgrid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Meshgrid — torch_meshgrid • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,75 +111,72 @@

Meshgrid

-
torch_meshgrid(tensors)
- -

Arguments

- - - - - - -
tensors

(list of Tensor) list of scalars or 1 dimensional tensors. Scalars will be -treated (1,).

- -

TEST

+
+
torch_meshgrid(tensors)
+
+
+

Arguments

+
tensors
+

(list of Tensor) list of scalars or 1 dimensional tensors. Scalars will be +treated as tensors of size (1,).

+
+
+

meshgrid(tensors) -> list of Tensors

Take \(N\) tensors, each of which can be either scalar or 1-dimensional vector, and create \(N\) N-dimensional grids, where the \(i\) th grid is defined by expanding the \(i\) th input over dimensions defined by other inputs.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_tensor(c(1, 2, 3))
-y = torch_tensor(c(4, 5, 6))
-out = torch_meshgrid(list(x, y))
-out
-}
-#> [[1]]
-#> torch_tensor
-#>  1  1  1
-#>  2  2  2
-#>  3  3  3
-#> [ CPUFloatType{3,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  4  5  6
-#>  4  5  6
-#>  4  5  6
-#> [ CPUFloatType{3,3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_tensor(c(1, 2, 3))
+y = torch_tensor(c(4, 5, 6))
+out = torch_meshgrid(list(x, y))
+out
+}
+#> [[1]]
+#> torch_tensor
+#>  1  1  1
+#>  2  2  2
+#>  3  3  3
+#> [ CPUFloatType{3,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  4  5  6
+#>  4  5  6
+#>  4  5  6
+#> [ CPUFloatType{3,3} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_min.html b/dev/reference/torch_min.html index a8d7a3980fda56e0951f4fe70982a9a56a8c88a2..5b6538015dc8f7a270f44bf5507b6a4412949c6f 100644 --- a/dev/reference/torch_min.html +++ b/dev/reference/torch_min.html @@ -1,79 +1,18 @@ - - - - - - - -Min — torch_min • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Min — torch_min • torch - - - - - - - - + + -
-
- -
- -
+
@@ -190,43 +112,33 @@
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

out

(tuple, optional) the tuple of two output tensors (min, min_indices)

other

(Tensor) the second input tensor

- -

Note

- +
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
out
+

(tuple, optional) the tuple of two output tensors (min, min_indices)

+
other
+

(Tensor) the second input tensor

+
+
+

Note

When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules .

-

min(input) -> Tensor

- +
+
+

min(input) -> Tensor

Returns the minimum value of all elements in the input tensor.

-

min(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

- +
+
+

min(input, dim, keepdim=False, out=NULL) -> (Tensor, LongTensor)

@@ -234,8 +146,9 @@ follows the broadcasting rules .

value of each row of the input tensor in the given dimension dim. And indices is the index location of each minimum value found (argmin).

-

Warning

- +
+
+

Warning

indices does not necessarily contain the first occurrence of each @@ -244,10 +157,11 @@ The exact implementation details are device-specific. Do not expect the same result when run on CPU and GPU in general.

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension dim where they are of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensors having 1 fewer dimension than input.

-

min(input, other, out=NULL) -> Tensor

- +
+
+

min(input, other, out=NULL) -> Tensor

@@ -259,58 +173,57 @@ but they must be broadcastable .

$$ \mbox{out}_i = \min(\mbox{tensor}_i, \mbox{other}_i) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_min(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_min(a, dim = 1)
-
-
-a = torch_randn(c(4))
-a
-b = torch_randn(c(4))
-b
-torch_min(a, other = b)
-}
-#> torch_tensor
-#> -0.8957
-#> -0.9380
-#> -0.5400
-#>  0.2250
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_min(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_min(a, dim = 1)
+
+
+a = torch_randn(c(4))
+a
+b = torch_randn(c(4))
+b
+torch_min(a, other = b)
+}
+#> torch_tensor
+#> -1.5517
+#> -0.7872
+#> -0.1816
+#> -0.6999
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_minimum.html b/dev/reference/torch_minimum.html index 532e735f58d9f6e382164d62a363696ce2e7b1f0..00436fa357123d5621fc53152d485ffff680d992 100644 --- a/dev/reference/torch_minimum.html +++ b/dev/reference/torch_minimum.html @@ -1,79 +1,18 @@ - - - - - - - -Minimum — torch_minimum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Minimum — torch_minimum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_minimum(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor) the second input tensor

- -

Note

+
+
torch_minimum(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor) the second input tensor

+
+
+

Note

If one of the elements being compared is a NaN, then that element is returned. torch_minimum() is not supported for tensors with complex dtypes.

-

minimum(input, other, *, out=None) -> Tensor

- +
+
+

minimum(input, other, *, out=None) -> Tensor

Computes the element-wise minimum of input and other.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2, -1))
-b <- torch_tensor(c(3, 0, 4))
-torch_minimum(a, b)
-}
-#> torch_tensor
-#>  1
-#>  0
-#> -1
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2, -1))
+b <- torch_tensor(c(3, 0, 4))
+torch_minimum(a, b)
+}
+#> torch_tensor
+#>  1
+#>  0
+#> -1
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_mm.html b/dev/reference/torch_mm.html index 350983076a65649736de79e760f741b4124032c9..b1c17fc13aec7cc77e51ebaadb4d4be236ed361b 100644 --- a/dev/reference/torch_mm.html +++ b/dev/reference/torch_mm.html @@ -1,79 +1,18 @@ - - - - - - - -Mm — torch_mm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mm — torch_mm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_mm(self, mat2)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first matrix to be multiplied

mat2

(Tensor) the second matrix to be multiplied

- -

Note

+
+
torch_mm(self, mat2)
+
+
+

Arguments

+
self
+

(Tensor) the first matrix to be multiplied

+
mat2
+

(Tensor) the second matrix to be multiplied

+
+
+

Note

This function does not broadcast . -For broadcasting matrix products, see torch_matmul.

-

mm(input, mat2, out=NULL) -> Tensor

- +For broadcasting matrix products, see torch_matmul.

+
+
+

mm(input, mat2, out=NULL) -> Tensor

Performs a matrix multiplication of the matrices input and mat2.

If input is a \((n \times m)\) tensor, mat2 is a \((m \times p)\) tensor, out will be a \((n \times p)\) tensor.

+
-

Examples

-
if (torch_is_installed()) {
-
-mat1 = torch_randn(c(2, 3))
-mat2 = torch_randn(c(3, 3))
-torch_mm(mat1, mat2)
-}
-#> torch_tensor
-#> -0.0497 -0.2810  0.7637
-#>  0.0435 -0.0214 -0.1599
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+mat1 = torch_randn(c(2, 3))
+mat2 = torch_randn(c(3, 3))
+torch_mm(mat1, mat2)
+}
+#> torch_tensor
+#>  3.3528 -1.1272  3.9619
+#> -1.1036 -0.5797 -1.0366
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_mode.html b/dev/reference/torch_mode.html index 55d5d5eca207747df59d06a2d1c532b68d38bb24..92e5230dd9be5db0bc14674dd98b3075f5cca447 100644 --- a/dev/reference/torch_mode.html +++ b/dev/reference/torch_mode.html @@ -1,79 +1,18 @@ - - - - - - - -Mode — torch_mode • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mode — torch_mode • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_mode(self, dim = -1L, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

- -

Note

+
+
torch_mode(self, dim = -1L, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

Note

This function is not defined for torch_cuda.Tensor yet.

-

mode(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

- +
+
+

mode(input, dim=-1, keepdim=False, out=NULL) -> (Tensor, LongTensor)

@@ -223,52 +140,51 @@ in that row, and indices is the index location of each mode value found.

By default, dim is the last dimension of the input tensor.

If keepdim is TRUE, the output tensors are of the same size as input except in the dimension dim where they are of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensors having 1 fewer dimension than input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randint(0, 50, size = list(5))
-a
-torch_mode(a, 1)
-}
-#> [[1]]
-#> torch_tensor
-#> 12
-#> [ CPUFloatType{} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> 0
-#> [ CPULongType{} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randint(0, 50, size = list(5))
+a
+torch_mode(a, 1)
+}
+#> [[1]]
+#> torch_tensor
+#> 4
+#> [ CPUFloatType{} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> 3
+#> [ CPULongType{} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_movedim.html b/dev/reference/torch_movedim.html index 351c6ffe55c1705dc1b21434d065203bbf6a3889..9c7162f5ac7b9b1917db4f79e99a1c1346ad37e6 100644 --- a/dev/reference/torch_movedim.html +++ b/dev/reference/torch_movedim.html @@ -1,79 +1,18 @@ - - - - - - - -Movedim — torch_movedim • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Movedim — torch_movedim • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Movedim

-
torch_movedim(self, source, destination)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

source

(int or tuple of ints) Original positions of the dims to move. These must be unique.

destination

(int or tuple of ints) Destination positions for each of the original dims. These must also be unique.

- -

movedim(input, source, destination) -> Tensor

+
+
torch_movedim(self, source, destination)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
source
+

(int or tuple of ints) Original positions of the dims to move. These must be unique.

+
destination
+

(int or tuple of ints) Destination positions for each of the original dims. These must also be unique.

+
+
+

movedim(input, source, destination) -> Tensor

@@ -217,51 +133,49 @@ to the position(s) in destination.

Other dimensions of input that are not explicitly moved remain in their original order and appear at the positions not specified in destination.

+
-

Examples

-
if (torch_is_installed()) {
-
-t <- torch_randn(c(3,2,1))
-t
-torch_movedim(t, 2, 1)$shape
-torch_movedim(t, 2, 1)
-torch_movedim(t, c(2, 3), c(1, 2))$shape
-torch_movedim(t, c(2, 3), c(1, 2))
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>  0.01 *
-#>  -4.4751 -142.7783 -86.9915
-#> 
-#> (2,.,.) = 
-#>   0.3034 -0.3926 -1.1589
-#> [ CPUFloatType{2,1,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t <- torch_randn(c(3,2,1))
+t
+torch_movedim(t, 2, 1)$shape
+torch_movedim(t, 2, 1)
+torch_movedim(t, c(2, 3), c(1, 2))$shape
+torch_movedim(t, c(2, 3), c(1, 2))
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>  -1.8958 -0.1665 -2.2735
+#> 
+#> (2,.,.) = 
+#>  -1.3904  0.3690 -0.8153
+#> [ CPUFloatType{2,1,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_mul.html b/dev/reference/torch_mul.html index 9123fd13c85d5086e005c492f8c9b00ec9214d24..6e0a04c6d1952ff5b16cfa572a3fba41e8f69b3c 100644 --- a/dev/reference/torch_mul.html +++ b/dev/reference/torch_mul.html @@ -1,79 +1,18 @@ - - - - - - - -Mul — torch_mul • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mul — torch_mul • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_mul(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first multiplicand tensor

other

(Tensor) the second multiplicand tensor

- -

mul(input, other, out=NULL)

+
+
torch_mul(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first multiplicand tensor

+
other
+

(Tensor) the second multiplicand tensor

+
+
+

mul(input, other, out=NULL)

@@ -225,53 +143,52 @@ broadcastable .

$$ \mbox{out}_i = \mbox{input}_i \times \mbox{other}_i $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3))
-a
-torch_mul(a, 100)
-
-
-a = torch_randn(c(4, 1))
-a
-b = torch_randn(c(1, 4))
-b
-torch_mul(a, b)
-}
-#> torch_tensor
-#>  0.1237 -0.1472  0.1136 -0.0869
-#>  0.6887 -0.8197  0.6324 -0.4839
-#>  0.8830 -1.0509  0.8107 -0.6204
-#>  1.2582 -1.4975  1.1553 -0.8840
-#> [ CPUFloatType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3))
+a
+torch_mul(a, 100)
+
+
+a = torch_randn(c(4, 1))
+a
+b = torch_randn(c(1, 4))
+b
+torch_mul(a, b)
+}
+#> torch_tensor
+#>  0.2176 -0.3330 -1.2895 -0.0865
+#> -0.6225  0.9527  3.6894  0.2474
+#>  0.4232 -0.6478 -2.5085 -0.1682
+#>  0.2894 -0.4430 -1.7154 -0.1150
+#> [ CPUFloatType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_multinomial.html b/dev/reference/torch_multinomial.html index af6fa6fe14ad81a07c57bc75a749853180ba7101..75c39442dc1847d58b7cbd6b9d617291f3678875 100644 --- a/dev/reference/torch_multinomial.html +++ b/dev/reference/torch_multinomial.html @@ -1,79 +1,18 @@ - - - - - - - -Multinomial — torch_multinomial • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multinomial — torch_multinomial • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,36 +111,28 @@

Multinomial

-
torch_multinomial(self, num_samples, replacement = FALSE, generator = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor containing probabilities

num_samples

(int) number of samples to draw

replacement

(bool, optional) whether to draw with replacement or not

generator

(torch.Generator, optional) a pseudorandom number generator for sampling

- -

Note

+
+
torch_multinomial(self, num_samples, replacement = FALSE, generator = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor containing probabilities

+
num_samples
+

(int) number of samples to draw

+
replacement
+

(bool, optional) whether to draw with replacement or not

+
generator
+

(torch.Generator, optional) a pseudorandom number generator for sampling

+
+
+

Note

-
The rows of `input` do not need to sum to one (in which case we use
+
The rows of `input` do not need to sum to one (in which case we use
 the values as weights), but must be non-negative, finite and have
 a non-zero sum.
-
+

Indices are ordered from left to right according to when each was sampled (first samples are placed in first column).

@@ -228,59 +142,59 @@ a non-zero sum.

If replacement is TRUE, samples are drawn with replacement.

If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.

-
When drawn without replacement, `num_samples` must be lower than
+
When drawn without replacement, `num_samples` must be lower than
 number of non-zero elements in `input` (or the min number of non-zero
 elements in each row of `input` if it is a matrix).
-
- -

multinomial(input, num_samples, replacement=False, *, generator=NULL, out=NULL) -> LongTensor

+
+
+
+

multinomial(input, num_samples, replacement=False, *, generator=NULL, out=NULL) -> LongTensor

Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input.

+
-

Examples

-
if (torch_is_installed()) {
-
-weights = torch_tensor(c(0, 10, 3, 0), dtype=torch_float()) # create a tensor of weights
-torch_multinomial(weights, 2)
-torch_multinomial(weights, 4, replacement=TRUE)
-}
-#> torch_tensor
-#>  2
-#>  2
-#>  2
-#>  2
-#> [ CPULongType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+weights = torch_tensor(c(0, 10, 3, 0), dtype=torch_float()) # create a tensor of weights
+torch_multinomial(weights, 2)
+torch_multinomial(weights, 4, replacement=TRUE)
+}
+#> torch_tensor
+#>  2
+#>  2
+#>  3
+#>  3
+#> [ CPULongType{4} ]
+
+
+ -
- +
- - + + diff --git a/dev/reference/torch_multiply.html b/dev/reference/torch_multiply.html index 480141888b22a2557931b23e3fcbffc4a9647da3..af52df46ccc7942c226532e42d2b85ef77a8e57e 100644 --- a/dev/reference/torch_multiply.html +++ b/dev/reference/torch_multiply.html @@ -1,79 +1,18 @@ - - - - - - - -Multiply — torch_multiply • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Multiply — torch_multiply • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Multiply

-
torch_multiply(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first multiplicand tensor

other

(Tensor) the second multiplicand tensor

- -

multiply(input, other, *, out=None)

+
+
torch_multiply(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first multiplicand tensor

+
other
+

(Tensor) the second multiplicand tensor

+
+
+

multiply(input, other, *, out=None)

-

Alias for torch_mul().

+

Alias for torch_mul().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_mv.html b/dev/reference/torch_mv.html index 507c9a6f24a71c92c333e0b73eec61496a354579..d1995df86cc4f6eb9d5d6e5783b7a8eef2fb6402 100644 --- a/dev/reference/torch_mv.html +++ b/dev/reference/torch_mv.html @@ -1,79 +1,18 @@ - - - - - - - -Mv — torch_mv • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mv — torch_mv • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_mv(self, vec)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) matrix to be multiplied

vec

(Tensor) vector to be multiplied

- -

Note

+
+
torch_mv(self, vec)
+
+
+

Arguments

+
self
+

(Tensor) matrix to be multiplied

+
vec
+

(Tensor) vector to be multiplied

+
+
+

Note

This function does not broadcast .

-

mv(input, vec, out=NULL) -> Tensor

- +
+
+

mv(input, vec, out=NULL) -> Tensor

@@ -216,44 +135,43 @@ vec.

If input is a \((n \times m)\) tensor, vec is a 1-D tensor of size \(m\), out will be 1-D of size \(n\).

+
-

Examples

-
if (torch_is_installed()) {
-
-mat = torch_randn(c(2, 3))
-vec = torch_randn(c(3))
-torch_mv(mat, vec)
-}
-#> torch_tensor
-#>  1.8354
-#>  0.8041
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+mat = torch_randn(c(2, 3))
+vec = torch_randn(c(3))
+torch_mv(mat, vec)
+}
+#> torch_tensor
+#> -0.2872
+#>  1.4578
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_mvlgamma.html b/dev/reference/torch_mvlgamma.html index 6ef0ce47099f25b7ff81ba6b1fa9e0659d73680c..3d0e06f2f15bedafae783ff19714227bd3c8b7a8 100644 --- a/dev/reference/torch_mvlgamma.html +++ b/dev/reference/torch_mvlgamma.html @@ -1,79 +1,18 @@ - - - - - - - -Mvlgamma — torch_mvlgamma • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Mvlgamma — torch_mvlgamma • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Mvlgamma

-
torch_mvlgamma(self, p)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compute the multivariate log-gamma function

p

(int) the number of dimensions

- -

mvlgamma(input, p) -> Tensor

+
+
torch_mvlgamma(self, p)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compute the multivariate log-gamma function

+
p
+

(int) the number of dimensions

+
+
+

mvlgamma(input, p) -> Tensor

@@ -216,44 +134,43 @@ $$ where \(C = \log(\pi) \times \frac{p (p - 1)}{4}\) and \(\Gamma(\cdot)\) is the Gamma function.

All elements must be greater than \(\frac{p - 1}{2}\), otherwise an error would be thrown.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_empty(c(2, 3))$uniform_(1, 2)
-a
-torch_mvlgamma(a, 2)
-}
-#> torch_tensor
-#>  0.4208  0.3913  0.4130
-#>  0.5387  0.5505  0.3906
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_empty(c(2, 3))$uniform_(1, 2)
+a
+torch_mvlgamma(a, 2)
+}
+#> torch_tensor
+#>  0.4848  0.3945  0.4120
+#>  0.4038  0.3956  0.6886
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_nanquantile.html b/dev/reference/torch_nanquantile.html index a98a71ec1eb95d722e7bae49f44cb501031ca6fb..8b0e6dfe1d8a86616b268eeee43ba43c8da3934c 100644 --- a/dev/reference/torch_nanquantile.html +++ b/dev/reference/torch_nanquantile.html @@ -1,79 +1,18 @@ - - - - - - - -Nanquantile — torch_nanquantile • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nanquantile — torch_nanquantile • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,85 +111,74 @@

Nanquantile

-
torch_nanquantile(self, q, dim = NULL, keepdim = FALSE, interpolation)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

q

(float or Tensor) a scalar or 1D tensor of quantile values in the range [0, 1]

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

interpolation

The interpolation method.

- -

nanquantile(input, q, dim=None, keepdim=FALSE, *, out=None) -> Tensor

+
+
torch_nanquantile(self, q, dim = NULL, keepdim = FALSE, interpolation)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
q
+

(float or Tensor) a scalar or 1D tensor of quantile values in the range [0, 1]

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
interpolation
+

The interpolation method.

+
+
+

nanquantile(input, q, dim=None, keepdim=FALSE, *, out=None) -> Tensor

-

This is a variant of torch_quantile() that "ignores" NaN values, +

This is a variant of torch_quantile() that "ignores" NaN values, computing the quantiles q as if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles for -that reduction will be NaN. See the documentation for torch_quantile().

+that reduction will be NaN. See the documentation for torch_quantile().

+
-

Examples

-
if (torch_is_installed()) {
-
-t <- torch_tensor(c(NaN, 1, 2))
-t$quantile(0.5)
-t$nanquantile(0.5)
-t <- torch_tensor(rbind(c(NaN, NaN), c(1, 2)))
-t
-t$nanquantile(0.5, dim=1)
-t$nanquantile(0.5, dim=2)
-torch_nanquantile(t, 0.5, dim = 1)
-torch_nanquantile(t, 0.5, dim = 2)
-}
-#> torch_tensor
-#>     nan  1.5000
-#> [ CPUFloatType{1,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+t <- torch_tensor(c(NaN, 1, 2))
+t$quantile(0.5)
+t$nanquantile(0.5)
+t <- torch_tensor(rbind(c(NaN, NaN), c(1, 2)))
+t
+t$nanquantile(0.5, dim=1)
+t$nanquantile(0.5, dim=2)
+torch_nanquantile(t, 0.5, dim = 1)
+torch_nanquantile(t, 0.5, dim = 2)
+}
+#> torch_tensor
+#>     nan  1.5000
+#> [ CPUFloatType{1,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_nansum.html b/dev/reference/torch_nansum.html index 8a98dc3f252bb4497cdb3c82bacf465740ee5a01..e55eb54ac373093de896d0381f3497a082708824 100644 --- a/dev/reference/torch_nansum.html +++ b/dev/reference/torch_nansum.html @@ -1,79 +1,18 @@ - - - - - - - -Nansum — torch_nansum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nansum — torch_nansum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_nansum(self, dim, keepdim = FALSE, dtype = NULL)
+
+
torch_nansum(self, dim, keepdim = FALSE, dtype = NULL)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

dtype

the desired data type of returned tensor. If specified, the +

+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
dtype
+

the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is -useful for preventing data type overflows. Default: NULL.

- -

nansum(input, *, dtype=None) -> Tensor

- +useful for preventing data type overflows. Default: NULL.

+
+
+

nansum(input, *, dtype=None) -> Tensor

Returns the sum of all elements, treating Not a Number (NaN) values as zero.

-

nansum(input, dim, keepdim=FALSE, *, dtype=None) -> Tensor

- +
+
+

nansum(input, dim, keepdim=FALSE, *, dtype=None) -> Tensor

@@ -230,52 +145,51 @@ dimension dim, treating Not a Numbers (NaNs) as zero. If dim is a list of dimensions, reduce over all of them.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1., 2., NaN, 4.))
-torch_nansum(a)
-
-
-torch_nansum(torch_tensor(c(1., NaN)))
-a <- torch_tensor(rbind(c(1, 2), c(3., NaN)))
-torch_nansum(a)
-torch_nansum(a, dim=1)
-torch_nansum(a, dim=2)
-}
-#> torch_tensor
-#>  3
-#>  3
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1., 2., NaN, 4.))
+torch_nansum(a)
+
+
+torch_nansum(torch_tensor(c(1., NaN)))
+a <- torch_tensor(rbind(c(1, 2), c(3., NaN)))
+torch_nansum(a)
+torch_nansum(a, dim=1)
+torch_nansum(a, dim=2)
+}
+#> torch_tensor
+#>  3
+#>  3
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_narrow.html b/dev/reference/torch_narrow.html index ddcfe887d84c2e153f0840511f868bdf9623f92b..b2831a0d3ff82e2ef97b288d500f12cc8bf4779f 100644 --- a/dev/reference/torch_narrow.html +++ b/dev/reference/torch_narrow.html @@ -1,79 +1,18 @@ - - - - - - - -Narrow — torch_narrow • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Narrow — torch_narrow • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,76 +111,67 @@

Narrow

-
torch_narrow(self, dim, start, length)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the tensor to narrow

dim

(int) the dimension along which to narrow

start

(int) the starting dimension

length

(int) the distance to the ending dimension

- -

narrow(input, dim, start, length) -> Tensor

+
+
torch_narrow(self, dim, start, length)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to narrow

+
dim
+

(int) the dimension along which to narrow

+
start
+

(int) the index of the element to start the narrowed dimension from

+
length
+

(int) the length of the narrowed dimension

+
+
+

narrow(input, dim, start, length) -> Tensor

Returns a new tensor that is a narrowed version of the input tensor. The dimension dim of input is narrowed from index start to start + length. The returned tensor and input tensor share the same underlying storage.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_tensor(matrix(c(1:9), ncol = 3, byrow= TRUE))
-torch_narrow(x, 1, 1, 2)
-torch_narrow(x, 2, 2, 2)
-}
-#> torch_tensor
-#>  2  3
-#>  5  6
-#>  8  9
-#> [ CPULongType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_tensor(matrix(c(1:9), ncol = 3, byrow= TRUE))
+torch_narrow(x, 1, 1, 2)
+torch_narrow(x, 2, 2, 2)
+}
+#> torch_tensor
+#>  2  3
+#>  5  6
+#>  8  9
+#> [ CPULongType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_ne.html b/dev/reference/torch_ne.html index ab76e63a00fe6e8ecdb95e771ea5539b9e3085d5..b0208cbb0e0366ace822236bd7f26d1f1e1bc05f 100644 --- a/dev/reference/torch_ne.html +++ b/dev/reference/torch_ne.html @@ -1,79 +1,18 @@ - - - - - - - -Ne — torch_ne • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ne — torch_ne • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ne(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

ne(input, other, out=NULL) -> Tensor

+
+
torch_ne(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

ne(input, other, out=NULL) -> Tensor

Computes \(input \neq other\) element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_ne(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
-         torch_tensor(matrix(rep(c(1,4), each = 2), ncol = 2, byrow=TRUE)))
-}
-#> torch_tensor
-#>  0  1
-#>  1  0
-#> [ CPUBoolType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_ne(torch_tensor(matrix(1:4, ncol = 2, byrow=TRUE)), 
+         torch_tensor(matrix(rep(c(1,4), each = 2), ncol = 2, byrow=TRUE)))
+}
+#> torch_tensor
+#>  0  1
+#>  1  0
+#> [ CPUBoolType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_neg.html b/dev/reference/torch_neg.html index 6ed4bb1ada8066fdef169076ad421f44c3eb3f48..c94eebb00730ce1c86a2692e158b6cd1839b08ec 100644 --- a/dev/reference/torch_neg.html +++ b/dev/reference/torch_neg.html @@ -1,79 +1,18 @@ - - - - - - - -Neg — torch_neg • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Neg — torch_neg • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_neg(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

neg(input, out=NULL) -> Tensor

+
+
torch_neg(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

neg(input, out=NULL) -> Tensor

@@ -209,47 +129,46 @@

$$ \mbox{out} = -1 \times \mbox{input} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5))
-a
-torch_neg(a)
-}
-#> torch_tensor
-#>  0.4130
-#>  0.1488
-#> -1.3886
-#> -0.8531
-#> -0.1611
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5))
+a
+torch_neg(a)
+}
+#> torch_tensor
+#> -0.3462
+#> -0.0974
+#>  1.1529
+#>  0.0023
+#> -0.8103
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_negative.html b/dev/reference/torch_negative.html index b7403b2712275a0fe4bcbde41d41e0d491bc1c08..14d94f09f626e66a9d66852d67dcc978045a6e81 100644 --- a/dev/reference/torch_negative.html +++ b/dev/reference/torch_negative.html @@ -1,79 +1,18 @@ - - - - - - - -Negative — torch_negative • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Negative — torch_negative • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,49 +111,44 @@

Negative

-
torch_negative(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

negative(input, *, out=None) -> Tensor

+
+
torch_negative(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

negative(input, *, out=None) -> Tensor

-

Alias for torch_neg()

+

Alias for torch_neg()

+
+
-
- +
- - + + diff --git a/dev/reference/torch_nextafter.html b/dev/reference/torch_nextafter.html index ce72ce89010427bd5ac345ff6e05d1742fab3972..37e0676153c27f3f423978f159d176ca5fa6ff3b 100644 --- a/dev/reference/torch_nextafter.html +++ b/dev/reference/torch_nextafter.html @@ -1,79 +1,18 @@ - - - - - - - -Nextafter — torch_nextafter • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nextafter — torch_nextafter • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,66 +111,61 @@

Nextafter

-
torch_nextafter(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the first input tensor

other

(Tensor) the second input tensor

- -

nextafter(input, other, *, out=None) -> Tensor

+
+
torch_nextafter(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the first input tensor

+
other
+

(Tensor) the second input tensor

+
+
+

nextafter(input, other, *, out=None) -> Tensor

Returns the next floating-point value after input in the direction of other, element-wise.

The shapes of input and other must be broadcastable .

+
-

Examples

-
if (torch_is_installed()) {
-
-eps <- torch_finfo(torch_float32())$eps
-torch_nextafter(torch_tensor(c(1, 2)), torch_tensor(c(2, 1))) == torch_tensor(c(eps + 1, 2 - eps))
-}
-#> torch_tensor
-#>  1
-#>  1
-#> [ CPUBoolType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+eps <- torch_finfo(torch_float32())$eps
+torch_nextafter(torch_tensor(c(1, 2)), torch_tensor(c(2, 1))) == torch_tensor(c(eps + 1, 2 - eps))
+}
+#> torch_tensor
+#>  1
+#>  1
+#> [ CPUBoolType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_nonzero.html b/dev/reference/torch_nonzero.html index 64ce47234f913a4f458bfce5fa98bc6419245626..5cb9f8331d811161f282b2c52964df2dc96191d8 100644 --- a/dev/reference/torch_nonzero.html +++ b/dev/reference/torch_nonzero.html @@ -1,79 +1,18 @@ - - - - - - - -Nonzero — torch_nonzero • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Nonzero — torch_nonzero • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,18 +111,16 @@

Nonzero elements of tensors.

-
torch_nonzero(self, as_list = FALSE)
+
+
torch_nonzero(self, as_list = FALSE)
+
-

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

as_list

If FALSE, the output tensor containing indices. If TRUE, one +

+

Arguments

+
self
+

(Tensor) the input tensor.

+
as_list
+

If FALSE, the output tensor containing indices. If TRUE, one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension.

When as_list is FALSE (default):

@@ -219,48 +139,44 @@ each containing the indices (in that dimension) of all non-zero elements of tensors of size \(z\), where \(z\) is the total number of non-zero elements in the input tensor.

As a special case, when input has zero dimensions and a nonzero scalar -value, it is treated as a one-dimensional tensor with one element.

- - -

Examples

-
if (torch_is_installed()) {
-
-torch_nonzero(torch_tensor(c(1, 1, 1, 0, 1)))
-}
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#>  5
-#> [ CPULongType{4,1} ]
-
+value, it is treated as a one-dimensional tensor with one element.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+torch_nonzero(torch_tensor(c(1, 1, 1, 0, 1)))
+}
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#>  5
+#> [ CPULongType{4,1} ]
+
+
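The as_list behaviour described above can be sketched briefly (a hedged sketch, assuming as_list = TRUE returns one 1-D index tensor per dimension as the argument description states):

```r
if (torch_is_installed()) {
  x <- torch_tensor(rbind(c(1, 0), c(0, 1)))
  # default: a 2-D tensor with one row of indices per nonzero element
  torch_nonzero(x)
  # as_list = TRUE: one 1-D tensor of indices per dimension
  torch_nonzero(x, as_list = TRUE)
}
```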
+
-
- +
- - + + diff --git a/dev/reference/torch_norm.html b/dev/reference/torch_norm.html index 606ba32f2d2adfd39851141c5eabf8324068714f..a9f39a87a2666cf984dd4cb119672944483389ce 100644 --- a/dev/reference/torch_norm.html +++ b/dev/reference/torch_norm.html @@ -1,79 +1,18 @@ - - - - - - - -Norm — torch_norm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Norm — torch_norm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_norm(self, p = 2L, dim, keepdim = FALSE, dtype)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor

p

(int, float, inf, -inf, 'fro', 'nuc', optional) the order of norm. Default: 'fro' The following norms can be calculated: ===== ============================ ========================== ord matrix norm vector norm ===== ============================ ========================== NULL Frobenius norm 2-norm 'fro' Frobenius norm -- 'nuc' nuclear norm -- Other as vec norm when dim is NULL sum(abs(x)ord)(1./ord) ===== ============================ ==========================

dim

(int, 2-tuple of ints, 2-list of ints, optional) If it is an int, vector norm will be calculated, if it is 2-tuple of ints, matrix norm will be calculated. If the value is NULL, matrix norm will be calculated when the input tensor only has two dimensions, vector norm will be calculated when the input tensor only has one dimension. If the input tensor has more than two dimensions, the vector norm will be applied to last dimension.

keepdim

(bool, optional) whether the output tensors have dim retained or not. Ignored if dim = NULL and out = NULL. Default: FALSE -Ignored if dim = NULL and out = NULL.

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to 'dtype' while performing the operation. Default: NULL.

- -

TEST

+
+
torch_norm(self, p = 2L, dim, keepdim = FALSE, dtype)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor

+
p
+

(int, float, inf, -inf, 'fro', 'nuc', optional) the order of norm. Default: 'fro'. The following norms can be calculated: NULL computes the Frobenius norm for matrices and the 2-norm for vectors; 'fro' computes the Frobenius norm (matrices only); 'nuc' computes the nuclear norm (matrices only); any other value is applied as a vector norm when dim is NULL, computed as sum(abs(x)^ord)^(1/ord).

+
dim
+

(int, 2-tuple of ints, 2-list of ints, optional) If it is an int, vector norm will be calculated, if it is 2-tuple of ints, matrix norm will be calculated. If the value is NULL, matrix norm will be calculated when the input tensor only has two dimensions, vector norm will be calculated when the input tensor only has one dimension. If the input tensor has more than two dimensions, the vector norm will be applied to last dimension.

+
keepdim
+

(bool, optional) whether the output tensors have dim retained or not. Ignored if dim = NULL and out = NULL. Default: FALSE.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to 'dtype' while performing the operation. Default: NULL.

+
+
+

Details

Returns the matrix norm or vector norm of a given tensor.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_arange(1, 9, dtype = torch_float())
-b <- a$reshape(list(3, 3))
-torch_norm(a)
-torch_norm(b)
-torch_norm(a, Inf)
-torch_norm(b, Inf)
-
-}
-#> torch_tensor
-#> 9
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_arange(1, 9, dtype = torch_float())
+b <- a$reshape(list(3, 3))
+torch_norm(a)
+torch_norm(b)
+torch_norm(a, Inf)
+torch_norm(b, Inf)
+
+}
+#> torch_tensor
+#> 9
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_normal.html b/dev/reference/torch_normal.html index 3eb1b8c310b231e950dd51940715106bc41222d3..714b247469aa33937c23539bdc65b04903288774 100644 --- a/dev/reference/torch_normal.html +++ b/dev/reference/torch_normal.html @@ -1,80 +1,19 @@ - - - - - - - -Normal — torch_normal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Normal — torch_normal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -191,47 +113,38 @@ Normal distributed" />

Normal distributed

-
torch_normal(mean, std, size = NULL, generator = NULL, ...)
+
+
torch_normal(mean, std, size = NULL, generator = NULL, ...)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
mean

(tensor or scalar double) Mean of the normal distribution. -If this is a torch_tensor() then the output has the same dim as mean +

+

Arguments

+
mean
+

(tensor or scalar double) Mean of the normal distribution. +If this is a torch_tensor() then the output has the same dim as mean and it represents the per-element mean. If it's a scalar value, it's reused -for all elements.

std

(tensor or scalar double) The standard deviation of the normal -distribution. If this is a torch_tensor() then the output has the same size as std +for all elements.

+
std
+

(tensor or scalar double) The standard deviation of the normal +distribution. If this is a torch_tensor() then the output has the same size as std and it represents the per-element standard deviation. If it's a scalar value, -it's reused for all elements.

size

(integers, optional) only used if both mean and std are scalars.

generator

a random number generator created with torch_generator(). If NULL -a default generator is used.

...

Tensor option parameters like dtype, layout, and device. -Can only be used when mean and std are both scalar numerics.

- -

Note

- +it's reused for all elements.

+
size
+

(integers, optional) only used if both mean and std are scalars.

+
generator
+

a random number generator created with torch_generator(). If NULL +a default generator is used.

+
...
+

Tensor option parameters like dtype, layout, and device. +Can only be used when mean and std are both scalar numerics.

+
+
+

Note

When the shapes do not match, the shape of mean is used as the shape for the returned output tensor.

-

normal(mean, std, *) -> Tensor

- +
+
+

normal(mean, std, *) -> Tensor

@@ -243,66 +156,68 @@ each output element's normal distribution

each output element's normal distribution

The shapes of mean and std don't need to match, but the total number of elements in each tensor need to be the same.

-

normal(mean=0.0, std) -> Tensor

- +
+
+

normal(mean=0.0, std) -> Tensor

Similar to the function above, but the means are shared among all drawn elements.

-

normal(mean, std=1.0) -> Tensor

- +
+
+

normal(mean, std=1.0) -> Tensor

Similar to the function above, but the standard-deviations are shared among all drawn elements.

-

normal(mean, std, size, *) -> Tensor

- +
+
+

normal(mean, std, size, *) -> Tensor

Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_normal(mean=0, std=torch_arange(1, 0, -0.1) + 1e-6)
-torch_normal(mean=0.5, std=torch_arange(1., 6.))
-torch_normal(mean=torch_arange(1., 6.))
-torch_normal(2, 3, size=c(1, 4))
-
-}
-#> torch_tensor
-#>  2.0100  3.2109 -0.0579  1.9645
-#> [ CPUFloatType{1,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_normal(mean=0, std=torch_arange(1, 0, -0.1) + 1e-6)
+torch_normal(mean=0.5, std=torch_arange(1., 6.))
+torch_normal(mean=torch_arange(1., 6.))
+torch_normal(2, 3, size=c(1, 4))
+
+}
+#> torch_tensor
+#>  6.7230  5.0421  4.4763  1.1071
+#> [ CPUFloatType{1,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_not_equal.html b/dev/reference/torch_not_equal.html index 2f87d846358d12983866767ce9f2117e4652083d..0eb3d4f4c45591966c5cd6c7f7d8ff40321eb830 100644 --- a/dev/reference/torch_not_equal.html +++ b/dev/reference/torch_not_equal.html @@ -1,79 +1,18 @@ - - - - - - - -Not_equal — torch_not_equal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Not_equal — torch_not_equal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,46 @@

Not_equal

-
torch_not_equal(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to compare

other

(Tensor or float) the tensor or value to compare

- -

not_equal(input, other, *, out=None) -> Tensor

+
+
torch_not_equal(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to compare

+
other
+

(Tensor or float) the tensor or value to compare

+
+
+

not_equal(input, other, *, out=None) -> Tensor

-

Alias for torch_ne().

+

Alias for torch_ne().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_ones.html b/dev/reference/torch_ones.html index 712b2c855d7fce5a260a87c7c8fd909620ad19ce..ead99862bb27aa48598838d7b31cd9bead8ed091 100644 --- a/dev/reference/torch_ones.html +++ b/dev/reference/torch_ones.html @@ -1,79 +1,18 @@ - - - - - - - -Ones — torch_ones • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ones — torch_ones • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ones(
-  ...,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
...

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

names

optional names for the dimensions

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

ones(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_ones(
+  ...,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
...
+

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

+
names
+

optional names for the dimensions

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

ones(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_ones(c(2, 3))
-torch_ones(c(5))
-}
-#> torch_tensor
-#>  1
-#>  1
-#>  1
-#>  1
-#>  1
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_ones(c(2, 3))
+torch_ones(c(5))
+}
+#> torch_tensor
+#>  1
+#>  1
+#>  1
+#>  1
+#>  1
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_ones_like.html b/dev/reference/torch_ones_like.html index 65a3bbe645eab5b3c6aee5ecf59ee9155b375dd3..e6fa80bf585d6e5aeaeb82d6b1fc45844ca2c488 100644 --- a/dev/reference/torch_ones_like.html +++ b/dev/reference/torch_ones_like.html @@ -1,79 +1,18 @@ - - - - - - - -Ones_like — torch_ones_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ones_like — torch_ones_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,96 +111,84 @@

Ones_like

-
torch_ones_like(
-  input,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine size of the output tensor.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

ones_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

+
+
torch_ones_like(
+  input,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine size of the output tensor.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

ones_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

Returns a tensor filled with the scalar value 1, with the same size as input. torch_ones_like(input) is equivalent to torch_ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

-

Warning

- +
+
+

Warning

As of 0.4, this function does not support an out keyword. As an alternative, the old torch_ones_like(input, out=output) is equivalent to torch_ones(input.size(), out=output).

+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_empty(c(2, 3))
-torch_ones_like(input)
-}
-#> torch_tensor
-#>  1  1  1
-#>  1  1  1
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input = torch_empty(c(2, 3))
+torch_ones_like(input)
+}
+#> torch_tensor
+#>  1  1  1
+#>  1  1  1
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_orgqr.html b/dev/reference/torch_orgqr.html index c8dffffc8999803769660659ccae84de3d2f5e2f..4e4d1b9eeed849f864898adf31c5d1788170eccb 100644 --- a/dev/reference/torch_orgqr.html +++ b/dev/reference/torch_orgqr.html @@ -1,79 +1,18 @@ - - - - - - - -Orgqr — torch_orgqr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Orgqr — torch_orgqr • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_orgqr(self, input2)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the a from torch_geqrf.

input2

(Tensor) the tau from torch_geqrf.

- -

orgqr(input, input2) -> Tensor

+
+
torch_orgqr(self, input2)
+
+
+

Arguments

+
self
+

(Tensor) the a from torch_geqrf.

+
input2
+

(Tensor) the tau from torch_geqrf.

+
+
+

orgqr(input, input2) -> Tensor

Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) -tuple returned by torch_geqrf.

+tuple returned by torch_geqrf.

This directly calls the underlying LAPACK function ?orgqr. See the LAPACK documentation for orgqr for further details.

+
+
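The (input, input2) pairing described above can be sketched as follows (a hedged sketch, assuming torch_geqrf() returns its compact factors and tau as a two-element list):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4, 3))
  qr <- torch_geqrf(a)               # compact QR factors and tau
  q <- torch_orgqr(qr[[1]], qr[[2]]) # recover the explicit Q matrix
  # the columns of q are orthonormal, so t(q) q is close to the identity
  torch_mm(q$t(), q)
}
```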
-
- +
- - + + diff --git a/dev/reference/torch_ormqr.html b/dev/reference/torch_ormqr.html index 12ad3d4875a840a7e2de35a6cffa9f68c87e5f48..bab9936d4d03d808d14596e53485f6e9af7cfb1c 100644 --- a/dev/reference/torch_ormqr.html +++ b/dev/reference/torch_ormqr.html @@ -1,79 +1,18 @@ - - - - - - - -Ormqr — torch_ormqr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Ormqr — torch_ormqr • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_ormqr(self, input2, input3, left = TRUE, transpose = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the a from torch_geqrf.

input2

(Tensor) the tau from torch_geqrf.

input3

(Tensor) the matrix to be multiplied.

left

see LAPACK documentation

transpose

see LAPACK documentation

- -

ormqr(input, input2, input3, left=TRUE, transpose=False) -> Tensor

+
+
torch_ormqr(self, input2, input3, left = TRUE, transpose = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the a from torch_geqrf.

+
input2
+

(Tensor) the tau from torch_geqrf.

+
input3
+

(Tensor) the matrix to be multiplied.

+
left
+

see LAPACK documentation

+
transpose
+

see LAPACK documentation

+
+
+

ormqr(input, input2, input3, left=TRUE, transpose=False) -> Tensor

Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization -formed by torch_geqrf() that is represented by (a, tau) (given by (input, input2)).

+formed by torch_geqrf() that is represented by (a, tau) (given by (input, input2)).

This directly calls the underlying LAPACK function ?ormqr. -See LAPACK documentation for ormqr for further details.

+See LAPACK documentation for ormqr for further details.

+
+
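A minimal sketch of the (a, tau) pairing described above (hedged: assumes torch_geqrf() returns a two-element list of compact factors and tau):

```r
if (torch_is_installed()) {
  a <- torch_randn(c(4, 3))
  qr <- torch_geqrf(a)
  mat <- torch_randn(c(4, 2))
  # multiply mat by Q from the QR factorization without forming Q explicitly
  torch_ormqr(qr[[1]], qr[[2]], mat)
}
```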
-
- +
- - + + diff --git a/dev/reference/torch_outer.html b/dev/reference/torch_outer.html index 441f90d2f228ea6b307090f20daceb5678657ff4..f9816f81e22a85117d52b9d3689d57546329e42f 100644 --- a/dev/reference/torch_outer.html +++ b/dev/reference/torch_outer.html @@ -1,79 +1,18 @@ - - - - - - - -Outer — torch_outer • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Outer — torch_outer • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_outer(self, vec2)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) 1-D input vector

vec2

(Tensor) 1-D input vector

- -

Note

+
+
torch_outer(self, vec2)
+
+
+

Arguments

+
self
+

(Tensor) 1-D input vector

+
vec2
+

(Tensor) 1-D input vector

+
+
+

Note

This function does not broadcast.

-

outer(input, vec2, *, out=None) -> Tensor

- +
+
+

outer(input, vec2, *, out=None) -> Tensor

Outer product of input and vec2. If input is a vector of size \(n\) and vec2 is a vector of size \(m\), then out must be a matrix of size \((n \times m)\).

+
-

Examples

-
if (torch_is_installed()) {
-
-v1 <- torch_arange(1., 5.)
-v2 <- torch_arange(1., 4.)
-torch_outer(v1, v2)
-}
-#> torch_tensor
-#>   1   2   3   4
-#>   2   4   6   8
-#>   3   6   9  12
-#>   4   8  12  16
-#>   5  10  15  20
-#> [ CPUFloatType{5,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+v1 <- torch_arange(1., 5.)
+v2 <- torch_arange(1., 4.)
+torch_outer(v1, v2)
+}
+#> torch_tensor
+#>   1   2   3   4
+#>   2   4   6   8
+#>   3   6   9  12
+#>   4   8  12  16
+#>   5  10  15  20
+#> [ CPUFloatType{5,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_pdist.html b/dev/reference/torch_pdist.html index 5b08ba6a8900fe96e90de87c01eaf7a8ad1be2df..9cedfa210a76c1b6c1494f9840b4b25c22b83fad 100644 --- a/dev/reference/torch_pdist.html +++ b/dev/reference/torch_pdist.html @@ -1,79 +1,18 @@ - - - - - - - -Pdist — torch_pdist • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pdist — torch_pdist • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_pdist(self, p = 2L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) input tensor of shape \(N \times M\).

p

(float) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

- -

pdist(input, p=2) -> Tensor

+
+
torch_pdist(self, p = 2L)
+
+
+

Arguments

+
self
+

(Tensor) input tensor of shape \(N \times M\).

+
p
+

(float) p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).

+
+
+

pdist(input, p=2) -> Tensor

@@ -219,32 +137,29 @@ if the rows are contiguous.

equivalent to scipy.spatial.distance.pdist(input, 'hamming') * M. When \(p = \infty\), the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()).
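A pure-Python sketch (not the torch API) of what pdist computes may make the condensed layout concrete: the p-norm distance between every pair of rows `i < j` of an `N x M` matrix, flattened into a vector of length `N * (N - 1) / 2`.

```python
# Condensed pairwise-distance sketch: one p-norm distance per row pair
# (i, j) with i < j, in row-major order of the upper triangle.
def pdist(rows, p=2):
    out = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            out.append(sum(abs(a - b) ** p
                           for a, b in zip(rows[i], rows[j])) ** (1 / p))
    return out

x = [[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]
print(pdist(x))  # [5.0, 10.0, 5.0]
```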

+
+
-
- +
- - + + diff --git a/dev/reference/torch_pinverse.html b/dev/reference/torch_pinverse.html index 98da7faea3b34c030c0c8ffd2b5a6ab4411550dc..ffa8a65a032f0f830d609a6be464befbee491f6b 100644 --- a/dev/reference/torch_pinverse.html +++ b/dev/reference/torch_pinverse.html @@ -1,79 +1,18 @@ - - - - - - - -Pinverse — torch_pinverse • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pinverse — torch_pinverse • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,90 +111,86 @@

Pinverse

-
torch_pinverse(self, rcond = 0)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) The input tensor of size \((*, m, n)\) where \(*\) is zero or more batch dimensions

rcond

(float) A floating point value to determine the cutoff for small singular values. Default: 1e-15

- -

Note

+
+
torch_pinverse(self, rcond = 0)
+
+
+

Arguments

+
self
+

(Tensor) The input tensor of size \((*, m, n)\) where \(*\) is zero or more batch dimensions

+
rcond
+

(float) A floating point value to determine the cutoff for small singular values. Default: 1e-15

+
+
+

Note

-
This method is implemented using the Singular Value Decomposition.
-
+
This method is implemented using the Singular Value Decomposition.
+
-
The pseudo-inverse is not necessarily a continuous function in the elements of the matrix [1].
+
The pseudo-inverse is not necessarily a continuous function in the elements of the matrix [1].
 Therefore, derivatives do not always exist, and when they do, they exist only for a constant rank [2].
 However, this method is backprop-able due to the implementation by using SVD results, and
 could be unstable. Double-backward will also be unstable due to the usage of SVD internally.
 See torch_svd for more details.
-
- -

pinverse(input, rcond=1e-15) -> Tensor

+
+
+
+

pinverse(input, rcond=1e-15) -> Tensor

Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. Please look at Moore-Penrose inverse for more details.
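To make the role of `rcond` concrete, here is a toy sketch on the easy diagonal case (an assumption for illustration only, not the torch implementation): after the SVD, each singular value above the cutoff is reciprocated and the rest become zero.

```python
# Pseudo-inverse of a diagonal matrix, represented by its singular values:
# reciprocate values above the rcond cutoff, zero out the rest.
def pinv_diag(singular_values, rcond=1e-15):
    cutoff = rcond * max(singular_values, default=0.0)
    return [1.0 / s if s > cutoff else 0.0 for s in singular_values]

print(pinv_diag([2.0, 0.5, 0.0]))  # [0.5, 2.0, 0.0]
```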

+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_randn(c(3, 5))
-input
-torch_pinverse(input)
-# Batched pinverse example
-a = torch_randn(c(2,6,3))
-b = torch_pinverse(a)
-torch_matmul(b, a)
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   1.0000e+00 -5.5879e-09  1.5320e-07
-#>   2.1420e-08  1.0000e+00  8.3819e-09
-#>   3.6787e-08  1.4063e-07  1.0000e+00
-#> 
-#> (2,.,.) = 
-#>   1.0000e+00  2.9802e-07 -8.9407e-08
-#>  -7.4506e-08  1.0000e+00  3.2783e-07
-#>   7.4506e-08  4.4703e-08  1.0000e+00
-#> [ CPUFloatType{2,3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input = torch_randn(c(3, 5))
+input
+torch_pinverse(input)
+# Batched pinverse example
+a = torch_randn(c(2,6,3))
+b = torch_pinverse(a)
+torch_matmul(b, a)
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   1.0000e+00 -5.1316e-07 -4.7684e-07
+#>   8.1956e-08  1.0000e+00  3.3528e-07
+#>   4.2841e-08 -9.6683e-08  1.0000e+00
+#> 
+#> (2,.,.) = 
+#>   1.0000e+00  2.8312e-07 -1.1921e-07
+#>  -1.4901e-08  1.0000e+00  1.1921e-07
+#>  -1.6391e-07  2.6822e-07  1.0000e+00
+#> [ CPUFloatType{2,3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_pixel_shuffle.html b/dev/reference/torch_pixel_shuffle.html index db81c1a74e2b67bbe583d6053c42e3b20fc91452..adf02ef42a6b8890568f40521d78e85f5936c55c 100644 --- a/dev/reference/torch_pixel_shuffle.html +++ b/dev/reference/torch_pixel_shuffle.html @@ -1,79 +1,18 @@ - - - - - - - -Pixel_shuffle — torch_pixel_shuffle • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pixel_shuffle — torch_pixel_shuffle • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,62 +111,57 @@

Pixel_shuffle

-
torch_pixel_shuffle(self, upscale_factor)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor

upscale_factor

(int) factor to increase spatial resolution by

- -

Rearranges elements in a tensor of shape

+
+
torch_pixel_shuffle(self, upscale_factor)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor

+
upscale_factor
+

(int) factor to increase spatial resolution by

+
+
+

Rearranges elements in a tensor of shape

\((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\)

Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\).

See nn_pixel_shuffle() for details.
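The rearrangement can be sketched in pure Python (an illustration of the assumed index mapping, not the torch implementation): input channel `c*r*r + i*r + j` feeds output pixel `(h*r + i, w*r + j)` of output channel `c`, turning shape `(C*r^2, H, W)` into `(C, H*r, W*r)`.

```python
# Pixel-shuffle index mapping on nested lists, batch dimension omitted.
def pixel_shuffle(x, r):
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    c = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(cr2):
        cc, rem = divmod(ch, r * r)
        i, j = divmod(rem, r)
        for hh in range(h):
            for ww in range(w):
                out[cc][hh * r + i][ww * r + j] = x[ch][hh][ww]
    return out

# 9 channels of 4 x 4, each filled with its channel index, r = 3.
x = [[[float(ch)] * 4 for _ in range(4)] for ch in range(9)]
y = pixel_shuffle(x, 3)
print(len(y), len(y[0]), len(y[0][0]))  # 1 12 12
```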

+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_randn(c(1, 9, 4, 4))
-output = nnf_pixel_shuffle(input, 3)
-print(output$size())
-}
-#> [1]  1  1 12 12
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input = torch_randn(c(1, 9, 4, 4))
+output = nnf_pixel_shuffle(input, 3)
+print(output$size())
+}
+#> [1]  1  1 12 12
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_poisson.html b/dev/reference/torch_poisson.html index 9cd5a348b4e3e1c7eb87ca4b8c67512f807f9a3f..647e6e7173ca86e116c52d0fba230c15eae3bba9 100644 --- a/dev/reference/torch_poisson.html +++ b/dev/reference/torch_poisson.html @@ -1,79 +1,18 @@ - - - - - - - -Poisson — torch_poisson • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Poisson — torch_poisson • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_poisson(self, generator = NULL)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor containing the rates of the Poisson distribution

generator

(torch.Generator, optional) a pseudorandom number generator for sampling

- -

poisson(input, *, generator=NULL) -> Tensor

+
+
torch_poisson(self, generator = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor containing the rates of the Poisson distribution

+
generator
+

(torch.Generator, optional) a pseudorandom number generator for sampling

+
+
+

poisson(input, *, generator=NULL) -> Tensor

@@ -215,45 +133,44 @@ element in input i.e.,

$$ \mbox{out}_i \sim \mbox{Poisson}(\mbox{input}_i) $$
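The per-element sampling in the formula above can be sketched in plain Python (Knuth's multiplication method, fine for small rates; an illustration only, not how torch samples): each output entry is an independent Poisson draw whose rate comes from the matching input entry.

```python
# Knuth's Poisson sampler: multiply uniforms until the product drops
# below exp(-rate); the number of full multiplications is the draw.
import math
import random

def poisson_draw(rate, rng):
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
draws = [poisson_draw(4.0, rng) for _ in range(10_000)]
print(sum(draws) / len(draws))  # sample mean, close to the rate 4.0
```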

+
-

Examples

-
if (torch_is_installed()) {
-
-rates = torch_rand(c(4, 4)) * 5  # rate parameter between 0 and 5
-torch_poisson(rates)
-}
-#> torch_tensor
-#>  1  1  3  3
-#>  3  1  8  3
-#>  3  0  4  2
-#>  2  2  1  3
-#> [ CPUFloatType{4,4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+rates = torch_rand(c(4, 4)) * 5  # rate parameter between 0 and 5
+torch_poisson(rates)
+}
+#> torch_tensor
+#>  8  0  0  1
+#>  2  3  6  0
+#>  0  5  1  5
+#>  2  0  2  2
+#> [ CPUFloatType{4,4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_polar.html b/dev/reference/torch_polar.html index 8d4e5ad2712bba39d70c688f7171313ac532e78a..960b19a9abad3e861c10660fbd3b55f21f16c250 100644 --- a/dev/reference/torch_polar.html +++ b/dev/reference/torch_polar.html @@ -1,79 +1,18 @@ - - - - - - - -Polar — torch_polar • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Polar — torch_polar • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_polar(abs, angle)
- -

Arguments

- - - - - - - - - - -
abs

(Tensor) The absolute value of the complex tensor. Must be float or -double.

angle

(Tensor) The angle of the complex tensor. Must be same dtype as -abs.

- -

polar(abs, angle, *, out=None) -> Tensor

+
+
torch_polar(abs, angle)
+
+
+

Arguments

+
abs
+

(Tensor) The absolute value of the complex tensor. Must be float or +double.

+
angle
+

(Tensor) The angle of the complex tensor. Must be same dtype as +abs.

+
+
+

polar(abs, angle, *, out=None) -> Tensor

@@ -217,46 +135,45 @@ corresponding to the polar coordinates with absolute value abs and

$$ \mbox{out} = \mbox{abs} \cdot \cos(\mbox{angle}) + \mbox{abs} \cdot \sin(\mbox{angle}) \cdot j $$
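The same conversion can be checked with plain Python complex arithmetic (illustration only), using the magnitudes 1 and 2 at angles pi/2 and 5*pi/4 from the documented example.

```python
# Polar-to-complex conversion: out = r * (cos(t) + i sin(t)).
import math

def polar(mags, angles):
    return [r * complex(math.cos(t), math.sin(t)) for r, t in zip(mags, angles)]

z = polar([1.0, 2.0], [math.pi / 2, 5 * math.pi / 4])
print(abs(z[0] - 1j) < 1e-12)                 # True: 1 * e^{i pi/2} = i
print(abs(z[1].real + math.sqrt(2)) < 1e-12)  # True: real part is -sqrt(2)
```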

+
-

Examples

-
if (torch_is_installed()) {
-
-abs <- torch_tensor(c(1, 2), dtype=torch_float64())
-angle <- torch_tensor(c(pi / 2, 5 * pi / 4), dtype=torch_float64())
-z <- torch_polar(abs, angle)
-z
-}
-#> torch_tensor
-#> 1e-17 *
-#>  6.1232
-#> -141421356237309520.0000
-#> [ CPUComplexDoubleType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+abs <- torch_tensor(c(1, 2), dtype=torch_float64())
+angle <- torch_tensor(c(pi / 2, 5 * pi / 4), dtype=torch_float64())
+z <- torch_polar(abs, angle)
+z
+}
+#> torch_tensor
+#> 1e-17 *
+#>  6.1232
+#> -141421356237309520.0000
+#> [ CPUComplexDoubleType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_polygamma.html b/dev/reference/torch_polygamma.html index 8549235cbd90cdabb39d0ce10f7975d3624287c9..d2e8bbda00919b0d25d6ee9b3d3e13404f538739 100644 --- a/dev/reference/torch_polygamma.html +++ b/dev/reference/torch_polygamma.html @@ -1,79 +1,18 @@ - - - - - - - -Polygamma — torch_polygamma • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Polygamma — torch_polygamma • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,29 +111,26 @@

Polygamma

-
torch_polygamma(n, input)
- -

Arguments

- - - - - - - - - - -
n

(int) the order of the polygamma function

input

(Tensor) the input tensor.

- -

Note

+
+
torch_polygamma(n, input)
+
+
+

Arguments

+
n
+

(int) the order of the polygamma function

+
input
+

(Tensor) the input tensor.

+
+
+

Note

-
This function is not implemented for \(n \geq 2\).
-
- -

polygamma(n, input, out=NULL) -> Tensor

+
This function is not implemented for \(n \geq 2\).
+
+
+
+

polygamma(n, input, out=NULL) -> Tensor

@@ -220,40 +139,39 @@

$$ \psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x) $$

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-a = torch_tensor(c(1, 0.5))
-torch_polygamma(1, a)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+a = torch_tensor(c(1, 0.5))
+torch_polygamma(1, a)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_pow.html b/dev/reference/torch_pow.html index 4db1474de98ab10ca31374f9b9c835756346ca7f..ada6ea5cde2529e255a1bdd24310e7d0c2efbd86 100644 --- a/dev/reference/torch_pow.html +++ b/dev/reference/torch_pow.html @@ -1,79 +1,18 @@ - - - - - - - -Pow — torch_pow • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Pow — torch_pow • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_pow(self, exponent)
- -

Arguments

- - - - - - - - - - -
self

(float) the scalar base value for the power operation

exponent

(float or tensor) the exponent value

- -

pow(input, exponent, out=NULL) -> Tensor

+
+
torch_pow(self, exponent)
+
+
+

Arguments

+
self
+

(float) the scalar base value for the power operation

+
exponent
+

(float or tensor) the exponent value

+
+
+

pow(input, exponent, out=NULL) -> Tensor

@@ -223,8 +141,9 @@ When exponent is a tensor, the operation applied is:

$$ \mbox{out}_i = \mbox{input}_i ^ {\mbox{exponent}_i} $$

When exponent is a tensor, the shapes of input and exponent must be broadcastable.

-

pow(self, exponent, out=NULL) -> Tensor

- +
+
+

pow(self, exponent, out=NULL) -> Tensor

@@ -234,57 +153,56 @@ The returned tensor out is of the same shape as exponent.
$$ \mbox{out}_i = \mbox{self} ^ {\mbox{exponent}_i} $$
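The scalar-base case of that formula is a one-liner in plain Python (illustration only), matching the base-2 example on this page.

```python
# Scalar base, tensor exponent: out[i] = base ** exponent[i].
base = 2
exponent = [1, 2, 3, 4, 5]
out = [base ** e for e in exponent]
print(out)  # [2, 4, 8, 16, 32]
```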

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_pow(a, 2)
-exp <- torch_arange(1, 5)
-a <- torch_arange(1, 5)
-a
-exp
-torch_pow(a, exp)
-
-
-exp <- torch_arange(1, 5)
-base <- 2
-torch_pow(base, exp)
-}
-#> torch_tensor
-#>   2
-#>   4
-#>   8
-#>  16
-#>  32
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_pow(a, 2)
+exp <- torch_arange(1, 5)
+a <- torch_arange(1, 5)
+a
+exp
+torch_pow(a, exp)
+
+
+exp <- torch_arange(1, 5)
+base <- 2
+torch_pow(base, exp)
+}
+#> torch_tensor
+#>   2
+#>   4
+#>   8
+#>  16
+#>  32
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_prod.html b/dev/reference/torch_prod.html index dc6a112fe9cdba93395c78cb17fdca20415126ff..bbb4ca752b7d59155f93aa360ab1235173e191f3 100644 --- a/dev/reference/torch_prod.html +++ b/dev/reference/torch_prod.html @@ -1,79 +1,18 @@ - - - - - - - -Prod — torch_prod • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Prod — torch_prod • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_prod(self, dim, keepdim = FALSE, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

- -

prod(input, dtype=NULL) -> Tensor

+
+
torch_prod(self, dim, keepdim = FALSE, dtype = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

+
+
+

prod(input, dtype=NULL) -> Tensor

Returns the product of all elements in the input tensor.

-

prod(input, dim, keepdim=False, dtype=NULL) -> Tensor

- +
+
+

prod(input, dim, keepdim=False, dtype=NULL) -> Tensor

@@ -227,52 +142,50 @@ dimension dim.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 fewer dimension than input.
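A pure-Python sketch of the reduction (not the torch API): reducing over the row dimension of a 4 x 2 matrix multiplies down each column, leaving one product per column.

```python
# Product over the first dimension of a 4 x 2 matrix -> 2 values.
import math

a = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [0.5, 0.25]]
out = [math.prod(col) for col in zip(*a)]
print(out)  # [7.5, 12.0]
```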

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_prod(a)
-
-
-a = torch_randn(c(4, 2))
-a
-torch_prod(a, 1)
-}
-#> torch_tensor
-#> 0.001 *
-#>  4.4359
-#> -67.6969
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_prod(a)
+
+
+a = torch_randn(c(4, 2))
+a
+torch_prod(a, 1)
+}
+#> torch_tensor
+#> -0.1586
+#>  0.0262
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_promote_types.html b/dev/reference/torch_promote_types.html index 31a0579e7566b91ff5f8871a8156b3d8319d9550..dc5165d9e386f685919f719f0913e7a48bb35f2a 100644 --- a/dev/reference/torch_promote_types.html +++ b/dev/reference/torch_promote_types.html @@ -1,79 +1,18 @@ - - - - - - - -Promote_types — torch_promote_types • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Promote_types — torch_promote_types • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Promote_types

-
torch_promote_types(type1, type2)
- -

Arguments

- - - - - - - - - - -
type1

(torch.dtype)

type2

(torch.dtype)

- -

promote_types(type1, type2) -> dtype

+
+
torch_promote_types(type1, type2)
+
+
+

Arguments

+
type1
+

(torch.dtype)

+
type2
+

(torch.dtype)

+
+
+

promote_types(type1, type2) -> dtype

@@ -213,40 +131,39 @@ not smaller nor of lower kind than either type1 or type2. See type promotion documentation for more information on the type promotion logic.
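A toy model of that rule (the ordering below is a simplified assumption for illustration, not torch's full promotion table): pick the smallest dtype that sits at least as high in the ladder as both inputs.

```python
# Simplified promotion ladder; real promotion also weighs size vs. kind.
ladder = ["uint8", "int32", "int64", "float32", "float64"]

def promote(t1, t2):
    return ladder[max(ladder.index(t1), ladder.index(t2))]

print(promote("int32", "float32"))  # float32
print(promote("uint8", "int64"))    # int64, i.e. Long
```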

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_promote_types(torch_int32(), torch_float32())
-torch_promote_types(torch_uint8(), torch_long())
-}
-#> torch_Long
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_promote_types(torch_int32(), torch_float32())
+torch_promote_types(torch_uint8(), torch_long())
+}
+#> torch_Long
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_qr.html b/dev/reference/torch_qr.html index faadcd40d7767eb2d2efae30b1388ab2d7d73975..4868afaae49b9acb4f39cb2962047d24642ea80c 100644 --- a/dev/reference/torch_qr.html +++ b/dev/reference/torch_qr.html @@ -1,79 +1,18 @@ - - - - - - - -Qr — torch_qr • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Qr — torch_qr • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_qr(self, some = TRUE)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of matrices of dimension \(m \times n\).

some

(bool, optional) Set to TRUE for reduced QR decomposition and FALSE for complete QR decomposition.

- -

Note

+
+
torch_qr(self, some = TRUE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of matrices of dimension \(m \times n\).

+
some
+

(bool, optional) Set to TRUE for reduced QR decomposition and FALSE for complete QR decomposition.

+
+
+

Note

Precision may be lost if the magnitudes of the elements of input are large.

While it should always give you a valid decomposition, it may not give you the same one across platforms; it will depend on your LAPACK implementation.

-

qr(input, some=TRUE, out=NULL) -> (Tensor, Tensor)

- +
+
+

qr(input, some=TRUE, out=NULL) -> (Tensor, Tensor)

@@ -222,48 +141,47 @@ with \(Q\) being an orthogonal matrix or batch of orthogonal matrices and \(R\) being an upper triangular matrix or batch of upper triangular matrices.

If some is TRUE, then this function returns the thin (reduced) QR factorization. Otherwise, if some is FALSE, this function returns the complete QR factorization.
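The reduced factorization can be sketched with a tiny Gram-Schmidt pass on a 2 x 2 input given as two column vectors (illustration only; torch itself uses LAPACK): normalize the first column, then orthogonalize the second against it.

```python
# 2 x 2 Gram-Schmidt QR: Q holds orthonormal columns, R the coefficients.
import math

def qr2(cols):
    c1, c2 = cols
    n1 = math.hypot(*c1)
    q1 = [x / n1 for x in c1]
    r12 = sum(x * y for x, y in zip(q1, c2))
    u2 = [y - r12 * x for x, y in zip(q1, c2)]
    n2 = math.hypot(*u2)
    q2 = [x / n2 for x in u2]
    return (q1, q2), ((n1, r12), (0.0, n2))

(q1, q2), r = qr2(([3.0, 4.0], [1.0, 2.0]))
print(abs(sum(x * y for x, y in zip(q1, q2))) < 1e-12)  # True: Q orthogonal
print(r[0][0])  # 5.0, the norm of the first column
```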

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_tensor(matrix(c(12., -51, 4, 6, 167, -68, -4, 24, -41), ncol = 3, byrow = TRUE))
-out = torch_qr(a)
-q = out[[1]]
-r = out[[2]]
-torch_mm(q, r)$round()
-torch_mm(q$t(), q)$round()
-}
-#> torch_tensor
-#>  1  0  0
-#>  0  1  0
-#>  0  0  1
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_tensor(matrix(c(12., -51, 4, 6, 167, -68, -4, 24, -41), ncol = 3, byrow = TRUE))
+out = torch_qr(a)
+q = out[[1]]
+r = out[[2]]
+torch_mm(q, r)$round()
+torch_mm(q$t(), q)$round()
+}
+#> torch_tensor
+#>  1  0  0
+#>  0  1  0
+#>  0  0  1
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_qscheme.html b/dev/reference/torch_qscheme.html index 235234b5b554d264e8bf7f4cd88d64f3166f24f2..0d055fd3dc4e958a936802a87a057149c615fade 100644 --- a/dev/reference/torch_qscheme.html +++ b/dev/reference/torch_qscheme.html @@ -1,79 +1,18 @@ - - - - - - - -Creates the corresponding Scheme object — torch_qscheme • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates the corresponding Scheme object — torch_qscheme • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,41 +111,38 @@

Creates the corresponding Scheme object

-
torch_per_channel_affine()
+    
+
torch_per_channel_affine()
 
-torch_per_tensor_affine()
+torch_per_tensor_affine()
 
-torch_per_channel_symmetric()
-
-torch_per_tensor_symmetric()
+torch_per_channel_symmetric() +torch_per_tensor_symmetric()
+
+ -
- +
- - + + diff --git a/dev/reference/torch_quantile.html b/dev/reference/torch_quantile.html index 8f249176b4725339d98edf9c64b6fdba1f7aa63f..c9f0ea1de214735736659ace4fed7ad72c779b06 100644 --- a/dev/reference/torch_quantile.html +++ b/dev/reference/torch_quantile.html @@ -1,79 +1,18 @@ - - - - - - - -Quantile — torch_quantile • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Quantile — torch_quantile • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,42 +111,33 @@

Quantile

-
torch_quantile(self, q, dim = NULL, keepdim = FALSE, interpolation)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

q

(float or Tensor) a scalar or 1D tensor of quantile values in the range [0, 1]

dim

(int) the dimension to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

interpolation

The interpolation method.

- -

quantile(input, q) -> Tensor

+
+
torch_quantile(self, q, dim = NULL, keepdim = FALSE, interpolation)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
q
+

(float or Tensor) a scalar or 1D tensor of quantile values in the range [0, 1]

+
dim
+

(int) the dimension to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
interpolation
+

The interpolation method.

+
+
+

quantile(input, q) -> Tensor

Returns the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points.

-

quantile(input, q, dim=None, keepdim=FALSE, *, out=None) -> Tensor

- +
+
+

quantile(input, q, dim=None, keepdim=FALSE, *, out=None) -> Tensor

@@ -234,52 +147,51 @@ data points. By default, dim is None, resulting in the input being flattened before computation.

If keepdim is TRUE, the output dimensions are of the same size as input except in the dimensions being reduced (dim or all if dim is NULL) where they -have size 1. Otherwise, the dimensions being reduced are squeezed (see torch_squeeze). +have size 1. Otherwise, the dimensions being reduced are squeezed (see torch_squeeze). If q is a 1D tensor, an extra dimension is prepended to the output tensor with the same size as q which represents the quantiles.
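The linear interpolation itself can be sketched in pure Python (illustration only): position `q * (n - 1)` in the sorted values is split into an integer index and a fraction that blends the two neighboring values.

```python
# q-th quantile of a flattened input with linear interpolation.
def quantile(values, q):
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

print(quantile([3.0, 1.0, 2.0], 0.5))        # 2.0
print(quantile([1.0, 2.0, 3.0, 4.0], 0.25))  # 1.75
```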

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_randn(c(1, 3))
-a
-q <- torch_tensor(c(0, 0.5, 1))
-torch_quantile(a, q)
-
-
-a <- torch_randn(c(2, 3))
-a
-q <- torch_tensor(c(0.25, 0.5, 0.75))
-torch_quantile(a, q, dim=1, keepdim=TRUE)
-torch_quantile(a, q, dim=1, keepdim=TRUE)$shape
-}
-#> [1] 3 1 3
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_randn(c(1, 3))
+a
+q <- torch_tensor(c(0, 0.5, 1))
+torch_quantile(a, q)
+
+
+a <- torch_randn(c(2, 3))
+a
+q <- torch_tensor(c(0.25, 0.5, 0.75))
+torch_quantile(a, q, dim=1, keepdim=TRUE)
+torch_quantile(a, q, dim=1, keepdim=TRUE)$shape
+}
+#> [1] 3 1 3
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_quantize_per_channel.html b/dev/reference/torch_quantize_per_channel.html index d57711e10cf4ac7db2551f0ed9c4404e213eeac6..04e429281fe78aa1ef834adc042b80b6988d3143 100644 --- a/dev/reference/torch_quantize_per_channel.html +++ b/dev/reference/torch_quantize_per_channel.html @@ -1,79 +1,18 @@ - - - - - - - -Quantize_per_channel — torch_quantize_per_channel • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Quantize_per_channel — torch_quantize_per_channel • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,78 +111,67 @@

Quantize_per_channel

-
torch_quantize_per_channel(self, scales, zero_points, axis, dtype)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) float tensor to quantize

scales

(Tensor) float 1D tensor of scales to use, size should match input.size(axis)

zero_points

(int) integer 1D tensor of offset to use, size should match input.size(axis)

axis

(int) dimension along which to apply per-channel quantization

dtype

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch_qint8, torch_qint32

- -

quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor

+
+
torch_quantize_per_channel(self, scales, zero_points, axis, dtype)
+
+
+

Arguments

+
self
+

(Tensor) float tensor to quantize

+
scales
+

(Tensor) float 1D tensor of scales to use, size should match input.size(axis)

+
zero_points
+

(int) integer 1D tensor of offset to use, size should match input.size(axis)

+
axis
+

(int) dimension along which to apply per-channel quantization

+
dtype
+

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch_qint8, torch_qint32

+
+
+

quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor

Converts a float tensor to a per-channel quantized tensor with the given scales and zero points.
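A sketch of the per-channel affine step (assuming the quint8 clamp range 0..255 for illustration): each channel along the axis gets its own scale and zero point before the shared round-and-clamp, here with the scales 0.1/0.01 from this page's example.

```python
# Per-channel affine quantization: q = clamp(round(x / scale) + zero_point).
def quantize_row(xs, scale, zero_point, qmin=0, qmax=255):
    return [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]

x = [[-1.0, 0.0], [1.0, 2.0]]
scales, zero_points = [0.1, 0.01], [10, 0]
q = [quantize_row(row, s, z) for row, s, z in zip(x, scales, zero_points)]
print(q)  # [[0, 10], [100, 200]]
```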

+
-

Examples

-
if (torch_is_installed()) {
-x = torch_tensor(matrix(c(-1.0, 0.0, 1.0, 2.0), ncol = 2, byrow = TRUE))
-torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), 
-                           torch_tensor(c(10L, 0L)), 0, torch_quint8())
-torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), 
-                           torch_tensor(c(10L, 0L)), 0, torch_quint8())$int_repr()
-}
-#> torch_tensor
-#>    0   10
-#>  100  200
-#> [ CPUByteType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+x = torch_tensor(matrix(c(-1.0, 0.0, 1.0, 2.0), ncol = 2, byrow = TRUE))
+torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), 
+                           torch_tensor(c(10L, 0L)), 0, torch_quint8())
+torch_quantize_per_channel(x, torch_tensor(c(0.1, 0.01)), 
+                           torch_tensor(c(10L, 0L)), 0, torch_quint8())$int_repr()
+}
+#> torch_tensor
+#>    0   10
+#>  100  200
+#> [ CPUByteType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_quantize_per_tensor.html b/dev/reference/torch_quantize_per_tensor.html index 24c7d53bc7bf5996c3d4166930fc304c0d633404..59309becdf3e9fa3bd9584a8502ce0db698e8492 100644 --- a/dev/reference/torch_quantize_per_tensor.html +++ b/dev/reference/torch_quantize_per_tensor.html @@ -1,79 +1,18 @@ - - - - - - - -Quantize_per_tensor — torch_quantize_per_tensor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Quantize_per_tensor — torch_quantize_per_tensor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,73 +111,64 @@

Quantize_per_tensor

-
torch_quantize_per_tensor(self, scale, zero_point, dtype)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) float tensor to quantize

scale

(float) scale to apply in quantization formula

zero_point

(int) offset in integer value that maps to float zero

dtype

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch_qint8, torch_qint32

- -

quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor

+
+
torch_quantize_per_tensor(self, scale, zero_point, dtype)
+
+
+

Arguments

+
self
+

(Tensor) float tensor to quantize

+
scale
+

(float) scale to apply in quantization formula

+
zero_point
+

(int) offset in integer value that maps to float zero

+
dtype
+

(torch.dtype) the desired data type of returned tensor. Has to be one of the quantized dtypes: torch_quint8, torch_qint8, torch_qint32

+
+
+

quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor

Converts a float tensor to a quantized tensor with the given scale and zero point.
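The affine formula behind this can be sketched in plain Python (assuming the quint8 clamp range 0..255 for illustration): `q = clamp(round(x / scale) + zero_point)`, here with the scale 0.1 and zero point 10 used in this page's example.

```python
# Per-tensor affine quantization: one scale and zero point for all entries.
def quantize(xs, scale, zero_point, qmin=0, qmax=255):
    return [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]

print(quantize([-1.0, 0.0, 1.0, 2.0], 0.1, 10))  # [0, 10, 20, 30]
```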

+
-

Examples

-
if (torch_is_installed()) {
-torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())
-torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())$int_repr()
-}
-#> torch_tensor
-#>   0
-#>  10
-#>  20
-#>  30
-#> [ CPUByteType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())
+torch_quantize_per_tensor(torch_tensor(c(-1.0, 0.0, 1.0, 2.0)), 0.1, 10, torch_quint8())$int_repr()
+}
+#> torch_tensor
+#>   0
+#>  10
+#>  20
+#>  30
+#> [ CPUByteType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rad2deg.html b/dev/reference/torch_rad2deg.html index 2d20d6b8f61c78fbcd444ea85e53f7b6de4aac91..5bd0a987d34813cb0062e8aac9fe5bac4ac5a72f 100644 --- a/dev/reference/torch_rad2deg.html +++ b/dev/reference/torch_rad2deg.html @@ -1,79 +1,18 @@ - - - - - - - -Rad2deg — torch_rad2deg • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rad2deg — torch_rad2deg • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_rad2deg(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

rad2deg(input, *, out=None) -> Tensor

+
+
torch_rad2deg(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

rad2deg(input, *, out=None) -> Tensor

Returns a new tensor with each of the elements of input converted from angles in radians to degrees.
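The elementwise conversion is just `deg = rad * 180 / pi`; this plain-Python check (illustration only) mirrors the first column of the documented example.

```python
# Radians to degrees, elementwise, rounded to 4 decimals for display.
import math

rads = [3.142, 6.283, 1.570]
degs = [round(math.degrees(r), 4) for r in rads]
print(degs)  # [180.0233, 359.9894, 89.9544]
```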

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(rbind(c(3.142, -3.142), c(6.283, -6.283), c(1.570, -1.570)))
-torch_rad2deg(a)
-}
-#> torch_tensor
-#>  180.0233 -180.0233
-#>  359.9894 -359.9894
-#>   89.9544  -89.9544
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(rbind(c(3.142, -3.142), c(6.283, -6.283), c(1.570, -1.570)))
+torch_rad2deg(a)
+}
+#> torch_tensor
+#>  180.0233 -180.0233
+#>  359.9894 -359.9894
+#>   89.9544  -89.9544
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rand.html b/dev/reference/torch_rand.html index 02c4c18a76d54bc302c081fa36ab656f8388dd82..3e466f60536db0de9b9612de23b97b6600ee978f 100644 --- a/dev/reference/torch_rand.html +++ b/dev/reference/torch_rand.html @@ -1,79 +1,18 @@ - - - - - - - -Rand — torch_rand • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rand — torch_rand • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_rand(
-  ...,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
...

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

names

optional dimension names

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

rand(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_rand(
+  ...,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
...
+

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

+
names
+

optional dimension names

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

rand(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a tensor filled with random numbers from a uniform distribution on the interval \([0, 1)\)

The shape of the tensor is defined by the variable argument size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_rand(4)
-torch_rand(c(2, 3))
-}
-#> torch_tensor
-#>  0.2689  0.7828  0.4615
-#>  0.4835  0.8843  0.2912
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_rand(4)
+torch_rand(c(2, 3))
+}
+#> torch_tensor
+#>  0.1373  0.2308  0.4866
+#>  0.4432  0.6311  0.7305
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rand_like.html b/dev/reference/torch_rand_like.html index 1d2dd9078694b5c81c5232da8eb161eb3199156b..6dcd47aedbd0a7965244457ae390853e5f8729ea 100644 --- a/dev/reference/torch_rand_like.html +++ b/dev/reference/torch_rand_like.html @@ -1,79 +1,18 @@ - - - - - - - -Rand_like — torch_rand_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rand_like — torch_rand_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,46 +111,34 @@

Rand_like

-
torch_rand_like(
-  input,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine the size of the output tensor.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

rand_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

+
+
torch_rand_like(
+  input,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine the size of the output tensor.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

rand_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

@@ -236,32 +146,29 @@ random numbers from a uniform distribution on the interval \([0, 1)\). torch_rand_like(input) is equivalent to torch_rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

+
+
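This page ships without an Examples section; a minimal sketch in R, assuming the torch package is installed:

```r
library(torch)

# template tensor: only its shape, dtype and device are used
x <- torch_empty(2, 3)

# uniform draws on [0, 1) with the same shape as x
y <- torch_rand_like(x)
y$shape
```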
-
- +
- - + + diff --git a/dev/reference/torch_randint.html b/dev/reference/torch_randint.html index 03c06b0c0bc53094ff5375c4fef8633d86d2ba9f..14c22bef852374aa9ff525c3c2cd6f59e496f58d 100644 --- a/dev/reference/torch_randint.html +++ b/dev/reference/torch_randint.html @@ -1,79 +1,18 @@ - - - - - - - -Randint — torch_randint • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Randint — torch_randint • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,61 +111,43 @@

Randint

-
torch_randint(
-  low,
-  high,
-  size,
-  generator = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
low

(int, optional) Lowest integer to be drawn from the distribution. Default: 0.

high

(int) One above the highest integer to be drawn from the distribution.

size

(tuple) a tuple defining the shape of the output tensor.

generator

(torch.Generator, optional) a pseudorandom number generator for sampling

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

memory format for the resulting tensor.

- -

randint(low=0, high, size, *, generator=NULL, out=NULL, \

+
+
torch_randint(
+  low,
+  high,
+  size,
+  generator = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
low
+

(int, optional) Lowest integer to be drawn from the distribution. Default: 0.

+
high
+

(int) One above the highest integer to be drawn from the distribution.

+
size
+

(tuple) a tuple defining the shape of the output tensor.

+
generator
+

(torch.Generator, optional) a pseudorandom number generator for sampling

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

memory format for the resulting tensor.

+
+
+

randint(low=0, high, size, *, generator=NULL, out=NULL, \

@@ -254,44 +158,43 @@ between low (inclusive) and high (exclusive).

Note: with the global dtype default (torch_float32), this function returns a tensor with dtype torch_int64.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_randint(3, 5, list(3))
-torch_randint(0, 10, size = list(2, 2))
-torch_randint(3, 10, list(2, 2))
-}
-#> torch_tensor
-#>  9  5
-#>  3  5
-#> [ CPUFloatType{2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_randint(3, 5, list(3))
+torch_randint(0, 10, size = list(2, 2))
+torch_randint(3, 10, list(2, 2))
+}
+#> torch_tensor
+#>  4  6
+#>  8  8
+#> [ CPUFloatType{2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_randint_like.html b/dev/reference/torch_randint_like.html index ccf21413e4a7681e7af0b932b313e21dd9bcafd3..1e7d8d1173c64d06017fc50cdebafc5e8c8eb693 100644 --- a/dev/reference/torch_randint_like.html +++ b/dev/reference/torch_randint_like.html @@ -1,79 +1,18 @@ - - - - - - - -Randint_like — torch_randint_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Randint_like — torch_randint_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,51 +111,37 @@

Randint_like

-
torch_randint_like(
-  input,
-  low,
-  high,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine the size of the output tensor.

low

(int, optional) Lowest integer to be drawn from the distribution. Default: 0.

high

(int) One above the highest integer to be drawn from the distribution.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

randint_like(input, low=0, high, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

+
+
torch_randint_like(
+  input,
+  low,
+  high,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine the size of the output tensor.

+
low
+

(int, optional) Lowest integer to be drawn from the distribution. Default: 0.

+
high
+

(int) One above the highest integer to be drawn from the distribution.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

randint_like(input, low=0, high, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False,

@@ -244,32 +152,29 @@ random integers generated uniformly between low (inclusive) and

Note: with the global dtype default (torch_float32), this function returns a tensor with dtype torch_int64.

+
+
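No example is shown for this function; a minimal sketch in R, assuming the torch package is installed:

```r
library(torch)

# template tensor: its shape determines the shape of the result
x <- torch_empty(2, 2)

# integers drawn uniformly from [3, 10), shaped like x
torch_randint_like(x, low = 3, high = 10)
```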
-
- +
- - + + diff --git a/dev/reference/torch_randn.html b/dev/reference/torch_randn.html index 7aa37cb93879234c2b5d1274ea5fa0382e28e2e6..4ffc02199cb6152f7ace42016e299c9d8addf32e 100644 --- a/dev/reference/torch_randn.html +++ b/dev/reference/torch_randn.html @@ -1,79 +1,18 @@ - - - - - - - -Randn — torch_randn • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Randn — torch_randn • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_randn(
-  ...,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
...

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

names

optional names for the dimensions

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

randn(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_randn(
+  ...,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
...
+

(int...) a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

+
names
+

optional names for the dimensions

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

randn(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -239,43 +149,42 @@ distribution).

\mbox{out}_{i} \sim \mathcal{N}(0, 1) $$ The shape of the tensor is defined by the variable argument size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_randn(c(4))
-torch_randn(c(2, 3))
-}
-#> torch_tensor
-#>  0.1095 -0.4758  1.2508
-#> -1.4637 -0.8335  3.3825
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_randn(c(4))
+torch_randn(c(2, 3))
+}
+#> torch_tensor
+#> -1.0901  1.4860 -1.5434
+#>  1.4869 -1.7941  1.6130
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_randn_like.html b/dev/reference/torch_randn_like.html index 6effef6e3b957bc774eec2d30cbe9bede791fd35..cff57a8b617250e44bd76749af623f02920c7bbf 100644 --- a/dev/reference/torch_randn_like.html +++ b/dev/reference/torch_randn_like.html @@ -1,79 +1,18 @@ - - - - - - - -Randn_like — torch_randn_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Randn_like — torch_randn_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,46 +111,34 @@

Randn_like

-
torch_randn_like(
-  input,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine the size of the output tensor.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

randn_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

+
+
torch_randn_like(
+  input,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine the size of the output tensor.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

randn_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

@@ -236,32 +146,29 @@ random numbers from a normal distribution with mean 0 and variance 1. torch_randn_like(input) is equivalent to torch_randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

+
+
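This page also lacks a usage example; a minimal sketch in R, assuming the torch package is installed:

```r
library(torch)

# template tensor: only its shape, dtype and device matter
x <- torch_empty(2, 3)

# standard-normal draws with the same shape, dtype and device as x
torch_randn_like(x)
```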
-
- +
- - + + diff --git a/dev/reference/torch_randperm.html b/dev/reference/torch_randperm.html index 0641a92f25738f6356399d58476dd39d7622c0af..8e89ef3641e786887976f348ff979a1ef2d0a455 100644 --- a/dev/reference/torch_randperm.html +++ b/dev/reference/torch_randperm.html @@ -1,79 +1,18 @@ - - - - - - - -Randperm — torch_randperm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Randperm — torch_randperm • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,83 +111,72 @@

Randperm

-
torch_randperm(
-  n,
-  dtype = torch_int64(),
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
n

(int) the upper bound (exclusive)

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: torch_int64.

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

randperm(n, out=NULL, dtype=torch.int64, layout=torch.strided, device=NULL, requires_grad=False) -> LongTensor

+
+
torch_randperm(
+  n,
+  dtype = torch_int64(),
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
n
+

(int) the upper bound (exclusive)

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: torch_int64.

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

randperm(n, out=NULL, dtype=torch.int64, layout=torch.strided, device=NULL, requires_grad=False) -> LongTensor

Returns a random permutation of integers from 0 to n - 1.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_randperm(4)
-}
-#> torch_tensor
-#>  3
-#>  0
-#>  2
-#>  1
-#> [ CPULongType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_randperm(4)
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  2
+#>  3
+#> [ CPULongType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_range.html b/dev/reference/torch_range.html index 04c6102ad61fc5129a664e4507af3237c2549256..7769cb51be43df3dc2250dd91a8da75bc8753928 100644 --- a/dev/reference/torch_range.html +++ b/dev/reference/torch_range.html @@ -1,79 +1,18 @@ - - - - - - - -Range — torch_range • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Range — torch_range • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_range(
-  start,
-  end,
-  step = 1,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
start

(float) the starting value for the set of points. Default: 0.

end

(float) the ending value for the set of points

step

(float) the gap between each pair of adjacent points. Default: 1.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype (see torch.get_default_dtype). Otherwise, the dtype is inferred to be torch.int64.

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

range(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_range(
+  start,
+  end,
+  step = 1,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
start
+

(float) the starting value for the set of points. Default: 0.

+
end
+

(float) the ending value for the set of points

+
step
+

(float) the gap between each pair of adjacent points. Default: 1.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype (see torch.get_default_dtype). Otherwise, the dtype is inferred to be torch.int64.

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

range(start=0, end, step=1, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

@@ -243,55 +151,55 @@ the gap between two values in the tensor.

$$ \mbox{out}_{i+1} = \mbox{out}_i + \mbox{step}. $$

-

Warning

- +
+
+

Warning

-

This function is deprecated in favor of torch_arange.

+

This function is deprecated in favor of torch_arange.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_range(1, 4)
-torch_range(1, 4, 0.5)
-}
-#> Warning: This function is deprecated in favor of torch_arange.
-#> Warning: This function is deprecated in favor of torch_arange.
-#> torch_tensor
-#>  1.0000
-#>  1.5000
-#>  2.0000
-#>  2.5000
-#>  3.0000
-#>  3.5000
-#>  4.0000
-#> [ CPUFloatType{7} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_range(1, 4)
+torch_range(1, 4, 0.5)
+}
+#> Warning: This function is deprecated in favor of torch_arange.
+#> Warning: This function is deprecated in favor of torch_arange.
+#> torch_tensor
+#>  1.0000
+#>  1.5000
+#>  2.0000
+#>  2.5000
+#>  3.0000
+#>  3.5000
+#>  4.0000
+#> [ CPUFloatType{7} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_real.html b/dev/reference/torch_real.html index 73cae93e567f8ecdaff5e5e974c3cdd73047a099..44bbcaeda2e4e91108552b370c5ad858edab8df5 100644 --- a/dev/reference/torch_real.html +++ b/dev/reference/torch_real.html @@ -1,79 +1,18 @@ - - - - - - - -Real — torch_real • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Real — torch_real • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_real(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

real(input) -> Tensor

+
+
torch_real(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

real(input) -> Tensor

Returns the real part of the input tensor. If input is a real (non-complex) tensor, this function just returns it.

-

Warning

- +
+
+

Warning

Not yet implemented for complex tensors.

$$ \mbox{out}_{i} = real(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-torch_real(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+torch_real(torch_tensor(c(-1 + 1i, -2 + 2i, 3 - 3i)))
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_reciprocal.html b/dev/reference/torch_reciprocal.html index 37ec1582c9ae537734bd36e85c1b19aef4c52f32..1c2e489420e11d0a54f38f5b15eecef02fda6323 100644 --- a/dev/reference/torch_reciprocal.html +++ b/dev/reference/torch_reciprocal.html @@ -1,79 +1,18 @@ - - - - - - - -Reciprocal — torch_reciprocal • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Reciprocal — torch_reciprocal • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

Reciprocal

-
torch_reciprocal(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

reciprocal(input, out=NULL) -> Tensor

+
+
torch_reciprocal(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

reciprocal(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \frac{1}{\mbox{input}_{i}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_reciprocal(a)
-}
-#> torch_tensor
-#>  9.0585
-#> -1.5912
-#> -1.0692
-#>  8.6893
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_reciprocal(a)
+}
+#> torch_tensor
+#> -0.4091
+#>  2.7370
+#> -1.8316
+#> -2.6616
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_reduction.html b/dev/reference/torch_reduction.html index 773d695f34e202eadda34abe5e148496868f833c..c170b824dbdafc8839e5336755435186eed496d6 100644 --- a/dev/reference/torch_reduction.html +++ b/dev/reference/torch_reduction.html @@ -1,79 +1,18 @@ - - - - - - - -Creates the reduction objet — torch_reduction • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Creates the reduction objet — torch_reduction • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,39 +111,36 @@

Creates the reduction object

-
torch_reduction_sum()
+    
+
torch_reduction_sum()
 
-torch_reduction_mean()
-
-torch_reduction_none()
+torch_reduction_mean() +torch_reduction_none()
+
+ -
- +
- - + + diff --git a/dev/reference/torch_relu.html b/dev/reference/torch_relu.html index a7305a47cc76bbddf6b564f0a18d351097049be7..9d737b91d0c3fd7972c1f8a5a6fa453ac9825028 100644 --- a/dev/reference/torch_relu.html +++ b/dev/reference/torch_relu.html @@ -1,79 +1,18 @@ - - - - - - - -Relu — torch_relu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Relu — torch_relu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,49 +111,44 @@

Relu

-
torch_relu(self)
- -

Arguments

- - - - - - -
self

the input tensor

- -

relu(input) -> Tensor

+
+
torch_relu(self)
+
+
+

Arguments

+
self
+

the input tensor

+
+
+

relu(input) -> Tensor

Computes the relu transformation.

+
+
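No example accompanies this page; a minimal sketch in R, assuming the torch package is installed:

```r
library(torch)

x <- torch_tensor(c(-1, 0.5, 2))
torch_relu(x)  # negative entries are clamped to zero: 0.0, 0.5, 2.0
```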
-
- +
- - + + diff --git a/dev/reference/torch_relu_.html b/dev/reference/torch_relu_.html index a6d7430c1c75eb97d2335011cb8250c5f6d58ceb..75704aca49bfa243bf5cffcd62a2eeb44a1bf7f2 100644 --- a/dev/reference/torch_relu_.html +++ b/dev/reference/torch_relu_.html @@ -1,79 +1,18 @@ - - - - - - - -Relu_ — torch_relu_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Relu_ — torch_relu_ • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_relu_(self)
- -

Arguments

- - - - - - -
self

the input tensor

- -

relu_(input) -> Tensor

+
+
torch_relu_(self)
+
+
+

Arguments

+
self
+

the input tensor

+
+
+

relu_(input) -> Tensor

-

In-place version of torch_relu().

+

In-place version of torch_relu().

+
+
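A minimal sketch of the in-place variant in R, assuming the torch package is installed:

```r
library(torch)

x <- torch_tensor(c(-1, 0.5, 2))
torch_relu_(x)  # modifies x in place rather than allocating a new tensor
x               # x itself now holds 0.0, 0.5, 2.0
```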
-
- +
- - + + diff --git a/dev/reference/torch_remainder.html b/dev/reference/torch_remainder.html index 6abb5569362367db1436e8eac38b8a72a2b81eb0..5229c73458d503efcaa82f6f9230af1bb54d51da 100644 --- a/dev/reference/torch_remainder.html +++ b/dev/reference/torch_remainder.html @@ -1,79 +1,18 @@ - - - - - - - -Remainder — torch_remainder • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Remainder — torch_remainder • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Remainder

-
torch_remainder(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the dividend

other

(Tensor or float) the divisor that may be either a number or a Tensor of the same shape as the dividend

- -

remainder(input, other, out=NULL) -> Tensor

+
+
torch_remainder(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the dividend

+
other
+

(Tensor or float) the divisor that may be either a number or a Tensor of the same shape as the dividend

+
+
+

remainder(input, other, out=NULL) -> Tensor

@@ -214,46 +132,45 @@ numbers. The remainder has the same sign as the divisor.

When other is a tensor, the shapes of input and other must be broadcastable .

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_remainder(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2)
-torch_remainder(torch_tensor(c(1., 2, 3, 4, 5)), 1.5)
-}
-#> torch_tensor
-#>  1.0000
-#>  0.5000
-#>  0.0000
-#>  1.0000
-#>  0.5000
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_remainder(torch_tensor(c(-3., -2, -1, 1, 2, 3)), 2)
+torch_remainder(torch_tensor(c(1., 2, 3, 4, 5)), 1.5)
+}
+#> torch_tensor
+#>  1.0000
+#>  0.5000
+#>  0.0000
+#>  1.0000
+#>  0.5000
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_renorm.html b/dev/reference/torch_renorm.html index 9a5f8bc48e32391cc500b860e38dd103b2b9fa11..bb11d2d01afdff19ab00a23a803434f35fd54ecf 100644 --- a/dev/reference/torch_renorm.html +++ b/dev/reference/torch_renorm.html @@ -1,79 +1,18 @@ - - - - - - - -Renorm — torch_renorm • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Renorm — torch_renorm • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_renorm(self, p, dim, maxnorm)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

p

(float) the power for the norm computation

dim

(int) the dimension to slice over to get the sub-tensors

maxnorm

(float) the maximum norm to keep each sub-tensor under

- -

Note

+
+
torch_renorm(self, p, dim, maxnorm)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
p
+

(float) the power for the norm computation

+
dim
+

(int) the dimension to slice over to get the sub-tensors

+
maxnorm
+

(float) the maximum norm to keep each sub-tensor under

+
+
+

Note

If the norm of a row is lower than maxnorm, the row is unchanged

-

renorm(input, p, dim, maxnorm, out=NULL) -> Tensor

- +
+
+

renorm(input, p, dim, maxnorm, out=NULL) -> Tensor

Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm

+
-

Examples

-
if (torch_is_installed()) {
-x = torch_ones(c(3, 3))
-x[2,]$fill_(2)
-x[3,]$fill_(3)
-x
-torch_renorm(x, 1, 1, 5)
-}
-#> torch_tensor
-#>  1.0000  1.0000  1.0000
-#>  1.6667  1.6667  1.6667
-#>  1.6667  1.6667  1.6667
-#> [ CPUFloatType{3,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+x = torch_ones(c(3, 3))
+x[2,]$fill_(2)
+x[3,]$fill_(3)
+x
+torch_renorm(x, 1, 1, 5)
+}
+#> torch_tensor
+#>  1.0000  1.0000  1.0000
+#>  1.6667  1.6667  1.6667
+#>  1.6667  1.6667  1.6667
+#> [ CPUFloatType{3,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_repeat_interleave.html b/dev/reference/torch_repeat_interleave.html index 7ce7918fbe22c719e41a9ca7998686691e1daef6..e37831713a8dc9e2c272dbdfa0e6e390be8090bf 100644 --- a/dev/reference/torch_repeat_interleave.html +++ b/dev/reference/torch_repeat_interleave.html @@ -1,79 +1,18 @@ - - - - - - - -Repeat_interleave — torch_repeat_interleave • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Repeat_interleave — torch_repeat_interleave • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,83 +111,78 @@

Repeat_interleave

-
torch_repeat_interleave(self, repeats, dim = NULL)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

repeats

(Tensor or int) The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

dim

(int, optional) The dimension along which to repeat values. By default, use the flattened input array, and return a flat output array.

- -

repeat_interleave(input, repeats, dim=NULL) -> Tensor

+
+
torch_repeat_interleave(self, repeats, dim = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
repeats
+

(Tensor or int) The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

+
dim
+

(int, optional) The dimension along which to repeat values. By default, use the flattened input array, and return a flat output array.

+
+
+

repeat_interleave(input, repeats, dim=NULL) -> Tensor

Repeat elements of a tensor.

-

Warning

- +
+
+

Warning

-
This is different from `torch_Tensor.repeat` but similar to `numpy.repeat`.
-
- -

repeat_interleave(repeats) -> Tensor

+
This is different from `torch_Tensor.repeat` but similar to `numpy.repeat`.
+
+
+
+

repeat_interleave(repeats) -> Tensor

If the repeats is tensor([n1, n2, n3, ...]), then the output will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]) where 0 appears n1 times, 1 appears n2 times, 2 appears n3 times, etc.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-x = torch_tensor(c(1, 2, 3))
-x$repeat_interleave(2)
-y = torch_tensor(matrix(c(1, 2, 3, 4), ncol = 2, byrow=TRUE))
-torch_repeat_interleave(y, 2)
-torch_repeat_interleave(y, 3, dim=1)
-torch_repeat_interleave(y, torch_tensor(c(1, 2)), dim=1)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+x = torch_tensor(c(1, 2, 3))
+x$repeat_interleave(2)
+y = torch_tensor(matrix(c(1, 2, 3, 4), ncol = 2, byrow=TRUE))
+torch_repeat_interleave(y, 2)
+torch_repeat_interleave(y, 3, dim=1)
+torch_repeat_interleave(y, torch_tensor(c(1, 2)), dim=1)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_reshape.html b/dev/reference/torch_reshape.html index 1a1138bc0a1b9f3efed0d32e7f038705a49d107a..48007b6d58df91f230fef70074dad000d0894afa 100644 --- a/dev/reference/torch_reshape.html +++ b/dev/reference/torch_reshape.html @@ -1,79 +1,18 @@ - - - - - - - -Reshape — torch_reshape • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Reshape — torch_reshape • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_reshape(self, shape)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to be reshaped

shape

(tuple of ints) the new shape

- -

reshape(input, shape) -> Tensor

+
+
torch_reshape(self, shape)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to be reshaped

+
shape
+

(tuple of ints) the new shape

+
+
+

reshape(input, shape) -> Tensor

@@ -217,47 +135,46 @@ depend on the copying vs. viewing behavior.

See torch_Tensor.view on when it is possible to return a view.

A single dimension may be -1, in which case it's inferred from the remaining dimensions and the number of elements in input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_arange(0, 3)
-torch_reshape(a, list(2, 2))
-b <- torch_tensor(matrix(c(0, 1, 2, 3), ncol = 2, byrow=TRUE))
-torch_reshape(b, list(-1))
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  2
-#>  3
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_arange(0, 3)
+torch_reshape(a, list(2, 2))
+b <- torch_tensor(matrix(c(0, 1, 2, 3), ncol = 2, byrow=TRUE))
+torch_reshape(b, list(-1))
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  2
+#>  3
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_result_type.html b/dev/reference/torch_result_type.html index 36370115b1abe20839d9c2ce957c618f049ffcf5..c63ab75eca018380e20d74d90ac539b23245469e 100644 --- a/dev/reference/torch_result_type.html +++ b/dev/reference/torch_result_type.html @@ -1,79 +1,18 @@ - - - - - - - -Result_type — torch_result_type • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Result_type — torch_result_type • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,62 +111,57 @@

Result_type

-
torch_result_type(tensor1, tensor2)
- -

Arguments

- - - - - - - - - - -
tensor1

(Tensor or Number) an input tensor or number

tensor2

(Tensor or Number) an input tensor or number

- -

result_type(tensor1, tensor2) -> dtype

+
+
torch_result_type(tensor1, tensor2)
+
+
+

Arguments

+
tensor1
+

(Tensor or Number) an input tensor or number

+
tensor2
+

(Tensor or Number) an input tensor or number

+
+
+

result_type(tensor1, tensor2) -> dtype

Returns the torch_dtype that would result from performing an arithmetic operation on the provided input tensors. See type promotion documentation for more information on the type promotion logic.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_result_type(tensor1 = torch_tensor(c(1, 2), dtype=torch_int()), tensor2 = 1)
-}
-#> torch_Float
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_result_type(tensor1 = torch_tensor(c(1, 2), dtype=torch_int()), tensor2 = 1)
+}
+#> torch_Float
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rfft.html b/dev/reference/torch_rfft.html deleted file mode 100644 index 942f46fb698a113215bbecf048b6e6e588c67c75..0000000000000000000000000000000000000000 --- a/dev/reference/torch_rfft.html +++ /dev/null @@ -1,322 +0,0 @@ - - - - - - - - -Rfft — torch_rfft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - - -
- -
-
- - -
-

Rfft

-
- -
torch_rfft(self, signal_ndim, normalized = FALSE, onesided = TRUE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor of at least signal_ndim dimensions

signal_ndim

(int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

normalized

(bool, optional) controls whether to return normalized results. Default: FALSE

onesided

(bool, optional) controls whether to return half of results to avoid redundancy. Default: TRUE

- -

Note

- - -
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
-repeatedly running FFT methods on tensors of same geometry with same
-configuration. See cufft-plan-cache for more details on how to
-monitor and control the cache.
-
- -

rfft(input, signal_ndim, normalized=False, onesided=TRUE) -> Tensor

- - - - -

Real-to-complex Discrete Fourier Transform

-

This method computes the real-to-complex discrete Fourier transform. It is -mathematically equivalent with torch_fft with differences only in -formats of the input and output.

-

This method supports 1D, 2D and 3D real-to-complex transforms, indicated -by signal_ndim. input must be a tensor with at least -signal_ndim dimensions with optionally arbitrary number of leading batch -dimensions. If normalized is set to TRUE, this normalizes the result -by dividing it with \(\sqrt{\prod_{i=1}^K N_i}\) so that the operator is -unitary, where \(N_i\) is the size of signal dimension \(i\).

-

The real-to-complex Fourier transform results follow conjugate symmetry:

-

$$ - X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d], -$$ -where the index arithmetic is computed modulus the size of the corresponding -dimension, \(\ ^*\) is the conjugate operator, and -\(d\) = signal_ndim. onesided flag controls whether to avoid -redundancy in the output results. If set to TRUE (default), the output will -not be full complex result of shape \((*, 2)\), where \(*\) is the shape -of input, but instead the last dimension will be halfed as of size -\(\lfloor \frac{N_d}{2} \rfloor + 1\).

-

The inverse of this function is torch_irfft.

-

Warning

- - - -

For CPU tensors, this method is currently only available with MKL. Use -torch_backends.mkl.is_available to check if MKL is installed.

- -

Examples

-
if (torch_is_installed()) { - -x = torch_randn(c(5, 5)) -torch_rfft(x, 2) -torch_rfft(x, 2, onesided=FALSE) -} -
#> torch_tensor -#> (1,.,.) = -#> 9.3451 0.0000 -#> -3.6111 -1.2856 -#> -3.1741 0.6457 -#> -3.1741 -0.6457 -#> -3.6111 1.2856 -#> -#> (2,.,.) = -#> 0.7080 -5.3827 -#> -3.8377 3.3674 -#> -1.1930 -2.8132 -#> -0.3320 1.2334 -#> 4.8354 -5.1235 -#> -#> (3,.,.) = -#> -1.9403 0.5345 -#> -6.7969 0.1849 -#> -4.9812 1.6795 -#> -0.8296 3.4459 -#> -1.4997 -4.3439 -#> -#> (4,.,.) = -#> -1.9403 -0.5345 -#> -1.4997 4.3439 -#> -0.8296 -3.4459 -#> -4.9812 -1.6795 -#> -6.7969 -0.1849 -#> -#> (5,.,.) = -#> 0.7080 5.3827 -#> ... [the output was truncated (use n=-1 to disable)] -#> [ CPUFloatType{5,5,2} ]
-
- -
- - -
- - -
-

Site built with pkgdown 1.6.1.

-
- -
-
- - - - - - - - diff --git a/dev/reference/torch_roll.html b/dev/reference/torch_roll.html index 90cf0752d2a204af129dda12282fd8eb38e667bd..181d82249cfcb698342e4a1b64484aa67a08a04b 100644 --- a/dev/reference/torch_roll.html +++ b/dev/reference/torch_roll.html @@ -1,79 +1,18 @@ - - - - - - - -Roll — torch_roll • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Roll — torch_roll • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_roll(self, shifts, dims = list())
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

shifts

(int or tuple of ints) The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value

dims

(int or tuple of ints) Axis along which to roll

- -

roll(input, shifts, dims=NULL) -> Tensor

+
+
torch_roll(self, shifts, dims = list())
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
shifts
+

(int or tuple of ints) The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value

+
dims
+

(int or tuple of ints) Axis along which to roll

+
+
+

roll(input, shifts, dims=NULL) -> Tensor

@@ -217,48 +133,47 @@ last position are re-introduced at the first position. If a dimension is not specified, the tensor will be flattened before rolling and then restored to the original shape.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_tensor(c(1, 2, 3, 4, 5, 6, 7, 8))$view(c(4, 2))
-x
-torch_roll(x, 1, 1)
-torch_roll(x, -1, 1)
-torch_roll(x, shifts=list(2, 1), dims=list(1, 2))
-}
-#> torch_tensor
-#>  6  5
-#>  8  7
-#>  2  1
-#>  4  3
-#> [ CPUFloatType{4,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_tensor(c(1, 2, 3, 4, 5, 6, 7, 8))$view(c(4, 2))
+x
+torch_roll(x, 1, 1)
+torch_roll(x, -1, 1)
+torch_roll(x, shifts=list(2, 1), dims=list(1, 2))
+}
+#> torch_tensor
+#>  6  5
+#>  8  7
+#>  2  1
+#>  4  3
+#> [ CPUFloatType{4,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rot90.html b/dev/reference/torch_rot90.html index bca4810a8f871f7cbf76e3fa6d38216ba3648ddd..08581800694112868d9116eeebb785bd6aba1435 100644 --- a/dev/reference/torch_rot90.html +++ b/dev/reference/torch_rot90.html @@ -1,79 +1,18 @@ - - - - - - - -Rot90 — torch_rot90 • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rot90 — torch_rot90 • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_rot90(self, k = 1L, dims = c(0, 1))
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

k

(int) number of times to rotate

dims

(a list or tuple) axis to rotate

- -

rot90(input, k, dims) -> Tensor

+
+
torch_rot90(self, k = 1L, dims = c(0, 1))
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
k
+

(int) number of times to rotate

+
dims
+

(a list or tuple) axis to rotate

+
+
+

rot90(input, k, dims) -> Tensor

Rotate an n-D tensor by 90 degrees in the plane specified by dims axis. Rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(1, 4)$view(c(2, 2))
-x
-torch_rot90(x, 1, c(1, 2))
-x <- torch_arange(1, 8)$view(c(2, 2, 2))
-x
-torch_rot90(x, 1, c(1, 2))
-}
-#> torch_tensor
-#> (1,.,.) = 
-#>   3  4
-#>   7  8
-#> 
-#> (2,.,.) = 
-#>   1  2
-#>   5  6
-#> [ CPUFloatType{2,2,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(1, 4)$view(c(2, 2))
+x
+torch_rot90(x, 1, c(1, 2))
+x <- torch_arange(1, 8)$view(c(2, 2, 2))
+x
+torch_rot90(x, 1, c(1, 2))
+}
+#> torch_tensor
+#> (1,.,.) = 
+#>   3  4
+#>   7  8
+#> 
+#> (2,.,.) = 
+#>   1  2
+#>   5  6
+#> [ CPUFloatType{2,2,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_round.html b/dev/reference/torch_round.html index 44f481071a965fdf8e13542b3b11e03fcf7e25e2..e548f2806e0aba966a7ce0e21f161f0a6f348378 100644 --- a/dev/reference/torch_round.html +++ b/dev/reference/torch_round.html @@ -1,79 +1,18 @@ - - - - - - - -Round — torch_round • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Round — torch_round • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_round(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

round(input, out=NULL) -> Tensor

+
+
torch_round(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

round(input, out=NULL) -> Tensor

Returns a new tensor with each of the elements of input rounded to the closest integer.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_round(a)
-}
-#> torch_tensor
-#> -0
-#> -1
-#> -1
-#> -1
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_round(a)
+}
+#> torch_tensor
+#>  1
+#> -0
+#>  0
+#> -1
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_rrelu_.html b/dev/reference/torch_rrelu_.html index 12930fcf027fe2648fc5270c052f4e4604f3145c..316ceca2523e51e7f93c5617230e8e210c033489 100644 --- a/dev/reference/torch_rrelu_.html +++ b/dev/reference/torch_rrelu_.html @@ -1,79 +1,18 @@ - - - - - - - -Rrelu_ — torch_rrelu_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rrelu_ — torch_rrelu_ • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_rrelu_(
-  self,
-  lower = 0.125,
-  upper = 0.333333,
-  training = FALSE,
-  generator = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

the input tensor

lower

lower bound of the uniform distribution. Default: 1/8

upper

upper bound of the uniform distribution. Default: 1/3

training

bool wether it's a training pass. DEfault: FALSE

generator

random number generator

- -

rrelu_(input, lower=1./8, upper=1./3, training=False) -> Tensor

+
+
torch_rrelu_(
+  self,
+  lower = 0.125,
+  upper = 0.333333,
+  training = FALSE,
+  generator = NULL
+)
+
+
+

Arguments

+
self
+

the input tensor

+
lower
+

lower bound of the uniform distribution. Default: 1/8

+
upper
+

upper bound of the uniform distribution. Default: 1/3

+
training
+

bool whether it's a training pass. Default: FALSE

+
generator
+

random number generator

+
+
+

rrelu_(input, lower=1./8, upper=1./3, training=False) -> Tensor

In-place version of torch_rrelu.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_rsqrt.html b/dev/reference/torch_rsqrt.html index 5f5cbed9bb69c0987ee42b2ee5609e3fa55ab07e..28736cda599c3c622f4ac57fe4d120def7868d98 100644 --- a/dev/reference/torch_rsqrt.html +++ b/dev/reference/torch_rsqrt.html @@ -1,79 +1,18 @@ - - - - - - - -Rsqrt — torch_rsqrt • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Rsqrt — torch_rsqrt • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_rsqrt(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

rsqrt(input, out=NULL) -> Tensor

+
+
torch_rsqrt(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

rsqrt(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@ the elements of input.

$$ \mbox{out}_{i} = \frac{1}{\sqrt{\mbox{input}_{i}}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_rsqrt(a)
-}
-#> torch_tensor
-#>     nan
-#>  2.4017
-#>     nan
-#>  1.3152
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_rsqrt(a)
+}
+#> torch_tensor
+#>     nan
+#>     nan
+#>  1.4510
+#>  0.7661
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_save.html b/dev/reference/torch_save.html index 80569f161208c2bb1f715c8615f4aa4b26a6d5e4..448dfb14a298c9b84f2963051b3e565eaeaff8b3 100644 --- a/dev/reference/torch_save.html +++ b/dev/reference/torch_save.html @@ -1,80 +1,19 @@ - - - - - - - -Saves an object to a disk file. — torch_save • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Saves an object to a disk file. — torch_save • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,55 +113,46 @@ term storage." /> term storage.

-
torch_save(obj, path, ...)
- -

Arguments

- - - - - - - - - - - - - - -
obj

the saved object

path

a connection or the name of the file to save.

...

not currently used.

- -

See also

- -

Other torch_save: -torch_load()

+
+
torch_save(obj, path, ...)
+
+ +
+

Arguments

+
obj
+

the saved object

+
path
+

a connection or the name of the file to save.

+
...
+

not currently used.

+
+
+

See also

+

Other torch_save: +torch_load()

+
+
-
- +
- - + + diff --git a/dev/reference/torch_scalar_tensor.html b/dev/reference/torch_scalar_tensor.html index 8fe034a5d7a474a42a702f050e2c6c43eb3b6c18..9a5bac52dfa3ba6e730fe12277245548f3246e63 100644 --- a/dev/reference/torch_scalar_tensor.html +++ b/dev/reference/torch_scalar_tensor.html @@ -1,79 +1,18 @@ - - - - - - - -Scalar tensor — torch_scalar_tensor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Scalar tensor — torch_scalar_tensor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,55 +111,43 @@

Creates a singleton dimension tensor.

-
torch_scalar_tensor(value, dtype = NULL, device = NULL, requires_grad = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
value

the value you want to use

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
torch_scalar_tensor(value, dtype = NULL, device = NULL, requires_grad = FALSE)
+
+
+

Arguments

+
value
+

the value you want to use

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_searchsorted.html b/dev/reference/torch_searchsorted.html index 15428a5ca6664c0459a65f1f670332c1bec71979..d8c6005526a209048851e3765384bbe5a34c9762 100644 --- a/dev/reference/torch_searchsorted.html +++ b/dev/reference/torch_searchsorted.html @@ -1,79 +1,18 @@ - - - - - - - -Searchsorted — torch_searchsorted • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Searchsorted — torch_searchsorted • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,39 +111,31 @@

Searchsorted

-
torch_searchsorted(sorted_sequence, self, out_int32 = FALSE, right = FALSE)
+
+
torch_searchsorted(sorted_sequence, self, out_int32 = FALSE, right = FALSE)
+
-

Arguments

- - - - - - - - - - - - - - - - - - -
sorted_sequence

(Tensor) N-D or 1-D tensor, containing monotonically increasing -sequence on the innermost dimension.

self

(Tensor or Scalar) N-D tensor or a Scalar containing the search value(s).

out_int32

(bool, optional) – indicate the output data type. torch_int32() -if True, torch_int64() otherwise. Default value is FALSE, i.e. default output -data type is torch_int64().

right

(bool, optional) – if False, return the first suitable location +

+

Arguments

+
sorted_sequence
+

(Tensor) N-D or 1-D tensor, containing monotonically increasing +sequence on the innermost dimension.

+
self
+

(Tensor or Scalar) N-D tensor or a Scalar containing the search value(s).

+
out_int32
+

(bool, optional) – indicate the output data type. torch_int32() +if True, torch_int64() otherwise. Default value is FALSE, i.e. default output +data type is torch_int64().

+
right
+

(bool, optional) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index found, return 0 for non-numerical value (e.g. nan, inf) or the size of boundaries (one past the last index). In other words, if False, gets the lower bound index for each value in input from boundaries. If True, gets the upper bound index -instead. Default value is False.

- -

searchsorted(sorted_sequence, values, *, out_int32=FALSE, right=FALSE, out=None) -> Tensor

- +instead. Default value is False.

+
+
+

searchsorted(sorted_sequence, values, *, out_int32=FALSE, right=FALSE, out=None) -> Tensor

@@ -230,50 +144,49 @@ corresponding values in values were inserted before the indices, th corresponding innermost dimension within sorted_sequence would be preserved. Return a new tensor with the same size as values. If right is FALSE (default), then the left boundary of sorted_sequence is closed.

+
-

Examples

-
if (torch_is_installed()) {
-
-sorted_sequence <- torch_tensor(rbind(c(1, 3, 5, 7, 9), c(2, 4, 6, 8, 10)))
-sorted_sequence
-values <- torch_tensor(rbind(c(3, 6, 9), c(3, 6, 9)))
-values
-torch_searchsorted(sorted_sequence, values)
-torch_searchsorted(sorted_sequence, values, right=TRUE)
-sorted_sequence_1d <- torch_tensor(c(1, 3, 5, 7, 9))
-sorted_sequence_1d
-torch_searchsorted(sorted_sequence_1d, values)
-}
-#> torch_tensor
-#>  1  3  4
-#>  1  3  4
-#> [ CPULongType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+sorted_sequence <- torch_tensor(rbind(c(1, 3, 5, 7, 9), c(2, 4, 6, 8, 10)))
+sorted_sequence
+values <- torch_tensor(rbind(c(3, 6, 9), c(3, 6, 9)))
+values
+torch_searchsorted(sorted_sequence, values)
+torch_searchsorted(sorted_sequence, values, right=TRUE)
+sorted_sequence_1d <- torch_tensor(c(1, 3, 5, 7, 9))
+sorted_sequence_1d
+torch_searchsorted(sorted_sequence_1d, values)
+}
+#> torch_tensor
+#>  1  3  4
+#>  1  3  4
+#> [ CPULongType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_selu.html b/dev/reference/torch_selu.html index 9a0ebfa0baae8180491f3766dedf588f8a7e0188..d7fe86bb9f5ca03084683b392bff00d3f7f5e27f 100644 --- a/dev/reference/torch_selu.html +++ b/dev/reference/torch_selu.html @@ -1,79 +1,18 @@ - - - - - - - -Selu — torch_selu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Selu — torch_selu • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,49 +111,44 @@

Selu

-
torch_selu(self)
- -

Arguments

- - - - - - -
self

the input tensor

- -

selu(input) -> Tensor

+
+
torch_selu(self)
+
+
+

Arguments

+
self
+

the input tensor

+
+
+

selu(input) -> Tensor

Computes the selu transformation.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_selu_.html b/dev/reference/torch_selu_.html index 035fdadac0ab5e2828b20fffc76becb1ac3eb27a..7c090fcdff16fbcf3e88161f1213b30c92f131c1 100644 --- a/dev/reference/torch_selu_.html +++ b/dev/reference/torch_selu_.html @@ -1,79 +1,18 @@ - - - - - - - -Selu_ — torch_selu_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Selu_ — torch_selu_ • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_selu_(self)
- -

Arguments

- - - - - - -
self

the input tensor

- -

selu_(input) -> Tensor

+
+
torch_selu_(self)
+
+
+

Arguments

+
self
+

the input tensor

+
+
+

selu_(input) -> Tensor

-

In-place version of torch_selu().

+

In-place version of torch_selu().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_sgn.html b/dev/reference/torch_sgn.html index 256058e2f87968240cc70d3ff74492020e19dfe2..689c77e827a1b934c6bfadb505976c1b2fb9779e 100644 --- a/dev/reference/torch_sgn.html +++ b/dev/reference/torch_sgn.html @@ -1,79 +1,18 @@ - - - - - - - -Sgn — torch_sgn • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sgn — torch_sgn • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sgn(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sgn(input, *, out=None) -> Tensor

+
+
torch_sgn(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sgn(input, *, out=None) -> Tensor

For complex tensors, this function returns a new tensor whose elements have the same angle as that of the elements of input and absolute value 1. For a non-complex tensor, this function -returns the signs of the elements of input (see torch_sign).

+returns the signs of the elements of input (see torch_sign).

\(\mbox{out}_{i} = 0\), if \(|{\mbox{{input}}_i}| == 0\) \(\mbox{out}_{i} = \frac{{\mbox{{input}}_i}}{|{\mbox{{input}}_i}|}\), otherwise

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-x <- torch_tensor(c(3+4i, 7-24i, 0, 1+2i))
-x$sgn()
-torch_sgn(x)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+x <- torch_tensor(c(3+4i, 7-24i, 0, 1+2i))
+x$sgn()
+torch_sgn(x)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sigmoid.html b/dev/reference/torch_sigmoid.html index 5e3dcb894baff5ff4dc7d9b970acf06bc4f432f0..fc2cde3354207e05a5a850fe947f5fd909bc608f 100644 --- a/dev/reference/torch_sigmoid.html +++ b/dev/reference/torch_sigmoid.html @@ -1,79 +1,18 @@ - - - - - - - -Sigmoid — torch_sigmoid • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sigmoid — torch_sigmoid • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sigmoid(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sigmoid(input, out=NULL) -> Tensor

+
+
torch_sigmoid(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sigmoid(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \frac{1}{1 + e^{-\mbox{input}_{i}}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_sigmoid(a)
-}
-#> torch_tensor
-#>  0.1341
-#>  0.3821
-#>  0.4059
-#>  0.3504
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_sigmoid(a)
+}
+#> torch_tensor
+#>  0.6978
+#>  0.3755
+#>  0.8494
+#>  0.5212
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sign.html b/dev/reference/torch_sign.html index 121fa5713f7cd5359a0d07f1287cdcac660f369b..217b44a71b3fbe3fe819bd30684c19d46ed7961f 100644 --- a/dev/reference/torch_sign.html +++ b/dev/reference/torch_sign.html @@ -1,79 +1,18 @@ - - - - - - - -Sign — torch_sign • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sign — torch_sign • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sign(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sign(input, out=NULL) -> Tensor

+
+
torch_sign(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sign(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \mbox{sgn}(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_tensor(c(0.7, -1.2, 0., 2.3))
-a
-torch_sign(a)
-}
-#> torch_tensor
-#>  1
-#> -1
-#>  0
-#>  1
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_tensor(c(0.7, -1.2, 0., 2.3))
+a
+torch_sign(a)
+}
+#> torch_tensor
+#>  1
+#> -1
+#>  0
+#>  1
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_signbit.html b/dev/reference/torch_signbit.html index b4ce59cf55997c67958cbea0d8fa8877210f398b..c229cec5ac03cd429b4a609857c3d9967e16b157 100644 --- a/dev/reference/torch_signbit.html +++ b/dev/reference/torch_signbit.html @@ -1,79 +1,18 @@ - - - - - - - -Signbit — torch_signbit • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Signbit — torch_signbit • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_signbit(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

signbit(input, *, out=None) -> Tensor

+
+
torch_signbit(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

signbit(input, *, out=None) -> Tensor

Tests if each element of input has its sign bit set (is less than zero) or not.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(0.7, -1.2, 0., 2.3))
-torch_signbit(a)
-}
-#> torch_tensor
-#>  0
-#>  1
-#>  0
-#>  0
-#> [ CPUBoolType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(0.7, -1.2, 0., 2.3))
+torch_signbit(a)
+}
+#> torch_tensor
+#>  0
+#>  1
+#>  0
+#>  0
+#> [ CPUBoolType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sin.html b/dev/reference/torch_sin.html index a7ec3c0fe590d8413aa2a5d92fc04c0dcdfc0cc2..3f77b48a3e1f5add0dde818e9cd33310b8704677 100644 --- a/dev/reference/torch_sin.html +++ b/dev/reference/torch_sin.html @@ -1,79 +1,18 @@ - - - - - - - -Sin — torch_sin • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sin — torch_sin • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sin(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sin(input, out=NULL) -> Tensor

+
+
torch_sin(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sin(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \sin(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_sin(a)
-}
-#> torch_tensor
-#> -0.3901
-#>  0.8413
-#>  0.8825
-#>  0.8864
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_sin(a)
+}
+#> torch_tensor
+#> -0.0434
+#>  0.1401
+#> -0.1475
+#>  0.2083
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sinh.html b/dev/reference/torch_sinh.html index 96c7353d0693126b6027b3f719e7436eb541a0f4..301053d467e34f65e70e4d43adc246ae8fb94366 100644 --- a/dev/reference/torch_sinh.html +++ b/dev/reference/torch_sinh.html @@ -1,79 +1,18 @@ - - - - - - - -Sinh — torch_sinh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sinh — torch_sinh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sinh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sinh(input, out=NULL) -> Tensor

+
+
torch_sinh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sinh(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@

$$ \mbox{out}_{i} = \sinh(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_sinh(a)
-}
-#> torch_tensor
-#> -0.7701
-#>  3.5733
-#>  0.1716
-#> -0.7064
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_sinh(a)
+}
+#> torch_tensor
+#>  0.0107
+#>  0.8594
+#> -2.5189
+#>  0.1451
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_slogdet.html b/dev/reference/torch_slogdet.html index 09b12000fdc7f7a58db1636602b67ff3cd672f5d..85d7808bdf577cadfb01f9e48450a59bac6fc0bd 100644 --- a/dev/reference/torch_slogdet.html +++ b/dev/reference/torch_slogdet.html @@ -1,79 +1,18 @@ - - - - - - - -Slogdet — torch_slogdet • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Slogdet — torch_slogdet • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_slogdet(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

- -

Note

+
+
torch_slogdet(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size (*, n, n) where * is zero or more batch dimensions.

+
+
+

Note

-
If `input` has zero determinant, this returns `(0, -inf)`.
-
+
If `input` has zero determinant, this returns `(0, -inf)`.
+
-
Backward through `slogdet` internally uses SVD results when `input`
+
Backward through `slogdet` internally uses SVD results when `input`
 is not invertible. In this case, double backward through `slogdet`
 will be unstable when `input` doesn't have distinct singular values.
 See `~torch.svd` for details.
-
- -

slogdet(input) -> (Tensor, Tensor)

+
+
+
+

slogdet(input) -> (Tensor, Tensor)

Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices.

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_randn(c(3, 3))
-A
-torch_det(A)
-torch_logdet(A)
-torch_slogdet(A)
-}
-#> [[1]]
-#> torch_tensor
-#> -1
-#> [ CPUFloatType{} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> -1.39026
-#> [ CPUFloatType{} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_randn(c(3, 3))
+A
+torch_det(A)
+torch_logdet(A)
+torch_slogdet(A)
+}
+#> [[1]]
+#> torch_tensor
+#> 1
+#> [ CPUFloatType{} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> 1.07786
+#> [ CPUFloatType{} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_solve.html b/dev/reference/torch_solve.html index 5ea4bb20ce7528d15affcae9cb2c810faf17a09d..92a54c5791fdce24c0b180051f2622664bdf0a5a 100644 --- a/dev/reference/torch_solve.html +++ b/dev/reference/torch_solve.html @@ -1,79 +1,18 @@ - - - - - - - -Solve — torch_solve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Solve — torch_solve • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_solve(self, A)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) input matrix \(B\) of size \((*, m, k)\) , where \(*\) is zero or more batch dimensions.

A

(Tensor) input square matrix of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

- -

Note

+
+
torch_solve(self, A)
+
+
+

Arguments

+
self
+

(Tensor) input matrix \(B\) of size \((*, m, k)\) , where \(*\) is zero or more batch dimensions.

+
A
+

(Tensor) input square matrix of size \((*, m, m)\), where \(*\) is zero or more batch dimensions.

+
+
+

Note

-
Irrespective of the original strides, the returned matrices
+
Irrespective of the original strides, the returned matrices
 `solution` and `LU` will be transposed, i.e. with strides like
 `B$contiguous()$transpose(-1, -2)$stride()` and
 `A$contiguous()$transpose(-1, -2)$stride()` respectively.
-
- -

solve(input, A) -> (Tensor, Tensor)

+
+
+
+

solve(input, A) -> (Tensor, Tensor)

@@ -225,59 +144,58 @@ A, in order as a namedtuple solution, LU.

torch_solve(B, A) can take in 2D inputs B, A or inputs that are batches of 2D matrices. If the inputs are batches, it returns batched outputs solution, LU.

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_tensor(rbind(c(6.80, -2.11,  5.66,  5.97,  8.23),
-                      c(-6.05, -3.30,  5.36, -4.44,  1.08),
-                      c(-0.45,  2.58, -2.70,  0.27,  9.04),
-                      c(8.32,  2.71,  4.35,  -7.17,  2.14),
-                      c(-9.67, -5.14, -7.26,  6.08, -6.87)))$t()
-B = torch_tensor(rbind(c(4.02,  6.19, -8.22, -7.57, -3.03),
-                      c(-1.56,  4.00, -8.67,  1.75,  2.86),
-                      c(9.81, -4.09, -4.57, -8.61,  8.99)))$t()
-out = torch_solve(B, A)
-X = out[[1]]
-LU = out[[2]]
-torch_dist(B, torch_mm(A, X))
-# Batched solver example
-A = torch_randn(c(2, 3, 1, 4, 4))
-B = torch_randn(c(2, 3, 1, 4, 6))
-out = torch_solve(B, A)
-X = out[[1]]
-LU = out[[2]]
-torch_dist(B, A$matmul(X))
-}
-#> torch_tensor
-#> 2.55755e-06
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_tensor(rbind(c(6.80, -2.11,  5.66,  5.97,  8.23),
+                      c(-6.05, -3.30,  5.36, -4.44,  1.08),
+                      c(-0.45,  2.58, -2.70,  0.27,  9.04),
+                      c(8.32,  2.71,  4.35,  -7.17,  2.14),
+                      c(-9.67, -5.14, -7.26,  6.08, -6.87)))$t()
+B = torch_tensor(rbind(c(4.02,  6.19, -8.22, -7.57, -3.03),
+                      c(-1.56,  4.00, -8.67,  1.75,  2.86),
+                      c(9.81, -4.09, -4.57, -8.61,  8.99)))$t()
+out = torch_solve(B, A)
+X = out[[1]]
+LU = out[[2]]
+torch_dist(B, torch_mm(A, X))
+# Batched solver example
+A = torch_randn(c(2, 3, 1, 4, 4))
+B = torch_randn(c(2, 3, 1, 4, 6))
+out = torch_solve(B, A)
+X = out[[1]]
+LU = out[[2]]
+torch_dist(B, A$matmul(X))
+}
+#> torch_tensor
+#> 2.77394e-05
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sort.html b/dev/reference/torch_sort.html index 01d0c8721db4b7dc201ba54830c519c2b0ac2080..117ea63b9f9ab7c9ba137dec9e9be0e862e2a98f 100644 --- a/dev/reference/torch_sort.html +++ b/dev/reference/torch_sort.html @@ -1,79 +1,18 @@ - - - - - - - -Sort — torch_sort • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sort — torch_sort • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sort(self, dim = -1L, descending = FALSE, stable)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int, optional) the dimension to sort along

descending

(bool, optional) controls the sorting order (ascending or descending)

stable

(bool, optional) – makes the sorting routine stable, which guarantees -that the order of equivalent elements is preserved.

- -

sort(input, dim=-1, descending=FALSE) -> (Tensor, LongTensor)

+
+
torch_sort(self, dim = -1L, descending = FALSE, stable)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int, optional) the dimension to sort along

+
descending
+

(bool, optional) controls the sorting order (ascending or descending)

+
stable
+

(bool, optional) – makes the sorting routine stable, which guarantees +that the order of equivalent elements is preserved.

+
+
+

sort(input, dim=-1, descending=FALSE) -> (Tensor, LongTensor)

@@ -226,56 +140,55 @@ order by value.

A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(3, 4))
-out = torch_sort(x)
-out
-out = torch_sort(x, 1)
-out
-}
-#> [[1]]
-#> torch_tensor
-#> -1.7070 -0.4527 -0.8545  0.1054
-#> -0.6119  0.3404  0.3345  0.6154
-#>  1.2045  1.8430  1.2498  0.6608
-#> [ CPUFloatType{3,4} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  1  2  2  0
-#>  2  1  1  1
-#>  0  0  0  2
-#> [ CPULongType{3,4} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(3, 4))
+out = torch_sort(x)
+out
+out = torch_sort(x, 1)
+out
+}
+#> [[1]]
+#> torch_tensor
+#> -1.0790 -0.9029 -0.3870 -1.4936
+#> -0.3612 -0.7831  0.1737  0.0345
+#>  0.2319  0.2723  0.9356  1.6049
+#> [ CPUFloatType{3,4} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  0  0  2  0
+#>  2  1  0  1
+#>  1  2  1  2
+#> [ CPULongType{3,4} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_sparse_coo_tensor.html b/dev/reference/torch_sparse_coo_tensor.html index f3d3d5c2916f75289f778d989317e9ea94f53498..ad44596e95a65d43fbea0757aa10b68e38256f9d 100644 --- a/dev/reference/torch_sparse_coo_tensor.html +++ b/dev/reference/torch_sparse_coo_tensor.html @@ -1,79 +1,18 @@ - - - - - - - -Sparse_coo_tensor — torch_sparse_coo_tensor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sparse_coo_tensor — torch_sparse_coo_tensor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,46 +111,34 @@

Sparse_coo_tensor

-
torch_sparse_coo_tensor(
-  indices,
-  values,
-  size = NULL,
-  dtype = NULL,
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
indices

(array_like) Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch_LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.

values

(array_like) Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

size

(list, tuple, or torch.Size, optional) Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, infers data type from values.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

sparse_coo_tensor(indices, values, size=NULL, dtype=NULL, device=NULL, requires_grad=False) -> Tensor

+
+
torch_sparse_coo_tensor(
+  indices,
+  values,
+  size = NULL,
+  dtype = NULL,
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
indices
+

(array_like) Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch_LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.

+
values
+

(array_like) Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

+
size
+

(list, tuple, or torch.Size, optional) Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, infers data type from values.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

sparse_coo_tensor(indices, values, size=NULL, dtype=NULL, device=NULL, requires_grad=False) -> Tensor

@@ -236,53 +146,52 @@ with the given values. A sparse tensor can be uncoalesced, in that case, there are duplicate coordinates in the indices, and the value at that index is the sum of all duplicate value entries: torch_sparse_.

+
-

Examples

-
if (torch_is_installed()) {
-
-i = torch_tensor(matrix(c(1, 2, 2, 3, 1, 3), ncol = 3, byrow = TRUE), dtype=torch_int64())
-v = torch_tensor(c(3, 4, 5), dtype=torch_float32())
-torch_sparse_coo_tensor(i, v)
-torch_sparse_coo_tensor(i, v, c(2, 4))
-
-# create empty sparse tensors
-S = torch_sparse_coo_tensor(
-  torch_empty(c(1, 0), dtype = torch_int64()), 
-  torch_tensor(numeric(), dtype = torch_float32()), 
-  c(1)
-)
-S = torch_sparse_coo_tensor(
-  torch_empty(c(1, 0), dtype = torch_int64()), 
-  torch_empty(c(0, 2)), 
-  c(1, 2)
-)
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+i = torch_tensor(matrix(c(1, 2, 2, 3, 1, 3), ncol = 3, byrow = TRUE), dtype=torch_int64())
+v = torch_tensor(c(3, 4, 5), dtype=torch_float32())
+torch_sparse_coo_tensor(i, v)
+torch_sparse_coo_tensor(i, v, c(2, 4))
+
+# create empty sparse tensors
+S = torch_sparse_coo_tensor(
+  torch_empty(c(1, 0), dtype = torch_int64()), 
+  torch_tensor(numeric(), dtype = torch_float32()), 
+  c(1)
+)
+S = torch_sparse_coo_tensor(
+  torch_empty(c(1, 0), dtype = torch_int64()), 
+  torch_empty(c(0, 2)), 
+  c(1, 2)
+)
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_split.html b/dev/reference/torch_split.html index 8756d01e10cb4e67182f5f3d1764560053329950..8bd0916035355385045c78bec0b5b39e6976c237 100644 --- a/dev/reference/torch_split.html +++ b/dev/reference/torch_split.html @@ -1,79 +1,18 @@ - - - - - - - -Split — torch_split • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Split — torch_split • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,28 +111,22 @@

Splits the tensor into chunks. Each chunk is a view of the original tensor.

-
torch_split(self, split_size, dim = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) tensor to split.

split_size

(int) size of a single chunk or -list of sizes for each chunk

dim

(int) dimension along which to split the tensor.

- -

Details

+
+
torch_split(self, split_size, dim = 1L)
+
+
+

Arguments

+
self
+

(Tensor) tensor to split.

+
split_size
+

(int) size of a single chunk or +list of sizes for each chunk

+
dim
+

(int) dimension along which to split the tensor.

+
+
+

Details

If split_size is an integer type, then tensor will be split into equally sized chunks (if possible). Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by @@ -218,32 +134,29 @@ the tensor size along the given dimension dim is not divisible by

If split_size is a list, then tensor will be split into length(split_size) chunks with sizes in dim according to split_size_or_sections.

+
+
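The chunking rules above can be sketched with a small example (a hedged illustration, assuming the torch R package is installed; `dim = 1` is the first dimension, since R torch uses 1-based indexing):

```r
library(torch)

x <- torch_arange(1, 10)$view(c(5, 2))  # 5 rows, 2 columns

# integer split_size: chunks of 2 rows each; the last chunk has only
# 1 row because 5 is not divisible by 2
torch_split(x, 2)

# list split_size: explicit chunk sizes along dim 1 (must sum to 5)
torch_split(x, list(1, 4))
```

Each returned chunk is a view of `x`, so modifying a chunk in place also modifies `x`.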
-
- +
- - + + diff --git a/dev/reference/torch_sqrt.html b/dev/reference/torch_sqrt.html index 7834341ebf07cab34df7899e0c536e40965f2df9..b66c7cb2aaa32e14f2a0fed9333099b319bdd6fb 100644 --- a/dev/reference/torch_sqrt.html +++ b/dev/reference/torch_sqrt.html @@ -1,79 +1,18 @@ - - - - - - - -Sqrt — torch_sqrt • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sqrt — torch_sqrt • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sqrt(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

sqrt(input, out=NULL) -> Tensor

+
+
torch_sqrt(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

sqrt(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \sqrt{\mbox{input}_{i}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_sqrt(a)
-}
-#> torch_tensor
-#>     nan
-#>     nan
-#>     nan
-#>  0.8274
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_sqrt(a)
+}
+#> torch_tensor
+#>  0.5666
+#>  0.8515
+#>     nan
+#>  1.1684
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_square.html b/dev/reference/torch_square.html index 6994c31e11d3d39e8a430d32c93559531b8a22ee..9dce6e3b61cf5b5f54d960650351231e1b027cf9 100644 --- a/dev/reference/torch_square.html +++ b/dev/reference/torch_square.html @@ -1,79 +1,18 @@ - - - - - - - -Square — torch_square • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Square — torch_square • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_square(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

square(input, out=NULL) -> Tensor

+
+
torch_square(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

square(input, out=NULL) -> Tensor

Returns a new tensor with the square of the elements of input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_square(a)
-}
-#> torch_tensor
-#>  1.6439
-#>  1.9595
-#>  0.4154
-#>  0.1860
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_square(a)
+}
+#> torch_tensor
+#>  1.2311
+#>  0.2380
+#>  0.1663
+#>  2.7731
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_squeeze.html b/dev/reference/torch_squeeze.html index c825ad511ac77adfb93bacf67c84540a31aaaa6d..5580b83743af1ac22bfe38074b3104c7d5381ccd 100644 --- a/dev/reference/torch_squeeze.html +++ b/dev/reference/torch_squeeze.html @@ -1,79 +1,18 @@ - - - - - - - -Squeeze — torch_squeeze • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Squeeze — torch_squeeze • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_squeeze(self, dim)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int, optional) if given, the input will be squeezed only in this dimension

- -

Note

+
+
torch_squeeze(self, dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int, optional) if given, the input will be squeezed only in this dimension

+
+
+

Note

The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other.

-

squeeze(input, dim=NULL, out=NULL) -> Tensor

- +
+
+

squeeze(input, dim=NULL, out=NULL) -> Tensor

@@ -221,58 +140,57 @@ will be of shape: \((A \times B \times C \times D)\).

dimension. If input is of shape: \((A \times 1 \times B)\), squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape \((A \times B)\).

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_zeros(c(2, 1, 2, 1, 2))
-x
-y = torch_squeeze(x)
-y
-y = torch_squeeze(x, 1)
-y
-y = torch_squeeze(x, 2)
-y
-}
-#> torch_tensor
-#> (1,1,.,.) = 
-#>   0  0
-#> 
-#> (2,1,.,.) = 
-#>   0  0
-#> 
-#> (1,2,.,.) = 
-#>   0  0
-#> 
-#> (2,2,.,.) = 
-#>   0  0
-#> [ CPUFloatType{2,2,1,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_zeros(c(2, 1, 2, 1, 2))
+x
+y = torch_squeeze(x)
+y
+y = torch_squeeze(x, 1)
+y
+y = torch_squeeze(x, 2)
+y
+}
+#> torch_tensor
+#> (1,1,.,.) = 
+#>   0  0
+#> 
+#> (2,1,.,.) = 
+#>   0  0
+#> 
+#> (1,2,.,.) = 
+#>   0  0
+#> 
+#> (2,2,.,.) = 
+#>   0  0
+#> [ CPUFloatType{2,2,1,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_stack.html b/dev/reference/torch_stack.html index 2777f0d7af86dcf1d31ff063afe1a7438fc9a83d..8d7724136ba927f4df649d72d8f006bac76266df 100644 --- a/dev/reference/torch_stack.html +++ b/dev/reference/torch_stack.html @@ -1,79 +1,18 @@ - - - - - - - -Stack — torch_stack • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Stack — torch_stack • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_stack(tensors, dim = 1L)
- -

Arguments

- - - - - - - - - - -
tensors

(sequence of Tensors) sequence of tensors to concatenate

dim

(int) dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive)

- -

stack(tensors, dim=0, out=NULL) -> Tensor

+
+
torch_stack(tensors, dim = 1L)
+
+
+

Arguments

+
tensors
+

(sequence of Tensors) sequence of tensors to concatenate

+
dim
+

(int) dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive)

+
+
+

stack(tensors, dim=0, out=NULL) -> Tensor

Concatenates sequence of tensors along a new dimension.

All tensors need to be of the same size.

+
+
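As a hedged sketch (assuming torch is installed), stacking two 2x3 tensors along a new first dimension yields a 2x2x3 tensor:

```r
library(torch)

a <- torch_ones(c(2, 3))
b <- torch_zeros(c(2, 3))

s <- torch_stack(list(a, b), dim = 1)
s$shape  # 2 2 3: a new dimension was inserted at position 1
```

Unlike `torch_cat()`, which joins tensors along an existing dimension, `torch_stack()` always adds a new one.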
-
- +
- - + + diff --git a/dev/reference/torch_std.html b/dev/reference/torch_std.html index ef9aefa51749a48bc75a194e6c67b7d0943ace8d..cdd2074971e2b99a87f06faba395cfb783616c5c 100644 --- a/dev/reference/torch_std.html +++ b/dev/reference/torch_std.html @@ -1,79 +1,18 @@ - - - - - - - -Std — torch_std • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Std — torch_std • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_std(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

correction

The type of correction.

unbiased

(bool) whether to use the unbiased estimation or not

keepdim

(bool) whether the output tensor has dim retained or not.

- -

std(input, unbiased=TRUE) -> Tensor

+
+
torch_std(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
correction
+

The type of correction.

+
unbiased
+

(bool) whether to use the unbiased estimation or not

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

std(input, unbiased=TRUE) -> Tensor

Returns the standard-deviation of all elements in the input tensor.

If unbiased is FALSE, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

-

std(input, dim, unbiased=TRUE, keepdim=False, out=NULL) -> Tensor

- +
+
+

std(input, dim, unbiased=TRUE, keepdim=False, out=NULL) -> Tensor

@@ -234,55 +147,54 @@ dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is FALSE, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_std(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_std(a, dim=1)
-}
-#> torch_tensor
-#>  1.0154
-#>  0.6545
-#>  1.1549
-#>  0.7787
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_std(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_std(a, dim=1)
+}
+#> torch_tensor
+#>  0.5710
+#>  1.0620
+#>  1.0028
+#>  0.3510
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_std_mean.html b/dev/reference/torch_std_mean.html index c532f68072fe63dd7997c1a48d4f8bd8917e88a1..d13aeac75eee88135408361c7c47abd00ee93a2a 100644 --- a/dev/reference/torch_std_mean.html +++ b/dev/reference/torch_std_mean.html @@ -1,79 +1,18 @@ - - - - - - - -Std_mean — torch_std_mean • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Std_mean — torch_std_mean • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,43 +111,34 @@

Std_mean

-
torch_std_mean(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

correction

The type of correction.

unbiased

(bool) whether to use the unbiased estimation or not

keepdim

(bool) whether the output tensor has dim retained or not.

- -

std_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

+
+
torch_std_mean(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
correction
+

The type of correction.

+
unbiased
+

(bool) whether to use the unbiased estimation or not

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

std_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

Returns the standard-deviation and mean of all elements in the input tensor.

If unbiased is FALSE, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

-

std_mean(input, dim, unbiased=TRUE, keepdim=False) -> (Tensor, Tensor)

- +
+
+

std_mean(input, dim, unbiased=TRUE, keepdim=False) -> (Tensor, Tensor)

@@ -234,65 +147,64 @@ dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is FALSE, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_std_mean(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_std_mean(a, 1)
-}
-#> [[1]]
-#> torch_tensor
-#>  0.5788
-#>  0.6651
-#>  1.1436
-#>  1.0714
-#> [ CPUFloatType{4} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> -0.1838
-#> -0.1379
-#> -0.9362
-#> -1.0955
-#> [ CPUFloatType{4} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_std_mean(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_std_mean(a, 1)
+}
+#> [[1]]
+#> torch_tensor
+#>  0.5861
+#>  0.4865
+#>  1.2701
+#>  1.2972
+#> [ CPUFloatType{4} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  0.1825
+#> -0.3469
+#> -0.8437
+#>  0.2128
+#> [ CPUFloatType{4} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_stft.html b/dev/reference/torch_stft.html index 12d5948794fd50bb055e384fe7f1381e21034ae9..3c22d52b01db36bcb7d1e73db0f5512c43bdbf2d 100644 --- a/dev/reference/torch_stft.html +++ b/dev/reference/torch_stft.html @@ -1,79 +1,18 @@ - - - - - - - -Stft — torch_stft • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Stft — torch_stft • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_stft(
-  input,
-  n_fft,
-  hop_length = NULL,
-  win_length = NULL,
-  window = NULL,
-  center = TRUE,
-  pad_mode = "reflect",
-  normalized = FALSE,
-  onesided = TRUE,
-  return_complex = NULL
-)
+
+
torch_stft(
+  input,
+  n_fft,
+  hop_length = NULL,
+  win_length = NULL,
+  window = NULL,
+  center = TRUE,
+  pad_mode = "reflect",
+  normalized = FALSE,
+  onesided = TRUE,
+  return_complex = NULL
+)
+
-

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the input tensor

n_fft

(int) size of Fourier transform

hop_length

(int, optional) the distance between neighboring sliding window -frames. Default: NULL (treated as equal to floor(n_fft / 4))

win_length

(int, optional) the size of window frame and STFT filter. -Default: NULL (treated as equal to n_fft)

window

(Tensor, optional) the optional window function. -Default: NULL (treated as window of all \(1\) s)

center

(bool, optional) whether to pad input on both sides so +

+

Arguments

+
input
+

(Tensor) the input tensor

+
n_fft
+

(int) size of Fourier transform

+
hop_length
+

(int, optional) the distance between neighboring sliding window +frames. Default: NULL (treated as equal to floor(n_fft / 4))

+
win_length
+

(int, optional) the size of window frame and STFT filter. +Default: NULL (treated as equal to n_fft)

+
window
+

(Tensor, optional) the optional window function. +Default: NULL (treated as window of all \(1\) s)

+
center
+

(bool, optional) whether to pad input on both sides so that the \(t\)-th frame is centered at time \(t \times \mbox{hop\_length}\). -Default: TRUE

pad_mode

(string, optional) controls the padding method used when -center is TRUE. Default: "reflect"

normalized

(bool, optional) controls whether to return the normalized -STFT results Default: FALSE

onesided

(bool, optional) controls whether to return half of results to -avoid redundancy Default: TRUE

return_complex

(bool, optional) controls whether to return complex tensors -or not.

- -

Short-time Fourier transform (STFT).

- +Default: TRUE

+
pad_mode
+

(string, optional) controls the padding method used when +center is TRUE. Default: "reflect"

+
normalized
+

(bool, optional) controls whether to return the normalized +STFT results Default: FALSE

+
onesided
+

(bool, optional) controls whether to return half of results to +avoid redundancy Default: TRUE

+
return_complex
+

(bool, optional) controls whether to return complex tensors +or not.

+
+
+

Short-time Fourier transform (STFT).

-

Short-time Fourier transform (STFT).

Ignoring the optional batch dimension, this method computes the following
+

Short-time Fourier transform (STFT).

Ignoring the optional batch dimension, this method computes the following
 expression:
-
+

$$ X[m, \omega] = \sum_{k = 0}^{\mbox{win\_length-1}}% @@ -272,7 +174,7 @@ expression: $$ where \(m\) is the index of the sliding window, and \(\omega\) is the frequency that \(0 \leq \omega < \mbox{n\_fft}\). When -onesided is the default value TRUE,

* `input` must be either a 1-D time sequence or a 2-D batch of time
+onesided is the default value TRUE,

* `input` must be either a 1-D time sequence or a 2-D batch of time
   sequences.
 
 * If `hop_length` is `NULL` (default), it is treated as equal to
@@ -310,40 +212,38 @@ batch size of `input`, \eqn{N} is the number of frequencies where
 STFT is applied, \eqn{T} is the total number of frames used, and each pair
 in the last dimension represents a complex number as the real part and the
 imaginary part.
-
- -

Warning

+
+
+
+

Warning

This function changed its signature in version 0.4.1. Calling it with the previous signature may raise an error or return an incorrect result.

+
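A minimal sketch of a call with the current signature (hedged: the exact output shape depends on `center` and `onesided`; assumes torch is installed):

```r
library(torch)

x <- torch_randn(800)               # 1-D time signal
spec <- torch_stft(
  x,
  n_fft = 64,
  hop_length = 16,
  window = torch_hann_window(64),   # avoids the default all-ones window
  return_complex = TRUE
)
spec$shape  # about (n_fft / 2 + 1) frequency bins by n_frames frames
```

With `return_complex = TRUE` the result is a complex tensor; with `FALSE`, the real and imaginary parts are stored in a trailing dimension of size 2.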
+ -
- +
- - + + diff --git a/dev/reference/torch_sub.html b/dev/reference/torch_sub.html index 1522858685644a07214ccfff146a1ad06e8f130a..aa44101c3e4b49308c6e3b4c2ed4d1b5655965af 100644 --- a/dev/reference/torch_sub.html +++ b/dev/reference/torch_sub.html @@ -1,79 +1,18 @@ - - - - - - - -Sub — torch_sub • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sub — torch_sub • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sub(self, other, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor or Scalar) the tensor or scalar to subtract from input

alpha

the scalar multiplier for other

- -

sub(input, other, *, alpha=1, out=None) -> Tensor

+
+
torch_sub(self, other, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor or Scalar) the tensor or scalar to subtract from input

+
alpha
+

the scalar multiplier for other

+
+
+

sub(input, other, *, alpha=1, out=None) -> Tensor

@@ -219,44 +135,43 @@ $$

Supports broadcasting to a common shape , type promotion , and integer, float, and complex inputs.

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2))
-b <- torch_tensor(c(0, 1))
-torch_sub(a, b, alpha=2)
-}
-#> torch_tensor
-#>  1
-#>  0
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2))
+b <- torch_tensor(c(0, 1))
+torch_sub(a, b, alpha=2)
+}
+#> torch_tensor
+#>  1
+#>  0
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_subtract.html b/dev/reference/torch_subtract.html index d7074d44c3921af94e4c83e9f3f114b7a4d8c607..e6a51b3a4681bece3e1ba685fa230874af55cde5 100644 --- a/dev/reference/torch_subtract.html +++ b/dev/reference/torch_subtract.html @@ -1,79 +1,18 @@ - - - - - - - -Subtract — torch_subtract • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Subtract — torch_subtract • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,57 +111,48 @@

Subtract

-
torch_subtract(self, other, alpha = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

other

(Tensor or Scalar) the tensor or scalar to subtract from input

alpha

the scalar multiplier for other

- -

subtract(input, other, *, alpha=1, out=None) -> Tensor

+
+
torch_subtract(self, other, alpha = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
other
+

(Tensor or Scalar) the tensor or scalar to subtract from input

+
alpha
+

the scalar multiplier for other

+
+
+

subtract(input, other, *, alpha=1, out=None) -> Tensor

-

Alias for torch_sub().

+

Alias for torch_sub().

+
+
-
- +
- - + + diff --git a/dev/reference/torch_sum.html b/dev/reference/torch_sum.html index 5e7e801ecf43d3ce41214e3893b67cb360c8bc96..8f71f9d56cd6a0b11ed71fd6fde9924ea85373d2 100644 --- a/dev/reference/torch_sum.html +++ b/dev/reference/torch_sum.html @@ -1,79 +1,18 @@ - - - - - - - -Sum — torch_sum • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Sum — torch_sum • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_sum(self, dim, keepdim = FALSE, dtype = NULL)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

keepdim

(bool) whether the output tensor has dim retained or not.

dtype

(torch.dtype, optional) the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

- -

sum(input, dtype=NULL) -> Tensor

+
+
torch_sum(self, dim, keepdim = FALSE, dtype = NULL)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
dtype
+

(torch.dtype, optional) the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: NULL.

+
+
+

sum(input, dtype=NULL) -> Tensor

Returns the sum of all elements in the input tensor.

-

sum(input, dim, keepdim=False, dtype=NULL) -> Tensor

- +
+
+

sum(input, dim, keepdim=False, dtype=NULL) -> Tensor

@@ -228,57 +143,56 @@ dimension dim. If dim is a list of dimensions, reduce over all of them.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
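The `keepdim` behavior described above can be sketched with NumPy's equivalent `keepdims` flag (illustrative only, not the R torch API):

```python
import numpy as np

x = np.arange(12.0).reshape(3, 4)

# keepdim = TRUE: the reduced dimension is retained with size 1.
s_keep = x.sum(axis=0, keepdims=True)   # shape (1, 4)

# keepdim = FALSE (the default): the reduced dimension is squeezed away.
s_drop = x.sum(axis=0)                  # shape (4,)

print(s_keep.shape, s_drop.shape)
```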

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_sum(a)
-
-
-a <- torch_randn(c(4, 4))
-a
-torch_sum(a, 1)
-b <- torch_arange(1, 4 * 5 * 6)$view(c(4, 5, 6))
-torch_sum(b, list(2, 1))
-}
-#> torch_tensor
-#>  1160
-#>  1180
-#>  1200
-#>  1220
-#>  1240
-#>  1260
-#> [ CPUFloatType{6} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_sum(a)
+
+
+a <- torch_randn(c(4, 4))
+a
+torch_sum(a, 1)
+b <- torch_arange(1, 4 * 5 * 6)$view(c(4, 5, 6))
+torch_sum(b, list(2, 1))
+}
+#> torch_tensor
+#>  1160
+#>  1180
+#>  1200
+#>  1220
+#>  1240
+#>  1260
+#> [ CPUFloatType{6} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_svd.html b/dev/reference/torch_svd.html index e6543ebd5573aee99aa13a5d218d936e02e80bbc..4d378762171f7f44ceb462049205fdf5b969657d 100644 --- a/dev/reference/torch_svd.html +++ b/dev/reference/torch_svd.html @@ -1,79 +1,18 @@ - - - - - - - -Svd — torch_svd • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Svd — torch_svd • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_svd(self, some = TRUE, compute_uv = TRUE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of \(m \times n\) matrices.

some

(bool, optional) controls the shape of returned U and V

compute_uv

(bool, optional) option whether to compute U and V or not

- -

Note

+
+
torch_svd(self, some = TRUE, compute_uv = TRUE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of \(m \times n\) matrices.

+
some
+

(bool, optional) controls the shape of returned U and V

+
compute_uv
+

(bool, optional) whether to compute U and V

+
+
+

Note

The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order.

The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer @@ -228,8 +144,9 @@ and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.

When compute_uv = FALSE, backward cannot be performed since U and V from the forward pass are required for the backward operation.

-

svd(input, some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

- +
+
+

svd(input, some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

@@ -241,53 +158,52 @@ i.e., if the last two dimensions of input are m and U
and V matrices will contain only \(min(n, m)\) orthonormal columns.

If compute_uv is FALSE, the returned U and V matrices will be zero matrices of shape \((m \times m)\) and \((n \times n)\) respectively. some will be ignored here.
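The reconstruction identity the R example verifies (input ≈ U diag(S) Vᵀ) can be sketched in NumPy. Note one convention difference: NumPy's `svd` returns V already transposed (`vh`), whereas `torch_svd` returns V itself. Illustrative only, not the R torch API:

```python
import numpy as np

a = np.random.default_rng(0).normal(size=(5, 3))

# full_matrices=False gives the reduced shapes, like some = TRUE.
u, s, vh = np.linalg.svd(a, full_matrices=False)

# Reconstruct: a ~= U diag(S) V^T (vh is already V^T here).
recon = u @ np.diag(s) @ vh
print(np.allclose(a, recon))
```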

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5, 3))
-a
-out = torch_svd(a)
-u = out[[1]]
-s = out[[2]]
-v = out[[3]]
-torch_dist(a, torch_mm(torch_mm(u, torch_diag(s)), v$t()))
-a_big = torch_randn(c(7, 5, 3))
-out = torch_svd(a_big)
-u = out[[1]]
-s = out[[2]]
-v = out[[3]]
-torch_dist(a_big, torch_matmul(torch_matmul(u, torch_diag_embed(s)), v$transpose(-2, -1)))
-}
-#> torch_tensor
-#> 2.51836e-06
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5, 3))
+a
+out = torch_svd(a)
+u = out[[1]]
+s = out[[2]]
+v = out[[3]]
+torch_dist(a, torch_mm(torch_mm(u, torch_diag(s)), v$t()))
+a_big = torch_randn(c(7, 5, 3))
+out = torch_svd(a_big)
+u = out[[1]]
+s = out[[2]]
+v = out[[3]]
+torch_dist(a_big, torch_matmul(torch_matmul(u, torch_diag_embed(s)), v$transpose(-2, -1)))
+}
+#> torch_tensor
+#> 3.58089e-06
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_symeig.html b/dev/reference/torch_symeig.html index 6a4188516a48648506104bb356c08019c0e498d1..43a1f0eacb2fc3f59d6985fbee7874f9b0636069 100644 --- a/dev/reference/torch_symeig.html +++ b/dev/reference/torch_symeig.html @@ -1,79 +1,18 @@ - - - - - - - -Symeig — torch_symeig • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Symeig — torch_symeig • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_symeig(self, eigenvectors = FALSE, upper = TRUE)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions consisting of symmetric matrices.

eigenvectors

(boolean, optional) controls whether eigenvectors have to be computed

upper

(boolean, optional) controls whether to consider upper-triangular or lower-triangular region

- -

Note

+
+
torch_symeig(self, eigenvectors = FALSE, upper = TRUE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor of size \((*, n, n)\) where * is zero or more batch dimensions consisting of symmetric matrices.

+
eigenvectors
+

(boolean, optional) controls whether eigenvectors should be computed

+
upper
+

(boolean, optional) controls whether to consider upper-triangular or lower-triangular region

+
+
+

Note

The eigenvalues are returned in ascending order. If input is a batch of matrices, then the eigenvalues of each matrix in the batch are returned in ascending order.

Irrespective of the original strides, the returned matrix V will @@ -217,8 +133,9 @@ be transposed, i.e. with strides V.contiguous().transpose(-1, -2).stride()

Extra care needs to be taken when backpropagating through the outputs. Such an operation is only stable when all eigenvalues are distinct. Otherwise, NaN can appear, as the gradients are not properly defined.

-

symeig(input, eigenvectors=False, upper=TRUE) -> (Tensor, Tensor)

- +
+
+

symeig(input, eigenvectors=False, upper=TRUE) -> (Tensor, Tensor)

@@ -234,52 +151,51 @@ both eigenvalues and eigenvectors are computed.

Since the input matrix input is supposed to be symmetric, only the upper triangular portion is used by default.

If upper is FALSE, then lower triangular portion is used.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(5, 5))
-a = a + a$t()  # To make a symmetric
-a
-o = torch_symeig(a, eigenvectors=TRUE)
-e = o[[1]]
-v = o[[2]]
-e
-v
-a_big = torch_randn(c(5, 2, 2))
-a_big = a_big + a_big$transpose(-2, -1)  # To make a_big symmetric
-o = a_big$symeig(eigenvectors=TRUE)
-e = o[[1]]
-v = o[[2]]
-torch_allclose(torch_matmul(v, torch_matmul(e$diag_embed(), v$transpose(-2, -1))), a_big)
-}
-#> [1] TRUE
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(5, 5))
+a = a + a$t()  # To make a symmetric
+a
+o = torch_symeig(a, eigenvectors=TRUE)
+e = o[[1]]
+v = o[[2]]
+e
+v
+a_big = torch_randn(c(5, 2, 2))
+a_big = a_big + a_big$transpose(-2, -1)  # To make a_big symmetric
+o = a_big$symeig(eigenvectors=TRUE)
+e = o[[1]]
+v = o[[2]]
+torch_allclose(torch_matmul(v, torch_matmul(e$diag_embed(), v$transpose(-2, -1))), a_big)
+}
+#> [1] TRUE
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_t.html b/dev/reference/torch_t.html index f7deb8a2d18dbe82285c7b8f65f87e9918205b5c..b4709e39ac5cf4cd55736ca733a759db63657803 100644 --- a/dev/reference/torch_t.html +++ b/dev/reference/torch_t.html @@ -1,79 +1,18 @@ - - - - - - - -T — torch_t • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -T — torch_t • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_t(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

t(input) -> Tensor

+
+
torch_t(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

t(input) -> Tensor

@@ -209,51 +129,50 @@ and 1.

0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to transpose(input, 0, 1).

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(2,3))
-x
-torch_t(x)
-x = torch_randn(c(3))
-x
-torch_t(x)
-x = torch_randn(c(2, 3))
-x
-torch_t(x)
-}
-#> torch_tensor
-#>  0.8020 -0.9383
-#>  1.4307  0.0216
-#> -0.0582 -0.9145
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(2,3))
+x
+torch_t(x)
+x = torch_randn(c(3))
+x
+torch_t(x)
+x = torch_randn(c(2, 3))
+x
+torch_t(x)
+}
+#> torch_tensor
+#> -0.6858  0.8717
+#> -1.3576 -0.0799
+#> -0.0242  1.1285
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_take.html b/dev/reference/torch_take.html index 25d36c78396015657b49f9a9462b2d68a3bc5acf..8508ea11b21c6f0579b92415169d4227c6815748 100644 --- a/dev/reference/torch_take.html +++ b/dev/reference/torch_take.html @@ -1,79 +1,18 @@ - - - - - - - -Take — torch_take • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Take — torch_take • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_take(self, index)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

index

(LongTensor) the indices into tensor

- -

take(input, index) -> Tensor

+
+
torch_take(self, index)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
index
+

(LongTensor) the indices into tensor

+
+
+

take(input, index) -> Tensor

Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices.
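NumPy's `take` has the same flattened-view semantics, so it can serve as a sketch of the behavior described above. One caveat: NumPy indices are 0-based, whereas the R example's `c(1, 2, 5)` is 1-based (so it maps to `0, 1, 4` here). Illustrative only, not the R torch API:

```python
import numpy as np

src = np.array([[4.0, 3.0, 5.0],
                [6.0, 7.0, 8.0]])

# The input is read as if flattened to 1-D; the result has the
# shape of the index array.  0-based indices 0, 1, 4 correspond
# to the 1-based 1, 2, 5 in the R example.
out = np.take(src, [0, 1, 4])
print(out)  # [4. 3. 7.]
```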

+
-

Examples

-
if (torch_is_installed()) {
-
-src = torch_tensor(matrix(c(4,3,5,6,7,8), ncol = 3, byrow = TRUE))
-torch_take(src, torch_tensor(c(1, 2, 5), dtype = torch_int64()))
-}
-#> torch_tensor
-#>  4
-#>  3
-#>  7
-#> [ CPUFloatType{3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+src = torch_tensor(matrix(c(4,3,5,6,7,8), ncol = 3, byrow = TRUE))
+torch_take(src, torch_tensor(c(1, 2, 5), dtype = torch_int64()))
+}
+#> torch_tensor
+#>  4
+#>  3
+#>  7
+#> [ CPUFloatType{3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tan.html b/dev/reference/torch_tan.html index 93dda371e2c0890ef9f5105267b7d1b6d4201317..e4ff91733ae54001bc529bf9172a9e209daabfac 100644 --- a/dev/reference/torch_tan.html +++ b/dev/reference/torch_tan.html @@ -1,79 +1,18 @@ - - - - - - - -Tan — torch_tan • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tan — torch_tan • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_tan(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

tan(input, out=NULL) -> Tensor

+
+
torch_tan(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

tan(input, out=NULL) -> Tensor

@@ -209,46 +129,45 @@

$$ \mbox{out}_{i} = \tan(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_tan(a)
-}
-#> torch_tensor
-#> -3.9415
-#>  0.0214
-#> -1.8529
-#> -1.7272
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_tan(a)
+}
+#> torch_tensor
+#> -1.4210
+#>  0.6762
+#>  0.4898
+#>  2.7220
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tanh.html b/dev/reference/torch_tanh.html index 513dbe105e9af8b1a2730d35ecf31b8c42cb0ede..e644b1ea89a4a5ee228b60ca9e297b745f75bd09 100644 --- a/dev/reference/torch_tanh.html +++ b/dev/reference/torch_tanh.html @@ -1,79 +1,18 @@ - - - - - - - -Tanh — torch_tanh • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tanh — torch_tanh • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_tanh(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

tanh(input, out=NULL) -> Tensor

+
+
torch_tanh(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

tanh(input, out=NULL) -> Tensor

@@ -210,46 +130,45 @@ of input.

$$ \mbox{out}_{i} = \tanh(\mbox{input}_{i}) $$

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_tanh(a)
-}
-#> torch_tensor
-#>  0.8746
-#>  0.4747
-#>  0.9720
-#>  0.8456
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_tanh(a)
+}
+#> torch_tensor
+#> -0.4118
+#> -0.8532
+#> -0.4083
+#>  0.6851
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tensor.html b/dev/reference/torch_tensor.html index cbe7b598e79bad8cdb3d018afc9263aefaeac094..71df1258c803ce1b16a3c225a0d03429a4a8c1df 100644 --- a/dev/reference/torch_tensor.html +++ b/dev/reference/torch_tensor.html @@ -1,79 +1,18 @@ - - - - - - - -Converts R objects to a torch tensor — torch_tensor • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Converts R objects to a torch tensor — torch_tensor • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,78 +111,66 @@

Converts R objects to a torch tensor

-
torch_tensor(
-  data,
-  dtype = NULL,
-  device = NULL,
-  requires_grad = FALSE,
-  pin_memory = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
data

an R atomic vector, matrix or array

dtype

a torch_dtype instance

device

a device creted with torch_device()

requires_grad

if autograd should record operations on the returned tensor.

pin_memory

If set, returned tensor would be allocated in the pinned memory.

- +
+
torch_tensor(
+  data,
+  dtype = NULL,
+  device = NULL,
+  requires_grad = FALSE,
+  pin_memory = FALSE
+)
+
-

Examples

-
if (torch_is_installed()) {
-torch_tensor(c(1,2,3,4))
-torch_tensor(c(1,2,3,4), dtype = torch_int())
-
-}
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#>  4
-#> [ CPUIntType{4} ]
-
+
+

Arguments

+
data
+

an R atomic vector, matrix or array

+
dtype
+

a torch_dtype instance

+
device
+

a device created with torch_device()

+
requires_grad
+

whether autograd should record operations on the returned tensor.

+
pin_memory
+

If set, the returned tensor will be allocated in pinned memory.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+torch_tensor(c(1,2,3,4))
+torch_tensor(c(1,2,3,4), dtype = torch_int())
+
+}
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#>  4
+#> [ CPUIntType{4} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tensordot.html b/dev/reference/torch_tensordot.html index 7d32b43560af3cb264657935062487fc05f2776b..e61d849be284b9d107789c13dce3bdc22f181b92 100644 --- a/dev/reference/torch_tensordot.html +++ b/dev/reference/torch_tensordot.html @@ -1,80 +1,19 @@ - - - - - - - -Tensordot — torch_tensordot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tensordot — torch_tensordot • torch - - - - - - + + - - -
-
- -
- -
+
@@ -191,64 +113,56 @@ tensordot implements a generalized matrix product." /> tensordot implements a generalized matrix product.
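The generalized contraction in the R example, `dims = list(c(2, 1), c(1, 2))`, can be sketched with NumPy's `tensordot`; its `axes` argument is 0-based, so the same pairing becomes `([1, 0], [0, 1])`. Illustrative only, not the R torch API:

```python
import numpy as np

a = np.arange(1.0, 61.0).reshape(3, 4, 5)
b = np.arange(1.0, 25.0).reshape(4, 3, 2)

# Contract a's axes 1 and 0 (sizes 4 and 3) against b's axes 0 and 1
# (sizes 4 and 3); the uncontracted axes (5 from a, 2 from b) remain.
out = np.tensordot(a, b, axes=([1, 0], [0, 1]))
print(out.shape)  # (5, 2)
```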

-
torch_tensordot(a, b, dims = 2)
- -

Arguments

- - - - - - - - - - - - - - -
a

(Tensor) Left tensor to contract

b

(Tensor) Right tensor to contract

dims

(int or tuple of two lists of integers) number of dimensions to contract or explicit lists of dimensions for a and b respectively

- - -

Examples

-
if (torch_is_installed()) {
-
-a <- torch_arange(start = 1, end = 60)$reshape(c(3, 4, 5))
-b <- torch_arange(start = 1, end = 24)$reshape(c(4, 3, 2))
-torch_tensordot(a, b, dims = list(c(2, 1), c(1, 2)))
-if (FALSE) {
-a = torch_randn(3, 4, 5, device='cuda')
-b = torch_randn(4, 5, 6, device='cuda')
-c = torch_tensordot(a, b, dims=2)$cpu()
-}
-}
-
+
+
torch_tensordot(a, b, dims = 2)
+
+ +
+

Arguments

+
a
+

(Tensor) Left tensor to contract

+
b
+

(Tensor) Right tensor to contract

+
dims
+

(int or tuple of two lists of integers) number of dimensions to contract or explicit lists of dimensions for a and b respectively

+
+ +
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_arange(start = 1, end = 60)$reshape(c(3, 4, 5))
+b <- torch_arange(start = 1, end = 24)$reshape(c(4, 3, 2))
+torch_tensordot(a, b, dims = list(c(2, 1), c(1, 2)))
+if (FALSE) {
+a = torch_randn(3, 4, 5, device='cuda')
+b = torch_randn(4, 5, 6, device='cuda')
+c = torch_tensordot(a, b, dims=2)$cpu()
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_threshold_.html b/dev/reference/torch_threshold_.html index e844be326282f4843a9a46fdf0444bd25832a435..1c044d19c37407bff3375c6626f3e5de82a1a410 100644 --- a/dev/reference/torch_threshold_.html +++ b/dev/reference/torch_threshold_.html @@ -1,79 +1,18 @@ - - - - - - - -Threshold_ — torch_threshold_ • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Threshold_ — torch_threshold_ • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,57 +111,48 @@

Threshold_

-
torch_threshold_(self, threshold, value)
- -

Arguments

- - - - - - - - - - - - - - -
self

input tensor

threshold

The value to threshold at

value

The value to replace with

- -

threshold_(input, threshold, value) -> Tensor

+
+
torch_threshold_(self, threshold, value)
+
+
+

Arguments

+
self
+

input tensor

+
threshold
+

The value to threshold at

+
value
+

The value to replace with

+
+
+

threshold_(input, threshold, value) -> Tensor

In-place version of torch_threshold.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_topk.html b/dev/reference/torch_topk.html index d0d4589d38984b6d126fe886bc8d291df013af50..86af2eae2bccf043151aa3363d319b534e8d42bc 100644 --- a/dev/reference/torch_topk.html +++ b/dev/reference/torch_topk.html @@ -1,79 +1,18 @@ - - - - - - - -Topk — torch_topk • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Topk — torch_topk • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_topk(self, k, dim = -1L, largest = TRUE, sorted = TRUE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

k

(int) the k in "top-k"

dim

(int, optional) the dimension to sort along

largest

(bool, optional) controls whether to return largest or smallest elements

sorted

(bool, optional) controls whether to return the elements in sorted order

- -

topk(input, k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

+
+
torch_topk(self, k, dim = -1L, largest = TRUE, sorted = TRUE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
k
+

(int) the k in "top-k"

+
dim
+

(int, optional) the dimension to sort along

+
largest
+

(bool, optional) controls whether to return largest or smallest elements

+
sorted
+

(bool, optional) controls whether to return the elements in sorted order

+
+
+

topk(input, k, dim=NULL, largest=TRUE, sorted=TRUE) -> (Tensor, LongTensor)

@@ -229,54 +141,53 @@ a given dimension.

of the elements in the original input tensor.

If the boolean option sorted is TRUE, the returned k elements are themselves sorted
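The (values, indices) pair returned by the function can be sketched in NumPy with a partial partition followed by an optional sort. This is an illustrative reimplementation under the documented semantics, not the torch implementation:

```python
import numpy as np

def topk(x, k, largest=True, sorted_=True):
    # Partial partition selects the k extreme elements cheaply;
    # sorting afterwards matches the sorted = TRUE behavior.
    idx = np.argpartition(x, -k)[-k:] if largest else np.argpartition(x, k)[:k]
    if sorted_:
        order = np.argsort(x[idx])
        if largest:
            order = order[::-1]
        idx = idx[order]
    return x[idx], idx

x = np.arange(1.0, 7.0)          # 1, 2, ..., 6
values, indices = topk(x, 3)
print(values)                    # [6. 5. 4.]
```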

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_arange(1., 6.)
-x
-torch_topk(x, 3)
-}
-#> [[1]]
-#> torch_tensor
-#>  6
-#>  5
-#>  4
-#> [ CPUFloatType{3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  6
-#>  5
-#>  4
-#> [ CPULongType{3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_arange(1., 6.)
+x
+torch_topk(x, 3)
+}
+#> [[1]]
+#> torch_tensor
+#>  6
+#>  5
+#>  4
+#> [ CPUFloatType{3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  6
+#>  5
+#>  4
+#> [ CPULongType{3} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_trace.html b/dev/reference/torch_trace.html index b4a021afe42337a6248647e8abeac3235f8c7f53..f9df5db89ed43b6cb4fe4fae5ae7ba0d0b3b0a7e 100644 --- a/dev/reference/torch_trace.html +++ b/dev/reference/torch_trace.html @@ -1,79 +1,18 @@ - - - - - - - -Trace — torch_trace • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Trace — torch_trace • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_trace(self)
- -

Arguments

- - - - - - -
self

the input tensor

- -

trace(input) -> Tensor

+
+
torch_trace(self)
+
+
+

Arguments

+
self
+

the input tensor

+
+
+

trace(input) -> Tensor

Returns the sum of the elements of the diagonal of the input 2-D matrix.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_arange(1, 9)$view(c(3, 3))
-x
-torch_trace(x)
-}
-#> torch_tensor
-#> 15
-#> [ CPUFloatType{} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_arange(1, 9)$view(c(3, 3))
+x
+torch_trace(x)
+}
+#> torch_tensor
+#> 15
+#> [ CPUFloatType{} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_transpose.html b/dev/reference/torch_transpose.html index 421057cec95c66e490829dd80508d07571526302..ab23a195c5b4dcdd4fde33663c361828aedaabbb 100644 --- a/dev/reference/torch_transpose.html +++ b/dev/reference/torch_transpose.html @@ -1,79 +1,18 @@ - - - - - - - -Transpose — torch_transpose • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Transpose — torch_transpose • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,27 +111,21 @@

Transpose

-
torch_transpose(self, dim0, dim1)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim0

(int) the first dimension to be transposed

dim1

(int) the second dimension to be transposed

- -

transpose(input, dim0, dim1) -> Tensor

+
+
torch_transpose(self, dim0, dim1)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim0
+

(int) the first dimension to be transposed

+
dim1
+

(int) the second dimension to be transposed

+
+
+

transpose(input, dim0, dim1) -> Tensor

@@ -218,45 +134,44 @@ The given dimensions dim0 and dim1 are swapped.

The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
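NumPy's `swapaxes` has the same view semantics, so it can illustrate the shared-storage behavior described above (illustrative only, not the R torch API):

```python
import numpy as np

x = np.zeros((2, 3))
y = np.swapaxes(x, 0, 1)   # a view: no data is copied

# Writing through the transposed view is visible through the
# original array, just as the out tensor shares storage above.
y[2, 0] = 7.0
print(x[0, 2])  # 7.0
```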

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_randn(c(2, 3))
-x
-torch_transpose(x, 1, 2)
-}
-#> torch_tensor
-#> -1.8193  0.4626
-#> -0.1021 -0.9560
-#> -0.0041  2.3591
-#> [ CPUFloatType{3,2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_randn(c(2, 3))
+x
+torch_transpose(x, 1, 2)
+}
+#> torch_tensor
+#>  1.8015 -0.5876
+#> -1.0501  1.0284
+#>  0.4838 -0.2184
+#> [ CPUFloatType{3,2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_trapz.html b/dev/reference/torch_trapz.html index efdc374511dae9f767252b5e135b8aa3b4fb43d5..054a378b3ad9d0eaa54f1643c1718fb5790a6b57 100644 --- a/dev/reference/torch_trapz.html +++ b/dev/reference/torch_trapz.html @@ -1,79 +1,18 @@ - - - - - - - -Trapz — torch_trapz • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Trapz — torch_trapz • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_trapz(y, dx = 1L, x, dim = -1L)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
y

(Tensor) The values of the function to integrate

dx

(float) The distance between points at which y is sampled.

x

(Tensor) The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention \(\int_a^b f = -\int_b^a f\) is followed).

dim

(int) The dimension along which to integrate. By default, use the last dimension.

- -

trapz(y, x, *, dim=-1) -> Tensor

+
+
torch_trapz(y, dx = 1L, x, dim = -1L)
+
+
+

Arguments

+
y
+

(Tensor) The values of the function to integrate

+
dx
+

(float) The distance between points at which y is sampled.

+
x
+

(Tensor) The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention \(\int_a^b f = -\int_b^a f\) is followed).

+
dim
+

(int) The dimension along which to integrate. By default, use the last dimension.

+
+
+

trapz(y, x, *, dim=-1) -> Tensor

Estimate \(\int y\,dx\) along dim, using the trapezoid rule.

-

trapz(y, *, dx=1, dim=-1) -> Tensor

- +
+
+

trapz(y, *, dx=1, dim=-1) -> Tensor

As above, but the sample points are spaced uniformly at a distance of dx.
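With uniformly spaced samples the estimate reduces to dx · (y₀/2 + y₁ + … + yₙ₋₂ + yₙ₋₁/2). A minimal NumPy sketch of that formula, applied along one axis as `torch_trapz` does (illustrative only, not the R torch API):

```python
import numpy as np

def trapz_uniform(y, dx=1.0, axis=-1):
    # Trapezoid rule with uniform spacing: average each adjacent
    # pair of samples, sum the panels, and scale by dx.
    y = np.asarray(y, dtype=float)
    n = y.shape[axis]
    a = np.take(y, range(n - 1), axis=axis)
    b = np.take(y, range(1, n), axis=axis)
    return 0.5 * dx * (a + b).sum(axis=axis)

print(trapz_uniform([1.0, 2.0, 3.0], dx=1.0))  # 4.0
```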

+
-

Examples

-
if (torch_is_installed()) {
-
-y = torch_randn(list(2, 3))
-y
-x = torch_tensor(matrix(c(1, 3, 4, 1, 2, 3), ncol = 3, byrow=TRUE))
-torch_trapz(y, x = x)
-
-}
-#> torch_tensor
-#> -1.7574
-#>  0.0709
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+y = torch_randn(list(2, 3))
+y
+x = torch_tensor(matrix(c(1, 3, 4, 1, 2, 3), ncol = 3, byrow=TRUE))
+torch_trapz(y, x = x)
+
+}
+#> torch_tensor
+#> -1.5980
+#>  0.4931
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_triangular_solve.html b/dev/reference/torch_triangular_solve.html index b02526b022a405861144515893c1eb0250c52e91..0d85952a16476da57ff0cdd9d4948e0ddbaf74bf 100644 --- a/dev/reference/torch_triangular_solve.html +++ b/dev/reference/torch_triangular_solve.html @@ -1,79 +1,18 @@ - - - - - - - -Triangular_solve — torch_triangular_solve • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triangular_solve — torch_triangular_solve • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,41 +111,31 @@

Triangular_solve

-
torch_triangular_solve(
-  self,
-  A,
-  upper = TRUE,
-  transpose = FALSE,
-  unitriangular = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) multiple right-hand sides of size \((*, m, k)\) where \(*\) is zero of more batch dimensions (\(b\))

A

(Tensor) the input triangular coefficient matrix of size \((*, m, m)\) where \(*\) is zero or more batch dimensions

upper

(bool, optional) whether to solve the upper-triangular system of equations (default) or the lower-triangular system of equations. Default: TRUE.

transpose

(bool, optional) whether \(A\) should be transposed before being sent into the solver. Default: FALSE.

unitriangular

(bool, optional) whether \(A\) is unit triangular. If TRUE, the diagonal elements of \(A\) are assumed to be 1 and not referenced from \(A\). Default: FALSE.

- -

triangular_solve(input, A, upper=TRUE, transpose=False, unitriangular=False) -> (Tensor, Tensor)

+
+
torch_triangular_solve(
+  self,
+  A,
+  upper = TRUE,
+  transpose = FALSE,
+  unitriangular = FALSE
+)
+
+
+

Arguments

+
self
+

(Tensor) multiple right-hand sides of size \((*, m, k)\) where \(*\) is zero or more batch dimensions (\(b\))

+
A
+

(Tensor) the input triangular coefficient matrix of size \((*, m, m)\) where \(*\) is zero or more batch dimensions

+
upper
+

(bool, optional) whether to solve the upper-triangular system of equations (default) or the lower-triangular system of equations. Default: TRUE.

+
transpose
+

(bool, optional) whether \(A\) should be transposed before being sent into the solver. Default: FALSE.

+
unitriangular
+

(bool, optional) whether \(A\) is unit triangular. If TRUE, the diagonal elements of \(A\) are assumed to be 1 and not referenced from \(A\). Default: FALSE.

+
+
+

triangular_solve(input, A, upper=TRUE, transpose=False, unitriangular=False) -> (Tensor, Tensor)

@@ -234,54 +146,53 @@ with the default keyword arguments.

torch_triangular_solve(b, A) can take in 2D inputs b, A or inputs that are batches of 2D matrices. If the inputs are batches, batched outputs X are returned
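The default case (upper = TRUE, no transpose) is plain back substitution. A minimal NumPy sketch of that computation, written out explicitly rather than calling a library solver (illustrative only, not the torch implementation):

```python
import numpy as np

def upper_triangular_solve(b, A):
    # Solve A X = b for upper-triangular A by back substitution,
    # working from the last row upwards.
    m = A.shape[0]
    X = np.zeros_like(b, dtype=float)
    for i in range(m - 1, -1, -1):
        X[i] = (b[i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X

A = np.triu(np.array([[2.0, 1.0], [0.5, 4.0]]))  # [[2, 1], [0, 4]]
b = np.array([[3.0], [8.0]])
X = upper_triangular_solve(b, A)
print(np.allclose(A @ X, b))
```

(The R function additionally returns a clone of A as the second list element, as shown in the example output above.)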

+
-

Examples

-
if (torch_is_installed()) {
-
-A = torch_randn(c(2, 2))$triu()
-A
-b = torch_randn(c(2, 3))
-b
-torch_triangular_solve(b, A)
-}
-#> [[1]]
-#> torch_tensor
-#> -13.1331  -9.7320   3.9804
-#>  -5.9878  -5.3316   1.3763
-#> [ CPUFloatType{2,3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> -0.5985  0.9713
-#>  0.0000  0.2471
-#> [ CPUFloatType{2,2} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+A = torch_randn(c(2, 2))$triu()
+A
+b = torch_randn(c(2, 3))
+b
+torch_triangular_solve(b, A)
+}
+#> [[1]]
+#> torch_tensor
+#>   0.6662  -2.1877  -0.2814
+#>   1.2484  30.9253   5.2644
+#> [ CPUFloatType{2,3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> -1.0806 -0.0484
+#>  0.0000  0.0760
+#> [ CPUFloatType{2,2} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tril.html b/dev/reference/torch_tril.html index b9a2ed99209e051286f7bf5bdc17f82d3a6f5cf2..1fc9d83a6d4c15b3be82bd23cef385e1090edf4d 100644 --- a/dev/reference/torch_tril.html +++ b/dev/reference/torch_tril.html @@ -1,79 +1,18 @@ - - - - - - - -Tril — torch_tril • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tril — torch_tril • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_tril(self, diagonal = 0L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

diagonal

(int, optional) the diagonal to consider

- -

tril(input, diagonal=0, out=NULL) -> Tensor

+
+
torch_tril(self, diagonal = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
diagonal
+

(int, optional) the diagonal to consider

+
+
+

tril(input, diagonal=0, out=NULL) -> Tensor

diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.
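NumPy's `tril` uses the same diagonal-offset convention, so it can illustrate the counting described above on the 4x6 case from the R example (illustrative only, not the R torch API):

```python
import numpy as np

b = np.ones((4, 6))

# k = 0 keeps the main diagonal and below; k = 1 also includes one
# diagonal above it; k = -1 excludes one diagonal below it.
print(int(np.tril(b, k=0).sum()))   # 10 elements kept
print(int(np.tril(b, k=1).sum()))   # 14
print(int(np.tril(b, k=-1).sum()))  # 6
```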

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-a
-torch_tril(a)
-b = torch_randn(c(4, 6))
-b
-torch_tril(b, diagonal=1)
-torch_tril(b, diagonal=-1)
-}
-#> torch_tensor
-#>  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
-#>  1.6171  0.0000  0.0000  0.0000  0.0000  0.0000
-#>  0.1809 -0.7544  0.0000  0.0000  0.0000  0.0000
-#> -0.4218 -1.2127 -1.1258  0.0000  0.0000  0.0000
-#> [ CPUFloatType{4,6} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+a
+torch_tril(a)
+b = torch_randn(c(4, 6))
+b
+torch_tril(b, diagonal=1)
+torch_tril(b, diagonal=-1)
+}
+#> torch_tensor
+#>  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
+#>  1.4302  0.0000  0.0000  0.0000  0.0000  0.0000
+#>  0.3984  0.2092  0.0000  0.0000  0.0000  0.0000
+#> -0.5634 -0.4056 -2.3276  0.0000  0.0000  0.0000
+#> [ CPUFloatType{4,6} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_tril_indices.html b/dev/reference/torch_tril_indices.html index 2cd66b2aaaf7305c030e27c257e5d5b8a342fb80..54f19baa499b6675641cea49e8967eebfea2427e 100644 --- a/dev/reference/torch_tril_indices.html +++ b/dev/reference/torch_tril_indices.html @@ -1,79 +1,18 @@ - - - - - - - -Tril_indices — torch_tril_indices • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tril_indices — torch_tril_indices • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,42 @@

Tril_indices

-
torch_tril_indices(
-  row,
-  col,
-  offset = 0,
-  dtype = torch_long(),
-  device = "cpu",
-  layout = torch_strided()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
row

(int) number of rows in the 2-D matrix.

col

(int) number of columns in the 2-D matrix.

offset

(int) diagonal offset from the main diagonal. Default: if not provided, 0.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

layout

(torch.layout, optional) currently only support torch_strided.

- -

Note

+
+
torch_tril_indices(
+  row,
+  col,
+  offset = 0,
+  dtype = torch_long(),
+  device = "cpu",
+  layout = torch_strided()
+)
+
+
+

Arguments

+
row
+

(int) number of rows in the 2-D matrix.

+
col
+

(int) number of columns in the 2-D matrix.

+
offset
+

(int) diagonal offset from the main diagonal. Default: if not provided, 0.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
layout
+

(torch.layout, optional) currently only supports torch_strided.

+
+
+

Note

-
When running on CUDA, `row * col` must be less than \eqn{2^{59}} to
+
When running on CUDA, `row * col` must be less than \eqn{2^{59}} to
 prevent overflow during calculation.
-
- -

tril_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

+
+
+
+

tril_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

@@ -252,44 +163,43 @@ diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-a = torch_tril_indices(3, 3)
-a
-a = torch_tril_indices(4, 3, -1)
-a
-a = torch_tril_indices(4, 3, 1)
-a
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+a = torch_tril_indices(3, 3)
+a
+a = torch_tril_indices(4, 3, -1)
+a
+a = torch_tril_indices(4, 3, 1)
+a
+}
+}
+
+
+
-
- +
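The row-major index enumeration that torch_tril_indices describes can be sketched in pure Python. This is a hypothetical illustration of the ordering only (0-based indices here, matching the underlying libtorch convention rather than R's 1-based slicing):

```python
def tril_indices(row, col, offset=0):
    # Enumerate (row, col) positions of the lower triangle of a
    # row x col matrix, in row-major order, 0-based.
    rows, cols = [], []
    for i in range(row):
        for j in range(col):
            if j - i <= offset:
                rows.append(i)
                cols.append(j)
    return [rows, cols]

print(tril_indices(3, 3))  # [[0, 1, 1, 2, 2, 2], [0, 0, 1, 0, 1, 2]]
```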
- - + + diff --git a/dev/reference/torch_triu.html b/dev/reference/torch_triu.html index a5eebc66b9850d47092268e733d253ddcaf5bddb..03a1d31459f4e53b700027aa48379f290b38a0df 100644 --- a/dev/reference/torch_triu.html +++ b/dev/reference/torch_triu.html @@ -1,79 +1,18 @@ - - - - - - - -Triu — torch_triu • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triu — torch_triu • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_triu(self, diagonal = 0L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

diagonal

(int, optional) the diagonal to consider

- -

triu(input, diagonal=0, out=NULL) -> Tensor

+
+
torch_triu(self, diagonal = 0L)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
diagonal
+

(int, optional) the diagonal to consider

+
+
+

triu(input, diagonal=0, out=NULL) -> Tensor

@@ -220,52 +138,51 @@ diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(3, 3))
-a
-torch_triu(a)
-torch_triu(a, diagonal=1)
-torch_triu(a, diagonal=-1)
-b = torch_randn(c(4, 6))
-b
-torch_triu(b, diagonal=1)
-torch_triu(b, diagonal=-1)
-}
-#> torch_tensor
-#> -0.9039  0.2676 -0.5836  1.0136  0.3033 -0.6250
-#>  1.1673  0.3175  1.6993 -0.9980 -0.8675  1.0028
-#>  0.0000 -1.0054  0.1130  1.0869 -1.6897  0.7564
-#>  0.0000  0.0000 -1.1161 -0.3349 -1.5448  0.9295
-#> [ CPUFloatType{4,6} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(3, 3))
+a
+torch_triu(a)
+torch_triu(a, diagonal=1)
+torch_triu(a, diagonal=-1)
+b = torch_randn(c(4, 6))
+b
+torch_triu(b, diagonal=1)
+torch_triu(b, diagonal=-1)
+}
+#> torch_tensor
+#>  1.6667  0.1364 -0.1780  2.4322  0.7140  0.9880
+#>  1.1858  0.8444 -0.6649 -0.8724  1.9515 -1.4735
+#>  0.0000  1.0973  0.4507  0.8721  0.7713 -0.1108
+#>  0.0000  0.0000  1.1485 -0.3038 -0.6244 -0.0548
+#> [ CPUFloatType{4,6} ]
+
+
+
-
- +
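The triu rule is the mirror image of tril: keep entries on and above the chosen diagonal. A minimal pure-Python sketch, with `triu` a hypothetical helper name:

```python
def triu(m, diagonal=0):
    # Keep entries on and above the chosen diagonal (j - i >= diagonal);
    # zero out everything below it.
    return [[v if j - i >= diagonal else 0 for j, v in enumerate(row)]
            for i, row in enumerate(m)]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(triu(m))              # upper triangle including the main diagonal
print(triu(m, diagonal=1))  # strictly above the main diagonal
```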
- - + + diff --git a/dev/reference/torch_triu_indices.html b/dev/reference/torch_triu_indices.html index c4960036b506541a040563018a6a974a5661a438..43659c844bd45fc3b0e234cf2f66c3bcd7fed7bf 100644 --- a/dev/reference/torch_triu_indices.html +++ b/dev/reference/torch_triu_indices.html @@ -1,79 +1,18 @@ - - - - - - - -Triu_indices — torch_triu_indices • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Triu_indices — torch_triu_indices • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,53 +111,42 @@

Triu_indices

-
torch_triu_indices(
-  row,
-  col,
-  offset = 0,
-  dtype = torch_long(),
-  device = "cpu",
-  layout = torch_strided()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
row

(int) number of rows in the 2-D matrix.

col

(int) number of columns in the 2-D matrix.

offset

(int) diagonal offset from the main diagonal. Default: if not provided, 0.

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

layout

(torch.layout, optional) currently only support torch_strided.

- -

Note

+
+
torch_triu_indices(
+  row,
+  col,
+  offset = 0,
+  dtype = torch_long(),
+  device = "cpu",
+  layout = torch_strided()
+)
+
+
+

Arguments

+
row
+

(int) number of rows in the 2-D matrix.

+
col
+

(int) number of columns in the 2-D matrix.

+
offset
+

(int) diagonal offset from the main diagonal. Default: if not provided, 0.

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, torch_long.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
layout
+

(torch.layout, optional) currently only supports torch_strided.

+
+
+

Note

-
When running on CUDA, `row * col` must be less than \eqn{2^{59}} to
+
When running on CUDA, `row * col` must be less than \eqn{2^{59}} to
 prevent overflow during calculation.
-
- -

triu_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

+
+
+
+

triu_indices(row, col, offset=0, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor

@@ -252,44 +163,43 @@ diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \(\lbrace (i, i) \rbrace\) for \(i \in [0, \min\{d_{1}, d_{2}\} - 1]\) where \(d_{1}, d_{2}\) are the dimensions of the matrix.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-a = torch_triu_indices(3, 3)
-a
-a = torch_triu_indices(4, 3, -1)
-a
-a = torch_triu_indices(4, 3, 1)
-a
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+a = torch_triu_indices(3, 3)
+a
+a = torch_triu_indices(4, 3, -1)
+a
+a = torch_triu_indices(4, 3, 1)
+a
+}
+}
+
+
+
-
- +
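As with tril_indices, the triu_indices enumeration can be sketched in pure Python (a hypothetical illustration of the documented row-major ordering, 0-based):

```python
def triu_indices(row, col, offset=0):
    # Enumerate (row, col) positions of the upper triangle of a
    # row x col matrix, in row-major order, 0-based.
    rows, cols = [], []
    for i in range(row):
        for j in range(col):
            if j - i >= offset:
                rows.append(i)
                cols.append(j)
    return [rows, cols]

print(triu_indices(3, 3))  # [[0, 0, 0, 1, 1, 2], [0, 1, 2, 1, 2, 2]]
```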
- - + + diff --git a/dev/reference/torch_true_divide.html b/dev/reference/torch_true_divide.html index 4afb4420be758f5278b8715077bdecdd50efc63d..cc035395eed2734a4f068254fc702db4ad38e509 100644 --- a/dev/reference/torch_true_divide.html +++ b/dev/reference/torch_true_divide.html @@ -1,79 +1,18 @@ - - - - - - - -TRUE_divide — torch_true_divide • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -TRUE_divide — torch_true_divide • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,72 +111,67 @@

TRUE_divide

-
torch_true_divide(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the dividend

other

(Tensor or Scalar) the divisor

- -

true_divide(dividend, divisor) -> Tensor

+
+
torch_true_divide(self, other)
+
+
+

Arguments

+
self
+

(Tensor) the dividend

+
other
+

(Tensor or Scalar) the divisor

+
+
+

true_divide(dividend, divisor) -> Tensor

Performs "true division" that always computes the division in floating point. Analogous to division in Python 3 and equivalent to -torch_div except when both inputs have bool or integer scalar types, +torch_div except when both inputs have bool or integer scalar types, in which case they are cast to the default (floating) scalar type before the division.

$$ \mbox{out}_i = \frac{\mbox{dividend}_i}{\mbox{divisor}} $$

+
-

Examples

-
if (torch_is_installed()) {
-
-dividend = torch_tensor(c(5, 3), dtype=torch_int())
-divisor = torch_tensor(c(3, 2), dtype=torch_int())
-torch_true_divide(dividend, divisor)
-torch_true_divide(dividend, 2)
-}
-#> torch_tensor
-#>  2.5000
-#>  1.5000
-#> [ CPUFloatType{2} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+dividend = torch_tensor(c(5, 3), dtype=torch_int())
+divisor = torch_tensor(c(3, 2), dtype=torch_int())
+torch_true_divide(dividend, divisor)
+torch_true_divide(dividend, 2)
+}
+#> torch_tensor
+#>  2.5000
+#>  1.5000
+#> [ CPUFloatType{2} ]
+
+
+
-
- +
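The "true division" behavior described above (integer inputs are promoted to floating point before dividing) is the same rule Python 3's `/` follows. A small sketch on plain lists, with `true_divide` a hypothetical helper:

```python
def true_divide(dividend, divisor):
    # Promote both operands to float before dividing, so integer
    # inputs never silently truncate.
    return [float(a) / float(b) for a, b in zip(dividend, divisor)]

print(true_divide([5, 3], [3, 2]))  # 5/3 and 3/2 as floats
print(5 // 3, 5 / 3)                # floor division vs true division
```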
- - + + diff --git a/dev/reference/torch_trunc.html b/dev/reference/torch_trunc.html index faeda7546bd5a160dc9fd0e5eee7cb4a78983ca5..3100a769f668f9fb8f7e9d8f02188af52d05b6d8 100644 --- a/dev/reference/torch_trunc.html +++ b/dev/reference/torch_trunc.html @@ -1,79 +1,18 @@ - - - - - - - -Trunc — torch_trunc • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Trunc — torch_trunc • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_trunc(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

trunc(input, out=NULL) -> Tensor

+
+
torch_trunc(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

trunc(input, out=NULL) -> Tensor

Returns a new tensor with the truncated integer values of the elements of input.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(4))
-a
-torch_trunc(a)
-}
-#> torch_tensor
-#> -0
-#> -0
-#>  0
-#> -1
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(4))
+a
+torch_trunc(a)
+}
+#> torch_tensor
+#> -0
+#>  1
+#> -0
+#> -1
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
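Truncation toward zero, as torch_trunc performs elementwise, is exactly what Python's `math.trunc` does. A minimal sketch with a hypothetical `trunc` helper:

```python
import math

def trunc(xs):
    # Drop the fractional part of each element, rounding toward zero
    # (so -2.9 becomes -2, not -3).
    return [math.trunc(x) for x in xs]

print(trunc([3.7, -2.9, 0.5, -0.2]))  # [3, -2, 0, 0]
```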
- - + + diff --git a/dev/reference/torch_unbind.html b/dev/reference/torch_unbind.html index 1c7223640526ac397a17c53db3842e61d797be8f..147906f6695369d40a3b7fb686fd5f157a9dbec1 100644 --- a/dev/reference/torch_unbind.html +++ b/dev/reference/torch_unbind.html @@ -1,79 +1,18 @@ - - - - - - - -Unbind — torch_unbind • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unbind — torch_unbind • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_unbind(self, dim = 1L)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the tensor to unbind

dim

(int) dimension to remove

- -

unbind(input, dim=0) -> seq

+
+
torch_unbind(self, dim = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to unbind

+
dim
+

(int) dimension to remove

+
+
+

unbind(input, dim=0) -> seq

Removes a tensor dimension.

Returns a tuple of all slices along a given dimension, already without it.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_unbind(torch_tensor(matrix(1:9, ncol = 3, byrow=TRUE)))
-}
-#> [[1]]
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#> [ CPULongType{3} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  4
-#>  5
-#>  6
-#> [ CPULongType{3} ]
-#> 
-#> [[3]]
-#> torch_tensor
-#>  7
-#>  8
-#>  9
-#> [ CPULongType{3} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_unbind(torch_tensor(matrix(1:9, ncol = 3, byrow=TRUE)))
+}
+#> [[1]]
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#> [ CPULongType{3} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#>  4
+#>  5
+#>  6
+#> [ CPULongType{3} ]
+#> 
+#> [[3]]
+#> torch_tensor
+#>  7
+#>  8
+#>  9
+#> [ CPULongType{3} ]
+#> 
+
+
+
-
- +
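The unbind operation (all slices along a dimension, with that dimension removed) can be sketched on a 2-D nested list. This is an illustrative helper, not the torch implementation; note the R wrapper's default `dim = 1L` refers to the first dimension, whereas this sketch uses 0-based `dim`:

```python
def unbind(m, dim=0):
    # Return every slice of a 2-D nested list along `dim`, dropping
    # that dimension. dim=0 yields rows, dim=1 yields columns.
    if dim == 0:
        return [row[:] for row in m]
    return [list(col) for col in zip(*m)]

m = [[1, 2, 3],
     [4, 5, 6]]
print(unbind(m))         # two row slices
print(unbind(m, dim=1))  # three column slices
```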
- - + + diff --git a/dev/reference/torch_unique_consecutive.html b/dev/reference/torch_unique_consecutive.html index 8d76b08807533f6d76cb3ada422d567b2fb014a0..bacc3f1fd184fff254e5d0c5880ea5d78620fc6d 100644 --- a/dev/reference/torch_unique_consecutive.html +++ b/dev/reference/torch_unique_consecutive.html @@ -1,79 +1,18 @@ - - - - - - - -Unique_consecutive — torch_unique_consecutive • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unique_consecutive — torch_unique_consecutive • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,101 +111,92 @@

Unique_consecutive

-
torch_unique_consecutive(
-  self,
-  return_inverse = FALSE,
-  return_counts = FALSE,
-  dim = NULL
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor

return_inverse

(bool) Whether to also return the indices for where elements in the original input ended up in the returned unique list.

return_counts

(bool) Whether to also return the counts for each unique element.

dim

(int) the dimension to apply unique. If NULL, the unique of the flattened input is returned. default: NULL

- -

TEST

+
+
torch_unique_consecutive(
+  self,
+  return_inverse = FALSE,
+  return_counts = FALSE,
+  dim = NULL
+)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor

+
return_inverse
+

(bool) Whether to also return the indices for where elements in the original input ended up in the returned unique list.

+
return_counts
+

(bool) Whether to also return the counts for each unique element.

+
dim
+

(int) the dimension to apply unique. If NULL, the unique of the flattened input is returned. default: NULL

+
+
+

TEST

-

Eliminates all but the first element from every consecutive group of equivalent elements.

.. note:: This function is different from [`torch_unique`] in the sense that this function
+

Eliminates all but the first element from every consecutive group of equivalent elements.

.. note:: This function is different from [`torch_unique`] in the sense that this function
    only eliminates consecutive duplicate values. These semantics are similar to `std::unique`
     in C++.
-
+
+
-

Examples

-
if (torch_is_installed()) {
-x = torch_tensor(c(1, 1, 2, 2, 3, 1, 1, 2))
-output = torch_unique_consecutive(x)
-output
-torch_unique_consecutive(x, return_inverse=TRUE)
-torch_unique_consecutive(x, return_counts=TRUE)
-}
-#> [[1]]
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#>  1
-#>  2
-#> [ CPUFloatType{5} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#> [ CPULongType{0} ]
-#> 
-#> [[3]]
-#> torch_tensor
-#>  2
-#>  2
-#>  1
-#>  2
-#>  1
-#> [ CPULongType{5} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+x = torch_tensor(c(1, 1, 2, 2, 3, 1, 1, 2))
+output = torch_unique_consecutive(x)
+output
+torch_unique_consecutive(x, return_inverse=TRUE)
+torch_unique_consecutive(x, return_counts=TRUE)
+}
+#> [[1]]
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#>  1
+#>  2
+#> [ CPUFloatType{5} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> [ CPULongType{0} ]
+#> 
+#> [[3]]
+#> torch_tensor
+#>  2
+#>  2
+#>  1
+#>  2
+#>  1
+#> [ CPULongType{5} ]
+#> 
+
+
+
-
- +
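The run-collapsing behavior described above (only *consecutive* duplicates are eliminated, as with `std::unique`) maps directly onto Python's `itertools.groupby`. A hedged sketch with a hypothetical helper name, reproducing the documented example:

```python
from itertools import groupby

def unique_consecutive(xs, return_counts=False):
    # Collapse each run of consecutive equal values to one element;
    # non-adjacent duplicates are kept, unlike a full unique().
    values, counts = [], []
    for v, run in groupby(xs):
        values.append(v)
        counts.append(sum(1 for _ in run))
    return (values, counts) if return_counts else values

x = [1, 1, 2, 2, 3, 1, 1, 2]
print(unique_consecutive(x))                      # [1, 2, 3, 1, 2]
print(unique_consecutive(x, return_counts=True))  # counts [2, 2, 1, 2, 1]
```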
- - + + diff --git a/dev/reference/torch_unsafe_chunk.html b/dev/reference/torch_unsafe_chunk.html index bdbb8b9ad3cc3d7e5e9c64fc5c89a895922ea7dc..055375a574c97eb4101d89f6927c25664b4923f7 100644 --- a/dev/reference/torch_unsafe_chunk.html +++ b/dev/reference/torch_unsafe_chunk.html @@ -1,79 +1,18 @@ - - - - - - - -Unsafe_chunk — torch_unsafe_chunk • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unsafe_chunk — torch_unsafe_chunk • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,34 +111,29 @@

Unsafe_chunk

-
torch_unsafe_chunk(self, chunks, dim = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) the tensor to split

chunks

(int) number of chunks to return

dim

(int) dimension along which to split the tensor

- -

unsafe_chunk(input, chunks, dim=0) -> List of Tensors

+
+
torch_unsafe_chunk(self, chunks, dim = 1L)
+
+
+

Arguments

+
self
+

(Tensor) the tensor to split

+
chunks
+

(int) number of chunks to return

+
dim
+

(int) dimension along which to split the tensor

+
+
+

unsafe_chunk(input, chunks, dim=0) -> List of Tensors

-

Works like torch_chunk() but without enforcing the autograd restrictions +

Works like torch_chunk() but without enforcing the autograd restrictions on inplace modification of the outputs.

-

Warning

- +
+
+

Warning

This function is safe to use as long as only the input, or only the outputs @@ -224,32 +141,29 @@ are modified inplace after calling this function. It is the user's responsibility to ensure that is the case. If both the input and one or more of the outputs are modified inplace, gradients computed by autograd will be silently incorrect.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_unsafe_split.html b/dev/reference/torch_unsafe_split.html index b98bf0422fec0c5fe65ab3b9bddf8b763701f1bc..eaa42746023eacd78e8a4b88b2b880311cc1710c 100644 --- a/dev/reference/torch_unsafe_split.html +++ b/dev/reference/torch_unsafe_split.html @@ -1,79 +1,18 @@ - - - - - - - -Unsafe_split — torch_unsafe_split • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unsafe_split — torch_unsafe_split • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,35 +111,30 @@

Unsafe_split

-
torch_unsafe_split(self, split_size, dim = 1L)
- -

Arguments

- - - - - - - - - - - - - - -
self

(Tensor) tensor to split.

split_size

(int) size of a single chunk or -list of sizes for each chunk

dim

(int) dimension along which to split the tensor.

- -

unsafe_split(tensor, split_size_or_sections, dim=0) -> List of Tensors

+
+
torch_unsafe_split(self, split_size, dim = 1L)
+
+
+

Arguments

+
self
+

(Tensor) tensor to split.

+
split_size
+

(int) size of a single chunk or +list of sizes for each chunk

+
dim
+

(int) dimension along which to split the tensor.

+
+
+

unsafe_split(tensor, split_size_or_sections, dim=0) -> List of Tensors

-

Works like torch_split() but without enforcing the autograd restrictions +

Works like torch_split() but without enforcing the autograd restrictions on inplace modification of the outputs.

-

Warning

- +
+
+

Warning

This function is safe to use as long as only the input, or only the outputs @@ -225,32 +142,29 @@ are modified inplace after calling this function. It is the user's responsibility to ensure that is the case. If both the input and one or more of the outputs are modified inplace, gradients computed by autograd will be silently incorrect.

+
+
-
- +
- - + + diff --git a/dev/reference/torch_unsqueeze.html b/dev/reference/torch_unsqueeze.html index b7627039c94faf5159e8b0dd2662d839a01e779b..c5da35ccccdb9af8f12be86ee3a5aad52637da8c 100644 --- a/dev/reference/torch_unsqueeze.html +++ b/dev/reference/torch_unsqueeze.html @@ -1,79 +1,18 @@ - - - - - - - -Unsqueeze — torch_unsqueeze • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Unsqueeze — torch_unsqueeze • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,23 +111,19 @@

Unsqueeze

-
torch_unsqueeze(self, dim)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int) the index at which to insert the singleton dimension

- -

unsqueeze(input, dim) -> Tensor

+
+
torch_unsqueeze(self, dim)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int) the index at which to insert the singleton dimension

+
+
+

unsqueeze(input, dim) -> Tensor

@@ -215,46 +133,45 @@ specified position.

A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze applied at dim = dim + input.dim() + 1.

+
-

Examples

-
if (torch_is_installed()) {
-
-x = torch_tensor(c(1, 2, 3, 4))
-torch_unsqueeze(x, 1)
-torch_unsqueeze(x, 2)
-}
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#>  4
-#> [ CPUFloatType{4,1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x = torch_tensor(c(1, 2, 3, 4))
+torch_unsqueeze(x, 1)
+torch_unsqueeze(x, 2)
+}
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#>  4
+#> [ CPUFloatType{4,1} ]
+
+
+
-
- +
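Inserting a singleton dimension, as unsqueeze does, can be sketched for the 1-D case on a plain list. The helper is hypothetical; `dim` is 1-based here to match the R interface shown in the example (`torch_unsqueeze(x, 1)` and `torch_unsqueeze(x, 2)`):

```python
def unsqueeze(vec, dim):
    # Insert a length-1 dimension into a 1-D list.
    # dim = 1 yields shape (1, n); dim = 2 yields shape (n, 1).
    if dim == 1:
        return [list(vec)]
    return [[x] for x in vec]

x = [1, 2, 3, 4]
print(unsqueeze(x, 1))  # [[1, 2, 3, 4]]        -- shape (1, 4)
print(unsqueeze(x, 2))  # [[1], [2], [3], [4]]  -- shape (4, 1)
```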
- - + + diff --git a/dev/reference/torch_vander.html b/dev/reference/torch_vander.html index 0f42c8144b17066d1861d3a7a018d85b3a6b9118..e09c92477e46943e27b9c156784f1973b8a95133 100644 --- a/dev/reference/torch_vander.html +++ b/dev/reference/torch_vander.html @@ -1,79 +1,18 @@ - - - - - - - -Vander — torch_vander • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Vander — torch_vander • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,29 +111,23 @@

Vander

-
torch_vander(x, N = NULL, increasing = FALSE)
- -

Arguments

- - - - - - - - - - - - - - -
x

(Tensor) 1-D input tensor.

N

(int, optional) Number of columns in the output. If N is not specified, -a square array is returned \((N = len(x))\).

increasing

(bool, optional) Order of the powers of the columns. If TRUE, -the powers increase from left to right, if FALSE (the default) they are reversed.

- -

vander(x, N=None, increasing=FALSE) -> Tensor

+
+
torch_vander(x, N = NULL, increasing = FALSE)
+
+
+

Arguments

+
x
+

(Tensor) 1-D input tensor.

+
N
+

(int, optional) Number of columns in the output. If N is not specified, +a square array is returned \((N = len(x))\).

+
increasing
+

(bool, optional) Order of the powers of the columns. If TRUE, +the powers increase from left to right, if FALSE (the default) they are reversed.

+
+
+

vander(x, N=None, increasing=FALSE) -> Tensor

@@ -222,47 +138,46 @@ If increasing is TRUE, the order of the columns is reversed \(x^0, x^1, ..., x^{(N-1)}\). Such a matrix with a geometric progression in each row is named for Alexandre-Theophile Vandermonde.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_tensor(c(1, 2, 3, 5))
-torch_vander(x)
-torch_vander(x, N=3)
-torch_vander(x, N=3, increasing=TRUE)
-}
-#> torch_tensor
-#>   1   1   1
-#>   1   2   4
-#>   1   3   9
-#>   1   5  25
-#> [ CPUFloatType{4,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_tensor(c(1, 2, 3, 5))
+torch_vander(x)
+torch_vander(x, N=3)
+torch_vander(x, N=3, increasing=TRUE)
+}
+#> torch_tensor
+#>   1   1   1
+#>   1   2   4
+#>   1   3   9
+#>   1   5  25
+#> [ CPUFloatType{4,3} ]
+
+
+
-
- +
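The Vandermonde construction described above (row i holds the powers of x[i], decreasing by default) is easy to state directly. A sketch with a hypothetical `vander` helper, reproducing the `N=3, increasing=TRUE` example output:

```python
def vander(x, N=None, increasing=False):
    # Row i holds powers of x[i]: x^(N-1) .. x^0 by default,
    # x^0 .. x^(N-1) when increasing=True. N defaults to len(x).
    if N is None:
        N = len(x)
    powers = range(N) if increasing else range(N - 1, -1, -1)
    return [[xi ** p for p in powers] for xi in x]

print(vander([1, 2, 3, 5], N=3, increasing=True))
# [[1, 1, 1], [1, 2, 4], [1, 3, 9], [1, 5, 25]]
```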
- - + + diff --git a/dev/reference/torch_var.html b/dev/reference/torch_var.html index cef75d6f0253d62d2ca737860bd45a5a88cb2180..d2b8605eb5afbcacdc823a82aa0356a17b3dbbe8 100644 --- a/dev/reference/torch_var.html +++ b/dev/reference/torch_var.html @@ -1,79 +1,18 @@ - - - - - - - -Var — torch_var • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Var — torch_var • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_var(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

correction

The type of correction.

unbiased

(bool) whether to use the unbiased estimation or not

keepdim

(bool) whether the output tensor has dim retained or not.

- -

var(input, unbiased=TRUE) -> Tensor

+
+
torch_var(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
correction
+

The type of correction.

+
unbiased
+

(bool) whether to use the unbiased estimation or not

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

var(input, unbiased=TRUE) -> Tensor

Returns the variance of all elements in the input tensor.

If unbiased is FALSE, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

-

var(input, dim, keepdim=False, unbiased=TRUE, out=NULL) -> Tensor

- +
+
+

var(input, dim, keepdim=False, unbiased=TRUE, out=NULL) -> Tensor

@@ -233,55 +146,54 @@ biased estimator. Otherwise, Bessel's correction will be used.

dimension dim.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is FALSE, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_var(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_var(a, 1)
-}
-#> torch_tensor
-#>  0.2976
-#>  0.9193
-#>  0.2206
-#>  0.5509
-#> [ CPUFloatType{4} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_var(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_var(a, 1)
+}
+#> torch_tensor
+#>  2.2763
+#>  1.1094
+#>  0.1851
+#>  0.1529
+#> [ CPUFloatType{4} ]
+
+
+
-
- +
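The distinction the docs draw between the biased estimator and Bessel's correction is just the choice of denominator. A minimal pure-Python sketch over a flat list, with `var` a hypothetical helper:

```python
def var(xs, unbiased=True):
    # Sample variance: divide the sum of squared deviations by n - 1
    # (Bessel's correction) when unbiased, or by n for the biased estimator.
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    return ss / (n - 1) if unbiased else ss / n

print(var([1.0, 2.0, 3.0]))                  # 1.0
print(var([1.0, 2.0, 3.0], unbiased=False))  # 2/3
```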
- - + + diff --git a/dev/reference/torch_var_mean.html b/dev/reference/torch_var_mean.html index 56df8f585fe355571fa7e50238be9162a3c41e7c..bb31c3590824493254b353396d0ad933c3c1ee87 100644 --- a/dev/reference/torch_var_mean.html +++ b/dev/reference/torch_var_mean.html @@ -1,79 +1,18 @@ - - - - - - - -Var_mean — torch_var_mean • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Var_mean — torch_var_mean • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,43 +111,34 @@

Var_mean

-
torch_var_mean(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - -
self

(Tensor) the input tensor.

dim

(int or tuple of ints) the dimension or dimensions to reduce.

correction

The type of correction.

unbiased

(bool) whether to use the unbiased estimation or not

keepdim

(bool) whether the output tensor has dim retained or not.

- -

var_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

+
+
torch_var_mean(self, dim, correction, unbiased = TRUE, keepdim = FALSE)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
dim
+

(int or tuple of ints) the dimension or dimensions to reduce.

+
correction
+

The type of correction.

+
unbiased
+

(bool) whether to use the unbiased estimation or not

+
keepdim
+

(bool) whether the output tensor has dim retained or not.

+
+
+

var_mean(input, unbiased=TRUE) -> (Tensor, Tensor)

Returns the variance and mean of all elements in the input tensor.

If unbiased is FALSE, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

-

var_mean(input, dim, keepdim=False, unbiased=TRUE) -> (Tensor, Tensor)

- +
+
+

var_mean(input, dim, keepdim=False, unbiased=TRUE) -> (Tensor, Tensor)

@@ -233,65 +146,64 @@ biased estimator. Otherwise, Bessel's correction will be used.

dimension dim.

If keepdim is TRUE, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. -Otherwise, dim is squeezed (see torch_squeeze), resulting in the +Otherwise, dim is squeezed (see torch_squeeze), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).

If unbiased is FALSE, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.

+
-

Examples

-
if (torch_is_installed()) {
-
-a = torch_randn(c(1, 3))
-a
-torch_var_mean(a)
-
-
-a = torch_randn(c(4, 4))
-a
-torch_var_mean(a, 1)
-}
-#> [[1]]
-#> torch_tensor
-#>  0.2998
-#>  1.6334
-#>  1.4659
-#>  0.2498
-#> [ CPUFloatType{4} ]
-#> 
-#> [[2]]
-#> torch_tensor
-#>  0.2094
-#> -0.2655
-#> -0.4453
-#> -0.0898
-#> [ CPUFloatType{4} ]
-#> 
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a = torch_randn(c(1, 3))
+a
+torch_var_mean(a)
+
+
+a = torch_randn(c(4, 4))
+a
+torch_var_mean(a, 1)
+}
+#> [[1]]
+#> torch_tensor
+#>  0.2556
+#>  0.5874
+#>  0.5766
+#>  1.2497
+#> [ CPUFloatType{4} ]
+#> 
+#> [[2]]
+#> torch_tensor
+#> -0.3067
+#>  0.8035
+#> -0.3865
+#>  0.7109
+#> [ CPUFloatType{4} ]
+#> 
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_vdot.html b/dev/reference/torch_vdot.html index 2c3a1df65cf06ac4e26bb09bcb0be0a64fd5a61d..80c7acde68f713f26f960155f6e13d9301b94109 100644 --- a/dev/reference/torch_vdot.html +++ b/dev/reference/torch_vdot.html @@ -1,79 +1,18 @@ - - - - - - - -Vdot — torch_vdot • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Vdot — torch_vdot • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_vdot(self, other)
- -

Arguments

- - - - - - - - - - -
self

(Tensor) first tensor in the dot product. Its conjugate is used -if it's complex.

other

(Tensor) second tensor in the dot product.

- -

Note

+
+
torch_vdot(self, other)
+
+
+

Arguments

+
self
+

(Tensor) first tensor in the dot product. Its conjugate is used +if it's complex.

+
other
+

(Tensor) second tensor in the dot product.

+
+
+

Note

This function does not broadcast .

-

vdot(input, other, *, out=None) -> Tensor

- +
+
+

vdot(input, other, *, out=None) -> Tensor

Computes the dot product (inner product) of two tensors. The vdot(a, b) function handles complex numbers differently than dot(a, b). If the first argument is complex, the complex conjugate of the first argument is used for the calculation of the dot product.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_vdot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1)))
-if (FALSE) {
-a <- torch_tensor(list(1 +2i, 3 - 1i))
-b <- torch_tensor(list(2 +1i, 4 - 0i))
-torch_vdot(a, b)
-torch_vdot(b, a)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_vdot(torch_tensor(c(2, 3)), torch_tensor(c(2, 1)))
+if (FALSE) {
+a <- torch_tensor(list(1 +2i, 3 - 1i))
+b <- torch_tensor(list(2 +1i, 4 - 0i))
+torch_vdot(a, b)
+torch_vdot(b, a)
+}
+}
+
+
+
-
- +
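The asymmetry the docs describe between `vdot(a, b)` and `vdot(b, a)` comes entirely from conjugating the first argument. A sketch on plain Python numbers (the helper is hypothetical):

```python
def vdot(a, b):
    # Dot product that conjugates the first argument, so the result
    # for complex inputs depends on argument order.
    return sum(complex(x).conjugate() * y for x, y in zip(a, b))

print(vdot([2, 3], [2, 1]))                      # (7+0j)
print(vdot([1 + 2j, 3 - 1j], [2 + 1j, 4 + 0j]))  # (16+1j)
print(vdot([2 + 1j, 4 + 0j], [1 + 2j, 3 - 1j]))  # (16-1j), the conjugate
```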
- - + + diff --git a/dev/reference/torch_view_as_complex.html b/dev/reference/torch_view_as_complex.html index 174c7b127fcc908d82f341fac774817079901d67..08ecc054b61a745bf25699c69eb49ba28247d632 100644 --- a/dev/reference/torch_view_as_complex.html +++ b/dev/reference/torch_view_as_complex.html @@ -1,79 +1,18 @@ - - - - - - - -View_as_complex — torch_view_as_complex • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -View_as_complex — torch_view_as_complex • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

View_as_complex

-
torch_view_as_complex(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

view_as_complex(input) -> Tensor

+
+
torch_view_as_complex(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

view_as_complex(input) -> Tensor

@@ -210,50 +130,50 @@ tensor of size \(m1, m2, \dots, mi, 2\), this function returns a new complex tensor of size \(m1, m2, \dots, mi\) where the last dimension of the input tensor is expected to represent the real and imaginary components of complex numbers.

-

Warning

- +
+
+

Warning

torch_view_as_complex is only supported for tensors with -torch_dtype torch_float64() and torch_float32(). The input is +torch_dtype torch_float64() and torch_float32(). The input is expected to have the last dimension of size 2. In addition, the tensor must have a stride of 1 for its last dimension. The strides of all other dimensions must be even numbers.

+
-

Examples

-
if (torch_is_installed()) {
-if (FALSE) {
-x=torch_randn(c(4, 2))
-x
-torch_view_as_complex(x)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+if (FALSE) {
+x <- torch_randn(c(4, 2))
+x
+torch_view_as_complex(x)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_view_as_real.html b/dev/reference/torch_view_as_real.html index a7ab1cabb1f6afc89a08180216b5e5790b7ecd22..35cf8efd1ec81c3732f6ac57cdca046649390ef3 100644 --- a/dev/reference/torch_view_as_real.html +++ b/dev/reference/torch_view_as_real.html @@ -1,79 +1,18 @@ - - - - - - - -View_as_real — torch_view_as_real • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -View_as_real — torch_view_as_real • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,19 +111,17 @@

View_as_real

-
torch_view_as_real(self)
- -

Arguments

- - - - - - -
self

(Tensor) the input tensor.

- -

view_as_real(input) -> Tensor

+
+
torch_view_as_real(self)
+
+
+

Arguments

+
self
+

(Tensor) the input tensor.

+
+
+

view_as_real(input) -> Tensor

@@ -209,47 +129,47 @@ size \(m1, m2, \dots, mi\), this function returns a new real tensor of size \(m1, m2, \dots, mi, 2\), where the last dimension of size 2 represents the real and imaginary components of complex numbers.

-

Warning

- +
+
+

Warning

torch_view_as_real() is only supported for tensors with complex dtypes.

+
-

Examples

-
if (torch_is_installed()) {
-
-if (FALSE) {
-x <- torch_randn(4, dtype=torch_cfloat())
-x
-torch_view_as_real(x)
-}
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (FALSE) {
+x <- torch_randn(4, dtype=torch_cfloat())
+x
+torch_view_as_real(x)
+}
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_vstack.html b/dev/reference/torch_vstack.html index 6f2df597c53f21bc51ac66f7a2c0d447ca1bb64b..b67bfe3f77b2015f0ccf5504b13499d151892cfe 100644 --- a/dev/reference/torch_vstack.html +++ b/dev/reference/torch_vstack.html @@ -1,79 +1,18 @@ - - - - - - - -Vstack — torch_vstack • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Vstack — torch_vstack • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_vstack(tensors)
- -

Arguments

- - - - - - -
tensors

(sequence of Tensors) sequence of tensors to concatenate

- -

vstack(tensors, *, out=None) -> Tensor

+
+
torch_vstack(tensors)
+
+
+

Arguments

+
tensors
+

(sequence of Tensors) sequence of tensors to concatenate

+
+
+

vstack(tensors, *, out=None) -> Tensor

Stack tensors in sequence vertically (row wise).

This is equivalent to concatenation along the first axis after all 1-D tensors -have been reshaped by torch_atleast_2d().

+have been reshaped by torch_atleast_2d().

+
-

Examples

-
if (torch_is_installed()) {
-
-a <- torch_tensor(c(1, 2, 3))
-b <- torch_tensor(c(4, 5, 6))
-torch_vstack(list(a,b))
-a <- torch_tensor(rbind(1,2,3))
-b <- torch_tensor(rbind(4,5,6))
-torch_vstack(list(a,b))
-}
-#> torch_tensor
-#>  1
-#>  2
-#>  3
-#>  4
-#>  5
-#>  6
-#> [ CPUFloatType{6,1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+a <- torch_tensor(c(1, 2, 3))
+b <- torch_tensor(c(4, 5, 6))
+torch_vstack(list(a,b))
+a <- torch_tensor(rbind(1,2,3))
+b <- torch_tensor(rbind(4,5,6))
+torch_vstack(list(a,b))
+}
+#> torch_tensor
+#>  1
+#>  2
+#>  3
+#>  4
+#>  5
+#>  6
+#> [ CPUFloatType{6,1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_where.html b/dev/reference/torch_where.html index 8a76db75c80b4260c07309bee12d6b09cc28dfcc..a52c2aaaeb9f8a33d095a0995c1b38b6396f89b1 100644 --- a/dev/reference/torch_where.html +++ b/dev/reference/torch_where.html @@ -1,79 +1,18 @@ - - - - - - - -Where — torch_where • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Where — torch_where • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_where(condition, self, other)
- -

Arguments

- - - - - - - - - - - - - - -
condition

(BoolTensor) When TRUE (nonzero), yield x, otherwise yield y

self

(Tensor) values selected at indices where condition is TRUE

other

(Tensor) values selected at indices where condition is FALSE

- -

Note

+
+
torch_where(condition, self, other)
+
+
+

Arguments

+
condition
+

(BoolTensor) When TRUE (nonzero), yield x, otherwise yield y

+
self
+

(Tensor) values selected at indices where condition is TRUE

+
other
+

(Tensor) values selected at indices where condition is FALSE

+
+
+

Note

-
The tensors `condition`, `x`, `y` must be broadcastable.
-
- -

See also torch_nonzero().

-

where(condition, x, y) -> Tensor

+
The tensors `condition`, `x`, `y` must be broadcastable.
+
+

See also torch_nonzero().

+
+
+

where(condition, x, y) -> Tensor

@@ -229,53 +146,53 @@ \end{array} \right. $$

-

where(condition) -> tuple of LongTensor

- +
+
+

where(condition) -> tuple of LongTensor

torch_where(condition) is identical to torch_nonzero(condition, as_tuple=TRUE).

+
-

Examples

-
if (torch_is_installed()) {
-
-if (FALSE) {
-x = torch_randn(c(3, 2))
-y = torch_ones(c(3, 2))
-x
-torch_where(x > 0, x, y)
-}
-
-
-
-}
-
+
+

Examples

+
if (torch_is_installed()) {
+
+if (FALSE) {
+x <- torch_randn(c(3, 2))
+y <- torch_ones(c(3, 2))
+x
+torch_where(x > 0, x, y)
+}
+
+
+
+}
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_zeros.html b/dev/reference/torch_zeros.html index cc0d30aa7a97f8b7686f039b06ac280b1bcc152a..8d92aa55739fa3e327beef790103fdc18ce889c7 100644 --- a/dev/reference/torch_zeros.html +++ b/dev/reference/torch_zeros.html @@ -1,79 +1,18 @@ - - - - - - - -Zeros — torch_zeros • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Zeros — torch_zeros • torch - - - - - - - - + + -
-
- -
- -
+
-
torch_zeros(
-  ...,
-  names = NULL,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
...

a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

names

optional dimension names

dtype

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

layout

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

- -

zeros(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

+
+
torch_zeros(
+  ...,
+  names = NULL,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE
+)
+
+
+

Arguments

+
...
+

a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

+
names
+

optional dimension names

+
dtype
+

(torch.dtype, optional) the desired data type of returned tensor. Default: if NULL, uses a global default (see torch_set_default_tensor_type).

+
layout
+

(torch.layout, optional) the desired layout of returned Tensor. Default: torch_strided.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, uses the current device for the default tensor type (see torch_set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
+
+

zeros(*size, out=NULL, dtype=NULL, layout=torch.strided, device=NULL, requires_grad=False) -> Tensor

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

+
-

Examples

-
if (torch_is_installed()) {
-
-torch_zeros(c(2, 3))
-torch_zeros(c(5))
-}
-#> torch_tensor
-#>  0
-#>  0
-#>  0
-#>  0
-#>  0
-#> [ CPUFloatType{5} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+torch_zeros(c(2, 3))
+torch_zeros(c(5))
+}
+#> torch_tensor
+#>  0
+#>  0
+#>  0
+#>  0
+#>  0
+#> [ CPUFloatType{5} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/torch_zeros_like.html b/dev/reference/torch_zeros_like.html index c43e3111ec12f0e70aaf48ceff88d2a1dadce829..a6c1f27bdaf364299a6147943cc6c6cf98dd61b1 100644 --- a/dev/reference/torch_zeros_like.html +++ b/dev/reference/torch_zeros_like.html @@ -1,79 +1,18 @@ - - - - - - - -Zeros_like — torch_zeros_like • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Zeros_like — torch_zeros_like • torch - - - - - - - - + + -
-
- -
- -
+
@@ -189,96 +111,84 @@

Zeros_like

-
torch_zeros_like(
-  input,
-  dtype = NULL,
-  layout = torch_strided(),
-  device = NULL,
-  requires_grad = FALSE,
-  memory_format = torch_preserve_format()
-)
- -

Arguments

- - - - - - - - - - - - - - - - - - - - - - - - - - -
input

(Tensor) the size of input will determine size of the output tensor.

dtype

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

layout

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

device

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

requires_grad

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

memory_format

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

- -

zeros_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

+
+
torch_zeros_like(
+  input,
+  dtype = NULL,
+  layout = torch_strided(),
+  device = NULL,
+  requires_grad = FALSE,
+  memory_format = torch_preserve_format()
+)
+
+
+

Arguments

+
input
+

(Tensor) the size of input will determine size of the output tensor.

+
dtype
+

(torch.dtype, optional) the desired data type of returned Tensor. Default: if NULL, defaults to the dtype of input.

+
layout
+

(torch.layout, optional) the desired layout of returned tensor. Default: if NULL, defaults to the layout of input.

+
device
+

(torch.device, optional) the desired device of returned tensor. Default: if NULL, defaults to the device of input.

+
requires_grad
+

(bool, optional) If autograd should record operations on the returned tensor. Default: FALSE.

+
memory_format
+

(torch.memory_format, optional) the desired memory format of returned Tensor. Default: torch_preserve_format.

+
+
+

zeros_like(input, dtype=NULL, layout=NULL, device=NULL, requires_grad=False, memory_format=torch.preserve_format) -> Tensor

Returns a tensor filled with the scalar value 0, with the same size as input. torch_zeros_like(input) is equivalent to torch_zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).

-

Warning

- +
+
+

Warning

As of 0.4, this function does not support an out keyword. As an alternative, the old torch_zeros_like(input, out=output) is equivalent to torch_zeros(input.size(), out=output).

+
-

Examples

-
if (torch_is_installed()) {
-
-input = torch_empty(c(2, 3))
-torch_zeros_like(input)
-}
-#> torch_tensor
-#>  0  0  0
-#>  0  0  0
-#> [ CPUFloatType{2,3} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+input <- torch_empty(c(2, 3))
+torch_zeros_like(input)
+}
+#> torch_tensor
+#>  0  0  0
+#>  0  0  0
+#> [ CPUFloatType{2,3} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/with_detect_anomaly.html b/dev/reference/with_detect_anomaly.html index 405c1d3a57834781527090e7170a384b8500aa45..c41754ee47937dc0165db5866e0be66ec5ac1ad8 100644 --- a/dev/reference/with_detect_anomaly.html +++ b/dev/reference/with_detect_anomaly.html @@ -1,85 +1,24 @@ - - - - - - - -Context-manager that enable anomaly detection for the autograd engine. — with_detect_anomaly • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Context-manager that enable anomaly detection for the autograd engine. — with_detect_anomaly • torch - - - - - - - - + + -
-
- -
- -
+
-

This does two things:

    -
  • Running the forward pass with detection enabled will allow the backward +

    This does two things:

    • Running the forward pass with detection enabled will allow the backward pass to print the traceback of the forward operation that created the failing backward function.

    • Any backward computation that generate "nan" value will raise an error.

    • -
    +
+
+
with_detect_anomaly(code)
-
with_detect_anomaly(code)
- -

Arguments

- - - - - - -
code

Code that will be executed in the detect anomaly context.

- -

Warning

- +
+

Arguments

+
code
+

Code that will be executed in the detect anomaly context.

+
+
+

Warning

This mode should be enabled only for debugging as the different tests will slow down your program execution.

+
-

Examples

-
if (torch_is_installed()) {
-x <- torch_randn(2, requires_grad = TRUE)
-y <- torch_randn(1)
-b <- (x^y)$sum()
-y$add_(1)
-
-try({
-
-b$backward()
-
-with_detect_anomaly({
-  b$backward()
-})
-
-})
-
-}
-#> Error in (function (self, inputs, gradient, retain_graph, create_graph)  : 
-#>   one of the variables needed for gradient computation has been modified by an inplace operation: [CPUFloatType [1]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
-
+
+

Examples

+
if (torch_is_installed()) {
+x <- torch_randn(2, requires_grad = TRUE)
+y <- torch_randn(1)
+b <- (x^y)$sum()
+y$add_(1)
+
+try({
+
+b$backward()
+
+with_detect_anomaly({
+  b$backward()
+})
+
+})
+
+}
+#> Error in (function (self, inputs, gradient, retain_graph, create_graph)  : 
+#>   one of the variables needed for gradient computation has been modified by an inplace operation: [CPUFloatType [1]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
+
+
+
-
- +
- - + + diff --git a/dev/reference/with_enable_grad.html b/dev/reference/with_enable_grad.html index fad2bbe42cb1fe41d7db2678d671e6b751c0ce3f..f40b6a8aec541cdb43f7afadb07304ff780204a1 100644 --- a/dev/reference/with_enable_grad.html +++ b/dev/reference/with_enable_grad.html @@ -1,80 +1,19 @@ - - - - - - - -Enable grad — with_enable_grad • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Enable grad — with_enable_grad • torch - - - - - - - - + + -
-
- -
- -
+

Context-manager that enables gradient calculation. -Enables gradient calculation, if it has been disabled via with_no_grad.

+Enables gradient calculation, if it has been disabled via with_no_grad.

-
with_enable_grad(code)
- -

Arguments

- - - - - - -
code

code to be executed with gradient recording.

- -

Details

+
+
with_enable_grad(code)
+
+
+

Arguments

+
code
+

code to be executed with gradient recording.

+
+
+

Details

This context manager is thread local; it will not affect computation in other threads.

+
-

Examples

-
if (torch_is_installed()) {
-
-x <- torch_tensor(1, requires_grad=TRUE)
-with_no_grad({
-  with_enable_grad({
-    y = x * 2
-  })
-})
-y$backward()
-x$grad
-
-}
-#> torch_tensor
-#>  2
-#> [ CPUFloatType{1} ]
-
+
+

Examples

+
if (torch_is_installed()) {
+
+x <- torch_tensor(1, requires_grad=TRUE)
+with_no_grad({
+  with_enable_grad({
+    y <- x * 2
+  })
+})
+y$backward()
+x$grad
+
+}
+#> torch_tensor
+#>  2
+#> [ CPUFloatType{1} ]
+
+
+
-
- +
- - + + diff --git a/dev/reference/with_no_grad.html b/dev/reference/with_no_grad.html index 398e2fcdef8fa0640f6e6eaf63dabba1d41b2a72..2c3890238944b9c848b18238ea1da736055aadfa 100644 --- a/dev/reference/with_no_grad.html +++ b/dev/reference/with_no_grad.html @@ -1,79 +1,18 @@ - - - - - - - -Temporarily modify gradient recording. — with_no_grad • torch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Temporarily modify gradient recording. — with_no_grad • torch - - - - - - + + - - -
-
- -
- -
+
@@ -189,56 +111,52 @@

Temporarily modify gradient recording.

-
with_no_grad(code)
- -

Arguments

- - - - - - -
code

code to be executed with no gradient recording.

- - -

Examples

-
if (torch_is_installed()) {
-x <- torch_tensor(runif(5), requires_grad = TRUE)
-with_no_grad({
-  x$sub_(torch_tensor(as.numeric(1:5)))
-})
-x
-x$grad
-
-}
-#> torch_tensor
-#> [ Tensor (undefined) ]
-
+
+
with_no_grad(code)
+
+ +
+

Arguments

+
code
+

code to be executed with no gradient recording.

+
+ +
+

Examples

+
if (torch_is_installed()) {
+x <- torch_tensor(runif(5), requires_grad = TRUE)
+with_no_grad({
+  x$sub_(torch_tensor(as.numeric(1:5)))
+})
+x
+x$grad
+
+}
+#> torch_tensor
+#> [ Tensor (undefined) ]
+
+
+
-
- +
- - + + diff --git a/dev/sitemap.xml b/dev/sitemap.xml new file mode 100644 index 0000000000000000000000000000000000000000..cda1543cb727d616eaf227c286ef439846224e86 --- /dev/null +++ b/dev/sitemap.xml @@ -0,0 +1,2019 @@ + + + + /LICENSE-text.html + + + /LICENSE.html + + + /articles/distributions.html + + + /articles/examples/basic-autograd.html + + + /articles/examples/basic-nn-module.html + + + /articles/examples/dataset.html + + + /articles/examples/index.html + + + /articles/extending-autograd.html + + + /articles/index.html + + + /articles/indexing.html + + + /articles/installation.html + + + /articles/loading-data.html + + + /articles/python-to-r.html + + + /articles/serialization.html + + + /articles/tensor/index.html + + + /articles/tensor-creation.html + + + /articles/torchscript.html + + + /articles/using-autograd.html + + + /authors.html + + + /index.html + + + /news/index.html + + + /reference/AutogradContext.html + + + /reference/Constraint.html + + + /reference/Distribution.html + + + /reference/as_array.html + + + /reference/autograd_backward.html + + + /reference/autograd_function.html + + + /reference/autograd_grad.html + + + /reference/autograd_set_grad_mode.html + + + /reference/backends_mkl_is_available.html + + + /reference/backends_mkldnn_is_available.html + + + /reference/backends_openmp_is_available.html + + + /reference/broadcast_all.html + + + /reference/call_torch_function.html + + + /reference/contrib_sort_vertices.html + + + /reference/cuda_current_device.html + + + /reference/cuda_device_count.html + + + /reference/cuda_get_device_capability.html + + + /reference/cuda_is_available.html + + + /reference/dataloader.html + + + /reference/dataloader_make_iter.html + + + /reference/dataloader_next.html + + + /reference/dataset.html + + + /reference/dataset_subset.html + + + /reference/default_dtype.html + + + /reference/distr_bernoulli.html + + + /reference/distr_categorical.html + + + /reference/distr_chi2.html + + + 
/reference/distr_gamma.html + + + /reference/distr_mixture_same_family.html + + + /reference/distr_multivariate_normal.html + + + /reference/distr_normal.html + + + /reference/distr_poisson.html + + + /reference/enumerate.dataloader.html + + + /reference/enumerate.html + + + /reference/get_install_libs_url.html + + + /reference/index.html + + + /reference/install_torch.html + + + /reference/install_torch_from_file.html + + + /reference/is_dataloader.html + + + /reference/is_nn_buffer.html + + + /reference/is_nn_module.html + + + /reference/is_nn_parameter.html + + + /reference/is_optimizer.html + + + /reference/is_torch_device.html + + + /reference/is_torch_dtype.html + + + /reference/is_torch_layout.html + + + /reference/is_torch_memory_format.html + + + /reference/is_torch_qscheme.html + + + /reference/is_undefined_tensor.html + + + /reference/jit_compile.html + + + /reference/jit_load.html + + + /reference/jit_save.html + + + /reference/jit_save_for_mobile.html + + + /reference/jit_scalar.html + + + /reference/jit_trace.html + + + /reference/jit_trace_module.html + + + /reference/jit_tuple.html + + + /reference/linalg_cholesky.html + + + /reference/linalg_cholesky_ex.html + + + /reference/linalg_cond.html + + + /reference/linalg_det.html + + + /reference/linalg_eig.html + + + /reference/linalg_eigh.html + + + /reference/linalg_eigvals.html + + + /reference/linalg_eigvalsh.html + + + /reference/linalg_householder_product.html + + + /reference/linalg_inv.html + + + /reference/linalg_inv_ex.html + + + /reference/linalg_lstsq.html + + + /reference/linalg_matrix_norm.html + + + /reference/linalg_matrix_power.html + + + /reference/linalg_matrix_rank.html + + + /reference/linalg_multi_dot.html + + + /reference/linalg_norm.html + + + /reference/linalg_pinv.html + + + /reference/linalg_qr.html + + + /reference/linalg_slogdet.html + + + /reference/linalg_solve.html + + + /reference/linalg_svd.html + + + /reference/linalg_svdvals.html + + + /reference/linalg_tensorinv.html 
+ + + /reference/linalg_tensorsolve.html + + + /reference/linalg_vector_norm.html + + + /reference/load_state_dict.html + + + /reference/lr_lambda.html + + + /reference/lr_multiplicative.html + + + /reference/lr_one_cycle.html + + + /reference/lr_scheduler.html + + + /reference/lr_step.html + + + /reference/nn_adaptive_avg_pool1d.html + + + /reference/nn_adaptive_avg_pool2d.html + + + /reference/nn_adaptive_avg_pool3d.html + + + /reference/nn_adaptive_log_softmax_with_loss.html + + + /reference/nn_adaptive_max_pool1d.html + + + /reference/nn_adaptive_max_pool2d.html + + + /reference/nn_adaptive_max_pool3d.html + + + /reference/nn_avg_pool1d.html + + + /reference/nn_avg_pool2d.html + + + /reference/nn_avg_pool3d.html + + + /reference/nn_batch_norm1d.html + + + /reference/nn_batch_norm2d.html + + + /reference/nn_batch_norm3d.html + + + /reference/nn_bce_loss.html + + + /reference/nn_bce_with_logits_loss.html + + + /reference/nn_bilinear.html + + + /reference/nn_buffer.html + + + /reference/nn_celu.html + + + /reference/nn_contrib_sparsemax.html + + + /reference/nn_conv1d.html + + + /reference/nn_conv2d.html + + + /reference/nn_conv3d.html + + + /reference/nn_conv_transpose1d.html + + + /reference/nn_conv_transpose2d.html + + + /reference/nn_conv_transpose3d.html + + + /reference/nn_cosine_embedding_loss.html + + + /reference/nn_cross_entropy_loss.html + + + /reference/nn_ctc_loss.html + + + /reference/nn_dropout.html + + + /reference/nn_dropout2d.html + + + /reference/nn_dropout3d.html + + + /reference/nn_elu.html + + + /reference/nn_embedding.html + + + /reference/nn_fractional_max_pool2d.html + + + /reference/nn_fractional_max_pool3d.html + + + /reference/nn_gelu.html + + + /reference/nn_glu.html + + + /reference/nn_group_norm.html + + + /reference/nn_gru.html + + + /reference/nn_hardshrink.html + + + /reference/nn_hardsigmoid.html + + + /reference/nn_hardswish.html + + + /reference/nn_hardtanh.html + + + /reference/nn_hinge_embedding_loss.html + + + 
/reference/nn_identity.html + + + /reference/nn_init_calculate_gain.html + + + /reference/nn_init_constant_.html + + + /reference/nn_init_dirac_.html + + + /reference/nn_init_eye_.html + + + /reference/nn_init_kaiming_normal_.html + + + /reference/nn_init_kaiming_uniform_.html + + + /reference/nn_init_normal_.html + + + /reference/nn_init_ones_.html + + + /reference/nn_init_orthogonal_.html + + + /reference/nn_init_sparse_.html + + + /reference/nn_init_trunc_normal_.html + + + /reference/nn_init_uniform_.html + + + /reference/nn_init_xavier_normal_.html + + + /reference/nn_init_xavier_uniform_.html + + + /reference/nn_init_zeros_.html + + + /reference/nn_kl_div_loss.html + + + /reference/nn_l1_loss.html + + + /reference/nn_layer_norm.html + + + /reference/nn_leaky_relu.html + + + /reference/nn_linear.html + + + /reference/nn_log_sigmoid.html + + + /reference/nn_log_softmax.html + + + /reference/nn_lp_pool1d.html + + + /reference/nn_lp_pool2d.html + + + /reference/nn_lstm.html + + + /reference/nn_margin_ranking_loss.html + + + /reference/nn_max_pool1d.html + + + /reference/nn_max_pool2d.html + + + /reference/nn_max_pool3d.html + + + /reference/nn_max_unpool1d.html + + + /reference/nn_max_unpool2d.html + + + /reference/nn_max_unpool3d.html + + + /reference/nn_module.html + + + /reference/nn_module_list.html + + + /reference/nn_mse_loss.html + + + /reference/nn_multi_margin_loss.html + + + /reference/nn_multihead_attention.html + + + /reference/nn_multilabel_margin_loss.html + + + /reference/nn_multilabel_soft_margin_loss.html + + + /reference/nn_nll_loss.html + + + /reference/nn_pairwise_distance.html + + + /reference/nn_parameter.html + + + /reference/nn_poisson_nll_loss.html + + + /reference/nn_prelu.html + + + /reference/nn_relu.html + + + /reference/nn_relu6.html + + + /reference/nn_rnn.html + + + /reference/nn_rrelu.html + + + /reference/nn_selu.html + + + /reference/nn_sequential.html + + + /reference/nn_sigmoid.html + + + /reference/nn_smooth_l1_loss.html + + 
+ /reference/nn_soft_margin_loss.html + + + /reference/nn_softmax.html + + + /reference/nn_softmax2d.html + + + /reference/nn_softmin.html + + + /reference/nn_softplus.html + + + /reference/nn_softshrink.html + + + /reference/nn_softsign.html + + + /reference/nn_tanh.html + + + /reference/nn_tanhshrink.html + + + /reference/nn_threshold.html + + + /reference/nn_triplet_margin_loss.html + + + /reference/nn_triplet_margin_with_distance_loss.html + + + /reference/nn_utils_clip_grad_norm_.html + + + /reference/nn_utils_clip_grad_value_.html + + + /reference/nn_utils_rnn_pack_padded_sequence.html + + + /reference/nn_utils_rnn_pack_sequence.html + + + /reference/nn_utils_rnn_pad_packed_sequence.html + + + /reference/nn_utils_rnn_pad_sequence.html + + + /reference/nnf_adaptive_avg_pool1d.html + + + /reference/nnf_adaptive_avg_pool2d.html + + + /reference/nnf_adaptive_avg_pool3d.html + + + /reference/nnf_adaptive_max_pool1d.html + + + /reference/nnf_adaptive_max_pool2d.html + + + /reference/nnf_adaptive_max_pool3d.html + + + /reference/nnf_affine_grid.html + + + /reference/nnf_alpha_dropout.html + + + /reference/nnf_avg_pool1d.html + + + /reference/nnf_avg_pool2d.html + + + /reference/nnf_avg_pool3d.html + + + /reference/nnf_batch_norm.html + + + /reference/nnf_bilinear.html + + + /reference/nnf_binary_cross_entropy.html + + + /reference/nnf_binary_cross_entropy_with_logits.html + + + /reference/nnf_celu.html + + + /reference/nnf_contrib_sparsemax.html + + + /reference/nnf_conv1d.html + + + /reference/nnf_conv2d.html + + + /reference/nnf_conv3d.html + + + /reference/nnf_conv_tbc.html + + + /reference/nnf_conv_transpose1d.html + + + /reference/nnf_conv_transpose2d.html + + + /reference/nnf_conv_transpose3d.html + + + /reference/nnf_cosine_embedding_loss.html + + + /reference/nnf_cosine_similarity.html + + + /reference/nnf_cross_entropy.html + + + /reference/nnf_ctc_loss.html + + + /reference/nnf_dropout.html + + + /reference/nnf_dropout2d.html + + + 
/reference/nnf_dropout3d.html + + + /reference/nnf_elu.html + + + /reference/nnf_embedding.html + + + /reference/nnf_embedding_bag.html + + + /reference/nnf_fold.html + + + /reference/nnf_fractional_max_pool2d.html + + + /reference/nnf_fractional_max_pool3d.html + + + /reference/nnf_gelu.html + + + /reference/nnf_glu.html + + + /reference/nnf_grid_sample.html + + + /reference/nnf_group_norm.html + + + /reference/nnf_gumbel_softmax.html + + + /reference/nnf_hardshrink.html + + + /reference/nnf_hardsigmoid.html + + + /reference/nnf_hardswish.html + + + /reference/nnf_hardtanh.html + + + /reference/nnf_hinge_embedding_loss.html + + + /reference/nnf_instance_norm.html + + + /reference/nnf_interpolate.html + + + /reference/nnf_kl_div.html + + + /reference/nnf_l1_loss.html + + + /reference/nnf_layer_norm.html + + + /reference/nnf_leaky_relu.html + + + /reference/nnf_linear.html + + + /reference/nnf_local_response_norm.html + + + /reference/nnf_log_softmax.html + + + /reference/nnf_logsigmoid.html + + + /reference/nnf_lp_pool1d.html + + + /reference/nnf_lp_pool2d.html + + + /reference/nnf_margin_ranking_loss.html + + + /reference/nnf_max_pool1d.html + + + /reference/nnf_max_pool2d.html + + + /reference/nnf_max_pool3d.html + + + /reference/nnf_max_unpool1d.html + + + /reference/nnf_max_unpool2d.html + + + /reference/nnf_max_unpool3d.html + + + /reference/nnf_mse_loss.html + + + /reference/nnf_multi_head_attention_forward.html + + + /reference/nnf_multi_margin_loss.html + + + /reference/nnf_multilabel_margin_loss.html + + + /reference/nnf_multilabel_soft_margin_loss.html + + + /reference/nnf_nll_loss.html + + + /reference/nnf_normalize.html + + + /reference/nnf_one_hot.html + + + /reference/nnf_pad.html + + + /reference/nnf_pairwise_distance.html + + + /reference/nnf_pdist.html + + + /reference/nnf_pixel_shuffle.html + + + /reference/nnf_poisson_nll_loss.html + + + /reference/nnf_prelu.html + + + /reference/nnf_relu.html + + + /reference/nnf_relu6.html + + + 